From 9ec6ca0bde863eaa333fbc3c1aa13cbe845b4c23 Mon Sep 17 00:00:00 2001
From: Sergio Guadarrama
Date: Wed, 2 Apr 2014 17:03:25 -0700
Subject: [PATCH] Added Link in index.md to perfomance_hardware.md

---
 docs/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/index.md b/docs/index.md
index 6675be2..a817292 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -16,7 +16,7 @@ Caffe aims to provide computer vision scientists and practitioners with a **clea
 For example, network structure is easily specified in separate config files, with no mess of hard-coded parameters in the code.
 At the same time, Caffe fits industry needs, with blazing fast C++/CUDA code for GPU computation.
 
-Caffe is currently the fastest GPU CNN implementation publicly available, and is able to process more than **40 million images per day** with a single K40 or Titan NVidia Card (20 million images per day on a single Tesla K20 NVidia Card)\*. Currently, caffe can process 192 images per second during training and 500 images per second during test (using K40 or Titan) \*.
+Caffe is currently the fastest GPU CNN implementation publicly available, and is able to process more than **40 million images per day** with a single K40 or Titan NVidia Card (20 million images per day on a single Tesla K20 NVidia Card)\*. Currently, caffe can process 192 images per second during training and 500 images per second during test (using K40 or Titan) (see [Performance, Hardware tips](/perfomance_hardware.html))\*.
 
 Caffe also provides **seamless switching between CPU and GPU**, which allows one to train models with fast GPUs and then deploy them on non-GPU clusters with one line of code: `Caffe::set_mode(Caffe::CPU)`.
 Even in CPU mode, computing predictions on an image takes only 20 ms when images are processed in batch mode. While in GPU mode, computing predictions on an image takes only 2 ms when images are processed in batch mode.
--
2.7.4
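
Note (not part of the patch): the paragraph quoted in the hunk mentions switching between CPU and GPU with `Caffe::set_mode(Caffe::CPU)`. A minimal sketch of how that call is used is shown below; the `main()` scaffolding and comments are illustrative only, while the `set_mode` calls are the API the doc text itself names.

```cpp
#include <caffe/caffe.hpp>

int main() {
  // Training and deployment code stay the same; only the mode changes.
  // On a machine with a GPU, run with CUDA kernels:
  caffe::Caffe::set_mode(caffe::Caffe::GPU);
  // ... train the model ...

  // On a non-GPU cluster, the one-line switch from the doc text:
  caffe::Caffe::set_mode(caffe::Caffe::CPU);
  // ... compute predictions in batch mode ...
  return 0;
}
```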