Fix broken links in layer documentation, minor fixes.
author    Jonathan R. Williford <jonathan@neural.vision>
Fri, 20 Jan 2017 11:53:12 +0000 (11:53 +0000)
committer Jonathan R. Williford <jonathan@neural.vision>
Thu, 26 Jan 2017 14:04:27 +0000 (14:04 +0000)
docs/tutorial/layers/accuracy.md
docs/tutorial/layers/argmax.md
docs/tutorial/layers/infogainloss.md
docs/tutorial/layers/lrn.md
docs/tutorial/layers/memorydata.md
docs/tutorial/layers/multinomiallogisticloss.md
docs/tutorial/layers/silence.md

diff --git a/docs/tutorial/layers/accuracy.md b/docs/tutorial/layers/accuracy.md
index ecf8409..80293b1 100644
@@ -10,7 +10,6 @@ title: Accuracy and Top-k
 * [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1AccuracyLayer.html)
 * Header: [`./include/caffe/layers/accuracy_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/accuracy_layer.hpp)
 * CPU implementation: [`./src/caffe/layers/accuracy_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/accuracy_layer.cpp)
-* CUDA GPU implementation: [`./src/caffe/layers/accuracy_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/accuracy_layer.cu)
 
 ## Parameters
 * Parameters (`AccuracyParameter accuracy_param`)
@@ -18,4 +17,4 @@ title: Accuracy and Top-k
 
 {% highlight Protobuf %}
 {% include proto/AccuracyParameter.txt %}
-{% endhighlight %}
\ No newline at end of file
+{% endhighlight %}
diff --git a/docs/tutorial/layers/argmax.md b/docs/tutorial/layers/argmax.md
index f5f173a..9eb8b77 100644
@@ -8,7 +8,6 @@ title: ArgMax Layer
 * [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1ArgMaxLayer.html)
 * Header: [`./include/caffe/layers/argmax_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/argmax_layer.hpp)
 * CPU implementation: [`./src/caffe/layers/argmax_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/argmax_layer.cpp)
-* CUDA GPU implementation: [`./src/caffe/layers/argmax_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/argmax_layer.cu)
 
 ## Parameters
 * Parameters (`ArgMaxParameter argmax_param`)
@@ -16,4 +15,4 @@ title: ArgMax Layer
 
 {% highlight Protobuf %}
 {% include proto/ArgMaxParameter.txt %}
-{% endhighlight %}
\ No newline at end of file
+{% endhighlight %}
diff --git a/docs/tutorial/layers/infogainloss.md b/docs/tutorial/layers/infogainloss.md
index 86140b6..b3b690d 100644
@@ -8,11 +8,10 @@ title: Infogain Loss Layer
 * [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1InfogainLossLayer.html)
 * Header: [`./include/caffe/layers/infogain_loss_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/infogain_loss_layer.hpp)
 * CPU implementation: [`./src/caffe/layers/infogain_loss_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/infogain_loss_layer.cpp)
-* CUDA GPU implementation: [`./src/caffe/layers/infogain_loss_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/infogain_loss_layer.cu)
 
-A generalization of [MultinomialLogisticLossLayer](layers/multinomiallogisticloss.md) that takes an "information gain" (infogain) matrix specifying the "value" of all label pairs.
+A generalization of [MultinomialLogisticLossLayer](multinomiallogisticloss.html) that takes an "information gain" (infogain) matrix specifying the "value" of all label pairs.
 
-Equivalent to the [MultinomialLogisticLossLayer](layers/multinomiallogisticloss.md) if the infogain matrix is the identity.
+Equivalent to the [MultinomialLogisticLossLayer](multinomiallogisticloss.html) if the infogain matrix is the identity.
 
 ## Parameters
 
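For reference on the infogainloss.md hunks above, a sketch of the loss this page describes, using the symbols usually attached to this layer: H is the infogain matrix, l_n the ground-truth label of sample n, and p-hat_{n,k} the predicted probability of class k, over a batch of N samples and K classes.

    % Infogain loss over a batch of N samples and K classes:
    E = -\frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} H_{l_n,k}\,\log\bigl(\hat{p}_{n,k}\bigr)

    % With H = I only the k = l_n term survives, which recovers the
    % multinomial logistic loss:
    E = -\frac{1}{N} \sum_{n=1}^{N} \log\bigl(\hat{p}_{n,l_n}\bigr)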
diff --git a/docs/tutorial/layers/lrn.md b/docs/tutorial/layers/lrn.md
index 387311c..2fbef73 100644
@@ -20,9 +20,9 @@ The local response normalization layer performs a kind of "lateral inhibition" b
 
 ## Parameters
 
-* Parameters (`Parameter lrn_param`)
+* Parameters (`LRNParameter lrn_param`)
 * From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):
 
 {% highlight Protobuf %}
-{% include proto/BatchNormParameter.txt %}
+{% include proto/LRNParameter.txt %}
 {% endhighlight %}
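For context on the lrn.md hunk above, a sketch of the normalization the layer applies (the standard Caffe formulation), with n the local_size and alpha, beta the corresponding fields of LRNParameter:

    % Each input value is divided by the summed "energy" of its local
    % region \mathcal{N}(i), a window of size n around position/channel i:
    x_i \leftarrow \frac{x_i}{\Bigl(1 + \frac{\alpha}{n} \sum_{j \in \mathcal{N}(i)} x_j^2\Bigr)^{\beta}}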
diff --git a/docs/tutorial/layers/memorydata.md b/docs/tutorial/layers/memorydata.md
index 754e62a..afce4a2 100644
@@ -7,7 +7,7 @@ title: Memory Data Layer
 * Layer type: `MemoryData`
 * [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1MemoryDataLayer.html)
 * Header: [`./include/caffe/layers/memory_data_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/memory_data_layer.hpp)
-* CPU implementation: [`./src/caffe/layers/memory_data_layer.cpu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/memory_data_layer.cpu)
+* CPU implementation: [`./src/caffe/layers/memory_data_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/memory_data_layer.cpp)
 
 The memory data layer reads data directly from memory, without copying it. In order to use it, one must call `MemoryDataLayer::Reset` (from C++) or `Net.set_input_arrays` (from Python) in order to specify a source of contiguous data (as 4D row major array), which is read one batch-sized chunk at a time.
 
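As a companion to the memory-data paragraph above, a minimal pycaffe sketch of the Net.set_input_arrays route; the prototxt filename and the shapes are hypothetical and must match the MemoryData layer's batch_size, channels, height, and width.

    import numpy as np
    import caffe

    # Hypothetical net whose first layer is a MemoryData layer with
    # batch_size: 10, channels: 3, height: 28, width: 28.
    net = caffe.Net('memory_data_net.prototxt', caffe.TEST)

    # Both arrays must be contiguous float32; data is a 4D row-major
    # (N x C x H x W) array, labels are shaped N x 1 x 1 x 1.
    data = np.ascontiguousarray(
        np.random.rand(10, 3, 28, 28), dtype=np.float32)
    labels = np.ascontiguousarray(
        np.zeros((10, 1, 1, 1)), dtype=np.float32)

    net.set_input_arrays(data, labels)  # no copy; the layer reads these arrays
    out = net.forward()                 # consumes one batch-sized chunk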
diff --git a/docs/tutorial/layers/multinomiallogisticloss.md b/docs/tutorial/layers/multinomiallogisticloss.md
index a28ab91..5eab74a 100644
@@ -7,7 +7,7 @@ title: Multinomial Logistic Loss Layer
 * Layer type: `MultinomialLogisticLoss`
 * [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1MultinomialLogisticLossLayer.html)
 * Header: [`./include/caffe/layers/multinomial_logistic_loss_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/multinomial_logistic_loss_layer.hpp)
-* CPU implementation: [`./src/caffe/layers/multinomial_logistic_loss_layer.cpu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/multinomial_logistic_loss_layer.cpu)
+* CPU implementation: [`./src/caffe/layers/multinomial_logistic_loss_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/multinomial_logistic_loss_layer.cpp)
 
 ## Parameters
 
diff --git a/docs/tutorial/layers/silence.md b/docs/tutorial/layers/silence.md
index 2c37a9c..8b4579a 100644
@@ -14,10 +14,4 @@ Silences a blob, so that it is not printed.
 
 ## Parameters
 
-* Parameters (`SilenceParameter silence_param`)
-* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):
-
-{% highlight Protobuf %}
-{% include proto/BatchNormParameter.txt %}
-{% endhighlight %}
-
+No parameters.
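And for the silence.md hunk just above, a small pycaffe NetSpec sketch of the typical use: routing an otherwise unused blob into a Silence layer so it is neither treated as a net output nor printed. The LMDB path and layer names are illustrative only.

    import caffe
    from caffe import layers as L, params as P

    n = caffe.NetSpec()
    # Illustrative data source; ntop=2 yields a data blob and a label blob.
    n.data, n.label = L.Data(batch_size=64, backend=P.Data.LMDB,
                             source='examples/mnist/mnist_train_lmdb', ntop=2)
    n.ip = L.InnerProduct(n.data, num_output=10)
    # The label is unused in this toy net; silence it so it is not printed:
    n.silence = L.Silence(n.label, ntop=0)
    print(n.to_proto())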