Generalize loss by allowing any top blob to be used as a loss in which its elements are summed with a scalar coefficient.
author    Jeff Donahue <jeff.donahue@gmail.com>
          Fri, 11 Jul 2014 08:55:17 +0000 (01:55 -0700)
committer Jeff Donahue <jeff.donahue@gmail.com>
          Wed, 13 Aug 2014 20:22:04 +0000 (13:22 -0700)
commit    512a626fc71c69ed4460024b31c5fe8dff1e668c
tree      f3d11beb593a4e64e779a99b82538ceee7fae21a
parent    7a3ed9b8edf43895770b63cb4d9f5cacf0dba047
Generalize loss by allowing any top blob to be used as a loss in which
its elements are summed with a scalar coefficient.
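
As a sketch of the new semantics (simplified stand-in types, not the actual Caffe code; WeightedLoss, tops, and loss_weights are hypothetical names), the net's scalar loss becomes a weighted sum over the elements of every top blob, where blobs with a zero coefficient do not contribute:

    #include <cstddef>
    #include <vector>

    // Total loss as a weighted sum over the elements of each top blob.
    // "tops" stands in for the top blobs' data and "loss_weights" for the
    // per-blob scalar coefficients (1 by default for loss layers).
    double WeightedLoss(const std::vector<std::vector<double> >& tops,
                        const std::vector<double>& loss_weights) {
      double loss = 0.0;
      for (std::size_t i = 0; i < tops.size(); ++i) {
        if (loss_weights[i] == 0.0) continue;    // non-loss output
        for (std::size_t j = 0; j < tops[i].size(); ++j) {
          loss += loss_weights[i] * tops[i][j];  // sum elements, scaled
        }
      }
      return loss;
    }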

Layer::Forward no longer returns a loss; instead, every loss layer must have
a top blob.  Existing loss layers are given a top blob automatically by
Net::Init, with an associated top_loss_weight of 1 (set in
LossLayer::FurtherSetUp).  Because this increases the amount of common SetUp
logic, the SetUp interface is changed so that subclasses normally override
only FurtherSetUp, which SetUp calls.
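
A minimal sketch of the revised setup contract (simplified stand-ins for the real Blob/Layer types; the signatures are illustrative, not Caffe's exact ones): SetUp runs the shared logic and then invokes the virtual FurtherSetUp hook, which is the only method subclasses override.

    #include <vector>

    struct Blob { /* simplified stand-in for caffe::Blob */ };

    class Layer {
     public:
      virtual ~Layer() {}
      void SetUp(const std::vector<Blob*>& bottom, std::vector<Blob*>* top) {
        // common setup shared by all layers (checks, top blob handling)
        FurtherSetUp(bottom, top);  // layer-specific setup hook
      }
     protected:
      virtual void FurtherSetUp(const std::vector<Blob*>& bottom,
                                std::vector<Blob*>* top) {}
    };

    class EuclideanLossLayer : public Layer {
     protected:
      virtual void FurtherSetUp(const std::vector<Blob*>& bottom,
                                std::vector<Blob*>* top) {
        // loss-specific setup, e.g. shape checks; per the commit message,
        // the base LossLayer's hook also sets top_loss_weight to 1 here.
      }
    };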
79 files changed:
include/caffe/common_layers.hpp
include/caffe/data_layers.hpp
include/caffe/layer.hpp
include/caffe/loss_layers.hpp
include/caffe/neuron_layers.hpp
include/caffe/util/device_alternate.hpp
include/caffe/vision_layers.hpp
src/caffe/layers/accuracy_layer.cpp
src/caffe/layers/argmax_layer.cpp
src/caffe/layers/bnll_layer.cpp
src/caffe/layers/bnll_layer.cu
src/caffe/layers/concat_layer.cpp
src/caffe/layers/concat_layer.cu
src/caffe/layers/conv_layer.cpp
src/caffe/layers/conv_layer.cu
src/caffe/layers/data_layer.cpp
src/caffe/layers/data_layer.cu
src/caffe/layers/dropout_layer.cpp
src/caffe/layers/dropout_layer.cu
src/caffe/layers/dummy_data_layer.cpp
src/caffe/layers/eltwise_layer.cpp
src/caffe/layers/eltwise_layer.cu
src/caffe/layers/euclidean_loss_layer.cpp
src/caffe/layers/euclidean_loss_layer.cu
src/caffe/layers/flatten_layer.cpp
src/caffe/layers/flatten_layer.cu
src/caffe/layers/hdf5_data_layer.cpp
src/caffe/layers/hdf5_data_layer.cu
src/caffe/layers/hdf5_output_layer.cpp
src/caffe/layers/hdf5_output_layer.cu
src/caffe/layers/hinge_loss_layer.cpp
src/caffe/layers/im2col_layer.cpp
src/caffe/layers/im2col_layer.cu
src/caffe/layers/image_data_layer.cpp
src/caffe/layers/image_data_layer.cu
src/caffe/layers/infogain_loss_layer.cpp
src/caffe/layers/inner_product_layer.cpp
src/caffe/layers/inner_product_layer.cu
src/caffe/layers/loss_layer.cpp
src/caffe/layers/lrn_layer.cpp
src/caffe/layers/lrn_layer.cu
src/caffe/layers/memory_data_layer.cpp
src/caffe/layers/multinomial_logistic_loss_layer.cpp
src/caffe/layers/mvn_layer.cpp
src/caffe/layers/mvn_layer.cu
src/caffe/layers/neuron_layer.cpp
src/caffe/layers/pooling_layer.cpp
src/caffe/layers/pooling_layer.cu
src/caffe/layers/power_layer.cpp
src/caffe/layers/power_layer.cu
src/caffe/layers/relu_layer.cpp
src/caffe/layers/relu_layer.cu
src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp
src/caffe/layers/sigmoid_cross_entropy_loss_layer.cu
src/caffe/layers/sigmoid_layer.cpp
src/caffe/layers/sigmoid_layer.cu
src/caffe/layers/slice_layer.cpp
src/caffe/layers/slice_layer.cu
src/caffe/layers/softmax_layer.cpp
src/caffe/layers/softmax_layer.cu
src/caffe/layers/softmax_loss_layer.cpp
src/caffe/layers/softmax_loss_layer.cu
src/caffe/layers/split_layer.cpp
src/caffe/layers/split_layer.cu
src/caffe/layers/tanh_layer.cpp
src/caffe/layers/tanh_layer.cu
src/caffe/layers/threshold_layer.cpp
src/caffe/layers/threshold_layer.cu
src/caffe/layers/window_data_layer.cpp
src/caffe/layers/window_data_layer.cu
src/caffe/net.cpp
src/caffe/test/test_euclidean_loss_layer.cpp
src/caffe/test/test_hinge_loss_layer.cpp
src/caffe/test/test_infogain_loss_layer.cpp
src/caffe/test/test_multinomial_logistic_loss_layer.cpp
src/caffe/test/test_net.cpp
src/caffe/test/test_sigmoid_cross_entropy_loss_layer.cpp
src/caffe/test/test_softmax_with_loss_layer.cpp
src/caffe/test/test_split_layer.cpp