platform/upstream/caffe.git
10 years agoMerge pull request #1183 from pluskid/hdf5layer
Jeff Donahue [Tue, 30 Sep 2014 18:06:57 +0000 (11:06 -0700)]
Merge pull request #1183 from pluskid/hdf5layer

10 years agoadded test case to cover new HDF5 behavior
Chiyuan Zhang [Fri, 26 Sep 2014 20:20:48 +0000 (16:20 -0400)]
added test case to cover new HDF5 behavior

10 years agomake HDF5 layer support multiple data output
Chiyuan Zhang [Fri, 26 Sep 2014 15:21:59 +0000 (11:21 -0400)]
make HDF5 layer support multiple data output

10 years agotest_gradient_based_solver.cpp: removed unused typedefs
Jeff Donahue [Tue, 30 Sep 2014 18:00:37 +0000 (11:00 -0700)]
test_gradient_based_solver.cpp: removed unused typedefs

10 years agoMerge pull request #1167 from Yangqing/factory
Yangqing Jia [Tue, 30 Sep 2014 17:35:39 +0000 (10:35 -0700)]
Merge pull request #1167 from Yangqing/factory

Layer factory.

10 years agodoxygen
Yangqing Jia [Tue, 30 Sep 2014 17:20:36 +0000 (10:20 -0700)]
doxygen

10 years agoMerge pull request #1187 from longjon/fix-cudnn-pooling-tests
longjon [Tue, 30 Sep 2014 04:40:23 +0000 (21:40 -0700)]
Merge pull request #1187 from longjon/fix-cudnn-pooling-tests

Fix cuDNN pooling tests

10 years agocuDNN pooling layer tests know that nonzero padding is not supported
Jonathan L Long [Tue, 30 Sep 2014 04:21:03 +0000 (21:21 -0700)]
cuDNN pooling layer tests know that nonzero padding is not supported

After #1172, some cuDNN pooling layer tests were failing due to their use
of padding. (Before #1172, these tests were actually exercising PoolingLayer
instead of CuDNNPoolingLayer via the fallback.) This commit disables
many tests by commenting them out, so that they can easily be re-added when
cuDNN gains pooling padding support.

10 years agouse method overrides for CuDNNPoolingLayer top blob checking
Jonathan L Long [Tue, 30 Sep 2014 04:20:37 +0000 (21:20 -0700)]
use method overrides for CuDNNPoolingLayer top blob checking

10 years agoMerge pull request #1186 from ashafaei/error-fixes
longjon [Tue, 30 Sep 2014 03:35:02 +0000 (20:35 -0700)]
Merge pull request #1186 from ashafaei/error-fixes

Fixed some errors in layer_factory and cudnn_pooling

10 years agoFixed some errors in layer_factory and cudnn_pooling
Alireza Shafaei [Tue, 30 Sep 2014 03:24:00 +0000 (20:24 -0700)]
Fixed some errors in layer_factory and cudnn_pooling

10 years agoMerge remote-tracking branch 'bvlc/master' into dev
Jeff Donahue [Mon, 29 Sep 2014 21:16:58 +0000 (14:16 -0700)]
Merge remote-tracking branch 'bvlc/master' into dev

Conflicts:
src/caffe/layers/hdf5_data_layer.cpp

10 years agoMerge pull request #1166 from pluskid/master
Jeff Donahue [Mon, 29 Sep 2014 21:14:50 +0000 (14:14 -0700)]
Merge pull request #1166 from pluskid/master

bugfix: HDF5 layer reshape bug

10 years ago[examples] adding class names and deploy version of Flickr Style net
Sergey Karayev [Mon, 29 Sep 2014 09:55:22 +0000 (02:55 -0700)]
[examples] adding class names and deploy version of Flickr Style net

10 years agostatic initialization order fiasco
Yangqing Jia [Mon, 29 Sep 2014 02:16:01 +0000 (19:16 -0700)]
static initialization order fiasco

10 years agoadd explicit declaration - does that help the flakiness?
Yangqing Jia [Mon, 29 Sep 2014 01:11:35 +0000 (18:11 -0700)]
add explicit declaration - does that help the flakiness?

10 years agoadd long-ignored threshold layer
Yangqing Jia [Sun, 28 Sep 2014 23:04:53 +0000 (16:04 -0700)]
add long-ignored threshold layer

10 years agoconsolidate duplicate code
Yangqing Jia [Sun, 28 Sep 2014 22:46:03 +0000 (15:46 -0700)]
consolidate duplicate code

10 years agocmake.
Yangqing Jia [Sun, 28 Sep 2014 07:11:05 +0000 (00:11 -0700)]
cmake.

10 years agoMute factory registration message
Yangqing Jia [Sat, 27 Sep 2014 20:18:27 +0000 (13:18 -0700)]
Mute factory registration message

10 years agoAdd back static library. Using -Wl,--whole-archive will allow us to preserve all...
Yangqing Jia [Sat, 27 Sep 2014 20:17:33 +0000 (13:17 -0700)]
Add back static library. Using -Wl,--whole-archive will allow us to preserve all symbols.

10 years agoPre-lunch fun: add a dynamic library guard test.
Yangqing Jia [Sat, 27 Sep 2014 19:06:10 +0000 (12:06 -0700)]
Pre-lunch fun: add a dynamic library guard test.

10 years agomore docs
Yangqing Jia [Fri, 26 Sep 2014 23:17:04 +0000 (16:17 -0700)]
more docs

10 years agorunning factory.
Yangqing Jia [Fri, 26 Sep 2014 20:58:45 +0000 (13:58 -0700)]
running factory.

10 years agoMerge pull request #1172 from Yangqing/conv_factory
Yangqing Jia [Mon, 29 Sep 2014 00:40:45 +0000 (17:40 -0700)]
Merge pull request #1172 from Yangqing/conv_factory

Pooling factory

10 years agomessage
Yangqing Jia [Sun, 28 Sep 2014 22:33:52 +0000 (15:33 -0700)]
message

10 years agoMerge pull request #1161 from jjkjkj/dev-threshold-fix
Yangqing Jia [Sun, 28 Sep 2014 15:16:52 +0000 (08:16 -0700)]
Merge pull request #1161 from jjkjkj/dev-threshold-fix

Fix Threshold layer

10 years agoconst fix
Yangqing Jia [Sun, 28 Sep 2014 03:22:10 +0000 (20:22 -0700)]
const fix

10 years agocudnn pooling fallback option
Yangqing Jia [Sun, 28 Sep 2014 03:10:13 +0000 (20:10 -0700)]
cudnn pooling fallback option

10 years agofix hdf5 data layer bug
Chiyuan Zhang [Fri, 26 Sep 2014 19:21:57 +0000 (15:21 -0400)]
fix hdf5 data layer bug

10 years agoupdate HDF5 layer test data.
Chiyuan Zhang [Fri, 26 Sep 2014 19:21:45 +0000 (15:21 -0400)]
update HDF5 layer test data.

10 years agotweak test case to expose bug.
Chiyuan Zhang [Fri, 26 Sep 2014 17:18:38 +0000 (13:18 -0400)]
tweak test case to expose bug.

10 years agoUpdate layer_factory.cpp
jjkjkj [Fri, 26 Sep 2014 09:39:49 +0000 (12:39 +0300)]
Update layer_factory.cpp

10 years agoMerge pull request #1149 from ashafaei/crop_bugfix
longjon [Thu, 25 Sep 2014 17:35:32 +0000 (10:35 -0700)]
Merge pull request #1149 from ashafaei/crop_bugfix

Fixed the bug with random offset selection

10 years agoMerge pull request #1157 from ducha-aiki/fix-extract-features
longjon [Thu, 25 Sep 2014 17:22:23 +0000 (10:22 -0700)]
Merge pull request #1157 from ducha-aiki/fix-extract-features

Small fix in extract_features

10 years agoRandom crop bugfix and abstracting random number generation inside data_transformer
Alireza Shafaei [Wed, 24 Sep 2014 05:35:00 +0000 (22:35 -0700)]
Random crop bugfix and abstracting random number generation inside data_transformer

10 years agoRemoved unnecessary "mutable"
D.Mishkin [Thu, 25 Sep 2014 14:30:48 +0000 (17:30 +0300)]
Removed unnecessary "mutable"

10 years agoMerge pull request #1147 from savy-91/patch-1
Evan Shelhamer [Thu, 25 Sep 2014 03:29:23 +0000 (20:29 -0700)]
Merge pull request #1147 from savy-91/patch-1

Changed "blas" to "openblas"

10 years agoMerge pull request #1138 from ksimonyan/vgg_models_support
Evan Shelhamer [Thu, 25 Sep 2014 03:19:39 +0000 (20:19 -0700)]
Merge pull request #1138 from ksimonyan/vgg_models_support

VGG models support

10 years agoBack-merge
Evan Shelhamer [Wed, 24 Sep 2014 21:48:04 +0000 (14:48 -0700)]
Back-merge

  Fixed param order of cv::Size in cv::resize
  switch examples to lmdb (except for custom data loaders)
  fix cifar10 paths so they can be run from caffe root
  default backend to lmdb for image conversion and mean computation

10 years agoMerge pull request #1152 from sguada/fix_cv_size_order
Evan Shelhamer [Wed, 24 Sep 2014 21:43:35 +0000 (14:43 -0700)]
Merge pull request #1152 from sguada/fix_cv_size_order

Fix param order of cv::Size in cv::resize

10 years agoFixed param order of cv::Size in cv::resize
Sergio [Wed, 24 Sep 2014 21:27:57 +0000 (14:27 -0700)]
Fixed param order of cv::Size in cv::resize

10 years agoadded a Matlab demo with mean BGR pixel subtraction instead of the mean image subtraction
Karen Simonyan [Wed, 24 Sep 2014 21:08:46 +0000 (22:08 +0100)]
added a Matlab demo with mean BGR pixel subtraction instead of the mean image subtraction

10 years agoChanged "blas" to "openblas"
savy-91 [Tue, 23 Sep 2014 19:57:16 +0000 (21:57 +0200)]
Changed "blas" to "openblas"

The change was made because OpenBLAS has to be linked as "-lopenblas", not "-lblas"; the old flag caused a link error when compiling.

10 years agoRGB -> BGR in the matlab demo
Karen Simonyan [Tue, 23 Sep 2014 15:39:08 +0000 (16:39 +0100)]
RGB -> BGR in the matlab demo

10 years agoadded example usage to the Matlab script
Karen Simonyan [Mon, 22 Sep 2014 19:39:37 +0000 (20:39 +0100)]
added example usage to the Matlab script

10 years agoadded comments to the Matlab demo script
Karen Simonyan [Mon, 22 Sep 2014 19:09:35 +0000 (20:09 +0100)]
added comments to the Matlab demo script

10 years agoadded matcaffe_demo for the VGG models (RGB input)
Karen Simonyan [Sun, 21 Sep 2014 16:59:25 +0000 (17:59 +0100)]
added matcaffe_demo for the VGG models (RGB input)

10 years agoadded support for "k" LRN parameter to upgrade_proto
Karen Simonyan [Sun, 21 Sep 2014 16:58:41 +0000 (17:58 +0100)]
added support for "k" LRN parameter to upgrade_proto

10 years agoadds a parameter to the LRN layer (denoted as "k" in [Krizhevsky et al., NIPS 2012])
Karen Simonyan [Thu, 18 Sep 2014 00:08:56 +0000 (01:08 +0100)]
adds a parameter to the LRN layer (denoted as "k" in  [Krizhevsky et al., NIPS 2012])

10 years agoweb demo fix, closes #1002
Sergey Karayev [Mon, 22 Sep 2014 05:53:30 +0000 (22:53 -0700)]
web demo fix, closes #1002

10 years agoMerge pull request #1128 from shelhamer/default-db-lmdb
Evan Shelhamer [Mon, 22 Sep 2014 05:08:08 +0000 (22:08 -0700)]
Merge pull request #1128 from shelhamer/default-db-lmdb

default to lmdb for database storage

10 years agoswitch examples to lmdb (except for custom data loaders)
Evan Shelhamer [Sun, 21 Sep 2014 22:32:03 +0000 (15:32 -0700)]
switch examples to lmdb (except for custom data loaders)

10 years agofix cifar10 paths so they can be run from caffe root
Jeff Donahue [Fri, 19 Sep 2014 19:38:39 +0000 (12:38 -0700)]
fix cifar10 paths so they can be run from caffe root

10 years agodefault backend to lmdb for image conversion and mean computation
Evan Shelhamer [Sun, 21 Sep 2014 22:20:47 +0000 (15:20 -0700)]
default backend to lmdb for image conversion and mean computation

lmdb is 10-15% faster than leveldb, although it takes ~1.1x the storage.
This is usually irrelevant in prefetching since both are fast enough,
but more importantly lmdb allows multiple concurrent reads, for training
and evaluating several models on the same data.

10 years agoBack-merge
Evan Shelhamer [Sun, 21 Sep 2014 21:21:16 +0000 (14:21 -0700)]
Back-merge

  define up-to-date all-in-one model for pascal finetuning
  load transform params in window data layer
  include WindowDataLayer in data param upgrade
  Fix typo in LRN-expression in docs

10 years agoMerge pull request #1126 from shelhamer/window-data-param-upgrade
Sergio Guadarrama [Sun, 21 Sep 2014 20:21:22 +0000 (13:21 -0700)]
Merge pull request #1126 from shelhamer/window-data-param-upgrade

[fix] include WindowDataLayer in data param upgrade

10 years agodefine up-to-date all-in-one model for pascal finetuning
Evan Shelhamer [Sun, 21 Sep 2014 18:38:20 +0000 (11:38 -0700)]
define up-to-date all-in-one model for pascal finetuning

10 years agoload transform params in window data layer
Evan Shelhamer [Sun, 21 Sep 2014 17:29:39 +0000 (10:29 -0700)]
load transform params in window data layer

10 years agoinclude WindowDataLayer in data param upgrade
Evan Shelhamer [Sun, 21 Sep 2014 16:52:51 +0000 (09:52 -0700)]
include WindowDataLayer in data param upgrade

10 years agoMerge pull request #1115 from rickardnorlander/master
Jeff Donahue [Sat, 20 Sep 2014 15:39:14 +0000 (08:39 -0700)]
Merge pull request #1115 from rickardnorlander/master

Fix typo in LRN-expression in docs

10 years agoMerge pull request #1118 from shelhamer/1x1-conv
longjon [Sat, 20 Sep 2014 06:51:25 +0000 (23:51 -0700)]
Merge pull request #1118 from shelhamer/1x1-conv

Optimize 1x1 convolution for Network-in-Network style operation

10 years agocombine col_{data,diff} into single col_buff to halve memory usage
Evan Shelhamer [Sat, 20 Sep 2014 06:12:12 +0000 (23:12 -0700)]
combine col_{data,diff} into single col_buff to halve memory usage

conv forward / backward only need one of the im2col data and diff
at a time, so consolidating the two saves a lazy allocation.

10 years agoBack-merge
Evan Shelhamer [Sat, 20 Sep 2014 05:31:52 +0000 (22:31 -0700)]
Back-merge

10 years agooptimize 1x1 convolution for Network-in-Network style layers
Evan Shelhamer [Sat, 20 Sep 2014 01:59:38 +0000 (18:59 -0700)]
optimize 1x1 convolution for Network-in-Network style layers

1x1 convolution with stride 1 is a special case of Caffe's matrix
multiplication convolution for which the im2col / col2im transformations
are actually the identity. For this special case the buffer memory and
the transformations are skipped.

10 years agodrop out-of-date conv test comments
Evan Shelhamer [Sat, 20 Sep 2014 04:32:16 +0000 (21:32 -0700)]
drop out-of-date conv test comments

10 years agoMerge pull request #1117 from ronghanghu/fix-finetune-example
Evan Shelhamer [Sat, 20 Sep 2014 04:30:51 +0000 (21:30 -0700)]
Merge pull request #1117 from ronghanghu/fix-finetune-example

[fix] set R-CNN fine-tuning PASCAL example paths

10 years agofix directory in finetune pascal example
Ronghang Hu [Fri, 19 Sep 2014 23:55:32 +0000 (07:55 +0800)]
fix directory in finetune pascal example

10 years agoMerge pull request #945 from longjon/fixtypes
longjon [Fri, 19 Sep 2014 23:14:07 +0000 (16:14 -0700)]
Merge pull request #945 from longjon/fixtypes

Fix types of SetUp, Forward, Backward, and gradient checker calls

10 years agofix types of (Layer)SetUp, Reshape, Forward, and Backward calls
Jonathan L Long [Fri, 19 Sep 2014 23:04:37 +0000 (16:04 -0700)]
fix types of (Layer)SetUp, Reshape, Forward, and Backward calls

Using the type vector<Blob<Dtype>*>* for outputs allows modification of
the vector itself, while it is only okay to modify the blobs pointed to
by the elements of the vector. Switching the types to const
vector<Blob<Dtype>*>& makes them more correct.

10 years agofix cifar10 paths so they can be run from caffe root
Jeff Donahue [Fri, 19 Sep 2014 19:38:39 +0000 (12:38 -0700)]
fix cifar10 paths so they can be run from caffe root

10 years agoFix typo in LRN-expression in docs
rickardnorlander [Fri, 19 Sep 2014 18:47:44 +0000 (20:47 +0200)]
Fix typo in LRN-expression in docs

This is more consistent with Krizhevsky et al. (2012), and it seems to be what the code does.

10 years agoMerge pull request #1112 from BVLC/next
Evan Shelhamer [Fri, 19 Sep 2014 05:22:02 +0000 (22:22 -0700)]
Merge pull request #1112 from BVLC/next

Next: release candidate

10 years agorelax precision of gradient-based solver tests
Evan Shelhamer [Fri, 19 Sep 2014 05:07:01 +0000 (22:07 -0700)]
relax precision of gradient-based solver tests

10 years ago[example] groom siamese notebook
Evan Shelhamer [Fri, 19 Sep 2014 03:46:49 +0000 (20:46 -0700)]
[example] groom siamese notebook

10 years agoMerge pull request #959 from nickcarlevaris/contrastive_loss
Evan Shelhamer [Fri, 19 Sep 2014 04:37:31 +0000 (21:37 -0700)]
Merge pull request #959 from nickcarlevaris/contrastive_loss

  Add contrastive loss layer, tests, and a siamese network example

10 years ago[docs] order ipython notebooks
Evan Shelhamer [Fri, 19 Sep 2014 04:25:10 +0000 (21:25 -0700)]
[docs] order ipython notebooks

10 years ago[example] resurrect imagenet training scripts
Evan Shelhamer [Fri, 19 Sep 2014 03:24:19 +0000 (20:24 -0700)]
[example] resurrect imagenet training scripts

10 years ago[model zoo] ignore models -- only for reference or zoo
Evan Shelhamer [Fri, 19 Sep 2014 03:15:50 +0000 (20:15 -0700)]
[model zoo] ignore models -- only for reference or zoo

10 years ago[model zoo] download from gist grooming
Evan Shelhamer [Fri, 19 Sep 2014 03:06:24 +0000 (20:06 -0700)]
[model zoo] download from gist grooming

- invoke by shell
- default download dir to models/
- save to flat dir of owner-gist instead of nested owner/gist

10 years agoMerge pull request #1110 from sergeyk/dev
Evan Shelhamer [Thu, 18 Sep 2014 23:27:52 +0000 (16:27 -0700)]
Merge pull request #1110 from sergeyk/dev

  [model zoo] download gist script

10 years ago[model zoo] download gist script
Sergey Karayev [Thu, 18 Sep 2014 23:11:07 +0000 (16:11 -0700)]
[model zoo] download gist script

10 years agoMerge pull request #594 from longjon/layer-reshaping
Evan Shelhamer [Thu, 18 Sep 2014 20:32:35 +0000 (13:32 -0700)]
Merge pull request #594 from longjon/layer-reshaping

On-the-fly net resizing, without reallocation (where possible)

10 years agocheck that LRN's local_size is odd as the current implementation requires
Jonathan L Long [Fri, 12 Sep 2014 23:07:34 +0000 (16:07 -0700)]
check that LRN's local_size is odd as the current implementation requires

10 years ago[docs] clarify the use of Blob::Reshape a bit
Jonathan L Long [Fri, 12 Sep 2014 22:33:49 +0000 (15:33 -0700)]
[docs] clarify the use of Blob::Reshape a bit

10 years ago[pycaffe] expose Net::Reshape
Jonathan L Long [Fri, 12 Sep 2014 22:23:11 +0000 (15:23 -0700)]
[pycaffe] expose Net::Reshape

10 years agoadd Net::Reshape for only reshaping
Jonathan L Long [Fri, 12 Sep 2014 22:20:52 +0000 (15:20 -0700)]
add Net::Reshape for only reshaping

Note that it is not normally necessary to call this function when using
reshapable nets, but sometimes it can be useful to compute the sizes of
intermediate layers without waiting for the forward pass.

10 years agoinclude Reshape in caffe time
Jonathan L Long [Fri, 12 Sep 2014 21:34:16 +0000 (14:34 -0700)]
include Reshape in caffe time

Since we are now calling Reshape in the Forward pass, it's only fair to
include it when timing. Reshape calls should normally be four or so
orders of magnitude faster than Forward calls; this change also makes it
easy to notice a mistake that causes something slow to happen in
Reshape.

10 years agotest net reshaping
Jonathan L Long [Wed, 2 Jul 2014 19:21:10 +0000 (12:21 -0700)]
test net reshaping

10 years agodefault LayerSetUp to no-op instead of NOT_IMPLEMENTED
Jonathan L Long [Fri, 12 Sep 2014 20:58:10 +0000 (13:58 -0700)]
default LayerSetUp to no-op instead of NOT_IMPLEMENTED

Now that top blobs are set up in Layer::Reshape, it's Reshape that is
mandatory, and simple layers often don't need to implement LayerSetUp.
Reshape is (already) declared abstract, so not implementing it is a
compile-time error.

10 years agocall Reshape in Layer::SetUp
Jonathan L Long [Fri, 12 Sep 2014 20:56:38 +0000 (13:56 -0700)]
call Reshape in Layer::SetUp

Strictly speaking, Reshape doesn't need to be called until the first
Forward call; however, much existing code (especially tests) assumes
that top blobs will be set up in SetUp, so we may as well do it there.

10 years agosplit off Reshape for vision layers
Jonathan L Long [Thu, 11 Sep 2014 06:31:33 +0000 (23:31 -0700)]
split off Reshape for vision layers

Note that we are dropping some checks from the LRN layer. However, these
checks are fairly redundant; something is very wrong if these layers
are producing top blobs that are different sizes than their inputs, and
tests are the right place to catch that. The thing that really should be
checked (but isn't) is that local_size needs to be odd; this will
be added in a future commit.

10 years agosplit off Reshape for common layers
Jonathan L Long [Thu, 11 Sep 2014 05:42:45 +0000 (22:42 -0700)]
split off Reshape for common layers

10 years agosplit off Reshape for neuron layers
Jonathan L Long [Thu, 11 Sep 2014 04:48:51 +0000 (21:48 -0700)]
split off Reshape for neuron layers

10 years agosplit off Reshape for loss layers
Jonathan L Long [Thu, 11 Sep 2014 04:30:05 +0000 (21:30 -0700)]
split off Reshape for loss layers

10 years agosplit off Reshape for data layers
Jonathan L Long [Wed, 10 Sep 2014 21:57:07 +0000 (14:57 -0700)]
split off Reshape for data layers

10 years agoseparate setConvolutionDesc from createConvolutionDesc
Jonathan L Long [Thu, 11 Sep 2014 06:15:22 +0000 (23:15 -0700)]
separate setConvolutionDesc from createConvolutionDesc

10 years agoseparate setTensor4dDesc from createTensor4dDesc
Jonathan L Long [Thu, 11 Sep 2014 04:51:58 +0000 (21:51 -0700)]
separate setTensor4dDesc from createTensor4dDesc

This will make it possible to add reshaping to cuDNN layers.

10 years agoenable reshaping in the forward pass
Jonathan L Long [Wed, 10 Sep 2014 21:51:58 +0000 (14:51 -0700)]
enable reshaping in the forward pass

Note that calling Reshape when no reshape is necessary should be
effectively a no-op, so this is not a performance regression.

10 years agodon't reallocate blobs when shrinking memory use
Jonathan L Long [Wed, 2 Jul 2014 20:08:26 +0000 (13:08 -0700)]
don't reallocate blobs when shrinking memory use

This allows nets to be reshaped very quickly (essentially for free) as
long as sufficient memory has been allocated. Calling Blob::Reshape in
order to free up memory becomes impossible; however, this is not a
normal use case (and deleting blobs does free memory).