Jeff Donahue [Tue, 30 Sep 2014 18:06:57 +0000 (11:06 -0700)]
Merge pull request #1183 from pluskid/hdf5layer
Chiyuan Zhang [Fri, 26 Sep 2014 20:20:48 +0000 (16:20 -0400)]
added test case to cover new HDF5 behavior
Chiyuan Zhang [Fri, 26 Sep 2014 15:21:59 +0000 (11:21 -0400)]
make HDF5 layer support multiple data output
Jeff Donahue [Tue, 30 Sep 2014 18:00:37 +0000 (11:00 -0700)]
test_gradient_based_solver.cpp: removed unused typedefs
Yangqing Jia [Tue, 30 Sep 2014 17:35:39 +0000 (10:35 -0700)]
Merge pull request #1167 from Yangqing/factory
Layer factory.
Yangqing Jia [Tue, 30 Sep 2014 17:20:36 +0000 (10:20 -0700)]
doxygen
longjon [Tue, 30 Sep 2014 04:40:23 +0000 (21:40 -0700)]
Merge pull request #1187 from longjon/fix-cudnn-pooling-tests
Fix cuDNN pooling tests
Jonathan L Long [Tue, 30 Sep 2014 04:21:03 +0000 (21:21 -0700)]
cuDNN pooling layer tests know that nonzero padding is not supported
After #1172, some cuDNN pooling layer tests were failing due to use of
padding. (Before #1172, these tests were actually testing PoolingLayer
instead of CuDNNPoolingLayer via the fallback.) This commit disables
many tests via commenting so that they can be easily re-added when cuDNN
gains pooling padding support.
Jonathan L Long [Tue, 30 Sep 2014 04:20:37 +0000 (21:20 -0700)]
use method overrides for CuDNNPoolingLayer top blob checking
longjon [Tue, 30 Sep 2014 03:35:02 +0000 (20:35 -0700)]
Merge pull request #1186 from ashafaei/error-fixes
Fixed some errors in layer_factory and cudnn_pooling
Alireza Shafaei [Tue, 30 Sep 2014 03:24:00 +0000 (20:24 -0700)]
Fixed some errors in layer_factory and cudnn_pooling
Jeff Donahue [Mon, 29 Sep 2014 21:16:58 +0000 (14:16 -0700)]
Merge remote-tracking branch 'bvlc/master' into dev
Conflicts:
src/caffe/layers/hdf5_data_layer.cpp
Jeff Donahue [Mon, 29 Sep 2014 21:14:50 +0000 (14:14 -0700)]
Merge pull request #1166 from pluskid/master
bugfix: HDF5 layer reshape bug
Sergey Karayev [Mon, 29 Sep 2014 09:55:22 +0000 (02:55 -0700)]
[examples] adding class names and deploy version of Flickr Style net
Yangqing Jia [Mon, 29 Sep 2014 02:16:01 +0000 (19:16 -0700)]
static initialization order fiasco
Yangqing Jia [Mon, 29 Sep 2014 01:11:35 +0000 (18:11 -0700)]
add explicit declaration - does that help the flakiness?
Yangqing Jia [Sun, 28 Sep 2014 23:04:53 +0000 (16:04 -0700)]
add long-ignored threshold layer
Yangqing Jia [Sun, 28 Sep 2014 22:46:03 +0000 (15:46 -0700)]
consolidate duplicate code
Yangqing Jia [Sun, 28 Sep 2014 07:11:05 +0000 (00:11 -0700)]
cmake.
Yangqing Jia [Sat, 27 Sep 2014 20:18:27 +0000 (13:18 -0700)]
Mute factory registration message
Yangqing Jia [Sat, 27 Sep 2014 20:17:33 +0000 (13:17 -0700)]
Add back static library. Using -Wl,--whole-archive will allow us to preserve all symbols.
Yangqing Jia [Sat, 27 Sep 2014 19:06:10 +0000 (12:06 -0700)]
Pre-lunch fun: add a dynamic library guard test.
Yangqing Jia [Fri, 26 Sep 2014 23:17:04 +0000 (16:17 -0700)]
more docs
Yangqing Jia [Fri, 26 Sep 2014 20:58:45 +0000 (13:58 -0700)]
running factory.
Yangqing Jia [Mon, 29 Sep 2014 00:40:45 +0000 (17:40 -0700)]
Merge pull request #1172 from Yangqing/conv_factory
Pooling factory
Yangqing Jia [Sun, 28 Sep 2014 22:33:52 +0000 (15:33 -0700)]
message
Yangqing Jia [Sun, 28 Sep 2014 15:16:52 +0000 (08:16 -0700)]
Merge pull request #1161 from jjkjkj/dev-threshold-fix
Fix Threshold layer
Yangqing Jia [Sun, 28 Sep 2014 03:22:10 +0000 (20:22 -0700)]
const fix
Yangqing Jia [Sun, 28 Sep 2014 03:10:13 +0000 (20:10 -0700)]
cudnn pooling fallback option
Chiyuan Zhang [Fri, 26 Sep 2014 19:21:57 +0000 (15:21 -0400)]
fix hdf5 data layer bug
Chiyuan Zhang [Fri, 26 Sep 2014 19:21:45 +0000 (15:21 -0400)]
update HDF5 layer test data.
Chiyuan Zhang [Fri, 26 Sep 2014 17:18:38 +0000 (13:18 -0400)]
tweak test case to expose bug.
jjkjkj [Fri, 26 Sep 2014 09:39:49 +0000 (12:39 +0300)]
Update layer_factory.cpp
longjon [Thu, 25 Sep 2014 17:35:32 +0000 (10:35 -0700)]
Merge pull request #1149 from ashafaei/crop_bugfix
Fixed the bug with random offset selection
longjon [Thu, 25 Sep 2014 17:22:23 +0000 (10:22 -0700)]
Merge pull request #1157 from ducha-aiki/fix-extract-features
Small fix in extract_features
Alireza Shafaei [Wed, 24 Sep 2014 05:35:00 +0000 (22:35 -0700)]
Random crop bugfix and abstracting random number generation inside data_transformer
D.Mishkin [Thu, 25 Sep 2014 14:30:48 +0000 (17:30 +0300)]
Removed unnecessary "mutable"
Evan Shelhamer [Thu, 25 Sep 2014 03:29:23 +0000 (20:29 -0700)]
Merge pull request #1147 from savy-91/patch-1
Changed "blas" to "openblas"
Evan Shelhamer [Thu, 25 Sep 2014 03:19:39 +0000 (20:19 -0700)]
Merge pull request #1138 from ksimonyan/vgg_models_support
VGG models support
Evan Shelhamer [Wed, 24 Sep 2014 21:48:04 +0000 (14:48 -0700)]
Back-merge
Fixed param order of cv::Size in cv::resize
switch examples to lmdb (except for custom data loaders)
fix cifar10 paths so they can be run from caffe root
default backend to lmdb for image conversion and mean computation
Evan Shelhamer [Wed, 24 Sep 2014 21:43:35 +0000 (14:43 -0700)]
Merge pull request #1152 from sguada/fix_cv_size_order
Fix param order of cv::Size in cv::resize
Sergio [Wed, 24 Sep 2014 21:27:57 +0000 (14:27 -0700)]
Fixed param order of cv::Size in cv::resize
Karen Simonyan [Wed, 24 Sep 2014 21:08:46 +0000 (22:08 +0100)]
added a Matlab demo with mean BGR pixel subtraction instead of the mean image subtraction
savy-91 [Tue, 23 Sep 2014 19:57:16 +0000 (21:57 +0200)]
Changed "blas" to "openblas"
The change was made because OpenBLAS has to be linked as "-lopenblas", not "-lblas"; the latter caused a linker error when compiling.
Karen Simonyan [Tue, 23 Sep 2014 15:39:08 +0000 (16:39 +0100)]
RGB -> BGR in the matlab demo
Karen Simonyan [Mon, 22 Sep 2014 19:39:37 +0000 (20:39 +0100)]
added example usage to the Matlab script
Karen Simonyan [Mon, 22 Sep 2014 19:09:35 +0000 (20:09 +0100)]
added comments to the Matlab demo script
Karen Simonyan [Sun, 21 Sep 2014 16:59:25 +0000 (17:59 +0100)]
added matcaffe_demo for the VGG models (RGB input)
Karen Simonyan [Sun, 21 Sep 2014 16:58:41 +0000 (17:58 +0100)]
added support for "k" LRN parameter to upgrade_proto
Karen Simonyan [Thu, 18 Sep 2014 00:08:56 +0000 (01:08 +0100)]
adds a parameter to the LRN layer (denoted as "k" in [Krizhevsky et al., NIPS 2012])
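For reference, a sketch of the local response normalization expression with the new parameter (as written in Krizhevsky et al., NIPS 2012; the division of alpha by the local size n follows the usual Caffe convention, and k = 1 recovers the previous fixed behavior — an assumption from the defaults, not stated in the commit):

```latex
\[
b^{i}_{x,y} = a^{i}_{x,y} \Bigg/
\left( k + \frac{\alpha}{n}
\sum_{j=\max(0,\,i-n/2)}^{\min(N-1,\,i+n/2)} \left(a^{j}_{x,y}\right)^{2}
\right)^{\beta}
\]
```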
Sergey Karayev [Mon, 22 Sep 2014 05:53:30 +0000 (22:53 -0700)]
web demo fix, closes #1002
Evan Shelhamer [Mon, 22 Sep 2014 05:08:08 +0000 (22:08 -0700)]
Merge pull request #1128 from shelhamer/default-db-lmdb
default to lmdb for database storage
Evan Shelhamer [Sun, 21 Sep 2014 22:32:03 +0000 (15:32 -0700)]
switch examples to lmdb (except for custom data loaders)
Jeff Donahue [Fri, 19 Sep 2014 19:38:39 +0000 (12:38 -0700)]
fix cifar10 paths so they can be run from caffe root
Evan Shelhamer [Sun, 21 Sep 2014 22:20:47 +0000 (15:20 -0700)]
default backend to lmdb for image conversion and mean computation
lmdb is 10-15% faster than leveldb, although it takes ~1.1x the storage.
This is usually irrelevant for prefetching since both are fast enough,
but more importantly lmdb allows multiple concurrent reads for training
and evaluating several models on the same data.
Evan Shelhamer [Sun, 21 Sep 2014 21:21:16 +0000 (14:21 -0700)]
Back-merge
define up-to-date all-in-one model for pascal finetuning
load transform params in window data layer
include WindowDataLayer in data param upgrade
Fix typo in LRN-expression in docs
Sergio Guadarrama [Sun, 21 Sep 2014 20:21:22 +0000 (13:21 -0700)]
Merge pull request #1126 from shelhamer/window-data-param-upgrade
[fix] include WindowDataLayer in data param upgrade
Evan Shelhamer [Sun, 21 Sep 2014 18:38:20 +0000 (11:38 -0700)]
define up-to-date all-in-one model for pascal finetuning
Evan Shelhamer [Sun, 21 Sep 2014 17:29:39 +0000 (10:29 -0700)]
load transform params in window data layer
Evan Shelhamer [Sun, 21 Sep 2014 16:52:51 +0000 (09:52 -0700)]
include WindowDataLayer in data param upgrade
Jeff Donahue [Sat, 20 Sep 2014 15:39:14 +0000 (08:39 -0700)]
Merge pull request #1115 from rickardnorlander/master
Fix typo in LRN-expression in docs
longjon [Sat, 20 Sep 2014 06:51:25 +0000 (23:51 -0700)]
Merge pull request #1118 from shelhamer/1x1-conv
Optimize 1x1 convolution for Network-in-Network style operation
Evan Shelhamer [Sat, 20 Sep 2014 06:12:12 +0000 (23:12 -0700)]
combine col_{data,diff} into single col_buff to halve memory usage
conv forward / backward only need one of the im2col data and diff
at a time, so consolidating the two saves a lazy allocation.
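A minimal sketch of the buffer-sharing idea (illustrative names, not Caffe's actual classes): since the forward pass needs the column buffer only for im2col'd data and the backward pass only for column diffs, and never both at once, one lazily grown buffer can serve both roles.

```cpp
#include <cstddef>
#include <vector>

// Sketch: one shared column buffer instead of separate col_data and
// col_diff, halving the im2col scratch memory.
struct ColBufferSketch {
  std::vector<float> col_buff;  // single buffer for both passes

  // Grow-only lazy allocation; both passes reuse the same storage.
  void LazyAllocate(std::size_t count) {
    if (col_buff.size() < count) col_buff.resize(count);
  }

  float* ForwardScratch(std::size_t count) {   // holds im2col'd data
    LazyAllocate(count);
    return col_buff.data();
  }
  float* BackwardScratch(std::size_t count) {  // holds column diffs,
    LazyAllocate(count);                       // reused after forward
    return col_buff.data();
  }
};
```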
Evan Shelhamer [Sat, 20 Sep 2014 05:31:52 +0000 (22:31 -0700)]
Back-merge
Evan Shelhamer [Sat, 20 Sep 2014 01:59:38 +0000 (18:59 -0700)]
optimize 1x1 convolution for Network-in-Network style layers
1x1 convolution with stride 1 is a special case of Caffe's matrix
multiplication convolution for which the im2col / col2im transformations
are actually the identity. For this special case the column-buffer memory
and the transformations are skipped.
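To see why, note that with a 1x1 kernel, stride 1, and no padding, each column of the im2col matrix is just the input pixel itself, so the convolution reduces to a plain matrix multiply of the [M x C] weights with the [C x H*W] input. A self-contained sketch (naive loops standing in for the GEMM call; names are illustrative):

```cpp
#include <vector>

// 1x1 / stride-1 convolution as a direct matrix multiply:
//   output[M x spatial] = weight[M x C] * input[C x spatial]
// where spatial = H * W. No im2col buffer is needed.
std::vector<float> Conv1x1(const std::vector<float>& weight,  // M x C
                           const std::vector<float>& input,   // C x spatial
                           int M, int C, int spatial) {
  std::vector<float> output(M * spatial, 0.f);
  for (int m = 0; m < M; ++m)
    for (int c = 0; c < C; ++c)
      for (int s = 0; s < spatial; ++s)
        output[m * spatial + s] += weight[m * C + c] * input[c * spatial + s];
  return output;
}
```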
Evan Shelhamer [Sat, 20 Sep 2014 04:32:16 +0000 (21:32 -0700)]
drop out-of-date conv test comments
Evan Shelhamer [Sat, 20 Sep 2014 04:30:51 +0000 (21:30 -0700)]
Merge pull request #1117 from ronghanghu/fix-finetune-example
[fix] set R-CNN fine-tuning PASCAL example paths
Ronghang Hu [Fri, 19 Sep 2014 23:55:32 +0000 (07:55 +0800)]
fix directory in finetune pascal example
longjon [Fri, 19 Sep 2014 23:14:07 +0000 (16:14 -0700)]
Merge pull request #945 from longjon/fixtypes
Fix types of SetUp, Forward, Backward, and gradient checker calls
Jonathan L Long [Fri, 19 Sep 2014 23:04:37 +0000 (16:04 -0700)]
fix types of (Layer)SetUp, Reshape, Forward, and Backward calls
Using the type vector<Blob<Dtype>*>* for outputs allows modification of
the vector itself, while it is only okay to modify the blobs pointed to
by the elements of the vector. Switching the types to const
vector<Blob<Dtype>*>& makes them more correct.
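A toy illustration of the distinction (ToyBlob is a stand-in, not Caffe's Blob): a const reference to a vector of pointers lets a callee mutate the pointed-to blobs, while any attempt to modify the vector itself is rejected at compile time — exactly the contract the layer interfaces want.

```cpp
#include <vector>

struct ToyBlob { int count = 0; };

// const vector<T*>& : the vector is immutable, the pointees are not.
void GoodForward(const std::vector<ToyBlob*>& top) {
  top[0]->count = 42;        // OK: modifies the blob pointed to
  // top.push_back(nullptr); // would NOT compile: the vector is const
}
```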
Jeff Donahue [Fri, 19 Sep 2014 19:38:39 +0000 (12:38 -0700)]
fix cifar10 paths so they can be run from caffe root
rickardnorlander [Fri, 19 Sep 2014 18:47:44 +0000 (20:47 +0200)]
Fix typo in LRN-expression in docs
This is more consistent with krizhevsky2012, and it seems to be what the code does.
Evan Shelhamer [Fri, 19 Sep 2014 05:22:02 +0000 (22:22 -0700)]
Merge pull request #1112 from BVLC/next
Next: release candidate
Evan Shelhamer [Fri, 19 Sep 2014 05:07:01 +0000 (22:07 -0700)]
relax precision of gradient-based solver tests
Evan Shelhamer [Fri, 19 Sep 2014 03:46:49 +0000 (20:46 -0700)]
[example] groom siamese notebook
Evan Shelhamer [Fri, 19 Sep 2014 04:37:31 +0000 (21:37 -0700)]
Merge pull request #959 from nickcarlevaris/contrastive_loss
Add contrastive loss layer, tests, and a siamese network example
Evan Shelhamer [Fri, 19 Sep 2014 04:25:10 +0000 (21:25 -0700)]
[docs] order ipython notebooks
Evan Shelhamer [Fri, 19 Sep 2014 03:24:19 +0000 (20:24 -0700)]
[example] resurrect imagenet training scripts
Evan Shelhamer [Fri, 19 Sep 2014 03:15:50 +0000 (20:15 -0700)]
[model zoo] ignore models -- only for reference or zoo
Evan Shelhamer [Fri, 19 Sep 2014 03:06:24 +0000 (20:06 -0700)]
[model zoo] download from gist grooming
- invoke by shell
- default download dir to models/
- save to flat dir of owner-gist instead of nested owner/gist
Evan Shelhamer [Thu, 18 Sep 2014 23:27:52 +0000 (16:27 -0700)]
Merge pull request #1110 from sergeyk/dev
[model zoo] download gist script
Sergey Karayev [Thu, 18 Sep 2014 23:11:07 +0000 (16:11 -0700)]
[model zoo] download gist script
Evan Shelhamer [Thu, 18 Sep 2014 20:32:35 +0000 (13:32 -0700)]
Merge pull request #594 from longjon/layer-reshaping
On-the-fly net resizing, without reallocation (where possible)
Jonathan L Long [Fri, 12 Sep 2014 23:07:34 +0000 (16:07 -0700)]
check that LRN's local_size is odd as the current implementation requires
Jonathan L Long [Fri, 12 Sep 2014 22:33:49 +0000 (15:33 -0700)]
[docs] clarify the use of Blob::Reshape a bit
Jonathan L Long [Fri, 12 Sep 2014 22:23:11 +0000 (15:23 -0700)]
[pycaffe] expose Net::Reshape
Jonathan L Long [Fri, 12 Sep 2014 22:20:52 +0000 (15:20 -0700)]
add Net::Reshape for only reshaping
Note that it is not normally necessary to call this function when using
reshapable nets, but sometimes it can be useful to compute the sizes of
intermediate layers without waiting for the forward pass.
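The kind of shape arithmetic a Reshape pass performs can be sketched as pure size computation, with no forward work at all. The function below uses the common floor-form output-size formula; actual layers differ in details such as rounding, so treat this as illustrative rather than Caffe's exact code:

```cpp
#include <utility>

// Compute an output spatial size from input size, kernel, pad, and
// stride -- the sort of arithmetic Reshape does to size intermediate
// blobs without running Forward. (Floor form; names illustrative.)
std::pair<int, int> OutputShape(int height, int width, int kernel, int pad,
                                int stride) {
  int out_h = (height + 2 * pad - kernel) / stride + 1;
  int out_w = (width + 2 * pad - kernel) / stride + 1;
  return {out_h, out_w};
}
```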
Jonathan L Long [Fri, 12 Sep 2014 21:34:16 +0000 (14:34 -0700)]
include Reshape in caffe time
Since we are now calling Reshape in the Forward pass, it's only fair to
include it when timing. Reshape calls should normally be four or so
orders of magnitude faster than Forward calls; this change also makes it
easy to notice a mistake that causes something slow to happen in
Reshape.
Jonathan L Long [Wed, 2 Jul 2014 19:21:10 +0000 (12:21 -0700)]
test net reshaping
Jonathan L Long [Fri, 12 Sep 2014 20:58:10 +0000 (13:58 -0700)]
default LayerSetUp to no-op instead of NOT_IMPLEMENTED
Now that top blobs are set up in Layer::Reshape, it's Reshape that is
mandatory, and simple layers often don't need to implement LayerSetUp.
Reshape is (already) declared abstract, so not implementing it is a
compile-time error.
Jonathan L Long [Fri, 12 Sep 2014 20:56:38 +0000 (13:56 -0700)]
call Reshape in Layer::SetUp
Strictly speaking, Reshape doesn't need to be called until the first
Forward call; however, much existing code (especially tests) assumes
that top blobs will be set up in SetUp, so we may as well do it there.
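The interface contract described by these two commits can be sketched with toy classes (illustrative, not Caffe's actual declarations): LayerSetUp defaults to a no-op, Reshape is pure virtual so omitting it is a compile-time error, and SetUp calls Reshape so top blobs are shaped by the time SetUp returns, as the tests expect.

```cpp
#include <vector>

template <typename Dtype>
struct ToyBlob { std::vector<int> shape; };

template <typename Dtype>
class ToyLayer {
 public:
  virtual ~ToyLayer() {}
  // SetUp runs one-time setup, then Reshape, so tops are set up here.
  void SetUp(const std::vector<ToyBlob<Dtype>*>& bottom,
             const std::vector<ToyBlob<Dtype>*>& top) {
    LayerSetUp(bottom, top);
    Reshape(bottom, top);
  }
  // Default no-op: simple layers need not override it.
  virtual void LayerSetUp(const std::vector<ToyBlob<Dtype>*>& bottom,
                          const std::vector<ToyBlob<Dtype>*>& top) {}
  // Pure virtual: forgetting Reshape is a compile-time error.
  virtual void Reshape(const std::vector<ToyBlob<Dtype>*>& bottom,
                       const std::vector<ToyBlob<Dtype>*>& top) = 0;
};

// An elementwise layer only needs Reshape: top mirrors bottom.
template <typename Dtype>
class ToyReluLayer : public ToyLayer<Dtype> {
 public:
  virtual void Reshape(const std::vector<ToyBlob<Dtype>*>& bottom,
                       const std::vector<ToyBlob<Dtype>*>& top) {
    top[0]->shape = bottom[0]->shape;
  }
};
```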
Jonathan L Long [Thu, 11 Sep 2014 06:31:33 +0000 (23:31 -0700)]
split off Reshape for vision layers
Note that we are dropping some checks from the LRN layer. However, these
checks are fairly redundant; something is very wrong if these layers
are producing top blobs that are different sizes than their inputs, and
tests are the right place to catch that. The thing that really should be
checked (but isn't) is that local_size needs to be odd; this will
be added in a future commit.
Jonathan L Long [Thu, 11 Sep 2014 05:42:45 +0000 (22:42 -0700)]
split off Reshape for common layers
Jonathan L Long [Thu, 11 Sep 2014 04:48:51 +0000 (21:48 -0700)]
split off Reshape for neuron layers
Jonathan L Long [Thu, 11 Sep 2014 04:30:05 +0000 (21:30 -0700)]
split off Reshape for loss layers
Jonathan L Long [Wed, 10 Sep 2014 21:57:07 +0000 (14:57 -0700)]
split off Reshape for data layers
Jonathan L Long [Thu, 11 Sep 2014 06:15:22 +0000 (23:15 -0700)]
separate setConvolutionDesc from createConvolutionDesc
Jonathan L Long [Thu, 11 Sep 2014 04:51:58 +0000 (21:51 -0700)]
separate setTensor4dDesc from createTensor4dDesc
This will make it possible to add reshaping to cuDNN layers.
Jonathan L Long [Wed, 10 Sep 2014 21:51:58 +0000 (14:51 -0700)]
enable reshaping in the forward pass
Note that calling Reshape when no reshape is necessary should be
effectively a no-op, so this is not a performance regression.
Jonathan L Long [Wed, 2 Jul 2014 20:08:26 +0000 (13:08 -0700)]
don't reallocate blobs when shrinking memory use
This allows nets to be reshaped very quickly (essentially for free) as
long as sufficient memory has been allocated. Calling Blob::Reshape in
order to free up memory becomes impossible; however, this is not a
normal use case (and deleting blobs does free memory).
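The allocation policy described here can be sketched as a capacity-tracking Reshape (a toy, not Caffe's actual Blob, which allocates lazily via SyncedMemory): reallocation happens only when the requested count exceeds the current capacity, so shrinking — and later growing back within capacity — is essentially free.

```cpp
#include <cstddef>
#include <vector>

class ToyBlob {
 public:
  void Reshape(std::size_t count) {
    count_ = count;
    if (count > capacity_) {         // grow past capacity: reallocate
      capacity_ = count;
      data_.assign(capacity_, 0.f);  // eager here; lazy in real code
    }                                // shrink: keep the allocation
  }
  std::size_t count() const { return count_; }
  std::size_t capacity() const { return capacity_; }

 private:
  std::size_t count_ = 0, capacity_ = 0;
  std::vector<float> data_;
};
```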