Evan Shelhamer [Fri, 19 Sep 2014 03:15:50 +0000 (20:15 -0700)]
[model zoo] ignore models -- only for reference or zoo
Evan Shelhamer [Fri, 19 Sep 2014 03:06:24 +0000 (20:06 -0700)]
[model zoo] download from gist grooming
- invoke by shell
- default download dir to models/
- save to flat dir of owner-gist instead of nested owner/gist
Evan Shelhamer [Thu, 18 Sep 2014 23:27:52 +0000 (16:27 -0700)]
Merge pull request #1110 from sergeyk/dev
[model zoo] download gist script
Sergey Karayev [Thu, 18 Sep 2014 23:11:07 +0000 (16:11 -0700)]
[model zoo] download gist script
Evan Shelhamer [Thu, 18 Sep 2014 20:32:35 +0000 (13:32 -0700)]
Merge pull request #594 from longjon/layer-reshaping
On-the-fly net resizing, without reallocation (where possible)
Jonathan L Long [Fri, 12 Sep 2014 23:07:34 +0000 (16:07 -0700)]
check that LRN's local_size is odd as the current implementation requires
Jonathan L Long [Fri, 12 Sep 2014 22:33:49 +0000 (15:33 -0700)]
[docs] clarify the use of Blob::Reshape a bit
Jonathan L Long [Fri, 12 Sep 2014 22:23:11 +0000 (15:23 -0700)]
[pycaffe] expose Net::Reshape
Jonathan L Long [Fri, 12 Sep 2014 22:20:52 +0000 (15:20 -0700)]
add Net::Reshape for only reshaping
Note that it is not normally necessary to call this function when using
reshapable nets, but sometimes it can be useful to compute the sizes of
intermediate layers without waiting for the forward pass.
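The shape-only propagation described above can be sketched in illustrative Python (hypothetical names, not the Caffe API): a net whose reshape() walks the layers and computes every top shape without running any forward computation.

```python
# Hypothetical sketch: propagate blob shapes through layers on reshape(),
# without computing a forward pass.
class ConvLayer:
    def __init__(self, num_output, kernel):
        self.num_output, self.kernel = num_output, kernel

    def reshape(self, bottom):  # bottom: (N, C, H, W), no pad, stride 1
        n, c, h, w = bottom
        return (n, self.num_output, h - self.kernel + 1, w - self.kernel + 1)

class Net:
    def __init__(self, input_shape, layers):
        self.input_shape, self.layers = input_shape, layers
        self.shapes = []

    def reshape(self):
        # Recompute every intermediate shape; cheap, since no data moves.
        shape = self.input_shape
        self.shapes = [shape]
        for layer in self.layers:
            shape = layer.reshape(shape)
            self.shapes.append(shape)

net = Net((1, 3, 32, 32), [ConvLayer(16, 5), ConvLayer(32, 5)])
net.reshape()
print(net.shapes[1])  # intermediate size known without a forward pass
```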
Jonathan L Long [Fri, 12 Sep 2014 21:34:16 +0000 (14:34 -0700)]
include Reshape in caffe time
Since we are now calling Reshape in the Forward pass, it's only fair to
include it when timing. Reshape calls should normally be four or so
orders of magnitude faster than Forward calls; this change also makes it
easy to notice a mistake that causes something slow to happen in
Reshape.
Jonathan L Long [Wed, 2 Jul 2014 19:21:10 +0000 (12:21 -0700)]
test net reshaping
Jonathan L Long [Fri, 12 Sep 2014 20:58:10 +0000 (13:58 -0700)]
default LayerSetUp to no-op instead of NOT_IMPLEMENTED
Now that top blobs are set up in Layer::Reshape, it's Reshape that is
mandatory, and simple layers often don't need to implement LayerSetUp.
Reshape is (already) declared abstract, so not implementing it is a
compile-time error.
Jonathan L Long [Fri, 12 Sep 2014 20:56:38 +0000 (13:56 -0700)]
call Reshape in Layer::SetUp
Strictly speaking, Reshape doesn't need to be called until the first
Forward call; however, much existing code (especially tests) assumes
that top blobs will be set up in SetUp, so we may as well do it there.
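The set-up protocol from the two commits above can be sketched as follows; the method names mirror the C++ ones, but this is illustrative Python, not the Caffe source.

```python
from abc import ABC, abstractmethod

class Layer(ABC):
    def setup(self, bottom, top):
        self.layer_setup(bottom, top)  # one-time configuration
        self.reshape(bottom, top)      # top blobs get their shapes here

    def layer_setup(self, bottom, top):
        pass                           # default: no-op, as described above

    @abstractmethod
    def reshape(self, bottom, top):
        ...                            # mandatory: omitting it is an error

class ReLULayer(Layer):
    def reshape(self, bottom, top):
        top["shape"] = bottom["shape"]  # elementwise layer: same shape

layer = ReLULayer()
bottom, top = {"shape": (2, 3)}, {}
layer.setup(bottom, top)  # top is set up during setup, as tests assume
```

In Python the abstract method turns the missing-Reshape mistake into an instantiation-time error rather than C++'s compile-time error, but the protocol is the same.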
Jonathan L Long [Thu, 11 Sep 2014 06:31:33 +0000 (23:31 -0700)]
split off Reshape for vision layers
Note that we are dropping some checks from LRN layer. However, these
checks are fairly redundant; something is very wrong if these layers
produce top blobs of a different size than their inputs, and tests are
the right place to catch that. The check that really should be made
(but isn't) is that local_size is odd; this will be added in a future
commit.
Jonathan L Long [Thu, 11 Sep 2014 05:42:45 +0000 (22:42 -0700)]
split off Reshape for common layers
Jonathan L Long [Thu, 11 Sep 2014 04:48:51 +0000 (21:48 -0700)]
split off Reshape for neuron layers
Jonathan L Long [Thu, 11 Sep 2014 04:30:05 +0000 (21:30 -0700)]
split off Reshape for loss layers
Jonathan L Long [Wed, 10 Sep 2014 21:57:07 +0000 (14:57 -0700)]
split off Reshape for data layers
Jonathan L Long [Thu, 11 Sep 2014 06:15:22 +0000 (23:15 -0700)]
separate setConvolutionDesc from createConvolutionDesc
Jonathan L Long [Thu, 11 Sep 2014 04:51:58 +0000 (21:51 -0700)]
separate setTensor4dDesc from createTensor4dDesc
This will make it possible to add reshaping to cuDNN layers.
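The create/set split can be illustrated with a minimal sketch (hypothetical names): creation allocates the descriptor once at layer set-up, while the set call reconfigures it cheaply on every reshape.

```python
# Sketch of separating descriptor creation from configuration.
class Tensor4dDesc:
    def __init__(self):
        self.shape = None  # created empty, configured later

def create_tensor4d_desc():
    return Tensor4dDesc()  # once, during layer set-up

def set_tensor4d_desc(desc, n, c, h, w):
    desc.shape = (n, c, h, w)  # cheap to call again on each reshape

desc = create_tensor4d_desc()
set_tensor4d_desc(desc, 1, 3, 32, 32)  # initial shape
set_tensor4d_desc(desc, 1, 3, 64, 64)  # reshaped without recreating
```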
Jonathan L Long [Wed, 10 Sep 2014 21:51:58 +0000 (14:51 -0700)]
enable reshaping in the forward pass
Note that calling Reshape when no reshape is necessary should be
effectively a no-op, so this is not a performance regression.
Jonathan L Long [Wed, 2 Jul 2014 20:08:26 +0000 (13:08 -0700)]
don't reallocate blobs when shrinking memory use
This allows nets to be reshaped very quickly (essentially for free) as
long as sufficient memory has been allocated. Calling Blob::Reshape in
order to free up memory becomes impossible; however, this is not a
normal use case (and deleting blobs does free memory).
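The behavior described above can be sketched with a toy Blob (assumed semantics, not the Caffe source): reshape tracks a capacity, so only growing triggers a reallocation and shrinking keeps the existing buffer.

```python
class Blob:
    def __init__(self):
        self.shape, self.count, self.capacity = (), 0, 0
        self.data = None

    def reshape(self, *shape):
        count = 1
        for dim in shape:
            count *= dim
        self.shape, self.count = shape, count
        if count > self.capacity:    # only growing reallocates
            self.capacity = count
            self.data = bytearray(count)

blob = Blob()
blob.reshape(4, 3, 32, 32)
buf = blob.data
blob.reshape(2, 3, 32, 32)  # shrink: same buffer, essentially free
assert blob.data is buf     # memory was not freed, as noted above
```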
Jonathan L Long [Wed, 10 Sep 2014 21:43:11 +0000 (14:43 -0700)]
add abstract Layer::Reshape, and document the new method protocol
Jonathan L Long [Thu, 11 Sep 2014 05:06:15 +0000 (22:06 -0700)]
use Blob directly instead of shared_ptr for EltwiseLayer::max_idx_
This is in keeping with #742.
Evan Shelhamer [Thu, 18 Sep 2014 16:45:31 +0000 (09:45 -0700)]
Merge pull request #1104 from shelhamer/conv-comments-tests
Document and Test Convolution
Evan Shelhamer [Thu, 18 Sep 2014 16:41:34 +0000 (09:41 -0700)]
Merge pull request #1100 from cNikolaou/issue1099
Polish mnist + cifar10 examples.
Evan Shelhamer [Thu, 18 Sep 2014 16:41:19 +0000 (09:41 -0700)]
[docs] lenet grooming
Evan Shelhamer [Thu, 18 Sep 2014 06:38:31 +0000 (23:38 -0700)]
[docs] comment ConvolutionLayer
Evan Shelhamer [Thu, 18 Sep 2014 15:40:00 +0000 (08:40 -0700)]
test convolution by random weights for robustness
Evan Shelhamer [Wed, 17 Sep 2014 22:10:18 +0000 (15:10 -0700)]
test convolution against explicit reference implementation
To thoroughly check convolution, the output is compared against a
reference implementation by explicit looping. Simple and group
convolution by the Caffe and cuDNN engines are checked against the
reference.
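A reference implementation by explicit looping, in the spirit of the test described above, can be sketched in a few lines (single channel, no padding, stride 1; the real test additionally covers channels, groups, and both engines).

```python
def conv2d_reference(image, kernel):
    """Valid cross-correlation of a 2-D image with a 2-D kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0.0] * (iw - kw + 1) for _ in range(ih - kh + 1)]
    for y in range(ih - kh + 1):
        for x in range(iw - kw + 1):
            acc = 0.0
            for ky in range(kh):          # explicit loop over the window
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            out[y][x] = acc
    return out

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
kernel = [[1, 0], [0, 1]]
print(conv2d_reference(image, kernel))  # -> [[6.0, 8.0], [12.0, 14.0]]
```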
Christos Nikolaou [Thu, 18 Sep 2014 09:49:17 +0000 (12:49 +0300)]
Updated mnist/readme.md file with additional information.
Christos Nikolaou [Wed, 17 Sep 2014 18:28:51 +0000 (21:28 +0300)]
Update readme.md files of the cifar10 and mnist examples; fix broken links.
Evan Shelhamer [Tue, 16 Sep 2014 23:01:13 +0000 (16:01 -0700)]
Merge pull request #1093 from CellScope/io-cant-load-error-msg
[fix] Move file reading error checking closer to actual file read command
Daniel Golden [Tue, 9 Sep 2014 20:42:40 +0000 (13:42 -0700)]
[Bugfix] Move error checking closer to file read
Previously, if (height > 0 && width > 0) was true, cv::resize() was
called before cv_img_origin was confirmed valid; if the image file or
filename was invalid, this caused an OpenCV assert error like this:
terminate called after throwing an instance of 'cv::Exception'
what(): /tmp/A3p0_4200_32550/batserve/A3p0/glnxa64/OpenCV/modules/imgproc/src/imgwarp.cpp:1725: error: (-215)
ssize.area() > 0 in function resize
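The pattern of the fix can be sketched with a hypothetical loader (illustrative Python, not the Caffe code): confirm the decoded image is valid before resizing, so a bad path fails with a clear error instead of an assert deep inside the resize call.

```python
def load_and_resize(path, height, width, decode):
    img = decode(path)   # stands in for cv::imread in the real code
    if img is None:      # check validity first...
        raise IOError("could not open or decode file " + path)
    if height > 0 and width > 0:
        img = ("resized", height, width)  # ...only then resize
    return img

# A decoder that fails (as imread does on a bad path) now yields a
# readable error rather than an OpenCV assert from cv::resize.
try:
    load_and_resize("missing.jpg", 256, 256, decode=lambda p: None)
except IOError as e:
    print(e)
```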
longjon [Tue, 16 Sep 2014 22:38:05 +0000 (15:38 -0700)]
Merge pull request #1088 from shelhamer/fix-solverstate-filename
Fix snapshot filename for solver states
Jeff Donahue [Tue, 16 Sep 2014 17:33:51 +0000 (10:33 -0700)]
Merge pull request #1091 from ronghanghu/fix_window_data_layer
set up datum size for WindowDataLayer
Ronghang Hu [Tue, 16 Sep 2014 16:56:11 +0000 (09:56 -0700)]
set up datum size for WindowDataLayer
Evan Shelhamer [Tue, 16 Sep 2014 15:19:02 +0000 (08:19 -0700)]
[fix] snapshot model weights as .caffemodel, solver state as .solverstate
Evan Shelhamer [Tue, 16 Sep 2014 15:13:17 +0000 (08:13 -0700)]
[example] update paths in net surgery
Evan Shelhamer [Mon, 15 Sep 2014 22:59:17 +0000 (15:59 -0700)]
Merge pull request #1083 from longjon/fix-solver-gpu-init
Fix solver GPU initialization order (e.g., training with cuDNN on non-default device)
Jonathan L Long [Mon, 15 Sep 2014 21:15:58 +0000 (14:15 -0700)]
fix caffe train GPU initialization
Previously, the solver constructed nets before the caffe train tool read
the --gpu flag, which can cause errors due to LayerSetUp executing on
the wrong device (breaking cuDNN, for example).
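The ordering fix can be sketched abstractly (illustrative, not the caffe tool's code): select the device from the --gpu flag before constructing the solver, since construction runs LayerSetUp on whatever device is current.

```python
events = []

def set_device(gpu_id):
    events.append(("set_device", gpu_id))

def make_solver():
    events.append(("layer_setup",))  # runs on the currently selected device

def train(argv):
    gpu = int(argv[argv.index("--gpu") + 1]) if "--gpu" in argv else 0
    set_device(gpu)   # fixed order: pick the device first...
    make_solver()     # ...then construct nets

train(["--gpu", "1"])
assert events == [("set_device", 1), ("layer_setup",)]
```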
Jeff Donahue [Sun, 14 Sep 2014 22:14:46 +0000 (15:14 -0700)]
Merge pull request #1077 from bhack/glog_ppa
Add PPA for gflags and glog
Evan Shelhamer [Sun, 14 Sep 2014 21:18:52 +0000 (14:18 -0700)]
Merge pull request #1076 from kloudkl/cuda-6.5
Update CUDA to version 6.5 in the Travis install script
bhack [Sun, 14 Sep 2014 20:19:51 +0000 (22:19 +0200)]
Fix a little typo
bhack [Sun, 14 Sep 2014 19:16:38 +0000 (21:16 +0200)]
Fix comments
Jonathan L Long [Sun, 14 Sep 2014 01:30:52 +0000 (18:30 -0700)]
fix spelling error in caffe.proto
Jonathan L Long [Sun, 14 Sep 2014 01:30:32 +0000 (18:30 -0700)]
fix out-of-date next ID comment for SolverParameter
Kai Li [Fri, 12 Sep 2014 16:21:06 +0000 (00:21 +0800)]
Update CUDA to version 6.5 in the Travis install script
bhack [Fri, 12 Sep 2014 17:50:15 +0000 (19:50 +0200)]
Add PPA for gflags and glog
Jeff Donahue [Thu, 11 Sep 2014 15:45:09 +0000 (17:45 +0200)]
Merge pull request #1051 from jeffdonahue/travis-red-errors
restore "red X" build failures in Travis
Jeff Donahue [Thu, 11 Sep 2014 14:08:12 +0000 (16:08 +0200)]
add -fPIC flag to CMake build
Jeff Donahue [Mon, 8 Sep 2014 09:11:41 +0000 (11:11 +0200)]
restore "red X" build failures in Travis
Jeff Donahue [Thu, 11 Sep 2014 05:00:33 +0000 (07:00 +0200)]
Merge pull request #1067 from bhack/lmdb
Get lmdb from openldap
bhack [Wed, 10 Sep 2014 22:36:04 +0000 (00:36 +0200)]
Fix lmdb Travis build with OpenLDAP
Jeff Donahue [Wed, 10 Sep 2014 13:49:45 +0000 (15:49 +0200)]
Merge pull request #1053 from jeffdonahue/to3i-elem_max_layer
rebase and fixup #688 from @to3i: elementwise max
Jeff Donahue [Mon, 8 Sep 2014 15:49:42 +0000 (17:49 +0200)]
lint & reduce gradient check stepsize to pass checks
to3i [Fri, 11 Jul 2014 11:19:23 +0000 (13:19 +0200)]
Implemented elementwise max layer
Evan Shelhamer [Mon, 8 Sep 2014 10:47:47 +0000 (12:47 +0200)]
Back-merge to dev for slides
Evan Shelhamer [Mon, 8 Sep 2014 10:46:21 +0000 (12:46 +0200)]
Merge pull request #1052 from shelhamer/caffe-presentation
Caffe tutorial slides
Evan Shelhamer [Mon, 8 Sep 2014 10:44:21 +0000 (12:44 +0200)]
[docs] replace intro slides with caffe tutorial
Jeff Donahue [Mon, 8 Sep 2014 09:02:31 +0000 (11:02 +0200)]
Revert "call __signbit for CUDA >= 6.5 implementation" -- doesn't
compile on OSX w/ CUDA 6.5
This reverts commit 8819f5953b903ec8b48e541271737e89a2cd24e6.
Jeff Donahue [Mon, 8 Sep 2014 08:49:06 +0000 (10:49 +0200)]
Merge pull request #1050 from jeffdonahue/linecount-more
linecount counts more dirs than just src/
Jeff Donahue [Mon, 8 Sep 2014 08:48:49 +0000 (10:48 +0200)]
Merge pull request #1044 from jeffdonahue/no-tmpnam
change uses of tmpnam to mkstemp/mkdtemp
Jeff Donahue [Mon, 8 Sep 2014 08:42:55 +0000 (10:42 +0200)]
linecount counts more dirs than just src/
Evan Shelhamer [Mon, 8 Sep 2014 08:03:55 +0000 (10:03 +0200)]
[lint] cuDNN conv declaration
Evan Shelhamer [Mon, 8 Sep 2014 07:57:44 +0000 (09:57 +0200)]
Merge pull request #1046 from shelhamer/cudnn
cuDNN acceleration
Jeff Donahue [Mon, 8 Sep 2014 07:28:49 +0000 (09:28 +0200)]
Merge pull request #1049 from niuzhiheng/dev
Fixed CMake script of FindOpenBLAS.
ZhiHeng NIU [Mon, 8 Sep 2014 06:46:43 +0000 (14:46 +0800)]
Fixed CMake script of FindOpenBLAS.
Jeff Donahue [Mon, 8 Sep 2014 06:41:38 +0000 (08:41 +0200)]
Merge pull request #1045 from akosiorek/origin/dev
Fixed CMake building test objects multiple times
Jeff Donahue [Mon, 8 Sep 2014 05:34:46 +0000 (07:34 +0200)]
Merge pull request #1048 from jyegerlehner/conv_layer-init-weight
Conv layer: fix crash by setting weight pointer
J Yegerlehner [Mon, 8 Sep 2014 04:10:33 +0000 (23:10 -0500)]
Fix more lint.
J Yegerlehner [Mon, 8 Sep 2014 02:46:38 +0000 (21:46 -0500)]
Repair crash in conv_layer due to weight pointer being NULL.
Evan Shelhamer [Sun, 7 Sep 2014 16:59:40 +0000 (18:59 +0200)]
[docs] include cuDNN in installation and performance reference
Evan Shelhamer [Sat, 6 Sep 2014 07:27:51 +0000 (00:27 -0700)]
report cuDNN error string
Evan Shelhamer [Sat, 6 Sep 2014 07:18:10 +0000 (00:18 -0700)]
CUDNN_CHECK
Evan Shelhamer [Sat, 6 Sep 2014 06:53:04 +0000 (23:53 -0700)]
strategize cuDNN softmax
Evan Shelhamer [Sat, 6 Sep 2014 05:43:58 +0000 (22:43 -0700)]
strategize cuDNN activations: ReLU, Sigmoid, TanH
Evan Shelhamer [Sat, 6 Sep 2014 04:47:56 +0000 (21:47 -0700)]
strategize cuDNN pooling
Evan Shelhamer [Tue, 2 Sep 2014 05:05:43 +0000 (22:05 -0700)]
strategize cuDNN convolution
Evan Shelhamer [Thu, 4 Sep 2014 00:45:14 +0000 (17:45 -0700)]
call __signbit for CUDA >= 6.5 implementation
Evan Shelhamer [Mon, 1 Sep 2014 21:51:24 +0000 (14:51 -0700)]
add cuDNN to build
Adam Kosiorek [Sun, 7 Sep 2014 17:03:44 +0000 (19:03 +0200)]
added common.cpp explicitly to tests
Adam Kosiorek [Sat, 6 Sep 2014 12:40:54 +0000 (14:40 +0200)]
cpp and cu files processed separately in test build
Adam Kosiorek [Fri, 5 Sep 2014 23:23:02 +0000 (01:23 +0200)]
enabled object file reusing in test builds
Jeff Donahue [Sun, 7 Sep 2014 09:47:31 +0000 (11:47 +0200)]
add <cuda>/lib64 only if exists to suppress linker warnings
Jeff Donahue [Fri, 5 Sep 2014 12:01:46 +0000 (05:01 -0700)]
remove uses of tmpnam
Jeff Donahue [Sun, 7 Sep 2014 07:44:58 +0000 (09:44 +0200)]
fix transform_param in mnist_autoencoder.prototxt
Jonathan L Long [Sun, 7 Sep 2014 05:10:29 +0000 (22:10 -0700)]
[docs] tutorial/layers: fix inner product sample
Jonathan L Long [Sun, 7 Sep 2014 05:09:13 +0000 (22:09 -0700)]
[docs] tutorial/layers: describe some more data layers
Jonathan L Long [Sun, 7 Sep 2014 04:42:35 +0000 (21:42 -0700)]
[docs] tutorial/layers: clean up sample markdown
Jonathan L Long [Sun, 7 Sep 2014 04:39:56 +0000 (21:39 -0700)]
[docs] tutorial/layers: brief descriptions of some loss layers
Jonathan L Long [Sun, 7 Sep 2014 04:22:23 +0000 (21:22 -0700)]
[docs] in tutorial/layers, Options -> Parameters
It sounds funny to have optional options, and "parameters" is more in
line with the internal usage.
Jonathan L Long [Sun, 7 Sep 2014 04:20:36 +0000 (21:20 -0700)]
[docs] split layer params in required/optional
Also, make the parameter name come first. This makes it much easier to
find/scan parameters.
Jonathan L Long [Sun, 7 Sep 2014 04:05:56 +0000 (21:05 -0700)]
[docs] add LRN layer to tutorial/layers
Jonathan L Long [Sun, 7 Sep 2014 03:28:02 +0000 (20:28 -0700)]
[docs] fix pooling markdown and add some comments in tutorial
Jonathan L Long [Sun, 7 Sep 2014 03:23:55 +0000 (20:23 -0700)]
[doc] minor edits to convolution layer in tutorial
Jonathan L Long [Sun, 7 Sep 2014 03:14:28 +0000 (20:14 -0700)]
[docs] fixup the MathJax notation in tutorial/layers
Evan Shelhamer [Sun, 7 Sep 2014 01:47:46 +0000 (03:47 +0200)]
Merge pull request #1022 from shelhamer/engine
Add engine switch to pick computational backend
Evan Shelhamer [Sat, 6 Sep 2014 02:36:35 +0000 (19:36 -0700)]
revert separate strategies: engines will extend the caffe standards
Evan Shelhamer [Sat, 6 Sep 2014 02:16:55 +0000 (19:16 -0700)]
revert engine switch for build to always include caffe engine