Martin Häcker [Thu, 2 Apr 2015 20:50:55 +0000 (22:50 +0200)]
Add commented-out helpers for Homebrew users
Evan Shelhamer [Tue, 31 Mar 2015 18:14:30 +0000 (11:14 -0700)]
Merge pull request #2224 from small-yellow-duck/master
[build] check if CPU_ONLY is set when determining CUDA version
Evan Shelhamer [Tue, 31 Mar 2015 18:13:20 +0000 (11:13 -0700)]
Merge pull request #2231 from tnarihi/fix-travis-miniconda
Fix Travis: no need to remove libm in new Miniconda
Takuya Narihira [Tue, 31 Mar 2015 02:52:43 +0000 (19:52 -0700)]
Fix: libm.* does not exist
small-yellow-duck [Sun, 29 Mar 2015 21:30:21 +0000 (14:30 -0700)]
Check if CPU_ONLY is set when determining CUDA version
Previously, CUDA_VERSION would appear to be < 7 if there was no CUDA
installed, and that would generate the wrong C++ flags for compiling on
recent OSX versions. Instead, skip the CUDA version check if CPU_ONLY is
set. This change only affects CPU_ONLY installations.
Evan Shelhamer [Fri, 27 Mar 2015 22:58:14 +0000 (15:58 -0700)]
Merge pull request #2192 from lukeyeager/remove-scikit-learn
Remove scikit-learn dependency -- the need is noted in the relevant example
Jon Long [Fri, 27 Mar 2015 22:27:43 +0000 (15:27 -0700)]
Merge pull request #2199 from lukeyeager/downgrade-pillow
Downgrade Pillow pip requirement
Evan Shelhamer [Thu, 26 Mar 2015 23:50:52 +0000 (16:50 -0700)]
Merge pull request #2211 from nsubtil/fix-cudnn-algo
Fall back to a different cuDNN algorithm when under memory pressure; fix #2197
Nuno Subtil [Tue, 24 Mar 2015 22:48:16 +0000 (15:48 -0700)]
Fall back to a different cuDNN algorithm when under memory pressure
CUDNN_CONVOLUTION_FWD_PREFER_FASTEST requires a lot of GPU memory, which may
not always be available. Add a fallback path that uses
CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM when the allocation fails.
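The fallback control flow can be sketched as follows. This is a hypothetical Python illustration of the decision described above, not the actual C++/cuDNN code; the function name and workspace-size parameters are invented for the example.

```python
# Illustrative sketch only: prefer the fastest cuDNN forward algorithm,
# but fall back to implicit GEMM (which needs no extra workspace) when
# the required workspace allocation would exceed available memory.
def choose_forward_algorithm(workspace_available, fastest_workspace_needed):
    if fastest_workspace_needed <= workspace_available:
        return "FWD_PREFER_FASTEST"
    # allocation would fail -> use the memory-frugal algorithm instead
    return "FWD_ALGO_IMPLICIT_GEMM"
```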
Luke Yeager [Wed, 25 Mar 2015 19:42:17 +0000 (12:42 -0700)]
Downgrade Pillow pip requirement
2.7.0 isn't really necessary - 2.3.0 is sufficient. This is the version
available on Ubuntu 14.04 via apt-get, and seems to be a reasonable lowest
common denominator in general.
http://pillow.readthedocs.org/installation.html#old-versions
Jeff Donahue [Wed, 25 Mar 2015 17:42:22 +0000 (10:42 -0700)]
Merge pull request #2160 from TorosFanny/master
change resorce to resource
Luke Yeager [Wed, 25 Mar 2015 01:15:08 +0000 (18:15 -0700)]
Add note in example about installing scikit-learn
Luke Yeager [Wed, 25 Mar 2015 00:42:56 +0000 (17:42 -0700)]
Remove scikit-learn dependency
Evan Shelhamer [Tue, 24 Mar 2015 21:49:31 +0000 (14:49 -0700)]
Merge pull request #2038 from shelhamer/cudnn-r2
cuDNN v2
Evan Shelhamer [Tue, 24 Mar 2015 21:17:57 +0000 (14:17 -0700)]
note cuDNN v2 convolutional TODOs
Evan Shelhamer [Tue, 17 Feb 2015 01:18:13 +0000 (17:18 -0800)]
cuDNN pooling can pad now
Evan Shelhamer [Tue, 17 Feb 2015 00:01:18 +0000 (16:01 -0800)]
replace cuDNN alphas and betas with coefficient values
Give cuDNN {0, 1} constants for controlling accumulation through the
alpha and beta coefficients.
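The accumulation semantics behind the alpha/beta coefficients can be shown with a small sketch: cuDNN computes y = alpha * op(x) + beta * y, so beta = 0 overwrites the output and beta = 1 accumulates into it. The function below is an illustration of that arithmetic, not Caffe's C++ code.

```python
# Sketch of cuDNN's coefficient convention: y = alpha * op(x) + beta * y.
def apply_op(op_result, y_prev, alpha, beta):
    return alpha * op_result + beta * y_prev
```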
Simon Layton [Tue, 10 Feb 2015 03:08:39 +0000 (22:08 -0500)]
switch to cuDNN R2
Jon Long [Mon, 23 Mar 2015 19:14:09 +0000 (12:14 -0700)]
Merge pull request #2178 from Lewuathe/typo-in-docs
Typos in documents
lewuathe [Sun, 22 Mar 2015 09:53:02 +0000 (18:53 +0900)]
Typos in documents
TorosFanny [Thu, 19 Mar 2015 13:16:37 +0000 (21:16 +0800)]
change resorce to resource
Jeff Donahue [Wed, 18 Mar 2015 02:49:11 +0000 (19:49 -0700)]
Merge pull request #1922 from erictzeng/lrn_large_region_fix
Cross-channel LRN bounds checking for GPU implementation
Jon Long [Tue, 17 Mar 2015 01:53:26 +0000 (18:53 -0700)]
Merge pull request #2127 from kentashoji/fix-python-classify
Fix invalid syntax on classify.py
Kenta Shoji [Sun, 15 Mar 2015 23:26:31 +0000 (08:26 +0900)]
Fix invalid syntax
Jeff Donahue [Fri, 13 Mar 2015 19:46:59 +0000 (12:46 -0700)]
Fix for solver issue pointed out by @moskewcz in #1972
Use ReadNetParamsFromBinaryFileOrDie to read a net param when restoring
from a saved solverstate, which upgrades old nets, rather than
ReadProtoFromBinaryFile.
Jeff Donahue [Fri, 13 Mar 2015 18:47:18 +0000 (11:47 -0700)]
HDF5DataLayer: remove redundant shuffle
Jeff Donahue [Fri, 13 Mar 2015 09:28:01 +0000 (02:28 -0700)]
Merge pull request #2118 from jeffdonahue/PatWie-shufflehdf5
Shuffle HDF5 data
Jeff Donahue [Fri, 13 Mar 2015 08:27:59 +0000 (01:27 -0700)]
HDF5DataLayer shuffle: minor cleanup; clarification in HDF5DataParameter
wieschol [Wed, 22 Oct 2014 13:22:36 +0000 (15:22 +0200)]
shuffle data
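The shuffling idea can be sketched as permuting row indices and reading HDF5 rows in that order. The function name and seeding are illustrative, not the layer's actual implementation.

```python
import random

# Illustrative sketch: permute the data indices once (e.g. per epoch),
# then read rows in the permuted order instead of sequentially.
def shuffled_indices(num_rows, seed=0):
    rng = random.Random(seed)
    order = list(range(num_rows))
    rng.shuffle(order)
    return order
```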
Jeff Donahue [Thu, 12 Mar 2015 07:01:35 +0000 (00:01 -0700)]
Merge pull request #1940 from tnarihi/prelu2
Add PReLU Layer
Takuya Narihira [Mon, 16 Feb 2015 17:52:47 +0000 (09:52 -0800)]
PReLU Layer and its tests
described in Kaiming He et al, "Delving Deep into Rectifiers: Surpassing
Human-Level Performance on ImageNet Classification", arxiv 2015.
Below are the commit messages I accumulated while developing.
PReLULayer takes FillerParameter for init
PReLU testing consistency with ReLU
Fix: PReLU test consistency check
PReLU tests in-place computation, and it failed on GPU
Fix: PReLU in-place backward on GPU
PReLULayer called an incorrect API for copying
data (caffe_gpu_memcpy). The first argument of `caffe_gpu_memcpy` should be
the size of the memory region in bytes. Modified to use the `caffe_copy` function instead.
Fix: style errors
Fix: number of axes of input blob must be >= 2
Use 1D blob, zero-D blob.
Rename: hw -> dim
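The PReLU activation from the paper cited above is f(x) = x for x > 0 and f(x) = a * x otherwise, with a learned (the layer learns it per channel, or shared). A minimal sketch with a scalar coefficient, not the layer's actual implementation:

```python
# Sketch of the PReLU forward pass: f(x_i) = x_i if x_i > 0 else a * x_i.
# Here `a` is a single scalar for illustration; the layer learns it.
def prelu(x, a):
    return [xi if xi > 0 else a * xi for xi in x]
```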
Evan Shelhamer [Wed, 11 Mar 2015 21:04:51 +0000 (14:04 -0700)]
[docs] open release of BVLC models for unrestricted use
See BVLC model license details on the model zoo page.
[Re-commit of 6b84206 which somehow went missing.]
Sergio Guadarrama [Tue, 10 Mar 2015 20:55:32 +0000 (13:55 -0700)]
Remove Gist from BVLC GoogleNet
Jeff Donahue [Mon, 9 Mar 2015 19:45:15 +0000 (12:45 -0700)]
Merge pull request #2076 from jeffdonahue/accuracy-layer-fixes
Fixup AccuracyLayer like SoftmaxLossLayer in #1970
Jeff Donahue [Mon, 9 Mar 2015 19:16:30 +0000 (12:16 -0700)]
AccuracyLayerTest: add tests for ignore_label and spatial axes
max argus [Sun, 22 Feb 2015 21:00:38 +0000 (21:00 +0000)]
AccuracyLayer: add ignore_label param
Jeff Donahue [Mon, 9 Mar 2015 06:00:10 +0000 (23:00 -0700)]
Fixup AccuracyLayer like SoftmaxLossLayer in #1970 -- fixes #2063
Evan Shelhamer [Mon, 9 Mar 2015 06:19:59 +0000 (23:19 -0700)]
Merge pull request #2058 from shelhamer/py-fixes
Pycaffe fixes and example reformation
Evan Shelhamer [Mon, 9 Mar 2015 05:39:56 +0000 (22:39 -0700)]
[example] pycaffe classification downloads the model automatically
Jeff Donahue [Mon, 9 Mar 2015 01:28:25 +0000 (18:28 -0700)]
Merge pull request #2066 from caotto/leveldb_include_update
Update leveldb include variable name to match FindLevelDB.cmake
Jeff Donahue [Mon, 9 Mar 2015 00:16:08 +0000 (17:16 -0700)]
SoftmaxLossLayer fix: canonicalize input axis
Evan Shelhamer [Sat, 7 Mar 2015 20:53:34 +0000 (12:53 -0800)]
[example] warm-start web demo
call forward to warm-start the demo so the first request isn't slow.
Evan Shelhamer [Sat, 7 Mar 2015 09:35:17 +0000 (01:35 -0800)]
[example] revise hdf5 classification
- add a little explanation
- solve from Python and the command line
- time scikit-learn and caffe
- fix test iterations (number of instances / batch size)
Evan Shelhamer [Sat, 7 Mar 2015 08:46:05 +0000 (00:46 -0800)]
[example] revise net surgery + add designer filters
- N-D blob parameter dimensions
- add filtering section to net surgery example
- make Gaussian blur and Sobel edge kernels
- alter biases
- do flat assignment instead of reshape and explain memory layout
- make dir for surgery files
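The flat-assignment point can be illustrated with NumPy (assuming NumPy is available, as the pycaffe examples do): `.flat` writes values into an existing blob in C (row-major) order, making the memory layout explicit instead of going through an intermediate reshape. Shapes and values here are illustrative.

```python
import numpy as np

# Write a Sobel edge kernel into a (1, 1, 3, 3) filter blob via flat
# assignment; the values land in row-major order.
filters = np.zeros((1, 1, 3, 3))
sobel = np.array([[-1., 0., 1.],
                  [-2., 0., 2.],
                  [-1., 0., 1.]])
filters[0, 0].flat = sobel  # fills the same cells as filters[0, 0] = sobel
```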
Evan Shelhamer [Sat, 7 Mar 2015 08:34:24 +0000 (00:34 -0800)]
[example] revise filter visualization
- download CaffeNet if it isn't there
- switch to caffe.Net
- reshape net for single input
- explain param, bias indexing
- update output for N-D blobs
Evan Shelhamer [Sat, 7 Mar 2015 03:36:29 +0000 (19:36 -0800)]
[pycaffe] align web demo with #1728 and #1902
Evan Shelhamer [Sat, 7 Mar 2015 03:34:32 +0000 (19:34 -0800)]
[pycaffe] classifier + detector only have one input
Evan Shelhamer [Sat, 7 Mar 2015 02:53:29 +0000 (18:53 -0800)]
[pycaffe] fix CPU / GPU switch in example scripts
ariandyy [Tue, 4 Nov 2014 15:48:02 +0000 (16:48 +0100)]
[pycaffe] make classify.py print input + output file paths
...and fix up detector quote convention.
Evan Shelhamer [Sat, 7 Mar 2015 00:23:43 +0000 (16:23 -0800)]
[pycaffe] no need to squeeze output after #1970
fix #2041 reported by @dgmp88
Evan Shelhamer [Sun, 8 Mar 2015 03:06:19 +0000 (19:06 -0800)]
Merge pull request #1457 from jyegerlehner/preserve-extracted-blob-shapes
extract_features preserves feature shape
J Yegerlehner [Wed, 19 Nov 2014 23:39:30 +0000 (17:39 -0600)]
extract_features preserves feature shape
Evan Shelhamer [Sun, 8 Mar 2015 03:00:22 +0000 (19:00 -0800)]
Merge pull request #1456 from jyegerlehner/load-weights-from-multiple-caffemodels
Load weights from multiple models by listing comma-separated caffemodels
as the `-weights` arg to the caffe command.
J Yegerlehner [Wed, 19 Nov 2014 23:30:46 +0000 (17:30 -0600)]
Load weights from multiple caffemodels.
Jonathan L Long [Sun, 8 Mar 2015 02:16:18 +0000 (18:16 -0800)]
whitespace in common.hpp
Jonathan L Long [Sun, 8 Mar 2015 00:54:30 +0000 (16:54 -0800)]
comment grammar in net.cpp
Charles Otto [Sat, 7 Mar 2015 23:39:10 +0000 (18:39 -0500)]
Update leveldb include variable name to match FindLevelDB.cmake
Christos Nikolaou [Wed, 5 Nov 2014 16:09:47 +0000 (18:09 +0200)]
Fix references to plural names in API documentation
Changed each "&" character that connected the class name and the "s"
character to "%", so that it is properly displayed.
Evan Shelhamer [Sat, 7 Mar 2015 06:17:47 +0000 (22:17 -0800)]
Merge pull request #1987 from tnarihi/fix-siam-example
[docs] fix siamese ipynb example
Evan Shelhamer [Sat, 7 Mar 2015 06:08:28 +0000 (22:08 -0800)]
Merge pull request #2059 from philkr/python_layer_cmake
[cmake] add Python layer
Evan Shelhamer [Sat, 7 Mar 2015 06:07:04 +0000 (22:07 -0800)]
Merge pull request #2056 from philkr/hdf5
[cmake] add missing hdf5 include directory
Evan Shelhamer [Sat, 7 Mar 2015 06:05:59 +0000 (22:05 -0800)]
Merge pull request #2055 from tishibas/iss1506
compute mean from lmdb for imagenet example
philkr [Sat, 7 Mar 2015 00:49:19 +0000 (16:49 -0800)]
Making python layer work with cmake
Evan Shelhamer [Sat, 7 Mar 2015 00:26:30 +0000 (16:26 -0800)]
Merge pull request #2010 from danielhamngren/update_python_requirements
[pycaffe] Add Pillow to requirements.txt
Evan Shelhamer [Sat, 7 Mar 2015 00:25:09 +0000 (16:25 -0800)]
Merge pull request #2037 from shelhamer/expose-solver-restore
Expose Solver::Restore() as public for restoring without solving
tishibas67 [Fri, 6 Mar 2015 14:48:27 +0000 (23:48 +0900)]
fix #1506
philkr [Thu, 5 Mar 2015 23:00:23 +0000 (15:00 -0800)]
Adding correct hdf5 path
Evan Shelhamer [Thu, 5 Mar 2015 21:55:01 +0000 (13:55 -0800)]
[docs] include boost-python in OSX pycaffe install
Jonathan L Long [Thu, 5 Mar 2015 07:00:45 +0000 (23:00 -0800)]
[pycaffe] add missing import sys
Evan Shelhamer [Thu, 5 Mar 2015 01:18:26 +0000 (17:18 -0800)]
expose Solver::Restore() as public and Solver.restore() in pycaffe
The solver can restore its state without entering the Solve() loop.
Evan Shelhamer [Thu, 5 Mar 2015 01:13:38 +0000 (17:13 -0800)]
[pycaffe] check mean channels for transformation
follow-up to #2031: check that the input and mean channels are
compatible in the broadcast channels case.
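The check can be sketched in pure Python: the mean must either supply one value per channel (the broadcast case) or match the input shape exactly. Names and signature here are illustrative, not pycaffe's API.

```python
# Sketch of the mean/input compatibility check described above.
def mean_is_compatible(mean_shape, input_shape):
    if mean_shape == (input_shape[0],):
        return True  # per-channel mean, broadcast over H x W
    return mean_shape == input_shape
```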
Jeff Donahue [Wed, 4 Mar 2015 23:53:17 +0000 (15:53 -0800)]
Merge pull request #2035 from jeffdonahue/include-climits
add <climits> for INT_MAX
Jeff Donahue [Wed, 4 Mar 2015 23:22:46 +0000 (15:22 -0800)]
include/caffe/common.hpp: add <climits> for INT_MAX (now in blob.hpp)
Jeff Donahue [Wed, 4 Mar 2015 19:17:51 +0000 (11:17 -0800)]
fix comment I forgot about from @shelhamer's review of #1970
Evan Shelhamer [Wed, 4 Mar 2015 17:43:15 +0000 (09:43 -0800)]
Merge pull request #2031 from NVIDIA/image_mean
Check shape of input mean
Luke Yeager [Mon, 23 Feb 2015 17:18:31 +0000 (09:18 -0800)]
Add error checking for image mean
When setting the mean, assert that it is either one pixel or an array with
shape equal to the input data size.
Jon Long [Wed, 4 Mar 2015 11:33:13 +0000 (03:33 -0800)]
Merge pull request #1966 from philkr/python_fixes
cmake and python3 bugfixes for #1939 and #1923
Evan Shelhamer [Wed, 4 Mar 2015 06:27:55 +0000 (22:27 -0800)]
Merge pull request #1970 from jeffdonahue/tensor-blob
Blobs are N-D arrays (for N not necessarily equal to 4)
Jonathan L Long [Mon, 2 Mar 2015 23:54:11 +0000 (15:54 -0800)]
[pytest] use non-4d blobs in test_python_layer
Jonathan L Long [Mon, 2 Mar 2015 23:27:45 +0000 (15:27 -0800)]
[pycaffe] expose Blob.reshape as *args function
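The *args calling convention this change exposes can be shown with a minimal stand-in class (an illustration, not pycaffe's actual Blob):

```python
class Blob:
    """Toy stand-in showing the *args reshape convention."""
    def __init__(self, shape=()):
        self.shape = tuple(shape)

    def reshape(self, *shape):
        # dimensions arrive as positional arguments: b.reshape(2, 3, 4)
        self.shape = shape
```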
Jeff Donahue [Sat, 31 Jan 2015 07:16:44 +0000 (23:16 -0800)]
Add option not to reshape to Blob::FromProto; use when loading Blobs
from saved NetParameter
We want to keep the param Blob shape the layer has set, not necessarily
adopt the one from the saved net (e.g. keep the new 1D bias shape
rather than take the (1 x 1 x 1 x D) shape from a legacy net).
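The option's effect can be sketched as follows: keep the layer's own shape unless reshaping is requested, requiring only that element counts agree (e.g. a new (D,) bias versus a legacy (1, 1, 1, D) blob). The function name is illustrative, not Caffe's API.

```python
# Sketch of Blob::FromProto's no-reshape option: keep the layer's shape,
# checking only that the saved blob has the same number of elements.
def restored_shape(layer_shape, saved_shape, reshape=False):
    def count(shape):
        n = 1
        for d in shape:
            n *= d
        return n
    if reshape:
        return saved_shape
    assert count(layer_shape) == count(saved_shape), "element count mismatch"
    return layer_shape
```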
Jeff Donahue [Fri, 2 Jan 2015 01:32:38 +0000 (17:32 -0800)]
PyBlobs support generalized axes
Jeff Donahue [Fri, 16 Jan 2015 03:50:42 +0000 (19:50 -0800)]
Add CHECK_EQ(4, ...)s to "vision layers" to enforce that the
num/channels/height/width indexing is valid.
Jeff Donahue [Thu, 1 Jan 2015 00:06:46 +0000 (16:06 -0800)]
DummyDataLayer outputs blobs of arbitrary shape
Jeff Donahue [Sun, 30 Nov 2014 02:00:44 +0000 (18:00 -0800)]
EuclideanLossLayer: generalized Blob axes
Jeff Donahue [Wed, 26 Nov 2014 20:57:15 +0000 (12:57 -0800)]
WindowDataLayer outputs 1D labels
Jeff Donahue [Wed, 26 Nov 2014 20:56:14 +0000 (12:56 -0800)]
ImageDataLayer outputs 1D labels
Jeff Donahue [Wed, 26 Nov 2014 13:42:50 +0000 (05:42 -0800)]
MemoryDataLayer outputs 1D labels
Jeff Donahue [Wed, 26 Nov 2014 13:34:47 +0000 (05:34 -0800)]
DataLayer outputs 1D labels
Jeff Donahue [Wed, 26 Nov 2014 13:42:11 +0000 (05:42 -0800)]
HDF5DataLayer shapes output according to HDF5 shape
Jeff Donahue [Wed, 26 Nov 2014 13:02:15 +0000 (05:02 -0800)]
SplitLayer: change Reshape(n,h,c,w) to ReshapeLike(...)
Jeff Donahue [Sat, 31 Jan 2015 07:22:26 +0000 (23:22 -0800)]
SoftmaxLossLayer generalized like SoftmaxLayer
Jeff Donahue [Tue, 10 Feb 2015 02:12:54 +0000 (18:12 -0800)]
CuDNNSoftmaxLayer: generalized Blob axes
Jeff Donahue [Sun, 15 Feb 2015 21:26:36 +0000 (13:26 -0800)]
SoftmaxLayer: generalized Blob axes
Jeff Donahue [Wed, 26 Nov 2014 11:22:59 +0000 (03:22 -0800)]
SliceLayer: generalized Blob axes
Jeff Donahue [Wed, 26 Nov 2014 08:03:36 +0000 (00:03 -0800)]
ConcatLayer: generalized Blob axes
Jeff Donahue [Wed, 26 Nov 2014 10:24:41 +0000 (02:24 -0800)]
TestConcatLayer: add forward/gradient tests for concatenation along num
Jeff Donahue [Wed, 26 Nov 2014 10:12:09 +0000 (02:12 -0800)]
TestConcatLayer: fix style errors
Jeff Donahue [Wed, 26 Nov 2014 11:23:42 +0000 (03:23 -0800)]
common_layers.hpp: remove unused "Blob col_bob_"
Jeff Donahue [Wed, 26 Nov 2014 08:11:06 +0000 (00:11 -0800)]
FlattenLayer: generalized Blob axes