Jeff Donahue [Sun, 8 Jun 2014 23:04:41 +0000 (16:04 -0700)]
add DummyDataLayer tests
Jeff Donahue [Sun, 8 Jun 2014 21:58:53 +0000 (14:58 -0700)]
add DummyDataLayer
Jeff Donahue [Mon, 9 Jun 2014 05:44:32 +0000 (22:44 -0700)]
fix ArgMaxLayer bug in num bottom blobs decl. pointed out by @sguada
Evan Shelhamer [Mon, 9 Jun 2014 04:11:24 +0000 (21:11 -0700)]
Merge pull request #479 from jeffdonahue/declare-layer-names-and-numblobs
Layers declare their types and number of bottom/top blobs
Jeff Donahue [Sun, 8 Jun 2014 19:45:17 +0000 (12:45 -0700)]
move MemoryDataLayer decl. from vision_layers.hpp to data_layers.hpp
Jeff Donahue [Fri, 6 Jun 2014 22:40:26 +0000 (15:40 -0700)]
layers declare their names and number of input/output blobs, and don't
check top/bottom blob counts explicitly in SetUp; instead call base
Layer::SetUp.
Jeff Donahue [Mon, 9 Jun 2014 01:08:24 +0000 (18:08 -0700)]
fix Makefile build dir link upgrade bug reported by @jamt9000
Jeff Donahue [Sun, 8 Jun 2014 19:03:22 +0000 (12:03 -0700)]
Merge pull request #473 from shelhamer/pad-max-pooling
Padding for Max Pooling
Jeff Donahue [Fri, 6 Jun 2014 20:59:19 +0000 (13:59 -0700)]
check the last pooling in padding and add padded max pooling test
[Adapted by @shelhamer from a commit by @jeffdonahue.]
Evan Shelhamer [Thu, 5 Jun 2014 17:53:42 +0000 (10:53 -0700)]
padding for max pooling
Max pooling pads by -inf if the padding parameter is set.
Padding for pooling, like padding for convolution, can preserve the
dimensions of the bottom at the top. By setting the padding to
floor(kernel_size / 2) the top output is the "same" instead of the
"valid" part of the bottom input.
Evan Shelhamer [Sun, 8 Jun 2014 05:00:05 +0000 (22:00 -0700)]
Merge pull request #475 from jeffdonahue/tanh-fixes
Make TanH cleaner, more efficient, and possible to use in-place
Jeff Donahue [Fri, 6 Jun 2014 00:20:30 +0000 (17:20 -0700)]
Make TanH cleaner, more efficient, and possible to use in-place
Jeff Donahue [Fri, 6 Jun 2014 01:06:01 +0000 (18:06 -0700)]
Merge pull request #471 from jeffdonahue/debug-release-build-dirs
Compile debug/release into separate directories
Evan Shelhamer [Tue, 3 Jun 2014 13:21:22 +0000 (09:21 -0400)]
Merge pull request #466 from robcurrie/dev
Update docs on building boost on OSX for the python wrappers
Jeff Donahue [Tue, 3 Jun 2014 06:57:37 +0000 (23:57 -0700)]
update .gitignore appropriately for separate debug/release build dirs
Jeff Donahue [Tue, 3 Jun 2014 06:47:09 +0000 (23:47 -0700)]
compile debug/release into separate directories so you don't have to
rebuild the whole thing to switch back and forth
Rob Currie [Thu, 29 May 2014 21:46:29 +0000 (14:46 -0700)]
Update docs on building boost on OSX for the python wrappers
Evan Shelhamer [Thu, 29 May 2014 00:18:22 +0000 (17:18 -0700)]
fix OSX 10.9 homebrew CXX doc
clang++ and not clang as accidentally committed in 2dcbcd9.
Sergio Guadarrama [Tue, 27 May 2014 19:06:57 +0000 (12:06 -0700)]
Merge pull request #422 from sguada/threshold_layer
Threshold layer to binarize features
Added GPU code and tested
Sergio [Tue, 27 May 2014 18:54:53 +0000 (11:54 -0700)]
Uncomment GPU test cases; fixed ThresholdLayer.cu
Sergio [Tue, 27 May 2014 18:44:17 +0000 (11:44 -0700)]
Comment out GPU test cases
Sergio [Tue, 27 May 2014 18:06:56 +0000 (11:06 -0700)]
Make lint happy
Sergio [Tue, 27 May 2014 18:02:20 +0000 (11:02 -0700)]
Fixed call to ThresholdForward in ThresholdLayer.cu
Sergio [Tue, 27 May 2014 18:01:19 +0000 (11:01 -0700)]
Fixed type in ThresholdLayer.cu
Sergio [Tue, 27 May 2014 17:59:36 +0000 (10:59 -0700)]
Fixed typo in Threshold Layer definition
Sergio [Tue, 27 May 2014 17:57:26 +0000 (10:57 -0700)]
Added ForwardGPU to ThresholdLayer and to the tests
Sergio [Tue, 27 May 2014 17:54:21 +0000 (10:54 -0700)]
Added Threshold layer to neuron_layers.hpp
Sergio Guadarrama [Fri, 16 May 2014 01:58:04 +0000 (18:58 -0700)]
Corrected conditions in test_threshold
Sergio Guadarrama [Fri, 16 May 2014 01:47:57 +0000 (18:47 -0700)]
Fix typo in test_threshold ThresholdParameter
Sergio Guadarrama [Fri, 16 May 2014 01:42:50 +0000 (18:42 -0700)]
Added the code for threshold_layer to the repo
Sergio Guadarrama [Fri, 16 May 2014 00:14:00 +0000 (17:14 -0700)]
Added NeuronLayer<Dtype>::SetUp(bottom, top) to ThresholdLayer
Sergio Guadarrama [Fri, 16 May 2014 00:01:50 +0000 (17:01 -0700)]
Added threshold setting test
Sergio Guadarrama [Thu, 15 May 2014 23:58:27 +0000 (16:58 -0700)]
Fixed name of blob_bottom_
Sergio Guadarrama [Thu, 15 May 2014 23:57:50 +0000 (16:57 -0700)]
Fixed name of threshold_ var
Sergio [Tue, 27 May 2014 17:32:11 +0000 (10:32 -0700)]
Fixed ThresholdParam
Conflicts:
src/caffe/proto/caffe.proto
Sergio [Thu, 15 May 2014 16:30:07 +0000 (09:30 -0700)]
Test for Threshold layer
Evan Shelhamer [Tue, 27 May 2014 04:58:34 +0000 (21:58 -0700)]
Merge pull request #459 from shelhamer/python-net-preprocessing-members
Make net preprocessing options belong to instantiated net and not class
Evan Shelhamer [Tue, 27 May 2014 04:50:39 +0000 (21:50 -0700)]
caffe.Net preprocessing members belong to object, not class
Evan Shelhamer [Mon, 26 May 2014 17:50:09 +0000 (10:50 -0700)]
Merge pull request #445 from jeffdonahue/convert_imageset_resize_option
Optionally resize an image set to canonical dimensions when converting.
Do this by default for creating the imagenet train + val leveldbs.
Evan Shelhamer [Mon, 26 May 2014 17:50:00 +0000 (10:50 -0700)]
convert imageset comment fixup
Evan Shelhamer [Mon, 26 May 2014 16:54:20 +0000 (09:54 -0700)]
Merge pull request #456 from longjon/spurious-ldflags
Don't pass LDFLAGS when only performing compilation (-c)
Jonathan L Long [Mon, 26 May 2014 08:34:32 +0000 (01:34 -0700)]
don't pass LDFLAGS when only compiling
Evan Shelhamer [Sun, 25 May 2014 23:19:14 +0000 (16:19 -0700)]
10.9 install doc formatting
Evan Shelhamer [Sun, 25 May 2014 04:41:16 +0000 (21:41 -0700)]
Back-merge recent fixes from master to dev
fix OSX 10.9 compiler/stdlib override for latest homebrew
follow-up on #443 to invert k channels (instead of 3)
Correctly invert the swapping of colour channels
link presentation on dropbox (was self-hosted during a dropbox issue)
link to demo
fix draw_net python script
release v1 model defs + weights
point out @niuzhiheng's work on the Windows port
fix test_all path in docs
link canonical bvlc site
fix detection notebook link
Evan Shelhamer [Sun, 25 May 2014 04:06:37 +0000 (21:06 -0700)]
fix OSX 10.9 compiler/stdlib override for latest homebrew
Sergio Guadarrama [Sun, 25 May 2014 02:39:58 +0000 (19:39 -0700)]
Merge pull request #448 from jeffdonahue/sguada-fix_maxpooling
Finish up max pooling with a mask from @sguada
Tested
Jeff Donahue [Sun, 25 May 2014 02:02:51 +0000 (19:02 -0700)]
merge caffe_set definitions; define for int as well
Jeff Donahue [Sun, 25 May 2014 01:40:03 +0000 (18:40 -0700)]
add tests for maxpooling layer forward, and for maxpooling with top mask
Jeff Donahue [Sun, 25 May 2014 01:09:05 +0000 (18:09 -0700)]
optionally output the mask to a top blob instead of storing internally
Jeff Donahue [Sat, 24 May 2014 23:59:15 +0000 (16:59 -0700)]
make a Blob<unsigned int> and use in dropout layer
Jeff Donahue [Sat, 24 May 2014 23:27:38 +0000 (16:27 -0700)]
use a Blob<int> instead of a SyncedMemory to store max_idx_
Jeff Donahue [Sat, 24 May 2014 23:14:44 +0000 (16:14 -0700)]
bugfix: setting count to the top count in backward doesn't process all
of the bottom (the bottom is larger than the top for any nontrivial
pool size > 1)
Jeff Donahue [Sat, 24 May 2014 23:14:04 +0000 (16:14 -0700)]
mask should be const in backward pass
Jeff Donahue [Sat, 24 May 2014 23:10:23 +0000 (16:10 -0700)]
remove commented out code
Jeff Donahue [Sat, 24 May 2014 22:29:35 +0000 (15:29 -0700)]
lint and make compilable (adding static_casts found a couple of bugs at
compile time)
Sergio [Mon, 14 Apr 2014 04:03:06 +0000 (21:03 -0700)]
Adapted to V1 proto definition; tests don't pass
Sergio [Mon, 14 Apr 2014 03:29:26 +0000 (20:29 -0700)]
Commented out AtomicAdd; back to loop in GPU MaxPoolBackward
Sergio [Mon, 14 Apr 2014 03:27:16 +0000 (20:27 -0700)]
Attempted to use AtomicAdd, but it seems slower
Sergio [Wed, 26 Feb 2014 01:29:43 +0000 (17:29 -0800)]
Added test for maxpool layer followed by dropout
Sergio [Mon, 14 Apr 2014 03:21:15 +0000 (20:21 -0700)]
Use loops in GPU again to avoid over-writing of bottom_diff
Sergio [Mon, 14 Apr 2014 03:14:43 +0000 (20:14 -0700)]
Fixed parameter order
Sergio [Mon, 14 Apr 2014 03:13:05 +0000 (20:13 -0700)]
Cleaned prints from test_pooling_layer.cpp
Sergio [Mon, 14 Apr 2014 03:11:49 +0000 (20:11 -0700)]
Set bottom_diff to 0 and remove Async memcopy
Sergio [Mon, 14 Apr 2014 03:10:10 +0000 (20:10 -0700)]
Remove top_data from backward Max Pooling
Sergio [Mon, 14 Apr 2014 03:06:38 +0000 (20:06 -0700)]
Use mask_idx to compute backward Max Pooling
Sergio [Mon, 14 Apr 2014 03:02:34 +0000 (20:02 -0700)]
Added test for Pooling layer GPU
Sergio [Mon, 14 Apr 2014 02:58:05 +0000 (19:58 -0700)]
Added max_idx to Pooling layer GPU
Sergio Guadarrama [Tue, 25 Feb 2014 01:36:17 +0000 (17:36 -0800)]
Default mask idx is -1
Sergio [Mon, 14 Apr 2014 02:49:02 +0000 (19:49 -0700)]
Added max_idx to Pooling layer CPU
Jeff Donahue [Fri, 23 May 2014 21:57:02 +0000 (14:57 -0700)]
add convert_imageset option to resize images; use in
convert_imageset.cpp and document
Evan Shelhamer [Fri, 23 May 2014 16:13:44 +0000 (09:13 -0700)]
follow-up on #443 to invert k channels (instead of 3)
James Thewlis [Fri, 23 May 2014 14:59:52 +0000 (15:59 +0100)]
Correctly invert the swapping of colour channels
In the 'deprocess' method, get back the image with the original channel order
by inverting the original transform, rather than reversing the tuple, which is
incorrect.
Evan Shelhamer [Fri, 23 May 2014 16:02:43 +0000 (09:02 -0700)]
Merge pull request #443 from jamt9000/correct-deprocess
Correctly invert the swapping of colour channels in Python API 'deprocess'
Evan Shelhamer [Fri, 23 May 2014 04:27:54 +0000 (21:27 -0700)]
Merge pull request #433 from shelhamer/eltwise
Elementwise layer takes sum or product; caffe_gpu_{add,sub}
Evan Shelhamer [Fri, 23 May 2014 04:24:16 +0000 (21:24 -0700)]
comment, lint
Evan Shelhamer [Fri, 23 May 2014 03:12:11 +0000 (20:12 -0700)]
weight elementwise sum with per-blob coefficients
Evan Shelhamer [Fri, 23 May 2014 01:24:09 +0000 (18:24 -0700)]
link presentation on dropbox (was self-hosted during a dropbox issue)
Sergey Karayev [Fri, 23 May 2014 01:18:25 +0000 (18:18 -0700)]
link to demo
Evan Shelhamer [Fri, 23 May 2014 00:41:24 +0000 (17:41 -0700)]
make sum the default eltwise operation
Evan Shelhamer [Thu, 22 May 2014 09:13:04 +0000 (02:13 -0700)]
fix layer name in logging
Evan Shelhamer [Thu, 22 May 2014 08:04:36 +0000 (01:04 -0700)]
Merge pull request #435 from shelhamer/v1-models
Release v1 model defs + weights
Evan Shelhamer [Thu, 22 May 2014 08:02:56 +0000 (01:02 -0700)]
fix draw_net python script
include caffe.draw for drawing functions.
Evan Shelhamer [Thu, 22 May 2014 07:56:35 +0000 (00:56 -0700)]
release v1 model defs + weights
- Caffe reference ImageNet model
- AlexNet
Note that one can upgrade the weights locally with
upgrade_net_proto_binary.bin to avoid re-downloading.
Evan Shelhamer [Thu, 22 May 2014 07:50:04 +0000 (00:50 -0700)]
Merge pull request #434 from shelhamer/little-cat
Reduce example image size
Evan Shelhamer [Thu, 22 May 2014 07:30:12 +0000 (00:30 -0700)]
reduce example image size
Evan Shelhamer [Thu, 22 May 2014 06:43:49 +0000 (23:43 -0700)]
point out @niuzhiheng's work on the Windows port
Evan Shelhamer [Thu, 22 May 2014 02:47:15 +0000 (19:47 -0700)]
add EltwiseLayer docstring
Evan Shelhamer [Thu, 22 May 2014 02:31:47 +0000 (19:31 -0700)]
Elementwise layer learns summation
Evan Shelhamer [Thu, 22 May 2014 01:57:33 +0000 (18:57 -0700)]
add caffe_gpu_add() and caffe_gpu_sub()
Evan Shelhamer [Thu, 22 May 2014 01:38:19 +0000 (18:38 -0700)]
EltwiseProductLayer -> EltwiseLayer for generality
Reproduce elementwise product layer in more generality.
Add elementwise operation parameter.
Prepare for elementwise sum operation choice.
Evan Shelhamer [Wed, 21 May 2014 17:52:45 +0000 (10:52 -0700)]
fix test_all path in docs
Evan Shelhamer [Wed, 21 May 2014 05:33:29 +0000 (22:33 -0700)]
Revert "setting canonical random seed"
1701 is the canonical random seed, and as this test makes only one call
for seeding there's no need for a member var.
Sergey Karayev [Wed, 21 May 2014 04:49:04 +0000 (21:49 -0700)]
Merge pull request #421 from sguada/argmax_layer
Sergey Karayev [Wed, 21 May 2014 04:48:23 +0000 (21:48 -0700)]
setting canonical random seed
Sergey Karayev [Wed, 21 May 2014 04:32:19 +0000 (21:32 -0700)]
Fixed lint errors due to ArgmaxLayer
Sergey Karayev [Wed, 21 May 2014 04:32:07 +0000 (21:32 -0700)]
Documented ArgMax layer in vision_layers.hpp
Sergey Karayev [Wed, 21 May 2014 04:24:03 +0000 (21:24 -0700)]
corrected the caffe.proto ids