Sergio [Mon, 14 Apr 2014 04:03:06 +0000 (21:03 -0700)]
Adapted to V1 proto definition; tests don't pass
Sergio [Mon, 14 Apr 2014 03:29:26 +0000 (20:29 -0700)]
Commented out Atomic Add, back to loop in GPU MaxPoolBackward
Sergio [Mon, 14 Apr 2014 03:27:16 +0000 (20:27 -0700)]
Attempted to use atomicAdd, but it seems slower
Sergio [Wed, 26 Feb 2014 01:29:43 +0000 (17:29 -0800)]
Added test for maxpool layer followed by dropout
Sergio [Mon, 14 Apr 2014 03:21:15 +0000 (20:21 -0700)]
Use loops in GPU again to avoid overwriting of bottom_diff
Sergio [Mon, 14 Apr 2014 03:14:43 +0000 (20:14 -0700)]
Fixed parameter order
Sergio [Mon, 14 Apr 2014 03:13:05 +0000 (20:13 -0700)]
Cleaned prints from test_pooling_layer.cpp
Sergio [Mon, 14 Apr 2014 03:11:49 +0000 (20:11 -0700)]
Set bottom_diff to 0 and remove Async memcopy
Sergio [Mon, 14 Apr 2014 03:10:10 +0000 (20:10 -0700)]
Remove top_data from backward Max Pooling
Sergio [Mon, 14 Apr 2014 03:06:38 +0000 (20:06 -0700)]
Use mask_idx to compute backward Max Pooling
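The mask-based backward pass these commits describe can be sketched in plain Python (a hypothetical 1-D illustration with made-up names, not Caffe's actual kernels): the forward pass records the flat index of each window's max, and the backward pass scatters top_diff into bottom_diff through those indices. Overlapping windows accumulate into the same bottom index, which is why a naive parallel scatter can clobber bottom_diff and why the GPU code fell back to loops rather than atomics.

```python
def max_pool_forward_1d(bottom, kernel, stride):
    """Max-pool a 1-D list, recording the argmax index per window (the mask)."""
    n_out = (len(bottom) - kernel) // stride + 1
    top, mask = [], []
    for i in range(n_out):
        start = i * stride
        window = bottom[start:start + kernel]
        best = start + window.index(max(window))
        mask.append(best)
        top.append(bottom[best])
    return top, mask

def max_pool_backward_1d(top_diff, mask, bottom_len):
    """Scatter top_diff into bottom_diff via the stored mask.
    Overlapping windows add into the same bottom index, so writes
    must accumulate (loop or atomicAdd) rather than overwrite."""
    bottom_diff = [0.0] * bottom_len  # bottom_diff zeroed first
    for i, idx in enumerate(mask):
        bottom_diff[idx] += top_diff[i]
    return bottom_diff
```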
Sergio [Mon, 14 Apr 2014 03:02:34 +0000 (20:02 -0700)]
Added test for Pooling layer GPU
Sergio [Mon, 14 Apr 2014 02:58:05 +0000 (19:58 -0700)]
Added max_idx to Pooling layer GPU
Sergio Guadarrama [Tue, 25 Feb 2014 01:36:17 +0000 (17:36 -0800)]
Default mask idx is -1
Sergio [Mon, 14 Apr 2014 02:49:02 +0000 (19:49 -0700)]
Added max_idx to Pooling layer CPU
Evan Shelhamer [Fri, 23 May 2014 16:13:44 +0000 (09:13 -0700)]
follow-up on #443 to invert k channels (instead of 3)
Evan Shelhamer [Fri, 23 May 2014 16:02:43 +0000 (09:02 -0700)]
Merge pull request #443 from jamt9000/correct-deprocess
Correctly invert the swapping of colour channels in Python API 'deprocess'
James Thewlis [Fri, 23 May 2014 14:59:52 +0000 (15:59 +0100)]
Correctly invert the swapping of colour channels
In the 'deprocess' method, recover the image with the original channel order
by inverting the original transform rather than reversing the tuple, which is
incorrect.
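The distinction this fix draws can be sketched with hypothetical helper names (not pycaffe's actual API): reversing the swap tuple only happens to work for symmetric swaps like (2, 1, 0); the true inverse of a channel permutation puts each destination index back at its source position.

```python
def apply_swap(channels, swap):
    """Reorder channels: output channel i is input channel swap[i]."""
    return [channels[s] for s in swap]

def invert_channel_swap(swap):
    """Return the permutation that undoes `swap`.
    For each output position `dst` drawing from `src`, the inverse
    must send `src` back to `dst` -- i.e. the argsort of the tuple."""
    inverse = [0] * len(swap)
    for dst, src in enumerate(swap):
        inverse[src] = dst
    return tuple(inverse)
```

For an asymmetric swap such as (1, 2, 0), the reversed tuple (0, 2, 1) does not restore the original order, while the computed inverse (2, 0, 1) does.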
Evan Shelhamer [Fri, 23 May 2014 04:27:54 +0000 (21:27 -0700)]
Merge pull request #433 from shelhamer/eltwise
Elementwise layer takes sum or product; caffe_gpu_{add,sub}
Evan Shelhamer [Fri, 23 May 2014 04:24:16 +0000 (21:24 -0700)]
comment, lint
Evan Shelhamer [Fri, 23 May 2014 03:12:11 +0000 (20:12 -0700)]
weight elementwise sum with per-blob coefficients
Evan Shelhamer [Fri, 23 May 2014 00:41:24 +0000 (17:41 -0700)]
make sum the default eltwise operation
Evan Shelhamer [Thu, 22 May 2014 09:13:04 +0000 (02:13 -0700)]
fix layer name in logging
Evan Shelhamer [Thu, 22 May 2014 07:50:04 +0000 (00:50 -0700)]
Merge pull request #434 from shelhamer/little-cat
Reduce example image size
Evan Shelhamer [Thu, 22 May 2014 07:30:12 +0000 (00:30 -0700)]
reduce example image size
Evan Shelhamer [Thu, 22 May 2014 02:47:15 +0000 (19:47 -0700)]
add EltwiseLayer docstring
Evan Shelhamer [Thu, 22 May 2014 02:31:47 +0000 (19:31 -0700)]
Elementwise layer learns summation
Evan Shelhamer [Thu, 22 May 2014 01:57:33 +0000 (18:57 -0700)]
add caffe_gpu_add() and caffe_gpu_sub()
Evan Shelhamer [Thu, 22 May 2014 01:38:19 +0000 (18:38 -0700)]
EltwiseProductLayer -> EltwiseLayer for generality
Reproduce elementwise product layer in more generality.
Add elementwise operation parameter.
Prepare for elementwise sum operation choice.
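The generalized layer these commits build up can be sketched as follows (a minimal illustration with an assumed signature, not Caffe's actual EltwiseLayer): an operation parameter selects sum or product, sum is the default, and per-blob coefficients weight the summands.

```python
def eltwise(bottoms, operation="sum", coeffs=None):
    """Combine equal-length bottom blobs (flattened to lists here) elementwise.
    operation: "sum" (the default) or "prod"; coeffs weight each blob for sum."""
    if coeffs is None:
        coeffs = [1.0] * len(bottoms)  # unweighted sum by default
    n = len(bottoms[0])
    if operation == "sum":
        return [sum(c * b[i] for c, b in zip(coeffs, bottoms)) for i in range(n)]
    if operation == "prod":
        out = [1.0] * n
        for b in bottoms:
            out = [o * x for o, x in zip(out, b)]
        return out
    raise ValueError("unknown eltwise operation: %s" % operation)
```

Coefficients of 1 and -1 make the sum a subtraction, which is one reason a weighted sum is strictly more general than the old product-only layer.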
Evan Shelhamer [Wed, 21 May 2014 05:33:29 +0000 (22:33 -0700)]
Revert "setting canonical random seed"
1701 is the canonical random seed, and as this test makes only one call
for seeding there's no need for a member var.
Sergey Karayev [Wed, 21 May 2014 04:49:04 +0000 (21:49 -0700)]
Merge pull request #421 from sguada/argmax_layer
Sergey Karayev [Wed, 21 May 2014 04:48:23 +0000 (21:48 -0700)]
setting canonical random seed
Sergey Karayev [Wed, 21 May 2014 04:32:19 +0000 (21:32 -0700)]
Fixed lint errors due to ArgmaxLayer
Sergey Karayev [Wed, 21 May 2014 04:32:07 +0000 (21:32 -0700)]
Documented ArgMax layer in vision_layers.hpp
Sergey Karayev [Wed, 21 May 2014 04:24:03 +0000 (21:24 -0700)]
corrected the caffe.proto ids
Sergio Guadarrama [Fri, 16 May 2014 01:02:08 +0000 (18:02 -0700)]
Change ArgMaxLayerParam to ArgMaxParam for consistency
Sergio Guadarrama [Fri, 16 May 2014 01:01:04 +0000 (18:01 -0700)]
Change ThresholdLayer to ArgMaxLayer in test_argmax
Sergio Guadarrama [Thu, 15 May 2014 23:55:45 +0000 (16:55 -0700)]
Fixed name of blob_bottom_
Sergio Guadarrama [Thu, 15 May 2014 23:54:21 +0000 (16:54 -0700)]
Fixed name of ArgMaxLayerParameter
Sergio Guadarrama [Thu, 15 May 2014 23:43:01 +0000 (16:43 -0700)]
Added missing ;
Sergio Guadarrama [Thu, 15 May 2014 23:09:07 +0000 (16:09 -0700)]
Added FLT_MAX to argmax layer
Sergio Guadarrama [Fri, 16 May 2014 00:42:38 +0000 (17:42 -0700)]
Fix types of ArgMax Layers params
Conflicts:
include/caffe/vision_layers.hpp
src/caffe/proto/caffe.proto
Sergio Guadarrama [Fri, 16 May 2014 00:39:52 +0000 (17:39 -0700)]
Fixed numbers in proto and name of ArgMaxParameter
Conflicts:
src/caffe/proto/caffe.proto
Sergio [Thu, 15 May 2014 16:49:36 +0000 (09:49 -0700)]
Added Test for ArgMax Layer
Sergio Guadarrama [Fri, 16 May 2014 00:38:03 +0000 (17:38 -0700)]
Added ArgMax Layer
Conflicts:
src/caffe/proto/caffe.proto
Evan Shelhamer [Tue, 20 May 2014 22:03:54 +0000 (15:03 -0700)]
Merge pull request #404 from jeffdonahue/net-param-in-solver
Specify net params in solver; log {Net,Solver} parameters; multiple test nets
Evan Shelhamer [Tue, 20 May 2014 21:44:47 +0000 (14:44 -0700)]
link canonical bvlc site
Evan Shelhamer [Tue, 20 May 2014 21:42:37 +0000 (14:42 -0700)]
fix detection notebook link
Evan Shelhamer [Tue, 20 May 2014 21:20:15 +0000 (14:20 -0700)]
Merge pull request #429 from shelhamer/next
Next: 0.999
Evan Shelhamer [Tue, 20 May 2014 19:44:51 +0000 (12:44 -0700)]
Back-merge changes in master
* master:
bundle presentation in gh-pages for now...
fix typo pointed out by @yinxusen
note support for non-MKL installation in dev
include pretrained snapshot and performance details
Document AlexNet model, include download script
define AlexNet architecture
polished ignore
Evan Shelhamer [Tue, 20 May 2014 19:20:00 +0000 (12:20 -0700)]
Merge pull request #311 from shelhamer/python-fixes
Improve python wrapper
Evan Shelhamer [Tue, 20 May 2014 08:04:17 +0000 (01:04 -0700)]
update notebook examples with new wrapper usage, re-organize
Evan Shelhamer [Tue, 20 May 2014 18:41:42 +0000 (11:41 -0700)]
preprocess single inputs instead of lists
For compositionality and to match caller expectations.
Evan Shelhamer [Tue, 20 May 2014 06:50:15 +0000 (23:50 -0700)]
windowed detection in python
Evan Shelhamer [Tue, 20 May 2014 05:55:50 +0000 (22:55 -0700)]
squash infuriating loop assignment bug in batching
Evan Shelhamer [Mon, 19 May 2014 22:31:49 +0000 (15:31 -0700)]
image classification in python
Evan Shelhamer [Mon, 19 May 2014 01:25:18 +0000 (18:25 -0700)]
fix padding for the last batch
Evan Shelhamer [Mon, 19 May 2014 00:14:53 +0000 (17:14 -0700)]
split drawnet into module code and script
Don't run scripts in the module dir to avoid import collisions between
io and caffe.io.
Evan Shelhamer [Mon, 19 May 2014 00:13:05 +0000 (17:13 -0700)]
add caffe.io submodule for conversions, image loading and resizing
Evan Shelhamer [Mon, 19 May 2014 00:11:38 +0000 (17:11 -0700)]
fix python mean subtraction
Sergey Karayev [Mon, 19 May 2014 22:50:33 +0000 (15:50 -0700)]
Merge pull request #376 from sergeyk/layer_reorg
Layer definitions and declarations re-organization and documentation
Sergey Karayev [Mon, 19 May 2014 18:11:37 +0000 (11:11 -0700)]
Incorporated Evan’s comments for neuron layers
Sergey Karayev [Mon, 19 May 2014 17:44:21 +0000 (10:44 -0700)]
Cosmetic change in ConcatLayer
Sergey Karayev [Mon, 19 May 2014 17:43:21 +0000 (10:43 -0700)]
Lil’ more docstring, and cosmetic change in EuclideanLossLayer
Sergey Karayev [Tue, 29 Apr 2014 07:21:15 +0000 (00:21 -0700)]
fwd/back math docs for neuron layers
Evan Shelhamer [Fri, 16 May 2014 23:03:55 +0000 (16:03 -0700)]
drop cute names in favor of Net.{pre,de}process() for input formatting
...and refer to inputs as inputs and not images since general vectors
and matrices are perfectly fine.
Evan Shelhamer [Fri, 16 May 2014 01:52:07 +0000 (18:52 -0700)]
Net.caffeinate() and Net.decaffeinate() format/unformat lists
Evan Shelhamer [Fri, 16 May 2014 01:14:45 +0000 (18:14 -0700)]
take blob args as ndarrays and assign on the python side
Take blob args and give blob returns as single ndarrays instead of lists
of arrays.
Assign the net blobs and diffs as needed on the python side, which
reduces copies and simplifies the C++ side of the wrapper.
Thanks @longjon for the suggestion.
Sergey Karayev [Tue, 29 Apr 2014 02:40:43 +0000 (19:40 -0700)]
Cosmetic change in prep for data layer work
Sergey Karayev [Tue, 29 Apr 2014 02:39:36 +0000 (19:39 -0700)]
Split all loss layers into own .cpp files
Sergey Karayev [Tue, 29 Apr 2014 02:06:07 +0000 (19:06 -0700)]
layer definition reorganization and documentation
- split out neuron, loss, and data layers into own header files
- added LossLayer class with common SetUp checks
- in-progress concise documentation of each layer's purpose
Evan Shelhamer [Thu, 15 May 2014 20:52:07 +0000 (13:52 -0700)]
resize to input dimensions when formatting in python
Evan Shelhamer [Thu, 15 May 2014 20:06:17 +0000 (13:06 -0700)]
replace iterator with indices for consistency
Evan Shelhamer [Thu, 15 May 2014 19:35:32 +0000 (12:35 -0700)]
python style
Evan Shelhamer [Thu, 15 May 2014 17:20:43 +0000 (10:20 -0700)]
fix accidental revert of Init() from
f5c28581
Evan Shelhamer [Thu, 15 May 2014 06:45:33 +0000 (23:45 -0700)]
batch inputs in python by forward_all() and forward_backward_all()
Evan Shelhamer [Thu, 15 May 2014 03:17:09 +0000 (20:17 -0700)]
don't squeeze blob arrays for python
Preserve the non-batch dimensions of blob arrays, even for singletons.
The forward() and backward() helpers take lists of ndarrays instead of a
single ndarray per blob, and lists of ndarrays are likewise returned.
Note that for output the blob array could actually be returned as a
single ndarray instead of a list.
Evan Shelhamer [Thu, 15 May 2014 00:38:33 +0000 (17:38 -0700)]
python forward() and backward() extract any blobs and diffs
Evan Shelhamer [Wed, 14 May 2014 21:37:33 +0000 (14:37 -0700)]
python Net.backward() helper and Net.BackwardPrefilled()
Evan Shelhamer [Wed, 14 May 2014 21:02:54 +0000 (14:02 -0700)]
bad forward/backward inputs throw exceptions instead of crashing python
Evan Shelhamer [Wed, 14 May 2014 20:39:06 +0000 (13:39 -0700)]
pycaffe Net.forward() helper
Do forward pass by prefilled or packaging input + output blobs and
returning a {output blob name: output list} dict.
Evan Shelhamer [Wed, 14 May 2014 02:56:14 +0000 (19:56 -0700)]
set input preprocessing per blob in python
Evan Shelhamer [Wed, 14 May 2014 01:53:36 +0000 (18:53 -0700)]
expose input and output blob names to python as lists
Jeff Donahue [Wed, 14 May 2014 20:05:17 +0000 (13:05 -0700)]
Merge pull request #417 from shelhamer/create-and-write-proto
Write/create/truncate prototxt when saving to fix #341
Evan Shelhamer [Wed, 14 May 2014 19:35:33 +0000 (12:35 -0700)]
fix workaround in net prototxt upgrade
Evan Shelhamer [Wed, 14 May 2014 19:28:55 +0000 (12:28 -0700)]
Write/create/truncate prototxt when saving to fix #341
WriteProtoToTextFile now saves the prototxt whether or not the file
already exists and sets the permissions to owner read/write and group +
other read.
Thanks @beam2d and @chyh1990 for pointing out the open modes bug.
Evan Shelhamer [Thu, 10 Apr 2014 03:27:58 +0000 (20:27 -0700)]
pycaffe comments, lint
Evan Shelhamer [Thu, 10 Apr 2014 01:37:32 +0000 (18:37 -0700)]
add python io getters, mean helper, and image caffeinator/decaffeinator
Evan Shelhamer [Thu, 10 Apr 2014 01:33:18 +0000 (18:33 -0700)]
make python wrapper mean match binaryproto dimensions
ilsvrc_2012_mean.npy has dims K x H x W.
Code written for the old D x D x K mean needs to be rewritten!
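The required rewrite amounts to a transpose from the old H x W x K layout to the binaryproto-style K x H x W layout; a minimal sketch on nested lists (hypothetical helper name):

```python
def hwk_to_khw(mean):
    """Transpose an H x W x K nested list (old layout, mean[i][j][c])
    into K x H x W (the binaryproto / ilsvrc_2012_mean.npy layout)."""
    h, w, k = len(mean), len(mean[0]), len(mean[0][0])
    return [[[mean[i][j][c] for j in range(w)] for i in range(h)]
            for c in range(k)]
```

With NumPy arrays the same reshuffle is `mean.transpose(2, 0, 1)`.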
Evan Shelhamer [Wed, 9 Apr 2014 21:02:40 +0000 (14:02 -0700)]
match existing python formatting
Evan Shelhamer [Wed, 14 May 2014 01:10:22 +0000 (18:10 -0700)]
Merge pull request #414 from shelhamer/net-output-blobs
Make net know output blob indices
Evan Shelhamer [Wed, 14 May 2014 01:08:06 +0000 (18:08 -0700)]
net knows output blobs
Evan Shelhamer [Tue, 13 May 2014 20:16:19 +0000 (13:16 -0700)]
Merge pull request #413 from shelhamer/cublas-status-not-supported
add cublas status in cuda 6 to fix warning
Evan Shelhamer [Tue, 13 May 2014 19:30:34 +0000 (12:30 -0700)]
add cublas status in cuda 6 to fix warning
...and #define for older CUDAs to not break the build.
Jeff Donahue [Sun, 11 May 2014 00:41:39 +0000 (17:41 -0700)]
Merge pull request #406 from jeffdonahue/makefile-include-bug
Fix Makefile header dependency bug
Jeff Donahue [Sun, 11 May 2014 00:20:19 +0000 (17:20 -0700)]
fix Makefile bug: HXX_SRCS matched files that don't end in .hpp instead
of those that do
Jeff Donahue [Sat, 10 May 2014 21:09:31 +0000 (14:09 -0700)]
require either train_net or train_net_param to be specified
Jeff Donahue [Sat, 10 May 2014 20:57:52 +0000 (13:57 -0700)]
fix proto comment for multiple test nets
Jeff Donahue [Sat, 10 May 2014 19:12:28 +0000 (12:12 -0700)]
add script to run lenet_consolidated_solver and add comment with results
for first/last 500 iterations
Jeff Donahue [Sat, 10 May 2014 18:46:19 +0000 (11:46 -0700)]
lint and two test_iters in lenet_consolidated_solver
Tobias Domhan [Sat, 10 May 2014 16:41:05 +0000 (18:41 +0200)]
multiple test_iter