James Thewlis [Fri, 27 Jun 2014 23:49:53 +0000 (00:49 +0100)]
Fix building tests with parallel make
The changes for .cu tests meant that TEST_BUILD_DIR
was no longer being created first.
James Thewlis [Thu, 26 Jun 2014 17:12:30 +0000 (18:12 +0100)]
Test for im2col kernel
With associated Makefile changes for .cu tests
This tests that the grid-stride loop works for im2col,
using the CPU version as a reference.
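For reference, a grid-stride loop keeps the index derived from blockIdx/threadIdx untouched and advances by the total number of launched threads, so the kernel is correct for any grid size. A minimal sketch of the pattern (illustrative only; not the repository's im2col kernel):

    // Each thread starts at its global index and strides by the total number
    // of threads in the grid; the base index itself is never modified.
    __global__ void scale_kernel(const int n, const float alpha,
                                 const float* in, float* out) {
      for (int index = blockIdx.x * blockDim.x + threadIdx.x;
           index < n;
           index += blockDim.x * gridDim.x) {
        out[index] = alpha * in[index];
      }
    }

The CPU reference in the test computes the same result with an ordinary loop, so any indexing error in the striding shows up as a mismatch.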
Evan Shelhamer [Fri, 27 Jun 2014 02:13:28 +0000 (19:13 -0700)]
Merge pull request #511 from kloudkl/extract_multiple_features
Extract multiple features in a single Forward pass
Jeff Donahue [Thu, 26 Jun 2014 20:50:33 +0000 (16:50 -0400)]
Merge pull request #546 from BVLC/weight-sharing
Weight Sharing
Evan Shelhamer [Thu, 26 Jun 2014 20:05:44 +0000 (13:05 -0700)]
rename layer -> param mapping for clarity
Evan Shelhamer [Thu, 26 Jun 2014 19:49:00 +0000 (12:49 -0700)]
change weight blob field name to param
Jeff Donahue [Mon, 9 Jun 2014 02:53:45 +0000 (19:53 -0700)]
weight sharing
Evan Shelhamer [Thu, 26 Jun 2014 19:18:49 +0000 (12:18 -0700)]
Merge pull request #497 from jeffdonahue/fix-backward-interface
Improve Backward method interface
Jeff Donahue [Mon, 16 Jun 2014 23:37:17 +0000 (16:37 -0700)]
force_backward works properly with parts of the net that cannot be backpropagated through
Jeff Donahue [Tue, 10 Jun 2014 19:38:59 +0000 (12:38 -0700)]
change Backward interface: propagate_down is now a vector, used to fix a
long-standing issue with how backpropagation is handled in loss layers (esp.
EuclideanLossLayer)
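A hedged sketch of what the vector-valued interface enables (the names and signature below are illustrative, not copied from the repository): each bottom blob carries its own propagate_down flag, so a loss layer can backpropagate to its prediction input while skipping its label input.

    #include <vector>

    // Hypothetical Euclidean-loss-style backward pass: the elementwise
    // gradient is written only into bottoms whose propagate_down flag is set.
    void EuclideanBackwardSketch(const std::vector<float>& prediction,
                                 const std::vector<float>& label,
                                 const std::vector<bool>& propagate_down,
                                 std::vector<std::vector<float> >* bottom_diff) {
      for (size_t b = 0; b < bottom_diff->size(); ++b) {
        if (!propagate_down[b]) continue;            // e.g. never backprop to labels
        const float sign = (b == 0) ? 1.0f : -1.0f;  // d/d(pred) vs. d/d(label)
        std::vector<float>& diff = (*bottom_diff)[b];
        diff.resize(prediction.size());
        for (size_t i = 0; i < prediction.size(); ++i) {
          diff[i] = sign * (prediction[i] - label[i]);
        }
      }
    }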
Evan Shelhamer [Thu, 26 Jun 2014 17:25:12 +0000 (10:25 -0700)]
Merge pull request #522 from sguada/accuracy_without_loss
Split accuracy and loss
Evan Shelhamer [Thu, 26 Jun 2014 17:21:19 +0000 (10:21 -0700)]
file SoftmaxWithLoss in with loss layers
Evan Shelhamer [Thu, 26 Jun 2014 16:10:54 +0000 (09:10 -0700)]
Merge pull request #488 from longjon/wall-werror
Build with -Wall -Werror
Evan Shelhamer [Thu, 26 Jun 2014 00:58:38 +0000 (17:58 -0700)]
content ourselves with -Wall without -Werror for now
Evan Shelhamer [Thu, 26 Jun 2014 00:35:40 +0000 (17:35 -0700)]
make clang++ happy on OSX by not linking with pthread
Evan Shelhamer [Thu, 26 Jun 2014 00:32:27 +0000 (17:32 -0700)]
fix test data layer post-lmdb
Evan Shelhamer [Wed, 25 Jun 2014 23:58:03 +0000 (16:58 -0700)]
Merge pull request #478 from kloudkl/cpu_only_tests
CPU only tests
Jonathan L Long [Sat, 14 Jun 2014 02:59:33 +0000 (19:59 -0700)]
turn off some warnings for older compilers
Jonathan L Long [Tue, 10 Jun 2014 22:39:44 +0000 (15:39 -0700)]
add WARNINGS to CXXFLAGS
Jonathan L Long [Tue, 10 Jun 2014 22:38:06 +0000 (15:38 -0700)]
upgrade warnings to -Wall -Werror -Wno-sign-compare
Jonathan L Long [Tue, 10 Jun 2014 22:34:01 +0000 (15:34 -0700)]
don't end comments with \, so that -Wcomment can be used
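For context: a line comment that ends in a backslash splices the next source line into the comment, which -Wcomment flags as a multi-line comment. A small illustration:

    // With -Wcomment the compiler warns about a multi-line comment here,
    // because the trailing backslash joins the next line to this comment: \
    int hidden = 0;   // this declaration is swallowed by the comment above

    int visible = 0;  // fine: no trailing backslash, so this line is real code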
Jonathan L Long [Tue, 10 Jun 2014 21:50:24 +0000 (14:50 -0700)]
initialize and comment variables that the compiler finds suspicious
Jonathan L Long [Tue, 10 Jun 2014 21:45:36 +0000 (14:45 -0700)]
move CUDA 6.0 check into switch statement itself
This allows -Wswitch to be turned on so that the compiler can check
exhaustiveness.
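The point of the change: when a switch over an enum has no default label, -Wswitch warns about any enumerator without a case, so a new mode cannot be forgotten silently. A small illustration (the enum is made up for the example):

    // With -Wswitch the compiler warns if an enumerator has no case,
    // but only when the switch has no default label to absorb it.
    enum Mode { MODE_CPU, MODE_GPU };

    const char* ModeName(Mode m) {
      switch (m) {       // no default: -Wswitch checks exhaustiveness
        case MODE_CPU: return "CPU";
        case MODE_GPU: return "GPU";
      }
      return "unknown";  // only reached for an out-of-range value
    }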
Jonathan L Long [Wed, 11 Jun 2014 22:29:04 +0000 (15:29 -0700)]
add missing const qualifiers to MemoryDataLayer ExactNum* functions
Jonathan L Long [Thu, 12 Jun 2014 02:31:40 +0000 (19:31 -0700)]
remove unused variables from tests
Jonathan L Long [Mon, 26 May 2014 09:31:34 +0000 (02:31 -0700)]
remove unused variables
Jonathan L Long [Thu, 12 Jun 2014 02:31:22 +0000 (19:31 -0700)]
initialize in declared order in tests
Jonathan L Long [Mon, 26 May 2014 09:31:05 +0000 (02:31 -0700)]
initialize in declared order
Jonathan L Long [Tue, 10 Jun 2014 22:34:52 +0000 (15:34 -0700)]
check if window file is empty in WindowDataLayer
Also note that window files containing windows with different numbers of
channels may not work correctly.
Jonathan L Long [Mon, 26 May 2014 09:02:42 +0000 (02:02 -0700)]
actually check status values from all HDF5 calls
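A sketch of the pattern, assuming the plain HDF5 C API and a made-up CHECK_H5 helper (the project's own error handling may differ): every HDF5 call returns a handle or status that is negative on failure, and each one should be checked.

    #include <hdf5.h>
    #include <cstdio>
    #include <cstdlib>

    // Hypothetical helper: abort loudly if an HDF5 call reports failure.
    #define CHECK_H5(expr)                                         \
      do {                                                         \
        if ((expr) < 0) {                                          \
          std::fprintf(stderr, "HDF5 call failed: %s\n", #expr);   \
          std::exit(1);                                            \
        }                                                          \
      } while (0)

    void ReadDatasetSketch(const char* filename, const char* dataset,
                           float* buffer) {
      hid_t file = H5Fopen(filename, H5F_ACC_RDONLY, H5P_DEFAULT);
      CHECK_H5(file);
      hid_t dset = H5Dopen2(file, dataset, H5P_DEFAULT);
      CHECK_H5(dset);
      CHECK_H5(H5Dread(dset, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL,
                       H5P_DEFAULT, buffer));
      CHECK_H5(H5Dclose(dset));
      CHECK_H5(H5Fclose(file));
    }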
Evan Shelhamer [Wed, 25 Jun 2014 21:47:06 +0000 (14:47 -0700)]
Merge pull request #427 from jamt9000/fix-kernel-index
Should not modify index in im2col kernel loop
Evan Shelhamer [Wed, 25 Jun 2014 03:01:10 +0000 (11:01 +0800)]
fix SOFTMAX_LOSS to work with loss top blob interface
Kai Li [Tue, 24 Jun 2014 14:14:54 +0000 (22:14 +0800)]
Init google logging
Kai Li [Fri, 20 Jun 2014 02:14:19 +0000 (10:14 +0800)]
Replace the raw pointers with shared_ptr to ensure memory is released
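A minimal sketch of the idea, shown with std::shared_ptr and a made-up FeatureWriter type (the actual tool's resources differ): holding each per-blob resource in a shared_ptr means it is released on every exit path without a manual delete.

    #include <memory>
    #include <vector>

    struct FeatureWriter {             // stand-in for a per-blob output resource
      ~FeatureWriter() { /* flush and close here */ }
    };

    void ExtractSketch(int num_feature_blobs) {
      std::vector<std::shared_ptr<FeatureWriter> > writers;
      for (int i = 0; i < num_feature_blobs; ++i) {
        writers.push_back(std::make_shared<FeatureWriter>());
      }
      // ... write each extracted blob through writers[i] ...
    }  // no manual delete: every writer is destroyed with the vector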
Kai Li [Tue, 17 Jun 2014 08:56:30 +0000 (16:56 +0800)]
No need to manually delete the pointers which are managed by std::vector
Kai Li [Tue, 17 Jun 2014 08:27:24 +0000 (16:27 +0800)]
Progress should be reported for each feature blob
Kai Li [Tue, 17 Jun 2014 06:53:00 +0000 (14:53 +0800)]
Extract multiple features in a single Forward pass
Yangqing Jia [Mon, 23 Jun 2014 12:56:30 +0000 (08:56 -0400)]
Merge pull request #508 from kloudkl/ImageDataLayer-RNG-core-dump
Image data layer rng core dump
Evan Shelhamer [Sun, 22 Jun 2014 04:17:45 +0000 (12:17 +0800)]
Merge pull request #529 from crizCraig/patch-3
Fix example text: there are 256 filters in conv2.
Craig Quiter [Sat, 21 Jun 2014 22:22:05 +0000 (15:22 -0700)]
There are 256 filters in conv2.
The AlexNet paper splits conv2 across two GPUs, each with 128 filters. Reference paper: http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf Applying #528 to dev instead of master per @Yangqing.
Jonathan L Long [Sat, 21 Jun 2014 21:17:50 +0000 (14:17 -0700)]
Merge pull request #398 from sguada/L2_hinge_loss
Jonathan L Long [Sat, 21 Jun 2014 21:08:30 +0000 (14:08 -0700)]
explicitly name L1 hinge test
Jonathan L Long [Sat, 21 Jun 2014 21:08:15 +0000 (14:08 -0700)]
fix whitespace error in HingeLossLayer
Sergio [Sat, 21 Jun 2014 03:26:51 +0000 (20:26 -0700)]
Change hinge_norm to norm in test_hinge_loss
Sergio [Sat, 21 Jun 2014 03:22:34 +0000 (20:22 -0700)]
Remove C_ mentions, extra spaces and change hinge_norm to norm
Sergio [Sat, 21 Jun 2014 02:55:59 +0000 (19:55 -0700)]
AccuracyLayer now only computes accuracy; use a LossLayer to compute the loss
Changed every val.prototxt in the examples to add a LossLayer so that the loss is computed during testing
Sergio [Sat, 21 Jun 2014 01:37:34 +0000 (18:37 -0700)]
Unify L1 and L2 Hinge_Loss to follow convention
Sergio [Sat, 21 Jun 2014 01:35:25 +0000 (18:35 -0700)]
Fix the loss to follow the convention
Conflicts:
src/caffe/layers/loss_layer.cpp
Sergio [Sat, 21 Jun 2014 01:34:24 +0000 (18:34 -0700)]
Fixed switch and test l2hingeloss
Conflicts:
src/caffe/layers/loss_layer.cpp
src/caffe/test/test_hinge_loss_layer.cpp
Sergio [Sat, 21 Jun 2014 01:32:49 +0000 (18:32 -0700)]
Remove spaces and merge tests into one file
Conflicts:
src/caffe/layers/loss_layer.cpp
src/caffe/test/test_hinge_loss_layer.cpp
Sergio [Sat, 21 Jun 2014 01:29:14 +0000 (18:29 -0700)]
Removed the L2HingeLoss class; it is now a case within the HingeLoss class
Conflicts:
include/caffe/vision_layers.hpp
src/caffe/layers/loss_layer.cpp
src/caffe/proto/caffe.proto
src/caffe/test/test_l2_hinge_loss_layer.cpp
Sergio [Sat, 21 Jun 2014 01:26:05 +0000 (18:26 -0700)]
Merge HingeLoss and L2HingeLoss by adding hinge_norm to params
Conflicts:
src/caffe/layers/loss_layer.cpp
src/caffe/proto/caffe.proto
src/caffe/test/test_l2_hinge_loss_layer.cpp
Kai Li [Tue, 17 Jun 2014 01:36:46 +0000 (09:36 +0800)]
Verify the result of memtest in SyncedMemoryTest::TestGPURead
Kai Li [Tue, 10 Jun 2014 01:26:31 +0000 (09:26 +0800)]
Rename curand_availability_logged according to the Google style guide
Kai Li [Tue, 10 Jun 2014 01:24:39 +0000 (09:24 +0800)]
Revert the namespace ending comment to the same line of the bracket
Kai Li [Sat, 7 Jun 2014 16:00:26 +0000 (00:00 +0800)]
Suppress redundant log messages of unavailable curand
Kai Li [Mon, 2 Jun 2014 12:07:48 +0000 (20:07 +0800)]
Separate TestForward into Test{CPU, GPU}Forward in HDF5OutputLayerTest
Kai Li [Mon, 2 Jun 2014 05:12:48 +0000 (13:12 +0800)]
Extract GPU code out of SyncedMemoryTest::TestCPUWrite
Kai Li [Tue, 17 Jun 2014 05:35:59 +0000 (13:35 +0800)]
Initialize the RNG with an independently new-ed Generator
Kai Li [Tue, 17 Jun 2014 03:43:10 +0000 (11:43 +0800)]
Fix the condition prefetch_needs_rand in the ImageDataLayer
Jonathan L Long [Fri, 20 Jun 2014 08:05:41 +0000 (01:05 -0700)]
remove erroneous comment in ArgMaxLayer
Sergey Karayev [Fri, 20 Jun 2014 00:13:34 +0000 (17:13 -0700)]
Merge pull request #521 from sguada/set_device_id_at_init
Set device_id at the beginning of Solver.Init()
Sergio [Thu, 19 Jun 2014 23:45:08 +0000 (16:45 -0700)]
Modified test_net to check loss layer with top
Sergio [Thu, 19 Jun 2014 23:40:32 +0000 (16:40 -0700)]
Loss layers now return the loss in the top blob if requested
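A hedged sketch of the convention (names and shapes are illustrative, not the repository's code): when the layer is given a top blob, the scalar loss is also written into it.

    #include <vector>

    // Illustrative: a squared-error loss that optionally reports its scalar
    // value through a single-element top blob when one is provided.
    float ForwardLossSketch(const float* prediction, const float* label,
                            int count, std::vector<float*>* top) {
      float loss = 0.0f;
      for (int i = 0; i < count; ++i) {
        const float d = prediction[i] - label[i];
        loss += 0.5f * d * d;
      }
      loss /= count;
      if (top != NULL && !top->empty()) {
        (*top)[0][0] = loss;  // loss requested as a top blob
      }
      return loss;
    }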
Sergio [Thu, 19 Jun 2014 19:55:48 +0000 (12:55 -0700)]
Set device_id at the beginning of Solver.Init() to avoid using memory on the default GPU
Yangqing Jia [Thu, 19 Jun 2014 01:18:08 +0000 (18:18 -0700)]
Merge pull request #504 from leelurch/Config-Example-Ubuntu14.04
Add comment for how to set the CUDA path when CUDA tools are installed by the package manager
Sergio Guadarrama [Tue, 17 Jun 2014 16:20:25 +0000 (09:20 -0700)]
Merge pull request #507 from longjon/set-device-early
Fix Caffe::SetDevice to avoid initializing on default device
Jonathan L Long [Mon, 16 Jun 2014 22:20:17 +0000 (15:20 -0700)]
in Caffe::SetDevice, call cudaSetDevice before Get
Otherwise initialization will be performed on whichever device is the
default.
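The ordering matters because the first runtime call that needs a context implicitly initializes the current device, which is device 0 unless cudaSetDevice has already been called. A minimal sketch (assumes a second GPU, device 1, exists; error handling trimmed):

    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
      // Select the target GPU before any call that would create a context;
      // otherwise the allocation below would land on device 0.
      if (cudaSetDevice(1) != cudaSuccess) {
        std::printf("could not select device 1\n");
        return 1;
      }
      float* buf = NULL;
      cudaMalloc(reinterpret_cast<void**>(&buf), 1024 * sizeof(float));
      cudaFree(buf);
      return 0;
    }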
Sergey Karayev [Mon, 16 Jun 2014 06:38:42 +0000 (23:38 -0700)]
Merge pull request #431 from mavenlin/lmdb
Add support for LMDB (LevelDB alternative)
leelurch [Mon, 16 Jun 2014 01:07:04 +0000 (20:07 -0500)]
Add a comment on how to set the CUDA path when CUDA tools are installed by the package manager.
linmin [Sat, 14 Jun 2014 08:35:48 +0000 (16:35 +0800)]
fix string compare error
linmin [Sat, 14 Jun 2014 08:15:44 +0000 (16:15 +0800)]
add lmdb support for compute_image_mean
linmin [Sat, 14 Jun 2014 08:15:18 +0000 (16:15 +0800)]
add lmdb support for convert_imageset
Evan Shelhamer [Fri, 13 Jun 2014 20:22:22 +0000 (13:22 -0700)]
Merge pull request #495 from jeffdonahue/refactor-net
Minor Net::Init refactoring: name loop indices, add helpers
Evan Shelhamer [Fri, 13 Jun 2014 18:42:40 +0000 (11:42 -0700)]
add net surgery link to docs (+ drop old comment)
Jeff Donahue [Fri, 13 Jun 2014 06:07:23 +0000 (23:07 -0700)]
unify data layer tests: each test body was copied four times for all combinations
of CPU/GPU and LevelDB/LMDB; now there is just one copy of each test body
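A hedged sketch of the de-duplication using gtest typed tests (the tag types are hypothetical, not the repository's fixtures): the test body is written once and instantiated for every backend/database combination.

    #include <gtest/gtest.h>

    // Hypothetical tags standing in for the four CPU/GPU x LevelDB/LMDB combos.
    struct CPULevelDB {}; struct CPULMDB {};
    struct GPULevelDB {}; struct GPULMDB {};

    template <typename Config>
    class DataLayerTestSketch : public ::testing::Test {};

    typedef ::testing::Types<CPULevelDB, CPULMDB, GPULevelDB, GPULMDB> Configs;
    TYPED_TEST_CASE(DataLayerTestSketch, Configs);

    // One test body, instantiated once per configuration.
    TYPED_TEST(DataLayerTestSketch, Reads) {
      // ... fill a source for TypeParam's backend and check the layer output ...
      SUCCEED();
    }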
Jeff Donahue [Fri, 13 Jun 2014 05:51:33 +0000 (22:51 -0700)]
unify test_data_layer tests
Jeff Donahue [Fri, 13 Jun 2014 05:29:59 +0000 (22:29 -0700)]
lint
linmin [Fri, 13 Jun 2014 02:26:38 +0000 (10:26 +0800)]
fixed cpplint error
linmin [Fri, 13 Jun 2014 02:19:33 +0000 (10:19 +0800)]
add tests for the LMDB data layer (copied from test_data_layer.cpp)
linmin [Wed, 21 May 2014 16:15:32 +0000 (00:15 +0800)]
add option for lmdb
Evan Shelhamer [Thu, 12 Jun 2014 23:00:49 +0000 (16:00 -0700)]
Merge pull request #455 from shelhamer/pycaffe-save
Save from python for net surgery
Jeff Donahue [Wed, 11 Jun 2014 02:22:16 +0000 (19:22 -0700)]
refactor Net::Init to call helpers AppendBottom and AppendTop
Jeff Donahue [Tue, 10 Jun 2014 20:19:34 +0000 (13:19 -0700)]
make Net::Init loop indices clearer
Jeff Donahue [Thu, 12 Jun 2014 22:37:10 +0000 (15:37 -0700)]
Merge pull request #496 from jeffdonahue/test-net-use-dummy-data
Make test_net use DUMMY_DATA instead of DATA (leveldb)
Jeff Donahue [Thu, 12 Jun 2014 21:58:07 +0000 (14:58 -0700)]
make test_net use DUMMY_DATA instead of leveldb
Evan Shelhamer [Thu, 12 Jun 2014 21:41:25 +0000 (14:41 -0700)]
make notebook for net surgery of fully-convolutional model
Evan Shelhamer [Wed, 11 Jun 2014 16:57:07 +0000 (09:57 -0700)]
define fully-convolutional imagenet model
Evan Shelhamer [Mon, 26 May 2014 06:49:51 +0000 (23:49 -0700)]
save from python for net surgery
0. Scheme out the desired parameters.
1. Do surgery on the net through `net.params['name'][idx].data[...] = `.
2. Save post-operation net params by `net.save('fname')`.
Handwoven deep nets, anyone?
Evan Shelhamer [Wed, 11 Jun 2014 22:22:04 +0000 (15:22 -0700)]
Merge pull request #482 from shelhamer/rcnn-detector-example
Make R-CNN the Caffe detection example
Evan Shelhamer [Wed, 11 Jun 2014 17:17:23 +0000 (10:17 -0700)]
Merge pull request #469 from weinman/grayscale-io-convert
Add grayscale input processing for intensity images in tools and
pycaffe.
Evan Shelhamer [Tue, 10 Jun 2014 22:01:41 +0000 (15:01 -0700)]
pycaffe: leave grayscale images gray according to arg
Evan Shelhamer [Wed, 11 Jun 2014 16:55:54 +0000 (09:55 -0700)]
drop learning rates and decays from deploy model
Evan Shelhamer [Tue, 10 Jun 2014 21:46:53 +0000 (14:46 -0700)]
groom install docs
- make OS X boost compilation more clear
- make punctuation more sincere
Jeff Donahue [Tue, 10 Jun 2014 18:35:08 +0000 (11:35 -0700)]
fix clang compilation problem w/ DummyDataLayer
Evan Shelhamer [Tue, 10 Jun 2014 03:56:19 +0000 (20:56 -0700)]
finish R-CNN detection example
- run through and save new output
- collect region proposals with R-CNN configuration (see sergeyk/selective_search_ijcv_with_python)
- call detect.py in GPU mode
- fix NMS plotting: x and y coordinates were accidentally swapped. Print scores too.
Evan Shelhamer [Tue, 10 Jun 2014 03:43:47 +0000 (20:43 -0700)]
make selective search proposals with R-CNN configuration
Evan Shelhamer [Tue, 10 Jun 2014 02:46:07 +0000 (19:46 -0700)]
edit detection example, include R-CNN NMS
Evan Shelhamer [Sun, 8 Jun 2014 23:53:06 +0000 (16:53 -0700)]
make R-CNN the Caffe detection example
Evan Shelhamer [Mon, 9 Jun 2014 03:31:35 +0000 (20:31 -0700)]
pycaffe Detector crops with surrounding context
- caffe.Detector learned how to crop windows with context in the R-CNN
style s.t. the border of the network input is a given amount of
context.
- add --context_pad arg to detect.py for amount of context. Default is
16, as in R-CNN.