Yangqing Jia [Wed, 27 Aug 2014 22:40:14 +0000 (15:40 -0700)]
fix layer_factory.cpp bug: there should be no ifdefs
Evan Shelhamer [Tue, 26 Aug 2014 23:26:15 +0000 (16:26 -0700)]
Merge pull request #984 from shelhamer/drop-curand-reset
Drop obsolete CURAND reset for CUDA 6.5 compatibility
Evan Shelhamer [Tue, 26 Aug 2014 21:32:14 +0000 (14:32 -0700)]
default ilsvrc solving to GPU
Jonathan L Long [Tue, 26 Aug 2014 21:01:20 +0000 (14:01 -0700)]
FIX drop obsolete CURAND reset for CUDA 6.5 compatibility
Drop the legacy CURAND initialization steps; these are unnecessary and
cause dramatic slowdowns for CUDA 6.5. This does no harm for K20 usage
contrary to the note, at least for CUDA 6.5 and 5.0.
Sergey Karayev [Tue, 26 Aug 2014 07:44:56 +0000 (00:44 -0700)]
FIX web_demo upload was not processing grayscale correctly
Jeff Donahue [Tue, 26 Aug 2014 00:14:10 +0000 (17:14 -0700)]
Merge pull request #981 from jeffdonahue/fix-eltwise-product
Make eltwise product gradient stabler (fixes issues with WITHIN_CHANNEL LRN in cifar_full example)
Jeff Donahue [Mon, 25 Aug 2014 23:12:33 +0000 (16:12 -0700)]
remove warning about LRN layers in CPU mode
Jeff Donahue [Mon, 25 Aug 2014 23:10:22 +0000 (16:10 -0700)]
Add "stable_prod_grad" option (on by default) to ELTWISE layer to
compute the eltwise product gradient using a slower but stabler formula.
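The trade-off can be sketched in Python (hypothetical names, not the actual Caffe implementation): the fast gradient divides the product output by each input, which fails when an input is zero or tiny, while the stable formula multiplies the remaining inputs directly.

```python
from functools import reduce

def prod_grad_fast(xs, i):
    # Fast but unstable: divide the full product by xs[i].
    # Fails (or loses precision) when xs[i] is zero or near zero.
    total = reduce(lambda a, b: a * b, xs)
    return total / xs[i]

def prod_grad_stable(xs, i):
    # Slower but stable: multiply all factors except xs[i].
    return reduce(lambda a, b: a * b,
                  [x for j, x in enumerate(xs) if j != i], 1.0)
```

For `xs = [2.0, 0.0, 3.0]` the fast version raises a division-by-zero error for `i = 1`, while the stable version correctly returns 6.0.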
Jeff Donahue [Mon, 25 Aug 2014 20:16:37 +0000 (13:16 -0700)]
Merge pull request #980 from jeffdonahue/fix-memory-used
fix memory_used_ by computing after SetUp
Jeff Donahue [Mon, 25 Aug 2014 19:47:50 +0000 (12:47 -0700)]
fix memory_used_ by computing after SetUp
Jeff Donahue [Mon, 25 Aug 2014 19:42:24 +0000 (12:42 -0700)]
Merge pull request #979 from jeffdonahue/caffe-test-output
'caffe test' computes and prints all scores and their names
Jeff Donahue [Mon, 25 Aug 2014 18:59:20 +0000 (11:59 -0700)]
'caffe test' prints all scores and their names
Evan Shelhamer [Mon, 25 Aug 2014 17:25:53 +0000 (10:25 -0700)]
Merge pull request #976 from alfredtofu/dev
fix image resizing dimensions for IO
alfredtofu [Mon, 25 Aug 2014 08:22:52 +0000 (16:22 +0800)]
fix bug for resizing images.
Evan Shelhamer [Sun, 24 Aug 2014 22:36:52 +0000 (15:36 -0700)]
[example] add fully-convolutional efficiency note + confidence map
- spell out fully-convolutional efficiency
- add confidence map
- fix input size: 451 x 451 is correct for an 8 x 8 output map by the
equation input size = 227 + 32(d-1) for output map dimension of d
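As a quick check of the equation above (base size 227, effective stride 32):

```python
def input_size(d, base=227, stride=32):
    # input size = base + stride * (d - 1) for an output map of dimension d
    return base + stride * (d - 1)
```

This gives 451 for an 8 x 8 output map and falls back to the single-crop size 227 for d = 1.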
Evan Shelhamer [Sun, 24 Aug 2014 05:41:30 +0000 (22:41 -0700)]
fix internal thread interface confusion
thanks @ronghanghu for pointing this out in #964
Evan Shelhamer [Sun, 24 Aug 2014 02:25:56 +0000 (19:25 -0700)]
move {InnerProduct,Eltwise}Layer to common instead of vision
Evan Shelhamer [Fri, 22 Aug 2014 07:02:17 +0000 (00:02 -0700)]
fix parameter for transformation in ImageDataLayer constructor
Evan Shelhamer [Fri, 22 Aug 2014 05:50:47 +0000 (22:50 -0700)]
Merge pull request #963 from shelhamer/fix-transform-param
Make data transformations backwards-compatible and upgrade models
Evan Shelhamer [Fri, 22 Aug 2014 05:19:09 +0000 (22:19 -0700)]
upgrade model definitions for transformation params
Evan Shelhamer [Fri, 22 Aug 2014 04:22:44 +0000 (21:22 -0700)]
upgrade net parameter data transformation fields automagically
Convert DataParameter and ImageDataParameter data transformation fields
into a TransformationParameter.
Evan Shelhamer [Fri, 22 Aug 2014 02:02:41 +0000 (19:02 -0700)]
compact net parameter upgrade
Evan Shelhamer [Fri, 22 Aug 2014 02:03:09 +0000 (19:03 -0700)]
restore old data transformation parameters for compatibility
Evan Shelhamer [Fri, 22 Aug 2014 00:06:39 +0000 (17:06 -0700)]
Merge pull request #954 from geenux/dev-redundant-data
Refactor data layers to avoid duplication of data transformation code
Jeff Donahue [Thu, 21 Aug 2014 21:35:23 +0000 (14:35 -0700)]
Merge pull request #961 from jeffdonahue/gpu-flag-overrides-solver-mode
If specified, --gpu flag overrides SolverParameter solver_mode.
Jeff Donahue [Thu, 21 Aug 2014 19:53:43 +0000 (12:53 -0700)]
If specified, --gpu flag overrides SolverParameter solver_mode.
Evan Shelhamer [Thu, 21 Aug 2014 17:10:40 +0000 (10:10 -0700)]
Merge pull request #942 from yosinski/doc-update
protobuf should be installed with Python support on Mac
Evan Shelhamer [Thu, 21 Aug 2014 17:08:45 +0000 (10:08 -0700)]
Merge pull request #956 from longjon/clean-signbit
Clean up CPU signbit definition
TANGUY Arnaud [Thu, 21 Aug 2014 13:50:39 +0000 (15:50 +0200)]
Refactor ImageDataLayer to use DataTransformer
Evan Shelhamer [Thu, 21 Aug 2014 03:38:29 +0000 (20:38 -0700)]
specialize cpu_strided_dot before instantiation to fix clang++ build
Jonathan L Long [Thu, 21 Aug 2014 03:06:11 +0000 (20:06 -0700)]
clean up cpu signbit definition
The redundant "caffe_signbit" function was used as a circumlocution
around CUDA's signbit macro; we can just add extra parens to prevent
macro expansion.
TANGUY Arnaud [Wed, 20 Aug 2014 16:37:54 +0000 (18:37 +0200)]
Refactor DataLayer using a new DataTransformer
Start refactoring the data layers to avoid duplicating data transformation
code. So far, only DataLayer has been done.
longjon [Tue, 19 Aug 2014 08:43:37 +0000 (01:43 -0700)]
Merge pull request #940 from ronghanghu/channel-softmax
Softmax works across channels
Ronghang Hu [Sat, 16 Aug 2014 21:52:19 +0000 (14:52 -0700)]
implement GPU version of Softmax
Jonathan L Long [Fri, 15 Aug 2014 02:21:28 +0000 (19:21 -0700)]
test softmax and softmax with loss across channels
Jonathan L Long [Fri, 15 Aug 2014 02:20:30 +0000 (19:20 -0700)]
softmax and softmax loss layers work across channels
Jonathan L Long [Mon, 18 Aug 2014 06:15:03 +0000 (23:15 -0700)]
add caffe_cpu_strided_dot for strided dot products
This provides a more direct interface to the cblas_?dot functions.
This is useful, for example, for taking dot products across channels.
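A pure-Python sketch of the semantics (the real function wraps cblas_sdot/cblas_ddot): n elements are read from each array at the given strides, so a stride equal to the spatial size walks down the channel axis of a flattened N x C x H x W blob.

```python
def strided_dot(n, x, incx, y, incy):
    # Mirrors the cblas_?dot signature: dot product of n elements
    # read from x at stride incx and from y at stride incy.
    return sum(x[i * incx] * y[i * incy] for i in range(n))
```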
Jonathan L Long [Mon, 18 Aug 2014 05:01:24 +0000 (22:01 -0700)]
milliseconds is a word
Jeff Donahue [Mon, 18 Aug 2014 00:42:24 +0000 (17:42 -0700)]
Merge pull request #943 from jeffdonahue/parallel-travis-builds
add Travis build matrix to do parallel builds for (make, CMake) x (with CUDA, without CUDA)
Jeff Donahue [Sun, 17 Aug 2014 10:11:59 +0000 (03:11 -0700)]
Travis build matrix to do parallel builds for make and CMake; CUDA-less
and CUDA-ful. Move bits of functionality into scripts under scripts/travis
for readability. Only generate CUDA compute_50 for performance.
Jeff Donahue [Sun, 17 Aug 2014 09:03:51 +0000 (02:03 -0700)]
Merge pull request #623 from BVLC/cmake
CMake build system (feature development tracking PR)
Jeff Donahue [Sun, 17 Aug 2014 07:28:14 +0000 (00:28 -0700)]
restore .testbin extension, and move caffe_tool back to "caffe".
(Required because the tool target names had to be changed to '.bin' and
given an OUTPUT_NAME, but the .bin suffix made the test_net tool collide
with the test_net unit test.)
Jeff Donahue [Sun, 17 Aug 2014 07:26:35 +0000 (00:26 -0700)]
use all caps for global preprocess vars (e.g. EXAMPLES_SOURCE_DIR), and
other minor cleanup
Jeff Donahue [Sun, 17 Aug 2014 07:03:28 +0000 (00:03 -0700)]
.travis.yml and .gitignore: various minor cleanup
Jeff Donahue [Sun, 17 Aug 2014 06:47:25 +0000 (23:47 -0700)]
[docs] CMake build steps and Ubuntu 12.04 install instructions
bhack [Tue, 29 Jul 2014 23:15:39 +0000 (01:15 +0200)]
Reduce packages
bhack [Tue, 29 Jul 2014 18:25:50 +0000 (20:25 +0200)]
Add PPA for CMake to fix 32-bit precompiled cmake on 64-bit
Adam Kosiorek [Tue, 29 Jul 2014 08:40:40 +0000 (10:40 +0200)]
added gflags + bugfixes + rebase on bvlc/caffe
* added gflags requirement in CMake
* fixed a bug that linked some tests into caffe lib
* renamed tools/caffe due to conflicting target names with caffe lib
* rebased onto bvlc/caffe
Adam Kosiorek [Mon, 28 Jul 2014 12:01:45 +0000 (14:01 +0200)]
Added lint target
Adam Kosiorek [Fri, 25 Jul 2014 10:22:00 +0000 (12:22 +0200)]
added proper 'runtest' target
Adam Kosiorek [Fri, 25 Jul 2014 07:38:42 +0000 (09:38 +0200)]
Examples_SOURCE_DIR cmake variable bugfix
* it was set only when BUILD_EXAMPLES==OFF
Adam Kosiorek [Thu, 24 Jul 2014 13:37:26 +0000 (15:37 +0200)]
enable both GPU and CPU builds + testing in travis
Adam Kosiorek [Thu, 24 Jul 2014 13:27:53 +0000 (15:27 +0200)]
cpu only build works
Adam Kosiorek [Thu, 24 Jul 2014 11:58:37 +0000 (13:58 +0200)]
cpu only
Adam Kosiorek [Thu, 24 Jul 2014 12:31:52 +0000 (14:31 +0200)]
restoring travis.yml
Adam Kosiorek [Thu, 24 Jul 2014 11:44:05 +0000 (13:44 +0200)]
cmake from binaries
Adam Kosiorek [Wed, 23 Jul 2014 07:41:36 +0000 (09:41 +0200)]
cmake build configuration for travis-ci
Adam Kosiorek [Wed, 23 Jul 2014 07:12:44 +0000 (09:12 +0200)]
fixed lint issues
Adam Kosiorek [Tue, 22 Jul 2014 12:44:19 +0000 (14:44 +0200)]
fixed CMake dependent header file generation
Adam Kosiorek [Tue, 1 Jul 2014 09:42:24 +0000 (11:42 +0200)]
examples CMake lists
Adam Kosiorek [Tue, 1 Jul 2014 07:56:20 +0000 (09:56 +0200)]
cmake build system
Jason Yosinski [Sun, 17 Aug 2014 06:41:52 +0000 (23:41 -0700)]
Updated installation docs for OS X 10.9 brew install protobuf as well
Jason Yosinski [Sun, 17 Aug 2014 06:18:12 +0000 (23:18 -0700)]
Updated documentation to include instructions to install protobuf with Python support on Mac OS X
Jeff Donahue [Sun, 17 Aug 2014 00:17:37 +0000 (17:17 -0700)]
Merge pull request #936 from jeffdonahue/not-stage
Add "not_stage" to NetStateRule to exclude NetStates with certain stages
Jeff Donahue [Fri, 15 Aug 2014 21:29:39 +0000 (14:29 -0700)]
Add "not_stage" to NetStateRule to exclude NetStates with certain
stages.
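For example, a rule of roughly this shape (a sketch; the exact field placement and layer definition are assumptions, not taken from this log) would exclude a layer from any NetState whose stages include "deploy":

```protobuf
layers {
  name: "drop1"
  type: DROPOUT
  include { not_stage: "deploy" }
}
```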
Evan Shelhamer [Fri, 15 Aug 2014 21:13:22 +0000 (14:13 -0700)]
[example] set phase test for fully-convolutional model
Evan Shelhamer [Fri, 15 Aug 2014 21:04:59 +0000 (14:04 -0700)]
[example] include imports in net surgery
Evan Shelhamer [Fri, 15 Aug 2014 02:27:03 +0000 (19:27 -0700)]
Merge pull request #897 from ashafaei/eltwise-abs
Absolute Value layer
Alireza Shafaei [Sun, 10 Aug 2014 05:44:12 +0000 (22:44 -0700)]
Added absolute value layer, useful for implementing siamese networks!
This commit also replaces the default caffe_fabs with the MKL/non-MKL implementation of Abs.
Jeff Donahue [Thu, 14 Aug 2014 02:03:11 +0000 (19:03 -0700)]
Merge pull request #923 from yosinski/doc-update
ImageNet tutorial update to merged train_val.prototext
Jason Yosinski [Thu, 14 Aug 2014 00:32:48 +0000 (17:32 -0700)]
Tried to clarify function of `include' lines and train vs. test network differences
Jason Yosinski [Wed, 13 Aug 2014 23:58:04 +0000 (17:58 -0600)]
Updated ImageNet Tutorial to reflect new merged train+val prototxt format. Also corrected 4,500,000 iterations -> 450,000 iterations.
Jeff Donahue [Wed, 13 Aug 2014 21:54:59 +0000 (14:54 -0700)]
Fix from loss-generalization: accidentally removed mid-Forward return
from PowerLayer (caused bad performance for trivial PowerLayer cases...)
Jeff Donahue [Wed, 13 Aug 2014 20:47:44 +0000 (13:47 -0700)]
Merge pull request #686 from jeffdonahue/loss-generalization
Loss generalization
Jeff Donahue [Sun, 13 Jul 2014 23:56:42 +0000 (16:56 -0700)]
Store loss coefficients in layer; use for prettier training output.
Jeff Donahue [Sat, 12 Jul 2014 06:22:21 +0000 (23:22 -0700)]
Add ACCURACY layer and softmax_error output to lenet_consolidated_solver
example.
Jeff Donahue [Sat, 12 Jul 2014 06:06:41 +0000 (23:06 -0700)]
Also display outputs in the train net. (Otherwise, why have them?)
Jeff Donahue [Sat, 12 Jul 2014 05:52:02 +0000 (22:52 -0700)]
Disallow in-place computation in SPLIT layer -- has strange effects in
backward pass when input into a loss.
Jeff Donahue [Sat, 12 Jul 2014 00:21:27 +0000 (17:21 -0700)]
AccuracyLayer only dies when necessary.
Jeff Donahue [Sat, 12 Jul 2014 00:19:17 +0000 (17:19 -0700)]
Net::Init can determine that layers don't need backward if they are not
used to compute the loss.
Jeff Donahue [Fri, 11 Jul 2014 10:17:53 +0000 (03:17 -0700)]
Make multiple losses work by inserting split layers and add some tests for it.
Test that we can call backward with an ACCURACY layer. This currently fails,
but should be possible now that we explicitly associate a loss weight with
each top blob.
Jeff Donahue [Fri, 11 Jul 2014 08:55:17 +0000 (01:55 -0700)]
Generalize loss by allowing any top blob to be used as a loss: its
elements are summed and scaled by a scalar coefficient.
Forward for layers no longer returns a loss; instead all loss layers must have
top blobs. Existing loss layers are given a top blob automatically by
Net::Init, with an associated top_loss_weight of 1 (set in
LossLayer::FurtherSetUp). Due to the increased amount of common SetUp logic,
the SetUp interface is modified such that all subclasses should normally
override FurtherSetUp only, which is called by SetUp.
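The resulting objective can be sketched as follows (hypothetical names; the weights correspond to the per-top loss coefficients described above, with non-loss tops getting weight 0):

```python
def net_loss(top_blobs, loss_weights):
    # Total objective: each top blob's elements are summed and the sum
    # is scaled by that blob's scalar loss weight.
    return sum(w * sum(blob)
               for blob, w in zip(top_blobs, loss_weights))
```

A top with weight 0 contributes nothing, so any blob can be promoted to a loss just by giving it a nonzero coefficient.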
Jeff Donahue [Fri, 11 Jul 2014 08:48:36 +0000 (01:48 -0700)]
Add net tests for loss_weight.
Check that the loss and gradients throughout the net are appropriately scaled
for a few loss_weight values, assuming a default weight of 1 in the loss layer
only. Also modify test_gradient_check_util to associate a loss of 2 rather
than 1 with the top blob, so that loss layer tests fail if they don't scale
their diffs.
Jeff Donahue [Fri, 11 Jul 2014 08:44:49 +0000 (01:44 -0700)]
Add loss_weight to proto, specifying coefficients for each top blob
in the objective function.
Evan Shelhamer [Wed, 13 Aug 2014 17:42:30 +0000 (10:42 -0700)]
[docs] update docs generation for notebook metadata
Evan Shelhamer [Wed, 13 Aug 2014 17:33:53 +0000 (10:33 -0700)]
Merge pull request #921 from shelhamer/notebook-update
Fix plotting and metadata in notebook examples
Evan Shelhamer [Wed, 13 Aug 2014 16:27:56 +0000 (09:27 -0700)]
[example] change notebook name metadata to avoid conflict
see https://github.com/ipython/ipython/issues/5686
Evan Shelhamer [Wed, 13 Aug 2014 16:22:42 +0000 (09:22 -0700)]
[example] fix plt commands in detection
Clemens Korner [Wed, 13 Aug 2014 10:05:30 +0000 (12:05 +0200)]
use plt namespace for imshow in filter_visualization.ipynb
Yangqing Jia [Wed, 13 Aug 2014 05:27:43 +0000 (22:27 -0700)]
Merge pull request #914 from ashafaei/euclidean-loss-fix
Fixed the GPU implementation of EuclideanLoss!
Alireza Shafaei [Tue, 12 Aug 2014 20:54:51 +0000 (13:54 -0700)]
Fixed the GPU implementation of EuclideanLoss to report the loss to the top layer
Jeff Donahue [Tue, 12 Aug 2014 20:04:02 +0000 (13:04 -0700)]
Merge pull request #846 from qipeng/mvn-layer
mean-variance normalization layer
Jeff Donahue [Tue, 12 Aug 2014 19:54:29 +0000 (12:54 -0700)]
Merge pull request #863 from jeffdonahue/lint-check-caffe-fns
add lint check for functions with Caffe alternatives (memcpy, memset)
Jeff Donahue [Wed, 6 Aug 2014 00:33:19 +0000 (17:33 -0700)]
Fix caffe/alt_fn lint errors.
Jeff Donahue [Tue, 5 Aug 2014 23:59:14 +0000 (16:59 -0700)]
Create caffe_{,gpu_}memset functions to replace {m,cudaM}emset's.
Jeff Donahue [Tue, 5 Aug 2014 23:01:11 +0000 (16:01 -0700)]
Add caffe/alt_fn rule to lint checks to check for functions (like memset
& memcpy) with caffe_* alternatives that should be used instead.
Jeff Donahue [Tue, 5 Aug 2014 23:19:31 +0000 (16:19 -0700)]
lint targets should depend on the lint script itself
Evan Shelhamer [Mon, 11 Aug 2014 23:40:02 +0000 (16:40 -0700)]
[examples] fix links in feature extraction
Evan Shelhamer [Mon, 11 Aug 2014 23:11:58 +0000 (16:11 -0700)]
Merge pull request #910 from sergeyk/dev
web demo updates
Sergey Karayev [Mon, 11 Aug 2014 22:59:48 +0000 (15:59 -0700)]
[docs] ‘maximally accurate’ in the web demo explanation. closes #905