Adam Kosiorek [Thu, 24 Jul 2014 13:27:53 +0000 (15:27 +0200)]
cpu only build works
Adam Kosiorek [Thu, 24 Jul 2014 11:58:37 +0000 (13:58 +0200)]
cpu only
Adam Kosiorek [Thu, 24 Jul 2014 12:31:52 +0000 (14:31 +0200)]
restoring travis.yml
Adam Kosiorek [Thu, 24 Jul 2014 11:44:05 +0000 (13:44 +0200)]
cmake from binaries
Adam Kosiorek [Wed, 23 Jul 2014 07:41:36 +0000 (09:41 +0200)]
cmake build configuration for travis-ci
Adam Kosiorek [Wed, 23 Jul 2014 07:12:44 +0000 (09:12 +0200)]
fixed lint issues
Adam Kosiorek [Tue, 22 Jul 2014 12:44:19 +0000 (14:44 +0200)]
fixed CMake-dependent header file generation
Adam Kosiorek [Tue, 1 Jul 2014 09:42:24 +0000 (11:42 +0200)]
examples CMake lists
Adam Kosiorek [Tue, 1 Jul 2014 07:56:20 +0000 (09:56 +0200)]
cmake build system
Jeff Donahue [Sun, 17 Aug 2014 00:17:37 +0000 (17:17 -0700)]
Merge pull request #936 from jeffdonahue/not-stage
Add "not_stage" to NetStateRule to exclude NetStates with certain stages
Jeff Donahue [Fri, 15 Aug 2014 21:29:39 +0000 (14:29 -0700)]
Add "not_stage" to NetStateRule to exclude NetStates with certain
stages.
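A minimal Python sketch of the matching semantics described here, assuming the rule carries lists of required "stage" and excluded "not_stage" names (function and variable names are illustrative, not Caffe's actual code):

    def state_meets_rule(state_stages, rule_stages, rule_not_stages):
        # Every required stage must be present in the NetState...
        if any(s not in state_stages for s in rule_stages):
            return False
        # ...and none of the excluded (not_stage) entries may be present.
        if any(s in state_stages for s in rule_not_stages):
            return False
        return True

    # A rule with not_stage "deploy" matches a train-only state but not a
    # state that also carries the "deploy" stage.
    assert state_meets_rule({"train"}, [], ["deploy"])
    assert not state_meets_rule({"train", "deploy"}, [], ["deploy"])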
Evan Shelhamer [Fri, 15 Aug 2014 21:13:22 +0000 (14:13 -0700)]
[example] set phase test for fully-convolutional model
Evan Shelhamer [Fri, 15 Aug 2014 21:04:59 +0000 (14:04 -0700)]
[example] include imports in net surgery
Evan Shelhamer [Fri, 15 Aug 2014 02:27:03 +0000 (19:27 -0700)]
Merge pull request #897 from ashafaei/eltwise-abs
Absolute Value layer
Alireza Shafaei [Sun, 10 Aug 2014 05:44:12 +0000 (22:44 -0700)]
Added an absolute value layer, useful for implementing Siamese networks.
This commit also replaces the default caffe_fabs with an MKL/non-MKL implementation of Abs.
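As a rough NumPy sketch of what an absolute value layer computes (a sketch of the math, not the layer's actual code): the forward pass is elementwise |x| and the backward pass scales the top gradient by sign(x).

    import numpy as np

    def abs_forward(bottom):
        # Elementwise absolute value.
        return np.abs(bottom)

    def abs_backward(bottom, top_diff):
        # d|x|/dx = sign(x); the subgradient at x == 0 is taken as 0 here.
        return np.sign(bottom) * top_diff

    x = np.array([-2.0, 0.0, 3.0])
    print(abs_forward(x))               # [2. 0. 3.]
    print(abs_backward(x, np.ones(3)))  # [-1.  0.  1.]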
Jeff Donahue [Thu, 14 Aug 2014 02:03:11 +0000 (19:03 -0700)]
Merge pull request #923 from yosinski/doc-update
ImageNet tutorial update to merged train_val.prototxt
Jason Yosinski [Thu, 14 Aug 2014 00:32:48 +0000 (17:32 -0700)]
Clarified the function of `include` lines and the train vs. test network differences
Jason Yosinski [Wed, 13 Aug 2014 23:58:04 +0000 (17:58 -0600)]
Updated ImageNet Tutorial to reflect new merged train+val prototxt format. Also corrected 4,500,000 iterations -> 450,000 iterations.
Jeff Donahue [Wed, 13 Aug 2014 21:54:59 +0000 (14:54 -0700)]
Fix from loss-generalization: accidentally removed mid-Forward return
from PowerLayer (caused bad performance for trivial PowerLayer cases...)
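For context, PowerLayer computes (shift + scale * x) ** power, and the return mentioned here short-circuits the trivial settings. A hedged NumPy sketch of that early return (the trivial-case condition shown is an assumption):

    import numpy as np

    def power_forward(x, power=1.0, scale=1.0, shift=0.0):
        # Trivial case: the layer reduces to a copy, so return early instead
        # of running the full shift/scale/pow pipeline.
        if power == 1.0 and scale == 1.0 and shift == 0.0:
            return x.copy()
        return np.power(shift + scale * x, power)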
Jeff Donahue [Wed, 13 Aug 2014 20:47:44 +0000 (13:47 -0700)]
Merge pull request #686 from jeffdonahue/loss-generalization
Loss generalization
Jeff Donahue [Sun, 13 Jul 2014 23:56:42 +0000 (16:56 -0700)]
Store loss coefficients in layer; use for prettier training output.
Jeff Donahue [Sat, 12 Jul 2014 06:22:21 +0000 (23:22 -0700)]
Add ACCURACY layer and softmax_error output to lenet_consolidated_solver
example.
Jeff Donahue [Sat, 12 Jul 2014 06:06:41 +0000 (23:06 -0700)]
Also display outputs in the train net. (Otherwise, why have them?)
Jeff Donahue [Sat, 12 Jul 2014 05:52:02 +0000 (22:52 -0700)]
Disallow in-place computation in SPLIT layer -- it has strange effects in the
backward pass when the input feeds into a loss.
Jeff Donahue [Sat, 12 Jul 2014 00:21:27 +0000 (17:21 -0700)]
AccuracyLayer only dies when necessary.
Jeff Donahue [Sat, 12 Jul 2014 00:19:17 +0000 (17:19 -0700)]
Net::Init can determine that layers don't need backward if they are not
used to compute the loss.
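A minimal sketch of that idea (not Net::Init's actual code): walk the layers in reverse and mark a layer as needing backward only if it is a loss or one of its top blobs feeds, directly or transitively, into a loss.

    # Each layer is (name, bottom blob names, top blob names, is_loss).
    def layers_needing_backward(layers):
        blobs_under_loss = set()
        need_backward = set()
        for name, bottoms, tops, is_loss in reversed(layers):
            if is_loss or any(t in blobs_under_loss for t in tops):
                need_backward.add(name)
                blobs_under_loss.update(bottoms)
        return need_backward

    net = [("conv1", ["data"], ["conv1"], False),
           ("accuracy", ["conv1", "label"], ["accuracy"], False),
           ("loss", ["conv1", "label"], ["loss"], True)]
    # conv1 and loss need backward; accuracy does not.
    print(layers_needing_backward(net))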
Jeff Donahue [Fri, 11 Jul 2014 10:17:53 +0000 (03:17 -0700)]
Make multiple losses work by inserting split layers and add some tests for it.
Test that we can call backward with an ACCURACY layer. This currently fails,
but should be possible now that we explicitly associate a loss weight with
each top blob.
Jeff Donahue [Fri, 11 Jul 2014 08:55:17 +0000 (01:55 -0700)]
Generalize loss by allowing any top blob to be used as a loss, with its
elements summed and weighted by a scalar coefficient.
Forward for layers no longer returns a loss; instead all loss layers must have
top blobs. Existing loss layers are given a top blob automatically by
Net::Init, with an associated top_loss_weight of 1 (set in
LossLayer::FurtherSetUp). Due to the increased amount of common SetUp logic,
the SetUp interface is modified such that all subclasses should normally
override FurtherSetUp only, which is called by SetUp.
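In other words, the overall objective becomes a weighted sum over the loss-bearing top blobs: loss = sum_i top_loss_weight_i * sum(top_i). A tiny NumPy sketch of that bookkeeping (blob names and values are made up):

    import numpy as np

    def total_loss(tops, loss_weights):
        # tops: blob name -> array; loss_weights: blob name -> scalar coefficient.
        # Tops with weight 0 (e.g. an accuracy output) contribute nothing.
        return sum(w * np.sum(tops[name])
                   for name, w in loss_weights.items() if w != 0)

    tops = {"softmax_loss": np.array([2.3]),
            "aux_loss": np.array([0.4]),
            "accuracy": np.array([0.71])}
    weights = {"softmax_loss": 1.0, "aux_loss": 0.1, "accuracy": 0.0}
    print(total_loss(tops, weights))  # ~2.34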
Jeff Donahue [Fri, 11 Jul 2014 08:48:36 +0000 (01:48 -0700)]
Add net tests for loss_weight.
Check that the loss and gradients throughout the net are appropriately scaled
for a few loss_weight values, assuming a default weight of 1 in the loss layer
only. Also modify test_gradient_check_util to associate a loss of 2 rather
than 1 with the top blob, so that loss layer tests fail if they don't scale
their diffs.
Jeff Donahue [Fri, 11 Jul 2014 08:44:49 +0000 (01:44 -0700)]
Add loss_weight to proto, specifying coefficients for each top blob
in the objective function.
Evan Shelhamer [Wed, 13 Aug 2014 17:42:30 +0000 (10:42 -0700)]
[docs] update docs generation for notebook metadata
Evan Shelhamer [Wed, 13 Aug 2014 17:33:53 +0000 (10:33 -0700)]
Merge pull request #921 from shelhamer/notebook-update
Fix plotting and metadata in notebook examples
Evan Shelhamer [Wed, 13 Aug 2014 16:27:56 +0000 (09:27 -0700)]
[example] change notebook name metadata to avoid conflict
see https://github.com/ipython/ipython/issues/5686
Evan Shelhamer [Wed, 13 Aug 2014 16:22:42 +0000 (09:22 -0700)]
[example] fix plt commands in detection
Clemens Korner [Wed, 13 Aug 2014 10:05:30 +0000 (12:05 +0200)]
use plt namespace for imshow in filter_visualization.ipynb
Yangqing Jia [Wed, 13 Aug 2014 05:27:43 +0000 (22:27 -0700)]
Merge pull request #914 from ashafaei/euclidean-loss-fix
Fixed the GPU implementation of EuclideanLoss!
Alireza Shafaei [Tue, 12 Aug 2014 20:54:51 +0000 (13:54 -0700)]
Fixed the GPU implementation of EuclideanLoss to report the loss to the top layer
Jeff Donahue [Tue, 12 Aug 2014 20:04:02 +0000 (13:04 -0700)]
Merge pull request #846 from qipeng/mvn-layer
mean-variance normalization layer
Jeff Donahue [Tue, 12 Aug 2014 19:54:29 +0000 (12:54 -0700)]
Merge pull request #863 from jeffdonahue/lint-check-caffe-fns
add lint check for functions with Caffe alternatives (memcpy, memset)
Jeff Donahue [Wed, 6 Aug 2014 00:33:19 +0000 (17:33 -0700)]
Fix caffe/alt_fn lint errors.
Jeff Donahue [Tue, 5 Aug 2014 23:59:14 +0000 (16:59 -0700)]
Create caffe_{,gpu_}memset functions to replace {m,cudaM}emset calls.
Jeff Donahue [Tue, 5 Aug 2014 23:01:11 +0000 (16:01 -0700)]
Add caffe/alt_fn rule to lint checks to check for functions (like memset
& memcpy) with caffe_* alternatives that should be used instead.
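A rough sketch of what such a lint rule can look like (the regex, the message text, and the caffe_copy mapping for memcpy are assumptions here, not the actual cpplint extension):

    import re

    ALT_FN = {"memset": "caffe_memset / caffe_gpu_memset",
              "memcpy": "caffe_copy"}

    def check_alt_fn(line):
        # Flag bare calls like "memset(" that have caffe_* alternatives.
        errors = []
        for fn, alt in ALT_FN.items():
            if re.search(r"(?<![\w.])" + fn + r"\s*\(", line):
                errors.append("use %s instead of %s" % (alt, fn))
        return errors

    print(check_alt_fn("memset(data, 0, count);"))        # flagged
    print(check_alt_fn("caffe_memset(count, 0, data);"))  # clean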
Jeff Donahue [Tue, 5 Aug 2014 23:19:31 +0000 (16:19 -0700)]
lint targets should depend on the lint script itself
Evan Shelhamer [Mon, 11 Aug 2014 23:40:02 +0000 (16:40 -0700)]
[examples] fix links in feature extraction
Evan Shelhamer [Mon, 11 Aug 2014 23:11:58 +0000 (16:11 -0700)]
Merge pull request #910 from sergeyk/dev
web demo updates
Sergey Karayev [Mon, 11 Aug 2014 22:59:48 +0000 (15:59 -0700)]
[docs] ‘maximally accurate’ in the web demo explanation. closes #905
Sergey Karayev [Mon, 11 Aug 2014 22:52:56 +0000 (15:52 -0700)]
[docs] [fix] closes #899
Evan Shelhamer [Mon, 11 Aug 2014 22:17:46 +0000 (15:17 -0700)]
Merge pull request #909 from sergeyk/dev
[docs] sort examples
Sergey Karayev [Mon, 11 Aug 2014 22:09:53 +0000 (15:09 -0700)]
[docs] sorting of examples. If it doesn’t work for you, update Jekyll.
qipeng [Mon, 11 Aug 2014 18:09:57 +0000 (11:09 -0700)]
lint
qipeng [Mon, 11 Aug 2014 17:48:57 +0000 (10:48 -0700)]
minor fix for layer factory
qipeng [Mon, 11 Aug 2014 17:44:27 +0000 (10:44 -0700)]
added cross-channel MVN and mean-only normalization, registered the layer in the layer factory, and moved it to common_layers
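A NumPy sketch of mean-variance normalization over an N x C x H x W blob, covering the variants named above (mean-only, and normalizing per channel vs. across channels); it mirrors the description, not the layer's actual code:

    import numpy as np

    def mvn(x, across_channels=False, normalize_variance=True, eps=1e-9):
        # x: (N, C, H, W). Normalize each sample, either within each channel
        # or jointly across all of its channels.
        axes = (1, 2, 3) if across_channels else (2, 3)
        mean = x.mean(axis=axes, keepdims=True)
        y = x - mean                          # mean-only normalization
        if normalize_variance:
            y = y / (x.std(axis=axes, keepdims=True) + eps)
        return y

    x = np.random.rand(2, 3, 4, 4).astype(np.float32)
    print(mvn(x).mean(axis=(2, 3)))  # ~0 for every sample/channel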
Yangqing Jia [Mon, 11 Aug 2014 17:35:04 +0000 (10:35 -0700)]
Merge pull request #904 from Yangqing/sweep
Add a leveldb option function in io.hpp/cpp
qipeng [Mon, 4 Aug 2014 18:29:54 +0000 (11:29 -0700)]
addressed Jeff's comment
qipeng [Mon, 4 Aug 2014 16:27:14 +0000 (09:27 -0700)]
lint
qipeng [Mon, 4 Aug 2014 16:03:34 +0000 (09:03 -0700)]
reduced blas calls
qipeng [Mon, 4 Aug 2014 01:19:58 +0000 (18:19 -0700)]
mean-variance normalization layer
Yangqing Jia [Mon, 11 Aug 2014 16:16:51 +0000 (09:16 -0700)]
Add a leveldb option function in io.hpp/cpp
Jonathan L Long [Mon, 11 Aug 2014 04:23:34 +0000 (21:23 -0700)]
fix formatting error in blob.hpp
Evan Shelhamer [Sun, 10 Aug 2014 05:24:15 +0000 (22:24 -0700)]
output loss for caffenet and alexnet train/val models
Evan Shelhamer [Sun, 10 Aug 2014 03:43:40 +0000 (20:43 -0700)]
default raw_scale in python scripts to ImageNet model value
Evan Shelhamer [Sat, 9 Aug 2014 21:20:44 +0000 (14:20 -0700)]
Merge pull request #891 from kloudkl/gflags_namespace
Fix the gflags namespace issue
Jeff Donahue [Sat, 9 Aug 2014 19:43:42 +0000 (12:43 -0700)]
Merge pull request #892 from kloudkl/distribute_generated_proto_headers
Distribute the generated proto header files
Kai Li [Sat, 9 Aug 2014 12:41:58 +0000 (20:41 +0800)]
Distribute the generated proto header files
Kai Li [Sat, 9 Aug 2014 12:11:38 +0000 (20:11 +0800)]
Fix the gflags namespace issue
Sergio Guadarrama [Sat, 9 Aug 2014 09:29:05 +0000 (02:29 -0700)]
Merge pull request #888 from ronghanghu/matcaffe-add-check
add necessary input checks for matcaffe
Ronghang Hu [Fri, 8 Aug 2014 22:47:23 +0000 (15:47 -0700)]
add necessary input checks for matcaffe
Evan Shelhamer [Fri, 8 Aug 2014 19:40:42 +0000 (12:40 -0700)]
[example] fix example names
Evan Shelhamer [Fri, 8 Aug 2014 19:01:39 +0000 (12:01 -0700)]
[docs] fix find complaint in example gathering script
Evan Shelhamer [Fri, 8 Aug 2014 18:18:46 +0000 (11:18 -0700)]
[example] fix broken links in ImageNet recipe
Evan Shelhamer [Thu, 7 Aug 2014 21:17:48 +0000 (14:17 -0700)]
Back-merge documentation and fixes
Jeff Donahue [Thu, 7 Aug 2014 20:01:14 +0000 (13:01 -0700)]
Merge pull request #872 from shelhamer/caffe-tool
Improve caffe tool
Evan Shelhamer [Thu, 7 Aug 2014 18:56:45 +0000 (11:56 -0700)]
consolidate gpu and device_id args in caffe tool
Evan Shelhamer [Thu, 7 Aug 2014 06:56:06 +0000 (23:56 -0700)]
update cli usage in examples
Evan Shelhamer [Thu, 7 Aug 2014 06:48:29 +0000 (23:48 -0700)]
fix deprecation warnings
Evan Shelhamer [Thu, 7 Aug 2014 06:38:17 +0000 (23:38 -0700)]
consolidate test into caffe cli
Evan Shelhamer [Thu, 7 Aug 2014 06:22:56 +0000 (23:22 -0700)]
comment caffe cli
Evan Shelhamer [Thu, 7 Aug 2014 06:22:40 +0000 (23:22 -0700)]
check required caffe cli args
Evan Shelhamer [Thu, 7 Aug 2014 06:22:13 +0000 (23:22 -0700)]
rename caffe cli args and revise text
Evan Shelhamer [Thu, 7 Aug 2014 03:17:00 +0000 (20:17 -0700)]
give usage message for caffe cli
- call format
- commands
- flags
Evan Shelhamer [Thu, 7 Aug 2014 03:10:27 +0000 (20:10 -0700)]
output INFO from caffe cli to stderr by default
Evan Shelhamer [Thu, 7 Aug 2014 02:45:05 +0000 (19:45 -0700)]
consolidate GPU flag for caffe cli
Evan Shelhamer [Thu, 7 Aug 2014 02:22:58 +0000 (19:22 -0700)]
rename tools
Alireza Shafaei [Wed, 6 Aug 2014 04:34:45 +0000 (21:34 -0700)]
Painless binary mean conversion to MATLAB matrices.
Evan Shelhamer [Thu, 7 Aug 2014 01:39:24 +0000 (18:39 -0700)]
Merge pull request #868 from shelhamer/license-copyright
Clarify the license and copyright terms of the project
Evan Shelhamer [Wed, 6 Aug 2014 07:36:00 +0000 (00:36 -0700)]
lint for copyright
Evan Shelhamer [Wed, 6 Aug 2014 07:34:05 +0000 (00:34 -0700)]
[docs] detail attribution, license, and copyright for development
Evan Shelhamer [Wed, 6 Aug 2014 07:43:17 +0000 (00:43 -0700)]
LICENSE governs the whole project so strip file headers
Evan Shelhamer [Wed, 6 Aug 2014 07:16:36 +0000 (00:16 -0700)]
clarify the license and copyright terms of the project
As has been the case, contributions are copyright their respective
contributors and the project is BSD-2 licensed. By contributing to the
project, contributors release their contributions under these copyright
and license terms as declared in LICENSE.
Evan Shelhamer [Wed, 6 Aug 2014 06:24:41 +0000 (23:24 -0700)]
Merge pull request #816 from shelhamer/pycaffe-labels-grayscale-attrs-examples
Improve and polish pycaffe
Evan Shelhamer [Tue, 5 Aug 2014 17:14:35 +0000 (10:14 -0700)]
drop np.asarray() in favor of declaration (~1.75x speedup)
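Presumably "declaration" here means filling a preallocated float32 array instead of converting a Python list with np.asarray; a hedged illustration of the pattern (function names are made up):

    import numpy as np

    def batch_via_asarray(images):
        # One-shot conversion: dtype inference plus an extra copy of every image.
        return np.asarray(images, dtype=np.float32)

    def batch_via_declaration(images, shape):
        # Declare the output array up front and fill it in place.
        out = np.zeros((len(images),) + shape, dtype=np.float32)
        for i, im in enumerate(images):
            out[i] = im
        return out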
Evan Shelhamer [Sat, 2 Aug 2014 01:46:17 +0000 (18:46 -0700)]
fix pycaffe context cropping with or without mean
Evan Shelhamer [Sat, 2 Aug 2014 00:55:30 +0000 (17:55 -0700)]
take array in pycaffe `Net.set_mean()` instead of file path
Evan Shelhamer [Thu, 31 Jul 2014 23:19:20 +0000 (16:19 -0700)]
fix pycaffe input processing
- load an image as [0,1] single / np.float32 according to Python convention
- fix input scaling during preprocessing:
  - scale input for preprocessing by `raw_scale` e.g. to map an image
    to [0, 255] for the CaffeNet and AlexNet ImageNet models
  - scale feature space by `input_scale` after mean subtraction
- switch examples to raw scale for ImageNet models
- fix #525
- preserve type after resizing.
- resize 1, 3, or K channel images with special casing between
  skimage.transform (1 and 3) and scipy.ndimage (K) for speed
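Put together, the order described above is: resize, scale the [0, 1] image to the raw range with `raw_scale`, subtract the mean, scale the result by `input_scale`, then feed the network. A hedged NumPy sketch of that pipeline (not pycaffe's actual code; the channel-swap step and argument defaults are assumptions):

    import numpy as np

    def preprocess(img, mean=None, raw_scale=255.0, input_scale=1.0,
                   channel_swap=(2, 1, 0)):
        # img: H x W x K float image in [0, 1], already resized to the input size.
        x = img.astype(np.float32).transpose(2, 0, 1)  # to K x H x W
        x = x[list(channel_swap), :, :]                # e.g. RGB -> BGR
        x *= raw_scale                                 # [0, 1] -> [0, 255] for ImageNet models
        if mean is not None:
            x -= mean                                  # mean subtraction in the raw range
        x *= input_scale                               # feature-space scaling after the mean
        return x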
Evan Shelhamer [Tue, 29 Jul 2014 03:16:52 +0000 (20:16 -0700)]
[example] include prediction in classification, time on GTX 770
Evan Shelhamer [Mon, 28 Jul 2014 23:08:44 +0000 (16:08 -0700)]
[example] fix example outputs
With the right input processing, the actual image classification output
is sensible.
- filter visualization example's top prediction is "tabby cat"
- net surgery fully-convolutional output map is better
Fix incorrect class names too.
Evan Shelhamer [Tue, 29 Jul 2014 03:10:16 +0000 (20:10 -0700)]
[example] add caffe to pythonpath in all examples
Evan Shelhamer [Mon, 28 Jul 2014 21:55:13 +0000 (14:55 -0700)]
define caffe.Net input preprocessing members by boost::python
define `Net.{mean, input_scale, channel_swap}` on the boost::python side
so that the members always exist. drop ugly initialization logic.
Jeff Donahue [Tue, 5 Aug 2014 17:51:07 +0000 (10:51 -0700)]
Merge pull request #856 from jeffdonahue/lint-tweaks
add header alphabetization to lint checks
Jeff Donahue [Tue, 5 Aug 2014 16:22:49 +0000 (09:22 -0700)]
Merge pull request #859 from beam2d/fix-cifar-lrn-region
Fix conflict on setting of LRN layers between train/test net and deploy net