Jon Long [Mon, 9 Feb 2015 09:13:16 +0000 (01:13 -0800)]
Merge pull request #1838 from DmitryUlyanov/dev
Fix bug in MemoryData that prevented using arrays with n_ * size_ > 2^31
Dmitry Ulyanov [Fri, 6 Feb 2015 07:56:47 +0000 (10:56 +0300)]
Allow using arrays with n_ * size_ > 2^31
uint64_t
size_t
Evan Shelhamer [Sat, 7 Feb 2015 06:05:12 +0000 (22:05 -0800)]
Merge pull request #1416 from mtamburrano/matVector
Feed cv::Mats to MemoryDataLayer and set batch size on-the-fly.
Evan Shelhamer [Sat, 7 Feb 2015 05:37:54 +0000 (21:37 -0800)]
groom #1416
- keep current `DataTransformer` check so that datums can be transformed
into a blob incrementally
- standardize check messages
- include opencv where needed, drop unneeded OS X guards
TODO these tests need to be de-duped
manuele [Mon, 12 Jan 2015 11:22:49 +0000 (12:22 +0100)]
removed needs_reshape_; renamed ChangeBatchSize to set_batch_size
manuele [Thu, 8 Jan 2015 18:26:20 +0000 (19:26 +0100)]
small fixes
manuele [Tue, 9 Dec 2014 14:12:28 +0000 (15:12 +0100)]
MemoryDataLayer now correctly consumes batch_size elements
manuele [Mon, 10 Nov 2014 18:24:02 +0000 (19:24 +0100)]
MemoryDataLayer now accepts dynamic batch_size
manuele [Fri, 7 Nov 2014 19:01:33 +0000 (20:01 +0100)]
Added opencv vector<Mat> to memory data layer with tests
Evan Shelhamer [Sat, 7 Feb 2015 04:40:40 +0000 (20:40 -0800)]
Merge pull request #1789 from SaganBolliger/softmax_loss_gpu
GPU version of SoftmaxWithLossLayer
Sagan Bolliger [Fri, 6 Feb 2015 20:54:32 +0000 (12:54 -0800)]
Added GPU implementation of SoftmaxWithLossLayer.
Jeff Donahue [Fri, 6 Feb 2015 19:47:49 +0000 (11:47 -0800)]
Merge pull request #1837 from shelhamer/image-fail-die
Die on inputs that fail to load
Jeff Donahue [Fri, 6 Feb 2015 19:46:48 +0000 (11:46 -0800)]
Merge pull request #1840 from shelhamer/fix-power-test
Fix PowerLayer gradient check failures by reducing step size
Evan Shelhamer [Fri, 6 Feb 2015 09:37:02 +0000 (01:37 -0800)]
reduce step size in PowerLayer gradient checks: fix #1252
The gradient checker fails on certain elements of the PowerLayer checks,
but only 1-3 of the 120 tested elements fail, and only sometimes. This is
due not to any numerical issue in the PowerLayer but to the distribution
of the random inputs for the checks.
boost 1.56 switched the normal distribution RNG engine from Box-Muller
to Ziggurat.
Evan Shelhamer [Fri, 6 Feb 2015 08:51:25 +0000 (00:51 -0800)]
build with libc++ on Yosemite with CUDA 7
Jeff Donahue [Fri, 6 Feb 2015 04:05:20 +0000 (20:05 -0800)]
Merge pull request #1836 from jeffdonahue/loss-param-upgrade-fix
fix for layer-type-str: loss_param in V1LayerParameter
Jeff Donahue [Fri, 6 Feb 2015 03:28:42 +0000 (19:28 -0800)]
fix for layer-type-str: loss_param and the DECONVOLUTION type should have
been included in V1LayerParameter; now they get upgraded
Evan Shelhamer [Fri, 6 Feb 2015 00:41:46 +0000 (16:41 -0800)]
die on inputs to IMAGE_DATA that fail to load
It's better to know than march silently on.
Jeff Donahue [Fri, 6 Feb 2015 00:06:17 +0000 (16:06 -0800)]
Merge pull request #1694 from jeffdonahue/layer-type-str
Layer type is a string
Jeff Donahue [Thu, 5 Feb 2015 23:17:24 +0000 (15:17 -0800)]
Upgrade existing nets using upgrade_net_proto_text tool
Restore comments afterwards
Jeff Donahue [Thu, 15 Jan 2015 09:43:23 +0000 (01:43 -0800)]
start layer parameter field IDs at 100
(always want them printed at the end, and want to allow more fields to
be added in the future, so reserve fields 10-99 for that purpose)
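The numbering scheme can be illustrated with a hypothetical .proto fragment (field and message names are illustrative, not the actual LayerParameter definition): general fields keep the low numbers, 10-99 stay reserved for future general fields, and layer-specific parameter messages start at 100 so protobuf's text printer, which orders by field number, always emits them last.

```proto
message LayerParameter {
  // General fields: low numbers, printed first.
  optional string name = 1;
  optional string type = 2;
  repeated string bottom = 3;
  repeated string top = 4;
  // Field IDs 10-99 reserved for future general fields.

  // Layer-specific parameters: IDs start at 100, printed last.
  optional ConvolutionParameter convolution_param = 100;
  optional DataParameter data_param = 101;
}
```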
Jeff Donahue [Thu, 15 Jan 2015 08:13:23 +0000 (00:13 -0800)]
get rid of NetParameterPrettyPrint as layer is now after inputs
(whoohoo)
Jeff Donahue [Thu, 15 Jan 2015 04:17:26 +0000 (20:17 -0800)]
add message ParamSpec to replace param name, blobs_lr, weight_decay, ...
Jeff Donahue [Thu, 15 Jan 2015 03:00:58 +0000 (19:00 -0800)]
add test that all V1 layer type enum values upgrade to valid V2 string
types
Jeff Donahue [Thu, 15 Jan 2015 02:46:21 +0000 (18:46 -0800)]
add v1 to v2 upgrade tests
Jeff Donahue [Thu, 15 Jan 2015 02:31:23 +0000 (18:31 -0800)]
restore test_upgrade_proto to dev version
Jeff Donahue [Thu, 15 Jan 2015 00:55:14 +0000 (16:55 -0800)]
automagic upgrade for v1->v2
Jeff Donahue [Wed, 14 Jan 2015 23:07:20 +0000 (15:07 -0800)]
restore upgrade_proto
Jeff Donahue [Tue, 13 Jan 2015 01:02:15 +0000 (17:02 -0800)]
'layers' -> 'layer'
Jeff Donahue [Mon, 12 Jan 2015 21:42:51 +0000 (13:42 -0800)]
Add unit test for LayerRegistry::CreateLayer
Jeff Donahue [Mon, 12 Jan 2015 22:27:06 +0000 (14:27 -0800)]
DataLayer and HDF5OutputLayer can be constructed and destroyed without
errors
Jeff Donahue [Thu, 8 Jan 2015 04:41:09 +0000 (20:41 -0800)]
Layer type is a string
Evan Shelhamer [Wed, 4 Feb 2015 18:15:21 +0000 (10:15 -0800)]
fix Nesterov typo found by @bamos
Pannous [Fri, 30 Jan 2015 12:38:48 +0000 (13:38 +0100)]
fixed small bug: output label_file -> label_filename
Jeff Donahue [Mon, 2 Feb 2015 20:38:43 +0000 (12:38 -0800)]
add space after "Loading mean file from"
Evan Shelhamer [Mon, 2 Feb 2015 17:45:51 +0000 (09:45 -0800)]
fix GoogLeNet license overwritten by back-merge (see #1650)
Evan Shelhamer [Sun, 1 Feb 2015 07:53:58 +0000 (23:53 -0800)]
Merge pull request #1615 from longjon/deconv-layer
Add deconvolution layer with refactoring of convolution layer to share code
Evan Shelhamer [Fri, 30 Jan 2015 04:58:58 +0000 (20:58 -0800)]
Merge pull request #1748 from longjon/db-wrappers
Simple database wrappers
Evan Shelhamer [Fri, 30 Jan 2015 04:52:32 +0000 (20:52 -0800)]
Merge pull request #1794 from shelhamer/no-dump-net
drop dump_network tool
Evan Shelhamer [Fri, 30 Jan 2015 04:52:16 +0000 (20:52 -0800)]
Merge pull request #1654 from longjon/softmax-missing-values
Add missing value support to SoftmaxLossLayer
Jeff Donahue [Fri, 30 Jan 2015 03:17:46 +0000 (19:17 -0800)]
Merge pull request #1753 from jeffdonahue/enhance-debug-info
Enhance debug info
Jeff Donahue [Mon, 19 Jan 2015 19:52:04 +0000 (11:52 -0800)]
debug_info in NetParameter so it can be enabled outside training
Jeff Donahue [Tue, 7 Oct 2014 06:46:05 +0000 (23:46 -0700)]
debug_info: print param (and gradient) stats for whole Net after Backward
Jeff Donahue [Fri, 30 Jan 2015 03:07:12 +0000 (19:07 -0800)]
Add BlobMathTest with unit tests for sumsq and asum
Jeff Donahue [Tue, 7 Oct 2014 06:08:38 +0000 (23:08 -0700)]
Blob: add sumsq_{data,diff} methods
Jeff Donahue [Wed, 24 Sep 2014 22:52:14 +0000 (15:52 -0700)]
Enhancements for debug_info to display more information.
Now displays for:
- net inputs
- test nets
- params in ForwardDebugInfo
Jonathan L Long [Tue, 27 Jan 2015 21:27:48 +0000 (13:27 -0800)]
[test] gradient checks for softmax ignore_label and normalize: false
Jonathan L Long [Tue, 27 Jan 2015 21:09:32 +0000 (13:09 -0800)]
document the loss_param options to SoftmaxWithLossLayer
Jonathan L Long [Tue, 27 Jan 2015 20:24:39 +0000 (12:24 -0800)]
[test] simple test for DeconvolutionLayer
Jonathan L Long [Tue, 27 Jan 2015 18:45:41 +0000 (10:45 -0800)]
document DeconvolutionLayer
Evan Shelhamer [Mon, 26 Jan 2015 07:13:52 +0000 (23:13 -0800)]
Merge pull request #1555 from drdan14/draw-net-improvements
Improvements to network drawing via draw_net.py
Evan Shelhamer [Mon, 26 Jan 2015 07:08:38 +0000 (23:08 -0800)]
Merge pull request #1632 from 7hil/cifar_lmdb
switch cifar10 example to lmdb
Evan Shelhamer [Mon, 26 Jan 2015 07:06:57 +0000 (23:06 -0800)]
Merge pull request #1746 from dj1989/mat_hdf5_demo
Matlab demo for Caffe-compatible HDF5 read/write
Evan Shelhamer [Mon, 26 Jan 2015 07:05:33 +0000 (23:05 -0800)]
Merge pull request #1755 from jeffdonahue/softmax-optimization
SoftmaxLayer GPU optimization
Evan Shelhamer [Mon, 26 Jan 2015 04:37:52 +0000 (20:37 -0800)]
[pycaffe] de-dupe imports
Evan Shelhamer [Mon, 26 Jan 2015 00:04:51 +0000 (16:04 -0800)]
[example] lenet early stopping -> mnist examples
Jeff Donahue [Sun, 25 Jan 2015 22:06:48 +0000 (14:06 -0800)]
Merge pull request #1754 from jeffdonahue/softmax-loss-fix
SoftmaxWithLossLayer: use CreateLayer
Evan Shelhamer [Sat, 24 Jan 2015 06:46:53 +0000 (22:46 -0800)]
drop dump_network tool
Nets are better serialized as a single binaryproto or saved however
desired through the Python and MATLAB interfaces.
Jeff Donahue [Fri, 23 Jan 2015 04:03:15 +0000 (20:03 -0800)]
Merge pull request #1787 from shelhamer/pytest-caffe-set
[fix] align pytest for solver with #1728
Evan Shelhamer [Fri, 23 Jan 2015 03:54:57 +0000 (19:54 -0800)]
[fix] align pytest for solver with #1728
Jeff Donahue [Fri, 23 Jan 2015 03:45:34 +0000 (19:45 -0800)]
Merge pull request #1786 from xianjiec/dev
fix bugs by adding const
Xianjie Chen [Fri, 23 Jan 2015 03:27:24 +0000 (19:27 -0800)]
fix bugs by adding const
Evan Shelhamer [Thu, 22 Jan 2015 08:29:35 +0000 (00:29 -0800)]
Merge pull request #1473 from longjon/pytest
Python testing
Jeff Donahue [Sun, 2 Nov 2014 09:56:19 +0000 (01:56 -0800)]
hdf5_save_nd_dataset takes a const string& (instead of const string)
Jeff Donahue [Sun, 2 Nov 2014 04:37:05 +0000 (21:37 -0700)]
SoftmaxWithLossLayer: use CreateLayer so that a CuDNNSoftmaxLayer
is created if available
Evan Shelhamer [Wed, 21 Jan 2015 23:43:49 +0000 (15:43 -0800)]
Back-merge fixes + docs
and other fixes and documentation updates.
Jeff Donahue [Sun, 2 Nov 2014 04:22:11 +0000 (21:22 -0700)]
Unroll kernels in SoftmaxLayer...from terrible performance to mediocre
performance.
Jon Long [Tue, 20 Jan 2015 00:59:48 +0000 (16:59 -0800)]
Merge pull request #1756 from jeffdonahue/max-total-bytes-limit
Max out the protobuf file read size limit
Jeff Donahue [Thu, 16 Oct 2014 20:24:39 +0000 (13:24 -0700)]
SetTotalBytesLimit to the max (2 GB minus 1 byte)
Jonathan L Long [Fri, 16 Jan 2015 21:25:00 +0000 (13:25 -0800)]
gut dataset wrappers
Sergio Guadarrama [Fri, 16 Jan 2015 21:20:26 +0000 (13:20 -0800)]
test db wrappers
Jonathan L Long [Fri, 16 Jan 2015 21:20:15 +0000 (13:20 -0800)]
use db wrappers
Sergio Guadarrama [Fri, 16 Jan 2015 21:18:50 +0000 (13:18 -0800)]
add db wrappers
Jon Long [Mon, 19 Jan 2015 04:29:47 +0000 (20:29 -0800)]
Merge pull request #1747 from yosinski/doc-up
Updated doc to suggest boost 1.57
Jason Yosinski [Sun, 18 Jan 2015 04:07:36 +0000 (23:07 -0500)]
Updated doc to suggest boost 1.57
Dinesh Jayaraman [Sun, 18 Jan 2015 00:23:48 +0000 (18:23 -0600)]
Matlab demo for Caffe-compatible HDF5 read/write
Jeff Donahue [Sat, 17 Jan 2015 23:59:11 +0000 (15:59 -0800)]
Merge pull request #1434 from pcampr/patch-1
fixed filename in build_docs.sh
Jeff Donahue [Sat, 17 Jan 2015 23:41:44 +0000 (15:41 -0800)]
Make comments for sparse GaussianFiller match actual behavior
(Fixes #1497 reported by @denizyuret)
Christos Nikolaou [Wed, 29 Oct 2014 23:05:12 +0000 (01:05 +0200)]
Update interfaces.md file
Proofread and update the /docs/tutorial/interfaces.md file.
Evan Shelhamer [Fri, 16 Jan 2015 23:59:50 +0000 (15:59 -0800)]
Merge pull request #1388 from rohitgirdhar/cifar_docu_bug
[docs] run CIFAR10 example from caffe root
Evan Shelhamer [Fri, 16 Jan 2015 22:41:52 +0000 (14:41 -0800)]
Merge pull request #1704 from longjon/friendlier-link-messages
Makefile: friendlier messages for link commands
Evan Shelhamer [Fri, 16 Jan 2015 21:41:34 +0000 (13:41 -0800)]
[docs] OpenCV version >= 2.4
Evan Shelhamer [Fri, 16 Jan 2015 21:19:31 +0000 (13:19 -0800)]
Merge pull request #1705 from longjon/origin-rpath
Makefile: specify RPATH using $ORIGIN
Jon Long [Fri, 16 Jan 2015 19:21:49 +0000 (11:21 -0800)]
Merge pull request #1686 from longjon/net-const
Improve const-ness of Net
Evan Shelhamer [Fri, 16 Jan 2015 06:31:25 +0000 (22:31 -0800)]
Merge pull request #1662 from seanbell/fix-python-resize_image
Fix caffe.io.resize_image for the case of constant images
Evan Shelhamer [Fri, 16 Jan 2015 05:59:16 +0000 (21:59 -0800)]
Merge pull request #1728 from shelhamer/pycaffe-mode-phase-device
Change Python interface for mode, phase, and device
Evan Shelhamer [Fri, 16 Jan 2015 05:47:50 +0000 (21:47 -0800)]
check for enough args to convert_imageset
(this might better be handled by making all args flags...)
Evan Shelhamer [Fri, 16 Jan 2015 05:04:43 +0000 (21:04 -0800)]
Merge pull request #1236 from mlapin/legacy_nvcc_support
Drop OpenCV includes from NVCC code for legacy reasons.
Evan Shelhamer [Fri, 16 Jan 2015 04:49:50 +0000 (20:49 -0800)]
Merge pull request #1740 from shelhamer/yosemite-makefile
Support OS X Yosemite / 10.10
Evan Shelhamer [Fri, 16 Jan 2015 04:43:24 +0000 (20:43 -0800)]
lint internal thread
Evan Shelhamer [Fri, 16 Jan 2015 04:43:53 +0000 (20:43 -0800)]
Merge pull request #1335 from ryotat/master
Fix leaking thread and groom internal thread implementation.
Evan Shelhamer [Fri, 16 Jan 2015 00:28:09 +0000 (16:28 -0800)]
support OS X Yosemite / 10.10
- pick libstdc++ for OS X (regardless of version)
- make gtest rely on its own tuple to not conflict with clang
(thanks @pluskid!)
- 10.10 has Accelerate while 10.9 has vecLib for BLAS
(thanks @leonardt and @drdan14)
Evan Shelhamer [Thu, 15 Jan 2015 01:56:23 +0000 (17:56 -0800)]
set mode, phase, device in pycaffe; fix #1700
Attach mode, phase, and device setters to caffe module itself
so that these can be set before making nets. This is needed to properly
initialize layers with the right device and phase configuration.
Update examples to new usage.
Evan Shelhamer [Wed, 14 Jan 2015 19:41:37 +0000 (11:41 -0800)]
Merge pull request #1724 from pannous/wtf
Tell users to go to the caffe-users mailing list
Pannous [Wed, 14 Jan 2015 17:43:49 +0000 (18:43 +0100)]
Message: Please ask usage questions and how to model different tasks on the caffe-users mailing list
Jonathan L Long [Mon, 22 Dec 2014 03:43:36 +0000 (19:43 -0800)]
add DeconvolutionLayer, using BaseConvolutionLayer
Jonathan L Long [Mon, 22 Dec 2014 03:42:29 +0000 (19:42 -0800)]
rewrite ConvolutionLayer to use BaseConvolutionLayer helpers
Jonathan L Long [Mon, 22 Dec 2014 07:33:35 +0000 (23:33 -0800)]
add CPU_ONLY ifdef guards to BaseConvolutionLayer
Jonathan L Long [Mon, 22 Dec 2014 03:20:36 +0000 (19:20 -0800)]
add BaseConvolutionLayer
This provides a common place for code used by ConvolutionLayer and
DeconvolutionLayer, simplifying the implementations of both.
Jonathan L Long [Sat, 10 Jan 2015 09:42:23 +0000 (01:42 -0800)]
[build] specify RPATH using $ORIGIN
Currently, when dynamically linking against libcaffe (right now, only
done for tests), RPATH is specified relative to the caffe source root.
This commit instead specifies RPATH using the special $ORIGIN variable,
making it relative to the executable itself, so that there is no
dependence on the working directory.
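The linker flag involved can be sketched as a Makefile fragment (paths and target names are illustrative; `$$` is Make's escape for a literal `$`, which must reach the linker unexpanded):

```makefile
# Embed an RPATH relative to the executable itself, not the build tree.
# $ORIGIN expands at load time to the directory containing the binary.
LDFLAGS += -Wl,-rpath,'$$ORIGIN/../lib'

test/%.testbin: test/%.o
	$(CXX) $< -o $@ -L../lib -lcaffe $(LDFLAGS)
```

The single quotes keep the shell from treating `$ORIGIN` as a shell variable, so the literal string is embedded in the binary's dynamic section.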