platform/upstream/caffeonacl.git
Evan Shelhamer [Sat, 30 May 2015 06:11:35 +0000 (23:11 -0700)]
Merge pull request #2511 from flx42/fix_illegal_mode_changes

Fix invalid mode changes during tests

Evan Shelhamer [Sat, 30 May 2015 05:50:16 +0000 (22:50 -0700)]
Merge pull request #1977 from shelhamer/accum-grad

Decouple the computational batch size and minibatch size by accumulating gradients

Jeff Donahue [Sat, 30 May 2015 04:35:30 +0000 (21:35 -0700)]
Merge pull request #2410 from sguada/datum_transform

Datum transform

Evan Shelhamer [Sat, 30 May 2015 04:23:22 +0000 (21:23 -0700)]
Merge pull request #2294 from TorosFanny/master

[example] fix path for diff in net surgery

TorosFanny [Sat, 30 May 2015 04:19:56 +0000 (21:19 -0700)]
[example] fix path for diff in net surgery

Evan Shelhamer [Sat, 30 May 2015 01:23:42 +0000 (18:23 -0700)]
Merge pull request #2240 from nsubtil/cmake-build-dependencies

Wrangle (some) Caffe dependencies through CMake

Evan Shelhamer [Sat, 30 May 2015 01:10:18 +0000 (18:10 -0700)]
Merge pull request #2468 from Nerei/feature/minor_fix_in_cmake_config_generation

[build] minor CMake fix to clear python / numpy in CaffeConfig.cmake generation

Evan Shelhamer [Sat, 30 May 2015 01:08:27 +0000 (18:08 -0700)]
Merge pull request #2493 from longjon/sketchy-cuda-kernel-loop

Fix dangerous state in pooling and LRN CUDA kernels -- thanks @gustavla for the report in #2145

Evan Shelhamer [Sat, 30 May 2015 00:56:09 +0000 (17:56 -0700)]
Merge pull request #2514 from norouzi/master

[bug] fix extract_features: zero pad keys + fix multi-feature dbtype bug

Evan Shelhamer [Fri, 29 May 2015 08:35:35 +0000 (01:35 -0700)]
Merge pull request #2505 from ronghanghu/matcaffe3

MatCaffe: overhaul and improve the MATLAB interface

Ronghang Hu [Thu, 28 May 2015 23:50:23 +0000 (07:50 +0800)]
More tests for Blob, Layer, copy_from and step, fix some typos

More tests are added to test_net.m and test_solver.m.

Ronghang Hu [Thu, 28 May 2015 16:23:06 +0000 (00:23 +0800)]
Fix automatic header file dependency for MatCaffe

Automatic header file dependency was introduced in #1472, but was
not correctly applied to matcaffe. Fix it by moving ./caffe_.d
to build/matlab/+caffe/private/caffe_.d and adding it to DEPS.

Ronghang Hu [Thu, 28 May 2015 10:33:54 +0000 (18:33 +0800)]
Move demo to demo/ and check weights file existence

Move all Matlab demos to caffe/matlab/demo. Since we want the user to add
caffe/matlab to the Matlab search PATH, we don't want to clutter it with
too many files.

Check if CaffeNet is already downloaded in classification demo.

Ronghang Hu [Thu, 28 May 2015 08:24:30 +0000 (16:24 +0800)]
Clean up old matcaffe wrapper and rename caffe.reset to caffe.reset_all

Remove old matlab wrapper but keep the classification demo and hdf5 demo

Change 'caffe.reset()' to 'caffe.reset_all()' to avoid potential name conflict.
Otherwise, Matlab R2015a complains:
Warning: Function reset has the same name as a MATLAB builtin. We suggest you rename the
function to avoid a potential name conflict.

Ronghang Hu [Thu, 28 May 2015 06:52:47 +0000 (14:52 +0800)]
Add MatCaffe docs to docs/tutorial/interfaces.md

Ronghang Hu [Thu, 28 May 2015 05:40:26 +0000 (13:40 +0800)]
Aesthetic changes to code style and some minor fixes

Ronghang Hu [Wed, 27 May 2015 04:37:20 +0000 (12:37 +0800)]
Fix Matlab trailing dimension 1 issue for shape match

Matlab cannot have a trailing dimension of 1 for ndim > 2, so you
cannot create a 20 x 10 x 1 x 1 array in Matlab; it becomes 20 x 10.
Extend Matlab arrays with trailing singleton dimensions during shape match.
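The shape-match idea can be sketched in a few lines of Python. This is a minimal stand-in for the MatCaffe behavior, not the actual implementation; the helper names `pad_trailing` and `shapes_match` are hypothetical.

```python
def pad_trailing(shape, ndim):
    """Extend a shape with trailing singleton dimensions up to ndim."""
    return tuple(shape) + (1,) * (ndim - len(shape))

def shapes_match(matlab_shape, blob_shape):
    """Compare shapes, padding the shorter one with trailing 1s first."""
    ndim = max(len(matlab_shape), len(blob_shape))
    return pad_trailing(matlab_shape, ndim) == pad_trailing(blob_shape, ndim)
```

Under this rule a 20 x 10 Matlab array matches a 20 x 10 x 1 x 1 blob, while a genuine dimension mismatch is still rejected.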

Ronghang Hu [Thu, 21 May 2015 17:45:14 +0000 (01:45 +0800)]
MatCaffe3 : a powerful matlab interface for caffe

Added matcaffe3, a powerful matlab interface. To test it, run 'make mattest'

Evan Shelhamer [Thu, 28 May 2015 19:43:29 +0000 (12:43 -0700)]
directly normalize accumulated gradients

`SGDSolver::Normalize()` normalizes accumulated gradients by scaling
inversely to the accumulation as `1 / iter_size`.

This fixes accumulation for AdaGrad and is more obvious than fooling
with rates and decays in 55585f5.
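A minimal numpy sketch of the normalize-then-update scheme, using a linear least-squares model as a stand-in for a net (this is an illustration of the idea, not Caffe's C++ solver code; `sgd_step` is a hypothetical helper):

```python
import numpy as np

def sgd_step(w, data, targets, lr, iter_size):
    """One SGD update with gradient accumulation over iter_size sub-batches.

    The gradient is accumulated across sub-batches, then normalized by
    1 / iter_size before the update, mirroring a normalize-then-apply
    scheme rather than folding the factor into rates and decays.
    """
    grad = np.zeros_like(w)
    for x, y in zip(np.array_split(data, iter_size),
                    np.array_split(targets, iter_size)):
        grad += x.T @ (x @ w - y) / len(x)  # mean gradient of this sub-batch
    grad /= iter_size                       # normalize the accumulation
    return w - lr * grad
```

With equal-sized sub-batches, accumulating over `iter_size` half-size batches reproduces the full-batch step exactly.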

Evan Shelhamer [Fri, 22 May 2015 01:14:16 +0000 (18:14 -0700)]
test equivalence of solving with accumulating gradients

Compare the parameters after solving with a given batch size against
solving with half the batch size and two iterations of gradient
accumulation; the two should be equivalent.

Note: the test net dummy data layer now makes constant data and random
Gaussian targets. This ensures the standard and gradient accumulation
cases check the same data. Otherwise the difference in batch sizes
causes different orders of random number draws.

Evan Shelhamer [Fri, 22 May 2015 00:06:42 +0000 (17:06 -0700)]
adjust local learning rate and decay according to gradient accumulation

Divide local rate by `iter_size` to normalize the gradient according to
the full minibatch size and not only the computational batch size.

Multiply the local decay by `iter_size` to counter the division of the
local learning rate since the decay is multiplied by the rate in the
update equation.

Jonathan L Long [Sun, 14 Sep 2014 00:41:59 +0000 (17:41 -0700)]
accumulate gradients in cudnn conv layer

Jonathan L Long [Wed, 31 Dec 2014 06:29:35 +0000 (22:29 -0800)]
accumulate gradients in (de)conv layers

Sergio [Sat, 27 Sep 2014 06:03:26 +0000 (23:03 -0700)]
accumulate gradients in inner product layer

Jonathan L Long [Wed, 31 Dec 2014 06:52:07 +0000 (22:52 -0800)]
zero-init param diffs in gradient checker

Jonathan L Long [Tue, 12 Aug 2014 04:38:59 +0000 (21:38 -0700)]
zero-init param diffs and accumulate gradients

With layers whose backward pass accumulates gradients, this effectively
decouples the computational batch from the SGD minibatch. Each
iteration accumulates gradients over iter_size batches, then the
parameters are updated.

Jeff Donahue [Wed, 27 May 2015 20:42:47 +0000 (13:42 -0700)]
Merge pull request #2518 from shelhamer/dedup_solvers

Deduplicate solver regularization, logging, and local rates and decays

Evan Shelhamer [Wed, 27 May 2015 19:24:06 +0000 (12:24 -0700)]
Solver::MakeUpdate() -> Solver::ApplyUpdate

Designate `Solver::ApplyUpdate()` as the core method to compute
and apply parameter updates given the current state of the Net.

Make `Solver::ComputeUpdateValue()` a subordinate call overloaded by the
`SGDSolver`s to take care of optimization algorithm details.

Mohammad Norouzi [Wed, 27 May 2015 14:33:19 +0000 (10:33 -0400)]
fix the bug with db_type when the number of features to be extracted is larger than 1

Evan Shelhamer [Thu, 21 May 2015 23:34:43 +0000 (16:34 -0700)]
deduplicate decay and local rate in solver updates

Cyprien Noel [Tue, 19 May 2015 01:30:00 +0000 (18:30 -0700)]
Refactor solvers regularization and logging code

Mohammad Norouzi [Tue, 26 May 2015 21:25:56 +0000 (17:25 -0400)]
add leading zeros to keys in feature DB files

Felix Abecassis [Tue, 26 May 2015 18:21:58 +0000 (11:21 -0700)]
Make class MultinomialLogisticLossLayerTest derive from CPUDeviceTest

Felix Abecassis [Tue, 26 May 2015 18:21:56 +0000 (11:21 -0700)]
Make class CuDNNSoftmaxLayerTest derive from GPUDeviceTest

Felix Abecassis [Tue, 26 May 2015 18:21:53 +0000 (11:21 -0700)]
Make class DummyDataLayerTest derive from CPUDeviceTest

Felix Abecassis [Tue, 26 May 2015 18:21:50 +0000 (11:21 -0700)]
Make class CuDNNPoolingLayerTest derive from GPUDeviceTest

Felix Abecassis [Tue, 26 May 2015 18:21:48 +0000 (11:21 -0700)]
Make class CuDNNConvolutionLayerTest derive from GPUDeviceTest

Felix Abecassis [Tue, 26 May 2015 18:21:46 +0000 (11:21 -0700)]
Make class ArgMaxLayerTest derive from CPUDeviceTest

Felix Abecassis [Tue, 26 May 2015 18:21:44 +0000 (11:21 -0700)]
Make class CuDNNNeuronLayerTest derive from GPUDeviceTest

Felix Abecassis [Tue, 26 May 2015 18:21:41 +0000 (11:21 -0700)]
Make class AccuracyLayerTest derive from CPUDeviceTest

Felix Abecassis [Tue, 26 May 2015 18:21:39 +0000 (11:21 -0700)]
Make class Im2colKernelTest derive from GPUDeviceTest

Felix Abecassis [Tue, 26 May 2015 18:21:36 +0000 (11:21 -0700)]
Add classes GPUDeviceTest and CPUDeviceTest.

These new classes can be used to implement test cases that run only
on the GPU or the CPU. The goal is to move all calls to
Caffe::set_mode() inside the test framework, to discourage any test
from changing the mode halfway through execution, which is documented
to be illegal.
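The fixture pattern can be sketched with Python's unittest; this is an analogue of the idea, not Caffe's C++ gtest code, and `set_mode`/`MODE` here are stand-ins for the global mode flag:

```python
import unittest

MODE = {"value": None}  # stand-in for a process-wide CPU/GPU mode flag

def set_mode(mode):
    MODE["value"] = mode

class CPUDeviceTest(unittest.TestCase):
    """Fixture that pins every test in a subclass to CPU mode."""
    def setUp(self):
        set_mode("cpu")

class GPUDeviceTest(unittest.TestCase):
    """Fixture that pins every test in a subclass to GPU mode."""
    def setUp(self):
        set_mode("gpu")

class ArgMaxTest(CPUDeviceTest):
    def test_mode_is_fixed(self):
        # Tests read the mode but never call set_mode() themselves,
        # so the mode cannot change halfway through a test.
        self.assertEqual(MODE["value"], "cpu")
```

Because the only call to `set_mode` lives in the fixture, a test body that flips the mode mid-run becomes an obvious deviation from the pattern.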

Felix Abecassis [Tue, 26 May 2015 18:21:32 +0000 (11:21 -0700)]
Split class StochasticPoolingLayerTest into CPUStochasticPoolingLayerTest and GPUStochasticPoolingLayerTest

Evan Shelhamer [Tue, 26 May 2015 20:13:01 +0000 (13:13 -0700)]
Merge pull request #1946 from nickcarlevaris/msra_init

Add MSRAFiller, an Xavier-like filler designed for use with ReLUs

Evan Shelhamer [Tue, 26 May 2015 19:39:14 +0000 (12:39 -0700)]
include comment on Saxe and sqrt(2) scaling factor

Although different and independent, the derivation of Saxe et
al. with regard to the scaling factor might be of interest.

Nick Carlevaris-Bianco [Mon, 16 Feb 2015 05:19:43 +0000 (15:49 +1030)]
Added MSRAFiller, an Xavier-like filler designed for use with ReLUs

...instead of tanh. Based on paper: He et al, "Delving Deep into
Rectifiers: Surpassing Human-Level Performance on ImageNet
Classification," 2015.

- add VarianceNorm option to FillerParameters which allows one to
  normalize by fan_in, fan_out or their average.
- update XavierFiller to use the VarianceNorm option (default behavior
  unchanged).
- add tests for MSRAFiller and XavierFiller.
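A small numpy sketch of the initialization scheme described above (an illustration, not Caffe's filler code; `filler_std` and `msra_fill` are hypothetical names):

```python
import numpy as np

def filler_std(fan_in, fan_out, variance_norm="FAN_IN", gain=2.0):
    """Std for a Gaussian filler: std = sqrt(gain / n).

    variance_norm picks n as fan_in, fan_out, or their average.
    gain=2 matches the ReLU derivation of He et al. 2015; gain=1
    gives an Xavier-style scale.
    """
    n = {"FAN_IN": fan_in,
         "FAN_OUT": fan_out,
         "AVERAGE": (fan_in + fan_out) / 2.0}[variance_norm]
    return np.sqrt(gain / n)

def msra_fill(shape, rng):
    """Fill a (num_output, channels, kh, kw) weight array MSRA-style."""
    fan_in = int(np.prod(shape[1:]))
    fan_out = shape[0] * int(np.prod(shape[2:]))
    return rng.normal(0.0, filler_std(fan_in, fan_out), size=shape)
```

For a conv weight of shape (64, 16, 3, 3) this draws from a Gaussian with std sqrt(2 / (16 * 9)).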

Felix Abecassis [Tue, 26 May 2015 18:21:28 +0000 (11:21 -0700)]
Split class MathFunctionsTest into CPUMathFunctionsTest and GPUMathFunctionsTest

Felix Abecassis [Tue, 26 May 2015 18:21:15 +0000 (11:21 -0700)]
Refactor types FloatCPU and DoubleCPU into a new type CPUDevice<T>

Similarly, FloatGPU and DoubleGPU are replaced by a new type GPUDevice<T>.

Evan Shelhamer [Fri, 22 May 2015 20:27:07 +0000 (13:27 -0700)]
Merge pull request #2486 from tiangolo/ipython-notebook-v4

Update IPython Notebooks to version 4, fix #2485

Sebastián Ramírez [Fri, 22 May 2015 01:19:59 +0000 (20:19 -0500)]
Update python/requirements.txt to have ipython>=3.0.0

Jonathan L Long [Wed, 20 May 2015 05:59:27 +0000 (22:59 -0700)]
more const in LRN layer CUDA kernels

Jonathan L Long [Wed, 20 May 2015 05:59:23 +0000 (22:59 -0700)]
more const in pooling layer CUDA kernels

This treats pointer arguments in the same way as non-pointer arguments,
and should help to avoid issues like the previous dangerous state issue.

Jonathan L Long [Wed, 20 May 2015 05:52:12 +0000 (22:52 -0700)]
avoid dangerous state in LRN layer CUDA kernels

Jonathan L Long [Wed, 20 May 2015 05:41:50 +0000 (22:41 -0700)]
avoid dangerous state in pooling layer CUDA kernels

Previously, pointers were modified with the assumption that they would
only be modified once. While this is true so far in practice, the
introduction of CUDA_KERNEL_LOOP makes this a dangerous assumption.

Evan Shelhamer [Tue, 19 May 2015 18:11:38 +0000 (11:11 -0700)]
Merge pull request #2488 from kibum14/master

[docs] fix typos

Kibum Bae [Tue, 19 May 2015 13:43:49 +0000 (22:43 +0900)]
fix typos in docs

fix typos in install_osx.md and performance_hardware.md

Evan Shelhamer [Tue, 19 May 2015 05:46:56 +0000 (22:46 -0700)]
Merge pull request #2484 from ronghanghu/fix-caffe-test

[bug] fix blob_loss_weights index in test() in caffe.cpp

Evan Shelhamer [Mon, 18 May 2015 18:55:42 +0000 (11:55 -0700)]
Merge pull request #2482 from longjon/clean-message-comments

Clean up redundant protobuf message comments

Sebastián Ramírez [Mon, 18 May 2015 15:21:44 +0000 (10:21 -0500)]
Update IPython Notebooks to version 4

Ronghang Hu [Mon, 18 May 2015 12:27:19 +0000 (20:27 +0800)]
fix blob_loss_weights index in test() in caffe.cpp

Correct the index for blob_loss_weights during output. Previously it was set to the test_score index by mistake.

Jonathan L Long [Mon, 18 May 2015 07:45:26 +0000 (00:45 -0700)]
clean up redundant message comments

Jeff Donahue [Sat, 16 May 2015 19:02:59 +0000 (12:02 -0700)]
Merge pull request #2466 from ducha-aiki/mvn-less

Remove unnecessary variance computation from backward in MVN layer

Jeff Donahue [Fri, 15 May 2015 19:51:16 +0000 (12:51 -0700)]
Merge pull request #2095 from mtamburrano/skip_propagate_down_param

Added param skip_propagate_down to LayerParameter

Jon Long [Fri, 15 May 2015 17:43:30 +0000 (10:43 -0700)]
Merge pull request #2467 from MartinThoma/moose

Python: Formatted docstrings to numpydoc (Take, Give -> Parameters, Returns)

Anatoly Baksheev [Fri, 15 May 2015 14:39:11 +0000 (17:39 +0300)]
minor fix in cmake.config generation - do not force client libs to include numpy include dirs

Martin Thoma [Fri, 15 May 2015 14:06:49 +0000 (16:06 +0200)]
Python: Formatted docstrings to numpydoc (Take, Give -> Parameters, Returns)

Dmytro Mishkin [Fri, 15 May 2015 13:14:02 +0000 (16:14 +0300)]
Remove unnecessary variance computation from backward in MVN layer

manuele [Fri, 15 May 2015 09:17:00 +0000 (11:17 +0200)]
Added "propagate_down" param to LayerParameter

Jeff Donahue [Fri, 15 May 2015 07:10:30 +0000 (00:10 -0700)]
Merge pull request #2274 from Ashwani001/patch-1

Update generate_sample_data.py

Ashwani001 [Fri, 15 May 2015 02:33:36 +0000 (10:33 +0800)]
Update generate_sample_data.py

Made the code clearer and more concise.

Evan Shelhamer [Fri, 15 May 2015 01:42:02 +0000 (18:42 -0700)]
Merge pull request #2201 from jeffdonahue/tutorial-fixes

Update docs for ND blobs (#1970) and layer type is a string (#1694)

Jeff Donahue [Thu, 26 Mar 2015 01:45:29 +0000 (18:45 -0700)]
Update docs for ND blobs (#1970) and layer type is a string (#1694)

Jeff Donahue [Fri, 15 May 2015 01:36:22 +0000 (18:36 -0700)]
Merge pull request #2217 from jeffdonahue/ssafar-reshape-rebase

Rebase @ssafar's ReshapeLayer

Jeff Donahue [Thu, 26 Mar 2015 08:13:18 +0000 (01:13 -0700)]
Add ReshapeParameter axis and num_axes to reshape only a particular span
of the input shape
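The axis/num_axes semantics can be sketched in plain Python (a simplified illustration, not the layer's C++ code; `reshape_span` is a hypothetical name, and the layer's -1 dimension-inference rule is omitted):

```python
def reshape_span(shape, new_dims, axis=0, num_axes=-1):
    """Replace shape[axis : axis + num_axes] with new_dims.

    num_axes == -1 means "through the last axis". A 0 in new_dims
    copies the input dimension at that position, echoing the layer's
    dim-copy convention.
    """
    end = len(shape) if num_axes == -1 else axis + num_axes
    span = shape[axis:end]
    filled = [span[i] if d == 0 else d for i, d in enumerate(new_dims)]
    return list(shape[:axis]) + filled + list(shape[end:])
```

For example, flattening only axes 1-2 of a 2 x 3 x 4 x 5 shape leaves the outer and trailing axes untouched.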

Jeff Donahue [Thu, 26 Mar 2015 09:25:48 +0000 (02:25 -0700)]
basic tests (Forward, Gradient) for ReshapeLayer

Jeff Donahue [Thu, 26 Mar 2015 00:44:37 +0000 (17:44 -0700)]
ReshapeLayer fixups for ND blobs

Simon Safar [Thu, 16 Oct 2014 03:15:14 +0000 (20:15 -0700)]
Added a Reshape layer for copying-free modification of blob dimensions.

Jeff Donahue [Fri, 15 May 2015 01:17:11 +0000 (18:17 -0700)]
Merge pull request #2177 from pgao/spp_layer

Spatial Pyramid Pooling Layer

PETER_GAO [Sat, 21 Mar 2015 23:00:05 +0000 (16:00 -0700)]
Spatial Pyramid Pooling Layer

Jeff Donahue [Thu, 14 May 2015 23:11:06 +0000 (16:11 -0700)]
Merge pull request #2115 from longjon/bogus-cross-entropy-gpu

Remove bogus implementation of SigmoidCrossEntropyLossLayer's Forward_gpu

Jonathan L Long [Fri, 13 Mar 2015 00:59:28 +0000 (17:59 -0700)]
remove bogus implementation of SigmoidCrossEntropyLossLayer::Forward_gpu

It was a verbatim copy of Forward_cpu; there is no proper GPU
implementation.

Jeff Donahue [Thu, 14 May 2015 22:41:57 +0000 (15:41 -0700)]
Merge pull request #1969 from tnarihi/fix-empty-param_name

Fix incorrectly storing empty param_name to param_names_index_

Jeff Donahue [Thu, 14 May 2015 22:14:05 +0000 (15:14 -0700)]
Merge pull request #2168 from longjon/spurious-net-includes

Remove spurious inclusions of net.hpp

Jeff Donahue [Thu, 14 May 2015 22:10:59 +0000 (15:10 -0700)]
Merge pull request #2165 from longjon/auto-reshape

Always call Layer::Reshape in Layer::Forward

Jon Long [Thu, 14 May 2015 22:06:56 +0000 (15:06 -0700)]
Merge pull request #2072 from jeffdonahue/final-snapshot-off-by-one

Bugfix: final snapshot iter number is off by one

Jeff Donahue [Thu, 14 May 2015 22:06:04 +0000 (15:06 -0700)]
Merge pull request #2456 from longjon/python-layer-object

Use bp::object instead of PyObject* for self in Python layer

Jeff Donahue [Thu, 14 May 2015 21:57:52 +0000 (14:57 -0700)]
Merge pull request #2457 from longjon/superfluous-destructors

Remove superfluous empty destructors

Jonathan L Long [Thu, 14 May 2015 05:08:57 +0000 (22:08 -0700)]
remove superfluous empty destructors

The removed definitions do nothing; these classes already have virtual
destructors inherited from their respective base classes.

Takuya Narihira [Thu, 14 May 2015 04:16:28 +0000 (21:16 -0700)]
[pycaffe] use bp::object instead of PyObject* for self in Python layer

This simply allows direct use of the nicer bp::object interface.

Evan Shelhamer [Tue, 12 May 2015 20:52:48 +0000 (13:52 -0700)]
Merge pull request #2321 from nickcarlevaris/contrastive_loss_fix

Fixed contrastive loss layer to be the same as proposed in Hadsell et al 2006

Jon Long [Mon, 11 May 2015 23:34:33 +0000 (16:34 -0700)]
Merge pull request #2441 from gustavla/py3-fix

Fixed wrong io import in Python 3

Evan Shelhamer [Mon, 11 May 2015 22:20:06 +0000 (15:20 -0700)]
Merge pull request #2443 from MartinThoma/master

python: PEP8; changed docstring documentation style to NumPyDoc style

Martin Thoma [Mon, 11 May 2015 20:35:48 +0000 (22:35 +0200)]
python: PEP8; changed docstring documentation style to NumPyDoc style

Gustav Larsson [Mon, 11 May 2015 16:54:45 +0000 (11:54 -0500)]
This imports the wrong io module in Python 3.

The Python standard lib has a module called io, so instead of Python 3
throwing an error, it imports the wrong module without complaining.
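A short illustration of the pitfall (a sketch of the general Python 3 import rule, not the fixed pycaffe code):

```python
# Python 2's implicit relative imports let a module inside a package
# that says "import io" pick up a sibling io.py. Python 3 always binds
# "import io" to the standard-library module instead, silently:
import io

assert hasattr(io, "BytesIO")       # the stdlib io, not a package-local io.py

# Inside a package, Python 3 code must name the sibling explicitly:
#     from . import io              # explicit relative import
#     import caffe.io               # or an absolute import
```

Because no error is raised, the bug only surfaces later when the stdlib module lacks the expected functions.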

Jeff Donahue [Thu, 7 May 2015 00:56:57 +0000 (17:56 -0700)]
Merge pull request #2426 from longjon/check-blob-overflow

Check that count does not overflow in Blob::Reshape

Jonathan L Long [Thu, 7 May 2015 00:40:12 +0000 (17:40 -0700)]
check that count_ does not overflow in Blob::Reshape
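The check can be sketched in Python (an illustration of the overflow-guard technique, not the C++ code in Blob::Reshape; `checked_count` is a hypothetical name):

```python
INT_MAX = 2**31 - 1  # blob counts are assumed to fit in a signed 32-bit int

def checked_count(shape):
    """Multiply dimensions, refusing any product that overflows int32.

    Checking before each multiply (count <= INT_MAX // dim) catches the
    overflow without ever computing an out-of-range product.
    """
    count = 1
    for dim in shape:
        if dim < 0:
            raise ValueError("negative dimension")
        if dim > 0 and count > INT_MAX // dim:
            raise OverflowError("blob size exceeds INT_MAX")
        count *= dim
    return count
```

Dividing instead of multiplying is the key: `count * dim <= INT_MAX` would itself overflow in fixed-width arithmetic, while `count <= INT_MAX // dim` stays in range.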

Jeff Donahue [Tue, 5 May 2015 18:39:45 +0000 (11:39 -0700)]
Merge pull request #2414 from tnarihi/fix-prelu-redanduncy

Fix #2406: wrong thread blocks setting for PReLU

Takuya Narihira [Mon, 4 May 2015 18:45:33 +0000 (11:45 -0700)]
Modify for better readability regarding the temporary buffer for
backward computation

Takuya Narihira [Mon, 4 May 2015 18:44:44 +0000 (11:44 -0700)]
Fix redundancy of parameter backward computation

Nick Carlevaris-Bianco [Mon, 4 May 2015 02:11:44 +0000 (11:41 +0930)]
Added support for the original implementation, using (margin - d^2),
through the legacy_version parameter.