platform/upstream/caffe.git
Evan Shelhamer [Thu, 28 Aug 2014 04:43:53 +0000 (21:43 -0700)]
Merge pull request #857 from netheril96/gflags

Use gflags to parse command line arguments for convert_imageset

netheril96 [Thu, 28 Aug 2014 03:39:28 +0000 (11:39 +0800)]
create_imagenet.sh updated to new syntax

Jeff Donahue [Wed, 27 Aug 2014 23:01:23 +0000 (16:01 -0700)]
Merge pull request #993 from Yangqing/sweep

fix layer_factory.cpp bug: there should be no ifdefs

Yangqing Jia [Wed, 27 Aug 2014 22:40:14 +0000 (15:40 -0700)]
fix layer_factory.cpp bug: there should be no ifdefs

Evan Shelhamer [Tue, 26 Aug 2014 23:26:15 +0000 (16:26 -0700)]
Merge pull request #984 from shelhamer/drop-curand-reset

Drop obsolete CURAND reset for CUDA 6.5 compatibility

Evan Shelhamer [Tue, 26 Aug 2014 21:32:14 +0000 (14:32 -0700)]
default ilsvrc solving to GPU

Jonathan L Long [Tue, 26 Aug 2014 21:01:20 +0000 (14:01 -0700)]
FIX drop obsolete CURAND reset for CUDA 6.5 compatibility

Drop the legacy CURAND initialization steps; these are unnecessary and
cause dramatic slowdowns for CUDA 6.5. Contrary to the earlier note, this
does no harm for K20 usage, at least with CUDA 6.5 and 5.0.

Sergey Karayev [Tue, 26 Aug 2014 07:44:56 +0000 (00:44 -0700)]
FIX web_demo upload was not processing grayscale images correctly

Jeff Donahue [Tue, 26 Aug 2014 00:14:10 +0000 (17:14 -0700)]
Merge pull request #981 from jeffdonahue/fix-eltwise-product

Make the eltwise product gradient more stable (fixes issues with WITHIN_CHANNEL LRN in the cifar_full example)

Jeff Donahue [Mon, 25 Aug 2014 23:12:33 +0000 (16:12 -0700)]
remove warning about LRN layers in CPU mode

Jeff Donahue [Mon, 25 Aug 2014 23:10:22 +0000 (16:10 -0700)]
Add a "stable_prod_grad" option (on by default) to the ELTWISE layer to
compute the eltwise product gradient using a slower but more stable formula.
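
For intuition, with an element-wise product y = x_1 * x_2 * ... * x_K, the
gradient with respect to x_i is the top diff times the product of all the
other bottoms. A rough NumPy sketch of the two ways that product can be
obtained (illustrative only, not Caffe's code; reusing the forward result
via division is fast but breaks down when an input is zero or tiny):

    import numpy as np

    def prod_grad_fast(bottoms, top, top_diff, i):
        # Reuse the forward product: prod_{j != i} x_j == top / x_i.
        # Cheap, but unstable when bottoms[i] has zeros or tiny values.
        return top_diff * top / bottoms[i]

    def prod_grad_stable(bottoms, top_diff, i):
        # Recompute the product of all other bottoms explicitly.
        # Slower, but well defined even when some inputs are zero.
        others = np.ones_like(top_diff)
        for j, b in enumerate(bottoms):
            if j != i:
                others *= b
        return top_diff * others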

Jeff Donahue [Mon, 25 Aug 2014 20:16:37 +0000 (13:16 -0700)]
Merge pull request #980 from jeffdonahue/fix-memory-used

fix memory_used_ by computing after SetUp

Jeff Donahue [Mon, 25 Aug 2014 19:47:50 +0000 (12:47 -0700)]
fix memory_used_ by computing after SetUp

Jeff Donahue [Mon, 25 Aug 2014 19:42:24 +0000 (12:42 -0700)]
Merge pull request #979 from jeffdonahue/caffe-test-output

'caffe test' computes and prints all scores and their names

Jeff Donahue [Mon, 25 Aug 2014 18:59:20 +0000 (11:59 -0700)]
'caffe test' prints all scores and their names

Evan Shelhamer [Mon, 25 Aug 2014 17:25:53 +0000 (10:25 -0700)]
Merge pull request #976 from alfredtofu/dev

fix image resizing dimensions for IO

alfredtofu [Mon, 25 Aug 2014 08:22:52 +0000 (16:22 +0800)]
fix a bug in image resizing dimensions.

Evan Shelhamer [Sun, 24 Aug 2014 22:36:52 +0000 (15:36 -0700)]
[example] add fully-convolutional efficiency note + confidence map

- spell out fully-convolutional efficiency
- add confidence map
- fix input size: 451 x 451 is correct for an 8 x 8 output map by the
  equation input size = 227 + 32(d-1) for output map dimension of d
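
A quick check of the quoted equation in plain Python:

    # input size = 227 + 32 * (d - 1) for an output map of dimension d
    d = 8
    input_size = 227 + 32 * (d - 1)
    print(input_size)  # 451, matching the 451 x 451 input above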

Evan Shelhamer [Sun, 24 Aug 2014 05:41:30 +0000 (22:41 -0700)]
fix internal thread interface confusion

thanks @ronghanghu for pointing this out in #964

Evan Shelhamer [Sun, 24 Aug 2014 02:25:56 +0000 (19:25 -0700)]
move {InnerProduct,Eltwise}Layer to common instead of vision

Evan Shelhamer [Fri, 22 Aug 2014 07:02:17 +0000 (00:02 -0700)]
fix parameter for transformation in ImageDataLayer constructor

Evan Shelhamer [Fri, 22 Aug 2014 05:50:47 +0000 (22:50 -0700)]
Merge pull request #963 from shelhamer/fix-transform-param

Make data transformations backwards-compatible and upgrade models

Evan Shelhamer [Fri, 22 Aug 2014 05:19:09 +0000 (22:19 -0700)]
upgrade model definitions for transformation params

Evan Shelhamer [Fri, 22 Aug 2014 04:22:44 +0000 (21:22 -0700)]
upgrade net parameter data transformation fields automagically

Convert DataParameter and ImageDataParameter data transformation fields
into a TransformationParameter.

Evan Shelhamer [Fri, 22 Aug 2014 02:02:41 +0000 (19:02 -0700)]
compact net parameter upgrade

Evan Shelhamer [Fri, 22 Aug 2014 02:03:09 +0000 (19:03 -0700)]
restore old data transformation parameters for compatibility

Evan Shelhamer [Fri, 22 Aug 2014 00:06:39 +0000 (17:06 -0700)]
Merge pull request #954 from geenux/dev-redundant-data

Refactor data layers to avoid duplication of data transformation code

Jeff Donahue [Thu, 21 Aug 2014 21:35:23 +0000 (14:35 -0700)]
Merge pull request #961 from jeffdonahue/gpu-flag-overrides-solver-mode

If specified, --gpu flag overrides SolverParameter solver_mode.

Jeff Donahue [Thu, 21 Aug 2014 19:53:43 +0000 (12:53 -0700)]
If specified, --gpu flag overrides SolverParameter solver_mode.

Evan Shelhamer [Thu, 21 Aug 2014 17:10:40 +0000 (10:10 -0700)]
Merge pull request #942 from yosinski/doc-update

protobuf should be installed with Python support on Mac

Evan Shelhamer [Thu, 21 Aug 2014 17:08:45 +0000 (10:08 -0700)]
Merge pull request #956 from longjon/clean-signbit

Clean up CPU signbit definition

TANGUY Arnaud [Thu, 21 Aug 2014 13:50:39 +0000 (15:50 +0200)]
Refactor ImageDataLayer to use DataTransformer

Evan Shelhamer [Thu, 21 Aug 2014 03:38:29 +0000 (20:38 -0700)]
specialize cpu_strided_dot before instantiation to fix clang++ build

Jonathan L Long [Thu, 21 Aug 2014 03:06:11 +0000 (20:06 -0700)]
clean up cpu signbit definition

The redundant "caffe_signbit" function was used as a circumlocution
around CUDA's signbit macro; we can just add extra parens to prevent
macro expansion.

TANGUY Arnaud [Wed, 20 Aug 2014 16:37:54 +0000 (18:37 +0200)]
Refactor DataLayer using a new DataTransformer

Start refactoring the data layers to avoid duplicating data transformation
code. So far, only DataLayer has been converted.

longjon [Tue, 19 Aug 2014 08:43:37 +0000 (01:43 -0700)]
Merge pull request #940 from ronghanghu/channel-softmax

Softmax works across channels

Ronghang Hu [Sat, 16 Aug 2014 21:52:19 +0000 (14:52 -0700)]
implement GPU version of Softmax

Jonathan L Long [Fri, 15 Aug 2014 02:21:28 +0000 (19:21 -0700)]
test softmax and softmax with loss across channels

Jonathan L Long [Fri, 15 Aug 2014 02:20:30 +0000 (19:20 -0700)]
softmax and softmax loss layers work across channels
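
As an illustration of what "across channels" means for an N x C x H x W
blob, here is a minimal NumPy sketch (not Caffe's implementation) that
normalizes along the channel axis independently at every spatial position:

    import numpy as np

    def softmax_across_channels(x):
        # x has shape (N, C, H, W); normalize along axis 1 (channels) so
        # that each (n, :, h, w) fiber sums to 1.
        x = x - x.max(axis=1, keepdims=True)  # subtract max for stability
        e = np.exp(x)
        return e / e.sum(axis=1, keepdims=True)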

Jonathan L Long [Mon, 18 Aug 2014 06:15:03 +0000 (23:15 -0700)]
add caffe_cpu_strided_dot for strided dot products

This provides a more direct interface to the cblas_?dot functions.
This is useful, for example, for taking dot products across channels.
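
A strided dot product simply steps through each vector with a fixed
increment, the way the cblas_?dot routines do with their incx/incy
arguments. A minimal sketch of the idea (illustrative; the actual Caffe
signature may differ):

    import numpy as np

    def strided_dot(n, x, incx, y, incy):
        # Dot product over n elements taken with strides incx and incy.
        return float(np.dot(x[:n * incx:incx], y[:n * incy:incy]))

    # e.g. to dot across channels of a flattened C x H x W blob, use
    # n = C and a stride of H * W, starting at the desired spatial offset.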

Jonathan L Long [Mon, 18 Aug 2014 05:01:24 +0000 (22:01 -0700)]
milliseconds is a word

Jeff Donahue [Mon, 18 Aug 2014 00:42:24 +0000 (17:42 -0700)]
Merge pull request #943 from jeffdonahue/parallel-travis-builds

add Travis build matrix to do parallel builds for (make, CMake) x (with CUDA, without CUDA)

Jeff Donahue [Sun, 17 Aug 2014 10:11:59 +0000 (03:11 -0700)]
Travis build matrix to do parallel builds for make and CMake; CUDA-less
and CUDA-ful.  Move bits of functionality into scripts under scripts/travis
for readability. Only generate CUDA compute_50 for performance.

Jeff Donahue [Sun, 17 Aug 2014 09:03:51 +0000 (02:03 -0700)]
Merge pull request #623 from BVLC/cmake

CMake build system (feature development tracking PR)

Jeff Donahue [Sun, 17 Aug 2014 07:28:14 +0000 (00:28 -0700)]
restore .testbin extension, and move caffe_tool back to "caffe".
(Required because the tool target names had to be changed to '.bin' but
given an OUTPUT_NAME, and the '.bin' naming made the test_net tool collide
with the test_net unit test.)

Jeff Donahue [Sun, 17 Aug 2014 07:26:35 +0000 (00:26 -0700)]
use all caps for global preprocess vars (e.g. EXAMPLES_SOURCE_DIR), and
other minor cleanup

Jeff Donahue [Sun, 17 Aug 2014 07:03:28 +0000 (00:03 -0700)]
.travis.yml and .gitignore: various minor cleanup

Jeff Donahue [Sun, 17 Aug 2014 06:47:25 +0000 (23:47 -0700)]
[docs] CMake build steps and Ubuntu 12.04 install instructions

bhack [Tue, 29 Jul 2014 23:15:39 +0000 (01:15 +0200)]
Reduce packages

bhack [Tue, 29 Jul 2014 18:25:50 +0000 (20:25 +0200)]
Add a PPA for CMake to fix the 32-bit precompiled cmake on 64-bit systems

Adam Kosiorek [Tue, 29 Jul 2014 08:40:40 +0000 (10:40 +0200)]
added gflags + bugfixes + rebase on bvlc/caffe
* added gflags requirement in CMake
* fixed a bug that linked some tests into caffe lib
* renamed tools/caffe due to conflicting target names with caffe lib
* rebased onto bvlc/caffe

Adam Kosiorek [Mon, 28 Jul 2014 12:01:45 +0000 (14:01 +0200)]
Added lint target

Adam Kosiorek [Fri, 25 Jul 2014 10:22:00 +0000 (12:22 +0200)]
added proper 'runtest' target

Adam Kosiorek [Fri, 25 Jul 2014 07:38:42 +0000 (09:38 +0200)]
Examples_SOURCE_DIR cmake variable bugfix
* it was set only when BUILD_EXAMPLES==OFF

Adam Kosiorek [Thu, 24 Jul 2014 13:37:26 +0000 (15:37 +0200)]
enable both GPU and CPU builds + testing in travis

Adam Kosiorek [Thu, 24 Jul 2014 13:27:53 +0000 (15:27 +0200)]
cpu only build works

Adam Kosiorek [Thu, 24 Jul 2014 11:58:37 +0000 (13:58 +0200)]
cpu only

Adam Kosiorek [Thu, 24 Jul 2014 12:31:52 +0000 (14:31 +0200)]
restoring travis.yml

Adam Kosiorek [Thu, 24 Jul 2014 11:44:05 +0000 (13:44 +0200)]
cmake from binaries

Adam Kosiorek [Wed, 23 Jul 2014 07:41:36 +0000 (09:41 +0200)]
cmake build configuration for travis-ci

Adam Kosiorek [Wed, 23 Jul 2014 07:12:44 +0000 (09:12 +0200)]
fixed lint issues

Adam Kosiorek [Tue, 22 Jul 2014 12:44:19 +0000 (14:44 +0200)]
fixed CMake-dependent header file generation

Adam Kosiorek [Tue, 1 Jul 2014 09:42:24 +0000 (11:42 +0200)]
examples CMake lists

Adam Kosiorek [Tue, 1 Jul 2014 07:56:20 +0000 (09:56 +0200)]
cmake build system

Jason Yosinski [Sun, 17 Aug 2014 06:41:52 +0000 (23:41 -0700)]
Updated installation docs for OS X 10.9 to brew install protobuf as well

Jason Yosinski [Sun, 17 Aug 2014 06:18:12 +0000 (23:18 -0700)]
Updated documentation to include instructions to install protobuf with Python support on Mac OS X

Jeff Donahue [Sun, 17 Aug 2014 00:17:37 +0000 (17:17 -0700)]
Merge pull request #936 from jeffdonahue/not-stage

Add "not_stage" to NetStateRule to exclude NetStates with certain stages

Jeff Donahue [Fri, 15 Aug 2014 21:29:39 +0000 (14:29 -0700)]
Add "not_stage" to NetStateRule to exclude NetStates with certain
stages.
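
A sketch of the intended matching semantics in Python (a paraphrase, not
Caffe's code): every stage listed in "stage" must be present in the
NetState, while any stage listed in "not_stage" must be absent.

    def rule_matches(state_stages, required_stages=(), excluded_stages=()):
        # required_stages ~ NetStateRule.stage, excluded_stages ~ not_stage
        state = set(state_stages)
        if any(s not in state for s in required_stages):
            return False
        if any(s in state for s in excluded_stages):
            return False
        return True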

Evan Shelhamer [Fri, 15 Aug 2014 21:13:22 +0000 (14:13 -0700)]
[example] set phase test for fully-convolutional model

Evan Shelhamer [Fri, 15 Aug 2014 21:04:59 +0000 (14:04 -0700)]
[example] include imports in net surgery

Evan Shelhamer [Fri, 15 Aug 2014 02:27:03 +0000 (19:27 -0700)]
Merge pull request #897 from ashafaei/eltwise-abs

Absolute Value layer

Alireza Shafaei [Sun, 10 Aug 2014 05:44:12 +0000 (22:44 -0700)]
Added an absolute value layer, useful for implementing siamese networks!
This commit also replaces the default caffe_fabs with an MKL/non-MKL
implementation of Abs.
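
The layer's math is simple; a NumPy sketch of the usual forward/backward
pair for an absolute value layer (illustrative only):

    import numpy as np

    def absval_forward(bottom):
        return np.abs(bottom)

    def absval_backward(bottom, top_diff):
        # d|x|/dx = sign(x); the gradient at exactly zero is taken as 0.
        return top_diff * np.sign(bottom)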

Jeff Donahue [Thu, 14 Aug 2014 02:03:11 +0000 (19:03 -0700)]
Merge pull request #923 from yosinski/doc-update

ImageNet tutorial update to merged train_val.prototxt

Jason Yosinski [Thu, 14 Aug 2014 00:32:48 +0000 (17:32 -0700)]
Tried to clarify the function of 'include' lines and train vs. test network differences

Jason Yosinski [Wed, 13 Aug 2014 23:58:04 +0000 (17:58 -0600)]
Updated ImageNet Tutorial to reflect the new merged train+val prototxt format. Also corrected 4,500,000 iterations -> 450,000 iterations.

Jeff Donahue [Wed, 13 Aug 2014 21:54:59 +0000 (14:54 -0700)]
Fix from loss-generalization: accidentally removed mid-Forward return
from PowerLayer (caused bad performance for trivial PowerLayer cases...)

Jeff Donahue [Wed, 13 Aug 2014 20:47:44 +0000 (13:47 -0700)]
Merge pull request #686 from jeffdonahue/loss-generalization

Loss generalization

Jeff Donahue [Sun, 13 Jul 2014 23:56:42 +0000 (16:56 -0700)]
Store loss coefficients in layer; use for prettier training output.

Jeff Donahue [Sat, 12 Jul 2014 06:22:21 +0000 (23:22 -0700)]
Add ACCURACY layer and softmax_error output to lenet_consolidated_solver
example.

Jeff Donahue [Sat, 12 Jul 2014 06:06:41 +0000 (23:06 -0700)]
Also display outputs in the train net.  (Otherwise, why have them?)

Jeff Donahue [Sat, 12 Jul 2014 05:52:02 +0000 (22:52 -0700)]
Disallow in-place computation in the SPLIT layer -- it has strange effects
in the backward pass when the output is fed into a loss.

Jeff Donahue [Sat, 12 Jul 2014 00:21:27 +0000 (17:21 -0700)]
AccuracyLayer only dies when necessary.

Jeff Donahue [Sat, 12 Jul 2014 00:19:17 +0000 (17:19 -0700)]
Net::Init can determine that layers don't need backward if they are not
used to compute the loss.

Jeff Donahue [Fri, 11 Jul 2014 10:17:53 +0000 (03:17 -0700)]
Make multiple losses work by inserting split layers and add some tests for it.
Test that we can call backward with an ACCURACY layer.  This currently fails,
but should be possible now that we explicitly associate a loss weight with
each top blob.

Jeff Donahue [Fri, 11 Jul 2014 08:55:17 +0000 (01:55 -0700)]
Generalize loss by allowing any top blob to be used as a loss in which
its elements are summed with a scalar coefficient.

Forward for layers no longer returns a loss; instead all loss layers must have
top blobs.  Existing loss layers are given a top blob automatically by
Net::Init, with an associated top_loss_weight of 1 (set in
LossLayer::FurtherSetUp).  Due to the increased amount of common SetUp logic,
the SetUp interface is modified such that all subclasses should normally
override FurtherSetUp only, which is called by SetUp.
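
In other words, the network objective becomes a weighted sum over all top
blobs that carry a nonzero loss weight. A toy NumPy sketch of that
bookkeeping (names are illustrative, not Caffe's API):

    import numpy as np

    def net_loss(top_blobs, loss_weights):
        # Each top blob with a nonzero loss_weight contributes the sum of
        # its elements, scaled by that coefficient, to the objective.
        return sum(w * np.sum(blob)
                   for blob, w in zip(top_blobs, loss_weights)
                   if w != 0)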

Jeff Donahue [Fri, 11 Jul 2014 08:48:36 +0000 (01:48 -0700)]
Add net tests for loss_weight.

Check that the loss and gradients throughout the net are appropriately scaled
for a few loss_weight values, assuming a default weight of 1 in the loss layer
only.  Also modify test_gradient_check_util to associate a loss of 2 rather
than 1 with the top blob, so that loss layer tests fail if they don't scale
their diffs.

Jeff Donahue [Fri, 11 Jul 2014 08:44:49 +0000 (01:44 -0700)]
Add loss_weight to proto, specifying coefficients for each top blob
in the objective function.

Evan Shelhamer [Wed, 13 Aug 2014 17:42:30 +0000 (10:42 -0700)]
[docs] update docs generation for notebook metadata

Evan Shelhamer [Wed, 13 Aug 2014 17:33:53 +0000 (10:33 -0700)]
Merge pull request #921 from shelhamer/notebook-update

Fix plotting and metadata in notebook examples

Evan Shelhamer [Wed, 13 Aug 2014 16:27:56 +0000 (09:27 -0700)]
[example] change notebook name metadata to avoid conflict

see https://github.com/ipython/ipython/issues/5686

Evan Shelhamer [Wed, 13 Aug 2014 16:22:42 +0000 (09:22 -0700)]
[example] fix plt commands in detection

Clemens Korner [Wed, 13 Aug 2014 10:05:30 +0000 (12:05 +0200)]
use plt namespace for imshow in filter_visualization.ipynb

Yangqing Jia [Wed, 13 Aug 2014 05:27:43 +0000 (22:27 -0700)]
Merge pull request #914 from ashafaei/euclidean-loss-fix

Fixed the GPU implementation of EuclideanLoss!

Alireza Shafaei [Tue, 12 Aug 2014 20:54:51 +0000 (13:54 -0700)]
Fixed the GPU implementation of EuclideanLoss to report the loss to the top layer
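
For reference, the quantity the forward pass should be writing into the
single-element top blob -- assuming the conventional Euclidean loss of
1/(2N) times the summed squared difference -- looks like this NumPy sketch:

    import numpy as np

    def euclidean_loss(pred, target):
        # N = number of examples (first axis); 1/(2N) scaling assumed.
        n = pred.shape[0]
        diff = pred - target
        return np.sum(diff * diff) / (2.0 * n)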

Jeff Donahue [Tue, 12 Aug 2014 20:04:02 +0000 (13:04 -0700)]
Merge pull request #846 from qipeng/mvn-layer

mean-variance normalization layer
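
A mean-variance normalization layer standardizes its input to zero mean
and, optionally, unit variance. A minimal NumPy sketch of that operation
(illustrative, not the merged implementation):

    import numpy as np

    def mvn(x, eps=1e-9, normalize_variance=True):
        # Normalize each example (row) to zero mean, optionally unit variance.
        out = x - x.mean(axis=1, keepdims=True)
        if normalize_variance:
            out = out / (out.std(axis=1, keepdims=True) + eps)
        return out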

Jeff Donahue [Tue, 12 Aug 2014 19:54:29 +0000 (12:54 -0700)]
Merge pull request #863 from jeffdonahue/lint-check-caffe-fns

add lint check for functions with Caffe alternatives (memcpy, memset)

Jeff Donahue [Wed, 6 Aug 2014 00:33:19 +0000 (17:33 -0700)]
Fix caffe/alt_fn lint errors.

Jeff Donahue [Tue, 5 Aug 2014 23:59:14 +0000 (16:59 -0700)]
Create caffe_{,gpu_}memset functions to replace {m,cudaM}emset's.

Jeff Donahue [Tue, 5 Aug 2014 23:01:11 +0000 (16:01 -0700)]
Add caffe/alt_fn rule to lint checks to check for functions (like memset
& memcpy) with caffe_* alternatives that should be used instead.

Jeff Donahue [Tue, 5 Aug 2014 23:19:31 +0000 (16:19 -0700)]
lint targets should depend on the lint script itself