platform/upstream/caffeonacl.git
Jeff Donahue [Tue, 8 Jul 2014 22:02:00 +0000 (15:02 -0700)]
Don't make clean when running linecount

Jeff Donahue [Mon, 7 Jul 2014 23:40:50 +0000 (16:40 -0700)]
Merge pull request #614 from ronghanghu/rectangular_pooling

Rectangular pooling

Ronghang Hu [Mon, 7 Jul 2014 19:12:59 +0000 (12:12 -0700)]
added gradient check for non-square pooling

Ronghang Hu [Mon, 7 Jul 2014 18:32:58 +0000 (11:32 -0700)]
fixed style errors

Evan Shelhamer [Sun, 6 Jul 2014 19:47:53 +0000 (12:47 -0700)]
Merge pull request #611 from shelhamer/makefile-config-cxx

Customize compiler setting in Makefile.config

Ronghang Hu [Sat, 5 Jul 2014 15:21:50 +0000 (08:21 -0700)]
add tests for rectangular pooling regions

Ronghang Hu [Sat, 5 Jul 2014 00:29:56 +0000 (17:29 -0700)]
fixing pooling SetUp() to allow default values for stride and pad

Ronghang Hu [Fri, 4 Jul 2014 04:05:23 +0000 (21:05 -0700)]
Update pooling_layer.cu

Replace pad_, kernel_size_, stride_ with pad_h_, pad_w_, kernel_size_h_, kernel_size_w_, stride_h_, stride_w_ to support pooling over rectangular regions.

Ronghang Hu [Fri, 4 Jul 2014 03:42:58 +0000 (20:42 -0700)]
Update pooling_layer.cpp

Replace pad_, kernel_size_, stride_ with pad_h_, pad_w_, kernel_size_h_, kernel_size_w_, stride_h_, stride_w_ to support pooling over rectangular regions.

Ronghang Hu [Fri, 4 Jul 2014 03:33:50 +0000 (20:33 -0700)]
Update vision_layers.hpp

Replace pad_, kernel_size_, stride_ with pad_h_, pad_w_, kernel_size_h_, kernel_size_w_, stride_h_, stride_w_ to support pooling over rectangular regions.

Ronghang Hu [Fri, 4 Jul 2014 03:30:25 +0000 (20:30 -0700)]
Update caffe.proto

Add pad_h, pad_w, kernel_size_h, kernel_size_w, stride_h, stride_w to support pooling over rectangular regions.
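
For illustration only, a minimal sketch (assuming the usual "round up, then add one" output-size convention; the authoritative computation lives in the pooling layer's SetUp) of how independent height/width parameters enter the pooled-shape calculation:

    // Hypothetical sketch, not the layer's actual code.
    #include <cmath>

    struct PooledShape { int h, w; };

    PooledShape ComputePooledShape(int height, int width,
                                   int kernel_h, int kernel_w,
                                   int pad_h, int pad_w,
                                   int stride_h, int stride_w) {
      PooledShape out;
      out.h = static_cast<int>(std::ceil(
          static_cast<float>(height + 2 * pad_h - kernel_h) / stride_h)) + 1;
      out.w = static_cast<int>(std::ceil(
          static_cast<float>(width + 2 * pad_w - kernel_w) / stride_w)) + 1;
      return out;
    }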

Evan Shelhamer [Fri, 4 Jul 2014 01:15:38 +0000 (18:15 -0700)]
customize compiler setting in Makefile.config

in case of issues with default compilers or exotic platforms.

Evan Shelhamer [Fri, 4 Jul 2014 00:29:14 +0000 (17:29 -0700)]
Merge pull request #549 from jamt9000/fix-resize-crop-pil

Make resizing & cropping with PIL work

Evan Shelhamer [Fri, 4 Jul 2014 00:27:37 +0000 (17:27 -0700)]
Merge pull request #555 from shelhamer/uva-memory

Switch to Unified Virtual Address memory copies

Jeff Donahue [Fri, 4 Jul 2014 00:25:51 +0000 (17:25 -0700)]
Merge pull request #602 from kloudkl/layers-in-order

Layers in order

Evan Shelhamer [Sat, 28 Jun 2014 21:05:35 +0000 (14:05 -0700)]
fix casts (static for void*)

Evan Shelhamer [Sat, 28 Jun 2014 20:48:37 +0000 (13:48 -0700)]
reduce caffe_copy to instantiations, split off caffe_memcpy for void*

Evan Shelhamer [Sat, 28 Jun 2014 08:52:49 +0000 (01:52 -0700)]
replace all memset with caffe_set() / caffe_gpu_set()

...except for `SyncedMem` since it has no type.
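
A rough sketch of the idea (illustrative only; the real caffe_set/caffe_gpu_set live in the math_functions utilities and their exact signatures may differ):

    // Typed fill in the spirit of caffe_set(): unlike memset, it writes
    // Dtype values, so non-zero fills work for float/double buffers too.
    template <typename Dtype>
    void caffe_set_sketch(const int n, const Dtype alpha, Dtype* y) {
      for (int i = 0; i < n; ++i) {
        y[i] = alpha;
      }
    }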

Evan Shelhamer [Sat, 28 Jun 2014 02:51:33 +0000 (19:51 -0700)]
replace all memcpy by caffe_copy

Evan Shelhamer [Sat, 28 Jun 2014 04:25:36 +0000 (21:25 -0700)]
do all caffe_copy() as UVA mem copy, and drop caffe_gpu_copy()

Do all memory copies by `cudaMemcpy` in UVA mode so that the same
`caffe_copy()` interface works for all transfers.

`cudaMemcpy()` is used in lieu of BLAS copies because they do not
understand UVA.

Drop the now unnecessary `caffe_gpu_copy()` since location of the
pointers is now irrelevant to the interface.
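
A minimal sketch of the approach (assuming a CUDA build with UVA available; the actual caffe_copy also handles the CPU-only path):

    // With unified virtual addressing, cudaMemcpyDefault lets the driver
    // infer host/device location from the pointer values, so one routine
    // covers host<->host, host<->device, and device<->device transfers.
    #include <cuda_runtime.h>

    template <typename Dtype>
    void uva_copy_sketch(const int n, const Dtype* x, Dtype* y) {
      if (x != y) {  // skip no-op copies onto the same buffer
        cudaMemcpy(y, x, sizeof(Dtype) * n, cudaMemcpyDefault);
      }
    }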

Evan Shelhamer [Sat, 28 Jun 2014 01:38:39 +0000 (18:38 -0700)]
replace softmax cudaMemcpy with caffe_gpu_copy

Evan Shelhamer [Sat, 28 Jun 2014 01:36:48 +0000 (18:36 -0700)]
switch to unified virtual addressing CUDA memcpy

Host / device copies are distinguished by the virtual address of the
pointers instead of explicit memcpy modes.

Evan Shelhamer [Fri, 27 Jun 2014 22:01:02 +0000 (15:01 -0700)]
report UVA in platform test
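
For reference, a small sketch of how UVA support can be queried through the CUDA runtime (the actual platform test may report more fields):

    // Query and print whether the current device supports unified
    // virtual addressing.
    #include <cstdio>
    #include <cuda_runtime.h>

    void report_uva_sketch() {
      int device = 0;
      cudaGetDevice(&device);
      cudaDeviceProp prop;
      cudaGetDeviceProperties(&prop, device);
      std::printf("Unified virtual addressing: %s\n",
                  prop.unifiedAddressing ? "Yes" : "No");
    }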

Evan Shelhamer [Fri, 4 Jul 2014 00:11:20 +0000 (17:11 -0700)]
Merge pull request #609 from jeffdonahue/multiconv

Generalize CONVOLUTION layer to multiple inputs/outputs

Jeff Donahue [Thu, 3 Jul 2014 22:33:12 +0000 (15:33 -0700)]
ConvolutionLayer can take N bottom blobs and N top blobs

Jeff Donahue [Thu, 3 Jul 2014 21:39:35 +0000 (14:39 -0700)]
add EqualNumBottomTopBlobs() property for layers; use in
ConvolutionLayer

Kai Li [Thu, 3 Jul 2014 12:17:41 +0000 (20:17 +0800)]
Organize the loss layers in alphabetical order

Kai Li [Thu, 3 Jul 2014 12:16:54 +0000 (20:16 +0800)]
Arrange the data layers to be in alphabetical order

Kai Li [Thu, 3 Jul 2014 12:15:53 +0000 (20:15 +0800)]
Separate layers relatively independent of images out of vision_layers

Jeff Donahue [Tue, 1 Jul 2014 17:15:29 +0000 (10:15 -0700)]
fix uninitialized variable warnings in tools

yzhuan [Mon, 30 Jun 2014 08:08:55 +0000 (16:08 +0800)]
Update Makefile.config.example

fix some typos

Evan Shelhamer [Mon, 30 Jun 2014 00:27:50 +0000 (17:27 -0700)]
Merge pull request #545 from jamt9000/im2col-kernel-test

Test for im2col kernel

Evan Shelhamer [Sat, 28 Jun 2014 20:54:58 +0000 (13:54 -0700)]
lint

Sergio Guadarrama [Sat, 28 Jun 2014 17:37:40 +0000 (10:37 -0700)]
Merge pull request #502 from sguada/fix_dropout_backward

Fix dropout backward in TEST phase

Sergio [Sat, 28 Jun 2014 17:22:13 +0000 (10:22 -0700)]
Remove Cuda.major >= 2 check on Dropout test

Sergio [Thu, 19 Jun 2014 01:49:14 +0000 (18:49 -0700)]
Check that pointers are different before copying in caffe_copy and caffe_gpu_copy

Sergio [Sat, 14 Jun 2014 15:39:35 +0000 (08:39 -0700)]
Added test to Dropout to check gradients during Test phase

Sergio [Sat, 14 Jun 2014 01:54:28 +0000 (18:54 -0700)]
Fix var names in Dropout.cu

Sergio [Sat, 28 Jun 2014 01:54:08 +0000 (18:54 -0700)]
Modify Dropout to allow backward pass in TEST phase

Conflicts:
src/caffe/layers/dropout_layer.cpp
src/caffe/layers/dropout_layer.cu
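
A rough sketch of the behavior this enables (assuming Caffe's inverted-dropout convention, in which the TEST-phase forward pass is an identity; illustrative only):

    // In TEST phase the gradient passes through unchanged; in TRAIN phase
    // it is masked and rescaled exactly like the forward pass.
    template <typename Dtype>
    void dropout_backward_sketch(const int n, const Dtype* top_diff,
                                 const unsigned int* mask, const Dtype scale,
                                 const bool train_phase, Dtype* bottom_diff) {
      for (int i = 0; i < n; ++i) {
        bottom_diff[i] = train_phase ? top_diff[i] * mask[i] * scale
                                     : top_diff[i];
      }
    }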

James Thewlis [Fri, 27 Jun 2014 23:49:53 +0000 (00:49 +0100)]
Fix building tests with parallel make

The changes for the .cu tests meant that TEST_BUILD_DIR was no longer being created first.

Evan Shelhamer [Fri, 27 Jun 2014 21:15:02 +0000 (14:15 -0700)]
Merge pull request #510 from crizCraig/patch-1

Add comment to Makefile.config.example about DEBUG flag issue in OSX

Evan Shelhamer [Fri, 27 Jun 2014 20:46:49 +0000 (13:46 -0700)]
Merge pull request #531 from flickr/dev-top-k-accuracy

Measure accuracy as top-k (default to top-1)

Rob Hess [Tue, 24 Jun 2014 00:22:43 +0000 (17:22 -0700)]
Comment-fix.

Rob Hess [Fri, 20 Jun 2014 21:39:58 +0000 (14:39 -0700)]
Update name of last added param.

Rob Hess [Fri, 20 Jun 2014 21:37:07 +0000 (14:37 -0700)]
Add unit test for accuracy layer.

cypof [Thu, 19 Jun 2014 22:52:10 +0000 (15:52 -0700)]
Next LayerParameter proto id

Rob Hess [Thu, 19 Jun 2014 22:12:56 +0000 (15:12 -0700)]
Use vectors instead of arrays.

Rob Hess [Mon, 16 Jun 2014 20:35:46 +0000 (13:35 -0700)]
Compute top-k accuracy in AccuracyLayer.
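
A minimal sketch of the top-k criterion (not the layer's actual code): a sample counts as correct if its true label is among the top_k highest-scoring classes.

    #include <vector>

    // Returns true if fewer than top_k classes outscore the true label.
    bool in_top_k_sketch(const std::vector<float>& scores, int label, int top_k) {
      int higher = 0;
      for (size_t c = 0; c < scores.size(); ++c) {
        if (scores[c] > scores[static_cast<size_t>(label)]) {
          ++higher;
        }
      }
      return higher < top_k;
    }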

Rob Hess [Sat, 14 Jun 2014 02:07:38 +0000 (19:07 -0700)]
Incorporate top_k param into AccuracyLayer and check its value.

Rob Hess [Sat, 14 Jun 2014 01:16:04 +0000 (18:16 -0700)]
Add parameter for AccuracyLayer in proto.

Evan Shelhamer [Fri, 27 Jun 2014 15:53:52 +0000 (08:53 -0700)]
Merge pull request #554 from shelhamer/nvcc-arch-50

Add latest CUDA arch to fix invalid device function errors on such devices

Evan Shelhamer [Fri, 27 Jun 2014 15:47:42 +0000 (08:47 -0700)]
add latest CUDA arch to fix invalid device function errors

...on devices of that architecture.

James Thewlis [Fri, 27 Jun 2014 09:49:08 +0000 (10:49 +0100)]
Make resizing & cropping with PIL work

Previously it was trying to use undefined variables

James Thewlis [Thu, 26 Jun 2014 17:12:30 +0000 (18:12 +0100)]
Test for im2col kernel

With associated Makefile changes for .cu tests

This tests that the grid-stride loop works for im2col,
using the CPU version as a reference.
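
For context, the grid-stride pattern under test looks roughly like this (a generic sketch, not the im2col kernel itself):

    // Each thread strides over the index space so that any (grid, block)
    // configuration covers all n elements.
    __global__ void grid_stride_sketch(const int n, const float* in, float* out) {
      for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
           i += blockDim.x * gridDim.x) {
        out[i] = in[i];  // stand-in for the per-element im2col work
      }
    }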

Evan Shelhamer [Fri, 27 Jun 2014 02:13:28 +0000 (19:13 -0700)]
Merge pull request #511 from kloudkl/extract_multiple_features

Extract multiple features in a single Forward pass

Jeff Donahue [Thu, 26 Jun 2014 20:50:33 +0000 (16:50 -0400)]
Merge pull request #546 from BVLC/weight-sharing

Weight Sharing

Evan Shelhamer [Thu, 26 Jun 2014 20:05:44 +0000 (13:05 -0700)]
rename layer -> param mapping for clarity

Evan Shelhamer [Thu, 26 Jun 2014 19:49:00 +0000 (12:49 -0700)]
change weight blob field name to param

Jeff Donahue [Mon, 9 Jun 2014 02:53:45 +0000 (19:53 -0700)]
weight sharing

Evan Shelhamer [Thu, 26 Jun 2014 19:18:49 +0000 (12:18 -0700)]
Merge pull request #497 from jeffdonahue/fix-backward-interface

Improve Backward method interface

Jeff Donahue [Mon, 16 Jun 2014 23:37:17 +0000 (16:37 -0700)]
force_backward works properly with non-backproppable things

Jeff Donahue [Tue, 10 Jun 2014 19:38:59 +0000 (12:38 -0700)]
change Backward interface: propagate_down is a vector -- use it to fix the
long-standing issue with how this is handled in loss layers (esp.
EuclideanLossLayer)
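
Roughly, the change amounts to the sketch below (illustrative signature only, with a placeholder blob type; the real declarations are in the layer headers): each bottom blob gets its own propagate_down flag instead of a single bool.

    #include <vector>

    struct BlobSketch { /* placeholder for a Caffe Blob */ };

    void backward_sketch(const std::vector<BlobSketch*>& top,
                         const std::vector<bool>& propagate_down,
                         std::vector<BlobSketch*>* bottom) {
      (void)top;  // a real layer reads the top diffs here
      for (size_t i = 0; i < bottom->size(); ++i) {
        if (!propagate_down[i]) {
          continue;  // e.g. never backprop into a label input
        }
        // ... compute the gradient for (*bottom)[i] ...
      }
    }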

Evan Shelhamer [Thu, 26 Jun 2014 17:25:12 +0000 (10:25 -0700)]
Merge pull request #522 from sguada/accuracy_without_loss

Split accuracy and loss

Evan Shelhamer [Thu, 26 Jun 2014 17:21:19 +0000 (10:21 -0700)]
file SoftmaxWithLoss in with loss layers

Evan Shelhamer [Thu, 26 Jun 2014 16:10:54 +0000 (09:10 -0700)]
Merge pull request #488 from longjon/wall-werror

Build with -Wall -Werror

Evan Shelhamer [Thu, 26 Jun 2014 00:58:38 +0000 (17:58 -0700)]
content ourselves to -Wall without -Werror for now

Evan Shelhamer [Thu, 26 Jun 2014 00:35:40 +0000 (17:35 -0700)]
make clang++ happy on OSX by not linking with pthread

Evan Shelhamer [Thu, 26 Jun 2014 00:32:27 +0000 (17:32 -0700)]
fix test data layer post-lmdb

Evan Shelhamer [Wed, 25 Jun 2014 23:58:03 +0000 (16:58 -0700)]
Merge pull request #478 from kloudkl/cpu_only_tests

CPU only tests

Jonathan L Long [Sat, 14 Jun 2014 02:59:33 +0000 (19:59 -0700)]
turn off some warnings for older compilers

Jonathan L Long [Tue, 10 Jun 2014 22:39:44 +0000 (15:39 -0700)]
add WARNINGS to CXXFLAGS

Jonathan L Long [Tue, 10 Jun 2014 22:38:06 +0000 (15:38 -0700)]
upgrade warnings to -Wall -Werror -Wno-sign-compare

Jonathan L Long [Tue, 10 Jun 2014 22:34:01 +0000 (15:34 -0700)]
don't end comments with \, so that -Wcomment can be used

Jonathan L Long [Tue, 10 Jun 2014 21:50:24 +0000 (14:50 -0700)]
initialize and comment variables that the compiler finds suspicious

Jonathan L Long [Tue, 10 Jun 2014 21:45:36 +0000 (14:45 -0700)]
move CUDA 6.0 check into switch statement itself

This allows -Wswitch to be turned on so that the compiler can check
exhaustiveness.
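
As a generic illustration (not the Caffe code): with -Wswitch, a switch over an enum that handles every enumerator inside the switch itself lets the compiler flag any enumerator that is added later but not handled.

    enum CopyMode { HOST_TO_HOST, HOST_TO_DEVICE, DEVICE_TO_HOST };

    // -Wswitch warns if an enumerator is missing and there is no default,
    // so keeping every check inside the switch preserves that safety net.
    const char* copy_mode_name(CopyMode mode) {
      switch (mode) {
        case HOST_TO_HOST:   return "host->host";
        case HOST_TO_DEVICE: return "host->device";
        case DEVICE_TO_HOST: return "device->host";
      }
      return "unknown";  // unreachable while the switch stays exhaustive
    }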

Jonathan L Long [Wed, 11 Jun 2014 22:29:04 +0000 (15:29 -0700)]
add missing const qualifiers to MemoryDataLayer ExactNum* functions

Jonathan L Long [Thu, 12 Jun 2014 02:31:40 +0000 (19:31 -0700)]
remove unused variables from tests

Jonathan L Long [Mon, 26 May 2014 09:31:34 +0000 (02:31 -0700)]
remove unused variables

Jonathan L Long [Thu, 12 Jun 2014 02:31:22 +0000 (19:31 -0700)]
initialize in declared order in tests

Jonathan L Long [Mon, 26 May 2014 09:31:05 +0000 (02:31 -0700)]
initialize in declared order

Jonathan L Long [Tue, 10 Jun 2014 22:34:52 +0000 (15:34 -0700)]
check if window file is empty in WindowDataLayer

Also note that window files containing windows with different numbers of
channels may not work correctly.

Jonathan L Long [Mon, 26 May 2014 09:02:42 +0000 (02:02 -0700)]
actually check status values from all HDF5 calls

Evan Shelhamer [Wed, 25 Jun 2014 21:47:06 +0000 (14:47 -0700)]
Merge pull request #427 from jamt9000/fix-kernel-index

Should not modify index in im2col kernel loop

Evan Shelhamer [Wed, 25 Jun 2014 03:01:10 +0000 (11:01 +0800)]
fix SOFTMAX_LOSS to work with loss top blob interface

Kai Li [Tue, 24 Jun 2014 14:14:54 +0000 (22:14 +0800)]
Init google logging

Kai Li [Fri, 20 Jun 2014 02:14:19 +0000 (10:14 +0800)]
Replace the raw pointers with shared_ptr to ensure memory is released
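
A generic illustration of the pattern (hypothetical Writer type, not the tool's actual code):

    // boost::shared_ptr (already used throughout Caffe) frees the object
    // when the last owner goes away, so no explicit delete is needed on
    // every exit path.
    #include <boost/shared_ptr.hpp>
    #include <vector>

    struct Writer { /* placeholder for a per-feature output writer */ };

    void make_writers_sketch(int num_features) {
      std::vector<boost::shared_ptr<Writer> > writers;
      for (int i = 0; i < num_features; ++i) {
        writers.push_back(boost::shared_ptr<Writer>(new Writer()));
      }
    }  // all Writer instances are released here, even on exceptions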

Kai Li [Tue, 17 Jun 2014 08:56:30 +0000 (16:56 +0800)]
No need to manually delete the pointers which are managed by std::vector

Kai Li [Tue, 17 Jun 2014 08:27:24 +0000 (16:27 +0800)]
Progress should be reported for each feature blob

Kai Li [Tue, 17 Jun 2014 06:53:00 +0000 (14:53 +0800)]
Extract multiple features in a single Forward pass

Yangqing Jia [Mon, 23 Jun 2014 12:56:30 +0000 (08:56 -0400)]
Merge pull request #508 from kloudkl/ImageDataLayer-RNG-core-dump

Image data layer rng core dump

Evan Shelhamer [Sun, 22 Jun 2014 04:17:45 +0000 (12:17 +0800)]
Merge pull request #529 from crizCraig/patch-3

Fix example text: there are 256 filters in conv2.

Craig Quiter [Sat, 21 Jun 2014 22:22:05 +0000 (15:22 -0700)]
There are 256 filters in conv2.

The AlexNet paper splits conv2 across two GPUs, each with 128 filters. Reference paper: http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf
Applying #528 to dev instead of master per @Yangqing.

Jonathan L Long [Sat, 21 Jun 2014 21:17:50 +0000 (14:17 -0700)]
Merge pull request #398 from sguada/L2_hinge_loss

Jonathan L Long [Sat, 21 Jun 2014 21:08:30 +0000 (14:08 -0700)]
explicitly name L1 hinge test

Jonathan L Long [Sat, 21 Jun 2014 21:08:15 +0000 (14:08 -0700)]
fix whitespace error in HingeLossLayer

Sergio [Sat, 21 Jun 2014 03:26:51 +0000 (20:26 -0700)]
Change hinge_norm to norm in test_hinge_loss

Sergio [Sat, 21 Jun 2014 03:22:34 +0000 (20:22 -0700)]
Remove C_ mentions, extra spaces and change hinge_norm to norm

Sergio [Sat, 21 Jun 2014 02:55:59 +0000 (19:55 -0700)]
Now AccuracyLayer only computes accuracy; use a LossLayer to compute the loss.
Changed all val.prototxt files in the examples to add a LossLayer that computes the loss during Test.

Sergio [Sat, 21 Jun 2014 01:37:34 +0000 (18:37 -0700)]
Unify L1 and L2 Hinge_Loss to follow convention

Sergio [Sat, 21 Jun 2014 01:35:25 +0000 (18:35 -0700)]
Fix the loss to follow the convention

Conflicts:
src/caffe/layers/loss_layer.cpp