+++ /dev/null
----
-layout: default
----
-
-# Pre-trained models
-
-[BVLC](http://bvlc.eecs.berkeley.edu) aims to provide a variety of high quality pre-trained models.
-Note that unlike Caffe itself, these models usually have licenses **academic research / non-commercial use only**.
-
-## TODO
-
-Write something about the model zoo.
-
-## Auxiliary Data
-
-Additionally, you will probably eventually need some auxiliary data (mean image, synset list, etc.): run `data/ilsvrc12/get_ilsvrc_aux.sh` from the root directory to obtain it.
A 4-page report for the ACM Multimedia Open Source competition.
* [Installation instructions](/installation.html)<br />
Tested on Ubuntu, Red Hat, OS X.
-* [Pre-trained models](/getting_pretrained_models.html)<br />
-BVLC provides ready-to-use models for non-commercial use.
+* [Model Zoo](/model_zoo.html)<br />
+BVLC suggests a standard distribution format for Caffe models, and provides trained models for non-commercial use.
* [Developing & Contributing](/development.html)<br />
Guidelines for development and contributing to Caffe.
* [API Documentation](/doxygen/)<br />
# Caffe Model Zoo
+Lots of people have used Caffe to train models of different architectures and applied them to different problems, ranging from simple regression to AlexNet-alikes to Siamese networks for image similarity to speech applications.
+
+To lower the friction of sharing these models, we introduce the model zoo framework:
+
+- A standard format for packaging Caffe model info.
+- Tools to upload/download model info to/from Github Gists, and to download trained `.caffemodel` binaries.
+- A central wiki page for sharing model info Gists.
+
+## Where to get trained models
+
+First of all, we provide some trained models out of the box.
+Each one of these can be downloaded by running `scripts/download_model_binary.py <dirname>` where `<dirname>` is specified below:
+
+- **BVLC Reference CaffeNet** in `models/bvlc_reference_caffenet`: AlexNet trained on ILSVRC 2012, with a minor variation from the version described in the NIPS 2012 paper.
+- **BVLC AlexNet** in `models/bvlc_alexnet`: AlexNet trained on ILSVRC 2012, almost exactly as described in NIPS 2012.
+- **BVLC Reference R-CNN ILSVRC-2013** in `models/bvlc_reference_rcnn_ilsvrc13`: pure Caffe implementation of [R-CNN](https://github.com/rbgirshick/rcnn).
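+
+For example, fetching the BVLC AlexNet weights from the root caffe directory is just:
+
+    ./scripts/download_model_binary.py models/bvlc_alexnet
+
+The script reads the frontmatter of the model's readme, downloads the `.caffemodel`, and confirms its SHA1 (see "Hosting trained models" below).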
+
+User-provided models are posted to a publicly editable [wiki page](https://github.com/BVLC/caffe/wiki/Model-Zoo).
+
+## Model info format
+
A Caffe model is distributed as a directory containing:
-- solver/model prototxt(s)
-- model binary file, with .caffemodel extension
-- readme.md, containing:
- - YAML header:
- - model file URL or (torrent magnet link) and MD5 hash
- - Caffe commit hash use to train this model
- - [optional] github gist id
- - license type or text
- - main body: free-form description/details
-- helpful scripts
-It is up to the user where to host the model file.
-Dropbox or their own server are both fine.
+- Solver/model prototxt(s)
+- `readme.md` containing:
+ - YAML frontmatter
+ - Caffe version used to train this model (tagged release or commit hash).
+ - [optional] file URL and SHA1 of the trained `.caffemodel`.
+ - [optional] github gist id.
+ - Information about what data the model was trained on, explanation of modeling choices, etc.
+ - License information.
+- [optional] Other helpful scripts.
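+
+As an illustration, a hypothetical `readme.md` might start with frontmatter along these lines (the field names are a sketch of the information listed above, not a fixed schema):
+
+    ---
+    name: My Style Classifier
+    caffemodel_url: http://example.com/my_style_classifier.caffemodel
+    sha1: 0000000000000000000000000000000000000000
+    caffe_version: dev branch commit a1b2c3d
+    gist_id: 0123456789abcdef0123
+    license: non-commercial
+    ---
+
+    A free-form description of the training data and modeling choices follows here.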
+
+## Hosting model info
+
+Github Gist is a good format for model info distribution because it can contain multiple files, is versionable, and has in-browser syntax highlighting and markdown rendering.
+
+- `scripts/download_model_from_gist.sh <gist_id> <dirname>`: downloads the non-binary files from a Gist into `<dirname>`.
+- `scripts/upload_model_to_gist.sh <dirname>`: uploads non-binary files in the model directory as a Github Gist and prints the Gist ID. If `gist_id` is already part of the `<dirname>/readme.md` frontmatter, then it updates the existing Gist.
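+
+For example, with a hypothetical model directory `models/my_model` prepared as described above:
+
+    # publish the prototxt(s) and readme; prints the Gist ID
+    ./scripts/upload_model_to_gist.sh models/my_model
+
+    # anyone can then fetch the model info by that ID
+    ./scripts/download_model_from_gist.sh <gist_id> models/my_model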
-We provide scripts:
+### Hosting trained models
-- publish_model_as_gist.sh: uploads non-binary files in the model directory as a Github Gist and returns the id. If gist id is already part of the readme, then updates existing gist.
-- download_model_from_gist.sh <gist_id>: downloads the non-binary files from a Gist.
-- download_model_binary.py: downloads the .caffemodel from the URL specified in readme.
+It is up to the user where to host the `.caffemodel` file.
+We host our BVLC-provided models on our own server.
+Dropbox also works fine (tip: make sure that `?dl=1` is appended to the end of the URL).
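+That is, a share link of the hypothetical form `https://www.dropbox.com/s/abc123/mymodel.caffemodel?dl=0` should be edited to end in `?dl=1` so that the download script gets the raw file rather than the Dropbox preview page.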
-The Gist is a good format for distribution because it can contain multiple files, is versionable, and has in-browser syntax highlighting and markdown rendering.
+- `scripts/download_model_binary.py <dirname>`: downloads the `.caffemodel` from the URL specified in the `<dirname>/readme.md` frontmatter and confirms SHA1.
-The existing models distributed with Caffe can stay bundled with Caffe, so I am re-working them all into this format.
-All relevant examples will be updated to start with `cd models/model_of_interest && ../scripts/download_model_binary.sh`.
## Tasks
-- get the imagenet example to work with the new prototxt location
+x get the imagenet example to work with the new prototxt location
+x make wiki page for user-submitted models
+- add flickr model to the user-submitted models wiki page
+x make docs section listing bvlc-distributed models
+x write the upload_model_to_gist script
+x write the download_model_from_gist script
"description": "Use the pre-trained ImageNet model to classify images with the Python interface.",
"example_name": "ImageNet classification",
"include_in_docs": true,
- "signature": "sha256:4f8d4c079c30d20ef4b6818e9672b1741fd1377354e5b83e291710736cecd24f"
},
"nbformat": 3,
"nbformat_minor": 0,
"\n",
"Caffe provides a general Python interface for models with `caffe.Net` in `python/caffe/pycaffe.py`, but to make off-the-shelf classification easy we provide a `caffe.Classifier` class and `classify.py` script. Both Python and MATLAB wrappers are provided. However, the Python wrapper has more features so we will describe it here. For MATLAB, refer to `matlab/caffe/matcaffe_demo.m`.\n",
"\n",
- "Before we begin, you must compile Caffe and install the python wrapper by setting your `PYTHONPATH`. If you haven't yet done so, please refer to the [installation instructions](installation.html). This example uses our pre-trained ImageNet model, an ILSVRC12 image classifier. You can download it (232.57MB) by running `examples/imagenet/get_caffe_reference_imagenet_model.sh`. Note that this pre-trained model is licensed for academic research / non-commercial use only.\n",
+ "Before we begin, you must compile Caffe and install the python wrapper by setting your `PYTHONPATH`. If you haven't yet done so, please refer to the [installation instructions](installation.html). This example uses our pre-trained CaffeNet model, an ILSVRC12 image classifier. You can download it by running `./scripts/download_model_binary.py models/bvlc_reference_caffenet`. Note that this pre-trained model is licensed for academic research / non-commercial use only.\n",
"\n",
"Ready? Let's start."
]
"\n",
"# Set the right path to your model definition file, pretrained model weights,\n",
"# and the image you would like to classify.\n",
- "MODEL_FILE = 'imagenet/imagenet_deploy.prototxt'\n",
- "PRETRAINED = 'imagenet/caffe_reference_imagenet_model'\n",
+ "MODEL_FILE = '../models/bvlc_reference_caffenet/deploy.prototxt'\n",
+ "PRETRAINED = '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'\n",
"IMAGE_FILE = 'images/cat.jpg'"
],
"language": "python",
"metadata": {}
}
]
-}
\ No newline at end of file
+}
"description": "Run a pretrained model as a detector in Python.",
"example_name": "R-CNN detection",
"include_in_docs": true,
- "signature": "sha256:8a744fbbb9ed80acab471247eaf50c27dcbd652105404df9feca599939f0c0ee"
},
"nbformat": 3,
"nbformat_minor": 0,
"\n",
"- [Selective Search](http://koen.me/research/selectivesearch/) is the region proposer used by R-CNN. The [selective_search_ijcv_with_python](https://github.com/sergeyk/selective_search_ijcv_with_python) Python module takes care of extracting proposals through the selective search MATLAB implementation. To install it, download the module and name its directory `selective_search_ijcv_with_python`, run the demo in MATLAB to compile the necessary functions, then add it to your `PYTHONPATH` for importing. (If you have your own region proposals prepared, or would rather not bother with this step, [detect.py](https://github.com/BVLC/caffe/blob/master/python/detect.py) accepts a list of images and bounding boxes as CSV.)\n",
"\n",
- "- Follow the [model instructions](http://caffe.berkeleyvision.org/getting_pretrained_models.html) to get the Caffe R-CNN ImageNet model.\n",
+    "- Run `./scripts/download_model_binary.py models/bvlc_reference_rcnn_ilsvrc13` to get the Caffe R-CNN ImageNet model.\n",
"\n",
"With that done, we'll call the bundled `detect.py` to generate the region proposals and run the network. For an explanation of the arguments, do `./detect.py --help`."
]
"input": [
"!mkdir -p _temp\n",
"!echo `pwd`/images/fish-bike.jpg > _temp/det_input.txt\n",
- "!../python/detect.py --crop_mode=selective_search --pretrained_model=imagenet/caffe_rcnn_imagenet_model --model_def=imagenet/rcnn_imagenet_deploy.prototxt --gpu --raw_scale=255 _temp/det_input.txt _temp/det_output.h5"
+    "!../python/detect.py --crop_mode=selective_search --pretrained_model=../models/bvlc_reference_rcnn_ilsvrc13/bvlc_reference_rcnn_ilsvrc13.caffemodel --model_def=../models/bvlc_reference_rcnn_ilsvrc13/deploy.prototxt --gpu --raw_scale=255 _temp/det_input.txt _temp/det_output.h5"
],
"language": "python",
"metadata": {},
"description": "Extracting features and visualizing trained filters with an example image, viewed layer-by-layer.",
"example_name": "Filter visualization",
"include_in_docs": true,
- "signature": "sha256:b1b0457e2b10110aca847a718a3fe631ebcfce63a61cbc33653244f52b1ff4af"
},
"nbformat": 3,
"nbformat_minor": 0,
"cell_type": "markdown",
"metadata": {},
"source": [
- "Follow the [instructions](http://caffe.berkeleyvision.org/getting_pretrained_models.html) for getting the pretrained models, load the net, specify test phase and CPU mode, and configure input preprocessing."
+ "Run `./scripts/download_model_binary.py models/bvlc_reference_caffenet` to get the pretrained CaffeNet model, load the net, specify test phase and CPU mode, and configure input preprocessing."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
- "net = caffe.Classifier(caffe_root + 'examples/imagenet/imagenet_deploy.prototxt',\n",
- " caffe_root + 'examples/imagenet/caffe_reference_imagenet_model')\n",
+ "net = caffe.Classifier(caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt',\n",
+ " caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')\n",
"net.set_phase_test()\n",
"net.set_mode_cpu()\n",
"# input preprocessing: 'data' is the name of the input blob == net.inputs[0]\n",
"metadata": {}
}
]
-}
\ No newline at end of file
+}
Now we can train! (You can fine-tune in CPU mode by leaving out the `-gpu` flag.)
- caffe % ./build/tools/caffe train -solver examples/finetune_flickr_style/flickr_style_solver.prototxt -weights examples/imagenet/caffe_reference_imagenet_model -gpu 0
+ caffe % ./build/tools/caffe train -solver examples/finetune_flickr_style/flickr_style_solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel -gpu 0
[...]
I0828 22:10:04.025378 9718 solver.cpp:46] Solver scaffolding done.
I0828 22:10:04.025388 9718 caffe.cpp:95] Use GPU with device ID 0
- I0828 22:10:04.192004 9718 caffe.cpp:107] Finetuning from examples/imagenet/caffe_reference_imagenet_model
+ I0828 22:10:04.192004 9718 caffe.cpp:107] Finetuning from models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel
[...]
+# This file is for the net_surgery.ipynb example notebook.
name: "CaffeNetConv"
input: "data"
input_dim: 1
---
title: ImageNet tutorial
-description: Train and test "CaffeNet" on ImageNet challenge data.
+description: Train and test "CaffeNet" on ImageNet data.
category: example
include_in_docs: true
priority: 1
Brewing ImageNet
================
-We are going to describe a reference implementation for the approach first proposed by Krizhevsky, Sutskever, and Hinton in their [NIPS 2012 paper](http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf).
-Since training the whole model takes some time and energy, we provide a model, trained in the same way as we describe here, to help fight global warming.
-If you would like to simply use the pretrained model, check out the [Pretrained ImageNet](../../getting_pretrained_models.html) page.
-*Note that the pretrained model is for academic research / non-commercial use only*.
-
-To clarify, by ImageNet we actually mean the ILSVRC12 challenge, but you can easily train on the whole of ImageNet as well, just with more disk space, and a little longer training time.
+This guide is meant to get you ready to train your own model on your own data.
+If you just want an ImageNet-trained network, then note that since training takes a lot of energy and we hate global warming, we provide the CaffeNet model trained as described below in the [model zoo](/model_zoo.html).
Data Preparation
----------------
+*The guide specifies all paths and assumes all commands are executed from the root caffe directory.*
+
+*By "ImageNet" we here mean the ILSVRC12 challenge, but you can easily train on the whole of ImageNet as well, just with more disk space, and a little longer training time.*
+
We assume that you already have downloaded the ImageNet training data and validation data, and they are stored on your disk like:
/path/to/imagenet/train/n01440764/n01440764_10026.JPEG
You will first need to prepare some auxiliary data for training. This data can be downloaded by:
- cd $CAFFE_ROOT/data/ilsvrc12/
- ./get_ilsvrc_aux.sh
+    ./data/ilsvrc12/get_ilsvrc_aux.sh
The training and validation input are described in `train.txt` and `val.txt` as text listing all the files and their labels. Note that we use a different indexing for labels than the ILSVRC devkit: we sort the synset names in their ASCII order, and then label them from 0 to 999. See `synset_words.txt` for the synset/name mapping.
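+
+For reference, each line of these files pairs an image path (relative to the corresponding image root) with a 0-based label, e.g. a `train.txt` line for the synset that sorts first:
+
+    n01440764/n01440764_10026.JPEG 0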
-You may want to resize the images to 256x256 in advance. By default, we do not explicitly do this because in a cluster environment, one may benefit from resizing images in a parallel fashion, using mapreduce. For example, Yangqing used his lightedweighted [mincepie](https://github.com/Yangqing/mincepie) package to do mapreduce on the Berkeley cluster. If you would things to be rather simple and straightforward, you can also use shell commands, something like:
+You may want to resize the images to 256x256 in advance. By default, we do not explicitly do this because in a cluster environment, one may benefit from resizing images in a parallel fashion, using mapreduce. For example, Yangqing used his lightweight [mincepie](https://github.com/Yangqing/mincepie) package. If you prefer things to be simpler, you can also use shell commands, something like:
for name in /path/to/imagenet/val/*.JPEG; do
convert -resize 256x256\! $name $name
done
-Go to `$CAFFE_ROOT/examples/imagenet/` for the rest of this guide.
-
-Take a look at `create_imagenet.sh`. Set the paths to the train and val dirs as needed, and set "RESIZE=true" to resize all images to 256x256 if you haven't resized the images in advance.
-Now simply create the leveldbs with `./create_imagenet.sh`. Note that `ilsvrc12_train_leveldb` and `ilsvrc12_val_leveldb` should not exist before this execution. It will be created by the script. `GLOG_logtostderr=1` simply dumps more information for you to inspect, and you can safely ignore it.
+Take a look at `examples/imagenet/create_imagenet.sh`. Set the paths to the train and val dirs as needed, and set "RESIZE=true" to resize all images to 256x256 if you haven't resized the images in advance.
+Now simply create the leveldbs with `examples/imagenet/create_imagenet.sh`. Note that `examples/imagenet/ilsvrc12_train_leveldb` and `examples/imagenet/ilsvrc12_val_leveldb` should not exist before this execution. They will be created by the script. `GLOG_logtostderr=1` simply dumps more information for you to inspect, and you can safely ignore it.
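+
+For example, from the root caffe directory:
+
+    GLOG_logtostderr=1 ./examples/imagenet/create_imagenet.sh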
Compute Image Mean
------------------
The model requires us to subtract the image mean from each image, so we have to compute the mean. `tools/compute_image_mean.cpp` implements that - it is also a good example to familiarize yourself with how to manipulate the various components, such as protocol buffers, leveldbs, and logging, if you are not already familiar with them. Anyway, the mean computation can be carried out as:
- ./make_imagenet_mean.sh
+ ./examples/imagenet/make_imagenet_mean.sh
which will make `data/ilsvrc12/imagenet_mean.binaryproto`.
-Network Definition
-------------------
-
-The network definition follows strictly the one in Krizhevsky et al. You can find the detailed definition at `examples/imagenet/imagenet_train_val.prototxt`. Note the paths in the data layer --- if you have not followed the exact paths in this guide you will need to change the following lines:
+Model Definition
+----------------
- source: "ilvsrc12_train_leveldb"
- mean_file: "../../data/ilsvrc12/imagenet_mean.binaryproto"
+We are going to describe a reference implementation for the approach first proposed by Krizhevsky, Sutskever, and Hinton in their [NIPS 2012 paper](http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf).
-to point to your own leveldb and image mean.
+The network definition (`models/bvlc_reference_caffenet/train_val.prototxt`) follows the one in Krizhevsky et al.
+Note that if you deviated from file paths suggested in this guide, you'll need to adjust the relevant paths in the `.prototxt` files.
-If you look carefully at `imagenet_train_val.prototxt`, you will notice several `include` sections specifying either `phase: TRAIN` or `phase: TEST`. These sections allow us to define two closely related networks in one file: the network used for training and the network used for testing. These two networks are almost identical, sharing all layers except for those marked with `include { phase: TRAIN }` or `include { phase: TEST }`. In this case, only the input layers and one output layer are different.
+If you look carefully at `models/bvlc_reference_caffenet/train_val.prototxt`, you will notice several `include` sections specifying either `phase: TRAIN` or `phase: TEST`. These sections allow us to define two closely related networks in one file: the network used for training and the network used for testing. These two networks are almost identical, sharing all layers except for those marked with `include { phase: TRAIN }` or `include { phase: TEST }`. In this case, only the input layers and one output layer are different.
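+
+A sketch of the pattern (most layer parameters elided; see the shipped prototxt for the real definition):
+
+    layers {
+      name: "data"
+      type: DATA
+      top: "data"
+      top: "label"
+      data_param {
+        source: "examples/imagenet/ilsvrc12_train_leveldb"
+      }
+      include { phase: TRAIN }
+    }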
-**Input layer differences:** The training network's `data` input layer draws its data from `ilsvrc12_train_leveldb` and randomly mirrors the input image. The testing network's `data` layer takes data from `ilsvrc12_val_leveldb` and does not perform random mirroring.
+**Input layer differences:** The training network's `data` input layer draws its data from `examples/imagenet/ilsvrc12_train_leveldb` and randomly mirrors the input image. The testing network's `data` layer takes data from `examples/imagenet/ilsvrc12_val_leveldb` and does not perform random mirroring.
**Output layer differences:** Both networks output the `softmax_loss` layer, which in training is used to compute the loss function and to initialize the backpropagation, while in validation this loss is simply reported. The testing network also has a second output layer, `accuracy`, which is used to report the accuracy on the test set. In the process of training, the test network will occasionally be instantiated and tested on the test set, producing lines like `Test score #0: xxx` and `Test score #1: xxx`. In this case score 0 is the accuracy (which will start around 1/1000 = 0.001 for an untrained network) and score 1 is the loss (which will start around 7 for an untrained network).
* The network will be trained with momentum 0.9 and a weight decay of 0.0005.
* For every 10,000 iterations, we will take a snapshot of the current status.
-Sound good? This is implemented in `examples/imagenet/imagenet_solver.prototxt`. Again, you will need to change the first line:
-
- net: "imagenet_train_val.prototxt"
-
-to point to the actual path if you have changed it.
+Sound good? This is implemented in `models/bvlc_reference_caffenet/solver.prototxt`.
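+
+A sketch of the corresponding solver settings, limited to the values quoted above (the shipped file carries the full configuration):
+
+    net: "models/bvlc_reference_caffenet/train_val.prototxt"
+    momentum: 0.9
+    weight_decay: 0.0005
+    snapshot: 10000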
Training ImageNet
-----------------
Ready? Let's train.
- ./build/tools/caffe train --solver=examples/imagenet/imagenet_solver.prototxt
+ ./build/tools/caffe train --solver=models/bvlc_reference_caffenet/solver.prototxt
Sit back and enjoy!
Resume Training?
----------------
-We all experience times when the power goes out, or we feel like rewarding ourself a little by playing Battlefield (does someone still remember Quake?). Since we are snapshotting intermediate results during training, we will be able to resume from snapshots. This can be done as easy as:
+We all experience times when the power goes out, or we feel like rewarding ourselves a little by playing Battlefield (does anyone still remember Quake?). Since we are snapshotting intermediate results during training, we will be able to resume from snapshots. This can be done as easily as:
./build/tools/caffe train --solver=examples/imagenet/imagenet_solver.prototxt --snapshot=examples/imagenet/caffe_imagenet_10000.solverstate
-where in the script `imagenet_train_1000.solverstate` is the solver state snapshot that stores all necessary information to recover the exact solver state (including the parameters, momentum history, etc).
+where in the script `caffe_imagenet_10000.solverstate` is the solver state snapshot that stores all necessary information to recover the exact solver state (including the parameters, momentum history, etc.).
Parting Words
-------------
"description": "How to do net surgery and manually change model parameters, making a fully-convolutional classifier for dense feature extraction.",
"example_name": "Editing model parameters",
"include_in_docs": true,
- "signature": "sha256:10c551b31a64c2210f6094dbb603f26c206a7b72cd99032f475cb5023edcdc43"
},
"nbformat": 3,
"nbformat_minor": 0,
"cell_type": "code",
"collapsed": false,
"input": [
- "!diff imagenet/imagenet_full_conv.prototxt imagenet/imagenet_deploy.prototxt"
+ "!diff imagenet/imagenet_full_conv.prototxt ../models/bvlc_reference_caffenet/deploy.prototxt"
],
"language": "python",
"metadata": {},
"import caffe\n",
"\n",
"# Load the original network and extract the fully-connected layers' parameters.\n",
- "net = caffe.Net('imagenet/imagenet_deploy.prototxt', 'imagenet/caffe_reference_imagenet_model')\n",
+    "net = caffe.Net('../models/bvlc_reference_caffenet/deploy.prototxt', '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')\n",
"params = ['fc6', 'fc7', 'fc8']\n",
"# fc_params = {name: (weights, biases)}\n",
"fc_params = {pr: (net.params[pr][0].data, net.params[pr][1].data) for pr in params}\n",
"collapsed": false,
"input": [
"# Load the fully-convolutional network to transplant the parameters.\n",
- "net_full_conv = caffe.Net('imagenet/imagenet_full_conv.prototxt', 'imagenet/caffe_reference_imagenet_model')\n",
+ "net_full_conv = caffe.Net('imagenet/imagenet_full_conv.prototxt', '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')\n",
"params_full_conv = ['fc6-conv', 'fc7-conv', 'fc8-conv']\n",
"# conv_params = {name: (weights, biases)}\n",
"conv_params = {pr: (net_full_conv.params[pr][0].data, net_full_conv.params[pr][1].data) for pr in params_full_conv}\n",
"metadata": {}
}
]
-}
\ No newline at end of file
+}
class ImagenetClassifier(object):
default_args = {
'model_def_file': (
- '{}/examples/imagenet/imagenet_deploy.prototxt'.format(REPO_DIRNAME)),
+ '{}/models/bvlc_reference_caffenet/deploy.prototxt'.format(REPO_DIRNAME)),
'pretrained_model_file': (
- '{}/examples/imagenet/caffe_reference_imagenet_model'.format(REPO_DIRNAME)),
+ '{}/models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'.format(REPO_DIRNAME)),
'mean_file': (
'{}/python/caffe/imagenet/ilsvrc_2012_mean.npy'.format(REPO_DIRNAME)),
'class_labels_file': (
The demo server requires Python with some dependencies.
To make sure you have the dependencies, please run `pip install -r examples/web_demo/requirements.txt`, and also make sure that you've compiled the Python Caffe interface and that it is on your `PYTHONPATH` (see [installation instructions](/installation.html)).
-Make sure that you have obtained the Caffe Reference ImageNet Model and the ImageNet Auxiliary Data ([instructions](/getting_pretrained_models.html)).
+Make sure that you have obtained the Reference CaffeNet Model and the ImageNet Auxiliary Data:
+
+ ./scripts/download_model_binary.py models/bvlc_reference_caffenet
+ ./data/ilsvrc12/get_ilsvrc_aux.sh
+
NOTE: if you run into trouble, try re-downloading the auxiliary files.
## Run
filename = list_im;
list_im = read_cell(filename);
end
-% Adjust the batch size to match with imagenet_deploy.prototxt
+% Adjust the batch size and dim to match models/bvlc_reference_caffenet/deploy.prototxt
batch_size = 10;
-% Adjust dim to the output size of imagenet_deploy.prototxt
dim = 1000;
disp(list_im)
if mod(length(list_im),batch_size)
end
if nargin < 2 || isempty(model_def_file)
% By default use the bvlc_reference_caffenet deploy definition
- model_def_file = '../../examples/imagenet/imagenet_deploy.prototxt';
+ model_def_file = '../../models/bvlc_reference_caffenet/deploy.prototxt';
end
if nargin < 3 || isempty(model_file)
% By default use caffe reference model
- model_file = '../../examples/imagenet/caffe_reference_imagenet_model';
+ model_file = '../../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel';
end
parser.add_argument(
"--model_def",
default=os.path.join(pycaffe_dir,
- "../examples/imagenet/imagenet_deploy.prototxt"),
+ "../models/bvlc_reference_caffenet/deploy.prototxt"),
help="Model definition file."
)
parser.add_argument(
"--pretrained_model",
default=os.path.join(pycaffe_dir,
- "../examples/imagenet/caffe_reference_imagenet_model"),
+ "../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel"),
help="Trained model weights file."
)
parser.add_argument(
parser.add_argument(
"--model_def",
default=os.path.join(pycaffe_dir,
- "../examples/imagenet/imagenet_deploy.prototxt"),
+    "../models/bvlc_reference_caffenet/deploy.prototxt"),
help="Model definition file."
)
parser.add_argument(
"--pretrained_model",
default=os.path.join(pycaffe_dir,
- "../examples/imagenet/caffe_reference_imagenet_model"),
+ "../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel"),
help="Trained model weights file."
)
parser.add_argument(