From f347355207601a602cc1081ea10f969f995138e9 Mon Sep 17 00:00:00 2001
From: Sergio
Date: Sat, 20 Dec 2014 23:24:32 -0800
Subject: [PATCH] Added credits for training bvlc models

---
 models/bvlc_alexnet/readme.md                 | 2 ++
 models/bvlc_googlenet/readme.md               | 2 ++
 models/bvlc_reference_caffenet/readme.md      | 2 ++
 models/bvlc_reference_rcnn_ilsvrc13/readme.md | 2 ++
 models/finetune_flickr_style/readme.md        | 2 ++
 5 files changed, 10 insertions(+)

diff --git a/models/bvlc_alexnet/readme.md b/models/bvlc_alexnet/readme.md
index 20c393f..c25fd4f 100644
--- a/models/bvlc_alexnet/readme.md
+++ b/models/bvlc_alexnet/readme.md
@@ -18,6 +18,8 @@ The best validation performance during training was iteration 358,000 with valid
 This model obtains a top-1 accuracy 57.1% and a top-5 accuracy 80.2% on the validation set, using just the center crop.
 (Using the average of 10 crops, (4 + 1 center) * 2 mirror, should obtain a bit higher accuracy.)
 
+This model was trained by Evan Shelhamer @shelhamer
+
 ## License
 
 The data used to train this model comes from the ImageNet project, which distributes its database to researchers who agree to a following term of access:
diff --git a/models/bvlc_googlenet/readme.md b/models/bvlc_googlenet/readme.md
index 27022d3..8a3bbec 100644
--- a/models/bvlc_googlenet/readme.md
+++ b/models/bvlc_googlenet/readme.md
@@ -5,6 +5,7 @@ caffemodel_url: http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel
 license: non-commercial
 sha1: 405fc5acd08a3bb12de8ee5e23a96bec22f08204
 caffe_commit: bc614d1bd91896e3faceaf40b23b72dab47d44f5
+gist_id: 866e2aa1fd707b89b913
 ---
 
 This model is a replication of the model described in the [GoogleNet](http://arxiv.org/abs/1409.4842) publication. We would like to thank Christian Szegedy for all his help in the replication of GoogleNet model.
@@ -25,6 +26,7 @@ Timings for bvlc_googlenet with cuDNN using batch_size:128 on a K40c:
  - Average Backward pass: 1123.84 ms.
  - Average Forward-Backward: 1688.8 ms.
 
+This model was trained by Sergio Guadarrama @sguada
 
 ## License
 
diff --git a/models/bvlc_reference_caffenet/readme.md b/models/bvlc_reference_caffenet/readme.md
index d1c6269..b867e73 100644
--- a/models/bvlc_reference_caffenet/readme.md
+++ b/models/bvlc_reference_caffenet/readme.md
@@ -18,6 +18,8 @@ The best validation performance during training was iteration 313,000 with valid
 This model obtains a top-1 accuracy 57.4% and a top-5 accuracy 80.4% on the validation set, using just the center crop.
 (Using the average of 10 crops, (4 + 1 center) * 2 mirror, should obtain a bit higher accuracy still.)
 
+This model was trained by Jeff Donahue @jeffdonahue
+
 ## License
 
 The data used to train this model comes from the ImageNet project, which distributes its database to researchers who agree to a following term of access:
diff --git a/models/bvlc_reference_rcnn_ilsvrc13/readme.md b/models/bvlc_reference_rcnn_ilsvrc13/readme.md
index fb8f26d..5d4bc5a 100644
--- a/models/bvlc_reference_rcnn_ilsvrc13/readme.md
+++ b/models/bvlc_reference_rcnn_ilsvrc13/readme.md
@@ -13,6 +13,8 @@ Try the [detection example](http://nbviewer.ipython.org/github/BVLC/caffe/blob/m
 
 *N.B. For research purposes, make use of the official R-CNN package and not this example.*
 
+This model was trained by Ross Girshick @rbgirshick
+
 ## License
 
 The data used to train this model comes from the ImageNet project, which distributes its database to researchers who agree to a following term of access:
diff --git a/models/finetune_flickr_style/readme.md b/models/finetune_flickr_style/readme.md
index d2a8a95..aac7f7c 100644
--- a/models/finetune_flickr_style/readme.md
+++ b/models/finetune_flickr_style/readme.md
@@ -15,6 +15,8 @@ The final performance:
 I1017 07:36:17.370730 31333 solver.cpp:247] Iteration 100000, Testing net (#0)
 I1017 07:36:34.248730 31333 solver.cpp:298] Test net output #0: accuracy = 0.3916
 
+This model was trained by Sergey Karayev @sergeyk
+
 ## License
 
 The Flickr Style dataset contains only URLs to images.
--
2.7.4
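
For context, the AlexNet and CaffeNet readmes touched by this patch mention that averaging predictions over 10 crops, "(4 + 1 center) * 2 mirror", should give a bit higher accuracy than the center crop alone. Below is a minimal sketch of how such a crop set can be built; the `ten_crops` name and the 227-pixel crop size are illustrative assumptions, not part of the patch (Caffe ships a comparable helper, `caffe.io.oversample`).

```python
import numpy as np

def ten_crops(image, crop_size=227):
    """Return the 4 corner crops, the center crop, and their horizontal mirrors."""
    h, w, _ = image.shape
    c = crop_size
    offsets = [
        (0, 0), (0, w - c), (h - c, 0), (h - c, w - c),   # 4 corner crops
        ((h - c) // 2, (w - c) // 2),                     # 1 center crop
    ]
    crops = [image[y:y + c, x:x + c] for y, x in offsets]
    crops += [crop[:, ::-1] for crop in crops]            # * 2 mirror (horizontal flips)
    return np.stack(crops)                                # shape: (10, crop_size, crop_size, channels)

# The class probabilities predicted for the 10 crops are then averaged;
# per the readmes this yields slightly higher accuracy than the center crop alone.
```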