# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# The maximum number of training iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet_multistep"
# solver mode: CPU or GPU
solver_mode: GPU
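
Note that `snapshot: 5000` above writes intermediate results every 5000 iterations: Caffe saves a `.caffemodel` (the learned weights) and a `.solverstate` (the solver's own state), with names derived from `snapshot_prefix`, and training can later be resumed from the solver state. A minimal sketch, assuming the stock `caffe` binary and that this solver is saved as `examples/mnist/lenet_multistep_solver.prototxt`:

```
# Files written at iteration 5000 (names derived from snapshot_prefix):
#   examples/mnist/lenet_multistep_iter_5000.caffemodel
#   examples/mnist/lenet_multistep_iter_5000.solverstate
# Resume training from the saved solver state:
./build/tools/caffe train \
    --solver=examples/mnist/lenet_multistep_solver.prototxt \
    --snapshot=examples/mnist/lenet_multistep_iter_5000.solverstate
```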

If you would like to train on CPU instead, simply change `solver_mode` to `CPU` in the solver definition, and you will be using CPU for training. Isn't that easy?

MNIST is a small dataset, so training with a GPU does not bring much benefit because of communication overhead. On larger datasets with more complex models, such as ImageNet, the difference in computation speed will be more significant.
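
Either way, training is launched by pointing the `caffe` tool at the solver file; it runs on CPU or GPU according to `solver_mode`. A sketch, assuming you run from the Caffe root after a standard build and the solver path used above:

```
./build/tools/caffe train --solver=examples/mnist/lenet_multistep_solver.prototxt
```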

### How to reduce the learning rate at fixed steps?
Look at `lenet_multistep_solver.prototxt`.
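
Specifically, that solver uses Caffe's `multistep` learning rate policy: the rate is multiplied by `gamma` each time the iteration count reaches one of the listed `stepvalue`s. A sketch of the relevant lines (the particular step values here are illustrative):

```
# The learning rate policy: multiply the rate by gamma
# at each of the listed stepvalue iterations.
lr_policy: "multistep"
gamma: 0.9
stepvalue: 5000
stepvalue: 7000
stepvalue: 8000
stepvalue: 9000
stepvalue: 9500
```

All other settings can stay as in the solver shown above.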