
Commit 8419af8

Merge branch 'master' of github.com:hughperkins/DeepCL
2 parents 8dd9bf7 + 6b3d99d commit 8419af8

2 files changed: +27 -19 lines

doc/Commandline.md

Lines changed: 25 additions & 18 deletions
@@ -18,6 +18,10 @@
 
 # Commandline usage
 
+## Training
+
+Use `train` to run training.
+
 * Syntax is based on that specified in Ciresan et al's [Multi-column Deep Neural Networks for Image Classification](http://arxiv.org/pdf/1202.2745.pdf), section 3, first paragraph:
 * network is defined by a string like: `100C5-MP2-100C5-MP2-100C4-MP2-300N-100N-6N`
 * `100c5` means: a convolutional layer, with 100 filters, each 5x5
@@ -28,82 +32,82 @@
 * `tanh` means a tanh layer
 * Thus, you can do, for example:
 ```bash
-./deepclrun netdef=8c5z-relu-mp2-16c5z-relu-mp3-150n-tanh-10n learningrate=0.002 dataset=mnist
+./train netdef=8c5z-relu-mp2-16c5z-relu-mp3-150n-tanh-10n learningrate=0.002 dataset=mnist
 ```
 ... in order to learn mnist, using the same neural net architecture as used in the [convnetjs mnist demo](http://cs.stanford.edu/people/karpathy/convnetjs/demo/mnist.html)
 * Similarly, you can learn NORB, using approximately the architecture specified in [lecun-04](http://yann.lecun.com/exdb/publis/pdf/lecun-04.pdf), by doing:
 ```bash
-./deepclrun netdef=8c5-relu-mp4-24c6-relu-mp3-80c6-relu-5n learningrate=0.0001 dataset=norb
+./train netdef=8c5-relu-mp4-24c6-relu-mp3-80c6-relu-5n learningrate=0.0001 dataset=norb
 ```
 * Or, you can train NORB using the very deep, broad architecture specified by Ciresan et al in [Flexible, High Performance Convolutional Neural Networks for Image Classification](http://ijcai.org/papers11/Papers/IJCAI11-210.pdf):
 ```bash
-./deepclrun netdef=MP3-300C6-RELU-MP2-500C4-RELU-MP4-500N-TANH-5N learningrate=0.0001 dataset=norb
+./train netdef=MP3-300C6-RELU-MP2-500C4-RELU-MP4-500N-TANH-5N learningrate=0.0001 dataset=norb
 ```
 
-## Convolutional
+### Convolutional
 
 * eg `-32c5` is a convolutional layer with 32 filters of 5x5
 * `-32c5z` is a convolutional layer with zero-padding, of 32 filters of 5x5
 
-## Fully-connected
+### Fully-connected
 
 * eg `-150n` is a fully connected layer, with 150 neurons.
 
-## Max-pooling
+### Max-pooling
 
 * Eg `-mp3` will add a max-pooling layer, over 3x3 non-overlapping regions. The number is the size of the regions, and can be modified
 
-## Dropout layers
+### Dropout layers
 
 * Simply add `-drop` into the netdef string
 * this will use a dropout ratio of 0.5
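For example, `-drop` can be slotted into the mnist netdef from earlier. A sketch; where exactly to place `-drop` is a design choice, not something this page prescribes:

```bash
# illustrative: the mnist netdef from above, with 0.5-ratio dropout after each max-pooling layer
./train netdef=8c5z-relu-mp2-drop-16c5z-relu-mp3-drop-150n-tanh-10n learningrate=0.002 dataset=mnist
```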
 
-## Activation layers
+### Activation layers
 
 * Simply add any of the following into the netdef string:
 * `-tanh`
 * `-sigmoid`
 * `-relu`
 
-### Random patches
+#### Random patches
 
 * `RP24` means a random patch layer, which will cut a 24x24 patch from a random position in each incoming image, and send that to its output
 * during testing, the patch will be cut from the centre of each image
 
-### Random translations
+#### Random translations
 
 * `RT2` means a random translations layer, which will translate the image randomly during training, up to 2 pixels, in either direction, along both axes
 * Can specify any non-negative integer, less than the image size
 * During testing, no translation is done
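A sketch combining both, assuming 28x28 mnist inputs (so a 24x24 patch fits), that these preprocessing layers sit at the front of the netdef string, and that lowercase `rt2`/`rp24` are accepted like the other layer tokens:

```bash
# illustrative: jitter each image by up to 2 pixels, then crop a random 24x24 patch,
# ahead of the mnist net from earlier
./train netdef=rt2-rp24-8c5z-relu-mp2-16c5z-relu-mp3-150n-tanh-10n learningrate=0.002 dataset=mnist
```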
 
-## Multi-column deep neural network "MultiNet"
+### Multi-column deep neural network "MultiNet"
 
 * You can train several neural networks at the same time, and predict using the average output across all of them, using the `multinet` option
 * Simply add eg `multinet=3` in the commandline, to train across 3 nets in parallel, or put a number of your choice
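For instance, to train an ensemble of three copies of the earlier mnist net:

```bash
# train 3 nets in parallel; prediction averages the 3 outputs
./train netdef=8c5z-relu-mp2-16c5z-relu-mp3-150n-tanh-10n learningrate=0.002 dataset=mnist multinet=3
```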
 
-## Repeated layers
+### Repeated layers
 
 * simply prefix a layer with eg `3*` to repeat it. `3*` will repeat the layer 3 times, and similarly for other numbers, eg:
 ```
-./deepclrun netdef=6*(32c5z-relu)-500n-361n learningrate=0.0001 dataset=kgsgoall
+./train netdef=6*(32c5z-relu)-500n-361n learningrate=0.0001 dataset=kgsgoall
 ```
 ... will create 6 convolutional layers of 32 5x5 filters each.
 * you can also use parentheses `(...)` to repeat multiple layers, eg:
 ```
-./deepclrun netdef=3*(32c5z-relu-mp2)-150n-10n
+./train netdef=3*(32c5z-relu-mp2)-150n-10n
 ```
 ... will be expanded to:
 ```
-./deepclrun netdef=32c5z-relu-mp2-32c5z-relu-mp2-32c5z-relu-mp2-150n-10n
+./train netdef=32c5z-relu-mp2-32c5z-relu-mp2-32c5z-relu-mp2-150n-10n
 ```
 
-## File types
+### File types
 
 * Simply pass in the filename of the data file containing the images
 * Filetype will be detected automatically
 * See [Loaders](loaders.md) for information on available loaders
 
-## Weight persistence
+### Weight persistence
 
 * By default, weights will be written to `weights.dat`, after each epoch
 * You can add option `writeweightsinterval=5` to write weights every 5 minutes, even if the epoch hasn't finished yet. Just replace `5` with the number of minutes between each write
@@ -113,7 +117,7 @@
 * Epoch number, batch number, batch loss, and batch numcorrect will all be loaded from where they left off, from the weights file, so you can freely stop and start training, without losing your training progress
 * be sure to use the `writeweightsinterval=5` option if you are going to stop/start often, with long epochs, to avoid losing hours/days of training!
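Putting these together, a stop/start-friendly session might look like the following sketch; the option names come from the table below and the notes above:

```bash
# initial run: checkpoint weights to weights.dat every 5 minutes
./train netdef=8c5z-relu-mp2-16c5z-relu-mp3-150n-tanh-10n dataset=mnist numepochs=20 writeweightsinterval=5
# resume after an interruption: netdef and training data must match the original run,
# and numepochs must exceed the epoch number stored in the weights file
./train netdef=8c5z-relu-mp2-16c5z-relu-mp3-150n-tanh-10n dataset=mnist numepochs=20 loadweights=1 writeweightsinterval=5
```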
 
-## Command-line options
+### Command-line options
 
 | Option | Description |
 |----|----|
@@ -145,5 +149,8 @@
 | writeweightsinterval=5 | write the weights to file every 5 minutes of training, even if the epoch hasn't finished yet. Default is 0, ie only write weights after each epoch |
 | loadweights=1 | load weights at start, from weightsfile. Current training config, ie netdef and trainingfile, should match that used to create the weightsfile. Note that epoch number will continue from file, so make sure to increase numepochs sufficiently |
 
+## Prediction
+
+Use `predict` to run prediction.
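This page doesn't document the options `predict` accepts, so the following is only an assumption-laden sketch, reusing the `netdef`, `dataset`, and `weightsfile` names from the training options above; check the repository docs for the real option list:

```bash
# hypothetical invocation: these option names are assumptions, not confirmed by this page
./predict netdef=8c5z-relu-mp2-16c5z-relu-mp3-150n-tanh-10n dataset=mnist weightsfile=weights.dat
```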
src/layer/Layer.h

Lines changed: 2 additions & 1 deletion
@@ -16,6 +16,7 @@
 #include "layer/LayerMaker.h"
 #include "util/stringhelper.h"
 #include "EasyCL.h"
+#include "DeepCLDllExport.h"
 
 #define VIRTUAL virtual
 
@@ -24,7 +25,7 @@ class TrainerStateMaker;
 
 PUBLICAPI
 /// A single layer within the neural net
-class Layer {
+class DeepCL_EXPORT Layer {
 public:
     Layer *previousLayer;
     Layer *nextLayer;
