To-do:
- rbm_pt.m, grbm_pt.m: merge
- rbm_pt.m: use CUDA (matlab parallel computing toolbox)
- rbm.m, rbm_pt.m: merge
- dbm.m: support parallel tempering or CAST
- Progress display for DBM, DAE

Done:
- rbm.m, grbm.m: merged => rbm.m only; grbm.m is now obsolete
- rbm.m: use CUDA (matlab parallel computing toolbox)
- dbm.m: use CUDA (matlab parallel computing toolbox)
- dbm.m: adaptive learning rate
- dae.m, dae_get_hidden.m, default_dae.m: a single-layer DAE
- dae.m: contractive, soft-sparsity regularizations
- sdae.m: deep autoencoder (tied weights)
- dae_get_hidden.m: explicit sparsification
- sdae_get_hidden.m: explicit sparsification
- dbn.m: up-down learning algorithm for deep belief net
- mlp.m, default_mlp.m, classify_mlp.m: MLP (stochastic backpropagation, nothing fancy)
- rbm.m: Fast PCD
- dbm.m: centering trick (fixed centering variables only)

Implemented Features:
- GPU Computing: Matlab Parallel Computing Toolbox
- (Denoising) Autoencoder
+ Shallow one: sparsity, contractive regularization
+ Deep one: stochastic backprop
+ Binary/Gaussian visible and hidden units
+ Soft-sparsity regularization (Gaussian-Gaussian case)
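The shallow denoising autoencoder above (masking corruption, tied weights, binary visible/hidden units) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the toolbox's MATLAB code; the names dae_step and recon_error are made up for this example:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dae_step(W, b, c, x, noise=0.3, lr=0.1):
    """One SGD step of a tied-weight binary denoising autoencoder."""
    # Corrupt the input with masking noise (zero out a fraction of entries).
    x_tilde = x * (rng.random(x.shape) > noise)
    h = sigmoid(x_tilde @ W + b)          # encode
    x_hat = sigmoid(h @ W.T + c)          # decode with tied weights
    # Cross-entropy reconstruction gradient for binary visibles.
    d_out = x_hat - x                     # (n, d) error at output pre-activation
    d_hid = (d_out @ W) * h * (1.0 - h)   # (n, k) backprop through hidden layer
    n = len(x)
    # Tied W collects gradient from both the encoder and the decoder path.
    W -= lr * (x_tilde.T @ d_hid + d_out.T @ h) / n
    b -= lr * d_hid.mean(0)
    c -= lr * d_out.mean(0)
    return W, b, c

def recon_error(W, b, c, x):
    """Mean squared reconstruction error on clean inputs."""
    h = sigmoid(x @ W + b)
    return float(np.mean((x - sigmoid(h @ W.T + c)) ** 2))
```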
- Multi-layer Perceptron
+ Stochastic Backpropagation, Adagrad
+ tanh/sigm nonlinearities
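The Adagrad rule listed for the MLP amounts to scaling each parameter's step by the root of its accumulated squared gradients. A minimal sketch, again in NumPy rather than the toolbox's MATLAB (adagrad_update is a hypothetical name):

```python
import numpy as np

def adagrad_update(w, grad, cache, lr=0.5, eps=1e-8):
    """Adagrad: per-parameter learning rates from accumulated squared gradients."""
    cache = cache + grad ** 2                   # running sum of squared gradients
    w = w - lr * grad / (np.sqrt(cache) + eps)  # frequently-updated params slow down
    return w, cache
```

Parameters with large historical gradients take smaller steps, so a single global learning rate works across layers with very different gradient scales.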
- Restricted Boltzmann Machines & Deep Belief Networks
+ Binary, Gaussian
+ Enhanced Grad., Adaptive Learning Rate
+ Contrastive Divergence, (Fast) Persistent Contrastive Divergence
+ Parallel Tempering
+ Deep belief networks: up-down learning algorithm
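Contrastive divergence, the basic RBM learner listed above, can be sketched as a single CD-1 update for a binary-binary RBM. A NumPy sketch for illustration only; cd1_step is not a toolbox function:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b, c, v0, lr=0.05):
    """One CD-1 update for a binary-binary RBM (W: visible x hidden)."""
    ph0 = sigmoid(v0 @ W + c)                          # P(h=1 | v0), positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample hiddens
    pv1 = sigmoid(h0 @ W.T + b)                        # P(v=1 | h0)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)   # one-step reconstruction
    ph1 = sigmoid(v1 @ W + c)                          # P(h=1 | v1), negative phase
    n = len(v0)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n            # positive minus negative stats
    b += lr * (v0 - v1).mean(0)
    c += lr * (ph0 - ph1).mean(0)
    return W, b, c
```

Persistent CD differs only in keeping v1 alive across updates instead of restarting the chain from the data each step.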
- Deep Boltzmann Machines
+ Binary, Gaussian
+ Enhanced Grad., Adaptive Learning Rate
+ Contrastive Divergence, Persistent Contrastive Divergence