The end of the "winter days"

(21st-September-2020)


• DNNs are hard to train - they overfit

• Solved by layer-wise pre-training! [Hinton+06]

• Computational complexity - the cost of backpropagation is enormous

• Solved by improved computing power: the arrival of GPUs and PC clusters

• Know-how ("black magic") is needed to get good performance - myriad parameters such as the learning rate, momentum, and network structure

• Layer-wise pre-training helps here too! [Hinton+06]

(Isn't that progress?)




Pretraining

Pretraining - unsupervised learning in which each layer is trained to reproduce the set of input data it receives - executed layer by layer, in order from the input layer
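
To make the procedure concrete, here is a minimal NumPy sketch of greedy layer-wise pretraining, where each layer is a small encoder-decoder trained to reconstruct its input, and its codes then feed the next layer. The toy data, layer widths, learning rate, and epoch count are all illustrative assumptions, not values from the post.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder_layer(X, n_hidden, lr=0.5, epochs=200):
    """Train one encoder-decoder layer to reconstruct X; return the encoder."""
    n_in = X.shape[1]
    W  = rng.normal(0, 0.1, (n_in, n_hidden))   # encoder weights
    b  = np.zeros(n_hidden)                     # encoder bias
    W2 = rng.normal(0, 0.1, (n_hidden, n_in))   # decoder weights
    c  = np.zeros(n_in)                         # decoder bias
    for _ in range(epochs):
        H = sigmoid(X @ W + b)           # encode
        R = sigmoid(H @ W2 + c)          # decode: reconstruction of X
        # backpropagate the mean squared reconstruction error
        dR = (R - X) * R * (1 - R) / len(X)
        dH = (dR @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dR
        c  -= lr * dR.sum(axis=0)
        W  -= lr * X.T @ dH
        b  -= lr * dH.sum(axis=0)
    return W, b

# Greedy layer-wise schedule: each layer learns to reproduce the output
# of the layer below, starting from the raw input.
X = rng.random((256, 64))      # toy input data (assumption)
layer_sizes = [32, 16]         # hidden widths (assumption)
encoders, H = [], X
for n_hidden in layer_sizes:
    W, b = train_autoencoder_layer(H, n_hidden)
    encoders.append((W, b))
    H = sigmoid(H @ W + b)     # codes become the next layer's input
# `encoders` now initializes the DNN; fine-tune it with backpropagation.
```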


Pretraining of each layer

Each layer learns to reproduce its set of input data as faithfully as possible - an encoder-decoder structure


• Autoencoder - an alternative: the Restricted Boltzmann Machine (RBM), explained in the "Generative model" post
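
For comparison, here is a minimal sketch of the RBM alternative trained with one step of contrastive divergence (CD-1). The sizes, learning rate, and toy binary data are illustrative assumptions; in layer-wise pretraining, stacked RBMs would be used in the same way as the autoencoders above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W   = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible bias
        self.b_h = np.zeros(n_hidden)    # hidden bias
        self.lr  = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        # positive phase: hidden activations driven by the data
        ph0, h0 = self.sample_h(v0)
        # negative phase: one Gibbs step back to a "reconstruction"
        pv1, _ = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        # CD-1 update: data statistics minus model statistics
        self.W   += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# toy usage: 6 visible units, 4 hidden units (assumption)
data = (rng.random((128, 6)) < 0.5).astype(float)
rbm = RBM(n_visible=6, n_hidden=4)
for _ in range(100):
    rbm.cd1_step(data)
```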


