
The end of the "winter days"

  • Writer: DR.GEEK
  • Sep 21, 2020
  • 1 min read

(21st September 2020)


• DNNs are hard to train - they overfit (over-learning)

• It's fine if you do "pre-training per layer"! [Hinton+06]

• Computational complexity - backpropagation is computationally very expensive

• Improvement in computing capacity - the appearance of GPUs and PC clusters

• Know-how ("black magic") needed to get good performance - myriad hyperparameters such as the learning rate, momentum, and network structure

• It's fine if you do "pre-training per layer"! [Hinton+06]

(Hasn't the field made progress?)




Pretraining

Pretraining - unsupervised ("non-teacher") learning so that the set of input data can be reproduced - executed layer by layer, in order from the input layer


Pretraining of each layer

Each layer learns to reproduce its set of input data as faithfully as possible - an encoder-decoder structure

• Autoencoder - another way: the Restricted Boltzmann Machine (RBM), explained in "Generation model"
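As a rough sketch of this greedy layer-wise procedure, the NumPy code below pretrains a hypothetical 8-6-4-2 stack: each layer is trained as a small autoencoder to reconstruct its own input, then its codes become the next layer's input. The layer sizes, learning rate, and epoch count are illustrative assumptions, not values from the original post:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=300):
    """Train one autoencoder layer to reconstruct X; return encoder (W, b)."""
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_in)); b2 = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)          # encode
        Xhat = H @ W2 + b2                # decode (linear reconstruction)
        dXhat = 2 * (Xhat - X) / X.size   # gradient of mean squared error
        dH = dXhat @ W2.T * H * (1 - H)   # backprop through the sigmoid
        W2 -= lr * (H.T @ dXhat); b2 -= lr * dXhat.sum(axis=0)
        W1 -= lr * (X.T @ dH);    b1 -= lr * dH.sum(axis=0)
    return W1, b1

# Toy data (assumed): 200 samples of 8-dimensional inputs.
X = rng.normal(size=(200, 8))

# Greedy layer-wise pretraining: input layer first, one layer at a time.
reps, encoders = X, []
for n_hidden in (6, 4, 2):
    W, b = train_autoencoder(reps, n_hidden)
    encoders.append((W, b))
    reps = sigmoid(reps @ W + b)  # codes become the next layer's input
```

After this loop, `encoders` holds the pretrained weights of each layer, which would then initialize a deep network before supervised fine-tuning with backpropagation.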



 
 
 
