
Method and pooling case study

  • Writer: DR.GEEK
  • Sep 26, 2020
  • 1 min read

(26th-September-2020)


"Further data and more early GPUs can gain immediate performance improvement" [Krizhevsky + 12]

• Large-scale network → improves potential recognition ability + increases the risk of overfitting

• Large amount of data → reduces the risk of overfitting + increases computation time

CNN vs. Fully-connected NN

CNN

• No pre-training required - local receptive fields and tied (shared) weights act as a "prewired" structure

• Architectural design is difficult - filter size, stride, number of feature maps, pooling size, and pooling stride must all be chosen by hand (as sketched below)
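
As a concrete illustration of these design choices, here is a minimal sketch (assuming PyTorch; the post names no framework, and the layer sizes are illustrative) of a small CNN in which filter size, stride, number of feature maps, pooling size, and pooling stride all appear as explicit hyper-parameters:

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Filter size 5x5, 32 feature maps, stride 1 (local receptive fields, tied weights)
        self.conv = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5, stride=1, padding=2)
        # Pooling size 2, pooling stride 2
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        # 28x28 input -> 28x28 after conv (padding=2) -> 14x14 after pooling
        self.fc = nn.Linear(32 * 14 * 14, num_classes)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))
        return self.fc(x.flatten(1))

# One forward pass on a random batch of 28x28 grayscale images
out = SmallCNN()(torch.randn(4, 1, 28, 28))
print(out.shape)  # torch.Size([4, 10])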

Fully-connected NN

• Pre-training is possible - it was once thought essential, but apparently it is not

• Alternative methods - dropout, a training technique that avoids overfitting in fully-connected NNs [Hinton 12] (sketched below), and discriminative pre-training
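
A minimal sketch of dropout in a fully-connected network (again assuming PyTorch; the layer sizes and drop probability p=0.5 are illustrative choices, following [Hinton 12]):

import torch
import torch.nn as nn

# Fully-connected network with dropout: each hidden unit is zeroed with
# probability p during training, which reduces co-adaptation and overfitting.
mlp = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # drop half of the hidden units at random each step
    nn.Linear(512, 10),
)

x = torch.randn(4, 1, 28, 28)
mlp.train()              # dropout active; surviving units are scaled by 1/(1-p)
print(mlp(x).shape)      # torch.Size([4, 10])
mlp.eval()               # dropout is a no-op at test time
print(mlp(x).shape)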

 
 
 
