DR.GEEK

Method and pooling case study

(26th-September-2020)


"Further data and more early GPUs can gain immediate performance improvement" [Krizhevsky + 12]

• Larger network → greater potential recognition ability, but a higher risk of overfitting

• More data → lower risk of overfitting, but longer computation time

CNN vs. Fully-connected NN

CNN

• No pre-training required - local receptive fields and tied (shared) weights make the network "prewired"

• Architectural design is difficult - filter size, stride, number of feature maps, and pooling size and stride must all be chosen by hand (see the sketch below)
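
To make these choices concrete, here is a minimal sketch of a two-stage convolution/pooling pipeline. It is written in PyTorch purely for illustration (an assumption here, not the framework used in [Krizhevsky+ 12]); every commented value is a design decision of exactly the kind listed above.

    import torch
    import torch.nn as nn

    cnn = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=5, stride=1),   # number of maps, filter size, stride
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),       # pooling size, pooling stride
        nn.Conv2d(32, 64, kernel_size=3, stride=1),  # same choices again for the next stage
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
    )

    x = torch.randn(1, 3, 32, 32)  # one 32x32 RGB image
    print(cnn(x).shape)            # torch.Size([1, 64, 6, 6])

Note that the output shape depends on every one of these values, so changing any of them ripples through the rest of the architecture.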

Fully-connected NN

• Pre-training is possible - it was once thought to be essential, but that no longer seems to be the case

• Alternative methods - Dropout, a training technique that avoids overfitting in fully-connected NNs [Hinton 12] (see the sketch below), and discriminative pre-training
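
To make the dropout idea concrete, here is a minimal sketch of a fully-connected network (again PyTorch, assumed here for illustration) in which each hidden unit is randomly zeroed with probability 0.5 during training, the mechanism [Hinton 12] proposes against overfitting.

    import torch
    import torch.nn as nn

    mlp = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 512),
        nn.ReLU(),
        nn.Dropout(p=0.5),   # each hidden unit is dropped with probability 0.5 while training
        nn.Linear(512, 10),
    )

    x = torch.randn(8, 1, 28, 28)  # a batch of eight 28x28 images
    mlp.train()                    # training mode: a different random subset is dropped each pass
    print(mlp(x).shape)            # torch.Size([8, 10])
    mlp.eval()                     # test mode: dropout disabled (PyTorch rescales during training,
    print(mlp(x).shape)            # so no extra correction is needed at test time)

Because a different subset of units is active on each forward pass, no unit can rely on the presence of specific other units, which is what reduces overfitting.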
