andyxor · 2021-04-17 · Original thread
Deep neural nets are an extension of sparse autoencoders, which perform nonlinear principal component analysis [0,1].
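
The autoencoder/PCA connection in [0,1] can be sketched concretely: a linear autoencoder with a k-dimensional bottleneck minimizes reconstruction error exactly when its tied weights span the top-k principal subspace. A minimal NumPy sketch (the toy data, dimensions, and noise level are illustrative choices, not from the comment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 10-D whose variance lies mostly in a 2-D subspace.
latent = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10)) * 3.0
X = latent + 0.1 * rng.normal(size=(200, 10))
X = X - X.mean(axis=0)  # center the data, as PCA assumes

# Linear autoencoder with tied weights: encode z = X W, decode X_hat = z W^T.
# Its optimal W spans the top principal components, found here via SVD.
k = 2
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:k].T                # (10, k) shared encoder/decoder weights
X_hat = (X @ W) @ W.T       # reconstruction through the bottleneck

err_pca = np.mean((X - X_hat) ** 2)
print("reconstruction MSE through PCA bottleneck:", err_pca)
```

A sparse autoencoder then adds a nonlinearity and a sparsity penalty on the code z, which is where it departs from plain PCA and moves toward the deep-net case.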

There is evidence for sparse coding and PCA-like mechanisms in the brain, e.g. in visual and olfactory cortex [2,3,4,5].

There is, however, no evidence for backprop or similar global error correction as in DNNs; instead, biologically plausible mechanisms might operate via local updates as in [6,7], or via something similar to locality-sensitive hashing [8].
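
The local-update idea behind Oja's rule [6,7] is easy to sketch (toy covariance and learning rate are my own choices): each weight change uses only the pre-synaptic input x and the post-synaptic output y, with no global error signal, yet the weight vector converges to the first principal component of the inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inputs with a dominant variance direction (eigenvalues 5, 1, 0.5).
C = np.diag([5.0, 1.0, 0.5])
X = rng.normal(size=(5000, 3)) @ np.linalg.cholesky(C).T

# Oja's rule: w += eta * y * (x - y * w), where y = w . x.
# Hebbian term (eta * y * x) plus a local normalizing decay (-eta * y^2 * w).
w = rng.normal(size=3)
w /= np.linalg.norm(w)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

# w should align with the top eigenvector of the covariance (the first axis).
print("learned w:", w, "| alignment with top eigenvector:", abs(w[0]))
```

The contrast with backprop is the point: every update here depends only on quantities available at the synapse, not on an error propagated back through layers.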

[0] Sparse Autoencoder https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf

[1] Eigenfaces https://en.wikipedia.org/wiki/Eigenface

[2] Sparse Coding http://www.scholarpedia.org/article/Sparse_coding

[3] Sparse coding with an overcomplete basis set: A strategy employed by V1? https://www.sciencedirect.com/science/article/pii/S004269899...

[4] Researchers discover the mathematical system used by the brain to organize visual objects https://medicalxpress.com/news/2020-06-mathematical-brain-vi...

[5] Vision And Brain https://www.amazon.com/Vision-Brain-Perceive-World-Press/dp/...

[6] Oja's rule https://en.wikipedia.org/wiki/Oja%27s_rule

[7] Linear Hebbian learning and PCA http://www.rctn.org/bruno/psc128/PCA-hebb.pdf

[8] A neural algorithm for a fundamental computing problem https://science.sciencemag.org/content/358/6364/793
