You might also be interested in the recent work on the "resonator networks" VSA architecture [1-4] from the Olshausen lab at Berkeley (P. Kanerva, who created the influential SDM model [5], is one of the lab members).
It's a continuation of Plate's [6] and Kanerva's work in the 90s, and of Olshausen's groundbreaking work on sparse coding [7], which inspired the popular sparse autoencoders [8].
I find it especially promising that they found this superposition-based approach to be competitive with the iterative optimization that is so prevalent in modern neural nets. Maybe backprop will die one day and be replaced by something more energy-efficient along these lines.
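For anyone curious what that superposition-based search looks like in practice, here is a rough sketch of the core resonator iteration in NumPy, based on my reading of [1-3]: a composite bipolar vector is formed by element-wise binding of one entry from each codebook, each factor estimate starts as the superposition of all its candidates, and the estimates are repeatedly unbound against each other and "cleaned up" against their codebooks until they lock onto the true factors. Dimensions, codebook sizes, and iteration count below are illustrative, not taken from the papers.

    import numpy as np

    rng = np.random.default_rng(0)
    D, K = 2000, 20  # vector dimension, entries per codebook (illustrative values)

    # Random bipolar {-1, +1} codebooks for three factors
    X, Y, Z = (rng.choice([-1, 1], size=(K, D)) for _ in range(3))

    # Compose a vector by binding one entry from each codebook (element-wise product)
    ix, iy, iz = rng.integers(K, size=3)
    s = X[ix] * Y[iy] * Z[iz]

    def binarize(v):
        # Threshold back to bipolar values (avoids zeros from ties)
        return np.where(v >= 0, 1, -1)

    def cleanup(codebook, v):
        # Project onto the codebook's span, then re-binarize (clean-up memory step)
        return binarize(codebook.T @ (codebook @ v))

    # Initialize each estimate as the superposition of all candidates in its codebook
    x_hat, y_hat, z_hat = (binarize(cb.sum(axis=0)) for cb in (X, Y, Z))

    for _ in range(50):
        # Unbind the other two current estimates from s, then clean up
        x_hat = cleanup(X, s * y_hat * z_hat)
        y_hat = cleanup(Y, s * x_hat * z_hat)
        z_hat = cleanup(Z, s * x_hat * y_hat)

    # Should recover the original indices ix, iy, iz
    print(np.argmax(X @ x_hat), np.argmax(Y @ y_hat), np.argmax(Z @ z_hat))

No gradients anywhere, just repeated binding, superposition, and nearest-neighbor clean-up, which is what makes the "competitive with optimization" result interesting.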
[1] https://redwood.berkeley.edu/wp-content/uploads/2020/11/frad...
[2] https://redwood.berkeley.edu/wp-content/uploads/2020/11/kent...
[3] https://arxiv.org/abs/2009.06734
[4] https://github.com/spencerkent/resonator-networks
[5] https://en.wikipedia.org/wiki/Sparse_distributed_memory
[6] https://www.amazon.com/Holographic-Reduced-Representation-Di...
[7] http://www.scholarpedia.org/article/Sparse_coding
[8] https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf