EEC289Q (Winter 2026): Neurally Inspired Algorithms and Architectures
Course Summary
This course provides an overview of topics at the intersection of neural computation and algorithm design and is intended for students with a solid background in linear algebra and probability. Prior exposure to the basic principles of statistical estimation and optimization will be helpful, but not strictly required. The goals for the course are for students to learn about:
- Mathematically rigorous tools for understanding and analyzing algorithms in neural computation.
- The use of randomized methods in neural computation and algorithms.
- Other paradigms for learning in addition to the currently dominant statistical one.
- Connections between topics in the analysis of algorithms and neuroscience/neural computation.
- Applications of some of the above to developing new kinds of computer hardware.
Course grades will be based on a mix of problem sets, in-class presentations, paper readings, and a final project.
Note: this is not a class about deep learning. There are many excellent courses at UCD that cover deep learning, but this is not one of them. This course will focus primarily on the mathematical analysis of algorithms in neural computation and related topics in computing.
Pre-Requisites
EEC161 (or equivalent background in probability), MAT 22A (or equivalent background in linear algebra). Prior exposure to machine learning, statistics, and optimization will be helpful but not strictly required.
Grading
Grades will be based on the following components:
- Problem sets (3): 30%
- Final Paper: 30%
- Final Presentation: 30%
- Class participation and feedback on presentations: 10%
You are expected to regularly attend class and engage with paper discussions. You are also required to submit feedback forms for presentations by your peers.
Final Project
The course final project is fairly open-ended. Examples may include:
- A thorough literature review on a topic (related to the course) of your choosing
- A replication study of a paper of your choosing
- Implementation of an algorithm from the course in hardware
- Application of one or more concepts from the course to your own research
Depending on the class size, final projects may be completed in groups.
Tentative Course Outline
A tentative list of topics is as follows. Topics are subject to change depending on the interests and pace of the class. The boundaries between units are not sharp and some may blend into each other.
Unit 1: Introduction
- The relevance of neuroscience to algorithm design.
- The relevance of algorithm design to neuroscience.
- Why study theory? What do we want theory to tell us?
- Papers:
- D. Marr. Vision: a computational investigation into the human representation and processing of visual information. MIT Press, 1982 (republished 2010).
Unit 2: Algorithmic and Statistical Perspectives on Learning
- Generative Modeling:
- Perception as inference
- Algorithmic Approaches to Learning:
- Statistical Learning:
- Risk, empirical risk minimization
- Online Learning:
- Regret (a minimal numerical sketch follows this unit's reading list)
- Papers/Blog Posts:
- Leo Breiman: Statistical Modeling: the Two Cultures
- Peter Norvig: On Chomsky and the Two Cultures of Statistical Modeling
- Rich Sutton: The Bitter Lesson
- Rodney Brooks: A Better Lesson
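To give a concrete feel for the regret quantity above, here is a minimal sketch (the data model, step-size rule, and horizon are illustrative assumptions, not course-mandated choices) that runs online gradient descent on a stream of squared losses and measures cumulative regret against the best fixed predictor in hindsight.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stream of (x_t, y_t) pairs; the round-t loss is f_t(w) = (w @ x_t - y_t)**2.
T, d = 500, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(T, d))
y = X @ w_true + 0.1 * rng.normal(size=T)

w = np.zeros(d)
losses = []
for t in range(T):
    pred = w @ X[t]
    losses.append((pred - y[t]) ** 2)
    grad = 2.0 * (pred - y[t]) * X[t]                   # gradient of the round-t loss at the current w
    eta = 1.0 / (2.0 * (X[t] @ X[t]) * np.sqrt(t + 1))  # decaying step, scaled by the per-round curvature
    w -= eta * grad

# Best fixed predictor in hindsight: least squares over the whole stream.
w_star, *_ = np.linalg.lstsq(X, y, rcond=None)
loss_star = np.sum((X @ w_star - y) ** 2)

regret = np.sum(losses) - loss_star
print(f"cumulative regret after {T} rounds: {regret:.2f}")
print(f"average regret per round: {regret / T:.4f}")    # shrinks toward 0 as T grows
```

The same data could equally be treated as an i.i.d. sample and fit by empirical risk minimization; the point of the regret framework is that it makes no distributional assumption about how the stream is generated.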
Unit 3: Perceptrons and Hebbian Learning
- Perceptrons
- PCA and Eigenvalue Problems:
- Power method and friends
- Oja’s rule and friends (a short numerical sketch follows this list)
- Related topics in ML:
- Kernel machines
- Kernel smoothers
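As a preview of how the power method and Oja's rule connect, here is a minimal sketch that estimates the top principal component of a synthetic dataset both ways; the data, learning-rate schedule, and iteration counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with one dominant direction u.
n, d = 2000, 20
u = rng.normal(size=d)
u /= np.linalg.norm(u)
X = rng.normal(size=(n, d)) + 3.0 * np.outer(rng.normal(size=n), u)
X -= X.mean(axis=0)
C = X.T @ X / n                                    # sample covariance

# Power method: repeatedly multiply by C and renormalize.
v = rng.normal(size=d)
for _ in range(100):
    v = C @ v
    v /= np.linalg.norm(v)

# Oja's rule: a Hebbian update on streaming samples that tracks the same direction.
w = rng.normal(size=d)
w /= np.linalg.norm(w)
for t in range(20 * n):
    x = X[t % n]
    y_out = w @ x                                  # the neuron's output
    eta = 0.01 / (1.0 + t / n)                     # slowly decaying learning rate
    w += eta * y_out * (x - y_out * w)             # Hebbian term plus a decay that keeps ||w|| near 1

top = np.linalg.eigh(C)[1][:, -1]                  # reference eigenvector from an exact solver
print("power method vs. eigh:", abs(v @ top))
print("Oja's rule   vs. eigh:", abs((w / np.linalg.norm(w)) @ top))
```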
Unit 4: Random Projection and Friends
- Introduction to sketching: Bloom filters, count-sketch, count-min sketch
- Johnson-Lindenstrauss Lemma:
- JL transforms (JLTs): dense, sparse, and fast variants (a minimal sketch of the dense case follows this unit's paper list).
- Subspace embeddings, randomized least-squares.
- Random feature models:
- Random Fourier features, polynomial random features + tensor sketching.
- Related Neural Architectures
- Vector Symbolic Architectures
- Reservoir Computing
- Papers:
- S. Dasgupta, C. Stevens, S. Navlakha. A neural algorithm for a fundamental computing problem. Science, 2017.
- S. Dasgupta, D. Hattori and S. Navlakha. A neural theory for counting memories. Nature Communications, 2022.
- K. Clarkson, S. Ubaru, and E. Yang. Capacity analysis of vector symbolic architectures. arXiv preprint, 2023.
- N. Pham, and R. Pagh. Fast and scalable polynomial kernels via explicit feature maps. ACM SIGKDD, 2013.
- A. Rahimi, and B. Recht. Random features for large-scale kernel machines. Neural Information Processing Systems, 2007.
- T. Plate. Holographic Reduced Representations. IEEE Transactions on Neural Networks, 1995.
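The following minimal sketch illustrates the dense JL transform referenced above: a scaled Gaussian random projection that approximately preserves all pairwise distances among a point set, with a target dimension that depends on the number of points but not on the ambient dimension. The specific dimensions and distortion parameter are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# n points in a high-dimensional ambient space.
n, d = 50, 10_000
X = rng.normal(size=(n, d))

# Dense JL transform: project to k dimensions with i.i.d. Gaussian entries,
# scaled by 1/sqrt(k) so that squared norms are preserved in expectation.
eps = 0.2
k = int(8 * np.log(n) / eps**2)            # k = O(log n / eps^2), independent of d
P = rng.normal(size=(k, d)) / np.sqrt(k)
Y = X @ P.T

# Check the distortion of every pairwise distance.
ratios = []
for i, j in combinations(range(n), 2):
    ratios.append(np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j]))

print(f"projected from d={d} down to k={k}")
print(f"distance ratios: min={min(ratios):.3f}, max={max(ratios):.3f}")   # expect all within roughly 1 +/- eps
```

Sparse and fast JLTs achieve comparable guarantees with cheaper projections; the dense Gaussian version is simply the easiest to write down.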
Unit 5: Sparse Recovery and Friends
- Compressed Sensing/LASSO (a minimal recovery sketch follows this unit's paper list):
- Restricted isometry property and recovery guarantees
- Relationship with the JL-property
- Sublinear compressed sensing and relationship with group-testing (time permitting)
- Dictionary Learning
- Papers:
- B. Olshausen, and D. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 1996.
- D. Donoho. Compressed Sensing. IEEE Transactions on Information Theory, 2006.
- F. Krahmer, and R. Ward. New and improved Johnson-Lindenstrauss embeddings via the restricted isometry property. SIAM Journal on Mathematical Analysis, 2011.
- A. Gilbert, M. Iwen, and M. Strauss. Group testing and sparse signal recovery. Proceedings of the 42nd Asilomar Conference on Signals, Systems, and Computers (IEEE). 2008.
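To make the sparse-recovery setup concrete, here is a minimal sketch of LASSO-style recovery from random Gaussian measurements using iterative soft thresholding (ISTA). The problem sizes, sparsity level, and regularization weight are illustrative assumptions, not recommended settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse ground truth: s nonzeros out of d coordinates, observed through m < d measurements.
d, s, m = 400, 10, 120
x_true = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
x_true[support] = rng.normal(size=s)

# Random Gaussian measurement matrix (such matrices satisfy the RIP with high probability).
A = rng.normal(size=(m, d)) / np.sqrt(m)
y = A @ x_true

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA for the LASSO objective 0.5 * ||A x - y||^2 + lam * ||x||_1.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part (squared top singular value)
x = np.zeros(d)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    x = soft_threshold(x - grad / L, lam / L)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.4f}")
print(f"coordinates above 1e-3: {int(np.sum(np.abs(x) > 1e-3))} (true sparsity: {s})")
```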
Unit 6: (Time permitting) Beyond McCulloch, Pitts, and Rosenblatt
- Sigma-Pi Neurons
- Relationship to tensor-sketching
- Spiking neural networks (a minimal integrate-and-fire sketch follows below)
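As a small taste of the spiking models listed above, here is a minimal sketch of a single leaky integrate-and-fire neuron driven by a constant input; all constants (membrane time constant, threshold, input level) are illustrative assumptions.

```python
# Leaky integrate-and-fire neuron, simulated with forward Euler.
dt = 0.1            # time step (ms)
tau = 20.0          # membrane time constant (ms)
v_rest = 0.0        # resting potential (arbitrary units)
v_thresh = 1.0      # spike threshold
v_reset = 0.0       # potential after a spike
T = 200.0           # total simulated time (ms)
I = 1.2             # constant input, in units of the threshold

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    # Membrane dynamics: dv/dt = (-(v - v_rest) + I) / tau
    v += dt * (-(v - v_rest) + I) / tau
    if v >= v_thresh:               # threshold crossing: emit a spike and reset
        spike_times.append(step * dt)
        v = v_reset

rate = len(spike_times) / (T / 1000.0)   # spikes per second
print(f"{len(spike_times)} spikes in {T:.0f} ms -> firing rate ~ {rate:.0f} Hz")
```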
Additional Suggested Papers for Final Projects (will be updated periodically)
- Y. Shen, S. Dasgupta and S. Navlakha. Reducing catastrophic forgetting with associative learning: a lesson from fruit flies. Neural Computation, 35(11): 1797-1819, 2023.
- S. Chandra, S. Sharma, R. Chaudhuri, I. Fiete, Episodic and associative memory from spatial scaffolds in the hippocampus. Nature, 2025.
- C. Kymn et al. Binding in Hippocampal-Entorhinal Circuits Enables Compositionality in Cognitive Maps. Neural Information Processing Systems, 2025.
- C. Rozell, D. Johnson, R. Baraniuk, B. Olshausen. Sparse coding via thresholding and local competition in neural circuits. Neural Computation. 2008.
- Y. Chen, D. Paiton, B. Olshausen. The sparse manifold transform. Advances in Neural Information Processing Systems. 2018.
- T. Ahle et al. Oblivious sketching of high-degree polynomial kernels. Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), 2020.
- J. von Neumann. Probabilistic logics and the synthesis of reliable organisms from unreliable components. Automata Studies. 1956.
- A. Rahimi et al. High-dimensional computing as a nanoscalable paradigm. IEEE Transactions on Circuits and Systems I: Regular Papers, 2017.
- H. Peng et al. Random Feature Attention. ICLR, 2021.
- M. Lewicki. Efficient coding of natural sounds. Nature Neuroscience, 2002.
- T. Kohonen. The self-organizing map. Proceedings of the IEEE, 2002.
- K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 1980.
- E. Arias-Castro, D. Mason, B. Pelletier. On the estimation of the gradient lines of a density and the consistency of the mean-shift algorithm. Journal of Machine Learning Research, 2016.
