🧠 Rethinking AI: Learning to Navigate Like the Brain
The Problem with Backpropagation
Most modern AI systems rely on a learning method called backpropagation, which involves computing precise gradients and propagating them backward through deep neural networks. It’s powerful — but it’s also biologically implausible. In real brains, neurons don’t seem to have access to such detailed feedback. They learn through local signals, not full-blown error messages passed backward layer by layer.
So if our brains don’t use backprop, how do they learn?
Introducing Bio-Plausible Learning
Biologically plausible (or bio-plausible) learning refers to training mechanisms that are consistent with how actual neural circuits function. These algorithms aim to:
- Use local learning rules, like Hebbian learning (“cells that fire together, wire together”).
- Incorporate three-factor rules, where a weight change depends on presynaptic activity, postsynaptic activity, and a modulatory signal such as dopamine.
- Avoid mechanisms that require full access to the internal structure of the network (unlike backprop).
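To make the first point concrete, here is a minimal sketch of a plain Hebbian update in NumPy. It is illustrative only (it is not the CLAPP rule): each synapse changes based solely on the activity of the two neurons it connects, with no backward error pass.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))  # post x pre weight matrix

def hebbian_step(W, pre, post, lr=0.01):
    """Delta W = lr * post * pre^T: strengthen connections between co-active neurons."""
    return W + lr * np.outer(post, pre)

pre = rng.random(8)   # presynaptic activity
post = W @ pre        # postsynaptic activity (linear neurons)
W_new = hebbian_step(W, pre, post)
```

Notice that the update for `W[i, j]` uses only `post[i]` and `pre[j]`, which is exactly what makes the rule local.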
One promising method is CLAPP (Contrastive, Local And Predictive Plasticity), a learning rule developed by the Laboratory of Computational Neuroscience at EPFL, where I'm doing this research. It offers a biologically inspired way for neural networks to learn useful representations using only local signals.
My Research: Teaching Agents to Navigate
My summer project is built on a deceptively simple challenge: navigation. We want an AI agent to move through a virtual maze based on what it sees — just like a rat in a lab experiment.
Here’s the twist: instead of using traditional deep learning methods, we’re testing how well the agent can learn using visual representations generated by bio-plausible learning rules like CLAPP. These features are then fed into lightweight reinforcement learning algorithms (like Actor-Critic or PPO) that control the agent’s movement.
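The pipeline described above can be sketched roughly as follows. This is a hypothetical NumPy illustration, not the project's actual code: `encode` stands in for a frozen encoder trained with a bio-plausible rule such as CLAPP, and the actor-critic head on top is reduced to a single linear policy and value layer.

```python
import numpy as np

rng = np.random.default_rng(0)
D_OBS, D_FEAT, N_ACTIONS = 64, 16, 4

# Frozen visual encoder (stand-in for bio-plausibly learned features).
W_enc = rng.normal(scale=0.1, size=(D_FEAT, D_OBS))
# Lightweight actor-critic head trained on top of the frozen features.
W_pi = rng.normal(scale=0.1, size=(N_ACTIONS, D_FEAT))  # actor
w_v = rng.normal(scale=0.1, size=D_FEAT)                # critic

def encode(obs):
    """Map a raw observation to features; weights stay fixed during RL."""
    return np.tanh(W_enc @ obs)

def act(obs):
    """Sample an action from a softmax policy and estimate the state value."""
    z = encode(obs)
    logits = W_pi @ z
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    action = int(rng.choice(N_ACTIONS, p=probs))
    return action, float(w_v @ z)

obs = rng.random(D_OBS)   # a toy "visual" observation
action, value = act(obs)
```

Only `W_pi` and `w_v` would be updated by the reinforcement learning algorithm (Actor-Critic or PPO); the encoder's representations are supplied as-is.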
The goal? To prove that bio-plausible models can not only learn — but learn efficiently and robustly, even in complex environments.
Why This Matters
This project lives at the crossroads of neuroscience and AI. If bio-plausible methods can replace or augment traditional deep learning, we might build smarter, more efficient AI systems that learn the way we do.
- For science: It brings us closer to understanding how real brains learn from the world.
- For technology: It opens doors to more energy-efficient, explainable, and potentially safer AI systems.
- For society: It shows that interdisciplinary thinking — combining brain science and machine learning — might be the key to the next generation of intelligent systems.
What’s Next?
Beyond navigation, our research could expand into modeling spatial memory (like the brain’s hippocampus), improving intrinsic motivation through curiosity modules, and designing even more brain-inspired learning architectures.
We're not just training smarter agents — we're exploring what it means to learn like a brain.