Reevaluating Our Perspective on the Brain: Insights for AI
Chapter 1: A Shift in Understanding the Brain
Our perception of the world often relies on concepts, which are frequently articulated through language or mathematics. However, this conceptual framework presents a dilemma: where do these concepts originate? How can we develop new theories using established terminology while avoiding the implicit biases that come with it?
The influence of the 17th- and 18th-century empiricists Locke, Berkeley, and Hume continues to shape modern scientific thought. Empiricism posits that the world exists objectively, awaiting discovery: our senses provide accurate descriptions of this reality, which we then express through concepts and abstractions. Yet does this truly reflect how our brains work?
As Ludwig Wittgenstein noted, "Philosophy is a battle against the bewitchment of our intelligence by means of language." Cognitive neuroscience, a relatively new discipline, often relies on terminology that is far older, tracing back to ancient Greek philosophers. The 19th century's pseudoscientific phrenology illustrated the dangers of overly simplistic approaches to brain function, claiming that distinct areas of the brain correspond to specific tasks—like knitting or poetry.
Today, many accuse modern cognitive neuroscience of a form of "neo-phrenology," as it often seeks to assign specific functions to parts of the brain based on concepts developed long before we truly understood its mechanics. This is a continuation of David Marr's paradigm, where we first conceptualize a function in terms of information processing, then search for the algorithm, and finally identify the neural implementation. While Marr emphasized the interplay of these components, the prevailing view still largely interprets brain function from an external perspective, likening it to a computer processing sensory information.
Yet, as neuroscientist György Buzsáki argues in his enlightening book, The Brain From Inside Out, this perspective is fundamentally limited. The following sections will delve into Buzsáki's insights, examining how this viewpoint affects both neuroscience and artificial intelligence (AI), and how a paradigm shift could enhance our understanding of the brain.
The first video features Iain McGilchrist and Satish Kumar discussing the delicate balance of the brain's functions and how this understanding impacts our view of intelligence and AI.
Section 1.1: The Meaning of Inputs
How do we ascribe significance to the information we perceive? Buzsáki's straightforward answer is that inputs become meaningful only through action. Actions provide context for our senses and form the basis of their significance: sensory organs evolved because organisms could act on what they sensed. The brain prioritizes information that helps it fulfill its needs rather than seeking objective truths.
Sensing is often an active endeavor. Take children, for example: they discover the physical properties of their bodies through random movements and learn to articulate by experimenting with sounds, which eventually leads to meaningful speech. Their actions, shaped by feedback from their environment—be it encouragement from caregivers or the discomfort of bumping into furniture—become imbued with meaning.
Even the scientific process itself is rooted in action. Galileo's famous (though possibly apocryphal) gravity experiments at the Tower of Pisa exemplify this active interrogation of reality.
The speed at which an organism can respond shapes the relevant perception timescales. For instance, stationary trees do not benefit from having rapidly moving eyes to capture their environment. Our cognitive processing speed is closely linked to our physical capabilities. Buzsáki challenges the prevailing representation-centric view in neuroscience: instead of asking what computations a neuron or neural assembly performs, we should consider what functions they actually serve.
Section 1.2: Rethinking Experimental Design
This reorientation carries significant practical implications. Current neuroscientific methodologies often involve presenting stimuli to subjects while monitoring their neural responses. However, this approach, viewed from an inside-out perspective, lacks grounding. It's akin to documenting the vocabulary of a lost language without a Rosetta Stone.
Buzsáki posits that the brain's vocabulary comprises internally generated dynamic sequences. Words can be seen as sequences at the neuronal assembly level, where learning involves selecting pre-existing sequences that best align with novel experiences. The brain is not a blank slate; it comes equipped with established dynamics. Learning transforms initially meaningless sequences into meaningful constructs.
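This selectionist picture of learning can be caricatured in a few lines of Python: a fixed repertoire of random "sequences" exists before any experience, and "learning" merely selects the pre-existing sequence that best matches a novel input. Everything here — the repertoire size, the cosine-similarity matching rule — is an illustrative assumption of mine, not a model taken from the book.

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed repertoire of preexisting "neural sequences" (the brain's
# vocabulary), present before any experience arrives.
n_sequences, seq_len = 500, 40
repertoire = rng.normal(size=(n_sequences, seq_len))

def bind_experience(experience):
    """'Learning' as selection: pick the preexisting sequence that best
    matches a novel input, rather than synthesizing a new one."""
    sims = repertoire @ experience / (
        np.linalg.norm(repertoire, axis=1) * np.linalg.norm(experience))
    return int(np.argmax(sims))

# A novel experience gets bound to one of the preexisting sequences;
# that initially meaningless sequence now stands for the experience.
novel = rng.normal(size=seq_len)
print("bound to sequence:", bind_experience(novel))
```

The point of the toy is the direction of fit: the dynamics come first, and experience is mapped onto them, not the other way around.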
A prime example lies in the interplay between the hippocampus and the neocortex in memory formation. The hippocampus acts as a sequence generator, while the neocortex identifies relevant associations, converting short-term sequences into long-term memories.
Chapter 2: Bridging AI and Neuroscience
In AI's formative years, the focus was predominantly on symbol-based systems, embedding abstract representations within computers. This outside-in approach proved ineffective, highlighting the challenge of translating abstract reasoning into real-world behavior.
Buzsáki's insights present a parallel to contemporary neural network architectures, such as reservoir computing, which consist of fixed non-linear dynamical systems. Instead of training the hidden layer, the learning process involves aligning the output layer with pre-existing dynamics in the reservoir. This reflects Buzsáki's assertion that the brain is rich in complex dynamics, allowing for meaning to emerge through action.
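To make this parallel concrete, here is a minimal echo state network sketch in NumPy: the recurrent "reservoir" weights are fixed and random, and learning only fits a linear readout on top of the pre-existing dynamics. The task (one-step sine prediction), network size, and regularization constant are illustrative assumptions, not part of any standard implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, untrained reservoir: random recurrent weights scaled for stability.
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(inputs):
    """Drive the fixed dynamics with an input sequence; collect states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(0, 30, 0.1)
signal = np.sin(t)
states = run_reservoir(signal[:-1])
targets = signal[1:]

# Learning touches only the linear readout (ridge regression);
# the reservoir's dynamics are never modified.
washout = 50  # discard the initial transient
S, y = states[washout:], targets[washout:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)

pred = S @ W_out
print("mean abs error:", float(np.mean(np.abs(pred - y))))
```

The design choice mirrors the inside-out claim: rich dynamics exist before training, and learning aligns outputs with them rather than sculpting the dynamics themselves.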
This perspective also offers a solution to the issue of catastrophic forgetting in neural networks, where previously learned tasks are lost upon learning new ones. By integrating new experiences into existing dynamic patterns, the brain can retain previous knowledge more effectively.
Another intriguing development in deep learning is the edge-popup algorithm, which refines a large network by removing unessential connections. This method reveals preconfigured subnetworks capable of performing tasks without extensive training.
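For intuition, here is a toy NumPy sketch of the central idea behind edge-popup (Ramanujan et al.): the weights stay frozen at their random initialization, and only a per-edge score decides which connections participate in the forward pass. The real algorithm trains these scores via backpropagation with a straight-through estimator; that training loop is omitted here, and the layer sizes and keep fraction are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed random weights: never trained. Only per-edge scores are learned.
W = rng.normal(size=(16, 8))
scores = rng.normal(size=W.shape)

def popup_mask(scores, keep_frac=0.5):
    """Keep only the top-scoring fraction of edges; low-scoring edges can
    'pop up' later if their scores grow during training."""
    k = int(scores.size * keep_frac)
    threshold = np.sort(scores.ravel())[-k]
    return (scores >= threshold).astype(float)

def forward(x, W, scores, keep_frac=0.5):
    # The effective network is a subnetwork of the frozen weights.
    return np.maximum(0.0, (W * popup_mask(scores, keep_frac)) @ x)

x = rng.normal(size=8)
out = forward(x, W, scores)
print("active edges:", int(popup_mask(scores).sum()), "of", W.size)
```

The echo of Buzsáki is that a useful subnetwork is assumed to preexist within the random initialization; learning selects it rather than building it.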
Despite the waning popularity of reservoir computing in favor of LSTMs and GRUs, the principles of the inside-out approach could inspire future innovations in AI and machine learning.
The second video features Iain McGilchrist discussing the misconceptions surrounding our understanding of the brain and its implications for artificial intelligence.
Buzsáki further argues that higher cognitive functions, such as reasoning, can be interpreted as internalized actions. Memories are formed by mapping real-world events to established dynamic patterns, while future plans may resemble reversed episodic memories of actions.
The ability to internalize actions has conferred a significant evolutionary advantage, enabling humans to simulate future scenarios and choose actions based on anticipated outcomes. As complexities in potential actions increase, the brain adapts by adding layers of sophistication, yet its primary function remains action-driven.
Abstract thought may itself have evolved from spatial reasoning; notably, the architecture of the prefrontal cortex mirrors that of the motor cortex. This suggests that our higher cognitive functions are deeply intertwined with our capacity for action.
"I have always thought the actions of men the best interpreters of their thoughts." — John Locke
While the brain excels at processing reality, its primary purpose is not merely computation or the display of knowledge. Buzsáki presents a compelling argument: viewing the brain as a reservoir of dynamics seeking meaningful experiences through action grounds his theory in evolutionary principles.
Our separation of action from intelligence may stem from the historical Cartesian divide between mind and body. It is imperative to transcend this dichotomy, recognizing that action is central to understanding intelligence. The prospect of action-driven breakthroughs in neuroscience and AI is indeed an exciting one.
In conclusion, I encourage readers to explore Buzsáki's work further or watch his discussions online for a deeper understanding of these concepts.