The Plastic Thinker
A reflection and conceptualisation of human and machine cognition
Preface
Welcome to the age of AI—the impending fourth revolution, steered by a few, shaped by some, and ignored by many. While AI has grown markedly in the last few years, our general global understanding is still fragmented. This essay is my reflection on the progress of AI intelligence in 2024, with an attempt to conceptualise how this translates to differences in thought between us and the machines. I feel that those who want to dive deeper into AI and explore why, despite its advancements, we as humans remain beautifully unique will benefit most from reading this. If this essay also helps those who are entirely unaware of AI, then fantastic—we’ve exceeded expectations. If not, then that group can continue to drift through an era where, for the first time in history, humans are “playing god” and creating what Mustafa Suleyman aptly calls a “new digital species”.
The typical dialogue about AI across the career landscape goes like this… “With AI, your role will transform, and it’s up to you to evolve or become obsolete… Yet AI isn’t perfect, so we still need your expert judgment… It’s a tool for augmentation, not automation… It will elevate your productivity by handling lower-value tasks, so you can focus on higher-value ones…” And so on. While this dialogue is meant to reassure, it often feels ambiguous and unhelpful. These conversations leave me grappling with even more uncertainty. What determines low-value versus high-value activity? How do I define this distinction in my role? What if I can’t find higher-value activities in my work? And as AI evolves so rapidly, what if it overtakes high-value tasks overnight? Do I just wait for universal basic income to kick in and embrace a machine-run world? What are you talking about?!
It’s good to ask these questions and dig deeper, but let’s be honest: AI isn’t easy to grasp. It’s complex, multifaceted, and steeped in speculation. It’s the perfect embodiment of the saying: the more you know, the less you know. And when you combine this with the intricacies of our careers and work domains, you create a tangled web of uncertainty.
This new complexity often manifests as anxiety. It’s like being a novice farmer, standing in a barren orchard, hungry and desperate, trying to grow fruit you’ve never grown before. The tools and practices are unfamiliar, the soil feels alien, and the knowledge you need is just out of reach but still close by. Every attempt feels futile, and without understanding how to proceed, your efforts seem hopelessly “fruitless.” To thrive, we need more than just ambition—we need understanding, guidance, experimentation, and the patience to nurture meaningful growth. For those unbothered by AI, however, it’s like ignoring a mild itch on your elbow—barely worth a second thought.
With this essay, I hope to help people better understand the differences between human and AI thought. I do this through an exploration of concepts including systems, emergence, compositionality, qualia and plasticity. By exploring these topics to see where our paths diverge, we can also map out a future where they converge. We gain a clearer picture not just of the current and future potential of AI, but also of ourselves as humans.
System
Like most things, at the heart of both biological and artificial intelligence lies the concept of the system. A system, in this context, is not simply a collection of parts, but a structured, interdependent arrangement of components working together to process and infer. If we see intelligence through the lens of a system, we find how complex behaviours arise from well-defined architectures, whether they exist within a human brain or a deep learning AI model.
Defining a Systemic Perspective
A systems perspective views cognition as the product of interactions between multiple parts, each contributing to the overall behaviour of the whole. It goes beyond individual neurons or nodes and focuses on how their organisation, connectivity, and interaction as a system create an environment which enables behaviour like recognising a face, understanding speech, or navigating an environment. This take is crucial, because it shifts our attention from isolated elements to the patterns of relationships and processes that govern their collective operation.
The Brain as a Biological System
In biological neural networks, the brain exemplifies a highly intricate, organised system. It comprises billions of neurons, each capable of transmitting electrical impulses and releasing neurotransmitters so that we can act on our environment, itself a much larger, connected system. These neurons form intricate networks through synapses, which are not randomly distributed, but arranged in patterns and hierarchies. I first truly understood this during my time in Neuroscience when I studied the visual cortex. The visual cortex is organised hierarchically. When we see something, the processing typically begins in the lower levels of the visual cortex and moves step by step through higher levels. This gradual process builds up a complete mental representation of what we’ve perceived with our eyes. For example, when you look at something, the primary visual cortex (V1) in your brain first detects basic features such as edges and orientations. This information is then relayed to V2 and V3, where more intricate patterns and motion are analysed. Subsequently, V4 specialises in processing colour and form, allowing for the recognition of objects and their attributes, and finally V5 is dedicated to the perception of motion. This layered progression from V1 to V5 shows how hierarchical systems in the brain support the transformation of simple sensory inputs into rich, detailed visual experiences. Whilst this is an over-simplified account of how our cortical vision works, the key is that none of these regions function in isolation; they are woven into a lattice of connectivity that allows information to flow seamlessly across multiple areas. The result? A system that can learn, adapt, and refine its cognitive framework in response to experience. Did you know some people whose V1 is damaged can somehow still partially see? This is down to a phenomenon known as blindsight, which we’ll touch on later.
In this biological context, the system’s architecture—how neurons are connected, how signals propagate, and how changes occur—directly influences our cognitive capability. Without this complex architecture, the brain wouldn’t be able to integrate sensory inputs, store memories, or coordinate behaviour in an adaptive manner.
Deep Learning Architectures as Artificial Systems
Similarly, deep learning models demonstrate that the principles of system design are not confined to biology. Artificial neural networks, composed of layers of artificial neurons, are engineered to mimic certain properties of their biological counterparts. Although artificial neurons are mathematically simpler and lack biochemical complexity, they are still arranged systematically to solve specific tasks (this is also important for later).
In these networks, layers are typically stacked in a hierarchy, with early layers detecting simple features (e.g., edges in an image), and subsequent layers combining these features into progressively more abstract patterns. The model’s architecture—how many layers it has, how units are connected, and how information flows—is tuned through its weights so the model can classify images, understand language, or predict trends in data. Deep learning models, though not as flexible or context-aware as our human brains, also rely on iterative adjustments of these weights across the entire network to learn and adapt.
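To make the layered-hierarchy idea concrete, here is a minimal sketch (not anyone's production architecture) of a tiny two-layer network learning the XOR problem in plain Python with NumPy. All names and numbers are illustrative; the point is simply that stacked layers plus iterative weight adjustment across the whole network are enough to learn a pattern that no single layer could capture on its own.

```python
import numpy as np

# Toy illustration: a two-layer network learning XOR with plain gradient
# descent. The first layer learns simple combinations of the inputs; the
# second composes them into the final decision, echoing the idea that
# early layers detect simple features and later layers build on them.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights ("simple features")
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights ("abstract pattern")
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: information flows upward through the layer hierarchy.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: iterative adjustment of weights across the whole
    # network, the learning mechanism referred to in the text.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```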
Both biological and artificial systems rely on feedback mechanisms so that new information is integrated coherently with what the system already knows. The very fact that components exist within a system is what enables learning and thus stronger inference. In establishing the key role of systems in intelligence, we can now focus on what exactly happens when these systems operate.
Emergence
What makes systems so important is that they give rise to behaviour, and this is what we call emergence. While a system’s structural design is crucial, it is the emergence of its behaviour that we truly see, feel and resonate with. Not many of us would have really cared about LLMs until we saw what they were capable of. To put it bluntly, we only care about how things work once we know and care about what they do.
Now taken together, emergence explains how new qualities, capabilities, and behaviour can manifest when individual units operate together within a system. For example, a single neuron in the human brain transmits electrical signals, but it cannot reason, remember, or perceive on its own. Only when vast networks of neurons interact, strengthen or weaken their connections, and coordinate their firing patterns do we witness the emergence of phenomena such as consciousness, problem-solving, and creative thinking.
Emergence in Biological Neural Networks
Biological neural networks are a prime example of how intricate, emergent behaviours arise naturally. Each neuron has limited action: it can either fire or remain silent in response to incoming signals. Yet, as neurons form circuits and constantly update their connections through learning and adaptation, their collective activity leads to the emergence of high-level cognition. This includes pattern recognition, language comprehension, emotional responses, and somehow self-aware consciousness. None of these capabilities can be directly attributed to a single neuron or synapse; rather, they emerge from the brain’s collective system operation.
Emergence in Deep Learning Systems
Emergence also characterises the operation of artificial neural networks, especially as models have grown in complexity and scale. A system that once only recognised a simple sentence can grow and learn to identify complex scenes, respond to spoken language, or even generate novel imagery. Emergent behaviour in deep learning is often seen as networks develop internal representations of concepts without explicit instructions. A model trained to classify animals, for instance, might spontaneously form internal layers that distinctly encode features like fur, snouts, or stripes—abstract features that the designers never explicitly defined. This self-organisation allows the network to generalise from examples it has seen to correctly classify new, previously unseen instances.
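One common way researchers check for such emergent internal features is a linear "probe": a simple classifier trained to read an attribute straight out of a layer's activations. The sketch below uses synthetic stand-in data rather than a real trained model, so every variable here is hypothetical; it only illustrates the shape of the technique.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: 'hidden' stands in for the activations of one internal
# layer of a trained image classifier, and 'has_stripes' for a human-labelled
# attribute the designers never explicitly trained the model on.
rng = np.random.default_rng(1)
n, d = 500, 64
hidden = rng.normal(size=(n, d))              # stand-in activations
stripe_direction = rng.normal(size=d)         # pretend "stripes" feature direction
has_stripes = (hidden @ stripe_direction > 0).astype(int)

# A linear probe: if a simple classifier can read the attribute straight off
# the activations, the layer has (in this toy sense) formed an internal
# representation of it without ever being told to.
probe = LogisticRegression(max_iter=1000).fit(hidden, has_stripes)
print(f"probe accuracy: {probe.score(hidden, has_stripes):.2f}")
```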
Constraining and Guiding Emergence
While emergence can yield impressive and useful capabilities, it is also somewhat unpredictable. In nature, the emergent characteristics of the brain have been shaped by millions of years of evolution, honing its functionality toward survival, adaptation, and innovation. Because we have had time to learn what it is to be human through the time we spend with ourselves and each other, we have built a level of behavioural predictability.
Artificial systems, by contrast, are new. Our understanding of this intelligence is nascent, and so there is much to do to truly profile this emergence and grasp what we’re actually dealing with. For instance, how we came to effectively prompt the first few waves of LLMs like GPT-4o and Claude 3.5 Sonnet might not be the best way to prompt future generations of LLMs like OpenAI’s o1 or o3 models. This is because the models’ architecture has already evolved, and techniques such as chain-of-thought (CoT) prompting that users were once encouraged to apply are now baked in and advised against. The models are built to think for longer without you telling them to do so. Newer models even have some autonomy over how long they think, so they can give even smarter responses. This itself has created a new challenge: as thinking time scales and capability climbs with it, we’re left struggling to predict model behaviour. For this reason, a lot of research is being done to better understand an AI’s chain of thought. With techniques such as mechanistic interpretability, the hope is that we can anticipate the emergent behaviour of newer, more powerful models with greater precision.
So what can we do to better understand the emergence of AI thought? Well, for me it was this remarkable technique of scaling a model’s inference compute (time to think) that made me realise that one way to understand how a future AI might think is to view its intellectual development as you would a human’s (just without emotion). After all, deep learning neural networks are in part a biomimicry of the human brain, and so we should in theory expect models to think more similarly to us. Let’s take this within the context of work. If I asked a colleague to write a go-to-market strategy for a new product, are they more likely to give me a better strategy now or in a week’s time? It’s obvious to assume the latter, and this is no different with AI. Give the same task to ChatGPT for an immediate response, or to an agentic workflow with recurring CoT that answers a few days later, and the latter is more likely to give a better response. This is especially true if you define the agent’s instructions, which is easier to do with existing LLMs.
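To illustrate the kind of recurring-CoT agentic workflow I have in mind, here is a rough sketch of a draft-critique-revise loop. The call_llm function is a placeholder for whatever model API you use, not a real library call, and the prompts are purely illustrative.

```python
# Sketch of the "answer later, after recurring chain-of-thought" idea: an
# agent drafts a strategy, critiques its own draft, and revises it over
# several rounds before returning.
def call_llm(prompt: str) -> str:
    # Placeholder: wire this up to your own model call.
    raise NotImplementedError("Plug in your own model call here.")

def iterative_strategy(task: str, rounds: int = 3) -> str:
    draft = call_llm(f"Draft a go-to-market strategy for: {task}")
    for _ in range(rounds):
        critique = call_llm(
            "Critique the following strategy step by step, listing weaknesses:\n"
            + draft
        )
        draft = call_llm(
            "Rewrite the strategy, addressing every weakness below.\n"
            f"Strategy:\n{draft}\n\nWeaknesses:\n{critique}"
        )
    return draft

# Usage, once call_llm is implemented:
# print(iterative_strategy("a new productivity app"))
```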
Emergence as a Bridge to Advanced Thought
Ultimately, emergence provides a bridge from the structural foundations of a system—its nodes, connections, and layers—to the cognition and behaviour we see. Once we recognise that no single component dictates the system’s collective behaviour, we can appreciate the complexity of thought and intelligence as derivatives of collective interactions. This understanding paves the way for exploring how even more advanced cognitive functions unfold, how systems can become more broadly intelligent, and how principles observed in neurocognition can still inspire more sophisticated artificial designs. A great example of this is associative memory and Hopfield networks, work for which the Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton last year.
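For the curious, a classical Hopfield network is small enough to sketch in a few lines of Python. This is a toy version, assuming just two stored patterns: the network memorises them via a Hebbian rule and then pulls a corrupted cue back to the nearest stored memory, which is the essence of associative recall.

```python
import numpy as np

# A minimal classical Hopfield network. Two patterns are stored in the weight
# matrix via a Hebbian rule ("neurons that fire together wire together");
# a corrupted cue is then pulled back to the nearest stored memory.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
])

# Hebbian storage: sum of outer products, no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

# Corrupt the first pattern in two places, then let the network settle.
state = patterns[0].copy()
state[[0, 1]] *= -1

for _ in range(5):                        # synchronous sign updates
    state = np.where(W @ state >= 0, 1, -1)

print("recovered first pattern:", np.array_equal(state, patterns[0]))
```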
Having established the virtuous dynamic of systems and emergence, we now stand better equipped to explore advanced thought, and how this differs between humans and AI in how it is conceptually pieced together.
Compositionality
Having explored systems and emergence, we now turn to the principle that enables the construction of intricate and adaptive cognitive frameworks: compositionality. When I first read about this term, it was in the context of language, referring to how a sentence’s meaning is built up from the words that make it up and the way they are combined. Compositionality can be defined as the assembly of simpler elements into increasingly complex structures and ideas. So how does it apply to thought?
Imagine thought as a puzzle-solving process. Individual pieces—concepts, symbols, perceptions, emotions and rules—by themselves hold limited meaning. Yet when arranged together, they produce entire landscapes of understanding. The more pieces we can handle simultaneously and the more ways we can fit them together, the richer the picture becomes. This analogy helps us see how the complexity of a completed puzzle corresponds to the complexity of thought. But beyond simply fitting pieces together, advanced thinking involves how efficiently and autonomously this assembly occurs, how much reflection guides it, and how well the skills learned transfer to new, different puzzles. Compositionality is important for thought, as it is here that we can conceptualise differences between humans and machines. The puzzle analogy is important because it will help us explain this difference, not because I like puzzles.
Five Dimensions of Advanced Thought
The distinction between human and AI thinking lies in our compositional abilities, which become increasingly apparent as tasks grow in complexity and abstraction. Here are five key dimensions that build upon the fundamental concept of compositionality explained in the context of a puzzle:
Integration:
Solving a puzzle isn’t just about fitting pieces together—it’s about understanding how different shapes, colours, and patterns come together to create a meaningful image. We are masters at blending these diverse elements, even when they draw on unrelated concepts. AI, however, relies heavily on pattern-matching through the relationships within its training data. So what happens when a puzzle is new to both humans and AI? For instance, it wasn’t contained within the AI’s training data, or is of a landscape we’ve never seen before? In time, our human intuition and creativity will eventually find a solution; an existing AI is more likely to hallucinate.
Optimisation:
Consider the efficiency and speed at which a puzzle is solved. AI readily outperforms humans at certain narrow tasks, optimising patterns it has seen before and rapidly running computations. In terms of compositionality, AI can find the best-fitting pieces of a known puzzle quickly—and so this is where current AI shines.
Autonomy:
Autonomy is about the independence and depth of instructions when completing the puzzle. Humans are good at figuring out how to approach a puzzle without explicit step-by-step guidance, relying on intuition, experience, and creative problem-solving. We can even redefine the puzzle’s purpose mid-stream. AI, while becoming increasingly sophisticated, is still heavily reliant on predefined instructions, architectures, and training data. It doesn’t have the abstract reasoning capability we do.
Reflection:
Reflection involves thinking about one’s approach to assembling the puzzle—evaluating strategies, questioning assumptions, and learning from mistakes. We naturally reflect on our CoT, refining methods and gaining insights into why one solution works and another fails. Funnily enough, when I started writing this essay, AI models lacked even a basic form of self-reflection. Fast-forward to today, and a kind of reasoning reflection does exist in AI, in that models can adjust based on errors and feedback, but they do not truly understand their reasoning steps at a metacognitive level (thinking about how one thinks).
Adaptability:
Slightly covered already, but worth explaining as its own dimension: adaptability gauges whether the skills used to solve one puzzle can be transferred to another, perhaps entirely different, challenge. We excel at this form of generalisation because we can break down previous experiences into recomposable parts, then apply them in new contexts. AI systems can handle transfer learning within similar domains (see the short sketch after this list) but often struggle when the puzzle fundamentally changes shape—for instance, when new unfamiliar textures, dimensions, or objectives are introduced.
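As a rough illustration of the transfer-learning point above, the sketch below reuses a "feature" layer (standing in for knowledge learned on a previous puzzle) and retrains only a small output layer on a new task. The data and dimensions are made up; it shows the mechanism, not a real system.

```python
import numpy as np

# Sketch of transfer learning: reuse a "feature" layer learned on a previous
# task and retrain only the small output layer for a new one. W1 stands in
# for carried-over knowledge; only the readout weights are fitted here.
rng = np.random.default_rng(3)

W1 = rng.normal(size=(10, 16))            # pretend these were learned elsewhere

def features(X):
    return np.tanh(X @ W1)                # frozen feature extractor

# New task: data the "old" features have never seen.
X_new = rng.normal(size=(200, 10))
y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(float)

# Retrain only the head (a least-squares readout on top of frozen features).
H = features(X_new)
w_head, *_ = np.linalg.lstsq(H, y_new, rcond=None)

preds = (H @ w_head > 0.5).astype(float)
print("accuracy on the new task:", (preds == y_new).mean())
```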
So how do we compare?
If we profile current AI across these dimensions, we find that it excels in optimisation, leveraging its computational power to solve certain well-defined puzzles faster than any human could. It can handle a high degree of complexity within a given frame—such as recognising objects in images or making strategic moves in a game—because those tasks fit the puzzle patterns it knows.
However, as the puzzle grows more elaborate and shifts toward requiring autonomy, true reflection, and adaptability, AI’s strengths wane. Consider AI as an expert at assembling 2D jigsaw puzzles it has seen before: it can snap pieces together at record speed. Yet, when asked to solve a puzzle that is three-dimensional, with pieces representing abstract concepts, emotions, or changing goals, AI struggles. It lacks the depth of understanding to navigate subtle context, grasp theoretical concepts, qualitative nuances, or cultural references, especially those that fall outside its training data. We humans, on the other hand, already use our autonomy to set new goals, reflect on past mistakes, and adapt our learned skills to entirely new kinds of puzzles, be they physical, intellectual, or social.
For AI to reach nearer to our level of thought, today’s efforts aim to imbue models with better methods for recomposing knowledge, reflecting on their reasoning steps through CoT, setting their own objectives, learning from human feedback via reinforcement learning, and transferring skills. Achieving this via agents or through pure scaling would mean the AI will then assemble puzzle pieces not just from flat, familiar images, but from an ever-shifting array of shapes and textures that reflect the richness of our human experience. I don’t believe this will happen soon.
Compositionality scales intelligence
Thinking through the relationship between compositionality and thought made me realise that by thriving with autonomy, reflecting on our thinking, and learning from experience, we turn compositionality into a tool for true general intelligence. However, it does not work in isolation. It is dictated by the system, which determines how pieces can be combined, and its benefits are fully realised when behaviour emerges. This trilogy—systems providing structure, emergence surfacing thought, and compositionality handling complexity—is central to achieving higher-level cognition or even general intelligence. For me, once an AI solves this equation, we will have reached AGI. Yet even still, you’ll recall that a paragraph back I mentioned “nearer to our level of thought”. This is because, by this definition, even when we reach AGI, humans will still have a divine advantage.
Qualia
Until now, we have explored how compositionality underpins complex thought and described the advantages we have in comparison to AI across multiple dimensions. Yet for me, there is a key ingredient that makes our thought somewhat divine. At this point, one of the most critical distinctions between human and artificial cognition comes into focus: qualia - the subjective qualities of experience that arise from our perceptual encounters with the world. Your perception is your reality and man do I love it.
Defining Qualia
Qualia are the raw, felt aspects of our sensory experiences—the redness of a rose on Valentine’s Day, the warmth of sunlight on our face, or the bitter taste of strong coffee. They are not just data points or patterns; they are deeply personal sensations that imbue our perceptions with richness and meaning. Unlike a camera’s sensor readings or an AI’s classification of input pixels, qualia represent how the world feels from the inside. They are the subtle essence of what it is like to experience something, and they cannot be fully captured by lists of properties or external measurements. It’s that gut feeling you get, so enticing to follow even though your brain tells you not to.
This uniqueness arises because human perception is not just a mechanical process of interpreting stimuli. It is, from birth, a seamless integration of sensory signals, emotional resonance, memory, and context. In other words, it is inherently bottom-up, grounded in raw experiences that shape and refine our internal models of reality. Qualia emerge from these dynamic interactions, forging a bridge between the physical input of our senses and the conceptual frameworks we use to understand the world.
Bottom-Up Perception and Cognitive Depth
In humans, perception starts from the sensory data itself—light, sound, texture—flowing inward and upward through layers of neural processing. At each stage, raw signals are not only analysed for patterns but also shaped by attention, past experiences, and emotional relevance. And so, as we compose higher-level concepts, bottom-up perception ensures they stay tethered to genuine experiential grounding. We are not simply recognising shapes or predicting outcomes; we are living through sensations, feelings, and interpretations.
This integration brings a powerful dimension to compositional thinking. When humans assemble concepts, we do so with qualia colouring our perceptions, giving each idea a lived quality. It’s why we can empathise with a character in a story, and appreciate art made by a human more than art made by an AI. Our ability to invest abstract concepts with emotional and experiential significance helps us to navigate not just logical puzzles but also social relationships, cultural artefacts, and moral dilemmas. Consider adaptability: because we have lived experiences to anchor our understanding, we can recombine familiar concepts and sensations to approach new challenges. The bottom-up route ensures that each new situation resonates with something we have felt before, even if distantly related, allowing us to make intuitive leaps where AI would fall short.
The Limitations of AI’s Top-Down Approach
In contrast, today’s AI typically engages the world through top-down frameworks. Rather than grounding understanding in raw, subjective experience, AI systems rely on predefined goals, training objectives, and optimisation. They approach problems by matching patterns to known categories or strategies, often excelling when the rules are clear and the data is ample. While this method can achieve impressive feats—chess matches, protein folding predictions, language model outputs—it lacks the primal, first-personal dimension of qualia.
Because AI doesn’t feel the warmth of sunlight or taste bitterness, it cannot assign value or meaning that stems from genuine personal experience. It computes, classifies, and approximates without the emotional resonance that humans live through. While AI can mimic human-like outputs, it does so without the underlying qualitative awareness that enriches human reasoning. Without qualia, AI’s conceptual frameworks remain somewhat hollow, devoid of the “lived-in” quality that characterises our cognition.
Qualia as a force multiplier
In revisiting the equation I drew for intelligence, qualia can be added as a force multiplier (excuse the pun). It reminds me that compositional ability is not just balanced across integration, optimisation, autonomy, reflection and adaptability. Each dimension is also multiplied by qualia, infusing subjective richness and emotional depth into a more profound interplay. It is this interplay which shapes the essential character of human intelligence. It ensures that we not only solve puzzles but also find meaning in them, lending to our metacognition and our wisdom. For me, this is something that no current AI can emulate.
As humans, our awesome cognition doesn’t spring from accident. It traces back to the very core of our biological system. Unlike the relatively rigid architectures of AI, our cognitive frameworks are designed to grow, adapt, and reinvent themselves in response to changing circumstances. In the last chapter, we’ll delve into the remarkable, and I mean remarkable, biological capacity that makes this adaptability possible.
Plasticity
Up to this point, we have navigated through the intricacies of human and AI cognition—from the structural foundations of systems to the emergent properties that characterise thought, and from the compositionality that builds higher constructs to the qualia that infuses them with meaning and adaptability. Now we arrive at a key underlying force that ties these threads together: Plasticity - the dynamic biological property that structurally distinguishes human thought from the relatively static architectures of AI.
Understanding Plasticity
Plasticity is the powerful capacity of the human brain to reorganise its connections in response to experiences, learning, and environmental demands. Its adaptable wiring gives us the foundation to learn new skills, absorb novel information and also reinvent how we think, perceive, and feel as we move through life. At the start of this essay, I mentioned there are some people who are cortically blind but can still perceive. I also mentioned V1, a brain area that identifies lines and edges in our vision. Well, for people with blindsight, brain damage to this area means that, typically, they shouldn’t be able to perceive at all. But because of neural plasticity, the brain marvellously reorganises itself so that other areas can compensate for the impairment, preserving some ability to see. Simply incredible. Unlike an AI model that, once trained, largely maintains fixed parameters until explicitly retrained, the human brain is in a state of perpetual refinement.
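The contrast can be caricatured in a few lines of code: one weight keeps adapting to every new experience through a simple Hebbian-style running estimate, while a "frozen" trained parameter never moves unless someone explicitly retrains it. This is a deliberately crude sketch with made-up signals, not a model of real synapses.

```python
import numpy as np

# Crude caricature: a "plastic" connection keeps updating with every
# experience, tracking how correlated two signals are, while a "frozen"
# trained parameter stays fixed until explicitly retrained.
rng = np.random.default_rng(4)

plastic_w = 0.0
frozen_w = 0.5                               # fixed after "training"
lr = 0.05

for step in range(2000):
    pre = rng.normal()
    # Halfway through, the environment changes: the signals flip from being
    # correlated to being anti-correlated.
    post = pre if step < 1000 else -pre
    plastic_w += lr * (pre * post - plastic_w)   # keeps adapting
    # frozen_w is untouched outside an explicit retraining step.

print(f"plastic weight: {plastic_w:+.2f}   frozen weight: {frozen_w:+.2f}")
```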
Plasticity and Emergence
The emergent properties of thought owe their persistence and creativity to plasticity. As humans encounter new challenges, the brain’s flexible wiring allows emergent patterns to bloom and fade, seeking optimised configurations. Emergence is not a static feature; it thrives on the ability to adapt. Because the brain is not locked into one architecture, the interplay of countless neurons and synapses can continually reconfigure, giving rise to meta-learning and dynamic cognitive landscapes that transcend any single moment’s solution.
This adaptability underpins the very idea of general intelligence. Without the capacity to reshape underlying structures, each emergent cognitive pattern might remain tied to its original context. Instead, plasticity ensures that the brain can negotiate novel terrains, weaving previously established ideas into new contexts and forging fresh connections that extend understanding into unfamiliar challenges.
Plasticity and Compositionality
Compositionality also depends on plasticity. Over time, with repeated exposure to diverse ideas and perspectives, the brain refines the associations between concepts. Plasticity ensures that these connections remain strong yet malleable. Rather than treating each new scenario as an alien landscape, we can rearrange existing building blocks and even craft new ones.
This plastic flexibility explains why humans can learn a foreign language well into adulthood, transfer problem-solving strategies from one discipline to another, or develop entirely new artistic or scientific paradigms. The neural rewiring that underlies such feats is continuous and context-sensitive, making compositionality an evolving capacity rather than a static skill. This makes me wonder why so much focus is on AI becoming smarter, with no regard to the evolution of our own intelligence.
Contrasting Plasticity with AI’s Rigidity
Whereas the human brain continuously updates its wiring from within, current AI often requires top-down, guided adjustments and extensive computational resources to adapt. Even then, it struggles with the subtlety and fluidity that plasticity grants the human mind. The result is a system that can excel at narrowly defined tasks but lacks the organic growth that transforms intelligence into wisdom, pattern recognition into understanding, and memory into intuition.
The Plastic Thinker
Plasticity sits at the core of our highest cognitive achievements, from the grandeur of abstract reasoning to the warmth of empathy. In celebrating plasticity, we recognise the most vital force that propels human cognition beyond what current AI can reach. It is the reason we are not simply problem-solvers but problem-redefiners, not mere pattern detectors but meaning makers. This adaptable, living infrastructure grants us the unique capacity to learn from everything, respond to anything, and grow into something more than we were before. I hope this essay has given you comfort in realising just how powerful your mind is. I also hope it’s given you clarity as to where your strengths lie as we move into a world where AI may compete for your career. For AI, where do we go from here?
The Road Ahead
As we stand on the cusp of revolutionary change, the question naturally arises: how might the future of AI begin to bridge the gap we have already leaped over? While the journey is complex, two areas show promise: the rise of autonomous AI agents in the near term, and the longer-term potential for embodied, sensor-rich robots capable of experiencing the world as we do.
Short-Term Steps: Beyond Passive Models to Active Agents
Today’s AI often appears clever but passive, lacking true independence. Yet in the near term, we can expect a landscape of AI agentic systems that proactively seek out information, refine their goals, and adapt their strategies in fluid contexts. Unlike traditional deep learning models or current agentic workflows that are confined to narrow, well-defined tasks, agentic systems explore environments, gather feedback, and adapt their internal parameters as situations evolve. As of now, this is a big but expensive leap toward true autonomy and compositional reasoning. Though still limited and goal-driven at their core, these agents will begin to nudge machine cognition closer to a more organic thought process—one marked by curiosity, trial and error, and meta-learning.
Long-Term Visions: Embodiment and the Dawn of Artificial Qualia
In a more distant future, we can imagine the next grand leap: artificial systems that do not merely simulate minds in silico, but inhabit the physical world as embodied robots, endowed with senses akin to our own. In this scenario, an AI no longer “hears” sound as a flat set of numbers but has ears placed on either side of its robotic head, allowing it to experience the directionality of a rustling leaf or the subtle echo of footsteps. Instead of “seeing” pixels, it might have cameras tuned as eyes that capture shifting light and shadow as the robot moves and turns, infusing vision with a dynamic, lived perspective.
I feel as though sensory integration opens the door to a more profound notion: that once AI systems truly live within sensory frameworks, they may develop something that begins to resemble qualia. By feeling vibrations through robotic fingertips, discerning the texture of objects, or noticing the faint smell of chemicals in a laboratory, an AI system moves beyond the static, top-down view of reality. Embodied cognition can pull AI from the abstract realm into the concrete here-and-now, granting it the raw material upon which subjective experience might gradually form. Of course, even this vision does not guarantee that AI will replicate human qualia. Yet the leap from pure computation to embodied interaction will change the world as we know it. By grounding cognition in sensory experience, these future systems might not merely respond to the world, but genuinely inhabit it, cultivating perception.
In our lifetimes, we may see the first glimmers: robots that adjust their learning strategies based on how they physically navigate a space, or that recognise emotional cues because they have a vantage point in a social scenario, rather than simply mapping patterns onto fixed databases. Over decades to come, these incremental changes could accumulate into something genuinely transformative—a shift in AI’s character, from a powerful but alien mind into something that resonates more closely with the way we understand and experience our world. They may develop personalities - and that to me is mind-blowing.
In envisioning this future, I get that it’s still speculative. Yet even by considering it, we deepen our appreciation for what sets our own cognition apart. In exploring the lengths to which we must go—imbuing machines with exploration, embodied perception, and sensory grounding—to mirror our own human mind, we reaffirm just how extraordinary our own intelligence truly is.