When researchers David Rumelhart and James McClelland published their groundbreaking two-volume work Parallel Distributed Processing: Explorations in the Microstructure of Cognition in 1986, they introduced a radical idea: that complex intelligence could emerge from simple, unintelligent components. These networks of neuron-like units, known as PDP models, laid the foundation for modern artificial intelligence, transforming how scientists understood learning, memory, and thought itself.
Before the PDP revolution, most AI systems relied on explicit programming: human-crafted rules designed to mimic reasoning. Rumelhart and McClelland took the opposite approach. They asked whether networks made up of small, single-minded “neurons,” each following a few basic mathematical principles, could collectively perform tasks associated with intelligence. Remarkably, they could. These simple systems learned to recognize objects, recall memories, infer context, and make decisions, all without hand-coded rules or explicit prior knowledge.
This insight marked a turning point. It suggested that intelligence might not require conscious direction or explicit understanding. Instead, intelligence could arise naturally from pattern recognition, connection, and feedback. The PDP models showed that when neurons communicate, adjust their connections, and strengthen successful pathways, something astonishing happens: order and meaning emerge from apparent chaos.
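To make that mechanism concrete, here is a minimal Python sketch of one such feedback loop: a single unit nudging its connection weights in proportion to its error on a toy task. The task, the variable names, and the simple error-correcting rule are illustrative stand-ins, not the specific models of the PDP volumes.

```python
import numpy as np

# Toy task: learn logical OR from examples, with no rules built in.
# On each pass, the unit compares its output to the target and adjusts
# its incoming connection weights in proportion to the error, so
# pathways that reduce error are strengthened over time.
inputs = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = np.array([0., 1., 1., 1.])

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=2)  # connection strengths
bias = 0.0
lr = 0.5                                 # learning rate

for _ in range(500):
    for x, t in zip(inputs, targets):
        y = 1.0 / (1.0 + np.exp(-(weights @ x + bias)))  # unit activation
        error = t - y                                    # feedback signal
        weights += lr * error * x                        # adjust connections
        bias += lr * error

for x in inputs:
    y = 1.0 / (1.0 + np.exp(-(weights @ x + bias)))
    print(x, round(float(y), 2))  # outputs approach the targets
```

After enough passes the unit's outputs settle near 0 and 1 as appropriate; its "knowledge" of OR lives entirely in two learned connection weights and a bias, not in any stated rule.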
At first, these networks were meant to model the human brain rather than replace it. By studying how artificial neurons learn, researchers hoped to uncover how real neurons collaborate to produce thought and memory. The PDP systems became powerful analogies for how the brain might handle incomplete information—retrieving a memory from a faint cue, completing a partial pattern, or recognizing a familiar face in an unfamiliar setting. They showed that even mindless mechanisms could mimic the hallmarks of cognition.
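Pattern completion can be sketched just as briefly. The toy network below stores a single activity pattern by strengthening connections between co-active units, then recovers it from a corrupted cue. It is a generic Hopfield-style illustration with made-up data, offered here for concreteness rather than drawn from the 1986 books.

```python
import numpy as np

def store(patterns):
    """Build weights by strengthening connections between co-active units."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:            # patterns use +1/-1 activations
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)        # no unit connects to itself
    return w / n

def recall(w, cue, steps=10):
    """Repeatedly let units update from their neighbors until they settle."""
    state = cue.astype(float)
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1.0   # break ties toward +1
    return state.astype(int)

memory = np.array([[1, 1, 1, -1, -1, -1, 1, -1]])  # one stored pattern
w = store(memory)

cue = memory[0].copy()
cue[:3] = -1                      # a faint cue: three units corrupted
print("cue:     ", cue)
print("recalled:", recall(w, cue))  # the full pattern re-emerges
```

Given a cue with three of eight units wrong, the network settles back onto the stored pattern: the memory is reconstructed from the pattern of connections, not looked up in a table.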
In the decades that followed, this research evolved into the deep learning systems that now power artificial intelligence. Yet one question has persisted: if unconscious networks can appear intelligent, what role does consciousness actually play? Does awareness add something essential, or is it simply a byproduct of vast, interconnected activity?
A new book, The Emergent Mind: How Intelligence Arises in People and Machines, co-authored by McClelland and psychologist Gaurav Suri, revisits these questions with the benefit of forty years of progress. It traces the journey from those early PDP models to today’s neural networks, bridging neuroscience and artificial intelligence. Suri and McClelland explore how intelligence unfolds from simple units, and they invite readers to ponder where consciousness fits in this architecture of emergence.
The enduring mystery remains the same: how can mindless processes give rise to mindful experience? Just as Rumelhart and McClelland’s early networks showed that neurons don’t need to be smart individually to create smart systems, today’s AI demonstrates how simplicity, scaled across billions of connections, can simulate thought. Yet, beneath the algorithms and synapses, the question of subjective awareness—the feeling of being—remains unresolved.
As neural science and AI continue to evolve, one lesson from the PDP pioneers stands clear: intelligence is not a property of parts but of relationships. It is born not from command or design, but from interaction, adaptation, and connection. In understanding how mind emerges from the mindless, we inch closer to understanding not only machines but ourselves.