On a rainy October evening in 1950, British mathematician Alan Turing sat at his desk, pen in hand, wrestling with an ancient question. Philosophers had asked it for centuries, but Turing wanted a fresh angle. His paper, “Computing Machinery and Intelligence,” opened with nine deceptively simple words: “I propose to consider the question, ‘Can machines think?’”
Seventy-five years later, we’re still considering it.
What Do We Mean by “Thinking”?
When René Descartes declared cogito, ergo sum—“I think, therefore I am”—in the 17th century, he tied thought to existence itself. For him, thinking meant awareness, consciousness, the feeling of being “me.” If that’s our definition, then today’s machines do not think. They solve, they calculate, they generate—but they don’t feel.
But Turing, ever the pragmatist, sidestepped metaphysics. Instead of debating the essence of thought, he proposed a game: what if a machine could imitate human conversation so well that you couldn’t tell the difference? If it could, he suggested, then for all practical purposes the machine was thinking. Turing called this thought experiment the imitation game; the world came to know it as the Turing Test, a landmark in the philosophy of AI.
Not everyone agreed. Decades later, philosopher John Searle imagined the Chinese Room: a person who doesn’t understand Chinese sits in a room with rulebooks that tell him how to respond to Chinese characters slipped under the door. To an outsider, it looks like the person knows Chinese, but in reality he’s just shuffling symbols. For Searle, that’s what computers do: they simulate understanding without actually having it.
The Birth of Artificial Intelligence
While Turing planted the seed, another figure gave the field its name. In the summer of 1956, John McCarthy, a young computer scientist at Dartmouth College, organized a workshop that would become legendary. He coined the term “artificial intelligence” and gathered pioneers who dreamed of creating machines that could learn, reason, and even use language.
McCarthy was optimistic—too optimistic, as it turned out. He once quipped that “as soon as it works, no one calls it AI anymore.” Indeed, many things once seen as the pinnacle of machine intelligence—chess-playing programs, voice recognition, route planning—are now ordinary features of our phones.
Yet the dream of true machine thought has proved more elusive.
Machines That Think (Sort Of)
Fast-forward to the twenty-first century, and machines are now capable of feats that once belonged only to the realm of imagination. In 1997, IBM’s Deep Blue sat across the board from Garry Kasparov, the reigning world chess champion, in a contest that captured global attention. Kasparov was one of the most brilliant players in history, famous for his intuition and psychological dominance. Yet he found himself unsettled by the cold precision of his new opponent. After Deep Blue’s victory, Kasparov admitted that he had sometimes felt as if he were facing not a program but “a kind of intelligence.” The machine had no nerves, no fatigue, no fear—it played with an alien calm that unnerved even the greatest of human champions.
Nearly two decades later, in 2016, another shock arrived. AlphaGo, an artificial intelligence built by Google DeepMind and trained on human expert games before being refined through reinforcement learning, defeated Lee Sedol, one of the finest players of the ancient Chinese game of Go¹. For centuries, Go had been revered as a test of human intuition, a game so vast in its possibilities that brute-force calculation was thought useless. Yet in the second game of their five-game match, AlphaGo produced what commentators simply called “Move 37.” It was an unconventional move, one that no human professional would have dared at that stage of the game, and yet it turned out to be brilliant. The move seemed not just computationally clever but creatively inspired. Lee Sedol, visibly shaken, left the room for several minutes before returning to continue. Humanity, it seemed, had been confronted with a new kind of mind.
And now, only a few years later, large language models generate essays, compose poems, draft computer code, and carry on conversations in natural language. They write with a fluency that astonishes, weaving together knowledge from medicine, law, philosophy, and literature into coherent answers that, at first glance, appear deeply thoughtful. To many people, interacting with these systems feels uncannily like talking to a real intelligence.
But under the hood, their workings are profoundly different from ours. These machines excel at pattern recognition rather than genuine understanding. They predict the next word, move, or action from statistical patterns distilled out of colossal collections of human-generated examples, not by reflecting on meaning or purpose. When a child learns the word dog, she encounters the animal itself: she pets its fur, hears its bark, feels its warmth. Her understanding is not just linguistic but embodied, woven into a world of sights, sounds, and feelings. For the machine, the word dog exists only as a constellation of statistical associations, shaped by millions of sentences written by humans. It can generate descriptions, jokes, or metaphors about dogs, but it has never seen one, smelled one, or been licked by one. Its knowledge is wide, but it is weightless—an elaborate web of patterns with no anchor in experience.
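To make that contrast concrete, here is a deliberately tiny sketch in Python of the statistical idea behind next-word prediction. It simply counts which words follow which in a small made-up corpus and then picks the most frequent continuation; real language models replace this lookup table with neural networks holding billions of learned parameters, but the underlying principle, prediction from observed patterns rather than from lived experience, is the same. The corpus and function names here are invented for illustration only.

```python
from collections import Counter, defaultdict

# A toy "training corpus": a handful of sentences about dogs.
corpus = (
    "the dog barks . the dog runs . the cat sleeps . "
    "a dog chases the cat . the dog wags its tail ."
)

# Count how often each word follows each other word (a bigram table).
counts = defaultdict(Counter)
tokens = corpus.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation of `word` seen in the corpus."""
    if word not in counts:
        return "?"  # no pattern to fall back on
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))   # -> "dog": the most frequent continuation
print(predict_next("dog"))   # -> whichever word followed "dog" most often
```

Everything this toy model “knows” about dogs is whatever happened to appear next to the word in its corpus, which is precisely the sense in which such knowledge is weightless.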
This difference matters. When we marvel at AlphaGo’s Move 37 or a chatbot’s eloquent essay, we are tempted to believe that the machine has glimpsed some truth or felt some inspiration. But the reality is that these systems, for all their brilliance, operate without awareness. They do not know that they are playing Go, or conversing, or composing a poem. They are mirrors of human creativity, amplifying and recombining the data we have given them. Their power is real and transformative, but it is not thought in the human sense.
And yet, these victories are not meaningless. They force us to confront the unsettling possibility that many of the things we have long considered “thinking” may in fact be achievable without consciousness, without awareness, without the spark of a self. Machines show us that intuition can be simulated, that strategy can emerge from computation, that language can be generated without meaning. This realization challenges our pride as thinking beings and blurs the line between intelligence as behavior and intelligence as experience.

Where Machines Still Fall Short
Despite dazzling progress, artificial intelligence still stumbles over tasks that humans perform almost without thinking. One of the most striking gaps lies in common sense. A child, barely old enough to tie her shoes, understands that you cannot fit a watermelon into a sandwich bag, or that if you drop a glass it will fall rather than hover. Yet machines, even the most advanced, can sometimes produce bizarre answers to such questions, because they do not truly grasp the physical world. They have no intuitive physics, no lived experience of objects in space. Their “knowledge” is an accumulation of patterns in data, which can be vast but brittle, powerful in some domains yet absurd in others.
Equally absent is consciousness. Machines do not know they exist. They have no sense of being, no inner perspective, no silent witness that says, “I am here.” They do not feel joy when they succeed or frustration when they fail. They do not get bored, anxious, or curious. They can write moving essays about love or grief, but they do so the way a mirror reflects an image—faithfully and convincingly, but without inhabiting it. They manipulate words with stunning skill, yet the words are not tethered to genuine experience.
This gap between simulation and experience is what makes many philosophers skeptical of attributing thought to machines. The machine’s prose about justice may sound convincing, but it is not rooted in the lived struggles of fairness, loss, or compassion. It is rooted only in statistical echoes of what humans have written before. To call this “thinking” seems, at least for now, a category mistake: machines are experts at pattern, not at meaning.
And yet neuroscience complicates the picture. The human brain is, in one sense, also a machine—a network of roughly eighty-six billion neurons firing in intricate rhythms, a biochemical storm of signals and connections. Consciousness, so far as we know, arises not from a mystical spark but from the interactions of this tissue. If thought and awareness emerge from complexity in a biological machine, then might they not also emerge from an artificial one, given enough scale and sophistication?
Some researchers believe this is inevitable. They argue that consciousness is a property of information processing itself, not of carbon or flesh. In their view, once artificial systems reach sufficient complexity, they may not just mimic thought but cross into genuine awareness. Others disagree, pointing out that the biological substrate may be essential—that there may be something about living cells, about metabolism and embodiment, that makes human consciousness what it is. If that is true, then no matter how intricate, a silicon machine may always remain an imitator rather than a thinker.
For now, no one knows where the truth lies. Neuroscience has made astonishing progress in mapping the brain, but the “hard problem of consciousness”—why and how subjective experience arises at all—remains unsolved. Artificial intelligence, meanwhile, grows ever more powerful, but its inner life remains, so far as we can tell, an empty theater: a stage without actors, a play without an audience. The puzzle sits before us unsolved, at the crossroads of philosophy, neuroscience, and computer science.
Thinking About Thinking
So, can machines think? The answer depends very much on what we mean by the word “thinking.” If by thinking we mean the ability to process information, solve problems, recognize patterns, and adapt to new challenges, then machines are already thinkers of a sort. Every time a navigation app reroutes around traffic, or a medical algorithm identifies a tumor on a scan, or a chess program anticipates its opponent’s next move, the machine is performing tasks that once required a human mind. These are functional forms of thought: the ability to take in data, process it according to rules, and produce decisions that prove useful in the real world. In this sense, we already live in an age where machines think—though their style of thinking is alien to ours, more statistical than intuitive, more mechanical than reflective.
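To see what this functional sense of thinking looks like in practice, here is a minimal sketch, again in Python and purely illustrative rather than how any real navigation product works: data about a small map goes in, a fixed rule (breadth-first search) is applied, and a useful decision comes out, with no awareness anywhere in the loop. The road names are invented for the example.

```python
from collections import deque

# A toy road map: each intersection maps to the intersections it connects to.
roads = {
    "home":     ["main_st", "park_ave"],
    "main_st":  ["home", "bridge"],
    "park_ave": ["home", "hill_rd"],
    "bridge":   ["main_st", "office"],
    "hill_rd":  ["park_ave", "office"],
    "office":   ["bridge", "hill_rd"],
}

def shortest_route(start, goal, blocked=frozenset()):
    """Breadth-first search: mechanically explores the map and returns
    the shortest route that avoids any blocked intersections."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in roads[node]:
            if neighbor not in visited and neighbor not in blocked:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route exists

print(shortest_route("home", "office"))                      # goes via the bridge
print(shortest_route("home", "office", blocked={"bridge"}))  # reroutes via hill_rd
```

The second call “decides” to take a different road, and the decision is genuinely useful; yet nothing in the program knows it is navigating, which is exactly the gap the next paragraph turns to.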
But if by thinking we mean the rich inner life that accompanies human consciousness—the awareness of being, the silent sense of “I,” the capacity to reflect, to suffer, to imagine, to be moved by beauty—then machines remain far outside the circle of thought. A computer program may describe love in fluent prose, but it has never felt the sting of heartbreak. It may write an ode to the stars, but it has never looked up at the night sky with wonder. What it produces is not the voice of an experiencing subject, but the echo of countless human voices woven together by patterns. In this sense, machines are brilliant imitators of thought but not yet minds in their own right.
The tension between these two definitions of thinking—functional and conscious—forces us to reflect not only on machines, but on ourselves. For centuries, we drew a boundary around certain activities and declared them uniquely human. Playing complex games, composing music, recognizing faces, driving cars—these were, we believed, unmistakable signs of intelligence. And yet, one by one, machines have taken up those challenges and mastered them. Each victory forces us to redraw the boundary. What we once thought of as proof of “thinking” now becomes, in retrospect, mere calculation.
This shifting line suggests that the real story may not be about whether machines can think, but about how our definition of thought evolves in response to them. Perhaps “thinking” has always been a broader phenomenon than we realized. Or perhaps we are discovering that what we call thinking is a spectrum, stretching from mechanical pattern-recognition to conscious self-reflection, with many shades in between.
The deeper question may therefore be not simply “can machines think?” but “how will their form of thought coexist with ours?” Machines may never dream, or despair, or fall in love, but they will increasingly make decisions, take actions, and interact with us in ways that shape our lives profoundly. Their intelligence, however different from our own, will live alongside ours. The challenge for the future is to understand what it means to share a world with minds that resemble us in some ways and remain utterly alien in others.
¹ Go is an ancient board game of Chinese origin, considered one of the most complex strategy games in the world. It originated in China over 2,500 years ago and later spread to Japan and Korea, becoming an integral part of their cultures. The game is played on a grid of lines (most commonly 19×19, though smaller boards such as 9×9 or 13×13 also exist). The two players take turns placing black and white stones on the board, with black moving first. The goal is not to checkmate a king, as in chess, but to surround as much territory as possible, enclosing empty areas and capturing the opponent’s stones by completely surrounding them.
