Have you ever talked to someone who is “into consciousness”? How did that conversation go? Did they make vague gestures in the air with both hands? Did they refer to the Tao Te Ching or Jean-Paul Sartre? Did they say that, really, scientists can’t be sure of anything, and that truth is only as real as we make it?
The fuzziness of consciousness, its imprecision, has made its study anathema to the natural sciences. At least until recently, the project has largely been left to philosophers, who are often only marginally better than others at clarifying their object of study. Hod Lipson, a roboticist at Columbia University, says some people in his field refer to consciousness as “the C-word.” Grace Lindsay, a neuroscientist at New York University, says, “There’s this idea that you can’t study consciousness until you have tenure.”
However, a few weeks ago, a group of philosophers, neuroscientists and computer scientists, Dr. Lindsay among them, proposed a rubric with which to determine whether an AI system like ChatGPT could be considered conscious. The report, which surveys what Dr. Lindsay calls the “brand new” science of consciousness, combines elements from half a dozen nascent empirical theories and proposes a list of measurable qualities that might suggest the presence of some form of consciousness in a machine.
For example, recurrent processing theory focuses on the differences between conscious perception (for example, actively studying the apple in front of you) and unconscious perception (such as your sense of an apple flying toward your face). Neuroscientists have argued that we perceive things unconsciously when electrical signals are passed from the nerves in our eyes to the primary visual cortex and then to deeper parts of the brain, like a baton being passed from one cluster of nerves to another. These perceptions seem to become conscious when the baton is passed back, from the deeper parts of the brain to the primary visual cortex, creating a loop of activity.
Another theory describes specialized sections of the brain that are used for specific tasks — the part of your brain that can balance your top-heavy body on a pogo stick is different from the part of your brain that can take in an expansive view. We are able to put all this information together (you can bounce on a pogo stick while appreciating a nice view), but only to a certain extent (it’s hard to do). So neuroscientists postulate the existence of a “global workspace” that allows for control and coordination over what we pay attention to, what we remember, even what we perceive. Our consciousness may emerge from this integrated, shifting workspace.
But it could also emerge from the ability to be aware of your own awareness, to create virtual models of the world, to predict future experiences and to locate your body in space. The report argues that any one of these features could, potentially, be an important part of what it means to be conscious. And if we can discern these traits in a machine, then we might be able to consider the machine conscious.
One of the difficulties of this approach is that the most advanced AI systems are deep neural networks that “learn” how to do things on their own, in ways that humans don’t always understand. We can glean some kinds of information from their internal structure, but only in limited ways, at least for the moment. This is the black box problem of AI. So even if we had a complete and exact rubric of consciousness, it would be difficult to apply it to the machines we use every day.
And the authors of the recent report are quick to note that theirs is not a definitive list of what makes one conscious. They rely on an account of “computational functionalism,” according to which consciousness is reduced to pieces of information passed back and forth within a system, like in a pinball machine. In principle, according to this view, a pinball machine could be conscious, if it were made much more complex. (That might mean it’s no longer a pinball machine; let’s cross that bridge if we come to it.) But others have proposed theories that take our biological or physical features, or our social or cultural contexts, to be essential pieces of consciousness. It’s hard to see how these things could be coded into a machine.
And even among researchers who are largely on board with computational functionalism, no existing theory seems sufficient to account for consciousness.
“For any of the report’s conclusions to be meaningful, the theories have to be correct,” said Dr. Lindsay. “Which they’re not.” This might just be the best we can do for now, she added.
After all, does it seem that any one of these features, or all of them combined, comprises what William James described as the “warmth” of conscious experience? Or, in the words of Thomas Nagel, “what it is like” to be you? There is a gap between the ways we can measure subjective experience with science and subjective experience itself. This is what David Chalmers has labeled the “hard problem” of consciousness. Even if an AI system has recurrent processing, a global workspace and a sense of its physical location, what if it still lacks the thing that makes it feel like something?
When I mentioned this emptiness to Robert Long, a philosopher at the Center for AI Safety who led the work on the report, he said, “That feeling is the kind of thing that happens whenever you try to scientifically explain, or reduce to physical processes, some high-level concept.”
The stakes are high, he added; advances in AI and machine learning are coming faster than our ability to explain what’s going on. In 2022, Blake Lemoine, an engineer at Google, argued that the company’s LaMDA chatbot was conscious (although most experts disagreed); the further integration of generative AI into our lives means that the topic may become more contentious. Dr. Long argues that we have to start making some claims about what consciousness might be and laments the “vague and sensationalist” way we’ve gone about it, often conflating subjective experience with general intelligence or reason. “It’s an issue we’re dealing with now, and for the next few years,” he said.
As Megan Peters, a neuroscientist at the University of California, Irvine, and an author of the report, said, “Whether or not there’s somebody in there makes a big difference in how we treat it.”
We already do this kind of research on animals, studying them carefully to make the most basic claims about whether other species have experiences similar to ours, or even ones that we can understand. This can resemble a fun-house activity, like shooting empirical arrows from moving platforms at shape-shifting targets, with bows that occasionally turn into spaghetti. But sometimes we score a hit. As Peter Godfrey-Smith writes in his book “Metazoa,” cephalopods probably have a robust but distinctly different kind of subjective experience from humans. Octopuses have something like 40 million neurons in each arm. What’s that like?
We rely on a series of observations, inferences and experiments — both organized and not — to solve this problem of other minds. We talk, communicate, play, hypothesize, induce, control, X-ray and dissect, but, ultimately, we still don’t know what makes us conscious. We just know that we are.