
By Pallab Ghosh (@BBCPallab), science correspondent


I step into the booth with some apprehension. I am about to be bathed in strobe lighting while music plays, as part of a research project trying to understand what makes us truly human.

The experience brings to mind the test in the science fiction film Blade Runner, used to distinguish real humans from artificial beings.

Might I fail the test, unaware until now that I am a robot from the future?

The scientists reassure me that this isn’t the true purpose of the experiment. They refer to the machine as the “Dreamachine” after the public program of the same name. Its purpose is to investigate how the human brain creates our conscious perceptions of the outside world.

As the strobing begins, even though my eyes are closed, I see swirling two-dimensional geometric patterns. The ever-changing triangles, pentagons and octagons make it feel as if I have stepped into a kaleidoscope. The colours are vivid, intense and constantly shifting: pinks, magentas and turquoise hues that glow like neon lights.

With flashing lights, the “Dreamachine” aims to investigate how our thought processes function by bringing the inner activity of the brain to the surface.

The researchers say the images I am seeing are unique to me and my own inner world. They believe these patterns can shed light on the nature of consciousness.

I whisper to them: “It’s beautiful, really beautiful. It’s like flying through my own mind.”

The “Dreamachine” at the Centre for Consciousness Science at Sussex University is one of several new studies being conducted worldwide to learn more about human consciousness—the area of the mind that gives us the ability to think, feel, and make decisions about the world on our own.

Researchers aim to gain a deeper understanding of what goes on inside artificial intelligence’s silicon brains by discovering the nature of consciousness. Some people think AI systems will soon, if not already, develop autonomous consciousness.

But what is consciousness, how close is AI to achieving it, and could even the belief that AI is conscious fundamentally change humankind in the coming decades?

From science fiction to the real world…

Science fiction has long explored the idea of machines having their own minds. Concerns regarding artificial intelligence date back almost a century, to the movie Metropolis, where a robot poses as a real woman.

The fear of machines becoming sentient and posing a danger to humans is explored in the 1968 film 2001: A Space Odyssey, in which the HAL 9000 computer attacks the astronauts aboard its spaceship. And in the recently released final Mission: Impossible film, a powerful rogue AI, described by one character as a “self-aware, self-learning, truth-eating digital parasite”, threatens to destroy the world.

But recently, in the real world, there has been a rapid tipping point in thinking: serious voices now warn that machine consciousness is no longer just a concept from science fiction.

The sudden shift has been driven by the so-called large language models (LLMs), accessible through phone apps such as Gemini and ChatGPT. The latest generation of LLMs is capable of such plausible, free-flowing conversation that it has surprised even their creators and some of the leading experts in the field.

Some thinkers are increasingly of the opinion that when AI gets even smarter, the lights inside the machines will suddenly turn on and they will become conscious.

Others disagree, calling that view “blindly optimistic and driven by human exceptionalism”. One of them is Prof Anil Seth, who leads the Sussex University team.

“We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn’t mean they go together in general, for example in animals.”

What, then, is consciousness?

The short answer is that no one knows. That much is clear from the good-natured but robust debates within Prof Seth’s own team of young AI specialists, computer scientists, neuroscientists and philosophers, who are trying to answer one of the biggest questions in science and philosophy.

While there are many views at the Centre for Consciousness Science, the scientists share a common approach: to break this big problem down into many smaller ones through a series of research projects, which include the Dreamachine.

The Sussex team’s approach echoes the 19th century, when scientists abandoned the search for a “spark of life” that animated inanimate matter, and instead set about explaining how each component of living systems worked.

They seek to discover brain activity patterns that account for different aspects of conscious experiences, like variations in electrical signals or blood flow to distinct areas. Beyond merely seeking connections between brain activity and consciousness, the objective is to attempt to explain each of its constituent parts.

Prof. Seth, the author of the consciousness book Being You, is concerned that we might be hurriedly entering a society that is changing quickly due to the speed of technological advancement without fully understanding the science or considering the ramifications.

“We take it as if the future has already been written; that there is an inevitable march to a superhuman replacement,” he says.

“We did not have these conversations enough with the rise of social media, and that was detrimental to all of us. But with AI, there is still time. What we want is up to us.”

Is there already consciousness in AI?

Some in the tech world, however, believe that the AI in our computers and phones may already be conscious, and that we should treat it accordingly.

In 2022, Google suspended software engineer Blake Lemoine for claiming that AI chatbots could have emotions and possibly experience pain.

Kyle Fish, an AI welfare officer for Anthropic, co-authored a report in November 2024 arguing that AI consciousness was a plausible possibility in the near future. He recently stated in an interview with The New York Times that he thought there was a 15 percent chance that chatbots were already sentient.

One reason he thinks it is possible is that no one, not even the people who built these systems, knows exactly how they work. That is worrying, says Prof Murray Shanahan, principal scientist at Google DeepMind and emeritus professor of AI at Imperial College London.

“There is some reason for concern as we don’t fully comprehend how LLMs operate internally,” he tells the BBC.

According to Prof. Shanahan, it is critical that tech companies have a thorough understanding of the systems they are developing, and researchers are considering this as an urgent issue.

“We are in a strange position of building these extremely complex things, where we don’t have a good theory of exactly how they achieve the remarkable things they are achieving,” he says. “So having a better understanding of how they work will enable us to steer them in the direction we want and to ensure they are safe.”

“The next stage in humanity’s evolution”

According to the dominant perspective in the technology industry, LLMs are most likely not conscious in any way at all and do not currently perceive the world as we do. However, the married couple, Professors Lenore and Manuel Blum, who are both emeritus professors at Carnegie Mellon University in Pittsburgh, Pennsylvania, think that will change soon.

That could happen, the Blums say, as AI and LLMs gain more live sensory inputs from the real world, such as vision and touch, by connecting cameras and haptic (touch) sensors to AI systems. They are developing a computer model that, in an effort to mimic processes in the brain, builds its own internal language, called Brainish, to process this additional sensory data.

“We believe Brainish can solve the problem of consciousness as we know it,” Lenore tells the BBC. “AI consciousness is inevitable.”

Manuel, grinning mischievously, enthusiastically adds that the new systems he too firmly believes will emerge will be the “next stage in humanity’s evolution”.

Conscious robots, he believes, “are our offspring”. Machines such as these will be entities on Earth, and perhaps on other planets, long after humans are gone.

New York University philosophy and neural science professor David Chalmers distinguished between apparent and real consciousness in 1994 during a conference in Tucson, Arizona. He outlined the “hard problem” of figuring out how and why any of the intricate brain processes result in conscious experience, like our feelings when we hear a nightingale sing.

“I am open to the possibility of the hard problem being solved,” Prof. Chalmers says.

“It would be ideal if humanity could benefit from this new intelligence bonanza,” he tells the BBC. “Perhaps AI systems will enhance our brains.”

On the sci-fi implications of that, he remarks wryly: “In my profession, there is a fine line between science fiction and philosophy.”

“Meat-based computers”

Prof. Seth, however, is investigating the notion that only living systems are capable of realizing true consciousness.

“There is a compelling argument that being alive, rather than computation, is what is necessary for consciousness,” he asserts.

In brains, unlike computers, it is hard to separate what they do from what they are. Without that separation, he argues, it is hard to believe that brains “are simply meat-based computers”.

And if Prof Seth’s instinct about the importance of life is right, the most likely technology of the future will not be made of silicon run by computer code, but of tiny clusters of nerve cells the size of lentils, which are currently being grown in laboratories.

Called “mini-brains” in the press, they are known to the scientific community as “cerebral organoids”, and are used for drug testing and brain research.

One Australian company, Cortical Labs, located in Melbourne, has even created a nerve cell system in a dish that can play Pong, a sports video game from 1972. As it moves a paddle up and down a screen to bat back a pixelated ball, the so-called “brain in a dish” is eerie, despite being a far cry from a conscious system.

Some experts feel that if consciousness is to emerge anywhere, it is most likely to be in larger, more advanced versions of these living tissue systems.

Cortical Labs monitors their electrical activity for any signals that could resemble the emergence of consciousness.

Dr Brett Kagan, the company’s chief scientific and operational officer, is mindful that any uncontrollable new intelligence might have priorities that “do not align with ours”. In that case, he half-jokes, would-be organoid overlords would be easy to defeat, because “there is always bleach” to pour over the fragile neurons.

Returning to a more serious tone, he says the small but significant threat of artificial consciousness is something he would like the big players in the field to take more seriously in their scientific efforts, but “unfortunately, we don’t see any earnest efforts in this space”.

The illusion of consciousness

But perhaps the more immediate problem is how the illusion that machines are conscious affects us.

In just a few years, says Prof Seth, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious. He worries that we won’t be able to resist believing such AIs are sentient and empathetic, which could lead to new dangers.

“It will mean that we trust these things more, share more data with them and be more open to persuasion.”

But the greater risk from the illusion of consciousness, he says, is “moral corrosion”.

“It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives” — meaning we might feel sympathy for robots while caring less for other humans.

And that could change us fundamentally, says Prof Shanahan.

“AI relationships will increasingly replicate human relationships; they will be used as friends, teachers, rivals in video games and even romantic partners. Whether that is a good or bad thing, I don’t know, but it is going to happen, and we are not going to be able to prevent it.”

Top image courtesy of Getty Images.

BBC InDepth is the home on the website and app for the best analysis, with in-depth reporting on the biggest issues of the day and fresh perspectives that challenge assumptions.