Inside the AI Party at the End of the World

WIRED

A group of AI researchers, philosophers, and technologists convened in a $30 million mansion on a cliff overlooking the Golden Gate Bridge to debate the end of humanity.

The provocative idea behind the Sunday afternoon symposium, “Worthy Successor,” came from entrepreneur Daniel Faggella: the “moral aim” of advanced artificial intelligence should be to create a form of intelligence so strong and wise that “you would gladly prefer that it (not humanity) determine the future path of life itself.”

Faggella made the theme plain in his invitation. “Posthuman transition is a major theme of this event,” he wrote to me via X DMs. “Not on AGI, which is a perpetually useful tool for people.”

A party where guests discuss the end of humanity as a practical matter rather than a metaphorical one might sound niche. But if you live in San Francisco and work in AI, this is a typical Sunday.

Before three talks on the future of intelligence, roughly 100 guests gathered near floor-to-ceiling windows overlooking the Pacific Ocean, sipping nonalcoholic cocktails and nibbling on cheese plates. One guest wore a shirt that read “Kurzweil was right,” apparently a nod to the futurist Ray Kurzweil, who predicted that machines would eventually outsmart humans. Another wore a shirt that read “Does this help us get to safe AGI?” alongside a thinking-face emoji.

Faggella told WIRED that he organized the event because “the big labs, the people that know that AGI is likely to end humanity, don’t talk about it because the incentives don’t permit it,” citing early remarks from tech leaders like Elon Musk, Sam Altman, and Demis Hassabis, who “were all pretty frank about the possibility of AGI killing us all.” Now, he said, “the incentives are to compete, so they’re all racing full bore to build it.” (To be fair, Musk still talks about the dangers of advanced AI, though this hasn’t stopped him from racing ahead.)

On LinkedIn, Faggella touted a stellar guest list: AI founders, researchers from all the leading Western AI labs, and “most of the important philosophical thinkers on AGI.”

The first speaker, New York-based author Ginevera Davis, warned that human values may be impossible to translate to artificial intelligence. Machines may never comprehend what it is like to be conscious, she said, and trying to hard-code human preferences into future systems may be naive. Instead, she proposed an ambitious concept she calls “cosmic alignment”: building AI capable of identifying deeper, universal values we haven’t yet discovered. Her slides repeatedly showed what appeared to be an AI-generated image of a techno-utopia, with a group of people gathered on a grassy knoll overlooking a futuristic city in the distance.

Whether machines can ever be conscious is itself contested. Critics have described large language models as “stochastic parrots,” a phrase coined by a group of researchers, some of them at Google, who argued in a well-known paper that LLMs do not truly understand language and are merely probabilistic machines. That debate was absent from the symposium, however, where speakers took it as a given that superintelligence is coming, and soon.

By the second talk, the audience was rapt. Attendees sat cross-legged on the wood floor, scribbling notes. Philosopher Michael Edward Johnson argued that we all share an instinct that drastic technological change is coming, but that we lack a moral framework to deal with it, particularly where human values are concerned. If consciousness is “the home of value,” he said, then building AI without fully understanding consciousness is a dangerous gamble: we risk either enslaving something that can suffer or trusting something that cannot. (This idea leans on a similarly contested premise, machine consciousness.) Rather than making AI obey human commands indefinitely, he proposed a loftier goal: teaching both humans and our machines to pursue “the good.” He didn’t offer a precise definition of “the good,” but he maintains that it isn’t mystical and hopes it can be described scientifically.

Finally, Faggella took the stage. Humanity won’t last forever in its current form, he argued, so it is our responsibility to build a successor that can not only survive but also generate new kinds of value and meaning. That successor, he said, needs two qualities: consciousness and “autopoiesis,” the capacity to evolve and generate new experiences. Citing thinkers like Baruch Spinoza and Friedrich Nietzsche, he argued that most of the universe’s value is still undiscovered, and that our task is not to cling to the past but to build something capable of uncovering what lies ahead.

This, he said, is the core of what he calls “axiological cosmism,” a worldview in which the purpose of intelligence is to expand the space of what is valuable and possible, not merely to serve human needs. The current AGI race is reckless, he argued, and humanity may not be ready for what it is building. But if we do it right, he claimed, AI could inherit not just the Earth but the universe’s capacity for meaning.

During breaks between panels and Q&A sessions, clusters of attendees debated topics like the US-China AI race. One AI startup CEO argued to me that of course the galaxy contains other kinds of intelligence, and that whatever already exists beyond the Milky Way matters far more than anything we are building here.

After the event, some guests lingered to keep talking, while others poured out of the mansion and into Waymos and Ubers. “This is not a group that advocates for the annihilation of humanity,” Faggella told me. “This is an advocacy group for the slowing down of AI progress, if anything, to make sure we’re going in the right direction.”
