Friday, August 29, 2025

From predicting the potential destruction of humanity to a transhumanist apocalypse where people merge with AI, here’s what some of the key players are saying.

The title above comes from this article:

How Silicon Valley is using religious language to talk about AI

Though a transhumanist apocalypse could be possible, where people begin to merge with AI, as long as people actually have a choice about what happens to them, many are likely to choose something slightly different.

Here's an example of my own minor quandary about all this.

When I look at a Waymo taxi, with no way to stop the thing in an emergency or even steer it in an emergency, I see something I NEVER want to ride in.

However, if I were a 12- to 18-year-old girl, riding in a driverless Waymo might be preferable to dying in a car driven by someone my own age.

However, I do like using the automatic driving feature in my 2022 car to drive for me, especially in heavy traffic. In a large traffic jam it is more like being in my living room and touching one button every so often than anything else.

The only time this became a problem recently was when my air conditioner went out in Orange, California, while I was having the car drive itself through heavy bumper-to-bumper traffic in 100-degree heat. That didn't work for me very well, because I'm 77 and have already had heat prostration more than once as a younger man. So I was pretty scared that, without my air conditioner, I would pass out before traffic got moving again.

Even so, I likely never will ride in a driverless Waymo taxi, because I think it is insane to trust your life to this kind of thing. However, this is just me.

So, some people will give their wills over to AI, and some people like myself will not choose to do this. This is simply the reality of people being given the choice of what they are going to do in their lives.


How Silicon Valley is using religious language to talk about AI

From predicting the potential destruction of humanity to a transhumanist apocalypse where people merge with AI, here’s what some of the key players are saying.
Sam Altman, chief executive officer of OpenAI Inc., during a Senate Commerce, Science, and Transportation Committee hearing in Washington, D.C., on May 8. (Nathan Howard / Bloomberg / Getty Images)

TORONTO — As the rapid, unregulated development of artificial intelligence continues, the language people in Silicon Valley use to describe it is becoming increasingly religious.

___

“I think religion will be in trouble if we create other beings. Once we start creating beings that can think for themselves and do things for themselves, maybe even have bodies if they’re robots, we may start realizing we’re less special than we thought. And the idea that we’re very special and we were made in the image of God, that idea may go out the window.”

— Nobel Prize winner Geoffrey Hinton, often dubbed the “Godfather of AI” for his pioneering work on deep learning and neural networks.

___

“By 2045, which is only 20 years from now, we’ll be a million times more powerful. And we’ll be able to have expertise in every field.”

— author and computer scientist Ray Kurzweil, who believes humans will merge with AI.

___

“There certainly are dimensions of the technology that have become extremely powerful in the last century or two that have an apocalyptic dimension. And perhaps it’s strange not to try to relate it to the biblical tradition.”

— PayPal and Palantir co-founder Peter Thiel speaking to the Hoover Institution at Stanford University.

___

“I feel that the four big AI CEOs in the U.S. are modern-day prophets with four different versions of the Gospel and they’re all telling the same basic story that this is so dangerous and so scary that I have to do it and nobody else.”

— Max Tegmark, a physicist and machine learning researcher at the Massachusetts Institute of Technology.

___

“When people in the tech industry talk about building this one true AI, it’s almost as if they think they’re creating God or something.”

— Meta CEO Mark Zuckerberg on a podcast promoting his company’s own venture into AI.

___

“Everyone (including AI companies!) will need to do their part both to prevent risks and to fully realize the benefits. But it is a world worth fighting for. If all of this really does happen over 5 to 10 years — the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights — I suspect everyone watching it will be surprised by the effect it has on them.”

— Anthropic CEO Dario Amodei in his essay, “Machines of Loving Grace: How AI Could Transform the World for the Better.”

___

“You and I are living through this once-in-human-history transition where humans go from being the smartest thing on planet Earth to not the smartest thing on planet Earth.”

— OpenAI CEO Sam Altman during an interview for TED Talks.

___

“These really big, scary problems that are complex and challenging to address — it’s so easy to gravitate towards fantastical thinking and wanting a one-size-fits-all global solution. I think it’s the reason that so many people turn to cults and all sorts of really out there beliefs when the future feels scary and uncertain. I think this is not different than that. They just have billions of dollars to actually enact their ideas.”

— Dylan Baker, lead research engineer at the Distributed AI Research Institute.
