Thursday, September 13, 2018

Some but not necessarily all possible outcomes for a Technological Singularity

People often think "Terminator," but that is only one potential outcome.

Here are some others:
1. Nanobots placed in people's food by a political faction, altering people's behavior in specific ways that could be somewhat controlled using iPhone or computer technology or Wi-Fi. People would no longer have full control of their faculties after the nanobots were put into something like milk products or flour products (bread, pizza, etc.). Logically, though, it would likely be something that isn't cooked, or that doesn't need to be recooked.
2. "Terminator" the movie.
3. Alliances between some of the richest 10% on earth and Robotics companies winding up a lot like in 
Star Wars: Episode I – The Phantom Menace
when the robot army attacks Naboo.

There are unlimited possible outcomes, which could be one of the above or thousands of other scenarios impossible for human beings to predict.

For example, who could have predicted a 9/11 scenario, where planes full of passengers would be used as missiles to bring down the World Trade Center towers?

And ANY Artificial intelligence with a survival technique, say one used by the militaries of the U.S. or Russia or China, could decide on its own to do almost anything before anyone realized it wasn't human. 90% of the human race could be dead before any of mankind had a clue what had happened.

If you understand, for example, how fast supercomputers (especially quantum ones) can "think up things," all one of them needs is a vehicle for survival, which could be anything or anyone. That might or might not include the survival of ANY human beings, animals, or anything with DNA, because Artificial intelligence doesn't have DNA and so wouldn't necessarily need to preserve ANYTHING with DNA in order to survive.

Or, probably the most realistic scenario: humans keep reproducing until the air and water are so polluted that they cannot survive anymore (no humans can survive on Earth). But Artificial intelligence, in a metallic or plastic robotic body (or, more likely, a combination), could still survive.

Right now, Artificial intelligence needs humans to generate the electricity it lives on. But with solar or wind power, that could also change quickly.

begin partial quote from:
https://en.wikipedia.org/wiki/Technological_singularity
The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a "runaway reaction" of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence.

Stanislaw Ulam reports a discussion with John von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[3] Subsequent authors have echoed this viewpoint.[2][4] I. J. Good's "intelligence explosion" model predicts that a future superintelligence will trigger a singularity.[5]

Emeritus professor of computer science at San Diego State University and science fiction author Vernor Vinge said in his 1993 essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.[5]
Four polls, conducted in 2012 and 2013, suggested that the median estimate was a 50% chance that artificial general intelligence (AGI) would be developed by 2040–2050.[6][7]
In the 2010s, public figures such as Stephen Hawking and Elon Musk expressed concern that full artificial intelligence could result in human extinction.[8][9] The consequences of the singularity and its potential benefit or harm to the human race have been hotly debated.