What does this mean?
Or maybe a better way to say this is: "Why did the beings from wherever (in time or space or both) the Roswell craft came risk sending it here?" (What I mean by this is that the Roswell craft may well be from Earth's distant future or even its distant past instead of being an off-world craft.)
So, why would beings allow this powerful technology to get lost in Earth's cultures? I mean, it's sort of like giving nukes to earthlings. Who would think this is a good idea unless they wanted us to self-destruct with nukes?
So, why would you chance a Roswell crash here on Earth?
The likely answer is: "You were worried about American nukes, especially after Hiroshima and Nagasaki."
So, this likely was why it was worth the risk of sending the Roswell craft into the airspace over New Mexico in 1947.
But here's the thing: "Do you know what actually brought down the Roswell craft?" It was thought to be radar transmissions that somehow scrambled its technology. So, when we pointed radar at it, it couldn't navigate properly, either because it thought it was under attack, or because the radar interfered with pilot-to-ship communications on the Roswell craft, or both.
So, now we have Silicon Valley, the worldwide semiconductor industry, and unlimited weapons, including AI (Artificial Intelligence). What are we to make of this?
My thought is that the Roswell craft's technology could be thought of as both a good and a bad influence on earthlings from 1947 onward.
However, the Technological Singularity caused by Artificial Intelligence could very easily drive mankind extinct.
Elon Musk thinks we only have a 5% to 10% chance of surviving Artificial Intelligence. I would go further than this by saying we likely have only a 5% to 10% chance of surviving artificial intelligence by 2050 to 2100. So, we likely have only another 10 or 20 years at most before it becomes critical. However, it could become critical sooner, because iPhones, Android phones, and other devices may become more intelligent than any of the humans using them by 2020 to 2025.
Begin quote from:
Elon Musk Claims We Only Have a 10 Percent Chance of Making AI ...
https://futurism.com/elon-musk-claims-only-have-10-percent-chance-making-ai-safe/
Nov 22, 2017 - While Elon Musk works to advance the field of artificial intelligence, he also believes there is an astronomically high likelihood that AI will pose a threat to ... These considerations have left him convinced that we need to merge with machines if we're to survive, and he's even created a startup dedicated to ...

Elon Musk says we only have 10% chance of making AI safe | Daily Mail
www.dailymail.co.uk/sciencetech/.../Elon-Musk-says-10-chance-making-AI-safe.html
Nov 23, 2017 - Elon Musk was speaking to employees at his firm, Neuralink, this month; He said that efforts to make AI safe only have 'a 5-10% chance of success'; The warning comes shortly after Musk said that regulation of AI was drastically needed because it's a 'fundamental risk to the existence of human civilisation'.

Elon Musk: 5-10% Chance for Humanity to Survive Artificial Intelligence
www.breitbart.com/.../elon-musk-5-10-chance-for-humanity-to-survive-artificial-intel...
Nov 23, 2017 - The futurist and inventor believes that we have no more than "a five to 10 per cent chance" of successfully making artificial intelligence safe enough not to wipe out the human race. Like many of his peers, Musk advocates serious regulation of AI, and as soon as possible. Musk seeks a proactive approach to ...
Elon Musk Claims We Only Have a 10 Percent Chance of Making AI Safe
IN BRIEF
While Elon Musk works to advance the field of artificial intelligence, he also believes there is an astronomically high likelihood that AI will pose a threat to humanity in the future. In an interview with Rolling Stone, the tech luminary claimed we have only a five to 10 percent chance of success at making AI safe.
Outlook Not So Good
Elon Musk has put a lot of thought into the harsh realities and wild possibilities of artificial intelligence (AI). These considerations have left him convinced that we need to merge with machines if we’re to survive, and he’s even created a startup dedicated to developing the brain-computer interface (BCI) technology needed to make that happen. But despite the fact that his very own lab, OpenAI, has created an AI capable of teaching itself, Musk recently said that efforts to make AI safe only have “a five to 10 percent chance of success.”
Musk shared these less-than-stellar odds with the staff at Neuralink, the aforementioned BCI startup, according to a recent Rolling Stone article. Despite Musk's heavy involvement in the advancement of AI, he's openly acknowledged that the technology brings with it not only the potential for, but the promise of, serious problems.
The challenges to making AI safe are twofold.
First, a major goal of AI — and one that OpenAI is already pursuing — is building AI that’s not only smarter than humans, but that is capable of learning independently, without any human programming or interference. Where that ability could take it is unknown.
Then there is the fact that machines do not have morals, remorse, or emotions. Future AI might be capable of distinguishing between “good” and “bad” actions, but distinctly human feelings remain just that — human.
In the Rolling Stone article, Musk further elaborated on the dangers and problems that currently exist with AI, one of which is the potential for just a few companies to essentially control the AI sector. He cited Google’s DeepMind as a prime example.
“Between Facebook, Google, and Amazon — and arguably Apple, but they seem to care about privacy — they have more information about you than you can remember,” said Musk. “There’s a lot of risk in concentration of power. So if AGI [artificial general intelligence] represents an extreme level of power, should that be controlled by a few people at Google with no oversight?”
Worth the Risk?
Experts are divided on Musk’s assertion that we probably can’t make AI safe. Facebook founder Mark Zuckerberg has said he’s optimistic about humanity’s future with AI, calling Musk’s warnings “pretty irresponsible.” Meanwhile, Stephen Hawking has made public statements wholeheartedly expressing his belief that AI systems pose enough of a risk to humanity that they may replace us altogether.
Sergey Nikolenko, a Russian computer scientist who specializes in machine learning and network algorithms, recently shared his thoughts on the matter with Futurism. “I feel that we are still lacking the necessary basic understanding and methodology to achieve serious results on strong AI, the AI alignment problem, and other related problems,” said Nikolenko.
As for today’s AI, he thinks we have nothing to worry about. “I can bet any money that modern neural networks will not suddenly wake up and decide to overthrow their human overlord,” said Nikolenko.
Musk himself might agree with that, but his sentiments are likely more focused on how future AI may build on what we have today.
Already, we have AI systems capable of creating AI systems, ones that can communicate in their own languages, and ones that are naturally curious. While the singularity and a robot uprising are strictly science fiction tropes today, such AI progress makes them seem like genuine possibilities for the world of tomorrow.
But these fears aren’t necessarily enough reason to stop moving forward. We also have AIs that can diagnose cancer, identify suicidal behavior, and help stop sex trafficking.
The technology has the potential to save and improve lives globally, so while we must consider ways to make AI safe through future regulation, Musk’s words of warning are, ultimately, just one man’s opinion.
He even said as much himself to Rolling Stone: “I don’t have all the answers. Let me be really clear about that. I’m trying to figure out the set of actions I can take that are more likely to result in a good future. If you have suggestions in that regard, please tell me what they are.”