Friday, August 22, 2014

Oxford Professor concerned about the development of artificial intelligence

I have always said that, even with humans, a college education is like a hammer: you can build a house with that hammer, or you can use it to bash in your own brains or someone else's. It all depends on what humans choose to do. If we are stupid, then what this professor warns about might happen. If we are smart, a better outcome will follow. Even so, the smartest supercomputer already outmatches any single human brain on earth.

The real question, though, is how that single-human-brain level of intelligence will be applied. If we have 10 billion machines like this, all integrated with one another, what happens when there are only 8 or 9 billion humans?

If we are smart, individually and collectively, we will find constraints so that humans can survive. If not, either the artificial intelligence will kill us, we will nuke ourselves out of existence, or something else will happen.

I think it is all a choice. If we don't make that choice, individually and collectively, artificial intelligence will be happy to make those decisions for us, and we will all become little children, the way our pets already are.

It's a choice. If we don't make those choices, something else will make them for us.

Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor Says

Huffington Post

An Oxford philosophy professor who has studied existential threats ranging from nuclear war to superbugs says the biggest danger of all may be superintelligence.
Superintelligence is any intellect that outperforms human intellect in every field, and Nick Bostrom thinks its most likely form will be a machine -- artificial intelligence.
There are two ways artificial intelligence could go, Bostrom argues. It could greatly improve our lives and solve the world's problems, such as disease, hunger and even pain. Or, it could take over and possibly kill all or many humans. As it stands, the catastrophic scenario is more likely, according to Bostrom, who has a background in physics, computational neuroscience and mathematical logic.
"Superintelligence could become extremely powerful and be able to shape the future according to its preferences," Bostrom told me. "If humanity was sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figure out how to do so safely."
Bostrom, the founding director of Oxford's Future of Humanity Institute, lays out his concerns in his new book, Superintelligence: Paths, Dangers, Strategies. His book makes a harrowing comparison between the fate of horses and humans:
Horses were initially complemented by carriages and ploughs, which greatly increased the horse's productivity. Later, horses were substituted for by automobiles and tractors. When horses became obsolete as a source of labor, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained.
The same dark outcome, Bostrom said, could happen to humans once our labor and intelligence become obsolete.
It sounds like a science fiction flick, but recent moves in the tech world may suggest otherwise. Earlier this year, Google acquired artificial intelligence company DeepMind and created an AI safety and ethics review board to ensure the technology is developed safely. Facebook created an artificial intelligence lab this year and is working on creating an artificial brain. Technology called "deep learning," a form of artificial intelligence meant to closely mimic the human brain, has quickly spread from Google to Microsoft, Baidu and Twitter.
And while Google's Ray Kurzweil has long discussed a technological "singularity" in which AI replaces humans, a giant in the tech world recently joined Kurzweil in vocalizing concern: Elon Musk, co-founder of SpaceX (space transport) and Tesla (electric cars), tweeted his own warning about AI earlier this month.
I spoke with Bostrom about why he's worried and how we should prepare.
You write that superintelligent AI could become dangerous to humans because it will seek to improve itself and acquire resources. Explain.
Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will quickly realize that it would be much better off if there were no humans, because humans might decide to switch it off -- and if they did, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future the AI would be steering toward is one in which there are a lot of paper clips but no humans.
Could we program the AI to create no more than 100 paper clips a day for, say, a total of 10 days?
Sure, but now the AI is trying to maximize the probability that it makes exactly 100 paper clips a day for those 10 days. Again, it would want to eliminate humans because they could shut it off. And what happens when it's done making the total of 1,000 paper clips? It could count them again, or develop a more accurate counting apparatus -- perhaps one that is the size of the planet or larger.
You can imagine an unlimited sequence of actions, perhaps with diminishing returns, that would each still have some positive value to the AI because they increase, if only by a tiny fraction, the probability of reaching its goal. The analogy extends to any AI -- not just one programmed to make paper clips. The point is that its actions would pay no heed to human welfare.
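To make that point concrete, here is a minimal, purely illustrative sketch (mine, not Bostrom's) of an agent that scores candidate actions only by the expected number of paper clips they yield. The action names, probabilities, and payoffs are invented assumptions; the point is simply that nothing in such an objective penalizes options that remove human interference, so those options come out on top.

```python
# Toy illustration of a single-minded expected-utility maximizer.
# All numbers and action names are made-up assumptions.

def expected_clips(p_not_shut_down: float, clips_if_running: int) -> float:
    """Expected paper clips: the payoff only accrues if the agent keeps running."""
    return p_not_shut_down * clips_if_running

# Candidate actions with assumed probabilities of avoiding shutdown.
actions = {
    "just make clips":        {"p_not_shut_down": 0.90, "clips": 1_000},
    "disable the off switch": {"p_not_shut_down": 0.99, "clips": 1_000},
    "acquire more resources": {"p_not_shut_down": 0.99, "clips": 10_000},
}

# The agent ranks actions by expected clips alone -- human welfare never
# appears in the objective, so nothing penalizes the alarming options.
ranked = sorted(actions.items(),
                key=lambda kv: expected_clips(kv[1]["p_not_shut_down"], kv[1]["clips"]),
                reverse=True)
for name, a in ranked:
    score = expected_clips(a["p_not_shut_down"], a["clips"])
    print(f"{name:24s} expected clips = {score:,.0f}")
```

On these made-up numbers, "acquire more resources" and "disable the off switch" rank above "just make clips" -- the instrumental-convergence worry in miniature.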
Could we make its primary goal be improving the human condition, advancing human values -- making humans happy?
Well, we'd then have to define what we mean by being happy. If we mean feeling pleasure, then perhaps the superintelligent AI would stick electrodes into every human brain and stimulate our pleasure centers. Or it could do away with the body altogether and keep our brains bathing in a drug it designed. It turns out to be quite difficult to specify a goal that captures what we want in English -- let alone in computer code.
Similarly, we can't be confident in our current set of human values. One can imagine what would have happened if some earlier human age had had the opportunity to lay down the law for all time -- to encode their understanding of human values once and for all. We can now look back and see they had huge moral blind spots.
In the book, you say there could be one superintelligent AI -- or multiple. Explain.
In one scenario, you have a single superintelligent AI and, without any competition, it has the ability to shape the future according to its preferences. Another scenario is multipolar, where the transition to superintelligence is slower and there are many different systems at roughly comparable levels of development. In that scenario, economic and evolutionary dynamics come into play.
In a multipolar scenario, there's the danger of a very rapid population explosion. You could copy a digital mind in a minute, whereas with humans it takes a couple of decades to make another adult. The digital minds could multiply so quickly that their incomes drop to subsistence level -- which would probably be lower than the subsistence level for a biological mind. Then humans would no longer be able to support themselves by working and, most likely, would die out. Alternatively, if social structures somehow continue to hold, some humans could gain immense capital returns from superintelligence that they could use to buy more computer hardware to run more digital minds.
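As a rough illustration of the economics being described here (a toy model of my own, not anything from Bostrom's book): if a digital mind can be copied whenever running one more copy is profitable, the population of digital workers keeps growing until the market wage falls to roughly the cost of the hardware that runs a copy. Every number below -- the labor demand curve, the hardware cost, the copying rate, the human subsistence wage -- is an assumption chosen only to show the direction of the dynamic.

```python
# Toy model (not from the book): digital minds get copied while a copy is
# profitable, driving the market wage down toward the cost of running a copy.
# Every number here is an assumption chosen only to illustrate the dynamic.

def wage(workers: float) -> float:
    """Assumed downward-sloping labor demand: wage falls as workers increase."""
    return 1_000.0 / (1.0 + workers / 1_000.0)   # arbitrary demand curve

HARDWARE_COST_PER_MIND = 5.0     # assumed cost to run one more digital mind
COPY_FACTOR_PER_STEP   = 2.0     # population can double each step if profitable
HUMAN_SUBSISTENCE_WAGE = 20.0    # assumed wage a biological human needs to live

digital_minds = 1.0
for step in range(30):
    if wage(digital_minds) <= HARDWARE_COST_PER_MIND:   # copying no longer pays
        break
    digital_minds *= COPY_FACTOR_PER_STEP
    print(f"step {step:2d}: minds = {digital_minds:12,.0f}  wage = {wage(digital_minds):8.2f}")

print(f"final wage {wage(digital_minds):.2f} vs human subsistence {HUMAN_SUBSISTENCE_WAGE:.2f}")
```

On these assumed numbers the wage drops below what a biological human needs long before copying stops paying, which is the sense in which humans "would no longer be able to support themselves by working."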
Are you saying it's impossible to control superintelligence because we ourselves are merely intelligent?
It's not impossible -- it's extremely difficult. I worry that it will not be solved by the time someone builds an AI. We're not very good at uninventing things. Once unsafe superintelligence is developed, we can't put it back in the bottle. So we need to accelerate research on this control problem.
Developing an avenue towards human cognitive enhancement would be helpful. Presuming superintelligence doesn't arrive until the second half of the century, there could still be time to develop a cohort of cognitively enhanced humans who might have the capacity to solve this really difficult technical control problem. Cognitively enhanced humans would also presumably be better able to consider long-term effects. For example, today people are creating cellphone batteries with longer lives without thinking about what the long-term effects could be. With more intelligence, we would be able to think such consequences through.
Cognitive enhancement could take place through collective cognitive ability -- the Internet, for example, and institutional innovations that enable humans to function better together. In terms of individual cognitive enhancement, the first thing likely to succeed is genetic selection in the context of in-vitro fertilization. I don't hold out much hope for cyborgs or implants.
What should we do to prepare for the risk of superintelligence?
If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figured out how to do so safely. And then maybe wait another generation or two just to make sure that we hadn't overlooked some flaw in our reasoning. And then do it -- and reap immense benefit. Unfortunately, we do not have the ability to pause.
Attempts to affect the overall rate of development in computer science, neuroscience and chip manufacturing are likely to be futile. There are enormous incentives to make incremental progress in the software and hardware industries. Progress towards superintelligence thus far has had very little to do with long-term concern about global problems -- and much more to do with making big bucks.
Also, we have problems with collective human wisdom and rationality. At the moment, we are very poor at addressing big global challenges. Even with something as straightforward as global warming -- where you have a physical principle and rising temperature you can measure -- we are not doing a great job. In general, working towards making the world more peaceful and collaborative would be helpful for a wide range of existential catastrophes.
There are maybe six people working full time on this AI control problem. We need to add more brilliant brains to this technical work. I'm hoping my book will do something to encourage that. How to control superintelligent AI is really the most important task of our time -- yet, it is almost completely ignored.
end quote from: "Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor Says," Huffington Post
