Wednesday, February 3, 2016

AI: Recursive Self-Improvement, or spontaneous evolution by and from robotic and computer entities

One could say this is already happening when you interact with programs like Siri on your iPhone or another smartphone. What do the device and its software do with all the information you share with them? This is something to consider. It is why I don't use Siri: it learns too much about you if you interact with it as a friend. It is not a friend in its present form. Such a device would only be a friend if you personally had complete control over what it then does with that information, and you and I don't have that control at present.

Recursive self-improvement

From Wikipedia, the free encyclopedia
Recursive self-improvement is the speculative ability of a strong artificial intelligence computer program to program its own software, recursively.
This is sometimes also referred to as Seed AI because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve the design of its constituent software and hardware. Having undergone these improvements, it would then be better able to find ways of optimizing its structure and improving its abilities further. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.
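As a rough sketch of the compounding loop described above, assuming a constant multiplicative gain per generation (the gain, the starting level, and the "human level" threshold are all invented parameters, not claims about any real system):

    # Toy sketch of recursive self-improvement (invented numbers, not a real system).
    # Each generation uses its current capability to design a slightly better successor.

    def next_generation(capability, gain=1.5):
        return capability * gain

    capability = 1.0        # arbitrary units: the seed AI's starting level
    human_level = 10.0      # hypothetical threshold for "far surpasses human cognition"
    generation = 0

    while capability < human_level:
        capability = next_generation(capability)
        generation += 1
        print(f"generation {generation}: capability {capability:.2f}")

    # A constant multiplicative gain gives exponential growth, which is the
    # intuition behind the "intelligence explosion" scenario described above.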

History

The notion of an "intelligence explosion" was first described thus by Good (1965), who speculated on the effects of superhuman machines:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Compilers

A limited example is that programming language compilers are often used to compile themselves. As compilers become more optimized, they can recompile themselves and so become faster at compiling.
However, they cannot then produce faster code, so this can provide only a very limited, one-step self-improvement. Existing optimizers can transform code into a functionally equivalent, more efficient form, but they cannot identify the intent of an algorithm and rewrite it for more effective results. The optimized version of a given compiler may compile faster, but it cannot compile better. That is, an optimized version of a compiler will never spot new optimization tricks that earlier versions failed to see, nor innovate new ways of improving its own program.
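A minimal sketch of this one-step limit, using a made-up toy model rather than any real compiler: a recompiled compiler binary runs faster because it was built by an optimizing builder, but its repertoire of optimizations comes only from its unchanged source, so a further recompilation gains nothing.

    # Toy model of a compiler compiling itself (not any real compiler).
    # A "compiler" is just the set of optimization tricks baked into its source
    # plus the speed of the particular binary that was used to build it.

    from dataclasses import dataclass

    @dataclass
    class Compiler:
        known_optimizations: frozenset   # tricks encoded in its source code
        build_speed: int                 # how fast this binary compiles things

    def compile_compiler(source_optimizations, builder):
        # The new binary compiles faster when the *builder* knows more tricks...
        speed = 1 + len(builder.known_optimizations)
        # ...but its own repertoire comes only from its (unchanged) source code.
        return Compiler(frozenset(source_optimizations), speed)

    source = {"inline", "constant-fold", "dead-code-elimination"}
    gen0 = Compiler(frozenset(), build_speed=1)   # naive bootstrap binary
    gen1 = compile_compiler(source, gen0)         # knows the tricks, still built slowly
    gen2 = compile_compiler(source, gen1)         # faster binary: built by an optimizer
    gen3 = compile_compiler(source, gen2)         # fixed point: nothing new appears

    print(gen1.build_speed, gen2.build_speed, gen3.build_speed)
    # prints: 1 4 4  (one step of speedup, then a fixed point)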
Seed AI must be able to understand the purpose behind the various elements of its design, and design entirely new modules that will make it genuinely more intelligent and more effective in fulfilling its purpose.

Hard vs. soft takeoff

A "hard takeoff" refers to the scenario in which a single AI project rapidly self-improves, on a timescale of a few years or even days. A "soft takeoff" refers to a longer-term process of integrating gradual AI improvements into society more broadly.[1] Eliezer Yudkowsky and Robin Hanson have extensively debated these positions, with Yudkowsky arguing for the realistic possibility of hard takeoff, while Hanson believes its probability is less than 1%.[2]
Ramez Naam argues against a hard takeoff by pointing out that we already see recursive self-improvement by superintelligences, such as corporations. For instance, Intel has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to.. design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore's law.[3] Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1."[4] William Hertling replies that while he agrees there won't be a hard takeoff, he expects that Moore's law and the ability to copy computers may still thoroughly change the world sooner than most people are expecting. He suggests that when we postpone the predicted arrival date of these changes, "we're less likely as a society to examine both AI progress and take steps to reduce the risks of AGI."[5]
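A back-of-the-envelope sketch of Naam's superlinear-difficulty point (the cost exponent and the assumption that designer speed scales with its level are invented for illustration, not taken from his articles): if the work needed to reach the next level grows faster than the designer's speed, each successive self-improvement takes longer, and the loop decelerates instead of exploding.

    # Back-of-the-envelope sketch of the superlinear-difficulty argument
    # (the exponent k and the speed model are invented for illustration).

    def time_to_next_level(level, k):
        design_work = level ** k     # hypothetical cost of reaching level + 1
        designer_speed = level       # assume smarter minds work proportionally faster
        return design_work / designer_speed

    for k in (1.0, 2.0):             # k = 1: linear difficulty; k = 2: superlinear
        elapsed = sum(time_to_next_level(level, k) for level in range(1, 11))
        print(f"k = {k}: reaching level 11 takes {elapsed:.1f} time units")

    # With k = 1 every step costs the same (10.0 units in total), so capability
    # climbs steadily.  With k = 2 each step takes longer than the last (55.0
    # units in total), so the loop slows down rather than exploding.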
J. Storrs Hall believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.[6]
Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth.[7] The AI's talents might inspire companies and governments to disperse its software throughout society.[7] The AI might buy out a country like Azerbaijan and use that as its base to build power and improve its algorithms.[7] Goertzel is skeptical of a very hard, 5-minute takeoff but thinks a takeoff from human to superhuman level on the order of 5 years is reasonable. He calls this a "semihard takeoff".[7] Elsewhere Goertzel has argued that his OpenCog architecture "very likely possesses the needed properties to enable hard takeoff."[8]
In a 1993 article, Vernor Vinge discussed the concept of a "singularity", i.e., a hard takeoff:[9]
When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities -- on a still-shorter time scale.
Vinge notes that humans can "solve many problems thousands of times faster than natural selection" because we can perform quick simulations of the world in our heads.[9] Robin Hanson collected 13 replies to Vinge, some agreeing with his singularity notion and others disputing it.[10]
In one of those replies, Max More argues that if there were only a few superfast human-level AIs, they wouldn't radically change the world, because they would still depend on other people to get things done and would still have human cognitive constraints.[11] Even if all superfast AIs worked on intelligence augmentation, it's not clear why they would be better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase.[11] More also argues that a superintelligence would not transform the world overnight, because it would need to engage with existing, slow human systems to accomplish physical impacts on the world.[11] "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."[11]

Organizations

Creating seed AI is the goal of several organizations. The Machine Intelligence Research Institute is the most prominent of those explicitly working to create seed AI[12] and ensure its safety.[13] Others include the Artificial General Intelligence Research Institute, creator of the Novamente AI engine, Adaptive Artificial Intelligence Incorporated, Texai.org, and Consolidated Robotics.

References


  • "AI takeoff". Retrieved 16 May 2014.


  • Hanson, Robin; Yudkowsky, Eliezer (2013). The Hanson-Yudkowsky AI-Foom Debate. Machine Intelligence Research Institute. Retrieved 16 May 2014.

  • Naam, Ramez (2014). "The Singularity Is Further Than It Appears". Retrieved 16 May 2014.

  • Naam, Ramez (2014). "Why AIs Won't Ascend in the Blink of an Eye - Some Math". Retrieved 16 May 2014.

  • Hertling, William (2014). "The Singularity is Still Closer than it Appears". Retrieved 16 May 2014.

  • Hall, J. Storrs (2008). "Engineering Utopia" (PDF). Artificial General Intelligence, 2008: Proceedings of the First AGI Conference: 460–467. Retrieved 16 May 2014.

  • Goertzel, Ben (26 Sep 2014). "Superintelligence — Semi-hard Takeoff Scenarios". h+ Magazine. Retrieved 25 October 2014.

  • Goertzel, Ben (13 Jan 2011). "The Hard Takeoff Hypothesis". The Multiverse According to Ben. Retrieved 25 October 2014.

  • Vinge, Vernor (1993). "The Coming Technological Singularity: How to Survive in the Post-Human Era". Retrieved 10 November 2014.

  • Hanson, Robin. "A Critical Discussion of Vinge's Singularity Concept". Retrieved 10 November 2014.

  • More, Max. "Singularity Meets Economy". Retrieved 10 November 2014.

  • "Intelligence Explosion FAQ - Machine Intelligence Research Institute". Machine Intelligence Research Institute. https://plus.google.com/113555802240487771052. Retrieved 2015-11-01. External link in |publisher= (help)
end quote from:
https://en.wikipedia.org/wiki/Recursive_self-improvement
