Intuitive fred888

To the best of my ability, I write about my experience of the Universe: Past, Present and Future

Wednesday, January 6, 2016

Intelligence explosion

This article is also related to an earlier post:

2000 to 2100: The Era of the Technological Singula...

Intelligence explosion

From Wikipedia, the free encyclopedia
An intelligence explosion is the expected outcome of the hypothetically forthcoming technological singularity, that is, the result of man building artificial general intelligence (strong AI). Strong AI would be capable of recursive self-improvement leading to the emergence of superintelligence, the limits of which are unknown.
The notion of an "intelligence explosion" was first described by Good (1965), who speculated on the effects of superhuman machines, should they ever be invented:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.[1] However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is more intelligent than humanity.[2] If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of. It could then design an even more capable machine, or re-write its own software to become even more intelligent. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.

Contents

  • 1 Plausibility
    • 1.1 Speed improvements
    • 1.2 Intelligence improvements
  • 2 Impact
    • 2.1 Superintelligence
    • 2.2 Existential risk
  • 3 See also
  • 4 References
  • 5 Bibliography
  • 6 External links

Plausibility

Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The means speculated to produce intelligence augmentation are numerous, and include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading. The existence of multiple paths to an intelligence explosion makes a singularity more likely; for a singularity to not occur they would all have to fail.[3]
Hanson (1998) is skeptical of human intelligence augmentation, writing that once one has exhausted the "low-hanging fruit" of easy methods for increasing human intelligence, further improvements will become increasingly difficult to find. Despite the numerous speculated means for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option for organizations trying to advance the singularity.[citation needed]
Whether or not an intelligence explosion occurs depends on three factors.[4] The first accelerating factor is the new intelligence enhancements made possible by each previous improvement. Contrariwise, as the intelligences become more advanced, further advances will become more and more complicated, possibly overcoming the advantage of increased intelligence. Each improvement must be able to beget at least one more improvement, on average, for the singularity to continue. Finally, the laws of physics will eventually prevent any further improvements.
There are two logically independent, but mutually reinforcing, accelerating effects: increases in the speed of computation, and improvements to the algorithms used.[5] The former is predicted by Moore's Law and the forecast improvements in hardware,[6] and is comparatively similar to previous technological advances. On the other hand, most AI researchers believe that software is more important than hardware.[citation needed]
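The criticality condition above (each improvement must, on average, beget at least one more improvement) can be made concrete with a toy model. The sketch below is purely illustrative and not from the article or its sources; its parameters (gain, difficulty) are made up, and it simply pits the payoff of greater intelligence against the rising difficulty of finding further improvements.

    # Toy model (illustrative only): recursive self-improvement runs away or
    # fizzles depending on whether each improvement begets at least one more.
    def toy_self_improvement(initial=1.0, gain=0.5, difficulty=1.1, rounds=30):
        """Return intelligence levels over successive improvement rounds.

        gain       -- payoff of each improvement, scaled by current intelligence
        difficulty -- factor by which later improvements get harder (assumed values)
        """
        levels, effort = [initial], 1.0
        for _ in range(rounds):
            current = levels[-1]
            levels.append(current + gain * current / effort)  # smarter -> bigger gains
            effort *= difficulty                              # ...but each round is harder
        return levels

    runaway = toy_self_improvement(difficulty=1.0)  # difficulty never rises: explosion
    fizzle = toy_self_improvement(difficulty=1.6)   # difficulty outpaces gains: plateau
    print([round(x, 1) for x in runaway[::5]])
    print([round(x, 1) for x in fizzle[::5]])

In the first run the levels grow without bound; in the second they level off, which matches the article's point that the explosion continues only while, on average, each improvement enables at least one more.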

Speed improvements

The first is improvements to the speed at which minds can be run. Whether human or AI, better hardware increases the rate of future hardware improvements. Oversimplified,[7] Moore's Law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months; the third roughly four external months, then two, and so on towards a speed singularity.[8] An upper limit on speed may eventually be reached, although it is unclear how high this would be. Hawkins (2008), responding to Good, argued that the upper limit is relatively low:
Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity.
If, on the other hand, the upper limit were far higher than current human levels of intelligence, the effects of the singularity would be great enough to be indistinguishable (to humans) from a singularity without an upper limit. For example, if the speed of thought could be increased a million-fold, a subjective year would pass in roughly 30 physical seconds.[3]
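The arithmetic behind the two figures above can be checked directly. The short sketch below only restates the paragraph's own numbers (an 18-month first doubling and a million-fold speed-up); it is not part of the original article.

    # External time to successive doublings: 18 + 9 + 4.5 + ... months, a
    # geometric series that converges to 36 months rather than growing forever.
    external_months = sum(18 / 2 ** n for n in range(50))
    print(round(external_months, 2))         # ~36.0 months in the limit

    # At a million-fold speed-up, a subjective year passes in roughly 30 physical seconds.
    seconds_per_year = 365.25 * 24 * 3600    # about 31.6 million seconds
    print(round(seconds_per_year / 1e6, 1))  # ~31.6 seconds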
It is difficult to directly compare silicon-based hardware with neurons. But Berglas (2008) notes that computer speech recognition is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain.
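Berglas's point is an order-of-magnitude argument: if speech recognition really needs hardware comparable to about 0.01% of the brain, the remaining gap is roughly four orders of magnitude. The one-line calculation below just spells that arithmetic out; the 0.01% figure is taken from the paragraph, not independently verified.

    import math
    fraction_of_brain = 0.0001                  # 0.01% of the brain's volume
    print(math.log10(1 / fraction_of_brain))    # -> 4.0, i.e. "a few orders of magnitude"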

Intelligence improvements

Some intelligence technologies, like seed AI, may also have the potential to make themselves more intelligent, not just faster, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on.
This mechanism for an intelligence explosion differs from an increase in speed in two ways. First, it does not require external action: machines designing faster hardware still require humans to create the improved hardware, or to program factories appropriately, whereas an AI rewriting its own source code could do so while contained in an AI box.
Second, as with Vernor Vinge’s conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual improvements in intelligence would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life had been a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again.[9]
There are substantial dangers associated with an intelligence explosion singularity. First, the goal structure of the AI may not be invariant under self-improvement, potentially causing the AI to optimise for something other than was intended.[10][11] Secondly, AIs could compete for the scarce resources mankind uses to survive.[12][13]
Even if not actively malicious, there is no reason to think that AIs would actively promote human goals unless they were programmed to do so; if they were not, they might use the resources currently used to support mankind to promote their own goals, causing human extinction.[14][15][16]
Carl Shulman and Anders Sandberg suggest that intelligence improvements (i.e., software algorithms) may be the limiting factor for a singularity because whereas hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI was developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.[17] An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang."[18]

Impact

Dramatic changes in the rate of economic growth have occurred in the past because of some technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world’s economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.[19]
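To put Hanson's comparison in more familiar terms, the doubling times quoted above can be converted into annual growth rates with standard compound-growth arithmetic; the sketch below does only that conversion, using the paragraph's own figures.

    # Annual growth rate implied by a doubling time of t years: 2**(1/t) - 1.
    def annual_growth(doubling_time_years):
        return 2 ** (1 / doubling_time_years) - 1

    for label, years in [("agricultural era (900-year doubling)", 900),
                         ("industrial era (15-year doubling)", 15),
                         ("quarterly doubling", 0.25),
                         ("weekly doubling", 7 / 365.25)]:
        print(f"{label}: about {annual_growth(years):,.1%} per year")

Even the industrial-era figure is under 5% per year; quarterly or weekly doubling implies growth rates many orders of magnitude beyond anything in recorded economic history, which is the force of Hanson's claim.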

Superintelligence

Further information: Superintelligence § Superintelligence scenarios
A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent.
Technology forecasters and researchers disagree about when human intelligence is likely to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Existential risk

Main article: Existential risk from advanced artificial intelligence
Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators (such as Nick Bostrom's whimsical example of an AI which was originally programmed with the goal of manufacturing paper clips, so that when it achieves superintelligence it decides to convert the entire planet into a paper clip manufacturing facility).[20][21][22] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[23] AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[12][24] and humans would be powerless to stop them.[25] Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.[16]
Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:
When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.
A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[26]
Eliezer Yudkowsky proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that the first real AI would have a head start on self-improvement and, if friendly, could prevent unfriendly AIs from developing, as well as providing enormous benefits to mankind.[15]
Bill Hibbard (2014) proposes an AI design that avoids several dangers including self-delusion,[27] unintended instrumental actions,[10][28] and corruption of the reward generator.[28] He also discusses social impacts of AI[29] and testing AI.[30] His 2001 book Super-Intelligent Machines advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator.
One hypothetical approach towards attempting to control an artificial intelligence is an AI box, where the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world. However, a sufficiently intelligent AI may simply be able to escape by outsmarting its less intelligent human captors.[31][32][33]
Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believes that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." Hawking believes more should be done to prepare for the singularity:[34]
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI.

See also

  • Accelerating change
  • Artificial consciousness
  • Flynn effect
  • Outline of transhumanism

References

  1. Ehrlich, Paul. The Dominant Animal: Human Evolution and the Environment.
  2. "Superbrains born of silicon will change everything."
  3. "What is the Singularity? | Singularity Institute for Artificial Intelligence". Singinst.org. Retrieved 2011-09-09.
  4. David Chalmers, John Locke Lecture, 10 May, Exam Schools, Oxford: a philosophical analysis of the possibility of a technological singularity or "intelligence explosion" resulting from recursively self-improving AI.
  5. Chalmers, David J. "The Singularity: A Philosophical Analysis".
  6. "ITRS" (PDF). Retrieved 2011-09-09.
  7. Siracusa, John (2009-08-31). "Mac OS X 10.6 Snow Leopard: the Ars Technica review". Arstechnica.com. Retrieved 2011-09-09.
  8. Yudkowsky, Eliezer (1996). "Staring into the Singularity".
  9. Yudkowsky, Eliezer S. "Power of Intelligence". Yudkowsky. Retrieved 2011-09-09.
  10. Omohundro, Stephen M. "The Basic AI Drives". Artificial General Intelligence, 2008: Proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.
  11. "Artificial General Intelligence: Now Is the Time". KurzweilAI. Retrieved 2011-09-09.
  12. Omohundro, Stephen M. "The Nature of Self-Improving Artificial Intelligence". Self-Aware Systems, 21 January 2008. Retrieved 7 January 2010.
  13. Barrat, James (2013). Chapter 6, "Four Basic Drives". Our Final Invention (first edition). New York: St. Martin's Press. pp. 78–98. ISBN 978-0312622374.
  14. "Max More and Ray Kurzweil on the Singularity". KurzweilAI. Retrieved 2011-09-09.
  15. "Concise Summary | Singularity Institute for Artificial Intelligence". Singinst.org. Retrieved 2011-09-09.
  16. Bostrom, Nick (2004). "The Future of Human Evolution". Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy, pp. 339–371. Ria University Press.
  17. Shulman, Carl; Sandberg, Anders (2010). "Implications of a Software-Limited Singularity" (PDF). In Mainzer, Klaus (ed.), ECAP10: VIII European Conference on Computing and Philosophy. Retrieved 17 May 2014.
  18. Muehlhauser, Luke; Salamon, Anna (2012). "Intelligence Explosion: Evidence and Import" (PDF). In Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart (eds.), Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer.
  19. Hanson, Robin. "Economics of the Singularity". IEEE Spectrum Special Report: The Singularity. Retrieved 2008-09-11. See also "Long-Term Growth As A Sequence of Exponential Modes".
  20. Bostrom, Nick (2003). "Ethical Issues in Advanced Artificial Intelligence". Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, pp. 12–17.
  21. Yudkowsky, Eliezer. "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). Draft for a publication in Global Catastrophic Risk, August 31, 2006. Retrieved July 18, 2011.
  22. Hay, Nick. "The Stamp Collecting Device".
  23. Sandberg, Anders. "Why we should fear the Paperclipper". Andart (blog), 2011-02-14.
  24. Omohundro, Stephen M. "The Basic AI Drives". Artificial General Intelligence, 2008: Proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.
  25. de Garis, Hugo. "The Coming Artilect War". Forbes.com, 22 June 2009.
  26. Yudkowsky, Eliezer S. (May 2004). "Coherent Extrapolated Volition".
  27. Hibbard, Bill (2012). "Model-Based Utility Functions". Journal of Artificial General Intelligence 3 (1). arXiv:1111.3934. Bibcode:2012JAGI....3....1H. doi:10.2478/v10229-011-0013-5.
  28. Hibbard, Bill (2012). "Avoiding Unintended AI Behaviors". Proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle. Winner of the Machine Intelligence Research Institute's 2012 Turing Prize for the Best AGI Safety Paper.
  29. Hibbard, Bill (2008). "The Technology of Mind and a New Social Contract". Journal of Evolution and Technology 17.
  30. Hibbard, Bill (2012). "Decision Support for Safe AI Design". Proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle.
  31. Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). In Bostrom, Nick; Cirkovic, Milan (eds.), Global Catastrophic Risks. Oxford University Press: 303. Bibcode:2008gcr..book..303Y. ISBN 978-0-19-857050-9.
  32. Berglas, Anthony. "Artificial Intelligence Will Kill Our Grandchildren (Singularity)".
  33. Chalmers, David J. "The Singularity: A Philosophical Analysis".
  34. Hawking, Stephen (1 May 2014). "Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'". The Independent. Retrieved May 5, 2014.

Bibliography

  • Good, I. J. (1965). "Speculations Concerning the First Ultraintelligent Machine". In Franz L. Alt and Morris Rubinoff (eds.), Advances in Computers 6 (Academic Press): 31–88. doi:10.1016/S0065-2458(08)60418-0. ISBN 9780120121069. Archived from the original on 2001-05-27; retrieved 2007-08-07.
  • Hanson, Robin (1998). "Some Skepticism". Archived from the original on 2009-08-28; retrieved 2009-06-19.
  • Berglas, Anthony (2008). "Artificial Intelligence will Kill our Grandchildren". Retrieved 2008-06-13.
  • Bostrom, Nick (2002). "Existential Risks". Journal of Evolution and Technology 9. Retrieved 2007-08-07.
  • Hibbard, Bill (5 November 2014). "Ethical Artificial Intelligence". arXiv:1411.1373 [cs.AI].

External links

  • Why an Intelligence Explosion is Probable
end quote from: Intelligence explosion

Posted by intuitivefred888 at 12:31 AM
Labels: Intelligence explosion
