Saturday, February 27, 2016

Future Proofing?

I found some statements about AI in the latest Time magazine from experts in the field of artificial intelligence. Here is what they think:

Ray Kurzweil:
"Kurzweil believes human-level AI will be achieved by 2029 (or before). Given the technology's potential to help find cures for diseases and clean up the environment, he says, we have "a moral imperative to realize this promise while controlling the peril!"

Sam Altman:
"Altman, who is working on developing an open-source version of AI that would be available to all, believes future iterations could be designed to self-police, working toward benevolent ends only."

Michio Kaku:
"Kaku takes a longer, more pragmatic view, calling AI an end-of-the-century problem. He adds that even then, if humanity has come up with no better methods to constrain rogue AI robots, it'll be a matter of putting a chip in their brain to shut them off."

Bill Gates:
"The computer software magnate turned philanthropist views near-future low-intelligence AI as a positive labor replacement tool but worries that the super-intelligent systems coming a few decades down the road will be strong enough to be a concern."

Stephen Hawking:
"The famed theorist believes AI to be both miraculous and catastrophic, calling it "the biggest event in human history" but also potentially the last, unless we learn to avoid the risks."

I think Stephen Hawking's statement most clearly mirrors my own concerns after studying and working on computers, first at work and then as a hobby, since 1966. It might also be important to realize that Hawking likely would have died without his computer-assisted life. So, he is more integrally involved with AI in his life and survival than most people on earth, and has been for some time.

I learned to program in COBOL and Fortran in college in 1966, 1967, and 1968. I then taught myself the BASIC language, which many early home computers used, until Gates created MS-DOS, and then I learned to use that until Windows 95 came into being, and so on. I also taught all my older kids (now 42 to 45 years of age) BASIC and MS-DOS in the early to late 80s.

At that time most work was done with punch cards, especially in accounting fields of work and processing. There was no RAM and there were no microchips when I started working and playing with computers.

Nick Bostrom:
"Bostrom warns that AI could turn dark quickly and dispose of humans. The subsequent world would harbor "economic miracles and technological awesomeness, with nobody there to benefit," like a Disneyland without children (or any humans at all)."

Elon Musk:
"The outspoken engineer and inventor has famously called AI our "biggest existential threat," fretting that it may be tantamount to "summoning the demon.""

Musk's statement gives portent to an old concept in Christianity, which might be "the mark of the beast."

So, will AI help mankind, make mankind extinct, or both?

That is entirely up to all of us, isn't it?

Only by each of us taking personal responsibility for the survival of mankind will it survive this in any way, shape, or form.

And by the way, I am saying we are presently entering the Technological Singularity right now. The only question is: "When does it peak?"

Trump running for president and being this successful tells me that the Technological Singularity has already begun in earnest.

What is crazy to me, really, is that global climate change and the Technological Singularity might peak the same year!

"Which is not good news for humans in general."
