That's entirely the wrong question to be asking.
The right question is:
"Under what conditions is artificial intelligence likely to fail?"
Why?
Because artificial intelligence has completely different strengths and weaknesses than humans do.
Humans riding in self-driving planes, cars, or trucks need to know what those weaknesses are, but most won't, because they haven't had a technical education. And manufacturers have little desire to fully inform people of those weaknesses.
There are two main weaknesses: software and hardware.
Software, because a chip or RAM can be vulnerable to electrical interference, which can create glitches in the software over time.
The software can also degrade, or it can be hacked from outside the car or truck.
The hardware can fail if dust gets into the wrong components.
There can also be freak accidents, some of them fatal, caused by unexpected interactions between software and hardware.
And hardware can fail for almost any reason you can think of.
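To make the electrical-interference point concrete, here is a minimal Python sketch of my own (purely illustrative, not code from any real vehicle): it shows how a single flipped bit in memory can silently change a sensor value, and how a simple checksum lets software at least notice the glitch instead of acting on it. Real vehicles use ECC memory and redundant computers rather than anything this simple.

```python
# Illustrative sketch only: a single bit flip (e.g. from electrical
# interference) corrupting a packed sensor reading, caught by a CRC32 check.
import struct
import zlib

def encode_reading(speed_mps: float) -> bytes:
    """Pack a speed reading and append a CRC32 checksum."""
    payload = struct.pack("<f", speed_mps)
    return payload + struct.pack("<I", zlib.crc32(payload))

def decode_reading(frame: bytes) -> float:
    """Unpack a reading, raising an error if the checksum no longer matches."""
    payload = frame[:4]
    stored_crc = struct.unpack("<I", frame[4:])[0]
    if zlib.crc32(payload) != stored_crc:
        raise ValueError("corrupted reading detected")
    return struct.unpack("<f", payload)[0]

frame = bytearray(encode_reading(27.0))   # about 97 km/h
frame[2] ^= 0b00010000                    # simulate one bit flipped in memory
try:
    decode_reading(bytes(frame))
except ValueError as err:
    print(err)                            # the glitch is detected, not trusted
```

Without the checksum, the flipped bit would simply pass through as a different, plausible-looking speed, which is exactly the kind of silent glitch that worries me.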
So, are self-driving cars, trucks, and planes safe?
What do you think?