Saturday, March 31, 2018

Many drivers of "partially self-driving vehicles" don't seem to fully understand their vehicles

What I mean by this is that they don't seem to understand where the car's responsibility ends and theirs begins as a driver. In other words: when responsibility is shared between a driver and an artificial intelligence, there are liable to be serious problems with some drivers, and some fatalities caused by this confusion, or better said, this "misunderstanding."

Because in the end most people can be either drivers or passengers, one or the other; trying to be both will often result in the death of the driver and the crash of the car.

So there will likely be fewer fatalities in "fully self-driving vehicles" than in "partially self-driving vehicles."

There is a law governing technology: "Whatever can go wrong will go wrong at some point."

What I mean by this is that most people don't understand just how fragile the software-hardware interface actually is in so-called "partially self-driving vehicles," let alone fully self-driving ones.

In other words: "Would you trust your life and your survival to your MacBook Pro laptop, your PC desktop, or your iPhone or Android, or whatever?"

Because when you trust your life to a partially or fully self-driving vehicle, that's exactly what you are doing!

How many times has your iPhone or MacBook Pro or PC failed you in a moment of need? 10, 50, 100, 1,000? Now imagine that failure coming from your partially or fully self-driving car, or one you rented, or a self-driving taxi.

Now multiply the complexity of the software and hardware by a factor of 20, 100, or even 1,000 or more (compared to your iPhone, MacBook, PC, or tablet) and watch what happens in a partially or fully self-driving vehicle.

You could wind up with a 1,000-times-greater chance of dying than in a fully manual vehicle, under certain conditions where some type of minor or major failure occurs.
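
To make that concrete, here is a rough back-of-the-envelope sketch. The numbers are my own, purely illustrative, and it assumes each component fails independently, which real systems don't guarantee:

```python
# Purely illustrative numbers (my assumption, not real vehicle data):
# if each software/hardware component fails independently with a small
# probability per trip, adding components multiplies the chance that
# at least one of them fails on that trip.

def chance_of_any_failure(per_component: float, components: int) -> float:
    """Probability that at least one of `components` independent parts fails."""
    return 1.0 - (1.0 - per_component) ** components

phone_scale = chance_of_any_failure(1e-6, 100)      # phone-sized system
car_scale = chance_of_any_failure(1e-6, 100_000)    # 1,000x more components

print(f"phone-scale system: {phone_scale:.4%}")     # ~0.0100%
print(f"car-scale system:   {car_scale:.4%}")       # ~9.5%
print(f"ratio: {car_scale / phone_scale:.0f}x")     # ~950x
```

Under these toy assumptions the chance of at least one failure scales roughly with the component count, which is where a "1,000 times" figure can come from.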

Then you bring in the Byzantine fault problem, where the systems cannot even agree on what is working and what is not, in either software or hardware, and watch what happens.

And in a partially self-driving vehicle the Byzantine fault problem also has to cover the driver, if you are serious about properly flow-charting and designing the software for partially self-driving vehicles, as the voting sketch below illustrates.
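
Here is a minimal sketch of that flow chart (my own illustration, not any real vehicle's code), treating the human driver as just one more "voter" that may be wrong or absent:

```python
from collections import Counter
from typing import Optional

# My own illustration, not any real vehicle's code. Classic Byzantine-style
# voting needs at least 3f + 1 voters to tolerate f arbitrarily faulty ones,
# and a decision requires a quorum of 2f + 1 matching votes.

def majority_decision(votes: list, faults_tolerated: int) -> Optional[str]:
    """Return the agreed action if a Byzantine quorum exists, else None."""
    if len(votes) < 3 * faults_tolerated + 1:
        return None  # not enough redundancy for this fault budget
    action, count = Counter(votes).most_common(1)[0]
    return action if count >= 2 * faults_tolerated + 1 else None

# Three redundant sensors plus the human driver as a fourth, equally
# unreliable "voter" in the flow chart:
sensors = ["brake", "brake", "accelerate"]  # one faulty sensor
print(majority_decision(sensors + ["brake"], 1))       # driver agrees -> "brake"
print(majority_decision(sensors + ["accelerate"], 1))  # driver disagrees -> None
```

The second case is the dangerous one: once the driver is counted as a voter, a single confused human plus a single faulty sensor is already enough to leave the system with no agreed action at all.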

Then how do you deal with emergency procedures for malfunctions that could be life-threatening, if you don't give the passengers a brake handle or a steering wheel to take over in an emergency?
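
As a sketch of what such an emergency procedure might look like (my own toy design, with a made-up timeout and state names), a watchdog that only hands control to the human when manual controls actually exist:

```python
import time

# Toy watchdog sketch; the timeout and state names are my assumptions.
# If the autonomy stack stops reporting healthy heartbeats, hand control
# to the human when a wheel/brake exists; otherwise attempt a stop.

HEARTBEAT_TIMEOUT_S = 0.5  # assumed deadline, not from any real standard

def fallback_state(last_heartbeat: float, has_manual_controls: bool) -> str:
    if time.monotonic() - last_heartbeat < HEARTBEAT_TIMEOUT_S:
        return "AUTONOMY_OK"
    if has_manual_controls:
        return "HAND_CONTROL_TO_HUMAN"  # the partially self-driving case
    return "MINIMAL_RISK_STOP"          # fully self-driving: brake, pull over

t0 = time.monotonic()
print(fallback_state(t0, has_manual_controls=True))        # -> "AUTONOMY_OK"
print(fallback_state(t0 - 2.0, has_manual_controls=True))  # stale -> hand over
```

Note the design problem baked into the middle branch: handing control to a human in half a second assumes the human is attentive, which is exactly the assumption partially self-driving vehicles keep failing on.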

The problem as I see it becomes this: riding in a passenger plane, already configured with many non-manual, electronically enhanced systems, is already about six times safer than driving a car in most situations. But then you take this down to street level, off the freeway, in a partially self-driving vehicle, where the artificial intelligence can encounter an almost infinite number of variables in a split second, without warning, that the software is not designed to handle. You are going to have some very strange fatal or near-fatal accidents because the artificial intelligence was never designed or programmed to deal with those previously unknown events.
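
A toy sketch of why that is hard (my own illustration, with hypothetical event names): software can only branch on cases someone anticipated, so a genuinely novel street-level event falls through to whatever default was chosen, which may be exactly wrong:

```python
# Hypothetical sketch: a planner that only knows a fixed set of cases.
# An event never seen in design or training has no correct branch here,
# so it falls through to a default chosen in advance.

KNOWN_EVENTS = {
    "pedestrian_in_crosswalk": "stop",
    "red_light": "stop",
    "merging_vehicle": "yield",
}

def plan(event: str) -> str:
    return KNOWN_EVENTS.get(event, "continue")  # dangerous default

print(plan("red_light"))                  # -> "stop"
print(plan("mattress_falling_off_truck")) # unknown event -> "continue"
```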



Byzantine fault tolerance - Wikipedia

https://en.wikipedia.org/wiki/Byzantine_fault_tolerance
Byzantine fault tolerance (BFT) is the dependability of a fault-tolerant computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. In a "Byzantine failure", a component such as a server can inconsistently appear both failed and functioning to failure-detection systems, presenting different symptoms to different observers.
