What do I mean by this?
What I mean is that the more complexity there is in the design of anything, the more likely it is to fail at some point.
So, for example, you are likely going to have fewer problems in, say, a 1960s or 1970s VW Bug with an idiot-proof manual transmission than you would with all the automatic electronic functions of most cars today. Why?
As designs grow more complex, whether for comfort, for self-driving features, or for "self-parking" features, you also increase the capacity for failure.
If you are logical, this just makes sense. The more complex the systems in any vehicle, the more likely you are to have problems with some of them. And the simpler the systems in any vehicle (given that all the vehicles are properly designed in the first place), the fewer problems that vehicle is likely to have compared with a more complex one.
So, because of this, a luxury car is likely, over time, to have more "small problems" than a less expensive, more simply engineered car.
The same is true of self-driving and partially self-driving vehicles, given the law that "whatever can go wrong will eventually go wrong in any given vehicle in time."
Which is why many people change cars after 3, 5, or 10 years or so, to keep this law from taking effect.
However, the other way to look at it is that after 5 years most vehicles are paid off, and any money you pay at that point to repair them might be less than the cost of buying a new one. This is also often true.
So, if you buy a vehicle known to often reach 300,000 miles rather than one known to often last only 100,000 miles, that is the better deal economically.
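The economics above can be put in rough numbers. This is only an illustration with made-up prices and lifespans, not real vehicle data:

```python
# Hypothetical numbers for illustration only -- not real vehicle data.
def cost_per_mile(purchase_price: float, expected_miles: float) -> float:
    """Rough ownership cost per mile, ignoring fuel, repairs, and resale."""
    return purchase_price / expected_miles

long_lived = cost_per_mile(30_000, 300_000)   # a car that often reaches 300k miles
short_lived = cost_per_mile(25_000, 100_000)  # a car that often dies near 100k miles

print(f"Long-lived car:  ${long_lived:.2f}/mile")   # $0.10/mile
print(f"Short-lived car: ${short_lived:.2f}/mile")  # $0.25/mile
```

Even though the long-lived car costs more up front here, it works out to less than half the cost per mile.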
But in regard to self-driving vehicles, the biggest problem I see coming is when you try to align 1,000 or 10,000 or 100,000 of them at a time on a freeway, for example, under a single controller system, so people can ride in their cars all 6 inches from each other down a freeway at 70 miles per hour.
The complexity of such a system is bound to eventually cause the catastrophic deaths of many of these 100,000 people when something goes wrong with it. It's just an extension of "whatever can go wrong will go wrong eventually in any electronic or mechanical system." A self-driving vehicle (or 100,000 of them combined in a single system) falls directly under this rule. And self-driving vehicles have to deal not only with mechanical and electronic failure but also with microchip and software failure, both within the vehicle and externally.
So, they become vulnerable to:
Mechanical failures
Electronic failures
Microchip failures
Software failures
So you have already gone from just mechanical and electronic failures with a human driver and multiplied the possibilities of failure by adding microchip failures and software failures, for any reason, on top.
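The multiplication of failure modes above can be sketched with a toy reliability model. The probabilities here are made up purely for illustration, and the model assumes each subsystem fails independently, which real systems only approximate:

```python
# Toy reliability model: each subsystem fails independently over some
# period with probability p. More subsystems -> a higher chance that at
# least one of them fails. All probabilities are illustrative, not measured.
def p_any_failure(failure_probs):
    """Probability that at least one subsystem fails."""
    p_all_ok = 1.0
    for p in failure_probs:
        p_all_ok *= (1.0 - p)
    return 1.0 - p_all_ok

human_driven = [0.05, 0.03]               # mechanical, electronic
self_driving = [0.05, 0.03, 0.02, 0.02]   # + microchip, software

print(round(p_any_failure(human_driven), 4))  # 0.0785
print(round(p_any_failure(self_driving), 4))  # 0.115
```

The point is only the direction of the effect: adding failure modes can never lower the overall chance that something goes wrong.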
Then you have to multiply the risk factors by adding a universal controller, which might act a little like HTTP or HTTPS over a freeway Wi-Fi system, controlling all the cars (as far as aligning them 6 inches from each other) as another interfacing system on top of whatever systems are resident in each self-driving vehicle.
So, if you align 100,000 cars in rush hour traffic anywhere, eventually this system is going to have a problem, and 100,000 cars might be full of dead people.
The way this could actually work is through a freeway Wi-Fi that would organize all the cars by location and destination, separating the cars by which freeway entrance ramp and exit ramp each would need.
Then, when a self-driving vehicle left the freeway, it would function once again as an independent self-driving vehicle, separate from the freeway system that organizes all the self-driving cars on the freeway.
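The core of the organizing idea above is just grouping enrolled cars by their exit ramp so each group can peel off together. A minimal sketch, with made-up car IDs and ramp names:

```python
# Minimal sketch of the freeway controller's grouping step: sort enrolled
# cars into platoons by exit ramp. Car IDs and ramp names are invented
# for illustration; a real system would track position, speed, etc.
from collections import defaultdict

def platoon_by_exit(cars):
    """cars: list of (car_id, exit_ramp) pairs -> dict mapping exit -> car IDs."""
    platoons = defaultdict(list)
    for car_id, exit_ramp in cars:
        platoons[exit_ramp].append(car_id)
    return dict(platoons)

cars = [("car_1", "Exit 12"), ("car_2", "Exit 15"), ("car_3", "Exit 12")]
print(platoon_by_exit(cars))
# {'Exit 12': ['car_1', 'car_3'], 'Exit 15': ['car_2']}
```

Once a car's platoon reaches its exit, the controller would hand control back to the individual vehicle, as described above.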
If there were human-driven cars and self-driving cars on the same freeway, they would likely use different lanes, with the fastest lanes reserved for the self-driving cars. Human drivers could use those fastest lanes if they engaged the self-driving features (if their cars had them) to reach their destinations faster.
Then changing lanes, where self-driving vehicles and human-driven vehicles mix, becomes the biggest problem to solve along the way.
But as the complexity of these systems rises, there will inevitably be a rise in fatalities for the first 25 to 50 years or more. And some kind of government watchdog will be needed to monitor fatalities in any such system, to KEEP it safe enough that people will want to travel in self-driving vehicles at all, or even be on the same road with one (if you are a human driving your own vehicle).
Another option is separate roads and lanes for self driving vehicles and human driven vehicles.