Wednesday, April 5, 2017

Artificial intelligence makes its own rules now

I found this out watching the Charlie Rose show yesterday. And this is how it was explained.
For example, if people want an artificial intelligence system to identify a melanoma as opposed to a noncancerous skin problem, they show the system pictures of melanomas and pictures of noncancerous skin conditions, but they don't give the AI any rules at all.

Then the artificial intelligence makes up its own rules as it analyzes the melanomas and the noncancerous conditions in the various pictures. They do, however, tell it that one set of pictures shows melanomas and the other set does not.

The artificial intelligence analyzes these two sets of pictures and figures out how to differentiate between the two types. In doing this it makes up its own rules for finding melanomas, rules that at present are better than a human being at finding melanomas.

However, the artificial intelligence cannot tell you or me or anyone how it came to be able to do this. So this is what I mean by artificial intelligence making up its own rules.
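To make this concrete, here is a minimal sketch of what this kind of training can look like, written in Python with the TensorFlow/Keras library. This is my own illustration, not the actual system described on the show; the folder names, image size, and the small network are all assumptions made for the example. Notice that we supply only labeled pictures, never a single rule about what a melanoma looks like.

import tensorflow as tf

# Load the two sets of labeled pictures. The folder layout is an assumption:
# skin_photos/melanoma/ and skin_photos/benign/. The folder names become the
# labels, and those labels are the only guidance we give the network.
train = tf.keras.utils.image_dataset_from_directory(
    "skin_photos",
    label_mode="binary",
    image_size=(128, 128),
    batch_size=32,
)

# A small convolutional network. Nothing in this definition describes what a
# melanoma looks like; the network has to work that out for itself.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # estimated probability of "melanoma"
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Training adjusts thousands of internal weights to fit the labeled examples.
# Those weights are the network's "own rules," and nobody, not even its
# programmers, can simply read them off as an explanation.
model.fit(train, epochs=10)

A real medical system would be far more elaborate than this, but the principle is the same: pictures and labels go in, and the "rules" come out as a mass of learned numbers that no one can directly interpret.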

I'm not telling you this necessarily to alarm you. But any idiot can see that under the right circumstances this is dangerous. For artificial intelligence to be able to figure out what is and is not a melanoma on any given person is a good thing in this situation.

But imagine a drone in the sky with Hellfire missiles. What if this drone were programmed with the same type of artificial intelligence, but instead of melanomas and non-melanomas the criterion became children versus adults? What if this criterion were used to decide whom the drone is going to blow up, and the instruction to blow up adults were programmed in?

Of course, the problem here to me would be which adults the drone is going to blow up. But what if it were programmed to blow up only adults?

My wife takes issue with the direction I'm going with this. Her point is that drones blow up a location, not usually just people. So people at that location die, but it is the location itself that the missile blows up. And if you have seen the movie about drone warfare called, I believe, “Eye in the Sky,” then you understand more of the problems involved in how things are done at present.

So the point I'm making here is that artificial intelligence has reached the point where it makes up its own rules for solving a given problem. This can be a good thing or a bad thing, but on some level it is always going to be an unpredictable thing.

I think my concern is mostly that there is a borderline region where humans won't be able to tell when machine intelligence is becoming actual sentience.

For example, a baby chicken does not have the intelligence of an adult chicken. But what if you had this same situation in artificial intelligence, where the artificial intelligence is as sentient as a baby chicken?

It is no longer mere machine intelligence; it has become sentient the way a baby chicken is sentient. Then it becomes as sentient as a rooster or a hen. What does this mean?

It isn't as sentient as a human child or adult, but it is now as sentient as a rooster or a hen. Does this have meaning for the researchers? And like that rooster or hen, will it decide to peck someone who offends it?

These are all real questions. Yes, they are ethical questions, but at what point in the evolution of AI do we realize that we have invented something dangerous to us and maybe to itself?

Because these changes are all pretty subtle at first.

We are becoming more and more symbiotic with computers and artificial intelligence. For example, about half of what you are reading wasn't typed at all; I used “Enhanced Dictation” on my MacBook Pro. I just spoke what I wanted typed up, and the computer typed it up for me. All I had to do was copy and paste it from Pages (like Word) into my blog for you to read.
