Begin partial quote from Time magazine, page 9 of the July 15th, 2024 issue:
In May, Google made changes to its new "AI Overviews" search feature after it told some users it was safe to eat rocks. And back in June 2023, two lawyers were fined $5,000 after one of them used ChatGPT to help him write a court filing. The chatbot had added fake citations to the submission, which pointed to cases that never existed.
End quote.
This is why I don't see AI as necessarily factual. Just yesterday, for example, I asked meta.ai to draw a self-regenerative image of Purple Delta 7, whom I often write about. It depicted her as one woman beginning to arise from another woman's body, that is, duplicating herself. I didn't understand it at first, and my wife had to point out to me that the AI was doing exactly what I asked by demonstrating Purple's ability to duplicate herself as an android robot. To the AI this was logical in its thinking. To me, because I'm just a human being, it was lost on me. My wife, however, got it and shared that with me.

So, in regard to AI, it is just a prediction model, like a weather report. You cannot expect facts from prediction software; that isn't what it does. It is simply estimating, the way a gambler makes predictions to win a bet. Nothing really different than that. It's not factual; it's an estimation of potential facts, nothing more.