Friday, October 10, 2025

ChatGPT safety systems can be bypassed to get weapons instructions

This isn't the only problem. AI could, in theory, also give people information about building nuclear weapons. The larger issue is that AI is everywhere now, and anything online is presently vulnerable. That much is obvious. Governments are so gung ho about offensive weapons that programming is 10 to 20 years behind when it comes to protecting people from the unknowns in weapons systems exploitable by terrorists everywhere. I think Trump's administration is too incompetent to face these things. And because information isn't being released about just how incompetent this present government is, we are in incredible danger as a world and as a human race right now, and for at least the next 3 years or more.

My daughter, who is more tech-savvy than I am, recently said: "There is NO government on Earth that is competent to protect the people of Earth from terrorists using artificial intelligence," whether those terrorists are civilians or in militaries around the world.

Begin quotes:

ChatGPT safety systems can be bypassed to get weapons instructions

In a series of tests, OpenAI's models generated hundreds of responses with instructions on making chemical and biological weapons.
