The CEO of the company that created ChatGPT
believes artificial intelligence technology will reshape society as we
know it. He believes it comes with real dangers, but that it can also be "the
greatest technology humanity has yet developed" and drastically improve
our lives.
"We've got to be careful here," said Sam Altman, CEO of OpenAI. "I think people should be happy that we are a little bit scared of this."
Altman
sat down for an exclusive interview with ABC News' chief business,
technology and economics correspondent Rebecca Jarvis to talk about the rollout of GPT-4 -- the latest iteration of the AI language model.
In
his interview, Altman was emphatic that OpenAI needs both regulators
and society to be as involved as possible with the rollout of ChatGPT —
insisting that feedback will help deter the potential negative
consequences the technology could have on humanity. He added that he is
in "regular contact" with government officials.
ChatGPT is an AI language model; the GPT stands for Generative Pre-trained Transformer.
Released only a few months ago, it is already considered the fastest-growing consumer application in history, hitting 100 million monthly active users shortly after launch. By comparison, TikTok took nine months to reach that many users and Instagram took nearly three years, according to a UBS study.
Though "not perfect," per Altman, GPT-4 scored in the 90th percentile on the Uniform Bar Exam. It also earned a near-perfect score on the SAT Math test, and it can now proficiently write computer code in most programming languages.
GPT-4 is just one step toward OpenAI's goal of eventually building artificial general intelligence -- the point at which AI systems become generally smarter than humans.
Though he celebrates his product's success, Altman acknowledged the potentially dangerous applications of AI that keep him up at night.
"I'm
particularly worried that these models could be used for large-scale
disinformation," Altman said. "Now that they're getting better at
writing computer code, [they] could be used for offensive cyberattacks."
A
common sci-fi fear that Altman doesn't share: AI models that don't need
humans, that make their own decisions and plot world domination.
"It waits for someone to give it an input," Altman said. "This is a tool that is very much in human control."
However,
he said he does fear which humans could be in control. "There will be
other people who don't put some of the safety limits that we put on," he
added. "Society, I think, has a limited amount of time to figure out
how to react to that, how to regulate that, how to handle it."
Russian President Vladimir Putin reportedly told Russian students on their first day of school in 2017 that whoever leads the AI race would likely "rule the world."
"So
that's a chilling statement for sure," Altman said. "What I hope,
instead, is that we successively develop more and more powerful systems
that we can all use in different ways that integrate it into our daily
lives, into the economy, and become an amplifier of human will."
Concerns about misinformation
According to OpenAI, GPT-4 has massive improvements over the previous iteration, including the ability to understand images as input. Demos show GPT-4 describing what's in someone's fridge, solving puzzles, and even articulating the meaning behind an internet meme.
But
a consistent issue with AI language models like ChatGPT, according to
Altman, is misinformation: The program can give users factually
inaccurate information.
"The
thing that I try to caution people the most is what we call the
'hallucinations problem,'" Altman said. "The model will confidently
state things as if they were facts that are entirely made up."
The model has this issue, in part, because it uses deductive reasoning rather than memorization, according to OpenAI.
"One
of the biggest differences that we saw from GPT-3.5 to GPT-4 was this
emergent ability to reason better," Mira Murati, OpenAI's Chief
Technology Officer, told ABC News.
"The
goal is to predict the next word – and with that, we're seeing that
there is this understanding of language," Murati said. "We want these
models to see and understand the world more like we do."
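Murati is describing the training objective of a large language model: given the words so far, predict the most likely next one. As a loose illustration of that idea -- a toy sketch, not how GPT-4 actually works (real models use transformer neural networks over subword tokens, not lookup tables) -- here is a minimal next-word predictor in Python:

    # Toy illustration of next-word prediction, not OpenAI's method.
    # A real GPT learns these probabilities with a neural network;
    # this sketch just counts which word follows which in a tiny corpus.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        # Return the word most often seen after 'word' in the corpus.
        counts = follows[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice here)

Generating text is then just repeated prediction -- append the predicted word and predict again -- which is, in spirit, what a GPT model does at a vastly larger scale.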
"The
right way to think of the models that we create is a reasoning engine,
not a fact database," Altman said. "They can also act as a fact
database, but that's not really what's special about them – what we want
them to do is something closer to the ability to reason, not to
memorize."
Altman
and his team hope "the model will become this reasoning engine over
time," he said, eventually being able to use the internet and its own
deductive reasoning to separate fact from fiction. GPT-4 is 40% more
likely to produce accurate information than its previous version,
according to OpenAI. Still, Altman said relying on the system as a primary source of accurate information "is something you should not use it for," and he encouraged users to double-check the program's results.
Precautions against bad actors
The type of information ChatGPT and other AI language models contain has also been a point of concern -- for instance, whether ChatGPT could tell a user how to make a bomb. The answer is no, per Altman, because of the safety measures coded into ChatGPT.
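The article doesn't describe how those measures work, and OpenAI's actual safeguards rely on trained models and human feedback rather than anything this simple. Purely as a sketch of what a "safety limit" means in principle -- with every name and rule below hypothetical -- a crude version might screen requests against a denylist before answering:

    # Hypothetical sketch only -- not OpenAI's actual safety system.
    # Real safeguards use trained classifiers and human feedback,
    # not keyword lists; this only illustrates the idea of a refusal.
    DISALLOWED = ("make a bomb", "build a weapon")  # hypothetical examples

    def generate_answer(prompt: str) -> str:
        # Stub standing in for the language model itself.
        return f"(model answer to: {prompt})"

    def respond(prompt: str) -> str:
        # The "safety limit": refuse before the model ever answers.
        if any(topic in prompt.lower() for topic in DISALLOWED):
            return "I can't help with that."
        return generate_answer(prompt)

    print(respond("How do I make a bomb?"))  # -> "I can't help with that."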
"A
thing that I do worry about is ... we're not going to be the only
creator of this technology," Altman said. "There will be other people
who don't put some of the safety limits that we put on it."
There are a few solutions and safeguards for these potential AI hazards, per Altman. One of them: Let society toy with ChatGPT while the stakes are low, and learn from how people use it.
Right now, ChatGPT is available to the public primarily because "we're gathering a lot of feedback," according to Murati.
As the public continues to test OpenAI's applications, Murati said, it becomes easier to identify where safeguards are needed.
"What are people using them for, but also what are the issues with it, what are the downfalls, and being able to step in [and] make improvements to the technology," Murati said. Altman said it's important that the public gets to interact with each version of ChatGPT.
"If
we just developed this in secret -- in our little lab here -- and made
GPT-7 and then dropped it on the world all at once ... That, I think, is
a situation with a lot more downside," Altman said. "People need time
to update, to react, to get used to this technology [and] to understand
where the downsides are and what the mitigations can be."
Regarding illegal or morally objectionable content, Altman said OpenAI has a team of policymakers who decide what information goes into ChatGPT and what ChatGPT is allowed to share with users.
"[We're]
talking to various policy and safety experts, getting audits of the
system to try to address these issues and put something out that we
think is safe and good," Altman added. "And again, we won't get it
perfect the first time, but it's so important to learn the lessons and
find the edges while the stakes are relatively low."
Will AI replace jobs?
Among the concerns about this technology's destructive capabilities is the replacement of jobs. Altman said AI will likely replace some jobs in the near future, and he worries about how quickly that could happen.
"I
think over a couple of generations, humanity has proven that it can
adapt wonderfully to major technological shifts," Altman said. "But if
this happens in a single-digit number of years, some of these shifts ...
That is the part I worry about the most."
But he encourages people to look at ChatGPT as a tool, not a replacement, adding that "human creativity is limitless, and we find new jobs. We find new things to do."
The ways ChatGPT can be used as a tool for humanity outweigh the risks, according to Altman.
"We
can all have an incredible educator in our pocket that's customized for
us, that helps us learn," Altman said. "We can have medical advice for
everybody that is beyond what we can get today."
ChatGPT as 'co-pilot'
In education, ChatGPT has become controversial, as some students have used it to cheat on assignments. Educators are torn on whether the tool could serve as an extension of their teaching or whether it deters students' motivation to learn for themselves.
"Education
is going to have to change, but it's happened many other times with
technology," said Altman, adding that students will be able to have a
sort of teacher that goes beyond the classroom. "One of the ones that
I'm most excited about is the ability to provide individual learning --
great individual learning for each student."
In any field, Altman and his team want users to think of ChatGPT as a "co-pilot" -- an assistant that could help you write extensive computer code or solve problems.
"We can have that
for every profession, and we can have a much higher quality of life,
like standard of living," Altman said. "But we can also have new things
we can't even imagine today -- so that's the promise."
Editor’s Note: Michael Bociurkiw (@WorldAffairsPro)
is a global affairs analyst currently based in Odesa. He is a senior
fellow at the Atlantic Council and a former spokesperson for the
Organization for Security and Cooperation in Europe. He is a regular
contributor to CNN Opinion. The opinions expressed in this commentary
are his own. View more opinion at CNN.
CNN —
Those expecting that the upcoming Ukrainian counteroffensive will be a shock and awe bombing campaign similar to the 2003 US strikes on Iraq will be disappointed.
To be sure, there is a sort of unspoken pressure on the administration of Ukrainian President Volodymyr Zelensky to press ahead with its planned counteroffensive as soon as possible – and to demonstrate that the billions of dollars in Western military aid are capable of pushing Russian President Vladimir Putin's forces at least back to pre-2022 full-scale invasion lines.
Ukrainian leaders and military planners also need to be mindful of developments across the Atlantic, where Ukraine's most powerful ally, the United States, could see the return of Donald Trump to the White House in 2025 – and with that, a likely drop in support.
Yet Kyiv seems to be playing it cagey, taking a long-range view of the counteroffensive, avoiding being pressured into action and keeping battlefield plans close to its chest.
We already know Zelensky needs time to build up weapons stocks and to train troops.
But make no mistake. The much talked-about counteroffensive is increasingly coming into view – not as an Iraq-style invasion, but through subtle and, some might say, brilliant strikes against Russia.
Then on Tuesday morning, a drone attack on the Russian capital brought
the conflict to Russian soil with fresh clarity. Moscow blamed Ukraine
for what it described as a “terrorist attack,” while Kyiv denied
involvement in the strike, which caused minor damage and injuries.
Whoever is to blame, one thing was for certain: it gave Moscow
residents a taste of what people in the Ukrainian capital are facing day
after day.
But it was the incursion by two anti-Kremlin groups – which claimed to have controlled, at least temporarily, 16 square miles of Russian territory last week – that set Ukrainian Telegram channels on fire.
The combatants, who claimed to be acting independently of Ukrainian forces, staged a provocation that prompted a major evacuation and represented the most intense fighting inside Russia since the start of the full-scale invasion of Ukraine.
There are clearly chinks in Putin’s armor. Should these types of disruptive attacks increase in frequency and spread to other regions within Russia, one might speculate that they could lead to a tipping point for Putin’s hold on power.
The aim here appears not to actually occupy Russian land – but to
send a message to Putin and the Russian public that the Ukraine war is a
waste of blood and treasure.
While such a scenario may make officials in Washington anxious about escalation, European officials seem to be looking the other way as Kyiv becomes more aggressive in its shortlisting of Russian targets.
What is more, if Ukrainians are prevented from striking key
military sites deep within Russian territory, then the question has to
be asked: What is the point of this David and Goliath fight with one hand tied behind Kyiv’s back?
The latest incursions, if they were associated in any way with Kyiv, were executed with brilliant timing, coming as Russian forces are preoccupied elsewhere along the frontline, trying to gain territory and defend occupied lands.
The Russian Volunteer Corps (RVC) and the Freedom of Russia Legion appear to be groups of Russian volunteers backing Ukraine with the intent of toppling Putin. Unlike the RVC, the Legion claims to be fighting under the leadership of Ukrainian command and “out of the wish of Russians to fight in the ranks of the Armed Forces of Ukraine against Putin’s armed gang.”
Just as word began to circle the globe about these two insurgent groups, which had little name recognition – even among those of us who follow the region closely – the New York Times published a piece about the affiliation of an RVC leader with neo-Nazi splinter groups.
If proven true, it could be used by the Kremlin spin machine to
paint Ukraine as a haven for Nazis, one of the false pretexts for the
invasion.
Wisely, Zelensky and his inner circle have remained mostly quiet about the incursions.
Even Wagner boss Yevgeny Prigozhin warned last week that Russians might move to topple the regime if the so-called “special military operation” continues to go sideways.
What is perhaps likely in the short term is that Russia will use a hybrid strategy to attack Ukraine and make life uncomfortable for the West.
That means a continuation of the daily strikes on Kyiv and other major centers (which, by depriving residents of sleep, are a form of psychological warfare); the weaponization of food by restricting ships carrying grain and other agricultural products from Ukraine to western markets; and even the weaponization of migration by creating enough fear from drone and missile attacks to prevent the millions of Ukrainian refugees from returning home.
It is reasonable to assume that Putin will not end this war voluntarily by submitting to a ceasefire or peace deal. Rather, Putin appears to believe he can win by running out the clock.
Collateral damage has never been a concern for Putin, only his own safety and power. Now, it seems, the buffer between Moscow and the frontline is rapidly shrinking.
And with the war he started getting uncomfortably close, I believe Putin’s days in office could be numbered as well.