Facebook Will Use Artificial Intelligence to Find Extremist Posts
This is a slippery slope, legally speaking. Even though I agree ISIS must be stopped at almost any cost, it is still a slippery slope we are embarking on: using bots to ferret out dissent of any and all kinds.
Because once you go down this path, it is only a small step from this to, say, anyone who writes anything bad about Trump or Putin or Duterte having a hit squad with AK-47s heading to their home to shoot them.
And by the way, this is already happening in the Philippines and Russia.
You think I'm kidding? This is so "1984" (the book) it is almost clinical in its slippery slope. That is why getting rid of ISIS is good, but getting rid of ALL dissent against ANY government is bad and will eventually cause human extinction through nukes. Because if people cannot dissent, even when they need to in order to survive, the world will eventually be ended. This is just a given, based upon human nature down through history.
Facebook Will Use Artificial Intelligence to Find Extremist Posts
Photo: Facebook says the first use of its new artificial intelligence program will be to prevent the posting of gruesome content such as images from terrorist attacks. Credit: Patricia De Melo Moreira/Agence France-Presse — Getty Images
SAN FRANCISCO — Responding to complaints that not enough is being done to keep extremist content off social media platforms, Facebook said Thursday that it would begin using artificial intelligence to help remove inappropriate content.
Artificial intelligence will largely be used in conjunction with human moderators who review content on a case-by-case basis. But developers hope its use will be expanded over time, said Monika Bickert, the head of global policy management at Facebook.
One of the first applications for the technology is identifying content that clearly violates Facebook’s terms of use, such as photos and videos of beheadings or other gruesome images, and stopping users from uploading them to the site.
“Tragically, we have seen more terror attacks recently,” Ms. Bickert said. “As we see more attacks, we see more people asking what social media companies are doing to keep this content offline.”
In a blog post published Thursday, Facebook described how an artificial-intelligence system would, over time, teach itself to identify key phrases that were previously flagged for being used to bolster a known terrorist group.
The same system, the company wrote, could learn to identify Facebook users who associate with clusters of pages or groups that promote extremist content, or who return to the site again and again, creating fake accounts in order to spread such content online.
“Ideally, one day our technology will address everything,” Ms. Bickert said. “It’s in development right now.” But human moderators, she added, are still needed to review content for context.
Brian Fishman, Facebook’s lead policy manager for counterterrorism, said the company had a team of 150 specialists working in 30 languages doing such reviews.
Facebook has been criticized for not doing enough to monitor its site for content posted by extremist groups. Last month, Prime Minister Theresa May of Britain announced that she would challenge internet companies — including Facebook — to do more to monitor and stop them.
“We cannot allow this ideology the safe space it needs to breed,” Ms. May said after the bombing of a concert in Manchester that killed 22 people. “Yet that is precisely what the internet — and the big companies that provide internet-based services — provide.”
J. M. Berger, a fellow with the International Centre for Counter-Terrorism at The Hague, said a large part of the challenge for companies like Facebook is figuring out what qualifies as terrorism — a definition that might apply to more than statements in support of groups like the Islamic State.
“The problem, as usual, is determining what is extremist, and what isn’t, and it goes further than just jihadists,” he said. “Are they just talking about ISIS and Al Qaeda, or are they going to go further to deal with white nationalism and neo-Nazi movements?”
Ms. Bickert said Facebook was hopeful that the new artificial intelligence technology could be used to counter any form of extremism that violated the company’s terms of use, although for the time being it will be narrowly focused.
Still, questions about the program persist.
“Will it be effective or will it overreach?” said Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation. “Are they trying to discourage people from joining terrorist groups to begin with, or to discourage them from posting about terrorism on Facebook?”