Facebook Will Use Artificial Intelligence to Find Extremist Posts (The New York Times)
SAN FRANCISCO — Responding to complaints that not enough is being done to keep extremist content off social media platforms, Facebook said Thursday that it would begin using artificial intelligence to help remove inappropriate content.

Artificial intelligence will largely be used in conjunction with human moderators who review content on a case-by-case basis. But developers hope its use will be expanded over time, said Monika Bickert, the head of global policy management at Facebook.

One of the first applications for the technology is identifying content that clearly violates Facebook’s terms of use, such as photos and videos of beheadings or other gruesome images, and stopping users from uploading them to the site.

“Tragically, we have seen more terror attacks recently,” Ms. Bickert said. “As we see more attacks, we see more people asking what social media companies are doing to keep this content offline.”

In a blog post published Thursday, Facebook described how an artificial-intelligence system would, over time, teach itself to identify key phrases that were previously flagged for being used to bolster a known terrorist group.

The same system, the company wrote, could learn to identify Facebook users who associate with clusters of pages or groups that promote extremist content, or who return to the site again and again, creating fake accounts in order to spread such content online.

“Ideally, one day our technology will address everything,” Ms. Bickert said. “It’s in development right now.” But human moderators, she added, are still needed to review content for context.

Brian Fishman, Facebook’s lead policy manager for counterterrorism, said the company had a team of 150 specialists working in 30 languages doing such reviews.

Facebook has been criticized for not doing enough to monitor its site for content posted by extremist groups. Last month, Prime Minister Theresa May of Britain announced that she would challenge internet companies — including Facebook — to do more to monitor and stop such content.

“We cannot allow this ideology the safe space it needs to breed,” Ms. May said after the bombing of a concert in Manchester that killed 22 people. “Yet that is precisely what the internet — and the big companies that provide internet-based services — provide.”

J. M. Berger, a fellow with the International Centre for Counter-Terrorism at The Hague, said a large part of the challenge for companies like Facebook was figuring out what qualifies as terrorism — a definition that might apply to more than statements in support of groups like the Islamic State.

“The problem, as usual, is determining what is extremist, and what isn’t, and it goes further than just jihadists,” he said. “Are they just talking about ISIS and Al Qaeda, or are they going to go further to deal with white nationalism and neo-Nazi movements?”

Ms. Bickert said Facebook was hopeful that the new artificial-intelligence technology could be used to counter any form of extremism that violated the company’s terms of use, although for the time being it would be narrowly focused.

Still, questions about the program persist.

“Will it be effective, or will it overreach?” said Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation. “Are they trying to discourage people from joining terrorist groups to begin with, or to discourage them from posting about terrorism on Facebook?”
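The article does not say how Facebook’s classifier works internally, but the idea of learning from phrases that human moderators have already flagged corresponds to ordinary supervised text classification. Below is a minimal sketch of that general idea; the example posts, labels, and the TF-IDF plus logistic-regression pipeline are my own illustrative assumptions, not anything Facebook has described.

```python
# Hypothetical sketch only: Facebook has not published its system. The
# data, labels, and model below are illustrative assumptions meant to
# show phrase-based text classification, not Facebook's actual method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: posts already reviewed by human moderators,
# labeled 1 if they were removed for promoting a terrorist group.
posts = [
    "join the fight for the caliphate, brothers",   # previously flagged
    "martyrdom awaits those who answer the call",   # previously flagged
    "great turnout at the bake sale this weekend",  # benign
    "anyone watching the match tonight?",           # benign
]
labels = [1, 1, 0, 0]

# TF-IDF over word 1- and 2-grams stands in for the "key phrases" the
# article mentions; a linear model then scores new posts by how much
# they resemble previously flagged language.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(posts, labels)

# Score a new post. In practice anything above a threshold would still
# go to a human moderator for context, as the article emphasizes.
score = model.predict_proba(["answer the call and join the fight"])[0][1]
print(f"probability of matching flagged language: {score:.2f}")
```

A real system would need vastly more data, many languages, and constant retraining as language shifts, which is part of why the article stresses that human reviewers remain in the loop.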
To the best of my ability, I write about my experience of the Universe: Past, Present, and Future.
Thursday, June 15, 2017
Facebook Will Use Artificial Intelligence to Find Extremist Posts