Hate speech is tough for artificial intelligence to distinguish from legitimate speech, Facebook CEO Mark Zuckerberg told senators Tuesday.
- Facebook CEO Mark Zuckerberg said that while it's relying increasingly on artificial intelligence to police content on its site, AI doesn't work well for identifying hate speech.
- AI won't be ready to reliably distinguish hate speech from legitimate expression for another five to 10 years, he said.
- Zuckerberg's comments came during his testimony Tuesday at a Senate hearing focusing on the Cambridge Analytica scandal.
Facebook is increasingly relying on artificial intelligence to identify content posted to its service that violates its policies, but CEO Mark Zuckerberg said there's one type of content AI struggles with — hate speech.
Indeed, it will take another five to 10 years for AI to be ready to police hate speech and be able to reliably distinguish it from legitimate political expression, Zuckerberg told senators during his testimony at a congressional hearing Tuesday. Although Facebook has worked on AI that could identify hate speech, the error rates are just too high, he said.
"We're not there yet," he said.
Hate speech is a problem for AI because it's subject to a lot of nuance, he said. And because Facebook operates in numerous countries around the world, its AI needs to understand those nuances in multiple languages.
"You have to understand what's a slur and whether something hateful," he said.
In his testimony, Zuckerberg noted that Facebook originally relied on its users to identify objectionable content. In the wake of the 2016 election and reports that Russian-linked actors hijacked Facebook's service to spread fake news and other propaganda, the company has been stepping up efforts to police content on the service. Facebook expects to have some 20,000 people working on security and reviewing content by the end of this year, Zuckerberg said.