Why Facebook Will Never Fully Solve Its Problems with AI


Mark Zuckerberg offered AI as a panacea for Facebook’s massive content problem during Tuesday’s testimony before the Senate Judiciary and Commerce committees — but this is ultimately a false promise.

Leaning on the promise of artificial intelligence to detect and remove the kind of problematic content that is drawing scrutiny to the social network conveniently leaves room for Facebook to never fully or directly take responsibility for what’s happening on its platform — and worse, to do so at scale.

About one hour into his marathon testimony, Facebook’s CEO unexpectedly gave up the “neutral platform” defense that Facebook, and so many other technology companies, have deployed to distance themselves from being held accountable for the problems on their platforms.

“In the past, we’ve been told that platforms like Facebook, Twitter, Instagram, the like are neutral platforms. … They bore no responsibility for the content,” Sen. John Cornyn told Zuckerberg. “Do you agree now that Facebook and the other social media platforms are not neutral, but bear some responsibility for the content?”

“I agree that we’re responsible for the content,” Zuckerberg answered. It was an astonishing concession. But it didn’t last.

Seconds later, he launched into a talking point about how AI could address undesirable content, effectively abdicating Facebook’s responsibility for the problem. He would return to this defense 10 more times before his testimony ended.

“In the future, we’re going to have tools that are going to be able to identify more types of bad content” like hate speech, fake news, obscenity, revenge porn, and other controversial content on Facebook, Zuckerberg said. The company is hiring more content moderators, with the aim of having 20,000 workers by the end of this year, and “building AI tools is going to be the scalable way to identify and root out most of this harmful content.”

Call it AI solutionism. It’s an appealing idea. But it will never fully work.

“Proposing AI as the solution leaves a very long time period where the issue is not being addressed, during which Facebook’s answer to what is being done is, ‘We are working on it,’” Georgia Tech AI researcher Mark Riedl told BuzzFeed News.

Fake news running rampant? The algorithm hasn’t been trained on enough contextual data. Violence-inciting messages in Myanmar? The AI isn’t good enough, or perhaps there aren’t enough Burmese-speaking content moderators — but don’t worry, the tools are being worked on. AI automation also gives the company deniability: if it makes a mistake, there’s no holding the software accountable.

“There is a tendency to want to see AI as a neutral moral authority,” Riedl told BuzzFeed News. “However, we also know that human biases can creep into data sets and algorithms. Algorithms can be wrong and there needs to be recourse.” Human biases can get coded into the AI and be uniformly applied across users of different backgrounds, in different countries with different cultures, and across wildly different contexts.

Facebook did not immediately respond to a request for comment from BuzzFeed News.

To be fair, even Zuckerberg was upfront about some of the limitations of AI, saying that while AI may be able to root out hate speech in five to 10 years, “today we are not there yet”:

“Some problems lend themselves more easily to AI solutions than others. Hate speech is one of the hardest, because determining whether something is hate speech is very linguistically nuanced. You need to understand what is a slur, and whether something is hateful. Not just in English — most people on Facebook use it in different languages across the world. Contrast that, for example, with an area like finding terrorist propaganda, which we’ve been very successful at deploying AI tools on already.

“Today, as we sit here, 99% of the ISIS and Al Qaeda content we take down, AI flags before any human sees it. So that’s success in terms of rolling out AI tools that can proactively police and enforce safety across the community.”
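In concrete terms, the proactive flagging Zuckerberg describes amounts to a triage pipeline: a model scores each post, high-confidence matches are removed automatically, and borderline cases go to human moderators. The following is a minimal sketch of that flow; the scoring function, thresholds, and labels are all hypothetical stand-ins, not Facebook’s actual system.

```python
# Hypothetical sketch of an automated content-triage pipeline.
# The scorer below is a trivial keyword stand-in; a real system
# would call a trained classifier. All names and thresholds are
# illustrative assumptions, not Facebook's implementation.

AUTO_REMOVE = 0.95   # score above which content is removed automatically
HUMAN_REVIEW = 0.60  # score above which content is queued for moderators

def propaganda_score(post: str) -> float:
    """Stand-in scorer: a real system would run a trained model here."""
    flagged_terms = {"propaganda", "recruit", "attack"}
    hits = sum(1 for word in post.lower().split() if word in flagged_terms)
    return min(1.0, hits / 3)  # crude normalized score, illustration only

def triage(post: str) -> str:
    score = propaganda_score(post)
    if score >= AUTO_REMOVE:
        return "removed"            # "AI flags it before any human sees it"
    if score >= HUMAN_REVIEW:
        return "queued_for_review"  # one of the ~20,000 moderators decides
    return "published"

if __name__ == "__main__":
    for post in ["recruit for the attack, spread the propaganda",
                 "lovely weather today"]:
        print(triage(post), "<-", post)
```

Even in this toy version, the design choice is visible: everything hinges on how well the scorer generalizes, and anything it misses is simply published.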

But several AI researchers told BuzzFeed News that this framing ignored several facets of the problem. First, as Cornell AI professor Bart Selman said, you could argue that artificial intelligence, and algorithms in general, seriously contributed to Facebook’s problem in the first place.

“AI algorithms operate by finding intelligent ways to optimize for a pre-programmed objective,” Selman said. “Facebook instructs its news feed algorithms to optimize for ‘user engagement.’”

When Facebook users engaged with posts that reaffirmed their biases, Facebook showed them more of it. News feeds got increasingly polarized. Then bad actors realized they could game the system, and so fake news and extremist content became a problem.
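A toy simulation makes the mechanics of that loop clear: if the ranker’s only objective is predicted engagement, and engagement correlates with a user’s existing leaning, the top of the feed converges on that leaning. Everything below is a hypothetical illustration with made-up numbers, not Facebook’s ranking code.

```python
# Toy simulation of the feedback loop Selman describes: a feed that
# optimizes solely for engagement shows users more of what they
# already engage with. All values are hypothetical.
import random

random.seed(0)

user_leaning = 0.8  # 0 = one ideological pole, 1 = the other

def engagement_probability(post_leaning: float) -> float:
    """Posts closer to the user's leaning are more likely to be clicked."""
    return 1.0 - abs(user_leaning - post_leaning)

def rank(posts):
    # The objective is predicted engagement and nothing else: no term
    # for diversity, accuracy, or civility appears anywhere.
    return sorted(posts, key=engagement_probability, reverse=True)

feed = [random.random() for _ in range(1000)]  # posts spread across the spectrum
top_10 = rank(feed)[:10]
print("average leaning of top-ranked posts:",
      round(sum(top_10) / len(top_10), 2))
# Prints a value near 0.8: the feed converges on the user's existing bias.
```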

Of course, Zuckerberg doesn’t want to talk about how AI got us into this mess.

As for Facebook’s systems catching what it considers “bad” content, Jana Eggers, the CEO of AI startup Nara Logics, said she “doubts” Facebook is rooting out as much of the terrorist content as Zuckerberg said it did. “There is plenty of that propaganda that is also being spread that they don’t find,” she told BuzzFeed News. “I worry that he has a false sense of pride on how much propaganda they are actually getting, and that false sense of pride will lead to its own set of problems.”

What’s more, the researchers warned that Zuckerberg’s timeline of AI understanding the human context in hate speech within five to 10 years could be unrealistic. “AI systems would have to develop fairly sophisticated forms of ethical reasoning and journalistic integrity to deal with such language,” said Cornell University’s Selman. “We are at least 20 to 30 years away from that for AI systems, and that may be an optimistic estimate.” But even Zuckerberg’s optimistic 10-year timeline would be “too long of a wait,” he said.

Tarleton Gillespie, who studies how algorithms and platforms shape public discourse at Microsoft Research, told BuzzFeed News that he wasn’t just skeptical that it would take “a while” for technology companies to develop AI sophisticated enough to address hate speech and controversial content on platforms. “AI likely can’t ever do what platforms want it to do,” he said.

At its size, Facebook is never going to fully address its huge content problem. Yes, having some AI systems to help those 20,000 content moderators is better than none. “But AI for content monitoring would need to be carefully designed and monitored with the right human interest-aligned objectives in mind,” Selman said.

Which implies a perpetual problem. Culture, the complexity of language, the tricks of those who willfully violate platform standards and game AI systems — these are all factors that the people developing AI systems themselves acknowledge are in flux. And that makes the training data itself fluid by definition, Microsoft Research’s Gillespie pointed out. Platforms will always need people to detect and assess new forms of hate and harassment, and they will never be able to eliminate the need for humans dealing with this problem.

What AI automation really does, Gillespie argued, is “detach human judgment from the encounter with the specific user, interaction, or content and shift it to the analysis of predictive patterns and categories for what counts as a violation, what counts as harm, and what counts as an exception.” If Facebook truly wants to make a good-faith effort to grapple with its content problem, it shouldn’t outsource this judgment to AI.

For as long as Facebook is as huge as it is, AI will never be a complete solution. One real — though unlikely — solution? Downsize. “Venture capitalists and the market may not have supported such an approach,” Selman said, “[but] if Facebook had opted for a more manageable size, the core problems would likely have been avoided.”

“It’s indeed the relentless pursuit of rapid growth that drove the need for near-total AI automation, which caused the problems with these platforms.”
