YouTube is Copying X’s Most Interesting Feature

Since Elon Musk bought Twitter in 2022 and turned it into X, he has relaxed the site’s speech rules and laid off many of the people responsible for enforcing them. But he has embraced one content moderation idea that predated his arrival: a feature called Community Notes, formerly Birdwatch, that enlists users to fact-check and add context to potentially misleading posts.

When a user proposes a Community Note, other users are asked to review that note’s helpfulness before it gets published. If there’s wide agreement the note is useful, it will then appear below the post in question for all X users to see. The feature has led to public debunking of posts by users ranging from cryptocurrency scammers to President Biden – and on several occasions, Musk himself.

– The idea of Community Notes is now spreading

This week, Google’s YouTube began testing a similar as-yet-unnamed feature, inviting a group of users to start proposing notes that add context to others’ videos. Examples include “notes that clarify when a song is meant to be a parody, point out when a new version of a product being reviewed is available, or let viewers know when older footage is mistakenly portrayed as a current event,” the company said in a blog post.

YouTube said that, like X, it will use a special type of ranking system, known as a “bridging-based algorithm,” to decide which proposed notes deserve promotion. Bridging systems are designed to prioritize ideas that appeal to a broad range of people, rather than just those on one side of an issue.

In practice, that means a note or fact check must win approval from users who have previously disagreed with each other before it appears on the site. The mechanism has helped Community Notes avoid being taken over by either the right or the left, although there may still be ways to game it.
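To make the idea concrete, here is a minimal, hypothetical sketch of a bridging-based ranker in Python. It is not X’s or YouTube’s production system; it simply fits a small matrix-factorization model in which a rater’s one-dimensional “viewpoint” factor soaks up agreement that follows partisan lines, so a note earns a high score mainly from support that the viewpoint factor cannot explain. The toy ratings, parameter values, and promotion threshold are all invented for illustration.

```python
import numpy as np

def bridging_scores(ratings, n_users, n_notes, epochs=3000, lr=0.05,
                    reg_intercept=0.15, reg_factor=0.03, seed=0):
    """Fit rating ~= mu + b_user + b_note + f_user * f_note by simple SGD.

    The one-dimensional factor term (f_user * f_note) absorbs agreement that
    follows a shared viewpoint, so the note intercept b_note, used here as the
    'bridging' score, rewards support that is not explained by viewpoint alone.
    """
    rng = np.random.default_rng(seed)
    mu = 0.0                                  # global mean rating
    b_u = np.zeros(n_users)                   # per-rater leniency
    b_n = np.zeros(n_notes)                   # per-note helpfulness (the score we want)
    f_u = rng.normal(0.0, 0.1, n_users)       # rater "viewpoint" factor
    f_n = rng.normal(0.0, 0.1, n_notes)       # note "polarity" factor

    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (mu + b_u[u] + b_n[n] + f_u[u] * f_n[n])
            mu += lr * err
            b_u[u] += lr * (err - reg_intercept * b_u[u])
            b_n[n] += lr * (err - reg_intercept * b_n[n])
            f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - reg_factor * f_u[u]),
                              f_n[n] + lr * (err * f_u[u] - reg_factor * f_n[n]))
    return b_n

# Toy data: raters 0-2 and 3-5 form two opposing camps (1.0 = helpful, 0.0 = not helpful).
ratings = (
    [(u, 0, 1.0) for u in range(3)] + [(u, 0, 0.0) for u in range(3, 6)]    # note 0: camp A only
    + [(u, 1, 0.0) for u in range(3)] + [(u, 1, 1.0) for u in range(3, 6)]  # note 1: camp B only
    + [(u, 2, 1.0) for u in range(6)]                                       # note 2: both camps
)
scores = bridging_scores(ratings, n_users=6, n_notes=3)
THRESHOLD = 0.2  # hypothetical promotion cutoff
for note_id, score in enumerate(scores):
    print(f"note {note_id}: bridging score {score:+.2f} ->",
          "show" if score > THRESHOLD else "hold")
```

With these invented numbers, only the note rated helpful by both camps should clear the threshold; a real system layers many more checks on top of a core score like this before a note goes live.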

– For tech companies, letting users do the fact-checking holds a lot of appeal

It’s cheaper than hiring professional fact-checkers and relatively easy to scale up once the system is in place. Perhaps most important, it gets the company out of the controversial role of directly arbitrating what is true.

In YouTube’s case, spokesperson Elena Hernandez said the notes are meant to “build on” other forms of context that the company already adds to some videos, such as information panels that point to reliable sources on controversial topics.

“One of the challenges we’ve faced is how to raise third-party context on a video in a way that is specific, detailed and timely to the video itself, and can be done at scale across the huge variety of content on YouTube,” Hernandez said via email. “We’re testing notes as a feature that can help address this.”

– Executives at X say they’re pleased to see YouTube following their lead

Keith Coleman, X’s vice president of product, helped to spearhead the original Birdwatch project at Twitter before Musk bought it. He said YouTube’s move to test a similar feature is “a testament to the power of the Community Notes approach that gives voice to the people.”

Coleman cited research that found Community Notes reduce the spread of misinformation on X, often prompting the person who wrote the misleading post to delete it. Whereas traditional fact-checking is often “polarizing,” he added, one recent study found that people across the political spectrum rated Community Notes as more trustworthy than a simple misinformation flag.

In October, Musk announced that creators on X who have a post corrected by Community Notes would not be able to earn money from that post. That won’t be the case on YouTube for now, as it plans to experiment with its notes feature before deciding whether to roll it out more widely.

– Still, crowdsourced fact-checks are far from a silver bullet for online scams or lies

The rigorous review process for Community Notes means they often appear on a post only after it has reached a wide audience, blunting their impact. And while X reported that it showed nearly 30,000 notes in the first four months of this year, that probably represents a small fraction of all the misleading posts circulating on the platform. Last fall, an analysis by NewsGuard of 250 widely viewed X posts advancing misinformation about the Israel-Gaza war found that just 32 percent were flagged by Community Notes.

In December, after one of his posts was flagged by Community Notes, Musk claimed that the system was being “gamed by state actors.”

While Community Notes is a promising feature, it works best as a supplement to conventional fact-checking and content moderation, said Alexios Mantzarlis, director of the Security, Trust and Safety Initiative at Cornell Tech and a former product policy manager at Google. He noted that YouTube has rolled back some of its misinformation policies ahead of the 2024 U.S. presidential election, as has rival Meta.

“If this is intended to be the sole countermeasure against 2020 election denialism claims” that no longer violate YouTube’s policies, he said, “it feels insufficient.”

(c) 2024, The Washington Post · Will Oremus 

