Last May, as Twitter was testing warning labels for false and misleading tweets, it tried out the word “disputed” with a small focus group. It didn't go over well.
“People were like, well, who’s disputing it?” said Anita Butler, a San Francisco-based design director at Twitter who has been working on the labels since December 2019. The word “disputed,” it turns out, had the opposite effect of what Twitter intended, which was to “increase clarity and transparency,” she said.
The labels are an update from those Twitter used for election misinformation before and after the 2020 presidential contest. Those labels drew criticism for not doing enough to keep people from spreading obvious falsehoods. Now, Twitter is overhauling them in an attempt to make them more useful and easier to notice, among other things. Beginning Thursday, the company will test the redesigns with some U.S. users on the desktop version of its app.
Experts say such labels — used by Facebook as well — can be helpful to users. But they can also allow social media platforms to sidestep the more difficult work of content moderation — that is, deciding whether or not to remove posts, photos and videos that spread conspiracies and falsehoods.
“It's the best of both worlds” for the companies, said Lisa Fazio, a Vanderbilt University psychology professor who studies how false claims spread online. “It's seen as doing something about misinformation without making content decisions.”
While there is some evidence that labels can be effective, she added, social media companies don't make enough data public for outside researchers to study how well they work. Twitter labels only three types of misinformation: “manipulated media,” such as videos and audio that have been deceptively altered in ways that could cause real-world harm; election and voting-related misinformation; and false or misleading tweets related to COVID-19.
One thing that's clear, though, is that they need to be noticeable in a way that prevents eyes from glossing over them in a phone scroll. It's a problem similar to the one faced by designers of cigarette warning labels. Twitter's election labels, for instance, were blue, which is also the platform's regular color scheme. So they tended to blend in.
The proposed designs added orange and red so the labels stand out more. While this can help, Twitter says its tests also showed that if a label is too eye-catching, it leads more people to retweet and reply to the original tweet. Not what you want with misinformation.
Then there's the wording. When “disputed” didn't go over well, Twitter went with “stay informed.” In the current test, tweets that get this label will carry an orange icon, and people will still be able to reply to or retweet them. Such a label might go on a tweet containing an untruth that could be harmful, but isn't necessarily immediately so.
More serious misinformation — for instance, a tweet claiming that vaccines cause autism — would likely get a stronger label, with the word “misleading" and a red exclamation point. It won't be possible to reply to, like or retweet these messages.
“One of the things we learned was that words that build trust were important, and also words that were not judgmental, non-confrontational, friendly,” Butler said.
This makes sense from Twitter's perspective, Fazio said. After all, “a lot of people don't like to see the platforms have a heavy hand,” she added.
As a result, she said, it's hard to tell whether Twitter's main goal is to avoid angering users and alienating them from the platform, rather than simply helping them understand “what is and isn’t misinformation.”