Under the old system, the onus was on users to determine which Twitter rules had been broken.
Twitter is known as the internet’s swamp for a reason, a reputation most users can attest to, although the company desperately wants that to change. To help speed things along, the social media network has launched a new “symptoms-first” reporting process to help users report bad behavior.
In the past, the onus was on the user to determine what violation had been committed, which could be both frustrating and confusing. Under the new system, described by the company as “more empathetic” and supportive, Twitter will first ask users what happened, or the “symptom,” and then use that information to suggest the type of violation that may have occurred.
The new process, which is now available globally on the web, iOS, and Android, aims to make it easier for users to report others when they break Twitter’s rules, be it by posting misinformation or engaging in hate speech, among other violations. According to Twitter, the number of actionable reports it received increased by 50% in tests of the new system.
“In moments of urgency, people need to be heard and feel supported. Asking them to open the medical dictionary and saying, ‘point to the one thing that’s your problem’ is something people aren’t going to do,” Brian Waismeyer, a data scientist at Twitter who led the development of the new process, said in a blog post in December. “If they’re walking in to get help, what they’re going to do well is describe what is happening to them in the moment.”
Twitter initially revealed that it was working on the redesigned process in Waismeyer’s blog post in December. The news, which was spotted by The Verge, was confirmed to Gizmodo by Twitter on Friday.
The new reporting process resembles a mini questionnaire and doesn’t appear to be difficult to use. First, Twitter asks who you’re submitting the report for, whether that’s yourself, someone else, or a group of people. The process then moves on to a “gathering info” section that asks you to categorize the harmful behavior in question. After that, you’ll be asked how the user you’re reporting is behaving.
Based on all of your answers, Twitter will suggest what kind of report you should make, such as one for hateful conduct, and ask you to confirm that its assessment is correct. If it is, you continue the reporting process. If Twitter’s off base, though, you can select another option. Overall, this will allow Twitter to take appropriate steps to address the content being reported.
“This report essentially triggers a review of the content. If Twitter determines that the content is violating and our rules dictate that the content be removed, that will happen,” Fay Johnson, product management director on Twitter’s health team, said. “We’ll do some additional investigation to see if there are other things that we need to take down based on what was reported, whether it be the content itself or an account.”