Starting next week, Twitter will prioritize the removal of the most harmful misleading information.
Using a combination of technology and human review, Twitter announced that it will begin enforcing this updated policy on December 21 and expand its actions in the following weeks.
Beginning in early 2021, Twitter may label or place warnings on tweets that advance unsubstantiated rumors, disputed claims, or incomplete or out-of-context information about vaccines.
“We will enforce this policy in close consultation with local, national and global health authorities around the world and strive to make our approach iterative and transparent,” the microblogging platform said in a statement late Wednesday.
According to Twitter, starting next week, the expanded policy will cover false claims suggesting that COVID-19 vaccines are being used to deliberately harm or control populations, as well as claims that vaccines are part of an intentional conspiracy.
"False claims which have been widely debunked about the adverse impacts or effects of receiving vaccinations, or false claims that COVID-19 is not real or not serious and therefore vaccinations are unnecessary," also fall under the expanded policy.
Tweets labeled under the expanded guidelines may link to authoritative public health information or the Twitter Rules to provide people with additional context and authoritative information about COVID-19.
Twitter's action came after reports surfaced earlier this week that Facebook is now sending notifications directly to users who like, share, or comment on posts containing COVID-19 misinformation.
According to a report by Fast Company, the social network is changing the way it reaches people who have encountered misinformation on its platform.
"The company will now send notifications to anyone who liked, commented on, or shared a piece of Covid-19 misinformation that was removed for violating the platform's terms of service," the report said.