Screenshots shared on Twitter showed a notice asking
"Are you concerned that someone you know is becoming an extremist?"
and another that alerted users "you may have been exposed to harmful
extremist content recently." Both included links to "get
support."
The world's largest social media network has long been under
pressure from lawmakers and civil rights groups to combat extremism on its
platforms, including U.S. domestic movements involved in the Jan. 6 Capitol
riot, in which groups supporting former President Donald Trump tried to stop
the U.S. Congress from certifying Joe Biden's victory in the November election.
Facebook said the small test, which is limited to its main
platform, was running in the United States as a pilot for a global approach to
preventing radicalization on the site.
"This test is part of our larger work to assess ways to
provide resources and support to people on Facebook who may have engaged with
or were exposed to extremist content, or may know someone who is at risk,"
said a Facebook spokesperson in an emailed statement. "We are partnering
with NGOs and academic experts in this space and hope to have more to share in
the future."
It said the efforts were part of its commitment to the
Christchurch Call to Action, a campaign by major tech platforms to counter
violent extremist content online, launched after a 2019 attack in New Zealand
that was live-streamed on Facebook.
Facebook said that in the test it was identifying both users who
may have been exposed to rule-breaking extremist content and users who had
previously been the subject of its enforcement actions.
The company, which has tightened its rules against violent
and hate groups in recent years, said it proactively removes some content and
accounts that violate its rules before the material is seen by users, but
that other content may be viewed before enforcement action is taken. – Reuters