The company said bot accounts "can bring a lot of value
to the service," but acknowledged that "it can be confusing to people
if it's not clear that these accounts are automated."
Twitter has faced years of calls from misinformation
researchers to disclose more information about bots, which have been used to
amplify influence operations and make certain narratives appear more popular on
its site.
It started requiring developers to identify automated
accounts as bots in March, but resisted pressure to apply a designated label,
saying as recently as May that "calls for bot labeling don't capture the
problem we're trying to solve."
Twitter also said on Thursday that it would build a new
"memorialised account" type in 2021 for people who have died.
Abuse of such accounts has likewise featured in
information campaigns. In one case documented last year by academic Marc Owen
Jones, the verified account of an American meteorologist who died of cancer in
2016 began tweeting pro-Saudi government content in Arabic two years later.
Twitter announced last month that it would restart its
verification programme early next year, after pausing submissions in 2017 amid
criticism over how it awarded the blue check-mark badges used to authenticate
the identity of prominent accounts.
It said it would begin removing verified badges from
inactive and incomplete accounts that fail to adhere to the new guidelines as
of January 20, 2021, although it would leave up inactive accounts of people who
are no longer living while working on the new memorial feature.
Reuters