Opinion: Understanding social media and conflict
By Samidh Chakrabarti, Director of Product Management, Civic Integrity; and Rosa Birch, Director of Strategic Response, Facebook
At Facebook, a dedicated, multidisciplinary team is focused on understanding the historical, political and technological contexts of countries in conflict. Today we’re sharing an update on their work to remove hate speech, reduce misinformation and polarization, and inform people through digital literacy programs.
Last week, we were among the thousands who gathered at RightsCon, an international summit on human rights in the digital age, where we listened to and learned from advocates, activists, academics, and civil society. It also gave our teams an opportunity to talk about the work we're doing to understand and address the way social media is used in countries experiencing conflict. Today, we're sharing updates on: 1) the dedicated team we've set up to proactively prevent the abuse of our platform and protect vulnerable groups in future instances of conflict around the world; 2) fundamental product changes that attempt to limit virality; and 3) the principles that inform our engagement with stakeholders around the world.
We care about these issues deeply and write today's post not just as representatives of Facebook, but also as concerned citizens who are committed to protecting digital and human rights and promoting vibrant civic discourse. Both of us have dedicated our careers to working at the intersection of civics, policy and tech.
About the Team
Last year, we set up a dedicated team spanning product, engineering, policy, research and operations to better understand and address the way social media is used in countries experiencing conflict. The people on this team have spent their careers studying issues like misinformation, hate speech and polarization. Many have lived or worked in the countries we're focused on. Here are just a few of them:
Ravi, Research Manager: With a PhD in social psychology, Ravi has spent much of his career looking at how conflicts can drive division and polarization. At Facebook, Ravi analyzes user behavior data and surveys to understand how content that doesn't violate our Community Standards, such as posts from gossip pages, can still sow division. This analysis informs how we reduce the reach and impact of polarizing posts and comments.
Sarah, Program Manager: Beginning as a student in Cameroon, Sarah has devoted nearly a decade to understanding the role of technology in countries experiencing political and social conflict. In 2014, she moved to Myanmar to research the challenges activists face online and to support community organizations using social media. Sarah helps Facebook respond to complex crises and develop long-term product solutions to prevent abuse, for example by working out how to render Burmese content in a machine-readable format so our AI tools can better detect hate speech.
Abhishek, Research Scientist: With a master's in computer science and a doctorate in media theory, Abhishek focuses on issues including the technical challenges we face in different countries and how best to categorize different types of violent content. For example, research in Cameroon revealed that some images of violence being shared on Facebook helped people pinpoint, and avoid, conflict areas. Nuances like this help us weigh the ethics of different product solutions, like removing or reducing the spread of certain content.
Emilar, Policy Manager: Prior to joining Facebook, Emilar spent more than a decade working on human rights and social justice issues in Africa, including as a member of the team that developed the African Declaration on Internet Rights and Freedoms. She joined the company to work on public policy issues in Southern Africa, including the promotion of affordable, widely available internet access and of human rights both online and offline.
Ali, Product Manager: Born and raised in Iran in the 1980s and 90s, Ali and his family experienced violence and conflict firsthand during the eight-year war between Iran and Iraq. Ali was an early adopter of blogging and wrote about much of what he saw around him in Iran. He went on to earn a PhD in computer science while remaining deeply interested in geopolitical issues. His work on Facebook's product team lets him bridge technology and social science, identifying technical solutions that root out hate speech and misinformation while accounting for local nuances and cultural sensitivities.
As we work on these issues, civil society has given us invaluable input on our products and programs. No one knows more about the challenges in a given community than the organizations and experts on the ground, so we regularly solicit their input on our products, policies and programs. Last week, we published the principles that guide our continued engagement with external stakeholders.
Focus Areas
In the last year, we visited countries including Lebanon, Cameroon, Nigeria, Myanmar and Sri Lanka to speak with affected communities, better understand how they use Facebook, and evaluate what types of content might reduce polarization in these environments. These findings have led us to focus on three key areas: removing content and accounts that violate our Community Standards; reducing the spread of borderline content that has the potential to amplify and exacerbate tensions; and informing people about our products and the internet at large. To address content that may lead to offline violence, our team is particularly focused on combating hate speech and misinformation.
Hate speech isn’t allowed under our Community Standards. As we shared last year, removing this content requires supplementing user reports with AI that can proactively flag potentially violating posts. We’re continuing to improve our detection in local languages such as Arabic, Burmese, Tagalog, Vietnamese, Bengali and Sinhalese. In the past few months, we’ve been able to detect and remove considerably more hate speech than before. Globally, we increased our proactive rate — the percent of the hate speech Facebook removed that we found before users reported it to us — from 51.5% in Q3 2018 to 65.4% in Q1 2019 [refer image 1].
We’re
also using new applications of AI to more effectively combat hate speech
online. Memes and graphics that violate our policies, for example, get added to
a photo bank so we can automatically delete similar posts. We’re also using AI
to identify clusters of words that might be used in hateful and offensive ways,
and tracking how those clusters vary over time and geography to stay ahead of
local trends in hate speech. This allows us to remove viral text more quickly.
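The post doesn't say how the photo bank decides that two posts are "similar." One common approach is perceptual hashing, sketched below under that assumption: each removed image is reduced to a short fingerprint, and a new upload whose fingerprint sits within a small Hamming distance of a banked one is treated as a near-copy. The file names and the threshold are hypothetical, and the 8x8 average hash is a simplification of whatever matching Facebook actually uses.

```python
# Sketch of a perceptual-hash "photo bank" (an assumed technique, not
# Facebook's disclosed implementation).
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint that survives resizing and re-encoding."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # One bit per pixel: brighter than the mean or not.
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical bank built from previously removed memes.
photo_bank = {average_hash(p) for p in ["removed_meme_1.png", "removed_meme_2.png"]}

def matches_bank(upload_path: str, threshold: int = 5) -> bool:
    """Flag an upload as a near-copy of any banked image."""
    h = average_hash(upload_path)
    return any(hamming(h, banned) <= threshold for banned in photo_bank)
```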
Still, we have a long way to go. Every time we want to use AI to proactively detect potentially violating content in a new country, we have to start from scratch, sourcing a high volume of high-quality, locally relevant examples to train the algorithms. Without this context-specific data, we risk missing the language nuances that affect accuracy.
Reducing Misinformation and Borderline Content
Globally, when it comes to misinformation, we reduce the spread of content that's been deemed false by third-party fact-checkers. But in countries with fragile information ecosystems, false news can have more serious consequences, including violence. That's why last year we updated our global violence and incitement policy: we now remove misinformation that has the potential to contribute to imminent violence or physical harm. To enforce this policy, we partner with civil society organizations that can help us confirm whether content is false and has the potential to incite violence or harm.
We’re also making fundamental changes to our products to address virality and reduce the spread of content that can amplify and exacerbate violence and conflict. In Sri Lanka, we have explored adding friction to message forwarding so that people can only share a message with a certain number of chat threads on Messenger. This is similar to a change we made to WhatsApp earlier this year to reduce forwarded messages around the world. It also delivers on user feedback that most people don’t want to receive chain messages [refer Image 2].