A new study published on 21 August in Nature attempts to answer the baffling question of how online hate groups manage to thrive on social media platforms. The results could also help answer an even more important question: what measures can be taken to reduce, or even eliminate, their presence?
Social media have enabled a world of interconnectedness. But with this connectivity come opportunities for less virtuous organisations to spread their messages, and a worldwide platform for recruiters seeking impressionable minds to join their cause.
The adverse events linked to online hate and extremist narratives include a recent surge in hate crimes, an alarming increase in teen suicide rates, and the incitement of mass shootings, stabbings, and bombings. Indeed, social media outlets open the door to global recruitment by extremist groups.
Understanding hate-community dynamics could be the key to effectively reducing such behaviours. So, in a compelling study, the researchers looked at the dynamics of online hate communities on multiple social media platforms.
The team of researchers from George Washington University and the University of Miami used mathematical modelling to examine these so-called hate clusters (online pages or groups of individuals with similar views, interests, or purposes) on two popular platforms, Facebook and the Russian social network VKontakte, over several months.
Each cluster contains links to other communities or clusters that users can join, so the authors were able to track how and when members of one cluster also joined others. They found that hate clusters are highly resilient.
In particular, they discovered that when hate groups are “attacked” — for example, a site being removed by a platform administrator — the clusters quickly repair themselves and the network rapidly rewires itself.
This rewiring is mainly facilitated by the strong bonds between users who belong to multiple clusters, which the authors suggest are analogous to strong covalent bonds in chemistry. Sometimes, two or more small clusters may even merge into a larger cluster.
In this way, banning hate content on a single platform only aggravates these online hate communities and, worse yet, promotes the creation of new clusters that often go undetected, allowing them to thrive away from the watchful eye of platform policing.
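The rewiring dynamic can be illustrated with a minimal toy simulation (the user counts, cluster counts, and membership rule below are all hypothetical, not taken from the study): users who belong to several clusters survive any single takedown and act as the bridges along which displaced members regroup.

```python
import random

random.seed(42)  # make the toy run reproducible

# Hypothetical toy model: 300 users, 20 hate clusters; each user joins
# one to three clusters at random, so some users bridge several clusters.
membership = {u: set(random.sample(range(20), random.randint(1, 3)))
              for u in range(300)}

def members(cluster):
    """Users currently belonging to the given cluster."""
    return {u for u, cs in membership.items() if cluster in cs}

banned = 0                      # the platform removes cluster 0
displaced = members(banned)

# Members who also belong to other clusters survive the ban and can pull
# fellow members into their remaining clusters -- the multi-membership
# bonds the study likens to covalent bonds in chemistry.
bridges = {u for u in displaced if membership[u] - {banned}}

print(f"cluster {banned}: {len(displaced)} members displaced, "
      f"{len(bridges)} still linked to the network through other clusters")
```

Because many displaced members retain at least one other membership, a single takedown rarely disconnects them from the wider hate network, which is the resilience the authors observed.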
Based on the analysis, the authors propose four policies to reduce hate content on online social media:
- Banning relatively small hate clusters, rather than removing the largest online hate cluster, since targeting larger clusters only leads to the establishment of new ones that form out of the many smaller groups.
- Banning a small number of users selected at random from online hate clusters. This can help avoid outrage and complaints that social media platforms may be attempting to suppress free speech.
- Promoting the organization of clusters of anti-hate users, which could serve as a ‘human immune system’ to fight and counteract hate clusters.
- Introducing artificial groups of users to encourage interactions between hate clusters that hold opposing views.
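The reasoning behind the first policy can be sketched with hypothetical cluster sizes (illustrative numbers only, not data from the study): removing the single largest cluster leaves behind a reservoir of small clusters that can merge into a replacement, whereas banning many small clusters drains that reservoir.

```python
# Hypothetical cluster sizes (users per hate cluster); illustrative only.
sizes = [500, 320, 200, 40, 35, 30, 28, 25, 22, 20, 18, 15, 12, 10, 8, 5]

# Strategy A: ban the single largest cluster.
removed_a = max(sizes)                                    # 500 users removed at once
merger_pool_a = sum(s for s in sizes if s < 50)           # small clusters left to merge

# Strategy B: ban the ten smallest clusters instead.
ascending = sorted(sizes)
removed_b = sum(ascending[:10])                           # fewer users removed outright...
merger_pool_b = sum(s for s in ascending[10:] if s < 50)  # ...but the merger reservoir shrinks

print(f"ban largest:  {removed_a} removed, {merger_pool_a} users left in small clusters")
print(f"ban smallest: {removed_b} removed, {merger_pool_b} users left in small clusters")
```

On these toy numbers, banning the giant removes more users in one stroke yet leaves a 268-member reservoir of small clusters poised to merge into a new large one, while the small-cluster strategy cuts that reservoir to 105.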
However, the authors also recommend proceeding with caution in light of ever-increasing privacy concerns. Moreover, the advantages and disadvantages of implementing each policy must be carefully assessed.
(1) Johnson, N.F. et al. Hidden resilience and adaptive dynamics of the global online hate ecology. Nature (2019). DOI: 10.1038/s41586-019-1494-7