#notracist: The Denial of Online Racism

by Sanjay Sharma

16 Aug 2016

finally got a new boss today. Hes under 50 good guy has social skills totally white with zero accent. I am so pleased #notracist


Id rather hit my face into a solid wall then hear my Asian maths teacher speak at the board #notracist #justconfused


I literally cant stop eating watermelon. & Im not even black. #notracist#lol



How do people say racist things and simultaneously refute malicious intent? Recently, one of my case studies of digital racism has focused on the hashtag #notracist – see the examples of Twitter messages above – exploring how users on social media ‘publicly’ disavow their expressions of racism, using either shared humour or so-called real-life observations to justify their stance. The sentiment “I’m not racist, but...” is increasingly heard in a climate in which public expressions of explicit racism (as well as misogyny and homophobia) as hate speech have become less acceptable in mainstream society. Racism denial captures everyday forms of micro-aggression which often escape our attention, yet create the conditions for legitimating cultures of online hate. The study highlights how seemingly privatised expressions of racism are entangled with their public modes of denial.


Hate appears to be on the rise the more we participate in social media platforms. A recent campaign, Reclaim the Internet, championed by Yvette Cooper MP, aims to bring together tech companies, think tanks, politicians, the police, educationalists, journalists and young people to develop policies and practices against all forms of online abuse. The ease of connecting and communicating to an imagined audience, the potential anonymity of users and the spontaneous formation of online hate mobs have created a hostile web, especially for women and racialized minority groups. However, it is naive to characterise the Web simply as a hateful, unregulated space, because doing so obscures the different kinds of online hostility. These range from trolling, flaming and bullying to abuse and threats, and can be perpetrated by individuals or groups with diverse sets of motivations and capacities. The fact that online hate takes many forms means that there is no simple or single solution.


The #notracist study is the first to examine everyday expressions of racism denial on Twitter. It focuses on this popular social media platform particularly because it is often singled out as fuelling online hate and abuse – to the extent that its CEO admitted that not enough had been done to tackle the problem on the platform. Our study identified Twitter messages that included the hashtag #notracist. Hashtags have become synonymous with Twitter as a means of promoting and sharing messages in the vast data-stream of social media. We discovered that the #notracist hashtag was used in multiple ways, revealing the varied functioning of everyday racism in social media. Moreover, this finding suggests that the social media platform itself – especially the hashtag, in the case of Twitter – plays a key role in shaping how a message is formed and publicly communicated.


We found that the #notracist hashtag punctuated a user’s message as a declarative stance, denying that their tweet had racist intent. As a hashtag, #notracist also appeared to create an online affiliation amongst users who were mostly unknown to each other, joined in their justification of racism denial. Moreover, in our dataset – approximately 25,000 tweets collected over an eight-month period – it was remarkable that many users included more than one hashtag alongside #notracist, such as #fact, #truth, #justsayin/g, #funny, #lol and #joke. Adding more than one hashtag to a tweet is actually uncommon on Twitter: these users are conspicuously cognisant of breaching social norms. Analysing how additional hashtags are included in tweets reveals the intricate practices of online racism denial.
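To give a concrete sense of the kind of co-occurrence counting this analysis rests on, here is a minimal sketch in Python. It is not the study’s own code (which is not published with this post); the sample tweets, the regular expression and the variable names are illustrative assumptions only.

```python
import re
from collections import Counter
from itertools import combinations

# Illustrative sample; the study's dataset contained roughly 25,000 tweets.
tweets = [
    "I literally cant stop eating watermelon. & Im not even black. #notracist #lol",
    "Id rather hit my face into a solid wall ... #notracist #justconfused",
]

HASHTAG_RE = re.compile(r"#\w+")

co_counts = Counter()    # how often each hashtag appears alongside #notracist
pair_counts = Counter()  # how often two co-occurring hashtags appear together

for text in tweets:
    tags = {t.lower() for t in HASHTAG_RE.findall(text)}
    if "#notracist" not in tags:
        continue
    others = sorted(tags - {"#notracist"})
    co_counts.update(others)
    pair_counts.update(combinations(others, 2))

print(co_counts.most_common(10))
```

Run over a full dataset, the first counter ranks the hashtags most often attached to #notracist, while the second records which of those hashtags tend to appear together in the same message.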


Figure 1 below presents a visual mapping of the most frequent hashtags occurring together with #notracist. Hashtags are represented by nodes in the visualisation, and the branches indicate how they are related: the closer hashtags appear together along a branch, the more likely they are to co-occur within the same message. The hashtags can be seen to form groups, as indicated by the inner and outer red radials. Beyond the claim of being #notracist, racism denial is further justified either by including an additional ‘humour’ type hashtag such as #funny or #lol, which involves joke-telling, or a ‘truth’ type hashtag such as #fact or #iswear, based on the claim of a real-world observation.




Figure 1: Visual map of relationships between hashtags within the #notracist dataset. Labels are given to hashtags which feature in >1% of tweets.
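The post does not say which tool produced Figure 1. As a rough sketch of how a branch-based co-occurrence map of this kind could be generated, the snippet below applies average-linkage hierarchical clustering (SciPy) to a hashtag distance matrix. The pair counts and the distance formula are invented placeholders, not values from the dataset or the study’s actual method.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

# Invented co-occurrence counts; in practice these would come from the
# pair_counts Counter built over the full tweet dataset.
pair_counts = {
    ("#funny", "#lol"): 120, ("#funny", "#joke"): 80, ("#joke", "#lol"): 95,
    ("#fact", "#truth"): 12, ("#fact", "#justsayin"): 7, ("#justsayin", "#truth"): 5,
    ("#fact", "#lol"): 3,
}
tags = sorted({t for pair in pair_counts for t in pair})
index = {t: i for i, t in enumerate(tags)}

# Symmetric distance matrix: the more often two hashtags co-occur, the closer they sit.
dist = np.ones((len(tags), len(tags)))
np.fill_diagonal(dist, 0.0)
for (a, b), n in pair_counts.items():
    d = 1.0 / (1.0 + n)  # illustrative distance, not the study's measure
    dist[index[a], index[b]] = dist[index[b], index[a]] = d

# Average-linkage clustering; the dendrogram's branches group hashtags that tend
# to appear together, analogous to the inner and outer radials of Figure 1.
Z = linkage(squareform(dist, checks=False), method="average")
dendrogram(Z, labels=tags)
plt.show()
```

With realistic counts, frequently shared hashtags such as the humour set would merge early along short branches, while rarely shared ones would join the tree late – the same inner/outer pattern discussed below.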


The humour-type hashtags appear clustered together (inner radial), and the truth-type hashtags are more dispersed around the edges (outer radial). The different mappings of the hashtags represent the varied strategies of social media racism denial. In the first case, including a ‘humour’ hashtag in a #notracist tweet appears to rely on the common tactic of joke-telling as a defensive measure against accusations of racism. It is interesting to observe, though, that a relatively small set of the same humour hashtags is shared amongst users in their tweeting practices (which explains the dense cluster of hashtags appearing in the inner radial). A culture of racialized online humour – as indicated by commonly occurring humour hashtags – suggests that when communicating online, users self-assuredly imagine a ‘real’ audience being in on the joke.


In the second case, including a ‘truth’ type hashtag alongside a #notracist message is less common than including ‘humour’ hashtags, although a wider range of ‘truth’ type hashtags is used. Furthermore, these hashtags are not shared as widely as ‘humour’ type hashtags (which explains why ‘truth’ type hashtags appear on the outer radial of Figure 1). It is fruitful to question why ‘Truth’ as a mode of online racialized denial talk draws on a diverse array of infrequently used hashtags, whereas ‘Humour’ draws on a relatively narrow set of commonly used hashtags.


‘Truth’-based messages include hashtags that seek to make a user’s semantic intention explicit. In comparison to humour, truth-based hashtags are largely devoid of a shared online culture. A strategy of intensifying a user’s stance by adding this type of hashtag seeks to legitimise the possible breaching of what is publicly unacceptable: to say something seemingly racist yet claim it’s just how things are. Nonetheless, as indicated by the creation of many truth-type hashtags, this practice of racism denial is a fraught activity for users, because their ‘imagined audience’ appears largely unknown. To put it more simply, social media users find it less onerous to justify their public racism by relying on a shared culture of online humour than on self-professed real-world observations.


Much of the public and media concern about online hate dwells on its explicit manifestations. This isn’t surprising, as extreme and highly visible forms of hate cause considerable distress to the individuals unfortunate enough to experience them. Mainstream accounts of internet racism as online hate frame it as an aberration of social norms and acceptable conduct. But this framing limits our grasp of how racism pervades everyday social relations. Online racism operates across a spectrum, from the extreme to the everyday, and by understanding the breadth and depth of its manifestations we can begin to tackle it.


The #notracist study highlighted that online racialized expressions are symptomatic of how racism is both a social and a technological phenomenon. It found that everyday racism denial is manifested through different deployments of racialized hashtags on Twitter. Rather than simply blaming pathological individuals/groups or the medium, online racism is produced and propagated by an interplay between users and the technologies of social media platforms. And the blurring of public and private, real and virtual spaces suggests that looking for a solution to everyday social media racism while maintaining these divisions is a flawed strategy.


Dr Sanjay Sharma is a Senior Lecturer in the Department of Social Sciences, Media & Communications at Brunel University, London. This blog post is based on the study #notracist: Exploring Racism Denial Talk on Twitter (full-text), funded by a British Academy Small Grant. Sanjay’s research interests include exploring networked racisms and social media. He is a founding editor of the open access darkmatter Journal.

