One of the major concerns of social media research relates to ethics. The unprecedented abundance of data available online constitutes an enormous resource for researchers, but equally poses some serious and pressing challenges. A workshop held in Bristol on the 23rd of April, organised by the Association for Research Ethics, brought together four social media researchers to discuss some of these issues, and to propose means of negotiating this ethical terrain. Two of these speakers – Farida Vis and Anne Burns – are members of the Visual Social Media Lab, and this workshop was a great opportunity to share some of the thought processes and principles that had fed into our own ethical framework for the Imaging Sheffield project.

Carl Miller began the day by discussing his work at Demos on the analysis of social media data in the run-up to the UK general election. He discussed how audience reactions to politicians' speeches – in the form of sentiment analyses of tweets – could potentially be incorporated, in real time, into the speech itself. This insight into the political utilisation of data highlights an ethical concern beyond the standard privacy debate: the collection and presentation of material can have an immediate impact on political debate and policy. Given the considerable implications of this kind of work, researchers will need to think very carefully about how they gather and present data.

Farida Vis outlined a series of case studies that explored some of the potential ethical pitfalls of social media research. In discussing two projects, Vis identified the problem of reproducing subjects' usernames, either in connection with various forms of content (in the case of Reading the Riots) or through mapping the connections between people (as in her study of the YouTube video Fitna). In particular, Vis noted the need to consider different types of public visibility: the connections she identified between YouTube users were not immediately accessible, but only available through the site's API.

Sanjay Sharma’s work addresses ambient forms of racist talk on Twitter, focusing on the discussion around hashtags such as #notracist. His talk explored the co-occurrence of certain themes with the #notracist hashtag, principally claims of veracity or declarations of humorous intent.

Sharma also asserted the need to study the ‘long tail’ of racist online discussion, in that the everyday and mundane forms of racism are just as pernicious as the more high-profile, and more frequently researched, instances of explicit and widely-circulated racist speech.

But most importantly, Sharma’s work on this topic provides a useful case study for the use of social media data, in that he stripped the material of its identifying characteristics. As I argued in this blog post, even subjects whose behaviour might be viewed as morally reprehensible must have their identities anonymised by researchers. Importantly, Sharma shows here that ethical research need not compromise data richness, complexity or integrity.
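To give a concrete sense of what stripping identifying characteristics can look like in practice, here is a minimal, purely illustrative sketch (not Sharma's actual method) that replaces Twitter handles with stable salted pseudonyms, so conversational structure stays analysable while identities are masked. The function name, salt value and pseudonym format are all assumptions made for the example.

```python
import hashlib
import re

def pseudonymise(text, salt="example-salt"):
    """Replace @usernames with a stable salted hash.

    The same handle always maps to the same pseudonym, so threads and
    reply networks remain linkable in analysis without exposing who
    the speakers are. The salt should be kept secret by the researcher.
    """
    def _repl(match):
        handle = match.group(1).lower()  # case-insensitive: @Alice == @alice
        digest = hashlib.sha256((salt + handle).encode("utf-8")).hexdigest()
        return "@user_" + digest[:8]    # short, non-reversible pseudonym
    return re.sub(r"@(\w+)", _repl, text)

# Example: the handles are masked, the hashtag and content are preserved.
print(pseudonymise("@Alice replied to @Bob: it's #notracist, just a joke"))
```

A design point worth noting: hashing with a researcher-held salt, rather than simply deleting usernames, preserves the data's relational richness (who talks to whom) while preventing casual re-identification via search.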

My own talk addressed some of the issues I raised in this earlier blog post, relating to the uncomfortable overlaps between some instances of social media research and involuntary pornography, in that both phenomena conceptualise privacy as simplistic, binary and ultimately the subject’s responsibility. As Vis identified earlier, reproducing data alongside subjects’ usernames is problematic in several ways. I followed on from this by arguing that if we, as researchers, assume that material visible online is ours for the taking, then we are reinforcing power dynamics that sustain social inequality. To demonstrate this, I played a video produced by the Ohio commission that presents a fairly explicit, and deeply uncomfortable, view of the (young, female) subject’s responsibility for safeguarding their own data. I also cited research by the dating site OkCupid claiming that all Internet users are subject to research all the time, and that people should simply accept this. These two stances – the subject as completely responsible, yet also utterly powerless – demonstrate some of the contradictions surrounding the politics of data privacy. Ethical social media research needs to negotiate a path between these extremes, in order to avoid not just individual instances of harm, but also the reproduction of problematic narratives of either corporate entitlement or individual culpability.

‘Picturing the Social: Transforming our Understanding of Images in Social Media and Big Data research’ is an 18-month research project that started in September 2014 and is based at the University of Sheffield in the United Kingdom. It is funded through an ESRC Transformative Research grant and is focused on transforming the social science research landscape by carving out a more central place for image research within the emerging fields of social media and Big Data research. The project aims to better understand the huge volumes of images that are now routinely shared on social media and what this means for society. It involves an interdisciplinary team of seven researchers from four universities as well as industry, with expertise in: Media and Communication Studies (Farida Vis and Anne Burns, University of Sheffield), Visual Culture (Simon Faulkner and Jim Aulich, Manchester School of Art), Software Studies and Sociology (Olga Goriunova, Warwick University), and Computer and Information Science (Francesco D’Orazio, Pulsar, and Mike Thelwall, University of Wolverhampton). The project is part of the Visual Social Media Lab.