
Prevalence of self-harm reflects our social isolation

Since the late 1980s growing numbers of mental health professionals and media commentators in Britain and the USA have been concerned with a behaviour labelled as ‘self-harm’, ‘deliberate self-harm’ or ‘self-injury’. It is often seen as a secret or hidden practice, and it is almost always ‘on the rise’, especially among adolescent females. Most commonly it refers to self-cutting or self-burning, performed in order to relieve intolerable emotional tension or numbness.

In contrast, during the 1960s and ’70s, the term ‘self-harm’ generally referred to somebody ‘crying for help’ by taking an overdose (self-poisoning). Now, it predominantly means regulating emotional tension by self-cutting or -burning. Yet the ratio of overdosing to cutting in hospital statistics hasn’t changed very much, remaining around eight or nine to one in favour of self-poisoners. Why, in such a short space of time, have popular self-harm stereotypes shifted so dramatically?

The first thing to acknowledge in the face of this shift is that self-harm hasn’t always meant what we think it means. In the very recent past in Britain, ‘self-harm’ did not conjure up images of blood and cutting, but medication and overdosing. The ways in which we understand self-harm are both relatively recent and incredibly narrow.

This goes against some ideas of self-harm as timeless and almost mystical, which link it to religious self-flagellation, bloodletting, and even Tibetan tantric practices and the Passion of Christ – all of which focus on, or involve, bleeding.

The term ‘deliberate self-harm’ was proposed as a new label in 1975 at a hospital in Bristol. It was used to describe a group of patients, 92 per cent of whom had poisoned themselves (mostly with prescription or over-the-counter medication). It remains the case that hospital statistics for self-harm typically contain between 80 and 95 per cent self-poisoners, and only a small minority of self-cutters.

This situation dates back to the 1950s, when groups of psychiatrists became concerned about the number of people presenting at casualty departments having harmed themselves in such a way that death was unlikely. This was usually achieved by taking an overdose of medication (typically aspirin, or strong sedatives called barbiturates). The act was initially called ‘attempted suicide’ by psychiatrist Erwin Stengel, even though he thought that death was not what was being attempted. Later, other psychiatrists tried to improve on the term, calling it ‘self-poisoning’, ‘pseudocide’ or ‘propetia’ (from the Greek for ‘rashness’). These doctors agreed that the overdoses were not intended to cause death; instead they were a desperate, maladjusted attempt at communication, a ‘cry for help’ signalling to friends and family that the person was in distress. Taking an overdose became recognised as an attempt to communicate with a social circle or ‘significant other’.

Part of the change in stereotypes can be linked to changes in how hospitals assess people presenting at A&E [or the ER] having self-harmed. In the early 1980s the British government reviewed guidance on hospital self-harm assessment (mostly self-poisoning), recommending that it could be delegated out of the hands of psychiatrists. Self-poisoning studies by research psychiatrists therefore declined in number and receded in prominence. (This is not the same as saying that the number of people self-poisoning has decreased.) Before this change, many of these psychiatrists had collaborated with social workers, who did much of the work of reconstructing the social context for these overdoses: visiting patients at home after the event, interviewing family members and spouses, and bringing this information back to psychiatrists, who could then present the overdose as a result of these relationships. Once psychiatrists reduced the amount of patient assessment at A&E, these collaborations dwindled.

Self-cutting studies were largely unaffected by these changes, as they tended to come from analysis of small groups involved in individual counselling or in psychiatric hospitals, or, more recently, from community studies. Around the same time, self-cutting was given a boost in visibility by its inclusion (as a symptom of Borderline Personality Disorder) in the American Diagnostic and Statistical Manual of Mental Disorders, 3rd Edition (DSM-III) (1980).

However, this simply shifts the mystery. Why have stereotypes come to emphasise self-cutting to the point of almost eclipsing ideas of self-poisoning? This is despite the fact that studies coming out of hospitals still count self-poisoners alongside self-cutters. It is a change in stereotypes, not statistics. Much clinical work outside hospitals dealing with self-injury now focuses on practices involving the surface of the skin, to the exclusion of overdosing. The new edition of the influential American manual, DSM-5, includes a category of non-suicidal self-injury (NSSI) that specifically excludes actions not performed on the surface of the skin.

One explanation for this shift in stereotypes lies in broad political shifts that have occurred between the 1960s and the present. Particularly relevant here is what political theorists call ‘the collapse of consensus politics’ in Britain. This consensus prevailed from the end of the Second World War until the late 1970s, and describes a political environment in which the main political parties agreed on a number of things: the principle of state ownership of certain industries (such as energy and telecoms), and the state owning and running healthcare (the NHS) and public transport (British Rail).

This consensus was broken by the ascent of ‘neoliberal’ ideas – which focus on the individual and competition, distrust the state, and prefer the free market. These ideas are perhaps best summed up by Margaret Thatcher’s immortal words in a 1987 interview with Woman’s Own: that there was ‘no such thing as society’, only individuals and families.

One particular kind of self-harm (overdosing as a cry for help) is understood as an action communicating with others: a specific person, a group of friends, or even society in general. This stereotype of a ‘cry for help’ resonates a little less with the prevailing climate, fitting in less well with a focus on individuals. It also fades through reduced psychiatric attention and reduced social-work input – input which was crucial to understanding behaviour in its social context.

At the same time, another self-harm stereotype (self-cutting as emotional control) is based upon the idea of a self-regulating individual, and it comes to the fore in this environment. This newer understanding doesn’t need social workers; it relies instead upon assessments of the patient’s emotional state at the time of self-harm, something investigated intensively by counsellors.

This is also bound up with the relative decline of ‘social psychiatry’, which understands people much more in their social context. It has been pushed aside by psychiatric ideas that understand people in terms of their individual biology and brain chemistry, rather than their family relationships and social experiences.

It might well be asked how the privatisation of the British rail network (for example) could have any impact on whether pills or cutting are seen as self-harm. The question is a fair one, and the answer is partial and a little speculative. Because we are talking about broad stereotypes rather than statistical evidence, we need to zoom out and take a much broader view. (This doesn’t mean we forget practical shifts such as the decline in social-work collaboration or psychiatric assessment at A&E.)

The concepts that we use to understand our own and others’ behaviour form small parts of what we consider ‘common sense’. Common sense about racial difference in human beings (for example) was very different in 1900 from what it was in 1950, and different again by 2000. Common sense shifts over time – including what is generally meant by the term ‘self-harm’ – and it is related to broad social and political shifts. In the same way that the presentation of classic hysteria (with swooning, paralysis and ‘the vapours’) belongs to a certain social and cultural context, self-harm presentations are bound up with history, culture and politics. In a post-1980s environment that focuses upon the individual, individualised explanations predominate. In the 1960s, when there was a more communal, collective approach, it made more sense, to more people, to understand actions in terms of a social context.

Psychological labels and concepts that seem to have an independent existence (like ideas of self-harm) are thus intimately related to these larger shifts.

 

Chris Millard is a Wellcome Trust Medical Humanities Research Fellow at the Centre for the History of the Emotions at Queen Mary, University of London. He has worked on the history of self-harm (self-cutting and overdosing), the history of social work, and ‘parity of esteem’ between physical and mental health. He is currently writing about Munchausen syndromes: illness deception (Munchausen), child abuse (Munchausen by Proxy) and online health behaviours (Munchausen by Internet). He is the author of A History of Self-Harm in Britain: A Genealogy of Cutting and Overdosing (Palgrave, 2015).

