Wednesday, 29 October 2014

Why the Samaritans' Radar is bone stupid.


The Samaritans have launched a Twitter bot - one of those annoying things that follow you for no reason other than that you once quoted a Beatles lyric or what have you - which will alert a person's followers whenever that person uses a word or phrase on a list identified as having connotations of potential suicide.  I don't deny that this is well intentioned, but it will nonetheless do more harm than good, so I'm going to deconstruct it here.
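To be clear about how crude the mechanism is: by all accounts Radar does nothing cleverer than checking each tweet against a fixed phrase list.  Here's a minimal sketch in Python of that kind of matching - the phrase list and the function name are my own inventions, since the actual list hasn't been published:

```python
# Hypothetical watch list -- Radar's real phrase list is not public.
WATCH_PHRASES = [
    "want to die",
    "hate myself",
    "can't go on",
]

def flags_tweet(text: str) -> bool:
    """Return True if the tweet contains any watched phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in WATCH_PHRASES)

# A tweet grumbling about the working week trips the filter just as easily
# as a genuine cry for help:
print(flags_tweet("Ugh, Mondays. I can't go on like this."))  # prints True
```

Note there is no context, no sentiment analysis, no account history - a simple substring match is all that stands between "bad day at work" and "your friend may be suicidal".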

1)  The Invasive Argument

  If you contact the Samaritans and they intervene then they are serving their purpose with your consent.  If you do not contact them, if you want nothing to do with them, and they intervene anyway, then that is an invasion.  Corporations can ask to be "whitelisted", i.e. the bot will ignore them, but in a shockingly arrogant display of paternalism individuals cannot.  You cannot in any way direct this thing to leave you alone.  If you are on Twitter then it deems you to be on its radar, and it will interfere in your business.

  We have this idea that all people who are contemplating ending their lives are lost souls in need of heroic salvation.  Some are, some aren't, but we cannot force this upon people from afar or we will ultimately achieve nothing.  Those who are genuinely bent on suicide will require something more than tweets to talk them down, if they can be talked down at all.  Those who want help, who want to be talked down, will as often as not phone the bloody Samaritans!

2)  The Data Protection Argument

  This bot will end up sitting on a great list of tweets it has sent to people.  Essentially it is gathering links to personal data into a coherent archive without either the consent of those whose data it links to or any lawful authority to do so without that consent.  It is also potentially alerting people to the likelihood that someone has a mental illness, which is arguably a breach of Data Protection, the Equality Act, and accepted counselling ethics, and it'd be a daft judge that would consider the Samaritans to be acting as anything other than a counselling agency.

  Consider also that many people are followed on Twitter by their employers!  Telling an employer - rightly or wrongly - that an employee is suicidal could end up costing that employee their livelihood!

3)  The Equality Act Argument

  Because disabled people, the mentally ill (as a subset of disabled), and LGBT people all have higher rates of attempted suicide and actual suicide than the rest of society, the negative effects of this bot will be disproportionately felt by these communities (and any others I've missed out), potentially leading to a civil breach of the Equality Act.

  If it inadvertently outs a person as transgender - which is highly likely, given that one in three transgender people attempt suicide - then that could arguably constitute a criminal breach of the Equality Act.  Furthermore:  a small subset of feminists who notably use feminism as a front for transphobic bigotry has a proven track record of hounding transgender people to suicide.  The Sandyford leak happening at the same time as the release of Radar essentially hands these people an automated hit list.  All they have to do is follow the email addresses contained in the leak, see who pings the Radar, and then lean on that person.  This is not a paranoid fantasy; it is a very real concern with genuine historical underpinnings.

4)  The Desensitisation Argument 

  If every time someone uses the language of bog-standard depression they get flagged to their mates as a suicide risk then their mates will turn the bot off or block it or what have you.  They'll grow sick of it and come to regard its warnings as spam, no different to the "Nigeria Letters".  When eventually the person is at genuine risk of suicide there will be a boy-who-cried-wolf effect in play and the warnings will likely go unheeded.

5)  The Own-Goal Argument

  Consider the arguments under 3 and 4, above, with suicidal people being encouraged or ignored.  Consider the argument under 2, with people's jobs being jeopardised (by the Samaritans, no less, whom they might otherwise have turned to after being laid off).  Consider the argument under 1, with vulnerable people feeling violated by the bot's interventions.  You have a person who is already at risk of suicide, who has had additional stressors, who has been dissuaded from reaching out to others, and whose friends have been desensitised to their plight.  You've turned a person who might waver into someone more determined.  This has the potential to increase the rate of suicide in Britain.

For all these reasons I see Radar as nothing but a menace.


  1. It's not a bot; it's an app, deployed by individual user accounts. It follows who that account follows.

    While many of the angles you cite are, indeed, problematic, this is really basic keyword mining from public posts. Apps like this already exist that scan ALL of Twitter (using the Twitter API, which is set up to allow this) and aggregate that data to try to extrapolate information about individuals. There is a store in America that uses data like this to figure out, with startling accuracy, whether or not a particular individual at a given address is pregnant. It then addresses adverts to their houses, trying to sell them baby stuff.

    I'm not saying this app is awesome or that we have to race to the bottom, but this app does really simple keyword matching. If we're that upset about this, we should care a lot about everyone else reading our feeds.

    1. I see your argument, Les, but it's different in part because of how serious suicide is.

    2. Made a blog post on this.