The Christchurch shootings, and their live broadcast, shocked the world. The social media companies immediately had to roll out two different crisis communications plans to deal with two different things. First, how to handle the unfolding global crisis and restrict access to the problematic footage. Second, how to defend their reputations against fierce criticism for not being fast enough at the first.
Media companies have always made good case studies for how crisis communications plans and training are put into practice – I was a media relations manager for BBC News during September 11 and the subsequent war in Afghanistan, when crisis communications was part of my day-to-day routine.
Social media companies are even better case studies, given their ‘cause’ and ‘effect’ involvement.
Tom Watson, Deputy Leader of the Labour Party and a man steeped in digital culture, was one of many people calling out the apparent inability of YouTube in particular to contain the spread of the Christchurch footage.
There has been a lot written about what the social media companies did, or did not do, to stop the spread of the live-streamed video – including these excellent pieces in Wired and the BBC.
But the long and short of it is that they failed.
This is a watershed moment for social channels as the global consensus is that they should have done better. Regulation is rightly just around the corner. (How effective it’ll be is another thing entirely).
Which makes this New Yorker piece, published more than a month after the event, even more interesting.
Facebook's crisis communications plans
The New Yorker reports that Facebook has a three-step crisis management protocol – not too dissimilar from the four-step crisis management protocol we have developed for our clients.
But what surprises me is the relatively slow speed at which, according to the New Yorker piece, Facebook appeared to operate.
Facebook’s ‘understand’ phase started immediately (or 17 minutes after the video ended, according to the Guardian and Wired), and staff worked around the clock, ‘following the sun’ – as our clients’ brand- and crisis-monitoring operations routinely do – to identify what was being shared online.
Their ‘isolate’ phase started within six hours of the shooting. In other live-streamed situations the relevant team has to tread carefully – removing posts which gratuitously share or advocate violence, but leaving those which are critical of it, or deemed ‘newsworthy’.
But in this case the New Zealand Government asked Facebook to delete everything. Which they tried to do – but failed, partly because those reposting the video were exploiting the ‘hash’ technology in Facebook’s own AI.
Facebook's AI wasn't up to scratch
Even though the ‘source’ video was given a digital fingerprint, which meant that duplicates could be removed automatically, some people had realised that re-filming the initial video from a slight angle, or screen-grabbing anything less than the whole screen, would defeat the AI. So they did.
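Facebook’s real system relies on AI-generated perceptual fingerprints rather than a plain cryptographic hash, but a minimal sketch (all names and data here are illustrative, not Facebook’s actual pipeline) shows why any exact-match fingerprint breaks down the moment a near-duplicate differs by even one byte:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Illustrative 'fingerprint': SHA-256 of the raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins for video data.
original = b"frame data of the source video"
modified = b"frame data of the source video."  # one byte appended, like a crop or re-encode

# A bit-for-bit copy matches the stored fingerprint...
print(fingerprint(original) == fingerprint(original))  # True

# ...but any tiny change (angle shift, partial screen-grab) produces
# a completely different hash, so the duplicate slips through.
print(fingerprint(original) == fingerprint(modified))  # False
```

This is why matching systems have to use perceptual similarity rather than exact hashes – and why even those can be defeated by sufficiently altered copies.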
Of the 1.5 million copies removed from Facebook within the first 24 hours, only 1.2 million were removed automatically at the point of upload.
Facebook’s ‘enforcement’ phase is quoted as starting within 36 hours of the incident.
Is it just me, or do all of those times seem too slow?
When it comes to crisis management plans, Facebook knows well in advance which issues it will need to prepare for (live-streaming), and what it will be criticised for (being too slow, failing to prevent copycat streams, restricting freedom of speech).
The best way to deal with a crisis is nearly always to show how you have adapted your processes based on what you’ve learned. And the sooner you do that, the more sympathy people have with you.
But if Facebook really took so long to move from phase 1 to phase 3 – a timeframe that seems to have been similar to YouTube’s – I’m really not surprised they attracted the criticism that they did.
Tougher regulations are overdue. In fact, they may well be just around the corner.