The right to be forgiven will soon be more important than the right to be forgotten
It quickly struck me that this was a very modern inversion, because we’re constantly judged by robots.
We’re observed and recorded everywhere we go online thanks to cookies, certificates and loyalty codes. All of that goes into databases – “vast and cool and unsympathetic” – waiting for data scientists to algorithmically augur the entrails of our comings and goings, the better to render judgement on our potential for future acts of commerce.
We want to believe these judgements have been constructed along purely capitalist lines. But it’s becoming apparent that what’s really being judged is our values.
To make that judgement, robots use techniques developed in the early years of the century, as security agencies worldwide worked out how to winnow the chaff from an increasing torrent of communications traffic.
They soon learned how to qualify traffic not just by content – certain keywords we’re all familiar with – but, even more significantly, by studying the networks forged by senders and receivers, modelling implicit ‘social graphs’ years before Mark Zuckerberg ever used the term. Sharing indicates affinity, and affinity for one set of values implies affinity for other values – values that may be at odds with the security of the state.
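The idea that shared correspondents imply affinity can be sketched in a few lines. This is a toy illustration only – the names and the log are invented, and real traffic-analysis systems are vastly more sophisticated – but it shows the basic move: build a contact graph from who-messaged-whom records, then score two people by the overlap of their contact sets.

```python
from collections import defaultdict

def build_contact_graph(messages):
    """Build an undirected contact map from (sender, receiver) pairs."""
    contacts = defaultdict(set)
    for sender, receiver in messages:
        contacts[sender].add(receiver)
        contacts[receiver].add(sender)
    return contacts

def affinity(contacts, a, b):
    """Jaccard similarity of two people's contact sets: shared
    correspondents divided by total distinct correspondents."""
    shared = (contacts[a] & contacts[b]) - {a, b}
    total = (contacts[a] | contacts[b]) - {a, b}
    return len(shared) / len(total) if total else 0.0

# Hypothetical traffic log: who messaged whom.
log = [("alice", "bob"), ("alice", "carol"),
       ("bob", "carol"), ("dave", "erin")]
graph = build_contact_graph(log)
print(affinity(graph, "alice", "bob"))   # → 1.0 (both talk to carol)
print(affinity(graph, "alice", "dave"))  # → 0.0 (no shared contacts)
```

Note that no message content is inspected at all: metadata alone is enough to cluster people by association, which is exactly why this technique unsettles civil libertarians.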
Although it has always been denied, Snowden made it clear that the USA and each of the other members of the “Five Eyes” intelligence-sharing alliance read the communications traffic of partner states.
The same techniques used to detect hostile intentions can be targeted toward any behavior deemed sufficiently anti-social, whether actually criminal or simply that which conflicts with the prerogatives of the state.
Nearly two years ago The Register reported that data analytics tools developed at Cardiff University would be deployed in Los Angeles, providing a sort of “pre-crime” detection capacity for hate crimes. Pre-crime is a coinage from Philip K. Dick’s The Minority Report, popularised by Spielberg’s film adaptation, and useful because it sounds so much friendlier than “thought police”. Yet the methodology and aims are so nearly the same that Winston Smith would know it for what it is.
This framework of surveillance-analytics-insights-controls perfectly describes China’s “social credit” system, wherein each of the Middle Kingdom’s billion connected adults has a rating drawn from the performance of their public role. Too many censored posts criticising President-for-life Xi on microblog Weibo? Good luck with that university application, or with buying a better house!
Such state shunning works better in the minds of China’s citizens than any fear of the Ministry of Love and engenders a capacity for self-censorship (or, if you prefer, “doublethink”). Individuals instinctively avoid doing things that would cause them to lose social credit in the same way that they might avoid walking out into traffic.
If that sounds horrible, remember that for a generation we’ve all lived with “credit ratings” attached to our identities. Those ratings circumscribe a particular set of possibilities – where we can afford to live, or learn, or work – not all that different from a “social credit” system, if less blatantly political. Yet we all know of people who, through bad luck or bad data, have run afoul of the credit system, and have suffered as a result. That’s a big reason why identity theft scares us so much – we know that a carefully cultivated rating can be destroyed in an instant, through no fault of our own.
As we move toward rating everyone for every service they deliver – consider the number of times a day you’re offered a one-to-five-star “help us improve our service” screen – we need to remember that ratings hide as much as they illuminate. Data scrubbed of story can be cruel and unforgiving. A machine can never provide a truly human insight, nor the sort of forgiveness we will need in a world where every mistake and every transgression becomes part of a permanent record. The EU may soon find its “right to be forgotten” superseded by a “right to be forgiven”. ®