Written by Adam Waters
Director BFBS Academy and Creative
The AI nightmare is already here. Not in the form of rampaging robots or sinister cyborgs, but in a way that affects us every day. Society-changing algorithms are unleashed by powerful corporations that can neither control them nor understand their implications. The genie is already out of the bottle. There is no point arguing over whether AI could one day end up controlling society... it already does.
Policy makers do not understand what’s changed. We are trying to structure our laws and societies around thinking that is decades out of date.
Nick Bostrom’s ‘Superintelligence’ is a great place to start reading about AI. It opens with a fable – sparrows long for an owl to help make their lives easier. But they don’t know how to tame one, or even where to find one. Some of the sparrows go off in search of an owl egg, whilst the others are left behind, hoping they can figure out how to tame an owl before the searchers return.
When it comes to thinking about AI there are two extremes -
Techno-skeptics, who believe that it’s simply not possible for humanity to create something so advanced. Andrew Ng, formerly Chief Scientist at Baidu, says that fearing the rise of killer robots is like ‘worrying about overpopulation on Mars.’ We can’t map our own brains, or even reliably make simple-to-use software (I’m writing this in Microsoft Word. Imagine what will happen if I try moving a photo around in this document). Why worry about it?
Then there are those who believe that AI poses one of the gravest risks to humanity: that AI will become a digital dictator (Harlan Ellison’s horrifying ‘I Have No Mouth, and I Must Scream’ is worth a read), will simply have no interest in nor need for humans, or will be so badly programmed that it replaces all matter in the universe with paperclips. Some suggest AI could instead be a kind of benevolent overlord, programmed to make humans as happy as possible.
I think we’re somewhere very different.
Terrifying ideas about AI are always tempting for newspaper editors. It’s easy to leap to these extreme science fiction scenarios. The truth is more subtle.
AI is often used as shorthand for several different concepts. Canny marketers use it to describe machine learning. When I type ‘red cup’ into Google Photos, it knows which photos contain red cups – that’s machine learning, trained on a vast number of images. ‘AI’ is also often used to mean algorithms. Facebook’s newsfeed decides which post to show you first based on thousands of different signals, not because of a mysterious electronic brain in California.
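To make the distinction concrete, here is a minimal sketch of the kind of signal-weighted ranking a newsfeed performs. The signal names and weights are invented for illustration – real systems combine thousands of signals with weights learned by machine learning, but the underlying idea is just arithmetic, not a thinking machine.

```python
# Toy feed-ranking sketch. Signal names and weights are invented
# for illustration; real systems use thousands of learned signals.

WEIGHTS = {"recency": 0.5, "friend_affinity": 0.3, "past_engagement": 0.2}

def score(post):
    """Combine a post's signals into a single ranking score."""
    return sum(WEIGHTS[name] * post[name] for name in WEIGHTS)

def rank_feed(posts):
    """Order posts by descending score; the highest-scoring post is shown first."""
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "holiday photo", "recency": 0.2, "friend_affinity": 0.9, "past_engagement": 0.8},
    {"id": "news article",  "recency": 0.9, "friend_affinity": 0.1, "past_engagement": 0.3},
]

for post in rank_feed(posts):
    print(post["id"], round(score(post), 2))
```

Nothing here understands red cups or your politics; change the weights and a different post wins. That opacity of weighting, multiplied across billions of users, is the real subject of this piece.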
So why do I think the AI nightmare is already here?
The combination of algorithms, machine learning, and clever marketing under the banner of AI has, in the last few years, led to -
This useful paper demonstrates how data mining techniques can discriminate against disadvantaged groups in society.
China has created a surveillance society in which citizens are awarded scores for good and bad behaviour.
Multiple police forces use Palantir technology to ‘predictively police.’
Facebook’s newsfeed algorithm has become the world’s most powerful controller of information, with over 2 billion regular users – an algorithm designed to reflect people’s own views and interests back at them.
These are just a small selection of the nightmarish situations that have come around.
And the worst part? Most of these have come around by accident.
A culture that venerates the stereotype of the techbro disruptor has led to large organisations creating tools they can neither control nor properly understand, and then unleashing them on the world.
The same platform that now shapes how a significant part of humanity understands the world began as a tool to rate how hot Harvard students were. Facebook were caught out by Myanmar’s military using the network to incite genocide. So much hateful content is uploaded to it that moderation has to be outsourced to workers all around the world, who themselves suffer serious psychological harm.
Microsoft’s racist Twitter bot was an experiment. Instagram was a photography tool.
Where the power of AI is used intentionally, its effects are rarely thought out.
Policy makers do not understand these trends. It’s too easy to dismiss social media as photos of people’s lunch, to believe that newspapers and half-hour TV news bulletins are still the ‘media,’ or that these are just fads that will soon burn out.
It’s all too easy to reach for your copy of 1984 when writing about surveillance, technology, and control – Orwell’s vision of a society ruthlessly controlled by surveillance technology and an all-powerful state.
Perhaps what has come to pass is more sinister – people willingly embracing this technology into their lives, their homes, their views. Go on, ask Alexa what she thinks.
It’s not only policy makers who need to understand the role AI plays, but all of us – and how prevalent these technologies are in our everyday lives.
With any new technology it’s always easy to reel off a list of the new problems it creates for society. But longing for the past is always a mistake, and impossible to make a reality. How can we ensure we create an enlightened world supported by AI and its associated technologies?
If only there was a simple answer.
But I believe that organisations involved in policy making, or in communicating and reporting about technology, must understand these systems as deeply as possible. People using these services should also try to see how their behaviour can be affected by them.
AGI – Artificial General Intelligence – will one day be possible, and will probably arrive without us ever realising. Narrower AI is already prevalent in our lives in all sorts of ways. There’s no point trying to be a collective King Canute.
Policies should enforce community moderation standards, whilst fiercely protecting freedom of speech. It’s too easy for tech firms to claim they are simply the platform, and are not responsible for the content on it.
Rather than trying to push progress backwards, we should fund research into both the development of AGI and the effects it may have on society.
And I believe there is a strong case for making platforms legally responsible for the worst crimes they enable, such as genocide (alongside the governments that perpetrate those crimes).
We should also become more comfortable, as a society, with algorithms making decisions – sometimes very important ones. We often feel revulsion at these decisions not being made by people, but think of how many biases and influences bear on human decisions, both obvious and subtle.
That might sound contradictory given most of the things I’ve said.
Algorithms, and the organisations that are responsible for them, must be held to account.
Ultimately, the role of AI must be understood and shared through effective communication, whilst inviting challenge, debate, and quite possibly regulation informed by engaged governments.
It will be far better to be on a rollercoaster, clinging on for dear life, than behind the wheel of a runaway hearse.
If you’d like to read more -
Is transparency in algorithms possible? https://storage.googleapis.com/pub-tools-public-publication-data/pdf/36914.pdf