A look at what’s happening now in UK businesses and public services and what’s just around the corner
The following is an edited version of a Computing Delta research keynote delivered at our AI & Machine Learning Live 2019 event in July.
AI is a fascinating and boundless topic to write about since it has the potential to overturn just about everything. The shape of some of the changes it will bring is already visible, even if the details are as yet unclear.
It’s also a frustrating topic to write about because as soon as one puts pen to paper (or voice to speech recognition software) it’s out of date. Thus, a section on deepfake videos, comparing fake Obama skateboarding (2013) to fake Obama talking about deepfakes (2018) had to be rewritten in the light of the drunken Pelosi and then again when Zuckerberg admitted, in a rare moment of honesty, that he is in league with Spectre in his bid for world domination. With less than a year separating the second Obama video and Zuckerberg, the leap in quality is astounding. And within a month, no doubt, a new video will go viral that’ll be twice as convincing as Zuckerfake.
Things are speeding along on multiple fronts. The iterative, self-improving nature of algorithmic learning has a multiplier effect, meaning that once a promising model has been developed it can be refined much more quickly than with previous generations of technology. We are on the cusp of some very interesting times – hopefully not of the Chinese curse variety.
As part of our research for the Computing Delta market intelligence service, we asked 100 Computing readers to tell us about AI applications they knew about in their particular sectors (besides chatbots, which are already ubiquitous).
Natural language processing is an area where much progress is being made, and there was an interesting use case from the police service analysing street slang, which evolves with gang culture. In the health sector we have an example of image recognition with topological reconstruction of X-ray scans, and there’s pattern-recognition and NLP in education with anti-plagiarism systems.
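To give a flavour of the pattern-matching idea behind those anti-plagiarism systems, here is a minimal sketch using word n-gram overlap (Jaccard similarity). This is purely illustrative; commercial products use far more sophisticated models and vast document corpora.

```python
def ngrams(text, n=3):
    """Split text into a set of lowercase word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity of the two texts' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

original = "the quick brown fox jumps over the lazy dog"
suspect = "the quick brown fox leaps over the lazy dog"

# A single changed word still leaves plenty of shared trigrams
print(similarity(original, suspect))  # → 0.4
```

Even this toy version shows why lightly paraphrased text is still catchable: swapping one word only disturbs the n-grams that contain it.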
There’s a lot going on in HR too, with automated recruitment systems growing more and more intelligent. And soon, when you apply for leave, the decision may be made not by a manager but by a bot. And much of this is happening in conjunction with the IoT, which is the basis of much of the predictive-type technologies seen in manufacturing and agriculture.
Safe to say, we ain’t seen nothing yet, and many jobs are about to change out of all recognition. Many things will need updating too, including the stock tech adoption joke about teenage sex: everyone’s talking about it, but no-one is actually doing it.
Actually, it’s not entirely true that no-one is doing it (AI, that is). Most of the companies we spoke to are starting to get to grips with machine learning – but only 16 per cent have put anything into production, with the rest somewhere between preliminary research and pilot studies.
So what is that leading 16 per cent doing? Well, the ever-present chatbots are high on the agenda, and virtual personal assistants too. The other AI staples (recommendation engines, anti-fraud systems and predictive maintenance) are also present.
These are all applications that have been around a couple of years or more, but for anyone wondering what’s around the corner, Google’s blog is always an interesting read.
As mentioned, natural language processing seems to be where most of the action is. NLP-based applications that are already available, at least in the US, include Google Smart Compose, which autocompletes emails in Gmail. Google Duplex is a personal assistant that you can instruct to ring a restaurant to reserve a table or a hairdresser to book a trim. According to Google’s promotional video, Duplex is clever enough to negotiate an alternative time on your behalf should your first choice be unavailable.
Still in development is Translatotron, Google’s end-to-end speech-to-speech translation model – a real-life Babel fish from The Hitchhiker’s Guide to the Galaxy.
Another example of NLP is the transcription app Otter, which can distinguish between multiple voices. It’s not perfect, but it’s way better than anything that was available a year ago.
All of which is a shame, as it will soon make this rather excellent joke obsolete.
Another area advancing – literally – in leaps and bounds is gesture recognition.
Microsoft was first to market with the Kinect for Xbox 360 in 2010. Since 2016, BMW 7 Series cars have had gesture recognition that allows drivers to turn up or turn down the volume, accept or reject a phone call, and change the angle of the multi-camera view without touching the dash. There is progress in the medical sphere too: Swedish company Tobii has created Tobii Rex, an infrared-light-based eye-tracking device that makes it possible for disabled people to use their eyes to point and interact with a computer.
In the field of deep learning, Google DeepMind claims to be able to predict the power generated by a wind farm 36 hours in advance. And staying with power generation, AI might just help bring closer the advent of the energy holy grail of nuclear fusion, by allowing more rapid iteration of experiments using digital twins in place of physical testbeds.
Deep learning is also being used to develop algorithms designed to identify signs of poaching in wildlife reserves, or to count numbers of different fish species in deep and murky waters.
Speaking of murky waters, when we step back the benefits of AI are not always clear-cut. It really needs to be seen in the round. For a start, training deep learning algorithms is incredibly energy-intensive. Researchers calculated that developing a deep learning algorithm called BERT produced CO2 emissions equivalent to that emitted by five cars over their entire lifetimes, with energy use increasing markedly at the fine-tuning stage – which is exactly where those algorithms become “intelligent”.
So it could be that any emissions reductions from optimising low-carbon energy sources are wiped out by the increasing compute demands of AI.
Then there are the frankly terrifying uses of AI by the Chinese government, which combines facial recognition technology with close monitoring of spending and communications to create what is now close to a total surveillance state, and which has imprisoned one million people in so-called re-education camps. And which country is starting to export AI to regimes all around the world? That’s right, China.
It’s not just China, of course. Other issues that have shunted AI into the headlines for the wrong reasons include Tesla’s fatal autopilot accident and Google’s secretive involvement in Project Maven, a Pentagon-sponsored AI warfare project, which was protested against by employees. It was also revealed that Amazon had been selling its facial recognition technology to police departments with no guidelines in place as to its acceptable use. The list goes on.
These and other issues have suddenly brought ethics and governance to the forefront – and not before time. AI is extremely powerful stuff and thus far it has been rolled out with few standards or guidelines as to its appropriate use.
The ethical dimension
Various industry, government and legislative bodies are now rushing to put together guidelines on the ethical use of AI. One draft from an EC high-level group lists seven core principles: accountability; diversity, non-discrimination and fairness; human agency and oversight; privacy and data governance; and so on. The aim is to ensure that AI is developed for the long-term good of humanity and to minimise harmful effects, deliberate or accidental.
To ascertain the most critical areas, we asked our survey respondents to rank the list by picking the three they thought were most important.
Top of the list was accountability, allocating responsibility for AI systems and their outcomes. Machine learning produces results that are unpredictable, sometimes in potentially dangerous ways. Who should be responsible when things go wrong? The developer? The customer? Insurance companies? This needs working out with some urgency.
Next was privacy and data governance: how to tell algorithms where to get off with their inferences about us, and how to restrict which data they may use, without unduly affecting their effectiveness.
After that came transparency. This reflects worries about black-box algorithms making important possibly life-or-death decisions in ways that we can’t understand.
Even without the inevitable missteps, AI will lead to what technology writer Adam Greenfield describes in his book Radical Technologies as “the eclipse of human discretion”, where we become increasingly unable to tell fact from fiction, reality from unreality using our own sensory and cognitive faculties.
Which brings us back to deepfakes.
About a quarter of our respondents said they have come across deepfake videos online. Most cited the Obama and Trump footage as examples, but I’m grateful to one person who provided this very eloquent take.
“Deepfakes can scramble our understanding of truth in multiple ways. By exploiting our inclination to trust the reliability of evidence that we see with our own eyes, they can turn fiction into apparent fact. And, as we become more attuned to the existence of deepfakes, there is also a subsequent, corollary effect: they undermine our trust in all videos, including those that are genuine.”
When even easily disprovable fakes such as the Pope endorsing then-candidate Trump can be believed by millions, what hope is there when the fakes are indistinguishable from the real thing and their creation is effectively democratised so that anyone can make them?
Ironically, part of the solution may be to use more AI. Ex-BT CTO Peter Cochrane wrote an interesting piece for Computing recently called Tackling fake news and propaganda with AI and machine learning. Indeed, this has always been Zuckerberg’s favoured solution, in part because it kicks the can conveniently down the road. At the moment, though, the fakers are many cycles ahead of the truth engines.