Me, myself and A.I.

By Dean Russell



The rapid rise of technology means that, for the first time in human history, the future we face is genuinely uncertain. The technologically driven “fourth industrial revolution” is happening around us, whether we like it or not. Artificial intelligence (AI) is set to be at the centre of this revolution, and the choices we make now, from government policy to business planning, could define the role the UK plays on the global stage for the next century.

Unlike the original Industrial Revolution, which emerged visibly around the UK through the growth of factories, manufacturing, and industry, forging new communities around us, this fourth industrial revolution will be silent and stealthy, driving more people to work from home on increasingly specialised activities in an ever more globalised economy. This (industrial) revolution will not be televised. Instead, it will happen while the television is busy watching us (probably while we are working on the sofa, talking to our watches).

While most public discussion focuses on fears of AI-driven job losses and uncertainty around a post-work world, the considerations have to go much deeper. There are genuine ethical, moral, and societal concerns that we should be discussing right now.

There are prominent voices with grave concerns about the risks of blindly walking into an AI-driven future. Figures ranging from Tesla visionary Elon Musk to Professor Stephen Hawking have stated their fear that artificial intelligence could lead to the end of humanity. Musk, for example, has publicly spoken about the risks of what he calls the technological singularity: the point at which machines become smart enough to redesign themselves to be smarter still, and hence see no need for dumb humans to exist.

While these fears need to be addressed, we cannot ignore the fact that AI’s ship has already left the port. So, we must explore the tremendous opportunities AI can bring to the UK. Perhaps, instead of taking jobs, AI will in reality be more mundane in the mid-term, doing tasks humans simply cannot do.

The health sector is one such area where this principle applies. Huge leaps have been made by using AI to analyse and predict the onset of disease or to identify injuries. For example, just recently, there have been reports ranging from news about the improved prediction of Alzheimer’s to articles on AI providing advanced analysis of X-rays and samples of diseased tissues. In the latter article, The Guardian reported on the NHS England Expo, where Prof Sir Bruce Keogh stated: “All of this [AI] takes us into a very new territory, and it’s not a long way over there, it’s actually here now”.

While, understandably, there will be those who feel this is the first move in creating a workforce of robotic doctors, the reality is more likely to be improved predictive analysis behind the scenes in laboratories. There is no denying that data is becoming as much a part of healthcare as the petri dish or the bandage, and this is only going to increase in scale and importance. For example, according to recent research in Information Age, 60% of all medical information ever generated was produced in the last six years alone. This figure is no doubt only going to rise, especially with the increased use of devices such as the Apple Watch, which is becoming as much about health as it is about technology. We will need AI to help us take advantage of the insights this impossibly immense amount of data can bring.

Opportunities for positive intervention via AI have the chance to save lives, but we also need to have an open discussion about how we apply this insight. In policing, for example, Forbes recently wrote about several UK law enforcement agencies dabbling in predictive policing. A case study was shared from Durham, England, where police use a system called HART (Harm Assessment Risk Tool), which ranks individual offenders by the probability that they will commit another offence in the future. While this trial has seemingly proved relatively accurate, examples elsewhere have shown instances of AI unexpectedly displaying racial and gender bias, so AI may not be as impartial as some might hope.

The question is where the use of AI will end, and how and where we decide to apply its judgement. For example, research is already underway to develop artificial intelligence programs designed for the legal world. A researcher at the University of Alberta, Randy Goebel, is already working with Japanese researchers to create an AI that can pass the bar exam. He argues that search engines like Google are already commonplace in the courts, so artificial intelligence is the next logical step. This may be the case, but the process must surely include a discussion of where human judgement ends and AI judgement begins. The ethical and legal quagmire this could create in society would be enormous. As anyone applying for a bank loan nowadays will know, institutions are quick to hide their decisions behind a computer screen. So, in this instance, we should perhaps worry less about when the computer says no, and more about when the computer says yes.

With AI expected to impact all of us in so many areas, it is no surprise that big businesses around the world are investing millions in research and development so that they can launch new AI-driven offerings. In fact, according to research by CB Insights, investment in AI start-ups this year is projected to surpass $10.8 billion, nearly double the 2016 figure. As part of the discussion around AI, the UK has an opportunity right now to attract this investment for our businesses and start-ups, perhaps creating the ideal environment here for a new AI-driven ‘Silicon Valley’ over the coming years.

If there is even a small chance that AI could impact our lives as some predict, then we must begin a serious political discussion now about how we want to adapt, adopt or legislate for its use.

I don’t say this without some reference to history. When Tim Berners-Lee invented the World Wide Web in 1989 and innocently sent his first message, ‘Hello World’, no one could have honestly predicted how it would change the world around us. Less than three decades on, the impact has been immense, both positive and negative: from the decline of the high street to the rise of new businesses and careers, and from access to the world’s information at the click of a button to the rise of global terrorism using social media to proliferate vile propaganda. On a more personal level, the web is slowly changing our concept of privacy and what we allow (or unwittingly allow) others to know about us. Today, we face risks of hacking by people or governments, but imagine what hacking by an autonomous and unrelenting AI could mean for our privacy, security, and even our sense of identity.

Unlike with the web, we may not have the luxury of time before AI has a similar, unanticipated impact on the world around us. Surely, we must plan for all likely (and unlikely) scenarios before we find ourselves playing catch-up. Even if there is only a small chance AI could become smarter than us, we must work together to understand what this might mean for society, economic growth, the workplace, law, and even war.

Politically speaking, while I am no fan of quangos, the potentially far-reaching impact of AI makes this one situation where I believe it makes sense to connect the dots across government, business, and society through the creation of a watchdog or similar body to engage, understand, and set recommendations around the UK’s approach to AI. Surely it makes sense to begin discussions now about how we, as a society, can apply the benefits of AI to the UK whilst exploring the risks of us all becoming unknowing slaves to the machine.

The Roman poet Juvenal asked, “Quis custodiet ipsos custodes?” (“Who watches the watchmen?”). Perhaps in this modern age we should be asking a similar question about AI, most notably (but not as elegantly): “Who will be intelligent enough to control artificial intelligence?”

Dean Russell is Founder of epifny consulting and Digital Transformation Director at Parliament Street.