Is it time to start worrying about Artificial General Intelligence?

Concerns about the risks of runaway artificial intelligence surpassing human intelligence (“The Singularity”) have been around for many years, with scientists as eminent as Stephen Hawking expressing alarm. At SAMI we lean towards the technology-sceptic end of the spectrum, in the sense that radical societal shifts occur far less frequently as a result of technological advances than is often suggested – flying taxis, anyone? Technological change is often over-hyped in the short term, yet under-estimated over the longer term, as the accumulated result of successive incremental changes.

But recent advances in AI, such as large language models (LLMs), suggest that it might be time to reassess. Clearly LLMs are nowhere near genuinely intelligent. But they do represent another step forward that could point to more significant developments to come, even if as yet there is no evidence that artificial general intelligence (AGI) is actually feasible. Only this week, OpenAI announced the release of GPT-4; Microsoft announced that its search engine, Bing, would use GPT-4 customised for search; and Google have just released generative AI across their whole Workspace suite of tools.

Many experts, however, expect AGI before the end of the century. In the 2022 Expert Survey on Progress in AI, which polled 738 experts, respondents estimated a 50% chance that high-level machine intelligence would arrive by 2059. In an earlier survey (2012), 10% thought it would happen by 2022, so some scepticism is clearly warranted. Part of the argument for AGI rests on the exponential growth in computing power, set against the static capabilities of the human brain.

There is a problem with the terminology – what do we mean by “intelligence” anyway?

Historically, the definition of “intelligence” has a white-supremacist background, with the Stanford-Binet IQ tests used in attempts to demonstrate the inferiority of non-white races. Howard Gardner’s theory of multiple intelligences identifies eight different types: linguistic, logical/mathematical, spatial, bodily-kinesthetic, musical, interpersonal, intrapersonal, and naturalist. There is also emotional intelligence – the ability to perceive, control, and evaluate emotions – popularised in a 1995 book by Daniel Goleman.

But let’s put aside our scepticism about the feasibility of AGI and consider the scenario in which it looks as if it will soon come about. The singularity hypothesis posits that a self-upgrading system will eventually enter a “runaway reaction” of self-improvement cycles, resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence. At that point it becomes impossible to know for sure that human interests will remain to the fore, and a wide range of dystopian stories have been written about the consequences.

Even AI developers, such as Sam Altman of OpenAI, recognise the problem and propose caution. Some are wary of his statements and want more immediate, co-ordinated action. Certainly it seems risky to wait until AGI happens and only then try to take action. So there is a groundswell of opinion pushing for an “anticipatory governance study” to get the conditions for AGI right before it is created.

The Millennium Project, a global think tank, is one such group. They point out that although there are international groups, such as the Global AI Ethics Institute, looking to reach agreement on some of the difficult issues with today’s AI, there is no equivalent for AGI. Without such agreement, the AI community is left to self-regulate, which in an era of competing power blocs could result in an uncontrolled race with uncertain consequences. It’s true that international consensus is (just about) holding around major ethical issues in gene editing, but leaving such a key issue to informal agreements seems a poor answer.

Getting global consensus in today’s challenging geopolitical environment would not be easy, even if technical conditions for the safe launch of AGI could be developed. Building such a consensus on governance could easily take ten years or more – by which time, what new advances in AI will have happened?

It would be easy to bury our heads in the sand and ignore the issue, to pray that AGI is not actually feasible, to wait and see how things develop, or simply to say it’s all too difficult and give up. But the whole point of foresight thinking is to explore the “what-ifs” and to put in place plans that can be activated at the right time. Is now the right time?

Written by Huw Williams, SAMI Principal

The views expressed are those of the author(s) and not necessarily of SAMI Consulting.

Achieve more by understanding what the future may bring. We bring skills developed over thirty years of international and national projects to create actionable, transformative strategy. Futures, foresight and scenario planning to make robust decisions in uncertain times. Find out more at www.samiconsulting.co.uk.

If you enjoyed this blog from SAMI Consulting, the home of scenario planning, please sign up for our monthly newsletter at newreader@samiconsulting.co.uk and/or browse our website at https://www.samiconsulting.co.uk

Featured image by neo tam from Pixabay
