Getting to Artificial General Intelligence

Sometimes, rarely, something comes along which just has to be included in scenario building. Not a matter of “I like that” but “this is so significant it has to be in everything from now on”. There aren’t that many – the climate crisis, and the continuing impact of the 2008 financial crisis, are the most obvious. The Covid pandemic was wrenching when we were in it – but its continuing effects are debatable.

Is artificial intelligence another? And if so, where does it take us? Most importantly, does artificial general intelligence – a programme which can do any task a human can, indistinguishably and probably better – need to be factored in? And when?

AI, as we are currently defining it, is clearly making a difference right now. Recent developments in AI have been largely dominated by improvements in specific domains, such as natural language processing (NLP), image recognition, and generative AI technologies. The emergence of models like GPT-4 and systems like Stable Diffusion, Google Lens, and the Whisper API has significantly advanced the field, showcasing AI’s growing capability to interact with and understand the world in complex ways.

Technical performance benchmarks continue to be met or surpassed by current AI models, indicating a need for more challenging tests to push the boundaries of what AI can achieve. However, this rapid advancement comes with its own set of challenges, including the high environmental costs associated with training large models and an increase in AI-related controversies and ethical concerns.

The existing forms of AI are impacting everywhere that thinking, words and pictures are important – indeed, this may be the first industrial revolution to affect those groups traditionally thought immune from technological disruption, such as creatives – and even students. If it requires thinking, it will be assisted, or possibly displaced, by AI (futurists not being immune).

And the concerns about fakes, disinformation, news manipulation, and use in conflict are now standard talking points in both mainstream and grey media. The rise of generative AI has made it easier to create convincing deepfakes and AI-generated disinformation. This technology poses a significant threat to the integrity of elections and public discourse, as seen in instances where politicians have weaponized AI tools to attack opponents. The proliferation of AI-generated content could make it increasingly difficult to discern reality online, further inflaming already polarized political climates.

Two recent developments, though, bring AGI to the fore.

Sometime soon, GPT-4 will be replaced by GPT-5 and 6. We know they are in development, but not yet their release dates. Whilst these versions are designed to do “a more reliable job, with better personalisation, with multi-modal outputs”, they are not expected to be near-AGI versions or strong AI “(eg something that can solve quantum gravity) without some breakthroughs in reasoning”.

And whilst one might be wary of a company that decided the metaverse was going to be a thing (it isn’t, not yet), Mark Zuckerberg’s announcement about Meta’s initiative to develop open-source Artificial General Intelligence is significant. Zuckerberg emphasized Meta’s commitment to open-source principles to benefit a wide audience, signalling a shift towards more accessible and collaborative AI development. This move is supported by substantial investments, including the acquisition of 350,000 NVIDIA H100 GPUs, aimed at building a robust infrastructure for advanced AI models.

However, the ambition to open-source AGI comes with its own set of challenges and concerns. The risks associated with open models, such as the potential for embedding ‘sleeper agents’ with harmful capabilities, need to be addressed with robust safety and monitoring frameworks. Zuckerberg’s project raises questions about the governance of such powerful technologies, the standards for responsible use, and the measures to prevent misuse.

While the advancements in AI are impressive, the leap to AGI involves overcoming significant hurdles that span technical, ethical, and societal dimensions. The core challenge lies in developing AI systems that exhibit general intelligence, as opposed to being highly specialized in narrow tasks. Currently, AI technologies, including those that have made significant headlines, operate within the confines of the specific domains they were designed for.

The interest in moving toward AGI is palpable, with increasing AI labour demand and corporate investments in AI technologies. The field is also seeing a surge in ethics-related research, indicating a growing awareness of the broader implications of AI and the need for responsible development practices.

Our concern here though is not with the implications. They are legion. It is with whether AGI/strong AI is in fact possible.

And here there is substantial disagreement. The Church-Turing thesis “states that any mathematically computable function is computable by a Turing machine, which is a certain abstract model of computation created by Alan Turing that inspired modern-day computers. Another version says that any physical system is simulable by a Turing machine.” So we could simulate the physics of a brain. But could we reproduce something that was in effect an electronic version of one? Hubert Dreyfus (What Computers Still Can’t Do) argues “that humans are always already ‘in a situation,’ whereas AIs are not and maybe never can be.” If human thinking is not a computational task, then no computer can do it – it may give the impression of doing so, but will never in fact be able to do so.
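To make the first half of that thesis concrete: a Turing machine is nothing but a tape, a read/write head and a table of transition rules, yet that is enough to compute ordinary functions. The sketch below (in Python, with a state table invented for this illustration, not drawn from any of the sources above) implements one that adds 1 to a binary number.

```python
# A toy Turing machine: a tape, a head, and a transition table.
# States and symbols here are made up purely for this illustration.

def run_turing_machine(tape, rules, state="right", blank="_"):
    """Apply transition rules until the 'halt' state is reached.

    rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left) or +1 (right).
    """
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, write, move = rules[(state, symbol)]
        if head < 0:                 # ran off the left edge: grow the tape
            tape.insert(0, blank)
            head = 0
        elif head >= len(tape):      # ran off the right edge: grow the tape
            tape.append(blank)
        tape[head] = write
        head += move
    return "".join(tape).strip(blank)

# Transition table for binary increment: scan right to the end of the
# number, then add 1 with the carry propagating back to the left.
rules = {
    ("right", "0"): ("right", "0", +1),
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("carry", "_", -1),   # end of input: start carrying
    ("carry", "1"): ("carry", "0", -1),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("halt",  "1", -1),   # 0 + carry -> 1, done
    ("carry", "_"): ("halt",  "1", -1),   # carry past the top digit
}

print(run_turing_machine("1011", rules))  # binary 11 + 1 -> prints "1100" (12)
```

The point is not the code but the claim behind it: if every computable function can be expressed this way, and if the brain’s physics is itself computable, then simulation is at least in principle on the table. Dreyfus’s objection is aimed precisely at that “if”.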

So much for the philosophical issues. What about the practical ones? Language processing is not thinking. “A chatbot’s fluency doesn’t prove that it reasons or achieves understanding in a manner similar to humans.”

At the moment, for instance, AI can’t tell the difference between an astronaut riding a horse and a horse riding an astronaut. A Tesla on Autopilot recently drove directly toward a human worker carrying a stop sign in the middle of the road, slowing down only when the human driver intervened.

To quote a recent article in Scientific American: “Although deep learning has advanced the ability of machines to recognize patterns in data, it has three major flaws. The patterns that it learns are, ironically, superficial not conceptual; the results it creates are hard to interpret; and the results are difficult to use in the context of other processes, such as memory and reasoning. As Harvard University computer scientist Leslie Valiant noted, ‘The central challenge [going forward] is to unify the formulation of … learning and reasoning.’”
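The first of those flaws is easy to demonstrate with a toy model. The numpy sketch below (an illustration invented for this post, not taken from the article) simply memorises examples of y = x² – a stand-in for pattern-matching without a concept – and answers confidently, and wrongly, as soon as it is asked about values outside the range it saw in training.

```python
import numpy as np

# 200 training examples of the true concept y = x^2, drawn from [0, 1].
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)
y_train = x_train ** 2

def predict(x):
    # 1-nearest-neighbour "model": answer with the memorised training
    # example closest to x -- a surface pattern, not the concept of squaring.
    return y_train[np.argmin(np.abs(x_train - x))]

for x in [0.25, 0.90, 3.0, 10.0]:
    print(f"x={x:5.2f}   true={x**2:7.2f}   model={predict(x):7.2f}")

# Inside the training range the answers look right (x=0.25 -> ~0.06).
# Outside it the model still answers confidently -- but with ~0.99, the
# value of its nearest memorised example, instead of 9 or 100.
```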

So when will we reach AGI? For futurists, it’s perhaps best if we assume AI will improve, and continue to do so, perhaps even with its own assistance. Let’s have AGI as a useful wildcard – arriving by some breakthrough in understanding, or technology, or the sudden appearance of an alien technology. Future definite, no. Future maybe, perhaps. A useful exercise for futures thinking. Definitely.

Written by Jonathan Blanchard Smith, SAMI Fellow and Director

The views expressed are those of the author(s) and not necessarily of SAMI Consulting.

Future-prepared firms outperform the average, with 33% higher profitability and 200% higher growth. SAMI Consulting brings 30 years of experience delivering foresight, futures and scenario planning – enabling companies and organisations to make “robust decisions in uncertain times”. Find out more at www.samiconsulting.co.uk.

If you enjoyed this blog from SAMI Consulting, the home of scenario planning, please sign up for our monthly newsletter at newreader@samiconsulting.co.uk and/or browse our website at https://www.samiconsulting.co.uk

Featured image by Gerd Altmann from Pixabay
