Anyone who has read Richard Morgan’s “Altered Carbon” or seen the TV series based on it will be familiar with the idea of the “cortical stack”, an electronic implant which preserves the personality. Cortical stacks allow people to be “needlecast” across vast distances to occupy blank, personality-less bodies and inhabit them as if they were their own. They also allow for more unpleasant uses should your enemies capture the cortical stack in which your personality is housed.
It’s good science fiction – and like all good science fiction, it engages with the concerns of the present as well as the risks and opportunities of the future. One of those present-day concerns was recently brought into focus by, of all organisations, the Information Commissioner’s Office.
In an eye-catching report in their excellent Tech Futures series, the ICO warns of the dangers of neurotechnology. “The use of technology to monitor neurodata, the information coming directly from the brain and nervous system,” they say, “will become widespread over the next decade.”
Among the ICO’s concerns: “If not developed and tested on a wide enough range of people, there is a risk of inherent bias and inaccurate data being embedded in neurotechnology – negatively affecting people and communities in the UK.”
Their sector scenarios work is solid, covering medical, workplace and consumer implications.
There is, though, far more here – and in many respects, it is far more concerning than the examples the ICO highlights. The obvious example – obvious because it is both helmed by Elon Musk and public, as opposed to the vast quantities of research undoubtedly taking place in quieter, more private places – is Neuralink. A brain-computer interface company recently given FDA approval to begin human trials, Neuralink states its mission as to “Create a generalised brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow.”
It is that last phrase that links a medical implant with cortical stacks. As David Tuffley highlights, “Musk has made many radical claims regarding his future vision for the technology. He has claimed Neuralink could augment human intelligence by creating an on-demand connection with artificial intelligence systems.”
Tuffley says, “[Musk] has even gone as far as to say the Link could allow high-bandwidth telepathic communication between two or more people connected via a mediating computer.”
All this with a technology whose development prompted employee concerns about needless suffering caused to animal subjects in testing – a history which must give its human trial participants some misgivings.
Many people will welcome the opportunity to enhance their brains with technology. William Gibson’s Sprawl books include a behind-the-ear jack into which all sorts of augmentations can be fitted – instantly learning a new skill, connections to people, entertainment, and information, simply by “jacking in”.
We talk in futures thinking about “weak” and “strong” signals of change – moments when we can identify a change happening with a greater or lesser degree of confidence. Neuralink is, in many ways, a weak signal – the development of a technology in one company, directed by one particularly driven (and wealthy) man, with a single stated objective. The ICO’s report, though, is the moment a weak signal becomes strong – when a government body becomes sufficiently concerned about a technology that it produces a well-researched, highly competent report warning of its dangers.
The ICO approaches the technology from three main angles – the risk of discrimination, bias, and the unwanted collection of vast amounts of highly personal data through the interfaces. The report is a practical jumping-off point, constrained by its reach – after all, it is the regulator for information use, not a moral or technological arbiter. But it is a strong signal, and it is now time to consider Gibson’s and Morgan’s thinking in considerably more depth.
We at SAMI have been monitoring human augmentation for some time. But now, the ICO thinks the technology will become widespread over the next decade. It is time for governments to decide how to deal with this radical change in human capacity and mitigate its risks. It is for us in the futures thinking space to understand what it means and what opportunities and dangers flow from it.
Morgan and Gibson make clear that there are many significant advantages to brain-computer interfaces. There are also many risks. We should start understanding them now.
Written by Jonathan Blanchard Smith, SAMI Fellow and Director
The views expressed are those of the author(s) and not necessarily of SAMI Consulting.
Future-prepared firms outperform the average, with 33% higher profitability and 200% higher growth. SAMI Consulting brings 30 years of experience delivering foresight, futures and scenario planning – enabling companies and organisations to make “robust decisions in uncertain times”. Find out more at www.samiconsulting.co.uk.
If you enjoyed this blog from SAMI Consulting, the home of scenario planning, please sign up for our monthly newsletter at email@example.com and/or browse our website at https://www.samiconsulting.co.uk