Credit: Dall-E-3
Is AI a discriminatory technology that can be used to reinforce existing power structures? Yes. Is AI a new species that could threaten the survival of humankind? Also yes. With differing degrees of probability, perhaps, but AI is both.
The debate around managing the risks of AI often gets sidetracked into separate silos. As in the parable of the blind men describing the elephant, the debate tends to cluster into separate groups, each focused on describing its own part of the AI elephant.
I see this often in my work – a large chasm between those working on different kinds of AI risks. Those working on AI discrimination use different tools and techniques from those working on AI misuse risks. They tend to talk past one another or distrust each other's methods.
This is not surprising, as AI is multi-faceted and AI progress is exponential. But it is unfortunate, since it means missed opportunities to really figure out how AI risk should be managed. AI needs to be understood as a whole; the elephant only takes shape when everyone combines their separate data points. We have seen this play out across research areas over recent decades, where the growth of knowledge has forced ever narrower specialization. Many preach interdisciplinarity, to little avail. However, given its importance, AI is the ultimate example of a field where we need interdisciplinary research to understand its risks to society.
I therefore like to say that “AI is both”. Is it a new technology? Yes, in many ways, it is “just” a new technology, like cars or social media. So when people (like Steven Pinker) argue that AI is likely to follow the same path as other new technologies, they are not wrong.
Given that the future is uncertain, it is correct to look at history for base rates – patterns from analogous events in the past. Those arguing that AI is just another new technology will therefore look at the base rates of societal adaptation to technologies like cars or aviation. This provides them with the argument that society is pretty robust and tends to be able to adapt. It takes a number of warning shots, but in the end, society arrives at a level of regulation of a technology that it is collectively comfortable with. Arguably, society seems comfortable with a surprisingly high number of deaths in the case of cars, but given the lack of debate surrounding car fatalities and the reluctance to adopt self-driving technology, this does seem to be society's accepted risk tolerance. The same can be said for nuclear power. Society has likely over-regulated nuclear power and therefore not reaped enough of its benefits, but the argument definitely holds that we have managed to control the technology in line with society's norms.
However, the problem is that you could also argue, like Mustafa Suleyman did at TED recently, that AI is “a new species”. And he would also not be wrong. AI is crystallized intelligence, and intelligence underlies all human creation. Intelligence can be defined as the ability of an entity to achieve its goals. Introducing new forms of intelligence that will have goals and ways to achieve them that are completely alien to humans can therefore be seen as bringing about a whole new life form.
Seen from this angle, AI brings in a completely different set of base rates. The last time a new intelligent species was introduced into the world, in the form of Homo Sapiens, it seemed to have a non-negligible negative impact on existing species, to say the least (although there has recently been some debate about whether the Neanderthals really died out only because of the arrival of Homo Sapiens).
So AI is “both”, and this means we will need “both” for the flipside as well – AI risk management. That means traditional ways of managing risk as well as completely new ones. At the same TED conference, Helen Toner stressed the need for audits of AI companies. Audits will indeed be needed, as will regulation, legal frameworks and other risk management and governance measures.
However, we will also need new risk management measures to address the “new species” side of AI risk. The probability may be lower, but given the size of the potential impact, we must still prepare for it. The “new species” risks are the ones that affect societies deeply and that impact how we as humans survive, thrive and find meaning in life.
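To make that reasoning concrete, here is a minimal sketch of the expected-impact logic in Python. The numbers are purely hypothetical placeholders chosen for illustration – they are not estimates from this article or from any study – but they show why a low-probability scenario with a very large impact can still dominate the expected loss.

```python
# Minimal sketch of expected-impact reasoning. All numbers are hypothetical
# placeholders for illustration only, not estimates of actual AI risks.

scenarios = {
    # name: (assumed probability, assumed relative impact)
    "ordinary technology harms (accidents, misuse, bias)": (0.90, 1),
    "'new species' harms (loss of agency, loss of control)": (0.05, 1_000),
}

for name, (probability, impact) in scenarios.items():
    expected_loss = probability * impact  # expected loss = probability x impact
    print(f"{name}: p={probability:.2f}, relative impact={impact}, "
          f"expected loss={expected_loss:.1f}")
```

On these placeholder numbers, the rarer scenario carries the larger expected loss, which is the whole point of preparing for it despite its lower probability.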
There is, for example, the risk of AI job automation. As Ethan Mollick describes in his new book Co-Intelligence, it will likely be many decades until AI can do 100% of the typical worker’s job, but that doesn’t mean AI is not already able to do a large proportion of it. A McKinsey study from last year estimated that around 30% of current work activities could be automated within a few years. As much as companies like to talk about augmentation rather than automation, that still means companies can get rid of 30% of their workforces.
There is also the risk of a loss of human agency and purpose. As Nick Bostrom analyzes in his new book, Deep Utopia, in a world where AIs can do anything humans can, what meaning will remain for humans to base their lives on? There are also “loss of control” risks. We may gradually hand over more and more processes to AI systems that are faster and more accurate than we are. Over time, we would become unable to put humans back in the loop, due to competitive pressures (both between companies and between nations), since doing so would mean a reduction in speed and efficiency.
For these risks, we are going to need new kinds of risk management and governance tools and techniques. These should likely focus on building resilience at a societal level – making society more adaptive, removing weak links or increasing optionality. This research is just beginning and needs a lot more work from everyone concerned with AI risks.