4 Comments
Jul 12 · Liked by Malcolm Murray

> non-negligent

Guessing this should be "non-negligible"

Author

Yes, nice catch! Thank you!


"For these risks, we are going to need new kinds of risk management and governance tools and techniques. These should likely focus on building resilience on a societal level – making society more adaptative, removing weak links or increasing optionality. This is research that is just beginning and which needs a lot more work, from everyone concerned with AI risks.": 100%

My first intuition is that there is a lot we could learn from the most innovative, effective, and robust management practices that would help us move in this direction. So not "new new", but "new for the field". At a minimum, we could start by diffusing and scaling up existing "excellence", which makes the problem a bit more tractable, since we have experience doing that; "we" being "at least some humans". The assumption is that building this kind of capability should enable us to get to the truly "new new" approaches we are likely to need. The innovation management literature I have in mind has a good amount on this.


I am more inclined to believe that researchers are correct to distrust each other's methods, and that the different AI concerns are not mutually compatible, than that this is somehow an "it's both" situation. If the sum of AI concerns appears to be a contradiction, it is more reasonable to reject some or all of them than to keep an open mind that what superficially appears to be a contradiction is not one (absent any reasoning that resolves the contradiction).
