Eve’s two fantastic events with GameShift coincided with the news that the Stockholm Resilience Centre has published the first full quantification of the nine essential earth systems, showing that we are “well outside the safe operating space for humanity”.
It left me wondering: can AI actually be part of the solution to this most wicked of all wicked problems? I’m not sure, because AI is itself generally seen as a wicked problem.
What I appreciate most about Eve is how, in all her work, she combines deep scholarship and penetrating thoughtfulness with being human about it all. Eve is fun, human, irreverent – and massively interesting. Her book, “Robot Souls: Programming in Humanity”, is out now and needs to be on the reading list for anyone wanting to think strategically about the place of AI in addressing our most pressing challenges.
A while ago I read James Lovelock’s “Novacene” and came away feeling unsatisfied and unsettled. Lovelock, an amazing engineer and inventor and the co-developer of Gaia Theory, ends up seeing a future protected by a benevolent superintelligence that looks after the earth better than humans have ever managed to do. The book received a lot of criticism for its boundless optimism that a post-human hyperintelligence would be a universally benign force for planetary good. Eve’s book provided me with an inkling of why that could, just could, be the case.
There’s too much in Eve’s book for me to provide a summary, but you can hear her presentation in the recording below. What sticks with me most is Eve’s decision to seek salvation in the very frailties that make humans unique – our “junk code”, as she terms it. As Eve says, we’ve turned to biomimicry for so much, but we haven’t yet thought to mimic our own code in the way we create AI.
Eve argues that humans have evolved with a web of apparently contradictory and flawed bits of code that, individually, don’t always seem to be that hot. But together they add up to something powerful and vital in helping humans to live in a community and stay alive. This messy mixture – free will, emotion, intuition, a capacity for uncertainty, our ability to make mistakes and learn moral lessons from them, our meaning-making and storytelling – amounts to “an incredibly clever design”. In Eve’s argument, letting AI learn to be more like humans by putting in our junk code, not editing it out, may well be the smartest move. Eve is no blind optimist here, but she is an ambassador for the view that there is something quite special in humanity that we code out of AI at our peril.
There’s a long way to go, for me, before I know what to do with this in my day-to-day life, even in my organisational work with teams on strategy and change. But of this I am certain: none of us can afford to have our eyes closed. We all need to be exploring and thinking about how AI might shape our world, our work, and our organisations. We all need to be playing our part in the conversation, making choices about how we interact with this phenomenon.
It just may be that, in the end, Lovelock is right: an intelligence that learns and thinks 10,000 times faster than humans will end up determining the fate of the planet and our species. In that case, we’d better make sure that the AI understands humans a bit better than it would if all our junk code ends up in the coding room bin.