If you look at the state of the discourse about artificial intelligence, you can see stronger and stronger signs of polarisation. A lot of people are afraid of the implications of this not entirely new technology. A lot of people criticise a system that is emerging out of a value system known to not always get ethics right. And of course there are also people who think that their privilege is threatened, and those who descend on the discourse like vultures (except the discourse is not dead, which either renders them hawks or leads to regret). Fun times in AI discourse at the end of the world.
Let me focus on what is often referred to as “Creative AI”, though, because that’s what I’m working with. I share some of the fears about AI permeating the wrong aspects of life (and death), like the military, the police, policy making, law, cars, and marketing. Not because I expect an artificial general intelligence to crush us in anything but Go and chess any time soon, but because the core issue of using past data for future predictions is that humanity’s past is a dark place we should learn from through critical reflection, not via stochastic modelling. Specifically the past of the countries most involved in pushing AI technologies1.
In Creative AI we face one interesting conundrum that I was pointed to by a tweet, which then went on to talk about the Singularity, so I’m not going to paste it here. One day I will write about the issues I have with the Singularity, but for now I will just mention that the Strugatsky brothers’ concept of Vertical Progress is the best response to the conceptualisation of the Singularity I’ve ever read and, peculiarly, they wrote it before the Singularity was first mentioned.
The tweet I am talking about observed that NFTs create artificial scarcity while generative models create artificial abundance. Reading it made me understand why I intuitively found one of those ideas idiotic and the other one amazing. When the internet came along – and I remember the times – its wave of abundance created a new understanding of how to experience the world. I don’t think generative AI models can entirely compare to what happened back then, but there is a hint of a breakthrough, of an actual change in how we do a lot of things. This is conundrum one. Why did those two things happen in parallel2?
There is a second conundrum, and it has to do with ignorance of context. This is not unique to Creative AI but is endemic to new technologies. Even today, when the diversity of creators of new technologies is much greater than in the recent past, it can be observed. The technology is created out of fascination with what is possible, which is a good reason in my opinion. But then it takes a surprising amount of time and effort to understand the implications of a new technology and how it embeds into existing social systems. And that happens every time. It’s mesmerising. If an AI wins an art competition, as happened recently, then that says just as much about that competition as it says about the powers of generative models. It’s a lovely little Duchamp moment that should be an occasion for reflection and maybe for laughing a bit at us tiny humans.
The third conundrum is: why look to the past to make the future? It is inherent to the way we create generative models, by training them on loads and loads of data, that they are informed by the past. Past values, world views, ideas, theories, realities. The system treats all of them as equals. I do not know how we turn something with so much built-in conservatism (as in “it conserves”) into a building block of the future. A real challenge lies in this3.
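To make the “it conserves” point a little more tangible, here is a deliberately naive sketch (my toy example, not anything from the talk and not how production models are built): a first-order Markov chain “trained” on a handful of past sentences. Everything it can ever generate is a recombination of that past; it has no other source of material.

```python
import random
from collections import defaultdict

# A deliberately tiny "generative model": a first-order Markov chain
# fitted on a handful of past sentences. Everything it can ever say
# is a recombination of that past.
past_corpus = [
    "the future belongs to the bold",
    "the past repeats itself",
    "history is written by the victors",
]

transitions = defaultdict(list)
for sentence in past_corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

def generate(start_word: str, max_len: int = 8) -> str:
    """Sample a sequence; every transition was observed in the past corpus."""
    word, output = start_word, [start_word]
    for _ in range(max_len):
        candidates = transitions.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the past repeats itself" -- never a word it has not seen
```

Real generative models are vastly more sophisticated, but the structural point stands: the training data draws the boundary of what can come out.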
Finally, machine learning builds on an anachronism: the general idea that we can throw raw compute power at a problem instead of solving it efficiently. We live in times that require a radical rethinking of how we live on this planet, because the planet in question cannot take much more of our lifestyle. Shouldn’t we all write the neatest algorithms that precisely solve a problem? Can the overall gain be big enough to warrant burning Bitcoin levels of processor cycles? I do not know. I hope, though.
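To make the compute-versus-elegance contrast concrete with a toy example far removed from machine learning (again my own illustration, nothing more): two ways of computing the same Fibonacci number, one that burns exponentially many function calls and one that gets by with a simple loop.

```python
# Toy contrast between "throw compute at it" and a neat algorithm:
# both functions compute the same Fibonacci number, but the first one
# recomputes the same subproblems exponentially often.

def fib_brute_force(n: int) -> int:
    """Exponential time: solves the same subproblems over and over."""
    if n < 2:
        return n
    return fib_brute_force(n - 1) + fib_brute_force(n - 2)

def fib_neat(n: int) -> int:
    """Linear time: the same answer with a tiny fraction of the work."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_brute_force(25) == fib_neat(25)  # same result, vastly different cost
```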
All4 of these critical thoughts are expressed in a more lively way in a talk I had the pleasure of giving at the Ars Electronica Festival last week. Here’s a video:
In case you want to see me on stage again, raging and entertaining, you have that opportunity at AdventureX in London on November 5 & 6, where I will deliver a more productive, less critical, and equally entertaining performance together with Char.
1. I am inclined to leave out China in this assessment because I know too little about the Chinese past to judge them. Western Europe, the US, and Japan, though …
2. And why are digital illustrators angry at both developments?
3. Bias does not come out of nowhere. It also comes out of all of history having been written by the victors.
4. Actually not all but most.