When starting my project investigating the ‘soul’ of AI, I aimed to introduce incongruities. For instance, I entered the prompt: ‘The philosopher Hannah Arendt contemplates the mystery of birth and death.’ Surprisingly, this produced a black-and-white photo of two women who bore a striking resemblance to each other, standing next to a cloth-covered dome, like some artificial womb perhaps.
Subsequently, I entered a similar phrase: ‘The artist Alfred Marseille reveals the mystery of birth and death.’ This resulted in a series of entirely different images: two similar-looking old men, possibly brothers, in each image displaying, or ‘revealing’, some peculiar object. These results didn’t make sense, but at the same time were meaningful in many ways.
That's when I felt this perspective was promising. The results were rich in associations and full of surprises, and I could achieve things with this approach that would be quite challenging otherwise.
Still, it took a while to find results of sufficiently high quality, partly because of the software’s many obvious shortcomings. You have to make an effort to overcome all the stereotypes, and there are technical issues, like hands that often don’t quite come out right, and so on. I also got pictures of, for example, five people who looked remarkably similar, even though I hadn’t asked for that. It was very disorienting, and sometimes also very rewarding, to exploit these glitches.
You can already see that the software has improved and is learning from existing images. There are also, of course, the (often flawed) efforts to make AI ‘do the right thing’: for instance, to be less racist, to avoid hate speech and to be more truthful. This raises the relevant question of whose worldview is being implemented here. The latter isn’t a straightforward process, as in the case where Google’s Gemini, in an attempt to diversify its images, started producing images of Nazi officers who were both black and female. Google co-founder Sergey Brin, who has been contributing to Google’s AI projects since late 2022, was at a loss too, saying: “We haven’t fully understood why it leans left in many cases” and “that’s not our intention.”
Another question is whether all these ‘enhancements’ render the outcomes more artistically compelling. The results of AI will probably become more mundane and predictable over time. Although of course, you can always ask for an image of a monkey riding a bicycle in a glass of beer while smoking a cigarette. But then again, maybe the cigarette will be omitted because, of course, smoking is not permitted.