Before the flood of Midjourney dreamscapes and perfectly plausible deepfakes, there was a more modest—I think more magical—introduction to AI's creative potential. Much of that early enchantment came through Janelle Shane's blog AI Weirdness, born in 2016, which explores ‘the strange and sometimes unsettling ways machine learning gets things wrong.’
Shane's experiments often begin with small datasets—lists of ice cream flavours, paint colours, superhero names—and use recurrent neural networks (RNNs) to dream up new variations. The networks learn the statistical patterns in the training text and then generate new text of their own: contextually plausible, yet full of unexpected combinations drifting between coherence and absurdity. What particularly excited me about Shane's experiments wasn't the final output so much as a specific parameter: temperature.
In machine learning, temperature isn't about heat but about entropy: how much randomness the system permits itself. The system I explored back then, a character-based recurrent neural network, is trained not on words but on individual characters. It processes text and learns patterns, predicting each new character from the ones that came before it. Because it works at this level, it has no grasp of syntax or grammar, and its outputs are prone to oddities: fragments that stutter and morph.
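A rough sense of what ‘character-based’ means in practice fits in a few lines of Python. The frequency table below is a drastically simplified stand-in for a trained RNN (a real network's hidden state summarizes everything it has read so far, not just the previous character), and the tiny corpus is just one of my own titles pressed into service:

```python
from collections import Counter, defaultdict

# Count which character follows each character in a tiny corpus, then
# turn those counts into a probability distribution over the next
# character. The model only ever sees letters, never words.
corpus = "subdued conversation salad"

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_char_distribution(char):
    """Probability of each possible next character, given the current one."""
    counts = follows[char]
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

print(next_char_distribution("s"))
# e.g. {'u': 0.33, 'a': 0.67}: plausible letter-to-letter moves,
# with no notion of syntax or grammar behind them.
```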
When setting the RNN's parameters, a low temperature keeps the AI conservative, favouring predictable, statistically likely answers. Increase the temperature and the algorithm explores less likely possibilities; what emerges are unusual combinations and unexpected connections. I call these ‘hot searches’. They might be inefficient and have high error rates, but they occasionally return answers of surprising beauty and originality.
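What that dial does at sampling time can be sketched in a few more lines. The next-character probabilities below are invented, and the re-weighting shown is the standard softmax-with-temperature scaling rather than a reconstruction of any particular implementation:

```python
import math
import random

def sample_with_temperature(distribution, temperature):
    """Re-weight a next-character distribution by temperature, then sample one character.

    A low temperature sharpens the distribution toward the likeliest
    character; a high temperature flattens it, letting rarer characters
    through: the 'hot searches' described above.
    """
    chars = list(distribution)
    # Divide log-probabilities by the temperature, then renormalize
    # with a softmax (subtracting the max for numerical stability).
    logits = [math.log(distribution[c]) / temperature for c in chars]
    peak = max(logits)
    weights = [math.exp(l - peak) for l in logits]
    total = sum(weights)
    return random.choices(chars, weights=[w / total for w in weights])[0]

# An invented distribution over the character that follows "th".
dist = {"e": 0.70, "a": 0.15, "o": 0.10, "x": 0.05}

random.seed(0)
print("".join(sample_with_temperature(dist, 0.2) for _ in range(20)))
# sharpened: overwhelmingly 'e'
print("".join(sample_with_temperature(dist, 2.0) for _ in range(20)))
# flattened: 'a', 'o', even 'x' become live options
```

At a temperature of 1 the model's own probabilities come back unchanged; everything above it trades reliability for surprise.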
In 2017, I began my first series using AI as a collaborative partner, Computers Can’t Jump. Using the largest dataset I could find – the BBC sound-effects archive – I trained my own RNN on its prosaic descriptions to generate artwork titles, deliberately adjusting the temperature to court chaos and offer up phrases that teetered between absurdity and poetry: “Shepherdess Mess on Incline Bench Press”, “1 Fly Atmosphere Inbuilt”, “Subdued Conversation Salad”. These titles became springboards into new visual possibilities that felt both alien and weirdly personal.
I favour a deliberate cultivation of glitches as they feel more human. Today's algorithms increasingly optimize for smoothness and safety, with internet browsing becoming accident-proof—wild discovery replaced by curated feeds. As Justin Patrick Moore said, “We don't have enough Dada in this world of too much data.” What if we embraced the slippage instead?
Computers Can't Jump remains in its original technological era by design. Comparing my temperature-adjusted RNN with newer models like ChatGPT and Claude reveals a striking difference: contemporary systems produce refined, coherent content that's occasionally clever but rarely surprising. They are too polished and predictable, leaving little room for interpretive leaps. Those early RNN misfires didn't just mimic creativity—they exposed something essential about it. While today's algorithms chase perfection, these imperfect outputs remind us why our own unpolished thinking matters.