Associative imagery

Hello readers. This letter comes to you from beneath the waning moon just past the equinox, where the lakes cool a few degrees each night and the wind, although the same as ever in direction and intensity, takes on a mysteriously autumnal character. This month we’ll conduct a little experiment, which might at first seem a bit odd, but resonates with the topics I’ve been writing about over the last few months. In the spirit of the ‘generative recombinatory flood’ at the center of the second issue of the zine, we’ll be playing a bit with artificial intelligence.

You may have encountered images on the internet lately presented as ‘AI-generated art’, which, as uncanny and disquieting as they can be, also hold an undeniable appeal. In the name of our experiment, here is such an image generated from part of a recent poem I wrote:

As the verdant herd grazes the dunes, their lush amethyst plumage, ruffled by wind, shines in the bright noon sun.

What is going on here? The image was produced by a model composed of two so-called neural networks (VQGAN and CLIP) that trade iterations back and forth toward fidelity to a given prompt, in this case a line from my poem. VQGAN, the generator, starts with noise and produces an arbitrary image it considers suitably ‘realistic’ (if entirely nonsensical), while CLIP, the perceptor, evaluates how well the image depicts the prompt and responds to the generator with feedback toward a ‘better’ depiction. The iteration continues until someone tells it to stop. The prompt came from ‘Point phases’:
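For the curious, the shape of that back-and-forth can be sketched in a few lines of toy code. This is emphatically not the real VQGAN+CLIP system: here the ‘image’ is just a vector of numbers, the ‘prompt’ is a fixed target vector standing in for CLIP’s text embedding, and the perceptor’s feedback is a plain nudge toward the target rather than backpropagation through a neural network. All the names below are illustrative assumptions; only the loop itself mirrors the process described above.

```python
# Toy sketch of the generator/perceptor iteration: propose, score, adjust, repeat.
import numpy as np

rng = np.random.default_rng(0)
prompt_embedding = rng.normal(size=16)   # stand-in for CLIP's encoding of the prompt
image = rng.normal(size=16)              # the generator starts from pure noise

def perceptor_score(img):
    """How well does the 'image' depict the prompt? (cosine similarity)"""
    return float(img @ prompt_embedding /
                 (np.linalg.norm(img) * np.linalg.norm(prompt_embedding)))

learning_rate = 0.1
for step in range(200):
    # Perceptor feedback: nudge the image toward the prompt embedding.
    # (In the real system this step is a gradient computed through CLIP.)
    image += learning_rate * (prompt_embedding - image)

print(round(perceptor_score(image), 3))  # → 1.0
```

After a couple hundred iterations the score climbs from noise-level toward a perfect match, which is the whole game: the generator keeps proposing, the perceptor keeps judging, and ‘someone tells it to stop’.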

thistle-bloom full chatoyant face to sun

I asked the model to render the image as if it were a painting by the Swedish artist Simon Stålenhag (whose use of color and light feels right for the subject matter) by appending ‘in the style of Simon Stålenhag’ to the prompt. The model, surprisingly enough, has seen his paintings and understands what I mean by style. I also added the secondary prompt ‘wild sand dunes’ to gently steer the model toward the world of the poem and away from the artist’s characteristic post-industrial, robot-inhabited landscape. How might we describe the above image in the style of the poem? Perhaps:

amethyst plumage lush on the verdant flock

Yes, I’d say that sounds about right. What do we get when we ask the model to produce an image from that prompt?

Among small clusters of trees in spring-green foliage, two peacocks sleep between the lavender folds of the creeping dunes.

The iteration could of course continue, with a description like ‘peacock asleep between the lavender folds’, and the mutual shaping among poet, AI model and unwitting Swedish collaborator might develop further, but we’ll stop here for now.

When I wrote the original ‘phases’ poem, I had in mind the apparent fact that the human unconscious grasps a pattern like the lunar cycle in both literal and associative terms — which is to say we recognize not only the literal phases of the moon itself, but also images of newness and fullness, waxingness and waningness more broadly in our surroundings. In the poem, for example, new moon arrives as goose’s eye from the rushes, first quarter as a quartzite pebble in the hand and full moon as a Pitcher’s thistle flower facing the sun. To my eyes, the imagery produced by the AI model is associative in a way superficially similar to that produced by the unconscious, in that both imageries imply underlying processes that are tirelessly creative; never sure but never still.

I do think the resemblance is superficial, though, and that we should be wary of seduction. The dreamlike character common to the images produced by open-source AI models may well be explicitly encoded by the engineers (as in the early DeepDream model), and thus serve as a kind of gimmick to warm us to a future of ubiquitous, non-dreamlike, mundane AI-generated imagery. To assume that the output of a brainlike network should necessarily resemble human dreams is a huge leap, taking for granted the threadbare assertion that psyche itself somehow arises out of sufficient material complexity.

But, leaving such assumptions aside, a conversation between the imagination and various novel text-to-image models, in all their hyper-associative generativity, does have a certain appeal. We do well to remember that the human psyche is not a computer, though, and that lush, mysterious imagery has always surrounded and infused each of us.


Thanks for reading along with this September installment of Polylith. If you enjoyed it, consider sharing it with a kindred reader — that would mean a lot, and is the only way new people hear about the project. The second issue of Drift Body zine, exploring generativity and more, is available for purchase in the bookshop.

Until next month, happy associating!