This week’s ART∩CODE letter is an interlude essay. It’s part of the story of SELF ABSORBED, about what AI takes from the world in order to manifest its magic. More details on the new installation shortly.
Don’t you think it strange that companies building text-to-image AIs like Midjourney and Stable Diffusion seem so keen to attract artists as their earliest users? After all, artists are the people whose work has been taken (without consent) to train these models. Artists are the ones whose livelihoods are threatened by these models. And artists are certainly not the multi-billion dollar market whose allure drives those funding the development of these models (or the expensive GPUs they run on).
When a new technology arrives, new possibilities arrive with it, and with them an ethical openness. We only slowly come to understand the implications, as we see what happens and check in with our own moral intuitions.
Those with vested interests move quickly to define the narrative. In the early days of surveillance capitalism, Shoshana Zuboff describes how Google worked to make its business model seem inevitable, as if these products couldn’t exist without harvesting data from unwilling users. She writes:
People habituate to the incursion with some combination of agreement, helplessness, and resignation. The sense of astonishment and outrage dissipates. The incursion itself, once unthinkable, slowly worms its way into the ordinary. Worse still, it gradually comes to seem inevitable. New dependencies develop.
Inevitability is a way of overcoming your sense that things aren’t right. The inevitable is something you just have to come to terms with. It’s often framed more subtly as a devil’s bargain: “Don’t want your data harvested? Then don’t use the service.” Good luck with that one.
The present feels like a similar moment with generative AI.
The grand appropriation of artists’ creative output is happening in the ambiguity of a new ethical landscape. We don’t have rules about appropriating artists’ work by training AI models because it hasn’t previously been possible. Humans can rip each other off, but we have physical limits. Machines could duplicate, but with very limited creativity. Certainly for me, plagiarism and copyright infringement feel like fundamentally different moral issues. And neither quite fits what generative AI does.
Getting artists on board complicates what could be a simple narrative of corporations appropriating for themselves the creative life work of individuals.
This is not a critique of AI art or another tedious ‘but is it art’ debate (my answer: I don’t care). Plenty of AI art is spectacular, original, creative and demanding of craftsmanship. However, when we simply pick up and play with tools of appropriation, we risk becoming pawns in a larger propaganda campaign that says artists are the beneficiaries, not the losers.
Again, my mind comes back to this apparent sense of inevitability in a new devil’s bargain: “If you want these great tools, you’re just going to have to deal with it. And now the genie’s out of the bottle, it’s inevitable that others will work with them. Wouldn’t you rather it’s you?”
If we can’t say no, then let’s at least use it with a critical awareness.
This devil’s bargain is different from the one in surveillance capitalism. The artists who are losing out are likely different people from the artists who are winning. This distinction is easily lost in narratives about how AI impacts “artists” in the abstract. For example, here is the AI researcher Filip Piekniewski in a mostly reasonable post:
Yes the tool will replace some human work ... just like cheap Ikea mass printed paintings replaced some that would be painted by real people. But of course there are still painters and there are still people paying the painters for real stuff. Similarly digital technology actually opened up tons of new ways of expression and created tons of jobs for artists. And same will be with text-to-image generation.
As with surveillance capitalism, beware economic decisions that masquerade as technical inevitabilities. Artists could be receiving royalties for the influence their images have on an AI’s output. We could build models where people actually consent to being included in the training data (really consent, not the manufactured consent of a Terms of Service). These wouldn’t be as simple, or as cheap. But many important things are made complicated and expensive by ethical necessity: medical research, healthcare, employment.
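To make that concrete, here is a minimal sketch of what an opt-in training pipeline might look like. Everything in it (the names, the fields, the filter) is hypothetical, not a description of any existing system; the point is only that consent-by-default is an engineering decision, not a technical impossibility:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Artwork:
    artist: str
    image_url: str
    opted_in: bool = False                  # explicit, revocable consent; exclusion is the default
    royalty_account: Optional[str] = None   # where attribution payments would be sent

def consented_training_set(corpus: list[Artwork]) -> list[Artwork]:
    """Keep only works whose creators explicitly opted in.

    Absence of a consent record means the work never enters the
    training set: consent is opt-in, not opt-out.
    """
    return [work for work in corpus if work.opted_in]

# Example: only Bea's painting makes it into the training data.
corpus = [
    Artwork("Ana", "https://example.com/ana.png"),
    Artwork("Bea", "https://example.com/bea.png", opted_in=True, royalty_account="bea-001"),
]
print([w.artist for w in consented_training_set(corpus)])  # ['Bea']
```

A real royalty scheme would of course need far more than a field on a record, but the default of exclusion is the part that matters: nobody’s work is taken unless they said yes.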
I expect that eventually regulators (probably the EU) will ponder this question. Until then, it’s a race to see how much of the ethical landscape can be occupied and settled before the borders are drawn.
Tim
Montreal, 10 Feb 2023