Lots of folks working in animation were that kid who drew all the time, made flipbooks in their math textbooks, and sold Bart Simpson drawings to their friends at recess.
That wasn’t me. I fell in love with animation first and I learned to draw in order to make cartoons. And let me tell you, it wasn’t easy. I still remember my mom visiting me at college and asking why none of my work was up on the wall. When I told her my drawings must not be good enough she replied, “Doesn’t that worry you?”
The point is making art is hard. It’s hard for me, and it’s hard for the most talented artists I know. We spend a lifetime learning and honing and exploring. So, when suddenly a text prompt can spit out a finished painting in seconds, it’s no wonder artists are terrified and frankly mad as hell.
So today, let’s look into AI image generators, the ethics behind them, and how artists can move forward in this uncharted new world.
In the Beginning
The first computer-generated art I can remember seeing was fractals. Remember those things? They were visualizations of math equations, beautiful in a ’90s-screensaver kinda way.

But automated art actually goes back much further than that. Starting in the 18th century, engineers were experimenting with ingenious clockwork robots that wielded pencils (see the top image).
Alas, machine art didn’t stay that adorable forever. After years of algorithmic art experiments from the 1960s through the 2000s, the real breakthrough in machine-learning image creation came in the 2010s with the advent of generative adversarial networks (or GANs). Essentially, a GAN is two neural networks pitted against each other: one creates an image and the other judges its quality (not unlike my time in art school).
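If you’re curious what that adversarial back-and-forth actually looks like, here’s a deliberately tiny toy sketch. Everything in it is an illustrative stand-in: the “generator” is just a single number being tuned, and the “critic” just measures distance to a made-up target, where a real GAN uses two full neural networks trained with gradient descent on actual images.

```python
import random

random.seed(1)

REAL_MEAN = 5.0  # toy stand-in for the "real art" the critic has seen

def generator(weight, noise):
    # Toy "generator": turns random noise into a candidate "artwork" (a number).
    return weight * noise

def discriminator(x):
    # Toy "critic": higher score = looks more "real" (closer to REAL_MEAN here).
    return -abs(x - REAL_MEAN)

weight = 0.1
for step in range(500):
    noise = random.uniform(0.5, 1.5)
    fake = generator(weight, noise)
    # Generator update: nudge the weight in whichever direction
    # fools the critic more, one tiny step at a time.
    if discriminator(generator(weight + 0.01, noise)) > discriminator(fake):
        weight += 0.01
    else:
        weight -= 0.01
```

After a few hundred rounds of this tug-of-war, the generator’s output drifts toward what the critic accepts as “real” — the same feedback loop, minus a few billion parameters.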

After GANs came diffusion models (Stable Diffusion, DALL-E, Imagen), which are trained by slowly adding static to images so that, later, they can form pictures by doing the reverse: starting with a field of noise and resolving it into a picture guided by your prompt.
The trick is that when you train these models on an incredibly gigantic number of images (namely 5 billion!), they can produce convincing results, six-fingered hands aside.
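You can see the shape of that reverse process in another toy sketch. The big cheat here is the “model”: this one already knows the clean image, whereas a real diffusion network has to predict it from the noise using everything it learned from those billions of images, steered by your prompt.

```python
import random

random.seed(0)

T = 50                                    # number of denoising steps
clean = [0.2, 0.9, 0.5, 0.1]              # a tiny four-"pixel" image
x = [random.gauss(0, 1) for _ in clean]   # start from pure static

def guess_clean(noisy):
    # Stand-in for the trained network: a real model *predicts* the clean
    # image from the noise; this toy simply knows the answer.
    return clean

for t in range(T):
    guess = guess_clean(x)
    # Reverse step: move the noisy field a fraction of the way toward the guess.
    x = [xi + (gi - xi) / (T - t) for xi, gi in zip(x, guess)]

# After T steps, the static has fully resolved into the "image".
```

That loop — guess the clean image, step slightly toward it, repeat — is the whole reverse process in miniature.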
For many working artists, this shift from a curious science experiment to serious competition felt like it happened in a heartbeat.
Training vs Learning
Supporters of AI-generated art will say that these systems are simply learning the same way human artists do. Just like you might pore over the pages of The Art of Into the Spider-Verse or learn by making copies of Jack Kirby drawings in your sketchbook, the AI systems are doing all those things, just faster and at a bigger scale.
Okay, but in this case scale and speed really matter. No human can compete with a machine that absorbs the entire history of art in weeks and mimics it in seconds. What we consider standard and acceptable practice for learning art has been developed over centuries with humans in mind, not machines.
A Question of Consent
Which brings us to what data these tools are trained on, namely the internet. The tech firms will tell you that since the internet is available to everyone, they are simply training on “publicly available data” and therefore it is “fair use”. But that’s assuming everything on the internet has been uploaded entirely legally, and if you think that’s the case, I think you’d better sit down.
Dodgy parts of the internet aside, even if an artist had uploaded their own work to the public internet legally for all to see, it’s not reasonable to assume that by doing so those artists consent to for-profit tech firms using those images to create a product worth billions of dollars in valuation.
“Art theft!” is the cry of artists everywhere. While I’m entirely sympathetic, there might be an even better metaphor. It’s not like OpenAI is stealing your car, it’s more like they’re using it as an Uber when you’re not looking. They’ll say, “What’s the problem? I gave it back,” to which I bet you’d say, “But you’re making money from MY car! That I paid for! Plus the floors are all sticky now for some reason.”
Tragedy of the Digital Commons
Copyright has always been a balancing act between protecting creators and letting others build off older works. Disney’s Snow White is only possible because of a centuries-old fairy tale that had fallen into the public domain. The bargain worked because time was a buffer. You could exploit an idea only after the original creators had the opportunity to make a living off their art.
When the speed and volume of AI art outpaces the creator’s ability to capitalize on it, the commons looks less like a garden of ideas and more like a strip-mine. A working artist relies on skills and a style acquired over a lifetime to fulfill a creative need (and get paid!). When their work can be emulated at near-zero cost, then what’s a working artist to do?
The Ethical Road Ahead
As artists, we have to be honest with ourselves that the genie is out of the bottle (and Eric Goldberg’s Genie animation has probably trained the AI). The question going forward shouldn’t be “can we stop it?” but “what can we advocate for as we look towards a more ethical and fair future?”
Compensation
Some tech ethicists, like Andy Baio, have argued that banning AI from using online images might be unrealistic and instead suggest benefit-sharing or licensing models. Flat out, artists need to be paid on both ends: for the training and for any image that is monetized based on their work. There are a number of high-profile lawsuits happening right now trying to resolve this issue.
Opting in and out
In addition to compensation, artists should be able to decide whether they want their work to be part of these models or not. Simply making your artwork publicly available shouldn’t be implied permission for companies to do whatever they want with it. Walt Disney could only make Snow White because the story was in the public domain; AI companies need to do the same, and governments need to hold their digital feet to the fire emoji.
Ethically sourced models
There are some new image generators, such as Adobe’s Firefly, that claim to be trained exclusively on images that were paid for and/or licensed for use. It’s an encouraging trend. One note of caution: so-called LoRAs (image generators fine-tuned on a particular artist’s style) might solve some of the ownership issues, but they are still built on top of larger base models that carry the “original sin” of data scraping.
Better copyright violation detection
If these AI systems are smart enough to put the Pope in a puffy jacket, they should be smart enough to detect when a copyright is violated in their output. It’s not enough to forbid the prompt “Draw me a picture of Charlie Brown” but then still give you a picture of Schulz’s character when you type, “Draw me a bald kid with a zigzag shirt.”
A Reason to Hope
The good news is that conversations about AI ethics aren’t slowing down. Most companies working with AI in animation and VFX claim they’re committed to ethical practices, and it’s our collective responsibility to hold them accountable.
What can we do? Support artists advocating for fair practices. Press tech companies about how their tools respect consent and fair compensation. Engage policymakers by backing legislation that puts artists first when it comes to AI. Let’s not stick our heads in the sand; let’s stand up tall and be heard.
Like many of you, I’ve spent a lifetime trying to improve my art skills. Beyond what any computer can do (or what my mom says), it continues to be a rewarding (and difficult, and heartbreaking, and strange) journey. Despite this, I still want to embrace innovation so long as it doesn’t come at a human cost. AI tech firms clearly value artists; after all, they’ve built entire business models around our work. We just need a fair shake.
Seeya next time,
Matt Ferg.
p.s. Make sure you check out the Kids’ Media Book Club by Julia. It’s a very thoughtful newsletter about the kids’ media landscape structured like a book club (don’t worry, you can still get a lot out of it even if you don’t read the book). This month is particularly relevant as Julia is tackling AI and how it relates to kids’ media and education.
Great read. What makes me sad about these lawsuits, though, is that the ones calling the shots at the end of the day still aren’t artists. It’s a bunch of big exploitative companies prioritizing their own bottom line that sometimes create downstream benefits for some artists.
In any case, it’ll be so important to see what comes out of the lawsuits. Whatever it is will strongly impact everyone one way or another, even if you are not in a creative field.
I know it’s not the same, but I wonder what can be learned from the case of Barnes & Noble, which was once viewed as a threat to small bookstores and is now seen as something to be protected from Amazon. At the same time, more small bookstores have opened up in cities because people enjoy the experience. I myself only go to small bookshops because the ones in my city are excellent.