Last summer, the esoteric corner of data science devoted to building AI (artificial intelligence) models burst into the limelight with the release of several consumer-grade text-to-image applications, including DALL-E, Midjourney and Stable Diffusion.  The output of these tools even in the early stages of their development is often beautiful, but when artists started doing the math for themselves, the picture wasn’t as pretty.  Now the lawyers are starting to get involved, with a big case filed in San Francisco last week taking on some of the biggest names in this emerging segment of the tech industry.

The rise of Skynet for art.  For anyone who ever harbored the desire to be a visual artist but lacked the skill, imagination or work ethic to actually do it, these apps seem like magic. Just describe a picture in a greater or lesser degree of detail, wait a minute while the computer crunches together billions of images and data points, and watch it spit out an assortment of images, some sublime and some deeply disturbing.  Even more incredibly, the software can create work based on the style, or combination of styles, of different artists to a high degree of polish and precision.

Professional artists, for the most part, don’t see magic.  They see an existential threat to their livelihoods that impacts them in a couple of different ways.

First, this new tech holds the potential to automate entire stacks of jobs in the commercial art field, including character design, environment design, production design, illustration and photo manipulation.  Unlike earlier generations of digital tools like desktop publishing or Photoshop, AI not only collapses the time- and labor-intensive tasks associated with design and production, but also automates large swaths of the skill and judgment necessary to produce art in a commercial context.  Even extremely talented human artists have acknowledged that it is difficult to compete with the composition and color choices of the new tools, much less the speed of an AI capable of cranking out dozens of comps and variations in a matter of seconds.

Worse, because the systems have scraped and sampled the works of millions of artists online, including living artists, they can produce simulations of an artist’s own style.  Illustrator Greg Rutkowski is a particular favorite of folks making prompt-generated art, and he fears that when art directors look for examples of his work online, the results are now choked with hundreds of thousands of machine-made synthetic works.

Can the Terminator be stopped?  It’s not clear why the people who made these tools chose to train their systems on the work of artists, writers and even fellow software developers: some of the few jobs with high degrees of autonomy and satisfaction remaining in the postindustrial economy.  When I actually asked David Holz, CEO of Midjourney, that very question, it did not seem like something he had even considered.

Of course, the simple answer is probably some combination of greed and the desire to demonstrate technical virtuosity.  Over the last 250 years, efforts to pause or arrest the march of labor-saving (and job-destroying) technology have not had a great winning record.  Our economy is strongly biased towards anything that increases productivity and centralizes control of production with managers rather than artisans.

And the smell of money is strong.  At a time when the investment world is taking a more skeptical view of pie-in-the-sky technologies, OpenAI, the consortium behind DALL-E and ChatGPT, is in talks with Microsoft to raise $10 billion.

Lawyers to the rescue?  Some of what the AI tools do is difficult to challenge, despite the potential disruption of entire industries and creative professions.  Individuals can copyright individual creative works, and companies can trademark specific visual elements that constitute their brand, but most lawyers agree it is not possible to legally protect a style.  As much as artists like Rutkowski might protest the impact of AI tools on their ability to trade on their distinctive works, that seems like a high bar to clear given the state of IP law.

The soft underbelly of this tech is how the data sets were collected and the models were "trained" in the first place.  The developers claim that the data is public, and therefore up for grabs. They also say that, as an academic research project, OpenAI enjoys a wider presumption of fair use when it comes to sampling copyrighted works.  Lawyers and AI ethicists are saying "not so fast."

Which brings us to the suit filed Friday by attorney Matthew Butterick, the Joseph Saveri Law Firm (currently also suing GitHub, Microsoft and others over another AI-related issue), and the litigators of Lockridge Grindal Nauen P.L.L.P.  In the suit, Butterick claims that significant numbers of images used to train AI-based art generators were under copyright and acquired without the consent or permission of their creators.  Artists whose work was included in the training set are now having to compete with synthetic versions of their own work generated by the tools, ostensibly as new, original creations.

In the brief, Butterick states: "The harm to artists is not hypothetical—works generated by AI Image Products 'in the style' of a particular artist are already sold on the internet, siphoning commissions from the artists themselves."

Butterick is filing the suit on behalf of three artists claiming to be harmed by the infringements: commercial illustrator Karla Ortiz, fine artist Kelly McKernan, and Eisner-nominated cartoonist Sarah Andersen ("Sarah's Scribbles"), who wrote an outstanding op-ed on this topic for the New York Times recently.  The targets of the class action filing are Stability AI, the popular online art community DeviantArt, and Midjourney.

The devil is in the details.  I have to say, I am watching this case closely for a couple of reasons. First, the stakes for creative professionals could not be higher.  As a fan of art and human artists, I feel like I have a rooting interest; as a professional writer, I feel the knife of ChatGPT and similar text-based systems at my throat.

Second, I note with interest that some of the case against Midjourney uses direct quotes from the interview I published in Forbes with founder David Holz, where he explained that he built the dataset using "a big scrape of the Internet," without seeking consent from living artists or work still under copyright.  Holz was remarkably open in this interview and said a lot of stuff that I would not have expected from the CEO of a company.  It will be interesting to see if it comes back to haunt him.

Finally, in my academic role, I’m leading a research study on the ethics and practices of training AI systems.  A lot of this work is done by companies that hire task workers to tag and sort random images with little guidance and at low rates of pay.

These art, text and code generators are undeniably marvels of technical ingenuity.  They are easy to use, fun to play with, and capable of giving almost anyone the tools for expressing themselves in different media with proficiency that would have taken years or decades of hard work to achieve.  There’s no simple answer now that the genie is out of the bottle.  I guess I’m just hoping that, for once, we can find some balance in the application of these tools.

The opinions expressed in this column are solely those of the writer, and do not necessarily reflect the views of the editorial staff of ICv2.com.

Rob Salkowitz (@robsalk) is the author of Comic-Con and the Business of Pop Culture.