Friday, November 22, 2024

Rage against the machine: creatives vs. AI

No, this is not another fluff piece about the importance of humanity in creativity, designed to placate those who have lost their jobs to the heartless algorithm. This is the story of how creators around the world are standing up for their IP in unsurprisingly creative ways. Should artificial intelligence be quaking in its boots? Maybe a little.

Picture the following scenario: you’re using an image generator like Midjourney or Dall-E to create a visual for a campaign marketing fashion accessories. You type in the prompt “woman carrying a handbag”. The image generator spits out an image of a fashionable young woman carrying a toaster. Perplexed, you retype the prompt: “woman carrying a HANDBAG” – and receive another image of a woman holding a toaster. Unsure why this prompt isn’t working, you decide to try something else. You type in “woman wearing a hat” instead. The AI happily returns an image of a woman with a three-tier cake stacked on top of her head.

What is going on here?

What I’ve described is the future of AI, as seen through the eyes of its would-be disruptors. But why would anyone want to make artificial intelligence less effective? The answer to that question starts with how artificial intelligence became so smart in the first place.

The ethics of data scraping

The New York Times. Elon Musk. John Grisham, George R.R. Martin and Jodi Picoult. These are just a few of the big names that have filed lawsuits against OpenAI since the launch of its chatbot, ChatGPT, in 2022.

Generative AI systems, such as ChatGPT, work by interpreting user prompts through sophisticated algorithms. These algorithms are trained on extensive datasets, which are accessed by “scraping” and analysing billions of pieces of online text. At the heart of many of the lawsuits that OpenAI is facing is the question of who gave the company permission to access copyrighted materials for the training of its AI models.

It all sounds very vague when we talk about the IP attached to written words or images, so let me illustrate the problem for you with a more tangible example.

Imagine that you owned a bakery renowned for making a particular type of bread. One day, you come into your bakery to find a man in the corner who stands there all day, watching as you work. You did not give the man permission to be in your bakery and he won’t leave when you ask him to. The next day, he goes to the bakery around the corner and does the same thing there. And the day after that, he goes to a bakery two streets down and repeats the performance.

At the end of the week, the man opens a stall in the town square and starts giving free lessons on how to bake bread, incorporating signature techniques that he saw in your bakery and the others he visited. The enthralled public flock to his stall, and many of them go home and start baking their own bread. Next week, your bread sales are halved.

The authors who are filing lawsuits against OpenAI are making the case that the company is using material they hold the copyright to without their permission (and, most importantly, without compensating them). Since OpenAI has been notoriously unforthcoming about where it sources its datasets (remember, we're talking about billions and billions of samples of text here), it is highly likely that it is drawing on copyrighted material from ebooks, scripts and online newspapers. OpenAI has thus far neither confirmed nor denied this.

Instead, OpenAI is countering these arguments using a very open-ended exception to copyright protection known as “fair use”, which allows for the limited reproduction of text for uses like commentary or criticism. If you apply that argument back to our bakery example, that would be the man in the bakery claiming that anyone in the bakery (customers included) could look over the counter at any time and see the methods you are using to bake your bread.

It’s a tough argument to win and a bitter pill for creatives to swallow. While writers at the top of the bestseller lists are unlikely to be too affected by the rise in AI-generated writing, writers who work on a humbler scale (yours truly included) are working tirelessly to convince clients that their work is worth paying for. It’s hard not to feel that the whole situation is unfair.

It seems highly unlikely that the man with the stall in the town square will be going away anytime soon. Fortunately, creatives have never encountered a problem they couldn't find an interesting solution for.

Define “sabotage”

While writers are taking AI to court, creatives in visual media are showing that they aren’t afraid to get hands-on when it comes to protecting what’s theirs.

In the same way that ChatGPT has disrupted the writing industry, AI-powered image generators have wreaked havoc in the art and design industries. The method for training these image generators also involves data scraping – and therein lies the crux of what might be AI’s undoing.

It started with Glaze, a tool developed by a team at the University of Chicago led by Professor Ben Zhao. Glaze allows artists to "mask" their personal style by changing the pixels of their images in subtle ways. These changes are invisible to the human eye, but are powerful enough to cause machine-learning models to interpret an image as something different from what it actually shows. You can think of Glaze as a set of curtains in front of a window, obscuring a clear view of what's inside.
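The core idea — a change too small for a human to notice, yet large enough to alter what a model sees — can be illustrated with a toy sketch. To be clear, this is not Glaze's actual algorithm (its perturbations are computed adversarially against the feature extractors inside image models); this only demonstrates the "small, bounded pixel change" principle, with all names my own invention:

```python
import numpy as np

def mask_image(pixels: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Toy style-masking: nudge every pixel by at most +/- epsilon
    (out of 255). A shift this small is effectively invisible to a
    human viewer, but it changes the raw numbers a model ingests."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=pixels.shape)
    return np.clip(pixels.astype(float) + noise, 0, 255).astype(np.uint8)

# A flat grey 4x4 "image": after masking, no pixel moves by more than epsilon.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
masked = mask_image(img)
assert np.abs(masked.astype(int) - img.astype(int)).max() <= 2
```

The real tool chooses its perturbation direction carefully so that the masked image lands near a *different* artistic style in the model's feature space, rather than nudging pixels at random.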

While Glaze was designed as a countermeasure to IP theft, its sibling, Nightshade, takes a far more aggressive approach.

Developed by the same University of Chicago team, Nightshade is a data poisoning tool that has the potential to break deep learning models by feeding them incorrect data. It strikes at the heart of AI's biggest weakness: the need to ingest vast amounts of data in order to learn. When the data is manipulated, the machine doesn't know the difference; it simply absorbs the information it is given without questioning whether that information is right or wrong. If you tell an image-generation tool enough times that hats are synonymous with cakes, pretty soon it will believe you.
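That "tell it enough times and it believes you" dynamic is easy to see in miniature. The toy "model" below just learns the most common object label paired with each prompt word — vastly simpler than a diffusion model, and not how Nightshade actually works (it hides the bad signal inside perturbed images), but the failure mode is the same: the learner trusts whatever pairings the scraped data provides.

```python
from collections import Counter, defaultdict

def train(pairs):
    """Toy learner: for each prompt word, remember the most common
    object label seen alongside it in the (prompt, label) training pairs."""
    seen = defaultdict(Counter)
    for prompt, label in pairs:
        seen[prompt][label] += 1
    return {p: c.most_common(1)[0][0] for p, c in seen.items()}

clean = [("hat", "hat")] * 50       # honest training samples
poisoned = [("hat", "cake")] * 60   # poisoned samples: "hat" paired with cakes
model = train(clean + poisoned)
print(model["hat"])  # the poisoned association wins: cake
```

Once enough poisoned samples outnumber the clean ones for a given concept, the learned association flips — and, as the next paragraph notes, every extra poisoned image scraped into the dataset tips the balance further.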

Just like many of the generative image tools out there, Nightshade is open source, which means that others are able to tinker with it, make their own versions, and distribute it at scale. There's a reason for this, according to team leader Zhao: the more people use it and build their own versions of it, the more powerful the tool becomes. The datasets for large AI models can consist of billions of images, so the more poisoned images are scraped into a model, the more damage the technique causes.

When an image generator starts spitting out images of weird animals with six legs and three eyes when prompted with “dog”, getting to the root of the problem is anything but simple. The poisoned data is very difficult to remove, as it requires tech companies to painstakingly find and delete each corrupted sample, of which there could be thousands.

As an artist who has always had a penchant for the surreal, I’d be lying if I said I wasn’t a little bit excited about the strange and twisted versions of reality that a poisoned image generator could produce. But I’m guessing the folks who are developing Google Gemini (and all the other models out there) are less excited about that prospect.

About the author:

Dominique Olivier is a fine arts graduate who recently learnt what HEPS means. Although she’s really enjoying learning about the markets, she still doesn’t regret studying art instead.

She brings her love of storytelling and trivia to Ghost Mail, with The Finance Ghost adding a sprinkling of investment knowledge to her work.

Dominique is a freelance writer at Wordy Girl Writes and can be reached on LinkedIn here.

9 COMMENTS

  1. Fantastic article, wonderful that there is a way for the fight back & protect artists lawful property rights

    • Thank you so much Johan – as a creative working in this space I was quite excited to stumble across these tools as well.

  2. Thanks for a great article Dominique. It describes a problem (opportunity?) we’re encountering every day. I believe that the NY Times is already suing Open AI for learning with its articles to the point that it can write articles that appear to be NY Times originals.

    Sometimes AI can be your friend before it takes over. In The Second Machine Age (Brynjolfsson and McAfee), the situation is described where a machine is able to inspect a lesion on a person’s skin and, by comparing the image to literally millions of images on its data base, can predict with a high degree of accuracy whether the lesion is cancerous. Since no dermatologist in his/her career is likely to have personally inspected more than a tiny fraction of this number, the machine generally wins in the diagnosis. The machine helps the dermatologist at first but over time may replace him/her since all that is required is a machine and a non-medical technician to operate it. The challenge then is to find ways for the dermatologist to add value while letting the machine do the diagnosis. This will require a shift in how we think about work and the value that humans can add. Machines are not going to go away, and nor should we want them to.

    • You’re absolutely right Tim. All we can really do is to continue to add the things that make us uniquely human. I remember reading a story quite some time ago about a restaurant in China that had been staffed entirely by robots. The restaurant owner claimed that the robots were more useful because they were fluent in many Mandarin dialects, while human servers could only speak or understand a few. To which he was met with the question – would a robot server know the appropriate time to make a joke?

  3. Always fascinating, @ Tim am thinking along the same lines , the only thing I see is an opportunity for continuous self improvement to achieve competitiveness

