The AI Poison Pill

I have benefited from using AI recently. When I wanted to buy a complementary DAC and speaker combination, I consulted the "Schiit Talker," Schiit Audio's chatbot. It helpfully steered me away from some cheaper but less compatible speakers I was looking at and towards the Kanto Yu4 set. I've been extremely happy with the DAC + speaker combo since I acted on the advice and purchased the equipment. The experience helped me warm up a bit to the idea of using AI for advice in situations where it would be unusual to get human help.

While I've enjoyed using DALL-E for images (dig the one for this post), I can see the point of artists who are opposed to their work being used to train image-generating AI. With that in mind, I was interested to see a kind of "poison pill" technology that subtly alters an image so that it disrupts training when an AI tries to learn from it. Kiona N. Smith reports on the new image-protection technique for Ars Technica.

The open source "poison pill" tool (as the University of Chicago's press department calls it) alters images in ways invisible to the human eye that can corrupt an AI model's training process. Many image synthesis models, with the notable exceptions of those from Adobe and Getty Images, use data sets of images scraped from the web without artist permission, including copyrighted material. (OpenAI licenses some of its DALL-E training images from Shutterstock.)
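To get a feel for the "invisible to the human eye" part, here is a minimal Python sketch that keeps every pixel change inside a tiny budget. To be clear, this is not the researchers' actual method, which computes its perturbation adversarially against image models rather than randomly; this only illustrates the pixel-budget idea, and the file names are made up.

```python
# Minimal sketch: alter an image within a small per-pixel budget (epsilon)
# so the change is imperceptible to people. A real poisoning tool would
# optimize this pattern against a model instead of using random noise.
import numpy as np
from PIL import Image

EPSILON = 4  # max change per 8-bit channel; small enough to go unnoticed

def perturb(path_in: str, path_out: str) -> None:
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    # Random offsets in [-EPSILON, EPSILON] for every pixel and channel.
    noise = np.random.randint(-EPSILON, EPSILON + 1, size=img.shape).astype(np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

# Hypothetical file names, purely for illustration.
perturb("artwork.png", "artwork_protected.png")
```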

It seems like an effective way of fighting back against the unauthorized use of potentially protected material. I guess you beat tech with better tech.