AI trained on photos from kids’ entire childhood without their consent
Here’s the terrible thing about AI model training sets —
LAION began removing links to photos from the dataset while also advising that “children and their guardians were responsible for removing children’s personal photos from the Internet.” That, LAION said, would be “the most effective protection against misuse.” [Hye Jung Han] told Wired that she disagreed, arguing that previously, most of the people in these photos enjoyed “a measure of privacy” because their photos were mostly “not possible to find online through a reverse image search.” Likely the people posting never anticipated their rarely clicked family photos would one day, sometimes more than a decade later, become fuel for AI engines.
And indeed, here we are, with our family photos ingested long ago into many, many models, mainly hosted in jurisdictions outside the GDPR's reach, and with no practical way to avoid it. Is there a genuine way to opt out at this stage? Even if we do it for LAION, what about all the other scrapes that have gone into models from OpenAI, Apple, Google, et al.? Ugh, what a mess.

(tags: privacy data-protection kids children family laion web-scraping ai models photos)