Artists under siege by artificial intelligence (AI) that studies their work, then replicates their styles, have teamed with university researchers to stymie such copycat activity, reports BSS.
US illustrator Paloma McClain went into defense mode after learning that several AI models had been "trained" using her art, with no credit or compensation sent her way.
"I believe truly meaningful technological advancement is done ethically and elevates all people instead of functioning at the expense of others," McClain said.
The artist turned to Glaze, free software created by researchers at the University of Chicago.
Glaze essentially outthinks AI models during training, tweaking pixels in ways indiscernible to human viewers but which make a digitized piece of art appear dramatically different to AI.
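The idea can be illustrated with a toy sketch. This is NOT Glaze's actual algorithm (which computes adversarial perturbations against a model's feature extractor); it is only a minimal, hypothetical example of shifting every pixel by an amount too small for the eye to notice while changing the image's numeric representation:

```python
import numpy as np

def cloak(image, direction, strength=2.0):
    """Toy 'cloaking': nudge each pixel slightly along a chosen direction.

    NOT Glaze's real method -- just an illustration of an imperceptible
    pixel-level perturbation. Pixel values stay within [0, 255].
    """
    # Scale the perturbation so its largest component equals `strength`.
    direction = direction / (np.abs(direction).max() + 1e-9)
    perturbed = image.astype(np.float64) + strength * direction
    return np.clip(perturbed, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
delta = rng.normal(size=img.shape)  # stand-in for a style-shifting direction
cloaked = cloak(img, delta)

# Each pixel moves by at most ~2 levels out of 255: invisible to a person,
# but a consistent numeric shift a model could pick up during training.
max_change = np.abs(cloaked.astype(int) - img.astype(int)).max()
```

In the real system, `delta` would be optimized so the model misreads the artist's style, rather than drawn at random as here.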
"We're basically providing technical tools to help protect human creators against invasive and abusive AI models," claimed by a professor of computer science Ben Zhao of the Glaze team.
Created in just four months, Glaze spun off from technology used to disrupt facial recognition systems.
"We were working at super-fast speed because we knew the problem was serious," Zhao said of rushing to defend artists from software imitators.
"A lot of people were in pain."
Generative AI giants have agreements to use data for training in some cases, but the majority of digital images, audio, and text used to shape the way supersmart software thinks has been scraped from the internet without explicit consent.
Bd-pratidin English/Tanvir Raihan