• AutoTL;DR@lemmings.world · 9 months ago

    This is the best summary I could come up with:


    The goal of the Nightshade tool is to help visual artists and publishers protect their work from being used to train generative AI image synthesis models such as Midjourney, DALL-E 3, and Stable Diffusion.

    The open source “poison pill” tool (as the University of Chicago’s press department calls it) alters images in ways that are invisible to the human eye but can corrupt an AI model’s training process.
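
    For intuition, this kind of “invisible” alteration can be thought of as an adversarial perturbation: a per-pixel change kept below a tiny budget, so a human sees the same picture while a model’s feature extractor sees something shifted. The sketch below is an illustration under assumptions, not Nightshade’s published method (which optimizes its perturbation against a text-to-image model’s feature space); the poison function, the epsilon budget, and the random stand-in delta are all hypothetical.

        # A minimal conceptual sketch, NOT Nightshade's published algorithm:
        # the real tool optimizes its perturbation so the image "reads" as a
        # different concept to the model. Here a random delta stands in for
        # that optimized one, just to show the imperceptibility constraint
        # (an L-infinity per-pixel budget).
        import numpy as np
        from PIL import Image

        EPSILON = 4 / 255  # hypothetical budget: max change per pixel channel

        def poison(image: Image.Image, epsilon: float = EPSILON) -> Image.Image:
            x = np.asarray(image, dtype=np.float32) / 255.0
            # Placeholder perturbation; a real attack would compute this delta
            # by gradient descent against a surrogate feature extractor.
            delta = np.random.uniform(-epsilon, epsilon, x.shape).astype(np.float32)
            x_poisoned = np.clip(x + delta, 0.0, 1.0)
            return Image.fromarray((x_poisoned * 255).round().astype(np.uint8))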

    Those with access to existing large image databases (such as Getty and Shutterstock) are at an advantage, since they can draw on licensed training data.

    But as the Nightshade team sees it, research use and commercial use are two entirely different things, and they hope their technology can force AI training companies to license image data sets, respect crawler restrictions, and conform to opt-out requests.

    “The point of this tool is to balance the playing field between model trainers and content creators,” co-author and University of Chicago professor Ben Y. Zhao said in a statement.

    Shawn Shan, Wenxin Ding, Josephine Passananti, Haitao Zheng, and Zhao developed Nightshade in the Department of Computer Science at the University of Chicago.


    The original article contains 656 words, the summary contains 179 words. Saved 73%. I’m a bot and I’m open source!