Key points:
- Cara’s Rapid Growth: The app gained 600,000 users in a week.
- Artists Leaving Instagram: The controversy around Instagram using images to train AI led many artists to seek an alternative.
- Cara’s Features: The app is designed specifically for artists and offers a ‘Portfolio’ feature. Users can tag fields, mediums, project types, categories, and software used to create their work.
- Scale: While Cara has grown quickly, it is still tiny compared to Instagram’s massive user base of two billion.
- Glaze Integration: Cara is working on integrating Glaze directly into the app to give users an easy way to protect their work from being used by any AI.
  More about: https://blog.cara.app/blog/cara-glaze-about
It’s not. It’s supposed to target certain open source AIs (Stable Diffusion specifically).
Latent diffusion models work on compressed images, which takes fewer resources. The compression is handled by a type of neural network called a VAE (variational autoencoder). For this attack to work, you must have access to the specific VAE that you are targeting.
The image is subtly altered so that the compressed version looks completely different from the original. You can only do that if you know exactly what the compression network does, which is why Stable Diffusion is a necessary part of the Glaze software. It is ineffective against any closed-source image generator that has trained its own VAE (or equivalent).
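For a concrete picture of what "altering the image so its compressed form changes" can look like, here is a minimal sketch of an encoder-targeted perturbation against the publicly released Stable Diffusion VAE via the `diffusers` library. This is not Glaze's actual algorithm; the `perturb` helper, the "decoy" target, and all parameter values are illustrative assumptions.

```python
# Minimal sketch (not Glaze's real code): push an image's VAE latent toward
# the latent of a different "decoy" image while keeping the pixel change small.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

# Publicly released Stable Diffusion VAE; the attack only works because we can
# download this exact encoder and differentiate through it.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.requires_grad_(False)

def perturb(image, decoy, steps=200, eps=0.03, lr=0.01):
    """`image` and `decoy` are (1, 3, H, W) tensors scaled to [-1, 1]."""
    target_latent = vae.encode(decoy).latent_dist.mean.detach()
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        latent = vae.encode((image + delta).clamp(-1, 1)).latent_dist.mean
        loss = F.mse_loss(latent, target_latent)  # make the latent match the decoy
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)              # keep the pixel change imperceptible
    return (image + delta).clamp(-1, 1).detach()
```

A closed-source generator with its own encoder gives you no gradients to optimize against, which is exactly the limitation described above.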
This kind of attack is notoriously fickle and thwarted by even small changes to the image, such as resizing or re-compression. It’s probably not even very effective against the intended target.
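As an illustration of that fragility (a general property of pixel-level perturbations, not a claim about Glaze specifically), the "small changes" in question are as mundane as a resize and a JPEG round-trip. The helper below is a hypothetical example of such pre-processing.

```python
# Hypothetical example of the kind of mild transformation that can disturb
# a carefully optimized pixel perturbation: downscale, JPEG-compress, upscale.
from io import BytesIO
from PIL import Image

def jpeg_resize_cycle(img: Image.Image, quality=85, scale=0.9) -> Image.Image:
    w, h = img.size
    small = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    buf = BytesIO()
    small.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).resize((w, h), Image.LANCZOS)
```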
If you’re all about intellectual property, it kinda makes sense that freely shared AI is your main enemy.
Not only is this kind of attack notoriously unstable, but finding out which images have been glazed is also a fantastic indicator of high-quality art, exactly the stuff you want to train on.
I doubt that. Having a very proprietary attitude towards one’s images and making good images are not related at all.
Besides, good training data is to a large extent about the labels.