Google watermarks their images; I believe it's a sort of subtle rainbow striping that can survive basic filters/transforms. https://deepmind.google/models/synthid/ Other providers might do something similar but I'm not aware of any.
Otherwise no. There are lots of common tells, and some classifiers that might be better than guessing (at least within a given test set) but nothing reliable.
Halos of hue, regional variations in pixel "jagginess", reflections that don't correspond to the environment, distortions of the 2D pixel area that don't agree with the underlying 3D structure.
I don't know how reliable this is, but one can upload an image to a few different AIs and ask whether it was AI-generated.
I don't see how that would work. Isn't it akin to copying AI-generated text and asking different AI models whether the text was generated by AI?
They wouldn't be able to tell.
But maybe images have a sort of marker. I don't know.
Give it a shot and see. It could be that some facet of AI image generation is distinct from the other processes it knows about. Try a few of them.
Semi-related, it was able to spot the fake Ghislaine [1]
[1] - https://www.youtube.com/watch?v=pEA6Cwzwip8
Some of them do implement a steganographic watermark, but it's a continual game of cat and mouse. It would shock me if even SOTA watermarks persisted after running the image through a local model's img2img with a low denoise.
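To make the fragility concrete: here's a toy least-significant-bit watermark in Python (numpy only). This is NOT how SynthID or any real provider's scheme works (those embed more robustly, e.g. in ways meant to survive crops and filters); it's just a minimal sketch showing why naive pixel-level marks die under even trivial perturbation, which is a stand-in for what re-encoding or a light img2img pass does far more aggressively.

```python
import numpy as np

def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed watermark bits into the least-significant bit of the first len(bits) pixels."""
    flat = pixels.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n least-significant bits."""
    return pixels.flatten()[:n] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image
mark = rng.integers(0, 2, size=256, dtype=np.uint8)         # 256-bit watermark

stamped = embed_lsb(img, mark)
# A clean copy recovers the mark exactly.
assert np.array_equal(extract_lsb(stamped, mark.size), mark)

# Simulate the mildest possible perturbation: +/-1 noise per pixel
# (a crude proxy for re-encoding artifacts, let alone img2img).
noise = rng.integers(-1, 2, size=stamped.shape)
noisy = np.clip(stamped.astype(np.int16) + noise, 0, 255).astype(np.uint8)

# Recovery rate collapses toward chance: any +/-1 change flips the LSB.
match = (extract_lsb(noisy, mark.size) == mark).mean()
```

Real schemes embed redundantly in frequency or latent space precisely because the pixel level is this brittle, but img2img resynthesizes the image wholesale, which is why it's a plausible upper bound on what survives.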
I struggle to tell. And AI image generators are only going to get better. I fear we're going to spend the next few years debating whether something is real or AI.
I read an online conversation where people said that billionaires like those in the Epstein files would have loved the excuse that the incriminating images of them had been AI-generated by Epstein.
I saw a reel of Ghibli's The Wind Rises, and it was full of haters confidently exclaiming that it's AI slop. If you ran it through a standard detector, it would probably trip that too: the yellowed backgrounds, the distorted animation, the exaggerated facial expressions.
There's the famous four-second crowd scene that took a year to draw. It looks incredibly AI now. It's highly unlikely studios will manually draw things like this in the future.
A lot of people have just given up on separating them. A friend only searches for images made before 2021, because those can't be AI-made.
I think it's really sad, because it hurts the artists most of all.
And this is where those in power are headed: to use AI to sow misinformation and disinformation across all forms of media until people trust nothing but what other people tell them to trust. They become trivially easy to lead in whatever direction is needed.
You can see this in how conservatism has evolved over the last two decades. The bottom-99% of all conservatives have been so badly brainwashed and gaslit that they actively vote against their own best interests, such that the oligarchs and other members of the Parasite Class benefit enormously.