He allegedly used Stable Diffusion, a text-to-image generative AI model, to create “thousands of realistic images of prepubescent minors,” prosecutors said.
In many ways they are. The image generated from a prompt isn't unique; it's semi-random and not entirely in the user's control. The person could argue, "I described what I like, but I wasn't asking it for children, and I didn't think these were fake images of children," and based purely on the image it could be difficult to prove that the image is not merely "child-like" but actually depicts a child.
The prompt, however, shows in unambiguous terms what the user was asking for, and the negative prompt dispels any claim that they believed they were generating depictions of adults.
And also it’s an AI.
Before AI, 13k images meant a human with Photoshop or a child being subjected to fucked-up shit.
After AI, 13k images is just forgetting to turn off the CSAM auto-generate button.