I wonder if the forensic techniques used to identify photoshopped images or altered audio also work on AI-generated media?
I know you can timestamp audio to a specific point in time by matching the frequency of the background electrical hum, if it's available; so if the hum should be present but isn't, that could indicate a video or audio clip is fake. I also know that differences in image grain/quality can reveal patchwork images. But do those same tells show up in AI-generated media?
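For the hum part, the idea can be sketched in code. This is a minimal illustration (assuming NumPy, with made-up signal parameters), showing only the first step of electrical network frequency (ENF) analysis: checking whether a recording even contains energy at the mains frequency (50 or 60 Hz). Real ENF timestamping goes much further, tracking the hum's tiny frequency drift over time and matching it against grid logs; the function name and thresholds here are purely illustrative.

```python
import numpy as np

def mains_hum_strength(samples, sample_rate, mains_freq=60.0, band=1.0):
    """Fraction of spectral magnitude within +/- `band` Hz of the mains
    frequency. A recording made near powered electronics usually shows a
    peak at 50 or 60 Hz; its absence, when it should be present, is one
    cue that a clip may be synthetic."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    in_band = (freqs >= mains_freq - band) & (freqs <= mains_freq + band)
    return spectrum[in_band].sum() / spectrum.sum()

# Synthetic 5-second clips: broadband noise, with and without a faint 60 Hz hum.
rng = np.random.default_rng(0)
rate = 8000
t = np.arange(5 * rate) / rate
no_hum = rng.normal(size=t.size)
with_hum = no_hum + 0.5 * np.sin(2 * np.pi * 60.0 * t)

print(mains_hum_strength(no_hum, rate))    # low ratio: no hum present
print(mains_hum_strength(with_hum, rate))  # noticeably higher ratio
```

Whether AI audio generators reproduce a plausible hum at all, let alone one whose drift matches any real power grid, is exactly the open question; a check like this would only flag the crudest fakes.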