cross-posted from: https://rss.ponder.cat/post/193608

No One Knows How to Deal With 'Student-on-Student' AI CSAM

Schools, parents, police, and existing laws are not prepared to deal with the growing problem of students and minors using generative AI tools to create child sexual abuse material of their peers, according to a new report from researchers at the Stanford Cyber Policy Center.

The report, which is based on public records and interviews with NGOs, internet platform staff, law enforcement, government employees, legislators, victims, parents, and groups that offer online training to schools, found that despite the harm this nonconsensual imagery causes, the practice has been normalized by mainstream online platforms and certain online communities.

“Respondents told us there is a sense of normalization or legitimacy among those who create and share AI CSAM,” the report said. “This perception is fueled by open discussions in clear web forums, a sense of community through the sharing of tips, the accessibility of nudify apps, and the presence of community members in countries where AI CSAM is legal.”

The report says that while children may recognize that AI-generating nonconsensual content is wrong, they can assume “it’s legal, believing that if it were truly illegal, there wouldn’t be an app for it.” The report, which cites several 404 Media stories about this issue, notes that this normalization is in part a result of many “nudify” apps being available on the Google and Apple app stores, and that their ability to AI-generate nonconsensual nudity is openly advertised to students on Google and on social media platforms like Instagram and TikTok. One NGO employee told the authors of the report that “there are hundreds of nudify apps” that lack basic built-in safety features to prevent the creation of CSAM, and that even as an expert in the field he regularly encounters AI tools he’s never heard of, but that on certain social media platforms “everyone is talking about them.”

The report notes that while 38 U.S. states now have laws about AI CSAM and the newly signed federal Take It Down Act will further penalize AI CSAM, states “failed to anticipate that student-on-student cases would be a common fact pattern. As a result, that wave of legislation did not account for child offenders. Only now are legislators beginning to respond, with measures such as bills defining student-on-student use of nudify apps as a form of cyberbullying.”

One law enforcement officer told the researchers how accessible these apps are. “You can download an app in one minute, take a picture in 30 seconds, and that child will be impacted for the rest of their life,” they said.

One student victim interviewed for the report said that she struggled to believe that someone actually AI-generated nude images of her when she first learned about them. She knew other students used AI for writing papers, but was not aware people could use AI to create nude images. “People will start rumors about anything for no reason,” she said. “It took a few days to believe that this actually happened.”

Another victim and her mother interviewed for the report described the shock of seeing the images for the first time. “Remember Photoshop?” the mother asked. “I thought it would be like that. But it’s not. It looks just like her. You could see that someone might believe that was really her naked.”

One victim, whose original photo was taken from a non-social media site, said that someone took it and “ruined it by making it creepy […] he turned it into a curvy boob monster, you feel so out of control.”

In an email to school staff, one victim said, “I was unable to concentrate or feel safe at school. I felt very vulnerable and deeply troubled. The investigation, media coverage, meetings with administrators, no-contact order [against the perpetrator], and the gossip swirl distracted me from school and class work. This is a terrible way to start high school.”

One mother of a victim the researchers interviewed for the report feared that the images could crop up in the future, potentially affecting her daughter’s college applications, job opportunities, or relationships. “She also expressed a loss of trust in teachers, worrying that they might be unwilling to write a positive college recommendation letter for her daughter due to how events unfolded after the images were revealed,” the report said.

💡 Has AI-generated content been a problem in your school? I would love to hear from you. Using a non-work device, you can message me securely on Signal at emanuel.404. Otherwise, send me an email at [email protected].

In 2024, Jason and I wrote a story about how one school in Washington state struggled to deal with its students using a nudify app on other students. The story showed how teachers and school administration weren’t familiar with the technology, and initially failed to report the incident to the police even though it legally qualified as “sexual abuse” and school administrators are “mandatory reporters.”

According to the Stanford report, many teachers lack training on how to respond to a nudify incident at their school. A Center for Democracy and Technology report found that 62% of teachers say their school has not provided guidance on policies for handling incidents involving authentic or AI nonconsensual intimate imagery. A 2024 survey of teachers and principals found that 56 percent did not get any training on “AI deepfakes.” One provider told the authors of the report that while many schools have crisis management plans for “active shooter situations, they had never heard of a school having a crisis management plan for a nudify incident, or even for a real nude image of a student being circulated.”

The report makes several recommendations to schools, like providing victims with third-party counseling services and academic accommodations, drafting language to communicate with the school community when an incident occurs, ensuring that students are not discouraged or punished for reporting incidents, and contacting the school’s legal counsel to assess the school’s legal obligations, including its responsibility as a “mandatory reporter.”

The authors also emphasized the importance of anonymous tip lines that allow students to report incidents safely. The report cites two incidents that were initially discovered this way: one in Pennsylvania, where a student used the state’s Safe2Say Something tipline to report that students were AI-generating nude images of their peers, and another at a school in Washington, which first learned about a nudify incident through a submission to the school’s online harassment, intimidation, and bullying tipline.

One provider of training to schools emphasized the importance of such reporting tools, saying, “Anonymous reporting tools are one of the most important things we can have in our school systems,” because many students lack a trusted adult they can turn to.

Notably, the report does not take a position on whether schools should educate students about nudify apps because “there are legitimate concerns that this instruction could inadvertently educate students about the existence of these apps.”


From 404 Media via this RSS feed

  • ferristriangle [he/him]@hexbear.net · 21 hours ago

    One NGO employee told the authors of the report that “there are hundreds of nudify apps” that lack basic built-in safety features to prevent the creation of CSAM,

    I love the pearl clutching around lacking safeguards against non-consensual child nudity in specific, as though an app that is designed to generate non-consensual nude images of adults is the gold standard we should be aspiring to and the only problem with this idea is this one teensy little edge case they forgot to consider.

  • SoyViking [he/him]@hexbear.net · 1 day ago

    I seriously didn’t think they had nudify apps on the app stores. Firstly, these apps are blatantly obviously unethical in virtually all use cases, secondly, I naively thought that app stores were typical American prudes, hostile to anything involving sex or nudity. And here they are peddling revenge porn apps.

    • LaughingLion [any, any]@hexbear.net · 21 hours ago

      only real ethical use case is using it for yourself, like on your own picture for entertainment purposes but of course people like us dont need that because hexbear users are already a fucking snacc

  • UmbraVivi [he/him, she/her]@hexbear.net · 1 day ago

    Ah yes. Generative AI has revolutionized:

    • Fraud

    • Spam

    • Revenge porn

    • Cheating

    • Schizophrenia

    And now:

    • Child porn

    But hey, at least it also made many existing services noticeably worse, is causing significant environmental damage and will cause millions to lose their jobs! That makes it all worth it!

      • UmbraVivi [he/him, she/her]@hexbear.net · 1 day ago

        It doesn’t even do that joker-dancing. OpenAI is bleeding money and none of the other big tech corporations investing in AI are seeing any returns either. The only people making money off of this are the hardware providers like Nvidia. Generative AI is just not very useful for anything that requires any minimum standard of quality; the only thing it’s good for is generating unimaginable amounts of useless slop.

        GenAI is a big tech Hail Mary like the blockchain and the metaverse before it. Google, Microsoft and Meta desperately need there to be another growth market; they need the line to go up, but all the current business models have been exhausted. They are betting on GenAI being the next growth market, the next big thing, merely hoping that it will make line go up in the foreseeable future.

  • Bonifratz@lemm.ee · 1 day ago

    I know all of this sucks donkey balls, but the one tiny positive I can see in this development is that in the future, victims of actual revenge porn can at least say it’s all AI. I.e., if everyone can post sexual video material of anybody else anyway, then real material will get lost in the flood.

    But yeah, other than that, fuck AI.

  • VibeCoder [they/them]@hexbear.net · 1 day ago

    I was actually just looking into CSAM moderation and the current state of available solutions is abysmal. Cloudflare’s fuzzy hash comparison is widely recommended as if it’s a get-out-of-jail-free card to wash your hands of CSAM detection. But it only detects hashes of known CSAM material. So good luck with minors uploading their own nudes in DMs, let alone dealing with this AI bullshit.

    Edit: I should say there are indeed services catered toward corporations that offer this kind of detection. It’s a tier of service where you need to contact sales in order to get pricing, if that works as a shorthand for people. I was looking at it as a solo developer, and compared to a lot of cloud services and platforms, it’s just not catered to smaller scales.
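
    To illustrate the limitation described in this comment, here is a minimal sketch of hash-list matching, assuming Python with the open-source Pillow and imagehash libraries as stand-ins (Cloudflare's actual service, hash format, and thresholds are not described here, and the hash values and threshold below are hypothetical). An image is flagged only if its perceptual hash lands near a hash already on a known list, which is why newly generated material goes undetected.

    ```python
    # Minimal sketch: flag an image only if its perceptual hash is close to a
    # hash already catalogued in a known list. Uses Pillow + imagehash as
    # illustrative stand-ins for hash-comparison moderation services.
    from PIL import Image
    import imagehash

    # Hypothetical hashes of previously catalogued images (illustrative values only).
    KNOWN_HASHES = [imagehash.hex_to_hash("ffd7918181c9ffff")]

    # Maximum Hamming distance still treated as a "fuzzy" match (assumed value).
    MATCH_THRESHOLD = 5

    def is_known_image(path: str) -> bool:
        """Return True if the image's perceptual hash is near any known hash."""
        candidate = imagehash.phash(Image.open(path))
        return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)

    # A newly generated image has no counterpart in KNOWN_HASHES, so it passes
    # undetected; that is the gap the comment above is pointing at.
    ```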

  • anarchoilluminati [comrade/them]@hexbear.net · 1 day ago

    Aren’t these AI programs paid for? Wouldn’t they register customer info? How are kids even signing up for these things without their parents’ cards or whatever? I don’t see how there aren’t ways to figure this out. It doesn’t seem that easy for kids to produce this; there must be ways to make it more difficult.

    I don’t even use Google, much less use AI, so I genuinely don’t even know how this works, and I normally don’t like surveillance in any way, but seems kinda weird that they’re just immediately throwing their hands up in the air about CSAM of all things.

    • sodium_nitride [she/her, any]@hexbear.net · 1 day ago

      Many of the AI models can be run locally, and many of the paid ones give free trials.

      I mean, I could set this shit up in a single afternoon on my laptop, fully locally and with FOSS. And that was 2 years ago when I wanted to experiment with image generation for a while. I imagine going the local route is much easier now.

      That’s kind of why “solving” AI under capitalism is close to impossible. Everything is a temporary workaround or a bandaid. And by capitalism I don’t mean “rule by bourgeois classes” but straight up commodity production. I would be very surprised if this shit didn’t start appearing in AES systems as well to some extent.

      • anarchoilluminati [comrade/them]@hexbear.net · 1 day ago

        Well, that leads me to the other thing I was thinking: that maybe it’s not even other kids as much as adults making this CSAM.

        I wouldn’t even know how to locally setup some AI like that, I know kids are usually on the cusp of whatever is being released and know more than some olders like me but I still think it sounds like a bit much for a kid who just wants to see some nudes. These kids who have the CSAM made of them usually have some social media and who knows who’s taking their picture and processing this stuff. That will be much more difficult to resolve, if not impossible to a certain degree like you said. But I don’t think it sounds impossible to prevent kids from doing this stuff to other kids. They make it sound like this is some naughty kids up to no good when in reality these are tools being utilized by peds, in my opinion.

        • sodium_nitride [she/her, any]@hexbear.net · 1 day ago

          I wouldn’t even know how to locally setup some AI like that

          It wasn’t too hard, I just had to follow a step by step guide. I really can’t tell you how much easier it is today because I lost interest in image generation after using it like 3 times, but the trajectory of most FOSS software is to just turn them into a convenient package that you just download, then run. For a kid willing to pretty much ruin a classmate’s life, it is frighteningly easy. Kids can do a lot of shit with a little dedication.

          But I don’t think it sounds impossible to prevent kids from doing this stuff to other kids.

          It will be very difficult to resolve this issue. Criminal prosecutions of the practice involved will likely lead to the same outcomes as other state-sanctioned crackdowns on electronic communications. It will barely resolve the actual problem (because the actual criminal activity can be hidden away with some effort), but the state can use it as a pretext to attack things it doesn’t like.

          In general, police effectiveness in capitalist states at preventing and catching crime tends to be surprisingly limited. And whatever effectiveness they have plummets when it comes to regulating electronic communications. Like, the state can’t stop the illegal distribution of pirated media, even though protecting property rights is literally the fundamental interest of capitalist states. I somehow doubt that most western countries will really do anything significant to curtail this phenomenon, especially since most of them have an interest in upholding violence against women and especially trans women, who I would not be surprised are targeted way more often. Like, is the Trump admin of all people really going to regulate the use of AI to stop revenge porn?

          They make it sound like this is some naughty kids up to no good when in reality these are tools being utilized by peds, in my opinion.

          The primary perpetrators of such CSAM humiliation rituals would almost certainly be classmates, who would have a personal motivation to ruin someone’s life, as opposed to some stranger pedophile, who has no real reason to share CSAM material amongst the classmates of someone they don’t know. Such SA and SV incidents are really a matter of the perpetrator asserting power over the victim, and the violation of intimate boundaries is one of the strongest ways to assert control and dominance.

          • anarchoilluminati [comrade/them]@hexbear.net · 23 hours ago

            Yeah, you’re very right about the humiliation aspect and the rest, especially if other students are seeing it all too. That’s terrible that kids are doing this type of shit at such an early age. At that age, I just wanted to see nudes. Revenge AI porn would’ve been unthinkable. It’s still hard for me to process that kids are thinking like this but you’re right.

            Ugh. I’m so sorry for young girls going through this shit. Capitalism is terrible. This shit should never have been released to the public without the safety mechanisms in place.

            • sodium_nitride [she/her, any]@hexbear.net · 3 hours ago

              This shit should never have been released to the public without the safety mechanisms in place.

              Kinda late reply, but honestly, yeah. The safety mechanisms put in place were woefully inadequate, and for all the hype OpenAI made about “AI taking over humanity and being extremely dangerous”, they never bothered to just delay their releases until after the appropriate guardrails were laid (which would still only be a bandaid solution).

              I’ve never before seen a company create a marketing campaign about how genuinely dangerous their product is while simultaneously rushing releases as fast as possible and creating as much hype as possible so that everyone will rush to develop and use the dangerous technology more.

  • Lerios [hy/hym]@hexbear.net · 1 day ago

    abuse material of other their peers

    despite the harm that nonconsensual causes,

    2 grammer fuck ups in the first 2 paragraphs. i love an article about ai being written with ai lmao.

  • Belly_Beanis [he/him]@hexbear.net · 1 day ago

    The reason laws are falling behind is because this type of AI was intended for CSAM and revenge porn from the start. It was one of the first things people pointed out would happen: if you can create pictures of people naked, it will be used without people’s consent. The only conclusion I can come to is the creators of these apps and programs are doing this deliberately. They intended for this to be legal non-consensual material from the very get go. The purpose of the system is whatever the system does.

  • Simon 𐕣he 🪨 Johnson@lemmy.ml · 1 day ago

    This was going to be the end state of AI trends as a solution to the loneliness epidemic esp. with the sexual services side of things.

    There are 2 real choices:

    1. Children have AI “partners” whose characterization is inherently pedophilic.
    2. Children have AI “partners” that are age appropriate, making the technology inherently pedophilic.

    It’s unrealistic to believe that this technology can be made “safe” or that it can be eradicated. If we had eradicated it 2-3 years ago it might have worked; now the ship has sailed. Generative image models and some video models are within reach for personal consumers. While most models still require specialized enterprise-level tech to use as a “service”, what we’re really talking about is the dial-up porn experience vs. broadband.

    Photoshopping their peers was already happening long before this. Liberal law has never been able to square this circle because it’s not possible to make a sensible system in a deontological way. A deontological mindset will always resist consequentialist approaches. Look at family court, for example. There are millions of men and women out there who do not understand that the minor legal protections we have for children should supersede their individual rights.