State, local organizations fighting AI-related child sex crimes in Arkansas
Predators are now using AI to generate child sexual abuse material, or CSAM. Experts say the laws need to change.
As technology quickly advances, artificial intelligence has become an increasingly popular and powerful tool. Experts are now warning that the tech has fallen into the wrong hands.
From cyber kidnappings to ransomware attacks, artificial intelligence, technological advancement, and the ever-increasing prevalence of the internet are reshaping crimes against children, putting juveniles at more risk than ever before.
One of those emerging crimes is the use of AI to create child sexual abuse material, or CSAM.
According to a report from the Europe-based Internet Watch Foundation, researchers found 20,254 AI-generated images posted to a single dark web CSAM forum in a one-month period, with more than half deemed "likely to be criminal."
As the threat grows, laws have yet to adapt, and nonprofits that combat child sex crimes are working to prepare families and communities for the future.
How Does AI Work?
Generative artificial intelligence comes in many forms, creating a challenge for lawmakers, child advocates, and anyone else hoping to combat the threat.
Deepfakes, one popular form of AI, are photos, audio recordings, or videos that appear real but have been manipulated or altered.
Other tools work by feeding pre-existing images into a generator, which learns from them and produces new images.
“The tools and techniques for manipulating authentic multimedia are not new, but the ease and scale with which cyber actors are using these techniques are," the National Security Agency (NSA) said.
AI Crime Hits Home
While local prosecutors have been unable to speak on ongoing cases, the problem of AI-generated CSAM has already made its way into 5COUNTRY.
Michael Timothy Carey, 55, was arrested in July 2023 and has since been charged with nine counts of distributing, possessing, or viewing matter depicting sexually explicit conduct involving a child.
In an affidavit filed by the Benton County Sheriff's Office (BCSO), police say they discovered that Carey had begun using AI to create CSAM.
"During the on-scene interviews, Michael Carey admitted to downloading child sexual abuse material. Michael stated that he downloaded, viewed, and deleted the images/videos. Michael stated his preferred age was eight years old to thirteen. Michael had recently (the past two weeks) started creating his own images using an artificial intelligent generator," the affidavit said.
It's unclear if any of Carey's charges were related to his alleged use of AI to create CSAM.
Washington County prosecutor Denis Dean says that the law has not caught up with technological advancements and the harm they can bring.
"Under the current state of the law we need real victims," Dean said. "Under Arkansas law, a person may be convicted of distributing, possessing, or viewing CSAM if the material involved a child, which is defined as any person under seventeen years of age. This law does not contemplate material created by AI or otherwise not depicting real children."
Dean says that while the law isn't yet up to date, it needs to be.
"I absolutely [believe] that new legislation needs to be drafted to catch up to the technology. Some of these AI images and videos are virtually indistinguishable from real children or use cropped or altered images of real children," Dean said.
Arkansas Attorney General Tim Griffin acknowledged the issue in September of last year, announcing that he had joined a bipartisan coalition of attorneys general calling on Congress to establish a commission to "study the ways in which artificial intelligence can be used to exploit children."
“I have vowed to fight the exploitation of children on every front — including taking on artificial intelligence. AI poses a very real threat to our children," Griffin said. "This ‘new frontier for abuse’ opens the door for children to be exploited in new ways, including publishing their location and mimicking their voice and likeness in sexual or other objectionable content."
Griffin agrees that laws are not yet strong enough to combat the incoming wave of AI-generated child abuse material.
“As technology advances, our means of protecting our children from harm must also advance. That’s why I’ve joined this bipartisan effort to call on Congress to study this issue and eventually strengthen our laws related to internet crimes against children. My office’s Special Investigations Division and Special Prosecutions Division deal with such crimes daily, and I will continue to do everything I can to ensure the safety of our children," Griffin said.
The letter, signed by Griffin and 53 other attorneys general, outlines the ways AI could threaten children, including through the use of deepfakes.
"AI tools can rapidly and easily create ‘deepfakes’ by studying real photographs of abused children to generate new images showing those children in sexual positions," the letter said. "This involves overlaying the face of one person on the body of another. Deepfakes can also be generated by overlaying photographs of otherwise unvictimized children on the internet with photographs of abused children to create new CSAM involving the previously unharmed children.”
The letter also explains why, even when AI-generated images are not deepfakes and simply depict children who don't exist, the creations remain problematic, for at least four reasons:
- AI-generated CSAM is still often based on source images of abused children.
- Even if some of the children in the source photographs have never been abused, the AI-generated CSAM often still resembles actual children, which potentially harms and endangers those otherwise unvictimized children, as well as their parents.
- Even if some AI-generated CSAM images do not ultimately resemble actual children, the images support the growth of the child exploitation market by normalizing child abuse and stoking the appetites of those who seek to sexualize children.
- Just like deepfakes, these unique images are quick and easy to generate using widely available AI tools.
Combatting the Issue
With internet-related child sex crimes on the rise, many nonprofits are on the front lines, educating children, families, teachers, and communities about growing dangers online.
Casey Atwood, the Director of Operations at the Children's Safety Center of Washington County, said she has seen firsthand how advancing technology shapes these crimes and affects their victims.
"Even things that parents might think are really innocent or ‘I’m friends with everyone that I'm friends with on social media’ you really just don’t know. I think that the education and safety piece around educating not only our kids about body safety and internet safety, but also to adults that just aren’t caught up with the technology or not thinking about how posting a picture can affect them later on," Atwood said.
"Even if you look through all of your social media friends right now you would probably find some people that you don’t really know. Even if you do really know them, I mean 90% of the time an abuser is someone that the child and family know and trusts," Atwood added.
Atwood says the safety center hasn't seen any AI-related cases yet, but that may be because the laws haven't caught up.
"This is all new stuff so I can't say that we've even seen a kid that I can think of that this has happened to because if it is AI and there’s not a law broken, we just don’t even know if they’ve been identified yet," Atwood said. "It just really shows how we can never really keep up with technology. They're just now creating laws around things that have been issues for years. It just takes a long time for these kinds of things, creating these laws, and all of that is a lengthy process. That’s really scary, and people are smart."
Atwood, who has been at the safety center for 18 years, says that education is key to combatting these crimes.
"I think that the prevention education piece is so important. I often, when I’m doing trainings with parents, caregivers, or the community, I often talk to them about not only how we teach our kids to be safe online but even adults who are highly educated are posting pictures of their kids on their social media maybe in a bathtub or out in front of their school with their school’s name on it or their street sign," Atwood said. "I think being really careful about the types of images that we are posting, this can be a good segue to educating the community on those kinds of things."
The National Cybersecurity Alliance, a national nonprofit dedicated to educating people on how to protect themselves online, says that the best way to protect your children is to "share with care."
"Limit the amount of data available about yourself, especially high-quality photos and videos, that could be used to create a deepfake. You can adjust the settings of social media platforms so that only trusted people can see what you share. Of course, you should also make sure that you trust anyone who requests to follow or friend you," the group said.
Atwood says that prevention education is already making a difference locally.
"I had a counselor one time that brought a kid into the safety center because they had disclosed at school about someone that lived in their home and the counselor said, ‘We just did the curriculum that you taught our counselors today and this child came forward,'" Atwood said.
"I really do feel like we are kind of coming into a generation where adults are saying ‘We need to talk about this more, we need to talk to our kids about this,'" Atwood added.
Atwood said that it's ultimately up to the adults to keep children safe.
"I think it's really important to remember that ultimately, it's an adult's responsibility to keep kids safe and to be making sure that we're supervising as much as possible, to kind of limit what we can," Atwood said.