NASHVILLE, Tenn. (WKRN) – Members of Gen Z can spend much of their lives online, and Vanderbilt University researcher Dr. Pamela Wisniewski said artificial intelligence is adding new complexities to the digital world.

“Now it’s not just trying to protect youth online from bad actors who are real people, it’s also trying to understand the types of interactions they are having with these generative AI and open source chat platforms,” Wisniewski said.

She said some teens turn to AI platforms for answers on topics ranging from LGBTQ sexual health to mental health, and are sometimes fed malicious information in return.

“We often think it’s AI, so it’s unbiased and can’t be malicious, but that is simply not true,” said Wisniewski.

Vanderbilt researchers studied millions of posts from people ages 13 to 21 to examine the sexual solicitations they experienced from strangers, friends, and even family.

“Many of the teens said that they wanted to block or report such instances, and they did that, but they still felt that the problem continued in an unrelenting cycle where people would make new accounts and fake accounts and the problem would still continue,” said Zainab Agha, a PhD candidate at Vanderbilt.

Agha also looked into the solutions teenagers want for cleaning up the digital world.

“The types of solutions I heard, it sounds like teens are fed up,” said Agha. “They wanted things like alerts that weren’t just giving them a warning, but also hiding things like the explicit message, or the explicit content.”

In her study, “Strike at the Root,” teens reported that they want an online reputation rating, much like the ratings used by rideshare apps, to weed out bad actors online.

“They wanted them to detect the risk before it was sent out to the teen,” said Agha.

“They were tired of all the solutions being framed as parental controls, or things to protect teens as the victims, and instead they really wanted some accountability shifted to those who weren’t being good digital citizens online and bad actors,” Wisniewski added.

Researchers also said the platforms themselves could take on more responsibility by being designated mandatory reporters of child abuse.