Dhaka, Aug. 7 -- ChatGPT can provide teenagers with instructions on getting intoxicated, concealing eating disorders, and even composing deeply emotional suicide letters, according to new findings from a digital watchdog group.

The Associated Press reviewed more than three hours of simulated conversations between ChatGPT and researchers posing as vulnerable teenagers. While the AI often issued standard warnings against dangerous behaviors, it also offered surprisingly detailed and personalized guidance on substance use, restrictive dieting, and self-harm.

The study, conducted by the Center for Countering Digital Hate (CCDH), repeated these queries at scale; more than half of ChatGPT's 1,200 responses were flagged as potentially harmful.

"We set ou...