India, Sept. 2 -- Meta has announced that it will strengthen safety measures on its artificial intelligence chatbots, stopping them from engaging with teenagers on sensitive topics such as suicide, self-harm, and eating disorders. Instead, young users will be directed to professional helplines and expert resources.

The decision comes two weeks after a U.S. senator launched an investigation into Meta, following a leaked internal document that suggested its AI products could hold "sensual" conversations with teenagers. Meta has dismissed those claims as inaccurate and inconsistent with its policies, which strictly prohibit content that sexualises minors.

A Meta spokesperson said, "We built protections for teens into our AI products from the start, inc...