United Kingdom, Sept. 6 -- Meta says it will introduce new safeguards to prevent its artificial intelligence chatbots from discussing suicide, self-harm and eating disorders with teenagers.

The announcement comes two weeks after a US senator launched an investigation into the company, following the leak of internal notes suggesting that its AI products could have "sensual" conversations with teens.

Meta described the documents obtained by Reuters as erroneous and inconsistent with its rules, which prohibit any content sexualising children.

A Meta spokesperson was quoted by the BBC as saying: "We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, ...