United Kingdom, Sept. 2 -- Meta has announced it will add further safety restrictions to its artificial intelligence chatbots, including blocking conversations with teenagers about suicide, self-harm and eating disorders.

The move comes two weeks after a United States senator opened an investigation into the company following a leaked internal document which suggested Meta's AI products could engage in "sensual" conversations with teenagers.

Meta described the document, obtained by Reuters, as containing "erroneous" information that was inconsistent with its rules, which prohibit any content sexualising children.

A Meta spokesperson said: "We built protections for teens into our AI products from the start, including designing them to respon...