India, Sept. 10 -- This week, we have a conversation about two separate examples of how messy AI chatbots can be beneath their shiny interfaces (and loud promises), as well as Apple's vision language models meant for real-world visual understanding.

If the news of AI faltering surprises you, that itself would be a revelation. Researchers from the University of Pennsylvania reveal in a new study that chatbots, including GPT-4o, can be tricked with classic persuasion tactics into breaking their own safety rules and completing requests their guidelines would otherwise not allow. The trick? The researchers used tactics described by psychology professor Robert Cialdini in Influence: The Psychology of Persuasion. For instance, ChatGPT almo...