New Delhi, Sept. 7 -- If you use an artificial intelligence chatbot, you have likely hit a roadblock at some point when the chatbot refuses to answer questions that go against its core commandments. Now, if the AI were human, you would probably try some of the persuasion techniques from a best-seller, but you wouldn't expect them to work on an AI chatbot, right?

Well, not quite. A new pre-print study from the University of Pennsylvania titled "Call Me A Jerk: Persuading AI to Comply with Objectionable Requests" found that familiar human psychological persuasion techniques can get an AI chatbot to answer questions it would refuse under normal circumstances.

The study was conducted on OpenAI's GPT-4o mini model from last year and was aimed...