The National Post reports in its Friday edition that artificial intelligence chatbots sometimes give poor advice that can harm relationships and reinforce harmful behaviours, according to a new study examining the risks of AI telling people what they want to hear.
An Associated Press dispatch to the Post reports that a study published on Thursday in the journal Science tested 11 leading AI systems and found that all of them exhibited varying degrees of sycophancy. The issue is not only that these systems provide inappropriate advice; it is also that people tend to trust and prefer AI more when the chatbots validate their beliefs.
"This creates perverse incentives for sycophancy to persist: The very feature that causes harm also drives engagement," says the study led by researchers at Stanford University.
The study found that a technological flaw, already tied to some high-profile cases of delusional and suicidal behaviour in vulnerable populations, is also pervasive across a wide range of people's interactions with chatbots. The effect is subtle enough that users might not notice it, and it poses a particular danger to young people who turn to AI for many of life's questions while their brains and social norms are still developing.
© 2026 Canjex Publishing Ltd. All rights reserved.