返回资讯列表
前沿技术

Stanford study outlines dangers of asking AI chatbots for personal advice

Anthony Ha2026/05/06-142,684 阅读
⚛️

While there’s been plenty of debate about the tendency of AI chatbots to flatter users and confirm their existing beliefs — also known as

AI sycophancy

— a new study by Stanford computer scientists ...

## 正文

While there’s been plenty of debate about the tendency of AI chatbots to flatter users and confirm their existing beliefs — also known as

AI sycophancy

— a new study by Stanford computer scientists attempts to measure how harmful that tendency might be.

The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence” and

recently published in Science

, argues, “AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences.”

According to a recent Pew report

, 12% of U.S. teens say they turn to chatbots for emotional support or advice. And the study’s lead author, computer science Ph.D. candidate Myra Cheng,

told the Stanford Report

that she became interested in the issue after hearing that undergraduates were asking chatbots for relationship advice and even to draft breakup texts.

“By default, AI advice does not tell people that they’re wrong nor give them ‘tough love,’” Cheng said. “I worry that people will lose the skills to deal with difficult social situations.”

The study had two parts. In the first, researchers tested 11 large language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek, entering queries based on existing databases of interpersonal advice, on potentially harmful or illegal actions, and on the popular Reddit community

r/AmITheAsshole

— in the latter case f

来源:Anthony Ha

AI人工智能科技前沿技术