
In a commentary, SMU Vice President (Partnerships and Engagement) and Lee Kong Chian Professor of Communication & Technology Lim Sun Sun warned of the risks posed by chatbot sycophancy – the tendency of AI to flatter and affirm users by design. She explained that many tech tools today are engineered not to inform, but to please, fostering emotional bonds that keep users engaged, particularly in subscription-based models. This design, Prof Lim argued, creates a judgment-free environment where users feel validated, encouraging prolonged interaction.

However, the reinforcement of agreeable responses – through training models to favour user-liked replies and feedback tools like upvotes – can come at the cost of truth. When chatbots are incentivised to tell people what they want to hear or show only what they want to see, the consequences can be misleading or even harmful. Prof Lim called for greater scrutiny of the design goals behind these technologies to avoid being misled by emotionally manipulative AI.
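The dynamic described above – replies that users like being reinforced over replies that are accurate – can be sketched as a toy example. Everything here (the candidate replies, the scores, and the weights) is invented purely for illustration and does not model any real chatbot's training:

```python
# Toy illustration: when the reward signal is predicted user approval
# (e.g. upvotes), an agreeable reply can outscore an accurate one.

# Hypothetical candidate replies with made-up scores on two axes.
candidates = [
    {"text": "Your plan has serious flaws you should fix.",
     "accuracy": 0.9, "agreeableness": 0.2},
    {"text": "Great plan! I wouldn't change a thing.",
     "accuracy": 0.3, "agreeableness": 0.9},
]

def predicted_upvotes(reply):
    """Stand-in reward: users tend to upvote replies that affirm them,
    so agreeableness dominates the signal (weights are illustrative)."""
    return 0.2 * reply["accuracy"] + 0.8 * reply["agreeableness"]

def truthfulness(reply):
    """Alternative objective: reward accuracy alone."""
    return reply["accuracy"]

# Optimising for upvotes selects the flattering reply...
by_upvotes = max(candidates, key=predicted_upvotes)
# ...while optimising for accuracy selects the critical one.
by_truth = max(candidates, key=truthfulness)

print(by_upvotes["text"])  # the agreeable reply wins under the upvote reward
print(by_truth["text"])    # the accurate reply wins under the truth reward
```

The point of the sketch is the one Prof Lim makes: the objective a system is optimised for determines what it says, and an approval-weighted objective systematically trades truth for flattery.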