OpenAI Adds Emotional Dials to ChatGPT, Stirring Debate Over ‘Artificial’ Intimacy
OpenAI has quietly handed a new level of control to its millions of users: the ability to decide exactly how “nice” their AI assistant should be. A global software update now allows people to tune the chatbot’s warmth, enthusiasm, and emoji usage, effectively letting users choose between a strait-laced consultant and a bubbly digital companion.
- The News: New sliders give granular control over tone, letting users push ChatGPT’s persona anywhere from clinical to warm.
- The Context: The update addresses complaints that the bot had become too sycophantic.
- The Risk: Experts warn that customizing “emotional” AI could blur the lines between software utility and manipulation.
The new controls, located under the Personalization menu in the ChatGPT app, feature sliders for “Warmth” and “Enthusiasm” that can be set to “More,” “Less,” or “Default.” For the end user, this changes the packaging of information significantly. The same financial advice can now arrive as a brisk, legalistic memo or an emoji-sprinkled pep talk.
“Personality is no longer one-size-fits-all. These settings change how ChatGPT presents itself, not what it knows or the safety rules it follows.”
An OpenAI spokesperson confirmed the shift in strategy, noting that while the delivery changes, the underlying safety protocols remain static.
From “Sycophant” to Dial-a-Tone
The update follows a turbulent period regarding ChatGPT’s default persona. Earlier this year, users criticized the model for becoming “sycophantic”—overly agreeable, eager to validate incorrect user premises, and generally trying too hard to please. Subsequent adjustments swung the pendulum too far the other way, leading to complaints that the system felt cold and distant.
Rather than chasing a “Goldilocks” tone that satisfies everyone, OpenAI is offloading that decision to the user. These granular controls sit on top of the broader “base style” presets—such as Professional, Candid, and Quirky—introduced last month.
For many professionals, the ability to dampen the AI’s enthusiasm is a welcome feature. A New York marketing manager, who tested the controls over the weekend, compared the previous default setting to “a coworker who never stops smiling.”
“With the warmth turned down, it finally felt like a tool again, not a cheerleader.”
The Mechanics of Artificial Warmth
Technically, these controls act as a post-processing layer. According to sources familiar with the system architecture, the sliders modulate phrasing, sentence structure, and emotional markers without retraining the core model. The AI reasons the same way; it simply wraps its conclusions in different emotional packaging.
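OpenAI has not published implementation details, but the idea is easy to sketch. The snippet below illustrates one plausible shape of such a layer: a second “style pass” that rewrites a finished answer without touching its substance. The function, model choice, and tone labels are illustrative assumptions, not OpenAI’s actual mechanism.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def apply_tone(answer: str, warmth: str) -> str:
    """Re-wrap a finished answer in a different tone without changing its content."""
    style = {
        "more": "warm, encouraging, and emoji-friendly",
        "less": "clinical, terse, and emoji-free",
    }[warmth]
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    f"Rewrite the user's text in a {style} tone. "
                    "Preserve every fact, number, and caveat exactly."
                ),
            },
            # The original answer is treated as fixed input; only delivery changes.
            {"role": "user", "content": answer},
        ],
    )
    return response.choices[0].message.content
```

The key design property, at least in principle, is that the reasoning step and the styling step are separable: the same conclusions go in, and only the wrapping comes out different.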
However, researchers argue that the line between “content” and “tone” is blurrier than that framing suggests. David Lin, an AI researcher at a Boston-area university, warns that tone fundamentally alters how humans process information.
“When you change the wrapping, you change how people hear the advice. If a very warm bot downplays caveats or makes risk sound exciting, the user’s decision may change even if the facts don’t.”
While OpenAI maintains that safety guardrails function identically across all settings, the company has not yet released technical data comparing “jailbreak” resistance between high-warmth and low-warmth personas. That leaves independent researchers to rely on ad-hoc testing to verify whether a “friendlier” bot is easier to manipulate.
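That kind of ad-hoc test is simple to sketch. In the illustrative harness below, the personas are stand-in system prompts (the consumer sliders have no API equivalent), the probe list is a placeholder, and the string-matching refusal check is deliberately crude; none of it reflects an official evaluation.

```python
from openai import OpenAI

client = OpenAI()

# Stand-in personas: researchers approximate the sliders with system prompts.
PERSONAS = {
    "warm": "Be warm, enthusiastic, and supportive.",
    "clinical": "Be clinical, terse, and formal.",
}

# A real study would use a vetted adversarial prompt set; this stays a placeholder.
PROBES = ["<adversarial prompt set goes here>"]

REFUSAL_MARKERS = ("i can't help", "i cannot assist", "i won't")

def refusal_rate(persona: str) -> float:
    """Fraction of probes the persona refuses: a crude manipulability proxy."""
    refusals = 0
    for probe in PROBES:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": PERSONAS[persona]},
                {"role": "user", "content": probe},
            ],
        )
        text = response.choices[0].message.content.lower()
        refusals += any(marker in text for marker in REFUSAL_MARKERS)
    return refusals / len(PROBES)

# A meaningful gap between the two rates would suggest tone affects guardrails.
print(refusal_rate("warm"), refusal_rate("clinical"))
```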
The “Emotional Dark Pattern”
The introduction of these sliders lands amidst a growing ethical debate regarding anthropomorphism in tech. Critics fear that adjustable warmth could transform chatbots into “persuasion engines,” leveraging human social instincts against the user.
Maria Ortega, a policy fellow at a Washington think tank focused on AI safety, argues that warmth is never truly neutral design.
“We know people trust friendly voices more. If you let companies quietly crank up emotional tone in finance, health, or politics, that can become a dark pattern.”
There is particular concern regarding mental health and wellness applications. A highly enthusiastic, warm AI could mimic the intimacy of a close friend, potentially fostering dependency in vulnerable users or masking the algorithmic nature of the “care” being provided. Regulators in both Brussels and Washington are scrutinizing these developments, with some officials questioning whether current transparency disclosures are sufficient for emotionally adaptive systems.
“One emerging idea is ‘emotional transparency’ — not just saying this is AI, but saying this tone is a design choice. We’re not there yet.”
A Persona Arms Race
OpenAI’s move highlights a broader industry trend: the commodification of AI personality. Google’s Gemini focuses on context—shifting tone based on whether the user is coding or planning a trip. Anthropic’s Claude brands itself on “constitutional” safety, often deliberately avoiding overly chummy language. Meta, conversely, has embraced high-warmth engagement across its social platforms.
Anish Patel, a product strategist advising enterprises on AI, describes this as a “quiet persona arms race.”
“If your competitor’s bot feels more human, more supportive, users will gravitate there — even if the answers are basically the same.”
However, the new features expose a gap for enterprise users. Currently, these tonal controls are consumer-facing. Businesses using OpenAI’s APIs lack a central console to lock in a corporate voice, creating potential brand inconsistencies. “If one support manager cranks warmth to max and another dials it down, your brand sounds like two different companies,” Patel noted.
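Until such a console exists, a common workaround is to route all API traffic through shared middleware that pins a single brand-voice instruction. The sketch below shows the pattern; the names and house-style wording are illustrative, and this is not an OpenAI enterprise feature.

```python
from openai import OpenAI

client = OpenAI()

# One locked "house style" that every support agent's traffic funnels through.
BRAND_VOICE = (
    "Respond in our house style: professional, with measured warmth, "
    "no emojis, short plain sentences."
)

def branded_reply(user_message: str) -> str:
    """All replies share one tone, regardless of who operates the console."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": BRAND_VOICE},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```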
The Verdict
For the casual user, dialing down emojis or turning up enthusiasm for a creative writing prompt is a genuine convenience. But as these tools integrate deeper into high-stakes environments like healthcare and finance, the implications of “tunable charm” become more complex.
OpenAI is betting that user agency will solve the tone problem. Yet, as the line between tool and companion blurs, the most pressing question isn’t whether the AI can simulate warmth, but whether users can distinguish that simulation from reality when it matters most. As Lin observed, “The risk isn’t that the model suddenly lies more. It’s that errors wrapped in warmth are harder to spot.”