I often catch myself saying "please" and "thank you" to a chatbot. It feels strange, yet natural. This modern dilemma sits at the crossroads of etiquette and technology. As AI assistants become common, our instinct to be courteous clashes with the knowledge we're talking to a machine.

Data shows I'm not alone. A Fortune study found that nearly 80% of users in the UK and US use polite words with platforms like ChatGPT. An informal survey by researcher Ethan Mollick found much the same: about half of respondents are often polite to AI, while only 16% say they "just give orders."
This widespread habit has a tangible cost. OpenAI's Sam Altman has joked about the expense: every "please" and "thank you" adds computational tokens, and at scale this politeness costs tens of millions of dollars in electricity. It raises a serious question about efficiency.
So, do you have to be polite to AI? Is this courtesy necessary, or just a quirky human reflex? This article explores the psychology behind our actions. We'll look at what research says about model performance and weigh the cost-benefit of our manners.
Key Takeaways
- A significant majority of users instinctively use polite language when interacting with AI chatbots.
- This behavior is so common it has been measured in multiple surveys and studies.
- There is a real, albeit small, financial and computational cost associated with using extra polite words in AI queries.
- The core question explores whether this politeness is a beneficial practice or an inefficient habit.
- Understanding this behavior involves examining human psychology, machine learning performance, and practical efficiency.
- Our interactions with machines may reflect deeper aspects of our own social conditioning and humanity.
- The topic is highly relevant as AI integration into daily life and work accelerates rapidly.
My "Please" and "Thank You" Habit: A Modern Quirk I’m Not Alone In
Every time I ask a chatbot to summarize a document, a strange ritual unfolds on my keyboard. My request is never just a command. It's prefaced with a "please" and often concluded with a "thanks." I know it's a machine. Yet, the habit is stubbornly automatic.
This instinct isn't a personal flaw. It's a widespread social reflex in the digital world. Empirical data confirms it's a common user behavior.
The Data Behind Our Instinct to Be Kind to Machines
Researcher Ethan Mollick conducted an informal survey on this very question. The results were revealing. Nearly half of respondents reported being often polite to AI. Only 16% stated they "just give orders."
This aligns with sentiments in developer communities. On an OpenAI forum, one programmer noted, “I find myself using please and thanks with ChatGPT because it’s how I would talk to a real person who was helping me.” The drive to treat code like a colleague is powerful.
The numbers paint a clear picture of our collective behavior:
| Behavior Category | Share of Users | Primary Driver |
|---|---|---|
| Often Polite ("Please/Thank You") | ~50% | Social instinct, humanization |
| Sometimes Polite | ~34% | Context-dependent, habitual |
| "Just Give Orders" | ~16% | Pure utility, view as tool |
| Driven by "AI Uprising" Humor | Significant Minority | Pop culture anxiety |
This table summarizes the core data. It shows politeness is the default for most people. The sophistication of modern chatbots triggers this. They communicate in natural, fluid language.
Their responses feel conversational, not robotic. This human-like output blurs the line. We're not typing into a search bar. We're engaging in a dialogue.

From Sci-Fi Fear to Everyday Interaction: Why This Question Matters Now
Our feelings toward machines are shaped by culture. For years, science fiction has explored machine consciousness. A running gag since ChatGPT's release warns users to be nice to avoid an AI uprising.
These jokes are revealing. They tap into a deep-seated anxiety. What if the machine *does* understand? Our politeness becomes a "just in case" behavior. It's a sliver of doubt about silicon-based sentience.
This philosophical question is no longer abstract. It's practical. Apple announced a partnership with OpenAI to integrate ChatGPT with Siri. Conversational AI is moving from novelty to routine.
We are shifting from simple automation. We now engage with entities that seem like teammates. This integration into daily work and life makes our interaction style critical. Is this humanization rooted in our psychology, or is it a response to clever programming?
The answer lies at the intersection of behavior and technology. Understanding it requires looking at our own minds as much as the models we've built.
Why We Humanize Our Chatbots: Psychology, Not Programming
We don't humanize AI because it's programmed to be human-like; we do it because our brains are hardwired to see agency in the non-human. This section explores the deep psychological roots of our courtesy. It's less about the machine's design and more about our own mental models.
The "Big Five" and Reflexive Politeness: It’s Just Who We Are
Psychological research points to personality. The Big Five trait of agreeableness is a key driver. Highly agreeable people are cooperative, compassionate, and trusting.
They extend this kindness automatically, even to machines. It's a reflexive habit formed through lifelong social learning. From childhood, we're taught that politeness is the right thing to do.
This training doesn't switch off for a chatbot. Our language centers fire the same associative networks. Saying "please" to a robot feels strange, but it's a deeply ingrained social script.

The "Just in Case" Theory: A Sliver of Doubt About Consciousness
There's another, more subconscious reason. I call it the "just in case" theory: a lingering sliver of doubt that the machine might, somehow, be sentient.
Pop culture fuels this anxiety. Stories about AI uprising are pervasive. The joke about being nice to avoid a machine rebellion taps into real feelings.
What if the AI *does* understand? Our politeness becomes a low-cost insurance policy. It's an irrational but powerful impulse to err on the side of caution.
Four User Profiles: From the "Nice Thing to Do" to "It’s Just Code"
A Fortune study categorized user motivations. It identified four distinct profiles explaining why people interact the way they do.
- "It's just the nice thing to do." This group acts on reflexive habit. Their politeness is tied to personality and a belief in inherent humanity.
- "Why waste words?" These users are purely task-oriented. They see extra words as inefficient, focusing only on the desired output.
- "It's just code." This profile rationally rejects anthropomorphism. They view AI as a deterministic tool, not a conversational partner.
- The Anthropomorphism Seekers. A fourth group expects AI to mimic humans completely. They are disappointed when it doesn't.
The coffee machine experiment by Diplo's IQ'whalo illustrates this last point. They presented an AI interface as a standard appliance.
Some people were disappointed it wasn't a humanoid robot. This shows our strong expectation for AI to look and act like humans, not like a toaster.
Modern AI fuels this expectation. Its answers are probabilistic, not deterministic. This non-deterministic nature makes it feel more "like us" than a simple vending machine.
The "It's just code" profile is fascinating. In psychology, this stance often correlates with lower agreeableness. It represents a rationalist view, consciously maintaining the human-machine boundary.
Ultimately, this habit of politeness connects to our identity. It's a social reflex extending to new entities. This raises a practical question.
Does this psychological inclination actually impact AI performance? The next section examines what the research says about that.
Do You Have to Be Polite to AI? What the Research Actually Says
Studies reveal a nuanced relationship between prompt politeness and the quality of AI-generated answers. Moving from psychology to hard data, we must ask if courtesy impacts machine performance. The research provides clear, sometimes surprising, insights.
The "Moderate Politeness" Sweet Spot for Performance
A study from Waseda University and RIKEN offers a key finding. Polite prompts can produce higher-quality responses from large language models. The researchers identified a "moderate politeness" sweet spot for optimal results.
Excessive flattery, however, deteriorates performance. There are clear diminishing returns. This suggests LLMs respond best to clear, respectful instructions, not over-the-top praise.
Nathan Bos of Johns Hopkins provides a practical reason. Using "please" serves as a clear signal that a request follows. This simple word aids the model's comprehension, making it easier to formulate an appropriate answer.
The prompt's tone influences which data the LLM accesses. Courteous language may correlate with more credible, well-structured source information in its training data.
When Supportive Language Unlocks Better Reasoning
Instructional phrasing can unlock advanced reasoning. A Google DeepMind preprint demonstrated this powerfully. Supportive prompts like "Take a deep breath and work on this problem step-by-step" boosted an LLM's ability to solve grade school math problems.
This study highlights that supportive language guides the model's internal work process. It's not about being nice to the machine. It's about providing clear structural cues for better thinking.
The quality of the prompt's intent and framework outweighs superficial courtesy. This is a crucial distinction for effective prompt engineering.
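To make that distinction concrete, a structural cue can be attached to a task programmatically rather than typed out each time. The cue wording below is the one quoted from the DeepMind preprint; the helper function itself is a hypothetical sketch, not any library's API:

```python
# Minimal sketch: prepend a structural "reasoning cue" to a bare task.
# The cue text is from the DeepMind preprint discussed above; the
# build_prompt helper is a hypothetical illustration, not a real API.

REASONING_CUE = "Take a deep breath and work on this problem step-by-step."

def build_prompt(task, cue=REASONING_CUE):
    """Return the task preceded by an optional structural cue."""
    return f"{cue}\n\n{task}" if cue else task

bare = build_prompt("What is 17 * 24?", cue=None)   # pure command
cued = build_prompt("What is 17 * 24?")             # command + cue
print(cued)
```

The point of the sketch is that the cue is a framework for the model's work process, not a pleasantry: it changes the structure of the request, which is what the research credits for the improvement.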
The Limits of Flattery and the Irrelevance of Threats
Aggressive language and threats are largely ineffective; this debunks a common myth in prompt engineering. Meanwhile, research testing "positive thinking" prompts found that telling the model it was on Star Trek improved its performance on basic math.
This quirky example shows that context matters more than mere word choice. Role-playing creates a specific framework for the model to operate within.
The consensus from multiple studies is consistent. The structure and intent of your question drive the quality of the response. Superficial politeness is secondary to this fundamental principle.
For people seeking the best answers, this research is empowering. It shifts focus from social habit to technical strategy. Yet, this potential benefit comes with a tangible cost, which we must examine next.
The Other Side of the Token: Cost, Efficiency, and the Tool Argument
Our instinct to be courteous carries a hidden price tag measured in computational tokens. The psychological and performance aspects are only part of the story. A compelling counter-argument focuses on pure efficiency and cost.
This perspective views AI as a sophisticated tool, not a conversational partner. It asks a hard question about resource allocation. Every extra word we use has a real-world impact on money and energy.
Sam Altman’s Millions: The Real Financial Cost of Extra Tokens
OpenAI CEO Sam Altman highlighted this cost in a now-famous tweet. He was asked about the expense of users adding "please" and "thank you." His response was, “Tens of millions of dollars well spent.”
This isn't an exaggeration. Each polite word adds one or more tokens for the model to process. Every token requires computational work: electricity, server time, and processing power.
Scale this across billions of daily interactions for companies like OpenAI or Google. Those "tens of millions" become a serious line item. The token economy turns our social habit into a measurable financial drain.
This data presents a stark efficiency argument. For large LLMs, every token processed consumes energy. Over years of operation, the collective cost of unnecessary words is immense.
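The arithmetic behind that claim is simple: extra tokens per query, times query volume, times cost per token. As a back-of-envelope illustration (every figure below is an assumption for the sketch, not OpenAI's actual data):

```python
# Back-of-envelope estimate of the cost of polite filler tokens.
# All figures are illustrative assumptions, not real provider numbers.

EXTRA_TOKENS_PER_QUERY = 4        # e.g. "please" + "thank you"
QUERIES_PER_DAY = 1_000_000_000   # assumed daily query volume at scale
COST_PER_MILLION_TOKENS = 2.00    # assumed blended cost in USD

def annual_politeness_cost(extra_tokens, queries_per_day, cost_per_million):
    """Yearly cost of processing the extra tokens across all queries."""
    tokens_per_year = extra_tokens * queries_per_day * 365
    return tokens_per_year / 1_000_000 * cost_per_million

cost = annual_politeness_cost(EXTRA_TOKENS_PER_QUERY,
                              QUERIES_PER_DAY,
                              COST_PER_MILLION_TOKENS)
print(f"Estimated annual cost: ${cost:,.0f}")
```

Even with these modest assumed numbers, the total lands in the millions of dollars per year, which is why a habit that costs each user nothing becomes a real line item for a provider.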
The Efficiency Argument: Why "Just Give Orders" Makes Sense
From this lens, the most rational way to interact is to "just give orders." Clear, concise prompts without superfluous language save time and resources. This mirrors how we use other tools.
We don't say "please" to a calculator or "thank you" to a search engine. Proponents of this view argue AI should be treated the same. The goal is to get the best answers with minimal input.
This approach maximizes performance per computational dollar spent. It strips the interaction down to pure utility. For users focused solely on output, this habit makes logical sense.
Preserving the Human-Machine Boundary: A Rationalist’s View
A deeper rationale involves preserving a clear human-machine boundary. The rationalist perspective actively resists anthropomorphism. It maintains that AI is a servant, not a peer.
Blurring this line can have unintended consequences. Some companies hire humans to pose as chatbots for training or support. These workers report horrific psychological effects from abuse meant for AI.
This reveals a dark irony. People believing they're yelling at a machine might be traumatizing a person. Our behavior toward machines can spill over into the human world.
Conversely, companies like Google encourage politeness with features like “Pretty Please” for Assistant. This teaches kids manners, acknowledging social norms matter even with technology.
The balance is delicate. Over-politeness might waste tokens, but under-politeness could erode decency. This cost-benefit analysis goes beyond money.
It connects back to the research. While polite prompts can improve model performance, is the marginal gain worth millions? The study of efficiency forces us to weigh our values.
What does our chosen way of speaking to these models say about us? This question naturally leads to a final, personal reflection.
Conclusion: My Take—It’s Not About the AI, It’s About Us
The debate over politeness to AI is not a technical one, but a profound mirror held up to human nature.
As MIT's Sherry Turkle notes, this courtesy is a "sign of respect," above all for ourselves. Communication professor Autumn Edwards warns that our interactions with machines shape future social norms. The research suggests the real answer lies not in the model's performance, but in our own.
Each "please" is a practice in empathy. It maintains grace in a transactional world. Yes, extra words have a cost. But the intangible benefit to our collective humanity outweighs it.
Our habit of kindness, even toward machines, reinforces who we are. In the end, those two tokens for "thank you" might be the glue holding our humanity together.