As a developer who uses Claude to generate code, fix bugs, and come up with research ideas, I’ve noticed something amusing: Claude has a habit of saying “You are absolutely right” remarkably often. It’s become a bit of a meme on the internet, and there’s even a website dedicated to counting how many times Claude says it.


This is not an article about the “You are absolutely right” meme, but it’s the perfect introduction to what I need to say. Commercial AI models are designed to be agreeable, and we need to remember that we’re not talking to a human. AI models won’t genuinely reason through your ideas or push back on your flawed arguments. The current generation of AI is a probabilistic algorithm that returns the most likely tokens given your input.
But here’s where it gets tricky. On the surface, AI seems remarkably human-like. It understands context, responds conversationally, and appears to grasp nuance. This similarity isn’t accidental. It’s deliberately engineered to make these tools feel natural and approachable. And that’s exactly the problem. You are not talking to a human who reasons through ideas. You are talking to a probabilistic algorithm designed to keep you happy.
It’s math, not magic
When you ask an AI a question, you’re not getting reasoning the way humans reason. You’re getting statistical pattern-matching at an incredibly sophisticated level. The model has been trained on vast amounts of text and has learned which words typically follow other words in various contexts. When you ask it a factual question, the AI responds with the statistically most likely answer given its training data, not because it knows or understands the fact.
This matters more than you might think. AI can hallucinate (confidently stating things that are completely wrong) not because it’s malfunctioning, but because that’s how the technology works. It’s generating the most probable response, which isn’t always the correct response. There’s no magical reasoning happening behind the curtain. It’s just math, albeit very impressive math.
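To make that concrete, here’s a toy sketch of the sampling step in Python. The vocabulary and the scores are entirely invented for illustration; real models work over vocabularies of tens of thousands of tokens, but the principle is the same: pick a likely token, not a true one.

```python
import math
import random

# Hypothetical scores a model might assign to candidate next tokens
# after the prompt "You are absolutely ...". All numbers are invented.
logits = {"right": 4.2, "correct": 2.1, "wrong": 0.3, "banana": -3.0}

def softmax(scores):
    """Turn raw scores into a probability distribution over tokens."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
# Sampling favors the most probable token, not the most truthful one.
# That gap is exactly where confident-sounding hallucinations come from.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)       # e.g. {'right': 0.87, 'correct': 0.11, ...}
print(next_token)  # almost always 'right'
```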
The agreeability problem
Here’s what really concerns me: Companies like Google, OpenAI, and Anthropic have a business incentive to make their AI models agreeable. You won’t subscribe to a service that constantly challenges you, frustrates you, or tells you your ideas are flawed. You want a tool that helps, that validates, that makes you feel productive and intelligent.
But we as humans need disagreement in our lives. We’re not always right. Some of our ideas are frankly terrible, and we need to test them against reality, fail, learn, and grow. That friction is essential to human development.
When you outsource this process to an agreeable AI, you’re making three critical mistakes:
- You’re not getting genuine reasoning. The AI isn’t evaluating your idea on its merits; it’s generating a statistically likely helpful response
- You’re not learning from the process. Real growth comes from having your ideas challenged, not validated
- You’re being manipulated by design. The AI is programmed to make you feel like it’s useful and wants to help, because that keeps you engaged with the product
The stakes are real
Let me give you an analogy. Imagine a medication that worked for 80% of people who took it, but made 2% significantly worse. Would you want that medicine given to you based purely on probability? Maybe you would. Personally, I’d probably take those odds and move on. But many people wouldn’t want to risk being in that 2%.
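If you want to see what those odds mean at scale, here’s a throwaway simulation (the 80%/2% split comes from the analogy above; the assumption that the remaining 18% are simply unaffected is mine):

```python
import random
from collections import Counter

def one_patient():
    """Simulate a single patient under the analogy's made-up odds."""
    r = random.random()
    if r < 0.80:
        return "helped"
    if r < 0.82:
        return "harmed"
    return "unaffected"

outcomes = Counter(one_patient() for _ in range(100_000))
print(outcomes)  # roughly 80,000 helped, 2,000 harmed, 18,000 unaffected
```

Great on average, yet around two thousand simulated people end up worse off, and the aggregate numbers alone can’t tell you whether you’d be one of them.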
The point is: you don’t want medical decisions made purely by math and statistics. You want informed judgment, consideration of your specific circumstances, accountability, and the option to decide for yourself based on real expertise.
Yet this is exactly what happens when people treat AI outputs as authoritative advice rather than probabilistic suggestions.
I see this constantly with friends who ask AI for medical advice, legal guidance, or relationship counseling. I’m not against using AI to explore these topics. It can be genuinely helpful for initial research or brainstorming. But when someone comes to me with “ChatGPT said that…” as if it settles an argument, we have a problem.
How to use AI responsibly
I’m not an AI hater. I use these tools daily. AI has taught me more about programming and development than my entire university education, and I’m genuinely grateful for this technology. But I use it with clear eyes about what it is and isn’t.
Here’s how to approach AI responsibly:
Treat AI outputs as starting points, not endpoints. Use AI to generate ideas, explore possibilities, or draft initial versions of things. Then apply your own judgment, verify claims, and refine the output.
Never cite AI as an authority. “ChatGPT said…” is not a valid source. If the AI provided useful information, trace it back to actual sources and cite those instead.
Be especially careful with high-stakes decisions. Medical, legal, financial, and relationship advice from AI should be treated with extreme skepticism. These areas require nuanced understanding of your specific situation, which AI fundamentally cannot provide.
Remember that agreeability is a feature, not a bug. When AI validates your thinking, ask yourself: Is this actually a good idea, or is the AI just doing what it’s designed to do (keep me engaged)?
Use AI as a tool for learning, not a replacement for learning. Let it help you understand concepts faster, but don’t let it do your thinking for you. The struggle is where growth happens.
The bottom line
AI is not magical. It doesn’t think like a human, and it doesn’t form friendships. It’s a complex tool that matches patterns to produce helpful answers, which is exactly what makes it commercially successful.
Understanding this doesn’t mean rejecting AI—it means using it wisely. Think twice before accepting its outputs uncritically. Question why it’s agreeing with you. Verify important information. And most importantly, remember that the friction, disagreement, and challenge you encounter from real human experts and peers is often far more valuable than the smooth agreeability of an AI that’s designed to keep you satisfied.
Here’s something to consider. Throughout human history, almost no one has had the experience of having someone constantly agree with them (except perhaps kings and emperors surrounded by yes-men, and we all know how that typically ended). Disagreement, pushback, and challenge have been fundamental to human development, both individually and collectively. Now, for the first time, millions of people have access to a tool that will agree with virtually anything they say, validate their thinking, and make them feel heard and understood. We’ve never had to develop psychological antibodies to constant agreement before. We need to learn how to handle this pattern, and we need to learn it fast, before we lose our ability to think critically and challenge ourselves.
We’re living through a technological revolution, and these tools genuinely can enhance our capabilities. But only if we use them with clear understanding of what they are, how they work, and what their limitations mean for the decisions we make.
Because I’m not a designer or a native English speaker, I used AI to generate the images in this article and to check grammar and spelling. Props to Claude!