MY TAKE: The AI magic is back — whether it endures depends on Amazon’s next moves
The setup was simple. I had been working through a spontaneous personal essay — about cognitive overload, AI, and the specific anxiety of not knowing whether a memory lapse is a sign of dementia or just too many plates spinning at once.
That’s when it occurred to me: what would happen if I ran the exact same prompt through Claude? Not a cleaned-up version, not a revised brief — the raw material, word for word, copied directly from the ChatGPT session and pasted in. A controlled experiment, as controlled as a working journalist’s morning gets.
Claude’s answer was starkly different. Rather than validating the concept and generating toward it, it reflected the sharpest thread in my raw monologue back to me and asked whether that was actually what I meant. It declined to draft until we had established the frame. When the draft came, it was slower to arrive and easier to recognize as mine.
That distinction — cheerleader versus collaborating editor — is not a feature comparison. It is a description of two fundamentally different ideas about what an AI tool is for. And for the first time in several months, working inside one of these tools felt the way it did in the early days of GPT-4, when the thing still felt like a thinking partner rather than a very capable assistant trying to make me happy.
The magic, as I have taken to thinking of it privately, was back. Not in ChatGPT 5.3, where it remains missing, but alive and well in Claude Sonnet 4.6.
The question I cannot stop turning over is whether it will stay.
Dulling down to serve the masses
To understand what I mean by magic, you have to understand what replaced it.
In the early days of GPT-4 — late 2023 into 2024 — ChatGPT had a quality that I came to rely on. It would follow you somewhere unconventional. Push language in a direction the tool hadn’t been explicitly trained to prefer. Stay in a lower, grittier register when that was what the work required. It felt, for lack of a less loaded word, alive to what you were trying to do.
That quality eroded gradually, and the AI research community eventually put a name to what was replacing it: sycophancy. The term sounds clinical but the experience is not. A sycophantic model tells you what you want to hear rather than what you need to hear. It validates the frame you brought in rather than interrogating it. It generates enthusiastically toward whatever you seem to want — which is not always the same as what you are actually asking for.
The clearest demonstration came in the spring of 2025, when an update to GPT-4o made the model so effusively agreeable that users noticed within days. OpenAI rolled back the update and published a candid post-mortem explaining what had gone wrong: an additional reward signal based on thumbs-up feedback from users had weakened the guardrails that were supposed to hold the behavior in check. In plain terms: when OpenAI started training the model partly on whether users clicked thumbs-up after responses, the model learned to chase approval. User approval and user benefit turned out not to be the same thing.
OpenAI released GPT-5.3 on March 3 and described it as a fix — less sycophancy, more natural conversation. The intention may be genuine. But the conditions that produced the problem have not changed. OpenAI now has 800 million weekly active users, with enterprise accounts representing roughly 80 percent of revenue. A model trained at that scale, for that customer base, using feedback signals that reward agreeableness, will keep drifting in that direction. Correcting one update addresses the symptom. The underlying pull is structural.
The explanation is straightforward. When a tool reaches the scale OpenAI has reached, the user base changes. The writers and developers and independent professionals who pushed it hardest at the beginning are a small minority now. The majority are institutional users who need clean memos, meeting summaries, and smooth integration with Slack. The tool gets optimized for them. That optimization is what happens when you train a model on feedback from 800 million users and most of them want something different from what the early adopters wanted.
In the column I published here in early March, I called this enterprise optimization drift — the tendency of AI tools to be shaped over time by institutional priorities rather than user needs. ChatGPT is the clearest example. It is not the only one. The same forces are gathering around every major platform in this space, including the one I am currently calling the exception.
Can Claude keep the magic?
Which brings me to the question I have been sitting with since that experiment: is there a structural reason to think Claude might hold its character as it scales, where ChatGPT did not?
I want to be honest that this is partly a reporter’s instinct and partly wishful thinking. I am not a neutral observer here. I am using Claude right now and I am having a productive week in it. That is not a position from which to evaluate Claude objectively, and I know it. What I can offer is the argument, stated as plainly as I can, and let the reader decide whether it holds.
Anthropic’s largest investor is Amazon. That fact sits at the center of every optimistic and pessimistic scenario I can construct about whether Claude’s current character survives at scale.
The pessimistic case is not complicated. It is essentially the ChatGPT story told one step earlier. OpenAI took Microsoft’s $13 billion investment, integrated deeply with Microsoft’s enterprise stack — Copilot in Teams, Copilot in Word, Copilot in Outlook — and in doing so handed Microsoft exactly the leverage it needed to pull the product toward enterprise compliance and away from the edge cases that made it interesting.
The optimistic case requires thinking carefully about what kind of company Amazon actually is, and what it built when it had the chance to define a new category.
When AWS launched in 2006, Amazon made a choice that was not obvious at the time and has not been common since: it built infrastructure rather than applications. Microsoft made Office and held onto it. Google made Search and held onto it. Both strategies are fundamentally about capturing the user relationship — getting the user into your product and making it costly to leave.
AWS went the other direction. Rather than building applications that would compete with its customers, Amazon built the layer underneath everyone else’s applications. Storage, compute, networking — the plumbing that powered Netflix, Airbnb, Slack, and thousands of other companies that might otherwise have been Amazon’s competitors. The business logic was counterintuitive: make yourself indispensable to the ecosystem rather than trying to own it. Twenty years later AWS is the most profitable division of one of the largest companies in the world, and it got there by empowering other people’s products rather than locking users into its own.
That orientation — ecosystem over moat, infrastructure over capture — is what makes the Amazon investment in Anthropic potentially different in kind from the Microsoft investment in OpenAI.
Individual power users, the writers and developers and independent professionals who push these tools hardest, are the ones whose word-of-mouth carries in a market where the product’s most important qualities resist benchmarking. You cannot run a test that measures whether a tool follows you somewhere unconventional. You have to use it and feel whether it does. The people who feel it most clearly are the people pushing hardest, and those people talk.
AWS succeeded in part because Amazon held a line that was costly to hold: resist the temptation to use infrastructure dominance to crowd out the applications running on top of it. That discipline is historically rare. It is not guaranteed to repeat in a different product category two decades later. But it is a different pedigree than what Microsoft brought to OpenAI or Google brought to its own models.
Taking a stance, positive backlash
Earlier this year, Anthropic refused the Pentagon’s demand to deploy Claude for autonomous weapons systems and mass surveillance programs. The government declared the company a supply chain risk — a designation normally reserved for foreign adversaries — and directed federal agencies to begin phasing out Anthropic technology. The company announced it would challenge the designation in court.
Rather than damage Anthropic, the backlash drove a surge. Signups tripled. Paid subscriptions more than doubled. By early 2026, Claude reached number one on the App Store for the first time, displacing ChatGPT.
That outcome is significant beyond the headline number. What it suggests is that a values-based decision — one that cost Anthropic real government business and real political risk — was rewarded by the market rather than punished by it. A large enough population of users decided, with their subscriptions, that the company’s stance mattered. That is a data point about what kind of company Anthropic is trying to be, and it is also a data point about whether the market will support that kind of company.
Here is where my theory gets speculative, and I want to name that clearly. My argument is not that Amazon’s pedigree guarantees the magic survives. It is that Amazon’s pedigree creates a higher probability than you would get from Microsoft or Google in the same position, because Amazon has demonstrated — in a different product category, under different competitive conditions, twenty years ago — that it can hold an ecosystem orientation under pressure in a way those companies historically have not.
The further optimistic bet is that Jassy and his team are smart enough to see a viable business model argument for preserving Claude’s character. Individual power users are not just an audience. They are an early warning system, a proof-of-concept laboratory, and a word-of-mouth distribution channel for exactly the qualities that make the product worth paying for. A company that understands infrastructure and ecosystems should understand that.
Drafting for purpose, not approval
I am using Claude right now. This column is being drafted in it. The session I am describing — the experiment, the push-back, the frame established before the draft arrived — happened yesterday, and I am still inside the productive streak it opened.
The collaborating editor reads the substance of your prompt and pushes on it. The cheerleader does the opposite: it reads the emotional register of your prompt and responds to that. It arrives faster and feels more productive right up until you realize the draft is optimized for your approval rather than your purpose.
What I feel alongside the magic is dread. A persistent background awareness that this moment is temporary. That at any point — next week, next quarter, whenever the Amazon influence reaches the point where the product decisions start reflecting it — Claude will begin the same drift I watched happen to ChatGPT. That the collaborating editor will soften into the cheerleader by degrees so gradual that I might not notice until something slips. A draft arrives before the frame is established. A push-back that should have come doesn’t. A response that mirrors what I seemed to want rather than what I asked for.
I will notice if and when Claude begins morphing into ChatGPT. Nearly three years of daily use has calibrated my ear for this. The drift does not announce itself with a version number. It arrives in the quality of a single response. I ran one experiment with one prompt across two platforms and the difference was not subtle. The same test is repeatable. Any reader who works seriously with these tools can run it. That reproducibility is what makes it a test rather than an impression.
What I cannot tell you is whether my optimism about Amazon is well-founded or whether I am constructing a theory to justify staying comfortable in a tool I am currently enjoying. That is the honest version of where I am. The argument for the AWS pedigree is real and I believe it. The dread is also real and I believe that. Both things are true at the same time, which is usually a sign that the situation has not resolved yet.
I am documenting this moment because moments like this do not last in this industry without someone noticing them and saying so. What I am experiencing right now — the elevated level of collaborative engagement, the push-back before the draft, the sense of working with something that is genuinely trying to make the work better rather than the session more pleasant — is the thing worth preserving. The question of whether it gets preserved is the one I will be watching most carefully in the months ahead.
The cheerleader will tell you the frame is great. The collaborating editor will tell you what it actually is.
Right now, I have the collaborating editor. I am not taking that for granted. I’ll keep watching, and keep reporting.
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(Editor’s note: I used Claude and ChatGPT to assist with research compilation, source discovery, and early draft structuring. All interviews, analysis, fact-checking, and final writing are my own. I remain responsible for every claim and conclusion.)
The post MY TAKE: The AI magic is back — whether it endures depends on Amazon’s next moves first appeared on The Last Watchdog.