(I linked to the Yahoo version of the article because it’s not paywalled. Original WaPo article is here.)
Since its public launch in late 2022, [OpenAI] promoted [ChatGPT] as a “revolutionary” productivity tool transforming the future of work. But, in an analysis of 47,000 ChatGPT conversations, The Washington Post found that users are overwhelmingly turning to the chatbot for advice and companionship, not productivity tasks.
Anecdotal, but I haven’t found ChatGPT very useful for productivity compared to Anthropic’s Claude or Google’s Gemini.
ChatGPT feels to me like a platform, and it often yields results commensurate with being the “household name” AI: it’s the most widely known, but it’s a generalist and not great at anything in particular.
That used to be true, but Claude is now getting substantially worse.
I suspect that what’s happening is that they’re trying to bleed out money in slightly less torrential amounts every month, so they’re working hard to constrain how many resources the thing is allowed to consume. In other words, it started out smart and is getting steadily dumber over time. I was trying to use Claude for a coding project today, and while I’ll admit the questions were complex, it really was remarkably dumb in a way that it didn’t used to be.
I’ve found Sonnet 4.5 requires more context building for bigger code asks to yield the same general quality, but the ceiling seems higher. With enough context, I can accomplish a larger swath of things with 4.5 without getting stuck spinning on rote garbage, whereas with 4 that seemed to happen often.
I’ve also found context building with Opus 4.1 (and now 4.5) to be pretty useful: feeding that context into Sonnet 4.5 for actual execution gets sort of the best of both worlds while reducing cost.