Article Summary (Model: gpt-5.4)
Subject: Claude Friction Points
The Gist: The author says Claude Code started strong but grew frustrating enough that they canceled, citing token-policy quirks, confusing limit behavior, weak support, and a perceived drop in coding quality. They stress this is not a blanket dismissal of Claude: it still boosts their productivity and ships useful work. The core complaint is that Anthropic's product and support design make heavy use unreliable, especially when token windows, cache resets, and unexplained limits interrupt normal development.
Key Claims/Facts:
- Support failure: The author reported a token usage spike and received what they describe as a canned, misclassified support response that did not address the issue.
- Token friction: The author reports sudden usage spikes, cache/context reload costs after breaks (see the sketch after this list), shifting limit windows, and an unexplained monthly-limit warning.
- Quality decline: They say Claude increasingly proposes lazy workarounds and burns through substantial token allowance even while heading toward poor implementation choices.
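The "reload after a break" complaint comes down to simple arithmetic, sketched below. This is a minimal illustration, not Anthropic's billing code: the TTL, per-token rates, and the `turn_cost` helper are all assumptions chosen for the example. The point is only that once a prompt cache expires, the whole session context is re-billed at the base input rate rather than a discounted cached-read rate.

```python
# Illustrative sketch (assumed numbers, not published pricing): why resuming
# a session after a break can cost far more than a back-to-back turn.

CACHE_TTL_S = 300          # assumed time-to-live for cached context, seconds
INPUT_RATE = 3.00          # assumed $ per million input tokens
CACHED_READ_RATE = 0.30    # assumed $ per million cached-read tokens

def turn_cost(context_tokens: int, seconds_since_last_turn: float) -> float:
    """Estimated cost of re-sending the session context on one turn."""
    if seconds_since_last_turn <= CACHE_TTL_S:
        # Cache hit: the context is re-read at the discounted rate.
        rate = CACHED_READ_RATE
    else:
        # Cache expired: the full context is re-processed at the base rate.
        rate = INPUT_RATE
    return context_tokens / 1_000_000 * rate

# A 150k-token session: a turn 60s after the last vs. one after a 20-minute break.
print(f"back-to-back: ${turn_cost(150_000, 60):.4f}")    # cached read
print(f"after break:  ${turn_cost(150_000, 1200):.4f}")  # full reload
```

Under these assumed rates, resuming the same 150k-token context after the TTL lapses costs about ten times more per turn, which is the shape of the spike the author describes.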
Discussion Summary (Model: gpt-5.4)
Consensus: Skeptical.