One of the main reasons I stick with Claude Code (also for non-coding tasks; I think the name is a misnomer) is the fixed-price plan. Pretty much any other open-source alternative requires an API key, which means that as soon as I start using it _for real_, I'll start overpaying and/or hitting limits too fast. At least that was my initial experience with the APIs from OpenAI/Claude/Gemini. Am I biased/wrong here?
Yep, this is a fair take. Token usage shoots up fast when you do agentic coding. I end up doing the same thing.
But for most background automations you might actually run, the token usage is way lower and probably an order of magnitude cheaper than agentic coding. And a lot of these tasks run well on cheaper models or even open-source ones.
So I don't think you are wrong at all. It is just that I believe the expensive token pattern mostly comes from coding-style workloads.
For example, the NotebookLM-style podcast generator workflow in our demo uses around 3k tokens end to end. Using Claude Sonnet 4.5's blended rate (about $4.50 per million tokens for a typical input/output mix), you can run it every day for roughly eight months for a bit over three dollars. Most non-coding automations end up in this same low range.
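For the curious, the arithmetic behind that estimate, taking the per-run token count and blended rate above as the inputs:

    # Back-of-the-envelope cost of the daily podcast workflow.
    tokens_per_run = 3_000
    rate_per_token = 4.50 / 1_000_000   # USD, blended input/output rate

    cost_per_run = tokens_per_run * rate_per_token   # $0.0135
    days = 8 * 30                                    # ~eight months of daily runs
    print(f"per run: ${cost_per_run:.4f}, 8 months: ${cost_per_run * days:.2f}")
    # per run: $0.0135, 8 months: $3.24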
You're not wrong, though I suspect the AI "bubble burst" begins when companies like Anthropic stop giving us so much compute for 'free'. The only hope is that, as things get better, their cheaper models become as good as their best models today, so it costs drastically less to use them.
Sonnet is $3 per million tokens; Grok Code Fast is $0.20. IME the latter is better for me. I wish everybody treated AI as a pay-as-you-go commodity instead of getting dependent on rugpulls. My stack is OpenRouter (model marketplace) and Aider (Kilocode and Cline for user-friendly alternatives).
Will check out Grok Code Fast - thanks for the pointer. In my experience, coding agents can swing a lot in quality depending on the model’s reasoning power. When the model starts making small but avoidable mistakes, the overhead tends to cancel out the benefit. Curious to see how Grok performs on multi-step coding tasks.
True. I'm working with Python CRUD apps, which every model is fluent in. And I'm personally generating 100-line changes, not letting it run while I'm AFK.
That's what I love most about Claude. I love Django and I love React (the richness of building UIs with React is insane) and sure enough Claude Code (and other models I'm sure) is insanely good at both.
Zed is nicely set up for this; I just have not taken the time. I do like how Claude works atm. The coding agent functionality is what's nice about Claude Code. I don't know that Grok Code Fast has that?
Yeah, I think when they made the bet it genuinely made sense. But in coding workflows, once models got cheaper, people did not spend less. They just started packing way more LLM calls into a single turn to handle complex agentic coding steps. That is probably where the math started to break down.
I use CC regularly for editing large text files (especially turning interview transcripts into something readable) and have found it works much better than web chat interfaces because of the filesystem access and the ability to work with large files.
That’s great to know. I’ve come to the same conclusion. I’ve found that things work best when they happen right where I’m already working. Uploading files or recreating context in a web service adds friction, especially when everything is already available locally.
I'm increasingly seeing code-adjacent people who are using coding agents for non-coding things because the tooling supports it better, and the agents work really well.
It's an interesting area, and glad to see someone working on this.
The other program in the space that I'm aware of is Block's Goose.
Yep, totally agree. We actually had an earlier web version, and the big learning was that without access to code-related tools the agent feels pretty limited. That pushed us toward a CLI where it can use the full shell and behave more like a real worker.
Really appreciate the support and the Goose pointer. Would love to hear what you think of RowboatX once you try it.
For the voice and music, the demo uses the ElevenLabs MCP server for TTS, and ffmpeg locally to stitch the audio together. We pull content from tweets via an MCP tool, but you can swap that step for anything, for example fetching your saved articles with curl. There's no special model required; any LLM that can call tools will work.
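For reference, the local stitching step is plain ffmpeg. Here's a minimal sketch using its concat demuxer; the segment file names are hypothetical, and the demo's actual intermediate files may differ:

    import pathlib
    import subprocess

    # Hypothetical per-segment TTS clips produced earlier in the workflow.
    segments = ["intro_music.mp3", "host_a_01.mp3", "host_b_01.mp3", "outro.mp3"]

    # ffmpeg's concat demuxer takes a text file of `file '<path>'` lines.
    pathlib.Path("segments.txt").write_text(
        "".join(f"file '{s}'\n" for s in segments)
    )

    # `-c copy` skips re-encoding; it assumes all clips share codec and
    # sample rate, which holds when they come from the same TTS settings.
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", "segments.txt", "-c", "copy", "episode.mp3"],
        check=True,
    )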
We’re adding an easier way to run examples soon. In the meantime, if you’d like to try this one locally: (1) Copy the agent file into ~/.rowboat/agents/ (2) Add the MCP server (and your keys) to ~/.rowboat/config/mcp.json (3) Run: 'rowboatx --agent=tweet-podcast --input=go'
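For step (2), here's a hypothetical ~/.rowboat/config/mcp.json entry, assuming RowboatX follows the mcpServers convention most MCP clients use. The ElevenLabs MCP server package and its ELEVENLABS_API_KEY variable are real, but check the repo for RowboatX's exact schema:

    {
      "mcpServers": {
        "elevenlabs": {
          "command": "uvx",
          "args": ["elevenlabs-mcp"],
          "env": { "ELEVENLABS_API_KEY": "YOUR_ELEVENLABS_API_KEY" }
        }
      }
    }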
I automated a bunch of business workflows, like sorting documents for accounting in my cloud storage, tagging emails that are invoices, stuff like that. I do this for a living though, so I also use these as case studies; Rowboat is a hard sell for end users, I guess.
Cline is great, but it’s primarily focused on coding workflows, similar to Claude Code. RowboatX is aimed at a different category: background agents for non-coding automations (e.g. meeting prep, daily briefings, etc).
The big difference from Claude Code (and Cline) is that RowboatX can spin up persistent background agents that run on schedules, use the system shell, and call MCP tools to automate tasks outside of coding.
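To make the scheduling idea concrete: RowboatX runs these on its own schedule, but the pattern is the same as a cron line driving the CLI invocation shown earlier (hypothetical schedule, for illustration only):

    # Run the tweet-podcast agent every morning at 7:00.
    # RowboatX's built-in scheduler makes an external cron like this unnecessary.
    0 7 * * * rowboatx --agent=tweet-podcast --input=go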
Anthropic published a doc or two about this too; here's one of them: https://www-cdn.anthropic.com/58284b19e702b49db9302d5b6f135a...
How does it do that? Does it require a tool for that? Or a special model?