Isn't it our responsibility to own our dependencies? I get this is frustrating, but if meeting your client deliverables hinges on a $200 subscription, you need to figure out some redundancy.
They need to reimburse for time down, but not for lost revenue/opportunity. That's our responsibility, as builders, to plan for.
I don't think that's clear. They're certainly burning money, but that's mostly R&D: salaries and training compute. Once you remove those, it's unclear whether the AI companies would be losing money on just inference.
You can't "remove" costs willy-nilly to make a company look profitable. Running an independent ISP is an extremely lucrative business if you gloss over the capex requirements of installing and maintaining the infrastructure.
You don't remove them willy-nilly, but evaluating companies based on their operating costs is standard accounting practice. R&D is not part of operating costs.
Capex for inference is in, capex for training is out for that analysis.
There was some credible analysis (I don't have the link) that estimated ~50% gross margins for OpenAI, largely eaten up by operating expenses. So not awful unit economics, but not good either.
Assuming that's even true, the big asterisk is uncertainty around future efficiency gains. The intelligence-to-cost ratio is changing very quickly; it's hard to make confident predictions more than 3 months out.
Unless an SLA (or equivalent) is specifically mentioned in the terms of service (and it isn't), I don't see why Anthropic would ever compensate anyone. Measuring uptime and performance is meaningful and valuable, but expecting compensation is a completely different question.
If you don't like it, go use someone else's product. There are plenty of choices.
All true, but it also depends on the jurisdiction the product is operating in.
For example, in Australia every consumer has rights around products and services they purchase, regardless of the terms agreed to during sale. If a business offered 99% uptime in its marketing, it is required to provide that (or something equivalent, or a refund), even if uptime was never mentioned, or a lower number was given, in the terms and conditions.
Enforcing that, however, particularly against companies renowned for having no human staff, is tricky.
So yes, definitely agree: builders have to be aware of their dependencies and work with the realities of what they provide, not the theoretical.
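For scale, a marketing promise like the 99% uptime mentioned above still allows a surprising amount of downtime. A quick back-of-envelope sketch (the uptime figures are just illustrative):

```python
# Back-of-envelope: how much downtime a given uptime promise still permits.
def allowed_downtime_hours(uptime_pct: float, period_hours: float) -> float:
    """Hours of downtime allowed while still meeting the uptime target."""
    return period_hours * (1 - uptime_pct / 100)

# 99% uptime over a 30-day month (720 h) still permits ~7.2 hours down.
print(round(allowed_downtime_hours(99.0, 30 * 24), 2))   # 7.2
# A 99.9% ("three nines") promise over a full year permits ~8.76 hours.
print(round(allowed_downtime_hours(99.9, 365 * 24), 2))  # 8.76
```

So even if a marketing claim were enforceable, 99% leaves room for roughly a full working day of outages every month.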
I'm using JetBrains' AI Assistant (Junie, with different LLMs), and it also has frequent agent crashes (bye, tokens) and is sometimes unreachable (albeit never for longer than minutes, so a few retries suffice). And even with the default Gemini 3 Flash I can easily burn through $10 or more over a normal coding day (which is not every day; sometimes there's more reading). Do I still get value? Definitely. But it's not the easy life either.
I've got no dog in this fight, since I don't use any AI, but the math here feels unfinished. It calculates the $150 USD/day loss, but not the $X/day for how much the app speeds you up when you are able to use it. Obviously the net figure would be the benefit minus the loss. So unless the claim is that the AI gives no performance enhancement whatsoever (in which case, why use it?), the numbers are a bit incomplete.
That said, I do feel like this is just one person's way of coming around to the realization that they don't actually need this product. At least not in this form. One more step towards normalizing local-only LLMs that produce better results with smaller scopes and fewer resources.
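To make that point concrete, the complete calculation has to net the benefit against the loss. A sketch with hypothetical numbers (only the $150/day downtime loss comes from the gist; the rate, speed-up, and outage days are assumptions):

```python
# Hypothetical net-value math for a $200/month AI subscription.
# Only the $150/day downtime loss is from the gist; everything else is assumed.
billable_rate = 150.0      # $/hour, assumed freelance rate
hours_saved_per_day = 2.0  # assumed speed-up on days the tool works
working_days = 20          # working days per month
downtime_days = 2          # assumed full days of outages per month
subscription = 200.0       # monthly plan cost

benefit = billable_rate * hours_saved_per_day * (working_days - downtime_days)
loss = 150.0 * downtime_days + subscription  # downtime loss plus the plan
net = benefit - loss
print(benefit, loss, net)  # 5400.0 500.0 4900.0
```

Under those (generous) assumptions the tool nets out well ahead even with two full days of outages a month, which is exactly the point: the loss column alone doesn't tell you much.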
The way you've portrayed it makes it sound like your clients should be paying Claude rather than you for the work; maybe you could take some kind of broker fee. But is your AI assistant being down really as crippling as you make it sound? Still, I understand your frustration: $200 is a lot of money.
Please. This just feels so entitled. Infra is hard, hyperscale AI infra is like three years old as a discipline, and API-based calls would cost 10x the subscription. I guess you can always cancel!
All the gists smell like they're AI-generated.
You're _probably_ going to reply to a bot.
Sad to see this on the HN front page.
> They need to reimburse for time down, but not for lost revenue/opportunity. That's our responsibility, as builders, to plan for.
Shouldn't that be the other way around? Isn't every LLM provider losing money? Especially on premium subscriptions.
Also, the math in this gist doesn't math.
So maybe the chargeout rate is highly informative.
2/10 would not read again.
Someone posing as a freelance dev trying to con Anthropic out of money they themselves would be incapable of making on their own.
Can't even write their own impassioned write-up.
Whoever prompted Claude to write this: figure out what YOU can do that's actually marketable. You don't seem to have found it yet.
Perhaps not the best idea to be this dependent on a product.