Generative inpainting/fill is enormously helpful in media production.
Implicit costs refer to the opportunity costs associated with a firm’s resources, representing the income that could have been earned if those resources were employed in their next best alternative use.
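For readers unfamiliar with the term, here is a textbook-style illustration of how implicit cost enters the profit calculation. The numbers are made up for the example, not taken from this thread:

```latex
% Hypothetical figures, purely to illustrate the definition above.
\begin{align*}
\text{Implicit cost} &= \text{forgone salary} + \text{forgone return on owner capital} \\
                     &= \$80{,}000 + (0.05 \times \$200{,}000) = \$90{,}000 \\
\text{Economic profit} &= \text{revenue} - \text{explicit costs} - \text{implicit costs}
\end{align*}
```

The point of the concept is that a venture can show an accounting profit and still be a loss once you count what the same resources would have earned elsewhere.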
I don’t see the relevance here. Inpainting saves artists from time-consuming and repetitive labor for (often) no additional cost. Many generative inpainting models will run locally, but they’re also just included with an Adobe sub.
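As a rough sketch of what "runs locally" means here, the following uses the Hugging Face diffusers library and a publicly released inpainting checkpoint. The model id, file names, and prompt are illustrative choices, not anything specified in the thread:

```python
# Minimal local inpainting sketch using the Hugging Face diffusers library.
# Assumes: `pip install diffusers transformers torch pillow`, a CUDA GPU,
# and example file names ("photo.png", "mask.png") chosen for illustration.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",  # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("photo.png").convert("RGB").resize((512, 512))
# White pixels in the mask mark the region to be regenerated.
mask_image = Image.open("mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="clean studio background",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```

Note that this is inference only: it reuses weights someone else already paid to train, which is the distinction the later replies about training cost turn on.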
Anthropic is set to lose $3 billion or more, net of revenue, in 2025.
OpenAI is on track to lose more than $10 billion.
xAI, makers of “Grok, the racist LLM,” is losing over $1 billion a month.
I don’t know that generative infill justifies these losses.
The different uses of AI are not inextricable. This is the point of the post. We should be able to talk about the good and the bad.
Again, I point you to “implicit costs”. Something this trivial isn’t good if it’s this expensive.
Continuing to treat AI as a monolith is missing the point.
The value of the modern LLM is predicated on trained models. You can run the models locally. You can’t run industry scale training locally.
Might as well say “The automotive industry isn’t so bad if you just look at the carbon footprint of a single car”. You’re missing the forest for this one very small tree.
Generative inpainting doesn’t typically employ an LLM. Only a few models even use transformer attention. It costs in the range of $100,000 to $10 million to train a new diffusion or flow image model. Not cheap, but nothing crazy like training Opus or GPT-5.