The Profit Model of Large Language Models Resembles Traditional Manufacturing

Recently, I chatted with a few friends working in AI, and they all laughed, saying, “You think we’re in the internet business, but it’s more like running a factory.”

At first, I didn’t take it seriously, assuming they were exaggerating. But upon reflection, it’s strikingly true. The logic behind the investment, operations, and profitability of large language models (LLMs) bears an uncanny resemblance to traditional manufacturing.

Let’s start with investment. LLMs aren’t like launching an app and effortlessly attracting users. Whether it’s building computing clusters, curating massive training datasets, or coordinating engineering teams, every step is asset-heavy, cost-intensive, and time-consuming. A single training run for a large model can produce compute bills in the millions or even tens of millions of dollars. The internet model emphasizes near-zero marginal cost: more users mean lower per-user costs. But LLMs carry a real marginal cost: every additional request consumes more electricity and occupies more computing power, so total inference costs scale with usage.
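The contrast can be made concrete with a toy cost model. The figures below are purely hypothetical, chosen only to illustrate the shape of the two curves: both businesses amortize a fixed build cost over their users, but the LLM service also pays a nontrivial inference cost per user.

```python
def per_user_cost(fixed_cost: float, marginal_cost: float, users: int) -> float:
    """Average cost per user: amortized fixed cost plus the marginal cost."""
    return fixed_cost / users + marginal_cost

# Hypothetical figures: a $10M build either way; the internet service adds
# ~$0.001 of serving cost per user, the LLM ~$0.50 of inference compute.
internet = [per_user_cost(10_000_000, 0.001, n) for n in (10_000, 1_000_000)]
llm = [per_user_cost(10_000_000, 0.50, n) for n in (10_000, 1_000_000)]

# Internet: per-user cost keeps collapsing toward zero as users grow.
# LLM: per-user cost collapses only toward the inference floor (~$0.50).
print(internet)  # [1000.001, 10.001]
print(llm)       # [1000.5, 10.5]
```

The point of the sketch is that scale alone does not rescue an LLM business the way it rescues an internet one: past a certain size, the per-user cost is dominated by the marginal inference term, which only engineering efficiency can push down.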

Then there’s operations. Do you think once the model is deployed, traffic will just roll in? No. Every iteration and optimization is akin to process improvement in industrial manufacturing. Data cleaning is like raw material processing, model training is like assembly line operations, and inference optimization is like production line scheduling. These tasks may sound tedious, but they determine whether a company can turn a profit. Algorithmic innovation matters, but stability, low resource consumption, and high throughput are the true moats. The internet model thrives on storytelling and grabbing attention; LLMs thrive on efficiency and craftsmanship.

And then there’s profitability. The internet relies on traffic, advertising, long-tail effects, and network effects: miracles of scale in a virtual world. LLMs are different; they depend on computing power, data, and accumulated engineering systems. No matter how clever your approach, if the industrial process isn’t streamlined, profitability slips through your fingers like sand. That’s why, in the LLM industry, some companies appear technologically advanced but struggle with consistent profitability.

The real turning point is moving from “building models” to “building systems.” When a model is no longer a precision machine built for one-off training but a sustainable, reusable, and scalable system, marginal costs can be controlled and economies of scale begin to emerge. At that point, LLM companies truly resemble manufacturing: making money not through stories, but through efficiency and process. Algorithms, data, computing power, and engineering teams are each indispensable, like the parts and procedures of a production line.

So, LLMs are not an extension of the internet; they are more like the next chapter of industrial civilization. On the surface, it’s an intelligence revolution; in essence, it’s a manufacturing upgrade. Those who love telling investment stories may get excited, but the ones who truly make money are always those who know how to “run the industrial process smoothly.”