Built upon pillars of sand…
The microprocessors in our smartphones and computers start as quartz sand. The sand is heated to over 1,400 °C and refined into ultrapure silicon, grown into single crystals, sliced into mirror-polished wafers, and patterned with more than a hundred layers, some only atoms thick. Each chip holds billions of transistors, each tens of thousands of times smaller than the width of a human hair. This is perhaps the most complex and precise manufacturing process in human history, perfected and scaled over decades to produce billions of processors every year.
Yet at every layer tiny flaws form. No two wafers are identical. Chips are sorted by degrees of perfection, a practice known as binning, and that is one reason parts from the same production line are clocked and sold at different speeds. Still, we run our computers expecting them to perform the same operation billions of times a second without error. The whole system is fundamentally built upon pesky, imperfect physical matter.
Until now, we have built machines on logic. This gave us the comfort of repeatability: if I write a piece of code, the computer behaves the same every single time. Students new to computer science are often told, “the compiler is never wrong”, usually while their IDE barks that the code they entered makes no sense to a machine. Fifteen years later, I still humbly remind myself of that phrase whenever software I have written does not behave as I expected.
Large language models, however, introduce uncertainty into the system. Researchers debate whether they can truly reason, or whether they merely approximate statistical patterns in language that only look like reasoning. Unlike traditional code, they can handle and produce soft, imprecise results in a way logic-bound systems never could.
The first time LLMs amazed me was at Airbnb, where we spent a lot of time figuring out how to represent complex and unique homes in a structure that could be easily categorized and sorted. We would spend months designing UIs to train hosts to fill out forms describing their amenities or room details. In 2022 I saw a demo where an LLM converted a free-form paragraph written by a host into clean, structured data in seconds. That kind of flexibility would have taken us years to engineer by hand.
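The pattern behind that demo is now common enough to sketch in a few lines. Here is a minimal, hypothetical version in Python: a model is prompted to emit JSON matching a fixed schema, and ordinary validation code enforces the structure on the way out. The field names, prompt, and the stubbed-out model call are all illustrative, not Airbnb's actual system.

```python
import json

# Fields we expect the model to extract; names are illustrative.
SCHEMA_FIELDS = {"bedrooms": int, "amenities": list, "view": str}

PROMPT_TEMPLATE = (
    "Extract the listing details from the text below as JSON with keys "
    "bedrooms (integer), amenities (list of strings), and view (string).\n\n{text}"
)

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; returns a canned response here."""
    return '{"bedrooms": 2, "amenities": ["hot tub", "wifi"], "view": "ocean"}'

def extract_listing(text: str) -> dict:
    """Turn a host's free-form description into validated structured data."""
    raw = call_llm(PROMPT_TEMPLATE.format(text=text))
    data = json.loads(raw)  # fails loudly if the model did not return JSON
    for field, expected_type in SCHEMA_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"model output missing or mistyped field: {field}")
    return data

listing = extract_listing(
    "Cozy two-bedroom cottage with a hot tub, fast wifi, and ocean views."
)
print(listing["bedrooms"])  # 2
```

The interesting part is not the model call but the boundary around it: the soft, probabilistic component lives inside a hard, logic-bound contract that the rest of the system can trust.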
The next platform of computer science will be about building new structures around these systems. Call it agents, call it orchestration. The challenge is to create large, repeatable, reliable structures on top of inherent imprecision. That may define the next twenty years of computation. Will this count as reasoning? Will it become general intelligence? If you can approximate reason statistically in a systematic, scalable way, then the line between true reasoning and probabilistic guessing that gets the right answer almost every time starts to blur. And I have certainly debated humans who string words together in ways that only appear to be reason.
Just as silicon wafers carry hidden flaws yet power our most precise machines, language models carry uncertainty yet may drive the next wave of computation. Both remind us that perfection is an illusion. What matters is how we build stable, scalable systems on top of imperfection. Reason itself may turn out to be less about purity than about what we can construct from flawed parts, repeated billions of times until it feels like certainty.