Built upon pillars of sand…
The microprocessors in our smartphones and computers start as quartz sand. The sand is heated to over 1,400 °C to extract pure silicon, which is then shaped into nearly flawless wafers and patterned with more than a hundred layers, each only atoms thick. Each chip holds billions of transistors, tens of thousands of times smaller than the width of a human hair. This is perhaps the most complex and precise process in human history, perfected and scaled over decades to produce billions of processors every year.
Yet at every layer tiny flaws form. Wafers are not identical. They are sorted by degree of perfection, which is one reason chips are binned, clocked, and sold at different speeds. Still, we run our computers expecting them to perform the same operation billions of times a second without error. But this whole system is fundamentally built on stubbornly imperfect physical matter.
Until now, we have built machines on logic. This gave us the comfort of repeatability: if I write a piece of code, the computer behaves the same every single time. Students new to computer science are often told, “the compiler is never wrong”, usually while their IDE barks that the code they entered makes no sense to a machine. Fifteen years later, I humbly still remind myself of that phrase whenever software I have written does not behave as I expected.
Large language models, however, introduce uncertainty into the system. Researchers debate whether they can truly reason, or whether they are simply approximating patterns of 1s and 0s that only look like reasoning. Unlike traditional code, they can handle and produce soft, imprecise results in a way logic-bound systems never could.
The first time LLMs amazed me was at Airbnb, where we spent a lot of time figuring out how to represent complex and unique homes in a structure that could be easily categorized and sorted. We would spend months designing UIs to train hosts to fill out forms describing their amenities or room details. In 2022 I saw a demo where an LLM converted a free-form paragraph written by a host into clean, structured data in seconds. That kind of flexibility would have taken us years to engineer by hand.
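The pattern from that demo can be sketched in a few lines. This is a minimal, hypothetical illustration, not Airbnb's actual system: ask a model to emit JSON matching a fixed schema, then validate the reply before trusting it. The `complete` parameter stands in for any text-in, text-out LLM API call; the prompt, field names, and helper functions are all assumptions for the sake of the example.

```python
import json

# Hypothetical schema a host's free-form description should map to.
LISTING_FIELDS = {"bedrooms": int, "bathrooms": float, "amenities": list}

PROMPT_TEMPLATE = (
    "Extract the listing details from the host's description below. "
    "Reply with only a JSON object with keys bedrooms (int), "
    "bathrooms (float), and amenities (list of strings).\n\n{}"
)

def parse_listing(raw_reply: str) -> dict:
    """Validate the model's reply against the expected schema.

    The model is probabilistic, so treat its output as untrusted input:
    parse it, check every field, and fail loudly on anything unexpected.
    """
    data = json.loads(raw_reply)
    for field, expected_type in LISTING_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"bad type for {field}: {data[field]!r}")
    return data

def extract_listing(description: str, complete) -> dict:
    """`complete` is any prompt-in, text-out LLM call (stubbed in tests)."""
    return parse_listing(complete(PROMPT_TEMPLATE.format(description)))
```

In practice you would retry on a `ValueError`, but the shape is the point: the imprecise component sits behind a strict, repeatable validation boundary, which is exactly the kind of structure the rest of this essay argues we will keep building.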
The next platform of computer science will be about building new structures around these systems. Call them agents, call it orchestration. The challenge is to create large, repeatable, reliable structures on top of inherent imprecision. That may define the next twenty years of computation. Will this count as reasoning? Will it become general intelligence? If you can approximate reason statistically in a systematic, scalable way, then the line between true reasoning and probabilistic guessing that gets the right answer almost every time starts to blur. And I have certainly debated humans who string words together in ways that only appear to be reason.
Just as silicon wafers carry hidden flaws yet power our most precise machines, language models carry uncertainty yet may drive the next wave of computation. Both remind us that perfection is an illusion. What matters is how we build stable, scalable systems on top of imperfection. Reason itself may turn out to be less about purity than about what we can construct from flawed parts, repeated billions of times until it feels like certainty.
Credit cards vs mortgages
Credit card debt and mortgages are two very different types of debt. What makes credit card debt so insidious is that it is often accrued accidentally and grows incredibly quickly. Mortgages, however, are carefully planned, relatively cheap, and allow one to build equity sooner rather than later. A mortgage lets you live in a house today that might take 30 years to afford, if ever, while also benefiting from the asset’s gains. Good debt pulls forward value today that would otherwise take time to acquire.
Some tech debt is like credit card debt: systems so brittle, and so costly to maintain, that they begin to degrade the quality and availability of the user experience. Code that makes it impossible for an organization to move quickly. We’ve all seen this kind of debt: it slows down every project and creates endless hours of work just to keep things stable. It is extremely expensive, both financially and in human capital.
But other tech debt should be thought of as mortgages. This is code that hasn’t been touched in ages but still hums along reliably. A feature written five years ago that still does exactly what it’s supposed to do, even if it doesn’t follow the company’s latest design and engineering paradigms. The final modules needed to complete a migration. The last few bad dependencies. I believe this code should be left alone for as long as possible.
Choosing to leave this kind of code untouched does increase tech debt, but you are incurring it in a controlled and rational manner. Doing so allows your organization to focus on more important things. In this way, you are using tech debt to pull forward value today, like a mortgage. Good engineers can see through the noise and distinguish between the two.
On bad code…
I spent many years working at Airbnb and Amazon and have friends all over the valley in places like Facebook and Google. One thing that shocked me—both from my own experience and from conversations with friends—is that all of these companies have a lot of bad code, even within critical systems. These codebases suffer from broken design patterns, missing tests, years-long migrations in progress, and scattered TODOs and FIXMEs. The entire industry is held together by duct tape and glue. If Google and Airbnb both have bad code, then where does good code exist? I have landed on two explanations:
1) Companies that succeed focus on solving business problems rather than fixing code that doesn’t fundamentally impact customers.
2) Real businesses are complex and don’t fit neatly into good design patterns. Every major refactor and rearchitecture I saw at Airbnb started with solid design principles but became messy once they encountered real-world challenges.
Great engineers deliver good results despite bad code. They take the time to deeply understand their tech stack, identify brittle areas, and develop strategies to improve them over the long term. But they are also able to tie those issues back to specific company goals: making this fix will speed up development, reduce mistakes, and ultimately save the business time and money. An engineer must make the case to their business partners that their proposals are valuable.
Lightbulbs to TVs
What would it mean for AI to grow at a rate similar to that of the light bulb? In short, we don’t know.
Some of electricity’s early use cases were quite obvious: we could create light everywhere and replace candles, no longer fussing with wicks and oil for every additional lamp. Then one can imagine medium-order effects. What if an entire building had simple electric lighting? What if a whole city piped electricity to lights in every home? One can even see Times Square as an extraordinary outgrowth of the simple lightbulb: an entire ecosystem of smaller bulbs.
But other things are simply impossible to predict. One could never have imagined so many lightbulbs packed so tightly and controlled so precisely that they produce dynamic images, as is now commonplace on 4K LED TVs. One could never have imagined switching gates shrinking to a fraction of the width of a single hair, producing what we today call the microprocessor.
In the same way, it is easy today to see certain human tasks being replaced by machines. If someone’s job involves taking PDFs and entering their contents into a database through a CRM, it is not hard to imagine AI handling that fairly trivially, now or soon. But what would happen if AI followed a curve as rapid and as exponential as the one from the lightbulb to the TV?