Life is an ensemble model
In 2006, Netflix launched an open competition offering $1 million to the team that could best improve its system for recommending films to users. The winning team didn’t rely on a single traditional model like linear regression. Instead, they combined many models into an ensemble: rather than trying to build one perfect predictor, they blended many weaker models into a single, stronger one. By “stacking” models, the ensemble outperformed any of its individual components. The approach left a lasting mark on the field, and I think it makes an interesting framework for life.
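To make the stacking idea concrete, here is a minimal sketch in Python using scikit-learn on a synthetic dataset. It is only an illustration of the technique (the dataset and base models are arbitrary choices of mine), not the Netflix Prize solution.

```python
# A minimal sketch of stacking: several individually weak models blended by a
# meta-model. Illustrative only; not the Netflix Prize solution.
from sklearn.datasets import make_friedman1
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_friedman1(n_samples=2000, noise=1.0, random_state=0)

base_models = [
    ("ridge", Ridge()),
    ("tree", DecisionTreeRegressor(max_depth=4, random_state=0)),
    ("knn", KNeighborsRegressor(n_neighbors=25)),
]

# The final estimator learns how much to trust each base model's predictions.
stack = StackingRegressor(estimators=base_models, final_estimator=Ridge())

for name, model in base_models + [("stack", stack)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:>6}: mean R^2 = {r2:.3f}")
```

The interesting part is the final estimator: it never looks at the raw data, it only learns how much weight to give each base model’s opinion.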
We are born with, and through experience are taught, many different models for understanding the world. Some are biological (touching a hot stove hurts, reproduction ensures survival, hunger is our body telling us we need more calories). Some are more abstract, like the present value of money or opportunity cost. Others are socialized into us (the Lakers are the best team and their wins make me happy, debt is bad, substances are for the morally weak, the United States and capitalism are good). Each of these models offers a different lens on the world. None of them is entirely right or wrong on its own; each is correct in some capacity, and probably wrong in others.
I grew up in a religious community, but over time I was put off by specific teachings I disagreed with, and by the many wars throughout history waged in the name of crusading. I stopped feeling particularly spiritual or moved by those practices, and by my twenties I went through a phase where I thought religious people were foolish, especially those reinforcing a way of life I had deliberately chosen to leave behind. As I’ve gotten older, though, I see how religion can give people a sense of purpose, a humility that comes from believing in something greater than themselves, and a sense of community; its absence often leaves a big hole in people’s lives. As a framework, I still don’t believe religion is for me, but I don’t try to overfit the model.
As a general rule, I consider myself a strong supporter of capitalism. I believe it creates wealth, and that markets naturally gravitate toward certain efficiencies. We see markets emerge in the most obscure places, from playground lunch trades for the best snacks, to organ donation, to dating-app matching; denying the existence of markets is almost like denying the existence of gravity. And yet, when I look at a city like New York and the challenges it faces today, I see plenty of problems that capitalism cannot solve. Measures like free public buses or temporary rent freezes might offer real relief in the short term, even if those solutions lean more 'socialist' than my usual philosophy.
As I go about my life, I try to hold two deep philosophies (maybe contradictory, given the rest of this blog). First, try to maximize the number of frameworks you learn in your life; doing so gives you the most ways to understand the world. Second, be willing to learn models you disagree with, understanding that some models are weak but still useful in specific circumstances. All models are probably correct with some level of precision and recall.
And when you meet someone with a different belief, try asking yourself: what model are they using?
Are they operating from a different framework rather than a different reality? Are they weighing the same models you are, but with different priorities? Once you recognize that, you can start speaking to them in the model they understand, move beyond debates about who’s “right,” and start appreciating the value of seeing through more than one lens.
Built upon pillars of sand
The microprocessors in our smartphones and computers start as quartz sand. The sand is heated to over 1,400 °C to extract pure silicon, which is then shaped into flawless-looking wafers and built up with more than a hundred layers, some only atoms thick. Each chip holds billions of transistors, each tens of thousands of times smaller than a human hair. It is perhaps the most complex and precise manufacturing process in human history, perfected and scaled over decades to produce billions of processors every year.
Yet tiny flaws form at every layer. Wafers are not identical; chips are sorted by degrees of perfection, which is one reason they are clocked and sold at different speeds. Still, we run our computers expecting them to perform the same operation billions of times a second without error, even though the whole system is fundamentally built on pesky, imperfect physical matter.
Until now, we have built machines on logic. This gave us the comfort of repeatability: if I write a piece of code, the computer behaves the same every single time. Students new to computer science are often told, “the compiler is never wrong”, usually while their IDE barks that the code they entered makes no sense to a machine. Fifteen years later, I humbly still remind myself of that phrase whenever software I have written does not behave as I expected.
Large language models, however, introduce uncertainty into the system. Researchers debate whether they can truly reason, or whether they are simply approximating patterns of 1s and 0s that only look like reasoning. Unlike traditional code, they can take in and produce soft, imprecise inputs and outputs in a way logic-bound systems never could.
The first time LLMs amazed me was at Airbnb, where we spent a lot of time figuring out how to represent complex and unique homes in a structure that could be easily categorized and sorted. We would spend months designing UIs to train hosts to fill out forms describing their amenities or room details. In 2022 I saw a demo where an LLM converted a free-form paragraph written by a host into clean, structured data in seconds. That kind of flexibility would have taken us years to engineer by hand.
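As a sketch of what that kind of demo is doing (the post doesn’t specify the model or pipeline, so `call_llm` below is a hypothetical stand-in), the core is just a prompt plus a little JSON parsing:

```python
# Sketch: turn a host's free-form description into structured listing fields.
# `call_llm` is a hypothetical placeholder for whatever model/API you use.
import json

EXTRACTION_PROMPT = """Extract listing attributes from the host's description.
Return only JSON with keys: bedrooms (int), bathrooms (float),
amenities (list of strings), view (string or null).

Description:
{description}
"""

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM of choice and return its text output."""
    raise NotImplementedError

def extract_listing_fields(description: str) -> dict:
    raw = call_llm(EXTRACTION_PROMPT.format(description=description))
    return json.loads(raw)  # in practice you would validate and retry on bad JSON

host_text = (
    "Cozy A-frame cabin with two bedrooms, a loft bed for the kids, "
    "one and a half baths, a wood stove, fast wifi, and a view of the lake."
)
# extract_listing_fields(host_text) might return something like:
# {"bedrooms": 2, "bathrooms": 1.5,
#  "amenities": ["wood stove", "wifi", "loft bed"], "view": "lake"}
```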
The next platform of computer science will be about building new structures around these systems. Call them agents, call it orchestration. The challenge is to create large, repeatable, reliable structures on top of inherent imprecision. That may define the next twenty years of computation. Will this count as reasoning? Will it become general intelligence? If you can approximate reason statistically in a systematic, scalable way, then the line between true reasoning and probabilistic guessing that gets the right answer almost every time starts to blur. And I have certainly debated humans who string words together in ways that only appear to be reason.
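One small example of the kind of structure I mean, sketched under the same assumptions as above (`call_llm` is a hypothetical placeholder and the required keys are just an example contract): wrap the probabilistic step in a hard check, and retry with feedback when it misses.

```python
# Building repeatable structure on top of an imprecise model: validate the
# output against a contract, and retry with feedback when it fails.
# `call_llm` is a hypothetical placeholder for a real model call.
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder

REQUIRED_KEYS = {"bedrooms", "bathrooms", "amenities"}

def validated_extract(prompt: str, max_attempts: int = 3) -> dict:
    feedback = ""
    for _ in range(max_attempts):
        raw = call_llm(prompt + feedback)
        try:
            data = json.loads(raw)
            if isinstance(data, dict) and REQUIRED_KEYS.issubset(data):
                return data  # the soft, probabilistic step passed a hard check
            feedback = (
                f"\nYour last answer did not include all of {sorted(REQUIRED_KEYS)}. "
                "Return only a JSON object."
            )
        except json.JSONDecodeError:
            feedback = "\nYour last answer was not valid JSON. Return only a JSON object."
    raise ValueError(f"No valid output after {max_attempts} attempts")
```

The model stays fuzzy; the loop around it is what makes the system repeatable.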
Just as silicon wafers carry hidden flaws yet power our most precise machines, language models carry uncertainty yet may drive the next wave of computation. Both remind us that perfection is an illusion. What matters is how we build stable, scalable systems on top of imperfection. Reason itself may turn out to be less about purity than about what we can construct from flawed parts, repeated billions of times until it feels like certainty.
Credit cards vs mortgages
Credit card debt and mortgages are two very different types of debt. What makes credit card debt so insidious is that it is often accrued accidentally and grows incredibly quickly. Mortgages, however, are carefully planned, relatively cheap, and allow one to build equity sooner rather than later. A mortgage lets you live in a house today that might take 30 years to afford, if ever, while also benefiting from the asset’s gains. Good debt pulls forward value today that would otherwise take time to acquire.
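A back-of-the-envelope comparison makes the contrast concrete. The rates and amounts below are ones I’m assuming purely for illustration:

```python
# Illustrative only: a revolving balance at credit-card rates vs. a standard
# fully amortizing mortgage. Rates and amounts are assumptions, not real quotes.

def credit_card_balance(principal, apr, years):
    """Balance if never paid down: interest compounds monthly."""
    return principal * (1 + apr / 12) ** (12 * years)

def mortgage_payment(principal, apr, years):
    """Fixed monthly payment for a fully amortizing loan."""
    r, n = apr / 12, years * 12
    return principal * r / (1 - (1 + r) ** -n)

print(f"$10,000 at 24% APR, untouched for 5 years: ${credit_card_balance(10_000, 0.24, 5):,.0f}")
print(f"$500,000 at 6% over 30 years: ${mortgage_payment(500_000, 0.06, 30):,.0f}/month")
```

In this toy example the untouched card balance more than triples in five years, while the mortgage payment stays flat for thirty years and builds equity along the way.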
Some tech debt is like credit card debt: systems so brittle that they degrade the quality and availability of the user experience, code that makes it impossible for the organization to move quickly. We’ve all seen this kind of debt; it slows down every project and creates endless hours of work just to keep things stable. This type of tech debt is extremely expensive, in both financial cost and human capital.
But other tech debt should be thought of as mortgages. This is code that hasn’t been touched in ages but still hums along reliably. A feature written five years ago that still does exactly what it’s supposed to do, even if it doesn’t follow the company’s latest design and engineering paradigms. The final modules needed to complete a migration. The last few bad dependencies. I believe this code should be left alone for as long as possible.
Choosing to leave this kind of code untouched does increase tech debt, but you are incurring it in a controlled and rational manner. Doing so allows your organization to focus on more important things. In this way, you are using tech debt to pull forward value today, like a mortgage. Good engineers can see through the noise and distinguish between the two.
On bad code
I spent many years working at Airbnb and Amazon, and I have friends all over the Valley at places like Facebook and Google. One thing that shocked me, both from my own experience and from conversations with friends, is that all of these companies have a lot of bad code, even within critical systems. These codebases suffer from broken design patterns, missing tests, years-long migrations in progress, and scattered TODOs and FIXMEs. The entire industry is held together by duct tape and glue. If Google and Airbnb both have bad code, then where does good code exist? I’ve landed on two explanations:
1) Companies that succeed focus on solving business problems rather than fixing code that doesn’t fundamentally impact customers.
2) Real businesses are complex and don’t fit neatly into good design patterns. Every major refactor and rearchitecture I saw at Airbnb started with solid design principles but became messy once they encountered real-world challenges.
Great engineers deliver good results despite bad code. They take the time to deeply understand their tech stack, identify brittle areas, and develop strategies to improve it over the long term. But they are also able to tie those issues back to specific company goals: making this fix will speed up development, reduce mistakes, and ultimately save the business time and money. An engineer must make the case to their business partners that their proposals are valuable.
Lightbulbs to TVs
What would it mean for AI to grow at a rate similar to that of the light bulb? In short, we don’t know.
Some of electricity’s early use cases were quite obvious. We could create light everywhere and replace candles, with no more wicks and oil to fuss over just to add another light. Then one can imagine second-order effects: what if an entire building had simple electric lighting? What if a whole city delivered electric light to every home? One can even see Times Square as an extraordinary outgrowth of the simple lightbulb, an entire ecosystem of smaller bulbs.
But other things were simply impossible to predict. One could never have imagined so many light sources packed so tightly and controlled so precisely that they produce the dynamic images we now take for granted on 4K LED TVs. One could never have imagined gates opening and closing, shrunk to a fraction of the width of a single hair, becoming what we today call a microprocessor.
In the same way, it is easy today to see certain human tasks being replaced by machines. If someone’s job involves taking incoming PDFs and entering their contents into a CRM database, it is not hard to imagine AI doing that fairly trivially, now or soon. But what happens if AI follows a curve as steep as the one that ran from the lightbulb to the TV?