The Four Instruments Of Expansion

In The Evolution of Civilizations, master historian Carroll Quigley argues that a civilization’s growth and decline are driven by the development of its instrument of expansion. To oversimplify, the instrument of expansion is the social system that accumulates wealth and invests it into further production. When this system is functioning well, the civilization grows and flourishes, not just economically but also demographically, geographically, and scientifically. When the instrument of expansion inevitably ossifies and comes to serve entrenched interests rather than its ostensible function, growth slows and conflict intensifies, until either the instrument of expansion is supplanted by a different one and the cycle repeats, or else the civilization decays and is eventually destroyed. There have been somewhere around a dozen civilizations where we have enough surviving evidence to determine the instrument of expansion.

This is a dense theory and I will not try to do it justice here. [1] This essay is written for people who have read the book and are already familiar with Quigley’s model. Feel free to read it and come back; the book is worth your time, and this page will still be here. You can skip chapter 6.

These instruments of expansion can be categorized, and Quigley discusses a few of the categories. I’m willing to go a step further. There have been four instruments of expansion in human history, and every civilization has been based on one of these:

Manorialism. Landlords accumulate and invest surplus through local agricultural exploitation and development, extracting crops from those who work on the land they personally own or politically dominate.

Examples include feudalism in the Middle Ages of Western civilization, slavery in Greco-Roman civilization, and probably Chinese civilization before the Song dynasty.

Commercial capitalism. Merchants accumulate and invest surplus by long-distance trade in luxury goods within a price system.

Examples include Phoenician civilization, Western civilization in the Early Modern period, and Chinese civilization from roughly the Song through Ming dynasties.

National bureaucracy. [2] State officials accumulate and invest surplus by appropriating resources to a centralized bureaucracy via taxation and spending those resources directly on state-managed production.

Examples include ancient Mesopotamian civilization, ancient Egyptian civilization, Soviet Communism, and perhaps the Incan and Minoan civilizations.

Industrial capitalism. Business owners accumulate and invest surplus through the operation of machines, by selling the products of those machines at a profit within a price system.

Examples include Western civilization since the Industrial Revolution, and much of East Asia for the last several decades.

I know of no civilization whose economy does not fit neatly into one of these categories. [3] This is not to say that the instrument of expansion composed the entirety or even the bulk of the economy, only that it was the dominant way that wealth was accumulated and that investment was directed into innovation and growth. There were outposts of manorial slavery in the U.S. until 1865; you can find isolated instances of national bureaucracy ranging from the silver mines of the Roman Emperors to the sovereign wealth funds of 2023; individual commercial capitalists have existed in every civilization there has ever been, and probably predate civilization itself. The instrument of expansion is distinguished by being the main source of economic growth and technological progress, and by the political and cultural ascendancy of the surplus-accumulating social class.

There is some variation within each category, of course. Ancient Greek slavery was not economically identical to Japanese feudalism, and the thalassocracies of Tyre and Carthage were not economically identical to the European guild-based mercantile system in the time of the Hansa and the Medicis, but in both cases there is a strong family resemblance. These structural similarities in the instrument of expansion are upstream of further similarities between civilizations in areas like politics and culture.

The different instruments of expansion tend to arise in different political and economic circumstances. Manorialism is a very natural form of organization during periods of anarchy, and is frequently established after the collapse of a previous civilization leaves the land infested with warlords and raiders. Commercial capitalism can be a later development, often arising as a civilization’s second instrument of expansion after manorialism ossifies. Industrial capitalism is still so new that it’s difficult to determine the conditions under which it can arise, and this is one of the most important and most widely debated questions in history and economics today.

These are the only four instruments which have served as the basis of a civilization so far, but they are not the only possible instruments. Five hundred years ago, there had never been such a thing as industrial capitalism. Today there is. In another five hundred years there might be a fifth. New instruments of expansion could conceivably come into being either because of new technology, or because of new forms of social organization.


[1] One relevant passage: “This surplus-creating instrument does not have to be an economic organization. In fact, it can be any kind of organization, military, political, social, religious, and so forth. In Mesopotamian civilization it was a religious organization, the Sumerian priesthood to which all members of the society paid tribute. In Egyptian, Andean and, probably, Minoan civilizations it was a political organization, a state that created surpluses by a process of taxation or tribute collection. In Classical civilization it was a kind of social organization, slavery, that allowed one class of society, the slaveowners, to claim most of the production of another class in society, the slaves. In the early part of Western civilization it was a military organization, feudalism, that allowed a small portion of the society, the fighting men or lords, to collect economic goods from the majority of society, the serfs, as a kind of payment for providing political protection for these serfs. In the later period of Western civilization the surplus-creating instrument was an economic organization (the price-profit system, or capitalism, if you wish) that permitted entrepreneurs who organized the factors of production to obtain from society in return for the goods produced by this organization a surplus (called profit) beyond what these factors of production had cost these entrepreneurs.”

[2] Quigley calls this socialism, which I think is now a confusing name. In today’s language “socialism” refers to a political program for distributing surplus rather than an economic program for accumulating surplus.

[3] Clan pastoralism is debatably a fifth form. Eurasian steppe clans accumulated wealth by nomadic animal husbandry—“capital” is from the same linguistic root as “cattle”, after all—and were sometimes able to use this as a basis for large-scale trade and warfare with settled civilizations, which is most of how we know about them in the historical record. I don’t know enough about pastoral clans in e.g. North Africa, the Middle East, or the Scottish highlands to say whether they also achieved this type of wealth accumulation. Either way, this leaves the question of whether nomad clans count as a “civilization”. These societies left little in the way of archeological records, and essentially nothing in the way of written records, so it may be impractical to tell whether they went through patterns of development similar to those of settled agricultural civilizations.

Why We Can’t Have Nice Things

The things around us have become plainer. In 1923, or 1823, the fashion was for intricate and richly ornamented architecture, furniture, clothes, dishware, or whatever else. In 2023, fashionable objects are plain and minimalist, if not outright utilitarian. Steve Jobs believed that every object should look as much like a featureless white sphere as possible, and the rest of us follow in his footsteps.

Most people have noticed this trend, of course. To illustrate, here are two streetlights in San Francisco. The first, right outside my office, is from 1916:

The second is a much more recent streetlight not far from my house:

Granted, even in 1916 most streetlights didn’t go quite that hard. Still, it demonstrates the difference—the contemporary streetlight isn’t just plain, it’s ugly, without even the attention to aesthetics found in deliberate minimalism. This isn’t just a transition from old-fashioned styles to newer styles; here there is no attempt at style at all. You won’t find anything like that in 1916 streetlights.

When I ask people why this is, they usually tell me it’s because people can no longer afford ornamentation as a result of improved technology. This seems backwards—everywhere else, improved technology makes nice things cheap and plentiful. Why would this be the opposite for ornamentation? I’m often told it’s because of the Baumol effect—that when technology improves productivity in some fields, the cost rises in other fields where productivity remains relatively lower, such as live music performances. Because ornamentation must be handmade, improved technology does not improve productivity in this sector, and so prices rise and rise until it is unaffordable.

This is hot nonsense. Technology obviously improves the productivity of making and installing ornamentation. There is no law of nature which requires that everything beautiful be made by a seventy-year-old master craftsman peering through spectacles as he works with hammer and chisel. Men have used mass-production technology to make better and cheaper ornamentation since the first day of the Industrial Revolution—literally. James Watt, whose improved steam engine marks the Industrial Revolution’s beginning in 1776, was good friends with Josiah Wedgwood,1 an entrepreneur who used improved production techniques and advanced tooling to mass-manufacture decorative vases and tableware, including machine-made copies of ancient Greek and Roman works. Wedgwood sold to both middle-class consumers and the Queen of England.

For almost two centuries, improved technology steadily made ornamented objects and buildings accessible to more and more people. By the late 1800s, “Elaborate ornamentation [in American homes] was quite common, and its cost had been greatly reduced as manufacturers learned to replace skilled craftsmen with modern machines. ‘Instead of stone carvers who chiseled the cornices, there was a machine that stamped out cheap tin imitations. Instead of wood-carvers, a hydraulic press squeezed wood into intricate carved shapes.’”2 In just the last few years, 3D-printed busts have become widely and cheaply available through online storefronts.

These advances in mass manufacturing were also responsible for the 1916 streetlight. Over three hundred were installed—you can just see two others in the background of my photo above. It looks like the pieces were cast from molds and assembled onsite:

Progress has continued, and with modern tooling and containerized freight, we now have the technology to make similar streetlights with even better methods and larger economies of scale. A single factory could turn them out by the thousands and ship them across the world. Today we can carve out molds with CNC machine tools rather than having a sculptor make them manually like we did in 1916, and for production at that scale we probably would, unless it’s someday supplanted by 3D printed molds or another new technology that’s even better and cheaper. The engineers who design flat-packed furniture for unskilled assembly in your living room could certainly design a beautiful streetlight for semiskilled laborers to easily and cheaply assemble on streets from Chicago to Nanjing.

Let’s take a simpler example, with no need for analyzing manufacturing processes and supply chains. This cabinet is in the lobby of my office building, the Mechanics Institute, built in 1910:

Why is simple decoration like this so rare in more recent work? There’s nothing prohibitively expensive about applying a stencil and a bit of gold paint. There’s clearly some barrier to installing things like this today, but it’s not the cost of labor.

If we move from architecture to tableware, this is even more obvious. Stamping plates or mugs with intricate designs is so cheap it adds practically nothing to the final cost, and this has been true for decades at least. I paid two dollars per piece for these:

So what’s going on? If it’s not a matter of well-designed things becoming prohibitively expensive, why do we see less of this stuff?

I have some stories that can shed light on this. When my former roommate Daniel was admiring the plates above, he mentioned that the idea of buying decorated plates and bowls had never occurred to him. In other words, the reason our previous tableware had been featureless white was not that, like the economically rational agent whom theorists love so much, he had considered all available options and deliberately decided that the aesthetics-price package of plain white plates was superior. Rather, he bought it on autopilot, as we all do for most of our purchases.

Another story. As you can probably guess, I like ornate and ornamented furniture. I can’t find the things I like from popular furniture makers at any price, but this actually isn’t a problem, because it’s fast and easy to find gorgeous antiques by browsing Craigslist or Facebook Marketplace or their equivalents, and these are much cheaper than new unornamented furniture.3 When I go to pick something up, literally more than half the time the seller tells me without prompting how glad she is that someone is taking it; she was about to throw it out because nobody wants it, which would be a big shame because it belonged to her recently-deceased mother or something. The rest of the time, the sellers are hobbyists who attend estate sales for love of the art, and resell the good pieces at little or no profit.

Another story. Not too long after I moved in, Daniel picked up a bit of an eye for older ornamented furniture. One day he passed a nice cabinet left on the sidewalk for anyone to claim. Daniel went to get his car to bring the cabinet home. When he returned, the owner was smashing the unwanted cabinet apart so it would be easier to trash, having given up on giving it away. (Fortunately he had only dismantled a couple of drawers before Daniel stopped him, and I was able to patch it back together so you’d never know anything happened.)

All of this paints a clear picture. Why is there so much less decoration today? “It’s too expensive” doesn’t hold water. The decorated objects are out there, if you look for ten minutes. They’re often the same price as decent-quality plain objects. If they’re durable and there’s a secondary market, then they’re cheaper or sometimes outright free. Nevertheless people do not consume what’s available. Often they’re thrown out and destroyed for lack of interest.

People just don’t want this stuff. Why not? I have my guesses, but I’ll leave the speculation for the footnotes.4 What’s clear is that, for whatever reason, most people choose plain objects over intricate ones at the same price point. In some cases, such as tableware, ornate products are available cheaply and in quantity, but are ignored by most consumers. In other cases, such as streetlights, making ornate products cheaply could be done only at large scale with a serious capital investment, and this is not done because manufacturers rightly believe there is no demand for the potential products.

If this demand existed, we’d also be able to see it in the real estate market. As I mentioned, my office is in a gorgeous building from 1910. Here’s another photo I took in the lobby:

According to the demand hypothesis, most people would love to be in a building like this and seek it out, and these offices would go for at least a modest premium. In fact, offices in this building rent for below the San Francisco market average.

So what’s a man to do if he wants beautiful things? Put your money where your mouth is. There’s no sense in following lots of aesthetic social media accounts and talking a good game about returning to the days of good taste if every object in your house is a beige rectangle. A lot of people would rather hold off until someone else starts a grand movement and makes a new style popular, but artistic trends don’t start when people sit and wait; they start when people act. You can live in a beautiful building. You can have beautiful furniture. You can frame a giant print of your favorite Romantic-period painting in your living room. There’s lots of stuff like this out there for the taking.

There are also a few things you might want that aren’t being made because of a lack of public demand. But “public demand” isn’t the weather. You are the public. Demand it. But don’t demand it on social media. Demand it the way a businessman demands it. Pay cash to the local artists and the Etsy craftsmen and the antique dealers. If your taste is good, then producers of beautiful things will flourish a bit more, and the next person who comes looking will find that much more of it. If you come, they will build it.

  1. Wedgwood was also an ardent abolitionist who turned his factories to producing thousands of medallions with the “Am I not a man and a brother?” symbol, originating the anti-slavery movement’s most influential and enduring image. ↩︎
  2. Robert Gordon, The Rise And Fall Of American Growth, p. 107. He, in turn, quotes Clifford Edward Clark, The American Family Home, 1800-1960, p. 82. ↩︎
  3. The exception is upholstered furniture, which doesn’t survive for a century unless it’s almost completely replaced, Ship of Theseus style. The result is that even the tiny public demand for ornate seating can exhaust the equally tiny supply of adequately-maintained antiques, and bid up the price to vaguely reasonable levels on par with new, plain furniture. ↩︎
  4. To say that the fashion has changed is unhelpful. If it became fashionable to use ornate and beautiful things, then most people would change what they buy. This is just a tautology.
    There are two interesting questions here. The first is why the fashion changed in this way. Some of my friends say that it’s a consequence of increasing wealth; when ornament was expensive, it was a reliable signal of wealth, and so it was a good way to show off. Today people often countersignal wealth and sophistication by avoiding the now-affordable ornamentation, similar to how rich people used to be fatter than average, but today are thinner than average. Others say that the World Wars broke people’s faith in the culture and values which the older styles express, and so people turned instead to modernist styles which rejected the older ways and express very different ideas—you can arguably see similar transitions in fashion, music, and fine art. Neither of these feels completely satisfying to me but I think the latter is closer to the mark.
    The other interesting question is why most people say that the ornamented style is more beautiful and more pleasant, but surround themselves with plain or minimalist objects instead. Partly, it’s because people usually buy one of the “default” options presented to them on store shelves or in online search results rather than hunting for something specific, and the defaults will be whatever is in fashion; you make a lot of purchases in a year, and no one has the attention to think deeply about all of them. Mostly, though, it’s because most people prioritize looking normal or fashionable above their own sense of beauty. ↩︎

The Fourth Form Of Government

Traditionally we think of government as coming in three forms. According to Polybius’s Histories, one of the most influential works in the Western canon, there is democracy, the rule of the common people (demos), where ultimate power rests in popular elections; aristocracy, the rule of the best (aristos), where a coalition of independently-powerful elites is in charge; and dictatorship,1 with one man at the helm.

Each of these three forms has a functional version, which mostly works in the interests of the people and the state, but inevitably decays into a dysfunctional version which mostly pursues its own privileges instead. The dysfunctional version of dictatorship is tyranny. The dysfunctional version of aristocracy is oligarchy, the rule of the few (oligos). The dysfunctional version of democracy is ochlocracy, the rule of the mob (ochlos).

Polybius gives a compelling argument that the Roman Republic’s strength comes from incorporating elements of all three forms of government. Polybius is hardly an unbiased observer—he was packed off to Rome and socialized into its elite, then later sent back to his homeland and tasked with setting up Rome’s colonial government, sort of like an ancient Syngman Rhee—but his point that states use a mix of these forms is well-taken. It’s illuminating to analyze states as combinations of these three elements. The U.S. was founded as a mix of aristocracy and democracy, then slowly and steadily became more democratic over the next 150 years or so. Early modern England was a tug of war between aristocracy and dictatorship. Nazi Germany was one of the purest dictatorships in recorded history. Today Brazil, El Salvador, and much of Latin America are a mix of democracy and dictatorship, which confounds liberal ideology but is no less coherent than these other combinations.

However, this three-part breakdown can’t account for everything. In recent decades, many of the U.S. government’s major decisions can’t be explained by any of these three forms, e.g. the response to Covid. The European Union is even less accurately described by this model. Older societies like Song dynasty China or ancient Ur fare no better.

The missing element in these cases is bureaucracy, the rule of salaried officials whose power comes from an appointed position.2 Like the other three forms of government, bureaucracy comes in both functional and dysfunctional versions. For the functional version, think of the U.S. in the middle of the 20th century, when government bureaus were eradicating malaria, installing artificial harbors at Normandy, and landing men on the Moon. In the dysfunctional version of bureaucracy, which we can call red tape, the interests and privileges of bureaus come before fulfilling their stated missions, and they become a drag on action rather than a source of action. In America today the medical bureaus like the CDC and FDA might be the clearest examples, and any American can list many many more.3

Once we’ve added in bureaucracy, we can account for the remaining cases. America today is primarily a bureaucracy, with democratic elements in the legislature and White House, and aristocratic elements in business. Ur was a mix of dictatorship and bureaucracy, as far as we can tell from the surviving sources. And so on.

It seems these four fundamental forms of government—democracy, aristocracy, dictatorship, and bureaucracy—are sufficient to explain the political structure of states. As I look out across time and space, the governments I see are composed of combinations of these forms, with very little left over.


[1] The Greeks used the word “monarchy”, the rule of a single person (monos). However, over the Middle Ages “monarchy” instead came to mean long-lasting dynasties whose legitimacy comes mainly from royal blood. Polybius would not call King Charles III or Emperor Naruhito “monarchs”, but he would use that word for Vladimir Putin or Lee Kuan Yew or Charles de Gaulle. In today’s English our word for these people is “dictator”.

[2] Why didn’t Polybius include bureaucracy in his schema? Because he didn’t have much contact with it. Salaried officials wielded very little power in the Roman Republic and pre-Roman Greece. A skilled theorist will very often come up with a schema which accurately describes their own time and place, but which doesn’t generalize. The Greek and Roman political theorists could have seen bureaucracy if they looked beyond their own borders—the Achaemenid Empire had substantial bureaucratic elements by the time of its wars with the Greek city-states—but they were never interested in explaining the political structures of the barbarians. Now that bureaucracy has become the dominant power in Western government, we have reams and reams of theory on the subject.

[3] A noteworthy subtype is the dysfunctional military bureaucracy, when a government is dominated by the military, and the military serves mainly as a lucrative jobs program for salaried military officers rather than as a warfighting institution. The most famous example is the Roman Empire’s Praetorian Guard. Contemporary examples include Egypt and Thailand.

America And The Future

(I wrote this as a speech for a 4th of July celebration.)

I’ve talked a lot about what America means to Americans. I’d also like to talk about what America means to the world, and to the future. 

Americans are notorious idealists, but our ideals haven’t stayed within this country. The American conceptions of virtue—democracy, freedom of speech, human rights—have influenced every major country, most obviously our friends and allies, but even our rivals. People laugh when the Chinese government calls itself democratic or when Putin claims he’s invading Ukraine to protect human rights, and yes it’s very funny, but it also shows how influential American morals are. And when a society keeps hearing the pretense of American morals used to justify itself, then slowly the pretense will start to become more real.

That’s the biggest reason American morality has spread so far and so deeply: sometimes our wealth has helped and sometimes our might has helped, but mostly it’s our spiritual power. American ideas are actually very good; when people hear them, they make sense, and people want to live up to those ideals as much as you and I do.

We also can’t forget that America is the driver of humanity’s technological progress. Cars, airplanes, nuclear power, solar power, the phone, the transistor, the internet—for eighty years this country has been the source of every major technology breakthrough, and that won’t change any time soon. From Paris to Shanghai, prosperity is built out of American inventions. The greatest feat in the history of the species was landing men on the Moon. And when we landed, we left a memorial that says “WE CAME IN PEACE FOR ALL MANKIND”.

And as far as this country has already led the world, we’re not done. Every one of us here knows our country still has huge moral problems to overcome. Some of us here are organizing to overcome them. With every step forward, the world will watch, and follow our trail if they still like where we’re going. Every one of us here knows how much the world needs better technology. Some of us here are inventing it. I don’t know what technology is going to lift the next billion people out of poverty, but I can guess where it will be invented.

And this isn’t only our work. When our time is past, our children will pick up the torch. Our grandchildren will carry it to places we can’t even imagine.

Some day, when alabaster cities gleam on every rock in the Solar System, not all of them will be American. But all of them will be there because of America.

Computers Are Overrated

Since the ancestors of Homo sapiens first made tools out of rocks and sticks and grass, society has been transformed by the development of ever more powerful technologies, from stone axes to the steam engine to the GPS satellite. Computers and the internet, the most important technologies of recent decades, are the latest step in this long, long process.

How important are these technologies? How much have they changed society? Compared to other events in living memory, they have been revolutionary. The world’s most valuable corporations are now mostly internet software companies. The internet has been responsible for the rise and fall of heads of government, and sometimes of entire governments. Computers and the internet play a large role in the daily life and experience of billions of people. I am writing this on a computer right now to share it over the internet.

Commentators have therefore described computers and their effects as among the greatest transformations to ever strike human society, or occasionally even the single most important transition in the history of the species. We see this in the use of terms like “Information Age” or “Digital Revolution”. According to Wikipedia’s article on the Information Age, “During rare times in human history, there have been periods of innovation that have transformed human life. The Neolithic Age, the Scientific Age and the Industrial Age all, ultimately, induced discontinuous and irreversible changes in the economic, social and cultural elements of the daily life of most people. Traditionally, these epochs have taken place over hundreds, or in the case of the Neolithic Revolution, thousands of years, whereas the Information Age swept to all parts of the globe in just a few years.” Or in the “Fourth Industrial Revolution” framework popularized by the World Economic Forum, human history has seen four distinct industrial revolutions, and fully half of them are the result of computer technology.

From a historical perspective, however, the social changes caused by computers do not live up to these claims. There are many earlier technologies that have also transformed the structure of society and the landscape of power: bronzeworking, electricity, antibiotics, the horse collar, the radio, gunpowder, the railroad, the atom bomb, contraception, the printing press… any student of history could keep the list going. To put computers into historical perspective, we cannot look only at the effects of computers. We also need to compare these effects to those of other major technologies.

There is no definitive comparison, but one rough categorization scheme is below:

  1. Utterly transformative. The difference between societies with and without this technology was on par with the difference between societies of different hominid species. Examples: Agriculture, writing, fire.
  2. Civilization-scale. This technology was sufficient to force a complete reorganization of one of a civilization’s most basic functions, such as economic production or political legitimacy. Examples: Centralized irrigation, printing press, steam engine.
  3. Transformative. While this did not force a major reorganization of how the civilization’s core institutions related to each other, the individual institutions in the relevant fields were forced to reorganize themselves to adapt to the new technology, or else were supplanted by those that did. Examples: Railroads, automobiles, broadcast radio, muskets.
  4. Decisive. While the civilization’s core institutions could adopt the new technology without major reorganization, details of the technology’s powers and limits were a major factor in the specific balance of power between institutions and a source of local advantage. Many particular institutions rose and fell as a result of the new technology—companies went bankrupt, militaries lost wars, governments lost elections. Examples: Artillery, airplanes, television.

A comparison like this cannot be perfectly objective. You might argue over exactly where to place different technologies. Perhaps the railroads should be at level 4 rather than level 3, depending on how much of a role you assign them in the development of megacorporations (or “trusts”, as they were called at the time). Or you might use a different ranking system that emphasizes something else, e.g. whether a technology changes the total number of people a society can support, or how much of an average person’s time is spent interacting with the technology—this scale focuses on how much a technology affects the structure of society only because that’s the question we’re asking right now.

Still, we can get a very rough ranking of how different technologies stack up against each other. It is safe to say that writing transformed society more than the telegraph, or that the printing press transformed society more than the airplane, or that the steam engine transformed society more than artillery. Different rankings will disagree on edge cases, but big differences should be consistent. Keeping in mind that this process is inherently very fuzzy, let’s run with this scale for now.

Where do computers and the internet fall on this scale?

These technologies have allowed many individual institutions to dominate the competitive landscape. In business there are companies like Microsoft, Google, and many more. In politics there are the campaigns of candidates like Barack Obama or Donald Trump. Advertising and political education that had previously been done via other media have shifted more and more to the internet. The transformation in journalism, academia, and intellectual discourse more broadly has been especially dramatic.

Economically, computers have had their strongest impact on record-keeping and administration, where they have been truly revolutionary. More and more commerce is being done over the internet. Industrially, these technologies’ effects have also been substantial, but hardly overwhelming. Computerized tools such as articulated robot arms or CNC machine tools occupy very important niches. Thanks to tools like these as well as incremental efficiency gains from computerized administration, computers have helped industrial productivity to continue its long-term growth, albeit at rates slower than those seen in the first half of the 20th century. But the industrial effects of computers are not nearly as deep or as widespread as the industrial effects of famous earlier innovations such as interchangeable parts or Bessemer steel.

There are also the arguments that computers should be considered revolutionary because of predicted future changes. Mass automation will transform industrial production as deeply as the Industrial Revolution of old. The blockchain will usher in a new financial age. Swarms of autonomous combat drones will make infantry obsolete. Chatbots will put the laptop class out of work. Unfriendly artificial general intelligence will disassemble humanity for parts. Perhaps reality will someday catch up to the think tank whitepapers and science fiction—it wouldn’t be the first time. Or perhaps belief in these hypothetical technologies will peter out like we’ve seen in quests for the holy grails of previous generations, like moon bases, fusion power, or the cure for cancer. Predicting the future of society is hard, and predicting the future of technology is even harder. Narratives which try to claim a special place for computers on the basis of predicted future technologies, which could arrive in ten years or a hundred years or never, cannot play a role in our assessment of what computers have achieved so far.

Taking all this together, in terms of the four-level scale above, I would rank computers at level 3. This judgment is necessarily a bit subjective, and I can imagine events developing so that in the year 2080 we look back and move the classification up or down a notch. Still, this lets us put some bounds on the computer’s importance relative to other technologies. Computers have changed our economy, but not nearly as much as the steam engine did. The internet has changed our intellectual environment and structures of political legitimacy somewhat more than the transition from radio to television, but much less than the printing press. In short, computers are a big deal, and very probably the most important technological development in living memory. But from the perspective of human history as a whole, the computer doesn’t stand out. There have been dozens of technologies that were at least as important, although probably fewer than a hundred.

Then why do so many people think computers are the most important transformation ever? Because many of these commentators aren’t trying to place the development of computers within a firm understanding of the grand sweep of history. They’re trying to explain their own direct experience of the world. In their own lifetimes, computers were indeed the most revolutionary new technology, and compared to the things that they experience firsthand or spend time thinking about, nothing else comes close. It’s tempting to attribute big changes to the most powerful force you’ve experienced, and it’s tempting to believe that the changes happening in your own lifetime are the most important in human history. Or sometimes people are trying to hype up new projects and products, which of course calls for boosting the present technology and gliding past the previous cases. But in either case, the reason their viewers and readers let them get away with it is that most people just don’t know the history they’re implicitly comparing to.

To take one personal pet peeve as an example, I have heard dozens and dozens of people describe how the internet has created never-before-seen problems in journalism, completely unaware that the unprecedented problems they describe were just as bad or worse in the journalism of, say, the late 1800s. Of course it’s perfectly fine to think about these things without knowing all the history, but making historical comparisons without knowing the history is dangerous. You don’t need to know who Charles Sumner was before you say that news media is more polarized today than it was in your childhood. However, if you want to say that social media has made the discourse more fragmented and conspiracy-prone than ever before, you do need to know who Alfred Dreyfus was.

Comparisons like these are necessary to thinking about how dramatically computers have changed our world. Media and intellectual discourse have been transformed many many times in modern history, and evaluating the scale of the internet’s changes requires a memory that goes back further than American media’s golden age of the late 20th century. While anyone can and should observe the very clear fact that computers are changing the economy, figuring out the significance of a “Fourth Industrial Revolution” requires a comfortable knowledge of the First. When you hear a claim that computers have caused some gigantic change, you should ask yourself “Compared to what?”

Plenty of common arguments cast computers and the internet as far more transformative than they actually are, often through ignorance of history or through wildly overconfident predictions about the near future. Yet there’s no need to exaggerate. When taking the long view and properly comparing computers to other technologies, they are still a pretty big deal. We can think about their effects on society even if they haven’t caused a fundamental transformation.

Probability Is Not A Substitute For Reasoning

Several Rationalists objected to my recent “Against AGI Timelines” post, in which I argued that “the arguments advanced for particular timelines [to AGI]—long or short—are weak”. This disagreement is broader than predicting the arrival of one speculative technology. It illustrates a general point about when thinking in explicit probabilities is productive and illuminating vs when it’s misleading, confusing, or a big waste of effort.

These critics claim that a lack of good arguments is no obstacle to using AGI timelines, so long as those timelines are expressed as a probability distribution rather than a single date. See e.g. Scott Alexander on Reddit, Roko Mijic on Twitter, and multiple commenters on LessWrong.1

And yes, if you must have AGI timelines, then having a probability distribution is better than just saying “2033!” and calling it a day, but even then your probability distribution is still crap and no one should use it for anything. Expressing yourself in terms of probabilities does not absolve you of the necessity of having reasons for things. These critics don’t claim to have good arguments for any particular AGI timeline. As far as I can tell, they agree with my post’s central claim, which is that there’s no solid reasoning behind any of the estimates that get thrown around.

You can use bad arguments to guess at a median date, and you will end up with noise and nonsense like “2033!”. Or you can use bad arguments to build up a probability distribution… and you will end up with noise and nonsense expressed in neat rows and figures. The output will never be better than the arguments that go into it!2

As an aside, it seems wrong to insist that I engage with people’s AGI timelines as though they represent probability distributions, when for every person who has actually sat down and thought through their 5%/25%/50%/75%/95% thresholds, spot-checked them against their beliefs about particular date ranges, and so on, in order to produce a coherent distribution of probability mass, there are dozens of people who just espouse timelines like “2033!”.
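
For what it’s worth, the legwork itself isn’t conceptually hard. Here is a minimal sketch, with made-up numbers chosen purely for illustration, of what thinking through quantile thresholds and spot-checking them against date ranges might look like:

    import numpy as np

    # Hypothetical thresholds someone might write down (illustration only):
    # "5% by 2027, 25% by 2032, 50% by 2040, 75% by 2055, 95% by 2090"
    quantiles = [0.05, 0.25, 0.50, 0.75, 0.95]
    years = [2027, 2032, 2040, 2055, 2090]

    def cdf(year):
        """Piecewise-linear CDF passing through the stated quantiles."""
        return float(np.interp(year, years, quantiles, left=0.0, right=1.0))

    # Spot-check the implied beliefs against particular date ranges:
    print(f"P(before 2035) = {cdf(2035):.2f}")
    print(f"P(between 2040 and 2060) = {cdf(2060) - cdf(2040):.2f}")

The point of an exercise like this is only that the numbers have to hang together; it says nothing about whether they’re justified.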

Lots of people, Rationalists especially, want the epistemic credit for moves that they could conceivably make in principle but have not actually made. This is bullshit. Despite his objection above, even Alexander—who is a lot more rigorous than most—is still perfectly happy to use single-date timelines in his arguments, and to treat others’ probability distributions as interchangeable with their median dates:

“For example, last year Metaculus thought human-like AI would arrive in 2040, and superintelligence around 2043 … If you think [AGI arrives in] 2043, the people who work on this question (“alignment researchers”) have twenty years to learn to control AI.”

Elsewhere he repeats this conflation and also claims he discards the rest of the probability distribution [emphasis mine]:

“I should end up with a distribution somewhere in between my prior and this new evidence. But where?

I . . . don’t actually care? I think Metaculus says 2040-something, Grace says 2060-something, and Ajeya [Cotra] says 2050-something, so this is basically just the average thing I already believed. Probably each of those distributions has some kind of complicated shape, but who actually manages to keep the shape of their probability distribution in their head while reasoning? Not me.”

Once you’ve established that you ignore the bulk of the probability distribution, you don’t get to fall back on it when critiqued. But if Alexander doesn’t actually have a probability distribution, then plausibly one of my other critics might, and Cotra certainly does. Some people do the real thing, so let’s end this aside about the many who gesture vaguely at “probability distributions” without putting in the legwork to use one. If this method actually works, then we only need to pay attention to the few who follow through, and I’ll return to the main argument to address that. 

Does it work? Should we use their probability distributions to guide our actions, or put in the work to develop probability distributions of our own?

Suppose we ask an insurance company to give “death of Ben Landau-Taylor timelines”. They will be able to give their answer as a probability distribution, with strong reasons and actuarial tables in support of it. This can bear a lot of weight, and is therefore used as a guide to making consequential decisions—not just insurance pricing, but I’d also use this to evaluate e.g. whether I should go ahead with a risky surgery, and you bet your ass I’d “keep the shape of the probability distribution in my head while reasoning” for something like that. Or if we ask a physicist for “radioactive decay of a carbon-14 atom timelines”, they can give a probability distribution with even firmer justification, and so we can build very robust arguments on this foundation. This is what having a probability distribution looks like when people know things—which is rarer than I’d like, but great when you can get it.
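
(For the curious, the second case is just standard physics: the probability that a single carbon-14 atom has decayed by time t is 1 − e^(−λt), where λ = ln(2)/t½ and the half-life t½ is about 5,730 years. The mechanism is understood, the parameter is measured, and the distribution follows from both.)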

Suppose we ask a well-calibrated general or historian for “end of the Russia-Ukraine war timelines” as a probability distribution.3 Most would answer based on their judgment and experience. A few might make a database of past wars and sample from that, or something. Whatever the approach, they’ll be able to give comprehensible reasons for their position, even if it won’t be as well-justified and widely-agreed-upon as an actuarial table. People like Ukrainian refugees or American arms manufacturers would do well to put some weight on a distribution like this, while maintaining substantial skepticism and uncertainty rather than taking the numbers 100% literally. This is what having a probability distribution looks like when people have informed plausible guesses, which is a very common situation.

Suppose we ask the world’s most renowned experts for timelines to peak global population. They can indeed give you a probability distribution, but the result won’t be very reliable at all—the world’s most celebrated experts have been getting this one embarrassingly wrong for two hundred years, from Thomas Malthus to Paul Ehrlich. Their successors today are now producing timelines with probabilistic prediction intervals showing when they expect the growth of the world population to turn negative.4 If this were done with care then arguably it might possibly be worth putting some weight on the result, but no matter how well you do it, this would be a completely different type of object from a carbon-14 decay table, even if both can be expressed as probability distributions. The arguments just aren’t there.

The timing of breakthrough technologies like AGI is even less amenable to quantification than the peak of world population. A lot less. Again, the critics I’m addressing don’t actually dispute that we have no good arguments for this; the only people who argued with this point were advancing (bad) arguments for specific short timelines. The few people who have any probability distributions at all give reasons which are extremely weak at best, if not outright refutable, or sometimes even explicitly deny the need to have a justification.

This is not what having a probability distribution looks like when people know things! This is not what having a probability distribution looks like when people have informed plausible guesses! This is just noise! If you put weight on it then the ground will give way under your feet! Or worse, it might be quicksand, sticking you to an unjustified—but legible!—nonsense answer that’s easy to think about yet unconnected to evidence or reality.

The world is not obligated to give you a probability distribution which is better or more informative than a resigned shrug. Sometimes we have justified views, and when we do, sometimes probabilities are a good way of expressing those views and the strength of our justification. Sometimes we don’t have justified views and can’t get them. Which sucks! I hate it! But slapping unjustified numbers on raw ignorance does not actually make you less ignorant.


[1] While I am arguing against several individual Rationalists here, this is certainly not the position of all Rationalists. Others have agreed with my post. In 2021 ur-Rationalist Eliezer Yudkowsky wrote:

“I feel like you should probably have nearer-term bold predictions if your model [of AGI timelines] is supposedly so solid, so concentrated as a flow of uncertainty, that it’s coming up to you and whispering numbers like “2050” even as the median of a broad distribution. I mean, if you have a model that can actually, like, calculate stuff like that, and is actually bound to the world as a truth.

If you are an aspiring Bayesian, perhaps, you may try to reckon your uncertainty into the form of a probability distribution … But if you are a wise aspiring Bayesian, you will admit that whatever probabilities you are using, they are, in a sense, intuitive, and you just don’t expect them to be all that good.

I have refrained from trying to translate my brain’s native intuitions about this into probabilities, for fear that my verbalized probabilities will be stupider than my intuitions if I try to put weight on them.”

Separately, “Against AGI Timelines” got a couple other Rationalist critics who do claim to have good arguments for short timelines. I’m not persuaded but they are at least not making the particular mistake that I’m arguing against here.

[2] It’s not a priori impossible that there could ever be a good argument for a strong claim about AGI timelines. I’ve never found one and I’ve looked pretty damn hard, but there are lots of things that I don’t know. However, if you want to make strong claims—and “I think AGI will probably (>80%) come in the next 10 years” is definitely a strong claim—then you need to have strong reasons.

[3] The Good Judgment Project will sell you their probability distribution on the subject. If I were making big decisions about the war then I would probably buy it, and use it as one of many inputs into my thinking.

[4] I’m sure every Rationalist can explain at a glance why the UN’s 95% confidence range here is hot garbage. Consider this a parable about the dangers of applying probabilistic mathwashing to locally-popular weakly-justified assumptions.

Against AGI Timelines

Some of my friends have strong views on how long it will be until AGI is created. The best arguments on the subject establish that creating a superintelligent AGI is possible, and that such an AGI would by default be “unfriendly”, which is a lot worse than it sounds. So far as speculative engineering goes, this is on relatively solid ground. It’s quite possible that as research continues, we’ll learn more about what sorts of intelligence are possible and discover some reason that an AGI can’t actually be built—such discoveries have happened before in the history of science and technology—but at this point, a discovery like that would be a surprise.

The loudest voices warning of AGI also make additional claims about when AGI is coming.1 A large contingent argue for “short timelines”, i.e., for AGI in about 5-10 years. These claims are much shakier.

Of course, short timelines don’t follow automatically from AGI being possible. There is often a very long time between when someone figures out that a technology ought to work in principle, and when it is built in reality. After Leonardo da Vinci sketched a speculative flying machine based on aerodynamic principles similar to a modern helicopter, it took about 400 years before the first flight of a powered helicopter, and AGI could well be just as far from us. Since the discovery of nuclear physics and the construction of particle accelerators about a century ago, we have known in principle how to transmute lead into gold, but outside of trace quantities in accelerator experiments this has never actually been done. Establishing timelines to AGI requires additional arguments beyond its mere possibility, and the arguments advanced for particular timelines—long or short—are weak.

Whenever I bring this up, people like to switch to the topic of what to do about AI development. That’s not what I’m discussing here. For now I’m just arguing about what we know (or don’t know) about AI development. I plan to write about the implications for action in a future post.

1.

The most common explanation2 I hear for short timelines (e.g. here) goes roughly like this: Because AI tech is getting better quickly, AGI will arrive soon. Now, software capabilities are certainly getting better, but the argument is clearly incomplete. To know when you’ll arrive somewhere, you have to know not just how fast you’re moving, but also the distance. A speed of 200 miles per hour might be impressive for a journey from Shanghai to Beijing (high-speed rail is a wonder) but it’s very slow for a journey from Earth orbit to the Moon. To be valid, this argument would also need an estimate of the distance to AGI, and no one has ever provided a good one.
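
(To put rough numbers on the analogy: travel time is just distance divided by speed. Beijing to Shanghai is about 1,300 kilometers, so at 200 miles per hour, roughly 320 km/h, the trip takes around four hours. Earth orbit to the Moon is about 384,000 kilometers, so the same speed would take around 1,200 hours, or seven weeks. Same speed, wildly different arrival times, because the distances differ by a factor of nearly three hundred.)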

Some people, like in the earlier example, essentially argue “The distance is short because I personally can’t think of obstacles”. This is unpersuasive (even ignoring the commenters responding with “Well I can think of obstacles”) because the creation of essentially every technology in the history of the world is replete with unforeseen obstacles which crop up when people try to actually build the designs they’ve imagined. This is most of the reason that engineering is even hard.

Somewhat more defensibly, I’ve heard many people argue for short timelines on the basis of expert intuition. Even if most AI experts shared this intuition—which they do not; there is nothing close to a consensus in the field—this is not a reliable guide to technological progress. An engineer’s intuition might be a pretty good guide when it comes to predicting incremental improvements, like efficiency gains in batteries or the cost of photovoltaic cells. When it comes to breakthrough technologies with new capabilities, however, the track record of expert intuition is dismal. The history of artificial intelligence specifically is famously littered with experts predicting major short-term breakthroughs based on optimistic intuition, followed by widespread disappointment when those promises aren’t met. The field’s own term for this, “AI winter”, is older than I am.

It’s worth a look at the 1972 Lighthill Report, which helped usher in the first AI winter half a century ago (emphasis added): 

“Some workers in the field freely admit that originally they had very naive ideas about the potentialities of intelligent robots, but claim to recognise now what sort of research is realistic. In these circumstances it might be thought appropriate to judge the field by what has actually been achieved than by comparison with early expectations. On the other hand, some such comparison is probably justified by the fact that in some quarters wild predictions regarding the future of robot development are still being made.

When able and respected scientists write in letters to the present author that AI, the major goal of computing science, represents another step in the general process of evolution; that possibilities in the nineteen-eighties include an all-purpose intelligence on a human-scale knowledge base; that awe-inspiring possibilities suggest themselves based on machine intelligence exceeding human intelligence by the year 2000; when such predictions are made in 1972 one may be wise to compare the predictions of the past against performance as well as considering prospects for the realisation of today’s predictions in the future.”

While there has been tremendous progress in software capabilities since the Lighthill Report was written, many of the experts’ “wild predictions” for the next 20-30 years have not yet come to pass after 50. The intuition of these “able and respected scientists” is not a good guide to the pace of progress towards intelligent software.

Attempts to aggregate these predictions, in the hopes that the wisdom of crowds can extract signal from the noise of individual predictions, are worth even less. Garbage in, garbage out. There has been a great deal of research on what criteria must be met for forecasting aggregations to be useful, and as Karger, Atanasov, and Tetlock argue, predictions of events such as the arrival of AGI are a very long way from fulfilling them. “Forecasting tournaments are misaligned with the goal of producing actionable forecasts of existential risk”.

Some people argue for short AGI timelines on the basis of secret information. I hear this occasionally from Berkeley rationalists when I see them in person. I’m pretty sure this secret information is just private reports of unreleased chatbot prototypes before they’re publicly released about 2-6 weeks later.3 I sometimes get such reports myself, as does everyone else who’s friends with engineers working on chatbot projects, and it’s easy to imagine how a game of Telephone could exaggerate this into false rumors of a new argument for short timelines rather than just one more piece of evidence for the overdetermined “the rate of progress is substantial” argument.

Edit: Twitter commenter Interstice reminds me of an additional argument, “that intelligence is mostly a matter of compute and that we should expect AGI soon since we are approaching compute levels comparable to the human brain”, and links Karnofsky’s argument. Yudkowsky’s refutation of this approach is correct.

2.

It’s worth stepping back from AGI in particular to ask how well this type of speculation about future technology can ever work. Predicting the future is always hard. Predicting the future of technology is especially hard. There are lots of well-publicized, famous failures. Can this approach ever do better than chance?

When arguing for the validity of speculative engineering, short timeline proponents frequently point to the track record of the early 20th century speculative engineers of spaceflight technology like Tsiolkovsky. This group has many very impressive successes—too many to be explained by a few lucky guesses. Before any of the technology could actually be built, they worked out many of the important principles: fundamental constraints like the rocket equation, propulsion schemes that were later built more or less as originally sketched, and so on. This does indeed prove that speculative engineering is a fruitful pursuit whose designs should be taken as serious possibilities.
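For a concrete sense of what they got right before any hardware existed, consider Tsiolkovsky’s 1903 rocket equation, which still governs every launch vehicle:

$$\Delta v = v_e \ln\left(\frac{m_0}{m_f}\right)$$

The achievable change in velocity $\Delta v$ depends only on the effective exhaust velocity $v_e$ and the ratio of initial to final mass $m_0/m_f$. Deriving it requires nothing beyond Newtonian mechanics, and it was on paper decades before anything reached orbit.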

However, the speculative engineers of spaceflight also produced many other possible designs which have not actually been built. According to everything we know about physics and engineering, it is perfectly feasible to build a moon base, or even an O’Neill cylinder. A space elevator should work as designed if the materials science challenges can be solved, and those challenges in turn have speculative solutions which fit the laws of physics. A mass driver should be able to launch payloads into orbit (smaller versions are even now being prototyped as naval guns). But just because one of these projects is possible under the laws of physics, it does not automatically follow that humanity will ever build one, much less that we will build one soon.

After the great advances in spaceflight of the mid-20th century, most of the best futurists believed that progress would continue at a similar pace for generations to come. Many predicted moon bases, orbital space docks, and manned Mars missions by the early 2000s, followed by smooth progress to colonies on the moons of outer planets and city-sized orbital habitats. From our vantage today, none of this looks on track. Wernher von Braun would weep to see that since his death in 1977 we have advanced no further than Mars rovers, cheaper communication satellites, and somewhat larger manned orbital laboratories.

On the other hand, technological advances in some fields, such as materials science or agriculture, have continued steadily generation after generation. Writers throughout the 1800s and 1900s spoke of the marvels that future progress would bring in these fields, and those expectations have mostly been realized or exceeded. If we could bring Henry Bessemer to see our alloys and plastics or Luther Burbank to see our crop yields, they would be thrilled to find achievements in line with their century-old hopes. There is no known law to tell which fields will stall out and which will continue forward.

If today’s AI prognosticators were sent back in time to the 1700s, would their “steam engine timelines” be any good? With the intellectual tools they use, they would certainly notice that steam pump technology was improving, and it’s plausible that their speculation might envision many of steam power’s eventual uses. But the tools they use to estimate the time to success—deference to prestigious theorists, listing of unsolved difficulties, the intuition of theorists and practitioners [4]—would have given the same “timelines” in 1710 as in 1770. These methods would not have picked out, ahead of time, the difference between steam engines like Savery’s (1698) and Newcomen’s (1712), which ultimately proved to be niche curiosities of limited economic value, and the Watt steam engine (1776), which launched the biggest economic transformation since agriculture.

3.

Where does this leave us? While short AGI timelines may be popular in some circles, the arguments given for them are unsound. The strongest is the argument from expert intuition, and it fails because expert intuition has an abysmal track record at predicting when breakthrough technologies will arrive.

This does not mean that AGI is definitely far away. Any argument for long timelines runs into the same problems as the arguments for short timelines. We simply are not in a position to know how far away AGI is. Could existing RLHF techniques with much more compute suffice to build a recursively self-improving agent that bootstraps to AGI? Is there some single breakthrough that a lone genius could make that would unlock AGI on existing machines? Does AGI require two dozen different paradigm-shifting insights in software and hardware which would take centuries to unlock, like da Vinci’s helicopter? Is AGI so fragile and difficult to create as to be effectively out of human reach? Many of these possibilities seem very unlikely, but none of them can be ruled out entirely. We just don’t know.

Addendum: Several people object that none of this presents a problem if you give timelines probabilistically rather than as a single date. See Probability Is Not A Substitute For Reasoning for my response.


[1] I’ve even heard a secondhand report of one-year timelines, as of February 2023.

[2] Okay, the most common explanation people give me for short timelines is that they’re deferring to subculture orthodoxy or to a handful of prestigious insiders. But this essay is about the arguments, and these are the arguments of the people they defer to.

[3] On the other hand, if the Berkeley AI alignment organizations had more substantial secret information or arguments guiding their strategy, I expect I would’ve heard it. They’ve told me a number of their organizations’ other secrets, sometimes deliberately, sometimes accidentally or casually, and on one occasion my interlocutor kept spilling the beans even after I specifically warned him that he was telling me sensitive information and should stop. I very much doubt that they could keep a secret like this from me when I’m actually trying to learn it, if it were really determining their strategy.

[4] The theorists and practitioners of the 1700s didn’t predict accurate “steam engine timelines”. Watt himself thought his machine would be useful for pumping water out of mines more efficiently than the Newcomen engine, and did not expect its widespread revolutionary use.

New Industries Come From Crazy People

I’ve been behind on posting my work to this site. Some recent material:

New Industries Come From Crazy People at Palladium. This is a look at how different cultures interface with the antisocial weirdos like Thomas Edison and Steve Jobs who drive industrial breakthroughs. Most societies don’t tolerate them. The ones that do are the ones that drive economic progress.

Narratives Podcast with Will and David Jarvis. We talk about some of the ideas in the Palladium essay, how state capacity has changed over the last century, why I’m optimistic about America’s future, and more.

Two Reports On Industry

I recently concluded an in-depth case study on the machine tool industry with Oberon Dixon-Luinenburg. We just published two summaries of our research.

  1. Machine Tools: A Case Study In Advanced Manufacturing, with Bismarck Analysis. This report covers where machine tools are built, why they’re built in those places, and what this tells us about industrial policy more broadly.
  2. How State Capacity Drives Industrialization, with Palladium. This article describes the history of industrialization, and why it doesn’t happen without guidance from the state.

Please do comment or email me your thoughts.

Markets Are Thin

When people talk about a “market economy”, what exactly does that mean? You might assume it means a society where all or most economic decisions are made through markets. But this does not describe our own economy, or anything else that gets called a “market economy”. Most economic decisions are made inside particular organizations, which make their internal decisions according to hierarchical structures and individual judgment, taking markets into account as one factor among many. If two workers both want to avoid a Monday morning shift, or two department heads are arguing over hiring policy, or two engineers are debating which design to adopt, no one expects the people involved to start setting prices and bidding against each other like they would in a market. And yet markets are crucial to our economic organization, in ways that set it apart from non-“market economies”. How, then, does the market fit in?

What markets make possible is smooth interaction between independent units. Organizations have hierarchical control over their own assets, including very complex social processes like medical schools or aircraft control towers, but they also need to coordinate with other parts of society external to the organization itself, like customers and suppliers. Markets make this easy and reliable. The market is a thin layer of crucial but simple social technology that lets an organization ignore the deep complexity of the people and organizations it interacts with. This is helpful because the market interface an organization presents to the outside world is so much simpler than the non-market-based coordination it uses internally.

In a way, the market is like the docking module between the Apollo and Soyuz spacecraft. It’s a crucial component, and without it the entire system couldn’t possibly function as a unified whole, but it’s still a relatively small fraction of the system’s mass and complexity. Understanding how it works won’t tell you all that much about how the Soyuz itself works.

[Image: the Apollo and Soyuz spacecraft joined by their docking module]

Of course organizations don’t always relate to each other according to the rules of the “free market”. Perfect markets don’t and can’t exist, and large swathes of the current economy don’t even approximate markets (see e.g. Patrick McKenzie noting how success in venture capital is determined by “access”). Nevertheless, these remain exceptions. Most of our economy is mostly market-based. This means that organizations and individuals can usually use the market as a reasonably accurate guide to how they should interact with the rest of society—and nothing more.