What to Do About Radical Uncertainty

PUBLISHED ON PROJECT SYNDICATE, July 21, 2023


A longstanding belief in the predictability of market behavior and outcomes has created plenty of fodder for academic theorizing in economics. But firms operating in the real world succeed by recognizing that the future is unknowable – and acting accordingly.

CAMBRIDGE – A central premise of neoclassical economics is that the consequences of the decisions of market participants can be known in advance and quantified as risk-adjusted estimates. As John Kay and Mervyn King showed in their 2020 book, Radical Uncertainty: Decision-Making Beyond the Numbers, such probabilistic reasoning has a long history. As applied in economics, it has operationalized the concept of “expected utility” – the desideratum that rational economic agents are defined to be maximizing.
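
In its textbook form, this probabilistic apparatus assigns each possible outcome a known probability and ranks choices by a weighted average of their payoffs. A minimal statement, in standard notation rather than anything specific to Kay and King:

    E[U] = \sum_i p_i \, u(x_i)

where p_i is the probability of outcome x_i and u is the agent’s utility function. The argument of Radical Uncertainty is precisely that, for the decisions that matter most, the p_i in this formula are not knowable.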

As the author of a major analysis of the stock market sponsored by the British government (Kay) and a former governor of the Bank of England (King), both men are well equipped to examine the complex interaction between financial markets and markets for “real” things (goods, services, labor, patents, and so forth). In doing so, they have challenged the statistical methodologies and ontological assumptions that lead economists to regard the future as measurable and manageable.


MANAGING EXPECTATIONS

From John Maynard Keynes at the University of Cambridge 90 years ago through Robert Lucas at the University of Chicago in the last quarter of the twentieth century, economists have placed expectations at the core of market dynamics. But they differ on how expectations are formed. Are the data we observe the outcome of processes that are as “stationary” as physical laws, like those determining the properties of light and gravity? Or do the social processes that animate markets render future outcomes radically uncertain?

For a long generation starting in the 1970s, Lucas and his colleagues dominated economic theory, giving rise to different strands of Chicago School economics. While the Efficient Market Hypothesis asserted that prices in financial markets incorporate all relevant information, the Real Business Cycle Theory of New Classical Economics held that the macroeconomy is a self-equilibrating system whose markets are both efficient and complete. The system may be subject to external shocks, but it is not amenable to fiscal or monetary management.

This assumption of complete markets implies that we can overcome our ignorance of the future. It suggests that we could, at any moment, write contracts to insure ourselves against the infinity of possible future states of the world. But since perfect, complete markets obviously do not exist, the Chicago School’s Rational Expectations Hypothesis (REH) proposes that market participants will guide their forward-looking decisions by reference to a (generally implicit) model of how, on average, the world works and will continue to work. As a result, expectations will be tamed and aligned with efficient market equilibria.

For their part, Kay and King look further back to the pre-REH period, when Frank Knight and then Keynes correctly showed that our ignorance of future outcomes is inescapable. As Keynes famously put it in 1937:

“By ‘uncertain knowledge’ … I do not mean merely to distinguish what is known from what is merely probable. … The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.”

The shockingly unanticipated Global Financial Crisis of 2008 brought this insight back to the fore.

The renewed relevance of Keynes’ economics reflected its core teaching: a monetary market economy cannot be relied on to generate full employment. This lesson emerged from the central position Keynes gave to the fragility of the expectations on which investment decisions are made. In 2008, investment expectations were smashed: such a massive market failure called for equally massive offsetting interventions by state institutions. Even Lucas conceded, “Everyone is a Keynesian in a foxhole.” As Kay and King write:

“The advances in economic theory lauded by Lucas did not prevent a major downturn in the world economy, nor did they give policymakers the tools they required to deal with that downturn. The models he described assumed a stable and unchanging structure of the economy and could not cope with unique events that derived from the essential non-stationarity of a market economy.”

As the discipline of economics has evolved since the shock of 2008, Keynes’ theoretical stance on inescapable uncertainty, as opposed to manageable risk, is gradually being recognized as no less relevant than the policy prescriptions derived from his macroeconomics.

The central question remains: Where can we find guidelines for mitigating the consequences of radical uncertainty? What basis is there for purposive action in the face of “We simply do not know”?

I see three paths forward. The first two are defensive, and the third is proactive. All three reject an exclusive focus on efficiency in the allocation of resources. Thus, they stand outside what remains the dominant paradigm of mainstream economics.


CASH AND CONTROL

The first path is “cash and control.” This represents a toolset for hedging uncertainty in my own professional domain of venture capital, but its uses have been broadly recognized across the spectrum of financial institutions and markets.

I first came to appreciate radical uncertainty as an aspiring venture capitalist some 40 years ago. I had been prepared for what I would encounter by my doctorate in economics at Cambridge, earned under the supervision of Richard Kahn, Keynes’ leading student and intellectual executor.

As I recount in Doing Capitalism in the Innovation Economy, I learned a hard lesson when I led my VC firm’s investment in an emergent life-sciences company, Bethesda Research Laboratories (BRL). Owing to failures of management by the founders, as well as failures of governance by other investors and directors, I was forced to launch an operation to save our investment. Despite all the due diligence we had done to understand the company’s technology and potential markets, we discovered that we had invested in ignorance.

Reflecting on the experience decades later, I asked: “Could we … have hedged against our necessary ignorance?” The answer was yes, but if and only if markets were complete and we could have purchased precisely those kinds of securities that would have paid off in the unique situation in which we had found ourselves. However, as Kay and King document in exhaustive detail, markets can never be complete. So, with the benefit of hindsight, I improvised a retrospective hedge that later became the basis for my strategy of cash and control.

At the micro level of the VC investor, cash and control means that, when bad things happen, you will have unequivocal access to the cash you need to buy the time needed to understand the problem, and sufficient control to change the parameters of the situation. In the case of BRL, my partners and I had privileged access to more than enough cash from our institutional-investor clients (whose investment we were powerfully motivated to protect). But to obtain full control of the company, we had to use that access to prevail against other VCs and the founders – an arduous exercise.

The availability of cash and control is obviously context-specific. During the recent Unicorn Bubble, which had been inflated by years of ultra-loose monetary policies, cash was so abundantly available to entrepreneurs that maintaining control became almost impossible. The balance of power had shifted to founders, many of whom secured entrenched control through ownership of super-voting shares.

This dual-class stock structure had jumped from the world of family-controlled media companies to Silicon Valley, starting with Google’s initial public offering in 2004. That structure was blessed by the two leading VCs of their generation: John Doerr of Kleiner Perkins and Mike Moritz of Sequoia Capital. In the years that followed, every entrepreneur aspired to the status of Google founders Sergey Brin and Larry Page. Most notably, Facebook’s founder, Mark Zuckerberg, followed directly in their footsteps.

The cost of accepting these terms was soon dramatized through public battles – at the board level and in the courts – to force out founders at Uber and WeWork. And now that the normalization of monetary policy has ended the Unicorn Bubble, cash and control again stands as a relevant guide for venture capitalists.


MARKET POWER AND MERCANTILISM

Beyond the world of startups, in the heart of the corporate economy, one major source of control is market power, or what the investor Warren Buffett famously described as a company’s “moat.” As he explained at Berkshire Hathaway’s 1995 annual meeting:

“What we’re trying to find is a business that, for one reason or another – it can be because it’s the low-cost producer in some area, it can be because it has a natural franchise because of surface capabilities, it could be because of its position in the consumers’ mind, it can be because of a technological advantage, or any kind of reason at all – that it has this moat around it.”

Since then, a substantial body of academic literature has emerged to theorize and quantify the observable increase in market power across industries, especially in the United States. Market power – and the monopoly rents that follow – confers an ability to self-insure at scale. Hence, the four most successful companies in the world, each backed by venture capitalists in their early years, tend to hold massive amounts of cash and short-term marketable securities. As of March 31, 2023, Alphabet’s (Google) holdings were $115 billion, Amazon’s were $69 billion, Apple’s were $56 billion (plus $110 billion in marketable long-term securities), and Microsoft’s were $104 billion.

Having accepted radical technological uncertainty in the development of novel products and services, along with radical commercial uncertainty about whether there were customers for their innovations, these companies have refused to accept any financial uncertainty.

The same pursuit of strategic financial autonomy also led Jamie Dimon to construct the “fortress balance sheet” that gave JPMorgan – perhaps alone among global banks – the means to survive the 2008 global financial crisis without emergency government assistance. Similarly, East Asian countries responded to the destruction wrought by the International Monetary Fund’s intervention in the 1997-98 Asian financial crisis by adopting aggressively mercantilist policies. By boosting current-account surpluses and reserves, they achieved cash and control, and thus robust financial security.

There is a good reason why such policies are called “protectionist” – whether they are implemented by means of an undervalued currency or through legislated tariffs and subsidies. They certainly serve the economic interests of those who export at the expense of the mass of consumers (who in turn suffer at the margin from the adverse shift in the terms of trade). At the extreme, they threaten to unleash the sort of trade wars that contributed to the Great Depression.

But the rhetoric of free trade masks the pragmatic political economy of international commerce. As Friedrich List observed 180 years ago, “Any power which by means of a protective policy has attained a position of manufacturing and commercial supremacy can (after she has obtained it) revert with advantage to a policy of free trade.” One is reminded, here, of Great Britain in 1846 or the US in 1945.

Holding cash reserves that are excessive in normal times and pursuing policies that will generate such reserves over time are both deviations from the efficient ideal. As economists have grappled with the reality of incomplete markets, self-insurance with uncommitted cash has increasingly been recognized as “rational,” though it comes with a visible cost.


INVESTING IN RESILIENCE

That brings us to the second path: investing in resilience by strategically allocating more capital than optimally efficient production chains would seem to require. Kay and King themselves emphasize the necessary trade-off between efficiency and resilience.

The COVID-19 pandemic laid bare the fragility of extended supply chains. Production networks that were optimized to minimize the deployment of working capital quickly collapsed as efficiency was revealed to be the enemy of resilience. And this happened just 12 years after the global financial crisis demonstrated that the efficient allocation of capital in the banking sector had radically reduced the financial system’s resilience.

Resilience in the banking system requires levels of capital that are excessive when measured against “normal” financial conditions, just as resilience in production systems turns on the maintenance of buffer stocks and alternative sources. In either case, the time series of data that define what is “normal” and what constitutes a statistically quantifiable deviation are misleading guides to unforeseeable shocks – the Fukushima tsunami, the COVID-19 pandemic, the Russian invasion of Ukraine, and so forth – that can stress the system beyond its capacity.

Working capital represents a firm’s investments in inventories and in the accounts receivable owed by customers who have not yet paid for their purchases. Firms do not control the level of their receivables, but they can decide to self-insure by holding more inventories than historical trends suggest are required to support current and planned levels of production. The necessary reduction in the realized return on capital is the price of this insurance. 
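
A back-of-the-envelope sketch makes that price explicit. The figures below are hypothetical, chosen only to show the mechanics: buffer inventory adds to the capital employed without adding to operating profit, so the measured return on capital falls.

    # Hypothetical figures, for illustration only.
    operating_profit = 120.0     # annual operating profit
    fixed_capital = 600.0        # plant, equipment, and the like
    receivables = 150.0          # owed by customers; not controlled by the firm
    lean_inventory = 250.0       # stock sized to "normal" historical demand
    buffer_inventory = 100.0     # extra stock held as self-insurance

    def return_on_capital(profit, capital):
        return profit / capital

    lean_capital = fixed_capital + receivables + lean_inventory
    resilient_capital = lean_capital + buffer_inventory

    print(f"lean ROC:      {return_on_capital(operating_profit, lean_capital):.1%}")
    print(f"resilient ROC: {return_on_capital(operating_profit, resilient_capital):.1%}")

On these made-up numbers, the return on capital falls from 12.0 percent to roughly 10.9 percent; the gap is the visible premium paid for the insurance.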

That said, contrarian behavior can also be punished. In financial markets, the literature on “the limits of arbitrage” has shown how liquidity on the right-hand side of an investor’s balance sheet (where the liabilities that finance asset holdings are listed) enables bank runs and constrains how long a manager can bet against the market. Berkshire Hathaway’s structure as, in effect, a closed-end fund meant that Buffett could choose to sit out the great tech bubble of the 1990s without any risk of losing his funding base. The growth in private equity in recent decades has vastly expanded Buffett’s model.

The typical firm may not have this luxury. If its contrarian behavior reduces short-term financial performance, it could invite the attention of activist hedge funds or a determined acquirer. For the operating firm as for the investment manager, the crucial question is: “How long can you afford to be wrong?” Acquisition by private equity offers corporate managers insulation from such threats, at the expense of definitively transferring control to the new owners.


TOWARD A NEW MESOECONOMICS

Taking production-network fragility seriously opens the door to a strategic extension of the economics discipline. From the classical economics of Adam Smith and David Ricardo through the marginalist economists of the late nineteenth century, whose mathematics paved the way for modern neoclassical economics, the discipline has been practiced as a bimodal subject.

While microeconomics addresses the behavior of individual agents (firms, consumers, workers, investors), macroeconomics addresses the behavior of aggregates (as measured by gross national product, gross domestic product, national income, and so forth). But the space between has largely been neglected.

Against the grain, two great economists concerned themselves with this intermediate domain. One was the Russian-American Nobel laureate Wassily Leontief, who constructed the first input-output tables to illustrate the flow of goods from primary resources to final output.

Today, the US Bureau of Economic Analysis produces national input-output tables on an annual basis, but these are necessarily static and backward-looking. They report on the changed structure of the economy, but they do not provide the theoretical framework and empirical information necessary to understand how shocks propagate through the system and how the economic attributes of different sectors interact dynamically.
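
To make the propagation question concrete, here is a minimal sketch of the standard Leontief calculation, using an invented three-sector coefficient matrix rather than actual BEA data. It shows how a drop in final demand for one sector reduces the gross output required of every sector, because of the input linkages the table encodes; what it cannot show is how those linkages themselves change.

    import numpy as np

    # Illustrative technical-coefficients matrix A: A[i, j] is the value of
    # sector i's output required to produce one unit of sector j's output.
    # The numbers are invented for this sketch, not drawn from BEA tables.
    A = np.array([
        [0.10, 0.30, 0.05],   # inputs supplied by sector 0 ("materials")
        [0.20, 0.10, 0.25],   # inputs supplied by sector 1 ("manufacturing")
        [0.05, 0.15, 0.10],   # inputs supplied by sector 2 ("services")
    ])

    # Baseline final demand for each sector.
    d = np.array([100.0, 200.0, 300.0])

    # Leontief: gross output x must satisfy x = A @ x + d,
    # so x = (I - A)^(-1) @ d.
    leontief_inverse = np.linalg.inv(np.eye(3) - A)
    x_base = leontief_inverse @ d

    # Propagate a shock: a 10 percent drop in final demand for sector 1.
    d_shock = d.copy()
    d_shock[1] *= 0.9
    x_shock = leontief_inverse @ d_shock

    print("baseline gross output:  ", np.round(x_base, 1))
    print("post-shock gross output:", np.round(x_shock, 1))
    print("change:                 ", np.round(x_shock - x_base, 1))

A static snapshot of this kind captures the linkages but not the dynamics; the mesoeconomic program described below aims to supply precisely the firm-level entry, exit, and financial dependencies that such a snapshot omits.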

Then there was the Italian economist Luigi Pasinetti, whose Structural Economic Dynamics sought to illuminate how the distinctive elasticities of demand and supply, with respect to price and income (along with industry-specific productivity growth), could animate a model economy. But Pasinetti’s work was purely conceptual, lacking both the data and the relevant mathematical tools to be operationalized.

Now that the digitalization of economic life and the availability of relevant computational resources have made operationalizing mesoeconomics possible, an international team of economists with a hub at Cambridge is picking up where Pasinetti left off. By looking out to the frontier of firm entry and exit, and to the financial dependencies among market participants, this wide-ranging research program incorporates both theoretical and empirical advances in the analysis of networks.

Mesoeconomics promises to deliver guidelines for identifying and evaluating potential points of failure and channels of propagation, calling attention to where investing in resilience is likely to be most necessary and effective. The relevance of this approach is underscored by ongoing debates about the causes and nature of inflation, and about the alternative responses to it. Is it primarily the consequence of too much macroeconomic demand stimulus or of sector-specific supply shocks? How can we parse the complex interactions between shocks on the demand and supply sides of the economy?

An alternative, comparably strategic use case is mapping the dependencies entailed by industrial-policy initiatives. For example, any effort to reconstruct a high-tech manufacturing base in the US will encounter many bottlenecks, and the responses to these will generate feedbacks that can mitigate the downstream consequences. Applied mesoeconomics can help anticipate where enabling co-investments should be targeted.

Of course, even when the network externalities of a complex, dynamic production system are visible, their evolution will be influenced by reflexive interactions and significant innovations along the supply chain. The outcome of these forces will necessarily remain uncertain. But the fact remains: it is both appropriate and necessary to maintain working capital above what one would need in a hypothetical, optimally efficient production system.


EXPERIMENTATION AND INNOVATION

The third path forward begins by recognizing that innovation, by definition, confronts radical uncertainty. As I wrote in Doing Capitalism:

“The innovation economy begins with discovery and culminates in speculation. Over some 250 years, economic growth has been driven by successive processes of trial and error and error and error: upstream exercises in research and invention and downstream experiments in the new economic space opened by innovation. Each of these activities necessarily generates much waste along the way: dead-end research programs, useless inventions and failed commercial ventures.”

If the lowest-risk, most efficient allocation of capital dominates, the necessarily costly process of experimentation will not be undertaken. Nor will innovations with transformative potential be realized.

Experiments are how we map the contours of the unknown. In the VC world, every startup is an experiment, and most fail. Competition in the market economy liquidates the failures and confirms the winners. Without such “Schumpeterian waste,” the persistent, cumulative upward drift in productivity and living standards would not occur.

When scientific discovery supplanted mechanical tinkering as the basis for productive innovation in the late nineteenth century, the necessary research funding came from the giant corporations spawned by the Second Industrial Revolution (which gave us railroads, electrification, and mass production). Whether they owed their market positions to formal agreements with the federal government (AT&T), patent monopolies (RCA and Xerox), or a combination of innovative research and commercial dominance (DuPont and IBM), the leading corporate laboratories could afford to invest upstream in the basic science from which commercially significant technological innovations might evolve.

By allocating their monopoly profits to scientific research and development, these corporations extended their market power while also serving a larger, social purpose. But their market positions proved transient. Within the space of a generation, the monopoly profits available for funding R&D came under growing pressure, and the great tech companies of the post-World War II era succumbed to the forces of Schumpeterian creative destruction and federal antitrust enforcement.

Moreover, this trend was reinforced by the pressure to maximize shareholder value. Following a 1982 regulatory change (the US Securities and Exchange Commission’s Rule 10b-18 safe harbor), companies could freely reward shareholders through stock buybacks, which became a compelling alternative to investing surplus cash flows in radically uncertain experimentation.


SPECULATION AND THE STATE

The post-war era was also when the US state emerged as the dominant source of R&D funding. While the Department of Defense laid the groundwork for what would become the digital revolution, the National Institutes of Health played a similar role for biotechnology. By the 1980s, a professional VC industry had emerged to dance on the platforms created by the US state.

Long before the creation of the National Venture Capital Association, financial speculation had provided the funding for developing and deploying transformational technologies at scale – from the canals and railroads of the First Industrial Revolution through the era of electrification during the “roaring” 1920s. More recently, the tech/internet/dot-com bubble of the late 1990s not only funded the physical infrastructure of the internet; it also financed a vast array of experiments in what to do with digital and network technologies. Among the offshoots of these experiments were early e-commerce and social-media platforms.

Then, the prolonged period of “unconventional” monetary policies (in response to the 2008 financial crisis and then the COVID-19 pandemic) sponsored the Unicorn Bubble. Again, a wide variety of experiments gained funding: some, like machine learning, hold transformational economic potential (for better or worse); others, like instant delivery start-ups, seem destined for the scrap heap of history if they fail to finance their operations from the services they sell rather than from the speculative securities for which there are no longer buyers.

The magnitude of this latest bubble is apparent in the extreme reach of some experiments whose potential success will not come until after the lifetimes of the committed VC funds – as in the case of quantum computing or fusion energy. But when financial speculation is the source of funding for innovation, some investors will win by selling into the bubble before they know whether an experiment has succeeded.

The public sector can also pursue speculative bets on innovation, and it can do so with greater long-term continuity. Program managers at the Defense Advanced Research Projects Agency (DARPA) are recruited from the private sector for fixed terms, and then empowered to fund projects to address some of the armed forces’ specific needs. With market risk eliminated, DARPA can fund extremely challenging technical experiments. A key to its historical success has been its mandate to accept failure as a necessary concomitant of experimentation. That success serves as a compelling model for funding the innovations needed to respond effectively to climate change.

In all these contexts, an excessive focus on efficiency in the allocation of resources is the enemy of innovation, and sometimes the enemy of a firm’s survival. The reality of radical uncertainty inverts Cassius’s assertion: The fault, indeed, is in our stars. We are condemned to meet the future as best we can, without any hope of ever finding an optimal path forward. Against the efficient failure, let us value the effective (and necessarily wasteful) success.