The EA-Progress Studies War is Here, and It’s a Constructive Dialogue!
We’re hoping Marc Andreessen doesn’t read this and polarize everyone again.
Clara: I’d like to kick off this discussion by talking about what effective altruism and progress studies are, and why we’re putting them in conversation. They’re both social movements. They both have roots in a particular milieu — California techies crossed with wonky economists. They both share a strong emphasis on technology as the fundamental factor shaping the human condition. And they’re both very interested in how scientific and technological progress can be steered. Also we all like to write really excruciatingly detailed blog posts, which might be the most important parallel of all.
So what’s the difference?
Jason: You forgot to mention that they both overlap with the rationalist community, which might have something to do with the excruciatingly detailed blog posts.
First, as an aside, I’ve never loved the term “progress studies,” which makes it sound much more academic than it is; I usually talk about the progress community or the progress movement. (It’s also hard to figure out what to call us — progress studiers? Progress people?)
But let’s talk about the differences. The two movements start from very different places, and have different intellectual lineages. EA, as I understand it, can be traced to the philosopher Peter Singer, who concluded that if you were truly altruistic, you wouldn’t just give to some local cause that had personal meaning to you and made you feel good; you would analyze your giving to maximize its utilitarian impact, and end up giving away almost all your money to the desperately poor.
And that idea, somehow, reached the California techies and/or wonky economists who decided to draw up spreadsheets to figure out which charity was saving the most lives per dollar. And then other philosophers took this idea even more seriously and decided that, as long as we’re doing the most good for anyone, anywhere, why not extend that to any time — and worry about the potential quadrillions of future people all throughout the galaxy? And then someone realized that if humanity goes extinct, none of those people will exist, and so preventing human extinction is the greatest EA cause of all.
That’s a very top-down approach. The progress community is more bottom-up. I think a few of us converged on the idea of progress being important in different ways. One motivation comes from simply looking at history: The progress of the last few centuries is the greatest thing ever to happen to humanity, so one of the most important things we can do is to keep that going (and even accelerate it), and the way to do that is by studying how it happened, in order to understand the root causes. Another motivation comes from wanting to defend what we might call Enlightenment values: reason, science, technology, industry, capitalism, liberty. The progress of the last few centuries is arguably the product of those ideas, and therefore vindicates the Enlightenment. One other motivation is looking at the last approximately 50 years and realizing that technological/economic progress has actually slowed down. If that’s a problem, then we should figure out why it happened and how to fix it.
And that points to what I see as another difference between the communities, which is epistemic. While both communities appreciate the importance of both empirical data and abstract theory, I think EAs are generally more willing to trust a conclusion derived from modeling and analysis, even when it contradicts intuition, and more willing to extrapolate an analysis far beyond its empirical base or far into the future.
The progress community is more likely to be suspicious of such analyses. Re: “longtermism,” a common reaction from within progress studies is: You have absolutely no idea what the long-term future holds, let alone any ability to influence it in any way that you can control. Or consider Bayesian reasoning: EAs, perhaps because so many of them are rationalists, think that putting a probability on any assertion is not only permissible but in some sense mandatory. The progress community is more likely to invoke Knightian uncertainty — “unknown unknowns” — and refuse to assign probabilities, such as p(doom), when there are too many unknowns.
In practice, where you see this is that EA funding tends to look like, “We made a spreadsheet and our expected value of this project is 10,483.5 QALYs,” whereas progress funding tends to look like, “Let’s find some brilliant people with crazy ideas that would be amazing if they worked.”
I think you also see it in the fact that — and I might catch flak from EAs for this — the EA mindset is often technocratic, in the sense that it sees the world as something that is, and ought to be, run by educated, technical elites. The progress community tends to be much more Hayekian: Let lots of people try things, compete on the free market, and see what works.
And then of course there’s the difference in priorities. For one, EA, as a form of altruism, is very interested in saving lives and relieving suffering — I think of this as “negating negatives.” In the progress community, we think more about advancing science and technology and driving economic growth. We see saving lives and reducing suffering as a natural consequence of that; indeed, nothing has saved more lives or relieved more suffering than science, technology, and capitalism! But this approach gets you not just the negation of negatives, but also the creation of amazing new things that people didn’t even know they needed. One hundred fifty years ago, those amazing new things were the light bulb, automobile, airplane, telephone, radio, Haber-Bosch process, etc. Today, maybe the amazing new things are nanotech manufacturing, fusion energy, reusable rockets that colonize Mars, or AI tutors for every student on Earth. EA is interested in advancing science, but in practice, it seems like the cause area that gets the most resources here is medical research to cure diseases.
Related to this is that, to massively oversimplify, EA is always worried about what could go wrong with new technology, whereas the progress community is excited about what could go right. I suspect this is just a difference in temperament that is extremely difficult to get underneath via rational argument.
One subtlety here is that I think EA tends to see progress as pretty much inevitable: It’s barreling ahead and almost unstoppable, and if we throw all of our weight against it we might, at best, be able to slow it down a tiny bit — and thereby give ourselves a decade or two in which to prevent human extinction. In contrast, the progress community sees progress as fragile, in need of constant protection, and highly contingent on people driving it forward. In most of human history, inventive periods were brief and then died out; the Industrial Revolution was the first inventive period that kept going and accelerating. We shouldn’t take that for granted — especially when we already see signs of a slowdown.
And related to that: Because of the EA technocratic mindset, when they think about how to move the world, they often think in terms of influencing policies and institutions. One thing I rarely if ever hear them discuss is public opinion, and one problem I rarely hear them address is that outside of our little bubble of California techies and wonky economists, most of the world does not see enormous value in science, technology, and progress; many have even bought into “degrowth.” In contrast, I think about public opinion a lot, about societal attitudes toward progress, and I think it’s one of the big causes of recent stagnation. When you do get EAs to acknowledge any issues along these lines, they tend to talk about “coordination” problems, which is an odd way of thinking to me — as if Waymo, and the mob in San Francisco who recently set fire to a Waymo, just needed to “coordinate” better.
C: I’d quibble with your timeline of EA. I don’t think there’s a natural, sequential progression from Singer to Oxford, then out to California, culminating in longtermist concerns about human extinction. Rather, I think these were all separate strains present in the movement from the beginning. Will MacAskill and Toby Ord at Oxford were developing Giving What We Can, which was then just focused on global poverty, at around the same time that Holden Karnofsky and Elie Hassenfeld were figuring out that GiveWell should focus on global poverty charities (instead of, say, public schools in New York City), and Eliezer Yudkowsky was debating Robin Hanson on the likelihood that superintelligent AI would lead to human extinction. (This was all roughly 2007–2009.) For various historical, contingent reasons these groups of people have done a lot of work together, and of course they’ve influenced each other, but even today, they haven’t fully merged.
This sounds like a nitpick, but I think it matters, because while EA can look very unified from the outside, in my experience it’s really a lot of different priorities and worldviews that share a belief that “doing good” is something we can quantify and should be cause-agnostic about. This position is controversial in a lot of circles, but I think it’s shared by many people in the progress community. Just look at how you characterize the movement — you’re not arguing that technological progress is cool, or good for American national security interests, or in keeping with the telos of humanity as a rational species. You’re arguing that it’s had more impact on human well-being than any other phenomenon in history, and so our collective priority should be making sure it keeps going.
I think all of us share a tendency to identify particular facts as key features of the world that should guide our actions: “Technological progress is the greatest thing to ever happen to humanity,” “Money goes a lot further in the developing world than it does in America,” “The trendline in AI development points to some concerning places if it continues.” Personally, I happen to think that all of these facts are true and what we should do about them is really a question of resource prioritization — but of course, that’s a very EA answer!
But let’s talk about your epistemic critique. Yes, EAs are more inclined toward quantitative modeling. I’m of two minds about this. When I argue with other EAs about some proposal, I usually find myself pushing against relying too much on the models. Spreadsheets are inherently simplifying — it’s easy to miss important considerations, drastically miscalculate unlikely risks, or fail to account for second-order effects. When the model goes against our gut, it’s often because the model can’t digest — so to speak — the broader set of factors our gut is picking up on.
On the other hand, I usually find that the exercise of building the model is worth it. My instincts are often wrong, sometimes extremely wrong. The process of trying to reduce my gestalt impression to a set of specific numerical factors forces me to think about things I wouldn’t have explicitly considered otherwise and make sure my vague intuitions about the world are backed up by empirical facts. I’m not a particularly quantitative person, and on the margin I probably do less modeling than I should — even when I throw out the end result, the process makes me a better thinker. In practice, I think this dialectic informs a lot of EA decision-making.
I’m less convinced that any of this is why EAs are often drawn to philosophical longtermism. I’m not the best person to defend the philosophy, since I’m not a longtermist myself. I agree with you here: I don’t know what the world is going to look like in a million years, and I don’t think I have particularly good levers for influencing it. What I do believe is that technological developments that could happen in the next few decades could have an outsize impact on the trajectory of human civilization, which it’s important to understand and prepare for.
With all that in mind, then, let’s talk about p(doom) (that is, the probability that advanced AI will cause some sort of existential catastrophe). I have a lot of issues with this concept. I think it elides different possible outcomes: I’ve been in plenty of conversations where one person is talking about the probability of human extinction while another is worried about a future where humans are confined to Earth as a lush nature preserve by our AI overlords. More to the point, coming up with these numbers can create, as you say, a false sense of rigor (and the people who make the most careful models of different AI outcomes are usually the first to admit this). It’s a very hard question to get right and on the whole our community is probably getting it wrong.
That said, I don’t think it’s fundamentally unknowable — saying that it is would be a very strong claim to make about risks from advanced AI! These arguments aren’t based on tenuous chains of logic reaching into the distant future. Most of them come from a pretty specific set of observations about technologies that exist today and arguments about how they might perform over the next few years or decades. To adopt Knightian uncertainty here is to say that we can’t make predictions about AI progress: We might be headed for the singularity, or another few centuries of business as usual, but there’s just no way of knowing. I just don’t buy that. One could instead say that the case against AI doom is so overwhelmingly strong that we shouldn’t waste time considering other possibilities — which is more plausible, but it takes us out of the realm of epistemology and back to disagreements about particular empirical claims.
Maybe the real difference here is that EAs tend to be much more explicitly conscious of resource constraints. “Let’s find some brilliant people with crazy ideas that would be amazing if they worked” is a more attractive proposition when you’re not trying to account for every dollar. It’s a very venture capital-inflected way of looking at the world, whereas EA is coming from academia and philanthropy. This isn’t to say EAs don’t fund high-risk, high-upside propositions — this is a very common EA funding model. But as you say, there’s pressure to justify it with spreadsheets.
I do think this penny-pinching can be a blind spot for EA. At the same time, it’s very salient to me that every dollar directed away from GiveWell’s top charities is something like one three-thousandth of a statistical life we didn’t save. The whole justification for doing anything else is that riskier interventions have the potential to do much more good. And progress-style interventions have the potential to do a lot of good. (Nor am I alone here — Open Philanthropy, the biggest EA funder, makes grants in innovation policy and metascience.) It’s a bet that the march of progress will make the lives of future people unimaginably better, at the expense of some of the poorest people alive right now. I think it probably will, to be clear, but I don’t want to make that determination lightly — so it’s back to the spreadsheets for me.
I like the distinction you draw between seeing progress as inevitable and seeing it as fragile, but I’d like to complicate it a bit. EAs are, classically, pretty interested in differential technological development — the idea that it’s possible to steer which kinds of technologies are invented. There’s maybe an implicit claim in your argument here that we can’t actually do this as a society. We have one lever — progress or not — and all we can do is turn it up and hope for the best. In this framework, of course progress looks better than no progress. But if we have some ability to steer things, then maybe it’s better to push society in the direction of inventing vaccines for neglected tropical diseases instead of tools that can be used to make bioweapons.
You’re right that EAs tend to be worrywarts, though I’d say that those worries are mostly confined to a specific set of new technologies — AI foundation models and gene synthesis. But some of the more promising interventions to reduce risks from those technologies require yet more technological progress, in areas like alignment and computer security and personal protective equipment. As you say, all of us live in a little bubble of California techies and wonky economists. In our hearts, many EAs are really temporarily embarrassed techno-optimists.
J: I think what you say about the EA movement comprising several clusters choosing to get along is true of many (most? all?) movements, and I can certainly see it applying to the progress movement. We have libertarian-ish advocates of economic growth as a moral imperative, left-leaning “supply-side progressives,” and right-leaning advocates of “American dynamism.” We have YIMBYism, which wants more housing; metascience, which wants to improve our research institutions; and ecomodernism, which wants to address environmental concerns with technology and growth. And what we all have in common is the conviction that science, technology, and economic growth are good and we should have more of them.
Regarding moral frameworks: I see the EA community as broadly unified on utilitarianism. The progress community talks less about moral frameworks, and so is probably less unified in this way. There’s a shared set of values, but less agreement on the fundamental justification for those values.
Speaking only for myself: I am neither a utilitarian nor an altruist. I favor a “decentralized” morality that does not recognize any universal utility function, only an individual utility function for each agent. Each agent’s utility is personal but not totally arbitrary — there are values we all have in common, such as life, health, and happiness, rooted in our nature as a particular kind of biological entity. Social values such as cooperation, peace, and trade come from what we have in common and from the opportunities for win-win relationships. All this is a kind of enlightened egoism. (I explicitly reject the implications of Singer’s drowning child thought experiment: You’re kind of a monster if you don’t save the drowning child, but not if you don’t want to make the equivalent trade-off to aid the global poor.)
I’m not advocating that we pursue progress in order to achieve the greatest good for the greatest number. Rather, I personally want to live in a world of progress and enjoy its fruits. And that is also what I want for everyone I care about, now or in the future.
Next, on differential technological development. I don’t think DTD is impossible; there is not a single dial of progress. Quite the contrary: We can and should steer what technologies are created. The devil is in the details. How do we know in advance which technologies are going to be more beneficial and which more harmful? How do we know which programs will promote which type of development? It’s not impossible to answer these questions in some cases — your interview with Kevin Esvelt about the risks of synthetic biology in a previous issue had some very good examples — but it’s hard, and I hear a lot more advocacy for DTD in the abstract than I hear in-depth analysis of what we should actually do in specific fields. We need more of the latter.
Without that, a desire for DTD quickly slides into generic calls to slow, pause, or halt progress in a field. I think we’re seeing this now in AI.
My current take is roughly that AI, like any new technology, creates risk, and that we should actively identify and mitigate that risk. What that looks like, I think, is thorough testing of models before release and monitoring once they are in production; planning ahead in the form of something like responsible scaling policies; and hardening our world against threats from pandemics, cyberattacks, and the like, whether those dangers come from AI or not.
But there are many proposals that seem ripe for creating regulatory overreach that would slow or halt progress; and some that require a frightening expansion of government surveillance. We should avoid creating new government agencies to regulate AI, especially any that are required to review and approve AI systems before release, especially if they believe their mandate is to make sure that systems are “proven safe.” We should not license GPUs, clusters, or training runs, let alone set up a draconian global tracking regime to enforce this.
I think EAs should pay careful attention to how all these issues are playing out — not only in excruciatingly detailed blog posts, but in the actual arena of politics. Otherwise they risk becoming, in practice and in effect, neo-Luddites, even if in theory and in their hearts they are temporarily embarrassed techno-optimists.
C: Guilty as charged: I’m an altruist, and — more complicatedly — a utilitarian, or at least close enough to one for government work. And, in addition to our compulsive need to hedge everything, EAs definitely share a certain commitment to moral universalism. (That said, I actually agree with you in not finding Singer particularly helpful here, mainly because his argument elides the — to my mind — key point that figuring out how to help people far away from us is usually difficult and nonintuitive.)
Much as I’d enjoy it, though, I don’t think we’re going to hash out the foundations of normative ethics in this email exchange. So let’s talk about DTD. I was actually just revisiting the essay where Oxford philosopher Nick Bostrom introduces the term, and it’s kind of wild to read now — he thinks we might have to build AGI faster so it can help us protect ourselves from the risk of self-replicating nanotechnology! And, interestingly, he thinks bans are incapable of fixing the problem. The idea behind DTD is that, given that we can’t slow technology down, we should at least try to steer it.
In practice, I’m interested in all our available options. But I think it’s useful to remember that this idea of shaping technological progress in the interests of safety doesn’t just mean pulling back; it’s as much or more about what we should proactively try to build. Kevin Esvelt’s work on making biotechnology safer is a great example of this. When you look at the projects he’s involved in, very little of that work is regulatory — it’s things like security protocols for DNA synthesis machines, observatories for detecting viruses in wastewater, and germicidal UV light.
From my perspective — which, let’s be clear, is a shared office with something like half of the Berkeley AI safety community — I see similar things happening here as well. There are things like designing better safety evaluations, which you’ve mentioned, as well as methods for getting more useful work out of potentially dangerous AIs, and a lot of work trying to answer some of the questions you’ve posed about understanding all of this better. Of course, there’s always much more to be done.
There’s a question I hear sometimes from people who work in technical AI safety: Would the principles we’re advocating here tell us to delay the invention of the printing press? What about the Industrial Revolution? I’m not sure who originally made this point — maybe Ajeya Cotra? — but I think it’s a great intuition pump for the kinds of considerations that might come up with really advanced AIs.
Another thing I find myself ruminating on is the 1970s antinuclear movement. We can see now that overregulation of nuclear power in the U.S. was a terrible policy mistake. At the same time, it’s easy to see the depth of fear people had toward nuclear power at that time as an outgrowth of very reasonable concerns about nuclear weapons and even fallout from tests during the Cold War. When you’re trying to raise the salience of real risks, it’s hard to control the magnitude of the response. I can easily see us making that kind of misstep.
Which brings us, as you say, to the actual arena of politics. This is where things get really complicated, because I don’t think there is any one set of EA opinions on the correct set of policies to implement. I realize I keep saying this, but it really is the core issue that makes this debate so difficult: EA is a social movement, and it’s very hard for social movements to achieve their goals without some kind of centralized movement discipline. But we’re also a bunch of very fractious people with major factual disagreements that lead to very different ideas about what we ought to be doing. My own specific views here are fairly complicated, but I don’t have any ideological commitment for or against any particular measure. (Though I do think that competent, knowledgeable regulatory agencies can be good for an industry — just look at aviation.) It’s all just a question of evaluating what we think the actual risks are and what responses are appropriate.
I could also ask a similar question of the progress movement. You have your YIMBYs and your metascience people and your ecomodernists, but to what extent do these different groups form a unified progress agenda? Or is that something you’d even want?
J: I think Bostrom’s sense of the relative risk from nanotech vs. AI is a great example of how difficult it is to predict the future, especially the detailed contours of the future that often make all the difference.
That said, I’m grateful for the work being done in moving safety technology forward. I think it is underrated how much dedicated technical work is required to achieve safety. I think of METR (formerly ARC Evals) as sort of like the Underwriters Laboratories for AI. If anyone in AI safety wants some historical inspiration and likes reading old books, I recommend A Symbol of Safety, a history of UL written in 1923. In the late 19th and early 20th centuries, UL dedicated a lot of engineering time to devising testing and certification standards for electrical components, firefighting equipment, and building materials. It was only through their work that we were able to dramatically reduce the incidence of fire.
Regarding a progress agenda, I wouldn’t want a strongly unified or centralized one, but I think if you immerse yourself in the community, you start to see common themes. The three big ones are tech, policy, and culture.
In technology, we’re interested in frontier technologies with huge potential, especially ones that seem somehow neglected or underrated: nanotech, longevity, supersonic flight, geothermal energy, climate geoengineering. And yes, that includes health and safety technology — I can’t wait to figure out the far-UVC germicidal light thing and basically eliminate respiratory disease. The progress community includes a lot of technologists and founders, and many of them are doing ambitious new ventures in areas like these.
In policy, we want to see regulatory reform to unblock progress, especially but not limited to permitting reform (such as reforming NEPA). We’re basically all YIMBYs. We’d like to see the NRC actually approve some nuclear power, and it would be nice if the FDA didn’t make it so difficult to develop medicines. We generally want more immigration, especially of the most talented and ambitious people. Depending on how libertarian you are/aren’t, you might want to see more government investment to create public goods in science and technology (think Operation Warp Speed for everything). We also want to see more experiments in a broader range of ways to fund and manage research (such as FROs or private ARPAs).
In culture, we want society to regain a bold, ambitious vision of the future. The path to this is through education, media, and art. Progress should be on the curriculum in high school and college, and every student should graduate with “industrial literacy”: a basic understanding of how industrial civilization works, what it took to establish our standard of living, and what is required to maintain it. Journalists should apply a basic understanding of and respect for progress when they cover it: “Starship achieves new milestone in latest test” instead of “Elon’s rocket blew up again.” There should be more Hollywood biopics of scientists and inventors, and there should be sci-fi that shows us a future we want to live in. And yes, we need progress studies.
But for anyone thinking about where to invest their time, money, or other resources, I want this list to be just a set of suggestions for inspiration, not a definitive list of all the things worth working on. If someone is deeply motivated to pursue an idea, I want them to feel encouraged to do it. That’s the kind of Hayekian bottom-up order without design that lets a social movement — or an economy, or humanity — achieve its goals without centralized discipline.
C: Far-UVC is a great example of some of the differences we’ve been talking about in the EA and progress studies mindsets. EAs, of course, are interested in this technology because it could be an important defense against future pandemics, especially artificial ones that might be far deadlier than naturally occurring diseases. I remember talking with some EA friends about who might be interested in funding some particular far-UVC study — I don’t remember all the details, but I do remember saying that probably people in the progress community would be into it, not because of pandemics, but because normal, common respiratory diseases are cumulatively a major drag on workforce productivity. And then something like two days later you tweeted exactly that.
I realize this makes us sound like real downers, especially compared to the bold, bright vision you’ve just articulated. And I do think there can be something epistemically distortionary in this focus on worst-case scenarios.
At the same time, I’m skeptical of reflexive optimism. When people in the progress community talk about what they want for the world, I often hear echoes of the 1950s: industrial R&D labs, megaprojects, the space race, science as the savior of democracy and capitalism. But the key inventions of the modern era — electrification, cars, plastics, radio — mostly don’t come out of the 1950s. They’re from about the 1880s through the 1930s. And culturally, this era is quite mixed — you’ve got world’s fairs and radical utopians, but also a lot of backlash to the consequences of industrialization. This is before we even get to the First World War. It’s an era with an enormous amount of fear and anxiety. And of course it was! This was the fastest people’s lives had ever changed because of technology in all of human history.
So when it comes to the sunny view of technology we associate with the middle of the 20th century, I don’t think it came from driving rapid progress. I think it reflects a confidence that comes from stability. Rapid technological change is wonderful, but it’s also scary. It can mean Blake’s dark satanic mills, or the threat of nuclear war, or just creating a world for our grandchildren that we find completely incomprehensible.
This isn’t an argument against progress, or even for slowing down. But I find it useful to sit with the confusion and uncertainty that it creates.
J: Well, to be clear, I am also in favor of things that reduce tail risk, even if they don’t alleviate everyday problems. For pandemics, that might mean advanced wastewater monitoring, or ways to more rapidly develop tests and vaccines, or something like Kevin Esvelt’s SecureDNA. Risk is real, and reducing it is a crucial aspect of progress.
And I, too, am wary: not of too much optimism, but of the wrong kind. I am wary of complacent, passive optimism: the assumption that progress is inevitable or automatic, or that it is automatically good. I want us to have an active optimism — “optimism of the will” — the energy, courage, and determination to work for a better future, to create it through choice and effort, to embrace whatever problems or risks arise and to solve them. Hopefully that’s something both communities can agree on.