Vaniver

I'm going to expand on something brought up in this comment. I wrote:

A lot of my thinking over the last few months has shifted from "how do we get some sort of AI pause in place?" to "how do we win the peace?". That is, you could have a picture of AGI as the most important problem that precedes all other problems; anti-aging research is important, but it might actually be faster to build an aligned artificial scientist who solves it for you than to solve it yourself (on this general argument, see Artificial Intelligence as a Positive and Negative Factor in Global Risk). But if alignment requires a thirty-year pause on the creation of artificial scientists to work, that belief flips--now it actually makes sense to go ahead with humans researching the biology of aging, and to do projects like Loyal.

This isn't true of just aging; there are probably more like twelve major areas of concern. Some of them are simply predictable catastrophes we would like to avert; others may need to be solved to safely exit the pause at all (or to keep the pause going when it would be unsafe to exit).

I think 'solutionism' is basically the right path, here. What I'm interested in: what's the foundation for solutionism, or what support does it need? Why is solutionism not already the dominant view? I think one of the things I found most exciting about SENS was the sense that "someone had done the work", had actually identified the list of seven problems, and had a plan of how to address all of the problems. Even if those specific plans didn't pan out, the superstructure was there and the ability to pivot was there. It looked like a serious approach by serious people. What is the superstructure for solutionism such that one can be reasonably confident that marginal efforts are actually contributing to success, instead of bailing water on the Titanic?

Restating this, I think one of the marketing problems with anti-aging is that it's an ancient wish and it's not obvious that, even with the level of scientific mastery that we have today, it's at all a reasonable target to attack. (The war on cancer looks like it's still being won by cancer, for example.) The thing about SENS that I found most compelling is that they had a frame on aging where success was a reasonable thing to expect. Metabolic damage accumulates; you can possibly remove the damage; if so you can have lifespans measured in centuries instead of decades (because after all there's still accident risk and maybe forms of metabolic damage that take longer to show up). They identified seven different sorts of damage, which felt like enough that they probably hadn't forgotten one and few enough that it was actually reasonable to have successful treatments for all of them.

When someone thinks that aging is just about telomere shortening (or w/e), it's pretty easy to suspect that they're missing something, and that even if they succeed at their goal the total effect on lifespans will be pretty small. The superstructure makes the narrow specialist efforts add up into something significant.

I strongly suspect that solutionist futurism needs a similar superstructure. The world is in 'polycrisis'; there used to be an 'aligned AGI soon' meme which allowed the polycrisis to be ignored (after all, the friendly AI can solve aging and climate change and political polarization and all that for you), but I think the difficulties with technical alignment work have made that meme fall apart. It needs to be replaced by "here is the plan for sufficiently many serious people to address all of the crises simultaneously", such that sufficiently many serious people can actually show up and do the work.

kave

I don't know how to evaluate whether or not the SENS strategy actually covers enough causes of ageing, such that if you addressed them all you would go from decades-long lifespans to centuries-long lifespans. I think I'm also a little more optimistic than you that a bunch of "bailing out the sinking ship" adds up to "your ship is floating on its own".

I think that a nice thing about incremental and patch solutions is that each one gives you some interesting data about exactly how it worked, and details about what happened as a result. For example, it's interesting if, when you give someone a drug to lower their blood pressure, you end up with some other system reliably failing (more often than in the untreated population). And so I have a bit of hope that if you just keep trying the immediate things, you end up at a much better vantage point for solving a bunch of the issues.

I guess this picture still factors through "we realised what the main problems were and we fixed them", it's just a bit more sympathetic to "we did some work that wasn't on the main problems along the way".

I dunno how cruxy this is for your "superstructure" picture, or what the main alternative would be. I guess there are questions like "many rationalist-types like to think about housing reform. Is that one of the crises we have to address directly, or not?". Is that the type of thing you're hoping to answer?

Vaniver

I think I'm also a little more optimistic than you that a bunch of "bailing out the sinking ship" adds up to "your ship is floating on its own".

I think I'm interested in hearing about the sources of your optimism here, but I think more than that I want to investigate the relative prevalence of our beliefs.

I have a sense that lots of people are not optimistic about the future or about their efforts improving the future, and so don't give it a serious try. There's not a meme that being an effective civil servant is good for you or good for the world. [Like, imagine Teach For America except instead it's Be A Functionary For America or w/e.]

There is kind of a meme that doing research / tech development for climate change is helpful, but I think even then it is somewhat overpowered by the whiny activism meme. (Is the way to stop oil to throw soup on paintings or study how to make solar panels more efficient?)

It seems to me like you're saying that "just doing what seems locally good" (i.e. bailing out your area of the ship) both 1) adds up to the ship floating and 2) is widely expected to add up to the ship floating, and I guess that's not what I see when I look around.

kave

I now wonder if I understood what you meant by 'superstructure' correctly. I was imagining a coordinating picture that tells you whether or not to be an effective civil servant, and even what kind of civil servant to be, in the way that SENS guides your efforts within ageing research. Like something that enumerates the crises within the polycrisis.

But it seems like you're imagining something that is like "do stuff that adds up rather than stuff that doesn't". For example, do you imagine the superstructure is encouraging of 'be a civil servant making sanitation work well in your city' or not? I was imagining that it might rule it out, and similarly might rule out 'try and address renewable energy via Strategy A', and I was saying "I feel pretty hopeful about people trying Strategy A and making sanitation work and so on, even if it's not part of an N-Step Plan for saving civilisation".

Vaniver

I guess there are questions like "many rationalist-types like to think about housing reform. Is that one of the crises we have to address directly, or not?". Is that the type of thing you're hoping to answer?

I think there's a thing that Eliezer complains about a lot where the world is clearly insane-according-to-him and he's sort of hopeless about it. Like, what do you expect? The world being insane is a Nash equilibrium; he wrote a whole book about the generators of that.

And part of me wants to shake him and say--if you think the FDA is making a mistake you could write them a letter! You could sue them! The world has levers that you are not pulling, and part of the way the world becomes more sane is by individual people pulling those levers. (I have not shaken Eliezer, in part because he is pulling levers and has done more than other people and so on.) There's a 'broken windows' thing going on where fixing the visible problems makes a place seem less like the sort of place that has problems, and so people both 1) generate fewer problems and 2) invest more in fixing the problems that remain.

Like something that enumerates the crises within the polycrisis.

I think this is exactly what I'm looking for.

Like, imagine you went to the OpenPhil cause area website and learned that they will succeed at all their goals in the next 5 years. ("Succeed" here is meant in the least ambitious way--an AI pause instead of developing superalignment, for example.) Does that give you a sense of "great, we have fixed the polycrisis / exited the acute risk period / I am optimistic about the future"? I think currently "not yet", and to be fair to them, I don't think they're trying to Solve Every Problem.

To maybe restate my hypothesis more clearly:

I think if there were A Plan to make the world visibly less broken, made out of many components which are themselves made out of components that people could join and take responsibility for, this would increase the amount of world-fixing work being done and would meaningfully decrease the brokenness of the world. Further, I think there's a lot of Common Cause of Many Causes stuff going on here, where people active in this project are likely to passively or actively support other parts of this project / there could be an active consulting / experience transfer / etc. scene built around it.

I think this requires genuine improvements in governance and design. I don't think someone declaring themselves god-Emperor and giving orders either works or is reasonable to expect. I think this has to happen in deep connection to the mainstream (like, I am imagining people rewriting health care systems and working in government agencies and at insurance companies and so on, and many of the people involved not having any sort of broader commitment to The Plan).

kave

There are two reasons I can immediately feel for having a Plan.

The first is: you need a Plan to make sure that when you've finished the Plan you're "done" (where "done" might mean "have advanced significantly"), and to make sure you're prioritising.

The second is: it's an organising principle to allow people to feel like they are pulling together, to see some ways in which their work is accumulating and to have a flag for a group that can have positive regard for each of its members doing useful stuff.

I feel pretty sold on the second! I'm not so sure about the first. Happy to go more into that, but also pretty happy to take it as a given for a while and let the conversation move past that question.

Vaniver

Hmm I'm not sure that distinction is landing for me yet. Like I think I mostly want the second--but in order for the second to be real the plan must also be real. (If I thought the SENS plan contained significant oversights or simplifications, for example, I would not expect it to be very useful for motivating useful effort.)

kave

If I were to discuss something else, I would be pretty interested in questions like "what does the plan need to cover?", "what are some of the levers that are going tragically unpulled?", or "what does the superstructure need to be like to be sociologically/psychologically viable (or whatever other considerations)?"

Vaniver

Yeah happy to move to specifics. I think I don't have a complete Plan yet and so some of the specifics are fuzzy--I think I'm also somewhat pessimistic about the Plan being constructed by a single person.

kave

I guess the difference is I expect more things to help some than you do? If I believed SENS were missing lots of things, I could still imagine being excited to work on it, as long as I believed the problems it identified were fairly real, even if the list wasn't complete. Admittedly, I would be a bit more predisposed to try piecing together a bunch of hacks and seeing where that took us.

kave

Totally makes sense to be pessimistic about the Plan being constructed by a single person. But it seems that the Plan will be constructed by people like you and me doing some kind of mental motion, and I was wondering if maybe we should just do some of that now. Sort of analogous to how the hope is that people will do the pieces of object-level work that add up to a solved polycrisis, it seems good if people do the pieces of meta-level work that add up to a Plan.

Vaniver

ok, so not attempting to be comprehensive:

  • Energy abundance. One of the answers for "wtf went wrong in 1970?" is energy prices stopped going down and went up instead. Having cheaper energy is generically good. Progress here looks like 1) improvements in geothermal energy / transitioning oil drilling to geothermal drilling, 2) continued rollouts of solar, 3) increased nuclear production. People are currently pretty excited about permitting reform for geothermal production for a number of reasons and I would probably go into something like that if I were going to work in this field.
  • Land use. The dream here is land value taxes, but there are lots of other things that make sense too. You brought to my attention the recent piece by Judge Glock about how local governments used to be pro-growth because it determined their revenues and then various changes to the legal and regulatory environment stopped that from being true, giving a lot of support to anti-growth forces. Recent successes in California have looked more like the state government (which is more pro-growth than local governments) seizing control in a lot of areas, but you could imagine various other ways to go about this that are better designed. Another thing here that is potentially underrated is just being a property developer in Berkeley/SF; my understanding is that a lot of people working in the industry did not take advantage of Builder's Remedy because they're here for the long haul and don't want to burn bridges, but I have only done a casual investigation.
  • Labor retooling. When I worked at Indeed (the job search website company) ~7 years ago, there were three main types of jobs people wanted to place ads for, one of which was trucking. And so Indeed was looking ahead to when self-driving trucks would eat a bunch of those jobs, both to try to figure out how to replace the revenue for the company and to try to figure out how to help that predictable flood of additional users find jobs that are good for them. I don't have a great sense of what works well here (people are excited about UBIs, and I think they address one half of the problem but leave the other half unaddressed). Now that we have economically relevant chatbots, I think this is happening (or on the horizon) for lots of jobs simultaneously.
  • Health care. The American system is a kludge that was patched together over a long time; satisfaction is low enough that I think there is potential to just redesign it from scratch. (See Amy Finkelstein's plan, for example.)
  • Aging. It would be nice to not die, and people having longer time horizons possibly makes them more attuned to the long-term consequences of their actions.
  • Political polarization. If you take a look at partisan support for opposite-party presidents in the US, it's declining in a pretty linear fashion, and projecting the lines forward it will not be that long until there is 0% Republican support for Democratic presidents (and vice versa); a toy extrapolation follows this list. This seems catastrophically bad if you were relying on the American government as part of your global-catastrophe-avoidance plan. More broadly, I have a sense that the American system of representative-selection is poorly fitted to the modern world, and satisfaction is low enough that there's potential for reform.
  • Catastrophe avoidance. It seems like there should be some sort of global surveillance agency (either one agency or collaboration across Great Power lines or w/e) that is 'on top of things' for biorisk and AI risk and so on. I'm imagining a ~30 year pause in AI development, here, which likely requires active management.
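
As a toy illustration of the polarization extrapolation above: a minimal sketch in which the polling numbers are invented placeholders, not real data--swap in an actual series (e.g. Gallup's) to reproduce the real trend.

```python
# Fit a line to (made-up) opposite-party presidential approval and find
# where the fitted line crosses 0%. Numbers are illustrative only.
import numpy as np

years = np.array([1960, 1970, 1980, 1990, 2000, 2010, 2020])
approval = np.array([49, 40, 35, 30, 25, 15, 6])  # % approval, invented

slope, intercept = np.polyfit(years, approval, 1)  # least-squares line
zero_year = -intercept / slope                     # where approval hits 0%

print(f"decline: {slope:.2f} points/year; hits 0% around {zero_year:.0f}")
```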

There are some things that maybe belong on this list and maybe don't. Like, I think education is a thing that people perennially love to complain about, but it's not actually obvious to me that it's in crisis to the degree that healthcare is, or that it won't be fixed on its own by independent offerings. (Like, Wikipedia and Khan Academy and all that are already out there; it would be nice for public schools to not be soul-destroying, but I think I am more worried about 'the world' being soul-destroying.) I think this list would be stronger if I had more clearly negative examples of "yeah sorry, we don't care about Catalan independence" or w/e, but this seems like the sort of thing that is solved by a market mechanism (no one buys into the prize for fixing Catalan independence, or no one decides to work on it).

Vaniver

So one of the things that feels central to me is the question of 'design' in the Christopher Alexander sense; having explicitly identified the constraints and finding a form that suits all of them.

I think the naive pro-growth view is "vetocracy is terrible" – when you have to get approval from more and more stakeholders to do projects, projects are that much harder to do, and eventually nothing gets done. But I think we need to take the view that "just build it" is the thesis, "get approval" is the antithesis, and the synthesis is something like "stakeholder capitalism" where getting stakeholder approval is actually just part of the process but is streamlined instead of obstructive.

Like, as population density increases, more people are negatively affected by projects, and so the taxes on projects should actually increase. But also more people should be positively affected by projects (more people can live in an 8-story apartment building than a 4-story one) and so on net this probably still balances out in favor of building. We just need to make the markets clear more easily, which I think involves looking carefully at what the market actually is and redesigning things accordingly.
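
To make that scaling argument concrete, here's a back-of-envelope sketch. All the parameters are invented for illustration; only the structure of the claim matters.

```python
# Toy model: both the stakeholder "tax" (compensation owed to affected
# neighbors) and the housing benefit scale linearly with floors, so a
# project that clears the bar at 4 floors still clears it at 8.
def net_value(floors, neighbors_per_floor=50, harm_per_neighbor=1_000,
              residents_per_floor=10, benefit_per_resident=20_000):
    externality_cost = floors * neighbors_per_floor * harm_per_neighbor
    housing_benefit = floors * residents_per_floor * benefit_per_resident
    return housing_benefit - externality_cost

for floors in (4, 8):
    print(floors, "floors -> net value:", net_value(floors))
```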

kave

As well as negative examples, I wonder if it would be good to contend with the possibility of other 'royal solutions' in the absence of AI. For example, human intelligence enhancement. My guess is that it isn't a solution, but it does possibly change the landscape so much that many other problems (for example ageing and energy abundance) become trivial.

Vaniver

I think human intelligence enhancement definitely goes on the list. I think a large part of my "genuine improvements in governance and design" is something like "intelligence enhancement outside of skulls"--like if prediction markets aggregate opinions better than pundits writing opinion columns, a civilization with prediction markets is smarter than a civilization with newspapers. A civilization with caffeine is also probably smarter than a civilization with alcohol, but that's in a within-skull sort of way. Doing both of those seems great.

kave

Designing the markets to clear more easily is quite appealing. But it also has some worrisome 'silver bullet' feeling to it; a sense of impracticality or of my not having engaged enough with the details of current problems for this to be the right next step.

Vaniver

Yeah, so one of my feelings here also comes from a Matt Yglesias piece on Slow Boring called "Big ideas aren't enough". Roughly speaking, his sense (as a detail-oriented Democratic policy wonk) is that the Republican policy wonks just really weren't delivering on the details, and so a lot of their efforts failed. It's one thing to say "we need to have energy abundance" and another thing to say "ok, here's this specific permitting exemption that oil and gas projects have; if we extend that to geothermal projects it'll have these positive effects, which outweigh those negative effects". It's one thing to have spent 5 minutes thinking about healthcare and guessing at a solution, and another to have carefully mapped out the real constraints and why you believe they're real, and to find something that might actually be a Pareto improvement for all involved (or, if it's a Kaldor-Hicks improvement instead, figure out who needs to be bribed and how much it would take to bribe them).

I think it's more plausible that whatever Consortium of Concerned Citizens can identify the problem areas than that they can solve them--one of the things that I think broadly needs to change is a switch from "people voting for solutions" to "people voting for prices for solutions" that are then provided by a market. If you think increasing CO2 levels in the atmosphere is the problem, it really shouldn't concern you how CO2 levels are adjusted so long as they actually decrease, and you should let prices figure out whether that's replacement with solar or nuclear or continuing to burn fossil fuels while sequestering the carbon or whatever. [Of course this is assuming that you're pricing basically every externality well enough; you don't want the system to be perpetually sweeping the pollution under the next rug.]
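
A minimal sketch of the "vote on prices, not solutions" idea. Everything here--the function names, the dollar figures, the use of a median--is a hypothetical illustration, not a worked-out proposal.

```python
from statistics import median

def posted_price(votes_per_ton):
    """Citizens vote on a price for the outcome (here, $/ton of CO2
    reduced); taking the median keeps outlier votes from dragging
    the bounty around."""
    return median(votes_per_ton)

def payout(tons_verified, price):
    """Any provider -- solar, nuclear, sequestration -- is paid the same
    rate per independently verified ton, regardless of method."""
    return tons_verified * price

votes = [20.0, 35.0, 50.0, 80.0, 120.0]   # hypothetical $/ton votes
price = posted_price(votes)               # -> 50.0
print(payout(1_000, price))               # 1,000 verified tons -> 50000.0
```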

kave

There's also a consideration that favours solutions over prices for solutions: it's often easier to check that inputs conform than that outputs do.

A friend was trying to get fire insurance for their venue, and the fire insurers needed them to upgrade their fire alarm system. They asked the insurer "how much more would it be not to upgrade the fire alarm system?" and the answer was "No. We do not offer insurance if you don't upgrade the system", presumably because the bespoke evaluation was too expensive.

[I don't quite know the import of this for what you wrote above, but it's a heuristic and anecdote that pings me when this sort of stuff comes up.]

kave

So, we wrapped there because of time constraints. Thanks for chatting. I enjoyed this. I would be interested in picking up again in the future.

Comments

The polycrisis has been my primary source of novelty/intellectual stimulation for a good long while now. Excited to see people explicitly talking about it here.

With regard to the central proposition:

I think if there were A Plan to make the world visibly less broken, made out of many components which are themselves made out of components that people could join and take responsibility for, this would increase the amount of world-fixing work being done and would meaningfully decrease the brokenness of the world. Further, I think there's a lot of Common Cause of Many Causes stuff going on here, where people active in this project are likely to passively or actively support other parts of this project / there could be an active consulting / experience transfer / etc. scene built around it.

I think this is largely sensible and true, but I consider top-down implementation of such a thing to be a pipe dream. Instead, there is a kind of grassroots version where you do some combination of the following:

1.) Clearly state the problems that need to be worked on, and provide reasonable guidance as to where and how they might be worked on
2.) Notice what work is already being done on the problems, and who is doing it (avoid reinventing-the-wheel / not-invented-here syndrome; EA is especially guilty of this)
3.) Actively develop useful connections between the people and projects identified in 2.)
4.) Measure engagement (resource flows) and progress

And from that process I expect something like a plan to emerge - it won't be the best possible plan, but it will be far from the worst plan, more adequate than not, and importantly it will survive contact with reality because reality was a key driver in the development of the plan.

The platform for generating the plan would need to be more-open-than-not, and should be fairly bleeding edge - incorporating prediction markets, consensus seeking (polis), eigenkarma etc
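
Of those ingredients, eigenkarma is the most algorithmically concrete: score contributors by the principal eigenvector of a trust graph, PageRank-style, so karma flows from trusted people to the people they trust. A minimal power-iteration sketch (the graph and damping factor are invented for illustration):

```python
import numpy as np

def eigenkarma(trust, damping=0.85, iters=100):
    """Karma as the stationary distribution of a random walk on the
    trust graph: your karma is high if high-karma users trust you."""
    n = trust.shape[0]
    row_sums = trust.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # users who trust no one
    transition = trust / row_sums          # normalize outgoing trust
    karma = np.full(n, 1.0 / n)
    for _ in range(iters):
        karma = (1 - damping) / n + damping * karma @ transition
    return karma / karma.sum()

# trust[i, j] = how much user i trusts user j (hypothetical 4-user graph)
trust = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
print(eigenkarma(trust))  # the widely-trusted user 2 scores highest
```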

It should be a design goal that high-value contributions get noticed, no matter the source. An example of this actually happening is Taiwan's rapid response to Covid, which came about thanks to a moderator noticing and doing due diligence on a post in the g0v forums about Covid, and a process being in place by which that information could be escalated to government.

It should also be subject to a serious amount of adversarial testing - such a platform, if successful, will influence $ flows, and thus will be a target for capture/gaming etc etc.

As it stands, we're lacking all 4. We're lacking a coherent map of the polycrisis[1], we're lacking useful and discoverable communication channels, and we're lacking meaningful 3rd-party measurement.

As it stands, the barriers to entry for those wishing to engage in meaningful work in this space are absurd.
If you lack the credentials and/or the wealth to self-fund, then you're effectively excluded. A problem created by an increasingly specialized world (and the worldview, cultural dynamics, and behaviours it engenders) now has gatekeepers drawn from that same world, enforcing that world's bottlenecks and selective pressures on those who would try to solve the problem.

The neighbourhood is on fire, and the only people allowed to join the bucket chain are those most likely to be ignoring the fire - so very catch-22.

P.S.

I think there's a ton of funding available in this space. Specifically, I think speculating on the markets, informed by the kind of worldview that allows one to perceive the polycrisis, has significant alpha. I think we can make much better predictions about the next 5-10 years than the market, and I don't think most of the market is even trying to make good predictions on those timescales.

I'd be interested in talking/collaborating with anyone who either strongly agrees or disagrees with this logic.

  1. ^

    On this note, if anyone wants to do and/or fund a version of aisafety.world for the polycrisis, I'm interested in contributing.

we're lacking all 4. We're lacking a coherent map of the polycrisis (if anyone wants to do and/or fund a version of aisafety.world for the polycrisis, I'm interested in contributing)

Joshua Williams created an initial version of a metacrisis map, and I suggested to him a couple of days ago that he make the development of such a resource more open, e.g., by turning it into a GitHub repository.

I think there's a ton of funding available in this space, specifically I think speculating on the markets informed by the kind of worldview that allows one to perceive the polycrisis has significant alpha. I think we can make much better predictions about the next 5-10 years than the market, and I don't think most of the market is even trying to make good predictions on those timescales.

Do you mean that it's possible to earn by betting long against the current market sentiment? I think this is wrong for multiple reasons, but perhaps most importantly because the market specifically doesn't measure how well we are faring on a lot of components of the polycrisis -- e.g., the market would look great even if all people were turned into addicted zombies. Secondly, people don't even try to make predictions in the stock market anymore -- it's turned into a completely irrational valve of liquidity that is moved by Elon Musk's tweets, narratives, and memes more than by objective factors.

Joshua Williams created an initial version of a metacrisis map

It's a good presentation, but it isn't a map. 

A literal map of the polycrisis[1] can show:

  • The various key facets (pollution, climate, biorisk, energy, ecology, resource constraints, globalization, economy, demography etc etc)
  • Relative degrees of fragility / timelines (e.g. climate change being one of the areas where we have the most slack)
  • Many of the significant orgs/projects working on these facets, with special emphasis placed on those that are aware of the wider polycrisis
  • Many of the significant communities
  • Many of the significant funders

Do you mean that it's possible to earn by betting long against the current market sentiment?

In a nutshell

  1. ^

    I mildly prefer polycrisis because it's less abstract. The metacrisis points toward a systems dynamic for which we have no adequate levers, whereas the polycrisis points toward the effects in the real world that we need to deal with.

    I am assuming we live in a world that is going to be reshaped (or ended) by technology (probably AGI) within a few decades, and that if this fails to occur the inevitable result of the metacrisis is collapse.

    I think the most impact I can have is to kick the can down the road far enough that the accelerationistas get their shot. I don't pretend this is the world I would choose to be living in, or the horse I'd want to be betting on. It is simply my current understanding of reality.

    Hence: polycrisis. Deal with the symptoms. Keep the patient alive.

1.) Clearly state the problems that need to be worked on, and provide reasonable guidance as to where and how they might be worked on
2.) Notice what work is already being done on the problems, and who is doing it (avoid reinventing-the-wheel / not-invented-here syndrome; EA is especially guilty of this)
3.) Actively develop useful connections between the people and projects identified in 2.)
4.) Measure engagement (resource flows) and progress

I posted some parts of my current visions of 1) and 2) here and here. I think these, along with the Gaia Network design that we proposed recently (the Gaia Network is not "A Plan" in its entirety, but a significant portion of it), address @Vaniver's and @kave's points about realism and sociological/psychological viability.

The platform for generating the plan would need to be more-open-than-not, and should be fairly bleeding edge - incorporating prediction markets, consensus seeking (polis), eigenkarma etc

I think it's a mistake to import "democracy" at the vision level. A vision is essentially a very high-level plan, a creative engineering task. These are not decided by averaging opinions. "If you want to kill any idea in the world, get a committee working on it." Deutsch also wrote about this in The Beginning of Infinity, in the chapter about democracy.

We should aggregate desiderata and preferences (see "Preference Aggregation as Bayesian Inference"), but not decisions (plans, engineering designs, visions). These should be created by a coherent creative entity. The same idea is evident in the design of Open Agency Architecture.
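
A minimal sketch of that split, under invented assumptions: each person's report is treated as a noisy observation of a shared desideratum and pooled by a conjugate normal-normal update, while the plan that optimizes against the pooled estimate is still produced by a single coherent designer rather than a vote.

```python
import numpy as np

def aggregate_desideratum(reports, noise_sd=1.0, prior_mean=0.0, prior_sd=10.0):
    """Pool noisy individual reports of a shared latent value into a
    posterior (normal-normal conjugate update). This aggregates
    *preferences*; it never votes on the plan's internals."""
    reports = np.asarray(reports, dtype=float)
    prior_prec = 1.0 / prior_sd**2
    like_prec = len(reports) / noise_sd**2
    post_prec = prior_prec + like_prec
    post_mean = (prior_prec * prior_mean + like_prec * reports.mean()) / post_prec
    return post_mean, post_prec**-0.5

# Five people report how much they value some outcome (hypothetical units);
# a designer then searches for a plan that scores well against the estimate.
mean, sd = aggregate_desideratum([3.1, 2.7, 3.5, 2.9, 3.3])
print(f"pooled desideratum: {mean:.2f} +/- {sd:.2f}")
```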

we're lacking meaningful 3rd party measurement

If I understand correctly what you are gesturing at here, I think that some high-level agents in the Gaia Network should become a trusted gauge for the "planetary health metrics" we care about.

I think it's a mistake to import "democracy" at the vision level. A vision is essentially a very high-level plan, a creative engineering task. These are not decided by averaging opinions. "If you want to kill any idea in the world, get a committee working on it." Deutsch also wrote about this in The Beginning of Infinity, in the chapter about democracy.

We should aggregate desiderata and preferences (see "Preference Aggregation as Bayesian Inference"), but not decisions (plans, engineering designs, visions). These should be created by a coherent creative entity. The same idea is evident in the design of Open Agency Architecture.

Democracy is a mistake, for all of the obvious reasons.
As is the belief amongst engineers that every problem is an engineering problem :P

We have a whole bunch of tools going mostly unused and unnoticed that could, plausibly, enable a great deal more trust and collaboration than is currently possible. 

We have a whole bunch of people both thinking about and working on the polycrisis already. 

My proposal is that we're far more likely to achieve our ultimate goal - a future we'd like to live in - if we simply do our best to empower, rather than direct, others.

I expect attempts to direct, no matter how brilliant the plan or the mind(s) behind it, are likely to fail. For all the obvious reasons.

(caveat: yes AGI changes this, but it changes everything. My whole point is that we need to keep the ship from sinking long enough for AGI to take the wheel)

I now think that the ultimate "rising tide that lifts all boats" is availability of jobs. The labor market should be a seller's market. Everything else, including housing / education / healthcare, follows from that. (Sorry Georgists, it's not land but labor which is key.) But the elite is a net buyer of labor, so it prefers keeping labor cheap. When Marx called unemployed people a "reserve army of labor", whose plight scares everyone else into working for cheap, he was right. And from my own experience, having lived in a time and place where you could find a job in a day, I'm convinced that it's the right way for a society to be. It creates a general sense of well-being and rightness, in a way that welfare programs can't.

So the problem is twofold: 1) which policies would shift the labor market balance very strongly toward job seekers, 2) why the elite would implement such policies. If you have a democracy, you at least nominally have a solution to (2). But first you need to figure out (1).

ok, so not attempting to be comprehensive:

  • Energy abundance...

I came up with a similar kind of list here!

I appreciate both perspectives here, but I lean more towards kave's view: I'm not sure how much overall success hinges on whether there's an explicit Plan or overarching superstructure to coordinate around.

I think it's plausible that if a few dedicated people / small groups manage to pull off some big enough wins in unrelated areas (e.g. geothermal permitting or prediction market adoption), those successes could snowball in lots of different directions pretty quickly, without much meta-level direction.

I have a sense that lots of people are not optimistic about the future or about their efforts improving the future, and so don't give it a serious try.

I share this sense, but the good news is the incentives are mostly aligned here, I think? Whatever chances you assign to the future having any value whatsoever, things are usually nicer for you personally (and everyone around you) if you put some effort into trying to do something along the way.

Like, you shouldn't work yourself ragged, but my guess is for most people, working on something meaningful (or at least difficult) is actually more fun and rewarding compared to the alternative of doing nothing or hedonism or whatever, even if you ultimately fail. (And on the off-chance you succeed, things can be a lot more fun.)

Like, you shouldn't work yourself ragged, but my guess is for most people, working on something meaningful (or at least difficult) is actually more fun and rewarding compared to the alternative of doing nothing or hedonism or whatever, even if you ultimately fail. (And on the off-chance you succeed, things can be a lot more fun.)

I think one of the potential cruxes here is how many of the necessary things are fun or difficult in the right way. Like, sure, it sounds neat to work at a geothermal startup and solve problems, and that could plausibly be better than playing video games. But, does lobbying for permitting reform sound fun to you?

The secret of video games is that all of the difficulty is, in some deep sense, optional, and so can be selected to be interesting. ("What is drama, but life with the dull bits cut out?") The thing that enlivens the dull bits of life is the bigger meaning, and it seems to me like the superstructure is what makes the bigger meaning more real and less hallucinatory.

those successes could snowball in lots of different directions pretty quickly, without much meta-level direction.

This seems possible to me, but I think most of the big successes I've seen have involved some amount of meta-level direction. Like, I think Elon Musk's projects make more sense if your frame is "someone is deliberately trying to go to Mars and filling out the prerequisites for getting there". Lots of historical eras have had people providing this sort of meta-level direction.

But we might also just be remembering the meta-level direction that was 'surfing the wave' rather than pushing the ocean, and many grand plans have failed.