habryka

Running Lightcone Infrastructure, which runs LessWrong. You can reach me at habryka@lesswrong.com

Sequences

A Moderate Update to your Artificial Priors
A Moderate Update to your Organic Priors
Concepts in formal epistemology

Comments

I don't think this is true. Nonprofits can aim to amass large amounts of wealth; they just aren't allowed to distribute that wealth to shareholders. A good chunk of obviously very wealthy and powerful companies are nonprofits.

"LessWrong does not have a history of being a particularly thoughtful place for people to have policy discussions,"

This seems wrong. Scott Alexander and Robin Hanson are two of the most thoughtful thinkers on policy in the world and have a long history of engaging with LessWrong and writing on here. Zvi is IMO also one of the top AI policy analysts right now.

Definitely true that policy thinking here has a huge libertarian bent, but I think it's pretty straightforwardly wrong to claim that LW does not have a history of being a thoughtful place to have policy discussions (indeed, I am hard-pressed to find any place in public with a better history).

Welcome! Looking forward to having you around!

habryka

I have indeed been publicly advocating against the inside game strategy at labs for many years (going all the way back to 2018), predicting it would fail due to incentive issues and have large negative externalities due to conflicts of interest. I could dig up my comments, but I am confident that almost anyone I've interfaced with at the labs, or who I've talked to about any adjacent topic in leadership, would be happy to confirm.

habryka

Oh, weird. I always thought "ETA" meant "Edited To Add".

Sure, I'll try to post here if I know of a clear opportunity to donate to either. 

habryka

I would be happy to defend roughly the position above (I don't agree with all of it, but agree with roughly something like "the strategy of trying to play the inside game at labs was really bad, failed in predictable ways, and has deeply eroded trust in community leadership due to the adversarial dynamics present in such a strategy, and many people involved should be let go").

I do think most people who disagree with me here are under substantial confidentiality obligations and de facto non-disparagement obligations (such as really not wanting to imply anything bad about Anthropic, or wanting to maintain a cultivated image for policy purposes), so it will be hard to find a good public debate partner, but it isn't impossible.

habryka

"The document doesn't specify whether 'deployment' includes internal deployment. (This is important because maybe lots of risk comes from the lab using AIs internally to do AI development.)"

This seems like such an obvious and crucial distinction that I was very surprised when the framework didn't disambiguate between the two.

habryka

Yeah, at the time I didn't know how shady some of the contracts here were. I do think funding a legal defense is a marginally better use of funds (though my guess is funding both is worth it).

Answer by habryka

We don't have a live count, but we have a one-time analysis from late 2023: https://www.lesswrong.com/posts/WYqixmisE6dQjHPT8/2022-and-all-time-posts-by-pingback-count 

My guess is not much has changed since then, so I think that's basically the answer.
