DPiepgrass

Worried that typical commenters at LW care way less than I expected about good epistemic practice. Hoping I'm wrong.

Software developer and EA with interests including programming language design, international auxiliary languages, rationalism, climate science and the psychology of its denial.

Looking for someone similar to myself to be my new best friend:

❖ Close friendship, preferably sharing a house
❖ Rationalist-appreciating epistemology; a love of accuracy and precision to the extent it is useful or important (but not excessively pedantic)
❖ Geeky, curious, and interested in improving the world
❖ Liberal/humanist values, such as a dislike of extreme inequality based on minor or irrelevant differences in starting points, and a like for ideas that may lead to solving such inequality. (OTOH, minor inequalities are certainly necessary and acceptable, and a high floor is clearly better than a low ceiling: an "equality" in which all are impoverished would be very bad)
❖ A love of freedom
❖ Utilitarian/consequentialist-leaning; preferably negative utilitarian
❖ High openness to experience: tolerance of ambiguity, low dogmatism, unconventionality, and again, intellectual curiosity
❖ I'm a nudist and would like someone who can participate at least sometimes
❖ Agnostic, atheist, or at least feeling doubts


Comments

> This is practice sentence to you how my brain. I wonder how noticeable differences are to to other people.

That first sentence looks very bad to me; the second is grammatically correct but feels like it's missing an article. If that's not harder for you to understand than for other people, I still think there's a good chance that it could be harder for other dyslexic people to understand (compared to correct text), because I would not expect that the glitches in two different brains with dyslexia are the same in every detail (that said, I don't really understand what dyslexia means, though my dad and brother say they have dyslexia.)

> the same word ... foruthwly and fortunly and forrtunaly

You appear to be identifying the word by its beginning and end only, as if it were visually memorized. Were you trained in phonics/phonetics as a child? (I'm confused why anyone ever thought that whole-word memorization was good, but it is popular in some places.) This particular word does have a stranger-than-usual relationship between spelling and pronunciation, though.

> I can do that too. Thankfully. Unless I don’t recognize the sounds.

My buffer seems shorter on unfamiliar sounds. Maybe one second.

> reading out loud got a little obstructive. I started subvocalizing, and that was definitely less fun.

I always read with an "auditory" voice in my head, and I often move my tongue and voicebox to match the voice (especially if I give color to that voice, e.g. if I make it sound like Donald Trump). I can't speed-read, but if I read fast enough, the "audio" tends to skip and garble some words, though I still mostly detect the meanings of the sentences. My ability to read fast was acquired slowly through much practice, though. I presume that the "subvocalization" I do is an output from my brain rather than necessary for communication within it. However, some people have noticed that sometimes, after I say something, or when I'm processing what someone has told me, I visibly subvocalize the same phrase again. It's unclear whether this is just a weird habit, or whether it helps me process the meaning of the phrase. (The thing where I repeat my own words to myself seems redundant, as I can detect flaws in my own speech the first time without repetition.)

Doublecrux sounds like a better thing than debate, but why should such an event be live? (Apart from "it saves money/time not to postprocess".)

Yeah, the lyrics didn't sit well with me either so I counterlyricized it.

You guys were using an AI that generated the music fully formed (as PCM), right?

It ticks me off that this is how it works. It's "good", but you see the problems:

  1. Poor audio quality [edit: the YouTube version is poor quality, but the "Suno" versions are not. Why??]
  2. You can't edit the music afterward or re-record the voices
  3. You had to generate 3,000-4,000 tracks to get 15 good ones

Is there some way to convince AI people to make the following?

  1. An AI (or two) whose input is a spectral decomposition of PCM music (I'm guessing exponentially-spaced wavelets will work better than FFT) and whose job is to separate the music into instrumental tracks + voice track(s) that sum to the original waveform (and to detect which tracks are voice tracks). A rough sketch of this log-frequency input representation appears after this list. Train it using (i) tracker and MIDI archives, which are inherently pre-separated into different instruments, (ii) AI-generated tracker music with noisy instrument timing (the instruments should be high-quality and varied, but the music itself probably doesn't have to be good for this to work, so a quick & dirty AI could be used to make training data), and (iii) whatever real-world decompositions can be found.
  2. An AI that takes these instrumental tracks and decomposes each one into (i) a "music sheet" (a series of notes with stylistic information) and (ii) a set of instrument samples, where each sample is a C note (middle C ± one or two octaves, drums exempt), with the goal of minimizing the set of instrument samples needed to represent an instrument while representing the input faithfully (if a large number of samples are needed, it's probably a voice track or a difficult instrument such as guitar, but some voice tracks are repetitive and can still be deduplicated this way, and in any case the decomposition into notes is important). [Alternate version of this AI: use a fixed set of instrument samples, so the AI's job is not to decompose but to select samples, making it more like speech-to-text than a decomposition tool. This approach can't handle voice tracks, though.]
  3. Use the MIDI and tracker libraries, together with the output of the first two AIs inferencing on a music library, to train a third AI whose job is to generate tracker music plus a voice track (I haven't thought through how to do the part where lyrics drive the generation process). Train it on the world's top 30,000 songs or whatever.
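To make item 1 a bit more concrete, here's a minimal sketch of the kind of "exponentially-spaced" input representation I have in mind, using a constant-Q transform (from the librosa library) rather than a linear-frequency FFT spectrogram. The filename and parameter choices are just illustrative assumptions, not a real pipeline:

```python
# Sketch: a log-frequency (constant-Q) spectral decomposition as the input
# representation for a source-separation model. Assumes librosa and numpy
# are installed; "song.wav" and the parameters below are only illustrative.
import numpy as np
import librosa

y, sr = librosa.load("song.wav", sr=44100, mono=True)

# Constant-Q transform: frequency bins are spaced exponentially (per octave),
# unlike an FFT's linear bins, which matches musical pitch structure better.
C = librosa.cqt(y, sr=sr, hop_length=512,
                fmin=librosa.note_to_hz("C1"),
                n_bins=7 * 36, bins_per_octave=36)

# Magnitude + phase; a separation model would take |C| (and possibly phase)
# and predict per-instrument masks whose reconstructions sum to the mixture.
magnitude, phase = np.abs(C), np.angle(C)
print(magnitude.shape)  # (n_bins, n_frames)
```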

And voila, the generated music is now editable "in post" and has better sound quality. I also conjecture that if high-quality training data can be found, this AI can either (i) generate better music, on average, than whatever was used for "I Have Been a Good Bing" or (ii) require less compute, because the task it does is simpler. Not only that, while the third AI was the goal, the first pair of AIs are highly useful in their own right and would be much appreciated by artists.
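As a toy illustration of the "editable in post" claim (and of item 2's idea that a single C-note sample can cover an instrument over a couple of octaves), here's a hedged sketch of rendering a tiny note list by pitch-shifting one sample; the sample file and note list are made up:

```python
# Sketch: render a "music sheet" (note events) from a single recorded C note
# by pitch-shifting it. Assumes librosa and soundfile are installed;
# "piano_C4.wav" and the note list are hypothetical.
import numpy as np
import librosa
import soundfile as sf

sample, sr = librosa.load("piano_C4.wav", sr=None, mono=True)

# (start_time_sec, semitones relative to C4): C, E, G, then C an octave up.
notes = [(0.0, 0), (0.5, 4), (1.0, 7), (1.5, 12)]

out = np.zeros(int(sr * 3.0))
for start, semitones in notes:
    shifted = librosa.effects.pitch_shift(sample, sr=sr, n_steps=semitones)
    i = int(start * sr)
    j = min(i + len(shifted), len(out))
    out[i:j] += shifted[: j - i]

# Normalize and write; editing a note now means editing the list above,
# not re-generating the whole waveform.
sf.write("rendered.wav", out / max(1e-9, np.abs(out).max()), sr)
```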

Even if the stars should die in heaven
Our sins can never be undone
No single death will be forgiven
When fades at last the last lit sun.

Then in the cold and silent black
As light and matter end
We’ll have ourselves a last look back.

And toast an absent friend.

[verse 2]

I heard that song which left me bitter
For all the sins that had been done
But I had thought the wrong way 'bout it
[cuz] I won't be there to see that sun

I noticed then I could let go
Before my own life ends
It could have been much worse you know

Relaxing with my friends
Hard work I leave with them
Someday they'll get it done

A million years too young
For now we'll have some fun

I guess you could try it and see if you reach wrong conclusions, but that only works if your brain isn't so wired up with shortcuts that you cannot (or are much less likely to) discover your mistakes.

I've been puzzling over why EY's efforts to show the dangers of AGI (most notably this) have been unconvincing enough that other experts (e.g. Paul Christiano) and, in my experience, typical rationalists have not adopted p(doom) > 90% like EY, or even > 50%. I was unconvinced because he simply didn't present a chain of reasoning that shows what he's trying to show. Rational thinking is a lot like math: a single mistake in a chain of reasoning can invalidate the whole conclusion. Failure to generate a complete chain of reasoning is a sign that the thinking isn't rational. And failure to communicate a complete chain of reasoning, as in this case, should fail to convince people (unless the audience can mentally reconstruct the missing information).

I read all six "tomes" of Rationality: A-Z and I don't recall EY ever writing about the importance of having a solid and complete chain (or graph) of reasoning―but here is a post about the value of shortcuts (if you can pardon the strawman; I'm using the word "shortcut" as a shortcut). There's no denying that shortcuts can have value, but only if they lead to winning, which for most of us including EY includes having true beliefs, which in turn requires an ability to generate solid and complete chains of reasoning. If you used shortcuts to generate such a chain, that's great insofar as it generates correct results, but mightn't shortcuts make your reasoning less reliable than it first appears? When it comes to AI safety, EY's most important cause, I've seen a shortcut-laden approach (in his communication, if not his reasoning) and wasn't convinced, so I'd like to see him take it slower and give us a more rigorous and clear case for AI doom ― one that either clearly justifies a very high near-term catastrophic risk assessment, or admits that it doesn't.

I think EY must have a mental system that is far above average, but from afar it seems not good enough.

On the other hand, I've learned a lot about rationality from EY that I didn't already know, and perhaps many of the ideas he came up with are a product of this exact process of identifying necessary cognitive work and casting off the rest. Notable if true! But in my field I, too, have had various unique ideas that no one else ever presented, and I came at it from a different angle: I'm always looking for the (subjectively) "best" solutions to problems. Early in my career, getting the work done was never enough; I wanted my code to be elegant and beautiful and fast and generalized too. It seems like I'd never accept the first version; I'd always find flaws and change it immediately after, maybe more than once. My approach (which I guess earns the boring label 'perfectionism') wasn't fast, but I think it built up a lot of good intuitions that many other developers just don't have. Likewise in life in general, I developed nuanced thinking and rationalist-like intuitions without ever hearing about rationalism. So I am fairly satisfied with plain-old perfectionism―reaching conclusions faster would've been great, but I'm uncertain whether I could've or would've found a process for doing that such that my conclusions would've been as correct. (I also recommend always thinking a lot, but maybe that goes without saying around here.)

I'm reminded of a great video about two ways of thinking about math problems: a slick way that finds a generalized solution, and a more meandering, exploratory way that looks at many specific cases and examples. The slick solutions tend to get way more attention, but slower processes are way more common when no one is looking, and famous early mathematicians didn't shy away from long and even tedious work. I feel like EY's saying "make it slick and fast!" and, to be fair, I probably should've worked harder at developing Slick Thinking, but my slow non-slick methods also worked pretty well.

Speaking for myself: I don't prefer to be alone or tend to hide information about myself. Quite the opposite; I like to have company but rare is the company that likes to have me, and I like sharing, though it's rare that someone cares to hear it. It's true that I "try to be independent" and "form my own opinions", but I think that part of your paragraph is easy to overlook because it doesn't sound like what the word "avoidant" ought to mean. (And my philosophy is that people with good epistemics tend to reach similar conclusions, so our independence doesn't necessarily imply a tendency to end up alone in our own school of thought, let alone prefer it that way.)

Now if I were in Scott's position? I find social media enemies terrifying and would want to hide as much as possible from them. And Scott's desire for his name not to be broadcast? He's explained it as related to his profession, and I don't see why I should disbelieve that. Yet Scott also schedules regular meetups where strangers can come, which doesn't sound "avoidant". More broadly, labeling famous-ish people who talk frequently online as "avoidant" doesn't sound right.

Also, "schizoid" as in schizophrenia? By reputation, rationalists are more likely to be autistic, which tends not to co-occur with schizophrenia, and the ACX survey is correlated with this reputation. (Could say more but I think this suffices.)

Scott tried hard to avoid getting into the race/IQ controversy. Like, in the private email LGS shared, Scott states "I will appreciate if you NEVER TELL ANYONE I SAID THIS". Isn't this the opposite of "it's self-evidently good for the truth to be known"? And yes there's a SSC/ACX community too (not "rationalist" necessarily), but Metz wasn't talking about the community there.

My opinion as a rationalist is that I'd like the whole race/IQ issue to f**k off so we don't have to talk or think about it, but certain people like to misrepresent Scott and make unreasonable claims, which ticks me off, so I counterargue, just as I pushed a video by Shaun once when I thought somebody on ACX sounded a bit racist to me on the race/IQ topic.

Scott and myself are consequentialists. As such, it's not self-evidently good for the truth to be known. I think some taboos should be broached, but not "self-evidently" and often not by us. But if people start making BS arguments against people I like? I will call BS on that, even if doing so involves some discussion of the taboo topic. But I didn't wake up this morning having any interest in doing that.

Huh? Who defines racism as cognitive bias? I've never seen that before, so expecting Scott in particular to define it as such seems like special pleading.

What would your definition be, and why would it be better?

Scott endorses this definition:

Definition By Motives: An irrational feeling of hatred toward some race that causes someone to want to hurt or discriminate against them.

Setting aside that it says "irrational feeling" instead of "cognitive bias", how does this "tr[y] to define racism out of existence"?
