Tuesday, August 19, 2014
And doing the thing where you decide that clearly they’re not going to be into you so you’re not going to ask them out is ineffective: don’t reject yourself, other people are perfectly capable of rejecting you themselves if they so choose. ozymandias271 being super smart and saying the right things.
Sunday, August 17, 2014
Clark Glymour has recently raised an interesting problem for this theory. It is not uncommon for scientists to find support for a theory in evidence known long before the theory was even introduced, so that, intuitively, there are cases of already known, or “old,” evidence confirming “new” theories or hypotheses. Glymour cites the examples of the support for Copernicus’ theory derived from previous astronomical observations, the support for Newton’s theory of gravitation derived from the already established second and third laws of Kepler, and the support for Einstein’s gravitational field equations derived from the already known anomalous advance of the perihelion of Mercury. But if evidence E is already known before theory or hypothesis T is invented, then Pr(E) already equals 1 at that later time, so that, at that later time, Pr(T|E) must equal Pr(T); this follows from the usual axioms of probability and the definition of Pr(T|E). Thus, Bayesian confirmation theory seems to imply that already known evidence cannot support newly invented theories, contrary to what seems true in the cases Glymour cites.

- Ellery Eells, Bayesian Problems of Old Evidence

Oh my god old Bayesians are so precious.

I don’t know whether I should headdesk, shake my head, sigh in frustration, or laugh at this kind of thing. Dear lord.
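
For anyone who wants the step Eells summarises spelled out, it really is just Bayes’ theorem plus the definition of conditional probability:

$$\Pr(T \mid E) = \frac{\Pr(E \mid T)\,\Pr(T)}{\Pr(E)}$$

If $E$ is already known, then $\Pr(E) = 1$, which forces $\Pr(E \mid T) = 1$ as well (when $E$ has probability one, $\Pr(E \wedge T) = \Pr(T)$), so the whole thing collapses to $\Pr(T \mid E) = \Pr(T)$, and the “old evidence” apparently confirms nothing.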

Saturday, August 16, 2014

## Why meta-desires aren’t inherently better

Sometimes, people say something along the lines of “not all desires are good. You might desire something about your desires, and that is what we should actually follow, so that we don’t just give addicts more heroin and off unstable teenagers.” But I don’t think that meta-desires are necessarily better. Let’s create a person, and call her T.

T is a submissive, and wants to have sex. But, T was raised in a cult, and desires that she not desire to have sex. However, T has escaped this cult, and desires that she not desire that she not desire to have sex, because she is trying to reject the trappings of the cult and accept her true self. On the other hand, T desires that she not desire that she not desire that she not desire to have sex, because she doesn’t like mental dissonance and wishes that she wasn’t trying to change her beliefs about something this hard to shake off. Lastly, T desires that she not desire that she not desire that she not desire that she not desire to have sex, because she also really appreciates mental games and trickery and philosophy questions, and thinks that her life would be less interesting if it were less meta.

Now, is it the case that the last desire, because it is the most meta, should be the one that we pay attention to? Do we only pay attention to the meta-desire, even though the meta-meta desire seems pretty important and relevant in this case?

I reject the claim that meta-desires should be privileged over normal desires. Thoughts?

Well, my intuition is that eventually it bottoms out. I suppose maybe in principle you could have an agent that goes omega meta-steps in their utility function by contradicting the next lower meta-step, but that doesn’t happen in real life.

And even if it did, we have a lot of ordinal numbers to go beyond omega.

What I’m trying to say is, unless the agent’s meta-desires are completely self-contradicting forever through all possible ordinal meta-levels, eventually they will have an infinity of agreeing meta-utilities. That’s where the weight comes from in (what I see as) the meta-desires argument: I not only desire not to desire eating sweets, I also desire to desire not to desire eating sweets, and I desire to desire to desire not to desire eating sweets, and in general I (desire to)* desire not to desire eating sweets.

So um… in most agents, the inconsistencies are only finitely deep, I’d think, which means that the finitely many inconsistent levels have zero weight next to the infinitely many agreeing ones when summing up all desires, so the first meta-level above the highest inconsistency feels like the one that should be heard.

In the case of an infinitely inconsistent agent? I dunno, that agent can probably just toss a coin or something.
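
The heuristic I’m gesturing at (“find the highest inconsistency, listen to the level just above it”) can be sketched as a toy model. Everything here is made up for illustration: desires are flattened into a chain of endorse/oppose signs, where `chain[0]` is the object-level desire and `chain[k]` says whether meta-level k endorses or opposes the level below it.

```python
def verdict(chain):
    """Toy model of the 'highest inconsistency' heuristic.

    chain[0] is the object-level desire ('+' = wants X, '-' = doesn't).
    chain[k] for k >= 1 is '+' if meta-level k endorses level k-1,
    '-' if it opposes it.  Every level above the end of the chain is
    assumed to endorse the one below it, i.e. the inconsistencies are
    only finitely deep.
    """
    # the highest level that opposes the one below it
    last_flip = max(
        (k for k, s in enumerate(chain) if k > 0 and s == '-'),
        default=0,
    )
    # propagate the stance of the level just above last_flip down to
    # the object level: each '-' on the way flips the verdict
    sign = 1 if chain[0] == '+' else -1
    for k in range(1, last_flip + 1):
        if chain[k] == '-':
            sign = -sign
    return '+' if sign > 0 else '-'

# T's chain: wants sex, cult says no, rejects cult, dislikes dissonance,
# likes meta-games -- each level opposes the one below it
print(verdict(['+', '-', '-', '-', '-']))  # → '+' (the four flips cancel out)
```

On this toy model, T’s even number of flips means the infinitely many agreeing levels at the top side with the object-level desire, whereas the simple addict case (`['+', '-']`) comes out against it.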

(I feel like this should be looked into wrt FAI.)

It seems to me that not all desires have the same weight, and a finite number of distinct desires could very easily have greater total weight than an infinite number of agreeing meta-desires.

Oh yes of course that is a thing I should have taken into account indeed.

## Bayesian falsification and the strength of a hypothesis


At the end of my post about other ways of looking at probability, I showed you a graph of evidence against probability. This is the relevant graph:

Looking at this graph was one of the most useful things I’ve ever done as a Bayesian. It shows, as I explained, exactly where most of the difficulty is in proving things, in coming up with hypotheses, etc. Another interesting aspect is the symmetry,…
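
In case the graph doesn’t come through in this excerpt: as far as I can tell it is the standard log-odds curve, probability plotted against evidence measured in bits of log-odds. A minimal sketch of that relationship (my naming, and assuming that reading of the graph), including the symmetry:

```python
def probability_from_evidence(e_bits):
    """Probability as a function of evidence, where evidence is the
    log-odds in bits: e = log2(P / (1 - P)).  Inverting gives the
    logistic curve P = 2**e / (1 + 2**e)."""
    return 2.0 ** e_bits / (1.0 + 2.0 ** e_bits)

# zero evidence is 50/50, and the curve is symmetric: e bits for a
# hypothesis mirror e bits against it
print(probability_from_evidence(0))   # 0.5
print(probability_from_evidence(3))   # 8/9 ≈ 0.888...
print(probability_from_evidence(3) + probability_from_evidence(-3))  # ≈ 1.0
```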


crankofkipling said: (I feel like this should be looked into wrt FAI.) -- CEV? Coherent Extrapolated Volition. Also, what about Nietzsche Will-to-Power type stuff, rather than recycling mathematics?

No, not really CEV, because CEV is sort of only applicable to “humans.” Or rather, it’s only a relevant concept when applied to agents that don’t have access to their own utility functions, or do not have well-defined utility functions; in the case of the AI itself, its CEV is just straightforwardly maximising its own expected utility.

What I mean is that observing the world and interacting with it can (and actually should) introduce inconsistencies in the AI’s own utility function, and while the heuristic rule “find highest meta-inconsistency, apply highest consistent criterion down” and the injunction “most relevant agents only have finitely deep inconsistencies” sound intuitively plausible and attractive, I think mathematical proofs ought to be involved somewhere.


Anonymous said: Why do you fear death?

<essay>

The real question is not why I fear death, but why you do not (and, by extension, why you feel the need to ask this question).

Let me clarify: I do not fear sudden, accidental death much. It’s a necessary risk to live a full life.

I fear all the symptoms associated with what one would call a natural death: old age, disease, chronic pain, loss of mobility, of mental function, of the senses, and so on.

In my view, growing old is a disease. There is nothing in modern cellular biology pointing to old age being inherent to our anatomy: emortality (life without aging) is simply a thing which evolution didn’t see fit to grant us.

And I feel cheated. As it is now (without any major scientific breakthroughs in radical life extension), I will probably grow to be 120 years old, 30 or so of which I will spend in some amount of agony.

I saw my grandmother die of lung cancer. It was not dignified, it was not peaceful. It was ugly and horrifying.

Death has loomed over us through all of human history. Losing your grandmother is as terrible now as it was fifteen thousand years ago.

Human nature strives to make sense of this chaotic world, and so we ask “why death?” We ask our wisest, oldest. The shamans and the sages and the priests and the witch-doctors. And they say “because magic.”

Not in those words exactly, but something along the lines of “divine will” or “nature’s great scheme” or “the allmother calls them home.”

Something to make the pain bearable. To provide some form of closure on why grandma had to die. They might even invent funeral rites, to ensure that the allmother really does get her daughter home.

And so humanity grows complacent with old age and death. The greatest calamity to befall us becomes a “fact of life.” (Notice that it is neither a fact, as the biology noted above suggests, nor is death a part of life; the phrase is darkly humorous.)

We live in a culture where striving to stave off death is a sin. I have seen so many villains strive for immortality (or even just emortality, eternal youth), and it is always painted as a bad thing.

And even the process of obtaining emortality is painted in a bad light; whatever happened to the fountain of youth? Why all the human sacrifice and evil scientific progress? (Another pet peeve of mine: progress = evil. Bull. Shit. Watching the first ten minutes of Captain America: The Winter Soldier shoots down any argument in that direction.)

Immortality is demonized in popular culture: attaining it costs immoral sacrifice, it is only ever for one person, whoever seeks it is defying nature… (The naturalistic fallacy is so stupid. For one, homophobes use it to argue against homosexuality, for two, modern medicine is anything but natural, and we still see that as a good thing.)

I hear arguments that immortality is selfish. That my loved ones will die before me and I will end up depressed and wishing to die (another trope which is such bullshit: the immortal dude who wants to die).

I say: why would I want immortality for me. I want immortality for everyone!

Then people cry “overpopulation!” “you’re selfish, what about future generations?!”

Overpopulation is easily solved when our best scientists stop growing old. Imagine what a 400-year-old scientist can do compared to a 40-year-old one. We could pepper the sky with solar-power satellites and grow all our food in super-ultra-mega-efficient science-fiction farms.

And as for the “future generations,” that is the exact same argument which pro-lifers use. I am pro-life: my life. I don’t really give a crap about my hypothetical great-great-great-grandchildren, unless I am there to witness them taking their first steps.

We live in a deathist society. A culture that thinks the extinction of consciousness is a good thing.

I am the guy who thinks that the five-year-old kid who really just wants his grandma back is on to something.

I fear death. I hate death. I want people to stop dying. Do you?

</essay>

## the biggest difference between Yudkowsky and Jaynes

• Jaynes: It seems to us inelegant to base the principles of logic on such a vulgar thing as expectation of profit.
• Yudkowsky: If it ever turns out that Bayes fails - receives systematically lower rewards on some problem, relative to a superior alternative, in virtue of its mere decisions - then Bayes has to go out the window. "Rationality" is just the label I use for my beliefs about the winning Way - the Way of the agent smiling from on top of the giant heap of utility.

## How and when to respect authority


When I discussed the usefulness (or lack thereof) of Aumann’s Agreement Theorem, I mentioned that the next best thing to sharing the actual knowledge you gathered (or mind melding) was sharing likelihood ratios.

But sometimes… you can’t. Well, most of the time, really. Or all the time. Humans do not actually have little magical plausibility fluids in their heads that flow between hypotheses and…
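
A minimal sketch of what “sharing likelihood ratios” buys you, assuming independent pieces of evidence (the function names here are mine, not from the original post): in odds form, each piece of evidence, or each informant’s summary of their evidence, just multiplies your odds.

```python
def update_odds(prior_odds, likelihood_ratios):
    """Bayes in odds form: posterior odds = prior odds times the
    product of the likelihood ratios P(E_i | H) / P(E_i | ~H) for
    each independent piece of evidence.  This is why a likelihood
    ratio is a sufficient summary: you don't need the raw evidence,
    just the factor it contributes."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_probability(odds):
    return odds / (1.0 + odds)

# even prior odds, then one informant reports 4:1 evidence for H
# and another reports 2:1 against
posterior = update_odds(1.0, [4.0, 0.5])
print(posterior)                       # 2.0
print(odds_to_probability(posterior))  # 2/3 ≈ 0.666...
```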


Friday, August 15, 2014

## How to prove stuff


A while ago, I wrote up a post that explained what a mathematical proof is. In short, a mathematical proof is a bunch of sentences that follow from other sentences. And since mathematicians have been trying to prove stuff for hundreds of years, well, we’re bound to have gotten fairly good at it. And to develop techniques.

So, then. Given any theory (that is, a set of logical sentences) $\mathcal T$, when a sentence S…
