Saturday, August 30, 2014

Absence of evidence is evidence of absence

The W’s article about Evidence of Absence is confusing. They have an anecdote:

A simple example of evidence of absence: A baker never fails to put finished pies on her windowsill, so if there is no pie on the windowsill, then no finished pies exist. This can be formulated as modus tollens in propositional logic: P implies Q, but Q is false, therefore P is false.

But then they go on to say: “Per the traditional aphorism, ‘absence of evidence is not evidence of absence’, positive evidence of this kind is distinct from a lack of evidence or ignorance[1] of that which should have been found already, had it existed.[2]”

And at this point I go all ?????.

And then they continue with an Irving Copi quote: “In some circumstances it can be safely assumed that if a certain event had occurred, evidence of it could be discovered by qualified investigators. In such circumstances it is perfectly reasonable to take the absence of proof of its occurrence as positive proof of its non-occurrence.”

UM.

Alright so, trying to untangle this mess, they seem to want to make a qualitative distinction between “high-expectation evidence” and “low-expectation evidence.” Now, if you have read other stuff on this blog, like stuff about Bayes’ Theorem and the Bayesian definition of evidence and the many ways to look at probability and… Well, you must know by now that probability theory has no qualitative distinctions. Everything is quantitative. Any sharp divisions are strictly ad hoc and arbitrary, not natural clusters in conceptspace.
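To put that in symbols: in Bayesian terms, the strength of a piece of evidence is just a number - the log of a likelihood ratio - and “expected” versus “unexpected” evidence differ only in where they fall on that continuum. A minimal sketch (the decibel convention is Jaynes’; all the numbers are invented for illustration):

```python
import math

def evidence_db(p_data_given_h: float, p_data_given_not_h: float) -> float:
    """Strength of the evidence for H, in decibels: 10 * log10 of the likelihood ratio."""
    return 10 * math.log10(p_data_given_h / p_data_given_not_h)

# Seeing no pie when the baker always displays finished pies: strong evidence against "a pie exists".
print(evidence_db(0.01, 0.99))  # ~ -20 db
# Seeing no pie when she only sometimes displays them: weak evidence against it.
print(evidence_db(0.90, 0.99))  # ~ -0.4 db
```

Same update rule in both cases; the only difference is quantitative.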

Thankfully, there is another quote in that W article that’s closer to the mark:

If someone were to assert that there is an elephant on the quad, then the failure to observe an elephant there would be good reason to think that there is no elephant there. But if someone were to assert that there is a flea on the quad, then one’s failure to observe it there would not constitute good evidence that there is no flea on the quad. The salient difference between these two cases is that in the one, but not the other, we should expect to see some evidence of the entity if in fact it existed. Moreover, the justification conferred in such cases will be proportional to the ratio between the amount of evidence that we do have and the amount that we should expect to have if the entity existed. If the ratio is small, then little justification is conferred on the belief that the entity does not exist. [For example] in the absence of evidence rendering the existence of some entity probable, we are justified in believing that it does not exist, provided that (1) it is not something that might leave no traces and (2) we have comprehensively surveyed the area where the evidence would be found if the entity existed…[5]

—J.P. Moreland and W.L. Craig, Philosophical Foundations for a Christian Worldview

This looks much more like Bayesian reasoning than the rest of that article did. But let’s delve deeper and see how to prove a negative.
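As a toy version of Moreland and Craig’s example (again, the numbers are invented for illustration), here is the elephant/flea asymmetry worked through Bayes’ Theorem. The posterior after looking and seeing nothing depends entirely on how likely “seeing nothing” is under each hypothesis:

```python
def posterior(prior: float, p_nothing_if_present: float, p_nothing_if_absent: float = 1.0) -> float:
    """P(animal is there | we looked and saw nothing), by Bayes' Theorem."""
    joint_present = p_nothing_if_present * prior
    joint_absent = p_nothing_if_absent * (1 - prior)
    return joint_present / (joint_present + joint_absent)

prior = 0.5
# An elephant on the quad would almost certainly be seen; a flea almost certainly wouldn't.
print(posterior(prior, p_nothing_if_present=0.001))  # elephant: ~0.001 - strong evidence of absence
print(posterior(prior, p_nothing_if_present=0.95))   # flea: ~0.49 - almost no evidence either way
```

The update is exactly proportional to the ratio the quote talks about: the evidence we found compared to the evidence we should expect to find if the thing existed.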


queerbaitingforgodot:

pros to having a human flesh body:

- can whap my stomach and make drum sounds

- allows me to blend in with the rest of the human population the vast majority of which also inhabits human flesh bodies

- good resting place for small furry animals e.g. cats, dogs, buns

cons to having a human flesh body:

- [pulls out massive stack of binders] ah yes where shall we start, alphabetically or by order of least to most horrifying and unpleasant

Thursday, August 28, 2014

Beliefs and aliefs

What does it mean to believe?

This is not supposed to be some Deeply Wise prod to make someone write philosophical accounts of the mystical uniqueness of human consciousness or some such. It’s an actual question about the actual meaning of the actual word. Not that words have intrinsic meanings, of course, but what do we mean when we use this word?

And like many good words in the English language, it has a lot of meanings.

LessWrong has a lot of talk about this. Amongst the meanings of the verb “to believe” talked about in the linked Sequence are to anticipate an experience, to anticipate anticipating an experience, to cheer for a team, and to signal group membership. And of course, that’s not all. Some people in the atheist movement, for instance, use the word “belief” sometimes to refer to unjustified or faith-based models-of-the-world.

Now, there is a very interesting other word in philosophy and psychology: “alief.” To alieve something is to have a deep, instinctual, subconscious belief, and the word is used especially when this subconscious feeling is at odds with the conscious mind. The W uses a few examples to explain the concept, like the person who is standing on a transparent balcony and, in spite of believing themself safe, alieves the danger of falling.

This is a very interesting (and fairly obvious, after you grok the difference between your Systems 1 and 2) internal dichotomy. Ideally, we want our beliefs and aliefs to be identical, and whenever we change our beliefs we’d like to likewise change our aliefs. And I think much of what Yudkowsky means when he talks about making beliefs pay rent refers exactly to this concept, turning beliefs into aliefs. This would seem to be very useful for rationality in general - a large part of rationality techniques consists of a bunch of heuristics for turning conscious deliberations into intuitive judgements. And of course, it’s very hard to do.

Pascal’s Wager (the one that says that, on the off-chance that god does in fact exist and will punish you for not believing, you should believe in it) has lots of flaws in it, but I think this belief/alief gap is a particularly severe one. Sure, maybe the human brain is absolutely and completely insane in how it translates beliefs into aliefs and vice-versa, but it seems to me that, most of the time, you can’t just, by an effort of will, force it to turn a belief into an alief. And Pascal himself admitted this, and said that what the rational person should do is act and behave as if they believed until they actually did. And I’m sure that would work with some people, eventually, in the sense that they’d believe they believe: they’d profess and cheer and wear their belief.

But I’ll be damned if any amount of praying will actually convince me, on the brink of death, that I’m about to meet the Creator.

Or some such, depending on which religion you’re talking about.

And one would think maybe a just god would reward honesty more than barefaced self-manipulation.

Whichever the case, you can’t just choose to anticipate experiences: either you do, or you don’t, for good or for ill. And the brain isn’t completely stupid - if it didn’t move somewhat according to evidence, it would’ve been selected out of the gene pool a long time ago - but it’s not terribly efficient or smart about it, and its belief → alief translation procedure can be overridden by a lot of other modules, or twisted and hacked into unrecognisability. But it seems that, in general, a lot of rationality heuristics boil down to: okay, this is the normatively correct way to think - how do I internalise it?

I don’t know. It appears to take lots of practice or some such, and different kinds of belief require different kinds of alief-generating, and some people seem to be naturally better than others at this “taking ideas seriously” skill. But we all know that the whole of rationality isn’t limited to what Less Wrong has to offer, and as further research is done, well, I’d be eager to learn how to more efficiently internalise my beliefs.


Wednesday, August 27, 2014

raginrayguns:

scientiststhesis is always saying that you can’t do tests of fit because you can’t disconfirm a hypothesis without anything to compare it to

but I think sometimes you wanna say “my hypothesis fits the data so poorly that it’s probably massively disconfirmed relative to some hypothesis I have not yet thought of”

I mean, that’s why we think when we see something strange, right? Anything that seems out of place given our assumptions will confirm a different set of assumptions, if only we can think of it

and also, the reason i’m thinking about this is because my bayesian stats teacher recommended that you look at p(data|background info) to see how well the data fit, to see if you chose your priors/likelihood function wrong.

I think a good real-world example of this is the current state of quantum mechanics.

We have stuff like the vacuum catastrophe, where the naive QFT prediction for the vacuum energy density misses the observed value by some 120 orders of magnitude. We know that this is a really ridiculous fit, and it probably means that QM (or QFT at any rate) is ridiculously wrong. But we don’t know of any good alternatives.

So yeah, the data we have suggest that some hypothesis I have not yet thought of is massively confirmed; but for as long as I haven’t thought of it, I can’t just do the frequentist thing, declare that QM has been ruled out by experiment, reject it, and call it a day. For as long as we don’t have a suitable alternative, we’ll have to make do with it.

And like Jaynes mentioned, we can measure how much support is “conceivable” for some alternative to our hypothesis - which is just another way of putting the thing you said about p(data) - and if that number is very high, we can be confident that there is some other hypothesis in hypothesis-space that’s a better fit, even before we’ve found it.
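For reference, here is a minimal sketch of the measure I have in mind - Jaynes’ ψ statistic, as I remember it from Probability Theory: The Logic of Science - which bounds, in decibels, how much any alternative in the multinomial class could conceivably be favoured over our hypothesis by the data:

```python
import math

def psi_db(counts: list[int], probs: list[float]) -> float:
    """Jaynes' psi statistic (as I recall it): 10 * sum of n_k * log10(n_k / (n * p_k)).
    An upper bound, in decibels, on the support any alternative could get over H,
    attained by the "sure thing" hypothesis that predicts the observed frequencies exactly."""
    n = sum(counts)
    return 10 * sum(n_k * math.log10(n_k / (n * p_k))
                    for n_k, p_k in zip(counts, probs) if n_k > 0)

# A coin hypothesized fair, but observed to land heads 70 times out of 100:
print(psi_db([70, 30], [0.5, 0.5]))  # ~35.7 db of conceivable support for some alternative
```

If ψ is huge - as with the vacuum catastrophe - some hypothesis out there fits far better, whether or not anyone has thought of it yet; if it’s small, not even the best conceivable alternative would be much favoured.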

Friday, August 22, 2014

Learning Bayes [part 1]

I have a confession to make.

I don’t actually know Bayesian statistics.

Or, any statistics at all, really.

Shocking, I know. But hear me out.

What I know is… Bayesian theory. I can derive Bayes’ Theorem, and I also can probably derive most results from it. I’m a good mathematician. But I haven’t actually spent any time doing practical Bayesian statistics stuff. Very often a friend, like raginrayg…
