Let's call what follows Formulation A of Newcomb's dilemma:

One day, you get a call from a certain Robert Nozick informing you that you have been selected to earn some free money. He asks you to meet him somewhere. You go there, and he is waiting with two boxes. Welcome to Newcomb's dilemma, he says. The rules are simple: Here, in the first box, he says while opening it, there are 1000\$. So now you have a box and 1000\$ on a table. In the other box there can be either nothing or 1000000\$. Your goal, he informs you, is to get the maximum amount of money possible. But what is actually in the box, Mr. Nozick? Well, that depends on what you do. You see, I happen to have a very accurate decision-predictor we'll call Alpha. I ran Alpha before calling you, and set up the box back then. If Alpha predicted you'd take both boxes, it will have put nothing there. But if it predicted that you'd take only the closed box, it will have placed 1000000\$ in it. Nothing you do now will alter what is in the boxes. Hmmm... Well, there's more. A friend of mine, Newcomb, happens to be able to see what is inside the other box. He can't tell you, though. But just imagine what he would tell you if he could. Money won't magically appear there, so whatever he sees, he would advise you to take the 1000\$ as well: taking it can't change what is already in the closed box. So... I should take the money and open the box, right? Even if the machine is a perfect predictor, nothing I do now will causally affect the contents of the boxes. Therefore, I go for two-boxing.

Consider now Formulation B of Newcomb's Dilemma:

In this version, Alpha makes its prediction only after you have made up your mind, but before you actually act, so what gets rewarded is your sincere intention. Apparently, the reasoning here is similar to the situation in Kavka's toxin puzzle.

Consider now Formulation C of Newcomb's Dilemma:

One day, you get a call from a certain Robert Nozick informing you that you have been selected to earn some free money. He tells you about Newcomb's dilemma and about Alpha. He then says that people who two-box end up with 1000\$ and people who one-box end up with 1000000\$. Knowing this, you are presented with the dilemma. As before, Nozick placed or didn't place the money, as Alpha told him to, before calling you. What do you do? Here, you one-box, given that, as a matter of fact, one-boxers are the ones who maximise their money.

In case C, I think it is totally clear that you should one-box. After all, you have evidence that one-boxers win. This doesn't seem like a dilemma at all, actually. Cases A and B play with the point in time at which the decision is taken, and with the idea of commitment. They both bring out the opposing intuitions that arise in the dilemma.

Now, to study the problem itself, there are a couple of points that should be clarified, like:

• Are you really free to choose?
• How accurate is the predictor?

Maybe you have total free will, yet the predictor knows what you will choose in the future. Maybe you don't have free will, and the predictor still knows. Either way, free will is irrelevant, I think.

What matters is the predictor's ability to predict. 100% accuracy means that you can't fool the predictor at all. But even then, once the boxes are in place, it won't matter what you do, so apparently it doesn't matter what the predictor does. This is less obvious in Formulation B: there the predictor chooses after you've made up your mind, so you know that if you are insincere, the predictor will know too, and so you know you shouldn't even plan to grab the money and open the box later.
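As a side note, the role of accuracy can be made concrete with a small expected-value sketch. The break-even calculation below is my own framing, in the spirit of evidential decision theory; the two-boxer's causal argument above rejects conditioning on the prediction in the first place.

```python
# Expected payoffs in Newcomb's dilemma as a function of Alpha's accuracy p,
# computed by conditioning on what Alpha is likely to have predicted.

def ev_one_box(p):
    # With probability p, Alpha correctly predicted one-boxing and
    # placed 1000000 in the closed box.
    return p * 1_000_000

def ev_two_box(p):
    # You always get the visible 1000; with probability (1 - p) Alpha
    # wrongly predicted one-boxing, so the million is there as well.
    return 1_000 + (1 - p) * 1_000_000

# One-boxing wins whenever p * 1000000 > 1000 + (1 - p) * 1000000,
# i.e. whenever p > 1001000 / 2000000 = 0.5005.
threshold = 1_001_000 / 2_000_000
```

With a perfect predictor (p = 1), one-boxing yields 1000000\$ against 1000\$, and the break-even accuracy is only 0.5005; that is part of why the statistical evidence in case C feels so compelling.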

This leads us to a question: is it even possible to sincerely want to one-box beforehand, and then have Alpha's decision change your mind afterwards, without that implying you were insincere in the first place?

Another question: is this discussion even relevant? There is one case in which it doesn't seem to be: case C, where you have inductive evidence that you ought to one-box.

Could we have similar evidence for two-boxing?

Formulation D1 of Newcomb's Dilemma is as C, but knowing that everyone who one-boxes gets 1000000\$ and everyone who two-boxes gets 1001000\$.

Formulation D2 of Newcomb's Dilemma is as C, but knowing that one-boxers get nothing and two-boxers get 1000\$.

Formulation D3 of Newcomb's Dilemma is as C, but knowing that one-boxers get nothing and two-boxers get 1001000\$.

The problem with D3 is that Alpha would be predicting in exactly the opposite way to the one it's meant to: if you are going to one-box, it gives you nothing, and if you are going to two-box, it gives you everything. So here there is not even an ex-ante dilemma: two-boxing always wins.

In D2, the predictor always gets your intention to one-box wrong (it thinks you will two-box, and therefore puts no money in), but correctly predicts when you are going to two-box (and leaves the box empty). So here the predictor is following the algorithm of always leaving the box empty, as if it were not predicting at all. So this is not a plausible case.

In D1, the predictor again behaves as if it were not predicting: it always leaves the money in the box.

In the D cases, then, the predictor apparently wouldn't fit its own description. It wouldn't be predicting, so there can be no evidence in favour of two-boxing of the kind there is for one-boxing. So you should one-box.
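The bookkeeping behind the D cases can be made explicit. Assuming each payoff decomposes as 1000\$ if you took the open box plus 1000000\$ if the closed box was full (my own reconstruction, just to check the claims above), Alpha's behaviour can be read off the observed payoffs:

```python
# Infer whether Alpha filled the closed box, given a player's choice and payoff.
# Payoff = 1000 (if the open box was taken) + 1000000 (if the closed box was full).

def box_was_full(action, payoff):
    visible = 1_000 if action == "two-box" else 0
    return payoff - visible == 1_000_000

# D1: one-boxers get 1000000, two-boxers get 1001000 -> the box is always full.
# D2: one-boxers get 0, two-boxers get 1000 -> the box is always empty.
# D3: one-boxers get 0, two-boxers get 1001000 -> Alpha rewards exactly the
#     choice it was supposed to punish (anti-prediction).
```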

We could then consider a family of mixed cases, E, where the predictor is right only some of the time. There we could infer probabilistically what to do.
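One way to picture such a mixed case E (a toy sketch with made-up numbers, not part of the original puzzle): record the payoffs of past players and pick the action with the higher empirical mean.

```python
# Toy evidence for a "mixed" case E: past players' choices and payoffs.
# The data below is invented purely for illustration.
past_games = [
    ("one-box", 1_000_000), ("one-box", 1_000_000), ("one-box", 0),
    ("two-box", 1_000), ("two-box", 1_001_000), ("two-box", 1_000),
]

def empirical_mean(action):
    # Average payoff observed among past players who took this action.
    payoffs = [p for a, p in past_games if a == action]
    return sum(payoffs) / len(payoffs)

# Choose the action that has done best so far.
best = max(["one-box", "two-box"], key=empirical_mean)
```

With this particular (invented) record, one-boxers average about 666667\$ against roughly 334333\$ for two-boxers, so the inductive recommendation is again to one-box.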

But is it really true that there can be no pure case (one where the predictor always acts the same way) in which the predictor both works as expected and serves as evidence for two-boxing?

Of the D cases, the one that comes closest is D1, in which one-boxing is great but two-boxing is even better. The predictor would be predicting that you will always one-box, and at least when you do one-box, it gets those cases right. But it always fails when you two-box.

I probably need to read more about this, but if I were presented with this right now, and I knew the predictor works, I would go with one-boxing, simply because it seems hard to conceive of a world in which you could have evidence both that the predictor works as advertised and that two-boxing works. Or perhaps the problem itself is incoherent.

In case A, though, I would definitely two-box.

EDIT: Also, listen/read this