The solution to Pascal's mugging
Pascal's Wager is a scenario where the potential for an infinite reward (heaven) overrides every other consideration.
Pascal's Mugging is a scenario where the theological considerations are replaced with a more mundane setting: we are faced with a being who claims to have supernatural powers and who will pay us greatly if we pay them a small sum in advance. The earliest example of this particular case I could find online is a 2000 paper from Alex Tabarrok, who presents a similar case.
Bostrom (2009) has a nice writeup online, which is probably the canonical explainer for the scenario.
Pascal's mugging
Pascal's mugging, at its core, does away with the difficulties of thinking about voluntary belief or infinities and instead presents us with a simple decision-theoretic question:
If someone offers you the deal described, should you accept?
At first, the problem does seem easy: imagine the amount of money you are offered is £1000. Then it does seem like quite a lot of people could pay you. As the amount increases, it becomes more and more unlikely that the person in question actually has the money. Ultimately, if the amount offered is larger than Jeff Bezos's wealth, one has to account for even more complicated shenanigans involving conspiracies between multimillionaires, governments, and central banks, to which one can safely assign even more minute probabilities.
One can also consider the base rate for the kind of event we are considering: How often are people offered the deal? How often are people offered the deal and the deal then goes through successfully? If we lived in a world where people do offer such deals, and a nontrivial fraction of them are honest, then it could be rational to accept; and in that situation our intuition would be aligned with our calculation.
This reasoning, it seems to me, deals with most plausible Pascal's muggings: a low base rate, plus a lower probability of someone having larger amounts of money.
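As a rough sketch of this argument (the decay model and all the numbers here are my own, purely illustrative), suppose the probability that the mugger can actually pay falls off like 1/amount² past everyday sums. Then the expected value of accepting shrinks, and eventually turns negative, as the promised amount grows:

```python
def p_genuine(amount):
    # Illustrative assumption: the chance that someone can actually pay
    # `amount` falls off like 1/amount^2 past everyday sums.
    return min(0.5, 1e6 / amount ** 2)

def expected_value(amount, fee=5):
    # Pay `fee` up front; receive `amount` only if the offer is genuine.
    return p_genuine(amount) * amount - fee

for amount in (1_000, 100_000, 10_000_000):
    print(amount, expected_value(amount))
```

The exact decay rate doesn't matter; what matters is that the probability falls faster than the reward grows, which is what the base-rate reasoning above suggests for ordinary muggers.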
Then, we can think about what happens with situations like the one presented by Eliezer Yudkowsky in a Less Wrong post:
Now suppose someone comes to me and says, "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."
Or in Bostrom's paper:
Look at my pale countenance, my dark eyes; and note that I’m dressed in black from top to toe. These are some of the telltale signs of an Operator of the Seventh Dimension. That’s where I come from and that’s where the magic work gets done. [...] If you hand me your wallet, I will perform magic that will give you an extra 1,000 quadrillion happy days of life
In the first case one could deploy a metaphysical argument: there is no such thing as simulated consciousness, any more than simulated water is water. But this doesn't get us very far, because one can substitute simulating those people with regular people and a threat to murder them.
One can simplify those cases to distill what the issue is. Imagine that the cases being discussed merely involve one person instead of 3^^^^3, or one extra year of happy life. This doesn't seem to alter our evaluation of the case much.
Compare with these two cases:
- A 10% (guaranteed) chance of earning £100k, if one pays £5 in advance
- A 10% (guaranteed) chance of earning £10, if one pays £5 in advance
Here the second gamble is not worth it. The probability is the same in both cases; what changes are the rewards, and our actions change with them. But in the mugging cases above, where what changes are the probabilities, our actions seemingly do not change.
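The two gambles reduce to a one-line expected-value calculation (a £5 fee against a known 10% chance of the reward):

```python
def ev(p, reward, fee=5):
    # Expected value of paying `fee` for a probability-`p` shot at `reward`.
    return p * reward - fee

print(round(ev(0.10, 100_000), 2))  # 9995.0: worth taking
print(round(ev(0.10, 10), 2))       # -4.0: not worth taking
```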
Thus, the issue in these examples is not that the quantities involved are high, but their metaphysical dubiousness. What really drives them is not a high reward, but the low likelihood of someone being from outside the Matrix, or being an Operator of the Seventh Dimension.
So, one way out is to declare, by fiat, that in our mental model of the world we live in the real world and there is no Seventh Dimension; that the probability of those things being true is exactly that of 2+2 equalling anything other than 4, that is, zero.
This would require more justification: an a priori proposition like 2+2=4 is necessarily true, no matter what we know about the world, but knowledge about the empirical world is probabilistic. Can we say, with complete certainty, that there is no Seventh Dimension, that there is no loophole in the laws of physics that allows extradimensional beings to interact with us? Probably not.
But if we accept that possibility, we are back where we started. Gwern (2011) discussed the problems with this approach:
One way to try to escape a mugging is to unilaterally declare that all probabilities below a certain small probability will be treated as zero. With the right pair of lower limit and mugger’s credibility, the mugging will not take place.
But such an ad hoc method violates common axioms of probability theory, and thus we can expect there to be repercussions. It turns out to be easy to turn such a person into a money pump, if not by the mugging itself.
Suppose your friend adopts this position, and he says specifically that any probabilities less than or equal to 1/20 are 0. You then suggest a game; the two of you will roll a d20 die, and if the die turns up 1-19, you will pay him one penny, and if the die turns up 20, he pays you one dollar; no, one bazillion dollars.
Your friend then calculates: there is a 19/20 chance that he will win a little money, and there is a 1/20 chance he will lose a lot of money; but wait, 1/20 is too small to matter! It is zero chance, by his rule. So, there is no way he can lose money on this game and he can only make money.
He is of course wrong, and on the 5th roll you walk away with everything he owns. (And you can do this as many times as you like.)
Nor can it be rescued simply by shrinking the probability; the same game that defeated a limit of 1/20 can easily be generalized to 1/n, and in fact, we can make the game even worse: instead of 1 to n-1 winning, we define 0 and 1 as winning, and each increment as losing. As n increases, the large percentage of instant losses increases.
So clearly this modification doesn’t work.
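The money pump can be simulated directly (a sketch; I use $10^9 as a stand-in for the "bazillion"). The friend's rounding rule tells him he cannot lose, but repeated play ruins him:

```python
import random

def friend_balance(rolls, rng):
    # The friend treats the 1/20 chance of rolling a 20 as probability 0,
    # so he happily plays. Track what actually happens to his balance.
    balance = 0.0
    for _ in range(rolls):
        if rng.randint(1, 20) <= 19:
            balance += 0.01            # he wins a penny
        else:
            balance -= 1_000_000_000   # he pays a "bazillion"
    return balance

print(friend_balance(10_000, random.Random(0)))  # hugely negative
```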
Thinking beyond the original problem is probably the key to solving it. Instead of solving Pascal's mugging, we can solve the problem of tiny probabilities and see what that does to the original problem.
Small probabilities
Consider the act of closing the document you are now reading. It is conceivable that a being in Dimension N will kill you if you do so, for not having a permanently open view of Nintil is sacrilegious. What's the probability of that? Similarly, in dimension L, thinking that Nintil is not the best blog in the world is a hideous act, and is punished by death, even for us inhabitants of the regular universe. What's the probability of that? Furthermore, should you close Nintil? Should you dare think Nintil is not the best blog ever?
If we have said that we can't rule them out with the hammer of apriorism, as we would with 2+2=5, there is still one avenue, the mu answer: to say that the probability is undefined, for a different reason.
Probabilistic reasoning portions out the space of possible events, where each slice has some probability. For example, when rolling a six-faced die one can model the outcome as having six possible states, where each state has the same probability. We don't usually care about whether or not the die actually has seven faces, or some other unlikely possibility. For the events that happen after you close Nintil, what could there be in that set? "Nothing" is one of them, with high likelihood, and perhaps "Something unexpected but inconsequential". Under that model, there is no room for weird events happening in Dimension N; the probability of the event "You get killed by a being from Dimension N" is either undefined or zero, depending on how you decide to treat it.
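One way to picture this (a minimal sketch of my own): represent the mental model as an explicit set of outcomes with probabilities. Anything outside the set has no probability at all, rather than probability zero:

```python
# Outcome set for "what happens after you close Nintil" (illustrative numbers).
model = {
    "nothing": 0.999,
    "something unexpected but inconsequential": 0.001,
}

def probability(model, event):
    # Events outside the model are undefined, not zero.
    if event not in model:
        raise ValueError(f"probability of {event!r} is undefined in this model")
    return model[event]

print(probability(model, "nothing"))  # 0.999
# probability(model, "killed by a being from Dimension N") raises ValueError:
# the event is simply not part of the model, so it never enters a decision.
```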
This answer does sound satisfying to me: it describes what is going on when we go about the world doing whatever it is that we are doing at any given moment. We have a mental model of what is possible, within it we assign probabilities and work out what to do; we are not thinking about everything that could possibly happen, all the time.
Now, I can imagine someone thinking that this whole business of having a mental model of the possible doesn't sound very computable, and in turn that's fishy; the method may also sound wishy-washy and hacky, perhaps okay for an imperfect, limited reasoner like a human being, but not what a perfect reasoner would do.
On perfect reasoners, I will make the claim that it is just not possible to consider every possibility: one must use a model and leave things out. Nature abhors infinities, and constructing a system that can consider infinitely many non-parameterized^{1} hypotheses at once (or sequentially, in finite time) is impossible in principle.
Sure, one can concede, but then in the mugger case we are only considering two items in the set: {Mugger is from the 7th dimension and has awesome powers, Mugger is fake news}. There are no infinities here.
The point above is that you can have whatever you want in your model, including a single element, in which case that'll have a probability of 1. As the event "Mugger is from the 7th dimension" is not even in the model, it gets ignored when choosing an action. Problem solved.
Choosing models
This creates a problem: it seems to condemn us to an epistemic version of bipolar disorder. We believe that the probability of the mugger being from another dimension is nonzero, and at the same time we act as if it were zero.
We already do this in other domains, or at least I do: as I mentioned above, the probability that a deductively valid proposition is true is one, and likewise for a priori knowledge (e.g. 2+2=4). However, I also know that I can be wrong; I can be asked if 232433443 is a prime number. It either is or is not, with probability 1, but I don't know which; in that case I end up with a probability that is either 0 or 1, and also another one that is between 0 and 1.
I don't intend to solve these issues here, and I don't know if anyone has solved them yet^{2}. I will just handwave towards the fact that one has to choose what to put in the model, and that it boils down to heuristics, without rejecting for now that one can come up with a mechanical way of doing so.
Is there any other way?
Round probabilities below a threshold to zero
Doesn't work, for the reasons outlined by Gwern above.
Penalize the prior probability (The original Hansonian solution)
This is alluring, and it plays nicely with a regular Bayesian framework, but ultimately it's hard to see how to construct a prior in such a way that it avoids the mugging situation. I discussed above that there are good reasons to believe that the probability of the money actually being paid decreases with the amount (fewer people can pay it), but once we reach the level of the 7th-dimensional mugger, it does not. Conditional on the mugger having superpowers, the probability that they have the capability of paying does not seem to change with the amount, at least not by much.
Bound the maximum utility that goes into the calculation
Even if one accepts it despite its feeling ad hoc, it doesn't make the problem go away; the mugging reoccurs if one splits the reward into smaller rewards, or via a similar trick.
The Kelly criterion
As suggested by Gwern here: rather than maximizing the mean, maximize the median, which in these odd low-probability cases just recommends carrying on as usual. Does this cause problems elsewhere? It does seem a nice, elegant solution to the problem.
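A quick sketch of the difference (numbers are mine, purely illustrative): paying a $5 fee for a one-in-a-million shot at $10^10. The mean-maximizer accepts; the median-maximizer, looking at the typical outcome, declines:

```python
p, reward, fee = 1e-6, 1e10, 5.0

# Accepting: lose the fee with probability 1 - p, win reward - fee with probability p.
mean_accept = p * (reward - fee) + (1 - p) * (-fee)
median_accept = (reward - fee) if p >= 0.5 else -fee  # the typical outcome
mean_decline = median_decline = 0.0

print(mean_accept > mean_decline)      # True: maximizing the mean says accept
print(median_accept > median_decline)  # False: maximizing the median says decline
```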
If you still disagree
The largest number ever devised by mankind, appearing in Michael Huemer's Approaching Infinity, is \(✳^{8✳}8\), leaving 3↑↑↑3 looking like a little baby. (Sure, now you can think of a larger number.) Here and now I offer you the following: if you give me $1 (email me at artir@nintil.com for payment details!), then there is some chance that in a parallel universe \(✳^{8✳}8\) lives worth living will be created. There is also a similar chance that that same number of lives will be tortured elsewhere. Choose wisely! Agree with me or pay! (Or explain where I went wrong^{3}, and if I find you convincing, I'll pay you!)
Note that at the beginning of the post I state that my confidence in the reasoning here is not that high, so I won't be that surprised if I'm wrong.
Changelog

2019/08/22 Added Kelly criterion at Gwern's suggestion.
Because of course, upon reading that, someone might think that a system that solves the equation "x = x^2" is considering an infinite set of hypotheses (values of x). Even under that interpretation, it is a single structure we are working with, merely changing the parameters across a well-defined continuum. That doesn't work with, say, predicting what someone will say next: the number of possible utterances is unbounded.
Logical induction does not solve this problem; it just aims to provide a better tool for coming up with the second kind of probability, without resolving the underlying issue.
If I'm wrong, the correct answer will probably involve the fact that there is a recursive game going on here, whereby the mugger knows that we'll give a low probability to him being legit and will adjust accordingly, but then we know that he knows, and so on.
Citation
In academic work, please cite this essay as:
Ricón, José Luis, “The solution to Pascal's mugging”, Nintil (2019-08-22), available at https://nintil.com/pascalsmugging/.