Scott Alexander wrote a piece some time ago titled Meditations on Moloch. As far as I know, there are two or three replies to it, here and here. He replied to some critiques here.

The theme of the essay, as we will see, is a reflection on the general human condition, and on our future. Still, Scott is too pessimistic, and I’ll proceed to give him reasons to believe that niceness, after all, can triumph. It basically takes one or two little things, like relaxing his apparent assumption of 100% selfish behaviour and setting it at just 95%.

Like the post I’m commenting on, this one is pretty long, so brace yourself. Unlike it, though, this one is not as metaphoric, or (un?)inspiring. It could be twice as long if I tried to do that, but that would hurt the truth/words ratio of this post.

Having read most of what Scott has written, including every single page of his tumblr and his old blog, plus some of his older micronationalist stuff in some random internet forums, I think I can understand a bit of the way he sees the world. Usually, the received view among the smart side of the internet is that the many problems of the world are not due to people being intrinsically evil, but ignorant and/or irrational. They aren’t able to see cooperative equilibria, or they get dragged along by emotional responses that end up leading to war, corruption, crime, and so on.

Moloch is more fearsome than that, says Scott. Assume we were perfectly rational and intelligent. Moloch is seeing the solutions to our problems, knowing what we would have to do to solve them, but not being able to do it due to bad incentives. A Molochian scenario is a prisoners’ dilemma: as a prisoner, you know where the cooperative equilibrium lies, but whatever the other does, you’re apparently better off defecting, and so both defect, and both end up worse off, even when both saw this coming from the beginning. Not being much of a fan of emotional language on this blog, I’ll make a little exception and say that, to Scott, this may look sad, and depressing. Via Ginsberg’s Howl, second part (Moloch), he conveys that idea to us: an ugly society driven by incentives towards globally inefficient equilibria that arise from local optimization processes.

In what follows, I will do what is popularly known as fisking: I will be going down his article and commenting as I go.

So here it goes:

Earth could be fair, and all men glad and wise. Instead we have prisons, smokestacks, asylums.

Why can’t we just get along? I can almost hear it there. And I agree with the worldview behind that. Unlike conservatives, who believe in the impossibility of any reasonable straightening of the crooked timber out of which humanity is made, I do think that progress is possible, if we try hard enough. Things will not get better if no one does anything, obviously.

The implicit question is – if everyone hates the current system, who perpetuates it?

Here is where I basically agree with Scott: There are many problems in the world, and many of them are caused by the way we respond to incentives.

The Moloch. Elua. The perverse incentives. It’s true. All of it. They’re real. (Something Han Solo could have said here)

He then asks us to imagine one Molochian scenario: a world with two rules: every person must spend eight hours a day giving themselves electric shocks, and if someone fails to follow a rule, speaks against it, or fails to enforce it, citizens must kill that person. Everyone expects the rules to be followed. You then assume that if you don’t shock yourself, others will kill you. Others assume the same. And so everyone wants to stop shocking themselves, but no one does. This example is kind of contrived, as he says, certainly, but let’s see why.

In the system, you shock yourself for eight hours, but then you’re free to do as you please (save for speaking against the system). But, I assume, you have friends, family, and so on. So one day, when having dinner with your relatives, you say: we are alone here, no one can hear us, so listen well: I think the system we are living in is fundamentally nonsense, and we should stop shocking each other. Your family knows that they are expected to kill you for having said that, but they also think that the system is nonsense. And, being family, they don’t really expect other family members to kill each other. So they agree, and make their disagreement common knowledge. The rule is thus defused for the family. In a series of one-by-one talks with friends, this innovative idea spreads across groups until the system is effectively abolished.

Moloch is thus slain for this example. I did follow the letter of Scott’s example, but not his spirit, as I plugged in some additional norms or traditions, such as family, friendship, and respect within those close moral circles, but I think I’m allowed to do that, since they are so commonplace in real life.

It turns out formal game-theoretic models usually don’t capture the nuances of actually existing social arrangements very well.

If we had just a formal model of how the masochist system works, and the rules were only those, and so on, then the conclusions would follow: it would never become common knowledge that the system ought to be abolished. On common knowledge and cooperation, see [1].

The only plausible real-world examples of something as extreme as this would be religious sects or particularly oppressive political regimes (like North Korea), where people don’t even think there’s anything wrong with the system, as the information they have is itself controlled. When people are fully aware of their situation, and it is imposed from the outside, we may even get uprisings, even when an individual apparently has no incentive to rebel. [2]

The examples

Scott then presents us with a series of examples:

Prisoner’s dilemma

The reference to libertarians is odd coming from someone as generally nice as Scott, unless he equates libertarians with purely selfish agents, or something like that. (Objectivists, maybe). As said before, in the PD, you can see the solution, but have no incentive to solve it. However:

  • The Prisoner’s Dilemma is actually solved in real life, even by prisoners [3], in many cases, while the prediction is that it cannot be solved.
  • As a thought experiment, you would also cooperate with your family or friends. Why? Because you take their preferences into consideration too, and know they also do, and you know they will cooperate. You behave superrationally in that situation. Whatever the reason, it is an empirical fact that this is how it goes.
  • Next, prisoner’s dilemmas are rare. Socially, you mostly have repeated games, and the equilibrium there, especially for smart people, is to cooperate. Those are the Prisoner’s Dilemmas (and similar games like Stag Hunts) that matter, and we can solve them pretty well, if we are smart enough, as the sketch below this list illustrates. (This does not mean that every one will be resolved nicely, of course) [4,5,6]
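To make the repeated-game point concrete, here is a minimal sketch in Python. The payoff numbers (T=5, R=3, P=1, S=0) and the tit-for-tat strategy are the standard textbook choices, not anything taken from Scott’s essay or the references above.

```python
# Minimal sketch: a repeated Prisoner's Dilemma with standard textbook payoffs
# (T=5, R=3, P=1, S=0). These numbers are illustrative assumptions only.

PAYOFFS = {  # (my move, their move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Play two strategies against each other and return their total payoffs."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(own_history, their_history):
    # Cooperate first, then copy the opponent's last move.
    return 'C' if not their_history else their_history[-1]

def always_defect(own_history, their_history):
    return 'D'

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): the one-shot logic, repeated
print(play(tit_for_tat, always_defect))    # (99, 104): defection gains very little
```

Over 100 rounds, two tit-for-tat players end up far ahead of two defectors, and a lone defector gains almost nothing against tit-for-tat: the one-shot logic of “always defect” simply stops being the smart play once the game repeats.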

You could argue here that these are not real prisoner’s dilemmas, as the incentive structure is not the textbook one; but then real prisoner’s dilemmas aren’t that prevalent in the world, so they would be irrelevant.

Dollar auctions

This case is quite funny. Watch this video first to see the invisible hand of Moloch trolling people. But things like that don’t usually happen. You just need one guy to say: Hey! This is a Dollar Auction game! If you play, you’ll lose; just see how the rules lead to that. Insert explanation. And solved.
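For readers who haven’t seen the trap spelled out, here is a small sketch of why naive bidders get trolled. The numbers (a $1.00 prize, 5-cent raises) and the myopic decision rule are my own illustrative assumptions.

```python
# A sketch of the dollar-auction trap with two myopic bidders. All numbers are
# illustrative assumptions: a $1.00 prize, 5-cent raises, and bidders who only
# compare "drop out now" against "top the other bid and hope to win".

PRIZE, STEP = 100, 5          # in cents

def myopic_wants_to_raise(my_bid, their_bid):
    # Dropping out costs my current bid; raising and winning nets PRIZE - new bid.
    # A myopic bidder raises whenever winning at the new bid beats walking away.
    new_bid = their_bid + STEP
    return PRIZE - new_bid > -my_bid

bids = {'A': 0, 'B': 0}
turn, other = 'A', 'B'
while myopic_wants_to_raise(bids[turn], bids[other]) and bids[other] < 3 * PRIZE:
    bids[turn] = bids[other] + STEP
    turn, other = other, turn

print(bids)  # both bids escalate far beyond the 100-cent prize;
             # the 3*PRIZE cap is only there to stop the loop
```

The loop never stops on its own: at every step, topping the other bid looks cheaper than eating the loss of your standing bid, which is exactly how people end up paying more than a dollar for a dollar.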

Another way to win the auction is precommitment: you bid one cent and say “If anyone else bids, I will bid one dollar if he bids less than one dollar, or 1 cent plus whatever he bids, if he bids more”. You then swiftly sign an enforceable contract. Others back off, and you win 19.99 monetary units. [7] And as said in the article,

Can we generalize from this formal structure to interorganization fights or international escalation? Only in a limited manner is the generalization useful. The international negotiation has communication conditions considerably different from the parlor game. Signals and quasi-commitment are possible and common. The game theory analysis of the game in extensive form shows us that the game theory model alone does not appear to be adequate.

This would be a problem if the world were full of random people trying to auction off dollars. Initially. Then a norm would develop that you should not participate in this.

The fish farming story from the Non-Libertarian FAQ 2.0

Spoiler: I’m writing a rebuttal of the whole FAQ (see that “Soon.. ” tab in the menu? There!). Let me say that this is one of the best critiques of libertarianism around, and it counts among its virtues that it’s a nice critique. He doesn’t show hate, or construct blatant strawmen of libertarianism. The world of libertarian criticism -and libertarianism itself- would be much better if the other critics were replaced with copies of Scott Alexander, dedicated full time to finding flaws in it. As a side effect, the world of blogging would also receive a boost, as long as they can find different anagrams of their real names to form names for their blogs.

Now, to the story.

Let there be a lake with 1000 separately owned fish farms, each earning a profit of $1000/month. Each farm pollutes the lake, reducing every farm’s profit by $1/month. Problem: since there are 1000 farms, there’s a loss of $1000/month per farm, and so no one makes a profit. But they have an option: install a filter for $300/month. Everyone does, and the profit is now $700/month. Then one fisherman, Steve, shirks and stops paying for the filter. His profit is now $999/month, while the others’ is $699. Others see Steve is making loads of money, so they disconnect their filters and we are back to square one.
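The arithmetic of the story checks out, and it is worth seeing why in one place; the snippet below just recomputes the numbers as given.

```python
# Quick check of the fish-farm payoffs as given in the story: 1000 farms,
# $1000/month gross, each unfiltered farm costs every farm $1/month, and a
# filter costs $300/month. `n_polluters` is how many farms skip the filter.

FARMS, GROSS, DAMAGE, FILTER = 1000, 1000, 1, 300

def profit(uses_filter, n_polluters):
    pollution = DAMAGE * n_polluters
    return GROSS - pollution - (FILTER if uses_filter else 0)

print(profit(False, FARMS))   # 0   : nobody filters, nobody profits
print(profit(True, 0))        # 700 : everybody filters
print(profit(False, 1))       # 999 : the lone shirker
print(profit(True, 1))        # 699 : everyone else, once one farm shirks
```

The filter is a great deal collectively (each $300 filter spares the lake $1000/month of total damage) and a terrible deal individually (it spares its owner only $1/month), which is the whole trap.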

They try to solve this by signing the Filter Pact, but then a guy, Mike, stops complying. So the pact-signers are now making $699/month, and Mike is making $999/month. Then people start disconnecting their filters because they want to be like Mike.

He then finishes:

A self-interested person never has any incentive to use a filter. A self-interested person has some incentive to sign a pact to make everyone use a filter, but in many cases has a stronger incentive to wait for everyone else to sign such a pact but opt out himself. This can lead to an undesirable equilibrium in which no one will sign such a pact.

The more I think about it, the more I feel like this is the core of my objection to libertarianism, and that Non-Libertarian FAQ 3.0 will just be this one example copy-pasted two hundred times. From a god’s-eye-view, we can say that polluting the lake leads to bad consequences. From within the system, no individual can prevent the lake from being polluted, and buying a filter might not be such a good idea.

Before slaying Moloch once more, some comments on these final paragraphs: as a general critique of libertarianism it’s not terrible, but it’s not very good either. Free markets, it is said, are locally but not globally optimising. Thus there are situations of higher gains for everyone (and the agents know this), yet they are not reached. This need not be a problem if the gains are trivial, but may be if they are huge (e.g. an asteroid is coming towards the Earth, people want to stop it, but can’t finance solutions because of discoordination). But can’t agents who know they are in this situation solve it?

Post twist: I will answer this later on, to avoid getting into politics that early.

The Malthusian Trap

The Malthusian trap, in its canonical version, is a situation in which

the level of technology at any given time permitted only a certain number of people to live off any given piece of land. The carrying capacity could vary according to the natural ecology of the land, because some environments are naturally more productive than others. Different peoples also possessed different levels of technology, defined in the widest possible sense as the stock of knowledge about the manipulation of the environment. When a people entered new, empty land, they would reproduce themselves until their population hit the carrying capacity — just like caribou or horse flies. [8]

While Malthus was right that the logic of societal dynamics operated like that back then, we escaped it after the Industrial Revolution. In this case, it wasn’t that people explicitly knew what was going on and then went on to fix it. There was an explosive growth in innovative activity that had the slaying of Moloch as a side effect. Nowadays, in advanced economies, we are so far from Malthusian conditions that fertility rates are even below two! Meaning that if that continues, and if we don’t defeat aging, there will be no human race left in some hundreds of years. But social norms can change (encouraging more fertility), or we can actually defeat aging, or both.
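To see how slow that fade actually is, here is a back-of-the-envelope sketch. The fertility rate (1.6), generation length (30 years), and replacement level (2.1) are my own illustrative assumptions, not figures from Scott or the references.

```python
# Back-of-the-envelope sketch of sustained sub-replacement fertility.
# Assumptions (mine, purely illustrative): total fertility rate of 1.6, each
# generation ~30 years, no migration, and ~2.1 children per woman needed for
# replacement. Each generation is then scaled by 1.6 / 2.1.

population = 1_000_000_000
ratio = 1.6 / 2.1          # per-generation shrink factor under these assumptions

for generation in range(1, 11):
    population *= ratio
    print(f"after {generation * 30:4d} years: {population:,.0f}")
# Under these assumptions the population falls by more than 90% in ~300 years:
# a steep decline over centuries, but slow enough for norms or technology to
# change course first.
```

Under those assumptions the population shrinks by over 90% in roughly 300 years: a real decline, but one slow enough for fertility norms or anti-aging technology to change course first, which is the point.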

As for Scott’s rats, they only have things hard if they don’t care at all about others, or don’t have internal rules to coordinate themselves. If they have a huge, commonly owned island, they may all agree to limit consumption, and agree to punish defectors. If, on the contrary, they have small plots of private property, some rats can impose an offspring-limitation policy on theirs, thus creating gardens of wellbeing amid the crowded island. This will work as long as rats respect rights, that is.

Interestingly, Scott need not see this as something bad. He’s a utilitarian, and utilitarianism endorses The Repugnant Conclusion [9] (so do I, in a way!). But in practice, I don’t think he is that much of a utilitarian. I think he just wants a world that works nicely, with art, science, cool buildings, gardens, forests, nature, love, music, and so on. He’s for a Shining Garden, most probably. He may say he is a utilitarian because, although he finds problems with it, there are no other alternatives. To be totally fair, The Repugnant Conclusion is not that max population = max U, but that a really high population does = max U, given that there are decreasing returns of utility with increasing population and that they eventually turn negative.

Capitalism

Imagine a world of perfect competition, where a capitalist wants to do something other than perfectly compete. He can’t, by definition of perfect competition. That’s basically what Scott is saying there. He says that companies, under sufficiently intense competition, are forced to abandon all values except profit maximization, or they will be outcompeted. He analogizes capitalism to evolution. Fit companies (those that make the customer want to buy from them) survive, expand, and inspire future efforts, and unfit companies die. He even goes as far as to say that

The reasons Nature is red and tooth and claw are the same reasons the market is ruthless and exploitative

From a god’s-eye-view, we can contrive a friendly industry where every company pays its workers a living wage. From within the system, there’s no way to enact it.

I read this as a metaphoric paragraph inside a highly metaphoric piece to avoid getting this face.

In the real world, in the case Scott describes, sweatshop workers do get low wages, but those are higher than those of alternative jobs, and increasing. Moloch is not dead here, but there is no great evil either [10-12]. This is not to fully excuse sweatshops. I think, contra Sumner, and, unexpectedly, with Paul Krugman, that there are reasons to impose safety standards on sweatshops if the contracts signed are not clear enough. If a worker wants to risk his life in a sweatshop, the sweatshop owner should disclose relevant information to the future employee. Moloch may be stopping that from happening in poorer countries, but it doesn’t in richer ones, and it isn’t just because there are health and safety regulations: people care more about safety, and expectations are in place for workplaces to be safe unless the contrary is explicitly stated; if this is violated, it may be considered a contract or law violation. Courts work better, too, in richer countries, which decreases enforcement costs and increases compliance.

The real-life Moloch, then, exists, but it’s not that bad. Or maybe it doesn’t exist: because in real life the sweatshop owners don’t seem to care that much about their workers, so it’s not a coordination problem, but just an outcome of their preferences, and of not revealing the knowledge they have about the safety of their buildings. You can say that if they had different preferences, they would be nicer, but they are not failing to be nicer because of an inability to coordinate.

Moloch might be real if we lived in an environment of perfect competition. But that doesn’t exist. Luckily! There’s dynamic competition instead. Perfect competition, in real life, can even be harmful:

perfect competition is a useful thought experiment to compare with the real world, but it’s misleading to see it as an attainable goal. In 2007 the only two US satellite radio stations on the market – Sirius and XM – wanted to merge. Regulators objected on the grounds that moving from two firms to one reduced competition. And indeed in terms of market concentration, this is correct. But as Steve Chapman pointed out, ‘the alternative to one (merged) satellite radio company may not be two companies but none’. The main reason they wanted to merge was because they were losing money. [13]

When we released the PayPal product in late 1999, Elon Musk’s X.com was right on our heels[…] Many of us at PayPal logged 100-hour workweeks. No doubt that was counterproductive, but the focus wasn’t on objective productivity; the focus was defeating X.com. […] But in February 2000, Elon and I were more scared about the rapidly inflating tech bubble than we were about fighting each other: a financial crash would ruin us both before we could finish our fight. So in early March we met on neutral ground, and negotiated a 50-50 merger. De-escalating the rivalry post-merger wasn’t easy, but as far as problems go, it was a good one to have. As a unified team, we were able to ride out the dot-com crash and then build a successful business. [14]

The point there is not the details of the story, which may be idealised, as I was reminded on twitter, but that there were two companies pursuing the same goal, and they merged instead of competing. This is publicly known, and is enough to count as Moloch defiance.

Then there’s this entire article on why a milk cartel may actually be a good thing, including a sentence full of irony:

Anathema: people cooperating within capitalism rather than competing to annihilate each other.

Labour unions are also cartels, as are consumer groups, and others. Cartels are no more than groups of people cooperating in certain ways that violate the basic presuppositions of perfect competition, but they need not be inefficient, or welfare-reducing in general. If they were, there would be a profit opportunity for others to enter the market, if it is contestable.

Same for this other article: same idea, different sector: cathode ray tubes. Imagine a bunch of CRT manufacturers. They are in decline because people buy fewer TVs and screens with CRTs. They are fiercely competing with each other to survive in this environment of declining demand. From a god’s-eye view, it seems that the best thing to do is to band together and scale down production in an orderly way. Well, this is actually what happened: companies formed a cartel to do that.

The same people who contemptuously say over and over that markets are ruled by the law of the jungle and its self-destructive tendencies are the ones who are shocked to discover that no, markets also have cooperative tools that limit the scope of cutthroat competition.

Furthermore, we don’t seem to see this cutthroat competition in places with freer markets. Is it because capitalism is regulated? Or also because workers prefer some things and not others? Imagine a company where you have to work really hard and earn more money, and another where you work less and earn less money. Even without taking into account decreasing returns to scale (beyond a point, you always have them, which is why we don’t see companies growing without bound [13]), you will run out of workers who want to work in your company. So either you give other workers different conditions, or you won’t expand that much. Alternatively, it may be the case that it is good to have such a workforce. SpaceX is an example of that, and actually, it seems a good thing that SpaceX is there, and its employees seem to love what they are doing.

Econ 101 textbooks may depict capitalism as ruthless (what is perfect competition if not that?), and famous dictums by Adam Smith (“It is not from the benevolence of the butcher…”) or Milton Friedman (“The social responsibility of business is to increase profits”) don’t help to picture it differently either. But it is not ruthless in such a bad way! Following The Soylent Green Principle of Institutional Analysis, capitalism is people, and it is enmeshed in institutions, rules, governance, norms. The world is not ruled solely by incentives, but also by beliefs about how the world should be. And we don’t want it to be like that.

Finally, capitalism (or human action, rather) is not that similar to evolution: it is much, much better. And Scott should know, as he was a prominent Less Wronger. As a metaphor, it’s not terrible, but it’s not completely accurate either. Evolution is literally mindless trial and error, judged by the ability to spawn more copies. The corporation is a consciously designed thing, and while there is substantial trial and error, the mechanism by which capitalism adapts is not blind survival of the fittest, but human choice. Evolution works from what is given, taking little steps over really long timescales. What happens in one part of the world doesn’t affect what happens in another. There is no regard for anything; it’s just organisms outreproducing other organisms, and organisms pass their traits only to their descendants. With capitalism, everyone can learn from everyone else, and the system can quickly readapt, because the actors in it understand what’s going on. If there were a meteorite coming our way, evolution would do nothing to stop it. But we can notice it and act. If we estimate a future decrease in the availability of a resource, we can begin adapting right now, while an evolutionary process can’t see it coming. Evolution is always backwards-looking; human action is forward-looking. Evolution takes millions of years to give you Homo sapiens sapiens; human action takes some millennia to figure out the secrets of the universe and build reusable rockets.

The two income trap

Requires reading another whole post, so I mostly skip it. You can try to think of ways I would reply based on what I say here.

Agriculture

Maybe, from a local point of view, agriculture did worsen people’s life conditions, and then competitive pressures ensured farmers outcompeted hunter-gatherers (or, rather, nomads conquered farmers and turned them into war machines; see the stationary bandit model).

But from a god’s-eye view, the nice thing to do would have been to have a balanced diet, and not fight. Not to stay hunter-gatherers forever!

But this doesn’t seem to be an example of Moloch, but of ignorance (of how diet works, plus glorified beliefs about war and violence, probably religion, and so on; see the usual Pinkerian explanations of why it was like that, and why it isn’t anymore). The competition here was to beat other groups, so health was inadvertently trampled over.

Arms races

This is somewhat true. There are two big equilibria: the nice one, in which no one spends on arms, and the bad one, in which everyone spends everything on arms. We are in neither of those, and if anything, we are moving towards the nice equilibrium.
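One way to read the “two big equilibria” claim is as a stag-hunt-style coordination game rather than a pure prisoner’s dilemma. The payoff numbers below are my own illustrative choice, used only to show that both mutual disarmament and mutual armament can be self-sustaining.

```python
# A hedged sketch of the "two big equilibria" reading: a stag-hunt-style payoff
# matrix (the numbers are my own illustrative choice, not from the essay) where
# mutual disarmament and mutual armament are both self-reinforcing outcomes.
from itertools import product

# payoff[(my_choice, their_choice)] -> my payoff
payoff = {
    ('disarm', 'disarm'): 4,  # the nice equilibrium: resources go elsewhere
    ('disarm', 'arm'):    0,  # disarmed against an armed rival: worst case
    ('arm',    'disarm'): 3,  # armed against a disarmed rival
    ('arm',    'arm'):    1,  # the bad equilibrium: everyone pays for arms
}

def is_nash(mine, theirs):
    """Neither side gains by unilaterally switching its choice."""
    best_mine = max(payoff[(alt, theirs)] for alt in ('disarm', 'arm'))
    best_theirs = max(payoff[(alt, mine)] for alt in ('disarm', 'arm'))
    return payoff[(mine, theirs)] == best_mine and payoff[(theirs, mine)] == best_theirs

for mine, theirs in product(('disarm', 'arm'), repeat=2):
    print(mine, theirs, '-> equilibrium' if is_nash(mine, theirs) else '')
# Both (disarm, disarm) and (arm, arm) come out as equilibria; which one you
# land in depends on expectations, which is why norms and treaties matter.
```

If both outcomes are equilibria, the interesting question is which one expectations select, and that is where the mutual checking, treaties, and war-averse preferences of the next paragraph come in.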

While no country can unilaterally enforce anything, countries can check each other. This, helped by everyone’s desire not to go to war, leads to the reduction of the world’s nuclear stockpile, US military spending as a % of GDP going down, and the fact that there are countries out there happily living without armies at all.

To these, add all those countries that are near countries stronger than them: Canada, everything in the Caribbean, Mexico, everything in the Pacific. Canada and Mexico don’t have armies capable of stopping the United States’, yet the United States doesn’t invade them. Why? Because the realist theory of international relations is bunk. People don’t generally want to go to war nowadays, and politicians follow. Arms races are truly problematic only if you assume not only that agents are purely selfish, but also that they can’t see where selfishness leads them. For more on this, see sec. 12.3.6 in [22]. Also, for a view of defense as a public bad, see [23].

Arms races are real: The Cold War was a massive one, but we’re still around. The game-theoretical argument for escalation is valid, but not sound.

Cancer

Cells are not agents conscious of incentives and of ways to improve their situation, so, paraphrasing Wolfgang Pauli, it’s not even Moloch. It’s not that a cell decides to defect; it’s just that mistakes are made in some molecular operation, and cancer happens. And even if, going beyond Scott, I granted that cells were aware of their situation, the rational thing to do wouldn’t be to turn cancerous, but to stay as they are. Each cell would know it has the power to kill the organism, and that if the organism dies, the cell dies too. Each cell would be armed with a nuke, and no cell would be the first to fire. Not even in principle could the cell win, unlike in the Cold War: the payoff matrix tells the cell to be nice.

The “race to the bottom”

Oh, if only this were true! But it generally isn’t. At the between- and within-country levels we don’t see this. If this were that prevalent, the world would be a giant Hong Kong, but it isn’t. The opposite is closer to the truth. Why is this generally benign incarnation of Moloch almost nonexistent? Because politics is not a market. Up to a point, companies try to shop around for looser regulation. In the same way, people shop around for courts, yet we don’t live in a lawless world. There is some evidence that this does happen for labour market regulation in some countries [15], but that even seems good to me: unregulated labour markets are good. For pollution control, there is no race to the bottom, either in emissions [16] or in environmental certifications (ISO 14001) [17], which is good, because pollution is a negative externality.

Ultimately, it is an empirical question whether a race to the bottom works as the simple model predicts, and I’ll grant Moloch points where they are due.

These were the more Molochian cases, apparently. Scott continues:

Before we go on, there’s a slightly different form of multi-agent trap worth investigating. In this one, the competition is kept at bay by some outside force – usually social stigma. As a result, there’s not actually a race to the bottom – the system can continue functioning at a relatively high level – but it’s impossible to optimize and resources are consistently thrown away for no reason. Lest you get exhausted before we even begin, I’ll limit myself to four examples here.

Education

Here I agree with Scott. Education is, to a great extent, signalling. However, there’s some light at the end of the tunnel: with the advent of online learning, and with the actual realisation that it is indeed signalling, things may begin to change. If companies knew that, then just out of self-interest they would pay less attention to which college you came from, students would focus more on learning on their own, etc. This is one of the most Molochian examples around, I have to admit.

Science

Agreement again, at least for the social sciences, because of the inherent fuzziness of the results, plus some ideological bias. But people speaking out loudly against it, and people agreeing with the critique, is something recent, I think. So while I grant a few Moloch points here for coordination problems, the problems of Science are also due to sheer ignorance of what is going on.

Government corruption

I don’t know of anyone who really thinks, in a principled way, that corporate welfare is a good idea.

Corporations. And people who are employed by them. Which is why the mechanism described in the next subsection happens. Politicians may think that by passing corporate welfare laws they are helping American business be more competitive vs the rest of the world, or maintaining employment for their constituents, or securing a comfy seat in the corporation afterwards. Regular people may also like the idea of helping some crucial sectors like agriculture, and then, of course, there’s plenty of, maybe not campaign donations, but surely votes themselves involved.

This is not a case of Moloch. There are agents who do want the subsidies, and agents who are not aware of it happening at the scale it does. Is this problem fixable? Maybe. In Europe and the US, farm subsidies run wild; in New Zealand they don’t. An inquiry into that would be interesting.

Read more about the world of cronyism here.

Congress

People are catching up with reality. Now people’s approval rating of their own member of Congress in the US is 49%. That said, the explanation for that may well be that people like in-groupers more than out-groupers: someone from your region is more in-groupy than an abstract assembly of people you’ve never heard of, who can’t ever agree on anything.

Scott is right again, up to a point. This is not fully Molochian, like the previous case, because there are agents interested in Congressional favours, and approval ratings mean little: the US is far from a civil revolt. People have little incentive to even be aware of the many things that go on in Congress, which is why this happens. Things could be worse: Public Choice paints a bleaker picture than what we actually observe.

You could say, though, that there should be a way to break out of this from within the system. There is: the long-term, painful, and very difficult way is criticising cronyism year after year until politicians internalise the anti-cronyism rule. Not all countries are equally corrupt, nor does corruption seem to be increasing at a fast pace. The short-term, and also highly difficult, Public Choice solution is to ban government from being able to do those things.

III

A basic principle unites all of the multipolar traps above. In some competition optimizing for X, the opportunity arises to throw some other value under the bus for improved X. Those who take it prosper. Those who don’t take it die out. Eventually, everyone’s relative status is about the same as before, but everyone’s absolute status is worse than before. The process continues until all other values that can be traded off have been – in other words, until human ingenuity cannot possibly figure out a way to make things any worse.

However, his examples don’t quite work like this: I think he’s mixing different problems, as we’ll see with his Las Vegas example later on.

Where are the other values being sacrificed in the Prisoner’s Dilemma, or the Dollar Auction? Even in the rat example, he deviates from the classical Malthusian logic to talk about rats sacrificing other values due to evolutionary pressures. Malthus is just about preferences for more offspring clashing with limited resources, yielding a births = deaths equilibrium, nothing else. Same for capitalism: companies try to please customers, but they also have to please workers, and shareholders, and all within a framework of private property. They have to strike a balance and make tradeoffs between satisfying some and satisfying others. Multiple values are served, in varying ratios. This holds even if profit maximization were a totally accurate description of the market: it’s everywhere and always constrained profit maximization.

For agriculture, in Scott’s paragraph, it’s more plausible: people were optimising for killing each other, so the good life of the hunter-gatherer was thrown away because agriculture was needed to win that competition.

In general, I don’t see people reduced to subsistence in those examples as they are instantiated in real life, and the problems in examples 11-14 are not that terrible.

Regarding utopia, and incentives,

Just as you can look at an arid terrain and determine what shape a river will one day take by assuming water will obey gravity, so you can look at a civilization and determine what shape its institutions will one day take by assuming people will obey incentives.

That’s an interesting empirical claim. If true, its strongest version implies that everyone will behave the same given the same incentives, because if that weren’t the case, there would be non-incentive factors explaining the differences. And those exist, as discussed in [6] and [4-5]. You could take the incentive structure of Finland, shoehorn it into Liberia, and still not get a Nordic paradise.

But that means that just as the shapes of rivers are not designed for beauty or navigation, but rather an artifact of randomly determined terrain, so institutions will not be designed for prosperity or justice, but rather an artifact of randomly determined initial conditions.

It is the case that institutions can be designed for justice. Beliefs about what people think is just or fair are what rule in the end: they set the contours of what’s permissible in society, and civil revolts are what end up happening if they are violated.

Just as people can level terrain and build canals, so people can alter the incentive landscape in order to build better institutions. But they can only do so when they are incentivized to do so, which is not always. As a result, some pretty wild tributaries and rapids form in some very strange places.

But people can be incentivised just by wanting to solve those problems! The contrary is to suppose that we care only about ourselves, and that this is reflected in our institutions. To solve the many coordination problems we face, we don’t need to be complete altruists; we just need a pinch of other-regarding preferences. And we do have them. Were it not for that, not even Hobbesian contractarianism could get off the ground.

The Las Vegas mystical experience

Like, by what standard is building gigantic forty-story-high indoor replicas of Venice, Paris, Rome, Egypt, and Camelot side-by-side, filled with albino tigers, in the middle of the most inhospitable desert in North America, a remotely sane use of our civilization’s limited resources?

A reflection often made, by reference not just to casinos, but to everything above basic needs in general. Is it a remotely sane use of resources for me to pay to eat Apfelstrudel, knowing that that money could be used to save other people’s lives? This is basically the idea behind using dead children as currency: the choices we make matter, and doing something for yourself means not doing something for others, and vice versa.

And it occurred to me that maybe there is no philosophy on Earth that would endorse the existence of Las Vegas. Even Objectivism, which is usually my go-to philosophy for justifying the excesses of capitalism, at least grounds it in the belief that capitalism improves people’s lives. Henry Ford was virtuous because he allowed lots of otherwise car-less people to obtain cars and so made them better off. What does Vegas do? Promise a bunch of shmucks free money and not give it to them.

One thing is to actively celebrate Las Vegas as something good; another would be to endorse its destruction (or actively avoiding its construction) as something right. For a utilitarian this distinction doesn’t make sense: the right is the good, and the more the better. But for the rest of the world, the difference is important.

And even then, for utilitarians, it doesn’t seem that bad to have Las Vegas: you have lots of people who enjoy wasting money, some people who enjoy winning money, and a few people who enjoy getting rich in the process. Making a utilitarian case against Las Vegas is not trivial. But Scott makes it straight away, and this makes sense given his actual worldview:

So we have all this amazing technological and cognitive energy, the brilliance of the human species, wasted on reciting the lines written by poorly evolved cellular receptors and blind economics, like gods being ordered around by a moron.

As said before, when Scott saw Las Vegas he probably had his vision of a good society superimposed on it. They could be exercising their higher faculties of reasoning, or making fine art, or having meaningful social relations, or whatever, and instead they are just gambling. I guess this reaction would be the same if the expected winnings in a casino were less negative: people would then be said to be paying for the thrill, or something.

I’m not of the school of thought that defends “Meh, everything is fine as long as people don’t hurt each other”. I see something quite similar to Scott’s vision as the ideal of a good society, worth trying to get close to.

I haven’t had anything like this experience, but the closest was probably reading Tabarrok & Beito’s The Voluntary City [19]. There are people who say that since reading Meditations on Moloch, they see perverse incentives everywhere. Since reading that book, I see cooperation everywhere, and ways of getting around perverse incentives.

IV

But then, things are not that bad, he grants at last, and he discusses some reasons why that is so.

One is excess resources: if there are plenty, we can do things other than compete against everyone, like music, art, philosophy and love. He cites a brief Hanson post there whose point isn’t obvious, or even properly argued. Those non-adaptive things, those moments of sticking out the middle finger to biology and evolution, are among the most human moments we have. Each symbol drawn on a blackboard by an analytic topologist or a philosopher is a boastful laugh against the forces that shaped us.

Assuming a future of no more technological advancement, where technology is constant, we would need increasing fertility levels (>2) to return to the Malthusian scenario; but if fertility stays below 2, and resources either increase or stagnate relative to now, there won’t be future scarcity, but more abundance.

The second one is physical limitations. Slaves aren’t worked to death because they are more profitable alive. Scott says that slavery in the Southern US was weird because it was economically inefficient. It wasn’t, according to Robert Fogel, and if we are to believe Pseudoerasmus’ reading of the literature [18], that’s the consensus opinion. But the reading of what happened is true: overworked, mistreated slaves aren’t as good workers as ones who are treated a bit better. Think of a Slaffer’s Curve: slaves who don’t work don’t produce, and slaves made to work at 100% become exhausted and don’t produce either, so you would have to work them at some point in the middle. Or people can come to believe slavery is wrong, and stop doing it altogether, even when it’s economically efficient.
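As a toy illustration of that Slaffer’s Curve, here is a sketch with a made-up production function; the functional form is purely my own assumption, chosen only so that output peaks at an interior work intensity.

```python
# A toy "Slaffer's Curve": output as a function of how hard slaves are worked.
# The functional form is my own illustrative assumption -- output rises with
# the effort demanded but is dragged down as exhaustion and mistreatment mount.

def output(work_intensity):
    """work_intensity in [0, 1]: 0 = not worked at all, 1 = worked to collapse."""
    exhaustion_penalty = work_intensity ** 3     # grows fast near the top
    return work_intensity - exhaustion_penalty

best = max((output(i / 100), i / 100) for i in range(101))
print(best)  # peak output sits at an interior intensity (~0.58 here), not at 1.0
```

The peak sits well below maximal intensity, which is the physical limitation Scott is pointing at: even a purely profit-driven slavemaster has reasons not to work slaves to collapse.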

The third is utility maximisation:

But many of the most important competitions / optimization processes in modern civilization are optimizing for human values. You win at capitalism partly by satisfying customers’ values. You win at democracy partly by satisfying voters’ values.

Indeed. Part of what I did in the sections above was to plug the logic of this paragraph into the examples described.

Regarding his Ethiopians-with-pesticides example: the coffee growers know that Americans don’t want that. For some time, they may be able to get away with it, until some independent lab, or the government, or consumers find out about the pesticide and its unadvertised toxicity and sue the farmers. Ideally, it would count as a contract violation (people bought “coffee”, and “coffee” means a set of expectations that exclude toxic pesticides), and costs would be imposed on the farmers. If farmers had a low enough time preference, they would see that using the pesticide is not good for them. In reality, doing this will be hard due to differences in time preference and the difficulties of suing poor farmers across borders. But in advanced nations, these problems are easily dealt with.

Regarding the baby boom, it all comes down to what people are willing to endure. You may end up with some companies overworking people, or with more companies, for the same employment level and maybe the same wages, working people at a more reasonable pace.

With the robot coffee picker, Ethiopians could do something different, or people would have to value employment as an end in itself, by buying “Fair Employment” coffee that guarantees it has been picked by hand. However, economically and morally, it is probably better for them to dedicate themselves to some other activity, and for us to buy the robot-picked coffee. But if we really cared about the problem, if we valued their employment enough, we would solve it via that Fair Employment coffee.

Or suppose that there is some important value that is neither a value of the employees or the customers. Maybe the coffee plantations are on the habitat of a rare tropical bird that environmentalist groups want to protect. Maybe they’re on the ancestral burial ground of a tribe different from the one the plantation is employing, and they want it respected in some way. Maybe coffee growing contributes to global warming somehow. As long as it’s not a value that will prevent the average American from buying from them or the average Ethiopian from working for them, under the bus it goes.

That’s what we have property and laws for! With those, things valued by some cannot simply be trampled by the action plans of others, and externalities are mandated to be internalised. In a previous section, I discussed city pollution, and how that Moloch has been slain. For global warming, there is no full agreement on what to do about it, but it seems that technological advancement (solar power, more efficient turbines, and, hopefully, fusion energy) is on its way to solving it. Farmers, even if aware of their contribution to global warming, know that they contribute just a little. Larger industries are more aware of it, and it’s easier to monitor them, so the problem is more easily solved that way.

I know that “capitalists sometimes do bad things” is not exactly an original talking point. But I do want to stress how it’s not equivalent to “capitalists are greedy”. I mean, sometimes they are greedy. But other times they’re just in a sufficiently intense competition where anyone who doesn’t do it will be outcompeted and replaced by people who do. Business practices are set by Moloch, no one else has any choice in the matter.

That could be true if the market were an unconstrained profit-maximisation process. But it isn’t: markets are people, and people have values. Business practices are not set by Moloch; they are set by people. People make tradeoffs between different values, and some equilibria are just socially forbidden.

And as well understood as the capitalist example is, I think it is less well appreciated that democracy has the same problems. Yes, in theory it’s optimizing for voter happiness which correlates with good policymaking. But as soon as there’s the slightest disconnect between good policymaking and electability, good policymaking has to get thrown under the bus.

In theory, democracy does not optimize for voter happiness. Democracy just implements whatever the ruling set of politicians wants to implement. In turn, those politicians are elected based on the preferences of the people who vote for them. But people can vote for policies that go against their happiness, or more broadly, against their pursuit of meaningful projects. Sometimes, or even many times, going against what people vote for is what ends up being beneficial. Some argue that the precise reason democracy works is that it isn’t that tightly coupled to people’s desires, and elite preferences are overrepresented. And elites are better informed, and less crazy, than the rest of the people. Surely there are disconnects between electability and good policymaking: economic policy does not look as if it were designed by economists in every country, but in general it doesn’t look as if it were designed by the average citizen either.

For example, ever-increasing prison terms are unfair to inmates and unfair to the society that has to pay for them. Politicians are unwilling to do anything about them because they don’t want to look “soft on crime”, and if a single inmate whom they helped release ever does anything bad (and statistically one of them will have to) it will be all over the airwaves as “Convict released by Congressman’s policies kills family of five, how can the Congressman even sleep at night let alone claim he deserves reelection?”. So even if decreasing prison populations would be good policy – and it is – it will be very difficult to implement.

I agree with this. I don’t think people deserve to suffer for not knowing that something they did was wrong. So it is unfair to inmates. People do want to abolish mandatory minimum sentences in the US, and it wouldn’t surprise me if this happens in a few years.

However, I wouldn’t say these outcomes are the result of utility maximization. They are the result of respecting people’s rights, caring about others, and, constrained by that, pursuing your conception of the good. In many cases that will involve making money, and in quite a lot of those, it will involve trying to make more money than the rest.

Regarding coordination, Scott discusses the concept of a garden: everyone bands together under a single authority that ensures the good cooperative outcomes happen. Governments would be an example of this, says Scott. Governments could end pollution, prisoner’s dilemmas, workers being harmed, and arms races, and a World Government would be able to solve those for the entire world.

Scott characterises governments as rule-patterned violence: agreements plus enforcement. But things other than governments also do this, he reminds us: teachers act like governments by having rules against cheating, limiting the actions students can take when competing for grades. Norms among the students can also work as a government: rules (don’t cheat) and punishment (or else we will shun you).

Social codes, gentlemens’ agreements, industrial guilds, criminal organizations, traditions, friendships, schools, corporations, and religions are all coordinating institutions that keep us out of traps by changing our incentives.

But these institutions not only incentivize others, but are incentivized themselves. These are large organizations made of lots of people who are competing for jobs, status, prestige, et cetera – there’s no reason they should be immune to the same multipolar traps as everyone else, and indeed they aren’t. Governments can in theory keep corporations, citizens, et cetera out of certain traps, but as we saw above there are many traps that governments themselves can fall into.

The first paragraph is a nice list of the battalions serving us in mercilessly slaying Molochs. The second paragraph reminds us that those institutions can still fall prey to Moloch, including governments. Governments are people, and they behave as such.

The United States tries to solve the problem by having multiple levels of government, unbreakable constitutional laws, checks and balances between different branches, and a couple of other hacks.

Saudi Arabia uses a different tactic. They just put one guy in charge of everything.

This is the much-maligned – I think unfairly – argument in favor of monarchy. A monarch is an unincentivized incentivizer. He actually has the god’s-eye-view and is outside of and above every system. He has permanently won all competitions and is not competing for anything, and therefore he is perfectly free of Moloch and of the incentives that would otherwise channel his incentives into predetermined paths. Aside from a few very theoretical proposals like my Shining Garden, monarchy is the only system that does this.

Now, these paragraphs are quite interesting. The first one explains the idea of separation of powers and Constitutions: the solution to the who-will-watch-the-watchers problem. But it isn’t a real solution. Limiting government power with government via incentives doesn’t work in theory, although it somewhat does in practice. You need a power from the outside to constrain it, and ultimately that power is what people think is just and fair. That’s Étienne de La Boétie’s [20] great insight. What really holds it together is beliefs, ethics, tradition. And yes, incentives too.

As for monarchies, Scott presents the argument for them: monarchs are outside the system. They escaped Moloch and can do as they wish. He says that besides some highly theoretical proposals like his Shining Garden, ONLY monarchy can do that. I won’t criticise or praise the Shining Garden here, but I’ll point out that monarchs are not truly Moloch-free, something he admits later on. Let’s pull some more paragraphs to finish this section:

But then instead of following a random incentive structure, we’re following the whim of one guy. Caesar’s Palace Hotel and Casino is a crazy waste of resources, but the actual Gaius Julius Caesar Augustus Germanicus wasn’t exactly the perfect benevolent rational central planner either.

He grants that although monarchs could, in theory, take the god’s-eye view and solve problems, this is not a necessity. Precisely because of their apparent freedom from incentives, there is little (besides people’s willingness to put up with them) to stop them from doing what they want. Except other monarchs or countries, that is. Monarchs would also be stuck in Molochian traps vis-à-vis other monarchs, as they are in a symmetrical relation to each other. Scott’s argument for monarchy would work not for your average Habsburg or Bourbon, but for a World Monarch. Unless we recognise, as we should, that you can escape Moloch, or perverse incentives, without a power above. This is also why the Hobbesian logic applied to the world fails: countries aren’t constantly fighting.

A monarchy is not something qualitatively different from one of the organisations described above by Scott (companies, associations…). The difference is that monarchies are usually large. But imagine a small monarchy near a country with really low taxes and regulations. Could the monarch avoid lowering taxes and regulations to attract capital? If the monarch cares only about money, she will lower them. If the monarch cares about maintaining her garden as it is, she won’t. But the same goes for any other organisation.

The libertarian-authoritarian axis on the Political Compass is a tradeoff between discoordination and tyranny. You can have everything perfectly coordinated by someone with a god’s-eye-view – but then you risk Stalin. And you can be totally free of all central authority – but then you’re stuck in every stupid multipolar trap Moloch can devise.

The libertarians make a convincing argument for the one side, and the neoreactionaries for the other, but I expect that like most tradeoffs we just have to hold our noses and admit it’s a really hard problem.

Freedom and uncoordination, or coordination but risk of oppression. This, to some extent, is true. But it’s more like freedom and risk of uncoordination, or coordination but risk of oppression. Scott grants the NRx that, ideally, their theory would deliver coordination and niceness, but that in practice it may not. To libertarians, not even the theory is granted: not even in theory will coordination be achieved, let alone in practice. I will return to this later on.

V

A summary of Scott’s point so far:

Multipolar traps – races to the bottom – threaten to destroy all human values. They are currently restrained by physical limitations, excess resources, utility maximization, and coordination. […] the most important change in human civilization over time is the change in technology. So the relevant question is how technological changes will affect our tendency to fall into multipolar traps.

Develop a new robot, and suddenly coffee plantations have “the opportunity” to automate their harvest and fire all the Ethiopian workers. Develop nuclear weapons, and suddenly countries are stuck in an arms race to have enough of them. Polluting the atmosphere to build products quicker wasn’t a problem before they invented the steam engine.

The limit of multipolar traps as technology approaches infinity is “very bad”.

Multipolar traps are currently restrained by physical limitations, excess resources, utility maximization, and coordination.

If he mentioned before that there were some factors keeping Moloch at bay, now the problem is that technology can destroy those.

For physical limitations: the slavemaster can feed the slaves Soylent and modafinil, and track them with GPS. As for Malthusian conditions, Scott cites Bostrom: maybe now fertility rates are <2, but the people who do have children pass their traits on to the newer generations, perhaps increasing long-run fertility. Or maybe cultural norms will keep that constrained. He quickly jumps to a world of ems that can reproduce rapidly and exhaust all available hardware. If new agents can be created very quickly, physical limits will have expanded and won’t be a constraint anymore.

This just assumes that emulated people will trample over other people’s rights. But they can be constrained, as people right now are.

As for excess resources: with asteroid mining and the construction of worlds in virtual reality, limits to growth lurk far in the future. But they are, in theory, there, and Scott reminds us of that. Unless we grow at a lower pace, and develop faster-than-light propulsion, or something.

As for utility maximization (more like not being fully selfish, actually), Scott presents the everyone-out-of-work-because-of-robots scenario. So far, it is not happening except for low-wage jobs. But in theory, there are many jobs that could fall prey to robots [21]. There is a way out of this:

(there are some scenarios in which a few capitalists who own the robots may benefit here, but in either case the vast majority are out of luck)

If those capitalists, given how absurdly rich they would be, donated a minuscule fraction of their wealth, perhaps things would be really nice. Cynics would propose a universal basic income, which is coercing them into doing the same thing.

As for religious sects taking over due to increased natality, he notes that many of their members quit, so it’s not that problematic.

But still, memes are dangerous, because they spread independently of their truth value:

The creationism “debate” and global warming “debate” and a host of similar “debates” in today’s society suggest that the phenomenon of memes that propagate independent of their truth value has a pretty strong influence on the political process. Maybe these memes propagate because they appeal to people’s prejudices, maybe because they are simple, maybe because they effectively mark an in-group and an out-group, or maybe for all sorts of different reasons.

The point is – imagine a country full of bioweapon labs, where people toil day and night to invent new infectious agents. The existence of these labs, and their right to throw whatever they develop in the water supply is protected by law. And the country is also linked by the world’s most perfect mass transit system that every single person uses every day, so that any new pathogen can spread to the entire country instantaneously. You’d expect things to start going bad for that city pretty quickly.

Well, we have about a zillion think tanks researching new and better forms of propaganda. And we have constitutionally protected freedom of speech. And we have the Internet. So we’re pretty much screwed.

There are a few people working on raising the sanity waterline, but not as many people as are working on new and exciting ways of confusing and converting people, cataloging and exploiting every single bias and heuristic and dirty rhetorical trick

So as technology (which I take to include knowledge of psychology, sociology, public relations, etc) tends to infinity, the power of truthiness relative to truth increases, and things don’t look great for real grassroots democracy. The worst-case scenario is that the ruling party learns to produce infinite charisma on demand. If that doesn’t sound so bad to you, remember what Hitler was able to do with a famously high level of charisma that was still less-than-infinite.

Hm. Maybe in the short run. But there is a long-term tendency towards the truth. Religions are fading out, at least in developed countries; communism wasn’t able to remain in this world at full strength even though it had a superpower, and the academic elite of the other superpower, backing it; and Science progresses happily despite all the problems mentioned by Scott, plus many others (people don’t care about Science that much, and many people’s beliefs contradict basic scientific truths). As for democracies, despite voter ignorance [22], advanced countries are far from hellholes.

On coordination, he says that technology may improve it, but adds that it only works if more than 51% of the relevant actors want to coordinate, and if there are no brilliant tricks that make coordination impossible.

People are using the contingent stupidity of our current government to replace lots of human interaction with mechanisms that cannot be coordinated even in principle. I totally understand why all these things are good right now when most of what our government does is stupid and unnecessary. But there is going to come a time when – after one too many bioweapon or nanotech or nuclear incidents – we, as a civilization, are going to wish we hadn’t established untraceable and unstoppable ways of selling products.

This danger is more tangible than the previous ones, and yes, if the threat of bio/nuclear/nanoterrorism materialises, global governance institutions could be required.

And if we ever get real live superintelligence, pretty much by definition it is going to have >51% of the power and all attempts at “coordination” with it will be useless.

Or not: what if the SAI is actually designed to respect people's rights? Hard, but doable. That, supposedly, is the point of the Friendly AI programme.

So I agree with Robin Hanson. This is the dream time. This is a rare confluence of circumstances where the we are unusually safe from multipolar traps, and as such weird things like art and science and philosophy and love can flourish.

As technological advance increases, the rare confluence will come to an end. New opportunities to throw values under the bus for increased competitiveness will arise. New ways of copying agents to increase the population will soak up our excess resources and resurrect Malthus’ unquiet spirit. Capitalism and democracy, previously our protectors, will figure out ways to route around their inconvenient dependence on human values. And our coordination power will not be nearly up to the task, assuming something much more powerful than all of us combined doesn’t show up and crush our combined efforts with a wave of its paw.

Absent an extraordinary effort to divert it, the river reaches the sea in one of two places.

It can end in Eliezer Yudkowsky’s nightmare of a superintelligence optimizing for some random thing (classically paper clips) because we weren’t smart enough to channel its optimization efforts the right way. This is the ultimate trap, the trap that catches the universe. Everything except the one thing being maximized is destroyed utterly in pursuit of the single goal, including all the silly human values.

Or it can end in Robin Hanson’s nightmare (he doesn’t call it a nightmare, but I think he’s wrong) of a competition between emulated humans or “ems”, entities that can copy themselves and edit their own source code as desired. Their total self-control can wipe out even the desire for human values in their all-consuming contest. What happens to art, philosophy, science, and love in such a world? […]

But even after we have thrown away science, art, love, and philosophy, there’s still one thing left to lose, one final sacrifice Moloch might demand of us. Bostrom again:

We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.

The last value we have to sacrifice is being anything at all, having the lights on inside. With sufficient technology we will be “able” to give up even the final spark.

(Moloch whose eyes are a thousand blind windows!)

Everything the human race has worked for – all of our technology, all of our civilization, all the hopes we invested in our future – might be accidentally handed over to some kind of unfathomable blind idiot alien god that discards all of them, and consciousness itself, in order to participate in some weird fundamental-level mass-energy economy that leads to it disassembling Earth and everything on it for its component atoms.[…]

Competition and optimization are blind idiotic processes and they fully intend to deny us even one lousy galaxy. […]

We will break our back lifting Moloch to Heaven, but unless something changes it will be his victory and not ours.

Whoa. This holds if the em world works as Hanson envisions and/or if SAI goes wrong. These problems are different from the ones presented before: we've gone from prisoner's dilemmas and polluting fishermen to paperclip maximisers and self-replicating p-zombies. I will comment on this later on.

VI

He discusses the NRx concept of Gnon:

The high priest of Gnon is Nick Land of Xenosystems, who argues that humans should be more Gnon-conformist (pun Gnon-intentional). He says we do all these stupid things like divert useful resources to feed those who could never survive on their own, or supporting the poor in ways that encourage dysgenic reproduction, or allowing cultural degeneration to undermine the state. This means our society is denying natural law, basically listening to Nature say things like “this cause has this effect” and putting our fingers in our ears and saying “NO IT DOESN’T”. Civilizations that do this too much tend to decline and fall, which is Gnon’s fair and dispassionately-applied punishment for violating His laws. […]

On the margin, compliance with the Gods of the Copybook Headings, Gnon, Cthulhu, whatever, may buy you slightly more time than the next guy. But then again, it might not. And in the long run, we’re all dead and our civilization has been destroyed by unspeakable alien monsters.

At some point, somebody has to say “You know, maybe freeing Cthulhu from his watery prison is a bad idea. Maybe we should not do that.”

That person will not be Nick Land. He is totally one hundred percent in favor of freeing Cthulhu from his watery prison and extremely annoyed that it is not happening fast enough. I have such mixed feelings about Nick Land. On the grail quest for the True Futurology, he has gone 99.9% of the path and then missed the very last turn, the one marked ORTHOGONALITY THESIS.

Well. Ok. In this section, he tells us about Gnon and Nick Land. I assume that he does this just to make the connection to Moloch, but I see little point in this section. Nice Lovecraft story.

VII

Here we have an argument for Neoreaction (remember, coordination at the price of risking Stalin):

Instead of the destructive free reign of evolution and the sexual market, we would be better off with deliberate and conservative patriarchy and eugenics driven by the judgement of man within the constraints set by Gnon. Instead of a “marketplace of ideas” that more resembles a festering petri-dish breeding superbugs, a rational theocracy. Instead of unhinged techno-commercial exploitation or naive neglect of economics, a careful bottling of the productive economic dynamic and planning for a controlled techno-singularity. Instead of politics and chaos, a strong hierarchical order with martial sovereignty. These things are not to be construed as complete proposals; we don’t really know how to accomplish any of this. They are better understood as goals to be worked towards. This post concerns itself with the “what” and “why”, rather than the “how”. (Nyan)

This seems to me the strongest argument for neoreaction. Multipolar traps are likely to destroy us, so we should shift the tyranny-multipolarity tradeoff towards a rationally-planned garden, which requires centralized monarchical authority and strongly-binding traditions. (Scott)

Suppose that in fact patriarchy is adaptive to societies because it allows women to spend all their time bearing children who can then engage in productive economic activity and fight wars. This doesn’t seem too implausible to me. In fact, for the sake of argument let’s assume it’s true. The social evolutionary processes that cause societies to adopt patriarchy still have exactly as little concern for its moral effects on women as the biological evolutionary processes that cause wasps to lay their eggs in caterpillars.

Evolution doesn’t care. But we do care. There is a tradeoff between Gnon-compliance – saying “Okay, the strongest possible society is a patriarchal one, we should implement patriarchy” and our human values – like women who want to do something other than bear children.

Too far to one side of the tradeoff, and we have unstable impoverished societies that die out for going against natural law. Too far to the other side, and we have lean mean fighting machines that are murderous and miserable. Think your local anarchist commune versus Sparta.

Nyan the NRx again:

Thus we arrive at Neoreaction and the Dark Enlightenment, wherein Enlightenment science and ambition combine with Reactionary knowledge and self-identity towards the project of civilization. The project of civilization being for man to graduate from the metaphorical savage, subject to the law of the jungle, to the civilized gardener who, while theoretically still subject to the law of the jungle, is so dominant as to limit the usefulness of that model.

This need not be done globally; we may only be able to carve out a small walled garden for ourselves, but make no mistake, even if only locally, the project of civilization is to capture Gnon.

Scott:

I maybe agree with Nyan here more than I have ever agreed with anyone else about anything. He says something really important and he says it beautifully and there are so many words of praise I want to say for this post and for the thought processes behind it.

But what I am actually going to say is…

Gotcha! You die anyway!

Suppose you make your walled garden. You keep out all of the dangerous memes, you subordinate capitalism to human interests, you ban stupid bioweapons research, you definitely don’t research nanotechnology or strong AI.

Everyone outside doesn’t do those things. And so the only question is whether you’ll be destroyed by foreign diseases, foreign memes, foreign armies, foreign economic competition, or foreign existential catastrophes.

Indeed, as I mentioned earlier: monarchy is not enough, unless everything is brought under its control, or competition is walled in by rights and by people who care about them.

VIII

So let me confess guilt to one of Hurlock’s accusations: I am a transhumanist and I really do want to rule the universe.

Not personally – I mean, I wouldn’t object if someone personally offered me the job, but I don’t expect anyone will. I would like humans, or something that respects humans, or at least gets along with humans – to have the job.

But the current rulers of the universe – call them what you want, Moloch, Gnon, Azathoth, whatever – want us dead, and with us everything we value. Art, science, love, philosophy, consciousness itself, the entire bundle. And since I’m not down with that plan, I think defeating them and taking their place is a pretty high priority.

Yeah! 😀

The opposite of a trap is a garden. The only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.

There we go: Scott finally settles on a solution that, in theory, would work. Supposedly, at least.

And the whole point of Bostrom’s Superintelligence is that this is within our reach. Once humans can design machines that are smarter than we are, by definition they’ll be able to design machines which are smarter than they are, which can design machines smarter than they are, and so on in a feedback loop so tiny that it will smash up against the physical limitations for intelligence in a comparatively lightning-short amount of time.

Or not, see the Yudkowsky-Hanson debate [23].

In the very near future, we are going to lift something to Heaven. It might be Moloch. But it might be something on our side. If it is on our side, it can kill Moloch dead.

Very near future? That’s debatable. Also, it depends on what he means by very near. In the next 50 years? 100? 300? 500?

I realize that sounds like hubris – it certainly did to Hurlock – but I think it’s the opposite of hubris, or at least a hubris-minimizing position.

To expect God to care about you or your personal values or the values of your civilization, that is hubris.

To expect God to bargain with you, to allow you to survive and prosper as long as you submit to Him, that is hubris.

To expect to wall off a garden where God can’t get to you and hurt you, that is hubris.

To expect to be able to remove God from the picture entirely…well, at least it’s an actionable strategy.

I am a transhumanist because I do not have enough hubris not to try to kill God.

Agree.

IX

The question everyone has after reading Ginsberg is: what is Moloch?

My answer is: Moloch is exactly what the history books say he is. He is the god of Carthage. He is the god of child sacrifice, the fiery furnace into which you can toss your babies in exchange for victory in war.

He always and everywhere offers the same deal: throw what you love most into the flames, and I will grant you power.

As long as the offer is open, it will be irresistable. So we need to close the offer. Only another god can kill Moloch. We have one on our side, but he needs our help. We should give it to him.

The title I initially had in mind for this post was “We are become Elua, destroyers of Moloch”. Elua is another god, a god of Good. Scott implies that Elua is a Singleton (“We’re going to lift something to Heaven…”). My point is that we are that god, that we have been slaying Molochs successfully until now, and that, probably, we will continue to do so. There is no need for an AI mastermind.

Scott reaches the right conclusion given his premises: to stop the competitive processes, you need to put everything under the control of a single power, and pray that the power is good. Elua would meet both requirements. The problem, and this may fill Scott with despair, is that it seems difficult both to design and to implement an Elua that does its job right. And so Scott and many readers of Meditations on Moloch are left, I think, where the existentialists were: we are going to die, the world is absurd, and there is no way to escape it. Fortunately, the premises are wrong.

Throughout this post, I've pointed out ways this need not be the case. In this final section, I will develop those further, and argue that Scott's concerns with Moloch result from inadvertently mixing together a series of problems, making them seem greater than they really are.

Spotting and annihilating Molochs as a social activity

The Pantheon of Molochs

What exactly does the metaphor of Moloch refer to?

  1. To ignorance, and the inability to conceive of cooperative equilibria in which everyone benefits more than they otherwise would
  2. To irrationality, to choosing suboptimal means to achieve goals
  3. To perverse incentives: following ‘rational’ decisions leading to aggregate problems, of varying magnitude
  4. To the imperfections of our biology
  5. To a combination of the above

The prisoner's dilemma is a case of 3 (or 2, depending on your conception of rationality).

The dollar auction is a case of 1: not knowing ex ante where the game leads. Some problems of democracy are also due to this.

The Las Vegas example is mostly 4. Is it irrational to gamble in a casino? Not necessarily, if you care about things other than actually winning. Is it part of a good life? Probably not, in my worldview or in Scott's.

The Malthusian cases are a combination of 3 and 4.

The problems described by Scott can then be sorted into two big bins:

  1. Problems that someone knows how to solve
  2. Problems that we don’t know how to solve

The em world and SAI fall into the second bin, and most of what Scott describes falls into the first. Scott begins by building his case from small observations, here and there, of mini Molochs, before going all in and connecting those type 1 problems to type 2 problems.

If you hold lots of things in your head at the same time, problems seem bigger than they really are. Here we have a poem, a vision of Las Vegas, irrationality, ignorance, incentives, actually existing problems, and potential problems. The whole fuss about Moloch can be cleared up and reduced to the problem of ems and SAI. As I said: Moloch is not a big deal now. I won't solve the em/SAI problem here; I'll at least wait until I've read Hanson's book on the topic. Suffice it to say that we are quite safe until those problems hit us. If they never come, we'll be safe forever, seemingly.

The Politics of Moloch

What prompted this absurdly long piece (by my standards; I don't approve of super lengthy texts) was a comment on Twitter by Devon Marisa Zuegel. Though Moloch is everywhere, and politics is not the main theme of Scott's piece, a political reading of it can be made: we need government to keep Moloch at bay; individual self-interest and competition won't slay it. Therefore, it could be said, the reality of Moloch poses a problem for libertarianism.

It is sometimes said that a libertarian is someone who studied only the first half-semester of economics, where the niceness of markets is explained, but skipped the second, where market failures are described. I'll go further and say that most economists finish their studies there. Further study leads you to Public Choice, where you learn that even if markets fail, and even if in theory governments can correct market failures, in practice they need not do so, and it may be better, given plausible assumptions, to leave the market alone. Finally, the fourth step is to realise that we can escape incentives to some extent and troll naive economics textbooks really hard. We can notice we are in lose-lose situations and devise institutions that solve them by changing our incentives. We can band together to solve a problem. We can sign contracts that bind us credibly. Scott calls many things like these 'government'. But this is simplistic. Remember what I said Scott said:

Scott characterises governments as rule-patterned violence: agreements plus enforcement. But things other than governments also do this, he reminds us: teachers act like governments by having rules against cheating, limiting the actions students can take when competing for grades. Norms among the students can also work as a government: rules (don't cheat) plus punishment (or else we will shun you).

Governments are not merely rule-patterned violence; they are something else altogether. Governments are the institutions in control of States: sovereign entities that enjoy recognition of their legitimacy by a sizable fraction of the people they rule over. As a result, a State enjoys political authority, even though most philosophers say such a thing is indefensible [24]. Governance is what we are looking for, and this is one thing governments can do, and almost always do. Me signing a contract with you is governance, and asking a trustworthy third party to punish a defector is governance. Not government. Government is all-encompassing, large, and unchallengeable; governance is more specific, small, and polycentric. Between these extremes there is a range of possibilities. It is by now well known that you can have effective rules in the absence of government [25-29], even if it is to form pirate bands or prison gangs. The bounds of cooperation are set by beliefs about what's permissible, remember, so not all cooperation is nice. You see this even in settings where people are supposed to fight, like online games, e.g. EVE Online [30-31]. Can we translate this to the entire world, and have actually existing, extensive cooperation without governments? Maybe [32].

Minimal-state libertarianism is far easier to defend. Economic arguments, even ones that overlook our cooperative skills, let you justify a government of up to 30-35% of GDP [33]. That's no minimal government, but it is around Switzerland's level, and Switzerland is one of the model countries libertarians usually point to. Other examples that meet the target are Hong Kong and Singapore. And this includes the need to assist the poor! If you are more optimistic about markets, you can bring it down to 5% of GDP, as recently defended, in Spanish, here [34].

We can argue over those arguments. But what is simply not true is that libertarianism doesn't work in theory. Empirically, it may not. Market failures can be solved inside the market itself, in theory, given realistic assumptions about how human beings are. These assumptions do not include pure selfishness or perfect competition.

Finally, if some coordination failures remain, they can be solved by the minimal-State government, or left to be solved by pure cooperation alone, in the very long run.

An express debunking of the Non-Libertarian FAQ 3.0, or helping some fishermen

Let’s go back to the fishermen example and finally solve it.

So we have our 1,000 firms, each one's pollution causing a $1 loss to every firm on the lake, out of an initial gain of $1,000 each, so they are all making zero. The fishermen get together and lay out their situation. They agree to sign a conditional contract, not too dissimilar from what you can do on Kickstarter: everyone will pay $300 for a filter, so that everyone makes $700 a month. The contract is binding only if everyone signs it, and they sign it before a court so that it becomes enforceable by an external agency. Now they have changed their incentives: they can do nothing (payoff = 0) or sign (expected payoff = 700), so they sign. Once they have gone to court, they can either cooperate (payoff = 700) or shirk. But the shirker's payoff is no longer 999: he pays a fine until he cooperates, so now it is rational to cooperate. Fixed.
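
To make the incentive shift concrete, here is a minimal sketch of the payoffs just described. The numbers ($1,000 gross, $1 pollution damage, $300 filter) come from the example; the function name and the exact fine level are illustrative assumptions of mine, not part of the original post.

```python
# Minimal sketch of the fishermen's incentives, using the numbers from the text.
# The exact fine level is an illustrative assumption.

N = 1000       # number of fishing firms on the lake
GROSS = 1000   # gross monthly earnings per firm, in dollars
DAMAGE = 1     # loss each unfiltered firm imposes on every firm, in dollars
FILTER = 300   # monthly cost of running a filter, in dollars

def payoff(i_filter: bool, others_filtering: int) -> int:
    """Monthly payoff of one firm, given how many of the other 999 firms filter."""
    polluters = (N - 1 - others_filtering) + (0 if i_filter else 1)
    return GROSS - DAMAGE * polluters - (FILTER if i_filter else 0)

# Without enforcement, shirking dominates filtering:
print(payoff(False, N - 1))  # 999  -> everyone else filters, I shirk
print(payoff(True, N - 1))   # 700  -> everyone filters (the cooperative outcome)
print(payoff(False, 0))      # 0    -> nobody filters (the status quo)
print(payoff(True, 0))       # -299 -> I filter alone

# After the conditional contract is signed before a court, shirking triggers a fine.
# Any fine larger than the $299 net gain from shirking makes cooperation rational.
FINE = 300  # assumed value, chosen only to exceed the gain from shirking
print(payoff(False, N - 1) - FINE)  # 699 < 700 -> it now pays to cooperate
```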

But you have used a court! So what? In Libertonia there are courts, what were you expecting? Third-party enforceability is a valuable service.

But what if there is a stubborn fisherman who demands that the others pay him to sign the contract? Then we run into the question of whether the fishermen had the right to pollute the water in the first place [35].

So suppose that the fishermen raise this issue. Most of them argue that since the lake is a common resource, and since it is in the interest of all of them for it to be clean, they should regard it as common property, where rules can be imposed to ensure internal coordination. The hypothetical free rider could argue that since he was the first to get to the lake, he has the right to pollute, and that the others ought to pay him. This could be settled by some sort of Coasean bargain. If he demands a payment P from each of the others, he makes 700 + 999P, and each of the others makes 700 − P. If everyone simply signs, total earnings are 700 × 1,000 = 0.7 megadollars. With the side payments, total earnings are (700 + 999P) + 999 × (700 − P) = 0.7 megadollars: the same amount, with a different distribution. P would depend on a variety of factors, but total gains are the same either way. The court system would have to deal with cases like this, resolving disputes and setting precedents, trying to ensure good outcomes while respecting rights. No trivial task, but not an impossible one either.
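
As a quick sanity check on that arithmetic, the sketch below (variable names are mine) confirms that total earnings stay at 0.7 megadollars regardless of the size of the side payment P.

```python
# Sanity check of the Coasean-bargain arithmetic: the total does not depend on P.
N = 1000
COOPERATIVE = 700  # per-firm monthly earnings once everyone filters, in dollars

def total_earnings(P):
    holdout = COOPERATIVE + (N - 1) * P    # the free rider: 700 + 999*P
    others = (N - 1) * (COOPERATIVE - P)   # the other 999 firms: 999*(700 - P)
    return holdout + others

print(total_earnings(0))    # 700000 -> everyone simply signs
print(total_earnings(50))   # 700000 -> same total, different distribution
print(total_earnings(200))  # 700000
```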

If this is Scott’s core objection to libertarianism, I think he’ll now be a step closer to it. And he’s already really close, if not already into it!

On the need for social coercion, you can also read [36].

Conclusion

The point of this essay has been that the Moloch problem is overrated, and that we can overcome it. The living Molochs of today are the dead Molochs of tomorrow's history books. There remain some what-if Molochs that seem difficult to slay: the em world and an ungood superintelligence. Until those arrive, we are mostly safe from Moloch-like threats. Why has Scott overestimated the threat? It seems to me that it is because of a particularly narrow view of how markets work and how people behave.

As for political systems, Moloch-like problems pose greater dangers for democracy than for capitalism. Capitalism allows for both small, step-by-step solutions and big ones that can be carried out even if other people disagree, within the limits of property rights. The logic of sovereignty bans this: it makes it really difficult to experiment with the different political organisations that might help solve the problems Scott describes. Within the market, gardens can be built and can last. To me, Elua is a polycentrically governed world coupled with some altruism on our side, rather than a benevolent superintelligence. Difficult as both are, my solution seems a bit easier.

His speculations are about how an AI could solve our problems. Mine involve us, actual humans, crafting institutions and rules to do so from within the system we live in. We have tools for cooperation, but no guarantee that we will be wise enough to use them. Scott's plan, for its part, requires solving two hard problems: building an SAI and then making it nice.

In a way, the solution to the problem was monarchy at a large scale: making all of us kings, with walls of rights we recognise in each other. At the same time, we do not make these walls infinitely high. There are situations where it may not be bad to breach them: probably those cases where their definition is not very clear, as in the fishermen case, or where the benefit/cost ratio is really high. Ideally, we would end up with a world full of institutions encompassing the resources where problems of this sort tend to happen: make lakes and rivers common resources for their users; regard the atmosphere as one large common property; deal with streets, buildings, and cities in the same way. I'm not downplaying the difficulty of that, I assure you.

Will we necessarily cooperate? No. Can we, in theory? Yes. Is it likely that we will? If we can envision and implement adequate rules and institutions, and if we care about others, yes, we will.

Scott has described our enemy. I have described us, and the weapons we have. Now, go and keep working on making things better! Moloch is not going to kill itself!

Brought to you by Artir, @ArtirKel

Bibliography

[1] Thomas, K. A., DeScioli, P., Haque, O. S., & Pinker, S. (2014). The psychology of coordination and common knowledge.

[2] Kricheli, R., Livne, Y., & Magaloni, B. (2011, April). Taking to the streets: Theory and evidence on protests under authoritarianism. In APSA 2010 Annual Meeting Paper.

[3] Khadjavi, M., & Lange, A. (2013). Prisoners and their dilemma. Journal of Economic Behavior & Organization, 92, 163-175.

[4] Boukephalos, P. (2015) “Experimenting with Social Norms” in Small-Scale Societies. Blog. Accessed 22/12/15

[5] Boukephalos, P. (2015) Where do pro-social institutions come from? Blog. Accessed 22/12/15

[6] Alexander, S. (2015) Book Review: Hive Mind. Blog. Accessed 22/12/15

[7] Shubik, M. (1971). The dollar auction game: A paradox in noncooperative behavior and escalation. Journal of Conflict Resolution, 109-111.

[8] Boukephalos, P. (2014) The Little Divergence. Blog. Accessed 22/12/15

[9] Huemer, M. (2008). In defence of repugnance. Mind, 117(468), 899-933.

[10] Powell, B. and Skarbek, D. (2006) Sweatshops and third-world living standards: are the jobs worth the sweat?, Journal of Labor Research, 17 (2), 263–274

[11] Powell, B. and Zwolinski, M. (2012) The Ethical and Economic Case Against Sweatshop Labor: A Critical Assessment, Journal of Business Ethics, 107 (4), 449–472

[12] Powell, B. (2014) Out of Poverty: Sweatshops in the Global Economy, Cambridge University Press.

[13] Evans, A. J. (2014). Markets for Managers: A Managerial Economics Primer. John Wiley & Sons.

[14] Thiel, P., & Masters, B. (2014). Zero to one: notes on startups, or how to build the future. Crown Business.

[15] Olney, W. W. (2013). A race to the bottom? Employment protection and foreign direct investment. Journal of International Economics, 91(2), 191-203.

[16] Wheeler, D. (2001). Racing to the bottom? Foreign investment and air pollution in developing countries. The Journal of Environment & Development, 10(3), 225-245.

[17] Prakash, A., & Potoski, M. (2006). Racing to the bottom? Trade, environmental governance, and ISO 14001. American Journal of Political Science, 50(2), 350-364.

[18] Boukephalos, P. (2014) Time on the Cross Summary. Blog. Accessed 23/12/15

[19] Beito, D. T., Gordon, P., & Tabarrok, A. (2002). The voluntary city: choice, community, and civil society. University of Michigan Press.

[20] Boétie, E. D. L. (1975). The Politics of Obedience: The Discourse of Voluntary Servitude. Ludwig von Mises Institute.

[21] Frey, C. B., & Osborne, M. A. (2013). The future of employment: how susceptible are jobs to computerisation. Retrieved September 7, 2013.

[22] Caplan, B. (2011). The myth of the rational voter: Why democracies choose bad policies. Princeton University Press.

[23] Hanson, R. (2013). The Hanson-Yudkowsky AI-Foom Debate. Berkeley, CA: Machine Intelligence Research Institute.

[24] Huemer, M. (2012). The problem of political authority: An examination of the right to coerce and the duty to obey. Palgrave Macmillan.

[25] Leeson, P. T., Coyne, C. J., & Duncan, T. K. (2014). A Note on the Market Provision of National Defense. Journal of Private Enterprise, 29 (Spring 2014), 51-55.

[26] Leeson, P. T. (2014). Anarchy unbound: Why self-governance works better than you think. Cambridge University Press.

[27] Stringham, E. P. (2015). Private Governance: Creating Order in Economic and Social Life. Oxford University Press.

[28] Skarbek, D. (2014). The social order of the underworld: How prison gangs govern the American penal system. Oxford University Press.

[29] Leeson, P. T. (2009). The invisible hook: the hidden economics of pirates. Princeton University Press.

[30] Cavender, R. S. (2013). Internal Reinforcement of Cooperative Outcomes: Evidence from Virtual Worlds. Available at SSRN 2503732.

[31] Cavender, R. S. (2014). Reputation Revisited: Evidence from Virtual Worlds. Available at SSRN 2503733.

[32] Barnett, R. E. (2014). The structure of liberty: Justice and the rule of law. Oxford University Press.

[33] Tanzi, V. (2004). A Lower Tax Future? The Economic Role of the State in the 21st Century. Politeia, 45.

[34] Rallo, J. R. (2014). Una revolución liberal para España. Deusto.

[35] Zwolinski, M. (2014) Libertarianism and Pollution. Philosophy & Public Policy Quarterly. Vol 32 No 3/4.

[36] Huemer, M. On the need for social coercion. Web. Accessed 24/12/15

Apologies to Pseudoerasmus for citing him in a post defending something somewhat optimistic!
