Here I will try to list everything Scott has written that is relevant to understanding his ethics. I imagine his views have changed over time, and the final authority on what Scott thinks is Scott himself, so if he objects to any of his previous selves' thoughts, he wins. One could think that the topic begins and ends with his Consequentialist FAQ, written in 2011. But one would be wrong.

For some things I provide paragraphs or comments. In all cases, you don't need to read the comment sections; I've already copypasted the relevant material here. For the articles marked 'Entire post', I recommend reading them in full. Everything is in chronological order.

If you want a TL;DR, it's something like this: Scott Alexander is a utilitarian in theory, but not in practice. He probably has an idea of what the world should look like, and if that conflicts with utilitarianism, the idea trumps utilitarianism. It happens that utilitarianism and his idea tend to align, so it's not that bad. Lately, he has been exploring whether ethics can be given a more solid basis by adding contractualism into the mix. The idea of grounding ethics has been with Scott since he began blogging, and at times he even seems like a moral realist. Also, contrast his argument contra Huemer with what he says in Misperceptions on Moloch, quoted below.

On Less Wrong

The trouble with "Good" (April 2009, Entire post)

On the old blog

I got inordinately excited when I learned about utilitarianism as a college freshman, because it solved many questions that were legitimately bothering me, like "How come it's okay for governments to tax when that's a lot like theft" or "How do we balance competing moral obligations?". Recently (by which I mean the past five years) I've been exposed to a number of serious problems with utilitarianism, like "No one has more than a vague idea what utility is, and any particular formulation seems to lead to wildly counterintuitive conclusions". Some of these problems have proven tractable, others have not. Yet I have to admit, deep down, that a lot of them provoke the same response in me as "What are elementary particles made of?" (this reminds me of how mathematicians tend to dismiss certain problems as "trivial", including ones they don't know how to solve. Certain issues in utilitarianism seem "trivial" to me in this sense; for example, when asked about total versus average utilitarianism, I tend to just say "The domain of utilitarian theory is over moral problems that do not involve changing the number of agents.") And this served me well for a while in letting me concentrate on problems I found interesting, but now I'm starting to think that a major reason people differ about the big things is a difference in which problems they consider "trivial". Some intelligent people I know avoid utilitarianism precisely because of problems like these. (Stuff, Jun 2012)

I've always been a fan of the utilitarian argument that you should donate everything you have to charity and even wrote up my own version, but of course that will never happen and so the tendency is just to sit around feeling vaguely guilty and not donate anything. (Utilitarian Jihad, Sep 2012)

The liberal project is to push the fundamental unity of mankind - to cast off all differences between white and black, rich and poor, men and women, gay and straight, patriot or foreigner, Westerner or foreigner, as irrelevant beneath our common humanity. The ultimate liberal morality is utilitarianism, which converts everything into a single common moral currency and goes from there. Although liberals are generally not in favor of world government right away, most of them admire weaker institutions like the UN and think a genuine one-world government is an admirable if somewhat starry-eyed goal. One cynic declared that the liberal project was to "eliminate all distinctions not relevant to the profitability of an investment bank", and I admire the sentiment if not the pessimism. Both Moldbug and the Discordians use the same term for the liberal project: they want to immanentize the eschaton. (The Wisest Steel Man, Sep 2012)

Consider the following argument:

If entities are alike, it's irrational to single one out and treat it differently. For example, if there are sixty identical monkeys in a tree, it is irrational to believe all these monkeys have the right to humane treatment except Monkey # 11. Call this the Principle of Consistency. You are like other humans, not an outlier from the human condition. You have no unique talents or virtues that make you a special case. You want to satisfy your own preferences. So by Principle of Consistency, it's rational to want to satisfy the preferences of all humans. Therefore, morality.

Does this argument go wrong, and if so, where? It feels like cheating to me. And if I had to cash out exactly why, it would be a lack of belief in categorical rationality, rationality that can tell you what to want independent of ends. "It is rational to want" seems like a weird category error, and my description of the Principle of Consistency sort of sneaks it in by conflating epistemic and instrumental rationality. On the other hand, a lot of people do believe in categorical morality and in fact get really upset when moral theories aren't categorical and can't tell them what to want from first principles. I wonder if those people would accept this as a valid grounding of morality. [...] A lot of people seem to be rejecting the "all humans are sufficiently alike" principle. But in order to thwart the argument, I feel like you not only have to prove the relatively easy proposition that humans are not actually alike, but that the differences among humans occur in such a way that you deserve special positive moral treatment (so that you can focus on your own desires but ignore others' desires consistently). In other words, the differences between humans have to be such as to grant muflax [Note: one commenter] alone special moral status. (or muflax and a small group of others selected for some objective non-indexical criterion. That would also create a morality, albeit not a very inclusive one. If you think only white people have moral value, you're a racist but at least not an error theorist) [...] If we treat morality as an objective fact, like there really is such a thing as "right to humane treatment" which monkeys either do or don't have, then it would be weird to suspect without evidence some distinction between monkeys, just as it would be odd to point Monkey #11 and say "I bet that monkey, and none of the others, has liver cancer". If we treat morality as being about your desires, then of course you can randomly choose one monkey and do whatever you want with it; it's not rational, but desires aren't supposed to be. (Ground morality in one hundred words or less, Nov 2012)

Utilitarianism for engineers, Jan 2013, Entire post

On Slate Star Codex

[Comment section] It’s a desirable feature for me, or at least one of the moral intuitions I have is that a moral system ought to be objective, and this is sufficiently important that I’d be willing to trade off a lot of other ways in which a moral system could match my desires in order to satisfy objectivity. This is part of why I’m saying I should be willing to compromise my morals in order to establish a communion of people who all agree on the same moral system and can act as if it’s objective. [...] Like I thought you couldn’t prove utilitarianism was right, but once you accepted it, you could use the single word “utilitarianism” to instantly derive an elegant moral system out of thin air, which was the obvious Schelling point for anyone and would correspond to all my moral intuitions. Instead, it turns out I basically have to enter in all my moral intuitions by hand, and what I’m doing is obviously just doing what I want in a way that doesn’t make a convenient Schelling point at all. Whose Utilitarianism (April 2013, Entire post)

Book review: After virtue (April 2013, Entire post)

Newtonian Ethics (May 2013, Entire post)

Posts on Raikoth (May 2013, All the posts)

A something sort of like left-libertarianism-ist manifesto (December 2013, Entire post)

You Kant dismiss universalizability (May 2014, Entire post)

Meditations on Moloch (July 2014, Entire post)

The Invisible Nation. Reconciling utilitarianism and contractualism (August 2014, Entire post)

Morality is really complicated, but if we are to believe moral discussion can be productive even in principle, we have to believe that our brains are less than maximally perverse – that they have some ability to distinguish the moral from the immoral.

If our brains are built to accept true ideas about facts and morality, the default should be that many people believing something is positive evidence for its truth, or at least not negative evidence. Misperceptions on Moloch (August 2014, Entire post)

Cooperation un-veiled (September 2014, Entire post)

Bottomless Pits of Suffering (September 2014, Entire post)

Ethics offsets (January 2015, Entire post)

Blame Theory (April 2015, Entire post)

[Comment section] I do think continuation (and success) of the human species is a good in itself for non-utilitarian reasons. [...] I actually think painlessly killing animals is pretty morally neutral. The main reason that painlessly killing humans isn’t morally neutral is because humans have preferences, are able to see it coming, and have complicated social relationships that get disrupted when they’re killed. (Vegetarianism for meat eaters, September 2015)

Contra Huemer on morals (October 2015, Entire post)

You may wonder why I bother with all this. Scott is one of the few people whose claims I will probably take as prima facie true, no questions asked. People to whom I grant this privilege are people who have an adequate mix of humility, knowledge, willingness to change their mind, charity when criticising others, and good manners. I find that among my chosen people there tends to be broad agreement about many things, and discrepancies prompt me to try to understand their source, which ends up meaning understanding someone's whole worldview. Ideally, either that worldview breaks the containment unit you set up in your brain and takes you over, in which case you reach agreement, or your worldview smashes the other, in which case you can criticise the original one and hope that this will induce a change of opinion in the other person, and thus agreement. I don't believe that Aumann's Agreement Theorem holds, but I do believe that broad agreement among people with the characteristics described above is a desirable epistemic property.
