... it is about signaling! :)

If you are part of the select readership of Nintil, you probably know about Robin Hanson of Overcoming Bias fame. Perhaps you have also heard about Kevin Simler, who blogs at MeltingAsphalt. They just released a book together, The Elephant in the Brain.

It is worth laying out some backstory to understand where this book comes from before getting into the review itself: for years, Robin Hanson has been writing pieces arguing that "X is not about X, it is about Y" (where Y is usually signaling). Healthcare is not about healthcare, it is about showing how much we care. Politics is not about policy, and so on. The core idea here is that many people, or even entire societies, do things that seem, superficially, to be done for one reason, but actually, on deeper reflection (more on this later), are done for a different reason.

On the other hand, Kevin Simler had been writing at his blog about many things, among them reflections on that nebulous concept that is social status, with practical examples, including an anthropological view of the inner workings of a startup. He then wanted to do a PhD thesis, but instead ended up teaming up with Robin to produce this book, which may have been in production for four years (like a proper PhD thesis).

(0): Preliminary considerations

Below follows my review. I have chosen to focus on claims that I considered interesting, or that I disagreed with. Thus it may seem like I disagree more than I actually do, or that I address secondary or minor claims too much: if I agree I will either mention it in passing or omit it, while disagreements require me to explain why I disagree, hence the asymmetry.

The review doesn't just deal with the arguments and points made in the book; I also expand on some questions I thought were interesting. Some lead to support for the book's thesis, others to more nuanced results.

(I): An introduction

The book's main thesis can be boiled down to

Our main goal is to demonstrate that hidden motives are common and important— that they’re more than a minor correction to the alternate theory that people mostly do things for the reasons that they give. For this purpose, we don’t need to be right about everything. In fact, we expect most readers to buy only about 70 percent of what we’re selling— and we’re OK with that.

In one sense I agree with the thesis, and in another sense I disagree, and overall I gravitate towards this second sense, though after having read the review you may think that I actually gravitate more towards the first one. It can be both things, depending on how one weighs the different agreements and disagreements. I am weighing the core claims heavily and discounting many object-level claims. I am also putting some weight behind conceptual clarity and discussion of possible alternative explanations (side effects of reading too much philosophy!). This last thing does not make me disagree per se, but it makes me reduce the strength of my agreement with a given claim.

But where is this disagreement? Isn't it true that education is - to a large degree - about signaling? Isn't it true that politics is not just about making policy? Isn't it true that charity is not just about helping others in the most efficient way? Yes, those things are true, but that's not my point. The object-level claims of the book, the claims about how things are, are largely correct. It is the interpretation I take issue with.

I'll come back to this shortly, but first some unpacking of the author's main thesis:

What is exactly the elephant in the brain?

The authors (Simler and Hanson, henceforth SH) say: selfishness, the selfish parts of us. But not quite, they say shortly after. The elephant in the brain is also competition for status, power, and sex, and also misdirection, lies, and self-delusion.

These are, note, different things. One can be acting for selfish purposes deliberately instead of "unconsciously", for example. Indeed, most of the things we do are done selfishly, for our own goals and purposes. We perhaps don't think about it, but we don't think that we are constantly breathing either, yet we wouldn't say that we are deluded about breathing.

Things we do that are not selfish are few and far between: helping strangers, donating to charity, just being nice to people. But note that even in some of these there can be a selfish component to the altruistic action, and it can even be the case that this selfish component can be arranged to serve a non-selfish end, as we will see in the discussion on Effective Altruism.

Given that we already do most things for selfish purposes, and that this should be obvious, SH's novel thesis cannot be just that we have these selfish purposes. What the book does is offer deeper (ultimate) explanations for the reasons (proximate) behind behaviours, explanations that shine new light on everyday life. It explains why we like certain things, and a lot of that indeed has to do with status and sex.

(II): Education

Let's get concrete and consider education.

My prior view is that everyone will admit that the reason one goes to university is to get a job (not to learn), which is as selfish as motives go. Then, when asked why they pursue a given degree, students will say that education will give them knowledge or skills that will be required for the future job. Again, no self-deception here. You see engineers doing engineering things, and you see more of these engineering things throughout your degree. You learn many things, many of which you will never see afterwards, but a student will just say that this gives you the grounding to understand why the techniques you will use work (e.g. learning about the Navier-Stokes equations before using CFD software), and the knowledge to know where to look if you need to learn about X. You also gain broad "analytical skills and commitment to hard work" (although, as SH show, you mostly had those to begin with). You then graduate, get a job, and boom, done. Where is the elephant in the brain?

Let's have a look at the book itself to see where it is. SH present a series of puzzles about the simple view of education that they think most people have:

  • Anyone can get a world-leading education sans the degree itself (anyone can walk into Stanford), yet no one does
  • Students (think high school now) are happy when a teacher cancels a class, or when they can't go to school because of bad weather.
  • Employers value finishing the last year of uni a lot, but they do not value the intermediate years of uni as much (sheepskin effect). If the value added by university were about the learning, one should expect that each year gains you far more human capital.
  • Being a graduate gets you higher pay in jobs that don't require formal education, like waitressing.
  • Much of what schools teach is of little use in real life, like history, art, or foreign languages (Unless your foreign language is English, I must add!). Similarly, for those who do not pursue scientific careers, science classes are a waste of time.
  • 35% of college students major in things like communications, English, liberal arts, interdisciplinary studies, history, psychology, social sciences, or visual and performing arts.
  • Even in engineering, students will not use much of the knowledge they acquired, and companies see themselves as having to train graduates
  • Students forget most of the stuff
  • Schools use suboptimal teaching methods and start classes at suboptimal times

Why is this?

Signaling. The idea here is that the way education works is oriented not so much towards teaching (building up human capital) as towards credentialing: each student has a somewhat fixed ability level (i.e. intelligence, conscientiousness, etc.), and being able to get through university certifies that you do have those skills. IQ tests alone wouldn't do, because the sought-after traits are not just intelligence. Getting through 3-5 years of university while maintaining consistently high scores says not only that you are smart, but that you are willing to devote those smarts to a task, for years, dealing with both subjects you like and subjects you don't: education doesn't make you a better worker, it reveals how good a worker you already are. (Well, it helps somewhat, but you get the idea.)
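The difference between the human-capital and signaling stories can be made concrete with a toy wage model (all numbers below are invented for illustration; nothing here comes from the book). Under pure human capital, each year of study adds roughly the same wage premium; under signaling, most of the premium is concentrated in the final, degree-granting year, which is exactly the sheepskin effect.

```python
# Toy comparison of human-capital vs signaling wage profiles.
# All numbers are invented for illustration.

def wage_human_capital(years: int) -> float:
    """Each year of study adds the same ~5% premium to a base wage of 100."""
    return 100 * 1.05 ** years

def wage_signaling(years: int, degree_years: int = 4) -> float:
    """Small per-year premium, plus a large jump for finishing the degree
    (the "sheepskin" bonus)."""
    wage = 100 * 1.01 ** years
    if years >= degree_years:
        wage *= 1.20
    return wage

for y in range(5):
    print(y, round(wage_human_capital(y), 1), round(wage_signaling(y), 1))
```

In the signaling profile the jump between year 3 and year 4 dwarfs the earlier increments, matching the observation that employers reward the completed degree far more than the intermediate years.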

This, they say, explains

  • Why students are more interested in credentials rather than learning (and why they don't just get free education)
  • No one is bothered when curricula are impractical, as what matters is not the content
  • Signaling also explains the sheepskin effect
  • That education doesn't increase national income much, even though it does increase the income of an individual getting it (Zero-sum game)

They then ask: isn't there an alternative explanation for this? They consider the explanation that "going to school is the norm" (but find it unsatisfying): if school is so inefficient, why don't people innovate? There is Coursera, but most kids still go to school and most young adults still go to uni.

Something that makes school easier to accept is that school also serves as a consumption good: we make friends and contacts at uni, and teachers babysit kids at primary and secondary school. But, they say, people tend to downplay this and play up the prosocial aspect: learning, which improves society overall. This is an interesting bit: in society-level conversations, "learning" is what is usually talked about ("Let's invest in uni to get more uni grads and improve productivity, etc."), but at a micro level people mostly want the jobs, and don't talk that much about the learning.

Two more functions they ascribe to education. One is propaganda: education is used by states to instill in kids a sense of shared citizenship and patriotism. Two is domestication: by subjecting kids to routines over and over, they grow accustomed to the lives they will have when they grow up, and so they will be able to follow (and give) orders.

Hm.

Here's another proposed explanation: Yes, there is signaling, but there are no hidden motives. There is just misinformed honesty and misaligned incentives.

  • The authors claim students would take the degree for free rather than the actual education, but they don't back this up with anything. To me, this doesn't ring true, for the same reason one wouldn't want a Nobel prize for free without the related accomplishment. It would feel fake (harmful for one's self-image), and you might feel that you would be lying to employers, potentially getting you into trouble. I may be wrong, but this doesn't feel like a strong argument. Students may prefer a nice job without going through uni, but those are rare.
  • Contra SH, people are bothered by curricula being impractical. Googling "Why does school teach useless things" returns 4.5 million hits. That is, some people who reflect upon their situation say this. Other people may just go with the flow, tacitly accepting (while being bothered by) what they are taught, since they have no choice. Both at school and at university, I remember conversations about this very thing.
  • Signaling explains the sheepskin effect, no disagreement here
  • Education is socially meh and individually gainful. Again, no disagreements.

What about the puzzles:

  • Students largely don't get a world-leading education at Stanford for free because
    • They don't even know this is a possibility. I didn't become aware of it until I was already well into uni and the thought occurred to me. Unis do not publicise this information, and no one really talks about it.
    • People don't like to be the ultimate conspicuous free rider. Yes, you can sit in a corner for four years surrounded by strangers, wondering if they will ever find out, fearing being asked by a professor whether you are registered at the uni. Plus maybe students don't like free riders!
    • That said, some students do indeed do that. But from an employability point of view this might not be the best thing. How can the employer be sure that you have the knowledge you say you do? And it would look awkward to write in your CV "I sat for four years at Stanford, but I didn't pay or anything. Will pass an exam if you want me to". As a recruiter I would be interested in such a candidate, but from the company's perspective, doing the exam ourselves would be an extra cost which you can avoid by hiring an actual grad.
  • Students are happy when classes are canceled because they think classes are inefficient and thus can be compressed. Students at school are there to pass the exams, get high grades, go to uni, and get a job; ask any of them. There is a time T allotted to teach all the material, and if you think, as a student, that the efficient time is smaller than T, as was the case when I was at school, then you will be happy if classes are cancelled: it just means you will get more condensed classes afterwards. Exam- and learning-wise it is the same, plus you get free time.
  • Much of what school teaches is useless: Why do schools teach what they teach? Many reasons
    • States have made it compulsory for schools to teach a set of subjects. In the UK, by law, schools have to teach a bunch of stuff, including art, music, physical education, and foreign languages. The same applies to any developed country (AFAIK). And it applies to most if not all forms of education, private or public. Not everyone can homeschool.
    • And why do they teach that? Because whoever makes those laws honestly believes that those things matter. Sure, you can argue that an arts teacher is selfishly concerned with her employment prospects, but I see as more plausible a model where thinking that art matters causes both being an arts teacher, and wanting others to learn art.
    • Other arguments are given, of course: that students don't know what they will like, so they should be exposed to a variety of subjects; that general knowledge is useful in and of itself; etc.
    • Ultimately, we might have these things - I'm speculating here, by my own admission - because of the aristocratic and then religious origins of modern education. In the past, if you wanted to become a blacksmith, you just learned from your father or became an apprentice at the local guild. Aristocrats had to learn their signaling subjects. Men of religion had to learn grammar, logic, rhetoric, and Latin to engage in theology. When education was then extended to the masses, elites wanted them to have proper values and morals, etc. People might also have been looking upwards at what the elites were doing and, seeing those things as high status, they became ingrained into what education was becoming. Fast forward to the present and we still have some of that. (This could also be tied in with the propaganda explanation.)
  • As for students who study the "useless" subjects like the liberal arts mentioned above: as anyone with access to Google would know, those subjects get you consumption benefits (people like studying them), and they also get you access to jobs. Are students deluded about the purposes of their education? Not at all! They can honestly think that they study what they like, and that they also gain skills somewhat related to their future employment. A story like "I like History so I study it; plus, learning History will help me gain a broad understanding of human nature, writing, critical analysis, etc., and this is useful for management or whatever" sounds like a plausible reason a History student would give.
  • Students forget most of the stuff: but they, ex ante, underestimate the degree of this forgetting. There is so much more, and deeper, material at uni that extrapolating from school is doomed to fail. Plus even when they forget, they can usually recall (I expect) where to relearn those things. Right now I don't remember the exact composition of weathering steel, or how to do obscure integrals (classic example of "these contrived functions never appear in physics or engineering, why are we doing them!"), but I vaguely remember what an answer would look like.
  • Schools use suboptimal teaching methods: SH say lessons could start later at schools, and I agree. But this seems easier to impute to social inertia: reports saying that those practices are suboptimal have only become news recently. It will still take time for the research to percolate into policy. It is a change that has only just begun to happen.

Misinformation, incentives, and social inertia don't sound like as much of a breakthrough theory as our having hidden motives, but the former seems to me the more parsimonious explanation.

So here is the thing: I have agreed with the signaling theory of how education works. I have agreed with the object-level claims that education is inefficient, that students forget what they learn, that much of what schools teach is useless for getting a job, and so forth. My disagreement is in the why. We observe and agree on the reality of the same phenomena, but we disagree about the underlying, hard-to-observe (we would have to engage in some surveying and historical analysis) causes of what we see.

(III): Medicine

We have seen education. Let's have a look at medicine. Again, I largely buy the object-level claims.

Medicine seems to be about health, but it is about showing that we care. Medicine is a giant, socially coordinated, motherly kiss on a wound. That's the gist of the chapter.

The facts to explain here are:

  • Medicine is very expensive
  • Historically, medicine was awful (e.g. leeches, rituals, dances, and prayers) and yet people kept demanding it. Some of these treatments are described as "elaborate and esoteric", conspicuous displays of effort (to show that the patient was cared for)
  • People overconsume medicine
  • Even within the same country (the US), rates of different sorts of surgery are quite different, and so is expenditure on them.
  • Extra medicine doesn't help that much. (e.g. RAND study)
  • People who live in richer countries spend more on healthcare than people with the same income and wealth who live in poorer countries
  • Patients and families are often dismissive of simple cheap remedies, preferring expensive, complicated gadgets and complex procedures provided by the best doctor.
  • Only doctors are allowed to treat patients, even though nurses are just as good.
  • Patients are not much interested in how good hospitals are, even though there is variation in outcomes between hospitals in the same area. Publishing death rates for surgery in different hospitals barely changed admission rates. Only a high profile news story about a patient dying did.
  • It is taboo to question the quality of medicine
  • Very simple practices would much improve healthcare (e.g. getting doctors to wash their hands!!) but they are largely ignored by the general public. People rarely seek a second opinion.

I agree with many of these: medicine is expensive, medicine has been historically awful, people get too much of it, medicine is not as helpful as people think, and healthcare behaves as a luxury good (the richer you are, the more you get relative to your income).

The explanation given for these patterns is a story about us evolving to show conspicuous caring for others, and conspicuous demand for care for oneself, but it seems that they can also be explained differently.

My explanation for the above is

  • The public is generally ignorant
  • In healthcare, knowing what works is especially difficult
  • The particular incentives of socialised healthcare (which the US ends up having in effect, due to how its incentives are set up)
  • Self-interested healthcare professionals lobbying
  • A general desire to provide healthcare to the poor
  • Risk-aversion

SH offer a series of predictions that would be true if the caring model were true:

First, it should be true that if people around you spend a lot, you will spend a lot to avoid looking like you don't care. On this, they mention a study that says people with similar income and wealth who live in different countries spend different amounts of money on healthcare: those in richer countries (richer neighbours) spend more; those in poorer countries spend less. That is: if your income stays the same and you move from a poorer to a richer country, you will spend more. But, they argue, if medicine is just about getting an amount of health for X amount of money, and the health you get per X is the same in every country, the fact that you spend more means that you get more healthcare. And why would you do that when moving to a richer country? Because your neighbours are richer, they say, and you want to keep up with them.

The paper they mention doesn't literally test that hypothesis at the micro level, which is the one I think should be used to judge it. What the paper shows is that healthcare consumption is a normal good at the individual level (if your income increases by X%, you consume X% more), but that at a national level it is a luxury good (if national income increases by X%, national health expenditure increases by more than X%). Not everyone agrees with this: the OECD notes that there are disagreements on elasticities at the national scale, and that the most recent findings, including natural experiments (e.g. oil shocks), point to healthcare not being a luxury good (de la Maisonneuve and Oliveira Martins, 2013). They point to technological progress as a driver of that increased consumption (but there is also evidence against this!)
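To make the normal-good vs luxury-good distinction concrete: income elasticity is just the ratio of the percentage change in spending to the percentage change in income (conveniently computed with log differences), and a value above 1 marks a luxury good. A minimal sketch with made-up numbers (not the paper's data):

```python
import math

def income_elasticity(income_a, spend_a, income_b, spend_b):
    """Arc income elasticity: % change in spending / % change in income,
    computed with log differences. All numbers used below are made up."""
    return (math.log(spend_b) - math.log(spend_a)) / (
        math.log(income_b) - math.log(income_a)
    )

# Individual level: doubling income roughly doubles healthcare spending
# (a normal good, elasticity ~ 1).
print(round(income_elasticity(30_000, 2_000, 60_000, 4_000), 2))  # → 1.0

# National level: a country twice as rich spends 2.6x on healthcare
# (a luxury good, elasticity > 1).
print(round(income_elasticity(30_000, 2_000, 60_000, 5_200), 2))  # → 1.38
```

The same arithmetic applied to individuals versus whole countries is what generates the apparent paradox the book leans on.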

This increased consumption doesn't seem to be due to higher prices, as has been argued. It's not because of an increase in consumption of particular healthcare services either: the US doesn't have more doctors, consultations, or hospital beds. It even has shorter hospital stays. These metrics in general do not correlate with national income. What drives the association is more cutting-edge technology and diagnostics: stuff like use of PET, MRI, or advanced surgeries, and probably just making hospitals nicer places.

Is all this extra medicine worthless? Hanson says yes: quoting the RAND study and other sources, he says extra medicine doesn't help. But these sources may not be picking up specific treatments, and the stuff that is making US healthcare expensive is not something that would show up in the RAND study: the RAND study measured blood pressure, vision, cholesterol levels, heart attack rates, and weight, and back then many treatments now common did not exist or weren't advanced enough. Because of this, I join Hanson in calling for a RAND study Mk 2.

Those advanced procedures do show up if one looks at survival rates for cancer, heart attacks, and strokes, and at successful medication of diabetes (Preston and Ho, 2009). Even though life expectancy in the US is lower than you'd expect it to be, once you reach old age your life expectancy is higher than in Europe. Survival from cancer is also higher in the US than in other countries.

So it turns out all that extra money being pumped into healthcare is doing something, but not something measurable in the general population: only in those few unfortunate enough to suffer from certain illnesses. If you are thinking that there are other factors behind this, one can look at RCTs instead.

Does medicine work? Hanson hasn't said much about specific treatments. He has - rightly - complained about cancer screening, but this is no evidence against cancer therapies themselves (RCTs show that they do work, on top of surgery). As for the most popular forms of surgery, cataract surgery also works. Arthroscopy, a very popular procedure, doesn't, on the other hand (it has a tiny effect, but no more than analgesics). Organ transplants are also a big thing, but no RCTs have been done so far. Still, if you count "dying while on a waiting list" as random enough, then they also work. This shows that claims like "medicine doesn't have any effect" have to be considered on a case-by-case basis. Sure, many therapies do not have solid evidence (meta-analyses of RCTs) behind them, but that doesn't mean they don't work.

Still, I repeat my agreement with Hanson that lots of medicine is waste. But the reason for this is that knowing what works is hard. One can do RCTs, but if practice advances fast enough (some RCTs can take a decade to measure effects), then those RCTs won't be of much use if the practices they studied have been abandoned. What is worse, RCTs themselves are a relative novelty, especially large, well-designed ones. The first modern RCT appeared in 1948, but it wasn't until much later - around 1980 - that they became the gold standard and the literature began to take them more into consideration. And even then, evidence is hard to gather: imagine if someone were told, "hey, we are doing a heart bypass RCT, do you want to join the study? If you do, then whenever you need surgery, you will be given either real surgery or a mock surgery at random". Who would sign up for that?

One may say: "But what about the simple stuff? Knowing whether complex surgery works is hard. But 40% of doctors don't wash their hands properly!" That is also hard: that washing hands is important is accepted and has been known for more than a century, but doctors may think they are doing enough when they are not. The statistic Hanson mentions is from 2008; by 2010, compliance rates had increased to 81% in the same hospitals (Chassin et al., 2015). For another of the examples discussed, adopting simple checklists, the evidence is quite recent, and far from uniformly positive. In 2010, Canada mandated the adoption of one of these checklists. A recent paper (Urbach et al., 2014) studied the effect of this in 133 different hospitals, with a sample size much larger than previous studies, and failed to see any effects.

Low-hanging fruit is out there for sure, but once it becomes known with reasonable certainty, it propagates through the health system. This takes time, though.

Back to the book. Contrary to Hanson, a fixed amount of dollars doesn't get you the same amount of care everywhere. In less developed countries, some treatments won't be available to the general population, or they will be extremely scarce. And of course, at a world level, it is possible to get high quality healthcare in poorer countries for cheap (due to Baumol's cost disease, I venture): that's why medical tourism is a thing.

Their second prediction is that there will be a preference for treatments requiring visible effort and sacrifice to show caring, as opposed to just choosing what works. SH argue that patients and families are dismissive of simple and cheap remedies like "relax, eat better, get sleep and exercise" and that they prefer complex medical gadgets provided by good doctors. What is more, patients feel better if they are given a placebo pill, and better still if it is expensive.

First, note that the placebo effect exists only for pain in particular, and in this particular case pain is modulated by expectations. (A more expensive pill is presumably better; heuristically, better things are usually more expensive.)

Second, the preference for expensive treatments makes sense: if you have a headache, you take a pill and it goes away. If you have a broken bone, surgery to fix it is going to be better than letting it be. From a patient's point of view, it makes sense to choose the most effective treatment. If you are insulated from the cost, you will choose the most expensive, because, why not? As I said, if there are cheaper and more expensive treatments for the same condition, the better-is-expensive heuristic will lead you to go for the expensive one if you can. Similarly, a choice between a "worse" doctor and a "better" doctor will, almost by definition, lead to the "better" doctor being chosen: why wouldn't one go with the best, if one can afford it?

Note that this works even if the "best" doctor is no better than others in reality, or if the cheap treatment works as well as the expensive one: In cases of high uncertainty, and given cost insulation, you will go with the expensive one.

But then, for health conditions whose workings people are aware of, like headaches, colds, or the flu, one rarely goes for any expensive treatment: one stays in bed, or buys some pills.

The terminally ill consume more resources, they say, and this is true. But if you are terminally ill, given your terrible health condition, it is going to be progressively harder to keep you alive, ramping up costs, and people like to cling to life as much as they can. Treatments do accomplish this. Relatives may also want to bet on the small chance of a recovery. These efforts do accomplish what people think they accomplish: extending the life of the patient. They also accomplish something unwanted: making the patient suffer. But again: if you expect recovery, then suffering for a while and living afterwards can be better than just dying (as with painful medical procedures).

The third prediction is a focus on public rather than private signs of medical quality. This means that people don't look at local performance track records, but look at standard and widely visible credentials and reputations. Nurses are as effective as doctors, but only doctors are allowed to treat patients.

Patients who were to undergo a dangerous surgery were offered information on the survival rates of different surgeons and hospitals (these rates happen to show large variation). Only 8% of the patients were willing to spend more than $50 to learn these death rates. When the US government published risk-adjusted death rates for hospitals, their admissions didn't fall much. But a high-profile news story about a patient dying made them fall.

Regarding this latter case, the high-profile story probably reached most people, while with the risk-adjusted death rates, most people were unaware they could use them. In the study they cite, only 12% were aware that the information existed before the operation. After becoming aware of its existence, 56% were interested in seeing it. The study discusses possible reasons why this may be so: one of them is that, regarding healthcare, people value anecdotal reports from relatives and friends more than reports from governments or the media. (I haven't looked into this much; I note it down for a possible followup)

Regarding doctors and nurses, the explanation may be the same as for most occupational licensing: genuine concern with quality, plus lobbying.

The fourth prediction is a reluctance to question medical quality. Does this even exist? The authors point out a series of easy yet not universally followed procedures that, if adopted, would save lives. But I would say that people are not even aware of these procedures, so this is not a good argument. If people knew, they might well complain.

The core piece of evidence here would be the observation that questioning healthcare is taboo. Yet I've observed plenty of ranting about healthcare in real life: complaints about too many operations, or complaints about the rooms in the hospital not being nice enough, or complaints about waiting lists.

But it makes sense that, except in the most egregious cases of mistreatment or blatant, observable error, one wouldn't complain: first, you know that health treatments can sometimes be nuisances, and you are not really sure of how much treatment you need. The doctor is an expert who has studied and knows about that, and you are not, you think. This is not unique to healthcare. If one is receiving any service provided by an expert (legal advice, car repair, financial advice, etc. - the lawyer knows more about law than you, the technician more about car repairs), one doesn't question the expert unless one has grounds to do so, and this is not the usual case. (Why would you hire the expert in the first place if you thought you knew more than they do?)

The fifth prediction is a focus on helping during dramatic health crises. SH say that people favour providing medical interventions to those who are sick, but are less eager to favour "lifestyle interventions" (changing diet, sleeping better, exercising more, etc.).

This also makes sense under the null hypothesis (no healthcare-as-caring): people place a value on the autonomy of the patient. You want to help people get back to their daily life of choice, not alter their lives. That said, people do routinely tell each other to exercise more, to smoke less, and so on, but they don't insist hard on it, and it is usually done by someone close, not by strangers. In general we want to have our cake and eat it too: we want to eat whatever we want and avoid exercise, while fixing any resulting issues using healthcare.

Does the healthcare-as-caring theory make more predictions?

Maybe that we would use healthcare as a gift ("Happy birthday, here is a free blood test!") and give each other healthcare vouchers (which would be available), but we don't see that. Or perhaps kids and babies would be given lots of healthcare. But we don't see that either (healthcare use rises with age, as one would expect if a genuine weakening of health is behind it). Or perhaps we would talk more about the healthcare we get. People usually talk about their travels, or nice restaurants, but rarely about the healthcare they consume (they don't signal their consumption).

Another of the predictions from this specific evolutionary model is that high-status people will be healthier because of their higher status (Hanson, 2008). This correlation is real. Hanson postulated that it would be mediated by stress, and says that most of the relationship goes from status to health, not vice versa. In the paper, he already noted that identifying the why has proven difficult. But I think that as of today we can say that the relationship is noncausal. When people win money in a lottery, for example, their income (an imperfect proxy for status?) rises, but their health largely does not. (Why, then, is there a correlation? Genes are a possible confounder.)

Of course, we care about others. One way of showing that we care is giving others things we think are good for them. I don't deny that this motive may be at play, but it doesn't seem like the main thing.

(IV): Charity

Let's jump to charity now.

The chapter begins with the well-known "drowning child" thought experiment, conceived by Peter Singer in his Famine, Affluence, and Morality (1972). He said: "If you see a boy drowning in a shallow pond, you have a moral obligation to rescue him, even if you have to sacrifice something minor (e.g. your expensive shoes) to rescue the kid. But then, if the pond and the kid are far away, does it make a difference? If instead of a pond it is famine, does it make a difference?" It doesn't, argues Singer, and so if we think we ought to save the kid, we ought to save all the kids, or as many as we can. Thus, we ought to donate most of our income to help the poor abroad.

SH say that this highlights everyday human hypocrisy: a gap between the stated ideal of wanting to help those who need it the most and what people actually do.

I take issue with both the thought experiment and what the authors read from it. A few years ago, when I read Singer's paper, I immediately thought that the argument was weak. Sure, one could say, if you have one kid drowning and that's it, there doesn't seem to be much of an issue in arguing for a positive duty of rescue. But imagine the following scenario: when you come back from work, there is a pond full of drowning kids. You rescue a bunch until you are tired. You go back to work, come back, and the pond is still full. You again rescue as many as you can. And so on. Your life is then reduced to the bare minimum needed to keep rescuing kids. (Of course, you are allowed to do whatever you need to stay motivated to keep doing so.) In this scenario, which is a far better analogy of the real situation, few people would say you have a perpetual duty of rescue. Most people, myself included, would accept that we have a much weaker duty of beneficence towards others. Singer's argument might work to argue for donating a small percentage of your income to the poor, but not much more. (Jason Brennan also argued along the same lines here.)

Why would people be hypocrites? SH's argument seems to be:

  1. People state an ideal of wanting to help those who need it the most
  2. People don't
  3. Hence, people are hypocrites.

Is 1 true? Again, instead of psychologising people, let's hear them out. This is a running theme in this review, I have to point out: SH are talking about hidden motives and self-delusion; I say just listen to what people say and observe what they do. If that suffices to explain something, good. If not, we can complicate our model and consider other things. If one is analysing societal practices through the lens of hidden motives, one of course will ignore what people say! After all, they will say what you want to hear, not what they really think; or even when they say X, they might be deluded about X, so why even pay attention? (This is a hyperbole of the framework SH are using! Of course they are not blind to what people say about their motives. I'm just arguing for taking a more sincere view, on the margin.)

Here are some sincere reasons that people give (according to The Life You Can Save) for why people don't donate to the world's poor. Here are others.

We see that people give because:

  • They feel the need to give back to their community
  • Or their nation
  • Or humanity in general
  • They had a bad experience (e.g. cancer) and they would like to decrease the likelihood of that for everyone (This is like expressive voting, in a way, because selfishly it doesn't make much sense)
  • It feels good

And one reason people don't give to big or international charities is the suspicion that the money won't reach the poor. In helping your community, you can see with your own eyes what your donation is doing. Plus, people say straight away that they don't have an obligation to foreigners. First family, then community, then nation, and so on, they may say.

But no one says they have the duty to do the most good!

Charity can be done better, and they point the readers towards what peak charity looks like. (As the meme goes, you might not like it, but it is the truth). That truth is Effective Altruism.

They also say that people claim to care about charity performance, but that not many people do research on where to give (perhaps word of mouth and the like suffices for them?). Maybe the studies cited in the book show a measurable gap between stated and actual behaviour, something like 80% of people saying "I should spend 5 hours or more researching where to give" vs 20% of people actually spending that time, but it's not clear in the book.

People also donate to other ends. SH report that when Princess Diana of Wales died, the Princess of Wales charity got over one billion GBP in donations, even before anyone knew what the charity would be about. Why did people donate? The most likely explanation is that people wanted to express support for the recently deceased Diana, by supporting a charity that, it made sense to believe, would support what Diana would have supported.

The issue with the authors is that they say "Charity is about X. But people don't do X, they instead do Y. So something else explains why people say X and do Y."

I say "For people, charity is not just about X. It is also explicitly about Y. They are not hypocrites and what they do makes sense given what they say, their values, and their ignorance relative to the average Effective Altruist (about charity efficiency)"

Then the authors finally say something about hidden motives that does ring true: charity, for many people, feels good. So, they grant, donating because it feels good is a motive people will admit to. But then they want to explain why it feels good. This is all fine; they give a series of reasons why: social recognition, peer pressure, proximity. The first two would perhaps count as hidden motives, but are more aptly described as hidden causes. Proximity, not quite: it is a straightforwardly admitted motive (as we saw before).

Effective Altruism is also an example of something else that the authors probably think but don't say. Seemingly, to make EA a stronger foil to warm-glow charity, they overlooked the signaling aspects of EA. They surely know the arguments: they have read Geoffrey Miller's book The Mating Mind, the same Miller who gave a talk on signaling in EA. But if on second thought one can be cynical, on third thought one has to appreciate the creative institutional design (though I don't know if it is fully intentional) behind the EA community.

How do you get lots of money funneled towards effective charities? Well:

  1. You make sure people know the facts
  2. You create a club of smart, well connected and fun people (networking!) such that to get in you have to - or at least pretend to! - donate money to effective charities
  3. This cements the EA social identity in the minds of bright and talented youngsters.
  4. EA becomes signaling for talent and intelligence.
  5. Doing EA work will count for more on CVs as more of the highly skilled workforce becomes aware of EA; this should be especially true in high-paying sectors, not in jobs in general.
  6. ...
  7. Profit!

In doing this, the EA community leverages signaling, the mating motive, and whatnot into doing real good. (!) And this is not a bad thing. It is working within the constraints of how the human brain works, and succeeding. It is the sort of thing that SH say should be done to fix these "hidden motive"-caused problems: creating institutions that do X while drawing from motives Y.

(V): Confused concepts

I'm not going to review every single chapter at length, but in this section I'll make a series of remarks about the underlying theory.

First, on the idea of self deception. The authors seem to see self-deception as pervasive. I, if it wasn't clear at this point, tend to see it as quite rare: the average human being is right about almost everything, at least for what they need in their daily life, and this is why it is so striking when we make big cognitive errors.

What is self-deception? You'd think that for a book that has it as a central concept, defining what it means would be an obvious move. Yet core concepts like self-deception or selfishness are not defined in the book. So what, one may say. So a lot, as we'll see.

Take sex. What is sex about? Why do people engage in sex with protection? It is pleasurable. That's it. I hope you don't find this statement controversial: it is the answer most people would give, even if you poked their brains hard enough.

It would be odd to see someone saying: "No, that's not the real reason. The real reason is reproduction, or bonding, or something; people are hypocrites about why they have sex."

This confused use of concepts applies to many of these hidden motives, and it is the basic misunderstanding first-year students of evolutionary psychology are taught to avoid: there are ultimate and proximate explanations for behaviours. Reproduction may be the ultimate reason why we happen to find sex pleasurable. But pleasure itself, alone, is in most cases why people have sex while going to great lengths to avoid conception.

Similarly, for selfishness, take the case of a parent who does X for her child out of altruism. Again, someone saying "it is actually selfishness: X helps propagate certain genes that reward X" would be mistaken in their assessment of X as selfish in the standard meaning of selfishness, even if that were correct from a gene-centric perspective.

So again, what is self-deception? I am inclined to think that it should involve representing both the belief p and ~p at different levels: being aware of p, believing p, saying that p is true, but behaving in ways consistent with the fact that at some hidden level you really think ~p. Trivers et al. (2017) don't like this definition: they say it defines many cases of self-deception out of existence. Instead, when Trivers talks about self-deception (and, I guess, by extension SH), he refers to:

Any information processing bias that favors preferred over non-preferred conclusions has the potential to facilitate self-deception. For example, people can self-deceive by biasing their information search strategies to selectively avoid bad news, people can self-deceive by biasing their interpretive processes to selectively re-construe bad news as non-diagnostic, and people can self-deceive by biasing their memory processes to selectively forget bad news. What marks all of these processes as self-deceptive, rather than simply unintended or random error, is that people favor welcome over unwelcome information in a manner that reflects their goals or motivations (von Hippel & Trivers, 2011).

Onto another key idea, adaptive self-deception. Some examples scattered here and there through the book are given to support this, but they don't amount to a good pile of evidence.

In chapter 5, about self-deception, some work is mentioned to support the claim that self-deception is out there, and that it is adaptive:

Starek and Keating is a low-sample (N=40) paper that asserts that self-deception is higher among more successful swimmers. How do they measure self-deception? With the self-deception scale. What does this scale measure? We have to go to Gur & Sackeim (1979), and it seems to me like a bad scale. I am literally never angry: that gives me self-deception points. Are you purely heterosexual? Also self-deception. Have you had nice parents? Self-deception too. Asexuality? Yeah, also self-deception. High success in mating? Also self-deception. Absence of rape fantasies? Also self-deception. And of course, if you have never had suicidal thoughts, also self-deception.

The potential for confounders (neuroticism, perhaps even testosterone) is huge. The paper also has another measure, using binocular rivalry, but the low sample size, the dubious scale, and the use of one-tailed t-tests [instead of two-tailed, which are more rigorous for the matter at hand] are enough to quarantine this paper for now.
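As an aside, the one-tailed issue is easy to see numerically. Below is a toy illustration (my own, not from the paper, and using a normal approximation rather than their actual t statistics): under a symmetric null distribution, the one-tailed p-value is exactly half the two-tailed one, so a borderline result can clear p < .05 one-tailed while failing it two-tailed.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def p_values(z):
    """Return (one_tailed, two_tailed) p-values for a test statistic z > 0
    under a symmetric (here: standard normal) null distribution."""
    one_tailed = 1.0 - phi(z)          # P(Z >= z)
    two_tailed = 2.0 * (1.0 - phi(z))  # P(|Z| >= z)
    return one_tailed, two_tailed

# A hypothetical test statistic of z = 1.8 (not a number from the paper):
one, two = p_values(1.8)
print(round(one, 3), round(two, 3))  # prints: 0.036 0.072
```

With z = 1.8, the one-tailed test "finds" significance at the .05 level while the two-tailed test does not, which is why the choice of tail matters for a paper resting on marginal results.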

Another claim they make is that a marathon runner may trick herself into thinking that she's not as fatigued as she "really is". No evidence is given to back this up. (Do runners really do this?)

SH then switch to health: when patients are given a cholesterol test and asked about it months later, patients with the worst results were more likely to misremember their results (they thought they were healthier). This is a good study (Croyle et al., 2006). But we have to see what the study says. The study measured recall of both the exact cholesterol level (good luck remembering that) and a risk category (low, medium, high). For the exact amount of cholesterol, 24% thought they had higher cholesterol than they did, 31.3% got it right, and 44.7% thought they were healthier. So the bias does occur in the direction SH point out: people engaged, on average, in wishful thinking, but a sizable fraction also did the opposite.

However, here we don't see proof of this being useful beyond making one feel better, a motive which SH reject as the basis for self-deception, but which Croyle et al. do attribute to self-enhancement bias (a way of "feeling better"). And it also has to be mentioned that when asked not about their exact level of cholesterol but about the risk categories, 88.7% of people got them right. As far as doing anything about your cholesterol is concerned, this is what would matter: being told that it is okay, or that it is bad. For this crucial outcome, the one most likely to matter in real-life situations, we see people mostly getting things right.

In another study, smokers, but not nonsmokers, chose not to hear about the dangers of smoking. This makes sense to me, but again the sample sizes are small (7 smokers and 16 nonsmokers in Experiment 1, 10 smokers and 24 nonsmokers in Experiment 2, 13 smokers and 15 nonsmokers in Experiment 3, and 8 smokers and 18 nonsmokers in Experiment 4), and the effect sizes were not particularly big. But we need not even distrust this study to make the point: this is again self-serving bias. It is perhaps even self-destructive (non-adaptive) self-serving bias due to addiction: as an addict, you rationally want to stop and say so, yet you do not.

Onto more theoretical grounds, SH present us with two views on self-deception: self-deception as a protection of our self-esteem (to make us feel good), and self-deception as a way to deceive others. I admit I haven't read much of the literature, but I think their rejection of the former view is not all that convincing. They say that it is unlikely that such a thing would have evolved: if the goal is to preserve self-esteem, why not just a more efficient self-esteem mechanism? And also, if you just hide information from yourself to feel good, won't that backfire?

This doesn't seem like a good argument. First, nature is not a giant efficient market. Evolution is a sloppy assembler of hacky lumps of carbon. There are tradeoffs to be made, and constraining path dependencies. Perhaps a super-efficient self-esteem system would have required changes in the brain that affected other systems such that global fitness would go down. Also, self-esteem matters: if you don't feel like doing anything, you can't do much in terms of passing down your genes. Second: imperfect cognitive skills (read: certain biases) can be adaptive if what is gained is more than what is lost. Some biases are thought to be due to heuristics that may have been fitness-enhancing millennia ago but currently are not.

The opposing view, following Trivers and Kurzban, argues that self-deception is not due to old heuristics or about protecting your feelings, but that it evolved to convince others of certain things: if you really believe your lies, you won't even think you are lying, so you get the benefit of lying without worrying about having lied, or having to keep parallel threads of reality running in your head. As Trivers says and SH repeat, "We deceive ourselves the better to deceive others".

But ironically, the same criticism can be leveled at this theory (as Jess Riedel has also pointed out): why didn't nature design a lying faculty that is immune to worrying, or a mental module that keeps track of the lies so well that there is no need to self-deceive? SH say: because the evolutionary costs outweighed the evolutionary benefits (over time), which is the same answer that a defender of the self-enhancement theory would give. To settle the matter, we would have to actually study what self-deception, however defined, does to people.

Another case they mention is blindsight, but blindsight is no evidence for adaptive self-delusion: it just shows that the brain does not process all sensory information consciously. This is a case that meets a partial condition for self-delusion: the individual "knows" at some deeper level what the truth is, but is not consciously aware of it. And this can be tested. Now, blindsight is not present in healthy individuals, and has not been proven to be advantageous. Similar considerations apply to the cases of confabulation.

Can we find an example of a case where an individual "knows" (in the blindsight sense) something, yet is not aware of it? A case where we can probe (again, as in the blindsight case) this hidden knowledge, and furthermore, on top of that, prove that this shenanigan is advantageous for the individual?

Note that this is different from mere bias. In the cholesterol case the individuals truly and honestly misremember, and the authors of the paper do not go into probing if "at some deeper level" the subjects do really remember. And because it is different from mere bias, it should be possible to tease out empirically.

Throughout the chapters, I notice a manoeuvre from the authors: evidence for the components of adaptive self-deception is shown, but rarely for adaptive self-deception itself. It looks like the authors are showing us the T and the F, but not the complete Tensorflow logo:

Indeed the T and the F can be explained by the same thing, but they can also be explained by different things. There can be modularity and biases without self-deception. Have a look at this handy table, then read through the book and note down how the items 1-3 do get their backing, but item 4 not quite.

| | Is there “hidden knowledge”? | Is it directed to the self? | Is it adaptive? | How well supported is it? |
|---|---|---|---|---|
| Cognitive bias | No | No | It can be* | Well |
| Confabulation, blindsight*** | Yes | No | No | Okay |
| Lying | Yes | No | (Probably) Yes | Extremely well |
| Adaptive self-deception | Yes | Yes | Yes (by definition) | Poorly** |

(*See cases like the swimmers we saw before)

(**In the book. Things will get a bit better for the book's thesis a few paragraphs below)

(*** But see Overgaard 2012)

In a way, I end up making a similar point to that of Neil Van Leeuwen's review of The Folly of Fools (Trivers's recent book on self-deception): the term self-deception is used too liberally for many different phenomena, which together don't really form an argument for the conclusion. Says Van Leeuwen:

In my view, this theory is both elegant and important. Furthermore, I suspect it may even be true of some of the phenomena that fall loosely under that paradoxical-seeming term “self-deception.” But it is the very promise of this theory that sets the reader up for disappointment. The Folly of Fools suffers from two remarkable flaws. 1. The book as a whole simply does not add up to being an argument for its main thesis. 2. The focal term “self-deception” is used so loosely throughout that it becomes impossible to determine what the scope of the thesis actually is. [...]

Is an overconfident football player really in the grip of the same psychological phenomenon as a gambling addict who tells himself he doesn’t have a problem? Maybe, but maybe not. Of course we might—loosely—label both of them cases of “self-deception.” But that may be like calling Venus a “star,” using the same appellation for a large satellite of the sun as we do for larger burning balls of gas light years away. Same pre-scientific word—“star”—but distinct phenomena. It was of course an achievement in astronomy to realize that Venus was not the same sort of thing as other stars. And this points to a major desideratum on scientific enterprises: recognizing genuinely distinct phenomena, despite the conflations of everyday speech. [...]

Trivers, however, appears unconcerned with this desideratum. He does, to be fair, attempt a definition of self-deception: “true information is preferentially excluded from consciousness ... false information is put into the conscious mind” (p. 9). But this is vague, and he goes on in the course of the book to lump so many things under the heading “self-deception” that every bias psychology has ever discovered seems to count as self-deception [...]

A reader of this journal, for example, might wonder to what extent schizophrenia or various monothematic delusions, like Capgras’, are to be counted as “self-deceptive.” Alternately, one might wonder whether the feelings of grandeur that occur at high points in the bipolar cycle should be thought of as “self-deception,” or whether one should put any of the anosognosias following neurological damage in this category. Or, is body dysmorphia a form of self-deception? [...]

Now we come to the first major flaw: The Folly of Fools does not provide an argument for its main thesis. What would a proper argument look like? Since Trivers can’t rely on “self-deception” as a unified class, for each type of “self-deception” he would like to include under his main thesis he would have to do the following: 1) Develop specific hypotheses about the evolutionary benefits and costs of a particular Self-Deceptive Phenotype as opposed to the benefits and costs of a Liar Phenotype and as opposed to an Honest Phenotype within the same ecological niche. 2) Show, through psychological research, that a large portion of humans has the Self-Deceptive Phenotype so posited. 3) Find empirical evidence for the hypotheses’ entailments about costs and benefits.

Let’s consider, for example, male self-presentation in courtship, where Trivers thinks overconfidence is a form of self-deception evolved in part to attract females. This hypothesis yields some testable predictions in keeping with the above requirements, a few of which are: Prediction 1: the presence of desirable females triggers self-inflating self-deception in males. Prediction 2: males who self-deceptively self-inflate are preferred by females to simply honest males or males who knowingly lie. Prediction 3: males simply lying about their qualities will be found out more frequently than males self-deceived about their qualities. Prediction 4: retaliation against self-deceived overconfident males is typically less severe than against males who are discovered to have lied.

For all we know, these four predictions, or some of them, could turn out true. But we won’t learn whether they’re true from reading Trivers, who relies largely on anecdotes. For example: I am walking down the street with a younger, attractive woman, trying to amuse her enough that she will permit me to remain nearby. Then I see an old man on the other side of her, white hair, ugly, face falling apart, walking poorly, indeed shambling, yet keeping perfect pace with us—he is, in fact, my reflection in the store windows we are passing. Real me is seen as ugly by self-deceived me. (pp. 17-18) This is an admirable instance of personal honesty—and the book is full of them—but the reader finishes the book wondering to what extent such anecdotes can be backed by more rigorous research. Someone reading this book review so far might have the impression that Trivers neglects empirical data altogether. That’s not my claim; the book in fact discusses quite a range of interesting research. My claim is rather that evidence that would support Trivers’ specific theory is missing, over and above the relatively uncontroversial claim that self-deception exists.

(VI): What do we really know about adaptive self-deception?

Not that much, oddly enough. The Elephant in the Brain sounds like there is a solid and vast empirical literature behind it, but a cursory examination of the literature reveals that at the moment it is still in its infancy.

Trivers originally published his model in 1976. And by published, I mean proposed: he said that the model made sense and why. But for 40 years, no one examined whether it was true or not (Smith, Trivers and von Hippel, 2017).

But as of today, the papers testing the hypothesis can almost be counted on the fingers of one hand:

Lamba and Nityananda (2014) was probably the first one. It is a good study (though with a small-ish sample size, N=73, 85% female), set in a real-world setting: a set of students who did not know each other beforehand, attending tutorial sessions at university. They were asked to predict grades for other students and for themselves. Self-deception was measured as the difference between their real grade and their predicted grade, and deception was measured as the difference between their real grade and the median of the predictions of the other participants.
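To make those two measures concrete, here is a minimal sketch (my own code and sign conventions, not the paper's; I take positive values to mean overconfidence/overrating) of how one might compute them for a single student:

```python
from statistics import median

def self_deception(predicted_own_grade, real_grade):
    """Gap between a student's self-prediction and her actual grade.
    Positive = overconfident, negative = underconfident."""
    return predicted_own_grade - real_grade

def deception(others_predictions, real_grade):
    """Gap between how others rate the student (median of their
    predictions) and her actual grade."""
    return median(others_predictions) - real_grade

# Hypothetical student: actual grade 60, predicts 70 for herself,
# and three peers predict 65, 68 and 72 for her.
print(self_deception(70, 60))       # → 10 (overconfident)
print(deception([65, 68, 72], 60))  # → 8 (overrated by others)
```

The study's headline result can then be stated as: the first quantity predicts the second, even after controlling for the real grade.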

The study was run twice: first after the first tutorial session, and then six weeks later, to see if mutual knowledge had any effect on deception or self-deception. The sample is composed of 12 tutorial groups in two universities in London, each group with 2-8 students. They were, however, pooled for analysis.

The relevant conclusion was that individuals who rated themselves higher were rated higher by others, regardless of their actual performance (i.e., after controlling for actual grade). We have to thank the authors for the plot they provide, as it helps us visualise what the results actually look like (red dots and line are absolute grades; blue are relative rankings).

In other words: underconfident individuals were ranked lower and overconfident individuals were ranked higher. This is shown as proof of the adaptiveness of self-deception. On the one hand, say the authors, overconfidence has obvious benefits: it makes others believe that you are good. But they find underconfidence harder to explain, saying that maybe deceiving oneself into being underconfident is a tactic to play the "underdog" and end up winning more somehow.

In this study, crucially, students were not actively trying to deceive each other.

But there is an alternative explanation for the findings. Imagine you are a student and see someone who seems very confident, is never nervous, asks questions, etc., versus someone who rarely asks questions and is insecure. You will, rationally, as a good intuitive Bayesian, assume that the former will get better grades and the latter will get worse grades (at least if people tend to be honest most of the time). Over time, the study also shows, self-deception is no longer correlated with deception. People are still as self-deceived, but others get better at assessing them. This may show that the self-deception-to-deceive mechanism works for short-term interactions: we can imagine how living in a tribe, with others monitoring oneself, may make self-deception costlier, as you won't get the benefit of fooling others. In short-run interactions, however, you won't face that long-term monitoring, and so it will make more sense. But it is the first kind of self-deception that would be required to explain the persistence of institutional-level arrangements in healthcare or politics.

After that one, another study was published (Wright, Berry et al., 2015), again with a small sample size (N=75), comparable to the previous one. Here self-deception was measured with the Self-Deception scale of the Balanced Inventory of Desirable Responding (items 1-20 on pages 40-41 here). This includes more promising questions than the previous scale I mentioned, but still leaves something to be desired. They found not only that more self-deceived individuals are not better liars, but that they are in fact worse liars. The authors of the study note a caveat that also applies to the previous study, where it went unmentioned: in neither study were the individuals trying to actively deceive other people.

Fast forward to December 2017 and we get to Smith, Trivers, and von Hippel (2017). This is the paper that says that for 40 years there has been no published evidence directly testing Trivers' hypothesis. The paper thus tries to test it, with the largest sample size so far (N=306). This is the best empirical evidence so far for adaptive self-deception.

Participants were shown a series of videos where someone was engaging in "positive", "neutral", and "negative" behaviours. These videos were arranged in blocks. A block is a set of 3 videos, and some participants were shown the more negative videos first, while others were shown the more positive videos first. Participants could choose, after watching a block, whether they wanted to watch more.

They were then asked to write a text about this individual (Mark), trying to: a) persuade the audience that Mark is nice, b) persuade the audience that Mark is dislikable, or c) persuade the audience of whatever opinion they came to form on their own. Participants were rewarded for participating and for writing essays ranked among the best. They were also asked for their private opinion of the individual (a numerical value from 0 to 100), and asked to guess the average rating given by everyone.

They found that participants thought the individual was more likable if they had to argue for him being likable, and vice versa (this for the positive-to-negative order of videos). Those who had to argue for him being likable rated him around 80 (out of 100), and those who had to argue for him being dislikable gave a rating of around 50. But the study also revealed a strong anchoring effect: if the negative videos were shown first, ratings dropped by about 55 points.

Next, they looked at whether participants' guesses of the average score given by the other participants were biased by their assigned task. If those guesses were unbiased, it would mean that the bias affects what participants say and believe themselves, but that they can somehow remove it for a different task. The predictions, however, turned out to be biased too, and anchoring effects were also substantial.

Trivers et al. cite other papers with good designs and samples (Schwardmann and van der Weele, 2016) that also show a performance-enhancing effect of overconfidence, which, however, disappeared when the participants assessing the self-deceivers were given a short briefing on detecting lies.

And that's mostly it: that's the bulk of the literature on self-deception and deceiving others. I don't blame anyone for not including a paper released almost at the same time as their book, of course, but SH did not mention the other papers either...

Trivers' last paper, the most solid proof so far, is better read as proof that we are prone, at least in this case, to believe something that is helpful in order to convince someone. But here we don't see any _hidden internal representations of an unbiased truth_: the information goes into the brain and shifts beliefs accordingly ("I am told this is a bad guy and I have to convince others he is a bad guy, so I'll think he actually is"). No "true and unbiased" representation is kept. This matters because SH do claim in the book that:

As we’ve mentioned, the main cost is that it leads to suboptimal decision-making. Like the general who erases the mountain range on the map, then leads the army to a dead end, self-deceivers similarly run the risk of acting on false or missing information. Luckily, however, we don’t have to bear the full brunt of our own deceptions. Typically, at least part of our brain continues to know the truth. In other words, our saving grace is inconsistency. [...] What this means for self-deception is that it’s possible for our brains to maintain a relatively accurate set of beliefs in systems tasked with evaluating potential actions, while keeping those accurate beliefs hidden from the systems (like consciousness) involved in managing social impressions. In other words, we can act on information that isn’t available to our verbal, conscious egos. And conversely, we can believe something with our conscious egos without necessarily making that information available to the systems charged with coordinating our behavior. [...] No matter how fervently a person believes in Heaven, for example, she’s still going to be afraid of death. This is because the deepest, oldest parts of her brain—those charged with self-preservation—haven’t the slightest idea about the afterlife. Nor should they. Self-preservation systems have no business dealing with abstract concepts. They should run on autopilot and be extremely difficult to override (as the difficulty of committing suicide attests). This sort of division of mental labor is simply good mind design. As psychologists Douglas Kenrick and Vladas Griskevicius put it, “Although we’re aware of some of the surface motives for our actions, the deep-seated evolutionary motives often remain inaccessible, buried behind the scenes in the subconscious workings of our brains’ ancient mechanisms.” Thus the very architecture of our brains makes it possible for us to behave hypocritically—to believe one set of things while acting on another. We can know and remain ignorant, as long as it’s in separate parts of the brain.

Note also the conflation in the last two highlighted sentences: Kenrick and Griskevicius are talking about ultimate causes, not about the brain still keeping track of the truth. SH's wording seems to equate self-deception with there being an evolutionary reason for a behaviour.

(VI): Laughter

I deceived you; I'm going to keep reviewing chapters! :) (Canned _laughter_.) You have made it through so many words that you deserve a rest from all that seriousness.

The sentence above might have made you smile or emit a slight "hah!", but why?

The authors' view is that laughter is a play signal, or maybe more than that; it is a bit unclear. "Play signal" is their stated answer, but the chapter goes on to discuss more than that, and the specific thesis is unclear, as Riedel's review also notes. As one function of laughter, the play signal seems plausible; as its only function, it does not.

I have to say that before reading the book, I hadn't thought about what different laugh-situations have in common, but I had thought about humour. My running theory is one of the ones they discard: expectation violation. I came to believe this after some thinking prompted by reading a certain very bloody chapter in one of the A Song of Ice and Fire books. During the chapter, subtle clues are slowly laid out so that the reader may notice that something is out of place without knowing what, specifically. Then there is a big reveal and lots of people die. At that moment, the chapter in its entirety, plus plotlines from previous chapters, made sense, and I chuckled and emitted some laugh-type sounds. Then I stopped, like a debugged program reaching a breakpoint, and thought: why did I just chuckle? It wasn't the fact of people dying; it was that things made sense. The event was unexpected, but it made sense (became more expected) once you considered the previous clues, and the previous clues became salient once you had read about the event. I then noticed that I did the same thing when reading some Sherlock Holmes stories, and that jokes in general follow the same pattern: clues X are laid out such that the punchline Y cannot be easily foreseen, but when Y is delivered, the perception of X changes such that it leads obviously to Y.

There are no social aspects to these manifestations of laughter, and they are hard to reconcile with the "play signal" model. Maybe the behaviour of laughter evolved for one reason but has been repurposed for other uses, or maybe laughter, jokes, and aha! moments should be studied separately.

This chapter does at least do what the book is supposed to be doing: people think (or at least, I think people think) that laughter usually happens with jokes. But, as per the evidence the authors give, it happens on many more occasions, and it is the speaker who laughs the most. Still, I wonder: is humour-laughter the same laughter as social-chat laughter? It sounds different. When comfortable with someone, flirting light-laughs might be extensions of a smile, more like chuckles. When getting the punchline of a joke, laughter tends to be louder, to the point of tears.

(VII): Conversation

The puzzle for this chapter is that we like talking. Why do we like to give out information so much? People think we like conversation (and writing, etc.) because it shares information, the authors argue. But listening/reading costs very little, while speaking/writing costs more. One would therefore expect us to prefer absorbing information (it's free) and to minimise utterances: if all we cared about were information sharing, we would say just the minimum necessary to get the information we want.

But they argue that this view is wrong: if the reason we talk is that we expect others to give us information in turn, people would keep track of conversational debts, and the authors argue that they don't. (Maybe they do: a couple of times I've observed someone (including myself) being told "You talk too little, you should integrate more in the group" and the like.) For the most part, however, it is true that people don't complain about others' amount of speech. Perhaps it is a threshold-based reaction and most people meet the threshold, but this is just a minor observation.

People also like to speak, as we said, though I have to add that this trait probably varies across the population, and is probably tied to introversion. Here I admit I might suffer from availability bias: I talk very little, people complain about it, and I tend to listen more than I talk. I don't write that often (but when I do, it is a lot, and never for chit-chattery purposes), and so I may be inclined to think that the rest of the world is like me, which it clearly is not. Could a lot of superchatters plus a minority of superlisteners be an equilibrium? No idea, but maybe. Regardless, the authors' own explanation seems good.

Puzzle three is that conversations are not random requests for information; they tend to follow some structure: what B says has to have some relation to what A says.

Finally, puzzle four is that when two people meet, they don't exchange the most important bits of information they have, we usually "chit-chat" about TV shows, hobbies, or current news.

The proposed solution is that speaking functions in part as an act of showing off: good talking points are rewarded with status. Status in turn gets you allies, mates, and other non-informational rewards. Saying something interesting not only transmits the information but also sends the signal "I am the kind of person who knows such things". I agree with this view. I said at the very beginning that this review is not about reviewing, that it is about signaling, and that is true if properly interpreted: I am not doing this only to signal, but showing off is part of it.

The authors make the good point that sometimes we explicitly go for the signal and not for the content: in job interviews we don't care much about exactly what the candidate says (i.e., we won't remember much about the specifics of that PhD thesis on the AdS/CFT correspondence), but we do care about what sort of information the act of speech conveys: Is he confident? Does he answer follow-up questions reasonably well? Is he able to adjust the depth of the explanation as prompted? And the reason these things are sought is that, we tend to think, they correlate with the sort of stuff that will make you a good employee.

This view accounts for the puzzles mentioned before. Again, I agree with the authors.

Two cases of specific application are mentioned: news, and academic research.

People are obsessed with news and have always been, say the authors (and I agree). Why are people so interested in random stuff that doesn't affect them? One unconvincing explanation is that people read the news to get useful information for voting, but that's not true, for the reasons argued in the book.

They are also right in this case. One can distinguish between the sorts of reasons people follow the news: some news feels like reading a story (the unfolding of a war), other news we enjoy for identity purposes (sports news), etc., but the fact remains that a big part of news consumption serves the purpose the authors impute to it. This may be the ultimate reason why people like the news.

But what are the real (proximate) reasons newswatchers give? I would bet there are a few:

  • Perceived duty to stay informed in order to vote
  • Sense of meaningfulness in being aware of the specific context one is in
  • Social conformity (One is expected to do so)
  • Intrinsic pleasure (we like being told stories)

These motives are no doubt powered by SH's deep motives: social conformity may come from the sense of discomfort experienced if one can't keep up with a conversation about daily affairs.

Academic research is said to have great benefits, but it may be overrated. I tend to agree. With Science, it seems like "more is better" and people are unwilling to make tradeoffs. Big Science projects like the LHC, the International Space Station, ITER, large telescopes, or big scientific satellites are unlikely to pay off for the average person (at least not right now; they may make sense in the future), and in terms of material wealth, it is also likely that that money would do more good invested in less visible but more effective kinds of science. I personally like these projects, and probably many of my readers do too, but that doesn't justify pursuing them: they are pursued with public money, so the relevant question is not whether I like them, but whether the average citizen benefits, and it seems likely that people who like science think everyone else values knowledge for its own sake as much as they do.

I have written a bunch of stuff about innovation and technology in the past, see here.

Interestingly, in the same way that Hanson suggests axing healthcare in half, one of the chapters of the Handbook of the Economics of Innovation does say that only around 50% of all research is justified by market-failure considerations. It would be too hasty to suggest cutting R&D in half, but... ;-)

Back to the book.

The authors say that "we have reasons to doubt whether these [innovation and growth through increased understanding] are the main motivations that drive academia". Researchers seek status by working with prestigious mentors, getting degrees from prestigious universities, etc.

But imagine you are a scientist. You like doing research, and you want tenure or some position that will allow you to do what you want. What do you do? The sort of stuff SH mention: it is known that to get to good positions one has to do XYZ. So here the motive is selfish (doing research because it is enjoyed), with an altruistic component (advancing knowledge). You are not seeking status directly; it just happens that incentives are lined up that way. The status motive surely also plays a part: gaining status and success is another motive that we have and readily acknowledge, but the core motivation here seems to me self-oriented (preferred career) rather than other-oriented (status). When asked, only 8% of scientists cite a prosocial motivation as their main motivation for going into science.

This is from a scientist-centered perspective. Socially, we do say that Science is about knowledge and innovation, and that is also right. That's what the research community produces, even if it doesn't aim at that as an intrinsic goal. Similarly, we could remark that agriculture is for producing food; yet from a farmer's perspective, it is done not to produce food but with the ulterior goal of making money.

(VIII): Consumption

The authors say that many of us work more hours than our grandfathers did. It has to be said that average working hours in 1870 were in the 60-70 range; now they are in the 30-50 range. Clearly we have experienced a decline. But, as SH say, this decline has been smaller than what Keynes expected, and, as they also note, highly skilled people are putting in more hours than the average person. Why?

We are stuck in a game of competitive signaling, is the authors' answer: we consume to show off, and because signaling is a zero sum game, we are driven to consume more and more.

A specific example of consumption as signaling is green products: electric cars are more expensive than non-electric ones. Conventional wisdom is that people buy them to help the environment. But maybe there is more at play: they mention a study from 2010 which says that people primed with a status-seeking motive expressed a preference for a green car, versus a control group that preferred a merely luxurious car. The study referenced is Griskevicius et al. 2010 (N1=168, N2=93). I checked for replications of this study because I had a vague memory of having seen this particular effect fail to replicate, and I found them:

On the direct replication side, there are two attempts, hosted at OSF (one, two), with sample sizes larger than the original study. One may complain that these studies are not peer reviewed, but given that a) I looked for all replication attempts regardless of outcome, b) the source data for those studies is available, and c) the methods followed are claimed to be the same as those of the original study, these replications should be given some consideration, with the caveats the replications themselves make. (There are a third and a fourth failure to replicate, but these look dodgier (sample sizes, discussions), so I exclude them from consideration, along with one successful replication, which I also exclude: I couldn't even find the source paper. If anyone is willing to do a k=6 meta-analysis, I'll accept whatever outcome results.)

More broadly, Berger (2017), with 5 different experiments (N between 150 and 880), doesn't directly address Griskevicius' paper, but tests a very similar hypothesis, Costly Signaling Theory: that there is a hard-to-observe trait that individuals can signal, but only individuals who truly have the trait can emit certain specific signals, due to their high cost. In this context, conspicuous consumption is a signal of wealth. The purpose of this signal is to gain status, and to be better regarded as an ally or mate.

The conclusions of the study are:

  • Individuals wearing luxury brand clothing are perceived as having higher status and having more wealth
  • But they were also perceived as less trustworthy, prosocial, and environmentally friendly.
  • Individuals wearing a "green label" T-shirt were perceived as more environmentally friendly, but there were no effects (relative to a control) on perceived wealth or prosociality.
  • Individuals asking people to take part in a survey, or asking for money for charity, were not treated more favourably when wearing a "green label" T-shirt
  • And the conclusion holds if you do the same experiments in poorer neighborhoods, where the signals should be harder to fake

Admittedly these findings need not doom the green-to-be-seen idea, but should give us at least some pause.

Next the authors consider a thought experiment to see to what extent our consumption is conspicuous versus inconspicuous: imagine we stopped forming impressions about other people's things: clothes, cars, houses, tech gadgets, etc. For example, no one would ever comment on our clothes, or notice if we stopped washing our car. The authors distinguish between products bought for signaling reasons and for personal use, and many products are a mix of both.

They argue that in this thought experiment, people wouldn't care about car aesthetics (only about quality and "mundane" specs). Fashion would be greatly reduced, and dress codes would tend to disappear (no one would notice). At the margin, we would prefer smaller, cheaper houses that are easier to maintain, and living rooms would disappear or get repurposed.

The increased emphasis on mundane specs and quality seems right, but the living-room point is hard to imagine: guests may still come, and it is the usual place for families to gather to watch TV or whatever.

I for one welcome a world with less product variety and cheaper products, but I do think that the self-image motive would still be important enough to keep more variety than SH say. I can imagine people who buy, for example, shirts themed with music band logos to keep buying them, because they have the "fan of X" identity, and that's what fans of X do.

That said I admit I don't hold this view strongly, and that I haven't read on the relative weight of one's identity vs signaling on choice of clothing.

The advertisement part is good, with an interesting discussion of the social channels through which ads can also be effective.

Before moving on to the next chapter, I want to mention an extra paper, by Shanks, Vadillo, et al. (2015). The chapter in The Elephant in the Brain does not specifically delve into whether a mating motive may be behind this conspicuous consumption, but it is alluded to at the beginning of the chapter:

No matter how fast the economy grows, there remains a limited supply of sex and social status - and earning and spending money is still a good way to compete for it.

The paper I mention is a meta-analysis, plus a series of replications, of several studies that investigate the relationship between "mating motive" priming, consumer choice, and risk taking. The first is what the paragraph above points to; the second is part of the more general theory presented by Geoffrey Miller in his book. They review papers that use different sorts of priming, both short stories and opposite-sex pictures, and mention that many previous priming studies, in unrelated fields, have collapsed under closer examination.

This is the reason I cite this study: I assign low credibility to priming-based studies (Griskevicius' was one), requiring at least a meta-analysis before giving them enough weight to base further claims on. I didn't find a meta-analysis for the green-signaling or status-signaling effects more broadly, and this is the closest related meta-analysis I found.

The meta-analysis finds a medium effect size, d=0.57, but once they looked at publication bias, the field (15 papers) was rife with it:

[Screenshot: funnel plot from Shanks, Vadillo et al. (2015), showing asymmetry indicative of publication bias]

Therefore they decided to run eight experiments of their own, using both visual and text primes. The sample sizes for each experiment are small, but so are the sample sizes in this particular literature. In this case, that is not much of an issue: if you expect a priori an effect size of the order of d=0.57, their samples have enough statistical power to detect it. The conclusion of their additional studies:
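As a sanity check on that power claim, here is a quick normal-approximation power calculation for a two-sided two-sample test. The per-group sample sizes below are hypothetical round numbers, not the paper's exact ones (which vary by experiment):

```python
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a true
    standardized effect size d (normal approximation to the t test)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5  # noncentrality of the test statistic
    return nd.cdf(ncp - z_crit)

# With the meta-analytic d = 0.57, modest samples already suffice:
power_40 = power_two_sample(0.57, 40)   # ~0.72
power_80 = power_two_sample(0.57, 80)   # ~0.95
# A small effect like d = 0.2 would need far larger samples:
power_small = power_two_sample(0.20, 40)  # ~0.14
```

So "small" samples are only underpowered relative to small true effects; for d=0.57 they are perfectly adequate, which is why the null results are informative.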

The studies reported here can be readily summarized: They have failed to detect any effects of mating primes on risk-taking, expenditure on publicly consumed goods and services, or loss aversion. Indeed, as indicated by the Bayes factor analyses, their results strongly support the null hypothesis of no effect. Together with the asymmetric funnel plot shown in Figure 2, which implies the existence either of p-hacking in previously published studies or selective publication of results (or both), our results suggest the real possibility that romantic primes have no meaningful effect on decision-making behaviors. Although the major findings comprise null results, our experiments were able to replicate other anticipated effects unrelated to priming, including quite subtle ones. For example we confirmed in Study 3 that males and females differ in their judgments concerning consumption and benevolence, depending on whether the behaviors in question are conspicuous or inconspicuous, mirroring the pattern reported by Griskevicius et al. (2007, Study 2); in Study 6, we confirmed that males were substantially more risk-seeking than females but only in the domain of sexual risk-taking and not gambling, exactly as found by Greitemeyer et al. (2013); and in Study 8 participants were significantly loss averse.

This paper does not mean that the claims like "Individuals take more risks in the presence of a member of the opposite sex" are false, the authors do note that that one in particular is true. It just shows that studies that rely on priming should be taken with a grain of salt.

Going back to the beginning of the chapter: the authors say that a rat race of consumption is the reason we don't work less. But I don't think they provide evidence for this. Sure, nothing is a "necessity"; necessities are conditional on one having certain preferences. But just because something goes beyond your minimal physiological needs doesn't mean you are doing it to show off. Eating a great variety of international food is still delicious even if no one knows about it. For any of us, would it be possible to work less? Some don't work less because they like what they do relative to a more boring but less time-consuming job. Others need the extra money to afford stuff. Ultimately, beyond what the chapter discusses, we may ask whether a world where people work 1-2 hours but cannot travel, enjoy nice food, or play videogames is better than one where you work more and get more.

(IX): Art

What is art? For the book's purposes, the working definition is "something made special, for the sole purpose of human attention and enjoyment". This seems a good enough definition.

Art is a human universal, and as such it has been practiced for a long time, in every culture. Why do we do art? Art is costly and impractical in the sense that it doesn't seem to do anything for the fitness of an organism, so the authors wonder how that instinct evolved.

This line of thinking is a good one, but one has to remember that just because a behaviour is universal, it need not be an adaptation, let alone a behaviour that is not universal. Simple mathematical ability is also present in every culture, yet there is little evidence that topological algebra could have had any adaptive value. The mental faculties that enable maths probably did, so from the combination of adaptive traits you get advanced maths for free, as a spandrel. The authors point this out too, with the example of reading. With art, however, SH say there is consensus that it is an adaptation, with the notable exception of Steven Pinker, who thinks of art (or at least music) as a spandrel.

The argument for thinking of art as an adaptation rather than a spandrel is that: it is a human universal, it is costly, and it is old enough for selection to have had time to act on it.

The proposed explanation for art is once again based on Geoffrey Miller's Mating Mind: art was sexually selected. It is a costly signal that, by signalling your general fitness, helps you get mates.

Like the art of some animals (like the bowerbird), human art is used for courtship and as a general-purpose fitness display: an advertisement of the artist's health, energy, vigor, coordination, etc. Thus, art is (partially) about showing off.

The authors follow with an interesting discussion of the intrinsic (the artwork itself) and extrinsic (who painted it, the work it took, etc) aspects of art. People think that most of art's value lies in its intrinsic properties, in how beautiful or impressive it is. But extrinsic properties matter a great deal: if art is about showing off, then the art has to say something about the artist.

A work of art that is beautiful but was easy to make (a painting copied from a photograph) will be judged less valuable than a similar work that required greater skill to produce.

SH ask us to imagine a museum containing high-fidelity replicas of the world's masterpieces, which would be cheaper to see. If we cared only about the aesthetic properties of the art itself, we would like such museums, yet they are rare if not nonexistent. Why is this? It may be because replicas are not good enough, or because part of the experience is actually being surrounded by the paintings. When we get good VR, what will happen? I don't expect museums to go away: we will still have this attachment to the originals, but it may lead to more virtual tourism.

What is more, a study asked people whether, in the event the Mona Lisa burned down, they would prefer to see an indistinguishable replica of the Mona Lisa or its burned ashes. 80% preferred to see the ashes. (Although, I have to say, as of today this study has not been published, even though it was done in 2013. I do find the result plausible, though: I would prefer to see the ashes too. You can always see the replica on the web.)

Another example of the intrinsic/extrinsic distinction: imagine a friend of ours showed us a fancy seashell, claiming it is art. The seashell is aesthetically pleasing, but our assessment of its quality as art wouldn't be very high: there was no effort behind it. But imagine now that the same thing was made of marble, chiselled over months, and meticulously painted. Then, the _same object_ becomes good art.

But... apparently the most praised piece of art of the 20th century was precisely that. It is one of those things most people would disagree with, and it would be interesting to know why some people do get to praise this art.

There are a few more examples of the importance of extrinsic factors, but I think they solidly establish the point that they matter greatly.

The chapter moves on to changes in extrinsic properties that leave the intrinsic properties unchanged. Lobster, they say, was considered low-class food, eaten by the poor and the jailed. But as it became scarce, it became expensive, and a delicacy.

I take issue with this, the big reveal coming from a recent visit to the Royal Museum of Fine Arts in Brussels (No, I didn't go as far as Brussels just for research, I wasn't expecting to do anything blog-related there, but one has to seize good opportunities!). Have a look at this Flemish painting from the 1600s. (More here)

These paintings depict plentifulness (Google "Nature morte au homard"), so it is highly unlikely that they would put low-class food in a prominent place. If anything, the pictures incline me to think that lobster was high status.

But perhaps it was, and then it wasn't, and then it was again. Not quite, though. On the enlightened side of the internet, David Foster Wallace's story reigns supreme, but if one pokes around, one finds food historians who have actually done the research. It turns out that even in the US it wasn't considered food for the poor (though sometimes it was fed to them): lack of markets and transportation made lobsters cheap wherever they could be fished, so they were available to everyone.

This is just a minor correction: what made lobster into an "elite food" is, as the authors say, that it subsequently became scarcer and more expensive.

They do mention convincing cases, though: pale skin was valued, now not so much (however, I do see that among cognitive elites (but not "low-brow elites" like footballers or socialites) pale skin is preferred again today. Countersignalling?). Perfect artisanal crafts were valued until perfect objects could be made with the help of machinery, so people began seeing imperfect artisanal crafts as higher status: they are scarcer and more difficult to make. Similarly, realism was valued in art until photography came along and made realism easy, so painters came up with surrealism, cubism, or expressionism. In architecture, most people dislike brutalist architecture because it looks awful, but many architects like it because it is hard to do: building with concrete seems easy but is actually quite hard, and they admire that effort.

Near the end of the chapter there is a section I don't agree with that much: SH say we learn what is good or bad art by sampling different sorts of art and noticing what is high status. This makes intuitive sense. But then we may ask: what is high-status art?

I bet that even if people knew how hard brutalism is, they would still not like it, for one. And perhaps in some very closed art circles abstract art may be considered good art, but for the majority of the population, and even probably for most of the cognitive elite (That is, you, readers), abstract art is not considered as interesting and good as "classic art".

It can perfectly well be the case that different groups of people have different notions of what is high status. But worse, even some of this (in some groups) high-status abstract art is easy to make. How much skill do Mark Rothko or Jackson Pollock signal, compared to that of the hyperrealists?

Very rich people may see collecting certain kinds of art like collecting stamps or coins. The unintended signaling here is of their wealth. That highly sought art may be easy to make but it is scarce: there are only so many paintings painted by those authors. Thus art may also be valued because others value it, with no big reason behind.

Also, the claim that effort provides value to art only goes so far: someone spending years randomly chiseling the face of a cliff with random signs won't get much attention (building a church with your own hands over your entire life might).

Fitness itself won't make your art good either, though this is ultimately an empirical dispute: I see a small role for a halo effect around your attractiveness, say, so that people will like your art more, but not much more. George Bush Jr reached the heights of the status hierarchy, yet he didn't make much of a splash as a painter.  The same applies to most famous people who also happen to do art.

Now, onto more substantial claims: the chapter explains why we should expect art to be an adaptation, and lists a few characteristics of art that make sense if it is used to display fitness. But the chapter itself does not mention empirical literature on whether this proposed explanation is true (I think?). Pinker's explanation could be true: it could be that some uses of art are adaptive, or that the precursors of art are adaptive, but that what we now call art is "mental cheesecake", as he says, a spandrel.

To properly arbitrate, we should go out there and see if the theory works: what would the world look like if SH are right, and what would it look like if Pinker is right? Here are some predictions. I don't frame them as absolutes; each would add some evidence to the overall hypothesis, and one being false doesn't entail the hypothesis being false. For example:

  1. Highly regarded artists will have higher mating success, above what the status of being highly regarded would by itself predict
  2. Engaging in art will bring about some sort of non-mating (social) benefit
  3. Successful artists will have a higher degree of genetic fitness

Here's a study for the case of music: Mosing, Verweij et al. (2015) attempt to measure hypotheses 1 and 3 in a sample of around 11,000 Swedish twins. (I haven't seen any studies on 2. It would involve taking random people, teaching some of them art, having them practice for a while, then comparing numbers of mates, etc.)

  • Here musical aptitude was measured using a test of pitch, melody, and rhythm discrimination. Musical achievement was measured with a questionnaire that included items like "I am not engaged in music at all", "I have played, or sung, or my music has been played in public concerts in my home town, but I have not been paid for this", and "I am a professionally active musician", etc. Musical achievement and aptitude correlated moderately (r=0.47)
  • Mating success was measured by different metrics: lifetime number of sex partners, age of first intercourse, and a questionnaire about attitudes towards sexuality to measure openness to sex. Finally, of course, number of children.
  • Genetic fitness was proxied with intelligence, reaction time, and height.

The authors concluded that

The findings provided little support for a role of sexual selection in the evolution of music. Individuals with higher musical ability were generally not more sexually successful (at least not quantitatively), although men scoring higher on the music achievement scale did have more offspring. Musical aptitude was correlated with other potential indicators of fitness, such as general intelligence, simple reaction time, and—for females—height. However, the genetic components of these associations were not significant with the exception of the genetic covariation between musical aptitude and general intelligence. The evolutionary basis of music remains unclear. Future studies are needed to test alternative characterizations of the sexual selection hypothesis as well as other theories of music evolution.

And that the findings are not definitive, and potentially compatible with both Miller and Pinker's theories.

Artists in general, as noted by casual empiricism and the empirical literature (Booker et al., 2012, Götz and Götz, 1979, Feist, 1998, Batey and Furnham, 2006), are less mentally stable, and high neuroticism is a trait associated with lower success (except, perhaps, in art itself). Not only that, artists are less conscientious (but more open to new experiences), and show lower self-control.

This points to art not signaling general fitness. But note that even though the book leans towards the view that sexual selection is about individuals signaling their fitness through costly signals, sexual selection is not necessarily about this: there are other models of sexual selection that do not involve signaling fitness, like Fisher's runaway model (Jones and Ratterman, 2009). In that model, the species is trapped in a prisoner's dilemma of sorts, where traits that do little or nothing for fitness are selected just because they help get mates. (Richard Prum is a recent advocate of this view; see also Coyne for a more nuanced take on Prum's position.) If this view is true, certain sexually selected behaviours - including art - would indeed lead to higher mating success after all, but it wouldn't be better fitness driving it, just _l'art pour l'art_.

Making things special - art - on a small scale is rare. How many people compose poems, or write books? Jewelry is a widespread instance, but in jewelry we see not a signal of fitness but a costly signal of caring (and wealth).

The other examples of art given in the book - interior design, gardening, or cooking - that people do partake in regularly are not particularly high cost. Those things may signal something, but it remains to be shown that they work in the same way that less common, higher-cost art does.

So I'm inclined to think that most art is produced by a select elite and consumed by the masses. We would perhaps have to separate "that" art from everyday art (whatever that is) and study them separately: everyday art is usually bought as a signal of caring, but not of fitness.

Moving on to other forms of art,

For painting, we do see that in the available literature some papers find that artistic success correlates with mating success in visual artists (Clegg, Nettle, et al. 2011), even after controlling for income.

And for writing we find mixed results: from some lists of great writers (Lange and Euler, 2014), there is a correlation between number of works written and number of mates, but not marriages or children. The authors also note that most literature tends to be written by men of reproductive age, and that this may be evidence for sexual selection. But think about it: kids are not doing much of importance, and old men won't have the energy or mental faculties either. So we are left with middle-aged people. In their own data, women also showed a peak at the same age as men (30-40 years old). The fact that many more books were written by men is not surprising either: men dominate almost every intellectual or creative endeavor; for almost any "Great X" in any field you may think of, we will almost always find men. For this, you could have a story that builds upon risk and status seeking and/or higher intelligence variance, with no specific reference to art.

So while the evolution of art by means of sexual selection is plausible, and after reading the book it sounds more plausible to me than before, I'm not fully convinced that it is not a spandrel as Pinker says.

(X): Religion

This chapter follows a similar structure to previous ones: people do weird religious rituals with special clothing and objects, some abstain from mating, sacrifice healthy animals, and undergo mutilations like circumcision or scarification. Every Sunday, millions of Christians all over the world go to their local churches. Every year, over a million Muslims travel from every part of the world to Mecca for the Hajj.

Given that these things don't seem to have obvious adaptive value, a deeper explanation is sought.

One initial explanation is that supernatural beliefs in God and Heaven cause these behaviours, but SH inform us that this is not the consensus view among anthropologists and sociologists (though they do not cite evidence immediately after the claim, something I would have preferred). They say that we don't worship because we believe, but that we worship - and believe - because it helps us as social creatures. This strikes me as an odd wording: surely if religion is adaptive, the specific worship and rituals could be caused by biasing the brain towards holding religious beliefs, plus making us like certain sorts of activities involving other people. But worship almost always requires belief to enable it, while there is plenty of belief without worship. If you take away religious belief, you generally take away worship too, unless you want to define a friendly gathering as a religious event.

The authors then say something that I hadn't noticed before: That with the exception of Christianity, Judaism, and Islam, most religions do not care about your private beliefs as long as you show public acceptance, citing Greek and Roman religions as examples.

Now, the authors mention Buddhism, Shintoism, and Hinduism. Do these religions care only about what you publicly do? What would that mean?

  1. Maybe it means that part of the religious canon has things like "It's okay if you don't believe in any Gods" and "You have to do the ritual thingies. Or else."
  2. Or maybe it means that people won't burn you at the stake for your beliefs, but you have to participate in public religious events, or you will be shunned.

It's complicated, and the answer has probably changed from place to place and from time to time.

I doubt any religion explicitly says that it is okay to disbelieve it, so (1) is out of the question. (2) may have applied to Rome and Greece, but this changed over time: the Romans at some point started to persecute Christians. Christians themselves went to great lengths to spread their belief (Crusades, Inquisition, etc.), but Christianity has now become more of a cultural thing. Similarly, Islam has manifested itself in different ways: in the Ottoman Empire, for example, you could be a Christian and not be persecuted, even when Islam was the state religion.

Thus there are religions concerned with what you believe and do both privately and publicly (old Christianity, present day Islam), religions unconcerned with either, which work more like cultures (Shinto), religions that are more about public display than belief (Greco-Roman religion), and religions that are about private belief but not public display (the rarest of all; this sounds like some very small sect bordering on philosophy).

There is then an interesting set of claims in a single paragraph:

Compared to their secular counterparts, religious people tend to smoke less, (Strawbridge et al. 1997) donate and volunteer more, (Schlegelmilch, Diamantopoulos and Love 1997, Becker and Dhingra, 2001) have more social connections,(Strawbridge et al. 1997) get and stay married more,(Mahoney et al. 2002, Strawbridge et al. 1997, Kenrick 2011) and have more kids (Frejka and Westoff 2008, Kenrick 2011).  They also live longer,(McCullough et al. 2000, Hummer et al. 1999, Strawbridge et al. 1997) earn more money,(Steen 1996) experience less depression,(Wink, Dillon, and Larsen, 2005) and report greater happiness and fulfillment in their lives. (Lelkes, 2006)  These are only correlations, yes, which exist to some extent because healthier, better-adjusted people choose to join religions. Still, it’s hard to square the data with the notion that religions are, by and large, harmful to their members.

This seemed quite interesting, so I did a quick review of the literature for some of these claims (I focus on those that most relate to self-centered benefits).

Overall, most of the literature is correlational in nature, and attempts at causal analysis are few. Some interesting heterogeneous patterns emerged from the review, and SH's core conclusion is correct in general: religion is not harmful to individuals, with the caveats discussed below.

For physical **health-related outcomes**, most papers indeed find that religious people are healthier across a range of measures in different countries, but causality has in general been difficult to ascertain. A recent paper that tries to address these concerns ended up finding a negative relationship instead, for a sample of European countries plus the US (Berggren and Ljunge, 2017). So the conclusion may be that healthier people are more religious, but that if you take a person and "apply" religion to them, you get worse health. I acknowledge this is a non-peer-reviewed work, but it is the only one published so far that tries to go beyond correlations. Some weight has to be given to it for trying to look at causality, but the rest of the literature is still there, so I wouldn't claim religion is harmful just because of that one paper.

For mental health and happiness, the Wink et al. paper focuses on the elderly. Initially no relation was found, but on a deeper look, a peculiar relationship emerged: Among those in poor health, religion reduced depression, but the opposite effect obtained for those in good physical condition.

A recent paper (Diener and Tay, 2011) tried to answer the question: if religion is so good, why are people leaving it? As one climbs higher up the scales of education, wealth, and intelligence, across individuals and countries, one finds less religion. SH have said that individuals generally know what is good for them (and I agree). Could it be that individuals know they are trading off earnings, health, and happiness against the truth?

The paper found that - as one might expect - living in difficult conditions makes one more religious. Positive correlations were found between religion and subjective wellbeing (SWB) after controlling for life circumstances, in the US and in the world more broadly. But in less religious countries there was no relation, a pattern similar to other studies (Snoep, 2007, Stavrova, Fetchenhauer, and Schlösser, 2013). So maybe religion makes you well adjusted when those who surround you are also religious, but not when they are not.

For earnings, Steen's paper interestingly decomposes the results by religion. Catholic men earn more than Protestant men, and Jews earn even more. The usual explanation for the salubrious effects of religion operates either through more subjective happiness due to belief or, more importantly, through having a stronger supportive community. But here we find differences within religions and, importantly, that non-religious men earn as much as Protestants. Maybe Catholics have a super-strong work ethic that Protestants do not have(?)

In a different country, Canada, religion in general is found to have a negative effect, but again Jews have higher wages (Dilmaghani, 2011). (This makes sense.) But in a more recent paper, decomposing results by province, differential effects were found: religion predicted higher wages in religious areas, but lower wages in less religious areas (Dilmaghani, 2016). This fits the religion-to-fit-in idea.

Similarly, in a German sample (Cornelissen and Jirjahn, 2012) - but note that this is a working paper - religion was found to have negative effects on wages, while, oddly, being raised by religious parents and then becoming an atheist raised earnings.

In the US, coming back to the Steen paper, Steen himself did another study some years after the one mentioned by SH (Steen, 2004). Catholics were again the ones found to be earning more. This doesn't seem to be what a generic religion-is-good-for-earnings theory would predict. (Might there be hard-to-discern ethnic correlations behind this?)

At a more macro level, besides the notorious inverse correlation between income and religion at the national scale, studies at the county level in the US show negative or neutral effects of religion (Rupasingha and Chilton, 2009).

And finally, a recent study (Herzer and Strulik, 2016) looking at the national level (in Western countries) and considering causal effects finds both that income reduces religiosity and that lower religiosity increases income.

In general, it appears that religion does have some positive effects, which materialise through increased community participation. However, evidence of changes of faith causally inducing those effects is nonexistent, so I wouldn't recommend anyone try to acquire belief (as Hanson seems to recommend here) in pursuit of a better life.

As a final note, some of the findings above may change if one considers non-linear relationships: it has been argued that if one compares the highly religious and the highly atheistic (where "highly" means they hold their conviction strongly), both groups have better mental health and wellbeing than the weakly religious (as noted by Galen, 2015, and Galen and Kloet, 2011).

(XI): Politics

This chapter is good, and I endorse it mostly as is. If you have read Jason Brennan and Bryan Caplan, this is a similar angle.

(XII): Conclusion

Thus we come to the end of the book. The final chapter rehashes the key points of the book, and posits ways in which knowing the contents of the book may be useful.

Do I agree with the book? The book makes many claims, and so it is useful to separate them. I've made many remarks in the section above about specific claims, but here I limit myself to what I think are the core underlying claims of the book.

  1. There are evolutionary explanations for many commonly occurring behaviours
  2. People are generally unaware of them
  3. Human cognition is biased and imperfect. Some of this is beneficial for the individual in some situations (either because it makes you feel good or because it gives you some external benefit). We may also call this being "weakly self-deceived" (e.g. confirmation bias, self-serving bias, social desirability bias)*
  4. Large scale social patterns as in education or healthcare can be explained in a substantial part by these biases
  5. Humans are largely selfish. Most of our motives are selfish. Even many acts of altruism have a selfish component. By selfish I mean beneficial to one's interests, not to the propagation of one's genes.
  6. Humans are unaware of this preponderance of selfish motives
  7. People are generally self-deceived. By self-deception I mean that there is a gap between knowledge about X stored somewhere in the brain, and the knowledge about X that is brought to conscious attention. We may call this "strongly self-deceived".
  8. A substantial proportion of people are hypocrites. By hypocrisy I mean to profess ideals X but fall short of X, or to criticise others for failing Y while doing Y oneself, or to believe X and do Y against X.
  9. Large scale social patterns as in education or healthcare can be explained by means of hidden motives (Adaptive self-deception)

I agree with points 1 to 5, and disagree with the rest.

*This is where I'd put the effect Trivers has found. On the one hand it is a change of belief motivated by willingness to believe something, which sounds like a common use of "self-delusion", but on the other hand it doesn't show that deep down our brain knows the truth. I don't, however, think that _this_ effect is a big explainer of things we see outside of the lab.

The reason for my disagreement regarding 6 is that, prima facie, it seems to me that people do know this: I asked a bunch of people and they confirmed my view; what they do, they do for selfish reasons, and one doesn't have to poke very deep. SH don't have anything better than my casual empiricism; they assume people are unaware. It is true that people don't regularly talk about a motive being selfish or unselfish, and that we generally prefer to embellish our talk, but this doesn't mean we are unaware of it. My best model of standard human behaviour is that it is selfish within the constraints of certain moral rules. Most people wouldn't commit murder even if it were in their own self-interest. And I think this model is also the model most people have.

The reason for my disagreement with 7 is that no evidence for a dual representation of knowledge has been provided (I am not alone in making this conceptual complaint; see Pinker, 2011): when someone holds one of those false beliefs that happen to be "beneficial", there is no hidden representation of the truth in the brain. This is my reading of the evidence as of the date of writing, but I am open to revising it.

One may argue: "But what if we include self-serving biases of different sorts and biased forms of cognitions in the category of self-deception?" And I may reply:

Well, okay, but then it would still not be true that people are unaware of their selfish motives, it wouldn't be true that people are hypocrites, and it wouldn't be true that adaptive self-deception is the mechanism behind all the troubles.

As for large scale social patterns, we would indeed have gotten closer to explaining them, but in doing so we would be offering a cognitive-bias explanation (I endorsed point 4). This is not bad! It may not be a new exciting idea, but the truth need not be exciting.

In any case, the evolutionary explanation for a behaviour doesn't warrant claims about the motives of an individual. I use the same example I used above: A couple engaging in safe sex does not have the subconscious motive of having children. A painter creating art doesn't have the subconscious motive of mating. People engaging in conversation don't have to have the subconscious motive to show off. Someone going to eat nice food doesn't have a subconscious motive of actually alleviating hunger.

Instead, we say that sex is pleasurable because individuals who find sex pleasurable have higher fitness: pleasure is the explanation for safe sex, and mating is the explanation for why it is pleasurable. But the mating motive is absent in the execution of the adaptation. Likewise with the painter: the painter may not know why he has that deep passion for creating art, but it is that passion that truly drives him, not mating. And with people in a conversation, if one asks, one will get answers like "We talk because we like it", or "To show that we care", or "To know what is happening around us (in a small social network)". Those were pointed to as hidden motives, but I see people readily mentioning them as reasons for why they talk. Now, yes, why do people like talking? Why do people like to show that they care? There we can apply our evolutionary explanations.

On at least one more occasion the authors impute to most people motives that I think they wouldn't really give: grab any student and ask them why they went to uni (do this as a friend, to avoid socially conforming responses), and you'll get answers like "To get a job" or "Because I liked the subject (and to get a job)". Some will say "To learn about X because I am passionate about X", but they will be a minority (and yet, that person can truly love X, so no self-deception there either).

(XIII): Fixing social institutions

The book aims not just at increasing our knowledge and signalling Simler and Hanson's prowess and intelligence, as they admit, but also at proposing a novel way of looking at and solving big social issues. If healthcare is supposed to be about health, but we also want to show that we care, then we need a system that, in Hanson's words,

In order to actually make progress, what we need to do is to produce new social institutions that pretend to give people what they pretend they want while simultaneously actually giving them the things they actually want.

How should healthcare be fixed for example?

Well, in my view, if healthcare is privatised and desubsidised, if patent laws on drugs are greatly relaxed (perhaps substituted by prizes), if tax exemptions are abolished, and if medical practice is made more broadly accessible (not only defunded, but deregulated), you will in theory have a system that aligns private costs and benefits. If you wish, you can sprinkle on top regulations to deal with possible market failures, or subsidies for those who cannot afford healthcare.

This gives you optimal healthcare for individuals who care about healthcare, and this is my solution to the problems of healthcare provision. I happen to think that the market failure arguments against private healthcare are not that good (here), but if you don't agree that's fine: just turn up the dial of regulation until you are happy.

This solution doesn't assume that people have hidden motives around healthcare; it just tries to provide what healthcare is supposed to provide, efficiently. I would guess that this system would involve a patchwork of providers, profit and non-profit, that almost all expenses would be out of pocket, and that insurance would only be used for expensive treatments. Hospitals would publish their prices, and so on.

Does the SH view lead to something different?

We could try to add some things to my model, or we may have to face tradeoffs between efficiency and acceptability, but I couldn't think of any obvious alternative.

For politics, I personally favor scrapping the whole thing 😀 and reducing the size of government until the things it can do are so limited and boring that people stop caring. There needs to be an end to the idea of a shared national "us". States should be more like HOAs. Alternatively, one may want to implement some of the remedies described by Jason Brennan in Against Democracy. I expect many to disagree with me, but this example is not about the example: it is about analysing the impact of evolutionary explanations on the sort of policies one may favor.

This measure, if implemented, does away with the putative hidden motives of politics by doing away with politics itself, so this is already a plausible hidden-motive compatible solution: downscaling politics. (But people love their nation stuff, so this is hard)

If you want to give politics a greater scope, I found a jocular (don't take it seriously) solution to the problem of politics that might be worth sharing for the lulz: The Court of Values and the Bureau of Boringness. It aims to please both those who care about values and those who care about policy. Maybe this, made serious, could be a hidden-motive-compatible solution?

As for other issues discussed in the book, it doesn't even seem clear that they have to be solved. Some foods and pieces of clothing are considered high status, and SH say why. This won't stop those who want them from wanting them.

Overall, I fail to see how the "hidden motives" story helps in the design of institutional fixes for the problems at hand, above and beyond what we already knew. Bryan Caplan's forthcoming _The Case Against Education_ gives the same diagnosis for education as SH without resorting to "hidden motives". Had Caplan talked about "hidden motives", would that have enriched his analysis to the point where those extra paragraphs supported alternative interventions relative to the baseline book?

(XIV): Final remarks

As I mentioned at the beginning, I have not commented on everything. Nor have I read all the evidence that has been written about every single question. I have tried, however, to give the book a fair assessment. Some of what I have written above will be wrong, and everyone involved will profit from me being corrected. I admit that one of my motives for writing this is to learn more for cheap. If I elicit a reply from others - ideally Simler and Hanson - then I will learn more about either myself (how I get things wrong, or right) or the world (if one of my arguments fails because of a misreading of the literature, or cherrypicking).

I am in a weird position: On the one hand my heuristics tell me that when Robin Hanson, Kevin Simler, Scott Aaronson, Jason Brennan and others jointly single out a book as worthy and correct, then chances are extremely high it indeed is. On the other hand my own assessment of the book is not as uniformly positive, to the point where I am rejecting the core claim.

Even accounting for that, it is a good book in that it offers a run through lots of theories and ways of looking at things, some of which I have noted down for further investigation. It is because of this thought-provokingness and summarisation of dozens of books into a single one that I ultimately recommend the book for purchase.

I am aware some of my comments and remarks are not complete, and many specific claims could be further expanded. For each of the chapters I could have written a blogpost of its own, but I hope that if critiques of my review are made, I will have the opportunity to expand on them.

As a last word, I want to offer a positive view of The Elephant in the Brain in the form of a brief thesis statement (paraphrasing their own) that I do accept:

Simler and Hanson's main goal has been to demonstrate that there exist evolutionary explanations for many commonplace behaviours, and that most people are not aware of these reasons. They show that we suffer from all sorts of self-serving biases, and that some of these biases are behind large scale social problems like the inflated costs of education and healthcare, and the inefficiencies of scientific research and charity.

Comments from WordPress

  • Anon. Anon. 2018-01-17T01:27:05Z

    It seems like a good deal of your disagreement comes down to proximate vs distal causes. Hanson wrote about that argument here: http://www.overcomingbias.com/2017/11/authentic-signals.html

    Is it self-deception to be an artist when you really love art, even though the ultimate purpose is different? That's an unwieldy philosophical-type question that perhaps we could (should) avoid somehow, I'm not sure what the best approach would be though.


  • Epiphyte Epiphyte 2018-01-17T15:42:53Z

    Samantha the bee discovers a valuable flower patch. She returns to the hive and reports her discovery by dancing. Her dancing communicates three things about the patch... distance, direction and value. The longer/harder she dances, the more precious calories she sacrifices, the more valuable she perceives the patch to be.

    Sacrificing calories is a costly signal. Just like spending money is a costly signal. The benefit of costly signals is that they are more reliable/credible/trustworthy than cheap signals.

    Me: What did you think about the book? You: It was wonderful! You should read it!!! Me: How much would you be willing to sacrifice for the book? You: One penny.
    Me: Maybe I'll read another book instead...

    In this case your not-review is huge. But how does that convert into dollars? It would be helpful if there was the correct dollar amount at the beginning. Unfortunately, I wouldn't be able to verify that you had actually spent/sacrificed that amount.

  • Overcoming Bias : A LONG review of Elephant in the Brain 2018-01-22T17:24:35Z

    […] Kel has posted a 21K word review of our book, over 1/6 as long as the book itself! He has a few nice things to […]

  • 1Z 1Z 2018-01-24T17:23:12Z

    Art:

    You focus a lot on the question of why people produce art, but people also consume it... indeed, it seems to be a characteristic of art that it is intended to be displayed or performed. So it is produced because there is a demand for it. The consumption of low art is easily explained, as it tends to be immediately satisfying. High art tends not to be, so it provides an opportunity to signal the consumer's ability to appreciate it.