In defense of Emil Kirkegaard

Emil O.W. Kirkegaard (@KirkegaardEmil) and Julius D. Bjerrekær have recently published a notably large (N=68,371; 2,620 variables) dataset scraped from the popular dating site OkCupid.

My initial reaction to this was a positive one: they went through the effort of scraping the data, assembling the dataset, writing the scripts to analyse it, and then sharing it all with everyone, for free. It’s even admirable. If every researcher were as open, science would probably progress much faster.

Some people had a rather different reaction. I am aware of two posts criticising Kirkegaard, which I will now comment on.

Emily Gorcenski has written a blog post that begins with the rather unkind words “Content Warning: Actual Nazis”.

The first complaint is that the dataset includes complete usernames, and responses of a highly private nature (e.g. sexual preferences). All this without OkCupid’s or the users’ consent.

While this point is reasonable, what follows certainly isn’t:

The purpose of this research appears to be begging the question. Despite the wealth of available information in the data, the users chose to test hypotheses comparing cognitive ability to religious affiliation and to explore correlations between Zodiac signs and certain preferences. This has a dramatic stench of attempting to find a dataset to match a pre-formed conclusion; in this case, it smells a lot like the prototypical rhetoric of a specific athiest [sic] politic. One author’s comments betray any sense of independence in this regard.

What question is being begged here? Kirkegaard and Bjerrekær (KB) decided to extract a measure of cognitive ability from the dataset because it is well known that cognitive ability correlates with many interesting variables; this is standard practice for any serious social science paper with access to such information. Then they picked some variables as an example. What’s wrong with that? Is it wrong to say that atheists and agnostics are, on average, smarter than religious people? This is a scientific question, and we should welcome honest attempts to answer it. Should I, a man, accuse of bias those researchers who report that men are, on average, more aggressive than women? I may point out methodological issues in their research, but I should not get angry at them. Emotional reactions like this are improper in the realm of scientific inquiry.

Gorcenski then asserts that the study is deeply flawed in multiple ways. But as we will see, her complaints are mostly ethical, not about the study itself.

Ethical issues

First, the study violates OKC’s terms of service and research ethics. She then draws a comparison with the Nazis, saying that one key principle of ethical human-subject research is “voluntary, well-informed, understanding of the human subject in a full legal capacity”. According to her, KB’s hypothesis falls within the scope of medical research ethics.

OkCupid users do not automatically consent to third party psychological research, plain and simple. This study violates the first and most fundamental rule of research ethics. In fact, OkCupid’s Terms of Service includes the following statement:

You further agree that you will not use personal information about other users of this Website for any reason without the express prior consent of the user that has provided such information to you.

This is a bit incomplete: the ToS itself also states that

You should appreciate that all information submitted on the Website might potentially be publicly accessible. Important and private information should be protected by you. We are not responsible for protecting, nor are we liable for failing to protect, the privacy of electronic mail or other information transferred through the Internet or any other network that you may utilize. See Humor Rainbow’s privacy policy for more information regarding privacy. The privacy policy is incorporated into and a part of these Terms of Use.

There is also a clause about not using the data for commercial purposes:

The Website is for your personal use only and may not be used in connection with any commercial endeavors. Organizations, companies, and/or businesses may not join and use the Website for any purpose. Illegal and/or unauthorized uses of the Website, including collecting usernames and/or email addresses by electronic or other means for the purpose of sending unsolicited email or using personal identifying information for commercial purposes, linking to the Website, or unauthorized framing may be investigated and appropriate legal action will be taken, including without limitation, civil, criminal, and injunctive redress. Use of the Website is with our permission, which may be revoked at any time, for any reason, in our sole discretion. At our sole discretion, we may take reasonable steps, including limiting the numbers of emails you send or receive and electronically filtering or throttling or terminating your e-mail.

But that clause does not apply to KB’s case.

Furthermore, the ToS makes explicit that users should be aware that their data may be acquired by third parties:

Your safety and security are very important to us. The nature of this Website promotes the sharing of personal information by users with other users. Humor Rainbow cannot and does not assure that it is safe for you to have direct contact with other users of this Website. Current technological developments make it possible for users of the Internet to obtain personal information about, and locate, other users, with very little other information. For example, it is possible to use certain widely available commercial Internet search engines to locate a person’s home solely using that person’s correct name

Her next point is that KB’s research must do no harm, and they must seek to answer a legitimate question.

The thing is that she has not understood that this is primarily a dataset release accompanied by some exploratory findings, rather than a study built around a particular result. Her last paragraph shows a deep misunderstanding of the KB paper, and of cognitive psychology in general.

Even still, attempting to link cognitive capacity to religious affiliation is fundamentally an eugenic practice.

Wut? Attempting? They didn’t attempt it, they found it. And they are not even the first to find it; see Zuckerman (2013), or Ritchie, Gow and Deary (2014). Among philosophers, who are notorious for disagreeing about everything, disbelief in God is one of the more agreed-upon positions. So what? Why should it be the case, a priori, that intelligence is equal among atheists and religious people? It’s an empirical question. Perhaps these studies are wrong. But the proper reaction is to explain why they are wrong, not to say that these researchers are doing eugenics. Eugenics, as popularly understood, is negative eugenics: coercively improving a given gene pool by procedures such as forced sterilisation. KB have not advocated that, not in this paper and not in any of their writings, unless we count opposition to immigration as eugenics. But that would make a sizable fraction of the population of many countries eugenicists, which is a dubious conclusion. And I say this as someone who favours a less restrictive immigration policy.

The next critique is that the data release might be used against OKC’s users. Technically, yes; but in practice, no. First, OKC has usernames, not real names, and the dataset does not include pictures. Second, many users have a username specific to OKC, so they are not easily identifiable outside the platform. Furthermore, if a particular employer, friend or lover wanted to find out about someone, they could have done so already (though, given the previous issues, this is not very likely).

The fact that the information is put together is irrelevant. Yes, a single user can’t gather all of this information through normal use. So what? That brings you no closer to unmasking the people you want to unmask. An employer who wanted to find out things about employees could just go to OKC, set a filter for a given city, and start scrolling. Pinpointing people on OKC itself is actually easier than doing so from KB’s dataset.

Methodological flaws

One methodological flaw she identifies is that the results may be biased due to an option in OKC that hides ‘queer users’ from ‘straight users.’

Consequently, the underlying data set contains a significant sample bias: queer people are excluded disproportionately from the data. This is conventional queer erasure: queer identified folks are not included in a study, so conclusions applying only to straight people are used to inform conclusions which then get pressed upon queer people. This bias is unfortunately commonplace, but the authors appeared to make no effort to address it.

Certainly, this could be a methodological concern, but it is not grounds for accusing KB of ‘queer erasure’: they have not excluded anyone deliberately.

KB admit that the dataset has limitations:

It is worth emphasizing the limitations of the dataset. The sample is not representative of any national population, rather it is a self-elected convenience sample that consists mostly of young to middle-aged adults from the US, Canada and the UK. Furthermore, due to the way we sampled the data from the site, it is not even representative of the users on the site, because users who answered more questions are overrepresented.

They did not address the particular concern that Gorcenski raised. However, it is not clear how many users were hidden from the bot in that way. Queer users in general are not excluded, only those queer users who chose that option. This could be added as a caveat to the dataset, along with the others already identified. However, if the number of queer users who hide their profiles from straight users is low enough, and sufficiently many queer users have been sampled, the KB dataset will not really suffer from this bias.
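Incidentally, the size of this bias is easy to bound. Here is a minimal sketch with entirely hypothetical figures (the queer share of users and the uptake of the hide option are assumptions of mine, not numbers from the paper) showing how much the scraped sample deflates the queer share: the deflation is roughly the product of the two fractions, so if uptake of the option is modest, the bias is small.

```python
import random

random.seed(0)

# Hypothetical illustration: how much does a straight-account scraper
# underestimate the queer share of users if some queer users hide?
N = 100_000
queer_frac = 0.10   # assumed true share of queer users (made up)
hide_frac = 0.20    # assumed share of queer users using the hide option (made up)

population = []
for _ in range(N):
    is_queer = random.random() < queer_frac
    hidden = is_queer and random.random() < hide_frac
    population.append((is_queer, hidden))

# The scraper only sees non-hidden profiles.
visible = [p for p in population if not p[1]]
sampled_share = sum(q for q, _ in visible) / len(visible)

# Analytically, sampled ≈ q(1 - h) / (1 - q·h): with q = 0.10 and h = 0.20,
# the sampled share drops from 10% to roughly 8%.
print(f"true queer share:    {queer_frac:.3f}")
print(f"sampled queer share: {sampled_share:.3f}")
```

Under these made-up numbers the distortion is on the order of two percentage points; whether the real distortion matters depends entirely on the actual uptake of the option, which neither KB nor Gorcenski report.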

Peer review failure

The journal in which the data was published is, true, one in which one of the authors, Kirkegaard, is an editor. But so what? Is he wrong because of that? Read the paper and point out the mistakes; the merits or demerits are in the paper itself. Good methodology is good methodology, no matter who does it. And KB provide everything one needs to replicate their findings.

As a heuristic, ignoring papers published in minor journals is a good idea. But ignoring is one thing, and what the critics attempt is another.

The second accusation is even worse. Because Open Differential Psychology has published studies about intelligence and religion, or about immigration and intelligence, she concludes that

These papers have exhibit disturbing far-right trends with dramatic methodological failures and should not be considered scientific research in any way.

Far-right trends? As far as I know, the authors are not far-righters. I do infer, though, that Gorcenski thinks the papers are wrong because she dislikes their conclusions (for being far-right). She should have pointed out the precise methodological problems she references. Small samples? Confounding variables? P-hacking? Wrong causal models? She doesn’t say. If ODP research is obviously wrong, it should be trivial to explain why it is wrong, and such a debunking would be far more credible and damaging for ODP, and for Kirkegaard in particular, than complaining about results one doesn’t like. For my position (lifting immigration restrictions), results indicating that immigration may lower national average IQ (and national average IQ is important) are supposed to be damaging. But so what? Maintaining a policy position by refusing to acknowledge reality is ideological nonsense: one can admit the research is true and keep defending the policy, or qualify its defense. There is an is-ought gap, after all.

The only valid complaint that Gorcenski makes is the ethical one. I’ll come back to that later.

The next critique is Oliver Keyes’s.

The beginning of the post is quite derisive:

At this point in the story I’d like to introduce you to Emil Kirkegaard, a self-described “polymath” at the University of Aarhus who has neatly managed to tie every single way to be irresponsible and unethical in academic publishing into a single research project. This is going to be a bit long, so here’s a TL;DR: linguistics grad student with no identifiable background in sociology or social computing doxes 70,000 people so he can switch from publishing pseudoscientific racism to publishing pseudoscientific homophobia in the vanity journal that he runs.

So now, just because someone doesn’t have a formal background in the relevant fields, they cannot say anything meaningful about them? Even if they have read the relevant textbooks and are up to date with the relevant literature? That’s pure academic fetishism. Again: if Kirkegaard doesn’t understand what he is talking about, it should be easy to say where he goes wrong. Is he using improper statistical methods? Misunderstanding key concepts? Keyes doesn’t say. Even a casual overview of any of Kirkegaard’s papers suggests that he does know what he is talking about. And he gives you everything you might want in order to try to prove him wrong (data and scripts).

As with Gorcenski, the accusations of pseudoscientific homophobia and racism are unfounded. First, because Kirkegaard has never drawn any racist or homophobic conclusion in his published research. And no, doing correlational studies with intelligence or other variables doesn’t count as racism; it’s science. Science can be done poorly, of course, but the fact that the critics do not engage with Kirkegaard’s technical details suggests they haven’t actually checked whether his methodology is pseudoscientific. Hence, as with Gorcenski, we may conclude that Keyes is biased in the same direction: even if he had a vast empirical literature backing what are, in his opinion, obvious facts that Kirkegaard misses, that would be no grounds for such accusations.

He then makes ethical complaints similar to Gorcenski’s. His critique again goes wrong in this paragraph:

This isn’t academic: it’s willful obtuseness from a place of privilege. Every day, marginalised groups are ostracised, excluded and persecuted. People made into the Other by their gender identity, sexuality, race, sexual interests, religion or politics. By individuals or by communities or even by nation states, vulnerable groups are just that: vulnerable.

This kind of data release pulls back the veil from those vulnerable people – it makes their outsider interests or traits clear and renders them easily identifiable to their friends and communities. It’s happened before. This sort of release is nothing more than a playbook and checklist for stalkers, harassers, rapists.

His link references the Ashley Madison leak, which included real user names, not nicknames. Hardly anyone uses their real name on OKC, so the cases are different. And even if one or two people did, what are the chances that those particular users would also suffer negative consequences?

The tone of the paragraph again suggests bias. This is not to say that he is wrong in what he literally says. Around the world, and even in developed countries, people are discriminated against for the very same things he mentions. That includes, not to the same degree of course, people who do scientific research in the very same areas as Kirkegaard. Isn’t it disrespectful, and damaging to his future career, to shame him as a homophobic racist Nazi? He is doing statistics, not writing ramblings about white supremacism! Is it that hard to see?

And it gets worse

And that’s when things go from “terribly poor judgment” to “actively creepy”. Some research questions were used, as a way of demonstrating what the data can be used for, and Kirkegaard’s choices definitely say…something, about him.

His first research question was: what if gay men are basically just women? We have data on gender and we have data on sexuality; let’s see if LGB people provide different answers from straight people for their given gender! Let’s see if they’re basically the opposite gender!

You’ll be amazed to know he didn’t end up incorporating this into the paper, presumably because the answer was “of course not, you bigot”. But he did find time to evaluate whether religious people are just plain ol’ idiots – the methodology for which is checking the user’s response to various questions akin to IQ test entries. You know, the racist classist sexist thing.

Again, choosing research questions is up to researchers. What is pseudoscientific is Keyes’s attitude towards the best-known and most reliable construct in psychology: the general factor of intelligence, g. Here’s a very good introduction to the field, you’re welcome.

That a critique comes from an intelligence denier who gets angry about certain questions does not make it less valid per se, but heuristically it does detract from the credibility of the rest.

As an aside, this kind of creepy superpositional homophobia is actually an improvement on much of the work I’ve found from Kirkegaard while digging into this, which is not superpositional at all: previous credits include such pseudoscience as arguing that letting low-IQ immigrants in will damage Danish society, and they should be subjected them to brain implants and genetic engineering for entry, and (I wish this was a joke) checking whether immigrants commit more crime if they have tiny penises

Neither of these questions are things that would pass actual institutional review as justifications for gathering data of this depth without informed consent. Neither of these questions are particularly original – there’s literature on both, using information gathered by qualified people in responsible situations. And no vaguely competent peer reviewer would look at this methodology, this dataset, and this set of conclusions, and determine that it constituted a publication.

What is the problem with asking and answering scientific questions, people? If the problem is the topic, should Keyes not also call the other ‘qualified’ researchers homophobes? I think his hatred for certain areas of scientific inquiry muddles his valid ethical arguments.

Then there are critiques of Kirkegaard publishing in his own journal, but this has an answer: he cares about openness, and regular journals don’t let him be as open as he likes with his research. Regarding peer review, yes, there was someone who mentioned ethical issues the day before it was published.

He then comments on Kirkegaard’s responses on Twitter to people raising ethical issues, to which Kirkegaard answered that the data was already public.

This is not how academia works, at least not in the field Kirkegaard is publishing this data in and about: responsiveness to questions about the merits or implications of your work is as essential to good research as consent, as picking appropriate methods, as peer review. Refusal to discuss is not professional – and neither is throwing derogatory collective nouns at the people taking issue.

I would say that bashing people the way Keyes does is not how academia works either, but unfortunately sometimes it is like that. Science is like Soylent Green: made of people, after all.

But there’s no indication here that Kirkegaard cares about professionalism; a very, very generous read is that he’s out of his depth in a space and field he doesn’t understand, was never trained in how to do this right, and panicking. A less generous read is that he’s privileged, willfully obtuse and deeply, deeply arrogant about where he has expertise and the work he does. Either way, this work is an embarrassing and unprofessional offering from an institute-affiliated researcher that puts vulnerable populations at risk.

He is here mixing two different things: the quality of the work, and the ethical issues surrounding it. So far, Keyes has not given evidence that this paper, or Kirkegaard’s work in general, leads to wrong conclusions.


Conclusion

The critiques err in their tone and in their accusations of pseudoscience, racism, homophobia, and even Nazism. But they do point out an interesting ethical issue.

However, the magnitude of the problem is not as large as the critics say it is. It may seem so to them because of their anger, provoked by Kirkegaard’s past research and not just by the ethical issue itself. Given that there is little to be gained scientifically from usernames (compared to the other variables), and that there are no pictures in the dataset, it will be hard to identify people from it. If I were Kirkegaard, I wouldn’t have published the usernames, and I would have asked OKC to clarify whether that use of their site was appropriate. But that’s not up to me to decide.

It also has to be pointed out that such limitations on data acquisition are somewhat pointless in today’s world. Emil is not the first to scrape OKC: here, a data scientist did the very same thing (but did not release the data). As we speak, anyone could be scraping any of our public accounts. Publicising things someone said some time ago is wrong if the intention is to harm that person, or if harm can be expected from the release. But here there is no intention to harm, and, in my view, little expected damage to any of the users whose data was scraped.

As of now, Kirkegaard has password-protected the dataset, so no one has access to the data at this moment (except those who already downloaded it). In my opinion, removing the usernames would be enough to make the data public in an ethical way. But perhaps other people, with more experience in publication ethics, can help settle this issue. In any case, it should be approached in a calm, rational way, not the way it has gone until now.


12 Responses to In defense of Emil Kirkegaard

  1. Informed consent is the cornerstone of modern privacy law – everyone who wants to process personal data about identified or identifiable persons has to base that processing either on consent or on some other legal basis (depending on jurisdiction; this is the case in the EU). In this situation there was no consent and no other legal basis. “Someone else could have done it” isn’t a valid defense. This is in line with the American Psychological Association:

    “The American Psychological Association makes it very clear: Participants in studies have the right to informed consent. They have a right to know how their data will be used, and they have the right to withdraw their data from that research. (There are some exceptions to the informed consent rule, but those do not apply when there’s a chance a person’s identity can be linked to sensitive information.)”.

    Not sure why you choose to downplay this issue. The lack of respect for privacy is a huge problem.

  2. torvon says:

    I’m all up for publishing some interesting data, but how does publishing *names of users* –– how does making individuals identifiable –– contribute to science? Why would you make a choice to publish clearnames of users as a scientist? Certainly, these individuals signed up for a website that stated that the clear names would not be made available.

    “By accessing this Website, you agree to use any personal information provided to you by other users of this Website in a lawful and responsible manner. You further agree that you will not use personal information about other users of this Website for any reason without the express prior consent of the user that has provided such information to you.”

    • Uhhh says:

      Usernames are not the names of users.

      • Mithrandir says:

        They are for most of the users, the default username of John Smith (on Facebook) is johnsmith and you need to pay them to change the username.

        Just a cursory search through the public, non-profile, interface of OKC, revealed 45 persons with real name listed there.

  3. Pingback: OKCupid Data Leak – Framing the Debate – Neuroconscience

  4. Jan says:

    “Then there are critiques of Kirkegaard publishing in his own journal, but this has an answer: he cares about openness, and regular journals don’t let him be as open as he likes with his research. Regarding peer review, yes, there is someone who mentioned ethical issues the day before it was published.”

    AFAIK, the paper has NOT been published in the sense of having been accepted for that journal. Instead, the manuscript, with attached data, has been submitted to the journal and it awaits peer review. However, because of the open access nature of the journal, the paper and the data are public from the moment the manuscript is submitted, and peer review takes places openly, too, in a publicly accessible forum thread. The paper was submitted on May 8 and doesn’t seem to have attracted any peer reviews yet.

  5. Is it possible to find my username and my image in this data linked to other facts? In that case I think it is a crime.

    If not… legitimate science attempt?

    Let’s be clear he did not blow the whistle on a multi-million company or the USA so in my mind that is not a whistle blower… it is someone who used very personal data of people that didn’t consent for well what motivation was behind it exactly? I sure hope it was something worthwhile.

  6. TeresaTr says:

    Here we are, back to measuring skulls to say things about “races”.

    • TeresaTr says:

      Also, expecting people to actually read the Terms is naive at best, ad-hoc at worst.

  7. Mithrandir says:

    Just removing the usernames won’t increase the privacy of the users by a lot. I work in the privacy domain so I know a lot of disclosure attacks which succeeded because of cross-correlation with other tables.

  8. ken says:

    Aside from legalities…..what he did is immoral.

  9. Pingback: Contra Sadedin & Varinsky: the Google memo is still right, again | Nintil
