Collection of papers and articles that I’ve spotted since my previous links post that seem interesting.
- Chengdu to build an artificial moon satellite(!). Engineering never goes too far.
- Battery-free HD video streaming
- Research money is being thrown at Quantized Inertia, an obscure theory of physics that, among other things, says that the Emdrive actually works. Unlikely to be true, but awesome if it is.
- It used to be believed that poorer countries were not converging in terms of income per capita with richer countries. A new paper shows that this was true when a landmark study on the topic was published, but that in more recent times poorer countries are indeed catching up.
- If we look at the fastest-growing countries, this makes more sense
- Exposing judges to law and economics training does change the behaviour of said judges.
- File this under "ideas and culture can overrule incentives"
- Steelmanning the NIMBYs
- NIMBYs get a lot of flak. There are lots of YIMBY associations, with YIMBY manifestos and stuff, but there is no similar intellectual substrate for NIMBYism. Scott Alexander makes one, and hints at an argument we discussed IRL: I suggested that residents of a city may have a right to decide what they want their city to look like. Newcomers know what to expect, and this is done at a city level, so it's not like it's hard to escape the low density areas. One solution for the housing issue in SF is for more housing to be built. Another is for people to move out of SF.
- Economists and economics in tech companies
- Some interesting stuff, but Noah Smith is still right regarding macroeconomics.
- Tabarrok’s post on the Liberal Radical mechanism for public goods, proposed by Glen Weyl et al.
- A thought I had while reading: there may be an impossibility theorem that lets you choose only two of: optimal amount of public goods provision, optimal amount of public goods funding, and non-coercion. LR picks 1 and 2; Tabarrok's DAC picks 2 and 3.
> The government used the contribution levels under the top-up mechanism as a signal to decide how much of the public good to produce and almost magically the top-up function is such that citizens will voluntarily contribute exactly the amount that correctly signals how much society as a whole values the public good. Amazing!
>
> Naturally there are a few issues. The optimal solution is a Nash equilibrium which may not be easy to find as everyone must take into account everyone else's actions to reach equilibrium (an iterative process may help). The mechanism is also potentially vulnerable to collusion. We need to test this mechanism in the lab and in the field. Nevertheless, this is a notable contribution to the theory of public goods and to applied mechanism design.
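The funding rule at the heart of the mechanism (the square-root matching formula from Weyl and coauthors' Liberal Radicalism work) is easy to state. A minimal sketch, with contribution amounts made up purely for illustration:

```python
import math

def lr_funding(contributions):
    """Liberal Radical rule: total funding = (sum of sqrt of contributions)^2.
    The gap between this total and the raw contributions is the subsidy
    paid out of the central matching pot."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# Broad support is rewarded: 100 people giving $1 each...
broad = lr_funding([1.0] * 100)   # (100 * sqrt(1))^2 = 10,000
# ...beats one person giving $100:
narrow = lr_funding([100.0])      # (sqrt(100))^2 = 100
subsidy = broad - 100.0           # 9,900 comes from the matching pool
```

This is what makes the mechanism reward many small contributors over one large one, and also why collusion (one person splitting their contribution across fake identities) is a real concern.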
- Common ownership (when many players in the same industry are owned by Vanguard, Fidelity and the like) is thought to decrease competition, raise prices, etc. This has led some to propose fixing it (see for example the relevant chapter in Radical Markets). An example of these findings is Azar et al. (2017), who studied the airline sector. A Cato paper (being a Cato paper, you know what they would say) argues that this is not the case.
- Alex Tabarrok published a study a while ago that didn't find much of an effect of regulation on industry-level dynamism. A new paper, looking at the same dataset, does find a negative effect on business investment. The effect doesn't seem to be that big: a 1 SD increase in the "regulation" variable leads to a 5.5% decrease in the business investment rate. Still, it's something.
- A defence of the market on behavioural economics grounds
> Suppose the cake is at the front of the display. When the ordinary human "Joe" goes to the cafe, he selects the cake. If the fruit had been at the front, he would have selected the fruit. Has Joe made an error in his choice? We need to ask what his latent preference is. But suppose Joe is indifferent between cake and fruit. He is not misled by labelling or any false beliefs about the products or their effects on their health. He simply feels a desire to eat whatever is at the front of the display. What is the nature of the error?
>
> To help answer this, imagine that SuperReasoner also goes to the cafe. SuperReasoner is just like Joe except that he "has the intelligence of Einstein, the memory of Deep Blue, and the self-control of Gandhi". (Sugden borrows this combination of traits from Nudge). What happens when SuperReasoner encounters cake and fruit that vary in prominence? Since he is just like Joe, he is indifferent between the two. He also has the same feelings as Joe, so feels a desire to eat whatever is at the front. This is not a failing of intelligence, memory or self-control. There is no error. Rather, the latent preference itself is context dependent. But if latent preferences themselves are context dependent, how do you ever determine what a latent preference is? What is the right context?
- Glen Weyl’s syllabus for a course on Radical Markets
- We knew that gender differences in career choice are greater in more egalitarian countries. The same seems to be true for many measures of personality.
- The gender equality index they use is the first principal component of a PCA over different indices that do not all seem directly related to social norms. For gender equality, I prefer using the SIGI. If anything, I'd expect the study to underestimate the extent to which differences increase with equality.
- A popular model of moral psychology I've fired shots at previously here is that intuitive responses are deontological and feelings-driven, and can then be overruled by deliberate, rational, cold utilitarian thinking. A paper argues that those who give utilitarian responses are not suppressing an initial deontological intuition; rather, their intuitions are utilitarian to begin with. The sample size is above the threshold I usually set as a minimum to bother sharing (100).
- What meta-analysis reveal about the replicability of psychological research
- An interesting finding is the % of studies that have "adequate statistical power" by field. Statistical power (usually represented as 1−β; α is the significance level) is how likely a study is to detect an effect if there is one. Or: if the alternative hypothesis is true, what are the odds that the null hypothesis will be rejected by the study?
- As one might expect, genetics comes out on top, with 70% of studies having over 70% power, while the consumer and cognitive psychology literatures are the weakest.
- This means that increasing power (for example, with larger sample sizes) may help find some effects in literatures that at the moment are failed-to-replicate quagmires
- How big should a sample size be? For example, if you expect a small effect size (d=0.1) and you want 80% power, then N~800-1000. If you expect a large effect size (d=0.8), you may be okay with just N=15, but that opens the door to p-hacking and other dodgy practices.
- Meta-analysis on whether attractiveness correlates with broad fitness (like, say, being healthy). This pitches a pure sexual-selection model vs a good-looks-as-signaling model. The first model seems stronger. (ht @DrXaverius)
- Physics nihilist-in-chief Sabine Hossenfelder (I mean it in a good way!) on some results that may rule out the existence of dark matter.
> It is well documented that there are diminishing returns in research funding. Concentrating your research dollars into too few individuals is wasteful. My own explanation for this phenomenon is that, Elon Musk aside, we all have cognitive bottlenecks.

(Highlight from Dan Lemire's links)
- How Information Theory became a field (h/t Noor Siddiqui)
- A defence of a very odd paper (Feminist Glaciology) is attempted, and a critique is provided.
- Getting an O-1 visa (for the US). I thought one had to have a Nobel Prize or something, but actually, it's not that hard.
- Y Combinator report into female founder harassment
- 19 out of 88 surveyed female founders (~22%) reported some form of sexual harassment from angel investors or VCs
- Predicting population growth is hard.
- One might think that the human race will go extinct in a few hundred years if the fertility transition continues in the developing world; mathematically, a fertility rate that stays below 2.1 does give you that result. (Replacement is 2 children to replace the two parents, plus ~0.1 to account for children who die before reproducing, accidents, etc.)
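The below-replacement arithmetic is just compounding. In this toy sketch the starting population, TFR, and ~30-year generation length are made-up assumptions for illustration:

```python
def population_after(generations, pop0, tfr, replacement=2.1):
    """Each generation, the population scales by tfr / replacement
    (assuming the total fertility rate stays constant)."""
    return pop0 * (tfr / replacement) ** generations

# e.g. a constant TFR of 1.5 starting from 7 billion people:
# after 30 generations (~900 years), fewer than a million remain.
population_after(30, 7e9, 1.5)
```

Literal extinction takes longer than "a few hundred years" under mild sub-replacement fertility, but the geometric decay makes the eventual outcome clear; hence the interest in whether selection effects (next bullet) break the constant-TFR assumption.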
- But what if, as population declines, evolution selects for people who are more into having kids? One can imagine that, if the Amish continue being Amish, everyone will eventually be Amish or something. A paper studying this evolutionary dynamic predicts that population will keep growing.
- How to find out if your brain is a computer
- John Baez on misleading extrapolations
- The philosophy of Cloud Atlas