A collection of papers and articles that I've spotted since my previous links post that seem interesting.
- The landscape of innovative governance
- After a recession, economies do not return to their original growth trend. This may sound obvious, but many smart people, including Milton Friedman, have thought differently (see his plucking model)
- As the Austrians and others have been saying
- Note to self: write about business cycle theories at some point
- Financial stability without central banks, new Selgin booklet
- The NGDP targeting reader
- Since I think free banking is the proper way of arranging a financial system, I do not believe that NGDP targeting is superior to it. It could be argued that free banking approximates NGDP targeting, but this need not be the case if banks monetise only liquid, high-turnover assets (the real bills doctrine), as they used to in Scotland.
- A critique of the theory that atheism is caused by mutational load (which goes hand in hand with lower fitness across the board)
- As I said in my review of The Elephant in the Brain, the salutary effects of religion are overblown
- An interesting upcoming book on the role of reason in moral reasoning
- Same but in general
- A review of theories of human altruism
- High IQ individuals have better art taste
- “Better art taste” is to be read like “intelligence”, as in g: there is a scale that measures it, and the proposition is to be read in that way.
- Maybe the reason people dislike the idea of IQ is that there is a difference between the popular concept of intelligence and the psychologists’ concept. As readers of Nintil, you won’t be shocked if I say that “A is less intelligent than B” is a coherent proposition that can be true or false. But similar statements could be made with other properties such as aesthetic properties “A is less beautiful than B”. This is generally regarded as the realm of the subjective, but making a scale about it enables one to make statements that sound broader than they actually are.
- Branko Milanovic reviews Nassim Taleb’s work
- T. Greer on Jordan Peterson’s grand project
- “It is nothing less than the revitalization of Western civilization itself.”
- Note that I am by no means a “Petersonian”, but this is an interesting view
- Virgin Rawls vs Chad Nozick
- Rarely does a meme deserve a place in my links posts.
- The consciousness deniers, by Galen Strawson
- There are people out there who believe that consciousness does not exist. WTF! says Strawson
- More Strawson
- What’s wrong with speciesism?
- Singer got it wrong when he accused everyone of speciesism. Instead, people hold what one might call personism, which is a far more robust position
- What is utopia? Playing games
- The meta-problem of consciousness. Chalmers: Why do we think there is one?
- There used to be such a thing as a Basque-Icelandic pidgin
- Movie about Roger Penrose’s cosmological model
- Awake under anesthesia
- Gender segregated occupations in Norway and the US
- What if high-ranking journals publish shabbier results?
- Synaptic weights in neurons are more volatile than originally thought
- This is here for future reference: some have argued that “key liberal principles lead logically to egalitarianism“. Logic-based politics is something I’ve seen in many places, so this deserves its own post. I think the article above is mistaken.
- Deep RL doesn’t work yet
Lastly, I meant to write a review of Pinker’s latest book, but I didn’t. Instead, I’ll just say that I find myself in agreement with most of what he says, except for the bit around existential risks, where he is not sufficiently Bayesian (though maybe he is, depending on how you read his words. Is he denying the possibility of assigning credences to rare events?). His view of the problems around superintelligent AI is better than his previous takes on the matter, but still a bit silly. As an example, at some point he claims that if mankind is smart enough to build superintelligent AI, we are smart enough to avoid killing ourselves with it, ignoring that our end was a possible outcome of the Cold War. Yes, we survived, but we might not have. Interestingly, he says that if an AI is smart enough to take over the world, it will be smart enough to understand what we mean by “Make me some paperclips” (make 4 or 5, rather than take over the universe to make lots, just in case). Initially, this may seem like another bad take: an AI system composed of several modules can have a deficient input module for interpreting orders and yet be good at getting things done. But here it could be argued that having good priors about the world (knowing what humans mean by “make me a bunch of paperclips”) is part of what is needed to take over it. If he wants to offer an argument against worrying about AI, perhaps a better one is the Talebian argument: the world is just too random. In a coin-tossing exercise, a superintelligent AI won’t do better than you. Can a superintelligent AI do better than the stock market? Etc. I haven’t seen this explored in much depth elsewhere, but if valid, it defuses the issue.
Another problem with Pinker’s book is data quality and comparability. His book claims, indirectly, that Montreal in the 60s was as liberal as the Middle East in 2005. This is not plausible. It is an artifact of the way that plot is constructed (by extrapolation) and of the thing it measures: subjective responses. If one asks “Is being gay a bad thing? Rank your answer on a Likert scale”, a 5 in Montreal might mean that one has some vague squeamishness about gay people, but otherwise that’s it, while a 5 in Iraq may mean that one would stone them on sight.
However, even if these critiques partially undermine some of his points, the conjunction of all the evidence is mutually reinforcing, and the overall thesis still stands strong.