OpenPhil's report on the social returns to research. Seems about right.
Themes in Elon Musk's emails
- Don't be dumb
- Talk to each other clearly
- Micromanagement is good
How common is independent discovery?
Pick a discovery or innovation at random, and the probability it has much in the way of built-in redundancy is probably pretty small. I think it is quite plausible that for most papers or patents, if you erased them from history, no one else would independently reproduce the work in the next two decades.
But that’s for a discovery selected at random. If you pick a patent or paper at random, in all likelihood it won’t be a particularly impactful patent or paper. With innovation, a small number of hits appear to have a disproportionate impact on the direction of a discipline or industry. It seems plausible that the most promising ideas attract many times as many potential discoverers as a randomly selected paper. If the annual probability of getting scooped on an important paper is 10% instead of 2-3%, that implies something quite different about long-term redundancy. With a 10% annual probability of discovery, the probability that no one makes the discovery in twenty years drops from over 50% to just 12%.
That, in turn, suggests there is a lot of redundancy in the most important ideas and inventions, but not in the details. The main trunk of humanity’s scientific and technical knowhow is pretty robust, but the positions of the branches and twigs are not.
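The arithmetic behind the claim above is just a survival probability under independent annual trials. A minimal sketch (the 2-3% and 10% annual rates are the figures from the text, not empirical estimates):

```python
# Probability that a discovery is NOT independently reproduced within
# `years` years, given an annual probability `p` that someone else makes it.
def p_no_rediscovery(p: float, years: int = 20) -> float:
    return (1 - p) ** years

# Typical paper: ~3% annual chance of being scooped.
print(f"{p_no_rediscovery(0.03):.0%}")  # ~54%: over half survive 20 years unreproduced
# Important idea: ~10% annual chance.
print(f"{p_no_rediscovery(0.10):.0%}")  # ~12%: most would be rediscovered
```

This reproduces the "over 50% vs. just 12%" figures in the paragraph above.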
The story of rapamycin
The annotated S4 and simplifying S4
Scott Alexander reviews The Man from the Future
Scott Alexander reviews San Fransicko
The petty pleasures of watching crypto profiteers flounder. Some other crypto failures.
Do academic citations measure the impact of new ideas? Michael Nielsen doesn't think so. We all agree that describing trends in general is one thing; saying much about a specific paper is another, and much harder, especially at low citation counts.
Decentralized Society: Finding Web3’s Soul. I might come back to this if/when I write about crypto
Webstrates (ht/ Andy Matuschak & Slim Lim)
"Yet another huge NIH bet (tens of millions $) crashing and burning. Journals and institutions doing far too little far too late." (On Alzheimer's). See also Derek Lowe
Techniques to expand the power of a single Large Language Model. Andrej makes the good point in the thread that these methods also make (part of) the LLM's internal state readily interpretable.
One of New Science's reports makes it to the policy sphere
Jacob Steinhardt's report on AI forecasting: progress is happening faster than forecasters expected! Also Jack Clark, same story: ML systems are beating benchmarks at an ever-increasing pace.
Particle Physics, now powered by Nintil.com
Vitalik reviews Balaji's book
The dark matter wars continue
Cells are supposed to have nuclei, unless you're a small insect
Red Pen reviews, one of the few sources on nutrition I trust, warns us not to eat too much salt
My twitter take on whether LaMDA is conscious