The use of the Forced Perspective Technique in Lord of the Rings

To my surprise, OpenAI seemingly hasn't recouped GPT-5's training costs once the cost of running the organization is included. I have a longstanding (but not yet deeply examined) intuition that, even though very useful, LLM labs might end up like airlines even if they develop superintelligence: low-market-cap, low-profit commodities.

Anthropic as an untrustworthy organization, with some spicy discussion on LessWrong. My own take: "untrustworthy" seems too strong a word, and part of the why is that the Anthropic of 2026 is not the Anthropic of 2021; current Anthropic seems to believe something closer to "We are the best positioned (better than OpenAI, Google, etc.) to build safe AI, alignment is going really well, hitting the gas makes sense" than the organization would have honestly believed in 2021. I think that doesn't reach the bar of "untrustworthy". However, being publicly loud about this could be counterproductive for vibes/comms reasons (or perhaps because of the non-disparagement agreements the founders signed?), and so maybe that's why they don't do it. This model explains some but not all of the observations collected in the post. On the other hand, they did stick to their guns wrt the US Department of War.

Interactive visualization on how Airfoils work

Javanese tree frog gut bacteria vs cancer

How will OpenAI compete?

Bliss isn't the point: from (jhanic) states to traits

The sky is blue (but not always)!

A new map of human experience

When engineering gets 100% metarational

Eli Dourado on that Citrini piece

On lab automation

Twitter's engineering org is seemingly down to a few dozen people, from ~2000

Dwarkesh+John Collison interview Elon

Why clinical trials are inefficient