Links (67)
Links
Nikunj Kothari's H1B-to-US-residency guide
The present, past, and future of pharmaceutical blockbusters
Experimental gene therapy trials, happening in the charter city Prospera
Estimating ChatGPT's inference costs
Ben Reinhardt launches Speculative Technologies (previously PARPA)
A proposal to accelerate funding decisions at the NIH
The coming of local LLMs
What's the most beautiful place of worship in the world?
New FROs launched!
Dialogue on Viriditas
No physicists? No problem: (advances in) Deep symbolic regression
Accelerating genetic design
Sam Zeloof and Jim Keller launch Atomic Semi, a startup aiming to build better semiconductor fabs
Making mice transparent
Packing 17 equal squares into a big square. The optimal solution doesn't look the way you'd expect!
Music: I've fallen into a techno-esque rabbit hole lately. One example. Books: I've been reading some books on the Toyota Production System, and I really enjoyed the new Stripe Press book from Claire Hughes Johnson.
ARIA heads Ilan Gur and Matt Clifford, interviewed by Works In Progress
But where's GPT-4, you might ask! I have a later post coming up on that.
Retro Biosciences
I started a new job at Retro Biosciences. There's a recent article about the company you can read here, as well as this talk from the CEO, Joe Betts-LaCroix. I'm joining Retro as Head of Theory. I'm not the first Head of Theory in history: before me there was Wolfgang Lerche at CERN. I have an unusual bag of skills and we needed a name for it! In the past month I've decided what instruments to buy, sourced them, and set them up (yes, I'm pipetting!), helped design experiments, contributed to our codebase, led journal clubs, done literature reviews, and helped with planning and goal-setting across the organization.
Joining Retro is the culmination of a journey that started a few years ago when I got curious about biology. My 2016 post 'No Great Stagnation' didn't mention biology by name, but I was starting to suspect that biology was anything but stagnant as a field, and that I should try to learn it. I was working on ML at the time, so I already had knowledge of that other area I thought was promising. The result of that learning period was my Longevity FAQ, and I've written other posts since. But at some point one gets tired of writing about X and wants instead to push the state of the art in X, so then there was Rejuvenome (started in 2021 and ended, prematurely, in 2022). In parallel, I also kept writing on meta-science and, broadly, 'accelerating science' as a theme over that period, ending with this post where I decided to abandon that line of thinking and instead go do research directly. Unsurprisingly, I've learned more about accelerating science at Retro than from months of reading meta-science papers. Questions that were once very abstract and at best second-hand (How are discoveries made? How are experiments planned? How should science be managed? How should budgets be allocated?) are now more real and vivid than they ever were in the years prior.
I'm very happy with my decision!
Now, being a resident of San Francisco, I also have to answer:
Why Retro, when all the cool kids are flocking to that other of Sam's investments? Given my starting point, I could have a bigger impact (and more fun) at Retro than at OpenAI, by virtue of having a lot of domain knowledge about biology and aging, and of Retro being smaller.
OpenAI has found something that might work, and now it's a matter of cranking the scaling lever (good luck to them!), whereas at Retro we are still trying to make things work. I don't expect I would move the needle much at OpenAI as a whole, but at Retro that's very different. Of course, if AGI happens before we're done, our work won't have mattered. But I also think AGI is further in the future than others think. The combination of these two things
- I can't affect the pace of AI development, and the biggest thing I can do there is write some blogposts on capabilities and safety
- I can contribute a lot more to longevity than to AI (and have more fun while doing it)
means that "when/how likely is AGI" does not affect my actions much. Doing good work that wouldn't otherwise have happened is much more attractive.
Every resident of the SF scene looking for a job must face the "why not alignment research?" question (or something of the sort; eventually someone will ask at a house party), i.e. the argument that the rewards of aligned AI could be so large that even a tiny contribution (which surely I can make) outweighs other considerations. My answer: alignment as usually discussed won't work, so there's no point in spending a lot of time on it. But also, current systems will probably be fine (as in, they won't suddenly get sneakily situationally aware and take over). Rather than Skynet scenarios, a more realistic threat model is a bad actor actively using the improved LLMs we'll see in the near future for nefarious purposes. And since I'm good at blogging, there's higher ROI in writing an article that has some chance of convincing safety-pilled individuals to drop alignment and work on what I call 'robustness' than in doing that work myself (what do I know about cybersecurity?).