Ben Kuhn thinks dispensing life advice is underrated and that we should do it more. In that spirit, here's some of it.
When making decisions we take a lot for granted. Most of this is fine; it's not really feasible to avoid making assumptions completely. If you go out to buy bread, you're assuming the bakery you're going to is open, that they have the bread you want, and that you are not going to get murdered on the way there. You could, in principle, before taking any action, check on Google Maps that the bakery is open or call to ask if they have rye bread that day. But if you did this for every single thing you do in a day, you wouldn't get much done.
Most life advice is very generic. If I just tell you to “stop assuming so much,” it’s not clear what you should change about your life. I am not saying you should seek perfect certainty in every situation; gathering information has costs. So what am I saying? I could be saying that, on the margin, you should check your assumptions more often in general, but that wouldn’t be quite right either: in many contexts the default assumptions we make are perfectly valid and don’t need to be challenged all the time.
Instead, what I am saying is: for the situations that matter to you (everything from relationships to employment), set some time aside to think about the assumptions you are making in that context. Then estimate the cost of turning each assumption into a certainty. If it seems worth it, pay that cost.
But even saying this is not enough, so I will run through a series of examples to show exactly what I mean, hoping they are memorable enough that they come to mind when you are weighing your own assumptions. As Ben points out, advice is easy to understand but hard to implement: it's easy to see why doing something would be useful, yet we often fail to act on advice we rationally endorse.
I suspect this is because advice tends to be given in very generic terms: the advice-giver is already following the advice, so they tell you what they do, but they don’t give you a strong justification for why you should do it, or a way to get yourself to do it when you are starting out.
My first example is the time I assumed one can only hold one O1 visa at a time (I currently hold two). I knew that one could transfer a visa to a new employer, and that it was possible to get hired by a company of one's own creation, have that entity contract with various parties, and thus work multiple jobs. But no one I knew had two O1 visas, and I didn't even try to search online for whether this was possible, despite the fact that various sources will in fact tell you that it is. Only when I mentioned this to someone was I asked whether I knew it for sure. I of course said no, and was simply told to go check. Thinking "sure, checking takes five seconds, even if the answer is going to be that you can't have concurrent O1 visas," I emailed my lawyer, and shortly after got confirmation that concurrent O1 visas are indeed possible. I then applied for, and was granted, a second O1.
This may sound silly to you; in retrospect it does to me. Checking whether one can hold multiple concurrent O1s takes five seconds on Google (or one email to a lawyer); the cost is low and the expected payoff (getting the visa) high. But at the time, surprising as it may seem, this looked to me like "I know there is a bakery on that corner, no need to check the map": I was simply overconfident in my guess.
Another immigration-related example is applying for my first O1: having read the list of requirements, I thought I would not be able to meet them. My friends told me I probably could, because the O1 criteria are not examined as strictly as their language suggests: you have to be good, but not necessarily Nobel Prize-winning good. In this case, "knowing" for sure whether one can get the visa means applying for it (which costs money). But a good proxy is, once again, talking to a lawyer. You might then assume that talking to a lawyer costs money, but checking that assumption is free: just email a lawyer a question and see if they charge you for an answer. (The answer is no; in general, immigration lawyers don't charge for these first consultations.)
When I discussed these examples with Ben Kuhn, he pointed me to a recurring piece of advice he gives his reports during 1:1s: if they think someone is annoyed at them for whatever reason, they should go talk to that person instead of letting it fester. Either a) it was all in their head (they assumed wrongly, and any potential conflict is defused), or b) the other person was in fact annoyed (they assumed correctly! And now they can talk about why, and how to fix it).
When working on Rejuvenome, one key design decision we had to make was how many mice each cohort should have. This depends not only on statistical power, but also on how many cohorts are run simultaneously and on what the mortality curves for the mice look like. The capacity of the room housing the mice was about 5000, so on paper the study should ramp up cohorts until that ceiling was reached, and then enroll new mice at roughly the rate at which mice die. We had a plan, and we roughly assumed there would be enough space for it. As the time to order the mice approached, I began to wonder whether we really had enough space. Eventually I wrote a simple simulation model in Python, accounting for age-dependent mouse mortality, to see how many mice the plan implied would be in the cages at any given time. The result: the rather ambitious design was infeasible. If we wanted to test enough cohorts to get through many interventions, we would need to either scale down or get a bigger room. This came as a surprise, but the project was quickly redesigned to achieve most of what we wanted by dropping one requirement (statistical power to detect small lifespan changes), which allowed smaller cohorts that fit all the mice in the room. Fortunately we realized this early enough, but we should have realized it even earlier: writing the model and playing with various scenarios takes perhaps a day of work, while the impact on everything from hiring to projected costs is substantial.
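A minimal sketch of the kind of simulation I mean. Every parameter here (cohort count, cohort size, enrollment cadence, Gompertz mortality coefficients) is an illustrative placeholder, not Rejuvenome's actual design values:

```python
import math
import random

def gompertz_death_age(a, b, rng):
    """Sample an age at death (in months) from a Gompertz distribution
    with hazard a * exp(b * t), a standard rough model of age-dependent
    rodent mortality. Sampled via the inverse CDF. The coefficients a
    and b are illustrative, not fitted to real data."""
    u = rng.random()  # uniform in [0, 1)
    return math.log(1.0 - (b / a) * math.log(1.0 - u)) / b

def peak_population(n_cohorts=30, cohort_size=250, months_between=2,
                    a=1e-4, b=0.3, seed=0):
    """Enroll cohorts on a staggered schedule and return the peak number
    of mice alive in any month, to compare against room capacity."""
    rng = random.Random(seed)
    mice = []  # (enrollment_month, death_month) for every mouse
    for c in range(n_cohorts):
        start = c * months_between
        for _ in range(cohort_size):
            mice.append((start, start + gompertz_death_age(a, b, rng)))
    horizon = int(max(death for _, death in mice)) + 1
    # Population is sampled at monthly granularity; enrollments happen
    # at integer months, so this catches every post-enrollment peak.
    return max(sum(1 for s, d in mice if s <= t < d) for t in range(horizon))

if __name__ == "__main__":
    peak = peak_population()
    print(f"peak mice alive: {peak} (room capacity: 5000)")
```

Playing with `cohort_size` and `months_between` against the 5000-mouse ceiling is exactly the kind of one-day exercise that would have surfaced the space problem earlier.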
Why did this happen? In retrospect it seems obvious: there were many things going on with the project, from figuring out the best way to process blood from mice to building data pipelines that moved instrument data into a unified database. I didn't think enough about prioritization, about what needed to be done today versus what could wait. Instead I jumped straight into writing software, because coding is satisfying, while prioritization work takes longer and doesn't deliver the same dopamine hits. Had I asked myself “Do I know for sure we can do what we say we are going to do?” I would have, in full honesty, said no. Asked what I could do to make sure, I would have said "run a simulation and see what happens instead of guessing." And asked “Is this a top priority? Are there more important things to do?” I would have realized it needed to be done as soon as possible.
Cold emailing is another case where assuming too much has costs. Much has been written about why and how to cold email, and about getting over the reluctance to do it. Empirically, those experienced in a field tend to enjoy talking about their work and helping newcomers; what they don't like is feeling exploited. The desire to avoid bothering someone is at the core of the reluctance, but rather than assume that a thoughtful, well-crafted email is a bother, check it: if you received the email you are about to send, would it bother you? Ask others who get cold emails whether they like to help. The compiled knowledge of cold-emailers is that cold emailing works and that thoughtful cold emails are well received, so stop assuming the contrary!
One last example comes from Tesla. There was a part in one of the cars that one team thought was there for safety reasons, and thus owned by a different team in charge of safety. Unbeknownst to them, the safety team believed the part served some other function and was owned by the first team. As a result, the part was there for no reason, and it was only removed when Elon finally asked both teams about it. In this particular case it would have been cognitively taxing to check the ownership of every single part, so the solution is not for engineers to make those checks all the time; rather, one could maintain a well-kept parts database and processes ensuring every part has a reason to be there. As a general point, making awareness of relevant facts easier and cheaper makes it more likely that assumptions get checked and fewer mistakes are made.
Lastly, we can take the logic of this advice one step further: sometimes we make mistakes because we assume too much, and we can fix that by being more thoughtful about what we assume. But how? By thinking about why we assume too much in a given context. Some ideas to check:
- Be more aware of why you are not checking a particular assumption. Checking it might involve talking to or emailing someone you feel reluctant to contact, and so you delude yourself into thinking the assumption is more solid than it is, because then you don't have to check. If this is the case, make it explicit: write “My project is in danger because I don’t want to send a brief email,” and if that rings true, you may find yourself able to send the email after all.
- If something requires effort or time to check, explicitly weigh that cost against the potential downside of getting the decision wrong, and consider whether there are simpler ways to check.
- Request feedback from others. We can be blind to the fact that we are making an assumption at all; what we cannot notice ourselves, others may see immediately.
- Set aside time for periodic review. We rarely even go through what we are assuming in our various projects. On paper this is as easy as blocking off an hour on a given day; in practice it requires committing to the habit.
This latter point is the most important one. You can read this essay, nod throughout, and then forget about it. Or you can pick some aspect of your life, book one hour in your calendar, and think about what assumptions you are making there. As I was writing this line, I did exactly that. It’s just one hour! It could be today!
Thanks to Sholto Douglas and Ben Kuhn for their feedback on this post.