Systematising breakthroughs
Suppose you want to solve a problem. When problem-solving, one can be in one of two states: stuck, or exploring paths forward. A path forward is a potential solution that might work, but we are not sure, so we need to do extra work to assess whether it is the right way. In science this is akin to "normal science", or 1-to-N science.
This post is about the process of going from being stuck to having a path forward; this would be breakthrough science, or 0-to-1 science. Of course, important discoveries and achievements in science are not just the fruit of a single spark of insight but of a succession of many, coupled with hard work. Here I am thinking of the atomic insight, the spark itself.
By 0 to 1 I mean something that is hard to see from one's prior conceptual framework (the assumptions we have when considering an idea) but that perhaps becomes obvious after the fact ("Why didn't that occur to me!").
There are perhaps three ways of going about this:
- Randomly getting unstuck. You know how this works: Newton's apple; staring at a bunch of code that you know is buggy; going to meet a random friend from some other scientific discipline to get fresh ideas. A concrete example would be the potential cure for cancer that was found while researching something totally unrelated. Aubrey de Grey's story about how he thought the key to removing lipofuscin from lysosomes was in cemeteries could also qualify.
- Systematically getting unstuck. This is where we survey the problem following a checklist or some other structured procedure. For example, you could look at the physical constraints on the problem to see if a solution is even possible in the first place; one example is the fact that we are far from the lower bound on the energy needed to do computation (Landauer's limit), which suggests reversible computing as a way out of the increasing energy demands of modern CPUs (see the back-of-the-envelope sketch after this list).
- Brute-forcing: just checking all possible solutions one by one (or in parallel). We see this, for example, in the proof of the four color theorem, or in high-throughput screens to find drugs, where you keep testing compounds until one works. ML is an example of this too, insofar as we are not thinking directly about the problem but instead delegating the thinking to a more powerful system that can search more efficiently, while we think about the design of that system.
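To make the Landauer example concrete, here is a back-of-the-envelope sketch in Python. The ~1 femtojoule per CMOS switching event is an assumed, commonly cited ballpark figure, not a measurement of any particular chip:

```python
import math

# Landauer's limit: erasing one bit costs at least k*T*ln(2) of energy.
BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K
T_ROOM = 300.0            # room temperature, K

landauer_j_per_bit = BOLTZMANN * T_ROOM * math.log(2)

# Assumed ballpark for a modern CMOS switching event (~1 fJ).
# Illustrative figure only, not a measured value for any specific chip.
cmos_j_per_switch = 1e-15

print(f"Landauer limit at 300 K: {landauer_j_per_bit:.2e} J/bit")
print(f"Assumed CMOS switch:     {cmos_j_per_switch:.2e} J")
print(f"Gap: ~{cmos_j_per_switch / landauer_j_per_bit:,.0f}x above the limit")
```

The gap is around five orders of magnitude, which is what makes reversible computing look like a live path forward rather than a dead end.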
I'd argue that random discoveries are actually just brain-assisted brute-forcing: starting from the initial problem, you keep thinking of possible ways of solving it, you don't know what you will be thinking about on any given day, and at some point the solution pops into your head. Anyone who has done research will know the mental state of being stuck for a week seemingly doing nothing, and then one day everything clicks and lots of work happens until the next problem is found.
Likewise, the intentional approach is a guided form of brute force; brute force with priors, if you wish. Rather than the brute-forcing happening at the object level, we move it up to a framework that tells us what to think about, and in what order. We are still exploring the solution space one candidate at a time rather than deriving a solution straight from first principles. This is the most efficient way of doing discovery; in opposition to the gentleman scientist's random, paleo, organic, gluten-free, one-of-a-kind discovery, this is a mass-produced breakthrough in an assembly line of knowledge.
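A minimal sketch of this distinction in code: plain brute force enumerates candidates in whatever order they come, while a framework supplies a prior that decides what to try first. The candidate space, scoring, and the toy divisor example below are hypothetical placeholders of mine:

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def brute_force(candidates: Iterable[T], works: Callable[[T], bool]) -> Optional[T]:
    """Object-level brute force: try candidates in arbitrary order."""
    for c in candidates:
        if works(c):
            return c
    return None

def guided_search(candidates: Iterable[T], works: Callable[[T], bool],
                  prior: Callable[[T], float]) -> Optional[T]:
    """Same enumeration, but a prior decides what to think about first."""
    for c in sorted(candidates, key=prior, reverse=True):  # highest prior first
        if works(c):
            return c
    return None

# Toy example: find a divisor of 91, with a prior favouring small candidates.
found = guided_search(range(2, 91),
                      works=lambda d: 91 % d == 0,
                      prior=lambda d: -d)  # smaller numbers get higher prior
print(found)  # -> 7
```

Both functions visit the same space; the only difference is the ordering, which is exactly where the framework earns its keep.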
Consider expansion microscopy: the idea that, to observe a small sample, you can make it larger to facilitate observation. You could arrive at that idea by being lucky and having a spark triggered by some experience. Or you could follow a hypothetical mental model for the "Generalised Measurement Problem", walking through each of its steps until something useful pops into your mind (a path forward).
A sketch of ways to think about the Generalised Measurement Problem, i.e. measuring property X of system Y (a toy encoding of this checklist as code follows the list):
- Change the measurand. Maybe you really care about Z and X is merely related to it, or maybe there is a proxy of X that is easier to measure. For example, rather than measuring synaptic activity directly, you can look at blood flow via fMRI.
- Make the system intrinsically easier to measure. In expansion microscopy you enlarge the system; in FISH assays you make targets visible with fluorescently labelled nucleic-acid probes; in optogenetics you genetically engineer neurons to make them controllable with light.
- Change the context of the system. Certain kinds of measurements may be easier to take in zero gravity. Others may be easier in vivo or in vitro. Maybe you can settle for doing invasive surgery on a monkey that you wouldn't be able to do in a human.
- Multiplexing. In its original signals context, multiplexing means sending several signals at once over the same medium. Here, you could look at improving or examining the way the data gets from the system to the instrument. For example, you may think you would never be able to use light to image the brain because there's a skull in between. But... it turns out you can shine light through it after all.
- Measure multiple times and aggregate later. If it is hard to do the "proper" measurement in one go, you could try to split it into smaller measurements and put them together. This is essentially how next-generation sequencing works, or, in a different context, weak quantum measurement.
- Make the system record itself. Yeah, that. Imagine if we could do local storage of data in molecular substrates: "Each neuron could be engineered to write a record of its own time-varying electrical activities onto a biological macromolecule, allowing off-line extraction of data after the experiment. Such systems could, in principle, be genetically encoded, and would thus naturally record from all neurons at the same time" (Marblestone et al., 2013). "How do you even get this idea?", I thought when I first read that paper!
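Following the earlier point about moving brute force up to a framework, here is a toy encoding of this checklist as data you can walk through for any (X, Y) pair. The prompt strings are my paraphrases of the list above, not an established methodology:

```python
# The measurement checklist as data: a framework that tells you what to
# think about, and in what order. Prompts paraphrase the list above.
CHECKLIST = [
    "Change the measurand: is there a related quantity Z, or a proxy of X, that is easier to measure?",
    "Make the system intrinsically easier to measure (enlarge it, tag it, engineer it).",
    "Change the context: in vivo vs in vitro, zero gravity, a different organism.",
    "Improve the channel between system and instrument (multiplexing).",
    "Split into many small measurements and aggregate them afterwards.",
    "Make the system record itself for off-line readout.",
]

def walk_checklist(x: str, y: str) -> None:
    """Walk the checklist for 'measure property x of system y'."""
    print(f"Problem: measure {x} of {y}")
    for i, prompt in enumerate(CHECKLIST, start=1):
        print(f"  {i}. {prompt}")

walk_checklist("synaptic activity", "the brain")
```

Trivial as code, but that is the point: the value lies in the ordering and coverage of the prompts, not in any single one of them.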
I am not claiming to have invented this: shortly after I posted the video recording of me writing this, I was sent a link to TRIZ, one example of a more fleshed-out methodology for approaching problems in general. That's a starting point for further thinking: what is the state of the art in this sort of thing? Does science happen mostly by chance (like the Welsh cancer-drug case) or on purpose?