It has been argued that the rate of scientific or technical progress is slowing down. When there is not enough of something, the two common solutions are to use more resources for that end, or to use the existing ones better. I'm of the view, though this would take another post to explain, that the problems science suffers have more to do with how science is done (funded, organized, produced) than with how much money it gets, in general. I spend a lot of time online [citation needed] and more recently I've seen more and more people interested in a well-oiled machinery of science. This post is not a survey per se of the problems science has; that's part 2. Here I am just concerned with finding and listing initiatives that aim to make science great again.

One broad framework is that of Open Science. Even though within scientific communities there may be a degree of openness in sharing knowledge and a desire to work across labs, from the outside science looks more like a silo. Scientific papers are notoriously found behind paywalls, academic conferences are not particularly cheap for non-academics, and the raw data and analysis that go into studies are often not published, making it hard to replicate a study and check the validity of its conclusions, or worse, forcing other scientists to reinvent the wheel every single time instead of being able to easily reuse sections of existing analytical protocols.

Open Science as a movement is not new; institutionally it's represented by the Center for Open Science, which has launched projects to replicate studies in cancer research and the social sciences, in addition to supporting the Open Science Framework: a platform that aims to provide a centralized hub for science to happen in, offering file storage for datasets and code and facilitating the sharing of information.

Then there are initiatives that aim to produce literature reviews and meta-analyses of research literatures to provide an all-things-considered perspective. One can, for example, go to the Cochrane Collaboration website and find the answer to questions like "Is it worth applying method XYZ for lung cancer screening?", or find software to support the production of said reviews. Similarly, the Campbell Collaboration funds and publishes systematic reviews in a broad range of fields (e.g. education).

These are established and reasonably well known already. But recently there have been a few proposals from various individuals that aim to build platforms providing functionality not covered by those cited above.

Then there's a rather ambitious initiative (yet to gain traction) launched by Sylvain Chabé-Ferret, the Social Science Knowledge Accumulation Initiative (SKY). He initially targets the social sciences, and the problems he identifies are multiple. One, evidence is not very well organized: What has replicated, what hasn't? What's the answer to "Do videogames make kids more violent?" For that he proposes a database of studies that are automatically meta-analysed as new evidence comes in. Two, he proposes a repository of methods and procedures for doing social science, to address poor research designs. The initiative targets the social sciences, but I can imagine a similar approach being equally suited to the sciences in general.

In another proposal, Jonatan Pallesen identifies poor media reporting and the low trustworthiness of scientific studies themselves, especially in the social sciences, as key issues, and has proposed, though not implemented, a platform where studies would be evaluated and media presentations of studies could be corrected. In line with something like SKY but for the natural sciences, Brian Heligman recently launched Sciwiki, a wiki site to aggregate scientific know-how with a focus on the natural sciences. The introduction to his essay points out the issue I mentioned before, that knowledge in science is not open enough. In his words:

"My motivation for this stems from my recent experience venturing into solid-state chemistry. Like any scientist exploring a new field, I’ve made tons of mistakes, but the frustrating part was I knew I wasn’t the first person to make those exact mistakes. I’m lucky in that I’m part of a big research group; when something doesn’t work, I just ask a postdoc. That method of troubleshooting science does not work for small labs, especially ones at new institutions. It excludes a ton of people and wastes everyone’s time."

These are just some; on Twitter one can find others musing on ways to improve the way science works. Some time ago, I outlined a proposal quite similar to Sylvain's, which is encouraging: it shows both that many people are thinking about the same issues and that they are converging on the same sets of proposed solutions. My idea was to have a platform with features like:

  • Hosting code and text, and being able to “compile” the paper from the source dataset, the text, and a script, so that no number is written by hand: everything is as defined in code, executed line by line (a minimal sketch of what this could look like follows this list)
  • Curated answers to specific questions, aiming to be at least as accurate as anything anyone else has made publicly available
    • "Ah, has this person even read X, what would they say had they done so?", as in
    • "Those who argue that vegan/keto/paleo/carnivore diets are healthier than X, have they read this other paper suggesting that they are wrong?"
    • "Ah, this book -How Asia works- claims that the West developed because of protectionism, but the author didn't read Doug Irwin's strong case against that thesis, what is the all-things-considered view?
  • A platform for open peer review
  • A repository of research methods, easy to use, perhaps even with a questionnaire that lets you input what you want to know and tells you what to go for: Should you control for this or that? What instrumental variables? What sort of regression? What should you pay attention to? This way, best practices are available in a single place, for both academics and the public to refer to.
  • Bounties, requests for research, requests for replications
  • Attempts at linking specific innovations that improve people’s lives to specific pieces of research (in Google Scholar one can see citations; why not also applications?)
  • A ‘quality ranking’ for papers, depending on the methods used; initially this would be done by hand, using a checklist.
  • Based on the above, a quality-weighted impact metric.
  • The ability to propose new studies and discuss them (open to both the public and scientists)
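
To make the first item more concrete, here is a minimal sketch of what “compiling” a paper could look like: every reported number is recomputed from the raw dataset at build time and substituted into the prose, so nothing is typed by hand. The file names and the "outcome" column are hypothetical placeholders, and this is just one way to do it; in practice tools like R Markdown or Jupyter notebooks play the same role.

```python
# Minimal sketch: rebuild the paper's numbers from the raw data on every compile.
# File names and the "outcome" column are hypothetical placeholders.
import csv
import statistics
from string import Template

def compute_results(data_path):
    """Recompute every figure reported in the paper from the raw dataset."""
    with open(data_path, newline="") as f:
        outcomes = [float(row["outcome"]) for row in csv.DictReader(f)]
    return {
        "n": len(outcomes),
        "mean_outcome": round(statistics.mean(outcomes), 2),
        "sd_outcome": round(statistics.stdev(outcomes), 2),
    }

def compile_paper(template_path, data_path, output_path):
    """Fill the prose template with freshly computed numbers."""
    results = compute_results(data_path)
    text = Template(open(template_path).read()).substitute(results)
    with open(output_path, "w") as f:
        f.write(text)

if __name__ == "__main__":
    # paper.md.tmpl might contain:
    #   "We observe a mean of $mean_outcome (SD $sd_outcome, N = $n)."
    compile_paper("paper.md.tmpl", "dataset.csv", "paper.md")
```

If the dataset changes, or an analysis step is corrected, rerunning the build updates every figure in the text at once, which is the point of the proposal: the paper becomes an output of the code rather than a hand-transcribed summary of it.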

Social sciences vs the rest

Except for Brian's wiki, most of the work on spotting shoddy practices and improving how things work is targeted at the social sciences. This is reasonable, as it seems easier to make mistakes, or produce hyped results, in the social sciences. In the natural sciences, either you know your method is not good enough (you can't even detect the magnitude of interest), or you can study what interests you directly, without ample wiggle room for p-hacking.

That said, there may be other problems here too. As Sabine Hossenfelder tirelessly points out, theoretical physics, and presumably theoretical X and likely computational X (insert your favourite discipline for X), have their own problems that seemingly no one is addressing at the same institutional level as in the social sciences.

What about the more experimental sciences? Are there issues with the way, say, research into Li-ion batteries is conducted? In the biosciences, by one estimate 85% of all research funding is wasted, due to

"inappropriate designs, unrepresentative samples, small samples, incorrect methods of analysis, and faulty interpretation," together with pervasive biased under-reporting of research.

But what about, say, materials science, or chemistry?

For these cases, it seems to me that the issues (if any!) would lie not so much in the veracity of the published body of work itself, but in the questions that get asked and answered in papers, which in turn is driven by how funding gets allocated, what kinds of research get one tenure, and other incentives. Scientists are, at the end of the day, not necessarily focused just on knowledge in the abstract; they are human beings with aspirations, dreams of fame, and a desire for a stable(r) life (and truthseeking too!).

Retracted papers are harder to find in chemistry, but they do exist. The reasons leading up to retraction mostly had to do with plagiarism (duplicated research) and outright fabrication of data, and less so with the use of inadequate methods. One shouldn't get too excited about fixing this as a way to make science work better; even at its worst, the retraction rate is only 14 papers per 10,000 published (in Iran). (Other countries ranking high include Singapore, India, China, and the Netherlands.) By another metric, the same countries show up. The US, for comparison, has half the retraction rate that China has. This can still be a sign of an unhealthy system; there can still be production of low-quality papers no one reads, or high-quality papers in areas that are not as immediately useful for engineering new or improved products.

In the next post I'll dig deeper into the entire list of putative issues that science has, paying more attention to the natural sciences, as this seems underexplored to me.