Just read a nice blog post from Stephen Heard about replicability vs. robustness that I really agree with. Basically, the idea under discussion is how much effort we should devote to exactly repeating experiments (narrow robustness) vs. the more standard way of doing science, in which everyone does their own version to see whether the result holds more generally (broad robustness). In my particular niche of molecular biology, I think most (though definitely not all, you know who you are!) errors are those of judgement rather than technical competence/integrity, and so I think most exact replication efforts are a waste of time, an argument which many others have made as well.
In the comments, some people arguing for more narrow replication studies made the point that very little (~0%) of our current research budget is devoted explicitly to replication. Which got me wondering: what might happen if we suddenly funded a lot of replication studies?
In particular, I worry about positive-result bias. Positive-result bias is basically the natural human desire to find something new: our expectation is X, but instead we found Y. Hooray, look, new science! Press release, please! :)
Now what happens when we start a bunch of studies with the explicit mandate to replicate a previous study? Here, the expectation is what was already found, and so positive-result bias would push towards a refutation. I mean, let’s face it, people want to do something interesting and new that other people care about. The cancer reproducibility project in eLife provides an interesting case study: most of the press around the publication was about how the results were “muddy”, and I definitely saw a great deal more interest in what didn’t replicate than what did.
Look, I’m not saying that scientists are so hungry for attention that most, or even more than a few, would consciously try to have a replication fail (although I do wonder about that eLife replication paper that applied what seemed to be overly stringent statistical criteria in order to say something did not replicate). All I’m saying is the same hype incentives that we complain about are clearly aligned with failed replication results, and so we should be just as critical and vigilant about them.
As for apportionment of resources towards replication, setting aside the question of whether it’s a good use of money from the scientific perspective (I, like others, would argue largely not), there’s also the question of whether it’s a good use of human resources. Having a student or postdoc work on a replication study for years during their training period is not, I think, a good use of their time: it keeps them from the more valuable training experience of actually, you know, doing their own science, and robs them of the thrill of new discovery. Perhaps such studies are best left to industry, which is where I believe they already largely reside.
Saturday, April 8, 2017
The hater’s guide to (experimental) reproducibility
(Thanks to Caroline Bartman and Lauren Beck for discussions.)
Okay, before I start, I just want to emphasize that my lab STRONGLY supports computational reproducibility, and we have released data + code (code all the way from raw data to figures) for all papers primarily from our lab for quite some time now. Just sayin’. We do it because a. we can; b. it enforces a higher standard within the lab; c. on balance, it’s the right thing to do.
All right, that said, I have to say that I find, like many others, the entire conversation about reproducibility right now to be way off the rails, mostly because it’s almost entirely dominated by the statistical point of view. My opinion is that this is totally off base, at least in my particular area of quantitative molecular biology; like I said before, “If you think that github accounts, pre-registered studies and iPython notebooks will magically solve the reproducibility problem, think again.” Yet, it seems that this statistically-dominated perspective is not just a few Twitter people sounding off about Julia and Docker. This "science is falling apart" story has taken hold in the broader media, and the fact that someone like Ioannidis was even being mentioned for director of NIH (!?) shows how deeply and broadly this narrative has taken hold.
Anyway, I won’t rehash all the ways I find this annoying, wrongheaded and in some ways dangerous; I’ll just sum up by saying I’m a hater. But like all haters, deep down, my feelings are fueled by jealousy. :) Jealousy because I actually deeply admire the fact that computational types have spent a lot of time thinking about codifying best practices, and have developed a culture and sense of community standards that embodies those practices. And while I do think that a lot of the moralistic grandstanding from computational folks around these issues is often self-serving, that doesn’t mean that talking about and encouraging computational/statistical reproducibility is a bad thing. Indeed, the fact that statisticians dominate the conversation is not their fault, it’s ours: why is there no experimental equivalent to the (statistical/computational) reproducibility movement?
So first off, the answer is that there is, with lists of validated antibodies and an increased awareness of things like cell line and mycoplasma contamination and so forth. That is all great, but in my experience, these things journals make you check are not typically the reasons for experimental irreproducibility. Fundamentally, these efforts suffer from what I consider a “checklist problem”, which is the idea that reproducibility can be codified into a simple, generic checklist of things. Like, the thought is that if I could just check off all the boxes on mycoplasma and cell identification and animal protocols, then my work would be certified as Reproducible™. This is not to say that we shouldn’t have more checklists (see below), but I just don’t think it’s going to solve the problem.
Okay, so if simplistic checklists aren’t the full solution, then what is? I think the crux of the issue actually comes back to a conversation we had with the venerable Warren Ewens a while back about how to analyze some data we were puzzling over, and he said something to the effect of “There are all these statistical tests we can think about, but it also has to pass the smell test.” This resonated with me, because I realized that at least some of us experimentalists DO teach reproducibility, but it’s more of an experiential learning to try and impart an intuitive sense of which discrepancies to ignore and which to lose sleep over. In particular in molecular biology, where our tools are imprecise and the systems are (hopelessly?) complex, this intuition is, in my opinion, the single most important skill we can teach our trainees.
Thing is, some do a much better job of teaching this intuition than others. I think that where we can learn from the computational/statistical reproducibility movement is to try and at least come up with some general principles and guidelines for enhancing the quality of our science, even if they can’t be easily codified. And within a particular lab, I think there are some general good practices, and maybe it’s time to have a more public discussion about them so that we can all learn from each other. So, with all that in mind, here’s our attempt to start a discussion with some ideas for experimental reproducibility, ranging from day-to-day to big picture:
- Keep an online lab notebook that is searchable with links to protocols and is easily shared with other lab members.
- Organize protocols in an online doc that allows for easy sharing and commenting. Avoid protocol "fragmentation"; if a variation comes up, spend the time to build that in as a branch point in the protocol. Otherwise, there will be protocol drift, and others may not know about new improvements.
- Annotate protocols carefully, explaining, where possible, which elements of the protocol are critical and why (and ideally have some documentation). This helps to avoid protocol cruft, where new steps get introduced and reified without reason. Often, leading a new trainee through a protocol is a good time to annotate, since it exposes all the unwritten parts of the protocol. Note: this is also a good way to explore protocol simplification!
- Catalog important lab-generated reagents (probes, plasmids, etc.) with unique identifiers and develop a system for labeling. In the lab, we have a system for labeling and cataloging probes, which helps us figure out after the fact what the difference is between "M20_probe_Cy3" and "M20_probe_Cy3_usethis". The hard part is enforcing the labeling system, and I'm not sure how best to do that; my current approach is that I won't order any new probes for a person until all their probes are appropriately cataloged (see the sketch after this list for one way such a check could be partially automated).
- Carefully track biological reagents that are known to suffer from lot-to-lot variability (things like Matrigel, antibodies, R-spondin), including dates, lot numbers, etc.
- Set up a system for documenting the small experiments that establish some factoid in the lab, like "Oh, probe length of 30 works best for expansion microscopy based on XYZ…". These can be invaluable down the line, since they're rarely if ever published, and otherwise they turn from lab memory into lab lore.
- Journal length limits have led to a culture of very short and non-detailed methods, but there's this thing called the internet that apparently can store and share a lot of information. I think we need to establish a culture of publicly sharing detailed protocols, including annotating all the nuances and so forth. Check out this from Feng Zhang about CRISPR (we also have made an extensive single molecule RNA FISH page here).
- (Lauren) Track experiments in a log, along with all relevant (or even seemingly irrelevant) details. This could be, for instance, a big Google Doc with a list of all similar types of experiments, pointing to where the data is kept and, critically, all the little details. These tabulated forms of lab notebooks can really help identify patterns in those little details, and they also show other members of the lab which details matter and deserve attention.
- Along those lines, record all your failures, along with the type of failure. We've definitely had times when we could have saved a lot of time in the lab if we had kept track of that. SHARE FAILURES with others in the lab, especially the PI.
- (Caroline) Establish an objective baseline for an experiment working, and stick to it. Sort of like pre-registering your experiment, in a way. If you take data, what will allow you to say whether it worked or not? If it didn't work, is there a rationalization? If so, discuss with someone, including the PI, to make sure you aren't deluding yourself and just ignoring data you don't like. There are often good reasons to drop bits of data, and sometimes we make mistakes in our judgement calls, but at least get a second opinion.
- Develop lab-specific checklists. Every lab has its own set of things it cares about and that people should check, like microscope light intensity or probe HPLC trace or whatever. Usually these are taught and learned through experience, but that strikes me as less efficient than it could be.
- Replicates: What constitutes a biological replicate? Is it the same batch of cells grown in two wells? Is it two separate passages of the same cell line? If so, separated by how much time? Or do you want to start each one fresh from a frozen vial? Whatever your system, it's important to come up with some ground rules for what a replicate means, and then stick to them. I feel like one aspect of replication is that you don't necessarily want the conditions to be exactly the same, so a little variability is good. After all, that's what separates a biological replicate (which is really about capturing systematic but unknown variability) from a technical replicate (which is about statistical variability).
- Have someone else take a look at your data without leading them too much with your hypothesis. Do they follow the same logic to reach the same conclusion? Many times, people fall so in love with their crazy hypothesis that they fail to see the simpler (and far more plausible) boring explanation instead. (Former postdoc Gautham Nair was so good at finding the simple boring explanation that we called it the "Gautham transform" in the lab!)
- Critically examine parts that don't fit in the story. No story is perfect, especially in molecular biology, which has a serious "everything affects everything" problem. Oftentimes there is no explanation, and there's nothing you can really do about it. Okay, but resist the urge to sweep it under the rug. Sometimes there's new science in there!
- Finally, there is no substitute for just thinking long and hard about your work with a critical mindset. Everything else is just, like I said, a checklist, nothing more, nothing less.
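As an aside on the reagent cataloging point above, here is a minimal sketch of the kind of automated check one could imagine adding to such a system. It assumes, purely for illustration, that the probe catalog lives in a CSV with a "label" column and that raw data folders are named after the probe label used; the file names and layout here are hypothetical, not a description of our actual setup.

import csv
from pathlib import Path

def load_catalog_labels(catalog_csv):
    """Return the set of probe labels that have been formally cataloged."""
    with open(catalog_csv, newline="") as f:
        return {row["label"].strip() for row in csv.DictReader(f)}

def find_uncataloged(data_dir, cataloged):
    """List data folders whose names don't match any cataloged probe label."""
    return sorted(p.name for p in Path(data_dir).iterdir()
                  if p.is_dir() and p.name not in cataloged)

if __name__ == "__main__":
    # Hypothetical paths: a lab-wide catalog file and a raw data directory.
    labels = load_catalog_labels("probe_catalog.csv")
    for stray in find_uncataloged("raw_data", labels):
        print("Not in catalog:", stray)  # e.g. "M20_probe_Cy3_usethis"

Run periodically (or as a pre-ordering check), something like this turns "please catalog your probes" from a nag into something closer to the enforced checklists described above.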
Anyway, some thoughts, and I'm guessing most people already do a lot of this, implicitly or explicitly. We'd love to hear the probably huge list of other ideas people out there have for improving the quality/reproducibility of their science. Point is, let's have a public discussion so that everyone can participate!
On criticism
-by Caroline Bartman
Viewed in a certain light, grad school (really, all of scientific training) is a process of becoming a good critic. You need to learn to evaluate papers and grants, whether to make them better, to score/review them, or to expand your understanding of the field. However, there are many nuances to being a good critic that were never spelled out in my grad school classes, and that I still try to improve on all the time.
0. Seeing the bigger picture: What statement is the paper trying to make? How do you feel about THAT STATEMENT after reading it? Every paper has experiments with shortcomings or design flaws. Does the scientific light shine through in spite of that? Or are the authors over-interpreting the data? This is really the key to criticizing scientific work thoughtfully and productively.
1. Compassion: Especially important when evaluating the work of others. One person or group can only do so much, due to time, resources, and experimental considerations. When I was an undergrad who had never written a paper, I would go to journal clubs and say things like ‘This was a good paper, but what really would have nailed it would be to use these three additional transgenic mouse strains.’ Not realistic! And it devalues the effort that’s already represented in the paper. Before you ask for additional experiments, step back: would those really change the interpretation of the paper? Sometimes yes, often no (goes back to point 0).
Plus, consciously noting the good aspects of a paper or grant and pointing out only limited, specific criticisms will make the author happier! They will be more likely to adopt your suggestions, which actually helps move the science forward.
2. Balance: Comes into play when evaluating work that you would be predisposed to like, such as your own work! But also the work of well-known labs (aka fancy science). I often catch myself cutting slack I wouldn’t give others. (‘That experiment is really just a control, so it’s a waste of time’, etc.) Reviewers (and also my PIs, thanks Gerd and Arjun) won’t necessarily see your work in such a rosy light!
With fancy science, it’s easy to see that, say, a statement made in a paper isn’t so well supported by the data, but then tell yourself ‘They’re experts! They founded this field. They probably know what they’re doing.’ Sometimes true, but sometimes not. Would you feel the same way about the paper if it came from an unknown PI? Plus, a fancy lab actually has the best capacity and manpower to carry out the very best experiments with the newest tech! Maybe they should be subject to even harsher scrutiny in their papers.
3. Ignorance: I don’t really know if there’s a good name for this quality. Maybe comfort with uncertainty? You are often called upon to evaluate papers or grants that aren’t in your sub-sub-sub field, and that can instill doubts. Yes, you have to recognize your possible lack of expertise. But you can still have valuable opinions! Ideally papers would be read by scientists outside the immediate field, and help inform their thinking. Plus, while technologies differ, scientific reasoning is pretty much constant. So if an experiment or a logical progression doesn’t make sense, you can say something. The worst thing that could happen is someone tells you you’re wrong.
Grad school tends to instill the idea that knowledge is the primary quality required to evaluate scientific work. Partially because young trainees do indeed need to amass some body of understanding in order to ‘get’ the field and make comments. But knowledge is really not enough, and sometimes (point 3) not even necessary!
Comment if you have more ideas on requirements for a good scientific critic!
Sunday, April 2, 2017
Nabokov, translated for academia
Nabokov: I write for my pleasure, but publish for money.
Academia: I write for your pleasure, but pay money to publish.
More specifically…
Undergrad: I don’t know how to write, but please let me publish something for med school.
Grad student: I write my first paper draft for pleasure, but my thesis for some antiquated notion of scholarship.
Postdoc: I write "in press" with pleasure, but "in prep" for faculty applications.
Editor: You write for my pleasure, but these proofs gonna cost you.
SciTwitter: I write preprints for retweets, but tweet cats/Trump for followers.
Junior PI: I write mostly out of a self-imposed sense of obligation, but publish to try and get over my imposter syndrome.
Mid-career PI: I say no to book chapters (finally (mostly)), but publish to see if anyone is still interested.
Senior PI: I write to explain why my life’s work is under-appreciated, but give dinner talks for money.