Sunday, April 20, 2014

Talking with my mom about GMOs

(I normally steer well clear of anything remotely political on the blog for obvious reasons. But this is not really about politics. Sort of. Whatever.)

I just had a huge argument with my mom about genetically modified organisms (GMOs). My mom is staunchly anti-GMO, and will not change her position no matter what I say. Despite my argument that the scientific consensus is that genetically modified organisms are not intrinsically a bad thing (indeed, essentially all scientists I have met who are even tangentially qualified to speak on the topic agree), my mom simply will not budge. My mom belongs somewhere in a Venn diagram of people who, in the most extreme intersection, are simultaneously reasonably (often formidably) well-educated, believe in global warming, are anti-vaccination, and are anti-GMO. They are also very likely to eat gluten-free diets and kale chips. (Note that my mom is neither anti-vaccine nor gluten-free. I have not asked her about kale chips. For the record, my own personal political views are that I am neutral on kale chips.)

What's interesting here is that if you push these folks on climate change, they will probably tell you that the scientific consensus is in very strong agreement that anthropogenic climate change is real. Why would the same argument based on science not apply to genetically modified organisms or vaccines? I think that reveals a more fundamental truth: nominally, you may expect folks like this, who are well-educated and most likely politically labeled as liberal, to be intrinsically pro-science, and perhaps that is true on some level. But I think that a more accurate characterization would be "pro-nature" or "pro-environment" or maybe "anti-man-made". If this coincides with science, then science is right. But if not, well, science is wrong. The mentality is not much different from that of those on the other end of the political spectrum, just with a different set of beliefs.

(Again, for the record, I am both pro-Nature and pro-Science.  I would happily publish papers at either one.) (Haha, bad joke!) (You know you love it.)

I think we scientists had better keep this reality in mind. Just remember that virtually nobody outside of science really knows what the hell we are talking about. Some people may want to support us based on some alignment of their belief system and priorities with some aspects of our beliefs and priorities. Fine. But there are very few people out there who support us based on an actual, real understanding of what we do. I'm not saying this is good or bad, rather that it's a reality and we should brace ourselves accordingly for when those belief systems shift. Overall, perhaps we're lucky to be just a rounding error in the federal budget. I think this reflects the fact that we are a rounding error in most people's minds.

We can of course rightly argue that we provide society with incredible (and outsized) benefits, in that scientific progress has led to enormous gains in virtually every aspect of human life. So perhaps the fact that there are people out there whose interests align with ours, however imperfectly the motivations may match up, is good. But I think that we have to be very careful about relying on a system that is set up in such a way. Here's a Feynman quote that I serendipitously happened across just now that is particularly apropos:
I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I am not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being. We’ll leave those problems up to you and your rabbi. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you are maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.
For example, I was a little surprised when I was talking to a friend who was going to go on the radio. He does work on cosmology and astronomy, and he wondered how he would explain what the applications of this work were. “Well,” I said, “there aren’t any.” He said, “Yes, but then we won’t get support for more research of this kind.” I think that’s kind of dishonest. If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing–and if they don’t want to support you under those circumstances, then that’s their decision.
(I actually came across this quote in this blog post about the recent, umm, "back and forth" between Lior Pachter and Manolis Kellis. Scientific celebrity death match: Fight!)

I guess I think that it's a sign of a highly civilized society that we can have people sit around and spend precious resources thinking about stuff that quite often just doesn't matter. Would that be enough justification for the rest of society? Should I be okay with my mom supporting science funding despite her views? Is it possible or even desirable to live a life that is completely free of hypocrisy? To the latter question, I think the answer is no. People would be so boring otherwise.

Saturday, April 19, 2014

How to review a paper

Or, I should say, how I review a paper.

Peer review is a mess. We all know it, and many of us (myself included) have written about it endlessly. And we’ve also railed against a system in which we do all the work for the benefit of the publishers. But wait: if we are doing all the work, then we should be able to bend this system to our collective will, right? When we complain about bad reviews, just remember that we ourselves are the ones giving these terrible reviews. So our goal should be to give good reviews!

How do you do that? Here are some principles I try to think about and follow:

1. Don’t review papers you think will be bad. You can usually tell if a paper is low quality (and thus unlikely to make the cut) just by looking at the title and abstract. If you think it’s a waste of your energy, it probably is, and so don’t waste your time reviewing it. Somebody else will do it. Or not. It’s not your obligation to do so. Beware also the ego issue.

2. Give the authors the benefit of the doubt. The authors have been thinking about the problem for years; you have been thinking about it for a couple of hours. So if you find a particular point confusing, either try to figure it out and really understand it, or just give the authors the benefit of the doubt that they did the right thing. They may not have, but so what? The alternative is worse, in my opinion. I hate getting the “The authors must check for XYZ if they want to make that claim” review, when in fact we already did exactly that.

3. Stick to the evidence and the claims. I stay away from any of that crap about “impact”. To me, that is an editorial responsibility. I try my best to just evaluate the evidence for the claims that the authors are making. If the authors claim something that their evidence can’t support, then I just say that the authors should alter their claim. And I try to say what they can claim, rather than just what they can’t claim. I generally do NOT propose new experiments to support their claim.

4. Do not EVER expand the scope of the paper. It is not your paper, and so it is neither your responsibility nor mandate to dictate the authors’ research priorities. The worst reviews are the ones in which the reviewers ask the authors to do a whole new PhD’s worth of studies. Often, these types of comments include vague prescriptions about novelty and mechanism. To wit, check out this little snippet from another reviewer for a paper I reviewed recently: "However, the major pitfall of current study is not to provide any novel information/mechanism behind the heterogeneity.” What is the author supposed to do with that?

5. Worth repeating–do not propose new experiments. Those reviews with a laundry list of experiments are very disheartening, and usually don’t add much to the paper. Remember that experiments are expensive, and sometimes the main author has left the lab and it’s very hard to even do the suggested experiments. Again, far better to just have them alter their claims in line with what their available data shows. I will sometimes explicitly say “The authors do not need to do an experiment” just to make that clear to the authors and the editor.

6. Be specific. If something is unclear, say exactly what it is that was confusing. If some claim is too broad, then give a specific alternative that the authors should consider (I’ll say “the authors could consider and discuss alternative hypothesis XYZ.”). This helps authors know exactly what they need to do to get the paper through. Also, clearly delineate which points are critical and which are minor.

7. Be positive and nice. Every paper represents a lot of someone’s blood, sweat and tears (usually a young scientist’s), and scathing reviews can be personally devastating. If you have to be negative (which is difficult, following the above rubric), then try not to phrase your review in terms like “I don’t know why anybody would ever do this”. Here’s an example of the opening lines from a review we got a while ago: “My opinion is that this manuscript is not very well thought through and of rather low quality. The authors' misconceptions are most obvious in their gross misstatement… [some relatively minor and inconsequential misstatement]”. Ouch. What’s the point of that? That review ended with “If [after doing a bunch of proposed experiments that don’t make sense] they find that [bunch of stuff that is irrelevant] they would begin to address their question.” It’s very belittling to basically imply that after all this work, we are still at square one. Not true, not positive and not nice.

8. (related) Write as though you were not anonymous. That will help with all the above issues.

One other note: I realize that for editors, even academic editors, the issues of novelty and impact are difficult to gauge and that they feel the need to lean on the reviewers for this information. Fine, I get that. But I will not provide it. Sorry.

Anyway, my main point is that WE run this system. It is within our power to change it for the better. Train your people and yourself to be better reviewers, and maybe we will all even be better people for it.

Wednesday, April 16, 2014

Machine learning, take 2

As mentioned earlier, one of my favorite Gautham quotes is "Would Newton have discovered gravitation by machine learning?" I think the point is solid, that a bunch of data + statistics is not science.

At least not yet. Technically, Newton's brain was a machine, and it came up with gravitation. So it is formally possible for a machine to come up with a theory. And I don't think this argument rests on a mere technicality. I was chatting with Gautham yesterday about what a theory is, and doesn't it start with observing a pattern of some kind? Newton had access to centuries (millennia?) of star charts–people had misinterpreted them into epicycles, but the data were there for him. In response to my previous post on statistics, Shankar Mukherji mentioned the work of Hod Lipson, in which they are able to deduce physical laws from data. Very cool. It seems that progress towards this goal is already underway. My guess is that as we make more progress on machine learning (my completely uninformed bet is on neural network approaches), computers will start to make more and more seemingly incredible inferences about the world. My other guess is that this will happen a lot sooner than we think.
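As a toy illustration of what "deducing laws from data" can look like (emphatically not Lipson's actual approach, which searches over symbolic equation forms; this sketch is told in advance to fit a power law), here are a few lines of Python that "rediscover" Kepler's third law from standard planetary data:

    import numpy as np

    # Semi-major axis a (in AU) and orbital period T (in years), Mercury through Saturn.
    a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])
    T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])

    # Hypothesize a power law T = C * a^k. Then log(T) = k*log(a) + log(C),
    # so a least-squares line fit in log-log space recovers the exponent k.
    k, logC = np.polyfit(np.log(a), np.log(T), 1)
    print(f"fitted exponent k = {k:.3f}")  # prints ~1.500, i.e., T^2 ~ a^3

The machine here isn't doing science, of course: it was handed the form of the law and only had to fill in a number. The hard part, which the Lipson-style approaches try to automate, is searching over the space of candidate laws in the first place.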

In the meantime, though, I still think we are pretty far from having Newton in silico, and I think that Gautham's point about real learning vs. (the current state of) machine learning is still a valid one. Until this future of intelligent machines arrives, I think most fields of science will still require a lot more thinking to make sense of the data, and simple classifiers may not yield what we consider scientific insight.

Monday, April 14, 2014

Papers are a lot of work, and some of it is even worth the effort

I often say that the current model for publishing is a complete waste of time, and I still think that's true for so many parts of the publishing process, like dealing with reviews, etc.  It's hard for young faculty and even harder for trainees, for whom so much rides on the seemingly arbitrary whims of reviewers and editors.  Wouldn't it just be better to post on a blog, I often wonder?

I think deep down I know the answer is no. Not that publishing in a particular journal is really important. But there is something to putting together a well-constructed, high-quality paper that is a worthy use of time. Often it feels like finishing a paper is just a bunch of i-dotting and t-crossing. Yet I've often found that it's in those final stages that we make the most crucial insights. Hedia's lincRNA paper is a good example: it was only towards the end, when we were writing it up, that we figured out what was really going on with the siRNA vs. ASO oligonucleotide treatment. The details aren't so important, but the point is that this was in some ways the most important finding of the paper, and it was lurking within our data almost until the very end.

I've found the last few weeks before submission to be a stressful period: you really want to get the paper out the door, but at the same time you feel like you're putting a lot on the line and want to get it right. It's exciting but scary to put something out there. And it's especially scary to look at your data again, here at the end of the road, and wonder what it all means after years of hard work. But I feel like this mental incubation period is a necessary part of doing good science, and where many new ideas are born.

Thursday, April 10, 2014

Why is everyone piling on that poor STAP stem cell woman?

I just read a little news feature in Nature today that made me very sad. For those of you who don't know, it's about the researcher from Japan who came up with the STAP method (stimulus-triggered acquisition of pluripotency), in which squeezing cells and putting them in acid can make them into pluripotent stem cells. This is a huge discovery, because it means you can make stem cells without having to perform the usual manipulations (such as genetic ones) to convert cells into stem cells.

Nature published these studies to huge fanfare a little while ago, but then, within a month or so, many people started to publicly question whether the results were true, including even one of the coauthors (one of those "victory has a thousand fathers, defeat is an orphan" situations). People started saying that nobody could replicate the findings, and also found some errors in the manuscript, including some plagiarized materials and methods, an old image of a teratoma, and some gel-lane mixups. Her institute started an investigation; she's had to hire a lawyer and defend herself to the press, and (from this little Nature article) she appears to be in the hospital.

This whole situation is ridiculous and strikes me as something that has gotten completely out of hand. Seriously, people, it's just a paper. First, to the method itself: it seems weird to me that people are criticizing this method already, so soon after publication. Honestly, if I had a nickel for every time someone couldn't do RNA FISH and said our method doesn't work, I'd have, well, a lot of nickels. And that's something so easy to do that undergrads routinely do it on their first day in lab. Something tells me that this method must be fairly tricky, otherwise someone would probably have already figured it out by now. So let's give her the benefit of the doubt, at least for a couple of years.

All the investigations into the little errors and discrepancies in her paper strike me as silly and vindictive. Would all of your papers survive such deep scrutiny? Yes, her paper is very important, significantly more so than anything I've ever done, but remember that she's still just a scientist working in a lab like you and me. Any paper is such a huge mess of data and figures that little errors will creep in from time to time. To discount her work because of them is utterly ridiculous. And plagiarism of materials and methods? Come on! How many ways can you describe how you culture cells?

And if her work doesn't end up panning out? SO WHAT! Again, it's just a paper! If I had a nickel for every Nature paper that ended up being wrong, well, you know what I'm saying. I personally know of several examples of big Cell, Science, Nature papers that are wrong that got people fancy jobs at top institutions, grants, tenure, etc. Some of these are cases in which people have grossly overstated the effect of something through some sort of tricky analysis. Some of these are cases in which the authors greatly overinterpreted the data, leading them to the wrong conclusion, often because of some sloppy science. Some of these are in the fraud gray zone, where they cover up particular discrepant results that either confuse or refute the main conclusions, or do experiments over and over again until they get the "right" outcome. Those people have jobs and everyone's happy–they're certainly not being investigated by their own institutions. Why is this woman being taken down so hard? Is it because what she's doing is so important? In that case, the lesson is clear: don't do anything important. Is that the message we want to be sending?

Wednesday, April 9, 2014

Terminator 1 and 2 were the first great comic book movies

Just watched Terminator 1 again–how awesome! Not quite as good as Terminator 2, which is probably one of the top action movies of all time, but still great, maybe top 10-20. As I was watching it, I was thinking that a lot of what makes the movie so appealing is the character of the unstoppable superman (or in this case, robot). Much better as a bad guy than as a good guy, because an unstoppable good guy is boring (see: Superman). Isn't this the prototype for all the modern-day comic book movies? One of the things that makes comic book movies exciting is the epic battles between the characters, both doing incredible things, while we wait to see who breaks first. Terminator 2 is still among the best (if not the best) in this regard. Another cool thing is that the Terminator movies did this with much worse special effects than we have today, especially Terminator 1, which looks prehistoric. I practically expected claymation at times. But it's still awesome. Compelling movie action is more about engendering fear, suspense and relief than about special effects. Still, Terminator 2 would just not have been as awesome without its (for the time) unprecedented special effects, which have aged remarkably well.

NB: Yes, I realize that the original Superman movies came out before T1. But they just weren't as good. And that's a fact. You know it, too.

Sunday, April 6, 2014

The principle of WriteItAllOut

After Gautham's thoughts about code and clarity, and lots of paper writing and grant writing these days, a couple of conclusions. First, grant writing is boring. Second, when in doubt, write it all out. For computer code, this means having long variable names. If you have the option of writing a variable name of "mntx" or "meanTranscriptionSiteIntensityInHeterokaryon", go for the latter (see the little code sketch below). Yes, it takes a little more effort, but not much, and it's a MUCH better idea in the long run. I wish we could do this in math and physics also. The same holds for papers and grants, both in figures and in text. In figures, if you can give an informative axis label, do it. "Mean (CRL)" is much less informative than "Mean transcript abundance per gene in human foreskin fibroblasts". It's longer, but with some creativity you can make it work. In main text, AVOID ALL ACRONYMS! People less often read papers straight through from beginning to end these days, and if someone looks at a paragraph halfway through the text and sees something like:
Similarly, we find that 9.3% of autosomally expressed accessible novel TARs show ASE, we expect this number to be lower than genes as novel TARs correspond to exons of genes.
then they will be lost. And I don't think the space taken by expanding out these acronyms is a legitimate excuse. For the record, though, I do use DNA, RNA, SNP and FISH. Actually, I'd probably be well served to expand out the latter two, although they are fairly standard.
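To make the variable name point concrete, here's a minimal sketch in Python (the data and names are entirely made up for illustration):

    import numpy as np

    # Made-up measurements, purely for illustration.
    transcriptionSiteIntensities = np.array([10.2, 8.7, 12.1, 9.5, 11.3])
    isHeterokaryon = np.array([True, False, True, True, False])

    # The cryptic version: six months from now, what was "mntx"?
    mntx = np.mean(transcriptionSiteIntensities[isHeterokaryon])

    # The write-it-all-out version: the line documents itself.
    meanTranscriptionSiteIntensityInHeterokaryon = np.mean(
        transcriptionSiteIntensities[isHeterokaryon])

Both versions compute exactly the same number; only one of them will still make sense when you (or a reader of your supplemental code) come back to it later.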

Remember, the main point of a paper is not to make little puzzles for your readers to decipher, but to convey information as accurately and efficiently as possible. For grants, well, after getting some... strange reviews, I'm honestly not sure what the goal is. Except to get money.