Friday, July 24, 2015
I just had a really lovely experience reviewing a paper at Cell Systems. Started out fairly standard: got an e-mail asking to review, paper looked good (showing good taste), so I accepted, then submitted my review. Then I got a personal e-mail from the editor thanking me for my review, remarking on how nice the paper was, and asking a question or two. Then, when they issued the decision letter, I got another personal thank you for my review. It just really helped me feel involved and connected to the process, much more so than the auto-generated form e-mails I typically get. It also makes me way more inclined to see the editors at Cell Systems as thoughtful people who care about science, and thus more likely to recommend it to others and to submit there myself. Note to editors: a little humanity goes a long way. (I have had similarly good experiences with the editors at Nature Methods.)
Also want to say for the record that I still think the current journal system is generally not good for how we do science. But in the meantime, while we wait for preprints or whatever to become the standard, I think it's nice to acknowledge a job well done when you see it.
Thursday, July 23, 2015
RNA integrity in tissues
Been thinking a lot about expression in tissue these days. Funny quote in a post from the always quotable Dan Graur: “They looked at the process of transcription in long-dead tissues? Isn’t that like studying the [...] circulation system in sushi?” He also points to this study from Yoav Gilad about RNA degradation, which is really great; I wish there were more studies like it.
We have been doing a fair amount of RNA FISH in tissues (big thanks to help from Shalev Itzkovitz and Long Cai), and while we haven’t done a formal study, I can say that RNA degradation is a huge problem in tissue. We’ve seen RNA in some tissues disappear literally within minutes after tissue harvest. This seems somewhat correlated with the RNase content of the tissue, but it’s still unclear. We’ve also worked with fresh frozen human samples, all collected ostensibly the same way, and found huge variability in RNA recovery, with some samples showing great signals and other, seemingly identical samples showing no RNA whatsoever. This is true even for GAPDH. No clue whether the variability is biological or not, but I'm inclined to think it's technical. The most likely culprit is ischemic time, over which we had no control in the human samples.
We’ve also been able to get decent signals in formaldehyde-fixed, paraffin-embedded (FFPE) samples, even though those are thought to be generally worse than fresh frozen. If I had to guess, I’d say it’s all about sample handling before freezing/fixing. I would be very hesitant to make any strong claims about gene expression without being absolutely certain about the sample quality. Problem is, I don't know what it means to be absolutely certain... :)
Anyway, so far, all we have is the sum of anecdotes, which I share here in case anyone’s interested. We really should do a more formal study of this at some point.
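In the meantime, here's roughly how I'd imagine the first pass at that formal study going: a minimal sketch, assuming we had a per-sample QC table with ischemic times and a housekeeping-gene FISH signal. The file and column names are hypothetical, not anything we actually have:

```python
# A minimal sketch, under stated assumptions, of the "formal study" alluded
# to above: regress a housekeeping-gene RNA FISH signal against ischemic
# time across tissue samples. CSV file and column names are hypothetical.

import pandas as pd
from scipy.stats import linregress

samples = pd.read_csv("tissue_qc.csv")  # hypothetical per-sample QC table

fit = linregress(samples["ischemic_time_min"],
                 samples["gapdh_spots_per_cell"])
print(f"slope = {fit.slope:.2f} spots/cell per minute")
print(f"r^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.2g}")
```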
Wednesday, July 15, 2015
RNA-seq vs. RNA FISH, part 2: differential expression of 19 genes
On the heels of RNA FISH vs. RNA-seq (absolute abundance), here's a comparison of differential expression for 19 genes between cells in two different conditions, RNA FISH vs. RNA-seq:
A few are way off, but not bad, on the whole.
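For the record, the comparison itself is simple; here's a minimal sketch of the computation behind a plot like this, with synthetic placeholder numbers standing in for the real measurements:

```python
# A minimal sketch of the comparison in the plot: log2 fold changes between
# two conditions, computed independently from RNA FISH spot counts and from
# RNA-seq, then correlated across the 19 genes. All numbers below are
# synthetic placeholders, not the actual data.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical per-gene measurements in conditions A and B.
fish_a = rng.uniform(5, 200, 19)  # mean FISH spots per cell, condition A
fish_b = rng.uniform(5, 200, 19)  # mean FISH spots per cell, condition B
seq_a = fish_a * rng.lognormal(0, 0.2, 19)  # RNA-seq estimate, with noise
seq_b = fish_b * rng.lognormal(0, 0.2, 19)

# Differential expression as log2 fold change, per method.
fish_lfc = np.log2(fish_b / fish_a)
seq_lfc = np.log2(seq_b / seq_a)

r, p = pearsonr(fish_lfc, seq_lfc)
print(f"Pearson r between methods: {r:.2f} (p = {p:.2g})")
```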
Saturday, July 11, 2015
How should we do script review to spot errors?
Sydney just thought up a great idea for the lab: she was wondering if someone could review all her analysis scripts to look for errors before we finalize them and submit a manuscript. Sort of like a code review, I guess. I think this is awesome, and it can definitely reduce the potential for getting some very serious egg on your face after publication. (Note: I'm not talking about infrastructure-type software, which I think has a very different set of problems and solutions. This is about analysis scripts for the science itself.)
At group meeting, we all briefly discussed how this might work in practice, which took on a very practical significance because Chris was going over figures for the paper he's putting together. Here were some of the points of discussion, much of it revolving around the time it takes to go over someone else's code.
- When should the review happen? In an ideal world, the reviewer would be involved each step of the way, spotting errors early on in the process. In practice, that's a pretty big burden on the reviewer, and there's the potential to spend time reviewing analyses that never see the light of day. So I think we all thought it's better done at the end. Of course, doing it at the bitter end could be, well, bitter. So we're thinking maybe doing it in chunks, as specific pieces of the analysis are finalized?
- Who should do it? Someone well-versed in the project would obviously be able to go through it faster. Also, they may be better able to suggest "sanity checks" (additional analyses to demonstrate correctness; see the sketch after this list) than someone naive to the project. Then again, might their familiarity blind them to certain errors? I'm just not sure at this stage how much work it is to go through this.
- Related: How actively should the code author be involved? On the one hand, looking at raw code without any guidance can be very intimidating and time-consuming. On the other hand, having someone lead you through the code might inadvertently steer the reviewer away from problem areas.
- Who should do it, part 2? Some folks in the lab are a bit more computationally savvy than others. I worry that the more computationally savvy folks might get overburdened. It could be a training exercise for others to learn, but the quality of the review itself might suffer somewhat.
- How should we assign credit? Acknowledgement on the paper? Co-authorship? I could see making a case either way, guess it probably depends on the specifics.
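To make the "sanity checks" idea concrete, here's a minimal sketch of the kind of cheap, science-agnostic assertions a reviewer could run on a finalized analysis table. The table, file, and function names are hypothetical, not anything we currently use:

```python
# A minimal sketch of "sanity checks" a reviewer can assert on a finalized
# analysis table without needing to understand the science. All names here
# are hypothetical.

import pandas as pd

def sanity_check_expression_table(df: pd.DataFrame) -> None:
    """Invariants that should hold regardless of the biology."""
    numeric = df.select_dtypes("number")
    # Normalized expression values should never be negative.
    assert (numeric >= 0).all().all(), "negative expression values"
    # Each gene should appear exactly once.
    assert df.index.is_unique, "duplicate gene identifiers"
    # NaNs usually mean a merge or join went wrong upstream.
    assert not df.isna().any().any(), "unexpected missing values"

# Hypothetical usage on the table that feeds the paper's figures.
expression = pd.read_csv("final_expression_table.csv", index_col=0)
sanity_check_expression_table(expression)
```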
Anyway, don't know if anyone out there has tried something like this, but if so, we'd love to hear your thoughts. I think it's increasingly important to think about these days.
Some of my favorite meta-science posts from the blog
I was recently asked to join a faculty panel on writing for Penn Bioengineering grad students, and in preparing for it, I realized that this blog already has a bunch of thoughts on "meta-science": how to do science, manage time, give a talk, write. Below are some vaguely organized links to various posts on the subject, along with a couple of outside links. I'll also try to maintain this Google Doc with the links as well.
Time and people management:
Save time with FAQs
Quantifying the e-mail in my life, 1/2
Organizing the e-mail in my life, 2/2
How to get people to do boring stuff
The Shockley model of academic performance
Use concrete rules to change yourself
Let others organize your e-mail for you
Some thoughts on time management
Is my PI out to get me?
How much work do PIs do?
What I have learned since being a PI
How to do science:
The Shockley model of academic performance
What makes a scientist creative?
Why there is no journal of negative results
Why does push-button science push my buttons
Some thoughts on how to do science
Storytelling in science
Uri Alon's cloud
The magical results of reviewer experiments
Being an anal scientist
Statistics is not science
Machine learning, take 2
Giving talks:
How to structure a talk
http://www.howtogiveatalk.com/
http://www.ibiology.org/ibioseminars/techniques/susan-mcconnell-part-1.html
Figures for talks vs. figures for papers
Simple tips to improve your presentations
Images in presentations
A case against laser pointers for talks
A case against color merges to show colocalization
Writing:
The most annoying words in scientific discourse
How to write fast
Passive voice in scientific writing
The principle of WriteItAllOut
Figures for talks vs. figures for papers
What's the point of figure legends?
Musing on writing
Another short musing on writing
Publishing:
The eleven stages of academic grief
A taxonomy of papers
Why there is no journal of negative results
How to review a paper
How to re-review a paper
What not to worry about when you submit a manuscript
Storytelling in science
The cost of a biomedical research paper
Passive-aggressive review writing
The magical results of reviewer experiments
Retraction in the age of computation
Career development:
Why are papers important for getting faculty positions?
Is academia really broken? Or just really hard?
How much work do PIs do?
What I have learned since being a PI
Is my PI out to get me?
Why there's a great crunch coming in science careers
Change yourself with rules
The royal scientific jelly
Programming:
The hazards of commenting code
Why don't bioinformaticians learn how to run gels?
Thursday, July 9, 2015
Notes from a Chef Watson lab party
I recently read about Chef Watson, a website that is the love child of IBM's Watson (the one that won at Jeopardy) and Bon Appetit Magazine. Basically, you put in ingredients and out come crazy recipes, generated by Watson's artificial intelligence. Note: it doesn't give you existing recipes. No, it actually generates the recipe based on its silicon-based knowledge of what tastes good with what.
After reading this awesome review (Diner Cod Pizza?!?), I had to try it for myself. Following a trial run with some deviled eggs (made with soy sauce, tahini, white miso, mayonnaise, and onion–yum!), I somehow convinced everyone in the lab to hold a Chef Watson-inspired lab dish-to-pass. I thought it would be a good idea because it combines our love of food with our love of artificial intelligence. And here are the results:
Maggie: Appetizer Rhubarb Tartlets
Made with polenta, rhubarb, orange juice, Boursin, tamarind paste, shallots, and basil. I actually really liked this one, although it was a bit tart.
Ally: Grapefruit potato salad
Didn't get the complete recipe details, but, umm, it had grapefruit and potato. I actually thought it was not too bad, considering I'm not a huge grapefruit fan.
Andrew: Bean... thingy
Made with kidney beans, pecorino romano, salami, tahini, pepper sauce, capers, chicken, green chiles, onions, mint syrup. This one totally rocked! Consensus winner!
Paul: Crab soup
Hmm. Don't remember what all was in this, but there was some crab. And a bunch of other random stuff. This recipe was interesting. Very interesting. The crazy thing was how the flavors evolved in every bite. Started sort of like crab soup and ended with the taste of Indian food (to me). Did I mention interesting? It won the prize for most interesting.
"That was interesting!":
Sara: Banana Lime Macaroni and Cheese
Yep, that just about sums it up, ingredients-wise. This dish was fairly polarizing (sorry, didn't get a picture). I actually thought it was pretty good; Sara herself was somewhat less enthusiastic. Meanwhile, she was busy blowing bubbles for her son Jonah, who for his part drank a bottle of the bubble mixture.
Claire: Corn bread
All in all, this was really good, and relatively normal. The only "weird" ingredient was honey, which I thought added a nice sweetness, although Claire thought it was a bit much. This picture is great, as much for the food as for the highly skeptical look on Todd's face!
Lauren: Chips and Salsa
Non-Watson, hence less outlandish. But tasty!
Stefan: Sausages
Non-Watson, but homemade and delicious.
Me: Asian Sesame Oil Pasta Salad
Japanese noodles, tahini, mayo, thyme, sherry vinegar, peanut, green pepper, yellow pepper, broccoli, sweet onions, apple. Forgot to take a picture, but not bad. Perhaps a bit bland, but tasted good with some chili pepper oil.
Verdict: Overall, I think Chef Watson is great! It definitely suggests flavor combinations that you would likely never think of otherwise. One lesson was that you probably want to flip through a bunch of recipes until you come across one that sort of makes sense. The other lesson is that Watson isn't so great at getting proportions and cooking times right, so you definitely have to use your own judgment or things could get ugly. Anyway, I for one welcome our robotic cooking overlords.
Update 7/11/2015: Some people strongly disagree with my sentiment that Chef Watson is great. I view it as glass half-full: Watson gives you interesting ingredients to use, but the first time you combine them you probably won't get the proportions right. But they are definitely combinations you would not have chosen otherwise. The glass half-empty version is that we already have tried and true recipes. Why mess with success? Well, I guess I'm just an optimist! Rhymes with futurist! :)