Friday, October 19, 2012

Some notes on our PLoS Bio 2006 paper

It's been a while since our PLoS Biology paper (2006), but I've found that every so often, I meet someone for whom the following information is relevant, so here it is:

In that paper, we found that transcription occurs in bursts, by which we mean that the gene itself transitions randomly between an active and an inactive state.  We found this to be the case when we isolated clones in which each clone had stably inserted an RNA-FISHable transgene into a particular (but unknown) locus.  In the paper, we analyzed one of those clones to show that, for that one clone, changing the concentration of transactivator resulted in a change in the burst size (i.e., the number of mRNA produced during a gene ON event).

What we didn't report is what happened in the other clones.  I didn't analyze any further clones in great detail, but we did observe something interesting.  Just by eye, I noticed that the fraction of ON cells seemed to vary tremendously from clone to clone--some had every other cell ON, some had just 1 in 25 or so ON.  But in each ON cell, the degree to which it was ON didn't really seem to vary much.  On the other hand, varying the promoter (between 7x and 1x tetO) did seem to change the degree to which a cell was ON.  So together, this would argue that, in this one specific case, burst frequency is largely determined by the genetic context of the gene, but burst size is governed by transcription factor activity and how the transcription factor interacts with the promoter.  Anyway, just in case someone finds this minor technical point interesting...
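For the curious, the two-state picture above can be sketched as a toy simulation of the "telegraph" model (all parameter names and rate values here are mine, purely illustrative, and not from the paper): k_on sets how often bursts occur, playing the role of the genomic-context knob, while the ratio k_tx/k_off sets the mean burst size, playing the role of the transactivator/promoter knob.

```python
import random

def telegraph_model(k_on, k_off, k_tx, t_end, seed=0):
    """Gillespie-style simulation of the two-state ("telegraph") gene model.

    The gene flips OFF -> ON at rate k_on and ON -> OFF at rate k_off;
    while ON, mRNAs are produced at rate k_tx.  Returns the list of burst
    sizes (mRNAs made per ON period) and the fraction of time spent ON.
    """
    rng = random.Random(seed)
    t, gene_on, size = 0.0, False, 0
    bursts, t_on = [], 0.0
    while t < t_end:
        # Total rate of the next event depends on the gene state.
        rate = (k_off + k_tx) if gene_on else k_on
        dt = rng.expovariate(rate)
        t += dt
        if gene_on:
            t_on += dt
            if rng.random() < k_tx / (k_tx + k_off):
                size += 1              # transcription event within the burst
            else:
                bursts.append(size)    # gene switches OFF; burst ends
                size, gene_on = 0, False
        else:
            gene_on = True             # gene switches ON; new burst begins
    return bursts, t_on / t

# Rare bursts (small k_on) but large mean burst size (~k_tx/k_off = 10):
bursts, frac_on = telegraph_model(k_on=0.1, k_off=1.0, k_tx=10.0, t_end=20000.0, seed=1)
```

With these made-up rates, the mean burst size comes out near k_tx/k_off and the fraction of time ON near k_on/(k_on + k_off), so raising k_on makes more cells ON without changing how ON each one is, matching the clone-to-clone observation above.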

Saturday, October 13, 2012

Rejection is good for you?

So this recent paper just came out in Science that examines the pre-publication process, including rejection and resubmission.  One interesting finding was that papers that went through the "publication shuffle" of getting rejected from one or more journals before eventually getting published tended to have higher citation counts than those that connected at the first journal to which they were submitted.  My initial reaction was that, well, these are actually important studies that for whatever reason did not connect at the fancy initial journal they perhaps deserved to be in.  But the authors then show that this is unlikely, because the effect persisted independent of whether the paper ultimately got published in a higher or lower impact journal than the initial one (hmm, would like to see how many papers).  They then say that they believe the most likely explanation is that papers get better through repeated rounds of editing and peer review, and go on to say that authors should persevere through the publication treadmill.

I would agree that papers typically get better through peer review.  However, I view this as a rather facile argument for the value of repeated cycles of revision and rejection.  Sure, any paper will get better if you spend more time on it and get people's comments on it.  However, I think I'm not alone in finding that for every useful point made by reviewers, there are typically another 4-5 points one has to address in order to get the paper in (often through laborious but pointless experiments).  I think peer review and repeated rejection are certainly not the most efficient way to make a paper better, and the process is probably mostly just a waste of everyone's time.  Also, I would point out that peer review typically does not stop most bad papers from coming out eventually.

I guess my main point is that the authors seem to be arguing in favor of the current system, stating that "Perhaps more importantly, these results should help authors endure the frustration associated with long resubmission processes and encourage them to take the challenge".  I think this is a dangerous statement, because it ignores the very real psychological damage these rejections inflict on young scientists, and also the toll these sometimes seemingly endless submission cycles take on lab finances and productivity.
