Sunday, December 16, 2012

How to understand biology

Lately, I've been wondering whether creating a complete understanding of biology is hopeless given the complexity involved.  Maybe it's like predicting the weather or something--just too many variables and too many unknowns.  Often you hear the analogy that the way we try to understand biology today is like trying to understand how a clock works by throwing it at the wall and looking at the pieces.  I think there is a fundamental truth to that.  Lately, in my talks, I've been using the analogy of grinding up an iPhone into a pile of dust and then trying to understand how the iPhone works by considering the elemental makeup of that pile of dust.  That's sort of like what we do now: we grind up a bunch of cells and see which genes are up and down when comparing one "pile of dust" to another.  I think it can be hard to gain real mechanistic insight from that, or at least it seems that it will be hard to really understand how a cell works that way.  Hmm.  Is there a point when we'll actually get there?

Which leads me to a thought.  Maybe it is hopeless to actually figure out how an iPhone works.  But maybe we don't need to.  Take reprogramming of stem cells.  In that case, we don't need to know exactly what the cell does with those reprogramming factors, we just know that they can reprogram the cell.  It's sort of like saying, "Well, someone gave me this iPhone, and I have no idea how it works (having ground a couple of them up), but at the very least, it seems like if I push these particular buttons, then I can somehow get it to call my mom".  So we've gained some higher level understanding of how the system works, enough to bend it to our will occasionally.  That seems more feasible.  Maybe.  I hope.

Wednesday, November 21, 2012

Apple owns the interface of today, Google owns the interface of tomorrow


I've been thinking a bit lately about computer interfaces, and I feel like we're going to see a big change in the next 5-10 years, a shift probably as big as the advent of the GUI.  And I think that Google is the company best prepared to deliver that future.

Let's imagine what the ideal interface would look like.  Ideally, I would just have a computer respond immediately to my every thought.  I would think "remind me to get milk on the way home" and the computer would just do it.  I would think "make this figure graphic with a bunch of equally sized circles here, here and here" and the computer would just do it.  This is obviously still a dream (although perhaps one that is less far off than we think).  But the idea is that the computer just does what you want without you having to do a lot of work.

Contrast this with the interfaces of yesterday and today.  In the 80s and 90s, we had software that came with thick instruction manuals and very much made us do all the hard work: translating whatever we were trying to accomplish into weird key codes, just to get Word (or WordPerfect, hah!) to change some stupid font or something like that.  Over time, interfaces have taken a huge step forward, probably because of some combination of better design and more powerful hardware.  Nowadays, it's much less common to read the instructions: interfaces are much more "discoverable", and the usage of a well-designed program (or app) will usually be fairly obvious.  Apple is quite clearly the best at this model.  Their apps (and apps in their ecosystem) really do require little to no instruction to use and typically do exactly what you think they will.  And they are better in that regard than Google, and definitely better than Microsoft.  And don't even get me started on Adobe Illustrator.

But this is very much the interface of today.  As computers get more powerful, I think there is a change underway toward interfaces that are even closer to the ideal of "just think about it and it happens".  To me, the best example is Google search.  Google search has a seemingly magical ability to know what you're thinking about almost before you even think it.  It suggests things you want before you finish typing, it suggests things you didn't know you wanted (but do), and it does this on a personal basis and does it super fast.  It doesn't care if you misspell or mistype or whatever.  It just does what you want, at least for some set of things.  It also responds to a variety of different types of input.  I can type "weather" and my local weather pops up.  If I type "weather boulder CO", it gives me the weather in Boulder.  Same if I type "weather 80302".  It doesn't care, it just knows.  It's another step closer to the computer conforming to you rather than you conforming to the computer.
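To make the "it just knows" point concrete, here is a toy sketch of what that kind of flexible input normalization looks like: several phrasings of a weather query all map to one structured request.  This is entirely made up for illustration (the function name and the dictionary format are my inventions); Google's actual query understanding is vastly more sophisticated than a regex.

```python
import re

def parse_weather_query(q):
    """Toy intent parser: accept several phrasings of a weather query
    and normalize them to one structured request.  The location may be
    a place name, a 5-digit ZIP code, or empty (meaning "use my
    current location")."""
    m = re.match(r"^\s*weather\s*(.*?)\s*$", q, re.IGNORECASE)
    if not m:
        return None
    loc = m.group(1)
    if re.fullmatch(r"\d{5}", loc):
        return {"intent": "weather", "zip": loc}
    return {"intent": "weather", "place": loc or "current location"}

parse_weather_query("weather")            # local weather
parse_weather_query("weather boulder CO") # weather by place name
parse_weather_query("weather 80302")      # weather by ZIP code
```

The point of the sketch is just that the burden of format-matching sits in the program, not the user: all three inputs "work", and the user never has to know which form the system prefers.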

Apple is trying to make headway in this regard with Siri, and it's true that Siri came out before a similar option from Google.  But the internet abounds with examples of Google's new voice tool kicking Siri's butt:


One of the most telling moments in this video is when the narrator searches for "How tall is Michael Jordan": Google's answer shows up instantly, while Siri takes 5-6 seconds.  It's not just about the timing, though; the narrator says something like "Those seconds count, because if it takes that long, you might as well just Google it."  To me, that's the difference.  Google has a HUGE lead in these sorts of search queries, probably insurmountable, and Apple is just nowhere close.

Searching for stuff about celebrities, etc., is one thing, but this has real practical consequences as well.  Consider the Apple Maps fiasco.  Many have pointed out that the maps are inaccurate, and perhaps they are.  I haven't really noticed anything like that, honestly, and I actually like the new app's design and interface a lot.  To me, the far bigger problem is that it just doesn't have all that Google magic "I know what you mean" intelligence to it.  If I search for "Skirkanich Hall" in Google Maps, it knows exactly what I mean.  The same search yields a bunch of random crap in Apple Maps.  This sort of thing pervades the new Maps app, where you often have to type in the exact address instead of just saying what you mean.  To me, that's a huge step back in usability and interface.  It's making you conform to the program rather than having the program work for you.

The problem for Apple is that this Google magic is not just about good design (which Apple is rightly famous for).  It's about making some real R&D progress in artificial intelligence.  Apple certainly has the money to do it, and I think I read something about how they're increasing their R&D budget.  But they're comically far behind Google in this regard.  So I think the interface of tomorrow will belong to Google.

Saturday, November 3, 2012

In the beginning

Looking through some old pictures, I found this shot from when we had just moved into the lab:


And then this picture that's from a day or two later:


Well, I would like to say that things are a bit less messy these days...

One possible meaning of "learning something"

- Gautham

Sometimes I come away after reading a paper or going to a talk and I say to myself, "That was nice. I feel I learned something." Opinions no doubt differ about the most desirable meaning of the word "learn" in the context of scientific research. One possible sense, which I think I like, is that to have learned something is:

to reduce the problem of explaining a phenomenon to that of explaining one that is more basic, simpler or general.

In biology we can think of a few instances where science has proceeded in the "reduction" learning method. The theory that Darwin is famous for was incredible because it reduced the problem of accounting for the immense variety of species (the "mystery of mysteries") to the problem of explaining phenotype variation and its inheritance. Consider the problem of bacterial chemotaxis. E. coli move towards regions where the concentration of a desirable chemical is higher. It constitutes learning something to reduce the problem of how they do this to the problem of how they remember whether they just "ran" from an area of high or low attractant. In our lab's work, the problem of explaining incomplete penetrance of certain skn-1 mutants in C. elegans was reduced to the problem of explaining the variability in gene expression of end-1 in those mutants.
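As an aside, the chemotaxis reduction is simple enough to caricature in a few lines of code. The sketch below is just a cartoon of the memory-based run-and-tumble strategy: a 1-D walker that tumbles less often when the attractant concentration has increased since its last step. The function name, step sizes, and tumbling probabilities are all invented for illustration; this is not a model of the actual biochemistry.

```python
import random

def chemotaxis_walk(conc, steps=10000, step_size=0.1, seed=0):
    """Cartoon of memory-based chemotaxis: a 1-D walker that tumbles
    (randomizes its direction) less often when the attractant
    concentration went up over its last step."""
    rng = random.Random(seed)
    x, direction = 0.0, 1
    last_c = conc(x)
    for _ in range(steps):
        x += step_size * direction
        c = conc(x)
        # The one-step "memory": did the last move go up the gradient?
        p_tumble = 0.1 if c > last_c else 0.5
        if rng.random() < p_tumble:
            direction = rng.choice((-1, 1))
        last_c = c
    return x

# With an increasing attractant gradient, the walker drifts toward
# higher concentrations even though each individual move is local.
final_x = chemotaxis_walk(lambda x: x)
```

The whole explanatory burden has been shifted onto the comparison `c > last_c`: once you grant the cell that one bit of memory, directed motion up the gradient falls out of otherwise random motion, which is exactly the sense of "reduction" described above.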

In this sense, the ultimate in learning about a phenomenon is to reduce it to pieces that are not deemed to need further reduction or to plug it into phenomena that are extremely general such as physical law. For example, reducing the phenomenon of conventional superconductivity to the electron-phonon interaction, and thus reducing it to the basic rules of quantum mechanics and electrodynamics, means that in some sense everything about it has been learned. When one gets to the point of a proof where one can write "It therefore suffices to prove that quantum mechanics is correct," one can be sure that a kind of progress has been made.

Reductions can be proved correct, and therefore guaranteed to have been a learning of something, by several methods.
- Reconstitution in biology. In molecular biology, different parts of a putative explanation can be identified with objects such as proteins that can be physically purified and put back together to reconstitute a process. Combined with experiments that delete components from a natural environment, both necessity and sufficiency can be proved.
- Conceptual reconstitution. This standard from physics is a form of reconstitution that works when the system is simple enough to think about but its reduction is impossible to physically separate into parts. You cannot, in the lab, delete an axiom or turn off Maxwell's equations and redo the experiment. Conceptual reconstitution usually involves mathematical derivation or computation. Biology is such a low-symmetry problem that we are used to entertaining the possibility of modifying or extracting one thing without changing anything else. With a few exceptions like isotope exchange experiments, physics and chemistry are relative strangers to that approach.

Some efforts do not, on their own, imply that something has been learned, if we take reduction as a strict criterion. An observation that does not distinguish between competing reductions (theories) of a phenomenon does not, on its own, reduce anything. However, it may suggest a new reduction to someone. That seems to be the hope underlying many high-throughput experiments these days. Also, an observation can reveal a new phenomenon, raising a question we did not even know to ask, and that may lead to new learning. So it is unwise to always deride pure observation as a kind of shooting in the dark or fishing for questions. Superconductivity needed to be observed before it could be reduced. And as for what Leeuwenhoek did with his microscope, where would we be without that?

On the other hand, some might argue that observation of a new fact, not reduction, may be the loftiest goal. It does not seem right to rank some explanation above Rømer's discovery that light travels at a finite speed. While this discovery was critical to Maxwell's later explanation of the nature of light, its value appears to be partially independent of that later utility.

Feynman warns when talking about the character of physical law that: "Now such a topic has a tendency to become too philosophical because it becomes so general, and a person talks in such generalities, that everybody can understand him. It is then considered to be some deep philosophy." But it is probably a good thing for each scientist to have their own idea of what they can hope to learn by their research.



Friday, October 19, 2012

Some notes on our PLoS Bio 2006 paper



It's been a while since our PLoS Biology paper (2006), but I've found that every so often, I meet someone for whom the following information is relevant, so here it is:

In that paper, we found that transcription occurs in bursts, by which we mean that the gene itself transitions randomly between an active and an inactive state.  We found this to be the case in a set of clones, each of which had stably inserted an RNA-FISHable transgene into a particular (but unknown) locus.  In the paper, we analyzed one of those clones to show that, for that clone, changing the concentration of transactivator resulted in a change in the burst size (i.e., the number of mRNA produced during a gene ON event).

What we didn't report is what happened in the other clones.  I didn't analyze any further clones in great detail, but we did observe something interesting.  Just by eye, I noticed that the fraction of ON cells seemed to vary tremendously from clone to clone--some had every other cell ON, some had just 1 in 25 or so ON.  But in each ON cell, the degree to which it was ON didn't really seem to vary much.  On the other hand, varying the promoter (between 7x and 1x tetO) seemed to change the degree to which a cell was ON.  So together, this would argue that in this one specific case, burst frequency is largely determined by the genetic context of the gene, whereas burst size is governed by transcription factor activity and how the transcription factor interacts with the promoter.  Anyway, just in case someone finds this minor technical point interesting...
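For readers who want a concrete picture of the frequency/size distinction, the standard two-state ("telegraph") gene model captures it: the gene flips between OFF and ON, and mRNA is made only while ON, so burst frequency is set by the ON-switching rate and mean burst size by the transcription rate relative to the OFF-switching rate. Below is a minimal Gillespie-style simulation of that model. This is my own illustration with made-up parameter names, not the analysis code from the paper.

```python
import random

def simulate_bursts(k_on, k_off, k_tx, k_deg=1.0, t_end=50.0, seed=0):
    """Gillespie simulation of the two-state ("telegraph") gene model.
    Burst frequency scales with k_on; mean burst size with k_tx / k_off.
    Returns the mRNA count at time t_end."""
    rng = random.Random(seed)
    t, gene_on, mrna = 0.0, False, 0
    while True:
        rates = [k_off if gene_on else k_on,  # gene switches state
                 k_tx if gene_on else 0.0,    # transcription (ON only)
                 k_deg * mrna]                # mRNA degradation
        total = sum(rates)
        dt = rng.expovariate(total)           # time to next reaction
        if t + dt > t_end:
            return mrna
        t += dt
        r = rng.random() * total              # pick which reaction fired
        if r < rates[0]:
            gene_on = not gene_on
        elif r < rates[0] + rates[1]:
            mrna += 1
        else:
            mrna -= 1
```

In this model, sweeping k_on while holding k_tx and k_off fixed changes what fraction of cells have appreciable mRNA at any instant without changing how strongly ON cells are ON, which mirrors the clone-to-clone pattern described above; changing k_tx or k_off (as a promoter change might) alters the burst size instead.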

Saturday, October 13, 2012

Rejection is good for you?


So this recent paper just came out in Science that examines the pre-publication process, including rejection and resubmission.  One interesting finding was that papers that went through the "publication shuffle" of getting rejected from one or more journals before eventually getting published tended to have higher citation counts than those that connected at the first journal to which they were submitted.  My initial reaction was that, well, these are actually important studies that for whatever reason did not connect at the fancy initial journal that perhaps they deserved to be in.  But the authors then show that this is unlikely, because the effect persisted regardless of whether the paper ultimately got published in a higher or lower impact journal than the initial one (hmm, I would like to see how many papers that was).  They then say that they believe the most likely explanation is that papers get better through repeated rounds of editing and peer review, and go on to say that authors should persevere through the publication treadmill.

I would agree that papers typically get better through peer review.  However, I view this as a rather facile argument for the value of repeated cycles of revision and rejection.  Sure, any paper will get better if you spend more time on it and get people's comments on it.  However, I think I'm not alone in finding that for every useful point made by reviewers, there are typically another 4-5 points one has to address in order to get the paper in (often through laborious but pointless experiments).  Peer review and repeated rejection are certainly not the most efficient way to make a paper better, and the process is probably mostly just a waste of everyone's time.  Also, I would point out that peer review typically does not stop most bad papers from coming out eventually.

I guess my main point is that the authors seem to be arguing in favor of the current system, stating that "Perhaps more importantly, these results should help authors endure the frustration associated with long resubmission processes and encourage them to take the challenge".  I think this is a dangerous statement, because it ignores the very real psychological damage these rejections inflict on young scientists, as well as the toll these sometimes seemingly endless submission cycles take on lab finances and productivity.


Wednesday, October 3, 2012