Wednesday, November 21, 2012

Apple owns the interface of today, Google owns the interface of tomorrow


I've been thinking a bit lately about computer interfaces, and I feel like we're going to see a big change in the next 5-10 years, a shift probably as big as the advent of the GUI.  And I think that Google is the company best prepared to deliver that future.

Let's imagine what the ideal interface would look like.  Ideally, I would just have a computer respond immediately to my every thought.  I would think "remind me to get milk on the way home" and the computer would just do it.  I would think "make this figure graphic with a bunch of equally sized circles here, here and here" and the computer would just do it.  This is obviously still a dream (although perhaps one that is less far off than we think).  But the idea is that the computer just does what you want without you having to do a lot of work.

Contrast this to interfaces of yesterday and today.  In the 80s and 90s, we had software that came with thick instruction manuals and made us do all the hard work of translating what we wanted to accomplish into something the computer understood, remembering all these weird key codes just to get Word (or WordPerfect, hah!) to change some stupid font or something like that.  Over time, interfaces have taken a huge step forward, probably because of some combination of better design and more powerful hardware.  Nowadays, it's much less common to read the instructions: interfaces are much more "discoverable", and the usage of a well-designed program (or app) will usually be fairly obvious.  Apple is quite clearly the best at this model.  Their apps (and the apps in their ecosystem) really do require little to no instruction to use and typically do exactly what you think they will.  They are better in that regard than Google, and definitely better than Microsoft.  And don't even get me started on Adobe Illustrator.

But this is very much the interface of today.  As computers get more powerful, I think there is a change underway towards interfaces even closer to the ideal of "just think about it and it happens."  To me, the best example is Google search.  Google search has a seemingly magical ability to know what you're thinking about almost before you think it.  It suggests things you want before you finish typing, it suggests things you didn't know you wanted, and it does this on a personal basis and super fast.  It doesn't care if you misspell or mistype or whatever; it just does what you want, at least for some set of things.  It also responds to a variety of different types of input.  I can type "weather" and my local weather pops up.  If I type "weather boulder CO", it gives me the weather in Boulder.  Same if I type "weather 80302".  It doesn't care, it just knows.  It's another step closer to the computer conforming to you rather than you conforming to the computer.
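To make the "it doesn't care, it just knows" point concrete, here is a minimal Python sketch of the kind of forgiving query interpretation I mean.  It is purely illustrative (the interpret function and its rules are invented for this post, not anything Google actually does); the point is just that three differently-phrased queries collapse to the same intent.

```python
import re

# Toy sketch of forgiving query interpretation -- emphatically NOT
# Google's actual pipeline. Differently-phrased weather queries all
# resolve to the same "weather lookup" intent.

ZIP_RE = re.compile(r"^\d{5}$")  # five digits -> treat as a US ZIP code

def interpret(query, default_location="your current location"):
    tokens = query.lower().split()
    if not tokens or tokens[0] != "weather":
        return None  # this toy only handles weather queries
    rest = tokens[1:]
    if not rest:
        location = default_location        # "weather" -> local weather
    elif ZIP_RE.match(rest[0]):
        location = rest[0]                 # "weather 80302" -> ZIP code
    else:
        location = " ".join(rest).title()  # "weather boulder CO" -> place name
    return {"intent": "weather", "location": location}

for q in ["weather", "weather boulder CO", "weather 80302"]:
    print(q, "->", interpret(q))
```

All three queries come back with the same intent and a sensible location, which is the whole trick: the burden of being precise moves from the user to the software.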

Apple is trying to make headway in this regard with Siri, and it's true that Siri came out before a similar option from Google.  But the internet abounds with examples of Google's new voice search kicking Siri's butt:


One of the most telling moments in this video is when the narrator searches for "How tall is Michael Jordan": Google's answer shows up instantly, while Siri takes 5-6 seconds.  It's not just about the timing, though; the narrator says something like "Those seconds count, because if it takes that long, you might as well just Google it."  To me, that's the difference.  Google has a HUGE lead in these sorts of search queries, probably an insurmountable one, and Apple is just nowhere close.

Searching for stuff about celebrities, etc., is one thing, but this has real practical consequences as well.  Consider the Apple Maps fiasco.  Many have pointed out that the maps are inaccurate, and perhaps they are; I haven't really noticed anything like that, honestly, and I actually like the new app's design and interface a lot.  To me, the far bigger problem is that it just doesn't have all that Google magic "I know what you mean" intelligence.  If I search for "Skirkanich Hall" in Google Maps, it knows exactly what I mean.  The same search yields a bunch of random crap in Apple Maps.  This sort of thing pervades the new Maps app: you often have to type in the exact address instead of just saying what you mean.  To me, that's a huge step back in usability.  It's making you conform to the program rather than having the program work for you.

The problem for Apple is that this Google magic is not just about good design (which Apple is rightly famous for).  It's about making some real R&D progress in artificial intelligence.  Apple certainly has the money to do it, and I think I read something about how they're increasing their R&D budget.  But they're comically far behind Google in this regard.  So I think the interface of tomorrow will belong to Google.

Saturday, November 3, 2012

In the beginning

Looking through some old pictures, I found this shot from when we had just moved into the lab:


And then this picture that's from a day or two later:


Well, I would like to say that things are a bit less messy these days...

One possible meaning of "learning something"

- Gautham

Sometimes I come away after reading a paper or going to a talk and say to myself, "That was nice. I feel I learned something." Opinions no doubt differ about the most desirable meaning of the word "learn" in the context of scientific research. One possible sense, one that I think I like, is that to have learned something is:

to reduce the problem of explaining a phenomenon to that of explaining one that is more basic, simpler or general.

In biology we can think of a few instances where science has proceeded by this "reduction" sense of learning. The theory that Darwin is famous for was incredible because it reduced the problem of accounting for the immense variety of species (the "mystery of mysteries") to the problem of explaining phenotypic variation and its inheritance. Or consider the problem of bacterial chemotaxis: E. coli moves towards regions where the concentration of a desirable chemical is higher. It constitutes learning something to reduce the problem of how it does this to the problem of how it remembers whether attractant concentration rose or fell during its last "run". In our lab's work, the problem of explaining the incomplete penetrance of certain skn-1 mutants in C. elegans was reduced to the problem of explaining the variability in end-1 gene expression in those mutants.
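To see how little machinery that one-bit memory actually requires, here is a toy one-dimensional run-and-tumble simulation in Python. It is a sketch with invented parameters (a triangular concentration profile, made-up tumble probabilities), not a model of real E. coli; the point is only that remembering whether attractant went up or down since the last step is enough to climb a gradient.

```python
import random

# Toy 1-D run-and-tumble sketch of the chemotaxis "memory" reduction:
# the cell stores ONLY whether attractant concentration rose or fell
# since its last step, and tumbles less often while it is rising.
# All numbers are made up for illustration, not measured values.

def attractant(x):
    return -abs(x - 50)  # concentration peaks at position x = 50

def run_and_tumble(steps=2000, seed=0):
    rng = random.Random(seed)
    x, direction = 0.0, 1
    previous = attractant(x)
    for _ in range(steps):
        x += direction
        current = attractant(x)
        # the entire "memory": did things improve since the last step?
        p_tumble = 0.1 if current > previous else 0.5
        if rng.random() < p_tumble:
            direction = rng.choice([-1, 1])
        previous = current
    return x

print(run_and_tumble())  # typically ends up hovering near the peak at x = 50
```

Run it with a few different seeds and the walker ends up near x = 50 far more often than an unbiased random walk would, despite never "knowing" where the peak is.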

In this sense, the ultimate in learning about a phenomenon is to reduce it to pieces that are not deemed to need further reduction or to plug it into phenomena that are extremely general such as physical law. For example, reducing the phenomenon of conventional superconductivity to the electron-phonon interaction, and thus reducing it to the basic rules of quantum mechanics and electrodynamics, means that in some sense everything about it has been learned. When one gets to the point of a proof where one can write "It therefore suffices to prove that quantum mechanics is correct," one can be sure that a kind of progress has been made.

Reductions can be proved correct, and thereby guaranteed to have taught us something, by several methods.
- Reconstitution in biology. In molecular biology, the parts of a putative explanation can be identified with objects, such as proteins, that can be physically purified and put back together to reconstitute a process. Combined with experiments that delete components from the natural setting, reconstitution can prove both necessity and sufficiency.
- Conceptual reconstitution. This standard from physics is a form of reconstitution that works when the system is simple enough to think about but its parts are impossible to physically separate: you cannot, in the lab, delete an axiom or turn off Maxwell's equations and redo the experiment. Conceptual reconstitution usually involves mathematical derivation or computation. Biology is such a low-symmetry subject that we are used to entertaining the possibility of modifying or extracting one thing without changing anything else; with a few exceptions, like isotope exchange experiments, physics and chemistry are relative strangers to that approach.

Some efforts do not, on their own, imply that something has been learned, if we take reduction as a strict requirement. Making an observation that does not distinguish between competing reductions (theories) of a phenomenon does not, on its own, reduce anything. It may, however, suggest a new reduction to someone; that seems to be the hope underlying many high-throughput experiments these days. An observation can also reveal a new phenomenon, a question we did not even know to ask, and that may lead to new learning. So it is unwise to always deride pure observation as shooting in the dark or fishing for questions. Superconductivity needed to be observed before it needed to be reduced. And as for what Leeuwenhoek did with his microscope, where would we be without that?

On the other hand, some might argue that the observation of a new fact, not reduction, may be the loftiest goal. It does not seem right to rank some explanation above Romer's discovery that light travels at a finite speed. While this discovery was critical to Maxwell's later explanation of the nature of light, its value appears to be partially independent of its later utility.

Feynman, talking about the character of physical law, warns: "Now such a topic has a tendency to become too philosophical because it becomes so general, and a person talks in such generalities, that everybody can understand him. It is then considered to be some deep philosophy." But it is probably a good thing for each scientist to have their own idea of what they can hope to learn from their research.


Friday, November 2, 2012

Overfitted


Paul was an "overfitted model" for Halloween. Heh, heh, heh.