I don't know when this "[something you don't like] is broken" thing became a... thing, but it's definitely now a... thing. I have no real idea, but I'm guessing it started with the design police (e.g. this video), then spread to software engineering, and now there are apparently 18 million things you can look at on Google about how academia is broken. Why are so many things seemingly broken? I think the answer, in many cases, is that this is the natural steady state in the evolution of design.
To begin with, though, it's worth mentioning that some stuff is just broken because somebody did something stupid or careless, like putting the on/off switch somewhere you might hit it by accident. Or putting the "Change objectives" button on a microscope right next to other controls, so that you hit it while fumbling around in the dark (looking at you, Nikon!). Easy fodder for the design police. Fine, let's all have a laugh, then fix it.
I think a more interesting reason why many things are apparently broken is that, in some ways, being broken is the equilibrium solution. Let me explain with a couple of examples. One of the most (rightly) ridiculed examples of bad design is the current state of the remote control.
Here's a particularly funny example of a smart home remote:
Yes, you can both turn on your fountain and source from FTP with this remote.
Millions of buttons of unknown function, hard to use, bad design, blah blah. But I view this not as a failure of the remote, but rather a sign of its enormous success. The remote control was initially a huge design win. It allowed you to control your TV from far away so that you didn't have to run around all the time just to change the channel. And in the beginning, it was basically just channel up/down, volume up/down, and on/off. A pretty simple and incredibly effective design, if you ask me! The problem is that the remote was a victim of its own success: as designers realized its utility, they began to pile more and more functionality into it, often with less thought, pushing beyond what a remote was ever really designed to do. It was the very success of the remote that made it ripe for so much variation and building-upon. It's precisely when the object itself becomes overburdened that the process stops, and we settle into the current situation: a design that is "broken". If everything evolves until the process of improvement stops by virtue of the thing being broken, then practically by definition, almost everything should be broken.
Same in software development. Everyone knows that code should be clean and well engineered, and lots of very smart people work hard to make the smartest design decisions they can. Why, then, do things always get refactored? I think it's because any successfully designed object (in this case, say, a software framework) will rapidly get used by a huge number of people, often for things far beyond its original purpose. The point where progress stalls is, again, precisely when the framework's design is no longer suitable for its purpose. That's the "broken" steady state we get stuck with, and ironically, the better the original design, the more people will use it and the more broken it will ultimately become. iTunes, the once-transformative program for managing music that is now an unholy mess, is a fantastic example of this. Hence the need for continuous creative destruction.
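If you want to see what this trajectory looks like in code, here's a toy Python sketch (all names are hypothetical, not any real API) of the feature-creep dynamic: a function that starts out doing one thing well, and then, many user requests later, has become the kind of overgrown interface that begs for refactoring.

    def play(track):
        """Version 1: the original design. Simple, focused, a clear win."""
        print(f"Playing {track}")


    def play_eventually(track, volume=1.0, fade_in_seconds=0.0, loop=False,
                        sync_to_cloud=False, burn_to_cd=False,
                        manage_phone_backup=False):
        """Version N: the same function years later, after every user's pet
        feature has been bolted on. Each flag made sense at the time;
        together they're the chewing gum and duct tape."""
        if sync_to_cloud:
            print(f"Syncing {track} to the cloud...")
        if burn_to_cd:
            print(f"Burning {track} to CD...")
        if manage_phone_backup:
            print("Backing up your phone (why does a music player do this?)")
        mode = " on loop" if loop else ""
        fade = f" (fading in over {fade_in_seconds}s)" if fade_in_seconds else ""
        print(f"Playing {track} at volume {volume}{mode}{fade}")


    play("song.mp3")                           # the original, simple call
    play_eventually("song.mp3", loop=True,
                    manage_phone_backup=True)  # the "broken" steady state

The point isn't that any one flag is bad. Each addition was locally sensible, and that's exactly why successful designs drift toward this state.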
I see this same dynamic in science all the time. Take the development of a new method. Typically, you start with something that works really robustly, then push it as far as you can until the whole thing is held together with chewing gum and duct tape, then publish. Not all methods papers are like this, but many are: a method that is an amazing tour de force... and completely useless to almost everyone outside of that one lab. My rule of thumb is that if you say "How did they do that?" when you read the paper, then you're going to say "Hmm, how are we gonna do that?" when you try to implement it in your own lab.
Take CRISPR as another example. What's really revolutionary about it is that it actually works, and works (relatively) easily, which is why labs around the world adopted it so quickly. Hence the pretty much insane pace of development in the field. Already, though, we're getting to the point of massively parallel CRISPR screens and so forth, things I couldn't really imagine doing in my own lab, at least not without a major investment of time and effort. After a while, the state of the art will be methods that are "broken" in the sense that they are too complex to use outside the confines of the labs that invented them. Perhaps the truest measure of a method is how far it goes before reaching the point of being "broken". From this viewpoint, being "broken" should in some ways be a cause for celebration!
(Incidentally, one could argue that grant review, paper review, and maybe other parts of academia are broken for some of the same reasons.)