
Sunday, April 28, 2019

Reintegrating into lab following a mental health leave

[From AR] These days, there is greatly increased awareness and decreased stigmatization of mental health issues amongst trainees (and faculty, for that matter), which is great. For mentors, understanding mental health issues amongst trainees is super important, and something we have until recently not gotten a lot of training on. More recently, it is increasingly common to get some training, or at least information, on how to recognize the onset of mental health issues, and in graduate groups here at Penn at least, it is fairly straightforward to initiate a leave of absence to deal with the issue, should that be required. However, one aspect of handling mental health leaves for which there appears to be precious little guidance out there is what challenges trainees face when returning from a mental health leave of absence, and what mentors might do about it. Here, I present a document written by four anonymous trainees with some of their thoughts (and I will chime in at the end with some thoughts from the mentor perspective).


[From trainees] This article is a collection of viewpoints from four trainees on mental health in academia. We list a collection of helpful practices on the part of the PI and the lab environment in general for cases when the trainees return to lab after recovering from mental health issues.

A trainee typically returns either because they feel recovered and ready to get back to normalcy, or because they are **better** than before and have self-imposed goals (e.g., finishing their PhD), or because they just miss doing science. Trainees in these situations are likely to have spent time introspecting on multiple fronts, and they often return with renewed drive. However, it is very difficult to shake off the fear of a recurrence of the episode (here we use episode broadly to refer to a phase of very poor mental health), which can make trainees more vulnerable and sensitive to external circumstances than the average person; for instance, minor stresses can appear much larger. In particular, an off day after a mental health episode can make one think they are already slipping back into it. In some cases, students may find it more difficult to start a new task, perhaps due to a latent fear of not being able to learn afresh. Support from the mentor and the lab environment in general can be crucial in both building and sustaining the trainee's confidence. It is important that the mentor recognize that the act of returning to the lab is an act of courage in itself. The PI's interactions with the trainee have a huge bearing on how the trainee re-integrates into their work. Here are some steps that we think can help:

Explicitly tell trainees to seek the PI out if they need help. This can be important for all trainees to hear, because the default assumption is that these are personal problems to be dealt with personally in their entirety. In fact, advisors should do this with every trainee: explicitly tell them that they can reach out should their mental health be compromised or affected in any way. Restating this to a returning trainee can help create a welcoming and safe environment.

Reintegrate the trainee into the lab environment. The PI should have an open conversation with the trainee about how much information they want divulged to the rest of the group/department, and about how the trainee's absence should be communicated to the group, if at all.

Increased time with the mentee. More frequent meetings with a returning student for the first few months help immensely, for multiple reasons: a. they can help quell internal fears through regular reinforcement; b. they can get the student back on track with their research faster; c. the academically stimulating conversations can provide the gradual push needed to return to thinking at the level they were used to before the episode. Having said that, individuals have their own preferred way of handling re-entry, and a frank conversation about how they want to proceed helps immensely.

Help rebuild the trainee's confidence. One of the authors of this post recounts her experience of getting back on her feet. Her advisor unequivocally told her: "Your PhD will get done; you are smart enough. You just need to work on your mental health, and I will work with you to make that the first priority." Words of encouragement can go a long way -- there is ample anecdotal evidence that people can fully recover from mental health episodes if proper care is taken by all stakeholders.

Create a small, well-defined goal or team goals. One of the authors of this article spent her first few months working on a fairly easy and straightforward project with a clear message, one that was easy to keep pushing on as she settled into lab again. While this may not be the best way forward for everyone, depending on where they are with their research, a clearly defined goal can come in the form of a quick side project or a deliberate breaking-down of a large project into very actionable smaller ones. Another alternative is to have the trainee work with another student or postdoc, which allows constant back-and-forth and quicker validation, leaving less room for doubt.

Remember that trainees may need to come back for a variety of other reasons as well. There are costs associated with a prolonged leave of absence, and some trainees may need to come back before they are totally done with their mental health work. It's likely that some time needs to be set aside to continue that work, and it's helpful if PIs can work with students to accommodate that, within reason.

Finally, it is important for all involved parties to realize that the job of a PI is not to be the trainee’s parent, but to help the student along in their professional journey. Facilitating a lab environment where one feels comfortable, respected, and heard goes a long way, even if that means going an extra mile on the PI’s part to ensure such conditions, case-by-case.

[Back to AR] Hopefully this article is helpful for mentors, and also for trainees as they try to reintegrate into the lab. For my part as a mentor, I think that a little extra empathy and attention can go a long way. It's important for all parties to realize that mentors are typically not trained mental health professionals, but some common-sense guidelines could include increased communication, reasonable expectations, and in particular a realization that tasks that might have seemed quite easy for a trainee to accomplish before might be much harder at first now, especially anything out of the usual comfort zone, like learning a new technique.

Comments more than welcome; it seems this is a relatively under-reported area. And a huge thank you to the anonymous writers of this letter for starting the discussion.

Sunday, April 2, 2017

Nabokov, translated for academia

Nabokov: I write for my pleasure, but publish for money.
Academia: I write for your pleasure, but pay money to publish.

More specifically…

Undergrad: I don’t know how to write, but please let me publish something for med school.
Grad student: I write my first paper draft for pleasure, but my thesis for some antiquated notion of scholarship.
Postdoc: I write "in press" with pleasure, but "in prep" for faculty applications.
Editor: You write for my pleasure, but these proofs gonna cost you.
SciTwitter: I write preprints for retweets, but tweet cats/Trump for followers.
Junior PI: I write mostly out of a self-imposed sense of obligation, but publish to try and get over my imposter syndrome.
Mid-career PI: I say no to book chapters (finally (mostly)), but publish to see if anyone is still interested.
Senior PI: I write to explain why my life’s work is under-appreciated, but give dinner talks for money.

Sunday, March 12, 2017

I love Apple, but here are a few problems

First off, I love Apple products. I’ve had only Apple computers for just about 2 decades, and have been really happy to see their products evolve in that time from bold, renegade items to the refined, powerful computers they are today. My lab is filled with Macs, and I view the few PCs that we have to use to run our microscopes with utter disdain. (I’m sort of okay with the Linux workstations we have for power applications, but they honestly don’t get very much use and they’re kind of a pain.)

That said, lately, I’ve noticed a couple problems, and these are not just things like “Apple doesn’t care about Mac software reliability” or “iTunes sucks” or whatever. These are fundamental bets Apple has made, one in hardware and one in software, that I think are showing signs of being misplaced. So I wrote these notes on the off chance that somehow, somewhere, they make their way back to Apple.

One big problem is that Apple’s hardware has lost its innovative edge, mostly because Apple seems disinclined to innovate for various reasons. This has become plainly obvious by watching the undergraduate population at Penn over the last several years. A few years ago, it used to be that a pretty fair chunk of the undergrads I met had MacBook Airs. Like, a huge chunk. It was essentially the standard computer for young people. And rightly so: it was powerful (enough), lightweight, not too expensive, and the OS was clean and let you do all the things you needed to do.

Nowadays, not so much. I'm seeing all these kids with the Surfaces and so forth that are real computers, but with a touch screen/tablet "mode" as well. And here's the thing: even I'm jealous. Now, I'm not too embarrassed to admit that I have read enough Apple commentary on various blogs to get Apple's reasons for not making such a computer. First off, Apple believes that most casual users, perhaps including students, should just be using iPads, and that iOS serves their needs while providing the touch/tablet interface. Secondly, they believe that the touch interface has no place, ergonomically or in principle, on laptop and desktop Macs. And if you're one of the weird people who somehow needs a touch interface and full laptop capabilities, you should buy both a Mac and an iPad. I'm now realizing that Apple is just plain wrong on this.

Why don’t I see students with iPads, or an iPad Pro instead of a computer? The reality is that, no matter how much Apple wants to believe it and Apple fans want to rationalize it (typically for “other people”), iOS is just not useful for doing a lot of real work. People want filesystems. People want to easily have multiple windows open, and use programs that just don’t exist on iOS (especially students who may need to install special software for class). The few people I know who have iPad Pros are those who have money to burn on having an iPad Pro as an extra computer, but not as a replacement. The ONLY person I know who would probably be able to work exclusively or even primarily with an iPad is my mom, and even she insists on using what she calls a “real” computer (MacBook Pro).

(Note about filesystems: Apple keeps trying to push this “post-filesystem” world on us, and it just isn’t taking. Philosophical debates aside, here’s a practical example: Apple tried to make people switch away from using “Save As…” to a more versioned system more compatible with the iOS post-filesystem mindset, with commands like “Revert” and “Duplicate”. I tried to buy in, I really did. I memorized all the weird new keyboard shortcuts and kept saying to myself “it’ll become natural any day now”. Never did. Our brains just don’t work that way. And it’s not just me: honestly, I’m the only one in my lab who even understands all this “Duplicate” “Revert” nonsense. The rest of them can’t be bothered—and mostly just use other software without this “functionality” and… Google Drive.)

So you know what would be nice? Having a laptop with a tablet mode/touch screen! Apple's position is that it's an interface and ergonomic disaster: it's hard to use interface elements with touch, and it's hard to use a touch screen on a vertical laptop screen. There are merits to these arguments, but you know what? I see these kids writing notes freehand on their computer, and sketching drawings on their computer, and I really wish I could do that. And no, I don't want to lug around an iPad to do that and synchronize with my Mac via their stupid janky iCloud. I want it all in one computer. The bottom line is that the Surface is cool. Is it as well done as Apple would do it? No. But it does something that I can't do on an Apple, and I wish I could. Apple is convinced that people don't want to do those things, and that you shouldn't be able to do those things. The reality seems to be that people do want to do those things and that it's actually pretty useful for them. Apple's mistake is thinking that the reason people bought Apples was for design purity. We bought Apples because they had design functionality. Sometimes these overlap, which has been part of Apple's genius over the last 15 years, and so you can mistake one for the other. But in the end, a computer is a tool to do the things I need.

Speaking of which, the other big problem that Apple has is its approach to cloud computing. I think it's pretty universally acknowledged that Apple's cloud computing efforts suck, and I won't document all that here. Mostly, I've been trying to understand exactly why, and I think the fundamental problem is that Apple is thinking synchronize while everyone else is thinking synchronous. What does that mean? Apple is stuck in an "upload/download" (i.e., synchronize) mindset from ten years ago, while everyone else has moved on to a far more seamless design in which the distinction between cloud and non-cloud is largely invisible. And whatever attempts Apple has made to move to the latter have been pretty poorly executed (although that at least gives hope that they are thinking about it).

Examples abound, and they largely manifest as irritations in using Apple's software. Take, for example, something as simple as the Podcasts app on the iPhone, which I use every day when I bike to work (using Aftershokz bone conduction headphones, suhweet, try them!). If I don't pre-download the next podcast, half the time it craps out when it gets to the next episode in my playlist, even though I have cell service the whole way. Why? Because when it gets there, it waits to download the next one before playing, and sometimes gets mixed up during the download. So I end up trying to remember to pre-download them. And then I have to watch my storage, making sure the app removes the downloads afterward. Why am I even thinking about this nowadays? Why can't it just look at my playlist and make the episodes play seamlessly? Upload/download is an anachronism from the age of synchronize, when most things are moving to synchronous.

Same with AppleTV (sucks) compared to Netflix on my computer, or Amazon on my computer, or HBO, or whatever. They just work, without me having to think about pre-downloading whatever before the movie can start.

I suppose there was a time when this was important for when you were offline. Whatever, I’m writing this in a Google Doc on an airplane without WiFi. And when I get back online, it will all just merge up seamlessly. With careful thought, it can be done. (And yes, I am one of the 8 people alive who has actually used Pages on the web synchronized with Pages on the Mac—not quite there yet, sorry.)

To its credit, I think Apple does sort of get the problem, belatedly. The problem is that when they have tried synchronous, it's not well done. Take the example of iCloud Photos, or whatever the hell they call it. One critical new feature that I was excited about was that it will sense if you're running out of space on your device and then delete local copies of old photos, storing just the thumbnails. All your photos accessible, but using up only a bit of space: sounds very synchronous! The problem is that as currently implemented, I have only around 150MB free on my phone and over 1 GB of space used by Photos. Same on my wife's MacBook Pro: not a lot of HD space, but Photos starts doing this cloud sync only when things are already almost completely full. The problem is that Apple views this whole system as a backup measure to kick in only in emergencies, whereas if they bought into the mentality completely, Photos on my computer would take up only a small fraction of the space it does, freeing up the rest of the computer for everything else I need it to do (you know, with my filesystem). Not to mention that any synchronization and space freeing is completely opaque and happens seemingly at random, so I never trust it. Again, great idea, poor execution.

Anyway, I guess this was marginally more productive than doing the Sudoku in back of United Magazine, but not particularly so, so I’ll stop there. Apple, please get with it, we love you!

Saturday, January 7, 2017

I think Apple is killing the keyboard by slow boiling

I’m pretty sure Apple is planning to kill the mechanical keyboard in the near future. What’s interesting is how they’re going about it.

Apple has killed/"moved forward" a lot of tech by unilateral fiat, including the floppy drive, the DVD drive, and of course various ports and cables. (Can we just stop for a minute and consider the collective internet brainpower wasted arguing about the merits of these moves? (Yes, I can appreciate the irony.))

The strategy with the keyboard, however, is something different. For the past several design iterations, the keyboard travel has been getting shorter and shorter, to the point where the travel on the latest keyboards is pretty tiny. It's pretty easy to see that Apple is headed towards a future in which the keyboard has no mechanical keys, but is rather some sort of iPad-like surface, perhaps with haptic feedback, but with no keys in the traditional sense of the term. (The Force Touch trackpad and the new Touch Bar are perhaps harbingers of this move.)

What’s interesting is how Apple is making this transition. With the other transitions, Apple just pulls the plug on a tech (Firewire, I barely knew thee), leading to squealing by a small but not insignificant number of very visible angry users, modestly annoyed shrugs from everyone else, and a Swiss-Army-knife-like conglomeration of old projector adapters wherever I go give presentations. With this keyboard transition, though, the transition has been far more gradual—and the pundit class has consequently been far more muted. Instead of the usual “Apple treats their users with utter contempt!” “Apple is doomed by their arrogance!” and so forth, the response is more like “huh, weird, but you’ll get used to it.” Perhaps this reflects more the fact that there’s no way to “transition” to a new port interface (there is no port equivalent to “reduced key travel”, although perhaps microUSB qualifies), but still.

Why might Apple be doing this? There are three possibilities I can think of. First, one formal possibility is that there could be some convenience or cost benefit to Apple in doing this, like reduced component cost or whatever. This strikes me as unlikely for a number of reasons, not least of which being that it is almost certainly a pain in the butt to keep designing new keyboards. Another possibility is that there is some tradeoff, most obviously with thickness: clearly, having shorter travel will let you make a thinner computer. While this is a likely scenario, and perhaps the most likely, there are some reasons to question this explanation. For instance, why do the keyboards on the desktop Macs (remember those?) also have shorter key travel now? One could say that it's to maintain parity with laptops, but then again, anyone suffering through desktop Macs these days knows that parity isn't exactly the name of Apple's game—frankly, the keyboard is just about the only thing that got updated on the iMacs in the last several years. Which leads to the third possibility: that having a non-mechanical keyboard (essentially a big iPad) down there would enable new interfaces and so forth. Hmm. Well, either way, I think we'll find out soon.

Thursday, January 5, 2017

Why care about the Dow? Why not?

Just listened to this Planet Money podcast all about hating on the Dow Jones Industrial Average. Gist of it: the Dow calculates its index in a weird (and most certainly nonsensical) way, and is an anachronism that must die. They also say that no market "professional" (quotes added by me) ever talks about the Dow, and that measures like the S&P 500 and the Wilshire 5000 are far more sensible.

This strikes me as a criticism that distracts from the real issue, which is whether one should be using any stock market indicator as an indicator of anything. Sure, the Dow is "wrong" and the S&P 500 is more "right" in that it weights by market cap. Whatever. Take a look at this:


Pretty sure this relationship goes back further as well, but Wolfram Alpha only goes back 5 years and I've already wasted too much time on this. Clearly, also, short-term fluctuations are VERY strongly correlated—here's the correlation with the S&P 500 in terms of fluctuations:


So I think the onus is on the critics to show that whatever differences there are between the S&P and the Dow are meaningful for predicting something about the economy. Good luck with that.
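If you want to check the fluctuation correlation yourself, here's a minimal sketch of how one might compute it; note that the CSV file name and column names below are placeholders I made up, not a real data source:

```python
import pandas as pd

# Daily closes for the two indices; "index_closes.csv" and its columns
# ("date", "dow", "sp500") are hypothetical stand-ins for whatever data you have.
prices = pd.read_csv("index_closes.csv", parse_dates=["date"], index_col="date")

# Correlate day-to-day percentage changes rather than the raw index levels.
returns = prices[["dow", "sp500"]].pct_change().dropna()
print(returns.corr())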

Of course, as an academic, far be it from me to decry the importance of doing something the right way, even if it has no practical benefit :). That said, in the podcast, they make fun of how the Dow talks about its long historical dataset as an asset, one that outweighs its somewhat silly mode of computation. This strikes me as a bit unfair. Given the very strong correlation between the Dow and S&P 500, this long track record is a HUGE asset, allowing one to make historical inferences way back in time (again, to the extent that any of this stuff has meaning anyway).

I think there are some lessons here for science. I think that it is of course important to calculate the right metric, e.g. TPM vs. FPKM. But let's not lose sight of the fact that ultimately, we want these metrics to reflect meaning. If the correspondence between a new "right" metric and an older, flawed one is very strong, then there's no a priori reason to disqualify results calculated with older metrics, especially if those differences don't change any *scientific* conclusions. Perhaps that's obvious, but I feel like I see this sort of thing a lot.
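As a footnote on the TPM vs. FPKM example, here's a minimal sketch (with made-up counts and gene lengths) of why, within a single sample, the two metrics are just rescalings of each other and therefore perfectly correlated; the differences only start to matter when comparing across samples:

```python
import numpy as np

# Made-up counts for four genes in one sample, with gene lengths in kilobases.
counts = np.array([100.0, 500.0, 2000.0, 400.0])
lengths_kb = np.array([1.0, 2.5, 10.0, 4.0])

# FPKM: normalize by total reads (per million), then by gene length (per kb).
fpkm = counts / (counts.sum() / 1e6) / lengths_kb

# TPM: normalize by gene length first, then rescale so the sample sums to a million.
rpk = counts / lengths_kb
tpm = rpk / rpk.sum() * 1e6

# Within one sample, TPM is just FPKM rescaled by a constant, so they agree perfectly.
print(np.corrcoef(np.log(fpkm), np.log(tpm))[0, 1])  # -> 1.0 (up to floating point)
```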

Saturday, November 5, 2016

On bar graphs, buying guides and avoiding the tyranny of choice

Ah, the curse of the internet! Once upon a time, we would be satisfied just to get an okay taco in NYC. Now, unless you get the VERY best anything as rated by the internet, you’re stuck feeling like this:

Same goes for everything from chef’s knives to backpacks to whatever it is (I recommend The Sweethome as an excellent site with buying guides for tons of products). Funnily enough, I think we have ended up with this problem for the same reason that people whine on about bar graphs: because we fail to show the data points underlying the summary statistic. Take a look at these examples from this paper:


For most buying guides, they usually just report the max (rather than the mean, as in most scientific bar graphs), but the problem is the same. The max is most useful when your distribution looks like this:

However, reporting the max is a far less useful statistic when your distribution looks like this or this:




What I mean by all this is that when we read an online shopping guide, we assume that the top pick is WAY better than all the other options—a classic case of the outlier distribution I showed first. (This is why we feel like assholes for getting the second best anything.) But for many things, the best-scoring item is not all that much better than the second best. Or maybe even the third best. Like this morning, when I was thinking of getting a toilet brush and instinctively went to look up a review. Perhaps some toilet brushes are better than others. Maybe there are some with a fatal flaw that means you really shouldn't buy them. But I'm guessing that most toilet brushes are basically just fine. Of course, that doesn't prevent The Sweethome from providing me a guide for the best toilet brush: great, I'm deeply appreciative. But if I just go to the local store and get a toilet brush, I'm probably not all that far off. Which is to say that the distribution of "scores" for toilet brushes is probably closely packed and not particularly differentiated—there is no outlier toilet brush.
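To make that concrete, here's a toy simulation of review scores under the two kinds of distributions (all numbers invented for illustration): in the outlier scenario the gap between the best and second-best product is large, while in the closely packed scenario it's negligible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "review scores" for 20 products under two scenarios.
outlier = np.concatenate([rng.normal(6.0, 0.5, 19), [9.5]])  # one product is way better
packed = rng.normal(8.0, 0.3, 20)                            # everything is about the same

for name, scores in [("outlier", outlier), ("closely packed", packed)]:
    best, second = np.sort(scores)[::-1][:2]
    print(f"{name}: best = {best:.1f}, second best = {second:.1f}, gap = {best - second:.2f}")
```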

While there may be cases where there is truly a clear outlier (like the early days of the iPod or Google (remember AltaVista?)), I venture to say that the distribution of goodness most of the time is probably bimodal. Some products are good and roughly equivalent, some are duds. Often the duds will have some particular characteristic to avoid, like when The Sweethome says this about toilet brushes:
We were quick to dismiss toilet brushes whose holders were entirely closed, or had no holders at all. In the latter category, that meant eliminating the swab-style Fuller brush, a $3 mop, and a very cheap wire-ring brush.
I think this sort of information should be at the top of the page, so your buying guide could say: "Pretty much all decent toilet brushes are similar, but be sure to get one with an open holder. And spend around $5-10."

Then again, when you read these guides, it often seems like there's no rational option other than their top choice, portraying it as far and away the best based on their extensive testing. But that's mostly because they've just spent like 79 hours with toilet brushes, are probably magnifying subtle distinctions invisible to the majority of people, and have already long since discarded all the duds. It's like they did this:


Now this is not to say those smaller distinctions don’t matter, and by all means get the best one, but let’s not kill ourselves trying to get the very best everything. After all, do those differences really matter for the few hours you’re likely to spend with a toilet brush over your entire lifetime? (And how valuable was the time you spent on the decision itself?)

All of this reminds me of a trip I took to New York City to hang out with my brother a few months back. New York is the world capital of “Oh, don't bother with these, I know the best place to get toilet brushes”, and my brother is no exception. Which is actually pretty awesome—we had a great time checking out some amazing eats across town. But then, at the end, I saw a Haagen Dazs and was like "Oh, let's get a coffee milkshake!". My brother said "Oh, no, I know this incredible milkshake place, we should go there." To which I said, "You ever had a coffee milkshake from Haagen Dazs? It's actually pretty damn good." And good it was.

Wednesday, April 6, 2016

The hierarchy of academic escapism

Work to escape from the chaos of home. 

Conference travel to escape from the chaos of work.

Laptop in hotel room to escape from the chaos of the poster session.

Email to escape the tedium of reviewing papers.

Netflix to escape the tedium of email.

Sleep to escape the tedium of Sherlock Season 3.

And then it was Tuesday.

Thursday, June 25, 2015

Biking in a world of self-driving cars will be awesome

While I was biking home the other day, I had a thought: this ride would be so much safer if all these cars were Google cars. I think it’s fair to say that most bikers have had some sort of a run-in with a car at some point in their cycling lives, and the asymmetry of the situation makes it very dangerous for bikers. Thing is, we can (and should) try to raise bike awareness in drivers, but the fact is that bikes can often come out of nowhere and in places that drivers don’t expect, and it’s just hard for drivers to keep track of all these possibilities. Whether it’s “fair” or “right” or not is beside the point: when I’m biking around, I just assume every driver I meet is going to do something stupid. It’s not about being right, it’s about staying alive.

But with self-driving cars? All those sensors mean that the car would be aware of bikers coming from all angles. I think this would result in a huge increase in biker safety. I think it would also greatly increase ridership. I know a lot of people who at least say they would ride around a lot more if it weren't for their fear of getting hit by a car. It would be great to get all those people on the road.

Two further thoughts: self-driving car manufacturers, if you are reading this, please come up with some sort of idea for what to do about getting “doored” (when someone opens a door in the bike lane). Perhaps some sort of warning, like “vehicle approaching”? Not just bikes, actually–would be good to avoid cars getting doored (or taking off the door) as well.

Another thing I wonder about is whether bike couriers and other very aggressive bikers will take advantage of cautious and safe self-driving cars to completely disregard traffic rules. I myself would never do that :), but I could imagine it becoming a problem.

Sunday, April 12, 2015

Why is everything broken? Thoughts from the perspective of methods development

I don't know when this "[something you don't like] is broken" thing became a... thing, but it's definitely now a... thing. I have no real idea, but I'm guessing maybe it started with the design police (e.g., this video), then spread to software engineering, and now there are apparently 18 million things you can look at on Google about how academia is broken. Why are so many things seemingly broken? I think the answer in many cases is that this is the natural steady state in the evolution of design.

To begin with, though, it's worth mentioning that some stuff is just broken because somebody did something stupid or careless, like putting the on/off switch somewhere you might hit it by accident. Or putting the "Change objectives" button on a microscope right next to other controls so that you might hit it accidentally while fumbling around in the dark (looking at you, Nikon!). Easy fodder for the design police. Fine, let's all have a laugh, then fix it.

I think a more interesting reason why many things are apparently broken is because that's in some ways the equilibrium solution. Let me explain with a couple examples. One of the most (rightly) ridiculed examples of bad design is the current state of the remote control:


Here's a particularly funny example of a smart home remote:
Yes, you can both turn on your fountain and source from FTP with this remote.

Millions of buttons of unknown function, hard to use, bad design, blah blah. But I view this not as a failure of the remote, but rather a sign of its enormous success. The remote control was initially a huge design win. It allowed you to control your TV from far away so that you didn't have to run around all the time just to change the channel. And in the beginning, it was just basically channel up/down, volume up/down and on/off. A pretty simple and incredibly effective design if you ask me! The problem is that the remote was a victim of its own success: as designers realized the utility of the remote, they began to pile more and more functionality into it, often with less thought, and potentially pushing beyond what a remote was really inherently designed to do. It was the very success of the remote that made it ripe for so much variation and building-upon. It's precisely when the object itself becomes overburdened that the process stops and we settle into the current situation: a design that is "broken". If everything evolves until the process of improvement stops by virtue of the thing being broken, then practically by definition, almost everything should be broken.

Same in software development. Everyone knows that code should be clean and well engineered, and lots of very smart people work hard to make the smartest design decisions possible. Why, then, do things always get refactored? I think it's because any successfully designed object (in this case, say, a software framework) will rapidly get used by a huge number of people, often for things far beyond its original purpose. The point where progress stalls is again precisely when the framework's design is no longer suitable for its purpose. That's the "broken" steady state we will be stuck with, and ironically, the better the original design, the more people will use it and the more broken it will ultimately become. iTunes, the once transformative program for managing music that is now an unholy mess, is a fantastic example of this. Hence the need for continuous creative destruction.

I see this same dynamic in science all the time. Take the development of a new method. Typically, you start with something that works really robustly, then push as far as you can until the whole thing is held together with chewing gum and duct tape, then publish. Not all methods papers are like this, but many are, with a method that is an amazing tour de force... and completely useless to almost everyone outside of that one lab. My rule of thumb is that if you say "How did they do that?" when you read the paper, then you're going to say "Hmm, how are we gonna do that?" when you try to implement it in your own lab.

Take CRISPR as another example. What's really revolutionary about it is that it actually works, and works (relatively) easily, with labs adopting it quickly around the world. Hence the pretty much insane pace of development in this field. Already, though, we're getting to the point where there are massively parallel CRISPR screens and so forth, things that I couldn't really imagine doing in my own lab, at least not without a major investment of time and effort. After a while, the state of the art will be methods that are "broken" in the sense that they are too complex to use outside of the confines of the lab that invented them. Perhaps the truest measure of a method is how far it goes before getting to the point of being "broken". From this viewpoint, being "broken" should in some ways be a cause for celebration!

(Incidentally, one could argue that grant and paper review and maybe other parts of academia are broken for some of the same reasons.)

Saturday, April 11, 2015

Gregg Popovich studied astronomical engineering

I was just reading this SI.com piece about Gregg Popovich, legendary NBA coach of the San Antonio Spurs, and found this line to be really interesting:
By his senior year he was team captain and a scholar-athlete, still the wiseass but also a determined cadet who loaded up with tough courses, such as advanced calculus, analytical geometry, and engineering—astronomical, electrical and mechanical. [emphasis mine]

Now, I'm pretty sure they meant aeronautical engineering, but that got me wondering if there is such a thing as astronomical engineering. Well, Wikipedia says there is something called astroengineering, which is about the construction of huge (and purely theoretical) objects in space. I wonder if Pop is thinking about Dyson spheres during timeouts.

Saturday, April 4, 2015

Why not test your blood every quarter?

Lenny just pointed me to a little internet kerfuffle that emerged after Mark Cuban tweeted that it would be a good thing to run blood tests all the time. Here's what Cuban said:

Essentially, his point is that by using the much larger and better-controlled dataset that you could get from regular blood testing, you would be able to get a lot more information about your health and thus perhaps be able to take earlier action on emerging health issues. Sounds pretty reasonable to me. So I was surprised to see such a strong backlash from the medical community. The counter-argument seems to have a couple of main points:
  1. Mark Cuban is a loudmouth who somehow made billions of dollars and now talks about stuff he doesn’t know anything about.
  2. Just as whole body scans can lead to tons of unnecessary interventions for abnormalities that are ultimately benign, regular blood testing would lead to tons of additional tests and treatments that would be injurious to people.
  3. Performing blood tests on everyone is prohibitively expensive, so we’d end up with “elite” patients and non-elite patients.
I have to say that I find these counterarguments to be essentially anti-scientific. On the face of it, of course Cuban is right. I’ve always been struck by how unscientific medical measurements are. If we wanted to measure something in the lab, we would never be as haphazard and uncontrolled as people are in the clinic. There are of course good reasons why it’s more difficult to do in the clinic, but just because something is hard does not mean that it is fundamentally bad or useless.

I think this feeds into the most interesting aspect of the argument, namely whether it would lead to a huge increase in false positives and thus unnecessary treatment. Well, first off, doing a single measurement is hardly a good way to avoid false positives and negatives. Secondly, yes, in our current medical system, you might end up with more unnecessary treatment–with many noting that getting into the system is the surest way to end up less healthy. That is more of an indictment of the medical system than of Cuban’s suggestion. Sure, it would require a lot more research to fully understand what to do with this data. But without the data, that research cannot happen. And having more information is practically by definition a better way to make decisions than less information, end of story. To argue otherwise sounds a lot like sticking your head in the sand. I'm also not so sure that doctors wouldn't be able to make wise judgements based on even n=1 data without extensive background work. Take a look at Mike Snyder's Narcissome paper (little Nature feature as well). He was able to see the early onset of Type II diabetes and make changes to stave off its effects. Of course, he had a big team of talented people combing over his data. But with time and computational power, I think everyone would have access to the interpretation. What's sad is for people to not have the data.

This leads to another interesting point, from the medical research standpoint. If it were really rich people making up the primary dataset, I don't think that's a bad thing. Medicine has a pretty long history of doing testing primarily on non-elite patients, after all.

Friday, November 7, 2014

My water heater is 100% efficient (in the winter)

Just had a thought while taking a shower the other day. These days, there's lots of effort to rate appliances by their efficiency. But it occurs to me that inefficiency leads to heat, and if you are heating your home, then you are basically using all that "wasted" energy. So even if some of the gas used for our water heater doesn't actually heat the water, as long as it's in the basement and the heat travels upward, that heat is not going to waste. So the effective efficiency of the appliance is actually higher than its rating. Conversely, in summer, if you use the air conditioner, the opposite is true. I guess the overall efficiency would depend on your mix of heating and cooling.
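Here's the back-of-the-envelope version of that argument as a tiny calculation; the 80% nominal efficiency and the assumption that all of the waste heat ends up warming the living space are made-up illustrative numbers:

```python
# Toy calculation of "effective" winter efficiency (illustrative numbers only).
nominal_efficiency = 0.80      # assume the heater turns 80% of the gas energy into hot water
waste_fraction = 1 - nominal_efficiency
waste_heat_recovered = 1.0     # winter assumption: all waste heat rises into the living space

effective = nominal_efficiency + waste_fraction * waste_heat_recovered
print(f"Effective winter efficiency: {effective:.0%}")  # -> 100%
```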

I was also thinking about this a while ago when I installed a bunch of LED lightbulbs. Although they use much less energy, they are producing much less heat to warm up the house. I mentioned this to Gautham, and he pointed out that using electricity to heat your house may be considerably less efficient than, say, natural gas, and so that means it's not 100% efficient, relatively speaking. Still, it's better than what one would naively expect.

Of course, the best thing about LED lightbulbs is not so much the electricity or cost savings (which are pretty modest, frankly), but the fact that they don't burn out. If you have a bunch of 50W halogen spotlights, you know what I mean. By the way, just got a "TorchStar UL-listed 110V 5W GU10 LED Bulb - 2700K Warm White LED Spotlight - 320 Lumen 36 Degree Beam Angle GU10 Base for Home" from Amazon, and it looks great (better than the other one I got from Amazon for sure).

Thursday, August 14, 2014

An argument as to why the great filter may be behind us

A little while back, I read a great piece on the internet about the Fermi Paradox and the possibility of other life in our galaxy (blogged about it here). To quickly summarize: there are tons of Earth-like planets out there in our galaxy, so a pretty fair number of them likely have the potential to harbor life. If we are just one amongst the multitudes, then some civilizations must have formed hundreds of millions or billions of years ago. Now, there's a credible argument to be made that a civilization that is a few hundred million years more advanced than we are should have developed into a "Type III" civilization that has colonized the entire galaxy (this gets into the somewhat spooky concept of the von Neumann probe). The question, then, is why we haven't actually met any aliens in a galaxy that seemingly should be teeming with life.

There are two general answers. One is that life is out there, but we just haven't detected it yet, and that online piece does a good job of going through all the possible reasons why we might not yet have detected any life out there. But the other possibility, and the one that I think is frankly a bit more plausible, is that there aren't any Type III civilizations out there. Yet. Will we be the first? That's what this piece by Nick Bostrom is all about. The idea is that somewhere in the history of a Type III civilization is an event known as the great filter. This is some event during the course of a civilization's development that is exceptionally rare, thus providing a great filter between the large number of potential life-producing worlds out there and the complete and utter radio silence of the galaxy as we know it. What are candidates for the great filter? Well, the development of life itself is one. Maybe the transition from prokaryotic to eukaryotic life. Or maybe all civilizations are doomed to destroy themselves. So in many ways, the existential question facing humanity is whether this great filter is behind us (yay!) or ahead of us (uh-oh!). One fun point that Nick Bostrom makes is that it's a good thing we haven't yet found life on Mars. If we did find life on Mars, then that would mean that the formation of life is not particularly rare, meaning that it cannot be a great filter event. The more complex the life that we found on Mars, the worse it would be, because that would eliminate an ever greater number of potential great filter candidates behind us, meaning that the great filter is more likely ahead of us. Ouch! So don't be rooting for life on Mars. But while the presence of life on Mars would likely indicate that the great filter is ahead of us, the absence of such life doesn't say anything, and certainly doesn't prove that the great filter is behind us. Hmm.

So for a while, I thought this was a classic optimist/pessimist divide: if you're an optimist, you believe the filter is behind us; if a pessimist, ahead of us. But I think there's actually a rational argument to be made that it's behind us. Why? Well, I think there are two possible categories of great filter events ahead of us. One is the destruction of all life by outside forces. These could be asteroid impacts, gamma ray bursts, etc. Bostrom makes a good argument against these being great filters, because a great filter has to be something that is almost vanishingly rare to get past. So even if only 1 in 1000 civilizations made it past these asteroids and bursts and whatever, it's still not a great filter, given the enormous number of potentially life-sustaining planets out there. The other category of filter events (which is in some ways more depressing) is those that basically say that intelligent life is inherently self-destructive, along the lines of "we cause global warming and kill ourselves", or global thermonuclear war, etc. This is the pessimist's line of argument.

Here's a statistical counterargument in favor of the filter being behind us, or at least against the self-destruct scenario. Suppose that civilizations are inherently self-destructive and that the filter event is ahead of us. Then I would argue that we should see the remnants of previous civilizations on our planet. The idea is that as long as a civilization's self-destruction doesn't cause the complete and total annihilation of our planet (which I think is unlikely; more on that in a bit), then conditions would be favorable for life to evolve again until it hits the filter again. And again. And again. If that cycle could repeat many times over Earth's remaining history, then statistically speaking, it would be very unlikely for us, right now, to be the very first in the series of civilizations. Possible, but unlikely.

Now, this argument relies on the notion that whatever these potential future filter events are, they don’t prevent the re-evolution of intelligent life. I think this is likely to be the case. Virtually every such candidate we can think of would probably destroy us, maybe even most life, but it’s hard to imagine them killing off all life on earth, permanently. Global warming? It’s been hot in the earth’s past, with much higher levels of CO2, and life thrived, probably would again. Nuclear war or extreme pollution? Might take a billion or two more years, but eventually, intelligent cockroaches would be wandering the earth in our place. Remember, it doesn’t have to happen overnight. I think there are very few self-destruct scenarios that would lead to COMPLETE destruction–all I can think of are events that literally destroy the planet, like making a black hole that eats up the solar system or something like that. I feel like those are unlikely.

So where does that leave us? Well, I can think of two possibilities. One is that we are not destined for self-destruction, but that the “filter” event is one that just prevents us from colonizing the galaxy. Given our current technological trajectory, I don’t think this is the case. Thank god! Stasis would just feel so… ordinary. The other much more fun possibility is that we are the first ones past the great filter, and we’re going to colonize the galaxy! Awesome! Incidentally, I’m an optimist and an (unofficial) singularitarian. So keep that in mind as well.

So what was the great filter, if it really is behind us? I personally feel like the strongest candidate is the development of eukaryotic life (basically, the development of cells with nuclei). You can get some sense for how rare something is by seeing how long it took to happen, given that conditions aren’t changing. This is hard, because conditions are always changing, but still. Take the development of life itself. Maybe a couple hundred million years? That’s a long time, but not that long, and moreover, conditions on early Earth were changing a lot, so it could be that it didn’t take very long at all once the conditions were right. But eukaryotic life? Something like 1.5-2 billion years! Now that’s a long time, no matter how cosmic your timescale. And the “favorable conditions” issue doesn’t really apply: presumably the conditions favorable to eukaryotic life aren’t really any different than for prokaryotic life, since it's just different rearrangements of the same basic stuff. So prokaryotic life just sat around for billions of years until the right set of cells bumped into each other to make eukaryotic life. Seems like a good candidate for a great filter to me.

Incidentally, one of the things I like about thinking about this stuff is how it puts life on earth in perspective. Given all the conflicts in the news these days, I can’t help but wonder that if we all thought more about our place in the universe, maybe we’d stop fighting with each other so much. We should all be working to better humanity and become a Type III civilization! The wisdom of a fool, I suppose...

Saturday, July 5, 2014

The Fermi Paradox

I think almost every scientist has thought at one point or another about the possibility of extraterrestrial life. What I didn't appreciate was just how much thought some folks have put into the matter! I found that this little article summarized the various possibilities amazingly well. Reading it really gave me the willies, alternately filling me with joyous wonder, existential angst and primal dread.

One cool concept is that of the "Great Filter" that weeds out civilizations (explaining why we don't see them). Is this filter ahead or behind us? Hopefully behind, right? Better hope there's NOT life on Mars:
This is why Oxford University philosopher Nick Bostrom says that “no news is good news.” The discovery of even simple life on Mars would be devastating, because it would cut out a number of potential Great Filters behind us. And if we were to find fossilized complex life on Mars, Bostrom says “it would be by far the worst news ever printed on a newspaper cover,” because it would mean The Great Filter is almost definitely ahead of us—ultimately dooming the species. Bostrom believes that when it comes to The Fermi Paradox, “the silence of the night sky is golden.”

Tuesday, June 3, 2014

Penn’s patent policy is crummy for inventors

So Penn has decided to invest heavily in “innovation”, whatever that means. I think part of what it means is transferring technologies developed in academic labs at Penn to the outside world, which is of course a good thing all around. Of course, the devil’s in the details. And Penn’s details are devilish indeed!

On their fancy new website, Penn says “[Penn’s patent policy] provides a means for Inventors to receive a generous share of any income that is derived from the licensing of inventions they create as Penn employees…” Which is funny, because frankly, of all the places I’ve been, it is by far the least generous towards inventors.

For those of you who are not particularly familiar with patents and licensing at universities, here’s how it usually works. Basically, the university owns everything you invent. You own nothing. Fair or not, that’s the way it works. If you have something that you think the outside world would find useful (and is patentable), then you go to the university’s technology transfer office. Naively, the idea is that these folks will decide on whether your work is patentable and worth patenting. They then shop this "intellectual property” to various companies/startups to try and strike a licensing deal. In practice, the issue is that most of these patents just end up sitting on the shelf and never get licensed by a company. It turns out to be very hard to just have a patent sit there and somehow find a home. I think that commercialization works best if the professor herself is actively engaged in trying to develop commercial interest in her technologies, either by making a startup or getting an existing company interested–relying on someone else to make those connections in a largely anonymous fashion seems like playing some very low odds.

The issue is that all these unlicensed patents still cost money to prosecute, and somebody's got to pay for it. And that's where Penn's system gets particularly bad. Typically, the breakdown is such that the inventors get around 1/3 of the money from licensing (the rest goes to various factions at Penn). Some places it's a bit more (I've seen 35%) and some a bit less (Penn is stingy at 30%). But then come all these legal costs, and here Penn does something very strange. They basically take all the legal costs and charge them primarily against everybody's personal share. These legal costs typically amount to a whopping 25% of the total inventor's share, so effectively, you're getting just 22.5%! And, more importantly, this applies even if you have already gotten the licensee to pay all the patent expenses! That means that even though the company you are working with is paying all the legal expenses, you are still paying out of your own pocket for everyone else's patenting costs. In other words, the winners pay for the losers, and they pay a lot.

At other places, they will charge non-legal operating expenses out of the initial pot of licensing income, and then have you pay back the legal expenses on your own particular patent out of the royalties first. This way, if your licensee pays your legal expenses, that directly benefits you. This makes some sort of sense, I suppose–I’m still not sure why inventors get so little, but at least it more directly rewards those who actually manage to get their work out there. Note that somebody still has to pay for the unlicensed patents, but at least those costs usually get split between the university’s share and the inventor’s share, which lightens the load (especially since the university share is much bigger!).
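To see how much the accounting scheme matters, here's a toy comparison; aside from the 30%, 35%, and 25% figures quoted above, all of the numbers (license income, operating costs, per-patent legal bill) are invented for illustration:

```python
# Toy comparison of inventor take-home under the two schemes described above.
# Only the 30%, 35%, and 25% figures come from the post; everything else is made up.
license_income = 100_000.0

# Penn-style: inventors get 30% of licensing income, then ~25% of that share
# is clawed back to cover legal costs across the whole patent portfolio.
penn_take_home = license_income * 0.30 * (1 - 0.25)               # -> $22,500

# Other-style: operating costs and the legal costs of *your own* patent come
# off the top of the royalties first, and inventors get 35% of what's left.
operating_costs = license_income * 0.05                           # assumed
own_patent_legal = 10_000.0                                       # assumed; ~0 if the licensee pays
other_take_home = (license_income - operating_costs - own_patent_legal) * 0.35   # -> $29,750

print(f"Penn-style take-home:  ${penn_take_home:,.0f}")
print(f"Other-style take-home: ${other_take_home:,.0f}")
```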

Anyway, look, realistically, nobody’s getting into academia to get rich. That said, commercialization is the best way to have a real impact in the world–that’s just a fact. Why shouldn’t we as scientists benefit somewhat from the ideas that we work so hard to cultivate? And why should Penn scientists benefit considerably less than those at other universities?

Thursday, May 22, 2014

The case for paying off your mortgage

Just reading NYtimes.com, which had an article about whether buying or renting is better. They're saying that now that the housing market is perhaps over-inflating again (seriously, people, what the heck?), it's often better to rent than to buy. Overall, I would have to agree with that in most cases, not just by the raw numbers, but because people always underestimate the costs of maintaining a home (which are absurd–costs like $2K just to cut down a tree, etc.), and because people don't factor in the time and hassles associated with home maintenance as well. To the latter point, I think homes are best thought of as a hobby for the home improvement weekend warrior. Not my kind of thing.

But let's say you have kids, and, sigh, you own a house. A common bit of financial wisdom is that, especially when mortgage rates are low, you should take out a big mortgage and pay it off slowly, because you can invest that money and make more with it. Roughly, the argument is that if your mortgage costs you 4% interest and investing nets you a 5% return, well, you could be making 1%. There are some problems with this thinking. First off, one of the common arguments for it is that you save a lot on taxes because mortgage interest is deductible, so the effective cost is actually lower. True. But what every calculator I've seen fails to take into account is that you're only saving the difference between the itemized deduction and your standard deduction. If you have a family, which is probably often the case for people who own a house, then your standard deduction is pretty sizeable. Let's say that your mortgage interest costs you $15K per year, but your standard deduction is $12K. Then the real savings on your taxable income from the mortgage is just $3K, NOT the $15K that most calculators use. If your tax rate is 25%, then this is $750–not chump change, but not really a game changer in the grand scheme of things, and much less than the $3,750 you would calculate without thinking about the standard deduction. (It is also true that this becomes less of a factor the bigger your mortgage is, another reason why the mortgage interest deduction is such a regressive policy.) The other thing is that your investments are taxed, which many calculators also don't take into account.
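Here's the standard-deduction point as a tiny calculation, using the numbers above; this is a sketch, not tax advice, and it assumes you have no other itemized deductions:

```python
# Tax savings from mortgage interest: naive vs. accounting for the standard deduction.
mortgage_interest = 15_000.0
standard_deduction = 12_000.0   # assumes no other itemized deductions
tax_rate = 0.25

naive_savings = mortgage_interest * tax_rate                                  # -> $3,750
real_savings = max(mortgage_interest - standard_deduction, 0.0) * tax_rate    # -> $750

print(f"Naive estimate: ${naive_savings:,.0f}; actual benefit: ${real_savings:,.0f}")
```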

The other problem with considering investment return is that the actual return is, of course, unknown. On average, your investment will probably go up, but it's highly dependent on the details, even in a 30 year horizon. Look, if you gave some Wall Street-type a GUARANTEED rate of return of, say, 4-5%, you can bet your life they would invest in it (and, in fact, they do). What's the interest rate on your savings account? Or even your CD? Almost always less than 1%. So the banks are basically saying that they can only offer you a guaranteed return of well under 1%. What accounts for this spread? One factor is that little market inefficiency called, you know, bankers' salaries. The other is the fact that not everyone pays off their mortgage, so there is some risk that the bank takes on. But if you believe in your own ability to pay off your mortgage, then it's a guaranteed return: every dollar you put in will earn you roughly 4-5% per year without fail.

Also, for most of us, sitting around and calculating this stuff all day is just about the least interesting thing we could do with our time. As Gautham says, peace of mind is the only thing worth anything in this world. Most readers of this blog probably choose to push themselves out of their comfort zone by pursuing science, and have precious little mental energy to waste worrying about other stuff. So whatever, just sign up for autopay and forget about it. Better yet, just move into an apartment.

Wednesday, April 9, 2014

Terminator 1 and 2 were the first great comic book movies

Just watched Terminator 1 again–how awesome! Not quite as good as Terminator 2, which is probably one of the top action movies of all time, but still great, maybe top 10-20. As I was watching it, I was thinking that a lot of what made the movie so appealing is the character of an unstoppable super man (or in this case, robot). Much better as a bad guy than as a good guy, because the unstoppable good guy is boring (see: Superman). Isn't this the prototype for all the modern day comic book movies? One of the things that makes comic book movies exciting is the epic battles between the comic book characters, both doing incredible things, and waiting to see who breaks first. Terminator 2 is still amongst the best (if not the best) in this regard. Another cool thing is that the Terminator movies did this with much worse special effects than we have today, especially Terminator 1, which looks prehistoric. Practically expected claymation sometimes. But it's still awesome. Compelling movie action is more about engendering fear, suspense and relief than just special effects. Still, Terminator 2 would just not have been as awesome without the (for its time) unprecedented special effects, which have aged remarkably well.

NB: Yes, I realize that the original Superman movies came out before T1. But they just weren't as good. And that's a fact. You know it, too.