Friday, December 30, 2016

Last post ever on postdoc pay

Original post, first follow up, this post

Short intro: I wrote a post saying I didn't like how some folks were (seemingly) bragging on the internet about how high they pay their postdocs, got a lot of responses, and wrote a follow-up with some ideas about how postdocs and PIs could approach the subject of pay. That was meant to deal with short-term practical consequences. Here, I wanted to highlight some of the responses I got about aspects of postdoc pay that have to do with policy, likely with no surprises to anyone who's thought about this for more than a few minutes. Again, no answers here, mostly just reporting what I heard. So sorry, the first part of the post is probably kind of boring. At the end, I'll talk about some things I learned about discussing this sort of thing on the internet.

First off, though, again, for the record, I support paying postdocs well and support the increased minimum. A minimum starting salary of $48K (however inadvertently that number was reached) seems like a reasonable one to enforce across the US. Based on what, I dunno, honestly. I just think we need a flat national minimum: it would be hard/weird for NIH to set it by cost of living across the US, but at the same time, relying on institutions to set their own wage scales is ripe for abuse. More on that later.

Anyway, it is clear that one of the top concerns about postdoc pay was child care. No surprise there, postdoc time often coincides with baby time, and having kids is expensive, period. One can get into debates about whether one's personal life choices should figure into how much pay someone "deserves", but considering that the future of the human race requires kids, I personally think it's a thing we absolutely must be considering. There are no easy answers here, though. Igor Ulitsky summed it up nicely:

I think Igor is absolutely right, an institutional child care subsidy is really the only way to do it. The problem otherwise is that the costs are so high for childcare that just paying everyone enough for childcare regardless of family status would quickly bankrupt most PIs' grants. But just paying more based on "need" has a lot of flaws. I think it was telling that at least some trainees said that they wouldn't begrudge their coworker with a kid if the PI paid them more. Well, what if your coworker had parents who lived with them? Or parents who could live with them? Or a spouse who earned a lot of money? Or was home from work often because of the kid? And how much extra should they be paid? Enough for "cadillac" child care? Bare minimum child care? I just don't think it's reasonable or wise for PIs to be making these decisions. If, on the other hand, the institution stepped in to make this a priority (as both my postdocs have argued), then this would solve a lot of problems. They could either provide a voucher applicable to local daycares or provide daycare itself at a heavily subsidized rate (I think Penn does provide a subsidy, but it's not much). This is, of course, a huge expense for institutions to take on, and I'm sure they won't do it willingly, but perhaps it's time to have that discussion. Anecdotally, I think there really has been a change—before, many academics would wait until getting a faculty position (maybe even tenure) before having kids, whereas now, many academics come into the faculty position with kids. I think this is good and important especially for women, and I think it's pushing this particular issue for postdocs into the foreground.

The other big issue folks brought up was diversity. Low wages mean that those without means face a pretty steep price for staying in science, potentially forcing them out, as this commenter points out from personal experience. I think this is a real problem, and again, no real answer here. I'm not convinced, however, that the postdoc level is where that gap typically emerges—I'm guessing that it's mostly at the decision to go to graduate school in the first place. (The many confounders likely make such analyses difficult to interpret, though I don't know much about it.) Which is in some ways perhaps a bit surprising, since unlike medical/law/business school, you actually get paid to do a PhD (although I believe most analyses still suggest that you could earn more overall by just getting a job straight away, maybe depending on the field). Also, higher pay would mean fewer postdoc positions, making the top ones more competitive, thus potentially further hurting the chances for those facing bias, although my guess is that this latter concern would not outweigh the former on diversity.

Along these lines is the notion of opportunity cost, with at least a few people (typically computational) noting that the postdocs they want to hire can earn so much on the open market that if they didn't pay them a lot, it would be hard to get them. At the same time, interestingly, a couple trainees invoked the ideals of the free market, saying that people should be paid whatever they can earn. Hmm. Well, I think this gets into the question of what the cost of doing science is. Scientists at all stages (from trainees to PIs) probably earn less on average than we could in private industry, with that differential varying by field and circumstance—that is the price for doing what we love. The obvious question is whether this sets up a system primed for abuse. There are some who are willing to work like a dog for next to nothing for the chance to keep doing science. For this reason, there has to be a reasonable minimum to ensure at least some degree of diversity in the talent pool. Beyond that, I personally have no problem with people paying above the minimum if they so choose (and institutional policies that prevent that strike me as pretty unfair and something to fight against). If this helps keep talented people in science, great!

The notion of a free-market approach to pay is an interesting one, one that led me to the following question about the cost of doing science. Let's say that I had a ton of money. Is there some amount of money I could pay to get a postdoc that I otherwise would lose to some big name PI? Like, let's say I paid my postdoc $1M per year. Well, I'd probably be getting a lot of top quality postdoc applications (although still probably not even close to all). But what about $100K? How much would that factor into someone's decision to do a postdoc with me? I venture to say that the answer is not much. How little would someone be willing to accept for the opportunity to work with a big name who could greatly aid their quest for a faculty job? All I can say is I'm glad there's a minimum. :)

I also learned a bit about online discussions on this topic. As I said in my first post, I was super reluctant to discuss this topic at all online, given the opportunity for misunderstanding and so forth. And sure enough, I got some of what I thought were unfairly accusatory responses. Which, of course, is something that I was guilty of myself (and I apologize to MacArthur for that). Hmm. I still stand by, sort of, my point that the original tweet from MacArthur came across in a way that was perceived by many as boastful, even if that was not his intent, and that that may not be the most productive way to start a discussion. That said, I also have to acknowledge that waiting for the "perfect" way to discuss the issue means waiting forever, and in the meantime, just saying something, anything, publicly can have an effect. Clearly the collective tweets, posts and responses on the topic (most are imperfect, though I particularly like this one from Titus Brown) are having the desired effect of engendering a discussion, which is good. And, as a practical matter, I'm hopeful that airing some of the institutional differences in postdoc pay may help both trainees and mentors (see some examples in my second post). It is clear that there's a lot of mystery shrouding the topic, both for trainees and PIs alike, and a little sunlight is a good thing.

All that said, I still think that in addition to online rants of various kinds, with an issue this complex, it's pretty important for us all to talk with each other face to face as well. After all, we're all on the same team here. Academia is a small world, and while it's important to disagree, personal attacks generally serve nobody… and we might as well be transparent about who we're disagreeing with so they can disagree back:

(In my defense, the only reason I "subtweeted" is that I really didn't want to call MacArthur out personally because his was just the latest tweet out of many of this kind I had seen. And I suppose it worked in that many people I know who read the post indeed had no idea who I was referring to. But giving him the chance to respond is probably on balance the right thing to do.)

Anyway, while I have not met MacArthur in person, I'm guessing we'll probably cross paths at some point, at which point my main concern is that we'll discover we agree on many things and so I won't have anything else to write about… :)

Wednesday, December 7, 2016

Some less reluctant(ish) follow up thoughts on postdoc pay

(Original post, this post, second follow up)

Well, looks like that last post incited some discussion! tl;dr from that post: I wrote that I found tweeting about how high you pay your postdocs above what most other labs pay to be off-putting. There are many factors that go into pay, and I personally don't think talking about how much you yourself pay is a productive way to discuss the important issue of postdoc pay in general. Even if the intent is not to boast, it certainly comes across as boastful to a number of people, which turns them off from the conversation. To be clear, I also said that I support paying postdocs well and support the increased minimum. It's the perceived boast, not the intent, that I have issue with.

So I learned a LOT from the feedback! Lots of comments, fair number of tweets (and these things called "subtweets"; yay internet!) and several personal e-mails and messages—more on all that in a later post; suffice it to say there's a "diversity of opinion". Anyway, okay, I said that I didn't like this particular way of bringing about discussion about postdoc pay. But at the same time, I do think it's a good thing to discuss, and discuss openly. Alright, so it's easy for me to criticize others about their tweets or whatever, but what, then, do I think is a good way to discuss things? Something I've been thinking about, and so I want to write a couple posts with some ideas and thoughts.

Overall, I think there are two somewhat separate issues at play. One is the immediate, practical issue of how to increase awareness of the problems people have and bring about some better outcomes in the near-term. The other is long-term policy goals and values that I will bring up in a later post (with relatively few ideas on what specifically to do, sorry).

So, to the first point, one of the things I learned is how surprisingly mysterious the subject of postdoc pay is, both to prospective postdocs and to PIs alike. Morals and high-minded policy discussions aside, it seems like many just don't know some basic practical matters that can have a real impact. Anyway, here are a few relatively off-the-cuff suggestions of things to think about based on what I've heard, and feel free to add to the list.

First, for potential postdocs, the main thing to do is to remember that while science should in my opinion be the primary factor in choosing a postdoc, pay is another important factor and one you should definitely not shy away from, awkward though it may seem. I think advocacy begins here, on a practical level, by advocating for yourself. Keeping in mind that I haven't hired that many postdocs and I'm not sure how some of these ideas might hold up in practice, here is some information and some ideas for trainees on how to approach pay:
  • Ask about pay relatively early on, perhaps once there's real interest on both sides, during or maybe better after a visit (dunno on that). It may be uncomfortable, but at least make sure that it's clear that it's on your radar as a thing to discuss. Doesn't mean that you have to come to a hard number right away, but signal that it's worth talking about.
  • Before having such a discussion, it's worth thinking about what number seems fair to you. There is the NIH minimum, and then there's your life situation and location and so forth. You are an adult with a PhD, so take stock of what you think you need to be happy and productive, and don't be afraid of saying so. What can help with this is to think about what you might otherwise make outside of academia, or what the average cost of living is in your area, or your particular personal situation, or whatever other factors, and come up with a number. Having some rationale for your number, whatever it may be, is important to help you maintain fortitude when you do discuss pay and not feel like you're being impudent. Remember that the PI probably finds this awkward as well, and so having guidance can actually help both parties! And if you're a decent candidate, you may have a surprising amount of bargaining power. At the same time, remember that the PI may have their own expectations for the discussion (which may include not having the conversation!), and so you may catch them a bit off guard, depending.
  • Some basic orientation about pay: the major national guideline comes from the NIH. The NIH sets a *minimum for fellowship* pay. This used to be ~$42K a year for a starting postdoc, and then there was some labor ruling that caused that to increase to ~$48K a year. Institutions often follow this NIH guidance to set up their pay guidelines. This ruling got overturned recently, and so now some institutions have gone back to $42K starting, while some others have not. These are the national guidelines for a baseline. Clearly, some places in the country are going to be more expensive than others.
  • This is the NIH guidance on the minimum. At some places, yes, you can definitely be paid more than the minimum (apparently, many trainees didn't know that). At some places, there are institutional rules that prevent PIs from paying more than the minimum or some other defined number or range. At some places, there are institutional rules that require PIs to pay above the minimum. If the PI has flexibility, they may have their own internal lab policy on pay, including a "performance raise" if you get a fellowship. And it's also possible that the PI just doesn't have any clue about any of this and just goes along with what HR tells them. At the same time, keep in mind that the PI does manage a team with existing players, and they must manage issues of fairness as well. Anyway, point is ask, do not ever assume.
  • Some points of reference. Many (most?) postdocs work for the NIH minimum (which of course does not mean you should or should not, necessarily). Stanford institutionally starts at $50K. As mentioned last time, some folks pay $60K (Tweet was from Daniel MacArthur, who has asked that I not subtweet, sorry). Right or wrong, clearly some PIs take issue with this. I've heard of some fellowships that went up north of $80K. I think that $80K is probably considered by most to be a pretty eye-poppingly high salary for a postdoc, but dunno, I'm old now. Computational work often pays more than straight biology because a lot of those folks could make so much in industry that it's harder to attract them for less (maybe $10K+ premium?). Math often pays higher than biology because postdocs are considered sort of like junior faculty. Physics often pays better as well, perhaps dependent on whether you have some named fellowship. Anyway, you have an advanced degree, do some homework. I think it makes sense to be sure your number reflects your self-assessed worth but is within reasonable norms, however you choose to define "reasonable".
  • As in any negotiation, there may be back and forth. As this happens, you may have areas in which you are flexible, and maybe the PI is flexible. It is also possible that the PI is unable or unwilling to bend on pay. At that point, it is up to you to make the decision about whether that sacrifice is worth it for you. There are of course further policy discussions that must happen in this regard, but for now, this is what you are faced with, and it's your decision to make.
  • It is possible that PIs may not even know all the options for pay. Sometimes, there is some institutional inertia on "how they do things" that everyone just goes along with. This can be hard to find out until you get there and figure out who to ask, though.
  • There are often some hidden costs, and it's worth considering what those may be in your case. These can include things like out of pocket payments for health insurance (including family), gym memberships, and various other benefits. Note that sometimes these costs can vary depending on your official position at the institution, which in turn can change depending on whether you have a fellowship or whatever (sometimes, a fellowship reduces your status, thus costing you more for many things, ironically). There may be some sort of child care benefit or something, or at least access to the university daycare. And there may be some commuting benefits, in case that's relevant. Some places are able to cover moving costs if the PI wishes.
  • There are a host of issues for foreign postdocs, and someone more knowledgeable than I should probably write about them, but some costs I've seen are visa costs (sometimes paid by institution, sometimes not, very confusing), and also travel costs associated with yearly return visits to the home country for visa purposes. These return visits, by the way, may be avoidable with longer contracts, which may or may not be available, which was something I just learned recently myself.
  • For a lot of the above hidden costs, the PI may not even realize that these sorts of things are going on, and they may be willing to help. There is a possibility that they can cover some of these costs, depending on institutional rules, or maybe it can be a rationale to negotiate a higher salary.
Here are some thoughts for PIs, probably mostly for junior people (among whom I still count myself, though I'm probably just kidding myself). Most of these I'm just kind of making up on the spot, being a relatively inexperienced postdoc-hirer myself:
  • It took me a while to learn all the intricacies of what constitutes pay. What are the pay scales? What can I pay for? Moving costs? Commuting costs? Benefits? I still don't think I fully understand all of this, but I wish I had a better understanding when I started. When I started, it was like "you can hire a postdoc, here you go."
  • I'm still not fully clear on all the hidden costs to my people and what benefits they get, and I should really brush up on that, potentially making a plain English document for new lab members.
  • At the institutional level, it took me a while to disentangle what is actual policy on things like pay vs. what is just "the way we have always done it". Breaking these unofficial rules gave me some flexibility to do good things for my people.
  • I am thinking of developing a coherent lab policy on pay, explicitly stating what I will and will not consider when figuring out overall pay level, relative pay between people, etc. I haven't really worried about it so far, and that's been fine, but having something like that would really help. I guess that's sort of obvious, so maybe I'm just sort of late to this bit of common sense. Am I alone in that?
  • I think in the course of coming up with such a policy on pay, I'll probably think about exactly what my values are, what these kids' opportunity costs are, and how much I think is reasonable to live on in Philly. I mean, I kinda do this already, but haven't really thought about it very seriously, and periodic reexamination seems appropriate.
  • I'm not entirely sure I would share this policy within the lab, though. Thing is, everyone's circumstances are different, and exceptions are frankly pretty much the rule. I think the point is just to have some sort of internal guidance so that at least you won't forget about anything when deliberating.
  • I'm wondering whether and to what extent it's worth discussing lab cost management with the people in your lab so that they see how the sausage gets made. I had one trainee who was surprised to find out (not from me, but rather from Penn HR) exactly how much their pay actually counted against a grant once all the benefits and so forth were added in. There is an argument to be made (that I've mostly subscribed to) that postdocs should just focus on their work and not worry about the lab bills. There's another argument to be made that sharing such information gives people a sense of the true costs of running a lab for training purposes. Then again, it's a fine line between being informative and passive-aggressive. Dunno on this one.
Anyway, who knows if this will help anything, but consider this my contribution to the discussion for now. While it certainly won't solve all the problems, given the surprising lack of knowledge out there, perhaps this information will be of some use. More in another post later on policy things that came up, as well as how to talk about these things on the internet.

Saturday, December 3, 2016

Some (reluctant) thoughts on postdoc pay

Update 12/7/2016: (first follow up here, second follow up here)

I have generally steered well clear of the issue of postdoc pay, which engenders pretty heated conversations that I'm SO not interested in getting into publicly, but one thing I'm seeing is really bugging me these days: people bragging on Twitter about how much they pay their postdocs above the NIH minimum. Like this:


I don't mean to single these folks out—it just happened that I saw these tweets most recently—but I've seen a few such statements over the last year or so since the announcement that the minimum for salaried workers would be increased to ~$48K or so (which was just recently reversed).

Why is this irritating? Well, first of all, in this funding climate, and given many labs that have to make many tough choices, it does strike me as a bit arrogant to talk about how much more you can afford to pay than many, many other very well-intentioned scientists. The implication is that people who don't pay as much as you do are paying an abusively low amount, which is I think an unfair charge. For these reasons (and maybe a few others), I just don't think it's really appropriate to publicly talk about how much you pay your people. For the record, I support paying postdocs well, and I think the increase is overall a good idea. My point here will be that there is not an obvious default "right" position on the issue of postdoc pay, and I think it is far more complex than just saying "We should pay postdocs a decent wage."

Indeed, I think the key difficulty is pinning down exactly what we mean by the notion of "decent wage". For instance, in the first tweet above, the PI is from Cambridge/Boston, and the second is from NYC. Now, the proposed federal regulation for starting postdocs is (was) $47,484, and that would apply everywhere. Including, say, Ann Arbor, Michigan (which I choose for no particular reason other than it's home to a major, world-class research institution, but in a relatively affordable location). Now, comparing the cost of living of any two places is tricky, but I found this estimate that Boston is roughly 1.4x as pricey as Ann Arbor (which sounds about right). Bragging about paying $60K? Well, shouldn't that be $66K? Live in Cambridge MA instead? No better, $76K. So let's stop crowing about how "decently" the Broad Institute pays, okay?
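To make the arithmetic explicit, here's a quick sketch of that adjustment in Python. The multipliers are just the rough cost-of-living estimates mentioned above (Boston ~1.4x Ann Arbor, Cambridge ~1.6x), so treat the outputs as ballpark figures, not policy numbers:

```python
# Scale the proposed federal minimum for starting postdocs by rough
# cost-of-living multipliers (Ann Arbor as the 1.0 baseline).
NIH_MINIMUM = 47_484

col_multipliers = {
    "Ann Arbor, MI": 1.0,
    "Boston, MA": 1.4,
    "Cambridge, MA": 1.6,
}

for city, mult in col_multipliers.items():
    equivalent = NIH_MINIMUM * mult
    print(f"{city}: ${equivalent:,.0f} for equivalent buying power")
```

Run as-is, this spits out roughly $47K, $66K, and $76K, which is where the numbers in the paragraph above come from.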

So, is $60K "fair"? Hmm. From the PI perspective: a Boston PI could say, well my dollars don't go as far, so in a way, doesn't the Michigan PI have an unfair advantage? Then again, the Michigan PI could say hey, why do I have to pay more (relatively speaking) for my postdocs? Why does the Boston PI not have to pay the same effective wages I do? Why should they be free to pay effectively less rather than being held to an enforced effective minimum?

The motivation of PIs may also matter here as well. The focus in the discussion has been on PIs taking advantage of cheap labor, and that definitely happens. But some PIs may define their mission as training as many scientists as possible, which certainly seems reasonable to me, at least from one point of view. (And I do wonder how often those who brag about paying so much above the minimum have actually had to make the tough choice of turning away a talented postdoc candidate due to constrained funding.)

From the NIH perspective: what is the goal? To get as much science as "efficiently" as possible? To train people? To create a stable scientific workforce? Or to better human health? Should the NIH even allow people in high cost of living areas to pay their postdocs more? Would it be fair to consider this pay scale in grant review, just as other areas of budgets are scrutinized? Does increasing the minimum penalize those who pay the minimum in non-Boston/SF locations unfairly, thus increasing inequity? Or does it provide a general boost for those places, now making them more attractive because their NIH minimum dollars go further? Should the NIH scale the size of grant by cost of living in the area of the host institution? To what extent should the NIH support diversity of locations, anyway?

From the trainee perspective: It's pretty easy for trainees to say that whatever they're paid right now is not fair (though you might be surprised how little many assistant professors make). So for trainees reading this post, let me ask: what would be fair? Okay, maybe now you have a number in your head. Where does that number come from? Is it based on need? Consider: should a postdoc who has a family be paid more? Wait a minute, what about the postdoc without a family? What about immigrants with expensive visa costs? Or potentially families to support in their home country? Moving costs? Commuting costs? Should postdocs be paid more when the institution is in an expensive city? Should postdocs be forced to live further away from the institute to seek more affordable housing? My point is that there is no clear line between necessity and luxury, and wherever that blurry line does get drawn will be highly dependent on a trainee's circumstances and choices.

Or should that number be based on performance? Should the postdoc entering the lab with a flashy paper or two be paid more than the one without? Should a postdoc get a raise every time they publish a paper, scaled by how important the paper is? How many grants it generates? I think it's reasonable to assume that such an environment would be toxic within a lab, but wouldn't the same be true of pay based on personal circumstance, as just discussed above? And isn't such performance-based pay already what's sort of happening at a more global level in flush institutes where PIs can get enough grants to pay well above the minimum?

As you have probably noticed, this post has way more question marks than periods, and I don't claim to know the answers to any of these questions. I have thoughts, like everyone else, and I'm happy to talk about them in person, where nuance and human connection tend to breed more consensus than discord. My point is that reducing all this to a single number is sort of ridiculous, but that's how it works, and so that's what we all have to start from, along with various institutional prerogatives. In the meantime, given how simplistic it is to reduce this discussion to a single number, can we please stop with the public postdoc pay-shaming?

Sunday, November 20, 2016

Anti-Asian bias in science

Scientists are a cloyingly liberal bunch. In the wake of this (horrifying) election, seems like every other science Tweet I saw was like
To all my Inuit friends and colleagues: I stand with you. Against fear. Against hate.
Lovely sentiments, for sure, and as a non-white person living in the Philly suburbs at this frightening time, that is welcome. (Although I do wonder who would actually step up if something really went down. Would I? Would I even stand up for myself?)

At the same time, beneath this moralistic veneer, it is of course impossible to deny that there is tons of discrimination and bias in science. Virtually any objective look at the numbers shows that women and under-represented minorities face hurdles that I most definitely have not, and these numbers are backed up with the personal stories we have all heard that are truly appalling. But there is, I think, another less widely-acknowledged or discussed form of discrimination in science, which is discrimination targeted towards Asian scientists.

Asians make up a relatively small (though rapidly growing) portion of the US population. In science, however, they're highly over-represented, making up a large fraction of the scientific workforce. And with that comes a strange situation: a group that is clearly not a small minority, and that is doing well in this highly regarded and respected area, and yet clearly faces bias and discrimination in a number of ways, many of which may be different from those that other minorities face.

First off, what do I mean by Asian? I'm guessing I'm not the only one who feels like I'm checking the "miscellaneous box" when I'm faced with one of these forms and choose "Asian":

I mean, there's a billion Indians and a billion Chinese people EACH out there (not to mention 10s to 100s of millions of other Asian groups), but whatever. Point is, Asians are a diverse group, and I think these different groups face some common and some distinct forms of discrimination. Aside from the various distinctions by ethnic category, there are also distinct forms of bias directed towards Asians coming from abroad as opposed to Asian-Americans. I think all Asians face some measure of discrimination, and in particular, those of East Asian (and within that, Chinese) origin face some of the biggest obstacles.

(I could be completely wrong about this, but I do feel like East Asian scientists face more barriers than South Asians for whatever reason. Part of this may be a matter of numbers: there are simply fewer South Asians in science to begin with. And certainly South Asians from abroad run into trouble, especially a generation ago. That said, as an Indian-American I don't personally feel like I've been on the short end of the stick for racial reasons. Then again, who knows what I'm not hearing, know what I mean? Indeed, I think it's specifically because I'm not Chinese that I've seen mostly anti-Chinese bias, which is what I'll focus on here.)

Exactly what are these barriers? After all, don't the stereotypes of Chinese in the US typically involve words like "diligent", "hard working", "good at math"? Well, I think it's important to realize that it is these very words that implicitly provide an upper limit on what Chinese scientists can aspire to in academia. Consider the following statement I heard from someone (I can't exactly remember the context) that went something like "Oh, they'll just hire a bunch of Chinese postdocs for that, I'm sure." As in "do what they're told", "just labor", "interchangeable", "quiet". Are such sentiments that far from "not independent-minded" or "lacking vision"?

You'd think that these stereotypes may have faded in recent years, and I think that is true to some extent. Then again, take a look at this well-meaning guide from a university in Germany for Chinese/German relationships called "When a Chinese PhD student meets a German supervisor", written by a couple of Chinese PhD students in Germany. I think it actually has a lot of useful things in there, and it would be disingenuous to say that there are no meaningful cultural differences, especially for a foreign student coming to Germany. At the same time, I found some aspects of the guide worrisome:
Through constant discussions, Ming gradually learned when he should obey his supervisor and when he should argue. Ming’s supervisor was very happy when he noticed that the way Ming approached his work had changed and therefore said, “German universities train PhD students to think independently and critically.”
There it is: implicitly, Chinese students don't think independently or critically without extensive German retraining.

And check out this one:
PhD students in Germany are not just students, they often are also researchers and employees at universities. On the one hand, they need to finish their scientific projects independently; on the other hand, they have to teach courses that are assigned by the university or their research groups and they have to do daily organizational work as well. All these tasks require professional qualities. In each research group, every member performs his or her duties according to their contracts.

At the beginning of his PhD, Ming had no plan or agenda at all when he talked to his supervisor, which resulted in aimless and inefficient discussions. After being reminded by the supervisor, Ming began to write agendas for their discussions, but they were always extensive instead of being brief, which made it a laborious task for the supervisor to read. Then the supervisor taught Ming to use bullet points, i.e., to list every question or issue that needs to be discussed with a word or a short phrase.
Right… because I've never had non-Asian students who had these problems with "professional qualities".

I mean, I think this guide is addressing some real concerns and is probably very helpful (check out the part where they describe how to sort garbage like the locals—sounds like someone had a traumatic experience leading to that particular section). But there are long-term consequences to reinforcing stereotypes of lack of independence, lack of communication skills and the like. Notice how these stereotypes are all about the qualities people think are required for getting to the next level in academia?

Also, this stereotyping is not the only form of bias and racism that Chinese people face in science. Indeed, because the number of Chinese people in science is so large, they must constantly be vigilant about accusations of favoritism and reverse bias. This can come out in particularly nasty ways. For instance, I recently went to a major conference and had a chat with a rather well-known colleague after a meal. As is standard, we spent some time complaining about annoying reviewers, and all of a sudden, my colleague said "And I just KNOW this reviewer is Chinese." The venom with which the word "Chinese" came out of their mouth really took me by surprise, but I'm betting I'm not the only one who's heard that sort of thing, and more than once. Just imagine hearing this kind of talk about any other racial or ethnic group.

In that environment, is it surprising that it is hard for Asian scientists to break through to higher levels in academia? It seems to me that Asians form an under-over-represented class in science: they are a big part of making the scientific enterprise run, but have plenty of extra hurdles to jump through to get to the next level, with bias working against them on precisely those extra, conveniently unquantifiable qualities deemed necessary to get, say, a faculty position. My father is an academic, and was pretty sure that he faced racism earlier in his career, though it's hard to pinpoint exactly where and how. I had a recent conversation with a Chinese colleague who told me the exact same thing: he knows it's harder for him for a number of reasons, but it's just so hard to prove. It is the soft nature of this bias that makes it so pernicious, which is of course true for other groups as well, but I feel like we don't think about it as much for Asians because they are so visibly over-represented, so we think "What's the problem?".

All this is not to say that there's been no progress. For instance, at the very conference where my colleague lamented their allegedly Chinese reviewer, I noted just how many of the best and brightest PIs in attendance were Asian, including a large number of Chinese and Chinese-American scientists. Indeed, I just visited a university where my hosts were extremely successful Asian scientists, and they were so warm and welcoming, inviting me to dinner at their home together with a few other Asian scientists, all of whom I really admire and respect. At those times, I think the vision of an inclusive, open-minded scientific community feels not just possible, but within reach.

At the same time, I think recent events have shown that these changes do not come for free. It is a cliché, but it is true that we must all fight for these changes and stand against fear and against hate, etc., etc. Great, that's all fine and well, and I'm all for it. At the same time, I think it's important to acknowledge that when it comes down to it, social pressures often make it hard to say something in the moment when these situations arise. Looking back at my own experiences, I think I am not alone in saying that I have more regrets about lost opportunities to do or say the right thing than proud moments of actually standing up to what I thought was wrong. Just saying "we should stand up to bias and discrimination" is very different from providing a blueprint for how to do so.

As such, all moral grandstanding aside, I think there is an interesting question facing us Asians now as a group. Thus far, I feel that Asian scientists have relied on the goodwill of non-Asians to advocate for us, push our careers, make a place for us in science—and to the many, many wonderful scientists who have supported Asians, including myself, a sincere thank you. But it's important to realize that this means, essentially, succeeding on other people's terms. Those terms have generally been favorable to Asian scientists (and non-scientists) so far, but are there limits to Asian success in that model? Do we need to start asserting our rights more aggressively and in a more organized fashion? A postdoc in my lab, Uschi, has vigorously spoken out for postdoc rights here at Penn, and guess what: it makes a difference. I would imagine that advocating for Asian scientists could result in similar benefits. Should this be part of a larger effort to assert Asian rights on a national stage? After all, while relying on the benevolence of kind-hearted non-Asian scientists has worked okay so far in our little science bubble, if we think that general nerdiness and funny accents are going to save us in Gen Pop, well, take a look at what's going on in the aftermath of this election. Maybe it will require concerted, coordinated advocacy to change the policies and bias that make things difficult for the foreigners, Asian and otherwise, that science in this country relies on.

Gotta say, I felt very weird writing this last paragraph. Does this come across as shrill and ungrateful? Why am I rocking the boat? Making a mountain out of a molehill? Shouldn't we just keep our heads down and focus on our work? These are questions I asked myself as I wrote this as a person who has done well in the system and doesn't really have that much to complain about. But maybe that's just me "being Asian"?

PS: Here's another snippet from the German guide for Chinese students:
The third surprise was that on the same day Ming arrived in Konstanz, the research group threw a welcome party for him and all the group members showed up. At that party, Ming got to know everybody. Besides, there was a discussion about picking a German name for Ming. Based on the group members’ opinions and Ming’s agreement, he was finally named Felix, which indicates optimism and therefore matches his character. From then on, he has had a German name. The thoughtful and warm welcome from his research group touched Ming and he was looking forward to the cooperation with his research group.
Okay, whatever else happens, can we at least agree to stop this forced renaming business?

[Update, 11/20: Apparently, the word Eskimo is now considered derogatory; changed to Inuit, no offense intended.]

Saturday, November 5, 2016

On bar graphs, buying guides and avoiding the tyranny of choice

Ah, the curse of the internet! Once upon a time, we would be satisfied just to get an okay taco in NYC. Now, unless you get the VERY best anything as rated by the internet, you’re stuck feeling like this:

Same goes for everything from chef’s knives to backpacks to whatever it is (I recommend The Sweethome as an excellent site with buying guides for tons of products). Funnily enough, I think we have ended up with this problem for the same reason that people whine on about bar graphs: because we fail to show the data points underlying the summary statistic. Take a look at these examples from this paper:


Most buying guides just report the max (rather than the mean, as in most scientific bar graphs), but the problem is the same. The max is most useful when your distribution looks like this:
However, reporting the max is far less useful a statistic when your distribution looks like this or this:




What I mean by all this is that when we read an online shopping guide, we assume that their top pick is WAY better than all the other options—a classic case of the outlier distribution I showed first. (This is why we feel like assholes for getting the second best anything.) But for many things, the best scoring item is not all that much better than the second best. Or maybe even the third best. Like this morning, when I was thinking of getting a toilet brush and instinctively went to look up a review. Perhaps some toilet brushes are better than others. Maybe there are some with a fatal flaw that means you really shouldn’t buy them. But I’m guessing that most toilet brushes are basically just fine. Of course, that doesn’t prevent The Sweethome from providing me a guide for the best toilet brush: great, deeply appreciative. But if I just go to the local store and get a toilet brush, I’m probably not all that far off. Which is to say that the distribution of “scores” for toilet brushes is probably closely packed and not particularly differentiated—there is no outlier toilet brush.
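To make this concrete, here's a minimal sketch (using made-up, simulated "goodness" scores, not real product data) of how the gap between the best and second-best item depends on the shape of the distribution:

```python
import random

random.seed(0)  # deterministic, for reproducibility

def gap_between_best_and_second(scores):
    """How much better is the top-scoring item than the runner-up?"""
    ranked = sorted(scores, reverse=True)
    return ranked[0] - ranked[1]

# Outlier case: one product is far ahead of the pack (think early iPod).
outlier_scores = [random.gauss(50, 5) for _ in range(20)] + [90]

# Packed case: most products are basically fine (think toilet brushes),
# plus a few duds at the bottom -- roughly the bimodal picture.
packed_scores = ([random.gauss(70, 2) for _ in range(15)] +
                 [random.gauss(30, 5) for _ in range(5)])

# Large gap: knowing the top pick really matters.
print(gap_between_best_and_second(outlier_scores))
# Small gap: the runner-up is nearly as good as the "best".
print(gap_between_best_and_second(packed_scores))
```

In the outlier case, reporting the max tells you something important; in the packed case, the top pick and the runner-up are nearly interchangeable, and the only actionable information is "avoid the duds."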

While there may be cases where there is truly a clear outlier (like the early days of the iPod or Google (remember AltaVista?)), I venture to say that the distribution of goodness most of the time is probably bimodal. Some products are good and roughly equivalent, some are duds. Often the duds will have some particular characteristic to avoid, like when The Sweethome says this about toilet brushes:
We were quick to dismiss toilet brushes whose holders were entirely closed, or had no holders at all. In the latter category, that meant eliminating the swab-style Fuller brush, a $3 mop, and a very cheap wire-ring brush.
I think this sort of information should be at the top of the page, so the buying guide could say “Pretty much all decent toilet brushes are similar, but be sure to get one with an open holder. And spend around $5-10.”

Then again, when you read these guides, it often seems that there’s no other rational option than their top choice, portraying it as being far and away the best based on their extensive testing. But that’s mostly because they’ve just spent like 79 hours with toilet brushes and are probably magnifying subtle distinctions invisible to the majority of people, and have already long since discarded all the duds. It’s like they did this:


Now this is not to say those smaller distinctions don’t matter, and by all means get the best one, but let’s not kill ourselves trying to get the very best everything. After all, do those differences really matter for the few hours you’re likely to spend with a toilet brush over your entire lifetime? (And how valuable was the time you spent on the decision itself?)

All of this reminds me of a trip I took to New York City to hang out with my brother a few months back. New York is the world capital of “Oh, don't bother with these, I know the best place to get toilet brushes”, and my brother is no exception. Which is actually pretty awesome—we had a great time checking out some amazing eats across town. But then, at the end, I saw a Haagen Dazs and was like "Oh, let's get a coffee milkshake!". My brother said "Oh, no, I know this incredible milkshake place, we should go there." To which I said, "You ever had a coffee milkshake from Haagen Dazs? It's actually pretty damn good." And good it was.

Tuesday, October 4, 2016

This one weird trick can help you think of a project (maybe)

-Caroline

I’ve had several PIs tell me during grad school, ‘Ideas are cheap.’ (This phrase usually comes up in conversations discussing the challenges of carrying a project to completion.) It’s certainly true that finishing projects requires creativity, innovation, thoughtfulness, and persistence. However, I think that really good ideas are extremely rare and precious.

I’ve spent a fair amount of time in grad school thinking about new avenues to pursue and new questions to answer, and I find it one of the most challenging parts of science. A really good idea for a project requires a number of attributes, including a well-defined question to answer, interest, novelty, technical feasibility, a ‘go-or-no-go’ point such that you can quickly decide if the project has merit, and so on.

Here are some things that have helped me try to identify interesting projects:
  1. Practicing thinking: One of my favorite books is ‘The Creative Habit’ by Twyla Tharp. Her thesis is that people do have innate creativity, but that everyone can improve with practice and effort. In any creative field, spending time every day reading works of your idols, and thinking critically about your work and the work of others can improve your ability to come up with good ideas. (Plus breaking out a classic Michael Elowitz or Marc Jenkins paper always makes me appreciate how cool science can be!)
  2. Reading papers: I find this essential. I try to read papers in my direct subfield, to keep track of what other scientists are thinking about and what interesting questions they raise, but also more far afield, to try to learn about new techniques, questions and experimental styles. Also, reading classic papers can be very important: too often we have an idea of what the field thinks is true, and it’s often illuminating and surprising to look at how ‘the field’ decided this. Often these conclusions may be more nuanced or not as firm as you might think, which can open up new avenues of inquiry! (Also you get to look at old-school stuff like gel-shift assays, which I feel strengthens my character. Takes like ten minutes every time to understand those.)
  3. Not reading papers: Arjun is a big fan of this one. I think there is a lot of truth to the idea that feeling beholden to the literature for everything can be paralyzing. Spending time trying to think critically about what questions are interesting (apart from where the field is) can allow you more originality and freedom.
  4. Changing location: I’m a big fan of wandering around campus (and Philadelphia in general) when I try to think of ideas. This has several benefits: first, no one can find you to ask you questions! I’m very easily distracted so this is essential for me. On a related note, in lab I always have a laundry list of tasks, like splitting cells and doing lab chores, so it’s easy to feel drawn into doing those instead of reading or thinking. Finally, I think that going new places might help give me freedom to think in non-habitual ways. 
  5. Lab coffee breaks: This is an important step that I use to test ideas (and hopefully it balances out the misanthropy implied by step 4!). I’m extremely lucky to have intelligent, helpful, and critical lab mates who are always willing to chat about ideas. Though sometimes this can be tough on the ego, often I haven’t considered all the sides of a potential idea until discussing it at Starb’s. Also those tiny vanilla scone things definitely inspire creativity.

I’d love to hear if anyone has other suggestions or habits when trying to identify new directions in science!

Saturday, October 1, 2016

The kinship of the arcane

Just had a great visit to University of Calgary, and one of the highlights was meeting with Jim McGhee. It was a feeling I’m sure many of you have had as well—finally meeting someone whose papers have been really influential in your life. In this case, it took me back first to my postdoc, when Jim’s work on C. elegans intestinal development formed much of the basis for work we did on incomplete penetrance. Many of his papers were amongst my most well-thumbed when we were working on that project, and were simply invaluable as we tried to piece our findings together. Then, my fledgling lab’s fate intertwined with Jim’s again when we (meaning Gautham and I) started asking questions about cell cycle and gene expression timing during intestinal development. Gautham is a true scholar, and as this project progressed, we were continually drawn to Edgar and McGhee, "DNA Synthesis and the Control of Embryonic Gene Expression in C. elegans", Cell 1988, a classic paper on this same question.

We would read that paper over and over, delving into each minor point in detail, all the while wondering who this “Lois Edgar” was, marveling at the skill and fortitude that must have been required to pull off those seemingly impossible experiments. To give you a sense of what was involved, the question was whether DNA replication was required for expression of certain genes during development (i.e., is cell cycle somehow a timing mechanism). To answer the question, Lois would take live embryos at precise stages, permeabilize their eggshells through mild smushing or remove them completely via pipetting, then add aphidicolin to inhibit DNA synthesis, and then trace those embryos over time to look for expression of gut-specific factors. These are most certainly NOT easy experiments to do, and I (think) I remember my friend John Murray telling us that McGhee said that only Lois Edgar could do those experiments. Gautham also found this really interesting self-profile by Lois in which she talks about her career, including going to graduate school at age 40 (!). She sounded like a very interesting person to both of us.

So I was sort of nervous to meet Jim. What would he be like? Gregarious and fun? Quiet and bitter? Well, unsurprisingly for a worm person, he was wonderful! We had a great time over dinner talking about various things, including, of course, Lois Edgar, and about the many hours she spent at the microscope watching worms. Also, he talked about how she was a very talented artist. Indeed, the next day, he showed me this lovely rendition of the worm she drew for him many years ago that only an artistic geneticist (or genetically-inclined artist?) could create:


Apparently she has now indeed followed through on her promise to retire from science and go back to art, and makes pottery at her home in Boulder. I actually go to Boulder fairly often, and I’m thinking I should try and meet up with her next time I’m out there.

Which, upon reflection, is a bit odd. After all, how strong can a connection be between two strangers linked by nothing but a model organism and some gene names? I think that’s actually part of the magic of science: at its purest, it’s a kinship, passed through generations, forged by a common interest in arcane details and minor subtleties that perhaps only three or four people in the world know, much less care about.

I also had the opportunity to meet two of Jim’s students, and I’m happy to report that they are doing some very cool science. It has been a while since I’ve thought about intestinal development in worms, but talking with them was like putting on an old pair of shoes—it was nice to talk about old friends like elt-2 and end-3. I could sense that the students enjoyed talking with someone who knew the details of the system they have devoted a fair fraction of their waking hours to studying. One of them had a marked-up copy of our C. elegans paper out, and I pointed out a minor detail about ectopic expression in hda-1 knockdown worms that is very easy to miss in the paper and might be relevant to their work. In the end, it is people who keep a field alive, speaking to each other imperfectly over the years through dusty pages.

Sunday, September 25, 2016

Some thoughts on how to structure a talk

As I will recount in a future blog post, I just went to a really fun conference at Cincinnati Children’s on systems biology. Especially cool was interacting with all the postdocs doing pretty amazing work in a variety of areas. More on that later.

As with most conferences, the talks were… mixed. Not the science, but the presentations themselves. Some were great, and some were sort of hard to follow. And some were really hard to follow. One thing I was struck by, however, is that there was far less correlation than you might think between how naturally vibrant someone is and how good their presentation was. This got me thinking: maybe part of the issue is that we always remember those presenters who are both super bubbly and super clear, and so everyone else just looks at that and says “well, they’ve just got it”, whatever “it” is, and gives up on improving. I, however, would contend that while there might be some correspondence between being "sparkly" and engagement/clarity, these are separable problems. And while being super sparkly may be a hard trait to manufacture, giving a clear and compelling talk is most certainly a skill. Indeed, I think that while it might be hard to give a spectacular talk based on skill alone, I think that almost anyone can give a great talk if they're willing to work at it and accept guidance. Being a cheerleader may or may not be part of your job as a scientist, but being able to clearly communicate your work most definitely is.

Now, there are plenty of opinions out there on how to give a talk, and I’ve given plenty already (see also this excellent website from David Stern). However, most of these tips focus primarily on the mechanics of giving a talk, devoting little attention to how to structure one. Like, they all give some variant of the following maxims:

Basic:
  1. Don’t use text in slides.
  2. Use color appropriately.
  3. Make sure all axes are labeled and graphics are legible.
  4. Remove all jargon.
  5. Don’t go over time.
Mid-level:
  1. Don’t use text in slides.
  2. Remove everything you thought was not jargon but actually is still jargon.
  3. Make the title of each sentence a complete sentence (or no title, but this takes more expertise).
  4. Remember that the slides are just props—you are the speaker.
  5. Identify your audience.
  6. Don’t use figures from papers.
  7. Break up multiple concepts into multiple slides.
  8. Avoid jokes unless you are actually funny. Even then, you should probably avoid jokes.
Some of these are common sense, some are obvious in hindsight. Many have some sort of principles underlying them, and I’ll leave it to you to find other websites with that information (and this great video from Susan McConnell).

Thing is, none of these rules are universal. Take, for example, “Don’t use text in slides”. I have seen multiple talks by very senior, famous PIs, and they had slides with a paragraph of writing on them that they literally just read out verbatim. And you know what? It worked! Why?

Because the structure of their talks was superb. Yet there is precious little guidance out there about how to structure your talk to make it compelling and convincing and, by proxy, clear. Anyway, here are some thoughts on structure. (Big thanks to Leor Weinberger, who turned me on to “Resonate” by Nancy Duarte, which I found very helpful.) Keep in mind this is just my opinion, but whatever.

The main thing with structuring your talk is to realize that you are telling a story. Stories are fundamentally different from papers, which have to be frontloaded as much as possible. Stories, by contrast, have a narrative arc. These arcs have a formula. Do not deviate from the formula! Here’s the formula as given by Pixar:
Once upon a time there was ___. Every day, ___. One day ___. Because of that, ___. Because of that, ___. Until finally ___.
Now let me translate that to science:
Once upon a time, there was a way to measure gene expression called RT-PCR. Every day, people would grind up a bunch of cells and measure the average expression across all the cells. One day, someone looked at expression in individual cells by measuring GFP levels cell by cell. Because of that, they saw that single cells could deviate wildly from the population average. Because of that, they developed further tools, showing that this variability was pervasive. Until finally, they were able to show that this variability had profound consequences for how cells function in both healthy and diseased organisms.
Now just make that into a deck of 50-100 slides and you have a departmental seminar. :)

Let’s deconstruct this a bit. Why does this narrative formula work so well? Because it establishes tension and contrast early, and allows one to come back to it often. In scientific terms, this basically means drawing a clear line between what the current thinking in the field is (the population average is all the information we need for gene expression) and an alternative that you are going to convince them of (individual cells can vary wildly). Duarte gives the example of Steve Jobs’ introduction of the iPhone. Look at the contrast he develops! What is now: flip phones, no way to do e-mail, no music—vs. what could be: a single device to do it all.

(Random aside: it’s actually really funny watching that keynote now to see the audience react wildly for the “iPod” feature, the “phone” feature, and then clap quietly and confusedly at the “revolutionary internet communicator”. If only they knew.)

Anyway, all this to say that it’s paramount to clearly state, in simple terms, what people think now, and then tantalize them with the promise of something new, something different. To build that tension, provide hints from the literature that support your new view, like it's hiding in plain sight. Look how effective this is in Star Wars. All the little Force tricks that Obi Wan uses make you want to know more. That's like saying "Hey, everyone has been looking at gene expression for ages, and if you look around, you can see all this variability in their data that they just didn't have the tools or inclination to quantify." It is these hints of what could be sprinkled in between your description of what is that gets your audience excited about your story and helps to highlight contrast. End the first act of your story (i.e., the introduction) with some sort of major conclusion or result that provides some meat to maintain this contrast. It is that contrast that will keep them interested during the second act.

What is that second act? Before getting to that, it’s important to realize that every good story has a hero (if you have negative results, an anti-hero). And who is that hero? Your audience. They are the ones who are on a journey, the journey from what everyone thought before towards what you are going to convince them of. To borrow from Duarte, your audience is Luke Skywalker. And that makes you the mentor—you’re Yoda. Your job is to lead the hero on this journey. Think about it, aren’t the most satisfying talks the ones where you think to yourself “Man, wouldn’t it then be cool if…” and then they show exactly that experiment on the very next slide? That’s because your mentor (the speaker) is doing a good job of shepherding you, the hero, on the path.

What does the hero do in the second act? Well, there are a couple of options here. One of my favorites from many martial arts movies is the training montage. The science equivalent of this is showing a bunch of further evidence to bolster your initial, cool result. Wait, what about the alternative isoforms? Nope, that doesn't explain it. What if you use an alternative method? Effect still shows up. This is what Duarte calls "resisting the call", like Luke Skywalker (i.e., your audience) resisting the call to action to use the force and blow up the Death Star. Here, your job is to persuade.

Another approach is to "fill out the story" here. This can take the form of a digression on a side point, or further analysis. Like: "So I showed you this really cool single cell data about cancer, but it's actually also interesting for these other reasons as well, let me show you." Key thing, though, is not to give away your turning point until closer to the end, where you transition to the third act.

The beginning of the end comes with the transition from either the training montage or the fill-in-the-story sequence, which are sort of the aftermath of that first big result, to thinking about the implications of those big results. This is the hero's turning point. Like: "So everything I showed you about single cell analysis in cancer would imply that the cells will die in this specific pattern. Does this happen?" This should lead to another major result, something to carry the ending. Your hero now has a final purpose.

Now, during the ending, it's important to come back to the beginning as well. Point out the initial state of the field again. The implication should be "See, this is how we were thinking before." In the best of situations, the contrast should be so stark that the original view should look positively quaint. This means that the hero (your audience) has transformed, and there's no going back. Then you've done your job.

This basic formula can work in long talks, short talks, any kind of talk. The only difference is how many details you leave in or leave out. The truth is that it takes time to learn this skill, and while there are some tips, there's no substitute for just carefully thinking about what you're trying to present and what works best to tell your story. I will give the following bit of advice, though. Your story has one thing more in common with a TV series rather than a movie, and that's that you can't assume people actually watched the whole thing. Even if they're sitting there, how often has something like this happened to you while sitting in the audience?
“Wow, those are some convincing results I never looked at regulatory DNA like that before and now my mind is brimming with all these possibilities hmm I wonder if SuperCuts is a pokestop that would be cool and also the FACS facility probably not oh well so what was this talk about again?”
You simply cannot assume that people listened to all or even most of your talk, and certainly not that they internalized important details. I remember someone I know giving a talk that started with some heavy quantitative framework, after which it was like "Okay, now that you've all got that, let's get into the results, all presented assuming you know this framework". That was not good. I'm directly in the field and knew some of the work beforehand, and even I had a hard time following. Things work best if you remind them of key concepts and results along the way. One nice tip (shamelessly stolen from Susan McConnell) is the idea of a talisman, some sort of visual aid that you come back to over and over to help orient your audience. For instance, if you have a framework with two competing models, show those models repeatedly, every 5-10 minutes, perhaps with variations as the story develops. Take that opportunity to reiterate the main concepts required to understand what comes next. This helps your audience reconnect with the central arc of your work.

Anyway, hope this guidance proves useful. I realize it's sort of abstract, but I've found that as my speaking skills have evolved, understanding these principles has proven even more important than all the various tips, tricks and opinions on how to construct slides. All that stuff is important, but just remember that all those rules are typically in service of the principles of clarity and engagement, and while rules are meant to be broken, you never want to compromise your principles!

Note: As I was writing this, I was definitely thinking about the standard 45-60 minute talk, which usually has at least two main results. In the case of a short talk, like 2-15 minutes, it may make more sense to shorten or eliminate the second act. Also, the transition to the third act may or may not require any new result, but I think some version of the "what does this new knowledge imply"/"contrast with the old" is still necessary.

Another note: These are lessons that take most people many years to learn, so I wouldn't expect immediate results. But the main thing is to keep trying to improve. Many never do.

Sunday, August 21, 2016

Response to New Atlantis piece about "Saving Science (from itself)"

So I came across this New Atlantis piece by Daniel Sarewitz, a long, rambling essay on how to save science. From what? From itself, apparently. And here's the solution:
To save the enterprise, scientists must come out of the lab and into the real world.
To expand upon this briefly, Sarewitz claims that many ills befall our current scientific enterprise, and that these ills all stem from this "lie" from Vannevar Bush:
Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.
The argument is that scientists, left to our own devices and untethered from practical applications (and unaccountable to the public), will drift aimlessly and never produce anything of merit to society. Moreover, the science itself will suffer from the careerism of scientists when divorced from "reality". Finally, he advocates that scientists, in order to avoid these ills, should be brought into direct relationship with outside influences. He makes his case using a set of stories touching on virtually every aspect of science today, from the "reproducibility crisis" to careerism to poor-quality clinical studies to complexity to big data to model organisms—indeed, it is hard to find an issue with science that he does not ascribe to the lack of scientists focusing on practical, technology-oriented research.

Here's my overall take. Yes, science has issues. Yes, there's plenty we can do to fix it. Yes, applied science is great. No, this article most definitely does not make a strong case for Sarewitz's prescription that we all do applied science while being held accountable to non-scientists.

Indeed, at a primary level, Sarewitz's essay suffers from the exact same problem that he says much modern science suffers from. At the heart of this is the distinction between science and "trans-science", the latter of which basically means "complex systems". Here's an example from the essay:
For Weinberg, who wanted to advance the case for civilian nuclear power, calculating the probability of a catastrophic nuclear reactor accident was a prime example of a trans-scientific problem. “Because the probability is so small, there is no practical possibility of determining this failure rate directly — i.e., by building, let us say, 1,000 reactors, operating them for 10,000 years and tabulating their operating histories.” Instead of science, we are left with a mélange of science, engineering, values, assumptions, and ideology. Thus, as Weinberg explains, trans-scientific debate “inevitably weaves back and forth across the boundary between what is and what is not known and knowable.” More than forty years — and three major reactor accidents — later, scientists and advocates, fully armed with data and research results, continue to debate the risks and promise of nuclear power.
I rather like this concept of trans-science, and there are many parts of science, especially biomedical science, in which belief and narrative play bigger roles than we would like. This is true, I think, in any study of complex systems—including, for example, the study of science itself! Sarewitz's essay is riddled with narratives and implicit beliefs overriding fact, connecting dots of his choosing to suit his particular thesis and ignoring evidence to the contrary.

Sarewitz supports his argument with the following:
  1. The model of support from the Department of Defense (DOD), which is strongly bound to outcomes, provides more tangible benefits.
  2. Cancer biology has largely failed to deliver cures for cancer.
  3. Patient advocates can play a role in pushing science forward by holding scientists accountable.
  4. A PhD student developed a low-cost diagnostic inspired by his experiences in the Peace Corps.
A full line-by-line rundown of the issues here would simply take more time than it's worth (indeed, I've already spent much more time on this than it's worth!), but in general, the major flaw in this piece is in attempting to draw clean narrative lines when the reality is a much murkier web of blind hope, false starts, and hard-won incremental truths. In particular, we as humans tend to ascribe progress to a few heroes in a three-act play, when the truth is that the groundwork of success is a rich network of connections with no end in sight. In fact, true successes are so rare and the network underlying them so complex that it's relatively easy to spin the reasons for their success in any way you want.

Let me give a few examples from the essay here. Given that I am most familiar with biomedical research (and that biomedical research seems to be Sarewitz's most prominent target), I'll stick with that.

First, Sarewitz spills much ink in extolling the virtues of the DOD results-based model. And sure, look, DOD clearly has an amazing track record of funding science projects that transform society—that much is not in dispute. (That is their explicit goal, and so it is perhaps unsurprising that they have many such prominent successes.) In the biomedical sciences, however, there is little evidence that the DOD style of research produces benefits. In the entire essay, there is exactly one example given, that of Herceptin:
DOD’s can-do approach, its enthusiasm about partnering with patient-advocates, and its dedication to solving the problem of breast cancer — rather than simply advancing our scientific understanding of the disease — won Visco over. And it didn’t take long for benefits to appear. During its first round of grantmaking in 1993–94, the program funded research on a new, biologically based targeted breast cancer therapy — a project that had already been turned down multiple times by NIH’s peer-review system because the conventional wisdom was that targeted therapies wouldn’t work. The DOD-funded studies led directly to the development of the drug Herceptin, one of the most important advances in breast cancer treatment in recent decades.
This is blatantly deceptive. I get that people love the "maverick", and the clear insinuation here is that DOD, together with patient advocates, played that role, upending all the status-quo eggheads at NIH to Get Real Results. Nice story, but false. A quick look at Genentech's Herceptin timeline shows that many of the key results were in place well before 1993—in fact, they started a clinical trial in 1992! Plus, look at the timeline more closely, and you will see many seminal, basic-science discoveries that laid the groundwork for Herceptin's eventual development. Were any of these discoveries made with a mandate from above to "Cure breast cancer by 1997 or bust"?

Overall, though, it is true that cancer treatment has not made remotely the progress we had hoped for. Why? Perhaps partly a lack of imagination, but I think it's also just a really hard problem. And I get that patient advocates are frustrated by the lack of progress. Sorry, but wishing for a cure isn't going to make it happen. In the end, progress in technical areas is going to require people with technical expertise. Sarewitz devotes much of his article to the efforts of Fran Visco, a lawyer who got breast cancer and became a patient advocate, demanding a seat at the table for granting decisions. Again, it makes a nice story for a lawyer with breast cancer to turn breast cancer research on its head. I ask: would she take legal advice from a cancer biologist? Probably not. Here's a passage about Visco:
It seemed to her that creativity was being stifled as researchers displayed “a lemming effect,” chasing abundant research dollars as they rushed from one hot but ultimately fruitless topic to another. “We got tired of seeing so many people build their careers around one gene or one protein,” she says. Visco has a scientist’s understanding of the extraordinary complexity of breast cancer and the difficulties of making progress toward a cure. But when it got to the point where NBCC had helped bring $2 billion to the DOD program, she started asking: “And what? And what is there to show? You want to do this science and what?”
“At some point,” Visco says, “you really have to save a life.”
There is some truth to the claim that scientists chase careers, fame, and fortune. So they are human, so what? Trust me, if I knew exactly how to cure cancer for real, I would definitely be doing it right now. It's not for a lack of desire. Sometimes that's just science—real, hard science. Money won't necessarily change that reality just because of the number of zeros behind the dollar sign.

Note: I have talked with patient advocates before, and many of them are incredibly smart and knowledgeable and can be invaluable in the search for cures. But I think it's a big and unfounded leap to say that they would know how best to steer the research enterprise.

Along those lines, I think it's unfair to judge the biomedical enterprise solely by cancer research. Cancer is in many ways an easy target: huge funding, limited (though non-negligible) practical impact, a fair amount of low-quality research (sorry, but it's true). But there are many examples of success in biomedical science as well, including in cancer. Consider HIV, which has been transformed from a death sentence into a more manageable disease. Or Gleevec. Or whatever. Many of these had no DOD involvement. And most of them relied on decades of blue-skies research in molecular biology. Sure, out-of-the-box ideas have trouble gaining traction—the reasons for that should be obvious to anyone. That said, even our current system tolerates these: now-fashionable ideas like immunotherapy for cancer did manage to subsist for decades even when nobody was interested.

Oh, and to the point about the PhD student's low-cost diagnostic: I of course wish him luck, but if I had a dollar for every press release on a low-cost diagnostic developed in the lab, I'd have, well, a lot of dollars. :) And seriously, there's lots of research going on in this and related areas, and certainly not all of it is from DOD-style entities. Again, I would hardly take this anecdote as a rationale for structurally changing the entire biomedical enterprise.

Anyway, to sum up, my point is that a fairer reading of the situation makes it clear that Sarewitz's arguments are essentially just opinion, with little if any concrete evidence to back up his assertion that curiosity-driven research is going to destroy science from within.

Epilogue:

OK, so having spent a couple hours writing this, I'm definitely wondering why I bothered spending the time. I think most scientists would already find most of Sarewitz's piece wrong for many of the same reasons I did, and I doubt I'll convince him or his editors of anything, given their responses to my tweets:


I'm not familiar with The New Atlantis, and I don't know if they are some sort of scientific Fox News equivalent or what. I definitely get the feeling that this is some sort of icky political agenda thing. Still, if anyone reads this, my hope is that it may play some role in helping those outside science realize that science is just as hard and messy as their lives and work are, but that we're working on it and trying the best we can. And most of us do so with integrity, humility, and with a real desire to advance humanity.

Update, 8/21/2016: Okay, now I'm feeling really stupid. The New Atlantis is indeed some sort of scientific Fox News: it's supported/published by the Ethics and Public Policy Center, which is clearly some conservative "think" tank. Sigh. Bait taken.

Monday, July 18, 2016

Honesty, integrity, academia, industry

[Note added 7/22/2016 below in response to comments]

Implicit in my last post about reputation in science was one major assumption: that honesty and integrity are important in academia. The reason I left this implicit is because it seems so utterly obvious to us in academia, given that the truth is in many ways our only real currency. In industry, there are many other forms of currency, including (but not limited to) actual currency. And thus, while we value truth first and foremost in academia, I think that in some areas of industry, even those perhaps closely related to academia, the truth is just one of many factors to weigh in their final analysis. This leads to what I consider to be some fairly disturbing decision making.

It’s sort of funny: many very talented scientists I know have left academia because they feel like in industry, you’re doing something that is real and that really matters, instead of just publishing obscure papers that nobody reads. And in the end, it's true: if you buy an iPhone, it either works or doesn’t work, and it’s not really a debatable point most of the time. And I think most CEOs of very successful companies (that actually make real things that work) are people with a lot of integrity. Indeed, one of the main questions in the Theranos story is how it could have gotten so far with a product that clearly had a lot of issues that they didn’t admit to. Is Theranos the rare anomaly? Or are there a lot more Elizabeth Holmeses out there, flying under the radar with a lower profile? Based on what I’ve heard, I’m guessing it’s the latter, and the very notion that industry cares about the bottom line of what works or doesn’t has a lot of holes in it.

Take the example of a small startup company looking for venture capital funding. Do the venture capitalists necessarily care about the truth of the product the company is selling or the integrity of the person selling it? Coming from academia, I thought this would be of paramount importance. However, from what I’ve been hearing, it turns out I was completely wrong. Take one case I’ve heard of, where (to paraphrase) someone I know was asked by venture capitalists at some big firm or another to comment on a founder they were considering funding. My acquaintance related some serious integrity issues with the founder to the venture capitalists. To which the venture people said something like “We hear what you’re saying. Thing is, I gotta say, a lot of people we look at make up their degrees and stuff like that. We just don’t really care.” A lot of people make up their degrees, and we just don’t really care. A number of other people I know have told me versions of the same thing: they call the venture capitalists (or the venture capitalists even call them), they raise their concerns, and the venture people just don’t want to hear it.

Let’s logic this out a bit. The question is why venture capitalists don’t care whether the people they fund are liars. Let’s take as a given that the venture capitalists are not idiots. One possible reason they may not care is that it’s not worth their time to find out whether someone has faked their credentials. Well, given that the funding is often in the millions and it probably takes an underling half a day with Google and a telephone to verify someone’s credentials, I think that’s unlikely to be the issue (plus, it seems that even when lies are brought to their attention, they just don’t care). So we are left with venture capitalists knowingly funding unscrupulous people. From here, there are a few possibilities. One is that someone could be a fraud personally but still build a successful business in the long term. Loath as I am to admit it, this is entirely possible—I haven’t run a business, and as I pointed out in the last post, there are definitely people in science who are pretty widely acknowledged as doing shoddy work, and yet it doesn’t (always) seem to stick. Moreover, there was the former dean of college admissions at MIT, who appeared to be rather successful at her job until it came out (you can’t make this stuff up) that she faked her college degrees. I do think, however, that the probability of a fraudulent person doing something real and meaningful in the world is considerably less than the infamous 1-out-of-10 ratio of success to failure that venture people always bandy about, or at least considerably less than for someone who's not a Faker McFakerpants. Plus, as the MIT example shows, there’s always the risk that someone finds out, leading to a high-profile debacle. Imagine if Elizabeth Holmes had said that she actually graduated from Stanford (instead of admitting to dropping out (worn as a badge of honor?)). Would there be any chance she would have taken her scam this far without someone blowing the whistle? Overall, I think there’s a substantial long-term risk in funding liars and cheats (duh?).

Another possibility, though, is that venture capitalists will fund people who are liars and cheats because they don’t care about building a viable long-term business. All they care about is pumping the business up and selling it off to the next bidder. Perhaps the venture capitalists will invest in a charming con artist because someone not, ahem, constrained by the details of reality might be a really good salesman. I don’t know, but the cynic in me says that this may be the answer more often than not. One might say, well, whatever, who cares if some Silicon Valley billionaires lose a couple million dollars. Problem is, implicit in this possibility is that somebody is losing out, most likely some other investors along the way. Just as bad, rewarding cheaters erodes everyone’s sense of trust in the system. This is particularly aggravating in cases when the company is couched in moral or ethical terms—and in situations where patient health is involved, everything suddenly becomes that much more serious still.

Overall, one eye-opening aspect of all this for me as an academic is that while we value integrity, skepticism and evidence very highly, business values things like “passion” more than we do. I don’t know that an imposition of academic values would necessarily have caught something like Theranos earlier on, along with all the other lesser-known cases out there, but I would like to think that it would. Why are these values not universal, though? After all, our role in academia is that of evaluation, of setting a bar that employers value. In a way, our students aren’t really paying for an education per se—rather, they are paying for our evaluation, which is a credential that will get them a job; in a sense, it’s their future employers that are paying for the degree. Why doesn’t this work when someone fakes a degree? When someone fakes data?

Here’s a thought. One way to counter the strategy of funding fakers and frauds would be for us to make this information public. It would be very difficult, then, to pump up the value of the company with such a cloud hanging over it, and so I think this would be a very effective deterrent. The biggest problem with this plan is the law. Making such information public can lead to big defamation lawsuits directed at the university and perhaps the faculty personally, and I’ve heard of universities losing these lawsuits even if they have documented proof of the fraud. So naturally, universities generally advise faculty against any public declarations of this sort. I don’t know what to do about that. It seems that with the laws set up the way they are, this option is just not viable most of the time.

I think the only real hope is that venture capitalists eventually decide that integrity actually does matter for the bottom line. I certainly don't have any numbers on this, but I know of at least one venture capital firm that claims success rates of 4 in 10 by taking a long view and investing carefully in the success of the people and ventures they fund. I would assume that integrity would matter a lot in that process. And I really do believe that at the end of the day in industry, integrity and reality really do trump hype and salesmanship, just like in academia. I don’t know a lot of CEOs, but one of my heroes is Ron Cook, CEO of Biosearch Technologies, a great scientist, businessman, and a person of integrity. I think it’s not coincidental that Ron has a PhD. For real.

Update in response to comments, 7/22/2016:
Got a comment from Anonymous and Sri saying that this post is overblowing the issue and unfairly impugns the venture capital industry. I would agree that perhaps some elements of this post are a bit overblown, and I certainly have no idea what the extent of this particular issue (knowingly funding fakers) is. This situation probably doesn't come up in the majority of cases, and it may be relatively rare. All that said, I understand that my post is short on specifics and data when it comes to funding known fakers and looking the other way, but I think it will be impossible to get data on this for the very same reason: fear of defamation lawsuits. You just can't say anything specific without being targeted by a defamation suit that you will probably lose even if you have evidence of faking. So where are you going to get this data from?

And it's true that I personally don't have enough anecdotes for this to count as data. But I can say that essentially every single person I've discussed this with tells me the same thing: even if you say something, the venture capitalists won't care. In at least some cases, they have specific personal examples.

Also, note that I am not directly calling out the integrity of the venture capitalists themselves, but rather just pointing out that the personal integrity of who they fund is not necessarily as big a factor in their decision making as I would have thought. My point is not so much about the integrity of venture capitalists—I suspect they are just optimizing to their objective function, which is return on investment. I just think it's shady at a societal level that the integrity of who they fund is apparently less important to them than we in academia would hope. Let me ask you this: in your department, would you hire someone onto the faculty knowing that they had faked their degree? I'm guessing the answer is no, and for good reason. The question is why those same reasons don't matter when venture capitalists are deciding whom to fund.