Saturday, June 28, 2014

Why bother studying molecular biology if the singularity is coming?

Perhaps I’m just being hopelessly optimistic, but I believe Ray Kurzweil’s singularity is going to happen, and while it may not happen on his particular timetable, I would not be surprised to see it in my lifetime. For those of you who haven’t heard of it, the singularity is when the power of artificial intelligence surpasses our own, at which point it becomes impossible to predict the future pace of change in technology. Sounds crazy, right? Well, I thought it was crazy to have a computer play Jeopardy, but not only did it play, it crushed all human challengers. I think it’s a matter of when, not if, but reasonable people could disagree… :)

Anyway, that got me thinking: if artificial intelligence is the next version/successor of our species, and it’s coming within, say, 50 years, then what’s the point of studying molecular biology? If we consider a full understanding of the molecular basis of development to be a 50-100 year challenge, then what’s the point? Or cancer? Or any disease? What’s the point of studying an obsolete organism?

In fact, it’s unclear what the point is in studying anything other than how to bring about the super-intelligent machines. Because once we have them, then we can just sit back and have them figure everything else out. That spells doom for most biomedical research. You could make an argument for neuroscience, which may help hasten the onset of the machines, but otherwise, well, the writing’s on the wall. Or we can just do it for fun, which is the only reason we do anything anyway, I suppose…

Friday, June 27, 2014

Google Docs finally has "track changes"!

I love Google Docs! It's where we store tons of information in the lab; it is essentially indestructible, easy to use and easy to share. And, perhaps most important of all, it makes collaborating on documents SO much easier. No more endlessly sending around XYZ_manuscript_final3_actuallyfinal8.docx. The Word Doc sharing method is error-prone and slow, especially if you have many collaborators.

However, the one Word feature Google Docs was missing was the infamous "track changes". Warts and all, this is an essential feature, since it allows editing while also showing you directly where the edits happened. Google Docs had great commenting features and some version history, but it just was not as good as good old track changes in Word, end of story. But now it exists! It's called editing in "suggest" mode, and you can change modes with the little pop-up menu towards the top right corner (it's in "edit" mode by default). So awesome! Now witness the firepower of this fully armed and operational Word replacement!

Thursday, June 26, 2014

I sincerely hope I never write about STAP again

Got a few negative comments on my last blog post about the STAP fiasco. Seems like some folks think I’m being overly apologetic or that I have no idea that she faked data. Haha, reminds me why I should just stick to blogging about Chipotle! Also, for the record, in case anyone cares (which I sincerely hope they don’t), I do not have, nor am qualified to give, any opinion as to whether the papers should be retracted or whether STAP cells are real. This is about the vilification of Obokata.

First of all, let me just say (in case it’s not obvious) that if someone has faked data, well, then they should be out of the game, permanently. I think everyone agrees that intentionally fabricating data is a capital offense.

Where, however, is the line between sloppy science and fabrication? Is it the intent to deceive? Are we sure Obokata had such intent? Let’s look at the evidence that I am aware of (and for those out there who think I was completely clueless about the context, I had considered most of this before writing my post).

Here is the RIKEN investigation’s report. At issue are two main problems: splicing of the lane of a gel and duplicating some teratoma figures from Obokata’s thesis. To the former, well, it was an inappropriate manipulation (she scaled and spliced a positive control lane from one gel into another because it looked a bit better), but the original gel data was there, and it seems as though she wasn’t even aware that this was a bad thing to do. Moreover, her raw data doesn’t appear, from what I can find, to contradict what she portrayed. I’m not saying that this was a good thing to do, just that it seems like an honest mistake. The RIKEN report does not contradict this sentiment, by the way.

Then there is the more damning issue of the use of teratoma images duplicated from Obokata’s thesis. Yes, this is outright fabrication. Again, are we absolutely sure there was intent to deceive? Yes, I admit it does seem a bit weird that she would have forgotten what pictures came from where, especially from her thesis. That said, Obokata says that she has since provided what she claims to be correct images, although the data trail is very weak (which, by the way, is the case for imaging data in almost every biomedical research lab I’ve seen). Indeed, the RIKEN report itself says that she and Sasai provided the “correct” images just before the investigation began when notified of the issue, and so they didn’t even think they needed to provide further explanation. Obokata also maintains that she submitted a correction to Nature.

Certainly, this is beyond sloppy, probably worthy of retraction. But it also seems true, based on the report, that alternative images showing the effect exist that are not duplicates of her thesis (be these images legitimate or not). So, is she a fool or a knave? To the former, if we assume that she actually has the correct images and they show the right thing, as she maintains, then what other rational explanation is there for the images than an honest mistake? I mean, it beggars belief that she would purposefully show duplicated images when she had the right ones in hand. Now let’s assume she’s a knave and that she didn't actually have teratoma images and needed to manufacture evidence. Again, there is STILL absolutely no motivation for her to intentionally use images from her thesis when she had other non-duplicated fake-worthy images in hand. So she’s either a (yes, very) sloppy scientist, or an utterly incompetent faker.

I’ve never met this woman, and I’m not here to defend her honor, but personally, without seeing more of the evidence, I would feel reluctant to destroy a young woman’s life by branding her a liar and a fraud. I also wonder if we’d all be giving her the benefit of the doubt on all this if her results were immediately replicated by a dozen labs (and Wakayama did actually replicate the experiment, with coaching from Obokata). Would we then be asking for an erratum instead?

As I pointed out before, it’s absolutely nuts to deliberately fake results like this. What is the endgame? It’s obvious that people will attempt to replicate immediately, and so any “fame” would be fleeting at best, followed by probably the darkest period in your life. Whether she misinterpreted her data and those putative STAP cells are actually dying cells or contaminating cells or whatever is a separate issue from whether she intentionally misled people in her work. Being (honestly) wrong in science isn’t the end of the world, nor should it be.

Or whatever, maybe I’m just hopelessly naive and she really is a faker and a cheat. I, like others, am working with limited information. If so, fine, I was wrong, and maybe she does deserve to be piled on. I’ve already written way more about this than I ever intended, and the whole thing has taken up way more of my brain space than I wanted. I suspect I'm not the only one to feel that way.

Why there is no “Journal of Negative Results”

I think everyone at one point or another in their scientific career says: “Why isn’t there a Journal of Negative Results?” Turns out there is, actually, and you could argue that PLOS ONE would also be an appropriate venue. However, it’s worth thinking about exactly why we don’t generally publish negative results. There are the obvious political reasons, one being that by disproving someone else’s work, you have now made yourself an enemy, potentially a very powerful one, and another being that you typically don’t get much “impact credit” for a negative result. But there is a scientific reason as well, namely that it is much easier to confirm a finding than to definitively disprove it.

Let’s say someone had a finding about, say, expression of a gene going up in condition B as measured by RT-PCR. Then we in our lab try and confirm the finding using RNA FISH to measure gene expression. If we confirm the finding, great, done, that was easy! But if we find that the expression is unchanged, then what? In our lab, this is usually when we stop. But if we wanted to publish this negative result, we would not have satisfied the burden of proof. First, someone would say “well, your RNA FISH thing doesn’t work right, at least not in this case.” So now we have to do RT-PCR using their primers and conditions, etc. Then, “well, how do you know your conditions are exactly the same?” Now we contact the authors and try to replicate everything down to the smallest detail. Maybe we don’t get their RT-PCR results. Not useful, because while it might reveal that the original authors were incompetent, they could just keep saying “Well, you’re not doing the exact same experiments as us, so whatever.” Once the result is published, the burden of proof is on you, not on them. Suppose on the other hand that we do confirm their RT-PCR results. Now we’ve just bought ourselves months of work to try and figure out why there is a discrepancy between the methods. Not exactly glamorous work, especially when you already know in your heart of hearts that their result is wrong. Overall, the work required just doesn’t justify the reward, which is only compounded when you factor in the political risks.

That said, there is a type of negative result that would be less politically dangerous, although perhaps not quite paperworthy. Those are the little methods details that are typically buried within a lab’s collective brain. Like “oh yeah, don’t try to use the RNA FISH method in XYZ organism because the high autofluorescence will kill you.” I tried to do that for RNA FISH by setting up a FAQ, and I think it has proven to be useful. If you’re a methods-head, I would definitely give it a try.

Monday, June 23, 2014

Why is everyone STILL piling on that STAP woman?

[Update, 6/30: follow up post]

There is something weird about the STAP “scandal”. I still have no idea why there is such a hardcore investigation into these papers, now with talk of it shutting down the entire institute. Much of the inquiry seems to revolve around some apparently duplicated lanes of a gel and “plagiarism” of materials and methods (seriously?!?). Umm, well, the plagiarism of materials and methods stuff is just a stupid criticism. The duplicated lanes in a figure? Potentially more serious, but I’m inclined to say it’s just sloppy. Why? Because NOBODY in their right mind would publish this work without truly believing that the protocol worked. The only other logical option is that this woman is a pathological liar, because she must have known that everyone in the world would be angling to replicate this result immediately–there is nowhere to hide. And, admittedly from afar, I don’t think she seems like a liar. Rather, I think that either she made an honest mistake and got fooled by biology–wouldn’t be the first, won’t be the last–or the result is actually valid and just requires more time for others to replicate. I think it would be wrong to dismiss this latter possibility, by the way.

And what if the papers end up being wrong? So what. Papers are wrong all the time. I personally know of several papers that are wrong in high profile journals, ones that we and others have wasted time and treasure on following up. I’m sure you do, too. (Buy me a beer and we can trade stories.) In fact, there are entire subfields that I strongly suspect are bogus. Nobody is hunting these people down and debating whether they should close MIT or Stanford or Harvard, even if everyone in the room quietly agrees the work is bogus. Perhaps some of this work is not quite as high profile as STAP, but I don’t think that should matter. It’s all still in the record, helping the careers of those who write it and fooling those who read it. I’m not saying that it’s a good thing, I’m just saying there’s a serious double standard here.

I think it’s worth comparing the fallout here to that of two recent high profile physics “mistakes”. One was the faster than light neutrinos. The overall reception was “Well, it’s probably wrong, let’s just wait until they figure out what happened.” And it turned out to be a not fully plugged in USB cable or something. Fine. I don’t think there was any huge fallout calling for people to shut down OPERA, although the lead investigator did somewhat mysteriously step down from his post. But still, not as crazy as the firestorm surrounding Obokata. Then consider the very recent discovery of gravitational wave signatures of inflation shortly after the big bang. Turns out it might just be a bunch of dust. Oops! Still, nobody’s losing their job or shutting down an institute or being investigated like a criminal. And, with all due respect to STAP, finding particles that move faster than light or seeing signatures of inflation in the early universe are some pretty big results. So why this crazy witch hunt? Is it because she is a woman? I feel like maybe everyone is focusing all their “the biomedical research system is broken!” energy onto this one particular case, as though that will solve anything.

Whatever, all I’m saying is that you better hope you’ve got someone who believes in you when the science gestapo comes looking for blood. Because, in a slight bastardization of JFK, while success has a dozen coauthors, retraction just needs one scapegoat.

Sunday, June 22, 2014

Is my PI out to get me?

I've been delving more into the world of science blogging lately, and it is a wonderful place, with many exciting voices, both young and old. Among the younger crowd, though, I sometimes see these negative comments towards PIs, usually along the lines of “All my PI cares about is getting her papers published and grants funded and not about training me with real skills that I need or helping me get ahead in my own career”. Essentially, some variant of “PIs only care about themselves and not about me, and the incentives in science are all wrong in this regard, etc.”

Hmm. As a PI now (still feels weird to say it) I think it's worth making a couple points. First, by far the majority of PIs that I've met care deeply about their trainees and want nothing but the best for them. They are acutely aware of their successes and failures and want to maximize the former and minimize the latter. As in any relationship, I think some degree of conflict/frustration with your PI is totally normal and is in fact productive. Indeed, I think one of the most wonderful things about being a PI is working with younger people who have a healthy disrespect for experience, including my own. That said, thinking you know better than your PI is one thing, thinking that they don’t care about you is something else.

Another criticism I often hear these days is something along the lines of “My PI didn't teach me anything, so why is it called graduate SCHOOL?” I think this gets to the heart of what graduate school is about. Is it developing tangible skills? Yes and no. Yes, you will learn some specific skills along the way. But going to graduate school for a PhD is decidedly not the same as going to a trade school. Or at least it is one of a wholly different sort. To me, the most important thing you learn in grad school is critical thinking, which is hard to quantify but is very, very real. I was just on a thesis committee of a student who is about to wrap up her PhD. In the last year, she did a ton of work, and made fantastic progress–which, by the way, everyone on the committee was happy to see. And it was also clear that she had changed as a person. Perhaps a bit weary, maybe a bit less idealistic. But at the same time, it was clear she had developed tremendously as a scientist–and, by extension, as a professional. She handled criticism with aplomb, projected authority on her research field, and presented her work clearly and effectively. In short, she is now a trained, mature scientist. It is this aspect of training that is the hardest to objectively describe and so gets ignored the most, but is far more important than any particular skill in the long term, no matter what career path you choose to follow.

So if this is the point of grad school, then how does the PI help make this happen? Partly to provide an exciting and vibrant environment for doing science, and make no mistake, this is a lot of work–something I didn’t appreciate as much until I became a PI. And yes, partly to provide some technical scientific skills. (I actually think this includes the elusive notion of “creativity”, which is a skill that can be taught, in my opinion.) But another big part of it that is much harder for trainees to appreciate is that we bring experience. Experience in navigating through the ups, downs, twists and turns (the “cloud”) as you struggle through a research project, experience in how to deal with failure and rejection, experience in how to deal with success. We have learned from many mistakes, and the whole point of learning from others is to try not to repeat the mistakes they have made before. At the edge of knowledge, where the path is by definition not clear, this experience is invaluable.

I think we also have something valuable to share with students just based on the fact that we’ve typically been alive a bit longer. I think this manifests itself most often in the common belief that “My PI is not letting me graduate because they just want to get another paper out of me.” I’m going to just speak for myself here, and I’m sure there are many counterexamples, but I don’t think I’ve met any PI who’s done this. In fact, far more common in my experience is the opposite thought for the PI, along the lines of “How am I going to graduate this student who just has not been productive?”

Where does this notion of the PI holding back the student come from? I think it’s the Luke Skywalker/Yoda dynamic. You know, when Luke Skywalker wants to face Darth Vader and Yoda says “Complete is not your training” (or whatever Yodaspeak, leave me alone Star Wars nerds), but Luke leaves anyway, gets his hand chopped off, and then returns, at which point Yoda says “Complete your training is”. Students are young, with that energy and the aforementioned healthy disrespect for experience that tells them “I’m ready to get out there and do my thing!” PIs have a bunch of experience that says “Wait, why be in such a rush? Life is long, there’s more to learn here, and it will serve you well.” You know what, I think they’re both right. At some point, the student is ready to get out there and develop their own experiences–they, like Luke Skywalker, will be faced with a new situation, rely on what they have learned, perhaps make some mistakes, and then their training will be complete. I have often found myself now encountering situations where I think to myself “Oh, now I see what my advisor was talking about…” I think it’s just the natural cycle of the mentor and the trainee.

Again, though, I think it’s important not to mistake this for the PI acting against the best interests of the trainee. I would like to think that when faced with a choice, I would choose to act in the best interests of the trainee, and I believe that to be the case for myself (although perhaps some might disagree :). To be frank, though, I don’t think these choices come up all that often anyway–most of the time, I think the interests of trainees and mentors align quite well. Yes, PIs have their own career to think about, and I’m not going to say that it's not a part of one’s thinking as a PI. But consider this: if the interests were so misaligned, why is it that junior PIs, who face the most career pressure and uncertainty, typically make the best mentors for graduate students?

Anyway, I guess my point is that for the most part, PIs care about their trainees, and usually a lot. Sometimes PIs make choices that may not initially seem to trainees to be in their best interest. And sometimes PIs make mistakes, or sometimes we all fall victim to just plain old bad luck. But, for the trainees out there, please don’t feel like your PI is out to get you, because they probably aren’t.

Wednesday, June 18, 2014

Why does push-button science push my buttons?

We just did some single cell RNA-seq using the Fluidigm C1 machine. Seems fancy, right? Sequencing from a single cell? Actually, it was remarkably simple. We just put some cells in a tube and squirted them into a chip, put the chip in a machine, and said go. Okay, maybe a little more than that, but not much. For a while, something about this really irked me, and I’ve been trying to get to the root of that feeling. Here’s what I realized about myself.

We, like many others in imaging, do artisanal experiments. Our experiments take patience and some degree of debugging and know-how, and getting a good data set still requires some amount of care and attention. Although we have automated much of our workflow, we most definitely do not have a platform. And the fact is that I like it that way. I take pride in high quality data, in experiments that not everyone can do, crafted by hand by highly skilled experimentalists. I love knowing the ins and outs of all the little details that go into interpreting an image, in figuring out what may have gone wrong and how best to fix it. And maybe I’m all wrong for feeling this way.

In the end, isn’t it a good thing if experimental methods are commoditized, made robust and broadly available? If it’s so easy to do something that anyone can do it? That we are freed to spend more time thinking about science and less time mindlessly pipetting clear liquids from one tube to the next?

I think a part of what bothers me is fear. It’s like I’m a scientific luddite, scared that there will not be a place for our type of work in the future. It’s a fear that’s probably largely misplaced (I hope). Of course, one issue with these push-button machines is that the big, fancy labs typically have access to them well before anyone else, and so they get their big, fancy papers out first. But those papers generally tend to be just about the least imaginative and most obvious thing you can do with this tech, as though at the end of the run, somebody pushed the next button over labeled “Write paper”. Our job is to think of something clever to do with these new capabilities and use it to enhance our existing strengths. In that way, the more things change, the more things remain the same.

Tuesday, June 17, 2014

Some pics of the lab from our good buddy Marc Beal

Marc Beal, Stellaris RNA FISH champion numero uno, came to visit the lab and took some great pictures at our traditional BBQ 'n Beal outing at Baby Blues BBQ:

One must enter the secret room through the secret doorway:

Our guides on this magical journey:

Let the games begin:

Some cows were definitely harmed in the making of these pictures:

Thanks for a great time, Marc!

Sunday, June 15, 2014

How to get people to do boring stuff they don’t want to do in lab

Lately, we’ve been working on a lot of infrastructure and process-oriented aspects of our work in lab, like a complete overhaul of our RNA FISH analysis software (now in sufficiently good shape to be publicly available to everyone), a probe database, and thinking about how best to organize our growing RNA-seq datasets. Once we have established what we believe to be best practice, though, the next issue is compliance. It’s one thing to tell people what they should do, quite another to actually get them to do it. For instance, we can all say “if you’re going to analyze your RNA-seq data, you should use this data organization scheme”, but there’s a natural entropy at play when people actually do work in the lab, and non-compliance is a natural by-product. How can you enforce best practice?

Well, actually, before starting to think about enforcement, I think it’s worth making sure that whatever scheme you put in place has actual, real benefits to people in the lab. I’ve come to realize that process, while it can enable science, is not science in and of itself, and it’s not always worth the effort. It’s a fine line, and perhaps somewhat a matter of personal taste; I think some folks are just fussier about stuff than others.

So what are the benefits? For our lab, I feel like there are three main benefits to building process infrastructure:
  1. Error reduction: To me, the most useful benefit to having a standardized and robust data pipeline is that it can greatly reduce errors. The consequences of mixing up your datasets or applying the wrong algorithm can be absolutely devastating in a number of ways.
  2. Reproducibility/documentation: For data, I feel, as do many others, that it is imperative to be able to reliably (and understandably) reproduce the graphs and figures in your paper from your raw data. Frankly, in this day and age, there’s no excuse not to be able to do this. Documentation is just as important for other things we do in lab, whether it’s how we designed a particular probe or what the part number is for some kit we ordered 3 years ago and is about to run out.
  3. Saving people time and facilitating their work: Good infrastructure can save time in a number of ways. Firstly, it hopefully leads to less wheel-reinvention, which I’ve seen all the time in other labs. Another way it saves time is by (hopefully) leaving a data trail; i.e., “That data point looks funny, can you show me the image it came from?” Good infrastructure makes it easy to answer that question, and makes it much easier to explore your data in general. If getting answers is easier, you will ask more questions, which is always a good thing.
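The data trail in point 3 can be as simple as a small index that maps every derived number back to the raw image it came from. Here’s a minimal, purely hypothetical sketch in Python (the file names, fields, and dataset names are all invented for illustration; this is not our actual pipeline):

```python
import csv
import io

# Hypothetical provenance index: each per-cell spot count records which
# raw image (and dataset) it was derived from.
INDEX_CSV = """dataset,cell_id,raw_image,spot_count
txn_inhib_rep2,cell_007,images/20140612_scan/s007.tiff,142
txn_inhib_rep2,cell_008,images/20140612_scan/s008.tiff,9
"""

def trace_data_point(index_text, dataset, cell_id):
    """Answer: 'that data point looks funny, which image did it come from?'"""
    for row in csv.DictReader(io.StringIO(index_text)):
        if row["dataset"] == dataset and row["cell_id"] == cell_id:
            return row["raw_image"]
    return None  # no trail recorded for this data point

# The funny-looking outlier (9 spots) traces straight back to its image:
print(trace_data_point(INDEX_CSV, "txn_inhib_rep2", "cell_008"))
```

The point isn’t the code, of course; it’s that once an index like this exists as a by-product of the normal analysis pipeline, answering “show me the image” costs seconds instead of an afternoon of spreadsheet archaeology.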
So what’s the problem? Well, for points 1 and 2, the issue is that error reduction, reproducibility and documentation are just not that exciting, at least not to people who are more interested in doing science. That, and the payoff is typically a sigh of relief a couple years down the line. My experience thus far has been that most systems for documenting lab stuff, no matter how sound the rationale, just don’t stick without some serious effort. For instance, we have a probe “database” (i.e., spreadsheet) that is woefully out of date. And we have a number of protocols that are fairly out of date, and an orders spreadsheet that is out of date, you get the idea. Same for RNA-seq and RNA FISH datasets, at least at the level of high-level data organization. You know the feeling: “No, not that transcription inhibition dataset, that’s the one that came out funny because of the cells acting weird, use this one instead…” The only way to enforce compliance in these cases is to create a punitive rule, something like "no more orders placed until you update the ordering sheet". Sucks, but I guess that works.

But point 3, saving time and facilitating work, is something everyone can get behind without any prodding. And then there's never any issue of compliance. For instance, our software provides all the backend to make sure that our data is fully traceable from funny outlier data point to the raw images of a particular cell. But it also provides all the tools to analyze data and use all the latest tricks and tools for image analysis that we have developed in the lab. For this reason, it's essentially inconceivable that anyone would spend time writing their own software instead: the benefits are big and, importantly, immediately realizable.

So what I’m thinking is that we somehow have to structure all the boring lab documentation tasks so that there is some immediate gratification for doing them. What can that be? I’m not sure. But here’s an example from the lab. We’re working on having our probe database automatically generate identifiers and little labels that we can print out and stick on the tube. Not a huge deal, but it’s sort of fun and certainly convenient. And it’s something you can enjoy right away and only get if you access the probe database. So I’m hoping that will drive the use of the database. A more ambitious plan is to develop similar databases for experiments and consequent datasets that would enable automatic data loading. This would not only be important for reproducibility, but would also be enormously convenient, so I’m hoping people in the lab will be excited to give it a whirl.
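As a toy illustration of that kind of immediate payoff, here is roughly what auto-generated probe identifiers and tube labels could look like (the ID format and function names are invented for this sketch; it is not our actual database code):

```python
import itertools

# Hypothetical in-memory "probe database": registering a probe hands back a
# unique identifier plus a little label ready to print and stick on the tube.
_ids = itertools.count(1)

def register_probe(gene, fluorophore):
    probe_id = "P{:04d}".format(next(_ids))
    label = "{} | {} | {}".format(probe_id, gene, fluorophore)
    return probe_id, label

pid, label = register_probe("GAPDH", "Cy3")
print(label)  # prints "P0001 | GAPDH | Cy3"
```

The label is the carrot: you only get it by going through the database, so the database stays up to date as a side effect.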

Anybody else have any thoughts about how to encourage people to participate in lab best practices?

Saturday, June 7, 2014

Brilliant post by Casey Bergman about why science is getting harder

I just was browsing through Casey Bergman's blog, and I found this fantastic post about growth in the scientific "industry". The basic point is that the growth in the number of scientists is an exponential trend that is increasing faster than the rate of overall population growth, and perhaps the general crunching sound that we're experiencing in science right now is a result of a transition from exponential growth to saturation. Feels like a very sound argument to me!

Thursday, June 5, 2014

What are technology transfer departments for?

I just wrote a post about how Penn's tech transfer policy is worse than usual in terms of rewarding inventors, and Sri Kosuri (all around cool guy and now running his own show at UCLA) made the excellent point that most tech transfer offices don't actually make money, except for a few that struck gold with a blockbuster drug or something. Very true! Which got me thinking: what is the mission statement of the tech transfer office? And the answer I came to is that I have no idea.

Could it be making money for the university? It's true that overall, tech transfer is not a money making venture. So it would seem that the university is on its face not interested in making money on tech transfer, and that it's provided as a service for the faculty. Partly true. However, there are definitely rumors at places that they overhauled tech transfer after screwing up some big blockbusters. So the university is definitely interested in perhaps winning the lottery. In fact, this is not so very different from the overall business model of undergraduate education at many of these same places: they somehow lose money on each student (despite the astronomical tuition), making it up when the occasional graduate becomes super rich and donates a ton of money back to the university. So my feeling is that the university definitely believes that there is some potential financial gain from tech transfer, and would like to make sure their tech transfer office doesn't miss something big.

That said, it does not really make money, and I certainly don't think tech transfer is typically run with a profit motive in mind most of the time. The idea, in theory, is to get the ideas from our labs out into the public domain. Wait, don't papers already do that for free? Yes. But companies typically would actually prefer to license the technology, since it can give them a competitive advantage in the marketplace. So if you want to get your tech out there, you really need some IP, no matter how altruistic you are. In practice, though, I'm not sure what criteria they use for deciding what to patent, and it's an important choice because if they choose to patent, then it's a serious cost borne by everyone, which is what I was complaining about before. I'm guessing that they pursue more stuff than a more profit-minded entity would, and that's fine–at least some faculty can get some line items for their CV.

I see some problems with this, though. I have only very limited experience with this stuff, but I just don't see much coming from filing patents and having them sit on some shelf in the tech transfer office waiting for a wandering Google search to bring them to life. To the contrary, from what I've seen, getting your technology out into the commercial world requires a lot of care and attention and fostering of ties to industry partners (to its credit, Penn is making a concerted effort in this regard by creating an "innovation center" for startups and the like). It's a serious amount of work to establish and maintain these ties, and it doesn't just happen on its own. It's something that you have to want to do, and not everyone does.

Which brings me to another point: is the point of tech transfer to enable us as faculty to profit from our inventions? Again, not sure. They sure do tax the hell out of it: as I mentioned earlier, you're only getting around 22% of the royalty income, at Penn at least. Now, I griped about how little we get earlier–what rankles me most is how Penn compares to other places–but there's really not much I can do about it, and it's not going to prevent me from trying to commercialize our work. But there is a line somewhere, and we're not all that far away from it. For instance, if the percentage we got were 1%, I think it would be pretty hard for me to justify all the time I would spend on commercialization with so little to show for it at the end of the day. What about 5%? 10%? 22%? Dunno, all I know is that there's a line in there somewhere. Of course, if you don't do any work, then I suppose any payout is fine, but on the other hand, like I said, those patents are unlikely to go anywhere. (By the way, remember that this is a percentage of the royalty income, which is already a very small percentage of sales, and then that gets split among the various inventors, etc.)

All this to say that I'm still not sure exactly what the university hopes to get out of tech transfer, either for itself or for the faculty involved. Solvency may be a goal, or may not be; empowering faculty may be a goal, or may not be. In the end, though, I have to say that as I'm writing this, I'm more and more excited about Penn's idea of creating tech incubators with resources to help take technology to the marketplace. Perhaps that's a far better model for empowering faculty to commercialize their work than people like me quibbling about a few pennies here or there.

Wednesday, June 4, 2014

ANOTHER NIH Biosketch format?!?

NIH is now set to introduce yet another Biosketch format. And this one demands yet more work, requiring investigators to write (subjectively) about their primary contributions to various fields for up to 5 pages (5 pages!). Ugh, can we just settle on a simple CV and leave it at that?

Tuesday, June 3, 2014

Penn’s patent policy is crummy for inventors

So Penn has decided to invest heavily in “innovation”, whatever that means. I think part of what it means is transferring technologies developed in academic labs at Penn to the outside world, which is of course a good thing all around. Of course, the devil’s in the details. And Penn’s details are devilish indeed!

On their fancy new website, Penn says “[Penn’s patent policy] provides a means for Inventors to receive a generous share of any income that is derived from the licensing of inventions they create as Penn employees…” Which is funny, because frankly, of all the places I’ve been, it is by far the least generous towards inventors.

For those of you who are not particularly familiar with patents and licensing at universities, here’s how it usually works. Basically, the university owns everything you invent. You own nothing. Fair or not, that’s the way it works. If you have something that you think the outside world would find useful (and is patentable), then you go to the university’s technology transfer office. Naively, the idea is that these folks will decide whether your work is patentable and worth patenting. They then shop this “intellectual property” to various companies/startups to try to strike a licensing deal. In practice, the issue is that most of these patents just end up sitting on the shelf and never get licensed by a company. It turns out to be very hard to just have a patent sit there and somehow find a home. I think that commercialization works best if the professor herself is actively engaged in trying to develop commercial interest in her technologies, either by making a startup or getting an existing company interested–relying on someone else to make those connections in a largely anonymous fashion seems like playing some very long odds.

The issue is that all these unlicensed patents still cost money to prosecute, and somebody’s got to pay for it. And that’s where Penn’s system gets particularly bad. Typically, the breakdown is such that the inventors get around 1/3 of the money from licensing (the rest goes to various factions at Penn). Some places it's a bit more (I’ve seen 35%) and some a bit less (Penn is stingy at 30%). But then come all these legal costs, and here Penn does something very strange. They basically take the legal costs primarily out of everybody’s personal share. These legal costs typically amount to a whopping 25% of the total inventor’s share. So effectively, you’re getting just 22% (75% of your 30% share)! And, more importantly, this applies even if you have already gotten the licensee to pay all the patent expenses! That means that even though the company you are working with is paying all the legal expenses, you are still paying out of your own pocket for everyone else’s patenting costs. In other words, the winners pay for the losers, and they pay a lot.

At other places, they will charge non-legal operating expenses out of the initial pot of licensing income, and then have you pay back the legal expenses on your own particular patent out of the royalties first. This way, if your licensee pays your legal expenses, that directly benefits you. This makes some sort of sense, I suppose–I’m still not sure why inventors get so little, but at least it more directly rewards those who actually manage to get their work out there. Note that somebody still has to pay for the unlicensed patents, but at least those costs usually get split between the university’s share and the inventor’s share, which lightens the load (especially since the university share is much bigger!).
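To make the difference between the two models concrete, here's a quick back-of-envelope sketch in Python. The percentages are illustrative, taken loosely from the figures above (30% inventor share with a 25% pooled legal-cost deduction for the Penn-style model, 33% with pay-back-your-own-legal-costs for the other model); actual numbers will vary by institution and agreement.

```python
# Two stylized royalty-split models (illustrative numbers only).

def pooled_legal_costs(royalties, inventor_share=0.30, legal_fraction=0.25):
    """Penn-style model: legal costs for ALL patents (licensed or not) are
    pooled and taken out of the inventor's personal share, regardless of
    whether the licensee already paid this patent's legal expenses."""
    return royalties * inventor_share * (1 - legal_fraction)

def pay_own_legal_costs(royalties, own_legal_costs, inventor_share=0.33):
    """Other model: your own patent's legal expenses are paid back off the
    top of royalties first; if the licensee covered them, own_legal_costs
    is zero and you keep your full share."""
    return max(royalties - own_legal_costs, 0) * inventor_share

# On $100k of royalty income, with the licensee covering all legal costs:
print(round(pooled_legal_costs(100_000)))       # 22500 -- the effective ~22%
print(round(pay_own_legal_costs(100_000, 0)))   # 33000
```

The point the sketch makes: under the pooled model, a successful inventor whose licensee pays the legal bills still subsidizes everyone else's unlicensed patents, whereas the pay-your-own model rewards exactly the patents that found a home.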

Anyway, look, realistically, nobody’s getting into academia to get rich. That said, commercialization is often the most direct way to have a real impact in the world. Why shouldn’t we as scientists benefit somewhat from the ideas that we work so hard to cultivate? And why should Penn scientists benefit considerably less than those at other universities?