
Tuesday, September 26, 2023

“Refusing the call” and presenting a scientific story

When scientists present in an informal setting where questions are expected, I always have an internal bet with myself about how long it will take before some daring person asks the first question, after which everyone else joins in and the questions rapidly start pouring out. This usually happens around the 10-minute mark. This phenomenon has gotten me wondering what it means for how best to structure a scientific talk.


I think this “dam breaking” phenomenon is best understood in terms of “refusal of the call”, a critical part of the classic hero’s journey in the theory of storytelling. The hero is typically leading some sort of hum-drum existence until suddenly there is a “call to adventure”. Think Luke Skywalker in Star Wars (Episode IV, of course) when Obi Wan proposes that he go on an adventure to save the galaxy, only for Luke to say “Awww, I hate the Empire, but what can I do about it?”. (Related point: Mark Hamill sucks.) Shortly afterwards, the hero will “refuse the call” to adventure, whether from fear, lack of confidence, or perhaps just plain common sense. This refusal involves some sort of rejection of the premise of the proposed adventure, which then needs to be overcome.


I think that’s exactly what’s going on in a scientific talk. As Nancy Duarte says, in a presentation, your audience is the hero. You are Obi Wan, presenting the call to adventure (an exciting new idea). And, almost immediately afterward, your audience (the hero) is going to refuse the call, meaning they are going to challenge your premise. In the context of a scientific talk, I think that’s where you have to present some sort of data. Like, I’ve presented you with this cool idea, here’s some preliminary result that gives it some credibility. Then the hero will follow the guide a little further on the adventure.


The mistake I sometimes see in scientific talks is that speakers let this tension go on for too long. They introduce an idea and then expound on it for a while, not providing the relief of a bit of data while the audience is refusing the call. The danger is that the longer the audience’s mind runs with its internal criticism, the more it will forever dominate their destiny. Instead, spoon-feed it to them slowly. Present an idea. Within a minute, say to the audience “You may be wondering about X. Well, here is evidence Y.” If you are pacing at their rate of questioning, perhaps a little faster, then they will feel very satisfied.


For instance:

“You may think drug resistance in cancer is caused by genetic mutations and selection. However, what if it is non-genetic in origin? We did sequencing and found no mutations…”


Friday, July 16, 2021

Confusion and credentials in presenting your work

Just listened to a great Planet Money episode in which Dr. Cecelia Conrad describes how she dealt with some horrible racist students in her class who were essentially questioning her credentials. She got advice from a senior professor to be less clear in her intro class.


This snippet reminded me of some advice I got from my postdoc advisor about giving talks: "You don't want everything to be clear. You should have at least some part of it that is confusing." This advice has stuck with me through the years, and I have continued to puzzle over it. Like, it should all be clear, no? I always felt like the measure of success for a presentation should, on some level, be a monotonically increasing function of its clarity.

But… for a while before the pandemic, I was doing this QR code thing to get feedback after my talks on both degree of clarity and degree of inspiration, and I have to say I feel like I noticed some slight anti-correlation: when I gave a super clear talk, it was seemingly less inspiring, but when I got lower marks for clarity, it was somehow more inspiring. Huh.
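Just to make the idea concrete: if you collect paired (clarity, inspiration) ratings from something like that QR-code form, checking for anti-correlation is a few lines of arithmetic. The numbers below are invented for illustration, not my actual feedback data.

```python
# Hypothetical post-talk feedback: each pair is one respondent's
# (clarity, inspiration) rating on a 1-5 scale. Made-up data.
ratings = [(5, 2), (5, 3), (4, 3), (3, 4), (2, 5), (3, 5), (4, 2), (2, 4)]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

clarity = [c for c, _ in ratings]
inspiration = [i for _, i in ratings]
r = pearson(clarity, inspiration)
print(f"clarity vs. inspiration: r = {r:.2f}")  # negative r = anti-correlated
```

With these invented numbers, r comes out negative, which is the "clearer but less inspiring" pattern I thought I was seeing.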

Nancy Duarte makes the point that in any presentation, the audience is the hero, and you as the presenter are more like Yoda, the sage who leads the audience on their heroic adventure. Perhaps it is not for nothing that Yoda speaks in wise-seeming syntactically mixed-up babble. Perhaps you have to assert credentials and intellectual dominance at some point in order to inspire your audience? Thoughts on how best to accomplish that goal?

Friday, July 17, 2020

My favorite "high yield" guides to telling better stories

Guest post by Eric Sanford


In medical school, we usually have five lectures’ worth of new material to memorize each day. Since we can’t simply remember it all, we are always seeking “high yield” resources (a term used so often by med students that it quickly becomes a joke): those concise one- or two-pagers that somehow contain 95 percent of what we need to know for our exams. My quest to find the highest yield resources has continued in full force since becoming a PhD student.


A major goal of mine has been to improve my scientific communication skills (you know, writing, public speaking, figure-making… i.e. those extremely-important skills that most of us scientists are pretty bad at), and I’ve come across a few very high yield resources as I’ve worked on this. Here are my favorites so far:


Resonate, by Nancy Duarte:

  • The best talks are inspiring, but “be more inspiring” is not easy advice to follow.

  • This book teaches you how to turn your content into a story that inspires an audience.

  • I received extremely positive feedback and a lot of audience questions the first time I gave a talk where I tried to follow the suggestions of this book.

  • This was both the most fun and the most useful of all my recommendations.


The Visual Display of Quantitative Information, by Edward Tufte:

  • Tufte is probably the most famous “data visualization” guru, and I think this book, his first book, is his best one. (I’ve flipped through the sequels and would also recommend the chapter on color from “Envisioning Information.”)

  • This book provides a useful framework for designing graphics that convey information in ways that are easy (easier?) for readers to understand. Some pointers include removing clutter, repeating designs in “small multiples”, labeling important elements directly, and using space consistently when composing multiple elements in the same figure.


The Elements of Style, by Strunk and White, pages 18-25


Words to Avoid When Writing, by Arjun Raj


Raj Lab basic Adobe Illustrator (CC) guide, by Connie Jiang


There are many other great resources out there that are also worth going through if you have the time (Style: Lessons in Clarity and Grace by Bizup and Williams is another excellent writing guide), but for me the ones above had the highest amount-learned-per-minute-of-concentration-invested.






Monday, February 18, 2019

Dear me, I am awesome. Sincerely, me… aka How to write a letter of rec for yourself

Got an email from someone whose PI asked them to draft their own letter of recommendation, and they were looking for guidance… haha, now that PI has made work for me! :) Oh well, no problem. I actually realize how hard this is for the letter drafter, and it’s also something for which there is very little guidance out there, for obvious reasons. So I thought I’d make a little guide. A couple of things first. First off, I don’t really know all that much about doing this, having written a few for myself and having asked for a couple, so comments from others are most welcome. Second, if you’re one of those sanctimonious types who thinks PIs should write every letter and never ask for a draft, well, this blog post is probably not for you, so don’t bug me about it. Third, if the PI is European, maybe just, like, turn everything down a notch, ya know? ;)

Anyhoo: so I figure the best way to describe how to do this is to describe how I write a letter. I’ll aim it at how I write letters for, say, a former trainee applying for a postdoc fellowship, maybe with some notes about how this might change for faculty applying for some sort of award or something.

Okay. I usually use the first paragraph to give an executive summary. Here’s an example of what I might write:
“It is my pleasure to provide my strongest possible recommendation for Dr. Nancy Longpaper. Nancy is simply an incredible scientist: she has developed, from scratch and by combining both experimental and computational skills, a system that has led to fundamental new insights into the evolution of frog legs. She has all the tools to be a superstar in her field: talent, intellectual brilliance, work ethic, and raw passion for science to become a stellar independent scientist. I look forward to watching her career unfold in the coming years.”
Or something along those lines. The key part of this that you will want to leave blank is the first sentence, i.e., the “strongest possible recommendation” part. That’s an important part that the letter writer will fill in.

Okay, second (optional) paragraph. This one depends a bit on personality. For some letter writers, they like to include a bit about how awesome they are and thus how qualified they are to write the letter. This is important for things like visas and so forth. This could be something like “First, I would like to introduce myself and my expertise. My laboratory studies XYZ, and I am an expert in ABC. I have published several peer reviewed articles in renowned journals such as Proceedings of the Canadian Horticultural Society B and our work has been continuously funded by the NIH.” I personally don’t include things like this for regular (non-visa) recommendations, but I have seen it.

Third paragraph: I usually try and put in some context about how I met the person I’m recommending. Like, “I first met Nancy when she was looking for labs to rotate in. She rotated in my lab and worked on project ABC. Even in her short time in the lab, she managed to accomplish XYZ. I immediately offered her a spot, and while I was disappointed for her to join Prof. Goodgrant’s lab, I was very pleased when she asked me to chair her thesis committee.” If you are a junior PI, this might be replaced with something about how the letter writer knows about your work and any interactions you may have had.

Next several paragraphs: a bunch of scientific meat. This is where you are REALLY going to save your letter writer some time. I usually break it into two parts. In the first paragraph or two, describe the person’s work. What, specifically, did they do? PROVIDE CITATIONS, including journal names. Sorry, they matter, too bad. Try and aim for a very general audience, stressing primarily the impact of the findings. But if you don’t quite manage that, don’t worry; people probably either already know the work or they don’t. Still, try. Emphasize specific contributions. Like, “Nancy herself conceived of the critical set of controls that was required to establish the now well accepted ‘left leg bias estimator’ statistical methodology that was the key to making the discovery that XYZ.” At all times, emphasize why what you did was special. Don’t be shy! If you’re too ridiculous, don’t worry, your letter writer will fix it.

Next part of the science-meat section: in my letters, I usually try and zoom out a bit. Like, what are the specific attributes of the person that led them to be successful in the aforementioned science? Like, “This is a set of findings that only someone of Nancy’s caliber could have discovered. Her intellectual abilities and broad command of the literature enabled her to rapidly ask important questions at the forefront of the field…” Be careful to emphasize big-picture, important qualities and not just list out your specific skills here. Like, don’t say “Nancy was really good at qPCR and probably ran about 4.32 million of them.” Makes you sound like a drone. At the trainee level, something about how rapidly you picked up skills could be good, but definitely not at the junior faculty level. Just try and be honest about the qualities you have that you think are most important and relevant. Be maybe a little over the top, but not too crazy, and then your letter writer can embellish as needed.

Second to last paragraph: I try and fill in a bit more personal characteristics here. Like, what are the personal qualities that helped them shine? E.g., “Nancy is also an excellent communicator of her science, and already has excellent visibility. She gives great talks and has generated a lot of enthusiasm…” Also, if relevant, you can add the standard “On a personal note, Nancy is a wonderful person to have in the lab…” Probably like 4-5 sentences max. Make it sound like you belong at the level you are applying for. If it’s for a faculty position, make it sound like you are faculty, not a student.

Finally, I end my letters with an “In sum, Nancy is the perfect candidate for XYZ. I have had the privilege of watching many star scientists develop into independent scientists in this field at top institutions over the years, and I consider Nancy to be of that caliber. I cannot recommend her more strongly.” This one can be sort of a skeleton and the letter writer can fill this in with whatever gushy verbiage they want. For some things, there might be some sort of “comparables” statement here that they can put in if they want.

Tips:
  • Don’t ever say anything bad. If you say something bad, it’s a huge red flag. If the letter writer wants to say something bad, they will. That would be a pretty jerky thing to do, though.
  • Length: There are three things that matter in a letter: the first paragraph, the last paragraph, and how long the letter is in between. For a postdoc thingy, aim for 1.5-2 pages for a strong letter. 2-3 for faculty positions. 1-2 for other stuff after that.
  • Duplication: What do you do if two letter writers ask for a draft? Uhhhh… not actually sure. I have tried to make a few edits, but sometimes I just send it and say hey already sent this and they can kinda edit it up a bit. I dunno, weird situation.
Anyway, that’s my template for whatever it’s worth, and comments welcome from anyone who knows more!

Sunday, February 3, 2019

The sad state of scientific talks (and a thought on how we might help fix it)

Just got back from a Keystone meeting, and I’m just going to say it (rather than subtweet it): most of the talks were bad. I don’t mean to offend anyone, and certainly it was no worse than most other conferences, but come on. Talks that run over time, filled with jargon and unexplained data incomprehensible to those even slightly outside the field, long rambling introductions… it doesn’t have to be this way, people! Honestly, it also raises the question of why people bother going to these meetings just to play around on their computers because the talk quality is so poor. I’ve heard so many people say the informal interactions are the most useful thing at conferences. I actually think this is partly because the formal part is so bad.

Why? After all, there are endless resources out there on how to give a good talk. While some tips conflict (titles? no titles?), mostly they agree on some basic tenets of slide construction and presentation. I wrote this blog post with some tips on structuring talks and also links to a few other resources I think are good. And most graduate programs have at least some sort of workshop or something or other on giving a talk. So why are we in this situation?

I think the key thing to realize is that giving a good talk actually requires working on your talk. A good talk requires more than taking a couple minutes to throw some raw data onto a slide and winging it with how you present that data. For most of us, when we write a paper, it is a long iterative process to achieve clarity and engagement. Why would a talk be any different? (Oh, and by the way, practice is critical, but is not in and of itself sufficient—have to work on the right things; see aforementioned blog post.)

I think the fundamental issue is the nature of feedback and incentives for giving research talks. Without having these structured well, there is little push to do the work required to make a talk good, and they are currently structured very poorly. For incentives, the biggest problem is that the structure to date is all about what you don’t get in the long term, which are often things you don’t know you could get in the first place. Giving a good talk has huge benefits and opens the door to various opportunities long term, but it’s not like someone is going to tell you, “Hey, I had this job opening, but I’m not going to tell you about it now because your talk stunk." Partly, the issue is that the visible benefits of good presentations are often correlated to some extent with brilliance. Take, for instance, Michael Elowitz’s talk at this conference, which my lab hands down voted as the best talk of the conference. Amazing science, clear, and exciting. Michael is a brilliant and deservedly highly successful scientist. Does it help that he is an excellent communicator of his work? Of course! To what extent? I don’t know. What I can say is that many of the best scientists presented their work very well. Where do cause and effect begin and end? Hard to say, but it’s clearly not an independent variable.

Despite this correlation, I still firmly believe that you don’t have to be Michael Elowitz-level brilliant to give a great talk. So then why are all these talks so bad? The other element beyond vague incentives is feedback. The most common feedback, regardless of anything about the talk you give, is “Hey, great talk!” Maybe, if you really stunk it up, you’ll get “interesting talk”. And that’s about it. I have many times gotten “Hey, great talk” followed by a question demonstrating that I totally did a terrible job explaining things. I mean, how is anybody ever going to get better if they don’t even get a thumbs-up/down on their presentation? The reason we don’t get that feedback is obviously the social awkwardness of telling someone that something they did publicly was bad. The main place where people feel safe giving feedback is lab meeting, which, while somewhat helpful, is also one of the worst places to get feedback. Asking a bunch of people already intimately familiar with your story and conversant in your jargon about what is clear or not is not going to get you all that far, generally. Also, the person with the most authority in that context (the PI) probably also gives terrible talks and so is not a good person to get feedback from. (Indeed, I have heard many, many stories of PIs actively giving their trainees bad advice.) Generally, the fact that most of the people you are getting feedback from aren’t themselves good at it is a big problem.

Okay, fine…

WHAT CAN WE DO ABOUT IT?

Again, I think the key missing element is honest feedback—I think most talk-givers don’t even realize just how bad their talks are. As I said, few people are going to tell someone to their face that their talk sucks. So how about the following: what if people preregister their talk on a website, and then people can anonymously submit a rating with comments? Basically like a teacher rating, but for speakers at a conference. You could even provide the link to the rating website on the first slide of your talk or something. This would have a number of advantages. First off, if you don’t want to do it, fine, no problem. Second, all feedback is anonymous, thus allowing people to be honest. Also, the comments allow people to give some more detailed feedback if they so choose. And, there is a strong positive incentive. With permission, you could have your average rating posted. This rating could be compared to e.g. the overall average, and if it’s good—which presumably it is if you decided to share it :)—then that’s great publicity, no?

One problem with this, though, is it doesn’t necessarily provide specific feedback. Like, what was clear or not? Comments could provide this to some extent. Also, if you, as the speaker, are willing, you could even imagine posting some questions related to your talk and seeing how well people got those particular points. Of course completely optional and just for those who really care about improving. Which should be all of us, right? :)
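For what it’s worth, the aggregation side of such a rating site would be trivial. Here is a minimal sketch of the idea, with invented talk names and ratings, showing each talk’s average against the overall conference average.

```python
from statistics import mean

# Hypothetical anonymous ratings collected per talk (1-5 scale).
# Talk names and numbers are invented for illustration.
ratings = {
    "single-cell talk": [5, 4, 5, 4, 5],
    "jargon marathon": [2, 3, 2, 3],
    "clear 15-minute talk": [4, 5, 4],
}

# Overall conference average across every individual rating.
overall = mean(r for rs in ratings.values() for r in rs)

# Report each talk's average and its offset from the conference mean,
# best-rated first.
for talk, rs in sorted(ratings.items(), key=lambda kv: -mean(kv[1])):
    avg = mean(rs)
    print(f"{talk}: {avg:.2f} ({avg - overall:+.2f} vs. average, n={len(rs)})")
```

The comments-plus-number structure would just hang off each rating record; the only genuinely hard parts are the social ones discussed above.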

Oh, and one suggestion from Rita Strack was to promote the 15 minute format, which is short enough to either require concision and clarity, or, should that not happen, is over fast! :)

Some suggested (e.g. Katie Whitehead) that we incentivize good talks by doing Skype interviews or having them submit YouTubes, etc. for contributed talks. In principle I like this, but I think it's just a LOT of work and also conflates scientific merit with presentation merit, so people who don't get a spot have something other than their presentation skills to blame. Still, could work maybe.

Another, perhaps more radical idea, is to do away with the talk format entirely. Most scientists are far more clear when answering questions (probably for the simple reason that the audience drives it). Perhaps we could limit talks to 5 minutes followed by some sort of structured Q&A? Not sure how to do that exactly, but anyway, a thought.

Anybody want to give this a try?

Friday, June 30, 2017

#overlyhonestauthorcontributions

___ toiled over ridiculous reviewer experiments for over a year for the honor of being 4th author.

___ did all the work but somehow ended up second author because the first author "had no papers".

___ told the first author to drop the project several times before being glad they themselves thought of it.

___ was better to have as an author than as a reviewer.

___ ceased caring about this paper about 2 years ago.

Nobody's quite sure why ___ is an author, but it seems weird to take them off now.

___ made a real fuss about being second vs. third author, so we made them co-second author, which only serves to signal their own utter pettiness to the community.

Sunday, April 2, 2017

Nabokov, translated for academia

Nabokov: I write for my pleasure, but publish for money.
Academia: I write for your pleasure, but pay money to publish.

More specifically…

Undergrad: I don’t know how to write, but please let me publish something for med school.
Grad student: I write my first paper draft for pleasure, but my thesis for some antiquated notion of scholarship.
Postdoc: I write "in press" with pleasure, but "in prep" for faculty applications.
Editor: You write for my pleasure, but these proofs gonna cost you.
SciTwitter: I write preprints for retweets, but tweet cats/Trump for followers.
Junior PI: I write mostly out of a self-imposed sense of obligation, but publish to try and get over my imposter syndrome.
Mid-career PI: I say no to book chapters (finally (mostly)), but publish to see if anyone is still interested.
Senior PI: I write to explain why my life’s work is under-appreciated, but give dinner talks for money.

Sunday, September 25, 2016

Some thoughts on how to structure a talk

As I will recount in a future blog post, I just went to a really fun conference at Cincinnati Children’s on systems biology. Especially cool was interacting with all the postdocs doing pretty amazing work in a variety of areas. More on that later.

As with most conferences, the talks were… mixed. Not the science, but the presentations themselves. Some were great, and some were sort of hard to follow. And some were really hard to follow. One thing I was struck by, however, is that there was far less correlation than you might think between how naturally vibrant someone is and how good their presentation was. This got me thinking: maybe part of the issue is that we always remember those presenters who are both super bubbly and super clear, and so everyone else just looks at that and says “well, they’ve just got it”, whatever “it” is, and gives up on improving. I, however, would contend that while there might be some correspondence between being "sparkly" and engagement/clarity, these are separable problems. And while being super sparkly may be a hard trait to manufacture, giving a clear and compelling talk is most certainly a skill. Indeed, while it might be hard to give a spectacular talk based on skill alone, almost anyone can give a great talk if they're willing to work at it and accept guidance. Being a cheerleader may or may not be part of your job as a scientist, but being able to clearly communicate your work most definitely is.

Now, there are plenty of opinions out there on how to give a talk, and I’ve given plenty already (see also this excellent website from David Stern). However, most of these tips focus primarily on the mechanics of giving a talk and devote little attention to how to structure one. Like, they all give some variant of the following maxims:

Basic:
  1. Don’t use text in slides.
  2. Use color appropriately.
  3. Make sure all axes are labeled and graphics are legible.
  4. Remove all jargon.
  5. Don’t go over time.
Mid-level:
  1. Don’t use text in slides.
  2. Remove everything you thought was not jargon but actually is still jargon.
  3. Make the title of each sentence a complete sentence (or no title, but this takes more expertise).
  4. Remember that the slides are just props—you are the speaker.
  5. Identify your audience.
  6. Don’t use figures from papers.
  7. Break up multiple concepts into multiple slides.
  8. Avoid jokes unless you are actually funny. Even then, you should probably avoid jokes.
Some of these are common sense, some are obvious in hindsight. Many have some sort of principles underlying them, and I’ll leave it to you to find other websites with that information (and this great video from Susan McConnell).

Thing is, none of these rules are universal. Take, for example, “Don’t use text in slides”. I have seen multiple talks by very senior, famous PIs, and they had slides with a paragraph of writing on them that they literally just read out verbatim. And you know what? It worked! Why?

Because the structure of their talks was superb. Yet there is precious little guidance out there about how to structure your talk to make it compelling and convincing, and, by proxy, clear. Anyway, here are some thoughts on structure. (Big thanks to Leor Weinberger, who turned me on to “Resonate” by Nancy Duarte, which I found very helpful.) Keep in mind this is just my opinion, but whatever.

The main thing with structuring your talk is to realize that you are telling a story. Stories are fundamentally different from papers, the latter having to be frontloaded as much as possible. Stories, by contrast, have a narrative arc. These arcs have a formula. Do not deviate from the formula! Here’s the formula as given by Pixar:
Once upon a time there was ___. Every day, ___. One day ___. Because of that, ___. Because of that, ___. Until finally ___.
Now let me translate that to science:
Once upon a time, there was a way to measure gene expression called RT-PCR. Every day, people would grind up a bunch of cells and measure the average expression across all the cells. One day, someone looked at expression in individual cells by measuring GFP levels cell by cell. Because of that, they saw that single cells could deviate wildly from the population average. Because of that, they developed further tools, showing that this variability was pervasive. Until finally, they were able to show that this variability had profound consequences for how cells function in both healthy and diseased organisms.
Now just make that into a deck of 50-100 slides and you have a departmental seminar. :)

Let’s deconstruct this a bit. Why does this narrative formula work so well? Because it establishes tension and contrast early, and allows one to come back to it often. In scientific terms, this basically means drawing a clear line between what the current thinking in the field is (the population average is all the information we need for gene expression) and an alternative that you are going to convince them of (individual cells can vary wildly). Duarte gives the example of Steve Jobs’ introduction of the iPhone. Look at the contrast he develops! What is now: flip phones, no way to do e-mail, no music—vs. what could be: a single device to do it all.

(Random aside: it’s actually really funny watching that keynote now to see the audience react wildly for the “iPod” feature, the “phone” feature, and then clap quietly and confusedly at the “revolutionary internet communicator”. If only they knew.)

Anyway, all this to say that it’s paramount to clearly state, in simple terms, what people think now, and then tantalize them with the promise of something new, something different. To build that tension, provide hints from the literature that support your new view, like it's hiding in plain sight. Look how effective this is in Star Wars. All the little Force tricks that Obi Wan uses make you want to know more. That's like saying "Hey, everyone has been looking at gene expression for ages, and if you look around, you can see all this variability in their data that they just didn't have the tools or inclination to quantify." It is these hints of what could be sprinkled in between your description of what is that gets your audience excited about your story and helps to highlight contrast. End the first act of your story (i.e., the introduction) with some sort of major conclusion or result that provides some meat to maintain this contrast. It is that contrast that will keep them interested during the second act.

What is that second act? Before getting to that, it’s important to realize that every good story has a hero (if you have negative results, an anti-hero). And who is that hero? Your audience. They are the ones who are on a journey, the journey from what everyone thought before towards what you are going to convince them of. To borrow from Duarte, your audience is Luke Skywalker. And that makes you the mentor—you’re Yoda. Your job is to lead the hero on this journey. Think about it: aren’t the most satisfying talks the ones where you think to yourself “Man, wouldn’t it then be cool if…” and then they show exactly that experiment on the very next slide? That’s because your mentor (the speaker) is doing a good job of shepherding you, the hero, along the path.

What does the hero do in the second act? Well, there are a couple of options here. One of my favorites from many martial arts movies is the training montage. The science equivalent of this is showing a bunch of further evidence to bolster your initial, cool result. Wait, what about the alternative isoforms? Nope, that doesn't explain it. What if you use an alternative method? Effect still shows up. This is what Duarte calls "resisting the call", like Luke Skywalker (i.e., your audience) resisting the call to action to use the force and blow up the Death Star. Here, your job is to persuade.

Another approach is to "fill out the story" here. This can take the form of a digression on a side point, or further analysis. Like: "So I showed you this really cool single cell data about cancer, but it's actually also interesting for these other reasons as well, let me show you." Key thing, though, is not to give away your turning point until closer to the end, where you transition to the third act.

The beginning of the end is the transition from either the training montage or the fill-in-the-story sequence, which are sort of the aftermath of that first big result, to thinking about the implications of those big results. This is the hero's turning point. Like: "So everything I showed you about single cell analysis in cancer would imply that the cells will die in this specific pattern. Does this happen?" This should lead to another major result, something to carry the ending. Your hero now has a final purpose.

Now, during the ending, it's important to come back to the beginning as well. Point out the initial state of the field again. The implication should be "See, this is how we were thinking before." In the best of situations, the contrast should be so stark that the original view should look positively quaint. This means that the hero (your audience) has transformed, and there's no going back. Then you've done your job.

This basic formula can work in long talks, short talks, any kind of talk. The only difference is how many details you leave in or leave out. The truth is that it takes time to learn this skill, and while there are some tips, there's no substitute for just carefully thinking about what you're trying to present and what works best to tell your story. I will give the following bit of advice, though. Your story has one more thing in common with a TV series than with a movie: you can't assume people actually watched the whole thing. Even if they're sitting there, how often has something like this happened to you while sitting in the audience?
“Wow, those are some convincing results I never looked at regulatory DNA like that before and now my mind is brimming with all these possibilities hmm I wonder if SuperCuts is a pokestop that would be cool and also the FACS facility probably not oh well so what was this talk about again?”
You simply cannot assume that people listened to all or even most of your talk, and certainly not that they internalized important details. I remember someone I know giving a talk that started with some heavy quantitative framework, after which it was like "Okay, now that you've all got that, let's get into the results, all presented assuming you know this framework". That was not good. I'm directly in the field and knew some of the work beforehand, and even I had a hard time following. Things work best if you remind them of key concepts and results along the way. One nice tip (shamelessly stolen from Susan McConnell) is the idea of a talisman, some sort of visual aid that you come back to over and over to help orient your audience. For instance, if you have a framework with two competing models, show those models repeatedly, every 5-10 minutes, perhaps with variations as the story develops. Take that opportunity to reiterate the main concepts required to understand what comes next. This helps your audience reconnect with the central arc of your work.

Anyway, hope this guidance proves useful. I realize it's sort of abstract, but I've found that as my speaking skills have evolved, understanding these principles has proven even more important than all the various tips, tricks and opinions on how to construct slides. All that stuff is important, but just remember that all those rules are typically in service of the principles of clarity and engagement, and while rules are meant to be broken, you never want to compromise your principles!

Note: As I was writing this, I was definitely thinking about the standard 45-60 minute talk, which usually has at least two main results. In the case of a short talk, like 2-15 minutes, it may make more sense to shorten or eliminate the second act. Also, the transition to the third act may or may not require any new result, but I think some version of the "what does this new knowledge imply"/"contrast with the old" is still necessary.

Another note: These are lessons that take most people many years to learn, so I wouldn't expect immediate results. But the main thing is to keep trying to improve. Many never do.

Saturday, January 2, 2016

A proposal for how to label small multiples

I love the concept, invented/defined/popularized/whatever by Tufte, of small multiples. The general procedure is to break apart data into multiple small graphs, each of which contains some subset of the data. Importantly, small multiples often make it easier to compare data and spot trends because the cognitive load is split in a more natural way: understand the graph on a small set of data, then once you get the hang of it, see how that relationship changes across other subsets.

For instance, take this more conventionally over-plotted graph of city vs. highway miles per gallon, with different classes of cars labeled by color:

library(ggplot2) # loads qplot, ggsave, and the built-in mpg dataset
q2 <- qplot(cty, hwy, data = mpg, color = class) + theme_bw()
ggsave("color.pdf", q2, width = 8, height = 6)



Now there are a number of problems with this graph, but the most pertinent is that there are a lot of colors corresponding to the different categories of car, and so it takes a lot of effort to parse. The small multiple solution is to make a bunch of small graphs, one for each category, so that you can see the differences between them. By the power of ggplot, behold!

q <- qplot(cty, hwy, data = mpg, facets = . ~ class) + theme_bw()
ggsave("horizontal_multiples.pdf", q, width = 8, height = 2)


Or vertically:

q <- qplot(cty, hwy, data = mpg, facets = class ~ .) + theme_bw()
ggsave("vertical_multiples.pdf", q, width = 2, height = 8)


Notice how much easier it is to see the differences between categories of car in these small multiples than the more conventional over-plotted version, especially the horizontal one.

Most small multiple plots look like these, and they're typically a huge improvement over heavily over-plotted graphs, but I think there’s room for improvement, especially in the labeling. The biggest problem with small multiple labeling is that most of the axis labels are very far away from the graphs themselves. This is a seemingly logical way to set things up because the labels apply to all the multiples, but it forces a lot of mental gymnastics to figure out what the axes are for any one particular multiple.

Thus, my suggestion is actually based on the philosophy of the small multiple itself: explain a graph once, then rely on that knowledge to help the reader parse the rest of the graphs. Check out these before and after comparisons:


The horizontal small multiples also improve, in my opinion:


To me, labeling one of the small multiples directly makes it a lot easier to figure out what is in each graph, and thus makes the entire graphic easier to understand quickly. It also adheres to the principle that information important for interpretation should be close to the data. The more people’s eyes wander, the more opportunities they have to get confused. There is of course the issue that by labeling one multiple, you are calling attention to that one in particular, but I think the tradeoff is acceptable. Another issue is a loss of precision in the other multiples. You could include tick marks as more visible markers, but again, I think the tradeoff is acceptable.

Oh, and how did I perform this magical feat of alternative labeling of small multiples (as well as general cleanup of ggplot's nice-but-not-great output)? Well, I used this amazing software package called “Illustrator” that works with R or basically any software that spits out a PDF ;). I’m of the strong opinion that being able to drag around lines and manipulate graphical elements directly is far more efficient than trying to figure out how to do this stuff programmatically most of the time. But that’s a whole other blog post…

Saturday, December 19, 2015

Will reproducibility reduce the need for supplementary figures?

One constant refrain about the kids these days is that they use way too much supplementary material. All those important controls, buried in the supplement! All the alternative hypotheses that can’t be ruled out, buried in the supplement! All the “shady data” that doesn’t look so nice, buried in the supplement! Now papers are just reduced to ads for the real work, which is… buried in the supplement! The answer to the ultimate question of life, the universe and everything? Supplementary figure 42!

Whatever. Overall, I think the idea of supplementary figures makes sense. Papers have more data and analyses in them than before, and supplementary figures are a good way to keep important but potentially distracting details out of the way. To the extent that papers serve as narratives for our work as well as documentation of it, it’s important to keep that narrative as focused as possible. Typically, if you know the field well enough to know that a particular control is important, then you likely have sufficient interest to go to the trouble to dig it up in the supplement. If the purpose of the paper is to reach people outside of your niche–which most papers in journals with big supplements are attempting to do–then there’s no point in having all those details front and center.

(As an extended aside/supplementary discussion (haha!), the strategy we’ve mostly adopted (from Jeff Gore, who showed me this strategy when we were postdocs together) is to use supplementary figures like footnotes, like “We found that protein X bound to protein Y half the time. We found this was not due to the particular cross-linking method we used (Supp. Fig. 34)”. Then the supplementary figure legend can have an extended discussion of the point in question, no supplementary text required. This is possible because unlike regular figure legends, you can have interpretation in the legend itself, or at least the journal doesn’t care enough to look.)

I think the distinction between the narrative and documentary role of a paper is where things may start to change with the increased focus on reproducibility. Some supplementary figures are really important to the narrative, like a graph detailing an important control. But many supplementary figures are more like data dumps, like “here’s the same effect in the other 20 genes we analyzed”. Or showing the same analysis but on replicate data. Another type of supplementary figure has various analyses done on the data that may be interesting, but not relevant to the main points of the paper. If not just the data but also the analysis and figures are available in a repository associated with the paper, then is there any need for these sorts of supplementary figures?

Let’s make this more concrete. Let’s say you put up your paper in a repository on github or the equivalent. The way we’ve been doing this lately is to have all processed data (like spot counts or FPKM) in one folder, all scripts in another, and when you run the scripts, it takes the processed data, analyzes it, and puts all the outputted graphical elements into a third folder (with subfolders as appropriate). (We also have a “Figures” folder where we assemble the figures from the graphical elements in Illustrator; more in another post.) Let’s say that we have a side point about the relative spatial positions of transcriptional loci for all the different genes we examined in a couple different datasets; e.g., Supp Figs. 16 and 21 of this paper. As is, the supplementary figures are a bit hard to parse because there’s so much data, and the point is relatively peripheral. What if instead we just pointed to the appropriate set of analyses in the “graphs” folder? And in that folder, it could have a large number of other analyses that we did that didn’t even make the cut for the supplement. I think this is more useful than the supplement as normally presented and more useful than just the raw data, because it also contains additional analyses that may be of interest–and my guess is that these analyses are actually far more valuable than the raw data in many cases. For example, Supp Fig. 11 of that same paper shows an image with our cell-cycle determination procedure, but we had way more quantitative data that we just didn’t show because the supplement was already getting insane. Those analyses would be great candidates for a family of graphs in a repository. Of course, all of this requires these analyses being well-documented and browsable, but again, not sure that’s any worse than the way things are now.
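For what it's worth, here's a minimal shell sketch of that kind of repository skeleton; the folder names are hypothetical, just following the description above:

```shell
# Hypothetical layout for a paper repository: processed data in one folder,
# analysis scripts in another, a third folder that the scripts fill with
# outputted graphical elements (with subfolders as appropriate), plus a
# Figures folder for assembling final figures in Illustrator.
mkdir -p paperRepo/processedData \
         paperRepo/plotScripts \
         paperRepo/graphs/supplementary \
         paperRepo/Figures
ls paperRepo
```

The idea is that running everything in plotScripts reads from processedData and deposits every graph, including the peripheral analyses that never make the supplement, into graphs/, where readers can browse them.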

Now, I’m not saying that all supplementary figures would be unnecessary. Some contain important controls and specific points that you want to highlight, e.g., Supp. Fig. 7–just like an important footnote. But analyses of data dumps, replicates, side points and the such might be far more efficiently and usefully kept in a repository.

One potential issue with this scheme is hosting and versioning. Most supplementary information is currently hosted by journals. In this repository-based future, it’s up to Bitbucket or Github to stick around, and the authors are free to modify and remove the repository if they wish. Oh well, nothing’s permanent in this world anyway, so I’m not so worried about that personally. I suppose you could zip up the whole thing and upload it as a supplementary file, although most journals have size restrictions on supplementary files. Not sure about the solution to that.

Part of the reason I’ve been thinking about this lately is because Cell Press has this very annoying policy that you can’t have more supplementary figures than main figures. This wreaked havoc with our “footnote” style we originally used in Olivia’s paper because now you have to basically agglomerate smaller, more focused supplementary figures into huge supplementary mega-figures that are basically a hot figure mess. I find this particularly ironic considering that Cell’s focus on “complete stories” is probably partially to blame for the proliferation of supplementary information in our field. I get that the idea is to reduce the amount of supplementary information, but I don’t think the policy accomplishes this goal and only serves to complicate things. Cell Press, please reconsider!

Sunday, December 13, 2015

Blog hiatus (hopefully) over, and a few thoughts on writing grants

Had at least a couple folks ask why I haven’t written any blog posts lately. The answer is some combination of real-life work leading to writing fatigue and some degree of lack of inspiration. On a related note, writing grants sucks.

There have been some grants that I’ve had fun writing, but I would say they are in the distinct minority. I am of course not alone in that, but one of the reasons often trotted out for hating writing grants is that many scientists just hate writing in general, and I think that is true for a number of scientists that I know. Personally, though, I actually really like writing, and I typically enjoy writing papers and am reasonably fast at it, so it’s not the writing per se. So what is it that makes me leave the page empty until just days before the deadline while patiently waiting for the internet to run out of videos of people catching geoducks?

Part of it is definitely that grantwriting makes you sit and think about your work, how it fits into what we already know, and how it will tell us something new. It is certainly the case that grants can force you to critically evaluate ideas–writing is where weak ideas go to die, and that death can be painful. But I don’t think this is the whole story, either. I would say that the few grants I’ve really enjoyed writing are the ones where the process focused on thinking about the science I really want to do (or more likely already did) and explaining it clearly. So what is it about the other grants that make me try to find virtually any excuse to avoid writing them?

After a bit of reflection, I think that for me, the issue is that writing a grant often just feels so disingenuous. This is because I’m generally trying to write something that is “fundable” rather than what I really want to do. And I find it really REALLY hard to get motivated to do that. I mean, think about it. I’ve got to somehow come up with a “plausible plan” for research for 5 years, in a way that sounds exciting to a bunch of people who are probably not experts in the area and have a huge stack of applications to read. First off, if I ever end up in a situation where I’m actually doing what I thought I was going to do 5 years ago, I should probably be fired for complete lack of imagination. Secondly, the scope of what one typically proposes in a grant is often far less imaginative to begin with. Nobody ever proposes the really exciting thing they want to do, instead they just propose what reviewers will think is safe and reasonable. Not that these are some brilliant insights on my part; I think most applicants and reviewers are acutely aware of this, hence the maxim “An NIH grant should have 3 aims: 2 you’ve already done and 1 you’re never going to do”. So to the extent that everyone already knows all this, why do we bother with the whole charade?

I should say that I feel far less disingenuous writing “people” grants, by which I mean the fund-the-person-not-the-project grants like many junior investigator awards, HHMI and those as part of the NIH high-risk/high-reward program. At least there, I’m focusing more on the general area that we’re interested in in the lab, describing what makes us think it’s exciting, and why we’re well positioned to work on this topic, which is far more realistic than detailing specific experiments I’ll use to evaluate a particular hypothesis that I’ll probably realize is hopelessly naive after year 1. Of course, I think these are basically the criteria that people are judging “project” grants on as well for the most part, but at least I don’t have to pretend that I know what cell type I’m going to use in our RNA FISH validation studies in year 4… nor will I get dinged for a “bad choice” in this regard, either. This is not to say that writing people-grants is easy–it is particularly tricky to write confidently about yourself without sounding silly or boastful–but I’m just saying that the whole exercise of writing a people-grant involves writing about things that feel more aligned with the criteria by which I think grants should actually be evaluated.

(Sometimes I wonder if this whole system exists mainly to give people a way out of directly criticizing people. If you’re not too excited about a grant, you can harp on technical issues, of which there are always plenty. I think this is an issue with our culture of criticism, which is probably a topic for another blog post.)

Carried a bit further, the logical conclusion to this line of argument is that we shouldn’t be judged on prospective plans at all, but rather just on past performance. Personally, I would much rather spend time writing about (and being judged on) science I actually did than some make-believe story about what I’m going to do. I remember a little while ago that Ron Germain wrote a proposal that was “people-oriented” in the sense that it suggested grants for all young investigators, with renewal based on productivity. His proposal engendered a pretty strong backlash from people saying that people-based grants are just a way to help the rich get richer (“Check your privilege!”). Hmm, don’t know that I’m qualified to delve into this too deeply, but I'm not sure I buy the argument that people-based grants would necessarily disfavor the non-elite, at least any more than the current system already does. Of course the current people-based grant system looks very elitist–it’s very small, and so naturally it will mostly go to a few elites. I don’t think that we can necessarily draw any conclusions from that about what people-based funding might look like on a broader scale. I also think that it might be a lot easier to combat bias if we can be very explicit about it, which I think may actually be easier in people-based grants.

As to the backlash against these sorts of proposals, I would just say that many scientists have an inherent (and inherently contradictory) tendency towards supreme deification on the one hand and radical egalitarianism on the other. I think a good strategy is probably somewhere in between. Some people-based grants to encourage a bit more risk-taking and relieve some of the writing burden. Some project-based grants to keep programmatic diversity (because it would help fund important areas that are maybe not fashionable at the moment). I don’t know where this balance is, but my feeling is that we're currently skewed too far towards projects. For this reason, I was really excited about the NIH R35 program–until you follow this eligibility flowchart and find out that most roads lead to no. :(

Oh, and about the actual mechanics of writing a grant: my personal workflow is to write the text in Google Docs using PaperPile, then export to Pages, Apple’s little-used-but-perfect-for-grants word processing tool. The killer feature of Pages is that it’s SO much better than Word/Google Docs at allowing you to move figures to exactly where you want them and have them stay there, and as an added bonus, they will keep their full PDF-quality resolution. Only problem is that there are a grand total of around 18 people in the continental United States who use Pages, and none of them are writing a grant with you. Sad. Still better than LaTeX, though. ;)

Saturday, July 11, 2015

Some of my favorite meta-science posts from the blog

I was recently asked to join a faculty panel on writing for Penn Bioengineering grad students, and in doing so, I realized that this blog already has a bunch of thoughts on "meta-science": how to do science, manage time, give a talk, and write. Below are some vaguely organized links to various posts on the subject, along with a couple outside links. I'll also try to maintain this Google Doc with links as well.

Time and people management:
Save time with FAQs
Quantifying the e-mail in my life, 1/2
Organizing the e-mail in my life, 2/2
How to get people to do boring stuff
The Shockley model of academic performance
Use concrete rules to change yourself
Let others organize your e-mail for you
Some thoughts on time management
Is my PI out to get me?
How much work do PIs do?
What I have learned since being a PI

How to do science:
The Shockley model of academic performance
What makes a scientist creative?
Why there is no journal of negative results
Why does push-button science push my buttons
Some thoughts on how to do science
Storytelling in science
Uri Alon's cloud
The magical results of reviewer experiments
Being an anal scientist
Statistics is not science
Machine learning, take 2

Giving talks:
How to structure a talk
http://www.howtogiveatalk.com/
http://www.ibiology.org/ibioseminars/techniques/susan-mcconnell-part-1.html
Figures for talks vs. figures for papers
Simple tips to improve your presentations
Images in presentations
A case against laser pointers for talks
A case against color merges to show colocalization

Writing:
The most annoying words in scientific discourse
How to write fast
Passive voice in scientific writing
The principle of WriteItAllOut
Figures for talks vs. figures for papers
What's the point of figure legends?
Musing on writing
Another short musing on writing

Publishing:
The eleven stages of academic grief
A taxonomy of papers
Why there is no journal of negative results
How to review a paper
How to re-review a paper
What not to worry about when you submit a manuscript
Storytelling in science
The cost of a biomedical research paper
Passive-aggressive review writing
The magical results of reviewer experiments
Retraction in the age of computation

Career development:
Why are papers important for getting faculty positions?
Is academia really broken? Or just really hard?
How much work do PIs do?
What I have learned since being a PI
Is my PI out to get me?
Why there's a great crunch coming in science careers
Change yourself with rules
The royal scientific jelly

Programming:
The hazards of commenting code
Why don't bioinformaticians learn how to run gels?

Friday, April 3, 2015

Theorists give great talks

We just had Rob Phillips come visit Penn and give a talk in the chemistry department. It was great! A few months back, we also had Jane Kondev come give a talk in bioengineering that was similarly a lot of fun. Now, Jane and Rob have a lot in common (both are cool, interesting people), but I think one common thread that links them is that they are both theorists by training. (Both do have strong experimental work happening in their group now, by the way.) I think theorists (at least in our sort of systems biology) give some of the most engaging talks, and I think the reasons why are illustrated in some of the best features of both their presentations.

The first departure from business as usual is in the amount of data presented. In Rob’s case, he presented almost no data from his own lab. Jane’s talk also had a lot of background from other people’s work, and the work he did present always came with heavy references to other literature and findings. This allows them to set up the conceptual issues well, along with their place in the broader context of the science. I think that some people feel like bringing up other work distracts from their own work, and that they don’t have time for it because they have so much of their own to present. I think that Rob and Jane’s talks prove these concerns to be overblown. Rather, I think that their talks feel rich with history and thus significance. Those are good things.

The other main thing I’ve noticed in talks by theorists is that they emphasize the conceptual. Most talks suffer not from a lack of data but from an overabundance of it. Here’s a simple rule: if you’re not going to explain a piece of data, don’t show it. If it’s impossible for the audience to truly grasp how the data you show proves your point, then you may as well not show it and just tell them that it all works out. Oftentimes in bad talks, it’s hard to tell that this is happening because people haven’t even set up the question well.

Which brings me to another nice thing about theorists: they aren’t afraid to delve into what might be called philosophy. For some of us, I think there is maybe a fear that people won’t take us seriously if we muse about the big picture in our talks. I think those fears are ill-founded. Overall, I think biomedical science could do with a little more thinking and a little less doing. Another nice thing about this is that for trainees, it can be very inspiring to think about deeper problems. Isn’t that what got us all into this in the first place?

On a related but peripheral note, I was at a conference a couple years ago and was shocked by what I was hearing from the students and postdocs. I asked one student what they thought about some fundamental question about the field, and they responded with a blank stare as though they had never been asked that question before. Another postdoc I met, when asked about some underpinnings of the field, literally responded with “I just want to get an assay that works and get a bunch of data”. If that’s you, go see a talk by a theorist and get back in touch with your inner scientist!

Sunday, November 23, 2014

The most annoying words in scientific discourse

Most scientific writing and discourse is really bad. Like, REALLY bad. How can we make it better? There are some obvious simple rules, like avoiding passive voice, avoiding acronyms, and avoiding jargon.

I wanted to add another few items to the list, this time in the form of words that typically signify weak writing (and sometimes weak thinking). Mostly, these are either ambiguous, overused, or pointless meta-content just used to mask a lack of real content. Here they are, along with my reasons for disliking them:

Novel. Ugh, I absolutely hate this word. It’s just so overused in scientific discourse, and it’s taken on this subtext relating to how interesting a piece of work is. Easily avoided. Like “Our analysis revealed novel transcript variants.” Just say “new transcript variants”.

Insight. One of the best examples of contentless meta-content. If any abstract says the word insight, nine times out of ten it’s to hide a complete lack of insight. For example: “Our RNA-seq analysis led to many novel insights.” Wait, so there are insights? If so, what are these insights? If those insights were so insightful, I’m pretty sure someone would actually spell them out. More than likely, we’re talking about “novel transcript variants” here.

Landscape. Example of a super imprecise word. What does this mean anyway? Do you mean an arrangement of shrubbery? Or do you mean genome-wide? In which case, say genome-wide. Usually, using the word landscape is an attempt to evoke some images like these:


Now exactly what do these images mean? Speaking of which…

Epigenetic. Used as a placeholder for “I have no idea what’s going on here, but it’s probably not genetic”. Or even just “I have no idea what’s going on here whatsoever”. Or chromatin modifications. Or all of this at once. Which is too bad, because it actually is a useful word with an interesting meaning.

Paradigm. Need I say more?

Robust. Use of the word robust is robust to perturbations in the actual intended meaning upon invoking robustness. :)

Impact. As in “impact factor”. The thing that bugs me about this word is that its broad current usage really derives from the Thomson Reuters calculation of Impact Factor for journal “importance”. People now use it as a surrogate for importance, but it’s always sort of filtered through the lens of impact factor, as though impact factor is the measure of whether a piece of work is important. So twisted has our discourse become that I’ve even heard the word impactful thrown about, like "that work was impactful". It's a word, but a weird one. If something is influential, then say influential. If it’s important, then say important. If an asteroid hits the moon, that’s impact.

These words are everywhere in science, providing muddied and contentless messages wherever they are found. For instance, I’m sure you’ve seen some variant of this talk title before: “Novel insights into the epigenetic landscape: changing the paradigm of gene regulation.”

To which I would say: “Wow, that sounds impactful.”

[Updated to include Paradigm, forgot that one.]
[Updated 12/13: forgot Robust, how could I?]

Sunday, November 9, 2014

My favorite quote about LaTeX

Argh, just finished struggling through submitting a LaTeX document to a journal. And I think I still screwed up and will have to do some more fussing. My only hope (and a fading one at that) is that things will not devolve to the point where I just have to copy the whole damn thing into Google Docs, where you can actually spend your time on, you know, doing real work.

So I just Googled around and found the following page, which has my new favorite quote about LaTeX:
Latex ("LaTeX" if you're pretentious as hell) is the biggest piece of shit in the history of both pieces and shit.
Yes.

(And yes, before you say it, I know what you are going to say.)

Friday, August 1, 2014

How to write fast

Being a scientist means being able to write effectively about your science: papers, grants, e-mails, reviews, blogs, twitters, facebooks, whatever. And being an efficient scientist means being able to write about your science both effectively and fast. Striking this balance is a struggle for most people, and solutions are likely highly personal, but here are a few things I’ve found have worked for me (more interesting/less generic ones towards the end):
  1. Deadlines are your friend. Wait until the last minute and write in a big spurt. I personally feel that the last 10% takes way more than 10% of the time, but actually makes much less than 10% difference in the final outcome (grant, paper, etc.). Being up against a deadline is unpleasant, but cuts down on this relatively low-payoff time.
  2. If you do have to write early for whatever reason, set an artificial early deadline and try to finish it by then as though it were a hard deadline. This has another bonus…
  3. … which is to put the piece of writing away for a week and not think about it, then come back to it. This distance gives you a sufficiently long break that editing your own writing will be much more effective and efficient than if you just edit it continuously.
  4. Don’t be afraid of the blank page. For me, the blank page is a period of reflection and thought. Often, I will look at a blank page for a week, during which time I’ve really thought about what I wanted to say, at which point it all comes out very quickly and relatively coherently. Whenever I force myself to write before I'm ready, I just end up rewriting it anyway.
  5. If you’re having a hard time explaining something in writing, just try to explain it to someone verbally. For me, this really helps me clearly formulate something. Then just write that down and see what happens. Much faster than struggling endlessly with that one troublesome sentence.
  6. Don’t worry about word limits while you’re writing. I’ve found that writing with the word limit in mind makes my writing very confusing and overly compressed because I try to squeeze in too many thoughts in as few words as possible. I find it’s more efficient to just write what I want to say as clearly as possible and then come back and cut as necessary. And be brutal about trimming and don’t look back.
  7. Watch out for “track changes wars”. If you’re writing with other people (and who isn’t these days?), there is a natural tendency to push back against other people’s edits. This can lead to a lot of back and forth about minor points. One way to handle this is to just accept all changes in the document and read it clean. If the issue was a real problem, it will still stand out.
  8. Learn the “templates” for scientific writing. Most scientific writing has a particular form to it, and once you learn that, you have easy formulas for getting ideas out of your mind and onto the page. These templates vary from format to format. For instance, in a paper, often the results section will go something like “Our findings suggested that X. For X to be true, we reasoned that Y could be either A or B. In order to test for Y, we performed qPCR on…” Rinse and repeat. If you find it sounding repetitive, just use your thesaurus: learn the 3-4 common variants for the given sentiment (e.g., “we reasoned”, “we hypothesized”, “we considered whether”) and cycle through them. It’s all rather prosaic, but it will get words on the page. You can channel your inner Shakespeare in revision. Same thing for grants.
  9. Regarding templates for grants, I have basically found it much easier to work from someone else’s grant. Many grants come with only vague guidelines for overall structure, so ask a friend for theirs and try to stick with that. It will save you hours of wondering whether this or that structure or style can be funded. Which reminds me: be sure to ask people who, you know, actually got the grant… :)
  10. Some people really like writing out an outline of the whole thing first. I’ve never really been able to get into that myself. But a few times lately when I’ve really been up against a deadline, I tried what I can perhaps best call a “short form temporary outline”. The idea is that I have to write a paragraph, and it has to say 4 things. Write out a very quick outline just below the cursor with bullet points of these 4 things in a reasonable order. This should just take a couple minutes. Then, well, just start writing them out. If a thought comes to you while writing, just add it to the outline so you remember. It’s sort of like a to-do list for the paragraph. I’ve found this made writing faster because I didn’t feel like I had to try to remember a lot of stuff in my head, thus freeing my mind to just write. Next paragraph, next outline.
  11. [Updated, 8/15]: Forgot this really important one: don't be afraid to just rewrite something wholesale. Sometimes I'll write something that just... sucks. But at least I got it out of my system. Often, in the course of writing it, I will discover what I really meant to say. Better to just start fresh and write it again the right way. It's like renovating an old house: often it would be easier to just tear it down and start over.
Oh, and avoid passive voice. The question of how to reduce the crushing writing load we all face to begin with is perhaps a topic for another blog post... :)