Sunday, September 14, 2014

University admissions at Ivy Leagues are unfair: wah-wah-wah

Lots of carping these days about university admissions processes. Steven Pinker had some article, then Scott Aaronson had a blog post, both advocating a greatly increased emphasis on standardized testing, because the Ivy League schools have been turning away academically talented but not “well-rounded” students. Ron Unz (referenced in the Pinker article) provides some evidence that Asians are facing the same quota-based discrimination that Jewish people did in the early 20th century [Note: I'm not sure about many parts of the Unz article, and here's a counter. I find the racial/ethnic overtones in these discussions distasteful, regardless of whether they are right or wrong]. Discrimination is bad, right? Many look to India, with its system of very hard entrance exams to select the cream of the crop into the IIT system, and say: why not here?

Yeah. Well, let me let you all in on a little secret: life is not fair. But we are very lucky to live here in the US, where getting rejected from the Ivies is not a death sentence. Aaronson got rejected from a bunch of schools, then went to Cornell (hardly banishment to Siberia, although Ithaca is quite cold), then went on to have a very successful career, getting job offers from many of the same universities that originally rejected him. It’s hard not to detect a not-so-subtle scent of bitterness in his writing on this topic based on his own experience as a 15-year-old with perfect SATs, a published paper and spotty grades, and I would say that holding on to such a grudge risks drawing the wrong lesson from his story. Yes, it is ironic that those schools didn’t take him as an undergraduate. But the lesson is not so much that the overall system is broken as that the system works–it identified his talent, nurtured it and ultimately rewarded him for it.

Look, I also was rejected from almost all the “top” schools I applied to out of high school. I’m probably not nearly as smart as Scott Aaronson or Steven Pinker–I didn’t have perfect SATs or a publication, but I was pretty good academically. Perhaps I suffered because I’m (South) Asian, or my extracurriculars consisted of ping-pong club and playing video games, dunno. Whatever, I think it’s pretty silly if you’re still dwelling on that at this point in life. I was beyond lucky to get into Berkeley, which was, at least at the time, certainly less selective than a Harvard or an MIT. I got a great education at Berkeley, filled with many rich experiences that I think have shaped me into who I am (and that I almost certainly wouldn’t have had at a place like Harvard or MIT). I got into graduate school, again by the skin of my teeth because of a poor standardized test result, and have been reasonably successful since, although I cannot claim to have gotten faculty offers from universities that rejected me as an undergrad–I got exactly one faculty offer, and I can’t even remember if I applied to Penn for undergrad (if I did, I got rejected). Some could argue that I have been more successful than others from my high school who got into more selective schools, although measuring success in such narrow and ultimately self-aggrandizing terms is inherently rather silly. Anyway, so I have had a fair amount of rejection, but I’m fine–I'm lucky enough to have a job I love, and I’m doing okay at it. I think you can be a productive member of society without having gotten into Harvard or Princeton. Larry Page and Sergey Brin went to Michigan and Maryland, respectively, and I think they’re doing okay, too.

Those who look elsewhere, to places like India, have it wrong, too. The IITs are rightly regarded as the crown jewels of Indian education. The problem is that the next tier down is not nearly so strong, and so it does not nurture the talents of all those who fell just below the cutoff for whatever reason. So all those people who don’t manage to do as well on that one entrance exam have far less access to opportunities than they would here. Despite these exams, India is hardly what one would call a meritocratic society. So again, I would not consider India a source of inspiration.

I understand the allure of something objective like an SAT test. The problem is that beyond a certain bar, such tests just don’t provide much information. There are tons of kids with very high SATs. I can tell you right now that my SATs were not perfect, but I’m pretty sure I’m “smarter” than some of my cohort who did get perfect SATs. I did terribly on the math subject GRE–I’m guessing by far the worst in my entering graduate school class–which almost scuppered my chances of getting into graduate school, but I don’t think I was clearly the lowest-performing student in my cohort. At the graduate level, it is clear that standardized tests provide essentially no useful predictive information.
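
To put the point about tests not telling you much beyond a certain bar in slightly more concrete terms, here is a toy simulation (a minimal sketch with made-up numbers, nothing to do with actual SAT or GRE data): a noisy score that tracks an underlying trait quite well across the whole population carries far less information once everyone you are comparing is already above a high cutoff, a phenomenon statisticians call range restriction.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    ability = rng.normal(0, 1, n)            # hypothetical latent "smarts"
    score = ability + rng.normal(0, 0.5, n)  # test score = smarts + measurement noise

    # How well does the score track ability overall vs. among top scorers only?
    overall_r = np.corrcoef(ability, score)[0, 1]
    cutoff = np.quantile(score, 0.99)        # consider only the top 1% of scorers
    top = score >= cutoff
    top_r = np.corrcoef(ability[top], score[top])[0, 1]

    print(f"correlation in the full population: {overall_r:.2f}")
    print(f"correlation among top scorers only: {top_r:.2f}")

With these particular (arbitrary) noise levels, the correlation drops from roughly 0.9 in the full population to roughly 0.5 among the top 1% of scorers: same test, much less information once you have already selected on it.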

I think we’ve all seen the kid with the perfect grades from the top university who flames out in grad school, or the kid from a much less prestigious institution with mixed grades who just nails it. Moreover, as anyone who has worked with underrepresented minorities will tell you, their often low standardized test scores DO NOT reflect their innate abilities. There are probably many reasons why, but whatever, it’s just a fact. And I think that diversity is a good thing on its own.

So scores are not so useful. The other side of the argument is that the benefits of a highly selective university are immense–a precious resource we must carefully apportion to those most deserving. For instance, Pinker says:
The economist Caroline Hoxby has shown that selective universities spend twenty times more on student instruction, support, and facilities than less selective ones, while their students pay for a much smaller fraction of it, thanks to gifts to the college.
Sure, they spend more. So what. I honestly don’t see that all this coddling necessarily helps students do better in life. Also this:
Holding qualifications constant, graduates of a selective university are more likely to graduate on time, will tend to find a more desirable spouse, and will earn 20 percent more than those of less selective universities—every year for the rest of their working lives.
Yes, there is some moderate benefit, holding “qualifications constant”–I guess their vacations can last 20% longer and their dinners can be 20% more expensive on average. The point is that qualifications are NOT constant. The variance within the cohort at a given selective university is enormous, dwarfing this 20 percent average benefit. The fact is that we just don’t know what makes a kid ultimately successful or not. We can go with standardized testing or the current system or some other system based on marshmallow tests or what have you, but ultimately we just have no idea. Unz assembles evidence that Caltech is more meritocratic, but so far there seems to be little evidence that the world is run by our brilliant Caltech-trained overlords.
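
Just to make that concrete, here is a rough back-of-the-envelope sketch in Python. Everything here is made up except the 20 percent average premium from the quote; the spread of earnings within a cohort (the sigma below) is an assumption, not a measurement.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000
    sigma = 0.6                # assumed spread of log earnings within a cohort (a guess)
    base = np.log(60_000)      # arbitrary baseline salary

    elite = np.exp(rng.normal(base + np.log(1.20), sigma, n))  # 20% higher on average
    other = np.exp(rng.normal(base, sigma, n))

    print(f"average premium: {elite.mean() / other.mean() - 1:.0%}")
    print(f"chance a random 'other' grad out-earns a random 'elite' grad: {(other > elite).mean():.0%}")

With that assumed spread, a random graduate of the less selective school out-earns a random graduate of the selective one a bit over 40 percent of the time, which is the sense in which the within-cohort variance dwarfs the 20 percent average benefit.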

What to do, then? How about nothing? Quoting Aaronson:
Some people would say: so then what’s the big deal? If Harvard or MIT reject some students that maybe they should have admitted, those students will simply go elsewhere, where—if they’re really that good—they’ll do every bit as well as they would’ve done at the so-called “top” schools. But to me, that’s uncomfortably close to saying: there are millions of people who go on to succeed in life despite childhoods of neglect and poverty. Indeed, some of those people succeed partly because of their rough childhoods, which served as the crucibles of their character and resolve. Ergo, let’s neglect our own children, so that they too can have the privilege of learning from the school of hard knocks just like we did. The fact that many people turn out fine despite unfairness and adversity doesn’t mean that we should inflict unfairness if we can avoid it.
A fair point, but one that ignores a few things. Firstly, going to Cornell instead of Harvard is hardly the same thing as living a childhood of neglect and poverty. Secondly, universities compete. If another university can raise its profile by admitting highly meritorious students wrongly rejected by Harvard, well, then so be it. Those universities will improve and we’ll have more good schools overall.

Which feeds into the next, more important point. As I said, it’s not at all clear to me that we have any idea how to select for “success” or “ability”, especially for kids coming out of high school. As such, we have no idea where to apportion our educational resources. To me, the solution is to have as many resources available as broadly as possible. Rather than focusing all our resources and mental energy on “getting it right” at Harvard and MIT, I think it makes much more sense to spend our time making sure that the educational level is raised at all schools, which will ultimately benefit far more people and society in general. The Pinker/Aaronson view is essentially that this is a “waste” of our resources on those not “deserving” of them based on merit. I would counter first that spending resources on educating anyone will benefit our society overall, and second that all these “merit” metrics are so weakly correlated with whatever the hell it is that we’re supposedly trying to select for that concentrating our resources on the chosen few at elite universities is a very bad idea, regardless of how we select those folks. The goal should be to make opportunities as widely available as possible so that we can catch and nurture those special folks out there who may not particularly distinguish themselves by typical metrics, who, I think, are the majority, by the way. A quick look at where we pull in graduate students from shows that the US does a reasonably good job at this relative to other places, a fact that I think is related to many of this country’s successes.

As I said before in the context of grad admissions, if you want to figure out who runs the fastest, there are a couple ways of going about it. You can measure foot size and muscle mass and whatever else to try to predict who will run fastest a priori–good luck with that. Or you can just have them all run in a race and see who runs the fastest. And if you want to make sure you don’t miss the next Usain Bolt or Google billionaire, better make the race as big and inclusive as possible.
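
The same point as a toy sketch, with everything made up: pick the fastest of 1,000 hypothetical runners using a weak proxy score (foot size, muscle mass, whatever), and check how often that pick matches the runner who actually wins the race.

    import numpy as np

    rng = np.random.default_rng(2)
    trials, field = 2_000, 1_000
    hits = 0
    for _ in range(trials):
        speed = rng.normal(0, 1, field)                # true (unobserved) speed
        proxy = 0.3 * speed + rng.normal(0, 1, field)  # weak a priori predictors
        hits += speed.argmax() == proxy.argmax()       # did the proxy pick the true winner?

    print(f"proxy's top pick is actually the fastest runner in {hits / trials:.1%} of trials")

The exact number depends entirely on how strong you assume the proxy is, which is kind of the point: the race itself never gets it wrong, while the prediction usually does.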

Saturday, September 13, 2014

Greatest molecular biologist of all time?

Serena Williams just won her 18th grand slam title, and while I’m not a super knowledgeable tennis person, I think it’s fair to say that she’s the best female tennis player ever. Of course, in these discussions, it always comes down to what exactly one means by best ever. Is it the one who, at peak form, would have won head to head? Well, in that case, I doubt there’s much contest: despite whatever arguments about tennis racket technology improvement, Serena would likely crush anyone else. Is it the most dominant in their era? Is it the one who defines an era, transforming their sport? (Serena wins on these counts as well, I think.)

“Who is the greatest” is a common (and admittedly silly) pastime that physicists and mathematicians tend to play that has many of the same elements as sports (Newton and Gauss, respectively, if I had to pick). Yet curiously, molecular biology doesn’t have quite as much of this. There are certainly heroes (mythical and real) in the story of molecular biology, but there is much less of the absolute deification that you will find at the math department’s afternoon tea. Why?

I think there are a couple of reasons, but one of the big ones is that the golden era of molecular biology has come much more recently in history than that of math and physics. And recent history is different than ancient history in one very important respect: there are just WAY more people. This means that it’s just that much harder nowadays for someone to come up with a good idea and develop it all entirely by themselves. In the time of Newton, there were just not a lot of trained scientists around, and even then, Leibniz came up with calculus around the same time. Imagine the same thing today. Let’s say you formulated the basic ideas of calculus. Your idea would travel across the internet instantaneously to a huge number of smart mathematicians, and for all you know, all the ramifications would get worked out within a very short period of time, perhaps even on a blog. Indeed, think about how many of the great results from the old days were worked out by one person: Maxwell’s equations, Einstein’s theory of relativity, Newton’s laws of motion. Nowadays, mathematical ideas tend to have many names attached, like Gromov-Witten invariants, Chern-Simons theory, etc. Einstein’s general theory of relativity is perhaps an example of this transition: I think I read somewhere that Hilbert actually worked out all the math, but waited for Einstein to present it out of respect. Similarly, quantum mechanics has so many brilliant names associated with it that we can’t really call it “Dirac theory” or “Feynman theory”. It’s just very hard for any one person to develop an idea completely on their own these days.

This is the era that molecular biology came of age in. As such, there are just so many names associated with the major developments that it’s impossible to ascribe any one big thing to any one person, or even a small set of people. And I think the pace is accelerating even further. For instance, consider CRISPR. It has clearly captured the field’s attention at the moment, and I’ve been utterly amazed at how quickly people have adopted it and applied it in so many clever contexts, seemingly overnight.

I think this is actually a wonderful thing about molecular biology and modern science in general. I think the excessive focus on the “genius” deemphasizes that scientific progress is a web of interconnected concepts and findings coming from many sources, and I love thinking about molecular biology in those terms. Although I have to admit that a good old fashioned Newton vs. Einstein debate is a lot of fun!

Sunday, August 31, 2014

My new favorite website

I had a couple of posts earlier about the Fermi paradox, inspired largely by a post on the website waitbutwhy.com. It's now officially my favorite website. Check out some of their other great posts, many characterized by mind-blowing depictions of scale, plus other fun stuff about baby names, trips to Japan and Russia, etc. Anyway, great site!

Saturday, August 30, 2014

Do I want molecular biology to turn into physics?

I just read this little Nature News feature about the detection of low-energy neutrinos produced during the fusion reactions that power the Sun. I don't know much nuclear physics, but the article says that fusion in the Sun begins with two protons (hydrogen nuclei) fusing into deuterium, and the conversion of one proton into a neutron produces low-energy neutrinos, but nobody had been able to detect them until now because their low energy means their signal gets swamped out.

The experiment itself sounds incredible:
The core of the Borexino experiment features a nylon vessel containing 278 tonnes of an ultrapure benzene-like liquid that emits flashes of light when electrons are scattered by neutrinos. The liquid was derived from a crude-oil source nearly devoid of radioactive carbon-14, which can hide the neutrino signal. The detector fluid is surrounded by 889 tonnes of non-scintillating liquid that shields the vessel from spurious radiation emitted by the experiment's 2,212 light detectors.
That is some BIG science! Clearly an experimental triumph.

But it's also a triumph of theory. Theory in physics is so amazing. Can't tell you how many articles about physics have some version of the following line:
While the detection validates well-established stellar fusion theory, future, more sensitive versions of the experiment could look for deviations from the theory that would reveal new physics.
Not only is theory qualitatively right in physics, it usually provides quantitative predictions with an accuracy that seems utterly preposterous from the perspective of a molecular biologist:
Borexino can measure the flux of low-energy neutrinos with a precision of 10%. Future experiments could bring that down to 1%, providing a demanding test of theoretical predictions and thus potentially uncovering new physics.
For fellow quantitative biologists, it is beyond our wildest fantasies to have strongly predictive theories and models like these. Usually, we're pretty happy if we can get the sign of the prediction correct! That said, there is also something troubling in these articles: all the little hints like "more sensitive... could... reveal new physics". I think the operative word is "could". Mostly, it feels like our understanding of many fundamental physical processes is so deep and so accurate that there aren't many surprises left out there (except for that whole dark matter/energy thing...).

I think I'm overall really happy that biology has surprises coming out all the time that make us reconsider our basic understanding of how things work. I do think that molecular biology suffers a bit from "let's find the latest molecule that breaks the dogma" syndrome instead of focusing more effort on systematizing what we do know, but for the most part, I love the energy that comes from the huge amount of unknowns in the field and the huge challenge that comes with the effort to make molecular biology as predictive as physics. Of course, there's no reason to believe a priori that such predictivity is possible. But that's what makes it fun!

Saturday, August 23, 2014

Is academia really broken? Or just really hard?

(Second contrarian post in a row. Need to do some more positive thinking!)

Scarcely a day goes by when I don’t read something somewhere on the internet about how academia is broken. Usually, this centers around peer review of papers, getting an academic job, getting grants and so forth. God knows I’ve contributed a fair amount of similar internet-fodder myself. And just for the record, I absolutely do think that many of the systems that we have in place are deeply flawed and could do with a complete overhaul.

But what do all these hot-button meta-science topics have in common? Why do they engender such visceral reactions? I think they are all about the same basic underlying issue, namely competition for limited resources (spots in high impact journals, academic jobs, grant funding). I think we can and should fix the processes by which these resources are apportioned. But there’s also no getting around the fact that there are limited resources, and as such, there will be a large number of people dissatisfied with the results no matter what system we choose to use.

Take peer review of papers. Colossal waste of time, I agree. Personally, the best system I can envision is one where everyone publishes their work in PLOS ONE or equivalent with non-anonymous review (or, probably better, no review), then “editors” just trawl through that and publish their own “best of” lists. I’m sure you have a favorite vision for publishing, too, and I’m guessing it doesn’t look much like the current system–and I applaud people working to change this system. In the end, though, I anticipate that even if my system were adopted, everyone (including me) would still be complaining about how such-and-such hot content aggregator is not paying attention to the particular groundbreaking results they put up on bioRxiv. The bottom line is that we are all competing for the limited attentions of our fellow scientists, and everyone thinks their own work is more important than it probably is, and they will inevitably be bummed when their work is not recognized for being the unique and beautiful snowflake that they are so sure it is. Groundbreaking, visionary papers will still typically be under-recognized at the time precisely because they are breaking new ground. Most papers will still be ignored. Fashionable and trendy papers will still be popular for the same reason that fashionable clothes are–because, umm, that’s the definition of fashion. Politics will still play a role in what people pay attention to. We can do pre-publication review, post-publication review, no review, more review, alt-metrics, old-metrics, whatever: these underlying truths will remain. It’s worth noting that the same sorts of issues are present even in fields with strong traditions of using pre-print servers and far less fetishization of publishing in The Glossies. I think it's the fear and heartbreak associated with rejection by one's peers (either by reviewers or by potential readers) that is the primary underlying motivation for people to consider alternative approaches to publishing–it certainly is for me. We should definitely consider and implement alternatives, but I think it's worth considering that the anguish that comes from nobody appearing to appreciate your work will always be present because other people's attention is a limited and precious resource that we are all fighting for. [Update 8/25: same points made here and here by Jeremy Fox]

For trainees, the other “great filter event” they probably experience is getting a faculty position. Yes, the system is probably somewhat broken (in particular with gender/racial disparities that we simply must address), although compared to peer review of papers, search committees are far more deliberate in their decision making, precisely because the stakes are so much higher. Yes, we can and should encourage and support students considering other career paths. I guess what I’m saying is that even if everyone went into science with their eyes wide open, with all the best mentoring in the world, the reality is that there are more dreamers than dream jobs available. That means many people who feel like they deserve such a position (and certainly many of them do) are not going to get one. And they probably won’t be happy about it.

(Sidebar about career path stuff: to be frank, most of the trainees I’ve met are pretty realistic about their chances of getting a faculty position and have many other plans they are considering as well, and so I think some of the “I’m not getting support and advice about other career choices” meme is overblown, especially these days. We can blame the “system” for somehow making it seem like doing something other than academics is a failure, and there is definitely some truth to that. At the same time, I think it’s fair to say that many people do a PhD because being a scientist was a long-held dream from childhood, and so if we’re being totally honest, at least some of the sense of failure comes from within. It’s a lot easier to say abstractly that we should be realistic with trainees and manage expectations and so forth than to actually look someone in the eye and tell them to their face that they should give up on their dream. I agree that this is the sort of hard stuff PIs should do as part of their jobs–I’m just saying it’s not as easy as it is sometimes made out to be. And yes, I’ve personally experienced both sides of this particular coin.)

Look, nobody likes this stuff. Rejecting is about as much fun as being rejected, and I FULLY support all efforts to make our scientific processes better in every possible way. All I’m saying is that even the best, most utopian system we can think of will suffer from inequities, politics, fashions, etc. because that is just human nature. The current systems are, after all, largely run by scientists, and so we really have nobody to blame but ourselves. I realize it’s much easier to blame Spineless Editor From Fancy Journal, Nasty Reviewer with a Bone to Pick, Crusty Old Guy on the Hiring Committee, or Crazy Grant Reviewer with a Serious Mental Health Issue, and I’ve for sure blamed all those people myself when I have failed at something. Maybe I was right, or maybe I was wrong. I’m pretty sure it’s mostly a rationalization that lets me keep my chin up in what can sometimes be a fairly demoralizing line of work. Science is a human endeavor. It will be as good and as bad as humans are. And when the chips are down and there’s not enough to go around, that can bring out both the best and the worst in us.

Sunday, August 17, 2014

Another approach to having data available, standardized and accessible: who cares?

I once went to a talk by someone who spent most of their seminar talking about a platform they had created for integrating and processing data of all different kinds (primarily microarray). After the talk, a Very Wise Colleague of mine and I were chatting with the speaker, and I said something to the effect of “Yeah, it’s so crazy how much effort it takes to deal with other people’s datasets” and both the speaker and I nodded vigorously while Very Wise Colleague smiled a little. Then he said, “Well, you know, another approach to this problem is to just not care.” Now, Very Wise Colleague has forgotten more about this field than I’ve ever learned (times 10), so I have spent the last several years pondering this statement. And as time has gone on and I’ve become at least somewhat less unwise, I think I have largely come to agree with Very Wise Colleague.

I realize this is a less than fashionable point of view these days, especially amongst the “open everything” crowd (heavy overlap with the genomics crowd). I think this largely stems from some very particular aspects of genomics data that are dangerous to generalize to the broader scientific community. So let’s start with a very important exception to my argument and then work from there: the human genome. I think our lab uses the human genome on pretty much a daily basis. Mouse genome as well. As such, it is super handy that the data is available and easily accessed and manipulated because we need the data as a resource of specific important information that does not change or (substantially) improve with time or technology.

I think this is only true of a very small subset of research, though, and leads to the following bigger question: when The Man is paying for research, what are they paying for? In the case of the genome, I think the idea is that they are paying for a valuable set of data that is reasonably finalized and important to the broader scientific endeavor. The same could be said for genomes of other species, or for measuring the melting points of various metals, crystal structures, motions of celestial bodies, etc.–basically anything in which the data yields a reasonably final value of interest. For most other research, though, The Man is paying us to generate scientific insight, not data. Think about virtually every important result in biomedical science from the past however long. Like how mutations to certain genes cause cells to proliferate uncontrollably (i.e., genes cause cancer). Do we really need the original data for any reason? At this point, no. Would anyone at the time have needed the original data for any reason? Maybe a few people who wanted to trade notes on a thing or two, but that’s about it. The main point of the work is the scientific insight one gains from it, which will hopefully stand the test of time. Standing the test of time, by the way, means independent verification of your conclusions (not data) in other labs in other systems. Whether or not you make your data standardized and easily accessible makes no real difference in this context.

I think it’s also really important before building any infrastructure to first think pretty carefully about the "reasonably final" part of reasonably final value of interest. The genome, minor caveats aside, passes this bar. I mean, once you have a person’s genome, you have their sequence, end of story. No better technology will give them a radically better version of the sequence. Such situations in biology are relatively rare, though. Most of the time, technology will march along so fast that by the time you build the infrastructure, the whole field has moved on to something new. I saw so many of those microarray transcriptome profile compendiums and databases that came out just before RNA-seq started to catch on–were those efforts really worthwhile? Given that experience, is it worth doing the same thing now with RNA-seq? Even right now, although I can look up the HeLa transcriptome in online repositories, do I really trust that it’s going to give me the same results that I would get on my HeLa cells growing in my incubator in my lab? Probably just sequence it myself as a control anyway. And by the time someone figures this whole mess out, will some new tech have come along making the whole effort seem hopelessly quaint?
Incidentally, I think the same sort of thinking is a pretty strong argument that if a line of research is not going to give a reasonably final value of interest for something, then you’d better try to get some scientific insight out of it, because purely as data, the whole thing will likely be obsolete in a few years.

Now, of course, making data available and easily shared with others via standards is certainly a laudable goal, and in the absence of any other factors, sure, why not, even for scientific insight-oriented studies. But there are other factors. Primary amongst them is that most researchers I know maintain all sorts of different types of data, often custom to the specific study, and to share means having to in effect write a standard for that type of data. That’s a lot of work, and likely useless as the standards will almost certainly change over time. In areas where the rationale for interoperable data is very strong, then researchers in the field will typically step up to the task with formats and databases, as is the case with genomes and protein structures, etc. For everything else, I feel like it’s probably more efficient to handle it the old fashioned way by just, you know, sending an e-mail–I think personal engagement on the data is more productive than just randomly downloading the data anyway. (Along those lines, I think DrugMonkey was right on with this post about PLOS’s new and completely inane data availability policy.) I think the question really is this: if someone for some reason wants to do a meta-analysis of my work, is the onus on me or them to wrangle with the data to make it comparable with other people’s studies? I think it’s far more efficient for the meta-analyzer to wrangle with the data from the studies they are interested in rather than make everyone go to a lot of trouble to prepare their data in pseudo-standard formats for meta-analyses that will likely never happen.

All this said, I personally do think that making data generally available and accessible is a good thing, and it’s something that we’ve done for a bunch of our papers. We have even released a software package for image analysis that hopefully someone somewhere will find useful outside of the confines of our lab. Or not. I guess the point is that if someone else doesn’t want to use our software, well, that’s fine, too.

Thursday, August 14, 2014

An argument as to why the great filter may be behind us

A little while back, I read a great piece on the internet about the Fermi Paradox and the possibility of other life in our galaxy (blogged about it here). To quickly summarize, there are tons of earth-like planets out there in our galaxy, and so a pretty fair number of them likely have the potential to harbor life. If we are just one amongst the multitudes, then some civilizations must have formed hundreds of millions or billions of years ago. Now, there’s a credible argument to be made that a civilization that is a few hundred million years more advanced than we are should actually have developed into a “Type III” civilization that has colonized the entire galaxy (this gets into the somewhat spooky concept of the von Neumann probe). The question, then, is why we haven’t actually met any aliens in a galaxy that seemingly should be teeming with life.

There are two general answers. One is that life is out there, but we just haven’t detected it yet, and that online piece does a good job of going through all the possible reasons why we might not yet have detected any life out there. But the other possibility, and the one that I think is frankly a bit more plausible, is that there aren’t any Type III civilizations out there. Yet. Will we be the first? That’s what this piece by Nick Bostrom is all about. The idea is that somewhere in the history of a Type III civilization is an event known as the great filter. This is some event during the course of a civilization’s development that is exceptionally rare, thus providing a great filter between the large number of potential life-producing worlds out there and the complete and utter radio silence of the galaxy as we know it. What are candidates for the great filter? Well, the development of life itself is one. Maybe the transition from prokaryotic life to eukaryotic life. Or maybe all civilizations are doomed to destroy themselves. So in many ways, the existential question facing humanity is whether this great filter is behind us (yay!) or ahead of us (uh-oh!). One fun point that Nick Bostrom makes is that it’s a good thing we haven’t yet found life on Mars. If we did find life on Mars, that would mean that the formation of life is not particularly rare, which would mean it cannot be the great filter event. The more complex the life we found on Mars, the worse it would be, because that would eliminate an ever greater number of potential great filter candidates behind us, making it more and more likely that the great filter is ahead of us. Ouch! So don’t be rooting for life on Mars. But while the presence of life on Mars would likely indicate that the great filter is ahead of us, the absence of such life doesn’t say anything, and certainly doesn’t prove that the great filter is behind us. Hmm.

So for a while, I thought this was a classic optimist/pessimist divide: if you’re an optimist, you believe the filter is behind us; if you’re a pessimist, ahead of us. But I think there’s actually a rational argument to be made that it’s behind us. Why? Well, I think there are two possible categories of great filter events ahead of us. One is destruction of all life by outside forces. These could be asteroid impacts, gamma ray bursts, etc. Bostrom makes a good argument against these being great filters because a great filter has to be something that is almost vanishingly rare to get past. So even if only 1 in 1,000 civilizations made it past these asteroids and bursts and whatever, it’s still not a great filter, given the enormous number of potentially life-sustaining planets out there. The other category of filter events (which is in some ways more depressing) consists of those that basically say that intelligent life is inherently self-destructive, along the lines of “we cause global warming and kill ourselves”, or global thermonuclear war, etc. This is the pessimist’s line of argument.

Here’s a statistical counterargument in favor of the filter being behind us, or at least against the self-destruct scenario. Suppose that civilizations are inherently self-destructive and that the filter event is ahead of us. Then I would argue that we should see the remnants of previous civilizations on our planet. The idea is that as long as a civilization’s self-destruction doesn’t cause the complete and total annihilation of our planet (which I think is unlikely; more on that in a bit), conditions would be favorable for life to evolve again until it hits the filter again. And again. And again. Statistically speaking, it would be very unlikely for us, right now, to be the very first in this series of civilizations. Possible, but unlikely.
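
Just to make the arithmetic behind "possible, but unlikely" explicit, here is a minimal sketch with completely made-up numbers; both the recovery time after a collapse and the length of the window in which such cycles could have happened are placeholders, not estimates.

    # All numbers are arbitrary placeholders, just to make the arithmetic explicit.
    recovery_time_myr = 100    # assumed time for intelligent life to re-evolve after a collapse
    window_so_far_myr = 500    # assumed span over which Earth could have hosted such a cycle

    cycles_possible = window_so_far_myr // recovery_time_myr
    p_we_are_first = 1 / cycles_possible   # each cycle equally likely to be the one we're in

    print(f"civilization cycles that could have fit so far: {cycles_possible}")
    print(f"chance that ours happens to be the very first: {p_we_are_first:.0%}")

The particular 20 percent that falls out of these placeholder numbers is not the point; the point is that if several cycles could have fit into the window, being the very first one is the least likely place to find ourselves.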

Now, this argument relies on the notion that whatever these potential future filter events are, they don’t prevent the re-evolution of intelligent life. I think this is likely to be the case. Virtually every such candidate we can think of would probably destroy us, maybe even most life, but it’s hard to imagine them killing off all life on earth, permanently. Global warming? It’s been hot in the earth’s past, with much higher levels of CO2, and life thrived, probably would again. Nuclear war or extreme pollution? Might take a billion or two more years, but eventually, intelligent cockroaches would be wandering the earth in our place. Remember, it doesn’t have to happen overnight. I think there are very few self-destruct scenarios that would lead to COMPLETE destruction–all I can think of are events that literally destroy the planet, like making a black hole that eats up the solar system or something like that. I feel like those are unlikely.

So where does that leave us? Well, I can think of two possibilities. One is that we are not destined for self-destruction, but that the “filter” event is one that just prevents us from colonizing the galaxy. Given our current technological trajectory, I don’t think this is the case. Thank god! Stasis would just feel so… ordinary. The other much more fun possibility is that we are the first ones past the great filter, and we’re going to colonize the galaxy! Awesome! Incidentally, I’m an optimist and an (unofficial) singularitarian. So keep that in mind as well.

So what was the great filter, if it really is behind us? I personally feel like the strongest candidate is the development of eukaryotic life (basically, the development of cells with nuclei). You can get some sense for how rare something is by seeing how long it took to happen, given that conditions aren’t changing. This is hard, because conditions are always changing, but still. Take the development of life itself. Maybe a couple hundred million years? That’s a long time, but not that long, and moreover, conditions on early Earth were changing a lot, so it could be that it didn’t take very long at all once the conditions were right. But eukaryotic life? Something like 1.5-2 billion years! Now that’s a long time, no matter how cosmic your timescale. And the “favorable conditions” issue doesn’t really apply: presumably the conditions favorable to eukaryotic life aren’t really any different from those for prokaryotic life, since it's just different rearrangements of the same basic stuff. So prokaryotic life just sat around for billions of years until the right set of cells bumped into each other to make eukaryotic life. Seems like a good candidate for a great filter to me.
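
To put some very rough numbers on that waiting-time intuition, here is a toy model that treats each step as a random event with a constant chance per unit time and asks what fraction of Earth-like planets would clear it within an assumed 5-billion-year habitable window. The waits in the first two rows are the rough Earth figures from above; the third row is a hypothetical, much harder step thrown in for comparison.

    import math

    window_gyr = 5.0   # assumed habitable lifetime of an Earth-like planet

    def fraction_passing(typical_wait_gyr):
        """Fraction of planets that clear a step, assuming a constant chance per unit time."""
        return 1.0 - math.exp(-window_gyr / typical_wait_gyr)

    steps = [
        ("origin of life (~0.2 Gyr on Earth)", 0.2),
        ("eukaryotic cells (~2 Gyr on Earth)", 2.0),
        ("a hypothetical truly filter-like step", 200.0),
    ]
    for name, wait in steps:
        print(f"{name:40s} -> {fraction_passing(wait):6.1%} of planets get past it")

The longer the typical wait relative to the window, the fewer planets ever make it past the step, which is the sense in which a multi-billion-year wait is at least consistent with a filter-like step.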

Incidentally, one of the things I like about thinking about this stuff is how it puts life on earth in perspective. Given all the conflicts in the news these days, I can’t help but wonder whether, if we all thought more about our place in the universe, we’d stop fighting with each other so much. We should all be working to better humanity and become a Type III civilization! The wisdom of a fool, I suppose...