Tuesday, 24 February 2015

Telling stories about the real world

Science is very good, on the whole, at what it does: establishing predictable facts about the natural world. It's probably fair to say that modern science, in so far as there is such a thing, is better at this than pretty much any other system that humans have ever devised. As scientists, we are often tempted to take other areas of human knowledge gathering to task for not behaving enough like science in their attitude to establishing facts about the world. We berate and bemoan the lack or misuse of evidence. We pour (sometimes deserved) scorn on the apparent flimsiness of other enterprises. In two very interesting posts, drugmonkey discusses why journalism and law can be so unappealing to the scientifically trained. I don't disagree with everything he says, and I think that certainly parts of both could stand to be held to more scientific standards of evidence.
And yet, I am always a little cautious about yelling "be more scientific" at people. Scientists as a whole have a bad case of epistemological "PC gaming master race" syndrome (if I ever make a nerdier analogy, shoot me). We are convinced of the innate superiority of our means of knowing what we know. This view is hubris. On the one hand, as has been shown time and again by historians, philosophers and sociologists of science, it is remarkably difficult to pin down, at any time in history, a single definition of what constitutes the scientific method. What's more, efforts to do so have often led to awkward situations, such as Karl Popper's continuous flip-flopping over whether evolution by natural selection could be considered scientific. (One of the many reasons I dislike Karl Popper.) On the other, there are entire areas of enquiry in which applying the many commonly accepted facets of the "scientific method" (hypothesis testing by experiment, replication) is difficult, impossible, or wrong-headed. And no, I'm not talking about your position on omnipotent beings awkwardly obsessed with judging humans. I'm talking about the two things that are (not coincidentally) crucial in both journalism and courts of law: the reconstruction of historical events and the determination of people's internal states.
Wait, I hear you cry, are you claiming history is not a science? Well, no. What I am claiming (and the philosopher Arthur Schopenhauer got there first) is that one cannot apply an experimental framework to individual point events in the past, and furthermore that the range of applicability of hypothetico-deductive methods is much more limited. And when attempting to ascertain the truth behind a single event, in the past, never to be repeated (such as a current event, or a crime), what are we left with? Testimony, correlation, hearsay. Also known as a case. Also known as a story. Neither Baconian empiricism, nor hypothetico-deductivism, nor Popperian falsificationism, will help us here. At best, they will allow us to evaluate some (though by no means all) of the evidence. In essence, the repeatable experiment is a tool for erasing the singular nature of point events (and it is much more difficult to do than often presented).
This exact problem is one that plagues my original discipline of paleontology. Paleontology is, in many ways, history writ large. Except it is not writ, more sort of left lying around. Many of the events we would like to understand are unobservable, their consequences inferred from disparate sources of evidence observed in the here and now. And the paleontological literature is rife with people grappling with the problem of just how scientific this endeavor is. They run the gamut, from people twisting Popper's theory into an unrecognizable pretzel to accommodate paleontological historicity, to people consigning vast tracts of what is normally considered part of paleo to the dustbin of unscientific speculation. Neither approach is satisfying, and both end up running into absurdity and intellectual sterility. Paleontology is largely historical; as such, the best we can do is marshal evidence for competing narratives, and go with the most plausible. Some of that evidence will fit with more prescriptive definitions of the scientific method; most (and I count most cases of fossil discovery in this category) will not.
As for the reconstruction of the internal states of a person at a given time and place, science isn't even close to figuring that one out. Yet we, as humans, are always in the business of trying to figure out what's going on in another person's head. And, in the West at least, it is literature that has grappled most directly with this problem, through what one could describe as thought experiments, but that we usually call novels. As the philosopher Mary Midgley once wrote:

"That is why literature is such an important part of our lives - why the notion that it is less important than science is so mistaken. Shakespeare and Tolstoy help us to understand the self-destructive psychology of despotism. Flaubert and Racine illuminate the self-destructive side of love. What we need to grasp in such cases is not the simple fact that people are acting against their interests. We know that; it stands out a mile. We need to understand, beyond this, what kind of gratification they are getting in acting this way."

Ironically, two of the authors who most thoroughly embraced this task of literature, Emile Zola and Marcel Proust, both believed that their novels were eminently empirical and, yes, scientific.
Abandoning the notion that a one-size-fits-all epistemological framework derived from the physical sciences can be applied across all areas of possible human knowledge is not the same as saying "anything goes". Rather, it is a requirement that we be more rigorous, more critical, more demanding of the evidence and narratives presented to us, given the constraints of what it is we are trying to understand and know. Indeed, the uncritical acceptance of a superficially science-y framework can be as dangerous for critical thought as any narrative-based understanding of the world (this is what Richard Feynman described as cargo cult science). We should have more than one tool in our epistemological kit. The world of things that are knowable is complex enough to need them.


PS: While researching this post, I found this delightful Schopenhauer quote we can probably all agree on:
"Newspapers are the second hand of history. This hand, however, is usually not only of inferior metal to the other hands, it also seldom works properly."

Monday, 16 February 2015

Silent Ivories

My mother has always had a piano. Playing the piano is central to her idea of herself. The only thing other than a house she has ever taken a loan for is the upright Steinway in our living room. Having "time to play the piano" is a measure of how much control she has over her life. It is, to use modern parlance, her measure of self care. Yet, interestingly, the piano is not relaxing for her in the way that a movie or a book might be. She is always challenging herself to play harder pieces, to get better. It requires effort and energy: more of a discipline than a hobby. And when my mother does not have energy for the piano, then she knows that something else in her life is taking too much of her time.
My relationship to the piano is very different. Its presence and sound are comforting to me, yet I never really played. My instrument was the violin, and our relationship is estranged at best. I have dabbled in playing the piano (over the course of a decade I have taught myself the first two movements of the Moonlight sonata), and whenever I am back home I find time to refresh my memory.
During the year between ending my PhD and starting my postdoc, when I was unemployed and living back with my mother, I started playing the piano more seriously. At my mother's prompting, I took lessons with her piano teacher. I improved noticeably. I began to think of other pieces I would like to learn. And so, when I moved to Ohio, concerned I would be bored, I resolved to buy myself a piano.
I found a good electronic piano on Craigslist, one that was highly rated for its sound. In fact, it is based on sampling a Steinway concert grand. I set it up in our spare room, and got sheet music for the Moonlight (no point in losing the benefit of all that practice) and the next piece I'd resolved to learn, from Tchaikovsky's Seasons cycle. After almost a year, I can play 8 bars. I play the piano maybe a couple of times a month.
On occasion, I feel guilty about this. I enjoy the piano, and I want, on some level, to learn these pieces, and ultimately, one day, to master the third movement of the Moonlight sonata, which is currently far beyond my technical skill. And, of course, I compare myself with my mother, who throughout her always busy, sometimes hectic life has always been able to practice several hours a week. It would be easy to blame science, and its tendency to expand to fill all available space. It would be easy to argue that I could cut out more mindless pursuits (like twitter, or browsing the internet, or playing video games, or watching TV). Certainly, my upbringing was Protestant enough that I must always wonder if I am making the best use of the time allotted to me. And certainly the pressures of being an early career researcher don't help.
But I think also, part of the problem lies in being honest about what is important to us. As I was talking to colleagues over the weekend about the compatibility of science with time consuming hobbies, I came to a realisation. I have decided what matters to me: my work, my fiance, my friends and family, and exercising so that I can enjoy the outdoors in the summer. These are the things, when I look at how I spend my time, that I make time for. It is not worth feeling guilty that I do not play the piano enough. It is simply not important enough to me to make the cut. And conversely, it is clearly important enough to my mother that she will make time for it in the face of other pressures.
Life is difficult enough, and there are enough pressures and demands on our time as scientists, that we should not burden ourselves with feeling obligations towards activities that we do not in fact value that much. Embrace the things you care about, and make time for them. That choice is personal, and yours, and your choices are valid.
Maybe in ten years' time I will have deciphered the Tchaikovsky piece. If that is the time it takes, then that is the time it takes. In the meantime, I will enjoy my fiance's company, invite my friends to dinner, and plan for a summer of camping trips in the Appalachian Mountains.

Tuesday, 10 February 2015

We need to talk about the dissertation

It's been over two years since I picked up the 8 bound copies of my dissertation from the library of Johns Hopkins University. The box weighed a ton. The printing had cost me a fair amount of money I no longer had by that point. I confess that when I asked my mother if she wanted a bound copy and she said "yes", I was more than a little annoyed. That was another 400 sheets of acid-free paper, another 40-dollar binding fee. I dutifully handed my advisor and my department a copy, posted two more to my US-based committee members, and boarded the plane back to the UK with three hard copies, because they took up too much of my limited baggage allowance to be worth putting in the hold. Of those copies, I handed one to my committee member who was in Glasgow, and one came back with me to the US when I started my postdoc. The third is on the table in my mother's office, more or less exactly where she put it two years ago, slowly getting covered by bills and newspaper clippings.
If you've read this post, you'll probably find the sentence above an apt metaphor for the fate of dissertations such as mine. My program still required dissertations to be written in the monographic format, and despite vague encouragement to publish during my PhD, my advisor never pushed me to realise that my focus on my dissertation as a monolithic project was a liability. Thus, none of that 400-page tome was published by the time I finished my program. And to date, all of it that is published is this (unless you count four conference abstracts, which no one does). Other than that, there's a revise-and-resubmit that's been in limbo for almost a year at this point. And when that gets published, that'll be it, I think.
Given the above, you'd probably think I'm kind of sour on monographic dissertations. And I won't deny that the CVs of graduate students from paper-focused programs made me jealous when graduating. Certainly my affection for the object itself is limited. But I am still proud of what's within those pages. That project is solid, cohesive science I would (and do) stand behind and take ownership of. It's grounded in a deep understanding of the issues that drove it, both methodological and theoretical. It leveraged underused resources (museum collections). Moreover, it is the brainchild of my own intellectual development. Thanks to my master's, I was already well versed in the techniques I applied in my PhD, and I knew the kinds of questions I wanted to ask. I chose to work with my advisor because he too was interested in those questions, and because he had access to the resources (fossils) I needed to do my project. So the intellectual process that led to my dissertation I am proud of, and happy with. Like others, I defend the holistic, root-and-branch approach to science that a good monographic dissertation requires. But only up to a point. And the point at which I stop defending the monographic thesis is the point at which it hobbles the already dim career prospects of graduate students. More fundamentally, it is the point at which it perpetuates the lie that your research is valued irrespective of your productivity, which is publication. If graduate school is training for academia, then it must reflect the reality of academia. And that reality is both that your work must be thorough, rigorous, and yours, and that it must be published.
Instead of focusing on sterile arguments about whether or not three papers bound together counts as a dissertation, we need to work on developing a system of awarding graduate degrees that both encourages and develops thorough, original thinking and research in science, and which encourages regular, high quality publication of said research. Rather than end this post on my half baked ideas, I'd like to hear your thoughts on the topic.

Wednesday, 4 February 2015

Bad DIY is not carpentry: on coding in academia

I have a friend whose hobby is woodworking. Half of the basement of his and his wife's house outside Philadelphia is dedicated to his hobby. The large dining room table around which the family gathers was made by him from the cured wood of an old oak tree he salvaged after a storm. The table is beautiful, inlaid with rosewood, with intricate foldaway leaves. It is precise, the result of years of craft and practice in the service of a single result: to make a beautiful, functional table. Curiously, until he retired, his career was physics. He's never done woodwork for money, only for pleasure. But his work is exquisite.
Conversely, I can barely saw straight. I took shop classes after school, where I learnt the rudiments of planing, sawing, and hammering. I made a box from offcuts of cheap wood, and a toy castle, held together with glue and impatience. I would not use the terms woodwork or carpentry to describe what I do, even though in a pinch, I could probably slap together a vaguely functional table. In fact, I once did. It fell apart after a year and is still a source of much hilarity for my more practically inclined friends.
There is, I think, a distinction between the mindsets of woodworking and DIY. Woodworking, though it may have the goal of producing a useful, functional, or beautiful object, is also concerned with the process, materials and skills intrinsic to the craft. A woodworker makes things to become better at woodworking. Conversely, DIY is more utilitarian. The purpose is to produce a functional result, and improvements in skill are valued only in so far as they help achieve the final functional goal.
The point of the above: I think that much of the code and software that is used in modern science fits the definition of DIY, not woodwork.
In our lab, we collect electromyographic data (electrical signals from actively contracting muscles). Because of various properties of the muscles we record from, and because of various properties of the animals we work with, our EMG data require idiosyncratic post-processing before interpretation. To do this post processing, we use a piece of software that was written by a collaborator of my PI probably over a decade ago. Or it is perhaps more appropriate to say, we endure this piece of software. It is unstable, the code is lost, and in fact it is probably written in a legacy language. It was written by an amateur coder whose sole concern was to get it to do the post processing we require. The formatting requirements are fickle, the software is only partially compatible with modern operating systems, and it randomly stops working for reasons we cannot understand. And yet, much like an inexpertly fitted IKEA kitchen, it does the job well enough that we cannot justify the expense of developing something better.
Similarly, my dissertation has about ten pages of Matlab code in it. That code is pretty much where I learnt to write basic data manipulations. It is a nightmare of nested for-loops, because I never figured out how to do logical matrix indexing, and for-loops did the job. The code works, is reasonably well commented, and has meaningful variable names (probably the only good coding practice I have). But it is quasi-useless for any project other than my dissertation, and an actual coder would probably rewrite it from scratch.
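To make the contrast concrete, here is a minimal sketch of the loop-versus-logical-indexing distinction. It is purely illustrative (not code from the dissertation), and uses Python with NumPy as a stand-in for Matlab, since the two share the same boolean-indexing idiom; the threshold and the toy signal are invented for the example.

```python
import numpy as np

# A toy signal and an arbitrary threshold, invented for illustration.
signal = np.array([0.1, 2.5, 0.3, 3.1, 0.2, 4.0])
threshold = 1.0

# The loop style described above: explicit iteration and an if-test.
kept_loop = []
for i in range(len(signal)):
    if signal[i] > threshold:
        kept_loop.append(signal[i])

# Logical (boolean) indexing: build a mask of True/False values,
# then use it to select elements in a single vectorised step.
mask = signal > threshold   # array of booleans, same shape as signal
kept = signal[mask]         # array([2.5, 3.1, 4.0])

# Both routes give the same answer; the second is shorter and clearer.
assert kept.tolist() == kept_loop
```

The same `signal(signal > threshold)` pattern works in Matlab, which is exactly the trick the nested loops were reinventing.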
There's a lot of push for scientists to learn to code. And certainly, the ability to do our job these days requires at least the ability to formulate how we would like our data manipulated and to competently explain what the software we use does. I am radically opposed to black box approaches to data analysis software and overly customised solutions. They are anathema to good quantitative analysis that is sensitive to the specific structure of data and to the proper formulation of hypothesis tests. And yet in encouraging a DIY mentality, we encourage the proliferation of kludgy, inelegant, and unreliable software.
Yet those scientists who dedicate themselves to actual code carpentry often feel denigrated as "methods people". We are all grateful for their software (especially when it saves us the effort of writing a miserable few lines of code), yet do we remember to cite them? Do we defend their contributions to the field? I'm part of a macro-ecology journal club that includes a good number of methods people who take pride in writing efficient, elegant, usable code in the R environment, and these issues come up often.
In an ideal world, we'd make more use of professional software developers. Yet on the other hand, our questions are sufficiently esoteric that I wonder how useful that would be in practice. But I do think we need to be wary of over-celebrating a DIY coding ethos, and instead work to nurture, celebrate, and collaborate with the genuine software carpenters in science.