Review of Clarke and Primo

This is my review of Kevin Clarke and David Primo’s new book, A Model Discipline: Political Science and the Logic of Representations, which tackles the role of theoretical and statistical modeling in political science.  Phil Arena has a great review covering the book’s main points. My goal is to avoid duplicating his efforts – so I suggest reading his review.

Since I try to keep a foot in both evolutionary ecology and political science, I have been exposed to the methods of both. I am, to my knowledge, the only non-political-science grad student to attend an EITM summer school, so I may have more insight into the context of their argument. But since most of my methods training has been on the ecology side, this will be somewhat of an outsider's view.

First, I really liked this book and agreed with most of the authors' philosophy of modeling. One of their main points is that the goal of theoretical models is rarely prediction. Most often they are tools for reasoning about the world. They might help evaluate the logical consistency of ideas, define terms and clarify arguments, or identify important new areas of research – but these many uses are not always clear to the non-modeler. For example, in my inaugural blog post, I wrote about how Hamilton's rule is the result of a simple model intended to demonstrate the mechanisms of kin selection, and that the central misunderstanding in E.O. Wilson's crusade against kin selection is treating it as a predictive model.
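(As a reminder – this is the standard textbook statement of the rule, not anything taken from that earlier post – Hamilton's rule says a costly helping behavior is favored when

\[ rb > c, \]

where r is the genetic relatedness between actor and recipient, b is the fitness benefit to the recipient, and c is the fitness cost to the actor. The inequality is a tool for reasoning about when kin selection can operate, not a quantitative prediction machine.)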

The standard story I keep hearing in political science is that the field has taken the "theoretical models as predictive models" view to the extreme. I have been warned, repeatedly, that the field's top journal, APSR, will not publish purely theoretical models without some nod at an empirical "test" of the model. Because of my experience at EITM, I agree with Phil that the relationship between theory and statistics is hardly monolithic.

Primo and Clarke's view of theoretical models would be very uncontroversial in ecology, population biology, population genetics, and related fields. These, like political science, are fields that study complex dynamical systems with important interactions at different levels and a heavy reliance on observational data for large-scale processes. (International relations and ecosystem ecology are both hindered by a sample size of one planet and by the logistical and ethical difficulties of conducting large-scale experiments.)

Clarke and Primo argue that theoretical models are like maps. Maps are designed for a specific purpose, and a good map is one that is useful for that purpose. In the same way that it is weird to argue about whether a map is "true" or "false" (since all maps are false in most respects), it is weird to argue about whether a model is true or false.*

I was surprised by their discussion of empirical models, since Clarke and Primo did not take their argument to its seemingly logical conclusion by arguing for a model selection approach to empirical analysis (as opposed to standard null hypothesis testing – NHT). The NHT view of the world is that models are true or false and the job of science is to reject the false ones. In a null hypothesis test, the first step is to pick a "null" model (which, in practice, is almost always a terrible model) and assume it is true.

The model selection view of the world assumes that all models are false – or incomplete views of the world – but that some models are better than others for specific purposes. Model selection approaches try to distinguish between models based on some criterion of usefulness. One criterion might be out-of-sample prediction. Failing that, one can use criteria based on information theory. Another (important) criterion could be theoretical relevance based on a priori reasoning.
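To make the contrast concrete, here is a minimal sketch (my own, not from the book or from Burnham and Anderson) of an information-theoretic comparison: two candidate regressions are fit to the same toy data and ranked by AIC, using the Gaussian form n·ln(RSS/n) + 2k. The data and variable names are invented purely for illustration.

```python
import numpy as np

def aic_ols(y, X):
    """Gaussian AIC for an ordinary least-squares fit: n*ln(RSS/n) + 2k."""
    X = np.column_stack([np.ones(len(y)), X])     # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares coefficients
    rss = np.sum((y - X @ beta) ** 2)             # residual sum of squares
    n, k = X.shape                                # observations, parameters
    return n * np.log(rss / n) + 2 * k

# Toy data: y depends on x1 but not on x2 (purely illustrative).
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
y = 2.0 * x1 + rng.normal(size=200)

# Rank two candidate (false, incomplete) models; lower AIC is better.
candidates = {"y ~ x1": x1[:, None],
              "y ~ x1 + x2": np.column_stack([x1, x2])}
for name, X_candidate in candidates.items():
    print(name, round(aic_ols(y, X_candidate), 1))
```

The point is simply that neither candidate is declared "true" or rejected against a null; both are treated as incomplete descriptions and ranked by how well they trade off fit against complexity.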

I found this omission surprising, not because I've seen a lot of model selection statistics in political science** (I haven't), but because, in ecology, as soon as you start reading about statistical models not being "true" or "false," this seems to be the next argument. The standard reference for this approach is Burnham and Anderson's Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach. (Here is a short review.)

Overall, I recommend this book to anyone interested in modeling in political science. I found their taxonomy of model types especially helpful (see Phil's review for details). As someone who aspires to both a theoretical and an empirical modeling program in political science, I hope that the authors manage to shift the field's view of modeling more towards their own.

* – They are not the first to make this analogy.  I couldn’t remember whether I first saw it in McElreath and Boyd or Miller and Page – so I looked it up and it was in both.

** – Phil Arena pointed out to me that Kevin Clarke has written about model selection elsewhere.  But that makes it even more surprising to me that it was not in the book.

How are babies like tiny scientists?

A few weeks ago I attended the one-day SkeptiCal Conference (partially organized by my friend Lauren). One of the day’s many interesting speakers was Alison Gopnik, a professor of psychology and author of The Scientist in the Crib and The Philosophical Baby.

Dr. Gopnik researches how babies and young children learn about the world. While the bulk of her SkeptiCal presentation was similar to her TED talk, at the end she discussed recent experiments showing how children use two types of learning – individual experimentation and social learning (that is, learning from others). She also described these experiments in a recent interview:

A couple of recent studies show that preschoolers do something very different if they’re exploring a toy to figure out how it works than if they think somebody’s actually giving them the answer. In a nice experiment that was done at MIT, they gave children a toy to play with that could do lots of different things. You’d punch something and it squeaked, you’d push another button and something else happened, and so on. In one condition the experimenter came in and said, “Oh look, I’ve never seen this toy, let’s see what it can do,” and then bumped into it and it squeaked. In the other condition the experimenter said, “I’ll show you how this toy works.” In the first condition, the children then spontaneously explored everything else the toy could do. Whereas when the experimenter said, “I’m going to show you how this works,” the children just did exactly what the experimenter did, over and over and over again. The findings suggest that children and, presumably, adults, learn quite differently when they’re learning in this spontaneous exploratory way than when they’re learning from a teacher. Now, there are good things about having a teacher who just narrows the range of options you can consider, but there’s also the danger that you’ll wind up just essentially imitating the teacher.

Since I am interested in how the trade-offs between individual and social learning influenced the evolution of humans, I found this experiment interesting for its own sake.

However, this blog post is about the last part of her talk, in which she used this experiment to illustrate how children learn like "little scientists," open-mindedly conducting experiments to discover things about the world. That is, unless this exploration is constrained by others – at which point they cease to think like rational scientists and open the door to irrational pseudoscientific beliefs (which is the opposite of what most SkeptiCal participants would want).

What I found most interesting about this part of her talk is not that she gives so much credit to babies, but that she gives so much credit to scientists.* Her conception of the scientist as an independent, open-minded experimentalist, as opposed to a socially constrained learner, seems, to me, old-fashioned and at odds with the view Thomas Kuhn made popular with his 1962 book The Structure of Scientific Revolutions. In the book, Kuhn argued that scientists largely accept the received views of their field, and only when it becomes overwhelmingly apparent that those views have problems do they change (something he called a "paradigm shift"). As summarized in Kuhn's NYT obituary:

Professor Kuhn argued in the book that the typical scientist was not an objective, free thinker and skeptic. Rather, he was a somewhat conservative individual who accepted what he was taught and applied his knowledge to solving the problems that came before him.

To me, this sounds a lot like Dr. Gopnik’s depiction of a baby…

* – Though, honestly, I think that both scientists and babies are best served by a mixture of individual and social learning.