Interpreting interactions in multivariate regressions

Warning: if you are not interested in data analysis, avert your eyes!  What follows could not be more boring.  It’s actually pretty boring to me, and I wrote it.

Lately I’ve been getting a fair number of requests to review papers.  The last three have made the same error regarding interactions in multivariate regression, so I thought I would write a quick note about it that my loyal reader(s) can share with their friends/students/enemies as need be (or correct me if I am wrong!).  Specifically, all three papers have interpreted the coefficients of the main effects and the interactions in a model, which is problematic in most cases.

Let’s consider an example.  Using data that are unfolding before me, let’s say you are interested in how affectionate domestic cats are in relation to ambient temperature.  You observe your study system intently and develop an a priori hypothesis that cats grow increasingly affectionate as it gets colder, but that this relationship disappears in the presence of a dog because they are too scared to be affectionate.  Your regression model should then look like this:

affection = temperature + dog_presence + temperature*dog_presence

Note that the main effects must be present in a model with an interaction so that not just the slope but also the intercept can vary between the “dog_presence” categories (looking at the figure, it’s clear the two groups shouldn’t share an intercept).
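As an aside, most formula interfaces expand this for you. A minimal sketch using Python’s patsy library (the formula engine behind statsmodels; R’s formula syntax behaves the same way), with a tiny made-up data frame just to show the expansion:

```python
import pandas as pd
from patsy import dmatrix

# Tiny hypothetical data frame; the values are illustrative only
df = pd.DataFrame({
    "temperature": [5.0, 10.0, 15.0, 20.0],
    "dog_presence": [0, 1, 0, 1],
})

# The '*' operator expands to both main effects plus their interaction,
# so the intercept and the slope can both differ between the groups
X = dmatrix("temperature * dog_presence", df)
print(X.design_info.column_names)
# ['Intercept', 'temperature', 'dog_presence', 'temperature:dog_presence']
```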

[Figure: The influence of temperature on a cat’s affection for me is mediated by the dog.]

You fit a simple linear model to these data and get the following output:

                    Estimate     Std. Error   p-value
(Intercept)         20.75000     1.38141      <0.001
temperature         -0.20714     0.01802      <0.001
dog_presence       -17.24286     1.95361      <0.001
temperature*dog      0.18000     0.02549      <0.001

All the coefficients are highly significant, but what does that mean?  When you have an interaction in a model, the coefficient of a main effect represents its effect on the response variable when the other main effect is set to zero.  So in this case, the coefficient for dog_presence represents the effect of a dog when temperature is zero, a condition for which we don’t even have data.  As such, the value of the coefficient and its significance are not meaningful and should not be interpreted.  Likewise, the temperature coefficient (-0.207) is the slope only when no dog is present; with a dog present the slope becomes -0.207 + 0.180 = -0.027.  In retrospect this makes a lot of sense – when we include interactions we are explicitly assessing whether a covariate’s effect is conditional upon another covariate, so a single p-value can’t tell you whether the main effect is sometimes significant and sometimes not.  Even the p-value for the coefficient on the interaction term is not always meaningful, for similar reasons.  You can have an important interaction with a statistically insignificant interaction term.
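To make the conditioning concrete, here is a sketch in Python’s statsmodels (the post never names its software, and the simulated data below merely mimic the table’s coefficients) showing that the dog-present slope is the main effect plus the interaction, not the main effect alone:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 100
temperature = rng.uniform(0, 35, n)      # ambient temperature, deg C
dog_presence = rng.integers(0, 2, n)     # 0 = no dog, 1 = dog present

# Simulated affection scores roughly matching the table's coefficients:
# without a dog, affection falls with temperature; with a dog, nearly flat
affection = (20.75 - 0.207 * temperature
             - 17.24 * dog_presence
             + 0.18 * temperature * dog_presence
             + rng.normal(0, 1, n))
df = pd.DataFrame({"affection": affection,
                   "temperature": temperature,
                   "dog_presence": dog_presence})

fit = smf.ols("affection ~ temperature * dog_presence", df).fit()
b = fit.params
# Slope of temperature when no dog is present (dog_presence = 0):
slope_no_dog = b["temperature"]
# Slope with a dog present is the main effect PLUS the interaction:
slope_dog = b["temperature"] + b["temperature:dog_presence"]
print(f"no dog: {slope_no_dog:.3f}, dog: {slope_dog:.3f}")
```

Neither coefficient on its own describes “the” effect of temperature; each is a slope under one specific condition.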

It sounds as if there is no good way to evaluate the importance of a potential interaction, but that’s not the case.  You can compare models with and without interactions using AIC to determine whether the inclusion of an interaction is parsimonious.  In conjunction with that, you can and should produce plots of the marginal effects of each covariate across a range of biologically relevant values so a reader can get a sense of the strength of the interaction.  You can also further educate yourself on the matter so you are not relying on me as your guide to how to correctly analyze data (that could be dangerous).  I suggest the following paper, which was the basis for much of this post:

Brambor, T., W.R. Clark, and M. Golder. 2006. Understanding interaction models: improving empirical analyses. Political Analysis 14:63-82.
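To illustrate the AIC-comparison-and-marginal-predictions recipe described above, here is a self-contained sketch in Python’s statsmodels with simulated data (the real analysis could equally be done in R):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 100
temp = rng.uniform(0, 35, n)
dog = rng.integers(0, 2, n)
# Simulated affection scores with a genuine temperature-by-dog interaction
affection = 20.75 - 0.21 * temp - 17.2 * dog + 0.18 * temp * dog + rng.normal(0, 1, n)
df = pd.DataFrame({"affection": affection, "temp": temp, "dog": dog})

additive = smf.ols("affection ~ temp + dog", df).fit()
interact = smf.ols("affection ~ temp * dog", df).fit()
print(f"AIC additive:    {additive.aic:.1f}")
print(f"AIC interaction: {interact.aic:.1f}")
# A drop of more than ~2 AIC units favors keeping the interaction

# Marginal predictions: affection across temperature at each dog level,
# which is what you would plot for the reader
grid = pd.DataFrame({"temp": np.tile(np.linspace(0, 35, 8), 2),
                     "dog": np.repeat([0, 1], 8)})
grid["predicted"] = interact.predict(grid)
print(grid)
```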


6 thoughts on “Interpreting interactions in multivariate regressions”

  1. Thanks Andrew for the excellent description! We should be talking about these things more often. Keep them coming! Based on the data, I’d predict you were getting an affection score of about 2 when that picture was taken – though the disapproving expression on the cat’s face says it all…

    My default for these types of analyses is to run the candidate models, rank them with AICc, calculate model average predictions using all models, and then plot the model average predictions for each level of the factors that are supported (in interactions or not). With the packages available in R, this takes about as much time as running one simple model! Package AICcmodavg makes the model comparisons so easy, and lattice with latticeExtra make plotting predictions with many levels a snap. Dr. Cox, I know you probably do this too, but I am surprised more people don’t adopt this relatively straightforward approach to linear model analysis.

    1. Thanks Ray! You have described my approach to a tee, including model averaging across the entire candidate set and focusing on predictions rather than parameter estimates (though I should say that parameter estimates are often informative). I have only recently transitioned to R and I was not aware of lattice and latticeExtra – I’ll have to check them out.

  2. Well both of you have given me some serious fodder for thought. I’m working on some things right now, in the midst of trying to transition to R (which sometimes comes out as GRrrrr). I too will have to have a look at AICcmodavg and lattice/latticeExtra. I’d heard some other rumblings about lattice but haven’t explored it yet. Thanks both. And Dr. C, keep up the good work. This is informative stuff.
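For readers curious about the arithmetic behind the AICcmodavg workflow the comments describe (rank by AICc, convert to Akaike weights, average predictions), here is a minimal hand-rolled Python sketch with made-up data; the R package automates steps like these:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 80
temp = rng.uniform(0, 35, n)
dog = rng.integers(0, 2, n)
affection = 20.7 - 0.21 * temp - 17.2 * dog + 0.18 * temp * dog + rng.normal(0, 1, n)
df = pd.DataFrame({"affection": affection, "temp": temp, "dog": dog})

# Candidate set, as in the comment: with and without the interaction
models = {
    "temp + dog": smf.ols("affection ~ temp + dog", df).fit(),
    "temp * dog": smf.ols("affection ~ temp * dog", df).fit(),
}

# AICc = AIC + 2k(k+1)/(n-k-1), the small-sample correction;
# k counts slopes + intercept + the residual variance
def aicc(fit):
    k = fit.df_model + 2
    return fit.aic + 2 * k * (k + 1) / (n - k - 1)

scores = {name: aicc(f) for name, f in models.items()}
best = min(scores.values())
# Akaike weights: relative support for each candidate model
raw = {name: np.exp(-(s - best) / 2) for name, s in scores.items()}
total = sum(raw.values())
weights = {name: v / total for name, v in raw.items()}

# Model-averaged predictions: each model's prediction, weighted by support
grid = pd.DataFrame({"temp": [0.0, 17.5, 35.0], "dog": [1, 1, 1]})
avg_pred = sum(weights[name] * models[name].predict(grid) for name in models)
print(weights)
print(avg_pred)
```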
