Contrasts

Statistical contrasts are an important component of fMRI analyses. Contrasts allow for testing of hypotheses about the relative brain activity between two or more conditions. Our aim is to provide a flexible interface for describing contrasts and generating contrast weight vectors that can be applied to linear models.

A simple 2 by 2 experiment

The approach we take in specifying contrasts is to formulate them as R formula expressions using the ~ operator. To demonstrate, let's start with a simple two-by-two design with two factors: category, with levels [face, scene], and attention, with levels [attend, ignore]. For this simple design, we assume each condition is repeated twice in a single run.

First we construct the fMRI design as follows:

design <- expand.grid(category=c("face", "scene"),      # 2 x 2 factorial design
                      attention=c("attend", "ignore"),
                      replication=c(1,2))               # each condition repeated twice
design$onsets <- seq(1, 100, length.out=nrow(design))   # evenly spaced event onsets
design$block <- rep(1, nrow(design))                    # a single run/block
  category attention replication    onsets block
      face    attend           1   1.00000     1
     scene    attend           1  15.14286     1
      face    ignore           1  29.28571     1
     scene    ignore           1  43.42857     1
      face    attend           2  57.57143     1
     scene    attend           2  71.71429     1
      face    ignore           2  85.85714     1
     scene    ignore           2 100.00000     1

Creating contrasts to test experimental effects

In a two by two design, there are a number of experimental effects that we might be interested in testing, for example: (1) the effect of category (face > scene), (2) the effect of attention (attend > ignore), and (3) the interaction between category and attention. All of these effects can be formulated as contrasts of weighted averages of condition effects, yielding directional (i.e. signed) t-statistics. Below we show how each contrast can be expressed:

  1. face_scene <- pair_contrast(~ category == "face", ~ category == "scene", name="face_scene")
  2. attend_ignore <- pair_contrast(~ attention == "attend", ~ attention == "ignore", name="attend_ignore")
  3. category_by_attention <- contrast(~ (face:attend - face:ignore) - (scene:attend - scene:ignore), name="category_by_attention")

In the following sections we will explain the syntax used in the examples above.

Contrasting two sets of conditions

We often want to compare two conditions, or two sets of conditions, for a difference in fMRI activity. Such contrasts can be described, generically, as involving an expression A - B, where A represents the weights for the first set of conditions and B represents the weights for the second set of conditions.

If the sum of the A weights is 1 and the sum of the B weights is also 1, then the weights of the difference (A - B) sum to 0, which is exactly what we want when the null hypothesis is that A == B.

The pair_contrast function

A simple way to construct this sort of A - B contrast is to use the pair_contrast function, which takes as arguments two “matching expressions”, A and B. In example 1 from the previous section, the first expression (category == "face") matches all conditions where the category factor equals the level “face”. The second expression (category == "scene") matches all conditions where the category factor equals the level “scene”.

To compute the contrast weights, each condition is assigned a weight of 1 if it matches the expression and 0 otherwise. The two resulting binary weight vectors are then each normalized to sum to 1 and subtracted (A - B) to form the contrast weights.
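
To make the weighting scheme concrete, here is a small hand computation (plain base R, independent of the package interface) of the face > scene weights over the four cells of the two-by-two design:

cells <- expand.grid(category=c("face", "scene"), attention=c("attend", "ignore"))
A <- as.numeric(cells$category == "face")   # 1 where the A expression matches, 0 otherwise
B <- as.numeric(cells$category == "scene")  # 1 where the B expression matches, 0 otherwise
A / sum(A) - B / sum(B)                     # normalize each side to sum to 1, then subtract
## [1]  0.5 -0.5  0.5 -0.5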

Notice that in the above examples the contrasts are defined abstractly, without supplying any corresponding design information. To compute the actual contrast weights, however, we need to supply that design information. Below, we show a complete example.
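
One way to supply that information is sketched below. This assumes an fmrireg-style interface (event_model(), sampling_frame(), and terms()); the TR of 1 and the single 100-scan block are illustrative assumptions chosen to match the term summary that follows.

library(fmrireg)

sframe <- sampling_frame(blocklens=100, TR=1)              # one run of 100 scans (assumed)
emodel <- event_model(onsets ~ hrf(category, attention),   # a single convolved term: category:attention
                      data=design, block=~block,
                      sampling_frame=sframe)
term <- terms(emodel)[[1]]                                 # extract the (only) model term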

And here is the relevant model term (the only term in this model):

## fmri_term:  convolved_term 
##    Term Name:  category:attention 
##    Formula:   ~ (category:attention - 1) 
##    Num Events:  8 
##    Num Rows:  100 
##    Num Columns:  4 
##    Conditions:  category[face]:attention[attend] category[scene]:attention[attend] category[face]:attention[ignore] category[scene]:attention[ignore] 
##    Term Types:  event_factor event_factor

Now, we create two pair_contrasts and compute the weights:
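
A sketch of how this might look, reusing the term extracted above (the contrast_weights() call is an assumption about the package interface):

face_scene    <- pair_contrast(~ category == "face", ~ category == "scene", name="face_scene")
attend_ignore <- pair_contrast(~ attention == "attend", ~ attention == "ignore", name="attend_ignore")

wts1 <- contrast_weights(face_scene, term)     # weights for face > scene
wts2 <- contrast_weights(attend_ignore, term)  # weights for attend > ignore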

wts1:

## contrast: face_scene 
##  A:  ~category == "face" 
##  B:  ~category == "scene" 
##  term:  category:attention 
##  weights:  
##                                 [,1]
## category#face:attention#attend   0.5
## category#scene:attention#attend -0.5
## category#face:attention#ignore   0.5
## category#scene:attention#ignore -0.5
##  conditions:  category#face:attention#attend category#scene:attention#attend category#face:attention#ignore category#scene:attention#ignore

wts2:

## contrast: attend_ignore 
##  A:  ~attention == "attend" 
##  B:  ~attention == "ignore" 
##  term:  category:attention 
##  weights:  
##                                 [,1]
## category#face:attention#attend   0.5
## category#scene:attention#attend  0.5
## category#face:attention#ignore  -0.5
## category#scene:attention#ignore -0.5
##  conditions:  category#face:attention#attend category#scene:attention#attend category#face:attention#ignore category#scene:attention#ignore

Unit contrasts and the “implicit baseline”

A “unit contrast” tests whether activation, averaged over one or more conditions, is greater than the baseline activity. In this case “baseline” is defined as the level of activation captured by the intercept term, or the “implicit baseline” in fMRI parlance. Because all beta estimates are already expressed relative to the intercept term, contrast vectors against this baseline sum to 1, rather than 0. We therefore refer to these sum-to-one contrasts as “unit contrasts”.

Below we show how to construct two unit contrasts, one testing face > baseline and the other testing scene > baseline:
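
A sketch of how these might be constructed, again reusing the term from above (unit_contrast() taking a single matching expression plus a name, and contrast_weights(), are assumptions about the package interface):

face_baseline  <- unit_contrast(~ category == "face",  name="face_baseline")
scene_baseline <- unit_contrast(~ category == "scene", name="scene_baseline")

wts1 <- contrast_weights(face_baseline, term)   # face conditions vs. the implicit baseline
wts2 <- contrast_weights(scene_baseline, term)  # scene conditions vs. the implicit baseline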

wts1:

## contrast: face_baseline 
##  A:  ~category == "face" 
##  term:  category:attention 
##  weights:  
##                                 [,1]
## category#face:attention#attend  0.25
## category#scene:attention#attend 0.25
## category#face:attention#ignore  0.25
## category#scene:attention#ignore 0.25
##  conditions:  category#face:attention#attend category#scene:attention#attend category#face:attention#ignore category#scene:attention#ignore

wts2:

## contrast: scene_baseline 
##  A:  ~category == "scene" 
##  term:  category:attention 
##  weights:  
##                                 [,1]
## category#face:attention#attend  0.25
## category#scene:attention#attend 0.25
## category#face:attention#ignore  0.25
## category#scene:attention#ignore 0.25
##  conditions:  category#face:attention#attend category#scene:attention#attend category#face:attention#ignore category#scene:attention#ignore