QCBS R Workshops
This series of 10 workshops walks participants through the steps required to use R for a wide array of statistical analyses relevant to research in biology and ecology. These open-access workshops were created by members of the QCBS for both QCBS members and the wider community.
Workshop 4: Linear models
Developed by: Catherine Baltazar, Bérenger Bourgeois, Zofia Taranu
Summary: In this workshop, you will learn how to implement in R basic linear models commonly used in ecology, such as simple regression, analysis of variance (ANOVA), analysis of covariance (ANCOVA), and multiple regression. After verifying the assumptions of these models visually and statistically, and transforming your data when necessary, the interpretation of model outputs and the plotting of your final model will no longer keep secrets from you!
Link to associated Prezi: Prezi
Download the R script and data for this lesson:
Learning Objectives
- Simple linear regression
- T-test
- ANOVA
- Two-way ANOVA
- Unbalanced ANOVA
- ANCOVA (advanced section/optional)
- Multiple linear regression
Overview
Scientists have always been interested in determining relationships between variables. Depending on the kind of variables considered and their number, different statistical tools can be used to assess these relationships. The table below lists the five types of statistical analysis that will be covered in this workshop:
Statistical analysis | Type of response variable Y | Type of explanatory variable X | Number of explanatory variables | Number of levels k |
---|---|---|---|---|
Simple linear regression | Continuous | Continuous | 1 | |
t-test | Continuous | Categorical | 1 | 2 |
ANOVA | Continuous | Categorical | 1 (one-way ANOVA), 2 (two-way ANOVA) or more | 3 or more |
ANCOVA | Continuous | Continuous AND categorical | 2 or more | 2 or more |
Multiple regression | Continuous | Continuous | 2 or more | |
1. Simple linear regression
The purpose of this analysis is to relate a response variable, y, and an explanatory variable, x, through a straight line. The mathematical model corresponding to linear regression is given by:

yi = β0 + β1xi + εi

where
β0 is the intercept of the regression line,
β1 is the slope of the regression line,
xi is the continuous explanatory variable,
εi are the residuals of the model (i.e. the unexplained variation).
The goal here is to find correct estimates of these two regression parameters and then assess the goodness of fit of the regression model. While several methods have been developed to calculate the intercept and slope coefficients of regression models, ordinary least squares is the most widely used estimation method, and is also the default method of the lm function in R. Ordinary least squares fits a line such that the sum of the squared vertical distances between the observed data and the linear regression model (i.e. the residuals) is minimized. From this method, the slope β1 and the intercept β0 of the linear regression can be calculated as:

β1 = Σ (xi - x̄)(yi - ȳ) / Σ (xi - x̄)²

β0 = ȳ - β1x̄

where x̄ and ȳ are the means of x and y, respectively.
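As a sketch of these least-squares formulas (using simulated data rather than the workshop's bird dataset; the object names below are hypothetical), the coefficients can be computed by hand and checked against lm():

```r
# Hypothetical simulated data (not the workshop dataset)
set.seed(42)
x <- runif(50, 0, 10)
y <- 2 + 0.5 * x + rnorm(50, sd = 1)

# Least-squares estimates computed from the formulas above
b1 <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)  # slope
b0 <- mean(y) - b1 * mean(x)                                     # intercept

# lm() returns the same estimates
fit <- lm(y ~ x)
coef(fit)  # compare with c(b0, b1)
```

Both approaches give identical coefficients, which is a handy sanity check when first learning what lm() does under the hood.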
To be valid, a linear regression must meet 4 assumptions, otherwise the model results cannot be safely interpreted.
1.1 Assumptions
- Homoscedasticity

To be valid, the model residuals must have a homogeneous variance (also called homoscedasticity), i.e. the spread of the data must be uniform for each xi value. This assumption can be verified by comparing the spread of the model residuals along the fitted values, using a plot of the residuals against the fitted values. As we will see below, in case of heteroscedasticity, data transformations can be applied to improve the spread of the residuals, or a generalized linear model with a distribution (Poisson, negative binomial, etc.) that better suits the relationship can be used.

- Independence

Linear regression can only be applied to independent data. This means that the yi observed at a given xi value must not be influenced by the observations at other xi values. Violation of independence can occur when your data contain some form of dependence structure, such as spatial or temporal correlation.

- High leverage

If some observations in a dataset have values strongly different from the others, these high leverage points can disproportionately influence the model fit.

- Normal distribution

Linear regression should only be applied to data that follow a normal distribution (response and explanatory variables).
1.2 Running a linear model
Using the bird dataset, we will first examine the linear regression of maximum abundance as a function of mass.
In R, linear regression is implemented using the lm function from the stats package:
lm (y ~ x)
Note: before using a new function in R, users should refer to its help documentation (?functionname) to find out how to use the function as well as its default methods.
- | Load and explore your data
# Loading libraries and bird dataset
library(e1071)
library(MASS)
setwd("~/Desktop/...") # Don't forget to set your working directory (note: your directory will be different)
bird <- read.csv("birdsdiet.csv")

# Visualize the dataframe
names(bird)
str(bird)
head(bird)
summary(bird)
plot(bird)
The bird dataset contains 7 variables:
Variable Name | Description | Type |
---|---|---|
Family | Common name of family | String |
MaxAbund | The highest observed abundance at any site in North America | Continuous/ numeric |
AvgAbund | The average abundance across all sites where found in NA | Continuous/ numeric |
Mass | The body size in grams | Continuous/ numeric |
Diet | Type of food consumed | Discrete – 5 levels (Plant; PlantInsect; Insect; InsectVert; Vertebrate) |
Passerine | Is it a songbird/ perching bird | Boolean (0/1) |
Aquatic | Is it a bird that primarily lives in/ on/ next to the water | Boolean (0/1) |
We are now ready to run our linear model:
- | Regression of Maximum Abundance on Mass
lm1 <- lm(bird$MaxAbund ~ bird$Mass) # where Y ~ X means Y "as a function of" X
1.3 Verifying assumptions
- | Diagnostic plots
opar <- par(mfrow=c(2,2)) # draws subsequent figures in a 2-by-2 panel
plot(lm1)
par(opar) # resets to 1-by-1 plot
Homoscedasticity
Residuals vs Fitted plot - The first graph of the diagnostic plots called by plot(lm1) illustrates the spread of the residuals across the fitted values; if it is homogeneous, the homoscedasticity assumption is respected. The Residuals vs Fitted plot should show a similar scatter across all fitted values (x-axis). If the relationship between the response variable and the explanatory variable is not linear, it will also show up here.
Scale-location plot - The third diagnostic plot enables one to verify whether the spread of the residuals increases with the fitted values (i.e. whether the spread of the residuals depends on the selected explanatory variable). If the spread increases, the homoscedasticity assumption is not respected.
Independence and normal distribution
QQ plot - The normality of the model residuals can be assessed from the QQplot of the diagnostic plots. This graph compares the probability distribution of the model residuals to the probability distribution of a normal data series. If the standardized residuals lie linearly on the 1:1 line of the QQplot, the residuals can be considered normally distributed. (Independence, by contrast, is best assessed from the study design and from structure in the residuals.)

Here, the points of the QQplot deviate from the line, which suggests that the residuals are not normally distributed.
High leverage
Residuals vs Leverage plot - High leverage points can be visualised on the fourth diagnostic plot (i.e. residuals vs leverage), which identifies the observation numbers of the high leverage data point(s). If (and only if!) these observations correspond to mismeasurements or represent exceptions, they can be removed from the original dataset.
1.4 Normalizing data
In the example provided above, MaxAbund (response variable) and Mass (explanatory variable) were not normally distributed, so the linear regression assumptions were violated. The next step is to try to normalize the variables using transformations. To assess the normality of a variable, draw a histogram using the function hist, and check visually whether the data series appears to follow a normal distribution. For example:
- | Testing Normality: hist() function
# Plot Y ~ X and the regression line
plot(bird$MaxAbund ~ bird$Mass, pch=19, col="coral", ylab="Maximum Abundance", xlab="Mass")
abline(lm1, lwd=2)
?plot # For further details on plot() arguments
# see colours() for list of colours

# Is the data normally distributed?
hist(bird$MaxAbund, col="coral", main="Untransformed data", xlab="Maximum Abundance")
hist(bird$Mass, col="coral", main="Untransformed data", xlab="Mass")
A second way to assess normality is the Shapiro-Wilk normality test, which compares the distribution of the observed data series to a normal distribution, using the function shapiro.test.
The null and alternate hypotheses of this test are:
H0: the observed data series is normally distributed,
H1: the observed data series is not normally distributed.
The observed data series can be considered normally distributed when the p-value calculated by the Shapiro-Wilk normality test is greater than or equal to α, typically set to 0.05.
- Testing Normality: shapiro.test() function
# Test the null hypothesis that the sample came from a normally distributed population
shapiro.test(bird$MaxAbund)
shapiro.test(bird$Mass)
# If p < 0.05, the distribution is non-normal
# If p > 0.05, the distribution is normal
We can also evaluate the skewness of each distribution using the skewness function (from the e1071 package):
- Testing Normality: skewness() function
skewness(bird$MaxAbund)
skewness(bird$Mass)
# where positive values indicate a right-skewed distribution, and negative values a left skew
The histograms, Shapiro tests and skewness values all indicate that the variables need to be transformed to improve normality (e.g. a log10 transformation).
1.5 Data transformation
In case of non-normality, response and explanatory variables can be transformed to enhance their normality following these rules:
Type of distribution | Transformation | R function |
---|---|---|
Moderately positive skewness | √x | sqrt(x) |
Substantially positive skewness | log10(x) | log10(x) |
Substantially positive skewness (with zeros) | log10(x + C), where C is a constant added to each value of x so that the smallest score is 1 | log10(x + C) |
Moderately negative skewness | √(K - x), where K is a constant from which each value of x is subtracted so that the smallest score is 1 | sqrt(K - x) |
Substantially negative skewness | log10(K - x) | log10(K - x) |
Thus, log10 transformations should be applied and saved in the bird data frame. The model can then be re-run, verified and interpreted.
- | Data Transformation
# Add log10() transformed variables to your dataframe
bird$logMaxAbund <- log10(bird$MaxAbund)
bird$logMass <- log10(bird$Mass)
names(bird) # to view the dataframe + new transformed variables

hist(bird$logMaxAbund, col="yellowgreen", main="Log transformed", xlab=expression("log"[10]*"(Maximum Abundance)"))
hist(bird$logMass, col="yellowgreen", main="Log transformed", xlab=expression("log"[10]*"(Mass)"))
shapiro.test(bird$logMaxAbund); skewness(bird$logMaxAbund)
shapiro.test(bird$logMass); skewness(bird$logMass)

# Re-run your analysis with the appropriate transformations
lm2 <- lm(bird$logMaxAbund ~ bird$logMass)

# Are there remaining problems with the diagnostics (heteroscedasticity, non-independence, high leverage)?
opar <- par(mfrow=c(2,2))
plot(lm2, pch=19, col="gray")
par(opar)
1.6 Model output
Once all these assumptions have been verified, the model results can be interpreted. These results are called in R using the function summary.
- | Summary output
# Now we can look at the model coefficients and p-values
summary(lm2)

# You can also just call up the coefficients of the model
lm2$coef

# What else?
str(summary(lm2))
summary(lm2)$coefficients # where Std. Error is the standard error of each estimate
summary(lm2)$r.squared # Coefficient of determination
summary(lm2)$adj.r.squared # Adjusted coefficient of determination
summary(lm2)$sigma # Residual standard error (square root of Error Mean Square)
# etc.

# You can also check for yourself the equation for R2:
SSE <- sum(resid(lm2)^2)
SST <- sum((bird$logMaxAbund - mean(bird$logMaxAbund))^2)
R2 <- 1 - SSE/SST
R2
The output of this function presents all the results of your validated model:
Call:
lm(formula = logMaxAbund ~ logMass, data = bird)

Residuals:
     Min       1Q   Median       3Q      Max
-1.93562 -0.39982  0.05487  0.40625  1.61469

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   1.6724     0.2472   6.767 1.17e-08 ***
logMass      -0.2361     0.1170  -2.019   0.0487 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.6959 on 52 degrees of freedom
Multiple R-squared: 0.07267,  Adjusted R-squared: 0.05484
F-statistic: 4.075 on 1 and 52 DF,  p-value: 0.04869
The coefficients of the regression model and their associated standard error appear in the second and third columns of the regression table, respectively. Thus,
β0 = 1.6724 ± 0.2472 is the intercept (± se) of the regression model,
β1 = -0.2361 ± 0.1170 is the slope (± se) of the regression model.
and finally: logMaxAbund = 1.6724 (± 0.2472) - 0.2361 (± 0.1170) x logMass
The t-values and their associated p-values (in the fourth and fifth columns of the regression table, respectively) test whether the calculated coefficients differ significantly from zero. In this case, we can see that logMass has a significant influence on logMaxAbund because the p-value associated with the slope of the regression model is less than 0.05. Moreover, these two variables are negatively related, as the slope of the regression model is negative.

Users must, however, be aware that a significant relationship between two variables does not always imply causality. Conversely, the absence of a significant linear regression between y and x does not always imply an absence of relationship between these two variables; this is, for example, the case when a relationship is not linear.
The goodness of fit of the linear regression model is assessed from the adjusted-R2 (here, 0.05484), given by:

R2 = SSreg / SStot

adjusted-R2 = 1 - (1 - R2)(n - 1) / (n - p - 1)

where
p is the total number of regressors and n is the sample size,
SStot = Σ (yi - ȳ)² is the total sums of squares,
SSreg = Σ (ŷi - ȳ)² is the regression sums of squares - also called the explained sums of squares.
The higher the adjusted-R2 is, the better the data fit the statistical model, knowing that this coefficient varies between 0 and 1. In this case, the relationship between logMaxAbund and logMass is quite weak.
The last line of the R output gives the F-statistic of the model and its associated p-value. If this p-value is less than 0.05, the model explains the relationship in the data better than a null model.
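The F-statistic stored by summary() can be used to recompute that p-value by hand; a minimal sketch on simulated data (the object names are hypothetical, not from the workshop script):

```r
# Hypothetical simulated data
set.seed(1)
x <- rnorm(40)
y <- 1 + 0.8 * x + rnorm(40)
fit <- lm(y ~ x)

# summary() stores the F value together with its two degrees of freedom
fstat <- summary(fit)$fstatistic

# The p-value on the last line of summary(fit) is the upper tail of the F distribution
pval <- pf(fstat["value"], fstat["numdf"], fstat["dendf"], lower.tail = FALSE)
pval
```

This is the same value reported in the Pr(>F) column of anova(fit).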
1.7 Plotting
Linear regression results are generally represented by a plot of the response variable as a function of the explanatory variable on which the regression line is added (and if needed the confidence intervals), using the R code:
- | Plot Y ~ X with regression line and CI
plot(logMaxAbund ~ logMass, data=bird, pch=19, col="yellowgreen",
     ylab = expression("log"[10]*"(Maximum Abundance)"), xlab = expression("log"[10]*"(Mass)"))
abline(lm2, lwd=2)

# You may also flag the previously identified high-leverage points
points(bird$logMass[32], bird$logMaxAbund[32], pch=19, col="violet")
points(bird$logMass[21], bird$logMaxAbund[21], pch=19, col="violet")
points(bird$logMass[50], bird$logMaxAbund[50], pch=19, col="violet")

# We can also plot the confidence intervals
confit <- predict(lm2, interval="confidence")
points(bird$logMass, confit[,2])
points(bird$logMass, confit[,3])
1.8 Subsetting
We may also run the analysis on a subset of observations, for example, on terrestrial birds only.
- | Regression on Subset of Observations
# Recall that you can exclude objects using "!"
# We can analyze a subset of this data using the subset argument in lm()
lm3 <- lm(logMaxAbund ~ logMass, data=bird, subset = !bird$Aquatic) # removing the aquatic birds
# or equivalently
lm3 <- lm(logMaxAbund ~ logMass, data=bird, subset = bird$Aquatic == 0)

# Examine the model
opar <- par(mfrow=c(2,2))
plot(lm3, pch=19, col=rgb(33,33,33,100, maxColorValue=225))
summary(lm3)
par(opar)

# Compare the two datasets
opar <- par(mfrow=c(1,2))
plot(logMaxAbund ~ logMass, data=bird, main="All birds",
     ylab = expression("log"[10]*"(Maximum Abundance)"), xlab = expression("log"[10]*"(Mass)"))
abline(lm2, lwd=2)
plot(logMaxAbund ~ logMass, data=bird, subset=!bird$Aquatic, main="Terrestrial birds",
     ylab = expression("log"[10]*"(Maximum Abundance)"), xlab = expression("log"[10]*"(Mass)"))
abline(lm3, lwd=2)
par(opar)
CHALLENGE 1
Examine the relationship between log10(MaxAbund) and log10(Mass) for passerine birds.
HINT:
Passerine is coded 0 and 1 just like Aquatic. You can verify this by viewing the structure str(bird)
.
2. T-tests
The Student’s t-test, or simply t-test, compares a continuous variable between the two groups (or treatments) of a categorical variable, and only two groups. Based on the following mathematical model, the t-test compares the response variable y between the two groups A1 and A2 of the explanatory variable:

yij = µ + Ai + εij

where
µ is the mean of the response variable,
Ai corresponds to the effect of group i,
i varies from 1 to 2,
εij are the residuals of the model (i.e. the unexplained variation).
The statistical hypotheses of the t-test evaluate the difference between the two explanatory groups, i.e.:
H0: µ1 = µ2
H1: µ1 ≠ µ2
where
µ1 is the mean of the response variable for group 1,
µ2 is the mean of the response variable for group 2.
The above bilateral (two-sided) alternative hypothesis H1 tests for any difference in the response variable between the two groups. However, if the direction of the expected difference is supported by a biological hypothesis, unilateral (one-sided) alternative hypotheses can also be used:
- if the response variable is supposed to be higher for group 1, then: H1: µ1 > µ2,
- if the response variable is supposed to be smaller for group 1, then: H1: µ1 < µ2.
For the t-test, the t statistic used to find the p-value is calculated as:

t = (ȳ1 - ȳ2) / √(s1²/n1 + s2²/n2)

where
ȳ1 and ȳ2 are the means of the response variable y for groups 1 and 2, respectively,
s1² and s2² are the variances of the response variable y for groups 1 and 2, respectively,
n1 and n2 are the sample sizes of groups 1 and 2, respectively.
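This formula can be verified directly in R; a small sketch on hypothetical simulated data (R's t.test uses this unequal-variance form by default, the Welch test):

```r
# Hypothetical simulated samples
set.seed(7)
g1 <- rnorm(25, mean = 5, sd = 1)
g2 <- rnorm(30, mean = 6, sd = 1.5)

# t statistic computed from the formula above
t_hand <- (mean(g1) - mean(g2)) / sqrt(var(g1)/length(g1) + var(g2)/length(g2))

# Same statistic from t.test() (Welch version, the default)
t_r <- t.test(g1, g2)$statistic
```

The two values match exactly, since t.test() applies the same formula internally.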
2.1 Assumptions
If the assumptions of the t-test are not met, the test can give misleading results. These assumptions concern the shape of the distribution of the data:
- Normality of data

As with simple linear regression, the data need to be normally distributed. If the data are not normally distributed, but have reasonably symmetrical distributions, a mean close to the centre of the distribution, and only one mode (highest point in the frequency histogram), then a t-test will still work as long as the sample is sufficiently large (rule of thumb: ~30 observations). If the data are heavily skewed, a very large sample may be needed before a t-test works; in such cases, a non-parametric alternative should be used.

- Homoscedasticity

Another important assumption of the two-sample t-test is that the variances of your two samples are equal. This allows you to calculate a pooled variance, which in turn is used to calculate the standard error. If the population variances are unequal, the probability of a Type I error is greater than α.
The robustness of the t-test increases with sample size and is higher when groups have equal sizes.
We can test for a difference in variances between two populations by asking: what is the probability of drawing two samples from two populations with identical variances and having the two sample variances be as different as s1² and s2²?

To do so, we must use the variance ratio test (i.e. an F-test).

If the variances are in fact unequal, using the pooled variance makes the actual Type I error rate larger than the nominal α. We might then fail to reject the null hypothesis when, with the correct cut-off, we should have rejected it!
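In R, this variance ratio test is performed with var.test; a minimal sketch on hypothetical simulated samples drawn with unequal variances:

```r
# Hypothetical samples with different standard deviations
set.seed(5)
s1 <- rnorm(30, sd = 1)
s2 <- rnorm(30, sd = 2)

# F-test of H0: true ratio of variances = 1
var.test(s1, s2)
```

A small p-value indicates that the variances should not be pooled, in which case a Welch-corrected t-test (see below) is more appropriate.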
Violation of assumptions
If the variances between groups are not equal, corrections such as the Welch correction can be used. If the assumptions cannot be met, the non-parametric equivalent of the t-test is the Mann-Whitney test. Finally, if the two groups are not independent (e.g. measurements on the same individual in 2 different years), you should use a paired t-test.
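All three alternatives are available through base R functions; a brief sketch on hypothetical simulated data:

```r
set.seed(3)
grpA <- rnorm(20, mean = 10, sd = 1)
grpB <- rnorm(20, mean = 11, sd = 3)      # unequal spread between groups
before <- rnorm(20, mean = 10)
after  <- before + rnorm(20, mean = 0.5)  # repeated measures on the same units

t.test(grpA, grpB, var.equal = FALSE)  # Welch correction (R's default)
wilcox.test(grpA, grpB)                # Mann-Whitney (non-parametric)
t.test(before, after, paired = TRUE)   # paired t-test
```

Note that var.equal = FALSE is already the default of t.test, so the Welch correction is applied unless you explicitly request the classical test.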
2.2 Running a t-test
In R, t-tests are implemented using the function t.test. For example, to test for a mass difference between aquatic and non-aquatic birds, you would write:
- | T-test
# T-test
boxplot(logMass ~ Aquatic, data=bird, ylab=expression("log"[10]*"(Bird Mass)"),
        names=c("Non-Aquatic","Aquatic"), col=c("yellowgreen","skyblue"))

# First, let's test the assumption of equal variance
# Note: we do not need to test the assumption of normally distributed data since
# we already log transformed the data above
tapply(bird$logMass, bird$Aquatic, var)
var.test(logMass ~ Aquatic, data=bird)

# We are now ready to run the t-test
ttest1 <- t.test(logMass ~ Aquatic, var.equal=TRUE, data=bird)
# or equivalently
ttest1 <- t.test(x=bird$logMass[bird$Aquatic==0], y=bird$logMass[bird$Aquatic==1], var.equal=TRUE)
ttest1
	Two Sample t-test

data:  logMass by Aquatic
t = -7.7707, df = 52, p-value = 2.936e-10
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -1.6669697 -0.9827343
sample estimates:
mean of x mean of y
 1.583437  2.908289
Here, we show that the ratio of variances is not statistically different from 1, therefore variances are equal, and we proceeded with our t-test. Since p < 0.05, the hypothesis of no difference between the two bird types (Aquatic vs. terrestrial) was rejected.
2.3 Running a t-test with lm()
Don't forget that the t-test is still a linear model and a specific case of ANOVA (see below) with one factor with 2 levels. As such, we can also run the t-test with the lm()
function in R:
- | T-test as a linear model
ttest.lm1 <- lm(logMass ~ Aquatic, data=bird) anova(ttest.lm1)
Analysis of Variance Table

Response: logMass
          Df Sum Sq Mean Sq F value    Pr(>F)
Aquatic    1 19.015 19.0150  60.385 2.936e-10 ***
Residuals 52 16.375  0.3149
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
When variances are equal (i.e. the classical two-sample t-test), we can show that t² = F.
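This equality can be checked numerically; a small sketch on hypothetical simulated data (not the bird dataset):

```r
# Two groups assumed to have equal variances (hypothetical data)
set.seed(11)
d <- data.frame(y = c(rnorm(20, 5), rnorm(20, 6)),
                g = rep(c("A", "B"), each = 20))

tval <- t.test(y ~ g, var.equal = TRUE, data = d)$statistic
Fval <- anova(lm(y ~ g, data = d))$"F value"[1]

all.equal(unname(tval)^2, Fval)  # TRUE: the squared t statistic equals F
```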
2.4 Unilateral t-test
The alternative argument of the t.test function allows for unilateral t-tests. For example, to test whether non-aquatic birds are lighter than aquatic birds, the function can be written:
- | Unilateral t-test
# Unilateral T-test
uni.ttest1 <- t.test(logMass ~ Aquatic, var.equal=TRUE, data=bird, alternative="less")
uni.ttest1
In the R output, called by uni.ttest1
, the results of the t-test appear in the third line:
	Two Sample t-test

data:  logMass by Aquatic
t = -7.7707, df = 52, p-value = 1.468e-10
alternative hypothesis: true difference in means is less than 0
95 percent confidence interval:
      -Inf -1.039331
sample estimates:
mean in group 0 mean in group 1
       1.583437        2.908289
In this case, the calculated t-statistic is t = -7.7707 with df = 52 degrees of freedom, giving a p-value of 1.468e-10. As the p-value is less than 0.05, the null hypothesis is rejected. Thus, aquatic birds are significantly heavier than non-aquatic birds.
3. ANOVA
Analysis of Variance (ANOVA) corresponds to a generalization of the Student’s t-test. ANOVA compares a continuous variable among the groups (or treatments) of a categorical variable divided into more than two groups, whereas the number of groups in the Student’s t-test is restricted to two. Based on the following mathematical model, ANOVA thus compares the response variable y between the groups of the explanatory variable:

yij = µ + Ai + εij

where
µ is the general mean of the response variable,
Ai is the effect of level i of the factor A,
i varies from 1 to n (n ≥ 2),
εij are the residuals of the model (i.e. the unexplained variation).
The statistical hypotheses of the ANOVA aim to determine whether at least one of the groups differs from the others in terms of the response variable y:
H0: µ1 = µ2 =… = µj =… = µn
H1: there is at least one µj that differs from the others
To determine whether the null hypothesis must be accepted or rejected, ANOVA calculations are based on the partitioning of the sums of squares, comparing the within-treatment variance to the between-treatment variance. If the between-treatment variance is greater than the within-treatment variance, this means that the treatments affect the response variable more than the random error (corresponding to the within-treatment variance), and that the response variable is likely to be significantly influenced by the treatments.
In the ANOVA, the comparison of the between-treatment variance to the within-treatment variance is made through the calculation of the F-statistic, which corresponds to the ratio of the mean sum of squares of the treatment (MSTrt) to the mean sum of squares of the error (MSE). These two terms are obtained by dividing their respective sums of squares by their corresponding degrees of freedom, as presented in the following ANOVA table. The p-value of the ANOVA is then calculated from the F-statistic, which follows a Fisher (F) distribution.
Source of variation | Degrees of freedom (df) | Sums of squares | Mean squares | F-statistic |
---|---|---|---|---|
Total | ar - 1 | SST = Σi Σj (yij - ȳ)² | | |
Factor A | a - 1 | SSA = r Σi (ȳi - ȳ)² | MSA = SSA / (a - 1) | F = MSA / MSE |
Error | a(r - 1) | SSE = SST - SSA | MSE = SSE / (a(r - 1)) | |

a: number of levels of the explanatory variable A; r: number of replicates per treatment; ȳ: general mean of the response variable; ȳi: mean of the response variable for all the replicates of treatment i.
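The table's quantities can be computed by hand; a sketch using the built-in iris data as a stand-in for the workshop's bird data (a balanced design: 3 Species, r = 50 replicates each):

```r
y <- iris$Sepal.Length
g <- iris$Species
a <- nlevels(g)   # number of factor levels
n <- length(y)    # total sample size

SS_T <- sum((y - mean(y))^2)                                              # total SS
SS_A <- sum(tapply(y, g, function(v) length(v) * (mean(v) - mean(y))^2))  # factor SS
SS_E <- SS_T - SS_A                                                       # error SS

# F = MS_A / MS_E, with df = a - 1 and n - a
F_hand <- (SS_A / (a - 1)) / (SS_E / (n - a))
F_aov  <- anova(lm(y ~ g))$"F value"[1]
all.equal(F_hand, F_aov)  # TRUE
```

The hand-computed F matches the one returned by anova(), confirming the sum-of-squares partitioning above.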
3.1 Types of ANOVA
- One-way ANOVA

One factor with more than 2 levels.

- Two-way ANOVA (see section below)

2 factors or more, each factor can have multiple levels, and the interactions between the factors must be tested.

- Repeated measures

ANOVA can be used for repeated measures, but we won't cover this today. Linear mixed-effect models can also be used for this kind of data (see Workshop 6).
3.2 Assumptions
As with the simple linear regression and t-test, ANOVA must meet different statistical assumptions to be valid, among which two are particularly important. These assumptions can be verified using the diagnostic plots or with parametric tests.
- Normal distribution

The residuals of the ANOVA model can once again be visualised in the normal QQplot. If the residuals lie linearly on the 1:1 line of the QQplot, they can be considered normally distributed. If not, the ANOVA results cannot be interpreted.

- Homoscedasticity

To be valid, ANOVA must be performed on models with a homogeneous variance of the residuals. This homoscedasticity can be verified using either the residuals vs fitted plot or the scale-location plot of the diagnostic plots. If these plots show an equivalent spread of the residuals for each of the fitted values, then the residual variance can be considered homogeneous.

A second way to assess the homogeneity of the residual variance is to perform a Bartlett test on the ANOVA model using the function bartlett.test. If the p-value of this test is greater than 0.05, the null hypothesis H0: s1² = s2² = … = sj² = … = sn² is accepted and the homoscedasticity assumption is respected.

The usual transformations of the explanatory variables can be applied if the homogeneity of the residual variance is not met.

- Additivity

The effects of two factors are additive if the effect of one factor remains constant over all levels of the other factor, and each factor influences the response variable independently of the other factor(s).
Violation of assumptions
If assumptions are violated, you can try to transform your data, which could potentially equalize variances and normalize residuals, and can convert a multiplicative effect into an additive one. If you can't (or don't want to) transform your data, the non-parametric equivalent of ANOVA is the Kruskal-Wallis test.
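The Kruskal-Wallis test is available in base R as kruskal.test; a minimal sketch on the built-in iris data (standing in for the workshop's bird data frame):

```r
# Rank-based alternative to one-way ANOVA
kruskal.test(Sepal.Length ~ Species, data = iris)
```

The syntax mirrors the formula interface of lm() and aov(), so logMaxAbund ~ Diet could be tested the same way.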
3.3 Contrasts
- Contrasts are comparisons of group means based on an a priori hypothesis,
- These groups can be composed of one or several levels of a factor,
- We can test basic hypotheses (e.g. µ1 = µ2) or more complex ones (e.g. (µ1 + µ2)/2 = µ3).
The number of comparisons must be less than or equal to the number of degrees of freedom of the ANOVA, and the comparisons have to be independent from one another. For more details, see the advanced section on contrasts below.
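A minimal sketch of an a priori contrast, using the built-in iris data (the contrast names are hypothetical): setosa is compared against the average of the other two species, with a second, independent comparison between those two.

```r
sp <- iris$Species

# Two orthogonal (independent) contrasts: as many comparisons as the factor's df
contrasts(sp) <- cbind(SetosaVsOthers = c(2, -1, -1),
                       VersVsVirg     = c(0,  1, -1))

fit <- lm(iris$Sepal.Length ~ sp)
summary(fit)$coefficients  # one estimate and test per contrast
```

The two contrast vectors are orthogonal (their cross-product is zero), which satisfies the independence requirement stated above.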
3.4 Running an ANOVA
First, let's visualize the data using boxplot()
. Recall that by default, R orders your groups alphabetically. We can reorder the groups according to the median of each Diet level.
Another way to graphically view the effect sizes is to use plot.design()
. This function will illustrate the levels of a particular factor along a vertical line, and the overall value of the response is drawn as a horizontal line.
- | ANOVA
# Default alphabetical order
boxplot(logMaxAbund ~ Diet, data=bird)

# Relevel factors by median
med <- sort(tapply(bird$logMaxAbund, bird$Diet, median))
boxplot(logMaxAbund ~ factor(Diet, levels=names(med)), data=bird,
        col=c("white","lightblue1","skyblue1","skyblue3","skyblue4"))

plot.design(logMaxAbund ~ Diet, data=bird, ylab = expression("log"[10]*"(Maximum Abundance)"))
Let's now run the ANOVA. In R, an ANOVA can be fitted either directly with the aov function, or with the anova function applied to a linear model previously fitted with lm:
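The two equivalent calls can be sketched as follows on the built-in iris data (in the workshop itself, the analogous models fitted to the bird data are aov1 and anov1, shown in section 3.6; the iris object names here are hypothetical):

```r
# One-way ANOVA fitted two equivalent ways
aov.iris <- aov(Sepal.Length ~ Species, data = iris)  # aov() directly
lm.iris  <- lm(Sepal.Length ~ Species, data = iris)   # lm(), then anova()

summary(aov.iris)
anova(lm.iris)
```

Both calls produce the same ANOVA table, since aov() is itself a wrapper around lm().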
3.5 Verifying assumptions
- Model diagnostics
# Diagnostic plots
opar <- par(mfrow=c(2,2))
plot(anov1)
par(opar)

# Test the assumption of normality of residuals
shapiro.test(resid(anov1))

# Test the assumption of homogeneity of variance
bartlett.test(logMaxAbund ~ Diet, data=bird)
Ideally the first diagnostic plot should show similar scatter for each Diet level. The Shapiro and Bartlett tests are both non-significant, therefore residuals are assumed to be normally distributed and variances are assumed to be equal.
3.6 Model output
Once your ANOVA model has been validated, its results can be interpreted. The R output of ANOVA model depends of the function that has been used to implement the ANOVA. If the aov
function is used to implement the ANOVA model
aov1 <- aov(logMaxAbund ~ Diet, data=bird)
the results of the ANOVA can be visualized using the function
summary(aov1)
On the other hand, if lm()
is used
anov1 <- lm(logMaxAbund ~ Diet, data=bird)
the ANOVA results must be called using the function
anova(anov1)
In both cases, the R output is as follows:
          Df Sum Sq Mean Sq F value Pr(>F)
Diet       4  5.106   1.276   2.836 0.0341 *
Residuals 49 22.052   0.450
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
This R output corresponds exactly to the ANOVA table of your model. It presents the degrees of freedom, the sums of squares, the mean sums of squares and the F-value explained above. In this example, diet significantly influences the abundance of birds, as the p-value is less than 0.05. The null hypothesis can thus be rejected, meaning that at least one diet treatment influences abundance differently from the others.
3.7 Complementary test
Importantly, ANOVA cannot identify which treatment differs from the others in terms of the response variable. To determine this, post-hoc tests that compare the levels of the explanatory variable (i.e. the treatments) two by two must be performed. While several post-hoc tests exist (e.g. Fisher's least significant difference, Duncan's new multiple range test, the Newman-Keuls method, Dunnett's test, etc.), Tukey's range test is used in this example, via the function TukeyHSD, as follows:
- Post-hoc Tukey Test
# Where does the Diet difference lie?
TukeyHSD(aov(anov1), ordered=TRUE)
# or equivalently
TukeyHSD(aov1, ordered=TRUE)
The R output for this test gives a table containing all the pairwise comparisons of the explanatory variable levels and identifies which treatments differ from the others:
  Tukey multiple comparisons of means
    95% family-wise confidence level
    factor levels have been ordered

Fit: aov(formula = anov1)

$Diet
                            diff        lwr      upr     p adj
Vertebrate-InsectVert  0.3364295 -1.11457613 1.787435 0.9645742
Insect-InsectVert      0.6434334 -0.76550517 2.052372 0.6965047
Plant-InsectVert       0.8844338 -1.01537856 2.784246 0.6812494
PlantInsect-InsectVert 1.0657336 -0.35030287 2.481770 0.2235587
Insect-Vertebrate      0.3070039 -0.38670951 1.000717 0.7204249
Plant-Vertebrate       0.5480043 -0.90300137 1.999010 0.8211024
PlantInsect-Vertebrate 0.7293041  0.02128588 1.437322 0.0405485
Plant-Insect           0.2410004 -1.16793813 1.649939 0.9884504
PlantInsect-Insect     0.4223003 -0.19493574 1.039536 0.3117612
PlantInsect-Plant      0.1812999 -1.23473664 1.597336 0.9961844
In this case, the only significant difference in abundance occurs between the PlantInsect diet and the Vertebrate diet.
3.8 Plotting
After having verified the assumptions of your ANOVA model, interpreted the ANOVA table and differentiated the effect of the treatments using post-hoc tests or contrasts, the ANOVA results can be graphically illustrated using a barplot
. This shows the response variable as a function of the explanatory variable levels, where standard errors can be superimposed on each bar as well as the different letters representing the treatment group (according to the post-hoc test).
- Barplot
# Graphical illustration of the ANOVA model using barplot()
sd <- tapply(bird$logMaxAbund, list(bird$Diet), sd)
means <- tapply(bird$logMaxAbund, list(bird$Diet), mean)
n <- length(bird$logMaxAbund)
se <- 1.96*sd/sqrt(n)

bp <- barplot(means, col=c("white","lightblue1","skyblue1","skyblue3","skyblue4"),
              ylab = expression("log"[10]*"(Maximum Abundance)"),
              xlab="Diet", ylim=c(0,1.8))

# Add vertical se bars
segments(bp, means - se, bp, means + se, lwd=2)
# and horizontal lines
segments(bp - 0.1, means - se, bp + 0.1, means - se, lwd=2)
segments(bp - 0.1, means + se, bp + 0.1, means + se, lwd=2)
3.9 Contrasts (advanced section/ optional)
4. Two-way ANOVA
To better explain the variability of a response variable, two categorical variables can be used as explanatory variables in an ANOVA model instead of just one (cf. part 3). To describe a two-way ANOVA mathematically, the one-way ANOVA model must simply be rewritten to include the interaction term between the two explanatory variables:

yijk = µ + Ai + Bj + AiBj + εijk

where
µ is the general mean of the response variable,
Ai is the effect of the level i for the factor A,
Bj is the effect of the level j for the factor B,
AiBj is the interaction term between the two explanatory variables,
i and j vary from 1 to n (n ≥ 2),
εijk are the residuals of the model.
However, the statistical hypotheses for the two-way ANOVA are:
H01: No difference in mean among levels of A; µa1 = µa2 = … = µai =… = µan
H02: No difference in mean among levels of B; µb1 = µb2 = … = µbi =… = µbm
H03: No interaction between factors A and B.
The one-way ANOVA table also has to be rewritten to add the second explanatory term as well as the interaction term. Thus, a two-way ANOVA table corresponds to:
Source of variation | Degrees of freedom (df) | Sums of squares | Mean squares | F-statistic |
---|---|---|---|---|
Total | abr - 1 | SStot | | |
Cells | ab - 1 | SScells | | |
Within-cells (error) | ab(r - 1) | SSE | MSE = SSE / [ab(r - 1)] | |
Factor A | a - 1 | SSA | MSA = SSA / (a - 1) | FA = MSA / MSE |
Factor B | b - 1 | SSB | MSB = SSB / (b - 1) | FB = MSB / MSE |
Interaction AB | (a - 1)(b - 1) | SSAB | MSAB = SSAB / [(a - 1)(b - 1)] | FAB = MSAB / MSE |
a: number of levels of the explanatory variable A; b: number of levels of the explanatory variable B; r: number of replicates per treatment
4.1 Running a two-way ANOVA
In R, a two-way ANOVA model is implemented in the same fashion as a one-way ANOVA using the function lm
.
CHALLENGE 2
Examine the effects of the factors Diet, Aquatic, and their interaction on the maximum bird abundance.
Recall: Before interpreting the ANOVA results, the model must first be validated by verifying the statistical assumptions of ANOVA, namely the:
- Normal distribution of the model residuals
- Homoscedasticity of the residual variance
This verification can be done using the four diagnostic plots as previously explained for one-way ANOVA.
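As a sketch of the two-way model structure (this is not the Challenge 2 solution; the data below are simulated stand-ins for the bird dataset), the interaction is specified with the * operator, which expands to both main effects plus their interaction:

```r
# Simulated stand-in for the bird dataset (hypothetical values)
set.seed(1)
dat <- data.frame(
  Diet        = rep(c("Insect", "Plant"), each = 20),
  Aquatic     = factor(rep(c(0, 1), times = 20)),
  logMaxAbund = rnorm(40)
)

# Diet*Aquatic is shorthand for Diet + Aquatic + Diet:Aquatic
aov2 <- lm(logMaxAbund ~ Diet * Aquatic, data = dat)
anova(aov2)

# Diagnostic plots for model validation
op <- par(mfrow = c(2, 2))
plot(aov2)
par(op)
```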
4.2 Interaction plot
Interactions can also be viewed graphically using the function interaction.plot
as:
- Interaction Plot
interaction.plot(bird$Diet, bird$Aquatic, bird$logMaxAbund, col="black", ylab = expression("log"[10]*"(Maximum Abundance)"), xlab="Diet")
What do the gaps in the line for the Aquatic group mean?
- Unbalanced design
table(bird$Diet, bird$Aquatic)
               0  1
  Insect      14  6
  InsectVert   1  1
  Plant        2  0
  PlantInsect 17  1
  Vertebrate   5  7
The design is unbalanced: the number of observations differs among Diet levels between aquatic (coded as 1) and terrestrial (coded as 0) birds. See the advanced section below for details on unbalanced ANOVA designs.
CHALLENGE 3
Test the significance of the Aquatic factor by comparing nested models with and without this categorical variable.
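One way to approach this comparison, sketched here on simulated stand-in data rather than the workshop's bird dataset, is to fit the model with and without the Aquatic term and compare the two nested models with anova():

```r
# Simulated stand-in for the bird dataset (hypothetical values)
set.seed(2)
dat <- data.frame(
  Diet        = rep(c("Insect", "Plant"), each = 20),
  Aquatic     = factor(rep(c(0, 1), times = 20)),
  logMaxAbund = rnorm(40)
)

full    <- lm(logMaxAbund ~ Diet + Aquatic, data = dat)
reduced <- lm(logMaxAbund ~ Diet, data = dat)

# F-test of the improvement gained by adding Aquatic:
# a significant p-value would mean Aquatic should be kept
anova(reduced, full)
```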
5. Unbalanced ANOVA (advanced section/ optional)
6. ANCOVA
Analysis of covariance (ANCOVA) combines linear regression and ANOVA to test the influence of one categorical explanatory variable (or more) and one continuous explanatory variable (or more) on a continuous response variable. The underlying mathematical model of ANCOVA can be written as:

yij = µ + Ai + Bi(xij - x̄i) + εij

where
µ is the general mean of the response variable,
Ai is the treatment effect,
Bi is the effect of the continuous variable,
xij is the covariate measured for observation yij,
x̄i is the average value of the covariate for treatment group i,
i varies from 1 to n (n > 2) treatments,
εij are the residuals of the model.
Notice that this model contains a term Ai for the treatment effect (as in ANOVA) and a slope term Bi for the covariate effect (as in regression). Thus, each treatment group is described by its own slope and intercept. In addition to testing whether the response variable differs for at least one level of the categorical variable, ANCOVA also tests whether the response variable is influenced by its relationship with the continuous variable (called the covariate in ANCOVA), and by any differences between group levels in the way that the continuous variable influences the response (i.e. the interaction). The ANCOVA hypotheses are thus:
H01: There is no effect of the categorical factor (i.e. µ1 = µ2 =… = µi =… = µn)
H02: There is no effect of the continuous factor (i.e. β = 0)
H03: There is no interacting effect of the categorical and continuous factors
6.1 Assumptions
As with the models seen above, ANCOVA models must meet the following statistical assumptions to be valid, which can be verified using diagnostic plots:
- Normal distribution of the model residuals
- Homoscedasticity of the residual variance
- Independence between residuals and fitted values
- Independence between the variance of residuals and fitted values
- Equal variance between different levels of a given factor
- Same value range for all covariates
- Variables are fixed
- No interaction between factor(s) and covariate(s)
Note: A fixed variable is one that you are specifically interested in (e.g. bird mass). In contrast, a random variable is noise that you want to control for (e.g. the site a bird was sampled in). If you have random variables, see the workshop on Linear Mixed-effects Models!
6.2 Types of ANCOVA
You can have any number of factors and/or covariates, but as their number increases, the interpretation of results gets more complex.
The most frequently used ANCOVAs are those with:
- one covariate and one factor
- one covariate and two factors
- two covariates and one factor
The different possible goals of the ANCOVA are to determine the effects of:
- the factor(s) and covariate(s) on the response variable
- the factor(s) on the response variable after removing the effect of the covariate(s)
- the factor(s) on the relationship between the covariate(s) and the response variable
Importantly, these goals are only met if there is no significant interaction between the factor(s) and the covariate(s)! Examples of significant interactions between the factor and the covariate (for an ANCOVA with one factor and one covariate) are illustrated by the second and third panels below:
The same logic follows for ANCOVAs with multiple factors and/or covariates.
6.3 Running an ANCOVA
Running an ANCOVA in R is comparable to running a two-way ANOVA, using the function lm
. However, instead of using two categorical variables (Diet and Aquatic), we now use one categorical and one continuous variable.
For example, using the built-in dataset CO2, where the response variable is uptake, the continuous variable is conc and the factor is Treatment, the ANCOVA is:
- ANCOVA example
ancova.example <- lm(uptake ~ conc*Treatment, data=CO2)
anova(ancova.example)
If only your factor is significant, drop your covariate from the model: you will then have an ANOVA.
If only your covariate is significant, drop your factor from the model: you will then have a simple linear regression.
If your covariate*factor interaction is significant, you might want to test which levels of your factor have different slopes.
In the CO2 example above, both the covariate and the factor are significant, but the interaction is non-significant. If you replace Treatment with Type, however, you will see an example of a significant interaction.
If you want to compare means across factors, you can use adjusted means, which uses the equations given by the ANCOVA to estimate the means of each level of the factor, corrected for the effect of the covariate:
- Adjusted means
install.packages("effects")
library(effects)
adj.means <- effect('Treatment', ancova.example)
plot(adj.means)
adj.means <- effect('conc*Treatment', ancova.example)
plot(adj.means)
CHALLENGE 4
Run an ANCOVA to test the effect of Diet, Mass, and their interaction on MaxAbund.
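The general shape of such an ANCOVA can be sketched as below. This is not the official challenge solution: the data are simulated stand-ins for Diet, Mass and MaxAbund, and the log10 transformations are an assumption made here for illustration.

```r
# Simulated stand-in for the bird dataset (hypothetical values)
set.seed(3)
dat <- data.frame(
  Diet     = rep(c("Insect", "Plant", "Vertebrate"), each = 15),
  Mass     = runif(45, 10, 500),
  MaxAbund = rlnorm(45, meanlog = 2)
)

# Factor (Diet), covariate (Mass) and their interaction
ancova.chal <- lm(log10(MaxAbund) ~ log10(Mass) * Diet, data = dat)
anova(ancova.chal)
```

A significant interaction term would indicate that the slope of the abundance-mass relationship differs among diets.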
7. Multiple regression
Multiple regression tests the effects of several continuous explanatory variables on a response variable based on the following mathematical model:

yi = β0 + β1xi1 + β2xi2 + εi

where
β0 is the intercept of the regression line,
β1 is the effect of the variable x1 (i.e. the slope of the regression line of variable x1),
β2 is the effect of the variable x2 (i.e. the slope of the regression line of variable x2),
εi are the residuals of the model (i.e. the unexplained variation).
7.1 Assumptions
To be valid, each of the explanatory variables used in the multiple regression model must be:
- Normally-distributed
This can be verified using a Shapiro-Wilk test (function shapiro.test
). The usual transformations must be applied in case of non-normality.
- Orthogonal
Explanatory variables must not be collinear: if one explanatory variable is correlated to another, they are likely to explain the same variability of the response variable, and the effect of one variable will be masked by the other.
- Linear
i.e. there must be a linear relationship between the response variable and each explanatory variable.
The model residuals assumptions are the same as those of the simple linear regression, that is:
- Normality of residuals
- Independence of residuals versus xi (or predicted values)
- Independence of the variance of residuals versus xi (or predicted values)
- No outliers
Violation of assumptions
If you see any pattern between two explanatory variables, they are collinear. Collinearity must be avoided as the effect of each explanatory variable will be confounded! Possible solutions are:
- Keep only one of the collinear variables,
- Try multidimensional analysis (see workshop 9),
- Try a pseudo-orthogonal analysis.
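A quick collinearity screen can be sketched with cor() and pairs(). The built-in mtcars data are used here purely for illustration, since the Dickcissel dataset ships with the workshop material:

```r
# Subset of continuous explanatory variables (mtcars as an example)
expl <- mtcars[, c("disp", "hp", "wt", "qsec")]

# Pairwise correlations: values near |1| flag collinear pairs
round(cor(expl), 2)

# Scatterplot matrix for a visual check of pairwise patterns
pairs(expl)
```

In this example, displacement (disp) and weight (wt) are strongly correlated, so only one of them should enter a multiple regression.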
7.2 Dickcissel dataset
The Dickcissel dataset explores environmental variables that drive the abundance and presence/absence of a grassland bird with peak abundances in Kansas, USA. It contains 15 variables:
Variable Name | Description | Type |
---|---|---|
abund | The number of individuals observed at each route | Continuous/ numeric |
Present | Presence/ absence of the species | Boolean (“Present”/ “Absent”) |
broadleaf, conif, crop, grass, shrub, urban, wetland | Land use variables within 20 km radius of the center route | Continuous/ numeric |
NDVI | Vegetation index (a measure of productivity) | Integer |
clDD, clFD, clTma, clTmi, clP | Climate data (DD = degree days, FD = frost days, Tma = max temperature, Tmi = min temperature, P = precipitation) | Continuous/ numeric |
In R, multiple regressions are implemented using the lm
function and their results are viewed using the summary
function. Using the Dickcissel data, for example, we can test the effects of climate, productivity and land cover on Dickcissel abundance by applying the model:
CHALLENGE 5
Is a transformation needed for the response variable abund?
As you likely noticed in Challenge 5, the abund variable could not be normalized, suggesting that we might need to relax the assumptions of a normally distributed response variable and move on to Generalized Linear Models, but that will wait until later!
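The check itself can be sketched as follows, here on a simulated right-skewed variable standing in for the real abund (which comes with the workshop data):

```r
# Simulated right-skewed abundance data (hypothetical values)
set.seed(4)
abund.sim <- rlnorm(200, meanlog = 1, sdlog = 1.5)

hist(abund.sim)                    # strongly right-skewed
shapiro.test(abund.sim)            # significant: not normal
shapiro.test(log10(abund.sim + 1)) # re-check after a log transformation
```

With the real Dickcissel data, the log-transformed abund still fails the Shapiro-Wilk test, which is what motivates the move to Generalized Linear Models.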
For now, let's simply use the untransformed abund and compare the relative importance of the three variables (climate, productivity, and land cover) on abund.
- Multiple Regression
lm.mult <- lm(abund ~ clTma + NDVI + grass, data=Dickcissel)
summary(lm.mult)
The R output enables one to visualize the significant explanatory variables:
Call:
lm(formula = abund ~ clTma + NDVI + grass, data = Dickcissel)

Residuals:
    Min      1Q  Median      3Q     Max 
-35.327 -11.029  -4.337   2.150 180.725 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) -83.60813   11.57745  -7.222 1.46e-12 ***
clTma         3.27299    0.40677   8.046 4.14e-15 ***
NDVI          0.13716    0.05486   2.500   0.0127 *  
grass        10.41435    4.68962   2.221   0.0267 *  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 22.58 on 642 degrees of freedom
Multiple R-squared: 0.117,	Adjusted R-squared: 0.1128 
F-statistic: 28.35 on 3 and 642 DF,  p-value: < 2.2e-16
In this case, all three explanatory variables significantly influence Dickcissel abundance, the most significant being the climate variable clTma (p-value = 4.14e-15). Altogether these variables explain 11.28% of the variability in Dickcissel abundance (adjusted R-squared = 0.1128). The overall model is also significant and explains the variability in Dickcissel abundance better than a null model (p-value < 2.2e-16).
A plot of the response variable as a function of each explanatory variable can be used to represent graphically the model results:
plot(abund ~ clTma, data=Dickcissel, pch=19, col="orange")
plot(abund ~ NDVI, data=Dickcissel, pch=19, col="skyblue")
plot(abund ~ grass, data=Dickcissel, pch=19, col="green")
7.3 Polynomial regression (advanced section/ optional)
7.4 Stepwise regression
To obtain a final multiple regression model, users can first implement a full regression model containing all the explanatory variables and then drop the non-significant variables using a stepwise selection procedure. In this method, non-significant variables are successively dropped one by one from the model, and the goodness-of-fit of each successive model is compared based on AIC (Akaike’s Information Criterion), until all the explanatory variables retained in the model are significant. Note that a lower AIC value indicates a better goodness of fit (i.e. the best model is the one with the lowest AIC). In R, stepwise selection is implemented using the function step
:
- Stepwise Regression
lm.full <- lm(abund ~ . - Present, data=Dickcissel)
lm.step <- step(lm.full)
summary(lm.step)
According to the stepwise selection, only 6 significant explanatory variables among the 13 tested are retained in the final model:
Call:
lm(formula = abund ~ clDD + clFD + clTmi + clTma + clP + grass,
    data = Dickcissel)

Residuals:
    Min      1Q  Median      3Q     Max 
-30.913  -9.640  -3.070   4.217 172.133 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) -4.457e+02  3.464e+01 -12.868  < 2e-16 ***
clDD         5.534e-02  8.795e-03   6.292 5.83e-10 ***
clFD         1.090e+00  1.690e-01   6.452 2.19e-10 ***
clTmi       -6.717e+00  7.192e-01  -9.339  < 2e-16 ***
clTma        3.172e+00  1.288e+00   2.463 0.014030 *  
clP          1.562e-01  4.707e-02   3.318 0.000959 ***
grass        1.066e+01  4.280e+00   2.490 0.013027 *  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 19.85 on 639 degrees of freedom
Multiple R-squared: 0.3207,	Adjusted R-squared: 0.3144 
F-statistic: 50.29 on 6 and 639 DF,  p-value: < 2.2e-16
The model now accounts for 31.44% of the variability in Dickcissel abundance, with clTmi being the most significant explanatory variable.
Nevertheless, some of the selected explanatory variables are highly correlated and should be dropped from the final model in order to remove uninformative variables.
7.5 Variance inflation
Collinearity between explanatory variables can be assessed based on the variance inflation factor using the vif
function of package ‘HH’:
- Variance Inflation Factor
library(HH)
vif(clDD ~ clFD + clTmi + clTma + clP + grass, data=Dickcissel)
which gives the following output:
     clFD     clTmi     clTma       clP     grass 
13.605855  9.566169  4.811837  3.196599  1.165775 
As a variance inflation factor higher than 5 indicates collinearity, the R output shows that clDD, clFD and clTmi are highly collinear. Only one of these explanatory variables can thus be retained in the final regression model.
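A possible follow-up is to refit the model keeping only one of the collinear predictors. The sketch below reuses the Dickcissel variable names but runs on simulated stand-ins, with clFD deliberately constructed to be collinear with clDD:

```r
# Simulated stand-ins for the Dickcissel variables (hypothetical values)
set.seed(5)
n <- 100
clDD  <- rnorm(n, mean = 4000, sd = 500)
clFD  <- 200 - 0.04 * clDD + rnorm(n, sd = 5)   # collinear with clDD
grass <- runif(n)
abund <- 0.01 * clDD + 10 * grass + rnorm(n, sd = 5)
dat   <- data.frame(abund, clDD, clFD, grass)

# Keep clDD and drop its collinear partner clFD
lm.final <- lm(abund ~ clDD + grass, data = dat)
summary(lm.final)
```

The coefficients of the reduced model are no longer confounded by the shared variance of the collinear pair.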
8. Variance partitioning (advanced section/ optional)
Go further!
Amazing! You are now ready to perform your own regression, ANOVA and ANCOVA! But never forget to correctly specify your model and verify its statistical assumptions before interpreting its results according to the ecological background of your data.
Some exciting books about linear regression and ANOVA:
- Myers RH - Classical and Modern Regression with Application
- Gotelli NJ - A Primer of Ecological Statistics