This series of 10 workshops walks participants through the steps required to use R for a wide array of statistical analyses relevant to research in biology and ecology. These open-access workshops were created by members of the QCBS both for members of the QCBS and the larger community.
The content of this workshop has been peer-reviewed by several QCBS members. If you would like to suggest modifications, please contact the current series coordinators, listed on the main wiki page.
Developed by: Xavier Giroux-Bougard, Maxwell Farrell, Amanda Winegardner, Étienne Low-Decarie and Monica Granados
Summary: In this workshop we will build on the data manipulation and visualization skills you have learned in base R by introducing three additional R packages: ggplot2, tidyr and dplyr. We’ll learn how to use ggplot2, an excellent plotting alternative to base R that can be used for both diagnostic and publication quality plots. We will then introduce tidyr and dplyr, two powerful tools to manage and re-format your dataset, as well as apply simple or complex functions on subsets of your data. This workshop will be useful for those progressing through the entire workshop series, but also for those who already have some experience in R and would like to become proficient with new tools and packages.
Link to associated Prezi: Prezi
Download the R script for this lesson: Script
Whether you are calculating summary statistics (e.g. Excel), performing more advanced statistical analysis (e.g. JMP, SAS, SPSS), or producing figures and tables (e.g. Sigmaplot, Excel), it is easy to get lost in your workflow when you use a variety of software. This becomes especially problematic every time you import and export your dataset to accomplish a downstream task. With each of these operations, you increase your risk of introducing errors into your data or losing track of the correct version of your data. The R statistical language provides a solution to this by unifying all of the tools you need for advanced data manipulation, statistical analysis, and powerful graphical engines under the same roof. By unifying your workflow from data to tables to figures, you reduce your chances of making mistakes and make your workflow easily understandable and reproducible. Believe us, the future “you” will not regret it! Nor will your collaborators!
The most flexible and complete package available for advanced data visualization in R is ggplot2. This package was created for R by Hadley Wickham, based on the Grammar of Graphics by Leland Wilkinson. The source code is hosted on github: https://github.com/hadley/ggplot2. Today we will walk you through the basics of ggplot2, and hopefully its potential will speak for itself, so that you will have the necessary tools to explore its use for your own projects.
To get started, you will need to open RStudio and load the necessary package from the CRAN repository:
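Assuming ggplot2 is not yet on your machine, installing and loading it looks like this:

```r
# install.packages("ggplot2")  # run once if ggplot2 is not already installed
library(ggplot2)               # load the package for this session
```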
Let's jump right into it then! We will build our first basic plot using the
qplot() function in ggplot2. The quick plot function is built to be an intuitive bridge from the
plot() function in base R. As such, the syntax is almost identical, and the
qplot() function understands how to draw plots based on the types of data (e.g. factor, numerical, etc…) that are mapped. Remember you can always access the help file for a function by typing the command preceded by
?, such as:
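For example:

```r
library(ggplot2)
?qplot  # opens the help page for the quick plot function
```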
If you check the help file we just called, you will see that the first three arguments are:
Before we go forward with
qplot(), we need some data to assign values to these arguments. We will first play with the
iris dataset, a famous dataset of flower dimensions for three species of iris collected by Edgar Anderson in the Gaspé peninsula right here in Québec! It is already stored as a
data.frame directly in
R. To load it and explore its structure and the different variables, use the following commands:
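A minimal way to load and explore the dataset:

```r
data(iris)
str(iris)    # structure: four numeric variables plus the Species factor
head(iris)   # first six rows
names(iris)  # variable names
```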
Let's build our first scatter plot by mapping the x and y variables from the iris dataset in the qplot() function as follows:
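A sketch of such a call (the choice of sepal dimensions as x and y is ours, not necessarily the workshop's original):

```r
library(ggplot2)

# Two numerical variables mapped to x and y produce a scatter plot
iris.scatter <- qplot(data = iris,
                      x = Sepal.Length,
                      y = Sepal.Width)
iris.scatter
```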
As mentioned previously, the
qplot() function understands how to draw a plot based on the mapped variables. In the previous example we used two numerical variables and obtained a scatter plot. However,
qplot() will also understand categorical variables:
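For instance (variable choices are ours), mapping the Species factor to x yields one strip of points per level:

```r
library(ggplot2)

iris.cat <- qplot(data = iris,
                  x = Species,       # categorical variable
                  y = Sepal.Length)  # numerical variable
iris.cat
```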
If you return to the qplot() function's help file, there are several more arguments that can be used to modify different aspects of your figure. Let's start with adding labels and titles: using the qplot() function, build a basic scatter plot with a title and axis labels from the built-in BOD data set in R. You can load it and explore its contents as follows:
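One possible answer, assuming the BOD dataset (columns Time and demand); the title and label text are our own choices:

```r
library(ggplot2)
data(BOD)
str(BOD)  # Time (days) and demand (mg/L)

bod.plot <- qplot(data = BOD,
                  x = Time,
                  y = demand,
                  main = "Biochemical oxygen demand",  # plot title
                  xlab = "Time (days)",
                  ylab = "Demand (mg/L)")
bod.plot
```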
The grammar of graphics is a framework for data visualization that dissects a graph into its individual components. Its “awesome factor” comes from this decomposition, which allows you to flexibly change and modify every single element of a graph. Let's go over a few of these elements, or “layers”, together. First and foremost, a graph requires data, which can be displayed using:
In ggplot2, aesthetics are a group of parameters that specify what data is displayed and how. Here are a few arguments that can be used within the aes() function:
x: position of data along the x axis
y: position of data along the y axis
colour: colour of an element
group: group that an element belongs to
shape: shape used to display a point
linetype: type of line used (e.g. solid, dashed, etc…)
size: size of a point or line
alpha: transparency of an element
Geometric objects, or
geoms, determine the visual representation of your data:
geom_line(): lines connected to points by increasing value of x
geom_path(): lines connected to points in sequence of appearance
geom_boxplot(): box and whiskers plot for categorical variables
geom_bar(): bar charts for categorical x axis
geom_histogram(): histograms for a continuous x axis
plot.object <- ggplot() OR qplot()
plot.object <- plot.object + layer()
Using these steps, we build a final product by laying individual elements over each other until we are satisfied:
Our brief intro to the grammar of graphics is not enough to cover even a small fraction of all the layers and elements that can be used in visualization. Instead, we introduce the most commonly used ones, in the hope that your travels will bring you to the following resources when you take your next steps:
Often, we only require some quick and dirty plots to visualize data, making the qplot() function perfect. As stated above, if we assign a qplot to an object in R, we can still add more complex layers to our base plot like this: plot.object <- plot.object + layer(). However, the qplot() function simply wraps the raw ggplot() commands into a form similar to the plot() function in base R. Let's dissect what ggplot2 is actually doing when we use qplot():
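A sketch of the equivalence (the iris variables are our choice of example):

```r
library(ggplot2)

# The quick way...
p.quick <- qplot(data = iris, x = Sepal.Length, y = Sepal.Width, geom = "point")

# ...is shorthand for the raw grammar-of-graphics call:
p.raw <- ggplot(data = iris, aes(x = Sepal.Length, y = Sepal.Width)) +
  geom_point()
```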
In the raw ggplot() function, we first specify the data and then map the x and y variables in aes(). We subsequently add each individual element one at a time, which unleashes the full potential of the grammar of graphics. As your needs shift towards more advanced features of the ggplot2 package, it is good practice to use the raw syntax of the ggplot() function, as we will do in the rest of the workshop.
Before we get started with some more advanced features, let's build a foundation by assigning the previous plot to an object:
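For instance (using the iris scatter plot from before as our example):

```r
library(ggplot2)

basic.plot <- ggplot(data = iris,
                     aes(x = Sepal.Length, y = Sepal.Width)) +
  geom_point()
basic.plot  # printing the object draws the plot
```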
Let's add colours and shapes to our basic scatter plot by adding these as arguments in the aes() function:
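A sketch, mapping both aesthetics to the Species factor:

```r
library(ggplot2)

basic.plot <- ggplot(data = iris,
                     aes(x = Sepal.Length,
                         y = Sepal.Width,
                         colour = Species,  # one colour per species
                         shape = Species)) +  # one point shape per species
  geom_point()
basic.plot
```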
We already have some geometric objects in our basic plot, which we added as points using geom_point(). Now let's add some more advanced geoms, such as a linear regression with geom_smooth():
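A minimal sketch (the base plot is redefined here so the snippet stands alone):

```r
library(ggplot2)

basic.plot <- ggplot(iris, aes(x = Sepal.Length, y = Sepal.Width,
                               colour = Species)) +
  geom_point()

# Add a fitted linear regression (one line per colour group)
fit.plot <- basic.plot + geom_smooth(method = "lm")
fit.plot
```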
You can even use emojis as your geoms!!! You need the
emoGG package by David Lawrence Miller.
Produce a colourful plot with a linear regression (or other smoother) from built-in data such as the CO2 dataset.
Data becomes difficult to visualize when there are multiple factors accounted for in experiments. For example, the
CO2 data set contains data on CO2 uptake for chilled vs non-chilled treatments in a species of grass from two different regions (Québec and Mississippi). Let's build a basic plot using this data set:
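A sketch of such a base plot, mapping concentration against uptake and colouring by treatment (this particular mapping is our assumption):

```r
library(ggplot2)
data(CO2)

CO2.plot <- ggplot(data = CO2,
                   aes(x = conc, y = uptake, colour = Treatment)) +
  geom_point()
CO2.plot
```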
If we want to compare regions, it is useful to make two panels, where the axes are perfectly aligned for data to be compared easily with the human eye. In ggplot2, we can accomplish this with the
facet_grid() function. Briefly its basic syntax is as follows:
plot.object + facet_grid(rows ~ columns). Here, rows and columns are variables in the data.frame whose factor levels we want to compare by separating the data into panels. To compare data between regions, we can use the following code:
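In the CO2 dataset the region variable is Type, so faceting into side-by-side panels could look like this:

```r
library(ggplot2)

CO2.plot <- ggplot(CO2, aes(x = conc, y = uptake, colour = Treatment)) +
  geom_point()

# One column of panels per region (Quebec, Mississippi)
CO2.facets <- CO2.plot + facet_grid(. ~ Type)
CO2.facets
```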
*Note: you can stack the two panels instead by using facet_grid(Type ~ .).
Now that we have two facets, let's observe how CO2 uptake evolves as CO2 concentration rises by adding connecting lines to the points using geom_line():
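A sketch (the faceted base plot is rebuilt here so the snippet stands alone):

```r
library(ggplot2)

CO2.plot <- ggplot(CO2, aes(x = conc, y = uptake, colour = Treatment)) +
  geom_point() +
  facet_grid(. ~ Type)

# Connect the points with lines, in order of increasing conc
CO2.lines <- CO2.plot + geom_line()
CO2.lines
```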
As we can see from the previous figure, the lines connect vertically across three points for each treatment. If we look into the details of the CO2 dataset, this is because each treatment in each region has 3 replicates. If we want to plot lines connecting the data points of each replicate separately, we can map a group aesthetic in the geom_line() function as follows:
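In the CO2 dataset each replicate is identified by the Plant variable, so the grouping could be sketched like this:

```r
library(ggplot2)

CO2.plot <- ggplot(CO2, aes(x = conc, y = uptake, colour = Treatment)) +
  geom_point() +
  facet_grid(. ~ Type)

# One line per replicate (plant), instead of one per treatment
CO2.grouped <- CO2.plot + geom_line(aes(group = Plant))
CO2.grouped
```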
There are many options available to save your beloved creations to file.
In RStudio, there are many options available to you to save your figures. You can copy them to the clipboard or export them as any file type (png, jpg, emf, tiff, pdf, metafile, etc…):
In instances where you are producing many plots (e.g. long programs that produce many plots automatically while performing analyses), it is useful to save them all in one PDF file. This can be accomplished as follows:
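A sketch using the base pdf() device (the file name and example plots are ours):

```r
library(ggplot2)
p1 <- ggplot(iris, aes(Sepal.Length, Sepal.Width)) + geom_point()
p2 <- ggplot(iris, aes(Species, Sepal.Length)) + geom_boxplot()

pdf("iris_plots.pdf")  # open a PDF graphics device in the working directory
print(p1)              # each print() adds a page to the PDF
print(p2)
dev.off()              # close the device to finish writing the file
```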
There are many other options; of particular note is ggsave(), as it writes directly to your working directory in a single line of code, and you can specify the name of the file and the dimensions of the plot:
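For example (file name and dimensions are our choices; the file type is inferred from the extension):

```r
library(ggplot2)

CO2.plot <- ggplot(CO2, aes(x = conc, y = uptake, colour = Treatment)) +
  geom_point()

# Dimensions are in inches by default
ggsave("CO2_plot.pdf", CO2.plot, width = 6, height = 4)
```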
The ggplot2 package automatically chooses colours for you based on the chromatic circle and the number of colours you need. However, in many instances it is useful to use your own colours. You can easily accomplish this using the scale_colour_manual() function:
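A sketch using the CO2 plot; the specific colour names here are arbitrary choices of ours:

```r
library(ggplot2)

CO2.plot <- ggplot(CO2, aes(x = conc, y = uptake, colour = Treatment)) +
  geom_point()

# Assign one colour per Treatment level
CO2.custom <- CO2.plot +
  scale_colour_manual(values = c("nonchilled" = "forestgreen",
                                 "chilled"    = "steelblue"))
CO2.custom
```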
For added control, you can also use hexadecimal codes for colours specifically tailored to your needs. A great resource for hex codes is the internet: just Google “RGB to Hex” and a colour picker will be returned. You can input the hex codes directly into the
scale_colour_manual() function instead of colour names.
If you are looking for a colour palette to use across your variables and don't have specific colours in mind, viridis provides colour-blind- and printer-friendly colours based on four palettes. You can specify which palette to use, how many colours to use, and where to start and end on the spectrum of the palette. Let's try adding the default palette (“D”) to our CO2 plot for the two colours we need.
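A sketch using the discrete viridis scale that ships with recent versions of ggplot2 (scale_colour_viridis_d(); the standalone viridis package offers an equivalent scale):

```r
library(ggplot2)

CO2.plot <- ggplot(CO2, aes(x = conc, y = uptake, colour = Treatment)) +
  geom_point()

# option = "D" is the default viridis palette
CO2.viridis <- CO2.plot + scale_colour_viridis_d(option = "D")
CO2.viridis
```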
Check out the following links/packages for some really awesome colour fun:
You can control all aspects of the axes and scales used to display your data (e.g. breaks, labels, positions, etc.):
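For instance (the axis titles and break positions below are our own choices):

```r
library(ggplot2)

CO2.plot <- ggplot(CO2, aes(x = conc, y = uptake, colour = Treatment)) +
  geom_point()

CO2.scaled <- CO2.plot +
  scale_x_continuous(name   = "CO2 concentration (mL/L)",
                     breaks = c(250, 500, 750, 1000)) +  # custom tick positions
  scale_y_continuous(name = "CO2 uptake (umol/m^2 s)")
CO2.scaled
```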
This is where we get to the “publication quality” part of the ggplot2 package. I am sure that by now, some of you have formed opinions about that grey background… You either love it or hate it (like liver and Brussels sprouts). Worry no longer: you will not have to include figures with a grey background in that next publication you are submitting to Nature!
Like every other part of the grammar of graphics, we can modify the theme of the plot to suit our needs, or our higher sense of style! It's as simple as:
plot.object + theme()
There are way too many theme elements built into the ggplot2 package to mention here, but you can find a complete list in the ggplot theme vignette. Instead of modifying the many elements contained in
theme(), you can start from pre-made theme functions, which each contain a specific set of elements. For example, we can use the “black and white” theme like this:
CO2.plot + theme_bw()
Another great strategy is to build a theme tailored to your own publication needs, and then apply it to all your figures:
mytheme <- theme_bw() +
  theme(plot.title = element_text(colour = "red")) +
  theme(legend.position = c(0.9, 0.9))
CO2.plot + mytheme
The ggthemes package is a great project developed by Jeffrey Arnold on GitHub and also hosted on the CRAN repository, so it can easily be installed as follows:
The package contains many themes, geoms, and colour ramps for ggplot2 which are based on the works of some of the most renowned and influential names in the world of data visualization, from classics such as Edward Tufte to the modern data journalists/programmers at the FiveThirtyEight blog.
Here is a quick example which uses Tufte's boxplot and theme; as you can see, he is a minimalist:
data(OrchardSprays)
tufte.box.plot <- ggplot(data = OrchardSprays, aes(x = treatment, y = decrease)) +
  geom_tufteboxplot() +
  theme_tufte()
tufte.box.plot
While hardcore programmers might laugh at you for using a GUI, there is no shame in using one! Jeroen Schouten, who is about as hardcore a programmer as you can get, understood that the learning curve for beginners could be steep, and so designed an online ggplot2 GUI. While it will never be as fully functional as coding the grammar of graphics, it is very complete. You can import from Excel, Google spreadsheets, or any data format, and build a few plots using some tutorial videos. The great part is that it shows you the code you have generated to build your figure, which you can copy and paste into R as a skeleton on which to add some meat using more advanced features such as themes.
Tidying allows you to manipulate the structure of your data while preserving all of the original information. Many functions in R require, or work better with, data structures that are not the easiest for people to read.
In contrast to aggregation, which reduces many cells in the original data set to one cell in the new dataset, tidying preserves a one-to-one connection. Although aggregation can be done with many functions in R, the tidyr package allows you to both reshape and aggregate within a single syntax.
Install and load the tidyr package:
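As with ggplot2 earlier, this is a one-time install followed by a per-session load:

```r
# install.packages("tidyr")  # run once if tidyr is not already installed
library(tidyr)
```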
In addition to
CO2, we will use the built-in dataset
airquality for this part of the workshop.
Explore the datasets:
You can also use the following code to find other datasets available in R:
Let's pretend you send out your field assistant to measure the diameter at breast height (DBH) and height of three tree species for you. The result is this messy (“wide”) data set.
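A small data frame matching that description could be created as follows (the species and values here are chosen to match the spread() output shown further below):

```r
# Wide ("messy") format: one column per measured variable
messy <- data.frame(Species = c("Ash", "Elm", "Oak"),
                    DBH     = c(13, 20, 12),
                    Height  = c(55, 85, 56))
messy
```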
“Long” format data has a column stating the measured variable types and a column containing the values associated with those variables (each column is a variable, each row is an observation). This is considered “tidy” data because it is easily interpreted by most packages for visualization and analysis in R.
The format of your data depends on your specific needs, but some functions and packages such as
ggplot2 work well with long format data.
Additionally, long form data can more easily be aggregated and converted back into wide form data to provide summaries, or check the balance of sampling designs.
We can use the
tidyr package to:
Most of the packages in the Hadleyverse will require long format data, where each row is an entry and each column is a variable. Let's try to “gather” this messy data using the gather() function in tidyr. gather() takes multiple columns and gathers them into key-value pairs. Note that you have to specify (data, what you want to gather across, the “unit” of your new column, the row identity).
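A sketch with the small messy data frame from above (recreated here so the snippet stands alone):

```r
library(tidyr)

messy <- data.frame(Species = c("Ash", "Elm", "Oak"),
                    DBH     = c(13, 20, 12),
                    Height  = c(55, 85, 56))

# gather(data, key column, value column, columns to gather)
messy.long <- gather(messy, Measurement, cm, DBH, Height)
messy.long
```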
Let's try this with the CO2 dataset. Here we might want to collapse the last two quantitative variables:
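One way to do this (the key and value column names are our choices):

```r
library(tidyr)
data(CO2)

# Collapse the conc and uptake columns into key-value pairs
CO2.long <- gather(CO2, response, value, conc, uptake)
head(CO2.long)
```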
Sometimes you might want to go from long to wide format. spread() uses the same syntax as gather() (they are complements):
> messy.wide <- spread(messy.long, Measurement, cm)
> messy.wide
  Species DBH Height
1     Ash  13     55
2     Elm  20     85
3     Oak  12     56
gather() all the columns (except Month and Day) into rows. Then
spread() the resulting dataset to return the same data format as the original data.
Sometimes you might have really messy data in which two variables are crammed into one column. Thankfully, the separate() function can (wait for it) separate the two variables into two columns.
Let's say you have this really messy data set
First, we want to convert this wide dataset to long format. Then we want to split the two sampling times (T1 and T2). The syntax here is separate(data, what column, into what, by what). The tricky part is telling R where to separate the character string in your column entry, using a regular expression to describe the separating character. Here the string should be separated by the period (.), which must be escaped in the regular expression as “\\.” because a bare period matches any character.
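A self-contained sketch; the dataset below is hypothetical (invented variable names and random values), built only to demonstrate the wide-to-long step followed by separate():

```r
library(tidyr)

# Hypothetical messy dataset: two response variables crossed with two
# sampling times (T1, T2), crammed together in the column names
set.seed(10)
really.messy <- data.frame(id  = 1:4,
                           trt = c("control", "farm", "control", "farm"),
                           zooplankton.T1 = runif(4),
                           fish.T1        = runif(4),
                           zooplankton.T2 = runif(4),
                           fish.T2        = runif(4))

# Step 1: wide to long (keep id and trt as row identifiers)
really.messy.long <- gather(really.messy, taxa, count, -id, -trt)

# Step 2: split "taxa" on the period; sep is a regular expression,
# so the period must be escaped as "\\."
really.messy.sep <- separate(really.messy.long, taxa,
                             into = c("species", "time"), sep = "\\.")
head(really.messy.sep)
```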
> head(airquality)
  Ozone Solar.R Wind Temp Month Day
1    41     190  7.4   67     5   1
2    36     118  8.0   72     5   2
3    12     149 12.6   74     5   3
4    18     313 11.5   62     5   4
5    NA      NA 14.3   56     5   5
6    28      NA 14.9   66     5   6
The dataset is in wide format, where measured variables (ozone, solar.r, wind and temp) are placed in their own columns.
1. Visualize each individual variable and the range it displays for each month in the time series:
fMonth <- factor(airquality$Month)  # Convert the Month variable to a factor
ozone.box <- ggplot(airquality, aes(x = fMonth, y = Ozone)) + geom_boxplot()
solar.box <- ggplot(airquality, aes(x = fMonth, y = Solar.R)) + geom_boxplot()
temp.box <- ggplot(airquality, aes(x = fMonth, y = Temp)) + geom_boxplot()
wind.box <- ggplot(airquality, aes(x = fMonth, y = Wind)) + geom_boxplot()
You can use
grid.arrange() in the package
gridExtra to put these plots into 1 figure.
combo.box <- grid.arrange(ozone.box, solar.box, temp.box, wind.box, nrow = 2) # nrow = number of rows you would like the plots displayed on.
This arranges the 4 separate plots into one panel for viewing. Note that the scales on the individual y-axes are not the same:
2. You can continue using the wide format of the airquality dataset to make individual plots of each variable showing day measurements for each month.
ozone.plot <- ggplot(airquality, aes(x = Day, y = Ozone)) + geom_point() + geom_smooth() + facet_wrap(~ Month, nrow = 2)
solar.plot <- ggplot(airquality, aes(x = Day, y = Solar.R)) + geom_point() + geom_smooth() + facet_wrap(~ Month, nrow = 2)
wind.plot <- ggplot(airquality, aes(x = Day, y = Wind)) + geom_point() + geom_smooth() + facet_wrap(~ Month, nrow = 2)
temp.plot <- ggplot(airquality, aes(x = Day, y = Temp)) + geom_point() + geom_smooth() + facet_wrap(~ Month, nrow = 2)
You could even combine these different faceted plots together (though it looks pretty ugly at the moment):
BUT, what if I'd like to use facet_wrap() for the variables as opposed to by month, or put all variables on one plot?
Change data from wide to long format (See back to Section 2.3)
air.long <- gather(airquality, variable, value, -Month, -Day)
air.wide <- spread(air.long, variable, value)
fMonth.long <- factor(air.long$Month)
weather <- ggplot(air.long, aes(x = fMonth.long, y = value)) +
  geom_boxplot() +
  facet_wrap(~ variable, nrow = 2)
The weather plot with facet_wrap():
This is the same data but working with it in wide versus long format has allowed us to make different looking plots.
The weather plot uses facet_wrap to put all the individual variables on the same scale. This may be useful in many circumstances. However, using the facet_wrap means that we don't see all the variation present in the wind variable.
In that case, you can modify the code to allow the scales to be determined per facet by setting scales = "free" in facet_wrap():
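A sketch (air.long is rebuilt here so the snippet stands alone):

```r
library(ggplot2)
library(tidyr)

air.long <- gather(airquality, variable, value, -Month, -Day)

# scales = "free" lets each facet choose its own y-axis range
weather.free <- ggplot(air.long, aes(x = factor(Month), y = value)) +
  geom_boxplot() +
  facet_wrap(~ variable, nrow = 2, scales = "free")
weather.free
```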
We can also use the long format data (air.long) to create a plot with all the variables included on a single plot:
weather2 <- ggplot(air.long, aes(x = Day, y = value, colour = variable)) +
  geom_point() +                 # this part puts all the day measurements on one plot
  facet_wrap(~ Month, nrow = 1)  # adding this part splits the observations by month
weather2
The vision of the
dplyr package is to simplify data manipulation by distilling all the common data manipulation tasks to a set of intuitive verbs. The result is a comprehensive set of tools that allows users to easily translate their thoughts into
R code. In addition to ease of use, it is also an amazing package because:
The dplyr package is built around a core set of “verbs” (or commands). We will start with the following four verbs, because these operations are ubiquitous in data manipulation:
select(): select columns from a data frame
filter(): filter rows according to defined criteria
arrange(): re-order data based on criteria (e.g. ascending, descending)
mutate(): create or transform values in a column
Let's load the
dplyr package and explore these functions:
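As before, install once and load per session:

```r
# install.packages("dplyr")  # run once if dplyr is not already installed
library(dplyr)
```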
In these examples, we will use the airquality dataset; in the challenges, we will use the ChickWeight dataset. The airquality dataset contains several columns:
> head(airquality)
  Ozone Solar.R Wind Temp Month Day
1    41     190  7.4   67     5   1
2    36     118  8.0   72     5   2
3    12     149 12.6   74     5   3
4    18     313 11.5   62     5   4
5    NA      NA 14.3   56     5   5
6    28      NA 14.9   66     5   6
Suppose we are only interested in the variation of “Ozone” over time, then we can select the subset of required columns for further analysis:
> ozone <- select(airquality, Ozone, Month, Day)
> head(ozone)
  Ozone Month Day
1    41     5   1
2    36     5   2
3    12     5   3
4    18     5   4
5    NA     5   5
6    28     5   6
As you can see the general format for this function is
select(dataframe, column1, column2, …). Most
dplyr functions will follow a similarly simple syntax.
A common operation in data manipulation is the extraction of a subset based on specific conditions. For example, in the
airquality dataset, suppose we are interested in analyses that focus on the month of August during high temperature events:
> august <- filter(airquality, Month == 8, Temp >= 90)
> head(august)
  Ozone Solar.R Wind Temp Month Day
1    89     229 10.3   90     8   8
2   110     207  8.0   90     8   9
3    NA     222  8.6   92     8  10
4    76     203  9.7   97     8  28
5   118     225  2.3   94     8  29
6    84     237  6.3   96     8  30
The syntax we employed here is
filter(dataframe, logical statement 1, logical statement 2, …). Remember that logical statements provide a TRUE or FALSE answer. The
filter() function retains all the data for which the statement is TRUE. This can also be applied on characters and factors.
In data manipulation, we sometimes need to sort our data (e.g. numerically or alphabetically) for subsequent operations. A common example of this is a time series. First, let's use the following code to create a scrambled version of the airquality dataset:
> air_mess <- sample_frac(airquality, 1)
> head(air_mess)
    Ozone Solar.R Wind Temp Month Day
21      1       8  9.7   59     5  21
42     NA     259 10.9   93     6  11
151    14     191 14.3   75     9  28
108    22      71 10.3   77     8  16
8      19      99 13.8   59     5   8
104    44     192 11.5   86     8  12
Now let's arrange the data frame back into chronological order, sorting by Month and then by Day:
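A sketch (the scrambled copy is recreated here so the snippet stands alone):

```r
library(dplyr)

air_mess  <- sample_frac(airquality, 1)     # scrambled copy of airquality
air_chron <- arrange(air_mess, Month, Day)  # sort by Month, then Day within Month
head(air_chron)
```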
Note that we can also sort in descending order by wrapping the target column in desc() inside the arrange() function.
Besides subsetting or sorting your data frame, you will often require tools to transform your existing data or generate some additional data based on existing variables. For example, suppose we would like to convert the temperature variable from degrees Fahrenheit to degrees Celsius:
> airquality_C <- mutate(airquality, Temp_C = (Temp - 32) * (5/9))
> head(airquality_C)
  Ozone Solar.R Wind Temp Month Day   Temp_C
1    41     190  7.4   67     5   1 19.44444
2    36     118  8.0   72     5   2 22.22222
3    12     149 12.6   74     5   3 23.33333
4    18     313 11.5   62     5   4 16.66667
5    NA      NA 14.3   56     5   5 13.33333
6    28      NA 14.9   66     5   6 18.88889
Note that the syntax here is quite simple, yet within a single call of the mutate() function we can replace existing columns, create multiple new columns, and even build each new column from columns created earlier in the same function call.
The magrittr package brings a new and exciting tool to the table: a pipe operator. Pipe operators link functions together so that the output of one function flows into the input of the next function in the chain. The syntax for the magrittr pipe operator is %>%. This operator truly unleashes the full power and potential of dplyr, and we will be using it for the remainder of the workshop. First, let's install and load it:
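The usual one-time install and per-session load:

```r
# install.packages("magrittr")  # run once if magrittr is not already installed
library(magrittr)
```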
Using it is quite simple, and we will demonstrate that by combining some of the examples used above. Suppose we wanted to
filter() rows to limit our analysis to the month of June, then convert the temperature variable to degrees Celsius. We can tackle this problem step by step, as before:
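A sketch of the step-by-step (nested) approach; the object name june_temp is our own:

```r
library(dplyr)

# Inner call filters to June; outer call adds the Celsius column
june_temp <- mutate(filter(airquality, Month == 6),
                    Temp_C = (Temp - 32) * (5 / 9))
head(june_temp)
```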
This code can be difficult to decipher because we start on the inside and work our way out. As we add more operations, the resulting code becomes increasingly illegible. Instead of wrapping each function one inside the other, we can accomplish these 2 operations by linking both functions together:
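The same two operations, chained with the pipe (dplyr re-exports %>% from magrittr):

```r
library(dplyr)

june_temp <- airquality %>%
  filter(Month == 6) %>%                  # keep only June
  mutate(Temp_C = (Temp - 32) * (5 / 9))  # then convert to Celsius
head(june_temp)
```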
Notice that within each function, we have removed the first argument, which specifies the dataset. Instead, we specify our dataset first and then “pipe” it into the next function in the chain. This is similar to ggplot2, where we only specify the data frame once, not every time we add a layer. The advantages of this approach are that our code is less redundant and that functions are executed in the same order we read and write them, which makes it easier and quicker both to translate our thoughts into code and to read someone else's code and grasp what is being accomplished. As the complexity of your data manipulations increases, it quickly becomes apparent why this is a powerful and elegant approach to writing your R code.
Quick tip: in RStudio, you can insert the pipe operator quickly using the hotkey Ctrl (or Cmd on Mac) + Shift + M.
The dplyr verbs we have explored so far can be useful on their own, but they become especially powerful when we link them with each other using the pipe operator (
%>%) and by applying them to groups of observations. The following functions allow us to split our data frame into distinct groups on which we can then perform operations individually, such as aggregating/summarising:
group_by(): group data frame by a factor for downstream commands (usually summarise)
summarise(): summarise values in a data frame, or in groups within the data frame, with aggregation functions (e.g. min(), max(), mean(), etc.)
These verbs provide the needed backbone for the Split-Apply-Combine strategy that was initially implemented in the
plyr package on which
dplyr is built. Let's demonstrate the use of these with an example using the
airquality dataset. Suppose we are interested in the mean temperature and standard deviation within each month:
> month_sum <- airquality %>%
+   group_by(Month) %>%
+   summarise(mean_temp = mean(Temp), sd_temp = sd(Temp))
> month_sum
Source: local data frame [5 x 3]

  Month mean_temp  sd_temp
  (int)     (dbl)    (dbl)
1     5  65.54839 6.854870
2     6  79.10000 6.598589
3     7  83.90323 4.315513
4     8  83.96774 6.585256
5     9  76.90000 8.355671
Using the ChickWeight dataset, create a summary table which displays the difference in weight between the maximum and minimum weight of each chick in the study. Employ dplyr verbs and the %>% operator.
Note that we can group the data frame using more than one factor, using the general syntax as follows:
group_by(group1, group2, …)
When we group by multiple factors with group_by(), the groups create a layered onion, and each subsequent use of the summarise() function peels off the outer layer of the onion. In the above example, after we carried out a summary operation on group2, the resulting data set would remain grouped by group1 for downstream operations.
Using the ChickWeight dataset, create a summary table which displays, for each diet, the average individual difference in weight between the end and the beginning of the study. Employ
dplyr verbs and the
%>% operator. (Hint:
last() may be useful here.)
In addition to all the operations we have explored,
dplyr also provides some functions that allow you to join two data frames together. Their syntax is simple relative to alternatives in other R packages. These joins are beyond the scope of the current introductory workshop, but they provide extremely useful functionality you may eventually require for more advanced data manipulation needs.
Here are some great resources for learning ggplot2, tidyr and dplyr that we used when compiling this workshop:
Lecture notes from Hadley's course Stat 405 (the other lessons are awesome too!)
dplyr and tidyr
BONUS! Check out R style guides to help format your scripts for easy reading: