---
title: "Data analysis in R"
subtitle: "IML Machine Learning Workshop, CERN"
author:
- "Andrew John Lowe\\newline"
- "Wigner Research Centre for Physics,\\newline"
- "Hungarian Academy of Sciences"
output: beamer_presentation
header-includes:
- \setbeamertemplate{caption}{\raggedright\insertcaption\par}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
## Downloading and installing R
You can follow this tutorial while I talk
1. Download R for Linux, Mac or Windows from the Comprehensive R Archive Network (CRAN): [https://cran.r-project.org/](https://cran.r-project.org/)
- Latest release version 3.3.3 (2017-03-06, "Another Canoe")
RStudio is a free and open-source integrated development environment (IDE) for R
2. Download RStudio from [https://www.rstudio.com/](https://www.rstudio.com/)
- Make sure you have installed R first!
- Get RStudio Desktop (free)
- Latest release version 1.0.136
## RStudio
![RStudio looks something like this.](rstudio.png)
## RStudio
![RStudio with source editor, console, environment and plot pane.](rstudio2.png)
## What is R?
- An open source programming language for statistical computing
- R is a dialect of the S language
- S is a language that was developed by John Chambers and others at Bell Labs
- S was initiated in 1976 as an internal statistical analysis environment --- originally implemented as FORTRAN libraries
- Rewritten in C in 1988
- S continues to this day as part of the GNU free software project
- R created by Ross Ihaka and Robert Gentleman in 1991
- First announcement of R to the public in 1993
- GNU General Public License makes R free software in 1995
- Version 1.0.0 release in 2000
## Why use R?
- Free
- Runs on almost any standard computing platform/OS
- Frequent releases; active development
- Very active and vibrant user community
- Estimated ~2 million users worldwide
- R-help and R-devel mailing lists, Stack Overflow
- Frequent conferences; useR!, EARL, etc.
- 388 Meetup groups worldwide
- R-Ladies chapters in many major cities
- Big-name backing: Microsoft, Google, IBM, Oracle, ...
- Functionality is divided into modular packages
- Download and install just what you need
- There are now > 10000 packages on CRAN
- Graphics capabilities very sophisticated
- Useful for interactive work, but contains a powerful programming language for developing new tools
## Any other reasons for using R?
- R enables fast prototyping and high-level abstractions that let you concentrate on what you want to achieve, rather than on the mechanics of how you might do it
- Enables you to stay "in the flow" of data analysis
- Latest machine learning algorithms are available
- Your technical questions have probably already been answered on Stack Overflow
- R offers a pleasant user experience
**R allows you to concentrate on your data, not on your tools**
## KDNuggets 2016 Advanced Analytics Survey
![KDnuggets: R remains leading tool, but Python usage growing very fast.](kdnuggets.jpg)
## O'Reilly 2016 Data Science Salary Survey
![O'Reilly: SQL is king, but R and Python very popular.](survey.png)
## Drawbacks of R
- Essentially based on 40-year-old technology
- Functionality is based on consumer demand and user contributions
- Objects must generally be stored in physical memory
- Your data must fit within the contiguous RAM of your hardware
- True of other software such as scikit-learn, WEKA, and TMVA
- There have been advancements to deal with this, including:
- Interfaces to Spark and Hadoop
- Packages `ff` and `bigmemory`
- File-backed data objects
- Big RAM is eating Big Data!
- You can now get 2 TB X1 instances on Amazon EC2
- Google and Microsoft also offer instances with large RAM
- Trend driven by companies who increasingly are replacing local hardware with cloud computing and need massive compute resources
## Resources for learning R
- Online courses on Coursera, edX, DataCamp, and elsewhere
- For machine learning specifically, the book ***Introduction to Statistical Learning*** (Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani) can be downloaded for free from [http://www-bcf.usc.edu/~gareth/ISL/](http://www-bcf.usc.edu/~gareth/ISL/) and contains many examples in R
- Learn R, in R, with `swirl` interactive courses:
```{r eval = FALSE}
install.packages("swirl")
require(swirl)
swirl()
```
## Design of the R system
- The R system is divided into two conceptual parts:
- The "base" R system that you download from CRAN
- Everything else
- R functionality is divided into a number of packages
## Where to get packages
- CRAN [https://cran.r-project.org](https://cran.r-project.org)
- GitHub
- Bioconductor [https://www.bioconductor.org/](https://www.bioconductor.org/)
- Mostly bioinformatics and genomics stuff
- Neuroconductor [https://www.neuroconductor.org/](https://www.neuroconductor.org/)
- The new kid on the block: computational imaging software for brain imaging
- RForge [http://rforge.net/](http://rforge.net/)
- Not so well known
- Some .tar.gz file you downloaded from a website
- Use caution!
## Installing from CRAN
```{r eval = FALSE}
install.packages("devtools") # Install it
require(devtools) # Load it
library(devtools) # Alternatively, load like this
```
You might need to specify a repository:
```{r eval = FALSE}
install.packages("devtools",
repos = "http://stat.ethz.ch/CRAN/")
```
List of mirrors here: [https://cran.r-project.org/mirrors.html](https://cran.r-project.org/mirrors.html)
## Installing from Bioconductor and Neuroconductor
```{r eval = FALSE}
source("https://bioconductor.org/biocLite.R")
biocLite()
biocLite("GenomicFeatures") # For example
```
```{r eval = FALSE}
source("https://neuroconductor.org/neurocLite.R")
neuro_install(c("fslr", "hcp")) # Install these two
```
## Installing from GitHub
```{r eval = FALSE}
install.packages("devtools") # Install devtools first!
require(devtools) # Load devtools
install_github("mlr-org/mlr") # Install mlr
require(mlr) # Load mlr
```
Tip: what if the package you want has been removed from, say, CRAN or Bioconductor? Both have read-only GitHub mirrors of their repositories, which can be a useful alternative source for packages that have disappeared.
## How to get help
- Typing `?command` will display the help pages for `command`, *e.g.*:
```{r eval = FALSE}
# This will display help on plotting of R objects:
?plot
```
- RTFM! Google for package reference manuals on CRAN
- CRAN packages usually come with example code, found at the bottom of the help pages or on the package's CRAN webpage (look for "Vignettes")
- CRAN Task Views: view packages for particular tasks [https://cran.r-project.org/web/views/](https://cran.r-project.org/web/views/)
- Click *Help* on the RStudio toolbar for cheatsheets and quick reference guides
## Other ways for getting help
![I recommend this.](google-error.jpg)
## A good way to learn R
![Another gripping page-turner.](cat-program.jpg)
## Packages for fun/strange things you can do with R
- `twitteR`: send tweets (or do sentiment analysis)
- `ubeR`: call an Uber
- `rpushbullet`: send a notification to your phone
- ("Your code finished running!")
- `predatory`: keep track of predatory publishers
- `xkcd`: draw your plots in *xkcd* style
- `remoji`: plot using emoji
- `magick`: advanced image processing
- `catterplots`: plot using cats
## Image processing with magick
```{r fig.height = 6, echo = FALSE, message = FALSE, results = "hide", fig.cap="Obviously, you can replace Prof. Frink with (for example) the logo for your institute or experiment."}
# Installation details:
# https://cran.r-project.org/web/packages/magick/vignettes/intro.html
# install.packages("magick")
require(magick, quietly = T)
frink <- image_read("https://jeroenooms.github.io/images/frink.png")
require(ggplot2, quietly = T)
require(grid, quietly = T)
qplot(speed, dist, data = cars, geom = c("point", "smooth"))
grid.raster(frink)
```
## Meow
```{r echo = FALSE, message = FALSE, results = "hide", fig.cap="A silly example of R's plotting abilities."}
# devtools::install_github("Gibbsdavidl/CatterPlots")
require(CatterPlots, quietly = T)
x <- -10:10; y <- -x^2 + 10; set.seed(123)
# random cats
meow <- multicat(xs=x, ys=rnorm(21),
cat=c(1,2,3,4,5,6,7,8,9,10),
catcolor=list(c(0,0,0,1)),
canvas=c(-0.1,1.1, -0.1, 1.1),
xlab="some cats", ylab="other cats", main="Random Cats")
```
# Fundamentals
## Data types
- R has five basic or "atomic" classes of objects:
- character
- numeric (real numbers)
- integer
- complex
- logical (Boolean `TRUE`/`FALSE` or `T`/`F` in abbreviated form)
- The most basic object is a vector
- Vectors are homogeneous containers (elements all same type)
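A quick check of the atomic classes with `class()` (a minimal illustration):

```{r}
class("a")    # character
class(4.2)    # numeric
class(4L)     # integer
class(2+3i)   # complex
class(TRUE)   # logical
```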
## Numbers
- By default, numbers are objects of the class numeric
- If you explicitly want an integer, you need to specify the `L` suffix: `42L`
- Special numbers:
- `Inf`: infinity
- `NaN`: undefined value; "not a number", *e.g.* $0/0$. Another example: $p_\mathrm{T}$ of subleading jet in an event with only one jet can be represented with a NaN. Arithmetic with a NaN value results in a NaN. NaNs can also be thought of as representing a missing value.
- Get numerical characteristics of your machine:
```{r eval = FALSE}
.Machine
# For information on interpreting the output:
?.Machine
```
## Attributes
R objects can have attributes:
- Names
- Dimensions (*e.g., matrices*)
- Length (*e.g., vectors*)
- Class
```{r}
# Create an empty matrix with 2 columns and 3 rows:
m <- matrix(ncol = 2, nrow = 3)
# Dimensions:
dim(m)
class(m)
```
## Evaluation
- When a complete expression is entered at the prompt, it is evaluated and the result of the evaluated expression is returned. The result may be auto-printed.
- `<-` is the assignment operator (you might also see `=` in code)
- Tip: keyboard shortcut in RStudio is `Alt+-` (Windows/Linux) or `Option+-` (Mac)
```{r}
# This is a comment!
x <- 42 # Nothing printed
x # Auto-printed
print(x) # Explicit printing
```
## Sequences
```{r}
x <- 1:5 # Create integer sequence
x
x <- seq(from = 0, to = 1, by = 0.2)
x
```
## Vectors
The `c()` ("concatenate") function can be used to create vectors of objects:
```{r}
x <- c(0.3, 0.5) # Numeric
x <- c(TRUE, FALSE) # Logical
x <- c(T, F) # Logical
x <- c("A", "B", "C") # Character
x <- c(1+5i, 3-2i) # Complex
```
Using the `vector()` function:
```{r}
x <- vector("numeric", length = 10)
x
```
## Mixing objects
```{r}
y <- c(1.7, "a") # y is character
y <- c(TRUE, 2) # y is numeric
y <- c("a", TRUE) # y is character
```
When different objects are mixed in a vector, *coercion* occurs so that every element in the vector is of the same class.
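You can verify the coercion rules with `class()`, for example:

```{r}
class(c(1.7, "a"))   # character wins over numeric
class(c(TRUE, 2))    # logical is promoted to numeric
class(c("a", TRUE))  # character wins again
```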
## Explicit coercion
Objects can be explicitly coerced from one class to another using the `as.*` functions, if available:
```{r}
x <- 0:6
class(x)
as.logical(x)
as.character(x)
```
## Explicit coercion
Nonsensical coercion results in `NA`s (missing values):
```{r}
x <- c("a", "b", "c")
as.numeric(x)
```
## Matrices
Matrices are vectors with a dimension attribute. The dimension attribute is itself an integer vector of length 2 (nrow, ncol):
```{r}
m <- matrix(1:6, nrow = 2, ncol = 3)
m
dim(m)
```
Matrices are constructed column-wise, so entries can be thought of starting in the "upper left" corner and running down the columns.
## Reshaping matrices
Matrices can also be created directly from vectors by adding a dimension attribute:
```{r}
v <- 1:10
v
dim(v) <- c(2, 5)
v
```
We can also reshape matrices (unroll them) using the same method.
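For example, removing the dimension attribute unrolls a matrix back into a plain vector:

```{r}
m <- matrix(1:6, nrow = 2, ncol = 3)
dim(m) <- NULL  # drop the dimension attribute
m               # back to the vector 1 2 3 4 5 6
```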
## cbind-ing and rbind-ing
Matrices can be created by column-binding or row-binding with `cbind()` and `rbind()`:
```{r}
x <- 1:3
y <- 10:12
cbind(x, y)
rbind(x, y)
```
## Lists
Lists are a special type of vector that can contain elements of different classes:
```{r}
x <- list(1, "a", TRUE, 1 + 4i)
x
```
## Factors
Factors are used to represent categorical data. Factors can be unordered or ordered. One can think of a factor as an integer vector where each integer has a *label*:
- `signal`/`background`
- `bjet`/`light`
- `male`/`female`
- `benign`/`malignant`
- `low`/`middle`/`high`
- *etc.*
Similar to `enum`s in C++
## Factors
The order of the levels can be set using the `levels` argument to `factor()`. This can be important in linear modelling because the first level is used as the baseline level:
```{r}
x <- factor(c("signal", "signal",
"background", "signal"))
x
x <- factor(c("signal", "signal",
"background", "signal"),
levels = c("signal", "background"))
x
```
## Factors
```{r}
unclass(x)
```
## Missing values
Missing values are denoted by `NA` or `NaN` for undefined mathematical operations:
```{r}
x <- 0/0
is.na(x) # Is x NA?
is.nan(x) # Is x NaN?
```
- A `NaN` value is also `NA` but the converse is not true
- Systematic use of NaNs in programming languages was introduced by the IEEE 754 floating-point standard in 1985
## Data frames
Data frames are used to store tabular data; they are similar to pandas DataFrames and ROOT ntuples
```{r}
x <- data.frame(foo = 1:3, bar = c(T, T, F))
x
colnames(x) # Names of the variables
rownames(x) # To identify specific observations
```
## Names
R objects can also have names, which is very useful for writing readable code and self-describing objects:
```{r}
x <- 1:3
names(x) <- c("foo", "bar", "qux")
x
m <- matrix(1:4, nrow = 2, ncol = 2)
dimnames(m) <- list(c("a", "b"), c("c", "d"))
m
```
## Subsetting
There are a number of operators that can be used to extract subsets of R objects.
- `[` always returns an object of the same class as the original; can be used to select more than one element
- `[[` is used to extract elements of a list or a data frame; it can only be used to extract a single element and the class of the returned object will not necessarily be a list or data frame
- `$` is used to extract elements of a list or data frame by name; semantics are similar to that of `[[`
- The `[[` operator can be used with computed indices; `$` can only be used with literal names
- The `[[` operator can take an integer sequence
- Partial matching of names is allowed with `[[` and `$`
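A small sketch of the last two points (the list names here are invented for illustration):

```{r}
x <- list(alpha = 1:3, beta = letters[1:3])
i <- "beta"   # a computed index...
x[[i]]        # ...works with [[
x$alp         # partial matching: finds x$alpha
```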
## Subsetting
```{r}
x <- c("A", "B", "C", "D", "E")
x[1] # This is the FIRST element!
x[1:5]
x[x > "A"]
```
## Subsetting a matrix
```{r}
x <- matrix(1:6, 2, 3)
x
x[1, 2]
x[1, ] # Note missing index
```
## Subsetting lists
```{r}
x <- list(foo = 1:4, bar = 0.6)
x[1]
class(x[1])
x[[1]]
class(x[[1]])
```
## Subsetting lists
```{r}
x$bar
class(x["bar"])
class(x[["bar"]])
```
## Removing missing values
A common task is to remove missing values (`NA`s)
```{r}
x <- c(1, 2, NA, 4, NA, 5)
is.na(x)
x[!is.na(x)]
```
## Removing missing values
```{r}
airquality[3:6, ] # A toy dataset
good <- complete.cases(airquality) # Logical vector
airquality[good, ][3:6, ] # Remove NAs
```
## Reading data
There are a few principal functions for reading data into R:
- `read.table`, `read.csv`, for reading tabular data
- `readLines`, for reading lines of a text file
- `source`, for reading in R code files (inverse of `dump`)
- `dget`, for reading in R code files (inverse of `dput`)
- `load`, for reading in saved workspaces
- `unserialize`, for reading single R objects in binary form
There are analogous functions for writing data to files: `write.table`, `writeLines`, `dump`, `dput`, `save`, `serialize`
## read.csv and read.table
The `read.csv` function is one of the most commonly used functions for reading data:
```{r eval = FALSE}
data <- read.csv("foo.txt")
```
R will automatically
- skip lines that begin with a `#` (the default comment character for `read.table`; disabled by default in `read.csv`)
- figure out how many rows there are
- figure out what type of variable is in each column of the table
- `read.csv` is identical to `read.table` except that the default field separator is a comma and `header = TRUE`
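A self-contained round-trip through a temporary file (the `stringsAsFactors` default has varied across R versions, so it is set explicitly here):

```{r}
tmp <- tempfile(fileext = ".csv")
write.csv(data.frame(a = 1:3, b = c("x", "y", "z")),
          tmp, row.names = FALSE)
d <- read.csv(tmp, stringsAsFactors = FALSE)
str(d)  # an integer column and a character column
```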
## Faster methods for reading and writing data
- The time taken to read in data is not something we normally have to consider in HEP; one of ROOT's strengths is its ability to allow the user access to huge datasets in the blink of an eye
- Sadly, this is not true of R
- If your data is big, you might be waiting minutes or even hours with `read.csv`
- The `data.table` package provides `fread` and `fwrite`; both are often **many times** faster than the base R `read.csv` and `write.csv` methods
- Alternatively, use the `sqldf` package to subset your data with a SQL query before you read it in --- good if you know SQL!
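A minimal `fread`/`fwrite` round-trip, assuming `data.table` is installed (the temporary file and toy data are for illustration only):

```{r eval = FALSE}
library(data.table)
tmp <- tempfile(fileext = ".csv")
fwrite(data.table(x = 1:1e6, y = rnorm(1e6)), tmp)
DT <- fread(tmp)  # returns a data.table; typically much faster than read.csv
```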
## Bang!
![This is what happens if you try to read in more data than will fit in RAM.](trump.png)
## RAM considerations
- Know Thy System
- A 64-bit OS will usually allow you to squeeze more out of RAM than a 32-bit OS
- What other applications are in use?
- Close those umpteen Chrome tabs!
- You can delete objects you no longer need with `rm(foo)`
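You can check how much memory an object occupies with `object.size()` before deciding what to drop; a minimal sketch:

```{r}
x <- rnorm(1e6)                      # a million doubles
print(object.size(x), units = "MB")  # roughly 8 MB
rm(x)                                # delete it
invisible(gc())                      # calling gc() yourself is rarely needed
```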
## Stored Object Caches for R
I use the `SOAR` package to store objects out of memory in a stored object cache on disk
```{r echo = FALSE}
rm(list = ls()) # Remove all the stuff from earlier in the tutorial
require(SOAR, quietly = TRUE)
```
```{r}
Sys.setenv(R_LOCAL_CACHE=".R_Test") # Store stuff here
dummy <- rnorm(1e6, mean = 2) # 1e6 random normals
ls() # What's in my environment?
Store(dummy) # Cache dummy data
ls() # Gone from RAM:
mean(dummy) # But we can read it
```
## Interfaces to the outside world
Data are read in using connection interfaces. Connections can be made to files (most common) or to other more exotic things.
- `file`, opens a connection to a file
- `gzfile`, opens a connection to a file compressed with gzip
- `bzfile`, opens a connection to a file compressed with bzip2
- `url`, opens a connection to a webpage
Type, for example, `?file` to find out more
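A small round-trip through a gzip connection, using a temporary file:

```{r}
tmp <- tempfile(fileext = ".gz")
con <- gzfile(tmp, "w")
writeLines(c("line 1", "line 2"), con)
close(con)
readLines(gzfile(tmp))  # readLines opens and closes the connection for us
```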
## Reading ROOT data
- At some point, you're going to want/need to do this
- I use Adam Lyon's `RootTreeToR` package
- Debuted at useR! 2007 conference: [http://user2007.org/program](http://user2007.org/program)
- No longer maintained, sadly
- Latest version 6 years old
- You need ROOT installed, but no need to run ROOT
- Launch R/RStudio from a terminal where ROOTSYS is set
- Haven't checked if this works with ROOT 6
- What if this stops working with newer ROOT versions?
- Writing a stand-alone C++ utility is trivial: parse command-line strings like those you would pass to `Draw()`, hand them to `TTreePlayer`, and dump the data into a text file
- Or use an R$\leftrightarrow$Python interface package to run `root_numpy`, then use `feather` to write the data and read it back into R
## RootTreeToR
```{r eval = FALSE}
devtools::install_github("lyonsquark/RootTreeToR")
require(RootTreeToR)
# Open and load ROOT tree:
rt <- openRootChain("TreeName", "FileName")
N <- nEntries(rt) # Number of rows of data
# Names of branches:
branches <- RootTreeToR::getNames(rt)
# Read in a subset of branches (vars), M rows:
df <- toR(rt, vars, nEntries = M) # df: a data.frame
```
## Other packages for reading ROOT data
- **AlphaTwirl**: a Python library for summarising event data in ROOT Trees [(https://github.com/TaiSakuma/AlphaTwirl)](https://github.com/TaiSakuma/AlphaTwirl)
> "The library contains a set of Python classes which can be used to loop over event data, summarize them, and store the results for further analysis or visualization. Event data here are defined as any data with one row (or entry) for one event; for example, data in ROOT TTrees are event data when they have one entry for one proton-proton collision event. Outputs of this library are typically not event data but multi-dimensional categorical data, which have one row for one category. Therefore, the outputs can be imported into R or pandas as data frames. Then, users can continue a multi-dimensional categorical analysis with R, pandas, and other modern data analysis tools."
- For more details, ask the author Tai Sakuma (he's here)
## Control structures
- Structures that will be familiar to C++ programmers include: `if`, `else`, `for`, `while`, `break`, and `return`. `repeat` executes an infinite loop, `next` skips an iteration of a loop.
```{r}
for(i in 1:5) print(letters[i])
```
- Although these structures exist, vectorisation means that loops are not as common as in other languages
- It's usually faster to use a function from the `*apply` family
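For example, an explicit loop and the vectorised equivalent produce the same result:

```{r}
x <- 1:5
out <- numeric(length(x))
for (i in seq_along(x)) out[i] <- x[i]^2  # loop version
all.equal(out, x^2)                       # vectorised version, same answer
```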
## Functions
Functions are created using the `function()` directive and are stored as R objects just like anything else. In particular, they are R objects of class `function`.
```{r}
my_function <- function(foo, bar, ...) {
# Do something interesting
}
```
Functions in R are "first class objects", which means that they can be treated much like any other R object. Importantly,
- Functions can be passed as arguments to other functions
- Functions can be nested, so that you can define a function inside of another function
- The return value of a function is the last expression in the function body to be evaluated
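A tiny illustration of the first and last points (the function names are invented for this example):

```{r}
apply_twice <- function(f, x) f(f(x))  # takes a function as an argument
apply_twice(sqrt, 16)                  # sqrt(sqrt(16)) = 2
square <- function(x) x^2              # last expression is the return value
square(3)
```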
## Function arguments
- R function arguments can be matched positionally or by name
- In addition to not specifying a default value, you can also set an argument value to `NULL`:
```{r eval = FALSE}
f <- function(a, b = 1, c = 2, d = NULL) {
  # stuff
}
```
- Use `...` to indicate a variable number of arguments to pass to an inner function:
```{r}
myplot <- function(x, y, type = "l", ...) {
plot(x, y, type = type, ...)
}
```
- Useful when the number of arguments passed to the function cannot be known in advance
## Inspecting function implementations
- Strip the parentheses from a function to see its implementation:
```{r tidy = TRUE}
myplot
```
## Interlude: simplify your code with pipes
![](pipe.png)
# %>%
## Data pipelines
R is a functional language, which means that your code often contains a lot of parentheses. And complex code often means nesting those parentheses together, which makes code hard to read and understand:
```{r eval = F}
foo <- do_more_stuff(
  do_something(
    read_some_data(data), some.args
  )
)
print(foo)
```
## Using %>%
- The `dplyr` package provides powerful functions for operating on data, and relies on the `%>%` pipeline operator (provided by `magrittr`), which takes the result of the previous operation and passes it as the first argument of the next:
```{r eval = F}
read_some_data(data) %>% do_something(some.args) %>%
do_more_stuff %>% print
```
- Unwrap nested function calls (I call this *dematryoshkafication*)
- Data manipulation with `%>%` mirrors the way we think about processing data: like on a production line, performing actions on an object sequentially, in a stepwise manner
- This results in more readable code, and can be used to reduce the creation of intermediate data objects, which saves RAM
- Takes a mental switch to do analysis this way, but utterly addictive once you get used to it
# End of interlude
## Vectorised operations
Many operations in R are vectorised:
```{r}
x <- 1:4; y <- 5:8 # Two statements on one line
x * y
```
## Vectorised matrix operations
```{r}
a <- rep(10, 4) # 10 repeated 4 times
x <- matrix(1:4, 2, 2); y <- matrix(a, 2, 2)
x * y # element-wise multiplication
x %*% y # true matrix multiplication
```
## Loop functions
The `*apply` family of functions enables looping at the command line:
- `lapply`: Loop over a list and evaluate a function on each element
- `sapply`: Same as lapply but try to simplify the result
- `apply`: Apply a function over the margins of an array
- `tapply`: Apply a function over subsets of a vector
- `mapply`: Multivariate version of lapply
## Loop functions: lapply
`lapply` always returns a list, regardless of the class of the input:
```{r}
x <- list(a = 1:5, b = rnorm(10))
lapply(x, mean)
```
- The actual looping is done internally in C code
## Loop functions: sapply
`sapply` will try to simplify the result of lapply if possible:
```{r}
sapply(x, mean)
```
## Loop functions: apply
- `apply` is used to evaluate a function (often an anonymous one) over the margins of an array
- Usage: `apply(X, MARGIN, FUN, ...)`
- `X` is an array
- `MARGIN` is an integer vector indicating which margins should be "retained".
- `FUN` is a function to be applied
- `...` is for other arguments to be passed to `FUN`
```{r}
x <- matrix(rnorm(12), 4, 3)
apply(x, 2, mean)
```
## Loop functions: apply
A more complicated example:
```{r}
x <- matrix(rnorm(120000), 4, 30000)
apply(x, 1, FUN = function(x) {
quantile(x, probs = c(0.025, 0.975))
})
```
## Some useful functions for exploring your data: str
```{r}
str(mtcars)
```
## Some useful functions for exploring your data: summary
```{r}
summary(mtcars[1:3])
```
## Some useful functions for exploring your data: head
```{r}
head(mtcars[1:5])
```
## Invoke the RStudio viewer
```{r, eval = FALSE}
View(mtcars)
```
![](View.png)
## Simulations
Functions for probability distributions in R
- `rnorm`: generate random Normal variates with a given mean and standard deviation
- `dnorm`: evaluate the Normal probability density (with a given mean/SD) at a point (or vector of points)
- `pnorm`: evaluate the cumulative distribution function for a Normal distribution
- `rpois`: generate random Poisson variates with a given rate
Probability distributions in R usually have four functions associated with them. The functions are prefixed with a
- `d` for density
- `r` for random number generation
- `p` for cumulative distribution
- `q` for quantile function
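For example, for the Normal distribution:

```{r}
dnorm(0)      # density at zero: 1/sqrt(2*pi), about 0.399
pnorm(1.96)   # cumulative probability: about 0.975
qnorm(0.975)  # quantile function, the inverse of pnorm: about 1.96
set.seed(1)
rnorm(3, mean = 0, sd = 1)  # three random draws
```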
# Graphics
## Plotting using R base plots: histograms
```{r fig.height = 4}
hist(airquality$Ozone, col = "skyblue")
```
## Plotting using R base plots: (Tukey) boxplots
```{r fig.height = 4}
airquality <- transform(airquality,
Month = factor(Month))
boxplot(Ozone ~ Month, airquality, col = "powderblue")
```
- Box: interquartile range (IQR)
- Line: median
- Whiskers: extend to the most extreme points within 1.5$\times$IQR of the box
- Outliers: in a Tukey boxplot, points beyond the whiskers
## Plotting using R base plots: scatter plots
```{r fig.height = 6}
x <- rchisq(10000, df = 5)
y <- rgamma(10000, shape = 3)
plot(x, y, pch = 19, col = "slateblue")
rug(x, col = "steelblue")
```
## Plotting using R base plots: scatter plots
```{r fig.height = 6}
smoothScatter(x, y)
```
## Plotting using R base plots: scatter plot matrices
```{r fig.height = 6}
plot(iris, col = iris$Species)
```
## Plotting using R base plots: line graphs
```{r}
data("UKDriverDeaths")
plot(UKDriverDeaths)
```
## Plotting with ggplot2
- An implementation of the *Grammar of Graphics* by Leland Wilkinson
- Written by Hadley Wickham
- Grammar of graphics represents an abstraction of graphic ideas/objects
- Think "noun", "verb", "adjective" for graphics
- Plots are made up of aesthetics (size, shape, colour) and geoms (points, lines)
- See the ggplot2 cheatsheet (*Help* $\rightarrow$ *Cheatsheets* in RStudio)
- The range of plots you can make is **huge**; I'll show only a small selection
- See my talk in the jet tagging session for more examples
## Example ggplot2 plot: histogram
```{r fig.height=4}
require(ggplot2, quietly = TRUE)
ggplot(diamonds, aes(price, fill = cut)) +
geom_histogram(binwidth = 500)
```
## Example ggplot2 plot: scatter plot
```{r echo = FALSE, message = FALSE, warning = FALSE, error = FALSE, results = "hide"}
suppressPackageStartupMessages(
suppressMessages(
install.packages("gapminder",
repos = "http://stat.ethz.ch/CRAN/",
quiet = TRUE))
)
```
```{r fig.height=4}
require(gapminder, quietly = TRUE)
ggplot(
data = gapminder, aes(x = lifeExp, y = gdpPercap)) +
geom_point(
aes(color = continent, shape = continent)) +
scale_y_log10()
```
## Example ggplot2 plot: box plot
```{r fig.height=4}
ggplot(InsectSprays,
aes(x = spray, y = count, fill = spray)) +
geom_boxplot()
```
## Example ggplot2 plot: violin plot
```{r, fig.height=4}
ggplot(InsectSprays,
aes(x = spray, y = count, fill = spray)) +
geom_violin()
```
## Example ggplot2 plot: scatter plot with a LOESS (locally weighted scatterplot smoothing) fit and 95% confidence interval band
```{r, fig.height=4}
qplot(speed, dist, data = cars,
geom = c("point", "smooth"))
```
# Machine learning
## Machine learning in R
- There are a huge number of packages available for machine learning in R
- See CRAN Machine Learning task view
- Rather than interacting with algorithms directly, it's easier to drive them from a framework that provides a unified interface for performing common operations
- These provide functions for data preprocessing (cleaning), data splitting (creating partitions and resampling), training and testing functions, and utilities for generating confusion matrices and ROC plots
- Two excellent frameworks for machine learning are the `caret` and `mlr` packages
- Another useful package is $\mathrm{H_{2}O}$, which includes algorithms implemented in Java (for speed) for deep learning and boosted decision trees, and has an interface to Spark for parallelised machine learning on large datasets --- well worth checking out!
## Machine learning in R
- `caret` and `mlr` are to R what scikit-learn is to Python
- These are powerful tools that I can't cover in the available time
- Instead, I'll show a few examples of what you can do with them
## Caret
Load data and partition for training and testing:
```{r}
library(mlbench, quietly = T); data(Sonar)
library(caret, quietly = T)
set.seed(42)
inTraining <- createDataPartition(
Sonar$Class, p = 0.75, list = FALSE)
training <- Sonar[ inTraining, ]
testing <- Sonar[ -inTraining, ]
```
## Caret
Create a grid of hyperparameters to tune over:
```{r}
gbmGrid <- expand.grid(interaction.depth = c(1, 5, 9),
n.trees = (1:30) * 50,
shrinkage = 0.1,
n.minobsinnode = 20)
```
## Caret
Create train control for 5-fold CV, train GBM:
```{r message = FALSE, results = "hide"}
fitControl <- trainControl(# 5-fold CV
method = "cv",
number = 5)
# Using R's formula language: Class ~ .
# fit Class using all features in the data
gbmFit <- train(Class ~ ., data = training,
method = "gbm",
trControl = fitControl,
tuneGrid = gbmGrid,
verbose = FALSE)
```
## Caret
Predict on held-out test data and generate confusion matrix:
```{r}
predictions <- predict(gbmFit, newdata = testing)
confusionMatrix(predictions, testing$Class)
```
## Caret
Examining the effect of hyperparameter settings:
```{r fig.height=5}
ggplot(gbmFit)
```
## Caret
Plot variable importance for top 10 variables:
```{r fig.height=6}
imp <- varImp(gbmFit); plot(imp, top = 10)
```
## mlr
A simple stratified cross-validation of linear discriminant analysis with `mlr`:
```{r message = FALSE, results = "hide"}
require(mlr)
data(iris)
# Define the task
task <- makeClassifTask(id = "tutorial",
data = iris,
target = "Species")
# Define the learner
lrn <- makeLearner("classif.lda")
# Define the resampling strategy
rdesc <- makeResampleDesc(method = "CV",
stratify = TRUE)
# Do the resampling
r <- resample(learner = lrn, task = task,
resampling = rdesc, show.info = FALSE)
```
## mlr
Results:
```{r}
# Get the mean misclassification error:
r$aggr
```
## Deep learning with $\mathrm{H_{2}O}$
```{r message = FALSE, warning = FALSE}
require(h2o)
h2o.no_progress()
localH2O <- h2o.init() # Initialise
```
## Deep learning with $\mathrm{H_{2}O}$
Load prostate cancer dataset, partition into training and test sets:
```{r message = FALSE, warning = FALSE, results = "hide"}
prosPath <- system.file("extdata",
"prostate.csv",
package = "h2o")
prostate.hex <- h2o.importFile(
path = prosPath,
destination_frame = "prostate.hex")
prostate.split <- h2o.splitFrame(data = prostate.hex,
ratios = 0.75)
prostate.train <- as.h2o(prostate.split[[1]])
prostate.test <- as.h2o(prostate.split[[2]])
```
## Deep learning with $\mathrm{H_{2}O}$
Train deep learning neural net with 5 hidden layers, ReLU with dropout, 10000 epochs, then predict on held-out test set:
```{r, echo=TRUE, message=FALSE, warning=FALSE}
model <- h2o.deeplearning(
x = setdiff(colnames(prostate.train),
c("ID","CAPSULE")),
y = "CAPSULE",
training_frame = prostate.train,
activation = "RectifierWithDropout",
hidden = c(10, 10, 10, 10, 10),
epochs = 10000)
predictions <- h2o.predict(model, prostate.test)
```
## Deep learning with $\mathrm{H_{2}O}$
Calculate AUC:
```{r}
suppressMessages(require(ROCR, quietly = T))
preds <- as.data.frame(predictions)
labels <- as.data.frame(prostate.test[2])
p <- prediction(preds, labels)
auc.perf <- performance(p, measure = "auc")
auc.perf@y.values
```
## Deep learning with $\mathrm{H_{2}O}$
Plot ROC curve:
```{r fig.height=6}
plot(performance(p,
measure = "tpr", x.measure = "fpr"),
col = "red"); abline(a = 0, b = 1, lty = 2)
```
# Reproducible research
## Quick live demo (if there is time)
## That's probably enough for now!
![*Thanks for listening!*](ml-cartoon.png)