Spatial data and analysis in R using terra

From CUOSGwiki
Revision as of 12:40, 22 March 2024 by Smitch (talk | contribs)


Spatial analysis and the handling of spatial data have long been powerful capabilities in R. Recently, however, the formerly dominant libraries providing these capabilities (including sp, raster, and rgdal) were declared obsolete and removed from current versions of R. Their replacements include terra and sf. This move has advantages: it removed a mix of incompatible tools, forced standardization, and improved the ability to work with larger, modern datasets. It does mean, however, that a lot of tutorials and other documentation on the Internet or in books about spatial data analysis in R is now also out of date.

This tutorial aims to help fix that situation. In the spirit of most of the tutorials on this site, it uses only open data, and while we focus on data centred on Carleton University, the same methods should work on similar data from anywhere else in the world. It was developed using R-4.3.1 within the RStudio environment, which will show up in screenshots, and a few of the setup instructions are specific to RStudio. Beyond that, the actual processing is demonstrated using console-based R commands, so it should work in any R environment.

This is written assuming that you have general computer / file management skills. We don't assume you already know how to use R, but we also don't go out of our way to explain it. A good general introduction is available at the RSpatial site. We also assume that you already know some basics of spatial data, such as what is meant by vector and raster data, projections, and coordinate reference systems.


Get data

To start with, we need some data. On your computer's local storage / hard drive, please create a new folder where you will store everything related to this exercise.

First, we will use some Landsat imagery, from the open archive provided by USGS.

To speed things up, I used Google Earth Engine to take advantage of its huge collection of open data and its tools, so I could quickly search for images free from cloud cover, then clip and export an image surrounding Carleton University. Here's a short tutorial on how to make use of Google Earth Engine to look for and download imagery. As long as you pick an area centred on downtown Ottawa or the region around Carleton University, the rest of this tutorial should work. While you are in Google Earth Engine, please take note of the unique identifier (ProductID) of the image that you obtain - you can do that by looking at the metadata printed to the console when you run the script in the tutorial. In my example, I obtained the GEE id LANDSAT/LC08/C02/T1_L2/LC08_015029_20190508; the last part of that (starting with LC08) is the ProductID.

However, if you're in a rush and want to stay focused on R, the file linked here contains an image I have already exported in GeoTIFF format.

Next, we will get some vector data from the City of Ottawa Open Data; there's a lot there you can explore, but to get started, we will download Parks and Greenspace data and Ottawa Neighbourhoods into the same folder you used above. Extract the contents of any .zip files.

Start R using RStudio and set environment

Launch RStudio on your computer (e.g. on a Windows computer, find it on the Start Menu; on a macOS computer, it's probably in your Applications folder). On the File menu, choose New Project. Choose the Existing Directory option, and then point it to the directory where you saved your data files, above. You should then have a blank RStudio project, set to use your data directory as the working directory for all work in R (you could use the getwd() command to double check this).

Try loading the terra library (we will discuss more about what this does below) by typing the following at the prompt in the RStudio Console, and pressing Enter:

library("terra")

If you get an error stating that the library was not found, you will need to install it in the local library collection on your machine, using the following command:

install.packages("terra")

That should download and install the library, so that it will be available for you from then on if you return to this same computer.

About the terra library

When you loaded the terra library, it added a number of classes to the R environment that support spatial data, along with functions to access, analyze, and visualize those data. The data class SpatVector holds vector data, and SpatRaster handles raster data (for more details, see RSpatial's documentation of SpatVector and SpatRaster).

When I wrote that SpatRaster "handles" raster data, I chose that verb carefully. An important advance in raster data handling compared to some older libraries is that the data do not all have to reside inside the R object. R typically holds all of its active data objects in the computer's working memory, and raster grids can be very large, so loading all the raster files you are working on into active memory can quickly fill it up. Instead, it is much more efficient to keep the raster data on your storage device (hard drive, network drive, etc.) and access only the parts of the grid that are needed. SpatRaster objects CAN have data stored within them, but in the majority of "real life" cases, most of the data stays in external files and the R object holds metadata like the coordinate reference system being used, the bounding coordinates, and the resolution of the grid.
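You can see this distinction yourself with terra's inMemory() and sources() functions. A minimal sketch (the file name in the comments is just the example image that appears later in this tutorial):

```r
library(terra)

# A small raster created directly in R lives entirely in memory
r <- rast(nrows = 10, ncols = 10, vals = 1:100)
inMemory(r)   # TRUE: the cell values are held in RAM

# A raster opened from a file is usually NOT read into memory up front;
# terra records the metadata and reads cells on demand, e.g.:
# big <- rast("exportedImage2.tif")
# inMemory(big)  # typically FALSE
# sources(big)   # shows the file backing the object
```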

Reading spatial data into R

To load in the satellite image, use the following command at the R command prompt. Note that the RStudio development environment provides conveniences such as command completion on top of basic R capabilities: when you type the first few letters of a data object or function name and hit the Tab key, a list of potential matches appears; once you see the one that matches, you can just choose it (with the mouse or the up / down keys on the keyboard) instead of typing the whole name in. Similarly, once you open the brackets at the end of a function name, you can use the same completion mechanism to put R objects in as parameters to the function.

rawImage <- rast(choose.files())

The choose.files() function will cause the RStudio gui to pop up a window that lets you navigate to and select the input file, which in this case is the Landsat GeoTIFF downloaded above. Note that choose.files() is only available on Windows; on macOS or Linux, use file.choose() instead. Alternatively, you could have typed the file name (with quotes) in as the function's argument.

Now import the vector data:

parks <- vect(choose.files())

and then choose the file (I downloaded a shapefile, so chose the file ending in .shp; adjust as needed for the file format you chose).

And the same for the neighbourhoods data:

nbrhds <- vect(choose.files()) 

(this time I tried a GeoJSON file, which was nice since it was a single file download)

Investigating the data

Now let's take a look at what we've imported. Enter just the name of our image at the R command prompt (and press Enter):

rawImage

You should see output something like:

> rawImage
class       : SpatRaster 
dimensions  : 1029, 1716, 19  (nrow, ncol, nlyr)
resolution  : 30, 30  (x, y)
extent      : 425535, 477015, 5002485, 5033355  (xmin, xmax, ymin, ymax)
coord. ref. : WGS 84 / UTM zone 18N (EPSG:32618) 
source      : exportedImage2.tif 
names       : SR_B1, SR_B2, SR_B3, SR_B4, SR_B5, SR_B6, ... 
min values  :    16,    79,  4493,    ? ,    ? ,    ? , ... 
max values  : 54928, 53621, 58744,    ? ,    ? ,    ? , ... 

Let's look at the images. If you simply enter the following command, R will cram the bands into the output plot window (by default, terra plots at most the first 16 layers of a multi-layer raster):

plot(rawImage)
The first 16 Landsat bands plotted

If you wanted to look at just the NIR band, using 256 shades of grey, you could use this:

plot(rawImage$SR_B5, col=gray(0:255 / 255))
Band 5 greyscale

How about a visual composite (assigning the "true colour" red, green, and blue measurements to their respective colours on the display)?

plotRGB(c(rawImage$SR_B4, rawImage$SR_B3, rawImage$SR_B2), stretch="lin")
Visual composite

Analysis

Now let's imagine you were interested in measuring relative amounts or vigour of vegetation in different parts of Ottawa. We will start with a neighbourhood-based analysis. Then we'll compare that to one restricted to parks and greenspaces, to see what kinds of differences we find. We will use NDVI as an index of vegetation vigour and photosynthetic biomass.


Image algebra to calculate NDVI

NDVI <- ((rawImage$SR_B5 - rawImage$SR_B4)/ (rawImage$SR_B5 + rawImage$SR_B4))
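This band algebra operates cell by cell. To see it at work on a toy grid (synthetic 2x2 "NIR" and "red" layers standing in for the real bands):

```r
library(terra)

# Toy 2x2 stand-ins for the NIR and red bands
nir <- rast(nrows = 2, ncols = 2, vals = c(0.5, 0.4, 0.3, 0.2))
red <- rast(nrows = 2, ncols = 2, vals = c(0.1, 0.1, 0.1, 0.1))

ndvi <- (nir - red) / (nir + red)
values(ndvi)  # per-pixel values; e.g. first cell: (0.5 - 0.1) / (0.5 + 0.1)
```

NDVI is always bounded between -1 and 1, which makes the histogram below easy to interpret.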

I can check the distribution of NDVI across the image with this:

hist(NDVI, xlab="NDVI", ylab="Frequency")

And make a map of it:

plot(NDVI, col=rev(terrain.colors(10)), main="NDVI")

Image statistics (zonal)

Now we will find out the average NDVI by neighbourhood. While there is an extract() function that could tell us about the distribution of NDVI in each neighbourhood polygon, I was getting errors that didn't make sense when I tried to run it across the city. Then I noticed that the help page for extract() mentions that rasterizing the polygons and then using zonal() is a more "efficient" alternative. I tried it and it worked, so I suspect that efficiency includes use of RAM, and I was probably running out of memory once I tried it across a significant spatial extent.

I did learn through trial and error that I really needed to set up the spatial layers so they aligned perfectly to make this work, though, so that's the reason for the following transformations:

rast_nbrhds <- rast(NDVI) # create an empty raster with same spatial qualities (CRS, resolution, extent) as NDVI
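Calling rast() on an existing SpatRaster copies its geometry but not its values, which is exactly what we need for a template. A quick check with a toy raster:

```r
library(terra)

# rast(x) on an existing SpatRaster copies its geometry, not its values
r <- rast(nrows = 4, ncols = 4, vals = 1:16)
template <- rast(r)

hasValues(template)       # FALSE: geometry only, no cell values
compareGeom(r, template)  # TRUE: extent, resolution, and CRS all match
```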

What is the CRS for NDVI?

crs(NDVI)

At the end of that output, you should see that the EPSG code for the CRS is 32618, so we do this to create a version of the neighbourhood polygons that is reprojected to match:

nbrhds_proj <- project(nbrhds, "EPSG:32618") # reproject Neighbourhoods to UTM zone 18N

Finally, rasterize the neighbourhoods polygons, using the "ONS_ID" field to populate the raster cells:

nbrhdIDrast <- rasterize(nbrhds_proj, rast_nbrhds, "ONS_ID")
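To see what rasterize() does with a field name, here is a toy version: a 4x4 grid spanning 0-4 in x and y, and one polygon covering its left half, carrying a made-up ONS_ID attribute:

```r
library(terra)

# A 4x4 template grid and a polygon covering its left half
template <- rast(nrows = 4, ncols = 4, xmin = 0, xmax = 4, ymin = 0, ymax = 4)
poly <- vect("POLYGON ((0 0, 2 0, 2 4, 0 4, 0 0))")
poly$ONS_ID <- 3001

zones <- rasterize(poly, template, "ONS_ID")
values(zones)  # cells under the polygon hold 3001; all others are NA
```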

I originally thought I needed to crop the neighbourhoods layer because some of them extended outside the extent of the image I had created, and that works, using the crop() function. But it turns out that was redundant, because the above steps accomplish the cropping already, by virtue of the fact that rast_nbrhds is defined to match the extent of NDVI. So I just mention it here so that you're aware of the crop() function if you ever DO need such a tool.

Let's take a look to make sure this has accomplished what we think it did:

plot(nbrhdIDrast)
lines(nbrhds_proj)
Rasterized Neighbourhoods

You should see a map with the neighbourhood polygon boundaries, "coloured in" using the rasterized neighbourhood ID pixels. If your image boundaries are smaller than the Ottawa boundary, some neighbourhoods will be cut off at the edges of the map, but since we're using a pixel-by-pixel extraction, that will work fine for this analysis (with the proviso that in border cases, we're not looking at the full neighbourhood). Exercise for the reader - how could this workflow be modified to make sure we only look at full neighbourhoods?
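One possible answer to that exercise (a sketch, not the only approach): keep only the polygons that lie entirely within the image extent, using terra's relate() with a "within" test. The extent and polygons below are toy stand-ins, not the real neighbourhoods:

```r
library(terra)

# Toy stand-ins: a square "image extent" and two polygons, one fully inside it
img_ext  <- as.polygons(ext(0, 10, 0, 10))
inside   <- vect("POLYGON ((2 2, 4 2, 4 4, 2 4, 2 2))")
straddle <- vect("POLYGON ((8 8, 12 8, 12 12, 8 12, 8 8))")
polys <- rbind(inside, straddle)

keep <- relate(polys, img_ext, "within")  # logical matrix, one row per polygon
full_only <- polys[keep[, 1], ]
nrow(full_only)  # only the fully contained polygon remains
```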

Finally, let's extract the average NDVI per neighbourhood:

NDVIbyNbrhd <- zonal(NDVI, nbrhdIDrast, "mean", na.rm=TRUE)

That produces a data frame:

> zonal(NDVI, nbrhdIDrast, "mean", na.rm=TRUE)
   ONS_ID     SR_B5
1     3001 0.2718333
2     3002 0.2549597
3     3003 0.1818396
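To make it clear what zonal() computes, here is the same operation on a toy pair of rasters, with two zones and a hand-checkable mean for each:

```r
library(terra)

# A value raster and a zone raster of matching geometry
vals  <- rast(nrows = 2, ncols = 2, vals = c(0.1, 0.3, 0.5, 0.7))
zones <- rast(nrows = 2, ncols = 2, vals = c(1, 1, 2, 2))

zonal(vals, zones, "mean")  # a data frame: zone 1 -> mean 0.2, zone 2 -> 0.6
```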

Join tables

The column names of that table could use fixing; ONS_ID is fine, but the second column inherited its name from a default assignment back when the NDVI image was created from the original Landsat bands (in this case Band 5, I assume because that was the first object referenced in the assignment equation that created the NDVI object). That's easy to fix:

names(NDVIbyNbrhd)[2] <- "NDVI"

If you look at the NDVIbyNbrhd object now, you should see that the second column has been renamed to NDVI.

Now let's join that to the full neighbourhoods vector layer's attribute table, using merge():

newNbrhds <- merge(nbrhds_proj, NDVIbyNbrhd)
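merge() joins on the column name the two tables share - here, ONS_ID. A self-contained sketch with two toy polygons and a made-up stats table:

```r
library(terra)

# Two toy polygons with an ID attribute, and a stats table keyed on the same ID
v <- vect(c("POLYGON ((0 0, 1 0, 1 1, 0 1, 0 0))",
            "POLYGON ((1 0, 2 0, 2 1, 1 1, 1 0))"))
v$ONS_ID <- c(3001, 3002)
stats <- data.frame(ONS_ID = c(3001, 3002), NDVI = c(0.27, 0.25))

v2 <- merge(v, stats)  # joins on the shared column name, ONS_ID
v2$NDVI                # each polygon now carries its NDVI value
```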

So... which neighbourhood has the highest average NDVI on my image date?

> newNbrhds[newNbrhds$NDVI == max(newNbrhds$NDVI)]$ONS_Name
[1] "NAVAN - SARSFIELD"

Exporting data

Finally, let's export the resulting neighbourhoods data and inspect it in a GIS package. I've decided here to write out the new neighbourhoods layer into a GeoPackage:

writeVector(newNbrhds, filename="NeighbourhoodNDVI.gpkg", filetype="GPKG")
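If you don't have a GIS package handy, you can verify the export from within R by reading the file back with vect(). A round-trip check with a toy layer written to a temporary file:

```r
library(terra)

# Write a small layer to a GeoPackage, then read it back
v <- vect("POLYGON ((0 0, 1 0, 1 1, 0 1, 0 0))", crs = "EPSG:32618")
v$NDVI <- 0.27
f <- file.path(tempdir(), "check.gpkg")
writeVector(v, filename = f, filetype = "GPKG", overwrite = TRUE)

back <- vect(f)  # reading it back confirms the attribute survived the export
back$NDVI
```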

And when I look at the file in QGIS, I see that indeed the NDVI attribute is now populated:

Screenshot of Neighbourhoods in QGIS showing NDVI attribute