search.json
[
{
"objectID": "portfolio.html",
"href": "portfolio.html",
"title": "Portfolio",
"section": "",
"text": "This page will be updated soon.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nmaps!\n\n\ncartography\n\n\n\nArcGIS\n\n\nR\n\n\n\nhere are some maps I have made\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npubs!\n\n\nscientific writing\n\n\n\nremote sensing\n\n\nglaciology\n\n\nthermal limits\n\n\nmellitology\n\n\n\nonly a co-author at the moment, but working on a manuscript!\n\n\n\n\n\n\n\n\n\nNo matching items"
},
{
"objectID": "portfolio/publications/index.html#agu-2024",
"href": "portfolio/publications/index.html#agu-2024",
"title": "pubs!",
"section": "AGU 2024",
"text": "AGU 2024"
},
{
"objectID": "portfolio/publications/index.html#fire-and-natural-disturbance-2023",
"href": "portfolio/publications/index.html#fire-and-natural-disturbance-2023",
"title": "pubs!",
"section": "Fire and Natural Disturbance 2023",
"text": "Fire and Natural Disturbance 2023"
},
{
"objectID": "portfolio/publications/index.html#graphical-abstract-for-publication",
"href": "portfolio/publications/index.html#graphical-abstract-for-publication",
"title": "pubs!",
"section": "Graphical abstract for publication",
"text": "Graphical abstract for publication"
},
{
"objectID": "index.html",
"href": "index.html",
"title": "Wesley Rancher",
"section": "",
"text": "I’m a master’s student at the University of Oregon, where I use spatial data science to tackle questions about climate change and forest ecology. For example, my thesis focuses on mapping and modeling above-ground carbon density in Interior Alaska with Landsat imagery, random forests, and a forest simulation model.\nOn this site, you’ll find:\n\nReproducible code examples for manipulating spatial data.\nTeaching material I’ve developed to help students apply introductory remote sensing concepts.\nMaps, posters, and publications as my form of a portfolio.\n\n\n\nM.S. in Geography (in progress) University of Oregon | Eugene, OR\nB.A. in Environmental Studies, Geography, and Philosophy Ohio Wesleyan University | Delaware, OH"
},
{
"objectID": "index.html#education",
"href": "index.html#education",
"title": "Wesley Rancher",
"section": "",
"text": "M.S. in Geography (in progress) University of Oregon | Eugene, OR\nB.A. in Environmental Studies, Geography, and Philosophy Ohio Wesleyan University | Delaware, OH"
},
{
"objectID": "code/RasterSieving/index.html",
"href": "code/RasterSieving/index.html",
"title": "Raster manipulation using terra",
"section": "",
"text": "This R script reads in NDWI images derived the blue-red/blue+red equation, converts them to binary images using a threshold from the literature, and then removes outlier pixels which are disconnected from large water bodies.\n\n# setting up\nlibrary(terra)\n\nterra 1.8.5\n\n\nJust working with one file in this example. But you can imagine reading in a list of files and performing this operation iteratively with a for loop.\n\nr <- rast(\"../ndwi_ice_small.tif\")\nplot(r)\n\n\n\n\n\n\n\n\nSo this is what the NDWI image looks like. Let’s visualize as lake vs non lake.\n\nlake_mask <- r > 0.25\nplot(lake_mask)\n\n\n\n\n\n\n\n\nThis looks good but if you look at some of the isolated yellow pixels, they would then be considered a lake pixel even if disconnected from the larger lake. So we remove those.\n\n# function to sieve\nlake_sieve <- function(ndwi_thres_raster) {\n # get connected components\n connected_comp <- patches(ndwi_thres_raster, directions = 4, zeroAsNA = TRUE)\n components <- unique(values(connected_comp), na.rm = TRUE)\n components <- components[components != 0 & !is.na(components)]\n cell_indices <- unique(values(connected_comp, na.rm = TRUE))\n # empty mask for valid lakes\n valid_lake_mask <- rast(ndwi_thres_raster)\n values(valid_lake_mask) <- 0\n min_pixel <- 10\n min_width <- 1\n #loop over each ndwi scene\n for (comp_id in components) {\n # get cell indices for the current cc\n cell_indices <- which(values(connected_comp) == comp_id)\n if (length(cell_indices) < min_pixel) next\n # convert cell indices to coordinates\n coords <- xyFromCell(ndwi_thres_raster, cell_indices)\n width <- length(unique(coords[, \"x\"]))\n height <- length(unique(coords[, \"y\"]))\n # check if the component meets width/height requirements\n if (width <= min_width || height <= min_width) next\n values(valid_lake_mask)[cell_indices] <- 1 #update valid lake\n }\n return(valid_lake_mask)\n}\n\nApply the function\n\nsieved_r <- lake_sieve(lake_mask)\nplot(sieved_r)\n\n\n\n\n\n\n\n\nLooking good!! This is just a qualitative sieving technique and could easily adapt to more stats-based approaches. This layer can now be used to mask other rasters to. In my approach, I mask images in the red and panchromatic wavelengths by this image to isolate lakes and apply radiative transfer equations."
},
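The entry above notes that the same threshold-and-sieve steps could be run over a whole folder of NDWI scenes. A minimal sketch of that loop, assuming placeholder input and output paths; lake_sieve() is the function defined in the entry.

library(terra)

ndwi_files <- list.files("path/to/ndwi", pattern = "\\.tif$", full.names = TRUE)  # assumed input folder

for (f in ndwi_files) {
  r <- rast(f)
  lake_mask <- r > 0.25                     # same literature threshold as in the entry
  sieved <- lake_sieve(lake_mask)           # lake_sieve() as defined above
  out_file <- file.path("path/to/output", paste0("sieved_", basename(f)))  # assumed output folder
  writeRaster(sieved, out_file, overwrite = TRUE)
}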
{
"objectID": "code/HistogramRasterBands/index.html",
"href": "code/HistogramRasterBands/index.html",
"title": "Graphing raster distribution",
"section": "",
"text": "Histograms are the best way to get a glimpse of data, no? This code is useful if you have a multiband raster stack and want to plot the data distribution on a normalized scale. My output from this got a lot of thumbs ups at AGU this year.\n\nlibrary(terra)\n\nterra 1.8.5\n\nlibrary(tidyverse)\n\n── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──\n✔ dplyr 1.1.4 ✔ readr 2.1.5\n✔ forcats 1.0.0 ✔ stringr 1.5.1\n✔ ggplot2 3.5.1 ✔ tibble 3.2.1\n✔ lubridate 1.9.4 ✔ tidyr 1.3.1\n✔ purrr 1.0.2 \n\n\n── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──\n✖ tidyr::extract() masks terra::extract()\n✖ dplyr::filter() masks stats::filter()\n✖ dplyr::lag() masks stats::lag()\nℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors\n\nlibrary(ggridges)\nlibrary(dplyr)\nlibrary(RColorBrewer)\n\n# path to your multiband raster\npath <- \"../FullComp_Dalton_2015.tif\"\n\n\n#example file\nr <- rast(path)\nband_names <- names(r)\nnormalize_layer <- function(layer) {\n min_val <- min(layer[], na.rm = TRUE)\n max_val <- max(layer[], na.rm = TRUE)\n normalized <- (layer - min_val) / (max_val - min_val)\n return(normalized)\n}\n\nnormalized_rasters <- lapply(1:nlyr(r), function(i) {\n band <- r[[i]]\n normalized_band <- normalize_layer(band)\n return(normalized_band)\n})\n\n# stack the normalized rasters\nnormalized_r <- rast(normalized_rasters)\n\nConvert raster to df and plot it:\n\n#convert to df\ndf <- as.data.frame(normalized_r, xy = FALSE, na.rm = TRUE)\ndf_long <- df %>%\n pivot_longer(cols = everything(), \n names_to = \"Band\", \n values_to = \"Value\")\ndf_long_sampled <- df_long %>%\n sample_n(500)\n\n#ridge plot\n#display.brewer.all()\ncolors <- colorRampPalette(brewer.pal(11, \"Paired\"))(57) # 57 colors\n\nplt <- ggplot(df_long_sampled, aes(x = Value, y = Band, fill = Band)) +\n geom_density_ridges_gradient(scale = 10, rel_min_height = 0.002, linewidth = 0.35) +\n scale_fill_manual(values = colors)+\n #scale_fill_viridis_d(option = \"plasma\") + \n theme(\n axis.title = element_blank(), \n axis.text.x = element_blank(),\n axis.text.y = element_text(size = 12, color = \"black\", face = \"bold\"),\n axis.ticks = element_blank() \n )\n#plt\n\nAny band denoted with _1 is derived from a spring composite, _2, a summer composite, and _3, a fall composite. Hence, the color grouping in the plot. For context, these are layers that are used as inputs into a random forest model prior to remove correlated variables."
},
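The entry above ends by noting that these layers feed a random forest model after correlated variables are removed. A rough sketch of that screening step, assuming the wide data frame df built in the entry (one column per normalized band); the 0.9 cutoff is illustrative, not the value used in the actual analysis.

# pairwise correlations between the normalized bands
cor_mat <- cor(df, use = "pairwise.complete.obs")

# consider each pair of bands once, then flag bands involved in a high correlation
cutoff <- 0.9
cor_mat[upper.tri(cor_mat, diag = TRUE)] <- 0
drop_bands <- names(which(apply(abs(cor_mat) > cutoff, 2, any)))

df_reduced <- df[, setdiff(names(df), drop_bands), drop = FALSE]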
{
"objectID": "instructions/LabThree/index.html",
"href": "instructions/LabThree/index.html",
"title": "Lab three instructions",
"section": "",
"text": "We will use QGIS for this lab and will install the SAGA NextGen plugin and Dzetsaka.\n\nInside QGIS select the plugins tab at the top.\nManage and install plugins\nMake sure you are on the “All” tab and search for SAGA. It is a blue logo: “Processing Saga NextGen Provider. Select it and install.\nNow search for dzetsaka and install\nYou can close the pop-up\n\nPlease let me know if you have trouble getting these tools installed!\nBe sure to start a QGIS project and save it. You will not need to install or load the tool your next time working on the machine. You will need to install it if you work on a new computer next time.\n\n\n\nYou will download Landsat imagery on EarthExplorer of the Mt St Helens area, before and after the 1980 eruption to answer a brief research question of something that interests you (for example: how long did it take for vegetation recovery after the eruption?). The research question will guide the image dates and number of images you download. It’s been almost 45 years, so using Landsat you should be able and ask a pretty interesting question about this. If visible bands are not available you can download any raw bands that are available.\nYou will work image by image, and your task is to classify pixels in the image to match a classification schema of interest (i.e., landcover or forest type).\nYou will be using your image interpretation skills to generate training data\n\nAt this point you can navigate to EarthExplorer, download your Landsat images and then read on.\n\nThis lab will be divided into three parts.\nPart one: Unsupervised classification.\n\nThe tool you are going to run in QGIS will basically group pixels that have similar spectral characteristics to classes or clusters without any additional information from the user (you).\nFor example: pixels that have really high NIR values could be grouped as one cluster because maybe these pixels have vegetation in them.\n\nPart two: Supervised classification.\n\nThe tool will take additional information (training data) and then it will look at the spectral characteristics of each pixel and group into classes accordingly. You are going to be the source of this training data (more on this below).\n\nPart three: Change detection\n\nYou will run the supervised classification algorithms for your remaining image dates.\n\n\n\n\nFirst, consider: what is it you would like to investigate about the landscape? How will you set up your classification schema? Create a classification key with class, value, and color as columns for reference. You have to freedom over what this looks like and it is purely for reference. Use excel, notepad, or word create one. Here is mine:\n\n\n\nClass\nValue\nColor\n\n\n\n\nHealthy Forest\n1\nGreen\n\n\nUrban\n2\nGray\n\n\nWater\n3\nBlue\n\n\nGrass\n4\nYellow\n\n\nUnhealthy Forest\n5\nBrown\n\n\n\n\n\n\nPrepare the Data:\n\nLoad layers into QGIS. Just use one image date to start\nCreate a true color composite or false color composite for reference using the Build Virtual Raster Tool.\n\nRaster tab (top of the screen)\nMiscellaneous\nBuild Virtual Raster\nSelect input layers\nSet Resolution to highest\nPlace each input file in separate band\nRUN\n\nRemember that the virtual output you created does not automatically save.\n\nRight-click the virtual layer\nExport\nSave in the output data as a tif\nBe specific with how you name (i.e., FCC_1979.tif)\n\n\nDefine Area of Interest:\n\nCreate a polygon shapefile around Mt. St. 
Helens to use as a clipping mask.\nLayer tab (top of the screen)\nCreate layer\nNew shapefile layer\nFilename is up to you\nGeometry type: polygon\nRUN\nRight-click the shapefile in your layers tab\nSelect “Toggle editing”\nAdd polygon (see screenshot) then Click somewhere on the screen to begin drawing and complete the polygon by right-clicking\nRight-click and select toggle editing again to turn it off\n\n\nClip Raster by Mask:\n\nClip your images to the shapefile. The shapefile will act as a cookie cutter.\nSearch the processing toolbox for “Clip raster by mask layer”. If you don’t see the toolbox on the right side of QGIS press CTRL + ALT + T.\nInput layer: Landsat raw band raster (you will have to do this for each raw band raster instead of composites because the classification tool will not work for a composite image).\nMask layer: The shapefile you created\nClipped mask: Save this in your output data as a TIF (i.e., Landsat_1979_B5_CLIPPED.tif)\nRUN\nRinse and repeat for the other bands\n\nK-Means Clustering:\n\nSearch for K-means clustering for grids in your toolbox window\nInput the clipped rasters from above. Set the Number of Clusters based on your classification key (for example, 5 clusters for 5 land cover classes).\nAdjust the Iterations to control how many times the algorithm tries to classify pixels (usually around 10-20 iterations). More on the tool\nRun the tool. See the parameters I used:\n\n\nClassify the Output:\n\nAfter running the tool, the output will be a classified raster. #WOW\n\nDo a quick export of the clusters layer in your layers pane and save it as a tif. It won’t save if you reload QGIS because it is just a temporary file at the moment.\nOnce you save it, load it into your layers and change the symbology to apply a random color palette or directly adjust the colors according to your KEY!\n\nRight-click the output clusters layer\nProperties\nSymbology\nChange the render type to paletted/ unique values\nPress classify\nPress apply\nChange the colors to match your key (read below)\n\nNow, use a true color or false color composite as a reference (toggle your layer on and off) to visually inspect the output and match it with the classification key. Take a screenshot of your classified output after setting the colors as close to your key as possible. Do this to the best of your ability.\n\n\nSee if your output looks any better than this but remember this was unsupervised:\n\n\n\n\n\n\n\n\n\nCollect Training Data (take your pick of option 1 or option 2):\n\nOption 1: You can search the toolbox for a tool called random points in extent\n\n10 points minimum per class (so 50 points if you are going for 5 classes)\ninput extent: calculate from layer and use the shapefile from step 2 in the unsupervised classification section.\nYou can set a mimimum distance in between points. This is up to you.\nRight-click the points layer and select attribute table\nYou will need a value column and you will need to adjust the value of each point in your table depending on where it lies within a true or false color image. For this you can reference your key! 
You should also add a class column that will be the label (i.e., forest, grass, urban).\n\nOption 2: You can create a new points shapefile and manually place your points\n\nrecall step 2 of the unsupervised classification section except geometry type will be points\nOK\nChange the points symbology so it is a bright color and then start adding points\nNow toggle editing and add points using your true or false color as a reference\nEach point you add you have the option to set the value or id (for this you will refer to your key and assess what you see visually in true or false color). Essentially this is just a number you assign it 1-5 based on what class it falls in when you look at an image.\nRight-click the layer after your save your changes and turn off editing and select attribute table to see how this looks. Feel free to add more columns to match your key. You should also add a class column that will be the label (i.e., forest, grass, urban).\n\n\nConsiderations:\n\nThe first option is random so you might get some points that fall within vegetation pixels and way more that fall within water pixels so there is a bias\nThe second option is tedious but you have full control over placement. Shoot for at least 10 points per class, so 50 points in total if you are trying to classify the image into 5 distinct classes.\n\n\nHere is how they might be dispersed:\n\nInstall a dependency for dzetsaka\n\nThere is a python library we need in order to run classification using dzetsaka\nSelect the plugins tab and select python console\nAt the bottom of your screen type this and press enter\n\n\n\nimport pip\n\nNow type this and press enter\n\npip.main(['install', 'scikit-learn'])\n\nWARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.\nPlease see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.\nTo avoid this problem you can invoke Python with '-m pip' instead of running pip directly.\n\n\nRequirement already satisfied: scikit-learn in /Users/wancher/Documents/thesis/env/lib/python3.13/site-packages (1.6.1)\n\n\n\nRequirement already satisfied: numpy>=1.19.5 in /Users/wancher/Documents/thesis/env/lib/python3.13/site-packages (from scikit-learn) (2.2.2)\n\n\n\nRequirement already satisfied: scipy>=1.6.0 in /Users/wancher/Documents/thesis/env/lib/python3.13/site-packages (from scikit-learn) (1.15.1)\n\n\n\nRequirement already satisfied: joblib>=1.2.0 in /Users/wancher/Documents/thesis/env/lib/python3.13/site-packages (from scikit-learn) (1.4.2)\n\n\n\nRequirement already satisfied: threadpoolctl>=3.1.0 in /Users/wancher/Documents/thesis/env/lib/python3.13/site-packages (from scikit-learn) (3.5.0)\n\n\n\n[notice] A new release of pip is available: 24.3.1 -> 25.0\n[notice] To update, run: pip install --upgrade pip\n\n\n\n0\n\n\n\nTrain a model:\n\nIn the processing toolbox select dzetsaka\nClassification tool\nTrain algorithm. 
Here are the parameters used (you can you a composite image with this tool):\n\n\nPredictions from a trained model:\n\nOnce you have saved your models for each band you are ready to make predictions (I saved three models for bands 6, 5, and 4 from a Landsat 1 image)\nOpen the processing toolbox –> dzetsaka\nClassification tool\nPredict model\nInput raster: Landsat B4 clipped to start\nModel learned is my B4 model from the above step\nOutput raster is saved in output data folder as a tif\nConfidence raster is saved in output data folder as a tif with a unique name\nRUN\nChange layer symbology to match your key\n\n\n\n\n\n\n\n\nWhat is your research question?\nDid the unsupervised or supervised classification perform better and what makes you say so?\nWhat is the K-Means clustering algorithm doing? Check out google or the QGIS documentation for the tool\nWhat is the K nearest neighbors algorithm doing?\nWhat kinds of biases could be introduced in the training process?\nHow was your training data distributed spatially? Did you favor a particular area of the image?\nAround how many observations did you have for each class?\nWhat input raster bands did you use when predicting?\nInsert screenshots of the unsupervised result, and your supervised results with brief figure captions.\n\n\n\n\n\n\n\nIf you think back to lab 2; you performed change detection using R and indexed images. For example, NDVI change between 2019 to 2024.\nUsing your additional image dates you will run the supervised classification tool again, using the same training data.\nYou will perform post classification change detection, and this can be done a few different ways. For example we can obtain the count of pixels in each class and multiply this by pixel area to see how the surface area of a class changes over time. Or it could be framed in terms of class change (i.e., healthy forest to unhealthy forest). Just using your supervised classification maps for each image date you will toggle through them and write 2-3 sentences about the change you see visually.\nNow you can right-click each of your output supervised layers and select properties then histogram and compute histogram. Frequency (on the y axis is the pixel count and Pixel value is the class it belongs to). Take the frequency of the pixels in each class divided by the total to get proportion of the image that belongs to each class (this can then be multiplied by pixel area). Or you might find other ways to get pixel counts.\nThis last part will be up to you to create additional supervised images (however many help your research question), and then create some type of summary of the pixels in each class and how this has changed over time. Feel free to use google to find ways to get pixel counts of a raster or create attribute tables for a raster. Please submit some type of visualization of the change. Whether it is a graph or a map, don’t overburden yourself but try to do a little research about how people typically do this in QGIS or excel.\n\n\n\n\nWhat did you learn?\nWhat are the limitations?\nHow would you do this differently next time?"
},
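The change-detection step in the lab gets class pixel counts from the QGIS histogram tool; for anyone who prefers R, a small terra sketch of the same idea (hypothetical file names) counts pixels per class with freq() and multiplies by pixel area.

library(terra)

before <- rast("classified_1979.tif")   # placeholder supervised outputs
after  <- rast("classified_1984.tif")

summarize_classes <- function(r) {
  counts <- freq(r)                               # data frame: layer, value (class), count
  counts$area_m2 <- counts$count * prod(res(r))   # assumes a projected CRS with metre units
  counts
}

summarize_classes(before)
summarize_classes(after)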
{
"objectID": "instructions/LabThree/index.html#overview",
"href": "instructions/LabThree/index.html#overview",
"title": "Lab three instructions",
"section": "",
"text": "You will download Landsat imagery on EarthExplorer of the Mt St Helens area, before and after the 1980 eruption to answer a brief research question of something that interests you (for example: how long did it take for vegetation recovery after the eruption?). The research question will guide the image dates and number of images you download. It’s been almost 45 years, so using Landsat you should be able and ask a pretty interesting question about this. If visible bands are not available you can download any raw bands that are available.\nYou will work image by image, and your task is to classify pixels in the image to match a classification schema of interest (i.e., landcover or forest type).\nYou will be using your image interpretation skills to generate training data\n\nAt this point you can navigate to EarthExplorer, download your Landsat images and then read on.\n\nThis lab will be divided into three parts.\nPart one: Unsupervised classification.\n\nThe tool you are going to run in QGIS will basically group pixels that have similar spectral characteristics to classes or clusters without any additional information from the user (you).\nFor example: pixels that have really high NIR values could be grouped as one cluster because maybe these pixels have vegetation in them.\n\nPart two: Supervised classification.\n\nThe tool will take additional information (training data) and then it will look at the spectral characteristics of each pixel and group into classes accordingly. You are going to be the source of this training data (more on this below).\n\nPart three: Change detection\n\nYou will run the supervised classification algorithms for your remaining image dates."
},
{
"objectID": "instructions/LabThree/index.html#part-one",
"href": "instructions/LabThree/index.html#part-one",
"title": "Lab three instructions",
"section": "",
"text": "First, consider: what is it you would like to investigate about the landscape? How will you set up your classification schema? Create a classification key with class, value, and color as columns for reference. You have to freedom over what this looks like and it is purely for reference. Use excel, notepad, or word create one. Here is mine:\n\n\n\nClass\nValue\nColor\n\n\n\n\nHealthy Forest\n1\nGreen\n\n\nUrban\n2\nGray\n\n\nWater\n3\nBlue\n\n\nGrass\n4\nYellow\n\n\nUnhealthy Forest\n5\nBrown\n\n\n\n\n\n\nPrepare the Data:\n\nLoad layers into QGIS. Just use one image date to start\nCreate a true color composite or false color composite for reference using the Build Virtual Raster Tool.\n\nRaster tab (top of the screen)\nMiscellaneous\nBuild Virtual Raster\nSelect input layers\nSet Resolution to highest\nPlace each input file in separate band\nRUN\n\nRemember that the virtual output you created does not automatically save.\n\nRight-click the virtual layer\nExport\nSave in the output data as a tif\nBe specific with how you name (i.e., FCC_1979.tif)\n\n\nDefine Area of Interest:\n\nCreate a polygon shapefile around Mt. St. Helens to use as a clipping mask.\nLayer tab (top of the screen)\nCreate layer\nNew shapefile layer\nFilename is up to you\nGeometry type: polygon\nRUN\nRight-click the shapefile in your layers tab\nSelect “Toggle editing”\nAdd polygon (see screenshot) then Click somewhere on the screen to begin drawing and complete the polygon by right-clicking\nRight-click and select toggle editing again to turn it off\n\n\nClip Raster by Mask:\n\nClip your images to the shapefile. The shapefile will act as a cookie cutter.\nSearch the processing toolbox for “Clip raster by mask layer”. If you don’t see the toolbox on the right side of QGIS press CTRL + ALT + T.\nInput layer: Landsat raw band raster (you will have to do this for each raw band raster instead of composites because the classification tool will not work for a composite image).\nMask layer: The shapefile you created\nClipped mask: Save this in your output data as a TIF (i.e., Landsat_1979_B5_CLIPPED.tif)\nRUN\nRinse and repeat for the other bands\n\nK-Means Clustering:\n\nSearch for K-means clustering for grids in your toolbox window\nInput the clipped rasters from above. Set the Number of Clusters based on your classification key (for example, 5 clusters for 5 land cover classes).\nAdjust the Iterations to control how many times the algorithm tries to classify pixels (usually around 10-20 iterations). More on the tool\nRun the tool. See the parameters I used:\n\n\nClassify the Output:\n\nAfter running the tool, the output will be a classified raster. #WOW\n\nDo a quick export of the clusters layer in your layers pane and save it as a tif. It won’t save if you reload QGIS because it is just a temporary file at the moment.\nOnce you save it, load it into your layers and change the symbology to apply a random color palette or directly adjust the colors according to your KEY!\n\nRight-click the output clusters layer\nProperties\nSymbology\nChange the render type to paletted/ unique values\nPress classify\nPress apply\nChange the colors to match your key (read below)\n\nNow, use a true color or false color composite as a reference (toggle your layer on and off) to visually inspect the output and match it with the classification key. Take a screenshot of your classified output after setting the colors as close to your key as possible. 
Do this to the best of your ability.\n\n\nSee if your output looks any better than this but remember this was unsupervised:"
},
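One of the turn-in questions asks what the K-means algorithm is actually doing. Purely as an illustration (not a lab step), the sketch below clusters the band values of a clipped stack in R and maps the cluster IDs back onto the raster, which is conceptually what the SAGA tool does; the file name and cluster count are assumptions.

library(terra)

r <- rast("landsat_1979_clipped_stack.tif")   # placeholder stack of clipped bands

vals <- values(r)                   # one row per pixel, one column per band
ok <- complete.cases(vals)          # k-means cannot handle NA pixels

set.seed(1)
km <- kmeans(vals[ok, ], centers = 5, iter.max = 20)   # 5 clusters to match the key

cluster_vals <- rep(NA_real_, ncell(r))
cluster_vals[ok] <- km$cluster
clusters <- rast(r, nlyrs = 1)      # empty raster on the same grid
values(clusters) <- cluster_vals
plot(clusters)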
{
"objectID": "instructions/LabThree/index.html#part-two",
"href": "instructions/LabThree/index.html#part-two",
"title": "Lab three instructions",
"section": "",
"text": "Collect Training Data (take your pick of option 1 or option 2):\n\nOption 1: You can search the toolbox for a tool called random points in extent\n\n10 points minimum per class (so 50 points if you are going for 5 classes)\ninput extent: calculate from layer and use the shapefile from step 2 in the unsupervised classification section.\nYou can set a mimimum distance in between points. This is up to you.\nRight-click the points layer and select attribute table\nYou will need a value column and you will need to adjust the value of each point in your table depending on where it lies within a true or false color image. For this you can reference your key! You should also add a class column that will be the label (i.e., forest, grass, urban).\n\nOption 2: You can create a new points shapefile and manually place your points\n\nrecall step 2 of the unsupervised classification section except geometry type will be points\nOK\nChange the points symbology so it is a bright color and then start adding points\nNow toggle editing and add points using your true or false color as a reference\nEach point you add you have the option to set the value or id (for this you will refer to your key and assess what you see visually in true or false color). Essentially this is just a number you assign it 1-5 based on what class it falls in when you look at an image.\nRight-click the layer after your save your changes and turn off editing and select attribute table to see how this looks. Feel free to add more columns to match your key. You should also add a class column that will be the label (i.e., forest, grass, urban).\n\n\nConsiderations:\n\nThe first option is random so you might get some points that fall within vegetation pixels and way more that fall within water pixels so there is a bias\nThe second option is tedious but you have full control over placement. Shoot for at least 10 points per class, so 50 points in total if you are trying to classify the image into 5 distinct classes.\n\n\nHere is how they might be dispersed:\n\nInstall a dependency for dzetsaka\n\nThere is a python library we need in order to run classification using dzetsaka\nSelect the plugins tab and select python console\nAt the bottom of your screen type this and press enter\n\n\n\nimport pip\n\nNow type this and press enter\n\npip.main(['install', 'scikit-learn'])\n\nWARNING: pip is being invoked by an old script wrapper. 
This will fail in a future version of pip.\nPlease see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.\nTo avoid this problem you can invoke Python with '-m pip' instead of running pip directly.\n\n\nRequirement already satisfied: scikit-learn in /Users/wancher/Documents/thesis/env/lib/python3.13/site-packages (1.6.1)\n\n\n\nRequirement already satisfied: numpy>=1.19.5 in /Users/wancher/Documents/thesis/env/lib/python3.13/site-packages (from scikit-learn) (2.2.2)\n\n\n\nRequirement already satisfied: scipy>=1.6.0 in /Users/wancher/Documents/thesis/env/lib/python3.13/site-packages (from scikit-learn) (1.15.1)\n\n\n\nRequirement already satisfied: joblib>=1.2.0 in /Users/wancher/Documents/thesis/env/lib/python3.13/site-packages (from scikit-learn) (1.4.2)\n\n\n\nRequirement already satisfied: threadpoolctl>=3.1.0 in /Users/wancher/Documents/thesis/env/lib/python3.13/site-packages (from scikit-learn) (3.5.0)\n\n\n\n[notice] A new release of pip is available: 24.3.1 -> 25.0\n[notice] To update, run: pip install --upgrade pip\n\n\n\n0\n\n\n\nTrain a model:\n\nIn the processing toolbox select dzetsaka\nClassification tool\nTrain algorithm. Here are the parameters used (you can you a composite image with this tool):\n\n\nPredictions from a trained model:\n\nOnce you have saved your models for each band you are ready to make predictions (I saved three models for bands 6, 5, and 4 from a Landsat 1 image)\nOpen the processing toolbox –> dzetsaka\nClassification tool\nPredict model\nInput raster: Landsat B4 clipped to start\nModel learned is my B4 model from the above step\nOutput raster is saved in output data folder as a tif\nConfidence raster is saved in output data folder as a tif with a unique name\nRUN\nChange layer symbology to match your key\n\n\n\n\n\n\n\n\nWhat is your research question?\nDid the unsupervised or supervised classification perform better and what makes you say so?\nWhat is the K-Means clustering algorithm doing? Check out google or the QGIS documentation for the tool\nWhat is the K nearest neighbors algorithm doing?\nWhat kinds of biases could be introduced in the training process?\nHow was your training data distributed spatially? Did you favor a particular area of the image?\nAround how many observations did you have for each class?\nWhat input raster bands did you use when predicting?\nInsert screenshots of the unsupervised result, and your supervised results with brief figure captions."
},
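Not required for the lab, but if you want a quick sanity check on your training points before running dzetsaka, a short terra sketch (hypothetical file names, and a 'value' column as described above) pulls the band values under each point so you can see whether your classes look spectrally distinct.

library(terra)

img <- rast("landsat_1979_clipped_stack.tif")   # placeholder composite of clipped bands
pts <- vect("training_points.shp")              # placeholder points with a 'value' column

train <- extract(img, pts)      # ID column plus one column of band values per point
train$class <- pts$value        # attach the class labels from the attribute table

# mean band value per class gives a rough look at separability
aggregate(train[, names(img)], by = list(class = train$class), FUN = mean, na.rm = TRUE)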
{
"objectID": "instructions/LabThree/index.html#part-three",
"href": "instructions/LabThree/index.html#part-three",
"title": "Lab three instructions",
"section": "",
"text": "If you think back to lab 2; you performed change detection using R and indexed images. For example, NDVI change between 2019 to 2024.\nUsing your additional image dates you will run the supervised classification tool again, using the same training data.\nYou will perform post classification change detection, and this can be done a few different ways. For example we can obtain the count of pixels in each class and multiply this by pixel area to see how the surface area of a class changes over time. Or it could be framed in terms of class change (i.e., healthy forest to unhealthy forest). Just using your supervised classification maps for each image date you will toggle through them and write 2-3 sentences about the change you see visually.\nNow you can right-click each of your output supervised layers and select properties then histogram and compute histogram. Frequency (on the y axis is the pixel count and Pixel value is the class it belongs to). Take the frequency of the pixels in each class divided by the total to get proportion of the image that belongs to each class (this can then be multiplied by pixel area). Or you might find other ways to get pixel counts.\nThis last part will be up to you to create additional supervised images (however many help your research question), and then create some type of summary of the pixels in each class and how this has changed over time. Feel free to use google to find ways to get pixel counts of a raster or create attribute tables for a raster. Please submit some type of visualization of the change. Whether it is a graph or a map, don’t overburden yourself but try to do a little research about how people typically do this in QGIS or excel.\n\n\n\n\nWhat did you learn?\nWhat are the limitations?\nHow would you do this differently next time?"
},
{
"objectID": "instructions/LabOne/index.html",
"href": "instructions/LabOne/index.html",
"title": "Lab one instructions",
"section": "",
"text": "An Image is Worth at Least a Thousand Words\n\nSummary\nA useful tip when working with remote sensing data is to consider beginning your workflow by visualizing your data in true color. This can serve as reference prior to processing so that you can toggle the changes you make on and off and make semi-informed interpretations, at least at the beginning. There is SO much remote sensing data that is freely available but the go-to data source is often Landsat because of its historical record. It is also relatively easy to download Landsat imagery with different levels of processing (for example, surface reflectance vs top-of-atmosphere reflectance products vs analysis ready data), which can take a lot of steps out of the scientific process for us but means we have less influence how the processing is applied. Nonetheless the surface reflectance product from Landsat has been used in many studies, here is a good review paper.\nAs for data, I’ll provide USGS’s EarthExplorer for your reference, which is a simple, web-based archive of satellite and aerial imagery and indices from various sources. You will need to create an account to download data. You are not limited to EarthExplorer, there are other sources like The National Map, ArcGIS online, and Google Earth Engine (GEE). We will introduce (GEE) in future labs when we need more data.\nInside EarthExplorer, you can filter your data search spatially, temporally, and by cloud cover. Try to get logged on, drop a pin on or near Mount St Helens and define a date range and cloud cover percentage (lower cloud cover is great). [Check out this demo] for steps to download and visualize!\n\n\n\nFigure 1: True Color Composite of Landsat 8 (RED, GREEN, BLUE)\n\n\n\n\nSoftware and tools\nYou will want to get familiar with the GIS software and your file structure if you are not already. I typically start a file structure like this:\nrs_485/\n─ lab_one/\n├── input_data/ <– raw images, shapefiles\n├── output_data/ <– processed imagery, CSVs\n├── scripts/ <– useful in future labs\n├── venv/ <– useful in future labs\n└── writing/\nNote that data in future labs may be uploaded to the R drive.\nThis lab relies on QGIS which is an open-source platform that we will use for most of our visualization and image manipulation. However, examples of different approaches techniques using alternative software R, GEE and Python are provided in future labs.\n\n\n\nWhat you are going to submit\n\nA PDF documenting the steps you took to create a true color and or false image composites from raw satellite imagery. This can be as detailed as you want, but you must obtain imagery from 5 distinct satellite or aerial sources, display and manipulate each image in QGIS (documenting your methods).\nScreenshot of each image with brief description\nReview advantages and disadvantages and which datasets might be useful as we continue to study the region\nA brief reflection on the types of imagery you would use for specific studies\n\n\n\n\nFigure 2: False Color Composite of Landsat 8 (SWIR2, NIR, RED)\n\n\n\n\nCheck out this demo\nhttps://youtu.be/u-FCX4RjsxM"
},
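Lab one is done entirely in QGIS, but as an aside, roughly the same composite-and-display step can be sketched in R with terra (the band file names are placeholders, not files provided with the lab).

library(terra)

# stack individual raw bands into one composite, like Build Virtual Raster in QGIS
bands <- c("LC08_B4.tif", "LC08_B3.tif", "LC08_B2.tif")   # red, green, blue placeholders
composite <- rast(bands)

# display as a true color image (layer order here: red, green, blue)
plotRGB(composite, r = 1, g = 2, b = 3, stretch = "lin")

# writeRaster(composite, "TCC_2024.tif")   # save it if you want to reuse the composite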
{
"objectID": "teaching.html",
"href": "teaching.html",
"title": "Lab exercises",
"section": "",
"text": "We will work through various standard remote sensing techniques in these labs. However, we will be focusing on one area of interest: Mount St Helens. This will enable us to tell a story using remote sensing methods while fine tuning our skills.\nLabs might be updated up until their open date! Feel free to add to this collaborative Spotify playlist so we have some tunes while we work.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nLab one instructions\n\n\n\n\n\n\nQGIS\n\n\ncomposite-bands\n\n\nEarthExplorer\n\n\n\n\n\n\n\n\n\nDec 19, 2024\n\n\nWesley Rancher\n\n\n\n\n\n\n\n\n\n\n\n\nLab three instructions\n\n\n\n\n\n\nQGIS\n\n\nclassification\n\n\nk-means-clustering\n\n\n\n\n\n\n\n\n\nDec 19, 2024\n\n\nWesley Rancher\n\n\n\n\n\n\n\n\n\n\n\n\nLab four instructions\n\n\n\n\n\n\nR\n\n\nQGIS\n\n\nLiDAR\n\n\n\n\n\n\n\n\n\nFeb 18, 2024\n\n\n\n\n\n\n\n\n\n\n\n\nLab two instructions\n\n\n\n\n\n\nR\n\n\nimage-enhancement\n\n\n\n\n\n\n\n\n\nJan 5, 2024\n\n\nWesley Rancher\n\n\n\n\n\n\nNo matching items"
},
{
"objectID": "instructions/LabTwo/index.html",
"href": "instructions/LabTwo/index.html",
"title": "Lab two instructions",
"section": "",
"text": "Summary\n\nThe objective of this lab is to enhance satellite imagery using a few different techniques. This lab assumes no prior coding experience and is commented thoroughly to explain what each line is doing. After setting things up it should run fairly smoothly but please ask if you need help!! I’m not trying to throw you to the wolves but learning R (among other tools), is an extremely useful skill in GIScience.\nThis script is available in the R drive as “enhance.qmd”. Copy it to your folder.\nAs for data, you have been provided images in the class R drive. Copy them to your project folder where you prefer to store data.\nWe will be working with Landsat 8/9 (Level 1 and 2) and Sentinel 2A imagery from 2019-2024. You should look up these satellites and sensors to clarify what the levels indicate and what information or bands might be present in the images.\nIn brief, you will start by getting acquianted with R and RStudio and figuring out how to read in data. You will then move to displaying your data and exploring summary statistics. This will give you a lay of the land, and then you can go on to enhancement techniques. These instructions will outline how to do this for a subset of the data (you will just need to replicate the steps for ALL of your data). The last step of this lab is to compare across datasets and across time and infer based on what you create (quantitatively and qualitatively).\n\n\n\nSetting up R\n\nTo run a line of code move your cursor to the line and press ctrl+Enter. To run a chunk of code press the green button at the top right of the chunk. If a a line starts with a #, it is a comment but it can also be used to prevent a line from running. See how I comment out install packages on line 31 since I have already installed this package on my computer.\n\nInstall terra and other packages. Terra is the main library that will let us work with spatial data\n\n#install.packages(\"terra\")\n#install.packages(\"RStoolbox\")\n#install.packages(\"ggplot2\")\n#install.packages(\"dplyr\")\n#install.packages(\"ggspatial\")\n#install.packages(\"tmap\")\n\nlibrary(terra)\nlibrary(RStoolbox)\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(ggspatial)\nlibrary(tmap)\n\nPaste this in the terminal to create an R project. This will make it easy to navigate around our folders in RStudio.\n\n#replace with the path to your lab 2 folder\n#cd /path/to/files/\n#touch lab-one.Rproj\n\nLet’s figure out where we are and get where we need to be\n\n#this prints the working directory\ngetwd()\n\n#this sets the working directory\n#setwd(\"/Users/wancher/Documents/rs_485/input_data/\")\n#setwd(\"D:/RemoteSensingLabs/input_data/\")\n\n\n\nReading in data\nRead in a single image\n\n# the arrow is the same as =\n# replace with your file name\ndir <- \"/Users/wancher/Documents/rs_485/input_data/\"\nr <- rast(paste0(dir,\"landsat_panchro_2023.tif\"))\nr\n\nThis is how you can plot something\n\nplot(r)\n\nPrint summary statistics and plot a histogram\n\nsummary(r)\nhist(r)\n\nThose are the basics of how to read an inspect an image. You can read in your other files the same way. Something like this…\n\n#sentinel_b2 <- \"landsat8_2024_rawbands.tif\"\n#sentinel_b3 <- \"sentinel_2019_rawbands.tif\"\n\nOR\nWe can read in all of the tiff files from our current working folder and store them in a list\n\ndir <- \"/Users/wancher/Documents/rs_485/input_data\"\nlist_of_files <- list.files(dir, pattern=\"*.tif\", full.names = TRUE)\nlist_of_files\n\nThis is just a list of the file paths. 
So let’s convert it to list of rasters or SpatRasters as terra puts it\n\n#lapply means list apply. So perform terra's rast function on a list of items\nlist_of_rasters <- lapply(list_of_files, rast)\n\n#sometimes the names don't transfer properly so you can change them if needed\nnames(list_of_rasters) <- list_of_files\n\n\n\nFor loops\nWe can write a for loop to do things in iteration. Let’s say we want to plot each image in our list of rasters.\n\n#for each each raster in the sequence, do x thing...\nfor (i in seq_along(list_of_rasters)){\n one_image <- list_of_rasters[[i]]\n \n #store the name of the image\n filename <- basename(sources(one_image))\n plot(one_image, main = paste0(filename))\n rm(one_image)\n}\n#this might take a second\n\n\n\nDisplaying data iteratively\nNow… let’s plot in true color using the plotrgb function\n\n#just one image and band\nplot(list_of_rasters[[1]]) #here I am just reaching into the list of rasters and grabbing the 2nd item and Band 3\nnames(list_of_rasters[[1]])\n\n#just one image in true color\nplotRGB(list_of_rasters[[7]], r=4, g=3, b=2, stretch = \"lin\")\n\nLet’s do the same thing using a for loop over the entire list of rasters. You can decide if you want to use something like a for loop for the later enhancement techniques or if you would prefer to write things out line by line.\n\n#iterate \nfor (raster in list_of_rasters){\n #if there is more than one band in the raster then...\n if (nlyr(raster) > 1){\n filename <- varnames(raster)\n \n plotRGB(raster, r=7, g=2, b=4, \n stretch = \"lin\",\n smooth = TRUE,\n main = paste0(\"true color: \", filename))\n \n #otherwise if the image only has one band\n } else {\n filename2 <- varnames(raster)\n plot(raster, main = paste0(\"b8: \", filename2), stretch = \"lin\")\n }\n}\n\nSECTION TURN IN\nQuestion 1: What bands are needed to make a true color image for Landsat 8 and 9?\nQuestion 2: Did the Sentinel images plot in true color? What bands are needed for true color with Sentinel 2A?\nQuestion 3: Is the “plotRGB” function in the above chunk the same as the Build Virtual Raster tool in QGIS? yes/no/why?\nQuestion 4: What are different types of stretch methods within the PlotRGB function?\nScreenshots: Answers to questions above alongside 3 screenshots. Select one year between 2019 and 2024 and submit a screenshot of the corresponding images in that year. (Should be a landsat true color, sentinel true color, and landsat panchromatic). If your images did not display in true color, you should tweak the arguments in the plotRGB function!\n\n\nPansharpening\n\n#print the spatial resolution\nres(list_of_rasters[[1]])\nres(list_of_rasters[[7]])\nres(list_of_rasters[[14]])\n\n#you'll notice that there is a difference in spatial resolution between the landsat raw bands, landsat panchromatic, and sentinel. Since the goal here is to compare across Landsat and Sentinel we will downsample our landsat rawbands to the higher resolution of the panchromatic band (from 30m down to 15m).\n\n#as an example\npanchro_test <- list_of_rasters[[1]]\nlandsat_rawbands_test <- list_of_rasters[[7]]\n\n#be sure to specify the correct bands\nlandsat_rawbands_sharpened <- panSharpen(landsat_rawbands_test, panchro_test, \n r = 5, g = 4, b = 3, method = \"brovey\")\n\nSECTION TURN IN\nQuestion 1: What do you think of the result? What happens if you change the method?\nQuestion 2: What is the spatial resolution of the panchromatic data compared to the raw bands of Landsat? 
What about the spatial resolution of Sentinel?\nQuestion 3: What units of resolution are your images in?\nQuestion 4: Answers to questions and pansharpen one of your Landsat images and submit a screenshot of the result, with a figure caption. I’ll let you decide if you would like to pansharpen all of your Landsat images.\n\n\nContrast stretching\n\nJust as you would photoshop a photo to enhance the quality or color for sharing on social media or with friends and family, we can do the same thing in remote sensing.\nSince we know that images are just numbers, we can think of this process as stretching the image values towards the extremes of the data range.\n\nRename the bands so each operation only needs to be written once.\n\n#rename the bands so we can write a universal equation\nband_names_landsat <- c(\"aerosol\", \"blue\", \"green\", \"red\", \"nir\", \"swir1\", \"swir2\")\nband_names_sentinel <- c(\"blue\", \"green\", \"red\", \"rededge1\", \"rededge2\", \"rededge3\", \"nir\", \"rededge4\", \"swir1\", \"swir2\")\n\nrename_bands <- function(raster) {\n if (\"SR_B1\" %in% names(raster)) {\n names(raster) <- band_names_landsat\n return(raster)\n } else if (\"B2\" %in% names(raster)) {\n names(raster) <- band_names_sentinel\n return(raster)\n } else {\n return(NULL)\n }\n}\n\nlist_of_rasters_renamed <- lapply(list_of_rasters, rename_bands)\nlist_of_rasters_renamed <- list_of_rasters_renamed[-c(1:6)]#rm the panchro images\n\nStretch one image\n\ntest_image <- list_of_rasters_renamed[[12]]\nhist(test_image$red)#red for example\nplot(test_image$red)\n\n#the 5th and 80th percentiles of the raster (values which 5% and 80% of the raster fall under)\np_low <- quantile(values(test_image$red), 0.05, na.rm = TRUE)\np_high <- quantile(values(test_image$red), 0.80, na.rm = TRUE) \n\n#limit the range of the raster to these high and low values and change the scale to range from 0 to 1\nr_stretched <- clamp((test_image$red - p_low) / (p_high - p_low), lower = 0, upper = 1)\nplot(r_stretched$red)\nhist(r_stretched$red)\n\nSECTION TURN IN\nQuestions: What are your thoughts about the contrast stretch result? Did you try changing the percentile values?\nContrast stretch each band in each image\n\ncontrast_stretch <- function(raster_band){\n p_low <- quantile(values(raster_band), 0.05, na.rm = TRUE)\n p_high <- quantile(values(raster_band), 0.90, na.rm = TRUE) \n stretched_band <- clamp((raster_band - p_low) / (p_high - p_low), \n lower = 0, upper = 1)\n return(stretched_band)\n}\n\n#function to apply the stretch to each band and each raster\napply_stretch <- function(raster) {\n band_names <- names(raster)\n \n #apply stretch to each band\n stretched_bands <- lapply(band_names, function(band_name) {\n band <- raster[[band_name]]\n contrast_stretch(band)\n })\n #combine stretched bands into one raster\n stretched_raster <- rast(stretched_bands)\n names(stretched_raster) <- band_names#retain bandnames\n return(stretched_raster)\n}\n\nlist_of_stretched_rasters <- lapply(list_of_rasters_renamed, apply_stretch)\n\n\n\nCalculating vegetation indices\n\nYou can now calculate vegetation or other indices on your stretched images.\nSay you would like to look at vegetation health around Mount St Helens. You can look up things in google like “how do I compute NDVI for Landsat?” (assuming you have heard about NDVI), or “what is a good index to use when monitoring vegetation with Landsat?” What you will find is that remote sensing scientists have developed equations for answering these types of questions and it is pretty straightforward to apply if you have your data and software ready.\nIn R, we can use simple arithmetic operations to do this. For example, to compute the Normalized Difference Vegetation Index (NDVI), the equation is like so:\n\nNDVI = (NIR - RED) / (NIR + RED)\nLet’s grab an image and try it:\n\n#dollar sign indexing is what this is called \n#i like to follow good ole PEMDAS pretty closely when I do band math just to be cautious\ntest_ndvi <- (r_stretched$B8 - r_stretched$B4) / \n (r_stretched$B8 + r_stretched$B4)\nsummary(test_ndvi)\n\nhist(test_ndvi)\nplot(test_ndvi)\n\nVisualizing a different way\n\n#different color palette\nndvi_palette <- colorRampPalette(c(\"#FFFF00\", \"#FF0000\", \"#FF00FF\", \"#0000FF\", \"#639200\"))(100)\n\n#plot\nplot(test_ndvi, col = ndvi_palette, main = \"ndvi sentinel\")\n\n#compare this with true color\nplotRGB(r_stretched, r=3, g=2, b=1, stretch = \"lin\")\n\n#save and take it to qgis\n#writeRaster(test_ndvi, \"/Users/wancher/Documents/rs_485/output_data/ndvi_sentinel_2022.tif\")\n\nSECTION TURN IN\nScreenshots: Take a screenshot of the test NDVI you produced in the default color palette, and then define your own color palette (line 317) and submit another screenshot. Mention how this either helped or did not help your interpretation of the data.\nLet’s write a function to compute ndvi for each image and then apply it to the list of rasters.\n\n#write functions to calculate ndvi\ncalculate_ndvi <- function(raster){\n ndvi <- (raster$nir - raster$red) / (raster$nir + raster$red)\n names(ndvi) <- \"ndvi\"\n return(ndvi)\n}\nlist_of_ndvis <- lapply(list_of_stretched_rasters, calculate_ndvi)\n\n\n\nComparison\n\nYou are now going to create graphs to show your indices through time.\n\nHere is an example:\n\n#example of how to convert a raster to a dataframe\ndf <- as.data.frame(r) #not going to use it but this is the basic function you need (line 349)\n\n#function to convert rasters to dataframes\nraster_to_dataframe <- function(raster){\n source <- strsplit(varnames(raster), \"_\")[[1]][1] #split the filename by recognizing the underscore separator and grab the first element\n year <- strsplit(varnames(raster), \"_\")[[1]][3]#third element (which is year if you look at the filename)\n \n #average and variance\n df <- as.data.frame(raster, xy = TRUE) %>%\n summarise(Avg_NDVI = mean(ndvi, na.rm = TRUE),\n SD_NDVI = sd(ndvi, na.rm = TRUE))\n \n df$source <- source #add source column\n df$year <- as.numeric(year) #add year column\n \n return(df)\n}\n\n#convert the list of NDVIs to a list of dataframes so we can graph\nlist_of_dfs <- lapply(list_of_ndvis, raster_to_dataframe)\ncombined_df <- do.call(rbind, list_of_dfs) #rowbind the dataframes into one dataframe\n\n#if you would like to work in excel!\n#write.csv(combined_df, /path/to/output/folder/*.csv, row.names = FALSE)\n\nPlot\n\n#plotting through time using ggplot library\nggplot(combined_df, aes(x = year, y = Avg_NDVI, color = source)) +\n geom_smooth(size = 2) + \n geom_point(size = 3) + #add points\n #geom_errorbar(aes(ymin = Avg_NDVI - SD_NDVI, ymax = Avg_NDVI + SD_NDVI), width = 0.2) + \n labs(title = \"Normalized Difference Vegetation Index in AOI\",\n x = \"Year\",\n y = \"Average NDVI\",\n color = \"Satellite\") +\n theme_minimal()\n\nSECTION TURN IN\nQuestions: Your average NDVI value is an average of what exactly? What would you need to do if you wanted the average NDVI value in a particular spot in your image?\nScreenshots: Screenshot or figure with caption showing NDVI through time. Calculate at least 2 additional indices and make similar plots.\n\n\nDifference maps\n\nFor your last step you need to take the difference between 2024 and 2019 for each index and satellite.\n\nFor example: NDVI_Landsat_2024.tif - NDVI_Landsat_2019.tif\nHere is how I would do it for the NDVI image. Feel free to use the writeRaster function to export your ndvi images and make a layout in ArcGIS or QGIS if you would be more comfortable there. The ggplot layout is acceptable though!\n\nndvi_diff <- list_of_ndvis[[12]] - list_of_ndvis[[7]]\n\n#using tmap\nndvi_palette <- c(\"#a50026\", \"#d73027\", \"#f46d43\", \"#fdae61\", \n \"#66bd63\", \"#006837\")\ntm_shape(ndvi_diff) +\n tm_raster(midpoint = NA, style = \"pretty\", palette = ndvi_palette, title = \"Range\") +\n tm_layout(title = \"Sentinel NDVI Difference (2024 - 2019)\",\n legend.position = c(\"right\", \"bottom\"),\n legend.bg.color = \"white\", #legend background\n legend.frame = TRUE, #legend border\n legend.text.size = 1.2, \n legend.title.size = 1.4) +\n tm_compass(size = 2, position = c(\"right\", \"top\")) + #north arrow\n tm_scale_bar(text.size = 1, position = c(\"left\", \"bottom\")) #scale bar\n\nSECTION TURN IN\nScreenshots: Maps of index change between 2019 and 2024 for each index. Include a north arrow and scale bar. Use intuitive color palettes. Describe what we are looking at. As always, toggle it on and off with true color images to guide your interpretation.\nFINAL SUBMISSION:\nPlease submit…\n\nA PDF write-up answering the questions throughout with your screenshots attached\n\n#OR\n\nYou can submit this .QMD document rendered (see top of the script), with “eval=TRUE”. This will convert the document to an HTML and will run your code. If you select this method you can just type your responses directly beneath the questions. Please submit both the HTML and QMD if you go with this option. You may get an error which is sort of difficult to debug. So I would suggest writing up a PDF to be safe but know that this option exists."
},
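The lab asks for at least two additional indices beyond NDVI. A hedged sketch of what those could look like, following the same pattern as calculate_ndvi and the renamed band names used above (green, nir, swir2); confirm the band definitions for your own sensor before relying on them.

# Normalized Difference Water Index (McFeeters): (green - nir) / (green + nir)
calculate_ndwi <- function(raster){
  ndwi <- (raster$green - raster$nir) / (raster$green + raster$nir)
  names(ndwi) <- "ndwi"
  return(ndwi)
}

# Normalized Burn Ratio: (nir - swir2) / (nir + swir2)
calculate_nbr <- function(raster){
  nbr <- (raster$nir - raster$swir2) / (raster$nir + raster$swir2)
  names(nbr) <- "nbr"
  return(nbr)
}

list_of_ndwis <- lapply(list_of_stretched_rasters, calculate_ndwi)
list_of_nbrs <- lapply(list_of_stretched_rasters, calculate_nbr)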
{
"objectID": "code.html",
"href": "code.html",
"title": "Code Examples",
"section": "",
"text": "Mosaic rasters\n\n\n\n\n\n\nR\n\n\nGEE\n\n\nLandsat\n\n\nimage-processing\n\n\n\n\n\n\n\n\n\nDec 27, 2024\n\n\nWesley Rancher\n\n\n\n\n\n\n\n\n\n\n\n\nConstrast stretching\n\n\n\n\n\n\nR\n\n\nLandsat\n\n\nenhancement\n\n\n\n\n\n\n\n\n\nDec 19, 2024\n\n\nWesley Rancher\n\n\n\n\n\n\n\n\n\n\n\n\nRaster manipulation using terra\n\n\n\n\n\n\nR\n\n\nLandsat\n\n\nimage-processing\n\n\n\n\n\n\n\n\n\nDec 19, 2024\n\n\nWesley Rancher\n\n\n\n\n\n\nNo matching items"
},
{
"objectID": "code/MosaicRasters/index.html",
"href": "code/MosaicRasters/index.html",
"title": "Mosaic rasters",
"section": "",
"text": "library(terra)\n\nterra 1.8.5"
},
{
"objectID": "code/MosaicRasters/index.html#terra",
"href": "code/MosaicRasters/index.html#terra",
"title": "Mosaic rasters",
"section": "",
"text": "library(terra)\n\nterra 1.8.5"
},
{
"objectID": "code/MosaicRasters/index.html#directory",
"href": "code/MosaicRasters/index.html#directory",
"title": "Mosaic rasters",
"section": "Directory",
"text": "Directory\n\n# dir and outdir\nfile_dir <- \"/Users/wancher/Documents/weranch/code/tiffs/\"\n#setwd(file_dir)\n\n# read in the rasters \nlist_of_files <- list.files(file_dir, pattern = \"S1A-.*\\\\.tif$\", full.names = TRUE)"
},
{
"objectID": "code/MosaicRasters/index.html#commenting-out-the-for-loop-and-just-work-with-files-from-one-year-for-this-example",
"href": "code/MosaicRasters/index.html#commenting-out-the-for-loop-and-just-work-with-files-from-one-year-for-this-example",
"title": "Mosaic rasters",
"section": "Commenting out the for loop and just work with files from one year for this example",
"text": "Commenting out the for loop and just work with files from one year for this example\n\n# iterate over a sequence of years and pull out files specific to the year in the sequence\n# mosaics <- list()\n# years <- seq(2019, 2023)\n# years_string <- as.character(years)\n# for (i in seq_along(years_string)) {\n# \n# #pull out unique year\n# year <- years_string[[i]]\n# files_one_year <- list_of_files[grepl(year, list_of_files)]\n files_one_year <- list_of_files\n\n \n list_of_rasters <- lapply(files_one_year, function(file) {\n r <- rast(file)\n r\n })\n \n # get band names for retention\n #band_names <- names(list_of_rasters[[1]])\n #flattened_band_names <- unlist(band_names)\n \n # turn list in sprc and mosaic\n coll_of_rasters <- sprc(list_of_rasters)\n #print(paste0(year, \" start: \", Sys.time()))\n mosaiced_raster <- merge(coll_of_rasters) \n\n\n|---------|---------|---------|---------|\n=========================================\n \n\n #print(paste0(year, \" finish: \", Sys.time()))\n #names(mosaiced_raster) <- band_names\n #print(names(mosaiced_raster))\n #save it\n # output_filename <- paste0(file_dir, \"s1v-mosaic-\", year, \".tif\")\n # print(output_filename)\n # #writeRaster(mosaiced_raster, filename = output_filename, filetype = \"GTiff\", overwrite = TRUE)\n # mosaics[[i]] <- mosaiced_raster\n # rm(list_of_rasters, coll_of_rasters, mosaiced_raster)\n # gc()\n#}"
},
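For reference, here is a minimal sketch of what the year loop commented out above could look like when run end to end. It is not the author's exact original code: it assumes the year appears in each filename and reuses the file_dir and S1A pattern from the Directory step.

library(terra)

# read the S1A tiles, mosaic them year by year, and write each mosaic out
file_dir <- "/Users/wancher/Documents/weranch/code/tiffs/"
list_of_files <- list.files(file_dir, pattern = "S1A-.*\\.tif$", full.names = TRUE)

mosaics <- list()
for (year in as.character(2019:2023)) {
  files_one_year <- list_of_files[grepl(year, list_of_files)]  # assumes the year is in the filename
  if (length(files_one_year) == 0) next                        # skip years with no scenes
  rasters <- lapply(files_one_year, rast)                      # read each tile
  mosaics[[year]] <- merge(sprc(rasters))                      # collection -> single mosaic
  writeRaster(mosaics[[year]],
              filename = paste0(file_dir, "s1v-mosaic-", year, ".tif"),
              overwrite = TRUE)
}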
{
"objectID": "code/MosaicRasters/index.html#visualize",
"href": "code/MosaicRasters/index.html#visualize",
"title": "Mosaic rasters",
"section": "Visualize",
"text": "Visualize\n\n#without mosaicing\nfor (r in list_of_rasters){\n plot(r)\n}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n#after mosaic\nplot(mosaiced_raster, main = \"Summer 2023 C Band Composite\")"
},
{
"objectID": "code/ContrastStretching/index.html",
"href": "code/ContrastStretching/index.html",
"title": "Constrast stretching",
"section": "",
"text": "Let’s read in a raster and set a color palette to visualize by. This example file is a modified normalized difference water index image within interior Alaska.\n\nlibrary(terra)\n\nterra 1.8.5\n\n\n\ncolors <- colorRampPalette(c(\"white\", \"lightblue\", \n \"blue\", \"darkblue\", \"black\"))(100)\nr <- rast(\"../FullComp_Dalton_2015.tif\", lyr = 24)\nplot(r, col = colors)\n\n\n\n\n\n\n\n\nLooks great. Now if we want to increase the contrast between different pixel values. We can do this\n\n# contrast stretch\np_low <- quantile(values(r), 0.05, na.rm = TRUE)\np_high <- quantile(values(r), 0.95, na.rm = TRUE) \n\nr_stretched <- clamp((r - p_low) / (p_high - p_low), lower = 0, upper = 1)\nplot(r_stretched, col = colors)"
},
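If you want to reuse this, here is a small sketch that wraps the same percentile stretch in a helper function; the lower/upper arguments are just defaults to experiment with, and terra also ships a built-in stretch() if you prefer a simple min-max stretch.

library(terra)

# sketch: same percentile-based contrast stretch as above, wrapped as a reusable helper
stretch_percentile <- function(r, lower = 0.05, upper = 0.95) {
  p <- quantile(values(r), c(lower, upper), na.rm = TRUE)
  clamp((r - p[[1]]) / (p[[2]] - p[[1]]), lower = 0, upper = 1)
}

# usage, reusing r and colors from above
# r_stretched_2 <- stretch_percentile(r, lower = 0.02, upper = 0.98)
# plot(r_stretched_2, col = colors)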
{
"objectID": "portfolio/maps/index.html",
"href": "portfolio/maps/index.html",
"title": "maps!",
"section": "",
"text": "AOI map for thesis\n\n\n\nChange detection map idea\n\n\n\nChange detection reclass map of total carbon in the year 2100. Split by diagonal line in the middle. The left is no climate change the right is extreme climate change. This was a fun exercise of patience in R, attempting to draw a line at the right location with the right slope to mask my images by.\n\n\n\n\nLANDIS-II Output Maps\n\n\n\nComparison of number of simulated fires in LANDIS-II across three distinct climate scenarios. From top down: no climate change, moderate climate change, extreme climate change\n\n\n\n\nLandsat Composite\n\n\n\nSometimes imagery without context can be visually pleasing. This is a composite image within Interior Alaska where tasseled cap brightness is displayed in the red channel, greenness in green, and wetness in the blue channel."
},
{
"objectID": "cv.html",
"href": "cv.html",
"title": "Wesley Rancher",
"section": "",
"text": "Department of Geography, University of Oregon\nTerrestrial Ecosystems Ecology and Landscapes Lab\nEmail: [email protected] | Linkedin | github\n\n\n\nBiogeographer specializing in spatial data science and ecology, with a focus on understanding ecosystem dynamics and environmental change across diverse landscapes and time scales. Skilled in GIS, remote sensing, and spatial modeling, and passionate about leveraging these tools to address pressing environmental problems and communicate research findings to the public. Eager to contribute to research and industry initiatives that support data-driven climate solutions and enhance ecological and social resilience.\n\n\n\n\n\n\nM.S. in Geography\nGPA: 3.9\nAdvisor: Dr. Melissa Lucash\n\n\n\nB.A. in Environmental Studies and Geography\nMinor in Philosophy\nGPA: 3.5\nAdvisor: Dr. Nathan Rowley\n\n\n\n\n\n\nGIS & Remote Sensing: Advanced in ArcGIS, QGIS, and Google Earth Engine\n\nProgramming: Advanced in R and Python; proficient in Bash and AWS CLI\n\nDrone Operation & Image Processing: FAA Certified Remote UAS Pilot (#4802988); experience with Pix4D, Drone2Map, Agisoft; proficient in LiDAR and Dual Red-Edge sensors calibration\n\nLanguages: Working proficiency in Spanish\n\n\n\n\n\n\n\nSeptember 2023 – Present\nAdvisor: Dr. Melissa Lucash\n- Quantified aboveground carbon dynamics in boreal Alaska using remote sensing and simulation modeling\n- Developed image processing code in Google Earth Engine to perform atmospheric correction, cross-sensor calibration, and spectral index calculations\n- Analyzed large climate datasets from CMIP5 and CMIP6 - Integrated remote sensing data, and climate projections with LANDIS-II \n\n\n\nJune 2023 – August 2023\nAdvisor: Dr. Anthony Vorster\n- Collaborated with Grand Staircase Escalante Partners to map invasive plant communities in the Paria River Watershed, Utah\n- Processed large datasets using Google Earth Engine, R, and ArcGIS\n\n\n\nDecember 2022 – May 2023\nAdvisor: Dr. Nathan Rowley\n- Estimated supraglacial lake depth development in Western Greenland using radiative transfer models\n- Developed workflows to process Landsat imagery (raster sieving and feature detection)\n\n\n\nJune 2022 – July 2022\nAdvisor: Dr. Victor Gonzalez\n- Analyzed climate stressors on heat tolerances of honeybees and sweat bees in Lesvos, Greece\n- Discovered that bees remain heat tolerant following desiccation and starvation\n\n\n\n\n\n\n\nSeptember 2023 – Present\nLab Instructor – Geography 485/585: Remote Sensing I (Fall, Winter 2025)\nDeveloped lab exercises, taught GIS software (QGIS, R), and provided demonstrations.\nCourse Assistant – Geography 199: Global Wildfire (Spring 2024)\nSupported curriculum development and provided supplemental instruction\nGuest Lecturer\nGeography 199: “Changing Wildfire in Brazil”\nGeography 199: “Bees and Wildfire”\nLab Instructor – Geography 181: Our Digital Earth (Fall 2023, Winter 2024)\nFacilitated labs on digital mapping and spatial data\n\n\n\n\n\n\nRippey Research Grant ($1000, UO) – 2024\n\nNASA Develop Scholarship ($1500, SSAI) – 2023\n\nDean’s List (OWU) – Fall ’22, Spring ’20, ’22, ’23\n\nRobert E. Shanklin Distinguished Scholar (Geography, OWU) – 2023\n\nPhi Sigma Tau (Philosophy, OWU) – 2023\n\nOur New Gold Digital Storytelling Winner (Spanish, OWU) – 2022\n\n\n\n\n\n\nRancher W, Matsumoto H, Lamping J, Lucash MS. 2025. 
“Comparing Current and Future Above-Ground Carbon Estimates Using Random Forests, Landsat Imagery, and a Forest Simulation Model” (In preparation)\n\nWeiss S, Rancher W, Hayes K, Buma B, Lucash MS. 2024. “Wildfire Dynamics Under Climate Change in Interior Alaska” (In preparation)\n\nGonzalez VHB, Rancher W, Vigil R, Garino-Heisey I, Oyen K, Tscheulin T, Petanidou T, Hranitz J, Barthell J. 2024. “Bees Remain Heat Tolerant After Acute Exposure to Desiccation and Starvation”\n\nRowley N, Rancher W, Karmosky C. 2024. “Comparison of Multiple Methods for Supraglacial Melt-Lake Volume Estimation in Western Greenland During the 2021 Summer Melt Season”\n\n\n\n\n\n\nRancher W, Matsumoto H, Lamping J, Lucash ML. 2024. “Assessing Vegetation Shifts in Boreal Alaska by Integrating Landsat Imagery with Spatial Modeling” – American Geophysical Union – Washington, DC (Poster)\n\nRancher W, VanArnam M, Kowalski A, Anarella T, Vorster A. 2023. “Mapping Russian Olive and Tamarisk to Inform Invasive Species Management along the Paria River, Utah” – NASA Develop Day – Washington, DC (Virtual talk)\n\nRancher W, Rowley N. 2023. “Estimating Supraglacial Melt Lake Volume Changes in West Central Greenland Using Multiple Remote Sensing Methods” – Ohio Wesleyan Spring Symposium – Delaware, OH (Poster)\n\nRancher W, Vigil R, Garino-Heisey I, Gonzalez V. 2022. “Effects of Desiccation on Bees’ Heat Tolerance” – Ohio Wesleyan Connection Conference – Delaware, Ohio (Poster)\n\nRancher W, Gonzalez V. 2022. “Effects of Desiccation on Bees’ Heat Tolerance” – IUSSI Sección Andina y del Caribe – Panama City, Panama (Talk)"
},
{
"objectID": "cv.html#summary",
"href": "cv.html#summary",
"title": "Wesley Rancher",
"section": "",
"text": "Biogeographer specializing in spatial data science and ecology, with a focus on understanding ecosystem dynamics and environmental change across diverse landscapes and time scales. Skilled in GIS, remote sensing, and spatial modeling, and passionate about leveraging these tools to address pressing environmental problems and communicate research findings to the public. Eager to contribute to research and industry initiatives that support data-driven climate solutions and enhance ecological and social resilience."
},
{
"objectID": "cv.html#education",
"href": "cv.html#education",
"title": "Wesley Rancher",
"section": "",
"text": "M.S. in Geography\nGPA: 3.9\nAdvisor: Dr. Melissa Lucash\n\n\n\nB.A. in Environmental Studies and Geography\nMinor in Philosophy\nGPA: 3.5\nAdvisor: Dr. Nathan Rowley"
},
{
"objectID": "cv.html#skills",
"href": "cv.html#skills",
"title": "Wesley Rancher",
"section": "",
"text": "GIS & Remote Sensing: Advanced in ArcGIS, QGIS, and Google Earth Engine\n\nProgramming: Advanced in R and Python; proficient in Bash and AWS CLI\n\nDrone Operation & Image Processing: FAA Certified Remote UAS Pilot (#4802988); experience with Pix4D, Drone2Map, Agisoft; proficient in LiDAR and Dual Red-Edge sensors calibration\n\nLanguages: Working proficiency in Spanish"
},
{
"objectID": "cv.html#research-experience",
"href": "cv.html#research-experience",
"title": "Wesley Rancher",
"section": "",
"text": "September 2023 – Present\nAdvisor: Dr. Melissa Lucash\n- Quantified aboveground carbon dynamics in boreal Alaska using remote sensing and simulation modeling\n- Developed image processing code in Google Earth Engine to perform atmospheric correction, cross-sensor calibration, and spectral index calculations\n- Analyzed large climate datasets from CMIP5 and CMIP6 - Integrated remote sensing data, and climate projections with LANDIS-II \n\n\n\nJune 2023 – August 2023\nAdvisor: Dr. Anthony Vorster\n- Collaborated with Grand Staircase Escalante Partners to map invasive plant communities in the Paria River Watershed, Utah\n- Processed large datasets using Google Earth Engine, R, and ArcGIS\n\n\n\nDecember 2022 – May 2023\nAdvisor: Dr. Nathan Rowley\n- Estimated supraglacial lake depth development in Western Greenland using radiative transfer models\n- Developed workflows to process Landsat imagery (raster sieving and feature detection)\n\n\n\nJune 2022 – July 2022\nAdvisor: Dr. Victor Gonzalez\n- Analyzed climate stressors on heat tolerances of honeybees and sweat bees in Lesvos, Greece\n- Discovered that bees remain heat tolerant following desiccation and starvation"
},
{
"objectID": "cv.html#teaching-experience",
"href": "cv.html#teaching-experience",
"title": "Wesley Rancher",
"section": "",
"text": "September 2023 – Present\nLab Instructor – Geography 485/585: Remote Sensing I (Fall, Winter 2025)\nDeveloped lab exercises, taught GIS software (QGIS, R), and provided demonstrations.\nCourse Assistant – Geography 199: Global Wildfire (Spring 2024)\nSupported curriculum development and provided supplemental instruction\nGuest Lecturer\nGeography 199: “Changing Wildfire in Brazil”\nGeography 199: “Bees and Wildfire”\nLab Instructor – Geography 181: Our Digital Earth (Fall 2023, Winter 2024)\nFacilitated labs on digital mapping and spatial data"
},
{
"objectID": "cv.html#awards-and-honors",
"href": "cv.html#awards-and-honors",
"title": "Wesley Rancher",
"section": "",
"text": "Rippey Research Grant ($1000, UO) – 2024\n\nNASA Develop Scholarship ($1500, SSAI) – 2023\n\nDean’s List (OWU) – Fall ’22, Spring ’20, ’22, ’23\n\nRobert E. Shanklin Distinguished Scholar (Geography, OWU) – 2023\n\nPhi Sigma Tau (Philosophy, OWU) – 2023\n\nOur New Gold Digital Storytelling Winner (Spanish, OWU) – 2022"
},
{
"objectID": "cv.html#publications",
"href": "cv.html#publications",
"title": "Wesley Rancher",
"section": "",
"text": "Rancher W, Matsumoto H, Lamping J, Lucash MS. 2025. “Comparing Current and Future Above-Ground Carbon Estimates Using Random Forests, Landsat Imagery, and a Forest Simulation Model” (In preparation)\n\nWeiss S, Rancher W, Hayes K, Buma B, Lucash MS. 2024. “Wildfire Dynamics Under Climate Change in Interior Alaska” (In preparation)\n\nGonzalez VHB, Rancher W, Vigil R, Garino-Heisey I, Oyen K, Tscheulin T, Petanidou T, Hranitz J, Barthell J. 2024. “Bees Remain Heat Tolerant After Acute Exposure to Desiccation and Starvation”\n\nRowley N, Rancher W, Karmosky C. 2024. “Comparison of Multiple Methods for Supraglacial Melt-Lake Volume Estimation in Western Greenland During the 2021 Summer Melt Season”"
},
{
"objectID": "cv.html#presentations",
"href": "cv.html#presentations",
"title": "Wesley Rancher",
"section": "",
"text": "Rancher W, Matsumoto H, Lamping J, Lucash ML. 2024. “Assessing Vegetation Shifts in Boreal Alaska by Integrating Landsat Imagery with Spatial Modeling” – American Geophysical Union – Washington, DC (Poster)\n\nRancher W, VanArnam M, Kowalski A, Anarella T, Vorster A. 2023. “Mapping Russian Olive and Tamarisk to Inform Invasive Species Management along the Paria River, Utah” – NASA Develop Day – Washington, DC (Virtual talk)\n\nRancher W, Rowley N. 2023. “Estimating Supraglacial Melt Lake Volume Changes in West Central Greenland Using Multiple Remote Sensing Methods” – Ohio Wesleyan Spring Symposium – Delaware, OH (Poster)\n\nRancher W, Vigil R, Garino-Heisey I, Gonzalez V. 2022. “Effects of Desiccation on Bees’ Heat Tolerance” – Ohio Wesleyan Connection Conference – Delaware, Ohio (Poster)\n\nRancher W, Gonzalez V. 2022. “Effects of Desiccation on Bees’ Heat Tolerance” – IUSSI Sección Andina y del Caribe – Panama City, Panama (Talk)"
},
{
"objectID": "instructions/LabFour/index.html",
"href": "instructions/LabFour/index.html",
"title": "Lab four instructions",
"section": "",
"text": "This lab is adapted from the Mapping with Drones class and was originally written by James Lamping and Colin Mast. Here we use a dataset from NOAA and ask slightly different questions about the data.\nLets start by setting up our work space. The things we need to do here are to load the libraries we will use in this analysis and also make sure our working directory is set.\n\n# This section of code below loads necessary packages into our R session.\ninstall.packages(\"pacman\") #when you run code again on the same machine you can comment install.packages lines out\ninstall.packages(\"sf\")\ninstall.packages(\"stars\")\n\n# Install Rtools on Root folder. Edit system environment variables with ming64/bin path. Restart R\n# https://cran.r-project.org/bin/windows/Rtools/rtools44/rtools.html\n# you might need these packages if you cannot plot your point cloud\n#https://www.xquartz.org/ #install this and unzip if running on macOS\n#install.packages(\"rgl\")\n#install.packages(\"stars\", type = \"source\") #an alternative attempt at line 4 of this chunk\n\npacman::p_load(\"lidR\", \"tidyverse\", \"terra\", \"tidyterra\", \"future\", 'RCSF', 'gstat') \nlibrary(sf)\nlibrary(stars)\n\n# Now we need to set the working directory. This is the filepath where we are going to be working from.\nsetwd(\"R:/Geog485_585/Class_Data/Lab4_1/\")\nsetwd(\"/Users/wancher/Documents/rs_485/input_data/\")\n\nFirst, lets bring in some lidar data and take a look at it.\n\n#Here is the source for the data used in this lab\n#https://portal.opentopography.org/lidarDataset?opentopoID=OTLAS.102010.26910.1\n\n# There are two main file types that you will see with lidar data, LAS and LAZ. The LAZ file type is highly compressed, saving a lot of room for storage and transfer. Most processing software can read LAZ files, however they default to LAS when exporting. Be careful about storage space when dealing with lidar data.\n# read in an individual LAZ file\nlas <- readLAS(\"ot_000119.laz\")\n# lets take a peak at what that looks like\nplot(las)\n\n# Some lidar data also has RGB data included. If so this is how you would visualize it in color.\n#plot(las, color = \"RGB\") \n\nlas # run this code and it will give you a basic summary of the object\n\nSECTION TURN IN 1. Take a screen shot of your point cloud 2. What is the difference between LAS and LAZ? 3. What coordinate reference system is your data in? 4. What is your point density? 5. How much area does your data cover?\n\nWe are going to chunk our data so we can work with smaller files for subsequent steps.\n\n# bring in the large lidar file as a LAS Catalog\nctg <- readLAScatalog(\"ot_000119.laz\")\nplot(ctg, chunk = TRUE)\n\n# Create a new set of 200 x 200 m. laz files with a 10 meter buffer.\nopt_chunk_buffer(ctg) <- 10\nopt_chunk_size(ctg) <- 200\nplot(ctg, chunk = TRUE)\nopt_laz_compression(ctg) <- TRUE # this tells it to be a .laz file instead of a .las file\n\n#set an output folder path (as is, this will write to wherever you setwd)\nopt_output_files(ctg) <- \"retile_{XLEFT}_{YBOTTOM}\" # this sets the folder location and name with the coordinates of the chunk.\n\n# preview the chunk pattern and create new tiles\nplot(ctg, chunk = TRUE)\nplan(multisession, workers = availableCores() - 1) # create a parallel session. This lets you process more than one at a time\nnewctg = catalog_retile(ctg)\nplot(newctg)\n\nPoint classification\nOne of the most important steps in processing lidar is classifying points to different categories. 
A lot of freely available aerial lidar already comes classified as part of a completed validated dataset. However, if you are the one producing the lidar from a drone you will need to classify the point cloud yourself. There are many standards when it comes to lidar data and most are controlled by ASPRS. To read more about the classification standards of ASPERS visit the link below. Table 3 contains the standard classification values and meanings set by ASPERS. We will just be focusing on classifying ground points in this lab.\nhttps://www.asprs.org/wp-content/uploads/2010/12/LAS_Specification.pdf\n\n# lets plot our data and see if these points are already classified.\nplot(las, color = \"Classification\")\n\n# It likely is, and in this case you will see white points for unclass (0) and blue points for points classified as ground (2). Lets bring the point cloud back in as if there are no ground points classified with the filter funtions in lidR.\n\n#read in one of the tiles that you created\nlas <- readLAS(\"retile_722200_4940600.laz\", filter = \"-change_classification_from_to 2 0\")\nplot(las, color = \"Classification\")\n\n# Now we are going to use the ground classification algorithm to classify points as either ground or not. There are a few different methods for this. For details on other methods please look here: https://r-lidar.github.io/lidRbook/gnd.html\nlas_class <- classify_ground(las, algorithm = pmf(ws = 5, th = 3))\n\n# How does it look? Zoom in move around. Does it look like it performed well? If not, maybe try another classification method from the link above.\nplot(las_class, color = \"Classification\", size = 3, bg = \"white\")\n\n# Another way to see how well this worked is by plotting a cross section.\nplot_crossection <- function(las,\n p1 = c(min(las@data$X), mean(las@data$Y)),\n p2 = c(max(las@data$X), mean(las@data$Y)),\n width = 4, colour_by = NULL)\n{\n colour_by <- enquo(colour_by)\n data_clip <- clip_transect(las, p1, p2, width)\n p <- ggplot(data_clip@data, aes(X,Z)) + geom_point(size = .5) + coord_equal() + theme_minimal()\n \n if (!is.null(colour_by))\n p <- p + aes(color = !!colour_by) + labs(color = \"\")\n \n return(p)\n}\n\nplot_crossection(las_class, colour_by = factor(Classification))\n\nSECTION TURN IN 1. Take a screen shot of the cross section you created with the pmf function. 2. How well did it perform? 3. Did you have to use a different classification? If so, what did you end up using? 4. What do you see in your classified point cloud?\n\nCreating a digital surface model (DSM)\nLets start off by making a DSM. If you remember from lecture a DSM includes all ground and vegetaion points. Its like draping a sheet across the surface of everything that is there.\n\ndsm <- rasterize_canopy(las_class, res = 1, p2r(na.fill = tin()))\ncol <- height.colors(25) # create a color profile\nplot(dsm, col = col, main = \"Digital Surface Model\")\n\n#summary stats\nsummary(dsm) \nhist(dsm)#repeat this for dtm and chm\n\nCreating a digital terrain model (DTM)\nOne of the neatest things about lidar data is its ability to model what lies beneath the canopy. Where are old logging roads? Are there artifacts beneath the vegetation? What is the actual elevation of the land? Lets only look at the ground points and make another surface model of the terrain (DTM).\n\n#this one could take awhile\ndtm <- rasterize_terrain(las_class, algorithm = kriging(k = 40))\nplot(dtm, main = \"Digital Terrain Model\") \n\n# that doesnt look like much... 
lets make a hillsahde\ndtm_prod <- terrain(dtm, v = c(\"slope\", \"aspect\"), unit = \"radians\")\ndtm_hillshade <- shade(slope = dtm_prod$slope, aspect = dtm_prod$aspect)\nplot(dtm_hillshade, col =gray(0:30/30), legend = FALSE, main = \"DTM Hillshade\")\n\nCreating a canopy height model (CHM)\nTo look at the height of only the vegetation we need to subtract the height of the ground out of our data. Looking at the DSM you see individual trees, but the height data is a combination of both the ground height and the vegetation height. Lets get rid of the ground and look at the trees!\n\nlas_norm <- normalize_height(las_class, algorithm = tin())\nplot(las_norm, color = \"Classification\") # notice how the terrain is now completely flat\n\n# now that we have no more terrain elevation. Lets make another surface model. This will be our CHM.\nchm <- rasterize_canopy(las_norm, res = 1, pitfree(thresholds = c(0, 10, 20), max_edge = c(0, 1.5)))\nplot(chm, col = col, main = \"Canopy Height Model\")#what are the units\n\nSECTION TURN IN 1. Screenshots of all terrain models and histograms. 2. Report the average values of each model. You can copy summary(dsm) for dtm and chm models.\n\nIndividual tree detection\nNow that we have the lidar processed and our terrain models generated, lets do something a bit more advanced. From the terrain data and the unclassified vegetation data there are methods of detecting and modeling every tree on the landscape. This can be useful in so many different ways! It can tell us the density of trees on the landscape, the height distrubution, and even estimate how much biomass there is in the living trees. For this lab we will locate each tree and create spatial data of the canopy area of each tree.\nThere are many different ways of going about this. For this I chose the fastest, but it might not be the most accurate for this environment. Feel free to try other methods as described here: https://r-lidar.github.io/lidRbook/itd-its.html\n\n# locate all the tops of the trees. This does this by looking at a local area maximum height within a set radius.\nttops <- locate_trees(las_norm, lmf(ws = 5))\n\n# plot the tree tops on top of the CHM\nplot(chm, col = height.colors(50))\nplot(sf::st_geometry(ttops), add = TRUE, pch = 3)\n\n#Lets see what that looks like on top of the normalized point cloud.\nx <- plot(las_norm, bg = \"white\", size = 4)\nadd_treetops3d(x, ttops)\n## Do you think it is oversegmenting trees (as in there are more than one \"tree top\" per tree)? If so, change the window size (ws) to a larger one and see how that performs.\n\n# Now lets segment the point cloud. To do this we will use the CHM and the tree tops to identify canopies. Then we will segment the point cloud to have individual trees identified.\nlas_seg <- segment_trees(las_norm, dalponte2016(chm, ttops)) # segment point cloud\nplot(las_seg, bg = \"white\", size = 4, color = \"treeID\") # visualize trees\n\n# you can now look at individual trees!\ntree110 <- filter_poi(las_seg, treeID == 110)\nplot(tree110, size = 8, bg = \"white\")\n\n# from the segmented lidar you can now easily make spatial data that has information about each tree. 
This is useful in ArcGIS Pro or QGIS and is the product you will need to make basic figures and maps about the status of those trees.\ncrowns <- crown_metrics(las_seg, func = .stdmetrics, geom = \"convex\")\nplot(crowns[\"zq95\"], main = \"95th percentile of tree height (feet)\")\nnames(crowns) # look at all the data you now have of each tree\n\nExporting data\nAll of this can be easily exported for use in other geospatial programs. Let's do that now!\n\n# rasters\nwriteRaster(dsm, \"data/output/lidar_dsm.tif\", overwrite = TRUE)\nwriteRaster(dtm, \"data/output/lidar_dtm.tif\", overwrite = TRUE)\nwriteRaster(chm, \"data/output/lidar_chm.tif\", overwrite = TRUE)\nwriteRaster(dtm_hillshade, \"data/output/lidar_hillshade.tif\", overwrite = TRUE)\n\n# vectors\nwriteVector(vect(crowns), \"data/output/Crowns/lidar_tree_crowns.shp\")\n\nSECTION TURN IN 1. Add your output rasters to a QGIS environment and change symbology. You can add these screenshots to your write-up if you want.\n\nAdditional processing with LAScatalog. No submission is required here, but it is worth knowing how to process larger datasets.\nAt the very beginning of this exercise you created a catalog from the large lidar dataset to create smaller chunks that are much more manageable to analyze. In the same way you created these chunks, you can analyze them as a whole unit to create complete output products for a whole area. There are many functions that work with the LAScatalog engine in lidR, including terrain modeling, tree segmentation, and classification. Below we are going to just create a DTM on the whole dataset. Remember that the dataset we used for this lab already had ground points classified, so let's just go ahead and run the analysis.\n\nctg_dtm <- rasterize_terrain(ctg, algorithm = tin()) # passing a catalog to a function runs that function through the LAScatalog engine\nplot(ctg_dtm, main = \"Digital Terrain Model\") \n\n# that doesn't look like much... let's make a hillshade\nctg_dtm_prod <- terrain(ctg_dtm, v = c(\"slope\", \"aspect\"), unit = \"radians\")\nctg_dtm_hillshade <- shade(slope = ctg_dtm_prod$slope, aspect = ctg_dtm_prod$aspect)\nplot(ctg_dtm_hillshade, col = gray(0:30/30), legend = FALSE, main = \"DTM Hillshade\")\n\n# export DTM and hillshade\nwriteRaster(ctg_dtm, \"data/output/ctg_DTM.tif\", overwrite = TRUE)\nwriteRaster(ctg_dtm_hillshade, \"data/output/ctg_Hillshade.tif\", overwrite = TRUE)"
}
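As a small follow-up to the tree-segmentation step in the lab above, here is a sketch of how the per-tree crown metrics could be summarized and written out as a table. It assumes the crowns object from crown_metrics() and the zq95/zmax columns that .stdmetrics provides; the output path is a placeholder.

library(lidR)
library(sf)
library(dplyr)

# sketch: stand-level summaries from the per-tree metrics created above
crowns_df <- st_drop_geometry(crowns)   # drop the crown polygons, keep the attribute table

stand_summary <- crowns_df %>%
  summarise(
    n_trees       = n(),
    mean_height95 = mean(zq95, na.rm = TRUE),  # mean 95th-percentile height per tree
    tallest_tree  = max(zmax, na.rm = TRUE)
  )
print(stand_summary)

# write the per-tree table alongside the rasters and vectors exported earlier
write.csv(crowns_df, "data/output/tree_crown_metrics.csv", row.names = FALSE)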
]