
Commit 9f01b0b

add numbering

1 parent f98c216

25 files changed: +63 -77 lines

book/_config.yml

Lines changed: 11 additions & 11 deletions
@@ -24,7 +24,7 @@ latex:
   targetname: book.tex

 bibtex_bibfiles:
-  - "paper.bib"
+  - "book_refs.bib"

 # Information about where the book exists on the web
 repository:
@@ -76,18 +76,18 @@ sphinx:
   part4_title: "Summary + Conclusion"

   #tutorial 1 nb titles
-  title_its_nb1: "# 1. Accessing cloud-hosted ITS_LIVE data"
-  title_its_nb2: "# 2. Working with larger than memory data"
-  title_its_nb3: "# 3. Handling raster and vector data"
-  title_its_nb4: "# 4. Exploratory data analysis of a single glacier"
-  title_its_nb5: "# 5. Exploratory data analysis of multiple glaciers"
+  title_its_nb1: "# 3.1 Accessing cloud-hosted ITS_LIVE data"
+  title_its_nb2: "# 3.2 Working with larger than memory data"
+  title_its_nb3: "# 3.3 Handling raster and vector data"
+  title_its_nb4: "# 3.4 Exploratory data analysis of a single glacier"
+  title_its_nb5: "# 3.5 Exploratory data analysis of multiple glaciers"

   #tutorial 2 nb titles
-  title_s1_1: "# 1. Read Sentinel-1 data processed by ASF"
-  title_s1_2: "# 2. Wrangle metadata"
-  title_s1_3: "# 3. Exploratory analysis of ASF S1 imagery"
-  title_s1_4: "# 4. Read Sentinel-1 RTC data from Microsoft Planetary Computer"
-  title_s1_5: "# 5. Comparing Sentinel-1 RTC datasets"
+  title_s1_1: "# 4.1 Read Sentinel-1 data processed by ASF"
+  title_s1_2: "# 4.2 Wrangle metadata"
+  title_s1_3: "# 4.3 Exploratory analysis of ASF S1 imagery"
+  title_s1_4: "# 4.4 Read Sentinel-1 RTC data from Microsoft Planetary Computer"
+  title_s1_5: "# 4.5 Comparing Sentinel-1 RTC datasets"
   #title_s1_6: "# 6. Example of Sentinel-1 RTC time series analysis"

   #global nb sections
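These `title_*` strings sit under the `sphinx:` block, but the diff truncates the full nesting. In a Jupyter Book `_config.yml`, strings like these are typically exposed to notebooks as MyST substitutions, roughly as in the following sketch (the `config`/`myst_substitutions` placement is an assumption, not shown in this commit):

```yaml
# Sketch only: exact nesting is truncated in the diff above.
# Jupyter Book passes values to MyST via sphinx.config.myst_substitutions.
sphinx:
  config:
    myst_enable_extensions:
      - substitution        # enables {{ key }} replacement in Markdown
    myst_substitutions:
      title_its_nb1: "# 3.1 Accessing cloud-hosted ITS_LIVE data"
      title_s1_1: "# 4.1 Read Sentinel-1 data processed by ASF"
```

A notebook's first Markdown cell can then contain `{{ title_its_nb1 }}`, so renumbering a part means editing `_config.yml` once rather than every notebook heading.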

book/_toc.yml

Lines changed: 17 additions & 22 deletions
@@ -2,43 +2,38 @@
 format: jb-book
 root: introduction
 parts:
-- caption: Introduction
-  #numbered: 2
+- caption: Part 1. Introduction
   chapters:
   - file: intro/getting_started
   - file: intro/learning_objectives
   - file: intro/open_source_setting
-- caption: Background
-  #numbered: 2
+- caption: Part 2. Background
   chapters:
   - file: background/context_motivation
   - file: background/data_cubes
   - file: background/tutorials_overview
   - file: background/tutorial_data
-  - file: intro/software
+  - file: background/software
   - file: background/relevant_concepts
-- caption: Part 1
-  #numbered: 2
+- caption: Part 3. ITS_LIVE Tutorial
   chapters:
   - file: itslive/itslive_intro
-  - file: itslive/nbs/1_accessing_itslive_s3_data
-  - file: itslive/nbs/2_larger_than_memory_data
-  - file: itslive/nbs/3_combining_raster_vector_data
-  - file: itslive/nbs/4_exploratory_data_analysis_single
-  - file: itslive/nbs/5_exploratory_data_analysis_group
-- caption: Part 2
-  #numbered: 2
+  - file: itslive/nbs/accessing_itslive_s3_data
+  - file: itslive/nbs/larger_than_memory_data
+  - file: itslive/nbs/combining_raster_vector_data
+  - file: itslive/nbs/exploratory_data_analysis_single
+  - file: itslive/nbs/exploratory_data_analysis_group
+- caption: Part 4. Sentinel-1 RTC Tutorial
   chapters:
   - file: sentinel1/s1_intro
-  - file: sentinel1/nbs/1_read_asf_data
-  - file: sentinel1/nbs/2_wrangle_metadata
-  - file: sentinel1/nbs/3_asf_exploratory_analysis
-  - file: sentinel1/nbs/4_read_pc_data
-  - file: sentinel1/nbs/5_comparing_s1_rtc_datasets
-- caption: Conclusion
-  #numbered: 2
+  - file: sentinel1/nbs/read_asf_data
+  - file: sentinel1/nbs/wrangle_metadata
+  - file: sentinel1/nbs/asf_exploratory_analysis
+  - file: sentinel1/nbs/read_pc_data
+  - file: sentinel1/nbs/comparing_s1_rtc_datasets
+- caption: Part 5. Conclusion
   chapters:
-  - file: conclusion/conclusion
+  - file: conclusion/wrapping_up
   - file: conclusion/summary
   - file: conclusion/datacubes_revisited
 - caption: Additional material
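Taken together, the `_toc.yml` changes rename the five part captions, drop the per-part `#numbered: 2` options, and strip numeric prefixes from notebook filenames. A consolidated sketch of the resulting top-level structure (chapter lists elided; indentation assumed, since the scraped diff drops it):

```yaml
# Post-commit shape of book/_toc.yml (parts only, chapters omitted)
format: jb-book
root: introduction
parts:
  - caption: Part 1. Introduction
  - caption: Part 2. Background
  - caption: Part 3. ITS_LIVE Tutorial
  - caption: Part 4. Sentinel-1 RTC Tutorial
  - caption: Part 5. Conclusion
  - caption: Additional material
```

With explicit "Part N." captions carrying the ordering, the deleted `#numbered` options are no longer needed.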

book/background/context_motivation.md

Lines changed: 4 additions & 4 deletions
@@ -1,8 +1,8 @@
-# Context & Motivation
+# 2.1 Context & Motivation

 This book demonstrates scientific workflows using publicly-available, cloud-optimized geospatial datasets and open-source scientific software tools in order to address the need for educational resources related to new technologies and reduce barriers to entry to working with earth observation data. The tutorials in this book focus on the complexities inherent to working with n-dimensional, gridded datasets and use the core stack of software packages built on and around the Xarray data model.

-### *I. Moving away from the 'download model' of scientific data analysis*
+### *Moving away from the 'download model' of scientific data analysis*

 Technological developments in recent decades have engendered fundamental shifts in the nature of scientific data and how it is used for analysis.

@@ -11,7 +11,7 @@ Technological developments in recent decades have engendered fundamental shifts
 -- {cite}`abernathey_2021_cloud`
 ```

-### *II. Increasingly large, cloud-optimized data means new tools and approaches for data management*
+### *Increasingly large, cloud-optimized data means new tools and approaches for data management*

 The increase in publicly available earth observation data has transformed scientific workflows across a range of fields, prompting analysts to gain new skills in order to work with larger volumes of data in new formats and locations, and to use distributed cloud-computational resources in their analysis ({cite:t}`abernathey_2021_cloud,gentemann_2021_science,mathieu_2017_esas,ramachandran_2021_open,Sudmanns_2020_big,wagemann_2021_user`).

@@ -21,7 +21,7 @@ The increase in publicly available earth observation data has transformed scient
 Volume of NASA Earth Science Data archives, including growth of existing-mission archives and new missions, projected through 2029. Source: [NASA EarthData - Open Science](https://www.earthdata.nasa.gov/about/open-science).
 ```

-### *III. Asking questions of complex datasets*
+### *Asking questions of complex datasets*

 Scientific workflows involve asking complex questions of diverse types of data. Earth observation and related datasets often contain two types of information: measurements of a physical observable (e.g. temperature) and metadata that provides auxiliary information that required in order to interpret the physical observable (time and location of measurement, information about the sensor, etc.). With the increasingly complex and large volume of earth observation data that is currently available, storing, managing and organizing these types of data can very quickly become a complex and challenging task, especially for students and early-career analysts {cite}`mathieu_esas_2017,palumbo_2017_building,Sudmanns_2020_big,wagemann_2021_user`.


book/background/data_cubes.md

Lines changed: 5 additions & 5 deletions
@@ -1,8 +1,8 @@
-# Data cubes
+# 2.2 Data cubes

 The term **data cube** is used frequently throughout this book. This page contains an introduction of ***what*** a data cube is and ***why*** it is useful.

-## *I. Anatomy of a data cube*
+## *Anatomy of a data cube*

 The key object of analysis in this book is a [raster data cube](https://openeo.org/documentation/1.0/datacubes.html). Raster data cubes are n-dimensional objects that store continuous measurements or estimates of physical quantities that exist along given dimension(s). Many scientific workflows involve examining how a variable (such as temperature, windspeed, relative humidity, etc.) varies over time and/or space. Data cubes are a way of organizing geospatial data that let us ask these questions.

@@ -55,7 +55,7 @@ A data cube should be organized out of these building blocks adhering to the fol
 **Attributes** - Metadata that can be assigned to a given `xr.Dataset` or `xr.DataArray` that is ***static*** along that object's dimensions.
 :::

-## *II. 'Analysis-ready' data*
+## *'Analysis-ready' data*
 The process described above is an example of preparing data for analysis. Thanks to development and collaboration across the earth observation community, analysis-ready for earth observation has a specific, technical definition:

 ```{epigraph}
@@ -67,15 +67,15 @@ The development and increasing adoption of analysis-ready specifications for sat

 However, many legacy datasets still require significant effort in order to be considered 'analysis-ready'. Furthermore, for analysts, 'analysis-ready' can be a subjective and evolving label. Semantically, from a user-perspective, analysis-ready data can be thought of as data whose structure is conducive to scientific analysis.

-## *III. Analysis-ready data cubes & this book*
+## *Analysis-ready data cubes & this book*
 The tutorials in this book contain examples of data at various degrees of 'analysis-ready'. [Tutorial 1: ITS_LIVE](../itslive/itslive_intro.md) uses a dataset of multi-sensor observations that is already organized as a `(x,y,time)` cube with a common grid. In [Tutorial 2: Sentinel-1](../sentinel1/s1_intro.md), we will see an example of a dataset that has undergone intensive processing to make it 'analysis-ready' but requires further manipulation to arrive at the `(x,y,time)` cube format that will be easist to work with.

 ### References
 - {cite:t}`montero_2024_EarthSystemData`
 - {cite:t}`appel_2019_ondemand`
 - {cite:t}`giuliani_2019_EarthObservationOpen`
 - {cite:t}`truckenbrodt_2019_Sentinel1ARD`
-## Additional data cube resources
+### Additional data cube resources*
 - [OpenEO - Data Cubes](https://openeo.org/documentation/1.0/datacubes.html)
 - [Open Data Cube initiative](https://www.opendatacube.org/about-draft)
 - [The Datacube Manifesto](http://www.earthserver.eu/tech/datacube-manifesto/The-Datacube-Manifesto.pdf)

book/background/relevant_concepts.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# Relevant concepts
+# 2.6 Relevant concepts

 ## *Larger than memory data, parallelization and Dask*

book/intro/software.md renamed to book/background/software.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# Software and computing environment
+# 2.5 Software and computing environment

 On this page you'll find information about the computing environment and datasets that will be used in both of the tutorials in this book.

book/background/tutorial_data.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# Data used in tutorials
+# 2.4 Data used in tutorials

 We use a many different datasts throughout these tutorials. While each tutorial is focused on a different raster time series (ITS_LIVE ice velocity data and Sentinel-1 imagery), we also use vector data to represent points of interest.

book/background/tutorials_overview.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# Tutorials overview
+# 2.3 Tutorials overview

 This book contains two distinct tutorials, each of which focuses on a different cloud-optimized geospatial dataset and different cloud-computing resources. Read more about the datasets used [here](tutorial_data.md).

File renamed without changes.

book/conclusion/datacubes_revisited.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# Data Cubes Revisited
+# 5.3 Data Cubes Revisited

 In this book, we saw a range of real-world datasets and the steps required to prepare them for analysis. Several guiding principles for assembling and using analysis-ready data cubes in Xarray can be drawn from these examples.
