Wine Map Part 1: Downloading and manipulating climate data
I was reading a second-hand book I picked up called “Vendange: A study of wine and other drinks.”
The author made mention that the following are necessary for a vine to produce a classically agreeable wine:
- Latitudes between 28 and 50 degrees, north and south.
- Winter: Short and cold with a good supply of rain.
- Spring: Mild to warm, with showers.
- Summer: Long, hot, balanced rain in mild showers and plenty of dew.
- Even summer sunshine as opposed to short, intense bursts.
- ~1300 hours of sunshine and ~180mm of rainfall per annum.
- Sloping plots with sun-facing aspects.
- Poor quality soils of chalk, limestone, pebble, slate, gravel or schist:
  - Loosely packed to allow proper drainage but also air and moisture ingress.
  - Soil that can retain heat and oppose cool night air.
This was interesting information and it got me thinking:
- How accurate are these statements?
- Which ones are more significant?
- Can I classify and identify regions in Australia that have these attributes?
Simultaneously, I had been wanting to experiment with TensorFlow and Keras, and this seemed like an excellent opportunity to try and integrate both interests. I knew this was a geospatial problem so I would need to develop some geospatial skills for good measure too.
In Summary:
- Can I predict the likelihood of an area growing wine?
- Can I quantify and rank the significant features?
- What regions don’t grow vines but have similar attributes to existing wine-growing regions?
- Could I break this quantification down for White and Red Wines?
- Could I break it down further by varietals?
- Could I weight the results by wine tasting scores?
Collecting the data
It was clear that I needed lots of different data, so I consulted an expert and called upon my good friend Jaxon King for his advice on features. Out of our conversation came the following list of datasets:
- Seasonal Climate Data
  - Temperature
  - Precipitation
  - Solar Radiation
  - Wind Speed
- Soil Data
  - Surface Lithology
  - Regional geology
- Physical Environmental Data
  - Elevation
  - Slope
  - Aspect
  - Distance to coast
Finally, I’d need a training set of labelled, known wine-growing regions in Australia.
This post focuses on the collection, extraction and transformation of the climate data. All code is available at this GitHub Repo.
I’ve deliberately added the CSVs to my .gitignore, simply to keep the repo manageable. The process should be fully reproducible and will download the data as needed.
Climate Data
I used the excellent online [TerraClimate](http://www.climatologylab.org/wget-terraclimate.html) resource for free NetCDF files of monthly data. The dataset has most of the climate data required, in monthly increments from 1958 to 2018. Even better, it offers programmatic access and supports cURL-ing the data, perfect!
The following attributes are included in the ClimatologyLab dataset and are saved as input/00 terraclimate codes/codes.csv.
The following code in 01 get ncdf data.R glues a string together and uses curl::curl_download to save the files into the respective folders. By naming the columns deliberately in df, we can exploit the argument matching capability of purrr::pwalk to get row-wise iteration over a dataframe.
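In outline, the download step looks something like this. It's a minimal sketch: the URL template, folder layout and the subset of attribute codes below are my illustrative assumptions rather than a copy of the script.

```r
library(glue)
library(purrr)
library(curl)

# One row per attribute/year combination; the column names deliberately
# match the argument names of download_one() so pwalk() can match them.
df <- expand.grid(
  attribute = c("ppt", "tmax", "tmin", "srad", "ws"),  # illustrative subset of codes.csv
  year      = 1958:2018,
  stringsAsFactors = FALSE
)

download_one <- function(attribute, year) {
  # Assumed URL template, based on the TerraClimate documentation.
  url  <- glue("http://thredds.northwestknowledge.net:8080/thredds/fileServer/",
               "TERRACLIMATE_ALL/data/TerraClimate_{attribute}_{year}.nc")
  dest <- file.path("input", attribute, glue("TerraClimate_{attribute}_{year}.nc"))
  dir.create(dirname(dest), recursive = TRUE, showWarnings = FALSE)
  if (!file.exists(dest)) curl::curl_download(url, dest, quiet = TRUE)
}

# pwalk() matches df's columns to download_one()'s arguments by name,
# giving row-wise iteration over the data frame without an explicit loop.
pwalk(df, download_one)
```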
I now had a folder full of .nc files with a grid of observations in three dimensions, latitude, longitude and time.
The approach I am taking here comes from some trial and error; previous attempts to load and manipulate the NetCDF entirely in memory crashed R on my laptop’s 16GB of RAM.
In “02 ncdf to csv.R”, I nest a monthly function inside a larger, per-file function. Working backwards from the result I want, my data structure needs year, month, attribute, latitude, longitude and value. This structure will be in a long format, though I will eventually need it in a wide format for modelling.
The monthly code subsets the NetCDF array via square brackets along the third dimension, time. It relies on the matrix having named rows and columns, which we apply in the parent function.
Once this is done, I use pivot_longer to reshape the data frame from thousands of columns down to five. I then convert the lat/lons to numerics, filter the results to Australia and use data.table::setDT to change the type in place, i.e. no other objects are created in memory, which is very handy when working with large datasets and limited RAM. Finally, some prayers to the gods with gc() to free up memory; this is as superstitious as I get.
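Sketching that monthly helper out, it might look roughly like the following. The function and column names are illustrative, and the Australian bounding box is only an approximation.

```r
library(dplyr)
library(tidyr)
library(tibble)
library(data.table)

per_month <- function(nc_array, month_index, attribute, year) {
  # Take a single time slice; the lon/lat dimnames are applied by the
  # parent function, so the matrix carries its coordinates with it.
  slice <- nc_array[, , month_index]

  long <- as_tibble(slice, rownames = "lon") %>%
    pivot_longer(-lon, names_to = "lat", values_to = "value") %>%
    mutate(
      lon       = as.numeric(lon),
      lat       = as.numeric(lat),
      year      = year,
      month     = month_index,
      attribute = attribute
    ) %>%
    # Rough bounding box for Australia (an approximation for this sketch).
    filter(lon >= 112, lon <= 154, lat >= -44, lat <= -10)

  setDT(long)  # convert to data.table in place rather than copying
  long
}
```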
The next function maps the per_month function over the time dimension of the NetCDF matrix. With a list of data.tables in hand, I can use the blindingly fast data.table::rbindlist for row-based concatenation before writing the CSV back to disk.
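A sketch of that per-file wrapper, assuming the per_month() helper above and assuming the NetCDF coordinate variables are named lon and lat:

```r
library(ncdf4)
library(purrr)
library(data.table)

per_file <- function(nc_path, attribute, year) {
  nc  <- nc_open(nc_path)
  arr <- ncvar_get(nc, attribute)
  # Name the rows and columns so per_month() can recover the coordinates.
  dimnames(arr) <- list(as.character(ncvar_get(nc, "lon")),
                        as.character(ncvar_get(nc, "lat")),
                        NULL)
  nc_close(nc)

  # Map the monthly helper over the third (time) dimension.
  months <- map(seq_len(dim(arr)[3]), per_month,
                nc_array = arr, attribute = attribute, year = year)

  out <- rbindlist(months)                     # fast row-wise concatenation
  fwrite(out, sub("\\.nc$", ".csv", nc_path))  # write the long CSV to disk
  rm(arr, months, out)
  gc()                                         # the superstitious bit
  invisible(NULL)
}
```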
The third and final part of this post looks at turning these long files into a single wide CSV.
I wrote two small helper functions in “03 long csvs to wide.R”: one for the common header data and a second to be mapped over all the CSVs. Having consistently used arrange in the previous code, I can simply column-bind the single columns of results into a consistent, wide data.frame.
I also make use of R’s quasi-quotation to dynamically assign the column names in the “extract_attribute” function.
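A minimal sketch of those helpers; the file locations, attribute codes and column names here are assumptions rather than the exact contents of the script:

```r
library(dplyr)
library(purrr)
library(readr)

csv_paths       <- list.files("output", pattern = "\\.csv$", full.names = TRUE)  # assumed location
attribute_names <- c("ppt", "tmax", "tmin", "srad", "ws")  # illustrative, in the same order as csv_paths

# The common lat/lon/year/month columns, taken from any one long CSV.
extract_header <- function(csv_path) {
  read_csv(csv_path) %>%
    arrange(lat, lon, year, month) %>%
    select(lat, lon, year, month)
}

# A single column of values, renamed after its attribute.
extract_attribute <- function(csv_path, attribute) {
  read_csv(csv_path) %>%
    arrange(lat, lon, year, month) %>%   # the same ordering keeps rows aligned
    select(value) %>%
    rename(!!attribute := value)         # quasi-quotation: dynamic column name
}

# Column-bind the header and one column per attribute into a wide data frame.
wide <- bind_cols(
  extract_header(csv_paths[1]),
  map2(csv_paths, attribute_names, extract_attribute)
)
```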
Bringing both pieces together is very straightforward. I also wrote a helper function to assist in verifying that the extract completed correctly.
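Such a check can be as simple as plotting one attribute for a single month and eyeballing the coastline. An illustrative example, using the column names assumed in the sketches above:

```r
library(dplyr)
library(ggplot2)

wide %>%                                  # the wide data frame built above
  filter(year == 2018, month == 1) %>%    # column names are assumptions
  ggplot(aes(x = lon, y = lat, fill = tmax)) +
  geom_raster() +
  coord_quickmap() +
  labs(title = "Maximum temperature, January 2018", fill = "tmax")
```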
The following are graphs used in spot-checking the validity. All looks well!
This completes the climate component of the data collection stage. Further detail to come in subsequent posts.