Metadata
title: "E-TRAINEE Tutorial - Landcover classification based on spectral-temporal metrics"description: "This is a tutorial within the third theme of Module 1 of the E-TRAINEE course."
lastUpdate: 2023-07-20
authors: Andreas Mayr
Landcover classification based on spectral-temporal metrics¶
This Notebook shows how to use spectral-temporal metrics as features for a supervised landcover classification. A simple approach for unsupervised classification is additionally tried towards the end of the Notebook. We will look at the accuracy of the supervised landcover classification in theme 6 of the E-TRAINEE Module 1.
We use the geemap, wxee and eemont packages to derive the spectral-temporal metrics from a Sentinel-2 satellite image time series in the Google Earth Engine (GEE). The subsequent machine learning part is accomplished on your local machine using the scikit-learn library. Xarray is used for raster time series handling. These and other Python packages used in this tutorial are contained in the requirements file provided for the course. Please see the instructions on the software page for setting up a Conda environment based on this file.
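As an optional first step, a quick sanity check like the following sketch can confirm that the key packages of this tutorial are importable in your environment (the package list simply mirrors the ones used below):
# Optional sanity check (sketch): confirm the key packages used in this tutorial are importable
import importlib
for pkg in ['ee', 'eemont', 'geemap', 'wxee', 'xarray', 'rioxarray',
            'geopandas', 'rasterio', 'sklearn', 'folium']:
    try:
        importlib.import_module(pkg)
        print(f'{pkg}: OK')
    except ImportError:
        print(f'{pkg}: missing - please install it into the Conda environment')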
Data:
- We will download the Sentinel-2 time series metrics from the Google Earth Engine.
- Two Geopackages, with an area-of-interest polygon and a reference data point layer, respectively, are provided for download on the course data repository in the theme 3 folder:
  - /T3/aoi.gpkg: A rectangular area in the Inn valley (Austria), covering urban structures, infrastructure, forest, grassland and cropland.
  - /T3/ref_data.gpkg: A set of 72 points purposely placed and labelled according to landcover categories, as interpreted by visual inspection of a (0.2-m resolution) aerial orthophoto from 2019 (i.e., the same year as the satellite imagery; orthophoto provided by the Office of the Government of Tyrol, Geoinformation Division).
Import some of the required packages.
import ee, eemont, geemap, wxee # ee is the GEE Python API, the others are extensions to that
import folium
import geopandas as gpd
import pathlib
Area-of-interest (AOI) and reference data set¶
Download the AOI and the reference data from the course data repository and save them on your local drive. Set the directory where you store the data.
data_dir = pathlib.Path('C:/work/etrainee/gee/T3') / 'data' # Define path on your local system
data_dir
WindowsPath('C:/work/etrainee/gee/T3/data')
We use GeoPandas to import the Geopackage with our Area-of-Interest (AOI). The simplest way to display it in an interactive map is the GeoPandas explore() method.
my_aoi = gpd.read_file(data_dir / 'aoi.gpkg')
my_aoi.explore()
Read the Geopackage with the reference data for landcover into a GeoPandas GeoDataFrame.
The landcover classes are coded with label IDs, and we add a new column naming the landcover classes as strings. These classes discriminate different types of (periodically) vegetated areas, water bodies, and "other" landcover types (such as barren land, roads, parking lots, buildings, etc., which we do not aim to separate further now).
ref_data = gpd.read_file(data_dir / 'ref_data.gpkg')
landcover_dict={
1: "Grassland",
2: "Cropland",
3: "Forest",
4: "Water",
5: "Other"
}
ref_data["landcover_string"] = ref_data['landcover'].map(landcover_dict) # Map values of the new column according to the dictionary
ref_data # Show the DataFrame
|    | landcover | geometry | landcover_string |
|----|-----------|----------|------------------|
| 0  | 1 | POINT (682379.903 5238921.551) | Grassland |
| 1  | 1 | POINT (682521.253 5239272.683) | Grassland |
| 2  | 1 | POINT (682951.614 5238795.907) | Grassland |
| 3  | 1 | POINT (682490.123 5239846.356) | Grassland |
| 4  | 1 | POINT (683024.953 5239605.725) | Grassland |
| ... | ... | ... | ... |
| 67 | 5 | POINT (684383.764 5238819.044) | Other |
| 68 | 5 | POINT (683702.255 5238405.091) | Other |
| 69 | 5 | POINT (683539.871 5238396.677) | Other |
| 70 | 5 | POINT (683578.293 5238347.597) | Other |
| 71 | 5 | POINT (683478.731 5238432.576) | Other |

72 rows × 3 columns
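Optionally, a quick look at the class distribution of the reference points with the Pandas value_counts() method:
ref_data['landcover_string'].value_counts() # Count reference points per landcover class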
For plotting, we define a colormap associated with the landcover classes in alphabetical order.
my_colors = ("brown", "darkgreen", "lightgreen", "grey", "blue")
Create an interactive Leaflet map with the two vector layers (AOI and Landcover), using some additional options for map making based on the Folium package.
# Show AOI
m = my_aoi.explore( # Show the AOI in a Folium map object
name="AOI",
style_kwds={"fill": False} # Do not fill the polygon
)
# Add landcover point layer
ref_data.explore(
m=m, # Pass the map object
column="landcover_string", # Column to use for color
name="Landcover", # Layer name in the map
marker_kwds={"radius": 4}, # Set marker circle size in pixels
style_kwds={"color": "black", "weight": 1}, # Set marker stroke color and width in pixels
cmap=my_colors, # Use the custom colormap
legend=True
)
folium.TileLayer('CartoDB positron', control=True).add_to(m) # Add alternative tiles as background map
folium.LayerControl(collapsed=False).add_to(m) # Add layer control
m # Show map
Authenticate and initialize Google Earth Engine:
try:
wxee.Initialize() # Works similarly to ee.Initialize but automatically connects to the high-volume GEE endpoint
except Exception as e: # If initialize does not work, you probably have to authenticate first
ee.Authenticate()
wxee.Initialize()
From GEE, retrieve an image collection for our AOI and for the year 2019 (we have reference data for the same year), limiting the search by a maximum cloudy pixel percentage per scene. We exclude the winter months, where snow cover could compromise the spectral time series. The eemont preprocess() method automatically masks clouds and shadows, and it scales and offsets the image or image collection. The eemont spectralIndices() method can calculate any index from the Awesome Spectral Indices library, but for simplicity we only take the NDVI.
aoi_fc = geemap.gdf_to_ee(my_aoi) # Convert the AOI from GeoPandas GeoDataFrame to ee.FeatureCollection
S2 = ee.ImageCollection("COPERNICUS/S2_SR") \
.filterDate("2019-04-01","2019-10-31") \
.filterBounds(aoi_fc) \
.filterMetadata("CLOUDY_PIXEL_PERCENTAGE", "less_than", 50) \
.preprocess() \
.spectralIndices("NDVI")
The newer COPERNICUS/S2_SR_HARMONIZED collection results in an error later on (when calling S2_ts.describe()).
Create a wxee TimeSeries object to enable xarray operations, such as temporal aggregations. For convenience, we select only the NDVI, omitting all the original bands.
S2_ts = S2.select('NDVI').wx.to_time_series()
First, get some info about our time series.
S2_ts.describe()
COPERNICUS/S2_SR
Images: 33
Start date: 2019-04-02 10:27:44 UTC
End date: 2019-10-26 10:17:45 UTC
Mean interval: 6.47 days
And plot the timeline to see how the (low-cloudiness) observations are distributed over the observation period.
S2_ts.timeline()
Feature extraction¶
To create spectral-temporal metrics that can be used as classification features, we perform a temporal aggregation (calculate pixel-level seasonal statistics) of our time series in the GEE. We define the frequency as year, but in fact it will be an aggregation over the growing season, as our time series covers only the months April to October 2019.
S2_seasonal_mean_ts = S2_ts.aggregate_time(frequency="year", reducer=ee.Reducer.mean(), keep_bandnames=False) # keep_bandnames=False appends the reducer name to the band names (e.g., NDVI_mean)
S2_seasonal_std_ts = S2_ts.aggregate_time(frequency="year", reducer=ee.Reducer.stdDev(), keep_bandnames=False)
S2_seasonal_min_ts = S2_ts.aggregate_time(frequency="year", reducer=ee.Reducer.min(), keep_bandnames=False)
S2_seasonal_max_ts = S2_ts.aggregate_time(frequency="year", reducer=ee.Reducer.max(), keep_bandnames=False)
Downloading features from GEE¶
The nice thing is that, so far, all the heavy computations on the time series data happened in the cloud (GEE). Now use wxee to download the results of the time series aggregation to xarray Datasets. Note, however, that wxee is not designed to download larger areas at high resolution - in such cases, consider exporting to a Google Drive using ee or to a local drive using geemap.
Either define a path that a NetCDF file will be written to, or load the Dataset into memory only. You can set the scale parameter to control the resolution. See the documentation for more options.
The requests and downloads may take some minutes.
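As a side note, a minimal sketch of the Google Drive alternative with the ee API could look like the following (it is not run in this tutorial; the description and folder names are hypothetical placeholders):
# Sketch of an alternative export to Google Drive for larger areas (not run here);
# 'S2_seasonal_mean' and 'etrainee_export' are hypothetical names
task = ee.batch.Export.image.toDrive(
    image=S2_seasonal_mean_ts.first(), # The aggregated collection holds a single image
    description='S2_seasonal_mean',
    folder='etrainee_export',
    region=aoi_fc.geometry(),
    crs='EPSG:25832',
    scale=10)
task.start() # Monitor progress in the GEE Code Editor 'Tasks' tab or via task.status()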
s_mean_path = data_dir / "S2_seasonal_mean.nc"
s_std_path = data_dir / "S2_seasonal_std.nc"
s_min_path = data_dir / "S2_seasonal_min.nc"
s_max_path = data_dir / "S2_seasonal_max.nc"
S2_seasonal_mean_ds = S2_seasonal_mean_ts.wx.to_xarray(path=s_mean_path, region = aoi_fc.geometry(), crs='EPSG:25832', scale=10)
S2_seasonal_std_ds = S2_seasonal_std_ts.wx.to_xarray(path=s_std_path, region = aoi_fc.geometry(), crs='EPSG:25832', scale=10)
S2_seasonal_min_ds = S2_seasonal_min_ts.wx.to_xarray(path=s_min_path, region = aoi_fc.geometry(), crs='EPSG:25832', scale=10)
S2_seasonal_max_ds = S2_seasonal_max_ts.wx.to_xarray(path=s_max_path, region = aoi_fc.geometry(), crs='EPSG:25832', scale=10)
Combine all Datasets with seasonal statistics into one Dataset (using their common dimension coordinates). This new Dataset contains all the seasonal statistics (spectral-temporal metrics) but is limited to one time stamp because we aggregated all observations into seasonal metrics.
import xarray as xr
S2_seasonal_ds = xr.combine_by_coords([S2_seasonal_mean_ds, S2_seasonal_std_ds, S2_seasonal_min_ds, S2_seasonal_max_ds])
S2_seasonal_ds
<xarray.Dataset>
Dimensions:      (time: 1, x: 767, y: 428)
Coordinates:
  * time         (time) datetime64[ns] 2019-04-02T10:27:44
  * x            (x) float64 6.818e+05 6.818e+05 ... 6.895e+05 6.895e+05
  * y            (y) float64 5.242e+06 5.242e+06 ... 5.237e+06 5.237e+06
    spatial_ref  int32 0
Data variables:
    NDVI_max     (time, y, x) float64 0.6992 0.7314 0.7144 ... 0.8441 0.9075
    NDVI_mean    (time, y, x) float64 0.5153 0.5224 0.5348 ... 0.7466 0.7546
    NDVI_min     (time, y, x) float64 0.1644 0.1643 0.1623 ... 0.2692 0.3074
    NDVI_stdDev  (time, y, x) float64 0.147 0.1533 0.1561 ... 0.1084 0.1111
Attributes:
    AREA_OR_POINT:           Area
    TIFFTAG_RESOLUTIONUNIT:  1 (unitless)
    TIFFTAG_XRESOLUTION:     1
    TIFFTAG_YRESOLUTION:     1
    _FillValue:              -32768.0
    scale_factor:            1.0
    add_offset:              0.0
Data preprocessing¶
Rasterize reference data¶
We will use the entire reference dataset for training a supervised machine learning classifier. We do not hold out a part of the samples for validation (testing the classification performance), but in theme 6 we will label new samples for this purpose.
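If you did want to hold out part of the samples already at this stage, a minimal sketch with scikit-learn's train_test_split could look like this (it is not used in the rest of this tutorial):
# Sketch only (not used in this tutorial): hold out 30 % of the reference points
# for validation, stratified by the landcover class
from sklearn.model_selection import train_test_split
train_pts, test_pts = train_test_split(ref_data, test_size=0.3,
                                       stratify=ref_data['landcover'], random_state=0)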
To make the reference data (training data) compatible with the spectral-temporal features, we rasterize the sample points using a template raster. First, create this template raster by writing one variable (extracted from the S2 imagery) to a GeoTiff using rioxarray.
import rioxarray
S2_seasonal_ds.NDVI_mean.rio.to_raster(data_dir / "NDVI_mean.tif")
Let's have a look at this template raster.
import rasterio
from rasterio.plot import show
raster_template = rasterio.open(data_dir / "NDVI_mean.tif")
print(raster_template.profile)
show(raster_template, title="Mean NDVI", cmap="BrBG")
{'driver': 'GTiff', 'dtype': 'float64', 'nodata': -32768.0, 'width': 767, 'height': 428, 'count': 1, 'crs': CRS.from_epsg(25832), 'transform': Affine(10.0, 0.0, 681820.0, 0.0, -10.0, 5241670.0), 'blockysize': 1, 'tiled': False, 'interleave': 'band'}
<Axes: title={'center': 'Mean NDVI'}>
Now we can rasterize the reference data with Geopandas and Rasterio (as shown here).
from rasterio import features
# Create (geometry, value) pairs, where value is the landcover attribute to burn into the raster
geom_value = ((geom, value) for geom, value in zip(ref_data.geometry, ref_data['landcover']))
# Rasterize vector using the shape and coordinate system of the template raster
ref_raster = features.rasterize(geom_value,
out_shape = raster_template.shape,
fill = raster_template.nodata,
out = None,
transform = raster_template.transform,
all_touched = True)
# Plot raster
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, figsize = (5, 5))
show(ref_raster, title="Reference data", ax = ax)
plt.gca().invert_yaxis() # Get current axes and invert y
The reference samples are only one pixel each and very hard to see. Write the rasterized reference data to disk and load it into a GIS to check that everything went fine (... looks perfect ;)). Each reference point is one labelled pixel now.
with rasterio.open(
data_dir / "rasterized_ref_data.tif", "w",
driver = "GTiff",
transform = raster_template.transform,
crs = raster_template.crs,
dtype = rasterio.uint8,
count = 1,
width = raster_template.width,
height = raster_template.height) as dst:
dst.write(ref_raster, indexes = 1)
Read the GeoTiff with rasterized reference data to an xarray DataArray.
ref_data_da = rioxarray.open_rasterio(data_dir / "rasterized_ref_data.tif")
Add the labels from this reference data to our Dataset as a new variable.
S2_seasonal_ds['labels'] = ref_data_da
Clean up the Dataset: Remove the dimensions time and band, which have length 1, using squeeze(), and drop the coordinates time, band and spatial_ref using reset_coords().
S2_seasonal_ds = S2_seasonal_ds.squeeze().reset_coords(['time', 'band', 'spatial_ref'], drop=True)
S2_seasonal_ds
<xarray.Dataset>
Dimensions:      (x: 767, y: 428)
Coordinates:
  * x            (x) float64 6.818e+05 6.818e+05 ... 6.895e+05 6.895e+05
  * y            (y) float64 5.242e+06 5.242e+06 ... 5.237e+06 5.237e+06
Data variables:
    NDVI_max     (y, x) float64 0.6992 0.7314 0.7144 ... 0.8714 0.8441 0.9075
    NDVI_mean    (y, x) float64 0.5153 0.5224 0.5348 ... 0.7425 0.7466 0.7546
    NDVI_min     (y, x) float64 0.1644 0.1643 0.1623 ... 0.2503 0.2692 0.3074
    NDVI_stdDev  (y, x) float64 0.147 0.1533 0.1561 ... 0.1192 0.1084 0.1111
    labels       (y, x) uint8 ...
Attributes:
    AREA_OR_POINT:           Area
    TIFFTAG_RESOLUTIONUNIT:  1 (unitless)
    TIFFTAG_XRESOLUTION:     1
    TIFFTAG_YRESOLUTION:     1
    _FillValue:              -32768.0
    scale_factor:            1.0
    add_offset:              0.0
Keep only the samples, i.e. pixels where a valid label (> 0) exists, and mask out all other pixels (i.e., set them to NaN).
samples_ds = S2_seasonal_ds.where(S2_seasonal_ds.labels > 0)
Flatten array (stack dimensions)¶
Our samples data set is a raster of shape (n_features, n_cells_y, n_cells_x). Scikit-learn, however, expects data of shape (n_samples, n_features). In other words, the scikit-learn algorithm doesn't care how the samples (pixels, each with feature values and a label) are ordered in 2D space. Therefore, we have to re-shape our data, i.e. flatten the x and y dimensions. We use the xarray method stack() for this operation. This makes it possible to keep track of the coordinate labels ('x' and 'y') during the next steps and to re-shape back to our 2D raster space later without losing information.
Convert Dataset to DataArray and stack x and y dimensions, call new dimension "samples". This is a multi-index object, which can be unstacked into x and y later.
samples_da = samples_ds.to_array()
samples_da_flat = samples_da.stack(samples=('x', 'y'))
print("Shape and dimensions of samples_da: ", samples_da.shape, samples_da.dims)
print("Shape and dimensions of samples_da_flat: ", samples_da_flat.shape, samples_da_flat.dims)
Shape and dimensions of samples_da:  (5, 428, 767) ('variable', 'y', 'x')
Shape and dimensions of samples_da_flat:  (5, 328276) ('variable', 'samples')
To get our flattened array into the shape (n_samples, n_features), as expected by scikit-learn, we reorder the dimensions using transpose().
samples_da_flat_t = samples_da_flat.transpose()
samples_da_flat_t.shape
(328276, 5)
Most scikit-learn estimators (including Random Forest) do not accept missing values, so we have to remove all NaNs in our samples.
samples_da_flat_t = samples_da_flat_t.dropna(dim='samples', how='any')
samples_da_flat_t.shape
(72, 5)
Define the predictors (features; X) and the predicted variable (dependent variable; y) in our samples.
X = samples_da_flat_t.sel(variable = ['NDVI_max', 'NDVI_min', 'NDVI_mean', 'NDVI_stdDev'])
y = samples_da_flat_t.sel(variable = ['labels'])
Random Forest classification using scikit-learn¶
We use scikit-learn to fit a Random Forest (RF) classifier (Breiman 2001) to the samples and apply this classifier to the entire AOI covered by our spectral-temporal feature rasters (we fix the random_state for reproducible results across executions).
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(oob_score=True, random_state=0) # Use default values for the number of trees and the other user-defined parameters of the RF algorithm
clf = clf.fit(X, y.squeeze(dim='variable')) # squeeze changes the shape of y to (n_samples,); without it, scikit-learn raises a data conversion warning
print(f'Out-of-bag error: {(1 - clf.oob_score_):.3f}') # Print OOB error rounded to three decimals
Out-of-bag error: 0.028
As a measure of the prediction error during this training phase, we printed the out-of-bag (OOB) error. The OOB error of a Random Forest is aggregated over those samples that were not used for building an individual tree during bootstrapping (random sampling with replacement).
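If you want to dig deeper, the per-sample OOB votes are also available. A minimal sketch (assuming the fitted clf and the X/y defined above) that derives the OOB-predicted labels and counts the misclassified samples:
# Sketch: derive OOB-predicted labels from the per-class OOB vote fractions
# (oob_decision_function_ is available because we set oob_score=True)
import numpy as np
oob_pred = clf.classes_[np.argmax(clf.oob_decision_function_, axis=1)]
n_wrong = int((oob_pred != y.squeeze(dim='variable').values).sum())
print(f'OOB-misclassified samples: {n_wrong} of {len(oob_pred)}')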
Let's also have a look at the feature importances provided by the Random Forest algorithm.
for i, j in zip(['NDVI_max', 'NDVI_min', 'NDVI_mean', 'NDVI_stdDev'], clf.feature_importances_):
print(f'Feature importance of {i}: {j:.3f}')
Feature importance of NDVI_max: 0.257
Feature importance of NDVI_min: 0.218
Feature importance of NDVI_mean: 0.215
Feature importance of NDVI_stdDev: 0.310
Basically, it seems that the classifier was fit well to the training data, and all four features contribute with roughly similar strength. So let's go on and classify all pixels.
Classify all pixels of our AOI¶
The steps above were done for the samples only (recall that we have masked all unsampled pixels using xarray's where()). Now we prepare the full rasters with spectral-temporal features for landcover classification across the entire AOI. (This could probably be simplified by re-ordering the processing steps.)
predict_da = S2_seasonal_ds.to_array()
predict_da = predict_da.stack(samples=('x', 'y'))
predict_da = predict_da.transpose()
X_predict = predict_da.sel(variable = ['NDVI_max', 'NDVI_min', 'NDVI_mean', 'NDVI_stdDev']).dropna(dim='samples', how='any')
Predict all pixels based on their feature vector and the classifier fitted on the samples.
labels_predicted = clf.predict(X_predict)
Convert the labels from float to integer.
import numpy as np
labels_predicted = labels_predicted.astype(int)
Un-flatten the classification output¶
To be able to un-flatten (unstack) the spatial dimensions x and y of our predicted labels, we use the flattened predict_da DataArray as a template. The original DataArray has the shape (n_samples, n_features); we need a template of shape (n_samples,), so we take only one of the variables and then fill this template with the predicted labels.
template_da_1d = predict_da[:, 0] # take only one variable
output_da_1d = template_da_1d.copy(data=labels_predicted) # fill template with predicted labels
Now unstack the output to recover the spatial dimensions. At some point we had swapped the dimensions with transpose(), so we have to undo this here by calling transpose() again.
Note that the output has "inherited" the metadata of the input data (such as CRS, cell size, etc.; stored as attributes). Unfortunately, it has also "inherited" the name of the variable used as template. To be able to rename it to 'landcover_id' we convert the DataArray to a Dataset.
output_da_2d = output_da_1d.unstack().transpose()
output_ds_2d = output_da_2d.expand_dims('variable').to_dataset(dim='variable')
output_ds_2d = output_ds_2d.rename({'NDVI_max': 'landcover_id'})
<__array_function__ internals>:200: RuntimeWarning: invalid value encountered in cast
Finally, write the classification output to disk.
landcover_out = output_ds_2d['landcover_id']
landcover_out.rio.write_crs(raster_template.crs, inplace=True)
landcover_out.rio.write_nodata(0, inplace=True) # Set 0 as the nodata value
landcover_out.rio.to_raster(data_dir / "landcover_id.tif")
Plot the results in a static map¶
We use the plotting capabilities of Pandas and xarray with matplotlib as plotting backend to create a static map with the reference data and the classification output (predicted labels for each pixel).
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
my_colors_by_ids = ("lightgreen", "brown", "darkgreen", "blue", "grey") # Define colors for landcover IDs
fig, ax = plt.subplots(1, figsize=(10,5))
output_ds_2d['landcover_id'].plot.imshow(ax=ax, levels=[1, 2, 3, 4, 5, 6], colors=my_colors_by_ids, cbar_kwargs={'shrink': 0.7})
ref_data.plot(ax=ax, column="landcover_string", # Column to use for color
cmap=ListedColormap(['brown', 'darkgreen', 'lightgreen', 'grey', 'blue']),
edgecolor='white',
legend=True)
plt.title('Landcover')
plt.show()
Interactive map of the results¶
Vectorize landcover¶
For plotting an interactive Leaflet map using the Folium package, we vectorize the landcover classes of our classification raster with rasterio.features.shapes and shapely.geometry.shape. This converts a 2D xarray.DataArray into a geopandas.GeoDataFrame.
from rasterio import features
from shapely.geometry import shape
import geopandas as gpd
geometry_list = []
value_list = []
lc_rast = rioxarray.open_rasterio(data_dir / "landcover_id.tif") # use rioxarray accessor to open GeoTiff as xarray DataArray
for shapedict, value in features.shapes(lc_rast, transform=lc_rast.rio.transform()): # get a pair of (polygon, value) for each feature found in the image.
value_list.append(value)
geometry_list.append(shape(shapedict)) # construct shapely polygons from GeoJSON-like dicts returned by rasterio.features.shapes
lc_dict = {'lc_id': value_list, 'geometry': geometry_list} # make a dictionary with class ID values and polygon geometries
lc_gdf = gpd.GeoDataFrame(lc_dict, crs=lc_rast.rio.crs) # create a GeoDataFrame from the dictionary
lc_gdf['lc_id'] = lc_gdf['lc_id'].astype(int) # landcover class ID to integer
So far, landcover is coded with IDs. For convenience let's add a column with landcover class names as strings.
landcover_dict={1: "Grassland", 2: "Cropland", 3: "Forest", 4: "Water", 5: "Other"}
lc_gdf["Landcover"] = lc_gdf['lc_id'].map(landcover_dict) # Map values of the new column according to the dictionary
Plot an interactive map¶
Finally, use folium to plot a map and compare the training samples and the prediction in spatial context. This may also help to spot deficiencies of the classification that are missed by the point samples (there is, e.g., no cropland in the northwestern corner).
from folium import plugins
# Show the AOI in a Folium map object
my_aoi = gpd.read_file(data_dir / 'aoi.gpkg')
m = my_aoi.explore(name="AOI", style_kwds={"fill": False}) # Do not fill the polygon
# Add predicted landcover layer (polygonized) to the map object
colormap=ListedColormap(['brown', 'darkgreen', 'lightgreen', 'grey', 'blue']) # Define a colormap (for alphabetically sorted class names)
lc_gdf.explore(m=m, column='Landcover', name="Landcover predicted", categorical=True, cmap=colormap)
# Add landcover reference point layer
colormap=ListedColormap(['lightgreen', 'brown', 'darkgreen', 'blue', 'grey']) # Define a colormap (for class IDs)
ref_points = gpd.read_file(data_dir / 'ref_data.gpkg')
ref_points.explore(
m=m, column="landcover", name="Landcover training samples",
marker_kwds={"radius": 4}, style_kwds={"color": "black", "weight": 1},
categorical=True, legend=False, cmap=colormap
)
# Add alternative background layers
folium.TileLayer(tiles='https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}',
attr='Tiles © Esri — Source: Esri, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, UPR-EGP, and the GIS User Community',
name='ESRI WorldImagery', control=True).add_to(m)
folium.TileLayer(tiles='Stamen Terrain',
name='Stamen Terrain', control=True).add_to(m)
folium.LayerControl(collapsed=False).add_to(m) # Add layer control
folium.plugins.Fullscreen().add_to(m) # Add a full screen button (for display in a browser)
m # Show map