Nature doesn’t change in neat administrative units. It changes field by field, river reach by river reach, and sometimes metre by metre. That’s why spatial granularity—how “fine” or “coarse” your spatial data is—often determines whether nature and biodiversity insights are decision-ready, or merely descriptive.
What is spatial granularity?
Spatial granularity describes the size of the spatial unit used to represent reality, and it determines how accurately you can represent, analyse, and map features. In practice, it is usually one of two things:
- In raster datasets (grids), granularity is closely tied to spatial resolution: the size of each pixel and the ground area it represents. NASA succinctly defines spatial resolution as the pixel size and the area each pixel covers on Earth’s surface.
- In vector datasets (points, lines, polygons), granularity is about the size and detail of the mapped features: for example, whether a supplier location is a single point, a farm boundary polygon, or a set of field parcels. Ordnance Survey’s GIS overview captures the core distinction: rasters represent space as cells in a grid, while vectors represent discrete features as geometries (points/lines/polygons).
What this looks like in the real world:
- A 1 km × 1 km raster pixel covers 100 hectares—large enough to contain a mosaic of forest patches, smallholder plots, roads, and waterways.
- A 10 m × 10 m pixel covers 0.01 hectares (100 m²)—often small enough to separate a hedgerow from a field, or a narrow riparian corridor from surrounding agriculture.
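A quick sanity check of that arithmetic, as a trivial Python helper (illustrative only):

```python
# Area of a square pixel, in hectares (1 hectare = 10,000 m^2).
def pixel_area_hectares(pixel_size_m: float) -> float:
    return (pixel_size_m ** 2) / 10_000

print(pixel_area_hectares(1_000))  # 1 km pixel  -> 100.0 ha
print(pixel_area_hectares(10))     # 10 m pixel  -> 0.01 ha (100 m^2)
```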
A useful way to think about spatial granularity is: what is the smallest change you need to detect to make a better decision? That “smallest meaningful unit” varies by use case, but the principle stays consistent: coarse data tends to average away local variation, while finer data can reveal it, provided you handle dataset quality and uncertainty properly.
Using spatial data for business decision making
Nature- and biodiversity-related decision making is increasingly spatial by necessity, because risks and impacts depend on where an activity occurs, not just which sector it sits in. All the key regulatory and assessment frameworks, such as TNFD, SBTN and the EU CSRD ESRS E4, highlight the importance of “place-based” analysis.
- For example, EU deforestation rules have made geolocation integral to due diligence: guidance describes collecting plot coordinates and, for larger plots, providing polygons with latitude/longitude points (with specified precision).
This is particularly important because the main drivers of nature loss—land/sea-use change, direct exploitation, climate change, pollution, and invasive alien species—are unevenly distributed and concentrate in specific places.
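To make the geolocation example above concrete: a plot boundary is typically exchanged as a polygon of longitude/latitude vertices, for instance as GeoJSON. The snippet below is a minimal, hypothetical sketch; the identifiers, coordinates and property names are placeholders, not the official EUDR submission schema.

```python
import json

# Illustrative only: a hypothetical plot boundary as a GeoJSON Feature.
plot_feature = {
    "type": "Feature",
    "properties": {
        "supplier_id": "SUP-001",   # placeholder identifier
        "commodity": "cocoa",
    },
    "geometry": {
        "type": "Polygon",
        "coordinates": [[           # one exterior ring of [longitude, latitude] pairs
            [-5.412345, 6.823456],
            [-5.410987, 6.823512],
            [-5.410902, 6.821998],
            [-5.412301, 6.821931],
            [-5.412345, 6.823456],  # the ring closes on its first vertex
        ]],
    },
}

print(json.dumps(plot_feature, indent=2))
```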
Here are a few concrete examples of how spatial data supports better nature decisions:
- Detecting land cover change (conversion, deforestation, degradation)
Time-series satellite imagery makes it possible to monitor vegetation and land cover dynamics at repeat intervals. Sentinel‑2, for instance, provides multispectral imagery with 10 m bands (and additional bands at 20 m and 60 m), supporting land monitoring applications including land-use and land-cover change.
Derived “analysis-ready” products take this further. Dynamic World was developed in part because global land-cover products have historically been annual and lagged; it moves towards near real-time land-cover mapping at 10 m, allowing organisations to see change closer to when it happens (a short query sketch follows these examples).
- Detecting and responding to pollution events
Some pollution events are highly localised and time-sensitive—making both granularity and revisit frequency important. Copernicus highlights how Sentinel‑1 radar data (cloud-penetrating) can support oil spill detection and early-stage alerts in marine environments.
For atmospheric pollution, Sentinel‑5P’s TROPOMI instrument improved its effective spatial resolution over time (with high spatial resolution implemented from August 2019 for certain products), enabling finer identification of pollution plumes and hotspots.
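As a flavour of what working with these products looks like, here is a minimal sketch that pulls recent Dynamic World land-cover labels for an area of interest using the Earth Engine Python API. It assumes the `earthengine-api` package is installed and authenticated; the dates and coordinates are placeholders.

```python
import ee

ee.Initialize()

# Hypothetical area of interest (xmin, ymin, xmax, ymax in degrees).
aoi = ee.Geometry.Rectangle([-53.6, -11.9, -53.4, -11.7])

# Dynamic World: near real-time 10 m land-cover classifications.
dw = (
    ee.ImageCollection("GOOGLE/DYNAMICWORLD/V1")
    .filterBounds(aoi)
    .filterDate("2024-01-01", "2024-03-31")
)

# 'label' holds the most likely of the nine land-cover classes per pixel;
# the per-pixel mode gives a simple composite for the period.
label_composite = dw.select("label").mode().clip(aoi)

# Count pixels per class at the native 10 m scale as a quick area summary.
histogram = label_composite.reduceRegion(
    reducer=ee.Reducer.frequencyHistogram(),
    geometry=aoi,
    scale=10,
    maxPixels=1e9,
)
print(histogram.getInfo())
```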
Significant improvements in data quality over the past 20 years
Over the last two decades, spatial granularity in land-cover monitoring has improved through three reinforcing shifts:
- Better sensors have dramatically increased the spatial detail available at global scale. Early global land-cover datasets such as GlobCover (~300 m) and MODIS products (~500 m) provided the first consistent global views of land cover. Today, systems such as ESA WorldCover and Google’s Dynamic World deliver global 10 m land-cover mapping derived from Sentinel-1 and Sentinel-2 imagery, a 30- to 50-fold improvement in pixel size that makes much finer landscape patterns visible.
- More open data also expanded what organisations could actually use. Even when high-quality imagery existed, access was often restricted or expensive. Many people consider the USGS decision to make the Landsat archive free and open in 2008 a turning point because it enabled widespread use of 30 m Earth observation data across research, government, and operational monitoring.
- More scalable processing has made high-granularity data usable in practice. Modern cloud-based geospatial platforms—most notably Google Earth Engine—allow planetary-scale analysis by combining vast imagery archives with distributed computing. This infrastructure enables applications such as deforestation monitoring, drought analysis, and water management, and supports near-real-time pipelines like Dynamic World that can generate thousands of land-cover predictions per day from newly acquired Sentinel-2 imagery.
What the right granularity can bring you
Spatial granularity changes what you can see, what you can prove, and where you take action.
Consider a very practical example: deforestation risk near farms.
- With a 1 km × 1 km dataset, a single pixel can contain a mix of cropland, tree cover, and settlements.
- If a supplier’s farm is near a forest edge and clearing happens in small patches (or in narrow strips), the “dominant class” of that 1 km cell may not change—even though the on-the-ground reality has.
- Coarse data can quietly convert a real risk signal into “no material change”, simply by averaging it out; the sketch after this list illustrates the effect with numbers.
- With 10 m data, you can often detect:
  - small clearings that would be invisible inside a 1 km mixed pixel,
  - whether loss is happening inside a supplier boundary versus outside it,
  - whether remaining natural habitat exists as connected blocks or fragmented remnants.
This materially changes decisions such as:
- which suppliers to prioritise for engagement,
- where to target monitoring or remediation budgets,
- whether a sourcing region is genuinely stable or simply “smoothed” by low-resolution data.
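The averaging effect described above is easy to demonstrate with a toy example. The sketch below (plain NumPy, with made-up values) treats one 1 km cell as a 100 × 100 block of 10 m pixels and shows how a real 3 ha clearing leaves the cell’s dominant class unchanged.

```python
import numpy as np

# 1 = tree cover, 0 = cleared. Start with a fully forested 1 km cell
# seen as a 100 x 100 block of 10 m pixels.
block = np.ones((100, 100), dtype=int)

# A 3 ha clearing appears along the forest edge (300 pixels of 0.01 ha each).
block[0:10, 0:30] = 0

fine_loss_ha = (block == 0).sum() * 0.01    # loss visible at 10 m
coarse_label = int(round(block.mean()))     # majority class of the whole 1 km cell

print(f"Loss visible at 10 m: {fine_loss_ha:.1f} ha")       # -> 3.0 ha
print(f"Dominant class of the 1 km cell: {coarse_label}")   # -> 1, still 'tree cover'
```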
The same logic applies to biodiversity hotspots and corridors.
- Narrow habitat connections—riparian strips, hedgerows, coastal margins—may be ecologically significant but spatially small. Because connectivity is crucial to species movement and genetic diversity, missing these features can lead to the wrong interventions (for example, restoring habitat in a place that remains functionally isolated).
Example from the Natcap platform: 10 m vs 100 m granularity at a site covering more than 500,000 ha.
One important caveat: higher resolution isn’t automatically “better”.
- It increases data volumes, can increase classification noise, and may create false confidence if you don’t communicate uncertainty. This is why probabilistic products (where you can see class likelihoods, not just a single label) are increasingly valuable for decision workflows; a sketch of this idea follows this list.
- Satellite imagery, even at very high granularity, does not yet capture some key local characteristics well, so teams must collect ground-based data. This is especially true for plant species differentiation and animal species richness indicators.
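One way to act on the uncertainty point is to work with class probabilities rather than a single label. The sketch below (same hypothetical Earth Engine setup as the earlier Dynamic World query, and an arbitrary 0.6 threshold chosen purely for illustration) derives a per-pixel confidence layer from Dynamic World’s probability bands.

```python
import ee

ee.Initialize()

aoi = ee.Geometry.Rectangle([-53.6, -11.9, -53.4, -11.7])  # hypothetical AOI

probability_bands = [
    "water", "trees", "grass", "flooded_vegetation", "crops",
    "shrub_and_scrub", "built", "bare", "snow_and_ice",
]

# Average the per-class probabilities over the period instead of taking labels.
probs = (
    ee.ImageCollection("GOOGLE/DYNAMICWORLD/V1")
    .filterBounds(aoi)
    .filterDate("2024-01-01", "2024-03-31")
    .select(probability_bands)
    .mean()
)

# Per-pixel confidence = probability of the most likely class.
confidence = probs.reduce(ee.Reducer.max())

# Flag pixels where even the top class is below 60% probability.
low_confidence = confidence.lt(0.6)

mean_confidence = confidence.reduceRegion(
    reducer=ee.Reducer.mean(), geometry=aoi, scale=10, maxPixels=1e9
)
print(mean_confidence.getInfo())
```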
How Natcap approaches spatial granularity
At Natcap, the goal is not “more data”. The platform delivers decision-ready nature intelligence by translating spatial science into metrics organisations can use for governance, risk management, procurement, and reporting.
A key part of that is choosing datasets with the right spatial granularity for the decision at hand. Natcap draws on more than 40 geospatial datasets with global coverage to describe the state of nature at the best available granularity, across detailed categories such as soil quality, land cover, water availability, and pollution levels.
- Natcap uses Google Dynamic World (10 m) to classify land cover (including natural vs non-natural), and to quantify change over time—supporting metrics such as deforestation, land conversion, and change in natural vegetation while accounting for uncertainty.
- Natcap also integrates established risk layers such as WRI Aqueduct to understand water risk exposure (e.g., water stress) for specific areas at the best granularity available (HydroBASINS Level 6), because a “country average” rarely matches what is happening at site level, and because water is a dynamic resource that is best assessed at basin level.
- Natcap has also recently integrated high-granularity soil erosion layers from the JRC ESDAC datasets, including 10 m resolution maps for topographical factors and 1 km maps for the soil erodibility factor (K). These provide precise estimates of soil characteristics from which Natcap calculates soil erosion rates (a generic sketch of that calculation follows this list).
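For the soil erosion point, the widely used way to combine such factor maps is the (R)USLE formulation, A = R × K × LS × C × P. The sketch below is a generic illustration with made-up per-pixel values; it is not Natcap’s actual implementation or the ESDAC data itself.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)  # a hypothetical 100 x 100 pixel tile

R = np.full(shape, 700.0)           # rainfall erosivity (MJ mm ha^-1 h^-1 yr^-1)
K = rng.uniform(0.02, 0.05, shape)  # soil erodibility (t ha h ha^-1 MJ^-1 mm^-1)
LS = rng.uniform(0.1, 5.0, shape)   # slope length and steepness factor (dimensionless)
C = rng.uniform(0.001, 0.3, shape)  # cover-management factor (dimensionless)
P = np.ones(shape)                  # support-practice factor (1 = no practices)

A = R * K * LS * C * P              # estimated soil loss in t ha^-1 yr^-1

print(f"Mean estimated soil loss: {A.mean():.2f} t/ha/yr")
```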
Granularity is not a nice-to-have in this context. It is the difference between:
- broad screening and defensible evidence,
- generic commitments and targeted action,
- a map that looks reassuring, and a hotspot you can actually manage.
Although some gaps still exist, spatial data is constantly improving. Natcap continuously tracks advances in satellites, land-cover products, and analytical methods so that the datasets behind your nature metrics keep pace with the decisions (and scrutiny) they need to support.
Ready to move from broad screening to location-specific action? See how Natcap’s nature intelligence platform uses high-resolution spatial data to help teams prioritise sites, assess risk, and make more defensible nature-related decisions.