In Situ Observation-Constrained Global Surface Soil Moisture Using Random Forest Model

The inherent biases of different long-term gridded surface soil moisture (SSM) products, which are not constrained by in situ observations, imply different spatio-temporal patterns. In this study, a Random Forest (RF) model was trained to predict SSM from relevant land surface feature variables (i.e., land surface temperature, vegetation indices, soil texture, and geographical information) and precipitation, using the in situ soil moisture data of the International Soil Moisture Network (ISMN). The RF model achieves an RMSE of 0.05 m3 m−3 and a correlation coefficient of 0.9. The calculated impurity-based feature importance indicates that the Antecedent Precipitation Index has the strongest influence on the predicted soil moisture. The geographical coordinates also significantly influence the prediction (i.e., the RMSE was reduced to 0.03 m3 m−3 after including geographical coordinates), followed by land surface temperature, vegetation indices, and soil texture. The spatio-temporal pattern of the RF-predicted SSM was compared with the European Space Agency Climate Change Initiative (ESA-CCI) soil moisture product using both time–longitude and time–latitude diagrams. The results indicate that the RF SSM captures the spatial distribution and the daily, seasonal, and annual variabilities globally.
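As an illustration of the approach summarised above, the following minimal sketch (not the authors' code) trains a scikit-learn Random Forest on a hypothetical table of ISMN-matched samples; the file name and feature columns (antecedent precipitation index, land surface temperature, NDVI, texture fractions, coordinates) are placeholders.

```python
# Minimal sketch of RF-based SSM prediction; input file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("ismn_training_samples.csv")  # placeholder table, one row per station-date
features = ["api", "lst", "ndvi", "sand", "clay", "lon", "lat"]  # API, LST, vegetation index, texture, coordinates

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["ssm"], test_size=0.2, random_state=42
)

rf = RandomForestRegressor(n_estimators=500, n_jobs=-1, random_state=42)
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
print(f"RMSE: {rmse:.3f} m3 m-3")
print("Impurity-based feature importances:", dict(zip(features, rf.feature_importances_)))
```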

How to cite: Zhang, L.; Zeng, Y.; Zhuang, R.; Szabó, B.; Manfreda, S.; Han, Q.; Su, Z. In Situ Observation-Constrained Global Surface Soil Moisture Using Random Forest Model. Remote Sens. 2021, 13, 4893. [pdf]

3D Models of the Cultural Heritage

UAS-based surveys and structure from motion (SfM) can lead to extraordinary and realistic 3D models to preserve our cultural heritage.

In our recent applications, our members have been developing new strategies to build extremely detailed point clouds using UAS and portable cameras. In the following, we provide some examples developed within the HARMONIOUS partnership:

Planning the future of Harmonious

The Department of Topography and Cartography of the Technical University of Madrid hosted the working group meeting of the COST Action HARMONIOUS from 27 to 30 October.

During this meeting, WG1 finalized the Glossary of terms used for UAS-based applications, organized into three macro-categories: platforms and equipment, software, and outputs.

GLOSSARY

1 Category: Platforms and Equipment 

  • Global Navigation Satellite System (GNSS) is a constellation of satellites used for positioning a receiver on the ground.
  • GALILEO is the European GNSS solution used to determine the ground position of an object.
  • GPS is the most common GNSS; operated by the USA, it is based on the reception of signals from about 24 orbiting satellites and is used to determine the ground position of an object. This global and accurate system allows users to know their exact location, velocity, and time 24 hours per day, anywhere in the world.
  • Light Detection and Ranging (LiDAR) is a 3D remote sensing technique that uses laser pulses to locate the points of the acquired point cloud. LiDAR data products are often managed within a gridded or raster data format.
  • Multispectral imaging captures image data within specific wavelength ranges across the electromagnetic spectrum. The spectral regions used are often at least partially outside the visible range, covering parts of the infrared and ultraviolet regions. For example, a multispectral imager may provide wavelength channels for near-UV, red, green, blue, near-infrared, mid-infrared and far-infrared light – sometimes even thermal radiation.
  • Near Infrared (NIR) is a subset of the infrared band that lies just outside the range of what humans can see. NIR cameras typically cover the wavelength range of 900 to 1700 nm, a range that is best suited for analysing absorption and radiation characteristics.
  • Noise is an irregular fluctuation that accompanies a transmitted electrical signal but is not part of it and tends to obscure it. The main sources of noise fall into two categories: physical noise, linked to physical constraints such as the corpuscular nature of light, and hardware noise, linked to mechanical issues in the camera.
  • Optical Camera is a photographic device designed to form and record an image of an object. An optical camera sensor is an imager that collects visible light (400–700 nm).
  • Payload is the weight a drone or unmanned aerial vehicle (UAV) can carry on board. It is usually counted outside of the weight of the drone itself and includes anything additional to the drone – such as extra cameras, sensors, or packages for delivery.
  • Pixel size of an image identifies the spatial resolution and it is dependent on the sensor capabilities. It provides a measure of the image resolution, which is higher with finer grids, where the degree of recognizable details increases.
  • RGB Camera is equipped with a standard Complementary Metal Oxide Semiconductor (CMOS) sensor through which colour images of persons and objects are acquired. In a CMOS sensor, the charge from each photosensitive pixel is converted to a voltage at the pixel site, and the signals are multiplexed by row and column to multiple on-chip Analog-to-Digital Converters (ADCs). In an RGB camera, the still-photo capability is commonly expressed in megapixels, which define the number of pixels in a single photo, while video acquisition is usually described with terms such as Full HD or Ultra HD.
  • Thermal Camera is a non-contact temperature measurement sensor. All objects above absolute zero emit infrared energy as a function of their temperature; this energy is generated by the vibration of atoms and molecules, which move faster the warmer the object is. This movement is emitted as infrared radiation, which our eyes cannot see but our skin can feel as heat. Thermal imaging uses special infrared sensors to capture this part of the spectrum, which is invisible to the naked eye. Thermal energy can be emitted, absorbed, or reflected. Infrared cannot see through objects but can detect differences in radiated thermal energy between materials, revealing, for example, thermal bridging and heat transfer.
  • Unmanned Aerial System (UAS) is a remotely controlled professional system integrating several technological components (e.g., navigation system, gyroscope, and sensors) in order to perform spatial observations.
  • Unmanned Aerial Vehicle (UAV) is a remotely controlled vehicle able to perform several operations and observations.

2 Category: Software

  • Aero-triangulation is the method most frequently applied in photogrammetry to determine the X, Y, and Z ground coordinates of individual points based on photo coordinate measurements. The purpose of aero-triangulation is to densify a geodetic network in order to provide images with a sufficient number of control points for topographic mapping. Deliverables from aero-triangulation may be three-dimensional or planimetric, depending on the number of point coordinates determined.
  • Checkpoints are Ground Control Points (GCPs) used to validate the relative and absolute accuracy of the geo-localization of maps. The checkpoints are not used for processing. Instead, they are used to calculate the error of the map by comparing the known measured locations of the checkpoints to the coordinates of the checkpoints shown on the map.
  • Flight Type refers to the flight mission mode (manual or autonomous). In manual mode, a pilot controls the UAS during the flight. An autonomous mission is programmed to react to various types of events in a preset and direct way by means of dedicated sensors; this makes the UAS flight predictable and subject to intervention by a remote pilot only if necessary.
  • Flight Time is a measurement of the total time needed to complete a mission, from the first to the last image taken during a flight. Flight time can be used to characterize the wind impacts on flight performance of UAS.    
  • Ground Control Points (GCPs) are user-defined tie points, determined beforehand, within the mapping polygon and used in the process of indirectly georeferencing UAS images. Such points can be permanent or portable markers, with or without georeferenced data.
  • Masking is the procedure of excluding some part of the scene from image analysis. For instance, clouds, trees, bushes and their shadows should not be considered in further processing, such as in vegetation studies for the evaluation of crop vegetation indices.        
  • Orthorectification is a process of linearly scaling the image pixel size to real-world distances. This is achieved by accounting for the impacts of camera perspective and relative height above the sensed object. The objective is the reprojection of the original image, which could be captured from oblique viewing angles looking at unlevelled terrain, into an image plane to generate a distortion-free photo. 
  • Point Cloud is a collection of data points in three-dimensional space. Each point contains several measurements, including its coordinates along the X, Y, and Z axes, and sometimes additional data such as a colour value, stored in RGB format, and a luminance value, which determines how bright the point is.
  • Radiometric Calibration is a process that allows the transformation of the intensities or digital numbers (DN) of multiple images in order to describe an area and detect relative changes of the landscape, removing anomalies due to atmospheric factors or illumination conditions. 
  • Structure from Motion (SfM) is the process of reconstructing a three-dimensional model from the projections derived from a series of images taken from different viewpoints. Camera orientation and scene geometry are reconstructed simultaneously through the automatic identification of matching features in multiple images (a minimal two-view sketch is given after this list).
  • Tie Point is a feature that can be clearly identified in two or more overlapping images or aerial photographs and selected as a reference point, and whose ground coordinates are not known a priori; these coordinates are computed during block triangulation. Tie points therefore represent matches between key points detected in different images and provide the link between images for relative 3D positioning.
  • Precision is a description of random errors in the 2D/3D representations.
  • Quality Assessment is an estimation of the statistical geometric and radiometric errors of the final products, obtained using ground-truth data.
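As a companion to the Structure from Motion entry above, here is a minimal, hypothetical two-view sketch using OpenCV: it matches features between two overlapping images and recovers the relative camera pose. File names and the camera matrix are placeholders, and a full SfM pipeline would add many more views and bundle adjustment.

```python
# Two-view SfM sketch (illustrative only): feature matching and relative pose recovery.
import cv2
import numpy as np

img1 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image pair
img2 = cv2.imread("frame_002.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[3000.0, 0.0, 2000.0],
              [0.0, 3000.0, 1500.0],
              [0.0, 0.0, 1.0]])  # assumed camera intrinsics

orb = cv2.ORB_create(5000)                        # detect and describe key points
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)  # epipolar geometry
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)      # relative orientation
print("Relative rotation:\n", R)
print("Relative translation (up to scale):\n", t)
```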

3 Category: UAS-based Outputs

  • 2D Model is a two-dimensional representation of the Earth defined by two coordinates, X and Y.
  • 3D Model is a mathematical or virtual representation of a three-dimensional object.
  • 2.5D Model (Pseudo 3D Model) is a three-dimensional representation that uses X and Y coordinates, each associated with a single elevation value, in order to relate different points.
  • Digital Elevation Model (DEM) or Digital Height Model (DHM) is a gridded image describing the altitude of the Earth's surface, excluding all other objects, artificial or natural.
  • Digital Surface Model (DSM) is a gridded image describing the altitude of the Earth's surface, including all other objects, artificial or natural. For instance, the DSM provides information about the dimensions of buildings and forests.
  • Digital Terrain Model (DTM) is a vector or raster dataset consisting of a virtual representation of the land surface within the mapping polygon. In a DTM, the height of each point refers to the bare ground.
  • Orthophoto is an aerial or terrestrial photograph that has been geometrically corrected to make its scale uniform, so that it can be used as a map. Since each pixel of the orthophoto has X and Y coordinates, it can be overlapped with other orthophotos and used to measure true distances of features within the photograph.
  • Orthomosaic is a high-resolution image made by combining many orthophotos. It is a single, radiometrically corrected image that offers a photorealistic representation of an area and can support surveyor-grade measurements of topography, infrastructure, and buildings.
  • Feature Identification is vector information computed from images using artificial intelligence algorithms in order to identify objects (roads, buildings, bridges, etc.) automatically.
  • Point Cloud is a set of data points in space representing a three-dimensional object. Each point position has its set of Cartesian coordinates (X, Y, Z). It can be generated from overlapping images or LiDAR sensors.
  • Point Cloud Classification is the output of an algorithm that classifies the points of a cloud by computing a set of geometric and radiometric attributes.
  • Image Segmentation is a process that partitions an image into regions that are clearly distinguishable based on texture and colour.
  • Triangulated Irregular Network (TIN) is a pseudo three-dimensional representation obtained by connecting the points of a point cloud with triangles.
  • Vegetation Indices (VIs) are combinations of surface reflectance at two or more wavelengths designed to highlight a particular property of vegetation. VIs are designed to maximize sensitivity to the vegetation characteristics while minimizing confounding factors such as soil background reflectance and directional or atmospheric effects. VIs appear in the scientific literature under different forms, such as NDVI, EVI, and SAVI (a minimal NDVI computation is sketched after this list).
  • Aerial photograph is an image taken from an airborne platform (e.g., a UAS) using a precision camera. From aerial photographs, it is possible to derive qualitative information about the depicted areas, such as land use/land cover, topographical forms, and soil types.
  • Terrestrial photograph is an image taken from the Earth's surface using a camera whose orientation is, in most cases, not nadiral.
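As a concrete companion to the Vegetation Indices entry above, the snippet below computes NDVI from two co-registered reflectance bands; the arrays are placeholder values, not real imagery.

```python
# Illustrative NDVI computation from hypothetical red and near-infrared reflectance bands.
import numpy as np

red = np.array([[0.08, 0.12], [0.10, 0.09]])   # placeholder red reflectance
nir = np.array([[0.45, 0.40], [0.50, 0.42]])   # placeholder near-infrared reflectance
ndvi = (nir - red) / (nir + red + 1e-9)         # small epsilon avoids division by zero
print(ndvi)
```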

Characterizing vegetation complexity with unmanned aerial systems (UAS) – A framework and synthesis

Ecosystem complexity is among the important drivers of biodiversity and ecosystem functioning, and unmanned aerial systems (UASs) are becoming an important tool for characterizing vegetation patterns and processes. The variety of UAS applications is immense, and so are the procedures described in the literature to process UAS data. Optimizing the workflow is still a matter of discussion. Here, we present a comprehensive synthesis aiming to identify common rules that shape the workflows applied in UAS-based studies of ecosystem complexity. Analysing the studies, we found similarities irrespective of the ecosystem, according to the character of the property addressed, such as species composition (biodiversity), ecosystem structure (stand volume/complexity), plant status (phenology and stress levels), and dynamics (disturbances and regeneration). We propose a general framework that allows UAS-based vegetation surveys to be designed according to their purpose and the component of ecosystem complexity addressed. We support the framework with detailed schemes as well as examples of best practice from UAS studies covering each of the vegetation properties (i.e. composition, structure, status and dynamics) and related applications. For an efficient UAS survey, the following points are crucial: knowledge of the phenomenon; choice of platform, sensor, resolution (temporal, spatial and spectral), model and classification algorithm according to the phenomenon; and careful interpretation of the results. The simpler the procedure, the more robust, repeatable, applicable and cost-effective it is. Therefore, a proper design can minimize the effort while maximizing the quality of the results.

How to cite: Müllerová, J., X. Gago, M. Bučas, J. Company, J. Estrany, J. Fortesa, S. Manfreda, A. Michez, M. Mokroš, G. Paulus, E. Tiškus, M. A. Tsiafouli, R. Kent, Characterizing vegetation complexity with unmanned aerial systems (UAS) – A framework and synthesis, Ecological Indicators, Volume 131, November 2021, 108156. [pdf]

Mapping Water Infiltration Rate Using Ground and UAV Hyperspectral Data: A Case Study of Alento, Italy

Water infiltration rate (WIR) into the soil profile was investigated through a comprehensive study harnessing spectral information of the soil surface. As soil spectroscopy provides invaluable information on soil attributes, and as WIR is a soil surface-dependent property, field spectroscopy may model WIR better than traditional laboratory spectral measurements. This is because sampling for the latter disrupts the soil-surface status. A field soil spectral library (FSSL), consisting of 114 samples with different textures from six different sites over the Mediterranean basin, combined with traditional laboratory spectral measurements, was created. Next, partial least squares regression analysis was conducted on the spectral and WIR data in different soil texture groups, showing better performance of the field spectral observations compared to traditional laboratory spectroscopy. Moreover, several quantitative spectral properties were lost due to the sampling procedure, and separating the samples according to texture gave higher accuracies. Although the visible near-infrared–shortwave infrared (VNIR–SWIR) spectral region provided better accuracy, we resampled the spectral data to the resolution of a Cubert hyperspectral sensor (VNIR). This hyperspectral sensor was then assembled on an unmanned aerial vehicle (UAV) to apply one selected spectral-based model to the UAV data and map the WIR in a semi-vegetated area within the Alento catchment, Italy. Comprehensive spectral and WIR ground-truth measurements were carried out simultaneously with the UAV–Cubert sensor flight. The results were satisfactorily validated on the ground using field samples, followed by a spatial uncertainty analysis, concluding that the UAV with hyperspectral remote sensing can be used to map soil surface-related soil properties.
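The following minimal sketch (not the authors' workflow) illustrates the core statistical step described above: a partial least squares regression linking soil surface spectra to WIR, evaluated by cross-validation. The spectra, the WIR values, and the number of latent variables are hypothetical placeholders.

```python
# PLSR sketch for spectra-to-WIR modelling; all data here are random placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
spectra = rng.random((114, 200))   # 114 samples x 200 spectral bands (placeholder)
wir = rng.random(114)              # measured infiltration rates (placeholder)

pls = PLSRegression(n_components=8)                      # number of latent variables is an assumption
wir_cv = cross_val_predict(pls, spectra, wir, cv=5).ravel()
print("Cross-validated R2:", r2_score(wir, wir_cv))
```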

How to cite: Francos, N.; Romano, N.; Nasta, P.; Zeng, Y.; Szabó, B.; Manfreda, S.; Ciraolo, G.; Mészáros, J.; Zhuang, R.; Su, B.; Ben-Dor, E. Mapping Water Infiltration Rate Using Ground and UAV Hyperspectral Data: A Case Study of Alento, Italy. Remote Sensing 2021, 13, 2606 (doi: 10.3390/rs13132606). [pdf]

HARMONIOUS deliverables of 2020

This year, the members of the COST Action HARMONIOUS produced quite an impressive number of results while working online. Imagine what we could do without restrictions!
See the following list:
1. Use of UAVs with the simplified “triangle” technique https://lnkd.in/dFQftqY
2. Identifying the optimal spatial distribution of tracers https://lnkd.in/dyEcmzq
3. A geostatistical approach to map near-surface soil moisture https://lnkd.in/dymZzHB
4. Refining image-velocimetry performances for streamflow monitoring https://lnkd.in/dyQzvyc
5. Metrics for the quantification of seeding characteristics https://lnkd.in/gvMBe4c
6. Harmonisation of image velocimetry techniques for river surface velocity observations https://lnkd.in/d-ygHpY
7. An integrative information aqueduct to close the gaps in water observations https://lnkd.in/dfTHZcG
8. Practical guidance for UAS-based environmental mapping https://lnkd.in/dAAuFmf
9. Long-term soil moisture observations over Tibetan Plateau https://lnkd.in/dguKMCE
10. Image velocimetry techniques under low flow conditions https://lnkd.in/dGRwY9Y


#hydrology #environmentalmonitoring #remotesensing #UAS #rivermonitoring

Seeding metrics for error minimisation

River streamflow monitoring is currently being transformed by emerging innovative technologies. Fixed and mobile measuring systems are capable of quantifying surface flow velocities and discharges, relying on video acquisitions. This camera-gauging framework is sensitive not only to what the camera can "observe" but also to field circumstances such as challenging weather conditions, river background transparency, and the characteristics of transiting seeding material, among others. This short communication introduces the novel idea of optimising image velocimetry techniques by selecting the most informative sequence of frames within the available video. The selection of the optimal frame window is based on two reasonable criteria: i) maximisation of the number of frames, subject to ii) minimisation of the recently introduced dimensionless seeding distribution index (SDI). The SDI combines seeding characteristics such as seeding density and spatial clustering of tracers, which are used as a proxy to enhance the reliability of image velocimetry techniques. Two field case studies were considered as a proof of concept of the proposed framework; seeding metrics were estimated and averaged in time to select the proper application window. The selected frames were analysed using LSPIV to estimate surface flow velocities and river discharge. The results highlight that the proposed framework can lead to a significant error reduction. In particular, the computed discharge errors for the optimal portion of the footage were about 0.40% and 0.12% for the two case studies, respectively. These values were lower than those obtained when considering all available frames.
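The frame-window selection idea can be sketched as follows. Assuming a per-frame SDI series has already been computed, the snippet scans candidate windows and keeps the longest one whose mean SDI stays below a threshold; the SDI values, the threshold, and the minimum window length are placeholders, and the actual SDI definition and selection criterion follow the paper.

```python
# Illustrative frame-window selection driven by a per-frame SDI series (placeholder data).
import numpy as np

sdi = np.random.default_rng(1).random(300)   # placeholder per-frame SDI values
min_len, sdi_max = 50, 0.35                   # assumed minimum window length and SDI threshold

best = None
for start in range(len(sdi)):
    for end in range(start + min_len, len(sdi) + 1):
        if sdi[start:end].mean() <= sdi_max:
            length = end - start
            if best is None or length > best[0]:
                best = (length, start, end, sdi[start:end].mean())

if best:
    print(f"Selected frames {best[1]}-{best[2] - 1}, mean SDI {best[3]:.3f}")
else:
    print("No window satisfies the SDI constraint")
```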

How to cite: Pizarro, A., S. F. Dal Sasso, S. Manfreda, Refining image-velocimetry performances for streamflow monitoring: Seeding metrics to errors minimisation, Hydrological Processes, (doi: 10.1002/hyp.13919), 2020.

A Geostatistical Approach to Map Near-Surface Soil Moisture Through Hyperspatial Resolution Thermal Inertia

Thermal inertia has been applied to map soil water content by exploiting remote sensing data in the shortwave and longwave regions of the electromagnetic spectrum. In recent years, optical and thermal cameras have been sufficiently miniaturized to be carried onboard unmanned aerial systems (UASs), which offer unprecedented potential to derive hyperspatial resolution thermal inertia for soil water content mapping. In this study, we apply a simplification of thermal inertia, the apparent thermal inertia (ATI), over pixels where the underlying thermal inertia hypotheses are fulfilled (unshaded bare soil). A kriging algorithm is then used to spatialize the ATI and obtain a soil water content map. The proposed method was applied to an experimental area of the Alento River catchment, in southern Italy. Daytime radiometric optical multispectral images and day- and nighttime radiometric thermal images were acquired via a UAS, while in situ soil water content was measured through the thermo-gravimetric and time domain reflectometry (TDR) methods. The coefficient of determination between ATI and soil water content measured over unshaded bare soil was 0.67 for the gravimetric method and 0.73 for TDR. After interpolation, the correlation slightly decreased due to the introduction of measurements at vegetated or shadowed positions (r² = 0.59 for the gravimetric method; r² = 0.65 for TDR). The proposed method shows promising results for mapping soil water content even over vegetated or shadowed areas by exploiting hyperspatial resolution data and geostatistical analysis.
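A minimal sketch of the two processing steps described above, under strong simplifications: apparent thermal inertia computed as ATI = (1 − albedo) / ΔT from day/night surface temperatures, followed by ordinary kriging of the point values using the pykrige package. All input arrays, coordinates, and variogram settings are hypothetical placeholders, not the authors' processing chain.

```python
# ATI computation and ordinary kriging sketch; all inputs are synthetic placeholders.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
n = 40
x = rng.uniform(0.0, 100.0, n)                # placeholder pixel coordinates [m]
y = rng.uniform(0.0, 100.0, n)
albedo = rng.uniform(0.15, 0.30, n)           # placeholder broadband albedo of bare-soil pixels
dt = rng.uniform(12.0, 20.0, n)               # day-night surface temperature difference [K]
ati = (1.0 - albedo) / dt                      # apparent thermal inertia

ok = OrdinaryKriging(x, y, ati, variogram_model="spherical")
grid_x = np.arange(0.0, 101.0, 2.0)
grid_y = np.arange(0.0, 101.0, 2.0)
ati_map, variance = ok.execute("grid", grid_x, grid_y)  # interpolated ATI and kriging variance
print(ati_map.shape)
```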

How to cite: Paruta, A., P. Nasta, G. Ciraolo, F. Capodici, S. Manfreda, N. Romano, E. Bendor, Y. Zeng, A. Maltese, S. F. Dal Sasso and R. Zhuang, A geostatistical approach to map near-surface soil moisture through hyper-spatial resolution thermal inertia, IEEE Transactions on Geoscience and Remote Sensing, (doi: 10.1109/TGRS.2020.3019200) 2020. [pdf]

Modeling Antecedent Soil Moisture to Constrain Rainfall Thresholds for Shallow Landslides Occurrence

Rainfall-triggered shallow landslides have caused losses of human lives and millions of euros in damage to property in all parts of the world. The need to prevent such hazards, combined with the difficulty of describing the geomorphological processes over regional scales, has led to the adoption of empirical rainfall thresholds derived from records of rainfall events that triggered landslides. These rainfall intensity thresholds are generally computed assuming that events are not influenced by antecedent soil moisture conditions. Nevertheless, antecedent soil moisture conditions are expected to provide critical support for the correct definition of the triggering conditions. Therefore, we explored the role of antecedent soil moisture in critical rainfall intensity-duration thresholds to evaluate the possibility of modifying or improving traditional approaches. The study was carried out using 326 landslide events that occurred over the last 18 years in the Basilicata region (southern Italy). Besides the ordinary data (i.e., rainstorm intensity and duration), we also derived the antecedent soil moisture conditions using a parsimonious hydrological model. These data were used to derive rainfall intensity thresholds conditional on the antecedent soil saturation, quantifying the impact of this parameter on the thresholds.
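To make the threshold concept concrete, the sketch below fits a power-law intensity-duration threshold I = a·D^(-b) to synthetic triggering events, separately for wet and dry antecedent saturation classes. The event data, the 50% saturation split, and the 5th-percentile envelope are hypothetical choices for illustration, not the study's calibration.

```python
# Illustrative intensity-duration threshold fitting conditioned on antecedent saturation.
import numpy as np

rng = np.random.default_rng(2)
duration = rng.uniform(1.0, 72.0, 326)                                   # event duration [h] (placeholder)
intensity = 20.0 * duration ** -0.6 * rng.lognormal(0.0, 0.3, 326)       # rainfall intensity [mm/h] (placeholder)
saturation = rng.uniform(0.2, 0.9, 326)                                  # antecedent soil saturation (placeholder)

def fit_threshold(D, I, percentile=5):
    """Fit log10(I) = log10(a) - b*log10(D), then shift the line down to the given percentile."""
    slope, log_a = np.polyfit(np.log10(D), np.log10(I), 1)
    residuals = np.log10(I) - (log_a + slope * np.log10(D))
    offset = np.percentile(residuals, percentile)
    return 10 ** (log_a + offset), -slope                                # a and b of I = a * D**(-b)

for label, mask in [("dry", saturation < 0.5), ("wet", saturation >= 0.5)]:
    a, b = fit_threshold(duration[mask], intensity[mask])
    print(f"{label}: I = {a:.2f} * D^(-{b:.2f})")
```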

Geographical distribution of the weather stations and landslide events for the study area. The graph in the inset shows the monthly distribution of landslides in Basilicata from 2001 to 2018.

How to cite: Lazzari, M., M. Piccarreta, R. L. Ray and S. Manfreda, Modelling antecedent soil moisture to constrain rainfall thresholds for shallow landslides occurrence, in Landslides, edited by Dr. Ram Ray, IntechOpen, pp. 1-331, (doi: 10.5772/intechopen.92730), 2020. [Link]