
Scientific Publications

Influence of image pixel resolution on canopy cover estimation in poplar plantations from field, aerial and satellite optical imagery
Accurate estimates of canopy cover (CC) are central for a wide range of forestry studies. As direct measurements are impractical, indirect optical methods have often been used to estimate CC from the complement of gap fraction measurements obtained with restricted-view sensors. In this short note we evaluated the influence of image pixel resolution (ground sampling distance; GSD) on CC estimation in poplar plantations obtained from field (cover photography; GSD < 1 cm), unmanned aerial vehicle (UAV; GSD < 10 cm) and satellite (Sentinel-2; GSD = 10 m) imagery. The trial was conducted in poplar plantations in Northern Italy with varying age and canopy cover. Results indicated that the coarser resolution available from satellite data is suitable for obtaining estimates of canopy cover comparable with field measurements from cover photography; Sentinel-2 is therefore recommended for larger-scale monitoring and routine assessment of canopy cover in poplar plantations. The higher resolution of UAV imagery compared with Sentinel-2 allows finer assessment of canopy structure, which could also be used to calibrate metrics obtained from coarser-scale remote sensing products, avoiding the need for ground measurements. © 2021 Centro di Ricerca per la Selvicoltura, Consiglio per la Ricerca in Agricoltura e l'Analisi dell'Economia Agraria. All rights reserved.
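The central relation in the abstract above — CC estimated as the complement of the gap fraction measured from optical imagery — can be sketched with a toy thresholding example. The brightness threshold and the array values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def canopy_cover(image, sky_threshold=200):
    """Estimate canopy cover (CC) as the complement of the gap fraction.

    `image` is a 2-D array of brightness values from an upward-looking
    cover photograph; pixels brighter than `sky_threshold` are classified
    as sky (gaps). The threshold is a hypothetical placeholder -- real
    workflows derive it from the image histogram.
    """
    gaps = image > sky_threshold   # sky (gap) pixels
    gap_fraction = gaps.mean()     # fraction of gap pixels in the image
    return 1.0 - gap_fraction      # CC = 1 - gap fraction

# Toy 4x4 "photograph": 4 bright sky pixels out of 16 -> CC = 0.75
img = np.array([[255, 50, 60, 255],
                [ 40, 30, 70, 255],
                [ 55, 45, 35, 255],
                [ 60, 20, 25,  50]])
print(canopy_cover(img))  # 0.75
```

The same complement-of-gaps logic applies at any GSD; what changes between field, UAV, and satellite imagery is how cleanly a single pixel can be labelled canopy or gap.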
Review of ground and aerial methods for vegetation cover fraction (fCover) and related quantities estimation: definitions, advances, challenges, and future perspectives
Vegetation cover fraction (fCover) and related quantities are basic yet critical vegetation structure variables in various disciplines and applications. Ground- and aerial-based proximal and remote sensing techniques have been widely adopted across multiple spatial extents. However, the definitions of fCover-related nomenclatures have not yet been fully standardized, leading to confusing terms and making historic measures difficult to compare. With the issues potentially arising from an increasing diversity of fCover and related quantities estimation methods and their corresponding uncertainties, there is also a growing need to spread knowledge on current advances, challenges, and perspectives, particularly as no such review exists for ground- and aerial-based estimation. This paper provides the current knowledge mainly concerning passive image-based methods and active light detection and ranging (LiDAR)-based methods. We first harmonized the definitions of fCover and its related quantities (e.g., effective canopy cover, crown cover, stratified vegetation cover, and canopy fraction). Secondly, the typical applications of fCover and related quantities over a range of scales, fields, and ecosystems were summarized. Thirdly, and importantly, we offered a comprehensive review of traditional non-imaging methods, image-based methods (e.g., segmentation, unmixing, and spectral retrieval), point cloud-based methods (e.g., rasterization), and LiDAR return-based methods (e.g., return number index and return intensity retrieval) across different platforms (i.e., ground, unmanned aerial vehicle (UAV), and airplane). Our investigation of fCover and related quantities estimation touches upon various vegetation ecosystems, including agricultural cropland, grassland, wetland, and forest. Finally, the current challenges and future directions were discussed, such as image signal processing under complex heterogeneous surfaces and stratified cover and non-photosynthetic cover retrieval. We therefore expect that this review may offer an insight into fCover and related quantities estimation and serve as a reference for remote sensing scientists, agronomists, silviculturists, and ecologists. © 2023 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS)
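Among the point cloud-based methods the review surveys, rasterization reduces to gridding returns and checking each cell against a canopy height cutoff. A minimal sketch, with the cell size and height threshold as assumed placeholders rather than values from the review:

```python
import numpy as np

def crown_cover(points, cell=1.0, height_threshold=2.0):
    """Rasterization sketch: fraction of grid cells whose maximum
    return height exceeds `height_threshold`.

    `points` is an (N, 3) array of x, y, z coordinates. Cell size and
    threshold are illustrative assumptions; real workflows choose them
    from the sensor footprint and stand structure.
    """
    ix = np.floor(points[:, 0] / cell).astype(int)
    iy = np.floor(points[:, 1] / cell).astype(int)
    nx = ix.max() - ix.min() + 1
    ny = iy.max() - iy.min() + 1
    top = np.full((nx, ny), -np.inf)           # max height per cell
    np.maximum.at(top, (ix - ix.min(), iy - iy.min()), points[:, 2])
    return np.mean(top >= height_threshold)    # fraction of canopy cells

# Toy cloud: four points on a 2x2 grid, two above the 2 m threshold
pts = np.array([[0.5, 0.5, 5.0],
                [1.5, 0.5, 1.0],
                [0.5, 1.5, 0.5],
                [1.5, 1.5, 3.0]])
print(crown_cover(pts))  # 0.5
```

Note that cells with no returns stay at `-inf` and count as gaps, which is one of the sources of uncertainty the review discusses for sparse point clouds.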
Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach
Accurate wall-to-wall estimation of forest crown cover is critical for a wide range of ecological studies. Notwithstanding the increasing use of UAVs in forest canopy mapping, ultrahigh-resolution UAV imagery requires an appropriate procedure to separate the contribution of understorey from overstorey vegetation, which is complicated by the spectral similarity between the two forest components and by the illumination environment. In this study, we investigated the integration of deep learning with combined imagery and photogrammetric point cloud data for boreal forest canopy mapping. The procedure enables the automatic creation of training sets of tree crown (overstorey) and background (understorey) data by combining UAV images with their associated photogrammetric point clouds, and expands the applicability of deep learning models with self-supervision. Based on UAV images with different overlap levels of 12 conifer forest plots, categorized into "I", "II" and "III" complexity levels according to illumination environment, we compared the self-supervised deep-learning-predicted canopy maps from original images with manual delineation data and found an average intersection over union (IoU) larger than 0.9 for "complexity I" and "complexity II" plots and larger than 0.75 for "complexity III" plots. The proposed method was then compared with three classical image segmentation methods (i.e., maximum likelihood, K-means, and Otsu) in plot-level crown cover estimation, outperforming the other methods in overstorey canopy extraction. The proposed method was also validated against wall-to-wall and pointwise crown cover estimates using UAV LiDAR and in situ digital cover photography (DCP) benchmarking methods. The results showed that the model-predicted crown cover was in line with the UAV LiDAR method (RMSE of 0.06) and deviated from the DCP method (RMSE of 0.18). We subsequently compared the new method and the commonly used UAV structure-from-motion (SfM) method at varying forward and lateral overlaps over all plots and a rugged terrain region. The results showed that the model-predicted crown cover was relatively insensitive to varying overlap (largest bias of less than 0.15), whereas the UAV SfM-estimated crown cover was seriously affected by overlap and decreased with decreasing overlap. In addition, canopy mapping over rugged terrain verified the merits of the new method, with no need for a detailed digital terrain model (DTM). The new method is recommended for use across varying image overlaps, illumination conditions, and terrains due to its robustness and high accuracy. This study offers opportunities to promote forest ecological applications (e.g., leaf area index estimation) and sustainable management (e.g., deforestation monitoring). © 2022 The Author(s)
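The intersection over union (IoU) score used above to compare predicted canopy maps with manual delineations is a standard overlap metric for binary masks. A minimal sketch on toy masks (the example arrays are illustrative, not data from the study):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union between two binary canopy masks.

    Returns 1.0 by convention when both masks are empty (union = 0).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Toy 2x3 masks: 2 pixels agree as canopy, 4 pixels in the union
pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 1], [0, 0, 0]])
print(iou(pred, truth))  # 0.5
```

An IoU above 0.9, as reported for the "complexity I" and "II" plots, means the predicted and manually delineated crown areas overlap almost completely relative to their combined extent.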