Scientific Publications
2 results
Review of ground and aerial methods for vegetation cover fraction (fCover) and related quantities estimation: definitions, advances, challenges, and future perspectives
Li, Linyuan; Mu, Xihan; Jiang, Hailan; Chianucci, Francesco; Hu, Ronghai; Song, Wanjuan; Qi, Jianbo; Liu, Shouyang; Zhou, Jiaxin; Chen, Ling; Huang, Huaguo; Yan, Guangjian
Keywords: airborne remote sensing; fCover; ground measurements; image and LiDAR; unmanned aerial vehicle (UAV); “cover” attribute
Abstract
Vegetation cover fraction (fCover) and related quantities are basic yet critical vegetation structure variables in various disciplines and applications. Ground- and aerial-based proximal and remote sensing techniques have been widely adopted across multiple spatial extents. However, the definitions of fCover-related nomenclature have not yet been fully standardized, leading to confusing terminology and making comparisons with historical measurements difficult. With the issues potentially arising from an increasing diversity of estimation methods for fCover and related quantities, and from the corresponding uncertainties, there is also a growing need to disseminate knowledge on current advances, challenges, and perspectives, especially given that no such review exists for ground- and aerial-based estimation. This paper summarizes current knowledge, mainly concerning passive image-based methods and active light detection and ranging (LiDAR)-based methods. We first harmonized the definitions of fCover and its related quantities (e.g., effective canopy cover, crown cover, stratified vegetation cover, and canopy fraction). Secondly, the typical applications of fCover and related quantities over a range of scales, fields, and ecosystems were summarized. Thirdly, and most importantly, we offered a comprehensive review of traditional non-imaging methods, image-based methods (e.g., segmentation, unmixing, and spectral retrieval), point cloud-based methods (e.g., rasterization), and LiDAR return-based methods (e.g., return number index and return intensity retrieval) across different platforms (i.e., ground, unmanned aerial vehicle (UAV), and airplane). Our investigation of fCover and related quantities estimation covers various vegetation ecosystems, including agricultural cropland, grassland, wetland, and forest.
Finally, the current challenges and future directions were discussed, such as image signal processing over complex heterogeneous surfaces and the retrieval of stratified cover and non-photosynthetic cover. We therefore expect that this review may offer insight into the estimation of fCover and related quantities and serve as a reference for remote sensing scientists, agronomists, silviculturists, and ecologists. © 2023 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS)
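As a minimal illustration of the image-based estimation the review covers (this sketch is not code from the paper): once each pixel of a downward- or upward-looking image has been classified as vegetation or background, fCover reduces to the vegetated fraction of the classified region. The function and toy mask below are hypothetical.

```python
import numpy as np

def fcover_from_mask(mask: np.ndarray) -> float:
    """Estimate fCover from a binary classification map.

    mask: array where 1 = vegetation pixel, 0 = background/soil.
    fCover is simply the fraction of vegetation pixels.
    """
    return float(mask.mean())

# Toy 4x4 classification: 6 of 16 pixels labeled as vegetation.
mask = np.array([
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
])
print(fcover_from_mask(mask))  # 0.375
```

The same fraction computed on a crown-only (overstorey) mask yields crown cover rather than total fCover, which is why the review stresses harmonizing these definitions before comparing measurements.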
Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach
Li, Linyuan; Mu, Xihan; Chianucci, Francesco; Qi, Jianbo; Jiang, Jingyi; Zhou, Jiaxin; Chen, Ling; Huang, Huaguo; Yan, Guangjian; Liu, Shouyang
Abstract
Accurate wall-to-wall estimation of forest crown cover is critical for a wide range of ecological studies. Notwithstanding the increasing use of UAVs in forest canopy mapping, ultrahigh-resolution UAV imagery requires an appropriate procedure to separate the contribution of understorey from overstorey vegetation, which is complicated by the spectral similarity between the two forest components and by the illumination environment. In this study, we investigated the integration of deep learning with combined imagery and photogrammetric point cloud data for boreal forest canopy mapping. The procedure enables the automatic creation of training sets of tree crown (overstorey) and background (understorey) data by combining UAV images with their associated photogrammetric point clouds, and it expands the applicability of deep learning models through self-supervision. Based on UAV images with different overlap levels from 12 conifer forest plots, categorized into complexity levels “I”, “II”, and “III” according to the illumination environment, we compared the self-supervised, deep-learning-predicted canopy maps from the original images with manual delineation data and found an average intersection over union (IoU) larger than 0.9 for “complexity I” and “complexity II” plots and larger than 0.75 for “complexity III” plots. The proposed method was then compared with three classical image segmentation methods (i.e., maximum likelihood, K-means, and Otsu) for plot-level crown cover estimation, outperforming all of them in overstorey canopy extraction. The proposed method was also validated against wall-to-wall and pointwise crown cover estimates obtained with UAV LiDAR and in situ digital cover photography (DCP) benchmarking methods. The results showed that the model-predicted crown cover was in line with the UAV LiDAR method (RMSE of 0.06) but deviated from the DCP method (RMSE of 0.18).
We subsequently compared the new method with the commonly used UAV structure-from-motion (SfM) method at varying forward and lateral overlaps over all plots and over a rugged terrain region. The results showed that the crown cover predicted by the new method was relatively insensitive to varying overlap (largest bias of less than 0.15), whereas the UAV SfM-estimated crown cover was seriously affected by overlap and decreased as overlap decreased. In addition, canopy mapping over rugged terrain confirmed the merits of the new method, which requires no detailed digital terrain model (DTM). The new method is recommended for use across various image overlaps, illumination conditions, and terrains owing to its robustness and high accuracy. This study offers opportunities to promote forest ecological applications (e.g., leaf area index estimation) and sustainable management (e.g., deforestation monitoring). © 2022 The Author(s)
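The intersection over union (IoU) metric used above to score predicted canopy maps against manual delineations can be sketched as follows. This is a generic illustration, not the authors' evaluation code; the masks are hypothetical.

```python
import numpy as np

def iou(pred: np.ndarray, ref: np.ndarray) -> float:
    """Intersection over Union of two binary masks.

    pred, ref: arrays where nonzero = canopy pixel.
    Returns |pred AND ref| / |pred OR ref|; defined as 1.0
    when both masks are empty (perfect agreement on absence).
    """
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return float(inter / union) if union else 1.0

# Toy example: predicted and reference canopy masks over 4 pixels.
pred = np.array([1, 1, 0, 0])
ref = np.array([1, 0, 1, 0])
print(iou(pred, ref))  # 1 intersecting pixel / 3 in the union
```

An IoU above 0.9, as reported for the “complexity I” and “II” plots, means the predicted and reference crown maps overlap almost completely relative to their combined extent.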