Scientific Publications
4 results
Review of ground and aerial methods for vegetation cover fraction (fCover) and related quantities estimation: definitions, advances, challenges, and future perspectives
Li, Linyuan; Mu, Xihan; Jiang, Hailan; Chianucci, Francesco; Hu, Ronghai; Song, Wanjuan; Qi, Jianbo; Liu, Shouyang; Zhou, Jiaxin; Chen, Ling; Huang, Huaguo; Yan, Guangjian
Keywords: airborne remote sensing; fCover; ground measurements; image and lidar; unmanned aerial vehicle (UAV); “cover” attribute
Abstract:
Vegetation cover fraction (fCover) and related quantities are basic yet critical vegetation structure variables in various disciplines and applications. Ground- and aerial-based proximal and remote sensing techniques have been widely adopted across multiple spatial extents. However, the definitions of fCover-related nomenclatures have not yet been fully standardized, leading to confusing terms and making comparison of historic measures difficult. With the issues potentially arising from an increasing diversity of fCover and related quantities estimation methods and corresponding uncertainties, there is also a growing need to spread knowledge on the current advances, challenges, and perspectives, especially given that no such review exists for ground- and aerial-based estimation. This paper provides the current knowledge mainly concerning passive image-based methods and active light detection and ranging (LiDAR)-based methods. We first harmonized the definitions of fCover and its related quantities (e.g., effective canopy cover, crown cover, stratified vegetation cover, and canopy fraction). Secondly, the typical applications of fCover and related quantities over a range of scales, fields, and ecosystems were summarized. Thirdly, and importantly, we offered a comprehensive review of traditional non-imaging methods, image-based methods (e.g., segmentation, unmixing, and spectral retrieval), point cloud-based methods (e.g., rasterization), and LiDAR return-based methods (e.g., return number index and return intensity retrieval) across different platforms (i.e., ground, unmanned aerial vehicle (UAV), and airplane). Our investigation of fCover and related quantities estimation touches upon various vegetation ecosystems, including agricultural cropland, grassland, wetland, and forest.
Finally, the current challenges and future directions were discussed, such as image signal processing under complex heterogeneous surfaces and stratified cover and non-photosynthetic cover retrieval. We therefore expect that this review may offer insight into fCover and related quantities estimation and serve as a reference for remote sensing scientists, agronomists, silviculturists, and ecologists. © 2023 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS)
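Across the image-based methods this review covers, fCover ultimately reduces to the fraction of pixels classified as vegetation; a minimal sketch on a binary mask (the function name and input are illustrative, not taken from the review):

```python
import numpy as np

def fcover(veg_mask):
    """Vegetation cover fraction: share of pixels labelled as vegetation.

    veg_mask: 2D boolean (or 0/1) array produced by any classifier,
    threshold, or unmixing step upstream.
    """
    m = np.asarray(veg_mask, dtype=bool)
    return m.sum() / m.size
```

Any of the segmentation or spectral-retrieval methods reviewed can supply the mask; the final averaging step itself is method-agnostic.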
Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach
Li, Linyuan; Mu, Xihan; Chianucci, Francesco; Qi, Jianbo; Jiang, Jingyi; Zhou, Jiaxin; Chen, Ling; Huang, Huaguo; Yan, Guangjian; Liu, Shouyang
Abstract:
Accurate wall-to-wall estimation of forest crown cover is critical for a wide range of ecological studies. Notwithstanding the increasing use of UAVs in forest canopy mapping, ultrahigh-resolution UAV imagery requires an appropriate procedure to separate the contribution of understorey from overstorey vegetation, which is complicated by the spectral similarity between the two forest components and the illumination environment. In this study, we investigated the integration of deep learning and the combined data of imagery and photogrammetric point clouds for boreal forest canopy mapping. The procedure enables the automatic creation of training sets of tree crown (overstorey) and background (understorey) data via the combination of UAV images and their associated photogrammetric point clouds and expands the applicability of deep learning models with self-supervision. Based on the UAV images with different overlap levels of 12 conifer forest plots, categorized into “I”, “II”, and “III” complexity levels according to illumination environment, we compared the self-supervised deep-learning-predicted canopy maps from original images with manual delineation data and found an average intersection over union (IoU) larger than 0.9 for “complexity I” and “complexity II” plots and larger than 0.75 for “complexity III” plots. The proposed method was then compared with three classical image segmentation methods (i.e., maximum likelihood, K-means, and Otsu) in plot-level crown cover estimation, outperforming the other methods in overstorey canopy extraction. The proposed method was also validated against wall-to-wall and pointwise crown cover estimates using UAV LiDAR and in situ digital cover photography (DCP) benchmarking methods. The results showed that the model-predicted crown cover was in line with the UAV LiDAR method (RMSE of 0.06) and deviated from the DCP method (RMSE of 0.18).
We subsequently compared the new method and the commonly used UAV structure-from-motion (SfM) method at varying forward and lateral overlaps over all plots and a rugged terrain region; the results showed that the method-predicted crown cover was relatively insensitive to varying overlap (largest bias of less than 0.15), whereas the UAV SfM-estimated crown cover was seriously affected by overlap and decreased with decreasing overlap. In addition, canopy mapping over rugged terrain verified the merits of the new method, with no need for a detailed digital terrain model (DTM). The new method is recommended for use across various image overlaps, illuminations, and terrains due to its robustness and high accuracy. This study offers opportunities to promote forest ecological applications (e.g., leaf area index estimation) and sustainable management (e.g., deforestation). © 2022 The Author(s)
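The IoU score used to validate the predicted canopy maps is straightforward to compute on binary crown masks; a minimal sketch (array names are illustrative):

```python
import numpy as np

def iou(pred, ref):
    """Intersection over union of two boolean crown masks."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    union = np.logical_or(pred, ref).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, ref).sum() / union
```

An IoU of 1 means the predicted and manually delineated crown maps coincide pixel for pixel; the 0.9 and 0.75 figures above are plot averages of this quantity.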
A new method to estimate clumping index integrating gap fraction averaging with the analysis of gap size distribution
Keywords: leaf area index; hemispherical photography; canopy nonrandomness; ordered weighted averaging (OWA) operator; orness
Abstract:
Estimates of clumping index (Ω) are required to improve the indirect estimation of leaf area index (L) from optical field-based instruments such as digital hemispherical photography (DHP). A widely used method allows estimation of Ω from DHP using simple gap fraction averaging formulas (LX). This method is simple and effective but has the disadvantage of being sensitive to canopy density and to the spatial scale (i.e., the azimuth segment size in DHP) used for averaging. In this study, we propose a new method to estimate Ω (LXG) based on ordered weighted gap fraction averaging (OWA) formulas, which addresses the disadvantages of LX and also accounts for gap size distribution. The new method was tested in 11 broadleaved forest stands in Italy; Ω estimated from LXG was compared with other commonly used clumping correction methods (LX, CC, and CLX). Results showed that LXG yielded more accurate Ω estimates, which were also more correlated with the values obtained from the gap size distribution methods (CC and CLX) than Ω obtained from LX. Leaf area index estimates adjusted by LXG are only 5%–6% lower than direct measurements obtained from litter traps, while other commonly used clumping correction methods yielded larger underestimation. © 2019, Canadian Science Publishing. All rights reserved.
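The LXG method extends the classical Lang–Xiang (LX) gap fraction averaging estimator; the paper's OWA weighting is not reproduced here, but the baseline LX formula it builds on can be sketched as follows (the segment gap fractions are hypothetical inputs):

```python
import numpy as np

def clumping_index_lx(segment_gap_fractions):
    """Lang-Xiang clumping index: Omega = ln(mean P) / mean(ln P),
    computed over azimuth-segment gap fractions P. Omega <= 1, with
    Omega = 1 indicating a random (non-clumped) canopy."""
    p = np.clip(np.asarray(segment_gap_fractions, dtype=float),
                1e-6, 1.0 - 1e-6)  # avoid log(0) and log(1)
    return np.log(p.mean()) / np.log(p).mean()
```

The effective leaf area index obtained from gap fraction inversion is then corrected as L = L_e / Ω, which is why an underestimated Ω propagates directly into the litter-trap comparison reported above.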
Comparison of seven inversion models for estimating plant and woody area indices of leaf-on and leaf-off forest canopy using explicit 3D forest scenes
Zou, Jie; Zhuang, Yinguo; Chianucci, Francesco; Mai, Chunna; Lin, Weimu; Leng, Peng; Luo, Shezhou; Yan, Bojie
Keywords: forest canopy; canopy element and woody component projection functions; clumping effect; digital hemispherical photography; forest scenes; inversion model; leaf area index (LAI); plant area index (PAI); woody area index (WAI)
Abstract:
Optical methods require model inversion to infer plant area index (PAI) and woody area index (WAI) of leaf-on and leaf-off forest canopy from gap fraction or radiation attenuation measurements. Several inversion models have been developed previously; however, a thorough comparison of those inversion models in obtaining the PAI and WAI of leaf-on and leaf-off forest canopy has not been conducted so far. In the present study, an explicit 3D forest scene series with different PAI, WAI, phenological periods, stand density, tree species composition, plant functional types, canopy element clumping index, and woody component clumping index was generated using 50 detailed 3D tree models. The explicit 3D forest scene series was then used to assess the performance of seven commonly used inversion models in estimating the PAI and WAI of the leaf-on and leaf-off forest canopy. The PAI and WAI estimated from the seven inversion models and simulated digital hemispherical photography images were compared with the true PAI and WAI of leaf-on and leaf-off forest scenes. Factors that contributed to the differences between the estimates of the seven inversion models were analyzed. Results show that the inversion model, the canopy element and woody component projection functions, the canopy element and woody component estimation algorithms, and the segment size all contributed to the differences between the PAI and WAI estimated from the seven inversion models. There is no universally valid combination of inversion model, needle-to-shoot area ratio, canopy element and woody component clumping index estimation algorithm, and segment size that can accurately measure the PAI and WAI of all leaf-on and leaf-off forest canopies.
The performance of such combinations in estimating the PAI and WAI of leaf-on and leaf-off forest canopies is a function of the inversion model as well as the canopy element and woody component clumping index estimation algorithm, segment size, PAI, WAI, tree species composition, and plant functional types. The impact of canopy element and woody component projection function measurements on the PAI and WAI estimation of the leaf-on and leaf-off forest canopy can be reduced to a low level (<4%) by adopting appropriate inversion models. © 2018 by the authors.
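The inversion models compared here all descend from the Beer–Lambert gap fraction relation; a minimal single-angle sketch of that baseline (symbols follow common usage rather than this paper's notation, and clumping correction is omitted):

```python
import math

def effective_pai(gap_fraction, zenith_deg, g=0.5):
    """Invert P(theta) = exp(-G(theta) * PAI / cos(theta)) for the
    effective PAI: PAI_eff = -cos(theta) * ln(P) / G(theta).

    g defaults to 0.5, the projection-function value near the
    57.5-degree hinge angle, where G is nearly independent of the
    leaf angle distribution."""
    theta = math.radians(zenith_deg)
    return -math.cos(theta) * math.log(gap_fraction) / g
```

The true PAI is then recovered by dividing by the clumping index, which is where the canopy element and woody component clumping index estimation algorithms compared in this study enter.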