Computer Vision for Hazard Tree Identification and Assessment
2021, Jose Delpiano, PhD, University of the Andes, Co-PIs Dr. Mauricio Ponce, Dr. Jose Saavedra, Dr. Matias Recabarren
There are over 100 landscape and tree fatalities in America every year. For example, New York City paid $1.6 million in damages after a citizen was killed by a falling tree branch in 2003. Hazard tree identification and assessment is key to preventing accidents involving urban trees. Individual arborists are responsible for areas with large numbers of trees and are very often unable to identify and assess every hazard tree. We believe some of the most relevant defects in hazard trees can be detected automatically with artificial intelligence (AI) tools. We will determine which of them can be detected from photographs of tree parts and street-level photographs taken with mobile devices. To train the AI models we design, a new set of tree images will be collected. Several images are required to assess one tree: even for an expert arborist, a single global photograph is not sufficient, and pictures of tree parts such as leaves, bark, and flowers may be needed. We will test our AI models in real urban environments.
Study Results:
ArboCensus is our data collection of over 3,200 tree samples from Santiago, Chile. Each sample includes 5 images and 4 attributes (OG4). The images were taken by multiple people using smartphones and show trees from a street-level view. The collection also includes categorical data from the arboricultural company Sercotal S.A., the creators of ArboTag. ArboCensus delivers information that can be useful for recognizing hazard trees in an automated manner.
School of Engineering and Applied Sciences, Universidad de los Andes – www.uandes.cl/our-university Mons. Álvaro del Portillo 12.455, Las Condes, Santiago, Chile.
Tagging for trunk detection and segmentation of trunk and canopy pixels. To solve the trunk detection problem, images were manually tagged using the LabelMe software to train a deep learning model (OG3). The tree contour was delineated and the trunk region enclosed with a bounding box. Trunk segmentation masks were extracted and used as ground truth to train, validate, and evaluate the model. A segmentation approach was chosen to accurately determine which pixels belong to the trunk and canopy.
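As a rough illustration of how such annotations become training masks, the sketch below rasterizes a LabelMe-style rectangle annotation into a binary trunk mask. The JSON layout follows LabelMe's stored format for rectangle shapes, but the specific label, image size, and coordinates here are hypothetical, not from the project's data:

```python
import json
import numpy as np

# Hypothetical LabelMe-style annotation: a "trunk" bounding box stored as
# two corner points, mirroring how LabelMe saves rectangle shapes.
annotation = json.loads("""
{
  "imageHeight": 100,
  "imageWidth": 80,
  "shapes": [
    {"label": "trunk", "shape_type": "rectangle",
     "points": [[20.0, 40.0], [60.0, 90.0]]}
  ]
}
""")

def rectangle_to_mask(ann, target_label="trunk"):
    """Rasterize all rectangle shapes with the given label into a binary mask."""
    mask = np.zeros((ann["imageHeight"], ann["imageWidth"]), dtype=np.uint8)
    for shape in ann["shapes"]:
        if shape["label"] != target_label or shape["shape_type"] != "rectangle":
            continue
        (x0, y0), (x1, y1) = shape["points"]
        # LabelMe points are (x, y); numpy indexing is (row, col).
        r0, r1 = sorted((int(round(y0)), int(round(y1))))
        c0, c1 = sorted((int(round(x0)), int(round(x1))))
        mask[r0:r1, c0:c1] = 1
    return mask

mask = rectangle_to_mask(annotation)
print(mask.shape, int(mask.sum()))  # (100, 80) 2000
```

Polygon contours (like the delineated tree outline) would be filled the same way, but require a polygon rasterizer rather than a rectangular slice.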
Methodologies for trunk detection and segmentation of trunk and canopy pixels. The tree trunk detection problem is addressed with a segmentation approach (OG1-OG2) and a baseline method based on a U-net neural architecture. The network, consisting of an encoder and a decoder, generates a segmentation mask. Transfer learning is performed using a model pre-trained on the ImageNet dataset. Several encoders, such as MobileNet, ResNet, and Inception, are evaluated.
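The encoder-decoder structure with skip connections can be sketched in miniature. The toy functions below trace only the spatial down- and up-sampling and the skip concatenation that characterize a U-net, not the learned convolutions; all sizes and channel counts are illustrative, not those used in the project:

```python
import numpy as np

def downsample(x):
    """2x2 max pooling: halves height and width (encoder step)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def upsample(x):
    """Nearest-neighbour 2x upsampling (decoder step)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_shapes(x):
    """Trace feature-map shapes through a 2-level encoder-decoder with skips."""
    skip1 = x                           # full-resolution features
    e1 = downsample(x)                  # 1/2 resolution
    skip2 = e1
    bottleneck = downsample(e1)         # 1/4 resolution
    # Decoder: upsample, then concatenate the matching encoder features
    # along the channel axis (the U-net skip connection).
    d1 = np.concatenate([upsample(bottleneck), skip2], axis=0)
    d2 = np.concatenate([upsample(d1), skip1], axis=0)
    return bottleneck.shape, d2.shape

x = np.random.rand(8, 64, 64)  # (channels, height, width) toy feature map
print(unet_shapes(x))  # ((8, 16, 16), (24, 64, 64))
```

Swapping the encoder (MobileNet, ResNet, Inception) changes how the downsampling features are computed, while the decoder and skip-connection pattern stay the same, which is what makes pre-trained ImageNet encoders easy to plug in.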
Results for segmentation of bark: inclination angle and height of main branches. The algorithm for determining the height of main branches involves obtaining a vertical image of the tree, processing it to extract the trunk mask, selecting the central area, calculating height in pixels, and converting it to centimeters using the tree’s diameter at breast height (DBH) as a reference. Figure 2.1 provides visual examples of the process.
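The final conversion step relies on simple proportionality: if the tree's DBH is known in centimeters and the trunk mask gives that same diameter in pixels, the resulting scale converts any pixel height to centimeters. A minimal sketch under that assumption (the function name and sample values are illustrative):

```python
def pixels_to_cm(measure_px, dbh_cm, dbh_px):
    """Convert a pixel measurement to centimeters using the tree's known
    diameter at breast height (DBH) as the in-image scale reference."""
    if dbh_px <= 0:
        raise ValueError("DBH in pixels must be positive")
    cm_per_pixel = dbh_cm / dbh_px
    return measure_px * cm_per_pixel

# Example: the trunk measures 40 px wide where the real DBH is 30 cm,
# so each pixel spans 0.75 cm; a branch at 480 px is then at 360 cm.
print(pixels_to_cm(480, dbh_cm=30, dbh_px=40))  # 360.0
```

This assumes the branch and the breast-height point lie at roughly the same distance from the camera; strong perspective or lens distortion would require a fuller camera model.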
Year: 2021
Funding Duration: 2 years
Grant Program: John Z. Duling Grant
Grant Title: Computer Vision for Hazard Tree Identification and Assessment
Researcher: Jose Delpiano, PhD
Key words:
Peer Reviewed Publications from Grant:
General Audience/Trade Publications:
Professional Presentations: TREE Fund Webinar Series, July 2024
For more information on this project, contact the researcher via TREE Fund at treefund@treefund.org.