Hawaii (and much of the world) is losing the war against invasive species, putting ecosystems, cultural heritage, and environmental services at risk. Existing approaches in Hawaii range from aerial herbicide application via manned aircraft to exhausting, and often dangerous, ground work by field crews. These methods have been used for decades with mixed and often disappointing results. Newer technologies, including drones and miniaturized sensors, hold promise for expanding surveillance capabilities but have yet to put a real dent in the spread of these aggressive invaders. One of the biggest bottlenecks in using very high-resolution drone imagery to detect individual invasive plant targets in high-value areas is the need for trained human analysts to manually scan each image.
While effective, this is time-consuming and tedious work, since drone surveys produce thousands of photos for relatively small areas (200-500 acres). Computer vision and machine learning hold great promise for automating this process, and we have begun to have some success training computer algorithms to detect various plants of interest. Technology alone will not solve Hawaii's worsening invasive species problem, but effective, targeted technology can be leveraged to extend the impact of the limited budgets and personnel devoted to this important issue.
To solve the bottleneck of processing thousands of high-resolution images to detect individual plant targets within a sea of similar-looking vegetation, we are developing a flexible machine learning algorithm that will:
- Automatically scan through raw geotagged imagery and detect individual plants of interest with a high degree of accuracy (including plants partially obscured by other plants in the canopy).
- Convert the detected plants’ raw image pixel coordinates into real-world geographic coordinates within a zone of expected positional uncertainty.
- Produce a list and GIS layer of all the resulting coordinates for management decisions.
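The second step above, converting a detection's pixel coordinates into real-world geographic coordinates, can be sketched with simple ground-sample-distance math. This is a minimal illustration, not the project's actual code: it assumes a nadir-pointing camera, flat terrain directly below the aircraft, and that the photo's GPS position, altitude, focal length, and heading are available (e.g., from EXIF tags). The function name and parameters are hypothetical.

```python
import math

def pixel_to_geo(px, py, img_w, img_h, cam_lat, cam_lon,
                 altitude_m, focal_mm, sensor_w_mm, heading_deg=0.0):
    """Convert a pixel coordinate in a nadir drone photo to an
    approximate lat/lon, assuming flat terrain below the camera."""
    # Ground sample distance (metres per pixel) from the pinhole model
    gsd = (altitude_m * sensor_w_mm) / (focal_mm * img_w)
    # Offset from the image centre, in metres (y increases downward in pixels)
    dx = (px - img_w / 2) * gsd
    dy = (img_h / 2 - py) * gsd
    # Rotate the offset by aircraft heading (0 degrees = due north)
    h = math.radians(heading_deg)
    east = dx * math.cos(h) + dy * math.sin(h)
    north = -dx * math.sin(h) + dy * math.cos(h)
    # Metres to degrees (spherical-Earth approximation)
    dlat = north / 111_320
    dlon = east / (111_320 * math.cos(math.radians(cam_lat)))
    return cam_lat + dlat, cam_lon + dlon
```

Errors in heading, altitude, and terrain height propagate into the result, which is exactly why the output is reported within a zone of expected positional uncertainty rather than as an exact point; the resulting coordinates can then be written to a GIS layer (e.g., GeoJSON or a shapefile) for managers.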
He made a few assumptions:
- He assumed that scaling up the initial software, prototyped on dozens of images, would not create issues when applied to hundreds of thousands of images.
- He assumed that with better plant-detection information, managers could use their limited resources to combat invasive plants more effectively in Hawaii and beyond.
- He assumed that cm-scale resolution imagery and CNN deep learning could be leveraged to detect these plants of interest.
He manually built libraries of target species from his existing imagery and trained an accurate convolutional neural network (or set of networks) to automatically identify these species in new imagery under a variety of lighting conditions (it rains a lot in east Hawaii). He already had extensive imagery datasets containing species of interest, such as Miconia and Banana poke, from his own work and from partner agencies (Big Island Invasive Species Committee, Hawaii Volcanoes National Park, etc.), but tagging the plants and leaves of interest and building the working CNN models was no small task.
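The tagging step above amounts to turning annotated photos into a labeled training library. One common pattern, sketched here as an assumption rather than his actual workflow, is to crop fixed-size chips centred on each tagged plant and group them by species. The annotation format `(species, x, y)` and the chip size are hypothetical; a NumPy array stands in for a decoded photo.

```python
import numpy as np

def extract_chips(image, annotations, chip_size=256):
    """Crop square training chips centred on annotated plant locations.

    `image` is an HxWx3 array; `annotations` is a list of (species, x, y)
    tuples marking plant centres (a hypothetical annotation format).
    Returns a dict mapping species name -> list of chip arrays.
    """
    half = chip_size // 2
    h, w = image.shape[:2]
    chips = {}
    for species, x, y in annotations:
        # Clamp the crop window so chips near photo edges stay full-size
        x0 = min(max(x - half, 0), w - chip_size)
        y0 = min(max(y - half, 0), h - chip_size)
        chip = image[y0:y0 + chip_size, x0:x0 + chip_size]
        chips.setdefault(species, []).append(chip)
    return chips
```

Chips produced this way can feed a standard CNN training loop (e.g., fine-tuning a pretrained classifier), with one folder or label per species, which is also what makes adding a new species tedious: each one needs its own tagged library.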
He now has an automatic detector for three plant species (Miconia, coconuts, and Ohia trees affected by Rapid Ohia Death) and a good, if tedious, framework in place for adding more. Aside from improving the detectors' performance and packaging them in a downloadable form he can share with his partner management agencies, his biggest and most recent issue is speed. For high-resolution Nikon D850 photos (taken from a helicopter), he can process ~200+ photos an hour on a fairly expensive, fast computer with a Titan RTX GPU, while he collects ~5,000 photos per flight, with multiple flights.
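At ~200 photos an hour, one flight's worth of imagery takes roughly a day of continuous GPU time. One common throughput lever, sketched here as a general technique rather than his actual pipeline, is to slice each large photo into a batch of fixed-size tiles so the GPU runs many detection windows per forward pass instead of one photo at a time. The tile size is an assumed value.

```python
import numpy as np

def tile_photo(image, tile=512):
    """Split a large photo into a batch of fixed-size tiles for batched
    GPU inference. Right/bottom edges are zero-padded up to a full tile."""
    h, w, c = image.shape
    ph = (tile - h % tile) % tile
    pw = (tile - w % tile) % tile
    padded = np.pad(image, ((0, ph), (0, pw), (0, 0)))
    H, W = padded.shape[:2]
    # Reshape to (rows, tile, cols, tile, c), then reorder into a batch
    batch = (padded.reshape(H // tile, tile, W // tile, tile, c)
                   .transpose(0, 2, 1, 3, 4)
                   .reshape(-1, tile, tile, c))
    return batch
```

With 512-pixel tiles, a full-resolution D850 frame (8256 x 5504) pads out to a 17 x 11 grid, i.e., 187 tiles; feeding those through the detector in large batches generally keeps the GPU far busier than per-photo processing, though actual speedups depend on the model and I/O.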