In an initiative to improve wildlife protection, seven organizations led by Conservation International and Google have mapped more than 4.5 million animals in the wild using photos taken by motion-activated cameras known as camera traps.
The photos are all part of Wildlife Insights, an AI-enabled, Google Cloud-based platform that streamlines conservation monitoring by speeding up camera trap photo analysis.
"With photos and aggregated data available for the world to see, people can change the way protected areas are managed, empower local communities in conservation and bring the best data closer to conservationists and decision makers," the company said in a statement on Tuesday.
With Wildlife Insights, conservation scientists can now upload their camera trap photos to Google Cloud, run Google's species identification AI models over them, collaborate with others, visualize wildlife on a map and develop insights into species population health.
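As a rough illustration of the upload step, the sketch below uses the standard google-cloud-storage Python client to push camera trap photos to a Cloud Storage bucket. The bucket name, folder prefix and local directory are hypothetical placeholders, not Wildlife Insights' actual ingestion path.

```python
# Hypothetical sketch: batch-uploading camera trap photos to a Google Cloud
# Storage bucket before analysis. Bucket name and prefix are placeholders,
# not Wildlife Insights' real ingestion path.
from pathlib import Path

from google.cloud import storage  # pip install google-cloud-storage


def upload_camera_trap_photos(local_dir: str, bucket_name: str,
                              prefix: str = "camera-traps") -> None:
    """Upload every JPEG in local_dir to gs://bucket_name/prefix/."""
    client = storage.Client()          # uses Application Default Credentials
    bucket = client.bucket(bucket_name)

    for photo in Path(local_dir).glob("*.jpg"):
        blob = bucket.blob(f"{prefix}/{photo.name}")
        blob.upload_from_filename(str(photo))
        print(f"Uploaded {photo.name} -> gs://{bucket_name}/{blob.name}")


if __name__ == "__main__":
    upload_camera_trap_photos("./trap_photos", "my-conservation-project")
```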
It is the largest and most diverse public camera trap database in the world, letting people explore millions of camera trap images and filter them by species, country and year.
On average, human experts can label 300 to 1,000 images per hour. With the help of Google AI Platform Predictions, Wildlife Insights can classify the same images up to 3,000 times faster, analyzing 3.6 million photos an hour. To make this possible, Google trained an AI model to automatically classify the species in an image using its open source TensorFlow framework.
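A minimal sketch of what such an inference step might look like in TensorFlow is shown below. The saved-model path, label list, input size and preprocessing are assumptions for illustration only; the actual Wildlife Insights model and its interface are not described in the source.

```python
# Hypothetical sketch: classifying one camera trap image with a trained
# TensorFlow species classifier. Model path, input size and label file are
# placeholders; the real Wildlife Insights model is not shown here.
import numpy as np
import tensorflow as tf

MODEL_DIR = "species_classifier/saved_model"   # assumed export location
# Illustrative subset; in practice this list would cover all 614 trained species.
LABELS = ["jaguar", "white-lipped peccary", "african elephant"]

model = tf.saved_model.load(MODEL_DIR)
infer = model.signatures["serving_default"]


def classify(image_path: str):
    """Return the predicted species label and its probability for one photo."""
    img = tf.io.read_file(image_path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, (224, 224)) / 255.0   # assumed preprocessing
    batch = tf.expand_dims(img, axis=0)

    probs = list(infer(batch).values())[0].numpy()[0]  # softmax over species
    best = int(np.argmax(probs))
    return LABELS[best], float(probs[best])


species, confidence = classify("trap_photo_0001.jpg")
print(f"Predicted {species} with probability {confidence:.2f}")
```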
Although species identification can be a challenging task for AI, across the 614 species Google's models have been trained on, species such as jaguars, white-lipped peccaries and African elephants are correctly predicted with probabilities between 80 and 98.6 percent.
With this data, managers of protected areas or anti-poaching programs can gauge the health of specific species, and local governments can use the data to inform policies and create conservation measures.