How can we make users trust machine learning to help them combat illegal activities?

Vulcan Philanthropy

At Vulcan, I worked on several projects where we leveraged new technology for species conservation. Two particular examples are our anti-poaching efforts and our work combating illegal fishing.

In both of these projects, we work with organizations that are often underfunded and understaffed. They usually face the daunting task of patrolling and monitoring large areas of responsibility with minimal resources.

Using New Technology For Good

Our founder wanted to see how we could use new technology, such as drones, satellites and machine learning, to aid these underfunded and understaffed organizations in their conservation and enforcement tasks.

One of the initiatives that came out of this was to use camera-equipped drones for surveillance in wildlife parks. Drones would fly over the parks and stream images and video so that anti-poaching units could locate poachers and signs of poaching activity.

The Approach

In order to create a user experience strategy, I needed to understand two things:

  • The user and the problem. Luckily, I had done contextual research in Botswana to find out how rangers used a similar interface: a piece of software built to detect anomalies in static images extracted from a drone flight. I had also watched them monitor live video from a drone flight, so I had seen very similar problems first hand.
  • The capabilities of the neural network, and their scope. For this I needed to get some experts in the room, so I held a participatory design session with both the software developers and the machine learning experts. After all, it would be much easier to present the problem to them and come up with multiple solutions together.

The Insights

Anti-poaching units would have to sit through hours of streaming drone video looking for signs of poaching. Glued to a monitor, they had little time for anything else. After a while, fatigue would creep in and their ability to concentrate would wane fast, reducing the chance of discovering signs of poaching.

To combat user fatigue, we used our neural network's object detection to classify objects in video (and still images) as animal, human or vehicle. These detections would then be communicated to users in real time, so that they could multitask without the fear of missing something.
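As a rough illustration of this idea (not the production code), the sketch below shows how raw detector output could be filtered down to the three coarse alert classes before interrupting the operator. The class names, confidence threshold and data shapes are assumptions made for the example.

    from dataclasses import dataclass

    # Coarse categories surfaced to the operator (assumed set).
    ALERT_CLASSES = {"animal", "human", "vehicle"}

    @dataclass
    class Detection:
        label: str         # class predicted by the neural network
        confidence: float  # 0.0 - 1.0
        box: tuple         # (x, y, width, height) in frame pixels

    def alerts_for_frame(detections, min_confidence=0.6):
        """Return only detections worth interrupting the operator for."""
        return [
            d for d in detections
            if d.label in ALERT_CLASSES and d.confidence >= min_confidence
        ]

    # Example frame: one confident human detection and one weak vehicle detection.
    frame = [Detection("human", 0.83, (120, 40, 32, 64)),
             Detection("vehicle", 0.35, (300, 200, 80, 40))]
    print(alerts_for_frame(frame))  # only the confident human detection remains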

Working with the machine learning team and the software developers, I facilitated the design of the best user experience for flagging and signalling anomalies on screen.

We found that three crucial things dominated the user experience of conveying this intel to the organizations.

The UX Strategy

1. Help the user
Through user research, I had observed first hand the operator fatigue that led to missed detections. We needed to draw the user's attention only when it was really necessary, by flagging potential anomalies in the landscape.

2. Reduce the noise
Our neural network's classification worked fine as long as we didn't ask it to be too granular: it could flag anomalies with a good success rate, but it couldn't distinguish a cow from an elephant. In our effort to aid the user, the last thing we wanted was to replace fatigue with annoyance over false detections. How could we reduce the noise from alerts and notifications caused by wrong or duplicate sightings across frames?

3. Build trust with the user – be transparent
As the accuracy of our fine-grained detections was nowhere near acceptable, we needed a way of gradually introducing the user to the ML system's output. We needed to be just granular enough to stay trustworthy, without shooting ourselves in the foot by over-promising. Setting user expectations was crucial for building trust.

The Execution

Helping the user pay attention only when necessary became one of the focus points of the user experience. The end goal was a system with less than 10% false detections that would proactively alert the user when something was detected. To achieve that kind of impact, the engineers needed to concentrate on making the model perform better. So, working backwards, I developed a roadmap for implementing machine learning features based on the current limitations and the expanding future capabilities.

First of all, to reduce noise, I proposed aggregating detections on a grid system. This would generate only one alert per grid cell, eliminating noise from multiple detections in the same location.

The grid would also be visible to the user, so the command center could use it to communicate a location to rangers in the field. For example, they could instruct a patrol to go to G17 without having to read out coordinates, addressing another pain point I had discovered during the field research.
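The sketch below illustrates the grid idea under a few assumptions: detections carry a position in metres from a park origin, cells are 500 m squares, and columns are lettered so a cell label reads like "G17". It is not the shipped implementation.

    import string

    CELL_SIZE_M = 500  # assumed cell edge length in metres

    def grid_cell(easting_m, northing_m, cell_size=CELL_SIZE_M):
        """Map a position (metres from the park origin) to a label like 'G17'."""
        col = int(easting_m // cell_size)       # 0 -> A, 1 -> B, ...
        row = int(northing_m // cell_size) + 1  # 1-based row number
        return f"{string.ascii_uppercase[col]}{row}"

    def aggregate_alerts(detections):
        """Collapse many detections into one alert per grid cell."""
        alerts = {}
        for easting, northing, label in detections:
            cell = grid_cell(easting, northing)
            alerts.setdefault(cell, set()).add(label)
        return alerts

    # Three sightings, two of them in the same cell -> two alerts, not three.
    sightings = [(3200, 8300, "human"), (3350, 8450, "human"), (900, 1200, "animal")]
    print(aggregate_alerts(sightings))  # {'G17': {'human'}, 'B3': {'animal'}}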

To build trust with the user, the initial user experience strategy was simply to detect whether there was "something" in the frame of the feed that did not belong there. The neural network was able to do this with a high degree of accuracy.

The second phase would be implemented as soon as the system could tell the user, with great accuracy, whether the "something" in the frame was an animal or person, or an object (think: vehicle).

Subsequent phases on the user experience roadmap would be marked by increasingly granular improvements to the neural network's classification capabilities.
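To make the phasing concrete, here is a hedged sketch of how raw network labels could be coarsened for the user at each phase. The phase boundaries and class groupings are illustrative assumptions, not the actual roadmap values.

    # Each phase maps a raw network label to what the operator is shown.
    PHASES = {
        1: lambda label: "something",                     # phase 1: anything detected
        2: lambda label: "vehicle" if label == "vehicle"  # phase 2: living vs object
                         else "animal or person",
        3: lambda label: label,                           # later: full granularity
    }

    def user_facing_label(raw_label, phase):
        """Translate a raw network label into the label the operator sees."""
        return PHASES[phase](raw_label)

    for phase in (1, 2, 3):
        print(phase, user_facing_label("elephant", phase))
    # 1 something
    # 2 animal or person
    # 3 elephant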

Please note that I cannot share artifacts at this time, as I am still under NDA. However, I'll be more than happy to talk through the designs in a more personal setting.