There was a story on NPR this week about how misleading the updated New Orleans flood map is: it depicts a significant reduction in flood risk. According to the updated map, only 5% of the city's areas are at high flood risk.
The issue that All Things Considered's Ryan Kailath points out is that the reduction in risk was likely driven by a political incentive to lower insurance rates: higher flood risk means higher insurance premiums, which discourages growth. The story also includes an alternative map, created with a different forecast model, that depicts the majority of New Orleans under high risk. This second model factors in the possibility of the new safety measures failing.
Why is there a disagreement between these two maps and what does it mean for the people of New Orleans?
The reality is that ALL forecasts have this same issue, but it is particularly complicated with maps. The issue is uncertainty communication. Scientists make forecasts by taking a model built from historical data or simulations and running it many times, making small changes to the inputs on each run. The result is an ensemble of many runs. This image (right) is an example of output from this process, for possible hurricane paths. Each line represents one route the hurricane could take. As you can see, most of the lines group together, but some stray off to the side. These outlier paths have a lower likelihood of occurring but are still within the realm of possibility. What you are really seeing is a probability distribution: the likelihood of the hurricane taking a particular path.
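The ensemble idea can be sketched in a few lines of code. This is a toy illustration of the process described above, not a real forecast model: each run perturbs the same simple drift model with random noise, and the spread of the resulting tracks approximates the forecast's probability distribution. All names and parameters here are made up for the example.

```python
import numpy as np

def simulate_paths(n_runs=200, n_steps=48, seed=0):
    """Toy ensemble: each run re-integrates the same mean motion
    with small random perturbations, producing one possible track."""
    rng = np.random.default_rng(seed)
    drift = np.array([1.0, 0.4])  # assumed mean motion per step (lon, lat)
    paths = np.zeros((n_runs, n_steps, 2))
    for i in range(n_runs):
        noise = rng.normal(scale=0.3, size=(n_steps, 2))  # per-step perturbation
        paths[i] = np.cumsum(drift + noise, axis=0)       # accumulate into a track
    return paths

paths = simulate_paths()
# The cloud of endpoints is the "probability distribution" the article
# describes: dense where landfall is likely, sparse for outlier tracks.
endpoints = paths[:, -1, :]
```

Plotting every row of `paths` as a faint line would reproduce the spaghetti-style image the article refers to.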
The issue is that we (the general public) don’t get to see these types of maps. Instead, scientists average this information into a display like the one below.
The cone is drawn so that the hurricane's center is expected to stay inside it roughly two-thirds of the time; it does not mean that every location inside has a 66% chance of being struck. Much of the uncertainty data about the hurricane's path is smoothed over. This is the same issue with the New Orleans flood map.
The FEMA map shows optimistic averages, which are accurate but not the whole picture. The alternative map shows pessimistic averages. The reality is that we need to see the full range of possible outcomes to make the best decisions about our own health and safety. Both maps are missing the uncertainty.
Why is uncertainty difficult in the case of maps? To take FEMA's side for a moment, depicting uncertainty on maps is incredibly hard. Scientists have written dozens of research papers on visualizing uncertainty (including my own work), and there isn't a good solution yet. The problem is that maps are already complex. FEMA's map already uses color and texture to communicate risk, and the map is almost illegible because of that complexity.
How do you add uncertainty on top of that? Some ideas include encoding likelihood with saturation, blurriness, or line quality. My work argues that showing the multiple simulations without averaging is the simplest for people to understand. But the truth is that there is no reliable technique at this point. This is a large design problem with massive implications for the health and prosperity of New Orleans, and for all of us. We need more work on successful ways to communicate uncertainty in maps.
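One of the encoding ideas above, mapping likelihood to saturation or opacity, can be sketched concretely. The snippet below is a minimal illustration, not a published technique: it approximates each ensemble member's likelihood by how many other members end up nearby (a crude kernel density estimate), then maps that to an opacity value so outlier tracks render faintly and central tracks render strongly. The function name, bandwidth, and opacity range are all assumptions chosen for the example.

```python
import numpy as np

def member_alphas(endpoints, bandwidth=2.0, lo=0.1, hi=1.0):
    """Assign each ensemble member an opacity proportional to its likelihood.
    Likelihood is estimated from how densely other members cluster around
    this member's endpoint (Gaussian kernel sum)."""
    # Pairwise distances between all endpoints.
    d = np.linalg.norm(endpoints[:, None, :] - endpoints[None, :, :], axis=-1)
    density = np.exp(-(d / bandwidth) ** 2).sum(axis=1)  # kernel density proxy
    spread = density.max() - density.min()
    norm = (density - density.min()) / (spread + 1e-12)  # rescale to [0, 1]
    return lo + (hi - lo) * norm                         # opacity per member

rng = np.random.default_rng(1)
# Hypothetical endpoints of 100 simulated tracks (stand-in data).
endpoints = rng.normal(size=(100, 2)) * np.array([3.0, 1.0])
alphas = member_alphas(endpoints)
```

Feeding `alphas` into a plotting library as per-line opacity would yield a spaghetti plot where visual weight itself communicates probability, one way to show all the simulations without averaging them away.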