How would you feel if I told you that cognitive research suggests you are irrational?
The truth is that you, I, and all other humans make decisions that are not purely based on the “facts.” We are a lazy lot and would rather not perform the same computations over and over. Rather, we find a rule that works and use that to guide our judgments. For example, I don’t like mushrooms. I don’t need to continuously taste them to check their flavor. I’ve developed a rule, and I use that to decide what to sneakily transfer from my plate to my husband’s when he isn’t looking.
However, we are not always controlled by these rules. With mental effort, we can counteract some well-learned rules, but this requires knowing which rules are guiding our judgments in the first place. Many of these rules are nearly impossible to detect unless you read the fabulous work by Daniel Kahneman.
What does this mean for how we make decisions with images?
The rules that guide how we think about images are based on the salient features of the image and are not easily counteracted.
For example, consider the cone of uncertainty (see image below), a display produced by the National Hurricane Center. Forecasters create the cone of uncertainty by averaging a five-year sample of historical hurricane forecast tracks, producing a boundary within which locations have a 66% likelihood of being struck by the center of the storm.
The most prominent visual feature of the cone of uncertainty is its growing diameter, and the majority of viewers incorrectly report that the cone represents the size of the hurricane expanding over time. However, the cone depicts only the distribution of likely hurricane paths and contains no information about the storm's size.
We use a simple rule, such as "a larger cone means a larger storm," rather than processing the harder idea that the widening cone indicates increasing uncertainty about the hurricane's path over time.
How do we unconsciously pick which rule to use?
We use the rule, or mental model, that most closely matches the visual characteristics of the graphic, rather than the mental model best suited to the task. For example, a research study found that users could effectively use confidence intervals in temperature forecasts when they were presented as text. However, when the same intervals were communicated visually, participants believed that they represented high and low temperatures.
The visual confidence intervals looked similar to the high- and low-temperature forecasts used in the news, and participants were possibly matching that familiar mental model to the visual display. The authors further found that they could not change viewers' interpretations with additional instructions.
I would caution visualization designers not to expect viewers to be able, or willing, to use any mental model other than the one that visually matches the display. More specifically, new visual encodings should not resemble widely used graphics and symbols, because viewers will have difficulty inhibiting older, more familiar mental models.
- Kahneman, D. (2011). Thinking, fast and slow. Macmillan. I highly recommend this book. It is an easy read, discusses in great detail the heuristics, or rules of thumb, that guide our judgments, and offers advice on how to make well-considered decisions.
- Savelli, S., & Joslyn, S. (2013). The advantages of predictive interval forecasts for non‐expert users and the impact of visualizations. Applied Cognitive Psychology, 27(4), 527-541.