How do we reason about testimony in an information-rich world?

This page contains information from the following manuscripts:

Yousif, S. R., Aboody, R., & Keil, F. C. (2019). The illusion of consensus: A failure to distinguish between 'true' and 'false' consensus. Psychological Science.

As academics, we encounter a dizzying amount of new information across many different fields of study; how could we possibly interpret all of it rationally at once? In one line of work, I reveal a subtle but crucial fallacy in how people interpret information across sources — namely, that individuals more heavily weight information that is merely repeated, as opposed to genuinely corroborated. We discuss the implications of this work in the context of social media and disinformation, but there are also implications for our own knowledge: do we overweight the value of highly cited studies, for example, independent of those studies' replicability?

To demonstrate this, we have subjects read articles about current events (e.g., the state of the Japanese economy, or a piece of legislation in Switzerland). There are three conditions. Across all three, subjects see one article that takes a negative stance on the issue, and that article cites a single source. In our 'true consensus' condition, subjects read four more articles that take a positive stance, each of which independently cites a unique source. In our 'false consensus' condition, subjects also read four more articles that take a positive stance (containing the same information as in the true consensus condition), except that all of those articles cite a single source. Finally, in our 'no consensus' baseline condition, subjects read only one additional article that takes a positive stance, and that article cites a single, unique source. You can see a schematic of the three conditions below.

Our central question is whether observers will properly discount information that is merely repeated. If they do, agreement with the affirmative position in the false consensus condition should be lower than in the true consensus condition (and equal to that of the no consensus condition). But, in fact, we find that people are deceived by false consensus: they exhibit a reliable boost in agreement with the affirmative opinion when they encounter the same information repeated across multiple secondary sources. You can see our results from one experiment below.


As you can see, observers greatly overweight the value of a false consensus. Importantly, we show that this cannot be explained by either (a) a failure to remember the source information or (b) an explicit belief that false consensus is valuable (subjects still exhibit this effect even after explicitly stating that true consensus is superior to false consensus).

However, people do not always fall prey to this illusion of consensus: they are sensitive to some contexts in which repeated information should not carry extra weight. For example, we ran an experiment in which subjects were exposed to the same design as above, except that the articles were not about current events but about a bear sighting at a local high school. Here, what varied was how many unique people claimed to have seen the bear. And in this case, people rationally discounted false consensus (see results below). This suggests that people are still rationally interpreting the information they receive, and that the 'illusion' of consensus therefore applies only to certain kinds of knowledge.