When Too Much Agreement Is a Red Flag

Monday, October 31, 2016

Can there be too much agreement about an idea or a product or a practice for one to be inclined to accept it for oneself? The recent findings of a mathematician, in line with the hunches of policemen the world over, suggest so.

Alex Tabarrok of Marginal Revolution summarizes a mathematician's contention about data as follows: "More evidence can reduce confidence." Obviously it can, particularly if it contradicts other evidence one has gathered, but what makes the claim interesting is that he is not speaking of this obvious case:

... The basic idea is simple. We expect that in most processes there will normally be some noise so absence of noise suggests a kind of systemic failure. The police are familiar with one type of example. When the eyewitnesses to a crime all report exactly the same story that reduces confidence that the story is true. Eyewitness stories that match too closely suggest not truth but a kind of systemic failure, namely the witnesses have collaborated on telling a lie. [bold added]

In other words, since people generally can make mistakes, we should normally expect some discrepancies among accounts when asking for an account of fact from more than one person. But I would emphasize normally. I wouldn't be at all suspicious if, for example, I asked ten people what color the sky is and they all said, "Blue." The complexity of the question and the ability of an average person to evaluate it also come into play. But as a rule of thumb, it is hardly off-base to want to look into something in more detail when too many people seem more sure about an issue than circumstances seem to warrant.
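The arithmetic behind that rule of thumb is easy to sketch. In the toy model below (my own illustration, not from the post or Tabarrok's source), each witness is honest and independent but only right, say, 90% of the time. Then the chance that every one of them tells exactly the same correct story shrinks quickly as the number of witnesses grows, so perfect unanimity among many fallible witnesses is itself evidence that something non-independent, such as collusion, is at work:

```python
def prob_unanimous(p: float, n: int) -> float:
    """Probability that n independent witnesses, each correct with
    probability p, all report the same (true) story."""
    return p ** n

# With 90%-reliable witnesses, unanimity gets less likely as n grows:
for n in (3, 10, 20):
    print(n, round(prob_unanimous(0.9, n), 3))
# 3  -> 0.729
# 10 -> 0.349
# 20 -> 0.122
```

So if twenty such witnesses nonetheless agree word for word, the "independent and merely fallible" model has become hard to believe, which is exactly the policeman's hunch. The sky-color case escapes this logic because there p is close to 1, and unanimity stays likely no matter how many people you ask.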

We should all want to evaluate the truth for ourselves, but we sometimes lack the time or expertise to do so. In those cases, we have to rely to some degree on what others say, so it behooves us to know when to dig deeper. "Too much" agreement doesn't necessarily mean a given group of people is wrong, but it suggests that a closer look at why they all agree is in order.

-- CAV

4 comments:

Scott Holleran said...

Excellent post. Thank you.

Gus Van Horn said...

Thanks for the kind words, Scott.

Steve D said...


He seems to be discussing something akin to the p-value/sample-size problem, in which large sample sizes can actually lead to the wrong conclusion, since they can make a practically insignificant effect statistically significant.

This is actually a potential watch-out for my work. There is an ideal sample size for every experiment; going too high or too low reduces the chance of getting a correct answer.

http://blog.minitab.com/blog/statistics-and-quality-data-analysis/large-samples-too-much-of-a-good-thing
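Steve's point can be demonstrated in a few lines. The sketch below (my own illustration, with made-up numbers; not from the linked article) runs a one-sample z-test against a null mean of 100 when the true mean is 100.1 — a difference of no practical importance. With a small sample the test rightly finds nothing; with a huge sample the same trivial difference produces a tiny p-value, i.e. "statistical significance":

```python
import math
import random

def z_test_pvalue(sample, mu0):
    """Two-sided one-sample z-test of the sample mean against mu0
    (normal approximation, fine for the large n used here)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = (mean - mu0) / math.sqrt(var / n)
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value

random.seed(0)
# True mean 100.1, sd 15; null hypothesis says the mean is 100.
for n in (50, 500_000):
    sample = [random.gauss(100.1, 15) for _ in range(n)]
    print(n, z_test_pvalue(sample, 100.0))
```

At n = 50 the 0.1-point difference is buried in noise, while at n = 500,000 it is detected with near certainty; the significance is real but the effect is still negligible, which is why large samples can invite the wrong conclusion if the effect size is ignored.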

Gus Van Horn said...

Steve,

Interesting points from that article, not to mention an amusing analogy.

Gus