Information Cascades, Revisited

A couple of years ago, I blogged about an information cascade problem I’d read about in David Easley and Jon Kleinberg’s textbook, Networks, Crowds, and Markets. To recall the problem (which they themselves borrowed from Lisa Anderson and Charles Holt):

The experimenter puts an urn at the front of the room with three marbles in it; she announces that there is a 50% chance that the urn contains two red marbles and one blue marble, and a 50% chance that the urn contains two blue marbles and one red marble…one by one, each student comes to the front of the room and draws a marble from the urn; he looks at the color and then places it back in the urn without showing it to the rest of the class. The student then guesses whether the urn is majority-red or majority-blue and publicly announces this guess to the class.

The fascinating result is that the sequence of guesses locks in on a single color as soon as two consecutive students agree. For example, if the first two marbles drawn are blue, then all subsequent students will guess blue. If the urn is majority-red, then it turns out there is a 16/21 probability that the sequence will converge to red and a 5/21 probability that it will converge to blue.

Let me explain why I find this problem so fascinating.

Consider a scenario where you are among a group of people faced with a single binary decision (let’s say, choosing red or blue), and each of you is independently tasked with recommending the best decision given your own judgement and all available information. Assume further that each of you is perfectly rational and that your prior decisions (i.e., what each of you would decide without knowing what anyone else thinks) are independent and identically distributed random variables. Let’s follow the example above, in which each participant’s prior decision is a Bernoulli random variable that is correct with probability p = 2/3.

If each of you makes a decision independently, then the expected fraction of participants who make the right decision is 2/3.

But you could do better if you have a chance to observe others’ independent decision making first. For example, if you get to witness 100 independent decisions, then you have a very low probability of going wrong by voting with the majority. If you’d like the gory details, review the cumulative distribution function of binomial random variables.
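If you’d rather skip the gory details, here is a minimal sketch of that computation in Python (the function name majority_correct is mine, purely for illustration). It sums the upper tail of a Binomial(n, p) distribution — the probability that more than half of n independent guesses are right:

```python
from math import comb

# Probability that the majority of n independent guesses, each correct
# with probability p, is itself correct: the upper tail of Binomial(n, p).
# (For even n, an exact 50/50 tie is counted as a wrong outcome.)
def majority_correct(n: int, p: float) -> float:
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_correct(100, 2 / 3))  # ~0.9996: the majority is almost surely right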

On the other hand, if the decisions happen sequentially and every person has access to all of the previous decisions, then we see an information cascade. Rationally, it makes sense to let previous decisions influence your own, and indeed 16/21 > 2/3. But that still leaves a 5/21 chance, almost one in four, of making the wrong decision, even after you witness 100 previous decisions. We are wasting a lot of independent input because of how participants are incentivized.

I can’t help wondering how changing the incentives could affect the outcome of this process. What would happen if participants were rewarded based, in whole or in part, on the accuracy of the participants who guess after them?

Consider as an extreme case rewarding all participants based solely on the accuracy of the final participant’s guess. In that case, the optimal strategy for all but the last participant is to ignore previous participants’ guesses and vote based solely on their own independent judgements. Then the final participant combines these judgements with his or her own and votes with the majority. The result makes optimal use of all participants’ independent judgements, despite the sequential decision process.
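Here is a minimal Monte Carlo sketch of that protocol, assuming five participants and p = 2/3 (run_protocol is an illustrative name, not anything from the original experiment):

```python
import random

# Toy run of the "reward everyone on the final guess" scheme: participants
# 1..n-1 announce their own draw, ignoring prior votes; participant n takes
# the majority of those votes plus his or her own draw.
def run_protocol(n, p, rng):
    votes = [rng.random() < p for _ in range(n - 1)]  # True = correct signal
    own = rng.random() < p
    return (sum(votes) + own) > n / 2  # the guess the group reward rides on

rng = random.Random(0)
trials = 200_000
wins = sum(run_protocol(5, 2 / 3, rng) for _ in range(trials))
print(wins / trials)  # ~0.79 (exactly 64/81), versus 2/3 for a lone guess
```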

But what if individuals are rewarded based on a combination of individual and collective success? Consider the 3rd participant in our example, who draws a red marble after the first two participants have guessed blue. Let’s say that there are 5 participants in total. If the reward is entirely based on individual success, the 3rd participant will vote blue, yielding an expected reward of 2/3. If the reward is entirely based on group success, the 3rd participant will vote red, yielding an expected reward of 20/27 (details left as an exercise for the reader, or see the sketch below). If we make the reward evenly split between individual success and group success, the 3rd participant will still vote blue: the benefit from helping the group will not be enough to overcome the cost to the individual reward.
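Here is a sketch of that exercise in Python, using exact fractions. It assumes the first two blue guesses reveal blue draws, that the 3rd participant votes red to reveal the red signal, and that the 5th participant takes the majority of all five revealed signals (the aggregation scheme from the extreme case above):

```python
from fractions import Fraction
from itertools import product

# P(draw = blue) under each urn hypothesis (prior: 1/2 each).
p_blue = {"majority_blue": Fraction(2, 3), "majority_red": Fraction(1, 3)}

def p_signal(state, s):
    return p_blue[state] if s == "B" else 1 - p_blue[state]

# Posterior over urns given the first three draws: B, B, R.
obs = ["B", "B", "R"]
likelihood = {st: p_signal(st, "B") ** 2 * p_signal(st, "R") for st in p_blue}
total = sum(likelihood.values())
posterior = {st: lk / total for st, lk in likelihood.items()}
print(posterior["majority_blue"])  # 2/3: the expected reward for voting blue

# If the 3rd participant votes red (revealing the signal) and the 5th takes
# the majority of all five signals, the group succeeds with probability:
group_reward = Fraction(0)
for st in p_blue:
    for s4, s5 in product("BR", repeat=2):
        prob = posterior[st] * p_signal(st, s4) * p_signal(st, s5)
        blues = obs.count("B") + [s4, s5].count("B")
        guess = "majority_blue" if blues >= 3 else "majority_red"
        if guess == st:
            group_reward += prob
print(group_reward)  # 20/27
```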

There’s a lot more math in the details of this problem, e.g., “The Mathematics of Bayesian Learning Traps” by Simon Loertscher and Andrew McLennan. But there’s a simple takeaway: incentives are crucial in determining how we best exploit our collective wisdom. Something to think about the next time you’re on a committee.
