Bias and Incentive: Influences on c

Influencing bias

On the previous page, we explored factors that alter sensitivity. Now let’s turn our attention to bias. We have discussed how SDT defines bias as the position of the decision threshold relative to the neutral point midway between the two distributions. But what determines the threshold location for a given participant in a given block of trials? One way to look at this is to observe that while sensitivity determines how many errors we will make, bias determines what type of errors they will be. A neutral bias balances equally between type I errors (false alarms) and type II errors (misses). A more conservative bias means more misses, while a more liberal bias means more false alarms. What might influence our desire to avoid one type of error more than the other?

Of course, there are lots of potential answers to the question just posed, but many of them center around the concept of incentive. What are the consequences of a miss versus a false alarm? And how much do I value those consequences?

In the world, those consequences can take many forms. For example, consider the potential consequences of a miss if you are operating a metal detector at the airport. Or, on the other hand, the consequences of a false alarm if you are on a jury in a courtroom. To keep things simpler, and more quantitative, we’ll use (theoretical) monetary incentives. In the examples below, each possible outcome is associated with an incentive in the form of a monetary reward or punishment.

When misses are worse

First, let’s consider a situation where the negative consequences of a miss are much worse than those of a false alarm (and the positive consequences of a hit are much better than those of a correct rejection). Give it a try:

The incentive for each outcome is displayed beneath the corresponding label in the table of outcomes: gain $90 for a Hit, lose $10 for a False Alarm, gain $10 for a Correct Rejection, and lose $90 for a Miss. In addition, you will lose $100 for No Response, so make sure you respond on each trial!

On each trial, your gain or loss will be displayed below the outcome in the feedback box. And a running total of gains and losses in the block is displayed below that box.

How did you respond? If you are like me, you made a lot more ‘present’ responses than ‘absent’ responses. It just makes good sense, since it maximizes possible gains and minimizes possible losses.
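To see why, here is a quick back-of-the-envelope check in Python (a hypothetical sketch, not part of the demo: it assumes signal-present and signal-absent trials are equally likely, and the function name and example strategies are illustrative):

```python
# Expected gain per trial under the payoff table above:
# Hit +$90, False Alarm -$10, Correct Rejection +$10, Miss -$90.
P_SIGNAL = 0.5  # assumed probability of a signal-present trial

def expected_payoff(p_yes_given_signal, p_yes_given_noise):
    """Expected earnings per trial for a given response strategy."""
    hits   = P_SIGNAL * p_yes_given_signal * 90
    misses = P_SIGNAL * (1 - p_yes_given_signal) * -90
    fas    = (1 - P_SIGNAL) * p_yes_given_noise * -10
    crs    = (1 - P_SIGNAL) * (1 - p_yes_given_noise) * 10
    return hits + misses + fas + crs

print(expected_payoff(1.0, 1.0))  # always 'present': +$40.00 per trial
print(expected_payoff(0.0, 0.0))  # always 'absent':  -$40.00 per trial
print(expected_payoff(0.8, 0.3))  # a moderately liberal observer: +$29.00
```

Under this table, even a big increase in false alarms costs little compared to what the extra hits earn, so shifting toward ‘present’ responses raises expected earnings.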

When false alarms are worse

Now, let’s try out the opposite scenario, where the negative consequences of a false alarm are much worse than those of a miss (and the positive consequences of a correct rejection are much better than those of a hit). Give this version a try:

How did you respond this time? The logical thing to do is to shift to lots of ‘absent’ responses and far fewer ‘present’ responses.

Take a look at the position of the threshold in the model fits for the two blocks of trials you just performed. Is the threshold shifted to the left in the first case and to the right in the second?
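If you want to quantify those shifts, the standard equal-variance SDT formulas recover both sensitivity and bias from the observed hit and false-alarm rates. A minimal sketch in Python (assuming SciPy is available; the function name is just for illustration):

```python
# d' = z(H) - z(F)         sensitivity: distance between the distributions
# c  = -(z(H) + z(F)) / 2  bias: threshold position relative to the neutral point
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    z_h = norm.ppf(hit_rate)  # z-transform of the hit rate
    z_f = norm.ppf(fa_rate)   # z-transform of the false-alarm rate
    return z_h - z_f, -(z_h + z_f) / 2  # (d', c)

# c = 0 is neutral; c < 0 is liberal (more false alarms);
# c > 0 is conservative (more misses).
print(sdt_measures(0.85, 0.30))  # liberal observer: c comes out negative
print(sdt_measures(0.60, 0.05))  # conservative observer: c comes out positive
```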

Exploring bias

As we saw on the previous page, when we want to compare performance across multiple blocks while manipulating an experimental variable of interest, it is helpful to plot the results from each block in the same ROC space for easier direct comparison.

Try this out below. Run a few blocks, each time with a different balance of incentives:

Use the Payoff slider to determine the balance of incentives. When the slider is at $0, False Alarms are punished the most. At $50, False Alarms and Misses are punished equally. And at $100, Misses are punished the most.

Your particular results may vary due to a wide variety of factors, but, in general, we find that as we shift from punishing false alarms to punishing misses, the resulting points in ROC space shift from being nearer to the lower-left corner to being nearer to the upper-right corner.

As we observed back on the page about Iso-Sensitivity Curves, SDT predicts that if our manipulation of incentive truly impacts bias and not sensitivity, then the points should all fall on the same iso-sensitivity curve, and would ideally look like this:

Choose the Sensitivity by moving the distributions in the model diagram. A range of values for the Bias has been selected to illustrate how manipulating it looks in ROC space.
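The same ideal pattern can be generated numerically: hold d′ fixed and sweep the criterion, and the resulting points trace out a single iso-sensitivity curve. A sketch using the equal-variance Gaussian model (the particular d′ and bias values here are arbitrary choices, not taken from the demo):

```python
import numpy as np
from scipy.stats import norm

d_prime = 1.5                       # fixed sensitivity
biases = np.linspace(-1.0, 1.0, 5)  # liberal -> conservative

for c in biases:
    hit_rate = norm.cdf(d_prime / 2 - c)  # H = Phi(d'/2 - c)
    fa_rate = norm.cdf(-d_prime / 2 - c)  # F = Phi(-d'/2 - c)
    print(f"c = {c:+.1f}  (F, H) = ({fa_rate:.2f}, {hit_rate:.2f})")
```

Liberal criteria (negative c) land near the upper-right corner and conservative criteria (positive c) near the lower-left, matching the pattern described above.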

While the exact positions of the points depend on the bias values, the general pattern, with points arrayed from the lower-left corner to the upper-right corner of ROC space, holds. And even if sensitivity varies a bit from condition to condition, that pattern will remain.

We’ve pushed around bias by manipulating the payoffs for the outcomes, but the same pattern should hold for other manipulations of incentive. For example, if the frequencies of signal-present and signal-absent trials are unequal, that can lead to a bias. This works because it changes the prior probability of the two stimulus types: when signals are rare, ‘absent’ responses are correct more often, which favors a more conservative threshold.
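Both manipulations fit the classic ideal-observer account, in which the optimal likelihood-ratio criterion depends on both the prior odds and the payoff matrix. A hedged sketch of that textbook formula (the function and variable names are illustrative; the payoffs are the ones from the first block above):

```python
import math

def optimal_beta(p_signal, v_hit, v_miss, v_cr, v_fa):
    """beta* = [P(noise) * (V_CR - V_FA)] / [P(signal) * (V_Hit - V_Miss)]"""
    prior_odds = (1 - p_signal) / p_signal
    payoff_ratio = (v_cr - v_fa) / (v_hit - v_miss)
    return prior_odds * payoff_ratio

# Equal priors, misses punished heavily (the 'misses are worse' block):
beta = optimal_beta(0.5, v_hit=90, v_miss=-90, v_cr=10, v_fa=-10)
c_star = math.log(beta) / 1.5  # equal-variance model: beta = exp(c * d'); d' = 1.5 assumed
print(beta, c_star)            # beta < 1 and c* < 0: a liberal criterion

# Making signals rare pushes the criterion the other way:
print(optimal_beta(0.1, 90, -90, 10, -10))  # beta = 1.0: rare signals offset these payoffs
```

Rare signals raise the prior odds and push the optimal criterion in the conservative direction, which is exactly the bias that unequal trial frequencies tend to induce.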