# Signal Detection Theory: Putting it Together

## Bias (c) and sensitivity (d’)

We have seen how, depending on whether the signal is present or absent, a measurement is made from the corresponding distribution. And we have seen how the value of this measurement relative to a threshold determines whether the model will respond ‘present’ or ‘absent’. Now we can put the distributions and the threshold together to see how sensitivity (d′) and bias (c) determine our response:

You can adjust d′ by moving the distributions and c by moving the threshold. In this example, the distributions are colored according to the resulting response. Any evidence measurements shown for individual trials will switch between ‘Present’ and ‘Absent’ responses as the locations of the distributions and the threshold change.

Hopefully, it is clear that the response depends on *both* sensitivity *and* bias.

## A measurement plus a threshold gives us an outcome!

We are now ready to go one step further. Because each simulated trial can be classified both by whether the signal was present or absent and by whether the response was ‘present’ or ‘absent’, we can determine the *outcome* of each trial: a hit, a miss, a correct rejection, or a false alarm.
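This two-way classification can be sketched as a small function (a minimal illustration; the function and label names are my own, not from the model):

```python
def classify(signal_present: bool, measurement: float, threshold: float) -> str:
    """Return the SDT outcome for one trial: the signal's presence and the
    measurement's position relative to the threshold jointly fix the outcome."""
    respond_present = measurement > threshold
    if signal_present and respond_present:
        return "hit"
    if signal_present and not respond_present:
        return "miss"
    if not signal_present and respond_present:
        return "false alarm"
    return "correct rejection"

print(classify(True, 1.2, 0.5))    # signal present, measurement above threshold: hit
print(classify(False, 0.9, 0.5))   # signal absent, measurement above threshold: false alarm
```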

Explore the model and how sensitivity and bias determine outcomes:

In the *live* graph of the model, the box marking the model’s measurement for each trial indicates whether the trial outcome will be a Hit, Miss, Correct Rejection, or False Alarm. The distributions are likewise colored to show the outcome that will result from sampling each region of each distribution.

The measurements and resulting outcomes for each trial in the histogram will update as you drag the distributions or the threshold to adjust the parameter values. The table and ROC space will also update. This will help you to see how the values of d′ and c in the model determine performance in terms of hits, misses, correct rejections, and false alarms, as well as hit rate, false alarm rate, and accuracy.

Run a bunch of trials and watch them accumulate in the histogram, the table, and in ROC space. Adjust the distributions and threshold to get a sense of how the values of the model parameters d′ and c jointly determine performance. Indeed, any pattern of performance can be accounted for with the model by selecting the appropriate parameter values!
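The trial-by-trial process described above can be simulated directly. The sketch below assumes the common convention that the noise and signal-plus-noise distributions are unit-variance normals centered at −d′/2 and +d′/2, with the threshold c measured from their midpoint (the function name and defaults are my own):

```python
import random

def simulate(d_prime: float, c: float, n_trials: int = 10_000, seed: int = 1):
    """Simulate an SDT observer; return (hit rate, false-alarm rate).
    Assumed convention: noise ~ N(-d'/2, 1), signal+noise ~ N(+d'/2, 1),
    and a 'present' response whenever the measurement exceeds c."""
    rng = random.Random(seed)
    hits = misses = fas = crs = 0
    for _ in range(n_trials):
        signal = rng.random() < 0.5                  # signal present on half the trials
        mean = d_prime / 2 if signal else -d_prime / 2
        measurement = rng.gauss(mean, 1.0)           # evidence sample for this trial
        respond_present = measurement > c
        if signal and respond_present:
            hits += 1
        elif signal:
            misses += 1
        elif respond_present:
            fas += 1
        else:
            crs += 1
    return hits / (hits + misses), fas / (fas + crs)

hit_rate, fa_rate = simulate(d_prime=2.0, c=0.0)
print(hit_rate, fa_rate)
```

With d′ = 2 and c = 0 the simulated hit rate settles near 0.84 and the false-alarm rate near 0.16, though any single run of finitely many trials will wobble around those values.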

## Calculating hit and false alarm rates from sensitivity and bias

According to SDT, the measurement of evidence on each trial is stochastically sampled from the noise distribution or the signal-plus-noise distribution. In the examples above, we simulated a small number of trials by pseudo-random sampling and calculated a hit rate and false alarm rate from the results, just as we did when you were the participant. These results are fairly unreliable, since we have a small sample. However, since SDT specifies the distributions and we have specified the model parameters, we can calculate the *exact* rates the theory predicts, as if we had collected data from an *infinite* number of trials.

As suggested by the way the distributions are color-coded based on outcomes in the graph above, the hit rate is the proportion of the signal-plus-noise distribution above the threshold. Using the cumulative distribution function of the normal distribution, Φ, we can calculate it from d′ and c:
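Under the common convention that the noise and signal-plus-noise distributions are unit-variance normals centered at −d′/2 and +d′/2 (so that c is measured from their midpoint), this proportion works out to:

```math
\text{hit rate} \;=\; P(\text{measurement} > c \mid \text{signal present}) \;=\; \Phi\!\left(\tfrac{d'}{2} - c\right)
```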

Similarly, the false alarm rate is the proportion of the noise distribution above the threshold. It can also be calculated from d′ and c:
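Under the same convention as above, the false alarm rate is Φ(−d′/2 − c). A minimal numerical sketch of both calculations, using `math.erf` for Φ (the function names here are my own):

```python
from math import erf, sqrt

def Phi(x: float) -> float:
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def predicted_rates(d_prime: float, c: float):
    """Exact SDT predictions, assuming unit-variance normals centered at
    -d'/2 and +d'/2 with the threshold c measured from their midpoint."""
    hit_rate = Phi(d_prime / 2 - c)    # proportion of signal+noise above threshold
    fa_rate = Phi(-d_prime / 2 - c)    # proportion of noise above threshold
    return hit_rate, fa_rate

hit_rate, fa_rate = predicted_rates(d_prime=2.0, c=0.0)
# d' = 2, c = 0  ->  hit rate ~ 0.84, false-alarm rate ~ 0.16
print(hit_rate, fa_rate)
```

Note that these are the exact values the small-sample simulations above scatter around.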

In the following pages we will continue to explore the relationship between the model parameters for sensitivity and bias, and the behavioral measures of hit rate and false alarm rate.