Sensitivity and Difficulty: Influences on d′
Influencing sensitivity
We have discussed how SDT defines sensitivity as the distance between the noise distribution and the signal-plus-noise distribution. But what determines that distance for a given participant in a given block of trials? One determining factor is the participant: someone with poorer eyesight may be less sensitive than someone with excellent eyesight. Equipment like the monitor matters as well; it will be harder to see stimuli on a small, dim monitor than on a large, bright one. And then there is the environment around the participant. Consider a quiet, evenly lit room versus a loud room full of flashing lights and distractions. But if we hold all of that constant, so we are considering the same participant with the same equipment in the same context, can we still manipulate sensitivity?
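As a concrete reference point, that distance is usually quantified, under the equal-variance Gaussian model, as d′ = z(hit rate) − z(false-alarm rate). Here is a minimal sketch of that calculation; the hit and false-alarm rates below are hypothetical, chosen only to show a more and a less sensitive performance:

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Equal-variance Gaussian sensitivity: the distance, in standard
    deviations, between the noise and signal-plus-noise distributions."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical hit and false-alarm rates for two levels of difficulty.
print(d_prime(hit_rate=0.85, fa_rate=0.20))  # ~1.88, more sensitive
print(d_prime(hit_rate=0.60, fa_rate=0.40))  # ~0.51, less sensitive
```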
A harder task
The key is to return to what the two distributions represent: the evidence measured on trials where the signal is present and on trials where it is absent. If we make the signal-present trials more like the signal-absent trials, the two distributions will overlap more, and it will be harder to tell them apart.
Give this example a try:
How did you do? If you’re like me, you didn’t do so well. We’ve increased the difficulty. Indeed, the coherence for this example is set at 0.10, so only ten percent of the dots were moving together on the signal-present trials; the other ninety percent were moving randomly, just as on the signal-absent trials.
An easier task
Let’s try making the signal-present trials less like the signal-absent trials, so they are easier to tell apart.
Try this:
How did you do this time? I know I did a lot better! We’ve decreased the difficulty. This time, the coherence was set to 0.90, so ninety percent of the dots were moving together on the signal-present trials; only ten percent were moving randomly.
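If you want to see why coherence pushes the signal-present distribution around, here is a toy simulation. It is not the code behind the demos above; the number of dots, the size of the “target direction” wedge, the internal-noise level, and the evidence statistic itself are all simplifying assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def evidence(coherence, n_trials=5000, n_dots=100, wedge=1/8, internal_sd=0.06):
    """Toy evidence statistic: the fraction of dots moving in the target
    direction, plus some internal (observer) noise. Coherent dots always
    move in the target direction; random dots land there only by chance."""
    is_coherent = rng.random((n_trials, n_dots)) < coherence
    random_match = rng.random((n_trials, n_dots)) < wedge
    frac = np.where(is_coherent, True, random_match).mean(axis=1)
    return frac + rng.normal(0, internal_sd, n_trials)

absent = evidence(coherence=0.0)   # signal-absent trials: every dot is random
hard = evidence(coherence=0.10)    # hard block: 10% of dots move together
easy = evidence(coherence=0.90)    # easy block: 90% of dots move together

for name, ev in [("absent", absent), ("hard", hard), ("easy", easy)]:
    print(f"{name:>6}: mean = {ev.mean():.2f}, sd = {ev.std():.2f}")
# The hard block's evidence distribution overlaps the signal-absent one
# far more than the easy block's does.
```

The internal-noise term is there because, even with identical dots on the screen, no observer extracts the motion signal perfectly; that internal variability is part of what the noise distribution in the model represents.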
Take a look at the model fits for the hard block and the easy block. Do the two distributions overlap more in the hard block?
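If you want to put a number on that overlap: in the equal-variance Gaussian model, two distributions separated by d′ share an area of 2Φ(−d′/2), so a smaller fitted d′ means more overlap. A quick sketch, with placeholder d′ values (substitute the fits from your own blocks):

```python
from scipy.stats import norm

def overlap(d_prime):
    """Area shared by two unit-variance Gaussians whose means differ by d'.
    They cross midway between the means, so the shared area is 2 * Phi(-d'/2)."""
    return 2 * norm.cdf(-d_prime / 2)

# Placeholder d' values; swap in the fitted values from your own blocks.
print(f"hypothetical hard block, d' = 0.5: overlap = {overlap(0.5):.2f}")  # ~0.80
print(f"hypothetical easy block, d' = 2.5: overlap = {overlap(2.5):.2f}")  # ~0.21
```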
Exploring difficulty
While we can compare the results from hard and easy blocks by scrolling back and forth above, a more typical approach in SDT analysis is to run multiple blocks of trials and plot the results of each block as a new point in ROC space. Then we can compare multiple performances in a single graph.
Try this out below. If you have the time, increase the number of trials in each block to get a more reliable measure. Try a few different levels of coherence, from very low to very high:
Your particular results may vary for all sorts of reasons, but in general, as we vary difficulty from harder to easier, the resulting points in ROC space shift away from the diagonal and toward the upper-left corner.
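If you would rather check that pattern offline, here is a minimal sketch of the usual bookkeeping: tally each block’s hits, misses, false alarms, and correct rejections, convert them to a hit rate and a false-alarm rate, and plot one point per block. The counts below are hypothetical stand-ins for whatever you collected above:

```python
import matplotlib.pyplot as plt

# Hypothetical per-block counts: (coherence, hits, misses, false alarms, correct rejections)
blocks = [
    (0.10, 28, 22, 20, 30),   # hard
    (0.40, 36, 14, 14, 36),
    (0.70, 43,  7,  8, 42),
    (0.90, 48,  2,  4, 46),   # easy
]

for coherence, hits, misses, fas, crs in blocks:
    hit_rate = hits / (hits + misses)
    fa_rate = fas / (fas + crs)
    plt.scatter(fa_rate, hit_rate, label=f"coherence = {coherence:.2f}")

plt.plot([0, 1], [0, 1], linestyle="--", color="gray")  # chance diagonal
plt.xlabel("False-alarm rate")
plt.ylabel("Hit rate")
plt.xlim(0, 1); plt.ylim(0, 1)
plt.legend()
plt.title("One point per block: easier blocks sit nearer the upper-left corner")
plt.show()
```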
Indeed, as we already observed way back on the page about Iso-Bias Curves, SDT predicts that if our manipulation of difficulty truly impacts sensitivity and not bias, then the points should all fall on the same iso-bias curve, and would ideally look like this:
Choose the Bias by moving the threshold in the model diagram. A range of values for the Sensitivity has been selected to illustrate how manipulating it looks in ROC space.
The exact position of the points depends on the bias, but the overall pattern of points shifting toward the upper-left corner of ROC space holds. Even if bias varies a bit from condition to condition, that pattern will remain.
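For a numerical version of that prediction, here is a sketch under the equal-variance Gaussian model, using one common convention for bias (c is the criterion measured from the midpoint between the two distributions). Holding c fixed while d′ grows traces out a single iso-bias curve; the particular c and d′ values below are arbitrary choices for illustration:

```python
import numpy as np
from scipy.stats import norm

c = 0.25                             # fixed bias (criterion relative to the midpoint)
d_primes = np.arange(0.0, 3.5, 0.5)  # increasing sensitivity

for d in d_primes:
    hit_rate = norm.cdf(d / 2 - c)   # P(respond "present" | signal)
    fa_rate = norm.cdf(-d / 2 - c)   # P(respond "present" | noise)
    print(f"d' = {d:.1f}: FA = {fa_rate:.2f}, Hit = {hit_rate:.2f}")
# As d' grows, the (FA, Hit) points march from near the diagonal toward
# the upper-left corner, all along the same iso-bias curve.
```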
We’ve pushed around sensitivity by manipulating the coherence of the dots, but the same pattern should hold for other manipulations of difficulty. For example, you could leave the coherence constant and adjust the duration of the stimulus instead. Feel free to give it a try above!