Evidence-based assessment/Prescription phase/Interviews Are Not Perfect

=Deep Thought/Rabbit Hole=

Are interviews perfect? It would be nice if they were, but they aren't.

Unstructured interviews have all kinds of limitations, as covered in the Jensen-Doss et al. (2020) chapter.

Fully structured interviews have the highest inter-rater reliability (because every respondent gets exactly the same script), which should improve the validity of conclusions based on them (recall that reliability sets an upper limit on validity). But if the patient does not understand a question, or misinterprets what is being asked, validity suffers. There is a long-running debate in epidemiology about whether diagnoses made by research assistants using a structured interview (such as the SCID, MINI, or CIDI) represent "real" cases with clinical impairment. (Bird; Kessler)

Semi-structured interviews are supposed to hit the sweet spot: the structure ensures that we ask all clients about the "vital few" topics, while letting us go deeper on topics relevant to the individual. The clinician can rephrase questions, departing from the script to improve communication. That should improve validity compared to fully structured interviews, but it lowers inter-rater reliability, because every interviewer has some discretion about how to ask questions and how to interpret the answers. That wiggle room is why semi-structured approaches are supposed to be used only by trained clinicians. Training is not a magical solution either, though, given the potentially large differences in clinical training and in the models we use. The actual validity of a semi-structured approach lands somewhere between the gain from adding clinical judgment and the loss in inter-rater reliability from letting people improvise.
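The claim that reliability caps validity can be made concrete with classical test theory: the observed validity correlation between two measures cannot exceed the square root of the product of their reliabilities. A minimal sketch (the function name and all reliability values below are hypothetical, chosen only to illustrate the ceiling):

```python
import math

def max_observed_validity(rxx: float, ryy: float) -> float:
    """Classical test theory ceiling: the observed correlation between two
    measures cannot exceed sqrt(rxx * ryy), where rxx and ryy are the
    reliabilities of the predictor and the criterion."""
    return math.sqrt(rxx * ryy)

# Hypothetical reliabilities: a fully structured interview (high inter-rater
# agreement) versus a semi-structured one (more interviewer discretion),
# each validated against the same criterion with reliability .80.
structured_cap = max_observed_validity(0.90, 0.80)
semi_structured_cap = max_observed_validity(0.75, 0.80)

print(round(structured_cap, 3))       # ceiling for the structured interview
print(round(semi_structured_cap, 3))  # ceiling for the semi-structured interview
```

The point of the sketch is the ordering, not the specific numbers: lowering inter-rater reliability shrinks the best-case validity, and whether the added clinical judgment buys back more than that ceiling loss is an empirical question.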

==LEAD Diagnoses==
LEAD stands for Longitudinal evaluation by Experts using All available Data (Spitzer, 1983), often used as a criterion standard for diagnosis. Describe LEAD in more detail. Talk about whether we can do this in clinical practice.

==Research Corner==
Talk about the ways of estimating the validity of SDIs (e.g., comparing to LEAD).

Kraemer (1992) and effects on AUC.

Could paste in cuttings from parent page.

==Psychometric properties of common diagnostic interviews==
Note: "LR+" is the likelihood ratio associated with a positive test result, and "LR-" is the likelihood ratio associated with a negative (low) score. A likelihood ratio of 1 means the test result did not change impressions at all. LRs larger than 10 or smaller than 0.10 are frequently clinically decisive; ratios around 5 or 0.20 are helpful; and ratios between 2.0 and 0.5 are small enough that they rarely produce clinically meaningful changes in formulation (Sackett et al., 2000).
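The likelihood ratios in the note above follow directly from a test's sensitivity and specificity, and they update a pretest probability through odds (Bayes' theorem). A short sketch, using hypothetical sensitivity, specificity, and base-rate values for illustration:

```python
def likelihood_ratios(sensitivity: float, specificity: float):
    """Standard definitions: LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

def posttest_probability(pretest_prob: float, lr: float) -> float:
    """Convert probability to odds, multiply by the LR, convert back."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1.0 + posttest_odds)

# Hypothetical interview module with sensitivity .85 and specificity .92:
lr_pos, lr_neg = likelihood_ratios(0.85, 0.92)
print(lr_pos, lr_neg)  # LR+ is about 10.6 (often decisive), LR- about 0.16

# Starting from a 10% base rate, a positive result raises the probability
# to roughly 54%, while a negative result drops it to about 2%.
print(posttest_probability(0.10, lr_pos))
print(posttest_probability(0.10, lr_neg))
```

This is why an LR of 1 changes nothing: multiplying the pretest odds by 1 returns the same odds, and hence the same probability.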