This year MHO modified its Statistically Meaningful Improvement (SMI) metric to exclude patients whose admission assessment severity was too low for them to achieve improvement, provided the patient also did not decline. MHO consistently finds that 8-10% of inpatients with a change score on a patient self-report assessment have admission and discharge severity scores so low that they are ineligible for this new version of the SMI metric (Figure 1, rates per diagnostic group). As you might expect, our data curiosity was piqued! We set out to explore what may be driving this phenomenon and the resulting “loss” of outcomes data. After all, clinical thresholds for inpatient admission dictate that patients are experiencing some type of psychiatric crisis, which directly contradicts a low admission assessment severity.
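To make the revised eligibility rule concrete, below is a minimal sketch in Python of how the exclusion logic might be applied to a single change score. The function name, the thresholds, and the assumption that higher scores indicate greater severity are illustrative placeholders, not MHO's published specification.

```python
def smi_eligibility(admit_score: float, discharge_score: float,
                    floor_threshold: float = 5.0,
                    decline_margin: float = 0.0) -> str:
    """Classify one patient record for the revised SMI metric.

    Assumes higher scores mean greater symptom severity and that
    'decline' means the discharge score rose beyond a small margin.
    Threshold values here are hypothetical, for illustration only.
    """
    too_low_to_improve = admit_score < floor_threshold
    declined = discharge_score > admit_score + decline_margin

    if too_low_to_improve and not declined:
        # Admission severity leaves no room for measurable improvement,
        # and the patient did not get worse: excluded from the metric.
        return "excluded"
    return "eligible"


# Example with PHQ-9-style scores and a hypothetical floor of 5 points
print(smi_eligibility(admit_score=3, discharge_score=2))    # excluded
print(smi_eligibility(admit_score=3, discharge_score=9))    # eligible (declined)
print(smi_eligibility(admit_score=14, discharge_score=6))   # eligible
```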
A disconnect between primary diagnosis and assessment tool
In MHO’s data, 40% of patients excluded from the new SMI metric due to very low admission and discharge severity were likely assessed with a tool that is less than ideal for their primary diagnosis: for example, patients diagnosed with anxiety or psychotic disorders assessed with the depression-specific Patient Health Questionnaire (PHQ-9) or the substance-use-specific Brief Substance Craving Scale (BSCS). The rate of exclusion from the new SMI metric varies by diagnostic group. Further, patients assessed with a less-than-ideal tool were roughly three times more likely to be excluded from the new SMI metric (17.3% vs. 5.6%) and, even when not excluded, were less likely to achieve large SMI (Figure 2).
Of course, these results indicate that not all patients assessed with a less-than-ideal tool score low at admission, and many do in fact show improvement. Why is this? Mental disorders have some clinical overlap[1] and can be comorbid with other disorders[2] that the less-than-ideal tool does measure. Nonetheless, selecting the right tool allows for more accurate evaluation of patient symptomatology and treatment impact.
Patients with low admission severity on an ideal tool
Approximately 1 in 20 patients assessed with an ideal tool (e.g., patients with mood disorders assessed with the PHQ-9) are ultimately excluded from the new SMI metric due to low admission and discharge severity (Figure 2). Could it be that the ability to reliably self-report symptoms is compromised by the low insight and/or cognitive deficits associated with serious mental disorders? Although that can be the case for certain patients[3][4][5], research shows we can generally trust the validity of self-report measures for most symptom domains[6]. However, even when we cannot fully trust patient self-report, assessment and documentation of patient insight can be a beneficial component of the inpatient psychiatric assessment.
We may also question whether ‘ideal tools’ are indeed the gold standard for assessing all of the specific diagnoses within broad diagnostic categories (e.g., there is considerable variety among anxiety disorders), and whether some of these tools cover all symptom domains within the diagnosis (e.g., the BSCS, used for psychoactive substance use disorders, assesses only cravings over the last 24 hours). Examining each assessment tool and other potential causes of low admission severity within each broad diagnostic category could provide more insight into this complicated and important issue.
Final thoughts
Clinicians and hospital administrators should consider the reasons behind low symptom severity and the tools used to assess symptoms when treating patients with severe mental health problems. Facilities may benefit from diversifying their assessment tool inventory and administering disorder-specific measures, using measures that assess a broader scope of symptoms such as the Behavior and Symptom Identification Scale (BASIS-32™), or using multiple measures in cases of comorbidity. After all, administering appropriate tools allows facilities and clinicians to objectively evaluate and communicate to stakeholders whether the care provided is impacting patient outcomes.
[1] Doherty, J. L., & Owen, M. J. (2014). Genomic insights into the overlap between psychiatric disorders: Implications for research and clinical practice. Genome Medicine, 6(4), 29. https://doi.org/10.1186/gm546
[2] Plana-Ripoll, O., Pedersen, C. B., Holtz, Y., et al. (2019). Exploring comorbidity within mental disorders among a Danish national population. JAMA Psychiatry, 76(3), 259–270. https://doi.org/10.1001/jamapsychiatry.2018.3658
[3] van Helvoort, D., Merckelbach, H., van Nieuwenhuizen, C., & Otgaar, H. (2022). Traits and distorted symptom presentation: A scoping review. Psychological Injury and Law, 15(2), 151–171. https://doi.org/10.1007/s12207-022-09446-0
[4] Atkinson, M., Zibin, S., & Chuang, H. (1997). Characterizing quality of life among patients with chronic mental illness: A critical examination of the self-report methodology. American Journal of Psychiatry, 154, 99–105.
[5] Calsyn, R. J., Allen, G., Morse, G. A., et al. (1993). Can you trust self-report data provided by homeless mentally ill individuals? Evaluation Review, 17, 353–366.
[6] Bell, M., Fiszdon, J., Richardson, R., & Bryson, G. (2007). Are self-reports valid for schizophrenia patients with poor insight? Relationship of unawareness of illness to psychological self-report instruments. Psychiatry Research, 151, 37–46. https://doi.org/10.1016/j.psychres.2006.04.012