Systematic Review of Automated Cognitive Tests Yields Little Useful Clinical Guidance

Researchers from the University of Liverpool conducted a systematic literature review (PROSPERO CRD42015025410) to evaluate computerized tests for cognitive impairment, assessing both their diagnostic accuracy and their utility as monitoring tools for treatment response and disease progression. However, the findings, published in the International Journal of Geriatric Psychiatry, were not amenable to statistical meta-analysis because of the wide variety of tests used across studies, a lack of high-quality evidence, and nonstandardized reporting; the review was therefore unable to offer much clinical guidance.

The team focused on studies that examined automated tests designed to identify mild cognitive impairment (MCI) and/or early dementia, as there is a need to better estimate incidence and prevalence in susceptible populations amid a growing public health burden. Their belief was that automated tests could play a crucial role in earlier diagnosis by offering improved sensitivity and examining a wider array of abilities than standard pen-and-paper batteries.

Investigators searched 6 databases — Medline, Cochrane, Embase, ProQuest, PsycINFO, and Institute for Scientific Information — from 2005 to 2015, and identified 16 studies that fit their criteria. Within these studies, 11 different computerized tools were used to diagnose MCI and/or early dementia (which contributed to the issue of variability). Whenever possible, several parameters were calculated, including sensitivity, specificity, and area under the curve (AUC), among others, to evaluate potential clinical utility.
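
For readers unfamiliar with these parameters, the sketch below shows how they are typically derived; it is a minimal illustration using hypothetical numbers, not the review's own analysis or data. Sensitivity and specificity come directly from a 2x2 contingency table, while AUC can be computed from individual test scores via the rank-sum (Mann-Whitney) identity.

```python
# Illustrative only: standard definitions of sensitivity, specificity,
# and AUC. All numbers below are hypothetical, not from the review.

def sensitivity(tp, fn):
    """True-positive rate: impaired participants the test correctly flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: unimpaired participants the test correctly clears."""
    return tn / (tn + fp)

def auc(scores_impaired, scores_unimpaired):
    """Area under the ROC curve via the rank-sum identity: the probability
    that a randomly chosen impaired case scores higher (worse) than a
    randomly chosen unimpaired one, with ties counting as half."""
    wins = 0.0
    for s_i in scores_impaired:
        for s_u in scores_unimpaired:
            if s_i > s_u:
                wins += 1.0
            elif s_i == s_u:
                wins += 0.5
    return wins / (len(scores_impaired) * len(scores_unimpaired))

# Hypothetical contingency table: 30 true positives, 5 false negatives,
# 40 true negatives, 10 false positives.
print(f"sensitivity = {sensitivity(30, 5):.1%}")   # 85.7%
print(f"specificity = {specificity(40, 10):.1%}")  # 80.0%
print(f"AUC = {auc([7, 8, 9, 6], [3, 4, 6, 2]):.3f}")  # 0.969
```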

Across studies, sensitivities ranged from 23.4% to 96.9%, specificities from 52.4% to 100%, and AUCs from 0.623 to 0.97. No computerized test in the included studies met eligibility criteria for use as a monitoring tool for treatment response or disease progression. Although the studies were generally considered of good quality, the aforementioned variability and nonstandardization, along with patient samples that were not representative of clinical populations, precluded any meaningful statistical analysis.

Strengths of this review included an extensive search strategy with rigorous review by 2 assessors, contact with all primary investigators to complete contingency tables, and a patient and public involvement exercise.

Review limitations included the small number of studies that assessed identical automated tests, the use of incompatible data that prevented pooling analysis, and a lack of results that could be compared across the domains examined.

Despite individual tests showing promise for diagnosing MCI and/or early dementia, the studies reviewed were unable to contribute much in terms of clinical recommendations. The investigators noted that even with future improvement, including re-evaluation of cutoff points, automated tests should not be used alone, but as part of a comprehensive evaluation that must include clinical judgment and traditional assessor-guided cognitive batteries.

Reference

Aslam RW, Bates V, Dundar Y, et al. A systematic review of the diagnostic accuracy of automated tests for cognitive impairment [published online January 22, 2018]. Int J Geriatr Psychiatry. doi:10.1002/gps.4852
