Guidance on psychological tele-assessment during the COVID-19 crisis
Much of the health service psychology and broader mental healthcare world has rallied in recent weeks to adapt clinical practice to the necessary physical distancing constraints of the COVID-19 crisis. The bulk of clinical services, which are based largely on verbal interaction between client and service provider, has moved to a distributed, distance service delivery model, relying chiefly on online teleconferencing technology to continue face-to-face contact with consumers (clients, parents, schools, etc.).
However, the situation is more challenging for assessment services with standardized administration procedures that require in-person contact. In considering these challenges, some psychologists may choose to pause their psychological assessment services during this time; others, however, conduct time-sensitive, high-need, and/or high-stakes assessments that must continue. Most current and emerging telehealth guidelines focus on psychotherapy; as such, tele-assessment guidance is necessary.
In response to the need for physical distancing and isolation requirements, test publishers have started to modify their service delivery options, such as the ability to use remote testing options (for example, emailing a link to the testing interface to the respondent or presenting stimuli over their remote testing platforms).
However, multiple types of assessment, including cognitive, neuropsychological and autism assessment, are disproportionately burdened by the current physical distancing, limited contact, and stay-at-home constraints. These assessment methods have historically relied on tasks and interpersonal procedures that require in-person interaction, such as the manipulation of physical materials, standardized interactions between assessor and client, and clinical observation of the person in a physical environment.
These principles are an effort to offer help to those providing psychological assessment service under physical distancing constraints. They are not meant to supplant typical practices and guidelines under normal circumstances. That is, when it becomes safe and feasible to resume in-person services, these recommendations should not override typical and standardized practice. However, they are meant to allow for at least some continuity of the care and needed services provided during this unprecedented time.
The overarching context within which these principles are being developed is an understanding that the research and evidence base for equivalence of cognitive, neuropsychological, and other interactional measures in a remote, online format compared to a traditional, face-to-face format is extremely nascent. Some early evidence, under very controlled circumstances, of possible equivalence exists (e.g., Brearly et al., 2017; Cullum et al., 2006; Galusha-Glasscock et al., 2016; Harrell et al., 2014; Parmanto et al., 2013; Smith et al., 2017; Wadsworth et al., 2018; Wright, 2018).
Replications of studies are needed, and evidence needs to be amassed. Therefore, the following principles are aimed at continuing care while understanding that equivalence between in-person testing and tele-testing is not guaranteed. That means that validity of the data should be overtly addressed in the report.
This guidance represents the best current available knowledge and opinions of the Boards of the Society for Personality Assessment and Section IX (Assessment) of APA Div. 12 (Society of Clinical Psychology). The principles here are not necessarily shared by all organizations. For example, the Interorganizational Practice Committee, a coalition of national neuropsychology organizations including APA Div. 40 (Society for Clinical Neuropsychology), is producing assessment guidance specific to tele-neuropsychology. Further, no recommendation provided here should be followed if it contradicts federal, state, or local laws overseeing the practice of psychologists providing assessment services.
Finally, these principles should be considered all together. That is, no single principle permits psychologists to alter test administration if the other principles are not considered as well.
The purpose of these principles is to allow for the best possible practice within the current physical distancing constraints; as such, some standardized administration methods will need to be altered. Altering these administration procedures should be done carefully, thoughtfully, and deliberately, with special attention paid to how the alterations themselves may alter the data. Psychologists, fully trained in the standardized administration procedures of all tests they are planning to give, also need to practice the altered procedures with individuals other than their clients before attempting actual assessments.
Principle 1: Do not jeopardize test security
While there may be some workarounds with respect to modifying test materials and procedures to achieve physical distancing, these modifications should not jeopardize test security. The language in the American Psychological Association (2017) Ethical Principles of Psychologists and Code of Conduct states, “Psychologists make reasonable efforts to maintain the integrity and security of test materials and other assessment techniques consistent with law and contractual obligations, and in a manner that permits adherence to this Ethics Code” (Ethical Standard 9.11).
Sending stimulus materials (e.g., stimulus pictures for Block Design, copies of psychomotor task stimuli or record forms) is not a viable solution to the current crisis, unless approved by the test publisher. Developing approved methods to present stimuli on a computer screen may be possible, as it is more protective of test security.
Although we cannot absolutely control whether clients are, for example, recording their computer screen or “grabbing” screenshots during a telehealth session, this is much less likely than their photocopying materials physically sent to them.
Remote audio-visual monitoring of the test administration, even with self-administered instruments, is essential. Failure to do so increases the risk of violations of test security; further, it is essential to know that the designated client is actually the one completing the test.
Principle 2: Do the best you can with what is available to you (mindfully and ethically)
Make sure you know, thoroughly, how to use the technology available to you. You must ensure that connections are secure on both sides, that your Wi-Fi is trustworthy, and that you know the different functions of whatever platform you are using; then think through how it can be used to approximate traditional, standardized administration as closely as possible.
Psychologists need to consider specific client circumstances, such as age (especially children and older adults), certain mental health conditions, physical disabilities, and access to testing space and conditions. Psychologists must also be mindful of session duration: while many of us spend a great deal of time “on screen,” that time is rarely spent on tasks like these.
It is important for the psychologist to know the limits of tele-testing and to consider if this approach is appropriate given the referral question, evidence, client characteristics/preferences and clinician expertise.
Do your best to keep the administration procedures as close as possible to the traditional, in-person procedures. For example, one must build rapport with the client before conducting the testing. For performance-based tests like intelligence tests, one should be observing the person’s performance to intervene when necessary and to determine if anything disrupted the typical response process of the task.
The main interactive component of self-report questionnaires is typically the initial instructions given to the client, and the test is taken individually in a quiet room. However, when administering self-report questionnaires remotely, one must ensure that the client themselves is actually the person taking the test and that they are in a room fairly free of distractions. As noted above, audio-visual monitoring of the remote assessment session is essential.
Some telehealth platforms allow for screen sharing (so that a client can see whatever is on your screen), as well as remote control (so the client actually controls your mouse/cursor). This can allow a client to complete forms from their computer as if they were sitting at your own computer or laptop.
Principle 3: Be rigorously mindful of data quality
To date, research and evidence for equivalence of testing in a remote, online format compared to a traditional, face-to-face format is limited. You should use your knowledge of processes that underlie performance on tasks and how those processes are likely to be affected by the alternate administration format to think through the quality of the data collected.
For example, some purely verbal tasks may suffer very little loss in the quality of data collected, as they rely primarily on hearing and speaking, whereas many nonverbal tasks are likely to suffer considerably more in this format. You should think through every task administered and decide just how much the quality of the data is likely to be affected by the alternate administration format. The quality of the images the client sees (blurriness, shadows, etc.) is an important factor to consider whenever visual stimuli are used, as poor image quality is likely to negatively affect results.
When considering data quality, it is important to consider just how detrimental to validity the alterations are likely to be. Of course, nobody should be making conclusions or decisions based on data that are so skewed that they likely no longer represent an individual’s abilities or functioning.
Additionally, it is important to decide whether it is better to proceed with modified assessment procedures in the specific situation, to use alternative measures that are available to use in a remote format, or to wait until in-person services are again feasible.
Principle 4: Think critically about test and subtest substitutions
There will certainly be tasks that are not possible to replicate in a telehealth format at this time. Consider, for example, Block Design from the Wechsler tests. Without being able to ship the blocks to clients (and subsequently being able to use multiple camera angles to be able to see both a client’s face and their hands/table), it is extremely unlikely that this is feasible to administer at the moment. This is also true for other tasks with manipulatives.
However, you can consider tasks that tap similar constructs in similar ways. Wechsler tests, for example, have subtests that load onto the broader Visual-Spatial or Perceptual Reasoning Indices that do not require the use of blocks. Additionally, adding a different visual-motor integration task to the battery may bolster the information gained when a core subtest cannot be utilized.
Remember that the most robust and meaningful scales in multi-faceted tests are typically the overall (“full scale”) indices rather than their subscales (see, for example, McGill et al., 2018). That is, the individual variation amongst subtests may be useful and informative, but the overall score is generally the most clinically reliable point of data. That means that slight data problems (with consideration of Principle 5) may not be as important, meaningful, or disruptive, as they are only partially contributing to the larger, overall score.
Principle 5: Widen “confidence intervals” when making conclusions and clinical decisions
Ultimately, psychological assessment requires the clinical judgment of psychologists interpreting test scores, including their margin for error, within the context of individual and contextual factors, including presenting problems, diversity considerations and other information.
No single test score should ever make a clinical decision for us, even under the most optimal conditions. Psychologists will continue to integrate test data within an understanding of the individual, their background, their context, their culture and their circumstances in order to inform conclusions and clinical decisions.
Integrating test data derived from nonstandardized administration procedures broadens the margin of error. It is important to be deliberate and explicit about the broader confidence intervals and potential for error in the administration process, in interpretation, and in the write-up of results.
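To make the “widening” concrete, classical test theory offers a useful sketch. The reliability figures below are purely illustrative assumptions, not estimates for any particular test or administration format:

```latex
% Standard error of measurement under classical test theory,
% where SD is the scale's standard deviation and r_xx its reliability
\mathrm{SEM} = SD \sqrt{1 - r_{xx}}

% 95% confidence interval around an observed score
\mathrm{CI}_{95\%} = \text{score} \pm 1.96 \times \mathrm{SEM}
```

For a score on an IQ-type metric (SD = 15) with an assumed in-person reliability of .90, the SEM is about 4.7 points and the 95% interval is roughly ±9 points around the observed score. If nonstandard remote administration effectively lowered reliability to a hypothetical .80, the SEM would grow to about 6.7 points and the interval to roughly ±13 points; the same observed score then supports a noticeably weaker conclusion.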
Related to confidence intervals, it is important to remember that cognitive and other psychological test data are proxies for underlying abilities, traits, states and functioning. No test score explains an underlying trait with perfect accuracy. This is a primary reason it is important to use a multi-method approach, combined with clinical expertise (Bornstein, 2017).
Remember that these test scores and data are not perfect; even at their best, they contain error and are approximations of the constructs we are trying to understand about an individual. Therefore, with all the above caveats, use test results collected via telehealth as individual data points within a larger picture.
Principle 6: Maintain the same ethical standards of care as in traditional psychological assessment services
The ethical principles that underlie the APA’s ethics code are built on the foundation of doing good, avoiding harm, and being faithful and just in our work. These ethical principles remain intact during this crisis period.
This includes ensuring that the process of informed consent is thorough, clear, and ongoing. Potential difficulties can arise when conducting psychological assessment remotely via telehealth and should be discussed explicitly. Consumers should know the limitations ahead of time, whenever possible.
Additionally, as tele-psychological assessment is an area with which the overwhelming majority of psychologists are unfamiliar, it is important to seek out consultation whenever possible. When it is not possible to consult with an expert in this specific area, you should discuss implications with knowledgeable colleagues.
Further, issues of inequity, disparity, and diversity need to be attended to throughout the process. Beyond the access to technology and the stable internet connection required to engage in this process at all, clients’ level of technological literacy can interact with their actual performance on tasks that rely on technology.
For example, when a task requires the client to use a computer, you need to think carefully about the implications for clients from disadvantaged or traditionally marginalized backgrounds, who may have less experience with computers, and about how that may affect performance (e.g., speed, accuracy) on the task. Although not yet studied, it is highly possible that problematic, systematic differences between groups on certain tests may be amplified in the telehealth format.
Finally, it is important to note in psychological assessment reports and feedback when assessment procedures have been altered and how these alterations may have affected the data. It is important to be transparent about the novel circumstances under which the assessment was conducted, as well as about the considerations that went into how the data were interpreted, in light of the alterations, and integrated with other information.
Much of the psychological assessment work conducted by psychologists is timely, necessary and high-stakes. During this crisis period with physical distancing and stay-at-home orders, it may be best for many psychologists simply to pause their psychological assessment work. However, because of the uncertainty about how long this will continue and the fact that many individuals simply need assessments conducted (despite the constraints of the current circumstances), these guidelines are meant to help psychologists continue their important work in the most ethical, clinically responsible way possible.
Whenever possible, administration procedures should mimic or at least approximate the standardized protocols presented in test manuals. However, when this is not possible, psychologists should take steps to collect data that are as high quality as possible and use caution and clinical expertise when interpreting those data and integrating them with other information to make conclusions and inform clinical decisions.
American Psychological Association (2017). Ethical principles of psychologists and code of conduct. Retrieved from https://www.apa.org/ethics/code/index.aspx
Bornstein, R.F. (2017). Evidence-based psychological assessment. Journal of Personality Assessment, 99(4), 435-445.
Brearly, T.W., Shura, R.D., Martindale, S.L., Lazowski, R.A., Luxton, D.D., Shenal, B.V., & Rowland, J.A. (2017). Neuropsychological test administration by videoconference: A systematic review and meta-analysis. Neuropsychology Review, 27(2), 174-186.
Cullum, C.M., Weiner, M.F., Gehrmann, H.R., & Hynan, L.S. (2006). Feasibility of telecognitive assessment in dementia. Assessment, 13(4), 385-390.
Galusha-Glasscock, J.M., Horton, D.K., Weiner, M.F., & Cullum, C.M. (2016). Video teleconference administration of the Repeatable Battery for the Assessment of Neuropsychological Status. Archives of Clinical Neuropsychology, 31(1), 8-11.
Harrell, K.M., Wilkins, S.S., Connor, M.K., & Chodosh, J. (2014). Telemedicine and the evaluation of cognitive impairment: The additive value of neuropsychological assessment. Journal of the American Medical Directors Association, 15(8), 600-606.
McGill, R.J., Dombrowski, S.C., & Canivez, G.L. (2018). Cognitive profile analysis in school psychology: History, issues, and continued concerns. Journal of School Psychology, 71, 108-121.
Parmanto, B., Pulantara, I.W., Schutte, J.L., Saptono, A., & McCue, M.P. (2013). An integrated telehealth system for remote administration of an adult autism assessment. Telemedicine and e-Health, 19(2), 88-94.
Smith, C.J., Rozga, A., Matthews, N., Oberleitner, R., Nazneen, N., & Abowd, G. (2017). Investigating the accuracy of a novel telehealth diagnostic approach for autism spectrum disorder. Psychological Assessment, 29(3), 245-252.
Wadsworth, H.E., Dhima, K., Womack, K.B., Hart Jr., J., Weiner, M.F., Hynan, L.S., & Cullum, C.M. (2018). Validity of teleneuropsychological assessment in older patients with cognitive disorders. Archives of Clinical Neuropsychology, 33(8), 1040-1045.
Wright, A.J. (2018). Equivalence of remote, online administration and traditional, face-to-face administration of the Woodcock-Johnson IV cognitive and achievement tests. Archives of Assessment Psychology, 8(1), 23-35.