

DOI: 10.1037/14048-001
APA Handbook of Testing and Assessment in Psychology: Vol. 2. Testing and Assessment in Clinical and Counseling Psychology, K. F. Geisinger (Editor-in-Chief)
Copyright © 2013 by the American Psychological Association. All rights reserved.

Chapter 1

Clinical and Counseling Testing

Janet F. Carlson

Many clinical and counseling psychologists depend on tests to help them understand as fully as possible the clients with whom they work (Camara, Nathan, & Puente, 2000; Hood & Johnson, 2007; Masling, 1992; Naugle, 2009). A broad and comprehensive understanding of an individual supports decisions to be made by or regarding a client. Tests provide a means of sampling behavior, with results used to promote better decision making. Decisions may include such matters as (a) what diagnosis or diagnoses may be applicable, (b) what treatments are most likely to produce behavioral or emotional changes in desired directions, (c) what colleges should be considered, (d) what career options might be most satisfying, (e) whether an individual qualifies for a gifted educational program, (f) the extent to which an individual is at risk for given outcomes, (g) the extent to which an individual poses a risk of harm to others or to himself or herself, (h) the extent to which an individual has experienced deterioration in his or her ability to manage important aspects of living, and (i) whether an individual is suitable for particular types of roles or occupations, such as those that involve high risk or extreme stress or where human error could have catastrophic effects. The foregoing list is certainly not exhaustive.

The term assessment as used in clinical and counseling settings is a broader term than testing because it refers to the more encompassing integration of information collected from numerous sources. Tests constitute sources of information that often contribute to assessment efforts. Discussion within this chapter focuses on procedures used in clinical and counseling assessment, all of which provide samples of behavior and, thus, qualify as tests. The narrative begins with a consideration of how clinical assessment may be framed and then briefly addresses ethics and other guidelines pertinent to assessment practices. Next, specific assessment techniques used in clinical and counseling contexts are reviewed, followed by a discussion of concerns related to interpretation and integration of assessment results. The chapter concludes with a section devoted to the importance of providing assessment feedback.

TRADITIONAL AND THERAPEUTIC ASSESSMENT

A diverse collection of procedures may be viewed as falling within the purview of clinical and counseling assessment (Naugle, 2009). The disparate array of procedures makes it somewhat difficult to appreciate commonalities among them, particularly for individuals who are relatively new to the field of assessment. Although clinical and counseling assessment procedures take many forms, nearly all are applied in a manner that facilitates an intense focus on concerns of a single individual or small unit of individuals, such as a couple or family (Anastasi & Urbina, 1997). The clinician who works one-on-one with a client during a formal assessment effectively serves as data collector, information processor, and clinical judge (Graham, 2006). Procedures that may be administered to groups of people often serve as screening measures that identify respondents who may be at risk and, therefore, need closer clinical attention (i.e., further testing conducted individually).

The immediate goals of clinical and counseling assessment frequently address mental illness and mental health concerns. Testing can help practitioners to better address an individual’s mental illness or mental health needs by identifying those needs, improving treatment effectiveness, and tracking the process or progress of interventions (Carlson & Geisinger, 2009; Kubiszyn et al., 2000). Tests that assist clinicians’ diagnostic efforts also may be important in predicting therapeutic outcome (i.e., prognosis) and establishing expectations for improvement. On a practical level, testing can be used to satisfy insurance or managed care requirements for evidence that supports diagnostic determinations or progress monitoring.

Within this basic framework, practitioners view the assessment process and their role within it differently. Indeed, some clinicians regard their role as similar to that of a technician or skilled tradesperson. From this traditional vantage point, skillful assessment begins to develop during graduate training, as trainees become familiar with the tools of the trade—tests, primarily. They learn about a variety of tests and how to use them. As trainees become practitioners, they accumulate experience with specific tests and find certain tests more helpful to their work with clients than other tests. It is not surprising that clinicians rely on tests that have proven most useful to them in their clinical work (Masling, 1992), despite test selection guidelines and standards that emphasize the importance of matching tests to the needs of the specific client or client’s agent (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 1999; Eyde, Robertson, & Krug, 2010). As Cates (1999) observed, “the temptation to remain with the familiar [test battery] is an easy one to rationalize, but may serve the client poorly” (p. 637). It is important to note that the clinical milieu is fraught with immediate practical demands to provide client-specific information that is accurate, is useful, and addresses matters such as current conflicts, coping strategies, strengths and weaknesses, degree of distress, risk for self-harm, and so forth. The dearth of well-developed tests to assess certain clinical features does not alleviate or delay the need for this information in clinical practice. Thus, practitioners may find it necessary to do the best they can with the tools at hand.

Therapeutic assessment represents an alternative to traditional conceptualizations of the assessment process (Finn & Martin, 1997; Finn & Tonsager, 1997; Kubiszyn et al., 2000). In this contemporary framework, test givers and test takers collaborate throughout the assessment process and work as partners in the discovery process. Test takers have a vested interest in the initiation and implementation of assessment as well as in evaluating and interpreting results of the procedures used. Advocates of therapeutic assessment value and seek input from test takers throughout the assessment process and regard their perspectives as valid and informed. Rather than dismissing client input as fraught with self-serving motives and inaccuracies, practitioners who embrace the therapeutic assessment model engage clients as equal partners. This stance, together with the participatory role of the test giver, led Finn and Tonsager (1997) to characterize the process as an empathic collaboration in which tests offer opportunities for dialogue as well as interpersonal and subjective exchanges. A more thorough discussion of therapeutic assessment and its application is given in Chapter 26, this volume.

TEST USAGE

A survey of clinical psychology and neuropsychology practitioners (Camara et al., 2000) indicated that clinical psychologists most frequently used tests for personality or diagnostic assessment. The findings were consistent with those from an earlier study (O’Roark & Exner, 1989, as cited by Camara et al., 2000), in which 53% of psychologists also reported that they used testing to help determine the most effective therapeutic approach. Testing constitutes an integral component of many practitioners’ assessment efforts, as practitioners report using formal measures with regularity. Ball, Archer, and Imhof (1994) reported results from a national survey of a sample of 151 clinical psychologists who indicated they provided psychological testing services. The seven most used tests reported by respondents were used by more than half of the practitioners who responded to the survey. In order, these tests included the Wechsler IQ scales, Rorschach, Thematic Apperception Test (TAT), Minnesota Multiphasic Personality Inventory (MMPI), Wide-Range Achievement Test, Bender Visual Motor Gestalt Test, and Sentence Completion. Camara et al.’s (2000) sample comprising 179 clinical psychologists reported remarkably similar frequencies of use, with the Wechsler IQ scales, MMPI, Rorschach, Bender Visual Motor Gestalt Test, TAT, and Wide-Range Achievement Test heading up the list. The preceding reports notwithstanding, considerable evidence suggests that test usage is in decline (Ben-Porath, 1997; Camara et al., 2000; Eisman et al., 2000; Garb, 2003; Meyer et al., 2001), whereas other researchers have noted a corresponding decline in graduate instruction and training in testing and assessment (Aiken, West, Sechrest, & Reno, 1990; Fong, 1995; Hayes, Nelson, & Jarrett, 1987).

The now ubiquitous presence of managed care in all aspects of health care, including mental health care, clearly influences practitioners’ use of tests (Carlson & Geisinger, 2009; Yates & Taub, 2003). As is true for health care providers generally, mental health care providers can expect reimbursement for services they provide only if those services can be shown to be cost effective and essential for effective treatment. In a managed care environment, practitioners no longer have the luxury of making unilateral decisions about patient care, including test administration. Clinical assessments that pinpoint a diagnosis and provide direction for effective treatment are reimbursable, within limits, and typically are considered by third-party payers as therapeutic interventions (Griffith, 1997; Kubiszyn et al., 2000; Yates & Taub, 2003). Moreover, a number of studies have demonstrated that clinical tests have therapeutic value in and of themselves (Ben-Porath, 1997; Finn & Tonsager, 1997) and encourage their use as interventions.

STANDARDS, ETHICS, AND RESPONSIBLE TEST USE

Counseling and clinical psychologists who conduct assessments must maintain high standards and abide by recommendations for best practice. In short, their assessment practices must be beyond reproach. Considering the important and varied uses to which assessment results may be applied, it is not surprising that an array of rules, guidelines, and recommendations governs testing and assessment practices. For many years, the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999) have served several professions well as far as delineating the standards for test users as well as for test developers, and clinical and counseling psychologists must adhere to ethical principles and codes of conduct that influence testing practices.

The APA’s Ethical Principles of Psychologists and Code of Conduct (APA Ethical Principles; APA, 2010) addresses assessment specifically in Standard 9, although passages relevant to assessment occur in several other standards, too. The 11 subsections of Standard 9 address issues such as use of tests, test construction, release of test data, informed consent, test security, test interpretation, use of automated services for scoring and interpretation, and communication of assessment results. In essence, the standards demand rigorous attention to the relationship between the clinician (as test giver) and the client (as test taker) from inception to completion of the assessment process. Ultimately, practitioners must select and use tests that are psychometrically sound, appropriate for use with the identified client, and responsive to the referral question(s). Furthermore, clinicians retain responsibility for all aspects of the assessment including scoring, interpretation and explanation of results, and test security, regardless of whether they choose to use other agents or services to carry out some of these tasks.

The Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999) and Standard 9 of the APA Ethical Principles (APA, 2010) provide sound guidance for counseling and clinical psychologists who provide assessment-related services. A number of other organizations concerned with good testing practices have official policy statements that offer additional assistance to practitioners seeking further explication of testing-related guiding principles or whose services may extend to areas beyond traditional parameters. The policy statements most likely to interest counseling and clinical psychologists include the ACA Code of Ethics (American Counseling Association, 2005), Specialty Guidelines for Forensic Psychology (Committee on the Revision of the Specialty Guidelines for Forensic Psychology, 2011), Principles for Professional Ethics (National Association of School Psychologists, 2010), and the International Guidelines for Test Use (International Test Commission, 2001). In addition to the foregoing, many books about ethics in the professional practice of psychology include substantial coverage of ethical considerations in assessment (e.g., Cottone & Tarvydas, 2007; Ford, 2006). A particularly accessible volume by Eyde et al. (2010) provides expert analysis of case studies concerning test use in various settings, including mental health settings, illustrating real-life testing challenges and conundrums.

ASSESSMENT METHODS

As in all assessment endeavors, tasks associated with assessment in clinical and counseling psychology involve information gathering. Clinical and counseling assessments typically comprise evaluations of individuals with the goal of assisting an individual client in some manner. To determine the best way to help an individual, clinicians rely on comprehensive assessments that evaluate several aspects of an individual’s functioning. Thus, most such assessments involve collecting information using a variety of assessment techniques (e.g., interviews, behavioral observations). Moreover, the use of multiple procedures (e.g., tests) facilitates the overarching goal of clinical and counseling assessment and also resonates with the important principle of good testing practice. Specifically, Standard 11.20 of the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999) states that, in clinical and counseling settings, “a test taker’s score should not be interpreted in isolation; collateral information that may lead to alternative explanations for the examinee’s test performance should be considered” (p. 117). It follows that inferences drawn from a single measure must be validated against evidence derived from other sources, including other tests and procedures used in the assessment.

Counseling and clinical assessment methods vary widely in their forms. The means of identifying what information is needed and gathering relevant evidence may include direct communications with examinees, observations of examinees’ behavior, input from other interested parties (e.g., family members, peers, coworkers, teachers), reviews of records (e.g., psychiatric, educational, legal), and use of formal measures (i.e., tests). Interviews, behavioral observations, and formal testing procedures represent the primary ways of obtaining clinically relevant information.

Interviewing

Intake or clinical interviews often represent a first point of contact between a client and a clinician in which information that contributes to clinical assessment surfaces. Many important concerns must be handled effectively within what is probably no more than a 50-minute session. Beyond practical (e.g., scheduling, billing, emergency contact information) and ethical (e.g., informed consent, confidentiality and its limits) matters, the practitioner must accurately grasp and convey his or her understanding of the issues to the client. If this understanding captures the client’s concerns, then it likely helps the client to believe that his or her problems can be understood and treated by the clinician. If the practitioner’s understanding of the client’s issues is not accurate, then the client has the opportunity to provide additional information that represents his or her concerns more accurately. At the same time and somewhat in the background, the clinician exudes competence and concern in a manner that inspires hope and commitment, while, in the foreground, he or she establishes a fairly rapid yet accurate appraisal of the client’s issues and concerns. Effective treatment depends on the establishment of rapport sufficient to suggest that a productive working relationship is possible, along with an appraisal that accurately reflects the severity of the concerns expressed and disruptions in the client’s ability to function on a day-to-day basis as well as attendant risks. For a more complete discussion, readers can consult Chapter 7, this volume, concerning clinical interviewing.

Many intake procedures involve clinical interviewing that is somewhat formalized by the use of a structured format or questionnaire. The quality of intake forms varies widely, partly as a function of how they were developed. For example, clinicians may complete an intake form developed or adopted by the facility in which they work. Such forms generally include questions about the client’s current concerns (e.g., “presenting problem” or “chief complaint”) as well as historical information that may bear on the client’s status (e.g., history of previous treatment, family history, developmental history). Depending on the quality of the intake form, practitioners may find it necessary to supplement the information collected routinely through completion of the form. In the appendices of her book, The Beginning Psychotherapist’s Companion, Willer (2009) offers several lists of intake questions that may be used to probe specific areas of concern that may surface during the collection of intake information (e.g., depression and suicide, mania, substance use). Advisable in all clinical settings and essential in clinical settings that provide acute and crisis services, intake procedures must address the extent to which the client poses a danger to others or to himself or herself.

Intake interviews may be considered semistructured if they address specific content uniformly from one client to the next but are not tightly “scripted” as are structured interviews. According to Garb’s (2005) review, semistructured interviews are more reliable than unstructured clinical interviews, most likely because of the similarity of content (if not actual test items) across interviewers. An example of a semistructured technique is the mental status examination (MSE), which refers to a standardized method of conducting a fairly comprehensive interview. The areas of mental status comprising an MSE are summarized in Table 1.1. Many MSE elements may be evaluated through unobtrusive observations made during the meeting or through verbal exchanges that occur naturally in ordinary conversation.

The semistructured nature of the MSE ensures coverage of certain vital elements of mental status but is flexible enough to allow clinicians to ask follow-up questions if they believe it is necessary or helpful to do so. The MSE is used by a wide variety of mental health providers (counseling and clinical psychologists as well as social workers, psychiatrists, and others) and typically is completed at intake or during the course of treatment to assess progress. There are several versions of the MSE, including standardized and nonstandardized forms (Willer, 2009). An example of a structured diagnostic interview is the Structured Clinical Interview for the DSM–IV–TR (SCID; First, Spitzer, Gibbon, & Williams, 2002), where DSM–IV–TR refers to the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text revision; American Psychiatric Association, 2000). Completion of the SCID allows practitioners to arrive at an appropriate psychiatric diagnosis.

Regardless of whether an initial clinical contact calls for formal assessment, a crucial area to evaluate during one’s initial interactions with clients is the presence of symptoms that indicate risk of harm to self or others. “Assessing risk of suicide is one of the most important yet terrifying tasks that a beginning clinician can do” (Willer, 2009, p. 245) and constitutes the ultimate high-stakes assessment. It is also frequently encountered in clinical practice (Stolberg & Bongar, 2002). Multiple factors contribute to overall risk status either by elevating or diminishing risk. Bauman (2008) describes four areas to examine when evaluating risk of suicide: (a) short-term risk factors, including stressors arising from environmental sources and mental health conditions; (b) long-term precipitating risk factors, including genetic traits or predispositions and personality traits; (c) precipitating events, such as legal matters, significant personal or financial losses, unwanted pregnancy, and so forth; and (d) protective factors or buffers, such as hope, social support, and access to mental health services. An individual’s overall risk of suicide represents a combination of risks emanating from the first three elements, which elevate overall risk, adjusted by the buffering effect of the last element, which reduces overall risk.
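To make the combination just described concrete, the following sketch renders it as simple arithmetic. It is an invented illustration only, not Bauman’s (2008) model or a validated instrument: the 0–10 ratings, equal weighting, and additive form are assumptions introduced here for teaching purposes, and actual risk appraisal rests on clinical judgment rather than a formula.

# Illustrative only: a toy additive rendering of the four areas.
# The rating scale, weights, and arithmetic are hypothetical assumptions,
# not clinically validated values.

def overall_suicide_risk(short_term, long_term, precipitating, protective):
    """Sum the three risk-elevating areas, then subtract the protective buffer.

    Each argument is a clinician rating on an arbitrary 0-10 scale.
    """
    elevating = short_term + long_term + precipitating
    return max(elevating - protective, 0.0)

# Example: marked short-term stressors and a recent precipitating loss,
# buffered by strong social support and access to services.
print(overall_suicide_risk(short_term=7, long_term=4,
                           precipitating=6, protective=9))  # prints 8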

In practice, assessment of suicide risk relies heavily on clinical interviewing (Stolberg & Bongar, 2002). Specific tests designed to assess suicide risk, such as the Beck Hopelessness Scale (Beck, 1988) and the Suicide Intent Scale (Beck, Schuyler, & Herman, 1974), appear to be used infrequently by practitioners (Jobes, Eyman, & Yufit, 1990; Stolberg & Bongar, 2002). Assessment of risk must consider several features of risk beyond its mere presence, including immediacy, lethality, and intent. Immediacy represents a temporal consideration, with higher levels of immediacy associated with imminent risk—a state of acute concern for the individual’s life. Assessment of imminent risk involves consideration of several empirically derived risk factors including (a) history of prior attempts (with recent attempts given greater weight than attempts that occurred longer ago); (b) family history of suicide or attempt; and (c) presence of mental or behavior disorders such as substance abuse, depression, and conduct disorder. Imminent risk is accelerated by an inability to curb impulses and a need to “blow off steam,” which constitute poor prognostic signs. Lethality refers to the possibility of death occurring as a result of a particular act. In assessing risk of suicide, the act in question is one that is planned or contemplated by the client. Use of firearms connotes higher lethality than overdosing on nonprescription drugs (e.g., aspirin). Lethality differs from intent, which refers to what the person seeks to accomplish with a particular act of self-harm. Serious suicidal intent is not necessarily associated with acts of high lethality.

TABLE 1.1
Major Areas Assessed During a Mental Status Examination

Appearance: The examiner observes and notes the person’s age, race, gender, and overall appearance.
Movement: The examiner observes and notes the person’s gait (manner of walking), posture, psychomotor excess or retardation, coordination, agitation, eye contact, facial expressions, and similar behaviors.
Attitude: The examiner notes the client’s overall demeanor, especially concerning cooperativeness, evasiveness, hostility, and state of consciousness (e.g., lethargic, alert).
Affect: The examiner observes and describes affect (outwardly observable emotional reactions), as well as appropriateness and range of affect.
Mood: The examiner observes and describes mood (underlying emotional climate or overall tone of the client’s responses).
Speech: The examiner evaluates the volume and rate of speech production, including length of answers to questions, the appropriateness and clarity of the answers, spontaneity, evidence of pressured speech, and similar characteristics.
Thought content: The examiner assesses what the client says, listening for indications of misperceptions, hallucinations, delusions, obsessions, phobias, rituals, symptoms of dissociation (feelings of unreality, depersonalization), or thoughts of suicide.
Thought process: The examiner assesses thought processes (logical connections between thoughts and how thoughts connect to the main thread or gist of conversation), noting especially irrelevant detail, verbal perseveration, circumstantial thinking, flight of ideas, interrupted thinking, and loose or illogical connections between thoughts that may indicate a thought disorder.
Cognition: The examiner assesses the person’s orientation (ability to locate himself or herself) with regard to person, place, and time; long- and short-term memory; ability to perform simple arithmetic (e.g., serial sevens); general intellectual level or fund of knowledge (e.g., identifying the last several U.S. presidents, or similar questions); ability to think abstractly (explaining a proverb); ability to name specific objects and read or write complete sentences; ability to understand and perform a task with multiple steps (e.g., showing the examiner how to brush one’s teeth, throw a ball, or follow simple directions); ability to draw a simple map or copy a design or geometrical figure; ability to distinguish between right and left.
Judgment: The examiner asks the person what he or she would do about a commonsense problem, such as running out of shampoo.
Insight: The examiner evaluates degree of insight (ability to recognize a problem and understand its nature and severity) demonstrated by the client.
Intellectual: The examiner assesses fund of knowledge, calculation skills (e.g., through simple math problems), and abstract thinking (e.g., through proverbs or verbal similarities).

Behavioral Observations

One of the earliest means by which assessment information begins to accumulate is the test taker’s behaviors. Surprisingly little information about behavioral observations appears in the empirical or practice-based literature, despite its traditional inclusion as a section in assessment reports (Leichtman, 2002; Tallent, 1988). Although difficult to standardize and quantify, many psychologists consider the observations and interpretations of an examinee’s behavior during testing vital to understanding the client (Oakland, Glutting, & Watkins, 2005). Only a few standardized assessments of test behavior have been developed, sometimes associated with a specific test. For example, Glutting and Oakland (1993) developed the Guide to the Assessment of Test Session Behavior and normed it on the standardization samples of the Wechsler Intelligence Scale for Children (3rd ed.; Wechsler, 1993) and the Wechsler Individual Achievement Test (Psychological Corporation, 1992). To date, standardized measures of test session behavior have not been widely adopted.

Counseling and clinical psychologists typically have sufficient and specialized training to allow them to observe and record an examinee’s verbal and nonverbal behaviors. Notations usually are made for several behavioral dimensions including physical appearance, attitude toward testing, content of speech, quality and amount of motor activity, eye contact, spontaneity, voice quality, effort (generally and in the face of challenge), fatigue, cooperation, attention to tasks, willingness to offer guesses (if applicable), and attitude toward success and failure (if applicable). Leichtman (2002) cautioned against either (a) including observations of everything a test taker thinks, feels, says, and does; or (b) reducing behavioral descriptions to such an extent that the resulting narrative fails to provide any real sense of what the test taker is like.

Behavior during clinical and counseling testing is unavoidably influenced by interactions between the test taker and the test giver. As Masling (1992) observed, “the psychologist is simultaneously a participant in the assessment process and an observer of it” (p. 54). A common expectation and responsibility of psychologists who administer such tests is to establish rapport with the test taker before implementing test procedures. Rapport is vital to ensure a test taker’s cooperation and best effort, attitudes that contribute to test results that provide an accurate portrayal of the test taker’s characteristics. However, rapport differs from one dyad to another, as stylistic and personality factors vary across both examiners and examinees and affect the quality of their interactions. Although adherence to standardized administration procedures during testing is vital to preserve the integrity of the assessment process and test score interpretability (e.g., AERA, APA, & NCME, 1999; Geisinger & Carlson, 2009), practitioners are not automatons who simply set specific tasks before examinees while reciting specific instructions. Actions taken by examiners during individual test administration must be responsive to test-taker behaviors and the examiner’s interpretation of those behaviors. Some of these actions are scripted in the test administration procedures, whereas others are subtle, nonverbal—possibly unconscious—ones that serve to allay anxiety or encourage elaboration of a response. Other actions follow logically from an examinee’s behavior, such as when the examiner offers a short break after noting the examinee’s failed attempt to stifle several yawns. In this vein, Leichtman (2002) suggested that test administration procedures and instructions are “like a play. Examiners are bound by the script, but there is wide latitude for how they and their clients interpret their roles” (p. 209). The traditions of testing encourage the notion that an examiner, “like the physical scientist or engineer, is ‘measuring an object’ with a technical tool. But the ‘object’ before him [sic] is a person, and the testing involves a complex psychological relationship” (Cronbach, 1960, p. 602).

Formal Testing

Tests are used by counseling and clinical psychologists at various points in therapeutic contexts. Some tests may be administered during an intake session, before the establishment of a therapeutic relationship, to check for a broad range of possible issues that may need clinical attention. These screening measures represent a “first pass” over the variety of issues that may concern a person who seeks mental health assistance. They are meant to provide a gross indication of level of symptom severity in select areas and, often, to indicate where to focus subsequent assessment efforts (Kessler et al., 2003).


Screening measures typically are quite brief and are seldom, if ever, validated for use as diagnostic instruments. Rather, these measures provide a glimpse into the nature and intensity of a client’s concerns. As such, they may reveal problems that need immediate attention as well as areas needing further assessment. An example of a screening measure designed for use in college counseling centers is the Inventory of Common Problems (ICP; Hoffman & Weiss, 1986), a 24-item inventory of specific problems college students may encounter. Respondents use a 5-point Likert-type scale to indicate the extent to which they have been bothered or worried by the stated problem over the past few weeks. Areas assessed include depression, anxiety, academic problems, interpersonal problems, physical health problems, and substance use problems. High scores suggest topics that may be explored further in counseling.

The Symptom Check List-90-R (SCL-90-R; Derogatis, 1994) is a clinical screening inventory with broader applicability than the ICP. The inventory consists of 90 items, each of which presents a symptom of some sort to which respondents indicate the extent to which they were distressed by that symptom over the past week, using a 5-point scale. The SCL-90-R yields scores on nine scales (Somatization, Obsessive-Compulsive, Interpersonal Sensitivity, Depression, Anxiety, Hostility, Phobic Anxiety, Paranoid Ideation, and Psychoticism) and total scores on three scales (Global Severity Index, Positive Symptom Total, and Positive Symptom Distress Index). Norms are differentiated by age (adolescent and adult) for nonpatients and by psychiatric patient status (nonpatient, inpatient, and outpatient) for adults, with each norm keyed by gender. Some brief clinical measures may be used to screen for problems in a single area of potential concern. For example, the Beck Depression Inventory—II (Beck, Steer, & Brown, 1996) and the State–Trait Anxiety Inventory (Spielberger, Gorsuch, Lushene, Vagg, & Jacobs, 1983) screen for elevated levels of symptom severity in depression and anxiety, respectively. Overall, these and other screening measures are most useful for detecting cases in need of further examination.
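The scoring logic common to such Likert-type inventories can be sketched in a few lines of Python. The item-to-scale key below is a hypothetical miniature, not the actual SCL-90-R key; real scoring rules, item content, and norm tables are defined in the test manual (Derogatis, 1994).

from statistics import mean

# Hypothetical item-to-scale key for a toy Likert inventory (0-4 ratings).
SCALE_ITEMS = {
    "Somatization": [1, 4, 12],
    "Depression": [5, 14, 20],
    "Anxiety": [2, 17, 23],
}

def scale_scores(responses):
    """Average the 0-4 distress ratings keyed to each scale."""
    return {scale: mean(responses[i] for i in items)
            for scale, items in SCALE_ITEMS.items()}

def global_severity(responses):
    """Mean rating across all administered items (a GSI-style summary)."""
    return mean(responses.values())

ratings = {1: 2, 2: 0, 4: 3, 5: 1, 12: 2, 14: 4, 17: 0, 20: 3, 23: 1}
print(scale_scores(ratings))               # per-scale means
print(round(global_severity(ratings), 2))  # overall mean, 1.78

Converting such raw scores to the normed scores a clinician would actually interpret requires the published norm tables, which is one reason scoring is typically left to software or templates keyed to the manual.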

The assessment procedures described thus far are used routinely at or near the outset of a therapeutic relationship to help specify or clarify the clinical situation that prompted the client to seek treatment. More extensive, formal testing may prove beneficial at an early stage of intervention or anytime during therapy to specify, clarify, or differentiate diagnoses; to monitor treatment progress; or to predict psychotherapy or mental health outcomes (Kubiszyn et al., 2000; see also Chapter 13, this volume, concerning psychological assessment in treatment). Counseling and clinical testing can be used to illuminate a variety of dimensions that may help clinicians to deliver effective treatment for a particular client, including measures of cognitive ability, values, interests, academic achievement, psychopathology, personality, and attitudes. The sheer number of tests available in each of these areas makes it impractical to review (or even mention) every test that may have clinical salience, particularly in light of the coverage afforded these measures in other chapters of this handbook. Thus, in the section that follows, tests are described according to several different ways of grouping them, with implications for clinical and counseling tests highlighted.

DIMENSIONS OF CLINICAL AND COUNSELING TESTING

Various characteristics of tests may be used to distinguish among them. Such distinctions go beyond merely grouping or categorizing tests. For example, tests differ in administration format, nature of the respondent’s tasks, and whether the stakes associated with the use of test scores are high or low. These dimensions influence the testing process in counseling and clinical contexts by affecting expectations and behaviors of test givers and test takers as well as how the tests may be used and the confidence testing professionals may have in the results.

Test administration format is one way to distinguish among tests. Some tests require one-to-one or individual administration, whereas other tests are designed for group administration. Generally speaking, it is possible to administer group tests using an individual format, although the examiner’s role in these situations is often reduced as he or she serves primarily as a monitor of the session. As suggested near the beginning of this chapter, clinical measures focus intensely on individual concerns. It follows that many—although by no means all—clinical measures were developed for individual administration. Individually administered tests are highly dependent on the clinical skills of the examiner. As Meyer et al. (2001) observed, “a psychological test is a dumb tool, and the worth of the tool cannot be separated from the sophistication of the clinician who draws inferences from it and then communicates with patients and other professionals” (p. 153). Among other things, the responsibility to establish and maintain rapport rests with the clinician; there is no magic formula by which to achieve it and no established criteria by which to determine that a reasonable level of rapport has been achieved. That determination depends on clinical judgment.

At the outset of a testing session, examiners need to ensure that a sufficient level of comfort and communication exists with the test taker to foster his or her best and sustained effort. Examiners need to exude a businesslike manner yet remain responsive to queries from the test taker and aware of fluctuation in the test taker’s energy, focus, and attitude. They need to help test takers understand that testing is important but must avoid overstating this point, lest the test taker become overly anxious about performing well on the test tasks. Test takers differ in terms of their readiness to engage in the assessment process and to give it their best effort: Some are eager to begin, some are anxious, some are irritated, some are suspicious or confused, and so forth. The clinician must keep a finger on the pulse of the testing session and take action as needed to restore rapport and keep motivation high and performance optimal.

Standardized administration is vital for the vast majority of tests to ensure that testing conditions are the same for all test takers, so that results from different test takers may be meaningfully compared (Geisinger & Carlson, 2009). However, given the interpersonal context within which clinical and counseling measures are administered, this procedural sameness is difficult to ensure for all aspects of testing. For example, most projective (performance-based) measures are untimed. How long examiners wait before moving on to the next stimulus is a matter of judgment and likely varies a great deal from one examiner to the next. Some standardized measures include “scripts” for the examiner, in an effort to make administration more uniform across examiners. Despite appearances, there is nevertheless room for interpretation in the scripts (Leichtman, 2002). How scrupulously examiners follow standardized procedures for administration is an open question (Geisinger & Carlson, 2009; Masling, 1992), as studies of even highly scripted individually administered tests reveal many departures (e.g., Moon, Blakey, Gorsuch, & Fantuzzo, 1991; Slate, Jones, & Murray, 1991; Thompson & Bulow, 1994).

On the other hand, group-administered tests are not monitored as closely as individually administered tests and do not depend on rapport to ensure optimal performance. Directions for group-administered tests must be clear to all test takers before the beginning of the test (or inventory or questionnaire) because missteps by examinees cannot be corrected easily. The same instructions and practice procedures are used for everyone. An individual who perhaps would benefit from one more practice item will not get it, and there will be no follow-up opportunities to test limits.

The nature of the tasks that constitute individual tests is another way to distinguish tests. In Chapter 10 of this volume, which addresses performance-based measures (often referred to as projective techniques), Irving B. Weiner describes a major distinction between test types—that is, between performance-based measures and self-report measures. The former test type requires test takers to act upon stimuli presented to them (e.g., Rorschach inkblots, TAT cards), to create or construct responses, or to formulate responses to specific questions (e.g., Wechsler scales of intelligence), whereas self-report measures ask respondents to answer questions about themselves by selecting responses from a preset array of options. As suggested by Weiner, neither test type is inherently superior, as the test types seek and provide different kinds of information. A test’s clinical value is unrelated to the nature of the tasks that constitute it.

Performance-based measures typically use scoring systems or rubrics that ultimately depend on some degree of subjectivity in scoring. The tasks that constitute performance-based measures are open-ended and offer wide latitude to test takers as far as how they choose to respond. Some tests or tasks require constructed responses (e.g., TAT, figure drawings), whereas others require retrieval or application of specific information (e.g., Vocabulary and Arithmetic subtests on the Wechsler tests).

Self-report measures require examinees to select or endorse a response presented in a predefined set of possibilities. In part because responses are selected rather than constructed by the examinee, systematic distortion of responses is a concern in many self-report inventories (Graham, 2006). Detecting such response sets is important because, when they occur, they may undermine the validity of the test scores. Validity scales were big news when they were first introduced in the original MMPI (Hathaway & McKinley, 1943); now they are commonplace in many personality and other types of inventories. Scoring of self-report measures is considered to be objective and typically involves the use of either computer software or scoring templates. Other than human errors (e.g., misaligning a scoring template), objective scoring produces test scores that do not require clinical judgment. Detailed discussion of self-report measures is provided later in Chapter 11, this volume.
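As a rough illustration of template-style objective scoring, and of how a simple validity check might flag a response set, consider the sketch below. The scoring key and the fixed-pattern flag are hypothetical teaching devices; they do not reproduce any published validity scale, such as those on the MMPI.

# Illustrative template ("key") scoring with a crude response-set flag.
# Keyed items and the flag are hypothetical, not a published scale.
TRUE_KEYED = {3, 7, 9}    # items counted when answered True
FALSE_KEYED = {1, 5}      # items counted when answered False

def raw_score(responses):
    """Count keyed endorsements, as a scoring template would."""
    return (sum(responses[i] for i in TRUE_KEYED)
            + sum(not responses[i] for i in FALSE_KEYED))

def fixed_pattern(responses):
    """Flag all-True or all-False protocols as a possible response set."""
    return len(set(responses.values())) == 1

answers = {1: True, 3: True, 5: False, 7: True, 9: False}
print(raw_score(answers))      # prints 3 (keyed endorsements)
print(fixed_pattern(answers))  # prints False (responses vary)

Real validity scales are empirically constructed and normed; the point here is only that objective scoring reduces to mechanical counting, which is why it requires no clinical judgment beyond avoiding clerical errors.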

The level of impact that the use of test scores may have varies and forms another way to distinguish groups of tests. High-stakes testing refers to the situation where test scores are used to make important decisions about an individual. The impact level of such decisions is substantial, sometimes rising to the level of life altering. Tests whose results are used to render such decisions must be psychometrically sound. Evidence supporting the reliability and validity of test scores must surpass the level typically seen in measures used for lesser purposes, such as research or screening. Custody evaluations used to determine parental fitness (for further information, see Chapter 34, this volume) and forensic evaluations used to establish competency to stand trial (for further information, see Chapters 6 and 16, this volume) are but two examples of high-stakes testing situations.

In clinical decision making, the specific test used does not automatically determine the stakes. Rather, the use to which the test scores are put dictates whether the testing should be considered high stakes. For example, practitioners may use the results of an assessment simply to confirm a diagnosis and formulate interventions. This use of tests is a rather routine practice aimed at improving the mental health of a particular client. In this situation, the stakes likely are low, because the individual is already engaged in treatment and the differential diagnosis that is sought will enhance the clinician’s understanding and treatment of his or her psychological difficulties. If the same test results were used as the basis for denying disability benefits, then the testing context would be regarded as high stakes.

Low-stakes measures often include those related to documenting values and interests. The human interest value of these measures notwithstanding, low-stakes situations simply do not have the same level of impact as high-stakes decisions. Test takers frequently are curious to review the assessment results, but many are not surprised by them. However, low-stakes measures may contribute to important decisions that an individual may make concerning career or relationship pursuits or other quality-of-life choices.

INTERPRETING AND INTEGRATING ASSESSMENT RESULTS

Interpreting and integrating test results requires a tenacious, disciplined, and thorough approach. It follows the collection of data from various sources, none of which should be ignored or dismissed. Like test administration, test interpretation represents

an interpersonal activity [that] may be considered part of the influence process of counseling. The counselor communicates his or her own understanding of the client’s test data to the client, anticipating that the client will adopt and apply some part of that understanding as self-understanding. (Claiborn & Hanson, 1999, p. 151)

An important objective in interpreting assessment results is to account for as much test data as possible. Formulating many tenable hypotheses at the outset of test interpretation facilitates this goal. With regard to enhancing clinical judgment, Garb (1989) encouraged clinicians to become more willing to consider alternative hypotheses and to revise their initial views of a client’s behavior. Although Garb’s point referred broadly to clinical judgment and not specifically to clinical assessment, it applies equally well to test interpretation. For example, an overarching ennui reported by an adult client at intake could stem from numerous causes, including psychological and physical ones. Subsequent results from a comprehensive assessment consisting of a multitude of tests and sources of data may suggest (a) depression or a related derivative, (b) bereavement, (c) malingering, (d) anemia, (e) reaction to situational (e.g., job related) stress, (f) passive–aggressive coping strategy, (g) insomnia, (h) a side effect of a new medication, (i) a combination of two or more of the foregoing, or (j) something else entirely. An intake interview and routine screening measures may rule out several of the possible explanations. Interpretations stemming from more comprehensive measures may be compared against the remaining competing hypotheses to ascertain which hypothesis best accounts for the evidence. In the end, the best explanation is the one that explains most (or all) of the evidence accumulated and considered in the assessment process.

An important first step in evaluating test data often takes place while assessment procedures are under way, in the presence of the test taker or before he or she leaves the premises where testing occurred. This step involves reviewing the examinee’s responses to any “critical items” that are included on any of the measures. These items are so called because their content has been judged to be indicative of serious maladjustment, signifying grave concerns such as the propensity for self-harm. Although empirical scrutiny has not tended to offer much support for the utility of critical items for this purpose (Koss, 1980; Koss, Butcher, & Hoffman, 1976), many practitioners consider the items worthy of follow-up efforts, perhaps because failing to act on such a blatant appeal for assistance would be unconscionable and the possible outcome irreversible. Moreover, base-rate problems cloud the issue, as low-base-rate events such as suicide are notoriously difficult to predict (Sandoval, 1997), especially when one tries to predict such an event on the basis of responses to a small handful of items. Also at issue is the absence of an adequate criterion against which to judge test validity (Hunsley & Meyer, 2003). A client who does not commit suicide after his or her responses to critical items suggested that a high risk of suicide was present was not necessarily misjudged. Individuals at high risk for a given outcome do not unerringly suffer that outcome; such is the nature of risk.
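The base-rate problem noted above can be made concrete with a little arithmetic. The numbers below are invented for illustration: even a screen with seemingly strong sensitivity and specificity identifies mostly false positives when the predicted event is rare.

# Invented numbers illustrating the base-rate problem for rare outcomes.
base_rate = 0.001    # hypothetical prevalence of the outcome
sensitivity = 0.90   # P(flagged | outcome occurs)
specificity = 0.90   # P(not flagged | outcome does not occur)

true_positives = base_rate * sensitivity
false_positives = (1 - base_rate) * (1 - specificity)
ppv = true_positives / (true_positives + false_positives)
print(f"P(outcome | flagged) = {ppv:.3f}")  # about 0.009

Under these assumed figures, fewer than 1% of flagged individuals would go on to experience the outcome, which is why item-level predictions of events such as suicide fare so poorly against any criterion.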

Base-rate and criterion problems persist in the area of suicide risk assessment and are unlikely to be resolved. Measures developed to assess suicide risk are intended to avert acts of self-harm and cannot be easily validated in the usual manner because lives are at stake. Critical items denote risk; they do not predict behavior. Recommended practice is to avoid treating critical items as a scale or brief assessment of functioning, but rather to consider the items as offering possible clues to content themes that may be important to the client (Butcher, 1989).

After considering a client’s responses to critical items, integration of findings obtained from the various methods used in an assessment moves to a review of evidence collected during the assessment, including test and nontest data, from each individual source. Scoring and interpreting or evaluating individual procedures that were implemented constitutes an important first step because it is at this stage that clinicians begin to weigh the credibility of the evidence. Specifically, it is essential to note for each procedure whether the test taker’s approach to that procedure allows further consideration of the results. Tests that include validity scales can make this task more objective and fairly straightforward. However, many assessment procedures do not have built-in components to help examiners evaluate whether responses should be considered valid indicators of the test taker’s functioning. In these cases, examiners must render a judgment, often based on the test taker’s demeanor, attitude toward the procedures, and behaviors demonstrated during the assessment. Obviously and unfortunately, this judgment process is not standardized and is quite open to subjective interpretations. Even so, it is probably safe to conclude that most practitioners would at least question the validity of assessment results from a client who arrived at the session 20 minutes late, looked at his watch no fewer than 25 times, neglected to respond to half of the items on two test forms, and sighed audibly throughout the assessment while mumbling about how “ridiculous this is.” In any case,

psychologists must consider whether there is a discernible reason for test takers to be less than forthright in their responses, and whether that reason might constitute a motive for faking. If so, the test giver must . . . interpret test findings with these possibilities in mind. (Carlson & Geisinger, 2009, p. 83)

In the early stages of interpretation, possible explanations for the results should be treated as tentative, because various hypotheses may be offered to explain individual test outcomes. All reasonable explanations for the observed results should be considered while examining evidence from other sources. In the face of additional data, some hypotheses will be discarded and some will be retained. Evidence from other sources—test and nontest—that confirms or disconfirms active hypotheses is particularly important, as this type of evidence helps to bolster (i.e., rule in) or weaken (i.e., rule out) putative explanations, respectively. Typically, a small number of hypotheses survive this iterative process, and these viable explanations of the observed results form the prominent themes of a written report.
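The rule-in/rule-out logic just described can be caricatured as simple set operations, echoing the ennui example given earlier. The hypotheses and evidence labels below are hypothetical placeholders; actual integration depends on clinical judgment, not mechanical filtering.

# Caricature of iterative hypothesis filtering during interpretation.
# Hypotheses and evidence tags are hypothetical placeholders.
active = {"depression", "bereavement", "anemia", "medication side effect"}

evidence = [
    ("intake interview", {"bereavement"}),
    ("medical records", {"anemia", "medication side effect"}),
]

for source, ruled_out in evidence:
    active -= ruled_out  # each source narrows the field of explanations
    print(f"after {source}: {sorted(active)}")

# Whatever survives this winnowing becomes a prominent theme of the report.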

PROVIDING ASSESSMENT FEEDBACK

Providing test feedback to test takers is an ethical responsibility (e.g., APA, 2010) that appears to be taken lightly by some practitioners, according to some published reports (Pope, 1992; Smith, Wiggins, & Gorske, 2007). As Smith et al. (2007) observed, there is surprisingly little written about assessment feedback and “little published research on the assessment feedback practices of psychologists” (p. 310). These researchers surveyed some 719 clinicians (neuropsychologists and members of the Society for Personality Assessment) about their psychological assessment feedback practices and found that some 71% reported that they frequently provided in-person feedback, either to clients or clients’ family members. The researchers also queried respondents about the time they spent providing feedback, how useful they found the practice, and what kind of feedback they provided (e.g., written, oral). Although most practitioners reported that they do provide feedback, nearly 41% reported that they provided no direct feedback to clients or their families. Nearly one third of respondents reported that they mailed a report to clients, a practice that Harvey (1997) denounced because recipients often lack the background and technical knowledge to understand and interpret the results. Even so, Smith et al. viewed the survey results positively overall and suggested that the status of psychological assessment feedback practices may not be as dire as suggested several years ago (Pope, 1992). Interested readers may refer to Chapter 3, this volume, for further guidance on communicating assessment results.

Test feedback may serve several important purposes, not the least of which is to help bring about behavioral changes (Butcher, 2010; Finn & Tonsager, 1997). In discussing the importance of providing test feedback, Pope (1992) suggested that the feedback process offers opportunities on several fronts that bear directly on the therapeutic process and that, in essence, extend the assessment to include the feedback component. Empirical evidence accumulated thereafter demonstrated treatment effects of assessment feedback (Kubiszyn et al., 2000). Specifically, several studies compared therapeutic gains made by clients in treatment who received feedback about their test results on the MMPI–2 (Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) to those of similar clients who did not receive such feedback (e.g., Finn & Tonsager, 1992, 1997; Fischer, 2000; Newman & Greenway, 1997). Clients who received assessment feedback demonstrated therapeutic improvements, as indicated by their higher levels of hope and decreases in reported symptoms.

CONCLUDING THOUGHTS

Assessment methods used in counseling and clinical contexts focus tightly on an individual client’s condition and seek to identify ways in which his or her concerns may be addressed or resolved. Broadly speaking, the methods used include interview techniques, behavioral observations, and formal tests that place different demands on the examinee as well as the examiner. Information gathered from multiple sources then must be interpreted and integrated into a cohesive explanation of the test data and, by extension, the client’s functioning and features. The end goal of assessment in counseling and clinical contexts is to produce an accurate portrayal of the client’s functioning that is useful for planning and implementing interventions. Providing feedback to the client about assessment results is vital to promoting the client’s interests and effecting treatment.

Cates (1999) observed that clinical assessment is best regarded as providing a “snapshot not a film” of an individual’s functioning, one that “describes a moment frozen in time, described from the viewpoint of the psychologist” (p. 637). When an observer says something like, “that’s a good picture of her,” the speaker means that the image represents the subject as she truly is. Good pictures depend on using good tools and good techniques. Clinical assessment, too, uses tools and techniques to reflect the characteristics of the client as he or she exists and functions every day.

References

Aiken, L. S., West, S. G., Sechrest, L., & Reno, P. R. (1990). Graduate training in statistics, methodology, and measurement in psychology: A survey of Ph.D. programs in North America. American Psychologist, 45, 721–734. doi:10.1037/0003-066X.45.6.721

American Counseling Association. (2005). ACA code of ethics. Washington, DC: Author. Retrieved from http://72.167.35.179/Laws_and_Codes/ACA_Code_of_Ethics.pdf

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text revision). Washington, DC: Author.

American Psychological Association. (2010). Ethical principles of psychologists and code of conduct (2002, Amended June 1, 2010). Retrieved from http://www.apa.org/ethics/code/index.aspx

Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice Hall.

Ball, J. D., Archer, R. P., & Imhof, E. A. (1994). Time requirements of psychological testing: A survey of practitioners. Journal of Personality Assessment, 63, 239–249. doi:10.1207/s15327752jpa6302_4

Bauman, S. (2008). Essential topics for the helping professional. Boston, MA: Pearson.

Beck, A. T. (1988). Beck Hopelessness Scale. San Antonio, TX: Psychological Corporation.

Beck, A. T., Schuyler, D., & Herman, I. (1974). Development of suicidal intent scales. In A. T. Beck, H. L. P. Resnik, & D. J. Lettieri (Eds.), The prediction of suicide (pp. 45–56). Bowie, MD: Charles Press.

Beck, A. T., Steer, R. A., & Brown, G. K. (1996). Beck Depression Inventory manual (2nd ed.). San Antonio, TX: Psychological Corporation.

Ben-Porath, Y. S. (1997). Use of personality instruments in empirically guided treatment planning. Psychological Assessment, 9, 361–367. doi:10.1037/1040-3590.9.4.361

Butcher, J. N. (1989). The Minnesota report: Adult Clinical System MMPI–2. Minneapolis, MN: University of Minnesota Press.

Butcher, J. N. (2010). Personality assessment from the nineteenth to the early twenty-first century: Past achievements and contemporary challenges. Annual Review of Clinical Psychology, 6, 1–20. doi:10.1146/annurev.clinpsy.121208.131420

Butcher, J. N., Dahlstrom, W. G., Graham, J. R., Tellegen, A., & Kaemmer, B. (1989). Manual for administration and scoring: Minnesota Multiphasic Personality Inventory—2 (MMPI–2). Minneapolis, MN: University of Minnesota Press.

Camara, W. J., Nathan, J. S., & Puente, A. E. (2000). Psychological test usage: Implications in professional psychology. Professional Psychology: Research and Practice, 31, 141–154. doi:10.1037/0735-7028.31.2.141

Carlson, J. F., & Geisinger, K. F. (2009). Psychodiagnostic testing. In R. Phelps (Ed.), Correcting fallacies about educational and psychological testing (pp. 67–88). Washington, DC: American Psychological Association. doi:10.1037/11861-002

Cates, J. A. (1999). The art of assessment in psychology: Ethics, expertise, and validity. Journal of Clinical Psychology, 55, 631–641. doi:10.1002/(SICI)1097-4679(199905)55:5<631::AID-JCLP10>3.0.CO;2-1

Claiborn, C. D., & Hanson, W. E. (1999). Test interpretation: A social-influence perspective. In J. W. Lichtenberg & R. K. Goodyear (Eds.), Scientist-practitioner perspectives on test interpretation (pp. 151–166). Needham Heights, MA: Allyn & Bacon.


Committee on the Revision of the Specialty Guidelines for Forensic Psychology. (2011). Specialty guidelines for forensic psychology (6th draft). Retrieved from http://www.ap-ls.org/aboutpsychlaw/3182011sgfpdraft.pdf

Cottone, R. R., & Tarvydas, V. M. (2007). Counseling ethics and decision-making (3rd ed.). Upper Saddle River, NJ: Pearson Education.

Cronbach, L. J. (1960). Essentials of psychological testing (2nd ed.). New York, NY: Harper.

Derogatis, L. R. (1994). Administration, scoring, and procedures manual for the SCL-90-R. Minneapolis, MN: National Computer Systems.

Eisman, E. J., Dies, R., Finn, S. E., Eyde, L. D., Kay, G. G., Kubiszyn, T. W., . . . Moreland, K. L. (2000). Problems and limitations in the use of psychological assessment in contemporary health care delivery. Professional Psychology: Research and Practice, 31, 131–140. doi:10.1037/0735-7028.31.2.131

Eyde, L. D., Robertson, G. J., & Krug, S. E. (2010). Responsible test use: Case studies for assessing human behavior (2nd ed.). Washington, DC: American Psychological Association.

Finn, S. E., & Martin, H. (1997). Therapeutic assessment with the MMPI–2 in managed health care. In J. N. Butcher (Ed.), Objective personality assessment in managed health care: A practitioner’s guide (pp. 131–152). New York, NY: Oxford University Press.

Finn, S. E., & Tonsager, M. E. (1992). Therapeutic effects of providing MMPI–2 test feedback to college students awaiting therapy. Psychological Assessment, 4, 278–287. doi:10.1037/1040-3590.4.3.278

Finn, S. E., & Tonsager, M. E. (1997). Information-gathering and therapeutic models of assessment: Complementary paradigms. Psychological Assessment, 9, 374–385. doi:10.1037/1040-3590.9.4.374

First, M. B., Spitzer, R. L., Gibbon, M., & Williams, J. B. W. (2002). Structured Clinical Interview for DSM–IV–TR Axis I disorders, research version, patient edition (SCID-I/P). New York, NY: Biometrics Research, New York State Psychiatric Institute.

Fischer, C. T. (2000). Collaborative, individualized assessment. Journal of Personality Assessment, 74, 2–14. doi:10.1207/S15327752JPA740102

Fong, M. L. (1995). Assessment and DSM–IV diagnosis of personality disorders: A primer for counselors. Journal of Counseling and Development, 73, 635–639. doi:10.1002/j.1556-6676.1995.tb01808.x

Ford, G. G. (2006). Ethical reasoning for mental health professionals. Thousand Oaks, CA: Sage.

Garb, H. N. (1989). Clinical judgment, clinical training, and professional experience. Psychological Bulletin, 105, 387–396. doi:10.1037/0033-2909.105.3.387

Garb, H. N. (2003). Incremental validity and the assessment of psychopathology in adults. Psychological Assessment, 15, 508–520. doi:10.1037/1040-3590.15.4.508

Garb, H. N. (2005). Clinical judgment and decision making. Annual Review of Clinical Psychology, 1, 67–89. doi:10.1146/annurev.clinpsy.1.102803.143810

Geisinger, K. F., & Carlson, J. F. (2009). Standards and standardization. In J. N. Butcher (Ed.), Oxford handbook of personality assessment (pp. 99–111). New York, NY: Oxford University Press.

Glutting, J., & Oakland, T. (1993). Guide to the assessment of test session behavior: Manual. San Antonio, TX: Psychological Corporation.

Graham, J. R. (2006). MMPI–2: Assessing personality and psychopathology (4th ed.). New York, NY: Oxford University Press.

Griffith, L. (1997). Surviving no-frills mental health care: The future of psychological assessment. Journal of Practical Psychiatry and Behavioral Health, 3, 255–258.

Harvey, V. S. (1997). Improving readability of psychological reports. Professional Psychology: Research and Practice, 28, 271–274. doi:10.1037/0735-7028.28.3.271

Hathaway, S. R., & McKinley, J. C. (1943). The Minnesota Multiphasic Personality Inventory. Minneapolis, MN: University of Minnesota Press.

Hayes, S. C., Nelson, R. O., & Jarrett, R. B. (1987). The treatment utility of assessment: A functional approach to evaluating assessment quality. American Psychologist, 42, 963–974. doi:10.1037/0003-066X.42.11.963

Hoffman, J. A., & Weiss, B. (1986). A new system for conceptualizing college students’ problems: Types of crises and the Inventory of Common Problems. Journal of American College Health, 34, 259–266. doi:10.1080/07448481.1986.9938947

Hood, A. B., & Johnson, R. W. (2007). Assessment in counseling: A guide to the use of psychological assessment procedures. Alexandria, VA: American Counseling Association.

Hunsley, J., & Meyer, G. J. (2003). The incremental validity of psychological testing and assessment: Conceptual, methodological, and statistical issues. Psychological Assessment, 15, 446–455. doi:10.1037/1040-3590.15.4.446

International Test Commission. (2001). International guidelines for test use. International Journal of Testing, 1, 93–114. doi:10.1207/S15327574IJT0102_1

Jobes, D. A., Eyman, J. R., & Yufit, R. I. (1990, April). Suicide risk assessment survey. Paper presented at the annual meeting of the American Association of Suicidology, New Orleans, LA.

Kessler, R. C., Barker, P. R., Colpe, L. J., Epstein, J. F., Gfroerer, J. C., Hiripi, E., . . . Zaslavsky, A. M. (2003). Screening for serious mental illness in the general population. Archives of General Psychiatry, 60, 184–189. doi:10.1001/archpsyc.60.2.184


Koss, M. P. (1980). Assessment of psychological emergencies with the MMPI. Nutley, NJ: Roche.

Koss, M. P., Butcher, J. N., & Hoffman, N. (1976). The MMPI critical items: How well do they work? Journal of Consulting and Clinical Psychology, 44, 921–928. doi:10.1037/0022-006X.44.6.921

Kubiszyn, T. W., Meyer, G. J., Finn, S. E., Eyde, L. D., Kay, G. G., Moreland, K. L., . . . Eisman, E. J. (2000). Empirical support for psychological assessment in clinical health care settings. Professional Psychology: Research and Practice, 31, 119–130. doi:10.1037/0735-7028.31.2.119

Leichtman, M. (2002). Behavioral observations. In J. N. Butcher (Ed.), Clinical personality assessment: Practical approaches (pp. 303–318). New York, NY: Oxford University Press.

Masling, J. M. (1992). Assessment and the therapeutic narrative. Journal of Training and Practice in Professional Psychology, 6, 53–58.

Meyer, G. J., Finn, S. E., Eyde, L. D., Kay, G. G., Moreland, K. L., Dies, R. R., . . . Reed, G. M. (2001). Psychological testing and psychological assessment: A review of evidence and issues. American Psychologist, 56, 128–165. doi:10.1037/0003-066X.56.2.128

Moon, G. W., Blakey, W. A., Gorsuch, R. L., & Fantuzzo, J. W. (1991). Frequent WAIS–R administration errors: An ignored source of inaccurate measurement. Professional Psychology: Research and Practice, 22, 256–258. doi:10.1037/0735-7028.22.3.256

National Association of School Psychologists. (2010). Principles for professional ethics. Retrieved from http://www.nasponline.org/standards/2010standards/1_%20Ethical%20Principles.pdf

Naugle, K. A. (2009). Counseling and testing: What counselors need to know about state laws on assessment and testing. Measurement and Evaluation in Counseling and Development, 42, 31–45. doi:10.1177/0748175609333561

Newman, M. L., & Greenway, P. (1997). Therapeutic effects of providing MMPI–2 test feedback to clients at a university counseling service: A collaborative approach. Psychological Assessment, 9, 122–131. doi:10.1037/1040-3590.9.2.122

Oakland, T., Glutting, J., & Watkins, M. W. (2005). Assessment of test behaviors with the WISC–IV. In A. Prifitera, D. H. Saklofske, & L. G. Weiss (Eds.), WISC–IV clinical use and interpretations: Scientist-practitioner perspectives (pp. 435–467). San Diego, CA: Elsevier Academic Press.

Pope, K. S. (1992). Responsibilities in providing psychological test feedback to clients. Psychological Assessment, 4, 268–271. doi:10.1037/1040-3590.4.3.268

Psychological Corporation. (1992). Wechsler Individual Achievement Test. San Antonio, TX: Author.

Sandoval, J. (1997). Critical thinking in test interpretation. In J. Sandoval, C. L. Frisby, K. F. Geisinger, J. D. Scheuneman, & J. R. Grenier (Eds.), Test interpretation and diversity: Achieving equity in assessment (pp. 31–49). Washington, DC: American Psychological Association.

Slate, J. R., Jones, C. H., & Murray, R. A. (1991). Teaching administration and scoring of the Wechsler Adult Intelligence Scale—Revised: An empirical evaluation of practice administrations. Professional Psychology: Research and Practice, 22, 375–379. doi:10.1037/0735-7028.22.5.375

Smith, S. R., Wiggins, C. M., & Gorske, T. T. (2007). A survey of psychological assessment feedback practices. Assessment, 14, 310–319. doi:10.1177/1073191107302842

Spielberger, C. D., Gorsuch, R. L., Lushene, R., Vagg, P. R., & Jacobs, G. A. (1983). Manual for State–Trait Anxiety Inventory. Palo Alto, CA: Consulting Psychologists Press.

Stolberg, R., & Bongar, B. (2002). Assessment of suicide risk. In J. N. Butcher (Ed.), Clinical personality assessment: Practical approaches (pp. 376–406). New York, NY: Oxford University Press.

Tallent, N. (1988). Psychological report writing (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.

Thompson, A. P., & Bulow, C. A. (1994). Administration error in presenting the WAIS–R blocks: Approximating the impact of scrambled presentations. Professional Psychology: Research and Practice, 25, 89–91. doi:10.1037/0735-7028.25.1.89

Wechsler, D. (1993). Wechsler Intelligence Scale for Children (3rd ed.). San Antonio, TX: Psychological Corporation.

Willer, J. (2009). The beginning psychotherapist’s companion. Lanham, MD: Rowman & Littlefield.

Yates, B. T., & Taub, J. (2003). Assessing the costs, benefits, cost-effectiveness, and cost–benefit of psychological assessment: We should, we can, and here’s how. Psychological Assessment, 15, 478–495. doi:10.1037/1040-3590.15.4.478
