last authored: Dec 2012, David LaPierre
last reviewed:
Regular, ongoing assessment is essential during training in health care, playing a role both in professional development and in evaluation.
Assessment is powerfully motivating for learners, who will often study and prepare according to how they expect to be assessed.
Assessment is important during all phases of education:
There are a variety of assessment tools, which can be chosen according to the setting and what they are designed to measure. It is clear that assessment should be multimodal and repeated over time.
New ways of summarizing and aggregating these data are needed, and standardization of assessment is critical.
Assessment plays three main roles during training in health care (Epstein, 2007):
Key roles of competency-based assessment also include:
main article: feedback
Assessment can be formative, meaning it is designed primarily to strengthen the learner's future performance, provide encouragement, or lead to self-reflection.
Objective formative assessments can be very important in helping learners identify and rectify gaps, and have been shown to result in large improvements in evaluation performance (Norman, Neville, Blake, and Mueller, 2010). This significant positive effect on learning can also be accompanied by students strongly valuing the assessments, even when participation is voluntary (Velan, Jones, McNeil, and Kumar, 2008).
One major aspect of formative assessment is feedback, which is normally less formal, confidential, and non-threatening. It allows mistakes to be caught and addressed before they become entrenched.
Formative assessment is especially important during early training.
Assessment can also be summative, leading to decisions regarding advancement or readiness to practice; here, this will be referred to as evaluation. Summative assessments must represent a judgement of the trainee's performance in relation to stated learning objectives.
Periodic summative assessment helps assure that residents are on track for successful completion of the program, and identifies faltering or failing residents who need additional or modified educational opportunities or other interventions to address their individual needs.
All assessment tools have strengths and weaknesses, and some types of tools are better suited than others to a given task (Sherbino, Bandiera, and Frank, 2008). This becomes especially important for high-stakes assessments, eg completion of a program or credentialling, where the appropriate use of quality assessment tools is of paramount importance.
Some commonly used types of tools are as follows:
- written assessments
- clinical assessments
- quality of care indicators
- other
Many aspects of a learner's knowledge, skills, and attitudes can be measured, though "not everything that can be counted counts; not everything that counts can be counted" (best attributed to William Bruce Cameron).
Proposed targets of assessment.
Some of the targets to be assessed include:
The SiH model takes into account the differing roles of preceptors.
In the figure at right, a small number of preceptors (eg, two) follow the learner over time. This longitudinal relationship allows robust assessment of a number of factors.
A larger number of preceptors also assist in assessment, though in a more limited way.
Ideally, these longitudinal preceptors are primarily responsible for assessing the general roles and clinical skills identified at left.
The learner's focus should primarily be on ensuring progressive readiness to practice within a given clinical domain. In this figure, the domains are organized according to a hybrid of populations and pathologies. This could be re-assembled in other ways as well, eg body systems or practice environments (eg clinic, emergency department, etc).
It is critical that assessments be viewed by learners and educators as useful. Van der Vleuten (1996) describes five criteria to be used in evaluating assessment:
- Reliability: assessment results should be consistent, especially in regards to pass/fail measures.
- Validity: assessment should focus on performance that accurately reflects real life.
- Impact: assessment should benefit the learner's trajectory, rather than simply leading to a passing grade.
- Acceptability: learners, preceptors, administration, and the public should all see assessment as meaningful and fair.
- Cost: assessment should be effective in regards to time, money, and other resources.
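Van der Vleuten (1996) combines these criteria multiplicatively into a conceptual utility index, the implication being that a very low value on any one criterion undermines the assessment as a whole. The index is a heuristic for weighing trade-offs, not a literally measured quantity:

$$\text{utility} = \text{reliability} \times \text{validity} \times \text{impact} \times \text{acceptability} \times \text{cost-efficiency}$$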
Ideally these factors are all well accounted for in tool design. "Traditional approaches to measurement, based in the psychometric imperative, have been leery of work-based assessment, given the biases inherent in the clinical setting and the challenges of 'adjusting' for contextual factors that make it difficult to determine the 'true' score, or rating, of competence" (Holmboe et al, 2010). As such, the assessment process requires incorporating context.
Assessment should ideally be criterion-based (judged against established standards) rather than normative (judged against one's peers). The rationale for this comes largely from the fact that comparison against other learners often results in standards that are too low (Holmboe et al, 2010). One example from the literature describes central line insertion: in this study, all residents learning the procedure failed the baseline assessment. Had normative assessment been used, the group could have been deemed competent, whereas in fact none of them were (Barsuk et al, 2009).
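To make the distinction concrete, the short Python sketch below contrasts the two decision rules. The scores and cut-off are invented for illustration, loosely echoing the central line scenario; they are not data from Barsuk et al.

```python
# Illustrative only: invented baseline checklist scores (percent of items
# completed) for five residents learning central line insertion.
scores = {"R1": 42, "R2": 55, "R3": 48, "R4": 60, "R5": 51}

CRITERION_CUTOFF = 70  # hypothetical mastery standard, fixed in advance

# Criterion-based: each learner is judged against the fixed standard.
criterion_pass = {r: s >= CRITERION_CUTOFF for r, s in scores.items()}

# Normative: learners are judged against their peers, eg "at or above the
# group mean", a rule that can pass learners even when no one is competent.
mean_score = sum(scores.values()) / len(scores)
normative_pass = {r: s >= mean_score for r, s in scores.items()}

print(criterion_pass)  # all False: no resident meets the mastery standard
print(normative_pass)  # R2 and R4 pass merely by outscoring their peers
```

Under the normative rule, the stronger residents "pass" despite no one reaching the mastery standard, which is exactly the failure mode described above.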
When examining the competence of students across schools, it becomes clear that nationally agreed-upon standards for assessment are very helpful. This should be balanced with the ability of schools or programs to determine their own curriculum and assessment strategies.
In line with this, Holmboe et al (2010) suggest programs need to "move away from developing multiple 'home-grown' assessment tools and work instead toward the adoption of a core set of assessment tools that will be used in all programs within a country or region."
Multimethod assessment makes clear sense, both to avoid the limitations of any single type of testing and to derive robust data over time. This is especially true for competency-based assessment, which "necessitates a robust and multifaceted assessment system" (Holmboe et al, 2010).
Assessment framework
Assessment involves multiple assessors using multiple tools, interacting with the learner. Many interactions therefore take place, and these adapt and change over time.
Unfortunately, design and implementation of a system like this has proven extremely challenging.
Our SiH model is to use appropriate types of tools to assess various competencies, building in a step-wise fashion.
Assessment should take place in a number of clinical contexts, balancing complex, real-life situations that require reasoning and judgement with structured, focused assessment of knowledge, skills, and behaviour (Epstein, 2007).
Combining these data is often done in a portfolio.
Pass-fail standards need to be in place to assess developmental progress against appropriate standards (eg benchmarking).
Content specificity: learners may excel in one clinical encounter, eg sore throat, and appear to be functioning at a high level; however, their knowledge of diabetes care, and hence their performance in such encounters, may be quite poor.
Context specificity: the setting can also play a large role in performance. For example, a learner may competently care for a patient with cough in the emergency department, but not in the clinic.
"New ways of combining qualitative and quantitative data will be required if portfolio assessments are to find widespread application and withstand the test of time" (Epstein, 2007).
Subjective judgement is certainly present when faculty assess learners. However, while training through faculty development is a necessary component, the 'profession' of medicine almost by definition embraces individual clinical judgement.
main article: entrustment
ten Cate and colleagues (2010) have suggested a main goal of assessment in competency-based education is to determine entrustability related to a specific role, in a specific context. There are four main factors contributing to this:
Reliable and valid assessment tools should ideally be used in determining entrustment, and at least two faculty should observe the learner before judging readiness for practice.
Even though assessment should be compared against criteria, not peers, it is at the same time important to take into consideration the level of training, or developmental stage, of the learner. Benchmarking, or milestoning, allows determination of whether the learner's trajectory is on target (Green et al, 2009).
Benchmarks can differ in different clinical domains, and learners may progress more or less quickly according to aptitude, prior experience, or interest.
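A minimal sketch of what benchmarking against milestones could look like follows; the milestone levels and month thresholds are invented for illustration and are not drawn from Green et al.

```python
# Hypothetical benchmarks for one clinical domain: the minimum milestone
# level (1-5) expected by a given month of training.
BENCHMARKS = {6: 2, 12: 3, 24: 4, 36: 5}  # month -> expected level

def on_target(month: int, observed_level: int) -> bool:
    """Return True if the learner meets or exceeds the expected level for
    the most recent benchmark due at or before this training month."""
    due = [m for m in BENCHMARKS if m <= month]
    if not due:
        return True  # no benchmark has come due yet
    return observed_level >= BENCHMARKS[max(due)]

print(on_target(month=14, observed_level=3))  # True: meets the 12-month level
print(on_target(month=14, observed_level=2))  # False: trajectory is off target
```

The standard remains criterion-based (fixed milestone levels), while the developmental stage determines which criterion currently applies.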
As competence grows, the task of assessment shifts to higher level cognitive tasks and performance when faced with significant stress and/or ambiguity.
Testing inductive thinking (the use of data to identify possibilities) and deductive thinking (the use of data to select the correct response among possibilities) becomes important (Epstein, 2007).
As assessment continues to evolve, medical educators and researchers need to identify mechanisms for creating and maintaining best practices, especially in the context of systemic and institutional culture (Holmboe et al, 2010).
Lake Wobegon effect: the tendency of raters to score nearly all learners as above average.
There is evidence (citation needed) suggesting that a 5-point Likert scale is more valid than a 3-point scale.
Barsuk JH et al. 2009. Use of simulation-based mastery learning to improve the quality of central venous catheter placement in a medical intensive care unit. J Hosp Med. 4(7):397-403.
Epstein RM. 2007. Assessment in medical education. N Engl J Med. 356(4):387-396.
Green ML et al. 2009. Charting the road to competence: developmental milestones for internal medicine residency training. J Grad Med Educ. 1(1):5-20.
Holmboe ES et al. 2010. The role of assessment in competency-based medical education. Med Teach. 32(8):676-682.
Norman G, Neville A, Blake JM, Mueller B. 2010. Assessment steers learning down the right road: impact of progress testing on licensing examination performance. Med Teach. 32(6):496-499.
Pangaro L. 1999. A new vocabulary and other innovations for improving descriptive in-training evaluations. Acad Med. 74(11):1203-1207.
Sherbino J, Bandiera G, Frank JR. 2008. Assessing competence in emergency medicine trainees: an overview of effective methodologies. CJEM. 10(4):365-371.
ten Cate O, Snell L, Carraccio C. 2010. Medical competence: the interplay between individual ability and the health care environment. Med Teach. 32(8):669-675.
van der Vleuten CPM. 1996. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ. 1(1):41-67.
Velan GM, Jones P, McNeil HP, Kumar RK. 2008. Integrated online formative assessments in the biomedical sciences for medical students: benefits for learning. BMC Med Educ. 8:52.