Cees P.M. van der Vleuten
Publications Authored By Cees P.M. van der Vleuten
We conducted a constructivist grounded theory study using semi-structured interviews with 17 medical students from two universities enrolled in clerkships. Participants were purposively sampled to ensure variety in age, gender, experience and current clerkship. The Day Reconstruction Method was used to help participants remember their activities of the previous day. The interviews were transcribed verbatim and analysed iteratively using constant comparison and open, axial and interpretive coding.
Self-regulated learning by students in the clinical environment was influenced by the specific goals perceived by students, the autonomy they experienced, the learning opportunities they were given or created themselves, and the anticipated outcomes of an activity. All of these factors were affected by personal, contextual and social attributes.
Self-regulated learning (SRL) of medical students in the clinical environment differs for every individual. The factors influencing this process are affected by personal, social and contextual attributes. Some of these are similar to those known from previous research in classroom settings, but others are unique to the clinical environment, including the facilities available, the role of patients, and social relationships with peers and other hospital staff. To better support students' SRL, we believe it is important to increase students' metacognitive awareness and to offer them more tailored learning opportunities.
It is nevertheless of utmost importance that objectives, rules and guidelines comparable to those existing in undergraduate training (Project Team Consilium Abeundi van Luijk in Professional behaviour: teaching, assessing and coaching students. Final report and appendices. Mosae Libris, 2005; van Mook et al. in Neth J Crit Care 16(4):162-173, 2010a) are developed for postgraduate training, and that implicit rules are made explicit. This article outlines a framework based on the lessons learned from contemporary postgraduate medical training programmes.
This study investigated the elements of programmatic assessment that students perceived as supporting or inhibiting learning, and the factors that influenced the active construction of their learning.
The study was conducted in a graduate-entry medical school that implemented programmatic assessment. Thus, all assessment information, feedback and reflective activities were combined into a comprehensive, holistic programme of assessment. We used a qualitative approach and interviewed students (n = 17) in the pre-clinical phase of the programme about their perceptions of programmatic assessment and learning approaches. Data were scrutinised using theory-based thematic analysis.
Elements from the comprehensive programme of assessment, such as feedback, portfolios, assessments and assignments, were found to have both supporting and inhibiting effects on learning. These supporting and inhibiting elements influenced students' construction of learning. Findings showed that: (i) students perceived formative assessment as summative; (ii) programmatic assessment was an important trigger for learning, and (iii) the portfolio's reflective activities were appreciated for their generation of knowledge, the lessons drawn from feedback, and the opportunities for follow-up. Some students, however, were less appreciative of reflective activities. For these students, the elements perceived as inhibiting seemed to dominate the learning response.
The active participation of learners in their own learning is possible when learning is supported by programmatic assessment. Certain features of the comprehensive programme of assessment were found to influence student learning, and this influence can either support or inhibit students' learning responses.
The authors collected data from 2008 to 2012 from electronically completed MSF questionnaires. In total, 428 residents completed 586 MSF occasions, and 5,020 assessors provided feedback. The authors used generalizability theory to analyze the reliability of MSF for multiple occasions, different competencies, and varying numbers of assessors and assessor groups across multiple occasions.
A reliability coefficient of 0.800 can be achieved with two MSF occasions completed by at least 10 assessors per group or with three MSF occasions completed by 5 assessors per group. Nonphysicians' scores for the "Scholar" and "Health advocate" competencies and physicians' scores for the "Health advocate" competency had a negative effect on the composite reliability.
A feasible number of assessors per MSF occasion can reliably assess residents' performance. Scores from a single occasion should be interpreted cautiously. However, every occasion can provide valuable feedback for learning. This research confirms that the (unique) characteristics of different assessor groups should be considered when interpreting MSF results. Reliability seems to be influenced by the included assessor groups and competencies. These findings will enhance the utility of MSF during residency training.
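To illustrate the kind of decision-study projection that generalizability theory supports, here is a minimal Python sketch. The variance components below are hypothetical placeholders, not the values estimated in this study, and the formula assumes a design with residents crossed with occasions and assessors nested within occasions:

```python
# Decision-study sketch in the spirit of generalizability theory.
# For residents (p) rated on n_o occasions (o) by n_a assessors nested in
# occasions, the projected coefficient is:
#   E(rho^2) = var_p / (var_p + var_po / n_o + var_res / (n_o * n_a))

def projected_reliability(var_p, var_po, var_res, n_occasions, n_assessors):
    """Project the generalizability coefficient for a given design."""
    error = var_po / n_occasions + var_res / (n_occasions * n_assessors)
    return var_p / (var_p + error)

# Hypothetical components: person, person-by-occasion, residual variance.
var_p, var_po, var_res = 0.30, 0.05, 0.60

for n_o, n_a in [(1, 10), (2, 10), (3, 5)]:
    print(n_o, n_a, round(projected_reliability(var_p, var_po, var_res, n_o, n_a), 3))
```

With these illustrative components, the pattern reported above emerges: a single occasion stays below 0.80, while two occasions with 10 assessors or three occasions with 5 assessors exceed it.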
The purpose of this study was to compare the effectiveness of peer assessment (PA) with the usual case discussion (CD) strategy on adherence to clinical practice guidelines (CPGs) for physical therapist management of upper extremity complaints.
A single-masked, cluster-randomized controlled trial with pretest-posttest design was conducted.
Twenty communities of practice (n=149 physical therapists) were randomly assigned to groups receiving PA or CD, with both interventions consisting of 4 sessions over 6 months. Both PA and CD groups worked on identical clinical cases relevant to the guidelines. Peer assessment focused on individual performance observed and evaluated by peers; CD focused on discussion.
Guideline adherence was measured with clinical vignettes, reflective practice was measured with the Self-Reflection and Insight Scale (SRIS), awareness of performance was measured via the correlation between perceived and assessed improvement, and attainment of personal goals was measured with written commitments to change.
The PA groups improved more on guideline adherence compared with the CD groups (effect=22.52; 95% confidence interval [95% CI]=2.38, 42.66; P=.03). The SRIS scores did not differ between PA and CD groups. Awareness of performance was greater for the PA groups (r=.36) than for the CD groups (r=.08) (effect=14.73; 95% CI=2.78, 26.68; P=.01). The PA strategy was more effective than the CD strategy in attaining personal goals (effect=0.50; 95% CI=0.04, 0.96; P=.03).
Limited validity of clinical vignettes as a proxy measure of clinical practice was a limitation of the study.
Peer assessment was more effective than CD in improving adherence to CPGs. Personal feedback may have contributed to its effectiveness. Future research should address the role of the group coach.
Many studies have examined factors of influence on the usage of mini-clinical evaluation exercise (mini-CEX) instruments and provision of feedback, but little is known about how these factors influence teachers' feedback-giving behaviour. In this study, we investigated teachers' use of mini-CEX in performance evaluations to provide narrative feedback in undergraduate clinical training.
We designed an exploratory qualitative study using an interpretive approach. Focusing on the use of mini-CEX instruments in clinical training, we conducted semi-structured interviews to explore teachers' perceptions. Between February and June 2013, we interviewed 14 clinicians who participated as teachers during undergraduate clinical clerkships. Informed by concepts from the literature, we coded interview transcripts and iteratively reduced and displayed data using template analysis.
We identified three main themes of interrelated factors that influenced teachers' practice with regard to mini-CEX instruments: teacher-related factors, teacher-student interaction-related factors, and teacher-context interaction-related factors. Four issues pertinent to workplace-based performance evaluations (direct observation, the teacher-student relationship, verbal versus written feedback, and formative versus summative purposes) are presented to clarify how different factors interact and influence teachers' feedback-giving behaviour. Embedding performance observation in clinical practice and establishing trustworthy teacher-student relationships in more longitudinal clinical clerkships were considered important in creating a learning environment that supports and facilitates the feedback exchange.
Teachers' feedback-giving behaviour within the clinical context results from the interaction between personal, interpersonal and contextual factors. Increasing insight into how teachers use mini-CEX instruments in daily practice may offer strategies for creating a professional learning culture in which feedback giving and seeking would be enhanced.
For the operationalization of national and organizational culture, the authors used Hofstede's dimensions of culture and Quinn and Spreitzer's competing values framework, respectively. To operationalize successful curriculum change, they used two derivative measures: medical schools' organizational readiness for curriculum change, developed by Jippes and colleagues, and change-related behavior, developed by Herscovitch and Meyer. The authors administered a questionnaire measuring these operationalizations in 2012 to medical schools in the process of changing their curriculum.
Nine hundred ninety-one of 1,073 invited staff members from 131 of 345 medical schools in 56 of 80 countries completed the questionnaire. An initial poor fit of the model improved to a reasonable fit by two suggested modifications which seemed theoretically plausible. In sum, characteristics of national culture and organizational culture, such as a certain level of risk taking, flexible policies and procedures, and strong leadership, affected successful curriculum change.
National and organizational culture influence readiness for change in medical schools. Therefore, medical schools considering curriculum reform should anticipate the potential impact of national and organizational culture.
A questionnaire on professional competencies was administered, semi-structured interviews were conducted, and work diaries were collected. The findings were integrated in a conceptual model.
Six areas of tension between global health care ideals and local health care practice emerged from the data that challenged doctors' motivation and preparedness for practice. Four elements of the innovative curriculum equipped students and graduates with skills, attitudes and competencies to better cope with these tensions. Students and graduates from the innovative curriculum reported significantly higher levels of various competencies and expressed more satisfaction with the curriculum and its usefulness for their work.
An innovative problem- and community-based curriculum can improve sub-Saharan African doctors' motivation and preparedness to tackle the challenges of health care practice in this region.
In 2012 and 2013, physicians reviewed video-recorded patient encounters for seven residents, completed a Mini-CEX, and described their social judgments of the residents. Additional participants sorted these descriptions, which were analyzed using latent partition analysis (LPA). The best-fitting set of partitions for each resident served as an independent variable in a one-way ANOVA to determine the proportion of variance explained in Mini-CEX ratings.
Forty-eight physicians rated at least one resident (34 assessed all seven). The seven sets of social judgments were sorted by 14 participants. Across residents, 2 to 5 partitions (mode: 4) provided a good LPA fit, suggesting that subgroups of raters were making similar social judgments, while different causal explanations for each resident's performance existed across subgroups. The partitions accounted for 9% to 57% of the variance in Mini-CEX ratings across residents (mean = 32%).
These findings suggest that multiple "signals" do exist within the "noise" of interrater variability in performance-based assessment. It may be valuable to understand and exploit these multiple signals rather than try to eliminate them.
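The proportion of Mini-CEX rating variance explained by rater subgroups, as in the one-way ANOVA described above, can be expressed as eta-squared. A minimal sketch, using invented ratings from two hypothetical rater subgroups:

```python
def eta_squared(groups):
    """Proportion of rating variance explained by subgroup membership:
    eta^2 = SS_between / SS_total from a one-way ANOVA."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand_mean) ** 2 for v in all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    return ss_between / ss_total

# Invented Mini-CEX ratings from two rater subgroups (illustration only).
subgroup_ratings = [[4, 5, 5, 6], [7, 8, 8, 9]]
print(eta_squared(subgroup_ratings))
```

A value near 0 means the subgroups rate alike; a value near 1 means subgroup membership accounts for most of the rating differences, i.e. a strong "signal" within the interrater "noise".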
Methods: In two Dutch medical schools with 10 and 40 years of student-centred education, teachers were asked to fill out the Conceptions of Learning and Teaching (COLT) questionnaire to assess their 'teacher-centredness', 'appreciation of active learning' and 'orientation to professional practice'. Next, we quantitatively assessed the relations of teachers' conceptions with their personal and occupational characteristics and institute.
Results: Overall response was 49.4% (N = 319/646). Institute was the main predictor of variance in all three scales, and discipline, gender and teaching experience significantly explained variance in two of the scales. More than 80% of the variance was not explained by these factors.
Conclusion: Longer exposure to a student-centred curriculum was associated with fewer teacher-centred conceptions, greater 'appreciation of active learning' and stronger 'orientation to professional practice'. In line with studies on lecture-based curricula, discipline, gender and teaching experience also appeared important for teachers' conceptions in student-centred curricula. More research is necessary to better understand the influence of institute on the three teachers' conceptions scales.
To select the items to be included in the TeamQ questionnaire, we conducted a content validation in 2011, using a Delphi procedure to which 40 experts were invited. Next, to pilot test the preliminary tool, 1,446 clinical teachers from 116 teaching teams were asked to complete the TeamQ questionnaire. For the data analyses we used principal component analysis, internal consistency reliability coefficients, and generalizability analysis to estimate the number of evaluations needed to obtain reliable estimates. Lastly, median TeamQ scores were calculated for teams to explore levels of teamwork.
In total, 31 experts participated in the Delphi study and 114 teams participated in the TeamQ pilot. The median team response was 7 evaluations per team. The principal component analysis revealed 11 factors, of which 8 were included. The reliability coefficients of the TeamQ scales ranged from 0.75 to 0.93. The generalizability analysis revealed that 5 to 7 evaluations were needed to obtain internal reliability coefficients of 0.70. In terms of teamwork, the clinical teachers scored residents' empowerment as the highest TeamQ scale and feedback culture as the area that would most benefit from improvement.
This study provides initial evidence of the validity of an instrument for measuring teamwork in teaching teams. The high response rates and the low number of evaluations needed for reliably measuring teamwork indicate that TeamQ is feasible for use by teaching teams. Future research could explore the effectiveness of feedback on teamwork in follow up measurements.
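As a sketch of the internal consistency analysis reported for the TeamQ scales, Cronbach's alpha can be computed directly from an item score matrix. The respondent-by-item matrix below is invented for illustration:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Invented 5-respondent x 3-item matrix (illustration only).
ratings = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
print(round(cronbach_alpha(ratings), 2))  # ≈ 0.92 for this matrix
```

Values in the reported 0.75 to 0.93 range indicate that the items within each scale vary together, i.e. they plausibly measure a common construct.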
Veterinarians were invited via email to participate in the study. A framework of 18 competencies grouped into 7 domains (veterinary expertise, communication, collaboration, entrepreneurship, health and welfare, scholarship, and personal development) was used. Respondents rated the importance of each competency for veterinary professional practice and for veterinary education by use of a 9-point Likert scale in an online questionnaire. Quantitative statistical analyses were performed to assess the data.
All described competencies were perceived as having importance (with overall mean ratings [all countries] ≥ 6.45/9) for professional practice and education. Competencies related to veterinary expertise had the highest ratings (overall mean, 8.33/9 for both professional practice and education). For the veterinary expertise, entrepreneurship, and scholarship domains, substantial differences (determined on the basis of statistical significance and effect size) were found in importance ratings among veterinarians in different countries.
Results indicated a general consensus regarding the importance of specific types of competencies in veterinary professional practice and education. Further research into the definition of competencies essential for veterinary professionals is needed to help inform an international dialogue on the subject.
Methods: Three sources of validity evidence were examined: (i) content was examined based on a theory of clinical reasoning and an international VP expert team; (ii) the response process was explored in think-aloud pilot studies with medical students and in content analyses of free-text questions accompanying each item of the instrument; (iii) internal structure was assessed by exploratory factor analysis (EFA) and inter-rater reliability by generalizability analysis.
Results: Content validity was reasonably supported by the theoretical foundation and the VP expert team. The think-aloud studies and the analysis of free-text comments supported the validity of the instrument. In the EFA, using 2,547 student evaluations of a total of 78 VPs, a three-factor model showed a reasonable fit with the data. At least 200 student responses are needed to obtain a reliable evaluation of a VP on all three factors.
Conclusion: The instrument has the potential to provide valid information about VP design, provided that many responses per VP are available.
In a prior study, trainees reported variation in the supervisor support they received. This study explores supervisors' perspectives: how supervisors experience the self-regulated learning of postgraduate general practice (GP) trainees and their own role in it, and what helps and hinders them in supervising. In a qualitative study using a phenomenological approach, we interviewed 20 supervisors of first- and third-year postgraduate GP trainees.

Supervisors recognised trainee activity in self-regulated learning and adapted their coaching style to trainee needs, occasionally causing conflicting emotions. Supervisors' beliefs regarding their own role, the trainee's role and the usefulness of educational interventions influenced their support. Supervisors experienced a relation between patient safety, self-regulated learning and trainees' capability to learn. Supervisor training was helpful for exchanging experiences and obtaining advice. Supervisors found colleagues helpful in sharing supervision tasks or in calibrating judgements of trainees. A busy practice occasionally hindered the supervisory process.

In conclusion, supervisors adapt their coaching to trainees' self-regulated learning, sometimes causing conflicting emotions. Patient safety and entrustment are key aspects of the supervisory process. Supervisors' beliefs about their own role and the trainee's role influence their support. Supervisor training is important to increase awareness of these beliefs and their influence on behaviour, and to improve the use of educational instruments. The results align with findings from other (medical) education research, illustrating their relevance.
Previously we constructed a questionnaire named COLT to measure conceptions. In the present study, we investigated if different teacher profiles could be assessed which are based on the teachers' conceptions. These teacher profiles might have implications for individual teachers, for faculty development activities and for institutes. Our research questions were: (1) Can we identify teacher profiles based on the COLT? (2) If so, how are these teacher profiles associated with other teacher characteristics?
The COLT questionnaire was sent electronically to all teachers in the first three years of the undergraduate curriculum of Medicine in two medical schools in the Netherlands with student-centred education. The COLT (18 items, 5-point Likert scales) comprises three scales: 'teacher centredness', 'appreciation of active learning' and 'orientation to professional practice'. We also collected personal information about the participants and their occupational characteristics. Teacher profiles were studied using a K-means cluster analysis and by calculating Chi-square statistics.
The response rate was 49.4% (N = 319/646). A five-cluster solution fitted the data best, resulting in five teacher profiles based on their conceptions as measured by the COLT. We named the teacher profiles: Transmitters (most traditional), Organizers, Intermediates, Facilitators and Conceptual Change Agents (most modern). The teacher profiles differed from each other in personal and occupational characteristics.
Based on teachers' conceptions of learning and teaching, five teacher profiles were found in student-centred education. We offered suggestions how insight into these teacher profiles might be useful for individual teachers, for faculty development activities and for institutes and departments, especially if involved in a curriculum reform towards student-centred education.
Participating obstetrics-gynecology residents and attending physicians (including residency program directors) at six hospitals in the Netherlands performed individual Q sorts to rank 36 statements concerning workplace-based assessment (WBA) and WBA tools according to their level of agreement. The authors conducted by-person factor analysis to uncover patterns in the rankings of the statements. They used the statistical results and participant comments about their sorts to interpret and describe distinct perceptions.
The analysis of 65 Q sorts (completed by 22 residents and 43 attendings) identified five distinct user perceptions regarding the effects of WBA in practice, which the authors labeled enthusiasm, compliance, effort, neutrality, and skepticism. These perceptions were characterized by differences in views on three main issues: the intended goals of the innovation, its applicability (ease of applying it to practice), and its actual impact.
User perceptions of the effects of innovations in medical education can be typified and should be anticipated. This study's insights into five principal user perceptions can support the design and implementation of innovations in medical education.
My central argument is that dialogue between education practice (and its teachers) and education research (and its researchers) is indispensable.
To illustrate how I have come to this perspective, I discuss two crucial developments of personal import to me. The first is the development of assessment theory informed by both research findings and insights emerging from implementations conducted in collaboration with teachers and learners. The second is the establishment of a department of education that includes many members from the medical domain.
Medical education is thriving because it is shaped and nourished within a community of practice of collaborating teachers, practitioners and researchers. This obviates the threat of a fissure between education research and education practice. The values of this community of practice - inclusiveness, openness, supportiveness, nurture and mentorship - are key elements for its sustainability. In pacing the development of our research in a manner that maintains this synergy, we should be mindful of the zone of proximal development of our community of practice.
Data were analysed iteratively using constant comparison. Key themes were identified and their relationships critically examined to derive a conceptual understanding of feedback and its impact.
We identified three essential sources of influence on the meaning that feedback assumed: the individual learner; the characteristics of the feedback, and the learning culture. Individual learner traits, such as motivation and orientation toward feedback, appeared stable across learning contexts. Similarly, certain feedback characteristics, including specificity, credibility and actionability, were valued in sport, music and medicine alike. Learning culture influenced feedback in three ways: (i) by defining expectations for teachers and teacher-learner relationships; (ii) by establishing norms for and expectations of feedback, and (iii) by directing teachers' and learners' attention toward certain dimensions of performance. Learning culture therefore neither creates motivated learners nor defines 'good feedback'; rather, it creates the conditions and opportunities that allow good feedback to occur and learners to respond.
An adequate understanding of feedback requires an integrated approach incorporating both the individual and the learning culture. Our research offers a clear direction for medicine's learning culture: normalise feedback; promote trusting teacher-learner relationships; define clear performance goals, and ensure that the goals of learners and teachers align.
We discuss activity theory's theoretical background and principles, and we show how these can be applied to the cultural research practice by discussing the steps involved in a cross-cultural study that we conducted, from formulating research questions to drawing conclusions. We describe how the activity system, the unit of analysis in activity theory, can serve as an organizing principle to grasp cultural complexity. We end with reflections on the theoretical and practical use of activity theory for cultural research and note that it is not a shortcut to capture cultural complexity: it is a challenge for researchers to determine the boundaries of their study and to analyze and interpret the dynamics of the activity system.
Oral face-to-face feedback was provided, as well as written feedback and scores. This study aims to explore the impact of peer assessment (PA) on the improvement of clinical performance of undergraduate physical therapy (PT) students.
The PA task was analyzed and decomposed into task elements. A qualitative approach was used to explore students' perceptions of the task and the task elements. Semi-structured interviews with second year students were conducted to explore the perceived impact of these task elements on performance improvement. Students were asked to select the elements perceived valuable, to rank them from highest to lowest learning value, and to motivate their choices. Interviews were transcribed verbatim and analyzed, using a phenomenographical approach and following template analysis guidelines. A quantitative approach was used to describe the ranking results.
Quantitative analyses showed that the perceived impact on learning varied widely. Performing the clinical task in the PT role was ranked first (1), followed by receiving expert feedback (2) and observing peer performance (3). Receiving peer feedback was not perceived as the most powerful task element. Qualitative analyses resulted in three emerging themes: pre-performance, true-performance, and post-performance triggers for improvement. Each theme contained three categories: learning activities, outcomes, and conditions for learning. Intended learning activities were reported, such as transferring prior learning to a new application context, as well as unintended learning activities, such as modelling a peer's performance. Outcomes related to increased self-confidence, insight into performance standards and awareness of areas for improvement. Conditions for learning referred to the quality of peer feedback.
PA may be a powerful tool for improving clinical performance, although peer feedback is not perceived as its most powerful element. Peer assessors in undergraduate PT education use idiosyncratic strategies to assess their peers' performance.
Participants' subsequent exam performance was compared with that of non-participants.
About 71% of students who performed poorly in the new exam subsequently failed a course. Attendance at the workshops made no difference to short- or long-term pass rates. Attendance at more than three follow-up small group sessions significantly improved pass rates two semesters later, and was influenced by teacher experience.
Close similarity between predictor task and target task is important for accurate prediction of failure. Consideration should be given to dose effect and class size in the prevention of failure of at-risk students, and we recommend a systemic approach to intervention/remediation programmes, involving a whole semester of mandatory, weekly small group meetings with experienced teachers.
Principal component analysis on data from a lecture in statistics for PhD students (n = 56) in psychology and health sciences revealed a three-component solution, consistent with the types of load that the different items were intended to measure. This solution was confirmed by a confirmatory factor analysis of data from three lectures in statistics for different cohorts of bachelor students in the social and health sciences (ns = 171, 136, and 148), and received further support from a randomized experiment with university freshmen in the health sciences (n = 58).
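A principal component analysis like the one described can be sketched by inspecting the eigenvalues of the item correlation matrix; components with eigenvalues above 1 are conventionally retained. The simulated responses below assume three underlying load types driving three items each, purely for illustration:

```python
import numpy as np

def pca_eigenvalues(X):
    """Eigenvalues of the item correlation matrix, sorted descending;
    components with eigenvalues > 1 are conventionally retained."""
    R = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    return np.linalg.eigvalsh(R)[::-1]

# Simulated questionnaire: three factors, each driving three items plus noise.
rng = np.random.default_rng(1)
f1, f2, f3 = rng.normal(size=(3, 200))
items = np.column_stack(
    [f + rng.normal(scale=0.5, size=200) for f in (f1, f2, f3) for _ in range(3)]
)
print(pca_eigenvalues(items))
```

With this structure, exactly three eigenvalues exceed 1, recovering a three-component solution analogous to the one reported for the cognitive load items.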
In this discussion paper we argue that meaningfulness and appropriateness of current validity evidence can be called into question and that we need alternative strategies to assessment and validity inquiry that build on current theories of learning and performance in complex and dynamic workplace settings.
Drawing from research in various professional fields, we outline key issues within the mechanisms of learning, competence and performance in the context of complex social environments and illustrate their relevance to workplace-based assessment (WBA). In reviewing recent socio-cultural learning theory and research on performance and performance interpretations in work settings, we demonstrate that learning, competence (as inferred from performance) and performance interpretations are inherently contextualised and can only be understood 'in situ'. Assessment in the context of work settings may, therefore, be more usefully viewed as a socially situated interpretive act.
We propose constructivist-interpretivist approaches towards WBA in order to capture and understand contextualised learning and performance in work settings. Theoretical assumptions underlying interpretivist assessment approaches call for a validity theory that provides the theoretical framework and conceptual tools to guide the validation process in the qualitative assessment inquiry. Basic principles of rigour specific to qualitative research have been established, and they can and should be used to determine validity in interpretivist assessment approaches. If used properly, these strategies generate trustworthy evidence that is needed to develop the validity argument in WBA, allowing for in-depth and meaningful information about professional competence.
The scenarios differed in the sequencing and alignment of VPs and related educational activities, tutor involvement, number of VPs, relevance to assessment and involvement of real patients. We sought students' perceptions on the VP scenarios in focus group interviews with eight groups of 4-7 randomly selected students (n = 39). The interviews were recorded, transcribed and analysed qualitatively.
The analysis resulted in six themes reflecting students' perceptions of important features for effective curricular integration of VPs: (i) continuous and stable online access; (ii) increasing complexity, adapted to students' knowledge; (iii) VP-related workload offset by the elimination of other activities; (iv) optimal sequencing (e.g. lecture, then one or two VPs, then tutor-led small group discussion, then a real patient); (v) optimal alignment of VPs and educational activities; and (vi) inclusion of VP topics in assessment.
The themes appear to offer starting points for the development of a framework to guide the curricular integration of VPs. Their impact needs to be confirmed by studies using quantitative controlled designs.
It explores the perspectives of patients, midwives, nurses, general practitioners, and hospital boards on gynaecological competencies and compares these with the CanMEDS framework.
Clinical expertise, reflective practice, collaboration, a holistic view, and involvement in practice management were perceived to be important competencies for gynaecological practice. Although all the competencies were covered by the CanMEDS framework, there were some mismatches between stakeholders' perceptions of the importance of some competencies and their position in the framework.
The CanMEDS framework appears to offer relevant building blocks for specialty specific postgraduate training, which should be combined with the results of an exploration of specialty specific competencies to arrive at a postgraduate curriculum that is in alignment with professional practice.
Besides maximally facilitating learning, it should improve the validity and reliability of measurements and the documentation of competence development. We explored how, in a competency-based curriculum, current theories on programmatic assessment interacted with educational practice.
In a development study including evaluation, we investigated the implementation of a theory-based programme of assessment. Between April 2011 and May 2012 quantitative evaluation data were collected and used to guide group interviews that explored the experiences of students and clinical supervisors with the assessment programme. We coded the transcripts and emerging topics were organised into a list of lessons learned.
The programme mainly focuses on the integration of learning and assessment by motivating and supporting students to seek and accumulate feedback. The assessment instruments were aligned to cover predefined competencies to enable aggregation of information in a structured and meaningful way. Assessments that were designed as formative learning experiences were increasingly perceived as summative by students. Peer feedback was experienced as a valuable method for formative feedback. Social interaction and external guidance seemed to be of crucial importance to scaffold self-directed learning. Aggregating data from individual assessments into a holistic portfolio judgement required expertise and extensive training and supervision of judges.
A programme of assessment comprising low-stakes assessments that simultaneously provide formative feedback and input for summative decisions proved difficult to implement. Careful preparation and guidance of the implementation process was crucial. Assessment for learning requires meaningful feedback with each assessment. Special attention should be paid to the quality of feedback at individual assessment moments. Comprehensive attention to faculty development and training for students is essential for the successful implementation of an assessment programme.
Yet, despite participants' convergent opinions on the elements of effective remediation, significant differences were found between outcomes of students working with experienced and inexperienced teachers. The current study explores the actual practice of teachers on this remediation course, aiming to exemplify elements of our theory of remediation and explore differences between teachers.
Since it is in the classroom context that the interactions that constitute the complex process of remediation emerge, this practice-based research has focused on direct observation of classroom teaching. Nineteen hours of small group sessions were recorded and transcribed. Drawing on ethnography and sociocultural discourse analysis, selected samples of talk-in-context demonstrate how the various elements of remediation play out in practice, highlighting aspects that are most effective, and identifying differences between experienced and novice teachers.
Long-term student outcomes are strongly correlated with teacher experience (r = 0.81). Compared to inexperienced teachers, experienced teachers provide more challenging, disruptive facilitation, and take a dialogic stance that encourages more collaborative group dynamics. They are more expert at diagnosing cognitive errors, provide frequent metacognitive time-outs and make explicit links across the curriculum.
Remediation is effective in small groups where dialogue is used for collaborative knowledge construction and social regulation. This requires facilitation by experienced teachers who attend to details of both content and process, and use timely interventions to foster curiosity and the will to learn. These teachers should actively challenge students' language use, logical inconsistencies and uncertainties, problematize their assumptions, and provide a metacognitive regulatory voice that can generate attitudinal shifts and nurture the development of independent critical thinkers.
In one experimental condition, a tutor in the video encouraged participants to elaborate by asking elaborative questions. In a second condition, the tutor asked superficial questions. After the discussion, all participants studied a text with relevant new information. Elaborative questions had no significant effect on recall of idea units from the text, p = .39, η(2) = .01. High-ability students outperformed low-ability students, p = .04, η(2) = .07, but this effect did not interact with the experimental treatment, p = .22, η(2) = .02. Suggestions for further research are presented.
Using a linear regression model for each station, we calculated the checklist score cut-off from the regression equation, with the global scale cut-off set at 2. The OSCE pass-fail standard was defined as the average of all stations' standards. To determine reliability, the root mean square error (RMSE) was calculated. The R² coefficient and the inter-grade discrimination were calculated to assess the quality of the OSCE.
The mean total test score was 60.78. The OSCE pass-fail standard and its RMSE were 47.37 and 0.55, respectively. The R² coefficients ranged from 0.44 to 0.79. The inter-grade discrimination score varied greatly among stations.
The RMSE of the standard was very small, indicating that BRM is a reliable method of setting a standard for an OSCE, with the advantage of providing data for quality assurance.
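The borderline regression method described above lends itself to a compact computational sketch. The station data and cut-off value below are purely illustrative assumptions; only the procedure (regress checklist scores on global ratings per station, read off the checklist cut-off at the borderline grade of 2, then average across stations) follows the study:

```python
import numpy as np

def station_cutoff(global_ratings, checklist_scores, borderline=2.0):
    """Borderline regression for one station: regress checklist scores on
    global ratings and read off the checklist score predicted at the
    borderline grade."""
    slope, intercept = np.polyfit(global_ratings, checklist_scores, 1)
    cutoff = intercept + slope * borderline
    # R^2 of the regression serves as a station quality indicator
    predicted = intercept + slope * np.asarray(global_ratings)
    ss_res = np.sum((checklist_scores - predicted) ** 2)
    ss_tot = np.sum((checklist_scores - np.mean(checklist_scores)) ** 2)
    r2 = 1 - ss_res / ss_tot
    return cutoff, r2

# Hypothetical data for two stations: global ratings and checklist scores
stations = [
    ([1, 2, 2, 3, 4, 5], [35, 48, 52, 60, 72, 85]),
    ([1, 1, 2, 3, 4, 5], [30, 34, 45, 58, 70, 80]),
]

cutoffs = []
for ratings, scores in stations:
    cut, r2 = station_cutoff(np.array(ratings), np.array(scores))
    cutoffs.append(cut)

# OSCE pass-fail standard = mean of the per-station cut-offs
exam_standard = float(np.mean(cutoffs))
```

In practice the per-station R² values would be inspected alongside the cut-offs, since a weak regression fit signals a station whose checklist and global ratings disagree.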
The mean relevance score of the Delphi panel (n = 19) reached 4.2 on a five-point Likert-type scale (1 = not relevant, 5 = highly relevant) in the second round, meeting predefined criteria for completing the Delphi procedure. Faculty (n = 991) from 131 medical schools in 56 countries completed MORC. Exploratory factor analysis yielded three underlying factors (motivation, capability, and external pressure) in 12 subscales with 53 items. The scale structure suggested by exploratory factor analysis was confirmed by confirmatory factor analysis. Cronbach alpha ranged from 0.67 to 0.92 for the subscales. Generalizability analysis showed that the MORC results of 5 to 16 faculty members can reliably evaluate a school's organizational readiness for change.
MORC is a valid, reliable questionnaire for measuring organizational readiness for curriculum change in medical schools. It can identify which elements in a change process require special attention so as to increase the chance of successful implementation.
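For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's alpha for a subscale can be computed directly from a respondents-by-items matrix. The formula is standard; the toy response data below are a generic illustration, not data from the MORC study:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for one subscale.
    item_scores: respondents x items matrix of Likert responses."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                         # number of items in the subscale
    item_vars = x.var(axis=0, ddof=1)      # sample variance of each item
    total_var = x.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of six faculty members to a 4-item subscale (1-5 Likert)
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
]
alpha = cronbach_alpha(responses)
```

Values around 0.7 or higher are conventionally taken to indicate acceptable internal consistency, which is the benchmark the subscale range of 0.67 to 0.92 is read against.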
This raised the question: 'How did those schools overcome the barrier of uncertainty avoidance?'
Austria offered the combination of a high uncertainty avoidance score and integrated curricula in all its medical schools. Twenty-seven key change agents in four medical universities were interviewed and transcripts analysed using thematic cross-case analysis.
Initially, strict national laws and limited autonomy of schools inhibited innovation and fostered an 'excuse culture': 'It's not our fault. It is the ministry's'. A new law increasing university autonomy stimulated reforms. However, this law alone would have been insufficient, as many faculty still sought to avoid change. A strong need for change, supportive and continuous leadership, and visionary change agents were also deemed essential.
In societies with strong uncertainty avoidance strict legislation may enforce resistance to curriculum change. In those countries opposition by faculty can be overcome if national legislation encourages change, provided additional internal factors support the change process.
We conducted nine focus groups (two with medical students, three with residents, four with music students) and four individual interviews (with one clinician-educator, one music educator and two doctor-musicians), for a total of 37 participants. Analysis occurred alongside and informed data collection. Themes were identified iteratively using constant comparisons.
Cultural perspectives diverged in terms of where learning should occur, what learning outcomes are desired, and how learning is best facilitated. Whereas medicine valued learning by doing, music valued learning by lesson. Whereas medical learners aimed for competence, music students aimed instead for ever-better performance. Whereas medical learners valued their teachers for their clinical skills more than for their teaching abilities, the opposite was true in music, in which teachers' instructional skills were paramount. Self-assessment challenged learners in both cultures, but medical learners viewed self-assessment as a skill they could develop, whereas music students recognised that external feedback would always be required.
This comparative analysis reveals that medicine and music make culturally distinct assumptions about teaching and learning. The contrasts between the two cultures illuminate potential vulnerabilities in the medical learning culture, including the risks inherent in its competence-focused approach and the constraints it places on its own teachers. By highlighting these vulnerabilities, we provide a stimulus for reimagining and renewing medicine's educational practices.
In total, 138 students (in the third year out of five) completed a questionnaire about goal orientation, motivation, self-efficacy, control of learning beliefs and attitudes to feedback. Individual website usage was analysed over an 8-week period. Latent class analyses were used to identify profiles of students based on their use of different aspects of the feedback website. Differences in learning-related student characteristics between profiles were assessed using analyses of variance (ANOVAs). Individual website usage was related to OSCE performance.
In total, 132 students (95.7%) viewed the website. The number of pages viewed ranged from two to 377 (median 102). Fifty per cent of students engaged comprehensively with the feedback, 27% used it in a minimal manner, whereas a further 23% used it in a more selective way. Students who were comprehensive users of the website scored higher on the value of feedback scale, whereas students who were minimal users scored higher on extrinsic motivation. Higher performing students viewed significantly more web pages showing comparisons with peers than weaker students did. Students who just passed the assessment made least use of the feedback.
Higher performing students appeared to use the feedback more for positive affirmation than for diagnostic information. Those arguably most in need engaged least. We need to construct feedback after summative assessment in a way that will more effectively engage those students who need the most help.
All consultants registered in the Netherlands in 2007-2009 (n = 2643) and Denmark in 2007-2010 (n = 1336) received in June 2010 and April 2011, respectively, a survey about their preparation for medical and generic competencies, perceived intensity and burnout. Power analysis resulted in required sample sizes of 542. Descriptive statistics and independent t-tests were used for analysis.
Data were available for 792 new consultants in the Netherlands and 677 in Denmark. Compared to their Dutch counterparts, Danish consultants perceived specialty training and the transition less intensely, reported higher levels of preparation for generic competencies and scored lower on burnout.
These findings underscore the importance of contextual aspects in the transition and suggest that Denmark succeeds better in aligning training with practice. Regulations regarding working hours and progressive independence of trainees appear to facilitate the transition.
Using a constructivist grounded theory approach, we conducted 12 focus groups and nine individual interviews (with a total of 50 participants) across three cultures of professional training in, respectively, music, teacher training and medicine. Constant comparative analysis for recurring themes was conducted iteratively.
Each of the three professional cultures created a distinct context for learning that influenced how feedback was handled. Despite these contextual differences, credibility and constructiveness emerged as critical constants, identified by learners across cultures as essential for feedback to be perceived as meaningful. However, the definitions of credibility and constructiveness were distinct to each professional culture and the cultures varied considerably in how effectively they supported the occurrence of feedback with these critical characteristics.
Professions define credibility and constructiveness in culturally specific ways and create contexts for learning that may either facilitate or constrain the provision of meaningful feedback. Comparison with other professional cultures may offer strategies for creating a productive feedback culture within medical education.
However, validity evidence from those interventions has not proved entirely adequate for the practical anatomy examination, and thus further investigation was required. In this study, the validity evidence of the selected response format (SRF) was examined using multiple choice questions (MCQs) constructed according to different levels of Bloom's taxonomy, in comparison with the traditional free response format (FRF). A group of 100 medical students registered in a gross anatomy course volunteered to be enrolled in this study. The experimental MCQ examinations were part of the graded midterm and final steeplechase practical examinations. Volunteer students were instructed to complete the practical examinations twice, once in each of two separate examination rooms. The two separate examinations consisted of a traditional free response format and an MCQ format. Scores from the two examinations (FRF and MCQ) displayed a strong correlation, even with higher-level Bloom's taxonomy questions. In conclusion, the results of this study provide empirical evidence that the SRF (MCQ) is a valid response format and can be used as an alternative to the traditional FRF steeplechase examination.
A longitudinal qualitative study was performed in the Netherlands. Semi-structured interviews were conducted with new consultants. The study was guided by an interpretative phenomenological approach, and data collection continued until saturation was reached. At 3-month intervals between July 2011 and March 2012, eight novice consultants in internal medicine were interviewed three times each about their supervisory role while on call. Interviews focused on their preparation for the role in training, the actions they took to master the role, and their progression over time.
Three interrelated domains of relevant factors emerged from the data: preparedness; personal characteristics, and contextual characteristics. Preparedness referred to the extent to which new consultants were prepared by training to take full responsibility for registrars' actions while supervising them from a distance. Personal characteristics, such as coping strategies and views on supervision, guided consultants' development as supervisors. Essential to this process were contextual characteristics, especially those concerning the extent to which the consultant knew the registrar, was familiar with departmental procedures, and had access to support from colleagues.
New consultants should be prepared for their supervisory role by training and by being given a proper introduction to their workplace. The former requires progressive independence and exposure to supervisory tasks during specialty training; the latter requires an induction programme to enable new consultants to familiarise themselves with the departmental environment and the registrars they will be supervising.
A questionnaire covering preparedness for practice, intensity of the transition, social support, and burnout was used. Structural equation modelling was used for statistical analysis.
Data from a third of the population were available (32%, n = 840; 43% male, 57% female). Preparation in generic competencies received lower ratings than preparation in medical competencies. A total of 10% met the criteria for burnout and 18% scored high on the emotional exhaustion subscale. Perceived lack of preparation in generic competencies correlated with burnout (r = 0.15, p < 0.001). No such relation was found for medical competencies. Furthermore, social support protected against burnout.
These findings illustrate the relevance of generic competencies for new hospital consultants. Furthermore, social support facilitates this intense and stressful stage within the medical career.
The audio-taped interviews were transcribed verbatim and analysed, and themes were identified. We performed investigator triangulation and member checking with clinical supervisors, and we triangulated the data with a similar study performed prior to the implementation of WBA.
WBA results in variable learning approaches. Depending on several factors (clinical supervisors, faculty-given feedback, and the function of the assessment), students may shift between surface, deep, and effort-and-achievement learning approaches. Students' and supervisors' orientations towards the process of WBA, the use of peer feedback, and formative rather than summative assessment facilitate successful implementation of WBA and lead to deeper approaches to learning among students. Interestingly, students and their supervisors have contradicting perceptions of WBA.
A change in culture to unify students' and supervisors' perceptions of WBA, together with greater accommodation of formative assessment and feedback, may result in a deeper approach to learning among students.
This article focuses on a teaching approach and is a translational contribution to existing literature. In line with best evidence medical education, the aim of this article is twofold: to briefly inform teachers about constructivist learning theory and elaborate on the principles of constructive, collaborative, contextual, and self-directed learning; and to provide teachers with an example of how to implement these learning principles to change the approach to teaching surface anatomy. Student evaluations of this new approach demonstrate that the application of these learning principles leads to higher student satisfaction. However, research suggests that even better results could be achieved by further adjustments in the application of contextual and self-directed learning principles. Successful implementation and guidance of peer physical examination is crucial for the described approach, but research shows that other options, like using life models, seem to work equally well. Future research on surface anatomy should focus on increasing the students' ability to apply anatomical knowledge and defining the setting in which certain teaching methods and approaches have a positive effect.
Focusing on WBA as a recent instance of innovation in PGME, we conducted semi-structured interviews to explore perceptions of the effects of WBA in a purposive sample of Dutch trainees and (lead) consultants in surgical and non-surgical specialties. Interviews conducted in 2011 with 17 participants were analysed thematically using template analysis. To support the exploration of effects outside the domain of education, the study design was informed by theory on the diffusion of innovations.
Six domains of effects of WBA were identified: sentiments (affinity with the innovation and emotions); dealing with the innovation; specialty training; teaching and learning; workload and tasks, and patient care. Users' affinity with WBA partly determined its effects on teaching and learning. Organisational support and the match between the innovation and routine practice were considered important to minimise additional workload and ensure that WBA was used for relevant rather than easily assessable training activities. Dealing with WBA stimulated attention for specialty training and placed specialty training on the agenda of clinical departments.
These outcomes are in line with theoretical notions regarding innovations in general and may be helpful in the implementation of other innovations in PGME. Given the substantial effects of innovations outside the strictly education-related domain, individuals designing and implementing innovations should consider all potential effects, including those identified in this study.
Research in organisational psychology has proposed a mechanism whereby feedback seeking is influenced by motives and goal orientation mediated by the perceived costs and benefits of feedback. Building on a recently published model of resident doctors' feedback-seeking behaviour, we conducted a qualitative study to explore students' feedback-seeking behaviours in the clinical workplace.
Between April and June 2011, we conducted semi-structured face-to-face interviews with veterinary medicine students in Years 5 and 6 about their feedback-seeking behaviour during clinical clerkships. In the interviews, 14 students were asked about their goals and motives for seeking feedback, the characteristics of their feedback-seeking behaviour and factors influencing that behaviour. Using template analysis, we coded the interview transcripts and iteratively reduced and displayed the data until agreement on the final template was reached.
The students described personal and interpersonal factors to explain their reasons for seeking feedback. The factors related to intentions and the characteristics of the feedback provider, and the relationship between the feedback seeker and provider. Motives relating to image and ego, particularly when students thought that feedback might have a positive effect on image and ego, influenced feedback-seeking behaviour and could induce specific behaviours related to students' orientation towards particular sources of feedback, their orientation towards particular topics for and timing of feedback, and the frequency and method of feedback-seeking behaviour.
This study shows that during clinical clerkships, students actively seek feedback according to personal and interpersonal factors. Perceived costs and benefits influenced this active feedback-seeking behaviour. These results may contribute towards the optimising and developing of meaningful educational opportunities during clerkships.
A generalisability coefficient of 0.8, on a scale of 0 to 1.0, was considered to indicate good reliability for assessment purposes. Pass/fail standards were based on laparoscopic experience: novices, intermediates, and experts (>100 procedures). The pass/fail standards were investigated for the PLUS performances of 33 second-year urological residents.
Fifteen novices, twenty-three intermediates and twelve experts were included. An inter-trial reliability of >0.80 was reached with two trials for each task. Inter-rater reliability of the quality measurements was 0.79 for two judges. Pass/fail scores were determined for the novice/intermediate boundary and the intermediate/expert boundary. Pass rates for second-year residents were 63.64% and 9.09%, respectively.
The PLUS assessment is reliable for setting a certification standard for second-year urological residents that serves as a starting point for residents to proceed to the next level of laparoscopic competency.
Contemporary theories on learning based on a constructivist paradigm offer the following insights: acquisition of knowledge and skills should be viewed as an ongoing process of exchange between the learner and his or her environment, so-called lifelong learning. This process can neither be atomized nor separated from the context in which it occurs. Four contemporary approaches are presented as examples.
The following shift in focus for future research is proposed: beyond isolated single factor effectiveness studies toward constructivist, non-reductionistic studies integrating the context.
Future research should investigate how constructivist approaches can be used in the medical context to increase effective learning and transition of communication skills.
Five key attributes of guidelines for communication skills training were identified: complexity, level of detail, format and organization, type of information, and trustworthiness/validity. The desired use of these attributes is related to specific educational purposes and learners' expertise. The low complexity of current communication guidelines is appreciated, but seems at odds with the wish for more valid communication guidelines.
Which guideline characteristics are preferred by users depends on the expertise of the learners and the educational purpose of the guideline.
Communication guidelines can be improved by modifying the key attributes in line with specific educational functions and learner expertise. For example, the communication guidelines used in GP training in the Netherlands seem to offer an oversimplified model of doctor-patient communication. This model may be suited to undergraduate learning, but does not meet the validity demands of physicians in training.
Remediation should support emotional needs and foster cognitive and metacognitive skills for self-regulation and critical thinking. Teachers of remediation need to motivate, critique, challenge and advise their learners, applying teaching and contextual expertise in a constructivist, student-centred environment that fosters curiosity and joy for learning. Teachers of remediation can mediate these processes through embodiment of five core roles: facilitator, nurturing mentor, disciplinarian, diagnostician and modeller of desired skills, attitudes and behaviours.
Remediation of struggling medical students can be achieved through a cognitive apprenticeship within a small community of inquiry that motivates and challenges the students. This community needs teachers capable of performing a unique combination of roles that demands high levels of teaching presence and practical wisdom.
Active peer discussion in a Computer Supported Collaborative Learning (CSCL) environment is associated with positive perceptions among medical students of subjective knowledge improvement. High student activity during discussions in a CSCL environment was associated with more task-focussed discussion, reflecting higher levels of knowledge construction. However, it remains unclear whether high discussion activity influences students' decisions to revise their CAT paper. The aim of this research is to examine whether students who revise their critical appraisal papers after discussion in a CSCL environment show more task-focussed activity and discuss critical appraisal topics more intensively than students who do not revise their papers.
Forty-seven medical students, stratified in subgroups, participated in a structured asynchronous online discussion of individually written CAT papers on self-selected clinical problems. The discussion was structured by three critical appraisal topics. After the discussion, the students could revise their paper. For analysis purposes, all students' postings were blinded and analysed by the investigator, who was unaware of students' characteristics and of whether or not a paper had been revised. Postings were counted and analysed by an independent rater and classified as outside activity, non-task-focussed activity or task-focussed activity. Additionally, postings were assigned to one of the three critical appraisal topics. Analysis results were compared between revised and unrevised papers.
Twenty-four papers (51.6%) were revised after the online discussion. The discussions of the revised papers showed significantly higher numbers of postings, more task-focussed activities, and more postings about the two critical appraisal topics: "appraisal of the selected article(s)", and "relevant conclusion regarding the clinical problem".
A CSCL environment can support medical students in the execution and critical appraisal of authentic tasks in the clinical workplace. Revision of CAT papers appears to be related to discussion activity, more specifically to high task-focussed activity on critical appraisal topics.
We collected and analysed modified mini-CEX forms completed by GP trainers and trainees. Since each trainee has the same trainer for the duration of one year, we used trainer-trainee pairs as the unit of analysis. We determined for all forms the frequency of the different types of narrative comments and rated their specificity on a three-point scale: specific, moderately specific, not specific. Specificity was compared between trainee-trainer pairs.
We collected 485 completed modified mini-CEX forms from 54 trainees (mean of 8.8 forms per trainee; range 1-23; SD 5.6). Trainer feedback was more frequently provided than trainee self-reflections, and action plans were very rare. The comments were generally specific, but showed large differences between trainee-trainer pairs.
The frequency of self-reflections and action plans varied, comments were generally specific, and there were substantial and consistent differences between trainee-trainer pairs in the specificity of comments. We therefore conclude that feedback is determined not so much by the instrument as by its users. Interventions to improve the educational effects of the feedback procedure should therefore focus more on the users than on the instruments.
For stringency, we focused on a subset of assessment factor-learning effect associations that featured least commonly in a baseline qualitative study. Our aims were to determine whether these uncommon associations were operational in a broader but similar population to that in which the model was initially derived.
A cross-sectional survey of 361 senior medical students at one medical school was undertaken using a purpose-made questionnaire based on a grounded theory and comprising pairs of written situational tests. In each pair, the manifestation of an assessment factor was varied. The frequencies at which learning effects were selected were compared for each item pair, using an adjusted alpha to assign significance. The frequencies at which mechanism factors were selected were calculated.
There were significant differences in the learning effect selected between the two scenarios of an item pair for 13 of this subset of 21 uncommon associations, even when a p-value of < 0.00625 was considered to indicate significance. Three mechanism factors were operational in most scenarios: agency; response efficacy, and response value.
For a subset of uncommon associations in the model, the role of most assessment factor-learning effect associations and the mechanism factors involved were supported in a broader but similar population to that in which the model was derived. Although model validation is an ongoing process, these results move the model one step closer to the stage of usefully informing interventions. Results illustrate how factors not typically included in studies of the learning effects of assessment could confound the results of interventions aimed at using assessment to influence learning.
Externally regulated educational interventions, like reflection, learning portfolios, assessments and progress meetings, are increasingly used to scaffold self-regulation. The aim of this study is to explore how postgraduate trainees regulate their learning in the workplace, how external regulation promotes self-regulation, and which elements facilitate or impede self-regulation and learning.
In a qualitative study with a phenomenologic approach we interviewed first- and third-year GP trainees from two universities in the Netherlands. Twenty-one verbatim transcripts were coded. Through iterative discussion the researchers agreed on the interpretation of the data and saturation was reached.
Trainees used a short and a long self-regulation loop. The short loop took one week at most and was focused on problems that were easy to resolve and needed minor learning activities. The long loop was focused on complex or recurring problems needing multiple and planned longitudinal learning activities. External assessments and formal training affected the long but not the short loop. The supervisor had a facilitating role in both loops. Self-confidence was used to gauge competence. Elements influencing self-regulation were classified into three dimensions: personal (strong motivation to become a good doctor), interpersonal (stimulation from others) and contextual (organizational and educational features).
Trainees did purposefully self-regulate their learning. Learning in the short loop may not be visible to others. Trainees should be encouraged to actively seek and use external feedback in both loops. An important question for further research is which educational interventions might be used to scaffold learning in the short loop. Investing in supervisor quality remains important, since they are close to trainee learning in both loops.
A fitness-for-purpose approach defining quality was adopted to develop and validate guidelines.
First, in a brainstorm, ideas were generated, followed by structured interviews with 9 international assessment experts. Then, guidelines were fine-tuned through analysis of the interviews. Finally, validation was based on expert consensus via member checking.
In total, 72 guidelines were developed; in this paper the most salient ones are discussed. The guidelines are related and grouped per layer of the framework. Some guidelines were so generic that they are applicable to any design consideration: the principle of proportionality, the requirement that rationales underpin each decision, and the requirement of expertise. Logically, many guidelines focus on practical aspects of assessment. Some guidelines were found to be clear and concrete; others were less straightforward and were phrased more as issues for contemplation.
The set of guidelines is comprehensive and not bound to a specific context or educational approach. Following the fitness-for-purpose principle, the guidelines are eclectic, requiring expert judgement to use them appropriately in different contexts. Further validation studies to test practicality are required.
The interview guide was based on questionnaire results; the overall response rate for Years 1-3 was 90% (n = 875). Students reported a variety of activities to improve their physical examination skills. On average, students devoted 20% of self-study time to skill training, with Year 1 students practising significantly more than Year 3 students. Practice patterns shifted from just-in-time learning to a longitudinal, self-directed approach; factors influencing this change were assessment methods and simulated/real patients. Learning resources used included textbooks, examination guidelines, scientific articles, the Internet, videos/DVDs and scoring forms from previous OSCEs. Practising skills on fellow students took place in university rooms or at home, and family and friends were also mentioned as helping. Simulated/real patients stimulated students to practise physical examination skills, initially causing confusion and anxiety about skill performance but ultimately leading to increased feelings of competence. Difficult or enjoyable skills stimulated students to practise. The strategies students adopt to master physical examination skills outside timetabled training sessions are self-directed. OSCE assessment does have influence, but learning also takes place when there is no upcoming assessment. Simulated and real patients provide strong incentives to work on skills. Early patient contacts make students feel more prepared for clinical practice.
137 complaints (98%) yielded 46 different unprofessional behaviours grouped into 18 categories. The element 'perceived medical complications and error' occurred most commonly (n=77), followed by 'having to wait for care' (n=52) and 'insufficient or unclear clarification' (n=48). The combined non-cognitive elements of professionalism (especially aspects of communication) were far more prominently discussed than cognitive issues (knowledge/skills) related to medical error. Most categories of professionalism elements were considered important by physicians but were nevertheless identified in the analysis of patient complaints. Some issues (eg, 'altruism', 'appearance', 'keeping distance/respecting boundaries with patients') were not perceived as problematic by patients and/or relatives, although they were mentioned by physicians. Conversely, eight categories of poor professionalism revealed by the complaint analysis (eg, 'having to wait for care', 'lack of continuity of care' and 'lack of shared decision making') were not considered essential by physicians.
The vast majority of unprofessional behaviour identified related to non-cognitive, professionalism aspects of care. Complaints pertaining to unsatisfactory communication were especially noticeable. Incongruence is noted between the physicians' and the patients' perception of actual care.
Specifically, it investigated how students' cultural backgrounds impact on SDL in PBL and how this impact affects students.
A qualitative, cross-cultural, comparative case study was conducted in three medical schools. Data were collected through 88 semi-structured, in-depth interviews with Year 1 and 3 students, tutors and key persons involved in PBL, 32 observations of Year 1 and 3 PBL tutorials, document analysis, and contextual information. The data were thematically analysed using the template analysis method. Comparisons were made among the three medical schools and between Year 1 and 3 students across and within the schools.
The cultural factors of uncertainty and tradition posed a challenge to Middle Eastern students' SDL. Hierarchy posed a challenge to Asian students and achievement impacted on both sets of non-Western students. These factors were less applicable to European students, although the latter did experience some challenges. Several contextual factors inhibited or enhanced SDL across the cases. As students grew used to PBL, SDL skills increased across the cases, albeit to different degrees.
Although cultural factors can pose a challenge to the application of PBL in non-Western settings, it appears that PBL can be applied in different cultural contexts. However, its globalisation does not postulate uniform processes and outcomes, and culturally sensitive alternatives might be developed.
This Guide presents a generic, systemic framework to help identify and explore improvements in the quality and defensibility of progress test data. The framework draws on the combined experience of the Dutch consortium, an individual medical school in the United Kingdom, and the bulk of the progress test literature to date. It embeds progress testing as a quality-controlled assessment tool for improving learning, teaching and the demonstration of educational standards. The paper describes strengths, highlights constraints and explores issues for improvement. These may assist in the establishment of potential or new progress testing in medical education programmes. They can also guide the evaluation and improvement of existing programmes.
The COLT was adapted based on experts' comments during a meeting and interviews, followed by a Delphi procedure (Part I). It was administered to teachers from two Dutch medical schools with different traditions in student-centred education (Part II; N=646). The data were analyzed using confirmatory factor analysis and reliability analysis.
In total, 324 teachers (50.2%) completed the questionnaire. Confirmatory factor analysis did not confirm the underlying theoretical model, but an alternative model demonstrated a good fit. This led to an instrument with eighteen items reflecting three underlying factors: 'teacher centredness', 'appreciation of active learning', and 'orientation to professional practice'. We found significant differences in COLT scores between the faculty of the two medical schools.
The COLT appears to be a construct-valid tool yielding reliable scores of teachers' conceptions of learning and teaching in student-centred medical education. Two of the three factors are new and may be specific to student-centred medical education. The COLT may be a promising tool to improve faculty development.
The purpose of this study was to explore whether the model was operational in a clinical context as a first step in this process.
Given the complexity of the model, we adopted a qualitative approach. Data from in-depth interviews with eighteen medical students were subject to content analysis. We utilised a code book developed previously using grounded theory. During analysis, we remained alert to data that might not conform to the coding framework and open to the possibility of deploying inductive coding. Ethical clearance and informed consent were obtained.
The three components of the model i.e., assessment factors, mechanism factors and learning effects were all evident in the clinical context. Associations between these components could all be explained by the model. Interaction with preceptors was identified as a new subcomponent of assessment factors. The model could explain the interrelationships of the three facets of this subcomponent i.e., regular accountability, personal consequences and emotional valence of the learning environment, with previously described components of the model.
The model could be utilized to analyse and explain observations in an assessment context different to that from which it was derived. In the clinical setting, the (negative) influence of preceptors on student learning was particularly prominent. In this setting, learning effects resulted not only from the high-stakes nature of summative assessment but also from personal stakes, e.g. for esteem and agency. The results suggest that to influence student learning, consequences should accrue from assessment that are immediate, concrete and substantial. The model could have utility as a planning or diagnostic tool in practice and research settings.
In a constructivist grounded theory study, we interviewed 22 early-career academic doctors about experiences they perceived as influential in their learning. Although feedback emerged as important, responses to feedback were highly variable. To better understand how feedback becomes (or fails to become) influential, we used the theoretical framework of regulatory focus to re-examine all descriptions of experiences of receiving and responding to feedback.
Feedback could be influential or non-influential, regardless of its sign (positive or negative). In circumstances in which the individual's regulatory focus was readily determined, such as in choosing a career (promotion) or preparing for a high-stakes examination (prevention), the apparent influence of feedback was consistent with the prediction of regulatory focus theory. However, we encountered many challenges in applying regulatory focus theory to real feedback scenarios, including the frequent presence of a mixed regulatory focus, the potential for regulatory focus to change over time, and the competing influences of other factors, such as the perceived credibility of the source or content of the feedback.
Regulatory focus theory offers a useful, if limited, construct for exploring learners' responses to feedback in the clinical setting. The insights and predictions it offers must be considered in light of the motivational complexity of clinical learning tasks and of other factors influencing the impact of feedback.
Interviews were conducted and the resulting data analysed using a qualitative, phenomenological approach. Between October 2009 and January 2010, we interviewed 22 postgraduate general practice trainees at two institutions in the Netherlands. Three researchers analysed the transcripts of the interviews.
A three-step scheme emerged from the data. Feedback as part of WBA is of greater benefit to trainees if: (i) observation and feedback are planned by the trainee and trainer; (ii) the content and delivery of the feedback are adequate, and (iii) the trainee uses the feedback to guide his or her learning by linking it to learning goals. Negative emotions reported by almost all trainees in relation to observation and feedback led to different responses. Some trainees avoided observation, whereas others overcame their apprehension and actively sought observation and feedback. Active trainers were able to help trainees overcome their fears. Four types of trainer-trainee pairs were distinguished according to their engagement in observation and feedback. External requirements set by training institutions may stimulate inactive trainers and trainees.
In line with the literature, our results emphasise the importance of the content of feedback and the way it is provided, as well as the importance of its incorporation in trainees' learning. Moreover, we highlight the step before the actual feedback itself. The way arrangements for feedback are made appears to be important to feedback in formative WBA. Finally, we outline several factors that influence the success or failure of feedback but precede the process of observation and feedback.
The authors reanalyzed 104 previously published comparisons involving a single, problem-based medical school in the Netherlands (Maastricht University's medical school), using student attrition and study duration data from this school and the schools with which it was compared. The authors removed bias by re-equalizing the comparison groups in terms of attrition and study duration.
The uncorrected data showed no differences between problem-based and conventional curricula: Mean effect sizes as expressed by Cohen d were 0.02 for medical knowledge and 0.07 for diagnostic reasoning. However, the reanalysis demonstrated medium-level effect sizes favoring the problem-based curriculum. After corrections for attrition and study duration, the mean effect size for knowledge acquisition was 0.31 and for diagnostic reasoning was 0.51.
Effects of the Maastricht problem-based curriculum were masked by differential attrition and differential exposure in the original studies. Because this school has been involved in many studies included in influential literature reviews published in the past 20 years, the authors' findings have implications for the assessment of the value of problem-based learning put forward by these reviews.
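The effect sizes above are Cohen's d values (standardized mean differences). As a reminder of what such a number represents, here is the standard pooled-standard-deviation computation, with invented example numbers rather than data from the study:

```python
import math

def cohens_d(mean1: float, mean2: float,
             sd1: float, sd2: float,
             n1: int, n2: int) -> float:
    """Cohen's d: difference between two group means divided by
    their pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical scores for two curricula (illustration only):
d = cohens_d(75.0, 72.0, 10.0, 10.0, 200, 200)
print(round(d, 2))  # → 0.3
```

By the usual rule of thumb, values around 0.2 are small, 0.5 medium and 0.8 large, which is why the corrected effect sizes of 0.31 and 0.51 are described as medium-level.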
From an interpretative constructivist perspective, we conducted a qualitative exploratory study using semi-structured interviews with a purposive sample of 16 lead consultants in the Netherlands between August 2010 and February 2011. The study design was based on the research questions and notions from corporate business and social psychology about the roles of change managers. Interview transcripts were analysed thematically using template analysis.
The lead consultants described change processes with different stages, including cause, development of content, and the execution and evaluation of change, and used individual change strategies consisting of elements such as ideas, intentions and behaviour. Communication is necessary to the forming of a strategy and the implementation of change, but the nature of communication is influenced by the strategy in use. Lead consultants differed in their degree of awareness of the strategies they used. Factors influencing approaches to change were: knowledge, ideas and beliefs about change; level of reflection; task interpretation; personal style, and department culture.
Most lead consultants showed limited awareness of their own approaches to change. This can lead them to adopt a rigid approach, whereas the ability to adapt strategies to circumstances is considered important to effective change management. Interventions and research should be aimed at enhancing the awareness of lead consultants of approaches to change in PGME.
This study explored the pre-assessment learning effects of summative assessment in theoretical modules by exploring the variables at play in a multifaceted assessment system and the relationships between them. Using a grounded theory strategy, in-depth interviews were conducted with individual medical students and analyzed qualitatively. Respondents' learning was influenced by task demands and system design. Assessment impacted on respondents' cognitive processing activities and metacognitive regulation activities. Individually, our findings confirm findings from other studies in disparate non-medical settings and identify some new factors at play in this setting. Taken together, findings from this study provide, for the first time, some insight into how a whole assessment system influences student learning over time in a medical education setting. The findings from this authentic and complex setting paint a nuanced picture of how intricate and multifaceted interactions between various factors in an assessment system interact to influence student learning. A model linking the sources, mechanism and consequences of the pre-assessment learning effects of summative assessment is proposed that could help enhance the use of summative assessment as a tool to augment learning.
Regarding the quality of feedback, the aggregated score for each of the three categories was not significantly different between the two groups, for either the interim or the final assessment. Some trends, though not statistically significant, were nevertheless noteworthy. Feedback in the web-based group was more often unrelated to observed behaviour in several categories, for both the interim and final assessments. Furthermore, most comments relating to the category 'Dealing with oneself' consisted of descriptions of a student's attendance, neglecting other aspects of personal functioning. The survey identified significant differences between the groups for all questionnaire items regarding feasibility, acceptability and perceived usefulness, in favour of the paper-based form. The use of a web-based instrument for the assessment of professional behaviour yielded a significantly higher number of comments than the traditional paper-based assessment. Unfortunately, the quality of the feedback obtained with the web-based instrument, as measured against several generally accepted feedback criteria, did not parallel this increase.
The English-language literature was searched in PubMed, PsycINFO and Medline without restriction on type or date of publication. The most prominent theme identified was assessment function, characterized as summative and formative assessment, together with the general effect of assessment on students' learning approaches. The literature review pointed clearly to the complexity of the relationship between the learning environment, students' perceptions of assessment demands, and students' approaches to learning. Many factors (extrinsic and intrinsic) have been proposed theoretically to mediate students' approaches to learning in response to assessment; however, few of these factors have been researched in the published literature. Formative assessment is likely to contribute to students' deep approach to learning, while summative assessment is likely to contribute to a surface approach. However, these effects are not definite, and further research on the complex relationship between assessment and students' learning is required.
Progress testing is longitudinal assessment in that it is based on subsequent equivalent, yet different, tests. The results of these are combined to determine the growth of functional medical knowledge for each student, enabling more reliable and valid decision making about promotion to a next study phase. The longitudinal integrated assessment approach has a demonstrable positive effect on student learning behaviour by discouraging binge learning. Furthermore, it leads to more reliable decisions as well as good predictive validity for future competence and retention of knowledge. Also, because of its integration and independence of local curricula, it can be used in a multi-centre collaborative production and administration framework, reducing costs, increasing efficiency and allowing for constant benchmarking. Practicalities include the relative unfamiliarity of faculty with the concept, the fact that remediation for students with a series of poor results is time consuming, the need to embed the instrument carefully into the existing assessment programme, and the importance of equating subsequent tests to minimize test-to-test variability in difficulty. Where it has been implemented collaboratively, progress testing has led to satisfaction, provided the practicalities are heeded well.
Participants were asked to reflect on experiences they considered to have been influential during their training. Constant comparative analysis for emerging themes was conducted iteratively with data collection.
A model of clinical learning emerged in which the clinical work itself is central. As they observe and participate in clinical work, learners can attend to a variety of sources of information that facilitate the interpretation of the experience and the construction of knowledge from it. These 'learning cues' include feedback, role models, clinical outcomes, patient or family responses, and comparisons with peers. The integration of a cue depends on the learner's judgement of its credibility. Certain cues, such as clinical outcomes or feedback from patients, are seen as innately credible, whereas other cues, particularly feedback from supervisors, are subjected to critical judgement.
Learners make complex judgements regarding the credibility of information about clinical performance. Credibility judgements influence the learning that arises from the clinical experience. Further understanding of how such judgements are made could guide educators in providing credible information to learners.
Moreover, it evaluated the effect of self-assessment process on students' study strategies within a community of clinical practice.
We conducted a qualitative phenomenological study from May 2008 to December 2009. We held 37 semi-structured individual interviews with three different cohorts of undergraduate medical students until we reached data saturation. The cohorts were exposed to different contexts while experiencing their clinical years' assessment program. In the interviews, students' perceptions and interpretations of 'self-assessment practice' and 'supervisor-provided feedback' within different contexts and the resulting study strategies were explored.
The analysis of interview data with the three cohorts of students yielded three major themes: strategic practice of self-assessment, self-assessment and study strategies, and feedback and study strategies. It appears that self-assessment is not appropriate within a summative context, and its implementation requires cultural preparation. Despite education and orientation on the two major components of the self-assessment process, feedback was more effective in enhancing deeper study strategies.
This research suggests that the theoretical advantages linked to the self-assessment process are a result of its feedback component rather than the practice of self-assessment isolated from feedback. Further research exploring the effects of different contextual and personal factors on students' self-assessment is needed.
The first experiences with the programme show that students think that the programme has high learning value and the assessment is sufficiently robust. Many of the commonly reported weaknesses of work-based assessment (not a good fit with the educational context, too complex, too bureaucratic and too much work) were not mentioned by the students.
Kane's views on validity as represented by a series of arguments provide a useful framework from which to highlight the value of different widely used approaches to improve the quality and validity of assessment procedures.
In this paper we discuss four inferences which form part of Kane's validity theory: from observations to scores; from scores to universe scores; from universe scores to target domain, and from target domain to construct. For each of these inferences, we provide examples and descriptions of approaches and arguments that may help to support the validity inference.
As well as standard psychometric methods, a programme of assessment makes use of various other arguments, such as: item review and quality control; structuring and examiner training; probabilistic methods; saturation approaches and judgement processes; and epidemiological methods, collation, triangulation and member-checking procedures. Each of these can be used in an assessment programme.
A competency framework was developed based on the analysis of focus group interviews with 54 recently graduated veterinarians and clients and subsequently validated in a Delphi procedure with a panel of 29 experts, representing the full range and diversity of the veterinary profession. The study resulted in an integrated competency framework for veterinary professionals, which consists of 16 competencies organized in seven domains: veterinary expertise, communication, collaboration, entrepreneurship, health and welfare, scholarship, and personal development. Training veterinarians who are able to use and integrate the seven domains in their professional practice is an important challenge for today's veterinary medical schools. The Veterinary Professional (VetPro) framework provides a sound empirical basis for the ongoing debate about the direction of veterinary education and curriculum development.
Our purpose was to clarify the influence of context on reasoning, to build upon education theory and to generate implications for education practice.
Qualitative data about experts were gathered from two sources: think-aloud protocols reflecting concurrent thought processes that occurred while board-certified internists viewed videotape encounters, and free-text responses to queries that explicitly asked these experts to comment on the influence of selected contextual factors on their clinical reasoning processes. These data sources provided both actual performance data (think-aloud responses) and opinions on reflection (free-text answers) regarding the influence of context on reasoning. Results for each data source were analysed for emergent themes and then combined into a unified theoretical model.
Several themes emerged from our data and were broadly classified as components influencing the impact of contextual factors, mechanisms for addressing contextual factors, and consequences of contextual factors for patient care. Themes from both data sources had good overlap, indicating that experts are somewhat cognisant of the potential influences of context on their reasoning processes; notable exceptions concerned the themes of missed key findings, balancing of goals and the influence of encounter setting, which emerged in the think-aloud but not the free-text analysis.
Our unified model is consistent with the tenets of cognitive load, situated cognition and ecological psychology theories. A number of potentially modifiable influences on clinical reasoning were identified. Implications for doctor training and practice are discussed.
Expertise theories highlight the multistage processes involved. The transition from novice to expert is characterised by increasing aggregation of concepts: from isolated facts, through semantic networks, to illness scripts and instance scripts. The latter two stages enable the expert to recognise a problem quickly and to form a fast and accurate representation of it in working memory. The striking difference between experts and novices is not per se the possession of more explicit knowledge but the superior organisation of that knowledge, paired with multiple real experiences, enabling not only better but also more efficient problem solving. Psychometric theories focus on the validity of the assessment (does it measure what it purports to measure?) and its reliability (are the outcomes of the assessment reproducible?). Validity is currently seen as building a chain of arguments about how observations of behaviour (answering a multiple-choice question is also a behaviour) can best be translated into scores, and how these can ultimately be used to make inferences about the construct of interest. Reliability theories can be categorised into classical test theory, generalisability theory and item response theory. All three approaches have specific advantages and disadvantages and different areas of application. Finally, the Guide discusses the phenomenon of assessment for learning, as opposed to assessment of learning, and its implications for current and future development and research.
We found that students working in heterogeneous groupings interact with students with whom they do not normally interact, learn considerably more from each other because of their differences in language and academic preparedness, and become better prepared for their future professions in multicultural societies. On the other hand, we found students segregating along racial lines in the tutorials, and status factors that disempowered students and subsequently reduced their productivity. A further challenge was that academic and language diversity hindered student learning. In light of these findings, the recommendations were that teachers need special diversity training to deal with heterogeneous groups and the tensions that arise, and that attention should be given to creating 'the right mix' for group learning in diverse student populations. The findings demonstrate that collaborative heterogeneous learning has two sides that need to be balanced: on the positive side the 'ideology' behind mixing diverse students, and on the negative side the 'practice' of mixing them. More research is needed to explore these variations and their efficacy in more detail.
In each school, teachers, management and examination board participated. Results show that the two schools use different approaches to assure assessment quality. The innovative school seems to be more aware of its own strengths and weaknesses, to have a more positive attitude towards teachers, students, and educational innovations, and to explicitly involve stakeholders (i.e., teachers, students, and the work field) in their assessments. This school also had a more explicit vision of the goal of competence-based education and could design its assessments in accordance with these goals.
Three student cohorts were taught using one instructional format per subject area so that each cohort received a different instructional format for each of the three subject areas. Outcome measures (objective structured clinical examination, video quiz, written examination) were selected to determine the effect of each instructional format on the clinical reasoning of students.
Increasingly authentic instructional formats did not significantly improve clinical reasoning performance across all outcome measures and subject areas. However, the results of the video quiz showed significant differences in the anaemia subject area between students who had been instructed using the paper case and live SP-based formats (scores of 47.4 and 57.6, respectively; p = 0.01) and in the abdominal pain subject area, in which students instructed using the DVD format scored higher than students instructed using either the paper case or SP-based formats (scores of 41.6, 34.9 and 31.2, respectively; p = 0.002).
Increasing the authenticity of instructional formats does not appear to significantly improve clinical reasoning performance in a pre-clerkship course. Medical educators should balance increases in authenticity with factors such as cognitive load, subject area and learner experience when designing new instructional formats.
Eight teachers at the Vrije Universiteit (VU) University Medical Centre in Amsterdam attended a training course on the use of the MAAS-Global instrument, which they subsequently used to assess the consultation skills of 53 GPTs in 176 videotaped consultations (102 with SPs, 74 with RPs). All consultations were randomly allocated and assessed by two teachers independently. The reliability of the ratings was estimated using generalisability theory.
It was easier to obtain acceptable reliability using RP consultations than SP consultations. Two assessors and five consultations were required to achieve minimal reliability (generalisability coefficient 0.7) with RPs, whereas three assessors and 30 consultations were needed to achieve minimal reliability with SPs.
Inter-observer and context variability in the assessment of the consultation skills of GPTs remains high. To achieve acceptable levels of reliability, large samples of observations are required in both formats, but, interestingly, RP encounters require a smaller sample than SP encounters.
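The trade-off reported above, fewer consultations needed with real patients than with simulated patients, follows from a generalizability-theory decision study, in which projected reliability is computed from estimated variance components for each combination of raters and consultations. A minimal sketch of such a D-study projection, using purely illustrative variance components (the abstract does not report the actual estimates), assuming a crossed trainees-by-raters-by-consultations design:

```python
# Hypothetical variance components for a trainees x raters x consultations
# (p x r x c) design; the values are illustrative only, not from the study.
var_p = 0.30    # true variance between trainees
var_pr = 0.10   # trainee-by-rater interaction (inter-observer variability)
var_pc = 0.25   # trainee-by-consultation interaction (context variability)
var_res = 0.35  # residual (three-way interaction confounded with error)

def g_coefficient(n_raters: int, n_consults: int) -> float:
    """Projected generalisability coefficient when each trainee is scored
    by n_raters raters on n_consults consultations."""
    error = (var_pr / n_raters
             + var_pc / n_consults
             + var_res / (n_raters * n_consults))
    return var_p / (var_p + error)

# Reliability rises with both facets, but sampling more consultations
# pays off most when context variance (var_pc) dominates:
for nr, nc in [(2, 5), (2, 10), (3, 30)]:
    print(f"{nr} raters x {nc} consultations: G = {g_coefficient(nr, nc):.2f}")
```

With smaller context variance, as the study suggests for real-patient encounters, the same target coefficient (0.7) is reached with far fewer consultations.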
We conducted an international qualitative study using focus groups and drawing on principles of grounded theory. We recruited volunteer participants from three undergraduate and two postgraduate programmes using structured self-assessment activities (e.g. portfolios). We asked learners to describe their perceptions of and experiences with formal and informal activities intended to inform self-assessment. We conducted analysis as a team using a constant comparative process.
Eighty-five learners (53 undergraduate, 32 postgraduate) participated in 10 focus groups. Two main findings emerged. Firstly, the perceived effectiveness of formal and informal assessment activities in informing self-assessment appeared to be both person- and context-specific. No curricular activities were considered to be generally effective or ineffective. However, the availability of high-quality performance data and standards was thought to increase the effectiveness of an activity in informing self-assessment. Secondly, the fostering and informing of self-assessment was believed to require credible and engaged supervisors.
Several contextual and personal conditions consistently influenced learners' perceptions of the extent to which assessment activities were useful in informing self-assessments of performance. Although learners are not guaranteed to be accurate in their perceptions of which factors influence their efforts to improve performance, their perceptions must be taken into account; assessment strategies that are perceived as providing untrustworthy information can be anticipated to have negligible impact.
Sixty bowel cancer screening polypectomy videos were randomly chosen for analysis and were scored independently by 7 expert assessors by using DOPyS. Each parameter and the global rating were scored from 1 to 4 (scores ≥3 = competency). The scores were analyzed by using generalizability theory (G theory).
Fifty-nine of the 60 videos were assessable and scored. The majority of the assessors agreed across the pass/fail divide for the global assessment scale in 58 of 59 (98%) polyps. For G-theory analysis, 47 of the 60 videos were analyzed. G-theory analysis suggested that DOPyS is a reliable assessment tool, provided that it is used by 2 assessors to score 5 polypectomy videos all performed by 1 endoscopist. DOPyS scores obtained in this format would reflect the endoscopist's competence.
Limitations included the small sample size and small polyp size.
This study is the first attempt to develop and validate a tool designed specifically for the assessment of technical skills in performing polypectomy. G-theory analysis suggests that DOPyS could reliably reflect an endoscopist's competence in performing polypectomy provided a requisite number of assessors and cases were used.
Using a strategic planning approach, a semi-structured open-ended questionnaire on the future of their profession was sent to 102 Dutch gynecologists. Through inductive analysis, a future perspective and its needed competencies were identified and compared to the CanMEDS framework.
The 62 responses showed content validity for the CanMEDS roles. Additionally, two roles were identified: advanced technology user and entrepreneur. Within the Communicator role, the focus will shift as patients participate more actively. The Collaborator and Manager roles are predicted to change in focus because of an increase in complex interdisciplinary teamwork and leadership roles.
By studying the Dutch gynecologists' perspective of the future in a strategic planning approach, two additional roles and focus areas within a contemporary competency framework were identified. The perspective of clinicians on future health care provides valuable messages on how to design future-proof curricula.
This has led to a broadened perspective on the types of construct assessment tries to capture, the way information from various sources is collected and collated, the role of human judgement and the variety of psychometric methods to determine the quality of the assessment. Research into the quality of assessment programmes, how assessment influences learning and teaching, new psychometric models and the role of human judgement is much needed.
To find empirical evidence for the factors claimed to influence students' anatomical knowledge.
A literature search.
There is a lack of sufficient quantity and quality of information within the existing literature to support any of the claims, but the gathered literature did reveal some fascinating insights which are discussed.
Anatomy education should be made as effective as possible, since medical students clearly cannot do without anatomical knowledge. Because of promising findings in the areas of teaching in context, vertical integration and assessment strategies, it is recommended that future research into anatomy education focus on these factors.
We searched the PubMed, EMBASE and PsycINFO databases for articles pertaining to script concordance testing. We then reviewed these articles to evaluate the construct validity of the script concordance method, following an established approach for analysing validity data from five categories: content; response process; internal structure; relations to other variables, and consequences.
Content evidence derives from clear guidelines for the creation of authentic, ill-defined scenarios. High internal consistency reliability supports the internal structure of SCT scores. As might be expected, SCT scores correlate poorly with assessments of pure factual knowledge, with correlations lower for more advanced learners. The validity of SCT scores is only weakly supported by evidence pertaining to examinee response processes and educational consequences.
Published research generally supports the use of SCT to assess the interpretation of clinical data under conditions of uncertainty, although specifics of the validity argument vary and require verification in different contexts and for particular SCTs. Our review identifies potential areas of further validity inquiry in all five categories of evidence. In particular, future SCT research might explore the impact of the script concordance method on teaching and learning, and examine how SCTs integrate with other assessment methods within comprehensive assessment programmes.
Analysis of variance was used to examine differences between years, and regression analysis to examine the relationship between deliberate practice and skill test results.
A total of 875 students participated (90%). Factor analysis yielded four factors: planning, concentration/dedication, repetition/revision, and study style/self-reflection. Students' scores on the 'planning' subscale increased over time, whereas scores on the 'repetition/revision' subscale decreased. Students' results on the clinical skills test correlated positively with scores on the 'planning' and 'concentration/dedication' subscales in years 1 and 3, and with scores on the 'repetition/revision' subscale in year 1.
The positive effects on test results suggest that the role of deliberate practice in medical education merits further study. The cross-sectional design is a limitation of the study; the large representative sample is a strength. The vanishing effect of repetition/revision may be attributable to inadequate feedback. Deliberate practice advocates sustained practice to address weaknesses identified by (self-)assessment and stimulated by feedback. Further studies should use a longitudinal, prospective design and extend their scope to expertise development during residency and beyond.
Between January and May 2009, we interviewed 14 physicians in the Netherlands who had commenced an attending post in internal medicine or obstetrics-gynecology between six months and two years earlier. Interviews focused on the attendings' perceptions of the transition, their socialization within the new organization, and the preparation they had received during residency training. The interview transcripts were openly coded, and themes emerged through constant comparison. The research team discussed the results until full agreement was reached.
A conceptual framework emerged from the data, consisting of three themes interacting in a longitudinal process. The framework describes how novel disruptive elements (first theme) due to the transition from resident to attending physician are perceived and acted on (second theme), and how this directs new attendings' personal development (third theme).
The conceptual framework finds support in transition psychology and notions from organizational socialization literature. It provides insight into the transition from resident to attending physician that can inform measures to smooth the intense transition.
This article presents the framework for PB that is used at Maastricht medical school, the Netherlands.
The approach to PB used in the Dutch medical schools is described with special attention to 4 years (2005-2009) of experience with PB education in the first 3 years of the 6-year undergraduate curriculum of Maastricht medical school. Future challenges are identified.
The adages 'Assessment drives learning' and 'They do not respect what you do not inspect' [Cohen JJ. 2006. Professionalism in medical education, an American perspective: From evidence to accountability. Med Educ 40, 607-617] suggest that formative and summative aspects of PB assessment can be combined within an assessment framework. Formative and summative assessments do not represent contrasting but rather complementary approaches. The Maastricht medical school framework combines the two approaches, as two sides of the same coin.
In the present study, we employed two established theories as frameworks with the purpose of assessing the extent to which different views of the same clinical encounter (a three-component, Year 2 medical student objective structured clinical examination [OSCE] station) are similar to or differ from one another.
We performed univariate comparisons between the individual items on each of the three components of the OSCE: the standardised patient (SP) checklist (patient perspective); the post-encounter form (trainee perspective), and the oral presentation rating form (faculty perspective). Confirmatory factor analysis (CFA) of the three-component station was used to assess the fit of the three-factor (three-viewpoint) model. We also compared tercile performance across these three views as a form of extreme groups analysis.
Results from the CFA yielded a measurement model with reasonable fit. Moderate correlations between the three components of the station were observed. Individual trainee performance, as measured by tercile score, varied across components of the station.
Our work builds on research in fields outside medicine, with results yielding small to moderate correlations between different perspectives (and measurements) of the same event (SP checklist, post-encounter form and oral presentation rating form). We believe obtaining multiple perspectives of the same encounter provides a more valid measure of a student's clinical performance.
Results were analysed for emergent themes.
Remedial programmes for at-risk medical students should be mandatory, but should respect students' identity as repeaters. Attitude and motivation are key, and working in stable groups provides essential emotional and cognitive support. The learning environment needs to foster changes in students' ways of thinking and their development as flexible, reflective learners. These endeavours require support from honest teachers with rigorous expectations and good facilitation skills.
Successful remediation needs to challenge students' conceptions of learning, works best in groups with skilled facilitators, and must take into account a blend of cognitive and affective factors and the complex interplay between learner and environment. Given a carefully designed programme, at-risk medical students can learn to make effective and lasting changes to their approach to study, and their views of learning can come to converge with influential ideas in the education literature.
To explore students' perceptions of a newly introduced integrated feedback and assessment instrument to support self-directed learning in clinical practice. Students collected feedback from clinical supervisors and recorded it in a competency-based format. This feedback was used for self-assessment, which had to be completed before the final assessment.
Four focus group discussions were conducted with second and last year Midwifery students. Focus groups were audiotaped, transcribed verbatim and analysed in a thematic way using ATLAS.ti for qualitative data analysis.
The analysis of the transcripts suggested that integrating feedback and assessment supports participation and active involvement in learning through collecting, writing, asking about, reading and rereading feedback. Provided that training and dedicated time are available, these learning activities stimulate reflection and facilitate the development of strategies for improvement. The integration supports self-assessment and formative assessment, but its value for summative assessment is contested. The quality of feedback and empowerment by motivated supervisors are essential to maximise the learning effects.
The integrated Midwifery Assessment and Feedback Instrument is a valuable tool for supporting formative learning and assessment in clinical practice, but its effect on students' self-directed learning depends on the feedback and support from supervisors.
We reasoned that the PT data should be flexibly accessible in all pathways and with any available comparison data, according to the personal interest of the learner. For that purpose, a web-based tool (Progress test Feedback, the ProF system) was developed. This article presents the principles and features of the generated feedback and shows how it can be used. In addition to enhancement of the feedback, the ProF database of longitudinal PT-data also provides new opportunities for research on knowledge growth, and these are currently being explored.
The items within the instrument are clustered around motivational and cognitive factors based on Slavin's theoretical framework. A confirmatory factor analysis (CFA) was carried out to estimate the validity of the instrument. Furthermore, generalizability studies were conducted and alpha coefficients were computed to determine the reliability and homogeneity of each factor.
The CFA indicated that a three-factor model comprising 19 items showed a good fit with the data. Alpha coefficients per factor were high. The findings of the generalizability studies indicated that at least 9-10 student responses are needed in order to obtain reliable data at the tutorial group level.
The instrument validated in this study has the potential to provide faculty and students with diagnostic information and feedback about student behaviors that enhance and hinder tutorial group effectiveness.
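The finding that at least 9-10 student responses are needed for reliable group-level data is the kind of result a decision study yields; the same logic can be illustrated with the Spearman-Brown prophecy formula, which projects the reliability of a mean of n parallel ratings from the reliability of a single rating. A minimal sketch, assuming an illustrative single-response reliability of 0.30 (the abstract does not report this value):

```python
def spearman_brown(r_single: float, n: int) -> float:
    """Projected reliability of the mean of n parallel ratings,
    given the reliability r_single of one rating."""
    return n * r_single / (1 + (n - 1) * r_single)

def n_needed(r_single: float, target: float = 0.80) -> int:
    """Smallest number of ratings whose mean reaches the target reliability."""
    n = 1
    while spearman_brown(r_single, n) < target:
        n += 1
    return n

# With an assumed single-response reliability of 0.30, roughly ten
# responses are needed to reach a reliability of 0.80:
print(n_needed(0.30, 0.80))  # → 10
```

The exact threshold depends entirely on the assumed single-response reliability, which is why g-studies estimate it from the data rather than fixing it in advance.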
Students who failed and then repeated first semester were required to participate in a cognitive skills programme, following a syllabus based on principles drawn from both educational experience and multi-disciplinary theory and practice. Performance of programme participants was compared to the performance of students who repeated prior to the mandatory programme.
Of the participants (n = 216), 91% passed their repeat semester, compared to 58% (n = 715) for controls (p < 0.0001). This significant effect persisted for progression through the school for the subsequent three semesters (p < 0.0005).
A mandatory programme that draws on a blend of theories and research-proven techniques can make a positive difference to the outcomes for at-risk medical students.
Historical data from 54 Maastricht (norm-referenced) and 52 Groningen (criterion-referenced) tests were used to demonstrate substantial discrepancies and variability in cut-off scores and failure rates. Subsequently, the compromise model, known as Cohen's method, was applied to the Groningen tests.
The Maastricht norm-referenced method led to a large variation in required cut-off scores (15-46%), but a stable failure rate (about 17%). The Groningen method with a conventional, pre-fixed standard of 60% led to a large variation in failure rates (17-97%). The compromise method reduced variation in required cut-off scores as well as failure rates.
Both the criterion and norm-referenced standards, used in practice, have disadvantages. The proposed compromise model reduces the disadvantages of both methods and is considered more acceptable. Last but not least, compared to standard setting methods using panels, this method is affordable.
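Cohen's method as commonly described sets the cut-off at a fixed fraction (often 60%) of the distance between the chance score and the score of a high-performing reference point such as the 95th-percentile examinee, which is what stabilises both cut-offs and failure rates. A minimal sketch of that computation, where the fraction and percentile defaults are assumptions for illustration rather than the exact parameters used in the study:

```python
from statistics import quantiles

def cohen_cutoff(scores, chance_score=0.0, fraction=0.60, percentile=95):
    """Compromise (relative-absolute) cut-off: a fixed fraction of the
    distance between the chance score and a high-percentile score."""
    # statistics.quantiles with n=100 returns the 1st..99th percentiles
    reference = quantiles(scores, n=100)[percentile - 1]
    return chance_score + fraction * (reference - chance_score)
```

Because the reference point moves with each cohort's top performers while the fraction stays fixed, the cut-off tracks test difficulty without requiring a standard-setting panel, which is also why the method is inexpensive.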