Special Issue. 2022
Editorial

Dear colleagues!

EDUCATIONAL TECHNOLOGIES

AMEE GUIDE # 99. CURRICULUM PLANNING. CURRICULUM DEVELOPMENT FOR THE WORKPLACE USING ENTRUSTABLE PROFESSIONAL ACTIVITIES (EPAS)

Abstract
This guide was written to support educators interested in building a competency-based workplace curriculum. It aims to provide an up-to-date overview of the literature on Entrustable Professional Activities (EPAs), supplemented with suggestions for practical application to curriculum construction, assessment and educational technology. The guide first introduces concepts and definitions related to EPAs and then provides guidance for their identification, elaboration and validation, while clarifying common misunderstandings about EPAs. A matrix-mapping approach combining EPAs with competencies is discussed and related to existing concepts such as competency milestones. A specific section is devoted to entrustment decision-making as an inextricable part of working with EPAs. In using EPAs, assessment in the workplace is translated into entrustment decision-making for designated levels of permitted autonomy, ranging from acting under full supervision to providing supervision to a junior learner. A final section is devoted to the use of technology, including mobile devices and electronic portfolios, to support feedback to trainees about their progress and to support entrustment decision-making by programme directors or clinical teams.
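To illustrate the matrix-mapping idea, the sketch below (Python; the EPA titles, competency domains and five-level supervision scale are illustrative assumptions, not taken from the Guide) shows how a programme might record which competencies each EPA samples and the supervision level currently entrusted to a trainee.

```python
# Hypothetical illustration of an EPA-competency matrix and an
# entrustment (supervision) scale; all names are invented examples.

# Supervision levels, from full supervision to supervising a junior.
SUPERVISION_SCALE = {
    1: "observe only",
    2: "act under direct, proactive supervision",
    3: "act under indirect, reactive supervision",
    4: "act unsupervised",
    5: "provide supervision to a junior learner",
}

# Matrix mapping: each EPA samples several competency domains.
EPA_COMPETENCY_MATRIX = {
    "Obtain a history and physical examination": ["medical knowledge", "communication"],
    "Hand over the care of a patient": ["communication", "collaboration", "professionalism"],
}

# An entrustment decision records the level awarded to one trainee for one EPA.
entrustment_decisions = {("trainee_A", "Hand over the care of a patient"): 3}

for (trainee, epa), level in entrustment_decisions.items():
    print(f"{trainee} | {epa}: level {level} ({SUPERVISION_SCALE[level]})")
    print(f"  competencies sampled: {', '.join(EPA_COMPETENCY_MATRIX[epa])}")
```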

AMEE GUIDE # 84. PROBLEM-BASED LEARNING: GETTING THE MOST OUT OF YOUR STUDENTS; THEIR ROLES AND RESPONSIBILITIES

Abstract

This guide discusses the considerable literature on the merits or shortcomings of Problem-Based Learning (PBL), and the factors that promote or inhibit it, when seen through the eyes of the student. PBL appears to work best when students and faculty understand the various factors that influence learning and are aware of their roles; this Guide deals with each of the main issues in turn. One of the most important concepts to recognise is that students and faculty share the responsibility for learning, and there are several factors that can influence its success. These include student motivation for PBL and the various ways in which students respond to being immersed in the process. As faculty, we also need to consider the way in which the learning environment supports students in developing the habit of lifelong learning, and the skills and attitudes that will help them become competent reflective practitioners. Each of these elements places responsibilities upon the student, but also upon the faculty and the learning community they are joining. Although all of the authors work in a European setting, where PBL is used extensively as a learning strategy in many medical schools, the lessons learned, we suggest, apply more widely, and several of the important factors apply to any form of curriculum. This Guide follows on from a previous review in the AMEE Guides in Medical Education series, which provided an overview of PBL [121], and attempts to emphasise the key role that students have in mastering their subject through PBL. This should render the business of being a student a little less mystifying, and help faculty to see how they can help their students acquire the independence and mastery that they will need.

ASSESSMENT IN MEDICAL EDUCATION

AMEE GUIDE # 119. THE FOUNDATIONS OF MEASUREMENT AND ASSESSMENT IN MEDICAL EDUCATION

Abstract

As a medical educator, you may be directly or indirectly involved in the quality of assessments. Measurement plays a substantial role in improving the quality of assessment questions and student learning. The information provided by psychometric data can help address pedagogical issues in medical education.

By measuring, we are able to assess the learning experiences of students. Standard setting plays an important role in assessing the quality of students' performance as future doctors. Presentation of performance data to standard setters may contribute towards developing a credible and defensible pass mark. Validity and reliability of test scores are the most important factors in developing quality assessment questions. The analysis of individual assessment questions provides useful feedback for assessment leads to improve the quality of each question, and hence make students' marks fair in terms of diversity and ethnicity. Item Characteristic Curves (ICC), Differential Item Functioning (DIF) analysis and option analysis send signals to assessment leads to improve the quality of individual questions.
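As a concrete illustration of item-level analysis, the following Python sketch (with invented 0/1 response data) computes two classical statistics that commonly accompany ICC and DIF analyses: item difficulty (proportion correct) and point-biserial discrimination against the rest-of-test score. The Guide itself covers a broader range of psychometric methods; this is only a minimal example.

```python
import numpy as np

# Invented 0/1 scored responses: rows = students, columns = items.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
])

n_items = responses.shape[1]
for item in range(n_items):
    item_scores = responses[:, item]
    # Difficulty: proportion of students answering the item correctly.
    difficulty = item_scores.mean()
    # Discrimination: point-biserial correlation with the rest-of-test
    # score (total score excluding this item, to avoid inflation).
    rest_score = responses.sum(axis=1) - item_scores
    discrimination = np.corrcoef(item_scores, rest_score)[0, 1]
    print(f"item {item + 1}: difficulty={difficulty:.2f}, "
          f"discrimination={discrimination:.2f}")
```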

GUIDE ON DEVELOPING MULTIPLE CHOICE QUESTIONS (MCQS) FOR HIGH STAKES CLINICAL EXAMS

Abstract

The goal of this guide is to help faculty members, authors and everyone engaged in developing exams for medical specialists whose aim is reliable and valid assessment. The guide contains recommendations, their clarifications, examples and the most common flaws, which may help novice authors to develop Multiple Choice Questions. We hope that this guide will be a useful training aid for every medical teacher.

HOW TO SET STANDARDS ON PERFORMANCE-BASED EXAMINATIONS: AMEE GUIDE NO. 85

Abstract

This AMEE Guide offers an overview of methods used in determining passing scores for performance-based assessments. A consideration of various assessment purposes will provide context for discussion of standard setting methods, followed by a description of different types of standards that are typically set in health professions education. A step-by-step guide to the standard setting process will be presented. The Guide includes detailed explanations and examples of standard setting methods, and each section presents examples of research done using the method with performance-based assessments in health professions education. It is intended for use by those who are responsible for determining passing scores on tests and need a resource explaining methods for setting passing scores. The Guide contains a discussion of reasons for assessment, defines standards, and presents standard setting methods that have been researched with performance-based tests. The first section of the Guide addresses types of standards that are set. The next section provides guidance on preparing for a standard setting study. The following sections include conducting the meeting, selecting a method, implementing the passing score, and maintaining the standard. The Guide will support efforts to determine passing scores that are based on research, matched to the assessment purpose, and reproducible.
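As one example of the kind of method the Guide surveys, the sketch below (Python, with invented station data) implements borderline regression, a widely used standard-setting method for performance-based exams: checklist scores are regressed on examiners' global ratings, and the passing score is the predicted checklist score at the "borderline" rating. The data, the rating scale and the choice of borderline rating are assumptions for illustration, not values from the Guide.

```python
import numpy as np

# Invented OSCE station data: each candidate's examiner global rating
# (1 = clear fail .. 5 = excellent) and checklist score (percent).
global_ratings = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])
checklist_scores = np.array([35, 48, 52, 60, 63, 58, 72, 75, 84, 88])

# Borderline regression: fit checklist score as a linear function of
# the global rating, then read off the predicted score at the
# "borderline" rating (assumed here to be rating 2).
slope, intercept = np.polyfit(global_ratings, checklist_scores, 1)
BORDERLINE_RATING = 2
passing_score = slope * BORDERLINE_RATING + intercept
print(f"passing score = {passing_score:.1f}%")
```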

A FRESH LOOK AT MILLER'S PYRAMID: ASSESSMENT AT THE ‘IS’ AND ‘DO’ LEVELS

Abstract

On its silver jubilee, we celebrate the ground-breaking pyramid of George Miller by submitting a fresh look at it. We discuss two questions. Does the classical pyramidal structure accurately portray the relationships of the four levels that Miller described? Can Miller's model fulfill the unmet needs of assessors to measure evolving essential constructs and accommodate the increasingly sophisticated practice of assessment of health professionals? In response to the first question, Miller's pyramid is revisited in view of two assumptions for pyramidal structures, namely hierarchy and tapering. We then suggest different configurations for the same classical four levels and indicate when to use each one. With regard to the second question, we provide a rationale for amending the pyramid with two further dimensions to assess the personal qualities of students at the 'Is' level and their performance in teams at the 'Do' (together) level. At the end of the article, we venture to think outside the pyramid and suggest the Assessment Orbits framework to assess students as individuals and in teams. The five Assessment Orbits alert educators to assess the emerging cognitive and non-cognitive constructs without implying features such as hierarchy or tapering that are ingrained in pyramidal structures. The 'Is' orbit attends to the personal qualities of graduates: 'who' we may (or may not) trust to be our physicians. Assessment of teams at the 'Do' (together) level offers a paradigm shift in assessment, from competitive ranking (storming) among students toward norming and performing as teams.

CONDUCTING RESEARCH IN MEDICAL EDUCATION

AMEE GUIDE # 108. WRITING COMPETITIVE RESEARCH CONFERENCE ABSTRACTS

Abstract

The ability to write a competitive research conference abstract is an important skill for medical educators. A compelling and concise abstract can convince peer reviewers, conference selection committee members and conference attendees that the research described therein is worthy of inclusion in the conference programme and/or of their attendance at the meeting. This Guide of the Association for Medical Education in Europe (AMEE) is designed to help medical educators write research conference abstracts that can achieve this outcome. To do so, the Guide begins by examining the rhetorical context (i.e. the purpose, audience and structure) of research conference abstracts, and then moves on to describe the abstract selection processes common to many medical education conferences. Next, the Guide provides theory-based information and concrete suggestions on how to write persuasively. Finally, the Guide offers some writing tips and proofreading techniques that all authors can use. By attending to the aspects of the research conference abstract addressed in this Guide, we hope to help medical educators enhance this important text in their writing repertoire.

AMEE GUIDE # 87. DEVELOPING QUESTIONNAIRES FOR EDUCATIONAL RESEARCH

Abstract

In this AMEE Guide, we consider the design and development of self-administered surveys, commonly called questionnaires. Questionnaires are widely employed in medical education research. Unfortunately, the processes used to develop such questionnaires vary in quality and lack consistent, rigorous standards. Consequently, the quality of the questionnaires used in medical education research is highly variable. To address this problem, this AMEE Guide presents a systematic, seven-step process for designing high-quality questionnaires, with particular emphasis on developing survey scales. These seven steps do not address all aspects of survey design, nor do they represent the only way to develop a high-quality questionnaire. Instead, they synthesize multiple survey design techniques and organize them into a cohesive process for questionnaire developers of all levels. Addressing each of these steps systematically will improve the probability that survey designers accurately measure what they intend to measure.

SIMULATION TECHNOLOGIES

ROBOT ASSISTED VERSUS LAPAROSCOPIC SUTURING LEARNING CURVE IN A SIMULATED SETTING

Abstract

Background. Compared to conventional laparoscopy, robot assisted surgery is expected to have the most potential in difficult areas and for demanding technical skills such as minimally invasive suturing. This study was performed to identify the differences between the learning curves of laparoscopic and robot assisted suturing.

Method. Novice participants performed three suturing tasks on the EoSim laparoscopic augmented reality simulator or the RobotiX robot assisted virtual reality simulator. Each participant performed an intracorporeal suturing task, a tilted plane needle transfer task and an anastomosis needle transfer task. To complete the learning curve, all tasks were repeated for up to twenty repetitions or until a time plateau was reached. Clinically relevant and comparable parameters regarding time, movements and safety were recorded. Intracorporeal suturing time and cumulative sum (CUSUM) analysis were used to compare the learning curves and phases.
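Cumulative sum (CUSUM) analysis of task times can be sketched simply: each repetition's time is compared with a target, and the running sum of the deviations is tracked; the curve rises while the learner is slower than the target and plateaus or falls once the target is reached. The Python sketch below uses invented times and an assumed target time; it illustrates the general technique, not this study's exact parameters.

```python
import numpy as np

# Invented suturing times (seconds) over consecutive repetitions.
task_times = np.array([610, 540, 470, 400, 360, 330, 300, 290, 280, 275])

# CUSUM of deviations from a target time: while the trainee is slower
# than the target the curve rises; once performance reaches the target
# the curve plateaus or falls, marking the end of the learning phase.
TARGET_TIME = 300  # assumed target, in seconds
cusum = np.cumsum(task_times - TARGET_TIME)

for rep, value in enumerate(cusum, start=1):
    print(f"repetition {rep:2d}: CUSUM = {value:6.0f} s")
```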

Results. Seventeen participants completed the learning curve laparoscopically and 30 robot assisted. Median first-knot suturing time was 611 seconds (s) laparoscopically versus 251 s robot assisted (p<0.001); based on the identified learning phases, this was 324 s versus 165 s (sixth knot, p<0.001) and 257 s versus 149 s (eleventh knot, p<0.001), respectively. The percentage of 'adequate surgical knots' was higher in the laparoscopic group than in the robot assisted group: first knot, 71% versus 60%; sixth knot, 100% versus 83%; eleventh knot, 100% versus 73%. When assessing the 'instrument out of view' parameter, the robot assisted group scored a median of 0% after the fourth repetition. In the laparoscopic group, instrument out of view increased from 3.1% to 3.9% (left) and from 3.0% to 4.1% (right) between the first and eleventh knot (p>0.05).

Conclusion. The learning curve of minimally invasive suturing shows shorter task times with robotic assistance than with conventional laparoscopy. However, laparoscopic training also achieves good end results, with rapid improvement in outcomes.

DESIGNING OF A SIMULATION CENTER

Abstract

This article examines different aspects of designing a simulation center. A collegial approach makes it possible to take into account the various concerns that influence this process. The article describes the most common questions and mistakes that arise at the planning stage. It aims to support a team engaged in this process and to provide recommendations on an optimal design for a simulation center that will satisfy the educational goals and needs of the particular institution.

COMPUTERIZED VIRTUAL REALITY SIMULATION IN PRECLINICAL DENTISTRY: CAN A COMPUTERIZED SIMULATOR REPLACE THE CONVENTIONAL PHANTOM HEADS AND HUMAN INSTRUCTION?

Abstract

In preclinical dental education, the acquisition of clinical and technical skills, and the transfer of these skills to the clinic, are paramount. Phantom heads provide an efficient way to teach preclinical students dental procedures safely while considerably increasing their dexterity. Modern computerized phantom head training units incorporate features of virtual reality technology and the ability to offer concurrent augmented feedback. The aims of this review were to examine and evaluate the dental literature for evidence supporting their use and to discuss the role of augmented feedback versus the facilitator's instruction. Adjunctive training in these units seems to enhance students' learning and skill acquisition and to reduce the required faculty supervision time. However, virtual augmented feedback cannot be used as the sole method of feedback, and the facilitator's input remains critical. Well-powered longitudinal randomized trials exploring the impact of these units on students' clinical performance and issues of cost-effectiveness are warranted.

All articles in our journal are distributed under the Creative Commons Attribution 4.0 International License (CC BY 4.0 license)

CHIEF EDITOR
Balkizov Zalim Zamirovich
Secretary General of the Russian Society of Medical Education Specialists; Director of the Institute of Training of Medical Education Specialists of the Russian Medical Academy of Continuing Professional Education, 125993, Moscow, Russian Federation; Professor of the Department of Vocational Education and Educational Technologies of the N.I. Pirogov RNIMU of the MOH of Russia; CEO of GEOTAR-Med; Advisor to the President of the National Medical Chamber, Moscow, Russian Federation
