

1007P - A PERSONAL INTERVIEW AS A CRITERION OF ADMISSION TO MEDICAL SCHOOLS by Omar Hasan Kasule Sr.

A review by Siu and Reiter (2009) concluded that academic scores, aptitude tests, and structured interviews predicted success in medical school. They found the following tools unreliable: personal interviews, personal statements, letters of reference, personality testing, emotional intelligence testing, and situational judgment testing.

Virtually all US medical schools use the personal interview in evaluating students for admission (Puryear and Lewis 1981). The interview is, however, considered unreliable for various reasons, such as the changing moods (Redelmeier et al) or personal characteristics of the interviewer (Quintero et al 2009). Performance at interviews has a low to moderate correlation with other admission criteria such as grades (Patrick et al 2001). There is no strong evidence that performance at interviews correctly predicts performance in medical school (Smith 1991, Al Rukban et al 2010). There is no agreement on the reliability of the interview in assessing students' noncognitive skills (Donnon and Paolucci 2008, Kay et al 2010, Fan et al 2010). These skills can alternatively be assessed using computer-based screening tools (Dore et al 2009, Patterson et al 2009).

The personal interview has some advantages. Interviews by faculty or even by current students can potentially provide information on personality and motivation not available from documents (Edwards et al 1990, Gotowski et al 2010) and can identify potential psychological problems (Willer et al 1984). The personal interview also has a psychological benefit: it provides the first point of personal contact with the school and the faculty, which can have long-lasting emotional benefits if it is carried out successfully.

There is general agreement that structured, standardized interviews such as the Multiple Mini Interview (MMI) are better at predicting future success than traditional committee interviews because they involve less interviewer bias (Edwards et al 1990, Razack et al 2009), especially if the interviewers are trained (Joyner et al 2007). The MMI is a reliable, valid, and acceptable method (Hecket et al 2009) but requires considerable effort to administer well (Axelson and Kreiter 2009).

In the case of Majmaah, we have a brand-new medical school admitting students for the first time. We have to be careful to select students who will succeed and give the faculty a good reputation. It is recommended that admission be based primarily on school grades and aptitude test scores. As a secondary assessment, selected students should then go through an English language test (preferably an old version of IELTS) and a multiple mini-interview (MMI) consisting of 10 stations testing the following: (a) motivation to study medicine, (b) prior experience/interest in medicine, (c) verbal communication skills, (d) written communication skills, (e) understanding written communication, (f) analytic skills, (g) ethical analysis, (h) problem-solving skills, (i) decision-making skills, and (j) social skills. The secondary assessment will be used to identify students who would have problems with the medical course, so that decisions on their admission can be made in their best interests.

Patrick LE, Altmaier EM, Kuperman S, Ugolini K. A structured interview for medical school admission, Phase 1: initial procedures and results. Acad Med. 2001 Jan;76(1):66-71.
Smith SR. Medical school and residency performances of students admitted with and without an admission interview. Acad Med. 1991 Aug;66(8):474-6.
Puryear JB, Lewis LA. Description of the interview process in selecting students for admission to U.S. medical schools. J Med Educ. 1981 Nov;56(11):881-5.
Willer B, Keill S, Isada C. Survey of U.S. and Canadian medical schools on admissions and psychiatrically at-risk students. J Med Educ. 1984 Dec;59(12):928-36.
Edwards JC, Johnson EK, Molidor JB. The interview in the admission process. Acad Med. 1990 Mar;65(3):167-77.





Acad Med. 2001 Jan;76(1):66-71.

A structured interview for medical school admission, Phase 1: initial procedures and results.

Division of Psychological and Quantitative Foundations, College of Education, University of Iowa, Iowa City, USA.

Abstract

PURPOSE: Despite their widespread use, medical school admission interviews often are unstructured and lack reliability. This report describes the development of a structured admission interview designed to eliminate bias and provide valid information for selecting medical students, with preliminary information about the interview's reliability and validity. METHOD: After screening applications, 490 applicants to a public medical school residency program were interviewed by two faculty members using a structured interview format. Interview scores were compiled and correlated with undergraduate grade-point averages (GPAs); Medical College Admission Test (MCAT) scores; Iowa Evaluation Form (IEF) scores, an in-house evaluation of applicants' noncognitive abilities; and eventual admissions status. RESULTS: Interrater agreement was good; the percentages of rater pairs whose scores differed by one point or less ranged from 87% to 98%. Scores on the structured interview revealed low to moderate correlations with other admission criteria: 0.10 (p < 0.05) for cumulative GPA, 0.18 (p < 0.01) for MCAT Biological Science, 0.08 (p > 0.05) for MCAT Physical Science, and 0.10 (p < 0.05) for MCAT Verbal Reasoning. None of the correlations between the overall interview scores and the IEF scores reached statistical significance (p = 0.05). Higher overall scores on the structured interview did predict a greater likelihood of being accepted into the medical school, and the interview score accounted for 20% of the incremental variance in admission status beyond GPA, MCAT, and IEF scores. CONCLUSIONS: The moderate-to-low correlations with other admission criteria suggest that the interview provided information about candidate credentials not obtained from other sources and accounted for a substantial proportion of the variance in admission status. This finding supports the considerable time and resources required to develop a structured interview for medical student admissions.
Final judgment on the validity and utility of this interview should be made after follow-up performance data have been obtained and analyzed.
Acad Med. 1991 Aug;66(8):474-6.

Medical school and residency performances of students admitted with and without an admission interview.

Brown University Program in Medicine, Providence, RI 02912.

Abstract

In 1982 the Brown University Program in Medicine eliminated the personal interview from its process of selecting applicants for admission to medical school. This study compares the 113 M.D.-program students admitted to the first three classes (entering between 1983 and 1985) without an interview with the 67 students in the previous three classes admitted with an interview. The students' characteristics were essentially the same with respect to the preadmission variables, the proportions of women and minority students, course performances, scores on Parts I and II of the National Board of Medical Examiners examinations, and evaluation scores from residency program directors. This study offers additional evidence that the selection interview, as practiced in most U.S. medical schools, does not contribute to the predictive validity of the admission process.
J Med Educ. 1981 Nov;56(11):881-5.

Description of the interview process in selecting students for admission to U.S. medical schools.

Abstract

A survey was made of the medical schools in the United States to obtain a description of the interview process used in the selection of first-year medical students. The following questions were the basis for the study: What is the role of the interview in the selection of medical students? What is the nature of the interview process? How is the interview administered? An 87 percent response rate was obtained. The results indicated that 99 percent of the responding medical schools use interviews in evaluating students for medical school admission, and the interview ranks second only to the grade-point average in importance among four selection factors. The interview is usually in a one to one setting, with each applicant having two separate interviews. All schools use faculty and staff members in interviewing, and usually at least one admissions committee member interviews each applicant. Usually interviews are conducted on the campus of the school. Implications drawn from the results indicate a need for a quantification of methods to incorporate the interview into the selection process.
PMID: 7299795 [PubMed - indexed for MEDLINE]
J Med Educ. 1984 Dec;59(12):928-36.

Survey of U.S. and Canadian medical schools on admissions and psychiatrically at-risk students.

Abstract

A survey of directors of admissions and chairmen of departments of psychiatry in U.S. and Canadian medical schools was undertaken concerning problems of the identification of students with emotional problems prior to admission and the role of psychiatry faculty members on the admissions committee. In general, the respondents felt that the preadmission interview was the best opportunity to identify the at-risk student but that current interview procedures would have to be improved. The respondents also indicated that U.S. law, which prohibits asking questions about psychiatric problems or treatment, is restrictive and greatly reduces the potential effectiveness of the interview. The survey results are compared with the results of earlier surveys.
PMID: 6502661 [PubMed - indexed for MEDLINE]
Acad Med. 1990 Mar;65(3):167-77.

The interview in the admission process.

Department of Surgery, St. Louis University School of Medicine, Missouri 63104.

Abstract

Significant demographic, legal, and educational developments during the last ten years have led medical schools to review critically their selection procedures. A critical component of this review is the selection interview, since it is an integral part of most admission processes; however, some question its value. Interviews serve four purposes: information gathering, decision making, verification of application data, and recruitment. The first and last of these merit special attention. The interview enables an admission committee to gather information about a candidate that would be difficult or impossible to obtain by any other means yet is readily evaluated in an interview. Given the recent decline in numbers of applicants to and interest in medical school, many schools are paying closer attention to the interview as a powerful recruiting tool. Interviews can be unstructured, semistructured, or structured. Structuring involves analyzing what makes a medical student successful, standardizing the questions for all applicants, providing sample answers for evaluating responses, and using panel interviews (several interviewers simultaneously with one applicant). Reliability and validity of results increase with the degree of structuring. Studies of interviewers show that they are often biased in terms of rating tendencies (for instance, leniency or severity) and in terms of an applicant's sex, race, appearance, similarity to the interviewer, and contrast to other applicants. Training interviewers may reduce such bias. Admission committees should weigh the purposes of interviewing differently for various types of candidates, develop structured or semistructured interviews focusing on nonacademic criteria, and train the interviewers.
Med Educ Online. 2010 Jun 9;15. doi: 10.3402/meo.v15i0.5245.

Current medical student interviewers add data to the evaluation of medical school applicants.

New Jersey Medical School, University of Medicine and Dentistry of New Jersey, Newark, NJ 07101-5292, USA.

Abstract

BACKGROUND: There is evidence that the addition of current medical student interviewers (CMSI) to faculty interviewers (FI) is valuable to the medical school admissions process. This study provides objective data about the contribution of CMSI to the admissions process. METHOD: Thirty-six applicants to a 4-year medical school program were interviewed by both CMSI and FI, and the evaluations completed by the two groups of interviewers were compared. Both FI and CMSI assessed each applicant's motivation, medical experiences, personality, communication skills, and interests outside of the medical field, and provided a numerical score for each applicant on an evaluation form. Both objective and subjective data were then extracted from the evaluation forms, and paired t-test and rank order tests were used for statistical analysis. RESULTS: When compared with FI, CMSI wrote two to three times more words on the applicants' motivation, personality, communication skills, interests, and overall evaluation sections (p<0.001) and provided about 60% more examples on the motivation section (p=0.0011) and communication skills section (p=0.0035). In contrast, FI and CMSI provided similar numbers of negative examples in these and in the personality section and equivalent overall numerical evaluation scores. CONCLUSIONS: These results indicate that when compared with FI, CMSI give equivalent overall evaluation scores to medical school candidates but provide additional potentially useful information particularly in the areas of motivation and communication skills to committees assigned the task of selecting students to be admitted to medical school.
BMC Med Educ. 2008 Dec 10;8:58.

A generalizability study of the medical judgment vignettes interview to assess students' noncognitive attributes for medical school.

Department of Community Health Sciences, Faculty of Medicine, University of Calgary, Calgary, Canada. tldonnon@ucalgary.ca

Abstract

BACKGROUND: Although the reliability of admission interviews has been improved through the use of objective and structured approaches, there still remains the issue of identifying and measuring relevant attributes or noncognitive domains of interest. In this present study, we use generalizability theory to determine the estimated variance associated with participants, judges and stations from a semi-structured, Medical Judgment Vignettes interview used as part of an initiative to improve the reliability and content validity of the interview process used in the selection of students for medical school. METHODS: A three station, Medical Judgment Vignettes interview was conducted with 29 participants and scored independently by two judges on a well-defined 5-point rubric. Generalizability Theory provides a method for estimating the variability of a number of facets. In the present study each judge (j) rated each participant (p) on all three Medical Judgment Vignette stations (s). A two-facet crossed design generalizability study was used to determine the optimal number of stations and judges to achieve a 0.80 reliability coefficient. RESULTS: The results of the generalizability analysis showed that a three station, two judge Medical Judgment Vignettes interview results in a G coefficient of 0.70. As shown by the adjusted E rho 2 scores, since interviewer variability is negligible, increasing the number of judges from two to three does not improve the generalizability coefficient. Increasing the number of stations, however, does have a substantial influence on the overall dependability of this measurement. In a decision study analysis, increasing the number of stations to six with a single judge at each station results in a G coefficient of 0.81. CONCLUSION: The Medical Judgment Vignettes interview provides a reliable approach to the assessment of candidates' noncognitive attributes for medical school.
The high inter-rater reliability is attributed to the greater objectivity achieved through the use of the semi-structured interview format and the clearly defined scoring rubric created for each of the judgment vignettes. Despite the relatively high generalizability coefficient obtained for only three stations, future research should further explore the reliability, and equally importantly, the validity of the vignettes with a large group of candidates applying for medical school.
Am J Pharm Educ. 2007 Oct 15;71(5):83.

The structured interview and interviewer training in the admissions process.

The University of North Carolina at Chapel Hill School of Pharmacy, 27599-7360, USA. pam_joyner@unc.edu

Abstract

OBJECTIVES: To determine the extent to which the structured interview is used in the PharmD admissions process in US colleges and schools of pharmacy, and the prevalence and content of interviewer training. METHODS: A survey instrument consisting of 7 questions regarding interviews and interviewer training was sent to 92 colleges and schools of pharmacy in the United States that were accredited or seeking accreditation. RESULTS: Sixty survey instruments (65% response rate) were returned. The majority of the schools that responded (80%) used interviews as part of the PharmD admissions process. Of the schools that used an interview as part of the admissions process, 86% provided some type of interviewer training and 13% used a set of predefined questions in admissions interviews. CONCLUSIONS: Most colleges and schools of pharmacy use some components of the structured interview in the PharmD admissions process; however, training for interviewers varies widely among colleges and schools of pharmacy.
Saudi Med J. 2010 May;31(5):560-4.
Department of Family and Community Medicine, Faculty of Medicine, King Saud University, PO Box 91678, Riyadh 11643, Kingdom of Saudi Arabia. mrukban@hotmail.com
Abstract
OBJECTIVE: To evaluate the ability of preadmission criteria used in most health professional schools in Saudi Arabia to predict the in-program performance. METHODS: This retrospective cohort study was conducted at King Fahd Medical City, Faculty of Medicine, Riyadh, Kingdom of Saudi Arabia between July and September 2008. Four sets were used to examine the predictive power of preadmission variables. The variables are the academic abilities (high school grades), aptitude test, achievement test, and an interview. The criterion variables were the undergraduate grade point averages (GPAs) of medical college students (n=193). The correlation between admission variables and the GPA was examined using Pearson's correlation coefficient and regression analyses. RESULTS: Inclusion of all 4 admission tools in a regression analysis as predictors of GPA performance revealed that only the achievement test was statistically predictive of the GPA. Approximately 6.5% of variance in the GPA can be accounted for by the current admission criteria. CONCLUSION: The current admission criteria provide some insight into the predicted future performance of students. The inclusion of other valid and reliable admissions tools, such as the multiple mini-interviews and the questionnaire for candidate's suitability to follow a problem-based learning curriculum, should be considered.
Eval Health Prof. 2010 Jun;33(2):140-63. Epub 2010 Mar 31.
Faculty of Medicine, National Yang-Ming University. fan_angela@hotmail.com
Abstract
Medical schools in Taiwan have recently adopted the U.S. medical school admissions model by incorporating interviews into the selection process. The objective of this study was to investigate factors that contribute to successful medical school applications through the national entrance examination and interview admission routes. The sample consisted of survey data from five entry cohorts of medical students admitted to the National Yang-Ming University Faculty of Medicine from 2003 to 2007. Of the 513 students, 62% were admitted through the traditional national entrance examination route and 38% were admitted early after achieving a threshold score on the composite national exam followed by a structured interview. Students admitted through the interview route were more likely to be female, with an odds ratio (OR) of 2.17 (1.20-3.93). Maternal education level was an independent predictor of both early admission through a successful interview and higher medical school grade point average (GPA). Students admitted through the interview route had a 3.20 point higher first-year medical school GPA (p < .001) as determined by regression analyses. Those students who were admitted via interview did not have significantly different personality traits than those admitted through the traditional route. This study calls into question the ability of an admissions interview to select for noncognitive character traits.
Med J Aust. 2010 Feb 15;192(4):212-6.
University of Adelaide, Adelaide, SA, Australia. caroline.laurence@adelaide.edu.au
Abstract
OBJECTIVE: To determine the applicant characteristics that influence success at each application stage for entry to the University of Adelaide Medical School. DESIGN, SETTING AND PARTICIPANTS: Retrospective analysis of characteristics associated with a successful outcome to an undergraduate-entry medical school for 6699 applicants from four cohorts (2004-2007). MAIN OUTCOME MEASURES: Offer of an interview, offer of a place, and acceptance of a place in the medical school. RESULTS: Female applicants were less likely to gain an interview (odds ratio [OR], 0.88; 95% CI, 0.78-0.99) but more likely to receive an offer of a place (OR, 1.33; 95% CI, 1.07-1.66). Older applicants were less likely than younger applicants (OR, 0.78; 95% CI, 0.71-0.86) and non-school leavers (applying after leaving school) were more likely than school leavers (applying while at school) (OR, 9.54; 95% CI, 6.16-14.78) to receive an offer of an interview. Applicants from areas of high socioeconomic status were more likely to gain an interview (quartile 1 v 4: OR, 0.55; 95% CI, 0.45-0.68). The more interviews an applicant had, the more likely he or she was to be offered a place (OR, 1.49; 95% CI, 1.34-1.66). CONCLUSION: This study indicates that some applicant characteristics have a significant influence on the success of an application at particular stages, but overall there does not appear to be a large or inherent systematic bias in the selection process at the University of Adelaide Medical School.
Br Dent J. 2010 Feb 13;208(3):127-31.
Peninsula Dental School, Peninsula College of Medicine and Dentistry, Tamar Science Park, Plymouth, PL6 8BU. elizabeth.kay@pds.ac.uk
Abstract
The aim of the study described was to measure the performance of potential dental students in an evidence-based, objective, structured admission interview, and to compare that performance to student achievement in aptitude tests, tests of scientific knowledge, and tests of ability to apply knowledge to dentistry. A list of desirable attributes of dental professionals was drawn from the literature, omitting those which were considered to be learnt within the dental school curriculum. Possession of these attributes were then measured by objectively scoring responses to questions framed around a challenging clinical scenario. The interview scores were then correlated against student performance in an MCQ science for dentistry examination, an applied dental knowledge test, and the Graduate Australian Medical student aptitude test. The literature review revealed that sensitivity to others, professionalism, and ethical behaviour were deemed almost as important as academic and technical competency. Correlations of scores from an interview which sought to measure the attributes described in the literature with scores in scientific knowledge tests, aptitude tests and applied dental knowledge tests were low, and did not reach statistical significance. The results suggest that an interview process has been devised which measures the importance of characteristics not readily captured in more traditional selection strategies. Because the literature demonstrates that these characteristics are important to the public and the profession, this objective interview is a useful selection tool.
Acad Med. 2009 Oct;84(10 Suppl):S9-12.
dore@mcmaster.ca
Abstract
BACKGROUND: Most medical school candidates are excluded without benefit of noncognitive skills assessment. Is development of a noncognitive preinterview screening test that correlates with the well-validated Multiple Mini-Interview (MMI) possible? METHOD: Study 1: 110 medical school candidates completed MMI and Computer-based Multiple Sample Evaluation of Noncognitive Skills (CMSENS)-eight 1-minute video-based scenarios and four self-descriptive questions, with short-answer-response format. Seventy-eight responses were audiotaped, 32 typewritten; all were scored by two independent raters. Study 2: 167 candidates completed CMSENS-eight videos, six self-descriptive questions, typewritten responses only, scored by two raters; 88 of 167 underwent the MMI. RESULTS: Results for overall test generalizability, interrater reliability, and correlation with MMI, respectively, were, for Study 1, audio-responders: 0.86, 0.82, 0.15; typewritten-responders: 0.72, 0.81, 0.51; and for Study 2, 0.83, 0.95, 0.46 (correlation with disattenuation was 0.60). CONCLUSIONS: Strong psychometric properties, including MMI correlation, of CMSENS warrant investigation into future widespread implementation as a preinterview noncognitive screening test.
Med Teach. 2009 Dec;31(12):1066-72.
School of Psychology, University of Newcastle, Callaghan, NSW, Australia. Miles.Bore@newcastle.edu.au
Abstract
BACKGROUND: Medical schools have a need to select their students from an excess of applicants. Selection procedures have evolved piecemeal: Academic thresholds have risen, written tests have been incorporated and interview protocols are developed. AIM: To develop and offer for critical review and, ultimately, present for adoption by medical schools, an evidence-based and defensible model for medical student selection. METHODS: We have described here a comprehensive model for selecting medical students which is grounded on the theoretical and empirical selection and assessment literature, and has been shaped by our own research and experience. RESULTS: The model includes the following selection criteria: Informed self-selection, academic achievement, general cognitive ability (GCA) and aspects of personality and interpersonal skills. A psychometrically robust procedure by which cognitive and non-cognitive test scores can be used to make selection decisions is described. Using de-identified data (n = 1000) from actual selection procedures, we demonstrate how the model and the procedure can be used in practice. CONCLUSION: The model presented is based on a currently best-practice approach and uses measures and methods that maximise the probability of making accurate, fair and defensible selection decisions.
Department of Medicine, University of Toronto, Toronto, Ontario. dar@ices.on.ca
Abstract
Mood can influence behaviour and consumer choice in diverse settings. We found that such cognitive influences extend to candidate admission interviews at a Canadian medical school. We suggest that an awareness of this fallibility might lead to more reasonable medical school admission practices.
PMID: 19969588 [PubMed - indexed for MEDLINE]. PMCID: PMC2789141.
Med Educ. 2009 Dec;43(12):1198-202.
Department of Family Medicine, University of Iowa, Iowa City, USA. rick-axelson@uiowa.edu
Abstract
CONTEXT: Some medical schools have recently replaced the medical school pre-admission interview (MSPI) with the multiple mini-interview (MMI), which utilises objective structured clinical examination (OSCE)-style measurement techniques. Their motivation for doing so stems from the superior reliabilities obtained with the OSCE-style measures. Other institutions, however, are hesitant to embrace the MMI format because of the time and costs involved in restructuring recruitment and admission procedures. OBJECTIVES: To shed light on the aetiology of the MMI's increased reliability and to explore the potential of an alternative, lower-cost interview format, this study examined the relative contributions of two facets (raters, occasions) to interview score reliability. METHODS: Institutional review board approval was obtained to conduct a study of all students who completed one or more MSPIs at a large Midwestern medical college during 2003-2007. Within this dataset, we identified 168 applicants who were interviewed twice in consecutive years and thus provided the requisite data for generalisability (G) and decision (D) studies examining these issues. RESULTS: Increasing the number of interview occasions contributed much more to score reliability than did increasing the number of raters. CONCLUSIONS: Replicating a number of interviews, each with one rater, is likely to be superior to the often recommended panel interview approach and may offer a practical, low-cost method for enhancing MSPI reliability. Whether such a method will ultimately enhance MSPI validity warrants further investigation.
Med Teach. 2009 Nov;31(11):1018-23.
Medical Education Unit, School of Medicine, University of Aberdeen Medical School, Foresterhill, Aberdeen, UK. n.fernando@abdn.ac.uk
Abstract
BACKGROUND: The United Kingdom Clinical Aptitude Test (UK-CAT) was introduced for the purpose of student selection by a consortium of 23 UK University Medical and Dental Schools, including the University of Aberdeen in 2006. AIM: To compare candidate performance on UK-CAT with local medical student selection outcome. METHOD: We compared the outcomes of all applicants to Medicine, University of Aberdeen (UoA), in 2006 who undertook the UK-CAT. The candidates were selected into one of five outcomes (academic reject, reject following assessment, reject following interview, reserve list or offer). The candidate performance in the UK-CAT was compared to candidate performance on the UoA selection. RESULTS: Data are reported on 1307 (85.0%) students who applied to UoA in 2006 and undertook the UK-CAT. Total UK-CAT scores were significantly correlated with local selection scores. However, of 314 students offered a place following the conventional selection process, only 101 were also in the highest scoring 318 on the UK-CAT. CONCLUSIONS: Results from this study indicate that UK-CAT scores show weak correlation with success in our medical admissions process. It appears therefore that the UK-CAT examines different traits compared to our selection process. Further work is required to establish which better predicts success as an undergraduate or as a doctor.
Clin Med. 2009 Oct;9(5):417-20.
City University, London. f.patterson@city.ac.uk
Abstract
This study examined whether two machine-marked tests (MMTs; a clinical problem-solving test and situational judgement test), previously validated for selection into U.K. general practice (GP) training, could provide a valid methodology for shortlisting into core medical training (CMT). A longitudinal design was used to examine the MMTs' psychometric properties in CMT samples, and correlations between MMT scores and CMT interview outcomes. Independent samples from two years were used: in 2008, a retrospective analysis was conducted (n=1711), while in 2009, CMT applicants completed the MMTs for evaluation purposes (n=2265). Both MMTs showed good reliability in CMT samples, similar to GP samples. Both MMTs were good predictors of CMT interview performance (r = 0.56, p < 0.001 in 2008; r = 0.61, p < 0.001 in 2009) and offered incremental validity over the current shortlisting process. The GP MMTs offer an appropriate measurement methodology for selection into CMT, representing a significant innovation for selection methodology.
Acad Med. 2009 Oct;84(10):1364-72.
Department of Orthopaedic Surgery, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA.
Abstract
PURPOSE: The selection of medical students for training in orthopaedic surgery consists of an objective screening of cognitive skills to secure interviews for the brightest candidates, followed by subjective measures of candidates to confirm whether applicants are worthy of further consideration. The personal interview and its potential biased impact on the orthopaedic workforce were evaluated. METHOD: During 2004-2006 at the Penn State College of Medicine, the authors performed a prospective cohort study in which 30 consenting interviewers and 135 interviewees completed the Myers-Briggs Type Indicator before the interviews. Completed surveys were evaluated after submitting the resident selection list to the National Residency Matching Program, and candidate rankings based solely on the personal interview were analyzed. RESULTS: Clinicians ranked candidates more favorably when they shared certain personality preferences (P = .044) and when they shared the preference groupings of the quadrant extrovert-sensing and either the function pair sensing-thinking (P = .007) or the temperament sensing-judging (P = .003), or the function pair sensing-feeling and the temperament sensing-judging (P = .029). No associations existed between personality preferences and interviewee rankings performed by basic scientists and resident interviewers. CONCLUSIONS: The results support the hypothesis that, within the department studied, there was a significant association between similarities in personality type and the rankings that individual faculty interviewers assigned to applicants at the completion of each interview session. The authors believe that it is important for the faculty member to recognize that this tendency exists. Finally, promoting diversity within the admission committee may foster a diverse resident body and orthopaedic workforce.
Med Educ. 2009 Nov;43(11):1069-77.
Department of Education Centre, University of Western Australia, 35 Stirling Highway, Crawley, Western Australia 6009, Australia. Sandra.Carr@uwa.edu.au
Abstract
CONTEXT: Much attention and emphasis are placed on the selection of medical students. Although selection measures have been validated in the literature, it is not yet known whether high scores at selection are indicative of high levels of interpersonal aptitude. Emotional intelligence (EI) is reported to be a predictor of the interpersonal and communications skills medical schools are looking for in applicants. OBJECTIVES: This study describes EI scores in medical students and explores correlations between EI and selection scores at the University of Western Australia. METHODS: Senior medical students from a 6-year undergraduate curriculum completed the online MSCEIT (Mayer-Salovey-Caruso Emotional Intelligence Test) survey. Scores for EI were described and correlations between EI and Undergraduate Medicine and Health Sciences Admission Test (UMAT), Interview and Tertiary Entrance Rank (TER) scores were analysed. RESULTS: Mean scores of the 177 respondents (58%) reflected the normal distribution of scores (mean 98, standard deviation [SD] 15.0) in the general population. Males had higher EI scores than females and Asian students demonstrated higher EI Total and branch scores than White students. The highest and lowest EI scores were obtained for the branches Understanding Emotions (mean 110, SD 19.0) and Perceiving Emotions (mean 94, SD 15.6), respectively. No significant correlations were found between EI Total or EI branch scores and any of the selection scores (UMAT, TER and Interview). DISCUSSION: This study offers information that can be used to compare the EI scores of medical students with those of other health professionals. No relationship was identified between cognition (measured by the UMAT) and skill (measured by the MSCEIT) in the interpersonal domain and EI. 
Further studies are required to explore whether UMAT Section 2 is measuring EI, if there are associations between EI and academic performance and if EI can be used to predict the performance of junior doctors.
Med Educ. 2009 Oct;43(10):993-1000.
Centre for Medical Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada. saleem.razack@mcgill.ca
Abstract
CONTEXT: The McGill University Faculty of Medicine undertook a pilot, simulation-based multiple mini-interview (MMI) for medical school applicant selection, which ran simultaneously with traditional unstructured interviews (all applicants underwent both processes). This paper examines major stakeholder (applicants and evaluators) opinions towards the MMI compared with traditional interviews, including perceptions about the feasibility and utility of the MMI. METHODS: A total of 100 candidates applying to McGill University Medical School were enrolled in the pilot comparison of the MMI with the traditional, unstructured interview. Applicants' opinions were obtained by questionnaire shortly after the process (for all applicants) and approximately 6 months after the interviews (for non-accepted applicants). Evaluators' perceptions were also surveyed. Questionnaires contained both quantitative items and space for qualitative impressions. Descriptive statistics, repeated measures analysis of variance (MANOVA) and analysis of the topics raised in written comments were conducted. RESULTS: Univariate analyses of response scores revealed statistically significant differences, with the MMI rated more highly than the traditional interview on fairness, imposition of stress and effectiveness as a measurement tool. Compared with the traditional interview, applicants also felt the MMI: (i) allowed them to be competitive; (ii) was enjoyable, and (iii) was often a favourite part of their interview experience. It should be noted that applicants were aware that their MMI score would be included in their overall interview rating. Written comments were positive with regard to, for example, fairness, the provision of opportunities to show one's strengths, and appreciation of the fidelity of the simulations. Evaluators' responses were in agreement with applicants' responses, albeit that overall they expressed more caution about the MMI. 
CONCLUSIONS: Results suggest the MMI is a promising selection tool from the point of view of both applicants and evaluators. Both groups expressed concerns, but overall the response was favourable for the MMI in comparison with traditional interviews, and the MMI has been adopted by McGill University's medical school.
Med Educ. 2009 Aug;43(8):767-75.
Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada. evakw@mcmaster.ca
Abstract
INTRODUCTION: In this paper we report on further tests of the validity of the multiple mini-interview (MMI) selection process, comparing MMI scores with those achieved on a national high-stakes clinical skills examination. We also continue to explore the stability of candidate performance and the extent to which so-called 'cognitive' and 'non-cognitive' qualities should be deemed independent of one another. METHODS: To examine predictive validity, MMI data were matched with licensing examination data for both undergraduate (n = 34) and postgraduate (n = 22) samples of participants. To assess the stability of candidate performance, reliability coefficients were generated for eight distinct samples. Finally, correlations were calculated between 'cognitive' and 'non-cognitive' measures of ability collected in the admissions procedure, on graduation from medical school and 18 months into postgraduate training. RESULTS: The median reliability of eight administrations of the MMI in various cohorts was 0.73 when 12 10-minute stations were used with one examiner per station. The correlation between performance on the MMI and number of stations passed on an objective structured clinical examination-based licensing examination was r = 0.43 (P < 0.05) in a postgraduate sample and r = 0.35 (P < 0.05) in an undergraduate sample of subjects who sat the MMI 5 years prior to sitting the licensing examination. The correlation between 'cognitive' and 'non-cognitive' assessment instruments increased with time in training (i.e. as the focus of the assessments became more tailored to the clinical practice of medicine). DISCUSSION: Further evidence for the validity of the MMI approach to making admissions decisions has been provided. More generally, the reported findings cast further doubt on the extent to which performance can be captured with trait-based models of ability. 
Finally, although a complementary predictive relationship has consistently been observed between grade point average and MMI results, the extent to which cognitive and non-cognitive qualities are distinct appears to depend on the scope of practice within which the two classes of qualities are assessed.
J Vet Med Educ. 2009 Summer;36(2):166-73.
Department of Veterinary Clinical and Diagnostic Science, Faculty of Veterinary Medicine, University of Calgary, Calgary, Canada. kghecker@ucalgary.ca
Abstract
This study describes the development, implementation, and psychometric assessment of the multiple mini-interview (MMI) for the inaugural class of veterinary medicine applicants at the University of Calgary Faculty of Veterinary Medicine (UCVM). The MMI is a series of approximately five to 12 10-minute interviews that consist of situational events. Applicants are given a scenario and asked to work through an issue or behavioral-type questions that are meant to assess one attribute (e.g., empathy) at a time. This structure allows for multiple assessments of the applicants by trained interviewers on the same questions. MMI scenario development was based on a review of the noncognitive attributes currently assessed by the 31 veterinary schools across Canada and the United States and the goals and objectives of UCVM. The noncognitive attributes of applicants (N=110) were assessed at five stations, by two interviewers within each station, on three items using a standardized rating form on an anchored 1-5 scale. The method was determined to be reliable (G-coefficient=0.88) and demonstrated evidence of validity. The MMI score did not correlate with grade-point average (r=0.12, p=0.22). While neither the applicants nor interviewers had participated in an MMI format before, both groups reported the process to be acceptable in a post-interview questionnaire. This analysis provides preliminary evidence of the reliability, validity, and acceptability of the MMI in assessing the noncognitive attributes of applicants for veterinary medical school admissions.
Med Teach. 2009 Jun 8:1-6. [Epub ahead of print]
Medical Education Unit, School of Medicine.
Abstract
Background: The United Kingdom Clinical Aptitude Test (UK-CAT) was introduced for the purpose of student selection by a consortium of 23 UK University Medical and Dental Schools, including the University of Aberdeen in 2006. Aim: To compare candidate performance on UK-CAT with local medical student selection outcome. Method: We compared the outcomes of all applicants to Medicine, University of Aberdeen (UoA), in 2006 who undertook the UK-CAT. The candidates were selected into one of five outcomes (academic reject, reject following assessment, reject following interview, reserve list or offer). The candidate performance in the UK-CAT was compared to candidate performance on the UoA selection. Results: Data are reported on 1307 (85.0%) students who applied to UoA in 2006 and undertook the UK-CAT. Total UK-CAT scores were significantly correlated with local selection scores. However, of 314 students offered a place following the conventional selection process, only 101 were also in the highest scoring 318 on the UK-CAT. Conclusions: Results from this study indicate that UK-CAT scores show weak correlation with success in our medical admissions process. It appears therefore that the UK-CAT examines different traits compared to our selection process. Further work is required to establish which better predicts success as an undergraduate or as a doctor.
Med Educ. 2009 Jun;43(6):573-9.
Department of Community Health Sciences, University of Calgary, Calgary, Alberta, Canada. mlhofmei@ucalgary.ca
Abstract
CONTEXT: The multiple mini-interview (MMI) was used to measure professionalism in international medical graduate (IMG) applicants for family medicine residency in Alberta for positions accessed through the Alberta International Medical Graduate (AIMG) Program. This paper assesses the evidence for the MMI's reliability and validity in this context. METHODS: A group of 71 IMGs participated in our 12-station MMI designed to assess professionalism competency. A 10-point scale evaluated applicants on ability to address the objectives of the situation, interpersonal skills, suitability for a residency and for family medicine, and overall performance. We conducted generalisability and decision studies to assess the reliability of MMI scores. We assessed the validity by examining the differences in MMI scores associated with session, track and socio-demographic characteristics of applicants and by measuring the correlations between MMI scores and scores on compulsory examinations, including the AIMG objective structured clinical examination, the Medical Council of Canada Evaluating Examination (MCCEE) and the Medical Council of Canada Qualifying Examination Part I (MCCQE I). We measured the correlation between MMI and non-requisite MCCQE Part II (MCCQE II) scores that were provided. RESULTS: The reliability as indicated by the generalisability coefficient associated with average station scores was 0.70 with one interviewer per station. There were no statistically significant differences in total MMI scores or mean station sum scores based on session, track, applicant age, gender, years since medical school completion, or language of medical school. There were low, non-significant correlations with OSCE overall (r = 0.15), MCCEE (r = 0.01) and MCCQE I (r = 0.06) scores and a higher non-significant correlation with MCCQE II scores (r = 0.33). 
CONCLUSIONS: There is evidence that the MMI offers a reliable and valid assessment of professionalism in IMG doctors applying for Canadian family medicine residencies and that this clinically situated MMI assessed facets of competency other than those assessed by the OSCE.
Adv Health Sci Educ Theory Pract. 2009 Dec;14(5):759-75. Epub 2009 Apr 2.
McMaster University, 1200 Main Street West, MDCL 3112, Hamilton, ON, L8N 3Z5, Canada.
Abstract
Admissions committees and researchers around the globe have used diligence and imagination to develop and implement various screening measures with the ultimate goal of predicting future clinical and professional performance. What works for predicting future job performance in the human resources world and in most of the academic world may not, however, work for the highly competitive world of medical school applicants. For the job of differentiating within the highly range-restricted pool of medical school aspirants, only the most reliable assessment tools need apply. The tools that have generally shown predictive validity in future performance include academic scores like grade point average, aptitude tests like the Medical College Admissions Test, and non-cognitive testing like the multiple mini-interview. The list of assessment tools that have not robustly met that mark is longer, including personal interview, personal statement, letters of reference, personality testing, emotional intelligence and (so far) situational judgment tests. When seen purely from the standpoint of predictive validity, the trends over time towards success or failure of these measures provide insight into future tool development.
PMID: 19340597
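The "range-restricted pool" problem this review highlights can be illustrated numerically: even a selection tool that correlates well with later performance across all applicants shows a much weaker correlation once only high scorers are admitted. A minimal simulation sketch (the numbers are illustrative assumptions, not data from any of the studies above):

```python
import random
import statistics

random.seed(0)

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    n = len(xs)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

# Simulate a selection score and later performance, correlated in the full pool.
n = 10000
score = [random.gauss(0, 1) for _ in range(n)]
perf = [0.6 * s + 0.8 * random.gauss(0, 1) for s in score]  # true r is about 0.6

r_full = pearson(score, perf)

# Range restriction: keep only the top ~20% of applicants by selection score,
# as an admissions committee effectively does.
cutoff = sorted(score)[int(0.8 * n)]
pairs = [(s, p) for s, p in zip(score, perf) if s >= cutoff]
r_restricted = pearson([s for s, _ in pairs], [p for _, p in pairs])

# The observed correlation among admitted students is markedly attenuated.
print(f"full-pool r = {r_full:.2f}, restricted r = {r_restricted:.2f}")
```

This attenuation is one reason validity studies conducted only on admitted students can understate how well a selection tool works, and why, as the review argues, only highly reliable tools remain useful within such a restricted pool.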