Journal of Education and Ethics in Dentistry

REVIEW ARTICLE
Year: 2011  |  Volume: 1  |  Issue: 2  |  Page: 46-51

Appropriateness of using oral examination as an assessment method in medical or dental education


Ghousia Rahman 
 Department of Dentistry and Dental Hygiene, Riyadh Colleges of Dentistry and Pharmacy, Saudi Arabia

Correspondence Address:
Ghousia Rahman
P.O. Box 84891, Riyadh 11681
Saudi Arabia

Abstract

This paper describes the appropriateness of using the oral examination as an assessment method in medical or dental education. It highlights the rationale for using the oral exam and discusses its usefulness as an assessment tool. The oral examination is a form of assessment in which a set of stimulus questions is developed that addresses critical areas of knowledge or sets of abilities related to a competency or set of competencies. Students are expected to respond verbally in their own words, which allows an assessment of the student's depth of comprehension and capacity to apply knowledge and insights to different situations. Responses to the questions are assessed using a rating scale or scoring system. In practice, oral exams are used not as a substitute for, but as a complement to, written exams. They are a way to ask what is not feasible through the written format. The paper reviews the literature to explore the strengths and weaknesses of using an oral exam as an assessment tool. It concludes by offering a set of alternatives and recommendations to improve the utility of the oral exam methodology. Substantial work, however, is needed to develop the traditional oral examination into a 'best practice oral' format appropriate for medical or dental education.



How to cite this article:
Rahman G. Appropriateness of using oral examination as an assessment method in medical or dental education. J Educ Ethics Dent 2011;1:46-51.




 Introduction



Medical educators around the world have successfully used many different methods of assessing learners, both written and oral. [1] The oral or viva method of assessment was defined by Joughin as an "assessment in which a student's response to the assessment task is verbal, in the sense of being expressed or conveyed by speech instead of writing". [2] He also pointed out that oral examinations enable two qualities to be measured: a student's command of the oral medium and a student's command of content. [2] The oral examination is a form of assessment in which a set of stimulus questions is developed that addresses critical areas of knowledge or sets of abilities related to a competency or set of competencies. Students are expected to respond verbally in their own words, which allows an assessment of the student's depth of comprehension and capacity to apply knowledge and insights to different situations. The viva is also used for assessment of the borderline or exceptional student and can, in theory, make a positive contribution to the overall educational experience in terms of assessment, providing a student with the opportunity to improve on a performance in previous aspects of an examination and the facility to explore topics further through direct interaction with the examiner. [2] Responses to the questions are assessed using a rating scale or scoring system.

 Glossary



Assessment tools

Assessment tools comprise a wide range of instruments and methodologies designed to gather information for feedback, for diagnostic purposes, and for identifying successful attainment of competence. The utility, or usefulness, of an assessment has been defined as a product of its validity, reliability, cost-effectiveness, acceptability and educational impact. [3]

Validity

Validity "is the degree to which a test 'truly' measures what it is intended to measure". Validity is the "first priority of any assessment". [4] Validity refers to the accumulation of evidence gathered from a variety of sources and supporting the proposition that the assessment is, in fact, evaluating the competency of interest, or the knowledge and abilities that support the acquisition of competence. Evidence can take the form of expert opinion derived from a practice analysis, survey, or standard setting event. [3]

Types of validity

Content validity

It measures the extent to which the content of the test matches the instructional objectives. For example, if the final exam includes content covered only during the last six weeks, it is not a valid measure of the course's overall objectives, that is, it has a very low content validity. [3],[4]

Criterion validity

It determines the extent to which scores on the test are in agreement with (concurrent validity) or predict (predictive validity) an external criterion. For example, if the end-of-year final exams in a university correlate highly with the national competitive exam, they would have high concurrent validity. [3],[4]

Construct validity

It determines the extent to which an assessment corresponds to other variables, as predicted by some rationale or theory. If you can correctly hypothesize that English for speakers of other languages (ESOL) students will perform differently on a reading test than English-speaking students (because of theory), the assessment may have construct validity. [3],[4]

 Reliability



Reliability relates to consistency in measurement, that is, scores derived from a reliable assessment tool are similar across assessment events. Reliability is of central importance in assessment because trainees, assessors, regulatory bodies and the public alike want to be reassured that an assessment used to ensure that students are competent would reach the same conclusion if it were possible to administer the same test again to the same student under the same circumstances. Reliability is typically reported as a value ranging from 0.0 to 1.0. Reliabilities above 0.90 are considered excellent; reliabilities below 0.70 are considered suspect, and results from such an assessment tool should be interpreted with caution. [3]

Types of reliability

Inter-rater or inter-observer reliability

It is used to assess the degree to which different raters/observers give consistent estimates of the same phenomenon. It can be calculated by measuring the percent of agreement between the raters, or calculating the correlation between the ratings of the two observers. [5]
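As a minimal illustration (the raters, students, and scores below are hypothetical), the following Python sketch computes both estimates mentioned above: the percentage of exact agreement and the Pearson correlation between two raters' scores. The same correlation calculation applies to test-retest and parallel-forms reliability, with the two score lists taken from the two administrations or the two test forms instead of from two raters.

rater_a = [4, 3, 5, 2, 4, 3, 5, 4, 2, 3]   # hypothetical scores from rater A (1-5 scale)
rater_b = [4, 3, 4, 2, 4, 3, 5, 4, 3, 3]   # hypothetical scores from rater B for the same students

# Percentage of agreement: proportion of students given the identical score by both raters.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"Percent agreement: {agreement:.0%}")
print(f"Inter-rater correlation: {pearson(rater_a, rater_b):.2f}")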

Test-retest reliability

It is used to assess the consistency of a measure from one time to another, by administering the same assessment twice, separated by days, weeks, or months. Reliability is stated as the correlation between scores at Time 1 and Time 2. [5]

Parallel-forms reliability

It is used to assess the consistency of the results of the two tests constructed in the same way from the same content domain. To measure, create two forms of the same test (vary the items slightly). Reliability is stated as correlation between the scores of Test 1 and Test 2. [5]

Internal consistency (Cronbach's alpha, α)

It assesses the consistency of results across items within a test. One half of the test is compared with the other half, or methods such as the Kuder-Richardson Formula 20 (KR-20) or Cronbach's alpha are used. Coefficient alpha and KR-20 both represent the average of all possible split-half estimates. The difference between the two lies in the type of items each is used with: coefficient alpha is typically used during scale development with items that have several response options (e.g., 1 = strongly disagree to 5 = strongly agree), whereas KR-20 is used to estimate reliability for dichotomous (e.g., Yes/No, True/False) response scales. [5]
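The following Python sketch, using hypothetical item-response data, illustrates the calculation. Because KR-20 is algebraically the special case of Cronbach's alpha for dichotomous (0/1) items, a single function covers both estimates.

def cronbach_alpha(items):
    """items: one list of scores per item, same students in the same order."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of students
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_var = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]   # each student's total score
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Five hypothetical Likert items (1-5) answered by six students: Cronbach's alpha.
likert_items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
    [3, 4, 3, 4, 2, 4],
    [4, 5, 3, 5, 2, 5],
]
print(f"Cronbach's alpha: {cronbach_alpha(likert_items):.2f}")

# Four hypothetical True/False items scored 0/1 for the same students: KR-20.
dichotomous_items = [
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 1],
    [1, 0, 0, 1, 0, 1],
    [0, 1, 1, 1, 0, 1],
]
print(f"KR-20: {cronbach_alpha(dichotomous_items):.2f}")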

Relationship between reliability and validity

If a test is unreliable, it cannot be valid. For a test to be valid, it must be reliable. However, just because a test is reliable does not mean it is valid; reliability does not imply validity. That is, a reliable measure is measuring something consistently, but it may not be measuring what you want to measure. For example, while there are many reliable tests of specific abilities, not all of them would be valid for predicting, say, performance. In terms of accuracy and precision, reliability is analogous to precision, while validity is analogous to accuracy. Reliability is therefore a necessary, but not a sufficient, condition for validity. [1],[2],[3],[4],[5]

Effectiveness of oral exams

The oral exam format enables instructors to test students on all six cognitive levels of Bloom's taxonomy [6] [Figure 1]. For example, consider the type of questions and questioning one can use in an oral exam setting. The examiner can ask the student about his/her knowledge and comprehension (levels 1 and 2), can use the exam to see whether the student can apply the concepts (level 3), can use a case to test the student's analytical ability (level 4), can determine whether the student can combine concepts into a new whole (level 5), and can even determine whether the student can evaluate or critically assess various concepts or theories (level 6). While many of these domains can be assessed through a written exam, the oral exam allows the instructor to probe these areas to ascertain whether the student "really knows what s/he is talking about". Oral exams thus cover several cognitive domains as well as the psychomotor skill of oral expression.
{Figure 1}
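To make the mapping concrete, the following Python sketch lays out the six levels with the kind of oral probe an examiner might attach to each. The question stems are purely hypothetical illustrations, not drawn from the source.

bloom_probes = {
    1: ("Knowledge",     "Define dental caries and name its principal causative organisms."),
    2: ("Comprehension", "Explain, in your own words, how fluoride reduces caries risk."),
    3: ("Application",   "How would you apply this to a 6-year-old with early lesions?"),
    4: ("Analysis",      "Given this case history, which factors most likely drove the disease?"),
    5: ("Synthesis",     "Design a prevention plan that combines the measures we discussed."),
    6: ("Evaluation",    "Critique the evidence for fissure sealants versus fluoride varnish."),
}

# Print one sample probe per cognitive level.
for level, (label, stem) in bloom_probes.items():
    print(f"Level {level} ({label}): {stem}")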

In practice, oral exams are used not as a substitute for, but as a complement to, written exams. They are a way to ask what is not feasible through the written format. Ostensibly, the rationale is that instructors can use the oral format to probe, challenge, and critically assess the depth and breadth of a student's knowledge, understanding and use of various concepts. The oral examination enables an interactive dialogue between candidate and assessor, allowing the examinee to demonstrate strengths, letting examiners discriminate between superficial and real knowledge through in-depth questioning, and allowing the questions asked to be tailored to the needs of each individual candidate. [7] This form of assessment is well suited to the evaluation of reflective and critical thinking competencies, along with problem-solving and analytical abilities. Oral exams also have the potential to measure the student's achievement of course outcomes not restricted to knowledge, but related to the individual's professionalism, ethics, interpersonal competence and qualities. [8]

The literature supports the view that the oral examination has several advantages over other forms of testing, including direct personal contact and the recognition of safe and competent clinicians. [9] Oral exams provide a constructive forum in which to ascertain the student's appropriate use of the 'scientific language', and to test the student's persuasive skills and oral poise. Jacobson and colleagues pointed out that many examiners consider oral examinations a useful feedback mechanism: by personally examining a sample of students, the examiner can elicit valuable information on the strengths and weaknesses of the curriculum. [10] Flexibility in moving from one area to another during the examination is cited as a further advantage of oral exams. [11]

Limitations of oral exams

Kehm's review of the literature highlights potential challenges of oral vivas as socio-psychological (unequal power distance between the candidate and the assessors), psycho-analytical (ritual aspects of the process), sociological (differences in cultural and social norms affecting oral performance), and methodological issues of measurement (low degree of objectivity, reliability, and validity). [12]

The use of oral examinations in high-stakes assessment systems has also been criticized for many years because of poor inter-rater reliability. [10] The low reliability relates, in part, to the examiner's active participation in the examination, which can introduce bias. In the traditional oral examination, each candidate may receive a different assessment with regard to the content areas addressed, the difficulty of the questions asked, the level of prompting or help provided, and the learning outcomes assessed; for example, knowledge of the basic sciences, patient investigation and management. These differences present difficulties not only in a norm-referenced system of assessment, where the intention is to rank the candidates, but also in a criterion-referenced system, where the intention is to assess whether or not the candidate has achieved a pre-determined standard. The reasons for low reliability also have an adverse impact on validity because of the potential for variation in the content matter addressed and in the emphasis given to different content areas. [11]

Oral examinations are usually employed in an attempt to assess the candidate's knowledge of a subject. Memon and colleagues pointed out that viva marks correlated with personality scores. [11] Rowland-Morin et al. showed that the verbal style and dress of candidates influence oral examination scores. [13] Roberts et al. carried out a discourse analysis (a detailed study of language in use) of the oral component of the membership examination of the Royal College of General Practitioners (MRCGP) and pointed out that candidates from ethnic minorities and those trained abroad may experience particular hidden difficulties with oral examinations, leading to discrimination. Furthermore, the discrimination may not be limited to ethnicity. [14] Esmail and May suggested that candidates from working-class backgrounds and, in some instances, female candidates may also be discriminated against. [15]

The problems with oral examinations extend beyond poor reliability and validity. Oakley and Hencken questioned the cost-effectiveness of oral examinations when the cost, in terms of professional time and energy, is weighed against their reliability and validity as a measure of professional competence. [16] Any well-planned examination, however, is costly in terms of examiners' time and effort; the challenge is finding assessment instruments where the effort spent is educationally 'profitable'. Orals can be highly threatening for candidates, with resultant poor performance. [9] It can be argued, however, that all examinations are stressful; the question is whether the viva is more stress-provoking than other assessments. There is no evidence that orals are more stressful than other exams and, indeed, there is anecdotal evidence to the contrary. Schiff, in a personal narrative, reported that the short case was more stressful than other parts of the clinical examination. [17]

 Suggestions for Alternative Methods of Assessment



According to Norman, clinical competence can be divided into five elements. [18] First, clinical skills are needed to obtain relevant information through history-taking and physical examination. Second, knowledge is required to interpret these findings and to institute appropriate medical care. Third, interpersonal skills are essential to deal effectively with patients and co-workers. Fourth, skills, knowledge, and interpersonal skills must be integrated in order to solve medical problems. Fifth, technical skills are required to carry out patient care or medical procedures. As oral exams do not fully test these elements, the following alternative methods of assessment are suggested. [18]

Record review (Chart-stimulated review)

Record review can be used effectively to evaluate a wide range of competencies that are not readily assessed by other tools, including competencies in the domains of critical thinking, professionalism, and health promotion. This methodology involves a review of the patient care records (i.e., patients' charts) developed by the student. The review consists of an evaluation of diagnostic information and an examination of findings related to treatment planning in light of standards of dental practice. [19] Chart-stimulated review assesses the learner's capacity to explain the rationale for treatment decisions, demonstrate comprehension of key concepts, and compare and contrast alternative treatment approaches; it is also used to stimulate students' self-assessment and reflection. However, the variability among records can lead to subjective judgments regarding their quality. [19]

Even the chart-stimulated recall interview is limited in the degree to which it enables investigators to assess a student's clinical management competence. The student is likely to present his or her management in a positive light, and the account may not reflect his or her actual clinical performance. However, if the learner can give an adequate explanation during the interview, the investigator is reassured about his or her knowledge; in the absence of an adequate explanation, the investigator may suspect competency problems. The use of chart-stimulated recall interviews has proved to be a good compromise in the context of limitations of time and cost. The use of videotaped assessments of regular consultations in daily practice would permit a more valid assessment of the student's performance (as would videotaped standardized patient encounters), but their use would certainly be more expensive and difficult to implement. Moreover, the reliability of the measurements may be influenced by the assessors' professional background. [20]

Virtual reality (Computer-based clinical scenarios)

This assessment involves computer-based clinical scenarios designed to evaluate students' knowledge and abilities related to diagnosis and treatment planning. These scenarios can be highly sophisticated, involving audio and video simulations. [21] Computer-based scenarios are highly suitable for assessing competencies related to diagnosis and treatment planning; they have high clarity and make excellent tools for teaching and assessment. [21] One of the major advantages of virtual scenarios is their high fidelity. However, determining performance levels is difficult and requires considerable research to identify the salient decision points for evaluating appropriate diagnoses and treatment plans. Computer-based scenarios are also expensive and time-consuming to produce, and they must accurately depict actual patient care situations. [21]

Use of computer technology to modify objective structured clinical examinations

Objective Structured Clinical Examinations (OSCEs) are multi-station clinical examinations that have been shown to be effective in testing students' ability to integrate the knowledge, skills, and attitudes acquired during their preclinical and clinical training and experiences. To reduce the disruption caused by students moving from station to station, and to allow the entire class to be examined in one sitting, the traditional concept can be modified using computer technology so that the stations "move" via a PowerPoint presentation while the students remain stationary. The exam questions provide a means of testing data interpretation, diagnostic skills, and, to some extent, interpersonal skills. The overall atmosphere during the computer-based examination is less chaotic: each student receives identical instructions, explanations, and time allotments to respond to the information presented. The ratio of faculty to students required to monitor the exam is lower than that required for the traditional format. Additionally, since there is no need to allow time for student transitions, the total time required to administer the exam is reduced. Thus, objective assessment of the entire class can be accomplished using fewer faculty members, less class time, and less disruption for the students. [22],[23]

Models

Models are mannequins that present various dentally related clinical challenges and can be used for the evaluation of dental students. [24] This form of assessment examines knowledge and problem-solving skills, the underlying competencies of which are often related to diagnosis and treatment planning. The evaluation can be relatively objective because the models are standard for all students and the evaluation criteria can be readily defined. [24] This type of assessment is effective largely because symptoms are easily standardized and consistent across students, and assessing performance is relatively straightforward. The main limitation arises when the criteria are not well defined or those serving as raters are not well calibrated. [24]

Problem-based learning

Reports on the adoption of problem-based learning (PBL) in dental education asserted the dual purposes of assessment: 1) feedback for self-direction of learning and 2) assessment of abilities in the process. [25],[26] They pointed out that the assessment methods chosen will influence what is learned and that, in PBL, those methods must measure "student achievement in the process of problem dissection, identification of learning objectives, and development of critical thinking skills," as well as, later on, "the application of these skills in problem-solving situations." [25],[26] They suggested that a variety of approaches to assessment are useful in PBL, including faculty, self-, and peer/subjective assessments; problem-solving exercises; case-based multiple-choice tests written to assess students' comprehension of the association between symptoms and pathophysiology; OSCEs; clinical competency assessments; and the triple jump exercise, which requires self-directed learning skills. [25],[26] The multimethod approach advocated by Fincham and Shuler may not be employed in many programs using PBL; however, the message that assessment must be an integral part of the educational experience is now being recognized by dental educators. [25],[26]

 Recommendations for Increasing the Effectiveness of Oral Exams



Orient the students

Candidates should be informed about the examination process in advance. Guidelines for candidates should be made available to allow them to organize their responses to general and specific problems. An orientation letter can be given to candidates by the college to help them understand the structure and overall objectives of the examination. Recommended preparatory techniques such as guidance from the supervisor, clearly defined guidelines, and mock oral exams can reduce students' stress levels. [27]

Train the examiners

Examiner performance can be enhanced by appropriate guidelines and instructions and by training new examiners. In addition, the performance of examiners should be evaluated through periodic observation and discussion of candidates' results. Training examiners may produce more uniform delivery of questions and evaluation of performance, and formal training programs can be adopted to increase both the validity and the reliability of oral examinations. Useful steps include developing an orientation manual, day-long workshops, an examiner evaluation system, and a mechanism to coordinate content between examination teams and prevent redundancy in the examination. Despite concerns about the subjectivity of oral exams, a longitudinal study of oral practice examinations within medical programs revealed substantial internal consistency and reliability of orals, identifying a positive correlation with in-training examination scores and faculty evaluation scores. [28],[29]

Use multiple assessors

Norman suggested that the oral examination must sample more broadly across cases and examiners to enhance reliability (control observer bias, drift and fabrication) and enhance scope of feedback. [30]

Assess on multiple occasions

Use a number of orals to enhance reliability, to aid thorough and complete assessment, and to enhance perceptions of fairness and accuracy.

Questions should be straightforward and clear

Questions should be capable of being asked in a few sentences that are clear, unambiguous, uncomplicated, and without repetition. The question should have been thought out clearly beforehand, but not so rigidly that it cannot be changed to suit the candidate's response. This requires a decision tree to be prepared for each question. A decision tree is a decision-support tool that maps decisions and their possible consequences in a tree-like structure; in this case, it maps all the correct responses that can be expected from students. The examiner must have clear expectations of the important points that the candidate should make during the answer. Ideally, the question should be practiced on colleagues before being used in examinations.
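As an illustration of what such a prepared question plan might contain (the clinical question, expected points, and follow-up prompts below are hypothetical), one possible plain-data sketch in Python is:

# Hypothetical oral-exam question with its expected key points and follow-up prompts.
question = {
    "stem": "A patient presents with a swollen, painful lower right first molar. "
            "How would you proceed?",
    "expected_points": [
        "take a focused history (onset, duration, systemic symptoms)",
        "clinical examination and vitality testing",
        "periapical radiograph",
        "differential diagnosis: irreversible pulpitis versus periapical abscess",
    ],
    "follow_ups": {
        "mentions radiograph": "What specific findings would you look for on the film?",
        "mentions abscess": "How would systemic involvement change your management?",
        "incomplete answer": "Is there further information you would want before treating?",
    },
}

# During the viva, the examiner ticks off expected points and selects the follow-up
# prompt that matches the candidate's response, keeping content comparable across candidates.
covered = {point: False for point in question["expected_points"]}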

Use simple grading system or rubrics

Criteria for answers can provide clear guidelines on what is and is not an acceptable answer to the examiner's questions. Checklists have been suggested as a mechanism to reduce variability in the content of questions and in grading. It may be that "the more rigid the structure of the oral ... the higher the reliability". [31] As an example of good practice, a standardized rating sheet could be adopted comprising a Likert-type scale from 1 (very poor) to 5 (very good) across 12 criteria: appearance, knowledge of subject, confidence, conciseness of responses, quality of responses, thinking on the spot, communication skills, application of theory to practice, ability to handle questions, body language, professional manner, and clarity of responses. The examiner's ratings of each student can be summed to give a score out of 60, which can then be converted to a percentage contribution reflecting his or her performance on this assessment component. [32]
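A minimal Python sketch of how such a rating sheet could be scored follows; the criterion names reproduce the list above, while the ratings themselves are hypothetical.

# Twelve criteria, each rated on a 1 (very poor) to 5 (very good) Likert-type scale.
criteria = [
    "appearance", "knowledge of subject", "confidence", "conciseness of responses",
    "quality of responses", "thinking on the spot", "communication skills",
    "application of theory to practice", "ability to handle questions",
    "body language", "professional manner", "clarity of responses",
]

ratings = {c: 4 for c in criteria}          # example: an examiner awards 4/5 on every criterion
ratings["thinking on the spot"] = 3
ratings["clarity of responses"] = 5

total = sum(ratings.values())               # score out of 60 (12 criteria x 5 points)
percentage = total / (len(criteria) * 5) * 100
print(f"Oral exam score: {total}/60 ({percentage:.0f}%)")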

Structure the oral on clinical scenarios

Structured oral examinations (SOEs) based on a clinical case with well-defined goals can often give great insight into a candidate's knowledge, interpretive ability, problem solving and attitudes, thereby improving inter-rater reliability. Most authors agree that structured examinations have better validity and reliability, and less susceptibility to gender or cultural bias, than unstructured examinations. [33]

Establish quality assurance standards

It is highly recommended to implement standards, benchmarks, and performance indicators for effective oral examinations. [34]

 Conclusion



No single assessment fulfills all aspects of assessment, and there is a need for an evaluation system with multiple methods of assessment. Of course, the optimal assessment plan is to use the right technique for the right reasons, at the right time, and with the right group of students, in order to make the right decisions about the right competencies that the student will need to function independently after graduation from dental school. Considering these challenges, current assessment practices would be enhanced if the principles summarized below were implemented:



The content, format, and frequency of assessment, as well as the timing and format of feedback, should follow the specific goals of the dental education program.
The various domains of competence should be assessed in an integrated, coherent, and longitudinal fashion with the use of multiple methods and the provision of frequent and constructive feedback.
Educators should be mindful of the impact of assessment on learning, the potential unintended effects of assessment, the limitations of each method (including cost), and the prevailing culture of the program or institution in which the assessment is occurring.
Multiple methods of assessment implemented longitudinally can provide the data needed to assess trainees' learning needs and to identify and remediate suboptimal performance by clinicians.
The consistency and comparability of assessment methods should be maximized through a quality assurance system.

References

1Epstein RM. Assessment in medical education. N Engl J Med 2007;356:387-96.
2Joughin G. Dimensions of oral assessment. Assess Eval High Educ 1998;23:367-78.
3van der Vleuten C. The assessment of professional competence: Developments, research and practical implications. Adv Health Sci Educ 1996;1:41-67.
4Newble DA. Handbook for Medical Teachers. 2nd ed. Boston, Massachusetts: MTP Press; 1987.
5Rousson V, Gasser T, Seifert B. Assessing intrarater, interrater and test-retest reliability of continuous measurements. Stat Med 2002;21:3431-46.
6Bloom BS. Taxonomy of Educational Objectives, Handbook I: The Cognitive Domain. New York: David McKay Co Inc; 1956.
7Gibbs H, Habeshaw S, Habeshaw T. Interesting Ways to Teach: 53 Interesting Ways to Assess your Students. Bristol: Technical and Educational Services; 1988.
8Harden RM. Developments in outcome based education. Med Teach 2002;24:117-20.
9Rushton P, Eggett D. Comparison of written and oral examinations in a baccalaureate medical-surgical nursing course. J Prof Nurs 2003;19:142-8.
10Jacobson E, Klock PA, Avidan M. Poor inter-rater reliability on mock anesthesia oral examinations. Can J Anesth 2006;53:659-68.
11Memon MA, Joughin GR, Memon B. Oral assessment and postgraduate medical examinations: establishing conditions for validity, reliability and fairness. Adv Health Sci Educ Theory Pract 2010;15:277-89.
12Kehm BM. Oral examinations in German universities. Assess Educ 2001;8:25-31.
13Rowland-Morin PA, Burchard KW, Garb JL, Coe NP. Influence of effective communication by surgery students on their oral examination scores. Acad Med 1991;66:169-71.
14Roberts C, Sarangi S, Southgate L, Wakeford R, Wass V. Oral examinations-equal opportunities, ethnicity, and fairness in the MRCGP. BMJ 2000;320:370-5.
15Esmail A, May C. Oral exams-get them right or don't bother. BMJ 2000;320:375.
16Oakley B, Hencken C. Oral examination Assessment Practices: effectiveness and Change with a First Year Undergraduate Cohort. J Hosp Leis Sport Tour Educ 2005;4:3-14.
17Schiff R. A short case prolonged. Br Med J 2001;323:551.
18Norman GR. Defining competence: a methodological review. In: Neufeld RV, editor. Assessing Clinical Competence. New York: Springer Publishing Co; 1985.
19Logan H, Gardner T. A review of a dental record audit program within a predoctoral dental curriculum. J Dent Educ 1988;52:302-5.
20Ram P, van der Vleuten C, Rethans JJ, Grol R, Aretz K. Assessment of practicing family physicians: comparison of observation in a multiple-station examination using standardized patients with observation of consultations in daily practice. Acad Med 1999;74:62-9.
21Wierinck ER, Puttemans V, Swinnen SP, van Steenberghe D. Expert performance on a virtual reality simulation system. J Dent Educ 2007;71:759-66.
22Holyfield LJ, Bolin KA, Rankin KV, Shulman JP, Jones DL, Eden BD. Use of computer technology to modify objective structured clinical examinations. J Dent Educ 2005;69:1133-6.
23Walsh M, Bailey PH, Koren I. Objective structured clinical evaluation of clinical competence: an integrative review. J Adv Nurs 2009;65:1584-95.
24Jasinevicius TR, Landers M, Nelson S, Urbankova A. An evaluation of two dental simulation systems: virtual reality versus contemporary non-computer-assisted. J Dent Educ 2004;68:1151-62.
25Fincham AG, Shuler CF. The changing face of dental education: the impact of PBL. J Dent Educ 2001;65:406-21.
26Jahangiri L, Mucciolo TW, Choi M, Spielman AI. Assessment of teaching effectiveness in U.S. dental schools and the value of triangulation. J Dent Educ 2008;72:707-18.
27Tinkler P, Jackson C. In the dark? Preparing for the PhD viva. Qual Assur Educ 2002;10:86-97.
28Birley H. The Society of Apothecaries Diploma examination in Genitourinary Medicine: death of the viva voce? Sex Transm Infect 2001;77:223-4.
29Iqbal IZ, Naqvi S, Abeysundara L, Narula AA. The value of Oral Assessments: A Review. Bull R Coll Surg Engl 2010;92:1-6.
30Norman G. Examining the examination: Canadian versus US radiology certification exam. Can Assoc Radiol J 2000;51:208-9.
31Muzzin LJ, Hart L. Oral examinations. In: Neufeld RV, editor. Assessing Clinical Competence. New York: Springer Publishing Co; 1985.
32Pearce G, Lee G. Viva voce (oral examination) as an assessment method insights from marketing students. J Mark Educ 2009;31:120-30.
33Simpson RG, Ballard KD. What is being assessed in the MRCGP oral examination? A qualitative study. Br J Gen Pract 2005;55:430-6.
34Morley L, Leonard D, David M. Quality and equality in British PhD assessment. Qual Assur Educ 2003;11:64-72.