{{Short description|Assessment answered by choosing correct answers from a list of choices}}
{{for multi|the album by Pork|Multiple Choice (album)|the novel by Alejandro Zambra|Multiple Choice (novel)}}
[[File:Exams Start... Now.jpg|thumb|upright=1.35|A multiple choice question, with days of the week as potential answers]]
'''Multiple choice''' ('''MC'''),<ref>{{cite journal | doi=10.1111/j.1467-8535.2010.01058.x | title=Constructive multiple-choice testing system | year=2010 | last1=Park | first1=Jooyong | journal=British Journal of Educational Technology | volume=41 | issue=6 | pages=1054–1064 }}</ref> '''objective response''' or '''MCQ''' (for '''multiple choice question''') is a form of objective [[Educational assessment|assessment]] in which respondents are asked to select the correct answer from the choices offered as a list. The multiple choice format is most frequently used in [[education]]al testing, in [[market research]], and in [[elections]], when a person chooses between multiple candidates, [[Political party|parties]], or policies.

Although [[E. L. Thorndike]] developed an early scientific approach to testing students, it was his assistant [[Benjamin D. Wood]] who developed the multiple-choice test.<ref>{{cite journal |title=Alumni Notes |journal=The Alcalde |date=May 1973 |volume=61 |issue=5 |page=36 |url=https://books.google.com/books?id=6M8DAAAAMBAJ |access-date=29 November 2020 |issn=1535-993X}}</ref> Multiple-choice testing increased in popularity in the mid-20th century when scanners and data-processing machines were developed to check the results. Christopher P. Sole created the first multiple-choice examinations for computers on a Sharp MZ-80 computer in 1982.
== Nomenclature ==
'''Single Best Answer''' ('''SBA''' or '''One Best Answer''') is a written examination form of MCQ used extensively in [[medical education]].<ref name="pmid18585017"/> This form, in which the candidate must choose the single best answer, is distinguished from ''Single Correct Answer'' forms, which can produce confusion where more than one of the possible answers has some validity. The SBA form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior.

== Structure ==
[[File:SAT-Grid-In-Example.svg|thumb|A [[Optical mark recognition#Optical answer sheet|machine-readable bubble sheet]] on a multiple choice test]]
{{See also|Questionnaire construction#Test item}}
Multiple choice items consist of a stem and several alternative answers. The ''stem'' is the opening—a problem to be solved, a question asked, or an incomplete statement to be completed. The ''options'' are the possible answers that the examinee can choose from, with the correct answer called the ''key'' and the incorrect answers called ''distractors''.<ref>{{cite journal |last=Kehoe |first=Jerard |year=1995 |url=http://PAREonline.net/getvn.asp?v=4&n=9 |title=Writing multiple-choice test items |journal=Practical Assessment, Research & Evaluation |volume=4 |issue=9}}</ref> Only one answer may be keyed as correct. This contrasts with [[multiple response]] items, in which more than one answer may be keyed as correct.

Usually, a correct answer earns a set number of points toward the total mark, and an incorrect answer earns nothing. However, tests may also award partial credit for unanswered questions or penalize students for incorrect answers, to discourage guessing. For example, the [[SAT]] Subject Tests removed a quarter point from the test taker's score for an incorrect answer. For advanced items, such as an applied knowledge item, the stem can consist of multiple parts.
The stem can include extended or ancillary material such as a [[scenario|vignette]], a [[case study]], a [[Chart|graph]], a table, or a detailed description with multiple elements. Any material may be included as long as it is necessary to ensure the validity and authenticity of the item. The stem ends with a lead-in question explaining how the respondent must answer. In medical multiple choice items, a lead-in question may ask "What is the most likely diagnosis?" or "What [[pathogen]] is the most likely cause?" in reference to a case study that was previously presented.

The items of a multiple choice test are often colloquially referred to as "questions," but this is a misnomer because many items are not phrased as questions. For example, they can be presented as incomplete statements, analogies, or mathematical equations. Thus, the more general term "item" is a more appropriate label. Items are stored in an [[item bank]].

== Examples ==
Ideally, the multiple choice question (MCQ) should be asked as a "stem", with plausible options, for example:
{{Quote frame|
If <math>a=1</math> and <math>b=2</math>, what is <math>a+b</math>?
{{ordered list|type=upper-alpha | 12 | 3 | 4 | 10 }}
In the equation <math>2x+3=4</math>, solve for ''x''.
{{ordered list|type=upper-alpha | 4 | 10 | 0.5 | 1.5 | 8 }}
The city known as the "IT Capital of India" is
{{ordered list|type=upper-alpha | Bangalore | Mumbai | Karachi | Detroit }}
}}
(The correct answers are B, C and A respectively.)

A well written multiple-choice question avoids obviously wrong or implausible distractors (such as the non-Indian city of Detroit being included in the third example), so that the question makes sense when read with each of the distractors as well as with the correct answer. A more difficult and well-written multiple choice question is as follows:
{{Quote frame|
Consider the following:
{{ordered list|type=upper-roman | An eight-by-eight chessboard.
| An eight-by-eight chessboard with two opposite corners removed. | An eight-by-eight chessboard with all four corners removed. }}
Which of these can be tiled by two-by-one dominoes (with no overlaps or gaps, and every domino contained within the board)?
{{ordered list|type=upper-alpha | I only | II only | I and II only | I and III only | I, II, and III }}
}}

== Advantages ==
There are several advantages to multiple choice tests. If item writers are well trained and items are quality assured, it can be a very effective assessment technique.<ref>[http://www.nbme.org/publications/item-writing-manual-download.html Item Writing Manual] {{Webarchive|url=https://web.archive.org/web/20070929044551/http://www.nbme.org/publications/item-writing-manual-download.html |date=2007-09-29 }} by the National Board of Medical Examiners</ref> If students are instructed on the way in which the item format works and myths surrounding the tests are corrected, they will perform better on the test.<ref>{{cite journal | doi=10.1046/j.1365-2923.2003.01499.x | title=A needs-based study and examination skills course improves students' performance | year=2003 | last1=Beckert | first1=Lutz | last2=Wilkinson | first2=Tim J. | last3=Sainsbury | first3=Richard | journal=Medical Education | volume=37 | issue=5 | pages=424–428 | pmid=12709183 | s2cid=11096249 }}</ref> On many assessments, reliability has been shown to improve with larger numbers of items on a test, and with good sampling and care over case specificity, overall test reliability can be further increased.<ref name="Downing">{{cite journal | doi=10.1111/j.1365-2929.2004.01932.x | title=Reliability: On the reproducibility of assessment data | year=2004 | last1=Downing | first1=Steven M. | journal=Medical Education | volume=38 | issue=9 | pages=1006–1012 | pmid=15327684 | s2cid=1150035 }}</ref> Multiple choice tests often require less time to administer for a given amount of material than would tests requiring written responses.
Multiple choice questions lend themselves to the development of objective assessment items, but without author training, questions can be subjective in nature. Because this style of test does not require a teacher to interpret answers, test-takers are graded purely on their selections, creating a lower likelihood of teacher [[bias]] in the results.<ref>{{cite news|last=DePalma|first=Anthony|title=Revisions Adopted in College Entrance Tests|url=https://www.nytimes.com/1990/11/01/us/revisions-adopted-in-college-entrance-tests.html|access-date=22 August 2012|newspaper=New York Times|date=1 November 1990}}</ref> Factors irrelevant to the assessed material (such as handwriting and clarity of presentation) do not come into play in a multiple-choice assessment, so the candidate is graded purely on their knowledge of the topic. Finally, if test-takers know how to mark answer sheets or online examination tick boxes, their responses can be recorded and scored unambiguously.

Overall, multiple choice tests are the strongest predictors of overall student performance compared with other forms of evaluations, such as in-class participation, case exams, written assignments, and simulation games.<ref>{{cite journal |last1=Bontis |first1=N. |last2=Hardie |first2=T. |last3=Serenko |first3=A. |year=2009 |url=https://www.aserenko.com/papers/IJTCS_Published2.pdf |title=Techniques for assessing skills and knowledge in a business strategy classroom |journal=International Journal of Teaching and Case Studies |volume=2 |issue=2 |pages=162–180|doi=10.1504/IJTCS.2009.031060 }}</ref> Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false questions.
During the [[2000s (decade)|2000s]], however, educators found that SBAs were superior.<ref name="pmid18585017">{{cite journal |pmid=18585017 | doi=10.1016/j.clon.2008.05.010 | volume=20 | title=The introduction of single best answer questions as a test of knowledge in the final examination for the fellowship of the Royal College of Radiologists in Clinical Oncology | year=2008 | journal=Clin Oncol (R Coll Radiol) | pages=571–6 | last1 = Tan | first1 = LT | last2 = McAleer | first2 = JJ| issue=8 }}</ref>

== Disadvantages ==
The most serious disadvantage is the limited types of knowledge that can be assessed by multiple choice tests. Multiple choice tests are best adapted for testing well-defined or lower-order skills; problem-solving and higher-order reasoning skills are better assessed through short-answer and essay tests.<ref name="FreshMCQs">{{Cite web |title=Multiple Choice Questions (MCQs) |url=https://freshmcqs.com/mcqs/ |access-date=16 May 2025 |website=Fresh MCQs }}</ref> However, multiple choice tests are often chosen not because of the type of knowledge being assessed but because they are more affordable for testing a large number of students. This is especially true in the United States, where multiple choice tests are the preferred form of high-stakes testing, and in India, where the number of test-takers is very large.

Another disadvantage of multiple choice tests is possible ambiguity in the examinee's interpretation of the item. Failing to interpret information as the test maker intended can result in an "incorrect" response, even if the taker's response is potentially valid. The term "multiple guess" has been used to describe this scenario because test-takers may attempt to guess rather than determine the correct answer. A [[free response]] test allows the test taker to make an argument for their viewpoint and potentially receive credit.
In addition, even if students have some knowledge of a question, they receive no credit for that knowledge if they select the wrong answer and the item is scored dichotomously. Free response questions, by contrast, may allow an examinee to demonstrate partial understanding of the subject and receive partial credit. Additionally, if more questions on a particular topic are asked to create a larger sample, the test-taker's level of knowledge of that topic will be reflected more accurately in the number of correct answers and in the final result.

Another disadvantage of multiple choice examinations is that a student who is incapable of answering a particular question can simply select a random answer and still have a chance of receiving a mark for it: a random guess on a four-choice question has a 25 percent chance of being correct. It is common practice for students with no time left to give all remaining questions random answers in the hope that they will get at least some of them right. Many exams, such as the [[Australian Mathematics Competition]] and the [[SAT]], have systems in place to negate this, in this case by making it no more beneficial in expectation to choose a random answer than to give none. Another system of negating the effects of random selection is formula scoring, in which a score is proportionally reduced based on the number of incorrect responses and the number of possible choices.
In this method, the score is reduced by ''w''/(''c'' – 1), where ''w'' is the number of wrong responses on the test and ''c'' is the average number of possible choices for all questions on the test.<ref>{{cite web |url=http://www.ncme.org/pubs/items/ITEMS_Mod_4.pdf |title=Formula Scoring of Multiple-Choice Tests (Correction for Guessing) |access-date=2011-05-20 |url-status=dead |archive-url=https://web.archive.org/web/20110721041317/http://www.ncme.org/pubs/items/ITEMS_Mod_4.pdf |archive-date=2011-07-21 }}</ref> All exams scored with the three-parameter model of [[item response theory]] also account for guessing. This is usually not a serious issue, since the odds of a student receiving significant marks by guessing are very low when four or more options are available.

Ambiguously phrased questions may also confuse test-takers. It is generally accepted that multiple choice questions allow for only one answer, where the one answer may encapsulate a collection of previous options. However, some test creators are unaware of this convention and may expect the student to select multiple answers without being given explicit permission, or without providing an option that encapsulates the others.
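The formula-scoring rule above, and the low odds of guessing well, can be sketched in a short Python example (the function names <code>formula_score</code> and <code>prob_at_least</code> are illustrative, not part of any standard scoring library):

```python
from math import comb

def formula_score(num_right, num_wrong, num_choices):
    """Formula scoring: each wrong answer costs 1/(c - 1) points, so the
    expected score of pure random guessing is zero."""
    return num_right - num_wrong / (num_choices - 1)

def prob_at_least(k, n, num_choices):
    """Probability of answering at least k of n items correctly by random
    guessing, with each item offering num_choices equally likely options."""
    p = 1.0 / num_choices
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A pure guesser on four-choice items averages one right for every three
# wrong, which formula scoring nets to zero: 1 - 3/(4 - 1) = 0.
print(formula_score(1, 3, 4))  # 0.0
# Chance of guessing at least half of a 20-item, four-choice test correctly
# (a small probability, illustrating why guessing rarely yields high marks):
print(prob_at_least(10, 20, 4))
```

Under this rule a blank answer scores zero, so a student gains nothing in expectation by guessing blindly, which is the intent of the penalty described above.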
Critics such as the philosopher and education proponent [[Jacques Derrida]] have said that while the demand for dispensing and checking basic knowledge is valid, there are other means to respond to this need than resorting to [[crib sheet]]s.<ref>[[Jacques Derrida]] (1990) pp. 334–335 ''Once Again from the Top: Of the Right to Philosophy'', interview with [[Robert Maggiori]] for ''[[Libération]]'', November 15, 1990, republished in ''Points'' (1995).</ref> Despite all the shortcomings, the format remains popular because MCQs are easy to create, score and analyse.<ref>{{Cite web|url=https://www.facultyfocus.com/articles/teaching-professor-blog/multiple-choice-tests-pros-cons/|title=Multiple-Choice Tests: Revisiting the Pros and Cons|date=2018-02-21|website=Faculty Focus {{!}} Higher Ed Teaching & Learning|language=en-US|access-date=2019-03-22}}</ref>

== Changing answers ==
The belief that students should trust their first instinct and stay with their initial answer on a multiple choice test is a myth. Researchers have found that although some people believe that changing answers is bad, changing an answer generally results in a higher test score. The data across twenty separate studies indicate that the percentage of "right to wrong" changes is 20.2%, whereas the percentage of "wrong to right" changes is 57.8%, nearly triple.<ref>{{cite journal | doi=10.1177/009862838401100303 | title=Staying with Initial Answers on Objective Tests: Is it a Myth? | year=1984 | last1=Benjamin | first1=Ludy T. | last2=Cavell | first2=Timothy A. | last3=Shallenberger | first3=William R. | journal=Teaching of Psychology | volume=11 | issue=3 | pages=133–141 | s2cid=33889890 }}</ref> Changing from "right to wrong" may be more painful and memorable ([[Von Restorff effect]]), but it is probably a good idea to change an answer when additional reflection indicates that a better choice could be made.
In fact, a person's initial attraction to a particular answer choice could well derive from the surface plausibility that the test writer has intentionally built into a distractor (or incorrect answer choice). Test item writers are instructed to make their distractors plausible yet clearly incorrect. A test taker's first-instinct attraction to a distractor is thus often a reaction that should be revisited in light of a careful consideration of each of the answer choices. Some test takers, for some examination subjects, might have accurate first instincts about a particular test item, but that does not mean that all test takers should trust their first instinct.

== Notable multiple-choice examinations ==
{{Div col}}
*[[ACT (examination)|ACT]]
*[[AIEEE]] in India
*[[Advanced Placement Program|AP]]
*[[Armed Services Vocational Aptitude Battery|ASVAB]]
*[[American Mathematics Competitions|AMC]]
*[[Australian Mathematics Competition]]
*[[Chartered Financial Analyst|CFA]]
*[[CISSP]]
*[[CLEP]]
*[[COMLEX]]
*[[Common Law Admission Test|CLAT]]
*[[Hong Kong Diploma of Secondary Education]]
*[[Exame Nacional do Ensino Médio|ENEM]]
*[[United States National Physics Olympiad#F = ma exam|F = ma]], leading up to the [[United States Physics Olympiad]]
*[[Fundamentals of Engineering exam|FE]]
*[[GCE Ordinary Level]]
*[[General Educational Development|GED]]
*[[Graduate Record Examination|GRE]]
*[[Graduate Aptitude Test in Engineering|GATE]]
*[[IB Diploma Programme]] science subject exams
*[[IIT-JEE]]
*[[National Exam (Indonesia)|Indonesian National Exam]]
*[[Law School Admission Test|LSAT]]
*[[MCAT]]
*[[Membership of the Royal Colleges of Physicians of the United Kingdom|MRCP(UK)]]
*[[Multistate Bar Examination]]
*[[NCLEX]]
*[[Professional and Linguistic Assessments Board|PLAB]] for non-EEA medical graduates to practice in the UK
*[[PSAT/NMSQT|PSAT]]
*[[SAT]]
*[[Test of English as a Foreign Language]]
*[[TOEIC]]
*[[USMLE]]
*[[NTSE]]
*[[National Eligibility cum Entrance Test (Undergraduate)|NEET(UG)]] in India
*[[UGC NET]] in India
*[[Civil Services Examination|UPSC CSE]] Preliminary in India
*[[Unified Tertiary Matriculation Examination|UTME]] University Admission Exam in Nigeria
{{Div col end}}

== See also ==
* [[Concept inventory]]
* [[Extended matching items]]
* [[Objective test]]
* [[Test (student assessment)]]
* [[Closed-ended question]]

== References ==
{{Reflist}}

{{DEFAULTSORT:Multiple Choice}}
[[Category:Questionnaire construction]]
[[Category:Standardized tests]]