==Number of participants==
In the early 1990s, [[Jakob Nielsen (usability consultant)|Jakob Nielsen]], at that time a researcher at [[Sun Microsystems]], popularized the concept of using numerous small usability tests, typically with only five participants each, at various stages of the development process. His argument is that, once it is found that two or three people are totally confused by the home page, little is gained by watching more people suffer through the same flawed design. "Elaborate usability tests are a waste of resources. The best results come from testing no more than five users and running as many small tests as you can afford."<ref name="useit">{{cite web|url=http://www.useit.com/alertbox/20000319.html|title=Usability Testing with 5 Users (Jakob Nielsen's Alertbox)|publisher=useit.com|date=2000-03-13}}; references {{cite book |doi=10.1145/169059.169166 |chapter=A mathematical model of the finding of usability problems |title=Proceedings of the SIGCHI conference on Human factors in computing systems |year=1993 |last1=Nielsen |first1=Jakob |last2=Landauer |first2=Thomas K. |pages=206–213 |isbn=978-0-89791-575-5 |s2cid=207177537 }}</ref>

The claim that "five users is enough" was later described by a mathematical model<ref>{{cite journal |last=Virzi |first=R. A. |title=Refining the Test Phase of Usability Evaluation: How Many Subjects is Enough? |journal=Human Factors |year=1992 |volume=34 |issue=4 |pages=457–468 |doi=10.1177/001872089203400407 |s2cid=59748299 }}</ref> which gives the proportion of uncovered problems ''U'' as <math>U = 1-(1-p)^n</math>, where ''p'' is the probability of one subject identifying a specific problem and ''n'' is the number of subjects (or test sessions). Plotted against ''n'', the model approaches the number of actually existing problems asymptotically (see figure below).

[[Image:Virzis Formula.PNG]]

In later research, Nielsen's claim has been questioned using both [[empirical]] evidence<ref>{{cite conference |last1=Spool |first1=Jared |last2=Schroeder |first2=Will |title=Testing web sites: five users is nowhere near enough |conference=CHI '01 extended abstracts on Human factors in computing systems |date=2001 |page=285 |doi=10.1145/634067.634236 |s2cid=8038786 }}</ref> and more advanced [[mathematical model]]s.<ref>{{cite journal |last=Caulton |first=D. A. |title=Relaxing the homogeneity assumption in usability testing |journal=Behaviour & Information Technology |year=2001 |volume=20 |issue=1 |pages=1–7 |doi=10.1080/01449290010020648 |s2cid=62751921 }}</ref> Two key challenges to this assertion are:
# Since usability is related to the specific set of users, such a small sample is unlikely to be representative of the total population, so the data from it is more likely to reflect the sample group than the population it is meant to represent.
# Not every usability problem is equally easy to detect: hard-to-detect problems slow down the overall process.
Under these circumstances, progress is much slower than the Nielsen/Landauer formula predicts.<ref>{{cite journal |last1=Schmettow |first1=Martin |title=Heterogeneity in the Usability Evaluation Process |series=Electronic Workshops in Computing |date=1 September 2008 |doi=10.14236/ewic/HCI2008.9 |doi-access=free }}</ref>

Nielsen does not advocate stopping after a single test with five users; his point is that testing with five users, fixing the problems they uncover, and then testing the revised site with five different users is a better use of limited resources than running a single usability test with ten users. In practice, tests are run once or twice per week during the entire development cycle, using three to five test subjects per round, with the results delivered to the designers within 24 hours. The number of users tested over the course of a project can thus easily reach 50 to 100 people. Research shows that user testing conducted by organisations most commonly involves the recruitment of 5–10 participants.<ref>{{Cite web|title=Results of the 2020 User Testing Industry Report|url=https://www.userfountain.com/results-of-the-2020-user-testing-industry-report|access-date=2020-06-04|website=www.userfountain.com|language=en}}</ref>

In the early stages, when users are most likely to immediately encounter problems that stop them in their tracks, almost anyone of normal intelligence can be used as a test subject. In stage two, testers recruit test subjects across a broad spectrum of abilities. For example, in one study, experienced users showed no problem using any design, from the first to the last, while naive users and self-identified power users both failed repeatedly.<ref>{{cite web|url=http://www.asktog.com/columns/000maxscrns.html|author=Bruce Tognazzini|title=Maximizing Windows}}</ref> Later on, as the design smooths out, users should be recruited from the target population.

When the method is applied to a sufficient number of people over the course of a project, the objections raised above are addressed: the sample size ceases to be small, and usability problems that arise only for occasional users are found. The value of the method lies in the fact that specific design problems, once encountered, are never seen again because they are immediately eliminated, while the parts that appear successful are tested over and over. Although the initial problems in the design may be tested by only five users, when the method is properly applied, the parts of the design that worked in that initial test will go on to be tested by 50 to 100 people.
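To make the asymptotic behaviour of the Nielsen/Landauer model concrete, the following minimal sketch computes <math>U = 1-(1-p)^n</math> for increasing numbers of test users. It is only an illustration, not part of any cited study; the value ''p'' = 0.31 is assumed here, being the average problem-detection rate often quoted from Nielsen and Landauer's data.

<syntaxhighlight lang="python">
# Sketch of the Nielsen/Landauer model U = 1 - (1 - p)^n, where p is the
# probability that a single test user uncovers a given problem and n is the
# number of test users (or test sessions).

def proportion_uncovered(p: float, n: int) -> float:
    """Expected proportion of usability problems found by n test users."""
    return 1.0 - (1.0 - p) ** n

# Illustrative value only: average detection rate often quoted from
# Nielsen and Landauer's data; real projects vary widely.
p = 0.31

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {proportion_uncovered(p, n):.0%} of problems found")
</syntaxhighlight>

With this assumed value of ''p'', five users already uncover roughly 85% of the problems, while doubling the sample to ten adds comparatively little; this diminishing-returns curve underlies the argument for running many small, iterated tests rather than one large one.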