==Methods==
Setting up a usability test involves carefully creating a [[scenario]], or a realistic situation, wherein the person performs a list of tasks using the product being [[Dynamic testing|tested]] while observers watch and take notes ([[Software verification#Dynamic verification .28Test.2C experimentation.29|dynamic verification]]). Several other [[Static testing|test]] instruments such as scripted instructions, [[paper prototypes]], and pre- and post-test questionnaires are also used to gather feedback on the product being tested ([[Software verification#Static verification .28Analysis.29|static verification]]). For example, to test the attachment function of an [[e-mail]] program, a scenario would describe a situation where a person needs to send an e-mail attachment, and would ask them to undertake this task. The aim is to observe how people behave in a realistic setting, so that developers can identify the problem areas and fix them. Techniques popularly used to gather data during a usability test include the [[think aloud protocol]], co-discovery learning and [[eye tracking]].
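As a simple illustration, a scripted session of this kind can be represented as a small data structure recording the scenario, the task list and the observer's notes; the scenario text, tasks and field names below are hypothetical examples, not drawn from any particular study.

<syntaxhighlight lang="python">
# Purely illustrative sketch: one way to script a usability-test session.
# The scenario text, tasks, and field names are hypothetical examples.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    description: str            # what the participant is asked to do
    completed: bool = False     # whether the participant finished the task
    observer_notes: str = ""    # e.g. think-aloud remarks, points of confusion

@dataclass
class TestSession:
    scenario: str               # the realistic situation framing the tasks
    tasks: List[Task] = field(default_factory=list)

session = TestSession(
    scenario="You need to send last month's report to a colleague by e-mail.",
    tasks=[
        Task("Attach the file 'report.pdf' to a new message"),
        Task("Send the message and confirm it appears in the sent folder"),
    ],
)
</syntaxhighlight>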
===Hallway testing===
'''Hallway testing''', also known as '''guerrilla usability''', is a quick and cheap method of usability testing in which people, such as those passing by in the hallway, are asked to try using the product or service. This can help designers identify "brick walls", problems so serious that users simply cannot advance, in the early stages of a new design. Anyone except the project's designers and engineers can take part (they tend to act as "expert reviewers" because they are too close to the project). This type of testing is an example of [[convenience sampling]], and thus the results are potentially biased.

===Remote usability testing===
In a scenario where usability evaluators, developers and prospective users are located in different countries and time zones, conducting a traditional lab usability evaluation creates challenges from both the cost and logistical perspectives. These concerns led to research on remote usability evaluation, with the user and the evaluators separated over space and time. Remote testing, which facilitates evaluations being done in the context of the user's other tasks and technology, can be either synchronous or asynchronous. The former involves real-time one-on-one communication between the evaluator and the user, while the latter involves the evaluator and user working separately.<ref>{{cite book |doi=10.1145/1240624.1240838 |chapter=What happened to remote usability testing? |title=Proceedings of the SIGCHI Conference on Human Factors in Computing Systems |year=2007 |last1=Andreasen |first1=Morten Sieker |last2=Nielsen |first2=Henrik Villemann |last3=Schrøder |first3=Simon Ormholt |last4=Stage |first4=Jan |isbn=978-1-59593-593-9 |page=1405 |s2cid=12388042 }}</ref> Numerous tools are available to address the needs of both these approaches.

Synchronous usability testing methodologies involve video conferencing or employ remote application sharing tools such as WebEx. WebEx and GoToMeeting are the most commonly used technologies to conduct a synchronous remote usability test.<ref>{{cite web|url=http://www.boxesandarrows.com/view/remote_online_usability_testing_why_how_and_when_to_use_it|title=Remote Online Usability Testing: Why, How, and When to Use It|author=Dabney Gough|author2=Holly Phillips|date=2003-06-09|archive-url=https://web.archive.org/web/20051215231619/http://www.boxesandarrows.com/view/remote_online_usability_testing_why_how_and_when_to_use_it|archive-date=December 15, 2005}}</ref> However, synchronous remote testing may lack the immediacy and sense of "presence" desired to support a collaborative testing process. Moreover, managing interpersonal dynamics across cultural and linguistic barriers may require approaches sensitive to the cultures involved. Other disadvantages include reduced control over the testing environment and the distractions and interruptions experienced by the participants in their own environment.<ref name="Dray & Siegel 2004">{{cite journal |last1=Dray |first1=Susan |last2=Siegel |first2=David |title=Remote possibilities?: international usability testing at a distance |journal=Interactions |date=March 2004 |volume=11 |issue=2 |pages=10–17 |doi=10.1145/971258.971264 |s2cid=682010 }}</ref> One of the newer methods developed for conducting a synchronous remote usability test is the use of virtual worlds.<ref>{{cite book |doi=10.1145/1978942.1979267 |chapter=Synchronous remote usability testing |title=Proceedings of the SIGCHI Conference on Human Factors in Computing Systems |year=2011 |last1=Chalil Madathil |first1=Kapil |last2=Greenstein |first2=Joel S. |pages=2225–2234 |isbn=978-1-4503-0228-9 |s2cid=14077658 }}</ref>

Asynchronous methodologies include automatic collection of the user's click streams, user logs of critical incidents that occur while interacting with the application, and subjective feedback on the interface from users.<ref name="Dray & Siegel 2004"/> Similar to an in-lab study, an asynchronous remote usability test is task-based, and the platform allows researchers to capture clicks and task times. For many large companies, this allows researchers to better understand visitors' intent when visiting a website or mobile site. Additionally, this style of user testing provides an opportunity to segment feedback by demographic, attitudinal and behavioral type. The tests are carried out in the user's own environment rather than in a lab, helping to further simulate real-life conditions. This approach also provides a vehicle to solicit feedback from users in remote areas quickly and with lower organizational overhead. In recent years, conducting usability testing asynchronously has become more prevalent, allowing testers to provide feedback in their free time and from the comfort of their own home.
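As a rough sketch of the kind of analysis an asynchronous study permits, the example below derives task completion times from timestamped click events; the log format and field names are assumptions made for illustration, not any particular platform's API.

<syntaxhighlight lang="python">
# Illustrative sketch: derive task completion times from an asynchronous
# usability test's click log. The log format below is a hypothetical example.
from datetime import datetime

# Each record: (participant id, task id, event, ISO 8601 timestamp)
click_log = [
    ("p01", "attach_file", "task_start", "2024-05-01T10:00:00"),
    ("p01", "attach_file", "click",      "2024-05-01T10:00:12"),
    ("p01", "attach_file", "task_end",   "2024-05-01T10:01:30"),
]

def task_times(log):
    """Return seconds between task_start and task_end per (participant, task)."""
    starts, times = {}, {}
    for participant, task, event, ts in log:
        t = datetime.fromisoformat(ts)
        if event == "task_start":
            starts[(participant, task)] = t
        elif event == "task_end":
            times[(participant, task)] = (t - starts[(participant, task)]).total_seconds()
    return times

print(task_times(click_log))  # {('p01', 'attach_file'): 90.0}
</syntaxhighlight>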
===Expert review===
Expert review is another general method of usability testing. As the name suggests, this method relies on bringing in experts with experience in the field (possibly from companies that specialize in usability testing) to evaluate the usability of a product. A [[heuristic evaluation]] or '''usability audit''' is an evaluation of an interface by one or more human factors experts. Evaluators measure the usability, efficiency, and effectiveness of the interface based on usability principles, such as the 10 usability heuristics originally defined by [[Jakob Nielsen (usability consultant)|Jakob Nielsen]] in 1994.<ref>{{cite web|title=Heuristic Evaluation|url=http://www.usabilityfirst.com/usability-methods/heuristic-evaluation/|publisher=Usability First|access-date=April 9, 2013}}</ref>

Nielsen's usability heuristics, which have continued to evolve in response to user research and new devices, include:
* Visibility of system status
* Match between system and the real world
* User control and freedom
* Consistency and standards
* Error prevention
* Recognition rather than recall
* Flexibility and efficiency of use
* Aesthetic and minimalist design
* Help users recognize, diagnose, and recover from errors
* Help and documentation

===Automated expert review===
Similar to expert reviews, '''automated expert reviews''' provide usability testing through the use of programs that are given rules for good design and heuristics. Though an automated review might not provide as much detail and insight as a review by people, it can be completed more quickly and consistently. The idea of creating surrogate users for usability testing is an ambitious direction for the artificial intelligence community.

===A/B testing===
{{Main|A/B testing}}
In web development and marketing, A/B testing or split testing is an experimental approach to web design (especially user experience design), which aims to identify changes to web pages that increase or maximize an outcome of interest (e.g., the click-through rate for a banner advertisement). As the name implies, two versions (A and B) are compared, which are identical except for one variation that might affect a user's behavior. Version A might be the one currently in use, while version B is modified in some respect. For instance, on an e-commerce website the purchase funnel is typically a good candidate for A/B testing, as even marginal improvements in drop-off rates can represent a significant gain in sales. Significant improvements can be seen through testing elements such as copy text, layouts, images and colors. Multivariate testing or bucket testing is similar to A/B testing but tests more than two versions at the same time.
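For illustration, whether version B's click-through rate differs meaningfully from version A's can be assessed with a two-proportion z-test; the visitor and click counts below are made-up example figures, not real data.

<syntaxhighlight lang="python">
# Illustrative sketch: compare click-through rates of versions A and B
# with a two-proportion z-test. The counts below are made-up example data.
from math import sqrt
from statistics import NormalDist

clicks_a, visitors_a = 200, 10_000   # version A (current design)
clicks_b, visitors_b = 240, 10_000   # version B (one element changed)

p_a, p_b = clicks_a / visitors_a, clicks_b / visitors_b
p_pool = (clicks_a + clicks_b) / (visitors_a + visitors_b)   # pooled rate
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))                 # two-sided test

print(f"CTR A = {p_a:.2%}, CTR B = {p_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")
</syntaxhighlight>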