=== Black/white box ===
Software testing can often be divided into white-box and black-box. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box that includes aspects of both boxes may also be applied to software testing methodology.<ref name="LimayeSoftware09">{{Cite book |last=Limaye, M.G. |url=https://books.google.com/books?id=zUm8My7SiakC&pg=PA108 |title=Software Testing |publisher=Tata McGraw-Hill Education |year=2009 |isbn=978-0-07-013990-9 |pages=108–11}}</ref><ref name="SalehSoftware09">{{Cite book |last=Saleh, K.A. |url=https://books.google.com/books?id=N69KPjBEWygC&pg=PA224 |title=Software Engineering |publisher=J. Ross Publishing |year=2009 |isbn=978-1-932159-94-3 |pages=224–41}}</ref>

==== White-box testing ====
{{Main|White-box testing}}
[[File:White Box Testing Approach.png|alt=White Box Testing Diagram|thumb|White box testing diagram]]
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs.<ref name="LimayeSoftware09" /><ref name="SalehSoftware09" /> This is analogous to testing nodes in a circuit, e.g., [[in-circuit test]]ing (ICT).

While white-box testing can be applied at the [[unit testing|unit]], [[integration testing|integration]], and [[system testing|system]] levels of the software testing process, it is usually done at the unit level.<ref name="AmmannIntro16" /> It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

Techniques used in white-box testing include:<ref name="SalehSoftware09" /><ref name="EverettSoftware07">{{Cite book |last1=Everatt, G.D. |title=Software Testing: Testing Across the Entire Software Development Life Cycle |last2=McLeod Jr., R. |publisher=John Wiley & Sons |year=2007 |isbn=978-0-470-14634-7 |pages=99–121 |chapter=Chapter 7: Functional Testing}}</ref>
* [[API testing]] – testing of the application using public and private [[application programming interfaces|APIs]] (application programming interfaces)
* [[Code coverage]] – creating tests to satisfy some criteria of code coverage (for example, the test designer can create tests to cause all statements in the program to be executed at least once)
* [[Fault injection]] methods – intentionally introducing faults to gauge the efficacy of testing strategies
* [[Mutation testing]] methods
* [[Static testing]] methods

Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important [[function points]] have been tested.<ref name="CornettCode96">{{Cite web |last=Cornett |first=Steve |date=c. 1996 |title=Code Coverage Analysis |url=https://www.bullseye.com/coverage.html#intro |access-date=November 21, 2017 |publisher=Bullseye Testing Technology |at=Introduction}}</ref>

Code coverage as a [[software metric]] can be reported as a percentage for:<ref name="LimayeSoftware09" /><ref name="CornettCode96" /><ref name="BlackPragmatic11">{{Cite book |last=Black, R. |url=https://books.google.com/books?id=n-bTHNW97kYC&pg=PA44 |title=Pragmatic Software Testing: Becoming an Effective and Efficient Test Professional |publisher=John Wiley & Sons |year=2011 |isbn=978-1-118-07938-6 |pages=44–6}}</ref>
:* ''Function coverage'', which reports on functions executed
:* ''Statement coverage'', which reports on the number of lines executed to complete the test
:* ''Decision coverage'', which reports on whether both the True and the False branch of a given decision have been executed

100% statement coverage ensures that every statement (in terms of [[control flow]]) is executed at least once, though it does not guarantee that every branch or path is exercised. This is helpful in ensuring correct functionality, but not sufficient, since the same code may process different inputs correctly or incorrectly.<ref>As a simple example, the [[C (programming language)|C]] function <syntaxhighlight lang="C" inline>int f(int x){return x*x-6*x+8;}</syntaxhighlight> consists of only one statement. All tests against a specification <syntaxhighlight lang="C" inline>f(x)>=0</syntaxhighlight> will succeed, except if <syntaxhighlight lang="C" inline>x=3</syntaxhighlight> happens to be chosen.</ref>
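The difference between statement and decision coverage can be shown with a minimal C sketch (not taken from the cited sources; the function and its inputs are hypothetical). A single test input that takes the if-branch already executes every statement, but decision coverage additionally requires an input for which the condition is false:

<syntaxhighlight lang="c">
#include <assert.h>

/* Hypothetical unit under test: clamps a value to a maximum. */
static int clamp_to_max(int value, int max)
{
    if (value > max)
        value = max;      /* the only conditional statement */
    return value;
}

int main(void)
{
    /* This single test executes every statement (100% statement
       coverage) because the if-branch is taken... */
    assert(clamp_to_max(10, 5) == 5);

    /* ...but decision coverage also requires a test in which the
       condition evaluates to false, i.e. the branch is not taken. */
    assert(clamp_to_max(3, 5) == 3);

    return 0;
}
</syntaxhighlight>

In white-box terms, the tester reads the body of the function and chooses the inputs 10 and 3 precisely because they exercise both outcomes of the decision.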
==== Black-box testing ====
{{Main|Black-box testing}}
[[File:Black box diagram.svg|thumb|Black box diagram]]
Black-box testing (also known as functional testing) describes designing test cases without knowledge of the implementation, that is, without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it.<ref name="Patton">{{Cite book |last=Patton |first=Ron |url=https://archive.org/details/softwaretesting0000patt |title=Software Testing |publisher=Sams Publishing |year=2005 |isbn=978-0-672-32798-8 |edition=2nd |location=Indianapolis}}</ref> Black-box testing methods include: [[equivalence partitioning]], [[boundary value analysis]], [[all-pairs testing]], [[state transition table]]s, [[decision table]] testing, [[fuzz testing]], [[model-based testing]], [[use case]] testing, [[exploratory testing]], and specification-based testing.<ref name="LimayeSoftware09" /><ref name="SalehSoftware09" /><ref name="BlackPragmatic11" />

Specification-based testing aims to test the functionality of software according to the applicable requirements.<ref>{{Cite thesis |last=Laycock |first=Gilbert T. |title=The Theory and Practice of Specification Based Software Testing |degree=dissertation |publisher=Department of Computer Science, [[University of Sheffield]] |url=https://www.cs.le.ac.uk/people/glaycock/thesis.pdf |year=1993 |access-date=January 2, 2018}}</ref> This level of testing usually requires thorough [[test case]]s to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do, and use external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be [[functional testing|functional]] or [[non-functional testing|non-functional]], though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.<ref>{{Cite journal |last=Bach |first=James |author-link=James Bach |date=June 1999 |title=Risk and Requirements-Based Testing |url=https://www.satisfice.com/articles/requirements_based_testing.pdf |journal=Computer |volume=32 |issue=6 |pages=113–114 |access-date=August 19, 2008}}</ref>

Black-box testing can be applied at any level of testing, although it is usually not used at the unit level.<ref name="AmmannIntro16" />
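For example (a sketch, not drawn from the cited sources; the month-validation requirement and the function name are hypothetical), equivalence partitioning and boundary value analysis for a requirement such as "a valid month number is an integer from 1 to 12" yield test cases at each boundary plus one representative per partition, with no reference to the implementation:

<syntaxhighlight lang="c">
#include <assert.h>
#include <stdbool.h>

/* Hypothetical unit under test, known to the tester only through its
   specification: "returns true exactly when 1 <= month <= 12". */
bool is_valid_month(int month)
{
    return month >= 1 && month <= 12;
}

int main(void)
{
    /* Boundary value analysis: test at and just outside each boundary. */
    assert(!is_valid_month(0));   /* just below the lower bound */
    assert( is_valid_month(1));   /* lower bound */
    assert( is_valid_month(12));  /* upper bound */
    assert(!is_valid_month(13));  /* just above the upper bound */

    /* Equivalence partitioning: one representative value per partition. */
    assert( is_valid_month(6));   /* valid partition */
    assert(!is_valid_month(-5));  /* invalid partition below the range */
    assert(!is_valid_month(99));  /* invalid partition above the range */

    return 0;
}
</syntaxhighlight>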
'''Component interface testing'''

Component interface testing is a variation of [[black-box testing]], with the focus on the data values beyond just the related actions of a subsystem component.<ref name="MathurFound11-63">{{Cite book |last=Mathur, A.P. |url=https://books.google.com/books?id=hyaQobu44xUC&pg=PA18 |title=Foundations of Software Testing |publisher=Pearson Education India |year=2011 |isbn=978-81-317-5908-0 |page=63}}</ref> The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.<ref name="Clapp">{{Cite book |last=Clapp |first=Judith A. |url=https://books.google.com/books?id=wAq0rnyiGMEC&pg=PA313 |title=Software Quality Control, Error Analysis, and Testing |year=1995 |isbn=978-0-8155-1363-6 |page=313 |publisher=William Andrew |access-date=January 5, 2018}}</ref><ref name="Mathur">{{Cite book |last=Mathur |first=Aditya P. |url=https://books.google.com/books?id=yU-rTcurys8C&pg=PR38 |title=Foundations of Software Testing |publisher=Pearson Education India |year=2007 |isbn=978-81-317-1660-1 |page=18}}</ref> The data being passed can be considered as "message packets", and the range or data types can be checked for data generated from one unit and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.<ref name=Clapp/> Unusual data values in an interface can help explain unexpected performance in the next unit.
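For illustration (the packet layout, the valid range, and the function names are hypothetical, not from the cited sources), an interface check between two units might log every value that crosses the interface with a timestamp and flag out-of-range data before it is forwarded:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <time.h>

/* Hypothetical "message packet" passed from unit A to unit B. */
struct packet {
    int    sensor_id;
    double reading;     /* specified valid range: 0.0 to 100.0 */
};

/* Log every value crossing the interface, with a timestamp, so that
   long runs (days or weeks of traffic) can be analyzed offline. */
static void log_packet(FILE *log, const struct packet *p, int in_range)
{
    fprintf(log, "%ld,%d,%f,%s\n",
            (long)time(NULL), p->sensor_id, p->reading,
            in_range ? "ok" : "OUT_OF_RANGE");
}

/* Interface check inserted between the units under test. */
static int check_and_forward(FILE *log, const struct packet *p)
{
    int in_range = (p->reading >= 0.0 && p->reading <= 100.0);
    log_packet(log, p, in_range);
    return in_range;   /* caller decides whether to pass it on to unit B */
}

int main(void)
{
    FILE *log = fopen("interface_test.log", "a");
    if (!log)
        return 1;

    struct packet normal  = { 7, 42.5 };   /* typical value */
    struct packet extreme = { 7, 250.0 };  /* extreme value under test */

    check_and_forward(log, &normal);
    check_and_forward(log, &extreme);      /* logged and flagged */

    fclose(log);
    return 0;
}
</syntaxhighlight>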
===== Visual testing =====
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.<ref>{{Cite thesis |last=Lönnberg |first=Jan |title=Visual testing of software |date=October 7, 2003 |degree=MSc |publisher=Helsinki University of Technology |url=https://www.cs.hut.fi/~jlonnber/VisualTesting.pdf |access-date=January 13, 2012}}</ref><ref>{{Cite magazine |last=Chima |first=Raspal |title=Visual testing |url=http://www.testmagazine.co.uk/2011/04/visual-testing |magazine=TEST Magazine |archive-url=https://web.archive.org/web/20120724162657/http://www.testmagazine.co.uk/2011/04/visual-testing/ |archive-date=July 24, 2012 |access-date=January 13, 2012}}</ref>

At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-picture webcam and audio commentary from microphones.

Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.

[[Ad hoc testing]] and [[exploratory testing]] are important methodologies for checking software integrity because they require less preparation time to implement, while the important bugs can be found quickly.<ref name="LewisSoftware16">{{Cite book |last=Lewis, W.E. |url=https://books.google.com/books?id=fgaBDd0TfT8C&pg=PA68 |title=Software Testing and Continuous Quality Improvement |publisher=CRC Press |year=2016 |isbn=978-1-4398-3436-7 |edition=3rd |pages=68–73}}</ref> In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of the tester(s) to base testing on documented methods and then improvise variations of those tests can result in a more rigorous examination of defect fixes.<ref name="LewisSoftware16" /> However, unless strict documentation of the procedures is maintained, one of the limits of ad hoc testing is lack of repeatability.<ref name="LewisSoftware16" />

{{further|Graphical user interface testing}}
==== Grey-box testing ====
{{main|Gray box testing}}
Grey-box testing (American spelling: gray-box testing) involves using knowledge of internal data structures and algorithms to design tests, while executing those tests at the user, or black-box, level. The tester will often have access to both "the source code and the executable binary."<ref name="RansomeCore13">{{Cite book |last1=Ransome, J. |url=https://books.google.com/books?id=MX5cAgAAQBAJ&pg=PA140 |title=Core Software Security: Security at the Source |last2=Misra, A. |publisher=CRC Press |year=2013 |isbn=978-1-4665-6095-6 |pages=140–3}}</ref> Grey-box testing may also include [[Reverse coding|reverse engineering]] (using dynamic code analysis) to determine, for instance, boundary values or error messages.<ref name="RansomeCore13" />

Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting [[integration testing]] between two modules of code written by two different developers, where only the interfaces are exposed for the test.

By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a [[database]]. The tester can observe the state of the product being tested after performing certain actions, such as executing [[SQL]] statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This particularly applies to data type handling, [[exception handling]], and so on.<ref name="ref4">{{Cite web |title=SOA Testing Tools for Black, White and Gray Box |url=http://www.crosschecknet.com/soa_testing_black_white_gray_box.php |archive-url=https://web.archive.org/web/20181001010542/http://www.crosschecknet.com:80/soa_testing_black_white_gray_box.php |archive-date=October 1, 2018 |access-date=December 10, 2012 |publisher=Crosscheck Networks |type=white paper}}</ref>

With the concept of grey-box testing, this "arbitrary distinction" between black- and white-box testing has faded somewhat.<ref name="AmmannIntro16">{{Cite book |last1=Ammann, P. |url=https://books.google.com/books?id=58LeDQAAQBAJ&pg=PA26 |title=Introduction to Software Testing |last2=Offutt, J. |publisher=Cambridge University Press |year=2016 |isbn=978-1-316-77312-3 |page=26}}</ref>
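For the database-seeding scenario described above, a rough sketch of a grey-box test might look like the following (a minimal sketch assuming an [[SQLite]]-backed product; the table, the seeded data, and the stand-in for the product's action are hypothetical, not from the cited sources):

<syntaxhighlight lang="c">
#include <assert.h>
#include <sqlite3.h>

/* Placeholder for the action normally performed through the product's
   public interface (hypothetical); simulated here with plain SQL. */
static void exercise_product(sqlite3 *db)
{
    sqlite3_exec(db, "UPDATE accounts SET balance = balance - 10 WHERE id = 1;",
                 NULL, NULL, NULL);
}

int main(void)
{
    sqlite3 *db = NULL;
    assert(sqlite3_open(":memory:", &db) == SQLITE_OK);

    /* Grey-box setup: seed the database to a known state. */
    sqlite3_exec(db,
                 "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);"
                 "INSERT INTO accounts VALUES (1, 100);",
                 NULL, NULL, NULL);

    /* Drive the product from the outside, black-box style. */
    exercise_product(db);

    /* Grey-box verification: query the internal state directly. */
    sqlite3_stmt *stmt = NULL;
    assert(sqlite3_prepare_v2(db, "SELECT balance FROM accounts WHERE id = 1;",
                              -1, &stmt, NULL) == SQLITE_OK);
    assert(sqlite3_step(stmt) == SQLITE_ROW);
    assert(sqlite3_column_int(stmt, 0) == 90);  /* expected change reflected */

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}
</syntaxhighlight>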