==Deriving tests algorithmically==
The effectiveness of model-based testing is primarily due to the potential for automation it offers. If a model is machine-readable and formal to the extent that it has a well-defined behavioral interpretation, test cases can in principle be derived mechanically.

===From finite-state machines===
Often the model is translated to or interpreted as a [[finite-state automaton]] or a [[state transition system]]. This automaton represents the possible configurations of the system under test. To find test cases, the automaton is searched for executable paths; a possible execution path can serve as a test case. This method works if the model is [[Deterministic system (mathematics)|deterministic]] or can be transformed into a deterministic one. Valuable off-nominal test cases may be obtained by leveraging unspecified transitions in these models.

Depending on the complexity of the system under test and the corresponding model, the number of paths can be very large because of the huge number of possible configurations of the system. To find test cases that cover an appropriate, but finite, number of paths, test criteria are needed to guide the selection (a short illustrative sketch is given below). This technique was first proposed by Offutt and Abdurazik in the paper that started model-based testing.<ref>Jeff Offutt and Aynur Abdurazik. Generating Tests from UML Specifications. Second International Conference on the Unified Modeling Language (UML ’99), pages 416–429, Fort Collins, CO, October 1999.</ref> Multiple techniques for test case generation have been developed and are surveyed by Rushby.<ref>John Rushby. Automated Test Generation and Verified Software. Verified Software: Theories, Tools, Experiments: First IFIP TC 2/WG 2.3 Conference, VSTTE 2005, Zurich, Switzerland, October 10–13. pp. 161–172, Springer-Verlag</ref> Test criteria are described in terms of general graphs in the testing textbook.<ref name="Jeff Offutt 2016"/>

===Theorem proving===
[[Theorem proving]] was originally used for the automated proving of logical formulas. For model-based testing approaches, the system is modeled by a set of [[Predicate (logic)|predicates]] specifying the system's behavior.<ref name="otbt">{{Cite journal|first1=Achim D.|last1=Brucker|first2=Burkhart|last2=Wolff|title=On Theorem Prover-based Testing|journal=Formal Aspects of Computing|volume=25|issue=5|pages=683–721|year=2012|doi=10.1007/s00165-012-0222-y|url=http://www.brucker.ch/bibliography/abstract/brucker.ea-theorem-prover-2012.en.html|citeseerx=10.1.1.208.3135|s2cid=5774837}}</ref> To derive test cases, the model is partitioned into [[equivalence classes]] over the valid interpretation of the set of predicates describing the system under test. Each class describes a certain system behavior and can therefore serve as a test case. The simplest partitioning uses the disjunctive normal form approach, in which the logical expressions describing the system's behavior are transformed into [[disjunctive normal form]].
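As an illustration of the path-based derivation described under ''From finite-state machines'', the following minimal sketch (in Python) searches a small, hypothetical vending-machine model for input sequences that cover every transition at least once. The states, inputs, the transition-coverage criterion and the <code>derive_tests</code> helper are assumptions invented for this example and are not taken from any particular tool.

<syntaxhighlight lang="python">
# Minimal sketch only: the model and the coverage criterion are
# hypothetical assumptions made for illustration.
from collections import deque

# Transition relation of the model: (state, input) -> next state.
transitions = {
    ("idle", "insert_coin"): "ready",
    ("ready", "press_button"): "dispensing",
    ("ready", "refund"): "idle",
    ("dispensing", "take_item"): "idle",
}

def derive_tests(initial_state, max_length=4):
    """Breadth-first search for input sequences that together cover
    every transition of the model at least once."""
    tests, covered = [], set()
    queue = deque([(initial_state, [])])
    while queue:
        state, path = queue.popleft()
        if len(path) >= max_length:
            continue
        for (src, inp), dst in transitions.items():
            if src != state:
                continue
            new_path = path + [inp]
            if (src, inp) not in covered:
                covered.add((src, inp))
                tests.append(new_path)  # this input sequence is one test case
            queue.append((dst, new_path))
    return tests

for test in derive_tests("idle"):
    print(" -> ".join(test))
</syntaxhighlight>

Each printed input sequence is an abstract test case; in practice it still has to be translated into a concrete, executable test for the system under test.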
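For the theorem-proving approach, the disjunctive-normal-form partitioning can be sketched as follows. The behavioural predicate is invented for this example, and SymPy is used merely as a convenient propositional-logic engine; the sketch illustrates the idea of equivalence-class partitioning rather than the workings of any particular theorem prover.

<syntaxhighlight lang="python">
# Minimal sketch only: the behaviour predicate is a made-up example.
from sympy import symbols
from sympy.logic.boolalg import Or, to_dnf
from sympy.logic.inference import satisfiable

logged_in, admin, locked = symbols("logged_in admin locked")

# Hypothetical predicate: when a "delete user" action is allowed.
behaviour = logged_in & (admin | ~locked)

# Each disjunct of the DNF is one equivalence class of behaviour.
dnf = to_dnf(behaviour)
clauses = dnf.args if isinstance(dnf, Or) else (dnf,)

for clause in clauses:
    # A satisfying assignment of a clause yields one concrete test case.
    print(clause, "->", satisfiable(clause))
</syntaxhighlight>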
===Constraint logic programming and symbolic execution===
[[Constraint programming]] can be used to select test cases satisfying specific constraints by solving a set of constraints over a set of variables. The system is described by means of constraints.<ref>Jefferson Offutt. Constraint-Based Automatic Test Data Generation. IEEE Transactions on Software Engineering, 17:900–910, 1991</ref> Solving the set of constraints can be done by Boolean solvers (e.g. SAT solvers based on the [[Boolean satisfiability problem]]) or by [[numerical analysis]], such as [[Gaussian elimination]]. A solution found by solving the set of constraint formulas can serve as a test case for the corresponding system.

Constraint programming can be combined with symbolic execution. In this approach a system model is executed symbolically, i.e. collecting data constraints over different control paths, and the constraint programming method is then used to solve the constraints and produce test cases (a small sketch is given below).<ref>Antti Huima. Implementing Conformiq Qtronic. Testing of Software and Communicating Systems, Lecture Notes in Computer Science, 2007, Volume 4581/2007, 1–12, DOI: 10.1007/978-3-540-73066-8_1</ref>

===Model checking===
[[Model checking|Model checkers]] can also be used for test case generation.<ref>Gordon Fraser, Franz Wotawa, and Paul E. Ammann. Testing with model checkers: a survey. Software Testing, Verification and Reliability, 19(3):215–261, 2009. URL: [https://archive.today/20130105114035/http://www3.interscience.wiley.com/journal/121560421/abstract]</ref> Originally, model checking was developed as a technique to check whether a property of a specification is valid in a model. When used for testing, a model of the system under test and a property to test are provided to the model checker. Within the procedure of proving whether this property is valid in the model, the model checker detects witnesses and counterexamples. A witness is a path on which the property is satisfied, whereas a counterexample is a path in the execution of the model on which the property is violated. These paths can again be used as test cases.

===Test case generation by using a Markov chain test model===
[[Markov chains]] are an efficient way to handle model-based testing. Test models realized with Markov chains can be understood as usage models, so this approach is also referred to as usage/statistical model-based testing. Usage models, i.e. Markov chains, are mainly constructed from two artifacts: the [[finite-state machine]] (FSM), which represents all possible usage scenarios of the tested system, and the operational profiles (OP), which qualify the FSM to represent how the system is or will be used statistically. The first (FSM) helps to know what can be or has been tested, and the second (OP) helps to derive operational test cases. Usage/statistical model-based testing starts from the facts that it is not possible to exhaustively test a system and that failures can appear at a very low rate.<ref>Helene Le Guen. Validation d'un logiciel par le test statistique d'usage : de la modelisation de la decision à la livraison, 2005. URL: ftp://ftp.irisa.fr/techreports/theses/2005/leguen.pdf</ref> This approach offers a pragmatic way to statistically derive test cases focused on improving the reliability of the system under test. Usage/statistical model-based testing was recently extended to be applicable to embedded software systems.<ref>{{Cite book | doi=10.1109/ICSTW.2011.11| chapter=Model Based Statistical Testing of Embedded Systems| title=2011 IEEE Fourth International Conference on Software Testing, Verification and Validation Workshops| pages=18–25| year=2011| last1=Böhr| first1=Frank| isbn=978-1-4577-0019-4| s2cid=9582606}}</ref><ref>{{cite book |isbn=978-3843903486|title=Model-Based Statistical Testing of Embedded Real-Time Software with Continuous and Discrete Signals in a Concurrent Environment: The Usage Net Approach |last1=Böhr |first1=Frank |year=2012 |publisher=Verlag Dr. Hut }}</ref>
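Returning to the constraint-based approach, the following sketch solves the path constraints that a symbolic execution of a small, hypothetical function would collect, using the Z3 solver as one possible constraint engine. The function, its branch conditions and the path names are assumptions made for illustration.

<syntaxhighlight lang="python">
# Minimal sketch only: the program under test and its path constraints
# are hypothetical; Z3 is used as one possible constraint solver.
from z3 import And, Int, Not, Solver, sat

x = Int("x")

# Path constraints collected by symbolically executing this imaginary code:
#     if x > 10:
#         if x < 100: ...   # path A
#         else: ...         # path B
#     else: ...             # path C
paths = {
    "A": And(x > 10, x < 100),
    "B": And(x > 10, Not(x < 100)),
    "C": Not(x > 10),
}

for name, constraint in paths.items():
    solver = Solver()
    solver.add(constraint)
    if solver.check() == sat:
        # The model gives a concrete input that drives execution down this path.
        print(f"path {name}: x = {solver.model()[x]}")
</syntaxhighlight>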
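The way counterexamples become test cases in the model-checking approach can be illustrated with a toy explicit-state search. The transition system and the safety property below are invented for this example; real model checkers handle far richer properties (for example temporal-logic formulas) and much larger state spaces.

<syntaxhighlight lang="python">
# Minimal sketch only: a toy stand-in for a model checker, with a
# made-up transition system; the "property" checked here is simply
# that the state "red" is never reached.
from collections import deque

# Hypothetical model: state -> list of (action, next state).
transitions = {
    "green":  [("timeout", "yellow")],
    "yellow": [("timeout", "red"), ("fault", "green")],
    "red":    [("timeout", "green")],
}

def find_counterexample(initial, bad_state):
    """Return a path that violates the property 'bad_state is never
    reached'; such a counterexample can be replayed as a test case."""
    queue, seen = deque([(initial, [])]), {initial}
    while queue:
        state, path = queue.popleft()
        if state == bad_state:
            return path                     # counterexample = test case
        for action, nxt in transitions[state]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None                             # property holds, no counterexample

print(find_counterexample("green", "red"))  # e.g. ['timeout', 'timeout']
</syntaxhighlight>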
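A usage model of the kind described in this section can be sketched as a finite-state machine whose transitions are annotated with operational-profile probabilities and then sampled as a Markov chain. The states, actions and probabilities below are invented for illustration only.

<syntaxhighlight lang="python">
# Minimal sketch only: a hypothetical usage model of a web shop,
# sampled as a Markov chain to obtain statistically representative tests.
import random

# Operational profile: state -> list of (action, next state, probability).
usage_model = {
    "start":   [("browse", "catalog", 0.8), ("search", "results", 0.2)],
    "catalog": [("add_to_cart", "cart", 0.3), ("browse", "catalog", 0.5),
                ("quit", "end", 0.2)],
    "results": [("add_to_cart", "cart", 0.6), ("quit", "end", 0.4)],
    "cart":    [("checkout", "end", 0.7), ("quit", "end", 0.3)],
}

def generate_test_case(start="start", end="end"):
    """Random walk through the usage model; the sequence of actions
    visited on the way is one operational test case."""
    state, actions = start, []
    while state != end:
        choices = usage_model[state]
        weights = [p for _, _, p in choices]
        action, state, _ = random.choices(choices, weights=weights)[0]
        actions.append(action)
    return actions

for _ in range(3):
    print(generate_test_case())
</syntaxhighlight>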