== Quality ==
{{main|Software quality}}

=== Software verification and validation ===
{{main|Verification and validation (software)|Software quality control}}
Software testing is used in association with [[Verification and validation (software)|verification and validation]]:<ref name="tran">{{Cite web |last=Tran |first=Eushiuan |year=1999 |title=Verification/Validation/Certification |url=https://www.ece.cmu.edu/~koopman/des_s99/verification/index.html |access-date=August 13, 2008 |publisher=Carnegie Mellon University |type=coursework}}</ref>
* Verification: Have we built the software right? (i.e., does it implement the requirements?)
* Validation: Have we built the right software? (i.e., do the deliverables satisfy the customer?)

The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined with contradictory meanings. According to the ''[[IEEE Standards Association|IEEE Standard]] Glossary of Software Engineering Terminology'':<ref name=IEEEglossary />{{rp|80–81}}
: Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
: Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
And, according to the ISO 9000 standard:
: Verification is confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
: Validation is confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

The contradiction arises because the two standards use the concepts of ''requirements'' and ''specified requirements'' with different meanings. In the IEEE standards, the specified requirements mentioned in the definition of validation are the set of problems, needs, and wants of the stakeholders that the software must solve and satisfy; such requirements are documented in a Software Requirements Specification (SRS). The products mentioned in the definition of verification are the output artifacts of every phase of the software development process. These products are, in fact, specifications, such as an Architectural Design Specification or a Detailed Design Specification. The SRS is also a specification, but it cannot be verified (at least not in the sense used here; more on this subject below).

For ISO 9000, however, the specified requirements are the set of specifications, as just mentioned above, that must be verified. A specification, as previously explained, is the product of a software development process phase that receives another specification as input. A specification is verified successfully when it correctly implements its input specification. All the specifications can be verified except the SRS, because it is the first one (it can be validated, though). Examples: the Design Specification must implement the SRS, and the Construction phase artifacts must implement the Design Specification. So, when these words are defined in common terms, the apparent contradiction disappears.

Both the SRS and the software must be validated. The SRS can be validated statically by consulting with the stakeholders. Nevertheless, running some partial implementation of the software, or a prototype of any kind (dynamic testing), and obtaining positive feedback from the stakeholders can further increase the certainty that the SRS is correctly formulated. On the other hand, the software, as a final and running product (not its artifacts and documents, including the source code), must be validated dynamically with the stakeholders by executing the software and having them try it.

Some might argue that, for the SRS, the input is the words of the stakeholders and, therefore, SRS validation is the same as SRS verification. Thinking this way is not advisable, as it only causes more confusion. It is better to think of verification as a process involving a formal and technical input document.

=== Software quality assurance ===
In some organizations, software testing is part of a [[software quality assurance]] (SQA) process.<ref name="Kaner2" />{{rp|347}} In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts, such as documentation, code, and systems. They examine and change the [[software engineering]] process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.{{citation needed|reason=archive July 2012|date=December 2017}}

Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA ([[quality assurance]]) is the implementation of policies and procedures intended to prevent defects from reaching customers.

=== Measures ===
Quality measures include such topics as [[correctness (computer science)|correctness]], completeness, [[computer security audit|security]], and [[ISO/IEC 9126]] requirements such as capability, [[Reliability engineering|reliability]], [[algorithmic efficiency|efficiency]], [[Porting|portability]], [[maintainability]], compatibility, and [[usability]]. There are a number of frequently used [[software metric]]s, or measures, which are used to assist in determining the state of the software or the adequacy of the testing.

=== Artifacts ===
A software testing process can produce several [[Artifact (software development)|artifacts]]. The actual artifacts produced depend on the software development model used and on stakeholder and organisational needs.

==== Test plan ====
{{Main|Test plan}}
A [[test plan]] is a document detailing the approach that will be taken for intended test activities. The plan may include aspects such as objectives, scope, processes and procedures, personnel requirements, and contingency plans.<ref name="LewisSoftware16-2">{{Cite book |last=Lewis, W.E. |url=https://books.google.com/books?id=fgaBDd0TfT8C&pg=PA92 |title=Software Testing and Continuous Quality Improvement |publisher=CRC Press |year=2016 |isbn=978-1-4398-3436-7 |edition=3rd |pages=92–6}}</ref> The test plan could come in the form of a single plan that includes all test types (like an acceptance or system test plan) and planning considerations, or it may be issued as a master test plan that provides an overview of more than one detailed test plan (a plan of a plan).<ref name="LewisSoftware16-2" /> A test plan can be, in some cases, part of a wider "[[test strategy]]" that documents overall testing approaches, and that may itself be a master test plan or even a separate artifact.

==== Traceability matrix ====
{{Excerpt|Traceability matrix|only=paragraphs|paragraph=1}}

==== Test case ====
{{Main|Test case}}
A [[test case]] normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result.<ref>{{Cite book |last=IEEE |title=IEEE standard for software test documentation |title-link=IEEE 829 |publisher=IEEE |year=1998 |isbn=978-0-7381-1443-9 |location=New York}}</ref> This can be as terse as "for condition x your derived result is y", although normally test cases describe the input scenario and the expected results in more detail. A test case can occasionally be a series of steps (though often the steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) with one expected result or expected outcome. The optional fields are a test case ID, test step or order-of-execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results; these past results would usually be stored in a separate table.
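For illustration, the sketch below expresses a minimal test case in code using Python's built-in <code>unittest</code> module. The function under test, the test case ID, and the requirement reference are hypothetical examples, not taken from any standard or project.

<syntaxhighlight lang="python">
import unittest


def discounted_price(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)


class TestDiscountedPrice(unittest.TestCase):
    """Test case TC-001 (hypothetical ID), tracing to requirement REQ-4.2 (hypothetical)."""

    def test_ten_percent_discount(self):
        # Precondition: none. Input: price 100.00, discount 10 percent.
        actual = discounted_price(100.00, 10)
        # Expected result: 90.00; the framework records the actual result
        # and reports any mismatch.
        self.assertEqual(actual, 90.00)


if __name__ == "__main__":
    unittest.main()
</syntaxhighlight>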
==== Test script ====
A [[test script]] is a procedure, or programming code, that replicates user actions. Initially, the term was derived from the work products created by automated regression test tools. A test case serves as the baseline from which test scripts are created, using a tool or a program.
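As a rough illustration, the following script automates one such sequence of user actions with the Selenium WebDriver library (assumed to be installed); the URL, element IDs, and expected page title are hypothetical.

<syntaxhighlight lang="python">
# Sketch of a login test script: replays a user's actions in a browser.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Replicate the user's actions: open the page, fill in the form, submit.
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()
    # Compare the observed outcome against the expected result.
    assert "Dashboard" in driver.title, f"unexpected page title: {driver.title!r}"
finally:
    driver.quit()
</syntaxhighlight>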
==== Test suite ====
{{Excerpt|Test suite|paragraph=1}}

==== Test fixture or test data ====
{{Main|Test fixture}}
In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project. There are techniques for generating test data; a sketch that combines test data with a test run appears after the ''Test run'' subsection below.

==== Test harness ====
{{Main|Test harness}}
The software, tools, samples of data input and output, and configurations are all referred to collectively as a [[test harness]].

==== Test run ====
A test run is a collection of test cases or test suites that the user executes, comparing the expected results with the actual results. Once complete, a report of all executed tests may be generated.
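The sketch below ties several of these artifacts together: a table of test data drives a small test run whose expected results are compared with the actual results. It assumes the pytest framework; the function under test and all data values are hypothetical, and in practice the data records would typically be loaded from an external file.

<syntaxhighlight lang="python">
import pytest


def discounted_price(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)


# Test data records: (input price, input percent, expected result).
# These stand in for externally stored test data (e.g., a CSV file).
TEST_DATA = [
    (100.00, 10, 90.00),
    (80.00, 25, 60.00),
    (49.99, 0, 49.99),
]


@pytest.mark.parametrize("price, percent, expected", TEST_DATA)
def test_discounted_price(price, percent, expected):
    # One executed test per data record; pytest's report summarizes the
    # run, comparing expected with actual results.
    assert discounted_price(price, percent) == expected
</syntaxhighlight>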
=== Certifications ===
{{further|Certification#In software testing}}
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. A few practitioners argue that the testing field is not ready for certification, as mentioned in the [[#Controversy|controversy]] section.