== Controversy ==
Some of the major [[software testing controversies]] include:

; Agile vs. traditional : Should testers learn to work under conditions of uncertainty and constant change, or should they aim at [[Capability Maturity Model|process "maturity"]]? The [[agile testing]] movement has grown in popularity since the early 2000s, mainly in commercial circles,<ref>{{Cite web |last=Strom |first=David |date=July 1, 2009 |title=We're All Part of the Story |url=http://stpcollaborative.com/knowledge/272-were-all-part-of-the-story |archive-url=https://web.archive.org/web/20090831182649/http://stpcollaborative.com/knowledge/272-were-all-part-of-the-story |archive-date=August 31, 2009 |publisher=Software Test & Performance Collaborative}}</ref><ref>{{Cite book |last=Griffiths |first=M. |title=Agile Development Conference (ADC'05) |publisher=ieee.org |year=2005 |isbn=978-0-7695-2487-0 |pages=318–322 |chapter=Teaching agile project management to the PMI |doi=10.1109/ADC.2005.45 |s2cid=30322339}}</ref> whereas government and military<ref>{{Cite journal |last=Willison |first=John S. |date=April 2004 |title=Agile Software Development for an Agile Force |url=http://www.stsc.hill.af.mil/crosstalk/2004/04/0404willison.htm |journal=CrossTalk |publisher=STSC |issue=April 2004 |archive-url=https://web.archive.org/web/20051029135922/http://www.stsc.hill.af.mil/crosstalk/2004/04/0404willison.html |archive-date=October 29, 2005}}</ref> software providers use this methodology alongside the traditional test-last models (e.g., the [[Waterfall model]]).{{Citation needed|date=February 2011}}
; Manual vs. automated testing : Some writers believe that [[test automation]] is so expensive relative to its value that it should be used sparingly.<ref>An example is Mark Fewster, Dorothy Graham: ''Software Test Automation.'' Addison Wesley, 1999, {{ISBN|978-0-201-33140-0}}.</ref> Test automation can then be seen as a way to capture and implement the requirements (a brief illustrative sketch appears at the end of this section). As a general rule, the larger and more complex the system, the greater the return on investment in test automation. The investment in tools and expertise can also be amortized over multiple projects with the right level of knowledge sharing within an organization.
; Is the existence of the [[ISO/IEC 29119|ISO 29119]] software testing standard justified? : Significant opposition to the ISO 29119 standard has formed within the context-driven school of software testing. Professional testing associations, such as the International Society for Software Testing, have attempted to have the standard withdrawn.<ref>{{Cite web |title=stop29119 |url=http://commonsensetesting.org/stop29119 |archive-url=https://web.archive.org/web/20141002033046/http://commonsensetesting.org/stop29119 |archive-date=October 2, 2014 |website=commonsensetesting.org}}</ref><ref>{{Cite web |last=Krill |first=Paul |date=August 22, 2014 |title=Software testers balk at ISO 29119 standards proposal |url=http://www.infoworld.com/t/application-development/software-testers-balk-iso-29119-standards-proposal-249031 |website=InfoWorld}}</ref>
; Some practitioners declare that the testing field is not ready for certification<ref>{{Cite web |last=Kaner |first=Cem |author-link=Cem Kaner |year=2001 |title=NSF grant proposal to 'lay a foundation for significant improvements in the quality of academic and commercial courses in software testing' |url=http://www.testingeducation.org/general/nsf_grant.pdf |access-date=October 13, 2006 |archive-date=November 27, 2009 |archive-url=https://web.archive.org/web/20091127210430/http://www.testingeducation.org/general/nsf_grant.pdf |url-status=dead}}</ref> : No certification now offered actually requires the applicant to show their ability to test software. No certification is based on a widely accepted body of knowledge. Certification itself cannot measure an individual's productivity, skill, or practical knowledge, and cannot guarantee their competence or professionalism as a tester.<ref>{{Cite conference |last=Kaner |first=Cem |author-link=Cem Kaner |year=2003 |title=Measuring the Effectiveness of Software Testers |url=http://www.testingeducation.org/a/mest.pdf |conference=STAR East |access-date=January 18, 2018 |archive-date=March 26, 2010 |archive-url=https://web.archive.org/web/20100326042728/http://www.testingeducation.org/a/mest.pdf |url-status=dead}}</ref>
; Studies used to show the relative expense of fixing defects : There are opposing views on the applicability of studies that show the relative expense of fixing defects depending on when they are introduced and detected. For example:

<blockquote>
It is commonly believed that the earlier a defect is found, the cheaper it is to fix it. The following table shows the cost of fixing a defect depending on the stage at which it was found.<ref>{{Cite book |last=McConnell |first=Steve |url=https://archive.org/details/codecomplete0000mcco |title=Code Complete |publisher=Microsoft Press |year=2004 |isbn=978-0-7356-1967-8 |edition=2nd |page=[https://archive.org/details/codecomplete0000mcco/page/29 29] |url-access=registration}}</ref> For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review. With the advent of modern [[continuous deployment]] practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time.

{| class="wikitable" style="text-align:center;"
|-
! rowspan="2" colspan="2" | Cost to fix a defect
! colspan="5" | Time detected
|-
! Requirements
! Architecture
! Construction
! System test
! Post-release
|-
! rowspan="3" | Time introduced
! Requirements
| 1×
| 3×
| 5–10×
| 10×
| 10–100×
|-
! Architecture
| –
| 1×
| 10×
| 15×
| 25–100×
|-
! Construction
| –
| –
| 1×
| 10×
| 10–25×
|}
</blockquote>

<blockquote>
The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis:

<blockquote>
The "smaller projects" curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to "smaller projects in general" is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs "Safeguard" project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims that seem to contradict Boehm's graph and no numerical results that clearly correspond to his data points. Boehm doesn't even cite a paper for the TRW data, except when writing for "Making Software" in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.<ref name="Bossavit-Leprechauns">{{Cite book |last=Bossavit |first=Laurent |title=The Leprechauns of Software Engineering: How folklore turns into fact and what to do about it |date=November 20, 2013 |publisher=leanpub |chapter=The cost of defects: an illustrated history}}</ref>
</blockquote>
</blockquote>
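The "Manual vs. automated testing" item above is easier to picture with a concrete case. The sketch below is illustrative only and is not drawn from the cited sources: the <code>login</code> function and the requirement it checks are hypothetical, and Python's built-in <code>unittest</code> module is assumed simply because it needs no extra dependencies. It shows one minimal sense in which an automated test can "capture and implement" a requirement as executable code.

<syntaxhighlight lang="python">
import unittest


def login(username: str, password: str) -> str:
    """Toy stand-in implementation so the example runs; real systems differ."""
    known_users = {"alice": "correct-horse-battery-staple"}
    if known_users.get(username) == password:
        return "welcome"
    # Same message whether the username is unknown or the password is wrong.
    return "invalid username or password"


class LoginRequirementTest(unittest.TestCase):
    # Hypothetical requirement captured as a test: a rejected login attempt
    # must not reveal whether the username exists.
    def test_failure_messages_are_indistinguishable(self):
        self.assertEqual(login("alice", "wrong-password"),
                         login("mallory", "wrong-password"))

    def test_valid_credentials_are_accepted(self):
        self.assertEqual(login("alice", "correct-horse-battery-staple"), "welcome")


if __name__ == "__main__":
    unittest.main()
</syntaxhighlight>

Re-running such a file (for example with <code>python -m unittest</code>) re-checks the recorded requirement after every change, which is why proponents argue that the return on the automation investment tends to grow with the size of the system and the frequency of change.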