== Coding cycle ==
[[File:TDD Global Lifecycle.png|thumb|A graphical representation of the test-driven development lifecycle]]
The TDD steps vary somewhat by author in count and description, but are generally as follows. These are based on the book ''Test-Driven Development by Example'',<ref name=Beck>{{cite book |last=Beck |first=Kent |title=Test-Driven Development by Example |publisher=Addison-Wesley |location=Boston |date=2002-11-08 |isbn=978-0-321-14653-3}}</ref> and Kent Beck's Canon TDD article.<ref>{{Cite web |last=Beck |first=Kent |date=2023-12-11 |title=Canon TDD |url=https://tidyfirst.substack.com/p/canon-tdd |access-date=2024-10-22 |website=Software Design: Tidy First?}}</ref>

;1. List scenarios for the new feature
:List the expected variants in the new behavior. “There’s the basic case & then what-if this service times out & what-if the key isn’t in the database yet &…” The developer can discover these specifications by asking about [[use case]]s and [[user story|user stories]]. A key benefit of TDD is that it makes the developer focus on requirements ''before'' writing code. This is in contrast with the usual practice, where unit tests are only written ''after'' code.
;2. Write a test for an item on the list
:Write an automated test that ''would'' pass if the variant in the new behavior is met.
;3. Run all tests. The new test should ''fail'' {{endash}} for ''expected'' reasons
:This shows that new code is actually needed for the desired feature. It validates that the [[test harness]] is working correctly, and it rules out the possibility that the new test is flawed and will always pass.
;4. Write the simplest code that passes the new test
:Inelegant code and [[hard coding]] are acceptable. The code will be honed in step 6. No code should be added beyond the tested functionality.
;5. All tests should now pass
:If any fail, fix the failing tests with minimal changes until all pass.
;6. Refactor as needed while ensuring all tests continue to pass
:Code is [[Code refactoring|refactored]] for [[Code readability|readability]] and maintainability. In particular, hard-coded test data should be removed from the production code. Running the test suite after each refactor ensures that no existing functionality is broken. Examples of refactoring:
:* moving code to where it most logically belongs
:* removing [[duplicate code]]
:* making [[Identifier (computer languages)|names]] [[Self-documenting code|self-documenting]]
:* splitting methods into smaller pieces
:* re-arranging [[Inheritance (object-oriented programming)|inheritance hierarchies]]
;Repeat
:Repeat the process, starting at step 2, with each test on the list until all tests are implemented and passing. Each test should be small and commits made often. If new code fails some tests, the programmer can [[undo]] or revert rather than [[debug]] excessively. When using [[Library (computing)|external libraries]], it is important not to write tests that are so small as to effectively test merely the library itself,<ref name=Newkirk /> unless there is some reason to believe that the library is buggy or not feature-rich enough to serve all the needs of the software under development.
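The following is a minimal sketch of one pass through this cycle in Python, using the standard <code>unittest</code> module. The <code>fizzbuzz</code> function and its scenarios are hypothetical examples chosen for illustration and are not taken from Beck's text.

<syntaxhighlight lang="python">
import unittest


# Steps 1-2: scenarios from the list are turned into automated tests.
# fizzbuzz() is a hypothetical feature used purely for illustration.
class TestFizzBuzz(unittest.TestCase):
    def test_plain_number_is_returned_as_string(self):
        self.assertEqual(fizzbuzz(1), "1")

    def test_multiple_of_three_returns_fizz(self):
        self.assertEqual(fizzbuzz(3), "Fizz")


# Step 3: before the function below existed, running the suite failed
# with a NameError, confirming the harness detects the missing behavior.

# Step 4: the simplest code that passes the tests above; at this stage,
# hard coding the expected values would also be acceptable.
def fizzbuzz(n):
    if n % 3 == 0:
        return "Fizz"
    return str(n)


# Steps 5-6: all tests now pass, so the implementation can be refactored
# (renaming, extracting helpers) while re-running the suite after each change.

if __name__ == "__main__":
    unittest.main()
</syntaxhighlight>

Running the file with <code>python -m unittest</code> before the implementation exists reproduces the expected failure of step 3; running it again afterwards corresponds to steps 5 and 6.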