=== Performance testing web applications ===

According to the Microsoft Developer Network, the Performance Testing Methodology consists of the following activities:

# '''Identify the Test Environment.''' Identify the physical [[test environment]] and the production environment, as well as the tools and resources available to the test team. The physical environment includes hardware, software, and network configurations. A thorough understanding of the entire test environment at the outset enables more efficient [[test design]] and planning, and helps identify testing challenges early in the project. In some situations, this process must be revisited periodically throughout the project's [[Systems development life-cycle|life cycle]].
# '''Identify Performance Acceptance Criteria.''' Identify the response-time, throughput, and resource-use goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource use is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate which combination of configuration settings yields the most desirable performance characteristics.
# '''Plan and Design Tests.''' Identify key [[scenario]]s, determine the variability among representative users and how to [[simulate]] that variability, define test data, and establish the metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.
# '''Configure the Test Environment.''' Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.
# '''Implement the Test Design.''' Develop the performance tests in accordance with the test design.
# '''Execute the Test.''' Run and monitor the tests. Validate the tests, test data, and [[results collection]], then execute validated tests for analysis while monitoring the test and the test environment. (A minimal sketch of such a test run appears after this list.)
# '''Analyze Results, Tune, and Retest.''' Analyze, consolidate, and share the results data. Make a single tuning change and retest, then compare the results of the two runs. Each improvement tends to be smaller than the one before it; tuning typically stops when a hardware bottleneck such as the CPU is reached, at which point the remaining options are to improve the code or add more processing capacity. (A sketch of such a run-to-run comparison also appears below.)
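To make steps 5 and 6 concrete, the following is a minimal sketch of a test run rather than any particular tool's API: it simulates concurrent users against a hypothetical endpoint (<code>http://localhost:8080/</code> is an assumption, as are the user and request counts) and reports the response-time and throughput metrics named in step 2, using only the Python standard library. Dedicated load-testing tools provide the same measurements with far more control over workload shape and monitoring.

<syntaxhighlight lang="python">
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import quantiles

TARGET_URL = "http://localhost:8080/"  # hypothetical system under test
CONCURRENT_USERS = 20                  # assumed number of simulated users
REQUESTS_PER_USER = 50                 # assumed requests per simulated user

def one_request(_):
    """Issue a single request and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
        resp.read()                    # consume the body, as a real client would
    return time.perf_counter() - start

def run_load_test():
    total = CONCURRENT_USERS * REQUESTS_PER_USER
    wall_start = time.perf_counter()
    # One worker thread per simulated user issues requests concurrently.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        latencies = list(pool.map(one_request, range(total)))
    wall = time.perf_counter() - wall_start

    # Response time is a user concern; throughput is a business concern.
    cuts = quantiles(latencies, n=100)  # 99 percentile cut points
    p50, p95 = cuts[49], cuts[94]
    print(f"requests:   {total}")
    print(f"throughput: {total / wall:.1f} req/s")
    print(f"median:     {p50 * 1000:.1f} ms   95th pct: {p95 * 1000:.1f} ms")

if __name__ == "__main__":
    run_load_test()
</syntaxhighlight>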
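For step 7, each retest is judged by comparing the same metric across runs. The sketch below uses illustrative, made-up median response times to show the diminishing returns described above: each tuning change buys a smaller percentage improvement than the one before it.

<syntaxhighlight lang="python">
def improvement(before, after):
    """Percentage reduction in median response time between two runs."""
    return (before - after) / before * 100

# Hypothetical median response times (ms) after successive tuning changes.
runs = [420.0, 310.0, 265.0, 248.0, 243.0]
for i in range(1, len(runs)):
    print(f"tuning change {i}: {improvement(runs[i - 1], runs[i]):.1f}% faster")
# Each change yields less than the previous one; once the CPU is the
# bottleneck, further gains require better code or more hardware.
</syntaxhighlight>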