Software performance testing
== Setting performance goals ==
Performance testing can serve different purposes:
* It can demonstrate that the system meets performance criteria.
* It can compare two systems to find which performs better.
* It can measure which parts of the system or workload cause the system to perform badly.

Many performance tests are undertaken without setting sufficiently realistic, goal-oriented performance goals. The first question from a business perspective should always be: "Why are we performance-testing?" These considerations are part of the [[business case]] of the testing. Performance goals will differ depending on the system's technology and purpose, but should always include some of the following:

=== Concurrency and throughput ===
If a system identifies end-users by some form of log-in procedure, then a concurrency goal is highly desirable. By definition, this is the largest number of concurrent system users that the system is expected to support at any given moment. The work-flow of a scripted transaction may impact true [[Concurrency (computer science)|concurrency]], especially if the iterative part contains the log-in and log-out activity. If the system has no concept of end-users, then the performance goal is likely to be based on a maximum throughput or transaction rate.

=== Server response time ===
This refers to the time taken for one system node to respond to the request of another. A simple example would be an HTTP 'GET' request from a browser client to a web server. In terms of response time, this is what all [[load testing]] tools actually measure. It may be relevant to set server response time goals between all nodes of the system.

=== Render response time ===
Load-testing tools have difficulty measuring render response time, since they generally have no concept of what happens within a [[Node (networking)|node]] apart from recognizing a period of time where there is no activity 'on the wire'.
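The 'on the wire' timing that load-testing tools record (a full request/response round trip, as in the HTTP 'GET' example under server response time) can be sketched minimally. The local test server and URL below are illustrative stand-ins, not part of the article:

```python
# Minimal sketch of timing an HTTP 'GET' round trip "on the wire",
# as a load-testing tool would. The local server is an illustrative
# stand-in for a real system under test.
import http.server
import threading
import time
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

samples_ms = []
for _ in range(5):
    start = time.perf_counter()
    urllib.request.urlopen(url).read()  # one request/response round trip
    samples_ms.append((time.perf_counter() - start) * 1000.0)

print(f"mean server response time: {sum(samples_ms) / len(samples_ms):.2f} ms")
server.shutdown()
```

Note that this measures only the server round trip; the browser's rendering work after the last byte arrives is invisible to such a script.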
To measure render response time, it is generally necessary to include functional [[test script]]s as part of the performance test scenario. Many load-testing tools do not offer this feature.

=== Performance specifications ===
It is critical to detail performance specifications (requirements) and document them in any performance test plan. Ideally, this is done during the requirements development phase of any system development project, prior to any design effort. See [[Performance Engineering]] for more details.

However, performance testing is frequently not performed against a specification; e.g., no one will have expressed what the maximum acceptable response time for a given population of users should be. Performance testing is frequently used as part of the process of performance profile tuning. The idea is to identify the "weakest link": there is inevitably a part of the system which, if it is made to respond faster, will result in the overall system running faster. It is sometimes a difficult task to identify which part of the system represents this critical path, and some test tools include (or can have add-ons that provide) instrumentation that runs on the server (agents) and reports transaction times, database access times, network overhead, and other server monitors, which can be analyzed together with the raw performance statistics. Without such instrumentation, one might have to have someone crouched over [[Windows Task Manager]] at the server to see how much CPU load the performance tests are generating (assuming a Windows system is under test).

Performance testing can be performed across the web, and even done in different parts of the country, since it is known that the response times of the internet itself vary regionally. It can also be done in-house, although [[router (computing)|router]]s would then need to be configured to introduce the lag that would typically occur on public networks.
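The "weakest link" profiling described above can be illustrated with simple per-step instrumentation; the step names and sleeps below are hypothetical stand-ins for real transaction work:

```python
# Per-step instrumentation to locate the "weakest link" in a
# transaction. Step names and durations are hypothetical.
import time
from contextlib import contextmanager

step_times = {}

@contextmanager
def timed(step):
    start = time.perf_counter()
    try:
        yield
    finally:
        step_times[step] = step_times.get(step, 0.0) + (time.perf_counter() - start)

# Simulated transaction: each sleep stands in for real work.
with timed("db_query"):
    time.sleep(0.03)
with timed("render"):
    time.sleep(0.01)

slowest = max(step_times, key=step_times.get)
print(f"weakest link: {slowest} ({step_times[slowest] * 1000:.0f} ms)")
```

Real agent-based tools report the same kind of breakdown (transaction times, database access times, network overhead) without requiring code changes.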
Loads should be introduced to the system from realistic points. For example, if 50% of a system's user base will be accessing the system via a 56K modem connection and the other half over a [[T-carrier|T1]], then the load injectors (computers that simulate real users) should either inject load over the same mix of connections (ideal) or simulate the network latency of such connections, following the same user profile.

It is always helpful to have a statement of the likely peak number of users that might be expected to use the system at peak times. If there can also be a statement of what constitutes the maximum allowable 95th-percentile response time, then an injector configuration could be used to test whether the proposed system meets that specification.

=== Questions to ask ===
Performance specifications should ask the following questions, at a minimum:
* In detail, what is the performance test scope? What subsystems, interfaces, components, etc. are in and out of scope for this test?
* For the user interfaces (UIs) involved, how many concurrent users are expected for each (specify peak vs. nominal)?
* What does the target system (hardware) look like (specify all server and network appliance configurations)?
* What is the Application Workload Mix of each system component? (For example: 20% log-in, 40% search, 30% item select, 10% checkout.)
* What is the System Workload Mix? Multiple workloads may be simulated in a single performance test. (For example: 30% Workload A, 20% Workload B, 50% Workload C.)
* What are the time requirements for any/all back-end batch processes (specify peak vs. nominal)?