=== Automated approaches ===

There are many automated approaches to calculating WCET beyond the manual techniques above. These include:

* analytical techniques to improve test cases to increase confidence in end-to-end measurements
* static analysis of the software ("static" meaning without executing the software)
* combined approaches, often referred to as "hybrid" analysis, which combine measurements with structural analysis

==== Static analysis techniques ====

A static WCET tool attempts to estimate WCET by examining the computer software without executing it directly on the hardware. Static analysis techniques have dominated research in the area since the late 1980s, although in industrial settings end-to-end measurement approaches were the standard practice.

Static analysis tools work at a high level to determine the structure of a [[computer program|program]]'s task, working either on a piece of [[source code]] or on a disassembled binary [[executable]]. They also work at a low level, using timing information about the real hardware on which the task will execute, with all its specific features. By combining these two kinds of analysis, the tool attempts to give an upper bound on the time required to execute a given task on a given hardware platform.

At the low level, static WCET analysis is complicated by the presence of architectural features that improve the average-case performance of the [[central processing unit|processor]]: instruction/data [[CPU cache|caches]], [[Branch predictor|branch prediction]] and [[Pipeline (computing)|instruction pipeline]]s, for example. It is possible, but increasingly difficult, to determine tight WCET bounds if these modern architectural features are taken into account in the timing model used by the analysis. Certification authorities such as the [[European Aviation Safety Agency]] therefore rely on model validation suites.{{Citation needed|date=September 2012}}

Static analysis has produced good results for simpler hardware; however, a possible limitation of static analysis is that the hardware (the CPU in particular) has reached a complexity which is extremely hard to model. In particular, the modeling process can introduce errors from several sources: errors in chip design, lack of documentation, errors in documentation, errors in model creation; all leading to cases where the model predicts a different behavior from that observed on real hardware. Where a behavior cannot be predicted accurately, a pessimistic result is typically used, which can lead to the WCET estimate being much larger than anything achieved at run time. Obtaining tight static WCET estimates is particularly difficult on multi-core processors.

There are a number of commercial and academic tools that implement various forms of static analysis.

==== Measurement and hybrid techniques ====

Measurement-based and hybrid approaches usually try to measure the execution times of short code segments on the real hardware, which are then combined in a higher-level analysis. Such tools take the structure of the software (e.g. loops, branches) into account to produce an estimate of the WCET of the larger program. The rationale is that while it is hard to test the longest path in complex software, it is easier to test the longest path in many smaller components of it. A worst-case effect needs to be observed only once during testing for the analysis to be able to combine it with other worst-case events.
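The combination step can be illustrated with a small sketch, sometimes described in the literature as a "timing schema": sum over sequences, maximum over branches, and loop bound times body for loops. The following C program is hypothetical and simplified; its structure, names and timings are illustrative only, and a real hybrid tool would derive the structure from the control-flow graph and the per-segment times from measured traces.

<syntaxhighlight lang="c">
/*
 * Minimal sketch: combining per-segment worst-case times over program
 * structure. All names, timings and the example program are hypothetical.
 */
#include <stdio.h>

typedef enum { SEQ, ALT, LOOP, BLOCK } kind_t;

typedef struct node {
    kind_t kind;
    unsigned long wcet;       /* BLOCK: worst observed/modeled time   */
    unsigned long bound;      /* LOOP: maximum iteration count        */
    const struct node *a, *b; /* children (b unused for LOOP/BLOCK)   */
} node_t;

/* Sequence -> sum, alternative -> max, loop -> bound * body. */
static unsigned long combine(const node_t *n) {
    switch (n->kind) {
    case BLOCK: return n->wcet;
    case SEQ:   return combine(n->a) + combine(n->b);
    case ALT:   { unsigned long x = combine(n->a), y = combine(n->b);
                  return x > y ? x : y; }
    case LOOP:  return n->bound * combine(n->a);
    }
    return 0;
}

int main(void) {
    /* if (c) { A } else { B }; then a loop over C, at most 100 times */
    node_t A    = { BLOCK, 120, 0,   0,    0  };
    node_t B    = { BLOCK,  80, 0,   0,    0  };
    node_t C    = { BLOCK,  30, 0,   0,    0  };
    node_t br   = { ALT,     0, 0,   &A,   &B };
    node_t loop = { LOOP,    0, 100, &C,   0  };
    node_t prog = { SEQ,     0, 0,   &br,  &loop };
    printf("WCET bound: %lu cycles\n", combine(&prog)); /* 120 + 100*30 */
    return 0;
}
</syntaxhighlight>

Note that the quality of the resulting bound depends entirely on the per-segment times and the loop bounds fed into the combination.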
Typically, the small sections of software can be measured automatically using techniques such as instrumentation (adding markers to the software; a minimal sketch appears below) or hardware support such as debuggers and CPU hardware tracing modules. These markers result in a trace of execution, which includes both the path taken through the program and the time at which different points were executed. The trace is then analyzed to determine the maximum time that each part of the program has ever taken to execute, the maximum observed iteration time of each loop, and whether there are any parts of the software that are untested ([[code coverage]]).

Measurement-based WCET analysis has produced good results for both simple and complex hardware, although, like static analysis, it can suffer excessive pessimism in multi-core situations, where the impact of one core on another is hard to define. A limitation of measurement is that it relies on observing the worst-case effects during testing (although not necessarily at the same time). It can be hard to determine whether the worst-case effects have actually been tested.

There are a number of commercial and academic tools that implement various forms of measurement-based analysis.
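The instrumentation approach referred to above can be sketched in a few lines of C. Everything here is hypothetical and simplified: a real tool would use a low-overhead cycle counter or a hardware trace module rather than the POSIX clock_gettime() call, and would map markers back to the program structure automatically. The result is the maximum ''observed'' time (a high-water mark), not a guaranteed upper bound, which is exactly the limitation discussed above.

<syntaxhighlight lang="c">
/*
 * Minimal sketch of software instrumentation for measurement-based
 * timing analysis. Markers write an identifier and a timestamp into a
 * trace buffer; the trace is analyzed offline for the maximum observed
 * time between a pair of markers.
 */
#include <stdio.h>
#include <time.h>

#define TRACE_MAX 1024

struct event { int id; long long ns; };
static struct event trace[TRACE_MAX];
static int trace_len;

static long long now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Instrumentation point: record which marker fired and when. */
#define MARK(id) \
    do { if (trace_len < TRACE_MAX) \
             trace[trace_len++] = (struct event){ (id), now_ns() }; } while (0)

static void code_under_test(int n) {    /* hypothetical task */
    MARK(1);
    volatile long s = 0;
    for (int i = 0; i < n; i++) s += i; /* work whose duration varies */
    MARK(2);
}

int main(void) {
    for (int run = 0; run < 100; run++)
        code_under_test(run * 100);

    /* Offline analysis: maximum observed time between markers 1 and 2. */
    long long max_ns = 0;
    for (int i = 0; i + 1 < trace_len; i++)
        if (trace[i].id == 1 && trace[i + 1].id == 2 &&
            trace[i + 1].ns - trace[i].ns > max_ns)
            max_ns = trace[i + 1].ns - trace[i].ns;
    printf("max observed 1->2: %lld ns\n", max_ns);
    return 0;
}
</syntaxhighlight>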