{{Short description|Scheduling technique in computer science}}
In [[computer science]], '''rate-monotonic scheduling''' ('''RMS''')<ref>{{citation|first1=C. L.|last1=Liu|author-link1=Chung Laung Liu|first2=J.|last2=Layland|title=Scheduling algorithms for multiprogramming in a hard real-time environment|journal=Journal of the ACM|volume=20|issue=1|year=1973|pages=46–61|doi=10.1145/321738.321743|citeseerx=10.1.1.36.8216|s2cid=207669821}}.</ref> is a priority assignment algorithm used in [[real-time operating system]]s (RTOS) with a static-priority scheduling class.<ref>{{citation|first1=Daniel P.|last1=Bovet|first2=Marco|last2=Cesati|title=Understanding the Linux Kernel}}, http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#85347 {{webarchive|url=https://web.archive.org/web/20140921000832/http://oreilly.com/catalog/linuxkernel/chapter/ch10.html |date=2014-09-21 }}.</ref> The static priorities are assigned according to the cycle duration of the job, so a shorter cycle duration results in a higher job priority. These operating systems are generally [[preemption (computing)|preemptive]] and have deterministic guarantees with regard to response times. Rate-monotonic analysis is used in conjunction with those systems to provide scheduling guarantees for a particular application.

==Introduction==
A simple version of rate-monotonic analysis assumes that threads have the following properties:
*No resource sharing (processes do not share resources, ''e.g.'' a [[Computer hardware|hardware]] resource, a queue, or any kind of [[Semaphore (programming)|semaphore]], whether blocking or non-blocking ([[busy waiting|busy-waits]]))
*Deterministic deadlines are exactly equal to periods
*Static priorities (the task with the highest static priority that is runnable immediately preempts all other tasks)
*Static priorities assigned according to the ''rate monotonic'' conventions (tasks with shorter periods/deadlines are given higher priorities)
*Context switch times and other thread operations are free and have no impact on the model

Rate-monotonic analysis is a mathematical model that simulates the periodic execution of all threads in a closed system, where [[round-robin scheduling|round-robin]] and [[time-sharing]] schedulers would otherwise fail to meet the scheduling needs. It models a complete run of the threads in the system and determines how much time is needed to meet the guarantees for the set of threads in question.
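The priority assignment convention itself is simple: the shorter a task's period, the higher its priority. The following minimal sketch in Python (the <code>Task</code> record and function name are illustrative and not part of any particular RTOS API) shows this convention applied to the task set of Example 1 below:

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    c: float  # worst-case computation time
    t: float  # period (deadline assumed equal to the period)

def assign_rate_monotonic_priorities(tasks):
    """Return {task name: priority}, where 1 is the highest priority.

    The rate-monotonic convention: tasks with shorter periods
    (i.e. higher rates) receive higher priorities.
    """
    ordered = sorted(tasks, key=lambda task: task.t)
    return {task.name: prio for prio, task in enumerate(ordered, start=1)}

# Task set of Example 1 below: P2 has the shortest period, so it is
# assigned the highest priority.
tasks = [Task("P1", 1, 8), Task("P2", 2, 5), Task("P3", 2, 10)]
print(assign_rate_monotonic_priorities(tasks))  # {'P2': 1, 'P1': 2, 'P3': 3}
</syntaxhighlight>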
=== Optimality ===
The rate-monotonic priority assignment is ''optimal'' under the given assumptions, meaning that if any static-priority scheduling algorithm can meet all the deadlines, then the rate-monotonic algorithm can too. The [[deadline-monotonic scheduling]] algorithm is also optimal when periods and deadlines are equal; in fact, in this case the two algorithms are identical. In addition, deadline-monotonic scheduling is optimal when deadlines are less than periods.<ref>{{citation|first1=J. Y.|last1=Leung|first2=J.|last2=Whitehead|title=On the complexity of fixed-priority scheduling of periodic, real-time tasks|journal=Performance Evaluation|volume=2|issue=4|pages=237–250|year=1982|doi=10.1016/0166-5316(82)90024-4}}.</ref> For the task model in which deadlines can be greater than periods, Audsley's algorithm, combined with an exact schedulability test for this model, finds an optimal priority assignment.<ref>{{citation|author=Alan Burns and Andy Wellings|title=Real-Time Systems and Programming Languages|year=2009|publisher=Addison-Wesley|edition=4th|isbn=978-0-321-41745-9|pages=391, 397}}</ref>

== Upper bounds on utilization ==

=== Least upper bound ===
{{harvtxt|Liu|Layland|1973}} proved that for a set of {{mvar|n}} periodic tasks with unique periods, a feasible schedule that will always meet deadlines exists if the [[Central processing unit|CPU]] utilization is below a specific bound (depending on the number of tasks). The schedulability test for RMS is:

:<math>U = \sum_{i=1}^{n} U_i = \sum_{i=1}^{n} \frac{C_i}{T_i} \leq n(2^{1/n} - 1)</math>

where {{mvar|U}} is the utilization factor, {{mvar|C<sub>i</sub>}} is the computation time for process {{mvar|i}}, {{mvar|T<sub>i</sub>}} is the release period (with deadline one period later) for process {{mvar|i}}, and {{mvar|n}} is the number of processes to be scheduled. For example, {{mvar|U}} ≤ 0.8284 for two processes. When the number of processes tends towards [[infinity]], this expression tends towards:

:<math>\lim_{n \rightarrow \infty} n(\sqrt[n]{2} - 1) = \ln 2 \approx 0.693147\ldots</math>

Therefore, a rough estimate when <math>n \geq 10</math> is that RMS can meet all of the deadlines if the total CPU utilization, {{mvar|U}}, is less than 70%. The other 30% of the CPU can be dedicated to lower-priority, non-real-time tasks. For smaller values of {{mvar|n}}, or in cases where {{mvar|U}} is close to this estimate, the calculated utilization bound should be used instead.

In practice, for the <math>i</math>-th process, <math>C_i</math> should represent the worst-case (i.e. longest) computation time and <math>T_i</math> should represent the worst-case deadline (i.e. shortest period) in which all processing must occur.
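This test is easy to evaluate mechanically. A minimal sketch in Python (the helper names are illustrative, not taken from the cited paper) computes the total utilization of a task set and compares it with the Liu–Layland bound:

<syntaxhighlight lang="python">
def liu_layland_bound(n):
    """Least upper bound on total utilization for n tasks: n(2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def passes_liu_layland_test(tasks):
    """Sufficient (but not necessary) schedulability test.

    `tasks` is a list of (c, t) pairs: worst-case computation time c and
    period t, with the deadline assumed equal to the period.
    """
    utilization = sum(c / t for c, t in tasks)
    return utilization <= liu_layland_bound(len(tasks))

# Example 1 below: U = 1/8 + 2/5 + 2/10 = 0.725 <= 3(2^(1/3) - 1) = 0.7798
print(passes_liu_layland_test([(1, 8), (2, 5), (2, 10)]))  # True
</syntaxhighlight>

Because the test is only sufficient, a task set that fails it may still be schedulable, as Example 2 below illustrates.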
=== Relationship to queueing theory ===
In [[queueing theory]], {{mvar|T<sub>i</sub>}} is called the '''interarrival time''', and {{mvar|C<sub>i</sub>}} is called the '''service time'''. These two parameters are often specified as rates:

:<math>\lambda_i = \frac{1}{T_i}</math> is the '''arrival rate''', and
:<math>\mu_i = \frac{1}{C_i}</math> is the '''service rate'''.

The utilization for each task, denoted {{mvar|ρ<sub>i</sub>}}, is then:

:<math>\rho_i = \frac{\lambda_i}{\mu_i} = \frac{C_i}{T_i} = U_i</math>

as above.

=== Upper bound for harmonic task sets ===
Liu and Layland noted that this bound may be relaxed to the maximum possible value of 1.0 if the task set is harmonic: for any two tasks with periods <math>T_m > T_i</math>, <math>T_m</math> is an integer multiple of <math>T_i</math>. In other words, every task's period is an integer multiple not just of the shortest period, <math>T_1</math>, but of every shorter period. Such a task set is known as a [[Harmonic Task Set|harmonic task set]]. An example would be <math>[T_1, T_2, T_3, T_4] = [1, 3, 6, 12]</math>. Liu and Layland acknowledged that a harmonic task set is not always feasible, and that in practice other mitigation measures, such as buffering for tasks with soft deadlines or using a dynamic priority assignment, may be used instead to allow for a higher bound.

=== Generalization to harmonic chains ===
Kuo and Mok<ref>{{cite book|author=T.-W. Kuo |author2=A.K. Mok |title=[1991] Proceedings Twelfth Real-Time Systems Symposium |chapter=Load adjustment in adaptive real-time systems |year=1991|pages=160–170|doi=10.1109/REAL.1991.160369|isbn=0-8186-2450-7|s2cid=31127772}}</ref> showed that for a task set made up of {{mvar|K}} harmonic task subsets (known as ''harmonic chains''), the least upper bound test becomes:

:<math>U = \sum_{i=1}^{n} \frac{C_i}{T_i} \leq K(2^{1/K} - 1)</math>

When no task's period is an integer multiple of another task's period, the task set can be thought of as being composed of {{mvar|n}} harmonic task subsets of size 1, so <math>K = n</math> and this generalization reduces to Liu and Layland's least upper bound. When <math>K = 1</math>, i.e. when the entire task set forms a single harmonic chain, the upper bound becomes 1.0, representing full utilization.

=== Stochastic bounds ===
It has been shown that a randomly generated periodic task system will usually meet all deadlines when the utilization is 88% or less.<ref>{{citation|first1=J.|last1=Lehoczky|first2=L.|last2=Sha|first3=Y.|last3=Ding|contribution=The rate monotonic scheduling algorithm: exact characterization and average case behavior|title=IEEE Real-Time Systems Symposium|pages=166–171|year=1989|doi=10.1109/REAL.1989.63567|isbn=978-0-8186-2004-1|s2cid=206524469}}.</ref> However, this result depends on knowing the exact task statistics (periods, deadlines), which cannot be guaranteed for all task sets, and in some cases the authors found that the utilization reached the least upper bound presented by Liu and Layland.

=== Hyperbolic bound ===
The hyperbolic bound<ref>{{citation|author1=Enrico Bini |author2=Giorgio C. Buttazzo |author3=Giuseppe M. Buttazzo |title=Rate Monotonic Analysis: the Hyperbolic Bound|journal=IEEE Transactions on Computers|year=2003|volume=52|issue=7|pages=933–942|doi=10.1109/TC.2003.1214341|hdl=11382/200358|hdl-access=free}}</ref> is a tighter sufficient condition for schedulability than the one presented by Liu and Layland:

:<math>\prod_{i=1}^n (U_i + 1) \leq 2</math>

where {{mvar|U<sub>i</sub>}} is the CPU utilization for each task. It is the tightest upper bound that can be found using only the individual task utilization factors.
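The hyperbolic test is equally simple to check mechanically. A minimal sketch in Python (illustrative, not code from the cited paper):

<syntaxhighlight lang="python">
import math

def passes_hyperbolic_test(tasks):
    """Sufficient schedulability test: prod(U_i + 1) <= 2.

    `tasks` is a list of (c, t) pairs with worst-case computation time c
    and period t; U_i = c/t is the utilization of each task.
    """
    return math.prod(c / t + 1 for c, t in tasks) <= 2

# Example 2 below fails the Liu-Layland test (U = 0.7875 > 0.7798) but
# passes the hyperbolic test: (3/16 + 1)(2/5 + 1)(2/10 + 1) = 1.995 <= 2.
print(passes_hyperbolic_test([(3, 16), (2, 5), (2, 10)]))  # True
</syntaxhighlight>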
== Resource sharing ==
In many practical applications, resources are shared, and an unmodified RMS will be subject to [[priority inversion]] and [[deadlock (computer science)|deadlock]] hazards. In practice, this is solved by disabling preemption or by [[priority inheritance]]. Alternative methods are to use [[lock-free and wait-free algorithms|lock-free algorithms]] or to avoid sharing a mutex/semaphore across threads with different priorities, so that resource conflicts cannot arise in the first place.

=== Disabling of preemption ===
*The <code>OS_ENTER_CRITICAL()</code> and <code>OS_EXIT_CRITICAL()</code> primitives that lock CPU interrupts in a real-time kernel, e.g. [[MicroC/OS-II]]
*The <code>splx()</code> family of primitives which nest the locking of device interrupts (FreeBSD 5.x/6.x)

=== Priority inheritance ===
*The ''basic priority inheritance protocol''<ref>{{citation|first1=B. W.|last1=Lampson|author-link1=Butler Lampson|first2=D. D.|last2=Redell|title=Experience with processes and monitors in Mesa|journal=Communications of the ACM|volume=23|issue=2|year=1980|pages=105–117|doi=10.1145/358818.358824|citeseerx=10.1.1.46.7240|s2cid=1594544}}.</ref> promotes the priority of the task that holds the resource to the priority of the task that requests that resource, at the time the request is made. Upon release of the resource, the original priority level before the promotion is restored. This method does not prevent deadlocks and suffers from ''chained blocking'': if a high-priority task accesses multiple shared resources in sequence, it may have to wait (block) on a lower-priority task for each of the resources.<ref>{{citation|last=Buttazzo|first=Giorgio|title=Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications|publisher=Springer|location=New York, NY|year=2011|edition=Third|page=225}}</ref> The [https://rt.wiki.kernel.org/index.php/Main_Page real-time patch] {{Webarchive|url=https://web.archive.org/web/20201013113111/https://rt.wiki.kernel.org/index.php/Main_Page |date=2020-10-13 }} to the [[Linux kernel]] includes an implementation of this protocol.<ref>{{cite web | url = https://rt.wiki.kernel.org/index.php/Frequently_Asked_Questions#How_does_the_CONFIG_PREEMPT_RT_patch_work.3F | title = Real-Time Linux Wiki | date = 2008-03-26 | access-date = 2014-03-14 | publisher = kernel.org }}</ref>
*The [[priority ceiling protocol]]<ref>{{citation|first1=L.|last1=Sha|first2=R.|last2=Rajkumar|first3=J. P.|last3=Lehoczky|title=Priority inheritance protocols: an approach to real-time synchronization|journal=IEEE Transactions on Computers|volume=39|issue=9|year=1990|pages=1175–1185|doi=10.1109/12.57058}}.</ref> enhances the basic priority inheritance protocol by assigning a ''ceiling priority'' to each semaphore, which is the priority of the highest-priority job that will ever access that semaphore. A job cannot preempt a lower-priority critical section if its priority is lower than the ceiling priority for that section. This method prevents deadlocks and bounds the blocking time to at most the length of one lower-priority critical section. It can be suboptimal, in that it can cause unnecessary blocking. The priority ceiling protocol is available in the [[VxWorks]] real-time kernel. It is also known as the ''Highest Locker's Priority Protocol'' (HLP),<ref>{{citation|last=Buttazzo|first=Giorgio|title=Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications|publisher=Springer|location=New York, NY|year=2011|edition=Third|page=212}}</ref> and its immediate priority boost is illustrated by the sketch following this list.
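The following minimal sketch in Python is a toy model of the "highest locker" boost only (the <code>Task</code> and <code>CeilingSemaphore</code> classes are hypothetical and not the API of any real kernel, and blocking and nesting are omitted):

<syntaxhighlight lang="python">
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority          # lower number = higher priority
        self.base_priority = priority

class CeilingSemaphore:
    def __init__(self, ceiling):
        self.ceiling = ceiling            # highest priority of any task that uses it
        self.holder = None

    def acquire(self, task):
        # A real kernel would block here if the semaphore were already held;
        # this toy model only shows the immediate priority boost.
        assert self.holder is None
        self.holder = task
        task.priority = min(task.priority, self.ceiling)  # boost to the ceiling

    def release(self):
        self.holder.priority = self.holder.base_priority  # restore original priority
        self.holder = None

low = Task("logger", priority=7)
sem = CeilingSemaphore(ceiling=2)   # the semaphore is also used by a priority-2 task
sem.acquire(low)
print(low.priority)                 # 2 while the semaphore is held
sem.release()
print(low.priority)                 # 7 again
</syntaxhighlight>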
Priority inheritance algorithms can be characterized by two parameters: whether the inheritance is lazy (only when essential) or immediate (boost priority before there is a conflict), and whether it is optimistic (boost by the minimum amount) or pessimistic (boost by more than the minimum amount):

{| class="wikitable"
|-
!
! pessimistic
! optimistic
|-
! immediate
| <code>OS_ENTER_CRITICAL()</code> / <code>OS_EXIT_CRITICAL()</code>
| <code>splx()</code>, highest locker
|-
! lazy
|
| priority ceiling protocol, basic priority inheritance protocol
|}

In practice there is no mathematical difference (in terms of the Liu–Layland system utilization bound) between the lazy and immediate algorithms, and the immediate algorithms are more efficient to implement, so they are the ones used by most practical systems.{{Citation needed|date=October 2007}}

An example of the use of basic priority inheritance is related to the "[[Mars Pathfinder]] reset bug",<ref>{{Cite web|url=http://research.microsoft.com/~mbj/Mars_Pathfinder/|title=Mike Jones at Microsoft Research}}</ref><ref>{{Cite web |url=http://anthology.spacemonkeys.ca/archives/770-Mars-Pathfinder-Reset-Bug.html |title=Mars Pathfinder Reset Bug - Anthology of Interest |access-date=2008-09-09 |archive-date=2011-10-05 |archive-url=https://web.archive.org/web/20111005024710/http://anthology.spacemonkeys.ca/archives/770-Mars-Pathfinder-Reset-Bug.html }}</ref> which was fixed on Mars by changing the creation flags for the semaphore so as to enable priority inheritance.

== Interrupt Service Routines ==
All [[Interrupt Service Routine|interrupt service routines]] (ISRs), whether they have a hard real-time deadline or not, should be included in the RMS analysis to determine schedulability in cases where ISRs have priorities above all scheduler-controlled tasks. An ISR may already be appropriately prioritized under RMS rules if its processing period is shorter than that of the shortest non-ISR process. However, an ISR with a period/deadline longer than the period of any non-ISR process with a critical deadline violates RMS and prevents the use of the calculated bounds for determining schedulability of the task set.

=== Mitigating mis-prioritized ISRs ===
One method for mitigating a mis-prioritized ISR is to adjust the analysis by reducing the ISR's period to equal the shortest task period, if possible. Imposing this shorter period results in a prioritization that conforms to RMS, but also results in a higher utilization factor for the ISR and therefore for the total utilization factor; this may still be below the allowable bound, in which case schedulability can be proven. As an example, consider a hardware ISR that has a computation time <math>C_{isr}</math> of 500 microseconds and a period <math>T_{isr}</math> of 4 milliseconds. If the shortest scheduler-controlled task has a period <math>T_1</math> of 1 millisecond, then the ISR would have a higher priority but a lower rate, which violates RMS. For the purposes of proving schedulability, set <math>T_{isr} = T_1</math> and recalculate the utilization factor for the ISR (which also raises the total utilization factor). In this case, <math>U_{isr} = C_{isr}/T_{isr}</math> changes from <math>0.5\,\text{ms}/4\,\text{ms} = 0.125</math> to <math>0.5\,\text{ms}/1\,\text{ms} = 0.5</math>. This utilization factor is used when adding up the total utilization factor for the task set and comparing it to the upper bound to prove schedulability. It should be emphasized that adjusting the period of the ISR is for analysis only and that the true period of the ISR remains unchanged.

Another method for mitigating a mis-prioritized ISR is to have the ISR only set a new semaphore/mutex, while moving the time-intensive processing to a new process that has been appropriately prioritized using RMS and blocks on the new semaphore/mutex. When determining schedulability, a margin of CPU utilization due to ISR activity should be subtracted from the least upper bound. ISRs with negligible utilization may be ignored.
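The period adjustment described above amounts to a small change in the analysis arithmetic. A minimal sketch in Python (the names are illustrative and not tied to any particular RTOS):

<syntaxhighlight lang="python">
def isr_utilization_for_analysis(c_isr, t_isr, shortest_task_period):
    """Utilization to charge to a mis-prioritized ISR in RMS analysis.

    If the ISR's period is longer than the shortest scheduler-controlled
    period, the analysis treats the ISR as if it ran at that shorter
    period; the ISR's real period is left unchanged.
    """
    analysis_period = min(t_isr, shortest_task_period)
    return c_isr / analysis_period

# The example above: C_isr = 0.5 ms, T_isr = 4 ms, shortest task period 1 ms.
print(isr_utilization_for_analysis(0.5, 4.0, 1.0))  # 0.5 instead of 0.125
</syntaxhighlight>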
==Examples==

===Example 1===
{| class="wikitable"
|-
! Process
! Computation time ''C''
! Release period ''T''
! Priority
|-
! P1
| 1
| 8
| 2
|-
! P2
| 2
| 5
| 1
|-
! P3
| 2
| 10
| 3
|}

Under RMS, P2 has the highest release rate (i.e. the shortest release period) and so would have the highest priority, followed by P1 and finally P3.

==== Least Upper Bound ====
The utilization will be:

:<math>U = \frac{1}{8} + \frac{2}{5} + \frac{2}{10} = 0.725</math>

The sufficient condition for three processes, under which we can conclude that the system is schedulable, is:

:<math>U_{lub} = 3(2^\frac{1}{3} - 1) = 0.77976</math>

Because <math>U < U_{lub}</math>, and because being below the least upper bound is a sufficient condition, the system is guaranteed to be schedulable.

===Example 2===
{| class="wikitable"
|-
! Process
! Computation time ''C''
! Release period ''T''
! Priority
|-
! P1
| 3
| 16
| 3
|-
! P2
| 2
| 5
| 1
|-
! P3
| 2
| 10
| 2
|}

Under RMS, P2 has the highest release rate (i.e. the shortest release period) and so would have the highest priority, followed by P3 and finally P1.

==== Least Upper Bound ====
Using the Liu and Layland bound, as in Example 1, the sufficient condition for three processes, under which we can conclude that the task set is schedulable, remains:

:<math>U_{lub} = 3(2^\frac{1}{3} - 1) = 0.77976</math>

The total utilization will be:

:<math>U = \frac{3}{16} + \frac{2}{5} + \frac{2}{10} = 0.7875</math>

Since <math>U > U_{lub}</math>, the system is not guaranteed to be schedulable by the Liu and Layland bound.

==== Hyperbolic Bound ====
Using the tighter hyperbolic bound:

:<math>\prod_{i=1}^n (U_i +1) = \left(\frac{3}{16}+1\right) \times \left(\frac{2}{5}+1\right) \times \left(\frac{2}{10}+1\right) = 1.995 \leq 2</math>

it is found that the task set ''is'' schedulable.

===Example 3===
{| class="wikitable"
|-
! Process
! Computation time ''C''
! Release period ''T''
! Priority
|-
! P1
| 7
| 32
| 3
|-
! P2
| 2
| 5
| 1
|-
! P3
| 2
| 10
| 2
|}

Under RMS, P2 has the highest rate (i.e. the shortest period) and so would have the highest priority, followed by P3 and finally P1.

==== Least Upper Bound ====
Using the Liu and Layland bound, as in Example 1, the sufficient condition for three processes, under which we can conclude that the task set is schedulable, remains:

:<math>U_{lub} = 3(2^\frac{1}{3} - 1) = 0.77976</math>

The total utilization will be:

:<math>U = \frac{7}{32} + \frac{2}{5} + \frac{2}{10} = 0.81875</math>

Since <math>U > U_{lub}</math>, the system is not guaranteed to be schedulable by the Liu and Layland bound.

==== Hyperbolic Bound ====
Using the tighter hyperbolic bound:

:<math>\prod_{i=1}^n (U_i +1) = \left(\frac{7}{32}+1\right) \times \left(\frac{2}{5}+1\right) \times \left(\frac{2}{10}+1\right) = 2.0475</math>

Since <math>2.0475 > 2</math>, the system is not guaranteed to be schedulable by the hyperbolic bound either.

==== Harmonic Task Set Analysis ====
Because <math>T_3 = 2 T_2</math>, tasks 2 and 3 can be considered a harmonic task subset. Task 1 forms its own harmonic task subset. Therefore, the number of harmonic task subsets, {{mvar|K}}, is 2.

:<math>U_{lub,harmonic} = K(2^\frac{1}{K} - 1) = 2(2^\frac{1}{2} - 1) = 0.828</math>

Using the total utilization factor calculated above (0.81875), since <math>0.81875 < 0.828</math>, the system is determined to be schedulable.
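Example 3's harmonic-chain analysis can also be checked mechanically. In the following sketch in Python, the grouping of tasks into harmonic chains is supplied by hand, as in the analysis above, and the helper names are illustrative:

<syntaxhighlight lang="python">
def is_harmonic(periods):
    """True if every period in the group is an integer multiple of all shorter ones."""
    ordered = sorted(periods)
    return all(b % a == 0 for a, b in zip(ordered, ordered[1:]))

def passes_harmonic_chain_test(chains):
    """Kuo-Mok test: total utilization vs. K(2^(1/K) - 1) for K harmonic chains.

    `chains` is a list of chains; each chain is a list of (c, t) pairs.
    """
    assert all(is_harmonic([t for _, t in chain]) for chain in chains)
    k = len(chains)
    utilization = sum(c / t for chain in chains for c, t in chain)
    return utilization <= k * (2 ** (1.0 / k) - 1)

# Example 3: chains {P2, P3} (periods 5 and 10) and {P1} (period 32), so K = 2.
print(passes_harmonic_chain_test([[(2, 5), (2, 10)], [(7, 32)]]))  # 0.81875 <= 0.828 -> True
</syntaxhighlight>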
==See also==
*[[Deadline-monotonic scheduling]]
*[[Deos]], a time and space partitioned real-time operating system containing a working Rate Monotonic Scheduler.
*[[Dynamic priority scheduling]]
*[[Earliest deadline first scheduling]]
*[[RTEMS]], an open source real-time operating system containing a working Rate Monotonic Scheduler.
*[[Scheduling (computing)]]
*[[Queueing theory]]
*[[Kingman's formula]]

==References==
{{Reflist|30em}}

==Further reading==
*{{citation|last=Buttazzo|first=Giorgio|title=Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications|publisher=Springer|location=New York, NY|year=2011}}.
*{{citation|author=Alan Burns and Andy Wellings|title=Real-Time Systems and Programming Languages|year=2009|publisher=Addison-Wesley|edition=4th|isbn=978-0-321-41745-9}}
*{{citation|last=Liu|first=Jane W.S.|author-link=Jane Liu|title=Real-time systems|publisher=Prentice Hall|location=Upper Saddle River, NJ|year=2000}}, Chapter 6.
*{{citation|first1=M.|last1=Joseph|first2=P.|last2=Pandya|title=Finding response times in real-time systems|journal=BCS Computer Journal|volume=29|issue=5|pages=390–395|year=1986|doi=10.1093/comjnl/29.5.390|doi-access=free}}.
*{{citation|first1=Lui|last1=Sha|first2=John B.|last2=Goodenough|title=Real-Time Scheduling Theory and Ada|journal=IEEE Computer|pages=53–62|date=April 1990|volume=23|issue=4|doi=10.1109/2.55469|s2cid=12647942}}

==External links==
* [http://research.microsoft.com/~mbj/Mars_Pathfinder/Authoritative_Account.html Mars Pathfinder Bug] from Research @ Microsoft
* [http://catless.ncl.ac.uk/Risks/19.49.html#subj1 What really happened on Mars Rover Pathfinder by Mike Jones] from The Risks Digest, Vol. 19, Issue 49
* [https://web.archive.org/web/20191014204826/http://knusbaum.com/mars/Authoritative_Account A first-hand account of the Mars Pathfinder bug] by the engineers who dealt with it

{{Processor scheduling}}

[[Category:Processor scheduling algorithms]]
[[Category:Real-time computing]]