Scheduling (computing)
==Operating system process scheduler implementations==
The algorithm used may be as simple as [[round-robin scheduling|round-robin]], in which each process is given an equal time slice (for instance 1 ms; time slices usually range between 1 ms and 100 ms) in a cycling list. So, process A executes for 1 ms, then process B, then process C, then back to process A.

More advanced algorithms take into account process priority, or the importance of the process, allowing some processes to use more time than others. The kernel always uses whatever resources it needs to ensure proper functioning of the system, and so can be said to have infinite priority. In [[Symmetric multiprocessing|SMP]] systems, [[processor affinity]] is considered to increase overall system performance, even if it may cause an individual process to run more slowly; it generally improves performance by reducing [[cache thrashing]].

===OS/360 and successors===
IBM [[OS/360]] was available with three different schedulers. The differences were such that the variants were often considered three different operating systems:
* The ''Single Sequential Scheduler'' option, also known as the ''Primary Control Program (PCP)'', provided sequential execution of a single stream of jobs.
* The ''Multiple Sequential Scheduler'' option, known as ''Multiprogramming with a Fixed Number of Tasks (MFT)'', provided execution of multiple concurrent jobs. Execution was governed by a priority which had a default for each stream or could be requested separately for each job. MFT version II added subtasks (threads), which executed at a priority based on that of the parent job. Each job stream defined the maximum amount of memory which could be used by any job in that stream.
* The ''Multiple Priority Schedulers'' option, or ''Multiprogramming with a Variable Number of Tasks (MVT)'', featured subtasks from the start; each job requested the priority and memory it required before execution.
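The basic round-robin cycling described at the start of this section can be sketched as a minimal simulation; the process names and workloads below are invented for the example, and the sketch ignores priorities, blocking, and I/O:

```python
from collections import deque

def round_robin(jobs, quantum=1):
    """Simulate round-robin scheduling over a cycling list of processes.

    ``jobs`` maps a process name to its remaining time units (illustrative
    inputs, not any real workload). Returns the order in which processes
    receive the CPU.
    """
    ready = deque(jobs.items())       # the cycling list of runnable processes
    order = []
    while ready:
        name, remaining = ready.popleft()
        order.append(name)            # the process runs for one quantum
        remaining -= quantum
        if remaining > 0:             # unfinished work goes to the back of the line
            ready.append((name, remaining))
    return order

# Processes A, B, and C need 2, 1, and 3 time units respectively.
print(round_robin({"A": 2, "B": 1, "C": 3}))
# → ['A', 'B', 'C', 'A', 'C', 'C']
```

As the output shows, each process gets one quantum in turn, and finished processes simply drop out of the rotation.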
Later virtual storage versions of MVS added a ''[[Workload Manager]]'' feature to the scheduler, which schedules processor resources according to an elaborate scheme defined by the installation.

===Windows===
Very early [[MS-DOS]] and Microsoft Windows systems were non-multitasking, and as such did not feature a scheduler. [[Windows 3.1x]] used a non-preemptive scheduler, meaning that it did not interrupt programs; it relied on each program to end, or to tell the OS that it did not need the processor, before the OS could move on to another process. This is usually called cooperative multitasking. Windows 95 introduced a rudimentary preemptive scheduler; however, for legacy support it opted to let 16-bit applications run without preemption.<ref>{{webarchive |url=https://web.archive.org/web/*/www.jgcampbell.com/caos/html/node13.html |date=* |title=Early Windows }}</ref>

[[Windows NT]]-based operating systems use a multilevel feedback queue. 32 priority levels are defined, 0 through 31, with priorities 0 through 15 being ''normal'' priorities and priorities 16 through 31 being soft real-time priorities, requiring privileges to assign. Level 0 is reserved for the operating system. User interfaces and APIs work with priority classes for the process and the threads in the process, which are then combined by the system into the absolute priority level. The kernel may change the priority level of a thread depending on its I/O and CPU usage and whether it is interactive (i.e. accepts and responds to input from humans), raising the priority of interactive and I/O-bound processes and lowering that of CPU-bound processes, to increase the responsiveness of interactive applications.<ref>{{cite web|title=A Tale of Two Schedulers Windows NT and Windows CE |url=http://sriramk.com/schedulers.html |author=Sriram Krishnan |url-status=dead |archive-url=https://web.archive.org/web/20120722015555/http://sriramk.com/schedulers.html |archive-date=July 22, 2012 }}</ref> The scheduler was modified in [[Windows Vista]] to use the [[Time Stamp Counter|cycle counter register]] of modern processors to keep track of exactly how many CPU cycles a thread has executed, rather than just using an interval-timer interrupt routine.<ref>{{cite web|url=https://technet.microsoft.com/en-us/magazine/cc162494.aspx |title=Windows Administration: Inside the Windows Vista Kernel: Part 1 |website=Technet.microsoft.com |date=2016-11-14 |access-date=2016-12-09}}</ref> Vista also uses a priority scheduler for the I/O queue so that disk defragmenters and other such programs do not interfere with foreground operations.<ref>{{cite web |url=http://blog.gabefrost.com/?p=25 |title=Archived copy |website=blog.gabefrost.com |access-date=15 January 2022 |archive-url=https://web.archive.org/web/20080219174631/http://blog.gabefrost.com/?p=25 |archive-date=19 February 2008 |url-status=dead}}</ref>

===Classic Mac OS and macOS===
Mac OS 9 uses cooperative scheduling for threads, where one process controls multiple cooperative threads, and also provides preemptive scheduling for multiprocessing tasks. The kernel schedules multiprocessing tasks using a preemptive scheduling algorithm. All Process Manager processes run within a special multiprocessing task, called the ''blue task''.
Those processes are scheduled cooperatively, using a [[round-robin scheduling]] algorithm; a process yields control of the processor to another process by explicitly calling a [[blocking function]] such as <code>WaitNextEvent</code>. Each process has its own copy of the [[Thread Manager]] that schedules that process's threads cooperatively; a thread yields control of the processor to another thread by calling <code>YieldToAnyThread</code> or <code>YieldToThread</code>.<ref name=appletn2028>{{cite web|url=https://developer.apple.com/library/archive/technotes/tn/tn2028.html |title= Technical Note TN2028: Threading Architectures |website=developer.apple.com |access-date=2019-01-15}}</ref>

macOS uses a multilevel feedback queue, with four priority bands for threads{{snd}}normal, system high priority, kernel mode only, and real-time.<ref>{{cite web|url=https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/scheduler/scheduler.html |title= Mach Scheduling and Thread Interfaces |website=developer.apple.com |access-date=2019-01-15}}</ref> Threads are scheduled preemptively; macOS also supports cooperatively scheduled threads in its implementation of the Thread Manager in [[Carbon (API)|Carbon]].<ref name=appletn2028/>

===AIX===
In AIX Version 4 there are three possible values for thread scheduling policy:
* First In, First Out: Once a thread with this policy is scheduled, it runs to completion unless it is blocked, it voluntarily yields control of the CPU, or a higher-priority thread becomes dispatchable. Only fixed-priority threads can have a FIFO scheduling policy.
* Round Robin: This is similar to the AIX Version 3 scheduler round-robin scheme based on 10 ms time slices. When an RR thread has control at the end of the time slice, it moves to the tail of the queue of dispatchable threads of its priority. Only fixed-priority threads can have a Round Robin scheduling policy.
* OTHER: This policy is defined by POSIX 1003.4a as implementation-defined. In AIX Version 4, this policy is defined to be equivalent to RR, except that it applies to threads with non-fixed priority. The recalculation of the running thread's priority value at each clock interrupt means that a thread may lose control because its priority value has risen above that of another dispatchable thread. This is the AIX Version 3 behavior.

Threads are primarily of interest for applications that currently consist of several asynchronous processes. These applications might impose a lighter load on the system if converted to a multithreaded structure.

AIX 5 implements the following scheduling policies: FIFO, round robin, and a fair round robin. The FIFO policy has three different implementations: FIFO, FIFO2, and FIFO3. The round robin policy is named SCHED_RR in AIX, and the fair round robin is called SCHED_OTHER.<ref>[http://www.ibm.com/developerworks/aix/library/au-aix5_cpu/index.html#N100F6] {{webarchive|url=https://web.archive.org/web/20110811094049/http://www.ibm.com/developerworks/aix/library/au-aix5_cpu/index.html|date=2011-08-11}}</ref>

===Linux===
{{See also|Linux kernel#Scheduling}}
[[File:Simplified Structure of the Linux Kernel.svg|thumb|upright=1.5|A highly simplified structure of the Linux kernel, which includes process schedulers, I/O schedulers, and [[Linux kernel packet scheduler|packet schedulers]] ]]

====Linux 1.2====
Linux 1.2 used a [[round-robin scheduling]] policy.<ref name=":0">{{Cite web |last=Jones |first=M. |date=2018-09-18 |orig-date=first published on 2009-12-14 |title=Inside the Linux 2.6 Completely Fair Scheduler |url=https://developer.ibm.com/tutorials/l-completely-fair-scheduler/ |access-date=2024-02-07 |website=developer.ibm.com}}</ref>

====Linux 2.2====
Linux 2.2 added scheduling classes and support for [[symmetric multiprocessing]] (SMP).<ref name=":0" />

====Linux 2.4====
In [[Linux]] 2.4,<ref name=":0" /> an [[O(n) scheduler]] with a [[multilevel feedback queue]] with priority levels ranging from 0 to 140 was used; 0–99 were reserved for real-time tasks and 100–140 were considered [[nice (Unix)|nice]] task levels. For real-time tasks, the time quantum for switching processes was approximately 200 ms, and for nice tasks approximately 10 ms.{{citation needed|date=December 2011}} The scheduler ran through the [[run queue]] of all ready processes, letting the highest-priority processes go first and run through their time slices, after which they were placed in an expired queue. When the active queue was empty, the expired queue became the active queue and vice versa. However, some enterprise [[Linux distributions]] such as [[SUSE Linux Enterprise Server]] replaced this scheduler with a backport of the [[O(1) scheduler]] (which was maintained by [[Alan Cox (computer programmer)|Alan Cox]] in his Linux 2.4-ac kernel series) to the Linux 2.4 kernel used by the distribution.

====Linux 2.6.0 to Linux 2.6.22====
In versions 2.6.0 to 2.6.22, the kernel used an [[O(1) scheduler]] developed by [[Ingo Molnar]] and many other kernel developers during the Linux 2.5 development. For many kernels in this time frame, [[Con Kolivas]] developed patch sets which improved interactivity with this scheduler or even replaced it with his own schedulers.
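The active/expired queue arrangement described above can be sketched as a minimal simulation. This is an illustration of the queue-swapping idea only, not kernel code; the task names are invented, and a sort over a plain list stands in for the kernel's per-priority run queues:

```python
def run_epoch(active, expired):
    """Run one epoch over the active queue, then swap the queues (sketch).

    ``active`` and ``expired`` are lists of (priority, name) pairs, where a
    lower number means a higher priority, echoing the kernel's 0-140 range.
    Each task runs through its time slice and moves to the expired queue;
    when the active queue drains, the two queues swap roles for the next
    epoch. Returns the order in which tasks ran.
    """
    order = []
    while active:
        active.sort()                     # highest priority (lowest value) runs first
        priority, name = active.pop(0)    # a linear scan, echoing the O(n) behavior
        order.append(name)                # the task consumes its time slice...
        expired.append((priority, name))  # ...and waits out the rest of the epoch
    active[:], expired[:] = expired, []   # the queues swap roles in place
    return order

# A real-time task (priority 0) runs before a nice task (priority 100).
active, expired = [(100, "worker"), (0, "rt_task")], []
print(run_epoch(active, expired))
# → ['rt_task', 'worker']
```

After the call, the former expired queue has become the (refilled) active queue, ready for the next epoch.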
====Linux 2.6.23 to Linux 6.5====
Con Kolivas' work, most significantly his implementation of [[fair-share scheduling|fair scheduling]] named [[Rotating Staircase Deadline]] (RSDL), inspired Ingo Molnár to develop the [[Completely Fair Scheduler]] (CFS) as a replacement for the earlier [[O(1) scheduler]], crediting Kolivas in his announcement.<ref>{{cite mailing list | last=Molnár | first=Ingo | author-link=Ingo Molnár | title=[patch] Modular Scheduler Core and Completely Fair Scheduler [CFS] | url=https://lwn.net/Articles/230501/ | mailing-list=linux-kernel | date=2007-04-13}}</ref> CFS is the first implementation of a fair queuing [[process scheduler]] widely used in a general-purpose operating system.<ref name="dwrr">{{cite web|url=http://happyli.org/tongli/papers/dwrr.pdf|title=Efficient and Scalable Multiprocessor Fair Scheduling Using Distributed Weighted Round-Robin|author1=Tong Li|author2=Dan Baumberger|author3=Scott Hahn|website=Happyli.org|access-date=2016-12-09}}</ref>

CFS uses a well-studied, classic scheduling algorithm called [[fair queuing]], originally invented for [[packet network]]s. Fair queuing had previously been applied to CPU scheduling under the name [[stride scheduling]]. The fair queuing CFS scheduler has a scheduling complexity of <math>O(\log N)</math>, where {{mvar|N}} is the number of tasks in the [[run queue|runqueue]]. Choosing a task can be done in constant time, but reinserting a task after it has run requires <math>O(\log N)</math> operations, because the [[run queue]] is implemented as a [[red–black tree]].

The [[Brain Fuck Scheduler]], also created by Con Kolivas, is an alternative to the CFS.
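The fair-queuing idea behind CFS can be sketched as follows. This is an illustration of the virtual-runtime mechanism, not the kernel's actual implementation; the task names and weights are invented, and a binary heap stands in for the kernel's red–black tree (both give the <math>O(\log N)</math> reinsertion cost noted above):

```python
import heapq

def fair_schedule(weights, steps):
    """Fair-queuing sketch in the spirit of CFS (illustrative only).

    ``weights`` maps a task name to its CPU weight; a higher weight means
    a larger CPU share. Each step runs the task with the smallest virtual
    runtime, then advances that task's vruntime in inverse proportion to
    its weight, so heavier tasks fall behind more slowly and run more
    often. Returns the order in which tasks ran.
    """
    timeline = [(0.0, name) for name in weights]  # (vruntime, task) pairs
    heapq.heapify(timeline)
    order = []
    for _ in range(steps):
        vruntime, name = heapq.heappop(timeline)  # "leftmost" task: least vruntime
        order.append(name)
        # Reinsertion costs O(log N), mirroring the red-black tree insert.
        heapq.heappush(timeline, (vruntime + 1.0 / weights[name], name))
    return order

# Over 6 steps, a weight-2 task receives twice the CPU of a weight-1 task.
order = fair_schedule({"heavy": 2, "light": 1}, 6)
print(order.count("heavy"), order.count("light"))
# → 4 2
```

The proportional split falls out of the vruntime bookkeeping alone: no explicit time-slice or priority-boost logic is needed.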
====Linux 6.6====
In 2023, Peter Zijlstra proposed replacing CFS with an [[earliest eligible virtual deadline first scheduling]] (EEVDF) process scheduler.<ref>{{Cite web |title=EEVDF Scheduler May Be Ready For Landing With Linux 6.6 |url=https://www.phoronix.com/news/Linux-6.6-EEVDF-Likely |access-date=2023-08-31 |website=[[Phoronix]] |language=en}}</ref><ref>{{Cite web |title=EEVDF Scheduler Merged For Linux 6.6, Intel Hybrid Cluster Scheduling Re-Introduced |url=https://www.phoronix.com/news/Linux-6.6-EEVDF-Merged |access-date=2024-02-07 |website=www.phoronix.com |language=en}}</ref> The aim was to remove the need for CFS ''latency nice'' patches.<ref>{{Cite web |title=An EEVDF CPU scheduler for Linux [LWN.net] |url=https://lwn.net/Articles/925371/ |access-date=2023-08-31 |website=[[LWN.net]]}}</ref>

====Linux 6.12====
Linux 6.12 added support for [[User space and kernel space|userspace]] scheduler extensions, also known as sched_ext.<ref>{{Cite web |title=Sched_ext Merged For Linux 6.12 - Scheduling Policies As BPF Programs |url=https://www.phoronix.com/news/Linux-6.12-Lands-sched-ext |access-date=2025-02-10 |website=www.phoronix.com |language=en}}</ref> These schedulers can be installed to replace the default scheduler.<ref>{{Cite web |title=Pluggable CPU schedulers - openSUSE Wiki |url=https://en.opensuse.org/Pluggable_CPU_schedulers |access-date=2025-02-10 |website=en.opensuse.org}}</ref>

===FreeBSD===
[[FreeBSD]] uses a multilevel feedback queue with priorities ranging from 0 to 255. 0–63 are reserved for interrupts, 64–127 for the top half of the kernel, 128–159 for real-time user threads, 160–223 for time-shared user threads, and 224–255 for idle user threads.
Also, like Linux, it uses the active queue setup, but it also has an idle queue.<ref name="opensolaris-queue">{{cite web|title=Comparison of Solaris, Linux, and FreeBSD Kernels |url=http://cn.opensolaris.org/files/solaris_linux_bsd_cmp.pdf |url-status=dead |archive-url=https://web.archive.org/web/20080807124435/http://cn.opensolaris.org/files/solaris_linux_bsd_cmp.pdf |archive-date=August 7, 2008 }}</ref>

===NetBSD===
[[NetBSD]] uses a multilevel feedback queue with priorities ranging from 0 to 223. 0–63 are reserved for time-shared threads (default, SCHED_OTHER policy), 64–95 for user threads which entered [[kernel space]], 96–127 for kernel threads, 128–191 for user real-time threads (SCHED_FIFO and SCHED_RR policies), and 192–223 for [[Interrupt|software interrupts]].

===Solaris===
[[Solaris (operating system)|Solaris]] uses a multilevel feedback queue with priorities ranging between 0 and 169. Priorities 0–59 are reserved for time-shared threads, 60–99 for system threads, 100–159 for real-time threads, and 160–169 for low-priority interrupts. Unlike Linux,<ref name="opensolaris-queue"/> when a process is done using its time quantum, it is given a new priority and put back in the queue. Solaris 9 introduced two new scheduling classes, namely the fixed-priority class and the fair share class. The threads with fixed priority have the same priority range as that of the time-sharing class, but their priorities are not dynamically adjusted. The fair share scheduling class uses CPU ''shares'' to prioritize threads for scheduling decisions. CPU shares indicate the entitlement to CPU resources. They are allocated to a set of processes, which are collectively known as a project.<ref name="Galvin"/>

===Summary===
{| class="wikitable sortable"
|-
! Operating System
! Preemption
! Algorithm
|-
| [[Amiga OS]]
| {{Yes}}
| Prioritized [[round-robin scheduling]]
|-
| [[FreeBSD]]
| {{Yes}}
| [[Multilevel feedback queue]]
|-
| [[Linux kernel]] before 2.6.0
| {{Yes}}
| [[Multilevel feedback queue]]
|-
| Linux kernel 2.6.0–2.6.23
| {{Yes}}
| [[O(1) scheduler]]
|-
| Linux kernel 2.6.23–6.6
| {{Yes}}
| [[Completely Fair Scheduler]]
|-
| Linux kernel 6.6 and later
| {{Yes}}
| [[Earliest eligible virtual deadline first scheduling]] (EEVDF)
|-
| [[classic Mac OS]] pre-9
| {{No|None}}
| [[Cooperative scheduler]]
|-
| [[Mac OS 9]]
| {{Maybe|Some}}
| Preemptive scheduler for MP tasks, and cooperative for processes and threads
|-
| [[macOS]]
| {{Yes}}
| [[Multilevel feedback queue]]
|-
| [[NetBSD]]
| {{Yes}}
| [[Multilevel feedback queue]]
|-
| [[Solaris (operating system)|Solaris]]
| {{Yes}}
| [[Multilevel feedback queue]]
|-
| [[Windows 3.1x]]
| {{No|None}}
| [[Cooperative scheduler]]
|-
| [[Windows 95]], [[Windows 98|98]], [[Windows Me|Me]]
| {{Maybe|Half}}
| Preemptive scheduler for 32-bit processes, and cooperative for 16-bit processes
|-
| [[Windows NT]] (including 2000, XP, Vista, 7, and Server)
| {{Yes}}
| [[Multilevel feedback queue]]
|}