== Unix-style load calculation ==

All Unix and Unix-like systems generate a dimensionless [[Software metric|metric]] of three "load average" numbers in the [[kernel (operating system)|kernel]]. Users can easily query the current result from a [[Unix shell]] by running the <code>[[uptime]]</code> command:

<syntaxhighlight lang="console">
$ uptime
14:34:03 up 10:43, 4 users, load average: 0.06, 0.11, 0.09
</syntaxhighlight>

The [[W (Unix)|<code>w</code>]] and [[top (software)|<code>top</code>]] commands show the same three load average numbers, as do a range of [[graphical user interface]] utilities.

In operating systems based on the [[Linux kernel]], this information can be read from the [[procfs|<code>/proc/loadavg</code>]] file. To explore this kind of information in depth, per Linux's [[Filesystem Hierarchy Standard]], architecture-dependent information is exposed in the file <code>/proc/stat</code>.<ref>{{Cite web |url = https://www.kernel.org/doc/html/latest/admin-guide/cpu-load.html |title = CPU load |access-date=2023-10-04 }}</ref><ref>{{Cite web |url = https://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html |title = /proc |access-date=2023-10-04 |website = Linux Filesystem Hierarchy }}</ref><ref>{{Cite web |url = https://www.kernel.org/doc/html/latest/filesystems/proc.html#miscellaneous-kernel-statistics-in-proc-stat |title = Miscellaneous kernel statistics in /proc/stat |access-date=2023-10-04 }}</ref>

An idle computer has a load number of 0 (the idle process is not counted). Each [[process (computing)|process]] using or waiting for the [[Central processing unit|CPU]] (the ''ready queue'' or [[run queue]]) increments the load number by 1; each process that terminates decrements it by 1. Most UNIX systems count only processes in the ''running'' (on CPU) or ''runnable'' (waiting for CPU) [[Process state|states]]. Linux, however, also includes processes in uninterruptible sleep states (usually waiting for [[Hard disk drive|disk]] activity), which can lead to markedly different results if many processes remain blocked in [[Input/output|I/O]] because of a busy or stalled I/O system.<ref>{{Cite web|url=https://linuxtechsupport.blogspot.com/2008/10/what-exactly-is-load-average.html|title=Linux Tech Support: What exactly is a load average?|date=23 October 2008}}</ref> This includes, for example, processes blocked by an [[Network File System|NFS]] server failure or by excessively slow [[Data storage|media]] (e.g., [[USB]] 1.x storage devices). Such circumstances can result in an elevated load average that does not reflect an actual increase in CPU use (but still gives an idea of how long users have to wait).

Systems calculate the load ''average'' as the [[Moving average#Exponential moving average|exponentially damped/weighted moving average]] of the load ''number''. The three values of the load average refer to the past one, five, and fifteen minutes of system operation.<ref name="drdobbs">{{cite web |url=https://www.linuxjournal.com/article/9001 |title=Examining Load Average |first=Ray |last=Walker |date=1 December 2006 |work=Linux Journal |access-date=13 March 2012 }}</ref>

Mathematically speaking, all three values always average the entire system load since the system started up. They all decay exponentially, but at different ''speeds'': they decay by a factor of ''e'' after 1, 5, and 15 minutes respectively. Hence, the 1-minute load average consists of 63% (more precisely: 1 - 1/''e'') of the load from the last minute and 37% (1/''e'') of the average load since start-up, excluding the last minute. For the 5- and 15-minute load averages, the same 63%/37% ratio is computed over 5 and 15 minutes, respectively. Therefore, it is not technically accurate that the 1-minute load average includes only the last 60 seconds of activity, since it also includes 37% of the activity from before that, but it is correct to state that it includes ''mostly'' the last minute.
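The decay just described can be illustrated with a short numerical sketch. The following Python fragment is an approximation for illustration only, not kernel code: it assumes a 5-second sampling interval (roughly what the Linux kernel uses) and plain floating-point arithmetic, whereas actual kernels typically use fixed-point arithmetic; the names <code>update</code>, <code>WINDOWS</code> and <code>INTERVAL</code> are made up for this example.

<syntaxhighlight lang="python">
from math import exp

# Decay windows for the 1-, 5- and 15-minute averages, in seconds.
WINDOWS = (60.0, 300.0, 900.0)
# Assumed sampling interval in seconds (Linux samples roughly every 5 s).
INTERVAL = 5.0

def update(loadavg, active_tasks):
    """Apply one exponential-decay step to the three load averages.

    loadavg      -- current (1 min, 5 min, 15 min) averages
    active_tasks -- number of running/runnable (and, on Linux,
                    uninterruptible) tasks sampled right now
    """
    return tuple(
        avg * exp(-INTERVAL / w) + active_tasks * (1.0 - exp(-INTERVAL / w))
        for avg, w in zip(loadavg, WINDOWS)
    )

# Example: an idle system on which exactly one task becomes busy.
load = (0.0, 0.0, 0.0)
for _ in range(12):   # one minute of 5-second samples
    load = update(load, 1)
print(load)           # the 1-minute value approaches 1 - 1/e, about 0.63
</syntaxhighlight>

After one minute of such samples the 1-minute figure has reached only about 0.63 rather than 1.0, which is exactly the 63%/37% split described above.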
=== Interpretation ===

For single-CPU systems that are [[CPU bound]], one can think of the load average as a measure of system utilization during the respective time period. For systems with multiple CPUs, one must divide the load by the number of processors in order to get a comparable measure (see the sketch at the end of this section).

For example, one can interpret a load average of "1.73 0.60 7.98" on a single-CPU system as follows:
* During the last minute, the system was overloaded by 73% on average (1.73 runnable processes, so that 0.73 processes had to wait for a turn on the single CPU, on average).
* During the last 5 minutes, the CPU was idle 40% of the time, on average.
* During the last 15 minutes, the system was overloaded by 698% on average (7.98 runnable processes, so that 6.98 processes had to wait for a turn on the single CPU, on average).

This means that this system (CPU, disk, memory, etc.) could have handled all the work scheduled for the last minute if it were 1.73 times as fast. In a system with four CPUs, a load average of 3.73 would indicate that there were, on average, 3.73 processes ready to run, and each one could be scheduled onto a CPU.

On modern UNIX systems, the treatment of [[Thread (computing)|threading]] with respect to load averages varies. Some systems treat threads as processes for the purposes of load-average calculation: each thread waiting to run adds 1 to the load. However, other systems, especially systems implementing so-called [[Thread (computing)#M:N (hybrid threading)|M:N threading]], use different strategies, such as counting the process exactly once for the purpose of load (regardless of the number of threads), or counting only the threads currently exposed by the user-thread scheduler to the kernel, which may depend on the level of concurrency set on the process. Linux appears to count each thread separately as adding 1 to the load.<ref>See http://serverfault.com/a/524818/27813</ref>
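To relate the reported figures to the number of available processors, as discussed at the start of this section, a minimal sketch such as the following can be used. It is an illustration only, not part of any standard tooling: it relies on Python's <code>os.getloadavg()</code> (available on Unix-like systems) and <code>os.cpu_count()</code>, and the name <code>normalized_load</code> is made up for this example.

<syntaxhighlight lang="python">
import os

def normalized_load():
    """Return the 1-, 5- and 15-minute load averages divided by the
    number of logical CPUs, so that a value around 1.0 roughly means
    "all processors busy" (on Linux the figure also reflects tasks in
    uninterruptible sleep, so it is not a pure CPU measure)."""
    cpus = os.cpu_count() or 1
    return tuple(avg / cpus for avg in os.getloadavg())

if __name__ == "__main__":
    one, five, fifteen = normalized_load()
    print(f"per-CPU load: {one:.2f} {five:.2f} {fifteen:.2f}")
</syntaxhighlight>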