==Load limits==
A web server (program installation) usually has pre-defined '''load limits''' for each combination of [[#Operating conditions|operating conditions]], because it is constrained by OS resources and because it can handle only a limited number of concurrent client connections (usually between 2 and several tens of thousands for each active web server process; see also the [[C10k problem]] and the [[C10M problem]]). When a web server is near or over its load limits, it becomes '''overloaded''' and may turn '''unresponsive'''.

===Causes of overload===
At any time, web servers can be overloaded due to one or more of the following causes:
* '''Excess legitimate web traffic''': thousands or even millions of clients connecting to the website in a short amount of time (e.g., the [[Slashdot effect]]).
* [[Distributed Denial of Service|Distributed denial-of-service]] attacks: a denial-of-service (DoS) or distributed denial-of-service (DDoS) attack is an attempt to make a computer or network resource unavailable to its intended users.
* [[Computer worm]]s, which sometimes cause abnormal traffic from millions of infected (but uncoordinated) computers.
* [[XSS worm]]s, which can cause high traffic from millions of infected browsers or web servers.
* [[Internet bot]] traffic that is not filtered or limited on large websites with few network resources (e.g., [[Bandwidth (computing)|bandwidth]]) or hardware resources (CPUs, RAM, disks).
* [[Internet]] (network) slowdowns (e.g., due to packet losses), so that client requests are served more slowly and the number of open connections grows until server limits are reached.
* Web servers '''serving dynamic content''' while '''waiting for slow responses from [[Front and back ends|back-end]] computers''' (e.g., [[database]]s), perhaps because of too many queries mixed with too many inserts or updates of DB data; in these cases web servers have to wait for back-end responses before replying to HTTP clients, and during these waits so many new client connections/requests arrive that they become overloaded.
* '''Partial unavailability''' of web server [[computer]]s. This can happen because of required or urgent maintenance or upgrades, or because of hardware or software failures, such as [[Front and back ends|back-end]] (e.g., [[database]]) failures; in these cases the remaining web servers may receive too much traffic and become overloaded.

===Symptoms of overload===
The symptoms of an overloaded web server are usually the following:
* Requests are served with (possibly long) delays (from one second to a few hundred seconds).
* The web server returns an [[List of HTTP status codes|HTTP error code]], such as 500, 502,<ref>{{Cite web|url=https://www.lifewire.com/502-bad-gateway-error-explained-2622939|title=Getting a 502 Bad Gateway Error? Here's What to Do|last1=Fisher|first1=Tim|last2=Lifewire|website=Lifewire|language=en|access-date=2019-02-01|archive-date=23 February 2017|archive-url=https://web.archive.org/web/20170223042443/https://www.lifewire.com/502-bad-gateway-error-explained-2622939|url-status=live}}</ref><ref>{{Cite web|url=https://www.itpro.co.uk/go/30258|title=What is a 502 bad gateway and how do you fix it?|website=IT PRO|language=en|access-date=2019-02-01|archive-date=20 January 2023|archive-url=https://web.archive.org/web/20230120185257/https://www.itpro.co.uk/web-hosting/30258/what-is-a-502-bad-gateway-and-how-do-you-fix-it|url-status=live}}</ref> 503,<ref>{{Cite web|url=https://www.lifewire.com/503-service-unavailable-explained-2622940|title=Getting a 503 Service Unavailable Error? Here's What to Do|last1=Fisher|first1=Tim|last2=Lifewire|website=Lifewire|language=en|access-date=2019-02-01|archive-date=20 January 2023|archive-url=https://web.archive.org/web/20230120185318/https://www.lifewire.com/503-service-unavailable-explained-2622940|url-status=live}}</ref> 504,<ref>{{Cite web|url=https://www.lifewire.com/504-gateway-timeout-error-explained-2622941|title=Getting a 504 Gateway Timeout Error? Here's What to Do|last1=Fisher|first1=Tim|last2=Lifewire|website=Lifewire|language=en|access-date=2019-02-01|archive-date=23 April 2021|archive-url=https://web.archive.org/web/20210423182953/https://www.lifewire.com/504-gateway-timeout-error-explained-2622941|url-status=live}}</ref> 408, or even an intermittent [[HTTP 404|404]].
* The web server refuses or resets (interrupts) [[Transmission control protocol|TCP]] connections before returning any content.
* In very rare cases, the web server returns only part of the requested content. This behavior can be considered a [[Software bug|bug]], even though it usually arises as a symptom of overload.

===Anti-overload techniques===
To partially overcome above-average load limits and to prevent overload, most popular websites use common techniques such as the following:
* Tuning OS parameters for hardware capabilities and usage.
* Tuning web server parameters to improve their security and performance.
* Deploying {{strong|[[web cache]]}} techniques (not only for static content but, whenever possible, for dynamic content too).
* Managing network traffic by using:
** [[Firewall (computing)|Firewall]]s to block unwanted traffic coming from bad IP sources or having bad patterns;
** HTTP traffic managers to drop, redirect or rewrite requests having bad [[HTTP]] patterns;
** [[Bandwidth management]] and [[traffic shaping]], in order to smooth down peaks in network usage.
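Many bandwidth-management and traffic-shaping tools are built around the token-bucket algorithm: each request consumes a token, and tokens refill at a fixed rate, so sustained traffic is capped while short bursts are still allowed. A minimal sketch in Python (illustrative only; the class name and parameters are made up, not taken from any particular product):

```python
import time

class TokenBucket:
    """Token-bucket limiter: each request consumes one token;
    tokens refill continuously at a fixed rate, up to a burst capacity."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)          # tokens added per second
        self.capacity = float(capacity)  # maximum burst size
        self.tokens = float(capacity)    # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens proportionally to the time elapsed since the last call.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True      # request may pass
        return False         # over the limit: drop, delay, or answer 429/503

# Example: allow roughly 100 requests/second with bursts of up to 20.
bucket = TokenBucket(rate=100, capacity=20)
```

Whether a rejected request is dropped, queued, or answered with an error code is a policy choice left to the traffic manager.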
* Using different [[domain name]]s, [[IP address]]es and computers to serve different kinds of content (static and dynamic); the aim is to '''separate''' big or huge files (<code>download.*</code>, a domain that might also be served by a [[Content delivery network|CDN]]) from small and medium-sized files (<code>static.*</code>) and from the main dynamic site (<code>www.*</code>, where some content may be stored in a [[Back-end database|backend database]]). The idea is to efficiently serve big or huge (over 10–1000 MB) files (maybe throttling downloads) and to fully [[web cache|cache]] small and medium-sized files, without affecting the performance of the dynamic site under heavy load, by using different settings for each group of web server computers, e.g.:
** <code><nowiki>https://download.example.com</nowiki></code>
** <code><nowiki>https://static.example.com</nowiki></code>
** <code><nowiki>https://www.example.com</nowiki></code>
* Using many web servers (computers) grouped together behind a [[Load balancing (computing)|load balancer]] so that they act, or are seen, as one big web server.
* Adding more hardware resources (i.e., [[random-access memory|RAM]], fast [[Disk storage|disks]]) to each computer.
* Using more efficient computer programs for web servers (see also [[#Software efficiency|software efficiency]]).
* Using the most efficient {{strong|[[#StandardCGIs|Web Server Gateway Interface]]}} to process dynamic requests (spawning one or more external programs every time a dynamic page is retrieved kills performance).
* Using other programming techniques and [[workaround]]s, especially if dynamic content is involved, to speed up HTTP responses (e.g., by avoiding dynamic calls to retrieve objects that never change or change very rarely, such as style sheets, images and scripts, by copying that content to static files once and then keeping those files synchronized with the dynamic content).
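The load-balancer idea above — several web server computers acting as one — can be sketched with the simplest dispatch policy, round-robin, where successive requests are handed to backends in turn. This is an illustrative sketch only (the backend addresses are invented; production balancers additionally do health checks, connection draining, weighting, and session affinity):

```python
import itertools

class RoundRobinBalancer:
    """Hand out backend servers in strict rotation, one per request."""

    def __init__(self, backends):
        # itertools.cycle yields the backends in order, forever.
        self._cycle = itertools.cycle(backends)

    def pick(self):
        """Return the backend that should receive the next request."""
        return next(self._cycle)

# Hypothetical pool of three identical web server machines.
pool = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
```

With N equally capable backends, round-robin spreads load almost evenly, which is why it is the default policy in many balancers; smarter policies (least-connections, latency-aware) matter when backends are heterogeneous.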
* Using the latest efficient versions of [[Hypertext Transfer Protocol|HTTP]] (e.g., beyond common HTTP/1.1, also enabling [[HTTP/2]] and maybe [[HTTP/3]], whenever the available web server software reliably supports the latter two protocols) in order to greatly reduce the number of TCP/IP connections started by each client and the size of the data exchanged (thanks to more compact HTTP header representation and possibly data compression). This may not prevent overloads of RAM and CPU caused by the need for encryption, and it may not address overloads caused by very large files uploaded at high speed, because these protocols are optimized for concurrency.<ref name="nextcloud-http2-slow-upload">{{Cite web|url=https://github.com/nextcloud/server/issues/25297|title=Slow uploads with HTTP/2|author=many|publisher=github|date=2021-01-24|access-date=2021-11-15|language=en|archive-date=16 November 2021|archive-url=https://web.archive.org/web/20211116002101/https://github.com/nextcloud/server/issues/25297|url-status=live}}</ref><ref name="nginx-http2-slow-upload">{{Cite web|url=https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/|title=Delivering HTTP/2 upload speed improvements|author=Junho Choi|publisher=Cloudflare|date=2020-08-24|access-date=2021-11-15|language=en|archive-date=16 November 2021|archive-url=https://web.archive.org/web/20211116002101/https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/|url-status=live}}</ref>
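The pre-defined connection load limit and the 503 overload response described in this section can be tied together in a small sketch: a gate that admits requests while slots remain and answers "503 Service Unavailable" once the limit is reached. This is a minimal illustration of the principle, not how any particular web server implements it (the class, the limit value, and the tuple-shaped responses are all invented for the example):

```python
import threading

class ConnectionGate:
    """Admit at most `limit` concurrent requests; reject the rest with 503."""

    def __init__(self, limit):
        # Each in-flight request holds one slot of the semaphore.
        self._sem = threading.BoundedSemaphore(limit)

    def handle(self, request_handler):
        # Try to take a slot without blocking; failure means the
        # server is at its load limit, i.e. overloaded.
        if not self._sem.acquire(blocking=False):
            return 503, "Service Unavailable"
        try:
            return request_handler()   # serve the request normally
        finally:
            self._sem.release()        # free the slot when done

# Hypothetical pre-defined load limit for one web server process.
gate = ConnectionGate(limit=1000)
```

Rejecting early with 503 (ideally with a <code>Retry-After</code> header) is generally preferable to accepting every connection and letting all of them time out, which is the unresponsive behavior described above.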