HTTP pipelining
==Motivation and limitations==

The pipelining of requests results in a dramatic improvement<ref>{{cite web|url=http://www.w3.org/Protocols/HTTP/Performance/Pipeline.html|title=Network Performance Effects of HTTP/1.1, CSS1, and PNG|publisher=World Wide Web Consortium|accessdate=14 January 2010|date=24 June 1997|first1=Henrik Frystyk|last1=Nielsen|authorlink1=Henrik Frystyk Nielsen|first2=Jim|last2=Gettys|authorlink2=Jim Gettys|first3=Anselm|last3=Baird-Smith|first4=Eric|last4=Prud'hommeaux|first5=Håkon Wium|last5=Lie|authorlink5=Håkon Wium Lie|first6=Chris|last6=Lilley|authorlink6=Chris Lilley (computer scientist)}}</ref> in the loading times of HTML pages, especially over high-[[Latency (engineering)|latency]] connections such as [[Satellite Internet|satellite Internet connection]]s. The speedup is less apparent on broadband connections, where a limitation of HTTP/1.1 still applies: the server must send its responses in the same order in which the requests were received, so the entire connection remains [[FIFO and LIFO accounting|first-in-first-out]]<ref name="HTTP/1.1-pipelining" /> and [[Head-of-line blocking|HOL blocking]] can occur. The asynchronous operation of [[HTTP/2]] and [[SPDY]] is a solution to this.<ref name="lwnspdy">{{cite web|url=https://lwn.net/Articles/362473/|title=Reducing HTTP latency with SPDY|first=Nathan|last=Willis|date=18 November 2009|publisher=[[LWN.net]]}}</ref> By 2017, most browsers supported HTTP/2 by default, which uses multiplexing instead.<ref name=":0" />

Non-[[idempotence (computer science)#Examples|idempotent]] requests such as [[POST (HTTP)|<code>POST</code>]] should not be pipelined.<ref name="non-idempotent">{{cite web|url=http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html|title=Connections|publisher=[[w3.org]]}}</ref> Read requests like <code>GET</code> and <code>HEAD</code> can always be pipelined.
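The wire mechanics can be sketched by concatenating several request messages and writing them to a single connection in one batch, instead of waiting for each response before sending the next request. This is a minimal illustration only; the host name and paths are placeholders:

```python
def pipelined_requests(host: str, paths: list[str]) -> bytes:
    """Build back-to-back HTTP/1.1 GET requests for one TCP connection.

    A pipelining client writes this whole batch without waiting for any
    reply; the server must then return its responses in the same order
    the requests were sent (first-in-first-out).
    """
    batch = b""
    for path in paths:
        batch += (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"\r\n"
        ).encode("ascii")
    return batch

# Three safe-to-pipeline read requests on one connection:
batch = pipelined_requests("example.com", ["/", "/style.css", "/logo.png"])
```

Because the responses come back strictly in order, a client that sends such a batch must parse them sequentially and match them to requests by position, which is exactly why one slow response delays all the ones queued behind it.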
A sequence of other idempotent requests like <code>PUT</code> and <code>DELETE</code> can be pipelined or not, depending on whether the requests in the sequence depend on the effects of the others.<ref name="HTTP/1.1-pipelining"/>

HTTP pipelining requires both the client and the server to support it. [[Hypertext Transfer Protocol|HTTP/1.1]]-conforming servers are required to produce valid responses to pipelined requests, but are not required to actually process the requests concurrently.<ref>{{cite web|url=https://www-archive.mozilla.org/projects/netlib/http/pipelining-faq.html|title=HTTP/1.1 Pipelining FAQ}}</ref> {{Clear left}}

Most pipelining problems happen in HTTP intermediate nodes (hop-by-hop), i.e. in [[proxy server]]s, and especially in transparent proxy servers: if any proxy along the HTTP chain does not handle pipelined requests properly, nothing works as it should.<ref name="make-pipelining-usable"/> Using pipelining with HTTP proxy servers is also usually not recommended because the HOL blocking problem may severely slow down proxy responses (the server's responses must be sent in the same order as the requests were received).<ref name="HTTP/1.1-pipelining"/><ref name="MSIE-8-chat-2008"/>

'''Example''': if a client sends 4 pipelined GET requests to a proxy through a single connection and the first one is not in its cache, the proxy has to forward that request to the destination web server. If the following three requests are instead found in its cache, the proxy must wait for the web server's response, send it to the client, and only then can it send the three [[Web cache|cached]] responses too.
If instead the client opens 4 connections to the proxy and sends 1 GET request per connection (without using pipelining), the proxy can send the three cached responses to the client in parallel before the response from the server is received, decreasing the overall completion time (because the requests are served in parallel with no head-of-line blocking problem).<ref name="HTTP/1.1-concurrency">{{cite journal|url=http://tools.ietf.org/html/rfc7230#section-6.4|title=Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing: Concurrency|year=2014 |publisher=ietf.org|doi=10.17487/RFC7230 |accessdate=2014-07-24|editor-last1=Fielding |editor-last2=Reschke |editor-first1=R. |editor-first2=J. |last1=Fielding |first1=R. |last2=Reschke |first2=J. |doi-access=free |url-access=subscription }}</ref> The same advantage exists in HTTP/2 multiplexed streams.
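The timing difference in this example can be illustrated with a small, hypothetical simulation. The figures are assumptions for illustration only: one request needing a 100 ms origin fetch, followed by three 1 ms cache hits:

```python
def fifo_delivery(ready_times: list[float]) -> list[float]:
    """Delivery times on one pipelined connection.

    Responses are FIFO: a response cannot leave before every earlier
    response has left, which models head-of-line blocking.
    """
    deliveries, last = [], 0.0
    for ready in ready_times:
        last = max(last, ready)  # blocked behind everything earlier
        deliveries.append(last)
    return deliveries

def parallel_delivery(ready_times: list[float]) -> list[float]:
    """One request per connection: each response is independent."""
    return list(ready_times)

# Assumed figures (ms): one origin fetch, then three cache hits.
ready = [100.0, 1.0, 1.0, 1.0]
print(fifo_delivery(ready))      # [100.0, 100.0, 100.0, 100.0]
print(parallel_delivery(ready))  # [100.0, 1.0, 1.0, 1.0]
```

On the single pipelined connection, the three cached responses cannot be delivered until the slow first response arrives; on separate connections, they reach the client almost immediately.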