== Algorithms ==
{{See also|List of numerical analysis topics#Linear programming}}
[[File:Linear Programming Feasible Region.svg|frame|In a linear programming problem, a series of linear constraints produces a [[Convex set|convex]] [[feasible region]] of possible values for those variables. In the two-variable case this region is in the shape of a convex [[simple polygon]].]]

=== Basis exchange algorithms ===

==== Simplex algorithm of Dantzig ====
The [[simplex algorithm]], developed by [[George Dantzig]] in 1947, solves LP problems by constructing a feasible solution at a vertex of the [[polytope]] and then walking along a path on the edges of the polytope to vertices with non-decreasing values of the objective function until an optimum is reached. In many practical problems, "[[Simplex algorithm#Degeneracy: stalling and cycling|stalling]]" occurs: many pivots are made with no increase in the objective function.<ref name="DT03">{{harvtxt|Dantzig|Thapa|2003}}</ref><ref name="Padberg">{{harvtxt|Padberg|1999}}</ref> In rare practical problems, the usual versions of the simplex algorithm may actually "cycle".<ref name="Padberg" /> To avoid cycles, researchers developed new pivoting rules.<ref name="FukudaTerlaky" />

In practice, the simplex [[algorithm]] is quite efficient and can be guaranteed to find the global optimum if certain precautions against ''cycling'' are taken. The simplex algorithm has been proved to solve "random" problems efficiently, i.e. in a cubic number of steps,<ref>{{harvtxt|Borgwardt|1987}}</ref> which is similar to its behavior on practical problems.<ref name="DT03" /><ref name="Todd">{{harvtxt|Todd|2002}}</ref>

However, the simplex algorithm has poor worst-case behavior: Klee and Minty constructed a family of linear programming problems for which the simplex method takes a number of steps exponential in the problem size.<ref name="DT03" /><ref name="Murty">{{harvtxt|Murty|1983}}</ref><ref name="PS">{{harvtxt|Papadimitriou|Steiglitz}}</ref> In fact, for some time it was not known whether the linear programming problem was solvable in [[polynomial time]], i.e. of [[P (complexity)|complexity class P]].
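The vertex-to-vertex walk described above can be illustrated with a short program. The following Python sketch is an illustration only, not drawn from the sources cited here: it assumes the problem is already given in the form ''maximize'' <math>c^\mathsf{T}x</math> subject to <math>Ax \le b</math>, <math>x \ge 0</math> with <math>b \ge 0</math> (so that the slack variables form an initial feasible basis), and it takes no precautions against cycling. It carries out Dantzig-rule pivots on the simplex tableau:

<syntaxhighlight lang="python">
import numpy as np

def simplex(c, A, b):
    """Maximize c@x subject to A@x <= b, x >= 0, assuming b >= 0.

    Each iteration moves along a polytope edge to an adjacent vertex
    with a non-decreasing objective value (Dantzig's entering rule).
    """
    m, n = A.shape
    # Tableau: rows [A | I | b] above the cost row [-c | 0 | 0];
    # the slack variables form the initial feasible basis (a vertex).
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))
    while True:
        j = int(np.argmin(T[-1, :-1]))   # entering column: most negative reduced cost
        if T[-1, j] >= -1e-12:           # no improving direction: optimal vertex
            break
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-12 else np.inf
                  for i in range(m)]
        i = int(np.argmin(ratios))       # leaving row: minimum-ratio test
        if ratios[i] == np.inf:
            raise ValueError("problem is unbounded")
        T[i] /= T[i, j]                  # pivot: make column j basic in row i
        for k in range(m + 1):
            if k != i:
                T[k] -= T[k, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]

# Example: maximize 3x + 2y subject to x + y <= 4 and x + 3y <= 6.
x, value = simplex(np.array([3.0, 2.0]),
                   np.array([[1.0, 1.0], [1.0, 3.0]]),
                   np.array([4.0, 6.0]))
print(x, value)   # optimal vertex (4, 0) with objective value 12
</syntaxhighlight>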
==== Criss-cross algorithm ====
Like the simplex algorithm of Dantzig, the [[criss-cross algorithm]] is a basis-exchange algorithm that pivots between bases. However, the criss-cross algorithm need not maintain feasibility, but can pivot rather from a feasible basis to an infeasible basis. The criss-cross algorithm does not have [[time complexity|polynomial time-complexity]] for linear programming. Both algorithms visit all 2<sup>''D''</sup> corners of a (perturbed) [[unit cube|cube]] in dimension ''D'', the [[Klee–Minty cube]], in the [[worst-case complexity|worst case]].<ref name="FukudaTerlaky">{{cite journal|first1=Komei|last1=Fukuda|author1-link=Komei Fukuda|first2=Tamás|last2=Terlaky|author2-link=Tamás Terlaky|title=Criss-cross methods: A fresh view on pivot algorithms|journal=Mathematical Programming, Series B|volume=79|number=1–3|pages=369–395|editor=Thomas M. Liebling|editor2=Dominique de Werra|year=1997|doi=10.1007/BF02614325|mr=1464775|citeseerx=10.1.1.36.9373|s2cid=2794181}}</ref><ref name="Roos">{{cite journal|last=Roos|first=C.|title=An exponential example for Terlaky's pivoting rule for the criss-cross simplex method|journal=Mathematical Programming|volume=46|year=1990|series=Series A|doi=10.1007/BF01585729|mr=1045573|issue=1|pages=79–84|s2cid=33463483}}</ref>

=== Interior point ===
In contrast to the simplex algorithm, which finds an optimal solution by traversing the edges between vertices on a polyhedral set, interior-point methods move through the interior of the feasible region.

==== Ellipsoid algorithm, following Khachiyan ====
This is the first [[worst-case complexity|worst-case]] [[polynomial-time]] algorithm ever found for linear programming. To solve a problem which has ''n'' variables and can be encoded in ''L'' input bits, this algorithm runs in <math>O(n^6 L)</math> time.<ref name="khachiyan79" /> [[Leonid Khachiyan]] solved this long-standing complexity issue in 1979 with the introduction of the [[ellipsoid method]]. The convergence analysis has (real-number) predecessors, notably the [[iterative method]]s developed by [[Naum Z. Shor]] and the [[approximation algorithm]]s by Arkadi Nemirovski and D. Yudin.
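The characteristic update of the ellipsoid method can be sketched as follows. The Python sketch below is an illustration under simplifying assumptions, not code from Khachiyan's paper: it treats the simpler feasibility version of the problem (find <math>x</math> with <math>Ax \le b</math>), assumes at least two variables, and assumes the feasible region, if nonempty, lies in a ball of known radius <math>R</math>. Each iteration cuts the current ellipsoid through its center along a violated constraint and recenters on the minimum-volume ellipsoid containing the remaining half:

<syntaxhighlight lang="python">
import numpy as np

def ellipsoid_feasibility(A, b, R=100.0, max_iters=10_000):
    """Search for x with A@x <= b via the central-cut ellipsoid method.

    The ellipsoid {z : (z-x)^T P^{-1} (z-x) <= 1} always contains the
    feasible region (assumed to lie in the ball of radius R about the
    origin); requires n >= 2 variables.
    """
    n = A.shape[1]
    x = np.zeros(n)            # ellipsoid center
    P = (R ** 2) * np.eye(n)   # ellipsoid shape matrix
    for _ in range(max_iters):
        violated = np.nonzero(A @ x > b)[0]
        if violated.size == 0:
            return x           # current center is feasible
        a = A[violated[0]]     # normal of one violated constraint
        Pa = P @ a
        g = Pa / np.sqrt(a @ Pa)
        x = x - g / (n + 1)    # shift center away from the violated half
        P = (n ** 2 / (n ** 2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(g, g))
    # Ellipsoid volume shrinks geometrically; for a suitable iteration
    # bound, exhausting it certifies that the region is (numerically) empty.
    return None

# Example: find a point with x + y <= 4, x >= 0, y >= 0, x + y >= 2.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0]])
b = np.array([4.0, 0.0, 0.0, -2.0])
print(ellipsoid_feasibility(A, b))
</syntaxhighlight>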
==== Projective algorithm of Karmarkar ====
{{main|Karmarkar's algorithm}}
Khachiyan's algorithm was of landmark importance for establishing the polynomial-time solvability of linear programs. The algorithm was not a computational breakthrough, as the simplex method is more efficient for all but specially constructed families of linear programs. However, Khachiyan's algorithm inspired new lines of research in linear programming. In 1984, [[Narendra Karmarkar|N. Karmarkar]] proposed a<!-- n interior-point --> [[projective method]] for linear programming. Karmarkar's algorithm<ref name="karmarkar84" /> improved on Khachiyan's<ref name="khachiyan79" /> worst-case polynomial bound (giving <math>O(n^{3.5}L)</math>). Karmarkar claimed that his algorithm was much faster in practical LP than the simplex method, a claim that created great interest in interior-point methods.<ref name="Strang">{{cite journal|last=Strang|first=Gilbert|author-link=Gilbert Strang|title=Karmarkar's algorithm and its place in applied mathematics|journal=[[The Mathematical Intelligencer]]|date=1 June 1987|issn=0343-6993|pages=4–10|volume=9|doi=10.1007/BF03025891|mr=883185|issue=2|s2cid=123541868}}</ref> Since Karmarkar's discovery, many interior-point methods have been proposed and analyzed.

==== Vaidya's 87 algorithm ====
In 1987, Vaidya proposed an algorithm that runs in <math>O(n^3)</math> time.<ref>{{cite conference|title=An algorithm for linear programming which requires <math>O(((m+n)n^2+(m+n)^{1.5}n)L)</math> arithmetic operations|conference=28th Annual IEEE Symposium on Foundations of Computer Science|series=FOCS|last1=Vaidya|first1=Pravin M.|year=1987}}</ref>

==== Vaidya's 89 algorithm ====
In 1989, Vaidya developed an algorithm that runs in <math>O(n^{2.5})</math> time.<ref>{{cite conference|chapter=Speeding-up linear programming using fast matrix multiplication|conference=30th Annual Symposium on Foundations of Computer Science|series=FOCS|last1=Vaidya|first1=Pravin M.|title=30th Annual Symposium on Foundations of Computer Science|year=1989|pages=332–337|doi=10.1109/SFCS.1989.63499|isbn=0-8186-1982-1}}</ref> Formally speaking, the algorithm takes <math>O((n+d)^{1.5} n L)</math> arithmetic operations in the worst case, where <math>d</math> is the number of constraints, <math>n</math> is the number of variables, and <math>L</math> is the number of bits.

==== Input sparsity time algorithms ====
In 2015, Lee and Sidford showed that linear programming can be solved in <math>\tilde O((\operatorname{nnz}(A) + d^2)\sqrt{d}\,L)</math> time,<ref>{{cite conference|title=Efficient inverse maintenance and faster algorithms for linear programming|conference=FOCS '15 Foundations of Computer Science|last1=Lee|first1=Yin-Tat|last2=Sidford|first2=Aaron|year=2015|arxiv=1503.01752}}</ref> where <math>\tilde O</math> denotes the [[soft O notation]] and <math>\operatorname{nnz}(A)</math> is the number of non-zero elements of the constraint matrix; the worst-case running time remains <math>O(n^{2.5}L)</math>.

==== Current matrix multiplication time algorithm ====
In 2019, Cohen, Lee and Song improved the running time to <math>\tilde O((n^{\omega} + n^{2.5-\alpha/2} + n^{2+1/6})L)</math>, where <math>\omega</math> is the exponent of [[matrix multiplication]] and <math>\alpha</math> is the dual exponent of [[matrix multiplication]].<ref>{{cite conference|title=Solving Linear Programs in the Current Matrix Multiplication Time|conference=51st Annual ACM Symposium on the Theory of Computing|last1=Cohen|first1=Michael B.|last2=Lee|first2=Yin-Tat|last3=Song|first3=Zhao|year=2018|arxiv=1810.07896|series=STOC'19}}</ref> Here, <math>\alpha</math> is (roughly) defined to be the largest number such that one can multiply an <math>n \times n</math> matrix by an <math>n \times n^\alpha</math> matrix in <math>O(n^2)</math> time. In follow-up work, Lee, Song and Zhang obtained the same result via a different method.<ref>{{cite conference|title=Solving Empirical Risk Minimization in the Current Matrix Multiplication Time|conference=Conference on Learning Theory|last1=Lee|first1=Yin-Tat|last2=Song|first2=Zhao|last3=Zhang|first3=Qiuyi|year=2019|arxiv=1905.04447|series=COLT'19}}</ref> Both algorithms run in <math>\tilde O(n^{2+1/6}L)</math> time when <math>\omega = 2</math> and <math>\alpha = 1</math>. A result of Jiang, Song, Weinstein and Zhang improved this bound to <math>\tilde O(n^{2+1/18}L)</math>.<ref>{{cite conference|title=Faster Dynamic Matrix Inverse for Faster LPs|last1=Jiang|first1=Shunhua|last2=Song|first2=Zhao|last3=Weinstein|first3=Omri|last4=Zhang|first4=Hengjie|year=2020|arxiv=2004.07470}}</ref>
=== Comparison of interior-point methods and simplex algorithms ===
The current opinion is that the efficiencies of good implementations of simplex-based methods and interior-point methods are similar for routine applications of linear programming. However, for specific types of LP problems, it may be that one type of solver is better than another (sometimes much better), and the structure of the solutions generated by interior-point methods is significantly different from that of simplex-based methods: the support set of active variables is typically smaller for the latter.<ref>{{cite journal|doi=10.1016/S0377-2217(02)00061-9|title=Pivot versus interior point methods: Pros and cons|journal=European Journal of Operational Research|volume=140|issue=2|pages=170|year=2002|last1=Illés|first1=Tibor|last2=Terlaky|first2=Tamás|url=https://strathprints.strath.ac.uk/9200/|citeseerx=10.1.1.646.3539}}</ref>
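This difference can be observed on a problem with many optimal solutions. The following sketch uses [[SciPy]]'s <code>linprog</code>, whose <code>highs-ds</code> and <code>highs-ipm</code> options select a dual-simplex and an interior-point solver, respectively; the example problem is constructed here purely for illustration, and the exact point returned by the interior-point solver depends on whether a crossover step to a vertex is applied:

<syntaxhighlight lang="python">
from scipy.optimize import linprog

# An LP whose optimal set is a whole edge, not a single vertex:
# minimize -(x + y)  subject to  x + y <= 1,  x >= 0,  y >= 0.
# Every point on the segment from (1, 0) to (0, 1) is optimal.
c = [-1.0, -1.0]
A_ub = [[1.0, 1.0]]
b_ub = [1.0]

# A simplex-type solver stops at a vertex of the optimal face ...
res_simplex = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ds")
# ... while a pure interior-point method approaches a point in the
# relative interior of the optimal face, with larger support (unless
# crossover to a vertex is applied as post-processing).
res_ipm = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ipm")

print("dual simplex:  ", res_simplex.x)
print("interior point:", res_ipm.x)
</syntaxhighlight>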