{{Short description|Sequence of locally optimal choices}}
[[Image:Greedy algorithm 36 cents.svg|thumb|280px|right|Greedy algorithms determine the minimum number of coins to give while making change. These are the steps most people would take to emulate a greedy algorithm to represent 36 cents using only coins with values {1, 5, 10, 20}. The coin of the highest value that does not exceed the remaining change owed is the local optimum. (In general, the [[change-making problem]] requires [[dynamic programming]] to find an optimal solution; however, most currency systems are special cases where the greedy strategy does find an optimal solution.)]]
A '''greedy algorithm''' is any [[algorithm]] that follows the problem-solving [[Heuristic (computer science)|heuristic]] of making the locally optimal choice at each stage.<ref name="NISTg">{{cite web|last=Black|first=Paul E.|title=greedy algorithm|url=http://xlinux.nist.gov/dads//HTML/greedyalgo.html|work=Dictionary of Algorithms and Data Structures|publisher=[[National Institute of Standards and Technology|U.S. National Institute of Standards and Technology]] (NIST) |access-date=17 August 2012|date=2 February 2005}}</ref> In many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic can yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time.

For example, a greedy strategy for the [[travelling salesman problem]] (which is of high [[computational complexity]]) is the following heuristic: "At each step of the journey, visit the nearest unvisited city." This heuristic does not intend to find the best solution, but it terminates in a reasonable number of steps; finding an optimal solution to such a complex problem typically requires unreasonably many steps. In [[mathematical optimization]], greedy algorithms optimally solve [[Combinatorics|combinatorial]] problems having the properties of [[matroid]]s and give constant-factor approximations to optimization problems with submodular structure.

==Specifics==
Greedy algorithms produce good solutions on some [[mathematical problem]]s, but not on others. Most problems for which they work will have two properties:

; Greedy choice property: We can make whichever choice seems best at a given moment and then (recursively) solve the remaining sub-problems. The choice made by a greedy algorithm may depend on choices made so far, but not on future choices or all the solutions to the subproblem. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one. In other words, a greedy algorithm never reconsiders its choices. This is the main difference from [[dynamic programming]], which is exhaustive and is guaranteed to find the solution. After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage and may reconsider the previous stage's algorithmic path to the solution.
; Optimal substructure: "A problem exhibits [[optimal substructure]] if an optimal solution to the problem contains optimal solutions to the sub-problems."<ref>{{harvnb|Cormen|Leiserson|Rivest|Stein|2001|loc=Ch. 16}}</ref>
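As a concrete illustration of repeatedly taking the locally optimal choice, the change-making procedure from the image caption above can be sketched in a few lines of code. The following Python sketch is illustrative only: it assumes the coin system {1, 5, 10, 20} used in that caption, and the function name <code>greedy_change</code> is chosen purely for this example.

<syntaxhighlight lang="python">
def greedy_change(amount, coins=(20, 10, 5, 1)):
    """Make change greedily: repeatedly take the largest coin
    that does not exceed the amount still owed."""
    taken = []
    for coin in coins:  # coins listed from largest to smallest
        while amount >= coin:
            taken.append(coin)
            amount -= coin
    return taken

print(greedy_change(36))  # [20, 10, 5, 1], matching the image caption
</syntaxhighlight>

For the coin system shown here the greedy rule happens to return an optimal answer; as noted above, the general change-making problem may require dynamic programming.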
===Correctness proofs===
A common technique for proving the correctness of greedy algorithms uses an [[inductive reasoning|inductive]] exchange argument.<ref>{{cite book |last=Erickson |first=Jeff |title=Algorithms |url=https://jeffe.cs.illinois.edu/teaching/algorithms/ |year=2019 |publisher=University of Illinois at Urbana-Champaign |chapter=Greedy Algorithms}}</ref> The exchange argument demonstrates that any solution different from the greedy solution can be transformed into the greedy solution without degrading its quality. This proof pattern typically follows these steps (arguing by contradiction):
# Assume there exists an optimal solution different from the greedy solution.
# Identify the first point where the optimal and greedy solutions differ.
# Prove that exchanging the optimal choice for the greedy choice at this point cannot worsen the solution.
# Conclude by induction that there must exist an optimal solution identical to the greedy solution.
In some cases, an additional step may be needed to prove that no optimal solution can strictly improve upon the greedy solution.

===Cases of failure===
{{multiple image
 | align = 
 | direction = vertical
 | width = 300
 | header = Examples of how a greedy algorithm may fail to achieve the optimal solution.
 | image1 = Greedy Glouton.svg
 | alt1 = 
 | caption1 = Starting from A, a greedy algorithm that tries to find the maximum by following the greatest slope will find the local maximum at "m", oblivious to the global maximum at "M".
 | image2 = Greedy-search-path-example.gif
 | alt2 = 
 | caption2 = To reach the largest sum, at each step, the greedy algorithm will choose what appears to be the optimal immediate choice, so it will choose 12 instead of 3 at the second step, and will not reach the best solution, which contains 99.
}}
Greedy algorithms fail to produce the optimal solution for many other problems and may even produce the ''unique worst possible'' solution. One example is the [[travelling salesman problem]] mentioned above: for each number of cities, there is an assignment of distances between the cities for which the nearest-neighbour heuristic produces the unique worst possible tour.<ref>{{cite journal|doi=10.1016/S0166-218X(01)00195-0|title=Traveling salesman should not be greedy: Domination analysis of greedy-type heuristics for the TSP|journal=Discrete Applied Mathematics|volume=117|issue=1–3|pages=81–86|year=2002|last1=Gutin|first1=Gregory|last2=Yeo|first2=Anders|last3=Zverovich|first3=Alexey|doi-access=free}}</ref> For other possible examples, see [[horizon effect]].
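The failure mode shown in the second image can be made concrete with a short program. The following Python sketch is illustrative only: the tree is a hypothetical instance invented to echo the figure, in which the greedy rule descends into the larger child at every step, picks 12 over 3, and therefore never reaches the 99 on the other branch.

<syntaxhighlight lang="python">
# Each node is a pair (value, children). Hypothetical instance echoing
# the second figure: the 99 sits below the smaller child 3.
tree = (7, [(12, [(5, []), (6, [])]),
            (3,  [(1, []), (99, [])])])

def greedy_path_sum(node):
    """Follow the locally best child at each step."""
    value, children = node
    if not children:
        return value
    best_child = max(children, key=lambda child: child[0])
    return value + greedy_path_sum(best_child)

def optimal_path_sum(node):
    """Exhaustively consider every root-to-leaf path."""
    value, children = node
    if not children:
        return value
    return value + max(optimal_path_sum(child) for child in children)

print(greedy_path_sum(tree))   # 25  = 7 + 12 + 6
print(optimal_path_sum(tree))  # 109 = 7 + 3 + 99
</syntaxhighlight>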
==Types==
{{More citations needed section|date=June 2018}}
Greedy algorithms can be characterized as being 'short-sighted', and also as 'non-recoverable'. They are ideal only for problems that have an 'optimal substructure'. Despite this, for many simple problems, the best-suited algorithms are greedy. It is important, however, to note that the greedy algorithm can be used as a selection algorithm to prioritize options within a search or branch-and-bound algorithm. There are a few variations of the greedy algorithm:<ref>{{Cite journal |last=DeVore |first=R. A. |last2=Temlyakov |first2=V. N. |date=1996-12-01 |title=Some remarks on greedy algorithms |url=https://doi.org/10.1007/BF02124742 |journal=Advances in Computational Mathematics |language=en |volume=5 |issue=1 |pages=173–187 |doi=10.1007/BF02124742 |issn=1572-9044}}</ref>
* Pure greedy algorithms
* Orthogonal greedy algorithms
* Relaxed greedy algorithms

==Theory==
Greedy algorithms have a long history of study in [[combinatorial optimization]] and [[theoretical computer science]]. Greedy heuristics are known to produce suboptimal results on many problems,<ref>{{harvnb|Feige|1998}}</ref> and so natural questions are:
* For which problems do greedy algorithms perform optimally?
* For which problems do greedy algorithms guarantee an approximately optimal solution?
* For which problems are greedy algorithms guaranteed ''not'' to produce an optimal solution?
A large body of literature exists answering these questions for general classes of problems, such as [[matroid]]s, as well as for specific problems, such as [[set cover]].

===Matroids===
{{Main|Matroid}}
A [[matroid]] is a mathematical structure that generalizes the notion of [[linear independence]] from [[vector spaces]] to arbitrary sets. If an optimization problem has the structure of a matroid, then the appropriate greedy algorithm will solve it optimally.<ref>{{harvnb|Papadimitriou|Steiglitz|1998}}</ref>

===Submodular functions===
{{Main|Submodular set function#Optimization problems}}
A function <math>f</math> defined on subsets of a set <math>\Omega</math> is called [[submodular]] if for every <math>S, T \subseteq \Omega</math> we have that <math>f(S)+f(T)\geq f(S\cup T)+f(S\cap T)</math>.

Suppose one wants to find a set <math>S</math> which maximizes <math>f</math>. The greedy algorithm, which builds up a set <math>S</math> by incrementally adding the element which increases <math>f</math> the most at each step, produces as output a set whose value is at least <math>(1 - 1/e) \max_{X \subseteq \Omega} f(X)</math>.<ref>{{harvnb|Nemhauser|Wolsey|Fisher|1978}}</ref> That is, the greedy algorithm achieves a value within a constant factor, <math>(1 - 1/e) \approx 0.63</math>, of the optimal value. Similar guarantees are provable when additional constraints, such as cardinality constraints,<ref>{{harvnb|Buchbinder|Feldman|Naor|Schwartz|2014}}</ref> are imposed on the output, though often slight variations on the greedy algorithm are required. See <ref>{{harvnb|Krause|Golovin|2014}}</ref> for an overview.

===Other problems with guarantees===
Other problems for which the greedy algorithm gives a strong guarantee, but not an optimal solution, include
* [[Set cover problem#Greedy algorithm|Set cover]] (a sketch of the greedy heuristic follows this list)
* The [[Steiner tree problem]]
* [[Load balancing (computing)|Load balancing]]<ref>{{cite web |title=Lecture 5: Introduction to Approximation Algorithms |work=Advanced Algorithms (2IL45) — Course Notes |publisher=TU Eindhoven |url=http://www.win.tue.nl/~mdberg/Onderwijs/AdvAlg_Material/Course%20Notes/lecture5.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://www.win.tue.nl/~mdberg/Onderwijs/AdvAlg_Material/Course%20Notes/lecture5.pdf |archive-date=2022-10-09 |url-status=live}}</ref>
* [[Independent set (graph theory)#Approximation algorithms|Independent set]]
Many of these problems have matching lower bounds; i.e., the greedy algorithm does not perform better than the guarantee in the worst case.
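The greedy selection rule discussed above, adding whatever increases the objective the most, is easiest to see in the set-cover case. The following Python sketch is illustrative only: the instance data is invented, and the function name <code>greedy_set_cover</code> is chosen purely for this example.

<syntaxhighlight lang="python">
def greedy_set_cover(universe, subsets):
    """Repeatedly pick the subset covering the most still-uncovered
    elements until the whole universe is covered."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Locally optimal choice: largest number of newly covered elements.
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe cannot be covered by the given subsets")
        cover.append(best)
        uncovered -= best
    return cover

# Invented instance: greedy picks {1, 2, 3, 4} and then {5, 6, 7}.
print(greedy_set_cover(range(1, 8), [{1, 2, 3, 4}, {4, 5, 6}, {5, 6, 7}, {1, 7}]))
</syntaxhighlight>

On this particular instance the greedy choices happen to be optimal; in general the greedy heuristic is only guaranteed to come within a logarithmic factor of the optimal cover size.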
==Applications==
{{Expand section|date=June 2018}}
Greedy algorithms typically (but not always) fail to find the globally optimal solution because they usually do not operate exhaustively on all the data. They can make commitments to certain choices too early, preventing them from finding the best overall solution later. For example, all known [[greedy coloring]] algorithms for the [[graph coloring problem]] and for all other [[NP-complete]] problems do not consistently find optimum solutions. Nevertheless, they are useful because they are quick to devise and often give good approximations to the optimum.

If a greedy algorithm can be proven to yield the global optimum for a given problem class, it typically becomes the method of choice because it is faster than other optimization methods like [[dynamic programming]]. Examples of such greedy algorithms are [[Kruskal's algorithm]] and [[Prim's algorithm]] for finding [[minimum spanning tree]]s and the algorithm for finding optimum [[Huffman tree]]s.

Greedy algorithms appear in network [[routing]] as well. Using greedy routing, a message is forwarded to the neighbouring node which is "closest" to the destination. The notion of a node's location (and hence "closeness") may be determined by its physical location, as in [[geographic routing]] used by [[ad hoc network]]s. Location may also be an entirely artificial construct, as in [[small world routing]] and [[distributed hash table]]s.

==Examples==
* The [[activity selection problem]] is characteristic of this class of problems, where the goal is to pick the maximum number of activities that do not clash with each other (see the sketch after this list).
* In the [[Macintosh computer]] game ''[[Crystal Quest]]'', the objective is to collect crystals, in a fashion similar to the [[travelling salesman problem]]. The game has a demo mode, where the game uses a greedy algorithm to go to every crystal. The [[artificial intelligence]] does not account for obstacles, so the demo mode often ends quickly.
* [[Matching pursuit]] is an example of a greedy algorithm applied to signal approximation.
* A greedy algorithm finds the optimal solution to [[Malfatti circles|Malfatti's problem]] of finding three disjoint circles within a given triangle that maximize the total area of the circles; it is conjectured that the same greedy algorithm is optimal for any number of circles.
* A greedy algorithm is used to construct a Huffman tree during [[Huffman coding]], where it finds an optimal solution.
* In [[decision tree learning]], greedy algorithms are commonly used; however, they are not guaranteed to find the optimal solution.
** One popular such algorithm is the [[ID3 algorithm]] for decision tree construction.
* [[Dijkstra's algorithm]] and the related [[A* search algorithm]] are verifiably optimal greedy algorithms for [[graph search]] and [[Shortest path problem|shortest path finding]].
** A* search is conditionally optimal, requiring an "[[admissible heuristic]]" that will not overestimate path costs.
* [[Kruskal's algorithm]] and [[Prim's algorithm]] are greedy algorithms for constructing [[minimum spanning tree]]s of a given [[connected graph]]. They always find an optimal solution, which may not be unique in general.
* The [[Sequitur algorithm|Sequitur]] and [[Lempel-Ziv-Welch algorithm|Lempel-Ziv-Welch]] algorithms are [[Grammar_induction#Grammatical_inference_by_greedy_algorithms|greedy algorithms for grammar induction]].
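The activity selection problem listed first above has a particularly simple greedy solution: repeatedly take the earliest-finishing activity that does not clash with those already chosen. The following Python sketch is illustrative only; the (start, finish) pairs are invented for this example.

<syntaxhighlight lang="python">
def select_activities(activities):
    """Greedy activity selection: keep the activity that finishes
    earliest among those compatible with the current selection."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:  # does not clash with previous choices
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Invented instance: three mutually compatible activities are selected.
print(select_activities([(1, 3), (2, 5), (4, 7), (6, 9), (8, 10)]))
</syntaxhighlight>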
==See also==
{{Portal|Mathematics}}
<!-- Please keep entries in alphabetical order & add a short description [[WP:SEEALSO]] -->
{{div col|colwidth=20em}}
*[[Best-first search]]
*[[Multi-armed bandit#Semi-uniform strategies|Epsilon-greedy strategy]]
*[[Greedy algorithm for Egyptian fractions]]
*[[Greedy source]]
*[[Hill climbing]]
*[[Horizon effect]]
*[[Matroid]]
{{div col end}}
<!-- please keep entries in alphabetical order -->

==References==
{{Reflist}}

===Sources===
{{refbegin}}
*{{cite book |first1=Thomas H. |last1=Cormen |first2=Charles E. |last2=Leiserson |first3=Ronald L. |last3=Rivest |first4=Clifford |last4=Stein |chapter=16 Greedy Algorithms |title=Introduction To Algorithms |title-link=Introduction to Algorithms |chapter-url=https://books.google.com/books?id=NLngYyWFl_YC&pg=PA370 |year=2001 |publisher=MIT Press |isbn=978-0-262-03293-3 |pages=370–}}
*{{cite journal |doi=10.1016/S0166-218X(01)00195-0|title=Traveling salesman should not be greedy: Domination analysis of greedy-type heuristics for the TSP|journal=Discrete Applied Mathematics|volume=117|issue=1–3|pages=81–86|year=2002|last1=Gutin|first1=Gregory|last2=Yeo|first2=Anders|last3=Zverovich|first3=Alexey|doi-access=free}}
*{{cite journal |doi=10.1016/j.disopt.2004.03.007|title=When the greedy algorithm fails|journal=Discrete Optimization|volume=1|issue=2|pages=121–127|year=2004|last1=Bang-Jensen|first1=Jørgen|last2=Gutin|first2=Gregory|last3=Yeo|first3=Anders|doi-access=free}}
*{{cite journal|doi=10.1016/j.disopt.2006.03.001|title=Greedy-type resistance of combinatorial problems|journal=Discrete Optimization|volume=3|issue=4|pages=288–298|year=2006|last1=Bendall|first1=Gareth|last2=Margot|first2=François|doi-access=free}}
*{{cite journal |first=U. |last=Feige |url=https://www.cs.duke.edu/courses/cps296.2/spring07/papers/p634-feige.pdf |archive-url=https://ghostarchive.org/archive/20221009/https://www.cs.duke.edu/courses/cps296.2/spring07/papers/p634-feige.pdf |archive-date=2022-10-09 |url-status=live |title=A threshold of ln n for approximating set cover |journal=Journal of the ACM |volume=45 |issue=4 |pages=634–652 |date=1998 |doi=10.1145/285055.285059 |s2cid=52827488 }}
*{{cite journal |first1=G. |last1=Nemhauser |first2=L.A. |last2=Wolsey |first3=M.L. |last3=Fisher |url=https://www.researchgate.net/publication/242914003 |title=An analysis of approximations for maximizing submodular set functions—I |journal=Mathematical Programming |volume=14 |issue=1 |date=1978 |pages=265–294|doi=10.1007/BF01588971 |s2cid=206800425 }}
*{{cite book |first1=Niv |last1=Buchbinder |first2=Moran |last2=Feldman |first3=Joseph (Seffi) |last3=Naor |first4=Roy |last4=Schwartz |chapter=Submodular maximization with cardinality constraints |chapter-url=http://theory.epfl.ch/moranfe/Publications/SODA2014.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://theory.epfl.ch/moranfe/Publications/SODA2014.pdf |archive-date=2022-10-09 |url-status=live |title=Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms |publisher=Society for Industrial and Applied Mathematics |year=2014 |isbn=978-1-61197-340-2 |doi=10.1137/1.9781611973402.106}}
*{{cite book |last1=Krause |first1=A. |last2=Golovin |first2=D. |chapter=Submodular Function Maximization |chapter-url=https://books.google.com/books?id=YxliAgAAQBAJ&pg=PA71 |editor-first=L. |editor-last=Bordeaux |editor2-first=Y. |editor2-last=Hamadi |editor3-first=P. |editor3-last=Kohli |title=Tractability: Practical Approaches to Hard Problems |publisher=Cambridge University Press |year=2014 |isbn=9781139177801 |pages=71–104 |doi=10.1017/CBO9781139177801.004}}
* {{Cite book |title=Combinatorial Optimization: Algorithms and Complexity |last1=Papadimitriou |first1=Christos H. |author1-link=Christos Papadimitriou |last2=Steiglitz |first2=Kenneth |author2-link=Kenneth Steiglitz |year=1998 |publisher=Dover}}
{{refend}}

==External links==
{{Commons category|Greedy algorithms}}
* {{springer|title=Greedy algorithm|id=p/g110210}}
* {{cite web |first=Noah |last=Gift |url=http://www.oreillynet.com/onlamp/blog/2008/04/python_greedy_coin_changer_alg.html |title=Python greedy coin example}}

{{optimization algorithms|combinatorial|state=expanded}}
{{Algorithmic paradigms}}
{{Authority control}}

[[Category:Optimization algorithms and methods]]
[[Category:Combinatorial algorithms]]
[[Category:Matroid theory]]
[[Category:Exchange algorithms]]
[[Category:Greedy algorithms]]