==Bounded relaxation==
[[Image:Weighted A star with eps 5.gif|thumb|A* search using a heuristic that is 5.0 (= ''ε'') times a [[consistent heuristic]], obtaining a suboptimal path]]
While the admissibility criterion guarantees an optimal solution path, it also means that A* must examine all equally meritorious paths to find the optimal path. To compute approximate shortest paths, it is possible to speed up the search at the expense of optimality by relaxing the admissibility criterion. Often we want to bound this relaxation, so that we can guarantee that the solution path is no worse than (1 + ''ε'') times the optimal solution path. This guarantee is referred to as ''ε''-admissibility. There are a number of ''ε''-admissible algorithms:
*Weighted A*/Static Weighting.<ref>{{cite journal | first = Ira | last = Pohl | title = First results on the effect of error in heuristic search | journal = Machine Intelligence 5 | pages = 219–236 |isbn=978-0-85224-176-9 |oclc=1067280266 | year = 1970 |publisher=Edinburgh University Press }}</ref> If ''h<sub>a</sub>''(''n'') is an admissible heuristic function, the weighted version of the A* search uses {{math|1=''h<sub>w</sub>''(''n'') = ''ε h<sub>a</sub>''(''n'')}}, {{math|1=''ε'' > 1}} as the heuristic function and performs the A* search as usual (which typically runs faster than using ''h<sub>a</sub>'', since fewer nodes are expanded). The path found by the search can have a cost of at most ''ε'' times that of the least-cost path in the graph.<ref name="pearl84">{{cite book | first = Judea | last = Pearl | title = Heuristics: Intelligent Search Strategies for Computer Problem Solving | publisher = Addison-Wesley | year = 1984 | isbn = 978-0-201-05594-8 | url = https://archive.org/details/heuristicsintell00pear }}</ref> A short sketch follows the list below.
*Convex Upward/Downward Parabola (XUP/XDP).<ref>{{Cite journal |last=Chen |first=Jingwei |last2=Sturtevant |first2=Nathan R. |date=2019 |title=Conditions for Avoiding Node Re-expansions in Bounded Suboptimal Search |url=https://www.ijcai.org/proceedings/2019/170 |journal=Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence |publisher=International Joint Conferences on Artificial Intelligence Organization |pages=1220–1226}}</ref> Modifications to the cost function in weighted A* that push optimality toward the start or the goal. XDP gives paths that are near-optimal close to the start, and XUP gives paths that are near-optimal close to the goal. Both yield <math>\epsilon</math>-optimal paths overall (these priorities are transcribed in code after the list).
*:<math>f_{\text{XDP}}(n) = \frac{1}{2\epsilon} \left[ \ g(n) + (2\epsilon - 1)h(n) + \sqrt{(g(n)-h(n))^2 + 4\epsilon g(n)h(n) } \ \right]</math>
*:<math>f_{\text{XUP}}(n) = \frac{1}{2\epsilon} \left[ \ g(n)+h(n) + \sqrt{(g(n)+h(n))^2+4\epsilon (\epsilon-1)h(n)^2} \ \right]</math>
*Piecewise Upward/Downward Curve (pwXU/pwXD).<ref>{{Cite journal |last=Chen |first=Jingwei |last2=Sturtevant |first2=Nathan R. |date=2021-05-18 |title=Necessary and Sufficient Conditions for Avoiding Reopenings in Best First Suboptimal Search with General Bounding Functions |url=https://ojs.aaai.org/index.php/AAAI/article/view/16485 |journal=Proceedings of the AAAI Conference on Artificial Intelligence |language=en |volume=35 |issue=5 |pages=3688–3696 |doi=10.1609/aaai.v35i5.16485 |issn=2374-3468|doi-access=free }}</ref> Similar to XUP/XDP but with piecewise functions instead of parabolas. Solution paths are also <math>\epsilon</math>-optimal.
*:<math>f_{\text{pwXD}}(n) = \begin{cases} g(n)+h(n), & \text{if } h(n) > g(n) \\ g(n) + (2\epsilon-1)h(n)/\epsilon, & \text{if } h(n) \le g(n) \end{cases}</math>
*:<math>f_{\text{pwXU}}(n) = \begin{cases} g(n)/(2\epsilon-1)+h(n), & \text{if } g(n) < (2\epsilon-1)h(n) \\ (g(n) + h(n))/\epsilon, & \text{if } g(n) \ge (2\epsilon-1)h(n) \end{cases}</math>
* Dynamic Weighting<ref>{{cite conference | first = Ira | last = Pohl | title = The avoidance of (relative) catastrophe, heuristic competence, genuine dynamic weighting and computational issues in heuristic problem solving | book-title = Proceedings of the Third International Joint Conference on Artificial Intelligence (IJCAI-73) | volume = 3 | pages = 11–17 | place = California, USA | date = August 1973 | url = https://www.cs.auckland.ac.nz/courses/compsci709s2c/resources/Mike.d/Pohl1973WeightedAStar.pdf }}</ref> uses the cost function {{tmath|1=f(n) = g(n) + (1 + \varepsilon w(n))h(n)}}, where <math>w(n) = \begin{cases} 1 - \frac{d(n)}{N} & d(n) \le N \\ 0 & \text{otherwise} \end{cases}</math>, and where {{tmath|d(n)}} is the depth of the search and ''N'' is the anticipated length of the solution path.
* Sampled Dynamic Weighting<ref>{{cite conference | first = Andreas | last = Köll |author2= Hermann Kaindl | title = A new approach to dynamic weighting | book-title = Proceedings of the Tenth European Conference on Artificial Intelligence (ECAI-92) | pages = 16–17 |url=https://dl.acm.org/doi/abs/10.5555/145448.145484 | place = Vienna, Austria | date = August 1992 |isbn=978-0-471-93608-4 |publisher=Wiley }}</ref> uses sampling of nodes to better estimate and debias the heuristic error.
* <math>A^*_\varepsilon</math><ref>{{cite journal | first = Judea | last = Pearl |author2= Jin H. Kim | title = Studies in semi-admissible heuristics | journal = IEEE Transactions on Pattern Analysis and Machine Intelligence | volume = 4 | issue = 4 | pages = 392–399 | year = 1982 | doi = 10.1109/TPAMI.1982.4767270 | pmid = 21869053 | s2cid = 3176931 }}</ref> uses two heuristic functions. The first is used to build the FOCAL list of candidate nodes, and the second, ''h<sub>F</sub>'', is used to select the most promising node from the FOCAL list.
* ''A<sub>ε</sub>''<ref>{{cite conference |first=Malik |last=Ghallab |author2=Dennis Allard |title=''A<sub>ε</sub>'' – an efficient near admissible heuristic search algorithm |book-title=Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83) |volume=2 |pages=789–791 |place=Karlsruhe, Germany |date=August 1983 |url=http://ijcai.org/Past%20Proceedings/IJCAI-83-VOL-2/PDF/048.pdf |url-status=dead |archive-url=https://web.archive.org/web/20140806200328/http://ijcai.org/Past%20Proceedings/IJCAI-83-VOL-2/PDF/048.pdf |archive-date=2014-08-06 }}</ref> selects nodes with the function {{tmath|A f(n) + B h_F(n)}}, where ''A'' and ''B'' are constants. If no nodes can be selected, the algorithm backtracks with the function {{tmath|C f(n) + D h_F(n)}}, where ''C'' and ''D'' are constants.
* AlphA*<ref>{{cite report |first=Bjørn |last=Reese |title=AlphA*: An ''ε''-admissible heuristic search algorithm |year=1999 |publisher=Institute for Production Technology, University of Southern Denmark |url=http://home1.stofanet.dk/breese/astaralpha-submitted.pdf.gz |access-date=2014-11-05 |archive-url=https://web.archive.org/web/20160131214618/http://home1.stofanet.dk/breese/astaralpha-submitted.pdf.gz |archive-date=2016-01-31 |url-status=dead }}</ref> attempts to promote depth-first exploitation by preferring recently expanded nodes. AlphA* uses the cost function <math>f_\alpha(n) = (1 + w_\alpha(n)) f(n)</math>, where <math>w_\alpha(n) = \begin{cases} \lambda & g(\pi(n)) \le g(\tilde{n}) \\ \Lambda & \text{otherwise} \end{cases}</math>, where ''λ'' and ''Λ'' are constants with <math>\lambda \le \Lambda</math>, ''π''(''n'') is the parent of ''n'', and ''ñ'' is the most recently expanded node.
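
The following is a minimal sketch of Weighted A* in Python, illustrating the bounded relaxation described above. The adjacency-dictionary graph representation, the heuristic callable, and the function name are illustrative assumptions, not taken from the cited references; with {{math|1=''ε'' = 1}} it reduces to ordinary A*.

<syntaxhighlight lang="python">
import heapq
from itertools import count

def weighted_a_star(graph, h, start, goal, eps=1.5):
    """A* run with the inflated heuristic h_w(n) = eps * h(n), eps >= 1.

    graph: dict mapping each node to a list of (neighbor, edge_cost) pairs.
    h:     an admissible heuristic estimating the cost from a node to the goal.
    Returns (path, cost); the cost is at most eps times the optimal cost.
    """
    g_best = {start: 0.0}              # best known cost-to-come g(n)
    came_from = {start: None}
    tie = count()                      # tie-breaker so the heap never compares nodes
    open_heap = [(eps * h(start), next(tie), start)]   # priority f(n) = g(n) + eps*h(n)
    closed = set()
    while open_heap:
        _, _, node = heapq.heappop(open_heap)
        if node in closed:             # stale heap entry, skip it
            continue
        closed.add(node)
        if node == goal:               # reconstruct the path start -> goal
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1], g_best[goal]
        for nbr, cost in graph.get(node, []):
            g_new = g_best[node] + cost
            if g_new < g_best.get(nbr, float("inf")):
                g_best[nbr] = g_new
                came_from[nbr] = node
                heapq.heappush(open_heap, (g_new + eps * h(nbr), next(tie), nbr))
    return None, float("inf")          # goal unreachable
</syntaxhighlight>

Larger values of ''ε'' typically expand fewer nodes, at the cost of a solution that may be up to ''ε'' times more expensive than optimal.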
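
The alternative priorities listed above can be transcribed directly from their formulas. Each computes a priority from ''g''(''n'') and the admissible ''h''(''n'') and would take the place of the <code>g_new + eps * h(nbr)</code> priority in a search loop such as the sketch above; the function names here are illustrative.

<syntaxhighlight lang="python">
from math import sqrt

def f_xdp(g, h, eps):
    # Convex downward parabola (XDP): near-optimal behaviour close to the start.
    return (g + (2 * eps - 1) * h + sqrt((g - h) ** 2 + 4 * eps * g * h)) / (2 * eps)

def f_xup(g, h, eps):
    # Convex upward parabola (XUP): near-optimal behaviour close to the goal.
    return (g + h + sqrt((g + h) ** 2 + 4 * eps * (eps - 1) * h ** 2)) / (2 * eps)

def f_pwxd(g, h, eps):
    # Piecewise counterpart of XDP.
    return g + h if h > g else g + (2 * eps - 1) * h / eps

def f_pwxu(g, h, eps):
    # Piecewise counterpart of XUP.
    return g / (2 * eps - 1) + h if g < (2 * eps - 1) * h else (g + h) / eps
</syntaxhighlight>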