==Properties==
The [[Time complexity|time]] and [[Memory management|space]] analysis of DFS differs according to its application area. In theoretical computer science, DFS is typically used to traverse an entire graph, and takes time {{nowrap|1=<math>O(|V| + |E|)</math>,<ref>Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. ''Introduction to Algorithms'', p. 606.</ref>}} where <math>|V|</math> is the number of [[Vertex (graph theory)|vertices]] and <math>|E|</math> the number of [[Edge (graph theory)|edges]]. This is linear in the size of the graph. In these applications it also uses space <math>O(|V|)</math> in the worst case to store the [[Stack (abstract data type)|stack]] of vertices on the current search path as well as the set of already-visited vertices. Thus, in this setting, the time and space bounds are the same as for [[breadth-first search]], and the choice between the two algorithms depends less on their complexity and more on the different properties of the vertex orderings they produce.

For applications of DFS in specific domains, such as searching for solutions in [[artificial intelligence]] or web crawling, the graph to be traversed is often either too large to visit in its entirety or infinite (DFS may suffer from [[Halting problem|non-termination]]). In such cases, search is only performed to a [[Depth-limited search|limited depth]]; due to limited resources, such as memory or disk space, one typically does not use data structures to keep track of the set of all previously visited vertices.
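The full-traversal setting described above can be sketched as follows. This is a generic iterative DFS over an adjacency-list graph, not code from any particular source; the dictionary graph representation and the function name are illustrative choices. Each vertex and edge is processed at most once, giving the <math>O(|V| + |E|)</math> time bound, while the explicit stack and the visited set account for the <math>O(|V|)</math> space bound.

```python
def dfs(graph, start):
    """Iterative depth-first search over an adjacency-list graph.

    Each vertex and edge is examined at most once: O(|V| + |E|) time.
    The explicit stack and the visited set each hold at most |V|
    vertices: O(|V|) space.
    """
    visited = set()
    order = []            # vertices in the order they are first visited
    stack = [start]
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        # Push neighbours in reverse so they are explored in list order.
        for w in reversed(graph.get(v, [])):
            if w not in visited:
                stack.append(w)
    return order


# Example graph (adjacency lists):
graph = {
    'A': ['B', 'C'],
    'B': ['D'],
    'C': ['D'],
    'D': [],
}
print(dfs(graph, 'A'))  # ['A', 'B', 'D', 'C']
```

Note that the visited set is what keeps the running time linear; dropping it, as the limited-depth variants below do, allows vertices to be expanded more than once.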
When search is performed to a limited depth, the time is still linear in the number of expanded vertices and edges (although this number is not the same as the size of the entire graph, because some vertices may be searched more than once and others not at all), but the space complexity of this variant of DFS is only proportional to the depth limit, and as a result is much smaller than the space needed for searching to the same depth using breadth-first search.

For such applications, DFS also lends itself much better to [[heuristics|heuristic]] methods for choosing a likely-looking branch. When an appropriate depth limit is not known a priori, [[iterative deepening depth-first search]] applies DFS repeatedly with a sequence of increasing limits. In the artificial intelligence mode of analysis, with a [[branching factor]] greater than one, iterative deepening increases the running time by only a constant factor over the case in which the correct depth limit is known, due to the geometric growth of the number of nodes per level.

DFS may also be used to collect a [[Sample (statistics)|sample]] of graph nodes. However, incomplete DFS, similarly to incomplete [[Breadth-first search#Bias towards nodes of high degree|BFS]], is [[bias]]ed towards nodes of high [[Degree (graph theory)|degree]].