Local search (optimization)
{{Short description|Method for problem solving in optimization}}
{{more footnotes|date=May 2015}}
In [[computer science]], '''local search''' is a [[heuristic]] method for solving computationally hard [[Mathematical optimization|optimization]] problems. Local search can be used on problems that can be formulated as finding a solution that maximizes a criterion among a number of [[candidate solution]]s. Local search algorithms move from solution to solution in the space of candidate solutions (the ''search space'') by applying local changes, until a solution deemed optimal is found or a time bound has elapsed.

Local search algorithms are widely applied to numerous hard computational problems, including problems from [[computer science]] (particularly [[artificial intelligence]]), [[mathematics]], [[operations research]], [[engineering]], and [[bioinformatics]]. Examples of local search algorithms are WalkSAT, the [[2-opt|2-opt algorithm for the Traveling Salesman Problem]] and the [[Metropolis–Hastings algorithm]].<ref>{{cite web|url=https://www.cs.princeton.edu/~wayne/kleinberg-tardos/pdf/12LocalSearch.pdf|title=12LocalSearch.key}}</ref>

While it is sometimes possible to substitute [[gradient descent]] for a local search algorithm, gradient descent is not in the same family: although it is an [[iterative method]] for [[Global optimization|local optimization]], it relies on an [[loss function|objective function’s gradient]] rather than an explicit exploration of the solution space.
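The move-from-solution-to-solution scheme described above can be sketched as a short loop. This is an illustrative sketch, not a prescribed implementation: the names `neighbors` and `score`, and the greedy take-the-first-improving-move policy, are assumptions for the example.

```python
def local_search(initial, neighbors, score, max_steps=10_000):
    """Generic local-search loop: repeatedly apply a local change
    that improves the criterion, until no neighbor improves it
    (a local optimum) or the step budget is exhausted.

    neighbors(s) yields candidate solutions close to s;
    score(s) is the criterion to maximize. Both are hypothetical
    callables supplied by the problem formulation.
    """
    current = initial
    for _ in range(max_steps):
        improved = False
        for candidate in neighbors(current):
            if score(candidate) > score(current):
                current = candidate  # take the first improving local change
                improved = True
                break
        if not improved:  # local optimum reached: stop
            break
    return current
```

For instance, maximizing the toy criterion −(x − 3)² over the integers with neighbors x ± 1 climbs from 0 to the optimum 3.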
== Examples ==
Some problems where local search has been applied are:
# The [[vertex cover problem]], in which a solution is a [[vertex cover]] of a [[Graph (discrete mathematics)|graph]], and the target is to find a solution with a minimal number of nodes
# The [[traveling salesman problem]], in which a solution is a [[Cycle (graph theory)|cycle]] containing all nodes of the graph and the target is to minimize the total length of the cycle
# The [[Boolean satisfiability problem]], in which a candidate solution is a truth assignment, and the target is to maximize the number of [[Clause (logic)|clauses]] satisfied by the assignment; in this case, the final solution is of use only if it satisfies ''all'' clauses
# The [[nurse scheduling problem]], where a solution is an assignment of nurses to [[Shift work|shifts]] which satisfies all established [[Constraint satisfaction|constraints]]
# The [[k-medoid]] clustering problem and other related [[Optimal facility location|facility location]] problems, for which local search offers the best known approximation ratios from a worst-case perspective
# The problem of finding stable configurations in a [[Hopfield network]]

== Description ==
Most problems can be formulated in terms of a search space and target in several different ways. For example, for the traveling salesman problem a solution can be a route visiting all cities, and the goal is to find the shortest route. But a solution can also be a path, with being a cycle part of the target.

A local search algorithm starts from a candidate solution and then [[iterative method|iteratively]] moves to a [[Neighbourhood (mathematics)|neighboring]] solution, a neighborhood being the set of all potential solutions that differ from the current solution by the minimal possible extent. This requires a neighborhood relation to be defined on the search space.
As an example, a neighbor of a vertex cover is another vertex cover differing from it by only one node. For Boolean satisfiability, the neighbors of a truth assignment are those assignments that differ from it in the value of a single variable. The same problem may have multiple distinct neighborhoods defined on it; local optimization with neighborhoods that involve changing up to ''k'' components of the solution is often referred to as '''k-opt'''.

Typically, every candidate solution has more than one neighbor; the choice of which one to move to is made using only information about the solutions in the [[Neighbourhood (mathematics)|neighborhood]] of the current one, hence the name ''local'' search. When the neighbor that locally maximizes the criterion is always chosen, i.e. a greedy search, the metaheuristic takes the name [[hill climbing]]. When no improving neighbor is present, local search is stuck at a [[Local optimum|locally optimal point]]. This local-optima problem can be mitigated by restarts (repeated local search with different initial conditions), by randomization, or by more complex schemes based on iterations (like [[iterated local search]]), on memory (like reactive search optimization), or on memory-less stochastic modifications (like [[simulated annealing]]).

Local search does not provide a guarantee that any given solution is optimal. The search can terminate after a given time bound or when the best solution found thus far has not improved in a given number of steps. Local search is an [[anytime algorithm|anytime]] algorithm: it can return a valid solution even if it is interrupted at any time after finding the first valid solution. Local search is typically an [[approximation algorithm|approximation]] or incomplete algorithm, because the search may stop even if the current best solution found is not optimal.
This can happen even if termination is due to the impossibility of improving the current best solution, as the optimal solution can lie far from the neighborhood of the solutions crossed by the algorithm.

Schuurmans and Southey propose three measures of effectiveness for local search (depth, mobility, and coverage):<ref>D. Schuurmans and F. Southey. Local search characteristics of incomplete SAT procedures. AI J., 132(2):121–150, 2001.</ref>
* depth: the cost of the current (best) solution;
* mobility: the ability to rapidly move to different areas of the search space (whilst keeping the cost low);
* coverage: how systematically the search covers the search space, measured as the maximum distance between any unexplored assignment and all visited assignments.
They hypothesize that local search algorithms work well not because they have some understanding of the search space, but because they quickly move to promising regions and explore the search space at low depths as quickly, broadly, and systematically as possible.

==See also==
Local search is a sub-field of:
* [[Metaheuristic]]s
* [[Stochastic optimization]]
* [[Mathematical optimization|Optimization]]

Fields within local search include:
* [[Hill climbing]]
* [[Simulated annealing]] (suited for either local or global search)
* [[Tabu search]]
* [[Late acceptance hill climbing]]
* Reactive search optimization (combining [[machine learning]] and local search heuristics)

===Real-valued search-spaces===
Several methods exist for performing local search of [[real number|real-valued]] search-spaces:
* [[Luus–Jaakola]] searches locally using a [[Uniform distribution (continuous)|uniform distribution]] and an exponentially decreasing search-range.
* [[Random optimization]] searches locally using a [[normal distribution]].
* [[Random search]] searches locally by sampling a [[hypersphere]] surrounding the current position.
* [[Pattern search (optimization)|Pattern search]] takes steps along the axes of the search-space using exponentially decreasing step sizes.

== References ==
{{Reflist}}

==Bibliography==
*{{cite book |title = Reactive Search and Intelligent Optimization |last = Battiti |first = Roberto |author2 = Mauro Brunato |author3 = Franco Mascia |url = http://reactive-search.org/thebook |year = 2008 |publisher = [[Springer Verlag]] |isbn = 978-0-387-09623-0 |url-status = dead |archive-url = https://web.archive.org/web/20120316104607/http://reactive-search.org/thebook |archive-date = 2012-03-16 }}
*[[Holger H. Hoos|Hoos, H.H.]] and Stützle, T. (2005) ''Stochastic Local Search: Foundations and Applications'', Morgan Kaufmann.
*Vijay Arya, Naveen Garg, Rohit Khandekar, Adam Meyerson, Kamesh Munagala and Vinayaka Pandit (2004): [http://www.cse.iitd.ernet.in/~pandit/lsearchSICOMP.pdf Local Search Heuristics for ''k''-Median and Facility Location Problems], SIAM Journal on Computing 33(3).
*[[Juraj Hromkovič]]: ''Algorithmics for Hard Problems: Introduction to Combinatorial Optimization, Randomization, Approximation, and Heuristics'' (Springer)
*Wil Michiels, Emile Aarts, Jan Korst: ''Theoretical Aspects of Local Search'' (Springer)

{{Major subfields of optimization}}

[[Category:Metaheuristics]]
[[Category:Optimization algorithms and methods]]