Inference engine
{{Short description|Component of artificial intelligence systems}}
{{Update|date=October 2019}}
In the field of [[artificial intelligence]], an '''inference engine''' is a [[software component]] of an intelligent system that applies logical rules to a [[knowledge base]] to deduce new information.

The first inference engines were components of [[expert system]]s. A typical expert system consisted of a knowledge base and an inference engine. The knowledge base stored facts about the world, and the inference engine applied logical rules to those facts to deduce new knowledge. This process would iterate, as each new fact added to the knowledge base could trigger additional rules in the inference engine.

Inference engines work primarily in one of two modes: [[forward chaining]] and [[backward chaining]]. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals and works backward to determine what facts must be asserted so that the goals can be achieved.<ref name="Hayes-Roth 1983">{{cite book|last=Hayes-Roth|first=Frederick|title=Building Expert Systems|year=1983|publisher=Addison-Wesley|isbn=0-201-10686-8|author2=Donald Waterman|author3=Douglas Lenat|url=https://archive.org/details/buildingexpertsy00temd}}</ref>

Additionally, the concept of 'inference' has expanded to include the process through which trained [[Artificial neural network|neural networks]] generate predictions or decisions. In this context, an 'inference engine' can refer to the specific part of the system, or even the hardware, that executes these operations. This type of inference plays a crucial role in various applications, including (but not limited to) [[image recognition]], [[natural language processing]], and [[autonomous vehicles]]. The inference phase in these applications is typically characterized by a high volume of data inputs and real-time processing requirements.
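The two modes can be illustrated with a minimal sketch. The code below is not taken from any particular expert-system shell; the rule representation (sets of premise strings paired with a conclusion string) and the example facts are assumptions made purely for illustration.

<syntaxhighlight lang="python">
# Minimal sketch of the two inference modes over propositional rules.
# Each rule is a pair (premises, conclusion): a set of facts that, if all
# present, justify asserting the conclusion. Names here are illustrative.

def forward_chain(facts, rules):
    """Data-driven mode: repeatedly fire any rule whose premises are all
    known, adding its conclusion, until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)   # rule fires: assert the new fact
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Goal-driven mode: a goal holds if it is already a known fact, or if
    some rule concludes it and all of that rule's premises can themselves
    be proved. (Assumes an acyclic rule set.)"""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(backward_chain(p, facts, rules) for p in premises):
            return True
    return False

# Example: from "man(socrates)" and the rule "man(X) -> mortal(X)"
# (instantiated here for socrates), both modes establish "mortal(socrates)".
facts = {"man(socrates)"}
rules = [({"man(socrates)"}, "mortal(socrates)")]
print(forward_chain(facts, rules))                        # includes mortal(socrates)
print(backward_chain("mortal(socrates)", facts, rules))   # True
</syntaxhighlight>

Real inference engines add pattern matching over variables, conflict-resolution strategies for choosing among fireable rules, and efficient matching algorithms, but the data-driven versus goal-driven control flow is the distinction sketched above.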