Inference engine
==Architecture==
The logic that an inference engine uses is typically represented as IF-THEN rules. The general format of such rules is IF <nowiki><logical expression></nowiki> THEN <nowiki><logical expression></nowiki>.

Prior to the development of expert systems and inference engines, artificial intelligence researchers focused on more powerful [[Theorem-prover|theorem prover]] environments that offered much fuller implementations of [[first-order logic]]. These supported, for example, general statements that included [[universal quantification]] (for all X, some statement is true) and [[existential quantification]] (there exists some X such that some statement is true). What researchers discovered is that the power of these theorem-proving environments was also their drawback. Back in 1965, it was far too easy to create logical expressions that could take an indeterminate or even infinite time to terminate. For example, it is common in universal quantification to make statements over an infinite set, such as the set of all natural numbers. Such statements are perfectly reasonable and even required in mathematical proofs, but when included in an automated theorem prover executing on a computer they may cause it to fall into an infinite loop. Focusing on IF-THEN statements (what logicians call ''[[modus ponens]]'') still gave developers a very powerful general mechanism to represent logic, but one that could be used efficiently with computational resources.
What is more, there is some psychological research indicating that humans also tend to favor IF-THEN representations when storing complex knowledge.<ref>{{cite book|last=Feigenbaum|first=Edward|title=The Handbook of Artificial Intelligence, Volume I|publisher=Addison-Wesley|isbn=0201118114|page=195|url=https://archive.org/stream/handbookofartific01barr#page/156/mode/2up|author2=Avron Barr|date=September 1, 1986}}</ref> A simple example of ''modus ponens'' often used in introductory logic books is "If you are human then you are mortal". This can be represented in [[pseudocode]] as:

 Rule1: Human(x) => Mortal(x)

A trivial example of how this rule would be used in an inference engine is as follows. In ''forward chaining'', the inference engine would find any facts in the knowledge base that matched Human(x), and for each fact it found would add the new information Mortal(x) to the knowledge base. So if it found an object called Socrates that was human, it would deduce that Socrates was mortal. In ''backward chaining'', the system would be given a goal, e.g. answer the question "Is Socrates mortal?" It would search through the knowledge base and determine if Socrates was human and, if so, would assert that he is also mortal.

In backward chaining, however, a common technique was to integrate the inference engine with a user interface, so that rather than simply being automated the system could be interactive. In this trivial example, if the system was given the goal of answering whether Socrates was mortal and it did not yet know whether he was human, it would generate a window to ask the user "Is Socrates human?" and would then use that information accordingly. This innovation of integrating the inference engine with a user interface led to the second early advancement of expert systems: explanation capabilities.
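As a minimal sketch of the two chaining strategies applied to Rule1, the following Python fragment may help. The tuple-based fact representation and the function names are assumptions made for this illustration, not the data model of any particular engine:

```python
# Facts are (predicate, argument) tuples; this representation is an
# illustrative assumption, not a real engine's data model.
facts = {("Human", "Socrates")}

def rule1_forward(kb):
    """Rule1: Human(x) => Mortal(x), applied by forward chaining:
    derive Mortal(x) for every Human(x) fact in the knowledge base."""
    return {("Mortal", x) for (pred, x) in kb if pred == "Human"}

# Forward chaining: add every new consequence to the knowledge base.
facts |= rule1_forward(facts)
# facts now also contains ("Mortal", "Socrates")

def is_mortal(name, kb):
    """Backward chaining: to prove the goal Mortal(name), reduce it to
    Rule1's antecedent and try the subgoal Human(name) instead."""
    if ("Mortal", name) in kb:
        return True
    return ("Human", name) in kb  # the antecedent becomes the new subgoal

print(is_mortal("Socrates", {("Human", "Socrates")}))  # True
```

In the forward direction the rule pushes new facts into the knowledge base; in the backward direction the same rule pulls the goal apart into subgoals, which is also the point where an interactive system could ask the user for a missing fact instead of failing.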
The explicit representation of knowledge as rules rather than code made it possible to generate explanations for users, both in real time and after the fact. So if the system asked the user "Is Socrates human?", the user might wonder why she was being asked that question, and the system would use the chain of rules to explain why it was currently trying to ascertain that bit of knowledge: it needs to determine whether Socrates is mortal, and to do that it needs to determine whether he is human. At first these explanations were not much different from the standard debugging information that developers deal with when debugging any system. However, an active area of research was utilizing natural language technology to ask, understand, and generate questions and explanations in natural language rather than computer formalisms.<ref>{{cite journal|last=Barzilayt|first=Regina |author2=Daryl McCullough |author3=Owen Rambow |author4=Jonathan DeCristofaro |author5=Tanya Korelsky |author6=Benoit Lavoie |title=A New Approach to Expert System Explanations |journal=USAF Rome Laboratory Report |url=https://apps.dtic.mil/sti/pdfs/ADA457707.pdf|archive-url=https://web.archive.org/web/20160705225736/http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA457707|url-status=live|archive-date=July 5, 2016}}</ref>

An inference engine cycles through three sequential steps: ''match rules'', ''select rules'', and ''execute rules''. The execution of the rules will often result in new facts or goals being added to the knowledge base, which will trigger the cycle to repeat; the cycle continues until no new rules can be matched. In the first step, ''match rules'', the inference engine finds all of the rules that are triggered by the current contents of the knowledge base. In forward chaining, the engine looks for rules where the antecedent (left-hand side) matches some fact in the knowledge base. In backward chaining, the engine looks for antecedents that can satisfy one of the current goals.
In the second step, ''select rules'', the inference engine prioritizes the matched rules to determine the order in which to execute them. In the final step, ''execute rules'', the engine executes each matched rule in the order determined in step two and then iterates back to step one. The cycle continues until no new rules are matched.<ref>{{citation|title=A Rule-Based Inference Engine which is Optimal and VLSI Implementable|last=Griffin|first=N.L.|publisher=University of Kentucky.}}</ref>
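The three-step cycle can be sketched in Python under stated assumptions: the rule format (name, priority number, antecedent test, consequent builder) and the priority-based selection strategy are invented for this illustration; real engines use richer pattern matching (e.g. the Rete algorithm) and more sophisticated conflict-resolution strategies.

```python
# Hypothetical rule format: (name, priority, antecedent test, consequent builder).
rules = [
    ("Rule1", 1, lambda f: f[0] == "Human",  lambda f: ("Mortal", f[1])),
    ("Rule2", 2, lambda f: f[0] == "Mortal", lambda f: ("HasLifespan", f[1])),
]

kb = {("Human", "Socrates")}  # knowledge base of (predicate, argument) facts

while True:
    # Step 1 - match rules: collect every (rule, fact) pair whose antecedent
    # matches a fact and whose consequent is not already in the knowledge base.
    agenda = [(prio, name, build(f))
              for (name, prio, matches, build) in rules
              for f in kb
              if matches(f) and build(f) not in kb]
    if not agenda:
        break  # no new rules match, so the cycle terminates
    # Step 2 - select rules: order the matched rules (lowest number first).
    agenda.sort()
    # Step 3 - execute rules: fire each rule, adding its new fact,
    # then iterate back to step 1.
    for _, name, new_fact in agenda:
        kb.add(new_fact)
```

Running this, the first pass fires Rule1 (adding Mortal(Socrates)), the second pass fires Rule2 (adding HasLifespan(Socrates)), and the third pass finds an empty agenda and stops, mirroring the termination condition described above.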