Apriori algorithm
== Overview ==
The Apriori algorithm was proposed by Agrawal and Srikant in 1994. Apriori is designed to operate on [[database]]s containing transactions (for example, collections of items bought by customers, or details of website visits or [[IP address]]es<ref>{{usurped|1=[https://web.archive.org/web/20210822191810/https://deductive.com/blogs/data-science-ip-matching/ The data science behind IP address matching]}} Published by deductive.com, September 6, 2018, retrieved September 7, 2018</ref>). Other algorithms are designed for finding association rules in data having no transactions ([[Winepi]] and Minepi) or having no timestamps ([[DNA sequencing]]). Each transaction is seen as a set of items (an ''itemset''). Given a threshold <math>C</math>, the Apriori algorithm identifies the item sets which are subsets of at least <math>C</math> transactions in the database.

Apriori uses a "bottom-up" approach, in which frequent subsets are extended one item at a time (a step known as ''candidate generation'') and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found.

Apriori uses [[breadth-first search]] and a [[Hash tree (persistent data structure)|Hash tree]] structure to count candidate item sets efficiently. It generates candidate item sets of length <math>k</math> from item sets of length <math>k-1</math>, then prunes the candidates which have an infrequent subpattern. By the downward closure lemma, the candidate set contains all frequent <math>k</math>-length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates.

The pseudocode for the algorithm is given below for a transaction database <math>T</math> and a support threshold of <math>\varepsilon</math>. Usual set-theoretic notation is employed, though note that <math>T</math> is a [[multiset]]. <math>C_k</math> is the candidate set for level <math>k</math>.
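To illustrate the support threshold <math>C</math> described above, the following minimal Python sketch counts how many transactions contain a given item set; the three-transaction database is a hypothetical example, not taken from the article:

```python
# A toy transaction database (hypothetical example).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
]

def support(itemset, transactions):
    """Number of transactions that contain every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t)

# {bread, milk} is a subset of 2 of the 3 transactions,
# so it is frequent for any threshold C <= 2.
print(support({"bread", "milk"}, transactions))
```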
At each step, the algorithm is assumed to generate the candidate sets from the large item sets of the preceding level, heeding the downward closure lemma. <math>\mathrm{count}[c]</math> accesses a field of the data structure that represents candidate set <math>c</math>, which is initially assumed to be zero. Many details are omitted below; usually the most important part of the implementation is the data structure used for storing the candidate sets and counting their frequencies.

 '''Apriori'''(T, ε)
     L<sub>1</sub> ← {large singleton itemsets}
     k ← 2
     '''while''' L<sub>k−1</sub> '''is not''' empty
         C<sub>k</sub> ← Generate_candidates(L<sub>k−1</sub>, k)
         '''for''' transactions t '''in''' T
             D<sub>t</sub> ← {c in C<sub>k</sub> : c ⊆ t}
             '''for''' candidates c '''in''' D<sub>t</sub>
                 count[c] ← count[c] + 1
         L<sub>k</sub> ← {c in C<sub>k</sub> : count[c] ≥ ε}
         k ← k + 1
     '''return''' Union(L<sub>k</sub>) '''over all''' k

 '''Generate_candidates'''(L, k)
     result ← empty_set()
     '''for all''' p ∈ L, q ∈ L '''where''' p and q differ in exactly one element
         c ← p ∪ q
         '''if''' u ∈ L '''for all''' u ⊆ c '''where''' |u| = k−1
             result.add(c)
     '''return''' result
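The pseudocode above can be sketched as a runnable Python function. This is an illustrative implementation under simplifying assumptions: candidate sets are stored in plain Python sets and their frequencies are counted with a dictionary, rather than with the hash tree structure the article describes, so it shows the control flow but not the performance characteristics:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {itemset: count} for all itemsets contained in at
    least min_support transactions."""
    transactions = [frozenset(t) for t in transactions]

    # L1: frequent singleton itemsets, found by one scan of the database.
    counts = {}
    for t in transactions:
        for item in t:
            s = frozenset([item])
            counts[s] = counts.get(s, 0) + 1
    current = {s for s, c in counts.items() if c >= min_support}
    frequent = {s: c for s, c in counts.items() if c >= min_support}

    k = 2
    while current:
        # Candidate generation: join pairs of (k-1)-itemsets that differ
        # in exactly one element (their union then has size k), and prune
        # any candidate with an infrequent (k-1)-subset (downward closure).
        candidates = set()
        for p in current:
            for q in current:
                union = p | q
                if len(union) == k and all(
                    frozenset(sub) in current
                    for sub in combinations(union, k - 1)
                ):
                    candidates.add(union)

        # Scan the database and count the support of each candidate.
        counts = {c: 0 for c in candidates}
        for t in transactions:
            for c in candidates:
                if c <= t:  # c is a subset of transaction t
                    counts[c] += 1

        current = {c for c, n in counts.items() if n >= min_support}
        frequent.update((c, n) for c, n in counts.items() if n >= min_support)
        k += 1

    return frequent
```

For example, on the database `[{1,2,3}, {1,2}, {1,3}, {2,3}, {1,2,3,4}]` with a threshold of 3, every pair from {1, 2, 3} is frequent (each appears in three transactions), but {1, 2, 3} itself appears in only two and is discarded.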