== Examples ==

===Person classification===

Problem: classify whether a given person is a male or a female based on the measured features. The features include height, weight, and foot size. Although the naive Bayes classifier treats these features as independent, they are not independent in reality.

====Training====

An example training set is shown below.

{| class="wikitable"
|-
! Person !! height (feet) !! weight (lbs) !! foot size (inches)
|-
| male || 6 || 180 || 12
|-
| male || 5.92 (5'11") || 190 || 11
|-
| male || 5.58 (5'7") || 170 || 12
|-
| male || 5.92 (5'11") || 165 || 10
|-
| female || 5 || 100 || 6
|-
| female || 5.5 (5'6") || 150 || 8
|-
| female || 5.42 (5'5") || 130 || 7
|-
| female || 5.75 (5'9") || 150 || 9
|}

The classifier created from the training set using a Gaussian distribution assumption would be (given that the variances are ''unbiased'' [[Variance#Population variance and sample variance|sample variances]]):

{| class="wikitable"
|-
! Person !! mean (height) !! variance (height) !! mean (weight) !! variance (weight) !! mean (foot size) !! variance (foot size)
|-
| male || 5.855 || 3.5033 × 10<sup>−2</sup> || 176.25 || 1.2292 × 10<sup>2</sup> || 11.25 || 9.1667 × 10<sup>−1</sup>
|-
| female || 5.4175 || 9.7225 × 10<sup>−2</sup> || 132.5 || 5.5833 × 10<sup>2</sup> || 7.5 || 1.6667
|}

The following example assumes equiprobable classes, so that P(male) = P(female) = 0.5. This prior [[probability distribution]] might be based on prior knowledge of frequencies in the larger population or in the training set.

====Testing====

Below is a sample to be classified as male or female.

{| class="wikitable"
|-
! Person !! height (feet) !! weight (lbs) !! foot size (inches)
|-
| sample || 6 || 130 || 8
|}

In order to classify the sample, one has to determine which posterior is greater, male or female. For the classification as male the posterior is given by

<math display="block"> \text{posterior (male)} = \frac{P(\text{male}) \, p(\text{height} \mid \text{male}) \, p(\text{weight} \mid \text{male}) \, p(\text{foot size} \mid \text{male})}{\text{evidence}} </math>

For the classification as female the posterior is given by

<math display="block"> \text{posterior (female)} = \frac{P(\text{female}) \, p(\text{height} \mid \text{female}) \, p(\text{weight} \mid \text{female}) \, p(\text{foot size} \mid \text{female})}{\text{evidence}} </math>

The evidence (also termed the normalizing constant) may be calculated:

<math display="block">\begin{align} \text{evidence} = {} & P(\text{male}) \, p(\text{height} \mid \text{male}) \, p(\text{weight} \mid \text{male}) \, p(\text{foot size} \mid \text{male}) \\
 & + P(\text{female}) \, p(\text{height} \mid \text{female}) \, p(\text{weight} \mid \text{female}) \, p(\text{foot size} \mid \text{female}) \end{align}</math>

However, given the sample, the evidence is a constant and thus scales both posteriors equally. It therefore does not affect classification and can be ignored. The [[probability distribution]] for the sex of the sample can now be determined:

<math display="block">P(\text{male}) = 0.5</math>
<math display="block">p({\text{height}} \mid \text{male}) = \frac{1}{\sqrt{2\pi \sigma^2}}\exp\left(\frac{-(6-\mu)^2}{2\sigma^2}\right) \approx 1.5789,</math>

where <math>\mu = 5.855</math> and <math>\sigma^2 = 3.5033 \cdot 10^{-2}</math> are the parameters of the normal distribution that were previously determined from the training set. Note that a value greater than 1 is acceptable here – it is a probability density rather than a probability, because ''height'' is a continuous variable.

<math display="block">p({\text{weight}} \mid \text{male}) = \frac{1}{\sqrt{2\pi \sigma^2}}\exp\left(\frac{-(130-\mu)^2}{2\sigma^2}\right) = 5.9881 \cdot 10^{-6}</math>
<math display="block">p({\text{foot size}} \mid \text{male}) = \frac{1}{\sqrt{2\pi \sigma^2}}\exp\left(\frac{-(8-\mu)^2}{2\sigma^2}\right) = 1.3112 \cdot 10^{-3}</math>
<math display="block">\text{posterior numerator (male)} = \text{their product} = 6.1984 \cdot 10^{-9}</math>
<math display="block">P({\text{female}}) = 0.5</math>
<math display="block">p({\text{height}} \mid {\text{female}}) = 2.23 \cdot 10^{-1}</math>
<math display="block">p({\text{weight}} \mid {\text{female}}) = 1.6789 \cdot 10^{-2}</math>
<math display="block">p({\text{foot size}} \mid {\text{female}}) = 2.8669 \cdot 10^{-1}</math>
<math display="block">\text{posterior numerator (female)} = \text{their product} = 5.3778 \cdot 10^{-4}</math>

Since the posterior numerator is greater in the female case, the prediction is that the sample is female.
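The calculation above can be reproduced with the following short, self-contained Python sketch; the variable and function names are illustrative and do not come from any particular library.

<syntaxhighlight lang="python">
import math

# Training data: (height in feet, weight in lbs, foot size in inches)
train = {
    "male":   [(6.00, 180, 12), (5.92, 190, 11), (5.58, 170, 12), (5.92, 165, 10)],
    "female": [(5.00, 100,  6), (5.50, 150,  8), (5.42, 130,  7), (5.75, 150,  9)],
}
priors = {"male": 0.5, "female": 0.5}  # equiprobable classes, as assumed above

def mean_and_variance(values):
    """Mean and unbiased sample variance of a list of numbers."""
    m = sum(values) / len(values)
    v = sum((x - m) ** 2 for x in values) / (len(values) - 1)
    return m, v

def gaussian_density(x, m, v):
    """Normal probability density with mean m and variance v, evaluated at x."""
    return math.exp(-((x - m) ** 2) / (2 * v)) / math.sqrt(2 * math.pi * v)

# "Training": per-class mean and variance of each of the three features.
params = {c: [mean_and_variance([row[i] for row in rows]) for i in range(3)]
          for c, rows in train.items()}

# "Testing": posterior numerator = prior times the product of the feature densities.
sample = (6, 130, 8)
for c in train:
    numerator = priors[c]
    for x, (m, v) in zip(sample, params[c]):
        numerator *= gaussian_density(x, m, v)
    print(c, numerator)  # male ≈ 6.20e-09, female ≈ 5.38e-04
</syntaxhighlight>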
===Document classification===

Here is a worked example of naive Bayesian classification applied to the [[document classification]] problem. Consider the problem of classifying documents by their content, for example into [[spamming|spam]] and non-spam [[e-mail]]s. Imagine that documents are drawn from a number of classes which can be modeled as sets of words, where the (independent) probability that the ''i''-th word of a given document occurs in a document from class ''C'' can be written as

<math display="block">p(w_i \mid C)\,</math>

(For this treatment, things are further simplified by assuming that words are randomly distributed in the document; that is, a word's probability does not depend on the length of the document, its position within the document relative to other words, or other document context.)

Then the probability that a given document ''D'' contains all of the words <math>w_i</math>, given a class ''C'', is

<math display="block">p(D\mid C) = \prod_i p(w_i \mid C)\,</math>

The question that has to be answered is: "What is the probability that a given document ''D'' belongs to a given class ''C''?" In other words, what is <math>p(C \mid D)\,</math>?

Now [[Conditional probability|by definition]]

<math display="block">p(D\mid C)={p(D\cap C)\over p(C)}</math>

and

<math display="block">p(C \mid D) = {p(D\cap C)\over p(D)}</math>

Bayes' theorem manipulates these into a statement of probability in terms of [[likelihood]]:

<math display="block">p(C\mid D) = \frac{p(C)\,p(D\mid C)}{p(D)}</math>

Assume for the moment that there are only two mutually exclusive classes, ''S'' and ¬''S'' (e.g. spam and not spam), such that every element (email) is in either one or the other:

<math display="block">p(D\mid S)=\prod_i p(w_i \mid S)\,</math>

and

<math display="block">p(D\mid\neg S)=\prod_i p(w_i\mid\neg S)\,</math>

Using the Bayesian result above, one can write:

<math display="block">p(S\mid D)={p(S)\over p(D)}\,\prod_i p(w_i \mid S)</math>
<math display="block">p(\neg S\mid D)={p(\neg S)\over p(D)}\,\prod_i p(w_i \mid\neg S)</math>

Dividing one by the other gives:

<math display="block">{p(S\mid D)\over p(\neg S\mid D)}={p(S)\,\prod_i p(w_i \mid S)\over p(\neg S)\,\prod_i p(w_i \mid\neg S)}</math>

This can be re-factored as:

<math display="block">{p(S\mid D)\over p(\neg S\mid D)}={p(S)\over p(\neg S)}\,\prod_i {p(w_i \mid S)\over p(w_i \mid\neg S)}</math>

Thus, the probability ratio p(''S'' | ''D'') / p(¬''S'' | ''D'') can be expressed in terms of a series of [[likelihood function|likelihood ratios]]. The actual probability p(''S'' | ''D'') can be easily computed from log (p(''S'' | ''D'') / p(¬''S'' | ''D'')) based on the observation that p(''S'' | ''D'') + p(¬''S'' | ''D'') = 1.

Taking the [[logarithm]] of all these ratios, one obtains:

<math display="block">\ln{p(S\mid D)\over p(\neg S\mid D)}=\ln{p(S)\over p(\neg S)}+\sum_i \ln{p(w_i\mid S)\over p(w_i\mid\neg S)}</math>

(The use of such "[[log-likelihood ratio]]s" is a common technique in statistics. In the case of two mutually exclusive alternatives, such as this example, the conversion of a log-likelihood ratio to a probability takes the form of a [[sigmoid curve]]: see [[logit]] for details.)

Finally, the document can be classified as follows. It is spam if <math>p(S\mid D) > p(\neg S\mid D)</math> (i.e., <math>\ln{p(S\mid D) \over p(\neg S\mid D)} > 0</math>); otherwise it is not spam.
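As an illustration of this decision rule, the following Python sketch classifies a document from per-word likelihoods. The word probabilities and the prior used here are made-up values for illustration, not estimates from any real corpus.

<syntaxhighlight lang="python">
import math

# Hypothetical word likelihoods (made up for illustration):
# for each word w, the pair is (p(w | S), p(w | not S)).
word_likelihoods = {
    "viagra":  (0.30, 0.001),
    "meeting": (0.002, 0.05),
    "free":    (0.20, 0.02),
}
p_spam = 0.5  # assumed prior p(S); p(not S) = 1 - p_spam

def spam_log_odds(words):
    """Compute ln( p(S|D) / p(not S|D) ) for a document containing the given words."""
    log_odds = math.log(p_spam / (1 - p_spam))
    for w in words:
        p_w_spam, p_w_ham = word_likelihoods[w]
        log_odds += math.log(p_w_spam / p_w_ham)
    return log_odds

document = ["viagra", "free"]
log_odds = spam_log_odds(document)
p_spam_given_doc = 1 / (1 + math.exp(-log_odds))  # sigmoid: recovers p(S|D) from the log-odds
print(log_odds > 0, round(p_spam_given_doc, 6))   # True, i.e. classified as spam
</syntaxhighlight>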
===Spam filtering===

Naive Bayes classifiers are a popular [[statistics|statistical]] [[scientific technique|technique]] for [[e-mail filtering]]. They typically use [[bag-of-words model|bag-of-words]] features to identify [[email spam]], an approach commonly used in [[Document classification|text classification]]. Naive Bayes classifiers work by correlating the use of tokens (typically words, or sometimes other things) with spam and non-spam e-mails, and then using [[Bayes' theorem]] to calculate the probability that an email is or is not spam.

'''Naive Bayes spam filtering''' is a baseline technique for dealing with spam that can tailor itself to the email needs of individual users and give low [[false positive]] spam detection rates that are generally acceptable to users.

Bayesian algorithms were used for email filtering as early as 1996. Although naive Bayesian filters did not become popular until later, multiple programs were released in 1998 to address the growing problem of unwanted email.<ref>{{cite book|title=Spam: A Shadow History of the Internet|last=Brunton|first=Finn|publisher=[[MIT Press]]|year=2013|isbn=9780262018876|page=136|url=https://books.google.com/books?id=QF7EjCRg5CIC&pg=PA136|access-date=2017-09-13|archive-url=https://web.archive.org/web/20190323133300/https://books.google.com/books?id=QF7EjCRg5CIC&pg=PA136|archive-date=2019-03-23|url-status=live}}</ref> The first scholarly publication on Bayesian spam filtering was by Sahami et al. in 1998.<ref>{{cite web|url=http://robotics.stanford.edu/users/sahami/papers-dir/spam.pdf|author1=M. Sahami|author2=S. Dumais|author3=D. Heckerman|author4=E. Horvitz|title=A Bayesian approach to filtering junk e-mail|publisher=AAAI'98 Workshop on Learning for Text Categorization|year=1998|access-date=2007-08-15|archive-url=https://web.archive.org/web/20070927171816/http://robotics.stanford.edu/users/sahami/papers-dir/spam.pdf|archive-date=2007-09-27|url-status=live}}</ref>

Variants of the basic technique have been implemented in a number of research works and commercial [[Computer software|software]] products.<ref>{{cite web|url=http://kb.mozillazine.org/Junk_Mail_Controls|title=Junk Mail Controls|publisher=MozillaZine|date=November 2009|access-date=2010-01-16|archive-url=https://web.archive.org/web/20121023211104/http://kb.mozillazine.org/Junk_Mail_Controls|archive-date=2012-10-23|url-status=live}}</ref> Many modern mail [[Client (computing)|clients]] implement Bayesian spam filtering. Users can also install separate [[E-mail filtering|email filtering programs]].
[[Server-side]] email filters, such as [[DSPAM]], [[SpamAssassin]],<ref name=twsSep14yy>{{cite web|title = Installation|publisher = Ubuntu manuals|quote = Gary Robinson’s f(x) and combining algorithms, as used in SpamAssassin|date = 2010-09-18|url = http://manpages.ubuntu.com/manpages/gutsy/man1/sa-learn.1p.html|access-date = 2010-09-18|archive-url = https://web.archive.org/web/20100929165032/http://manpages.ubuntu.com/manpages/gutsy/man1/sa-learn.1p.html|archive-date = 29 September 2010|url-status = dead}}</ref> [[SpamBayes]],<ref name=twsSep2>{{Cite news|title= Background Reading|publisher= SpamBayes project|quote= Sharpen your pencils, this is the mathematical background (such as it is).* The paper that started the ball rolling: Paul Graham's A Plan for Spam.* Gary Robinson has an interesting essay suggesting some improvements to Graham's original approach.* Gary Robinson's Linux Journal article discussed using the chi squared distribution.|date= 2010-09-18|url= http://spambayes.sourceforge.net/background.html|access-date= 2010-09-18| archive-url= https://web.archive.org/web/20100906031341/http://spambayes.sourceforge.net/background.html| archive-date= 6 September 2010 | url-status= live}}</ref> [[Bogofilter]], and [[Anti-Spam SMTP Proxy|ASSP]], make use of Bayesian spam filtering techniques, and the functionality is sometimes embedded within [[mail server]] software itself. [[CRM114 (program)|CRM114]], often cited as a Bayesian filter, is not intended to use a Bayes filter in production, but includes the "unigram" feature for reference.<ref>{{Cite web |url=http://crm114.sourceforge.net/docs/classify_details.txt |title=Archived copy |access-date=2016-07-09 |archive-url=https://web.archive.org/web/20161007063935/http://crm114.sourceforge.net/docs/classify_details.txt |archive-date=2016-10-07 |url-status=live }}</ref>

====Dealing with rare words====

If a word has never been encountered during the learning phase, both the numerator and the denominator are equal to zero, both in the general formula and in the spamicity formula. The software can decide to discard such words, for which no information is available.

More generally, words that were encountered only a few times during the learning phase are a problem, because it would be an error to blindly trust the information they provide. A simple solution is to avoid taking such unreliable words into account as well.

Applying Bayes' theorem again, and assuming the classification between spam and ham of the emails containing a given word (e.g. "replica") is a [[random variable]] with a [[beta distribution]], some programs decide to use a corrected probability:

:<math>\Pr'(S|W) = \frac{s \cdot \Pr(S) + n \cdot \Pr(S|W)}{s + n}</math>

where:
* <math>\Pr'(S|W)</math> is the corrected probability for the message to be spam, knowing that it contains a given word;
* <math>s</math> is the ''strength'' given to background information about incoming spam;
* <math>\Pr(S)</math> is the probability of any incoming message being spam;
* <math>n</math> is the number of occurrences of this word during the learning phase;
* <math>\Pr(S|W)</math> is the spamicity of this word.

(For a demonstration, see the reference.<ref>{{cite magazine|url=http://www.linuxjournal.com/article/6467|magazine=Linux Journal|author=Gary Robinson|author-link=Gary Robinson|title=A statistical approach to the spam problem|year=2003|access-date=2007-07-19|archive-url=https://web.archive.org/web/20101022001749/http://www.linuxjournal.com/article/6467|archive-date=2010-10-22|url-status=live}}</ref>)

This corrected probability is used instead of the spamicity in the combining formula. The formula can be extended to the case where ''n'' is equal to zero (and where the spamicity is not defined), and evaluates in this case to <math>\Pr(S)</math>.
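A minimal Python sketch of this correction is shown below; the strength parameter ''s'' and the background probability Pr(''S'') are chosen here purely as illustrative values, not as recommendations from the cited sources.

<syntaxhighlight lang="python">
def corrected_spamicity(p_s_w, n, p_s=0.5, s=3.0):
    """Corrected probability Pr'(S|W) = (s*Pr(S) + n*Pr(S|W)) / (s + n).

    p_s_w -- raw spamicity Pr(S|W) of the word (ignored when n == 0)
    n     -- number of occurrences of the word during the learning phase
    p_s   -- assumed background probability Pr(S) that any message is spam
    s     -- strength given to the background information (illustrative value)
    """
    if n == 0:
        return p_s  # no data at all: fall back on the background probability
    return (s * p_s + n * p_s_w) / (s + n)

print(corrected_spamicity(0.99, n=1))    # a word seen only once barely moves the estimate: 0.6225
print(corrected_spamicity(0.99, n=200))  # a frequently seen word dominates: ~0.9828
</syntaxhighlight>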
====Other heuristics====

"Neutral" words like "the", "a", "some", or "is" (in English), or their equivalents in other languages, can be ignored. These are also known as [[stop words]]. More generally, some Bayesian filters simply ignore all words whose spamicity is near 0.5, as they contribute little to a good decision. The words taken into consideration are those whose spamicity is near 0.0 (distinctive signs of legitimate messages) or near 1.0 (distinctive signs of spam). One method, for example, is to keep only the ten words in the examined message that have the greatest [[absolute value]] |0.5 − ''pI''| (see the example below).

Some software products take into account the fact that a given word appears several times in the examined message,<ref>{{cite web|url=http://spamprobe.sourceforge.net/paper.html|author=Brian Burton|title=SpamProbe - Bayesian Spam Filtering Tweaks|year=2003|access-date=2009-01-19|archive-url=https://web.archive.org/web/20120301235828/http://spamprobe.sourceforge.net/paper.html|archive-date=2012-03-01|url-status=live}}</ref> others do not.

Some software products use ''patterns'' (sequences of words) instead of isolated natural-language words.<ref>{{cite web|url=http://bnr.nuclearelephant.com/l|author=Jonathan A. Zdziarski|title=Bayesian Noise Reduction: Contextual Symmetry Logic Utilizing Pattern Consistency Analysis|year=2004}}{{dead link|date=February 2018 |bot=InternetArchiveBot |fix-attempted=yes }}</ref> For example, with a "context window" of four words, they compute the spamicity of "Viagra is good for" instead of computing the spamicities of "Viagra", "is", "good", and "for". This method gives more sensitivity to context and eliminates Bayesian noise better, at the expense of a bigger database.
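For illustration, the "keep only the most decisive words" heuristic could be implemented as in the following sketch; the function name and the example spamicities are hypothetical.

<syntaxhighlight lang="python">
def most_decisive(spamicities, keep=10):
    """Keep only the words whose spamicity is farthest from the neutral value 0.5."""
    ranked = sorted(spamicities.items(), key=lambda kv: abs(0.5 - kv[1]), reverse=True)
    return dict(ranked[:keep])

# Hypothetical spamicities Pr(S|W) for the words of one examined message.
words = {"viagra": 0.98, "the": 0.51, "meeting": 0.05, "free": 0.90, "is": 0.49}
print(most_decisive(words, keep=3))  # {'viagra': 0.98, 'meeting': 0.05, 'free': 0.9}
</syntaxhighlight>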
====Disadvantages====

Depending on the implementation, Bayesian spam filtering may be susceptible to [[Bayesian poisoning]], a technique used by spammers in an attempt to degrade the effectiveness of spam filters that rely on Bayesian filtering. A spammer practicing Bayesian poisoning will send out emails with large amounts of legitimate text (gathered from legitimate news or literary sources). [[e-mail spam|Spammer]] tactics include the insertion of random innocuous words that are not normally associated with spam, thereby decreasing the email's spam score and making it more likely to slip past a Bayesian spam filter. However, with (for example) [[Paul_Graham_(programmer)|Paul Graham]]'s scheme only the most significant probabilities are used, so that padding the text out with non-spam-related words does not affect the detection probability significantly.

Words that normally appear in large quantities in spam may also be transformed by spammers. For example, "Viagra" would be replaced with "Viaagra" or "V!agra" in the spam message. The recipient of the message can still read the changed words, but each of these words is encountered more rarely by the Bayesian filter, which hinders its learning process. As a general rule, this spamming technique does not work very well, because the derived words end up being recognized by the filter just like the normal ones.<ref>Paul Graham (2002), [http://www.paulgraham.com/spam.html A Plan for Spam] {{Webarchive|url=https://web.archive.org/web/20040404013856/http://www.paulgraham.com/spam.html |date=2004-04-04 }}</ref>

Another technique used to try to defeat Bayesian spam filters is to replace text with pictures, either directly included or linked. The whole text of the message, or some part of it, is replaced with a picture in which the same text is "drawn". The spam filter is usually unable to analyze this picture, which would contain the sensitive words like "Viagra". However, since many mail clients disable the display of linked pictures for security reasons, a spammer sending links to distant pictures might reach fewer targets. Also, a picture's size in bytes is larger than the equivalent text's size, so the spammer needs more bandwidth to send messages that directly include pictures. Some filters are more inclined to decide that a message is spam if it has mostly graphical content. A solution used by [[Google]] in its [[Gmail]] email system is to perform [[Optical character recognition|OCR (optical character recognition)]] on every mid- to large-size image, analyzing the text inside.<ref>{{cite web|url=http://www.google.com/mail/help/intl/en_GB/fightspam/spamexplained.html|title=Gmail uses Google's innovative technology to keep spam out of your inbox|access-date=2015-09-05|archive-url=https://web.archive.org/web/20150913070222/http://www.google.com/mail/help/intl/en_GB/fightspam/spamexplained.html|archive-date=2015-09-13|url-status=live}}</ref><ref>{{cite journal|last1=Zhu|first1=Z.|last2=Jia|first2=Z|last3=Xiao|first3=H|last4=Zhang|first4=G|last5=Liang|first5=H.|last6=Wang|first6=P.|editor1-last=Li|editor1-first=S|editor2-last=Jin|editor2-first=Q|editor3-last=Jiang|editor3-first=X|editor4-last=Park|editor4-first=J|editor1-link=Frontier and Future Development of Information Technology in Medicine and Education. Lecture Notes in Electrical Engineering|title=A Modified Minimum Risk Bayes and {{as written|I|t's [sic]}} Application in Spam|journal=Lecture Notes in Electrical Engineering|date=2014|volume=269|pages=2155–2159|doi=10.1007/978-94-007-7618-0_261|publisher=Springer|location=Dordrecht|language=en}}</ref>