Text Classification - Naive Bayes (Part 2) - Bernoulli Naive Bayes

Introduction

In a previous notebook, we implemented a Multinomial Naive Bayes text classifier. The multinomial model represents a document by counting how often each term occurs in that document. The Bernoulli model, on the other hand, records only the binary presence or absence of each term in a document.

You will see how this change of representation manifests in the learning algorithm and implement a Bernoulli Naive Bayes classifier.

Requirements

Knowledge

This notebook draws comparisons to the Multinomial Model introduced in Part 1. If you are new to Naive Bayes, I recommend you start there.

As with Part 1, this notebook is based on Chapter 13 of Introduction to Information Retrieval [MAN09]. The book is recommended for a deeper treatment of Naive Bayes classifiers.

Python Modules

We'll need a few imports from sklearn to fetch and preprocess the documents as well as the ubiquitous numpy.

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
import numpy as np

Data

As in Part 1, a CountVectorizer learns a vocabulary from the training set and transforms the training and test set into feature matrices. There is just one difference: we pass the parameter binary=True to the CountVectorizer, which changes how it represents documents. The feature vector of a document contains an entry for each term in the vocabulary. The entry is 0 if the term is absent, or 1 if the term is present, regardless of whether it occurs once or a hundred times.
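To make the effect of binary=True concrete, here is a minimal sketch on a made-up two-document corpus (the toy documents are purely illustrative and not part of the newsgroups data):

from sklearn.feature_extraction.text import CountVectorizer

toy_docs = ["chinese chinese chinese tokyo", "beijing chinese"]

# columns are the alphabetically sorted vocabulary: beijing, chinese, tokyo
counts = CountVectorizer().fit_transform(toy_docs)
binary = CountVectorizer(binary=True).fit_transform(toy_docs)

print(counts.toarray())  # [[0 3 1]
                         #  [1 1 0]] - raw term counts
print(binary.toarray())  # [[0 1 1]
                         #  [1 1 0]] - presence/absence only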

twenty_train = fetch_20newsgroups(data_home='~/deep.TEACHING/data/newsgroups_dataset',
                                  subset='train',
                                  shuffle=True,
                                  random_state=42)
twenty_test = fetch_20newsgroups(data_home='~/deep.TEACHING/data/newsgroups_dataset',
                                  subset='test',
                                  shuffle=True,
                                  random_state=42)

count_vectorizer = CountVectorizer(binary=True)
X_train = count_vectorizer.fit_transform(twenty_train.data)
y_train = twenty_train.target
X_test = count_vectorizer.transform(twenty_test.data)
y_test = twenty_test.target

  • X_train : sparse matrix, shape = (n_documents, n_terms)

  • y_train : array, shape = (n_documents,)

  • X_test, y_test : the analogous matrix and target vector built from the test set (see the quick shape check below)
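A quick sanity check of the transformed data; the exact sizes depend on the fetched dataset, so no specific numbers are assumed here:

print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_test: ', X_test.shape)
print('y_test: ', y_test.shape)
print('largest entry in X_train:', X_train.max())  # should be 1 because of binary=True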

Bernoulli Naive Bayes

We calculate the best class $c_{map}$ to assign to a document as

$$c_{map} = \underset{c \in C}{\operatorname{argmax}} \left[ \log \hat{P}(c) + \sum_{1 \leq i \leq M} \log \hat{P}(U_i = e_i \mid c) \right]$$

Differences to the Multinomial model

The way a document provides evidence through its features has changed. Firstly, if a term occurs in a document, it is irrelevant whether it occurs once or a hundred times. Secondly, the absence of a term is explicitly measured as evidence.

The Bernoulli model describes a document as the feature vector

$$d = \left< e_1, e_2, \dots, e_M \right>$$

where $M$ is the number of terms in the vocabulary. $e_i$ is 1 if the $i$-th term is present or 0 if it is absent.

When we iterate over the feature vector, the evidence we gather is "Is the term present in the document?" If so, we multiply by the conditional probability of the term given the class, $P(t_i|c)$. If it is absent, we multiply by the complementary probability $1 - P(t_i|c)$.
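As a small illustration of this rule (the names cond_prob, doc and log_evidence are made up for this sketch and are not part of the exercise), assume we already had the conditional probabilities $\hat{P}(t_i|c)$ for one fixed class:

import numpy as np

cond_prob = np.array([0.6, 0.1, 0.3])  # made-up P(t_i|c) for a three-term vocabulary
doc = np.array([1, 0, 1])              # terms 0 and 2 are present, term 1 is absent

# present terms contribute log P(t|c), absent terms contribute log(1 - P(t|c))
log_evidence = np.sum(doc * np.log(cond_prob) + (1 - doc) * np.log(1 - cond_prob))
print(log_evidence)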

$P(t_i|c)$ tells us the frequency of the term in $c$. It is the fraction of documents in $c$ where the term is present, with smoothing applied. This leaves us with

$$\hat{P}(t_i|c) = \frac{N_{ct_i} + 1}{N_c + 2}$$

where $N_{ct_i}$ is the number of documents in $c$ where $t_i$ is present and $N_c$ is the number of documents in $c$. The smoothing constant in the denominator is 2 because we consider two cases, present vs. absent.
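As a minimal sketch (not a required part of the exercise below), this estimator could be computed for a single class as follows, assuming X is a binary document-term matrix and y the matching target vector; the function name estimate_cond_prob is made up for this illustration:

import numpy as np

def estimate_cond_prob(X, y, c):
    """Smoothed P(t_i|c) for every term of the vocabulary and one class c."""
    X_c = X[y == c]                             # documents belonging to class c
    N_c = X_c.shape[0]                          # number of documents in c
    N_ct = np.asarray(X_c.sum(axis=0)).ravel()  # documents in c containing each term
    return (N_ct + 1) / (N_c + 2)               # Laplace smoothing with two outcomes

cond_prob_c0 = estimate_cond_prob(X_train, y_train, 0)  # e.g. for the first class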

Example

Assume we have the one-sentence document "Chinese Chinese Chinese Tokyo Nippon Japan". The known vocabulary comprises [chinese, beijing, tokyo, japan]; the term Nippon is not in the vocabulary and is therefore ignored. What is the probability that china is the correct class of the document?

$$
\begin{aligned}
c &= \text{china} \\
d &= \left< 1, 0, 1, 1 \right> \\
P(c|d) &\propto P(c) \cdot P(d|c) \\
&= P(c) \cdot P(\text{chinese}|c) \cdot \left(1 - P(\text{beijing}|c)\right) \cdot P(\text{tokyo}|c) \cdot P(\text{japan}|c)
\end{aligned}
$$
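To see the arithmetic once with concrete numbers, here is a tiny sketch; the prior and conditional probabilities below are invented for illustration and are not estimated from any training set:

# hypothetical learned values for class 'china'
prior = 0.6                                     # P(c)
cond_prob = {'chinese': 0.7, 'beijing': 0.4,
             'tokyo': 0.2, 'japan': 0.2}        # P(t|c), invented numbers

# d = <1, 0, 1, 1>: beijing is the only absent vocabulary term
score = (prior * cond_prob['chinese'] * (1 - cond_prob['beijing'])
         * cond_prob['tokyo'] * cond_prob['japan'])
print(score)  # unnormalized P(c|d); compare against the scores of the other classes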

Exercises

Training

Here is a setup for a Bernoulli Naive Bayes classifier with two methods learn and predict. The docstrings elaborate what those methods are supposed to do. In the Testing section you'll find an example of how the class will be used.

class NaiveBayesBernoulli:
    def __init__(self):
        pass

    def learn(self, X, y):
        """Learns the priors ^P(c) for each class as well as the
        conditional probabilities ^P(t|c) for each term and each
        class from X and y.
        Infers the vocabulary and set of predefined classes from
        X and y.
        :param X: A document-term matrix e.g. X_train
        :param y: A target vector e.g. y_train
        :return: A self-reference
        """
        return self

    def predict(self, X):
        """
        Predicts a class for each document in X
        :param X: A document-term matrix e.g X_test
        :return: A target vector containing the predicted categories
        for each document in X
        """
        # placeholder: predicts class -1 for every document until implemented
        return np.full((X.shape[0],), -1)

Testing

nn = NaiveBayesBernoulli()
nn.learn(X_train, y_train)
y_pred = nn.predict(X_test)
accuracy = np.mean(y_pred == y_test)
print('accuracy: {}'.format(accuracy))

Comparing against sklearn's implementation

from sklearn.naive_bayes import BernoulliNB
sk_nb = BernoulliNB()
sk_pred = sk_nb.fit(X_train, y_train).predict(X_test)
sk_accuracy = np.mean(sk_pred == y_test)
print('SK accuracy: {}'.format(sk_accuracy))
from sklearn.metrics import confusion_matrix
conf_mat = confusion_matrix(y_test, y_pred)
print(conf_mat)

Summary and Outlook

We'll wrap up this notebook by comparing a few features of the Bernoulli and Multinomial model. The comparison is abridged from Chapter 13.4 of Introduction to Information Retrieval [MAN09] (pp. 265).

  • Document generation: In the multinomial model, we first pick a class $c$. Then, for each position in the document, we generate a term $t_k$ for the $k$-th token. In the Bernoulli model, we also start with a class. Then, for every term of the vocabulary, we pick a 0 or 1 for the $k$-th term.
  • Document length: The Bernoulli model is geared towards short documents. In an entire book on China, the Bernoulli model would only weigh the presence of 'China' once and remain oblivious to the great frequency of the term.
  • Absence of terms: In the multinomial model, only the present terms of a document contribute evidence. A missing term adds 0 to the score we try to maximize. In the Bernoulli model, the absence of a term is explicitly measured and weighed into the evidence.
  • Both models assume conditional and positional independence. Note the Bernoulli model has no concept of position in the first place.

Literature

[MAN09] Christopher D. Manning, Prabhakar Raghavan, Hinrich Schütze: Introduction to Information Retrieval. Cambridge University Press, 2008.

Licenses

Notebook License (CC-BY-SA 4.0)

The following license applies to the complete notebook, including code cells. It does however not apply to any referenced external media (e.g., images).

Text Classification - Naive Bayes (Part 2) - Bernoulli Naive Bayes
by Diyar Oktay
is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://gitlab.com/deep.TEACHING.

Code License (MIT)

The following license only applies to code cells of the notebook.

Copyright 2018 Diyar Oktay

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.