Probability

 

1. Introduction

 

Probability is a subject not well known to most people -- even mathematicians, surprisingly. It is playing an increasingly large role in computational linguistics and machine learning, and will be of great importance to us.

If you've had any exposure to probability at all, you're likely to think of cases like rolling dice. If you roll one die, there's a 1 in 6 chance -- about 0.166 -- of rolling a "1", and likewise for the five other normal outcomes of rolling a die. Games of chance, like rolling dice and tossing coins, are important illustrative cases in most introductory presentations of what probability is about. This is only natural; the study of probability arose through the analysis of games of chance, only becoming a bit more respectable when it was used to form the rational basis for the insurance industry. But neither of these applications lends itself to questions of linguistics, and linguists tend to be put off by examples like these, examples which seem to suggest that we take it for granted that the utterance of a word is a bit like the roll of a die -- which it's not, as we perfectly well know.

Probability is better thought of in another way. We use probability theory in order to talk in an explicit and quantitative way about the degree of certainty, or uncertainty, that we possess about a question. Putting it slightly differently, if we wanted to develop a theory of how certain a perfectly rational person could be of a conclusion in the light of specific data, we'd end up with something very much like probability theory. And that's how we should think of it.

Let's take an example. Many of the linguistic examples we consider will be along the lines of what a speech recognition system must deal with, which is to say, the task of deciding (or guessing) what word has just been uttered, given knowledge of what the preceding string of words has been coming out of the speaker's mouth. Would you be willing to consider the following suggestions?
            Let us suppose that we have established that the person is speaking English. Can we draw any conclusions independent of the sounds that the person is uttering at this moment? Surely we can. We can make an estimate of the probability that the word is in our desk-top Webster's Dictionary, and we can make an estimate of the probability that the word is "the", and an estimate of the probability that the word is -- let's choose another word -- "telephone". We can be quite certain, in fact, that "the" is the most likely word to be produced by an English speaker; as much as ten percent of a speaker's words may be  "the"s.

 

 

2. Let's take a look at -- or review -- some of the very basics. 

We're going to try to look at language from the roll-of-the-die point of view for a little while. It's not great, but it's unavoidable.

The very first notion to be familiar with is that of a distribution: a set of (non-negative) numbers that add up to 1.0. In every discussion of probability, distributions play a central role, and one must always ask oneself what is being treated as forming a distribution. Probabilities are always members of a distribution.

Let's consider the roll of a die. There are six results of such a roll, and we typically assume that their probabilities are equal; since the probabilities add up to 1.0, each must be 1/6: they form a distribution. We call a distribution in which all values are the same a uniform distribution.  We always assume that there is a universe of outcomes, and each outcome has associated with it a probability. The sum of the probabilities of all of the outcomes is 1.0. Any subset of the outcomes has a probability, which is the sum of the probabilities of its members. Thus the probability of rolling an even number is 0.5.
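Here is the same idea written out as a small Python sketch (purely illustrative): the distribution for a single roll is just a table of six probabilities, and the probability of any subset of outcomes -- say, "the roll is even" -- is the sum of the probabilities of its members.

    # Uniform distribution over the six outcomes of one die roll.
    die = {outcome: 1/6 for outcome in range(1, 7)}

    # The six probabilities form a distribution: they add up to 1.0.
    assert abs(sum(die.values()) - 1.0) < 1e-9

    # A subset of outcomes has a probability equal to the sum of the
    # probabilities of its members.
    even = {2, 4, 6}
    print(sum(die[o] for o in even))   # 0.5 (up to floating-point rounding)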

In this simplest case, we took the universe of outcomes to consist of 6 members, the numbers 1 through 6. But this is not necessary. We can take the universe of outcomes to be all possible outcomes of two successive rolls of a die. The universe then has 36 members, and the outcome "The first roll is a 1" is not a single member of the universe of outcomes, but rather it is a subset consisting of 6 different members, each with a probability of 1/36. These six are: (1) The first roll is 1 and the second is 1; (2) The first roll is 1 and the second is 2; … (6) The first roll is 1 and the second is 6. The probabilities of these 6 add up to 1/6.

It is not hard to see that if a universe consists of N rolls of a die (N can be any positive integer), the number of outcomes in that universe will be 6^N. (And the probability of any particular sequence is 1/6^N.)
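To make the counting concrete, here is a short Python check (again just a sketch) that builds the universe of two successive rolls, confirms that it has 6^2 = 36 equally likely members, and confirms that the subset "the first roll is a 1" gets probability 6 * 1/36 = 1/6.

    from itertools import product

    # The universe of outcomes for two successive rolls: 6**2 = 36 members,
    # each with probability 1/36 under the uniform distribution.
    universe = list(product(range(1, 7), repeat=2))
    assert len(universe) == 6 ** 2

    # "The first roll is a 1" is a subset containing 6 of those members.
    first_is_one = [rolls for rolls in universe if rolls[0] == 1]
    print(len(first_is_one) / len(universe))   # 0.1666..., i.e. 1/6

    # The same construction with repeat=N gives 6**N outcomes,
    # each with probability 1/6**N.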

Be clear on the fact that whenever we pose a question about probability, we have to make it clear just what the universe of outcomes is that we're considering! It matters whether we are talking about the universe of all possible sequences of 6 rolls of a die, or all possible sequences of 6 or fewer rolls of a die, for example. You should convince yourself that the latter universe is quite a bit bigger, and hence that any sequence of exactly 6 rolls will have a lower probability in that larger universe than it does in the universe consisting only of sequences of 6 rolls.

 

            We can imagine the universe to consist of sequences of rolls of a die anywhere in length from 1 roll to (let us say) 100. The counting is a little more complicated, but it's not all that different. (How many members does the universe have? Answer: 6^1 + 6^2 + … + 6^100 = (6^101 - 6)/5, which is about 7.8 * 10^77.) And each one of them is equally likely (and not very likely, as you can convince yourself).
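(If you would rather check that count than trust the formula, the geometric series can be summed by brute force; the snippet below is only a sanity check.)

    # Sequences of die rolls of every length from 1 to 100:
    # 6**1 + 6**2 + ... + 6**100 = (6**101 - 6) / 5.
    total = sum(6 ** n for n in range(1, 101))
    assert total == (6 ** 101 - 6) // 5
    print(total)       # about 7.8 * 10**77 members
    print(1 / total)   # the uniform probability of any single sequence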

           

Let's make the die bigger. Let us suppose, now, that we have a large die with 1,000 sides on it. We choose the 1,000 most frequent words in a large corpus -- say, the Brown corpus. Each time we roll the die, we choose the word with the corresponding rank, and utter it. That means that each time the die comes up "1" (which is only once in a thousand rolls, on average), we say the word "the". When it comes up "2", we say "of" -- these are the two most frequent words. And so forth.

            If we start rolling the die, we'll end up with utterances like the following:

            320 990 646 94 756

which translates into: whether designed passed must southern.
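A minimal sketch of that generator in Python is given below. I have stubbed in a placeholder for the list of the 1,000 most frequent Brown-corpus words (only the first two entries, "the" and "of", are real); the point is simply that each word is chosen uniformly at random, one roll at a time.

    import random

    # Placeholder for the 1,000 most frequent words of the Brown corpus, by rank:
    # rank 1 is "the", rank 2 is "of"; the rest are dummy entries for illustration.
    top_1000 = ["the", "of"] + ["word-%d" % rank for rank in range(3, 1001)]

    def roll_utterance(n_words):
        """Roll the 1,000-sided die n_words times and read off the words."""
        return " ".join(random.choice(top_1000) for _ in range(n_words))

    print(roll_utterance(5))   # e.g. "whether designed passed must southern"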

 

That's what this worst of random word generators would generate. But that's not what we're developing probabilistic grammars to do -- not at all. Rather, what we're interested in is the probability that this model would assign to a particular sentence that somebody has already uttered. Let's use, as our example, the sentence: In the beginning was the word. There are six words, and it just so happens that all six are among the 1,000 most common words in the Brown corpus. So the probability that we would assign to this sentence is 1/1000 * 1/1000 * 1/1000 * 1/1000 * 1/1000 * 1/1000, which can also be expressed more readably as 10^-18.  There are 1,000 = 10^3 events in the universe of strings of one word in length, 1,000,000 = 10^6 events in the universe of strings of 2 words in length, and 10^18 events in the universe of strings of 6 words. That is why each such event has a probability of the reciprocal of that number. (If there are K events which are equally likely, then each has the probability 1/K, right?)
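The arithmetic, spelled out (nothing here beyond the numbers already given):

    # Under the uniform 1,000-word model, each word has probability 1/1000,
    # so any six-word sentence made of those words has probability (1/1000)**6.
    sentence = "in the beginning was the word".split()
    print((1 / 1000) ** len(sentence))   # 1e-18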

 

I hope it is already clear that this model would assign that probability to any sequence of six words drawn from those 1,000. Is this good or bad? It's neither the one nor the other. We might say that this is a terrible grammar of English, but it's all relative. This method will assign a zero probability to any sequence of words in which at least one word does not appear in the top 1,000 words of the Brown corpus. That may sound bad, too, but do notice that it means that such a grammar will assign a zero probability to virtually any sentence in a language that is not English! And it will assign a non-zero probability to any word-sequence made up entirely of words from the top 1,000 words.

 

We could redo this case and include a non-zero probability for all of the 47,885 distinct words in the Brown Corpus. Then any string of words all of which appear in the corpus will be assigned a probability of (1/47,885)^N, where N is the number of words in the string. A sentence of 6 words would be assigned a probability of (1/47,885)^6, which just so happens to be about (2.1 * 10^-5)^6, or 8.6 * 10^-29. We'll get back to that (very small) number in a few paragraphs.

 

Or -- we could do better than that (and the whole point of this discussion is so that I can explain in just a moment exactly what "doing better" really means). We could assign to each word in the corpus a probability equal to its relative frequency in the corpus -- its count divided by the total number of words. The word "the", for example, appears 69,903 times out of the total 1,159,267 words, so its probability will be approximately .0603 -- and other words have a much lower probability. "leaders" occurs 107 times, and has the probability 107/1,159,267 = .000092 (it is the 1,000th most frequent word). Is it clear that the sum of the probabilities assigned to all of the words adds up to 1.00? It should be.
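In code, this amounts to nothing more than dividing counts by the corpus size. Here is a sketch that assumes the Brown corpus is available through NLTK (any list of word tokens would do just as well); the exact numbers it prints will differ slightly from those above, depending on tokenization and capitalization.

    from collections import Counter
    from nltk.corpus import brown   # assumes nltk and its Brown corpus data are installed

    tokens = [w.lower() for w in brown.words()]
    counts = Counter(tokens)
    total = len(tokens)

    # Empirical (relative-frequency) probability of each word: count / corpus size.
    prob = {word: count / total for word, count in counts.items()}

    # These probabilities form a distribution: they sum to 1 (up to rounding).
    assert abs(sum(prob.values()) - 1.0) < 1e-6

    print(prob["the"])       # roughly 0.06 -- far higher than any other word
    print(prob["leaders"])   # roughly 0.00009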

 

Now let's ask what the probability is of the sentence "the woman arrived." To find the answer, we must, first of all, specify that we are asking this question in the context of sentences composed of 3 words -- that is, sentences of length 3. Second, in light of the previous paragraph, we need to find the probability of each of those words in the Brown Corpus. The probability of "the" is 0.068271; prob (woman) = 0.00023; prob (arrived) = 0.00006. These numbers represent the words' probabilities in the universe in which a single word is chosen from the vocabulary. What we are interested in now is the universe of 3-word sentences. (By the way, I will henceforth use the word "sentence" to mean "sequence of words" -- use of that term doesn't imply a claim about grammaticality or acceptability.) We need to be able to talk about sentences whose first word is "the", or whose second word is "woman"; let's use the following notation. We'll indicate the word number in square brackets, so if S is the sentence "the woman arrived," then S[1] = "the", S[2] = "woman", and S[3] = "arrived". We may also want to refer to words in a more abstract way -- to speak of the i-th word of the vocabulary, for example. If we want to say that the first word of sentence S is the i-th word of the vocabulary, we'll write S[1] = w_i. (This is to avoid the notation that Charniak and others use, which I think is confusing, and which employs both subscripts and superscripts on w's.)

 

We need to assign a probability to each and every sequence (i.e., sentence) of three words from the Brown Corpus in such a fashion that these probabilities add up to 1.0. The natural way to do that is to say that the probability of a sentence is the product of the probabilities of its words: if S = "the woman arrived" then

(1)        prob(S) = prob(S[1] = "the") * prob(S[2] = "woman") * prob(S[3] = "arrived")

 

and we do as I suggested, which is to suppose that the probability of a word is independent of the position it is in. We can state that formally:

For all sentences S, all words w_n, and all positions i and j, prob(S[i] = w_n) = prob(S[j] = w_n).

A model with that assumption is said to be a stationary model.

 

You should convince yourself that with these assumptions, the probabilities of all 3-word sentences do indeed add up to 1.0.
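One way to convince yourself is by brute force over a toy vocabulary (the algebra is the same for 47,885 words as it is for four): if the word probabilities sum to 1.0, then the products over all possible 3-word sentences sum to 1.0 as well. A sketch:

    from itertools import product

    # A toy vocabulary with an arbitrary (non-uniform) distribution summing to 1.0.
    word_prob = {"the": 0.5, "woman": 0.2, "arrived": 0.2, "telephone": 0.1}

    def sentence_prob(words):
        """Probability of a sentence = product of its word probabilities, as in (1)."""
        p = 1.0
        for w in words:
            p *= word_prob[w]
        return p

    total = sum(sentence_prob(s) for s in product(word_prob, repeat=3))
    print(total)   # 1.0, up to floating-point rounding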

 

As I just said, the natural way to assign probabilities to the sentences in our universe is as in (1); we'll make the assumption that the probability of a given word is stationary, and furthermore that it is its empirical frequency (i.e., the frequency we observed) in the Brown Corpus. So the probability of "the woman arrived" is 0.068271 * 0.00023 * 0.00006 = 0.000 000 000 942 139 8, or about 9.42 * 10^-10.

 

What about the probability of the sentence "in the beginning was the word"? We calculated it above to be 10^-18 in the universe consisting of all sentences of length 6 (exactly) where the words were just the 1,000 most frequent words in the Brown Corpus, with uniform distribution. And the probability was 8.6 * 10^-29 when we considered the universe of all possible sentences of six words in length, where the size of the vocabulary was the whole vocabulary of the Brown Corpus, again with uniform distribution. But we have a new model for that universe, which is to say, we are considering a different distribution of probability mass. In the new model, the probability of the sentence is the product of the empirical frequencies of the words in the Brown Corpus, so the probability of in the beginning was the word in our new model is .021 * .068 * .00016 * .0096 * .068 * .00027 = 2.1 * 10^-2 * 6.8 * 10^-2 * 1.6 * 10^-4 * 9.6 * 10^-3 * 6.8 * 10^-2 * 2.7 * 10^-4 = 4027 * 10^-17 = 4.03 * 10^-14. That's a much larger number than we got with the other distributions.
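Here is the comparison of the three models in one place, using the rounded word probabilities quoted above (so the last number is only as exact as those roundings):

    sentence = "in the beginning was the word".split()

    # Model 1: uniform distribution over the 1,000 most frequent words.
    p_uniform_1000 = (1 / 1000) ** len(sentence)      # 1e-18

    # Model 2: uniform distribution over all 47,885 word types of the Brown Corpus.
    p_uniform_all = (1 / 47885) ** len(sentence)      # about 8.6e-29

    # Model 3: each word gets its empirical frequency (rounded values from the text).
    freq = {"in": .021, "the": .068, "beginning": .00016, "was": .0096, "word": .00027}
    p_frequency = 1.0
    for w in sentence:
        p_frequency *= freq[w]                        # about 4.0e-14

    print(p_uniform_1000, p_uniform_all, p_frequency)
    # The frequency-based model assigns the real sentence the highest probability of the three.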

            The main point for the reader now is to be clear on what the significance of these three numbers is: 10^-18 for the first model, 8.6 * 10^-29 for the second model, and 4.03 * 10^-14 for the third. But it's the same sentence, you may say! The difference is that a higher probability (a bigger number, with a smaller negative exponent, putting it crudely) is assigned to the sentence that we know is an English sentence by the frequency-based model. If this result holds up over a range of real English sentences, this tells us that the frequency-based model is a better model of English than the model in which all words have the same probability (a uniform distribution). That speaks well for the frequency-based model! In short, we prefer a model that scores better, by assigning a higher probability, on sentences that actually and already exist -- we prefer that model to any other model that assigns a lower probability to the actual corpus.

 

In order for a model to assign higher probability to actual and existing sentences, it must assign less probability to other sentences (since the total amount of probability mass that it has at its disposal to assign totals up to 1.000, and no more). So of course it assigns lower probability to a lot of unobserved strings. On the frequency-based model, a string of word-salad like civilized streams riverside prompt shaken squarely will have a probability even lower than it does in the uniform-distribution model. Since each of these words has probability 1.07 * 10^-5 (I picked them that way), the probability of the sentence is (1.07 * 10^-5)^6, or about 1.5 * 10^-30. That's the probability based on using empirical frequencies. Remember that a few paragraphs above we calculated the probability of any six-word sentence in the uniform-distribution model as 8.6 * 10^-29; so we've just seen that the frequency-based model gives an even smaller probability to this word-salad sentence than did the uniform-distribution model -- which is a good thing.

 

You are probably aware that so far, our model treats word order as irrelevant -- it assigns the same probability to beginning was the the in word as it does to in the beginning was the word. We'll get to this point eventually.

 

 

Connection to Manning and Schütze, Chapter 2, so far. What I've called the universe of outcomes is what they call the sample space; the outcomes are their basic outcomes or sample points. We assume discrete sample spaces (universes) everywhere. Following standard terminology, they call a subset of the sample space an event, even when that is fairly counterintuitive, in my humble opinion -- the set of outcomes in which the first word is "the" is an event, but it's not a basic outcome, because it consists of all of the sentences which begin with the word "the".

A probability function or distribution is what we have been calling simply a distribution. It is very helpful to think of there being some tangible material (like some kind of putty or clay) which is physically spread over the various members of the universe, so that if you could pick out the cases you care about, you could actually weigh them to get their probability. This stuff is called probability mass, and its total weight, of course, is 1.0.

 

 

            Conditional probability (Section 2.1.2)

I stressed before that we must start an analysis with some understanding as to what the universe of outcomes is that we are assuming. That universe forms the background, the given, of the discussion. Sometimes we want to shift the universe of discussion to a more restricted subuniverse -- this is always a case of having additional information, or at least of acting as if we had additional information. This is the idea that lies behind the term conditional probability. We look at our universe of outcomes, with its probability mass spread out over the set of outcomes, and we say, let us consider only a sub-universe, and ignore all possibilities outside of that sub-universe. We then must ask: how do we have to change the probabilities inside that sub-universe so as to ensure that the probabilities inside it add up to 1.0 (to make it a distribution)? A moment's thought will convince you that what must be done is to divide the probability of each event by the total amount of probability mass inside the sub-universe.

 

Let's take another classic probability case. Let the universe of outcomes be the 52 cards of a standard playing card deck. The probability of drawing any particular card is 1/52 (that's a uniform distribution). What if we restrict our attention to red cards? It might be the case, for example, that we know the card drawn is red, and that's all we know about it; what is the probability now that it is the Queen of Hearts?

 

The sub-universe consisting of the red cards has probability mass 0.5, and the Queen of Hearts lies within that sub-universe. So if we restrict our attention to the 26 outcomes that comprise the "red card sub-universe," we see that the sum total of the probability mass is only 0.5 (the sum of 26 red cards, each with 1/52 probability). In order to consider the sub-universe as having a distribution on it, we must divide each of the 1/52 in it by 0.5, the total probability of the sub-universe in the larger, complete universe. Hence the probability of the Queen of Hearts, given the Red Card sub-Universe (given means with the knowledge that the event that occurs is in that sub-universe), is 1/52 divided by 1/2, or 1/26.

 

This is traditionally written: p(A|B) = probability of A, given B = p(A ∩ B) / p(B).
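The card example, written out as a sketch in Python (using exact fractions so the arithmetic stays transparent):

    from itertools import product
    from fractions import Fraction

    ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
    suits = ["hearts", "diamonds", "clubs", "spades"]

    # Uniform distribution over the 52 cards.
    deck = {(rank, suit): Fraction(1, 52) for rank, suit in product(ranks, suits)}

    # The sub-universe B: the card drawn is red.
    red = {card: p for card, p in deck.items() if card[1] in ("hearts", "diamonds")}
    p_B = sum(red.values())              # 1/2

    # A = "the card is the Queen of Hearts"; A lies entirely inside B,
    # so p(A and B) is just p(A) = 1/52.
    p_A_and_B = deck[("Q", "hearts")]

    # p(A | B) = p(A and B) / p(B)
    print(p_A_and_B / p_B)               # 1/26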

 

 

Discussion and expansion of the textbook

 

2.1.9 Standard distributions

 

Binomial distribution. Suppose we look at a set of nouns in a corpus, each of which occurs 10 times. Suppose that the probability that a noun will occur in the singular is p, and the probability that it will occur in the plural is 1-p. What is the probability that a noun will occur once in the singular (of the 10 times studied)? Twice? Five times? Nine? All ten?

 

Suppose the word is dog. The probability that it will appear in the singular all 10 times is p^10; that it will appear in the singular 9 times and the plural once is 10 * p^9(1-p) (there are 10 ways to choose which of the ten occurrences is the plural one); that it will appear in the singular 8 times and the plural twice is 45 * p^8(1-p)^2 (45 is the number of ways of choosing 2 occurrences out of 10); and the probability that it will appear in the plural all 10 times is (1-p)^10. That should be clear! In the Brown corpus, dog appears 78 times, and dogs 70 times. Let us assume, for present purposes, that p = 78/148, or about 0.53.  What's the probability, then, that a stem which appears 10 times will appear in the singular all 10 times? (78/148)^10 = 1.65 * 10^-3. So -- a little bit more than one time in a thousand (one in a thousand is 10^-3, right?). What if the stem only occurs five times in the corpus?
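Here is the calculation as a short Python sketch, using math.comb for the binomial coefficients and the estimate p = 78/148 from the dog/dogs counts:

    from math import comb

    p = 78 / 148   # estimated probability that an occurrence is singular

    def prob_singular(k, n=10):
        """Binomial probability of exactly k singular occurrences out of n."""
        return comb(n, k) * p ** k * (1 - p) ** (n - k)

    print(prob_singular(10))   # all ten singular: p**10, about 1.65e-3
    print(prob_singular(9))    # nine singular, one plural: 10 * p**9 * (1 - p)
    print(prob_singular(5))    # exactly five of each
    # The eleven probabilities for k = 0, 1, ..., 10 form a distribution:
    print(sum(prob_singular(k) for k in range(11)))   # 1.0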

 

If there are three or more forms a stem could take, we need to consider a multinomial distribution.