Segmentation (3.8)

Continuing with segmentation, section 3.8 of the whale book.

Some corpora, for example the Brown Corpus, can be accessed sentence by sentence, like this:

>>> import nltk
>>> len(nltk.corpus.brown.words())/len(nltk.corpus.brown.sents())
20.250994070456922

NLTK can also split raw text into sentences even when the corpus is not accessible per sentence:

>>> import pprint
>>> sent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
>>> text = nltk.corpus.gutenberg.raw('chesterton-thursday.txt')
>>> sents = sent_tokenizer.tokenize(text)
>>> pprint.pprint(sents[171:181])
['In the wild events which were to follow this girl had no\npart at all; he never saw her again until all his tale was over.',
 'And yet, in some indescribable way, she kept recurring like a\nmotive in music through all his mad adventures afterwards, and the\nglory of her strange hair ran like a red thread through those dark\nand ill-drawn tapestries of the night.',
 'For what followed was so\nimprobable, that it might well have been a dream.',
 'When Syme went out into the starlit street, he found it for the\nmoment empty.',
 'Then he realised (in some odd way) that the silence\nwas rather a living silence than a dead one.',
 'Directly outside the\ndoor stood a street lamp, whose gleam gilded the leaves of the tree\nthat bent out over the fence behind him.',
 'About a foot from the\nlamp-post stood a figure almost as rigid and motionless as the\nlamp-post itself.',
 'The tall hat and long frock coat were black; the\nface, in an abrupt shadow, was almost as dark.',
 'Only a fringe of\nfiery hair against the light, and also something aggressive in the\nattitude, proclaimed that it was the poet Gregory.',
 'He had something\nof the look of a masked bravo waiting sword in hand for his foe.']
>>> 

The results are different from those in the textbook.

This example splits the string into words using the bit strings seg1 to seg3. A '1' at position i means that the character at index i is the last character of a word, i.e. a word boundary follows it.

>>> text = "doyouseethekittyseethedoggydoyoulikethekittylikethedoggy"
>>> seg1 = '0000000000000000000000000000000000000000000000000000000'
>>> seg2 = '0100100100100001001001000010100100010010000100010010000'
>>> def segment(text, segs):
...     words = []
...     last = 0
...     for i in range(len(segs)):
...             if segs[i] == '1':
...                     words.append(text[last:i+1])
...                     last = i+1
...     words.append(text[last:])
...     return words
... 
>>> seg3 = '0000000000000001000000000010100000000000000100000000000'
>>> segment(text, seg3)
['doyouseethekitty', 'seethedoggy', 'do', 'youlikethekitty', 'likethedoggy']
>>> segment(text, seg1)
['doyouseethekittyseethedoggydoyoulikethekittylikethedoggy']
>>> segment(text, seg2)
['do', 'you', 'see', 'the', 'kitty', 'see', 'the', 'doggy', 'do', 'you', 'like', 'the', 'kitty', 'like', 'the', 'doggy']

This part isn't obvious to me; run it first, then try to understand it...
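The session below calls `evaluate()`, which never gets defined in this note; it comes from the same section of the book. Here it is, with `segment()` repeated so the block runs on its own: the score is the number of words in the segmented text plus the size of the lexicon (the unique words joined by spaces), so lower is better.

```python
def segment(text, segs):
    # A '1' at position i marks a word boundary after character i.
    words = []
    last = 0
    for i in range(len(segs)):
        if segs[i] == '1':
            words.append(text[last:i + 1])
            last = i + 1
    words.append(text[last:])
    return words

def evaluate(text, segs):
    # Objective: number of words in the text plus the total size of
    # the lexicon (unique words, space-separated); smaller is better.
    words = segment(text, segs)
    text_size = len(words)
    lexicon_size = len(' '.join(list(set(words))))
    return text_size + lexicon_size
```

With these definitions the scores in the session reproduce: 57 for the unsegmented seg1, 47 for the correct seg2, 46 for seg3.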

>>> text = "doyouseethekittyseethedoggydoyoulikethekittylikethedoggy"
>>> seg1 = '0000000000000000000000000000000000000000000000000000000'
>>> seg2 = '0100100100100001001001000010100100010010000100010010000'
>>> seg3 = '0000100100000011001000000110000100010000001100010000001'
>>> segment(text, seg3)
['doyou', 'see', 'thekitt', 'y', 'see', 'thedogg', 'y', 'doyou', 'like', 'thekitt', 'y', 'like', 'thedogg', 'y']
>>> evaluate(text, seg3)
46
>>> evaluate(text, seg2)
47
>>> evaluate(text, seg1)
57
>>> 

Try to seek the "best" segmentation.
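The `anneal()` function in the next session also depends on `flip_n()`, which is not defined in this note. The book's definition is a helper that toggles n randomly chosen boundary bits (nondeterministic by design):

```python
from random import randint

def flip(segs, pos):
    # Toggle the boundary bit at position pos.
    return segs[:pos] + str(1 - int(segs[pos])) + segs[pos + 1:]

def flip_n(segs, n):
    # Toggle n randomly chosen bits; positions may repeat, so fewer
    # than n bits may end up changed.
    for i in range(n):
        segs = flip(segs, randint(0, len(segs) - 1))
    return segs
```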

>>> text = "doyouseethekittyseethedoggydoyoulikethekittylikethedoggy"
>>> seg1 = '0000000000000001000000000010000000000000000100000000000'
>>> def anneal(text, segs, iterations, cooling_rate):
...     temperature = float(len(segs))
...     while temperature > 0.5:
...             best_segs, best = segs, evaluate(text, segs)
...     for i in range(iterations):
...                     guess = flip_n(segs, int(round(temperature)))
...                     score = evaluate(text, guess)
...                     if score < best:
...                             best, best_segs = score, guess
...             score, segs = best, best_segs
...             temperature = temperature / cooling_rate
...             print evaluate(text, segs), segment(text, segs)
...     print
...     return segs
... 
>>> anneal(text, seg1, 5000, 1.2)
63 ['doyouseethekitty', 'seethedoggy', 'doyoulikethekitty', 'likethedoggy']
63 ['doyouseethekitty', 'seethedoggy', 'doyoulikethekitty', 'likethedoggy']
63 ['doyouseethekitty', 'seethedoggy', 'doyoulikethekitty', 'likethedoggy']
63 ['doyouseethekitty', 'seethedoggy', 'doyoulikethekitty', 'likethedoggy']
63 ['doyouseethekitty', 'seethedoggy', 'doyoulikethekitty', 'likethedoggy']
63 ['doyouseethekitty', 'seethedoggy', 'doyoulikethekitty', 'likethedoggy']
63 ['doyouseethekitty', 'seethedoggy', 'doyoulikethekitty', 'likethedoggy']
63 ['doyouseethekitty', 'seethedoggy', 'doyoulikethekitty', 'likethedoggy']
63 ['doyouseethekitty', 'seethedoggy', 'doyoulikethekitty', 'likethedoggy']
59 ['doyou', 'seethekittyseethedoggy', 'doyou', 'likethekittyliketh', 'edoggy']
57 ['doyou', 'seethekittyseethedoggy', 'doyou', 'likethekittylikethedoggy']
53 ['doyou', 'seethe', 'kittyse', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
53 ['doyou', 'seethe', 'kittyse', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
51 ['doyou', 'seethekittyse', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
42 ['doyou', 'se', 'ethekitty', 'se', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
42 ['doyou', 'se', 'ethekitty', 'se', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
42 ['doyou', 'se', 'ethekitty', 'se', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
42 ['doyou', 'se', 'ethekitty', 'se', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
42 ['doyou', 'se', 'ethekitty', 'se', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
42 ['doyou', 'se', 'ethekitty', 'se', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
42 ['doyou', 'se', 'ethekitty', 'se', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
42 ['doyou', 'se', 'ethekitty', 'se', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
42 ['doyou', 'se', 'ethekitty', 'se', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
42 ['doyou', 'se', 'ethekitty', 'se', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
42 ['doyou', 'se', 'ethekitty', 'se', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']
42 ['doyou', 'se', 'ethekitty', 'se', 'ethedoggy', 'doyou', 'lik', 'ethekitty', 'lik', 'ethedoggy']

'0000101000000001010000000010000100100000000100100000000'
>>>
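As a sanity check, the bit string that `anneal()` returned does reproduce the best score of 42 seen during the search. A standalone check, with the `segment()` and `evaluate()` used in the session above repeated so it runs on its own:

```python
def segment(text, segs):
    # A '1' at position i ends a word after character i.
    words, last = [], 0
    for i in range(len(segs)):
        if segs[i] == '1':
            words.append(text[last:i + 1])
            last = i + 1
    words.append(text[last:])
    return words

def evaluate(text, segs):
    # Number of words plus size of the lexicon (unique words, space-joined).
    words = segment(text, segs)
    return len(words) + len(' '.join(set(words)))

text = "doyouseethekittyseethedoggydoyoulikethekittylikethedoggy"
best = '0000101000000001010000000010000100100000000100100000000'
# evaluate(text, best) -> 42, the best score printed during annealing
```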