python, nlp, nltk, corpus

Creating a new corpus with NLTK


I reckon that often the answer to my title question is to go and read the documentation, but I worked through the NLTK book and it doesn't give the answer. I'm fairly new to Python.

I have a bunch of .txt files and I want to be able to use the corpus functions that NLTK provides for its built-in corpora in nltk_data.

I've tried PlaintextCorpusReader but I couldn't get further than:

>>> import nltk
>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = './'
>>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
>>> newcorpus.words()

How do I segment the sentences in newcorpus using punkt? I tried the punkt functions, but they couldn't read the PlaintextCorpusReader class.

Can you also show me how I can write the segmented data to text files?


Solution

  • I think the PlaintextCorpusReader already segments the input with a punkt tokenizer, at least if your input language is English.

    PlaintextCorpusReader's constructor:

    def __init__(self, root, fileids,
                 word_tokenizer=WordPunctTokenizer(),
                 sent_tokenizer=nltk.data.LazyLoader(
                     'tokenizers/punkt/english.pickle'),
                 para_block_reader=read_blankline_block,
                 encoding='utf8'):
    

    You can pass the reader your own word and sentence tokenizer, but for the latter the default already is nltk.data.LazyLoader('tokenizers/punkt/english.pickle'); a sketch at the end of this answer spells the defaults out explicitly.

    For a single string, a tokenizer would be used as follows (explained here; see section 5 for the punkt tokenizer).

    >>> import nltk.data
    >>> text = """
    ... Punkt knows that the periods in Mr. Smith and Johann S. Bach
    ... do not mark sentence boundaries.  And sometimes sentences
    ... can start with non-capitalized words.  i is a good variable
    ... name.
    ... """
    >>> tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
    >>> tokenizer.tokenize(text.strip())
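
    For completeness, here is a sketch that constructs the reader from the question with the defaults spelled out explicitly; `corpus_root` and the `.txt` file pattern are placeholders for your own layout:

    >>> import nltk.data
    >>> from nltk.corpus import PlaintextCorpusReader
    >>> from nltk.tokenize import WordPunctTokenizer
    >>> corpus_root = './'
    >>> newcorpus = PlaintextCorpusReader(
    ...     corpus_root, r'.*\.txt',
    ...     word_tokenizer=WordPunctTokenizer(),
    ...     sent_tokenizer=nltk.data.LazyLoader(
    ...         'tokenizers/punkt/english.pickle'))
    >>> newcorpus.sents()[0]  # first sentence, already punkt-segmented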
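
    As for writing the segmented data back to text files: a minimal sketch, assuming the newcorpus reader from above; the '.seg' suffix for the output files is just an illustrative choice. Each punkt-segmented sentence is joined back into one line per sentence:

    >>> for fileid in newcorpus.fileids():
    ...     # write one segmented sentence per line, next to the source file
    ...     with open(fileid + '.seg', 'w', encoding='utf8') as out:
    ...         for sent in newcorpus.sents(fileid):
    ...             out.write(' '.join(sent) + '\n')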