python, numpy, scipy, statistics, zipf

Zipf Distribution: How do I measure a Zipf distribution?


How do I measure or find the Zipf distribution? For example, I have a corpus of English words. How do I find its Zipf distribution? I need to find the Zipf distribution and then plot a graph of it, but I am stuck on the first step: finding the Zipf distribution itself.

Edit: From the frequency count of each word, it is clear that it obeys Zipf's law. But my aim is to plot a Zipf distribution graph, and I have no idea how to calculate the data for that graph.


Solution

  • I don't pretend to understand statistics. However, based on reading the SciPy site, here is a naive attempt in Python.

    Build Data

    First we get our data. For example, we download the National Library of Medicine MeSH (Medical Subject Headings) ASCII file d2016.bin (28 MB).
    Next, we open the file and read its contents into a string.

    open_file = open('d2016.bin', 'r')
    file_to_string = open_file.read()
    

    Next we use a regular expression to extract the individual words (3 to 10 letters long) from the string.

    words = re.findall(r'(\b[A-Za-z][a-z]{2,9}\b)', file_to_string)
    

    Finally we build a dict with the unique words as keys and their counts as values.

    frequency = {}
    for word in words:
        count = frequency.get(word, 0)
        frequency[word] = count + 1
    

    Build Zipf distribution data
    For speed, we limit the data to the first 1000 words.

    n = 1000
    frequency = dict(list(frequency.items())[0:n])
    

    After that we take the word counts, convert them to a numpy array, and compare their normalized histogram against the theoretical Zipf probability density y = x**(-a) / zetac(a) (the same curve used in the numpy.random.zipf documentation example).

    We use the distribution parameter a = 2 as an example, since it needs to be greater than 1. For visibility we only histogram the counts below 50 (in 50 bins).

    a = 2.  # distribution parameter, must be > 1
    s = list(frequency.values())
    s = np.array(s)
    
    count, bins, ignored = plt.hist(s[s < 50], 50, density=True)
    x = np.arange(1., 50.)
    y = x**(-a) / special.zetac(a)
    

    And finally we plot the data.
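    The plotting step overlays the theoretical Zipf curve, scaled by its maximum, on the histogram (these are the last two lines of the full listing below):

    plt.plot(x, y/max(y), linewidth=2, color='r')
    plt.show()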

    Putting It All Together

    import re
    from operator import itemgetter
    import matplotlib.pyplot as plt
    from scipy import special
    import numpy as np
    
    #Get our corpus of medical words
    frequency = {}
    open_file = open('d2016.bin', 'r')
    file_to_string = open_file.read()
    words = re.findall(r'(\b[A-Za-z][a-z]{2,9}\b)', file_to_string)
    
    #build dict of words based on frequency
    for word in words:
        count = frequency.get(word,0)
        frequency[word] = count + 1
    
    #limit words to 1000
    n = 1000
    frequency = dict(list(frequency.items())[0:n])
    
    #convert value of frequency to numpy array
    s = list(frequency.values())
    s = np.array(s)
    
    #Calculate zipf and plot the data
    a = 2.  # distribution parameter, must be > 1
    count, bins, ignored = plt.hist(s[s < 50], 50, density=True)
    x = np.arange(1., 50.)
    y = x**(-a) / special.zetac(a)
    plt.plot(x, y/max(y), linewidth=2, color='r')
    plt.show()
    

    Plot
    (Normalized histogram of the word counts below 50, with the theoretical Zipf curve for a = 2 overlaid in red.)
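
    If what you actually want is the classic rank-frequency picture of Zipf's law, a minimal sketch (reusing the frequency dict built above; the variable names here are just illustrative) is to sort the counts in descending order and plot count against rank on log-log axes, where Zipf's law shows up as a roughly straight line:

    # Rank-frequency view: assumes the `frequency` dict built above.
    # Sort word counts in descending order, then plot count vs. rank on log-log axes.
    counts = np.array(sorted(frequency.values(), reverse=True))
    ranks = np.arange(1, len(counts) + 1)
    plt.loglog(ranks, counts, marker='.', linestyle='none')
    plt.xlabel('rank')
    plt.ylabel('frequency')
    plt.show()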