python, python-3.x, csv, parallel-processing, ipython-parallel

Implement parallel processing of for loop


I'm looking to make the following code parallel. It reads in one large 9 GB file in a proprietary format and produces 30 individual CSV files, one for each of the 30 columns of data. It currently takes about 9 minutes per CSV written for a 30-minute data set. The solution space of parallel libraries in Python is a bit overwhelming. Can you point me to any good tutorials or sample code? I couldn't find anything very informative.

import csv
import datetime

for i in range(0, NumColumns):
    aa = datetime.datetime.now()
    allData = [TimeStamp]
    ColumnData = allColumns[i].data    # Get the data within this one Column
    Samples = ColumnData.size           # Find the number of elements in Column data
    print('Formatting Column {0}'.format(i+1))
    truncColumnData = []                # Initialize truncColumnData array each time for loop runs       
    if ColumnScale[i+1] == 'Scale:  ' + tempScaleName:   # If it's temperature, format every value to one decimal place
        for j in range(Samples):
            truncValue = '{:.1f}'.format((ColumnData[j]))
            truncColumnData.append(truncValue)   # Appends formatted value to truncColumnData array

    allData.append(truncColumnData)   #append the formatted Column data to the all data array

    zipObject = zip(*allData)
    zipList = list(zipObject)

    csvFileColumn = 'Column_{0:02d}.csv'.format(i+1)
    # Write the information to .csv file
    with open(csvFileColumn, 'w', newline='') as csvFile:   # Python 3: open in text mode for string writes
        print('Writing to .csv file')
        writer = csv.writer(csvFile)
        counter = 0
        for z in zipList:
            counter = counter + 1
            timeString = '{:.26},'.format(z[0])   # Time stamp truncated to 26 characters, with trailing comma
            zList = list(z)
            columnVals = zList[1:]
            columnValStrs = list(map(str, columnVals))
            formattedStr = ','.join(columnValStrs)
            csvFile.write(timeString + formattedStr + '\n')   # Writes the time stamps and channel data by columns

Solution

  • One possible solution may be to use Dask (http://dask.pydata.org/en/latest/). A coworker recently recommended it to me, which is why I thought of it. A rough sketch of how it could apply to this loop follows below.
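
A minimal sketch, assuming a recent Dask version and that the proprietary reader has already produced allColumns and TimeStamp as in the question. process_column is a hypothetical helper, and the formatting inside it is simplified compared to the original loop. dask.delayed wraps each call as a lazy task, and dask.compute runs the tasks in parallel:

import csv
import dask

def process_column(i, column_data, timestamps):
    # Hypothetical helper: format one column and write it to its own CSV file.
    csv_name = 'Column_{0:02d}.csv'.format(i + 1)
    with open(csv_name, 'w', newline='') as csv_file:
        writer = csv.writer(csv_file)
        for ts, value in zip(timestamps, column_data):
            writer.writerow([ts, '{:.1f}'.format(value)])
    return csv_name

# One lazy task per column; nothing runs until dask.compute is called.
tasks = [dask.delayed(process_column)(i, allColumns[i].data, TimeStamp)
         for i in range(NumColumns)]

# Execute the tasks in parallel on local worker processes.
dask.compute(*tasks, scheduler='processes')

Using scheduler='processes' requires the task arguments to be picklable; scheduler='threads' avoids that restriction but only helps if the per-column work releases the GIL. Also note that if most of the 9 minutes per file is spent decoding the proprietary format rather than writing the CSV, the reading step is what would need to move inside each task for parallelism to pay off.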