I'm trying to merge multiple Excel files with pandas using the following code:
import pandas as pd
from os.path import basename
df = []
for f in ['test1.xlsx', 'test2.xlsx']:
    data = pd.read_excel(f, 'Sheet1')
    data.index = [basename(f)] * len(data)
    df.append(data)
df = pd.concat(df)
df.to_excel("merged_data2.xlsx")
This works fine on the test files, but when I try it on three other Excel files of over 100 MB each, the process becomes too slow to be useful. I saw this other post on the subject: Why does concatenation of DataFrames get exponentially slower?
I believe I have correctly followed its advice of collecting the frames in a list before concatenating, but without success. Any ideas? Thanks.
I don't see anything wrong with your pandas code; a 300 MB Excel file might just be a lot for pandas to handle! Here are some approaches I'd take:
If I were you, my next step in debugging this would be to throw some print(datetime.now()) statements into the loop, to see whether it's the reading, the concatenating, or the .to_excel that's taking the time. That way you may be able to narrow down the problem. Also take a look at your memory usage, using appropriate tools for whatever OS you're in.
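A rough sketch of what I mean, reusing your own loop (I've renamed the list to frames so it isn't reused as the DataFrame name; the exact messages don't matter, only the gaps between the timestamps):

from datetime import datetime
from os.path import basename
import pandas as pd

frames = []
for f in ['test1.xlsx', 'test2.xlsx']:
    print(datetime.now(), 'reading', f)
    data = pd.read_excel(f, 'Sheet1')
    data.index = [basename(f)] * len(data)
    frames.append(data)
print(datetime.now(), 'concatenating')
merged = pd.concat(frames)
print(datetime.now(), 'writing')
merged.to_excel('merged_data2.xlsx')
print(datetime.now(), 'done')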
Pandas is optimized for scientific computing and it probably spends quite a bit of time organizing the data for querying and such. ETL isn't its primary purpose. If you only need to concatenate a few sheets, (as much as it pains me to suggest doing something manually!) manual work in Excel itself will likely be the quickest way to go - highly-paid engineers at Microsoft have been tasked with optimizing that. If you need a programmatic approach, it could be worth trying out petl or one of the tools discussed here that may take a simpler/more efficient approach than pandas.
Some example petl code that might do the trick:
import petl

# concatenate the sheets row-wise and write the result to a new workbook
petl.cat(*(
    petl.io.fromxlsx(file)
    for file in ['your.xlsx', 'excel.xlsx', 'files.xlsx']
)).progress().toxlsx('merged_data2.xlsx')
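If I remember right, petl evaluates its pipelines lazily, streaming rows through rather than holding whole tables in memory, which may help at this file size (though writing xlsx output still has its own overhead).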