The title is fairly self-explanatory.
I have a long CSV file that I would like to read line by line with the following code:
import pandas as pd

lines = []
for line in pd.read_csv(file, chunksize=1, header=None):
    lines.append(line.iloc[0, 0])
print(lines)
I'd like to skip the first 48 rows. At first it seemed simple enough and I thought all I needed to do was change my read function to:
pd.read_csv(file, chunksize=1, header=None, skiprows=48)
Sadly, this seems to skip 48 rows on every single iteration of the loop. Not a great outcome.
How can I keep reading this long, irregular file line by line while skipping only the first 48 rows?
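To make the goal concrete, here is the output I am after on a small hypothetical stand-in for the file (the real file is far too long and irregular to simply load in one go, which is why I am reading it chunk by chunk):

import io
import pandas as pd

# Hypothetical 50-row, single-column sample standing in for the real file.
sample = io.StringIO("\n".join(f"row_{i}" for i in range(50)))

# Desired result: the first column of every row after the first 48,
# shown here with a full read purely for illustration.
wanted = pd.read_csv(sample, header=None).iloc[48:, 0].tolist()
print(wanted)  # ['row_48', 'row_49']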
You could set skiprows to a variable and reset it after the first iteration.
import pandas as pd

lines = []
row_skip = 48
for line in pd.read_csv(file, chunksize=1, header=None, skiprows=row_skip):
    lines.append(line.iloc[0, 0])
    if row_skip:
        # Clear the skip count once the first chunk has been read,
        # so it is not applied again.
        row_skip = None
print(lines)
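As a quick sanity check, here is the same loop run against a small in-memory stand-in for the file (the 60-row sample is hypothetical): the first value collected is the 49th line, and exactly 48 lines are dropped.

import io
import pandas as pd

# Hypothetical stand-in for the real file: 60 single-column rows row_0 .. row_59.
sample = io.StringIO("\n".join(f"row_{i}" for i in range(60)))

lines = []
row_skip = 48
for line in pd.read_csv(sample, chunksize=1, header=None, skiprows=row_skip):
    lines.append(line.iloc[0, 0])
    if row_skip:
        row_skip = None

print(lines[0])    # row_48 -> the 49th line of the file
print(len(lines))  # 12 -> 60 lines minus the 48 skipped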