I am running a Python script that raises the above error. The odd part is that the same script runs on a different machine with no problems. The difference is that on the problem machine I am writing to an external hard drive. Stranger still, the script has already run on the problem machine and written over 30,000 files.
Some relevant information (the code that triggers the error):
nPage = 0
while nPage != -1:
    for d in data:
        if len(d.contents) > 1:
            if '<script' in str(d.contents):
                # pull the URL out of the tag contents
                l = str(d.contents[1])
                start = l.find('http://')
                end = l.find('>', start)
                # fetch the page and write it out under a sequential filename
                out = get_records.openURL(l[start:end])
                print COUNT
                with open('../results/' + str(COUNT) + '.html', 'w') as f:
                    f.write(out)
                COUNT += 1
    nPage = nextPage(mOut, False)
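One way to narrow this down is to catch the exception around the write and print the errno, which distinguishes a genuine out-of-space condition (ENOSPC) from something like a per-directory entry limit. A minimal sketch wrapping the write from the loop above (COUNT and out as defined there; errno is in the standard library):

import errno

try:
    with open('../results/' + str(COUNT) + '.html', 'w') as f:
        f.write(out)
except IOError as e:
    # e.errno identifies the actual failure, e.g. 28 = ENOSPC
    print 'failed on file %d: errno %d (%s)' % (
        COUNT, e.errno, errno.errorcode.get(e.errno, 'unknown'))
    raise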
The directory I'm writing to:
10:32@lorax:~/econ/estc/bin$ ll ../
total 56
drwxr-xr-x 3 boincuser boincuser 4096 2011-07-31 14:29 ./
drwxr-xr-x 3 boincuser boincuser 4096 2011-07-31 14:20 ../
drwxr-xr-x 2 boincuser boincuser 4096 2011-08-09 10:38 bin/
lrwxrwxrwx 1 boincuser boincuser 47 2011-07-31 14:21 results -> /media/cavalry/server_backup/econ/estc/results//
-rw-r--r-- 1 boincuser boincuser 44759 2011-08-09 10:32 test.html
Proof there is enough space:
10:38@lorax:~/econ/estc/bin$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             9.0G  5.3G  3.3G  63% /
none                  495M  348K  495M   1% /dev
none                  500M  164K  500M   1% /dev/shm
none                  500M  340K  500M   1% /var/run
none                  500M     0  500M   0% /var/lock
none                  9.0G  5.3G  3.3G  63% /var/lib/ureadahead/debugfs
/dev/sdc10            466G  223G  244G  48% /media/cavalry
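Note that df -h only reports free blocks. On some filesystems the first limit hit is inodes or directory entries, not bytes. A quick check from Python (os.statvfs is a standard POSIX call; '../results' is the symlinked results directory from the listing above):

import os

st = os.statvfs('../results')
print 'free space : %.1f GB' % (st.f_bavail * st.f_frsize / 1e9)
print 'free inodes: %d' % st.f_favail   # may be 0 or meaningless on FAT mounts

Running df -i on the mount point gives the same information from the shell.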
Some things I have tried:
UPDATE: It turns out the best solution for me was simply to reformat the drive. Once it was reformatted, all of these problems went away.
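For anyone who hits the same wall and cannot reformat: if the old filesystem was capping the number of entries per directory (FAT variants on external drives commonly do), spreading the output across subdirectories sidesteps the limit. A rough sketch reusing COUNT and out from the snippet above; shard_path and FILES_PER_DIR are made-up names for illustration:

import os

FILES_PER_DIR = 1000  # hypothetical shard size; pick something well under the limit

def shard_path(count, root='../results'):
    # e.g. count 30123 -> ../results/030/30123.html
    sub = os.path.join(root, '%03d' % (count // FILES_PER_DIR))
    if not os.path.isdir(sub):
        os.makedirs(sub)
    return os.path.join(sub, '%d.html' % count)

with open(shard_path(COUNT), 'w') as f:
    f.write(out)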