fseek(f, 0, SEEK_END);
size = ftell(f);
If ftell(f) tells us the current file position, then the size here is the offset from the beginning of the file to the end. Why is the size not ftell(f)+1? Shouldn't ftell(f) give us only the position of the last byte of the file?
File positions are like the cursor in a text entry widget: they are in between the bytes of the file. This is maybe easiest to understand if I draw a picture (redrawn here in ASCII):

    +---+---+---+---+···+
    | a | b | c | d | X :
    +---+---+---+---+···+
    0   1   2   3   4
This is a hypothetical file. It contains four characters: a, b, c, and d. Each character gets a little box to itself, which we call a "byte". (This file is ASCII.) The fifth box has been crossed out because it's not part of the file yet, but if you appended a fifth character to the file it would spring into existence.
The valid file positions in this file are 0, 1, 2, 3, and 4. There are five of them, not four; they correspond to the vertical lines before, after, and in between the boxes. When you open the file (assuming you don't use "a"), you start out at position 0, the line before the first byte in the file. When you seek to the end, you arrive at position 4, the line after the last byte in the file. Because we start counting from zero, this is also the number of bytes in the file. (This is one of the several reasons why we start counting from zero, rather than one.)
I am obliged to warn you that there are several reasons why
fseek(fp, 0, SEEK_END);
long int nbytes = ftell(fp);
might not give you the number you actually want, depending on what you mean by "file size" and on the contents of the file. In no particular order:
On Windows, if you open a file in text mode, the numbers you get from ftell on that file are not byte offsets from the beginning of the file; they are more like fgetpos cookies, which can only be used in a subsequent call to fseek. If you need to seek around in a text file on Windows, you may be better off opening the file in binary mode and dealing with both DOS and Unix line endings yourself; this is actually my recommendation for production code in general, because it's perfectly possible to have a file with DOS line endings on a Unix system, or vice versa.
On systems where long int is 32 bits, files can easily be bigger than that, in which case ftell will fail, return −1, and set errno to EOVERFLOW. POSIX.1-2001-compliant systems provide a function called ftello that returns an off_t quantity that can represent larger file sizes, provided you put #define _FILE_OFFSET_BITS 64 at the very top of all your source files (before any #includes). I don't know what the Windows equivalent is.
If your file contains characters beyond ASCII, then the number of bytes in the file is very likely to be different from the number of characters in the file. (For instance, if the file is encoded in UTF-8, the character 啡 will take up three bytes, Ä will take up either two or three bytes depending on whether it's "composed", and జ్ఞా will take up twelve bytes because, despite being a single grapheme, it's a string of four Unicode code points.) ftell (or ftello) will still tell you the correct number to pass to malloc, if your goal is to read the entire file into memory, but iterating over "characters" will not be as simple as for (i = 0; i < len; i++).
If you are using C's "wide streams" and "wide characters", then, just like text streams on Windows, the numbers you get from ftell on such a stream are not byte offsets and may not be useful for anything other than subsequent calls to fseek. But wide streams and characters are a bad design anyway; you're actually more likely to be able to handle all the world's languages correctly if you stick to processing UTF-8 by hand in narrow streams and characters.