I was trying to solve a coding problem and, to summarize, I was using this piece of code:
char result[50005][26], buffer[50005];

while (fin >> buffer) {
    for (int i = 0; i < strlen(buffer); ++i) {
        result[lin++][col] = buffer[i];
    }
    lin = 0;
    col++;
}
I needed to read some arbitrary words and then store the letters. This worked and gave me 90 points, but it hit a time limit error. Then I switched the buffer from a char array to an std::string (and replaced the strlen, of course), and it gave me 100 points.
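The std::string version is essentially the same loop with the bound taken from the string itself; a rough sketch (assuming <string> is included and result, lin, col, fin are as above):

std::string buffer;  // instead of char buffer[50005]

while (fin >> buffer) {
    for (size_t i = 0; i < buffer.size(); ++i) {  // .size() replaces strlen(buffer)
        result[lin++][col] = buffer[i];
    }
    lin = 0;
    col++;
}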
Now I'm left dumbfounded because I can't really grasp the difference between the two. Was reading the char array somehow also "reading" the rest of the 50k unused bytes or something?
See the "Possible implementation" of strlen
in the C++ docs. Note that not only is O(1) performance not guaranteed, it would be impossible to achieve. strlen
only receives an array of char
, and hence has no clue how long your string is; it must iterate until the end of the string is reached. On the other hand, std::string
has a size_t
member that keeps track of the string length, as listed in the docs. Hence reading its length with .length()
is O(1).
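To make that concrete, here is a minimal sketch of such a linear scan (my_strlen is just an illustrative name, not the library's actual code):

#include <cstddef>

// Walk the array until the terminating '\0', counting characters along the way.
// One step per character, so the cost grows linearly with the string's length.
std::size_t my_strlen(const char* s) {
    std::size_t len = 0;
    while (s[len] != '\0') {
        ++len;
    }
    return len;
}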
In your implementation, strlen is therefore called in each iteration of the loop, so the total cost is O(N) work per iteration times N iterations = O(N^2), whereas .length() would cost O(1) per iteration times N iterations = O(N).
A simple fix would be to compute the length of buffer once, before the loop, and store it, for example with size_t len = strlen(buffer). Alternatively, you may iterate over buffer until the end of the string by checking whether buffer[i] is '\0'.
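Applied to the loop from the question, the first fix looks roughly like this (keeping fin, lin, col and the headers, e.g. <cstring> for strlen, as in your full program):

char result[50005][26], buffer[50005];

while (fin >> buffer) {
    size_t len = strlen(buffer);   // measured once per word, not once per character
    for (size_t i = 0; i < len; ++i) {
        result[lin++][col] = buffer[i];
    }
    lin = 0;
    col++;
}

The second option would instead change the loop condition to buffer[i] != '\0', which likewise touches each character only once.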