Tags: c, optimization, glibc, portability, strlen

Why does glibc's strlen need to be so complicated to run quickly?


I was looking through the strlen code here and I was wondering if the optimizations used in the code are really needed. For example, why wouldn't something like the following work equally well or better?

unsigned long strlen(char s[]) {
    unsigned long i;
    for (i = 0; s[i] != '\0'; i++)
        continue;
    return i;
}

Isn't simpler code better and/or easier for the compiler to optimize?

The strlen code on the linked page looks like this:

/* Copyright (C) 1991, 1993, 1997, 2000, 2003 Free Software Foundation, Inc.
   This file is part of the GNU C Library.
   Written by Torbjorn Granlund (tege@sics.se),
   with help from Dan Sahlin (dan@sics.se);
   commentary by Jim Blandy (jimb@ai.mit.edu).

   The GNU C Library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   The GNU C Library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with the GNU C Library; if not, write to the Free
   Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
   02111-1307 USA.  */

#include <string.h>
#include <stdlib.h>

#undef strlen

/* Return the length of the null-terminated string STR.  Scan for
   the null terminator quickly by testing four bytes at a time.  */
size_t
strlen (str)
     const char *str;
{
  const char *char_ptr;
  const unsigned long int *longword_ptr;
  unsigned long int longword, magic_bits, himagic, lomagic;

  /* Handle the first few characters by reading one character at a time.
     Do this until CHAR_PTR is aligned on a longword boundary.  */
  for (char_ptr = str; ((unsigned long int) char_ptr
            & (sizeof (longword) - 1)) != 0;
       ++char_ptr)
    if (*char_ptr == '\0')
      return char_ptr - str;

  /* All these elucidatory comments refer to 4-byte longwords,
     but the theory applies equally well to 8-byte longwords.  */

  longword_ptr = (unsigned long int *) char_ptr;

  /* Bits 31, 24, 16, and 8 of this number are zero.  Call these bits
     the "holes."  Note that there is a hole just to the left of
     each byte, with an extra at the end:

     bits:  01111110 11111110 11111110 11111111
     bytes: AAAAAAAA BBBBBBBB CCCCCCCC DDDDDDDD

     The 1-bits make sure that carries propagate to the next 0-bit.
     The 0-bits provide holes for carries to fall into.  */
  magic_bits = 0x7efefeffL;
  himagic = 0x80808080L;
  lomagic = 0x01010101L;
  if (sizeof (longword) > 4)
    {
      /* 64-bit version of the magic.  */
      /* Do the shift in two steps to avoid a warning if long has 32 bits.  */
      magic_bits = ((0x7efefefeL << 16) << 16) | 0xfefefeffL;
      himagic = ((himagic << 16) << 16) | himagic;
      lomagic = ((lomagic << 16) << 16) | lomagic;
    }
  if (sizeof (longword) > 8)
    abort ();

  /* Instead of the traditional loop which tests each character,
     we will test a longword at a time.  The tricky part is testing
     if *any of the four* bytes in the longword in question are zero.  */
  for (;;)
    {
      /* We tentatively exit the loop if adding MAGIC_BITS to
     LONGWORD fails to change any of the hole bits of LONGWORD.

     1) Is this safe?  Will it catch all the zero bytes?
     Suppose there is a byte with all zeros.  Any carry bits
     propagating from its left will fall into the hole at its
     least significant bit and stop.  Since there will be no
     carry from its most significant bit, the LSB of the
     byte to the left will be unchanged, and the zero will be
     detected.

     2) Is this worthwhile?  Will it ignore everything except
     zero bytes?  Suppose every byte of LONGWORD has a bit set
     somewhere.  There will be a carry into bit 8.  If bit 8
     is set, this will carry into bit 16.  If bit 8 is clear,
     one of bits 9-15 must be set, so there will be a carry
     into bit 16.  Similarly, there will be a carry into bit
     24.  If one of bits 24-30 is set, there will be a carry
     into bit 31, so all of the hole bits will be changed.

     The one misfire occurs when bits 24-30 are clear and bit
     31 is set; in this case, the hole at bit 31 is not
     changed.  If we had access to the processor carry flag,
     we could close this loophole by putting the fourth hole
     at bit 32!

     So it ignores everything except 128's, when they're aligned
     properly.  */

      longword = *longword_ptr++;

      if (
#if 0
      /* Add MAGIC_BITS to LONGWORD.  */
      (((longword + magic_bits)

        /* Set those bits that were unchanged by the addition.  */
        ^ ~longword)

       /* Look at only the hole bits.  If any of the hole bits
          are unchanged, most likely one of the bytes was a
          zero.  */
       & ~magic_bits)
#else
      ((longword - lomagic) & himagic)
#endif
      != 0)
        {
          /* Which of the bytes was the zero?  If none of them were, it was
             a misfire; continue the search.  */

          const char *cp = (const char *) (longword_ptr - 1);

          if (cp[0] == 0)
            return cp - str;
          if (cp[1] == 0)
            return cp - str + 1;
          if (cp[2] == 0)
            return cp - str + 2;
          if (cp[3] == 0)
            return cp - str + 3;
          if (sizeof (longword) > 4)
            {
              if (cp[4] == 0)
                return cp - str + 4;
              if (cp[5] == 0)
                return cp - str + 5;
              if (cp[6] == 0)
                return cp - str + 6;
              if (cp[7] == 0)
                return cp - str + 7;
            }
        }
    }
}
libc_hidden_builtin_def (strlen)

Why does this version run quickly?

Isn't it doing a lot of unnecessary work?


Solution

  • You don't need to write code like that - and you should never write code like that - especially if you're not a C compiler / standard library vendor. It is code used to implement strlen with some very questionable speed hacks and assumptions (that are neither tested with assertions nor mentioned in the comments):

    What is more, a good compiler could even replace code written as

    size_t stupid_strlen(const char s[]) {
        size_t i;
        for (i=0; s[i] != '\0'; i++)
            ;
        return i;
    }
    

    (notice that the counter has to be a type compatible with size_t) with an inlined version of the compiler's built-in strlen, or vectorize the loop; but a compiler would be unlikely to be able to optimize the complex version in the same way.


    The strlen function is described by C11 7.24.6.3 as:

    Description

    1. The strlen function computes the length of the string pointed to by s.

    Returns

    1. The strlen function returns the number of characters that precede the terminating null character.

    Now, if the string pointed to by s is in an array of characters just long enough to contain the string and the terminating NUL, the behaviour is undefined if we access the string past the null terminator, for example in

    char *str = "hello world";  // or
    char array[] = "hello world";
    

    So really the only way to implement this correctly in fully portable / standards-compliant C is the way it is written in your question, except for trivial transformations - you can try to be faster by unrolling the loop and so on (one such transformation is sketched below), but it still needs to be done one byte at a time.
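
    For illustration, here is one such trivial transformation - a sketch (not glibc's code; the name unrolled_strlen and the unroll factor are just for the example) of the same byte-at-a-time loop unrolled by four. It stays fully portable because each byte is only examined after the previous one has been seen to be non-zero:

    #include <stddef.h>

    size_t unrolled_strlen(const char *s) {
        size_t i = 0;
        for (;;) {
            /* Each byte is read only after the previous one was non-zero,
               so no access ever goes past the terminator. */
            if (s[i] == '\0')     return i;
            if (s[i + 1] == '\0') return i + 1;
            if (s[i + 2] == '\0') return i + 2;
            if (s[i + 3] == '\0') return i + 3;
            i += 4;
        }
    }

    Whether this is actually faster than the plain loop is something only a benchmark on a particular compiler and machine can tell.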

    (As commenters have pointed out, when strict portability is too much of a burden, taking advantage of reasonable or known-safe assumptions is not always a bad thing. Especially in code that's part of one specific C implementation. But you have to understand the rules before knowing how/when you can bend them.)


    The linked strlen implementation first checks the bytes individually until the pointer reaches the natural 4- or 8-byte alignment boundary of unsigned long. The C standard says that accessing an object through a pointer that is not suitably aligned has undefined behaviour, so this absolutely has to be done for the next dirty trick to be even dirtier. (In practice, on some CPU architectures other than x86 a misaligned word or doubleword load will fault. C is not a portable assembly language, but this code is using it as one.) The alignment is also what makes it possible to read past the end of an object without any risk of faulting on implementations where memory protection works in aligned blocks (e.g. 4 KiB virtual memory pages).
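
    Roughly, the alignment prologue amounts to the following sketch. This is not glibc's exact code: it uses uintptr_t instead of the cast to unsigned long int, and the word-at-a-time middle part is replaced by a plain byte loop so that the sketch remains correct on its own:

    #include <stddef.h>
    #include <stdint.h>

    size_t word_strlen_sketch(const char *str) {
        const char *p = str;

        /* Byte-at-a-time until p is a multiple of sizeof (unsigned long),
           returning early if the terminator shows up before that. */
        while (((uintptr_t) p & (sizeof (unsigned long) - 1)) != 0) {
            if (*p == '\0')
                return (size_t) (p - str);
            ++p;
        }

        /* From here on glibc switches to reading one unsigned long per
           iteration; a stand-in byte loop keeps this sketch portable. */
        while (*p != '\0')
            ++p;
        return (size_t) (p - str);
    }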

    Now comes the dirty part: the code breaks the one-byte-at-a-time promise and reads 4 or 8 bytes at a time (an unsigned long), using a bit trick with unsigned arithmetic to quickly figure out whether any of those 4 or 8 bytes is zero - it uses a specially crafted number that causes carries (or borrows) to flip bits that are then caught by a bit mask. In essence this determines whether any byte in the word is zero, supposedly faster than looping over each byte would be. Finally there is a series of byte checks at the end to figure out which byte was the first zero, if any, and to return the result.
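
    The zero-byte test itself can be shown in isolation. The sketch below is not glibc's code: it uses fixed 32-bit values, and it keeps an extra "& ~w" term that makes the test exact, whereas the glibc loop drops that term (accepting occasional false positives) and re-checks the bytes individually after a hit:

    #include <stdint.h>
    #include <stdio.h>

    static int has_zero_byte(uint32_t w) {
        const uint32_t lomagic = 0x01010101u;  /* 0x01 in every byte */
        const uint32_t himagic = 0x80808080u;  /* 0x80 in every byte */
        /* Subtracting 1 from a zero byte borrows into its high bit; the
           "& ~w" term rules out bytes whose high bit was already set. */
        return ((w - lomagic) & ~w & himagic) != 0;
    }

    int main(void) {
        printf("%d\n", has_zero_byte(0x41424344u));  /* "ABCD"   -> 0 */
        printf("%d\n", has_zero_byte(0x41420044u));  /* has 0x00 -> 1 */
        return 0;
    }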

    The biggest problem is that in sizeof (unsigned long) - 1 out of sizeof (unsigned long) cases it will read past the end of the string - only if the null byte happens to be the last byte of the word that is read (i.e. in little-endian the most significant byte of the loaded value, and in big-endian the least significant) does it not access the array out of bounds!
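
    To make that fraction concrete, here is a small illustrative program (not part of the answer's test case) that assumes an 8-byte unsigned long and a string starting on a word boundary, and works out how far past the terminator the final word-sized load reaches for a few string lengths; it reaches past it in every case except when the terminator lands on the last byte of a word:

    #include <stdio.h>

    int main(void) {
        const unsigned wordsize = 8;      /* assumed sizeof (unsigned long) */
        for (unsigned len = 5; len <= 12; len++) {
            unsigned nul = len;           /* index of the terminating '\0' */
            /* Last byte covered by the aligned word containing the NUL. */
            unsigned last = (nul / wordsize) * wordsize + (wordsize - 1);
            printf("length %2u: final load reads %u byte(s) past the terminator\n",
                   len, last - nul);
        }
        return 0;
    }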


    The code, even though it is used to implement strlen in a C standard library, is bad code. It has several implementation-defined and undefined aspects in it and it should not be used anywhere instead of the system-provided strlen - I renamed the function to the_strlen here and added the following main:

    #include <stdio.h>

    int main(void) {
        char buf[12];
        printf("%zu\n", the_strlen(fgets(buf, 12, stdin)));
    }
    

    The buffer is carefully sized so that it can hold exactly the hello world string and the terminator (12 bytes). However on my 64-bit processor unsigned long is 8 bytes, so the second word-sized access covers bytes 8-15 of the buffer - that is, it reads 4 bytes past its end.

    If I now compile with -fsanitize=undefined and -fsanitize=address and run the resulting program, I get:

    % ./a.out
    hello world
    =================================================================
    ==8355==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffffe63a3f8 at pc 0x55fbec46ab6c bp 0x7ffffe63a350 sp 0x7ffffe63a340
    READ of size 8 at 0x7ffffe63a3f8 thread T0
        #0 0x55fbec46ab6b in the_strlen (.../a.out+0x1b6b)
        #1 0x55fbec46b139 in main (.../a.out+0x2139)
        #2 0x7f4f0848fb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
        #3 0x55fbec46a949 in _start (.../a.out+0x1949)
    
    Address 0x7ffffe63a3f8 is located in stack of thread T0 at offset 40 in frame
        #0 0x55fbec46b07c in main (.../a.out+0x207c)
    
      This frame has 1 object(s):
        [32, 44) 'buf' <== Memory access at offset 40 partially overflows this variable
    HINT: this may be a false positive if your program uses some custom stack unwind mechanism or swapcontext
          (longjmp and C++ exceptions *are* supported)
    SUMMARY: AddressSanitizer: stack-buffer-overflow (.../a.out+0x1b6b) in the_strlen
    Shadow bytes around the buggy address:
      0x10007fcbf420: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf430: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf440: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf450: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf460: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    =>0x10007fcbf470: 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00[04]
      0x10007fcbf480: f2 f2 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf490: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf4a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf4b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf4c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    Shadow byte legend (one shadow byte represents 8 application bytes):
      Addressable:           00
      Partially addressable: 01 02 03 04 05 06 07 
      Heap left redzone:       fa
      Freed heap region:       fd
      Stack left redzone:      f1
      Stack mid redzone:       f2
      Stack right redzone:     f3
      Stack after return:      f5
      Stack use after scope:   f8
      Global redzone:          f9
      Global init order:       f6
      Poisoned by user:        f7
      Container overflow:      fc
      Array cookie:            ac
      Intra object redzone:    bb
      ASan internal:           fe
      Left alloca redzone:     ca
      Right alloca redzone:    cb
    ==8355==ABORTING
    

    i.e. bad things happened.