ios cocoa concurrency grand-central-dispatch nslog

Does a GCD dispatch_async wait on NSLog()?


From what I've read about Grand Central Dispatch, GCD does not do preemptive multitasking; it is all a single event loop. I'm having trouble making sense of this output. I have two queues just doing some output (at first I was reading/writing some shared state, but I was able to simplify down to this and still get the same result).

dispatch_queue_t authQueue = dispatch_queue_create("authQueue", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t authQueue2 = dispatch_queue_create("authQueue2", DISPATCH_QUEUE_SERIAL);

dispatch_async(authQueue, ^{ 
    NSLog(@"First Block");
    NSLog(@"First Block Incrementing"); 
    NSLog(@"First Block Incremented"); 
});

dispatch_async(authQueue, ^{ 
    NSLog(@"Second Block");
    NSLog(@"Second Block Incrementing");
    NSLog(@"Second Block Incremented"); 
});

dispatch_async(authQueue2,^{ 
    NSLog(@"Third Block"); 
    NSLog(@"Third Block Incrementing");
    NSLog(@"Third Block Incremented"); 
});

I get the following output:

2011-12-15 13:47:17.746 App[80376:5d03] Third Block
2011-12-15 13:47:17.746 App[80376:1503] First Block
2011-12-15 13:47:17.746 App[80376:5d03] Third Block Incrementing
2011-12-15 13:47:17.746 App[80376:1503] First Block Incrementing
2011-12-15 13:47:17.748 App[80376:1503] First Block Incremented
2011-12-15 13:47:17.748 App[80376:5d03] Third Block Incremented
2011-12-15 13:47:17.750 App[80376:1503] Second Block
2011-12-15 13:47:17.750 App[80376:1503] Second Block Incrementing
2011-12-15 13:47:17.751 App[80376:1503] Second Block Incremented

As is evident, the blocks do not execute atomically. My only theory is that writing to stdio via NSLog() makes the currently executing block wait. I can't find anything related to this in the Apple documentation. Can anyone explain it?


Solution

  • GCD does not use any kind of "event loop". It is a kernel-backed threading technology, introduced in Mac OS X 10.6 and iOS 4, and it doesn't really have a close equivalent that I know of.

    The goal is to finish executing all of the code you give it as quickly as the hardware will allow. Note that it aims for the quickest finish time, not the quickest start time. That is a subtle difference, but an important one, with real-world impact on how it behaves.

    If you only have one idle CPU core, then in theory only one of your blocks should execute at a time, because time-slicing two tasks on a single core is slower than running them one after the other. In practice, though, that isn't what happens. If a core becomes idle or stalls for a moment (waiting on the disk, say, or for some other process to respond, such as Xcode drawing the NSLog output), it will quite likely move on to a second GCD work item, because the one it's currently running is stuck. The first sketch at the end of this answer demonstrates this interleaving.

    And of course, most of the time you will have more than one idle CPU core.

    It also will not necessarily start items from different queues in the exact order you submitted them; GCD and the kernel control those details. (Within a single serial queue, blocks do run in the order they were enqueued.)

    For your specific example, Xcode's debugger is probably only capable of processing a single NSLog() event at a time (at the very least, it has to draw them to the screen one at a time). You've got two queues, and they may begin executing simultaneously. If both send an NSLog() at once, one of them has to wait for the other to finish first. Because the blocks do nothing but print to Xcode, the two queues are in a race to be the first to get their log data through. The first one has a head start, but it's an extremely slight one, and often not enough for it to reach Xcode first. If you need each block's output to come out as one uninterrupted unit, see the second sketch at the end of this answer.

    It all depends on what hardware resources happen to be free at that precise instant. You can't predict it, so structure your queues to give yourself whatever guarantees you actually need; the second and third sketches below show two ways to do that.
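
    As a minimal sketch of that behavior (a standalone main.m; the queue labels and log text here are made up for the example), the following shows that blocks submitted to the same serial queue always run one after another in FIFO order, while a block on a different serial queue is free to run at the same time on another thread, so its output can interleave with theirs:

        // main.m; build with: clang -fobjc-arc -framework Foundation main.m -o demo
        #import <Foundation/Foundation.h>

        int main(int argc, const char *argv[]) {
            @autoreleasepool {
                dispatch_queue_t queueA = dispatch_queue_create("demo.queueA", DISPATCH_QUEUE_SERIAL);
                dispatch_queue_t queueB = dispatch_queue_create("demo.queueB", DISPATCH_QUEUE_SERIAL);
                dispatch_group_t group  = dispatch_group_create();

                // Two blocks on the SAME serial queue: these always run in FIFO order,
                // one finishing before the next starts.
                dispatch_group_async(group, queueA, ^{ NSLog(@"A1 on %@", [NSThread currentThread]); });
                dispatch_group_async(group, queueA, ^{ NSLog(@"A2 on %@", [NSThread currentThread]); });

                // A block on a DIFFERENT serial queue: free to run concurrently with
                // queueA's blocks, so its log line can land between A1 and A2.
                dispatch_group_async(group, queueB, ^{ NSLog(@"B1 on %@", [NSThread currentThread]); });

                // Keep the process alive until all three blocks have finished.
                dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
            }
            return 0;
        }

    Run it a few times: A1 always precedes A2, but B1 can land anywhere relative to them, and the thread descriptions typically show the two queues being serviced by different threads from GCD's pool.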
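
    If you need each block's three log lines to come out as one uninterrupted, ordered unit, the simplest fix (a sketch of one approach, reusing the code from the question; the workQueue name is just for the example) is to submit all of the work to a single serial queue, so each block runs to completion before the next one starts:

        dispatch_queue_t workQueue = dispatch_queue_create("authQueue", DISPATCH_QUEUE_SERIAL);

        // All three blocks go to the same serial queue, so they execute strictly
        // one after another, in submission order.
        dispatch_async(workQueue, ^{
            NSLog(@"First Block");
            NSLog(@"First Block Incrementing");
            NSLog(@"First Block Incremented");
        });

        dispatch_async(workQueue, ^{
            NSLog(@"Second Block");
            NSLog(@"Second Block Incrementing");
            NSLog(@"Second Block Incremented");
        });

        dispatch_async(workQueue, ^{
            NSLog(@"Third Block");
            NSLog(@"Third Block Incrementing");
            NSLog(@"Third Block Incremented");
        });

    With a single serial queue the nine lines always appear in exactly this order, at the cost of giving up any parallelism between the blocks.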
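
    If the two queues model genuinely separate pieces of work and you'd rather keep both, another option is to retarget them at one shared serial queue with dispatch_set_target_queue(). Each queue keeps its own FIFO ordering, but because the shared target is serial, no two blocks from either queue ever run at the same time. A sketch (the "shared.target" label is made up):

        dispatch_queue_t target     = dispatch_queue_create("shared.target", DISPATCH_QUEUE_SERIAL);
        dispatch_queue_t authQueue  = dispatch_queue_create("authQueue",  DISPATCH_QUEUE_SERIAL);
        dispatch_queue_t authQueue2 = dispatch_queue_create("authQueue2", DISPATCH_QUEUE_SERIAL);

        // Funnel both queues into the same serial target. Blocks can still be
        // submitted to either queue independently, but they now execute one at
        // a time, never concurrently with each other.
        dispatch_set_target_queue(authQueue, target);
        dispatch_set_target_queue(authQueue2, target);

    Set the target before submitting any blocks. The relative order of blocks from the two queues is still up to GCD, but no block from these two queues will have its NSLog() output interleaved with another's.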