c++ boost queue lock-free lockless

Is boost::lockfree::queue (in multithreaded program) lockable?


I am working on a program where two or more (gstreamer) boost threads and the same number of boost threads of a dummy application use a queue simultaneously. This queue is used for synchronization between a task on a gstreamer thread and its corresponding dummy-application thread.

The queue is an EVENT queue: where the EVENT is a structure

typedef struct EVENT {
    EVENT_TYPE Ev_Type;  // EVENT_TYPE is an enum of events
    EVENT_DATA Ev_Data;  // EVENT_DATA is a union of the data stored for that event
} Event_;

Upon googling, I came across these two options for the queue: lockfree::queue and lockfree::spsc_queue, with suggestions that lockfree queues are meant for multithreaded applications.

CONFUSION: Why the name lockFREE? Is it suggesting that it cannot be (mutex) locked?

I also saw this example, which says that "boost::lockfree::queue is not lockfree".

Mind=blown...

Well, I then tried to follow the example (linked above) and implement this queue:

class Foo {
protected:
    boost::lockfree::queue<EVENT> mqSttEventQueue;
public:
    unsigned int SetEventIntoQueue(EVENT *psEvent);
};

And its definition as:

unsigned int Foo::SetEventIntoQueue(EVENT *psEvent) {
    if (mqSttEventQueue.push(*psEvent)) {
        // notify that the event is in the queue
        return 1;
    }
    return 0;  // push failed, e.g. the queue was full
}

This compiles successfully, but I am completely running in the dark here.

QUESTIONS:

What is that 128 there for? Is it saying the queue size is 128 (bytes, or items)? And is queue<int> declaring the type of data in the queue?

And if I declare it like this, I get a compilation error:

error: expected identifier before numeric constant

boost::lockfree::queue<EVENT> mqSttEventQueue(128);
                                              ^~~

PS:- I am really not sure what title to put here...Edit it if you can.


Solution

  • Why the name lockFREE? Is it suggesting it cannot be (mutex) locked?

    Of course anything can be locked; you put the mutex outside the data structure and have every thread that touches the data structure use it.

    boost::lockfree::queue provides unsynchronized_pop and unsynchronized_push for use in cases where you've ensured that only one thread can be accessing the queue.
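    For instance, a minimal sketch of a single-threaded setup phase (the values and capacity here are just illustrative):

    ```cpp
    #include <boost/lockfree/queue.hpp>
    #include <iostream>

    int main() {
        boost::lockfree::queue<int> q(64);

        // Single-threaded phase: no other thread can touch q yet, so the
        // cheaper unsynchronized_* variants are safe to use here.
        for (int i = 0; i < 10; ++i)
            q.unsynchronized_push(i);

        int v = 0, count = 0;
        while (q.unsynchronized_pop(v))
            ++count;

        std::cout << "drained " << count << '\n';  // prints "drained 10"
    }
    ```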

    But the main purpose of lockfree::queue and of lockless algorithms / data structures is that they don't need to be locked: multiple threads can safely write and/or read at the same time.
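    For example, a producer and a consumer can run concurrently with no mutex at all (a minimal sketch; the 1024-entry capacity is arbitrary):

    ```cpp
    #include <boost/lockfree/queue.hpp>
    #include <atomic>
    #include <iostream>
    #include <thread>

    int main() {
        boost::lockfree::queue<int> q(1024);  // capacity in entries
        std::atomic<long> sum{0};

        // One writer and one reader touch the queue at the same time, unlocked.
        std::thread producer([&] {
            for (int i = 1; i <= 1000; ++i)
                while (!q.push(i)) {}        // push reports failure when full
        });
        std::thread consumer([&] {
            int v = 0, received = 0;
            while (received < 1000)
                if (q.pop(v)) { sum += v; ++received; }
        });
        producer.join();
        consumer.join();

        std::cout << "sum=" << sum << '\n';  // 1+2+...+1000 = 500500
    }
    ```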


    "lock free" has 2 meanings in programming, leading to potentially confusing but true statements like "this lockless algorithm is not lock-free".

    Most lockless multi-reader / multi-writer queues are not lock-free in the technical sense. Usually they use a circular buffer, and a writer will "claim" an entry somehow (fixing its order in the queue), but the entry can't be read until the writer finishes writing to it.

    See Lock-free Progress Guarantees for an example with analysis of its possible blocking behaviour. A writer atomically increments a write-index, then writes data into the array entry. If the writer sleeps between doing those things, other writers can fill up the later entries while readers are stuck waiting for that claimed but not written entry. (I haven't looked at boost::lockfree::queue<T>, but presumably it's similar1.)
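    That claim-then-write pattern can be sketched roughly like this (a hypothetical illustration with std::atomic, not boost's actual internals; Slot, ring, and write_idx are made-up names):

    ```cpp
    #include <atomic>
    #include <cassert>
    #include <cstddef>
    #include <iostream>

    constexpr std::size_t N = 128;

    struct Slot {
        int payload = 0;
        std::atomic<bool> ready{false};  // readers wait until this flips
    };

    Slot ring[N];
    std::atomic<std::size_t> write_idx{0};

    void produce(int value) {
        std::size_t i = write_idx.fetch_add(1) % N;  // claim: position now fixed
        ring[i].payload = value;                     // if the thread sleeps here,
        ring[i].ready.store(true,                    // readers stall on slot i even
            std::memory_order_release);              // though later slots get written
    }

    int main() {
        produce(7);
        assert(ring[0].ready.load() && ring[0].payload == 7);
        std::cout << "ok\n";
    }
    ```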

    In practice performance is excellent with very low contention between writers and readers. But in theory a writer could block at just the wrong moment and stall the whole queue.


    Footnote 1: The other plausible option for a queue is a linked list. In that case you can fully construct a new node and then attempt to CAS it into the list, so if you succeed at adding it, other threads can read it right away because its pointers are already set correctly.

    But the memory-reclamation problem (safely freeing nodes that other threads might still be reading) is extremely thorny outside of garbage-collected languages / environments (e.g. Java).


    boost::lockfree::queue<int> queue(128); Why 128?

    That's the queue's (max) size, in entries: entries of int in this case, because you used queue<int>. As mentioned above, most lockless queues use a fixed-size circular buffer. It can't realloc and copy like std::vector when it needs to grow, because other threads may be reading it simultaneously.

    As documented in the manual (the first google hit for boost::lockfree::queue), the explicit queue(size_type) constructor takes a size.
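    That constructor is also why the member declaration in the question fails to compile: you can't pass a constructor argument with parentheses at the point of a non-static member's declaration. Pass the capacity from the enclosing class's constructor instead (a sketch reusing the question's names, with stand-in enum/union definitions so it is self-contained):

    ```cpp
    #include <boost/lockfree/queue.hpp>
    #include <iostream>

    // Stand-ins for the asker's types, just so this compiles on its own.
    enum EVENT_TYPE { EV_NONE };
    union EVENT_DATA { int i; };
    struct EVENT { EVENT_TYPE Ev_Type; EVENT_DATA Ev_Data; };

    class Foo {
    protected:
        boost::lockfree::queue<EVENT> mqSttEventQueue;  // no size here...
    public:
        Foo() : mqSttEventQueue(128) {}                 // ...capacity goes here
    };

    int main() {
        Foo f;
        std::cout << "constructed\n";
    }
    ```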

    You could also bake the capacity into the type, by using it as a template parameter. (So the capacity becomes a compile-time constant everywhere that uses the queue, not just in places that can do constant-propagation from the constructor call.)
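    With boost::lockfree's capacity<> policy that looks like this (a sketch; the 128 is arbitrary):

    ```cpp
    #include <boost/lockfree/queue.hpp>
    #include <cassert>
    #include <iostream>

    int main() {
        // Capacity is baked into the type, so the queue is default-constructible
        // and the buffer size is a compile-time constant.
        boost::lockfree::queue<int, boost::lockfree::capacity<128>> q;

        int pushed = 0;
        for (int i = 0; i < 200; ++i)
            if (q.push(i)) ++pushed;  // push fails once the fixed buffer is full

        assert(pushed > 0 && pushed <= 128);
        std::cout << "pushed " << pushed << " of 200\n";
    }
    ```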

    The class apparently doesn't enforce / require a power-of-2 size, so baking the size into the type as a template parameter could maybe optimize significantly better: a compile-time power-of-2 capacity lets % capacity operations compile into an AND with a mask instead of a division.
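    The difference can be seen in a sketch like this (kCap is a made-up compile-time constant): with a constant power-of-2 capacity the compiler can reduce i % kCap to i & (kCap - 1), while a runtime capacity generally forces a real division.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <iostream>

    constexpr std::size_t kCap = 128;  // compile-time power of 2

    // The compiler can turn this modulo into a single AND with 127.
    std::size_t wrap_const(std::size_t i) { return i % kCap; }

    // cap is only known at runtime, so this generally needs a divide instruction.
    std::size_t wrap_runtime(std::size_t i, std::size_t cap) { return i % cap; }

    int main() {
        assert(wrap_const(130) == (130 & (kCap - 1)));  // both are 2
        assert(wrap_const(130) == wrap_runtime(130, kCap));
        std::cout << "ok\n";
    }
    ```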