Found 7 comments on HN
strlen · 2012-04-25 · Original thread
These are rather basic (producer/consumer queues) and, as some have pointed out, buggy. I'd highly suggest looking at Boost's threading library (an example of an object-oriented approach to threading that takes advantage of RAII -- much of it is now standard in C++11), Intel's Threading Building Blocks, Java's built-in concurrency utilities (the java.util.concurrent package), and Doug Lea's fork/join framework. A great book on the subject is Maurice Herlihy's The Art of Multiprocessor Programming:

http://www.amazon.com/The-Multiprocessor-Programming-Maurice...

The book is in Java, but C++11 has the required primitives (cross-platform compare-and-set for integral types, a defined memory model), so you could follow along in C++11.
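
For instance, a minimal C++11 sketch of a compare-and-set retry loop (std::atomic and compare_exchange_strong are standard C++11; the counter and function here are just illustrative, not from the book):

    #include <atomic>

    std::atomic<int> counter(0);   // illustrative shared counter

    // Classic CAS retry loop: read, compute, attempt to swap.
    // On failure, compare_exchange_strong refreshes `expected` with
    // the value some other thread actually stored, so we just retry.
    void add_ten() {
        int expected = counter.load();
        while (!counter.compare_exchange_strong(expected, expected + 10)) {
            // `expected` was refreshed; the desired value is recomputed
            // from it on the next loop iteration
        }
    }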

strlen · 2011-10-29 · Original thread
Well, you could also perform CAS using AtomicReference -- the examples in Maurice Herlihy's The Art of Multiprocessor Programming [1] and Brian Goetz's Java Concurrency in Practice [2] do that. So you don't really need to use sun.misc.Unsafe in your own code (of course, you need CAS to implement AtomicReference in the first place).

You're also completely correct about placement new: I am working on a cleaned-up version of this class; this was essentially a first pass to get myself more familiar with concurrency in C++0x. What complicates things a bit is that allocators (much like everything else in the STL) are meant to be used as class template arguments, which makes separate compilation impossible -- hence the need for an adapter from an STL-style allocator to a pure virtual class. Separate compilation is also why I initially made a void * version of this.

I have a much cleaned-up version in the works that will handle more than void *. There's an implementation, which I call ConcurrentLinkedQueueImpl, that handles just void * and is compiled separately -- and there's a generic version, ConcurrentLinkedQueue, that is specialized for void * (the specialization ends up just proxying the calls to ConcurrentLinkedQueueImpl), with the generic version (in turn) using ConcurrentLinkedQueue<void *> and placement new to hold any type.
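
Roughly, that layering looks like this (a condensed sketch -- the class names follow the comment, but the method signatures and storage scheme are invented for illustration):

    #include <new>

    // Separately compiled, type-erased core: knows only void *.
    class ConcurrentLinkedQueueImpl {
    public:
        void enqueue(void *item);   // defined in its own .cpp
        void *dequeue();            // nullptr when empty
    };

    // Generic facade over the void * core. Each T is copied into raw
    // storage with placement new; the impl only ever sees the pointer.
    template <typename T>
    class ConcurrentLinkedQueue {
    public:
        void enqueue(const T &value) {
            void *storage = ::operator new(sizeof(T));
            new (storage) T(value);          // placement new
            impl_.enqueue(storage);
        }
        bool dequeue(T &out) {
            void *storage = impl_.dequeue();
            if (!storage) return false;
            T *p = static_cast<T *>(storage);
            out = *p;
            p->~T();                         // explicit destructor call
            ::operator delete(storage);
            return true;
        }
    private:
        ConcurrentLinkedQueueImpl impl_;
    };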

Once again, the version posted there was a rough pass to get myself familiar with C++0x concurrency constructs and hazard pointers -- the code is fairly messy.

[1] http://www.amazon.com/Art-Multiprocessor-Programming-Maurice... [2] Everyone should read this book cover to cover -- http://jcip.net/

dkersten · 2010-12-31 · Original thread
I'm not quite sure what you mean, but synchronization without atomic operations is possible.

An example of mutual exclusion without any atomic operations, taken from the book "The Art of Multiprocessor Programming"[1], is (paraphrased) as follows:

Two threads, A and B, want to access some memory. Each thread has a flag.

When thread A wants to access the shared memory:

    Set flag A
    Wait for flag B to become unset
    Access memory
    Unset flag A
When thread B wants to access the shared memory:

    Set flag B
    While flag A is set {
        Unset flag B
        Wait for flag A to become unset
        Set flag B
    }
    Access memory
    Unset flag B
Obviously this isn't a general-purpose solution, but rather an easy-to-understand example demonstrating that atomic operations are not required.
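
For concreteness, here is the same protocol sketched in C++. The std::atomic<bool> flags are there only to keep the compiler and CPU from reordering the plain loads and stores (the protocol assumes sequential consistency, which is std::atomic's default); note that no read-modify-write operation -- no CAS, no test-and-set -- is used anywhere:

    #include <atomic>

    std::atomic<bool> flagA(false), flagB(false);

    void lockA() {                         // thread A: has priority
        flagA.store(true);
        while (flagB.load()) { /* spin */ }
    }
    void unlockA() { flagA.store(false); }

    void lockB() {                         // thread B: defers to A
        flagB.store(true);
        while (flagA.load()) {
            flagB.store(false);            // back off, let A through
            while (flagA.load()) { /* spin */ }
            flagB.store(true);             // try again
        }
    }
    void unlockB() { flagB.store(false); }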

[1] http://www.amazon.com/Art-Multiprocessor-Programming-Maurice...

dkersten · 2010-07-01 · Original thread
Maybe I'm naive, but I don't think the problem is what everyone always says -- that parallel programming is super hard and most programmers can't effectively program for multicore systems. Instead, I think the problem is one of education and training. How many programmers actually get decent training and education in multicore programming? I don't think it's very many. Of all the programmers complaining about how hard it is to write effective multicore programs, how many have actually read books and research papers on the subject? Again, I'd wager not many.

For example, in my four-year computer science degree, a lot of time was spent on high-level languages like Java, C, C++, Haskell, even Prolog. A good bit of time was spent on low-level details (two computer architecture modules, an "advanced" computer architecture module, an assembly programming module). Some time was spent on computability, complexity and parsing. Of course, we had a number of math modules too, including probability, statistics and linear algebra. A lot of time was spent on object-oriented programming, data structures and algorithms. A little bit of time was spent on other topics like network programming, databases, operating systems (we did cover deadlock and some concurrent programming material in great detail here) and AI. The rest was split between single (optional!) modules on niche areas: digital signal processing, cryptography, compression, 3D graphics and so on -- including concurrent programming.

We spent so much time learning how to program sequentially; why would anyone even suggest that we should be able to program for multicore systems? Were we given years of training in parallel programming languages, algorithms, data structures and libraries? Nope. Instead, the time was spent on sequential programming. Of course it's going to be hard to program for multicore!

Here's a car analogy:

It's like spending years learning to drive tractors and then expecting to be able to drive Formula One cars.

Some problems simply don't make much sense to try to solve with parallel programming. Some problems do, but we're not properly educated to spot them effectively, to know what data structures are appropriate and what algorithms we should use, never mind things like testing and debugging parallel software.

As an aside, if performance is what we're trying to achieve with multicore programming, then we need to be taught about efficient cache usage too! Misuse of the processor cache kills multicore performance, and popular software development principles like object-oriented programming actually work against the processor cache! Ugh.
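
A concrete example of the cache problem is false sharing: two threads write to different variables that happen to sit on the same cache line, and the line ping-pongs between cores even though the threads never touch each other's data. A minimal C++11 sketch of the fix, assuming (as is common but not universal) a 64-byte line:

    #include <atomic>

    // Packed together, these two counters share a cache line, so every
    // increment on one core invalidates the other core's copy.
    struct Packed {
        std::atomic<long> a;
        std::atomic<long> b;
    };

    // Padding each counter out to its own cache line removes the
    // ping-ponging, at the cost of some wasted memory.
    struct Padded {
        alignas(64) std::atomic<long> a;
        alignas(64) std::atomic<long> b;
    };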

There is plenty to help us effectively program for multicore. A short (and very incomplete) list:

    - Monitor-based concurrency (mutexes, semaphores, condition variables + threads); usually not the best option
    - Software Transactional Memory; replace manual synchronisation with transactional access to shared memory
    - Message passing; multiple threads/tasks communicating through immutable messages (being immutable means that synchronization concerns are minimized) -- see the sketch after this list
    - Dataflow languages/libraries where data "flows" through computational nodes and many streams of data can flow through in parallel (each node can be processing at the same time)
    - Tools and libraries such as OpenMP, OpenCL and Intel Threading Building Blocks
    - Languages with a focus on parallel programming, like Erlang and Clojure (I especially like Clojure's view on *time*)
    - Languages with built-in immutable data structures help make synchronisation less painful. Same goes for atomic data structures.
    - Entity Systems[1] are actually surprisingly suited for multicore programming, IMHO
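
To make the message-passing point concrete, here's a minimal C++11 channel sketch (the Channel class and its interface are invented for illustration; a lock is still used inside the channel, but producers and consumers only ever exchange pointers to immutable messages, so no further synchronization is needed once a message is received):

    #include <condition_variable>
    #include <memory>
    #include <mutex>
    #include <queue>

    template <typename T>
    class Channel {
    public:
        // Senders hand over a pointer to a const (immutable) message.
        void send(std::shared_ptr<const T> msg) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
            cv_.notify_one();
        }
        // Receivers block until a message arrives.
        std::shared_ptr<const T> receive() {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !q_.empty(); });
            auto msg = q_.front();
            q_.pop();
            return msg;
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::shared_ptr<const T>> q_;
    };
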
I would suggest everybody pick up the book The Art of Multiprocessor Programming[2] and get an introduction to the basic concepts. After that it depends on what you want to do, but if you're a C++ programmer, I would suggest the Intel Threading Building Blocks book[3].

[1] http://t-machine.org/index.php/2007/09/03/entity-systems-are...

[2] http://www.amazon.com/Art-Multiprocessor-Programming-Maurice...

[3] http://www.amazon.com/Intel-Threading-Building-Blocks-Parall...

kmavm · 2010-05-16 · Original thread
If you are into wait-free and lock-free data structures, you owe it to yourself to get Herlihy and Shavit's "Art of Multiprocessor Programming."

http://www.amazon.com/Art-Multiprocessor-Programming-Maurice...

Not only does it provide a well-debugged, well-written collection of wait-free data structures; it teaches you to think about concurrency in a structured way. The mutual exclusion chapter doesn't teach you which pthread routines to use, but instead tells you how to implement mutual exclusion, and the trade-offs inherent in, e.g., spin locks vs. queueing locks. A significant bonus is Maurice Herlihy's distinctive prose voice: correct, concise, and funny in that order. It is the best computer book I've read recently, and the one I recommend to all colleagues who are trying to take their concurrency chops up to the next level.
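
As a taste of that chapter, here's a test-and-test-and-set spin lock sketched in C++11 (my sketch, not code from the book): spinning on a plain read keeps the cache line in shared state until the lock looks free, which is one side of the spin-lock-vs-queueing-lock trade-off the chapter analyzes.

    #include <atomic>

    class TTASLock {
        std::atomic<bool> locked{false};
    public:
        void lock() {
            for (;;) {
                // Test: spin on a cheap read until the lock looks free.
                while (locked.load(std::memory_order_relaxed)) { /* spin */ }
                // Test-and-set: try to grab it; retry if we lost the race.
                if (!locked.exchange(true, std::memory_order_acquire))
                    return;
            }
        }
        void unlock() { locked.store(false, std::memory_order_release); }
    };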

(I took a course from Herlihy in '99, but am otherwise unaffiliated.)

gtani · 2010-05-06 · Original thread
Since then, Real World Haskell (freely available), Hutton's Programming in Haskell, and maybe another have come out:

http://book.realworldhaskell.org/

http://www.amazon.com/Programming-in-Haskell-ebook/dp/B001FS...

also: Herlihy and Shavit, The Art of Multiprocessor Programming

http://www.amazon.com/Art-Multiprocessor-Programming-Maurice...
