Readers/writer locks
Readers/writer locks are used for exactly what their name implies: multiple readers can be using a resource, with no writers, or one writer can be using the resource with no other writers or readers.
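Here's a minimal sketch of the idea using the POSIX pthread_rwlock_*() functions (error checking omitted for brevity):

    #include <pthread.h>
    #include <stdio.h>

    /* A readers/writer lock protecting a shared value. */
    static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
    static int shared_value;

    /* Any number of readers can hold the lock at once, provided no writer does. */
    void *reader (void *arg)
    {
        pthread_rwlock_rdlock (&rwlock);
        printf ("reader sees %d\n", shared_value);
        pthread_rwlock_unlock (&rwlock);
        return NULL;
    }

    /* A writer gets exclusive access; no other readers or writers. */
    void *writer (void *arg)
    {
        pthread_rwlock_wrlock (&rwlock);
        shared_value++;
        pthread_rwlock_unlock (&rwlock);
        return NULL;
    }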
Sleepon locks
Another common situation that occurs in multithreaded programs is the need for a thread to wait until something happens. This something could be anything! It could be the fact that data is now available from a device, or that a conveyor belt has now moved to the proper position, or that data has been committed to disk, or whatever. Another twist to throw in here is that several threads may need to wait for the given event.
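As a sketch of the pattern (assuming QNX Neutrino's pthread_sleepon_*() extensions), one thread waits on the address of a flag while another thread sets the flag and wakes it:

    #include <pthread.h>

    /* The flag the waiter checks; its address also serves as the "key"
       that matches waiters with wakers. */
    volatile int data_ready = 0;

    void *consumer (void *arg)
    {
        pthread_sleepon_lock ();                 /* take the library's single mutex */
        while (!data_ready) {
            pthread_sleepon_wait (&data_ready);  /* release, sleep, then reacquire */
        }
        data_ready = 0;                          /* consume the event */
        pthread_sleepon_unlock ();
        return NULL;
    }

    void *producer (void *arg)
    {
        pthread_sleepon_lock ();
        data_ready = 1;
        pthread_sleepon_signal (&data_ready);    /* wake one thread waiting on this address */
        pthread_sleepon_unlock ();
        return NULL;
    }

If several threads are waiting on the same event, pthread_sleepon_broadcast() wakes them all instead of just one.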
Condition variables
Condition variables (or condvars) are remarkably similar to the sleepon locks we just saw above. In fact, sleepon locks are built on top of condvars, which is why we had a state of CONDVAR in the explanation table for the sleepon example. It bears repeating that the pthread_cond_wait() function releases the mutex, waits, and then reacquires the mutex, just like the pthread_sleepon_wait() function did.
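For comparison with the sleepon sketch above, here's the same producer/consumer pattern written directly with a condvar; the mutex and condition variable are now explicit rather than being supplied by the sleepon library:

    #include <pthread.h>

    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
    static volatile int data_ready = 0;

    void *consumer (void *arg)
    {
        pthread_mutex_lock (&mutex);
        while (!data_ready) {
            pthread_cond_wait (&cond, &mutex);   /* releases mutex, waits, reacquires */
        }
        data_ready = 0;
        pthread_mutex_unlock (&mutex);
        return NULL;
    }

    void *producer (void *arg)
    {
        pthread_mutex_lock (&mutex);
        data_ready = 1;
        pthread_cond_signal (&cond);             /* wake one waiter */
        pthread_mutex_unlock (&mutex);
        return NULL;
    }

As with the sleepon version, the wait is wrapped in a while loop because the condition must be rechecked once the thread wakes up and reacquires the mutex.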
Additional OS services

QNX Neutrino lets you do something else that's elegant. POSIX says that a mutex must operate between threads in the same process, and lets a conforming implementation extend that. QNX Neutrino extends this by allowing a mutex to operate between threads in different processes. To understand why this works, recall that there really are two parts to what's viewed as the operating system—the kernel, which deals with scheduling, and the process manager, which worries about memory protection and processes (among other things). A mutex is really just a synchronization object used between threads. Since the kernel worries only about threads, it really doesn't care that the threads are operating in different processes—this is an issue for the process manager.
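A minimal sketch of what this looks like in practice (the shared-memory object name "/demo_mutex" is just an example, and error checking is omitted): the mutex is placed in shared memory and marked PTHREAD_PROCESS_SHARED, so a second process that maps the same object can lock the very same mutex.

    #include <fcntl.h>
    #include <pthread.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main (void)
    {
        /* Put the mutex itself into a shared-memory object. */
        int fd = shm_open ("/demo_mutex", O_RDWR | O_CREAT, 0666);
        ftruncate (fd, sizeof (pthread_mutex_t));
        pthread_mutex_t *mtx = mmap (NULL, sizeof (*mtx),
                                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        /* Ask for a mutex that operates between threads in different processes. */
        pthread_mutexattr_t attr;
        pthread_mutexattr_init (&attr);
        pthread_mutexattr_setpshared (&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init (mtx, &attr);

        pthread_mutex_lock (mtx);
        /* ... touch the resource shared with the other process ... */
        pthread_mutex_unlock (mtx);
        return 0;
    }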
Pools of threads
Another thing that QNX Neutrino has added is the concept of thread pools. You'll often notice in your programs that you want to be able to run a certain number of threads, but you also want to be able to control the behavior of those threads within certain limits. For example, in a server you may decide that initially just one thread should be blocked, waiting for a message from a client. When that thread gets a message and is off servicing a request, you may decide that it would be a good idea to create another thread, so that it could be blocked waiting in case another request arrived. This second thread would then be available to handle that request. And so on. After a while, when the requests have been serviced, you'd have a large number of threads sitting around, waiting for further requests. In order to conserve resources, you may decide to kill off some of those extra threads.
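QNX Neutrino wraps this behavior up in its thread pool API. The following is a minimal sketch using thread_pool_create() and thread_pool_start() from <sys/dispatch.h> with the standard dispatch callbacks; the water-mark numbers are just illustrative values:

    /* Tell <sys/dispatch.h> what type the per-thread context is. */
    #define THREAD_POOL_PARAM_T dispatch_context_t

    #include <string.h>
    #include <sys/dispatch.h>

    int main (void)
    {
        thread_pool_attr_t pool_attr;
        thread_pool_t *tpp;

        /* The event source the pool's threads block on; normally you'd
           attach a resource manager or message handler to it first. */
        dispatch_t *dpp = dispatch_create ();

        memset (&pool_attr, 0, sizeof (pool_attr));
        pool_attr.handle        = dpp;
        pool_attr.context_alloc = dispatch_context_alloc;
        pool_attr.block_func    = dispatch_block;
        pool_attr.unblock_func  = dispatch_unblock;
        pool_attr.handler_func  = dispatch_handler;
        pool_attr.context_free  = dispatch_context_free;

        pool_attr.lo_water  = 2;    /* keep at least 2 threads blocked, waiting */
        pool_attr.increment = 1;    /* create 1 more whenever we dip below that */
        pool_attr.hi_water  = 4;    /* kill extras once more than 4 are blocked */
        pool_attr.maximum   = 50;   /* never have more than 50 threads in total */

        tpp = thread_pool_create (&pool_attr, POOL_FLAG_EXIT_SELF);
        thread_pool_start (tpp);    /* doesn't return when POOL_FLAG_EXIT_SELF is set */
        return 0;
    }

The lo_water, increment, hi_water, and maximum members express exactly the policy described above: how many threads should sit blocked waiting for requests, how many to create when that number dips too low, and when to kill off the extras.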