
Guide to DECthreads



If the thread's cancelability state is enabled, use the pthread_testcancel() routine to request the delivery of any pending cancelation request. This routine enables the program to permit cancelation to occur at places where it might not otherwise be permitted, and it is especially useful within very long loops to ensure that cancelation requests are noticed within a reasonable time.
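For example, a compute-bound loop that calls no other cancelation points can insert an explicit one on each iteration. The following is a minimal sketch; the work routine do_one_unit_of_work() is hypothetical:

#include <pthread.h> 
 
extern void do_one_unit_of_work (int i);   /* hypothetical work routine */ 
 
void *worker (void *arg) 
{ 
    int     i; 
 
    for (i = 0; i < 1000000; i++) { 
        /* Long-running work that reaches no cancelation points.  */ 
        do_one_unit_of_work (i); 
 
        /* Deliver any pending cancelation request now, so that a 
         * cancel posted against this thread is noticed within one 
         * iteration rather than after the entire loop.  */ 
        pthread_testcancel (); 
    } 
    return arg; 
} 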

If a thread's cancelability state is disabled, the thread cannot be terminated by any cancelation request. This means that the thread could wait indefinitely if it does not come to a normal conclusion; therefore, exercise care when disabling cancelability.


Note

If the cancelability state is disabled, the thread cannot be canceled regardless of the cancelability type. Setting cancelability type to deferred or asynchronous is relevant only when the thread's cancelability state is enabled.

2.3.7.3 Cancelation Points

A cancelation point is a routine that delivers a posted cancelation request to that request's target thread. The POSIX.1c standard specifies routines that are cancelation points in a multithreaded context.

The following routines in the DECthreads pthread interface are cancelation points:

The following routines in the DECthreads tis interface are cancelation points:

Other routines that are also cancelation points are described in the operating system-specific appendixes of this guide. Refer to the following topics on thread cancelability for system services:

2.3.7.4 Cleanup from Synchronous Cancelation

When a cancelation request is delivered to a thread, the thread could be holding some resources, such as locked mutexes or allocated memory. Your program must release these resources before the thread terminates.

DECthreads provides two equivalent mechanisms that can perform such cleanup during cancelation, as follows:
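For example, one such mechanism is the cleanup-handler pair pthread_cleanup_push() and pthread_cleanup_pop(). The following is a minimal sketch, assuming a mutex that must be released if the thread is canceled while holding it:

#include <pthread.h> 
 
static pthread_mutex_t data_mutex = PTHREAD_MUTEX_INITIALIZER; 
 
/* Cleanup handler: release the mutex if the thread is canceled 
 * (or exits) while the handler is in effect.  */ 
static void unlock_data_mutex (void *arg) 
{ 
    pthread_mutex_unlock ((pthread_mutex_t *)arg); 
} 
 
void *worker (void *arg) 
{ 
    pthread_mutex_lock (&data_mutex); 
    pthread_cleanup_push (unlock_data_mutex, &data_mutex); 
 
    /* ... code that may reach a cancelation point while the 
     * mutex is held ...  */ 
 
    /* Pop the handler; the nonzero argument also executes it, 
     * so the mutex is unlocked on the normal path as well.  */ 
    pthread_cleanup_pop (1); 
    return arg; 
} 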

2.3.7.5 Cleanup from Asynchronous Cancelation

Because it is impossible to predict exactly when an asynchronous cancelation request will be delivered, it is extremely difficult for a program to recover properly. For this reason, an asynchronous cancelability type should be set only within regions of code that do not need to clean up in any way, such as:

While a thread's cancelability type is asynchronous, do not call any routine unless it is explicitly documented as "safe for asynchronous cancelation."


Note

None of the general run-time routines is safe for asynchronous cancelation, nor is any DECthreads routine except pthread_setcanceltype().

For additional information about accomplishing asynchronous cancelation for your platform, see Section A.4, Section B.8, and Section C.7.

2.3.7.6 Example of Thread Cancelation Code

Example 2-1 shows a thread control and cancelation example.

Example 2-1 pthread Cancel


/* 
 * Pthread Cancel Example 
 */ 
 
/* 
 * Outermost cancelation state 
 */ 
{ 
 . 
 . 
 . 
int     s, outer_c_s, inner_c_s; 
 . 
 . 
 . 
/* Disable cancelation, saving the previous setting.    */ 
 
s = pthread_setcancelstate (PTHREAD_CANCEL_DISABLE, &outer_c_s); 
if(s == EINVAL) 
   printf("Invalid Argument!\n"); 
else if(s == 0) 
         . 
         . 
         . 
        /* Now cancelation is disabled.    */ 
 . 
 . 
 . 
/* Enable cancelation.  */ 
 
       { 
        . 
        . 
        . 
        s = pthread_setcancelstate (PTHREAD_CANCEL_ENABLE, &inner_c_s); 
        if(s == 0) 
           . 
           . 
           . 
           /* Now cancelation is enabled.  */ 
           . 
           . 
           . 
           /* Enable asynchronous cancelation this time.  */ 
 
               { 
                . 
                . 
                . 
                /* Enable asynchronous cancelation.  */ 
 
                int   outerasync_c_s, innerasync_c_s; 
                . 
                . 
                . 
                s = pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 
                                           &outerasync_c_s); 
                if(s == 0) 
                   . 
                   . 
                   . 
                   /* Now asynchronous cancelation is enabled.  */ 
                   . 
                   . 
                   . 
                    /* Now restore the previous cancelation type (by 
                     * reinstating the type saved in outerasync_c_s). 
                     */ 
                   s = pthread_setcanceltype (outerasync_c_s, 
                                              &innerasync_c_s); 
                   if(s == 0) 
                      . 
                      . 
                      . 
                      /* Now asynchronous cancelation is disabled, 
                       * but synchronous cancelation is still enabled. 
                       */ 
                } 
        . 
        . 
        . 
       } 
 . 
 . 
 . 
/* Restore the original cancelation state.    */ 
 
s = pthread_setcancelstate (outer_c_s, &inner_c_s); 
if(s == 0) 
   . 
   . 
   . 
   /* The original (outermost) cancelation state is now reinstated. */ 
} 

2.4 Synchronization Objects

In a multithreaded program, you must use synchronization objects whenever there is a possibility of conflict in accessing shared data. The following sections discuss two kinds of DECthreads synchronization objects: mutexes and condition variables.

2.4.1 Mutexes

A mutex (mutual exclusion object) is used by multiple threads to ensure the integrity of a shared resource that they access, most commonly shared data, by allowing only one thread to access it at a time.

A mutex has two states, locked and unlocked. For each piece of shared data, all threads accessing that data must use the same mutex: each thread locks the mutex before it accesses the shared data and unlocks the mutex when it is finished accessing that data. If the mutex is locked by another thread, the thread requesting the lock either waits for the mutex to be unlocked or returns, depending on the lock routine called (see Figure 2-4).

Figure 2-4 Only One Thread Can Lock a Mutex



Each mutex must be initialized before use. DECthreads supports static initialization at compile time, using one of the macros provided in the pthread.h header file, as well as dynamic initialization at run time by calling pthread_mutex_init(). This routine accepts an attributes object, with which you can specify the mutex type. The types of mutexes are described in the following sections.
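For example, both forms of initialization might look as follows; this is a minimal sketch using the standard PTHREAD_MUTEX_INITIALIZER macro and default attributes:

#include <pthread.h> 
 
/* Static initialization at compile time, using the macro from pthread.h.  */ 
static pthread_mutex_t count_mutex = PTHREAD_MUTEX_INITIALIZER; 
 
/* Dynamic initialization at run time; a NULL attributes argument 
 * requests the default (normal) mutex type.  */ 
pthread_mutex_t queue_mutex; 
 
void init_queue_lock (void) 
{ 
    int     status; 
 
    status = pthread_mutex_init (&queue_mutex, NULL); 
    if (status != 0) { 
        /* Handle the error; status holds an errno-style code.  */ 
    } 
} 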

2.4.1.1 Normal Mutex

A normal mutex (the default) is locked exactly once by a thread. If a thread tries to lock the mutex again without first unlocking it, the thread waits for itself to release the lock and deadlocks.

This is the most efficient form of mutex. When using interface and function inlining (optional), you can often lock and unlock a normal mutex without a call to DECthreads.

2.4.1.2 Recursive Mutex

A recursive mutex can be locked more than once by a given thread without causing a deadlock. The thread must call the pthread_mutex_unlock() routine the same number of times that it called the pthread_mutex_lock() routine before another thread can lock the mutex.

Operations on recursive mutexes take place with respect to a mutex owner. When a thread successfully locks a recursive mutex, it owns that mutex and the lock count is set to 1. Any other thread attempting to lock the mutex blocks until the mutex becomes unlocked. If the owner of the mutex attempts to lock the mutex again, the lock count is incremented, and the thread continues running.

When an owner unlocks a recursive mutex, the lock count is decremented. The mutex remains locked and owned until the count reaches zero. It is an error for any thread other than the owner to attempt to unlock the mutex.

A recursive mutex is useful when a thread requires exclusive access to a piece of data, but must call another routine (or itself) that also requires exclusive access to the data. A recursive mutex allows nested attempts to lock the mutex to succeed rather than deadlock.
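As an illustration, a routine might lock a recursive mutex and then call a helper that locks it again. The following is a minimal sketch; it assumes data_mutex has already been created with the recursive type (see Section 2.4.1.5):

#include <pthread.h> 
 
extern pthread_mutex_t data_mutex;      /* assumed initialized as a recursive mutex */ 
 
static void update_detail (void) 
{ 
    pthread_mutex_lock (&data_mutex);   /* owner relocks: count becomes 2 */ 
    /* ... modify part of the shared data ...  */ 
    pthread_mutex_unlock (&data_mutex); /* count back to 1; still owned */ 
} 
 
void update_all (void) 
{ 
    pthread_mutex_lock (&data_mutex);   /* first lock: count is 1 */ 
    update_detail ();                   /* nested lock succeeds instead of deadlocking */ 
    /* ... modify the rest of the shared data ...  */ 
    pthread_mutex_unlock (&data_mutex); /* count reaches 0; mutex is released */ 
} 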

This type of mutex is called "recursive," because it allows you a capability not permitted by a normal (default) mutex. However, its use requires more careful programming. For instance, a recursive mutex should never be used with condition variables, because the unlock performed for a pthread_cond_wait() or pthread_cond_timedwait() might not actually release the mutex. In that case, no other thread can satisfy the condition of the predicate, and the thread waits indefinitely. See Section 2.4.2 for information on the condition variable wait and timed wait routines.

2.4.1.3 Errorcheck Mutex

An errorcheck mutex is locked exactly once by a thread, like a normal mutex. If a thread tries to lock the mutex again without first unlocking it, the thread receives an error. If a thread other than the owner tries to unlock an errorcheck mutex, an error is returned. Thus, errorcheck mutexes are more informative than normal mutexes because normal mutexes deadlock in such a case, leaving you to determine why the thread no longer executes. Errorcheck mutexes are useful during development and debugging. Errorcheck mutexes can be replaced with normal mutexes when the code is put into production use, or left to provide the additional checking.

Errorcheck mutexes are always slower than normal mutexes. They cannot be locked without generating a call into DECthreads, and they do more internal tracking.

2.4.1.4 Mutex Operations

To lock a mutex, use one of the following routines, depending on what you want to happen if the mutex is locked:

When a thread is finished accessing a piece of shared data, it unlocks the associated mutex by calling the pthread_mutex_unlock() routine. If other threads are waiting on the mutex, one is placed in the ready state. If more than one thread is waiting on the mutex, the scheduling policy (see Section 2.3.2.2) and the scheduling priority (see Section 2.3.2.3) determine which thread is readied, and the next running thread that requests it locks the mutex.
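A minimal sketch of the blocking and nonblocking lock operations follows; the shared counter is hypothetical:

#include <pthread.h> 
 
static pthread_mutex_t count_mutex = PTHREAD_MUTEX_INITIALIZER; 
static int shared_count = 0; 
 
void add_to_count (int n) 
{ 
    pthread_mutex_lock (&count_mutex);   /* wait if another thread holds the lock */ 
    shared_count += n; 
    pthread_mutex_unlock (&count_mutex); /* may ready a waiting thread */ 
} 
 
int try_add_to_count (int n) 
{ 
    /* Return immediately, instead of waiting, if the mutex is locked.  */ 
    if (pthread_mutex_trylock (&count_mutex) != 0) 
        return 0;                        /* mutex busy; caller can do other work and retry */ 
    shared_count += n; 
    pthread_mutex_unlock (&count_mutex); 
    return 1; 
} 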

The mutex is not automatically granted to the first waiter. If the unlocking thread attempts to relock the mutex before the first waiter gets a chance to run, the unlocking thread will succeed in relocking the mutex, and the first waiter may be forced to reblock.

You can destroy a mutex---that is, reclaim its storage---by calling the pthread_mutex_destroy() routine. Use this routine only after the mutex is no longer needed by any thread. It is invalid to attempt to destroy a mutex while it is locked.


Warning

DECthreads does not currently detect deadlock conditions involving more than one mutex, but may in the future. Never write code that depends upon DECthreads not reporting a particular error condition.

2.4.1.5 Mutex Attributes

A mutex attributes object allows you to specify values other than the defaults for mutex attributes when you initialize a mutex with the pthread_mutex_init() routine.

The mutex type attribute specifies whether a mutex is normal, recursive, or errorcheck. You can set the mutex type attribute by calling the pthread_mutexattr_settype_np() routine.

If you do not use a mutex attributes object to select a mutex type, calling the pthread_mutex_init() routine initializes a normal mutex by default.
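For example, an attributes object might be used to create an errorcheck mutex. The following is a minimal sketch; the type constant PTHREAD_MUTEX_ERRORCHECK_NP is an assumption about the names defined in pthread.h:

#include <pthread.h> 
 
pthread_mutex_t check_mutex; 
 
int make_errorcheck_mutex (void) 
{ 
    pthread_mutexattr_t attr; 
    int     status; 
 
    status = pthread_mutexattr_init (&attr); 
    if (status != 0) 
        return status; 
 
    /* Select the errorcheck type instead of the default (normal) type.  */ 
    status = pthread_mutexattr_settype_np (&attr, PTHREAD_MUTEX_ERRORCHECK_NP); 
    if (status == 0) 
        status = pthread_mutex_init (&check_mutex, &attr); 
 
    /* The attributes object is no longer needed once the mutex exists.  */ 
    pthread_mutexattr_destroy (&attr); 
    return status; 
} 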

2.4.2 Condition Variables

A condition variable is a synchronization object used in conjunction with a mutex. It allows a thread to block its own execution until some shared data object reaches a particular state. A mutex controls access to shared data; a condition variable allows threads to wait for that data to enter a defined state.

The state is defined by a Boolean expression called a predicate. The predicate may be a Boolean variable in the shared data, or it may be indirect: for example, a test of whether a counter has reached a certain value or whether a queue is empty.

Each predicate should have its own unique condition variable. Sharing a single condition variable between more than one predicate can introduce inefficiency or errors unless you use extreme care.

Cooperating threads test the predicate and wait on the condition variable if the predicate is not in the desired state. For example, one thread in a program produces work-to-do packets and another thread consumes these packets (does the work). If there are no work-to-do packets when the consumer thread checks, that thread waits on a work-to-do condition variable. When the producer thread produces a packet, it signals the work-to-do condition variable.

You must associate a mutex with a condition variable.

A thread uses a condition variable as follows:

  1. A thread locks a mutex for some shared data and then tests the relevant predicate. If it is not in the proper state, the thread waits on a condition variable associated with the predicate. Waiting on the condition variable automatically unlocks the mutex. It is essential that the mutex be unlocked, because another thread needs to acquire the mutex in order to put the data in the state required by the waiting thread.
  2. When the thread that acquires the mutex puts the data in the appropriate state, it wakes a waiting thread by signaling the condition variable.
  3. One thread comes out of its wait state with the mutex locked (the condition wait relocks the mutex before returning to the thread). Other threads waiting on the condition variable remain blocked.

It is important to wait on the condition variable and evaluate the predicate in a while loop. This ensures that the program rechecks the predicate each time it returns from the condition wait: because threads execute asynchronously, another thread might consume the state before the awakened thread can run. The loop also protects against spurious wake-ups and provides clearer program documentation.

For example, a thread A may need to wait for a thread B to finish a task X before thread A proceeds to execute a task Y. Thread B can tell thread A that it has finished task X by putting a TRUE or FALSE value in a shared variable (the predicate). When thread A is ready to execute task Y, it looks at the shared variable to see if thread B is finished (see Figure 2-5).

Figure 2-5 Thread A Waits on Condition Ready



First, thread A locks the mutex named mutex_ready that is associated with the shared variable named ready. Then it reads the value in ready. This test is called the predicate. If the predicate indicates that thread B has finished task X, then thread A can unlock the mutex and proceed with task Y. If the predicate indicates that thread B has not yet finished task X, however, then thread A waits for the predicate to change by calling the pthread_cond_wait() routine. This automatically unlocks the mutex, allowing thread B to lock the mutex when it has finished task X. Thread B updates the shared data (predicate) to the state thread A is waiting for and signals the condition variable by calling the pthread_cond_signal() routine (see Figure 2-6).

Figure 2-6 Thread B Signals Condition Ready



Thread B releases its lock on the shared variable's mutex. As a result of the signal, thread A wakes up, implicitly regaining its lock on the condition variable's mutex. It then verifies that the predicate is in the correct state, and proceeds to execute task Y (see Figure 2-7).

Figure 2-7 Thread A Wakes and Proceeds



Note that although the condition variable is used for communication among threads, the communication is anonymous. Thread B does not necessarily know that thread A is waiting on the condition variable that thread B signals, and thread A does not know that it was thread B that awakened it from its wait on the condition variable.
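A minimal sketch of this exchange follows, using the predicate ready and its mutex mutex_ready from the figures; the condition variable name cond_ready and the use of the standard static initializers are assumptions:

#include <pthread.h> 
 
static pthread_mutex_t mutex_ready = PTHREAD_MUTEX_INITIALIZER; 
static pthread_cond_t  cond_ready  = PTHREAD_COND_INITIALIZER; 
static int ready = 0;                  /* the predicate: has thread B finished task X? */ 
 
/* Thread A: wait until thread B has finished task X, then do task Y.  */ 
void *thread_a (void *arg) 
{ 
    pthread_mutex_lock (&mutex_ready); 
    while (!ready)                     /* retest the predicate after every wake-up */ 
        pthread_cond_wait (&cond_ready, &mutex_ready);  /* unlocks, waits, relocks */ 
    pthread_mutex_unlock (&mutex_ready); 
 
    /* ... task Y ...  */ 
    return arg; 
} 
 
/* Thread B: finish task X, then tell thread A.  */ 
void *thread_b (void *arg) 
{ 
    /* ... task X ...  */ 
 
    pthread_mutex_lock (&mutex_ready); 
    ready = 1;                         /* put the shared data in the awaited state */ 
    pthread_cond_signal (&cond_ready); /* wake a thread waiting on the condition */ 
    pthread_mutex_unlock (&mutex_ready); 
    return arg; 
} 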

Use the pthread_cond_init() routine to initialize a condition variable. To create condition variables as part of your program's one-time initialization code, see Section 3.7. You can also statically initialize condition variables using one of the macros provided in the pthread.h header file.

Use the pthread_cond_wait() routine to cause a thread to wait until the condition is signaled or broadcasted. This routine specifies a condition variable and a mutex that you have locked. If you have not locked the mutex, the results of pthread_cond_wait() are unpredictable.

The pthread_cond_wait() routine automatically unlocks the mutex and causes the calling thread to wait on the condition variable until another thread calls one of the following routines:

If a thread signals or broadcasts on a condition variable and there are no threads waiting at that time, the signal or broadcast has no effect. The next thread to wait on that condition variable blocks until the next signal or broadcast. (Alternatively, the nonportable pthread_cond_signal_int_np() routine creates a pending wake condition, which causes the next wait on the condition variable to complete immediately.)

If you want to limit the time that a thread waits for a condition to be signaled or broadcasted, use the pthread_cond_timedwait() routine. This routine specifies the condition variable, mutex, and absolute time at which the wait should expire if the condition variable has not been signaled or broadcasted.
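A minimal sketch of a bounded wait follows, reusing the names from the previous sketch; the DECthreads convenience routine pthread_get_expiration_np(), which converts a relative interval to an absolute expiration time, is assumed to be available:

#include <errno.h> 
#include <time.h> 
#include <pthread.h> 
 
extern pthread_mutex_t mutex_ready; 
extern pthread_cond_t  cond_ready; 
extern int ready; 
 
/* Wait for the predicate, but give up after about two seconds.  */ 
int wait_for_ready_with_timeout (void) 
{ 
    struct timespec delta = { 2, 0 };   /* relative interval: 2 seconds */ 
    struct timespec abstime; 
    int     status = 0; 
 
    /* Convert the interval to the absolute time at which the wait 
     * should expire.  */ 
    pthread_get_expiration_np (&delta, &abstime); 
 
    pthread_mutex_lock (&mutex_ready); 
    while (!ready && status != ETIMEDOUT) 
        status = pthread_cond_timedwait (&cond_ready, &mutex_ready, &abstime); 
    pthread_mutex_unlock (&mutex_ready); 
 
    return ready;                       /* nonzero if the predicate was satisfied */ 
} 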

You can destroy a condition variable and reclaim its storage by calling the pthread_cond_destroy() routine. Use this routine only after the condition variable is no longer needed by any thread. A condition variable cannot be destroyed while one or more threads are waiting on it.

2.4.3 Condition Variable Attributes

Currently, no attributes affecting condition variables are defined. You cannot change any attributes in the condition variable attributes object.

The pthread_condattr_init() and pthread_condattr_destroy() routines are provided for future expandability of the DECthreads pthread interface and to conform with the POSIX.1c standard. These routines serve no useful function, because there are no pthread_condattr_set*() type routines available at this time.

2.5 Thread-Specific Data

Each thread can use an area of DECthreads-private memory where DECthreads stores thread-specific data objects. Use this memory to associate arbitrary data with a thread's context. Consider this as the ability to add user-specified fields to the current thread's context or as global variables that have private values in each thread.

A thread-specific data key is shared by all threads within the process---each thread has its own unique value for that shared key.

Use the following routines to create and access thread-specific data:
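For example, a per-thread context pointer might be managed as follows. This is a minimal sketch using the standard key routines pthread_key_create(), pthread_setspecific(), and pthread_getspecific(); the key and routine names are hypothetical:

#include <pthread.h> 
#include <stdlib.h> 
 
static pthread_key_t context_key;       /* one key, shared by all threads */ 
 
/* Destructor: runs at thread exit for each thread whose value is non-NULL.  */ 
static void free_context (void *value) 
{ 
    free (value); 
} 
 
/* Call once, before any thread uses the key.  */ 
void init_context_key (void) 
{ 
    pthread_key_create (&context_key, free_context); 
} 
 
/* Each thread stores and retrieves its own private value for the shared key.  */ 
void set_my_context (size_t size) 
{ 
    pthread_setspecific (context_key, calloc (1, size)); 
} 
 
void *get_my_context (void) 
{ 
    return pthread_getspecific (context_key); 
} 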


Chapter 3
Programming with Threads

This chapter discusses programming disciplines that you should follow as you use DECthreads routines in your programs. Pertinent examples include programming for asynchronous execution, choosing a synchronization mechanism, avoiding priority scheduling problems, making code thread safe, and working with code that is not thread safe.

3.1 Designing Code for Asynchronous Execution

When programming with threads, always keep in mind that the execution of a thread is inherently asynchronous with respect to other threads running in the system (or in the process).

In short, there is no guarantee of when a thread will start. It can start immediately or not for a significant period of time, depending on the priority of the thread in relation to other threads that are currently running. When a thread will start can also depend on the behavior of other processes, as well as on other threaded subsystems within the current process.

You cannot depend upon any synchronization between two threads unless you explicitly code that synchronization into your program using one of the following:

Some implementations of threads operate by context-switching threads in user mode, within a single operating system process. Context switches between such threads occur only at relatively determinate times, such as when you make a blocking call to the threads library or when a timeslice interrupt occurs. This type of threading library might be termed "slightly asynchronous," because such a library tolerates many classes of errors in your application.

Systems that support kernel threads are less "forgiving," because context switches between threads can occur more frequently and for less deterministic reasons. Systems that allow threads within a single process to run simultaneously on multiple processors are even less forgiving.

The following subsections present examples of programming errors.

3.1.1 Avoid Passing Stack Local Data

Avoid creating a thread with an argument that points to stack local data, or to global or static data that is serially reused for a sequence of threads.

Specifically, the thread started with a pointer to stack local data may not start until the creating thread's routine has returned, and the storage may have been changed by other calls. The thread started with a pointer to global or static data may not start until the storage has been reused to create another thread.
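A minimal sketch of the safe alternative follows: give the new thread storage that it owns, rather than a pointer into the creator's stack frame. The routine names are hypothetical:

#include <errno.h> 
#include <pthread.h> 
#include <stdlib.h> 
 
extern void *worker (void *arg);        /* thread start routine, assumed elsewhere */ 
 
int start_worker (pthread_t *thread, int request_id) 
{ 
    int     *arg; 
 
    /* Wrong: passing &request_id would hand the new thread a pointer into 
     * this routine's stack frame, which may be reused before the thread 
     * runs.  Instead, allocate storage that the thread itself frees.  */ 
    arg = malloc (sizeof *arg); 
    if (arg == NULL) 
        return ENOMEM; 
    *arg = request_id; 
 
    return pthread_create (thread, NULL, worker, arg); 
} 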

3.1.2 Initialize DECthreads Objects Before Thread Creation

Initialize DECthreads objects (such as mutexes) or global data that a thread uses before creating that thread.

On slightly asynchronous systems, neglecting to do so is often harmless, because the created thread probably will not run until the creator blocks; thus, the error can go undetected initially. On another system (or in a later release of the operating system) that supports kernel threading, the created thread may run immediately, before the data has been initialized. This can lead to failures that are difficult to detect. Note that a thread may run to completion before the call that created it returns to the creator. The system load may also affect the timing.
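A minimal sketch of the safe ordering follows: complete all initialization that the new thread depends on before calling pthread_create(). The names are hypothetical:

#include <pthread.h> 
 
pthread_mutex_t work_mutex; 
int work_count; 
 
extern void *consumer (void *arg);      /* uses work_mutex and work_count */ 
 
int start_consumer (pthread_t *thread) 
{ 
    int     status; 
 
    /* Initialize everything the new thread touches before creating it; 
     * on a kernel-threads system the thread may run (and even finish) 
     * before pthread_create() returns.  */ 
    status = pthread_mutex_init (&work_mutex, NULL); 
    if (status != 0) 
        return status; 
    work_count = 0; 
 
    return pthread_create (thread, NULL, consumer, NULL); 
} 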



Copyright © Digital Equipment Corporation 1996. All Rights Reserved.
