
Guide to DECthreads



After your program creates a thread attributes object, it can reuse that object for each new thread that it creates. For details about creating and deleting a thread attributes object, see the descriptions of the pthread_attr_create() and pthread_attr_delete() routines in Part 2.

Using the thread attributes object, your program can determine these attributes of a new thread:

  Inherit scheduling attribute (Section 2.3.2.1)
  Scheduling policy attribute (Section 2.3.2.2)
  Scheduling parameters attribute (Section 2.3.2.3)
  Stacksize attribute (Section 2.3.2.4)
  Guardsize attribute (Section 2.3.2.5)

2.3.2.1 Setting the Inherit Scheduling Attribute

The inherit scheduling attribute specifies whether the new thread inherits (the default) the settings of its scheduling priority attribute and scheduling parameters attribute from the creating thread, or uses the scheduling attributes stored in the attributes object. Set this attribute by calling the pthread_attr_setinheritsched() routine.
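The call sequence can be sketched as follows. This is a minimal example, not taken from the guide; it uses the standard POSIX.1c names pthread_attr_init() and pthread_attr_destroy() (this guide's interface may instead use pthread_attr_create() and pthread_attr_delete()), and thread_routine() is a placeholder start routine:

```c
#include <pthread.h>
#include <stddef.h>

/* Placeholder start routine for the example. */
static void *thread_routine(void *arg)
{
    return arg;
}

/* Create a thread that takes its scheduling attributes from the
 * attributes object instead of inheriting them from its creator.
 * Returns 0 on success, or an error number on failure. */
int create_with_explicit_sched(pthread_t *thread)
{
    pthread_attr_t attr;
    int status;

    status = pthread_attr_init(&attr);
    if (status != 0)
        return status;

    /* Without this call, the new thread inherits (the default). */
    status = pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    if (status == 0)
        status = pthread_create(thread, &attr, thread_routine, NULL);

    pthread_attr_destroy(&attr);
    return status;
}
```

Because the attributes object is only read at creation time, it can be destroyed (or reused for another thread) as soon as pthread_create() returns.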

2.3.2.2 Setting the Scheduling Policy Attribute

The scheduling policy attribute describes how DECthreads schedules the new thread for execution relative to the other threads in the process.

A thread has one of the following scheduling policies:

  SCHED_FIFO (first-in/first-out)---The highest-priority ready thread runs until it blocks or terminates.
  SCHED_RR (round-robin)---The highest-priority ready thread runs for a timeslice, sharing the processor with other ready threads of the same priority.
  SCHED_OTHER (throughput; the default)---All ready threads are given a chance to execute eventually, regardless of priority.
  SCHED_FG_NP (foreground)---Same as SCHED_OTHER; the name emphasizes the contrast with SCHED_BG_NP.
  SCHED_BG_NP (background)---Like SCHED_OTHER, but ensures that such threads run only when no higher-priority work is ready.

2.3.2.2.1 Techniques for Setting the Scheduling Policy Attribute

Use either of two techniques to set a thread attributes object's scheduling policy attribute:

Section 2.3.6 describes and shows the effect of the scheduling policy on thread scheduling.
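As one illustration, a program might store the round-robin policy in an attributes object before creating a thread. This sketch assumes the standard POSIX.1c routine pthread_attr_setschedpolicy(); it also sets the inherit scheduling attribute so that the stored policy actually takes effect, as Section 2.3.2.1 requires:

```c
#include <pthread.h>
#include <sched.h>

/* Store the round-robin (SCHED_RR) policy in an attributes object.
 * Returns 0 on success, or an error number on failure.  Note that
 * SCHED_RR is a real-time policy; whether an unprivileged process may
 * actually create such a thread is system-dependent. */
int set_rr_policy(pthread_attr_t *attr)
{
    int status = pthread_attr_setschedpolicy(attr, SCHED_RR);
    if (status == 0)
        /* Make the new thread use the stored policy rather than
         * inheriting its creator's policy (the default). */
        status = pthread_attr_setinheritsched(attr, PTHREAD_EXPLICIT_SCHED);
    return status;
}
```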

2.3.2.2.2 Comparing Throughput and Real-Time Policies

The default throughput scheduling policy is intended to be an adaptive policy, giving each thread an opportunity to execute based on its behavior. That is, DECthreads tends to favor a thread that doesn't execute often, because such a thread isn't greatly affecting the others. Conversely, DECthreads tends to give less preference to compute-bound threads that use the throughput scheduling policy.

This yields a responsive system in which all threads with throughput scheduling policy get a chance to run fairly frequently. It also has the effect of automatically resolving priority inversions, because over time any threads that have received less processing time (among those with throughput scheduling policy) will rise in preference while the running thread drops, and eventually the inversion is reversed.

The FIFO and RR scheduling policies are considered real-time policies, because they require DECthreads to schedule such threads strictly by priority rather than adaptively by behavior. Because running threads that use real-time scheduling policies requires additional DECthreads overhead, incautious use of the FIFO or RR policies can cause the performance of the application to suffer.

If relative priorities of threads are important to your application---that is, if a compute-bound thread really requires consistently predictable execution---then create those threads using either the FIFO or RR scheduling policy.

2.3.2.2.3 Portability of Scheduling Policy Settings

Only the SCHED_FIFO and SCHED_RR scheduling policies are portable across POSIX.1c-conformant implementations. The other scheduling policies are DECthreads extensions to the POSIX.1c standard.


Note

The SCHED_OTHER identifier is portable, but the POSIX.1c standard does not specify the behavior that it signifies. For example, on non-DECthreads platforms the SCHED_OTHER scheduling policy could be identical to the SCHED_FIFO or SCHED_RR policy.

2.3.2.3 Setting the Scheduling Parameters Attribute

The scheduling parameters attribute specifies the execution priority of a thread. (Although the terminology and format are designed to allow adding more scheduling parameters in the future, only priority is currently defined.) The priority is expressed relative to other threads in the same policy on a continuum of minimum to maximum for each scheduling policy.

A thread's priority falls within one of the following ranges, depending on its scheduling policy:

Scheduling Policy   Low               High
SCHED_FIFO          PRI_FIFO_MIN      PRI_FIFO_MAX
SCHED_RR            PRI_RR_MIN        PRI_RR_MAX
SCHED_OTHER         PRI_OTHER_MIN     PRI_OTHER_MAX
SCHED_FG_NP         PRI_FG_MIN_NP     PRI_FG_MAX_NP
SCHED_BG_NP         PRI_BG_MIN_NP     PRI_BG_MAX_NP

Section 2.3.6 describes how to specify a priority between the minimum and maximum values, and it also discusses how priority affects thread scheduling.
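A common pattern is to store a mid-range priority in the attributes object. This sketch uses the POSIX.1c sched_get_priority_min()/max() queries in place of the DECthreads PRI_*_MIN/MAX symbols (the two express the same bounds), together with pthread_attr_setschedparam():

```c
#include <pthread.h>
#include <sched.h>

/* Store the given policy, plus a priority midway between that policy's
 * minimum and maximum, in an attributes object.  Returns 0 on success,
 * -1 if the policy is unknown, or an error number on failure. */
int set_mid_priority(pthread_attr_t *attr, int policy)
{
    struct sched_param param;
    int lo = sched_get_priority_min(policy);
    int hi = sched_get_priority_max(policy);
    int status;

    if (lo == -1 || hi == -1)
        return -1;                       /* policy not recognized */

    /* Midpoint of the policy's valid priority range. */
    param.sched_priority = (lo + hi) / 2;

    status = pthread_attr_setschedpolicy(attr, policy);
    if (status == 0)
        status = pthread_attr_setschedparam(attr, &param);
    return status;
}
```

Querying the range at run time, rather than hardcoding numbers, keeps the code valid even if the implementation's priority ranges change.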

Use either of two techniques to set a thread attributes object's scheduling parameters attribute:

2.3.2.4 Setting the Stacksize Attribute

The stacksize attribute is the minimum size, in bytes, of the memory required for a thread's stack. To increase or decrease the size of the stack for a new thread, call the pthread_attr_setstacksize() routine and use this thread attributes object when creating the thread and stack.

After a thread has been created, your program cannot change the size of the thread's stack. See Section 3.4.3 for more information about sizing a stack.

2.3.2.5 Setting the Guardsize Attribute

The guardsize attribute is the minimum size in bytes of the guard area for the stack of a thread. A guard area is a reserved area designed to help prevent or detect overflow of the thread's stack. The guard area is a region of memory that cannot be accessed by a thread. It is located adjacent to the last page in the thread's stack.

To increase or decrease the size of the guard area for the thread about to be created, call the pthread_attr_setguardsize_np() routine.
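For example, a thread that allocates large arrays on its stack might ask for a two-page guard area so that an overflow is more likely to be caught. The sketch below uses pthread_attr_setguardsize(), the name POSIX.1 later standardized for this routine; on DECthreads the call is pthread_attr_setguardsize_np():

```c
#include <pthread.h>
#include <unistd.h>

/* Request a guard area of at least two pages below the stack of
 * threads created with this attributes object.  Returns 0 on success,
 * -1 if the page size cannot be determined, or an error number. */
int set_two_page_guard(pthread_attr_t *attr)
{
    long page = sysconf(_SC_PAGESIZE);
    if (page <= 0)
        return -1;

    /* The system may round the guard size up to a page multiple. */
    return pthread_attr_setguardsize(attr, (size_t)(2 * page));
}
```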

2.3.3 Terminating a Thread

Terminating a thread means to cause a thread to end its execution. This can occur for any of the following reasons:

  The thread returns from its start routine; this is the normal case.
  The thread calls the pthread_exit() routine.
  The thread is canceled (see Section 2.3.7).

When a thread terminates, DECthreads performs these actions:

  1. DECthreads writes a return value (if one is available) into the terminated thread's thread object:
       If the thread terminated by returning from its start routine, the value is the start routine's return value.
       If the thread terminated due to a call to pthread_exit(), the value is the argument passed to pthread_exit().
       If the thread terminated due to cancelation, the value is PTHREAD_CANCELED.
    Another thread can obtain this return value by joining on the terminated thread (using pthread_join()). See Section 2.3.5 for a description of joining on a thread.

    Note

    If the thread terminated by returning from its start routine normally and the start routine does not provide a return value, then the results obtained by joining on that thread are unpredictable.

  2. If the termination results from a cancelation, a call to pthread_exit(), or an unhandled exception, DECthreads calls, in turn, each cleanup handler that this thread declared (using pthread_cleanup_push()) and that is not yet removed (using pthread_cleanup_pop()). (DECthreads also transfers control to any appropriate CATCH, CATCH_ALL, or FINALLY blocks, as described in Chapter 5.)
    DECthreads calls the terminated thread's most recently pushed cleanup handler first. See Section 2.3.3.1 for more information about cleanup handlers.
    For C++ programmers: At normal exit from a thread, your program must call the appropriate destructor functions, just as if an exception had been raised.
  3. To exit the terminated thread due to a call to pthread_exit(), DECthreads raises the pthread_exit_e exception. To exit the terminated thread due to cancelation, DECthreads raises the pthread_cancel_e exception.
    Your program can use the DECthreads exception package to operate on the generated exception. (In particular, note that the practice of using CATCH handlers in place of pthread_cleanup_push() is not portable.) Chapter 5 describes the DECthreads exception package.
  4. For each of the terminated thread's thread-specific data keys that has a non-NULL value, DECthreads sets the value for that key to NULL and then calls the destructor function, if any, associated with that key, passing the previous value as its argument.
    DECthreads repeats this step until all thread-specific data values in the thread are NULL, or for up to a number of iterations equal to PTHREAD_DESTRUCTOR_ITERATIONS. This destroys all thread-specific data associated with the terminated thread. See Section 2.5 for more information about thread-specific data.
  5. DECthreads awakens the thread (if there is one) that is currently joined on the terminated thread. That is, DECthreads awakens the thread that is waiting in a call to pthread_join().
  6. If the thread is already detached, DECthreads destroys its thread object. Otherwise, the thread continues to exist until detached or joined on. Section 2.3.4 describes detaching and destroying a thread.

    After a thread terminates, its thread object continues to exist. This means that DECthreads considers the thread's thread object data structure to remain allocated and to contain meaningful information---for instance, the thread's thread identifier is still unique and meaningful. This allows another thread to join on the terminated thread (see Section 2.3.5).

    When a terminated thread is no longer needed, your program should detach that thread (see Section 2.3.4).


    Note

    When the initial thread in a multithreaded process or image (on OpenVMS systems) returns from the main routine, the entire process terminates, just as it does when a thread calls exit() or SYS$EXIT (on OpenVMS systems).

    2.3.3.1 Cleanup Handlers

    A cleanup handler is a routine designed to be called at a thread's exit, whether the exit is due to normal termination or cancelation. Typically, a cleanup handler deallocates one or more data objects that the exiting thread allocated or otherwise restores the state of processing that the thread encountered.

    Your program declares a cleanup handler for a thread by calling the pthread_cleanup_push() routine. Your program removes a cleanup handler by calling the pthread_cleanup_pop() routine.

    Coding cleanup handlers is an alternative to using CATCH, CATCH_ALL, and FINALLY clauses.
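The push/pop pairing described above can be sketched as follows. This is an illustrative example (free_buffer() and worker() are hypothetical names); note that pthread_cleanup_push() and pthread_cleanup_pop() must appear as a lexically balanced pair in the same scope:

```c
#include <pthread.h>
#include <stdlib.h>

/* Cleanup handler: releases a buffer even if the thread is canceled
 * or calls pthread_exit() while holding it. */
static void free_buffer(void *arg)
{
    free(arg);
}

static void *worker(void *arg)
{
    char *buffer = malloc(256);
    if (buffer == NULL)
        return NULL;

    /* Declare the handler; it runs if the thread exits or is
     * canceled before the matching pop. */
    pthread_cleanup_push(free_buffer, buffer);

    /* ... work that might include cancelation points ... */

    /* Remove the handler; a nonzero argument also executes it now,
     * freeing the buffer on the normal path as well. */
    pthread_cleanup_pop(1);
    return arg;
}
```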

    DECthreads considers cleanup handler functions, exception handler clauses (that is, CATCH, CATCH_ALL, FINALLY), and C++ object destructors to be functionally equivalent mechanisms. Each is interleaved by scoping.

    2.3.4 Detaching and Destroying a Thread

    Detaching a thread means to mark a thread for destruction as soon as it terminates. Destroying a thread means to free, or make available for reuse, the resources occupied by the thread object (and by DECthreads internal resources) associated with that thread.

    If a thread has terminated, then detaching that thread causes DECthreads to destroy it immediately. If a thread is detached before it terminates, then DECthreads frees the thread's resources immediately after it terminates.

    A thread can be detached explicitly or implicitly:

      Your program can explicitly detach a thread by calling the pthread_detach() routine.
      A thread is implicitly detached when another thread joins on it; that is, pthread_join() detaches the target thread after it terminates.

    Your program can create a thread that is detached. See Section 2.3.1 for more information about creating a thread.

    It is illegal for your program to attempt any operation on a detached thread or to use any information in the thread object associated with a detached thread. For instance, a thread cannot join on a detached thread, and your program cannot cancel a detached thread.

    2.3.5 Joining on a Thread

    Joining on a thread means to suspend this thread's execution until another thread (the target thread) terminates. In addition, DECthreads detaches the target thread after it terminates.

    Joining on a functionally related thread is one way for two threads to synchronize their execution.

    A thread joins on another thread by calling the pthread_join() routine and specifying the thread identifier of the thread. If the target thread has already terminated, then this thread does not wait.

    The target thread of a join operation must be created with the detachstate attribute of its thread attributes object set to PTHREAD_CREATE_JOINABLE.
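A minimal join sequence looks like the following sketch; compute() and run_and_join() are illustrative names, and the default attributes object creates a joinable thread:

```c
#include <pthread.h>

/* Placeholder start routine: returns its argument as the thread's
 * return value. */
static void *compute(void *arg)
{
    return arg;
}

/* Create a joinable thread and wait for its result.  Returns 0 on
 * success, or an error number on failure. */
int run_and_join(void *input, void **result)
{
    pthread_t thread;
    int status = pthread_create(&thread, NULL, compute, input);
    if (status != 0)
        return status;

    /* Blocks until compute() terminates; also detaches the target
     * thread, so it must not be joined on again. */
    return pthread_join(thread, result);
}
```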

    Keep in mind these restrictions about joining on a thread:

      The result of more than one thread joining on the same target thread is unpredictable.
      A thread cannot join on itself; such a call can deadlock.
      Your program cannot join on a detached thread (see Section 2.3.4).

    2.3.6 Scheduling a Thread

    Scheduling means to evaluate and change the states of the process's threads. As your multithreaded program runs, DECthreads detects whether each thread is ready to execute, has terminated, is waiting for completion of a system call, and so on. Also, for each thread DECthreads regularly checks whether that thread's scheduling policy and scheduling priority, when compared with those of the process's other threads, entail forcing a change in that thread's state.

    Remember that scheduling priority specifies the "importance" of a thread to the application. Scheduling policy provides a mechanism to control how DECthreads interprets that priority as your program runs.

    To understand this section, you must be familiar with the concepts presented in these sections:

      Section 2.3.2.2, on the scheduling policy attribute
      Section 2.3.2.3, on the scheduling parameters (priority) attribute

    2.3.6.1 Calculating the Scheduling Priority

    A thread's scheduling priority falls within a range of values, depending on its scheduling policy. To specify the minimum or maximum scheduling priority for a thread, use the appropriate symbol---for example, PRI_OTHER_MIN or PRI_OTHER_MAX. Priority values are integers, so you can specify a value between the minimum and maximum priority using an appropriate arithmetic expression.

    For example, to specify a scheduling priority value that is midway between the minimum and maximum for the SCHED_OTHER scheduling policy, use the following expression (coded appropriately for your programming language):

    pri_other_mid = (PRI_OTHER_MIN + PRI_OTHER_MAX) / 2 
    

    where pri_other_mid represents the priority value you want to set.

    Avoid using literal numerical values to specify a scheduling priority setting, because the range of priorities can change from implementation to implementation. Values outside the specified range for each scheduling policy are invalid.

    2.3.6.2 Effects of Scheduling Policy

    To demonstrate the results of the different scheduling policies, consider the following example: A program has four threads, A, B, C, and D. For each scheduling policy, three scheduling priorities have been defined: minimum, middle, and maximum. The threads have the following priorities:
    A minimum
    B middle
    C middle
    D maximum

    On a uniprocessor system, only one thread can run at any given time. The ordering of execution depends upon the relative scheduling policies and priorities of the threads. Given a set of threads with fixed priorities such as the previous list, their execution behavior is typically predictable. However, in a symmetric multiprocessor (or SMP) system the execution behavior is much less predictable. Although the four threads have differing priorities, a four-processor SMP system might execute two or more of these threads simultaneously.

    When you design a multithreaded application that uses scheduling priorities, it is critical to remember that scheduling is not the same as synchronization. That is, you cannot assume that a higher-priority thread can access shared data without interference from lower-priority threads. For example, if one thread has a FIFO scheduling policy and the highest scheduling priority setting, while another has a background scheduling policy and the lowest scheduling priority setting, DECthreads might allow the two threads to run at the same time. As a corollary, on a four-processor system you also cannot assume that the four highest-priority threads are executing simultaneously at any particular moment.

    The following figures demonstrate how DECthreads schedules a set of threads on a uniprocessor based on whether each thread has the FIFO, RR, or throughput setting for its scheduling policy attribute. Assume that all waiting threads are ready to execute when the current thread waits or terminates and that no higher-priority thread is awakened while a thread is executing (that is, executing during the flow shown in each figure).

    Figure 2-1 shows a flow with FIFO scheduling.

    Figure 2-1 Flow with FIFO Scheduling



    Thread D executes until it waits or terminates. Next, although thread B and thread C have the same priority, thread B starts because it has been waiting longer than thread C. Thread B executes until it waits or terminates, then thread C executes until it waits or terminates. Finally, thread A executes.

    Figure 2-2 shows a flow with RR scheduling.

    Figure 2-2 Flow with RR Scheduling



    Thread D executes until it waits or terminates. Next, thread B and thread C are timesliced, because they both have the same priority. Finally, thread A executes.

    Figure 2-3 shows a flow with Default scheduling.

    Figure 2-3 Flow with Default Scheduling



    Threads D, B, C, and A are timesliced, even though thread A has a lower priority than the others. Thread A receives less execution time than thread D, B, or C if any of them is ready to execute as often as thread A is. However, the default scheduling policy protects thread A against being blocked from executing indefinitely.

    Because low-priority threads eventually run, the default scheduling policy protects against occurrences of thread starvation and priority inversion, which are discussed in Section 3.5.2.

    2.3.7 Canceling a Thread

    Canceling a thread means to request the termination of a target thread as soon as possible. A thread can request the cancelation of another thread or itself.

    Thread cancelation is a three-stage operation:

      1. A cancelation request is posted for the target thread, typically when some thread calls pthread_cancel().
      2. The request is delivered to the target thread, at a point determined by the target thread's cancelability state and type (see Section 2.3.7.2).
      3. The target thread terminates: DECthreads calls its cleanup handlers and thread-specific data destructors, as described in Section 2.3.3.

    The DECthreads pthread and tis interfaces implement thread cancelation using exceptions. DECthreads provides an exceptions package, as described in Chapter 5.

    2.3.7.1 Thread Return Value After Cancelation

    When DECthreads terminates a thread due to cancelation, it writes the return value PTHREAD_CANCELED into the thread's thread object. This is because cancelation prevents the thread from calling pthread_exit() or returning from its start routine.
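A joining thread can therefore distinguish cancelation from a normal return by testing for PTHREAD_CANCELED. The sketch below (cancelable() and cancel_and_check() are illustrative names) uses pause(), which is a cancelation point, to hold the target thread until the request is delivered:

```c
#include <pthread.h>
#include <unistd.h>

/* Target thread: waits indefinitely at a cancelation point. */
static void *cancelable(void *arg)
{
    (void)arg;
    for (;;)
        pause();            /* pause() is a cancelation point */
    return NULL;            /* not reached */
}

/* Cancel a thread and confirm its PTHREAD_CANCELED return value.
 * Returns 0 on success; -1 if the return value was unexpected. */
int cancel_and_check(void)
{
    pthread_t t;
    void *result = NULL;
    int status = pthread_create(&t, NULL, cancelable, NULL);
    if (status != 0)
        return status;

    status = pthread_cancel(t);
    if (status == 0)
        status = pthread_join(t, &result);
    if (status == 0 && result != PTHREAD_CANCELED)
        status = -1;        /* thread terminated some other way */
    return status;
}
```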

    2.3.7.2 Controlling Thread Cancelation

    Each thread controls whether it can be canceled (that is, whether it receives requests to terminate) and how quickly it terminates after receiving the cancelation request, as follows:

      A thread's cancelability state determines whether it receives a cancelation request at all. Cancelability is enabled by default; a thread can disable it by calling pthread_setcancelstate().
      A thread's cancelability type determines when it acts on a delivered request. By default cancelation is deferred until the thread reaches a cancelation point; a thread can request asynchronous (immediate) cancelation by calling pthread_setcanceltype().
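For example, a thread can disable cancelation around a critical update and then restore its previous state, as in this sketch (update_safely() is an illustrative name):

```c
#include <pthread.h>

/* Temporarily disable cancelation around a critical update, then
 * restore the caller's previous cancelability state.  Returns 0 on
 * success, or an error number on failure. */
int update_safely(void)
{
    int oldstate;
    int status = pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);
    if (status != 0)
        return status;

    /* ... update shared state; any cancelation request posted now is
     * held pending, not acted upon ... */

    return pthread_setcancelstate(oldstate, NULL);
}
```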




      6493P002.HTM
      OSSG Documentation
      22-NOV-1996 13:19:58.06
    

    Copyright © Digital Equipment Corporation 1996. All Rights Reserved.
