You can also use pthread_join64() or pthread_join32() to specify the length in bits of the return value.
Note that no other DECthreads functions have special 64-bit versions because the OpenVMS Alpha calling standard always supports 64-bit arguments and return values.
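The following is a minimal sketch of joining with a thread and retrieving its return value. The pthread_join() call is standard; the comment notes where pthread_join64() or pthread_join32() could presumably be substituted on OpenVMS Alpha, assuming they take the same arguments.

```c
#include <pthread.h>
#include <stdio.h>

/* Worker returns a small status value through the join mechanism. */
static void *worker(void *arg)
{
    return (void *)1;   /* value retrieved by the joining thread */
}

int main(void)
{
    pthread_t t;
    void *result;

    if (pthread_create(&t, NULL, worker, NULL) != 0)
        return 1;

    /* On OpenVMS Alpha, pthread_join64() or pthread_join32() could
     * presumably be substituted here (assumed to take the same
     * arguments) to control the size of the joined return value. */
    if (pthread_join(t, &result) != 0)
        return 1;

    printf("worker returned %ld\n", (long)result);
    return 0;
}
```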
Table B-3 lists the DECthreads condition values for OpenVMS systems and provides an explanation and user action.
Condition Value | Explanation and User Action |
---|---|
CMA$_EXCCOP | Exception raised; OpenVMS condition code follows. Explanation: One of the DECthreads exception commands (RAISE or RERAISE) raised or reraised an exception condition originating outside the DECthreads library. The secondary condition code in the signal vector will be the original code. User Action: See the documentation for the software that your program is calling to determine the reason for this exception. |
CMA$_EXCCOPLOS | Exception raised; some information lost. Explanation: CMA$_EXCCOPLOS is nearly the same as CMA$_EXCCOP, except that DECthreads determined that the copied signal vector may contain address arguments. Because those address arguments may not be valid when the stack is unwound and the condition is resignaled, DECthreads clears the condition codes' arguments in the resignaled vector. In most cases, DECthreads knows that SS$_ code arguments are "safe" and will not clear them; most other codes with arguments will result in CMA$_EXCCOPLOS. User Action: See the documentation for the software that your program is calling to determine the reason for this exception. |
CMA$_EXCEPTION | Exception raised; address of exception object is object-address. Explanation: This condition is used as the primary condition to RAISE an address-type DECthreads exception. The condition is signaled with a single argument containing the address of the EXCEPTION structure. There is no support for interpreting this value or for obtaining message text; it is meaningful only to the facility that defined the EXCEPTION, and it is not good programming practice to let an address exception propagate outside that facility. User Action: None. |
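The CMA$_EXCEPTION entry concerns address-type exceptions raised through the DECthreads exception package. The sketch below raises and catches such an exception entirely within one facility, as recommended above; the <exc_handling.h> header name and the exact macro usage (EXCEPTION, EXCEPTION_INIT, RAISE, TRY, CATCH, ENDTRY) are assumptions that should be checked against your DECthreads documentation.

```c
/* Assumed header name for the DECthreads exception package. */
#include <exc_handling.h>
#include <stdio.h>

/* An address exception defined and handled entirely inside one
 * facility, as the table above recommends. */
static EXCEPTION parse_error;

static void do_parse(void)
{
    RAISE(parse_error);   /* signaled as CMA$_EXCEPTION internally */
}

int main(void)
{
    EXCEPTION_INIT(parse_error);

    TRY {
        do_parse();
    }
    CATCH(parse_error) {
        printf("caught parse_error inside the defining facility\n");
    }
    ENDTRY

    return 0;
}
```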
This section applies to OpenVMS Alpha systems only.
Under OpenVMS Alpha Version 7.0 and later, DECthreads implements a new scheduling model, referred to as two-level scheduling. This model is based on the concept of virtual processors, which are implemented using the kernel thread technology of the OpenVMS Alpha operating system.
DECthreads schedules threads onto virtual processors much as OpenVMS schedules processes onto the processors of a multiprocessing machine. To the DECthreads runtime environment, a scheduled thread executes on a virtual processor until it blocks or exhausts its timeslice quantum; DECthreads then schedules a new thread to run.
While DECthreads schedules threads onto virtual processors, the OpenVMS scheduler also schedules virtual processors to run on physical processors. The term two-level scheduling derives from this relationship.
The two-level scheduling model provides these advantages:
The key to making the two-level scheduling model work is the upcall mechanism. An upcall is a communication between the OpenVMS scheduler and the DECthreads scheduler. When an event occurs that affects the scheduling of a thread, such as blocking for a system service, the OpenVMS scheduler calls "up" to the DECthreads scheduler to notify it of the change in the thread's status.
This upcall gives DECthreads the opportunity to schedule another thread to run on the virtual processor in place of the blocking thread, rather than to allow the virtual processor itself to block, which would deny that resource to other threads in the process.
Upcalls are typically arranged in pairs, with an "unblock" upcall corresponding to each "block" upcall. The unblock upcall notifies DECthreads that a previously blocked thread is now eligible to run again. DECthreads schedules that thread to run when it is appropriate, given the thread's scheduling policy and priority.
In OpenVMS Alpha Version 7.1 and later, the linker supports the /THREADS_ENABLE (or /NOTHREADS_ENABLE) qualifier for specifying the role of kernel threads in the resulting image. Use this qualifier to specify whether the process running the image can create multiple kernel threads and whether the OpenVMS Alpha kernel's support for DECthreads upcalls is enabled. If this qualifier is not specified, the default linker setting is /NOTHREADS_ENABLE, which produces an image that behaves as it did under OpenVMS Alpha Version 6.
The /THREADS_ENABLE qualifier takes two keyword arguments, MULTIPLE_KERNEL_THREADS and UPCALLS. Table B-4 summarizes the allowable combinations of these keywords and their effects.
Keywords Specified | Result |
---|---|
/NOTHREADS_ENABLE | No kernel threads support for DECthreads. |
/THREADS_ENABLE or /THREADS_ENABLE=(MULTIPLE_KERNEL_THREADS,UPCALLS) | Full kernel threads support for DECthreads, including the ability to run multiple user threads simultaneously on different CPUs of a multiprocessor machine. |
/THREADS_ENABLE=MULTIPLE_KERNEL_THREADS | Same behavior as if /NOTHREADS_ENABLE were specified (without upcall support, DECthreads cannot reliably use multiple kernel threads). |
/THREADS_ENABLE=UPCALLS | Upcall support for DECthreads (such as making system calls thread-synchronous), but the process's threads are restricted to one CPU of a multiprocessor machine. |
Note
Under no circumstances should a process explicitly create kernel threads. DECthreads creates them as needed when allowed to do so. Explicit creation of kernel threads by an application disrupts the operation of the DECthreads runtime environment and causes incorrect and/or unreliable application behavior.
Under OpenVMS Alpha only, use the THREADCP tool to set or show the kernel threads features described above for an existing image. The tool provides the ability to enable, disable, and show the state of the thread control bits in an image's header.
The THREADCP command verb is not part of the normal set of DCL commands. To use the tool, you must define the command verb before invoking it.
In a THREADCP command, an image file name is a required parameter for use with all supported qualifiers. THREADCP supports abbreviations to the first character for all qualifiers and parameters. When the SHOW qualifier is used alone with the THREADCP command, the file name can contain wildcard characters.
After you define the THREADCP command verb, an image's thread control bits can be set or cleared using the /ENABLE and /DISABLE qualifiers, respectively. To do so, specify the name of each thread control bit to be enabled, disabled, or shown. One or both thread control bits can be specified. The user must have write access to the image file.
If no thread control bit is specified, the THREADCP default is to operate on both bits. If the image is currently being executed or is installed, it cannot be modified.
This command defines the THREADCP command verb:
$ SET COMMAND SYS$UPDATE:THREADCP.CLD
This command displays the current settings of both thread control bits for the image TEST.EXE:
$ THREADCP/SHOW TEST.EXE
This command displays the current settings of both thread control bits for all SYS$SYSTEM images:
$ THREADCP/SHOW SYS$SYSTEM:*
This command sets both thread control bits explicitly for the image TEST.EXE:
$ THREADCP/ENABLE=(MULTIPLE_KERNEL_THREADS, UPCALLS) TEST.EXE
This command clears both thread control bits explicitly for the image TEST.EXE:
$ THREADCP/DISABLE=(MULTIPLE_KERNEL_THREADS, UPCALLS) TEST.EXE
On OpenVMS Alpha systems, a program can call the $GETJPI system service and specify the appropriate MULTITHREAD item code to determine whether kernel threads are in use. The return values have the same meanings as are defined for the MULTITHREAD system parameter, as summarized in Table B-5.
Value | Description |
---|---|
0 | Both Thread Manager upcalls and the creation of multiple kernel threads are disabled. |
1 | Thread Manager upcalls are enabled; the creation of multiple kernel threads is disabled. |
2 through 16 | Both Thread Manager upcalls and the creation of multiple kernel threads are enabled. The number specified represents the maximum number of kernel threads that can be created for a single process. |
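A hedged C sketch of this query follows. It assumes the item code is named JPI$_MULTITHREAD in <jpidef.h>, and the item-list structure is written out by hand to match the standard item-list entry layout; both points should be confirmed against your system's header files.

```c
#include <jpidef.h>    /* assumed to define JPI$_MULTITHREAD */
#include <ssdef.h>
#include <starlet.h>
#include <stdio.h>

/* Standard three-longword item-list entry plus terminator. */
struct item_list {
    unsigned short buflen;
    unsigned short itmcod;
    void *bufadr;
    unsigned short *retlen;
};

int main(void)
{
    unsigned int multithread = 0;
    unsigned short retlen = 0;
    struct item_list items[2] = {
        { sizeof multithread, JPI$_MULTITHREAD, &multithread, &retlen },
        { 0, 0, 0, 0 }                     /* terminating entry */
    };
    unsigned int status;

    /* Query the current process (null PID and process name). */
    status = sys$getjpiw(0, 0, 0, items, 0, 0, 0);
    if (status & 1)
        printf("MULTITHREAD value: %u\n", multithread);
    else
        printf("sys$getjpiw failed, status %u\n", status);

    return 0;
}
```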
In general, there is no reason to create more virtual processors than there are physical processors; additional virtual processors would only contend with each other for the physical processors and add unnecessary overhead. Regardless of the value of the MULTITHREAD parameter, DECthreads creates no more virtual processors than there are user threads (excluding DECthreads internal threads).
DECthreads does not delete virtual processors or let them terminate. They are retained in the HIB idle state until they are needed again. During image rundown, they are deleted by OpenVMS.
The DECthreads scheduler can schedule any user thread onto any virtual processor; therefore, a user thread can run on different kernel threads at different times. Normally this poses no problem; be aware, however, that a user thread's PID (as retrieved by querying the system) can change from time to time.
When a user mode AST becomes deliverable to a DECthreads process, the OpenVMS scheduler makes an upcall to DECthreads, passing the information that is required to deliver the AST (service routine address, argument, and target user thread ID). DECthreads stores this information and queues the AST to be delivered to the appropriate user thread. That thread is made runnable (if it is not already), and executes the AST routine the next time it is scheduled to run. This means the following:
In addition to per-thread ASTs, there are also user mode ASTs that are directed to the process as a whole, or to no thread in particular, or to a thread that has since terminated. These "process" ASTs are queued to the initial thread, making the thread runnable in a fashion similar to per-thread ASTs. They are executed in the context of the initial thread, for the following reasons:
Note
In OpenVMS Version 7.0 and later, all ASTs are directed to the process as a whole. In future releases, AST delivery will be made per thread as individual services are updated.
The following implications must be considered for application development:
All blocking system services are thread synchronous in OpenVMS Alpha Version 7.0 and later. That is, they block only the calling thread. When the thread is to be blocked by the system service, the OpenVMS scheduler makes an upcall to allow DECthreads to schedule another user thread to execute. Therefore, only the calling thread is blocked, all other threads are unaffected, and the process continues running. When the service completes, the thread is awakened by means of another upcall, and DECthreads schedules it to run again at the thread's next opportunity.
This applies to all "W" forms of system services (for example, $QIOW, $END_TRANSW, and $GETJPIW), as well as to the following event flag services: $WAITFR, $WFLAND, and $WFLOR.
$HIBER and $WAKE result in upcalls to DECthreads. When a user thread calls $HIBER, only that thread is blocked; all other threads continue running. The blocking thread is immediately unscheduled and another thread is scheduled to run instead. When a thread (or another process) calls $WAKE, all hibernating threads are awakened.
Prior to OpenVMS Version 7.0, a thread that called $HIBER (or called a library routine that eventually called $HIBER) caused the whole process to hibernate for a brief period whenever that thread was scheduled to "run." Also, with multiple threads in calls to $HIBER simultaneously, there was no reliable way to wake a specific thread (or all threads): the next hibernating thread to be scheduled would awaken, and any other threads would continue to sleep.
In OpenVMS Alpha Version 7.0 and later, these problems have been resolved. However, the new behavior has other effects. For instance, hibernation-based services, such as LIB$WAIT and the C RTL sleep() routine, may be prone to premature completion: if a service does not validate its wakeup (that is, ensure that enough time has passed or that there is some other reason for it to return), it can return early. LIB$WAIT and sleep() do not perform such wakeup validation.
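Callers that depend on the full delay can validate the wakeup themselves. A minimal, portable sketch:

```c
#include <time.h>
#include <unistd.h>

/* Sleep for at least 'seconds' seconds, re-sleeping if the wait
 * completes prematurely (for example, because an unrelated $WAKE
 * woke every hibernating thread in the process). */
static void sleep_at_least(unsigned int seconds)
{
    time_t deadline = time(NULL) + (time_t)seconds;
    time_t now;

    while ((now = time(NULL)) < deadline) {
        sleep((unsigned int)(deadline - now));
    }
}
```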
All event flags are shared by all threads in the process. Therefore, it is possible for different threads' use of the same event flag to cause interference. That is, if two threads use the same event flag in calls to different system services, whichever service completes first will cause both threads to awaken, even though the other service has not completed. This situation can be resolved by specifying an I/O status block (IOSB) for those system services that use them. When an IOSB is present, the blocked thread will not be awakened when the event flag is set, unless the IOSB has also been written.
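A hedged sketch of this technique follows. Because an IOSB is supplied to $QIOW, the calling thread is not resumed merely because another thread happened to set the shared event flag; the channel assignment to SYS$INPUT and the IO$_READVBLK function code are illustrative only.

```c
#include <descrip.h>
#include <iodef.h>
#include <ssdef.h>
#include <starlet.h>
#include <string.h>

/* Conventional I/O status block layout. */
struct iosb {
    unsigned short status;
    unsigned short count;
    unsigned int   dev_info;
};

int read_line(char *buf, unsigned short buflen)
{
    static $DESCRIPTOR(input_dev, "SYS$INPUT");
    unsigned short chan;
    struct iosb iosb;
    unsigned int status;

    memset(&iosb, 0, sizeof iosb);

    status = sys$assign(&input_dev, &chan, 0, 0);
    if (!(status & 1))
        return status;

    /* Event flag 0 is shared by every thread in the process, but
     * because an IOSB is supplied, this thread is not awakened
     * until the IOSB itself has been written. */
    status = sys$qiow(0, chan, IO$_READVBLK, &iosb, 0, 0,
                      buf, buflen, 0, 0, 0, 0);
    sys$dassgn(chan);

    if (status & 1)
        status = iosb.status;
    return status;
}
```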
Note that a DECthreads process is rarely in LEF state. In general, instead of blocking for an event flag wait, DECthreads schedules another thread to be run. If no threads are available, DECthreads schedules a "null" thread, which places the virtual processor in HIB state until it is needed to execute a thread.
Note that no upcall is made for waits on a common event flag. If a thread waits on a common event flag, the virtual processor blocks until the wait is satisfied. (On a uniprocessor, this is most likely to block all threads.)
There are several interactions with the OpenVMS operating system that should be noted:
In multithreaded processes, image exit occurs as follows: $EXIT does not immediately invoke exit handler routines. Instead, it results in an upcall that causes DECthreads to schedule a special thread to execute the exit-handler routines. $EXIT then calls pthread_exit() to terminate the calling thread. This allows the calling thread to release any resources that it might be holding.
To avoid possible deadlocks, the exit-handler routines are executed in a separate thread. For example, if exit handlers ran in the thread that called $EXIT and that thread held a mutex required by an exit-handler routine, the routine would block forever, waiting for a mutex that its own thread already holds. Because the exit-handler routine executes in a separate thread, it can simply block until the thread holding the mutex releases it.
$FORCEX works in an analogous fashion. Instead of invoking $EXIT directly, it causes an upcall that allows DECthreads to release the exit-handler thread.
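For reference, the following hedged sketch declares an exit handler with $DCLEXH. The control-block layout shown (forward link, handler address, argument count, address of a status longword) follows the conventional documented form but should be verified against your system's definitions.

```c
#include <ssdef.h>
#include <starlet.h>
#include <stdio.h>

static unsigned int exit_status;

/* Exit-handler routine: receives the address of the final status. */
static void cleanup_handler(unsigned int *status)
{
    printf("image exiting with status %u\n", *status);
    /* Release files, flush buffers, and so on; avoid acquiring
     * mutexes that other threads may still hold at this point. */
}

/* Conventional exit-handler control block layout (assumed). */
static struct {
    void *flink;                        /* filled in by the system   */
    void (*handler)(unsigned int *);    /* routine to call at exit   */
    unsigned int arg_count;             /* number of arguments       */
    unsigned int *status_addr;          /* receives the exit status  */
} exit_block = { 0, cleanup_handler, 1, &exit_status };

int main(void)
{
    sys$dclexh(&exit_block);

    /* ... multithreaded work; when some thread calls sys$exit(),
     * DECthreads runs cleanup_handler() in a separate thread. */
    return SS$_NORMAL;
}
```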
DCL Ctrl/Y continues to work as it always has on multithreaded applications. However, typing EXIT or issuing any other command that invokes a new image causes the $FORCEX upcall. While this is an improvement in many cases over the behavior prior to OpenVMS Version 7.0, it does not guarantee that the multithreaded application will exit.
For example, if the application is deadlocked while holding a resource required by one of the exit-handler routines, the application continues to hang even after you type Ctrl/Y and EXIT. In such cases, type Ctrl/Y and STOP to terminate the application without running exit handlers. Note that this prevents the application from cleaning up, and it may leave data files and the terminal in an inconsistent state.
The SYSGEN parameter MULTITHREAD limits the maximum number of kernel threads per process. It is set by AUTOGEN to the number of CPUs on the system. If MULTITHREAD is set to zero (0), two-level scheduling support is disabled, and DECthreads reverts to its behavior prior to OpenVMS Version 7.0; that is, no upcalls can occur, and DECthreads does not use all processors in multiprocessor systems.
Under OpenVMS Version 7.0 or later, it is possible to see all but one of your kernel threads in SUSP state, such as when at a breakpoint in the debugger. This effect is a part of the debugging support and is not the result of calling $SUSPND.
The following system services now operate on a per-thread rather than a per-process basis: $HIBER, $SCHDWK, and $SYNCH. For example, $HIBER causes only the calling thread to become inactive and does not affect other threads in the process.
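A hedged sketch of a per-thread timed wait built from these services follows: $BINTIM converts a delta-time string, $SCHDWK schedules a wakeup for the calling thread, and $HIBER then blocks only that thread.

```c
#include <descrip.h>
#include <ssdef.h>
#include <starlet.h>

/* Block the calling thread (only) for roughly five seconds. */
int thread_pause_5s(void)
{
    static $DESCRIPTOR(delta_str, "0 00:00:05.00");
    unsigned int delta_time[2];     /* 64-bit binary delta time */
    unsigned int status;

    status = sys$bintim(&delta_str, delta_time);
    if (!(status & 1))
        return status;

    /* Schedule a wakeup for the calling thread after the delta. */
    status = sys$schdwk(0, 0, delta_time, 0);
    if (!(status & 1))
        return status;

    /* Under the two-level model, $HIBER blocks only the calling
     * thread; other threads in the process continue to run. */
    return sys$hiber();
}
```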
The following DCL commands operate as indicated:
Previous releases of the POSIX for OpenVMS layered product had very limited interoperability with DECthreads. Under OpenVMS Version 7.0 and later, using DECthreads with the POSIX for OpenVMS layered product is not supported.
This appendix discusses DECthreads issues and restrictions specific to the Windows NT and Windows 95 operating systems.
DECthreads is supported in the Win32 subsystem of the Windows NT operating system and on Windows 95. The Win32 subsystem provides support for multithreading through the Win32 Application Programming Interface (API), which allows for thread creation, termination, synchronization, and other thread functions. DECthreads adds value to the Win32 API by providing the POSIX 1003.4a API as well as the other DECthreads APIs, which are also available across all Digital platforms and other DCE platforms. The POSIX 1003.4a API makes applications written to it easier to port across these platforms.
Note
The POSIX 1003.1c-1995 style (or pthread) and thread-independent services (or tis) interfaces are not yet available for Windows NT and Windows 95 systems as of this manual's publication date. Only the Digital proprietary CMA (or cma) and POSIX 1003.4a/Draft 4 (or d4) interfaces apply to application development on Windows NT and Windows 95 at this time.