In this example, the default file type .COM is assumed because the file type is omitted.
For more information on creating command procedures, see the SAVE command in the OpenVMS System Management Utilities Reference Manual.
The System Management utility (SYSMAN) provides two kinds of support for OpenVMS Cluster management:

- CONFIGURATION commands, which manage security data and system time across the cluster
- The DO command, which executes DCL commands and command procedures on all nodes in the current environment
Each SYSMAN command requires a specific level of privilege. For more information on each command, see the OpenVMS System Management Utilities Reference Manual.
You can manage security data and system time for an OpenVMS Cluster system with SYSMAN CONFIGURATION commands. Table 20-4 summarizes these CONFIGURATION commands and their functions.
| Command | Function |
|---|---|
| CONFIGURATION SET CLUSTER_AUTHORIZATION | Modifies the group number and password in a local area cluster |
| CONFIGURATION SHOW CLUSTER_AUTHORIZATION | Displays the group number and multicast address of a local area cluster |
| CONFIGURATION SET TIME | Updates system time |
| CONFIGURATION SHOW TIME | Displays current system time |
The group number identifies the group of nodes in the cluster, and the associated Ethernet address is used to send messages to all nodes in the cluster. The OpenVMS Cluster password protects the integrity of the cluster membership.
The CONFIGURATION SET CLUSTER_AUTHORIZATION command modifies the group number and password recorded in SYS$SYSTEM:CLUSTER_AUTHORIZE.DAT. Normally, you do not need to alter records in the CLUSTER_AUTHORIZE.DAT file.
If your configuration has multiple system disks, SYSMAN automatically updates each copy of CLUSTER_AUTHORIZE.DAT, provided that you have defined the environment as a cluster with the SET ENVIRONMENT/CLUSTER command.
Caution
If you change either the group number or password, you must reboot the entire cluster.
You cannot display the cluster password for security reasons, but you can display the group number and group multicast address with the CONFIGURATION SHOW CLUSTER_AUTHORIZATION command.
Examples
    SYSMAN> SET ENVIRONMENT/CLUSTER/NODE=NODE21
    SYSMAN> SET PROFILE/PRIVILEGE=SYSPRV
    SYSMAN> CONFIGURATION SET CLUSTER_AUTHORIZATION/PASSWORD=GILLIAN
    %SYSMAN-I-CAFOLDGROUP, existing group will not be changed
    %SYSMAN-I-GRPNOCHG, Group number not changed
    %SYSMAN-I-CAFREBOOT, cluster authorization file updated. The entire cluster should be rebooted.
    SYSMAN> CONFIGURATION SHOW CLUSTER_AUTHORIZATION
    Node NODE21:  Cluster group number 65240
                  Multicast address: AB-00-04-01-F2-FF
Use the CONFIGURATION SET TIME command to modify system time for nodes in an OpenVMS Cluster system, as well as for individual nodes. You can specify time values in the following format:
[dd-mmm-yyyy[:]] [hh:mm:ss.cc]
You can also enter delta time values. See the OpenVMS User's Manual for more information about time formats.
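For example, the following commands show these formats with illustrative values; the delta-time line uses an assumed quoting and syntax, so check the OpenVMS User's Manual for the exact form:

    SYSMAN> CONFIGURATION SET TIME 19-APR-1996:12:38:00.00   ! full absolute date and time
    SYSMAN> CONFIGURATION SET TIME 12:38:00                  ! time of day only
    SYSMAN> CONFIGURATION SET TIME "+0-00:10:00.00"          ! delta time (assumed syntax)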
In a cluster environment, SYSMAN sets the time on each node to the value you specify. However, if you do not specify a value, SYSMAN reads the clock on the node from which you are executing SYSMAN and assigns this value to all nodes in the cluster. In a remote cluster, SYSMAN reads the clock on the target node in the cluster and assigns that value to all nodes. Note that the time-of-year clock is optional for some processors; see your processor's hardware handbook for more information.
SYSMAN tries to ensure that all processors in the cluster are set to the same time. Because of communication and processing delays, it is not possible to synchronize clocks exactly. However, the variation is typically less than a few hundredths of a second. If SYSMAN cannot set the time to within one-half second of the specified time, you receive a warning message that names the node that failed to respond quickly enough.
As a result of slight inaccuracies in each processor clock, times on various members of a cluster tend to drift apart. The first two examples show how to synchronize system time in a cluster.
Examples
    $ SYNCH_CLOCKS:
    $ RUN SYS$SYSTEM:SYSMAN
    SET ENVIRONMENT/CLUSTER
    CONFIGURATION SET TIME
    EXIT
    $ WAIT 6:00:00
    $ GOTO SYNCH_CLOCKS
    SYSMAN> SET ENVIRONMENT/NODE=(NODE21,NODE22,NODE23)
    SYSMAN> SET PROFILE/PRIVILEGE=LOG_IO
    SYSMAN> CONFIGURATION SET TIME 12:38:00
    SYSMAN> SET ENVIRONMENT/CLUSTER/NODE=NODE23
    SYSMAN> CONFIGURATION SHOW TIME
    System time on node NODE21: 19-APR-1996 13:32:19.45
    System time on node NODE22: 19-APR-1996 13:32:27.79
    System time on node NODE23: 19-APR-1996 13:32:58.66
The Time of Day Register (TODR), which the system uses to maintain system time, has a limit of approximately 15 months. Reset the system time between January 1 and April 1; otherwise, the problems described in the rest of this section might occur.
Because the TODR has an approximate limit of 15 months, the system maintains time by combining the TODR value with a base time recorded in the base system image (SYS$LOADABLE_IMAGES:SYS.EXE). The definition of base time is:
01-JAN-CURRENT_YEAR 00:00:00.00
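As an illustration (with made-up numbers): if the base time recorded in the system image is 01-JAN-1996 00:00:00.00 and the TODR holds an offset of 135 days plus 13:30:00.00, the system computes a current time of 15-MAY-1996 13:30:00.00.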
Because all TODRs ordinarily have the same base, multiple CPUs can boot from the same system disk, and you can use multiple system disks on one CPU; in either case, the system sets the time correctly.
When a SET TIME command is issued (with or without a specified time), OpenVMS updates both the TODR and the base time recorded in the system image. In an OpenVMS Cluster system (or on a node that is not part of a cluster), setting the time resets the TODR and the base time with the values for the new year. However, multiple systems might share the system image; this does not normally cause a problem except after the first day of a new year.
Note
The system issues the SET TIME command when it boots and as a part of the normal SHUTDOWN command procedure.
By December, each node has a very large offset stored in the TODR (from the base time of 1-JAN of that year). When the time advances to a new year, the system image still has the old year and the TODR values are still large.
After January 1, if a SET TIME command is issued on any node (or any node is shut down using SHUTDOWN.COM), the following happens:

1. The system time on that node is set to the correct value for the new year.
2. The base time recorded in the shared system image is updated to the new year.
3. The TODR on that node is reset to a small offset from the new base time.
After these three events occur, if a node that has a large TODR crashes and rejoins the cluster, its system time is initially in the next year (applying the large TODR to the new year). This system time is recorded as the system's boot time. When the node joins the cluster, its time is set to the correct value but the boot time remains one year in the future. Certain forms of the SHOW SYSTEM command compare current time to boot time; in this instance, SHOW SYSTEM displays incorrect values.
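As a rough illustration (made-up numbers): suppose a node's TODR holds an offset of about 350 days when the base year in the shared system image is updated to 1997. If that node then reboots, it initially computes a time in mid-December 1997 and records it as its boot time; joining the cluster corrects the system time, but the recorded boot time remains nearly a year in the future.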
If a system disk is used at different times by different, unclustered CPUs, or if different system disks are used at different times on the same CPU, the system might incorrectly set the time to a year in the future or a year in the past, depending on how the CPU's TODR and the value recorded on the system disk have become unsynchronized.
Example
The following example uses SYSMAN commands to reset the time on all nodes in an OpenVMS Cluster system:
    $ RUN SYS$SYSTEM:SYSMAN
    SYSMAN> SET ENVIRONMENT/CLUSTER
    SYSMAN> SET PROFILE/PRIVILEGE=(LOG_IO,SYSLCK)
    SYSMAN> CONFIGURATION SET TIME 05-JUN-1996:12:00:00
    SYSMAN> EXIT
Notes
- On a node that is not part of a cluster, use the DCL command SET TIME and specify a time (see the sketch below). If you do not specify a time, the SET TIME command updates the system time using the time in the TODR.
- If you are running the Digital Distributed Time Service (DECdts) on your system, you must use it to set the time.
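For example, on a standalone node (illustrative values):

    $ SET TIME=19-APR-1996:13:30:00   ! set an explicit date and time
    $ SET TIME                        ! recalibrate system time from the TODR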
The SYSMAN command DO enables you to execute a DCL command or command procedure on all nodes in the current environment. This is convenient when you are performing routine system management tasks on nodes in the OpenVMS Cluster system, such as:

- Installing images
- Adding user accounts
- Running startup command procedures
- Checking device status and memory use
Each DO command executes as an independent process, so there is no process context retained between DO commands. For this reason, you must express all DCL commands in a single command string, and you cannot run a program that expects input.
In a cluster environment, SYSMAN executes the commands sequentially on all nodes in the cluster. Each command executes completely before SYSMAN sends it to the next node in the environment. Any node that is unable to execute the command returns an error message. SYSMAN displays an error message if the timeout period expires before the node responds.
In a dual-architecture heterogeneous OpenVMS Cluster running both OpenVMS VAX and OpenVMS Alpha, some uses of the DO command may require special handling. For example, if you are installing images that are named differently on each architecture, you can still use the DO command, provided you create separate logical name tables for the VAX and the Alpha nodes. See the example sequence that follows this description.
Some DCL commands, such as MOUNT/CLUSTER or SET QUORUM/CLUSTER, operate clusterwide by design. It is best to avoid using these kinds of commands with the DO command in SYSMAN when the environment is set to cluster. As alternatives, you could leave SYSMAN temporarily with the SPAWN command and execute these commands in DCL, or you could define the environment to be a single node within the cluster.
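For instance, either of the following approaches avoids repeating a clusterwide command on every node (the device name and volume label here are placeholders):

    SYSMAN> SPAWN MOUNT/CLUSTER $1$DIA2: DISK2   ! execute once from a DCL subprocess
    SYSMAN> SET ENVIRONMENT/NODE=NODE21          ! or restrict the environment to one node
    SYSMAN> DO MOUNT/CLUSTER $1$DIA2: DISK2      ! so the clusterwide command runs only once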
Examples
    SYSMAN> SET PROFILE/PRIVILEGES=(CMKRNL,SYSPRV)/DEFAULT=SYS$SYSTEM
    SYSMAN> DO INSTALL ADD/OPEN/SHARED WRKD$:[MAIN]STATSHR
    SYSMAN> DO MCR AUTHORIZE ADD JONES/PASSWORD=COLUMBINE -
    _SYSMAN> /DEVICE=WORK1/DIRECTORY=[JONES]
    SYSMAN> SET ENVIRONMENT/CLUSTER
    %SYSMAN-I-ENV, Current command environment:
            Clusterwide on local cluster
            Username SMITH will be used on nonlocal nodes
    SYSMAN> DO @SYS$STARTUP:XYZ_STARTUP
    $ CREATE/NAME_TABLE/PARENT=LNM$SYSTEM_DIRECTORY SYSMAN$NODE_TABLE
    $ DEFINE/TABLE=SYSMAN$NODE_TABLE ALPHA_NODES NODE21,NODE22,NODE23
    $ DEFINE/TABLE=SYSMAN$NODE_TABLE VAX_NODES NODE24,NODE25,NODE26
    $ RUN SYS$SYSTEM:SYSMAN
    SYSMAN> SET ENVIRONMENT/NODE=ALPHA_NODES
    %SYSMAN-I-ENV, current command environment:
            Individual nodes: NODE21,NODE22,NODE23
            Username BOUCHARD will be used on nonlocal nodes
    SYSMAN> DO INSTALL REPLACE SYS$LIBRARY:DCLTABLES.EXE
    %SYSMAN-I-OUTPUT, command execution on node NODE21
    %SYSMAN-I-OUTPUT, command execution on node NODE22
    %SYSMAN-I-OUTPUT, command execution on node NODE23
    SYSMAN> DO INSTALL REPLACE SYS$SYSTEM:DEC_FORTRAN.EXE
    %SYSMAN-I-OUTPUT, command execution on node NODE21
    %SYSMAN-I-OUTPUT, command execution on node NODE22
    %SYSMAN-I-OUTPUT, command execution on node NODE23
    SYSMAN> SET ENVIRONMENT/NODE=VAX_NODES
    %SYSMAN-I-ENV, current command environment:
            Individual nodes: NODE24,NODE25,NODE26
            Username BOUCHARD will be used on nonlocal nodes
    SYSMAN> DO INSTALL REPLACE SYS$LIBRARY:DCLTABLES.EXE
    %SYSMAN-I-OUTPUT, command execution on node NODE24
    %SYSMAN-I-OUTPUT, command execution on node NODE25
    %SYSMAN-I-OUTPUT, command execution on node NODE26
    SYSMAN> DO INSTALL REPLACE SYS$SYSTEM:FORTRAN$MAIN.EXE
    %SYSMAN-I-OUTPUT, command execution on node NODE24
    %SYSMAN-I-OUTPUT, command execution on node NODE25
    %SYSMAN-I-OUTPUT, command execution on node NODE26
    SYSMAN> SET ENVIRONMENT/CLUSTER
    %SYSMAN-I-ENV, Current command environment:
            Clusterwide on local cluster
            Username SMITH will be used on nonlocal nodes
    SYSMAN> DO SHOW DEVICE/FILES DISK2:
    %SYSMAN-I-OUTPUT, command execution on node NODE21
    Files accessed on device $1$DIA2: (DISK2, NODE22) on 14-MAY-1996 15:44:06.05
    Process name          PID        File name
                          00000000   [000000]INDEXF.SYS;1
    %SYSMAN-I-OUTPUT, command execution on node NODE22
    Files accessed on device $1$DIA2: (DISK2, NODE21) on 14-MAY-1996 15:44:26.93
    Process name          PID        File name
                          00000000   [000000]INDEXF.SYS;1
    %SYSMAN-I-OUTPUT, command execution on node NODE23
    Files accessed on device $1$DIA2: (NODE21, NODE22) on 14-MAY-1996 15:45:01.43
    Process name          PID        File name
                          00000000   [000000]INDEXF.SYS;1
    %SYSMAN-I-OUTPUT, command execution on node NODE24
    Files accessed on device $1$DIA2: (NODE22, NODE21) on 14-MAY-1996 15:44:31.30
    Process name          PID        File name
                          00000000   [000000]INDEXF.SYS;1
    Susan Scott           21400059   [SCOTT]DECW$SM.LOG;228
    _FTA7:                214000DD   [SCOTT]CARE_SDML.TPU$JOURNAL;1
    %SYSMAN-I-OUTPUT, command execution on node NODE25
    Files accessed on device $1$DIA2: (NODE21, NODE22) on 14-MAY-1996 15:44:35.50
    Process name          PID        File name
                          00000000   [000000]INDEXF.SYS;1
    DECW$SESSION          226000E6   [SNOW]DECW$SM.LOG;6
    _FTA17:               2260009C   [SNOW.MAIL]MAIL.MAI;1
    SNOW_1                2260012F   [SNOW.MAIL]MAIL.MAI;1
    SNOW_2                22600142   [SNOW.MAIL]MAIL.MAI;1
    SNOW_3                22600143   [SNOW.MAIL]MAIL.MAI;1
    SYSMAN> SET ENVIRONMENT/NODE=(NODE21,NODE22)
    %SYSMAN-I-ENV, Current command environment:
            Individual nodes: NODE21,NODE22
            Username SMITH will be used on nonlocal nodes
    SYSMAN> DO SHOW MEMORY
    %SYSMAN-I-OUTPUT, command execution on node NODE21
                  System Memory Resources on 14-MAY-1996 15:59:21.61
    Physical Memory Usage (pages):    Total        Free      In Use    Modified
      Main Memory (64.00Mb)          131072       63955       65201        1916
    Slot Usage (slots):               Total        Free    Resident     Swapped
      Process Entry Slots               360         296          64           0
      Balance Set Slots                 324         262          62           0
    Fixed-Size Pool Areas (packets):  Total        Free      In Use        Size
      Small Packet (SRP) List         10568        1703        8865         128
      I/O Request Packet (IRP) List    3752         925        2827         176
      Large Packet (LRP) List           157          28         129        1856
    Dynamic Memory Usage (bytes):     Total        Free      In Use     Largest
      Nonpaged Dynamic Memory       1300480       97120     1203360       60112
      Paged Dynamic Memory          1524736      510496     1014240      505408
    Paging File Usage (pages):                     Free  Reservable       Total
      DISK$MTWAIN_SYS:[SYS0.SYSEXE]SWAPFILE.SYS   10000       10000       10000
      DISK$MTWAIN_SYS:[SYS0.SYSEXE]PAGEFILE.SYS   60502      -52278      100000
    Of the physical pages in use, 19018 pages are permanently allocated to VMS.
    %SYSMAN-I-OUTPUT, command execution on node NODE22
                  System Memory Resources on 14-MAY-1996 15:59:42.65
    Physical Memory Usage (pages):    Total        Free      In Use    Modified
      Main Memory (32.00Mb)           65536       44409       20461         666
    Slot Usage (slots):               Total        Free    Resident     Swapped
      Process Entry Slots               240         216          24           0
      Balance Set Slots                 212         190          22           0
    Fixed-Size Pool Areas (packets):  Total        Free      In Use        Size
      Small Packet (SRP) List          5080        2610        2470         128
      I/O Request Packet (IRP) List    3101        1263        1838         176
      Large Packet (LRP) List            87          60          27        1856
    Dynamic Memory Usage (bytes):     Total        Free      In Use     Largest
      Nonpaged Dynamic Memory       1165312      156256     1009056      114432
      Paged Dynamic Memory          1068032      357424      710608      352368
    Paging File Usage (pages):                     Free  Reservable       Total
      DISK$MTWAIN_SYS:[SYS1.SYSEXE]SWAPFILE.SYS   10000       10000       10000
      DISK$MTWAIN_SYS:[SYS1.SYSEXE]PAGEFILE.SYS  110591       68443      120000
    Of the physical pages in use, 9056 pages are permanently allocated to VMS.
This chapter introduces the basic network software options available for OpenVMS systems. The material in this chapter is intended only as an introduction; refer to the documentation set for the network product or products you are using for complete planning, installation, configuration, use, and management information.
On OpenVMS systems, three types of network functionality are available:

- DECnet Phase IV
- DECnet-Plus (DECnet Phase V, which adds OSI support)
- TCP/IP
Nodes running DECnet-Plus, TCP/IP, and DECnet Phase IV can coexist in the same network. You can run TCP/IP software and either DECnet-Plus or DECnet Phase IV on the same system. Table 21-1 lists the software combinations possible on a node and the applications that can be used for communication between pairs of systems.
| If System A Has... | And System B Has... | Then Systems A and B Can Communicate Using... |
|---|---|---|
| TCP/IP | TCP/IP | TCP/IP applications |
| DECnet Phase IV | DECnet Phase IV | DECnet applications |
| DECnet-Plus | DECnet-Plus | DECnet applications; OSI applications |
| DECnet-Plus | DECnet Phase IV | DECnet applications |
| DECnet-Plus | OSI | OSI applications |
| TCP/IP and DECnet Phase IV | TCP/IP | TCP/IP applications |
| TCP/IP and DECnet Phase IV | DECnet Phase IV | DECnet applications |
| TCP/IP and DECnet-Plus | TCP/IP | TCP/IP applications |
| TCP/IP and DECnet-Plus | DECnet-Plus | DECnet applications; OSI applications |
| TCP/IP and DECnet-Plus | TCP/IP and DECnet-Plus | OSI applications; DECnet applications; DECnet applications via DECnet over TCP/IP (RFC 1859); OSI applications via OSI over TCP/IP (RFC 1006); TCP/IP applications |
| TCP/IP and DECnet-Plus | OSI (supporting RFC 1006) and TCP/IP | OSI applications; OSI over TCP/IP (RFC 1006); TCP/IP applications |
| TCP/IP and DECnet-Plus | OSI (not supporting RFC 1006) and TCP/IP | OSI applications; TCP/IP applications |
For an introduction to DECnet-Plus and a roadmap of the documentation set, see DECnet-Plus for OpenVMS Introduction and User's Guide.
For an introduction to Digital TCP/IP Services for OpenVMS, see the Digital TCP/IP Services for OpenVMS Concepts and Planning Guide.
A comprehensive list of DECnet-Plus and TCP/IP Services for OpenVMS documentation is provided at the end of this chapter (see Section 21.4).
The following sections introduce DECnet-Plus and Digital TCP/IP Services for OpenVMS.
DECnet-Plus for OpenVMS provides the means for various Digital operating systems to communicate with each other and with systems provided by other vendors. The DECnet-Plus network supports remote system communication, resource sharing, and distributed processing. Network users can access resources on any system in the network. Each system participating in the network is known as a network node. In addition, DECnet-Plus includes support for the Internet standard RFC 1006 and the Internet draft RFC 1859, allowing OSI and DECnet applications to run over TCP/IP. Thus, using DECnet-Plus, applications can connect to and communicate with peer OSI and DECnet applications on any DECnet Phase IV-based or OSI-based system, whether from Digital or from other vendors.
Table 21-2 defines terms related to DECnet-Plus networks.