
OpenVMS System Manager's Manual



In this example, the default file type .COM is assumed because the file type is omitted.

For more information on creating command procedures, see the SAVE command in the OpenVMS System Management Utilities Reference Manual.

20.4 Understanding SYSMAN and OpenVMS Cluster Management

The System Management utility (SYSMAN) provides two kinds of support for OpenVMS Cluster management:

  • SYSMAN CONFIGURATION commands enable you to manage security data and system time for an OpenVMS Cluster system, as described in Section 20.5.
  • The SYSMAN command DO enables you to execute DCL commands and command procedures on all nodes in the current environment, as described in Section 20.6.

Each SYSMAN command requires a specific level of privilege. For more information on each command, see the OpenVMS System Management Utilities Reference Manual.

20.5 Using SYSMAN to Manage Security and System Time

You can manage security data and system time for an OpenVMS Cluster system with SYSMAN CONFIGURATION commands. Table 20-4 summarizes these CONFIGURATION commands and their functions.

Table 20-4 SYSMAN CONFIGURATION Commands

Command                                     Function
CONFIGURATION SET CLUSTER_AUTHORIZATION     Modifies the group number and password in a local area cluster
CONFIGURATION SHOW CLUSTER_AUTHORIZATION    Displays the group number and multicast address of a local area cluster
CONFIGURATION SET TIME                      Updates system time
CONFIGURATION SHOW TIME                     Displays current system time

20.5.1 Modifying the Group Number and Password

The group number identifies the group of nodes in the cluster, and the associated Ethernet address is used to send messages to all nodes in the cluster. The OpenVMS Cluster password protects the integrity of the cluster membership.

Use the CONFIGURATION SET CLUSTER_AUTHORIZATION command to modify the group number and password, as recorded in SYS$SYSTEM:CLUSTER_AUTHORIZE.DAT. Normally, you do not need to alter records in the CLUSTER_AUTHORIZE.DAT file.

If your configuration has multiple system disks, SYSMAN automatically updates each copy of CLUSTER_AUTHORIZE.DAT, provided that you have defined the environment as a cluster with the SET ENVIRONMENT/CLUSTER command.


Caution

If you change either the group number or password, you must reboot the entire cluster.

You cannot display the cluster password for security reasons, but you can display the group number and group multicast address with the CONFIGURATION SHOW CLUSTER_AUTHORIZATION command.

Examples

  1. The following command example sets the environment to a specific cluster, sets privilege to SYSPRV, and modifies the cluster password:
    SYSMAN> SET ENVIRONMENT/CLUSTER/NODE=NODE21 
    SYSMAN> SET PROFILE/PRIVILEGE=SYSPRV 
    SYSMAN> CONFIGURATION SET CLUSTER_AUTHORIZATION/PASSWORD=GILLIAN
    %SYSMAN-I-CAFOLDGROUP, existing group will not be changed
    %SYSMAN-I-GRPNOCHG, Group number not changed
    %SYSMAN-I-CAFREBOOT, cluster authorization file updated.
    The entire cluster should be rebooted.
    
  2. The following command example displays the group number and multicast address for NODE21. Because the group number and password on other nodes in the cluster are identical, no further information is displayed.
    SYSMAN> CONFIGURATION SHOW CLUSTER_AUTHORIZATION
    Node NODE21: Cluster group number 65240 
    Multicast address: AB-00-04-01-F2-FF 
    

20.5.2 Modifying the System Time

Use the CONFIGURATION SET TIME command to modify system time for nodes in an OpenVMS Cluster system, as well as for individual nodes. You can specify time values in the following format:

[dd-mmm-yyyy[:]] [hh:mm:ss.cc] 

You can also enter delta time values. See the OpenVMS User's Manual for more information about time formats.
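For example, either of the following commands specifies a valid time value (the values shown are illustrative only):

SYSMAN> CONFIGURATION SET TIME 19-APR-1996:13:30:00
SYSMAN> CONFIGURATION SET TIME 13:30:00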

In a cluster environment, SYSMAN sets the time on each node to the value you specify. However, if you do not specify a value, SYSMAN reads the clock on the node from which you are executing SYSMAN and assigns this value to all nodes in the cluster. In a remote cluster, SYSMAN reads the clock on the target node in the cluster and assigns that value to all nodes. Note that the time-of-year clock is optional for some processors; see your processor's hardware handbook for more information.

SYSMAN tries to ensure that all processors in the cluster are set to the same time. Because of communication and processing delays, it is not possible to synchronize clocks exactly. However, the variation is typically less than a few hundredths of a second. If SYSMAN cannot set the time to within one-half second of the specified time, you receive a warning message that names the node that failed to respond quickly enough.

As a result of slight inaccuracies in each processor clock, times on various members of a cluster tend to drift apart. The first two examples show how to synchronize system time in a cluster.

Examples

  1. The following procedure sets the time on all cluster nodes to the value obtained from the local time-of-year clock, waits 6 hours, then resets the time for the cluster:
    $ SYNCH_CLOCKS: 
    $ RUN SYS$SYSTEM:SYSMAN 
          SET ENVIRONMENT/CLUSTER 
          CONFIGURATION SET TIME 
          EXIT       
    $ WAIT 6:00:00 
    $ GOTO SYNCH_CLOCKS 
    
  2. The next example sets the environment to NODE21, NODE22, and NODE23, sets privilege, and modifies the system time on all three nodes:
    SYSMAN> SET ENVIRONMENT/NODE=(NODE21,NODE22,NODE23) 
    SYSMAN> SET PROFILE/PRIVILEGE=LOG_IO 
    SYSMAN> CONFIGURATION SET TIME 12:38:00
    
  3. The following example sets the environment to cluster and displays the system time for all nodes:
    SYSMAN> SET ENVIRONMENT/CLUSTER/NODE=NODE23
    SYSMAN> CONFIGURATION SHOW TIME 
    System time on node NODE21: 19-APR-1996 13:32:19.45          
    System time on node NODE22: 19-APR-1996 13:32:27.79
    System time on node NODE23: 19-APR-1996 13:32:58.66
    

20.5.2.1 Resetting System Time After January 1

The Time of Day Register (TODR), which the system uses to maintain system time, has a limit of approximately 15 months. Between January 1 and April 1, reset the system time; otherwise, the following problems might occur:

  • The system might set the time to a year in the future or a year in the past.
  • A node's boot time might be recorded as one year in the future, causing certain forms of the SHOW SYSTEM command to display incorrect values.

Because the TODR has an approximate limit of 15 months, the system maintains time by combining the TODR value with a base time recorded in the base system image (SYS$LOADABLE_IMAGES:SYS.EXE). The definition of base time is:

01-JAN-CURRENT_YEAR 00:00:00.00 

Because all TODRs ordinarily have the same base, multiple CPUs can boot off the same system disk, and you can use multiple system disks on one CPU; the system sets the time correctly.
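For example, assuming a base time of 01-JAN-1996 00:00:00.00 recorded in the system image and a TODR offset of 100 days (illustrative values), the system computes the current time as:

01-JAN-1996 00:00:00.00 + 100 days = 10-APR-1996 00:00:00.00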

When a SET TIME command is issued (with or without specifying a time), OpenVMS does the following:

  1. Writes the current time to the system image file
  2. Resets the TODR as an offset within the current year

In an OpenVMS Cluster system (or for a node that is not part of the cluster), when you set the time, the TODR and the base time in the system image are reset with the values for the new year. However, multiple systems might share the system image. This does not normally cause a problem except after the first day of a new year.


Note

The system issues the SET TIME command when it boots and as a part of the normal SHUTDOWN command procedure.

By December, each node has a very large offset stored in the TODR (from the base time of 1-JAN of that year). When the time advances to a new year, the system image still has the old year and the TODR values are still large.

After January 1, if a SET TIME command is issued on any node (or any node is shut down using SHUTDOWN.COM), the following happens:

  1. The new year becomes the base year.
  2. The system resets the TODR on that node.
  3. The other nodes still have a large value in the TODR.

After these three events occur, if a node that has a large TODR crashes and rejoins the cluster, its system time is initially in the next year (applying the large TODR to the new year). This system time is recorded as the system's boot time. When the node joins the cluster, its time is set to the correct value but the boot time remains one year in the future. Certain forms of the SHOW SYSTEM command compare current time to boot time; in this instance, SHOW SYSTEM displays incorrect values.

If a system disk is used at different times by different, unclustered CPUs, or if different system disks are used at different times on the same CPU, the system might incorrectly set the time to a year in the future or a year in the past, depending on how the CPU's TODR and the value recorded on the system disk become unsynchronized.

Example

The following example uses SYSMAN commands to reset the time on all nodes in an OpenVMS Cluster system:

$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> SET PROFILE/PRIVILEGE=(LOG_IO,SYSLCK)
SYSMAN> CONFIGURATION SET TIME 05-JUN-1996:12:00:00
SYSMAN> EXIT

Notes

On a node that is not part of a cluster, use the DCL command SET TIME and specify a time. If you do not specify a time, the SET TIME command updates the system time using the time in the TODR.
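For example, the following command, issued from a suitably privileged account on a standalone node, sets an absolute time (the value shown is illustrative):

$ SET TIME=01-JAN-1997:00:01:00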

If you are running the Digital Distributed Time Service (DECdts) on your system, you must use it to set the time.


20.6 Using the SYSMAN Command DO to Manage an OpenVMS Cluster

The SYSMAN command DO enables you to execute a DCL command or command procedure on all nodes in the current environment. This is convenient when you are performing routine system management tasks on nodes in the OpenVMS Cluster system, such as:

  • Installing images
  • Starting up software
  • Checking devices
  • Checking available memory

Each DO command executes as an independent process, so there is no process context retained between DO commands. For this reason, you must express all DCL commands in a single command string, and you cannot run a program that expects input.
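For example, to display the time on every node in the current environment, you would issue the complete DCL command as a single string:

SYSMAN> DO SHOW TIME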

In a cluster environment, SYSMAN executes the commands sequentially on all nodes in the cluster. Each command executes completely before SYSMAN sends it to the next node in the environment. Any node that is unable to execute the command returns an error message. SYSMAN displays an error message if the timeout period expires before the node responds.

In a dual-architecture heterogeneous OpenVMS Cluster running both OpenVMS VAX and OpenVMS Alpha, some uses of the DO command may require special handling. For example, if you are installing images that are named differently in each architecture, you can still use the DO command if you create separate logical name tables for VAX and for Alpha nodes, as the third example in this section shows.

Some DCL commands, such as MOUNT/CLUSTER or SET QUORUM/CLUSTER, operate clusterwide by design. It is best to avoid using these kinds of commands with the DO command in SYSMAN when the environment is set to cluster. As alternatives, you could leave SYSMAN temporarily with the SPAWN command and execute these commands in DCL, or you could define the environment to be a single node within the cluster.
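For example, rather than issuing a clusterwide MOUNT through the DO command, you could spawn out to DCL and execute it once (the device name and volume label are illustrative):

SYSMAN> SPAWN MOUNT/CLUSTER $1$DIA2: DISK2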

Examples

  1. The following example installs an image on a cluster. First, it adds CMKRNL and SYSPRV privileges to the current privileges because they are required by INSTALL and AUTHORIZE. The DO INSTALL command installs the file STATSHR. The DO MCR AUTHORIZE command sets up an account for user Jones, specifying a password and a default device and directory.
    SYSMAN> SET PROFILE/PRIVILEGES=(CMKRNL,SYSPRV)/DEFAULT=SYS$SYSTEM
    SYSMAN> DO INSTALL ADD/OPEN/SHARED WRKD$:[MAIN]STATSHR
    SYSMAN> DO MCR AUTHORIZE ADD JONES/PASSWORD=COLUMBINE -
    _SYSMAN> /DEVICE=WORK1/DIRECTORY=[JONES]
    
  2. The following example sets the environment to cluster and starts up a software product called XYZ on each node in the cluster:
    SYSMAN> SET ENVIRONMENT/CLUSTER
    %SYSMAN-I-ENV, Current command environment:
            Clusterwide on local cluster 
            Username SMITH    will be used on nonlocal nodes
    SYSMAN> DO @SYS$STARTUP:XYZ_STARTUP
    
  3. The following example shows how you can define logical names for VAX and Alpha nodes in a dual-architecture heterogeneous cluster, so that you can use the DO command to install architecture-specific images.
    $ CREATE/NAME_TABLE/PARENT=LNM$SYSTEM_DIRECTORY SYSMAN$NODE_TABLE
    $ DEFINE/TABLE=SYSMAN$NODE_TABLE ALPHA_NODES NODE21,NODE22,NODE23    
    $ DEFINE/TABLE=SYSMAN$NODE_TABLE VAX_NODES NODE24,NODE25,NODE26    
    $ RUN SYS$SYSTEM:SYSMAN
    SYSMAN> SET ENVIRONMENT/NODE=ALPHA_NODES
    %SYSMAN-I-ENV, current command environment: 
             Individual nodes: NODE21,NODE22,NODE23 
             Username BOUCHARD will be used on nonlocal nodes
     
    SYSMAN> DO INSTALL REPLACE SYS$LIBRARY:DCLTABLES.EXE
    %SYSMAN-I-OUTPUT, command execution on node NODE21 
    %SYSMAN-I-OUTPUT, command execution on node NODE22 
    %SYSMAN-I-OUTPUT, command execution on node NODE23
    SYSMAN> DO INSTALL REPLACE SYS$SYSTEM:DEC_FORTRAN.EXE
    %SYSMAN-I-OUTPUT, command execution on node NODE21 
    %SYSMAN-I-OUTPUT, command execution on node NODE22 
    %SYSMAN-I-OUTPUT, command execution on node NODE23
     
    SYSMAN> SET ENVIRONMENT/NODE=VAX_NODES
    %SYSMAN-I-ENV, current command environment: 
             Individual nodes: NODE24,NODE25,NODE26 
             Username BOUCHARD will be used on nonlocal nodes
     
    SYSMAN> DO INSTALL REPLACE SYS$LIBRARY:DCLTABLES.EXE
    %SYSMAN-I-OUTPUT, command execution on node NODE24 
    %SYSMAN-I-OUTPUT, command execution on node NODE25 
    %SYSMAN-I-OUTPUT, command execution on node NODE26
    SYSMAN> DO INSTALL REPLACE SYS$SYSTEM:FORTRAN$MAIN.EXE
    %SYSMAN-I-OUTPUT, command execution on node NODE24 
    %SYSMAN-I-OUTPUT, command execution on node NODE25 
    %SYSMAN-I-OUTPUT, command execution on node NODE26
    
  4. The following example shows which files are open on DISK2. You might use this if you want to dismount DISK2 and need to see which users in the cluster have files open.
    SYSMAN> SET ENVIRONMENT/CLUSTER
    %SYSMAN-I-ENV, Current command environment:
            Clusterwide on local cluster 
            Username SMITH    will be used on nonlocal nodes
    SYSMAN> DO SHOW DEVICE/FILES DISK2:
     
    %SYSMAN-I-OUTPUT, command execution on node NODE21 
    Files accessed on device $1$DIA2: (DISK2, NODE22) on 14-MAY-1996 15:44:06.05 
    Process name      PID     File name 
                    00000000  [000000]INDEXF.SYS;1 
    %SYSMAN-I-OUTPUT, command execution on node NODE22 
    Files accessed on device $1$DIA2: (DISK2, NODE21) on 14-MAY-1996 15:44:26.93 
    Process name      PID     File name 
                    00000000  [000000]INDEXF.SYS;1 
    %SYSMAN-I-OUTPUT, command execution on node NODE23 
    Files accessed on device $1$DIA2: (NODE21, NODE22) on 14-MAY-1996 15:45:01.43 
    Process name      PID     File name 
                    00000000  [000000]INDEXF.SYS;1 
    %SYSMAN-I-OUTPUT, command execution on node NODE24 
    Files accessed on device $1$DIA2: (NODE22, NODE21) on 14-MAY-1996 15:44:31.30 
    Process name      PID     File name 
                    00000000  [000000]INDEXF.SYS;1 
    Susan Scott     21400059  [SCOTT]DECW$SM.LOG;228 
    _FTA7:          214000DD  [SCOTT]CARE_SDML.TPU$JOURNAL;1 
    %SYSMAN-I-OUTPUT, command execution on node NODE25 
    Files accessed on device $1$DIA2: (NODE21, NODE22) on 14-MAY-1996 15:44:35.50 
    Process name      PID     File name 
                    00000000  [000000]INDEXF.SYS;1 
    DECW$SESSION    226000E6  [SNOW]DECW$SM.LOG;6 
    _FTA17:         2260009C  [SNOW.MAIL]MAIL.MAI;1 
    SNOW_1          2260012F  [SNOW.MAIL]MAIL.MAI;1 
    SNOW_2          22600142  [SNOW.MAIL]MAIL.MAI;1 
    SNOW_3          22600143  [SNOW.MAIL]MAIL.MAI;1 
    

  5. The following example shows how much memory is available on the nodes in a cluster. You might use this if you are installing software and want to know if each node has enough memory available.
    SYSMAN> SET ENVIRONMENT/NODE=(NODE21,NODE22)
    %SYSMAN-I-ENV, Current command environment:
            Individual nodes: NODE21,NODE22 
            Username SMITH    will be used on nonlocal nodes
    SYSMAN> DO SHOW MEMORY
    %SYSMAN-I-OUTPUT, command execution on node NODE21 
                  System Memory Resources on 14-MAY-1996 15:59:21.61 
    Physical Memory Usage (pages):     Total        Free      In Use    Modified 
      Main Memory (64.00Mb)           131072       63955       65201        1916 
    Slot Usage (slots):                Total        Free    Resident     Swapped 
      Process Entry Slots                360         296          64           0 
      Balance Set Slots                  324         262          62           0 
    Fixed-Size Pool Areas (packets):   Total        Free      In Use        Size 
      Small Packet (SRP) List          10568        1703        8865         128 
      I/O Request Packet (IRP) List     3752         925        2827         176 
      Large Packet (LRP) List            157          28         129        1856 
    Dynamic Memory Usage (bytes):      Total        Free      In Use     Largest 
      Nonpaged Dynamic Memory        1300480       97120     1203360       60112 
      Paged Dynamic Memory           1524736      510496     1014240      505408 
    Paging File Usage (pages):                      Free  Reservable       Total 
      DISK$MTWAIN_SYS:[SYS0.SYSEXE]SWAPFILE.SYS                                     
                                                   10000       10000       10000 
      DISK$MTWAIN_SYS:[SYS0.SYSEXE]PAGEFILE.SYS                                     
                                                   60502      -52278      100000 
    Of the physical pages in use, 19018 pages are permanently allocated to VMS. 
     
    %SYSMAN-I-OUTPUT, command execution on node NODE22 
                  System Memory Resources on 14-MAY-1996 15:59:42.65 
    Physical Memory Usage (pages):     Total        Free      In Use    Modified 
      Main Memory (32.00Mb)            65536       44409       20461         666 
    Slot Usage (slots):                Total        Free    Resident     Swapped 
      Process Entry Slots                240         216          24           0 
      Balance Set Slots                  212         190          22           0 
    Fixed-Size Pool Areas (packets):   Total        Free      In Use        Size 
      Small Packet (SRP) List           5080        2610        2470         128 
      I/O Request Packet (IRP) List     3101        1263        1838         176 
      Large Packet (LRP) List             87          60          27        1856 
    Dynamic Memory Usage (bytes):      Total        Free      In Use     Largest 
      Nonpaged Dynamic Memory        1165312      156256     1009056      114432 
      Paged Dynamic Memory           1068032      357424      710608      352368 
    Paging File Usage (pages):                      Free  Reservable       Total 
      DISK$MTWAIN_SYS:[SYS1.SYSEXE]SWAPFILE.SYS                                     
                                                   10000       10000       10000 
      DISK$MTWAIN_SYS:[SYS1.SYSEXE]PAGEFILE.SYS                                     
                                                  110591       68443      120000 
    Of the physical pages in use, 9056 pages are permanently allocated to VMS. 
    


Chapter 21
Network Considerations

This chapter introduces the basic network software options available for OpenVMS systems. The material in this chapter is intended as an introduction only; for complete planning, installation, configuration, use, and management information, refer to the documentation set for the network product or products you are using.

21.1 Network Options Available on OpenVMS Systems

On OpenVMS systems, three types of network functionality are available:

  • DECnet Phase IV
  • DECnet-Plus
  • TCP/IP

Nodes running DECnet-Plus, TCP/IP, and DECnet Phase IV can co-exist in the same network. You can run TCP/IP software and either DECnet-Plus or DECnet Phase IV on the same system. Table 21-1 lists the various software combinations possible on a node and which applications can be used for communication between various pairs of systems.

Table 21-1 Network Software Interoperability Options

If System A Has...           And System B Has...          Then Systems A and B Can Communicate Using...
TCP/IP                       TCP/IP                       TCP/IP applications
DECnet Phase IV              DECnet Phase IV              DECnet applications
DECnet-Plus                  DECnet-Plus                  DECnet applications; OSI applications
DECnet-Plus                  DECnet Phase IV              DECnet applications
DECnet-Plus                  OSI                          OSI applications
TCP/IP and DECnet Phase IV   TCP/IP                       TCP/IP applications
TCP/IP and DECnet Phase IV   DECnet Phase IV              DECnet applications
TCP/IP and DECnet-Plus       TCP/IP                       TCP/IP applications
TCP/IP and DECnet-Plus       DECnet-Plus                  DECnet applications; OSI applications
TCP/IP and DECnet-Plus       TCP/IP and DECnet-Plus       OSI applications; DECnet applications;
                                                          DECnet applications via DECnet over TCP/IP (RFC 1859)+;
                                                          OSI applications via OSI over TCP/IP (RFC 1006);
                                                          TCP/IP applications
TCP/IP and DECnet-Plus       OSI (supporting RFC 1006)    OSI applications via OSI over TCP/IP (RFC 1006);
                             and TCP/IP                   TCP/IP applications
TCP/IP and DECnet-Plus       OSI (not supporting          OSI applications; TCP/IP applications
                             RFC 1006) and TCP/IP


+RFC 1859 is an Internet draft, an extension of Internet standard RFC 1006.

For an introduction to DECnet-Plus and a roadmap of the documentation set, see DECnet-Plus for OpenVMS Introduction and User's Guide.

For an introduction to Digital TCP/IP Services for OpenVMS, see the Digital TCP/IP Services for OpenVMS Concepts and Planning Guide.

A comprehensive list of DECnet-Plus and TCP/IP Services for OpenVMS documentation is provided at the end of this chapter (see Section 21.4).

The following sections introduce DECnet-Plus and Digital TCP/IP Services for OpenVMS.

21.2 Understanding DECnet-Plus for OpenVMS Networks

DECnet-Plus for OpenVMS provides the means for various Digital operating systems to communicate with each other and with systems provided by other vendors. The DECnet-Plus network supports remote system communication, resource sharing, and distributed processing. Network users can access resources on any system in the network. Each system participating in the network is known as a network node. In addition, DECnet-Plus includes support for the Internet standard RFC 1006 and the Internet draft RFC 1859, allowing OSI and DECnet applications to run over TCP/IP. Thus, using DECnet-Plus, applications can connect to and communicate with peer OSI and DECnet applications on any DECnet Phase IV-based or OSI-based system, whether from Digital or from other vendors.

Table 21-2 defines terms related to DECnet-Plus networks.

Table 21-2 DECnet-Plus for OpenVMS Network Terminology
Term Definition
Address/Address tower DECnet-Plus systems have multiple address towers, also called protocol stacks, that describe various sets of communications protocols available for a particular node. These towers are stored in the namespace. They are used for determining the protocols that two nodes have in common so that they can communicate with each other.
Autoconfigure An option supported by DECnet-Plus in which you can have your end node's network entity title (NET) automatically configured by the adjacent router.
Domain Collection of systems that use the same routing protocol.
Entity An individual, manageable piece of a network that has attributes that describe it, a name that identifies it, and an interface that supports management operations. Examples of entities are node, routing, and OSI transport.
Extended address A DECnet-Plus network address that does not fall within the limits of DECnet Phase IV addressing and thereby provides extended addressing capabilities. A DECnet-Plus network address can also be DECnet Phase IV compatible. The DECnet-Plus configuration procedure automatically builds an extended address from the Phase IV address of your node. The extended address should be of concern only if users and applications require extended addressing for communication with other OSI systems (Digital or non-Digital).
Multihome The ability to assign more than one network address to a system. Having multiple addresses allows a system to have a DECnet-Plus extended address, a Phase IV compatible address, and a TCP/IP address, so it can communicate with DECnet Phase IV, OSI (or DECnet-Plus), and TCP/IP systems. Multihoming also allows a system to belong to more than one network.
Name service Software that manages node name and addressing information. DECnet-Plus offers a choice of three distinct name services: Local namespace, Digital Distributed Name Service (DECdns), and the Domain Name System (DNS/BIND).
Namespace The set of names stored by, and accessible to, a name service.
Network entity title In OSI terminology, a network entity title (NET) is a network address that is used for identifying the Network layer protocol for routing. A DECnet-Plus system can automatically construct (autoconfigure) a NET for each transport operating over routing.
Network service access point (NSAP) One of the following:
  • The global network address of a DECnet-Plus system
  • The addressable point at which a network entity provides the network service to a network user
  • The complete address that identifies both the particular network system and the transport module on that system that is to receive the data
The NSAP is used to determine the destination node for all packets and so must be unique in the network. The NSAP is a NET with a selector field other than 00. (A selector field identifies the transport to be used.)
Object/Application In DECnet Phase IV, an object is a process to which a logical link connects. Objects are set up by layered products that use DECnet. Some objects are DECnet system programs---for example, the Mail object; other objects are user-written programs.

In DECnet-Plus, objects are referred to as applications. Where Phase IV has an object database, DECnet-Plus has an applications database.

Phase IV compatible address A DECnet-Plus network address that falls within the limits of Phase IV addressing; that is, conforming to the Phase IV area and node limits, where the area number is from 1 to 63, and the node number is from 1 to 1023, as in 36.515. Your DECnet-Plus system needs a Phase IV compatible address to communicate with DECnet Phase IV nodes in the same network.
Time service Software that synchronizes the system clocks in computers connected by a network. The Digital Distributed Time Service (DECdts) enables distributed applications to execute in the proper sequence even though they run on different systems.

