Remote monitoring in an OpenVMS Cluster system might not be compatible, however, between nodes that are running different versions of OpenVMS. Table 18-10 shows the compatibility of versions for remote monitoring.
Versions | OpenVMS Alpha and VAX Version 6.n or 7.n | OpenVMS Alpha Version 1.5 and VAX Version 5.n |
---|---|---|
OpenVMS Alpha and VAX Version 6.n or 7.n | Yes | No |
OpenVMS Alpha Version 1.5 and VAX Version 5.n | No | Yes |
If you attempt to monitor a remote node that is incompatible, the system displays the following message:
%MONITOR-E-SRVMISMATCH, MONITOR server on remote node is an incompatible version
If you receive this message, you can still use MONITOR to obtain data about the remote node. To do this, record the data on the remote node and then run the MONITOR playback feature to examine the data on the local node.
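For example, you might record the data on the remote node and then play it back on the local node; the following is a sketch in which the class name, interval, and file name are illustrative:

$ ! On the remote node: record data to a file without displaying it
$ MONITOR ALL_CLASSES/RECORD=REMOTE_DATA.DAT/NODISPLAY/INTERVAL=60
$ ! After copying REMOTE_DATA.DAT to the local node, play it back there
$ MONITOR/INPUT=REMOTE_DATA.DAT ALL_CLASSES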
Another difference exists when you monitor remote nodes in an OpenVMS Cluster system. Beginning with OpenVMS Version 6.2, the limit on the number of disks that can be monitored was raised from 799 to 909 for record output and from 799 to 1817 for display and summary outputs. If you monitor a remote node running OpenVMS Version 6.2 or later from a system running a version earlier than OpenVMS Version 6.2, the old limit of 799 applies.
For more information on MONITOR, see the OpenVMS System Management Utilities Reference Manual.
This chapter describes how to find out how your system resources have been used. You can use this information to charge users for the resources they have used and to plan the future resource needs of your system.
Information Provided in This Chapter
This chapter describes the following tasks:
Task | Section |
---|---|
Determining which resources are being tracked | Section 19.2 |
Controlling which resources are tracked | Section 19.3 |
Starting up a new accounting file | Section 19.4 |
Moving the accounting file | Section 19.5 |
Producing reports of resource use | Section 19.6 |
Setting up accounting groups | Section 19.7 |
Monitoring disk space | Section 19.8 |
This chapter explains the following concept:
Concept | Section |
---|---|
Accounting files | Section 19.1 |
The system gathers information on resource use; for example, it can record the amount of CPU time used by each print job. The system stores this information in accounting files.
The resources tracked by default depend on the model of computer you use. However, you can control which resources are tracked. If you do not want to track resource use, you can disable the tracking altogether. (See Section 19.3.)
Each node in an OpenVMS Cluster has its own accounting file, known as its current accounting file. By default, this file is SYS$MANAGER:ACCOUNTNG.DAT, but you can control which file is used (see Section 19.5).
The information in the accounting files is in binary. You cannot display it with the TYPE command. To display the information, use the Accounting utility (ACCOUNTING). (See Section 19.6.)
To determine which resources are currently being tracked, use the SHOW ACCOUNTING command:
$ SHOW ACCOUNTING
This command produces a screen display (see the example) that contains keywords in the following two categories:
Keyword | Type of Resource |
---|---|
IMAGE | Resources used by an image |
LOGIN_FAILURE | Resources used by an unsuccessful attempt to log in |
MESSAGE | Unformatted resource record written to the accounting file by a call to the $SNDJBC system service |
PRINT | Resources used by a print job |
PROCESS | Resources used by a process |
Keyword | Type of Process |
---|---|
BATCH | Batch process |
DETACHED | Detached process |
INTERACTIVE | Interactive process |
NETWORK | Network process |
SUBPROCESS | Subprocess (the parent process can be a batch, detached, interactive, or network process) |
Example
$ SHOW ACCOUNTING
Accounting is currently enabled to log the following activities:

      PROCESS          any process termination
      IMAGE            image execution
      INTERACTIVE      interactive job termination
      LOGIN_FAILURE    login failures
      NETWORK          network job termination
      PRINT            all print jobs
The keywords in this example show that the local node is tracking the resources used by each process, image, interactive job, unsuccessful login attempt, network job, and print job.
You can control which resources the system tracks. To save disk space, you can stop the system tracking resources you are not interested in.
How to Perform This Task
Use the SET ACCOUNTING command with the /DISABLE and /ENABLE qualifiers in the following format:
SET ACCOUNTING [/DISABLE[=(keyword[,...])]] [/ENABLE[=(keyword[,...])]]
Example
This example prevents the tracking of all resources except those used by interactive and batch processes:
$ SET ACCOUNTING/DISABLE/ENABLE=(PROCESS,INTERACTIVE,BATCH)
The /DISABLE qualifier is not followed by a keyword. Therefore, the qualifier disables the tracking of all resources. The /ENABLE qualifier then enables the tracking of the resources used by interactive and batch processes.
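If you later decide to track another resource, you can enable it without disturbing the current settings. For example, this command (a sketch) resumes tracking of print jobs:

$ SET ACCOUNTING/ENABLE=PRINT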
To start up a new current accounting file, use the following command:
$ SET ACCOUNTING/NEW_FILE
This closes the current accounting file and opens a new version of it.
If the system encounters an error when trying to write information to the current accounting file, it automatically closes the file and opens a new version of it.
Example
This example closes the current accounting file, opens a new version of it, and changes the name of the old file to WEEK_24_RESOURCES.DAT. You can retain this file as a record of the resources used in that week.
$ SET ACCOUNTING/NEW_FILE
$ RENAME SYS$MANAGER:ACCOUNTNG.DAT;-1 WEEK_24_RESOURCES.DAT
When you first install your system, the current accounting file is SYS$MANAGER:ACCOUNTNG.DAT.
This file can become quite large. Moving it from your system disk can improve system performance.
To move the file, define the systemwide logical name ACCOUNTNG to point to the new location (typically in SYS$MANAGER:SYLOGICALS.COM, so that the definition is made each time the system starts up), for example:
$ DEFINE ACCOUNTNG MYDISK:[MYDIR]MYFILE.DAT/SYSTEM
Note
Two nodes cannot log information in the same accounting file. If you define ACCOUNTNG on two nodes to point to the same file, each node will open and use its own version of the file.
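One way to avoid this conflict is to build a node-specific file name into each node's definition of ACCOUNTNG. The following is a sketch only; the disk and directory names are placeholders:

$ ! Build a node-specific file name using the F$GETSYI lexical function
$ NODE = F$GETSYI("NODENAME")
$ DEFINE/SYSTEM ACCOUNTNG MYDISK:[MYDIR]ACCOUNTNG_'NODE'.DAT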
Then start a new accounting file so that the system begins writing records to the file in the new location:
$ SET ACCOUNTING/NEW_FILE
Example
This example changes the current accounting file to MYDISK:[MYDIR]MYFILE.DAT.
$ DEFINE ACCOUNTNG MYDISK:[MYDIR]MYFILE.DAT/SYSTEM
$ SET ACCOUNTING/NEW_FILE
To produce reports of resource use, run the Accounting utility (ACCOUNTING). The three types of reports and the qualifiers that produce them are:
Type of Report | Qualifier |
---|---|
Brief | /BRIEF (the default) |
Full | /FULL |
Summary | /SUMMARY |
To produce a report, use the ACCOUNTING command with the appropriate qualifier in the following format:
ACCOUNTING [filespec[,...]] [/qualifier[,...]]
This runs the Accounting utility. The filespec parameter lists the accounting files you want to process. If you omit it, the Accounting utility processes the default current accounting file, SYS$MANAGER:ACCOUNTNG.DAT.
By default, the Accounting utility processes all the records in the accounting files you specify. You can use selection qualifiers to specify which records you want to process.
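For example, the following command (a sketch; the date and user name are illustrative) produces a full report of only those records logged since April 1 for user JONES:

$ ACCOUNTING/SINCE=1-APR-1996/USER_NAME=JONES/FULL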
By default, brief and full reports present the records in the order in which they were logged in the accounting file. When you produce brief and full reports, you can use the /SORT qualifier to specify another order.
Example
This example produces a brief report of the information in the file that the logical name ACCOUNTNG points to. The /TYPE qualifier selects records for print jobs only. The /SORT qualifier displays them in reverse alphabetical order of user name.
$ ACCOUNTING ACCOUNTNG/TYPE=PRINT/SORT=-USER
Date / Time               Type     Subtype  Username   ID        Source  Status
------------------------------------------------------------------------
13-APR-1996 13:36:04      PRINT             SYSTEM     20A00442          00000001
13-APR-1996 12:42:37      PRINT             JONES      20A00443          00000001
13-APR-1996 14:43:56      PRINT             FISH       20A00456          00000001
14-APR-1996 19:39:01      PRINT             FISH       20A00265          00000001
14-APR-1996 20:09:03      PRINT             EDWARDS    20A00127          00000001
14-APR-1996 20:34:45      PRINT             DARNELL    20A00121          00000001
14-APR-1996 11:23:34      PRINT             CLARK      20A0032E          00040001
14-APR-1996 16:43:16      PRINT             BIRD       20A00070          00040001
14-APR-1996 09:30:21      PRINT             ANDERS     20A00530          00040001
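A summary report condenses the same data into totals. For example, the following command (a sketch; the qualifier values are illustrative) totals elapsed time and CPU time for each user in the same file:

$ ACCOUNTING ACCOUNTNG/SUMMARY=USER/REPORT=(ELAPSED,PROCESSOR)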
Users are already organized into UIC security groups, but those groups are often inappropriate for accounting purposes. You can put users into accounting groups with the Authorize utility, using the /ACCOUNT qualifier. In this way, each user is in both an accounting group and a security group.
You can then use the Accounting utility to report resource use by accounting group (see Section 19.6).
How to Perform This Task
Use the MODIFY command of the Authorize utility in the following format:
MODIFY username/ACCOUNT=account-name
where:
username | is the name of the user |
account-name | is the name of the accounting group that you want that user to be in |
The next time your users log in, they will be in their new accounting groups, and their resource use will be tagged with the appropriate accounting group names.
Example
This example modifies the accounting group name to SALES_W8 for the username FORD:
$ RUN SYS$SYSTEM:AUTHORIZE UAF> MODIFY FORD/ACCOUNT=SALES_W8 UAF> EXIT
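Once your users are in accounting groups, you can summarize their resource use by group. For example, this command (a sketch; the report items are illustrative) totals CPU time and elapsed time for each accounting group:

$ ACCOUNTING/SUMMARY=ACCOUNT/REPORT=(PROCESSOR,ELAPSED)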
To find out how much disk space a user is using, use SYSMAN or, if you have not enabled disk quotas, the DIRECTORY command.
How to Perform This Task
Use either of the following methods:
In SYSMAN, use the DISKQUOTA SHOW command:
DISKQUOTA SHOW owner [/DEVICE=device-spec]
At the DCL prompt, use the DIRECTORY command:
DIRECTORY [filespec[,...]]/SIZE=ALLOCATION/GRAND_TOTAL
Examples
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> DISKQUOTA SHOW *
%SYSMAN-I-QUOTA, disk quota statistics on device SYS$SYSTEM:MYDISK
Node UNION
UIC              Usage   Permanent Quota   Overdraft Limit
[0,0]                0              1000               100
[DOC,EDWARDS]   115354            150000              5000
[DOC,FISH]      177988            250000              5000
[DOC,SMITH]     140051            175000              5000
[DOC,JONES]     263056            300000              5000
$ DIRECTORY MYDISK:[PARSONS...]/SIZE=ALLOCATION/GRAND_TOTAL

Grand total of 28 directories, 2546 files, 113565 blocks.
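To see how much free space remains on a disk volume, you can use the SHOW DEVICE command or the F$GETDVI lexical function; the following is a sketch in which the device name is illustrative:

$ ! Display device characteristics, including free blocks
$ SHOW DEVICE MYDISK
$ ! Display only the number of free blocks on the volume
$ WRITE SYS$OUTPUT F$GETDVI("MYDISK","FREEBLOCKS")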
This chapter describes concepts related to the OpenVMS Cluster environment; it also tells how the Show Cluster utility (SHOW CLUSTER) can display information about a cluster and how the System Management utility (SYSMAN) can help you manage an OpenVMS Cluster environment.
Information Provided in This Chapter
This chapter describes the following tasks:
Task | Section |
---|---|
Beginning to use SHOW CLUSTER commands | Section 20.3.2 |
Adding information to a report | Section 20.3.3 |
Controlling the display of data | Section 20.3.4 |
Formatting the display of data | Section 20.3.5 |
Creating a startup initialization file | Section 20.3.6 |
Using command procedures containing SHOW CLUSTER commands | Section 20.3.7 |
Using SYSMAN to manage security and system time | Section 20.5 |
Using the SYSMAN DO command to manage an OpenVMS Cluster | Section 20.6 |
This chapter explains the following concepts:
Concept | Section |
---|---|
OpenVMS Cluster systems | Section 20.1 |
Setting up an OpenVMS Cluster environment | Section 20.1.1 |
Clusterwide system management | Section 20.1.2 |
The Show Cluster utility (SHOW CLUSTER) | Section 20.3.1 |
SYSMAN and OpenVMS Cluster management | Section 20.4 |
An OpenVMS Cluster system is a loosely coupled configuration of two or more computers and storage subsystems, including at least one Alpha computer. An OpenVMS Cluster system appears as a single system to the user even though it shares some or all of the system resources. When a group of computers shares resources clusterwide, the storage and computing resources of all of the computers are combined, which can increase the processing capability, communications, and availability of your computing system.
A shared resource is a resource (such as a disk) that can be accessed and used by any node in an OpenVMS Cluster system. Data files, application programs, and printers are just a few items that can be accessed by users on a cluster with shared resources, without regard to the particular node on which the files or program or printer might physically reside.
When disks are set up as shared resources in an OpenVMS Cluster environment, users have the same environment (password, privileges, access to default login disks, and so on) regardless of the node that is used for logging in. Shared disks also make more efficient use of mass storage, because the information on any device can be used by more than one node; the information does not have to be rewritten in many places. You can use the OpenVMS MSCP (mass storage control protocol) or TMSCP (tape mass storage control protocol) server software to make disks and tapes accessible to nodes that are not directly connected to the storage devices.
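MSCP and TMSCP serving are controlled by system parameters that are normally set in SYS$SYSTEM:MODPARAMS.DAT and applied with AUTOGEN. The following entries are a sketch only; the values are illustrative, so see OpenVMS Cluster Systems for the settings appropriate to your configuration:

! MODPARAMS.DAT entries (illustrative values)
MSCP_LOAD = 1       ! Load the MSCP disk server on this node
MSCP_SERVE_ALL = 1  ! Serve disks to other cluster members (value is illustrative)
TMSCP_LOAD = 1      ! Load the TMSCP tape server on this node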
You can also set up print and batch queues as shared resources. In an OpenVMS Cluster configuration with shared print and batch queues, a single queue database manages the queues for all nodes. The queue database makes the queues available from any node. For example, suppose your cluster configuration has fully shared resources and includes nodes ALBANY, BASEL, and CAIRO. A user logged in to node ALBANY can send a file that physically resides on node BASEL to a printer that is physically connected to node CAIRO, and the user never has to specify (or even know) the nodes for either the file or the printer.
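In such a configuration, the user simply names the queue; for example (a sketch, with an illustrative queue name and file specification):

$ PRINT/QUEUE=CAIRO_PRINT BASEL_WORK:[REPORTS]STATUS.TXT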
Planning an OpenVMS Cluster System
A number of types of OpenVMS Cluster configurations are possible. Refer to Guidelines for OpenVMS Cluster Configurations and the OpenVMS Cluster Software Product Description (SPD) for complete information about supported devices and configurations.
The following sections briefly describe OpenVMS Cluster systems. For complete information about setting up and using an OpenVMS Cluster environment, see OpenVMS Cluster Systems.
Once you have planned your configuration, installed the necessary hardware, and checked hardware devices for proper operation, you can set up an OpenVMS Cluster system using various system software facilities. The procedures you follow to build your cluster are listed in the following table.
Procedure | For More Information |
---|---|
Installing or upgrading the operating system on the first OpenVMS Cluster computer | Installation and operations guide for your computer |
Installing required software licenses | OpenVMS License Management Utility Manual |
Configuring and starting the DECnet for OpenVMS network | DECnet for OpenVMS Networking Manual |
Preparing files that define the cluster operating environment and that control disk and queue operations | OpenVMS Cluster Systems |
Adding computers to the cluster | OpenVMS Cluster Systems |
Depending on various factors, the order in which these operations are performed can vary from site to site, as well as from cluster to cluster at the same site.
Once any system is installed, you must decide how to manage users and resources for maximum productivity and efficiency while maintaining the necessary security. OpenVMS Cluster systems provide the flexibility to distribute users and resources to suit the needs of the environment. OpenVMS Cluster system resources can also be easily redistributed as needs change. Even with the vast number of resources available, you can manage the cluster configuration as a single system.
You have several tools and products to help you manage your cluster as a unified entity.
OpenVMS Cluster Tools
The following utilities are provided with the operating system:
Utility | Description |
---|---|
DECamds | Collects and analyzes data from multiple nodes simultaneously, directing all output to a centralized DECwindows display. (Refer to Section 20.2 and the DECamds User's Guide.) |
Monitor utility (MONITOR) | Provides basic performance data. (See Section 18.8.) |
Show Cluster utility (SHOW CLUSTER) | Monitors activity in an OpenVMS Cluster configuration, and then collects and sends information about that activity to a terminal or other output device. (Described in Section 20.3.) |
System Management utility (SYSMAN) | Allows you to send common control commands across all, or a subset of, the nodes in the cluster. (Described in Section 20.6.) |
System Management Applications
The following products are not provided with the operating system:
Product | Description |
---|---|
POLYCENTER solutions | A comprehensive set of operations management products and services to help you manage complex distributed environments. However, the POLYCENTER Software Installation utility is described in this manual in Section 3.8. |
Storage Library System (SLS) for VAX and Storage Library System (SLS) for Alpha | A set of software tools for managing tape, cartridge tape, and optical disk storage. |
OpenVMS Cluster Console System (VCS) | Designed to consolidate the console management of the OpenVMS Cluster system at a single console terminal. |
You can find additional information about these system management tools in the appropriate product documentation.
The Digital Availability Manager for Distributed Systems (DECamds) is a real-time monitoring, diagnostic, and correction tool that helps system managers improve OpenVMS system and OpenVMS Cluster availability. DECamds can help system programmers and analysts target a specific node or process for detailed analysis, and can help system operators and service technicians resolve hardware and software problems.
DECamds simultaneously collects and analyzes system and process data from multiple nodes and displays the output on a DECwindows Motif display. Based on the collected data, DECamds detects resource availability and denial problems and proposes corrective actions in real time.
For more information, see the DECamds User's Guide.
The Show Cluster utility (SHOW CLUSTER) monitors nodes in an OpenVMS Cluster system. You can use the utility to display information about cluster activity and performance.
The following sections describe the Show Cluster utility and explain how to perform these tasks:
Task | Section |
---|---|
Begin to use SHOW CLUSTER commands | Section 20.3.2 |
Add information to a report | Section 20.3.3 |
Control the display of data | Section 20.3.4 |
Format the display of data | Section 20.3.5 |
Create a startup initialization file | Section 20.3.6 |
Use command procedures containing SHOW CLUSTER commands | Section 20.3.7 |
You can display SHOW CLUSTER information on your terminal screen or send it to a device or a file. You can use SHOW CLUSTER interactively, with command procedures, or with an initialization file in which you define default settings. Because this utility is installed with the CMKRNL privilege, SHOW CLUSTER requires no special privilege.
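For example, you might start a continuously updated display and then add a class of information to it; the following is a sketch in which the interval and class name are illustrative:

$ SHOW CLUSTER/CONTINUOUS/INTERVAL=10
Command> ADD CIRCUITS
Command> EXIT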