The advantages and disadvantages of the configuration shown in Figure 7-22 include:
Advantages
Disadvantages
Figure 7-23 shows an OpenVMS Cluster configuration that provides high performance and high availability on the FDDI ring.
Figure 7-23 High-Powered Workstation Server Configuration
In Figure 7-23, several Alpha workstations, each with its own system disk, are connected to the FDDI ring. Putting Alpha workstations on the FDDI provides high performance because each workstation has direct access to its system disk. In addition, the FDDI bandwidth is higher than that of the Ethernet. Because Alpha workstations have FDDI adapters, putting these workstations on an FDDI is a useful alternative for critical workstation requirements. FDDI is 10 times faster than Ethernet, and Alpha workstations have processing capacity that can take advantage of FDDI's speed.
The following are guidelines for setting up an OpenVMS Cluster with satellites:
You can use bridges between LAN segments to form an extended LAN (ELAN). This can increase availability, distance, and aggregate bandwidth as compared with a single LAN. However, an ELAN can increase delay and can reduce bandwidth on some paths. Factors such as packet loss, queuing delays, and packet size can also affect ELAN performance. Table 7-3 provides guidelines for ensuring adequate LAN performance when dealing with such factors.
Factor | Guidelines |
---|---|
Propagation delay |
The amount of time it takes a packet to traverse the ELAN depends on
the distance it travels and the number of times it is relayed from one
link to another by a bridge or a station on the FDDI ring. If
responsiveness is critical, then you must control these factors.
When an FDDI is used for OpenVMS Cluster communications, the ring latency when the FDDI ring is idle should not exceed 400 microseconds. FDDI packets travel at 5.085 microseconds/km, and each station causes an approximate 1-microsecond delay between receiving and transmitting. You can calculate FDDI latency by using the following formula: Latency = (distance in km) * (5.085 microseconds/km) + (number of stations) * (1 microsecond/station). For high-performance applications, limit the number of bridges between nodes to two. For situations in which high performance is not required, you can use up to seven bridges between nodes. |
Queuing delay |
Queuing occurs when the instantaneous arrival rate at bridges and host
adapters exceeds the service rate. You can control queuing by:
|
Packet loss |
Packets that are not delivered by the ELAN require retransmission,
which wastes network resources, increases delay, and reduces bandwidth.
Bridges and adapters discard packets when they become congested. You
can reduce packet loss by controlling queuing, as previously described.
Packets are also discarded when they become damaged in transit. You can control this problem by observing LAN hardware configuration rules, removing sources of electrical interference, and ensuring that all hardware is operating correctly. Packet loss can also be reduced by using VMS Version 5.5-2 or later, which has PEDRIVER congestion control. The retransmission timeout rate, which is a symptom of packet loss, must be less than 1 timeout in 1000 transmissions for OpenVMS Cluster traffic from one node to another. ELAN paths that are used for high-performance applications should have a significantly lower rate. Monitor the occurrence of retransmission timeouts in the OpenVMS Cluster. Reference: For information about monitoring the occurrence of retransmission timeouts, see OpenVMS Cluster Systems. |
Bridge recovery delay |
Choose bridges with fast self-test time and adjust bridges for fast
automatic reconfiguration.
Reference: Refer to OpenVMS Cluster Systems for more information about LAN bridge failover. |
Bandwidth |
All LAN paths used for OpenVMS Cluster communication must operate with
a nominal bandwidth of at least 10 Mb/s. The average LAN segment
utilization should not exceed 60% for any 10-second interval.
Use FDDI exclusively on the communication paths that have the highest performance requirements. Do not put an Ethernet LAN segment between two FDDI segments. FDDI bandwidth is significantly greater, and the Ethernet LAN will become a bottleneck. Such a configuration is especially ineffective if a server on one FDDI must serve clients on another FDDI with an Ethernet LAN between them. A more appropriate strategy is to put a server on an FDDI and put clients on an Ethernet LAN, as Figure 7-22 shows. |
Traffic isolation |
Use bridges to isolate and localize the traffic between nodes that
communicate with each other frequently. For example, use bridges to
separate the OpenVMS Cluster from the rest of the ELAN and to separate
nodes within an OpenVMS Cluster that communicate frequently from the
rest of the OpenVMS Cluster.
Provide independent paths through the ELAN between critical systems that have multiple adapters. |
Packet size |
You can adjust the NISCS_MAX_PKTSZ system parameter to use the full
FDDI packet size. Ensure that the ELAN path supports a data field of at
least 4474 bytes end to end.
Some failures cause traffic to switch from an ELAN path that supports 4474-byte packets to a path that supports only smaller packets. It is possible to implement automatic detection and recovery from these kinds of failures. This capability requires that the ELAN set the value of the priority field in the FDDI frame-control byte to zero when the packet is delivered on the destination FDDI link. Ethernet-to-FDDI bridges that conform to the IEEE 802.1 bridge specification provide this capability. |
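The latency guideline in Table 7-3 can be checked with a short calculation. The sketch below uses the guideline figures quoted in the table (5.085 microseconds/km propagation, 1 microsecond per station, 400-microsecond ceiling); the function name is illustrative, not an OpenVMS utility.

```python
# Illustrative FDDI idle-ring latency estimate from the Table 7-3 formula.
FDDI_PROPAGATION_US_PER_KM = 5.085  # signal propagation delay per km
STATION_DELAY_US = 1.0              # per-station receive-to-transmit delay
MAX_IDLE_RING_LATENCY_US = 400.0    # guideline ceiling for cluster traffic

def fddi_ring_latency_us(distance_km: float, num_stations: int) -> float:
    """Estimated idle-ring latency in microseconds."""
    return (distance_km * FDDI_PROPAGATION_US_PER_KM
            + num_stations * STATION_DELAY_US)

# Example: a 40 km ring with 25 stations stays well under the guideline.
latency = fddi_ring_latency_us(40, 25)
print(f"{latency:.1f} us")  # 228.4 us
assert latency <= MAX_IDLE_RING_LATENCY_US
```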
In an OpenVMS Cluster with satellites and servers, specific system parameters can help you manage your OpenVMS Cluster more efficiently. Table 7-4 gives suggested values for these system parameters.
System Parameter | Value for Satellites | Value for Servers |
---|---|---|
LOCKDIRWT | 0 | 1-4 |
SHADOW_MAX_COPY | 0 | 1-4 |
MSCP_LOAD | 0 | 1 or 2 |
NPAGEDYN | Higher than for standalone node | Higher than for satellite node |
PAGEDYN | Higher than for standalone node | Higher than for satellite node |
VOTES | 0 | 1 |
EXPECTED_VOTES | Sum of OpenVMS Cluster votes | Sum of OpenVMS Cluster votes |
RECNXINTERVL¹ | Equal on all nodes | Equal on all nodes |
Reference: For a more in-depth description of these parameters, see OpenVMS Cluster Systems.
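The VOTES and EXPECTED_VOTES rows of Table 7-4 can be tied together with the standard OpenVMS quorum arithmetic, quorum = (EXPECTED_VOTES + 2) divided by 2, truncated. The sketch below is illustrative; the function names are not OpenVMS tools.

```python
# Illustrative quorum arithmetic behind the Table 7-4 settings.
def expected_votes(node_votes):
    """EXPECTED_VOTES is the sum of all votes held by cluster members."""
    return sum(node_votes)

def quorum(expected: int) -> int:
    """Standard OpenVMS quorum calculation: (EXPECTED_VOTES + 2) // 2."""
    return (expected + 2) // 2

# Three servers with 1 vote each and four satellites with 0 votes,
# per the Table 7-4 recommendations:
votes = [1, 1, 1, 0, 0, 0, 0]
exp = expected_votes(votes)  # 3
print(quorum(exp))           # 2: the cluster keeps running while 2 servers are up
```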
The ability to scale I/Os is an important factor in the growth of your OpenVMS Cluster. Adding more components to your OpenVMS Cluster requires high I/O throughput so that additional components do not create bottlenecks and decrease the performance of the entire OpenVMS Cluster. Many factors can affect I/O throughput:
These factors can affect I/O scalability either singly or in combination. The following sections explain these factors and suggest ways to maximize I/O throughput and scalability without having to change your application.
Additional factors that affect I/O throughput are types of interconnects and types of storage subsystems.
Reference: See Chapter 4 for more information about interconnects and Chapter 5 for more information about types of storage subsystems.
MSCP server capability provides a major benefit to OpenVMS Clusters: it enables communication between nodes and storage that are not directly connected to each other. However, MSCP served I/O does incur overhead. Figure 7-24 shows, in simplified form, how packets require extra handling by the serving system.
Figure 7-24 Comparison of Direct and MSCP Served Access
In Figure 7-24, an MSCP served packet requires an extra "stop" at another system before reaching its destination. When the MSCP served packet reaches the system associated with the target storage, the packet is handled as if for direct access.
In an OpenVMS Cluster that requires a large amount of MSCP serving, I/O performance is less efficient and scalability is decreased. The total I/O throughput is approximately 20% less when I/O is MSCP served than when it is accessed directly. Design your configuration so that a few large nodes serve many satellites rather than having satellites serve their local storage to the entire OpenVMS Cluster.
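The effect of the approximately 20% MSCP serving penalty on a mixed workload can be modeled with simple arithmetic. The 20% figure comes from the text above; the function name and workload numbers are illustrative assumptions, not measurements from any OpenVMS tool.

```python
# Rough throughput model for the ~20% MSCP serving penalty (illustrative).
MSCP_SERVED_PENALTY = 0.20

def effective_throughput(direct_io_per_sec: float, served_fraction: float) -> float:
    """Blend direct and MSCP-served I/O rates for a given workload mix."""
    served_rate = direct_io_per_sec * (1.0 - MSCP_SERVED_PENALTY)
    return (direct_io_per_sec * (1.0 - served_fraction)
            + served_rate * served_fraction)

# A node capable of 1000 direct I/Os per second, with half its I/O MSCP served:
print(effective_throughput(1000, 0.5))  # 900.0
```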
In recent years, the ability of CPUs to process information has far outstripped the ability of I/O subsystems to feed processors with data. The result is an increasing percentage of processor time spent waiting for I/O operations to complete.
Solid-state disks (SSDs), DECram, and RAID level 0 bridge this gap between processing speed and magnetic-disk access speed. Performance of magnetic disks is limited by seek and rotational latencies, while SSDs and DECram use memory, which provides nearly instant access.
RAID level 0 is the technique of spreading (or "striping") a single file across several disk volumes. The objective is to reduce or eliminate a bottleneck at a single disk by partitioning heavily accessed files into stripe sets and storing them on multiple devices. This technique increases parallelism across many disks for a single I/O.
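The striping idea can be sketched as an address mapping: each logical block lands on a member disk determined by its chunk number. This is a minimal illustration of RAID level 0 placement, not the HSJ/HSD controller algorithm; chunk size and disk count are assumptions.

```python
# Minimal sketch of RAID level 0 (striping) address mapping.
def stripe_map(logical_block: int, chunk_blocks: int, num_disks: int):
    """Return (disk_index, block_within_disk) for a logical block."""
    chunk = logical_block // chunk_blocks          # which stripe chunk
    disk = chunk % num_disks                       # chunks rotate across members
    block_in_disk = ((chunk // num_disks) * chunk_blocks
                     + logical_block % chunk_blocks)
    return disk, block_in_disk

# With 4-block chunks over 3 disks, consecutive chunks rotate across members,
# so a single large I/O is spread over several spindles in parallel:
print(stripe_map(0, 4, 3))   # (0, 0)
print(stripe_map(4, 4, 3))   # (1, 0)
print(stripe_map(8, 4, 3))   # (2, 0)
print(stripe_map(12, 4, 3))  # (0, 4)
```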
Table 7-5 summarizes disk technologies and their features.
Disk Technology | Characteristics |
---|---|
Magnetic disk |
Slowest access time.
Inexpensive. Available on multiple interconnects. |
Solid-state disk |
Fastest access of any I/O subsystem device.
Highest throughput for write-intensive files. Available on multiple interconnects. |
DECram |
Highest throughput for small to medium I/O requests.
Volatile storage; appropriate for temporary read-only files. Available on any Alpha or VAX system. |
RAID level 0 | Available on HSJ and HSD controllers. |
Note: Shared, direct access to a solid-state disk or to DECram is the fastest alternative for scaling I/Os.
The read/write ratio of your applications is a key factor in scaling I/O to shadow sets. MSCP writes to a shadow set are duplicated on the interconnect.
Therefore, an application that has 100% (100/0) read activity may benefit from volume shadowing because shadowing causes multiple paths to be used for the I/O activity. An application with a 50/50 ratio will cause more interconnect utilization because write activity requires that an I/O be sent to each shadow member. Delays may be caused by the time required to complete the slowest I/O.
To determine I/O read/write ratios, use the DCL command MONITOR IO.
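The interconnect effect of the read/write ratio on a shadow set follows directly from the rule above: a read goes to one member, but a write must be sent to every member. The model below is a back-of-the-envelope illustration, not an OpenVMS utility.

```python
# Illustrative interconnect-load model for shadow sets:
# reads are serviced by one member; writes go to every member.
def interconnect_ios(reads: int, writes: int, shadow_members: int) -> int:
    return reads + writes * shadow_members

# 1000 application I/Os against a two-member shadow set:
print(interconnect_ios(1000, 0, 2))   # 1000: a 100/0 workload has no write amplification
print(interconnect_ios(500, 500, 2))  # 1500: a 50/50 workload adds 50% more traffic
```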
Each I/O packet incurs processor and memory overhead, so grouping I/Os together in one packet decreases overhead for all I/O activity. You can achieve higher throughput if your application is designed to use bigger packets. Smaller packets incur greater overhead.
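The amortization described above can be made concrete with a simple model: moving the same number of bytes in larger packets costs fewer fixed per-packet overheads. The per-packet cost below is an assumed illustrative figure, not a measured OpenVMS value.

```python
# Back-of-the-envelope model of per-packet overhead amortization.
import math

PER_PACKET_OVERHEAD_US = 100.0  # assumed fixed processor cost per packet

def total_overhead_us(total_bytes: int, packet_size: int) -> float:
    """Total per-packet overhead to move total_bytes at a given packet size."""
    packets = math.ceil(total_bytes / packet_size)
    return packets * PER_PACKET_OVERHEAD_US

# Transferring 1 MB in 1518-byte Ethernet frames vs. 4474-byte FDDI frames:
print(total_overhead_us(1_048_576, 1518))  # 69100.0 us
print(total_overhead_us(1_048_576, 4474))  # 23500.0 us: roughly a third the overhead
```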
Caching is the technique of storing recently or frequently used data in an area where it can be accessed more easily---in memory, in a controller, or in a disk. Caching complements solid-state disks, DECram, and RAID. Applications automatically benefit from the advantages of caching without any special coding. Caching reduces current and potential I/O bottlenecks within OpenVMS Cluster systems by reducing the number of I/Os between components.
Table 7-6 describes the three types of caching.
Caching Type | Description |
---|---|
Host based | Cache that is resident in the host system's memory and services I/Os from the host. |
Controller based | Cache that is resident in the storage controller and services data for all hosts. |
Disk | Cache that is resident in a disk. |
Host-based disk caching provides different benefits from controller-based and disk-based caching. In host-based disk caching, the cache itself is not shareable among nodes. Controller-based and disk-based caching are shareable because they are located in the controller or disk, either of which is shareable.
A hot file is a file in your system on which the most activity occurs. Hot files exist because, in many environments, approximately 80% of all I/O goes to 20% of the data. This means that, across equal regions of a disk drive, 80% of the data being transferred goes to one small area of the disk, as shown in Figure 7-25.
Figure 7-25 Hot-File Distribution
To increase the scalability of I/Os, focus on hot files, which can become a bottleneck if you do not manage them well. The activity in this area is expressed in I/Os, megabytes transferred, and queue depth.
RAID level 0 balances hot-file activity by spreading a single file over multiple disks. This reduces the performance impact of hot files.
Use the following DCL commands to analyze hot-file activity:
The MONITOR IO and the MONITOR MSCP commands enable you to find out which disk and which server are hot.
The Volume Shadowing for OpenVMS product ensures that data is available to applications and end users by duplicating data on multiple disks. Although volume shadowing provides data redundancy and high availability, it can affect OpenVMS Cluster I/O on two levels:
Factor | Effect |
---|---|
Geographic distance | Host-based volume shadowing enables shadowing of any disk volumes in an OpenVMS Cluster system, including those served by MSCP servers. This capability allows shadow set members to be separated by great distances, at the cost of MSCP overhead. For example, OpenVMS Cluster systems using FDDI can be located up to 25 miles apart. Both the distance and the MSCP involvement can slow I/O throughput. |
Read/write ratio | Because shadowing writes data to multiple volumes, applications that are write intensive may experience reduced throughput. In contrast, read-intensive applications may experience increased throughput because the shadowing software selects one disk member from which it can retrieve the data most efficiently. |
This chapter suggests some key system management strategies that you can use to get the most out of your OpenVMS Cluster. It is not intended to be a comprehensive discussion of the most common OpenVMS Cluster system management practices; see OpenVMS Cluster Systems for that information.
This chapter also assumes that the reader has some familiarity with basic system management concepts, such as system disks, quorum disks, and OpenVMS Cluster transitions.
The following information is contained in this chapter:
OpenVMS Cluster software makes a system manager's job easier because many system management tasks need to be done only once. This is especially true if business requirements call for a simple configuration rather than for every feature that an OpenVMS Cluster can provide. The simple configuration is appealing to both new and experienced system managers and is applicable to small OpenVMS Clusters---those with 3 to 7 nodes, 20 to 30 users, and 100 GB of storage.
Reference: See Figure 8-1 for an example of a simple OpenVMS Cluster configuration.
More complex OpenVMS Cluster configurations may require a more sophisticated system management strategy to deliver more availability, scalability, and performance.
Reference: See Figure 8-3 for an example of a complex OpenVMS Cluster configuration.
Choose system management strategies that balance simplicity of system management with the additional management tasks required by more complex OpenVMS Clusters.
System disks contain system files and environment files.
System files are primarily read-only images and command procedures, such as run-time libraries, and are accessed clusterwide.
Environment files create specific working environments for users. You can create a common environment by making all environment files accessible clusterwide, or you can create multiple environments by making specific environment files accessible to only certain users or systems.
System management is easiest for a simple configuration that has a single system disk and a common environment. Most procedures need to be performed only once, and both system files and environment files are located on the same disk. Page and swap files are also located on the system disk.
Figure 8-1 shows an example of a simple OpenVMS Cluster with a single system disk and a common environment.
Figure 8-1 Common Environment with a Single System Disk
In Figure 8-1, a simple CI OpenVMS Cluster contains a single, shadowed system disk. This system disk contains system files, environment files, and page and swap files. Because there is one set of environment files, this is a common environment.
Figure 8-2 shows another variation of a simple OpenVMS Cluster with a common environment.
Figure 8-2 Simple LAN OpenVMS Cluster with a Single System Disk
In Figure 8-2, six satellites and one boot server are connected by Ethernet. Each satellite has its own page and swap disk, which saves system disk space and removes the I/O activity of page and swap files from the Ethernet. Removing page and swap files from the system disk improves performance for the OpenVMS Cluster.
Although the single-system-disk configuration works well for many OpenVMS Cluster requirements, multiple system disks can offer several advantages.
OpenVMS Clusters that include both Alpha and VAX systems require multiple system disks: a VAX system disk and an Alpha system disk. Table 8-1 gives some additional reasons (not related to architecture) why a system manager might want more than one system disk in an OpenVMS Cluster.
Advantage | Description |
---|---|
Decreased boot times |
A single system disk can be a bottleneck when booting three or more
systems simultaneously.
Boot times are highly dependent on:
|
Increased system and application performance |
If your OpenVMS Cluster has many different applications that are in
constant use, it may be advantageous to have either a local system disk
for every node or a system disk that serves fewer systems. The benefits
are shorter image-activation times and fewer files being served over
the LAN.
Alpha workstations benefit from a local system disk because the powerful Alpha processor does not have to wait as long for system disk access. Reference: See Section 7.7.5 for more information. |
Reduced LAN utilization |
More system disks reduce LAN utilization because fewer files are served
over the LAN. Isolating LAN segments and their boot servers from
unnecessary traffic outside the segments decreases LAN path contention.
Reference: See Section 8.2.4 for more information. |
Increased OpenVMS Cluster availability | A single system disk can become a single point of failure. Increasing the number of boot servers and system disks increases availability by reducing the OpenVMS Cluster's dependency on a single resource. |
Arranging system disks as shown in Figure 8-3 can reduce booting time and LAN utilization.
Figure 8-3 Multiple System Disks in a Common Environment
Figure 8-3 is an OpenVMS Cluster with multiple system disks:
The use of multiple system disks in this configuration and the way that the LAN segments are divided enable the booting sequence to be efficient and timely.
In the workstation server examples shown in Section 7.7, OpenVMS Cluster reboots after a failure are relatively simple because of the small number of satellites per server. However, reboots in the larger OpenVMS Cluster configuration shown in Figure 8-3 require careful planning. Dividing this OpenVMS Cluster and arranging the system disks as described in this section can reduce booting time significantly. Dividing the OpenVMS Cluster can also reduce the satellite utilization of the LAN segment and increase satellite performance.
The disks in this OpenVMS Cluster have specific functions, as described in Table 8-2.
Disk | Contents | Purpose |
---|---|---|
Common disk | All environment files for the entire OpenVMS Cluster |
Environment files such as SYSUAF.DAT, NETPROXY.DAT, and QMAN$MASTER.DAT are
accessible to all nodes---including satellites---during booting. This
frees the satellite boot servers to serve only system files and root
information to the satellites.
To create a common environment and increase performance for all system disks, see Section 8.3. |
System disk | System roots for Alpha 1, Alpha 2, and Alpha 3 | High performance for server systems. Make this disk as read-only as possible by taking environment files that have write activity off the system disk. The disk can be mounted clusterwide in SYLOGICALS.COM during startup. |
Satellite boot servers' system disks | System files or roots for the satellites | Frees the system disk attached to Alpha 1, Alpha 2, and Alpha 3 from having to serve satellites, and divides total LAN traffic over individual Ethernet segments. |
Page and swap disks | Page and swap files for one or more systems | Reduce I/O activity on the system disks, and free system disk space for applications and system roots. |
6318P007.HTM OSSG Documentation 26-NOV-1996 11:20:21.24
Copyright © Digital Equipment Corporation 1996. All Rights Reserved.