The advantages and disadvantages of the configuration shown in Figure 7-2 include:
Advantages
Disadvantage
An increased need for more storage or processing resources could lead to an OpenVMS Cluster configuration like the one shown in Figure 7-3.
In Figure 7-3, three nodes are connected to two HSC controllers by the CI interconnects. The critical system disk is dual ported and shadowed.
Figure 7-3 Three-Node CI OpenVMS Cluster
The advantages and disadvantages of the configuration shown in Figure 7-3 include:
Advantages
Disadvantage
If the I/O activity exceeds the capacity of the CI interconnect, this could lead to an OpenVMS Cluster configuration like the one shown in Figure 7-4.
In Figure 7-4, seven nodes each have a direct connection to two star couplers and to all storage.
Figure 7-4 Seven-Node CI OpenVMS Cluster
The advantages and disadvantages of the configuration shown in Figure 7-4 include:
Advantages
Disadvantage
The following guidelines can help you configure your CI OpenVMS Cluster:
Volume shadowing is intended to enhance availability, not performance. However, the following volume shadowing strategies enable you to utilize availability features while also maximizing I/O capacity. These examples show CI configurations, but they apply to DSSI and SCSI configurations, as well.
Figure 7-5 Volume Shadowing on a Single Controller
Figure 7-5 shows two nodes connected to an HSJ, with a two-member shadow set.
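In DCL terms, a shadow set like the one in Figure 7-5 is created by mounting the physical disks into a DSA virtual unit with the MOUNT command. The following is a minimal sketch only; the virtual-unit number, member device names, and volume label are illustrative, not taken from the figure:

    $ ! Both members sit behind the same HSJ controller, as in Figure 7-5.
    $ ! Device names and label are hypothetical.
    $ MOUNT/CLUSTER DSA1: /SHADOW=($1$DUA10:,$1$DUA11:) SHADOWVOL

Applications then read from and write to DSA1: rather than to either physical member, and the shadowing software keeps the two members identical.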
The disadvantage of this strategy is that the controller is a single point of failure. Figure 7-6 shows examples of shadowing across controllers, which prevents one controller from being a single point of failure. Shadowing across HSJ and HSC controllers provides optimal scalability and availability within an OpenVMS Cluster system.
Figure 7-6 Volume Shadowing Across Controllers
As Figure 7-6 shows, shadowing across controllers has three variations:
Figure 7-7 shows an example of shadowing across nodes.
Figure 7-7 Volume Shadowing Across Nodes
As Figure 7-7 shows, shadowing across nodes provides the advantage of flexibility in distance. However, it requires MSCP server overhead for write I/Os. In addition, the failure of one of the nodes and its subsequent return to the OpenVMS Cluster will cause a copy operation.
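The MOUNT command itself is the same when the members are reached through different nodes; what changes is that one member is accessed through the MSCP server. The following sketch uses purely illustrative device names, in which the two members carry different node allocation classes:

    $ ! $1$DUA5: is local to this node; $2$DUA5: is MSCP served by the
    $ ! other node, as in the Figure 7-7 layout. Names are hypothetical.
    $ MOUNT/CLUSTER DSA2: /SHADOW=($1$DUA5:,$2$DUA5:) DATAVOL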
If you have multiple volumes, shadowing inside a controller and shadowing across controllers are more effective than shadowing across nodes.
Reference: See Volume Shadowing for OpenVMS for more information.
Each DSSI interconnect can have up to eight nodes attached; four can be systems and the rest can be storage devices. Figure 7-8, Figure 7-9, and Figure 7-10 show a progression from a two-node DSSI OpenVMS Cluster to a four-node DSSI OpenVMS Cluster.
In Figure 7-8, two nodes are connected to four disks by a common DSSI interconnect.
Figure 7-8 Two-Node DSSI OpenVMS Cluster
The advantages and disadvantages of the configuration shown in Figure 7-8 include:
Advantages
Disadvantages
If the OpenVMS Cluster in Figure 7-8 required more processing power, more storage, and better redundancy, this could lead to a configuration like the one shown in Figure 7-9.
In Figure 7-9, four nodes have shared, direct access to eight disks through two DSSI interconnects. Two of the disks are shadowed across DSSI interconnects.
Figure 7-9 Four-Node DSSI OpenVMS Cluster with Shared Access
The advantages and disadvantages of the configuration shown in Figure 7-9 include:
Advantages
Disadvantage
If the configuration in Figure 7-9 required more storage, this could lead to a configuration like the one shown in Figure 7-10.
Figure 7-10 shows an OpenVMS Cluster with four nodes and 10 disks. This model differs from Figure 7-8 and Figure 7-9 in that some of the nodes do not have shared, direct access to some of the disks, thus requiring these disks to be MSCP served. For the best performance, place your highest-priority data on disks that are directly connected by common DSSI interconnects to your nodes. Volume shadowing across common DSSI interconnects provides the highest availability and may increase read performance.
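Serving these disks requires the MSCP server to be loaded on the nodes that have direct DSSI access to them. A sketch of the relevant SYS$SYSTEM:MODPARAMS.DAT entries on such a node follows; the values shown are typical, not prescriptive:

    MSCP_LOAD = 1          ! Load the MSCP server on this node
    MSCP_SERVE_ALL = 2     ! Serve the locally connected disks to the cluster

After editing MODPARAMS.DAT, run AUTOGEN (for example, @SYS$UPDATE:AUTOGEN GETDATA REBOOT) so that the new parameter values take effect.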
Figure 7-10 DSSI OpenVMS Cluster with 10 Disks
The advantages and disadvantages of the configuration shown in Figure 7-10 include:
Advantages
Disadvantages
A MEMORY CHANNEL (MC) interconnect can have up to four nodes attached to each MEMORY CHANNEL hub. For two-hub configurations, each node must have two PCI adapters, each attached to a different hub. In a two-node configuration, no hub is required because one of the PCI adapters serves as a virtual hub.
Figure 7-11, Figure 7-12, and Figure 7-13 show a progression from a two-node MEMORY CHANNEL cluster to a four-node MEMORY CHANNEL cluster.
Reference: For additional configuration information and a more detailed technical summary of how MEMORY CHANNEL works, see Appendix B.
In Figure 7-11, two nodes are connected by a MEMORY CHANNEL interconnect, a LAN (Ethernet, FDDI, or ATM) interconnect, and a SCSI interconnect.
Figure 7-11 Two-Node MEMORY CHANNEL OpenVMS Cluster
The advantages and disadvantages of the configuration shown in Figure 7-11 include:
Advantages
Disadvantages
If the OpenVMS Cluster in Figure 7-11 required more processing power and better redundancy, this could lead to a configuration like the one shown in Figure 7-12.
In Figure 7-12, three nodes are connected by a high-speed MEMORY CHANNEL interconnect, as well as by a LAN (Ethernet, FDDI, or ATM) interconnect. These nodes also have shared, direct access to storage through the SCSI interconnect.
Figure 7-12 Three-Node MEMORY CHANNEL OpenVMS Cluster
The advantages and disadvantages of the configuration shown in Figure 7-12 include:
Advantages
Disadvantage
If the configuration in Figure 7-12 required more storage, this could lead to a configuration like the one shown in Figure 7-13.
In Figure 7-13, each node is connected by a MEMORY CHANNEL interconnect as well as by a CI interconnect.
Figure 7-13 MEMORY CHANNEL Cluster with a CI Cluster
The advantages and disadvantages of the configuration shown in Figure 7-13 include:
Advantages
Disadvantage
SCSI-based OpenVMS Clusters allow commodity-priced storage devices to be used directly in OpenVMS Clusters. Using a SCSI interconnect in an OpenVMS Cluster offers you variations in distance, price, and performance capacity. This new SCSI clustering capability is an ideal starting point when configuring a low-end, affordable cluster solution. SCSI clusters can range from desktop to deskside to departmental configurations.
Note the following general limitations when using the SCSI interconnect:
The figures in this section show a progression from a two-node SCSI configuration with modest storage to a three-node SCSI "hub" configuration with maximum storage and further expansion capability.
In Figure 7-14, two nodes are connected by a 3-m, single-ended, narrow-fast SCSI bus, with MEMORY CHANNEL (or any) interconnect for internode traffic. The BA350 storage cabinet contains single-ended, narrow-fast disks.
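Because the two nodes connect directly to the same SCSI bus, the shared disks must have identical device names on both systems; in this kind of configuration that is normally arranged by giving both nodes the same nonzero disk allocation class. A sketch of the SYS$SYSTEM:MODPARAMS.DAT entry follows; the value 1 is arbitrary, but it must match on both nodes:

    ALLOCLASS = 1    ! Same nonzero value on every node sharing the SCSI bus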
Figure 7-14 Two-Node Narrow-Fast SCSI Cluster
The advantages and disadvantages of the configuration shown in Figure 7-14 include:
Advantages
Disadvantages
If the SCSI cluster in Figure 7-14 required a longer bus and more bandwidth, this could lead to a configuration like the one shown in Figure 7-15.
In Figure 7-15, two nodes are connected by a 25-m, fast-wide differential (FWD) SCSI bus, with MEMORY CHANNEL (or any) interconnect for internode traffic. The BA356 storage cabinet contains a power supply, a DWZZB single-ended to differential converter, and six disk drives. This configuration can have either narrow or wide disks.
Figure 7-15 Two-Node Fast-Wide SCSI Cluster
The advantages and disadvantages of the configuration shown in Figure 7-15 include:
Advantages
Disadvantage
If the configuration in Figure 7-15 required even more storage, this could lead to a configuration like the one shown in Figure 7-16.
In Figure 7-16, two nodes are connected by a 25-m, fast-wide differential (FWD) SCSI bus, with MEMORY CHANNEL (or any) interconnect for internode traffic. Multiple storage shelves are within the HSZ controller.
Figure 7-16 Two-Node Fast-Wide SCSI Cluster with HSZ Storage
The advantages and disadvantages of the configuration shown in Figure 7-16 include:
Advantages
Disadvantage
In Figure 7-17, three nodes are connected by two 25-m, fast-wide differential (FWD) SCSI interconnects. Multiple storage shelves are contained in each HSZ controller, and more storage is contained in the BA356 at the top of the figure.
Figure 7-17 Three-Node Fast-Wide SCSI Cluster
The advantages and disadvantages of the configuration shown in Figure 7-17 include:
Advantages
Disadvantage
Figure 7-18 shows three nodes connected by a "SCSI hub": a BA356 cabinet that contains two power supplies and up to five DWZZB converters. The DWZZB converters enable you to use the BA356 as a hub for storage and for nodes. When used in nonhub configurations, a FWD SCSI interconnect can be up to 25 m. However, in a hub configuration, the distance between any two nodes is limited to a total of 40 m, resulting in a 20-m maximum for each node.
Figure 7-18 also shows how the entire SCSI configuration can be extended by adding another BA356 hub, which can connect to additional nodes and storage.
Figure 7-18 Three-Node Fast-Wide SCSI Hub Configuration
The advantages and disadvantages of the configuration shown in Figure 7-18 include:
Advantages
Disadvantage
The number of satellites in an OpenVMS Cluster and the amount of storage that is MSCP served determine the quantity and capacity of the servers you need. Satellites are systems that do not have direct access to a system disk and other OpenVMS Cluster storage. Satellites are usually workstations, but they can be any OpenVMS Cluster node that is served storage by other nodes in the OpenVMS Cluster.
Each Ethernet LAN segment should have only 10 to 20 satellite nodes attached. Figure 7-19, Figure 7-20, Figure 7-21, and Figure 7-22 show a progression from a 6-satellite LAN to a 45-satellite LAN.
In Figure 7-19, six satellites and a boot server are connected by Ethernet.
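Satellites in a configuration like this are typically added from the boot server with the cluster configuration procedure, which prompts for each satellite's node name, LAN hardware address, and system root, and creates that root on the boot server's system disk:

    $ ! Run on the boot server; choose the ADD option for each satellite.
    $ @SYS$MANAGER:CLUSTER_CONFIG

Because the satellite's system root lives on the boot server's system disk, the satellite needs no locally attached OpenVMS storage.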
Figure 7-19 Six-Satellite LAN OpenVMS Cluster
The advantages and disadvantages of the configuration shown in Figure 7-19 include:
Advantages
Disadvantage
If the boot server in Figure 7-19 became a bottleneck, a configuration like the one shown in Figure 7-20 would be required.
Figure 7-20 shows six satellites and two boot servers connected by Ethernet. Boot server 1 and boot server 2 perform MSCP server dynamic load balancing: they arbitrate and share the workload between them, and if one node stops functioning, the other takes over. MSCP dynamic load balancing requires shared access to storage.
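For this failover and load sharing to work, both boot servers must be running the MSCP server and serving the same disks. One way to confirm what each boot server is serving is sketched below; the output varies by configuration and is not shown:

    $ SHOW DEVICE/SERVED    ! Lists the disks this node is MSCP serving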
Figure 7-20 Six-Satellite LAN OpenVMS Cluster with Two Boot Nodes
The advantages and disadvantages of the configuration shown in Figure 7-20 include:
Advantages
Disadvantage
If the LAN in Figure 7-20 became an OpenVMS Cluster bottleneck, this could lead to a configuration like the one shown in Figure 7-21.
Figure 7-21 shows 12 satellites and 2 boot servers connected by two Ethernet segments. These two Ethernet segments are also joined by a LAN bridge. Because each satellite has dual paths to storage, this configuration also features MSCP dynamic load balancing.
Figure 7-21 Twelve-Satellite OpenVMS Cluster with Two LAN Segments
The advantages and disadvantages of the configuration shown in Figure 7-21 include:
Advantages
Disadvantages
If the OpenVMS Cluster in Figure 7-21 needed to grow beyond its current limits, this could lead to a configuration like the one shown in Figure 7-22.
Figure 7-22 shows a large, 51-node OpenVMS Cluster. The three boot servers, Alpha 1, Alpha 2, and Alpha 3, have two disks: a common disk and a system disk. The FDDI ring has three LAN segments attached. Each segment has 15 workstation satellites as well as its own boot node.
Figure 7-22 Forty-Five Satellite OpenVMS Cluster with FDDI Ring