
Guidelines for OpenVMS Cluster Configurations



The advantages and disadvantages of the configuration shown in Figure 7-2 include:

Advantages

Disadvantage

An increased need for more storage or processing resources could lead to an OpenVMS Cluster configuration like the one shown in Figure 7-3.

7.3.2 Three-Node CI OpenVMS Cluster

In Figure 7-3, three nodes are connected to two HSC controllers by the CI interconnects. The critical system disk is dual ported and shadowed.

Figure 7-3 Three-Node CI OpenVMS Cluster



The advantages and disadvantages of the configuration shown in Figure 7-3 include:

Advantages

Disadvantage

If the I/O activity exceeds the capacity of the CI interconnect, this could lead to an OpenVMS Cluster configuration like the one shown in Figure 7-4.

7.3.3 Seven-Node CI OpenVMS Cluster

In Figure 7-4, seven nodes each have a direct connection to two star couplers and to all storage.

Figure 7-4 Seven-Node CI OpenVMS Cluster



The advantages and disadvantages of the configuration shown in Figure 7-4 include:

Advantages

Disadvantage

7.3.4 Guidelines for CI OpenVMS Clusters

The following guidelines can help you configure your CI OpenVMS Cluster:

7.3.5 Guidelines for Volume Shadowing in CI OpenVMS Clusters

Volume shadowing is intended to enhance availability, not performance. However, the following volume shadowing strategies enable you to use availability features while also maximizing I/O capacity. These examples show CI configurations, but they apply to DSSI and SCSI configurations as well.

Figure 7-5 Volume Shadowing on a Single Controller



Figure 7-5 shows two nodes connected to an HSJ, with a two-member shadow set.

The disadvantage of this strategy is that the controller is a single point of failure. The configuration in Figure 7-6 shows examples of shadowing across controllers, which prevents one controller from being a single point of failure. Shadowing across HSJ and HSC controllers provides optimal scalability and availability within an OpenVMS Cluster system.
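The availability benefit of shadowing across controllers can be sketched numerically. The following Python model is purely illustrative (the failure probabilities are assumed values, not Digital measurements): with a single controller, data is unreachable whenever that controller fails; with one controller per shadow set member, data survives unless both controller-plus-disk paths fail together.

```python
def single_controller_availability(p_ctrl: float, p_disk: float,
                                   members: int = 2) -> float:
    """Data is reachable only if the one controller is up and at least
    one shadow set member behind it is up."""
    all_disks_down = p_disk ** members
    return (1 - p_ctrl) * (1 - all_disks_down)

def across_controllers_availability(p_ctrl: float, p_disk: float) -> float:
    """Each member sits behind its own controller; data is reachable
    if at least one controller-plus-disk path survives."""
    path_down = 1 - (1 - p_ctrl) * (1 - p_disk)
    return 1 - path_down ** 2

# With an assumed 1% failure probability for both controller and disk,
# shadowing across controllers yields measurably higher availability,
# because the controller is no longer a single point of failure.
single = single_controller_availability(0.01, 0.01)
across = across_controllers_availability(0.01, 0.01)
```

The exact figures depend entirely on the assumed probabilities; the point is the structural one stated in the text, that eliminating the shared controller removes a single point of failure.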

Figure 7-6 Volume Shadowing Across Controllers



As Figure 7-6 shows, shadowing across controllers has three variations:

Figure 7-7 shows an example of shadowing across nodes.

Figure 7-7 Volume Shadowing Across Nodes



As Figure 7-7 shows, shadowing across nodes provides the advantage of flexibility in distance. However, it requires MSCP server overhead for write I/Os. In addition, the failure of one of the nodes and its subsequent return to the OpenVMS Cluster will cause a copy operation.

If you have multiple volumes, shadowing inside a controller and shadowing across controllers are more effective than shadowing across nodes.
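The write-overhead point above can be made concrete with a toy latency model. The cost figures below are assumptions for illustration only, not OpenVMS measurements: a shadowed write must reach every member, and members reached through the MSCP server pay an extra serving cost, so the slowest (remote) member bounds the write.

```python
LOCAL_WRITE_MS = 1.0   # assumed latency of a direct-path member write
MSCP_HOP_MS = 0.5      # assumed extra cost of an MSCP-served member write

def shadow_write_latency(remote_members: int, local_members: int = 1) -> float:
    """A shadowed write completes only when all members are written;
    members served via MSCP pay the extra serving cost, so any remote
    member dominates the write latency."""
    latencies = [LOCAL_WRITE_MS] * local_members + \
                [LOCAL_WRITE_MS + MSCP_HOP_MS] * remote_members
    return max(latencies)
```

With these assumed costs, shadowing entirely inside or across controllers (no remote members) completes a write in the direct-path time, while shadowing across nodes always pays the MSCP penalty, which is why the text prefers controller-based shadowing when you have multiple volumes.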

Reference: See Volume Shadowing for OpenVMS for more information.

7.4 Scalability in DSSI OpenVMS Clusters

Each DSSI interconnect can have up to eight nodes attached; four can be systems and the rest can be storage devices. Figure 7-8, Figure 7-9, and Figure 7-10 show a progression from a two-node DSSI OpenVMS Cluster to a four-node DSSI OpenVMS Cluster.
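The node-count rule just stated can be expressed as a simple check. This is a sketch of the stated guideline only (at most eight nodes per DSSI interconnect, of which at most four may be systems), not a substitute for the full configuration rules:

```python
def dssi_bus_valid(systems: int, storage_devices: int) -> bool:
    """Per the guideline: a DSSI interconnect carries at most 8 nodes,
    and at most 4 of them may be systems; the rest are storage devices."""
    return systems <= 4 and systems + storage_devices <= 8

# Figure 7-8: two systems and four disks on one DSSI bus -> within limits.
# Five systems on one bus, or four systems with five disks, would not be.
```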

7.4.1 Two-Node DSSI OpenVMS Cluster

In Figure 7-8, two nodes are connected to four disks by a common DSSI interconnect.

Figure 7-8 Two-Node DSSI OpenVMS Cluster



The advantages and disadvantages of the configuration shown in Figure 7-8 include:

Advantages

Disadvantages

If the OpenVMS Cluster in Figure 7-8 required more processing power, more storage, and better redundancy, this could lead to a configuration like the one shown in Figure 7-9.

7.4.2 Four-Node DSSI OpenVMS Cluster with Shared Access

In Figure 7-9, four nodes have shared, direct access to eight disks through two DSSI interconnects. Two of the disks are shadowed across DSSI interconnects.

Figure 7-9 Four-Node DSSI OpenVMS Cluster with Shared Access



The advantages and disadvantages of the configuration shown in Figure 7-9 include:

Advantages

Disadvantage

If the configuration in Figure 7-9 required more storage, this could lead to a configuration like the one shown in Figure 7-10.

7.4.3 Four-Node DSSI OpenVMS Cluster with Some Nonshared Access

Figure 7-10 shows an OpenVMS Cluster with 4 nodes and 10 disks. This model differs from Figure 7-8 and Figure 7-9 in that some of the nodes do not have shared, direct access to some of the disks, thus requiring these disks to be MSCP served. For the best performance, place your highest-priority data on disks that are directly connected by common DSSI interconnects to your nodes. Volume shadowing across common DSSI interconnects provides the highest availability and may increase read performance.
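The direct-versus-served distinction drives that placement advice. As a rough illustration (the bus names are hypothetical, and this is not how OpenVMS itself resolves paths): a node reaches a disk directly only if they share an interconnect; otherwise another node must MSCP serve the disk to it.

```python
def access_path(node_buses: set, disk_buses: set) -> str:
    """A disk on an interconnect the node is attached to is reached
    directly; otherwise it must be MSCP served by some other node
    that does have a direct path."""
    return "direct" if node_buses & disk_buses else "MSCP-served"

# A node on DSSI-A reaches a dual-pathed DSSI-A/DSSI-B disk directly,
# but a disk only on DSSI-C has to be served to it.
```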

Figure 7-10 DSSI OpenVMS Cluster with 10 Disks



The advantages and disadvantages of the configuration shown in Figure 7-10 include:

Advantages

Disadvantages

7.5 Scalability in MEMORY CHANNEL OpenVMS Clusters

Each MEMORY CHANNEL (MC) interconnect can have up to four nodes attached to each MEMORY CHANNEL hub. For two-hub configurations, each node must have two PCI adapters, and each one must be attached to a different hub. In a two-node configuration, no hub is required because one of the PCI adapters serves as a virtual hub.
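Those MEMORY CHANNEL limits can be summarized as a validity check. This sketch encodes only the rules stated above (four nodes per hub, two adapters per node in a two-hub configuration, hubless virtual-hub operation for exactly two nodes); real configurations have additional rules not modeled here:

```python
def mc_config_valid(nodes: int, hubs: int, adapters_per_node: int) -> bool:
    """Checks the stated MEMORY CHANNEL limits: up to 4 nodes attached
    to each hub; a two-hub configuration needs two PCI adapters per
    node (one per hub); with exactly two nodes, no hub is required
    because one PCI adapter acts as a virtual hub."""
    if hubs == 0:
        return nodes == 2 and adapters_per_node >= 1
    if hubs == 1:
        return nodes <= 4 and adapters_per_node >= 1
    if hubs == 2:
        return nodes <= 4 and adapters_per_node >= 2
    return False
```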

Figure 7-11, Figure 7-12, and Figure 7-13 show a progression from a two-node MEMORY CHANNEL cluster to a four-node MEMORY CHANNEL cluster.

Reference: For additional configuration information and a more detailed technical summary of how MEMORY CHANNEL works, see Appendix B.

7.5.1 Two-Node MEMORY CHANNEL Cluster

In Figure 7-11, two nodes are connected by a MEMORY CHANNEL interconnect, a LAN (Ethernet, FDDI, or ATM) interconnect, and a SCSI interconnect.

Figure 7-11 Two-Node MEMORY CHANNEL OpenVMS Cluster



The advantages and disadvantages of the configuration shown in Figure 7-11 include:

Advantages

Disadvantages

If the OpenVMS Cluster in Figure 7-11 required more processing power and better redundancy, this could lead to a configuration like the one shown in Figure 7-12.

7.5.2 Three-Node MEMORY CHANNEL Cluster

In Figure 7-12, three nodes are connected by a high-speed MEMORY CHANNEL interconnect, as well as by a LAN (Ethernet, FDDI, or ATM) interconnect. These nodes also have shared, direct access to storage through the SCSI interconnect.

Figure 7-12 Three-Node MEMORY CHANNEL OpenVMS Cluster



The advantages and disadvantages of the configuration shown in Figure 7-12 include:

Advantages

Disadvantage

If the configuration in Figure 7-12 required more storage, this could lead to a configuration like the one shown in Figure 7-13.

7.5.3 Four-Node MEMORY CHANNEL OpenVMS Cluster

In Figure 7-13, each node is connected by a MEMORY CHANNEL interconnect as well as by a CI interconnect.

Figure 7-13 MEMORY CHANNEL Cluster with a CI Cluster



The advantages and disadvantages of the configuration shown in Figure 7-13 include:

Advantages

Disadvantage

7.6 Scalability in SCSI OpenVMS Clusters

SCSI-based OpenVMS Clusters allow commodity-priced storage devices to be used directly in OpenVMS Clusters. Using a SCSI interconnect in an OpenVMS Cluster offers you variations in distance, price, and performance capacity. This new SCSI clustering capability is an ideal starting point when configuring a low-end, affordable cluster solution. SCSI clusters can range from desktop to deskside to departmental configurations.

Note the following general limitations when using the SCSI interconnect:

The figures in this section show a progression from a two-node SCSI configuration with modest storage to a three-node SCSI "hub" configuration with maximum storage and further expansion capability.

7.6.1 Two-Node Narrow-Fast SCSI Cluster

In Figure 7-14, two nodes are connected by a 3-m, single-ended, narrow-fast SCSI bus, with MEMORY CHANNEL (or any) interconnect for internode traffic. The BA350 storage cabinet contains single-ended, narrow-fast disks.

Figure 7-14 Two-Node Narrow-Fast SCSI Cluster



The advantages and disadvantages of the configuration shown in Figure 7-14 include:

Advantages

Disadvantages

If the SCSI cluster in Figure 7-14 required a longer bus and more bandwidth, this could lead to a configuration like the one shown in Figure 7-15.

7.6.2 Two-Node Fast-Wide SCSI Cluster

In Figure 7-15, two nodes are connected by a 25-m, fast-wide differential (FWD) SCSI bus, with MEMORY CHANNEL (or any) interconnect for internode traffic. The BA356 storage cabinet contains a power supply, a DWZZB single-ended to differential converter, and six disk drives. This configuration can have either narrow or wide disks.

Figure 7-15 Two-Node Fast-Wide SCSI Cluster



The advantages and disadvantages of the configuration shown in Figure 7-15 include:

Advantages

Disadvantage

If the configuration in Figure 7-15 required even more storage, this could lead to a configuration like the one shown in Figure 7-16.

7.6.3 Two-Node Fast-Wide SCSI Cluster with HSZ Storage

In Figure 7-16, two nodes are connected by a 25-m, fast-wide differential (FWD) SCSI bus, with MEMORY CHANNEL (or any) interconnect for internode traffic. Multiple storage shelves are within the HSZ controller.

Figure 7-16 Two-Node Fast-Wide SCSI Cluster with HSZ Storage



The advantages and disadvantages of the configuration shown in Figure 7-16 include:

Advantages

Disadvantage

7.6.4 Three-Node Fast-Wide SCSI Cluster

In Figure 7-17, three nodes are connected by two 25-m, fast-wide differential (FWD) SCSI interconnects. Multiple storage shelves are contained in each HSZ controller, and more storage is contained in the BA356 at the top of the figure.

Figure 7-17 Three-Node Fast-Wide SCSI Cluster



The advantages and disadvantages of the configuration shown in Figure 7-17 include:

Advantages

Disadvantage

7.6.5 Three-Node Fast-Wide SCSI Hub Configuration

Figure 7-18 shows three nodes connected by a "SCSI hub": a BA356 cabinet that contains two power supplies and up to five DWZZB converters. The DWZZB converters enable you to use the BA356 as a hub for storage and for nodes. When used in nonhub configurations, a FWD SCSI interconnect can be up to 25 m. However, in a hub configuration, the distance between any two nodes is limited to a total of 40 m, resulting in a 20-m maximum for each node.

Figure 7-18 also shows how the entire SCSI configuration can be extended by adding another BA356 hub, which can connect to additional nodes and storage.
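The hub distance rule is easy to mischeck, so here is a sketch of it (illustrative only, using just the limits stated above): each node's cable to the hub is capped at 20 m, which guarantees that any node-to-node path through the hub stays within the 40-m total.

```python
MAX_NONHUB_M = 25.0    # point-to-point FWD SCSI limit (nonhub)
MAX_PER_NODE_M = 20.0  # per-node cable limit in a hub configuration
MAX_PAIR_M = 40.0      # total limit between any two nodes via the hub

def hub_path_ok(cable_a_m: float, cable_b_m: float) -> bool:
    """True if two nodes' cables to the SCSI hub satisfy both the
    per-node 20 m cap and the 40 m node-to-node total."""
    return (cable_a_m <= MAX_PER_NODE_M and
            cable_b_m <= MAX_PER_NODE_M and
            cable_a_m + cable_b_m <= MAX_PAIR_M)

# Note: a 25 m run is legal point to point (nonhub) but not to a hub.
```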

Figure 7-18 Three-Node Fast-Wide SCSI Hub Configuration



The advantages and disadvantages of the configuration shown in Figure 7-18 include:

Advantages

Disadvantage

7.7 Scalability in OpenVMS Clusters with Satellites

The number of satellites in an OpenVMS Cluster and the amount of storage that is MSCP served determine the quantity and capacity of the servers required. Satellites are systems that do not have direct access to a system disk and other OpenVMS Cluster storage. Satellites are usually workstations, but they can be any OpenVMS Cluster node that is served storage by other nodes in the OpenVMS Cluster.

Each Ethernet LAN segment should have only 10 to 20 satellite nodes attached. Figure 7-19, Figure 7-20, Figure 7-21, and Figure 7-22 show a progression from a 6-satellite LAN to a 45-satellite LAN.
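The per-segment guideline translates directly into a capacity calculation. This sketch uses the upper bound of the stated 10-to-20 range; a conservative plan would use the lower bound:

```python
import math

def min_lan_segments(satellites: int, max_per_segment: int = 20) -> int:
    """Minimum number of Ethernet LAN segments needed if each segment
    should carry no more than the guideline's maximum of 20 satellites."""
    return math.ceil(satellites / max_per_segment)

# Six satellites fit on one segment (Figure 7-19); 45 satellites need
# at least three segments, matching the three 15-satellite segments
# of Figure 7-22.
```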

7.7.1 Six-Satellite OpenVMS Cluster

In Figure 7-19, six satellites and a boot server are connected by Ethernet.

Figure 7-19 Six-Satellite LAN OpenVMS Cluster



The advantages and disadvantages of the configuration shown in Figure 7-19 include:

Advantages

Disadvantage

If the boot server in Figure 7-19 became a bottleneck, a configuration like the one shown in Figure 7-20 would be required.

7.7.2 Six-Satellite OpenVMS Cluster with Two Boot Nodes

Figure 7-20 shows six satellites and two boot servers connected by Ethernet. Boot server 1 and boot server 2 perform MSCP server dynamic load balancing: they arbitrate and share the workload between them, and if one node stops functioning, the other takes over. MSCP dynamic load balancing requires shared access to storage.
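The load-balancing and failover behavior just described can be sketched as a toy model. This is emphatically not the MSCP server implementation; it only illustrates the two properties the text names, sharing the serving work between two boot servers and having the survivor take over when one stops:

```python
class ServedDisk:
    """Toy model of a disk served by two boot servers that share the
    load and fail over to each other (illustrative names only)."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.up = {s: True for s in self.servers}
        self._next = 0

    def fail(self, server):
        """Mark a boot server as no longer functioning."""
        self.up[server] = False

    def route_request(self):
        """Round-robin requests across functioning servers; if one node
        stops, all requests go to the survivor."""
        live = [s for s in self.servers if self.up[s]]
        if not live:
            raise RuntimeError("no boot server available to serve disk")
        chosen = live[self._next % len(live)]
        self._next += 1
        return chosen
```

While both servers are up, successive requests alternate between them; after one fails, every request is routed to the remaining server.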

Figure 7-20 Six-Satellite LAN OpenVMS Cluster with Two Boot Nodes



The advantages and disadvantages of the configuration shown in Figure 7-20 include:

Advantages

Disadvantage

If the LAN in Figure 7-20 became an OpenVMS Cluster bottleneck, this could lead to a configuration like the one shown in Figure 7-21.

7.7.3 Twelve-Satellite LAN OpenVMS Cluster with Two LAN Segments

Figure 7-21 shows 12 satellites and 2 boot servers connected by two Ethernet segments. These two Ethernet segments are also joined by a LAN bridge. Because each satellite has dual paths to storage, this configuration also features MSCP dynamic load balancing.

Figure 7-21 Twelve-Satellite OpenVMS Cluster with Two LAN Segments



The advantages and disadvantages of the configuration shown in Figure 7-21 include:

Advantages

Disadvantages

If the OpenVMS Cluster in Figure 7-21 needed to grow beyond its current limits, this could lead to a configuration like the one shown in Figure 7-22.

7.7.4 Forty-Five Satellite OpenVMS Cluster with FDDI Ring

Figure 7-22 shows a large, 51-node OpenVMS Cluster. The three boot servers, Alpha 1, Alpha 2, and Alpha 3, have two disks: a common disk and a system disk. The FDDI ring has three LAN segments attached. Each segment has 15 workstation satellites as well as its own boot node.
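The 51-node total follows from the counts given in the text, reading the three ring-attached boot servers and the three per-segment boot nodes as distinct systems:

```python
# Node count for Figure 7-22, using only figures stated in the text.
boot_servers = 3              # Alpha 1, Alpha 2, Alpha 3 on the FDDI ring
lan_segments = 3              # LAN segments attached to the FDDI ring
satellites_per_segment = 15   # workstation satellites per segment
boot_nodes_per_segment = 1    # each segment has its own boot node

total_nodes = boot_servers + lan_segments * (satellites_per_segment +
                                             boot_nodes_per_segment)
# 3 + 3 * 16 = 51 nodes, of which 3 * 15 = 45 are satellites.
```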

Figure 7-22 Forty-Five Satellite OpenVMS Cluster with FDDI Ring






  6318P006.HTM
  OSSG Documentation
  26-NOV-1996 11:20:19.09

Copyright © Digital Equipment Corporation 1996. All Rights Reserved.
