
Guidelines for OpenVMS Cluster Configurations



4.1 Characteristics

The six interconnects described in this chapter share some general characteristics. Table 4-1 describes these characteristics.

Table 4-1 Interconnect Characteristics
Characteristic Description
Throughput The rate at which data is transferred across the interconnect.

Some interconnects require more processor overhead than others. For example, Ethernet and FDDI interconnects require more processor overhead than do CI or DSSI.

Larger packet sizes allow higher data-transfer rates (throughput) than do smaller packet sizes, because the fixed per-packet overhead is amortized over more data (see the sketch following this table).

Cable length Interconnects range in length from 3 m to 40 km.
Maximum number of nodes The number of nodes that can connect to an interconnect varies among interconnect types. Be sure to consider this when configuring your OpenVMS Cluster system.
Supported systems and storage Each OpenVMS Cluster node and storage subsystem requires an adapter to connect the internal system bus to the interconnect. First consider the storage and processor I/O performance, then the adapter performance, when choosing an interconnect type.
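
The effect of packet size on throughput can be illustrated with a short calculation. The following sketch assumes a fixed amount of per-packet overhead; the 10 MB/s rate and the 64-byte overhead figure are hypothetical and do not describe any particular interconnect.

    # Illustration only: effective throughput rises with packet size because the
    # fixed per-packet overhead is amortized over a larger payload.
    def effective_throughput(raw_mb_per_s, payload_bytes, overhead_bytes):
        """Portion of the raw rate left after per-packet overhead."""
        efficiency = payload_bytes / (payload_bytes + overhead_bytes)
        return raw_mb_per_s * efficiency

    # Hypothetical 10 MB/s link with 64 bytes of overhead per packet:
    for payload in (128, 512, 4096):
        print(payload, round(effective_throughput(10.0, payload, 64), 2))
    # 128 -> 6.67, 512 -> 8.89, 4096 -> 9.85 (MB/s)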

4.2 Comparison of Interconnect Types

Table 4-2 shows key statistics for a variety of interconnects.

Table 4-2 Comparison of Interconnect Types
Attribute                     CI             DSSI           FDDI           SCSI           MEMORY CHANNEL   Ethernet
Maximum throughput (Mb/s)     140            32             100            160            800              10
Hardware-assisted data link¹  Yes            Yes            No             No             No               No
Connection to storage         Direct and     Direct and     MSCP served    Direct and     MSCP served      MSCP served
                              MSCP served    MSCP served                   MSCP served
Topology                      Radial         Bus            Dual ring      Bus            Radial copper    Linear coaxial
                              coaxial cable                 of trees                      cable            cable
Maximum nodes                 32²            8³             96⁴            8--16⁵         4                96⁴
Maximum length                45 m           6 m⁶           40 km          25 m           3 m              2800 m


¹Hardware-assisted data link reduces the processor overhead required.
²Up to 16 OpenVMS Cluster computers; up to 31 HSC controllers.
³Up to 4 OpenVMS Cluster computers; up to 7 storage devices.
⁴OpenVMS Cluster computers.
⁵Up to 3 OpenVMS Cluster computers; up to 15 storage devices.
⁶DSSI cabling lengths vary based on cabinet cables.
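
Note that Table 4-2 lists throughput in megabits per second (Mb/s), while Section 4.6.2 and Table 4-4 use megabytes per second (MB/s). Dividing by 8 relates the two units, as the following sketch shows; the figures are taken directly from Table 4-2.

    # Convert the Table 4-2 maximum throughput figures from Mb/s to MB/s.
    throughput_mbit_per_s = {
        "CI": 140, "DSSI": 32, "FDDI": 100,
        "SCSI": 160, "MEMORY CHANNEL": 800, "Ethernet": 10,
    }
    for name, mbit in throughput_mbit_per_s.items():
        print(f"{name:15s} {mbit:4d} Mb/s = {mbit / 8:6.1f} MB/s")
    # For example, MEMORY CHANNEL at 800 Mb/s is 100 MB/s (Section 4.6.2), and
    # SCSI at 160 Mb/s is 20 MB/s (the fast-wide entry in Table 4-4).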

4.3 Multiple Interconnects

You can use multiple interconnects to achieve the following benefits:

4.4 Mixed Interconnects

A mixed interconnect is a combination of two or more different types of interconnects in an OpenVMS Cluster system. You can use mixed interconnects to combine the advantages of each type and to expand your OpenVMS Cluster system. For example, an Ethernet cluster that requires more storage can expand with the addition of CI, DSSI, or SCSI connections.

4.5 Interconnects Supported by Alpha and VAX Systems

Table 4-3 shows the OpenVMS Cluster interconnects supported by Alpha and VAX systems. You can also refer to the most recent OpenVMS Cluster SPD to see the latest information on supported interconnects.

Table 4-3 Cluster Interconnects Supported by Systems
Systems CI DSSI SCSI FDDI Ethernet MEMORY CHANNEL
AlphaServer 8400, 8200 X X X X X
AlphaServer 4100, 2100, 2000 X X X X
AlphaServer 1000 X X X X
AlphaServer 400 X X X
AlphaStation series X X
DEC 7000/10000 X X X
DEC 4000 X X
DEC 3000 X
DEC 2000 X
VAX 6000/7000/10000 X X X X
VAX 4000, MicroVAX 3100 X X
VAXstation 4000 X


¹Able to boot over the interconnect as a satellite node.

As Table 4-3 shows, OpenVMS Clusters support a wide range of interconnects: CI, DSSI, SCSI, FDDI, Ethernet, and MEMORY CHANNEL. This power and flexibility mean that almost any configuration will work well. The most important factor to consider is how much I/O you need, as explained in Chapter 2.

In most cases, the I/O requirements will be less than the capabilities of any one OpenVMS Cluster interconnect. Ensure that you have a reasonable surplus of I/O capacity; then choose your interconnects based on other needed features.
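
As a rough planning aid, the following sketch compares an estimated aggregate I/O requirement against the usable capacity of a single interconnect. The node counts, per-node I/O rates, and the 60% usable-capacity factor are hypothetical assumptions for illustration; derive your own I/O figures using the methods in Chapter 2.

    # Hypothetical capacity check: does one interconnect leave a reasonable surplus?
    per_node_io = {"server": 3.0, "satellite": 0.3}   # MB/s per node (estimated)
    node_counts = {"server": 2, "satellite": 10}

    required = sum(per_node_io[kind] * count for kind, count in node_counts.items())

    raw_capacity = 12.5            # MB/s; for example, FDDI at 100 Mb/s
    usable = raw_capacity * 0.6    # assume only part of the raw rate is achievable

    surplus = usable - required
    print(f"required {required:.1f} MB/s, usable {usable:.1f} MB/s, surplus {surplus:.1f} MB/s")
    # With these numbers the surplus is negative (9.0 required versus 7.5 usable),
    # which suggests a faster interconnect or multiple interconnects.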

4.6 MEMORY CHANNEL Interconnect

MEMORY CHANNEL is a high-performance cluster interconnect technology for PCI-based Alpha systems. With the benefits of very low latency, high bandwidth, and direct memory access, MEMORY CHANNEL complements and extends the unique ability of OpenVMS Clusters to work as a single, virtual system.

Three hardware components are required by a node to support a MEMORY CHANNEL connection:

A MEMORY CHANNEL PCI adapter
A link cable
A port in a MEMORY CHANNEL hub

A MEMORY CHANNEL hub is a desktop-PC sized unit that provides a connection among systems. For its initial release in OpenVMS Version 7.1, MEMORY CHANNEL supports up to four Alpha nodes per hub. You can configure systems with two MEMORY CHANNEL adapters in order to provide failover in case an adapter fails. Each adapter must be connected to a different hub.

A MEMORY CHANNEL hub is not required in clusters that comprise only two nodes. In a two-node configuration, one PCI adapter is configured, using module jumpers, as a virtual hub.

4.6.1 Advantages

MEMORY CHANNEL technology provides the following features:

4.6.2 Throughput

The MEMORY CHANNEL interconnect has a very high maximum throughput of 100 MB/s. If a single MEMORY CHANNEL is not sufficient, up to two interconnects (and two MEMORY CHANNEL hubs) can share throughput.

4.6.3 Supported Adapter

The MEMORY CHANNEL adapter is CCMAA-AA. It connects to the PCI bus.

Reference: For complete information about each adapter's features and order numbers, see the Digital Systems and Options Catalog, order number EC-I6601-10.

To access the most recent Digital Systems and Options Catalog on the World Wide Web, use the following URL:

http://www.digital.com/info/soc 

4.7 SCSI Interconnect

The SCSI interconnect is an industry-standard interconnect that supports one or more computers, peripheral devices, and interconnecting components. SCSI is a single-path, daisy-chained, multidrop bus. It is a single 8-bit or 16-bit data path with byte parity for error detection. Two signaling methods are available: inexpensive single-ended signaling, and differential signaling for longer distances.
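
The byte parity mentioned above can be illustrated with a short sketch. Parallel SCSI uses odd parity on the data lines: the parity bit is chosen so that each byte plus its parity bit contains an odd number of 1 bits, which allows any single-bit error in that byte to be detected. The following example is illustrative only.

    # Odd parity: the data byte plus the parity bit must contain an odd number
    # of 1 bits; a single flipped bit violates that property and is detected.
    def odd_parity_bit(byte):
        return 0 if bin(byte & 0xFF).count("1") % 2 == 1 else 1

    def parity_ok(byte, parity):
        return (bin(byte & 0xFF).count("1") + parity) % 2 == 1

    data = 0x5A                       # 0101 1010 has four 1 bits
    p = odd_parity_bit(data)          # parity bit 1 makes the total odd
    print(parity_ok(data, p))         # True
    print(parity_ok(data ^ 0x08, p))  # False: a single-bit error is detected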

In an OpenVMS Cluster, multiple Alpha computers on a single SCSI interconnect can simultaneously access SCSI disks. This type of configuration is called multihost SCSI connectivity. A second type of interconnect is required for node-to-node communication. For multihost access to SCSI storage, the following components are required:

For larger configurations, the following components are available:

Reference: For a detailed description of how to connect SCSI configurations, see Appendix A.

4.7.1 Advantages

The SCSI interconnect offers the following advantages:

4.7.2 Throughput

Throughput for the SCSI interconnect is shown in Table 4-4.

Table 4-4 Maximum Data Transfer Rates in Megabytes per Second
Mode Narrow (8-Bit) Wide (16-Bit)
Standard 5 10
Fast 10 20
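
The entries in Table 4-4 follow from multiplying the bus width in bytes by the transfer rate. The sketch below reproduces the table; the 5 and 10 megatransfer-per-second clock rates are the usual values for standard and fast SCSI and are an assumption here, since this manual does not state them.

    # Reproduce Table 4-4: data rate (MB/s) = bus width in bytes * megatransfers/s.
    widths = {"Narrow (8-bit)": 1, "Wide (16-bit)": 2}   # bytes per transfer
    clocks = {"Standard": 5, "Fast": 10}                 # megatransfers/s (assumed)

    for mode, rate in clocks.items():
        for name, nbytes in widths.items():
            print(f"{mode:8s} {name:14s} {nbytes * rate:3d} MB/s")
    # Standard narrow 5, standard wide 10, fast narrow 10, fast wide 20,
    # matching Table 4-4.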

4.7.3 SCSI Interconnect Distances

The maximum length of the SCSI interconnect is determined by the signaling method used in the configuration and, for single-ended signaling, by the data transfer rate.

There are two types of electrical signaling for SCSI interconnects: single-ended and differential. Both types can operate in standard mode or fast mode. For differential signaling, the maximum possible SCSI interconnect length is the same whether you use standard or fast mode.

Table 4-5 summarizes how the type of signaling method affects SCSI interconnect distances.

Table 4-5 Maximum SCSI Interconnect Distances
Signaling Technique Rate of Data Transfer Maximum Cable Length
Single ended Standard 6 m¹
Single ended Fast 3 m
Differential Standard or Fast 25 m


¹The SCSI standard specifies a maximum length of 6 m for this interconnect. However, Digital recommends that, where possible, you limit the cable length to 4 m to ensure the highest level of data integrity.
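
The distance rules in Table 4-5 can be expressed as a small lookup, as in the following sketch. The table data and the 4 m recommendation come from this section; the function and its name are illustrative only.

    # Maximum SCSI cable lengths in meters, from Table 4-5.
    MAX_CABLE_M = {
        ("single-ended", "standard"): 6,   # Digital recommends 4 m or less where possible
        ("single-ended", "fast"):     3,
        ("differential", "standard"): 25,
        ("differential", "fast"):     25,
    }

    def cable_length_ok(signaling, mode, length_m):
        """Illustrative check of a proposed bus length against Table 4-5."""
        return length_m <= MAX_CABLE_M[(signaling.lower(), mode.lower())]

    print(cable_length_ok("differential", "fast", 20))   # True
    print(cable_length_ok("single-ended", "fast", 4))    # False: exceeds 3 m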

4.7.4 Supported Adapters, Bus Types, and Computers

Table 4-6 shows SCSI adapters with the internal buses and computers they support.

Table 4-6 SCSI Adapters
Adapter Internal Bus Supported Computers
Embedded (NCR-810 based)/KZPAA¹ PCI AlphaServer 400
AlphaServer 1000
AlphaServer 2000
AlphaServer 2100
AlphaStation 200
AlphaStation 250
AlphaStation 400
AlphaStation 600
KZPSA² PCI Supported on all Alpha computers that support KZPSA in single-host configurations.³
KZTSA² TURBOchannel DEC 3000


¹Single-ended.
²Fast-wide differential (FWD).
³See the system-specific hardware manual.

Reference: For complete information about each adapter's features and order number, see the Digital Systems and Options Catalog, order number EC-I6601-10.

To access the most recent Digital Systems and Options Catalog on the World Wide Web, use the following URL:

http://www.digital.com/info/soc 

4.8 CI Interconnect

The CI interconnect is a radial bus through which OpenVMS Cluster systems communicate. It comprises the following components:

4.8.1 Advantages

The CI interconnect offers the following advantages:

4.8.2 Throughput

The CI interconnect has a high maximum throughput (140 Mb/s, as shown in Table 4-2). CI adapters use high-performance microprocessors that perform many of the processing activities usually performed by the CPU. As a result, they consume minimal CPU processing power.

Because the effective throughput of the CI bus is high, a single CI interconnect is not likely to be a bottleneck in a large OpenVMS Cluster configuration. If a single CI is not sufficient, multiple CI interconnects can increase throughput.

4.8.3 Supported Adapters and Bus Types

The following are the CI adapters and the internal bus that each supports:

Reference: For complete information about each adapter's features and order numbers, see the Digital Systems and Options Catalog, order number EC-I6601-10.

To access the most recent Digital Systems and Options Catalog on the World Wide Web, use the following URL:

http://www.digital.com/info/soc 

4.8.4 Multiple CI Adapters

You can configure multiple CI adapters on some OpenVMS nodes. Multiple star couplers can be used in the same OpenVMS Cluster.

With multiple CI adapters on a node, adapters can share the traffic load. This reduces I/O bottlenecks and increases the total system I/O throughput. Table 4-7 lists the limits for multiple CI adapters per system.

Table 4-7 Maximum CI Adapters per System
System                          CIPCA   CIBCA--A   CIBCA--B   CIXCD
AlphaServer 8400                26      -          -          10
AlphaServer 8200                26      -          -          -
AlphaServer 4100, 2100, 2000    3--4    -          -          -
DEC 7000/10000                  -       -          -          10
VAX 6000                        -       1          4          4
VAX 7000, 10000                 -       -          -          10

Reference: For more extensive information about the CIPCA adapter, see Appendix C.

4.8.5 Configuration Guidelines for CI Clusters

Use the following guidelines when configuring systems in a CI cluster:

4.9 Digital Storage Systems Interconnect (DSSI)

DSSI is a single-path, daisy-chained, multidrop bus. It provides a single, 8-bit parallel data path with both byte parity and packet checksum for error detection.
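
Byte parity and a packet checksum provide two layers of error detection: parity catches a single flipped bit within a byte, while a packet-level checksum can catch multi-bit errors that leave each byte's parity unchanged. The following sketch uses a simple additive checksum purely to illustrate the idea; it is not DSSI's actual checksum algorithm.

    # Illustration only: two flipped bits in one byte preserve that byte's parity
    # but still change a packet-level checksum.
    def ones_count_is_even(byte):
        return bin(byte & 0xFF).count("1") % 2 == 0

    def additive_checksum(packet):
        return sum(packet) & 0xFF      # simple stand-in, not DSSI's algorithm

    packet = bytearray(b"\x12\x34\x56\x78")
    original = additive_checksum(packet)

    packet[1] ^= 0b00000011            # flip two bits in one byte
    print(ones_count_is_even(0x34) == ones_count_is_even(packet[1]))  # True: parity unchanged
    print(additive_checksum(packet) == original)                      # False: checksum differs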

4.9.1 Advantages

DSSI offers the following advantages:

4.9.2 Maintenance Consideration

DSSI storage often resides in the same cabinet as the CPUs. For these configurations, the whole system may need to be shut down for service, unlike configurations in which the systems and storage devices are housed separately.

4.9.3 Throughput

The maximum throughput is 32 Mb/s.

DSSI has highly intelligent adapters that require minimal CPU processing overhead.

4.9.4 DSSI Adapter Types

There are two types of DSSI adapters:

4.9.5 Supported Adapters and Bus Types

The following are the DSSI adapters and the internal bus that each supports:

Reference: For complete information about each adapter's features and order numbers, see the Digital Systems and Options Catalog, order number EC-I6601-10.

To access the most recent Digital Systems and Options Catalog on the World Wide Web, use the following URL:

http://www.digital.com/info/soc 

4.9.6 DSSI-Connected Storage

DSSI configurations use HSD intelligent controllers to connect disk drives to an OpenVMS Cluster. HSD controllers serve the same purpose with DSSI as HSJ controllers serve with CI: they enable you to configure more storage.

Alternatively, DSSI configurations use integrated storage elements (ISEs) connected directly to the DSSI bus. Each ISE contains either a disk and disk controller or a tape and tape controller.

4.9.7 Multiple DSSI Adapters

Multiple DSSI adapters are supported for some systems, enabling higher throughput than with a single DSSI bus.

Table 4-8 lists the limitations for multiple DSSI adapters. You can also refer to the most recent OpenVMS Cluster SPD for the latest information about DSSI adapters.

Table 4-8 DSSI Adapters per System
System                                       Embedded   KFPSA¹   KFQSA²   KFESA   KFESB   KFMSA³   KFMSB³
AlphaServer 8400                             -          4        -        -       -       -        12
AlphaServer 8200, 4100                       -          4        -        -       -       -        -
AlphaServer 2100                             -          4        -        -       -       -        -
AlphaServer 2000, 1000                       -          4        -        -       4       -        -
DEC 4000 (embedded N710)                     2          -        -        -       -       -        -
DEC 7000/10000                               -          -        -        -       -       -        12
MicroVAX II, 3500, 3600, 3800, 3900          -          -        2        -       -       -        -
MicroVAX 3300/3400 (embedded EDA640)         1          -        2        -       -       -        -
VAX 4000 Model 105A (embedded SHAC⁴)         1 + 1⁴     -        2⁵       -       -       -        -
VAX 4000 Model 200 (embedded SHAC⁴)          1          -        2        -       -       -        -
VAX 4000 Model 300, 400, 500, 600            2          -        2        -       -       -        -
VAX 4000 Model 505A/705A (embedded SHAC⁴)    2 + 2⁶     -        2        -       -       -        -
VAX 6000                                     -          -        -        -       -       6        -
VAX 7000                                     -          -        -        -       -       -        12


¹The KFPSA cannot be configured on the same DSSI as a KFMSB. Ensure that the specific AlphaServer system has sufficient PCI backplane slots to accept the number of KFPSAs required.
²The KFQSA cannot be used for node-to-node cluster communication. An additional interconnect must be configured between systems that use KFQSA for access to shared storage.
³Each KFMSA/B (XMI-to-DSSI) adapter contains two DSSI VAX system ports.
⁴Single Host Adapter Chip (SHAC).
⁵Requires a Q-bus expansion enclosure.
⁶Additional adapter embedded on daughter card.

4.9.8 Configuration Guidelines for DSSI Clusters

The following configuration guidelines apply to all DSSI clusters:



Copyright © Digital Equipment Corporation 1996. All Rights Reserved.
