The six interconnects described in this chapter share some general characteristics. Table 4-1 describes these characteristics.
Characteristic | Description |
---|---|
Throughput | The quantity of data transferred across the interconnect. Some interconnects require more processor overhead than others. For example, Ethernet and FDDI interconnects require more processor overhead than do CI or DSSI. Larger packet sizes allow higher data-transfer rates (throughput) than do smaller packet sizes. |
Cable length | Interconnects range in length from 3 m to 40 km. |
Maximum number of nodes | The number of nodes that can connect to an interconnect varies among interconnect types. Be sure to consider this when configuring your OpenVMS Cluster system. |
Supported systems and storage | Each OpenVMS Cluster node and storage subsystem requires an adapter to connect the internal system bus to the interconnect. First consider the storage and processor I/O performance, then the adapter performance, when choosing an interconnect type. |
Table 4-2 shows key statistics for a variety of interconnects.
Attribute | CI | DSSI | FDDI | SCSI | MEMORY CHANNEL | Ethernet |
---|---|---|---|---|---|---|
Maximum throughput (Mb/s) | 140 | 32 | 100 | 160 | 800 | 10 |
Hardware-assisted data link¹ | Yes | Yes | No | No | No | No |
Connection to storage | Direct and MSCP served | Direct and MSCP served | MSCP served | Direct and MSCP served | MSCP served | MSCP served |
Topology | Radial coaxial cable | Bus | Dual ring of trees | Bus | Radial copper cable | Linear coaxial cable |
Maximum nodes | 32² | 8³ | 96⁴ | 8--16⁵ | 4 | 96⁴ |
Maximum length | 45 m | 6 m⁶ | 40 km | 25 m | 3 m | 2800 m |
You can use multiple interconnects to achieve the following benefits:
A mixed interconnect is a combination of two or more different types of interconnects in an OpenVMS Cluster system. You can use mixed interconnects to combine the advantages of each type and to expand your OpenVMS Cluster system. For example, an Ethernet cluster that requires more storage can expand with the addition of CI, DSSI, or SCSI connections.
Table 4-3 shows the OpenVMS Cluster interconnects supported by Alpha and VAX systems. You can also refer to the most recent OpenVMS Cluster SPD to see the latest information on supported interconnects.
Systems | CI | DSSI | SCSI | FDDI | Ethernet | MEMORY CHANNEL |
---|---|---|---|---|---|---|
AlphaServer 8400, 8200 | X | X | X | X¹ | X | X |
AlphaServer 4100, 2100, 2000 | X | X | X | X¹ | X¹ | X |
AlphaServer 1000 | X | X | X | X¹ | X | |
AlphaServer 400 | X | X | X | X¹ | ||
AlphaStation series | X | X | X¹ | |||
DEC 7000/10000 | X | X | X¹ | X | ||
DEC 4000 | X | X | X¹ | |||
DEC 3000 | X | X¹ | X¹ | |||
DEC 2000 | X | X¹ | ||||
VAX 6000/7000/10000 | X | X | X | X | ||
VAX 4000, MicroVAX 3100 | X | X | X¹ | |||
VAXstation 4000 | X | X¹ |
As Table 4-3 shows, OpenVMS Clusters support a wide range of interconnects: CI, DSSI, SCSI, FDDI, Ethernet, and MEMORY CHANNEL. This flexibility means that most configurations can be satisfied by more than one interconnect type. The most important factor to consider is how much I/O capacity you need, as explained in Chapter 2.
In most cases, the I/O requirements will be less than the capabilities of any one OpenVMS Cluster interconnect. Ensure that you have a reasonable surplus of I/O capacity, and then choose your interconnects based on other needed features.
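The following Python sketch illustrates one way to make that comparison. The throughput figures come from Table 4-2; the workload estimate and the 50 percent headroom margin are arbitrary values chosen only for the example and are not part of these guidelines.

```python
# Rough capacity check: compare an estimated aggregate I/O load against the
# raw throughput of each interconnect listed in Table 4-2.
INTERCONNECT_MBITS_PER_SEC = {
    "CI": 140,
    "DSSI": 32,
    "FDDI": 100,
    "SCSI": 160,
    "MEMORY CHANNEL": 800,
    "Ethernet": 10,
}

def candidates(required_mb_per_sec, headroom=0.5):
    """Return interconnects whose raw bandwidth exceeds the estimated load
    plus a surplus margin (50 percent by default)."""
    needed_mbits = required_mb_per_sec * 8 * (1 + headroom)
    return [name for name, mbits in INTERCONNECT_MBITS_PER_SEC.items()
            if mbits >= needed_mbits]

# Example: a hypothetical workload of 4 MB/s of cluster I/O.
print(candidates(4.0))   # interconnects with a comfortable surplus
```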
MEMORY CHANNEL is a high-performance cluster interconnect technology for PCI-based Alpha systems. With the benefits of very low latency, high bandwidth, and direct memory access, MEMORY CHANNEL complements and extends the unique ability of OpenVMS Clusters to work as a single, virtual system.
Three hardware components are required by a node to support a MEMORY CHANNEL connection:
A MEMORY CHANNEL hub is a desktop-PC-sized unit that provides a connection among systems. For its initial release in OpenVMS Version 7.1, MEMORY CHANNEL supports up to four Alpha nodes per hub. You can configure a system with two MEMORY CHANNEL adapters to provide failover if one adapter fails. Each adapter must be connected to a different hub.
A MEMORY CHANNEL hub is not required in clusters that comprise only two nodes. In a two-node configuration, one PCI adapter is configured, using module jumpers, as a virtual hub.
MEMORY CHANNEL technology provides the following features:
The MEMORY CHANNEL interconnect has a very high maximum throughput of 100 MB/s. If a single MEMORY CHANNEL is not sufficient, up to two interconnects (and two MEMORY CHANNEL hubs) can be configured to share the traffic load.
The MEMORY CHANNEL adapter is CCMAA-AA. It connects to the PCI bus.
Reference: For complete information about each adapter's features and order numbers, see the Digital Systems and Options Catalog, order number EC-I6601-10.
To access the most recent Digital Systems and Options Catalog on the World Wide Web, use the following URL:
http://www.digital.com/info/soc
The SCSI interconnect is an industry-standard interconnect that supports one or more computers, peripheral devices, and interconnecting components. SCSI is a single-path, daisy-chained, multidrop bus. It is a single 8-bit or 16-bit data path with byte parity for error detection. Both inexpensive single-ended signaling and differential signaling, which supports longer distances, are available.
In an OpenVMS Cluster, multiple Alpha computers on a single SCSI interconnect can simultaneously access SCSI disks. This type of configuration is called multihost SCSI connectivity. A second type of interconnect is required for node-to-node communication. For multihost access to SCSI storage, the following components are required:
For larger configurations, the following components are available:
Reference: For a detailed description of how to connect SCSI configurations, see Appendix A.
The SCSI interconnect offers the following advantages:
Throughput for the SCSI interconnect is shown in Table 4-4. The values are in megabytes per second (MB/s).
Mode | Narrow (8-Bit) | Wide (16-Bit) |
---|---|---|
Standard | 5 | 10 |
Fast | 10 | 20 |
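The figures in Table 4-4 follow from the bus width and the transfer rate: standard SCSI clocks 5 million transfers per second and fast SCSI clocks 10 million, with 1 byte (narrow) or 2 bytes (wide) moved per transfer. The short Python sketch below reproduces the table from that relationship; it is illustrative only, and the clock rates are the standard SCSI-2 figures rather than values stated elsewhere in this chapter.

```python
# Table 4-4 values follow from (transfers per second) x (bytes per transfer).
def scsi_mb_per_sec(mode, width_bits):
    transfers_per_sec = {"Standard": 5, "Fast": 10}[mode]  # millions of transfers/s
    bytes_per_transfer = width_bits // 8                   # 1 (narrow) or 2 (wide)
    return transfers_per_sec * bytes_per_transfer

for mode in ("Standard", "Fast"):
    for width in (8, 16):
        print(f"{mode}, {width}-bit: {scsi_mb_per_sec(mode, width)} MB/s")
# Prints 5, 10, 10, and 20 MB/s, matching Table 4-4.
```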
The maximum length of the SCSI interconnect is determined by the signaling method used in the configuration and, for single-ended signaling, by the data transfer rate.
There are two types of electrical signaling for SCSI interconnects: single-ended and differential. Both types can operate in standard mode or fast mode. For differential signaling, the maximum SCSI interconnect length is the same whether you use standard or fast mode.
Table 4-5 summarizes how the type of signaling method affects SCSI interconnect distances.
Signaling Technique | Rate of Data Transfer | Maximum Cable Length |
---|---|---|
Single ended | Standard | 6 m¹ |
Single ended | Fast | 3 m |
Differential | Standard or Fast | 25 m |
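One way to apply these limits when planning a configuration is a simple lookup keyed on signaling technique and mode, as in the following Python sketch. The helper is hypothetical, not a Digital-supplied tool; the length limits are those of Table 4-5, in meters.

```python
# Table 4-5 limits, in meters, keyed by (signaling technique, mode).
MAX_SCSI_CABLE_M = {
    ("single-ended", "Standard"): 6,
    ("single-ended", "Fast"): 3,
    ("differential", "Standard"): 25,
    ("differential", "Fast"): 25,
}

def scsi_length_ok(signaling, mode, total_cable_m):
    """True if the summed cable length fits within the Table 4-5 limit."""
    return total_cable_m <= MAX_SCSI_CABLE_M[(signaling, mode)]

print(scsi_length_ok("single-ended", "Fast", 2.5))   # True
print(scsi_length_ok("single-ended", "Fast", 4.0))   # False: exceeds the 3 m limit
```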
Table 4-6 shows SCSI adapters with the internal buses and computers they support.
Adapter | Internal Bus | Supported Computers |
---|---|---|
Embedded (NCR-810 based)/KZPAA¹ | PCI | AlphaServer 400, AlphaServer 1000, AlphaServer 2000, AlphaServer 2100, AlphaStation 200, AlphaStation 250, AlphaStation 400, AlphaStation 600 |
KZPSA² | PCI | Supported on all Alpha computers that support KZPSA in single-host configurations.³ |
KZTSA² | TURBOchannel | DEC 3000 |
Reference: For complete information about each adapter's features and order number, see the Digital Systems and Options Catalog, order number EC-I6601-10.
To access the most recent Digital Systems and Options Catalog on the World Wide Web, use the following URL:
http://www.digital.com/info/soc
The CI interconnect is a radial bus through which OpenVMS Cluster systems communicate. It comprises the following components:
The CI interconnect offers the following advantages:
The CI interconnect has a high maximum throughput. CI adapters use high-performance microprocessors that perform many of the processing activities usually performed by the CPU. As a result, they consume minimal CPU processing power.
Because the effective throughput of the CI bus is high, a single CI interconnect is not likely to be a bottleneck in a large OpenVMS Cluster configuration. If a single CI is not sufficient, multiple CI interconnects can increase throughput.
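As a rough illustration of that sizing step, the following Python sketch estimates how many CI interconnects a given aggregate load would require, using the 140 Mb/s figure from Table 4-2. The 30 MB/s workload in the example is hypothetical.

```python
import math

CI_MBITS_PER_SEC = 140   # raw CI throughput from Table 4-2

def ci_interconnects_needed(required_mb_per_sec):
    """Rough count of CI interconnects for a given aggregate load in MB/s."""
    required_mbits = required_mb_per_sec * 8
    return max(1, math.ceil(required_mbits / CI_MBITS_PER_SEC))

# Example: a hypothetical 30 MB/s aggregate load calls for two CI interconnects.
print(ci_interconnects_needed(30.0))   # 2
```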
The following are the CI adapters and the internal bus that each supports:
Reference: For complete information about each adapter's features and order numbers, see the Digital Systems and Options Catalog, order number EC-I6601-10.
To access the most recent Digital Systems and Options Catalog on the World Wide Web, use the following URL:
http://www.digital.com/info/soc
You can configure multiple CI adapters on some OpenVMS nodes. Multiple star couplers can be used in the same OpenVMS Cluster.
With multiple CI adapters on a node, adapters can share the traffic load. This reduces I/O bottlenecks and increases the total system I/O throughput. Table 4-7 lists the limits for multiple CI adapters per system.
System | CIPCA | CIBCA--A | CIBCA--B | CIXCD |
---|---|---|---|---|
AlphaServer 8400 | 26 | | | 10 |
AlphaServer 8200 | 26 | | | |
AlphaServer 4100, 2100, 2000 | 3--4 | | | |
DEC 7000/10000 | | | | 10 |
VAX 6000 | | 1 | 4 | 4 |
VAX 7000, 10000 | | | | 10 |
Reference: For more extensive information about the CIPCA adapter, see Appendix C.
Use the following guidelines when configuring systems in a CI cluster:
DSSI is a single-path, daisy-chained, multidrop bus. It provides a single, 8-bit parallel data path with both byte parity and packet checksum for error detection.
DSSI offers the following advantages:
DSSI storage often resides in the same cabinet as the CPUs. In these configurations, the whole system may need to be shut down for service, unlike configurations in which systems and storage devices are housed separately.
The maximum throughput is 32 Mb/s.
DSSI has highly intelligent adapters that require minimal CPU processing overhead.
There are two types of DSSI adapters:
The following are the DSSI adapters and the internal bus that each supports:
Reference: For complete information about each adapter's features and order numbers, see the Digital Systems and Options Catalog, order number EC-I6601-10.
To access the most recent Digital Systems and Options Catalog on the World Wide Web, use the following URL:
http://www.digital.com/info/soc
DSSI configurations use HSD intelligent controllers to connect disk drives to an OpenVMS Cluster. HSD controllers serve the same purpose with DSSI as HSJ controllers serve with CI: they enable you to configure more storage.
Alternatively, DSSI configurations use integrated storage elements (ISEs) connected directly to the DSSI bus. Each ISE contains either a disk and disk controller or a tape and tape controller.
Multiple DSSI adapters are supported for some systems, enabling higher throughput than with a single DSSI bus.
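The following Python sketch gives a rough upper bound on the combined bandwidth of several DSSI buses on one node, using the 32 Mb/s figure from Table 4-2. The adapter count in the example is hypothetical and must also respect the per-system limits in Table 4-8.

```python
DSSI_MBITS_PER_BUS = 32   # raw DSSI throughput from Table 4-2

def dssi_aggregate_mb_per_sec(adapter_count):
    """Upper bound, in MB/s, on combined throughput for N DSSI buses."""
    return adapter_count * DSSI_MBITS_PER_BUS / 8

# Example: a hypothetical node with 4 DSSI adapters.
print(dssi_aggregate_mb_per_sec(4))   # 16.0 MB/s upper bound
```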
Table 4-8 lists the limitations for multiple DSSI adapters. You can also refer to the most recent OpenVMS Cluster SPD for the latest information about DSSI adapters.
System | Embedded | KFPSA¹ | KFQSA² | KFESA | KFESB | KFMSA³ | KFMSB³ |
---|---|---|---|---|---|---|---|
AlphaServer 8400 | - | 4 | - | - | - | - | 12 |
AlphaServer 8200, 4100 | - | 4 | - | - | - | - | - |
AlphaServer 2100 | - | 4 | - | - | - | - | - |
AlphaServer 2000, 1000 | - | 4 | - | - | 4 | - | - |
DEC 4000 (embedded N710) | 2 | - | - | - | - | - | - |
DEC 7000/10000 | - | - | - | - | - | - | 12 |
MicroVAX II, 3500, 3600, 3800, 3900 | - | - | 2 | - | - | - | - |
MicroVAX 3300/3400 (embedded EDA640) | 1 | - | 2 | - | - | - | - |
VAX 4000 Model 105A (embedded SHAC⁴) | 1 + 1⁴ | - | 2⁵ | - | - | - | - |
VAX 4000 Model 200 (embedded SHAC⁴) | 1 | - | 2 | - | - | - | - |
VAX 4000 Model 300, 400, 500, 600 | 2 | - | 2 | - | - | - | - |
VAX 4000 Model 505A/705A (embedded SHAC³) | 2 + 2⁶ | - | 2 | - | - | - | - |
VAX 6000 | - | - | - | - | - | 6 | - |
VAX 7000 | - | - | - | - | - | 12 | - |
The following configuration guidelines apply to all DSSI clusters: