Reference: For more information about DSSI, see the DSSI OpenVMS Cluster Installation and Troubleshooting Manual.
The Ethernet interconnect provides single path connections within an OpenVMS Cluster system and a local area network (LAN). Ethernet and FDDI are both LAN-based interconnects. See Section 4.11 for information about FDDI and for general LAN-based cluster guidelines.
The Ethernet interconnect offers the following advantages:
The maximum throughput of the Ethernet interconnect is 10 Mb/s. Because Ethernet adapters do not provide hardware assistance, processor overhead is higher than for CI or DSSI.
General network traffic on an Ethernet can reduce the throughput available for OpenVMS Cluster communication. The Ethernet can become an I/O bottleneck in an OpenVMS Cluster system. Therefore, consider the capacity of the total network design when you configure an OpenVMS Cluster system with many Ethernet-connected nodes or when the Ethernet also supports a large number of PCs or printers.
Reference: For information about reducing congestion on an Ethernet LAN, see Section 7.7.7.
If only Ethernet paths are available, the OpenVMS Cluster software chooses among them on the basis of latency (computed network delay). The software selects the channel with the least latency; if delays are equal, either path can be used. The network delay across each segment is recalculated approximately every 3 seconds, and traffic is then balanced across all communication paths between local and remote adapters.
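This selection rule can be illustrated with a short sketch. The following Python fragment is purely illustrative; the names (`Channel`, `usable_channels`) are hypothetical, and the actual algorithm is implemented inside the OpenVMS Cluster software, not in user code:

```python
# Illustrative sketch of latency-based channel selection. Names are
# hypothetical; the real algorithm lives inside the OpenVMS Cluster
# software, not in user code.
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    latency: float  # computed network delay, re-measured about every 3 s

def usable_channels(channels):
    """Return the channel(s) eligible to carry cluster traffic:
    the least-latency channel wins, and channels with equal delays
    are all eligible, with traffic balanced across them."""
    best = min(ch.latency for ch in channels)
    return [ch for ch in channels if ch.latency == best]

paths = [Channel("ESA0 -> remote", 0.8), Channel("ESA1 -> remote", 0.8)]
print([ch.name for ch in usable_channels(paths)])  # equal delays: both usable
```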
The following are Ethernet adapters and the internal bus that each supports:
Reference: For complete information about each adapter's features and order numbers, see the Digital Systems and Options Catalog, order number EC-I6601-10.
To access the most recent Digital Systems and Options Catalog on the World Wide Web, use the following URL:
http://www.digital.com/info/soc
You can use transparent Ethernet-to-FDDI translating bridges to provide an interconnect between a 10-Mb/s Ethernet segment and a 100-Mb/s FDDI ring. These Ethernet-to-FDDI bridges are also called "10/100" bridges. They perform high-speed translation of network data packets between the FDDI and Ethernet frame formats.
Reference: See Figure 7-22 for an example of these bridges.
FDDI is an ANSI standard LAN interconnect that uses fiber-optic cable. FDDI supports OpenVMS Cluster functionality over greater distances than other interconnects. FDDI also augments the Ethernet by providing a high-speed interconnect for multiple Ethernet segments in a single OpenVMS Cluster system.
FDDI offers the following advantages:
The FDDI standards define the following two types of nodes:
FDDI limits the total fiber path to 200 km (125 miles). The maximum distance between adjacent FDDI devices is 40 km with single-mode fiber and 2 km with multimode fiber. In order to control communication delay, however, Digital recommends limiting the maximum distance between any two OpenVMS Cluster nodes on an FDDI ring to 40 km.
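These distance rules lend themselves to a simple configuration check. The following Python sketch is a hypothetical helper, not a Digital tool; the function and constant names are invented for illustration and encode only the limits quoted above:

```python
# Hypothetical topology check encoding only the limits quoted above:
# total fiber path <= 200 km; adjacent hops <= 40 km (single-mode)
# or <= 2 km (multimode). Function and constant names are invented.
HOP_LIMIT_KM = {"single-mode": 40.0, "multimode": 2.0}
TOTAL_PATH_LIMIT_KM = 200.0

def check_ring(segments):
    """segments: list of (length_km, fiber_type) tuples, one per hop."""
    problems = []
    total = sum(length for length, _ in segments)
    if total > TOTAL_PATH_LIMIT_KM:
        problems.append(f"total path {total} km exceeds {TOTAL_PATH_LIMIT_KM} km")
    for i, (length, fiber) in enumerate(segments):
        if length > HOP_LIMIT_KM[fiber]:
            problems.append(f"hop {i}: {length} km exceeds the "
                            f"{HOP_LIMIT_KM[fiber]} km {fiber} limit")
    return problems

print(check_ring([(35.0, "single-mode"), (1.5, "multimode")]))  # [] -> within limits
```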
The maximum throughput of the FDDI interconnect (100 Mb/s) is 10 times higher than that of Ethernet.
In addition, FDDI supports transfers using large packets (up to 4468 bytes). Only nodes connected exclusively by FDDI can make use of large packets.
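A back-of-the-envelope calculation shows the combined effect of the higher bit rate and the larger packets. The sketch below is illustrative only: it ignores framing, protocol overhead, and media contention, and it assumes a standard Ethernet payload of roughly 1500 bytes:

```python
# Back-of-the-envelope comparison. Ignores framing, protocol overhead,
# and media contention; assumes ~1500 bytes of standard Ethernet payload
# versus the 4468-byte large packets cited above.
def cost(bytes_total, packet_bytes, mbits_per_sec):
    packets = -(-bytes_total // packet_bytes)        # ceiling division
    wire_time = bytes_total * 8 / (mbits_per_sec * 1e6)
    return packets, wire_time

payload = 1_000_000  # transfer 1 MB
print(cost(payload, 1500, 10))    # Ethernet: 667 packets, 0.8 s on the wire
print(cost(payload, 4468, 100))   # FDDI:     224 packets, 0.08 s on the wire
```

Fewer packets per transfer also means less per-packet processing on the host, which matters because FDDI adapters, as noted below, do not assist with protocol processing.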
Because FDDI adapters do not provide processing assistance for OpenVMS Cluster protocols, more processing power is required than for CI or DSSI.
Following is a list of FDDI adapters and the buses they support:
Reference: For complete information about each adapter's features and order numbers, see the Digital Systems and Options Catalog, order number EC-I6601-10.
To access the most recent Digital Systems and Options Catalog on the World Wide Web, use the following URL:
http://www.digital.com/info/soc
FDDI-based configurations use FDDI for node-to-node communication. The following general guidelines apply to FDDI configurations:
Because FDDI is ideal for spanning great distances, you may want to supplement its high throughput with high availability by ensuring that critical nodes are connected to multiple FDDI rings. Physical separation of the two FDDI paths helps ensure that the configuration is disaster tolerant.
If only FDDI paths are available, the OpenVMS Cluster software chooses among them on the basis of latency (computed network delay). The software selects the channel with the least latency; if delays are equal, either path can be used. The network delay across each segment is recalculated approximately every 3 seconds, and traffic is balanced across all communication paths between local and remote adapters.
This chapter describes how to design a storage subsystem. The design process involves the following steps:
The rest of this chapter contains sections that explain these steps in detail.
In an OpenVMS Cluster, storage choices include the StorageWorks family of products, a modular storage expansion system based on the Small Computer Systems Interface (SCSI-2) standard. StorageWorks helps you configure complex storage subsystems by choosing from the following modular elements:
Consider the following criteria when choosing storage devices:
One of the benefits of OpenVMS Cluster systems is that you can connect storage devices directly to OpenVMS Cluster interconnects to give member systems access to storage.
In an OpenVMS Cluster system, storage devices can be connected to the following interconnects:
Table 5-1 lists the kinds of storage devices that you can attach to specific interconnects.
Storage Interconnect | Storage Devices |
---|---|
CI | HSJ and HSC controllers and SCSI storage |
DSSI | HSD controllers, ISEs, and SCSI storage |
SCSI | HSZ controllers and SCSI storage |
FDDI | HSxxx controllers and SCSI storage |
Storage capacity is the amount of space needed on storage devices to hold system, application, and user files. Estimating this capacity helps you determine the amount of storage needed for your OpenVMS Cluster configuration.
To estimate your online storage capacity requirements, add together the storage requirements for your OpenVMS Cluster system's software, as explained in Table 5-2.
Software Component | Description |
---|---|
OpenVMS operating system | Estimate the number of blocks¹ required by the OpenVMS operating system. Reference: Your OpenVMS installation documentation and Software Product Description (SPD) contain this information. |
Page, swap, and dump files | Use AUTOGEN to determine the amount of disk space required for page, swap, and dump files. Reference: The OpenVMS System Manager's Manual provides information about calculating and modifying these file sizes. |
Site-specific utilities and data | Estimate the disk storage requirements for site-specific utilities, command procedures, online documents, and associated files. |
Digital and third-party application programs | Estimate the space required for each Digital and third-party application product to be installed on your OpenVMS Cluster system, using information from the application suppliers. Reference: Consult the appropriate Software Product Description (SPD) to estimate the space required for normal operation of any layered product you need to use. |
User-written programs | Estimate the space required for user-written programs and their associated databases. |
Databases | Estimate the size of each database. This information should be available in the documentation pertaining to the application-specific database. |
User data | Estimate the disk space required for user files. |
Total requirements | The sum of the preceding estimates is the approximate amount of disk storage presently needed for your OpenVMS Cluster system configuration. |

¹One block equals 512 bytes.
Before you finish determining your total disk capacity requirements, you may also want to consider future growth for online storage and for backup storage.
For example, at what rate are new files created in your OpenVMS Cluster system? By estimating this number and adding it to the total disk storage requirements that you calculated using Table 5-2, you can obtain a total that more accurately represents your current and future needs for online storage.
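As a worked example of the arithmetic in Table 5-2 combined with the growth estimate just described, the following Python sketch sums hypothetical per-component figures. Every number here is invented for illustration; real values should come from your SPDs, AUTOGEN output, and application suppliers:

```python
# Hypothetical worked example of the Table 5-2 arithmetic. All figures
# are invented for illustration; take real numbers from your SPDs,
# AUTOGEN output, and application suppliers. One OpenVMS block = 512 bytes.
BLOCK = 512  # bytes

requirements_blocks = {
    "OpenVMS operating system":       250_000,
    "Page, swap, and dump files":     200_000,
    "Site-specific utilities/data":    50_000,
    "Layered and 3rd-party products": 300_000,
    "User-written programs":           40_000,
    "Databases":                      500_000,
    "User data":                      800_000,
}

current = sum(requirements_blocks.values())
growth_per_year = 150_000          # estimated new blocks created per year
planning_horizon_years = 2

total = current + growth_per_year * planning_horizon_years
print(f"current need: {current * BLOCK / 2**20:6.0f} MB")
print(f"with growth : {total * BLOCK / 2**20:6.0f} MB")
```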
To determine backup storage requirements, consider how you deal with obsolete or archival data. In most storage subsystems, old files become unused while new files come into active use. Moving old files from online to backup storage on a regular basis frees online storage for new files and keeps online storage requirements under control.
Planning for adequate backup storage capacity can make archiving procedures more effective and reduce the capacity requirements for online storage.
Estimating your anticipated disk work load and analyzing the resulting data can help you determine your disk performance requirements.
You can use the Monitor utility and DECamds to help you determine which performance optimizer best meets your application and business needs.
Performance optimizers are software or hardware products that improve storage performance for applications and data. Table 5-3 explains how various performance optimizers work.
Reference: See Section 7.8 for more information about how these performance optimizers increase an OpenVMS Cluster's ability to scale I/Os.
Some costs are associated with optimizing your storage subsystems for higher availability. Part of analyzing availability costs is weighing the cost of protecting data against the cost of unavailable data during failures. Depending on the nature of your business, the impact of storage subsystem failures may be low, moderate, or high.
Device and data availability options reduce and sometimes negate the impact of storage subsystem failures.
Depending on your availability requirements, choose among the availability optimizers described in Table 5-4 for applications and data with the greatest need.
Availability Optimizer | Description |
---|---|
Redundant access paths | Protect against hardware failures along the path to the device by configuring redundant access paths to the data. |
Volume Shadowing for OpenVMS software | Replicates data written to a virtual disk by writing the data to one or more physically identical disks that form a shadow set. With replicated data, users can access data even when one disk becomes unavailable. If one shadow set member fails, the shadowing software removes the drive from the shadow set, and processing continues with the remaining drives. Shadowing is transparent to applications and allows data storage and delivery during media, disk, controller, and interconnect failure. A shadow set can contain up to three members, and shadow set members can be anywhere within the storage subsystem of an OpenVMS Cluster system. Reference: See Volume Shadowing for OpenVMS for more information about volume shadowing. |
System disk redundancy | Place system files judiciously on disk drives with multiple access paths. OpenVMS Cluster availability increases when you form a shadow set that includes the system disk. You can also configure an OpenVMS Cluster system with multiple system disks. Reference: For more information, see Section 8.2. |
Database redundancy | Keep redundant copies of certain files or partitions of databases that are, for example, updated overnight by batch jobs. Rather than using shadow sets, which maintain a complete copy of the entire disk, it might be sufficient to maintain a backup copy on another disk or even on a standby tape of selected files or databases. |
DECevent | Enhance device reliability with appropriate software tools. Use device-failure prediction tools, such as DECevent, where high availability is needed. DECevent, in conjunction with volume shadowing, can detect most imminent device failures with sufficient lead time to move the data to a spare device. When a shadow set member has an increasing fault rate that might indicate potential failure, DECevent works with Volume Shadowing for OpenVMS to make a shadow set copy of the suspect device to a spare device. After the copy is made, the suspect device can be taken off the system for examination and repair without loss of data availability. |
Newer devices | Protect against failure by choosing newer devices. Typically, newer devices provide improved reliability and mean time between failures (MTBF). Newer controllers also improve reliability by employing updated chip technologies. |
Implement thorough backup strategies | Frequent and regular backups are the most effective way to ensure the availability of your data. |
The CI interconnect provides the highest OpenVMS Cluster availability with redundant, independent transmit-and-receive CI cable pairs. The CI offers multiple access paths to disks and tapes by means of dual-ported devices between HSC or HSJ controllers.
The following controllers and devices are supported by the CI interconnect:
DSSI-based configurations provide shared direct access to storage for systems with moderate storage capacity. The DSSI interconnect provides the lowest-cost shared access to storage in an OpenVMS Cluster.
The storage tables in this section may contain incomplete lists of products.
DSSI configurations support the following devices:
Reference: RZ, TZ, and EZ SCSI storage devices are described in Section 5.7.
The Small Computer Systems Interface (SCSI) bus is a storage interconnect based on an ANSI industry standard. You can connect up to 8 nodes (or 16 with wide SCSI) to a SCSI bus, of which up to 3 can be CPUs.
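The node-count limits can be expressed as a quick validation. The following Python sketch is a hypothetical helper (the function name and tuple layout are invented), encoding only the limits quoted above:

```python
# Illustrative check of the SCSI node-count rules quoted above:
# at most 8 IDs on a narrow bus, 16 on a wide bus, and no more than
# 3 of the attached nodes may be host CPUs.
def check_scsi_bus(devices, wide=False):
    """devices: list of (scsi_id, is_host) tuples."""
    max_ids = 16 if wide else 8
    problems = []
    ids = [i for i, _ in devices]
    if len(ids) != len(set(ids)):
        problems.append("duplicate SCSI IDs")
    if any(i < 0 or i >= max_ids for i in ids):
        problems.append(f"SCSI IDs must be in the range 0..{max_ids - 1}")
    if sum(1 for _, is_host in devices if is_host) > 3:
        problems.append("more than 3 host CPUs on the bus")
    return problems

bus = [(7, True), (6, True), (0, False), (1, False)]
print(check_scsi_bus(bus))  # [] -> configuration is within the limits
```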
The following devices can connect to a single host or multihost SCSI bus:
The following devices can connect only to a single host SCSI bus:
Host-based storage devices can be connected locally to OpenVMS Cluster member systems using local adapters. You can make this locally connected storage available to other OpenVMS Cluster members by configuring a node as an MSCP server.