In addition to these general rules, more detailed guidelines apply to different configurations. The rest of this manual discusses those guidelines in the context of specific configurations.
This chapter contains information about how to determine your OpenVMS Cluster business and application requirements.
The kinds of business requirements that you have affect the way that you configure your OpenVMS Cluster. Typical business requirements for an OpenVMS Cluster system include:

- Budget
- Availability
- Scalability and future growth
- Physical restrictions
- Security
Some of these requirements may conflict with each other, such as scalability and physical location. For example, you may want to grow your OpenVMS Cluster, but you are limited by physical space or by the location of your systems. In situations like this, determine what your primary requirements are and where you are willing to make tradeoffs.
As with most business decisions, many of your choices will be determined by cost. Prioritizing your requirements can help you apply your budget resources to areas with the greatest business needs.
When determining your budget, plan for the initial system cost as well as the cost of ownership, which includes:
Determine how available your computing system must be. Most organizations fall into one of the three broad (and sometimes overlapping) categories shown in Table 2-1.
Availability Requirements | Description |
---|---|
Conventional | For business functions that can wait with little or no effect while a system or application is unavailable. |
24 x 365 | For business functions that require uninterrupted computing services, either during essential time periods or during most hours of the day throughout the year. Minimal downtime is acceptable. |
Disaster tolerant | For business functions with extremely stringent availability requirements. These businesses need to be immune to disasters like earthquakes, floods, and power failures. |
Reference: For more information about availability, see Chapter 6 in this guide and Building Dependable Systems: The OpenVMS Approach.
Scalability is the ability to expand an OpenVMS Cluster in any system, storage, and interconnect dimension and at the same time fully use the initial configuration equipment. Scalability at the node level means being able to upgrade and add to your node's hardware and software. Scalability at the OpenVMS Cluster level means being able to increase the capacity of your entire OpenVMS Cluster system by adding processing power, interconnects, and storage across many nodes.
Digital offers low-end PCs and workstations, midrange departmental systems, and high-end data center systems; each level has different processing, storage, and interconnect characteristics. Investing at the appropriate level means choosing systems that meet your current business requirements with some capacity to spare. The extra capacity allows for future growth, because designing too close to your current needs can limit or reduce the scalability of your OpenVMS Cluster.
If you design with future growth in mind, you can make the most of your initial investment, reuse original equipment, and avoid unnecessary upgrades later.
Reference: See Chapter 7 for more help with analyzing your scalability requirements.
Physical restrictions can play a key role in how you configure your OpenVMS Cluster. Designing a cluster for a small computer room or office area is quite different from designing one that will be spread throughout a building or across several miles. Power and air-conditioning requirements can also affect configuration design.
You may want to allow room for physical growth and increased power and cooling requirements when designing your cluster.
Reference: See Section 6.6 and Section 7.7.7 for information about multiple and extended local area network (LAN) configurations.
A secure environment is one that limits physical and electronic access to systems by unauthorized users. Most businesses can achieve a secure environment with little or no performance overhead. However, if security is your highest priority, you may need to make tradeoffs in convenience, cost, and performance.
Reference: See the OpenVMS Guide to System Security for more information.
Applications require processing power, memory, storage, and I/O resources. Determining your application requirements allows you to design an OpenVMS Cluster system that will meet your application needs. To determine your application requirements, follow the steps described in Table 2-2.
Step | Description |
---|---|
1 | Make a list of the applications you currently run or expect to run. |
2 | For each application, write down your processor, memory, and I/O requirements (the application documentation provides this information). Processor power must be proportional to the number of calculations your applications perform, with enough additional processor power to oversee data transfer between nodes and between nodes and storage. Memory capacity must be sufficient for your applications and for additional OpenVMS Cluster functions. Extra memory frequently improves system performance, so an initial investment in extra memory is probably a good one. I/O performance requirements differ among applications. As you choose components such as nodes, interconnects, and adapters, monitor the inherent speed of each component so that you can choose faster components and eliminate potential bottlenecks. |
3 | Add up the CPU, memory, and I/O requirements for all of your applications. Add to this sum any special requirements, such as user requirements and peripheral devices. |
4 | When you have determined your total application requirements, be sure that your CPU, memory, and I/O resources exceed these requirements by 20%. |
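The arithmetic in steps 3 and 4 can be sketched as a small calculation. The applications and their figures below are hypothetical placeholders; real numbers come from your application documentation, as step 2 notes.

```python
# Hypothetical per-application requirements (step 2): processor demand,
# memory in MB, and I/O bandwidth in MB/s. These example figures are
# invented for illustration only.
apps = {
    "order_entry": {"cpu": 12.0, "mem_mb": 64,  "io_mb_s": 4.0},
    "reporting":   {"cpu": 8.0,  "mem_mb": 48,  "io_mb_s": 2.5},
    "development": {"cpu": 15.0, "mem_mb": 96,  "io_mb_s": 1.5},
}

# Step 3: add up the requirements across all applications.
totals = {k: sum(a[k] for a in apps.values())
          for k in ("cpu", "mem_mb", "io_mb_s")}

# Step 4: size the configuration so that resources exceed the totals by 20%.
HEADROOM = 1.20
required = {k: v * HEADROOM for k, v in totals.items()}

print(required)
```

Any special requirements, such as user requirements and peripheral devices (step 3), would be added to the totals before applying the 20% headroom.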
Systems require approximately 5% more memory to run in an OpenVMS Cluster than to run standalone. This additional memory is used to support the shared cluster resource base, which is larger than in a standalone configuration.
With added memory, a node in an OpenVMS Cluster generally can support the same number of users or applications that it supported as a standalone system. As a cluster configuration grows, the amount of memory used for system work by each node may increase. Because the per-node increase depends on both the level of data sharing in the cluster and the distribution of resource management, that increase does not follow fixed rules. If the node is a resource manager for a heavily used resource, additional memory may increase performance for cluster users of that resource.
Reference: For more information about using additional memory to improve performance, refer to the Guide to OpenVMS Performance Management.
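The 5% rule of thumb above can be folded into memory sizing for a node that moves from standalone operation into a cluster. This is a minimal sketch; the standalone figure passed in is hypothetical.

```python
# Rule of thumb from this chapter: a system needs approximately 5% more
# memory to run in an OpenVMS Cluster than to run standalone, to support
# the larger shared cluster resource base.
CLUSTER_OVERHEAD = 0.05

def cluster_memory_mb(standalone_mb: float) -> float:
    """Estimate the memory a formerly standalone node needs in a cluster."""
    return standalone_mb * (1 + CLUSTER_OVERHEAD)

# A node sized at 128 MB standalone needs roughly 134 MB in a cluster.
print(cluster_memory_mb(128))
```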
Application performance depends on adequate processor, memory, and I/O resources. Depending on your applications, one of these resources may be more important than the others. Consider your application requirements, and find a balance among these three resources that meets your requirements. Table 2-3 provides some guidelines on the resource requirements of different application types.
Application Type | Example | Requirement |
---|---|---|
General timesharing | Program development, document preparation, office automation | Processor and I/O intensive |
Searching and updating a database and displaying reports | Transaction processing, funds transfer, online order entry or reservation systems | I/O and memory intensive |
Simulation, modeling, or calculation | Computer-aided design and manufacturing, image processing, graphics applications | Processor and memory intensive |
The OpenVMS operating system supports a number of utilities and tools that help you determine your business and application requirements in OpenVMS Cluster configurations. Table 2-4 describes many of these products and indicates whether each is supplied with the OpenVMS operating system or is an optional product.
Tool | Supplied or Optional | Function |
---|---|---|
Accounting utility | Supplied | Tracks how resources are being used. |
AUTOGEN command procedure | Supplied | Optimizes system parameter settings based on usage. |
DECamds (Digital Availability Manager for Distributed Systems) | Supplied | Collects and analyzes data from multiple nodes simultaneously, directing all output to a centralized DECwindows display. The analysis detects resource availability problems and suggests corrective actions. |
BRS (Business Recovery Server) | Optional | Consolidates the system management of disaster-tolerant clusters that span different geographic sites. |
Monitor utility | Supplied | Provides basic performance data. |
PCM (POLYCENTER Console Manager) | Optional | Consolidates the console management of the OpenVMS Cluster system to a single console terminal. |
POLYCENTER Performance Solution Capacity Planner | Optional | Provides a capacity-planning function to help analyze how changes in the configuration affect user performance. |
POLYCENTER Performance Solution Data Collector | Optional | Provides performance-data collection, archiving, reporting, and file display for OpenVMS systems. |
POLYCENTER Performance Solution Performance Advisor | Optional | Assists in performance analysis and capacity planning of OpenVMS Cluster systems by identifying bottlenecks and recommending ways to fix problems. |
POLYCENTER Performance Solution Chargeback | Optional | Provides basic user accounting and chargeback reports. |
Show Cluster utility | Supplied | Monitors activity and performance in an OpenVMS Cluster configuration. |
DECevent | Supplied | Monitors system and device status and predicts failures. |
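As a rough illustration, the supplied utilities in Table 2-4 are invoked from DCL. The commands below show typical invocations; the qualifiers chosen here are examples, not the only options.

```
$ MONITOR SYSTEM                                   ! Monitor utility: basic performance data
$ SHOW CLUSTER/CONTINUOUS                          ! Show Cluster utility: live cluster activity
$ ACCOUNTING /SINCE=TODAY                          ! Accounting utility: resource-usage report
$ @SYS$UPDATE:AUTOGEN GETDATA TESTFILES FEEDBACK   ! AUTOGEN: recompute parameters from feedback
```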
This chapter provides information to help you select systems for your OpenVMS Cluster to satisfy your business and application requirements.
An OpenVMS Cluster can include systems running OpenVMS Alpha, OpenVMS VAX, or both. Digital provides a full range of systems for both the Alpha and VAX architectures.
Alpha and VAX systems span a range of computing environments, including:
Your choice of systems depends on your business, your application needs, and your budget. With a high-level understanding of Digital's systems and their characteristics, you can make better choices.
Table 3-1 is a comparison of recently shipped OpenVMS Cluster systems. While laptop and personal computers can be configured in an OpenVMS Cluster as client satellites, they are not discussed extensively in this manual. For more information about configuring PCs and laptops, see the PATHWORKS Version 6.0 for DOS and Windows: Installation and Configuration Guide.
System Type | Useful for | Examples |
---|---|---|
Workstations | Users who require their own systems with high processor performance. Examples include users running mechanical computer-aided design, scientific analysis, and data-reduction and display applications. | AlphaStation 200, AlphaStation 500, AlphaStation 600, VAXstation 4000, MicroVAX 3100 |
Departmental systems | Midrange office computing. | AlphaServer 400, AlphaServer 1000, AlphaServer 2000, AlphaServer 2100, AlphaServer 4100, VAX 4000 |
Data center systems | Large-capacity configurations and highly available technical and commercial applications. Data center systems have a high degree of expandability and flexibility. | AlphaServer 8400, AlphaServer 8200, VAX 7800 |
When you choose a system based on scalability, consider the following:
The OpenVMS environment offers a wide range of alternative ways for growing and expanding processing capabilities of a data center, including the following:
Reference: For more information about scalability, see Chapter 7.
An OpenVMS Cluster system is a highly integrated environment in which multiple systems share access to resources. This resource sharing increases the availability of services and data. OpenVMS Cluster systems also offer failover mechanisms that are transparent and automatic, and require little intervention by the system manager or the user.
Reference: See Chapter 6 for more information about these failover mechanisms and about availability.
The following factors affect the performance of systems:
With these requirements in mind, compare processor performance, I/O throughput, memory capacity, and disk capacity in the Alpha and VAX specifications that follow.
This section provides comparison tables of Alpha and VAX systems that are currently available.
Reference: See the Digital Systems and Options Catalog, order number EC-I6601-10, for systems that Digital may have shipped since this manual was revised. The Digital Systems and Options Catalog is released quarterly and also contains detailed information about Digital's storage devices, printers, and network application support.
To access the most recent Digital Systems and Options Catalog on the World Wide Web, use the following URL:
http://www.digital.com/info/soc
Figures 3-1 through 3-6 contain detailed comparisons of Digital's Alpha systems.
Based on some of the concepts introduced in this manual, you can use these charts as a reference to determine which systems best fit your needs. These charts also show the full spectrum and variety of Digital's Alpha family systems.
Figure 3-1 AlphaStation 200 Series Desktop Workstations
Figure 3-2 AlphaStation 500 and 600 Desktop/Deskside Workstations
Figure 3-3 AlphaServer 400 and 1000 Series Workgroup Servers
Figure 3-4 AlphaServer 2000 and 2100A Series Departmental Servers
Figure 3-5 AlphaServer 4100 Series Departmental Servers
Figure 3-6 AlphaServer 8200 and 8400 Series Enterprise Servers
Table 3-2 shows the performance and capacity specifications for VAX workstations, departmental systems, and data center systems. You can use these metrics to determine which VAX system best fits your needs. Since OpenVMS Cluster software supports mixed-architecture clusters, you can integrate Alpha systems into your VAXcluster if and when it suits your business requirements.
Reference: See Figure 8-5 for complete information about mixed-version and mixed-architecture support for OpenVMS Clusters.
System | Performance | I/O Throughput | Disk Capacity | Memory Capacity |
---|---|---|---|---|
Workstations and MicroVAXes | ||||
VAXstation 4000 Model 90 | 32.8 SPECmark | 10 MB/s | 9.3 GB | 128 MB |
VAXstation 4000 Model 60 | 12.0 SPECmark | 10 MB/s | 9.3 GB | 104 MB |
VAXstation 4000 VLC | 6.2 SPECmark | 5 MB/s | 8.5 GB | 24 MB |
MicroVAX 3100 Model 98 | 10.5 SPECmark | 8 MB/s | 8.7 GB | 256 MB |
MicroVAX 3100 Model 90 | 31.2 SPECmark | 8 MB/s | 8.7 GB | 128 MB |
MicroVAX 3100 Model 88 | 10.5 SPECmark | 8 MB/s | 8.7 GB | 128 MB |
MicroVAX 3100 Model 85 | 10.5 SPECmark | 8 MB/s | 8.7 GB | 128 MB |
MicroVAX 3100 Model 80 | 10.5 SPECmark | 8 MB/s | 8.7 GB | 72 MB |
MicroVAX 3100 Model 30/40 | 5.0 SPECmark | 8 MB/s | 8.7 GB | 32 MB |
Departmental Systems | ||||
VAX 4000 Model 108 | 31.1 SPECmark | 8 MB/s | 335.4 GB | 256 MB |
VAX 4000 Model 108a | 31.1 SPECmark | 8 MB/s | 75 GB | 128 MB |
VAX 6000 Model 610 | 42.1 SPECmark | 80 MB/s | 8000 GB | 1024 MB |
VAX 4000 Model 700a | 51.6 SPECmark | 20 MB/s | 151 GB | 512 MB |
VAX 4000 Model 600a | 41.1 SPECmark | 20 MB/s | 151 GB | 512 MB |
VAX 4000 Model 600 | 41.1 SPECmark | 12 MB/s | 100 GB | 512 MB |
VAX 4000 Model 500a | 30.7 SPECmark | 20 MB/s | 151 GB | 512 MB |
VAX 4000 Model 500 | 30.7 SPECmark | 12 MB/s | 100 GB | 512 MB |
VAX 4000 Model 400 | 22.3 SPECmark | 12 MB/s | 100 GB | 512 MB |
VAX 4000 Model 100a | 31.1 SPECmark | 8 MB/s | 75 GB | 128 MB |
VAX 4000 Model 100 | 31.1 SPECmark | 8 MB/s | 75 GB | 128 MB |
Data Center Systems | ||||
VAX 10000 Model 610 | 51.0 SPECmark | 400 MB/s | 10 TB | 3.5 GB |
VAX 7000 Model 610 | 46.6 SPECmark | 400 MB/s | 10 TB | 3.5 GB |
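Using the specifications in Table 3-2 to shortlist candidate systems is a mechanical filter over the application requirements determined in Chapter 2. The sketch below uses a few rows from the table; the minimum requirements passed in are hypothetical.

```python
# A few rows from Table 3-2: (name, performance in SPECmarks,
# I/O throughput in MB/s, memory capacity in MB).
systems = [
    ("VAXstation 4000 Model 90", 32.8,  10,  128),
    ("MicroVAX 3100 Model 98",   10.5,   8,  256),
    ("VAX 4000 Model 700a",      51.6,  20,  512),
    ("VAX 7000 Model 610",       46.6, 400, 3584),
]

def shortlist(min_perf, min_io, min_mem_mb):
    """Return the systems whose specifications meet all three minimums."""
    return [name for name, perf, io, mem in systems
            if perf >= min_perf and io >= min_io and mem >= min_mem_mb]

# Hypothetical requirements: at least 40 SPECmarks, 15 MB/s I/O, 512 MB memory.
print(shortlist(min_perf=40.0, min_io=15, min_mem_mb=512))
```

The same comparison applies when weighing a VAX system against an Alpha system in a mixed-architecture cluster; only the rows in the table change.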
An interconnect is a hardware connection between OpenVMS Cluster nodes over which the nodes can communicate. This chapter contains information about the following interconnects and how they are used in OpenVMS Clusters:
The software that enables OpenVMS Cluster systems to communicate over an interconnect is the System Communications Services (SCS).
6318P001.HTM OSSG Documentation 26-NOV-1996 11:20:09.37
Copyright © Digital Equipment Corporation 1996. All Rights Reserved.