You should also be familiar with the general concepts of the Maintenance Operations Protocol (MOP), and, in particular, the MOP client database, as described in Chapter 10.
If you are adding a new OpenVMS Cluster satellite node to your OpenVMS Cluster, see Section 9.1.1.1.
If you are making the transition from an existing Phase IV DECnet cluster satellite node to DECnet-Plus, see Section 9.1.2. After you have made the transition, you can invoke CLUSTER_CONFIG.COM to modify its characteristics.
You can delete a satellite node from the OpenVMS Cluster system with CLUSTER_CONFIG.COM whether or not the satellite node has made the transition to DECnet-Plus. If the satellite has not made the transition, a message appears stating that the client could not be removed from the client database. This will not cause a problem, and all root information will be deleted correctly.
By default, the satellite node information created by CLUSTER_CONFIG.COM or NET$CONFIGURE.COM is placed in the SYS$MANAGER root directory for the boot node. See Section 9.1.4 for a discussion about making this information available to other boot nodes in your OpenVMS Cluster system.
To add a new OpenVMS Cluster satellite node to an OpenVMS Cluster environment, invoke the SYS$MANAGER:CLUSTER_CONFIG.COM procedure.
Refer to the OpenVMS Cluster Systems for OpenVMS guide for general information about CLUSTER_CONFIG.COM. You can enter a "?" at any prompt to display help text that explains what information is required at the prompt.
Table 9-1 explains the information specific to DECnet-Plus that CLUSTER_CONFIG.COM requests.
| Item | Response |
|---|---|
| What is the node's DECnet full name? | Determine a full name with the help of your network manager, and enter the full-name string. For example: .world.networks.mynode, mega:.indiana.jones, or columbus:.flatworld |
| What is the DECnet Phase IV compatible synonym name for this node? | A node synonym is a short name for the node's full name. In an OpenVMS Cluster environment, this name is used as the value of the SYSGEN parameter SCSNODE. It must also be defined in the namespace as the synonym name for that node. Therefore, it must be a string of six or fewer alphanumeric characters. By default, it is the first six characters of the last simple name in the full name. For example, the full name bigbang:.galaxy.nova.blackhole yields the synonym blackh. Node synonyms longer than six characters are not supported if the node is an OpenVMS Cluster member. |
| What is the node's DECnet node address? | Enter the node's DECnet node address, in the form area.node. Ask your network manager to help you determine this address. |
| Does synonym name need to be registered in the namespace [N]? | Answer YES if the name of the node you are adding has not been registered with the namespace. Registration makes your node "known" to the namespace; you need to do this only once. The registration might fail if your namespace has access control list (ACL) protection. If that occurs, your network manager must register the node for you. |
| What is the cluster alias full name? | The alias name is the node name of the alias: the DECdns full name of the object that stores the address towers for the alias. Do not enter a node synonym. If this node will not participate in an OpenVMS Cluster alias, press Return. Otherwise, determine the OpenVMS Cluster alias name with the help of your network manager and enter the full-name string. For example: .networks.farout or mega:.proto.quikk |
| What is the Phase IV address of the cluster alias? | The node ID of the alias could not be retrieved from the namespace, so it must be calculated from the alias's Phase IV address. Enter the Phase IV address of the alias in area.node format (for example, 63.137), or enter a 6-byte ID in the format AA-00-04-00-xx-xx, where xx-xx is calculated from the Phase IV node address: multiply the area number by 1024, add the node number, express the result as four hexadecimal digits, and reverse the two bytes (low-order byte first). |
| What selection weight do you choose for this node? [0 for satellites] | The selection weight determines the percentage of incoming connections addressed to the alias that this node will handle. If the node is a satellite, take the default value of 0. For larger nodes, select a value between 1 and 10 (or larger if you want) according to the size of the node. |
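The xx-xx calculation from the table can be scripted. The following is a minimal DCL sketch, not part of any DIGITAL-supplied procedure, that converts the sample Phase IV address 63.137 into its Ethernet physical address:

$ area = 63                                    ! Phase IV area number (1-63)
$ node = 137                                   ! Phase IV node number (1-1023)
$ hex = f$fao("!XW", area*1024 + node)         ! 16-bit value as 4 hex digits: FC89
$ phys = "AA-00-04-00-" + f$extract(2, 2, hex) + "-" + f$extract(0, 2, hex)
$ write sys$output phys                        ! displays AA-00-04-00-89-FC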
The information you enter through CLUSTER_CONFIG.COM is automatically entered in the boot node's MOP client database and put into effect. The procedure prompts you for other information and then tells you when to boot your satellite node. The satellite node runs an AUTOGEN procedure shortly after booting.
After the satellite reboots, the NET$CONFIGURE procedure executes automatically. When it completes, the network starts, and the OpenVMS startup procedure continues until completion.
The following is a sample of the information requested when you choose Option 8 of NET$CONFIGURE.COM, "Configure MOP Client Database":
* Which configuration option to perform? [1] : 8
* Do you wish to ADD or DELETE a MOP Client? [ADD] :
* Name of the MOP Client? : tahini
* Circuit for 'TAHINI'? :
* Physical addresses for 'TAHINI'? : 08-00-2B-07-36-B6
* Secondary Loader for 'TAHINI'? :
* Tertiary Loader for 'TAHINI'? :
* System Image for 'TAHINI'? : "@net$niscs_laa(disk$v55:<sys10.>)"
* Diagnostic Image for 'TAHINI'? :
* Management Image for 'TAHINI'? :
* Script File for 'TAHINI'? :
* Dump File for 'TAHINI'? :
* Verification for 'TAHINI'? [%X0000000000000000] :
* Phase IV Client Address (aa.nnnn) for 'TAHINI'? [none] : 63.10
* Phase IV Client Name for 'TAHINI'? [TAHINI] :
* Phase IV Host Address for 'TAHINI'? [63.61] :
* Phase IV Host Name for 'TAHINI'? [CASIDY] : hummus
* Do you wish to generate NCL configuration scripts? [YES] :
%NET$CONFIGURE-I-CHECKSUM, checksumming NCL management scripts
%NET$CONFIGURE-I-CONFIGCOMPLETED, DECnet-Plus for VMS configuration completed
$ run sys$system:ncl
ncl> @sys$manager:net$mop_client_startup.ncl
Note
One line in the file, create mop, generates an error message because the mop entity has already been created. You can ignore this message.
The following example shows the information that network management
knows about the client configured in the previous step:
ncl> show mop client tahini all
Node 0 MOP Client TAHINI
at 1995-04-21-18:32:38.205-04:00I0.448

Identifiers

    Name                    = TAHINI

Characteristics

    Circuit                 =
    Addresses               = {08-00-2B-07-36-B6, AA-00-04-00-0A-FC}
    Secondary Loader        = {}
    Tertiary Loader         = {sys$system:tertiary_vmb.exe}
    System Image            = {"@net$niscs_laa(DISK$V55:<SYS10.>)"}
    Diagnostic Image        = {}
    Management Image        = {}
    Script File             = {}
    Phase IV Host Name      = HUMMUS
    Phase IV Host Address   = 63.61
    Phase IV Client Name    = TAHINI
    Phase IV Client Address = 63.10
    Dump File               = {}
    Dump Address            = 0
    Verification            = %X0000000000000000
    Device Types            = {}
The AUTOGEN procedure is invoked with a command similar to the following:

$ @sys$update:autogen getdata reboot nofeedback
By default, a cluster satellite configures its Phase IV Prefix as 49:: and its node synonym directory as .DNA_Nodesynonym. Some clusters may want to have different values for one or both of these attributes. To change these defaults for satellites added to the cluster, define the following logicals in SYS$COMMON:[SYSMGR]NET$LOGICALS.COM before running CLUSTER_CONFIG.
$ define/system/nolog net$phaseiv_prefix "<prefix value>"
$ define/system/nolog decnet_migrate_dir_synonym "<synonym dir>"
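For example, to use a different prefix and synonym directory (both values below are placeholders for illustration, not recommendations):

$ define/system/nolog net$phaseiv_prefix "41::"
$ define/system/nolog decnet_migrate_dir_synonym ".DNA_NodeSynonym_Local"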
To change these values for a satellite that has already been configured, run NET$CONFIGURE from that satellite.
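For example, on the satellite (NET$CONFIGURE.COM resides in SYS$MANAGER:):

$ @sys$manager:net$configure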
By default, the file NET$MOP_CLIENT_STARTUP.NCL resides in SYS$SYSROOT:[SYSMGR]. In this location, however, the MOP client information is only available to the node on which the file resides. It is up to the system manager to make that information available to more boot nodes, if desired.
Both CLUSTER_CONFIG.COM and NET$CONFIGURE.COM modify the file SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL for the node on which the procedure is invoked. If the file is found in SYS$SYSROOT:[SYSMGR], it is modified and left in that location. Similarly, if the file is found in SYS$COMMON:[SYSMGR], it is modified and left in that location.
One way of allowing more boot nodes to access NET$MOP_CLIENT_STARTUP.NCL is to move it to SYS$COMMON:[SYSMGR]NET$MOP_CLIENT_STARTUP.NCL. All nodes in the OpenVMS Cluster then have access to it.
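A minimal sketch of the move, assuming the file currently resides in the node-specific root:

$ rename sys$specific:[sysmgr]net$mop_client_startup.ncl -
_$ sys$common:[sysmgr]net$mop_client_startup.ncl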
Alternatively, you can create one file for common MOP client information. Designated boot nodes can execute this file by placing @ncl_script_name in their own SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL file. This method requires more work by the system manager, however, because the configuration procedures do not modify the common file directly.
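For example, each designated boot node's node-specific file might contain only an indirection line; the common file name mop_clients_common.ncl is hypothetical:

! SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL on each designated boot node
@sys$common:[sysmgr]mop_clients_common.ncl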
All or some nodes in an OpenVMS Cluster environment can be represented in the network as a single node by establishing an alias for the OpenVMS Cluster. To the rest of the network, an alias node looks like a normal node. It has a normal node object entry in the namespace, which provides a standard address tower. The alias has a single DECnet address that represents the OpenVMS Cluster environment as a whole. The alias allows access to common resources on the OpenVMS Cluster environment without knowing which nodes comprise the OpenVMS Cluster.
Using an alias never precludes using an individual node name and address. Thus, a remote node can address the OpenVMS Cluster as a single node, as well as address any OpenVMS Cluster member individually.
You decide which nodes participate in an alias. It is not necessary for every member of an OpenVMS Cluster environment to be part of the alias. Those nodes in the OpenVMS Cluster environment that have specifically joined the alias comprise the alias members, and connections addressed to the alias are distributed among these members. You can also have multiple aliases. Multiple aliases allow end nodes to be members of more than one alias, and they also accommodate a mixed-architecture cluster: you can have one alias for all the nodes, one for the Alpha systems, and another for the VAX systems.
You can have a maximum of three aliases. Members of the same alias must be members of the same OpenVMS Cluster environment. Nodes joining the same alias must be in the same DECnet area.
When you create multiple aliases, the first alias created is used for outgoing connections by any application whose outgoing alias attribute is set to TRUE. If this alias is not enabled, the local node name is used for the outgoing connection.
Finally, nodes that assume the alias should have a common authorization file.
Note
There must be at least one adjacent DECnet Phase V router on a LAN to support an OpenVMS Cluster alias. A single router can support multiple OpenVMS Cluster environments on a LAN. Providing alias support does not prevent a router from providing normal routing support.

OpenVMS Cluster environments do not have routers. If all nodes on a LAN that form a complete network are DECnet Phase V end nodes, no router is required. Any member of the OpenVMS Cluster can communicate with any system on the LAN. If, however, the LAN is part of a larger network or there are Phase IV nodes on the LAN, there must be at least one adjacent DECnet Phase V router on the LAN. The adjacent DECnet Phase V router allows members of the cluster to communicate with Phase IV nodes or with systems in the larger network beyond the LAN.
To add a node in an OpenVMS Cluster environment to the alias, use the NET$CONFIGURE.COM procedure. For information about NET$CONFIGURE.COM, refer to the DECnet-Plus for OpenVMS Applications Installation and Advanced Configuration guide.
Note
You must run NET$CONFIGURE.COM on each node in the OpenVMS Cluster environment that you want to become a member of the alias.
Before an alias can be identified by name, you must create a node object entry for it in the namespace. Do this only once for each OpenVMS Cluster.
To add an object entry for an OpenVMS Cluster alias in a DECnet Phase V area, you need the alias's DECdns full name and a node ID (address) for the alias.
The decnet_register tool converts a Phase IV-style address of the form area.node into a 6-byte address when registering a Phase IV node (see Section 5.3.4 and Chapter 5 for decnet_register). (In Phase IV, an area has a value in the range of 1--63, and a node has a value in the range of 1--1023. For example, 63.135.) The converted 6-byte address has the form AA-00-04-00-87-FC.
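For instance, the sample address 63.135 converts as follows: 63 * 1024 + 135 = 64647, which is FC87 hexadecimal; reversing the two bytes (low-order byte first) gives 87-FC, and hence AA-00-04-00-87-FC.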
If you are converting an existing Phase IV OpenVMS Cluster to DECnet Phase V, use the existing Phase IV alias address for the Node ID when configuring and registering the alias. If you are installing a new OpenVMS Cluster in a DECnet Phase V network, use any Phase IV-style address that is unique to your network for the node ID when configuring and registering the alias.
Note
The node ID you use when registering your alias in the namespace must be the same Node ID you use when configuring the alias module using NET$CONFIGURE.
If you want to set an outgoing alias for particular nodes in an OpenVMS Cluster, use the following command:
ncl> set alias port port-name outgoing default true
If you want to set an outgoing alias for an application, use the following command:
ncl> set session control application application-name -
_ncl> outgoing alias name alias-name
If you do not set application outgoing alias name and the application has the outgoing alias set to true, the alias name for which you set alias port outgoing default true is used.
If you define application outgoing alias name, this supersedes the setting of alias port outgoing default. If the application outgoing alias name is not enabled, the local node name is used.
If neither alias port outgoing default nor application outgoing alias name is set, the first alias created is used as the default for the system. If this alias is not enabled, the local node name is used.
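For example, to direct an application's outgoing connections to a particular alias, you might enter the following; MAIL and the sample alias name from Table 9-1 are used here only for illustration:

ncl> set session control application mail outgoing alias name .networks.farout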
When a node tries to connect to an alias node, it does not know that its destination is an alias. It consults the namespace to translate the alias node name into an address, and uses the address to send data packets to the alias. Data packets can arrive at any node that is an alias member. When a node in the alias receives a request for a connection to the alias, that node selects a member node (possibly itself) to own the connection.
The node makes its selection based on criteria that include each alias member's selection weight (see Table 9-1).
Once an eligible node is selected, the incoming connect request is forwarded to that node, and the connection is established.
Note
Each connection to the alias is associated with one node, which is a member of the alias. If there is a problem with that node, the connection is lost. It is not transferred to another node in the alias.
If your node is in an OpenVMS Cluster environment using an alias, you can specify in the application database which network applications use incoming and outgoing connections through the alias. If you use the defaults that Digital supplies for the applications included with DECnet-Plus, only the MAIL application is associated with the alias (for outgoing connections). If other applications have been added to the database (such as Rdb, DQS, or an application you supply), you can enable the outgoing alias for the objects associated with those applications.
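For example, to enable the outgoing alias for an application already in the database (DQS is used here only as an illustration):

ncl> set session control application dqs outgoing alias true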
If you converted from Phase IV to Phase V (or added or changed objects prior to installing DECnet-Plus), the objects will not change back to the defaults.
When MAIL is associated with the alias, MAIL effectively treats the OpenVMS Cluster as a single node. Ordinarily, replies to mail messages are directed to the node that originated the message; the reply is not delivered if that node is not available. If the node is in an OpenVMS Cluster and uses the OpenVMS Cluster alias, an outgoing mail message is identified by the alias node address rather than the individual address of the originating node. An incoming reply directed to the alias address is given to any active node in the OpenVMS Cluster and is delivered to the originator's mail file.
The alias permits you to set a proxy to a remote node for the whole OpenVMS Cluster rather than for each node in the OpenVMS Cluster. A clusterwide proxy can be useful when the alias node address is used for outgoing connections originated by the file access listener (FAL), the application that accesses the file system.
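For example, a single proxy entry on the remote node can then serve the whole OpenVMS Cluster. This sketch uses the AUTHORIZE utility; FAROUT (the alias synonym) and SMITH are placeholder names:

$ run sys$system:authorize
UAF> add/proxy farout::smith smith/default
UAF> exit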
Also, do not allow applications whose resources are not accessible clusterwide to receive incoming connect requests directed to the alias node address. All processors in the OpenVMS Cluster must be able to access and share all resources (such as files and devices). For more information about sharing files in an OpenVMS Cluster environment, see Section 9.3.