
DECnet-Plus for OpenVMS
Network Management



You should also be familiar with the general concepts of the Maintenance Operations Protocol (MOP), and, in particular, the MOP client database, as described in Chapter 10.

If you are adding a new OpenVMS Cluster satellite node to your OpenVMS Cluster, see Section 9.1.1.1.

If you are making the transition from an existing Phase IV DECnet cluster satellite node to DECnet-Plus, see Section 9.1.2. After you have made the transition, you can invoke CLUSTER_CONFIG.COM to modify its characteristics.

You can delete a satellite node from the OpenVMS Cluster system with CLUSTER_CONFIG.COM whether or not the satellite node has made the transition to DECnet-Plus. If the satellite has not made the transition, a message appears stating that the client could not be removed from the client database. This message is harmless; all root information is still deleted correctly.

By default, the satellite node information created by CLUSTER_CONFIG.COM or NET$CONFIGURE.COM is placed in the SYS$MANAGER root directory for the boot node. See Section 9.1.4 for a discussion about making this information available to other boot nodes in your OpenVMS Cluster system.

9.1.1.1 Adding a New Satellite Node to an OpenVMS Cluster Environment

To add a new OpenVMS Cluster satellite node to an OpenVMS Cluster environment, invoke the SYS$MANAGER:CLUSTER_CONFIG.COM procedure. The procedure prompts you for the information it needs to configure the satellite on the boot node.

Refer to the OpenVMS Cluster Systems for OpenVMS guide for general information about CLUSTER_CONFIG.COM. You can enter a "?" at any prompt to display help text that explains what information is required at the prompt.

Table 9-1 explains the information specific to DECnet-Plus that CLUSTER_CONFIG.COM requests.

Table 9-1 Information Requested for New OpenVMS Cluster Satellites
Item Response
What is the node's DECnet full name? Determine a full name with the help of your network manager. Enter a string that includes:
  • Nickname (optional), ending with a colon (:)
  • Root directory, designated by a period (.)
  • Zero or more hierarchical directories, designated by a character string followed by a period
  • Simple name, a character string that, combined with the directory names, uniquely identifies the node

For example:

.world.networks.mynode

mega:.indiana.jones

columbus:.flatworld

What is the DECnet Phase IV compatible synonym name for this node? A node synonym is a short name for the node's full name. In an OpenVMS Cluster environment, this name is used as the value of the SYSGEN parameter SCSNODE. It must also be defined in the namespace as the synonym name for that node. Therefore, it must be a string of six or fewer alphanumeric characters. By default, it is the first six characters of the last simple name in the full name. For example:

Full name --- bigbang:.galaxy.nova.blackhole

Synonym --- blackh

Node synonyms greater than six characters in length are not supported if the node is an OpenVMS Cluster member.

What is the node's DECnet node address? Enter the node's DECnet node address. This address is in the form area.node. Ask your network manager to help you determine this address.
Does synonym name need to be registered in the namespace [N]? Answer YES to this question if the name of the node you are adding has not been registered with the namespace. Registration makes your node "known" to the namespace. You only need to do this once.

The registration might fail if your namespace has access control list (ACL) protection. If that occurs, your network manager must register the node for you.

What is the cluster alias full name? The alias name is the node name of the alias: the DECdns full name of the object that stores the address towers for the alias. Do not enter a node synonym.

If this node will not be participating in an OpenVMS Cluster alias, press carriage return.

Determine the OpenVMS Cluster alias name with the help of your network manager. Enter a string that includes:

  • Nickname (optional), ending with a colon (:)
  • Root directory, designated by a period (.)
  • Zero or more hierarchical directories, designated by a character string followed by a period
  • Simple name, a character string that, combined with the directory names, uniquely identifies the node

For example:

.networks.farout

mega:.proto.quikk

What is the Phase IV address of the cluster alias? The node ID of the alias could not be retrieved from the namespace, so it must be calculated from the alias's Phase IV address. Enter the Phase IV address of the alias in the area.node format (for example, 63.137), or enter a 6-byte ID in the format AA-00-04-00-xx-xx, where xx-xx is calculated from the Phase IV node address. To determine the Ethernet physical address, proceed as follows (a scripted version of this conversion appears after this table):
  1. Convert the Phase IV node address to its decimal equivalent as follows:
     (area-number * 1024) + node-number = decimal equivalent
          

    For example:

     (12 * 1024) + 139 = 12427 decimal
          
  2. Convert the decimal node address to its hexadecimal equivalent and reverse the order of the bytes to form the hexadecimal node address. For example:
     (12427 decimal = 308B hex, reversed = 8B30 = hexnodeaddress)
          
  3. Incorporate the hexadecimal node address in the following format:
     AA-00-04-00-hexnodeaddress
          

    For example:

     AA-00-04-00-8B-30
          
What selection weight do you choose for this node? [0 for satellites] The selection weight determines the percentage of incoming connections addressed to the alias that this node handles. If the node is a satellite, take the default value of 0. For larger nodes, select a value according to the size of the node, typically between 1 and 10 (larger values are permitted).
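
The Phase IV address conversion described above can also be scripted. The following DCL sketch (the symbol names are illustrative and not part of any supplied procedure) computes the Ethernet physical address for the example Phase IV address 12.139:

$! Convert a Phase IV address (area.node) into the Ethernet
$! physical address form AA-00-04-00-xx-xx.
$ area = 12
$ node = 139
$ decval = area * 1024 + node        ! (12 * 1024) + 139 = 12427
$ hexval = f$fao("!XW", decval)      ! 12427 decimal = "308B"
$ physaddr = "AA-00-04-00-" + f$extract(2, 2, hexval) + -
             "-" + f$extract(0, 2, hexval)   ! reverse the byte order
$ write sys$output physaddr          ! displays AA-00-04-00-8B-30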

The information you enter by means of CLUSTER_CONFIG.COM is automatically entered in the boot node's MOP client database and executed. CLUSTER_CONFIG.COM also prompts you for other information and then tells you when to boot your satellite node. The satellite node runs an AUTOGEN procedure shortly after booting.

After the satellite reboots, the NET$CONFIGURE procedure executes automatically. When it completes, the network starts, and the OpenVMS startup procedure continues until completion.

9.1.2 Making the Transition from an Existing DECnet Phase IV OpenVMS Cluster Satellite Node

Existing OpenVMS Cluster satellite nodes already have a root directory on the system disk. Because these satellites are already in the OpenVMS Cluster, you cannot use CLUSTER_CONFIG.COM to migrate them to DECnet-Plus. However, you can take the following actions to migrate the satellites:
  1. Create a MOP client entry for the satellite node in the MOP client database on the boot node.
  2. Execute the MOP client script on a boot node.
  3. Edit SYS$SYSTEM:MODPARAMS.DAT in the satellite's root directory.
  4. If the OpenVMS Cluster satellite node is not already running, boot it. A boot node will downline load it.
  5. On the OpenVMS Cluster satellite node, invoke the AUTOGEN procedure to provide the satellite node with a set of SYSGEN parameters sufficient to run the DECnet-Plus software.
  6. After the satellite node has rebooted, invoke the SYS$MANAGER:NET$CONFIGURE procedure from the satellite and select the menu option to perform a full configuration. The network will start automatically when NET$CONFIGURE completes.

Each of these steps is fully explained in the following list:

  1. Create a MOP client entry for the satellite node.
    Your Phase IV DECnet object database on your boot node might already have been converted to DECnet-Plus. When you invoke the NET$CONFIGURE.COM procedure to perform a full configuration, you can request that it convert the Phase IV object database. Refer to the DECnet-Plus for OpenVMS Applications Installation and Advanced Configuration guide for more information about converting your Phase IV object database.
    To determine if your Phase IV DECnet object database has been converted to DECnet-Plus, look at the contents of the NCL script file, SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL. This is an ASCII file that you can display on your terminal. If the object database has been converted, you will find information about each OpenVMS Cluster satellite node that existed in the Phase IV DECnet object database. If some of the information needs to be modified for DECnet-Plus, you can edit the file.
    If the Phase IV DECnet object database was not converted, you can add information for each OpenVMS Cluster satellite node with SYS$MANAGER:NET$CONFIGURE.COM. Refer to Section 10.2 for more information about the parameters needed to configure an OpenVMS Cluster satellite.


    The following is a sample of the information requested when you choose Option 8 of NET$CONFIGURE.COM, "Configure MOP Client Database:"

      
    * Which configuration option to perform?                 [1] : 8  
    * Do you wish to ADD or DELETE a MOP Client?           [ADD] :  
    * Name of the MOP Client?                                    : tahini  
    * Circuit for 'TAHINI'?                                      :  
    * Physical addresses for 'TAHINI'?                           : 08-00-2B-07-36-B6  
    * Secondary Loader for 'TAHINI'?                             :  
    * Tertiary Loader for 'TAHINI'?                              :  
    * System Image for 'TAHINI'?          : "@net$niscs_laa(disk$v55:<sys10.>)"  
    * Diagnostic Image for 'TAHINI'?                             :  
    * Management Image for 'TAHINI'?                             :  
    * Script File for 'TAHINI'?                                  :  
    * Dump File for 'TAHINI'?                                    :  
    * Verification for 'TAHINI'?            [%X0000000000000000] :  
    * Phase IV Client Address (aa.nnnn) for 'TAHINI'?     [none] : 63.10  
    * Phase IV Client Name for 'TAHINI'?                [TAHINI] :  
    * Phase IV Host Address for 'TAHINI'?                [63.61] :  
    * Phase IV Host Name for 'TAHINI'?                  [CASIDY] : hummus  
    * Do you wish to generate NCL configuration scripts?   [YES] :  
    %NET$CONFIGURE-I-CHECKSUM, checksumming NCL management scripts  
    %NET$CONFIGURE-I-CONFIGCOMPLETED, DECnet-Plus for VMS configuration  
    completed  
      
    

    The information does not take effect until you execute the NCL script SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL. If MOP has not yet been started, starting MOP executes the script. If MOP is already running, you can either stop and restart it to execute the script, or invoke the script directly at the NCL prompt, as shown in the next step.
  2. Execute the MOP client script on a boot node.
    $ run sys$system:ncl  
    ncl> @sys$manager:net$mop_client_startup.ncl  
    

    Note

    One line in the file, "create mop", generates an error message because the mop entity has already been created. You can ignore this message.


    After the script has been executed, you can downline load the OpenVMS Cluster satellite.


    The following example shows the information that network management knows about the client configured in the previous step:

    ncl> show mop client tahini all  
    
    Node 0 MOP Client TAHINI  
     at 1995-04-21-18:32:38.205-04:00I0.448  
      
    Identifiers  
      Name                              = TAHINI  
      
    Characteristics  
      Circuit                           =  
      Addresses                         = {08-00-2B-07-36-B6, AA-00-04-00-0A-FC}  
      Secondary Loader                  = {}  
      Tertiary Loader                   = {sys$system:tertiary_vmb.exe}  
      System Image                      = {"@net$niscs_laa(DISK$V55:<SYS10.>)"}  
      Diagnostic Image                  = {}  
      Management Image                  = {}  
      Script File                       = {}  
      Phase IV Host Name                = HUMMUS  
      Phase IV Host Address             = 63.61  
      Phase IV Client Name              = TAHINI  
      Phase IV Client Address           = 63.10  
      Dump File                         = {}  
      Dump Address                      = 0  
      Verification                      = %X0000000000000000  
      Device Types                      = {}  
    
  3. Edit SYS$SYSTEM:MODPARAMS.DAT in the satellite's root directory. For a list of the minimum SYSGEN parameter values recommended for an OpenVMS Cluster satellite, see the DECnet-Plus for OpenVMS Installation and Basic Configuration guide.
    When installing DECnet-Plus for OpenVMS on an OpenVMS Cluster, make sure that all cluster members have the suggested SYSGEN parameters set correctly. If a node in the cluster does not have the required minimum parameters, startup of the network fails and the logical name NET$STARTUP_STATUS is set to OFF-AUTOGENREQ (a quick way to check this logical name appears after this list). Set the parameters to the recommended values before attempting to run NET$CONFIGURE.
    Check the release notes for any updates to the SYSGEN parameter values. Update these values by editing the SYS$SYSTEM:MODPARAMS.DAT file.
  4. If the OpenVMS Cluster satellite node is not already running, boot it.
    If the MOP client database was successfully configured and the script executed, the boot node will downline load the satellite.
  5. On the OpenVMS Cluster satellite node, invoke the AUTOGEN procedure to provide the satellite node with a set of SYSGEN parameters sufficient to run the DECnet-Plus software.
    From the OpenVMS Cluster satellite node, invoke the AUTOGEN procedure as follows:
    $ @sys$update:autogen getdata reboot nofeedback  
    

    This regenerates the satellite node's SYSGEN parameters and takes into account the new minimum values.
  6. After the satellite node has rebooted, invoke SYS$MANAGER:NET$CONFIGURE.
    Invoke this procedure from the satellite and select the menu option to perform a full configuration. The network will start automatically when NET$CONFIGURE is finished.
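
To see why the network did not start in step 3, examine the NET$STARTUP_STATUS logical name on the satellite. This is a minimal check using standard DCL; the logical name table shown in the output is illustrative:

$ show logical net$startup_status
   "NET$STARTUP_STATUS" = "OFF-AUTOGENREQ" (LNM$SYSTEM_TABLE)

A value of OFF-AUTOGENREQ indicates that one or more SYSGEN parameters are below the required minimums; correct them in MODPARAMS.DAT, rerun AUTOGEN, and reboot.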

9.1.3 Specifying Defaults for Phase IV Prefix and Node Synonym Directory

By default, a cluster satellite configures its Phase IV Prefix as 49:: and its node synonym directory as .DNA_Nodesynonym. Some clusters may want to have different values for one or both of these attributes. To change these defaults for satellites added to the cluster, define the following logicals in SYS$COMMON:[SYSMGR]NET$LOGICALS.COM before running CLUSTER_CONFIG.

$ define/system/nolog net$phaseiv_prefix "<prefix value>"  
$ define/system/nolog decnet_migrate_dir_synonym "<synonym dir>"  
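
For example, to configure new satellites with a hypothetical Phase IV prefix of 41:: and a hypothetical synonym directory of .MyNodeSynonym, NET$LOGICALS.COM would contain:

$ define/system/nolog net$phaseiv_prefix "41::"  
$ define/system/nolog decnet_migrate_dir_synonym ".MyNodeSynonym"  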

To change these values for a satellite that has already been configured, run NET$CONFIGURE from that satellite.

9.1.4 Customizing Your MOP Client Database for Multiple Boot Nodes

By default, the file NET$MOP_CLIENT_STARTUP.NCL resides in SYS$SYSROOT:[SYSMGR]. In this location, however, the MOP client information is only available to the node on which the file resides. It is up to the system manager to make that information available to more boot nodes, if desired.

Both CLUSTER_CONFIG.COM and NET$CONFIGURE.COM modify the file SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL for the node on which the procedure is invoked. If the file is found in SYS$SYSROOT:[SYSMGR], it is modified and left in that location. Similarly, if the file is found in SYS$COMMON:[SYSMGR], it is modified and left in that location.

One way of allowing more boot nodes to access NET$MOP_CLIENT_STARTUP.NCL is to move it to SYS$COMMON:[SYSMGR]NET$MOP_CLIENT_STARTUP.NCL. All nodes in the OpenVMS Cluster then have access to it.

Alternatively, you can create one file for common MOP client information. Designated boot nodes can execute this file by placing @ncl_script_name in their own SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL file. This method requires more work by the system manager, however, because the configuration procedures do not modify the common file directly.
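
For example, each designated boot node's SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL might contain nothing but an invocation of the shared script (the file name mop_clients_common.ncl is hypothetical):

! Node-specific NET$MOP_CLIENT_STARTUP.NCL
! Execute the cluster-common MOP client definitions.
@sys$common:[sysmgr]mop_clients_common.ncl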

9.2 Using an OpenVMS Cluster Alias

All or some nodes in an OpenVMS Cluster environment can be represented in the network as a single node by establishing an alias for the OpenVMS Cluster. To the rest of the network, an alias node looks like a normal node. It has a normal node object entry in the namespace, which provides a standard address tower. The alias has a single DECnet address that represents the OpenVMS Cluster environment as a whole. The alias allows access to common resources on the OpenVMS Cluster environment without knowing which nodes comprise the OpenVMS Cluster.

Using an alias never precludes using an individual node name and address. Thus, a remote node can address the OpenVMS Cluster as a single node, as well as address any OpenVMS Cluster member individually.

You decide which nodes participate in an alias. It is not necessary for every member of an OpenVMS Cluster environment to be part of the alias. Those nodes in the OpenVMS Cluster environment that have specifically joined the alias comprise the alias members, and connections addressed to the alias are distributed among these members. You can also have multiple aliases, which allow end nodes to be members of more than one alias and accommodate a mixed-architecture cluster: for example, one alias for all the nodes, one for the Alpha systems, and another for the VAX systems.

You can have a maximum of three aliases. Members of the same alias must be members of the same OpenVMS Cluster environment. Nodes joining the same alias must be in the same DECnet area.

When you create multiple aliases, the first alias created is used for outgoing connections by any application whose outgoing alias attribute is set to TRUE. If this alias is not enabled, the local node name is used for the outgoing connection.

Finally, nodes that assume the alias should have a common authorization file.


Note

There must be at least one adjacent DECnet Phase V router on a LAN to support an OpenVMS Cluster alias. A single router can support multiple OpenVMS Cluster environments on a LAN. Providing alias support does not prevent a router from providing normal routing support.

OpenVMS Cluster environments do not have routers. If all nodes on a LAN that form a complete network are DECnet Phase V end nodes, no router is required. Any member of the OpenVMS Cluster can communicate with any system on the LAN. If, however, the LAN is part of a larger network or there are Phase IV nodes on the LAN, there must be at least one adjacent DECnet Phase V router on the LAN. The adjacent DECnet Phase V router allows members of the cluster to communicate with Phase IV nodes or systems in the larger network beyond the LAN.


9.2.1 Adding a Node to an OpenVMS Cluster Alias

To add a node in an OpenVMS Cluster environment to the alias, use the NET$CONFIGURE.COM procedure. For information about NET$CONFIGURE.COM, refer to the DECnet-Plus for OpenVMS Applications Installation and Advanced Configuration guide.


Note

You must run NET$CONFIGURE.COM on each node in the OpenVMS Cluster environment that you want to become a member of the alias.

9.2.2 Adding an OpenVMS Cluster Alias to the Namespace

Before an alias can be identified by name, you must create a node object entry for it in the namespace. Do this only once for each OpenVMS Cluster.

To add an object entry for an OpenVMS Cluster alias in a DECnet Phase V area, use the decnet_register tool together with a Phase IV-style node address that is unique within your network.

The decnet_register tool converts a Phase IV-style address of the form area.node into a 6-byte address when registering a Phase IV node (see Chapter 5, particularly Section 5.3.4, for information about decnet_register). In Phase IV, an area has a value in the range 1-63 and a node has a value in the range 1-1023; for example, 63.135. The converted 6-byte address has the form AA-00-04-00-87-FC.

If you are converting an existing Phase IV OpenVMS Cluster to DECnet Phase V, use the existing Phase IV alias address for the Node ID when configuring and registering the alias. If you are installing a new OpenVMS Cluster in a DECnet Phase V network, use any Phase IV-style address that is unique to your network for the node ID when configuring and registering the alias.


Note

The node ID you use when registering your alias in the namespace must be the same Node ID you use when configuring the alias module using NET$CONFIGURE.

9.2.3 Configuring Multiple OpenVMS Cluster Aliases

If you want to set an outgoing alias for particular nodes in an OpenVMS Cluster, use the following command:

ncl> set alias port port-name outgoing default true  

If you want to set an outgoing alias for an application, use the following command:

ncl> set session control application application-name -  
_ncl> outgoing alias name alias-name  

If you do not set application outgoing alias name and the application has the outgoing alias set to true, the alias name for which you set alias port outgoing default true is used.

If you define application outgoing alias name, this supersedes the setting of alias port outgoing default. If the application outgoing alias name is not enabled, the local node name is used.

If neither alias port outgoing default nor application outgoing alias name is set, the first alias created is used as the default for the system. If this alias is not enabled, the local node name is used.
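
For example, the following commands make the alias on port FINANCE_ALIAS the system default for outgoing connections and direct the MAIL application to a specific alias name (the port and alias names are hypothetical):

ncl> set alias port finance_alias outgoing default true  
ncl> set session control application mail -  
_ncl> outgoing alias name clstr:.finance.allsys  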

9.2.4 Controlling Connect Requests to the OpenVMS Cluster Alias

When a node tries to connect to an alias node, it does not know that its destination is an alias. It consults the namespace to translate the alias node name into an address, and uses the address to send data packets to the alias. Data packets can arrive at any node that is an alias member. When a node in the alias receives a request for a connection to the alias, that node selects a member node (possibly itself) to own the connection.

The node makes its selection based on criteria such as each alias member's selection weight (see Table 9-1).

Once an eligible node is selected, the incoming connect request is forwarded to that node, and the connection is established.


Note

Each connection to the alias is associated with one node, which is a member of the alias. If there is a problem with that node, the connection is lost. It is not transferred to another node in the alias.

9.2.4.1 Controlling Connections to Network Applications

If your node is in an OpenVMS Cluster environment using an alias, you can specify in the application database which network applications use incoming and outgoing connections. Of the applications supplied with DECnet-Plus, only the MAIL application is associated with the alias by default (for outgoing connections). If other applications have been added to the database (such as Rdb, DQS, or an application you supply), you can enable the outgoing alias for the objects associated with those applications.

If you converted from Phase IV to Phase V (or added or changed objects prior to installing DECnet-Plus), the objects will not change back to the defaults.

When MAIL is associated with the alias, MAIL effectively treats the OpenVMS Cluster as a single node. Ordinarily, replies to mail messages are directed to the node that originated the message; the reply is not delivered if that node is not available. If the node is in an OpenVMS Cluster and uses the OpenVMS Cluster alias, an outgoing mail message is identified by the alias node address rather than the individual address of the originating node. An incoming reply directed to the alias address is given to any active node in the OpenVMS Cluster and is delivered to the originator's mail file.

The alias permits you to set a proxy to a remote node for the whole OpenVMS Cluster rather than for each node in the OpenVMS Cluster. A clusterwide proxy can be useful if the alias node address is used for outgoing connections originated by the file access listener (FAL), the application that accesses the file system.
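
For example, a system manager on a remote node can grant one proxy for the entire cluster by specifying the alias name rather than each member node (the node and user names are hypothetical):

$ run sys$system:authorize  
UAF> add/proxy clstr::smith smith/default  
UAF> exit  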

Also, do not allow applications whose resources are not accessible clusterwide to receive incoming connect requests directed to the alias node address. All processors in the OpenVMS Cluster must be able to access and share all resources (such as files and devices). For more information about sharing files in an OpenVMS Cluster environment, see Section 9.3.

