Enter a selection weight and press Return.
To replace a MOP Client Configuration, select Option 10 from the ADVANCED Configuration Option menu.
* Which configuration option to perform? [1] : 10
The following question appears:
* Load MOP on this system? [YES] :
By default, MOP is not started by NET$STARTUP. To make this system service MOP requests, the logical name NET$STARTUP_MOP must be defined to signal NET$STARTUP to load the MOP software. This logical name is normally defined in SYS$STARTUP:NET$LOGICALS.COM.
Answering YES to this question modifies SYS$STARTUP:NET$LOGICALS.COM for you, enabling MOP service on this system. Answering NO removes the logical name definition from SYS$STARTUP:NET$LOGICALS.COM. Note that answering NO has no effect if NET$STARTUP_MOP is defined elsewhere.
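For reference, the definition that enables MOP loading looks something like the following DCL sketch. This is illustrative only; the exact qualifiers and value that net$configure writes to NET$LOGICALS.COM may differ on your system.

$ ! In SYS$STARTUP:NET$LOGICALS.COM (illustrative):
$ ! defining NET$STARTUP_MOP tells NET$STARTUP to load the MOP software
$ DEFINE/SYSTEM/NOLOG NET$STARTUP_MOP TRUE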
If you answer YES, the following displays:
%NET$CONFIGURE-I-MOPCLIENTFND, MOP client NCL script already exists
* Replace MOP Client script file? [NO] : yes
Answer YES to create a new MOP Client NCL script file; otherwise, press Return.
The procedure displays the Summary of Configuration and asks the following:
* Do you want to generate NCL configuration scripts? [YES] :
If you answer YES, the configuration program uses the information you entered to generate modified NCL scripts and, in some cases, automatically modify the system's configuration. The configuration program then returns to the Configuration Options menu.
If you answer NO, the configuration procedure returns to the Configuration Options menu and does not generate any modified NCL scripts.
Note

If your cluster is running mixed versions of DECnet, you cannot use this feature. Instead, you must configure the nodes independently by running net$configure on each system.
To select this option, you must have already configured the system using the ADVANCED configuration option, and net$configure must be executing on a cluster system.
From the ADVANCED Configuration Option menu, select Option 11.
* Which configuration option to perform? [1] : 11
A submenu appears:
Configuration Options:

        [0]  Return to main menu
        [1]  Autoconfigure Phase IV cluster nodes
        [2]  Full configuration of cluster node
        [3]  Configure local node

* Which configuration option to perform? [1] :
Autoconfigure Phase IV Cluster Nodes (Submenu Option 1)
If you select Option 1, the procedure scans the system disk for evidence of satellite nodes that have not yet been configured to run DECnet-Plus. If it finds one, it creates sys$specific:[sys$startup]net$autoconfigure.com, which causes that cluster member to configure DECnet-Plus automatically the next time it reboots. The procedure then prompts you to enter the full name of a cluster alias.
* Fullname of cluster alias: :
Supply the full node name of the cluster alias. If none is supplied, no cluster alias is configured for the systems being upgraded.
* Device containing system roots [SYS$SYSDEVICE:] :
Configuring cluster satellites involves finding the system root from which the satellite boots. Normally, this is SYS$SYSDEVICE:, although it is possible to install system roots to a different volume.
The device given in response to this question is searched for all system roots. Those found that do not contain a checksum database are assumed to be Phase IV nodes, and are candidates for being flagged for DECnet-Plus autoconfiguration.
* Upgrade Phase IV cluster member FIGS? [Yes] :
A system root was found that does not contain a DECnet-Plus checksum database, and is therefore assumed to be a Phase IV system. Answering YES to this question causes that cluster node to be flagged to run a DECnet-Plus autoconfiguration on its next reboot.
* What is the synonym name for this system? [FIGS] :
Full Configuration of Cluster Node (Submenu Option 2)
* Which configuration option to perform? [1] : 2
If you select Option 2, the procedure prompts for a cluster member name and system root location. Once these are supplied, all net$configure modifications are made to the DECnet configuration for that cluster member. Note that only a subset of the configuration options is available in this mode.
* Cluster node name to be configured: : TPZERO
This is simply the node name of the cluster member to configure. The net$configure procedure attempts to find the system root for that cluster member (by scanning NET$MOP_CLIENT_STARTUP.NCL) to supply defaults for the two questions that follow.
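For illustration, a MOP client entry of the kind being scanned might look like the following NCL sketch. The client name and attribute values here are hypothetical; consult your own NET$MOP_CLIENT_STARTUP.NCL for the actual entries.

create mop client TPZERO
set mop client TPZERO circuit csmacd-0
set mop client TPZERO system image {SYS$SYSDEVICE:[SYS2.]}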
* Device for TPZERO root: [SYS$SYSDEVICE] :
In configuring a cluster member other than the system on which net$configure executes, you must specify the location of the member's system root. The location is the disk device on which the cluster member's system root resides.
The default answer is either SYS$SYSDEVICE or the root device found for that system in NET$MOP_CLIENT_STARTUP.NCL.
* Directory for TPZERO root: : SYS2
In configuring a cluster member other than the system on which net$configure executes, you must also specify the system root directory. The system root directory is of the form SYSxxxx, where xxxx is the hexadecimal root number from which that member loads. For example, the member TPZERO above boots from root 2 on SYS$SYSDEVICE:, so its root directory is SYS2.
Note that before net$configure returns to the main menu, it warns that all subsequent options will be applied to the cluster node just specified. Notice also that Option 5 (Configure Timezones) is not present when configuring other cluster members.
%NET$CONFIGURE-I-VERCHECKSUM, verifying checksums
All configuration options will be applied to cluster node TPZERO
Configure Local Node (Submenu Option 3)
* Which configuration option to perform? [1] : 3

If you select Option 3, it cancels the effect of Option 2; all subsequent net$configure modifications are made to the local system (as when net$configure was first started).
Option 12 allows the system manager to make the network startup scripts for NET$APPLICATION_STARTUP, NET$MOP_CLIENT_STARTUP, and NET$EVENT_STARTUP common to all cluster nodes. That is, a single copy of each script is shared by every system in the cluster, ensuring that all systems have the same application, MOP client, and event logging configuration.
It does this by copying the script from the SYS$SPECIFIC directory to the SYS$COMMON directory. Note that when it does so, it does not delete the script from the SYS$SPECIFIC directories of the other cluster members; you must do that by rerunning the dialog for each cluster member.
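Conceptually, the move is equivalent to the following DCL sketch (illustrative only; net$configure also updates its checksum database, which these commands do not do):

$ ! promote the node-specific script to the cluster common directory
$ COPY SYS$SPECIFIC:[SYSMGR]NET$APPLICATION_STARTUP.NCL -
       SYS$COMMON:[SYSMGR]NET$APPLICATION_STARTUP.NCL
$ ! node-specific copies on other cluster members are NOT removed;
$ ! rerun the net$configure dialog for each member to delete them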
To select this option, you must have already configured the system using the ADVANCED configuration option, and net$configure must be executing on a cluster system.
From the ADVANCED Configuration Option menu, select Option 12.
* Which configuration option to perform? [1] : 12
For this example, the system manager selects Option 12 to create cluster common scripts for APPLICATION, EVENT, and MOP_CLIENT. These cluster common scripts are created from the latest configuration on the currently executing system.
* Move the APPLICATION startup script to the cluster common area? [YES] :
%NET$CONFIGURE-I-MOVESCRIPT, created cluster common APPLICATION startup script from SYS$SPECIFIC:[SYSMGR]NET$APPLICATION_STARTUP.NCL;
* Move the EVENT startup script to the cluster common area? [YES] :
%NET$CONFIGURE-I-MOVESCRIPT, created cluster common EVENT startup script from SYS$SPECIFIC:[SYSMGR]NET$EVENT_STARTUP.NCL;
* Move the MOP_CLIENT startup script to the cluster common area? [YES] :
%NET$CONFIGURE-I-MOVESCRIPT, created cluster common MOP_CLIENT startup script from SYS$SPECIFIC:[SYSMGR]NET$MOP_CLIENT_STARTUP.NCL;
%NET$CONFIGURE-I-MODCHECKSUM, checksumming NCL management scripts modified by NET$CONFIGURE
%NET$CONFIGURE-I-CONFIGCOMPLETED, DECnet-Plus for OpenVMS configuration completed
%NET$CONFIGURE-I-USECOMMON, using cluster common APPLICATION script
%NET$CONFIGURE-I-USECOMMON, using cluster common EVENT script
%NET$CONFIGURE-I-USECOMMON, using cluster common MOP_CLIENT script
However, these cluster common scripts will not be used by the system TPZERO, because it still has local copies. So, the system manager selects Option 11 to manage cluster nodes, and then suboption 2 to manage the configuration for node TPZERO.
Configuration Options:

        [0]  Exit this procedure
        [1]  Perform an entire configuration
        [2]  Change naming information
        [3]  Configure Devices on this machine
        [4]  Configure Transports
        [5]  Configure Timezone Differential Factor
        [6]  Configure Event Dispatcher
        [7]  Configure Application database
        [8]  Configure MOP Client database
        [9]  Configure Cluster Alias
        [10] Replace MOP Client configuration
        [11] Configure satellite nodes
        [12] Configure cluster script locations

* Which configuration option to perform? [0] : 11

Configuration Options:

        [0]  Return to main menu
        [1]  Autoconfigure Phase IV cluster nodes
        [2]  Full configuration of cluster node
        [3]  Configure local node

* Which configuration option to perform? [0] : 2
* Cluster node name to be configured: : TPZERO
* Device for TPZERO root: [SYS$SYSDEVICE] :
* Directory for TPZERO root: : SYS2
%NET$CONFIGURE-I-OVERRIDECOMMON, node specific APPLICATION script overrides the cluster common settings
%NET$CONFIGURE-I-OVERRIDECOMMON, node specific EVENT script overrides the cluster common settings
%NET$CONFIGURE-I-OVERRIDECOMMON, node specific MOP_CLIENT script overrides the cluster common settings
All configuration options will be applied to cluster node TPZERO
Upon doing so, we are informed that TPZERO has local versions of these scripts that override the cluster common defaults. Selecting Option 12 allows the manager to delete these local overrides so that TPZERO will use the cluster common versions.
Configuration Options:

        [0]  Exit this procedure
        [1]  Perform an entire configuration
        [2]  Change naming information
        [3]  Configure Devices on this machine
        [4]  Configure Transports
        [6]  Configure Event Dispatcher
        [7]  Configure Application database
        [8]  Configure MOP Client database
        [9]  Configure Cluster Alias
        [10] Replace MOP Client configuration
        [11] Configure satellite nodes
        [12] Configure cluster script locations

* Which configuration option to perform? [0] : 12
* Delete the local APPLICATION startup script? [No] : yes
%NET$CONFIGURE-I-DELETEDOVERRIDE, deleted system specific copy of the APPLICATION startup script
* Delete the local EVENT startup script? [No] : yes
%NET$CONFIGURE-I-DELETEDOVERRIDE, deleted system specific copy of the EVENT startup script
* Delete the local MOP_CLIENT startup script? [No] : yes
%NET$CONFIGURE-I-DELETEDOVERRIDE, deleted system specific copy of the MOP_CLIENT startup script
%NET$CONFIGURE-I-MODCHECKSUM, checksumming NCL management scripts modified by NET$CONFIGURE
%NET$CONFIGURE-I-CONFIGCOMPLETED, DECnet-Plus for OpenVMS configuration completed
%NET$CONFIGURE-I-USECOMMON, using cluster common APPLICATION script
%NET$CONFIGURE-I-USECOMMON, using cluster common EVENT script
%NET$CONFIGURE-I-USECOMMON, using cluster common MOP_CLIENT script
All configuration options will be applied to cluster node TPZERO
After you have made all your selections, the procedure displays a summary of your changes:
Summary of Configuration

Node Information:
    Directory Services Chosen:                 DECDNS,LOCAL,DOMAIN
    Primary directory service:                 DECDNS
    DECdns full name:                          PHASEV:.ENG.SSG.TEST.ELMER
    Local Full name:                           LOCAL:.ELMER
    Fully Qualified Host name:                 ELMER.WABBIT.ACME.EDU
    Node Synonym:                              ELMER
    Phase IV Address:                          15.27
    Phase IV Prefix:                           49::
    Autoconfiguration of Network Address:      Enabled
    Session Control Address Update Interval:   10
    Routing ESHello Timer:                     600
    Alias Name:                                ACME:.WABBIT.HELP

Device Information:
    Device: XQA0 (DELQA):
        Data Link name:                        CSMACD-0
        Routing Circuit Name:                  CSMACD-0
    .
    .
    .
At the end of the summary, the procedure asks if you want to generate NCL configuration scripts (which now contain your updated information):
* Do you want to generate NCL configuration scripts? [YES] :
If you answer YES, the configuration program uses the information you entered to create the alias NCL script. The configuration program then returns to the Configuration Options menu. To implement the alias NCL script, reboot the system or disable the entity and execute the script.
If you answer NO, the configuration procedure returns to the Configuration Options menu and does not generate any NCL scripts.
Part II describes the steps necessary to configure VAX P.S.I. and VAX P.S.I. Access on a DECnet-Plus for OpenVMS VAX system.
This chapter describes how to configure the VAX P.S.I. and VAX P.S.I. Access software.
See Figure 4-1 for the steps required to configure your VAX P.S.I. system.
This section introduces the aspects of your proposed configuration that you need to consider before you run the configuration program.
There are three types of VAX P.S.I. systems: Access, Native, and Multihost.
Refer to the DECnet-Plus Planning Guide for an explanation of VAX P.S.I. systems.
The types you can configure depend on the licenses you have installed. Table 4-1 summarizes the possible configurations.
Figure 4-1 Installation and Configuration Flowchart
License(s) | Possible VAX P.S.I. Configurations |
---|---|
DECnet-VAX only | VAX P.S.I. Access |
Native only | VAX P.S.I. Native |
DECnet-VAX and Native | VAX P.S.I. Access; VAX P.S.I. Native; VAX P.S.I. Multihost; VAX P.S.I. Access + Native; VAX P.S.I. Access + Multihost |
The VAX P.S.I. configuration program has many sections, but not all sections are relevant to all types of systems. Table 4-2 shows the sections that apply to each type of system.
Section | Applies to Access? | Applies to Native? | Applies to Multihost? | Required or Optional |
---|---|---|---|---|
Set Up Lines and DTEs | No | Yes | Yes | O¹ |
Set Up PVCs | No | Yes | Yes | O |
Set Up Groups | No | Yes | Yes | O |
Set Up LLC2 | Yes | Yes | Yes | O¹ |
Set Up Remote DTE Classes | Yes | No | No | R |
Choose X.29 and P.S.I. Mail Support | Yes | Yes | Yes | O |
Set Up Gateway Clients | No | No | Yes | R |
Set Up Applications | Yes | Yes | Yes | O |
Declaring a Network Process | Yes | Yes | Yes | O |
Set Up Templates | Yes | Yes | Yes | O |
Select X.25 Security Option | Yes | Yes | Yes | O |
Set Up Incoming Security for Applications | Yes | Yes | Yes | O |
Set Up Outgoing Security for Local Processes | Yes | Yes | Yes | O |
Set Up Incoming Security for Network Processes | Yes | Yes | Yes | O |
Set Up Incoming Security for Gateway Clients | No | No | Yes | O |
Set Up Outgoing Security for Accessing Systems | No | No | Yes | O |
Create the NCL Script | Yes | Yes | Yes | O |
The VAX P.S.I. configuration program automatically skips sections that do not apply to your type of system.
This section explains the purpose of each section in the P.S.I. configuration program.
Set Up Lines and DTEs

Choose a line on your system to configure for X.25 communications. You must configure at least one synchronous line unless you intend to use LLC2 exclusively.
Set Up PVCs

Your DTE can communicate with a remote DTE using either an SVC (switched virtual circuit) or a PVC (permanent virtual circuit). A PVC is a permanent association between two specific DTEs.
Two DTEs connected by a PVC can communicate without the need for call clearing or call setup.
Complete this section if you have requested this facility from your PSDN.
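In the generated NCL script, a PVC is a subentity of the DTE it belongs to. The following sketch is hypothetical: the DTE name, PVC name, and values are invented, the channel and packet/window sizes must match your PSDN subscription, and the attribute placement should be verified against your NCL reference.

create x25 protocol dte dte-0 pvc stock
set x25 protocol dte dte-0 pvc stock channel 1, packet size 128, window size 2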
Set Up Groups

If your DTE belongs to a closed user group (CUG), it can communicate freely with remote DTEs that are also members of that CUG. Its communications with other DTEs may be restricted, depending on your PSDN subscription options.
You must complete this section if you have requested this facility from your PSDN.
Set Up LLC2

LLC2 is a data link protocol used on LANs, over which the X.25 Packet-Level Protocol (PLP) runs.
You must set up an LLC2 DTE for each remote system to which you want to connect on the LAN. You can set up one or more LLC2 DTEs per LAN connection.
Set Up Remote DTE Classes

Use this section to specify the connector system(s) that your Access system uses.
Choose X.29 and P.S.I. Mail Support

This section allows you to add support for X.29 communications and for P.S.I. Mail.
You need X.29 support if your VAX P.S.I. system is to communicate with character-mode terminals.
P.S.I. Mail is an extension of OpenVMS Mail that lets you send mail messages to and receive them from other VAX P.S.I. systems across a PSDN.
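P.S.I. Mail addresses name the remote system by its DTE address, in the form PSI%dte-address::username. A hypothetical exchange follows; the DTE address and user name are invented for illustration.

$ MAIL
MAIL> SEND
To:     PSI%234219876543::JONES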
Set Up Gateway Clients

You must create gateway clients to allow your multihost system to pass incoming calls to the correct client system. A gateway client identifies a client system or group of client systems that use this multihost system to receive incoming calls.
In this section, you also set up filters for gateway clients. You must set up at least one filter for each gateway client. See the Filters section below for more about filters.
Filters

Filters are sets of characteristics that can be matched to fields in an incoming call request packet. If the characteristics in an incoming call match the characteristics you set in a filter, then the call is passed to the gateway client or the application associated with that filter.
You must supply a filter name and a priority for each filter. You may leave all the other parameters unspecified.
The more parameters you specify in a filter, the more specific is that filter. For example, you could create a filter with most of its parameters unspecified and with a low priority to act as a "catchall" for unexpected calls.
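As a sketch, such a catchall filter might be created in NCL as follows. The filter name is invented, and the attribute names should be verified against your NCL documentation.

create x25 access filter catchall
set x25 access filter catchall priority 1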
Set Up Applications

You must specify any X.25 or X.29 applications on your system to allow incoming calls for those applications to succeed.
You must supply the name of the command file that starts the application. You may also supply a user name for the application.
Do not specify any applications that do not receive calls.
In this section, you also set up filters for applications. You must set up at least one filter for each application. See the Filters section above for more about filters.
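An application entry in the generated script might look like the following sketch; the application name, command file, user name, and filter name are all hypothetical, and the attribute names should be checked against your NCL reference.

create x25 access application xyz_server
set x25 access application xyz_server file SYS$MANAGER:XYZ_SERVER_STARTUP.COM
set x25 access application xyz_server user XYZ$SERVER
set x25 access application xyz_server filters {xyz_filter}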
Declaring a Network Process

X.25 and X.29 programs on your system can issue $QIO(IO$_ACPCONTROL) calls to declare themselves as network processes. Each $QIO(IO$_ACPCONTROL) call specifies a filter used to determine which incoming calls are passed to the program.

The filter specified by $QIO(IO$_ACPCONTROL) can be one of two types:

- A named filter, set up in this section of the configuration program
- An unnamed dynamic filter, whose characteristics are supplied in the call itself

If your programs issue only $QIO(IO$_ACPCONTROL) calls that use unnamed dynamic filters, you do not need to complete this section.
Templates
Your system uses a template to make outgoing calls. A template sets various parameters for each call made using that template.
A template called "default" is created automatically on your system.
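A user-defined template might look like the following NCL sketch. The template name and values are hypothetical; the attributes available depend on your system and should be checked against your NCL reference.

create x25 access template payroll
set x25 access template payroll packet size 128, window size 2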
Set Up Security

Set up security to prevent unauthorized use of your VAX P.S.I. system. There are six security sections:

- Select X.25 Security Option
- Set Up Incoming Security for Applications
- Set Up Outgoing Security for Local Processes
- Set Up Incoming Security for Network Processes
- Set Up Incoming Security for Gateway Clients (Multihost systems only)
- Set Up Outgoing Security for Accessing Systems (Multihost systems only)
Create the NCL Script

When you are satisfied that all the information you entered is complete and correct, the configuration program creates two NCL scripts using the information you provided.