
DECnet-Plus for OpenVMS
Network Management



When you enter the disable command, any existing connection between the outbound stream and its event sink is deleted, along with the corresponding inbound stream. By default, the system performs an orderly shutdown. Whenever possible, Digital recommends that you perform orderly shutdowns of event stream connections.
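
For example, the following command performs the default orderly shutdown of the outbound stream netmgr1_obs used in this chapter's examples:

ncl> disable event dispatcher outbound stream netmgr1_obs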

If necessary, you can abort the connection immediately with the following command:

ncl> disable event dispatcher outbound stream -   
_ncl> netmgr1_obs method abort   

When the disable command completes, the outbound stream's state status attribute is set to OFF. Once this condition exists, you can activate the outbound stream again by issuing the enable command, or you can delete the outbound stream.
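
For example, to reactivate the outbound stream used in these examples:

ncl> enable event dispatcher outbound stream netmgr1_obs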

To delete an outbound stream, its state status must be OFF. You can then issue the following command:

ncl> delete event dispatcher outbound stream netmgr1_obs   

12.6.2 Disabling and Deleting an Event Sink

Before deleting an event sink, use the disable command to set the sink's state status attribute to OFF:

ncl> disable event dispatcher sink netmgr1_sink_a   

Disabling a sink terminates any existing connections with outbound streams and deletes all the inbound stream subentities corresponding to the sink. After disabling a sink, you can activate it again by issuing the enable command, or you can delete it.
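
For example, to reactivate the sink used in these examples:

ncl> enable event dispatcher sink netmgr1_sink_a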

To delete an event sink, its state status must be OFF. You can then issue the following command:

ncl> delete event dispatcher sink netmgr1_sink_a   

12.7 Collecting Event Reports from Phase IV Systems

To record events from a Phase IV system on a DECnet-Plus system, you need to define the relay entity and its logging subentities. The relay entity receives the events from a Phase IV node, encapsulates them, and posts them in the DECnet-Plus system event dispatcher.

To see the relayed events, you must also have created and enabled the event dispatcher entity, an outbound stream, and a sink on the DECnet-Plus system.
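
If these entities do not already exist, the following condensed outline shows the general sequence, using the example names netmgr1_obs and netmgr1_sink_a from earlier in this chapter. Before you enable the outbound stream, you must also set its sink address attributes as described in the earlier sections.

ncl> create event dispatcher
ncl> enable event dispatcher
ncl> create event dispatcher sink netmgr1_sink_a
ncl> enable event dispatcher sink netmgr1_sink_a
ncl> create event dispatcher outbound stream netmgr1_obs
ncl> enable event dispatcher outbound stream netmgr1_obs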

12.7.1 Creating and Enabling the Relay Entity

The following example creates and enables the relay entity. (See Figure 12-4 for the position of the relay subentity in the event dispatcher entity hierarchy.)

ncl> create event dispatcher relay   
ncl> enable event dispatcher relay   
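
To check whether the relay entity already exists and whether it is enabled, you can display its attributes with a show command, for example:

ncl> show event dispatcher relay all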

Note

The relay entity is created and enabled by default when DECnet-Plus is started.

12.7.2 Disabling and Deleting the Relay Entity

To set the relay entity's state status to OFF, use the disable command:

ncl> disable event dispatcher relay   

To delete the relay entity, use the delete command:

ncl> delete event dispatcher relay   

12.7.3 Enabling and Disabling Logging Entities

The logging entity can be of type console, file, or monitor. Logging entities are created and enabled by the relay entity. You can also enable them explicitly by using the enable command, as the following example shows:

ncl> enable event dispatcher relay logging console   

Logging entities are disabled and deleted by the parent relay entity. You can also disable them explicitly by using the disable command, as the following example shows:

ncl> disable event dispatcher relay logging console   

12.7.4 Using NCP Event Logging Commands on Phase IV Systems

Use Phase IV NCP commands (from the local Phase IV system or from a DECnet Phase V system) to direct the event messages from a Phase IV source node to a DECnet Phase V sink node. For example:

ncp> set logging console known events sink node decnet-osi-system   
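
If you also want this logging setting to persist across restarts of the Phase IV node, the corresponding DEFINE command updates the permanent database (shown here with the same example sink node name):

ncp> define logging console known events sink node decnet-osi-system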

For information about using NCP event logging, refer to your Phase IV documentation.

12.7.5 Sample Relayed Phase IV Event

The following example shows a typical event relayed from a Phase IV system.

Event: Event Relayed from: Node ADMIN:.NetMgr Event Dispatcher RELAY   
LOGGING Console, (1)   
        at: 1995-02-28-15:33:11.909-05:00I0.405 (2)   
   
        Formatted NICE Data=   
DECnet event 0.9, counters zeroed (3)   
From node 1.234 (PHASE4), 28-FEB-1992 16:31:18.08   
Circuit QNA-0,   
     65535  seconds since last zeroed   
    977346  arriving packets received        
   1087487  departing packets sent   
         0  arriving congestion loss   
         0  transit packets received   
         0  transit packets sent   
         0  transit congestion loss   
         2  line down   
         0  initialization failure   
       309  Unknown counter type 822   
         1  Unknown counter type 900   
   2432065  data blocks sent   
 536847580  bytes sent   
   2552820  data blocks received   
 221074963  bytes received   
         0  Multicast received for disabled protocol   
       541  user buffer unavailable   
   
        eventUid   96486342-D6F5-CA11-8043-AA000400804D (4)   
        entityUid  B677CFE1-D5F5-CA11-8042-AA000400804D   
        streamUid  0691473F-D3F5-CA11-8042-AA000400804D     
  1. Specifies the event and entity instance.
  2. Specifies when the event occurred.
  3. Specifies the event generated by the Phase IV system.
  4. Specifies unique identification (UID) values for the various components involved.


Appendix A
DECnet Phase IV Components and Corresponding Phase V Entities

Table A-1 lists the Phase IV components and parameters with their corresponding DECnet Phase V entities and attributes.

Table A-1 NCP-NCL Equivalents
Phase IV Component  Phase IV Parameter        DECnet Phase V Entity       DECnet Phase V Attribute
Executor            Incoming Timer            Session                     Incoming Timer
Executor            Outgoing Timer            Session                     Outgoing Timer
Executor            Incoming Proxy            Session                     Incoming Proxy
Executor            Outgoing Proxy            Session                     Outgoing Proxy
Executor            Maximum Links             NSP                         Max Transport Connections
Executor            Delay Factor              NSP                         Delay Factor
Executor            Delay Weight              NSP                         Delay Weight
Executor            Inactivity Timer          NSP                         KeepAlive Time
Executor            Retransmit Timer          NSP                         Retransmit Threshold
Executor            Type                      Routing                     Type
Executor            Broadcast Routing Timer   Routing                     PhaseIV Broadcast Routing Timer
Executor            Maximum Address           Routing                     PhaseIV Maximum Address
Executor            Maximum Circuits          Routing                     Maximum Circuits
Executor            Maximum Cost              Routing                     PhaseIV Maximum Cost
Executor            Maximum Hops              Routing                     PhaseIV Maximum Hops
Executor            Maximum Visits            Routing                     PhaseIV Maximum Visits
Executor            Maximum Area              Routing                     PhaseIV Maximum Area
Executor            Area Maximum Cost         Routing                     PhaseIV Area Maximum Cost
Executor            Area Maximum Hops         Routing                     PhaseIV Area Maximum Hops
Executor            Maximum Buffers           Routing                     Maximum Buffers
Executor            Buffer Size               Routing                     PhaseIV Buffer Size
Executor            Segment Buffer Size       Routing                     PhaseIV Segment Buffer Size
Executor            Maximum Path Splits       Routing                     Maximum Path Splits
Executor            Pipeline Quota¹
Node                Service Circuit           MOP Client                  Circuit
Node                Service Password          MOP Client                  Verification²
Node                Hardware Address          MOP Client                  Addresses
Node                Load File                 MOP Client                  System Image
Node                Secondary Loader          MOP Client                  Secondary Loader
Node                Tertiary Loader           MOP Client                  Tertiary Loader
Node                Diagnostic File           MOP Client                  Diagnostic Image
Node                Management File           MOP Client                  Management Image
Node                Load Assist Agent         MOP Client                  System Image³
Node                Load Assist Parameter     MOP Client                  System Image³
Node                Dump File                 MOP Client                  Dump File
Node                Dump Address              MOP Client                  Dump Address
Node                Dump Count¹
Node                Host                      MOP Client                  PhaseIV Host Name and Address
Node                Receive Password          Routing Permitted Neighbor  Verifier
Node                Loop Assistant            MOP Circuit                 Assistant System
Node                Loop Help                 MOP Circuit                 Assistance Type
Line                Receive Buffers           CSMA-CD Station             Receive Buffers
Line                Receive Buffers           DDCMP Link                  Receive Buffers
Line                Service Timer             MOP Circuit                 Retransmit Timer
Line                Duplex                    Modem Connect Line          Duplex
Line                Clock                     Modem Connect Line          Clock
Line                Retransmit Timer          DDCMP Link                  Retransmit Timer
Line                Line Speed                Modem Connect Line          Speed
Line                Protocol                  DDCMP Link                  Protocol
Object              File ID                   Session Application         Image Name
Object              User ID                   Session Application         User Name
Object              Alias Outgoing            Session Application         Outgoing Alias
Object              Alias Incoming            Session Application         Incoming Alias
Object              Proxy                     Session Application         Incoming Proxy
Object              Proxy                     Session Application         Outgoing Proxy
Circuit             Service                   MOP Circuit                 Function
Circuit             Cost                      Routing Circuit             L1/L2 Cost
Circuit             Router Priority           Routing Circuit             L1/L2 Router Priority
Circuit             Hello Timer               Routing Circuit             Hello Timer
Circuit             Maximum Recalls           Routing Circuit             Maximum Call Attempts
Circuit             Recall Timer              Routing Circuit             Recall Timer
Circuit             Number                    Routing Circuit             Neighbor DTE Address
Circuit             Transmit Timer            Routing Circuit             Transmit Timer
Circuit             Transmit Timer            DDCMP Logical Station       Transmit Timer
Circuit             Verification              Routing Circuit             Transmit Verifier


¹No equivalent; not applicable.
²See Section 10.2.2.1.
³Special form of SYSTEM IMAGE set by CLUSTER_CONFIG.COM for OpenVMS.


Appendix B
Circuit Devices

B.1 CSMA-CD Devices

DECnet-Plus supports the circuit devices listed in Table B-1, providing multi-access connections among many nodes on the same CSMA-CD circuit.

Table B-1 CSMA-CD Devices on OpenVMS Systems
Device   Bus Name   Device   Bus Name
DEBNA    BI-bus     DE422    EISA
DEBNI    BI-bus     DEMNA    XMI
DELQA    Q-bus      KFE32
DELUA    UNIBUS     SGEC
DESVA    None       PMAD     TURBOchannel
DEUNA    UNIBUS     TULIP    EISA, PCI

B.2 FDDI Devices

DECnet-Plus supports the devices listed in Table B-2, providing multi-access connections among many nodes on the same FDDI circuit.

Table B-2 FDDI Devices on OpenVMS Systems
Device Bus Name
DEFAA Futurebus
DEFEA EISA
DEFZA TURBOchannel
DEFTA TURBOchannel
DEFQA Q-bus
DEMFA XMI
FOCUS

B.3 Synchronous Devices

DECnet-Plus supports the synchronous devices listed in Table B-3. All of the synchronous line devices are either point-to-point or multipoint tributary circuit devices.

Table B-3 Synchronous Devices on OpenVMS Systems
Device Bus Name
DMB32 VAXBI
DMF32 UNIBUS
DNSES EISA (OpenVMS Alpha only)
DPV11 Q-bus
DSB32 VAXBI
DSF32 MI-bus
DSH32 None
DST32 None
DSV11 Q-bus
DSW21 None
DSW41 None
DSW42 None
DSYT1 TURBOchannel (OpenVMS Alpha only)
DUP11 UNIBUS
SSCC None

B.4 Asynchronous Devices

DECnet-Plus supports the asynchronous devices listed in Table B-4.

Table B-4 Asynchronous Devices
Device Mnemonic
DHQ11 TX
DHU11 TX
DHV11 TX
DMB32 TX
DMF32 TX
DMZ32 TX
DZ11 TT
DZ32 TT
DZQ11 TT
DZV11 TT


Appendix C
delay factor and delay weight for NSP and OSI Transport

The following sections provide information about using the delay factor and delay weight attributes when configuring NSP and OSI transport.

C.1 delay factor and delay weight

On class 4 transport connections, the transport service retransmits transport protocol data units (TPDUs) if the remote host does not acknowledge them within a certain period; this period is known as the retransmission time. If the remote host fails to acknowledge a TPDU after a certain number of retransmissions, the local transport service assumes that the network connection has failed, and disconnects the transport connection.

The transport service controls this aspect of its operation by using a retransmission timer. The values of the delay factor and delay weight attributes are used in the algorithm for calculating the value of the retransmission timer.

The default values of delay factor and delay weight should be suitable for most networks. However, consider increasing their values if wide variations in round-trip delay times exist on your network.
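
For example, the following commands raise the delay factor on NSP and on OSI transport. The value 3 is purely illustrative; choose a value appropriate to your network, and note that, depending on your configuration, some transport characteristics can be changed only while the entity is disabled.

ncl> set nsp delay factor 3
ncl> set osi transport delay factor 3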

The transport service uses the following algorithm to calculate the value of the retransmission timer:

  1. Calculate an average round-trip delay for each TPDU. The round-trip delay is the time that elapses between sending a TPDU and receiving an acknowledgment of that TPDU from the remote host.
    See Section C.2 for information on how the average round-trip delay is calculated.
  2. Calculate the retransmission timer value using the formula:

    retransmission timer = (average round-trip delay * delay factor)
                           + remote acknowledgment time
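
    For example, using purely illustrative values: if the average round-trip delay is 2 seconds, the delay factor is 3, and the remote acknowledgment time is 1 second, the retransmission timer is set to (2 * 3) + 1 = 7 seconds.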
    

The effect of delay factor is to increase the retransmission time by increasing the average round-trip delay time, thus allowing for additional network delay. The default value of delay factor is suitable for most networks. You might want to increase its value if there is considerable variation in round-trip delay from one TPDU to another.

The remote acknowledgment time is the maximum time for which the remote transport service will wait before acknowledging a TPDU that it has received. The remote transport service tells the local transport service the value of its acknowledgment time when the transport connection is established.

The value of the retransmission timer is, therefore, the sum of the estimated round-trip delay (weighted by the delay factor) and the time taken for the remote transport service to acknowledge a TPDU.

C.2 Estimating the Round-Trip Delay

The transport service continuously recalculates its estimate of the average round-trip delay by taking into consideration recent samples of actual round-trip delay. This ensures that the retransmission timer is adjusted to suit current network conditions. The factors used in the calculation are:



Copyright © Digital Equipment Corporation 1996. All Rights Reserved.
