ISO 8802-3 (CSMA-CD) LANs support a bus topology, a single communications medium to which all the systems are attached as equals. The single network cable replaces the numerous interconnecting cables usually required in traditional WANs. This type of network is also called a broadcast LAN. The maximum possible distance between systems on the LAN is 2.8 kilometers (1.74 miles).
You can connect segments of coaxial cable to extend LANs beyond the 500-meter (1640 feet) limit of a single segment. Extended LANs create larger networks in terms of distance and also in terms of the number of connections that can be made. (A 500-meter segment supports 100 physical connections.) Repeaters and bridges join cable segments:
If the end systems on a LAN communicate only with each other, you do not need an intermediate system; you can use host-based routing. Optionally, you can use an intermediate system to limit multicast traffic. To route messages off the LAN over other types of routing circuits, such as DDCMP circuits, you must configure an intermediate system.
If a LAN is operating with more than one area and with one or more level 1 intermediate systems, you need a level 2 intermediate system to transport messages between areas.
DECnet-Plus for OpenVMS allows multiple circuits to be active and usable simultaneously on an end system. For example, you can connect an end system to two LAN cables. Both routing circuits are used and traffic is split between the circuits, but no routing occurs over these circuits. A DECnet-Plus for OpenVMS end system can support a maximum of three circuits. This feature provides for redundancy and increased data throughput without requiring an intermediate system.
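The traffic-splitting behavior can be sketched with a simple round-robin model. This is an illustrative assumption only; the circuit names and the exact splitting algorithm DECnet-Plus uses are not specified here:

```python
from itertools import cycle

def split_traffic(packets, circuits):
    """Round-robin packets across the active circuits of an end
    system (hypothetical model, not the DECnet-Plus algorithm)."""
    assignment = {c: [] for c in circuits}
    chooser = cycle(circuits)
    for p in packets:
        assignment[next(chooser)].append(p)
    return assignment

# An end system connected to two LAN cables (the maximum is three circuits).
result = split_traffic(list(range(6)), ["csma-cd-0", "csma-cd-1"])
print(result["csma-cd-0"])  # [0, 2, 4]
print(result["csma-cd-1"])  # [1, 3, 5]
```

Either circuit can carry any packet, which is what provides the redundancy: if one cable fails, traffic continues over the other.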
You can partition a large DECnet-Plus routing domain into subdomains called areas. For Phase IV, an area is a group of network nodes that can run independently, with all nodes in the group having the same area address. DECnet-Plus areas are similar to Phase IV areas except for the following new features:
A node can be in only one area in a network. DECnet-Plus systems, however, can have more than one area address. Such a system is a multihomed system. You can assign up to three area addresses to a DECnet-Plus system.
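Because a multihomed system can hold several area addresses, two systems belong to the same area if any of their address sets overlap. A minimal sketch, using made-up area address strings:

```python
def share_an_area(addrs_a, addrs_b):
    """Two systems are in the same area if any of their area
    addresses match (a DECnet-Plus system may have up to three)."""
    return bool(set(addrs_a) & set(addrs_b))

# Hypothetical area addresses (illustrative strings, not real NSAPs):
sys1 = ["49::00-01", "49::00-02"]
sys2 = ["49::00-02", "47::12-34"]
sys3 = ["47::56-78"]

print(share_an_area(sys1, sys2))  # True  (both hold 49::00-02)
print(share_an_area(sys1, sys3))  # False
```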
A DECnet-Plus area is a set of systems that all share the same area address (or addresses). An area (and the systems within the area) can have more than one area address. For example, if an area in a DECnet-Plus network is connected to an X.25 public network, a system in that area of the network would have two addresses: one for the DECnet-Plus network and one for the X.25 network. A system cannot have addresses on two networks that are not connected.
If you connect two areas to a DECnet-Plus LAN, the level 2 intermediate systems automatically combine themselves into one area with two area addresses. Phase IV LANs can be divided into several areas. Mixed Phase IV and DECnet-Plus LANs can also include several areas.
High-level Data Link Control (HDLC) protocol data links are ISO, synchronous, point-to-point links that are basically the same in function as existing DDCMP synchronous links. However, HDLC is a bit-oriented protocol, whereas DDCMP is a byte-oriented protocol.
HDLC operates over synchronous, switched, or nonswitched communications links. HDLC supports a broad range of existing subsets, including the subset used in X.25 networks.
HDLC operates in either of two modes:
HDLC links use UI frames and XID frames. A UI frame is an unnumbered, information frame that carries data not subject to flow control or error recovery. An XID frame exchanges operational parameters between participating stations.
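A UI frame can be sketched in a few lines. The control field value 0x03 for UI and the CRC-16/X.25 frame check sequence are standard HDLC; the flag bytes and zero-bit stuffing of a real transmitted frame are omitted here for brevity:

```python
def crc16_x25(data: bytes) -> int:
    # CRC-16/X.25, as used for the HDLC frame check sequence:
    # reflected polynomial 0x8408, initial value 0xFFFF, final XOR 0xFFFF.
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

def build_ui_frame(address: int, info: bytes) -> bytes:
    # UI control field is 0x03 (unnumbered information, poll bit clear).
    body = bytes([address, 0x03]) + info
    fcs = crc16_x25(body)
    # The FCS is transmitted low-order byte first.
    return body + bytes([fcs & 0xFF, fcs >> 8])

frame = build_ui_frame(0xFF, b"hello")
print(frame[1])    # 3  (UI control field)
print(len(frame))  # 9  (address + control + 5 info bytes + 2 FCS bytes)
```

An XID frame has the same shape but a different control field value and carries negotiated parameters rather than user data.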
DECnet-Plus for OpenVMS also supports a modified form of HDLC called link access protocol balanced (LAPB). LAPB is the CCITT-approved link level protocol for X.25 connections. LAPB defines the procedure for link control in which the DTE/DCE interface is defined as operating in two-way asynchronous balanced mode (ABM). LAPB is for the reliable transfer of a packet from a host to an X.25 packet switch, which then forwards the packet on to its destination.
DDCMP is designed to provide an error-free communications path between adjacent systems. It operates over serial lines, delimits frames by a special character, and includes checksums at the link level.
DECnet-Plus for OpenVMS continues to support proprietary DDCMP data links, which include these types of connections:
DDCMP provides a low-level communications path between systems. The protocol detects any bit errors that are introduced by the communications channel and requests retransmission of the block. The DDCMP module provides for framing, link management, and message exchange (data transfer). Framing involves synchronization of bytes and messages.
DDCMP pipelining permits several packets to be sent before an acknowledgment is received. Piggybacking permits an acknowledgment to be transmitted on a data packet.
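The effect of pipelining and piggybacked acknowledgments can be shown with a toy sender model. This is an illustrative sketch, not the DDCMP implementation, and the window size is arbitrary:

```python
class PipelinedSender:
    """Toy model of pipelining: up to `window` unacknowledged
    packets may be outstanding at once."""
    def __init__(self, window):
        self.window = window
        self.next_seq = 1   # sequence number of the next packet to send
        self.acked = 0      # highest sequence number acknowledged so far

    def can_send(self):
        return self.next_seq - self.acked <= self.window

    def send(self):
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def receive_ack(self, ack_num):
        # An acknowledgment (possibly piggybacked on a data packet
        # flowing the other way) covers every packet up to ack_num.
        self.acked = max(self.acked, ack_num)

s = PipelinedSender(window=3)
sent = [s.send() for _ in range(3) if s.can_send()]
print(sent)          # [1, 2, 3] sent without waiting for any ack
print(s.can_send())  # False: the window is full
s.receive_ack(2)     # ack for packets 1 and 2 arrives
print(s.can_send())  # True: the window has reopened
```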
The DDCMP protocol moves information blocks over an unreliable communication channel and guarantees delivery of routing messages. Individual systems on DDCMP routing circuits are addressed directly because no multicast or broadcast addressing capability is available.
The Data Link layer supports point-to-point DDCMP links, either synchronous or asynchronous. The two types of asynchronous links are static (permanent) and dynamic (switched temporary).
Synchronous links provide medium- to high-speed point-to-point communication. The synchronous DDCMP protocol can run in full- or half-duplex mode. This gives DDCMP the flexibility to be used for local synchronous communications or for remote synchronous communications over a telephone line using a modem.
DDCMP is implemented in the driver software (WANDD) for the synchronous communications port.
Asynchronous links provide a low-speed, low-cost medium for point-to-point communication. Asynchronous DDCMP can run over any directly connected station that the DECnet-Plus for OpenVMS system supports. Asynchronous DDCMP provides a full-duplex connection. You can use it for remote asynchronous communications over a telephone line using a modem. Asynchronous connections are not supported for maintenance operations or for controller loopback testing.
Asynchronous DDCMP does not need to be predefined for dynamic connections. It is established automatically when a dynamic asynchronous DDCMP link is made.
DDCMP asynchronous links can be static or dynamic.
DECnet-Plus supports HDLC as an alternative to DDCMP to:
You can convert existing DDCMP point-to-point synchronous lines to HDLC lines. However, DECnet-Plus also supports DDCMP for compatibility and to provide these capabilities not available with HDLC:
The Physical layer is responsible for the transmission and receipt of data on the physical media that connect systems. It transparently moves data between the system and the communications path signaling equipment. The Physical layer can include part of the device driver for a communications device, as well as the communications hardware itself: interface devices, modems, and communication lines.
The Physical layer supports the CSMA-CD interface, which complies with ISO standard 8802-3 and the IEEE 802.3 standard. The Physical layer also allows CSMA-CD LAN connections based on the DECnet Phase IV Ethernet interface.
The DECnet-Plus Modem Connect module defines the operation of synchronous and asynchronous devices. It provides for network management of stations (physical lines) that conform to industry standards for modem connection. You can establish and monitor the following types of links:
The Modem Connect module supports several industry standards for physical interfaces:
The Modem Connect module does not contain driver-specific code. It contains all the common routines related to network management for all synchronous and asynchronous drivers. The functions controlled through the Modem Connect module include:
DECnet-Plus for OpenVMS network management capabilities include:
The structure of DECnet-Plus network management defines formal relationships between the management software at the various layers and the directors that communicate with the layers on behalf of network managers.
The two major components of network management are directors and entities. Directors are interfaces for managing the entities; they are typically used by the network manager and usually involve a command language. Entities, the manageable components that make up the network, can be related to other entities on the same system.
The entity model defines the structure of the entities that constitute a distributed system and the management functions they provide.
For further information on NCL commands, refer to the DECnet-Plus Network Control Language Reference guide.
The Network Control Language (NCL) is a command line interface to the director, which interprets the input NCL commands, sends these commands to a target entity for processing, and presents the results back to the network manager.
Directors use common mechanisms to communicate with the entities they manage. The same director can manage both local and remote entities. If the managed entity resides on the local system, the director uses the local interface to access it. If the managed entity resides on a remote system, the director converts the NCL message to CMIP and sends the converted message to the entity agent on the remote system.
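The dispatch decision a director makes can be sketched as follows. The class and function names are hypothetical, and real CMIP encoding uses ASN.1 rather than a Python tuple:

```python
class Entity:
    """Stand-in for a locally managed entity."""
    def __init__(self, name):
        self.name = name
    def handle(self, directive):
        return f"{directive['action']} on local {self.name}"

def encode_cmip(directive):
    # Stand-in for real CMIP encoding (ASN.1 in practice).
    return ("CMIP", directive["entity"], directive["action"])

def dispatch(directive, target, local_node, entities, send_remote):
    """If the target entity is local, use the local interface;
    otherwise convert the directive to CMIP and send it to the
    entity agent on the remote system."""
    if target == local_node:
        return entities[directive["entity"]].handle(directive)
    return send_remote(target, encode_cmip(directive))

entities = {"routing": Entity("routing")}
sent = []
send = lambda node, msg: sent.append((node, msg)) or "forwarded"

print(dispatch({"entity": "routing", "action": "show"},
               "nodeA", "nodeA", entities, send))  # show on local routing
print(dispatch({"entity": "routing", "action": "show"},
               "nodeB", "nodeA", entities, send))  # forwarded
```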
The management information and operations that pass between directors and entities are:
The management access relationship between a parent entity and a child entity indicates that the parent is capable of passing directives to its child entities. A parent entity is an entity that has created another entity (the child entity); a child entity is a lower class of entity that receives directives forwarded from its parent entity.
The DECnet-Plus director does not directly access the agents of local entities. To access a local entity's management interface, the director accesses the appropriate global node entity that is the parent of the target local entity. That node entity forwards the directive down the entity hierarchy; the forwarding process continues from higher to lower entities until the correct target entity receives the directive. The agent access point is the place of connection between a director or a parent entity and an agent.
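The forwarding process down the entity hierarchy can be modeled as a simple tree walk. The entity names below are hypothetical:

```python
class ManagedEntity:
    """Toy entity tree: a directive addressed by a path of entity
    names is forwarded parent-to-child until the target is reached."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = {c.name: c for c in children}

    def forward(self, path, directive):
        if not path:  # this entity is the target
            return f"{directive} handled by {self.name}"
        # Pass the directive to the named child and recurse downward.
        return self.children[path[0]].forward(path[1:], directive)

# Hypothetical hierarchy: node -> routing -> circuit csma-0
node = ManagedEntity("node", [
    ManagedEntity("routing", [ManagedEntity("circuit csma-0")]),
])
print(node.forward(["routing", "circuit csma-0"], "show counters"))
# show counters handled by circuit csma-0
```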
The DECnet-Plus network management protocol is based on the DNA Common Management Information Protocol (DNA CMIP) draft standard for network management operations. CMIP is used for encoding network management operations that can be performed on an entity. CMIP permits the exchange of information between a director and an agent. CMIP supersedes the Phase IV Network Information and Control Exchange (NICE) protocol.
DECnet-Plus software supports remote management of Phase IV nodes by continuing to support Phase IV NCP and the Phase IV network management protocol (NICE).
However, Phase IV applications that have been written to create logical links to the Phase IV Network Management Listener (NML) and then parse the returned NICE protocol messages are not supported for managing DECnet-Plus for OpenVMS systems. To run on a network composed of DECnet-Plus systems, those applications must be rewritten to use DECnet-Plus network management software and protocols. For example, the application may be rewritten to use the DECnet-Plus CML callable interface into network management. Also, Phase IV applications that use NCP to configure the application need to be converted to use NCL.
DECnet-Plus software supports remote management of DECnet Phase IV nodes by use of NCP.EXE. This utility supports a significant range of NCP commands. It is not designed as a replacement for NCL.
With the Maintenance Operation Protocol (MOP), you can communicate with systems that are not fully operational, for example DECnet-Plus intermediate systems. Low-level maintenance functions of MOP run directly on top of data links, such as ISO 8802-2. MOP functions are often placed in ROMs on link controllers. MOP operations include:
DECnet-Plus for OpenVMS provides enhanced support for concurrent downline loads.
The net$configure.com configuration procedure offers you several options for configuring DECnet-Plus:
$ @sys$manager:net$configure
$ @sys$manager:net$configure basic
$ @sys$manager:net$configure advanced
You can also use net$configure.com to reconfigure all or some of the DECnet-Plus for OpenVMS system. Rerunning the configuration procedure modifies or replaces relevant initialization script files but does not affect running systems. You then execute the modified initialization script files to effect these changes.
The initialization scripts create and enable all required entities. Each entity is initialized through execution of a separate NCL script file. Using NCL scripts to initialize DECnet-Plus for OpenVMS systems replaces the Phase IV requirement of establishing a DECnet permanent configuration database at each node. Remote node information resides in either a local or distributed DECdns namespace.
For further information, refer to the DECnet-Plus Network Management guide.
This chapter introduces basic X.25 networking concepts and how they fit into the DECnet standards for system interconnection as illustrated in Figure 2-2. Refer to the X.25 for OpenVMS Management Guide for specific information on managing and monitoring an X.25 system as well as descriptions of specific parts of an X.25 system (call handling, templates, application filters, server and relay clients, and addressing).
X.25 is a recommendation made by the Comité Consultatif International Télégraphique et Téléphonique (CCITT). The CCITT is a United Nations committee that makes recommendations on data communications services. The CCITT has made several recommendations to ensure that different networks are compatible in their operation. Many of its recommendations have been implemented widely by major network providers.
The X.25 recommendation specifies the interface between data terminal equipment (DTE) and data circuit-terminating equipment (DCE) for equipment operating in the packet mode on public data networks.
Packet switching is a method of sending data across a network. In this method, data is divided into blocks, called packets, which are then sent across a network. Networks using this method are referred to as packet switching data networks (PSDNs) or X.25 networks.
Figure 4-1 shows an example of the equipment involved in sending data across a PSDN. PSDNs are made up of packet switching exchanges (PSEs) and high-speed links that connect the PSEs. Each PSE contains data circuit-terminating equipment (DCE). The DCE is the point where all data enters and leaves a PSDN.
The user's computer that sends data to the DCE, and receives data from the DCE, is known as the data terminal equipment (DTE). Outside a PSDN, packets travel between the DTE and DCE. Inside a PSDN, packets travel between PSEs.
Figure 4-1 Packet Switching Equipment
A PSDN is either owned by a country's Postal, Telegraph, and Telephone (PTT) authority, or it is privately run. Public PSDNs can be used by anyone, on payment of connection fees and regular bills. Private PSDNs serve a single body, such as a large corporation.
Public and private PSDNs can be interconnected. It is, therefore, possible to pass data packets from one DTE to another no matter where in the world the DTEs are located.
The PSDN sends each packet to its destination by the best available route at the time. The packets that make up a data message may take different routes to reach the same destination; this will occur if certain routes are busy or if there is a line failure within the PSDN. Figure 4-2 shows an example of packets traveling by different routes across a PSDN.
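Packetizing a message and reassembling it despite route-dependent arrival order can be sketched as follows. The packet size and message are arbitrary, and real X.25 packets carry considerably more header information than a bare sequence number:

```python
import random

def packetize(message: bytes, size: int):
    """Split a message into (sequence number, data) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sequence numbers let the receiver restore the original order,
    even when packets arrive out of order after taking different
    routes through the PSDN."""
    return b"".join(data for _, data in sorted(packets))

msg = b"packets may travel by different routes"
packets = packetize(msg, 8)
random.shuffle(packets)            # simulate route-dependent arrival order
print(reassemble(packets) == msg)  # True
```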
INTRO_PROFILE_005.HTML OSSG Documentation 2-DEC-1996 12:54:25.00
Copyright © Digital Equipment Corporation 1996. All Rights Reserved.