Because you can run TNT$UTILITY.COM yourself, and because the OpenVMS Management Station server also updates the database, the TNT$PRINTER_RECON_INTERVAL_MIN logical prevents the database from being updated more frequently than is actually needed.
If you want to change the defaults for one of these logicals, define
the logical on all nodes on which the OpenVMS Management Station server
is running.
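For example, a minimal sketch of changing the reconciliation interval follows; the value shown, and the assumption that the server reads it in minutes, are illustrative only, so check the server documentation for the supported values:

$ ! Assumed example: set the printer reconciliation interval to 120 minutes
$ DEFINE/SYSTEM/EXECUTIVE_MODE TNT$PRINTER_RECON_INTERVAL_MIN 120

Repeat the definition on every node that runs the server, or add it to a common startup procedure.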
E.1.7.2 Do You Need to Run TNT$UTILITY.COM Manually?
If you use OpenVMS Management Station to make all of the changes to your printer configuration, the configuration files are updated immediately to reflect the changes, and you probably do not need to run TNT$UTILITY.COM manually. TNT$UTILITY.COM runs at periodic intervals as a background thread to make sure that the database is kept up to date.
However, if you or someone else uses DCL to make a change (for example, using the DELETE/QUEUE command to delete a queue), the configuration files will not be synchronized. In this case, the OpenVMS Management Station client advises you to run TNT$UTILITY.COM to resynchronize the database.
Run the following procedure on one node in the cluster to make the database match your system:
$ @SYS$STARTUP:TNT$UTILITY.COM UPDATE PRINTERS
For example, if you or someone else used DCL to delete a queue, you
need to delete that queue from the database. TNT$UTILITY.COM assumes
that your system is set up and running the way that you want it to, so
you should fix any problems before you run TNT$UTILITY.COM.
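For example, assuming a queue named MY_QUEUE (a hypothetical name) was deleted outside of OpenVMS Management Station, the sequence might look like this:

$ STOP/QUEUE/NEXT MY_QUEUE                      ! stop the queue first
$ DELETE/QUEUE MY_QUEUE                         ! change made with DCL
$ @SYS$STARTUP:TNT$UTILITY.COM UPDATE PRINTERS  ! resynchronize the database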
E.1.7.3 Are There Any Requirements for Running TNT$UTILITY.COM?
You need the SYSNAM privilege to run TNT$UTILITY.COM.
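For example, you can verify that the privilege is enabled, and turn it on if it is authorized for your account, with standard DCL commands:

$ SHOW PROCESS/PRIVILEGES          ! check whether SYSNAM is enabled
$ SET PROCESS/PRIVILEGES=SYSNAM    ! enable it if it is authorized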
TNT$UTILITY.COM connects to the OpenVMS Management Station server on the current OpenVMS system to determine device and queue information. Therefore, the OpenVMS Management Station server must be running on the node where you run TNT$UTILITY.COM.
The OpenVMS Management Station server then connects to the other OpenVMS Management Station servers in the OpenVMS Cluster to determine device and queue information. It is generally a good idea to keep the OpenVMS Management Station server running on the other nodes in an OpenVMS Cluster so that the database stays current.
However, if the OpenVMS Management Station server is not able to connect to the server on a given node, it uses the known information about that node from the database. That is, in the absence of a valid connection to that node, the information in the database is assumed to be correct.
E.1.8 Enabling Disk Quotas
Before installing OpenVMS Management Station, you might have disabled disk quotas on the system disk. If so, reenable the quotas and then rebuild the quota file to bring the quota information up to date by entering the following commands:
$ RUN SYS$SYSTEM:DISKQUOTA
DISKQUOTA> ENABLE
DISKQUOTA> REBUILD
DISKQUOTA> EXIT
Digital TCP/IP Services for OpenVMS Version 3.2 or higher is the only supported TCP/IP stack. Additional stacks have not been tested. However, TCP/IP stacks that are 100% compliant with the QIO interface for TCP/IP Services for OpenVMS should also work. (Contact your TCP/IP vendor for additional information and support issues.)
For the best chance of success, check the following:
If you encounter a problem while using OpenVMS Management Station, please report it to Digital. Depending on the nature of the problem and the type of support you have, you can take one of the following actions:
During the OpenVMS Version 7.1 installation or upgrade procedure, you selected the OpenVMS Management Station client software files to be installed on your OpenVMS system disk (or you added them later using the DCL command PRODUCT RECONFIGURE VMS). After you have prepared your OpenVMS system to run the server software, you must next prepare your PC to run the client software.
This section includes the following information:
Your PC requires 8 MB of random-access memory (RAM) and 11.5 MB of free
disk space to install the OpenVMS Management Station client software.
E.2.2 Required Software
Table E-1 describes the software that must be installed on your PC before installing OpenVMS Management Station.
Prerequisite Products | Purpose |
---|---|
Microsoft Windows NT Version 3.51, Microsoft Windows 95, Microsoft Windows Version 3.1, or Microsoft Windows for Workgroups Version 3.11 | Operating system |

Optional Products | Purpose |
---|---|
PATHWORKS Version 5.1 for DOS and Windows client software | PATHWORKS integration, DECnet support |
ManageWORKS Workgroup Administrator, Version 2.2 | ManageWORKS integration |
Your TCP/IP stack | IP connections |
PATHWORKS for Windows 95 and any version of Windows NT prior to Version
3.51 are not officially supported.
E.2.3 TeamLinks Version 1.0 Is Not Supported
If Version 1.0 of TeamLinks is installed on your PC, the OpenVMS Management Station PC installation program will ask whether to update the XTI library component of TeamLinks.
If you answer No, the OpenVMS Management Station installation terminates. However, if you allow OpenVMS Management Station to update the XTI library, Version 1.0 of TeamLinks will no longer work.
The version of the XTI library included with Version 1.0 of TeamLinks does not allow TCP/IP connections from your PC and is not supported in this version of OpenVMS Management Station. If you want to use both TCP/IP connections and TeamLinks, you must upgrade to a later version of TeamLinks.
E.2.4 Creating the Installation Media
Create the PC installation media using the following procedure:
Note: You need six formatted 3-1/2-inch, high-density floppy disks.
C:\> cd temp-dir
C:\temp-dir> ftp
ftp> open node
Connected to node
Username: username
Password: password
User logged in.
ftp> cd sys$common:[tnt.client]
ftp> type bin
ftp> mget *.*
C:\> NFT COPY /BLOCK node"username password"::SYS$COMMON:[TNT.CLIENT]*.* \temp-dir
C:\> \temp-dir\DISKIMAG \temp-dir\TNTCLID1.IMG A:
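Repeat the DISKIMAG command for each remaining floppy disk. The image names shown here are an assumption based on the six-disk requirement and the TNTCLID1.IMG naming pattern; use the names of the files you actually transferred:

C:\> \temp-dir\DISKIMAG \temp-dir\TNTCLID2.IMG A: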
This section provides the following information:
The installation procedure allows you to select the installation directory, and suggests \VMSTNT as the default.
Do not install OpenVMS Management Station into the PATHWORKS or
ManageWORKS Workgroup Administrator directories. If you do want to
configure PATHWORKS or ManageWORKS Workgroup Administrator to load
OpenVMS Management Station, see Section E.2.7.
E.2.5.2 Installation Procedure
Follow these steps to install the OpenVMS Management Station client:
A:\SETUP.EXE
If an error occurs during installation, you will receive an error message describing the problem. This information can help you determine the cause of the problem. An error can occur during the installation if one or more of the following conditions exist:
The following files (with their directory names) are created on your PC after the OpenVMS Management Station client software is installed:
OpenVMS Management Station allows you to use both the TCP/IP and DECnet transports to establish connections.
You can have a mix of DECnet and TCP/IP connections, all DECnet connections, or all TCP/IP connections. OpenVMS Management Station does not have any DECnet dependencies and can run in a TCP/IP-only environment. Note that Windows NT and Windows 95 support TCP/IP connections only.
You do need to make sure that your PC can connect to the primary-server
systems, as described in the following sections. OpenVMS Management Station connects
your PC to the primary-server system and then routes management
operations to the target systems.
E.2.6.1 Defining TCP/IP Nodes
If you select the TCP/IP transport, your hosts file or name server must be able to resolve the IP name or address of all primary-server systems. If you can successfully ping the primary-server systems from your PC, this condition is met.
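For example, at the MS-DOS prompt (where node is the name of a primary-server system):

C:\> ping node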
E.2.6.2 DECnet Support
If you want to use DECnet connections, PATHWORKS Version 5.1 for DOS and Windows must be installed somewhere on the PC and listed in the PC's path statement.
Define the DECnet node names and network addresses of primary-server
OpenVMS systems that you want to manage.
E.2.6.3 Procedure for Defining DECnet Nodes
Follow these steps to define DECnet nodes:
Step | Action |
---|---|
1 | At the MS-DOS prompt, invoke the NCP utility as follows: C:\> NCP |
2 | At the NCP> prompt, type DEFINE NODE addrs NAME name, where addrs is the DECnet address and name is the DECnet node name. For example: NCP> DEFINE NODE 19.208 NAME ISTAR |
3 | At the NCP> prompt, type EXIT and press Return to exit the utility. |
You no longer need the PATHWORKS client software to run OpenVMS Management Station. OpenVMS Management Station installs into its own directory and includes all of the ManageWORKS components it needs to run.
If you happen to have PATHWORKS or ManageWORKS Workgroup Administrator installed, both will continue to function independently of OpenVMS Management Station.
You can configure PATHWORKS Version 5.1 for DOS and Windows or the ManageWORKS Workgroup Administrator Version 2.2 to load the OpenVMS Management Station software if you want to.
To do this, run the ManageWORKS Setup application and use the Browse
feature to select the file ARGUS.MMI in the VMSTNT directory. Refer to
the ManageWORKS online help for step-by-step instructions.
E.2.8 POLYCENTER Manager on NetView for Windows NT, Version 3.0
You can launch OpenVMS Management Station from POLYCENTER Manager on NetView for Windows NT, Version 3.0. To do this, copy the file VMSTNT.REG from the temporary directory (or disk 1 if you requested client media) to the following directory:
\usr\ov\registration\c\
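For example, assuming the file is still in \temp-dir:

C:\> copy \temp-dir\VMSTNT.REG \usr\ov\registration\c\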
Note that NetView can be installed only on an NTFS partition.
E.3 Getting Started with OpenVMS Management Station
All information about getting started, setting up, and using
OpenVMS Management Station is contained in online help and the OpenVMS Management Station
Overview and Release Notes.
E.3.1 Accessing Online Help
Follow these steps to access the OpenVMS Management Station online help:
Glossary

This glossary defines key terms in the context of an OpenVMS Alpha computing environment.
boot, bootstrap: The process of loading system
software into a processor's main memory.
boot server: An Alpha computer that is part of a local
area OpenVMS Cluster. The boot server is a combination of a MOP server
and a disk server for the satellite system disk. See also
satellite node.
CI only OpenVMS Cluster: A computer system consisting
of a number of Alpha computers. It uses only the computer interconnect,
or CI, to communicate with other Alpha computers in the cluster. These
computers share a single file system.
CI: A type of I/O subsystem. It links computers to
each other and to HSx devices (for example, an HSC or HSD).
device name: The name used to identify a device on the system. A device name indicates the device code, controller designation, and unit number. For example, in the device name DKA0, DK is the device code, A is the controller designation, and 0 is the unit number.
disk server: A computer that is part of a local area
OpenVMS Cluster. This computer provides an access path to CI, DSSI, and
locally connected disks for other computers that do not have a direct
connection.
HSx device: A self-contained, intelligent,
mass storage subsystem (for example, an HSC or HSD) that lets computers
in an OpenVMS Cluster environment share disks.
HSx drive: Any disk or tape drive connected
to an HSx device (for example, an HSC or HSD). A system disk
on an HSx drive can be shared by several computers in an
OpenVMS Cluster environment.
InfoServer: A general-purpose disk storage server that allows you to use the operating system CD-ROM to install the operating system on remote client systems connected to the same local area network (LAN).
local area OpenVMS Cluster: A configuration consisting
of one or more computers that act as a MOP server and disk server, and
a number of low-end computers that act as satellite nodes. The local
area network (LAN) connects all of the computers. These computers share
a single file system.
local drive: A drive, such as an RRD42 CD-ROM drive, that is connected directly to an Alpha computer. If you have a standalone Alpha computer, it is likely that all drives connected to the system are local drives.
media: Any packaging agent capable of storing computer software (for example, CD-ROMs, magnetic tapes, floppy diskettes, disk packs, and tape cartridges).
mixed interconnect OpenVMS Cluster: A computer system
consisting of a number of computers. It uses CI, Ethernet, and DSSI
adapters to communicate with other computers in the cluster.
MOP server: A computer system running DECnet software
that downline loads OpenVMS Cluster satellites using the DECnet
maintenance operations protocol.
OpenVMS Cluster environment: A computer system
consisting of a number of Alpha and VAX computers. There are four types
of OpenVMS Cluster environments: CI only, DSSI only, local area, and
mixed-interconnect.
satellite node: A computer that is part of a local
area OpenVMS Cluster. A satellite node is downline loaded from a MOP
server and then boots remotely from the system disk served by a disk
server in the local area OpenVMS Cluster.
scratch disk: A blank disk or a disk with files you no
longer need.
source drive: The drive that holds the distribution
kit during an upgrade or installation, or the drive from which you
restore files to a target disk.
standalone system: A computer system with only one
Alpha computer.
system disk: The disk that contains or will contain
the OpenVMS Alpha operating system.
target drive: The drive that holds the system disk
during an upgrade or installation, or the drive you designate when
backing up the system disk.
UETP: User Environment Test Package. A software package that tests all the standard peripheral devices on your system, various commands and operating system functions, the system's multiuser capability, DECnet software, and the OpenVMS Cluster environment.