⟦090b5164b⟧ Wang Wps File
Length: 96803 (0x17a23)
Types: Wang Wps File
Notes: AIR CANADA DOC III CH. 6
Names: »1256V «
Derivation
└─⟦0f7d23a07⟧ Bits:30006262 8" Wang WCS floppy, CR 1007V
└─ ⟦this⟧ »1256V «
CHAPTER 6
DOCUMENT III TECHNICAL PROPOSAL
Oct. 8, 1981
                                                              Page

6. SOFTWARE CHARACTERISTICS                                      3
6.1     Introduction                                             4
6.2     DAMOS-CR80D Standard System Software                     5
6.3     BTS Environment                                         16
6.4     Data Flow and Network Tables                            28
6.4.1   Network Tables                                          28
6.4.1.1 Global Network Tables                                   28
6.4.1.2 Local Network Tables                                    31
6.4.2   Network Tables Maintenance and Usage                    32
6.4.3   Session Data Flow                                       34
6.4.3.1 Terminal User Logs On                                   43
6.4.3.2 Terminal User Logs on to Univac Host Application        45
6.4.3.3 Terminal User Logs on to IBM Host Application           47
6.4.3.4 IBM Application Initiates Session with Terminal         49
6.4.4   Data Flow through the Transport Network                 50
6.4.4.1 Transport Level                                         50
6.4.4.2 The Transport Protocol                                  53
6.4.4.2.1 Types of Conversation Messages                        56
6.4.5   Subnetwork Access Services                              61
6.4.5.1 Data Transmission Environment                           62
6.4.5.1.1 DTrE Facilities                                       62
6.4.5.1.2 Routing of Virtual Calls                              65
6.4.6   Data Link Environment                                   65
6.5     Nodal Switch Software Package                           66
6.5.1   X25 Levels 1, 2, 3                                      66
6.5.2   Resource Management Software                            68
6.5.2.1 VCBM                                                    68
6.5.2.2 VCSWH                                                   68
6.5.2.3 VCM                                                     71
6.5.3   Transport Station                                       72
6.5.4   Connection Allocation Services                          72
6.5.5   Gateway Software                                        73
6.5.5.1 ACNC Interfaces                                         75
6.5.5.2 Network Interface                                       75
6.5.6   Modules of the Gateway                                  76
6.5.6.1 The ICC Module                                          76
6.5.6.2 The Communications Control Interface Module             76
6.5.6.3 The High Level Service Module                           76
6.5.6.4 The Supervisory Module                                  76
6.5.6.5 The Recovery Module
6.6     Terminal Access Software Package                        76
6.6.1   Transport Station User                                  76
6.6.2   Device Emulators                                        77
6.6.3   Device Handlers                                         78
6.6.4   TAS Management                                          78
6.6.4.1 Configuration Manager and Status Reporter               78
6.6.4.2 Terminal Session Manager                                79
6.6.4.3 External Resource Manager                               80
6.6.4.4 Node Logical Unit Services                              81
6.6.4.5 Node Session Manager                                    81
6.7     Host Access Software Package                            84
6.7.1   IBM Application Session Interface                       84
6.7.2   IBM Channel Handler                                     84
6.7.3   PUT Emulator                                            85
6.7.4   Host Session Manager                                    85
6.8     Network Control Software Package                        86
6.8.1   Operator Command Interpreter                            86
6.8.2   Network Configuration Service                           86
6.8.3   User Services                                           88
6.8.4   Session Services                                        88
6.8.5   CDRM Emulator                                           88
6.8.6   Packet Network Manager                                  89
6.8.7   Event Display Handler                                   89
6.8.8   Graphic Status Display                                  89
6.8.9   Data Unit Trace                                         89
6.8.10  External Resource Statistics                            90
6.8.11  Routing Principles and Implementation                   91
6.8.11.1 Software Operations                                    91
6.8.11.2 Route Determination                                    92
6.9     Electronic Mail Software Package                        95
6.9.1   Major EMS Components                                    97
6.9.2   Protected Message Switching                             98
6.9.2.1 FIKS Definition and System Elements                     98
6.9.2.2 System Overview & Functional Summary                   101
6.9.2.3 FIKS Nodal Network                                     101
6.9.2.4 Message Users                                          102
6.9.2.7 FIKS Generic Elements                                  104
6.9.2.8 Traffic Security                                       104
6.9.2.9 Message Categories, Code and Formats                   105
6.9.2.10 Message Entry, Storage and Distribution               108
6.9.2.11 Message Routing and Data Switching                    112
6.9.2.12 System Supervision, Control and Maintenance           113
6. SOFTWARE CHARACTERISTICS
This section attempts to present a comprehensive view of the software structure supporting the proposed ACDN. To assist in understanding the implied software linkages, certain typical and relevant data flow descriptions are presented.

The primary design objectives for the networking software provided by Christian Rovsing have been:

- a realistic adherence to the proposed architecture for Open Systems Interconnection,

- a well conceived strategy to exploit the inherent architectural strength of the CR80 environment (this is illustrated by the software components BTS & TCS),

- exploitation of the program development environment supported by PASCAL.

Over and above these considerations, certain aspects of software structuring have been adopted that facilitate understanding in terms of mainframe communication architectures like IBM's SNA and Univac's DCA.
6.1 Introduction
The software proposed for the ACDN is anchored on existing networking solutions from Christian Rovsing, and is supported by the system software for the CR80, namely DAMOS.

Section 6.2 introduces the functions and facilities of DAMOS. This section remains unchanged from the earlier version dated October 8th, 1981.

Section 6.3 focuses on a key component designated as Basic Transport Service (BTS), which is a part of the DAMOS Kernel. This section illustrates the practical use of the BTS.

Section 6.4 introduces data flow aspects and table structures. This section provides a first level bridge between the concepts introduced in Chapter 3 and the functional software descriptions that follow in 6.5 to 6.9.

Sections 6.5 to 6.8 describe the software complexion and functions of NSS, TAS, HAS, NCS and the Network Management Subsystem.

Section 6.9 presents the electronic mail system oriented software capabilities.
It is also possible for device handlers to use BTS.
In this way, data may be switched through an intermediate node, directly from one communication device to another.
*****************************
When a user process A wants to communicate with user process B, it:

1) issues a "connect" to BTS, specifying the global identification of the other process. BTS then

2) notifies process B that it has been called by A.

User process B may then either:

3a) accept to communicate with A, or
3b) reject to communicate with A,

and BTS notifies process A either:

4a) that the communication has been established, or
4b) that the communication could not be established.
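The handshake above can be sketched in a few lines. This is a minimal, hypothetical model for illustration only; the class and method names (BTS, connect, accept, reject) are assumptions, not the actual DAMOS interface.

```python
# Illustrative sketch of the BTS connect handshake (hypothetical names).
class BTS:
    def __init__(self):
        self.pending = {}          # callee id -> caller id, awaiting 3a/3b
        self.connections = set()   # established (caller, callee) pairs
        self.notifications = []    # (process id, event) delivered by BTS

    def connect(self, caller, callee):
        # 1) caller issues a "connect" with the global id of the callee
        self.pending[callee] = caller
        # 2) BTS notifies the callee that it has been called
        self.notifications.append((callee, f"called by {caller}"))

    def accept(self, callee):
        # 3a) callee accepts; 4a) BTS notifies the caller
        caller = self.pending.pop(callee)
        self.connections.add((caller, callee))
        self.notifications.append((caller, "established"))

    def reject(self, callee):
        # 3b) callee rejects; 4b) BTS notifies the caller
        caller = self.pending.pop(callee)
        self.notifications.append((caller, "not established"))

bts = BTS()
bts.connect("A", "B")   # steps 1-2
bts.accept("B")         # steps 3a and 4a
```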
When user process A wants to send data via the connection, it:

1) issues a "send request", giving BTS a pointer to the data, specifying the address and the length of the data.

When user process B is ready to receive data via the connection, it:

2) issues a "receive request", giving BTS a pointer to an empty data buffer, specifying the address and length.

BTS then initiates the data transfer, and when the transfer is completed, it:

3) notifies A that data has been sent,
4) notifies B that data has been received.

User process B may simultaneously send data to user process A via the connection (it is fully duplex). The "receive request" from B may be issued before the "send request" from A. Any number of "send requests" and "receive requests" may be outstanding on a connection; they will be served by BTS as soon as a match occurs.
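The request-matching rule described above can be sketched as two queues that BTS drains whenever a send meets a receive. All names here are illustrative assumptions, not the real DAMOS primitives.

```python
# Sketch of BTS request matching on one direction of a connection
# (hypothetical names). Requests may arrive in either order; any
# number may be outstanding, and BTS serves them on a match.
from collections import deque

class Connection:
    def __init__(self):
        self.sends = deque()     # outstanding "send requests" (data)
        self.receives = deque()  # outstanding "receive requests" (buffers)
        self.completed = []      # filled buffers, receiver notified

    def send_request(self, data):
        self.sends.append(data)
        self._match()

    def receive_request(self, buffer):
        self.receives.append(buffer)
        self._match()

    def _match(self):
        while self.sends and self.receives:
            data = self.sends.popleft()
            buffer = self.receives.popleft()
            buffer.extend(data)            # BTS moves the data
            self.completed.append(buffer)  # then notifies both parties

conn = Connection()
conn.receive_request(bytearray())   # B's receive may be issued first
conn.send_request(b"hello")         # A's send completes the match
```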
If user process B has many connections on which it may receive data, it may not be able to allocate an input buffer for each. It may then:

1) give a data area to BTS to be used as a common buffer pool for several connections, and
2) specify that buffers from the pool shall be used for input on the connection.

When user process A

3) issues a "send request",

BTS tries to allocate buffers from the buffer pool, initiates the transfer, and when it is complete:

4) notifies A that data has been sent,
5) notifies B that data has been received, specifying the buffers containing the input data.
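The common buffer pool can be sketched as follows; the class, the fixed buffer size and the allocation policy are assumptions made for illustration.

```python
# Sketch of a common input buffer pool shared by several connections
# (hypothetical names and policy).
class BufferPool:
    def __init__(self, count, size):
        # the data area given to BTS, carved into fixed-size buffers
        self.free = [bytearray(size) for _ in range(count)]

    def allocate(self, length):
        # take as many pool buffers as the incoming data needs
        needed, buffers = length, []
        while needed > 0 and self.free:
            buf = self.free.pop()
            buffers.append(buf)
            needed -= len(buf)
        return buffers if needed <= 0 else None   # None: pool exhausted

pool = BufferPool(count=4, size=128)
bufs = pool.allocate(200)    # a 200-byte message needs two 128-byte buffers
```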
BTS is an integrated part of the DAMOS kernel.

1) Within one CR80 computer, the DMA device in the MAP is used to move data.

2) On connections between computers connected by a supra bus, data is moved to/from the CR80 memory by the Supra Terminal Interface (STI), interfacing to the supra bus.

3) On connections via communication lines, data is first moved from the user process to the memory of the Line Termination Unit (LTU) by the DMA device in the MAP. When it has been transmitted to the memory of the receiving LTU, it is moved into the memory of the user process by the DMA device in the MAP.

The CPU is thus never loaded with movement of data.
6.4 DATA FLOW AND NETWORK TABLES
The scope of this section is to introduce typical data flows in the ACDN and to present the generic table structures incorporated in the NCC, the NMH and the nodes. The primary objective is to illustrate the linkages between session identifiers, transport service identifiers, virtual circuit identifiers and the end users.

The description presented herein is based on a current implementation for an IBM and Univac host environment.
6.4.1 NETWORK TABLES

Tables reflecting network configuration and status are maintained in the NMH, the NCC and the NODEs.

Global Network Tables are located in the NMH and the NCC, while Local Network Tables are located in the NODEs.
6.4.1.1 Global Network Tables
The Global Network Tables (NT in NCC) reflect the entire network configuration and status.

NETWORK TABLES

An entry in the Main Table always corresponds to an entry in the Quick Reference Table, while the entries in the Session Table are network usage dependent.

NCC initialization routines or NCC operators create entries in the Main Table by using an identifier transformation algorithm to find empty table elements. The entries are created within node specific boundaries, and the corresponding elements in the Quick Reference Table are also found within node specific boundaries, where the first free entry is chosen.

Global Identifiers (q), which are used to locate elements in all Network Tables in the network, are simply the element numbers in the Quick Reference Table. Global Data is pointed to by the pointer (d), but the data entries are not shown.
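The lookup chain described above can be sketched as follows: a Global Identifier is simply an index into the Quick Reference Table, whose entry points at a Main Table element. The class, the probing scheme and the field names are assumptions for illustration only.

```python
# Illustrative sketch of the Global Network Table structures
# (hypothetical names; the real identifier transformation algorithm
# is not specified in the text).
class GlobalNetworkTables:
    def __init__(self, size):
        self.main = [None] * size    # Main Table
        self.quick = [None] * size   # Quick Reference Table

    def create(self, name, node_lo, node_hi):
        # identifier transformation: probe for an empty Main Table
        # element within the node specific boundaries
        span = node_hi - node_lo
        for probe in range(span):
            m = node_lo + (hash(name) + probe) % span
            if self.main[m] is None:
                self.main[m] = name
                break
        # first free Quick Reference entry within the same boundaries;
        # its element number is the Global Identifier (q)
        g = next(i for i in range(node_lo, node_hi) if self.quick[i] is None)
        self.quick[g] = m
        return g

    def lookup(self, g):
        return self.main[self.quick[g]]

nt = GlobalNetworkTables(32)
g = nt.create("TERMINAL-17", node_lo=8, node_hi=16)
```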
6.4.1.2 Local Network Tables
The Local Network Tables (NT in NODE) reflect
configuration and status partially with respect to
number of units and amount of information.
Entries in the Local Main Table are created on basis
of messages from NCC. The Global Identifiers (which
are within a node specific range) are used as Quick
Reference to the obligate corresponding Quick
Reference Table elements. These elements are made
pointing to the created Main Table elements.
HOST and TERMINAL elements may point to Session Table
elements or linked lists of Session Table elements
(Session Control Blocks).
TERMINAL elements may point (t) to Transaction
elements which are not shown.
6.4.2 Network Tables Maintenance and Usage

In the NCC the Network Tables are maintained by Network Configuration and used by both User and Session Services.

In the NODE the tables are maintained by NODE Services and used by Physical and Logical Services and by Session Managers.
The host related NODE tables mainly reflect the host's view of the network, while the terminal related tables mainly reflect the real network, so the LINE and CLUSTER elements in a given sub-tree in the host related part may differ from the terminal related LINE and CLUSTER elements, both with respect to information held and with respect to number of elements.

The number of NODE and TERMINAL elements in the host related part is equal to the number of NODEs and TERMINALs in the network.

As a HOST can maintain sessions with many TERMINALs, the HOST element has a pointer to a linked list of Session Table elements (Session Control Blocks).
As a TERMINAL can have several sessions involved in a
transaction the TERMINAL element has a pointer to both
a linked list of Session Table elements and a
Transaction element.
The Session Table and Transaction Table are both
maintained by Logical Services and Session Managers.
6.4.3 Session Data Flow

This section describes briefly how a session path between two end users is established, and of which entities it is composed. The framework within which the description is held is shown overleaf.

Legend:

PW:  X75 levels 1-3
TS:  Transport Station
UDS: User Data Session

The internal structure of the User Data Session (UDS) is a task structure:
where
ASI: (Host) Application Session Interface task
CAS: Connection and Allocation Services task
EM: (Terminal) Emulator task
TSU: Transport Station User task.
The ASI and EM are Virtual Protocol Units (VPUs).

CAS exists in one incarnation in each UDS and is created at initialisation time. Having been created, CAS connects to TS.

The remaining tasks exist in principle in one incarnation per session. These tasks are dynamically created/deleted during session establishment/termination.
Three cases of session path establishment are covered:

1: Terminal initiated session to a Univac host application.
2: Terminal initiated session to an IBM host application.
3: IBM host application initiated session to a terminal.
In each case, a path like the one sketched below is established.

The resources allocated to a session are composed of:

- the session paths
- NCC session services
- NODE session manager (at host)
- NODE session manager (at terminal)

Session control blocks are used for accounting, control and recovery purposes. Each triple is given an internal global identifier, thereby providing a linkage between the control blocks.

Regarding the established session path, each of the entities it is composed of has an associated control block. Two associated parts make a connection. The entity connecting PN and PN is a Virtual Circuit.
Thus, the resources which are allocated to a session path are divided into:

- task instances (ASI, TSU, TSU, EM)
- connections, each composed of two associated parts
- virtual circuits

No global address defines the host application and the terminal. The session path is identified by a series of relative addresses.
On outbound flow - i.e. the flow from host application to terminal - the driver maps the destination address into a path to ASI, thereby obtaining a handle to the session path.

On inbound flow - i.e. the flow from terminal to host - the driver maps the terminal identification into a path to EM, thereby obtaining a handle to the session path.

Now the various phases in session path establishment are to be outlined.
6.4.3.1 Terminal User Logs On

1. User logs on at a terminal.

2. The terminal network driver does not have any path to a UDS for the terminal, hence the logon message is routed to node services.

3. Node TAS service does some preliminary checks, removes any device dependencies from the logon message, and passes the logon on to User Services in the NCC.

4. User Services check user profile, passwords, etc., and send the logon to Node HAS services.

5. In case the user logged on to a Univac host application, the session path is established at this time. In case the user logged on to an IBM host application, the establishment of the session path is triggered by the arrival of a BIND from the host application.

6. The Univac case is covered first.
6.4.3.2 Terminal User Logs on to Univac Host Application

1. Node HAS service passes the logon to the host node session manager.

2. The session manager asks CAS in the UDS to create a session path.

3. CAS creates an ASI and a TSU task incarnation.

4. ASI connects to the driver and afterwards to the TSU task.

5. The TSU task connects to TS and asks TS in the host node to connect to TS in the destination node.

6. TSU in the host node sends a command to CAS in the UDS in the terminal node. This command is sent on the TS-CAS connection established during system initialisation.

7. CAS in the terminal node creates an EM and a TSU task incarnation.

8. EM connects to the terminal network driver and afterwards to the TSU task.

9. The TSU task connects to TS in the terminal node.

10. A session path set up complete message is sent from CAS in the terminal node UDS via CAS in the host node UDS to the host node session manager.

11. The host node session manager updates the terminal node session manager and NCC session services.

12. The host node session manager releases the logon.

6.4.3.3 Terminal User Logs on to IBM Host Application
1. As in the Univac case, the logon eventually reaches HAS services.

2. The logon is sent to the host.

3. The host application sends a BIND, which is received by the driver. The driver does not have any path to a UDS for this BIND, hence it is routed to HAS services.

4. Now the situation is almost similar to the Univac case. When a session path has been established, the HAS session manager releases the BIND and sends it to ASI via CAS. From ASI the BIND is sent along the session path to its destination, the EM in the terminal node.
6.4.3.4 IBM Application Initiates Session with Terminal
1. In this case an "unsolicited" BIND is received by the driver. The driver does not have any path to a UDS for this BIND, hence it is routed to HAS services.

2. Now the situation is the same as in the case where a BIND was initiated by a user logging on to an IBM host application.

This concludes the outlines of session path establishment.
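All three cases above share one dispatch rule at the driver: if a path to a UDS already exists for the arriving unit, the unit follows it; otherwise the unit (a logon or a BIND) is routed to node services, which trigger session path establishment. A minimal sketch, with hypothetical names:

```python
# Sketch of the driver's inbound dispatch rule common to the three
# session establishment cases (illustrative names only).
def dispatch(driver_paths, source, unit):
    if source in driver_paths:
        # an established session path already exists for this source
        return ("UDS", driver_paths[source])
    # no path yet: route to node services (e.g. a logon or a BIND)
    return ("node services", unit)

paths = {"TERM-3": "uds-7"}   # hypothetical existing path
hit = dispatch(paths, "TERM-3", "data")
miss = dispatch(paths, "TERM-9", "logon")
```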
6.4.4 Data Flow through the Transport Network

The transport services are provided by a part of the Communication User Environment, which itself uses the lower layers, as was briefly explained in Chapter 3.
Figure 6.4.4.1 shows the relation between the different
environments and the protocols used.
An entity communicates with a corresponding remote
entity in the same environment through a protocol
and with entities in adjacent environments through
primitives. This section explains the protocols used
in the different environments and their data units.
The protocols can be briefly characterized by the
basic data unit which is exchanged.
The transport protocol exchanges units consisting of:
o Transport Header (TH)
o Message of variable length.
A transport protocol data unit (TPDU) is broken into
one or more packets which are the basic unit of the
packet protocol.
A data packet consists of:
o Packet header (PH)
o Packet text of variable length.
The link protocol uses a frame to transport a packet
across.
The frame contains:
o Frame header (FH)
o Packet
o Frame trailer (FT).
Figure 6.4.4.2 shows an example of the breakdown of a
message to fit into the data units at the various
protocol levels.
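The breakdown can be sketched directly: a message plus transport header forms a TPDU, the TPDU is split into packets, and each packet is carried in one frame. The header and trailer contents below are placeholders, not the real formats, and the packet size is an assumed value.

```python
# Sketch of the message breakdown of figure 6.4.4.2 (placeholder
# header/trailer bytes; packet size is an illustrative assumption).
PACKET_TEXT_MAX = 128   # assumed fixed maximum packet text length

def to_frames(message: bytes) -> list:
    tpdu = b"TH" + message                    # transport header + message
    packets = [b"PH" + tpdu[i:i + PACKET_TEXT_MAX]       # packet header
               for i in range(0, len(tpdu), PACKET_TEXT_MAX)]
    # each packet travels in one frame: frame header, packet, trailer
    return [b"FH" + p + b"FT" for p in packets]

frames = to_frames(b"x" * 300)   # 302-byte TPDU -> three packets/frames
```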
6.4.4.1 Transport Level

At the transport level, the communicating entities are called Communication Access Paths (CAPs).

A CAP can be associated with another CAP to form the basis for a conversation. No other conversation between the two CAPs can be established until they have been disassociated.
Addressing
The entity above the transport level which is using the transport services through communication access paths is called the transport services user (TSU).

Figure 4.1-1 shows five TSUs assigned to CAPs in two nodes. Two conversations are established, one between TSU1,1 and TSU2,1 and one between TSU2,2 and its peer.

A global CAP identifier is formed as:

   node-id, local-CAP-id

where node-id identifies the node containing the CAP, and the local-CAP-id is assigned when the CAP is allocated by a TSU. The global CAP identifier is used by CUE in all communication between CAPs.

When a TSU establishes a conversation with another TSU, the global name of the destination TSU must be given. This allows the CUE to route the request for a conversation to the proper destination, where the global TSU name is translated into a global CAP identifier and the association between the two CAPs can be formed. Once the conversation is established, each TSU can identify it by its local-CAP-id.

The global identification of the conversation is formed by the global CAP identifiers of the two associated CAPs plus the NCC-id of the Network Control Center responsible for the conversation.
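The identifier structure described above can be written out as plain records; the field names below are assumptions made for illustration.

```python
# Sketch of the addressing structure: a global CAP identifier is
# (node-id, local-CAP-id); a conversation is identified globally by
# the two associated CAP ids plus the NCC id (field names assumed).
from typing import NamedTuple

class GlobalCapId(NamedTuple):
    node_id: int         # node containing the CAP
    local_cap_id: int    # assigned when the CAP is allocated by a TSU

class ConversationId(NamedTuple):
    cap_a: GlobalCapId
    cap_b: GlobalCapId
    ncc_id: int          # Network Control Center responsible

conv = ConversationId(GlobalCapId(1, 1), GlobalCapId(2, 1), ncc_id=0)
```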
6.4.4.2 The Transport Protocol
On a conversation, the two Transport Service users can exchange messages of varying length, governed by the rules of the transport protocol primitives, which guarantee:

o delivery of messages to the TSU with a predictable, very low error rate,
o correct sequencing of messages,
o a finite, but fluctuating, bandwidth through use of proper flow control.
The error rate is not equal to zero, because there is a certain probability that transmission errors pass the cyclic redundancy check at the link level protocol. Errors from other hardware malfunctions, such as trunk failure or intermediate node failure, are completely recovered unless a TSU becomes unreachable, either due to lost connectivity in the network or a severe overload condition. Only in such a case must a conversation be abnormally terminated.

The error recovery facilities, and the guarantee that messages are correctly sequenced, characterize an end-to-end protocol in the sense that its two users can rely entirely on the protocol to overcome all problems due to imperfections in the underlying network.
The flow control mechanism does not influence the logical sequence of messages exchanged between two TSUs, but they may experience some time delay when the CAP cannot accept messages for sending until sufficient resources become available. This is either because the conversation already uses all the resources which were requested when it was established, or because the network would risk entering a congestion state unless traffic was slowed down.
Transport Protocol Data Unit

The basic unit of data exchanged in a conversation is the transport protocol data unit. It consists of two parts, the transport header (TH) and the message.

The transport header has a variable length and contains, depending on message type:

o header length,
o message type,
o source,
o destination,
o message sequence number,
o services field.
The message type field identifies one of the messages described in the following section. The source and destination CAP-ids identify the conversation uniquely, carry information on the control region responsible for the conversation, and identify the source and destination node.

The message sequence number serves to ensure correct sequencing of messages and proper flow control, based on the number of messages which may be sent before an acknowledge is received, and helps to detect a duplicate message which may arrive as a result of recovery.
The message in the conversation unit is of varying length and can be either user data, or information exchanged to control the conversation or to record statistics. The maximum length of a message must be specified when the conversation is established.

The message length specifies the number of bytes in the message following the transport header. The services field is only present for certain types of messages used for conversation control. The descriptions of these messages also describe the contents of the services field.
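A possible encoding of the header fields listed above can be sketched as follows. The field widths and their ordering are assumptions; the text specifies only which fields exist and that the header length is variable.

```python
# Illustrative transport header encoding (assumed field widths:
# 1-byte length and type, 2-byte source/destination/sequence, plus an
# optional variable-length services field).
import struct

def pack_th(msg_type, source, dest, seq, services=b""):
    body = struct.pack(">BHHH", msg_type, source, dest, seq) + services
    return bytes([1 + len(body)]) + body    # header length field first

def unpack_th(th):
    length = th[0]
    msg_type, source, dest, seq = struct.unpack(">BHHH", th[1:8])
    return length, msg_type, source, dest, seq, th[8:length]

th = pack_th(msg_type=2, source=0x0101, dest=0x0201, seq=7)
```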
6.4.4.2.1 Types of Conversation Messages

A conversation can be in one of three phases, and in each phase certain message types are allowed:

o Establishment phase (connect request and confirm)
o Data Transfer phase (INFORMATION and CONTINUE messages)
o Wrap Up phase (disconnect request and response)

The transfer from one phase to another is controlled by the conversation monitor and is caused by conversation control messages, as shown in figure 4.1.2.2-1. This diagram further shows all possible states of the conversation monitor. The state transitions are explained below.
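The phase discipline can be sketched as a small state machine. This is a simplification: the full state/transition set of figure 4.1.2.2-1 is richer, and the exact per-phase message sets below are assumptions drawn from the surrounding text.

```python
# Minimal sketch of the conversation monitor's three phases and the
# message types admitted in each (an illustrative subset, not the
# complete diagram).
ALLOWED = {
    "ESTABLISHMENT": {"CONNECT_REQUEST", "CONNECT_CONFIRM"},
    "DATA_TRANSFER": {"INFORMATION", "ACK", "LETTERGRAM",
                      "INTERRUPT", "CONTINUE", "WRAP"},
    "WRAP_UP":       {"WRAP_RESPONSE", "CLOSE", "CLOSE_RESPONSE"},
}
# messages that move the conversation to the next phase
NEXT_PHASE = {"CONNECT_CONFIRM": "DATA_TRANSFER", "WRAP": "WRAP_UP"}

class ConversationMonitor:
    def __init__(self):
        self.phase = "ESTABLISHMENT"

    def handle(self, message):
        if message not in ALLOWED[self.phase]:
            raise ValueError(f"{message} not allowed in {self.phase}")
        self.phase = NEXT_PHASE.get(message, self.phase)

cm = ConversationMonitor()
for m in ("CONNECT_REQUEST", "CONNECT_CONFIRM", "INFORMATION", "WRAP"):
    cm.handle(m)
```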
Establishment Phase

The establishment phase starts with a connect request. The connect request contains a services field which specifies:

o name of the destination conversation system user,
o type of conversation,
o maximum length of messages,
o window size, i.e., maximum number of outstanding messages.

This field allows the routing and resource requirements to be determined. If the destination user is not local to the originating user, the connect request message is not transmitted until a virtual call has been established by the Data Transmission Environment.
The receiving conversation monitor determines which communication access path is allocated by the named destination TSU and passes the connect request/indication message on via this CAP.

The destination TSU must then issue a connect confirm primitive, stating its resource requirements for the conversation. When the corresponding connect confirm is received and accepted by the originating conversation monitor, the conversation is established and the data transfer phase has started.

The conversation may not be established for one of the following reasons:

o the destination TSU is not assigned to a CAP,
o the CAP services do not have sufficient resources,
o the virtual call cannot be established,
o the destination TSU rejects the OPEN message,
o a time out occurs before an OPEN response is received.

In these cases, the originating TSU is notified by a disconnect indication.
Data Transfer Phase

In the data transfer phase, the two TSUs can exchange INFORMATION messages, subject to the flow control derived from the connect parameters. The conversation monitor acknowledges correctly received messages. The acknowledgement is passed back to the sending CAP services in an ACK message, stating the number of the acknowledged message. The message window is moved according to the message number, which may cause a blocked message to be released for transmission.
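The window mechanism can be sketched as follows: at most `window` messages may be outstanding, and an ACK carrying a message number moves the window forward, possibly releasing a blocked message. The numbering scheme and class are assumptions for illustration.

```python
# Sketch of the sender-side message window (hypothetical details:
# per-message ACK numbers, simple blocked-message queue).
from collections import deque

class SendWindow:
    def __init__(self, window):
        self.window = window
        self.next_seq = 0
        self.acked = -1            # highest acknowledged message number
        self.blocked = deque()     # messages held back by flow control
        self.in_flight = []        # (sequence number, message) sent

    def send(self, msg):
        if self.next_seq - self.acked > self.window:
            self.blocked.append(msg)        # window full: block message
            return
        self.in_flight.append((self.next_seq, msg))
        self.next_seq += 1

    def ack(self, seq):
        self.acked = max(self.acked, seq)   # window moves forward
        # a moved window may release blocked messages for transmission
        while self.blocked and self.next_seq - self.acked <= self.window:
            self.send(self.blocked.popleft())

w = SendWindow(window=2)
for m in ("a", "b", "c"):
    w.send(m)        # "c" is blocked: two messages already outstanding
w.ack(0)             # ACK for message 0 releases "c"
```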
Higher level protocols may depend on the ability to transmit their control messages even in cases where the normal message flow is blocked. This is provided for by the LETTERGRAM message, which is not subject to normal flow control, but has its own window of size one. LETTERGRAMs are not transmitted via the virtual call established for the conversation, but on permanent virtual calls which are reserved for lettergram traffic from many conversations. Lettergrams are therefore restricted in size, to ensure that no fragmentation occurs at the packet level.
When the Data Transmission Environment detects an irrecoverable error at the packet level, it will clear all virtual calls affected by the error and notify the Network Control Center. If the error is truly irrecoverable (e.g. a node is unreachable), the NCC will instruct the conversation monitor to wrap up and close all conversations corresponding to the cleared VCs. Otherwise, the conversation monitor is asked to establish a new virtual call and issue a CONTINUE message with a services field similar to the OPEN message. This allows the conversation to be resumed, possibly with different resource demands if the error has caused decreased network performance.
When the conversation monitor has received a positive CONTINUE response, all messages which were not acknowledged before the clearance of the VC are retransmitted. This may cause a duplicate message to be received in the case where an ACK message was lost. Receipt of a duplicate message in this situation is therefore not considered a protocol error.
The conversation can be suspended by an INTERRUPT message, which gives the possibility of a second degree of flow control, exercised from outside the transport services. In the suspended state, normal messages cannot be transmitted, but the virtual call allocated for the conversation is kept. Normal flow is resumed, possibly with adjusted resource parameters, by the CONTINUE message as explained above.
Wrap Up Phase

The data transfer phase is terminated, and the wrap up phase started, when the WRAP message is issued. This message may be issued in any of the data transfer states. The message indicates that the TSUs have ceased to use the conversation and wait for it to be closed. The WRAP response contains, in the services field, statistics collected during the conversation. When the statistics have been forwarded to the NCC, the conversation monitor can accept a CLOSE message.

The CLOSE response causes the virtual call supporting the conversation to be cleared and all resources allocated to the conversation to be removed. Finally, the conversation controller is notified that the conversation is finished.
All Phases

In all phases of the conversation, there is a need to be able to monitor the operation of the software more closely. In order to synchronize certain events in connection with such tests of the software supporting the conversation, a TEST message can be issued. It contains test commands, not yet defined, which have a local effect on the conversation monitor. Since the TEST message has followed the normal sequence of messages, the two conversation monitors will have been in comparable protocol states when the test commands were executed.
6.4.5 Subnetwork Access Services

This interface receives conversation messages from all CAPs in a particular node. The messages have already been broken and reformatted into packets by the CAP services.

At the transport level, three types of user traffic are distinguished:

o Transparent traffic
o Interactive traffic
o Batch traffic

It is the responsibility of the subnetwork access services to hand over packets to the Data Transmission Environment according to the following priority hierarchy:

o Network control traffic generated as a result of error conditions
o Network control traffic containing status information (e.g. routing updates)
o Transparent user traffic
o Interactive user traffic
o Batch user traffic
o Network statistics traffic

Although batch user traffic has lower priority than interactive user traffic, a mechanism is provided to ensure that the batch traffic is not forced to drop below a certain fraction of the interactive traffic.

Note that the priority is only given at entry to the Data Transmission Environment. When the packets have entered DTrE, they are switched through to their destination with equal priority.
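The hand-over ordering can be sketched with a priority queue; for brevity this omits the batch-traffic fraction guarantee, and the queue model itself is an assumption.

```python
# Sketch of the six-level hand-over priority at the sub network
# interface (illustrative; the batch anti-starvation rule is omitted).
import heapq

PRIORITY = ["error control", "status control", "transparent",
            "interactive", "batch", "statistics"]
RANK = {kind: i for i, kind in enumerate(PRIORITY)}

class SubnetworkAccess:
    def __init__(self):
        self.heap, self.counter = [], 0   # counter keeps FIFO per level

    def submit(self, kind, packet):
        heapq.heappush(self.heap, (RANK[kind], self.counter, kind, packet))
        self.counter += 1

    def hand_over(self):
        # next packet handed to DTrE, highest priority level first
        _, _, kind, packet = heapq.heappop(self.heap)
        return kind, packet

sa = SubnetworkAccess()
sa.submit("batch", "p1")
sa.submit("interactive", "p2")
sa.submit("error control", "p3")
```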
6.4.5.1 Data Transmission Environment

The Data Transmission Environment (DTrE) supports the packet level of the store and forward packet sub network used by the transport services.

The interface to DTrE from the transport level in CUE, called the sub network interface, is located in the Front End Processor and in each node. Across this interface, the packets are transferred which enter or leave the sub network at the given node.

As shown in figure 4.2-1, the DTrE may assume several roles as compared with the CCITT recommendations. When communicating with other nodes in LMENET, DTrE assumes the role of a Signalling Terminal (STE) and operates according to CCITT recommendation X.75, Grey Book, Geneva 1979. It is also technically possible to use this interface for communication through public packet networks, but these may not allow this and, therefore, DTrE is also able to communicate as a Data Terminal Equipment (DTE) using the CCITT Provisional Recommendation X.25, Grey Book, Geneva 1978. Thirdly, the DTrE can communicate directly with an X.25 terminal, in which case DTrE acts as a Data Communication Equipment (DCE).

Please note that this discussion does not include the possibility of attaching a local X.25 network to a node via the MEDE in the Network Interface Environment.
6.4.5.1.1 DTrE Facilities

The sub network interface provides a set of facilities to the Communication User Environment:

o A set of logical channels capable of supporting traffic in both directions. Traffic on one channel is logically independent of traffic on other channels.

o Establishment of a Virtual Call (VC) on a specified channel, addressing another DTE.

o Use of Permanent Virtual Calls (PVC) on a specified set of logical channels.

o Transmission of datagrams along established PVCs.

o Specification of traffic class, throughput class, and window size when the call is set up.

o End to end integrity of data, by delivery of packets in correct sequence and notification of loss of data.

o Flow control across the sub network interface according to the specified window size.
DTrE provides no other optional user facilities than
the traffic class, throughput class and window size
specification mentioned above. The maximum user
packet size is fixed throughout the net, and the
maximum window size is 7 packets.
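The window mechanism can be sketched as follows. This is a minimal illustration only, not ACDN code; the class and variable names are invented, and only the fixed maximum window of 7 packets is taken from the text.

```python
class SendWindow:
    """Sliding-window flow control across the sub network interface:
    at most `size` packets may be outstanding (DTrE caps this at 7)."""

    def __init__(self, size=7):
        assert 1 <= size <= 7        # maximum window size is 7 packets
        self.size = size
        self.outstanding = 0         # sent but not yet acknowledged
        self.backlog = []            # packets waiting for window credit

    def send(self, packet, transmit):
        if self.outstanding < self.size:
            self.outstanding += 1
            transmit(packet)
        else:
            self.backlog.append(packet)    # window closed: hold the packet

    def ack(self, transmit):
        self.outstanding -= 1              # one packet acknowledged
        if self.backlog:                   # window reopened: send next one
            self.outstanding += 1
            transmit(self.backlog.pop(0))
```

With a window of 2, for example, sending three packets transmits two immediately and holds the third until an acknowledgement arrives.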
When a virtual call is requested, the DTrE establishes
a route through the network to the addressed
destination node. All packets transmitted on the VC
will follow this route and are always delivered in
correct sequence.
If a permanent error occurs, all VCs and PVCs
affected by the error are cleared. The DTrE will
try to re-establish the PVCs through a different
route, and then issue a RESET packet on each. Users
of VCs receive only a CLEAR packet; it is then the
responsibility of the CUE to establish a new VC.
Inter Node Signalling
Signalling between the nodes at the packet level is
performed according to CCITT recommendation X.75.
DTrE implements the STE functions in the Front End
Processor and in each node. The protocol is
symmetrical, thus giving equal responsibility for
control and error recovery between the nodes.
A virtual call is formed by a set of STE pairs,
which implement the packet protocol through
propagation of the operations on the sub network
interface to successive STE-STE interfaces.
6.4.5.1.2 Routing of Virtual Calls
The routing in LMENET is based on a global routing
calculation performed by the Network Services
Environment. The calculation is done regularly and
is based on the following inputs:
o network topology,
o trunk speeds,
o current allocation of virtual calls,
o current traffic load on nodes and trunks.
Output from the central routing algorithm is
regularly distributed to all nodes, where it forms
the routing table. This table states the primary
link to be used when setting up a virtual call for
each destination node and traffic class. A secondary
link is also given; it is used when the primary link
is not operable and new routing information has not
yet been received.
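The primary/secondary selection can be illustrated with a small sketch. The table contents, link names, and function are hypothetical; only the lookup rule is taken from the text.

```python
# Routing table as distributed to a node: for each (destination node,
# traffic class) a primary link and a secondary fallback link.
ROUTING_TABLE = {
    ("N4", "interactive"): ("LINK-2", "LINK-3"),
    ("N4", "batch"):       ("LINK-3", "LINK-2"),
}

def select_link(destination, traffic_class, operable_links):
    """Pick the primary link; fall back to the secondary when the
    primary is not operable and no newer routing table has arrived."""
    primary, secondary = ROUTING_TABLE[(destination, traffic_class)]
    return primary if primary in operable_links else secondary
```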
6.4.6 Data Link Environment
The Data Link Environment (DLE) supports the Balanced
Link Access Protocol (LAPB) which is specified in
CCITT recommendations X.25 and X.75.
This protocol ensures error free full duplex
transmission of frames on a single link, from one STE
to another, except for the very rare case where bit
errors are not detected by the frame check sequence.
The protocol is implemented in firmware on the Line
Termination Unit (LTU) connected to the CR80 computer.
Based on the routing discipline described in the
previous section, the interface between DTrE and DLE
queues the packets from the virtual calls on a given
link. These packets are sent in first come, first
served order.
6.5 NODAL SWITCH SOFTWARE PACKAGE
The Nodal Switch Software package implements the
functions described in the following subsections.
6.5.1 X25 Level 1,2,3
With reference to fig. 6.5-1, the software
components related to each of the three X.25 levels
are described.
a. X.25 Level 1
   The LTU line driver (LD) supports from one to
   four lines. The combinations of lines and speeds
   are:
   1 line:  56/64 K bits/sec
   2 lines: 19.2 K bits/sec each
   4 lines: 9.6 K bits/sec each
   The lines are controlled via the Link Manager
   (LLM) which provides:
   - line connect/disconnect
   - exception notification
   - modem loop tests according to V54
b. X.25 Level 2
   Each line is connected to a link protocol task
   (LAPB) which implements the standard LAPB
   protocol without extended sequence numbering.
   The single Link Manager task controls all LAPB
   tasks:
   - open and close link (SABM and DISC)
   - exception notification
   - remote load of NSP.
c. X.25 Level 3
   These functions are distributed between the LTU
   and CR80. This is done for efficiency reasons
   and also in order to separate the basic Level 3
   functions from the optional functions and the
   control functions which are not part of the X.25
   specification.
6.5.2 Resource Management Software
6.5.2.1 VCBM
The Virtual Circuit Buffer Manager runs as one task
in each LTU and manages the packet buffers
transferred to/from LAPB on a VC basis.
The functions performed by VCBM are:
- Present each incoming packet to the Nodal Switch
  Processor (NSP) in order of arrival. This is done
  unless packets on the VC in question are already
  queued by VCBM; in that case, the arriving packet
  is appended to the queue.
- Queue the incoming packet or free the packet
  buffer, based on the answer from the NSP after
  packet presentation.
- Dequeue a packet from a specified VC on command
  from the NSP and present the packet to the NSP.
- Perform piggy-backing of RR packets onto outgoing
  data packets.
- Manage packet retransmission on VCs which use the
  packet reject facility.
The interface to the NSP consists of from 1 to 11
logical channels per LTU. Two channels are used
for each link, one data channel and one control
channel. A single channel is used to communicate
with LLM for overall control purposes.
6.5.2.2 VCSWH
The Virtual Circuit Switch Handler is a specialized
DAMOS handler which controls a number of LTUs.
The interface to the LTU is through the shared RAM
in the LTU, which is mapped into the address space of
VCSWH. This interface logically supports the 10
channels described above.
The interaction can either be driven by interrupts
from the LTU, in the case where VCSWH handles only
one LTU, or the handler must poll the LTUs, in the
case where several LTUs are handled by the same
VCSWH.
The interface to processes and other handlers in the
NSP consists of one connection to the Virtual Circuit
Manager (VCM) and one connection for each active VC
which is switched into the NSP.
The following functions are performed by VCSWH:
- Process packets according to the X.75 level 3
  protocol. The packet types used for VC setup and
  take down are not processed by VCSWH, but are
  routed to VCM.
- Switch packets, according to a static switch table
  entry for each VC, to:
  -- the subscriber process, if this is the end
     point of the VC
  -- another VCSWH incarnation, if this is an
     intermediate node and the outgoing link is not
     controlled by this VCSWH incarnation
  -- the LTU, if this is an intermediate node and
     the outgoing link is controlled by this VCSWH
     incarnation.
- Respond to VC setup commands from VCM, which
  indicate the switching information and the BTS
  connections to be used during the lifetime of
  the VC.
- Perform flow control based on the VC window
  parameter and the buffer occupancy on the incoming
  and outgoing LTU or process.
- Exception notification to VCM.
- Transparent data transportation between VCM and
  the LLMs.
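The per-VC switch decision can be pictured as a lookup in a static table. This is a simplified sketch; the VC numbers and destination identifiers are invented, and only the three destination kinds come from the text.

```python
# Static switch table: one entry per active VC, set up by VCM.
# Each entry names the kind of destination and its identity.
SWITCH_TABLE = {
    7:  ("process", "subscriber-A"),  # this node is the VC end point
    12: ("vcswh",   "VCSWH-2"),       # outgoing link owned by another VCSWH
    15: ("ltu",     "LTU-3"),         # outgoing link owned by this VCSWH
}

def switch_packet(vc, packet):
    """Return (destination kind, destination id, packet) for a data packet."""
    kind, target = SWITCH_TABLE[vc]
    return kind, target, packet
```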
The normal data flow depends on the switching
situation:
- LTU-Application
  Each packet is copied once by BTS to/from the
  shared RAM on the LTU to/from the application
  process. The copy operation is performed via a
  DMA transfer.
- LTU-Other VCSWH
  Each packet is copied as described above. To
  finish the switching operation, the other VCSWH
  incarnation then performs a copy to the
  destination LTU.
- LTU-LTU
  When both LTUs are mapped into the address space
  of the same VCSWH, an internal copy operation is
  performed.
6.5.2.3 VCM
The Virtual Circuit Manager resides as one process
in each Nodal Switch Processor. It is connected to
a Packet Network Manager (PNM) via a PVC. The PNM
is located in the Network Control Centre.
VCM performs the following functions:
o Receives routing information from PNM.
o Transmits link load information to PNM, which
  uses this information to perform a centralized
  routing calculation at regular intervals.
o Exception notification to PNM.
o Receives commands from PNM and controls the
  VCSWHs and LLMs accordingly.
o Processes VC setup and takedown requests, either
  from VCSWHs or from subscriber processes.
The transport network follows recommendations from
CCITT and ISO for levels 1 to 4.
The packet network implements X.75/1,2,3 and may be
used independently. The transport station is
implemented using the packet network as its
communication medium.
The Nodal Switch software is distributed between the
LTUs and the NSP. To ease the discussion, the LTU
software is often referred to as firmware, although
it is actually loaded from the NSP at system
initialisation time.
The software is divided both to reflect the three
levels of the X.25 specification and to reflect the
distribution of work between the NSP and the LTU.
6.5.3 Transport Station (TS)
The transport station may reside as a task in any
process. It implements the ISO OSI level 4 protocol
specified by ECMA.
Each connection is accessible to the level 5 user
through a BTS port.
TS multiplexes the connections onto Virtual Circuits
provided by VCM.
TS recognizes traffic of the following priorities:
- network control
- interactive sessions
- batch sessions
- statistics and traces,
and enters the traffic into the packet network
accordingly.
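The priority ordering can be sketched with a small queue. Only the four traffic classes come from the list above; the numeric levels, class keys, and queue design are assumptions for illustration.

```python
import heapq

# Priority levels for the four traffic classes recognized by TS
# (lower value = entered into the packet network first).
PRIORITY = {
    "network control": 0,
    "interactive":     1,
    "batch":           2,
    "statistics":      3,
}

class TrafficQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0                 # preserves FIFO order within a class

    def put(self, traffic_class, unit):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, unit))
        self._seq += 1

    def get(self):
        return heapq.heappop(self._heap)[2]
```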
Incoming Connect Requests are routed to CAS, which
also receives any exception notification.
6.5.4 Connection Allocation Services (CAS)
The connection allocation service may reside as a
task in any process. The main function of CAS is to
manage the usage of a transport station. However,
since CAS is the controlling task in processes
running TS, it also has other administrative
functions.
Based on information from the Nodal Session Manager
(session requests) or from the transport station
(connection requests), CAS creates TSU tasks and
makes them connect to TS.
Based on external commands, CAS also creates other
tasks and makes them connect both to TSUs and to
device or channel drivers.
Messages to and from all tasks in the process
controlled by CAS are multiplexed onto a separate
BTS connection leading to either NPLUS or HPLUS.
6.5.5 Gateway Software
The main task of the gateway (GWY) is to act as the
interface between the ACNC and the network.
As seen from the ACNC, the gateway must look like
several ICCs with CRTs and printers.
As seen from the Nodal Subsystem of the collocated
node, the Gateway must look like any other subsystem
interfacing the network.
Therefore the Gateway is a protocol converter
between the ICC and the NSS protocols.
It also acts as a multiplexer or speed converter
between several ICC subsystems (of 9.6 kbps each)
and one CCI subsystem (Communication Control
Interface).
Finally, the Gateway is a converter because messages
must be converted to and from the Virtual Terminal
Protocol used exclusively inside the ACDN.
6.5.6 Modules of the Gateway
6.5.6.1 The ICC Module
The ICC Module is divided into a number of
co-routines, each performing the task of acting like
an Intelligent Communications Concentrator as seen
from the ACNC.
Messages are exchanged with the CCI Module by means
of Message Queues.
6.5.6.2 The Communications Control Interface Module
The Communications Control Interface Module (CCI)
interfaces the collocated node of the Packet
Switched Network.
Virtual Circuits to hosts, terminals, and printers
are created and dismantled when necessary, initiated
from this module.
Messages are exchanged with the ICC co-routines by
means of Message Queues.
6.5.6.3 The High Level Service Module
The conversion to and from the Virtual Terminal
Protocol used exclusively inside the ACDN is done by
the High Level Service Module.
6.5.6.4 The Supervisory Module
The Gateway is monitored from this module, which
generates statistics (host, terminal, and printer
connections) and error reports. Possible controlling
information is also received by this module.
6.5.6.5 The Recovery Module
6.6 TERMINAL ACCESS SOFTWARE PACKAGE
6.6.1 Transport Station User (TSU)
The Transport Station User (TSU) is the end point of
one or more sessions which are multiplexed onto a
connection.
The TSU converts the IBM or Univac specific session
interface into the standard LMENET sessions.
6.6.2 Device Emulators (EM)
The Device Emulators map a device onto one or more
sessions, depending on the device and its current
connections to other end users. The emulators are
created when the first session is established and
removed when the last session is terminated.
The different emulators support the following
devices:
to IBM:
LUT123 Emulator (LUTEM)
makes the following devices appear as Logical Unit
Types 1, 2, or 3:
- IBM 3271 BSC
- TTY
- IBM 2780/3780
- HASP Work Station
LUT123 Boundary Function (LUTBF)
performs the boundary function defined in SNA for:
- IBM 3274, IBM 3276
- IBM 3767
- IBM 3790
to Univac:
INT1 Emulator (INT1EM)
transforms the following devices into the INT1
protocol:
- IBM 3271 BSC
- TTY
RB2 Emulator (RB2EM)
transforms the following devices into the RB2
protocol:
- NTR
- IBM 2780/3780
6.6.3 Device Handlers (DH)
The Device Handler modules are all implemented on
the LTUs and support the devices listed above. They
communicate with the Resource Managers and Emulators
through a standardized header with a standardized
set of messages.
6.6.4 TAS Management
The TAS management function consists of two parts.
One is to maintain the physical and logical status
of the equipment terminated by TAS; the other is to
control the dynamic session status.
6.6.4.1 Configuration Management (CM/NSR)
This task controls the resources in the TAS. It
communicates with the Network Control System in the
NCC. CM/NSR performs:
- Receive definition of external resources.
- Receive and distribute include and exclude
  commands for external resources.
- Maintain status table.
- Transmit status of an external resource either
  when it changes or on request from NCS.
- Distribute network maintenance commands from NP to
  other tasks in NPLUS.
6.6.4.2 Terminal Session Manager
This task controls the session initiation and
termination for all logical units. Associated with
this, it performs the overall resource management
concerning the node itself.
NSM receives session requests either from the SS in
the NCC or from a CAS in the NEUPs:
- Validate session request
- Check status of logical unit
- Select NEUP if not already done
- Acknowledge session request, indicating the type
  of the emulator to be created and connected to
  the TSU and the appropriate driver.
Force session termination in case of:
- Command from SS
- Exclusion of resource.
6.6.4.3 External Resource Managers (ERM)
These tasks control the physical units connected to
the Node. This is done via commands to the driver
and to the Device and Line Handlers on the LTUs.
The ERMs communicate with the NCC via CM/NSR.
The following functions are performed:
- Initialize line
- Initialize cluster
- Initialize device
- Reset
- Request status
- Request loop test
- Report status change
- Activate/deactivate traces and statistics
  collection.
6.6.4.4 Node Logical Unit Services (NLUS)
This task manages all data flow on network control
sessions from any logical unit to the US in the NCC.
Data passed to the NCC are converted into a device
independent presentation form supporting:
- ASCII character set
- New line control character
- Transparent control sequence
- Erase presentation surface control
Inbound data from devices not yet included are
rejected.
Inbound data from devices not in data session are
passed from the node system to NLUS, converted, and
forwarded to US. Outbound data from US is passed on
to the node system.
Inbound data from devices in data session, but
currently switched to a network control session, is
passed from the EM via CAS to NLUS, converted, and
forwarded to US. Outbound data from US is passed
back to EM.
6.6.4.5 Node Session Manager (NSM)
This task controls the session initiation and
termination for all logical units. Associated with
this, it performs the overall resource management
concerning the node itself.
NSM receives session requests either from the SS in
the NCC or from a CAS in the NEUPs:
- Validate session request
- Check status of logical units
- Select NEUP if not already done
- Acknowledge session request, indicating the type
  of the emulator to be created and connected to
  the TSU and the node system.
Force session termination in case of:
- Command from SS
- Exclusion of resources.
6.6.4.6 Packet Network Control (PNC)
This task in NP communicates with the PNM in the NCC
via a PVC which is set up at node initialization
time. PNC performs:
- Pass messages between PNM and VCM
- Manage adjacent nodes which are excluded from the
  packet network.
6.7 HOST ACCESS SOFTWARE PACKAGE
The module configuration of the Host Access Software
System is shown in figure 6.5-1.
The Transport Network modules have been described in
section 6.5, and the PNC and TSU modules in section
6.6.
The HAS connects to the host either locally through
a channel interface or remotely through a
communication line. In both cases, the drivers for
the channel interface or the LTU are able to route
traffic to the proper connection according to
addresses in the data units.
6.7.1 IBM Application Session Interface (IASI)
One instance of IASI exists for each active session.
It receives Path Information Units (PIU) from the
host, removes the Transmission Header (TH) and
passes the data unit on to the session supported by
the connected TSU. Inbound data gets the reverse
treatment. IASI also matches the network flow
control with the SNA pacing.
6.7.2 IBM Channel Handler (ICH)
The ICH drives the channel interface, presenting the
appearance of an IBM 3705 running ACF/NCP/VS.
Inbound data:
Whenever a given number of data units has been
received, or a given time has elapsed since the
first data unit was received, the host is
interrupted. When the host selects the channel
interface, all the data units are transmitted across
the channel.
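The count-or-timeout rule can be sketched as follows. The thresholds, names, and callback shape are illustrative assumptions; the proposal gives no concrete values.

```python
class InboundBatcher:
    """Interrupt the host when `limit` data units have accumulated or
    `timeout` seconds have elapsed since the first unit was received."""

    def __init__(self, limit, timeout, interrupt_host):
        self.limit = limit
        self.timeout = timeout
        self.interrupt_host = interrupt_host   # callback: deliver the batch
        self.units = []
        self.first_at = None

    def receive(self, unit, now):
        if not self.units:
            self.first_at = now                # timer starts at first unit
        self.units.append(unit)
        if len(self.units) >= self.limit or now - self.first_at >= self.timeout:
            batch, self.units = self.units, [] # host selects the channel and
            self.first_at = None               # reads the whole batch
            self.interrupt_host(batch)
```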
Outbound data:
The host selects the channel and outputs a number of
data units up to a given maximum. ICH must always be
able to accept these data units; when it runs short
of buffer space, it must transmit a "slowdown"
message to the host, causing it to stop transmitting
until an "exit slowdown" is received.
6.7.3 PUT4 Emulator (PUT4EM)
Each of these tasks emulates a Physical Unit of type
4. This may represent either a shared local NCP or
an NCP connected via a cross domain link.
The PUT4 emulator is activated through an include
command from NCS.
Since the hosts can exercise little control over
these types of PUT4s, only a small fraction of the
PUT4 functionality needs to be implemented.
6.7.4 Host Session Manager
This task performs the same functions as the NSM
described in section 6.6.4.5.
Additionally, it may receive IBM dependent session
reBIND requests. If such a valid request is
received, NSM allocates an IASI task and forwards
the request. Session setup with the specified
destination is then performed in the standard way,
based on resource requirements extracted from the
request.
6.7.5 Status and Statistics
This task manages a table of network resources
describing the hierarchical ordering of the
resources and their status. Update information for
this task is received from the NCS in the NCC.
6.8 NETWORK CONTROL SOFTWARE PACKAGE
The module configuration of the Network Control
Centre is shown in fig. 6.8-1. The figure also shows
the usage of important files.
Not all processes of the NCC need be permanently
active; some are started on demand via network
operator commands, whereas the other processes are
permanently running.
The modules forming the transport network are
described in section 6.5, and the TSU and CAS modules
in section 6.6.
6.8.1 Operator Command and Message Interface (OCMI)
One incarnation of this process exists for each
active network operator.
OCMI performs:
o command input
o command parsing and classification
o routing of commands
o start of processes
o message output
Chapter 4 provides the scope of the commands.
6.8.2 Network Configuration Services (NCS)
This task manages the Global Network Table (GNT),
which is initially read from network definition
files. NCS has a connection to each of the CM/NSR
tasks in the nodes.
NCS performs:
o Load definition of part of the GNT from file
o Remove definition of part of the GNT from file
o Distribute definitions to the nodes
o Include defined resource(s)
o Exclude defined resource(s)
o Obtain status of defined resource(s)
o Load part of the GNT from a node
If the status of an external resource changes, this
is reported to NCS automatically, causing NCS to
update the GNT and generate an event to EDH.
6.8.3 User Services (US)
The user service task communicates with all terminals
through NLUS in each node. All data from terminals in
network control session is routed to US.
US provides:
o Logon support: validation of user and selection of
  default parameters based on user profile, mainframe
  application selection, output of logon news, and
  indication of waiting messages in an electronic
  mailbox.
o Logoff support.
o Message switching to another user or to a message
  switch application.
o Operator notification from user.
o Message broadcasting from operator to a group of
  users or terminals.
6.8.4 Session Services (SS)
This task supervises all sessions and connections
and updates the GNT accordingly.
Operator commands:
o Initiate/terminate/status of session
o Initiate/suspend/resume/terminate/status of
  connection
o Set session limits
Internal commands:
o Route session requests from US either to a CDRM
  emulator or to a session manager, depending on the
  destination end user type
o Switch messages between session managers intended
  for address resolution purposes
o Receive accounting information at session
  initiation and termination, and write an account
  record.
6.8.5 CDRM Emulator (CDRMEM)
Each CDRMEM task manages a Cross Domain Resource
Manager session with a CDRM in an IBM host. These
sessions are used to synchronize session setup and to
obtain information on the application status in the
hosts.
6.8.6 Packet Network Manager (PNM)
This process manages the entire packet network
through a PVC to the PNC and VCM in each node.
PNM performs:
o Initiation/termination/status of links
o Initiation/termination/status of VCs
o Routing calculation based on VCM information
o Collection of network statistics
o Remote load and dump of nodes and HIPs.
6.8.7 Event Distribution Handler (EDH)
All events which require operator notification are
routed to the EDH process. Each operator can assign
certain types of events to his terminal, causing EDH
to note events of those types in a dedicated area on
the terminal. The event notification will stay on
the terminal until acknowledged by the operator.
All events are written to an event log which may be
printed concurrently by another application.
6.8.8 Graphic Status Display (GSD)
Graphic Status Display processes may be started
either to generate a status snapshot or to update,
at regular intervals, a status display assigned to a
given operator terminal. Status information is
obtained from the GNS process.
The GSD application is tailored to a specific display
layout which is stored in a file.
6.8.9 Data Unit Trace (DUT)
Trace applications may be started to receive a trace
of all data units passing a given point.
The following traces may be selected:
o Channel data
o Line, cluster, or terminal
o Virtual circuit
o Connection
o Session
Trace applications may write the trace data to a
file.
6.8.10 External Resource Statistics (ERSTAT)
NMH statistics collection may be performed by
application programs which may receive statistics on
the same entities as described in section 4.4.9.
6.8.11 Routing
This section presents the underlying scheme for
implementing an adaptive routing strategy: it
describes the basic principles and identifies the
software entities realizing the routing procedure.
Routing Principles
An adaptive routing scheme with a centralized
routing calculation and a decentralized route
selection is applied.
The routing scheme adapts to the current network
topology and the current load of the internodal
trunks.
Routing calculation is performed centrally, and the
resulting routing tables are distributed to each
node. The routing table - which on arrival at the
node represents the past - in conjunction with the
node's instantaneous view of the load of the trunks,
makes up the basis for route selection.
6.8.11.1 Software Organization
The S/W entities responsible for accomplishing the
routing task are:
- the Packet Network Manager (PNM), which is part of
  the network control software package,
- the Virtual Circuit Manager (VCM), and
- the Virtual Circuit Switch (VCSW), which are both
  part of the nodal switch software package.
The interplay between these entities in a routing
context is outlined below.
At regular intervals, each VCM transmits the current
load on each of its output trunks to PNM.
PNM, at regular intervals, performs a routing
calculation based upon the measured load of the
trunks in each direction. The current topology is
implicitly represented by means of the trunk loads:
a trunk which is inoperable is assigned an infinite
load, and an inoperable node is connected to other
nodes through trunks each assigned an infinite load.
In case a node or trunk failure is reported, PNM
immediately calculates new routing information.
Basically, the routing algorithm for each node
determines the shortest path to each of the
remaining nodes. The shortest path between two nodes
is defined as the path along which the sum of the
costs of the trunks traversed is minimum, where the
cost of a trunk in a given direction is equal to the
measured load in that direction.
PNM transmits the new routing information to each
VCM.
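The central calculation is, in essence, a single-source shortest path computation per node, with measured trunk loads as edge costs. The sketch below uses a standard algorithm (Dijkstra); the proposal does not name one, so this is an illustration, and the node names are invented.

```python
import heapq

def shortest_paths(trunk_cost, source):
    """Shortest-path costs from `source` to every reachable node.
    `trunk_cost[a][b]` is the measured load of the trunk a -> b;
    an inoperable trunk is simply given cost float('inf')."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                     # stale heap entry, skip
        for nxt, cost in trunk_cost.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist
```

Running this once per node, with costs measured separately in each direction, yields the per-destination cost vectors that are distributed to the VCMs.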
6.8.11.2 Route Determination
The routing information for a node is organized in a
vector for each trunk giving the cost to each node
once the trunk has been passed.
In a network like the one sketched on the following
page, N1 receives routing information telling:
- what is the minimum cost of reaching N3, N4, and
  N5 from N2,
- what is the minimum cost of reaching N2, N4, and
  N5 from N3.
This information, in conjunction with the currently
measured load on N1's outgoing trunks, forms the
basis for route selection in N1.
If N1 is going to establish a VC to N4, it finds the
output trunk which minimizes the cost:
   Min ( Trunk Cost (1-2) + Shortest Path (2-4),
         Trunk Cost (1-3) + Shortest Path (3-4) )
This scheme also applies when the destination node
is an adjacent node.
The load of a trunk is expressed by the following
formula:
Trunk load = Trunk cost = (MFZ/LS) + MID + MOD + VCL,
where
- MFZ - mean frame size in bits
- LS  - trunk speed in bits/msec
- MID - mean time an information frame spends on the
        input LTU before being switched to the
        output LTU
- MOD - mean time an information frame spends on the
        output LTU before being transmitted onto the
        trunk
- VCL - 1/(1 - (#VC/MAXVC))**a, where #VC is the
        number of virtual calls currently allocated
        to the trunk and MAXVC is the maximum number
        of virtual calls allowed on it.
Each term may be multiplied by some coefficient.
It should be noted that the load of the node itself
is incorporated in MID, since packets must stay on
the incoming LTU until the node is able to process
them.
The various variables in the formula giving the
trunk cost are measured by the VCSW and reported to
VCM, which calculates the cost and transmits it to
PNM at regular intervals.
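As a worked example, the trunk cost can be computed directly from the formula, assuming the garbled VCL term reads 1/(1 - (#VC/MAXVC))**a. The exponent value, the choice of coefficients (taken as 1 here), and the numeric inputs are illustrative.

```python
def trunk_cost(mfz, ls, mid, mod, n_vc, max_vc, a=2.0):
    """Trunk cost = (MFZ/LS) + MID + MOD + VCL, times in msec.
    VCL = 1/(1 - (#VC/MAXVC))**a rises steeply as the trunk
    approaches its maximum number of virtual calls."""
    vcl = 1.0 / (1.0 - n_vc / max_vc) ** a
    return mfz / ls + mid + mod + vcl

# Example: 1000-bit mean frames on a 56 kbit/s trunk (56 bits/msec),
# 5 msec mean delay on each LTU, 30 of 60 virtual calls allocated.
cost = trunk_cost(mfz=1000, ls=56, mid=5.0, mod=5.0, n_vc=30, max_vc=60)
```

Note how the VCL term dominates as allocation nears MAXVC, steering new virtual calls away from trunks that are almost full.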
6.9 Electronic Mail Software Package
o The Electronic Mail Software Package implements
  the protected message service in the ACDN. From
  a service design point of view, PMS is a service
  where users/subscribers deliver messages to the
  ACDN and the ACDN accepts responsibility for
  delivering the message to the appropriate
  destination. The implied functionality in
  assuming the above responsibility is that of a
  high integrity store-and-forward message switch.
This section provides the background for the
proposed EMH implementation. Section 6.9.1 provides
a short description of each of the major components
of the Electronic Mail Software Package.
In the specific instance of the ACDN with the
anticipated volumes for PMS, it has been necessary
to dedicate a processor complex, the EMH. The
software functions needed in such a dedicated host
are functionally similar to those contained in the
present LME-net Network Control Centre, with some
modifications.
a. End to End Control
   This task is responsible for cooperating
   with session services in the NCC and
   implementing end to end control on sessions
   carrying type B traffic.
   The overall session control is still with the
   session services in the NCC. The destination
   validation is still done by the NCC.
b. Connection Allocation Services (CAS)
   A Connection Allocation Services package is
   resident in the EMH.
   This CAS provides the required dynamic tables
   to create session control blocks and linkages
   to control blocks in Transport Network
   software and TAS software.
c. File Storage and Retrieval Services
   This task and its associated routines,
   resident in the EMH, provide the necessary
   high performance file storage and retrieval
   functions. The task is initiated by the store
   and forward services.
d. Host Protocol Emulators (HPE)
   In support of the different protocol
   requirements towards the Univac and Honeywell
   sites, a separate Host Session Emulator is
   provided in the EMH.
   Section 6.9.2 provides an overview of the FIKS
   system.
e. External Devices
   To perform the previously mentioned functions
   on the EMH, external devices such as discs,
   VDUs and line printers are attached.
   A number of mirrored disc systems serve as the
   data base of the EMH with a capacity which
   fulfils the functional requirements. Write
   operations are performed on both discs to
   minimise the risk of corrupting data. During
   recovery, normal operation can continue while
   update of the replaced disc takes place. VDUs
   are attached for manual correction of messages
   which the EMH cannot deliver automatically,
   e.g. garbled messages.
   The attached line printer is able to handle the
   TDP subset of type B traffic.
f. Reconfiguration and Recovery
   The reconfiguration software in the EMH
   receives and carries out the changes in
   configuration tables etc. Also local
   reconfiguration inquiries are handled by this
   configuration software application.
   The tool for performing a proper recovery is
   the checkpoint mechanism.
   Checkpointing, i.e. saving certain events
   during a message flow, is implemented by
   checkpointing to disc. From these checkpoints
   the recovery program is able to establish the
   necessary delivery of all traffic types
   according to what will be required after a
   possible system failure.
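The checkpoint/recovery idea can be sketched as an append-only event log on disc which the recovery program replays after a failure. The event names, record layout, and message identifiers below are assumptions made for the sketch:

```python
import json
import os
import tempfile

# Minimal sketch: each significant event in a message's life is
# appended to a checkpoint file on disc; recovery replays the log
# to find traffic that still has to be delivered.

def checkpoint(path, event, msg_id):
    """Append one checkpoint record (one JSON object per line)."""
    with open(path, "a") as f:
        f.write(json.dumps({"event": event, "msg": msg_id}) + "\n")

def undelivered(path):
    """Replay the log: accepted messages not yet delivered."""
    accepted, delivered = set(), set()
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            target = accepted if rec["event"] == "ACCEPTED" else delivered
            target.add(rec["msg"])
    return accepted - delivered

path = os.path.join(tempfile.mkdtemp(), "ckpt.log")
checkpoint(path, "ACCEPTED", "A21")
checkpoint(path, "ACCEPTED", "A22")
checkpoint(path, "DELIVERED", "A21")
print(sorted(undelivered(path)))  # ['A22']
```

In the real EMH the checkpoints would also capture intermediate states per traffic type; the append-only structure is what makes replay after an arbitrary failure point possible.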
6.9.2 Protected Message Switching
o This protected message switching mechanism will be
  implemented in much the same way as Christian
  Rovsing solved the Danish Defence communication
  problem.
The communication system, called FIKS, described
below, is able to communicate in a secure and
reliable manner, as is required for the protected
message switching service. Obviously, modifications
will have to be carried out to adopt the facilities
necessary for achieving the correct PMS service.
6.9.2.1 FIKS Definition and Scope
The DANISH INTEGRATED COMMUNICATION SYSTEM, FIKS, is
a fully integrated communications network for the
rapid, reliable, and efficient automated transfer of
message and data traffic shared by multiple users
for a variety of Danish military and defence
applications.
FIKS provides dedicated network facilities and nodal
switching centres to service communication centres
(comcentres) and interconnect data terminals and
computer systems geographically distributed
throughout Denmark.
The FIKS network facilities consist of dedicated
high speed internodal trunks shared by all users and
dedicated lines connecting users and small
comcentres to the nodal switching centres. The
nodal switching centres are configured from three
functional entities:
the NODE - providing access to FIKS for data
           terminals, interfacing MEDEs, and
           performing network-orientated functions
           common to both data and message traffic,
the MEDE - message entry and distribution equipment,
           providing access to FIKS for
           communications centres and performing
           terminal-orientated functions related to
           message traffic,
the SCC  - system control centre, providing network
           supervision and control, and functioning
           as software development and maintenance
           centres.
These FIKS system elements may be co-located and
physically integrated.
Initially, FIKS is structured as an 8-NODE grid
network whose topology is shown in figure III 6.9-3.
6.9.2.2 System Summary
FIKS, the Danish Defence Integrated Communication
System, is an integrated and fully-automated message
and data communication network, replacing the
individual torn tape message traffic networks and
dedicated data circuits until now operated by the
three services - army, navy, and airforce.
6.9.2.3 FIKS Nodal Network
FIKS consists of a multinode network geographically
distributed throughout Denmark in a grid
configuration and interconnected via full-duplex
trunks operating at 9,6 Kbit. These internodal
trunks are permanently leased circuits backed up by
automatically-dialled PTT data circuits. The
internodal trunks may be upgraded to 64 Kbit when
higher traffic rates are required.
Message and data traffic is interchanged between
military users under control of computerised nodal
switching centres. NODE and MEDE processors are
co-located.
The internodal trunk circuits carry a mixture of
message and data traffic. The 9,6 Kbit bandwidth is
dynamically allocated between message and data
sources. A minimum of 1,2 Kbit will always be
available for message traffic, and 2,4 Kbit is
reserved for signalling and protocol overhead (see
Fig. III 6.9-4). The remaining bandwidth of 6,0
Kbit is divided into 20 time slots, each with a
capacity of 300 bps. These slots are dynamically
allocated to continuous and discontinuous (polling,
contention and dial-up) data traffic. Data traffic
sources will be allowed to use the 300 bps slots in
accordance with bandwidth requirements and priority.
Up to 15 different priority levels are used, and the
nodal software automatically preempts lower priority
data users if bandwidth becomes too small to
accommodate all data users simultaneously.
Preemption should however only take place when the
network becomes partly inoperable due to trunk or
equipment failure.
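The slot allocation scheme can be illustrated as follows. This Python sketch grants the 20 slots of 300 bps in priority order, so that lower-priority users are the first to lose bandwidth when the trunk is short of slots; the user names, priorities and speeds are invented, and the sketch ignores the rule that preemption happens only under partial network failure:

```python
# Illustrative allocation of 20 x 300 bps slots by priority
# (lower number = higher priority).

TOTAL_SLOTS = 20
SLOT_BPS = 300

def allocate(users):
    """users: list of (name, priority, speed_bps).
    Returns the names of the users that get their slots."""
    granted, used = [], 0
    for name, prio, speed in sorted(users, key=lambda u: u[1]):
        need = speed // SLOT_BPS        # slots required for this rate
        if used + need <= TOTAL_SLOTS:  # grant only if slots remain
            granted.append(name)
            used += need
    return granted

users = [("radar", 1, 4800), ("logistics", 5, 2400),
         ("admin", 9, 1200), ("stats", 12, 600)]
print(allocate(users))  # ['radar', 'admin']
```

Here the 4800 bps user takes 16 slots; the 2400 bps user cannot fit in the 4 remaining and is skipped, while the 1200 bps user exactly fills the trunk.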
6.9.2.4 Message Users
Message users are served through 23 COMCENTRES,
which are given access to FIKS through dedicated or
multiplexed low and medium speed circuits terminated
in the NODE/MEDE processors.
All message traffic is encrypted, and message
traffic rates between 50 and 2400 bps can be
accommodated.
Based on the current message traffic input of about
2000 messages per busy hour, FIKS is initially sized
to handle a throughput of 25.000 messages per busy
hour, which will include messages, retrievals,
reports, control messages, and a 25% spare capacity.
Each NODE has a throughput of 3 messages per second
(1000 character messages).
6.9.2.5 Data Users
Data users, consisting initially of 12 different
data systems, exchange information through FIKS on
a continuous or discontinuous basis through direct
internodal trunks. Up to 15 different data users
with speeds ranging from 300-4800 bps may be
multiplexed on each 9,6 Kbit trunk. Data channel
set-up time is less than 75 msec. per NODE, and
delay variation with respect to set-up time is less
than 50 msec. per NODE.
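A back-of-envelope use of these per-NODE figures, assuming the bounds simply add along the path (that additivity is an assumption made for the sketch, not a claim from the proposal):

```python
# Worst-case data-channel establishment time across a path of
# n NODEs, using the stated per-NODE bounds: < 75 ms set-up
# plus < 50 ms delay variation per NODE.

def worst_case_setup_ms(nodes, per_node_setup=75, per_node_variation=50):
    """Upper bound on set-up time for a path crossing `nodes` NODEs."""
    return nodes * (per_node_setup + per_node_variation)

# A call traversing 4 NODEs of the 8-NODE grid:
print(worst_case_setup_ms(4))  # 500
```

Even across half the 8-NODE grid the bound stays well under a second, which is consistent with the discontinuous (call-up) service having a predictable set-up time.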
6.9.2.6 Network Supervision and Control
The entire FIKS network is monitored and supervised
by two System Control Centres, SCCs. The SCCs
handle the exchange of messages between FIKS and
NICS-TARE on a fully automatic basis. The network
is capable of functioning without the SCCs.
6.9.2.7 FIKS Generic Elements
The generic elements of FIKS and their
interrelationship are shown in Figure III 6.9-5.
The various demarcation points which will be
encountered between the NODE/MEDE/SCCs, FIKS
network, COMCENTREs, message terminals, data
systems, computers, and data terminals are also
indicated. A system overview giving more details
about interconnection of the FIKS elements is shown
in Figure III 6.9-6. The NODE processor is
collocated with the MEDE in the red area for
security reasons. The NODE Line Termination Units
(LTUs) and LTU controller are located in the black
area as they will carry only encrypted or non-secure
traffic.
6.9.2.8 Traffic Security
FIKS handles all security classifications of
narrative messages and data transmission (i.e.
Danish and NATO Unclassified, Restricted,
Confidential, and Secret). Security measures
ensure that only authorised viewers will be allowed
to examine message content.
Provisions have been made for security class
marking, protection of stored messages against
unauthorised retrieval, message deletion, and
special handling procedures.
Cryptographic security equipment protects all
transmissions. Automatic detection of crypto
garbling prevents loss of information.
Data streams requiring security are
terminal-to-terminal encrypted and routed through
FIKS without need for decryption and re-encryption
at intermediate nodes.
Stable timing is provided from frequency standards
to maintain end-to-end synchronisation and bit count
integrity throughout the network for several weeks
without adjustment.
FIKS is designed to prevent misrouting, inadvertent
plain text, and unauthorised access and retrieval.
Nodal switching equipment is separable into RED
areas for MEDEs, where plain text unencrypted
information is allowed, and BLACK areas for NODEs,
where classified information appears only in
encrypted form.
6.9.2.9 Message and Data Traffic
Four categories of traffic are handled: (1)
narrative messages with precedence and multiple
addresses in FIKS standard message format (SMF) with
the essential elements of the ACP-127 format; (2)
service messages using an abbreviated format; (3)
continuous data requiring virtually dedicated
channels with minimum delay variation and routed as
an uninterrupted bit stream; (4) discontinuous data
requiring channels on a call-up basis with
predictable set-up time and delay. For message
traffic, FIKS will accept either 5-level
(Baudot/ITA-2) or 7-level (ASCII/ITA-5) codes;
internally, message processing and storage will be
in ASCII code.
For data traffic, FIKS will accept any format or
code, as FIKS is transparent to data traffic.
Narrative messages are modified before transmission
to add an envelope containing FIKS internodal
routing and local address information, but the
original messages are restored at the destination
terminals.
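The enveloping step can be sketched as below. The field names and the dictionary representation are assumptions made for illustration; the real envelope is a FIKS header format, not shown here:

```python
# Sketch: a transit envelope with routing and local address
# information is added for the journey through the network and
# stripped again on delivery, leaving the original message intact.

def envelope(msg, routing_indicator, local_address):
    """Wrap a narrative message for internodal transit."""
    return {"route": routing_indicator, "addr": local_address,
            "body": msg}

def deliver(env):
    """Strip the envelope at the destination terminal."""
    return env["body"]

original = "FM CHODDEN TO AIG 1601 ..."
env = envelope(original, routing_indicator="N3", local_address="A123")
assert deliver(env) == original  # original message restored
```

The same wrap/unwrap symmetry is what makes the network transparent to the continuous and discontinuous data categories, which carry no envelope at all.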
The FIKS network is entirely transparent to the
format and protocols used for the continuous and
discontinuous data categories.
Internal to the FIKS network, between NODEs, all
traffic is handled as packets compatible with CCITT
X.25 protocol.
A special protocol (LITSYNC) is used between FIKS
and NICS/TARE.
6.9.2.10 Message Entry and Preparation
Messages enter the FIKS network from a number of
message preparation and receiving terminals such as
teleprinters and visual display units. Each MEDE
initially serves up to 30 full duplex terminals.
However, the total capacity of the MEDE is 242
terminals and 12 interfaces to host computers.
Message preparation is interactive with prompts
from the MEDE computer. An example of a message
preparation format is shown in figure III 6.9-7.
The underlined portions are either prompts or other
computer-inserted information. Address information
is keyed-in as a character representing the MEDE to
which the terminal is connected, followed by 3
digits. The computer replaces this by the current
address, which then appears in the delivered
message (Figure III 6.9-8).
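The short-address substitution can be sketched as a simple table lookup. The table contents below are invented for the example; the real address tables are maintained by the supervisors and the SCC:

```python
# Sketch: the operator keys one character identifying the MEDE
# plus three digits; the computer substitutes the full address
# before the message enters the network.

address_table = {
    "A123": "CHODDEN",     # invented entries for illustration
    "A160": "AIG 1601",
    "X115": "SHAPE",
}

def expand(short_code):
    """Replace a keyed-in short address by the current address.
    Unknown codes are passed through unchanged."""
    return address_table.get(short_code, short_code)

print(expand("A123"))  # CHODDEN
```

Keeping the mapping in a table rather than in the message means an address change takes effect immediately for all terminals served by the MEDE.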
Message terminal operators can use a number of
interactive procedures such as:
- preparation (4 types)
- coordination
- release
- retrieval
- readdressing
- distribution, local
- log on
- log off
- special handling
- editing
The MEDEs are manned 24 hours daily, and MEDE
supervisors have control of the security, the
traffic, and the system and its terminals. A number
of special procedures are available for supervisors:
- distribution (2 types)
- control of terminal queue status
- re-arrangement of queues
- relocation of queues
- re-routing of terminal traffic
- block/unblock terminals
- security interrogation of terminals
- establishment of PTT data net connections
- up-dating of route and address tables
- security profile handling
- call-up of daily traffic statistics
and many other procedures.
Full accountability is provided for all messages.
Messages are queued by precedence (Flash, Immediate,
Priority, Routine, and two other as yet unspecified
precedence levels).
All outgoing and incoming messages are stored at the
MEDEs for 10 days. SPECAT messages will be deleted
from local storage after transmission and delivery.
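The precedence queueing discipline can be sketched with a heap. The message identifiers are invented, and the assumption (not stated above) is that within one precedence level messages leave in arrival order:

```python
import heapq

# Sketch: Flash leaves first, then Immediate, Priority, Routine;
# a sequence counter keeps arrival order within each level.

PRECEDENCE = {"FLASH": 0, "IMMEDIATE": 1, "PRIORITY": 2, "ROUTINE": 3}

class MessageQueue:
    def __init__(self):
        self._heap, self._seq = [], 0

    def enqueue(self, precedence, msg_id):
        heapq.heappush(self._heap,
                       (PRECEDENCE[precedence], self._seq, msg_id))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = MessageQueue()
q.enqueue("ROUTINE", "R1")
q.enqueue("FLASH", "F1")
q.enqueue("PRIORITY", "P1")
print(q.dequeue(), q.dequeue(), q.dequeue())  # F1 P1 R1
```

The two unspecified precedence levels would simply be two further entries in the PRECEDENCE table.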
Retrieval of messages from the 10 days storage by
authorised users is provided. Messages can be
retrieved by message identification, subject
indicator codes (SIC), and date/time indication.
FIGURE 6.9-7
FIKS MESSAGE PREPARATION FORMAT
(CR) = carriage return
EXAMPLE
0801 KAB
NATO RESTRICTED
0 E 012347z JAN 80 MSG ID ABC 123
FM CHODDEN
TO AIG 1601
TECDEN
INFO SHAPE
BT
NATO RESTRICTED
SIC RHQ
IN REPLY REFER TO TST 312.1-1227
SUBJECT CONTRACT NO FK 7900
IN ACCORDANCE WITH PARAGRAPH 16.5 OF THE SUBJECT
CONTRACT AMC IS PLEASED TO SUBMIT AN ORDER FOR THE
OPTION FOR ADDITIONAL RDS-V
PPI DISPLAYS AS FOLLOWS
QTY IN UNITED STATES DOLLARS
1-2 1000 DOLLARS EA
3-6 976 DOLLARS EA
THE EQUIPMENT SHALL INCLUDE THE RDS-V PPI DISPLAY/DATA
ENTRY
AND TRACKBALL WITH THE NECESSARY SYSTEM MODIFICATION
TO ALLOW
SEPARATION OF THE DISPLAY UP TO 3500 METERS. DELIVERY
SHALL
BE ACCOMPLISHED AT THE RATE OF TWO PER MONTH STARTING
10
MONTHS AFTER RECEIPT OF A CONTRACT MODIFICATION. ALL
OTHER
ITEMS AND CONDITIONS SHALL BE IN ACCORDANCE WITH THE
SUBJECT
CONTRACT.
BT
INT DIST 0-DIV
ACCEPTANCE TIME 020005z
RETRIEVAL TIME 020006z
NATO RESTRICTED
FIKS HARD COPY EXAMPLE
FIGURE 6.9-8
6.9.2.11 Message Routing and Data Switching
Message traffic is relayed from the originating
MEDEs through intermediate FIKS NODEs to the
destination MEDEs, and data traffic is transferred
between terminals directly interconnected to FIKS
NODEs over internodal trunks. The associated
message routing and data line switching functions
are allocated to the NODE processors.
Messages received by the NODE are routed to other
NODEs or delivered to the locally connected MEDE on
the basis of routing indicators and precedence
contained in a special header. Each NODE is
interconnected to adjacent NODEs through at least 3
independently routed trunks. The optimum trunk
route to the final destination NODE is based upon
shortest route (minimum hop) and network
connectivity. A routing algorithm is used which
allows NODEs to operate independently of SCC
control. The SCC is informed of all changes in the
network and calculates routing tables for
optimisation of the network traffic. The SCC
routing algorithm uses weighted delay factors for
the individual trunks. These weighting factors
will be derived from the traffic queue-reports and
be used to calculate message routing tables, which
are down-loaded to the NODEs.
…02……02……02…The routing tables contain three alternative routes
…02……02……02…per destination and the NODEs select the proper
…02……02……02…routes from the tables based on trunk queue lengths.
…02……02……02…If both SCCs are inoperative, the NODE/MEDE
…02……02……02…supervisors can manually update the tables.
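The NODE's choice among the three down-loaded alternatives can be sketched as follows; the trunk names and queue lengths are invented for the example:

```python
# Sketch: the routing table holds three alternative routes per
# destination; the NODE picks, among the currently open trunks,
# the one with the shortest trunk queue.

def select_route(alternatives, queue_len, open_trunks):
    """Return the open alternative with the shortest queue,
    or None if no alternative is usable."""
    usable = [r for r in alternatives if r in open_trunks]
    return min(usable, key=lambda r: queue_len[r]) if usable else None

routes_to_N7 = ["T1", "T2", "T3"]        # three table alternatives
queue_len = {"T1": 12, "T2": 3, "T3": 7}
print(select_route(routes_to_N7, queue_len, {"T1", "T3"}))  # T3
```

Here T2 would be cheapest but its trunk is closed, so the NODE falls back to T3; only if all three alternatives were closed would the message wait for a table update.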
Data traffic, both continuous and discontinuous, is
switched through predetermined routes over
internodal trunks. Each data user is allocated a
primary and a secondary route through the network.
If the primary route fails, the secondary route is
automatically established. Switch back to the
primary route is controlled by supervisory commands.
End-to-end set-up and transmission delays will be
less than 12 seconds. The NODE is transparent to
data traffic; all data traffic is in the black.
Crypto synchronisation, channel coordination, error
control, and recovery procedures are
terminal-to-terminal or computer-to-computer.
6.9.2.12 System Supervision and Control
Centralised supervision and control of the overall
FIKS network maintains network efficiency and
regulates or restores service in case of congestion,
outages, or failures. Continuous network status is
monitored and displayed at the System Control
Centres. Two SCCs are provided, but neither is
dualised; back-up is geographic. Both SCCs may be
on-line, with one exercising network control and the
other on standby monitoring the network; or, the
second may be off-line and dedicated to programme
development, maintenance, or training.
The SCCs exercise control of the network by use of a
number of procedures:
- threshold setting for trunk queue lengths
- threshold setting for message retransmission
  rate
- control of SCC switchover
- request of diagnostic results from NODE/MEDEs
- open/close trunks
- etc.
Control messages from the NODE/MEDEs concerning
traffic queues, trunk and NODE status,
retransmission rate, equipment availability,
etc. are transmitted to the SCCs; from these,
statistics are gathered, alarm conditions noted, and
reports presented to allow timely network decisions
by supervisory personnel. Preventive and corrective
action is initiated by operating personnel. A log
of control messages and SCC actions provides an
audit trail to trace all network control actions.
…02……02……02…Downline loading of routing, security and address
…02……02……02…tables from the SCC to the network permits selective
…02……02……02…rerouting of message traffic, change of routing
…02……02……02…plan, reconfiguration of the network, and changes of
…02……02……02…security tables.
The current operational status of the FIKS nodal
network is displayed on a colour TV, dynamically
updated by reports and alarms from the network (see
Fig. III 6.9-9). The open/closed status of each
internodal trunk and active PTT back-up channels, as
well as configuration and availability of each
NODE/MEDE and SCC, is displayed.
Statistics are gathered by the SCC from control
messages, periodic reports and traffic received from
the network. Message flow, trunk usage, queueing
delays, outages, equipment up-time, and other
statistics will be available for off-line
statistical analysis, reports and network planning.
A summary message traffic report will be
automatically generated and distributed every 24
hours to the NODE/MEDEs.
The interchange of message traffic between FIKS and
the external NICS/TARE network is performed by the
SCCs. TARE may send messages to FIKS terminals;
national routing indicators and addressees will be
recognised, and the message will be converted from
ACP-127 format to FIKS Standard Message Format for
routing and distribution on the FIKS network.
Similarly, FIKS terminals send messages to TARE
using NATO addresses. Valid NICS routing indicators
are extracted from an SCC file, and the message is
translated to ACP-127 format for transmission on the
FIKS/NICS channel. The recognisable NICS routing
indicators are held in this SCC file; traffic
containing undefined NATO addresses or errors is
intercepted for manual handling.
Maintenance of the system is performed partly by
NODE/MEDE supervisors, who are cross-trained to
operate the off-line diagnostic programmes, change
modules and perform manual switch-over, and partly
by technicians located at the two SCCs and a mobile
technician team which can be called out to the
different sites to locate and repair faults.
Software personnel will be located at the two SCCs.