6.6 C̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲

6.6.1 H̲i̲g̲h̲ ̲L̲e̲v̲e̲l̲ ̲S̲e̲r̲v̲i̲c̲e̲ ̲S̲u̲b̲s̲y̲s̲t̲e̲m̲ ̲(̲H̲S̲S̲)̲
o This subsystem supports the interface to the network at the presentation level of the OSI reference model. The subsystem supports the virtual terminal protocol (VTP), the file transfer protocol (FTP) and the internal session layer protocol (ISL). Two simple protocols, the nodal transport interface (NTI) and the queue interface (QI), support the interface to other submodules.

The HSS interfaces to the following submodules:
- TAS: Terminal access subsystem
- HAS: Host access subsystem
- IES: ICC emulating subsystem
- EMH: Electronic mail host (SITA, ARINC etc. subsystem)
- NSS: Nodal switch subsystem
The HSS is used when a network user wants to create a virtual circuit to another point in the network, e.g. a terminal that wants to communicate with a specific host. The request for the virtual circuit is serviced by the internal session layer. The setup request must contain information about the presentation layer protocol (VTP or FTP) to be used on the actual virtual circuit. When setting up the circuit, the ISL interfaces to the NSS through the NTI.
The queue interface (QI) to the HSS will contain one pair of queues per priority defined. This ensures quick response for type A traffic. Only when the high priority queues are empty are lower priorities serviced. A special control queue is defined to support the session layer. All kinds of setup and close-down requests will be inserted in this queue.
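To illustrate the servicing discipline described above, a minimal sketch is given below (Python is used purely for illustration; all names are assumed, as is the servicing of the control queue ahead of the data queues):

    from collections import deque

    class QueueInterface:
        """Sketch of the QI discipline: one pair of queues per defined
        priority, a separate control queue for session setup/close-down
        requests, and strict priority servicing (names assumed)."""

        def __init__(self, priorities):
            # one (inbound, outbound) queue pair per defined priority
            self.pairs = {p: (deque(), deque()) for p in priorities}
            self.control = deque()            # setup / close-down requests
            self.order = sorted(priorities)   # 0 = highest priority

        def next_message(self):
            # the session layer control queue is inspected first (assumption)
            if self.control:
                return self.control.popleft()
            # lower priorities are only serviced when all higher-priority
            # queues are empty
            for p in self.order:
                inbound, outbound = self.pairs[p]
                if inbound:
                    return inbound.popleft()
                if outbound:
                    return outbound.popleft()
            return None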
A virtual circuit setup request must contain information about distribution address, priority, rate, protected/non-protected and test/operational/other.
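As an illustration, the setup request parameters listed above could be collected in a record such as the following sketch (field names and types are assumptions, not part of the design):

    from dataclasses import dataclass

    @dataclass
    class VirtualCircuitSetupRequest:
        # fields follow the proposal text; names and types are assumed
        distribution_address: str
        priority: int
        rate: int                   # requested data rate
        protected: bool             # protected / non-protected
        mode: str                   # 'test', 'operational' or 'other'
        presentation_protocol: str  # 'VTP' or 'FTP', per section 6.6.1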
A special kind of session layer service is the transfer of statistical and billing information to the Network Control Center.
6.6.2 T̲e̲r̲m̲i̲n̲a̲l̲ ̲A̲c̲c̲e̲s̲s̲ ̲S̲u̲b̲s̲y̲s̲t̲e̲m̲
o The Terminal access subsystem handles the layered
interface and access between the physical terminals
and the backbone network.
The following levels are present; they will be described in the next sections:
- ICC physical link layer (IPL)
- ICC link layer (ILL)
- ICC protocol to Network Layer (IPN)
- Terminal Transport Layer (TTL)
- Terminal Session Layer (TSL)
- Terminal Protocol/Printer Protocol (TP/PP)
- Application Layer (APP)
6.6.2.1 T̲h̲r̲e̲e̲ ̲L̲o̲w̲e̲s̲t̲ ̲L̲e̲v̲e̲l̲s̲
The 3 lowest levels will be supported to accommodate the already existing ICC interface. Details can be found in the Air Canada request for quotation, appendix II.
6.6.2.2 T̲e̲r̲m̲i̲n̲a̲l̲ ̲T̲r̲a̲n̲s̲p̲o̲r̲t̲ ̲L̲a̲y̲e̲r̲
The transport layer supports reliable transport through several interconnections, e.g. terminal, RLMC, PCTG, 8562, etc. Tasks to be performed by this layer are:
- to control the non-zero permission-to-send count (PC) mechanism of the quasi-freewheeling protocol (a sketch of this mechanism follows after this list).
- to control the RLMC contention in combination with the trunk queue size.
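A minimal sketch of a permission-to-send count mechanism is given below, assuming a simple credit scheme (the class name, the replenishment policy and the credit granularity are all assumptions):

    class PermissionToSendCount:
        """Illustrative credit-based flow control: a sender may only
        transmit while its permission-to-send count (PC) is non-zero."""

        def __init__(self, initial_credit):
            self.pc = initial_credit

        def can_send(self):
            return self.pc > 0

        def on_send(self):
            assert self.pc > 0
            self.pc -= 1        # each transmitted unit consumes one credit

        def on_grant(self, credit):
            self.pc += credit   # the receiver grants new permissions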
6.6.2.3 T̲e̲r̲m̲i̲n̲a̲l̲ ̲S̲e̲s̲s̲i̲o̲n̲ ̲L̲a̲y̲e̲r̲
The terminal session layer contains all kinds of access and security information. The contents of this information are described in further detail in section 4.2, under the terminal user interface.
6.6.2.4 T̲e̲r̲m̲i̲n̲a̲l̲ ̲p̲r̲o̲t̲o̲c̲o̲l̲/̲P̲r̲i̲n̲t̲e̲r̲ ̲P̲r̲o̲t̲o̲c̲o̲l̲ ̲L̲a̲y̲e̲r̲
This TP/PP layer contains the programs which the devices
use to gain access to the function-orientated protocols.
Included in the programme is the translation of requests
into local or remote requests. Furthermore, this layer
controls the screen facilities, such as split screen,
protected fields, etc.
6.6.2.5 A̲p̲p̲l̲i̲c̲a̲t̲i̲o̲n̲ ̲L̲a̲y̲e̲r̲
This application layer is user-defined and consists
of the specific operator functions, such as ticket
reservation, time schedule presentation, etc.
6.6.3 H̲o̲s̲t̲ ̲A̲c̲c̲e̲s̲s̲ ̲S̲u̲b̲s̲y̲s̲t̲e̲m̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
The Host Access Subsystem (HAS) is a logical part of
a Host Access Network residing in one or more CR80
front end processors.
HAS interfaces the backbone network HSS (High Level Service Subsystem) at the session-control layer in terms of the ISO-OSI reference model. This implies that the protocol layers 6 and 7 (presentation control and process control) of the backbone network have to be supported either in HAS or in the host system, depending on the choice of virtual and real terminal protocols of the network.
To generalize the functionality of the interface, this description is made on the assumption that the network will provide service to interactive terminals and will support bulk transfers.

These services will be provided by the network by means of virtual protocols residing in the network.

In general the choice of virtual protocols depends on the service wanted. Apart from the two services mentioned, it is the intention of CR to implement other necessary virtual protocols on request, for instance a standardized graphic protocol and a protocol covering the very high speed (GHz) classes used for microwave and satellite communication.
HAS interfaces the various host systems by a channel connection, emulating a front-end and networking system as defined and implemented by the host vendor's networking architecture.

The functions of HAS will be

- to interface to HSS
- to map between levels 4 to 7 of the host vendor's network and the backbone network
- to interface the host's channel.
Figure III 6.11.1 Logical/Physical UNIVAC Host interface
An implementation of HAS is shown in figure III 6.11.1.
The mainframe is considered to be a UNIVAC 1100/8x or 1100/6x, with the operating system OS1100 also running CMS 1100. This implies that the emulation of the DCP/40 performed in the CR80 FEP will be according to Telcon 4Rx.
T̲h̲e̲ ̲H̲S̲S̲ ̲I̲/̲F̲ ̲M̲o̲d̲u̲l̲e̲
The communication with the backbone network is done
via the HSS I/F module. The main function of the HSS
I/F-module is to convert the addressing scheme used
in the backbone network to the addressing scheme used
in the DCA-network and to enable the usage of the network
control functions in the backbone network and the Network
Management sessions defined in the DCA-network.
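As a sketch of the address conversion performed by the HSS I/F module, a simple static translation table is assumed below (the table contents, tuple layout and function names are invented for illustration only):

    # Hypothetical mapping from backbone addresses to DCA addresses;
    # the entries and the address structure are assumptions.
    BACKBONE_TO_DCA = {
        ('YYZ', 'TERM017'): (12, 3),   # (site, unit) -> (TS number, LP number)
    }

    def to_dca_address(site, unit):
        """Translate a backbone network address to a DCA address."""
        try:
            return BACKBONE_TO_DCA[(site, unit)]
        except KeyError:
            raise ValueError('no DCA address configured for %s/%s' % (site, unit))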
T̲h̲e̲ ̲V̲T̲P̲ ̲m̲o̲d̲u̲l̲e̲
The VTP module represents a virtual network protocol for interactive terminals. CCITT in a way defines a low level virtual terminal standard by the three standards X.3, X.28 and X.29. Combined, these standards form a so-called scroll-mode VT offering user selectable PAD functions described as a set of parameters.

The virtual terminal called triple X will not be sufficient to cover the needs of the terminals to be supported in the backbone network. Consequently the design will be extended with the functions necessary to cover the level of terminal service needed for VDU's like the UTS 400 and IBM 3270 BSC; the line of services supported will thus be as described for the terminal class 'form mode' described in ISO/TC97/SC16 N557 and ISO/TC97/SC16 N666, also known as ECMA/TC23/81/53.
T̲h̲e̲ ̲F̲T̲P̲ ̲M̲o̲d̲u̲l̲e̲
The FTP module represents a virtual network protocol for bulk transfers. The implementation in the backbone network, supported to enable multi-host access to remote facilities like printers and card readers, will be the relevant parts of FTP-B(80), also known as the blue book (Data Comm. Protocols Unit, NPL, GB).

The line of services foreseen to be adopted in a first implementation will be the host-to-host transfer of files at a low level, i.e. print files, whereas at a later stage a full implementation including job-service may be implemented according to existing standards (ISO/TC97/SC16 N628 or later).
T̲h̲e̲ ̲D̲M̲F̲ ̲t̲o̲ ̲S̲C̲ ̲m̲o̲d̲u̲l̲e̲
The DMF to SC module will perform the necessary mapping functions between the session control layers in the DCA-network and the backbone network. It is a tool used for dynamic session control from one network to the other. As CMS1100/Telcon 4 is not yet released, it is rather difficult to describe the UNIVAC part of the module further; however, it is considered that some of the facts known from CMS7R2/Telcon 3RX will still have value, although it has been promised that the session part of the DCA protocols will be more dynamic. The following is a description of the sessions of the DCA-network as we know them at present.
The Telcon logical port sessions are determined at configuration time. When a terminal or batch station signs on to the network it may optionally request an alternative LP-session by indicating the called party. A number of LP-sessions connecting the FEP to the Telcon-network terminals must thus be configured for those Telcon terminal groups that want to communicate via the network. Note that up to 255 terminals may share an LP-session. Since the backbone network sessions are established dynamically, we will have no configuration problems at this end of the FEP.
System sessions used for batch must be kept open at all times since the host is not able to establish a call to a specific site. This implies that batch output must in some way carry an address of the final destination in the data structure; otherwise it will not be possible to open the correct backbone network session.
Full flow control and end-to-end recovery may be difficult to implement since the two networks do not supply identical services. More retransmissions are also to be expected than if the end-to-end approach were used.
The end-to-end approach is the one where a common end-to-end
control is implemented in every host and every concentrator
in the interconnected network. This approach is not
possible in the project since most of the terminals
and concentrators will use local networks such as the
Telcon-network.
The endpoint approach requires resources in the hosts, whereas the selected approach will demand a very reliable FEP equipped with a sufficient amount of buffer space.
T̲h̲e̲ ̲D̲M̲F̲ ̲a̲n̲d̲ ̲V̲T̲R̲ ̲M̲o̲d̲u̲l̲e̲s̲
The CSU is the user of the Communication System (CS) and interfaces the TS's services by means of an interface protocol called the Data Management Facility (DMF). The DMF protocol is not only an interface to the TS but is also a protocol between paired DMF's. On top of this, the CSU contains a data formatting service used to convert device dependent code into device independent code.
The DCA concept contains what is called Port Presentation Services (PPS) as a part of the Logical Port (LP) in the TS. This implies, however, that the port session can be assigned dynamically; otherwise too many resources would be needed when many terminals are to be supported.
The DMF-to-DMF sessions are called system-sessions.
A number of system-sessions may share an LP-session.
Each system session may be opened and closed dynamically
by means of the DMF interface or protocol.
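The dynamic opening and closing of system-sessions over a statically configured LP-session can be sketched as follows (the limit of 255 is taken from the terminal sharing figure quoted earlier; applying it to system-sessions, and all names, are assumptions):

    class LPSession:
        """Sketch: several dynamically opened/closed system-sessions
        (DMF-to-DMF) multiplexed over one configured LP-session."""

        MAX_SYSTEM_SESSIONS = 255   # assumed bound, see text above

        def __init__(self, lp_number):
            self.lp_number = lp_number
            self.system_sessions = {}
            self.next_id = 0

        def open_system_session(self):
            if len(self.system_sessions) >= self.MAX_SYSTEM_SESSIONS:
                raise RuntimeError('LP-session full')
            sid = self.next_id
            self.next_id += 1
            self.system_sessions[sid] = 'open'
            return sid

        def close_system_session(self, sid):
            del self.system_sessions[sid]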
VTR supports batch and interactive terminals using the Data Presentation Protocol (DPP), with the UTS400 or user defined buffered CRT's, i.e. device group of generic class 4 (CONV4), as the real terminal for interactive traffic and the NTR (RB2) for bulk transfer.
T̲h̲e̲ ̲T̲S̲ ̲M̲o̲d̲u̲l̲e̲
The Termination System (TS) module will be implemented
according to the specification DCDS-LMENET/XXX/DSP/0010
with the necessary changes needed for CMS1100/Telcon
4RX.
F̲T̲P̲ ̲O̲p̲t̲i̲o̲n̲a̲l̲ ̲i̲m̲p̲l̲e̲m̲e̲n̲t̲a̲t̲i̲o̲n̲ ̲i̲n̲ ̲t̲h̲e̲ ̲H̲o̲s̲t̲
The existing Telcon bulk transfer protocols (NTR, 1004) are aimed at batch station support only and do not include general file transfer facilities. For that reason it is not possible to perform the mapping of the full FTP in the host interface.
The mapping barrier may be overcome by implementing the FTP as a special user-written CSU as part of the CMS system. Such an FTP-CSU can take full advantage of the functionality of OS1100 and provide for job scheduling, file manipulation and text routing.

If this type of FTP is implemented on two host systems, they may in a convenient manner exchange all types of files and start jobs in the other host system, observing the rules of the Job Control Language etc.
6.6.3.1 U̲N̲I̲V̲A̲C̲ ̲H̲o̲s̲t̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲
I̲n̲t̲r̲o̲d̲u̲c̲t̲i̲o̲n̲
The UNIVAC host interface has a design based upon experience
in CR.
The design is based upon knowledge of the hardware
and the software architecture of the main frame as
well as how to connect such a mainframe to a network
environment of the ISO-OSI-type.
The tools used to connect a UNIVAC-mainframe with a
network are:
- the distributed communications architecture (DCA)
- the ISO-OSI-type network as defined in other sections
of this paper
- the CR80D minicomputer
- the UNIVAC channel I/F.
The following gives a presentation of the various issues
and their interconnection.
T̲h̲e̲ ̲D̲C̲A̲ ̲C̲o̲n̲c̲e̲p̲t̲
DCA is a conceptual way of describing the functionalities of a network. DCA is not a specific implementation of a network. DCA uses the ISO-model of layered protocols, implying that a communication session from one user to another via the network must conform to a specific set of protocols (or rules).
An overview of the DCA/Telcon implementation is given
in figure III 6.6.3.1-1 below:
Fig. III 6.6.3.1-1
DCA/Telcon implementation overview
E̲n̲d̲ ̲U̲s̲e̲r̲ ̲(̲E̲U̲)̲
A network user (also called an end user, EU) will always be connected to another EU in the network via a system session. The two EU's can, as an example, be an interactive terminal and a user program in the host computer. The EU-to-EU protocol is in this case determined by the user program.
C̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲ ̲S̲y̲s̲t̲e̲m̲s̲ ̲U̲s̲e̲r̲ ̲(̲C̲S̲U̲)̲
An EU is always connected to the communications system (CS) via a so-called communications systems user (CSU), see fig. III 6.6.3.1-1.

The CSU's interface the communication system (CS) and are the pathway through which the EU connects to the CS. The CSU can for instance be a terminal driver in a micro-processor that converts the terminal data to or from a terminal independent format. A CSU must always be paired with a CSU at the other end of a system session. The paired CSU's in the above mentioned example could be a virtual terminal server in a host converting data from and to the user program. The CSU layer is very loosely defined in DCA terms and depends fully on the specific implementation in host, front-end and remote concentrator.
Here it shall be mentioned that the CSU functions will be formatting of EU-data (so-called UDU's, User Data Units), network monitoring and EU-flow control. The term EU-flow control covers pacing control to and from an EU.
Telcon will optionally include EU-to-EU retransmission
of EU-data.
T̲e̲r̲m̲i̲n̲a̲t̲i̲o̲n̲ ̲S̲y̲s̲t̲e̲m̲ ̲(̲T̲S̲)̲
Several EU's may share a logical port session (fig. III 5.4.1.1). An LP-session is defined as a communication path from one logical port to another logical port. The total amount of logical ports in a Telcon processor (host, nodal, Front End or Remote Concentrator) is called a Termination System (TS). Note that the CSU and the TS may be functionalities in the same processor and that more than one CSU may connect to the TS.

The TS routes the traffic generated by the CSU to the Transport Network (TN). The TS can be divided into 2 parts, the Port Flow Control (PFC) and the Logical Port Multiplexor (LPM).
P̲o̲r̲t̲ ̲F̲l̲o̲w̲ ̲C̲o̲n̲t̲r̲o̲l̲ ̲(̲P̲F̲C̲)̲
The Port Flow Control provides for:
- Initialization of an LP-session
- Sequence numbering of data presented to the port
(Port Data Units (PDU's)).
- Acknowledging of PDU's received.
- Optional retransmission of PDU's for which negative
acknowledge has been received.
- Control of data flow on an LP-session basis.
For each LP-session there will exist a pair of PFC's controlling the session by exchanging control information. Data and control information is routed to the Transport Network (TN) via the other part of the TS, the Logical Port Multiplexor (LPM).
L̲o̲g̲i̲c̲a̲l̲ ̲P̲o̲r̲t̲ ̲M̲u̲l̲t̲i̲p̲l̲e̲x̲e̲r̲ ̲(̲L̲P̲M̲)̲
The LPM is not a part of the paired-ends concept and does not exchange data or control messages with a paired counterpart, although LPM's always exist at both ends of an LP-session. The LPM is merely a standardized interface that controls the access from the TS to the TN. The TN, which may consist of one or more processors, has a corresponding interface for the TS controlled by the so-called Data Unit Control (DUC) layer. The DUC converts received Port Data Units (PDU's) to internal transport network packets called Network Data Units (NDU's) (if the paired receiver TS is connected to another DUC in a remote processor).

The functionality of this TS/TN interface is very similar to the CCITT X.25/3 interface proposed for public data networks. Apart from functions such as call-request and call-clear, the data and control formats are similar to the corresponding X.25/3 formats. Since no call-request packet is defined, the Transport Network operates with fixed virtual circuits called TN-sessions. Each session is identified at the TS/TN level by its logical subchannel number. This number does not need to be unique in the network since the path is already established and the logical subchannel number is only used to define the session at the TS/TN level.

A number of TS's may be configured to a FEP or node. In practice only one is internal to the processor, whereas the others are external TS's residing in external devices. In this way, a FEP or node may have several LP-sessions described by the same logical subchannel number distributed on different TS's. Internally each session is defined uniquely by a processor (node or FEP) number and an LP-session number. The numbers may be considered as a channel number and a logical subchannel number, respectively. A TS/TN subchannel is considered full duplex and maps into two simplex internal TN-sessions.
S̲u̲b̲-̲A̲r̲c̲h̲i̲t̲e̲c̲t̲u̲r̲a̲l̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲s̲ ̲(̲S̲A̲I̲)̲
When data is to be transferred from one processor to another, say a host and a FEP, a physical path is used for the data transfer. In the case of remote TS's attached to a FEP or a node, the Universal Data Link Control (UDLC) procedure is used. X.25/2 LAPB is a subset of this link procedure and is used in the Telcon implementation. The SAI interfaces are not part of the DCA concepts and depend on the implementation.
S̲u̲m̲m̲a̲r̲y̲ ̲o̲f̲ ̲t̲h̲e̲ ̲D̲C̲A̲ ̲c̲o̲n̲c̲e̲p̲t̲s̲
DCA is a set of rules that builds the frames into which the implementor, according to taste, puts suitable procedures and data formats. Figure III 6.6.3.1-2 shows how the protocol layers are passed as a message is transferred from an EU via a system-session to another EU.
Fig.III 6.6.3.1-2
The following steps apply to the figure:
A terminal signs on to the network via the line protocol handler. The message goes to the Virtual To Real (VTR) module and gets formatted before it is delivered to the Data Management Facility (DMF). The DMF sends the canned sign-on message back to the terminal when the necessary system resources become available. Then an 'open' request is sent to the paired DMF-port, and data may now be sent. Since DMF and VTR are placed in the same CSU, no interface between them needs to be specified. Data received from the EU is now multiplexed to a logical port via the PFC layer. From this point on the data, now called PDU's, is routed to the node processor via a UDLC SAI. The PDU is split into packets called Network Data Units (NDU's) by the DUC and is delivered to Route Control (RTC). RTC finds the trunk through which the packet is to leave the node. Trunk Control (TC) selects the physical channel within the trunk and queues the packets to the Sub-Architectural Interface (UDLC). When the packets reach the FEP, RTC selects the DUC and the DUC assembles the NDU's into PDU's. Retransmission of NDU's is not supported.
The full PDU is passed over to the host via a channel SAI and is presented to the host's TS. Retransmission of PDU's is supported at the PFC level and is made possible on an 'ackset' basis. The ackset is the number of contiguous PDU's that may be retransmitted from a PFC upon the reception of a 'retransmit' command. The PDU's are converted into EU data, so-called User Data Units (UDU's), by the host CSU and presented to the user program.
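The 'ackset' retransmission mechanism can be sketched as follows (a minimal illustration; the window handling details and names are assumptions):

    class PortFlowControl:
        """Sketch of PFC retransmission on an 'ackset' basis: the sender
        keeps up to `ackset` contiguous unacknowledged PDU's and resends
        them when a 'retransmit' command is received."""

        def __init__(self, ackset):
            self.ackset = ackset
            self.window = []        # unacknowledged PDU's, oldest first
            self.seq = 0

        def send(self, pdu, transmit):
            self.seq += 1
            self.window.append((self.seq, pdu))
            # never hold more than one ackset of PDU's (assumption)
            self.window = self.window[-self.ackset:]
            transmit(self.seq, pdu)

        def on_ack(self, acked_seq):
            # drop everything up to and including the acknowledged PDU
            self.window = [(s, p) for (s, p) in self.window if s > acked_seq]

        def on_retransmit(self, transmit):
            # resend the contiguous PDU's still held in the window
            for s, p in self.window:
                transmit(s, p)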
The next chapter will in more detail deal with the
actual implementation of DCA that UNIVAC distributes
under the name Telcon.
T̲h̲e̲ ̲I̲S̲O̲-̲O̲S̲I̲ ̲N̲e̲t̲w̲o̲r̲k̲
The backbone network is described in other sections
of this proposal and will not be covered here. The
following is a brief description of the functions of
the host interface from the backbone network to a UNIVAC
host.
The host I/F consists of the modules NSS (Nodal Switch Subsystem), HSS (High Level Service Subsystem) and HAS (Host Access Subsystem). The NSS-to-HAS interface is at the session-control level in terms of the ISO-OSI reference model. Consequently the network layers 1 to 5 (physical control, link control, network control, transport end-to-end control and session control) are contained in NSS/HSS, and the layers 6-7 (presentation control and process control) are contained in HAS. Furthermore HAS includes the mapping functions necessary to perform an interface between the UNIVAC DCA-network and the backbone network. HAS' functionalities are described further in chapter 6.11.
T̲h̲e̲ ̲C̲R̲8̲0̲ ̲m̲i̲n̲i̲c̲o̲m̲p̲u̲t̲e̲r̲
A description of the CR80D minicomputer architecture
is given in chapter 5.3.
T̲h̲e̲ ̲U̲N̲I̲V̲A̲C̲ ̲c̲h̲a̲n̲n̲e̲l̲ ̲i̲n̲t̲e̲r̲f̲a̲c̲e̲
The UNIVAC channel I/F is developed by CR. The interface
is supported by a full duplex protocol and connects
to a UNIVAC 1100 ISI (Internally Specified Index) channel
pair for input and output. A further description is
given in "UNIVAC I/F CR8037D product specification", CSD/005/PSP/0052.
6.7 G̲a̲t̲e̲w̲a̲y̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
The main task of the gateway (GWY) is to act as the
physical and logical link between the ACNC and the
Packet Switched Network, see Figure III 6.7-1.
As seen from the ACNC the gateway must look like several
ICC's with CRT's and printers.
As seen from the Nodal Subsystem of the collocated
node the Gateway must look like any other subsystem
interfacing the network.
Therefore the Gateway is a protocol converter between
the ICC- and the NSS-protocols.
It also acts as a multiplexer or speed converter between several ICC subsystems (of 9.6 Mbps each) and one CCI subsystem (Communications Control Interface).

Finally, the Gateway is a converter because messages must be converted to and from the Virtual Terminal Protocol used throughout the ACDN.
Figure III 6.7-1
6.7.1 A̲C̲N̲C̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲
o The interface between the Gateway and the ACNC
is characterized by:
- several 9.6 Mbps lines
- ICC protocol (AC200) on each line incl. IMA.
- entire messages are stored and forwarded
- in case of error the entire message is retransmitted.
- messages are divided into segments.
The interface is shown on figure III 6.7-2.
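The store-and-forward behaviour listed above (messages divided into segments, and the entire message retransmitted in case of error) can be sketched as follows (the segment size and retry limit are assumptions):

    def segment_message(message, segment_size):
        """Divide an entire message into segments for the ICC line."""
        return [message[i:i + segment_size]
                for i in range(0, len(message), segment_size)]

    def send_message(message, send_segment, segment_size=128, max_retries=3):
        """Store-and-forward send: on error, the whole message is resent."""
        for attempt in range(max_retries):
            try:
                for seg in segment_message(message, segment_size):
                    send_segment(seg)
                return True
            except IOError:
                continue        # in case of error, retransmit the entire message
        return False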
6.7.2 N̲e̲t̲w̲o̲r̲k̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲
o The interface between the Gateway and the Packet
Switched Network is characterized by:
- Supra Bus Connection
- Message level protocol
- A number of logical channels will be established
for virtual calls as well as possible permanent
virtual circuits through the network.
- Virtual Terminal Protocol
The interface is shown on figure III 6.7-2.
Figure III 6.7-2
6.7.3 M̲o̲d̲u̲l̲e̲s̲ ̲o̲f̲ ̲t̲h̲e̲ ̲G̲a̲t̲e̲w̲a̲y̲
6.7.3.1 T̲h̲e̲ ̲I̲C̲C̲ ̲M̲o̲d̲u̲l̲e̲
The ICC Module is divided into a number of co-routines
each performing the task of acting like an intelligent
Communications Concentrator as seen from the ACNC.
Messages are exchanged with the CCI Module by means
of Message Queues.
6.7.3.2 T̲h̲e̲ ̲C̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲s̲ ̲C̲o̲n̲t̲r̲o̲l̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲ ̲M̲o̲d̲u̲l̲e̲
The Communications Control Interface Module (CCI) interfaces
the collocated node of the Packet Switched Network.
Virtual Circuits to hosts, terminals and printers are created and dismantled when necessary, initiated from this module.
Messages are exchanged with the ICC co-routines by
means of Message Queues.
6.7.3.3 T̲h̲e̲ ̲H̲i̲g̲h̲ ̲L̲e̲v̲e̲l̲ ̲S̲e̲r̲v̲i̲c̲e̲ ̲M̲o̲d̲u̲l̲e̲
The conversion to and from the Virtual Terminal Protocol
entirely used inside the ACDN is done by the High Level
Service Module.
6.7.3.4 T̲h̲e̲ ̲S̲u̲p̲e̲r̲v̲i̲s̲o̲r̲y̲ ̲M̲o̲d̲u̲l̲e̲
The Gateway is monitored from this module, which generates statistics (host, terminal, and printer connections) and error reports. Any controlling information is also received by this module.
6.7.3.5 T̲h̲e̲ ̲R̲e̲c̲o̲v̲e̲r̲y̲ ̲M̲o̲d̲u̲l̲e̲
This module will perform all recovery necessary for
the GWY to be restarted after a Gateway-failure. The
recovery will be done from appropriate checkpoint information.
6.8 E̲l̲e̲c̲t̲r̲o̲n̲i̲c̲ ̲M̲a̲i̲l̲ ̲H̲o̲s̲t̲
o The electronic mail host consists of the following major components:
- interface software
- application sw
- system software
- external devices
An overview diagram, figure III 6.8-1 overleaf, presents the structure of the total EMH.
6.8.1 E̲M̲H̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲ ̲S̲W̲
o The EMH interface SW supports the interface to the YYZ node and supports links to CNT, ARINC, SITA, and TELEX.

The interface to the YYZ node is supported by a dualised CR80 supra bus. The CNT interface will be implemented as a 150 bps asynchronous FDX ATA/IATA low speed character-oriented link. Similarly, the interface to ARINC follows the 2400 bps synchronous ATA/IATA-defined link. Furthermore, the interface component will be able to handle the SITA and TELEX interfaces according to the provided protocols.
Fig. III 6.8-1 EMH major components
6.8.2 A̲p̲p̲l̲i̲c̲a̲t̲i̲o̲n̲ ̲S̲W̲
o The application software handles the implementation
of all user-orientated requests, local or remote,
concerning EMH functions.
Major functions included in the EMH are:
- programme development
- protected message switching (PMS)
- session control
- reconfiguration
- recovery
6.8.2.1 P̲r̲o̲g̲r̲a̲m̲m̲e̲ ̲D̲e̲v̲e̲l̲o̲p̲m̲e̲n̲t̲
In order to develop the electronic mail service application
programmes, the EMH is equipped with a number of software
development facilities. The standard support software
includes:
- terminal operating system
- language processors
- system generation software
- debugging software
- utilities
- maintenance and diagnostics software
Section 6.3 describes these programme development tools in further detail.
6.8.2.2 P̲r̲o̲t̲e̲c̲t̲e̲d̲ ̲M̲e̲s̲s̲a̲g̲e̲ ̲S̲w̲i̲t̲c̲h̲i̲n̲g̲ ̲(̲P̲M̲S̲)̲
o This protected message switching mechanism can
be implemented in much the same way as Christian
Rovsing solved the Danish Defence Communication
problem.
The communication system, called FIKS, described below, is able to communicate in a secure and reliable manner, as is required for the protected message switching system. Obviously, minor modifications will have to be carried out to adapt the facilities necessary for achieving correct PMS service.
6.8.2.2.1 F̲I̲K̲S̲ ̲D̲e̲f̲i̲n̲i̲t̲i̲o̲n̲ ̲a̲n̲d̲ ̲S̲y̲s̲t̲e̲m̲ ̲E̲l̲e̲m̲e̲n̲t̲s̲
The DANISH INTEGRATED COMMUNICATIONS SYSTEM, FIKS, is a fully integrated communications network for the rapid, reliable, and efficient automated transfer of message and data traffic shared by multiple users for a variety of Danish military and defense applications.
FIKS provides dedicated network facilities and nodal
switching centres to service communication centres
and interconnect data terminals and computer systems
geographically distributed throughout Denmark.
The FIKS network facilities consist of dedicated high speed internodal trunks shared by all users and dedicated lines connecting users and small comcentres to the nodal switching centres. The nodal switching centres are configured from three functional entities:
the NODE - providing access to FIKS for data terminals, interfacing MEDEs, and performing network-orientated functions common to both data and message traffic,

the MEDE - message entry and distribution equipment, providing access to FIKS for communications centres and performing terminal-orientated functions related to message traffic,

the SCC - system control centre, providing network supervision and control and functioning as software development and maintenance centres.
These FIKS system elements may be co-located and physically integrated.

Initially, FIKS is structured as an 8-NODE grid network whose topology is shown in figure III 6.8.2.2.1-1.
6.8.2.2.2 S̲y̲s̲t̲e̲m̲ ̲O̲v̲e̲r̲v̲i̲e̲w̲ ̲&̲ ̲F̲u̲n̲c̲t̲i̲o̲n̲a̲l̲ ̲S̲u̲m̲m̲a̲r̲y̲
FIKS, the Danish Defense Integrated Communication System, is an integrated and fully-automated message switching and data transfer communication system used by the Danish Armed Forces. It replaces the individual torn-tape message traffic networks and dedicated data circuits until now operated by the three services: army, navy, and air force.
6.8.2.2.3 F̲I̲K̲S̲ ̲N̲o̲d̲a̲l̲ ̲N̲e̲t̲w̲o̲r̲k̲
FIKS consists of a multinode network geographically distributed throughout Denmark (Fig. III 6.8.2.2.1-1). As initially structured, 8 nodes are arranged in a grid configuration and interconnected via full-duplex trunks operating at 9.6 kbit/s. These internodal trunks are permanently leased circuits backed up by automatically-dialled PTT data circuits. The internodal trunks may be upgraded to 64 kbit/s when higher traffic rates are required.
Message and data traffic is interchanged between military users under control of computerised nodal switching centres. NODE and MEDE (Message Entry and Distribution Equipment) processors are located at all NODEs.
The internodal trunk circuits carry a mixture of message and data traffic. The 9.6 kbit/s bandwidth is dynamically allocated between message and data sources. A minimum of 1.2 kbit/s will always be available for message traffic, and 2.4 kbit/s is reserved for signalling and protocol overhead (see Fig. III 6.8.2.2.3-1). The remaining bandwidth of 6.0 kbit/s is divided into 20 time slots, each with a capacity of 300 bps. These slots are dynamically allocated to continuous and discontinuous (polling, contention and dial-up) data traffic. Data traffic sources will be allowed to use the 300 bps slots in accordance with bandwidth requirements and priority. Up to 15 different priority levels are used, and the nodal software automatically preempts lower priority data users if the bandwidth becomes too small to accommodate all data users simultaneously. Preemption should, however, only take place when the network becomes partly inoperable due to trunk or equipment failure.
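A minimal sketch of the dynamic slot allocation is given below, assuming a simple strict-priority assignment over the 20 slots (the request layout and tie-breaking are assumptions):

    def allocate_slots(requests, free_slots=20):
        """Sketch of the allocation of the 20 x 300 bps slots: requests
        are (user, priority, slots_needed) tuples with priority 1 the
        highest of the 15 levels; when bandwidth runs short, the
        lower-priority users are effectively preempted."""
        allocation = {}
        for user, priority, needed in sorted(requests, key=lambda r: r[1]):
            granted = min(needed, free_slots)
            if granted:
                allocation[user] = granted
                free_slots -= granted
            # users receiving no slots here have been preempted
        return allocation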
6.8.2.2.4 M̲e̲s̲s̲a̲g̲e̲ ̲U̲s̲e̲r̲s̲
Message users are served through 23 COMCENTREs, eight of which are collocated with the NODEs. About 150 message terminals assigned to the COMCENTREs are given access to FIKS through dedicated or multiplexed low and medium speed circuits terminated in the NODE/MEDE processors.

All message traffic is encrypted, and message traffic rates between 50 and 240 bps can be accommodated. Based on the current message traffic input of about 2000 messages per busy hour, FIKS is initially sized to handle a throughput of 25,000 messages per busy hour, which includes messages, retrievals, reports, control messages, and a 25% spare capacity. Each NODE has a throughput of 3 messages per second (1000 character messages).
6.8.2.2.5 D̲a̲t̲a̲ ̲U̲s̲e̲r̲s̲
Data users, consisting initially of 12 data systems, exchange information through FIKS on a continuous or discontinuous basis through direct interconnections with the NODE processors and internodal trunks. Up to 15 different data users with speeds ranging from 300-4800 bps may be multiplexed on each 9.6 kbit/s trunk. Data channel set-up time is less than 75 msec per NODE, and the delay variation with respect to set-up time is less than 50 msec per NODE.
6.8.2.2.6 N̲e̲t̲w̲o̲r̲k̲ ̲S̲u̲p̲e̲r̲v̲i̲s̲i̲o̲n̲
The entire FIKS network is monitored and supervised by two System Control Centres (SCC's). The SCC's handle the exchange of messages between FIKS and NICS-TARE on a fully automatic basis. The network is capable of functioning without the SCC's.
6.8.2.2.7 F̲I̲K̲S̲ ̲G̲e̲n̲e̲r̲i̲c̲ ̲E̲l̲e̲m̲e̲n̲t̲s̲
The generic elements of FIKS and their interrelationship are shown in Figure III 6.8.2.2.7-1. The various demarcation points which will be encountered between the NODE/MEDE/SCCs, FIKS network, COMCENTREs, message terminals, data systems, computers, and data terminals are also indicated. A system overview giving more details about the interconnection of the FIKS elements is shown in Figure III 6.8.2.2.7-2. The NODE processor is collocated with the MEDE in the red area for security reasons. The NODE Line Termination Units (LTUs) and the LTU controller are located in the black area as they will carry only encrypted or non-secure traffic.
6.8.2.2.8 T̲r̲a̲f̲f̲i̲c̲ ̲S̲e̲c̲u̲r̲i̲t̲y̲
FIKS handles all security classifications of narrative messages and data transmission (i.e. Danish and NATO Unclassified, Restricted, Confidential, Secret, Top Secret) as well as 4 categories of SPECAT messages. Password checks ensure that only authorised viewers will be allowed to examine message content.
Provisions have been made for security class marking, protection of stored messages against unauthorised retrieval, message deletion, and special handling procedures. Cryptographic security equipment protects all transmissions. Crypto equipment is of the type approved by NATO, generically referred to as DOLCE. Automatic detection of crypto garbling prevents loss of information.
Data streams requiring security are terminal-to-terminal
encrypted and routed through FIKS without need for
decryption and re-encryption at intermediate nodes.
Stable timing is provided from frequency standards to maintain end-to-end synchronisation and bit count integrity throughout the network for several weeks without adjustment.
FIKS is designed to prevent misrouting, inadvertent plain text, and unauthorised access and retrieval. Nodal switching equipment is separable into RED areas for MEDEs, where plain text unencrypted information is allowed, and BLACK areas for NODEs, where classified information appears only in encrypted form.
6.8.2.2.9 M̲e̲s̲s̲a̲g̲e̲ ̲C̲a̲t̲e̲g̲o̲r̲i̲e̲s̲,̲ ̲C̲o̲d̲e̲ ̲a̲n̲d̲ ̲F̲o̲r̲m̲a̲t̲s̲
Four categories of traffic are handled: (1) narrative
messages with precedence and multiple addresses in
FIKS standard message format (SMF) with the essential
elements of the ACP-127 format; (2) service messages
using an abbreviated format; (3) continuous data requiring
virtually dedicated channels with minimum delay variation
and routed as an uninterrupted bit stream; (4) discontinuous
data requiring channels on a call-up basis with predictable
set-up time and delay. For message traffic, FIKS will
accept either 5-level (Baudot/ITA-2) or 7-level (ASCII/ITA-5) codes; internally, message processing and storage will be in ASCII code.

For data traffic, FIKS will accept any format or code, as FIKS is transparent to data traffic.
Narrative messages are modified before transmission
to add an envelope containing FIKS internodal routing
and local address information, but the original messages
are restored at the destination terminals.
The FIKS network is entirely transparent to the formats and protocols used for the continuous and discontinuous data categories.
Internal to the FIKS network, between NODEs, all traffic is handled as packets compatible with the CCITT X.25 HDLC protocol.
A special protocol (LITSYNCH) is used between FIKS
and NICS/TARE.
6.8.2.2.10 M̲e̲s̲s̲a̲g̲e̲ ̲E̲n̲t̲r̲y̲,̲ ̲S̲t̲o̲r̲a̲g̲e̲ ̲a̲n̲d̲ ̲D̲i̲s̲t̲r̲i̲b̲u̲t̲i̲o̲n̲
Messages enter the FIKS network from a number of message preparation and receiving terminals such as teleprinters and visual display units. Each MEDE initially serves up to 30 full duplex terminals; however, the total capacity of the MEDE is 242 terminals and 12 interfaces to host computers. Message preparation is interactive with prompts from the MEDE computer. An example of a message preparation format (SMF) is shown in figure III 6.8.2.2.10-1. The underlined portions are either prompts or other computer-inserted information. Address information is keyed in as a character representing the MEDE to which the terminal is connected, followed by 3 digits. The computer replaces this by the current address, which then appears in the delivered message (see Figure III 6.8.2.2.10-2).
Message terminal operators can use a number of interactive
procedures such as:
- preparation (4 types)
- coordination
- release
- retrieval
- readdressing
- distribution,local
- log on
- log off
- special handling
- editing
The MEDEs are manned 24 hours daily, and MEDE supervisors control the security and traffic of the system and its terminals. A number of special procedures are available for supervisors:
- distribution (2 types)
- control of terminal queue status
- re-arrangement of queues
- relocation of queues
- re-routing of terminal traffic
- block/unblock terminals
- security interrogation of terminals
- establishment of PTT data net connections
- up-dating of route and address tables
- security profile handling
- call-up of daily traffic statistics
and many other procedures.
Full accountability is provided for all messages. Messages are queued by precedence (Flash, Immediate, Priority, Routine and two other yet unspecified levels) to the NODE for network routing and for automatic distribution to local addresses.

All outgoing and incoming messages are stored at the MEDEs for 10 days. SPECAT messages will be deleted from local storage after transmission and delivery. Retrieval of messages from the 10 day storage by authorised users is provided. Messages can be retrieved by message identification, subject indicator codes (SIC) and date/time indication.
6.8.2.2.11 M̲e̲s̲s̲a̲g̲e̲ ̲R̲o̲u̲t̲i̲n̲g̲ ̲a̲n̲d̲ ̲D̲a̲t̲a̲ ̲S̲w̲i̲t̲c̲h̲i̲n̲g̲
Message traffic is relayed from the originating MEDEs through intermediate FIKS NODEs to the destination MEDEs, and data traffic is transferred between terminals directly interconnected to FIKS NODEs over internodal trunks. The associated message routing and data line switching functions are allocated to the NODE processors. Messages received by the NODE are routed to other NODEs or delivered to the locally connected MEDE on the basis of routing indicators and precedence contained in a special header. Each NODE is interconnected to adjacent NODEs through at least 3 independently routed trunks. The optimum trunk route to the final destination NODE is based upon shortest route (minimum hop) and network connectivity. A routing algorithm is used which allows the NODE to be independent of SCC control. The SCC will be informed of all changes in the network and calculates routing tables for optimisation of the network traffic. The SCC routing algorithm uses weighted delay factors for the individual trunks. These weighting factors will be derived from the traffic Q-reports and be used to calculate message routing tables, which are down-loaded to the NODEs.
The routing tables contain three alternative routes per destination, and the NODEs select the proper routes from the tables based on trunk queue lengths. If both SCCs are inoperative, the NODE/MEDE supervisors can manually update the tables.
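The route selection described above can be sketched as follows (the data layout is assumed; the text only states that selection is based on trunk queue lengths):

    def select_route(routing_table, destination, trunk_queue_len):
        """Sketch: the NODE holds three SCC-calculated alternative routes
        per destination and picks the one whose outgoing trunk currently
        has the shortest queue."""
        routes = routing_table[destination]          # three alternatives
        return min(routes, key=lambda trunk: trunk_queue_len[trunk])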
Data traffic, both continuous and discontinuous, is switched through predetermined routes over internodal trunks. Each data user is allocated a primary and a secondary route through the network. If the primary route fails, the secondary route is automatically established. Switch-back to the primary route is controlled by supervisory commands.

End-to-end set-up and transmission delays will be less than 12 seconds. The NODE is transparent to data traffic; all data traffic is in the black. Crypto synchronisation, channel coordination, error control, and recovery procedures are terminal-to-terminal or computer-to-computer.
6.8.2.2.12 S̲y̲s̲t̲e̲m̲ ̲S̲u̲p̲e̲r̲v̲i̲s̲i̲o̲n̲,̲ ̲C̲o̲n̲t̲r̲o̲l̲ ̲a̲n̲d̲ ̲M̲a̲i̲n̲t̲e̲n̲a̲n̲c̲e̲
Centralised supervision and control of the overall FIKS network maintains network efficiency and regulates or restores service in case of congestion, outages, or failures. Continuous network status is monitored and displayed at the System Control Centres. Two SCCs are provided, but neither is dualised; back-up is geographic. Both SCCs may be on-line, with one exercising network control and the other on standby monitoring the network; or the second may be off-line and dedicated to programme development, maintenance, or training.

The SCCs exercise control of the network by use of a number of procedures:

- threshold setting for trunk queue lengths
- threshold setting for message retransmission rate
- control of SCC switchover
- change of tables
- request of diagnostic results from NODE/MEDEs
- open/close trunks
- etc.
Control messages from the NODE/MEDEs concerning traffic queues, trunk and NODE status, retransmission rate, equipment availability, etc. are transmitted to the SCCs; from these, statistics are gathered, alarm conditions noted, and reports presented to allow timely network decisions by supervisory personnel. Preventive and corrective action is initiated by operating personnel. A log of control messages and SCC actions provides an audit trail to trace all network control actions.
Downline loading of routing, security and address tables
from the SCC to the network permits selective rerouting
of message traffic, change of routing plan, reconfiguration
of the network, and changes of security tables.
The current operational status of the FIKS nodal network is displayed on a colour TV, dynamically updated by reports and alarms from the network (see Fig. III 6.8.2.2.12-1). The open/closed status of each internodal trunk and active PTT back-up channels, as well as the configuration and availability of each NODE/MEDE and SCC, is displayed.
Statistics are gathered by the SCC from control messages,
periodic reports and traffic received from the network.
Message flow, trunk usage, queueing delays, outages,
equipment up-time, and other statistics will be available
for off-line statistical analysis, reports and network
planning. A summary message traffic report will be
automatically generated and distributed every 24 hours
to the NODE/MEDEs.
The interchange of message traffic between the FIKS and NICS/TARE networks will be performed by the SCCs. TARE may send messages to FIKS terminals; national routing indicators and addressees will be recognised, and the message will be converted from ACP-127 format to FIKS Standard Message Format for routing and distribution on the FIKS network. Similarly, FIKS terminals can send messages to TARE using NATO addresses. Valid NICS routing indicators will be extracted from an SCC file and the message will be translated to ACP-127 format for transmission on the FIKS/NICS channel. The recognisable NICS routing indicator directory consists of 1200 selected NATO addresses at the SCC, 200 at the NODE/MEDEs; messages containing undefined NATO addresses or errors will be intercepted for manual handling.
Maintenance of the system is performed partly by NODE/MEDE supervisors, who are cross-trained to operate the off-line diagnostic programmes, change modules and perform manual switch-over, and partly by technicians located at the two SCCs and a mobile technician team which can be called out to the different sites to locate and repair faults. Software personnel will be located at the two SCCs.
L̲I̲S̲T̲ ̲O̲F̲ ̲F̲I̲G̲U̲R̲E̲S̲
FIGURE III 6.8.2.2.1-1 FIKS NODAL NETWORK AND TERMINALS
FIGURE III 6.8.2.2.3-1 TRUNK USAGE
FIGURE III 6.8.2.2.7-1 FIKS GENERIC ELEMENTS
FIGURE III 6.8.2.2.7-2 FIKS SYSTEM OVERVIEW SCHEMATIC
FIGURE III 6.8.2.2.10-1 FIKS MESSAGE PREPARATION FORMAT
FIGURE III 6.8.2.2.10-2 FIKS HARD COPY, EXAMPLE
FIGURE III 6.8.2.2.12-1 FIKS STATUS DISPLAY
FIGURE 6.8.2.2.1-1
FIKS NODAL NETWORK AND TERMINALS
Each frame (low delay protocol) on a trunk is 13.3 msec long and consists of 16 bytes, of which 10 bytes are used for dynamic allocation of data traffic based upon bandwidth requirement and priority (15 levels), handled by a higher level protocol (the low delay multiplex protocol). The contents of the message traffic bytes (2 per frame) are collected and sent in an HDLC protocol as 16 bytes per HDLC frame for error correction of message traffic. Error correction of data traffic is done independently of FIKS via the individual data users' protocols.
FIGURE 6.8.2.2.3-1
TRUNK USAGE
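The frame structure explained above can be sketched as follows (the position of the message traffic bytes within the frame is an assumption):

    # Illustrative layout of one 13.3 msec internodal trunk frame (16 bytes).
    FRAME_BYTES = 16
    DATA_SLOT_BYTES = 10      # dynamically allocated data traffic
    MESSAGE_BYTES = 2         # collected into 16-byte HDLC frames
    OVERHEAD_BYTES = FRAME_BYTES - DATA_SLOT_BYTES - MESSAGE_BYTES

    def hdlc_payload(frames):
        """Collect the 2 message-traffic bytes of 8 consecutive trunk
        frames into one 16-byte HDLC information field (the byte
        positions within the frame are assumed)."""
        assert len(frames) == 8
        return b''.join(frame[-MESSAGE_BYTES:] for frame in frames)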
Fig. 6.8.2.2.7-1
FIKS GENERIC ELEMENTS
FIGURE 6.8.2.2.7-2
F̲I̲K̲S̲ ̲S̲Y̲S̲T̲E̲M̲ ̲O̲V̲E̲R̲V̲I̲E̲W̲ ̲S̲C̲H̲E̲M̲A̲T̲I̲C̲
P̲r̲o̲c̲ PRE(CR)
A̲B̲C̲ ̲1̲2̲3̲ (CR)
F̲O̲R̲M̲A̲T̲T̲E̲D̲ ̲M̲E̲S̲S̲A̲G̲E̲ ̲A̲2̲1̲ ̲(̲C̲R̲)̲
P̲R̲E̲C̲ ̲A̲C̲T̲ 0 (CR)
P̲R̲E̲C̲ ̲I̲N̲F̲O̲ ̲ R (CR)
F̲M̲ / (CR) C̲H̲O̲D̲D̲E̲N̲
T̲O̲ AIG 1601 (CR)
X̲M̲T̲ (CR)
T̲O̲ E104/ (CR) T̲A̲C̲D̲E̲N̲
T̲O̲ (CR)
I̲N̲F̲O̲ X115 (CR)
I̲N̲F̲O̲ (CR)
B̲T̲
C̲L̲A̲S̲S̲ NS (CR)
S̲P̲E̲C̲A̲T̲ (CR)
S̲I̲C̲ ̲ RHQ (CR)
TEXT
NNNN (CR)
B̲T̲
D̲I̲G̲/ (CR) 0̲1̲2̲3̲4̲7̲z̲ ̲j̲a̲n̲
P̲R̲O̲C̲
FIGURE 6.8.2.2.10-1
FIKS MESSAGE PREPARATION FORMAT
(CR) = carriage return
EXAMPLE
0801 KAB
NATO RESTRICTED
0 R 012347z JAN 80 MSG ID ABC 123
FM CHODDEN
TO AIG 1601
TACDEN
INFO SHAPE
BT
NATO RESTRICTED
SIC RHQ
IN REPLY REFER TO TST 312.1-1227
SUBJECT CONTRACT NO FK 7900
IN ACCORDANCE WITH PARAGRAPH 16.5 OF THE SUBJECT CONTRACT
AMC IS PLEASED TO SUBMIT AN ORDER FOR THE OPTION FOR
ADDITIONAL RDS-V
PPI DISPLAYS AS FOLLOWS
QTY IN UNITED STATES DOLLARS
1-2 1000 DOLLARS EA
3-6 976 DOLLARS EA
THE EQUIPMENT SHALL INCLUDE THE RDS-V PPI DISPLAY/DATA
ENTRY AND TRACKBALL WITH THE NECESSARY SYSTEM MODIFICATION
TO ALLOW SEPARATION OF THE DISPLAY UP TO 3500 METERS.
DELIVERY SHALL BE ACCOMPLISHED AT THE RATE OF TWO PER
MONTH STARTING 10 MONTHS AFTER RECEIPT OF A CONTRACT
MODIFICATION. ALL OTHER ITEMS AND CONDITIONS SHALL
BE IN ACCORDANCE WITH THE SUBJECT CONTRACT.
BT
INT DIST 0-DIV
ACCEPTANCE TIME 020005z
RETRIEVAL TIME 020006z
NATO RESTRICTED
FIKS HARD COPY EXAMPLE
FIGURE 6.8.2.2.10-2
FIGURE 6.8.2.2.12-1
FIKS STATUS DISPLAY
6.8.2.3 S̲e̲s̲s̲i̲o̲n̲ ̲C̲o̲n̲t̲r̲o̲l̲
o Session control is the capability to handle remote requests for use of the EMH facilities.

The session control software establishes the link between remote users and typical EMH applications such as programme development. When a remote user signs on to the EMH, the session control creates the link, which remains until the user signs off again.
6.8.2.4 R̲e̲c̲o̲n̲f̲i̲g̲u̲r̲a̲t̲i̲o̲n̲
o Reconfiguration capabilities for the EMH are handled
by this application software.
During a reconfiguration, initiated by the NMH and conducted by the NCC, the reconfiguration application software receives and carries out the changes in configuration tables etc. Local reconfiguration inquiries are also handled by this reconfiguration software application.
6.8.2.5 E̲M̲H̲ ̲R̲e̲c̲o̲v̲e̲r̲y̲
o Recovery of the EMH will be effected to such an
extent as to obtain proper functioning.
The tool for performing a proper recovery is the checkpoint mechanism. Checkpointing, i.e. saving certain events during a message flow, can be implemented either by checkpointing to disc or by checkpointing to a stand-by computer. From these checkpoints the recovery program is able to establish the necessary delivery of all traffic types according to what will be required after a possible system failure. With disc checkpointing, recovery can be performed after a total system failure, i.e. power out, while stand-by checkpointing is used after a switch-over situation.
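A minimal sketch of the two checkpointing variants is given below (the interfaces and record format are assumed for illustration):

    import json

    class CheckpointWriter:
        """Sketch: checkpoint events either to disc (survives a total
        system failure) or to a stand-by computer (used after a
        switch-over), or to both."""

        def __init__(self, disc_path=None, standby_link=None):
            self.disc_path = disc_path        # assumed file-based disc checkpoint
            self.standby_link = standby_link  # assumed link object with send()

        def checkpoint(self, event):
            record = json.dumps(event)
            if self.disc_path:
                with open(self.disc_path, 'a') as f:
                    f.write(record + '\n')        # disc checkpoint
            if self.standby_link:
                self.standby_link.send(record)    # stand-by checkpoint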
6.8.3 S̲y̲s̲t̲e̲m̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
o The system software is the background software, hidden from the user, present to handle the data base system.

For running the EMH application software and the corresponding external devices, such as discs and VDUs, the following software system has been made available:
- file management system
- disc driver
- terminal driver
A more detailed description of these components can
be found in section 6.2, system software for the CR80
computer.
6.8.4 E̲x̲t̲e̲r̲n̲a̲l̲ ̲D̲e̲v̲i̲c̲e̲s̲
o To perform the previously mentioned functions of the EMH, external devices such as discs, VDUs and line printers are attached.

A number of dualised disc drive systems serve as the data base of the EMH, with a capacity which fulfils the functional requirements. Write operations are performed on one disc at a time to minimise the risk of corrupting data. During recovery, normal operation can continue while the update of a replaced disc takes place. VDUs serve as interactive terminals during operation of the EMH. The operations in question are software development of the electronic mail service, protected message switching etc.
The attached line printer is able to handle the TDP
subset of type B traffic. During software development
the line printer serves as an important tool for paper
copies of developed application programs.
6.9 N̲e̲t̲w̲o̲r̲k̲ ̲C̲o̲n̲t̲r̲o̲l̲ ̲C̲e̲n̲t̲e̲r̲ ̲(̲N̲C̲C̲)̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
o The NCC provides global operational control for the entire Air Canada Network.

The NCC profits from the experience Christian Rovsing has gained developing the FIKS data communication network for the Air Material Command.

The FIKS network is monitored and controlled centrally via a geographically backed-up control center.
6.9.1 N̲C̲C̲ ̲N̲e̲t̲w̲o̲r̲k̲ ̲E̲n̲v̲i̲r̲o̲n̲m̲e̲n̲t̲
The NCC is situated at the YYZ site and is geographically backed up at the YUL site.

Each of the NCC's is connected to all watchdog processors (WDP) at every site (refer to figure III 6.9.1-1 overleaf). Via this connection the site components are monitored and controlled.

Figure III 6.9.1-1 NCC Network Environment
6.9.2 N̲C̲C̲ ̲H̲a̲r̲d̲w̲a̲r̲e̲ ̲C̲o̲m̲p̲o̲n̲e̲n̲t̲s̲
The NCC and the back-up NCC (BNCC) contain the same
equipment:
- Two mirrored disks (shared with the node), which are used for statistical and configurational information.
- A color display for display of selected parts of the status of the Air Canada Network.
- An event log printer, which records commands executed
by an NCC supervisor.
- An alarm printer on which network exceptions are
printed.
- Two terminals from which supervisors control the
network.
- A floppy disk containing boot load files.
Figure III 6.9.2-1 NCC Subsystem hardware components (at nodes YYZ and YUL)
6.9.3 N̲C̲C̲ ̲D̲a̲t̲a̲ ̲B̲a̲s̲e̲
The NCC data base contains the following information:
- Routing tables for all Air Canada Network nodes
- Statistics information for the previous day and the current day
- Status for all site components
- Configuration tables, including device specific service data
- Load files
- Software patch files

The routing and configuration tables and the load files exist in three versions:

- a previous version,
- a current version, and
- a version being updated.

Three versions also exist locally at the site components.
6.9.4 N̲C̲C̲-̲B̲N̲C̲C̲ ̲O̲p̲e̲r̲a̲t̲i̲o̲n̲
The NCC-BNCC pair operates in an active/standby mode. Control messages from site components and NMH data messages are directed to both NCC's. Configurational changes executed by the active NCC supervisor are forwarded to the standby NCC.

It is possible to include a repaired NCC as standby NCC by down-line loading of the active NCC configuration and routing tables. In this way it is possible to ensure a common view of the network for both NCC's.

Switchover from an active NCC to a standby NCC will take place due to:

- The non-arrival at the standby NCC of a "keep-alive" message from the active NCC.
- Execution of a supervisor command in either the active or the standby NCC.

A switchover to the standby NCC is performed within 2 minutes.
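The keep-alive supervision can be sketched as follows (the keep-alive interval is an assumption; only the 2 minute switchover bound is stated above):

    import time

    KEEP_ALIVE_INTERVAL = 30   # seconds; assumed value

    def standby_monitor(last_keep_alive, become_active):
        """Sketch of the standby NCC's supervision of the active NCC:
        switchover is triggered by the non-arrival of a keep-alive
        message. `last_keep_alive` returns the time of the latest
        keep-alive received (interface assumed)."""
        while True:
            if time.time() - last_keep_alive() > KEEP_ALIVE_INTERVAL:
                become_active()    # switchover completes within 2 minutes
                return
            time.sleep(1)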
6.9.5 N̲M̲H̲-̲N̲C̲C̲ ̲O̲p̲e̲r̲a̲t̲i̲o̲n̲
The NMH contains a data base for various releases of
- configuration tables
- statistics
- inventories
- software load files
The NCC backs up statistics information daily, either automatically or on supervisor request.

The NCC supervisors can request the NMH to downline load

- a configuration table
- a load file

to the NCC data base (refer to section 6.9.3).

The distribution in the network of the above tables and load files is performed by the NCC, which subsequently performs the network communication. This approach ensures a centralized and secure updating of configurational data.
6.9.6 C̲e̲n̲t̲r̲a̲l̲i̲z̲e̲d̲ ̲v̲e̲r̲s̲u̲s̲ ̲D̲i̲s̲t̲r̲i̲b̲u̲t̲e̲d̲ ̲C̲o̲n̲t̲r̲o̲l̲
The site components
- nodes
- FEP's
- EMH
- NMH
are controlled via a top level operating system.
This operating system is designed to receive and execute reconfiguration commands either from a terminal position local to the component or remotely from one of the NCC's. Local reconfigurations are coordinated with the NCC's, thereby ensuring a global view of the network.

Thus, a transfer of a centralized (NCC) controlled function to any site position will require only the inclusion of an operator interface at the local control position.
Alarm conditions have an associated attribute:
- to be sent to NCC
- to be presented locally
- both
Thus, a removal of a centralized alarm function to a secondary position is easy to cope with.
6.9.7 N̲C̲C̲ ̲m̲o̲n̲i̲t̲o̲r̲i̲n̲g̲ ̲v̲i̲a̲ ̲W̲D̲P̲ ̲(̲W̲a̲t̲c̲h̲d̲o̲g̲)̲
The NCC monitors via the WDP:
- power supplies
- crate temperatures
- various switch settings
6.9.8 N̲C̲C̲ ̲C̲o̲n̲t̲r̲o̲l̲ ̲v̲i̲a̲ ̲W̲D̲P̲
The NCC controls via the WDP:
- PU's switchover
- LTU switchover
- PU disconnection from suprabus and peripherals.
The watchdog control is executed via a separate bus, the CCB (configuration control bus).
6.9.9 S̲o̲f̲t̲w̲a̲r̲e̲ ̲M̲o̲n̲i̲t̲o̲r̲i̲n̲g̲
The NCC can request a site component to transfer to the NCC:

- a dump of a former active PU memory residing on disk
- selected tables in a currently active PU

The NCC contains a dump analysis program, which can print the received dump.
6.9.10 S̲o̲f̲t̲w̲a̲r̲e̲ ̲I̲n̲i̲t̲i̲a̲l̲i̲z̲a̲t̲i̲o̲n̲ ̲a̲n̲d̲ ̲M̲o̲d̲i̲f̲i̲c̲a̲t̲i̲o̲n̲
The NCC supervisors initialize and modify site components through the following configuration elements:
- system software load file
- application software load file
- LTU software load files
- software patches
- entire configuration files including
device specific service data
- routing tables
- updates to configuration files and routing tables
Updates to configuration files and routing tables are generated by NCC supervisor commands.

Having loaded a configuration element or performed an update, the NCC can bring the updated version into operation. In the system, application and patch cases, this will be performed during a controlled switchover.
Changes of device characteristics will only influence
the device in question.
The possibility to transfer LTU software provides for
the incorporation of new device types.
The possibility to transfer patches enables repair
of an isolated software error.
The use of backed-up versions locally enables a fast
reversion to a previous release.
As the backbone network is used for transmission of software and configuration modifications, and as the loading of modifications and the bringing into operation is an NCC function, the reconfiguration process is highly automated.
New versions of software are downline loaded to all
nodes simultaneously.
6.9.11 R̲o̲u̲t̲i̲n̲g̲ ̲C̲o̲n̲t̲r̲o̲l̲
The NCC receives control messages from all nodes, which define the network environment of each node. Based on these control messages, new routing tables are automatically generated and distributed to every node.

Locally, a node has routing tables defining alternative paths.

The NCC supervisors have commands available to update the NCC routing tables.
6.9.12 N̲e̲t̲w̲o̲r̲k̲ ̲M̲o̲n̲i̲t̲o̲r̲i̲n̲g̲
The NCC data base contains a status file which records
the state of all devices and users in the Air Canada
Network.
The device status includes:
- terminal status
- status of any network component, e.g.
  - trunks
  - multiplexers
  - concentrators
  - nodes
  - components within the above
The user status includes:
- session status, e.g. sign in/out status
- routing information, including congestion, flow control and host/application status
- user functional capabilities, i.e. the functions allowed to be executed by a user
Nodes can request the contents of the routing information in order to present the information to a user.

The NMH can search this file during

- inventory and
- billing data base

updates.

NCC supervisors can also retrieve any network component information for display at the color display.
6.9.13 A̲p̲p̲l̲i̲c̲a̲t̲i̲o̲n̲ ̲i̲m̲p̲a̲c̲t̲ ̲o̲f̲ ̲n̲e̲t̲w̲o̲r̲k̲ ̲r̲e̲c̲o̲n̲f̲i̲g̲u̲r̲a̲t̲i̲o̲n̲s̲
It is possible to insert
- new nodes
- new equipment at network components
due to
- the hardware, which allows new LTU's to be inserted in a channel unit without disrupting the remaining LTU's
- the NCC supervisor's ability to
  - off-line load new software
  - off-line load new configuration and routing tables.
6.9.14 S̲t̲a̲t̲i̲s̲t̲i̲c̲s̲
The NCC collects statistics over a 24 hour period and off-loads them automatically once per day to the NMH.
A statistics collector aggregates statistics for
- 5 minute
- 1 hour
- 24 hour
periods, based on statistics collected on-line on a 5 minute basis.
The statistics collector is a tool for storing anonymous statistics records. The storing is performed on a 5 minute basis. A catalogue defines the location of each 5 minute statistics period in the statistics file.
A statistics record contains:
- a header containing
 - time span covered and originator
 - type of statistics, e.g.
  - printer queue length type
  - oldest message in printer queue type
- associated data, e.g.
 - queue length
 - time of day
As the statistics collector does not know the contents
of statistics information,
- new statistics record types can be included
- the amount of statistics produced is variable.
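Because the collector treats the data part of a record as opaque, new record types can be added without changing the collector itself. The following Python sketch illustrates this property with hypothetical record and catalogue structures.

    # Minimal sketch of a content-agnostic statistics collector: records
    # carry an opaque data part, and a catalogue maps each 5 minute period
    # to its position in the statistics file. Names are illustrative.
    from dataclasses import dataclass
    from typing import Any, Dict, List, Tuple

    @dataclass
    class StatRecord:
        time_span: Tuple[int, int]   # header: period covered (start, end)
        originator: str              # header: reporting component
        stat_type: str               # header: e.g. "printer-queue-length"
        data: Any                    # associated data, opaque to collector

    stats_file: List[StatRecord] = []
    catalogue: Dict[int, int] = {}   # 5 minute period -> file location

    def store(period: int, record: StatRecord) -> None:
        catalogue.setdefault(period, len(stats_file))  # locate the period
        stats_file.append(record)    # collector never inspects record.data

    store(0, StatRecord((0, 300), "NODE-1", "printer-queue-length", 12))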
6.9.14.1 S̲t̲a̲t̲i̲s̲t̲i̲c̲s̲ ̲I̲n̲f̲o̲r̲m̲a̲t̲i̲o̲n̲
The following statistics information is foreseen:
- billing information:
- session time duration per user
- session routing information
- availability statistics, e.g. number of errors per error type per network component type
- performance statistics: specification of system usage, e.g. message traffic, number of transactions, response times, etc.
- security statistics:
number of security violations
6.9.15 A̲l̲a̲r̲m̲s̲
Alarms refer to network conditions affecting a significant
number of users, e.g.
- node failure
- concentrator failure
Alarms cause an audible signal and an event log entry.
Alarms are also queued within the NCC and are available for display/printout on supervisor request.
6.9.16 S̲u̲p̲e̲r̲v̲i̲s̲o̲r̲y̲ ̲F̲u̲n̲c̲t̲i̲o̲n̲s̲
In addition to the supervisory functions defined elsewhere
in this chapter, the supervisors have the following
possibilities:
- dial-up/down a trunk
- switch to any standby equipment
- broadcast messages to terminals
6.9.17 N̲C̲C̲ ̲M̲a̲n̲-̲M̲a̲c̲h̲i̲n̲e̲ ̲I̲/̲F̲
The supervisors at the NCC use a menu-driven, format-oriented dialogue during execution of commands.
Supervisory access to the NCC functions is controlled via a
- password check
- functional capability check
Functional capabilities, which are defined per user, are rights to e.g.
- update routing tables
- display alarms
- send NMH requests
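A minimal sketch of the two-step access control named above: a password check followed by a per-user functional capability check. The capability names are taken from the examples in the list; the storage layout is an assumption.

    # Minimal sketch of password check plus per-user capability check.
    users = {
        "SUP1": {"password": "secret",
                 "capabilities": {"update-routing-tables", "display-alarms"}},
    }

    def may_execute(user: str, password: str, function: str) -> bool:
        profile = users.get(user)
        if profile is None or profile["password"] != password:
            return False                             # password check failed
        return function in profile["capabilities"]   # capability check

    assert may_execute("SUP1", "secret", "display-alarms")
    assert not may_execute("SUP1", "secret", "send-NMH-request")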
6.10 N̲e̲t̲w̲o̲r̲k̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲H̲o̲s̲t̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
The Network Management Host (NMH) is a general purpose
computer connected to the backbone network as an ordinary
host processor.
The services offered by the NMH are to a great extent
standard software products such as language processors
and software maintenance utilities. All services are
controlled by the DAMOS multiuser system which can
provide service for an unlimited number of concurrent
users.
6.10.1 N̲M̲H̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲s̲ ̲a̲n̲d̲ ̲F̲u̲n̲c̲t̲i̲o̲n̲a̲l̲ ̲O̲v̲e̲r̲v̲i̲e̲w̲
o The services of the NMH are offered to any user
at any terminal connected to the backbone network.
The basic services offered are:
- Software development and maintenance facilities
- Data base maintenance facilities
- Downline loading of software and configuration
data
- Acquisition of statistics and billing information
- Network modelling software
The NMH is connected to the backbone network via a supra bus link. In terms of software, the NMH contains support for the "File Transfer Protocol" and "Virtual Terminal Protocol" described in section 6.6.
Thus the NMH is able to serve any user at any terminal
in the backbone network. The access to the NMH is controlled
by a session control layer.
Directly connected to the NMH there is the following
standard peripheral equipment:
- Disc unit with 80 Mbyte moving head and 2 Mbyte
fixed head capacity
- Disc unit with 80 Mbyte exchangeable disc pack
- Line printer (600 LPM)
- Magnetic tape unit
- Floppy disk unit
- 3 Visual Display Units
The functional areas of the NMH application are the
following:
- Software development and maintenance
- Maintenance of the data bases for: Network Configuration
and Network Inventory List
- Downline loading of software and configuration
data
- Processing of data for statistics and for cost
and billing.
- Network modelling functions and automated test and emulation functions for real-life tests.
6.10.2 S̲o̲f̲t̲w̲a̲r̲e̲ ̲D̲e̲v̲e̲l̲o̲p̲m̲e̲n̲t̲ ̲a̲n̲d̲ ̲M̲a̲i̲n̲t̲e̲n̲a̲n̲c̲e̲ ̲F̲u̲n̲c̲t̲i̲o̲n̲s̲
The NMH provides all the tools necessary for software development and maintenance.
The following programming languages are available for
the CR80D computers:
- SWELL, which is an intermediate level systems programming
language
- PASCAL, which is a general purpose high level programming
language
- COBOL, which is a high level language for administrative application programming.
Furthermore, the new DOD standard programming language
ADA will be available for the CR80D computers from
1984.
The NMH software development and maintenance package
comprises the following functions:
- Source text editor
- Language processors for SWELL, PASCAL, and COBOL
- A comprehensive set of test tools for debugging, tracing, and performance measurement
- A comprehensive set of file-handling utilities (copying, moving, patching, comparing, backup, etc.)
- All the libraries necessary for software development and maintenance.
6.10.3 M̲a̲i̲n̲t̲e̲n̲a̲n̲c̲e̲ ̲o̲f̲ ̲t̲h̲e̲ ̲C̲o̲n̲f̲i̲g̲u̲r̲a̲t̲i̲o̲n̲ ̲D̲a̲t̲a̲ ̲B̲a̲s̲e̲
The Configuration Data Base (CDB) is a set of data structures giving the physical and logical relations between components in the terminal access network and the host access network. The key items in the CDB, illustrated by the sketch following the list, are as follows:
- End user devices (e.g. terminals, printers, host channels) - each device has an associated logical identification, a physical path (relating to inventory item attributes), and an attribute list (e.g. type, privileges, protocol).
- Inventory items - a list of all "line replaceable units" in the access networks and the backbone network, ordered per equipment type. Each unit has a record defining all external logical and physical connections, with cross references to the device path definitions for consistency checking.
- User catalogue - each user allowed access to the network is identified by a user profile, i.e. a user ID, password, and user attributes (e.g. type, privileges, priority).
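A schematic Python sketch of the three key CDB item types; the field names are illustrative, not the proposal's actual record layout.

    # Minimal sketch (hypothetical schema) of the three CDB item types.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Device:                       # end user device
        logical_id: str
        physical_path: List[str]        # refers to inventory item attributes
        attributes: Dict[str, str]      # e.g. type, privileges, protocol

    @dataclass
    class InventoryItem:                # one "line replaceable unit"
        unit_id: str
        equipment_type: str
        connections: List[str]          # external logical/physical links,
                                        # cross-checked against device paths

    @dataclass
    class UserProfile:                  # user catalogue entry
        user_id: str
        password: str
        attributes: Dict[str, str]      # e.g. type, privileges, priority

    agent = UserProfile("AGT42", "secret", {"type": "agent", "priority": "2"})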
The data base exists in three versions at the NMH: the previous version, the current version, and the version under update; only the latter is subject to changes. The data base is built on the index sequential access method CRAM (refer to section 6.2.4.1.6.2).
Updates may be performed interactively or in batch mode.
Data base consistency is checked during interactive update. In batch mode the data compiler allows for unresolved references, enabling batch mode changes to be entered in any order.
The interactive update functions (delete/insert/change a record) are performed by filling out preformatted menus presented to the NMH operator. Note that tools for generation of interactive procedures (a parsing system) are included in the standard software utility package.
6.10.4 D̲o̲w̲n̲-̲l̲i̲n̲e̲ ̲l̲o̲a̲d̲i̲n̲g̲ ̲o̲f̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲ ̲a̲n̲d̲ ̲C̲o̲n̲f̲i̲g̲u̲r̲a̲t̲i̲o̲n̲ ̲D̲a̲t̲a̲
The function of down-line loading of software, software patches, configuration data, etc. is accomplished by means of a commanding process in the NMH communicating with a command receiving process in an NCC computer via the File Transfer Protocol (refer to section 6.6.1).
The commanding process is the master at the application level and is controlled by means of NMH operator
input commands. The command receiver process executes
the commands by means of standard utilities or programmed
requests to the operating system. Thus the following
types of remote control are possible:
- Copy a file from the NMH to the NCC's connected to the backbone network.
- Copy a file from the NCC's to the NMH. Note that the file may be a memory dump during remote programme debugging.
- It should be noted that the files may be configuration data files or software object files.
- Perform an operating system command, i.e. load/create/start/stop/restart a process in the NCC from the NMH.
- Dump a specified memory section to disc on an NCC.
- It should be noted that LTU firmware is treated in the same way as software.
The distribution process to the nodes, FEP's, EMH or
Gateway is controlled centrally by the active NCC (ref.
section 6.9.10).
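A minimal sketch of the master/slave command exchange: the commanding process in the NMH issues one remote command per operator input, and the command receiver in the NCC executes it by means of utilities or operating system requests. The command names below are hypothetical.

    # Minimal sketch (hypothetical command names) of the NMH/NCC exchange.
    def command_receiver(command: str, argument: str) -> str:
        """Runs in the NCC; executes one NMH command, reports the result."""
        if command == "COPY_TO_NCC":
            return f"file {argument} received from NMH"
        if command == "COPY_TO_NMH":
            return f"file {argument} (possibly a memory dump) sent to NMH"
        if command == "OS_COMMAND":
            return f"operating system request executed: {argument}"
        if command == "DUMP_MEMORY":
            return f"memory section {argument} dumped to disc"
        return "unknown command rejected"

    # Commanding process side: each operator input becomes one command.
    for cmd, arg in [("COPY_TO_NCC", "NODE.SW.REL21"),
                     ("OS_COMMAND", "RESTART PROCESS ROUTER")]:
        print(command_receiver(cmd, arg))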
6.10.5 P̲r̲o̲c̲e̲s̲s̲i̲n̲g̲ ̲o̲f̲ ̲R̲a̲w̲ ̲D̲a̲t̲a̲ ̲f̲o̲r̲ ̲S̲t̲a̲t̲i̲s̲t̲i̲c̲s̲ ̲a̲n̲d̲ ̲B̲i̲l̲l̲i̲n̲g̲ ̲I̲n̲f̲o̲r̲m̲a̲t̲i̲o̲n̲
The NMH has an acquisition programme that collects the raw data for statistics and cost and billing information compiled at the NCC (refer to section 6.9). The raw data is structured into two data bases:
- statistical data base
- cost and billing data base
All the data compiled at the nodes or the NCC are kept, and a comprehensive set of catalogues is built, enabling easy access for customer-defined applications.
6.10.6 N̲e̲t̲w̲o̲r̲k̲ ̲M̲o̲d̲e̲l̲l̲i̲n̲g̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲ ̲a̲n̲d̲ ̲S̲u̲p̲p̲o̲r̲t̲ ̲f̲o̲r̲ ̲A̲u̲t̲o̲m̲a̲t̲i̲c̲
̲N̲e̲t̲w̲o̲r̲k̲ ̲T̲e̲s̲t̲s̲
o Network modelling and real life network tests are
addressed in this section.
Testing the consequences of network topology changes can be performed by means of the "Network Model", which is a mathematical model of the network, or by means of the "Automated Test & Emulation System" - ATES (ref. appendix F).
6.10.6.1 N̲e̲t̲w̲o̲r̲k̲ ̲M̲o̲d̲e̲l̲
The Network Modelling Software Package is a mathematical
model of the entire network including submodels for
all types of equipment.
The model is used for consequence calculation of network topology changes. The model includes the entire backbone network, the entire terminal access network and the entire host access network.
Input to the network model consists of the following data structures:
- The test configuration data base defining the entire
network including terminal access net and host
access net.
- A library of submodels for each type of equipment
(terminal, mux, ICC, node, host). The submodels
are complex queue models which can be updated by
library update procedures.
- A traffic profile per terminal connected to the network, selected from a predefined library of traffic profiles. The traffic profile defines the type of traffic, the variation within a 24 hour period, the traffic volume in terms of number of transactions and number of input and output characters per transaction, and the destination/source of traffic. The traffic profiles may be updated by library update procedures.
Initial values for all network parameters may be specified
before a simulation.
A simulation for a specified time period is performed by the model, and output is produced in the form of a log data file.
A special data reduction S/W package is provided for
output of log data from the model.
The model is able to provide the following set of data:
- Mean value, standard deviation, and maximum value over the simulation period of the following parameters:
 - Queue lengths for all queues in the model
 - Terminal response times
 - Line utilization
- The parameter sensitivity to an increase/decrease of the network throughput is given for each of the above parameters.
- A histogram over a specified time period of the above parameters may be output.
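A minimal sketch of the data reduction step: computing the mean value, standard deviation, and maximum of one logged parameter (here a queue length) over the simulation period. The log format is illustrative only.

    # Minimal sketch of log data reduction for one model parameter.
    import math

    def reduce_parameter(samples):
        """Return (mean, standard deviation, maximum) of a simulation log."""
        n = len(samples)
        mean = sum(samples) / n
        variance = sum((x - mean) ** 2 for x in samples) / n
        return mean, math.sqrt(variance), max(samples)

    queue_log = [0, 2, 3, 5, 4, 2, 1, 0]       # queue length per model tick
    print(reduce_parameter(queue_log))          # (2.125, 1.69..., 5)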
6.10.6.2 N̲e̲t̲w̲o̲r̲k̲ ̲T̲e̲s̲t̲i̲n̲g̲ ̲b̲y̲ ̲M̲e̲a̲n̲s̲ ̲o̲f̲ ̲A̲T̲E̲S̲
The "Automated Test & Emulation System" (ATES) is a
general test drive system developed by Christian Rovsing
for the real-life test of major military communication
systems.
The software driving this system may be downline loaded
via the NMH and executed on redundant equipment in
the nodes, the front-ends or the EMH.
The functional capabilities of the ATES are described in Appendix F.
The NMH contains all the software necessary for running
a real-life test using the ATES:
- Script Compiler (scripts to be produced at the
NMH)
- Online Test Controller
- Log File Editor (data reduction program)
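As an illustration of how these tools fit together, the Python sketch below shows what a compiled script might look like and how the Online Test Controller could replay it. The script format, terminal names, and responses are assumptions for illustration, not the actual ATES format.

    # Minimal sketch (hypothetical format): a timed sequence of emulated
    # terminal inputs with expected responses, replayed against the network.
    script = [
        # (time offset in s, emulated terminal, input text, expected reply)
        (0.0, "T-101", "SIGN IN AGT42", "READY"),
        (1.5, "T-101", "AVAIL YYZ YUL", "FLIGHT LIST"),
        (4.0, "T-101", "SIGN OUT",      "SIGNED OUT"),
    ]

    def run(script, send):
        """Replay the script; 'send' transmits input, returns the reply."""
        failures = [(t, term, inp) for (t, term, inp, expect) in script
                    if send(term, inp) != expect]
        return failures              # logged for the Log File Editor

    print(run(script, lambda term, inp:
              "READY" if inp.startswith("SIGN IN")
              else "FLIGHT LIST" if inp.startswith("AVAIL")
              else "SIGNED OUT"))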