CHAPTER 3
DOCUMENT III TECHNICAL PROPOSAL Apr. 29, 1982
LIST OF CONTENTS

3.        PROPOSED SOLUTION
3.1       Introduction
3.2       Proposed Technical Solution
3.2.1     Scope of Air Canada Data Network
3.2.2     Proposed Network Architecture
3.2.2.1   Network Interface Environment
3.2.2.2   Communication User Environment
3.2.2.3   Transmission Environment
3.2.2.4   Data Link Environment
3.2.2.5   Network Service Environment
3.2.2.6   Network Topology
3.2.2.7   Network Architecture Realisations
3.2.3     Functional Overview
3.2.4     External Environments
3.2.5     Deliverables
3.3       Proposed Hardware Equipment
3.4       Proposed Software Packages
3.4.1     Access Software
3.4.2     Nodal Switching Software
3.4.3     Network Control Software
3.4.4     Network Management Software
3.4.5     Electronic Mail Software
3.5       Performance
3.5.1     Node Modelling
3.5.1.1   End-to-End Response Time
3.5.1.2   Processor Utilization and Capacity
3.5.1.3   Memory Utilization
3.5.1.4   Internodal Trunk Utilization
3.5.2     Gateway
3.5.3     EMH Modelling
3.5.4     Reliability and Availability
3.6       Baseline Capacity and Projected Growth
3.6.1     Baseline Capacity
3.6.2     Projected Growth
3.6.3     Block Diagrams
3.7       Telecommunications
3.8       Options
3.8.1     Videotex
3.8.2     High Density Digital Tape Recordings

LIST OF FIGURES

3.1-1   Present Air Canada Data Networks
3.2-1   Proposed Architecture
3. P̲R̲O̲P̲O̲S̲E̲D̲ ̲S̲O̲L̲U̲T̲I̲O̲N̲
Air Canada is embarking on establishing a network base
for the 1980s with full knowledge of the positive impact
this will have on Air Canada's operations, on services
to passengers, and on the airline industry's commercial
infrastructure. Accordingly, we present in this chapter
our solution in the context of a broad perspective on
the trends in the data communications industry in the
1980s.
The key themes behind our proposed solution are:

High availability:      Through fault tolerance and
                        redundant hardware and through a
                        unique network control philosophy.

Granular Expandability: Through a scheme that exploits
                        multiprocessing configurations and
                        distributed functions.

Investment Protection:  Through a networking scheme that
                        supports heterogeneous hosts, data
                        transport services, and terminal
                        types, and that is based on
                        available standards.

Subscriber services:    By exploiting available standard
                        software and firmware for host
                        support and terminal device support.
3.1 I̲n̲t̲r̲o̲d̲u̲c̲t̲i̲o̲n̲
The scope of this chapter is to convey to the Technical
Management function in Air Canada the underlying precepts
of the solution proposed by Christian Rovsing to meet
the functional requirements contained in the Air Canada
RFP.
This chapter also serves to provide a high level breakdown
of the proposed Air Canada Data Network including the
associated rationale. The breakdown is covered in terms
of hardware and software architectures. The software
is presented in terms of the 7-layer OSI reference
model. Additionally, this chapter presents the predicted
performance and response times.
The proposed solution is presented in a broad context
in section 3.2. Subsection 3.2.1 provides the scope
of the backbone network as perceived by Air Canada
while subsection 3.2.2 presents the proposed network
architecture by covering internal environments, network
topology together with possible network realizations.
Subsection 3.2.3 gives a functional overview on the
proposed Air Canada Data Network, while subsection
3.2.4 defines and presents the external "environments"
to which the proposed Air Canada Data Network provides
service.
The deliverable hardware and software mapping on the
environments of the architecture is provided in subsection
3.2.5.
Section 3.3 provides the high-level break-down of the
hardware.
The software functional basis and structure is presented
in section 3.4. This section covers interfaces to
the backbone network from users, i.e. from host computers,
other networks, via the Gateway to the existing network,
and to terminal equipment.
Results of the performance analysis, i.e. response
times and capacities, are presented in section 3.5.
Finally, section 3.6 discusses the projected growth
capabilities of the network with respect to capacity
and functionality, 3.7 describes the telecommunication
support provided by CNCP, while 3.8 discusses potential
growth areas.
Figure III 3.1-1 PRESENT AIR CANADA DATA NETWORKS
3.2 P̲r̲o̲p̲o̲s̲e̲d̲ ̲T̲e̲c̲h̲n̲i̲c̲a̲l̲ ̲S̲o̲l̲u̲t̲i̲o̲n̲
3.2.1 S̲c̲o̲p̲e̲ ̲o̲f̲ ̲A̲C̲D̲N̲
Air Canada Data Network (ACDN) is perceived as a transparent
common communication service capable of interfacing
a variety of terminals, terminal concentrators, and
a variety of general purpose mini or mainframe based
computing facilities.
The computing facilities of Air Canada as they exist
today, are presented in a generalized and simplified
view in Figure III 3.1-1 "Present Air Canada Data
Networks". Each of the networks provides specific
sets of services to the user.
The discernible premises for the RFP are:
... a high degree of transparency in the network,
necessitated by the diversity of users and services
... a need to interface to and permit access to
application resources in UNIVAC, HONEYWELL, IBM
mainframes.
... a need to provide network management and
administration facilities that can be exploited by the
network maintenance staff organization at the
central a̲n̲d̲ remote sites.
... a need to support new services like facsimile data
transmission and digitised voice transmission.
... a need to coexist with external networks that
provide complementary services and coverage.
... a need to realise a stable migration from the
existing network environment to an enhanced
network environment.
When the above premises are mapped onto the available
networking solutions from Christian Rovsing, a primary
rationale for the proposed Air Canada Data Network
(ACDN) emerges.
The solution to the Air Canada Data Network proposed
herein, is based on a network architecture which has
evolved over the last five years and is used in national
as well as international private networks, where high
performance, reliability, security and flexibility
were essential.
The proposed network architecture has been designed
to create a communication mechanism which supports
a wide range of user applications, host computer systems
and interconnect technologies. Specifically, the goals
are the following:
a. C̲r̲e̲a̲t̲e̲ ̲a̲ ̲C̲o̲m̲m̲o̲n̲ ̲U̲s̲e̲r̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲:̲ The application
interface to the network should support a broad
spectrum of application communication requirements
and should be common across the varied implementations.
Within such a network environment, applications
may be moved among the systems in the network,
with the common interface hiding the internal characteristics
and topology of the network.
b. S̲u̲p̲p̲o̲r̲t̲ ̲a̲ ̲W̲i̲d̲e̲ ̲R̲a̲n̲g̲e̲ ̲o̲f̲ ̲C̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲ ̲F̲a̲c̲i̲l̲i̲t̲i̲e̲s̲:̲
The network should be adaptable to changes in
communication technology and operate with a variety
of communication channels using appropriate cost
effective technology. Today this may be leased
ground circuits, in a few years it may be satellite
channels.
c. B̲e̲ ̲C̲o̲s̲t̲ ̲E̲f̲f̲e̲c̲t̲i̲v̲e̲:̲ The network should approach
the efficiency and performance of a network designed
specifically for a given application. In addition,
factoring in the reduced development effort by
using an existing network product, a distributed
application development should have a good performance
to cost ratio.
d. S̲u̲p̲p̲o̲r̲t̲ ̲a̲ ̲W̲i̲d̲e̲ ̲R̲a̲n̲g̲e̲ ̲o̲f̲ ̲T̲o̲p̲o̲l̲o̲g̲i̲e̲s̲:̲ The architecture
should support communication between users independent
of the intervening data transport network. The
interconnection structure of the computers and
communication facilities should not affect the
logical communication capabilities of the applications.
e. B̲e̲ ̲H̲i̲g̲h̲l̲y̲ ̲A̲v̲a̲i̲l̲a̲b̲l̲e̲:̲ The overall operation of
the network should not be adversely affected by
the failure of a topologically noncritical node
and/or channel.
f. B̲e̲ ̲E̲x̲t̲e̲n̲s̲i̲b̲l̲e̲:̲ The architecture should allow for
the incorporation of future technology changes
in hardware and/or software; for the movement of
functions across the hardware/software boundary,
taking advantage of new technological innovations
in both domains; and for the subsettability of
functions to allow smaller, lower performance nodes.
g. B̲e̲ ̲E̲a̲s̲i̲l̲y̲ ̲I̲m̲p̲l̲e̲m̲e̲n̲t̲a̲b̲l̲e̲:̲ The architecture should
be independent of the internal characteristics
of the hosts and their operating systems and be
easily and efficiently implemented on a wide variety
of heterogeneous hardware and software.
h. U̲s̲e̲ ̲a̲ ̲H̲i̲e̲r̲a̲r̲c̲h̲i̲c̲a̲l̲ ̲L̲a̲y̲e̲r̲e̲d̲ ̲S̲t̲r̲u̲c̲t̲u̲r̲e̲:̲ This will
create a highly flexible structure with ease of
layer replacement and modularity.
i. U̲n̲i̲f̲o̲r̲m̲l̲y̲ ̲A̲d̲d̲r̲e̲s̲s̲ ̲a̲l̲l̲ ̲N̲o̲d̲e̲s̲:̲ The topology should
not restrict access. Nodes should be characterized
only by the functions they perform, and not by
their location in the network.
j. I̲m̲p̲l̲e̲m̲e̲n̲t̲ ̲F̲u̲n̲c̲t̲i̲o̲n̲s̲ ̲a̲t̲ ̲t̲h̲e̲ ̲H̲i̲g̲h̲e̲s̲t̲ ̲P̲r̲a̲c̲t̲i̲c̲a̲l̲ E̲f̲f̲i̲c̲i̲e̲n̲t̲
̲L̲e̲v̲e̲l̲ ̲W̲i̲t̲h̲i̲n̲ ̲t̲h̲e̲ ̲S̲t̲r̲u̲c̲t̲u̲r̲e̲:̲ Such functions as
network control and maintenance should execute
at the level of application programs.
k. B̲e̲ ̲D̲y̲n̲a̲m̲i̲c̲:̲ Protocols should be flexible to change;
new modules and functions should be easily added
within the structure.
The general purpose computer facilities may include
software for supporting remote terminals and associated
communication procedures. ACDN, as conceived and presented
in this proposal, excludes all such facilities in their
entirety. However, the implied transparency in the
ACDN is such that a̲n̲y̲ general purpose computing facility
can be interfaced to ACDN.
It is our conviction that in the 1980s there is no
real need to distinguish between terminals and computing
facilities, and as such we use the words synonymously.
However, there is a distinction between the interfacing
mechanisms which are attachments to the ACDN and those
which are participants. This distinction is as follows:
a. A̲t̲t̲a̲c̲h̲m̲e̲n̲t̲s̲ ̲t̲o̲ ̲A̲C̲D̲N̲
Any computing facility can be interfaced to the ACDN
as an "attachment". Attachment implies that a computing
facility, irrespective of its complexity and scope,
behaves like a single terminal or a cluster of terminals
with predetermined characteristics.

Attachment implies that a computing facility so
interconnected does not play any role in providing
and controlling the resources made available to
the whole spectrum of ACDN users.
b. P̲a̲r̲t̲i̲c̲i̲p̲a̲n̲t̲s̲ ̲i̲n̲ ̲A̲C̲D̲N̲
Any computing facility can be interfaced to ACDN
as a "participant". Being a participant implies
that application resources contained in that computing
facility are made available to all ACDN users.
The control functions in the ACDN monitor the status
of the application resources in each participant
and maintain the validity of the resource status
on a network-wide basis. Such a participant is
also referred to synonymously as a "HOST".
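The attachment/participant distinction can be sketched with a small data model (an illustrative sketch only; the class and field names are ours, not part of the proposal):

```python
# Illustrative sketch: an attachment behaves like a terminal cluster and
# offers no resources; a participant (HOST) publishes application resources
# whose status the network control functions track network-wide.

class Facility:
    def __init__(self, name):
        self.name = name

class Attachment(Facility):
    """Behaves like a single terminal or terminal cluster; offers no resources."""
    def offered_resources(self):
        return []

class Participant(Facility):
    """A HOST: its application resources are made available to all ACDN users."""
    def __init__(self, name, resources):
        super().__init__(name)
        self.resources = dict.fromkeys(resources, "available")
    def offered_resources(self):
        return list(self.resources)

def network_resource_status(facilities):
    """NSE-style control function: collect resource status across participants."""
    status = {}
    for f in facilities:
        if isinstance(f, Participant):
            for r, s in f.resources.items():
                status[(f.name, r)] = s
    return status

host = Participant("RES-HOST", ["reservations", "ticketing"])
crt = Attachment("CRT-405")
print(network_resource_status([host, crt]))
```

Note that the attachment contributes nothing to the network-wide status table, which is exactly the distinction drawn above.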
3.2.2 P̲r̲o̲p̲o̲s̲e̲d̲ ̲N̲e̲t̲w̲o̲r̲k̲ ̲A̲r̲c̲h̲i̲t̲e̲c̲t̲u̲r̲e̲
This section presents the proposed Network Architecture.
The layered structure of the internal environments
of the network is presented and the basic services
provided by these layers are described, in particular
the services provided to the users in external environments.
The hardware and software which constitute the deliverables
to Air Canada are described in subsection 3.2.5.
In the context of the scope of the ACDN a set of interrelated
"environments" can be identified as the component parts
of the general network architecture, to which ACDN
relates.
Essential to the proposed architecture is that entities
communicate with corresponding remote entities in the
same environment.
Figure III 3.2-1 shows the environments and their
relationship.
The following five internal environments are the component
parts of the general network architecture:
- Network Services Environment
- Network Interface Environment
- Communications User Environment
- Data Transmission Environment
- Data Link Environment
The Network Interface Environment provides the physical
and logical interface with attachments and participants
to the external environment. It relieves the interconnected
equipment of the burden of providing the appearance
expected by either end of a connection.
The Communication User Environment consists of entities
that establish and maintain an orderly exchange of
information between two users.
The Data Transmission Environment consists of entities
that provide the actual communication path through
the ACDN to support the dialogue between two users.
Fig. III 3.2-1…01…PROPOSED ARCHITECTURE
The Data Link Environment consists of firmware entities
to transport data over physical links and to support
the necessary protocol facilities dictated by the attachment
or participant interface.
The Network Service Environment consists of entities
to support complete network control as well as the
provision of special services to users of the network.
The various communication software products are designed
in a structured manner conforming to the ISO reference
model for Open System Interconnection (OSI).
The proposed architecture is mapped on the OSI as follows:
o presentation layer ... Network Interface
Environment (NIE)
o session layer ... Communication User
Environment (CUE)
o transport layer
o network layer ... Data Transmission
Environment (DTrE)
o link layer
o physical layer ... Data Link Environment
(DLE)
System Management functions and services:
o network management
o system management
o layer management
o service applications ... Network Services
Environment (NSE)
Figure III 3.2-2 illustrates how these ACDN environments
are mapped onto the OSI architecture.
Figure III 3.2-2…01…ACDN/OSI REFERENCE MODEL MAPPING
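The environment-to-layer mapping listed above can be tabulated directly (the names are as stated in the proposal; the table itself is our illustration):

```python
# Direct tabulation of the ACDN-to-OSI mapping given in the text.
ACDN_OSI_MAP = {
    "presentation": "NIE",   # Network Interface Environment
    "session":      "CUE",   # Communication User Environment
    "transport":    "DTrE",  # Data Transmission Environment
    "network":      "DTrE",
    "link":         "DLE",   # Data Link Environment
    "physical":     "DLE",
}

def environment_for(osi_layer):
    """Return the ACDN environment realising a given OSI layer."""
    return ACDN_OSI_MAP[osi_layer]

print(environment_for("session"))  # CUE
```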
3.2.2.1 N̲e̲t̲w̲o̲r̲k̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲ ̲E̲n̲v̲i̲r̲o̲n̲m̲e̲n̲t̲
The Network Interface Environment (NIE) as the outermost
layer provides the necessary services to any user accessing
an attachment or a participant. The NIE makes the user
appear as a valid end user either as defined in the
destination or through a mapping onto a virtual network
protocol.
The NIE provides this service by a conversion or translation
of the end user message level protocols and data formats
into protocol and data formats dictated by the communication
user environment of the network.
Virtual Service Providers (VSP) provide the presentation
service entities which are responsible for mapping
the end user protocol to a network conversation protocol.
VSPs are software entities which implement the following
functions:
o The sequence number on the data units handled by the
end user protocol is mapped to the sequence number
of the data units handled by the network conversation
protocol.

o The segmentation and/or blocking indicators in the
end user protocol data unit are mapped into corresponding
control indicators for the data units handled by
the Session Service Providers (SSP) of the Communication
User Environment.

o The throughput control indicators in the end user
protocol are mapped to the flow control indicators
in the conversation protocol.

o The presentation control indicators in the end user
protocol are mapped to the device control indicators
in the conversation protocol.
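The four VSP mappings can be sketched as a pure translation step (a hypothetical sketch; the field names are illustrative, since the proposal defines only the mappings themselves):

```python
# Hypothetical sketch of a Virtual Service Provider translation.
# Field names are our own illustration of the four mappings in the text.
def vsp_translate(end_user_unit):
    """Map an end-user protocol data unit to a conversation-protocol unit."""
    return {
        "seq":          end_user_unit["sequence_number"],   # sequence number mapping
        "segmentation": end_user_unit["block_indicator"],   # segmentation/blocking -> SSP control indicators
        "flow_ctl":     end_user_unit["throughput_ctl"],    # throughput -> flow control
        "device_ctl":   end_user_unit["presentation_ctl"],  # presentation -> device control
        "data":         end_user_unit["data"],
    }

unit = {"sequence_number": 7, "block_indicator": "last",
        "throughput_ctl": "normal", "presentation_ctl": "page",
        "data": b"PNR QUERY"}
print(vsp_translate(unit)["seq"])  # 7
```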
The NIE may provide a second level of translation or
conversion of protocols, depending on whether a host
facility is interfaced as an attachment or participant.
The second level of conversion is needed for participant
interfacing. For these, the second level of translation
converts protocols and data formats dictated by communication
I/O software in the host to the protocols and data
formats dictated by the communication user environment.
3.2.2.2 C̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲ ̲U̲s̲e̲r̲ ̲E̲n̲v̲i̲r̲o̲n̲m̲e̲n̲t̲
The Communication User Environment (CUE) provides the
necessary services to a network user in establishing
and maintaining a conversation with another user or
logical unit in a host.
The Transport Access Port (TAP) is a fundamental entity
of the network. The TAP is the basis for providing
a given number and type of communication channel to
the network user. It provides the means for implementing
network resource control. The TAP has the following
characteristics:
- a finite fluctuating bandwidth
- a finite predictable error rate
- a unique identity recognisable throughout the
network
Two end users exchange data through a "conversation",
which is the highest level of protocol within the network.
The conversation is by definition at a level below
the end-user-to-end-user protocol. Each end user must
have a TAP assigned before a conversation can be opened.
The conversation protocol ensures proper flow of user
messages between the associated two TAPs and provides
error recovery facilities to avoid loss of messages.
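The loss-avoidance idea in the conversation protocol can be sketched as follows (our illustration only; the proposal does not specify the recovery mechanism in this detail):

```python
# Illustrative sketch: the receiving side of a conversation checks
# sequence numbers and requests retransmission when a gap indicates
# that a message was lost between the two TAPs.
class ConversationReceiver:
    def __init__(self):
        self.expected = 0      # next sequence number we should see
        self.delivered = []    # messages passed up in order
    def receive(self, seq, msg):
        if seq != self.expected:
            return ("RETRANSMIT", self.expected)  # error recovery request
        self.delivered.append(msg)
        self.expected += 1
        return ("ACK", seq)

rx = ConversationReceiver()
print(rx.receive(0, "A"))  # ('ACK', 0)
print(rx.receive(2, "C"))  # ('RETRANSMIT', 1) -- message 1 was lost
```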
Referring to Fig. III 3.2-3, "Network Protocols",
end-user-to-end-user protocol is determined by the
complex of the host facilities. There may be as many
end user protocols as there are unique types of hosts
and distinctly supported end user types amongst the
hosts. In the proposed ACDN there is only one conversation
protocol.
Figure III 3.2-3…01…NETWORK PROTOCOLS
The conversation is established between the Session
Service Provider (SSP) entities of the CUE. The SSP
is a software entity. The data areas and control functions
of an SSP implement the following functions:
o Accept and validate a user request for a communication
service provided by the network.
o Assign a Transport Access Port (TAP) under a unique
identity.
o Interrogate the end users and determine the basic
conversation parameters.
o Based on the conversation parameters obtained from
the user, allocate the resources to the TAP required
to support a chosen traffic type and chosen conversation
protocol characteristics.
o Reserve and bind the TAP resources from the Network
Services Environment.
The TAP services provide the operational facilities
necessary to sustain open conversations by:
o Managing resources allocated to the active transport
access paths.
o Monitoring the operation of the conversation protocols.
o Supporting the hierarchical priority system with
the following levels
- messages generated by the network services
environment to report node or other network
catastrophic failure conditions are given the
highest priority,
- messages generated by the NSE to change routing
primitives at the nodes are given priority
at the same level as protocol responses indicating
the loss of integrity in the end-user-to-end-
user protocol,
- transport user traffic,
- interactive user traffic,
- batch user traffic,
- messages exchanged by the NSEs distributed
in the various nodes and the Network Control
Center for collecting statistical data, are
assigned the lowest priority.
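The hierarchical priority system above can be transcribed as a set of priority levels feeding a priority queue (the numeric ordering is ours, taken directly from the list; 0 is highest):

```python
import heapq

# Priority levels transcribed from the list above (0 = highest priority).
CATASTROPHIC_FAILURE = 0   # NSE messages reporting node/network failure
ROUTING_OR_INTEGRITY = 1   # routing-primitive changes / end-to-end integrity loss
TRANSPORT_TRAFFIC    = 2
INTERACTIVE_TRAFFIC  = 3
BATCH_TRAFFIC        = 4
STATISTICS           = 5   # NSE <-> Network Control Center statistics collection

queue = []
heapq.heappush(queue, (BATCH_TRAFFIC, "nightly report"))
heapq.heappush(queue, (CATASTROPHIC_FAILURE, "node 3 down"))
heapq.heappush(queue, (INTERACTIVE_TRAFFIC, "PNR lookup"))
print(heapq.heappop(queue)[1])  # node 3 down
```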
3.2.2.3 D̲a̲t̲a̲ ̲T̲r̲a̲n̲s̲m̲i̲s̲s̲i̲o̲n̲ ̲E̲n̲v̲i̲r̲o̲n̲m̲e̲n̲t̲
The Data Transmission Environment (DTrE) forms the
outermost environment of the transport network.
o The DTrE provides communication services to a pair
of SSPs engaged in a conversation so that the total
usage of the physical media connecting the two
SSPs is maximized. This is achieved by utilizing
the datagram technology as the basis for this environment.
o The SSPs can request either a datagram service
or a virtual connection service from the DTrE.
The latter is achieved by the source and destination
DTrE's guaranteeing the sequence in the data flowing
between the communicating SSPs.
o The virtual connections may be "switched" or "permanent".
o DTrE provides a number of logical channels for
transmitting data.
o Each logical channel supports traffic in both directions
and is independent of the traffic on other channels.
o DTrE assigns "ports" to communication system users
to access the transport network.
o For certain pre-assigned attachments or participants,
the DTrE provides permanent associations between
the relevant SSPs.
o DTrE maintains a bi-directional interface for each
trunk interconnecting the network nodes with the
Data Link Environment.
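The difference between the two DTrE services can be sketched in a few lines (our illustration): a datagram service delivers units as they arrive, while a virtual connection has the source and destination DTrEs guarantee sequence.

```python
# Illustrative sketch of the two DTrE services.
def datagram_deliver(packets):
    """Datagram service: delivered as received, no ordering guarantee."""
    return [data for _, data in packets]

def virtual_connection_deliver(packets):
    """Virtual connection: re-sequence so the SSPs see data in order."""
    return [data for _, data in sorted(packets)]

arrived = [(2, "c"), (0, "a"), (1, "b")]   # (sequence, payload) as received
print(virtual_connection_deliver(arrived))  # ['a', 'b', 'c']
```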
3.2.2.4 D̲a̲t̲a̲ ̲L̲i̲n̲k̲ ̲E̲n̲v̲i̲r̲o̲n̲m̲e̲n̲t̲
The Data Link Environment is the second inner environment
of the transport network. The DLE provides the necessary
functions to support a Data Link Control procedure
between two DTrEs utilising conventional transmission
media.
The DLE is a homogeneous entity and supports the LAPB
procedure defined as X.25 level 2.
3.2.2.5 N̲e̲t̲w̲o̲r̲k̲ ̲S̲e̲r̲v̲i̲c̲e̲s̲ ̲E̲n̲v̲i̲r̲o̲n̲m̲e̲n̲t̲
The Network Services Environment (NSE) provides the
facilities required to support
- connectivity of network users
- participatory interfaces
- maintaining the integrity of the network configuration
- collecting short term and long term statistics
on network services utilisation
- a set of commands to be used by ACDN operators
to monitor the static and dynamic status of the
network.
The NSE is implemented in a distributed manner. The
scope of the functions to support the above facilities
is presented below.
The NSE validates a user request to get connected to
the network and use its services. Validating the request
implies prior knowledge of the user by the NSE. An
important part of the network LOGON is authentication,
i.e. the validation of the identification of the user.
A result of a successful completion of a LOGON is
the creation of the capabilities against which authorization
is to be checked during subsequent network transactions.
It is emphasised that this LOGON to the network is
independent of and does not replace the LOGON procedure
that may be required by an application program in a
participatory host.
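The LOGON sequence described above, authentication followed by creation of the capabilities against which later transactions are authorized, can be sketched as follows (a hypothetical sketch; user names, secrets, and capability strings are illustrative only):

```python
# Hypothetical sketch of network LOGON: authenticate the user, then
# create the capability set checked on subsequent network transactions.
USERS = {
    "agent01": {"secret": "s3cret",
                "capabilities": {"open_conversation", "query_status"}},
}

def logon(user, secret):
    """Authenticate; on success return this session's capability set."""
    entry = USERS.get(user)
    if entry is None or entry["secret"] != secret:
        return None                      # authentication failed
    return set(entry["capabilities"])

def authorized(capabilities, action):
    """Check a later transaction against the session capabilities."""
    return capabilities is not None and action in capabilities

caps = logon("agent01", "s3cret")
print(authorized(caps, "open_conversation"))          # True
print(authorized(logon("agent01", "wrong"), "query_status"))  # False
```

As the text emphasises, this network LOGON is separate from any application-level LOGON a participatory host may require.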
Facilities needed to access and interrogate the status
and availability of applications are provided by the
NSE. These are realised by providing a set of emulated
application programs. These application programs
interrogate the responsible communication software
resident in the participating host. Such interrogations
provide the data needed by the NSE to validate connectivity
requests by a network user.
The NSE maintains the integrity of the network topology
by
- recognising link failure and implementing back-up
routing
- recognising node failures and implementing recovery
procedures
- recognising overload on links and implementing
back up routing
- recognising catastrophic failures at the NIE level
and removing the attachments or participants from
the network.
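The back-up routing behaviour in the list above can be sketched as a simple route selection over alternates (our illustration; route and link names are hypothetical):

```python
# Illustrative sketch of NSE integrity maintenance: on link failure or
# overload, fall back to an alternate route; if none survives, the
# destination is removed from the network view.
ROUTES = {("YUL", "YYZ"): [["L1"], ["L2", "L3"]]}   # primary route, then backup

def select_route(src, dst, failed_links):
    for route in ROUTES.get((src, dst), []):
        if not any(link in failed_links for link in route):
            return route
    return None   # unreachable: remove attachment/participant from network

print(select_route("YUL", "YYZ", failed_links={"L1"}))  # ['L2', 'L3']
```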
The NSE collects end user statistics primarily at the
level of an end-user-to-end-user connection. The NSE
initialises the statistics gathering at the NIE and
CUE level when an end user protocol is initialised.
The statistics are forwarded to the NSE once the end
user exchange is terminated.
3.2.2.6 N̲e̲t̲w̲o̲r̲k̲ ̲T̲o̲p̲o̲l̲o̲g̲y̲
The proposed ACDN is a specific implementation of general
network topologies supported by the network architecture.
The addressing structure allows unlimited topological
combinations covering both longhaul networks (C-NETs
based on CR 80 computers) and local area networks (based
on X-NET systems).
The users of the architecture are connected either
to a C-NET or an X-NET.
An arbitrary group of C-NETs forms a C-REGION. Similarly,
an arbitrary group of X-NETs forms an X-REGION.

In a C-REGION, a longhaul region, groups of up to 16
neighbouring C-NETs may be interconnected by a high-bandwidth
channel (S-NET). Such a constellation is referred to
as a C-NODE. C-NODEs may be interconnected by leased
lines or by employing the services of public data networks.

In an X-REGION, a high-bandwidth local region, a number
of X-NETs may be interconnected with other X-NETs via
X-NODEs. Up to 16 X-NETs (and one C-NET) can connect
to the same X-NODE. Conversely, up to 16 X-NODEs can
connect to the same X-NET.
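The stated fan-in limits can be captured as simple validity checks (our illustration of the constraints given above):

```python
# Illustrative checks of the topology constraints: up to 16 C-NETs per
# C-NODE; up to 16 X-NETs plus at most one C-NET per X-NODE.
def valid_c_node(c_nets):
    return 1 <= len(c_nets) <= 16

def valid_x_node(x_nets, c_nets):
    return len(x_nets) <= 16 and len(c_nets) <= 1

print(valid_c_node([f"C{i}" for i in range(16)]))      # True
print(valid_x_node([f"X{i}" for i in range(17)], []))  # False (17 X-NETs)
```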
Figure III 3.2-4 illustrates the architectural topological
possibilities.
The C-NODE software is implemented by means of the
following high level structure.
A C-NODE is a group of C-NETs which operates under
the control of a common operating system. To facilitate
the management of system resources, each C-NET's software
is isolated and controlled separately. The co-ordination
of the total resources at the C-NODE is facilitated
via the central management part of the Network Services
Environment, which stretches right across the C-NET
boundaries.
Figure III 3.2-5 illustrates this partitioning.
Figure III 3.2-4…01…Topology Example
Figure III 3.2-5 …01…C-NODE Software partitioning
3.2.2.7 N̲e̲t̲w̲o̲r̲k̲ ̲A̲r̲c̲h̲i̲t̲e̲c̲t̲u̲r̲e̲ ̲R̲e̲a̲l̲i̲s̲a̲t̲i̲o̲n̲s̲
A number of system elements may be implemented by
combining appropriate component parts of the network
architecture. The generic network elements presented
in this section highlight the realisations feasible
within the architecture, and are intended to provide
the Air Canada Technical Management function with a
perspective on the proposed ACDN.
Figure III 3.2-6 "Network Architecture Realisations"
illustrates how the internal environments may be combined
to form the following elements:
NCC - Network Control Center, which implements central
network applications.

NSH - Network Service Host, a dedicated network element
which implements network services; e.g. an NMH
provides network administrative and planning
services while an EMH provides a protected
message service. These elements are participants
in the network.
Node - A generalized term used for an integrated MIP,
TIP, and switching node.

Switching Node - the switching nodes are configured
to realise the CUE, DTrE, and DLE; thus
they provide the basic transport
services.
HIP - the Host Interface Processor implements a host
front-end capability combined with integration
into the network by means of the switching
node element of the HIP. The physical and
logical interface towards the host is a channel
which operates in accordance with the applicable
mainframe vendor standards.
TIP - the Terminal Interface Processor implements
the interfaces to the various terminal devices
to be attached to the network. The TIP is
integrated with the switching node.
MEDE - the Message Entry and Distribution Environment
is configured to realise the (remote) attachment
functions of the NIE. The physical hardware
configuration of the MEDE reflects the interfacing
requirements of the various devices to be attached.
Figure III 3.2-6…01…Network Architecture Realisations
3.2.3 F̲u̲n̲c̲t̲i̲o̲n̲a̲l̲ ̲O̲v̲e̲r̲v̲i̲e̲w̲
The proposed network provides the means for integrating
present as well as future computer and terminal facilities
assigned to the following environments:
- Host Environment
- External Network Environment
- Internal Network Environment
- Terminal Environment
The users, host applications and terminal operators,
use a combination of the facilities implemented in
the Host, Network, and Terminal Environments together
with those of the ACDN. The ACDN interconnects the
external environments and provides a level of integration
which makes the actual network topology and application
allocation transparent to the user. This is illustrated
in Figure III 3.2-7 "Air Canada Computing Environment".

Each of these environments is described in more detail
in the next section. The remainder of this section
summarizes the services and capabilities of the ACDN.
In addition to the packet switch based services which
implement interconnection between users, the network
itself provides the following services:
- Network Control
- Network Management
- Protected Message Service
The network control services are provided to designated
personnel, network supervisors and field technicians.
These services provide a centrally controlled environment
which protects the integrity of the network and ensures
consistent service to all users.
The network management services cover collection of
statistical information for billing purposes and the
provision of facilities for planning and development.
The protected message service provides acknowledged
message transport between users.
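The idea behind acknowledged message transport can be sketched in a few lines (our illustration; the proposal details the actual protected message service elsewhere):

```python
# Illustrative sketch of a protected message service: a message stays
# pending until the recipient acknowledges it, so delivery can be
# confirmed or the message retransmitted.
class ProtectedMessageService:
    def __init__(self):
        self.pending = {}
        self.next_id = 0
    def send(self, dest, text):
        mid = self.next_id
        self.next_id += 1
        self.pending[mid] = (dest, text)   # held until acknowledged
        return mid
    def acknowledge(self, mid):
        return self.pending.pop(mid, None) is not None
    def unacknowledged(self):
        return list(self.pending)

pms = ProtectedMessageService()
m = pms.send("YYZ-OPS", "gate change AC123")
print(pms.unacknowledged())   # [0]
print(pms.acknowledge(m))     # True
```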
Fig. III 3.2-7…01…AIR CANADA COMPUTING ENVIRONMENT
Flexibility in several dimensions is supported by the
proposed network:

- projected network expansion,
- local area network capability, and
- new equipment types.
The projected network expansion is seen as an evolution
progressing in small increments based on a continuous
adaptation to the connecting environments by means of
standard expansion elements and modules.
Support of local area networks includes the capability
to incorporate the Christian Rovsing local area network,
the X-NET, at travel agents' facilities. This net provides
a means for adding circuit switching to the packet
switching facilities implemented by the proposed network.
Addition of new equipment types is facilitated by the
modular structure of the proposed hardware and software.
Thus, addition of new equipment, whether hosts or
terminals, can take place with a minimum of customizing
and without interrupting the live network activities.
The proposed backbone network is well suited to the
dynamic environment in which Air Canada operates, an
environment characterised by high volumes of transaction
traffic. An architecture is proposed which essentially
provides Air Canada with open ended growth capability;
an architecture which allows optimal allocation of
services and terminations and which meets current and
projected requirements for connectivity and transaction
volumes; an architecture which enables Air Canada to
make use of the mainframe equipment best suited for a
given purpose.
The previous sections presented this general network
architecture; subsequent subsections provide a detailed
mapping of the deliverables proposed for the Air Canada
Data Network.
3.2.4 E̲x̲t̲e̲r̲n̲a̲l̲ ̲E̲n̲v̲i̲r̲o̲n̲m̲e̲n̲t̲s̲
A user of the backbone network is either a host application
or a subscriber using a (terminal) device. An important
function of the proposed network is to establish and
maintain connection between network users. The network
must present a stable environment to the user which
ensures data integrity and provides highly resilient
services for data exchange between users.
The term user as used in this proposal covers both
the requester of services and the provider of services.
A requester may be a subscriber from his terminal
while a provider may be an internal network service
in the form of PMS or external in the form of a host
application, e.g. ticket reservation. A requester may
also be a host application, etc. A session describes
the logical connection or the association between two
users of the network.
Privacy is important in a multiservice environment
like the one proposed for Air Canada. The proposed
network implements a high level of protection against
unauthorized disclosure of data and information, based
as it is on a hardware and operating system architecture
derived from well recognized security principles and
implementations. Higher levels of software reflect
the same discretionary access control and capability
check-out philosophy as implemented by the proposed
operating system.
The Terminal Environment consists of a multitude of
multivendor terminal equipment which provide the immediate
environment used by the subscribers of the Air Canada
data services. The proposed network fully supports
the following variety of terminal types:
- CRTs - types 405, 406, 407, 408
- Flight Information Displays (FIDs)
- Printers - attached to CRT's
  - teletype model 40
  - ticket printers - DI - AN
  - Extel
- Other devices
  - ASTAC (self ticketing machines)
  - MAC (microcomputer based travel agent
    administrative system)
- IBM 3270 compatible terminals
The backbone network provides not only connectivity but also
host transparency to the users of the terminal environment.
The proposed network supports multiple applications
running concurrently between a given user and relevant
hosts. The network establishes and maintains the connection(s)
between user and application(s) on behalf of the user.
The communication management functions are kept transparent
to the user where possible.
The Host environment consists of multiple multivendor
processors which provide the majority of the Air Canada
computer provided services. A major objective of the
proposed network is an architecture which allows
existing and new hosts to be integrated into the
Host Environment without constraining the selection
of processor equipment. The proposed ACDN interfaces a Host Environment
consisting of six hosts implementing the following
applications:
- Passenger Management (PMH)
- Reservation, VIA (VIA)
- Services Support Host (SUPP)
- Operations (OPNS)
- Cargo (CGO)
- Regional Carriers/
Corporate Services (RCCSH)
IBM and Univac mainframes with installed VTAM and CMS
respectively, may be interfaced and supported as participatory
members by the ACDN. Non IBM systems and IBM systems
not supporting VTAM are interfaced and supported as
attachments to ACDN at present. This proposal includes
an offer to develop a participatory interface to Honeywell
DPS/08 computing facilities.
The backbone network establishes and maintains connections
between host applications and other users. It allows
relieving the host of complex access software occupying
host resources and also the task of network monitoring
and control.
The network includes additional buffering capacity
to absorb the short bursts of retransmission activity
that result from host recovery. This added resilience
protects the service level of other network users.
Retransmission caused by short outages may be avoided
for a majority of transactions, resulting in a lower
load on the transport network.
The ACDN interconnects the External Networks Environment
which consists of a number of national and international
networks. This environment establishes paths between
Air Canada resources of the Host and Terminal environment
and external users and information providing sources,
i.e. resources which are not controlled by Air Canada.
The External Networks Environment consists of the
following data networks:
- SITA
- ARINC
- CNT
The Internal Network Environment covers the present
network excluding those hosts and terminals which are
moved to the proposed network, i.e.
- RES Host
- ACNC and connected concentrators and terminals.
This environment plays a major role in the proposed
migration from the existing network to the new network
as presented in this proposal. It provides the means
for enabling migration to take place without interrupting
the services provided to users.
3.2.5 D̲e̲l̲i̲v̲e̲r̲a̲b̲l̲e̲s̲
This section presents the ACDN deliverables in the
context of the general network architecture presented
in the previous section. The hardware elements and
software packages of the ACDN are defined and mapped
onto the internal environments of the network architecture.
The proposed Air Canada Data Network may be viewed
as comprising three environments:
- Network Interface Environment,
- Communications Environment, and
- Network Services Environment.
The Communications Environment consists of the Data
Transmission Environment and the Data Link Environment
of the general architecture. The Communications Environment
establishes and maintains essentially error-free communication
paths between any two users.
The communication environment implements the layers
1 to 5 in terms of the 7 layer OSI architecture.
The Network Service Environment includes applications
for network management and administration as well as
a protected message service. These value added services
are offered to users of the ACDN.
The relation between the internal and the external
environments is depicted in Figure III 3.2-7 "Proposed
Network Architecture". This figure and Figure III
3.2-8 provide the mapping of the hardware and software
which constitute the basic elements of the ACDN.
Figure III 3.2-7
Proposed Network Architecture
Figure III 3.2-8
Hardware and Software Mapping
H̲a̲r̲d̲w̲a̲r̲e̲ ̲M̲a̲p̲p̲i̲n̲g̲
The Network Interface Environment and the Communications
Environment are implemented hardwarewise sharing the
same processing equipment, Nodal Switch Processors
(NSPs). These are co-located with host equipment on
the following three locations: Toronto, Montreal and
Winnipeg. The NSPs are controlled locally by a redundant
Nodal Control Processor (NCP). The term node as used
in this proposal refers to the system made up of the
NCP and NSPs at a location.
The nodes provide the termination points to equipment
of all external environments whether in the form of
host data channels or communication lines to terminals
and to other networks. This allows Air Canada to terminate
communication lines of the external networks at the
"closest" node; thus, no particular node is designated
as termination point of e.g. SITA or CNT traffic.
The nodes are mutually interconnected by groups of
56 kbps internodal trunk lines. These trunk lines
are the physical transmission media used by the Communications
Environment.
The Network Service Environment is implemented on a
number of processors, each dedicated to a specific
set of services.
A Network Control Center (NCC) provides the network
control facilities to designated network operators,
called supervisors. The NCC is implemented hardwarewise
as part of the NCP of the Toronto Node. A geographical
back-up is likewise proposed as part of the Montreal
node. This results in the increased survivability
of the proposed network.
A Network Management Host (NMH), which provides network
administrative, planning and development facilities,
is implemented by the same type of processor equipment
as all other elements of the network.
An Electronic Mail Host (EMH) implements the hardware
required to provide protected message service (PMS).
The EMH provides a centralized secondary storage for
the PMS traffic in the form of mirrored disk equipment,
while long term storage is provided in the form of moveable
disk packs and magnetic tape. The EMH, like the NMH, is
co-located with the Toronto node.
N̲e̲t̲w̲o̲r̲k̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲ ̲E̲n̲v̲i̲r̲o̲n̲m̲e̲n̲t̲
The Network Interface Environment is the outermost
layer of the ACDN. It provides a direct physical and
logical interface with the external environments described
in the previous section. It consists of the interfacing
hardware and software required to support the integration
of external elements: active elements, e.g. participating
hosts which provide services and co-operate with the
network, and passive elements, e.g. attachments such as
the present Air Canada terminal equipment.
The software which implements these capabilities is:
- Host Access Software (HAS)
- External Network Access Software (ENAS)
- Terminal Access Software (TAS)
- Internal Network Access Software (INAS)
The HAS implements access methods which operate at
mainframe data channel speed while the TAS implements
access methods which operate with communication lines.
The network access software, like the TAS, implements
communication line access methods. The similarity
in functionality between the TAS, ENAS and INAS has
led to the integration of these into a generalized TAS.
An important role of the HAS and TAS software is to
provide the services necessary to allow a user of the
ACDN access to a participating host by making the user
appear to the host system as a valid (host) end user.
This transparency is implemented by transforming,
where applicable, the user's data to a format more suited
to the host.
The HAS and TAS participate in establishing and maintaining
connections between two users. Both are subdivided
into entities, with one such entity for each type of
host or terminal (including external and internal
networks). They share the nodal switch processing
equipment.
In summary, the application of a data channel connection
allows an efficient high bandwidth connection to be
established between ACDN and a host processor. The
integration of external network interface hardware
and software allows Air Canada to terminate external
networks similar to the way terminal equipment is terminated.
In all, the proposed ACDN provides Air Canada with
the means of achieving an overall cost-effective computing
environment.
C̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲s̲ ̲E̲n̲v̲i̲r̲o̲n̲m̲e̲n̲t̲
The Communications Environment provides in a sense
the backbone services of the ACDN. It controls the
interconnections between the users and provides the
data transportation facilities. It is implemented by
mutually interconnected Nodal Switch Software (NSS)
executed in Nodal Switch Processors. The NSS implements
a transport network by providing transport, network,
data link, and physical link services as presented
in the model for Open Systems Interconnection.
The NSS implements a bridge between the different entities
of the various access software, HAS, TAS, INAS, ENAS.
The NSS appears as a standard transport network to
these higher layers.
The following essential service types are provided
by the NSS to the Network Interface Environment:
- datagram type of service
- switched virtual connection
- permanent virtual connection
- priority
- end-to-end acknowledge
- end-to-end non-acknowledge
The transportation provided by NSS is carried out by
means of data units in the form of packets transmitted
between nodes on internodal trunk groups. The routing
of data is handled by providing each message/packet
with a routing header and by employing an efficient
routing strategy.
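The routing-header mechanism can be sketched as follows; the field layout, sizes, session identifiers, and trunk-group names are illustrative assumptions, not the actual ACDN packet format.

```python
import struct

# Hypothetical routing header: all field names and sizes are
# illustrative assumptions, not the actual ACDN packet layout.
HEADER_FMT = ">BBHH"   # source node, destination node, session id, sequence no.

def build_packet(src_node, dst_node, session_id, seq_no, payload):
    """Prefix the payload with a fixed-format routing header."""
    header = struct.pack(HEADER_FMT, src_node, dst_node, session_id, seq_no)
    return header + payload

def route(packet, routing_table):
    """Select the outgoing trunk group from the destination-node field."""
    _, dst_node, _, _ = struct.unpack_from(HEADER_FMT, packet)
    return routing_table[dst_node]     # trunk group toward that node

# Example: three-node network (Toronto=1, Montreal=2, Winnipeg=3)
table_at_toronto = {2: "trunk-group-TOR-MTL", 3: "trunk-group-TOR-WPG"}
pkt = build_packet(1, 3, 0x0042, 7, b"transaction data")
assert route(pkt, table_at_toronto) == "trunk-group-TOR-WPG"
```

The header travels with the packet, so each node can make its routing decision locally without consulting a central authority.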
Unrecoverable errors are handed over to network control/system
control software through the layer managers.
N̲e̲t̲w̲o̲r̲k̲ ̲S̲e̲r̲v̲i̲c̲e̲s̲ ̲E̲n̲v̲i̲r̲o̲n̲m̲e̲n̲t̲
The Network Services Environment provides first of
all global control facilities and secondly value added
application services in the form of a protected message
service, administrative billing and planning services,
and development capabilities.
The network wide resource and control facilities are
provided by the Network Control Software (NCS) of the
NCC supported by the System Control Software (SCS)
located in the nodes. The SCS implements a local system
wide resource management by centralized algorithms
in the local Nodal Control Processor assisted by layer
wide resource management entities in all of the Processor
Units of a node.
The protected message service is provided by the Electronic
Mail Software (EMS) of the EMH, the administrative
and planning services are provided by the Network Management
Software (NMS), while the development capabilities are
provided by standard development tools, using the processor
equipment of the NMH.
The control software, in the form of NCS and SCS, is special
in that neither supports transmission of user data; instead
they control the network and the local node configuration,
respectively. The NCS uses permanently allocated
connections and resources in the network to control
the remaining part of the network topology. The NCS
uses the remaining software packages to distribute
and retrieve configuration control information and
statistics. The NCS plays an active role in re-establishment
of connections between users (sessions) when such a
connection has been temporarily lost.
The Electronic Mail Software (EMS) implements the
protected message service (PMS). PMS is a store-and-forward
service provided by the network.
Basically EMS maintains the PMS traffic database of
the EMH. It ensures that undelivered messages are retained
for later delivery. Furthermore, it maintains a long
term storage of PMS traffic on magnetic tapes.
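The message-retention behaviour described above can be sketched as follows; the class and method names are illustrative assumptions rather than the actual EMS design.

```python
from collections import deque

# Minimal sketch of a store-and-forward mailbox: undelivered
# messages are retained until the recipient can accept them.
class PMSStore:
    def __init__(self):
        self.pending = {}          # recipient -> queue of retained messages

    def submit(self, recipient, message):
        self.pending.setdefault(recipient, deque()).append(message)

    def deliver(self, recipient, accept):
        """Attempt delivery; messages stay queued until accept() succeeds."""
        queue = self.pending.get(recipient, deque())
        while queue and accept(queue[0]):
            queue.popleft()        # delivered: safe to discard this copy
        return len(queue)          # number of messages still retained

store = PMSStore()
store.submit("YWG-OPS", "FLT 123 DELAYED")
store.submit("YWG-OPS", "FLT 456 ON TIME")
# Recipient down: nothing accepted, both messages retained.
assert store.deliver("YWG-OPS", lambda m: False) == 2
# Recipient back: both messages delivered, queue drained.
assert store.deliver("YWG-OPS", lambda m: True) == 0
```

The key property is that a message leaves the store only after a successful delivery, which is what distinguishes a protected service from best-effort forwarding.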
3.3 P̲r̲o̲p̲o̲s̲e̲d̲ ̲H̲a̲r̲d̲w̲a̲r̲e̲ ̲E̲q̲u̲i̲p̲m̲e̲n̲t̲
A common computer architecture is used to implement
all the computerized functions of the ACDN. This includes
two NCCs, three nodes, a Network Management Host and
an Electronic Mail Host. The equipment used is based
on the CR80 computer series, designed and manufactured
by Christian Rovsing. The CR80 is configurable and
satisfies the broad range of applications of the ACDN,
present and future, providing a fault tolerant data-,
packet-, and message switching network.
An overview of the specific system configurations is
shown in Figure III 3.3-1. It reflects the baseline
network for 1985 and illustrates the presence of the
generic network elements positioned in accordance with
the concept network provided by Air Canada. The proposed
ACDN nodes, NMH and EMH have all been configured as
self-contained computer systems.
The NCC facility has in the baseline network been integrated
hardwarewise with the Toronto node and a back-up NCC
facility is proposed as part of the Montreal node.
Deviations from the concept network are found in the
areas of the Air Canada Gateway to the existing ACNC
and in the handling of the external networks, e.g.
SITA, ARINC and CNT.
Figure III 3.3-1
PROPOSED NODE NETWORK
Both of these have for various reasons been integrated
into the nodes.
The Gateway has been integrated into the node due to
its commonality with the TAS package, which interfaces
access lines to present types of Air Canada equipment,
but also because of the fairly low amount of traffic anticipated
on this network element. This integration leads to
a cleaner migration approach from the initial ACDN
network co-existing with the present ACNC network to
the baseline ACDN network carrying and terminating
all traffic.
The External Networks termination and handling have
been moved from the EMH for several reasons:
- They present a communications environment and do
as such logically belong to the terminal access
type of software.
- Apart from the fact that they today carry type
"B" traffic only, we see no reason for a termination
at the EMH.
- Last, but not least, integration of this capability
into the node allows termination of lines from
these external networks at any of the proposed
nodes.
All critical elements of the ACDN, i.e. the NCC, nodes
and EMH are configured to provide the high level of
fault-tolerance characteristic of a CR80 configuration.
The NMH is basically configured as a simple non-redundant
processor reflecting the fact that this element is
not critical to the operation and functioning of the
proposed network.
Co-located network elements are configured as a complete
CR80 system with all processors integrated by common
high speed local area suprabusses. Regardless of the
configuration, however, the basic processor, memory
modules, and device controllers are the same. The
detailed configurations are described in Chapter 5
of this document.
The CR80 hardware configuration comprises a number
of loadsharing CPU's, grouped together in processing
units, PU's, with up to 5 CPU's per PU and up to 1
Mword of memory per PU.
Multiple PU's may be interconnected by a group of 16Mbps
suprabusses. Some PU's may be loadsharing equally,
some may carry out special functions while still others
may be standby units ready to be activated to substitute
currently active PU's.
The hardware is continuously monitored and controlled
via a serial Configuration Control bus extending from
a Watchdog to all switchable and/or monitored assemblies,
such as CPUs, busses, power supplies, and LTU's. To
fully understand the CR80 fault tolerance concept,
the Equipment Characteristics Chapter should be read.
The CR80 architecture allows open ended growth
in equipment and hence in processing power, which
is crucial in a dynamic transaction oriented environment.
The great flexibility in the hardware configuration
capabilities supports the graceful evolution of ACDN
configurations. As an example, the Gateway equipment
which is used in the transition phase 1984-1985
may in 1986 be redeployed to terminate
new terminal equipment.
The partitioning of the computer system into a specific
PU configuration does not impact ACDN communications
software. It provides an architecture which by itself
offers several levels of degraded service should one
or more of the participating processing resources fail.
Furthermore, the redundant hardware design allows even
major configuration changes to take place while the
remaining system is fully operational.
3.4 P̲r̲o̲p̲o̲s̲e̲d̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲ ̲P̲a̲c̲k̲a̲g̲e̲s̲
This section presents an overview of the proposed software
packages. The detailed description for each package
is found in chapter 6.
The proposed software has been collected into a number
of packages. The packaging is based upon functionality
and a requirement for "minimal" interfaces.
The packages consist of components based on the same
criteria.
Furthermore, the software packages play an important
role in providing visibility to Air Canada's Technical
Management during the implementation phase of the ACDN.
Before embarking on a more detailed description of
the individual packages being proposed, a mapping
onto the seven-layer Open Systems Interconnection model
is presented. Figures 3.4-1 and 3.4-2 depict this mapping.
The application layer of the OSI model is represented
by the packages implementing value added services to
the network, i.e.
- Network Control Software
- Network Management Software
- Electronic Mail Software
These services use the lower layers, implemented by the
remaining packages, to make the services offered available
to all users of the network.
Figure III 3.4-1
ACDN SOFTWARE PACKAGES
Figure III 3.4-2
ACDN SOFTWARE STRUCTURE
The presentation and session layers of the OSI model
are encompassed in the Network Interface Environment,
as implemented by the following software packages:
- Host Access Software (HAS)
- Terminal Access Software (TAS)
- External Network Access Software (ENAS)
- Internal Network Access Software (INAS)
An important role of these packages is to implement
the virtual protocols required to map any network user
into a valid host end user.
The transport and network layers of the OSI model are
encompassed by the Nodal Switch Software (NSS) which
furthermore includes the firmware implementing the
data link layer.
The CR80 DAMOS provides the operating system upon which
the ACDN will be implemented. DAMOS provides all the
general tools for management of the CR80 resources,
CPUs and memory, processors (PUs) and devices. A resource
allocation is implemented which ensures that no process
is capable of blocking all available resources managed
by DAMOS. Data integrity and privacy are protected
by kernelised mechanisms.
The Basic Transport Service (BTS) provides the vehicle
for implementing interactive and batch oriented connections.
A Basic Datagram Service (BDS) will be implemented
which exploits the CR80 hardware and provides the means
by which high volume transaction oriented connections
are supported. Fundamental to the BDS is that it only
copies data to NSP memory once. The Basic Services
provide a queue-driven environment within the normal
DAMOS event-driven environment.
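The single-copy principle of the BDS can be sketched as follows, under the illustrative assumption that internal queues carry buffer references rather than the data itself; all names are hypothetical.

```python
from collections import deque

# Sketch of the single-copy idea: a datagram is written into
# switch memory once, and only *references* to that buffer travel
# through the internal queues between processing stages.
class BufferPool:
    def __init__(self):
        self.buffers = {}
        self.next_id = 0

    def write_once(self, data):
        """The one and only copy into switch memory."""
        self.next_id += 1
        self.buffers[self.next_id] = bytearray(data)
        return self.next_id            # later stages pass this id around

pool = BufferPool()
inbound, outbound = deque(), deque()

ref = pool.write_once(b"transaction")   # copy #1 (and only copy)
inbound.append(ref)                     # inter-layer queue: id only
outbound.append(inbound.popleft())      # switching stage: still just the id
assert pool.buffers[outbound[0]] == bytearray(b"transaction")
```

Passing references instead of data is what keeps per-datagram overhead constant regardless of how many internal stages a transaction traverses.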
The File Management System and Terminal Management
System are used to handle standard files and operator
terminals, e.g. those of the NMH and NCC.
Figure III 3.4-3
ACDN SOFTWARE
3.4.1 A̲c̲c̲e̲s̲s̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲ ̲P̲a̲c̲k̲a̲g̲e̲s̲
To generalize the functionality of the interfaces,
the network provides service to interactive terminals
and supports bulk transfers.
These services are provided by virtual protocols residing
in the network.
In general the choice of virtual protocols depends
on the services wanted. Apart from the two services
mentioned, it is the intention of Christian Rovsing
to implement other necessary virtual protocols on request,
for instance a standardized graphics protocol or a
protocol covering the very high speed classes of satellite
communication.
The backbone network proposed provides the following:
- Packet switching (CCITT's X.75) for interfacing
public networks to ACDN
- Virtual File transfer, (NPL's File Transfer Protocol)
- Virtual Terminal Interaction
(ECMA's Virtual Terminal Protocol)
A virtual protocol is one which is not used by any
actual equipment attached to the network, at least
not as yet. Actual equipment protocols attached to
the ACDN must be mapped onto the virtual
protocols supported by the network.
Future equipment should be designed to work directly
with the virtual protocols in the network. By providing
a baseline for future communication via the network,
the virtual protocols are the vehicle for commonality
in the ACDN.
In the selection of the virtual protocols for the network,
one must investigate carefully the trends of the related
standards in the world today. The FTP and the ECMA
VTP presently appear to be the most commonly adopted
in Europe.
Should this not be so for the Canadian environment,
other virtual protocols may be selected for the ACDN.
The File Transfer Protocol, FTP, represents a virtual
network protocol for bulk transfers. The implementation
in the backbone network, which is intended to enable
multihost access to remote facilities like printers
and card readers, will comprise the relevant parts of FTP-B
(80), also known as the blue book (Data Comm. Protocols
Unit, NPL, G3).
The line of services foreseen for an initial implementation
is host-to-host transfer of files at a low level,
i.e. printer files, whereas at a later stage a full
implementation including job service may be provided
according to existing standards (ISO/TC97/SC16 N628
or later).
Also, a Virtual Terminal Protocol is proposed for the
ACDN. Several possibilities have been looked at. CCITT
defines a low level virtual terminal standard by the
three standards X.3, X.28 and X.29. Combined, these
standards define a so-called scroll-mode VT offering
user-selectable PAD functions described by a set of
parameters.
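The idea of user-selectable PAD parameters can be sketched as follows; the parameter references shown follow common X.3 usage but are assumptions here and should be checked against the Recommendation itself.

```python
# Illustrative sketch of user-selectable PAD parameters in the
# X.3 style. The parameter numbers and default values below are
# assumptions for illustration, not a transcription of X.3.
DEFAULTS = {
    2: 1,    # echo: 1 = PAD echoes characters back to the terminal
    3: 2,    # data forwarding: forward e.g. on carriage return
    4: 0,    # idle timer delay: 0 = no timer-based forwarding
}

class PAD:
    def __init__(self):
        self.params = dict(DEFAULTS)    # each subscriber gets a copy

    def set_param(self, ref, value):
        if ref not in self.params:
            raise ValueError("unsupported parameter reference")
        self.params[ref] = value

pad = PAD()
pad.set_param(2, 0)                  # subscriber turns off echo
assert pad.params[2] == 0
assert pad.params[3] == 2            # other parameters unchanged
```

Because each terminal's behaviour is reduced to a parameter vector, the network can adapt one generic PAD implementation to many terminal types.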
However, this will not be sufficient to cover the needs
of the terminals to be supported in the backbone network.
Consequently the design will be extended with the functions
necessary to cover the level of terminal service needed
for VDU's like UTS 400 and IBM 3270 BSC. The line of
services supported will thus be as described for the
terminal class "formmode" in ISO/TC97/SC16
N666 (ECMA/TC23/81/53).
The Network Access Software plays a special role in
this context. As illustrated in Figure III 3.4-1 they
interface the underlying transport network like the
TAS software. The structure of this access software
is very similar to the TAS, as is its role. This explains
the feasibility of integrating the access software
and thus making better use of the available resources
and mechanisms.
The Internal Network Access Software Package implements
the network function for interconnecting the old ACNC
network and the new network. Operating a number of
2.4 or 9.6 kbps lines, the Gateway acts towards the ACNC
as multiple ICC's. Towards the new data network the
Gateway maps interactive, printer, and host traffic
into the virtual protocols for interactive terminals
and file transfer, respectively.
The External Network Access Software Package plays
a similar role in mapping external connections onto
the internal. The following networks are interfaced:
- ARINC
- SITA
- CNT
3.4.2 N̲o̲d̲a̲l̲ ̲S̲w̲i̲t̲c̲h̲i̲n̲g̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲ ̲P̲a̲c̲k̲a̲g̲e̲
The Nodal Switching software implements the bridging
between the software of the other packages, whether
between entities of the TAS (supporting a given terminal
type) and entities of the HAS (supporting a given host
type) or to a network provided service as implemented
by e.g. EMS in form of protected message service.
The NSS is also the software which directly interconnects
the nodes of the ACDN. The transmission media proposed
are 56 Kbps internodal trunks.
The services provided by the NSS are:
- virtual connection service,
- permanent virtual connection services,
- datagram service.
The CCITT X.25 Recommendation is employed on all internodal
trunks.
Transport service users are assigned "ports" to access
the transport network at a packet level. Permanent
connections are established for system control purposes
while virtual connections are used for low frequency
traffic, i.e. interactive and batch type of traffic.
Transaction type of traffic makes use of the datagram
service. Several grades of service are offered to transport
service users:
- priority
- end-to-end acknowledge or not.
The routing implemented by the NSS supports trunk groups.
This includes the ability to have a given message simultaneously
transmitted on several internodal trunks within the
same group.
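The trunk-group transmission described above can be sketched as follows; duplicate suppression by sequence number is an assumed mechanism for illustration, and all names are hypothetical.

```python
# Sketch of trunk-group routing: a message may be offered to several
# trunks of the same group at once; the receiver keeps the first
# error-free copy and discards duplicates by sequence number.
def transmit_on_group(message, seq_no, trunks):
    """Send one copy per available trunk in the group."""
    return [(trunk["name"], seq_no, message) for trunk in trunks if trunk["up"]]

def receive(copies, seen):
    """Deliver the first copy of each sequence number, drop duplicates."""
    delivered = []
    for _, seq_no, message in copies:
        if seq_no not in seen:
            seen.add(seq_no)
            delivered.append(message)
    return delivered

group = [{"name": "trunk-1", "up": True}, {"name": "trunk-2", "up": True}]
copies = transmit_on_group(b"msg", 41, group)
assert len(copies) == 2                       # duplicated across the group
assert receive(copies, set()) == [b"msg"]     # but delivered only once
```

Sending the same message on parallel trunks trades a little bandwidth for lower delay and immunity to the failure of any single trunk in the group.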
The switching of data implemented by the NSS depends
on the type of connection. Transactions are "moved"
in memory by the Basic Datagram Service while other
connections are served by the Basic Transport Service.
This minimizes the delay through the NSP and thus reduces
the demand for switching memory buffers.
The data link service is implemented in firmware. The
term firmware refers to software residing in the CR80
line terminating unit, the LTU, even though this is
actually software which is down-line loaded at initialization
by the NSP. The firmware includes a buffer manager
and a line handler.
Figure III 3.4-4
3.4.3 N̲e̲t̲w̲o̲r̲k̲ ̲C̲o̲n̲t̲r̲o̲l̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲ ̲P̲a̲c̲k̲a̲g̲e̲
The Network Control Software (NCS) plays a key role
in maintaining the integrity of the ACDN network. The
capability to support a geographically remote back-up
NCC provides added survivability to the network. The
NCS consists of software components resident in the
Network Control Center co-operating with the distributed
System Control Software components of the nodes and
other computer systems of the network.
Probably the most important task of the NCC is to provide
the network operations staff with an environment and
facilities which assist them in conducting
safe network operation.
The facilities made available to these operators consist
of colour monitors and operational procedures aimed
at providing static and dynamic information about the
system. The procedures made available to the operators
are based on menu selection and provide the means
for fast and efficient usage of the network
control and monitoring facilities implemented.
Various graphical presentations are included, ranging from
a complete network picture covering hosts and all major
network elements, as well as feasible terminal network
presentations, to diagrams presenting resource utilization.
An important role played by the NCS is that of Definition,
defining external as well as internal resources. The
NCS enables the NMH/NCC operator to assign a unique
logical name to any network resource. However, the
allocation of the network logical identifier is assigned
by the NCS and the operator cannot modify this entity;
this is part of the network integrity protection facilities.
The NCS is provided with software facilities which
enable operators of the NCC and NMH to modify the
Global Network Definition (GND) in a dialogue form.
Facilities are provided which enable files, e.g.
those of the GND, to be transferred from the NCC to
the NMH, thus enabling configuration update and check-out
to be performed on a network element operating independently
of the live network.
The Global Network Definition (GND) is broken down
by the NCS to reflect the local configurations and
resources. Thus, the NCS maintains, based on the GND,
a set of definitions for each of the network systems:
nodes, NCC, NMH and EMH. A copy of the local definitions
is stored on the disk associated with these systems.
This copy is maintained only from the NCC in order
to ensure network integrity.
The network definitions may exist in three versions
in the system: current, previous and under update.
However, only the former two exist in "static" situations,
while the "under update" version exists in those periods
where the NCS is down-line loading a new version.
Facilities are provided to allow the NCC operators
to activate any of the three versions. The
NCS activates the same version in all network systems
to protect network integrity.
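The three-version scheme and its activation can be sketched as follows; the structure and names are illustrative assumptions, not the actual NCS data layout.

```python
# Sketch of the three-version scheme for network definitions:
# "current", "previous", and "under update". Activation promotes the
# version being loaded; names and fields are illustrative assumptions.
class DefinitionStore:
    def __init__(self, current):
        self.versions = {"current": current, "previous": None,
                         "under_update": None}

    def download(self, new_defs):
        """NCS down-line loads a new version; it is not yet active."""
        self.versions["under_update"] = new_defs

    def activate_under_update(self):
        v = self.versions
        if v["under_update"] is None:
            raise RuntimeError("no version being loaded")
        v["previous"], v["current"] = v["current"], v["under_update"]
        v["under_update"] = None    # back to the static two-version state

node = DefinitionStore(current={"gnd_rev": 11})
node.download({"gnd_rev": 12})
node.activate_under_update()
assert node.versions["current"] == {"gnd_rev": 12}
assert node.versions["previous"] == {"gnd_rev": 11}
assert node.versions["under_update"] is None
```

Keeping the previous version on hand is what makes a fast fall-back possible if a newly activated definition proves faulty.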
Activation of new versions takes place under local
control by the SCS. This software is responsible for
protecting existing users which have retained their
old network definitions; this includes implementation
of a version update mechanism which leaves unaffected
sessions undisturbed. Thus, the NCS/SCS makes system
upgrades essentially transparent to the users of the
network.
Provision is made for assigning time limits to the validity
of given resources, covering both resources which become
active at a certain time and resources which are removed
after a certain time period (temporary resources).
The NCS plays a key role in the establishment of sessions.
Two separate classes of sessions are considered:
external and internal.
A network session must be established whenever a user
wants to establish a connection with a participant
of the network. The NCS is responsible for establishing
this connection and for assigning the proper network
resources. The connection creation is co-ordinated
with the local SCS and check-pointed to disk, both
at the NCC and at the SCS, whether node, NMH or EMH.
Re-establishment of a connection lost through e.g.
trunk failure is in the first instance the responsibility
of the SCS. The SCS uses its locally stored session
data in order to provide this service. Failing to
re-establish this connection causes the SCS to inform
the NCC operators via the NCS.
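The division of responsibility for re-establishment can be sketched as follows; the retry count and the function names are illustrative assumptions.

```python
# Sketch of the responsibility split: the local SCS retries from its
# check-pointed session data and escalates to the NCC operators
# (via the NCS) only when re-establishment fails. Names are assumed.
def reestablish(session_id, session_store, try_connect, notify_ncc,
                max_attempts=3):
    data = session_store[session_id]       # locally check-pointed data
    for _ in range(max_attempts):
        if try_connect(data):
            return True                    # connection restored by SCS
    notify_ncc(session_id)                 # hand over to NCC operators
    return False

store = {17: {"user": "agent-042", "host": "PMH"}}
alerts = []
# Trunk still down: every attempt fails, so the NCC is informed.
assert reestablish(17, store, lambda d: False, alerts.append) is False
assert alerts == [17]
# Trunk restored: first attempt succeeds, no further alert raised.
assert reestablish(17, store, lambda d: True, alerts.append) is True
assert alerts == [17]
```

Handling the common case locally keeps the NCC free for the failures that genuinely require operator intervention.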
The implementation of a geographical back-up NCC is
based on the implementation by Christian Rovsing on
the Danish FIKS program. A special protocol ensures
synchronization between the two NCC centers. The communication
between the two NCCs is through the transport network
of the ACDN.
Also included in this package are various special test
and trace facilities. These include provision for
monitoring all traffic in and out of a host, a node
or an internodal trunk. They include the possibility
of dumping memory of system elements and displaying
all buffers. The test facilities provided as part of
the ACDN are described in Chapter 4.
3.4.4 N̲e̲t̲w̲o̲r̲k̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲ ̲P̲a̲c̲k̲a̲g̲e̲
The Network Management Host functions are included
in the Network Management Software Package.
It includes the functions necessary for administrating
the network, i.e. subscriber, installation and billing
management, topology planning, network simulation,
and statistics and charging information collection.
Further application subsystems and functions can be
programmed by users via local or remote terminals,
by means of the Software Development Environment supplied.
3.4.5 E̲l̲e̲c̲t̲r̲o̲n̲i̲c̲ ̲M̲a̲i̲l̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲ ̲P̲a̲c̲k̲a̲g̲e̲
The Electronic Mail Software Package implements the
protected message service (PMS) provided by the ACDN.
A central PMS data base was judged to be the most
feasible implementation. Several considerations were
taken into account:
- store-and-forward mechanisms where PMS messages
are stored on each nodal disk lead to longer
response times,
- a solution was considered by which the source node
took PMS responsibility, but this makes the disks
a critical node element,
- a requirement for unmanned operation of nodes leads
towards a central solution as disks are avoided
for this purpose at the nodes,
- a central EMH leads to one additional hop for PMS
traffic due to the proposed network topology; however,
with the proposed internodal trunk speed of 56
Kbps only a negligible additional delay is added
to the overall PMS response time,
- the PMS traffic volumes account for 10% of all
traffic of the ACDN; thus, a central solution adds
only a minor load to the transport network,
- a requirement for long term storage in the order
of one to two months, together with the traffic
volumes considered, leads to magnetic tape as
the most feasible mass storage for this purpose;
this implies manned operation, or an additional load
on the transport network when long term storage
file transfer takes place.
The PMS data base maintenance is based on the same
technique as employed on the store-and-forward message
switching developed by Christian Rovsing on the FIKS
program.
The EMS employs "core-switching" of PMS transactions,
which results in a reduced load on the disk drives.
PMS traffic is spooled to a fixed head disk, thus avoiding
efficiency derating caused by disk head movements.
A full track is copied at a time to the moveable part
of the disk.
Part of the EMS is to provide accountability for messages,
in the sense that the EMH is responsible for proper
PMS message delivery once an acknowledgement has been
received by the EMH.
A network session is established between the EMS and
the proper destinations. The network provides the
same type of service to this type of connection as
to other types.
The System Control Software (SCS) plays an important
role in providing this network service. The SCS enters
the scene only when a node has been unable to deliver
a PMS message, e.g. due to a broken connection. The SCS
keeps track of undelivered PMS messages and notifies
the NCC in such cases. The SCS automatically issues
a retrieval request to the EMH once the connection
has been re-established.
The EMS includes facilities which can support dedicated
EMH operators in correcting faulty
PMS messages via an interactive dialogue. These facilities
are based on subsets of the FIKS implementation which
provide similar services.
Statistics collection for this special service is the
responsibility of the Access Software, e.g. the TAS. Data
are collected and retained as for other sessions.
3.5 P̲e̲r̲f̲o̲r̲m̲a̲n̲c̲e̲
The results of the performance modelling are presented
in this section. It is shown how the proposed ACDN fulfills
the performance requirements of the RFP. Important
design rationales are presented.
The data provided by Air Canada have been used as input
to derive realistic configurations, both the baseline
and the projected 1986 through 1991 configurations. Furthermore,
it is shown how the proposed network provides a growth
capacity beyond the 1991 projected requirements.
The results presented herein have been based on analytic
distributions reflecting specific properties of the
ACDN. Appendix D provides the modelling and analytical
results which form the basis for the performance values
presented in this section.
The following constitute the common assumptions upon
which the performance factors of the ACDN have been
calculated:
- Poisson arrival pattern
- Service times exponentially distributed
- All servers equally loaded
- All servers have same mean service time
- First-in/First-out dispatching strategy
- No items leave the queue (no balking or reneging)
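Taken together, these assumptions describe the classical M/M/1 queueing model, for which mean and percentile response times have closed forms. A minimal sketch; the arrival and service rates below are purely illustrative values, not figures from this proposal:

```python
import math

def mm1_response(lam, mu):
    """Mean response time (queueing + service) of an M/M/1 server with
    Poisson arrivals at rate lam and exponential service at rate mu."""
    assert lam < mu, "server must not be saturated"
    return 1.0 / (mu - lam)

def mm1_percentile(lam, mu, p):
    """The M/M/1 response time is exponentially distributed with rate
    (mu - lam), so any percentile has a closed form."""
    return -math.log(1.0 - p) / (mu - lam)

# Illustrative values only: a server completing 65 transactions/sec,
# offered 40/sec (about 61 per cent utilization).
mu, lam = 65.0, 40.0
print(round(mm1_response(lam, mu) * 1000, 1))          # mean, msec
print(round(mm1_percentile(lam, mu, 0.95) * 1000, 1))  # 95th percentile, msec
```

Note that under these assumptions the 95th percentile is about three times the mean (a factor of -ln 0.05).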
A conservative design policy has been adhered to in
order to provide Air Canada with a sound baseline network
implementation with built-in investment protection.
No attempts have been made to achieve a minimal hardware
solution which could only satisfy AIR CANADA's current
needs and as such present a risk factor.
The analysis and trade-offs presented in this section
will be further refined and detailed as part of the
initial Functional Specification phase should Christian
Rovsing be the selected contractor. This critical
project activity must be conducted in close co-operation
with Air Canada personnel.
3.5.1 N̲o̲d̲e̲ ̲M̲o̲d̲e̲l̲l̲i̲n̲g̲
The nodes of the ACDN provide the termination of connected
hosts and access lines to terminal concentrators.
This section presents the results of the performance
analysis for the proposed nodes in terms of volume
capacities and transfer times. The rationale for the
design which implements these functions is presented
to provide the background required to evaluate
the presented information. The modelling presented
covers the RFP requirements as well as the new Corporate
Service Information Host in Dorval.
3.5.1.1 E̲n̲d̲-̲t̲o̲-̲E̲n̲d̲ ̲R̲e̲s̲p̲o̲n̲s̲e̲ ̲T̲i̲m̲e̲
The end-to-end response times expected for the different
types of ACDN traffic are summarized below:
Response         TYPE A          TYPE B          RCCSH
time (sec)    Req.   ACDN     Req.   ACDN     Req.   ACDN
average        -      1.2      -      2.1      -      6.0
85%           2.5     1.7      5      2.5      -     10.8
95%            5      2.7      -      3.5      -     17.4
Excluding the contributions from resources outside
the ACDN, i.e. the terminal and host environments,
leads to the following ACDN round trip delays:
Round Trip Type A Type B RCCSH
(sec)
average .2 .5 .7
85% .3 .6 .8
95% .5 .8 1.0
3.5.1.2 P̲r̲o̲c̲e̲s̲s̲o̲r̲ ̲U̲t̲i̲l̲i̲z̲a̲t̲i̲o̲n̲ ̲a̲n̲d̲ ̲C̲a̲p̲a̲c̲i̲t̲y̲
The processing time required to switch a transaction
is:
15.4 msec.
Thus, the switching capacity of a Nodal Switch
Processor (NSP) equipped with four CPUs is:
200 packets/sec.
The CPU utilization upon which the proposed ACDN configurations
are based is:
61 per cent
This utilization caters for the additional handling
required by the proposed ACDN virtual protocol handling,
emulation, acknowledgements and network control.
Furthermore, the choice of this low utilization factor
reflects an intent to provide Air Canada with a
network with built-in allowance for later increases in
processing requirements.
Thus, the capacity of each NSP equipped with four equal
CPUs is
100 incoming transactions/sec.
whether received from a terminal or a host.
This leads to a total maximum capacity of an ACDN node
of:
1100-1200 incoming transactions/sec.
The maximum capacity corresponds to more than double
the projected 1991 transaction volume. Note, however,
that this capacity may be increased by a factor of
three to four by a cluster of four co-located nodes.
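The headroom claim above can be checked with simple arithmetic. A sketch, assuming a fully equipped node carries 12 NSPs (an assumption chosen because it reproduces the upper end of the stated 1100-1200 figure):

```python
# Per-NSP and per-node capacity, from the figures above:
nsp_capacity = 100.0               # incoming transactions/sec per NSP
node_capacity = 12 * nsp_capacity  # assumed 12 NSPs in a full node

# Busiest projected 1991 node (Winnipeg, from the growth table
# in section 3.6.2):
projected_1991_peak = 476.7        # transactions/sec

print(round(node_capacity / projected_1991_peak, 2))  # headroom factor
```

The headroom factor exceeds 2, consistent with the "more than double" claim, before any clustering of co-located nodes is considered.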
3.5.1.3 M̲e̲m̲o̲r̲y̲ ̲U̲t̲i̲l̲i̲z̲a̲t̲i̲o̲n̲
The buffer capacity required in order to support the
designed transfer rate per Nodal Switch Processor is:
120 buffers.
The basic assumption behind this is that each buffer
holds one transaction or that three buffers may hold
one RCCSH interaction. Secondly, it reflects the traffic
mix represented by:
- 4 internodal trunks
- 60 access lines
together with a requirement that retransmission caused
by "buffer unavailable" should be required for less
than .1 per cent of all transactions.
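The ".1 per cent" criterion is a buffer overflow probability. As a simplified single-server illustration (the actual sizing must aggregate the 4 trunks and 60 access lines; the utilization and buffer counts here are illustrative), the finite-buffer M/M/1/K loss formula shows how quickly that probability falls with buffer count:

```python
def mm1k_loss(rho, K):
    """Probability that an arrival finds all K buffer slots of an
    M/M/1/K queue occupied ('buffer unavailable')."""
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1.0 - rho) * rho ** K / (1.0 - rho ** (K + 1))

# At 61 per cent utilization the 0.1 per cent target is crossed
# somewhere between 10 and 15 slots per server in this toy model:
for K in (10, 15, 20):
    print(K, mm1k_loss(0.61, K))
```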
The above number of buffers excludes any buffers permanently
allocated for input handling for the purpose of reducing
input processing time. Furthermore, this minimum figure
does not properly reflect the impact of the proposed type
"B" handling; the proposed buffer capacity has therefore
been sized to include this handling.
For all proposed ACDN nodes, the NSPs are provided
with buffer capacity in excess of the minimum required.
Generally, all memory space which is not permanently
allocated to programs, data, and tables is allocated
as buffers. This leads to a buffer utilization of:
25 per cent
for the typical NSP.
The reason for providing this additional memory capacity
has been a trade-off of price versus the additional
benefits derived:
- Additional buffer capacity may be used to retain
type "B" traffic for periods where an access line
is temporarily out-of-service.
- The additional nodal buffer capacity may be used
to buffer transactions to applications which are
temporarily out-of-service.
The aim is twofold. Firstly, to remove the subscriber's
awareness of short variations in Air Canada's computing
resources; secondly, to protect the internodal trunk
network against excessive load conditions caused by
retransmissions and thus avoid influencing the service
provided to other users.
The following summarizes the memory mapping of the
exemplified NSP:

Programs
 - DAMOS                              43K
 - Application                       103K
Data
 - DAMOS                              93K
 - Application
   o tables                  10K
   o 3000 CRT                48K
     SCBs, 2 each            72K
   o 1500 other sub's        24K
     SCB's, 1 each           18K     172K
Buffers
 - 180 x 64 bytes             6K
 - 540 x 512 bytes          138K     144K
                                    -----
                                     512K
3.5.1.4 I̲n̲t̲e̲r̲n̲o̲d̲a̲l̲ ̲T̲r̲u̲n̲k̲ ̲U̲t̲i̲l̲i̲z̲a̲t̲i̲o̲n̲
The utilization on the internodal trunks has been chosen
as:
75 per cent
This high utilization factor was decided upon taking
the following into consideration:
- minimize number of internodal trunks
- evaluate additional delay caused by high utilization
versus transmission and queueing delays on access
lines
- the advantages of trunk groups, due to the better
service which results from a multiserver environment.
The effective data utilization has been estimated as:
81.5 per cent.
This utilization factor excludes the ACDN communication
header, internal end-to-end acknowledgements and network
control traffic, and as such includes only the data
entering the ACDN.
The communication header has for sizing purposes been
set to 20 bytes, to which a trailer of 3 bytes is added.
Together these account for about half of the lost bandwidth.
Furthermore, it has been assumed that the internal
network control traffic is on average 200 bytes.
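The 81.5 per cent figure can be reproduced under one assumption: an average transaction payload of about 200 bytes (an assumed value; the header, trailer and 10% control-traffic figures are taken from the text):

```python
payload = 200.0              # assumed average bytes of user data/transaction
overhead = 20 + 3            # ACDN communication header + trailer, bytes
control = 0.10 * (200 + 23)  # 10% control/ack traffic of ~200 bytes + header

carried = payload + overhead + control  # trunk bytes per data transaction
effective = payload / carried

print(round(100 * effective, 1))           # effective data utilization, %
print(round(100 * overhead / carried, 1))  # header/trailer share, %
```

Under this assumption the header and trailer cost about 9.4 of the 18.5 percentage points lost, i.e. "about half of the lost bandwidth", with control and acknowledgement traffic accounting for the rest.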
3.5.2 G̲a̲t̲e̲w̲a̲y̲
The Air Canada "Gateway" has been included as part
of the Toronto node by provision of dedicated ACNC
LTUs.
The total bandwidth proposed between the existing ACNC
and the ACDN is:
48 Kbps.
The bandwidth has been achieved by 5 LTUs each emulating
4 normal 2400 bps access lines which provide a transfer
capacity of:
50K transactions per peak hour.
This traffic corresponds to 10 per cent of the capacity
of a fully equipped NSP.
The integration of the Gateway into the node leads
to a migration plan which is capable of being adjusted
to substantial variations in workloads.
The proposed "Gateway" capability has its response
time characteristics in common with other node elements.
Thus, the major elements in end-to-end response time
for transactions which are exchanged between the old
and the new network are:
1 Access network delays
2 Nodal delays
3 Internodal trunk delay
4 Gateway trunk delay
5 ACNC delay
6 Host delays
Elements 1, 5 and 6 correspond to the delays existing
at present, while 2, 3 and 4 are additional delays.
The gateway trunk will represent the major contribution
in additional delay irrespective of selected trunk
speed.
Selection of e.g. 2400 bps would lead to additional
response time of average 1.2 sec. (95%: 2.4 sec.) while
9600 bps leads to average .4 sec. (95%: .7 sec.).
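The sensitivity to gateway trunk speed is dominated by serialization delay. A sketch, assuming an average gateway transaction of 330 bytes (an illustrative figure) plus the 23-byte ACDN header and trailer:

```python
def serialization_delay(payload_bytes, speed_bps, overhead_bytes=23):
    """Seconds needed just to clock one transaction onto the trunk."""
    return 8 * (payload_bytes + overhead_bytes) / speed_bps

for speed in (2400, 9600, 56000):
    print(speed, round(serialization_delay(330, speed), 3))
```

At 2400 bps serialization alone already approaches the stated 1.2 sec average, while at 9600 bps it drops below 0.3 sec; queueing on the trunk accounts for the remaining difference.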
3.5.3 E̲M̲H̲ ̲M̲o̲d̲e̲l̲l̲i̲n̲g̲
The EMH provides a central store-and-forward switching
service of the ACDN. This section presents the results
of the performance analysis of the proposed EMH in
terms of traffic volume capacities and transfer time.
The rationale for the design which implements this
function is presented to provide the background required
to evaluate the presented information.
The EMH implements a design which balances the performance
of the utilized storage media:
- high speed main memory
- high speed fixed head disk storage
- medium speed moveable head disk storage
- low speed long term magnetic tape storage
The actual EMH equipment will be tailored to meet projected
type "B" traffic volumes. Below are the CPUs and disk
paths required to fulfill the projected growth, one row
per year of the projection period:

                                  disk    MDD     300Mb
                           CPUs   paths   disks*  disks
   38.9    87.5   156        1      2       1       1
   42.8    96.5   172        1      2       1       1
   47.2   106.2   189        2      2       1       1
   51.8   116.6   208        2      2       1       2
   57.2   128.9   228        2      2       1       2
   62.6   141.1   251        2      2       1       2
   69.1   155.5   276        2      2       1       2
* mirrored
The bandwidth of one CPU, i.e. the maximum number of
PMS transactions one CPU can handle in a given second,
is:
55 PMS trans/sec
To allow for acknowledgement handling, fixed to
moveable head operations, checkpointing, retention of
undelivered transactions, and manual recovery of garbled
messages, the design capacity per CPU is decided as
50% of the bandwidth, or:
100,000 PMS trans/peak-hour.
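The peak-hour figure follows directly from the 50 per cent derating:

```python
raw_bandwidth = 55.0               # PMS transactions/sec per CPU
design_rate = 0.5 * raw_bandwidth  # derated for ack handling etc.
per_peak_hour = design_rate * 3600

print(int(per_peak_hour))  # 99000, i.e. the stated 100,000 after rounding
```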
Each Electronic Mail Processor (EMP) will be equipped
with from 2 to 3 CPUs as required to support the allocated
traffic load. The EMPs are each provided with 512 Kwords
of memory, which is utilized in the same way as that of
the Nodal Switch Processors.
The bandwidth to the fixed head of the proposed disk
is:
236 PMS trans/sec.
The same rationale as that used for the CPU capacity
has been used in deciding the design capacity of the
disk which results in:
425,000 trans/peak-hour.
The capacity of the fixed head storage of 900 Kbytes
corresponds to:
30 sec
of design capacity traffic while the moveable head
surface of approximately 60 Mbytes corresponds to:
35 min.
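The 30-second and 35-minute residence times can be cross-checked against the design capacity; both imply an average PMS transaction of roughly 250 bytes (an inference from the stated figures, not a number given in the text):

```python
design_rate = 425_000 / 3600  # design capacity, transactions/sec

# Bytes per transaction implied by each storage area:
fixed_head = 900e3 / (design_rate * 30)    # 900 Kbytes over 30 sec
moveable = 60e6 / (design_rate * 35 * 60)  # ~60 Mbytes over 35 min

print(round(fixed_head), round(moveable))  # both close to 250 bytes
```

The two figures are mutually consistent, which supports the stated residence times.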
Main memory buffer capacity is used to increase the
efficiency of CPUs and minimize the usage of disk storage
other than for write.
Transactions which require PMS service by the ACDN
are transmitted to the EMH. The EMH retains a copy
of the message in a main memory buffer while a mirrored
write operation is conducted to the fixed head portion
of the disk storage.
The successful storage to disk results in acknowledgement
to the PMS requester and forwarding of the copy retained
in memory to the transport network which takes responsibility
for proper delivery.
As a basis for the modelling activity it has been assumed
that 90% of all original PMS transactions are handled
this way. The buffer storage of an EMP supports storage
of the traffic equivalent to several seconds (approx.
55 PMS transactions per EMP).
The remaining 10% will be retrieved from the fixed
head part of the disk storage.
Retransmission of undelivered messages and retrieval
from the moveable head part of the disk storage (or
from magnetic tape) has for sizing purposes been assumed
to be 10% of the basic PMS traffic.
Additional performance improvements of the bandwidth
will be implemented by collecting up to three PMS transactions
before the actual write operation is performed. The
write operation to fixed head takes on average 8.3
msec, max. 16.7 msec. This, together with the applied
write strategy leads to the high transfer rates predicted
for the EMH.
The EMH transfer time is determined by a contribution
from three components:
- delay caused by storage strategy
- write to/retrieval from disk
- switching to node/session handling
The major contributor is the delay caused by the applied
storage strategy. However, this delay will never exceed
the time taken to perform three consecutive PMS writes.
This leads to an EMH transfer time which for the majority
of all PMS transactions does not exceed:
200 msec.
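The bound on the storage-strategy delay follows from the write timing. The 8.3/16.7 msec average/maximum figures are consistent with half and full rotations of a 3600 rpm disk; the arithmetic below uses only the stated numbers:

```python
max_write = 16.7  # msec, worst-case fixed-head write (one full rotation)
avg_write = 8.3   # msec, average write (half a rotation)

strategy_bound = 3 * max_write   # at most three consecutive writes
print(round(strategy_bound, 1))  # ~50 msec, well inside the 200 msec figure
```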
3.5.4 R̲e̲l̲i̲a̲b̲i̲l̲i̲t̲y̲ ̲a̲n̲d̲ ̲A̲v̲a̲i̲l̲a̲b̲i̲l̲i̲t̲y̲
o The proposed ACDN provides Air Canada with a highly
resilient network. Survivability has been increased
by the provision of geographically backed-up network
control facilities. Protection against single point
failures is offered in the form of a fault-tolerant
computer architecture. The distributed architecture
offered provides additional advantages in the form
of graceful degradation.
It should be noted that the results presented in this
section represent an analysis which has been based
on RMA requirements far more stringent than those presented
in the RFP.
a. I̲n̲d̲i̲v̲i̲d̲u̲a̲l̲ ̲S̲u̲b̲s̲c̲r̲i̲b̲e̲r̲
The RMA analysis for the individual subscriber
has assumed a dual host in order to isolate the
effect of this element.
MTBF (subscriber) = 3.5 years
MTTR (subscriber) = 28 min
Availability (subscriber) = 99.9985%
The RMA analysis implies that a specific subscriber
will, on average every 3.5 years, be without service
from the ACDN for a period of 28 min.
b N̲e̲t̲w̲o̲r̲k̲
The following RMA requirements are stated as those
which result in the network being considered up:
1. Network capable of providing full service to
a̲l̲l̲ connected subscribers (except 4, below).
2. Host connections assumed dualized to remove
this factor.
3. Internodal trunks backed-up by rerouting.
4. At most one group of two access lines per NSP
without service.
5. All NSPs assumed worst case NSPs, i.e. with
36 access lines.
6. Only maximum configured nodes assumed, i.e.
fully equipped with NSPs.
The results which apply to the network subject
to the above assumptions are as presented below:
MTBF (ACDN) = 8900 hours
MTTR (ACDN) = 15 min
Availability (ACDN) = 99.9972%
While the figures for the individual node (maximum
configuration) are:
MTBF (node) = 3.1 years
MTTR (node) = 15 min
Availability (node) = 99.9991%
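The stated availabilities follow from MTBF and MTTR via the standard steady-state formula A = MTBF / (MTBF + MTTR). A check of the figures in this section, taking one year as 8760 hours:

```python
def availability(mtbf_hours, mttr_minutes):
    """Steady-state availability from mean time between failures
    and mean time to repair."""
    mttr_hours = mttr_minutes / 60.0
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(round(100 * availability(3.5 * 8760, 28), 4))  # subscriber
print(round(100 * availability(8900, 15), 4))        # network
print(round(100 * availability(3.1 * 8760, 15), 4))  # individual node
```

All three reproduce the stated 99.9985, 99.9972 and 99.9991 per cent.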
c E̲l̲e̲c̲t̲r̲o̲n̲i̲c̲ ̲M̲a̲i̲l̲
The RMA analysis results which apply to the Electronic
Mail Host are:
MTBF (EMH) = 14 years
MTTR (EMH) = 48 min
Availability (EMH) = 99.9995%
It should be noted that the above reflects the 1991
EMH configuration and that the disk storages represent
the major contribution to the stated EMH failure rates.
3.6 B̲a̲s̲e̲l̲i̲n̲e̲ ̲C̲a̲p̲a̲c̲i̲t̲y̲ ̲a̲n̲d̲ ̲P̲r̲o̲j̲e̲c̲t̲e̲d̲ ̲G̲r̲o̲w̲t̲h̲
3.6.1 B̲a̲s̲e̲l̲i̲n̲e̲ ̲C̲a̲p̲a̲c̲i̲t̲y̲
The proposed baseline ACDN has been sized to handle
the anticipated 1985 load. The following summarizes
the proposed capacities by location:
                         Toronto  Montreal  Winnipeg
1000 Transactions/hour      575       549       918
Host Connections              2         2         2
Internodal Trunks           3+2       2+3       3+3
ICC Access Lines             69        81        58
RCCSH Access Lines            0        33         8
3.6.2 P̲r̲o̲j̲e̲c̲t̲e̲d̲ ̲G̲r̲o̲w̲t̲h̲
The baseline ACDN configurations will be sized to handle
the projected capacities.
The projected volumes and required network capacity
are summarized below:
a T̲r̲a̲n̲s̲a̲c̲t̲i̲o̲n̲s̲/̲s̲e̲c̲.̲
The projected peak hour transaction volume per
sec. per node is:
----------------------------------------
TRANSACTIONS/SEC
YEAR TOR MTL WPG
84 137.0 130.8 218.7
85 159.7 152.5 255.0
86 185.4 177.0 296.0
87 203.9 194.7 325.6
88 224.3 214.2 358.2
89 246.8 235.6 394.0
90 271.4 259.1 433.4
91 298.6 285.1 476.7
The above represents the total incoming transaction
volume to nodes generated by resources outside the
node.
b. I̲n̲t̲e̲r̲n̲o̲d̲a̲l̲ ̲T̲r̲a̲f̲f̲i̲c̲
The projected internodal peak hour traffic is:
              KBPS               INTERNODAL TRUNKS
YEAR     T-M    T-W    M-W       T-M   T-W   M-W
--------------------------------------------------
84 28.4 81.8 79.6 .8 2.4 2.3
85 33.2 95.4 92.8 1.0 2.8 2.7
86 38.5 110.7 107.7 1.1 3.2 3.1
87 42.3 121.8 118.5 1.2 3.6 3.5
88 46.6 133.9 130.3 1.4 3.9 3.8
89 51.2 147.3 143.3 1.5 4.3 4.2
90 56.4 162.1 157.7 1.6 4.7 4.6
91 62.0 178.3 173.5 1.8 5.2 5.1
The projected internodal traffic has been derived using:
1. the projected transaction volume,
2. the projected length of transactions, type by type,
3. assuming a communications header of 20 bytes plus
trailer of 3 bytes,
4. assuming 10% network control and acknowledge traffic
with average length corresponding to that of other
traffic,
5. a trunk utilization of 75% has been used as baseline
to establish the required number of internodal
trunks.
c A̲c̲c̲e̲s̲s̲ ̲L̲i̲n̲e̲s̲

YEAR     ACCESS LINES      IEM LINES
         T     M     W     T    M    W
---------------------------------------
84 59 69 50 0 30 7
85 69 81 58 0 33 8
86 81 95 68 0 34 9
87 89 104 74 0 38 9
88 98 115 82 0 42 10
89 107 126 90 0 46 11
90 118 139 99 0 50 13
91 130 153 109 0 55 14
3.6.3 B̲l̲o̲c̲k̲ ̲D̲i̲a̲g̲r̲a̲m̲s̲
This section presents the year by year block diagrams
which constitute both the baseline and the projected
growth.
Only major network elements have been included:
o Network Management Processor (NMP)
o Nodal Control Processors (NCP)
o Nodal Switching Processors (NSP)
o Electronic Mail Processors (EMP)
o Channel Units
Figure III 3.6-1
Figure III 3.6-2
Figure III 3.6-3
Figure III 3.6-4
3.7 T̲e̲l̲e̲c̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲s̲
Recently, Christian Rovsing's maintenance subcontractor,
CNCP Telecommunications, responded to an Air Canada
Request for Information (RFI). Their response to the
RFI, dated December 15, 1980, is included in its entirety
in this proposal as Appendix E.
Christian Rovsing feels this document provides Air
Canada with sufficient details on CNCP's existing network
services and offerings, as well as their plans for
the future, to enable Air Canada to plan for a total
solution to their communication requirements.
3.8 O̲p̲t̲i̲o̲n̲s̲
o The hardware and software structures fully support
open-ended growth beyond the projected backbone
network expansion.
Potential growth areas are:
- Internodal Megabit Trunk Capacity (2 Mbit/s)
- Voice Switching through the ACDN
- Internodal Megabit satellite links
- Front End Processor connections to Host Access
Network
- Videotex
- Telefax
- Encryption
The (Internodal) trunk capacity in the megabit range
will be necessary both for the connection of the Passenger
Management System and for the Host-to-Host links in
the Host Access Network.
The CR80 module, the STI, presently interfacing to
the 16 Mbps SUPRA Bus and the 1.8 Mbps TDX bus, can
be used as the interface to such megabit communication
lines, together with the appropriate adapter.
This also supports megabit satellite links, when needed,
since the STI can have up to 1 Megaword at its disposal;
large buffering capacity and a selective
retransmission function are needed in order to provide
efficient transmission over long-delay satellite
hops.
Whether based upon conventional digitized voice transmission
using standard 64 Kbps per channel, or on the latest
compressed-coding transfer chips, by which voice channels
can be as narrow as 2400 bps, the Air Canada Data Network
equipment is well suited to include such types of traffic
too. This is indicated as a Common Branch Exchange
and as an automated office processor with a local network.
Videotex and Telefax might also be attachable, directly
or through other public networks.
These examples were intended to illustrate the resilience
of the CR80 system approach:
The system is balanced, i.e. it does not create inhibiting
bottlenecks when growth occurs.
The need for encryption facilities is presently foreseen.
Such implementations will benefit from our significant
experience in this field.
Soon to come may be the need for further Host Access
Subsystems which are part of the Host-Access Network.
The CR80 system concept is well suited for such applications,
as shown with the UNIVAC and IBM host examples offered,
and we look forward to supplying parts of the Host
Access Network as amendments to the backbone network.
For future efficient interconnections of the ACDN and
public data networks, the X.75 Gateway may be of interest.
3.8.1 V̲i̲d̲e̲o̲t̲e̲x̲
Christian Rovsing is in a position to offer Videotex
as an added value service to the proposed ACDN. This
product could be implemented as part of the proposed
Electronic Mail Service.
VIDEOTEX, also known as Viewdata, is a facility for
retrieving information from computer data bases. The
information is stored in "pages" in the VIDEOTEX system
or may optionally be retrieved from external data bases.
VIDEOTEX adds, to a data processing environment, the
capability of using low-cost and standardized terminals
to interact with different data bases in a user-oriented
way.
Applications for VIDEOTEX are virtually unlimited.
Here are a few examples:
o News Media
- news briefings
- weather forecasts
- restaurant guides
- going out
o Travel and Tourist Information
- time tables, local and global
- information on destinations, domestic and foreign
- local transportation schedules
- local entertainment guides
o Advertising
o Banking
- account inquiries
- funds transfer
- bill payments
- product/service manuals
- financial news service
- calculations
The VIDEOTEX product has been delivered by Christian
Rovsing to the Danish Tele Administrations implemented
on the CR80 computer system.
VIDEOTEX offers the following capabilities:
- Retrieval of VIDEOTEX images in a CR80 database
- Generation/modification of VIDEOTEX images
- Maintenance of user catalogue
- Provision for generation of users in user groups
- Maintenance of password
- Message service
- Generation of primary keywords
User terminals acquire access to the CR VIDEOTEX system
by means of a call on the public telephone network. This
facility could be useful as it could provide a means
of establishing a "public" entry point to ACDN value
added services.
The user of this facility could be a subscriber external
to the ACDN who dials the service. All that such a
subscriber needs is a normal television set modified
with a low-cost circuit board to act as a terminal,
together with a low-cost modem.
A logon image is presented to the user when he has
acquired access to the system. The user has to key
in his user number and associated password. The user
may choose individual VIDEOTEX applications once
authentication has been successfully completed.
The VIDEOTEX database is structured hierarchically
around the CR standard product CRAM. CRAM provides
variable length records and keys of a maximum length
of 127 bytes.
The VIDEOTEX database supports the following user access
methods:
- hierarchical search
- direct page selection
- selection by keyword
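The three access methods can be illustrated with a toy page store (the page numbers, titles and keywords below are invented for illustration; the actual system builds on the CRAM indexed-file product, which is not reproduced here):

```python
# A miniature VIDEOTEX page tree; in CRAM, keys may be up to 127 bytes.
pages = {
    "1":   {"title": "Travel",       "children": ["1.1", "1.2"]},
    "1.1": {"title": "Timetables",   "children": []},
    "1.2": {"title": "Destinations", "children": []},
}
keywords = {"timetable": "1.1", "destination": "1.2"}

def direct(page_no):
    """Direct page selection: the user keys in the page number."""
    return pages[page_no]["title"]

def hierarchical(choices):
    """Hierarchical search: descend from the root one menu choice at a time."""
    node = "1"
    for c in choices:
        node = pages[node]["children"][c]
    return pages[node]["title"]

def by_keyword(word):
    """Selection by keyword: a primary keyword maps straight to a page."""
    return pages[keywords[word]]["title"]

print(direct("1.2"), hierarchical([0]), by_keyword("destination"))
```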
Modification of VIDEOTEX images in the data base takes
place by means of on-line edit facilities. Terminals
which are to use this facility must be provided with
an alphanumeric keyboard.
Similar facilities are available for maintenance of
the user catalogue. Creation of user groups provides
a possibility for associating users with certain image
groups. Thus, a user may only recall images which
are within the user group to which he belongs. This
facility is only available to users with a capability
of modifying the basic data of the system, e.g. system
supervisors.
A password must be assigned at creation. However,
the user has facilities available for modifying his
personal password.
The VIDEOTEX product includes a facility similar to
the ACDN Protected Message Service. This includes a
possibility of preformatted images, where the user fills
in fields, or of free images, where the user has the
whole image at his disposal.
The message, which is stored in a message register,
awaits the receiver's call-up of the VIDEOTEX service.
The user will receive a notification of outstanding
messages and may call "read messages" to retrieve them.
The standard VIDEOTEX product supports from 12 to 60
termination points and 20,000 images if an 80 Mbyte
disc is selected.
3.8.2 H̲i̲g̲h̲ ̲D̲e̲n̲s̲i̲t̲y̲ ̲D̲i̲g̲i̲t̲a̲l̲ ̲T̲a̲p̲e̲ ̲R̲e̲c̲o̲r̲d̲i̲n̲g̲s̲
Long term storage of all PMS traffic tends to become
impractical if conventional mass storage techniques
like magnetic tapes or disk packs are used. This is
caused by the load on operators in mounting and demounting
this type of mass storage, together with the physical
space required for storing the large anticipated traffic
volumes.
High density Digital Tape Recording offers an attractive
alternative archiving technique. The high packing densities
offered by this technique could provide Air Canada
with a means for very compact storage which in addition
can reduce the load on the operators.
The high density tape recorder offered has the following
main characteristics:
o 14-track wideband tape recorder with record/reproduce
capability for 7 tracks
o 9200-foot tapes
o 22000 bit/inch per track on data tracks
2750 bit/inch per track on search tracks
o recording on two channels; each channel contains
one search and six data tracks
o tape capacity of 2.2 x 10^10 bits or 3,000 Mbytes
o tape speeds:
- record:    2.5/5/10 inch/sec
- reproduce: 2.5/5/10 inch/sec
- search:    240 inch/sec
o data rate of 160 Kbytes/sec for record/reproduce
o average search time of 4 1/2 min.
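The stated capacity and data rate can be roughly cross-checked from the track figures; the ~75 per cent packing efficiency implied below is an inference (formatting and inter-record gaps), not a number from the text:

```python
tape_inches = 9200 * 12  # 9200-foot tape, in inches
data_tracks = 2 * 6      # two channels, six data tracks each

raw_bits = 22000 * tape_inches * data_tracks
print(f"{raw_bits:.2e}")  # ~2.9e10 raw bits vs the stated 2.2e10 usable

# Record/reproduce rate on one channel (six data tracks) at 10 inch/sec:
rate_bytes = 22000 * 10 * 6 / 8
print(rate_bytes)         # close to the stated 160 Kbytes/sec
```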
Christian Rovsing has, as a result of involvement in
a number of Ground Computer Systems for handling large
volumes of satellite generated image data, developed
a number of products for interfacing to and handling
high density tape recorders.
These products, used in a computer environment like
Air Canada's, could be an attractive alternative to
more conventional storage media.