DOCUMENT III
TECHNICAL PROPOSAL
Oct. 8, 1981
Rev.: Nov. 6, 1981
LIST OF CONTENTS

6. SOFTWARE CHARACTERISTICS
6.1 Introduction
6.2 DAMOS CR80D Standard System Software
6.2.1 Overview of DAMOS Operational Software
6.2.2 Security
6.2.3 Kernel
6.2.3.1 Resource Management
6.2.3.2 Process Management
6.2.3.3 Memory Management
6.2.3.4 Process Communication
6.2.3.5 CPU Management
6.2.3.6 Processing Unit Management
6.2.3.7 BASIC Transport Service
6.2.3.7.1 Service Types
6.2.4 DAMOS Input/Output
6.2.4.1 File Management System
6.2.4.1.1 Device and Volume Handling
6.2.4.1.2 Directories
6.2.4.1.3 Files
6.2.4.1.3.1 File Types
6.2.4.1.3.2 File Commands
6.2.4.1.4 User Handling
6.2.4.1.5 Disk Integrity
6.2.4.1.5.1 Security
6.2.4.1.5.2 Redundant Disks
6.2.4.1.5.3 Bad Sectors
6.2.4.1.6 Access Methods
6.2.4.1.6.1 Unstructured Access
6.2.4.1.6.2 Indexed Sequential Access
6.2.4.2 Magnetic Tape File Management System
6.2.4.2.1 Device Functions
6.2.4.2.2 Volume Functions
6.2.4.2.3 File Functions
6.2.4.2.4 Record Functions
6.2.4.3 Terminal Management System
6.2.4.3.1 Transfer of I/O Data
6.2.4.3.1.1 File Mode
6.2.4.3.1.2 Communication Mode
6.2.4.3.2 User Handling
6.2.4.3.3 Hardware Categories
6.2.4.3.3.1 Examples
6.2.4.3.3.2 Terminal Controllers
6.2.4.3.3.3 Lines
6.2.4.3.3.4 Units
6.2.5 System Initialization
6.3 Standard Support Software
6.3.1 Terminal Operating System (TOS)
6.3.2 Language Processors
6.3.3 System Generation Software
6.3.4 Debugging Software
6.3.5 Utilities
6.3.6 Diagnostic Programs
6.3.6.1 Off-line Diagnostic Programs
6.3.6.2 On-line Diagnostic Programs
6.4 Redundant Operation
6.4.1 Hardware Redundancy
6.4.1.1 Node Redundancy
6.4.1.2 EMH Redundancy
6.4.1.3 Gateway Redundancy
6.4.1.4 NMH Redundancy
6.4.1.5 NCC Redundancy
6.4.1.6 FEP Redundancy
6.4.2 Switchover
6.4.2.1 PU Switchover
6.4.2.2 LTU Switchover
6.4.2.3 Disk Switchover
6.4.2.4 Supra Bus Switchover
6.4.3 Checkpointing
6.4.4 Recovery/Restart
6.4.4.1 Recovery Level
6.5 Transmission Software
6.5.1 Communication Interface
6.5.2 Network Interface
6.5.2.1 Protocol Levels
6.5.3 The Modules of the NSS
6.5.3.1 The Transport Station
6.5.3.2 The Packet Handler
6.5.3.3 The Supervisory Module
6.5.3.4 The Recovery Module
6.6 Communication Software
6.6.1 High Level Service Subsystem (HSS)
6.6.2 Terminal Access Subsystem
6.6.2.1 Three Lowest Levels
6.6.2.2 Terminal Transport Layer
6.6.2.3 Terminal Session Layer
6.6.2.4 Terminal Protocol/Printer Protocol Layer
6.6.2.5 Application Layer
6.6.3 Host Access Subsystem Software
6.6.3.1 Univac Host Interface
6.7 Gateway Software
6.7.1 ACNC Interface
6.7.2 Network Interface
6.7.3 Modules of the Gateway
6.7.3.1 The ICC Module
6.7.3.2 The Communications Control Interface Module
6.7.3.3 The High Level Service Module
6.7.3.4 The Supervisory Module
6.7.3.5 The Recovery Module
6.8 Electronic Mail Host
6.8.1 EMH Interface SW
6.8.2 Application Software
6.8.2.1 Programme Development
6.8.2.2 Protected Message Switching (PMS)
6.8.2.2.1 FIKS Definition and System Elements
6.8.2.2.2 System Overview and Functional Summary
6.8.2.2.3 FIKS Nodal Network
6.8.2.2.4 Message Users
6.8.2.2.5 Data Users
6.8.2.2.6 Network Supervision
6.8.2.2.7 FIKS Generic Elements
6.8.2.2.8 Traffic Security
6.8.2.2.9 Message Categories, Code and Formats
6.8.2.2.10 Message Entry, Storage and Distribution
6.8.2.2.11 Message Routing and Data Switching
6.8.2.2.12 System Supervision, Control and Maintenance
6.8.2.3 Session Control
6.8.2.4 Reconfiguration
6.8.2.5 EMH Recovery
6.8.3 System Software
6.8.4 External Devices
6.9 Network Control Centre (NCC) Software
6.9.1 NCC Network Environment
6.9.2 NCC Hardware Components
6.9.3 NCC Data Base
6.9.4 NCC-BNCC Operation
6.9.5 NMH-NCC Operation
6.9.6 Centralized versus Distributed Control
6.9.7 NCC Monitoring via WDP (Watchdog)
6.9.8 NCC Control via WDP
6.9.9 Software Monitoring
6.9.10 Software Initialization and Modification
6.9.11 Routing Control
6.9.12 Network Monitoring
6.9.13 Application Affection by Network Reconfigurations
6.9.14 Statistics
6.9.14.1 Statistics Information
6.9.15 Alarms
6.9.16 Supervisory Functions
6.9.17 NCC Man-Machine I/F
6.10 Network Management Host Software
6.10.1 NMH Interfaces and Functional Overview
6.10.2 Software Development and Maintenance Functions
6.10.3 Maintenance of the Configuration Data Base
6.10.4 Down-line Loading of Software and Configuration Data
6.10.5 Processing of Raw Data for Statistics and Billing Information
6.10.6 Network Modelling Software and Support for Automatic Network Tests
6.10.6.1 Network Model
6.10.6.2 Network Testing by Means of ATES
6. S̲O̲F̲T̲W̲A̲R̲E̲ ̲C̲H̲A̲R̲A̲C̲T̲E̲R̲I̲S̲T̲I̲C̲S̲
6.1 I̲n̲t̲r̲o̲d̲u̲c̲t̲i̲o̲n̲
This section describes the software which implements
the functions required for the Air Canada backbone
network. The software is described in the following
subsections:
o Operating System
o Standard Support Software
o Software for Dual Operations
o Transmission Software
(Link, Network and Transport Layers)
o Communication Software
(Session and Presentation Layers)
o Application Software, i.e. one subsection for each
of the generic elements: Gateway, EMH, NCC, NMH
The proposed software is organized in a highly modular structure, with specific emphasis on compliance with the OSI 7-layer model.
The use of internationally acknowledged standards provides for flexibility, security, and maintainability. Specific emphasis has also been placed on the requirement for "open ended growth", and the S/W will fully comply with this requirement.
To a large extent the proposed system builds on existing
software components. Thus a complete X.25 protocol
including a transport station is a standard S/W product,
and the Network Control Center software can be adopted
from the functionally equivalent "System Control Center"
in the "Danish Defence Integrated Communications System"
developed by Christian Rovsing.
6.2 D̲A̲M̲O̲S̲ ̲-̲ ̲C̲R̲8̲0̲D̲ ̲S̲t̲a̲n̲d̲a̲r̲d̲ ̲S̲y̲s̲t̲e̲m̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
o DAMOS Standard System Software is divided into
- operational software
- support software
The CR80D Advanced Multi Processor Operating System, DAMOS, is the standard operating system for memory mapped CR80D systems.
DAMOS is divided into operational and support software
as defined overleaf.
DAMOS includes a virtual memory operating system kernel
for the mapped CR80D series of computers.
DAMOS fully supports the CR80D architecture which facilitates
fault tolerant computing based on hardware redundancy.
DAMOS supports a wide range of machines, from a single Processing Unit (PU) with one CPU and 128 K words of main memory up to a maximum configuration of 16 PUs, where each PU has 5 CPUs and 16,384 K words of main memory and a virtually unlimited amount of peripheral equipment including backing storage.
DAMOS is particularly suited for use in real time systems but also supports other environments such as software development and batch processing. The main objectives fulfilled in DAMOS are: high efficiency, flexibility, and secure processing.
DAMOS is built as a hierarchy of modules, each performing
its own special task. The services offered by DAMOS
include CPU, PU, and memory management. Demand paging
is the basic memory scheduling mechanism, but process
swapping is also supported. Other levels of DAMOS
provide process management and interprocess communication,
basic device handling and higher level device handling
including handling of interactive terminals, communication
lines, and file structured backing storage devices.
DAMOS provides an operating system kernel which integrates supervisory services for real time, interactive and batch systems. A comprehensive set of software development tools is available under DAMOS. The following languages are presently available:

- assembler
- SWELL, the CR80 system programming language
- Pascal
- Cobol

The following languages are announced:

- Fortran 77
- Ada

The figure below summarizes the division of DAMOS into operational and support software.

DAMOS Operational Software:
- Kernel
  - resource management
  - directory functions
  - process management
  - memory management
  - process communication
  - device management
  - device handling
  - error processing
  - real time clock
  - PU management
  - PU service
  - transfer module
- Basic transport service
- Input/output system
  - File Management
  - Magtape Management
  - Terminal Management
- Initialization

DAMOS Support Software:
- terminal operating system
- language processors
- system generation software
- debugging facilities
- utilities
- maintenance and diagnostic programs

Fig. III 6.2-1 DAMOS Software Overview
The DAMOS standard operational software is described
in this section. The description is divided into the
following areas:
- Overview of DAMOS
- Security,
which describes the general DAMOS approach to data
security
- Kernel,
which describes the DAMOS operating system kernel
components
- DAMOS Input/Output,
which describes the DAMOS standard interfaces to
peripheral I/O equipment, the DAMOS disk file management,
magnetic tape file management and terminal and
communication line management systems
- System initialization
The DAMOS standard support software
- terminal operating system
- programming languages
- system generation software
- debugging software
- utilities
- maintenance and diagnostics programs
is defined in section 6.3.
6.2.1 O̲v̲e̲r̲v̲i̲e̲w̲ ̲o̲f̲ ̲D̲A̲M̲O̲S̲ ̲O̲p̲e̲r̲a̲t̲i̲o̲n̲a̲l̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
DAMOS may be visualized as the implementation of a set of abstract data types and a corresponding set of tools for creating and manipulating instantiations (objects) of these types.
The major components in DAMOS are the Kernel, the File
Management System, the Magnetic Tape File Management
System, the Terminal Management System and the Root
Operating System.
The DAMOS Kernel exists in one incarnation for each
processing unit (PU). The data types and functions
implemented by the Kernel are:
Data Type                   Function
CPUs                        CPU management and scheduling
processes                   process management
virtual memory segments     memory management
PUs                         PU management
synchronization elements    inter process communication
devices                     device management and basic device access methods
ports                       basic transport service
The Kernel also provides facilities for
- processing of errors
- centralized error reporting
- a data transfer mechanism
- a PU service module
The File Management System (FMS) implements files on disks. The FMS provides functions for manipulating and accessing files and acts as an operating system for a group of disk units. The FMS may exist in several incarnations in each PU, where each incarnation controls its own devices.
The Terminal Management System (TMS) is similar to
the FMS. It provides functions for manipulating and
accessing communication lines and terminals including
line printers. The objects accessed via the TMS are
called units. A unit may be an interactive terminal,
a line printer or a virtual circuit. The TMS acts
as an operating system for a group of communication
devices attached via LTUs, LTUXs or a parallel controller.
The TMS may exist in several incarnations in each PU,
each incarnation controlling its own devices.
The Magnetic Tape File Management System handles files
on magnetic tape units.
A common security policy and hierarchical resource management strategy is used by the Kernel, the FMS and the TMS. These strategies have been designed with the objective of allowing multiple concurrent higher level operating systems to coexist in a PU in a secure and independent manner.
The Root operating system is a basic high level operating system which initially possesses all resources in its PU.
6.2.2 S̲e̲c̲u̲r̲i̲t̲y̲
DAMOS offers comprehensive data security features.
A multilevel security system ensures that protected
data is not disclosed to unauthorized users and that
protected data is not modified by unauthorized users.
All memory allocatable for multiple users is erased
prior to allocation in case of reload, change of mode,
etc. The erase facility is controlled during system
generation.
The security system is based on the following facilities:
- Hardware supported user mode/privileged mode with 16 privilege levels. Privileged instructions can be executed only when processing under DAMOS control.
- Hardware protected addressing boundaries for each
process.
- Non-assigned instructions will cause a trap.
- Primary memory is parity protected.
- Memory bound violation, non-assigned instructions,
or illegal use of privileged instructions cause
an interrupt of highest priority.
- The hierarchical structure of DAMOS ensures a controlled
use of DAMOS functions.
- A general centralized addressing mechanism is used
whenever objects external to a user process are
referred to.
- A general centralized access authorization mechanism
is employed.
Centralized addressing capabilities and access authorization
are integral parts of the security implementation.
User processes are capable of addressing Kernel objects
only via the associated object descriptor table. The
following types of DAMOS objects are known only via
object descriptors:
- Processes
- Synchronization elements
- Segments
- Devices
- PUs
- CPUs
- Ports
The object descriptor forms the user level representation of a DAMOS Kernel object. It includes the following information:
- A capability vector specifying the operations which
may be performed on the object by the process which
has the object descriptor.
- A security classification
The access right information concerning the various DAMOS objects is retained in a PU directory of object control blocks. Each control block is associated with a single object.
When the access right of a process to a segment is
verified and the segment is included in the logical
memory space of the process, the contents of that segment
may be accessed on a 16-bit word basis at the hardware
level subject to hardware access checks.
Authorization of access to an object is based on
- security classification check
- functional capability check for the object
versus the process
The security policy is based on a multilevel, multicompartment security system.
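As an illustration of the capability and classification checks described above, the following minimal C sketch models an object descriptor check. The structure layout, field names and classification values are assumptions made for this example, not the actual DAMOS data formats.

    /* Sketch of an object-descriptor access check, assuming a capability
     * vector of operation bits and a simple classification dominance rule.
     * Names and field layout are illustrative only. */
    #include <stdio.h>

    typedef struct {
        unsigned int capabilities;     /* one bit per permitted operation */
        int          classification;   /* security level of the object    */
    } object_descriptor;

    /* Access is granted only if the requested operation bit is present and the
     * classification of the requesting process dominates that of the object. */
    static int access_allowed(const object_descriptor *od,
                              unsigned int requested_op,
                              int process_classification)
    {
        if ((od->capabilities & requested_op) == 0)
            return 0;                         /* functional capability check   */
        if (process_classification < od->classification)
            return 0;                         /* security classification check */
        return 1;
    }

    int main(void)
    {
        object_descriptor seg = { 0x0003 /* read+write */, 2 /* level 2 */ };
        printf("read allowed: %d\n", access_allowed(&seg, 0x0001, 2));
        printf("read denied (lower clearance): %d\n", access_allowed(&seg, 0x0001, 1));
        return 0;
    }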
6.2.3 K̲e̲r̲n̲e̲l̲
The DAMOS Kernel is a set of reentrant program modules
which provide the lowest level of system service above
the CR80D hardware and firmware level.
The Kernel consists of the following components:
- Resource Management,
which administers resources in a coherent way
- Directory Functions,
which provide a common directory service function
for the other Kernel components
- Process Manager,
which provides tools for CPU management, process
management and scheduling
- Page Manager,
which provides memory management tools and implements
a segmented virtual memory
- Process Communication Facility,
which provides a mechanism for exchange of control
information between processes
- Device Manager
which provides a common set of device related functions
for device handlers and a standard interface to
device handlers
- Device Handlers,
which control and interface to peripheral devices
- Error Processor,
which handles errors detected at the hardware and
Kernel level and provides a general central error
reporting mechanism
- Real Time Clock
for synchronization with real time
- PU Manager,
which provides functions for coupling and decoupling
PUs
- PU Service Module,
which provides service functions for remote PUs
- Transfer Module
for a hardware based transfer of data in a PU and
between PUs
- Basic Transport Service,
which provides a general mechanism for exchange
of bulk data between processes and device handlers.
The following subsections describe the main Kernel
functions:
- resource management
- process management
- memory management
- process communication
- CPU management
- PU management
- Basic transport service
6.2.3.1 R̲e̲s̲o̲u̲r̲c̲e̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
The goal of DAMOS Resource Management is to implement
a set of tools which enables the individual DAMOS modules
to handle resources in a coherent way. This again,
will make it possible for separate operating systems
to implement their own resource policies without interference.
Further built-in deadlock situations will be avoided.
The resource management module governs anonymous resources,
such as control blocks. Examples of resource types
are:
- process control blocks
- segment control blocks
- synchronization elements
- PU directory entries
Each type of resource is managed independently from
all other types.
The resources are managed in a way that corresponds
to the hierarchical relationships among processes.
Two operating systems which have initially been given disjoint sets of resources may delegate these resources to their subordinate processes according to separate and non-interfering strategies. For example, one operating system may give all its subordinate processes distinct resource pools, i.e. there will not be any risk of one process disturbing another. In contrast, the other operating system may let all its subordinate processes share a common pool, i.e. there may be much better resource utilization at the cost of the risk of deadlock among these processes.
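The two delegation strategies can be contrasted with a small, hedged sketch: each subordinate process allocates anonymous control blocks either from a private pool or from a shared pool. The pool sizes and C types below are invented for the illustration.

    /* Minimal model of a resource pool of anonymous control blocks.  A parent
     * operating system may hand each child a private pool (no interference,
     * possible waste) or let the children share one pool (better utilization,
     * risk of exhaustion and deadlock among the children). */
    #include <stdio.h>

    typedef struct {
        int total;
        int in_use;
    } resource_pool;

    static int pool_allocate(resource_pool *p)
    {
        if (p->in_use >= p->total)
            return 0;                  /* pool exhausted */
        p->in_use++;
        return 1;
    }

    int main(void)
    {
        resource_pool private_a = { 4, 0 }, private_b = { 4, 0 };  /* disjoint pools */
        resource_pool shared    = { 8, 0 };                        /* common pool    */

        printf("private A alloc: %d\n", pool_allocate(&private_a));
        printf("private B alloc: %d\n", pool_allocate(&private_b));
        printf("shared alloc:    %d\n", pool_allocate(&shared));
        return 0;
    }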
6.2.3.2 P̲r̲o̲c̲e̲s̲s̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
In the CR80D system, a clear distinction is made between programs and their executions, called processes. This distinction is made logically as well as physically by applying two different base registers: one for program code and one for process data. This separation makes the code inherently reentrant and unmodifiable.
The process is the fundamental concept in CR80D terminology.
The process is an execution of a program module in
a given memory area. The process is identified to
the remaining software by a unique name. Thus, other processes need not be aware of the actual location of a process in memory, but refer to it by name.
6.2.3.3 M̲e̲m̲o̲r̲y̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
The addressing mechanism of the CR80D limits the address
space seen by a process at any one time to a window
of 2 x 64K words. Due to the virtual memory concept
of DAMOS a process may, however, change the "position"
of the window, thus leading to a practically unlimited
addressing capability.
The finest granularity of the virtual memory known
to a process is a segment. Segments can be created
and deleted. They have unique identifiers and may
have different sizes. A process which has created
a segment may allow others to share the segment by
explicitly identifying them and stating their access
rights to the segment.
The Page Manager implements virtual memory. The actual
space allocated in a Processing Unit to a process may
be only a few segments, while the logical address space
is the full 2 x 64k words. Whenever addressing of
a segment, that is not in physical memory, is attempted,
the Page Manager will bring in the addressed segment.
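A minimal sketch of the windowing and demand paging idea follows; the segment count, table layout and function names are assumptions chosen for the example and do not reflect the real Page Manager interface.

    /* Toy model of the 2 x 64K word window: a segment table records which
     * segments are currently in physical memory; touching an absent segment
     * triggers the (simulated) page manager to bring it in. */
    #include <stdio.h>
    #include <string.h>

    #define SEGMENTS 8

    static int resident[SEGMENTS];      /* 1 if the segment is in physical memory */

    static void page_manager_bring_in(int seg)
    {
        printf("page manager: fetching segment %d from backing store\n", seg);
        resident[seg] = 1;
    }

    static void touch_segment(int seg)
    {
        if (!resident[seg])
            page_manager_bring_in(seg); /* demand paging */
        printf("process addresses segment %d inside its window\n", seg);
    }

    int main(void)
    {
        memset(resident, 0, sizeof resident);
        touch_segment(3);               /* first access causes a fetch          */
        touch_segment(3);               /* second access is served directly     */
        return 0;
    }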
6.2.3.4 P̲r̲o̲c̲e̲s̲s̲ ̲C̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲
Synchronization of processes and communication between
them is supported in DAMOS by objects called Synchronization
Elements (synch elements) which are referred to by
symbolic names and may thus be known by processes system-wide.
In DAMOS a process cannot "send" a block of data directly
to another process identified by name. The exchange
must be done using a synch element.
6.2.3.5 C̲P̲U̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
The CPUs in a processing unit may be pooled, and a given process is allocated processing power from one such pool. In this way CPUs can be dedicated to specific processes.
6.2.3.6 P̲r̲o̲c̲e̲s̲s̲i̲n̲g̲ ̲U̲n̲i̲t̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
The DAMOS Kernel provides facilities for managing the
logical connections between the individual Processing
Units attached to a Supra Bus.
PUs may be connected logically into groups. The number
of PUs in a group may vary from 1 to 16. Two groups
may be merged, the result being a new PU-group.
Objects are identified by symbolic names having either
local or global scope. They are accessible from all
PUs in the group where they reside.
PU Management provides functions for inclusion of a
PU in a PU-group.
A logical connection between two PUs is not established until both have received an include request from the opposite side. When trying to connect two PU-groups, conflicts
between the use of global names may arise. Therefore,
a connection is only established if the scope of all
names can be maintained.
The PU Management is designed to allow graceful degradation
when purposely closing a PU or isolating a faulty PU.
It is possible from a PU to force a member out of
its common group. All PUs in the group are informed
to break their logical connection to the designated
PU. As a consequence all global objects residing in
the isolated PU are thereafter unknown to the group.
If not faulty, the isolated PU continues executing
its local processes and is ready to receive new include
requests.
6.2.3.7 B̲A̲S̲I̲C̲ ̲T̲r̲a̲n̲s̲p̲o̲r̲t̲ ̲S̲e̲r̲v̲i̲c̲e̲
The Basic Transport Service (BTS) offers DAMOS processes
and device handlers the possibility to communicate
with other remote or local processes and device handlers.
Processes and device handlers - in the following called
Service Users (SU) - may be addressed indirectly via
ports.
A service user can dynamically be tied to a port. When a service user wants to communicate with another service user, the former service user requests a connection to be established between (one of) his own port(s) and a port to which the remote service user is tied.
Once such a connection has been established, the two
service users may exchange data and control information
across the connection.
The figure overleaf depicts the possible connections
within a multiple PU node.
(Figure: possible connections within a multiple PU node. The gateway process shown is not part of the BTS.)
6.2.3.7.1 S̲e̲r̲v̲i̲c̲e̲ ̲T̲y̲p̲e̲s̲
The BTS offers two different types of service:
- stream service and
- message service
In the first type of service data flows as a logically
continuous stream from one SU to the other. The blocking
of data into buffers performed by the transmitting
SU is not seen by the receiving SU.
In the second type of service the buffers are treated as semantic entities and transmitted as such, i.e. a homomorphic correspondence between transmit and receive buffers is enforced.
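The difference between the two service types can be illustrated with a small sketch in which the same transmit buffers are delivered under both disciplines; all names below are invented for the example.

    /* Toy illustration of the two BTS service types.  In message service the
     * boundaries of the transmit buffers are preserved end to end; in stream
     * service the bytes are delivered as one continuous stream and the
     * receiver chooses its own blocking. */
    #include <stdio.h>
    #include <string.h>

    static void deliver_message_service(const char *bufs[], int n)
    {
        for (int i = 0; i < n; i++)
            printf("message service: received buffer %d = \"%s\"\n", i, bufs[i]);
    }

    static void deliver_stream_service(const char *bufs[], int n)
    {
        char stream[128] = "";
        for (int i = 0; i < n; i++)
            strcat(stream, bufs[i]);    /* blocking chosen by the transmitter is lost */
        printf("stream service: received \"%s\"\n", stream);
    }

    int main(void)
    {
        const char *transmit_buffers[] = { "HELLO ", "WORLD" };
        deliver_message_service(transmit_buffers, 2);
        deliver_stream_service(transmit_buffers, 2);
        return 0;
    }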
6.2.4 D̲A̲M̲O̲S̲ ̲I̲n̲p̲u̲t̲/̲O̲u̲t̲p̲u̲t̲
DAMOS supports input/output (I/O) from user programs
at different levels.
At the lowest level user programs can interact with device handlers directly and transfer blocks of data by means of the Basic Transport Service module. This interface is illustrated in the figure on the next page.
Device control is exercised via the Device Manager functions. Data is transferred between the user process and the device handler using a port in the user process and a port in the device handler.
At a higher level DAMOS offers a more structured I/O
facility under the DAMOS I/O System (IOS).
The IOS provides a uniform, device independent interface
for user processes to
- disk files
- magnetic tape files
- interactive terminals
- communication lines
- line printers
The IOS is a set of standard interface procedures through
which a user communicates with a class of DAMOS service
processes known as General File Management Systems.
General File Management Systems include:
- the File Management System which implements disk
files
- the Magnetic Tape File Management System for magnetic
tape files
- the Terminal Management System for communication
lines, interactive terminals and printers.
The General File Management Systems provide functions
which are classified as:
- device handling
- user handling
- file handling
- file access
The common file access functions provided by the IOS are readbytes for input, and appendbytes and modifybytes for output.
(Figure: DAMOS Input/Output.)
These basic functions are used for transfer of blocks
of data.
On top of these functions the IOS provides a stream
I/O facility where the IOS handles the blocking and
buffering of data.
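To illustrate the device independent access functions named above, the sketch below exercises readbytes, appendbytes and modifybytes against an in-memory file. The C signatures are assumptions made for the example and do not reproduce the actual IOS calling convention.

    /* Toy in-memory file used to illustrate the three IOS primitives:
     * readbytes (input), appendbytes and modifybytes (output). */
    #include <stdio.h>
    #include <string.h>

    static unsigned char file_data[256];
    static int           file_length = 0;

    static void appendbytes(const unsigned char *buf, int count)
    {
        memcpy(file_data + file_length, buf, count);   /* extend the file */
        file_length += count;
    }

    static void modifybytes(int offset, const unsigned char *buf, int count)
    {
        memcpy(file_data + offset, buf, count);        /* overwrite in place */
    }

    static void readbytes(int offset, unsigned char *buf, int count)
    {
        memcpy(buf, file_data + offset, count);
    }

    int main(void)
    {
        unsigned char out[16] = { 0 };
        appendbytes((const unsigned char *)"DAMOS IOS", 9);
        modifybytes(6, (const unsigned char *)"I/O", 3);
        readbytes(0, out, 9);
        printf("%s\n", out);                           /* prints "DAMOS I/O" */
        return 0;
    }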
6.2.4.1 F̲i̲l̲e̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲S̲y̲s̲t̲e̲m̲
The File Management System (FMS) is responsible for
storing, maintaining, and retrieving information on
secondary storage devices (disks).
The number and kind of devices attached to the FMS
is dynamically reconfigurable.
The following subjects are handled:
- devices and volumes
…02…- directories
- files
- users
- integrity
- access methods
6.2.4.1.1 D̲e̲v̲i̲c̲e̲ ̲a̲n̲d̲ ̲V̲o̲l̲u̲m̲e̲ ̲H̲a̲n̲d̲l̲i̲n̲g̲
The file system may be given commands concerning:
- Management of peripheral devices.
Devices may be assigned to and deassigned from
the file system dynamically. Instances of device
handlers are at the same time created or deleted.
- Management of volumes.
Volumes may be mounted on and dismounted from specific
devices.
6.2.4.1.2 D̲i̲r̲e̲c̲t̲o̲r̲i̲e̲s̲
The file system uses directories to implement symbolic
naming of files. If a file has been entered into a
directory under a name specified by the user, it is
possible to locate and use it later on. Temporary
files do not need to be named. A file may be entered
into several directories, perhaps under different names.
Since a directory is also considered a file, it can
itself be given a name and entered into another directory.
This process may continue to any depth, thus enabling
a hierarchical structure of file names.
6.2.4.1.3 F̲i̲l̲e̲s̲
6.2.4.1.3.1 F̲i̲l̲e̲ ̲T̲y̲p̲e̲s̲
The file system supports two different organizations
of files on disk. A contiguous file consists of a
sequence of consecutive sectors on the disk. The size
of a contiguous file is fixed at the time the file
is created and cannot be extended later on. A random
file consists of a chain of indices giving the addresses
of areas scattered on the volume. Each area consists
of a number of consecutive sectors. The number of
sectors per area is determined at creation time, whereas
the number of areas may increase during the lifetime
of the file.
6.2.4.1.3.2 F̲i̲l̲e̲ ̲C̲o̲m̲m̲a̲n̲d̲s̲
The commands given to the file system concerning files
may be grouped as:
- Creation and removal of files.
A user may request that a file is created with
a given set of attributes and put on a named volume.
- Naming of files in directories.
A file may be entered into a directory under a
symbolic name. Using that name it is possible
to locate the file later on. The file may also
be renamed or removed from the directory again.
- Change of access rights for a specific user group
(or the public) vis a vis a file. The right to
change the access rights is itself delegatable.
6.2.4.1.4 U̲s̲e̲r̲ ̲H̲a̲n̲d̲l̲i̲n̲g̲
The file management system may be given commands concerning:
- Creation and Removal of users (processes)
6.2.4.1.5 D̲i̲s̲k̲ ̲I̲n̲t̲e̲g̲r̲i̲t̲y̲
6.2.4.1.5.1 S̲e̲c̲u̲r̲i̲t̲y̲
The protection of data entrusted to the file management
system is handled by two mechanisms:
The first mechanism for access control is based on
the use of Access Control Lists (ACL). There is an
ACL connected to each file. The ACL is a table which
describes the access rights of each individual user
group (one being the public) to the corresponding file.
Whenever a user tries to access a file, the ACL is
used to verify that he is indeed allowed to perform
this access.
The second mechanism for access control is based on
a security classification system. Each user and each
file is assigned a classification. The user classification
is recorded in the user control block and the file
classification is recorded on the volume. An access
to a file is only allowed if the classification levels
of the user and the file match each other.
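A compact sketch of the two mechanisms is given below: an ACL lookup per user group followed by a classification comparison. The structure layout, group names and classification values are illustrative assumptions only.

    /* Sketch of the two FMS access-control mechanisms: an Access Control List
     * naming the rights of each user group, and a classification comparison.
     * The layout is invented for the example. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_GROUPS 4

    typedef struct {
        char         group[16];
        unsigned int rights;            /* bit 0 = read, bit 1 = write */
    } acl_entry;

    typedef struct {
        acl_entry acl[MAX_GROUPS];
        int       classification;
    } fms_file;

    static int fms_access_allowed(const fms_file *f, const char *group,
                                  unsigned int wanted, int user_classification)
    {
        if (user_classification < f->classification)
            return 0;                                       /* classification check */
        for (int i = 0; i < MAX_GROUPS; i++)
            if (strcmp(f->acl[i].group, group) == 0)
                return (f->acl[i].rights & wanted) == wanted;   /* ACL check */
        return 0;                                           /* group not listed */
    }

    int main(void)
    {
        fms_file f = { { { "OPS", 3 }, { "PUBLIC", 1 } }, 1 };
        printf("OPS write:    %d\n", fms_access_allowed(&f, "OPS", 2, 2));
        printf("PUBLIC write: %d\n", fms_access_allowed(&f, "PUBLIC", 2, 2));
        return 0;
    }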
6.2.4.1.5.2 R̲e̲d̲u̲n̲d̲a̲n̲t̲ ̲D̲i̲s̲k̲s̲
The FMS allows use of redundant disk packs, which are
updated concurrently to assure that data will not be
lost in case of a hard error on one disk.
The FMS allows exclusion of one of the two identical
volumes, while normal service goes on on the other
one. After repair it is possible to bring up one volume
to the state of the running volume, while normal service
continues (perhaps with degraded performance).
The bringing up is done by making a raw copy of the good disk onto the one which should be brought up. While
the copying takes place all read operations are directed
to both disks.
6.2.4.1.5.3 B̲a̲d̲ ̲S̲e̲c̲t̲o̲r̲s̲
The FMS is able to use a disk pack with bad sectors,
unless it is sector 0.
The bad sectors are handled by keeping a translation
table on each volume from each bad sector to an alternative
sector.
While using redundant disks the translation tables
of the two disks must be kept identical to assure that
all disk addresses can be interpreted in the same
way. If bad sectors are detected while bringing up
a disk, they are marked as such on both disks and both
translation tables are updated accordingly.
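The bad sector handling can be sketched as a per-volume translation table consulted before every transfer; the table size, sector numbers and function name below are assumptions made for the illustration.

    /* Sketch of per-volume bad-sector handling: a small translation table maps
     * each known bad sector to an alternative sector, and every disk address
     * is passed through the table before the transfer is issued. */
    #include <stdio.h>

    #define MAX_BAD 8

    typedef struct {
        unsigned int bad[MAX_BAD];
        unsigned int alternative[MAX_BAD];
        int          count;
    } bad_sector_table;

    static unsigned int translate_sector(const bad_sector_table *t, unsigned int sector)
    {
        for (int i = 0; i < t->count; i++)
            if (t->bad[i] == sector)
                return t->alternative[i];   /* redirect to the spare sector */
        return sector;                      /* sector is good, use it as is */
    }

    int main(void)
    {
        bad_sector_table t = { { 117 }, { 9000 }, 1 };
        printf("sector 117 -> %u\n", translate_sector(&t, 117));
        printf("sector 118 -> %u\n", translate_sector(&t, 118));
        return 0;
    }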
6.2.4.1.6 A̲c̲c̲e̲s̲s̲ ̲M̲e̲t̲h̲o̲d̲s̲
The file management system implements two access methods
to files:
6.2.4.1.6.1 U̲n̲s̲t̲r̲u̲c̲t̲u̲r̲e̲d̲ ̲A̲c̲c̲e̲s̲s̲
For transfer purposes a file is considered simply as
a string of bytes. It is, therefore, a byte string
which is transferred between a file and a user buffer.
The user can directly access any byte string in a file.
The commands which are implemented by this access method
are:
READBYTES - Read a specified byte string
MODIFYBYTES - Change a specified byte string
APPENDBYTES - Append a byte string to the end of
the file.
6.2.4.1.6.2 I̲n̲d̲e̲x̲e̲d̲ ̲S̲e̲q̲u̲e̲n̲t̲i̲a̲l̲ ̲A̲c̲c̲e̲s̲s̲
CRAM is a multi-level-index, indexed sequential file access method. It features random or sequential (forward or reverse) access to records of 0 to n bytes, n depending on the selected block size, based on keys of 0-126 bytes. The collating sequence uses the binary value of the bytes, so e.g. character strings are sorted alphabetically.
CRAM is working on normal contiguous FMS files which
are initialized for CRAM use by means of a special
CRAM operation.
The CRAM updating philosophy is based on the execution of a batch of related updates, which together form a consistent status change of the CRAM file and are physically applied as a single update by means of a LOCK operation. That is, after such a batch of updates, all these updates may either be forgotten (by means of the FORGET operation) or locked (by means of the LOCK operation). Both operations are performed without critical regions, i.e. without periods of CRAM data base inconsistency.
For convenience, CRAM supports subdivision of the CRAM
file in up to 255 subfiles, each identified by a subfile
identifier of 0-126 byte (as a key).
CRAM keeps track of the different versions of the CRAM
data base by means of a 32 bit version number, which
is incremented every time CRAMNEWLOCK (the locking
operation) is called. This version number can only
be changed by CRAMNEWLOCK (and CRAMINIT), but if the
user intends to use it for some sort of unique update
version stamping, it is delivered by the operations
CRAMNEWOPEN, CRAMNEWLOCK, CRAMFORGET and CRAMNEWVERSION.
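The batch-and-lock update discipline can be sketched as shown below. The operation names CRAMNEWOPEN, CRAMNEWLOCK and CRAMFORGET are those used above, but the C signatures and the in-memory stubs are invented stand-ins, not the real CRAM interface.

    /* Sketch of the CRAM update discipline: a batch of related updates is
     * either made permanent as one consistent status change (lock) or
     * discarded as a whole (forget).  The stubs below only model the
     * version-number behaviour described in the text. */
    #include <stdio.h>

    static unsigned long cram_version = 0;   /* incremented by every lock        */
    static int           pending      = 0;   /* updates not yet locked/forgotten */

    static unsigned long CRAMNEWOPEN(void)  { return cram_version; }
    static void          cram_update(void)  { pending++; }
    static unsigned long CRAMNEWLOCK(void)  { pending = 0; return ++cram_version; }
    static unsigned long CRAMFORGET(void)   { pending = 0; return cram_version; }

    int main(void)
    {
        printf("opened at version %lu\n", CRAMNEWOPEN());
        cram_update();                       /* batch of related updates */
        cram_update();
        printf("locked, new version %lu\n", CRAMNEWLOCK());

        cram_update();                       /* another batch, discarded */
        printf("forgotten, version stays %lu\n", CRAMFORGET());
        return 0;
    }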
6.2.4.2 M̲a̲g̲n̲e̲t̲i̲c̲ ̲T̲a̲p̲e̲ ̲F̲i̲l̲e̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲S̲y̲s̲t̲e̲m̲
The Magnetic Tape File Management System (MTFMS) is
responsible for storing and retrieving information
on magnetic tapes. It is able to handle one magnetic
tape controller with a maximum of 8 tape transports
in daisy-chain. The driver is logically split into
3 parts:
- I/O-SYSTEM interface
- Main Processing
- Magnetic tape controller interface
Commands for the MTFMS are received by the I/O-System
interface while the controller interface implements
a number of (low level) commands for handling a tape
transport.
Symbolic volume names and file names are implemented
through use of label records which comply with the
ISO 1001 standard.
The functions of the file system can be separated into
four groups:
- Device functions
- Volume functions
- File functions
- Record functions
6.2.4.2.1 D̲e̲v̲i̲c̲e̲ ̲f̲u̲n̲c̲t̲i̲o̲n̲s̲
The following functions are defined:
- Assign a given name to a given unit of the
controller.
- Deassign a given device.
6.2.4.2.2 V̲o̲l̲u̲m̲e̲ ̲f̲u̲n̲c̲t̲i̲o̲n̲s̲
- Initiate the tape on a given device assigning a
name to it by writing a volume label.
- Mount a given volume on a given device.
- Dismount a given volume.
- Rewind a given volume.
6.2.4.2.3 F̲i̲l̲e̲ ̲f̲u̲n̲c̲t̲i̲o̲n̲s̲
- Create a file on a given volume. The following
information must be supplied by the caller and
will be written onto the tape in the file header
label records:
- File name
- Fixed/variable length record specification
- Record size.
The file is opened for output and the given volume
is reserved for the caller.
- Find a file with a given name on a given volume.
The file is opened for input and the given volume
is reserved.
- Skip a given number of files (backwards or forwards)
on a given volume. The file at the resulting tape
position is opened for input and the volume is
reserved.
- Get information about the currently open file on
a given volume. Information like file sequence
number, record size and type (fixed/variable length)
can be retrieved.
- Close currently open file on a given volume. Volume
reservation is released.
6.2.4.2.4 R̲e̲c̲o̲r̲d̲ ̲f̲u̲n̲c̲t̲i̲o̲n̲s̲
- Skip a given number of records (forwards or backwards)
in a given file.
- Read a record in a given file.
- Write a record in a given file. The MTFMS performs
recovery from writing errors by
- backspacing over the record in error
- erasing a fixed length of about 3.7 inches
(thus increasing the record gap).
- attempting the writing once more.
This procedure will be repeated at most 10 times (see the sketch below).
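The recovery procedure lends itself to a short sketch: backspace over the record in error, erase a fixed gap, and retry up to 10 times. The tape driver calls below are hypothetical stubs introduced only for the example.

    /* Sketch of MTFMS write-error recovery: on error the transport backspaces
     * over the record, erases about 3.7 inches of tape (enlarging the record
     * gap) and writes again, giving up after 10 attempts. */
    #include <stdio.h>

    static int attempts_until_success = 3;   /* simulate two transient errors */

    static int  tape_write_record(void) { return --attempts_until_success <= 0; }
    static void tape_backspace(void)    { printf("backspace over record in error\n"); }
    static void tape_erase_gap(void)    { printf("erase ~3.7 inches of tape\n"); }

    static int write_with_recovery(void)
    {
        for (int attempt = 1; attempt <= 10; attempt++) {
            if (tape_write_record()) {
                printf("record written on attempt %d\n", attempt);
                return 1;
            }
            tape_backspace();
            tape_erase_gap();
        }
        return 0;                            /* permanent write error */
    }

    int main(void)
    {
        if (!write_with_recovery())
            printf("unrecoverable write error\n");
        return 0;
    }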
6.2.4.3 T̲e̲r̲m̲i̲n̲a̲l̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲S̲y̲s̲t̲e̲m̲ ̲
The TMS is a service process which manages devices
characterized by serial blockwise access. Examples
of such devices are:
- interactive terminals (screen or hardcopy)
- data communication equipment (modems)
- line printers
- card readers
In the following, the phrase "terminal" is used as
a common term for any device of this category.
Terminals may be attached to LTUs, LTUXs (via TDX)
and parallel interfaces.
The TMS performs the following main functions:
- terminal related security validation
- access control for terminals
- collecting of statistical information
- management of terminals
- transfer of I/O data between terminal device
handlers and user processes.
The following subsections define:
- transfer of I/O data
- user handling
- hardware categories
6.2.4.3.1 T̲r̲a̲n̲s̲f̲e̲r̲ ̲o̲f̲ ̲I̲/̲O̲ ̲D̲a̲t̲a̲
The TMS enables user processes to perform I/O communication
with terminals.
The I/O communication can be performed in two modes:
file mode and communication mode.
6.2.4.3.1.1 F̲i̲l̲e̲ ̲M̲o̲d̲e̲
In this mode I/O to terminals is identical to I/O to
backing store files from the point of view of the user
process.
The same IOS basic procedures are used (appendbytes,
modifybytes, readbytes) and direct as well as stream
I/O can be used.
This mode provides the greatest flexibility for the
user process. This flexibility is obtained at the expense
of an additional overhead, as all I/O requests from
the user process will have to pass the TMS.
File mode I/O is aimed at terminals which will be connected
to varying processes with different security profiles.
The terminals in question will normally be local or
remote interactive hardcopy or screen terminals.
6.2.4.3.1.2 C̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲ ̲M̲o̲d̲e̲
In this mode I/O requests from the user process are
sent directly to the terminal handler. The I/O interface
between the user process and the terminal device handler
is that of the BTS and therefore inherently different
from backing store I/O.
Communication mode I/O is aimed at - but not limited
to - terminals which are connected to a single user
process throughout its lifetime.
The terminals in question are primarily communication
lines, e.g. trunk lines in a message switching
network.
6.2.4.3.2 U̲s̲e̲r̲ ̲H̲a̲n̲d̲l̲i̲n̲g̲
Before a user process can make use of the TMS functions,
it must be logged on to the TMS by means of the Useron
command. This command must be invoked by a process
which is already known by the TMS, either through another
Useron command or because it is the parent process
for the TMS.
In the Useron command the calling process grants some
of its TMS resources to the process which is logged
on to the TMS in the Useron command.
When a user process ceases to use the TMS, its TMS
resources must be released by a call of Useroff.
6.2.4.3.3 H̲a̲r̲d̲w̲a̲r̲e̲ ̲C̲a̲t̲e̲g̲o̲r̲i̲e̲s̲
The TMS recognizes the following categories of equipment:
- T̲e̲r̲m̲i̲n̲a̲l̲ ̲C̲o̲n̲t̲r̲o̲l̲l̲e̲r̲ which is a line controller
interfacing one or more lines.
- L̲i̲n̲e̲, which is a group of physical signals
capable of sustaining one simplex or duplex
data stream.
- U̲n̲i̲t̲, which is a terminal device connected
to a line.
If more than one unit is connected to a given line,
the line is called a multiplexed line.
6.2.4.3.3.1 E̲x̲a̲m̲p̲l̲e̲s̲
1. A parallel interface module interfacing one or
more line printers. The line printers are units.
2. An LTU with four interface lines each connecting
one VDU.
3. An LTU with two interface lines on which a number
of VDUs are multidropped.
4. An X25 LTU with a single line which supports N
virtual circuits (VC).
Each VC is a unit.
6.2.4.3.3.2 T̲e̲r̲m̲i̲n̲a̲l̲ ̲C̲o̲n̲t̲r̲o̲l̲l̲e̲r̲s̲
Terminal controllers may dynamically be assigned and
deassigned by the parent process for the TMS.
A controller can either be assigned as an active or
as a stand-by controller.
A stand-by controller is a device which normally is
not active, but which may take over in case of a failure
in an active controller.
When an active controller is assigned for which a stand-by
is available, this must be defined in the assignment
command.
The process which assigned a controller is its initial
owner.
Ownership of a controller may be transferred to another
user process which is logged on to the TMS.
When a controller is assigned, the TMS creates a corresponding
device handler.
6.2.4.3.3.3 L̲i̲n̲e̲s̲
The owner of a controller may assign lines to the controller.
When a line is assigned the TMS calls the device handler
for the controller to that effect.
6.2.4.3.3.4 U̲n̲i̲t̲s̲
The owner of a controller with lines assigned to it
may create units on the lines.
Units can be created for file mode I/O or communication
mode I/O.
A unit created for file I/O may be a multiple or single
access unit.
Single access units can only be accessed by the owner
whereas multiple access units may be accessed by a
number of user processes.
When the owner creates a unit, an access path to the
unit is established. The owner may from then on access the unit by the IOS functions readbytes for input, and appendbytes and modifybytes for output.
Other users may obtain access to a multiple access
unit in different ways as described in the following.
The creator of a unit may offer it to another user
by means of the TMS OFFER function. The user to which
the unit is offered obtains access to the unit by the
ACCEPT function.
The creator of a unit may define a symbolic name -
a unit name - for the unit. A unit name is syntactically
identical to an FMS file name.
Other users may obtain access to the named unit by
the LOOKUP ̲UNIT command which corresponds to the FMS
commands getroot, lookup and descent.
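A hedged sketch of how a multiple access unit might be shared follows. OFFER, ACCEPT and LOOKUP_UNIT are the TMS functions named above, but the data structure and C signatures are invented for the illustration.

    /* Toy model of sharing a multiple access unit: the creator may offer it to
     * a named user, or give it a symbolic unit name that other users resolve
     * by lookup.  Everything below is an illustrative stand-in. */
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        char unit_name[16];     /* symbolic name, syntax like an FMS file name */
        char offered_to[16];    /* user the unit is currently offered to       */
    } tms_unit;

    static void offer(tms_unit *u, const char *user)   { strcpy(u->offered_to, user); }

    static int accept_unit(const tms_unit *u, const char *user)
    {
        return strcmp(u->offered_to, user) == 0;        /* access path established */
    }

    static int lookup_unit(const tms_unit *u, const char *name)
    {
        return strcmp(u->unit_name, name) == 0;         /* resolve by symbolic name */
    }

    int main(void)
    {
        tms_unit printer = { "LPT1", "" };
        offer(&printer, "SPOOLER");
        printf("SPOOLER accepts: %d\n", accept_unit(&printer, "SPOOLER"));
        printf("lookup LPT1:     %d\n", lookup_unit(&printer, "LPT1"));
        return 0;
    }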
6.2.5 S̲y̲s̲t̲e̲m̲ ̲I̲n̲i̲t̲i̲a̲l̲i̲z̲a̲t̲i̲o̲n̲
When a CR80D memory mapped PU is master cleared, a
boot strap loader is given control.
The boot strap loader is contained in a programmed
read-only memory which is part of the MAP module. Having
initialized the translation tables of the MAP module,
the boot strap loader is able to fetch a system load
module from a disk connected to the PU.
An initialization module which is part of the load
module initializes the DAMOS kernel and the DAMOS Root
process.
The Root process possesses all the PU resources. The
Root creates and initializes a system File Management
System.
6.3 S̲t̲a̲n̲d̲a̲r̲d̲ ̲S̲u̲p̲p̲o̲r̲t̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
o The support software assists in software development
and in hardware maintenance and diagnostics.
The support software consists of:
- terminal operating system
- language processors
- system generation software
- debugging software
- utilities
- maintenance and diagnostics programs
6.3.1 T̲e̲r̲m̲i̲n̲a̲l̲ ̲O̲p̲e̲r̲a̲t̲i̲n̲g̲ ̲S̲y̲s̲t̲e̲m̲ ̲(̲T̲O̲S̲)̲
TOS is an operating system which supports interactive
terminal users in a program development environment.
The functions performed by TOS are invoked by two types
of requests:
Operator commands, which are messages typed at terminals
and sent to TOS. The functions which may be performed
in response to these requests are:
- assign/deassign disk devices
- mount/dismount volumes on disk drives
- include terminals in the system/remove terminals
from the system
- log on to the system
- remove processes from the system
- broadcast messages to terminals
- manipulate a "news" message facility
- present status information
- run a task
- close the system
Programmed requests are sent from processes to TOS.
The functions which may be performed in response to
these requests are:
- allocate resources (memory) for a task, load a
program, and create a process to execute the program
- start a process
- stop a process
- restart a process
- logout from the system
- reserve a print queue file semaphore
- release a print queue file semaphore
- start a printer task
6.3.2 L̲a̲n̲g̲u̲a̲g̲e̲ ̲P̲r̲o̲c̲e̲s̲s̲o̲r̲s̲
The CR80D language processors include the following:
- PASCAL is a high level block-orientated language
that offers structured and complex data and enforces
well structured programs. The CR80D implementation
is based on standard Pascal as defined by Kathleen
Jensen & Niklaus Wirth, with only minor deviations.
The CR80D implementation provides for bit mask
operations in addition to standard PASCAL data
structures. Furthermore, the CR80D implementation
provides the following powerful additions:
- Compile time option enables merging assembly
object directly into the Pascal module.
- Overlay technique is supported.
- Built-in Trace of program execution may optionally
be switched in/out for debugging purposes.
- Sequential and random file access is available
from run time library.
- The CR80D COBOL compiler is an efficient industry-compatible
two-pass compiler, fulfilling American National
Standard X3.23-1974 level 1 as well as most of
the level 2 features.
- SWELL 80 is a S̲oftW̲are E̲ngineering L̲ow level L̲anguage
for the CR80D minicomputer. SWELL offers most
of the data and program structures of PASCAL, and,
by enabling register control, is without the efficiency
penalties experienced in true high-level languages.
The main purpose of SWELL is to combine efficient
program execution with efficient program development
and maintenance.
- The assembler is a machine-orientated language
for the CR80D. The language has a direct correspondence
between instructions read and code generated.
- ADA compiler. A project has been launched for
implementation of the new DOD standard programming
language ADA on the CR80D machine. The project
is planned for completion in 1983 and includes
development of an ADA compiler hosted on and targeted
for the CR80D as well as of an ADA programming
support environment. The programming support environment
is based on the Stoneman report.
6.3.3 S̲y̲s̲t̲e̲m̲ ̲G̲e̲n̲e̲r̲a̲t̲i̲o̲n̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
The utility SYSGEN-EDIT generates object files - based
upon a set of directives, a system source, and command
files - for subsequent compiling and linking. A BINDER
then binds the system object together with the application
object based upon a command file from SYSGEN-EDIT.
All the external references of the object modules
are resolved in the Binder output, which is a load
module ready for execution. The BINDER produces a
listing giving memory layout, module size, etc.
6.3.4 D̲e̲b̲u̲g̲g̲i̲n̲g̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
The software debugging facilities include:
- Test Output Facility
- On-Line Interactive Debugger
6.3.5 U̲t̲i̲l̲i̲t̲i̲e̲s̲
The CR80 utility software package will include:
- Editor
- File Copy and Compare
- File Merge
- Interactive Proper Patch Facility
- File Maintenance Program
6.3.6 D̲i̲a̲g̲n̲o̲s̲t̲i̲c̲ ̲P̲r̲o̲g̲r̲a̲m̲s̲
The Maintenance and Diagnostic (M&D) package is a collection
of standard test programs which is used to verify proper
operation of the CR80D system and to detect and isolate
faults to replaceable modules.
6.3.6.1 O̲f̲f̲-̲l̲i̲n̲e̲ ̲D̲i̲a̲g̲n̲o̲s̲t̲i̲c̲ ̲P̲r̲o̲g̲r̲a̲m̲s̲
The off-line M&D software package contains the following
programs:
- CPU Test Program
- CPU CACHE Test Program
- Memory Map Test Program
- RAM Test Program
- PROM Test Program
- Supra Bus I/F Test Program
- CIA Test Program
- LTU Test Program
- Disk System Test Program
- Magtape System Test Program
- Floppy Disk Test Program
- TDX-HOST I/F Test Program
- Card Reader and Line Printer Test Program
6.3.6.2 O̲n̲-̲L̲i̲n̲e̲ ̲D̲i̲a̲g̲n̲o̲s̲t̲i̲c̲ ̲P̲r̲o̲g̲r̲a̲m̲s̲
On-line Diagnostic programs will execute periodically
as part of the exchange surveillance system. On-line
diagnostics consists of a mixture of hardware module
built-in test and reporting, and diagnostic software
routines. The following on-line diagnostic capability
exists:
- CPU-CACHE diagnostic
- RAM test
- PROM test
- MAP/MIA test
- STI test
- Disk Controller/DCA test
- Tape Controller/TCA test
- LTU/LIA test
On-line diagnostics will report errors to higher level
processing to take recovery/switchover decision in
the case of failures.
6.4 R̲e̲d̲u̲n̲d̲a̲n̲t̲ ̲O̲p̲e̲r̲a̲t̲i̲o̲n̲
o To provide a high level of service to users, redundant
equipment is used at vital points in the AIR CANADA
NETWORK.
This section addresses:
- hardware redundancy
- switchover
- check-pointing
- recovery/restart
which are covered in following subsections.
6.4.1 H̲a̲r̲d̲w̲a̲r̲e̲ ̲r̲e̲d̲u̲n̲d̲a̲n̲c̲y̲
The AIR CANADA NETWORK site components (Node, EMH,
GATEWAY, NCC, FEP) have complete internal redundant
hardware.
The NMH contains non-redundant equipment.
The following subsections describe specific hardware
redundancy at the various AIR CANADA NETWORK components.
Generally, supra buses are dualized. Trunks to nodes and between nodes are multiple, allowing redundancy to be handled software-wise.
6.4.1.1 N̲o̲d̲e̲ ̲R̲e̲d̲u̲n̲d̲a̲n̲c̲y̲
In each node one standby PU exists. It can substitute for any of the up to 3 remaining PUs.
The node disks are handled as mirrored, i.e. updated
concurrently.
Also, one backup LTU exists. It can substitute for any of the up to 11 remaining LTUs.
6.4.1.2 E̲M̲H̲ ̲R̲e̲d̲u̲n̲d̲a̲n̲c̲y̲ ̲
The EMH contains an active and a standby PU.
The EMH disks are handled as mirrored, i.e. they are
updated simultaneously
For LTUs one spare LTU exists for the remaining 3 LTUs.
6.4.1.3 G̲a̲t̲e̲w̲a̲y̲ ̲r̲e̲d̲u̲n̲d̲a̲n̲c̲y̲
The Gateway contains an active and a standby PU.
A spare LTU exists for one of the remaining 6 LTUs.
6.4.1.4 N̲M̲H̲ ̲r̲e̲d̲u̲n̲d̲a̲n̲c̲y̲
As the NMH role in the AIR CANADA NETWORK is non-vital, it contains no redundant elements.
6.4.1.5 N̲C̲C̲ ̲r̲e̲d̲u̲n̲d̲a̲n̲c̲y̲
The NCC is geographically backed up. One NCC contains
no redundant elements.
6.4.1.6 F̲E̲P̲ ̲r̲e̲d̲u̲n̲d̲a̲n̲c̲y̲
One standby PU exists. It can substitute for any of the remaining N FEPs.
The FEP disks are mirrored.
6.4.2 S̲w̲i̲t̲c̲h̲o̲v̲e̲r̲
The switchover from an active component to a standby component is executed via the watchdog. The watchdog is a stand-alone computer which monitors and controls, via a CCB (configuration control bus), all crates in an AIR CANADA NETWORK element (Node, FEP, EMH, NCC, NMH or GATEWAY). In order to avoid authority conflicts between PUs the actual switchover decision will be made by the watchdog.
6.4.2.1 P̲U̲ ̲S̲w̲i̲t̲c̲h̲o̲v̲e̲r̲
A PU switchover implies that:
- the current active PU is electrically disconnected
from its peripherals
- the standby PU is activated and via its separate
databus it can access the peripherals of the former
active PU.
The decision for a PU switchover may be based on:
- the watchdog having detected a PU error via the CCB, or the non-arrival of a "keep alive" message which the PUs periodically (on a one-second basis) send to the watchdog (a sketch of this supervision follows this list).
- the PU having detected an internal irrecoverable hardware or software error.
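The keep-alive supervision mentioned in the first item can be illustrated as follows; the number of PUs, the tolerance and the function names are assumptions made for the sketch.

    /* Sketch of the watchdog's keep-alive supervision: each PU is expected to
     * report every second; if a PU stays silent beyond a small tolerance the
     * watchdog orders a switchover to the standby PU.  Values are illustrative. */
    #include <stdio.h>

    #define PU_COUNT        4
    #define TOLERANCE_SECS  3

    static int last_keep_alive[PU_COUNT];   /* time of last "keep alive" per PU */

    static void watchdog_scan(int now)
    {
        for (int pu = 0; pu < PU_COUNT; pu++)
            if (now - last_keep_alive[pu] > TOLERANCE_SECS)
                printf("watchdog: PU %d silent, initiating switchover\n", pu);
    }

    int main(void)
    {
        for (int pu = 0; pu < PU_COUNT; pu++)
            last_keep_alive[pu] = 12;       /* all PUs reported at t = 12 */
        last_keep_alive[2] = 5;             /* PU 2 stopped reporting     */
        watchdog_scan(14);                  /* only PU 2 exceeds the limit */
        return 0;
    }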
6.4.2.2 L̲T̲U̲ ̲S̲w̲i̲t̲c̲h̲o̲v̲e̲r̲
An LTU switchover implies that the lines (e.g. trunks)
to the current active LTU are switched electrically
to the spare LTU.
LTU software is loaded during LTU initialization, so it is easy to use one spare LTU for different LTU types.
6.4.2.3 D̲i̲s̲k̲ ̲S̲w̲i̲t̲c̲h̲o̲v̲e̲r̲
Mirrored disks have separate disk controllers, so a switchover is performed in software only.
6.4.2.4 S̲u̲p̲r̲a̲ ̲B̲u̲s̲ ̲S̲w̲i̲t̲c̲h̲o̲v̲e̲r̲
Supra bus switchover is performed in software.
6.4.3 C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲i̲n̲g̲
A checkpoint records the state of an activity e.g.
a session or a message.
At specific events, e.g. sign-on, the active PU transmits checkpoints to the standby PU memory via the supra bus, or to a disk, in order to provide an acceptable level of recovery at the time of restart.
6.4.4 R̲e̲c̲o̲v̲e̲r̲y̲/̲r̲e̲s̲t̲a̲r̲t̲ ̲
Restart refers to the actions taken to bring up a former active or a standby PU as active, i.e. to reestablish the dynamic behaviour of the system.
Recovery refers to the reestablishment of continuity in memory and backing storage contents during a restart.
6.4.4.1 R̲e̲c̲o̲v̲e̲r̲y̲ ̲l̲e̲v̲e̲l̲
The contents of checkpoints to disk or to a standby
PU defines the recovery level.
The checkpointing to disk is used to handle the total
system failure case.
The checkpointing to the standby PU is used to:
- provide a fast switchover (seconds)
- provide a fine level of recovery due to the high
supra bus throughput.
A PU switchover and subsequent restart takes less than
1 minute.
The contents of checkpoints will enable recovery of the following (a sketch of a possible checkpoint record follows this list):
- sessions i.e. the status of sign in/out
- terminal and printer status (e.g. paper low)
- messages e.g. a line from a terminal
- network control commands e.g.
- alarms
- statistics
- configuration updates.
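Based on the recovery items listed above, the sketch below shows one possible layout of a checkpoint record; the field set and types are illustrative assumptions, not the actual checkpoint format.

    /* Illustrative layout of a checkpoint record covering the recovery items
     * listed above (session state, terminal/printer status, message progress).
     * The actual checkpoint format is not defined by this sketch. */
    #include <stdio.h>

    typedef enum { SIGNED_OUT, SIGNED_IN } session_state;

    typedef struct {
        unsigned int  session_id;
        session_state state;            /* status of sign in/out                  */
        int           printer_paper_low;
        unsigned int  last_message_id;  /* e.g. last line received from terminal  */
    } checkpoint_record;

    /* Transmit the checkpoint to the standby PU (via the supra bus) or to disk. */
    static void send_checkpoint(const checkpoint_record *cp)
    {
        printf("checkpoint: session %u, state %d, msg %u\n",
               cp->session_id, (int)cp->state, cp->last_message_id);
    }

    int main(void)
    {
        checkpoint_record cp = { 42, SIGNED_IN, 0, 1071 };
        send_checkpoint(&cp);           /* e.g. at sign-on or per received line */
        return 0;
    }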
6.5 T̲r̲a̲n̲s̲m̲i̲s̲s̲i̲o̲n̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
The Transmission Software (or the Nodal Sub System:
NSS) is the software located in each node of the Packet
Switching Network.
The purpose of the Transmission Software is the error
free transmission of message traffic from end to end.
The messages are packetized and routed via virtual
circuits throughout the network. Routing is adaptive
because nodes and/or trunks may fail.
Several kinds of traffic are considered:
- high priority, small delay, small messages, error
free transmission.
- high priority, constant delay, small to medium
messages.
- low priority, larger delay, large msgs, error free
transmission.
The network, being a meshed network, may be expanded by adding more trunks and nodes. Also, the transmission capacity may be expanded by increasing the number of virtual calls.
The Packet Switching Network (fig. III 6.5-1) initially consists of the Toronto and Montreal nodes and the remote Host, connected by lines and trunks.
The addressing scheme of the network is based on the
X.121 recommendation. By using a superior region in
the numbering plan the network may be expanded in practice
without limits.
Traffic control will be maintained in the network by
giving data, which are on their way into the network,
a lower priority than relay data already inside the
network. Relay data, however, are given a lower priority
than data which are on their way out of the network.
In this way a congestion situation will be delayed
or even avoided.
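The congestion rule described above, i.e. exit traffic before relay traffic before entering traffic, can be expressed as a tiny priority function; the numeric priority values are invented for the example.

    /* Sketch of the congestion-control rule: data leaving the network is
     * served first, relay data next, and data entering the network last.
     * The numeric priorities are invented for the illustration. */
    #include <stdio.h>

    typedef enum { ENTERING, RELAY, EXITING } traffic_kind;

    static int queue_priority(traffic_kind kind)
    {
        switch (kind) {
        case EXITING:  return 3;   /* highest: on its way out of the network */
        case RELAY:    return 2;   /* already inside the network             */
        case ENTERING: return 1;   /* lowest: still trying to get in         */
        }
        return 0;
    }

    int main(void)
    {
        printf("exiting=%d relay=%d entering=%d\n",
               queue_priority(EXITING), queue_priority(RELAY), queue_priority(ENTERING));
        return 0;
    }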
Fig. III 6.5-1: The Initial Packet Switching Network of ACDN.
6.5.1 C̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲
o The external interface of the Transmission software
is the interface between the NSS and other subsystems
of the communication computers.
These are interfaces to:
- Terminal Access Subsystem
(old as well as new concentrators)
- NCC
- Gateway
- NMH
- EMH
- Public Network Subsystem
- Host Access Subsystem
- Store and Forward Subsystem
- CBX subsystem
The interfaces are shown for each individual NSS in
fig. III 6.5-2
The messages are interchanged between the NSS and a
subsystem via the common supra bus and an X.25 level
3 like protocol or a message level protocol.
Fig. III 6.5-2: External Interfaces of the Node Subsystem.
6.5.2 N̲e̲t̲w̲o̲r̲k̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲
o The internal interface of the Transmission software
is the interface between the individual NSSs.
This interface is sketched in fig. III 6.5-1 and
6.5-3.
6.5.2.1 P̲r̲o̲t̲o̲c̲o̲l̲ ̲L̲e̲v̲e̲l̲s̲
Level 1 is the physical layer, and will not be further
described here.
Level 2 will guarantee error free delivery of frames. It will also have the ability to retransmit selected frames, so that nodes may be connected via satellite links. This layer, however, will be implemented in firmware. Therefore, it will not be further described here.
Level 3 performs the routing and switching of packets
from node to node.
Level 4 (The Transport Level) performs the end-to-end
control of messages. It also creates and dismantles
virtual calls.
The communication as well as the network interfaces
are shown in fig. III 6.5-4.
Fig. III 6.5-3: The Network Interface
Fig. III 6.5-4: The Nodal Sub System.
6.5.3 T̲h̲e̲ ̲M̲o̲d̲u̲l̲e̲s̲ ̲o̲f̲ ̲t̲h̲e̲ ̲N̲S̲S̲
o The NSS will be structured in modules each consisting
of one or more parallel tasks (co-routines).
6.5.3.1 T̲h̲e̲ ̲T̲r̲a̲n̲s̲p̲o̲r̲t̲ ̲S̲t̲a̲t̲i̲o̲n̲ ̲
The main task of the Transport Station (TS) is to perform
end-to-end control on message level. The message text
may be encrypted.
The TS of the originating node must therefore maintain
a disc copy of the packets of each message which should
be acknowledged. If such a message is not acknowledged
from the destination node, it must be retransmitted.
The TS of the destination must perform a possible sorting of the packets of a message, ensuring the proper sequence of their delivery, and it must return an ACK or NACK depending on whether all the packets were correctly received or not.
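A minimal sketch of this duty of the destination Transport Station follows: the packets of one message are sorted into sequence and an ACK is returned only if all of them arrived, otherwise a NACK. The packet count and arrival order are invented for the example.

    /* Sketch of the destination Transport Station: packets of one message may
     * arrive out of order; they are placed into sequence and an ACK is
     * returned only if all of them arrived correctly, otherwise a NACK. */
    #include <stdio.h>

    #define PACKETS 4

    int main(void)
    {
        int received[PACKETS] = { 0 };           /* one flag per expected packet   */
        int arrival_order[]   = { 2, 0, 3, 1 };  /* packets arrive out of order    */

        for (int i = 0; i < PACKETS; i++)
            received[arrival_order[i]] = 1;      /* sort into the proper sequence  */

        int complete = 1;
        for (int p = 0; p < PACKETS; p++)
            if (!received[p])
                complete = 0;

        puts(complete ? "return ACK to originating node"
                      : "return NACK, originator must retransmit");
        return 0;
    }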
Another task of the TS is to create (and dismantle)
virtual circuits through the network (fig. III 6.5-5).
The creation of virtual circuits is based on Routing
Tables within each node. Routing is adaptive. Several
kinds of virtual circuits may exist, each appropriate
for some kind of traffic; the circuit may be characterized
by:
- priority
- delay (constant or not)
- end-to-end control
Fig. III 6.5-5:
Virtual Circuits through the Packet Switched Network.
As an example printer traffic (and network software)
will be characterized by: low, medium or high priority,
non-constant delay, and end-to-end control. Type A
traffic is characterized by high priority, and no end-to-end
control. Voice will be of high priority, constant
delay and no end-to-end control etc.
Also resetting and restarting virtual circuits will
be done by this module.
Finally it must be mentioned that errors (trunk failure,
node failure, congestion) may be reported from the
underlying module.
6.5.3.2 T̲h̲e̲ ̲P̲a̲c̲k̲e̲t̲ ̲H̲a̲n̲d̲l̲e̲r̲
The main task of the Packet Handler (PH) is to switch
data packets from node to node via virtual circuits.
The PH must maintain flow control ensuring that packets
are not transmitted faster than they may be received
and buffered. The flow control will also ensure that
some high priority packets will be transmitted even
during congestion conditions.
6.5.3.3 T̲h̲e̲ ̲S̲u̲p̲e̲r̲v̲i̲s̲o̲r̲y̲ ̲M̲o̲d̲u̲l̲e̲
This module will monitor the state of the node and
report statistics (for the Costing-Billing System)
and errors to the NCC. It will also receive controlling
information from the NCC for example Routing Tables.
6.5.3.4 T̲h̲e̲ ̲R̲e̲c̲o̲v̲e̲r̲y̲ ̲M̲o̲d̲u̲l̲e̲ ̲
This module will perform all recovery necessary for
the NSS to be restarted after a node-failure. The
recovery will be done from appropriate checkpoint information.