UKAIR CCIS ADP EQUIPMENT
DOC. NO. UKAIR/063/PRP/001 ISSUE 1
PART II
TECHNICAL PROPOSAL
SUBMITTED TO: COMPUTER SCIENCES COMPANY LTD.
UK
IN RESPONSE TO: REQUEST FOR PROPOSAL/9 January 1984
PREPARED BY: CHRISTIAN ROVSING A/S
SYSTEMS GROUP
LAUTRUPVANG 2
2750 BALLERUP
DENMARK
PRINCIPAL CONTACTS: Gert Jensen, Advanced Systems Operations Director
Telex Denmark 35111 cr dk
Telephone: 02 65 11 44
© Christian Rovsing A/S - 1983
This document contains information proprietary to Christian
Rovsing A/S. The information, whether in the form of
text, schematics, tables, drawings or illustrations,
must not be duplicated or used for purposes other than
evaluation, or disclosed outside the recipient company
or organisation without the prior, written permission
of Christian Rovsing A/S.
This restriction does not limit the recipient's right
to use information contained in the document if such
information is received from another source without
restriction, provided such source is not in breach
of an obligation of confidentiality towards Christian
Rovsing A/S.
T̲A̲B̲L̲E̲ ̲O̲F̲ ̲C̲O̲N̲T̲E̲N̲T̲S̲

1 INTRODUCTION
1.1 SYSTEM OVERVIEW
1.1.1 Hardware Overview
1.1.2 Software Overview
1.2 SECURITY ARCHITECTURE
1.2.1 Basic Objectives
1.2.2 Security Design Characteristics

2 UKAIR CCIS HARDWARE
2.1 HARDWARE ARCHITECTURE
2.1.1 System Structure
2.1.2 Physical Layout
2.2 HARDWARE CHARACTERISTICS
2.2.1 Processor Section
2.2.1.1 Central Processor
2.2.1.2 Bus Interface
2.2.1.3 Majority Voter
2.2.1.4 Status and Control Panel
2.2.2 Input/Output Section
2.2.2.1 Bus Switch
2.2.2.2 Communication Line Interface
2.2.2.3 Disk & Tape Controller
2.2.2.4 S-NET Interface
2.2.2.5 Local Area Networks
2.2.2.6 Other Interfaces
2.2.3 Controller Relational Database (CRDB)
2.3 PERIPHERAL CHARACTERISTICS
2.3.1 Disc System
2.3.2 Tape System
2.3.3 Operator Consoles
2.3.4 Line Printers
2.4 SYSTEM HARDWARE CONFIGURATION
2.4.1 PSWHQ Configuration
2.4.2 AWHQ Configuration
2.5 DEVELOPMENT SYSTEM CONFIGURATION

3 UKAIR SYSTEM SOFTWARE
3.1 INTRODUCTION
3.2 OS CHARACTERISTICS
3.2.1 Basic Concepts
3.2.2 Object Management
3.2.3 Memory Management
3.2.4 Security
3.2.5 Task Management
3.2.6 Device Handling
3.2.7 Exception Handling
3.2.8 File Management
3.2.9 Software Configuration
3.3 DATA MANAGEMENT CHARACTERISTICS
3.3.1 Introduction
3.3.2 Physical Data Structure
3.3.3 Logical Data Structure
3.3.4 Access & Management of Data
3.3.5 Database Integrity & Consistency
3.3.6 Database Utilities
3.4 SYSTEM UTILITY CHARACTERISTICS
3.4.1 Transaction Processing
3.4.1.1 Architectural Overview
3.4.1.2 Transaction Processing Subsystem
3.4.2 Software Development Operating System - UNIX
3.4.3 Compilers
3.4.4 Miscellaneous Commands
3.5 SYSTEM TEST PROGRAM
3.6 SUMMARY

4 PERFORMANCE
4.1 PERFORMANCE CHARACTERISTICS
4.2 PEAK HOUR WORK LOAD CHARACTERISTICS
4.2.1 Transaction Work Load
4.2.2 Communication Work Load
4.3 SYSTEM CONFIGURATION
4.4 DATABASE DISTRIBUTION
4.5 FUNCTIONAL DISTRIBUTION
4.6 WORK LOAD DISTRIBUTION
4.6.1 Transactions, "reads" and "writes"
4.7 EQUIPMENT UTILIZATION
4.7.1 Transaction Processing Subsystem
4.7.1.1 CPU Utilization
4.7.1.2 Memory Utilization
4.7.1.3 Device Utilization
4.7.2 Communication Subsystem
4.7.2.1 CPU Utilization
4.7.2.2 Memory Utilization
4.7.2.3 Device Utilization
4.8 RESPONSE TIME DERIVATION

5 SYSTEM AVAILABILITY
5.1 GENERAL CONSIDERATIONS
5.2 RECOVERY PROCEDURES
5.3 FALLBACK PROCEDURES
5.4 RECOVERY TIMES
5.5 MEAN-TIME-BETWEEN-FAILURE (MTBF)
5.6 MEAN-TIME-TO-REPAIR (MTTR)
5.7 OVERALL SYSTEM AVAILABILITY
5.8 AVAILABILITY OBJECTIVES

6 INSTALLATION
6.1 GENERAL
6.2 PLANNING AND MANAGEMENT
6.3 INSTALLATION DOCUMENTATION
6.4 SITE INSTALLATION
6.5 TRANSPORTATION CHARGES

7 MAINTENANCE AND SPARES
7.1 MAINTENANCE
7.2 SPARES
7.2.1 Optimum Spares Support Strategy
7.2.2 NATO Codification
7.2.2.1 Supply of Data to NCB
7.2.2.2 Additional Codification
7.2.2.3 Spare Parts Design Change Notices (SPDCN)
7.3 TOOLS AND TEST EQUIPMENT

8 TRAINING
8.1 TRAINING
8.1.1 Course Participants
8.1.2 Initial Test and Course Examination
8.1.3 Language
8.1.4 Lesson Duration
8.1.5 Final Training Program
8.1.6 Number of Participants
8.1.7 Course Duration
8.1.7.1 Site Level Maintenance Course
8.1.7.2 Depot Level Maintenance Course
8.1.7.3 System Operation Course
8.1.8 Course Profile
8.1.9 Location
8.1.10 Course Start
8.2 DOCUMENTATION
8.2.1 Language
8.2.2 Reproducible Documents
8.2.3 Documentation Standard
8.2.4 Review
8.2.5 Documentation Changes
8.2.6 Finalized Documentation

Appendix C
1̲ ̲ ̲I̲N̲T̲R̲O̲D̲U̲C̲T̲I̲O̲N̲
1.1 S̲Y̲S̲T̲E̲M̲ ̲O̲V̲E̲R̲V̲I̲E̲W̲
The CR90 system has been designed to meet the following
basic objectives:
a) F̲l̲e̲x̲i̲b̲i̲l̲i̲t̲y̲
- Wide Performance Range.
- Support for standard as well as customized
modules at both hardware and software level.
- Dynamic reconfiguration at both hardware and
software level.
- Dynamic extension of hardware and software
configuration without new system generation
and system restart.
b) R̲e̲l̲i̲a̲b̲i̲l̲i̲t̲y̲ ̲a̲n̲d̲ ̲I̲n̲t̲e̲g̲r̲i̲t̲y̲
- Automatic hardware error detection in all critical
components.
- Highly modularized software with strong protection
between modules in order to reduce error propagation.
c) A̲v̲a̲i̲l̲a̲b̲i̲l̲i̲t̲y̲
- Built-in fault tolerance by triplicated logic in critical components.
- Multiple paths to I/O equipment.
- Mirrored disks.
d) S̲e̲c̲u̲r̲i̲t̲y̲
Hardware and software designed to meet DOD evaluation
class B3.
1.1.1 H̲a̲r̲d̲w̲a̲r̲e̲ ̲O̲v̲e̲r̲v̲i̲e̲w̲
The hardware architecture is based on 3 types of elements:
a) Processing Element
- Motorola MC 68020 CPU with 2 MIPS instruction
rate.
- 2M Bytes onboard RAM for programs and "local" data. Memory access time supports the full CPU instruction rate.
- 256K Bytes onboard ROM.
- Memory Protection Unit for a segmented memory
protection system with variable length segments.
- Hardwired window protection of access to memory
of other processing elements.
b) D̲a̲t̲a̲ ̲H̲i̲g̲h̲w̲a̲y̲
For interconnection of processing elements
- Direct memory access bus without communication
overhead.
- 16M Bytes/sec transfer rate.
- Total memory address space of 4000 MB.
c) I̲/̲O̲ ̲M̲o̲d̲u̲l̲e̲s̲
- Each I/O Module accessible by one or two processing
elements.
- Intelligent Communication Module.
- Intelligent Data Base Module.
- Support for standard VME modules.
Fault tolerance is built into hardware as follows:
- Processing elements can be triplicated with majority
voting.
- Data Highway and I/O buses are duplicated with
automatic selection if one bus fails.
- I/O Modules are self-checking. Multiple paths and redundant modules provide high availability.
1.1.2 S̲o̲f̲t̲w̲a̲r̲e̲ ̲O̲v̲e̲r̲v̲i̲e̲w̲
The operating system has been designed with the following
objectives:
- Security; refer to section 1.2.
- Flexibility to support several types of high level
operating system functions and many types of I/O
equipment and high level I/O interfaces such as
networks and data bases.
- Fault Tolerance.
The operating system is based on an object oriented
kernel, ORION. The kernel defines the security and
protection environment for tasks and for object managers
such as device handlers. The standard environment includes
creation and removal of objects, including the necessary
clean up functions for composite objects.
Object managers (e.g. device handlers and file systems)
are completely protected from each other. It is thus
possible to modify an object handler or include a new
one without compromising the rest of the system.
Based on the kernel are several independent high level
operating system functions:
- UNIX development and utility system.
- Transaction Management System.
- Data Base Management System.
1.2 S̲E̲C̲U̲R̲I̲T̲Y̲ ̲A̲R̲C̲H̲I̲T̲E̲C̲T̲U̲R̲E̲
1.2.1 B̲a̲s̲i̲c̲ ̲O̲b̲j̲e̲c̲t̲i̲v̲e̲s̲
The Security Architecture is designed to meet the criteria
for DOD evaluation class B3.
The major difference between classes B2 and B3 lies in the much stronger requirements for internal structuring of the TCB and the exclusion from the TCB of modules which are not security related. This is highlighted by the following two quotations from the DOD requirements:
- Definition of Trusted Computing Base (TCB):
The totality of protection mechanisms within a
computer system - including hardware, firmware
and software - the combination of which is responsible
for enforcing a security policy. It creates a basic
protection environment and provides additional
user services required for a trusted computer system.
The ability of a trusted computing base to correctly
enforce a security policy depends solely on the
mechanisms within the TCB and on the correct input
by system administrative personnel of parameters
(e.g. a user's clearance) related to the security
policy.
- System Architecture Requirements:
The TCB shall maintain a domain for its own execution
that protects it from external interference or
tampering (e.g. by modification of its code or
data structures). The TCB shall maintain process
isolation through the provision of distinct address
spaces under its control. The TCB shall be internally
structured into well-defined largely independent
modules. It shall make effective use of available
hardware to separate those elements that are protection-critical
from those which are not. The TCB modules shall
be designed such that the principle of least privilege
is enforced. Features in hardware, such as segmentation,
shall be used to support logically distinct storage
objects with separate attributes (namely: readable,
writeable). The user interface to the TCB shall
be completely defined and all elements of the TCB
identified. The TCB shall be designed and structured
to use a complete, conceptually simple protection
mechanism with precisely defined semantics. This
mechanism shall play a central role in enforcing
the internal structuring of the TCB and the system.
The TCB shall incorporate significant use of layering,
abstraction and data hiding. Significant system
engineering shall be directed toward minimizing
the complexity of the TCB and excluding from the
TCB modules that are not protection-critical.
It will be clear from these quotations that any conventional
operating system will fail to meet the B3 criteria,
even when it has been augmented with a layer of security
software.
The fundamental objectives of the design are:
- Meet all B3 criteria with particular emphasis upon
the TCB structuring requirements. This shall be
done without significant performance degradation.
- Avoid unnecessary overclassification of data.
In many environments, such as message processing,
it is very typical that each user during a terminal session will work with data of different classifications. The classification of data will typically vary from transaction to transaction, so the TCB must have the capability to change the user's access privileges on a per-transaction basis. Otherwise the result will be overclassification of much data.
- Reliability and Data Integrity.
1.2.2 S̲e̲c̲u̲r̲i̲t̲y̲ ̲D̲e̲s̲i̲g̲n̲ ̲C̲h̲a̲r̲a̲c̲t̲e̲r̲i̲s̲t̲i̲c̲s̲
The base for the security design is an object based
kernel, ORION. It has the following security characteristics:
- Completely object based
- Complete separation of object managers for different
objects.
- Least privilege principle enforced in the form
of small protection domains. The CPU privileged
system state is used exclusively for domain switching.
No data manipulations are performed in system state.
- Unified and simple protection based upon a segmented
memory. No artificial resources such as task control
blocks etc.
- Large physical memory. Avoids complex resource
optimization strategies to obscure the TCB. A virtual
memory management may, however, be built above
the segmented architecture.
- Security label attached to every object.
- Unified creation, removal and checkpointing of
tasks and other objects to facilitate trusted recovery.
2̲ ̲ ̲U̲K̲A̲I̲R̲ ̲C̲C̲I̲S̲ ̲H̲A̲R̲D̲W̲A̲R̲E̲
The hardware configuration proposed for UKAIR CCIS
is based upon the CR90 computer system. CR90 is constructed
by means of the newest technology which, together with
the data bus structure and redundancy principle implemented,
ensures that the proposed configuration can fulfil the present as well as the future requirements for availability, expandability, performance and security.
Prior to the detailed presentation of the CR90 system
architecture some of the essential facilities in the
system are presented.
o A compact system is achieved by use of VLSI technology and a simple and efficient printed board packaging technique throughout the entire system.
o To ensure a very high system availability the following
three basic redundancy principles are implemented
and completely supported in hardware.
I Central processors are triplicated, meaning that a failing processor can be detected and isolated immediately without disturbing the system operation and without software support.
II Data buses are duplicated, which together with error detection bits ensures that a failing data bus can be detected and isolated immediately without impact on the system operation.
III A very flexible input/output concept allows redundancy to be implemented to the level required by the specific application.
o The central processor is a true 32-bit processor, meaning that the directly addressable memory space is 4 giga bytes, which together with an instruction rate of better than 2 mega instructions per second for a single processor gives a very efficient system.
o Multiple central processors can be interconnected
by means of the Data Highway. Up to 16 processors
on a single Data Highway.
o The Data Highway is a very efficient interprocessor communication path due to the high transfer rate (greater than 4 mega 32-bit words per second) and the unique remote access principle implemented, which is invisible to software, i.e. the complete system is accessed as a memory, even when accessing via the Data Highway.
o Security facilities supported in hardware to gain
a high performance together with high security.
Two execution levels are supported in the CPU chip hardware, one of which is privileged. Executing tasks only have access to memory in accordance with an associated set of memory segments and corresponding access rights. Each segment is defined by means of a lower and an upper limit. Control of the segments is performed by the operating system kernel supported by hardware. Accesses via the Data Highway are subject to a similar protection mechanism which can optionally be implemented in hardware, meaning that only a limited memory area is accessible to other central processors.
2.1 H̲A̲R̲D̲W̲A̲R̲E̲ ̲A̲R̲C̲H̲I̲T̲E̲C̲T̲U̲R̲E̲
2.1.1 S̲y̲s̲t̲e̲m̲ ̲S̲t̲r̲u̲c̲t̲u̲r̲e̲
The CR90 computer system structure is illustrated in
figure 2.1.1-1 where the major system elements are
identifiable as dualized Data Highways, Processor Sections
and Input/Output Sections.
o Data Highway
Communication between the Processor Sections (up to 16 sections on a single highway) is performed
via the dualized Data Highway. The highway is a
ring structure where the two redundant paths carry
the same information. By implementing this concept
the following essential advantages are gained:
Error detection at two levels, by transmitting error checking bits on the buses and simultaneously comparing the content of the two buses, i.e. no single-point error will go undetected. Error correction is simply performed by selecting the correctly working Data Highway. Due to the ring structure an error is immediately localized to the failing circuit or cable.
FIGURE 2.1.1-1
SYSTEM STRUCTURE
The Data Highway is transparent to software, i.e.
transfers on the highway are performed as normal
memory accesses without protocol software overhead.
Maximum transfer rate on the Data Highway is greater
than 4 mega-32 bit-words per second.
o Processor Section
The Processor Section is from an operational point
of view a fail-safe central processor with a local
working storage of 2 mega byte of RAM. To ensure
continuous operation the central processor module
is triplicated while the transfer bus structure
is duplicated.
The central processor views the entire system as a memory, where the input/output function memory is accessible only from the associated central processor while the remaining memory is accessible from all central processors in the system. For a more detailed description of the Processor Section please refer to para. 2.2.1.
o Input/Output Section
The Input/Output Section is accessible as memory from the associated central processor only. This section is based upon a generally available bus structure (VME) which, like the other communication paths in the system, is dualized to ensure continuous fail-safe operation. By utilizing this general bus structure a great variety of input/output modules can easily be installed in the system to form a very flexible and cost-efficient external interfacing system.
For a more detailed description of the Input/Output
Section please refer to para. 2.2.2.
2.1.2 P̲h̲y̲s̲i̲c̲a̲l̲ ̲L̲a̲y̲o̲u̲t̲
The physical dimensions of the modules are based on the standard Eurocard format. However, two board sizes are used: the double Eurocard format and the triple Eurocard format. The latter is actually defined for the FUTURE-BUS modules.
Figure 2.1.2-1 shows an example of standard 19" rack
which is equipped with two different crates. The total
configuration contains 3 Processor Sections and 3 Input/Output
Sections.
FIGURE 2.1.2-1
EXAMPLE OF A RACK CONFIGURATION
2.2 H̲A̲R̲D̲W̲A̲R̲E̲ ̲C̲H̲A̲R̲A̲C̲T̲E̲R̲I̲S̲T̲I̲C̲S̲
2.2.1 P̲r̲o̲c̲e̲s̲s̲o̲r̲ ̲S̲e̲c̲t̲i̲o̲n̲
The Processor Section as shown in figure 2.2.1-1 is
designed for continuous and fail-safe operation utilizing
triple redundant central processors and dualized Bus
Interfaces for communication with the associated peripheral
and communication interfaces (VME bus) and for communication
with the other Processor Sections in the system (Data
Highway).
Prior to the detailed description of the modules the
major characteristics for the Processor Section are
mentioned below.
o Triple redundancy of the central processor ensures that a failing central processor will always be detected.
o Dualization of external communication paths (Data
Highway and VME bus) ensures that the system operates
even with an erroneous bus.
o Error detection and isolation are performed by hardware, meaning that all the processing power is used for system operation and not for online hardware test and error detection.
o Replacement of failing modules is performed during system operation, giving a very high availability.
o 32 bit central processor on a single printed circuit
board containing 32 bit CPU, 2 mega byte RAM for
working storage, 256K byte E-PROM for initialization
and boot strap routines, 16 memory access protection
register sets.
o CPU performance better than 2 mega instructions
per second.
o High speed Data Highway for communication between Processor Sections, up to 8 mega 32-bit words per second.
FIGURE 2.2.1-1
PROCESSOR SECTION
2.2.1.1 C̲e̲n̲t̲r̲a̲l̲ ̲P̲r̲o̲c̲e̲s̲s̲o̲r̲
The central processor is a single printed circuit board
containing CPU, memory and related circuit as shown
in figure 2.2.1.1-1.
Figure 2.2.1.1-1
CENTRAL PROCESSOR BLOCK DIAGRAM
The CPU is a general-purpose 32-bit microprocessor which has an efficient instruction set well suited for the application. The instruction rate of the CPU is higher than 2 mega instructions per second, and two processing levels (system and user) are available.
256 K bytes of PROM for system initialization and bootstrapping are available. The PROM is organized as 64 K x 32-bit double words.
As an option, a co-processor can be installed to support, for instance, floating-point operations.
The working storage consists of 2 mega bytes of dynamic RAM including an error detection and correction circuit. The RAM is organized as 32-bit double words and has an access time matching the CPU for optimal performance.
The access protection ensures that unauthorized and
unvalidated software can only gain access to memory
sections defined by the system software kernel. The
protection is implemented by means of sixteen register
sets each defining an upper and lower limit for a memory
section and the corresponding access rights as illustrated
in figure 2.2.1.1-2. Loading of the access protection
register is supported in hardware.
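As an illustration only, the bounds check performed by the register sets can be sketched as follows in C. The structure layout, names and flag values are assumptions made for the sketch, not the actual hardware definition:

    /* Illustrative sketch of the access protection check.      */
    /* Register layout and names are assumptions, not hardware. */

    #define NUM_SETS 16

    #define RIGHT_READ    0x1
    #define RIGHT_WRITE   0x2
    #define RIGHT_EXECUTE 0x4

    typedef struct {
        unsigned long lower;    /* first byte of memory section   */
        unsigned long upper;    /* last byte of memory section    */
        unsigned      rights;   /* access rights for this section */
    } prot_reg_set;

    /* Returns 1 if the access is allowed by one of the sixteen
     * register sets, 0 if an access violation must be raised.   */
    int access_allowed(const prot_reg_set set[NUM_SETS],
                       unsigned long addr, unsigned wanted)
    {
        int i;
        for (i = 0; i < NUM_SETS; i++)
            if (addr >= set[i].lower && addr <= set[i].upper &&
                (set[i].rights & wanted) == wanted)
                return 1;
        return 0;
    }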
FIGURE 2.2.1.1-2
ACCESS PROTECTION PRINCIPLE
The timing and control circuit generates the timing and overall control signals required for operating the board. The circuit operation is synchronized to the bus interface circuit to ensure that the three redundant CPUs are executing in parallel, so that the majority voting can easily be performed by monitoring the output from the Bus I/F circuit.
2.2.1.2 B̲u̲s̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲
Interfacing of the central processors to the dualized
communication bus structure (Data Highway = 1 & 2 and
VME = 1 & 2) is performed by means of the Bus Interface
modules so that the system is operating even with one
of the modules out of service.
Each of the Bus Interface modules interfaces two central processors. Two of the processors are connected to one Bus Interface only, while the third is connected to both bus interfaces (refer to figure 2.2.1-1). This connection scheme ensures that a failure-free central processor is always available to the Bus Interface module even if one of the central processors fails.
The functions performed by a Bus Interface module are
defined below and illustrated in block diagram figure
2.2.1.2-1.
As the total system seen from a central processor is a memory, access destinations, and thereby communication on the Data Highway and VME bus, are controlled by means of the address associated with a transfer from the central processor. Routing is simply performed by address comparator circuits as illustrated in figure 2.2.1.2-1.
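A minimal sketch in C of this comparator-based routing, assuming hypothetical window registers and destination names (the real selection is performed by hardware comparators, not software):

    /* Sketch of the Bus Interface routing decision. The address   */
    /* ranges are hypothetical; routing is by comparator circuits. */

    enum route { ROUTE_LOCAL, ROUTE_VME, ROUTE_DATA_HIGHWAY };

    typedef struct {
        unsigned long lower, upper;
        enum route    dest;
    } addr_window;

    /* Each transfer address is compared against the configured  */
    /* windows; the first match selects the destination bus.     */
    enum route route_access(const addr_window *win, int n,
                            unsigned long addr)
    {
        int i;
        for (i = 0; i < n; i++)
            if (addr >= win[i].lower && addr <= win[i].upper)
                return win[i].dest;
        return ROUTE_LOCAL;   /* default: local memory */
    }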
An optional feature which can be implemented in the Bus I/F is access protection for transfers on the Data Highway. The protection is implemented by means of 16 upper and lower limit registers, meaning that the central processor can only access limited parts of the memory in the other processors connected to the Data Highway.
The two Bus Interface modules are synchronized to each
other to ensure that the information presented for
the majority voter on the three central processor buses
can be compared.
FIGURE 2.2.1.2-1
BUS INTERFACE BLOCK DIAGRAM
2.2.1.3 M̲a̲j̲o̲r̲i̲t̲y̲ ̲V̲o̲t̲e̲r̲
The Majority Voter monitors the three central processor
buses and performs the selection of the two central
processors to be used as source for the external buses
(Data Highway and VME).
When an error is detected by the Majority Voter, the erroneous circuit will be taken out of service and an error message (exception) will be presented to the central processors and displayed on the status and control panel.
The Majority Voter contains three independent circuits to ensure that a single failure in this module cannot interfere with the system operation.
Replacement of a failing Majority Voter is performed by manually overriding the central processor to bus interface association until an operating module is inserted.
All manual actions will be presented to the processor as exceptions.
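The 2-out-of-3 selection itself can be illustrated by a bitwise majority function. This is a functional sketch only (the real voter is an independent hardware circuit), and it assumes at most one deviating input:

    /* Bitwise 2-out-of-3 majority vote over the three central  */
    /* processor buses: a single deviating CPU is outvoted and  */
    /* can be flagged for isolation. Assumes a single failure.  */

    unsigned long vote(unsigned long a, unsigned long b,
                       unsigned long c,
                       int *deviant /* out: deviating input, -1 if none */)
    {
        unsigned long m = (a & b) | (a & c) | (b & c);
        *deviant = (a != m) ? 0 : (b != m) ? 1 : (c != m) ? 2 : -1;
        return m;
    }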
The block diagram in figure 2.2.1.3-1 identifies the
functional blocks in the Majority Voter.
FIGURE 2.2.1.3-1
MAJORITY VOTER BLOCK DIAGRAM
2.2.1.4 S̲t̲a̲t̲u̲s̲ ̲a̲n̲d̲ ̲C̲o̲n̲t̲r̲o̲l̲ ̲P̲a̲n̲e̲l̲
The status and control panel contains the displays and switches required for monitoring and control of the redundant Processor Section. Status information presented on the display consists of the following for each of the Processor Section modules and buses:
Failure, Off-line and On-line.
Status information for the bus switches of the Input/Output Section will be presented on the display as well.
A switch for each of the modules and buses in the Processor Section is provided. The switch is used for setting the item on-line or off-line.
Furthermore a button for initiating a reload of a module
which has been replaced is available.
2.2.2 I̲n̲p̲u̲t̲/̲O̲u̲t̲p̲u̲t̲ ̲S̲e̲c̲t̲i̲o̲n̲
The Input/Output Section as shown in figure 2.2.2-1
is designed for continuous and fail-safe operation
utilizing dualized standard VME buses.
The two redundant VME buses are operated synchronously, meaning that a simple and efficient comparator circuit can be used for isolation of a failing bus.
One of the advantages, besides the high reliability of the Input/Output Section, is the possibility of using standard interface modules from a variety of manufacturers.
In the following a short description of each of the
interface modules applicable for UKAIR is given.
FIGURE 2.2.2-1
INPUT/OUTPUT SECTION BLOCK DIAGRAM
2.2.2.1 B̲u̲s̲ ̲S̲w̲i̲t̲c̲h̲
The Bus Switch serves the following functions (refer
to block diagram in figure 2.2.2.1-1).
VME bus error detection and correction by switching the module on the single VME bus to an error-free redundant bus. Isolation of the dualized VME bus from the standard interface modules ensures that no single point failure can disturb more than a single peripheral or set of peripherals.
FIGURE 2.2.2.1-1
BUS SWITCH BLOCK DIAGRAM
2.2.2.2 C̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲ ̲L̲i̲n̲e̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲
The Communication Line Interface module is able to interface four communication lines and has the required processing and memory capacity for executing, for instance, the X25 level 1 and level 2 protocols for four lines operating at up to 64 kbit per second.
The block diagram in figure 2.2.2.2-1 shows the functional
structure of the interface.
The CPU chip used in the module is software-compatible with the CPU used in the central processor and utilizes the same operating system, giving optimal security.
The PROM is included to initialize and bootstrap the module, meaning that the actual communication line protocol to be implemented is defined in software during system generation. Therefore the same hardware can be used for interfacing different protocols.
The module is a master on the VME bus, meaning that
data buffers can be transferred directly to/from central
processor memory.
The electrical interface circuit is implemented on
a separate printed circuit board which is used to convert
TTL-signals into the electrical interface required
by the various protocols and peripherals. This ensures
that a standard Communication Line Interface can be
used to support any electrical interface, just by adding
the proper Line Interface Circuit.
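As a hedged illustration of the point that the protocol is selected at system generation, a hypothetical per-line configuration record might look as follows; the type and field names are assumptions, not the actual software interface:

    /* Hypothetical per-line configuration for the four lines;  */
    /* the protocol to run is chosen in software at system      */
    /* generation, so names and values here are assumed.        */

    enum protocol { PROTO_X25_LAPB, PROTO_ASYNC, PROTO_BSC };

    struct line_config {
        int           line;       /* 0..3                     */
        enum protocol proto;      /* e.g. X.25 level 1 and 2  */
        long          bit_rate;   /* up to 64000 bit/s        */
    };

    struct line_config lines[4] = {
        { 0, PROTO_X25_LAPB, 64000 },
        { 1, PROTO_X25_LAPB, 64000 },
        { 2, PROTO_X25_LAPB,  9600 },
        { 3, PROTO_X25_LAPB,  9600 },
    };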
FIGURE 2.2.2.2-1
COMMUNICATION LINE INTERFACE
2.2.2.3 D̲i̲s̲k̲ ̲&̲ ̲T̲a̲p̲e̲ ̲C̲o̲n̲t̲r̲o̲l̲l̲e̲r̲
This module is a combined disk and tape controller.
It is able to interface the new Intelligent Standard
Interface (ISI), which is defined by Control Data Corp.
For instance up to 32 disk drives may be connected
to this interface.
The block diagram in figure 2.2.2.3-1 shows the functional
structure of the controller:
FIGURE 2.2.2.3-1
In order to give optimal security, the processor part
of the controller is able to utilize the same operating
system, which is used by the central processor in the
processing section.
The module is a master on the VME bus, meaning that
data buffer can be transferred directly to/from the
central processor memory.
2.2.2.4 S̲-̲N̲E̲T̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲
The S-Net Interface (SNI) is capable of interfacing
the S-NET to the VME bus.
The S-Net is a loosely coupled network defined by Christian
Rovsing A/S. The major characteristics of the S-Net
are:
o Serial transmission at 16 MHz.
o 2 M byte/sec. data transfer rate
o Variable transmission frame length
o Max. distance between any two S-Net Interfaces
is 100 meters.
o Maximum number of SNIs is 64
o No central arbiter. The access method is CSMA.
The block diagram in figure 2.2.2.4-1 shows the functional
structure of the interface.
FIGURE 2.2.2.4-1
S-NET INTERFACE
The module is a slave on the VME bus, meaning that data buffers are transferred to/from main memory by the central processor. The S-Net Interface interrupts the central processor when a single transfer on the S-Net has completed.
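A sketch in C of a send operation as seen from the central processor, under the assumption of a memory-mapped register block; the register layout, names and bit values are invented for illustration:

    #include <string.h>

    /* Hypothetical SNI register block, memory mapped on the VME bus */
    struct sni_regs {
        volatile unsigned short control;     /* start/enable bits     */
        volatile unsigned short status;      /* done/error bits       */
        volatile unsigned short length;      /* frame length in bytes */
        unsigned char           frame[2048]; /* variable length frame */
    };

    #define SNI_START 0x0001
    #define SNI_DONE  0x0001

    /* Copy a frame into the module and start transmission; the  */
    /* completion interrupt handler would test status & SNI_DONE */
    void sni_send(struct sni_regs *sni, const void *buf,
                  unsigned short len)
    {
        memcpy(sni->frame, buf, len);
        sni->length  = len;
        sni->control = SNI_START;
    }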
2.2.2.5 L̲o̲c̲a̲l̲ ̲A̲r̲e̲a̲ ̲N̲e̲t̲w̲o̲r̲k̲s̲
The workstations for the PSWHQ can be connected to
the control system in several ways. Three principal
solutions are here outlined:
(1) A star network with serial connections to the central
system.
(2) An X-Net with branching if required based on tempest
twisted pair transmission.
(3) A ring net with fibre optical transmission.
2.2.2.5.1 S̲t̲a̲r̲ ̲N̲e̲t̲w̲o̲r̲k̲
The star net with all the connections at the central system offers the greatest reliability, since the individual connections are independent of each other. The lines can be standard V24/RS232 with limited distance (15 m at 9600 baud guaranteed), high radiation, and unsuited for a TEMPEST environment; or long-line low-level transmission with a maximum length of about 1000 m; or fibre optic transmission.
The fibre optic serial line star is the priced solution
for the workstations of the PSWHQ. The total network
configuration is listed in the hardware matrix in the
Commercial Proposal. The solution requires serial line
(V24) interfaces to the central system for every single
workstation, as well as opto coupler modules.
The serial line interfaces as well as the opto coupler
modules must be installed in connection with the central
system. In addition each workstation must be equipped
with an opto coupler module for interfacing the fibre
optic cable to the workstation (V24). A standard length
for the fibre optic cable is quoted since no installation
information is available.
2.2.2.5.2 X̲-̲N̲e̲t̲
The standard CHRISTIAN ROVSING A/S X-Net is a twisted
pair network with possibilities for branching as well
as multiple X-Nets. The priced option is based on a
single line X-Net without branching, but the description
and figures show the full set of possibilities for
the X-Net (see figure 2.2.2.5.2-1). Detailed analysis
of the PSWHQ layout will determine the best solution.
The following is a short description of the X-Net and
its modules:
Figure 2.2.2.5.2-1
X-Net, Multiple branch options
X̲-̲N̲e̲t̲,̲ ̲G̲e̲n̲e̲r̲a̲l̲:
Four different units of the X-Net are placed outside
the central system:
- The XTA-module
(X-Net Terminal Adapter)
- The XAB-module
(X-Net Amplifier and Branching Unit)
- The XCP-module
(X-Net Communication Port)
- The XCT-module
(X-Net Controller)
Figure 2.2.2.5.2-2 shows a detail of the X-Net installation.
The X-Net bus cables are run in the metallic pipe and
branched off in the junction box to the XTA-box, which
includes the XTA-module.
The terminal is connected to the X-Net by a signal
cable to the XTA-box.
The XAB-module is housed in a box, which is the same
size as the XTA-box and connected to the X-Net in the
same way. However, this box has no terminal-connection.
The XCP- and the XCT-modules can be housed in the same
box, the XCP/CT-box. The XCP-module provides the interface
to the crypto-link to remote subbranches.
In the main net-end, the XCP/CT-box contains only the
XCP-module.
In the remote net-end, the XCP/CT-box contains both modules; an X-Net controller is required for the remote subbranch.
The XCP/CT-box is lower than the XTA-box.
T̲H̲E̲ ̲X̲T̲A̲ ̲B̲O̲X̲
The XTA Box, which is used when connecting a terminal to the dualized X-Net, contains two XWOs (X-Net wall outlets), one XTA, a filtered CANNON DB 25s connector and a power supply with power line filtering.
Figure 2.2.2.5.2-2
Details of a TEMPEST X-Net Installation
The XWO, XTA and the power supply are existing X-Net
system elements which, in conjunction with the box,
will be upgraded to meet TEMPEST requirements.
T̲H̲E̲ ̲X̲A̲B̲ ̲B̲O̲X̲
The XAB Box is used when
1) an X-Net branch is established
2) signals on the X-Net need to be restored (amplified)
The XAB Box contains two XWOs (X-Net wall outlets),
one XAB (X-Net amplifier and branching unit) and a
power supply with power line filtering.
The two XWO, XAB and power supply are existing X-Net
system elements which in conjunction with the box will
be upgraded to meet TEMPEST requirements.
T̲H̲E̲ ̲X̲C̲P̲/̲X̲C̲T̲ ̲B̲O̲X̲
The XCP/XCT Box is used when a local and a remote X-Net
are connected via a CRYPTO/MODEM link. The box exists
in two versions:
1) one used in the local end of the link
(version 1).
2) one used in the remote end of the link
(version 2).
The XCP/XCT Box version 1 contains 1 XWO (X-Net wall outlet), 1 MP² (Multi Purpose Multi Processor), 1 filtered CANNON DB 25s connector and a power supply with power line filtering.
The XCP/XCT Box version 2 is an XCP/XCT Box version 1 with 1 XWO and 1 XCT (X-Net controller) added.
The XWOs, MP², XCT and power supply are existing X-Net system elements which, in conjunction with the box, will be upgraded to meet TEMPEST requirements.
2.2.2.5.3 F̲i̲b̲r̲e̲ ̲o̲p̲t̲i̲c̲ ̲n̲e̲t̲
A fibre optic ring net can be developed by CHRISTIAN
ROVSING A/S with dual rings for reliable transmission.
The transmission will be one way in the ring. At each
desired location for a workstation a fibre optic connection
is established to the two ring nets.
2.2.2.6 O̲t̲h̲e̲r̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲s̲
Due to the international standard VME bus, implemented
in the Input/Output Section, a lot of different standard
modules are available from different manufactures,
meaning that any interface may be integrated in system
in order to comply with the required specification.
2.2.3 C̲o̲n̲t̲r̲o̲l̲l̲e̲r̲ ̲R̲e̲l̲a̲t̲i̲o̲n̲a̲l̲ ̲D̲a̲t̲a̲b̲a̲s̲e̲ ̲(̲C̲R̲D̲B̲)̲
This controller provides the full complement of a hardware-implemented relational database. The controller is based on the BRITTON LEE Intelligent Database Machine and has implemented the necessary functional features for the CR90. The interface to the processor section is via the VME bus; this can be dualized, and interfaces to several processor sections can be supplied.
The controller provides interfaces to the CDC SMD compatible
disc drives as well as to tape drives. The controller
also provides for mirrored discs, which is implemented
for all discs in UKAIR-CCIS.
2.3 P̲E̲R̲I̲P̲H̲E̲R̲A̲L̲ ̲C̲H̲A̲R̲A̲C̲T̲E̲R̲I̲S̲T̲I̲C̲S̲
Due to the flexible and open Input/Output Section with
standard bus structure, a wide range of commercially
available peripheral equipment will be available fulfilling
any requirements.
For the system a specific subset has been chosen, but when the technical requirements are defined in detail and the newest equipment is known, more cost-effective peripheral equipment can be substituted.
2.3.1 D̲i̲s̲c̲ ̲S̲y̲s̲t̲e̲m̲
The main storage medium for the UKAIR CCIS is the disc
drive system. The basis is the CDC discs with the new
highly reliable compact MAGNUM discs. The discs are
all mirrored and are either connected directly to the
Processor Section or to the CRDB.
The Magnum disk comes in two versions: the Fixed Small Disk (FSD), a sealed Winchester module, and the Removable Small Disk (RSD), a front-loaded data cartridge. Magnum disk capacities currently range from 80 MB to 500 MB. The interface is SMD/SDI/ISI. The Magnum series will be installed in the TEMPEST CR90 racks. The discs have very wide tolerances, low power consumption and very high MTBF values.
2.3.2 T̲a̲p̲e̲ ̲S̲y̲s̲t̲e̲m̲
Dualized tape transports are connected to the CRDB
to provide database loading and dumps. The configuration
is based on the CR8320 tape stations, but detailed
technical requirements and newer technology might indicate
better stations.
2.3.3 O̲p̲e̲r̲a̲t̲o̲r̲ ̲C̲o̲n̲s̲o̲l̲e̲s̲
The control of the system operations is done through
a set of intelligent alphanumeric and graphic VDUs.
The system is equipped with 2 CR86T intelligent TEMPEST workstations with graphic capabilities and a detachable keyboard with 111 keys, including programmable function keys.
Associated with the operator consoles must be log or
journal printers. Due to lack of specifications none
are selected.
2.3.4 L̲i̲n̲e̲ ̲P̲r̲i̲n̲t̲e̲r̲s̲
A printer meeting the requirements, including TEMPEST, is the Dataproducts B-1000 line printer, which can be delivered with serial or parallel interface, buffer, long line interface and various other options.
Attached is a datasheet for the commercial version, since no datasheet can be published on the TEMPEST version. However, all operating characteristics and functions are the same.
A minimum of two are attached to each installation.
2.4 S̲Y̲S̲T̲E̲M̲ ̲H̲A̲R̲D̲W̲A̲R̲E̲ ̲C̲O̲N̲F̲I̲G̲U̲R̲A̲T̲I̲O̲N̲
The configurations for PSWHQ and AWHQ are based on
the CR90 and the CRDB with communication interfaces,
local interfaces and peripheral equipment as described
in the previous sections. The packaging is in TEMPEST racks and the local workstations are connected via fibre optics. The detailed hardware modules and their
associated cost are listed in Part III chapter 3. The
functionality of the systems is described in Part II
chapter 4 performance, where the allocation of functions
and logical databases is done.
2.4.1 P̲S̲W̲H̲Q̲ ̲C̲o̲n̲f̲i̲g̲u̲r̲a̲t̲i̲o̲n̲
The configuration is based on the communication, the
workload and the response time requirements for the
full PSWHQ ADP Equipment complex. No distinction is
made between initial system and upgraded system; this
division must be done at System Design Specification
time.
Figure 2.4.1-1 shows the basic external structure of
the full PSWHQ ADPE without indicating numbers or capacities.
Figure 2.4.1-2 shows the basic structure of the Processor
Sections and their associated I/O sections and database
sections. Associated with the figure is the short module
description provided in table 2.4.1-3. The memory configuration
is given in table 2.4.1-4 both for PS CPU RAM and for
disc systems allocated directly to the CR90 or through
the CRDB to the individual databases.
Figures 2.4.1-5 to 2.4.1-10 show the detailed structure
of the individual processor sections with the dualization.
These figures are also valid for the AWHQ configuration.
PSWHQ ADPE External Configuration
Figure 2.4.1-1
PSWHQ ADPE System Configuration
Figure 2.4.1-2
M̲o̲d̲u̲l̲e̲ ̲D̲e̲s̲c̲r̲i̲p̲t̲i̲o̲n̲:
PS Processor Section with triplicated CPU,
dualized Data Highway I/F and 2 VME buses
to the I/O Sections. Each Processor Section
has been allocated a specific set of processing
functions.
X25 UNITER I/O Section for multiple X25 UNITER interfaces.
CAMPS I/O Section for multiple CAMPS interfaces.
ACENET I/O Section for multiple ACENET interfaces.
LAN Local Area Network Communication for connecting
the PSWHQ Workstations to the system.
Local I/O Local interface section for Operator Consoles,
Printing System, a.o.
DISK CTRL Disc and Tape controllers (see 2.2.2.3).
CRDB Controller Relational DataBase with multiple
VME accesses and a complement of mirrored
disc and tape drives.
Module Legend for Figure 2.4.1-2
Table 2.4.1-3
MEMORY IN MEGABYTES

Processor Section            CPU RAM   Disc       Disc         Tape
                                       Required   Configured
PS#1 Communication              2        300      2 x 344
PS#2 Communication              2
PS#3 Communication              2
PS#4 Message Pr.                3       1000      4 x 516
PS#5 Tote Pr.                   3        600      4 x 344      yes
PS#6 Special Application        2
PS#7 Various P.                 2        700      4 x 344
                                                  2 x  80
Total Megabytes                18       2600      5664
PSWHQ ADPE Memory Allocation
Table 2.4.1-4
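As a cross-check of the configured disc column in table 2.4.1-4, the per-section figures sum to the stated total:

    2 x 344 + 4 x 516 + 4 x 344 + 4 x 344 + 2 x 80
    = 688 + 2064 + 1376 + 1376 + 160
    = 5664 megabytes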
Figure 2.4.1-5, PS # 1, Communication Processor
UNITER and Logging of input
Figure 2.4.1-6, PS # 2, Communication Processor
CAMPS, ACENET and LAN Interfaces
Figure 2.4.1-7, PS # 3, Communication Processor
UNITER and Local Input/Output
Figure 2.4.1-8, PS # 4, Message Processor
Figure 2.4.1-9, PS # 5 and PS # 6,
Tote Processor and Special Application Processor
Figure 2.4.1-10, PS # 7
Processor for various functions
2.4.2 A̲W̲H̲Q̲ ̲C̲o̲n̲f̲i̲g̲u̲r̲a̲t̲i̲o̲n̲
The AWHQ configuration is a scaled version of the PSWHQ
configuration based on the reduced load, memory requirements
and the fewer terminals to serve. Full advantage is
taken of the modularity by simply reducing the number
of Processor Sections. Ample capacity is available
for the different communication interfaces required
for the alternate war headquarters.
Figure 2.4.2-1 shows the interconnections to the AWHQ
system. Figure 2.4.2-2 shows the detailed system configuration
with 6 Processor Sections compared to the 7 in the
PSWHQ. Table 2.4.2-3 provides the corresponding memory
allocation. The distribution of functions, which is similar to that of the PSWHQ system, is shown in chapter 4 relative to the workload.
AWHQ ADPE External Configuration
Figure 2.4.2-1
AWHQ ADPE System Configuration
Figure 2.4.2-2
MEMORY IN MEGABYTES

Processor Section            CPU RAM   Disc       Disc         Tape
                                       Required   Configured
PS#1 Communication              2        300      2 x 344
PS#3 Communication              2
PS#4 Message Pr.                3       1000      4 x 516
PS#5 Tote Pr.                   3
PS#6 Special Application        2        600      4 x 344      yes
PS#7 Various P.                 2        400      2 x 344
                                                  2 x  80
Total Megabytes                14       2300      4976
AWHQ ADPE Memory Allocation
Table 2.4.2-3
2.5 D̲E̲V̲E̲L̲O̲P̲M̲E̲N̲T̲ ̲S̲Y̲S̲T̲E̲M̲ ̲C̲O̲N̲F̲I̲G̲U̲R̲A̲T̲I̲O̲N̲
The software development system is the off-the-shelf
CR32 Cabinet module. The system is based on the Motorola M68000 CPU with the Bell Labs UNIX V7 multiuser operating system environment. The system is costed for 4 users, but can be extended, or multiple systems can be employed. The system is described in detail in the datasheets, Appendix C.
The system can also be expanded with a Britton Lee
Intelligent Database Machine for the development and
testing of the application software for UKAIR CCIS.
The system is equipped with 4 VDUs for software development; these are the non-TEMPEST version of the CR86T with the same functional capabilities as the Operator Consoles (2.3.3).
The configuration is itemized in Part III, chapter
3, Price Proposal.
3̲ ̲ ̲U̲K̲A̲I̲R̲ ̲S̲Y̲S̲T̲E̲M̲ ̲S̲O̲F̲T̲W̲A̲R̲E̲
3.1 I̲N̲T̲R̲O̲D̲U̲C̲T̲I̲O̲N̲
The UKAIR system software can be considered to consist
of:
- basic operating system
- operating system extensions:
- transaction processing subsystem (incl. database)
- communication processing subsystem
- software development system.
The ORION operating system is an object oriented, extendable
operating system.
The basic operating system is extended with a transaction
processing subsystem which provides an execution environment
for application processes executing UKAIR transactions.
An integrated data base is included in the transaction
processing subsystem.
The basic operating system is also extended with a
communication processing subsystem, which interfaces
to the physical communication channels for reception
of terminal transactions and for sending output data
to appropriate terminals.
Both the Transaction Processing Subsystem and the Communication
Processing Subsystem use common extensions to the basic
operating system. These extensions are defined as the High Level Operating System (HLOS).
The software development system is the UNIX* system
III which has been implemented on top of the ORION
kernel.
* UNIX is a trademark of Bell Laboratories.
3.2 O̲S̲ ̲C̲H̲A̲R̲A̲C̲T̲E̲R̲I̲S̲T̲I̲C̲S̲
The main objective in the design of the ORION system
has been flexibility. This implies that ORION (in contrast
to other operating systems) is an open system with
an extendable functionality. To a very extreme degree
the standard facilities of the system may be changed
or even left out in a particular ORION configuration.
The approach used makes it possible to implement new
features of the operating system and to allow a particular
project to meet very specific requirements.
The standard system is equipped with modules supporting:
- Hardware fault tolerance
- Multilevel security features
- A UNIX development system
The facilities have been implemented by a very carefully
designed division of tasks between hardware and software.
For the "nucleus" of the hardware (processors, memory,
data paths) the fault tolerance is implemented in hardware.
The duty left for software is reporting of single hardware
failures.
Single hardware failures will, therefore, not have
any impact on the operation of the system. It is, therefore,
not necessary to provide facilities to restore the
state of the system after failures.
3.2.1 B̲a̲s̲i̲c̲ ̲C̲o̲n̲c̲e̲p̲t̲s̲
Current operating systems are normally supplied by
the computer vendor as a (large) monolithic piece of
software which offers some fixed services to its users.
The user is not supposed to - nor has he the means
to - modify the operating system. Thus, the user is
normally not able to integrate new functions in the
system or to remove functions which he does not need
from the system. This is for several reasons an unsatisfactory
state of affairs:
o Operating system modules are not very portable,
i.e. functionality which is available in one system
cannot easily be carried to another system.
o The user must adapt his problem to the system instead
of adapting the system to his problem.
o The user cannot take advantage of operating system
modules which he might want to develop himself
or which a third party might develop.
The decentralized operating system model described in the next section shows how ORION has overcome these problems.
The basic concept of ORION is the object.
In the object model an object is taken to mean a set
of data and a related set of operations on these data.
This rather broad formulation can be exemplified through
a file management system. The set of data is then the
set of files (perhaps represented inside the file management
system as a set of file control blocks) and the operations
are normal file operations such as OPEN, READ, WRITE,
etc.
The data belonging to an object can only be accessed
from outside the object via the corresponding operations
which are implemented as procedures. Thus, the data
are protected against misuse and corruption by users
because only the procedures belonging to the object
actually manipulate the data. The object concept as
outlined above is a key concept in modern programming
languages such as ADA where an object in the above
sense is known as a "package".
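The file management example can be sketched in C, where users hold only the operations while the data remain hidden; the names below are illustrative, not the ORION definitions:

    /* Sketch of the object concept: the file data can only be  */
    /* reached through the operations belonging to the object.  */

    typedef struct file_object file_object;   /* opaque to users */

    struct file_ops {
        file_object *(*open) (const char *name);
        int          (*read) (file_object *f, void *buf, int n);
        int          (*write)(file_object *f, const void *buf, int n);
        void         (*close)(file_object *f);
    };

    /* A user holds only the operations table; the file control */
    /* blocks behind it are invisible and hence protected.      */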
Based on this concept an operating system is now structured
into a set of objects each implementing some abstract
concept with their related functions. The user sees
his operating system as an extension of his program
which is called by normal procedure calls.
The core of the system is an object kernel which manages
the objects. It allows objects to be installed and
removed on the fly while the system is running whereby
one can dynamically change the operating system. The
object boundaries are also the protection boundaries
in the system. Each outside call to a procedure in
an object is via the object kernel which enforces protection
of the data supported by a suitable hardware protection
mechanism.
The advantages of the approach are:
o Operating systems can be developed in a decentralized
way, various parts being developed by independent
groups.
o Existing operating system modules can be re-used
in new systems.
o Operating system modules are protected against
users and against each other, meaning that development
and testing of new operating system modules can
take place on a system also used for production.
o Operating systems can be configured in a very flexible
way to fulfil needs. Indeed, several systems may
co-exist in one computer, e.g. a development system
and a real time system. Development and testing
can safely take place on the same machine.
Based on the object kernel a number of other functionalities
may be implemented as single modules: task management,
memory management, security management, etc.
In the standard ORION packages a number of modules
are supplied. The functionality of these modules is
described in the following sections. These modules
may be replaced by other modules, and project specific
modules supplying new kinds of facilities may be included.
3.2.2 O̲b̲j̲e̲c̲t̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
The Basic Entity in the ORION system is the abstract
concept of an object. Objects are typed. An object
type is characterised by the operations that can be
performed on instances of the object.
An object is implemented in ORION as a set of procedures
corresponding to the available operations and some
local data which are needed by the procedures to implement
their operations.
Objects - or rather the procedures belonging to an
object - are executed by tasks.
The combination of a task and an object is called an
execution environment.
In a given execution environment a certain set of objects
are visible. Each visible object is associated with
some rights which are exactly those operations which
the task may perform on the object. The set of rights
change, whenever the execution environment changes.
That is, the rights which task T may exercise when
executing object A need not to be the same as task
T may exercise when executing object B, and the rights
which task T may exercise when executing object A need
not to be the same as the rights that task Q may exercise
when executing object A.
Objects are managed by the object kernel - or just
kernel for short.
All calls and returns between objects are performed
through the kernel.
This forms the basis for the security in the system.
The object boundaries are also the security boundaries
and since security boundaries can only be crossed via
the kernel, security is enforced because the kernel
does appropriate security checking as part of call
processing.
Object management also includes "clean-up" when objects
disappear. This is facilitated by the notions of an object o̲w̲n̲e̲r̲ and an object m̲a̲n̲a̲g̲e̲r̲, as illustrated in the figure below.
FIGURE 3.2.2-1
When one object (B in the figure) opens cooperation
with another object (C in the figure), a new, small
object is created to keep track of the cooperation
(Access Object in the figure). This Access Object is
the basis for the orderly clean-up when B is removed.
The Access Objects also serve to let C keep track of
many concurrent cooperations with other objects.
Protected pointers provide call paths from one object
to another. Within ORION there are two special pointers
to the Access Object: The o̲w̲n̲e̲r̲ ̲p̲o̲i̲n̲t̲e̲r̲ from B to the
Access Object, and the m̲a̲n̲a̲g̲e̲r̲ ̲p̲o̲i̲n̲t̲e̲r̲ from C to the
Access Object.
The owner pointer signifies that B has provided resources
for the Access Object (storage, etc.) and that B can
initiate removal of the Access Object.
The manager pointer signifies that C has determined
the interior of the Access Object and that C can change
the interior part of the Access Object. C will also
be asked to clean up the Access Object when B decides
to remove it (or when B itself is removed). Compared
to type-oriented high-level languages, the manager
pointer represents the relation of the "variable",
Access Object, to its "type" C.
In ORION all objects have an owner pointer and a manager
pointer.
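A minimal sketch in C of the bookkeeping described above, with illustrative field names (B is the owner, C the manager):

    /* Sketch of the Access Object: B (the owner) provided the  */
    /* resources and may remove it; C (the manager) defines its */
    /* interior and is called for clean-up. Names are assumed.  */

    struct object;

    struct access_object {
        struct object *owner;     /* B: provided the resources  */
        struct object *manager;   /* C: determines the interior */
        void          *interior;  /* state private to C         */
    };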
3.2.3 M̲e̲m̲o̲r̲y̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
A̲d̲d̲r̲e̲s̲s̲ ̲M̲a̲p̲p̲i̲n̲g̲
Memory in an ORION system constitutes one large 32
bit address space implying that 4 Giga bytes of memory
are addressable.
The logical to physical address mapping is the identity
mapping in an ORION system so that the logical addresses
that a CPU generates are used directly as physical
addresses. This means that the same address refers
to the same physical location in any process.
There are considerable advantages to be gained from
this very simple mapping. Sharing of data between different
tasks or different objects can be made very efficient
because the shared data will always be present in the
address spaces of both tasks or objects and at the
same logical addresses.
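A small sketch in C of why identity mapping makes sharing cheap: a raw pointer stored by one task is directly valid in another, provided both protection domains include the shared segment (names are illustrative):

    /* The same address denotes the same physical location in   */
    /* every task, so pointers can be shared without translation */

    struct shared_queue {
        struct message *head;   /* valid as-is in every task */
    };

    /* Task A: publish by storing the raw pointer */
    void put(struct shared_queue *q, struct message *m) { q->head = m; }

    /* Task B: the very same pointer value dereferences correctly */
    struct message *get(struct shared_queue *q) { return q->head; }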
Programs are not bound to execute at fixed addresses
when they are compiled and linked. The MC68000 addressing
modes imply that most addressing is done relative to
some base register, e.g. addressing of program locations
relative to the program counter and addressing of data
locations relative to some base registers, such as
the stack pointer or one of the other address registers.
At program load time the program location in memory
is determined and the data start address is loaded
into one of the data registers before the program starts
execution.
In some situations a program must refer to a fixed
physical address. For example a device driver needs
to refer to the device registers which are viewed as
memory locations at fixed addresses. This is accomplished
without problems by using the absolute addressing mode.
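As a minimal illustration, the following C fragment
shows how a driver may refer to device registers through
a fixed absolute address. The address, the register
layout and the READY bit are hypothetical, chosen for
the example only.

    /* Hypothetical device register block at a fixed physical
       address; 0xFFF000 and the layout are illustrative only.  */
    struct dev_regs {
        volatile unsigned short status;   /* read:  device status      */
        volatile unsigned short command;  /* write: device command     */
        volatile unsigned short data;     /* read/write: data register */
    };

    #define DEV_BASE ((struct dev_regs *) 0xFFF000)

    /* Busy-wait until the (hypothetical) READY bit is set, then
       issue a command; all accesses are ordinary memory
       operations.                                               */
    void dev_start(unsigned short cmd)
    {
        while ((DEV_BASE->status & 0x0001) == 0)
            ;                            /* wait for READY           */
        DEV_BASE->command = cmd;         /* absolute addressing mode */
    }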
M̲e̲m̲o̲r̲y̲ ̲P̲r̲o̲t̲e̲c̲t̲i̲o̲n̲
A running task should not be allowed access to all
of memory but only to what is needed. Access to memory
is controlled by the Memory Protection Unit (MPU).
The MPU allows any protection pattern to be defined
for a running task.
The figure below indicates how it works.
Assume that the currently executing task is to be allowed
access to the three indicated memory segments. The
MPU contains a protection descriptor for each segment.
A descriptor points to the first and last byte in the
segment and it contains the access rights which are:
read, write, and execute. Each memory access is controlled
by the MPU to see if the address fits into one of the
descriptors. If it does and the appropriate access
bit is set, access is allowed; otherwise an exception
is generated.
When the CPU operates in system mode the access check
is bypassed so that programs running in system mode
have access to all of memory. In ORION only the (object)
kernel runs in system mode. All other software modules
will, from a memory protection point of view, execute
in normal user mode.
FIGURE 3.2.3-1
As there may exist more segments than there is room
for in the MPU the entire table of descriptors is located
in memory and the MPU contains a register which points
to the descriptor table.
FIGURE 3.2.3-2
The most recently used descriptors are cached in the
MPU. If an address is not found in the MPU it searches
the memory resident table for a descriptor to use.
Switching to another protection domain is performed
simply by writing a new value to the MPU address register.
This will also make the MPU clear its internal descriptors.
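The checking logic can be sketched in C as below; the
descriptor fields and the mpu_load_table() primitive
are assumptions made for the example, not the actual
MPU register layout.

    #define ACC_READ  0x1
    #define ACC_WRITE 0x2
    #define ACC_EXEC  0x4

    /* One protection descriptor: it delimits a segment and
       carries the access rights, as described above.            */
    struct mpu_desc {
        unsigned long first;      /* address of first byte in segment */
        unsigned long last;       /* address of last byte in segment  */
        unsigned char rights;     /* ACC_READ | ACC_WRITE | ACC_EXEC  */
    };

    /* The check the MPU performs on each access: the access is
       allowed if the address falls inside some descriptor whose
       rights include the requested kind.                        */
    int mpu_check(struct mpu_desc *tab, int n,
                  unsigned long addr, unsigned char kind)
    {
        int i;
        for (i = 0; i < n; i++)
            if (addr >= tab[i].first && addr <= tab[i].last
                && (tab[i].rights & kind))
                return 1;             /* access allowed     */
        return 0;                     /* raise an exception */
    }

    /* Switching protection domain: point the MPU at another
       descriptor table; the write also flushes the cached
       descriptors (hypothetical kernel primitive).              */
    extern void mpu_load_table(struct mpu_desc *tab);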
V̲i̲r̲t̲u̲a̲l̲ ̲M̲e̲m̲o̲r̲y̲
Virtual memory is the capability to present more memory
to the tasks than is actually present.
This is done by transferring physical memory pages
to and from disk storage in a way which is transparent
to the tasks. Virtual memory gives poor performance
and it adds considerably to the complexity of the system.
A much better approach - which is the one selected in
this system - is simply to add more physical memory
to the system if it is needed.
It should be noted, however, that the architecture
in the ORION system does not preclude that a logical
to physical memory mapping mechanism be added after
the MPU thereby providing a paged virtual memory system.
P̲r̲o̲t̲e̲c̲t̲i̲o̲n̲ ̲D̲o̲m̲a̲i̲n̲s̲
A protection domain is a set of memory segments with
associated access rights. At any given moment a CPU
executes in some protection domain which is reflected
in the current MPU set up.
Protection domains change either when a task scheduling
occurs or when a task executes a call to another object.
During a task scheduling the switch to the protection
domain of the new task is performed by loading the
MPU address register with the address of the segment
descriptors of the new task.
A call to another object is performed via the kernel.
The kernel constructs a new protection domain for the
called object based on the protection domain of the
caller and the description of the called object. When
control returns to the caller the kernel restores the
old protection domain.
3.2.4 S̲e̲c̲u̲r̲i̲t̲y̲
This section describes the security aspects of the
ORION architecture. The general concept of security
related to the ORION architecture are described, and
furthermore the security policy implemented by the
standard security module supplied with the ORION package
is described. It should be mentioned that projects
using the ORION system may implement their own security
module which may have both stronger and weaker policies
than the standard module.
The ORION architecture forms the basis for building
an application system with a high degree of security.
It will be possible to build a system which can be
classified as B3 according to the criteria prescribed
by the US Department of Defense (ref. *).
The basis for this is the following:
o Reliable hardware is a prerequisite for building
a secure software system. Furthermore the hardware
implements different execution levels and enforces
memory protection.
o A small operating system kernel which is the only
software module executing with all privileges.
Because the kernel is small, it is possible to
verify its correct function.
o Separation of the mechanisms enforcing security
from the module implementing the policy. Because the
policy is assembled in one module, it is possible
to verify this module. This would be difficult
in systems where the policy is distributed over
many modules.
The security enforcing mechanism of the ORION kernel
is described in this section. Each object is associated
with a table describing the entry procedures implemented,
together with specifications of whether a call to a
particular procedure requires read and/or write access.
Entry 1     R
Entry 2         W
Entry 3     R   W
Entry 4         W
Entry 5     R
* Trusted Computer System Evaluation Criteria,
CSC-STD-001-83,
Department of Defense, USA.
When an access object is created the manager calls
the ORION kernel to create it. The manager also specifies
which of the entry procedures may be invoked. The
kernel calls the security module for each wanted entry
procedure and specifies the kind of access that procedure
requires. According to its policy the security module
may deny access to some (or all) of the procedures.
The information on the allowed entry calls is then stored
by the kernel in the access object.
At the time of call to some entry procedure in an access
object, the kernel first asserts that the call is allowed
and then examines the required access kind. If the
access kind is read-write, the current domain is
merely extended, but if only read or only write access
is allowed, the memory management facilities are used
to limit the access accordingly.
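A sketch of this kernel-side check is given below in
C; the access object layout, the bit encoding and the
domain primitives are all assumptions introduced for
the illustration.

    #define ENT_READ  0x1
    #define ENT_WRITE 0x2

    /* A hypothetical access object: one bit per allowed entry,
       filled in via the security module when the object was
       created.                                                  */
    struct access_object {
        unsigned long allowed;       /* bit i set: entry i may be called */
        unsigned char required[32];  /* access kind of each entry        */
    };

    extern void extend_domain(int entry);                   /* hypothetical */
    extern void extend_domain_limited(int entry, int kind); /* hypothetical */
    extern void raise_security_exception(void);             /* hypothetical */

    /* Kernel-side check at the time of an entry call. */
    void kernel_entry_call(struct access_object *ao, int entry)
    {
        if (!(ao->allowed & (1L << entry))) {
            raise_security_exception();  /* call not permitted */
            return;
        }
        if (ao->required[entry] == (ENT_READ | ENT_WRITE))
            extend_domain(entry);        /* read-write: merely extend */
        else
            extend_domain_limited(entry, ao->required[entry]);
    }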
The standard security policy is based on a multicompartment,
multilevel principle.
To each subject is associated a security clearance,
and to each object is associated a security classification.
Clearance and classification are both described by
a common data structure: the security profile. In order
to exchange information, the security profiles of the
involved parts must fulfil certain relations. Information
may only flow from an object to another with the same
or higher security profile. A security profile is divided
into a number of compartments, each having a level.
A security profile is higher than or the same as another
one if the level of every compartment is higher than
or equal to the corresponding level of the second profile.
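The dominance test itself is simple; a minimal sketch
in C, assuming a fixed number of compartments and an
integer level per compartment:

    #define NCOMP 16        /* number of compartments; illustrative */

    /* A security profile: one level per compartment. */
    struct profile {
        int level[NCOMP];
    };

    /* Profile a is "higher than or the same as" profile b when
       every compartment level in a is >= the corresponding level
       in b; information may then flow from b to a.              */
    int dominates(const struct profile *a, const struct profile *b)
    {
        int i;
        for (i = 0; i < NCOMP; i++)
            if (a->level[i] < b->level[i])
                return 0;
        return 1;
    }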
The standard security module will report any operations
to an audit module which, in a project dependent way,
can log this information in a suitable form.
3.2.5 T̲a̲s̲k̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
As described in section 3.2.2 the ORION kernel distinguishes
a small number of object types, one of which is the
task object. The task object is special in the sense
that it is schedulable (i.e. it has its own autonomous
life), as opposed to passive objects that perform actions
only when they are called by other objects.
When a task is executing it may perform entry calls
into other objects. The processing may be preempted
or explicitly stopped even within entry calls. It is
the initiating task object, however, that is stopped
(and resumed later on). Thus the same object may be
called (entered) by several tasks at the same time
(and mutual exclusion must be implemented within the
called object, if required).
In the overall architecture, objects may be distributed
throughout the entire memory space, and an object may
be called from objects in any other part of the address
space. To optimize performance the execution of an
object entry call is performed by its "host processor"
(i.e. the processing subsystem in the memory of which
the actual object is allocated).
When an object in one "host processor" wants to perform
an entry call in a "remote" object this is performed
by setting up the logical domain (in which the entry
call is to be processed) and then issuing an interrupt
with a reference to the actual domain. The initiating
process may then be rescheduled and the interrupted
processor includes the referenced domain in its own
scheduling.
When the "remote" entry call is terminated, the original
process is resumed via an interrupt as described above,
and the domain corresponding to the terminating entry
call is removed from its own scheduling.
This technique corresponds to a task migration, where
a task is executing in one "host processor"
at a time and when processing switches between hosts
the task is removed from the scheduling of the host
that is left (becomes a passive task) and is included
in the scheduling of the new host.
The technique used for interrupt handling is exactly
the same in the sense that an interrupt corresponds
to a remote entry call performed by a "hardware task".
The ORION kernel includes explicit facilities for task
creation, termination, passivation and activation,
and implicit facilities for task invocation. The basic
ORION kernel also supports passing of information between
objects (also task objects) and sharing of information
between objects. These facilities are supported both
via parameter passing of information at entry calls
and via passing of object handles at entry calls or
via the object directory.
The scheduling of tasks, as implemented by the ORION
kernel, is a priority based, short term scheduling
allocating processor resources to tasks in timeslices.
Medium term scheduling is applied to application processes
executing in a transaction processing environment.
These processes may have dynamically assigned priorities.
3.2.6 D̲e̲v̲i̲c̲e̲ ̲H̲a̲n̲d̲l̲i̲n̲g̲
In ORION devices are represented by, and handled via,
device handler objects. These objects are passive objects,
that may be subject to activation via entry-call from
other objects or from hardware (via device interrupts).
When activated by a remote object or by a device interrupt
the device object becomes a schedulable object and
consequently ORION does not require that low level
interrupt handling is performed in a special (disabled)
mode. On the other hand the system supports fast interrupt
handling and if the handling is "short" there is a
great probability that no scheduling overhead will
be introduced during interrupt handling.
As described in section 2, hardware supports a number
of different organizations of peripheral equipment.
From the software point of view these different organizations
fall into two classes. One presents a fault tolerant
organization of hardware as a single reliable IO unit.
The other class does not build fault tolerance into
hardware, and consequently a hardware reconfiguration
includes reconfiguration of the corresponding device
handling objects.
The handling of errors in peripheral equipment is included
in the following section on exception handling.
All peripheral equipment is driven on interrupt basis.
The amount of work performed by an IO unit between
interrupts depends very much on the type of equipment.
However, the handling of interrupts is general.
Whenever an interrupt is issued (associated with an
interrupt vector) it is detected by the ORION kernel,
which, in turn, performs an entry call in the device
handler object referenced by the interrupt vector.
Apart from the interrupt reception performed by the
kernel, all the device handling is performed in an
unprivileged state, and the access to peripheral equipment
is governed by the usual memory access checks, as all
access to such equipment is performed as memory read
and write operations (mapped onto device registers).
As the device specific handling is performed in normal
(protected) state the ORION kernel allows safe introduction
of new device handlers without introducing a security
or reliability risk to the rest of the system.
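A sketch of this dispatch in C; the vector table, the
entry number and the kernel call are hypothetical names
introduced for the example.

    struct object;                            /* opaque handler object   */
    extern struct object *vector_table[256];  /* set up at configuration */
    extern void kernel_entry(struct object *o, int entry);  /* hypothetical */

    #define ENTRY_INTERRUPT 0    /* conventional entry number; illustrative */

    /* Kernel-side interrupt reception: the only privileged step
       is this dispatch; the handler itself then runs as a normal,
       protected object.                                          */
    void kernel_interrupt(int vector)
    {
        struct object *handler = vector_table[vector];
        if (handler != 0)
            kernel_entry(handler, ENTRY_INTERRUPT);
    }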
A standard ORION system is equipped with a number of
device handler types, supporting disk equipment, magnetic
tape equipment, terminals, printers and local area
networks. Generally these device handlers present a
"virtual file" interface. A disk handler presents a
disk as one single random access file. The magtape
handler presents a tape as a set of files with sequential
write access and random read access. A terminal is
presented as a set of sequential access files (a control
input file, a control output file, a data input file
and a data output file). A printer is presented as a
single sequential file. The local area network is presented
as a set of sequential access files. Each physical
address on the network (e.g. a terminal) may then again
be handled via a set of four logical files (a pair
for control and a pair for data). In the local area
network a file actually represents a point to point
connection and a file transfer service is available,
using such a connection as a "pipe".
Communication line handlers are available for some
protocols and the system is open for inclusion of other
special protocols. Where a communication line handler
is capable of presenting the line as a set of virtual
files (e.g. like the local area network handlers),
these may be used as the basis for accessing terminals
and printers and for transferring files via a
communication line.
In ORION terms the virtual files are individual objects,
and the access to peripheral equipment is subject to
the normal ORION access control. In the case of multilevel
information exchange on a communication line, it is
required that the communication line handler is trusted.
It has to handle information on various levels when
multiplexing output and (especially) when demultiplexing
inputs.
3.2.7 E̲x̲c̲e̲p̲t̲i̲o̲n̲ ̲H̲a̲n̲d̲l̲i̲n̲g̲
During normal operation a number of exceptions may
occur. These exceptions can roughly be classified into
the following three groups:
1) Internal exceptions caused by the executing object.
2) External software exceptions occurring as a result
of object interactions.
3) External hardware exceptions, i.e. occurrence of
a hardware failure.
The first type of exceptions are events like illegal
addressing, overflow, division by zero, timeouts, etc.
The second type of exceptions are typically: attempts
to access failing/removed object, and attempted object
access that is not accepted because of security reasons.
All these exceptions are reported to the audit module
and signalled back through the object hierarchy and
handled by an object, that is authorized to do so,
as explained in section 3.2.2.
The last type of exceptions is signalled in two different
ways:
a) In case of failure in a redundant hardware subunit
(one out of three processors, or one of two data
highway interfaces) an exception interrupt is generated
with a vector, identifying the failing module.
Normal operation continues while the exception
interrupt is received by a "device exception handler".
This handler is responsible for the logging and
the reporting of the exception.
b) In case of a failure in a non-redundant hardware
subunit, this is detected by the device handler
(when performing IO operations), or it is detected
by the access control hardware (checking DMA transfers).
In any case the failure will affect the device
handlers for the failing device. If no direct recovery
action can be performed, the failure is signalled
to the objects accessing the device handlers and
the device handler performs a shutdown.
The shutdown is handled as an exception, and it
is propagated through the object hierarchy. When
it reaches the object, that was responsible for
the initialization of the actual device, this
object may perform a reconfiguration (associating
an alternate device to the device handler and
reinitializing it).
If objects that were affected by the failure (when
they accessed the device handler) perform a number
of retries (separated by suitable delays), they
may recover automatically when the device handler
is reinitialized.
If no reconfiguration is performed, no automatic
recovery is possible, and the exception will be
propagated to all objects that interact with the
actual device handler.
Apart from the exception handling performed, when failures
are detected, the system supports reconfiguration in
the sense that "new" hardware modules (or modules that
have been removed for repair) can be included in the
system during operation. The technique for hardware
inclusion depends very much on the kind of hardware
and the kind of configuration (redundant or non-redundant).
Inclusion of a processor module with onboard memory
is performed in two steps. In the first step the module
enters a listening state where all memory write operations
that are performed in its partners are also performed
in the module under inclusion. During this phase a
special task performs read/write operation on each
individual word of the local memory area to force valid
data into the onboard memory of the module under inclusion.
Finally normal state is entered by performing an entire
context load. The context load instruction starts the
processing in the module under inclusion and it thereby
enters the normal state.
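The sweep step can be sketched as below; the bounds
and the context_load() primitive are assumptions made
for the illustration.

    extern void context_load(void);     /* starts the included module;
                                           hypothetical primitive      */

    /* Force valid data into the on-board memory of a module under
       inclusion: every word of the local memory area is read and
       written back, so each write is mirrored into the listening
       module.                                                   */
    void inclusion_sweep(volatile unsigned short *base,
                         unsigned long nwords)
    {
        unsigned long i;
        for (i = 0; i < nwords; i++)
            base[i] = base[i];          /* read, then write back */
        context_load();                 /* enter normal state    */
    }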
Inclusion of peripheral equipment is performed by using
corresponding or simpler means, depending on the amount
of local state information residing in the actual module.
In case of disk devices the redundancy is also implemented
for the device and the medium (mirrored disks) and
the reconfiguration is performed by a resynchronization
of disk contents using the same technique as for resynchronization
of local processor board memory.
3.2.8 F̲i̲l̲e̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
The ORION file management system resembles that of
UNIX* in the services offered, but it also provides
robustness against loss of information. The robustness
is achieved both by way of internal organization and
by providing the possibility of dualization of the
actual storage medium and the access paths thereto.
As seen by the applications, the ORION file management
system offers three kinds of files: ordinary disk files,
directories, and special files.
An ordinary file contains whatever information the
application places on it. No particular structuring
is expected by the file management system. Different
applications have different characteristics in the
way they use files. In a time-sharing environment files
must be dynamically allocatable and must be able to grow.
Real-time applications require large, and possibly
contiguous, files; dynamic allocation and growth are
usually not required, whereas minimizing the number
of disk accesses is essential. Thus, the ORION file
management system distinguishes between contiguous
files and other ordinary files.
The contents of a contiguous file are stored in one
consecutive disk area. When accessing this kind of
file it is possible to calculate the relevant disk
address directly. The contents of an ordinary file
are kept in a number of consecutive disk areas which
are addressed through an index.
Although the index block may be found in the sector
cache, this method may require some extra disk accesses,
but it allows for dynamic growth of the files.
* UNIX is a trademark of Bell Laboratories.
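The difference between the two addressing schemes can
be sketched in C; the extent layout and sector arithmetic
below are illustrative, not the actual ORION disk format.

    /* Contiguous file: the disk sector holding a given byte
       offset can be computed directly.                          */
    unsigned long contig_addr(unsigned long start_sector,
                              unsigned long offset,
                              unsigned long sector_size)
    {
        return start_sector + offset / sector_size;
    }

    /* Ordinary file: an index maps extents (consecutive disk
       areas) to disk addresses; locating an offset may cost an
       extra access to fetch the index block, unless it is in
       the sector cache.                                         */
    struct extent { unsigned long first_sector, nsectors; };

    unsigned long indexed_addr(struct extent *index, int nextents,
                               unsigned long sector_no)
    {
        int i;
        for (i = 0; i < nextents; i++) {
            if (sector_no < index[i].nsectors)
                return index[i].first_sector + sector_no;
            sector_no -= index[i].nsectors;
        }
        return 0;                       /* beyond end of file */
    }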
Directories provide the mapping between the names of
files and the files themselves, and thus induce a structure
on the file system as a whole. A directory behaves
exactly like an ordinary file except that it cannot
be written on by unprivileged programs. However, anyone
with appropriate access rights may read a directory
just like any other file. All files in the system can
be found by following a path from the root directory
through a chain until the desired file is reached.
The directory structure is constrained to have the
form of a rooted tree. Each directory must appear as
an entry in exactly one other directory, which is its parent.
The same non-directory file may appear in several directories
under possibly different names. All links to a file
have equal status. That is, a file does not exist in
one particular directory; the directory entry for a
file consists merely of its name and a pointer to the
information actually describing the file. Thus a file
exists independently of any directory entry, although
in practice a file is made to disappear along with
the last link to it.
Special files are pseudonyms for I/O devices which
belong to the file system family. Special files are
read and written just like ordinary disk files, but
requests to read or write result in an activation of
the associated device. Directory links may be made
to these files just as to an ordinary file.
Although the root of the file system is always stored
on the same device, it is not necessary that the entire
file system hierarchy resides on this device. There
is a mount system request with two arguments: the name
of an existing ordinary file, and the name of a direct-access
special file whose associated storage volume (e.g.
disk pack) should have the structure of an independent
file system containing its own directory hierarchy.
The effect of mount is to cause references to the ordinary
file to refer instead to the root directory on the
removable volume.
The file management internally keeps administrative
information for all files: date and time of creation,
update, and access, access rights to the file from
any user, size, organization, and physical location
of the file.
The operations of the file management system are organized
in such a way that the information stored on disk will
always be consistent if the last attempted write operation
succeeded. This requirement can be guaranteed if the
dualization module (see figure 3.2.8-1) is put in between
the file management system and the corresponding device drivers.
This module will route the information to two separate
storage devices and perform the necessary write operations
such that at least one will be successful.
On top of the basic file management system a number
of access methods are provided, and more may be added. For
example a variable length record access method is placed
in one of the modules using the file management system
(see figure 3.2.8-1).
As the file management system is an object, the normal
ORION mechanisms prevent objects which are not authorized
from accessing the file system and its entry procedures,
such as:
o create a file object with specified name
o rename a file
A single file is also an object in the ORION sense
and again the kernel prevents an unauthorised object
from accessing its entry procedures which are of the
kind:
o read a byte string into a buffer
o write a byte string from a buffer
FIGURE 3.2.8-1
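The shape of these entry interfaces is sketched below
in C. The names and signatures are illustrative
assumptions; in ORION the entries are reached via kernel
entry calls, not direct C calls.

    struct file;      /* opaque file object handle */

    /* File management system entries (hypothetical signatures) */
    extern struct file *fms_create(const char *name);
    extern int          fms_rename(const char *from, const char *to);

    /* File object entries (hypothetical signatures) */
    extern int file_read (struct file *f, char *buf, unsigned len);
    extern int file_write(struct file *f, const char *buf, unsigned len);

    /* Example use: an authorized object copies a buffer to a new
       file; an unauthorized caller would be stopped by the
       kernel.                                                   */
    int save_report(const char *text, unsigned len)
    {
        struct file *f = fms_create("report");
        if (f == 0)
            return -1;          /* denied by access control */
        return file_write(f, text, len);
    }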
3.2.9 S̲o̲f̲t̲w̲a̲r̲e̲ ̲C̲o̲n̲f̲i̲g̲u̲r̲a̲t̲i̲o̲n̲
One of the basic properties of the ORION kernel is
its support for mutually protected software modules.
This property supports a safe and dynamic configuration
of software modules.
To support dynamic configuration, the internal
configuration of standard software modules is optionally
performed via initialization/configuration entry calls.
Thus a software complex may be loaded and initialized dynamically.
The initial system configuration may be performed in
the following main steps:
1) Load the ORION kernel with a disk handler, a file
management system and a loader task.
2) The loader task initializes itself, the disk handler
and the file management system (with some default
configuration) and opens the system configuration
file.
3) Software modules are loaded and corresponding objects
are created for these according to the contents
of the configuration file. If the configuration
file redefines the configuration of the file management
system or the disk handler, the loader performs
a shutdown of the actual object (which will cause
an exception to propagate to dependent objects),
reinstalls and/or reinitializes the module and
dependent objects (e.g. the configuration file)
and resumes configuration from the current position
in the configuration file.
After initialization, however, the software configuration
may be changed dynamically (including operating system
modules). Reconfiguration may be performed by calling
configuration entries or by shutting down an object
and then reinstalling an alternative one. The former kind
of reconfiguration may in some cases be performed without
affecting objects accessing the reconfigured object.
The latter implies that an exception will be propagated
to accessing objects. It is then up to such accessing
objects to perform recovery actions (try to get access
to another object of this type) or to give up.
The support for safe, dynamic reconfiguration, affecting
only the part of a system that directly depends on
a specific object, allows for test of new/modified subsystems
in an online environment (if this can be accepted for
performance reasons). The configuration and security
mechanisms also allow for a combination of software
development and online operation, provided that these
subsystems are configured with proper security classifications.
All objects and system structures that form an entire
ORION system are dynamically allocated from the pool
of free memory. The ORION kernel has no preallocated
resource types and no predefined maximum capacity.
This scheme optimizes the system flexibility by reducing
the number of configuration decisions to be taken at
system initialization.
A part of the ORION system is a directory module via
which object references may be established under full
security control.
The references are established via symbolic names,
supporting reconfiguration, where new objects are entered
as substitutes for old ones (as it may be done at device
handlers, serving failing nonredundant hardware).
3.3 D̲A̲T̲A̲ ̲M̲A̲N̲A̲G̲E̲M̲E̲N̲T̲ ̲C̲H̲A̲R̲A̲C̲T̲E̲R̲I̲S̲T̲I̲C̲S̲
3.3.1 I̲n̲t̲r̲o̲d̲u̲c̲t̲i̲o̲n̲
D̲a̲t̲a̲b̲a̲s̲e̲ ̲m̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲s̲y̲s̲t̲e̲m̲.̲
The database management system on a UKAIR Host computer
system is implemented by a specialized intelligent
database backend processor, the CRDB. The CRDB is logically
positioned between the mainframe and the data which
are stored on disk(s). The CRDB is a fully relational
database computing system implemented with specialized
hardware.
The concept of a specialized backend processor as implemented
by the CRDB offers significant advantages in terms
of performance:
o it offloads the resource consuming database management
tasks from the host CPU(s)
o it performs DBMS functions at high speed because
the hardware was specifically designed to execute
its relational DBMS software.
The major functions of the CRDB are outlined below.
TRANSACTION MANAGEMENT
- Automatic or application program controlled issuing
of "begin transaction", "end transaction" or "abort
transaction" command.
- Automatic concurrency control.
PROTECTION AND SECURITY
- Protection facilities for user and user groups.
- Checkpoints and transaction journalling for auditing
and recovery.
RELATIONAL DATABASE MODEL
- Includes an integrated data dictionary.
- Support of high level, non-procedural query language
for ad-hoc queries.
- Provides data independence for application programs.
The above listed features are described in greater
detail in the following sub-sections.
The concept of an application program, transaction
processing system and continuously available database
is explained below.
An a̲p̲p̲l̲i̲c̲a̲t̲i̲o̲n̲ ̲p̲r̲o̲g̲r̲a̲m̲ is not aware of the physical
implementation of the database system. The peculiarities
of the physical implementation are hidden by the transaction
processing system which is described in detail in section
3.4.1. The hardware architecture of the database management
system ensures a c̲o̲n̲t̲i̲n̲u̲o̲u̲s̲l̲y̲ ̲a̲v̲a̲i̲l̲a̲b̲l̲e̲ ̲d̲a̲t̲a̲b̲a̲s̲e̲ ̲s̲y̲s̲t̲e̲m̲.
The continuous availability of the database system
is provided by the t̲r̲a̲n̲s̲a̲c̲t̲i̲o̲n̲ ̲p̲r̲o̲c̲e̲s̲s̲i̲n̲g̲ ̲s̲y̲s̲t̲e̲m̲ which
uses the redundant CRDB architecture to ensure continuous
availability. The CRDB uses a mirrored disk concept to
prevent media failure from making the data inaccessible.
These fault-tolerant aspects of the transaction processing
and database system are invisible to the application
programs and are shown in figure 3.3.1-1. The CRDB
can also be utilized in a non fault-tolerant environment.
Therefore a description of some crash-recovery procedures
has been included in this chapter.
FIGURE 3.3.1-1
CRDB/HOST ARCHITECTURE
T̲h̲e̲ ̲C̲R̲D̲B̲ ̲r̲e̲l̲a̲t̲i̲o̲n̲a̲l̲ ̲d̲a̲t̲a̲b̲a̲s̲e̲ ̲m̲o̲d̲e̲l̲.̲
A d̲a̲t̲a̲b̲a̲s̲e̲ is an integrated collection of data. In
a relational database model which is supported by the
CRDB the data are logically stored in tables. These
tables are called r̲e̲l̲a̲t̲i̲o̲n̲s̲ and consist of a variable
number of rows (t̲u̲p̲l̲e̲s̲) and a fixed number of columns
(a̲t̲t̲r̲i̲b̲u̲t̲e̲s̲). A relational database is simply a collection
of related tables. Because relational data is viewed
as rows and columns of information, databases are easy
to implement and easy to use. The most important aspect
of a relational database is that the user (which typically
is an application program) just requests the information
it wants. The user never requests the information in
terms of how or where to get it. Implementation of
a relational DBMS application is much easier because
the programmer is not involved in the complicated analysis
associated with older, pointer based DBMS systems.
This important d̲a̲t̲a̲ ̲i̲n̲d̲e̲p̲e̲n̲d̲e̲n̲c̲e̲ feature of the relational
DBMS allows easy implementation of high level non-procedural
query languages.
Because of the relational model employed by the CRDB
it is easy to accommodate future changes, expansions
and modifications of the data structures and the physical
storage media.
L̲o̲g̲i̲c̲a̲l̲/̲P̲h̲y̲s̲i̲c̲a̲l̲ ̲A̲d̲d̲r̲e̲s̲s̲ ̲S̲p̲a̲c̲e̲
The CRDB relational database management system organizes
data into one or more independent databases. Up to
50 databases are supported by a single CRDB. A database
can have up to 32000 relations. Each relation can hold
up to 2 billion tuples.
The logical address space is 32 GB. This address space
is, however, limited to 16 SMD disks.
P̲e̲r̲f̲o̲r̲m̲a̲n̲c̲e̲ ̲t̲u̲n̲i̲n̲g̲
The tuning of the CRDB is accomplished through use of
o indices
o stored commands
o multiple copies
I̲n̲d̲i̲c̲e̲s̲
An index is a directory that relates the physical location
of each tuple of a relation to the value of an attribute
or group of attributes of that tuple. The creation
of an index improves the performance of the system
by providing a direct access path to the data. Indices
are either clustered - the data are physically sorted
according to the index - or non-clustered. It is possible
to create up to 255 non-clustered indices for every
relation. Up to 15 attributes can be used to specify
an index. The index consists of an external system
of pointer information organized into a tree structure.
Indices save time when searches are done but they cost
time when tuples are added to a relation or key attributes
are modified. Since index creation is easily accomplished
through use of CRDB commands, application programs
may create temporary indices. This applies especially
to time consuming bulk update application programs
which may create indices to speed up the update process,
perform the update and then destroy the newly created
indices.
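A sketch of this pattern from a host program; the
crdb_exec() call, the relation and the exact IDL syntax
are assumptions for the illustration.

    extern int crdb_exec(const char *idl_command);  /* hypothetical host call */

    /* Bulk update: create a temporary index to speed up the
       update, perform the update, then destroy the index again. */
    void bulk_update(void)
    {
        crdb_exec("create index tmp_ix on stock(part_no)");
        crdb_exec("begin");
        /* ... the time-consuming updates, now served by tmp_ix ... */
        crdb_exec("end");
        crdb_exec("destroy tmp_ix");
    }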
S̲t̲o̲r̲e̲d̲ ̲C̲o̲m̲m̲a̲n̲d̲s̲
An efficient way to execute an often repeated group
of commands is to store them in the CRDB as a stored
command. A stored command is preprocessed by the CRDB,
globally optimized, and stored for later execution.
Once a command is stored in the CRDB the host can send
the command name and the associated parameters to the
CRDB for execution. This reduces the amount of information
that must be transmitted and significantly reduces
execution time of the command.
M̲u̲l̲t̲i̲p̲l̲e̲ ̲C̲o̲p̲i̲e̲s̲
Further performance improvements can be achieved by
providing multiple copies of the database. The handling
of the multiple copies is done by the transaction processing
system which enforces synchronous updates of the multiple
copies. The application program is not aware of the
existence of multiple copies, while the transaction
processing system balances the load of transactions
across the available copies.
D̲a̲t̲a̲ ̲T̲r̲a̲n̲s̲f̲e̲r̲
The transfer of data to/from the CRDB is performed
under the control of the transaction processing system.
The transaction processing part of the extended operating
system is the only software that is able to directly
access the CRDB.
S̲e̲c̲u̲r̲i̲t̲y̲ ̲a̲n̲d̲ ̲P̲r̲o̲t̲e̲c̲t̲i̲o̲n̲.̲
The database objects are relations and views. A view
is a collective object defined as a conditioned aggregate
of relations.
a) M̲a̲n̲d̲a̲t̲o̲r̲y̲ ̲A̲c̲c̲e̲s̲s̲ ̲C̲o̲n̲t̲r̲o̲l̲
As an object, each relation has a security classification
corresponding to the highest classified information
in the relation.
A relation may be a single level object or a multilevel
object. For a single level relation, all tuples
have the same security classification. For a multilevel
relation each tuple has its own security classification,
and access control is then made at tuple level
in addition to relation level. This is similar
to a multilevel communication line with security
classification at packet level.
b) D̲i̲s̲c̲r̲e̲t̲i̲o̲n̲a̲r̲y̲ ̲A̲c̲c̲e̲s̲s̲ ̲C̲o̲n̲t̲r̲o̲l̲
Each relation and view is subject to the standard
discretionary access control of objects.
In addition discretionary access control can be
enforced at the attribute level.
3.3.2 P̲h̲y̲s̲i̲c̲a̲l̲ ̲D̲a̲t̲a̲ ̲S̲t̲r̲u̲c̲t̲u̲r̲e̲
The CRDB autonomously controls the allocation of physical
storage to the logical relations. The physical aspects
of the data organization are irrelevant to the application
programs.
The Data Base Administrator (DBA) can, however, affect
some physical characteristics of the CRDB physical
storage medium utilization:
o "Quota" option of the "create relation" command
gives the DBA the possibility to prevent uncontrolled
growth of a relation.
o "Demand" option of the "create database" command
is used to specify the maximum no. of disk pages
used by a database.
o "Create database" can allocate a database to a
specified disk volume.
o "Extend" command can be used to allocate more disk
space to a database
o When a relation is physically sorted ("clustered
index" is created) it is possible to specify a
"fillfactor" which determines how much empty space
should be left in disk pages to accommodate future
growth.
"Skip" option of the "create index" command is
used to specify how many empty disk pages should
be left between (partially) used disk pages. "Skip"
and "fillfactor" are equivalent to an "overflow
area" definition.
o The equivalent of addressing and searching techniques
in a relational database is indexing. It is possible
to create up to 255 non-clustered indices for each
relation. Each index can consist of up to 15 attributes
and its width can be up to 248 bytes. The width
of a clustered index is limited to 252 bytes. Whenever
possible the CRDB utilizes the available indices
to speed up command processing.
3.3.3 L̲o̲g̲i̲c̲a̲l̲ ̲D̲a̲t̲a̲ ̲S̲t̲r̲u̲c̲t̲u̲r̲e̲
A database on the CRDB can be viewed as containing
several classes of objects: system relations, user
relations, stored commands, views and indices.
U̲s̲e̲r̲ ̲r̲e̲l̲a̲t̲i̲o̲n̲s̲ contain the user defined and entered
data.
S̲y̲s̲t̲e̲m̲ ̲r̲e̲l̲a̲t̲i̲o̲n̲s̲ contain data used internally by the
CRDB and descriptions of user relations ("schema definitions").
The latter part of the system relations forms a d̲a̲t̲a̲ ̲d̲i̲c̲t̲i̲o̲n̲a̲r̲y̲.
Since data in the data dictionary is organised in exactly
the same way as all other data in the CRDB - namely
in relations and tuples - standard commands for data
manipulation apply to data in the data dictionary.
Some sensitive parts of data (e.g. data affecting the
physical allocation of disk pages) can only be manipulated
indirectly by special CRDB commands.
S̲t̲o̲r̲e̲d̲ ̲c̲o̲m̲m̲a̲n̲d̲s̲ are command sequences stored by the
CRDB. These commands are used to represent frequently
used operations and can be invoked by sending the command
name to the CRDB. It is possible to leave some parameters
open during definition of a stored command and supply
parameter values when the command is invoked.
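As a sketch (the define/execute syntax and the crdb_exec()
host call are assumed for the illustration):

    extern int crdb_exec(const char *idl_command);  /* hypothetical host call */

    void prepare_and_use(void)
    {
        /* define once: preprocessed and globally optimized by
           the CRDB; the parameter $1 is left open               */
        crdb_exec("define get_unit retrieve (units.all)"
                  " where units.id = $1");

        /* invoke later: only the name and the parameter value
           are sent to the CRDB                                  */
        crdb_exec("execute get_unit (42)");
    }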
V̲i̲e̲w̲s̲ are virtual relations which are defined in terms
of real relations or other views. Views can present
to the user a restricted view of a relation: some attributes
and/or tuples may be invisible. It is possible to combine
several relations into one view.
The data dictionary is an integral part of the CRDB.
Usually the data dictionary maintenance is accomplished
by the DBA through an interactive high level query
language called IDL - Intelligent Data Language. The
IDL compiler is available to the DBA at selected terminals.
For a summary of IDL commands see table 3.3.3-1.
IDL provides commands for relation, database, index,
and view creation. These commands update system relations.
Some of the attributes of these relations can be accessed
by standard data manipulation commands like "replace"
and "retrieve". Therefore no special Data Definition
Language is needed.
abort        Aborts a transaction
append       Appends tuples to a relation
audit        Creates an audit report from the
             transaction log
begin/end    Marks the beginning and end of multiple
             IDL commands to be considered one
             transaction
create       Creates databases, relations, indices
             or views
define       Defines stored commands
delete       Removes tuples from a relation
destroy      Removes databases, relations, views,
             stored commands and indices
dump         Takes a physical dump of a database
             or a transaction log
execute      Executes a stored command
load         Causes a physical load of a database
             and transaction log
open/close   Opens/closes a database
permit/deny  Permits/denies access to relations,
             attributes or stored commands
replace      Replaces one or more attributes in one
             or more tuples of a relation
retrieve     Retrieves data from a relation and
             sends it to the host or stores it in
             a new relation
rollforward  Restores a database from a transaction
             log after "load database"
TABLE 3.3.3-1
MAJOR IDL COMMANDS
Creation of keys (indices) is performed by executing
"create index" IDL command.
Relationships between records and fields are given by
the contents of the fields only. This is a major advantage
of a relational model because no physical pointers
are needed.
Views (virtual relations) can be defined
in several ways:
- it can specify a subset of tuples of a relation
- s̲e̲l̲e̲c̲t̲i̲o̲n̲
- it can specify a subset of attributes of a relation
- p̲r̲o̲j̲e̲c̲t̲i̲o̲n̲
- it can combine attributes from several relations
into one relation - j̲o̲i̲n̲
- it can be a combination of the above mentioned
operations.
Protection of a view can differ from the protection
of the underlying relations.
The application programs access data through views.
Any changes in the data structure will, therefore,
not affect data definitions embedded in the programs.
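For illustration, the three kinds of view definition
might be expressed as below; the relation names and
the IDL syntax shown are assumptions for the example,
and the crdb_exec() host call is hypothetical.

    extern int crdb_exec(const char *idl_command);  /* hypothetical host call */

    void define_views(void)
    {
        /* selection: a subset of tuples */
        crdb_exec("create view uk_units as"
                  " retrieve (units.all) where units.country = \"UK\"");

        /* projection: a subset of attributes */
        crdb_exec("create view unit_names as retrieve (units.name)");

        /* join: attributes from several relations combined */
        crdb_exec("create view unit_bases as"
                  " retrieve (units.name, bases.location)"
                  " where units.base_id = bases.base_id");
    }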
3.3.4 A̲c̲c̲e̲s̲s̲ ̲&̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲o̲f̲ ̲D̲a̲t̲a̲
The access to the physical data and the management
of data is performed by the CRDB backend processor.
The CRDB is able to determine the position and retrieve
record(s) from
o Unique key value (clustered index).
o Contents of the attributes; the most frequently
used attributes may be used as indices to speed
up the retrieval.
The CRDB handler in CR90 host operating systems buffers
the data exchanged between the application program
and the CRDB. It allows the application program to
read one tuple at a time, i.e. it implements a "get-next"
operation to allow sequential scanning of the retrieved
tuples.
The basic data manipulation commands can be used for:
o Retrieval of data based on the contents of the attributes;
the attributes may form a unique (clustered) index.
An optional sorting of the delivered tuples can
be specified.
o Replacement of attribute values for specified tuples.
o Deletion of specified tuples.
o Appending of new tuples to a relation.
After each update the CRDB automatically updates the
bookkeeping information, e.g. no. of tuples in a relation
or index information.
The CRDB provides facilities for concurrency control
in order to allow multiple requests to be processed
in parallel. The CRDB software is re-entrant.
The host based database management part of the transaction
management monitors the execution time of the CRDB
requests by accessing a special system database in the
CRDB where the execution time for all given processes
is recorded.
3.3.5 D̲a̲t̲a̲b̲a̲s̲e̲ ̲I̲n̲t̲e̲g̲r̲i̲t̲y̲ ̲&̲ ̲(̲S̲e̲c̲u̲r̲i̲t̲y̲)̲ ̲C̲o̲n̲s̲i̲s̲t̲e̲n̲c̲y̲
The general policies employed by the CRDB are outlined
below.
The data integrity and consistency are ensured by a
combination of several techniques implemented in the
CRDB:
o Transaction management implements the transaction
concept
o Database checkpoints and transaction log are used
to restore the database to a consistent state
T̲r̲a̲n̲s̲a̲c̲t̲i̲o̲n̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲a̲n̲d̲ ̲C̲o̲n̲c̲u̲r̲r̲e̲n̲c̲y̲ ̲C̲o̲n̲t̲r̲o̲l̲
The CRDB provides complete support for t̲r̲a̲n̲s̲a̲c̲t̲i̲o̲n̲
̲m̲a̲n̲a̲g̲e̲m̲e̲n̲t̲. The technique utilized for transaction
management is called "l̲o̲c̲k̲i̲n̲g̲ ̲a̲n̲d̲ ̲l̲o̲g̲g̲i̲n̲g̲". The CRDB
commands enclosed by "begin ̲transaction" and "end ̲transaction"
verbs are executed as a single (atomic) operation.
Although multiple users can access a CRDB database
concurrently, no one will ever see another
user's partial updates. This is achieved by use of a "locking"
mechanism which is used for a c̲o̲n̲c̲u̲r̲r̲e̲n̲c̲y̲ ̲c̲o̲n̲t̲r̲o̲l̲ policy.
Every time new objects are locked the CRDB checks whether
a deadlock occurs. In addition to the locking mechanism
CRDB employs "transaction log", where details of every
discrete change to data in a relation are recorded.
The data in the transaction log can be used for auditing
purposes and for re-doing the transaction or backing-out
from an aborted transaction. The transaction log can
also be used for a manual backup/recovery procedure.
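From the host side a transaction then has the shape
sketched below; the command texts and the crdb_exec()/
crdb_ok() calls are assumptions made for the illustration.

    extern int crdb_exec(const char *idl_command);  /* hypothetical host call  */
    extern int crdb_ok(void);                       /* hypothetical status test */

    /* Two updates executed as one atomic operation: either both
       are committed at "end", or the log is used to back them
       out.                                                      */
    void move_unit(void)
    {
        crdb_exec("begin");
        crdb_exec("replace squadron (base = \"B\") where squadron.id = 7");
        crdb_exec("append movements (id = 7, to_base = \"B\")");
        if (crdb_ok())
            crdb_exec("end");       /* commit: both or neither */
        else
            crdb_exec("abort");     /* back out via the log    */
    }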
S̲o̲f̲t̲ ̲C̲r̲a̲s̲h̲ ̲R̲e̲c̲o̲v̲e̲r̲y̲ ̲(̲T̲r̲a̲n̲s̲i̲e̲n̲t̲ ̲H̲W̲ ̲F̲a̲u̲l̲t̲s̲,̲ ̲P̲o̲w̲e̲r̲ ̲O̲u̲t̲a̲g̲e̲)̲
The automatic recovery procedures are executed upon
power up of the CRDB or upon reception of a special
command from the host operating system. All changes to the
data are recorded in the transaction log. The transaction
log is "write ahead log", which means that changes
are physically written to the transaction log and then
applied to the data. This policy ensures that it is
possible to identify abruptly terminated transactions
- transactions terminated by a power outage or CRDB processor
failure - and undo partially executed transactions.
The soft crash recovery is initiated by the transaction
processing system when it detects that the currently
active CRDB processor has failed. The RECOVER program is
automatically run on the standby CRDB processor when it becomes active.
A̲r̲c̲h̲i̲v̲e̲ ̲F̲a̲c̲i̲l̲i̲t̲y̲
In order to provide an a̲r̲c̲h̲i̲v̲e̲ ̲f̲a̲c̲i̲l̲i̲t̲y̲ the CRDB provides
database load and database dump functions. These functions
can dump complete logical database copies onto tape
or into a file in a CRDB database. This database or
tape dump can be used together with the transaction
log to restore the database to a consistent state. The
backup procedures are as follows:
o The entire database is dumped onto a tape occasionally
o The transaction log is dumped frequently
o In case of loss of data the database is loaded
from a dump
o The transaction logs are loaded and the rollforward
command is applied.
The transaction log functions as an i̲n̲c̲r̲e̲m̲e̲n̲t̲a̲l̲ ̲l̲o̲g̲.
Both the transaction dump and the database dump are
written onto a tape. It is, however, possible to store
them temporarily on the host system's disk storage or
in a CRDB database allocated for backup purposes.
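The restore side of the cycle might look as follows
from the host; the command syntax and crdb_exec() are
assumptions for the illustration.

    extern int crdb_exec(const char *idl_command);  /* hypothetical host call */

    /* After loss of data: load the last full dump, then re-apply
       the logged changes from the incremental transaction logs. */
    void restore_database(void)
    {
        crdb_exec("load database ops");         /* full dump from tape  */
        crdb_exec("load transaction_log ops");  /* incremental dumps    */
        crdb_exec("rollforward ops");           /* re-apply the changes */
    }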
C̲o̲n̲s̲i̲s̲t̲e̲n̲c̲y̲ ̲M̲o̲n̲i̲t̲o̲r̲i̲n̲g̲
For applications demanding a very high degree of data
integrity, special techniques for data integrity monitoring
are applied. Database consistency monitoring is accomplished
by periodically scheduled processes which test data
versus predefined consistency criteria. Uniqueness
of certain attributes in a relation is automatically
enforced by the CRDB when a unique key is created for
a relation.
C̲o̲n̲t̲i̲n̲u̲o̲u̲s̲ ̲D̲a̲t̲a̲b̲a̲s̲e̲ ̲H̲W̲ ̲A̲r̲c̲h̲i̲t̲e̲c̲t̲u̲r̲e̲
The hardware architecture shown in figure 3.3.5-1
has been designed to provide a continuously available
database.
Mirrored disk volumes are used to prevent media failure
from rendering the data inaccessible. The stand-by
CRDB provides an alternative access path to the physical
data.
FIGURE 3.3.5-1
CONTINUOUSLY AVAILABLE DATABASE SYSTEM
D̲a̲t̲a̲ ̲M̲o̲n̲i̲t̲o̲r̲i̲n̲g̲
Monitoring of updates of certain data items is performed
by host resident software (transaction processing subsystem
TPS) which detects an execution of special update commands.
When the execution of such a command has been detected,
TPS schedules execution of a "Watchdog" program which
tests the data values vis-a-vis predefined (database
stored) threshold values.
A̲r̲c̲h̲i̲v̲e̲s̲ ̲&̲ ̲J̲o̲u̲r̲n̲a̲l̲s̲
A database dump facility can be used to create an archive
copy of a database or a relation. Concurrent "reads"
and "dumps" are allowed while "writes" are deferred.
The transaction log records all the changes that occurred
in the database.
The transaction log record contains the following information:
- Transaction number
- Relation-id
- Tuple-id
- Before-image
- After-image
- Time
- Data
- User-Id
The transaction log can be used to produce "change
history" for a specified tuple, relation or user-id.
C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲i̲n̲g̲ ̲&̲ ̲R̲e̲s̲t̲a̲r̲t̲
The checkpointing of the application programs is the
responsibility of the transaction processing system.
The checkpointing of the database is performed by the
database/transaction dump.
The database load is the reverse operation. Rollforward
command can be applied to the loaded database in order
to update it from a transaction log.
3.3.6 D̲a̲t̲a̲b̲a̲s̲e̲ ̲U̲t̲i̲l̲i̲t̲i̲e̲s̲
There are 3 types of utility programs available:
o RECOVER is a utility program in the CRDB which
uses the transaction log to back out aborted transactions.
o Disc pack formatting utility.
o Bulk copy utility which can copy large amounts of
data to/from host resident files from/into database
relations.
In addition to these utilities the CRDB supports commands
for logical database/relation dumps. The rollforward command
can be applied to a loaded database in order to update
it to a consistent state. The rollforward command uses
the transaction log.
The DBA has full access to data by means of an interactive
IDL compiler. He/she can display data in relations
on the terminal and manually correct it, if necessary.
Additional utilities can be developed in exactly the
same way as other application programs.
D̲a̲t̲a̲ ̲D̲i̲c̲t̲i̲o̲n̲a̲r̲y̲
The internal structure of a CRDB resident database
is described in 13 system relations. Some of these
system relations form the backbone of the integrated
data dictionary, which is accessed and updated by IDL
commands. Some of the attributes can be manipulated
by standard data manipulation commands, e.g. the relation-name
attribute in the "relation" relation.
The data dictionary is maintained in essentially 2
system relations: "relation" and "attribute".
The "relation" relation contains the following information
(partial listing):
- name
- owner (user-id of the creator)
- relation-id
- status
- tuple-length
- no-tuples
- quota
- creation-date
- access-date
The "attribute" relation contains the following information
(partial listing):
- attribute-id
- type (data type)
- length in bytes
- relation-id
- name
Manipulation of the data dictionary is achieved through
"create/destroy relation" commands. The DBA can furthermore
change names of relations, and attributes
The standard dictionary is extended with DBA defined
relations which describe relationships between relations
and attributes which are not evident from the data
dictionary data. In a similar way relationships between
processes and attributes are described in a DBA defined
"process-att" relation containing the following data:
- process-id
- relation-id
- attribute-id
- access-type
- comment
A utility is provided which can access the extended
data dictionary and produce equivalent high level language
record definition ("subschema definition") to be used
by application programs.
The extended dictionary is used to monitor project
development and to produce listings of relationships
between modules and attributes.
3.4 S̲Y̲S̲T̲E̲M̲ ̲U̲T̲I̲L̲I̲T̲Y̲ ̲C̲H̲A̲R̲A̲C̲T̲E̲R̲I̲S̲T̲I̲C̲S̲
This section briefly describes the facilities for transaction
processing and software development and maintenance.
3.4.1 T̲r̲a̲n̲s̲a̲c̲t̲i̲o̲n̲ ̲P̲r̲o̲c̲e̲s̲s̲i̲n̲g̲
3.4.1.1 A̲r̲c̲h̲i̲t̲e̲c̲t̲u̲r̲a̲l̲ ̲O̲v̲e̲r̲v̲i̲e̲w̲
A workstation process (OP) which requests execution
of a transaction on a host system results in the creation
of a transaction I/O process (TP) which forwards the
request to the transaction processing subsystem (TPS).
TPS creates a worker process WP which executes the
transaction and engages in communication with the CRDB.
It is assisted in this task by the database manager
(DB/TPS) part of TPS which creates a server process
SP in the CRDB. SP executes the database commands on
behalf of WP. The process structure is shown in figure
3.4.1.1-1.
FIGURE 3.4.1.1-1
TRANSACTION PROCESSING - OVERVIEW
The communication subsystem and the transaction subsystem
are the major extensions of the basic operating system.
The purposes of the TPS are to provide:
o a reliable environment for execution of WPs
o facilities which allow easy structuring of the
application programs (WP)
The functions implemented in TPS are as follows:
1) Transaction reception, transaction output TR, TO
2) Process allocation PA
3) Transaction scheduling TS
4) Database transaction manager DTM
5) Checkpoint mechanism CKP
6) Timer services TS
7) Recovery manager RM
A detailed diagram is shown overleaf.
FIGURE 3.4.1.1-2
TRANSACTION PROCESSING - COMPONENTS
The functions of TPS components are described below:
TRANSACTION RECEPTION
- receives requests for a transaction from the communication
processing subsystem
- selects appropriate processor to run the transaction
- updates process allocation input queues
- initiates allocation of primed (i.e. dormant) processes
PROCESS ALLOCATION
- evaluates available resources and transaction priority
- creates new process with priority
- keeps several processes "dormant" for often used
transactions (process priming)
- updates scheduler input queue
SCHEDULER
- schedules execution of WP processes; communicates
with DTM in order to prevent processes using high
traffic resources from being preempted.
- dynamically changes process priorities
DTM
- implements database transaction concept
- initiates process checkpointing and receives acknowledgements
- logs operations and transactions
CHECKPOINT MECHANISM
- checkpoints WP and its context incl. files
- sends replies to DTM
- manages checkpoint files
TIMER SERVICES
- maintains queue of transactions to be executed
periodically or at a given time
- receives requests for periodic execution
- sends requests to transaction reception queues
RECOVERY MANAGER
- receives notification upon component failure
- determines which processes are affected by the
failure
- initiates recovery action which involves abort
of ongoing database transactions.
The application processes issue TPS service calls begin-transaction,
end-transaction, and abort-transaction. They are structured
as a sequence of database commands enclosed by begin/end
transaction verbs. Each "end-transaction" indicates
commitment of database updates and a WP checkpoint. WP
checkpointing is only used for very time consuming application
programs.
In most cases end-transaction indicates a termination
of process execution.
The transaction concept is characterized by the following
properties:
- atomicity:
it either goes through or never starts
- unit of recovery:
in case of WP abort it is possible to restart the WP
from the end of the last transaction
- unit of concurrency:
transactions appear to be serialized; this is enforced
by read/write locks and SP delays; the order of actions
within a transaction is maintained
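A skeleton of a worker process following this structure;
the tps_* service-call names and the db_command() call
are assumptions introduced for the illustration.

    extern void tps_begin_transaction(void);  /* TPS service calls       */
    extern void tps_end_transaction(void);    /* commits and checkpoints */
    extern void tps_abort_transaction(void);
    extern int  db_command(const char *cmd);  /* via the DB/TPS manager;
                                                 0 on success            */

    /* A WP: a sequence of database commands enclosed by the
       begin/end transaction verbs; on failure the partial
       updates are backed out.                                   */
    void worker(void)
    {
        tps_begin_transaction();
        if (db_command("replace ...") == 0
            && db_command("append ...") == 0)
            tps_end_transaction();      /* commit the updates */
        else
            tps_abort_transaction();    /* undo partial work  */
    }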
3.4.1.2 T̲r̲a̲n̲s̲a̲c̲t̲i̲o̲n̲ ̲P̲r̲o̲c̲e̲s̲s̲i̲n̲g̲ ̲S̲u̲b̲s̲y̲s̲t̲e̲m̲
The transaction processing subsystem provides, in conjunction
with the communication processing subsystem, the following
services:
- facilities to support on-line interaction between
operator workstations and the host system and application
processes
- determines internal routing & scheduling of input
requests to appropriate processes and processors
- is able to process many simultaneous requests.
- provides operator services to allow the operator
to start background transactions
- monitors and collects performance data
- requests creation and start of application processes
to service input messages
- maintains input/output data queues
- is able to recover from CRDB/WP failure and ensure
database consistency
The transaction I/O processes are responsible for forwarding
the input message to TPS and for sending output data
back to the workstation processes.
3.4.2 S̲o̲f̲t̲w̲a̲r̲e̲ ̲D̲e̲v̲e̲l̲o̲p̲m̲e̲n̲t̲ ̲O̲p̲e̲r̲a̲t̲i̲n̲g̲ ̲S̲y̲s̲t̲e̲m̲ ̲-̲ ̲U̲N̲I̲X̲
As one of the possible systems built on the ORION kernel,
the UNIX* system III has been implemented.
The UNIX operating system gives the user a comprehensive
set of facilities, commands, and tools:
o A hierarchical file system incorporating dismountable
volumes.
o Compatible file, device, and inter-process I/O.
o The ability to initiate asynchronous processes.
o System command language selectable on a per user
basis.
o Over 100 subsystems including a dozen languages.
3.4.3 C̲o̲m̲p̲i̲l̲e̲r̲s̲
A̲D̲A̲**
The ADA compiler which is supplied with the ORION system
was developed under the EEC sponsored Portable ADA
Programming System (PAPS) project. The PAPS ADA compiler
is currently being rehosted and retargeted to MC68000/ORION
and is expected to be validated by the end of 1984.
P̲a̲s̲c̲a̲l̲,̲ ̲F̲o̲r̲t̲r̲a̲n̲,̲ ̲C̲o̲b̲o̲l̲
Compilers for these languages are standard compilers
which are available under UNIX.
A̲l̲g̲o̲l̲ ̲6̲8̲
An Algol 68 compiler is not foreseen with the ORION
system. However, if a commercially available compiler
can be found it will be included in the system.
* UNIX is a trademark of Bell Laboratories
** ADA is a registered trademark of the U.S. Government,
AJPO.
3.4.4 M̲i̲s̲c̲e̲l̲l̲a̲n̲e̲o̲u̲s̲ ̲C̲o̲m̲m̲a̲n̲d̲s̲
Some of the more important UNIX utilities are listed
below.
The "vi" screen editor, which puts up a screen of text
at a time (unless a smaller window is specified) and
allows rapid cursor motion to where the user wants
to perform editing. With "vi" editing can be done on
characters, words, lines, or sections at a time.
The Source Code Control System (SCCS), which is a configuration
management facility, controlling and storing versions
of source code and object modules together with their
related documentation.
The CALCON project management control system, which
provides:
o data base management
o flexible report formats
o cost/schedule analysis
o resource planning/control
o milestone tracking
The adb general purpose debugging program. It may be
used to examine files containing core dumps, and to
provide a controlled environment for the execution
of UNIX programs.
3.5 S̲Y̲S̲T̲E̲M̲ ̲T̲E̲S̲T̲ ̲P̲R̲O̲G̲R̲A̲M̲S̲
Diagnostic software exists for all major hardware components
of the system.
Two different kinds of diagnostic software are available:
- check programs
- test programs
Check programs execute in background mode, while the
system is operational. A check program exercises
a specific hardware module. The check is supplementary
to the built-in test available in hardware. The result
of the check is reported by the general exception handling
mechanisms of the system. The reporting may therefore
be implemented by a project specific object.
Test programs perform an extensive test of a hardware
module which is not running operationally. The reporting
is performed on the operator's console, on a disk file, or
to a project defined procedure.
The diagnostic programs execute as standard ORION programs
whenever applicable. This implies that the general
security checks will be applied. For certain tests,
however, the operator must be cleared to the highest
security level to perform a thorough test (e.g. a
CPU test).
3.6 S̲U̲M̲M̲A̲R̲Y̲
The software for the UKAIR CCIS ADPE consists of a set
of software packages. These packages are either standard
packages or packages special to the UKAIR CCIS. The
packages are divided into the following:
(1) Basic Operating System, with
- ORION Kernel
- basic file system
- basic X.25 protocol
- CRDB host interface
- Spooling utilities.
(2) UNIX and development support environment, with
- Screen Editor
- Source Code Control System
- Link Editor
- General Purpose Test and Debug Facility
- Project Management Facility.
(3) Compilers, working under UNIX, e.g.
- ADA
- PASCAL
- FORTRAN
- COBOL
(4) CRDB System Software.
(5) High Level Operating System, common for the Transaction
Processing and the Communication Processing Subsystems.
(6) Transaction Processing Subsystem for the UKAIR
CCIS application programs.
(7) Communication Processing Subsystem for the UKAIR
CCIS workstations, and external communication interfaces
as well as intra-workstation communication and
CCIS support.
The packages 5-7 are customer specific packages especially
tailored to the complex system of UKAIR CCIS. These
packages are developed for UKAIR; the remaining packages
are standard CR90 packages. The standard packages are
priced according to usage, and pricing is therefore related
to the number of Processor Sections (CR90) or development
systems (CR32). The packages and compilers are listed
in the Commercial Proposal, chapter 3.