LKSAA - VOLUME II
Part 1: TECHNICAL PROPOSAL
Issue 1.5    SYS/84-06-15
2.2.2 G̲e̲n̲e̲r̲a̲l̲ ̲D̲e̲v̲e̲l̲o̲p̲m̲e̲n̲t̲ ̲a̲n̲d̲ ̲M̲a̲i̲n̲t̲e̲n̲a̲n̲c̲e̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
2.2.2.1 P̲r̲o̲g̲r̲a̲m̲m̲i̲n̲g̲ ̲L̲a̲n̲g̲u̲a̲g̲e̲s̲
The following languages are presently available:
- Cobol
- Fortran 77
- Pascal
- SWELL, the CR80 System Programming Language
- Assembler
- Ada
SWELL is the system programming language for the CR80 series of computers. In the design of SWELL, the programming language PASCAL has been used as a model because of the facilities PASCAL offers for systematic and structured programming. However, in SWELL, program execution is not supported by a run-time system, and no facilities that might hide the architecture of the computer are implemented. Therefore, only constructs that have a direct and efficient implementation on the CR80 are included in the language.
In addition to the languages mentioned above, a Parsing System is available. The Parsing System is a set of tools for table-driven text scanning and syntax analysis.
The LKSAA software is/will be developed in the SWELL
Language.
2.2.2.2 U̲t̲i̲l̲i̲t̲y̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
A wide range of utility software is available for the
CR80.
The following utilities can be mentioned:
- Interactive Editor
- Batch Editor
- Text Formatting Program
- File Merge Program
- File Concatenator
- File Copy Program
- Directory Copy Program
- File Display Program
- Dump Program
- File Compare Program
- File Conversion Utility Programs
- Library and Directory Maintenance Utilities
A utility program is started by entering a command line at a terminal. The command line generally consists of the program name followed by a number of parameters which are program specific.
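This convention maps directly onto how a program sees its command line. The fragment below is a generic illustration only (plain C, not SWELL, and not an actual CR80 utility), showing the program name followed by its program-specific parameters.

    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        /* argv[0] is the program name; argv[1..] are the
           program-specific parameters from the command line */
        printf("utility:   %s\n", argv[0]);
        for (int i = 1; i < argc; i++)
            printf("parameter: %s\n", argv[i]);
        return 0;
    }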
2.2.2.3 B̲i̲n̲d̲e̲r̲
The utility SYSGEN-EDIT generates object files - based upon a set of directives, a system source, and command files - for subsequent compiling and linking. A BINDER then binds the system object together with the application object based upon a command file from SYSGEN-EDIT. All the external references of the object modules are resolved in the binder output, which is a load module ready for execution. Load modules compiled from different languages can be executed. The BINDER produces a listing giving memory layout, module sizes, etc.
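The resolution of external references can be pictured with a small, generic example (plain C rather than SWELL, and not actual BINDER input): a symbol referenced in one object module is matched to its definition in another when the modules are bound into a single load module.

    /* first module (e.g. the application object): uses an external symbol */
    /* file: main.c */
    #include <stdio.h>

    extern int checksum(const char *text);   /* defined in another module */

    int main(void)
    {
        printf("checksum = %d\n", checksum("LKSAA"));
        return 0;
    }

    /* second module (e.g. the system object): supplies the definition */
    /* file: checksum.c */
    int checksum(const char *text)
    {
        int sum = 0;
        while (*text != '\0')
            sum += *text++;
        return sum;
    }

Compiled separately, the first module contains an unresolved reference to checksum; only the binding step resolves it and produces one executable image, analogous to the load module described above.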
2.2.2.4 T̲e̲s̲t̲ ̲T̲o̲o̲l̲s̲
A number of test tools are available for the CR80 and
the following can be mentioned:
- Patch Program
- Disassembler
- On-Line Test Output Facility
- Off-Line Log Editor
- Interactive Symbolic Debugger
The SWELL DEBUGGER is a tool for detecting logical errors in SWELL programs.
The debugger provides functions for the following:
o Insertion, deletion and inspection of breakpoints
o Listing of current values of variables referenced
by symbolic name
o Assignment of new values to variables referenced
by symbolic name
o Maintenance of a cyclic log of the last 16 procedure calls
o Process communication, to/from synchronization elements identified by symbolic names in the SWELL program
o Low level features, such as access to registers and dumping and changing locations referenced by address
The program to be debugged is loaded and executed by
the debugger, which acts as a small operating system.
The necessary information for referencing the objects in the SWELL program by symbolic names is provided by the SWELL compiler and the LINKER.
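The cyclic log of the last 16 procedure calls can be pictured as a simple ring buffer. The sketch below is a generic illustration in C; the names and the program are invented for the example and are not part of the SWELL debugger.

    #include <stdio.h>
    #include <string.h>

    #define LOG_DEPTH 16    /* the debugger keeps the last 16 procedure calls */

    static char call_log[LOG_DEPTH][32];   /* logged procedure names */
    static int  next_slot;                 /* where the next entry goes */
    static int  total_calls;               /* calls logged so far */

    /* record one procedure call; the oldest entry is overwritten when full */
    static void log_call(const char *proc_name)
    {
        strncpy(call_log[next_slot], proc_name, sizeof call_log[0] - 1);
        call_log[next_slot][sizeof call_log[0] - 1] = '\0';
        next_slot = (next_slot + 1) % LOG_DEPTH;
        total_calls++;
    }

    /* list the logged calls, oldest first */
    static void dump_log(void)
    {
        int count = total_calls < LOG_DEPTH ? total_calls : LOG_DEPTH;
        int first = (next_slot - count + LOG_DEPTH) % LOG_DEPTH;
        for (int i = 0; i < count; i++)
            printf("%2d: %s\n", i + 1, call_log[(first + i) % LOG_DEPTH]);
    }

    int main(void)
    {
        log_call("INIT");
        log_call("READ_MSG");
        log_call("ANALYSE");
        dump_log();
        return 0;
    }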
2.2.3 H̲i̲g̲h̲ ̲L̲e̲v̲e̲l̲ ̲O̲p̲e̲r̲a̲t̲i̲n̲g̲ ̲S̲y̲s̲t̲e̲m̲ ̲(̲H̲I̲O̲S̲)̲
HIOS is an operating system, which provides the on-line
user interface for interactive and batch processing
on the CR80 computer.
The development and maintenance programs mentioned
above are operated as processes under the High Level
Operating System.
This provides a user-friendly interface by means of symbolic commands with mnemonic names and a structured syntax for parameters.
The functions performed by HIOS are the following:
- define system volume and directory
- define system device(s)
- assign/deassign terminal device(s)
- create/delete terminal subdevices
- assign/deassign of disk
- reserve/release of disk
- mount/dismount of volume
- update bitmap and basic file directory
- display name of user directory
- change user directory
- listing of current status of system
- redefine current input/output
- reopen original output file
- maintain a user catalogue
- redefine filesystem-dependent I/O resources
- control online log facility
- broadcast messages between terminals
- maintain a hotnews facility
- maintain a number of batch queues
- define spool files for later output
- login/logout of terminals
- load of program
- execute task
- stop and start task
- remove task
- display current date and time
- submit batch task
HIOS is activated by the ROOT process when the system
is bootloaded. After a short initialization phase the
production phase can be entered.
In the production phase, two kinds of users can log
in on the system:
- privileged users
- non-privileged users
The privilege of the user is checked at login-time by means of the user catalogue, and the privilege determines which functions the user can execute.
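A minimal sketch of such a login-time privilege check is given below; it is written in plain C for illustration, and the user catalogue layout, names and values are assumptions made for the example only.

    #include <stdio.h>
    #include <string.h>

    /* hypothetical user catalogue entry */
    struct user_entry {
        const char *user_id;
        int         privileged;   /* 1 = privileged, 0 = non-privileged */
    };

    static const struct user_entry catalogue[] = {
        { "SUPV01", 1 },
        { "USER07", 0 },
    };

    /* return the privilege of a user, or -1 if the user is unknown */
    static int lookup_privilege(const char *user_id)
    {
        for (size_t i = 0; i < sizeof catalogue / sizeof catalogue[0]; i++)
            if (strcmp(catalogue[i].user_id, user_id) == 0)
                return catalogue[i].privileged;
        return -1;
    }

    int main(void)
    {
        int priv = lookup_privilege("USER07");
        if (priv < 0)
            printf("login rejected: unknown user\n");
        else
            printf("login accepted (%s user)\n",
                   priv ? "privileged" : "non-privileged");
        return 0;
    }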
All functions contained in HIOS are executed under
the constraints of the security access control mechanisms
implemented in the DAMOS kernel. This means that unauthorized
access to any DAMOS, FMS or TMS object is impossible.
HIOS contains facilities for logging and timetagging of all user commands and related system responses; a minimal illustration of such a log record is sketched below. These facilities are used for:
- system recovery
- system performance and load monitoring
- user assistance
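The following is a small, self-contained C sketch of a timetagged command-log entry of the kind described above; the record layout and the sample command text are assumptions for illustration, not the actual HIOS log format.

    #include <stdio.h>
    #include <time.h>

    /* hypothetical timetagged entry for one user command and its response */
    struct hios_log_entry {
        time_t when;            /* timetag */
        char   terminal[8];     /* originating terminal */
        char   command[40];     /* command line as entered */
        char   response[40];    /* related system response */
    };

    int main(void)
    {
        struct hios_log_entry e = {
            .terminal = "VDU03",
            .command  = "MOUNT VOLUME SYS1",
            .response = "OK",
        };
        char stamp[32];

        e.when = time(NULL);
        strftime(stamp, sizeof stamp, "%Y-%m-%d %H:%M:%S", localtime(&e.when));
        printf("%s %s: %s -> %s\n", stamp, e.terminal, e.command, e.response);
        return 0;
    }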
2.2.4 S̲e̲c̲u̲r̲i̲t̲y̲ ̲S̲y̲s̲t̲e̲m̲ ̲
The security facilities of the proposed system are very comprehensive and are designed to meet the requirements of the United States Department of Defense for secure systems, as specified in detail in section 2.7.
In the following only the specific requirements from
the RFP are highlighted:
2.2.4.1 A̲c̲c̲e̲s̲s̲ ̲C̲o̲n̲t̲r̲o̲l̲
Access to the LKSAA System is monitored and controlled
by trusted software and a separate security log is
maintained. Access can only be obtained by means of
password, user identification and key.
2.2.4.2 E̲r̲r̲o̲r̲ ̲P̲r̲o̲c̲e̲s̲s̲i̲n̲g̲
2.2.4.2.1&2 U̲s̲e̲r̲ ̲E̲r̲r̲o̲r̲s̲ ̲a̲n̲d̲ ̲S̲y̲s̲t̲e̲m̲ ̲E̲r̲r̲o̲r̲s̲
Detected errors are logged, and error reports containing information in plain text are displayed on the user VDU's if a user error occurs, or on the Watchdog if a system error occurs. This is described in more detail in part 2, section 4.
2.2.4.2.3 P̲o̲s̲t̲-̲M̲o̲r̲t̲e̲m̲-̲P̲r̲o̲t̲o̲c̲o̲l̲
An operating system error will be interpreted by the COPSY Error Processor, and a report explaining the detected error and the action to be taken will be displayed on the Watchdog Printer. In case of a fatal error, the Watchdog will automatically generate a switchover to the standby processor unit. A switchover can also be initiated by the operator. Software for partial or total dumps, analysis and diagnostics is available.
2.2.4.2.4 R̲e̲s̲t̲a̲r̲t̲
A complete set of levelled restart commands is provided to allow the total system to restart and recover to the previous state, either automatically by a Watchdog-initiated switchover or under operator control. The operator can execute two types of warm starts, two types of dead starts and a cold start.
2.2.4.3 C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲,̲ ̲R̲e̲c̲o̲v̲e̲r̲y̲ ̲a̲n̲d̲ ̲R̲e̲s̲t̲a̲r̲t̲
2.2.4.3.1 I̲n̲t̲e̲g̲r̲i̲t̲y̲
The integrity is achieved by dualizing all vital parts.
There are two redundant Processor Units. The Channel
Unit contains fully dualized Channel Interface Adapters,
I/O Busses, and power supplies. The on-line disks are
"mirrored".
The Line Termination Units (LTU's) can optionally be offered as "mirrored".
One of the parts performs the operation under normal
conditions. If a fault occurs in this part, the Watchdog
immediately switches over to the standby part without
loss of data. The fault is automatically reported to
the technician, who will then run off-line diagnostic test programs on the faulty part in order to locate and repair the fault (e.g. by module replacement). Thanks to the automation of this procedure, the mean time to repair can be kept very low.
A detail which ensures the high reliability of the
hardware is that each unit is galvanically isolated
from the others. This limits the possibility for a
fault in one unit to damage other units.
2.2.4.3.2 C̲e̲n̲t̲r̲a̲l̲ ̲R̲e̲c̲o̲v̲e̲r̲y̲ ̲F̲u̲n̲c̲t̲i̲o̲n̲s̲
a. C̲h̲e̲c̲k̲-̲p̲o̲i̲n̲t̲ ̲F̲u̲n̲c̲t̲i̲o̲n̲
As checkpointing is event based - that is, checkpoints are defined as points in the message propagation through the system where a transition from one stage to the next takes place - the checkpoint function is required to record, at these points, the information about the message that is necessary to recover it to the last state where it was checkpointed. As the message itself is stored on disk, the information required to recover it is the M̲essage C̲ontrol B̲lock (MCB) identifying it and its state, plus index information pointing to where the message is stored (a sketch of such a checkpoint record is given after the list of recovery functions below).
The recovery functions are:
b. C̲o̲n̲s̲i̲s̲t̲e̲n̲t̲ ̲C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲s̲
Central system functions ensuring that update of
MSG, queues, tables, files etc. is done in a way
that ensures that related updates are either all
performed or none of them performed.
c. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲G̲e̲n̲e̲r̲a̲t̲i̲o̲n̲
Central system function for generation of checkpoint
data transparent to the individual sub-systems
and transferring these data to disk and/or standby
PU.
d. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲R̲e̲c̲e̲p̲t̲i̲o̲n̲
Central system function (SS&C) for receiving checkpoint
data from the active PU and maintaining a pool
of "active checkpoints".
e. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲R̲e̲s̲t̲o̲r̲e̲
Central system function to enable regeneration
of system state from checkpoint data previously
generated. These data may come from disk in the
case of warm start or from standby PU in the case
of switch-over.
f. S̲t̲a̲n̲d̲-̲b̲y̲ ̲P̲U̲ ̲C̲o̲o̲r̲d̲i̲n̲a̲t̲i̲o̲n̲
Central System Functions (SS&C) for re-establishing
operation of stand-by PU by transferring programs,
tables, index, saved checkpoint data etc. to it.
g. Q̲u̲e̲u̲e̲ ̲C̲o̲n̲t̲r̲o̲l̲ ̲a̲t̲ ̲R̲e̲s̲t̲a̲r̲t̲
Central System Functions to enable the required
manipulation of queues after restart and before
the system becomes fully operational.
h. S̲a̲v̲e̲ ̲S̲y̲s̲t̲e̲m̲ ̲D̲a̲t̲a̲
Central System Functions for storing of queues,
tables, index etc. on disk prior to ordered close
down.
i. R̲e̲s̲t̲o̲r̲e̲ ̲S̲y̲s̲t̲e̲m̲ ̲D̲a̲t̲a̲
Central System Functions for re-establishing queues,
tables, index etc. in main memory during restart
after ordered close down.
j. D̲i̲s̲k̲ ̲S̲y̲s̲t̲e̲m̲ ̲I̲n̲t̲e̲g̲r̲i̲t̲y̲
The mirrored disk concept protects against "hard" faults as well as "soft" faults and thus obviates a special "safe update" mechanism.
2.2.4.3.3 C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲s̲
The points in the message flow through the system that define completion of a defined task are called checkpoints; they are described individually, as an example, in this section (ref. Figure 2.2.4.3-1). A small illustrative sketch of the per-message recovery state follows the checkpoint descriptions below.
a. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲1̲
Incoming messages are stored, sent to the IMQ and checkpointed. After analysis and possibly Garble Correction, "normal" messages are sent to the MDQ and Service MSG to the SVQ. Recovery at this stage will reestablish the IMQ and thus require the actions prior to the next checkpoint to be repeated.
In the case of discontinuity in channel seq-no, supervisor action is invoked (for sending out service messages etc.).
In the case of a flash message, the flash message
receipt will be sent to the OMQ (and thus checkpointed)
before the message itself is sent to the MDQ and
checkpointed.
b. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲2̲
All messages, comments and release notifications
sent for distribution will go into the MDQ and
be checkpointed. They will be distributed to all
recipients possibly with assistance from MDCO.
c. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲3̲
All messages, comments and release notifications
that have passed distribution (with or without MDCO interaction) are checkpointed, avoiding repeated distribution in case of recovery.
d. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲4̲
All messages including those presented for coordinators,
comments, and release notifications presented at
a terminal are checkpointed. This means that checkpointing
is performed on a "presented per terminal" basis, avoiding re-presentation after recovery.
This applies to printing where applicable.
e. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲5̲
Messages and comments created during Initial Preparation
are checkpointed and recoverable to the last segment
stored on disk. After completion of that phase
(with possible correction) the message is completely
recoverable in this "initial version". After completion
of editing, a "current version" exists and the
corresponding checkpointing is done. After that,
this version is recoverable.
Input from OCR is recoverable when complete.
f. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲6̲
Messages sent for release are sent to the RLQ and
checkpointed. When the messages are released,
rejected or deferred, they are removed from the RLQ; prior to that, they are recovered in the RLQ.
g. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲7̲
Messages released are sent to the MRQ for routing
and checkpointed. They are thus recovered there.
The Release notification (with possible comment)
is sent to the MDQ and checkpointed there.
h. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲8̲
Messages that have passed routing (possibly with
Routing assistance or Group Count assignment) are
sent to the OMQ for transmission or MDQ for local
distribution and checkpointed.
i. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲9̲
When messages are transmitted, a checkpoint meaning "message finished" is sent. Messages for which an acknowledgement is expected will remain in the OMQ until reception of the acknowledgement; then the above action will be performed. In case of a channel failure preventing the message from being sent, it will be returned to the MRQ and checkpointed.
All messages recovered in the OMQ will be retransmitted.
j. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲1̲0̲
Messages rejected or deferred are removed from
the RLQ and together with the notification (with
comment) sent to the MDQ. They are checkpointed
together in an indivisible operation.
k. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲1̲1̲
Messages directed to supervisor or service messages
are sent to the SVQ and checkpointed. Prior to
that, they will be recovered from previous checkpoint.
l. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲1̲2̲
Messages presented for the supervisor are checkpointed
as finished. Prior to that, they are recovered in the SVQ and re-presented.
m. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲1̲3̲
Messages that message service decides shall be
stopped are checkpointed as finished.
n. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲1̲4̲
Messages that MDCO decides shall be stopped are
checkpointed as finished.
o. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲1̲5̲
Messages from external networks sent for release,
rejected or deferred are checkpointed as finished.
The associated comment is sent to MRQ and checkpointed.
p. C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲ ̲1̲6̲
Messages prepared by the supervisor are treated
as described for point 5.
Figure 2.2.4.3-1
Checkpoints
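As a purely illustrative sketch of the flow above (the queue names are taken from the text; everything else is invented for the example), each message can be thought of as carrying the queue it was last checkpointed into, which is also where it is recovered after a restart or switch-over.

    #include <stdio.h>

    /* queues named in the checkpoint descriptions above */
    enum queue_id { IMQ, MDQ, SVQ, RLQ, MRQ, OMQ, FINISHED };

    static const char *queue_name[] =
        { "IMQ", "MDQ", "SVQ", "RLQ", "MRQ", "OMQ", "finished" };

    /* hypothetical per-message recovery state */
    struct msg_state {
        unsigned long mcb_id;            /* Message Control Block identity */
        enum queue_id last_checkpoint;   /* queue the message is recovered into */
    };

    int main(void)
    {
        /* e.g. a released message checkpointed at point 7 sits in the MRQ;
           after recovery it is routed again from there                     */
        struct msg_state m = { 4711UL, MRQ };

        printf("message %lu recovered into %s\n",
               m.mcb_id, queue_name[m.last_checkpoint]);
        return 0;
    }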
2.2.4.3.4 R̲e̲c̲o̲v̲e̲r̲y̲
2.2.4.3.4.1 F̲a̲i̲l̲u̲r̲e̲ ̲T̲y̲p̲e̲s̲
a. M̲i̲n̲o̲r̲ ̲E̲r̲r̲o̲r̲s̲
These are defined as errors that can be recovered
from without switch-over to the stand-by PU and
without system reload.
P̲r̲o̲c̲e̲s̲s̲ ̲F̲a̲i̲l̲u̲r̲e̲
- Illegal operation
- Security violation attempt
- etc.
P̲r̲o̲c̲e̲s̲s̲ ̲D̲e̲t̲e̲c̲t̲e̲d̲ ̲F̲a̲i̲l̲u̲r̲e̲
- HW error (other than PU failure)
- Certain types of file corruption
- Resource error
- etc.
O̲p̲e̲r̲a̲t̲i̲n̲g̲ ̲S̲y̲s̲t̲e̲m̲ ̲D̲e̲t̲e̲c̲t̲e̲d̲ ̲F̲a̲i̲l̲u̲r̲e̲s̲
- Failure of one "mirrored" disk
- Failure of one "mirrored" Line Termination Unit (LTU) (if this option is selected)
The "mirrored" LTU's will prevent loss of characters
if one of the mirrored LTU's is faulty.
b. S̲i̲n̲g̲l̲e̲ ̲S̲y̲s̲t̲e̲m̲ ̲F̲a̲i̲l̲u̲r̲e̲s̲
These are defined as HW errors limited to one PU, and thus recoverable by switch-over to stand-by:
- Any active PU failure.
c. T̲o̲t̲a̲l̲ ̲S̲y̲s̲t̲e̲m̲ ̲F̲a̲i̲l̲u̲r̲e̲s̲
These are defined as malfunction of the total dualized
configuration:
- Simultaneous failure of dualized equipment
- Single PU failure when stand-by PU not available
- Power failure
d. S̲o̲f̲t̲w̲a̲r̲e̲ ̲S̲y̲s̲t̲e̲m̲ ̲E̲r̲r̲o̲r̲s̲
These are defined as programming errors (other
than minor errors) resulting in system break and
recoverable by system software reload and restart.
e. D̲i̲s̲a̲s̲t̲r̲o̲u̲s̲ ̲E̲r̲r̲o̲r̲s̲
These are defined as errors resulting in loss of
vital system files and thus not completely recoverable
with respect to:
- Messages
- Accountability
- Log
- Statistics
- System parameters
2.2.4.3.4.2 R̲e̲c̲o̲v̲e̲r̲y̲ ̲A̲c̲t̲i̲o̲n̲s̲
1̲.̲ E̲r̲r̲o̲r̲ ̲F̲i̲x̲-̲u̲p̲
This action is applicable to recovery from minor errors.
2̲.̲ S̲w̲i̲t̲c̲h̲-̲o̲v̲e̲r̲ ̲t̲o̲ ̲S̲t̲a̲n̲d̲-̲b̲y̲ ̲P̲U̲
This action is taken upon detection of a fault in the
active PU when the stand-by PU is available.
3̲.̲ R̲e̲s̲t̲a̲r̲t̲ ̲o̲f̲ ̲S̲t̲a̲n̲d̲-̲b̲y̲ ̲P̲U̲
This action is required when a PU is returned to the
system (after repair or off-line operation) for use
as a stand-by PU.
4̲.̲ R̲e̲s̲t̲a̲r̲t̲ ̲a̲f̲t̲e̲r̲ ̲C̲l̲o̲s̲e̲ ̲D̲o̲w̲n̲
This action is taken when the system is brought back
into operation following an ordered close down.
5̲.̲ T̲o̲t̲a̲l̲ ̲S̲y̲s̲t̲e̲m̲ ̲R̲e̲l̲o̲a̲d̲ ̲
This action will be taken upon failure of the total
system due to HW or SW, after the error condition is
detected and removed.
6̲.̲ I̲n̲i̲t̲i̲a̲l̲ ̲S̲t̲a̲r̲t̲-̲u̲p̲
This action is taken at first time load of the system
(following installation) and is required to bring the
system back to operational use following a disastrous
error.
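The correspondence between the failure types (a. to e.) and the recovery actions (1. to 6.) listed above can be summarized in a small decision sketch. The C fragment below simply restates that mapping for illustration; actions 3 and 4 are operator/maintenance driven and are therefore not selected by a failure.

    #include <stdio.h>

    /* failure types a.-e. and recovery actions 1.-6. from the text above */
    enum failure  { MINOR, SINGLE_SYSTEM, TOTAL_SYSTEM, SOFTWARE_SYSTEM, DISASTROUS };
    enum recovery { ERROR_FIXUP = 1, SWITCH_OVER, RESTART_STANDBY_PU,
                    RESTART_AFTER_CLOSE_DOWN, TOTAL_RELOAD, INITIAL_STARTUP };

    /* recovery action selected when a failure is detected */
    static enum recovery recover_from(enum failure f)
    {
        switch (f) {
        case MINOR:           return ERROR_FIXUP;       /* action 1 */
        case SINGLE_SYSTEM:   return SWITCH_OVER;       /* action 2 */
        case TOTAL_SYSTEM:
        case SOFTWARE_SYSTEM: return TOTAL_RELOAD;      /* action 5 */
        case DISASTROUS:      return INITIAL_STARTUP;   /* action 6 */
        }
        return ERROR_FIXUP;   /* not reached */
    }

    int main(void)
    {
        printf("active PU failure -> recovery action %d\n",
               (int)recover_from(SINGLE_SYSTEM));
        return 0;
    }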
2.2.4.3.4.3 R̲e̲c̲o̲v̲e̲r̲y̲ ̲L̲e̲v̲e̲l̲
The level of recovery is defined by the checkpoints
and depends upon the type of failure.
a. L̲e̲v̲e̲l̲ ̲0̲
This is the state resulting from cold start. The
system is empty (except for possibly the HDB) which
means that no processing will take place before
external activity starts (incoming messages, sign-in,
retrieval etc.).
b. L̲e̲v̲e̲l̲ ̲1̲ ̲(̲R̲e̲s̲t̲a̲r̲t̲ ̲a̲f̲t̲e̲r̲ ̲T̲o̲t̲a̲l̲ ̲S̲y̲s̲t̲e̲m̲ ̲F̲a̲i̲l̲u̲r̲e̲)̲
This is the state resulting from total system failure
and recovery is based upon data on disk. Disk
system integrity checks on vital data are performed
to ensure validity. Then system data is reloaded
and checkpoint data read from disk and used to
restore queues, directories, status information
etc. The checkpoint data written to disk are from
points 1, 5, 16 where msg are "created" and points
4, 9, 12, 13, 14, 15 where msg are "finished".
Point 4 only when distribution is fully completed.
Point 5 only when preparation completed. Status
reports generated and not yet presented will be
lost, as the content of them is likely to be out
of date. They can then be requested by the users
and they will cover the time interval set in system
parameters (by supervisor).
Global No Series related to msg will be recovered.
c. L̲e̲v̲e̲l̲ ̲2̲ ̲(̲S̲w̲i̲t̲c̲h̲-̲o̲v̲e̲r̲)̲
This is the state following switch-over to the
standby PU. No disk system integrity checking
is required as the mirrored pair of disks are identical.
Recovery is based upon the checkpoint data collected
in the standby PU prior to switch-over and thus
the recovery level is to the last checkpoint.
Status reports generated but not yet presented
are lost.
Global No Series related to msg will be recovered.
2.2.5 A̲p̲p̲l̲i̲c̲a̲t̲i̲o̲n̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
a. S̲o̲f̲t̲w̲a̲r̲e̲ ̲C̲o̲n̲f̲i̲g̲u̲r̲a̲t̲i̲o̲n̲
The LKSAA software consists of three major subsystems.
In turn, these subsystems are subdivided into
"packages". A package is a convenient grouping
of functions that are performed by software (and
firmware). During the detailed design stage, the
software modules that form the packages will be
identified. For the purposes of this proposal, however, the packages are informally divided, where convenient, into functions.
b. S̲o̲f̲t̲w̲a̲r̲e̲ ̲S̲u̲b̲s̲y̲s̲t̲e̲m̲s̲
There are 3 subsystems:
- S̲y̲s̲t̲e̲m̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲. This subsystem contains packages
which provide support for, and control of the
resources of the system. The packages constituting
the system software are the only packages which
are allowed to execute in the "Privileged -user"
state.
- A̲p̲p̲l̲i̲c̲a̲t̲i̲o̲n̲s̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲. This subsystem contains
packages that perform the functional capabilities
defined by the LKSAA requirement. The packages
operate in an environment provided by the system
software subsystem.
- S̲u̲p̲p̲o̲r̲t̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲. This subsystem consists
of all the software that is not part of the
operational system.
c. S̲o̲f̲t̲w̲a̲r̲e̲ ̲P̲a̲c̲k̲a̲g̲e̲s̲
Figure 2.2.5-1 lists the subsystems and packages and gives a brief functional description. Figure 2.2.5-2 shows a break-down of software packages into subpackages.
d. S̲y̲s̲t̲e̲m̲ ̲O̲v̲e̲r̲v̲i̲e̲w̲
Figure 2.2.5-3 gives a system overview and shows
the main functional flows and the security kernel
and monitors.
Figure 2.2.5-4 shows the supervisory functions.
The supervisory functions for Message Service Assistance,
Message Distribution Assistance (MDCO), and System
and Network Control (SUPERVISOR) can be assigned
to several VDU's during system generation and may
be changed dynamically by the primary supervisor.
The system can have more than one SUPERVISOR, but only one VDU at a time can be assigned capabilities for system reconfiguration (primary SUPERVISOR).
The user functions can likewise be assigned dynamically to the user VDU's.
SUB-SYSTEM        PACKAGE                 MAJOR FUNCTIONS

                  KERNEL (KER)            STANDARD OPERATING SYSTEM

                  I/O CONTROL (IOC)       I/O SYSTEM (DOS)
                                          TERMINAL HANDLING SYSTEM (THS)

                  SHARED MEMORY MANAG
Figure 2.2.5-2  LKSAA Application Software Packages and Subpackages
Figure 2.2.5-3  LKSAA System Overview, Functional Flows
Figure 2.2.5-4  LKSAA Supervisory Functions
2.2.5.1 M̲e̲s̲s̲a̲g̲e̲ ̲C̲o̲r̲r̲e̲c̲t̲i̲o̲n̲ ̲P̲r̲o̲g̲r̲a̲m̲s̲
Correction of message text may be initiated from the
user VDU processes or the MSO processes.
2.2.5.2 M̲e̲s̲s̲a̲g̲e̲ ̲P̲r̲o̲c̲e̲s̲s̲i̲n̲g̲
2.2.5.2.1 M̲e̲s̲s̲a̲g̲e̲ ̲P̲r̲o̲c̲e̲s̲s̲i̲n̲g̲ ̲P̲r̲o̲g̲r̲a̲m̲s̲
Analysis of messages is handled by the Analysis Process, and message synthesis is handled by the Conversion Process. These processes, which are part of the Traffic Handling Package, will select messages according to type of traffic, priority, classification, addresses and route. All incoming and outgoing messages are stored via the Storage and Retrieval Process. Local distribution is handled by the Message Distribution Process.
2.2.5.2.2 S̲t̲o̲r̲a̲g̲e̲
The Analysis and Conversion processes communicate with the Storage and Retrieval Process, which stores the
messages along with control data and retrieval keys.
Validation of incoming messages is done by the Analysis
Process.
2.2.5.2.3 T̲r̲a̲n̲s̲f̲o̲r̲m̲a̲t̲i̲o̲n̲ ̲a̲n̲d̲ ̲T̲r̲a̲n̲s̲m̲i̲s̲s̲i̲o̲n̲
After analysis of the control data, the Analysis Process transforms the message to the internal AA-Message-Format if required.
Transmission of messages is handled by the Transport Processes (one for each type).
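A highly simplified sketch of this processing chain is given below. It is illustrative only: the function names are invented, and in the real system the steps are separate communicating processes rather than direct calls within one program.

    #include <stdio.h>

    /* hypothetical message descriptor */
    struct msg {
        int         local;   /* 1 = local distribution, 0 = external transmission */
        const char *text;
    };

    /* Analysis Process: validate and transform to internal format if required */
    static void analyse(struct msg *m)          { printf("analysed:    %s\n", m->text); }

    /* Storage and Retrieval Process: store message, control data, retrieval keys */
    static void store(const struct msg *m)      { printf("stored:      %s\n", m->text); }

    /* Message Distribution Process and Transport Process */
    static void distribute(const struct msg *m) { printf("distributed: %s\n", m->text); }
    static void transmit(const struct msg *m)   { printf("transmitted: %s\n", m->text); }

    int main(void)
    {
        struct msg m = { 1, "incoming message" };

        analyse(&m);        /* Analysis Process       */
        store(&m);          /* Storage and Retrieval  */
        if (m.local)
            distribute(&m); /* Message Distribution   */
        else
            transmit(&m);   /* Transport Process      */
        return 0;
    }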
2.2.5.3 N̲e̲t̲w̲o̲r̲k̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
Monitoring and control of the network are handled by
the Supervisor Process through a VDU dialog.
2.2.5.3.1 C̲o̲n̲f̲i̲g̲u̲r̲a̲t̲i̲o̲n̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
Management of the configuration for lines, devices,
processors, and software is handled by the Command
Interpreter Process of the System Status and Control
Package. The actual configuration is displayed on
the VDU by means of figures and tables, and the configuration
can be updated via a VDU dialog. A report printer is connected to the Watchdog, which monitors the processors.
2.2.5.3.2 C̲h̲a̲n̲g̲e̲s̲
As described in the previous section, the configuration can be changed dynamically, e.g. in relation to connection/disconnection of devices and lines and changes of characteristics such as transmission speed. This is done through the Configuration Handler. Reports will be analysed and reported via the Error Handler and Command Dispatcher to the Watchdog Printer.
2.2.5.4 L̲o̲g̲
All application processes will, by communicating with the Log Process, generate log records according to specified criteria; e.g. all incoming and outgoing messages will be logged along with control data, and all supervisor commands will be logged, as will command completion reports.
2.2.5.5 S̲t̲a̲t̲i̲s̲t̲i̲c̲s̲
Statistics from application processing will be monitored,
collected, and generated via the Statistical Package.
Statistics are described in detail in part 2.
2.2.6 D̲i̲a̲g̲n̲o̲s̲t̲i̲c̲ ̲P̲r̲o̲g̲r̲a̲m̲s̲
The Maintenance and Diagnostic (M&D) package is a collection of standard test programs which are used to verify proper operation of the CR80 system and to detect and isolate faults to replaceable modules.
2.2.6.1 O̲f̲f̲-̲L̲i̲n̲e̲ ̲D̲i̲a̲g̲n̲o̲s̲t̲i̲c̲ ̲P̲r̲o̲g̲r̲a̲m̲s̲
The off-line M&D software package contains the following
programs:
- CPU Test Program
- CPU CACHE Test Program
- Memory Map Test Program
- RAM Test Program
- PROM Test Program
- Supra Bus I/F Test Program
- CIA Test Program
- LTU Test Program
- Disk System Test Program
- Magtape System Test Program
- Floppy Disk Test Program
- TDX-HOST I/F Test Program
- Card Reader and Line Printer Test Program
2.2.6.2 O̲n̲-̲L̲i̲n̲e̲ ̲D̲i̲a̲g̲n̲o̲s̲t̲i̲c̲ ̲P̲r̲o̲g̲r̲a̲m̲s̲
On-line Diagnostic programs will execute periodically
as part of the exchange surveillance system. On-line
diagnostics consists of a mixture of hardware module
built-in test and reporting, and diagnostic software
routines. The following on-line diagnostic capability
exists:
- CPU-CACHE diagnostic
- TRACE diagnostic
- RAM test
- PROM test
- MAP/MIA test
- STI test
- Disk Controller/DCA test
- Tape Controller/TCA test
- LTU/LIA test
On-line diagnostics will report errors to higher-level processing, which takes the recovery/switchover decision in the case of failures.
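The reporting path can be pictured as follows; this is a generic C illustration, and the report layout, module names and decision step are assumptions, not the actual surveillance interface.

    #include <stdio.h>

    /* hypothetical on-line diagnostic report */
    struct diag_report {
        const char *module;   /* e.g. "LTU/LIA" or "RAM" */
        int         failed;   /* result of the built-in test */
    };

    /* higher-level processing: decide on recovery or switch-over */
    static void handle_report(const struct diag_report *r)
    {
        if (r->failed)
            printf("%s test failed: request recovery/switchover decision\n",
                   r->module);
        else
            printf("%s test passed\n", r->module);
    }

    int main(void)
    {
        struct diag_report r = { "LTU/LIA", 1 };
        handle_report(&r);
        return 0;
    }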
2.2.7 O̲p̲t̲i̲o̲n̲a̲l̲ ̲S̲t̲a̲n̲d̲a̲r̲d̲ ̲P̲r̲o̲t̲o̲c̲o̲l̲s̲
Christian Rovsing A/S has been involved in many communication programs, and we have developed expertise in the implementation of all the most used communication protocols. As a consequence, Christian Rovsing A/S has available a number of interfaces to all the most common computer systems. If AA wants it as an option to the LKSAA or as a later enhancement of the system, the proposed CR80 equipment has proven to be well suited for implementation of various protocols. In contrast to many large main frame vendors, whose expertise resides only within their own types of main frames, Christian Rovsing A/S's expertise lies within interoperation of many different types of systems.
In the following, a short description of some of these protocols is given. They could easily be modified and used in an LKSAA context.
The LITSYNC Protocol is used by the NICS TARE System
in the NATO countries. It is a synchronous transmission
protocol, well suited for terrestrial and satellite
communications, because it can cope with long delays
between transmission and requests for retransmission.
IBM Channel Interface is a channel protocol which permits
fast and vast transmission of data to and from IBM 360 type systems without the normal communication overhead.
The IBM Binary Synchronous Communication Procedure (BSC) is used for the IBM 2780/3780 Protocol.
The IBM System Network Architecture (SNA) is implemented using the Virtual Telecommunication Access Method (VTAM) to interface IBM 3270-compatible equipment.
Univac Main Frame Protocols and Uniscope Terminal Protocols
have been implemented.
Honeywell systems like the 6000 Series have been interfaced using the DINDAC Procedure.
ICL Protocols have been implemented.
Siemens Main Frames have been interfaced by terminal
equipment emulating the MSV1 Procedure.
The Open Systems Interconnection Model proposed by ISO has been used to implement standard protocols like X.25. This has proven to be very flexible for the user, because the standard lower levels can be combined with specialized higher-level protocols.
Christian Rovsing A/S's vast experience with different standard protocols also ensures that specialized protocols can be implemented if circumstances require it.