A C C E S S
AUTOMATED COMMAND AND CONTROL
EXECUTIVE SUPPORT SYSTEM
DOC NO ACC/8004/PRP/001 ISSUE 1
PART II
TECHNICAL PROPOSAL
SUBPART D
SOFTWARE CHARACTERISTICS
SUBMITTED TO: AIR FORCE COMPUTER ACQUISITION CENTER (AFCAC)
Directorate of Contracting/PK
Hanscom AFB
MA. 01731
USA
IN RESPONSE TO: Solicitation No. F19630-82-R-0001
AFCAC Project 211-81
PREPARED BY: CHRISTIAN ROVSING A/S
SYSTEM DIVISION
LAUTRUPVANG 2
2750 BALLERUP
DENMARK
© Christian Rovsing A/S - 1982
This document contains information proprietary to Christian
Rovsing A/S. The information, whether in the form of text, schematics,
tables, drawings or illustrations, must not be duplicated or
used for purposes other than evaluation, or disclosed outside
the recipient company or organisation without the prior, written
permission of Christian Rovsing A/S.
This restriction does not limit the recipient's right to use
information contained in the document if such information is
received from another source without restriction provided such
source is not in breach of an obligation of confidentiality
towards Christian Rovsing A/S.
T̲A̲B̲L̲E̲ ̲O̲F̲ ̲C̲O̲N̲T̲E̲N̲T̲S̲:
4 SOFTWARE CHARACTERISTICS
4.1 SOFTWARE OVERVIEW ........................
4.2 SOFTWARE PACKAGES ........................
4.2.1 COMMUNICATIONS SOFTWARE ............
a. Initial Communications System ..........
b. Communication System Expansion .........
c. Interface to LONG-HAUL Communications Network ...
4.2.2 SYSTEM SOFTWARE ....................
a. Reserved ..........................
b. Operating System ..................
c. Data Base Management System (DBMS).
d. Language Processors ...............
e. General Utilities .................
f. Graphics ..........................
g. Text Editor .......................
h. Report Writer .....................
i. Spelling Corrector ................
j. Document Formatter ................
k. Statistics ........................
4.2.3 APPLICATIONS SOFTWARE ............
a. General Requirements ..............
b. Electronic Mail Subsystem .........
c. CINCSAC Executive Management Summary (CEMS) Subsystem ...
d. Project Monitoring Subsystem ......
e. Personnel and Manpower Subsystem ..
f. Budget Subsystem ..................
g. Physical Resource Monitoring (PRM) Subsystem ...
h. Routine Executive Support Subsystem
4. S̲U̲B̲P̲A̲R̲T̲ ̲D̲ ̲-̲ ̲S̲O̲F̲T̲W̲A̲R̲E̲ ̲C̲H̲A̲R̲A̲C̲T̲E̲R̲I̲S̲T̲I̲C̲S̲
4.1 S̲O̲F̲T̲W̲A̲R̲E̲ ̲O̲V̲E̲R̲V̲I̲E̲W̲
The following sections describe the Communications,
System and Application Software.
4.2 S̲O̲F̲T̲W̲A̲R̲E̲ ̲P̲A̲C̲K̲A̲G̲E̲S̲
4.2.1 C̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲s̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
L̲o̲c̲a̲l̲ ̲A̲r̲e̲a̲ ̲N̲e̲t̲w̲o̲r̲k̲
As an introduction to the Communications Software (prior
to addressing the RFP items: a. Initial Communications
System, b. Communication System Expansion, and c. Interface
to Long-Haul Communications Network), the local area
network concept is presented below by highlighting
the network topology, configuration, and functional
description.
N̲e̲t̲w̲o̲r̲k̲ ̲T̲o̲p̲o̲l̲o̲g̲y̲
The Local Area Network topology is based on a star
mesh of individually dualized, time division multiplexed
X-Net buses. Each X-Net bus is connected to a dualized
CR80 host computer via a front-end processor (STI). Communication
between individual X-Net buses is established through
the connected CR80 host computer.
The X-net local network design complies with the International
Standards Organization's seven-level Open Systems Interconnection
Reference Model. This facilitates the integration of
the X-Net with networks supplied by other manufacturers.
N̲e̲t̲w̲o̲r̲k̲ ̲C̲o̲n̲f̲i̲g̲u̲r̲a̲t̲i̲o̲n̲
Each X-Net bus configuration consists of:
- a double twisted pair cable (bus)
- an X-Net Controller
- up to 140 I/O devices
The I/O devices can be divided into:
- CR80 host interfaces (up to 12)
- serial link interfaces (up to 128)
Individual X-Net buses are interconnected either through
the mutual host computer or through the CR80 host-to-host
SUPRA-BUS.
F̲u̲n̲c̲t̲i̲o̲n̲a̲l̲ ̲D̲e̲s̲c̲r̲i̲p̲t̲i̲o̲n̲
A single or dualized X-Net bus is monitored and controlled
by the X-Net Controller which performs the following
tasks:
- Synchronize communication on the upper bus by inserting
a MUX-No. in the HDLC frame on the lower bus.
- Answer a bandwidth request and allocate bandwidth
according to the request.
- Poll the appended devices to collect diagnostic
information.
- Select one of two upper buses to optimize performance
in a dualized bus system.
The X-Net Controller outputs a continuous bit stream
of 1.8432 Mbit/sec. on the lower bus. This stream is
organized in frames of 288 bits each, 6400 per second.
All frames received from the "upper" bus are transmitted
on the "lower" bus delayed one frame. Only if the CRC
of a received frame is not correct or if the frame
is destined to Host No. 0 (the Controller itself),
the frame will not be swapped to the transmitter buffer.
When a received frame is distined to host No. 0, it
is loaded to the controller processor which is managing
the Mux table. The received frame may contain a request
for a changed bandwidth to a given X-Net device (BW-request).
Synchronization is achieved by inserting, as the second
byte in the HDLC frame on the lower bus, a Device No.
taken from a Mux table that is scanned according to
the speed level assigned to each device of the X-Net
system.
FIG. 4.2.1-1
X-NET FRAME LAYOUT
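For illustration only, the 288-bit frame described above may be modelled as a C structure as sketched below. Only the facts stated in the text (the MUX Device No. as second byte of the HDLC frame, 128 bits of useful data, a destination address field and a CRC) are relied upon; all other field names, positions and widths are assumptions.

/* Illustrative model of a 288-bit X-Net frame (36 bytes).
 * 288 bits x 6400 frames/sec = 1.8432 Mbit/sec on the lower bus. */
#include <stdint.h>

#define XNET_FRAME_BITS   288
#define XNET_FRAME_BYTES  (XNET_FRAME_BITS / 8)   /* 36 bytes      */
#define XNET_DATA_BYTES   16                      /* 128 data bits */

struct xnet_frame {
    uint8_t  flag;                       /* HDLC opening flag (assumed)  */
    uint8_t  mux_device_no;              /* second byte: MUX Device No.  */
    uint8_t  dest_device_no;             /* destination address field    */
    uint8_t  control;                    /* assumed control byte         */
    uint8_t  data[XNET_DATA_BYTES];      /* 128 bits of useful data      */
    uint8_t  pad[XNET_FRAME_BYTES - 22]; /* timing/fill bytes (assumed)  */
    uint16_t crc;                        /* frame check sequence         */
};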
Each device on the X-Net bus has a unique Device No.
and looks at the Mux byte; if the Mux byte is identical
to its Device No., that device may use the upper
bus to transmit data at the end of the frame on the
lower bus, provided that the lower bus CRC check shows
no errors. This ensures that only one device will transmit
on the upper bus at any time.
The bandwidth allocation is determined by the Mux table
which is changeable (dynamic). A request for a changed
bandwidth to a specified device received on the upper
bus is accepted if the system bandwidth is large enough
or rejected if the system bandwidth is too small. The
answer (ACK, NACK) is sent to the requesting device,
when a free time slot occurs on the lower bus. The
dummy Device No. FF (which is inserted in the MUX table
to allow the Controller access to the lower bus in
the following frame) has a minimum bandwidth of 100
bit/sec., giving a free time slot for answers at least
every 1.28 sec.
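The acceptance rule for a bandwidth request can be summarized by the following minimal sketch in C. The names and the assumed usable capacity (128 data bits per frame times 6400 frames per second) are illustrative only and do not reproduce the Controller implementation.

#include <stdint.h>

/* Assumed usable data capacity of the bus: 128 data bits per
 * frame x 6400 frames per second. */
#define XNET_USER_CAPACITY_BPS  819200u

enum bw_answer { BW_ACK, BW_NACK };

/* Accept a BW-request only if the total allocated bandwidth,
 * with the device's allocation changed from old_bps to
 * requested_bps, still fits on the bus.  The ACK/NACK is sent
 * when a free time slot (dummy Device No. FF) occurs. */
static enum bw_answer handle_bw_request(uint32_t allocated_bps,
                                        uint32_t old_bps,
                                        uint32_t requested_bps)
{
    uint32_t new_total = allocated_bps - old_bps + requested_bps;

    return (new_total <= XNET_USER_CAPACITY_BPS) ? BW_ACK : BW_NACK;
}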
The diagnostic information is collected by polling
each device connected to the system. If an answer is
not received within 4 scans in the Mux table, a retransmission
is executed. After three unanswered requests, the
device is considered not connected to the system.
The upper bus switch-over feature is achieved by counting
error-free received frames from the upper bus (both
frames destined to the controller and frames swapped
to the lower bus) each time the Mux table has been
read completely. If this count is less than 1/4 of
the previous count, the bus is switched. This implies
that one device may be removed every 1.28 sec. without
changing buses, but removing a number of devices instantly
causes a switch of upper buses.
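The switch-over rule may be sketched as follows; the function name is illustrative, and the comparison is the 1/4 criterion stated above.

#include <stdint.h>

/* Decide whether to switch upper buses after a complete Mux table
 * scan.  'prev' is the error-free frame count from the previous
 * scan, 'curr' the count from the scan just completed.  Returns 1
 * if the other upper bus should be selected. */
static int upper_bus_should_switch(uint32_t prev, uint32_t curr)
{
    /* Switch when the new count is less than 1/4 of the old one,
     * so removing a single device does not trigger a switch, but
     * removing a number of devices at once does. */
    return curr * 4u < prev;
}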
4.2.1.1 X̲-̲N̲e̲t̲ ̲D̲a̲t̲a̲ ̲T̲r̲a̲n̲s̲f̲e̲r̲
The following gives an example of a data transfer
between devices on an X-Net bus, shown in figure 4.2.1-2.
Figure 4.2.1-2
X-Net bus
Let us suppose that device 3 wants to send some data
to device 1. The sequence of events is as follows.
T̲i̲m̲e̲ ̲X̲.̲ The controller inserts a '3' into the MUX
field of the frame currently passing through it.
T̲i̲m̲e̲ ̲X̲ ̲+̲ ̲1̲ ̲s̲l̲o̲t̲.̲ Device 3 notices its own address
(3) as a MUX number on the lower bus, so it prepares
itself for transmission of a frame.
T̲i̲m̲e̲ ̲X̲ ̲+̲ ̲2̲ ̲s̲l̲o̲t̲s̲.̲ Device 3 generates and transmits
a frame containing 128 bits of useful data and a '1'
in the destination address field. This frame is transmitted
on the upper bus. Meanwhile, as always, one frame is
being processed by the controller and another frame
is transmitted on the lower bus and is accepted by
the device that it is addressed to.
T̲i̲m̲e̲ ̲X̲ ̲+̲ ̲3̲ ̲s̲l̲o̲t̲s̲.̲ Data from device 3 has arrived at
the controller. The controller adds a MUX number to
the frame. This is a message to some other device that
means "you may transmit in 2 slots' time". The destination
address (in this case: 1) remains in the frame. Timing
signals are also added.
T̲i̲m̲e̲ ̲X̲ ̲+̲ ̲4̲ ̲s̲l̲o̲t̲s̲.̲ All the devices look at the destination
field of the current frame on the lower bus. In this
example, device 1 realises that this frame is for itself,
and accepts it. Various error checks are performed
by the device and if all is well it extracts the useful
data bits. If all is not well, it issues a special
frame (when it gets a chance via the MUX number mechanism)
to the controller. This frame requests device 3 to
re-transmit the data.
This sequence is illustrated in figure 4.2.1-3.
Note that the lower bus has continuous transmission.
If no device has recently taken up its option to transmit,
then the controller "invents" a frame and sends it
along to a bus device. Conversely, if a device tries
to send out frames more frequently than the MUX table
currently allows, it just has to wait. However, the
MUX table may be altered by the controller if the device
makes a habit of trying to flood the bus. This mechanism
is the Dynamic Bandwidth Allocation mentioned previously.
DATA TRANSFER ON THE X-NET BUS
Figure 4.2.1-3
4.2.1.2 X̲-̲N̲e̲t̲ ̲H̲o̲s̲t̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲ ̲(̲S̲T̲I̲/̲T̲I̲A̲)̲
Within the range of X-Net devices, the STI provides the
high-performance interface of a CR80 minicomputer
to the X-Net system.
The STI is a high bandwidth device which is able to
interface to other devices connected to the X-Net.
It may address 4096 logical lines through the X-Net,
and each line may have bandwidth allocated individually.
The current implementation of the STI-handler and STI serves
up to 140 channels running actively in parallel. Through
the STI the CR80 minicomputer is able to dynamically
establish and dismantle all logical channels originating
from and belonging to the domain of the STI. The CR80
is also able to dynamically change the bandwidth assignment
on the X-Net.
By connecting several TIAs and SBAs to an STI it is
possible to interface the CR80 to both X-Net and
SUPRA-buses via the same STI. The maximum total number
of X-Net and/or SUPRA-buses connected to a single STI
is limited to 8. The STI will be addressed with the
same HOST-number on each bus in a given configuration.
The STI serves the X-Net packet protocol, which guarantees
error-free transmission of data.
The STI is based on 2 processors:
- The Ingoing processor, which moves data from the
front end (TIA or SBA) to the CR80 memory. The
data-traffic is controlled by the X-Net packet
protocol. As the data is divided into many logical
channels this processor also demultiplexes traffic
coming from the X-Net. Data delivered to the CR80 computer
is ensured error-free by the X-Net packet protocol.
When a complete packet is received it is reported
to the STI-handler by chaining the related data-buffer
descriptor (DBD) into the ingoing completion queue.
Transmission-errors, which are unrecoverable by
the protocol are reported as completion codes in
the DBDs.
- The Outgoing processor, which moves data from the
CR80 memory to the Front end. Data sent to the
X-Net is controlled by the outputter-part of the
X-Net packet protocol. Besides data transfer and
protocol handling, this processor also scans all the logical
channels set up by the CR80 computer and multiplexes
their data into one single stream delivered to
the outgoing front-end ring buffers. When a packet
is correctly transmitted, it is reported in the
outgoing completion queue. Transmission-errors,
which are unrecoverable by the protocol are reported
as completion codes in the DBDs.
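The data-buffer descriptor (DBD) and completion-queue reporting described for the two processors can be sketched as follows in C. The field names and queue layout are assumptions; only the roles (completion code, chaining into a completion queue read by the STI-handler) come from the description above.

#include <stdint.h>
#include <stddef.h>

/* Illustrative data-buffer descriptor and completion queue. */
struct dbd {
    struct dbd *next;          /* chain link in the completion queue */
    uint16_t    channel;       /* logical channel (up to 4096 lines) */
    uint16_t    length;        /* number of bytes in the buffer      */
    uint16_t    completion;    /* 0 = ok, otherwise protocol error   */
    uint8_t    *buffer;        /* data in CR80 memory                */
};

struct completion_queue {
    struct dbd *head;
    struct dbd *tail;
};

/* Chain a finished DBD onto the (ingoing or outgoing) completion
 * queue so the STI-handler can pick it up. */
static void complete_dbd(struct completion_queue *q, struct dbd *d)
{
    d->next = NULL;
    if (q->tail)
        q->tail->next = d;
    else
        q->head = d;
    q->tail = d;
}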
Figure 4.2.1-4 overleaf shows the data flow through
the STI and the data structures in the central RAM,
which is shared between the Ingoing Processor, the Outgoing
Processor and the STI-handler.
4.2.1.3 X̲-̲N̲e̲t̲ ̲B̲u̲s̲ ̲T̲e̲r̲m̲i̲n̲a̲l̲ ̲A̲d̲a̲p̲t̲e̲r̲
The intelligent Terminal Adapter (XTA) facilitates
a uniform interface to all terminals supporting the
minimal ANSI X3-25 standard capabilities, which the
majority of terminals on the market do. This includes
for example the emulation of protected fields on the
screen where required, by using a map in memory of
the VDU screen, see figure 4.2.1-5.
TERMINAL EMULATION
Figure 4.2.1-5
The three questions raised in the RFP on communications
can now be answered:
a. Initial Communications System
Software for the initial Communications System
is the X-Net handler contained in the DAMOS Input/Output
System. A general description is given in para.
4.2.2.b.4.
b. Communication System Expansion
No additional software is necessary when the communication
system is expanded.
c. Interface to LONG-HAUL Communications Network.
The ARPANET interface, with the TCP/IP protocols, will
be implemented in the CR80 host computer which constitutes
the center of the Local Area Network star mesh.
Users in the Local Area Network will communicate with
the ARPANET by means of calls to the IP module as outlined
in para. 3.4 of the Internet Protocol.
4.2.2 S̲y̲s̲t̲e̲m̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
a. RESERVED
4.2.2.b. O̲p̲e̲r̲a̲t̲i̲n̲g̲ ̲S̲y̲s̲t̲e̲m̲
The CR80 Advanced Multi Processor Operating system
DAMOS is the standard operating system for memory mapped
CR80 systems.
DAMOS is divided into operational and support software
as defined overleaf.
DAMOS includes a virtual memory operating system kernel
for the mapped CR80 series of computers.
DAMOS fully supports the CR80 architecture which facilitates
fault tolerant computing based on hardware redundancy.
DAMOS supports a wide range of machines from a single
Processing Unit (PU) with 1 CPU and 128 K words of
main memory, and up to a maximum configuration with
16 PU's where each PU has 5 CPU's and 16 M words of
virtual memory and a virtually unlimited amount of
peripheral equipment including backing storage.
DAMOS is particularly suited for use in real time systems
but also supports other environments like software
development and batch. The main objectives fulfilled
in DAMOS are: high efficiency, flexibility, and secure
processing.
DAMOS is built as a hierarchy of modules, each performing
its own special task. The services offered by DAMOS
include CPU, PU, and memory management. Demand paging
is the basic memory scheduling mechanism, but process
swapping is also supported. Other levels of DAMOS
provide process management and interprocess communication,
basic device handling and higher level device handling
including handling of interactive terminals, communication
lines, and file structured backing storage devices.
DAMOS

OPERATIONAL SOFTWARE

- Kernel
  - resource management
  - directory functions
  - process management
  - memory management
  - process communication
  - device management
  - device handling
  - error processing
  - real time clock
  - PU management
  - Transport Mechanisms
- Input/Output System
- File Management
- Magtape Management
- Terminal Management
- Initialization

SUPPORT SOFTWARE

- highlevel operating system
- system generation software
- maintenance and diagnostic programs

Figure 4.2.2.b-1  DAMOS Software Overview
DAMOS provides an operating system kernel which integrates
supervisory services for real time, interactive and
batch systems. A comprehensive set of software development
tools is available under DAMOS. The following languages
are presently available:
- Cobol
- Assembler
- SWELL, the CR80 system programming language
- Pascal
The following languages are announced:
- Fortran 77
- Ada
The DAMOS standard operational software is described
in sections 1-8. The description is divided into the
following areas:
- Overview of DAMOS
- Security,
which describes the general DAMOS approach to data
security
- Kernel,
which describes the DAMOS operating system kernel
components
- DAMOS Input/Output,
which describes the DAMOS standard interfaces to
peripheral I/O equipment, the DAMOS disk file management,
magnetic tape file management and terminal and
communication line management systems
- System initialization
The DAMOS standard support software
- high level operating system
- programming languages
- maintenance and diagnostics programs
is described in sections 9-11.
(b.1.) O̲v̲e̲r̲v̲i̲e̲w̲ ̲o̲f̲ ̲D̲A̲M̲O̲S̲ ̲O̲p̲e̲r̲a̲t̲i̲o̲n̲a̲l̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
DAMOS may be visualized as the implementation of a
set of abstract data types and a corresponding set
of tools for creating and manipulating instantiations
(objects) of these types.
The major components in DAMOS are the Kernel, the File
Management System, the Magnetic Tape File Management
System, the Terminal Management System and the Root
Operating System.
The DAMOS Kernel exists in one incarnation for each
processing unit (PU). The data types and functions
implemented by the Kernel are:
D̲a̲t̲a̲ ̲T̲y̲p̲e̲                      F̲u̲n̲c̲t̲i̲o̲n̲

CPUs                           CPU management and scheduling
processes                      process management
virtual memory segments        memory management
PU's                           PU management
synchronization elements       inter process communication
devices                        device management and basic
                               device access methods
ports                          basic transport service
The Kernel also provides facilities for
- processing of errors
- centralized error reporting
- a data transfer mechanism
- a PU service module
The File Management System (FMS) implements files on
disks. The FMS provides functions for manipulating
and accessing files and acts as an operating system
for a group of disk units. The FMS may exist in several
incarnations in each PU where each incarnation controls
its own devices.
The Terminal Management System (TMS) is similar to
the FMS. It provides functions for manipulating and
accessing communication lines and terminals including
line printers. The objects accessed via the TMS are
called units. A unit may be an interactive terminal,
a line printer or a virtual circuit. The TMS acts
as an operating system for a group of communication
devices attached via LTUs, LTUXs or a parallel controller.
The TMS may exist in several incarnations in each PU,
each incarnation controlling its own devices.
The Magnetic Tape File Management System handles files
on magnetic tape units.
A common security policy and a hierarchical resource
management strategy are used by the Kernel, the FMS
and the TMS. These strategies have been designed with
the objective of allowing multiple concurrent higher
level operating systems to coexist in a PU in a secure
and independent manner.
The Root operating system is a basic low level operating
system which initially possesses all resources in its
PU.
(b.2.) S̲e̲c̲u̲r̲i̲t̲y̲
DAMOS offers comprehensive data security features.
A multilevel security system ensures that protected
data is not disclosed to unauthorized users and that
protected data is not modified by unauthorized users.
All memory allocatable for multiple users is erased
prior to allocation in case of reload, change of mode,
etc. The erase facility is controlled during system
generation.
The security system is based on the following facilities:
- Hardware supported user mode/privileged mode with
16 privilege levels. Privileged instructions can
be executed only when processing under DAMOS control.
- Hardware protected addressing boundaries for each
process.
- Non-assigned instructions will cause a trap.
- Primary memory is parity protected.
- Memory bound violation, non-assigned instructions,
or illegal use of privileged instructions cause
an interrupt of highest priority.
- The hierarchical structure of DAMOS ensures a controlled
use of DAMOS functions.
- A general centralized addressing mechanism is used
whenever objects external to a user process are
referred to.
- A general centralized access authorization mechanism
is employed.
Centralized addressing capabilities and access authorization
are integral parts of the security implementation.
User processes are capable of addressing Kernel objects
only via the associated object descriptor table. The
following types of DAMOS objects are known only via
object descriptors:
- Processes
- Synchronization elements
- Segments
- Devices
- PUs
- CPUs
- Ports
The object forms the user level representation of a
DAMOS Kernel object. It includes the following information:
- A capability vector specifying the operations which
may be performed on the object by the process which
has the object descriptor.
- A security classification
The access right information concerning the various
DAMOS objects is retained in a PU directory of object
control blocks. Each control block is associated with a
single object.
When the access right of a process to a segment is
verified and the segment is included in the logical
memory space of the process, the contents of that segment
may be accessed on a 16-bit word basis at the hardware
level subject to hardware access checks.
Authorization of access to an object is based on
- security classification check
- functional capability check for the object
versus the process
The security policy is based on a multilevel, multicompartment
security system.
(b.3.) K̲e̲r̲n̲e̲l̲
The DAMOS Kernel is a set of reentrant program modules
which provide the lowest level of system service above
the CR80 hardware and firmware level.
The Kernel consists of the following components:
- Resource Management,
which administers resources in a coherent way
- Directory Functions,
which provide a common directory service function
for the other Kernel components
- Process Manager,
which provides tools for CPU management, process
management and scheduling
- Page Manager,
which provides memory management tools and implements
a segmented virtual memory
- Process Communication Facility,
which provides a mechanism for exchange of control
information between processes
- Device Manager
which provides a common set of device related functions
for device handlers and a standard interface to
device handlers
- Device Handlers,
which control and interface to peripheral devices
- Error Processor,
which handles errors detected at the hardware and
Kernel level and provides a general central error
reporting mechanism
- Real Time Clock
for synchronization with real time
- PU Manager,
which provides functions for coupling and decoupling
PUs
- Transport Mechanisms
which provides general mechanisms for exchange
of bulk data between processes and device handlers.
The following subsections describe the main Kernel
functions:
- resource management
- process management
- memory management
- process communication
- CPU management
- PU management
- Transport Mechanisms
(b.3.1) R̲e̲s̲o̲u̲r̲c̲e̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
The goal of DAMOS Resource Management is to implement
a set of tools which enables the individual DAMOS modules
to handle resources in a coherent way. This again,
will make it possible for separate operating systems
to implement their own resource policies without interference.
Furthermore, built-in deadlock situations are avoided.
The resource management module governs anonymous resources,
such as control blocks. Examples of resource types
are:
- process control blocks
- segment control blocks
- synchronization elements
- PU directory entries
Each type of resource is managed independently from
all other types.
The resources are managed in a way that corresponds
to the hierarchical relationships among processes.
Two operating systems which initially have disjoint
sets of resources may delegate these resources to
their subordinate processes according to separate and
non-interfering strategies. For example, one operating
system may give all its subordinate processes distinct
resource pools, i.e. there will not be any risk of
one process disturbing another. In contrast, the
other operating system may let all its subordinate
processes share a common pool, i.e. there may be a much
better resource utilization at the cost of the risk
of deadlock among these processes.
(b.3.2) P̲r̲o̲c̲e̲s̲s̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
In the CR80 system, a clear distinction is made between
programs and their executions, called processes. This
distinction is made logically as well as physically
by applying two different base registers: one for program
code and one for process data. This distinction makes
reentrant, unmodifiable code inevitable.
The process is the fundamental concept in CR80 terminology.
The process is an execution of a program module in
a given memory area. The process is identified to
the remaining software by a unique name. Thus, other
processes need not be aware of the actual location
of a process in memory but must refer to it by name.
(b.3.3) M̲e̲m̲o̲r̲y̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
The addressing mechanism of the CR80 limits the address
space seen by a process at any one time to a window
of 2 x 64K words. Due to the virtual memory concept
of DAMOS a process may, however, change the "position"
of the window, thus leading to a practically unlimited
addressing capability.
The finest granularity of the virtual memory known
to a process is a segment. Segments can be created
and deleted. They have unique identifiers and may
have different sizes. A process which has created
a segment may allow others to share the segment by
explicitly identifying them and stating their access
rights to the segment.
The Page Manager implements virtual memory. The actual
space allocated in a Processing Unit to a process may
be only a few segments, while the logical address space
is the full 2 x 64K words. Whenever addressing of
a segment that is not in physical memory is attempted,
the Page Manager will bring in the addressed segment.
(b.3.4) P̲r̲o̲c̲e̲s̲s̲ ̲C̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲
Synchronization of processes and communication between
them is supported in DAMOS by objects called Synchronization
Elements (synch elements) which are referred to by
symbolic names and may thus be known by processes system-wide.
In DAMOS a process cannot "send" a block of data directly
to another process identified by name. The exchange
must be done using a synch element.
(b.3.5) C̲P̲U̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
The CPUs in a processing unit may be pooled and a given
process is allocated processing power from one such
pool. In this way CPUs can be dedicated to processes.
(b.3.6) P̲r̲o̲c̲e̲s̲s̲i̲n̲g̲ ̲U̲n̲i̲t̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
The DAMOS Kernel provides facilities for managing the
logical connections between the individual Processing
Units attached to a Supra Bus.
PUs may be connected logically into groups. The number
of PUs in a group may vary from 1 to 16. Two groups
may be merged, the result being a new PU-group.
Objects are identified by symbolic names having either
local or global scope. They are accessible from all
PUs in the group where they reside.
PU Management provides functions for inclusion of a
PU in a PU-group.
A logical connection between two PUs is not established
until both have received an include request from the
opposite PU. When trying to connect two PU-groups, conflicts
between the use of global names may arise. Therefore,
a connection is only established if the scope of all
names can be maintained.
The PU Management is designed to allow graceful degradation
when purposely closing a PU or isolating a faulty PU.
It is possible from a PU to force a member out of
its common group. All PUs in the group are informed
to break their logical connection to the designated
PU. As a consequence all global objects residing in
the isolated PU are thereafter unknown to the group.
If not faulty, the isolated PU continues executing
its local processes and is ready to receive new include
requests.
(b.3.7) T̲r̲a̲n̲s̲p̲o̲r̲t̲ ̲M̲e̲c̲h̲a̲n̲i̲s̲m̲s̲
(b.3.7.1) B̲a̲s̲i̲c̲ ̲T̲r̲a̲n̲s̲p̲o̲r̲t̲ ̲S̲e̲r̲v̲i̲c̲e̲
Basic Transport Service (BTS) is a module within DAMOS
which enables processes to communicate in a uniform
manner, whether they are located:
1) in the same CR80 processor unit
2) in computers connected via a supra bus, running
the same operating system
3) in computers connected via a communication line,
running independent operating systems.
Figure 4.2.2.b-2
BTS Environment
BTS can also be used for communication between device
handlers. In this way, data may be switched through
an intermediate node, directly from one communication
device to another (FIG. 4.2.2.b-3.)
Figure 4.2.2.b-3
Switching
Figure 4.2.2.b-4 Connection Establishment
When a user process A wants to communicate with user
process B, it:
1) issues a "connect", specifying the global identification
of the other process BTS, then
2) notifies process B, that it has been called
from A
User process B may then either:
3a) accept to communicate with A
or
3b) reject to communicate with A and BTS notifies
process A either:
4a) that the communication has been established
or
4b) that the communication could not be established.
Figure 4.2.2.b-5 Data Transport
When user process A wants to send data via the connection,
it:
1) issues a "send request" giving BTS a pointer to
the data, specifying the address and the length
of data.
When user process B is ready to receive data via the
connection, it
2) issues a "receive request" giving BTS a pointer
to an empty data buffer, specifying the address
and length.
BTS then initiates the data transfer, and when the
transfer is completed, it:
3) notifies A that data has been sent
4) notifies B that data has been received.
User process B may simultaneously send data to user
process A via the connection (it is fully duplex).
The "receive request" from B may be issued before the
"send request" from A.
Any number of "send requests" and "receive requests"
may be outstanding on a connection; they will be served
by BTS as soon as a match occurs.
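The sequence of calls described above may be sketched as follows, using hypothetical C bindings (bts_connect, bts_send_request, bts_await_completion); the actual DAMOS calling convention is not reproduced here, only the order of events.

#include <stddef.h>

typedef int bts_conn;          /* handle for an established connection */

/* Hypothetical bindings for the BTS operations named in the text. */
extern bts_conn bts_connect(const char *global_process_name);
extern int bts_send_request(bts_conn c, const void *data, size_t len);
extern int bts_receive_request(bts_conn c, void *buf, size_t maxlen);
extern int bts_await_completion(bts_conn c);  /* blocks until notified */

static int send_block(const char *peer, const void *data, size_t len)
{
    /* 1) connect: BTS notifies the peer, which may accept or reject */
    bts_conn c = bts_connect(peer);
    if (c < 0)
        return -1;                       /* connection not established */

    /* 2) post a send request; the peer posts a matching receive
     *    request, in either order.  BTS moves the data and then
     *    notifies both sides. */
    if (bts_send_request(c, data, len) != 0)
        return -1;
    return bts_await_completion(c);      /* "data has been sent" */
}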
Figure 4.2.2.b-6 Deferred Buffering
If user process B has many connections on which it
may receive data, it may not be able to allocate an
input buffer for each.
It may then:
1) give a data area to BTS to be used as a common
buffer pool for several connections
2) specify that buffers from the pool shall be used
for input on the connection
when user process A
3) issues a "send request"
BTS tries to allocate buffers from the buffer pool.
When they become available, BTS initiates the data
transfer, and when it is complete:
4) notifies A that data has been sent
5) notifies B that data has been received, specifying
the buffers containing the input data.
Figure 4.2.2.b-7 The DAMOS Implementation
BTS is an integrated part of the DAMOS kernel.
1) within one CR80 computer, the DMA device in the
MAP is used to move data.
2) On connections between computers connected by a
supra bus, data is moved to/from the CR80 memory by the
Supra Terminal Interface (STI), which interfaces to
the supra bus.
3) On connections via communication lines, data is
first moved from the user process to the memory
of the Line Termination Unit (LTU) by the DMA device
in the MAP.
When it has been transmitted to the memory of the
receiving LTU, it is moved into the memory of the
user processes by the DMA device in the MAP.
The CPU is thus never loaded with movement of data.
(b.3.7.2) B̲a̲s̲i̲c̲ ̲D̲a̲t̲a̲g̲r̲a̲m̲ ̲S̲e̲r̲v̲i̲c̲e̲
The Basic Datagram Service (BDS) is a DAMOS system
component for manipulation and exchange of main memory
resident data buffers. The BDS operates within the
environment of the BTS by exploiting the mapping
mechanisms offered by the CR80 hardware. BDS enables
processes within a single PU to exchange large amounts
of data by reference, and processes in different PUs
to exchange data by copying from buffer to buffer.
The BDS provides functions for allocation, release
and mapping of buffers. Exchange of buffers within
a PU is based on communication of buffer identifiers
via interprocess communication for example by means
of the DAMOS PCF.
The BDS supports buffers of fixed length. It is possible
to configure the BDS with several types of buffers
corresponding to different sizes. In the present system
two types of buffers with sizes 64 and 512 bytes are
envisaged.
Buffers are grouped in pools which are DAMOS objects.
The buffers in a pool have the same size.
Before a process can access a buffer, the buffer must
become part of the logical address space of the process.
This is accomplished by a map-in function which changes
the composition of the translation table of the process.
A special kind of pseudo segment is used for this
purpose. These segments are called buffer-windows.
The buffer-windows must be 'mapped' into the logical
address space of the process before buffers can be
mapped into the buffer-window.
The BDS provides the following functions:
a̲l̲l̲o̲c̲a̲t̲e̲ ̲b̲u̲f̲ (pool)(buf ̲id)
This function allocates a buffer from the specified
pool.
A PU-wide identification - buf ̲id - of the buffer
is returned. This identification must be used
to release and map in the buffer. The identification
of the buffer can be passed to and used by another
process.
r̲e̲l̲e̲a̲s̲e̲ ̲b̲u̲f̲ (buf ̲id)
This function deallocates the buffer identified
by buf ̲id.
M̲a̲p̲ ̲i̲n̲ (buf ̲id, approx-loc)(actual-loc)
This function maps in a specified buffer at the
specified location. It is checked that the affected
logical page(s) is part of a buffer-window. The
location specified at call is only accurate to
1 k; the actual location is defined at exit from
the function.
The function changes the contents of the translation
table for the process.
B̲u̲f̲ ̲a̲d̲d̲r̲ (buf ̲id)(addr)
This function returns the physical address of the
buffer. The address is delivered in a format compatible
with the format required by XFER.
This function is used as a prerequisite for transfer
of buffer contents between PUs.
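A short usage sketch of the four BDS functions is given below, transcribed into C-style calls; the types, error returns and the producer function are assumptions for illustration.

#include <stdint.h>

typedef uint32_t buf_id_t;
typedef uint16_t pool_t;

/* Hypothetical C-style declarations of the BDS functions above. */
extern int allocate_buf(pool_t pool, buf_id_t *buf_id);
extern int release_buf(buf_id_t buf_id);
extern int map_in(buf_id_t buf_id, void *approx_loc, void **actual_loc);
extern int buf_addr(buf_id_t buf_id, uint32_t *phys_addr);

/* Allocate a 512-byte buffer, map it into a buffer-window, fill it,
 * and pass its PU-wide identification to another process (not shown)
 * which may map it in or release it later. */
static int produce_buffer(pool_t pool_512, void *window)
{
    buf_id_t id;
    void    *loc;

    if (allocate_buf(pool_512, &id) != 0)
        return -1;
    if (map_in(id, window, &loc) != 0) {     /* accurate to 1 K only */
        release_buf(id);
        return -1;
    }
    /* ... write data at 'loc' and hand 'id' to the consumer via
     * interprocess communication, e.g. by means of the DAMOS PCF ... */
    return 0;
}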
(b.4.) D̲A̲M̲O̲S̲ ̲I̲n̲p̲u̲t̲/̲O̲u̲t̲p̲u̲t̲
DAMOS supports input/output (I/O) from user programs
at different levels.
At the lowest level user programs can interact with
device handlers directly and transfer blocks of data
by means of the Basic Transport Service module. This
interface is illustrated in the figure on next page.
Device control is exercised via the Device Manager
functions. Data is transferred between the user process
and the device handler using a port in the user process
and a port in the device handler.
At a higher level DAMOS offers a more structured I/O
facility under the DAMOS I/O System (IOS).
The IOS provides a uniform, device independent interface
for user processes to
- disk files
- magnetic tape files
- interactive terminals
- communication lines
- line printers
The IOS is a set of standard interface procedures through
which a user communicates with a class of DAMOS service
processes known as General File Management Systems.
General File Management Systems include:
- the File Management System which implements disk
files
- the Magnetic Tape File Management System for magnetic
tape files
- the Terminal Management System for communication
lines, interactive terminals and printers.
The General File Management Systems provide functions
which are classified as:
- device handling
- user handling
- file handling
- file access
The common file access functions provided by the IOS
are readbytes for input, and appendbytes and modifybytes
for output.
Figure 4.2.2.b-8
DAMOS I/O SYSTEM
Data and Control Flow
These basic functions are used for transfer of blocks
of data.
On top of these functions the IOS provides a stream
I/O facility where the IOS handles the blocking and
buffering of data.
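As an illustration of the basic access functions, the sketch below copies one file to another using hypothetical C bindings for readbytes and appendbytes; the parameter conventions (explicit byte offset, status return) are assumptions.

#include <stddef.h>
#include <stdint.h>

typedef int ios_file;

/* Hypothetical bindings for the basic IOS access functions. */
extern int readbytes(ios_file f, uint32_t offset, void *buf,
                     size_t count, size_t *actually_read);
extern int appendbytes(ios_file f, const void *buf, size_t count);
extern int modifybytes(ios_file f, uint32_t offset,
                       const void *buf, size_t count);

/* Copy one file to another block by block through the device
 * independent interface (disk file, tape file or terminal alike). */
static int copy_file(ios_file src, ios_file dst)
{
    char     block[512];
    uint32_t offset = 0;
    size_t   got;

    for (;;) {
        if (readbytes(src, offset, block, sizeof block, &got) != 0)
            return -1;
        if (got == 0)
            return 0;                     /* end of file reached */
        if (appendbytes(dst, block, got) != 0)
            return -1;
        offset += (uint32_t)got;
    }
}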
(b.5.) F̲i̲l̲e̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲S̲y̲s̲t̲e̲m̲
The File Management System (FMS) is responsible for
storing, maintaining, and retrieving information on
secondary storage devices (disks).
The number and kind of devices attached to the FMS
is dynamically reconfigurable.
The following subjects are handled:
- devices and volumes
- directories
- files
- users
- integrity
- access methods
(b.5.1.) D̲e̲v̲i̲c̲e̲ ̲a̲n̲d̲ ̲V̲o̲l̲u̲m̲e̲ ̲H̲a̲n̲d̲l̲i̲n̲g̲
The file system may be given commands concerning:
- Management of peripheral devices.
Devices may be assigned to and deassigned from
the file system dynamically. Instances of device
handlers are at the same time created or deleted.
- Management of volumes.
Volumes may be mounted on and dismounted from specific
devices.
(b.5.2.) D̲i̲r̲e̲c̲t̲o̲r̲i̲e̲s̲
The file system uses directories to implement symbolic
naming of files. If a file has been entered into a
directory under a name specified by the user, it is
possible to locate and use it later on. Temporary
files do not need to be named. A file may be entered
into several directories, perhaps under different names.
Since a directory is also considered a file, it can
itself be given a name and entered into another directory.
This process may continue to any depth, thus enabling
a hierarchical structure of file names.
(b.5.3.) F̲i̲l̲e̲s̲
(b.5.3.1.) F̲i̲l̲e̲ ̲T̲y̲p̲e̲s̲
The file system supports two different organizations
of files on disk. A contiguous file consists of a
sequence of consecutive sectors on the disk. The size
of a contiguous file is fixed at the time the file
is created and cannot be extended later on. A random
file consists of a chain of indices giving the addresses
of areas scattered on the volume. Each area consists
of a number of consecutive sectors. The number of
sectors per area is determined at creation time, whereas
the number of areas may increase during the lifetime
of the file.
(b.5.3.2.) F̲i̲l̲e̲ ̲C̲o̲m̲m̲a̲n̲d̲s̲
The commands given to the file system concerning files
may be grouped as:
- Creation and removal of files.
A user may request that a file is created with
a given set of attributes and put on a named volume.
- Naming of files in directories.
A file may be entered into a directory under a
symbolic name. Using that name it is possible
to locate the file later on. The file may also
be renamed or removed from the directory again.
- Change of access rights for a specific user group
(or the public) vis-à-vis a file. The right to
change the access rights is itself delegatable.
(b.5.4.) U̲s̲e̲r̲ ̲H̲a̲n̲d̲l̲i̲n̲g̲
The file management system may be given commands concerning:
- Creation and Removal of users (processes)
(b.5.5.) D̲i̲s̲k̲ ̲I̲n̲t̲e̲g̲r̲i̲t̲y̲
(b.5.5.1.) S̲e̲c̲u̲r̲i̲t̲y̲
The protection of data entrusted to the file management
system is handled by two mechanisms:
The first mechanism for access control is based on
the use of Access Control Lists (ACL). There is an
ACL connected to each file. The ACL is a table which
describes the access rights of each individual user
group (one being the public) to the corresponding file.
Whenever a user tries to access a file, the ACL is
used to verify that he is indeed allowed to perform
this access.
The second mechanism for access control is based on
a security classification system. Each user and each
file is assigned a classification. The user classification
is recorded in the user control block and the file
classification is recorded on the volume. An access
to a file is only allowed if the classification levels
of the user and the file match each other.
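The combined effect of the two mechanisms may be sketched as follows; the structures and the classification comparison shown are illustrative assumptions.

#include <stdint.h>

enum access { ACC_READ = 1, ACC_WRITE = 2 };

struct acl_entry { uint16_t user_group; uint16_t rights; };

struct acl { int n; struct acl_entry e[16]; };

/* First mechanism: the ACL attached to the file lists the rights
 * of each user group (one of them being the public). */
static int acl_allows(const struct acl *a, uint16_t group, uint16_t want)
{
    for (int i = 0; i < a->n; i++)
        if (a->e[i].user_group == group)
            return (a->e[i].rights & want) == want;
    return 0;                      /* no entry: no access */
}

/* Access is granted only if both the ACL check and the security
 * classification check pass (the dominance rule shown is assumed). */
static int access_allowed(const struct acl *a, uint16_t group,
                          uint16_t want,
                          uint16_t user_class, uint16_t file_class)
{
    return acl_allows(a, group, want) && user_class >= file_class;
}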
(b.5.5.2.) R̲e̲d̲u̲n̲d̲a̲n̲t̲ ̲D̲i̲s̲k̲s̲
The FMS allows use of redundant disk packs, which are
updated concurrently to assure that data will not be
lost in case of a hard error on one disk.
The FMS allows exclusion of one of the two identical
volumes, while normal service goes on on the other
one. After repair it is possible to bring up one volume
to the state of the running volume, while normal service
continues (perhaps with degraded performance).
The bringing up is done by making a raw copy of the
good disk onto the one which should be brought up. While
the copying takes place all read operations are directed
to both disks.
(b.5.5.3.) B̲a̲d̲ ̲S̲e̲c̲t̲o̲r̲s̲
The FMS is able to use a disk pack with bad sectors,
unless it is sector 0.
The bad sectors are handled by keeping a translation
table on each volume from each bad sector to an alternative
sector.
While using redundant disks the translation tables
of the two disks must be kept identical to assure that
all disk addresses can be interpreted in the same
way. If bad sectors are detected while bringing up
a disk, they are marked as such on both disks and both
translation tables are updated accordingly.
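The bad-sector handling may be sketched as a simple translation table lookup; the table layout below is an assumption, only the idea of a per-volume bad-to-alternative sector mapping kept identical on both redundant disks comes from the text.

#include <stdint.h>

struct remap_entry { uint32_t bad_sector; uint32_t alt_sector; };

struct remap_table { int n; struct remap_entry e[64]; };

/* Translate a sector address before the disk is accessed: bad
 * sectors are redirected to their alternative sectors, all other
 * sectors are used unchanged. */
static uint32_t translate_sector(const struct remap_table *t,
                                 uint32_t sector)
{
    for (int i = 0; i < t->n; i++)
        if (t->e[i].bad_sector == sector)
            return t->e[i].alt_sector;
    return sector;
}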
(b.5.6.) A̲c̲c̲e̲s̲s̲ ̲M̲e̲t̲h̲o̲d̲s̲
The file management system implements two access methods
to files:
(b.5.6.1.) U̲n̲s̲t̲r̲u̲c̲t̲u̲r̲e̲d̲ ̲A̲c̲c̲e̲s̲s̲
For transfer purposes a file is considered simply as
a string of bytes. It is, therefore, a byte string
which is transferred between a file and a user buffer.
The user can directly access any byte string in a file.
The commands which are implemented by this access method
are:
READBYTES - Read a specified byte string
MODIFYBYTES - Change a specified byte string
APPENDBYTES - Append a byte string to the end of
the file.
(b.5.6.2.) I̲n̲d̲e̲x̲e̲d̲ ̲S̲e̲q̲u̲e̲n̲t̲i̲a̲l̲ ̲A̲c̲c̲e̲s̲s̲
CRAM is a multi-level-index indexed sequential file
access method. It features random or sequential (forward
or reverse) access to records of 0 to n bytes, n depending
on the selected block size, based on keys of 0-126
bytes. The collating sequence uses the binary value
of the bytes, so e.g. character strings are sorted alphabetically.
CRAM works on normal contiguous FMS files which
are initialized for CRAM use by means of a special
CRAM operation.
The CRAM updating philosophy is based on the execution
of a batch of related updates which together form
a consistent state change of the CRAM file and
are physically applied as a single update by means
of a LOCK operation. That is, after such a batch of
updates, all these updates may either be forgotten
(by means of the FORGET operation) or locked (by means
of the LOCK operation). Both operations are performed
without critical regions, i.e. without periods of CRAM
data base inconsistency.
For convenience, CRAM supports subdivision of the CRAM
file in up to 255 subfiles, each identified by a subfile
identifier of 0-126 bytes (as a key).
CRAM keeps track of the different versions of the CRAM
data base by means of a 32 bit version number, which
is incremented every time CRAMNEWLOCK (the locking
operation) is called. This version number can only
be changed by CRAMNEWLOCK (and CRAMINIT), but if the
user intends to use it for some sort of unique update
version stamping, it is delivered by the operations
CRAMNEWOPEN, CRAMNEWLOCK, CRAMFORGET and CRAMNEWVERSION.
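The batch update discipline may be sketched as follows; the operation names CRAMNEWOPEN, CRAMNEWLOCK and CRAMFORGET are taken from the text above, while their parameters and the record update call are assumptions for illustration.

#include <stdint.h>

typedef int cram_file;

/* Hypothetical C declarations of the CRAM operations named above. */
extern int CRAMNEWOPEN(cram_file f, uint32_t *version);
extern int CRAMNEWLOCK(cram_file f, uint32_t *version);  /* commit batch  */
extern int CRAMFORGET(cram_file f, uint32_t *version);   /* abandon batch */
extern int cram_update_record(cram_file f, const void *key, int keylen,
                              const void *rec, int reclen);  /* assumed */

/* Apply a batch of related updates as one consistent state change:
 * either all of them are made permanent by CRAMNEWLOCK (which
 * increments the 32-bit version number) or all of them are dropped
 * by CRAMFORGET, without any period of CRAM database inconsistency. */
static int apply_batch(cram_file f,
                       const void *keys[], const int keylens[],
                       const void *recs[], const int reclens[], int count)
{
    uint32_t version;

    if (CRAMNEWOPEN(f, &version) != 0)
        return -1;
    for (int i = 0; i < count; i++) {
        if (cram_update_record(f, keys[i], keylens[i],
                               recs[i], reclens[i]) != 0) {
            CRAMFORGET(f, &version);      /* drop the whole batch */
            return -1;
        }
    }
    return CRAMNEWLOCK(f, &version);      /* commit as one update */
}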
(b.6.) M̲a̲g̲n̲e̲t̲i̲c̲ ̲T̲a̲p̲e̲ ̲F̲i̲l̲e̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲S̲y̲s̲t̲e̲m̲
The Magnetic Tape File Management System (MTFMS) is
responsible for storing and retrieving information
on magnetic tapes. It is able to handle one magnetic
tape controller with a maximum of 8 tape transports
in daisy-chain. The driver is logically split into
3 parts:
- I/O-SYSTEM interface
- Main Processing
- Magnetic tape controller interface
Commands for the MTFMS are received by the I/O-System
interface while the controller interface implements
a number of (low level) commands for handling a tape
transport.
Symbolic volume names and file names are implemented
through use of label records which comply with the
ISO 1001 standard.
The functions of the file system can be separated into
four groups:
- Device functions
- Volume functions
- File functions
- Record functions
(b.6.1.) D̲e̲v̲i̲c̲e̲ ̲f̲u̲n̲c̲t̲i̲o̲n̲s̲
The following functions are defined:
- Assign a given name to a given unit of the
controller.
- Deassign a given device.
(b.6.2.) V̲o̲l̲u̲m̲e̲ ̲f̲u̲n̲c̲t̲i̲o̲n̲s̲
- Initiate the tape on a given device assigning a
name to it by writing a volume label.
- Mount a given volume on a given device.
- Dismount a given volume.
- Rewind a given volume.
(b.6.3.) F̲i̲l̲e̲ ̲f̲u̲n̲c̲t̲i̲o̲n̲s̲
- Create a file on a given volume. The following
information must be supplied by the caller and
will be written onto the tape in file header
label records:
- File name
- Fixed/variable length record specification
- Record size.
The file is opened for output and the given volume
is reserved for the caller.
- Find a file with a given name on a given volume.
The file is opened for input and the given volume
is reserved.
- Skip a given number of files (backwards or forwards)
on a given volume. The file at the resulting tape
position is opened for input and the volume is
reserved.
- Get information about the currently open file on
a given volume. Information like file sequence
number, record size and type (fixed/variable length)
can be retrieved.
- Close currently open file on a given volume. Volume
reservation is released.
(b.6.4.) R̲e̲c̲o̲r̲d̲ ̲f̲u̲n̲c̲t̲i̲o̲n̲s̲
- Skip a given number of records (forwards or backwards)
in a given file.
- Read a record in a given file.
- Write a record in a given file. The MTFMS performs
recovery from writing errors by
- backspacing over the record in error
- erasing a fixed length of about 3.7 inches
(thus increasing the record gap).
- attempting the writing once more.
This procedure will be repeated maximally 10 times.
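The recovery procedure may be sketched as follows, using hypothetical low-level tape calls (mt_write, mt_backspace_record, mt_erase_gap); only the recovery steps and the limit of 10 repetitions come from the text.

#include <stddef.h>

extern int mt_write(int unit, const void *rec, size_t len);  /* 0 = ok */
extern int mt_backspace_record(int unit);
extern int mt_erase_gap(int unit);     /* erases about 3.7 inches */

/* Write a record, recovering from write errors by backspacing over
 * the record in error, widening the record gap, and trying again;
 * the recovery procedure is repeated at most 10 times. */
static int mtfms_write_record(int unit, const void *rec, size_t len)
{
    if (mt_write(unit, rec, len) == 0)
        return 0;
    for (int retry = 0; retry < 10; retry++) {
        mt_backspace_record(unit);
        mt_erase_gap(unit);
        if (mt_write(unit, rec, len) == 0)
            return 0;
    }
    return -1;                         /* unrecoverable write error */
}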
(b.7.) T̲e̲r̲m̲i̲n̲a̲l̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲S̲y̲s̲t̲e̲m̲ ̲
The TMS is a service process which manages devices
characterized by serial blockwise access. Examples
of such devices are:
- interactive terminals (screen or hardcopy)
- data communication equipment (modems)
- line printers
- card readers
In the following, the phrase "terminal" is used as
a common term for any device of this category.
Terminals may be attached to LTUs, LTUXs (via TDX)
and parallel interfaces.
The TMS performs the following main functions:
- terminal related security validation
- access control for terminals
- collecting of statistical information
- management of terminals
- transfer of I/O data between terminal device
handlers and user processes.
The following subsections define:
- transfer of I/O data
- user handling
- hardware categories
(b.7.1.) T̲r̲a̲n̲s̲f̲e̲r̲ ̲o̲f̲ ̲I̲/̲O̲ ̲D̲a̲t̲a̲
The TMS enables user processes to perform I/O communication
with terminals.
The I/O communication can be performed in two modes:
file mode and communication mode.
(b.7.1.1.) F̲i̲l̲e̲ ̲M̲o̲d̲e̲
In this mode I/O to terminals is identical to I/O to
backing store files from the point of view of the user
process.
The same IOS basic procedures are used (appendbytes,
modifybytes, readbytes) and direct as well as stream
I/O can be used.
This mode provides the greatest flexibility for the
user process. This flexibility is obtained at the expense
of an additional overhead, as all I/O requests from
the user process will have to pass the TMS.
File mode I/O is aimed at terminals which will be connected
to varying processes with different security profiles.
The terminals in question will normally be local or
remote interactive hardcopy or screen terminals.
(b.7.1.2.) C̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲ ̲M̲o̲d̲e̲
In this mode I/O requests from the user process are
sent directly to the terminal handler. The I/O interface
between the user process and the terminal device handler
is that of the BTS and therefore inherently different
from backing store I/O.
Communication mode I/O is aimed at - but not limited
to - terminals which are connected to a single user
process throughout its lifetime.
The terminals in question are primarily communication
lines like e.g. trunk lines in a message switching
network.
(b.7.2.) U̲s̲e̲r̲ ̲H̲a̲n̲d̲l̲i̲n̲g̲
Before a user process can make use of the TMS functions,
it must be logged on to the TMS by means of the Useron
command. This command must be invoked by a process
which is already known by the TMS, either through another
Useron command or because it is the parent process
for the TMS.
In the Useron command the calling process grants some
of its TMS resources to the process which is being logged
on to the TMS.
When a user process ceases to use the TMS, its TMS
resources must be released by a call of Useroff.
(b.7.3) H̲a̲r̲d̲w̲a̲r̲e̲ ̲C̲a̲t̲e̲g̲o̲r̲i̲e̲s̲
The TMS recognizes the following categories of equipment:
- T̲e̲r̲m̲i̲n̲a̲l̲ ̲C̲o̲n̲t̲r̲o̲l̲l̲e̲r̲ which is a line controller
interfacing one or more lines.
- L̲i̲n̲e̲, which is a group of physical signals
capable of sustaining one simplex or duplex
data stream.
- U̲n̲i̲t̲, which is a terminal device connected
to a line.
If more than one unit is connected to a given line,
the line is called a multiplexed line.
(b.7.3.1.) T̲e̲r̲m̲i̲n̲a̲l̲ ̲C̲o̲n̲t̲r̲o̲l̲l̲e̲r̲s̲
Terminal controllers may dynamically be assigned and
deassigned by the parent process for the TMS.
A controller can either be assigned as an active or
as a stand-by controller.
A stand-by controller is a device which normally is
not active, but which may take over in case of a failure
in an active controller.
When an active controller is assigned for which a stand-by
is available, this must be defined in the assignment
command.
The process which assigned a controller is its initial
owner.
Ownership of a controller may be transferred to another
user process which is logged on to the TMS.
When a controller is assigned, the TMS creates a corresponding
device handler.
(b.7.3.2.) L̲i̲n̲e̲s̲
The owner of a controller may assign lines to the controller.
When a line is assigned the TMS calls the device handler
for the controller to that effect.
(b.7.3.3.) U̲n̲i̲t̲s̲
The owner of a controller with lines assigned to it
may create units on the lines.
Units can be created for file mode I/O or communication
mode I/O.
A unit created for file I/O may be a multiple or single
access unit.
Single access units can only be accessed by the owner
whereas multiple access units may be accessed by a
number of user processes.
When the owner creates a unit, an access path to the
unit is established. The owner may from now on access
the unit by the IOS functions readbytes for input,
and appendbytes and modifybytes for output.
Other users may obtain access to a multiple access
unit in different ways as described in the following.
The creator of a unit may offer it to another user
by means of the TMS OFFER function. The user to which
the unit is offered obtains access to the unit by the
ACCEPT function.
The creator of a unit may define a symbolic name -
a unit name - for the unit. A unit name is syntactically
identical to an FMS file name.
Other users may obtain access to the named unit by
the LOOKUP ̲UNIT command which corresponds to the FMS
commands getroot, lookup and descent.
(b.8) S̲y̲s̲t̲e̲m̲ ̲I̲n̲i̲t̲i̲a̲l̲i̲z̲a̲t̲i̲o̲n̲
When a CR80 memory mapped PU is master cleared, a boot
strap loader is given control.
The boot strap loader is contained in a programmed
read-only memory which is part of the MAP module. Having
initialized the translation tables of the MAP module,
the boot strap loader is able to fetch a system load
module from a disk connected to the PU.
An initialization module which is part of the load
module initializes the DAMOS kernel and the DAMOS Root
process.
The Root process possesses all the PU resources. Root
creates and initializes a File Management System, a
Terminal Management System and a Highlevel Operating
System.
(b.9) H̲i̲g̲h̲l̲e̲v̲e̲l̲ ̲O̲p̲e̲r̲a̲t̲i̲n̲g̲ ̲S̲y̲s̲t̲e̲m̲ ̲(̲H̲I̲O̲S̲)̲
HIOS is an operating system, which provides the online
user interface for interactive and batch processing
on the CR80 computer.
The functions performed by HIOS are the following:
- define system volume and directory
- define system device(s)
- assign/deassign terminal device(s)
- create/delete terminal subdevices
- assign/deassign of disk
- reserve/release of disk
- mount/dismount of volume
- update bitmap and basic file directory
- display name of user directory
- change user directory
- listing of current status of system
- redefine current input/output
- reopen original outputfile
- maintain a user catalog
- redefine filesystem dependent I/O resources
- control online log facility
- broadcast messages between terminals
- maintain a hotnews facility
- maintain a number of batch queues
- define spool files for later output
- login/logout of terminals
- load of program
- execute task
- stop and start task
- remove task
- display current date and time
- submit batch task
HIOS is activated by the ROOT process when the system
is bootloaded. After a short initialization phase the
production phase can be entered.
In the production phase, two kinds of users can log
in on the system:
- privileged users
- non privileged users
The privilege of the user is checked at login-time
by means of the user catalog, and the privilege determines
which functions the user can execute.
All functions contained in HIOS are executed under
the constraints of the security access control mechanisms
implemented in the DAMOS kernel. This means that unauthorized
access to any DAMOS, FMS or TMS object is impossible.
HIOS contains facilities for logging and timetagging
of all user commands and related system responses.
These facilities are used for:
- system recovery
- system performance
and load monitoring
- user assistance
(b.10) S̲y̲s̲t̲e̲m̲ ̲G̲e̲n̲e̲r̲a̲t̲i̲o̲n̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲
The utility SYSGEN-EDIT generates object files - based
upon a set of directives, a system source, and command
files - for subsequent compiling and linking. A BINDER
then binds the system object together with the application
object based upon a command file from SYSGEN-EDIT.
All the external references of the object modules are
resolved in the Binder output, which is a load module
ready for execution. The BINDER produces a listing
giving memory layout, module size, etc.
(b.11) D̲i̲a̲g̲n̲o̲s̲t̲i̲c̲ ̲P̲r̲o̲g̲r̲a̲m̲s̲
The Maintenance and Diagnostic (M&D) package is a
collection of standard test programs which is used
to verify proper operation of the CR80 system and to
detect and isolate faults to replaceable modules.
(b.11.1) O̲f̲f̲-̲l̲i̲n̲e̲ ̲D̲i̲a̲g̲n̲o̲s̲t̲i̲c̲ ̲P̲r̲o̲g̲r̲a̲m̲s̲
The off-line M&D software package contains the following
programs:
- CPU Test Program
- CPU CACHE Test Program
- Memory Map Test Program
- RAM Test Program
- PROM Test Program
- Supra Bus I/F Test Program
- CIA Test Program
- LTU Test Program
- Disk System Test Program
- Magtape System Test Program
- Floppy Disk Test Program
- TDX-HOST I/F Test Program
- Card Reader and Line Printer Test Program
(b.11.2) O̲n̲-̲L̲i̲n̲e̲ ̲D̲i̲a̲g̲n̲o̲s̲t̲i̲c̲ ̲P̲r̲o̲g̲r̲a̲m̲s̲
On-line Diagnostic programs will execute periodically
as part of the exchange surveillance system. On-line
diagnostics consists of a mixture of hardware module
built-in test and reporting, and diagnostic software
routines. The following on-line diagnostic capability
exists:
- CPU-CACHE diagnostic
- RAM test
- PROM test
- MAP/MIA test
- STI test
- Disk Controller/DCA test
- Tape Controller/TCA test
- LTU/LIA test
On-line diagnostics will report errors to higher level
processing to take recovery/switchover decisions in
case of failures.
c. Database Management System (DBMS)
4.2.2.c D̲a̲t̲a̲ ̲B̲a̲s̲e̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲S̲y̲s̲t̲e̲m̲ ̲(̲D̲B̲M̲S̲)̲
(c.1) Introduction to the data management.
The Intelligent Database Machine (IDM) is a tool for
providing a sophisticated end user database system.
It is an integrated hardware/software database computing
system using the relational database management approach.
The IDM system is a back end system and performs data
management functions up to 10 times faster than conventional
systems due to the use of special-purpose hardware
which comprises:
o a general purpose processor
o disk controllers
o 3 Mbytes RAM cache memory
o front end I/O processors
o a high speed bus
The software comprises a complete relational database
management system and a random access file system,
though use of the latter is intended for future government
extension of the system use.
The IDM database management functions are:
o transaction management
o optional logging of database changes
o indexing of data
o crash recovery facilities
o concurrency control
o data protection features
o data definition facilities
o non-procedural data manipulation facilities
o data independence
The IDM system communicates with the back end CR80
computer by means of the IDM communication command set. Requests
for data access and response data on retrieval commands
are passed through the back end computer, which passes
input to the IDM from host computers and formats output
from the IDM to host computers. The back end computer both
acts as an intelligent interface between the IDM system
and the host computers and as a supervisor driving
and controlling the database activities. This combination
of functions in the back end computer provides a unique
capability to maintain database availability without
the concern of or impact upon application programs
processing on the host computer. The most crucial feature
is the
capability to maintain database access on behalf of
the host computers, while a fatal error has brought
one or more databases to a halt, e.g. by a disk crash.
This is accomplished by having two identical IDM systems
running in parallel. The back end computer routes communication
to one or both IDM systems depending on transaction
type (read only) and up/down status of either IDM system.
Recovery processing for one IDM system does not impact
usual database access on the other one, including the
data base synchronization procedure following a standard
IDM recovery.
End user access to the database is through the application
subsystem running on the host computer, as chosen by
the individual user. All communication to and from
the database is directed to the back end computer in
a high level query language (IDL) embedded in the application
programs.
By each application program the end user is supported
in a highly specific way without having to be concerned
about how to access the database. However, parts of the
application subsystems offer facilities to the end
user to interrogate and update the databases more directly
through an application-dependent query language. This
is similar to the general purpose query language IDL,
only it is tailored for the individual application's
use by including application-dependent prompting and
help facilities, formatting and expansion of default
and abbreviated terms and clauses.
The remainder of this section describes the functions
of the IDM system and of the part of back end computer
processing concerned with database communication and
recovery, while application query language, and host
computer application subsystems are presented in the
Application Software section (4.2.3) below.
The function descriptions are outlined in the following
subsections:
(c.2) The IDM system
(c.2.1) The relational database approach
(c.2.2) The IDM software and data architecture
(c.2.3) Transaction management
(c.2.4) Concurrency control
(c.2.5) Data protection
(c.2.6) Stored commands and views
(c.2.7) Data indexing
(c.2.8) Crash protection and recovery
(c.2.9) Random file system
(c.2.10) Retrieve command and functions
(c.3) The back end computer procedures
(c.3.1) Transaction management
(c.3.2) Recovery procedures
(c.3.3) Extended audit features
(c.4) Direct Data Interrogation Programs (DDIP)
(c.2) T̲h̲e̲ ̲I̲D̲M̲ ̲S̲Y̲S̲T̲E̲M̲
Being a complete back end database management system
the IDM system offers a highly efficient data management
tool, which implies automatic and optional features
of access and security with a minimum of mandatory
data and control communication.
Access to specific data is accomplished by stating
to the IDM system what data are relevant, while all
navigation and logical and physical data positioning
is totally the concern of the IDM system alone. Although
the Database Administrator may provide information
about physical storage space and positions to be used
for specific data (databases and relations), he does
not need to survey IDM activities. Physical storage space
is effectively freed and reused without loss in performance
(except for clustered indices, see later). When the
database administrator has informed the IDM system
(via some application program) what kinds of data are to
be placed in each database, which users are allowed
which access modes to which data, and some parameters
for frequency and range of recovery measures to be
taken, the database is ready for use. At any time the content
of data kinds and/or values, the user access permissions
and the parameters mentioned above may be added, removed
or changed without impact on application programs or
on other data of the database.
Users with relevant access permission will access and
manipulate data through application program interfaces
in a non-procedural way, which means that only the following
information is needed for fully efficient database use:
1. state what action is to be taken
2. state the names of the data elements to be affected,
and by what value
3. state the qualification criteria for the data to be
affected
A simplified example such as:
SHOW (AIRMAN.NAME, AIRMAN.PAS ̲NR)
WHERE AIRMAN.RANK = "SGT"
will produce from the IDM system a list of names and
pas numbers for all sergeants in the airman file, to
be output on the user terminal. All information about
data formats, data positions, data organization, retrieval
technique and the like will be supplied and used internally
by the IDM system in the most efficient way possible. Complex
database use may be prepared in advance for repetitive
use by application programs or end users. Such prepared
database commands or views may be stored in the IDM
system for use simply by name reference.
(c.2.1) T̲h̲e̲ ̲R̲e̲l̲a̲t̲i̲o̲n̲a̲l̲ ̲A̲p̲p̲r̲o̲a̲c̲h̲
In the relational database, data are logically arranged
in two dimensional tables (read "files") called relations
containing a varying number of tuples (read "records")
with identical sets of attributes (read "fields").
Each relation is characterized by the set of attributes
its tuples contain and is identified only by the name
of the relation. No other reference than the relation
name is valid for access to the tuples it contains.
Thus no implied data structure is stored by which
access to a relation can be made.
Each tuple in a relation is characterized a̲n̲d̲ identified
only by the attribute values assigned to it. Thus there
is no way to identify a tuple or a set of tuples other than
to specify a minimum number of combined attribute values
which is unique for the tuple or the set of tuples
to be reached. Relations may hold an attribute which
is assigned unique key values for the tuples, e.g. an
item number; thus any attribute may participate, alone
or combined with other attributes, in the identification
of tuples.
Attribute values used for identification of tuples
need not be known explicitly but may be stated by
evaluation of expressions or by reference to other
attributes even in other relations in the same database.
Use of a reference to (the value of) an attribute in
another relation actually describes a relationship
dynamically invoked by the end user and it does not
have to be defined prior to use. In fact any such relationship
which the end user finds relevant may be implemented
at any time.
The way to identify data by relation names and attribute
values alone gives a perfect data independence between
physical data storage and the application programs.
As described later, even the attribute formats (or actually
the data value formats) are provided by the IDM system
for both internal use and the benefit of the application
programs, if requested.
(c.2.2) T̲h̲e̲ ̲I̲D̲M̲ ̲S̲o̲f̲t̲w̲a̲r̲e̲ ̲a̲n̲d̲ ̲D̲a̲t̲a̲ ̲A̲r̲c̲h̲i̲t̲e̲c̲t̲u̲r̲e̲
The IDM software which performs all data navigation
and space allocation has the following main areas of
features:
o logical data organization
o integrated data dictionary
o physical data organization
o user process organization
o crash recovery
As seen by the user of the IDM, the data is organized in
relations as described in the foregoing subsection
(c.2.1) about the relational database approach. Apart
from the composition of attributes and relations to
which data values will be applied, the IDM controls
every aspect of data storage and retrieval for up to
50 independent databases. Each database can have up
to 3200 separate relations, each defined by from 1
to 250 attributes and holding up to 2 billion tuples.
A tuple is limited to 2000 bytes.
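As an illustration of the data definition facilities, a relation such as the airman file used in the example above could be created with a command of the following form. The keywords and data type notation are illustrative assumptions only; the proposal does not reproduce the literal IDL data definition syntax:
CREATE AIRMAN (NAME = CHAR(30), PASNR = INT, RANK = CHAR(4), UNIT = CHAR(8))
Attributes, access permissions and recovery parameters may afterwards be added or changed at any time without impact on existing application programs, as described above.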
The IDM maintains a description of the information
kept in each database as relations in the very same
way as for any other stored information. The data dictionary
is therefore available to the user, with few constraints
to protect data for IDM exclusive use. However, access
to the data dictionary, being standard relations created
mostly from input by the database administrator, is
subject to data protection like any other information,
see (c.2.5) below. The information in the data dictionary
includes:
o each relation, its name and optional description
o each attribute name and type (or format)
o the indices on each relation
o a list of database users and their access permissions
o an optional audit trail of changes made to
the database
Together with other information this gives the user
all opportunity to interrogate the data descriptions
with the same commands and use of dynamically introduced
relationships as for usual data access. Only the audit
trail information needs a special retrieve command, as
information about changed data values is kept in an
internal form. The information for the audit trail is kept
in the "transac" relation, which is used primarily
for crash protection and recovery, see (c.2.8) below.
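As a brief illustration, the data dictionary may be interrogated in the same SHOW/WHERE style as ordinary data. The relation and attribute names below are illustrative assumptions; the proposal does not list the actual dictionary relation names:
SHOW (RELATION.NAME, RELATION.DESCRIPTION)
WHERE RELATION.OWNER = "DBA"
Such a query would return the names and optional descriptions of the relations owned by the stated user, subject to the data protection rules of (c.2.5).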
One database in the IDM system has a special status.
This database, called the "system" database, is created
when the IDM is first installed and holds information
about the IDM system and about the databases on the
IDM. In addition to the data dictionary found in the
other databases, the "system"-database provides information
as:
o a list of all databases
o a list of all disks and their specifications
o physical allocation of disks to databases and
relations
o IDM binary software
o log of system failure
With few restrictions the system database is accessed
exactly like any other database.
Physically the databases are stored on disk drives,
each given a name supplied by the user together with
necessary drive characteristics. The information is
used for initial drive formatting and for space allocation.
Each drive is subdivided into zones containing a whole
number of disk cylinders. When a database or relation
is created, its size and disk usage may be specified to
reserve proper disk space for it.
The database may reserve disk space in whole zones
on one or more drives and it may be extended later
without precautions like reorganization through
a dump, destroy, create, load procedure. The relation
may reserve space inside the database allocated
storage space, by number of 2K blocks on one or
more specified drives. The reservation may be exclusive
in which case no other relation can obtain the
space. If the reservation is not exclusive, other
relations in need of space may obtain unused
space, while the relation itself may, if needed,
reobtain space from other parts of the space allocated
to the database. Space allocation for a relation
may at any time be extended or decreased without
precautions. When a database or relation exceeds
the space reserved for it, the transaction is aborted
by the IDM with an error code stating the cause
of trouble (to be dealt with by the back end computer).
The IDM supports a large number of users simultaneously
accessing the same or different relations. It contains
all the logic needed to control the processing
order of different requests and to prevent the
different requests from interfering with each other.
One user can have many commands running in parallel
on the IDM.
When the user starts a session, the IDM creates
a process for each database opened by the user
and each process will service the user request
for that particular database until the user closes
it. The IDM allows an arbitrary number of users
to simultaneously access the same or different
databases, using main memory both to store user
commands and as private workspace. Inactive user
processes require no main memory, as temporary
disk storage will be used to hold user processes
not currently processing. When the IDM has many
commands all in various states of execution, it
will schedule among them giving each a priority,
which depends on various parameters, and it will
use the temporary disk storage as a buffer.
The user may group commands into a transaction
in order to define consistency of an operation
upon a database. The IDM will then ensure that either
the whole transaction or nothing is performed.
This is crucial both for consistency during multiple
user interactions, and in case of a system wide
failure. The IDM is capable of maintaining database
consistency in any case, which is achieved by keeping
redundant information at certain critical points
in the processing of a transaction. Generally used
temporary logging and optional transaction logging are
included in this scheme as a means to recover from
a disk head crash. The single IDM system provides
functions to dump databases periodically and the transaction
log frequently, the latter without halting the
database usage. When a malfunction has been attended
to, the database copy and transaction log copies
are applied to the disk, and the consistent situation
immediately before the breakdown is restored. Some
further procedures are installed to secure data
availability under control of the back end computer,
see c.2.8 below.
(c.2.3) T̲r̲a̲n̲s̲a̲c̲t̲i̲o̲n̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
Transaction Management is the feature that preserves database
consistency through any critical situation arising
from simultaneous multiple user access and system failure.
The individual user may group a number of commands
into a transaction thereby defining one consistent
operation to be fully executed or totally ignored by
the IDM system. Having started a transaction, the user
may interactively proceed issuing new commands and
receive response data until the whole consistent operation
has been executed properly. When the end of transaction
is signalled by the user, the operation is definitely
concluded by the IDM system. Up to this point the user
may abort the transaction, causing the IDM to back
out from all changes made to the database. In case
of system failure, the same procedure is launched.
In any case the database remains consistent with no
changes on behalf of the transaction made.
If the user does not explicitly state a start of a
transaction, the IDM handles each command as one transaction,
thereby securing consistency of the database.
However, if the user requests execution of more than one
command at a time without stating the start of a transaction,
the application program will automatically
issue transaction start and end commands to enclose
the IDM commands input by the user.
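A minimal sketch of such a grouping, in the style of the IDL example in (c.2) above, is given below. The transaction bracketing keywords and the data values are illustrative assumptions, not a literal rendering of the IDM command set:
BEGIN TRANSACTION
REPLACE (AIRMAN.RANK = "SSGT")
WHERE AIRMAN.NAME = "SMITH"
REPLACE (AIRMAN.UNIT = "HQ")
WHERE AIRMAN.NAME = "SMITH"
END TRANSACTION
If either replace command fails, or the user aborts before the end of the transaction, neither change is applied to the database.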
(c.2.4) C̲o̲n̲c̲u̲r̲r̲e̲n̲c̲y̲ ̲C̲o̲n̲t̲r̲o̲l̲
During the execution of multiple user transactions
simultaneously it may occur that accesses by two or
more transactions to the same data are in conflict.
As any transaction has exclusive access to all datablocks
accessed, a conflict exists when an updating transaction
has accessed or will access datablocks also accessed
by another transaction. Only between read-only transactions
does no conflict exist when they access the same data.
The IDM system is capable of resolving conflicts by
queuing one or more transactions for later sequential
completion. In deadlock situations the IDM may choose
to back out of one or more transactions, queuing them
for later sequential restart. In a few complex
deadlock situations the IDM may not be able
to restart a transaction. This will be signalled by
the IDM, and the back end computer will then reissue
the transaction from the beginning, suppressing all
redundant output to the host computer.
(c.2.5) D̲a̲t̲a̲ ̲P̲r̲o̲t̲e̲c̲t̲i̲o̲n̲
The IDM Data Protection Feature uses a combination
of an access type, a host computer identification number,
a user identification number and a data object name
as a unit for data protection. For each relation or
view each attribute may be explicitly protected or
it may be protected by default as stated for the relation
or view as a whole.
The right to grant or deny access belongs to the Data
Base Administrator and to the owner of the data object,
which means the user who created it. The System Administrator,
who installs the IDM System and thereby creates the
system data base as its owner, is the only one who may
grant access rights (for creation) to the data base
administrator of each data base. The Data Base Administrator
becomes the owner of the data bases created and may
grant access rights for the objects in the data bases
to the users, who in turn may grant access rights
for the data objects they have created, becoming the
owners of them.
Access rights are input by the permit and deny commands,
and the default is permit for the data base administrator
(excluding the system data base) and for the owner,
while it is deny for anybody else.
A̲c̲c̲e̲s̲s̲ ̲T̲y̲p̲e̲s̲
For each of the following access types the access may
be permitted or denied with the possible scopes stated
inside parenthesis:
- read (data base, relation, file, view, stored command,
attribute)
- write (data base, relation, file, view, stored
command, attribute)
- execute (stored command, program (commands))
- create (data base, relation, file, view)
- destroy (data base, relation, file, view)
- destroy (stored command, program (commands))
- destroy index (index)
Access rights for indirect access through views and
stored commands take precedence over access rights
for the referred objects only if granted by the owner
of the referred objects. In all other cases access
rights for the referred objects remain in effect.
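A minimal sketch of how such rights might be granted and withheld, in the style of the earlier IDL examples, is given below. The permit/deny syntax and the group name are illustrative assumptions only:
PERMIT READ OF AIRMAN (NAME, RANK) TO PERSGROUP
DENY WRITE OF AIRMAN TO PERSGROUP
With these settings, members of the group may retrieve names and ranks from the airman relation but may not update it; all other access is governed by the defaults described above.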
U̲s̲e̲r̲ ̲I̲d̲e̲n̲t̲i̲f̲i̲c̲a̲t̲i̲o̲n̲
To the IDM System a user is identified by the unique
combination of a host identification number and a user
identification number.
The host identification number will be the same for
all data base operations, as the use of the different
data bases is not restricted by the use of individual
terminals or host computers.
The user identification number is a unique number, which
is derived by the application program from the system
relation 'user' in each data base using the external
user identification as a key. The external user identification
is the one, which is used to start a session from a
terminal. The permit and deny commands use the external
identification, while any other command uses the internal
identification number.
The Data Base Administrator, who maintains the information
about user identifications, may equally define identification
for groups of users and appoint any individual user
as a member of a group. In this case access rights
are assigned to the group.
P̲r̲o̲t̲e̲c̲t̲i̲o̲n̲ ̲b̲y̲ ̲D̲a̲t̲a̲ ̲V̲a̲l̲u̲e̲s̲
The IDM System protects data efficiently and in detail
for the data objects using the object name as a reference.
Protection by data values, e.g. a permission code, is
established by views, which permit the use of data
items for users whose user identification number qualifies
in a predefined table lookup using the data, while
neither the relation nor the table has to be directly
accessible.
(c.2.6) S̲t̲o̲r̲e̲d̲ ̲C̲o̲m̲m̲a̲n̲d̲s̲ ̲a̲n̲d̲ ̲V̲i̲e̲w̲s̲
Access to the data bases may be defined and stored
in the data base for later use in the form of a sequence
of commands or as a virtual relation. They are identified
by name and are automatically registered in the data
dictionary. Besides the advantages described below, the
storage of commands and views may be used for refining
data protection so that only prespecified data combinations
or data value dependent qualified users may succeed.
S̲t̲o̲r̲e̲d̲ ̲C̲o̲m̲m̲a̲n̲d̲s̲
Commands may be stored in exactly the same form as
used in straightforward queries, to be executed when
requested on-line as a part of a transaction. A crucial
capability is the use of parameter values, which can
be logically referred to in the stored commands and
supplied by the user at execution time. In this way
complex data access procedures may be elaborated once and
easily used later on.
The use of an embedded query language in application programs
in fact implies this facility to hold defined access
routines inside the data base.
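A minimal sketch of a stored command with one parameter, again in the style of the earlier IDL examples, is given below. The DEFINE and EXECUTE keywords and the parameter notation are illustrative assumptions only:
DEFINE COMMAND RANKLIST (RANKPARM)
SHOW (AIRMAN.NAME, AIRMAN.PAS ̲NR)
WHERE AIRMAN.RANK = RANKPARM
EXECUTE RANKLIST ("SGT")
The stored command is thereafter invoked by name alone, with the actual rank value supplied at execution time.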
S̲t̲o̲r̲e̲d̲ ̲V̲i̲e̲w̲s̲
A view is a virtual relation referring to a selection
of attributes from one or more real relations or other
views. Only the definition of the view is stored (in
a partly preprocessed form), while no data is physically
a part of the stored view.
Definition of a view is as simple as the generation
of a relation, stating the view name, a list of data elements
by name reference to existing attributes and the usual
qualification clause. Use is equally simple, as the view
is treated like a usual relation, i.e. as a two
dimensional table, no matter how complex the relationships
and how comprehensive the qualifications actually involved.
Simple views may even be updated, which means that
the data items in the referred relations are updated,
as data only virtually belongs to the view.
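A minimal sketch of a view definition follows, once more in the style of the earlier IDL examples; the DEFINE VIEW keywords are illustrative assumptions only:
DEFINE VIEW SGTFILE (AIRMAN.NAME, AIRMAN.PAS ̲NR)
WHERE AIRMAN.RANK = "SGT"
The view may then be interrogated like an ordinary relation, e.g. SHOW (SGTFILE.NAME), while the data itself remains in the airman relation.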
(c.2.7) D̲a̲t̲a̲ ̲I̲n̲d̲e̲x̲i̲n̲g̲
Though the relational approach does not constrain access
with prearranged structures, some frequently used access
paths may be optimized by the use of ordered indices.
Creation of one or more indices for a relation may be
done at any time without affecting existing application
programs. In fact, even existing application programs
may gain in performance without being changed. Once
created, the indices will automatically be maintained
by the IDM System as the data in the target relation
changes.
Any relation may be inverted by any one or more attributes
(up to twelve in combination). Up to 250
indices may be created for one relation, which in practice
gives fast access in any possible order. One of the
indices may be clustered, which implies that the relation
itself is ordered. Deletion of an index may be requested
at any time without affecting existing application
programs, except perhaps in performance.
The use of indices is fully automatic, as the IDM System
dynamically chooses the optimal access strategy for
each operation. However, no predefined order of response
data can be guaranteed, so when necessary the order
of response data shall be stated explicitly, even if
it is in accordance with an existing index.
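A minimal sketch of index creation in the style of the earlier examples is given below; the exact keywords are illustrative assumptions only:
CREATE INDEX ON AIRMAN (RANK)
CREATE CLUSTERED INDEX ON AIRMAN (PAS ̲NR)
The second, clustered index orders the airman relation itself by pas number; both indices are thereafter maintained automatically by the IDM System.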
(c.2.8) C̲r̲a̲s̲h̲ ̲P̲r̲o̲t̲e̲c̲t̲i̲o̲n̲ ̲a̲n̲d̲ ̲R̲e̲c̲o̲v̲e̲r̲y̲
The IDM, being a self-contained back end system, has
powerful protection against loss of data in case of
soft and hard failures. As two IDM Systems are active
in the total ACCESS System, the recovery procedures
for the IDM Systems have a supervisory control procedure
running in the back end computer. While this subsection
describes the procedures for the IDM System, the overall
scheme of data base availability, protection and recovery
is described in subsection
(c.3.2) below.
The IDM Dump and Restore facilities to secure and recover
each data base in case of a fatal malfunction (e.g. a
disc crash) include a periodical physical dump of
each data base by the data base administrator and an
optional (per relation) transaction logging on another
disc during data base update transactions. The transaction
log is to be dumped frequently by the data base administrator
as well. Both the full data base dump and the transaction
log dump can, by choice, be directed either to another
data base (a backup data base on another disc) or to
the back end computer for storage on magnetic tape.
The dumping of the transaction logs may be done without
interference with the usual data base access, while
the dumping of the data bases interrupts accessibility.
However, while data base dumping is in progress the
data base is fully operational for normal simultaneous
use on the other IDM System.
After a fatal malfunction the recovery procedure for
each affected data base is to load the most recent
complete data base copy and the transaction logs dumped
since the data base dump, and then apply each transaction
log by the rollforward command. The content of the
transaction log is before and after images of all attributes
changed by each transaction, and the rollforward command
inserts the after images in the same sequence as originally
implied by the transactions. However, all started
but not completed transactions are nullified, to be
reissued by the back end computer.
In order to shorten the time of the rollforward procedure,
checkpoints may be invoked automatically at predefined
intervals. The checkpoint procedure outputs all updated
data blocks from IDM internal storage to disc, so the
rollforward command will only need to apply the transaction
log for transactions completed after the last checkpoint.
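As a brief sketch, recovery of a single data base after a disc crash might involve a command sequence of the following form. The load and rollforward keywords follow the terminology above, but the exact syntax and the data base and volume names are hypothetical:
LOAD DATABASE PERSONNEL FROM DUMPTAPE1
ROLLFORWARD PERSONNEL FROM LOGTAPE1
ROLLFORWARD PERSONNEL FROM LOGTAPE2
Each rollforward applies the after images from one dumped transaction log, in the original transaction sequence, until the point of breakdown is reached.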
(c.2.9) R̲a̲n̲d̲o̲m̲ ̲A̲c̲c̲e̲s̲s̲ ̲F̲i̲l̲e̲ ̲S̲y̲s̲t̲e̲m̲
The IDM System provides a random access file system
in order to allow the user to store and retrieve non-data-base
information such as front end programs, stored
graphics and information with a traditional sequential
or structured data organization.
Access to the random access files is controlled by
the application programs by relative record number.
The usual IDM relations may be used as file directories,
combining to form an indexed sequential access method.
(c.2.10) R̲e̲t̲r̲i̲e̲v̲e̲ ̲C̲o̲m̲m̲a̲n̲d̲ ̲F̲u̲n̲c̲t̲i̲o̲n̲s̲
The IDM System includes a variety of specialized commands
for data storage, retrieval, manipulation, security,
protection and control. The use of some of these are
described above. However, the most crucial capabilities
of interest to the common user are the possibilities
to retrieve specific data to be output in a more or
less computed and arranged way, suitable for easy inspection
or further use.
Three commands are the prime tools for data retrieval
and manipulation: the retrieve command, the replace
command and the delete command. The definition of
views could be included, yet it is not, because it
is so close to the retrieve command that no special
functions exist beyond the virtual view itself.
Setting up a command the user must answer the following
three questions:
- which attributes shall be affected
- how shall each attribute be affected
- which tuples in the target relation shall be
affected
To the delete command only the last question is relevant,
as the command removes only whole tuples. The three
questions stress the fact that the query language is
non-procedural, which means that no information about
access routes or data base navigation can be input,
as the IDM System has a superb strategy for optimizing
data access in any situation.
The following part of this subsection describes the
most interesting of the available functions included
for answering the three questions above.
W̲h̲i̲c̲h̲ ̲a̲t̲t̲r̲i̲b̲u̲t̲e̲s̲ ̲s̲h̲a̲l̲l̲ ̲b̲e̲ ̲a̲f̲f̲e̲c̲t̲e̲d̲
This question is answered through the target list,
which, after the name of the target relation, simply
holds the names of the attributes to be affected. The
sequence of attribute names is also used as the output
sequence.
H̲o̲w̲ ̲s̲h̲a̲l̲l̲ ̲e̲a̲c̲h̲ ̲a̲t̲t̲r̲i̲b̲u̲t̲e̲ ̲b̲e̲ ̲a̲f̲f̲e̲c̲t̲e̲d̲
The command chosen shows which kind of data operation
is to be performed. This may be output of values,
creation of a relation by value retrieval or deleting
a set of tuples in the target relation.
The list of attributes to be affected (the target list)
may also state, how each attribute shall be affected
by the command. This may be a simple equation copying
a value into the target attribute.
However, the equations of the target list may state
composite references to attributes in the same or other
relations. The composition may include the use of
arithmetic expressions, scalar aggregates and aggregate
functions as well as embedded tuple selection (see
where clause below). For output purposes the user
may even introduce a name for the result of the composite
reference to exclude output of the attribute values
used.
For evaluation of arithmetic expressions the operations
addition, subtraction, multiplication and division
may be applied to a set of operands, which may be attribute
names, constants, constant functions, unary functions,
binary functions, scalar aggregates, aggregate functions
or embedded expressions. The operands may be qualified
by embedded tuple selection.
The scalar aggregate is an arithmetic expression that
operates over one or more relations and returns a single
value. The scalar aggregates provided are minimum,
maximum, count, sum, average and any (value match yes
or no). Any valid arithmetic expression may be used as
a parameter for the scalar aggregates.
The aggregate functions operate the same way but return
a set of values which may be qualified by a 'group
by' operator and an attribute name reference.
For the purpose of presentation of the target attribute
values the order in which to show the resulting tuples
may be stated explicitly. If not stated the IDM will
return the tuples in any order suitable for the access
routing, which may not be the same each time. However
the DDIP (Direct Data Interrogation Programs) will
automatically supply an 'order by' clause if missing,
thereby forcing an ascending sequence upon the first
target attribute in the target list.
W̲h̲i̲c̲h̲ ̲t̲u̲p̲l̲e̲s̲ ̲i̲n̲ ̲t̲h̲e̲ ̲t̲a̲r̲g̲e̲t̲ ̲r̲e̲l̲a̲t̲i̲o̲n̲ ̲s̲h̲a̲l̲l̲ ̲b̲e̲ ̲a̲f̲f̲e̲c̲t̲e̲d̲
The qualification criteria for tuples to be affected
by the command are explicitly stated in the 'where'
clause. Embedded tuple selection may be stated likewise
inside the target list for evaluation of values for
target attributes, see above.
The qualification may contain a number of expressions
to be verified true for each candidate tuple. Attribute
values from tuples which do not qualify in every aspect
of the qualification expressions do not participate
in the resulting 'relation'. The expressions may state
combined or alternative conditions by the use of the
Boolean operators AND and OR as delimiters.
The expressions to be used may be composed exactly
as stated for the target list above, except for the
embedded 'where' clause. Further possibilities
are available, such as relational operators and pattern-matching
strings. By reference in an expression to attribute
names in different relations, virtual relationships
(or data structures) are dynamically forced upon the
tuple selection.
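A minimal sketch combining these elements, in the style of the SHOW example in (c.2), is given below. The aggregate and clause keywords follow the terminology above, but the exact syntax and the squadron relation are illustrative assumptions:
SHOW (AIRMAN.RANK, HEADCOUNT = COUNT(AIRMAN.PAS ̲NR))
WHERE AIRMAN.UNIT = SQUADRON.UNIT
AND SQUADRON.BASE = "OFFUTT"
GROUP BY AIRMAN.RANK
ORDER BY AIRMAN.RANK
The target list names the attribute and the aggregate to be returned, the 'where' clause selects the qualifying tuples through a dynamically stated relationship between the airman and squadron relations, and the 'order by' clause fixes the output sequence.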
(c.3) T̲h̲e̲ ̲B̲a̲c̲k̲-̲E̲n̲d̲ ̲C̲o̲m̲p̲u̲t̲e̲r̲ ̲P̲r̲o̲c̲e̲d̲u̲r̲e̲s̲
The ACCESS System, being equipped with two IDM Systems
to be updated in parallel, is assured a very high data
base availability. However, certain control procedures
are necessary, both to keep the two systems synchronized
during updating operations and to control exception
handling in case of functional failures in one of the
IDM systems. Furthermore, the back end computer, as
a parsing link between the host computers and the IDM
systems, performs the direct communication to the IDM
systems, while the host computers communicate with
the back end computer in the General Purpose Query
Language IDL.
The application programs have no problems in accessing
the data bases arising from the existence of two IDM
systems. In fact the host computers do not recognize
the existence of two IDM systems, as the control of
this aspect lies in the back end computer alone. Receiving
a communication from a host computer to the data base,
the back end computer parses the commands into the IDM
communication language, stores them locally, and routes
them to one or both IDM systems. The IDM response
is parsed into a response format, checked off as 'done' against
the locally stored commands and communicated back to
the host computer.
As a part of the data base management system of ACCESS,
the back end computer functions deal with the
procedures for:
- transaction management
- recovery procedures
- extended audit features
These procedures are described in the subsections below.
(c.3.1) T̲r̲a̲n̲s̲a̲c̲t̲i̲o̲n̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
A transaction is a number of data base commands which
are grouped by the user as one consistent data base operation,
exactly as stated in the description of an IDM transaction.
The actions taken by the back end computer depend
on the running status of each of the two IDM systems
and on the type of data base operation to be processed.
Receiving the first command in a transaction from a
host computer, the back end computer opens a transaction
log entry on the local storage device. When the command
has been parsed into IDM communication language the
first part of the communication returns the IDM transaction
number from each IDM system to be stored in the local
log entry.
However, if the command does not update the data base,
only one IDM system is involved, as the non-updating
commands usually are routed alternately. If one IDM
system is not operational, only the running IDM system
is involved in any case.
Each command response received from an IDM system is
checked off as completed in the local log entry for that IDM.
If the response has already been received from the other
IDM system, this concludes the command entry in the
log. Otherwise the response is parsed into response
format for the host computer and communicated to it,
while the log entry remains open for completion from
the other IDM System. For non-updating commands, however,
the log entry is always concluded when a response is
received, as no second system is involved.
Until the end of transaction command has been communicated
this way, the transaction log entry is open for the
inclusion of further commands, each handled the same
way, and open to abortion if chosen by the user, the host
computer or the back end computer.
Several transactions may be handled simultaneously
this way for each data base, leaving the transaction
management of the IDM systems to handle concurrent
access situations, see subsection (c.2.4) above. As
the two IDM systems might in rare situations choose
different sequences for two concurrent updating commands,
the system administrator may for each data base relation
optionally force sequential transaction management,
when updates of the same logical data items are likely
to happen simultaneously. The back end computer will
then issue only one updating transaction at a time
for each relation so flagged, queuing further updating
transactions until both IDM systems have completed.
If for any reason a running IDM should abort a transaction,
e.g. due to the concurrency procedures, the back
end computer will reissue the whole transaction, command
by command, nullifying any response data already communicated
to the host computer. However, if the IDM error code
indicates that the IDM system is not able to complete
command processing, e.g. lack of disc storage for that
data base, the transaction is aborted on both IDM systems,
returning an error message to the host computer and
to the system operator. Should the error code or the
communication status show an IDM system unable to proceed
at all for one or more data bases, the IDM system is
flagged down for the data bases in question, all data
base operations for the affected data bases are routed
to the other IDM system alone, and the back end computer
will stand by for start of the recovery procedure, see
below.
To control excessive and possibly erroneous access,
each transaction will be given a limited time for execution
and a limited number of tuples per command to be affected.
The limits may be set as default values by the system
administrator and explicitly by the user for that transaction.
When one or the other limit is exceeded, the
user will be asked for acknowledgement before further
processing or output will be accomplished. If accepted,
the limits for further
execution will each time be extended by an additional 25% of
the original limits for the command. In particular, a
transaction end command already received from the host
computer will not be executed until the acknowledgement
has been received. The user may at any time before
a transaction end command is executed issue a transaction
abort command in order to nullify the whole transaction.
The back end computer will wait for either acknowledgement
or abortion, leaving it to the host computer to decide
when the terminal session has timed out. The
host computer will then issue an abort command on its own.
(c.3.2) R̲e̲c̲o̲v̲e̲r̲y̲ ̲P̲r̲o̲c̲e̲d̲u̲r̲e̲s̲
In case of a major failure which brings the back end
computer process or both IDM systems to a halt, e.g. a
total power failure, all pending transactions will
be aborted as soon as the back end computer process
and at least one IDM system are running again. In the
transaction log of the back end computer the transaction
entries will be prepared for resubmission.
The standard IDM recovery procedures will bring at
least one IDM system up to the point of breakdown,
and data base operations may continue using only one
IDM system, starting with the transactions in the
log of the back end computer not yet completed. When
the standard IDM recovery procedures for the other IDM
system are completed, bringing this too to the point
of breakdown, the back end procedure for synchronization
of the IDM systems is started, see below.
While the standard IDM recovery procedures are sufficient
for system halts which do not damage the data on disc
storage, recovery from disc data loss, e.g. a disc
crash, includes the back end computer for load of data
base and transaction log dumps stored on tape or different
discs. This procedure is semiautomatic, as the data
bases affected by the breakdown have to be identified.
The procedure then calls for the relevant dump files
to be mounted if not ready and executes the loads and
rollforward commands until the point of breakdown
is reached for each data base affected. The procedure
is completed one data base at a time, including synchronization,
bringing each data base to normal execution status
on both IDM systems in the sequence determined by the
constraints of availability.
In nearly any case of failure the recovery activity
will end up facing the problem of synchronization,
as one of the IDM systems will be running full scale,
while the other one is running only the non-affected data
bases; although it is ready (at the breaking point), it has not
yet resumed normal action on the affected data bases.
Running one IDM system alone means that updating
transactions are executed for all data bases to the benefit of
the host computers, leaving the other IDM system behind
on part of the affected data bases.
The synchronization procedure is executed one data base
at a time, bringing it to a running status. Phase
one transfers a dump of the transaction log from the
running IDM system into the waiting IDM system. During
the transfer operation the log is modified to fit the
receiving IDM data base, and the transaction log entries
in the back end computer are checked for completion. Then
the rollforward command is performed. The second
phase starts with abortion of all started transactions
and queuing them for restart together with new incoming
transactions until the data base is ready to proceed
on both IDM systems. The transaction log transfer
and rollforward are then repeated (with few or no
transactions). Having now completed the synchronization
of the data base, full scale operation is resumed,
starting with the queued transactions and nullifying
response data from resubmitted transactions already
communicated. During the periodical physical data base
dump from one IDM system, the data base operations
are routed to the other IDM system. The synchronization
procedure is then applied automatically.
(c.3.3) E̲x̲t̲e̲n̲d̲e̲d̲ ̲A̲u̲d̲i̲t̲ ̲F̲e̲a̲t̲u̲r̲e̲s̲
The standard IDM Audit Feature is based upon the transaction
log, whose main purpose is to recover the data base.
Only data for updating transactions for relations
actually logged is available.
The transaction log maintained by the back end computer
gathers considerably more information relevant for
auditing. The information may be transferred in a
formalized layout into a data base by request from
the system administrator.
From the data base the following information is then
available:
- transaction user identification
- transaction start and end time
- transaction number for each IDM system
- transaction restart status and completion code
- per command:
- command type
- target data base and relation (view)
- relations or views referred to
- start and end time
- number of tuples affected
- completion code
Just as the IDM System records a log of abnormal conditions,
which may be extracted directly from each IDM system,
the back end computer records a log of abnormal conditions
and restart procedures. From this the following information
may be extracted:
- communication failures to IDM systems
(IDM number, start and end time, error code)
- communication failures to host computers
(host number, start and end time, error code)
- per data base failure:
- data base identification
- error code
- recovery actions taken
- start and end time for break down
- start and end time for recovery
- number of transactions transferred from
the other IDM system
- first and last transaction number transferred
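Once this audit information has been transferred into a data base, it may be interrogated with the same query facilities as any other data. A brief sketch follows; the relation and attribute names are illustrative assumptions only:
SHOW (AUDIT.USERID, AUDIT.STARTTIME, AUDIT.COMPLETIONCODE)
WHERE AUDIT.DATABASE = "BUDGET"
AND AUDIT.COMMANDTYPE = "REPLACE"
Such a query would list who updated the named data base, when, and with what outcome.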
(c.4) T̲h̲e̲ ̲D̲i̲r̲e̲c̲t̲ ̲D̲a̲t̲a̲ ̲I̲n̲t̲e̲r̲r̲o̲g̲a̲t̲i̲o̲n̲ ̲P̲r̲o̲g̲r̲a̲m̲s̲
All application programs communicate with the data bases
in the General Purpose Query Language IDL, using the
parser in the back end machine for translation to and
from the IDM communication language. While most application
programs use the data bases as a means for storage
and retrieval of data, data manipulation is the
purpose of the Direct Data Interrogation Programs (DDIP).
The DDIPs are developed on top of the IDL to provide
application subsystem dependent services and to minimize
the input of control commands for the IDM accesses.
For each application subsystem the DDIP is tailored
to reflect the data base belonging to the subsystem,
the terminology and the specially prepared functions
for guidance of use, field decoding and data value
dependent data protection.
In general the DDIPs provide the following types of
services over and above the IDM facilities:
- default transaction demarcation
- extension of abbreviated keywords
- interactive syntax control
- interactive command text editing
- default 'order by' clause insertion
- automatic open and close of data base
- application specific help-function
- default and optional limitation of time and number
of tuples affected
- menu driven data dictionary lookup
- application dependent formatted data entry
- report writer formatting usage and definition
- text extension for field decoding
- text extension for data value dependent data protection
4.2.2.d L̲a̲n̲g̲u̲a̲g̲e̲ ̲P̲r̲o̲c̲e̲s̲s̲o̲r̲s̲
The CR80 language processors include the following:
- CR80 COBOL is an efficient industry-compatible
two-pass compiler, fulfilling American National
Standard X3.23-1974 level 1 as well as most of
the level 2 features.
- PASCAL is a high level block-oriented language
that offers structured and complex data types and enforces
well-structured programs. The CR80 implementation
is based on standard Pascal as defined by Kathleen
Jensen & Niklaus Wirth, with only minor deviations.
The CR80 implementation provides for bit mask
operations in addition to standard PASCAL data
structures. Furthermore, the CR80 implementation
provides the following powerful additions:
- Compile time option enables merging assembly
object directly into the Pascal module.
- Overlay technique is supported.
- Built-in Trace of program execution may optionally
be switched in/out for debugging purposes.
- Sequential and random file access is available
from run time library.
- SWELL 80 is a S̲oftW̲are E̲ngineering L̲ow level
L̲anguage for the CR80 minicomputer. SWELL
offers most of the data and program structures
of PASCAL, and, by enabling register control,
is without the efficiency penalties experienced
in true high-level languages. The main purpose
of SWELL is to combine efficient program execution
with efficient program development and maintenance.
- The Assembler is a machine-oriented language
for the CR80. The language has a direct correspondence
between instructions read and code generated.
- ADA compiler. A project has been launched for
implementation of the new DOD standard programming
language ADA on the CR80 machine. The project
is
planned for completion early 1984 and includes development
of an ADA compiler hosted on and targeted for the CR80
as well as of an ADA programming support environment.
The programming support environment is based on the
Stoneman report.
4.2.2.e G̲e̲n̲e̲r̲a̲l̲ ̲U̲t̲i̲l̲i̲t̲i̲e̲s̲
The CR80 utility software package includes:
- Editor
- File Copy, including media conversion
- File Compare
- File Merge
- Interactive Patch Facility
- Memory Dump
- On-line Test Output Facility (Trace)
- On-line Interactive Debugger
- File Maintenance Program
- Magnetic Tape Maintenance Program
4.2.2.f G̲r̲a̲p̲h̲i̲c̲s̲
The proposed graphics software, MEGATEK TEMPLATE, consists
of a package of high level graphics routines which,
together with the proposed Color Graphic Display Units
(GCDU), will provide the following functions
(for a more detailed description see the technical literature):
Interactive
- Creation of geometric figures such as arcs, conic
sections, circles, rectangles and polygons.
- Multiple font sizes for labelling and explanatory
texts.
- Interactive color labelling.
- Interactive editing and selective erasure.
- Selection of alternative data display formats.
- Independent scaling and windowing.
- Addition, subtraction and superposition of two graphs.
- Store and retrieve graphic images.
- Process graphic images entered from a digitizer.
4.2.2.g T̲e̲x̲t̲ ̲E̲d̲i̲t̲o̲r
The Access System will be equipped with a standard
text editor used on our CR80 systems for development
purposes. It offers a variety of operations, including:
- inserting lines
- deleting lines
- substituting patterns
- reading and writing files
The editor is intended to be used on-line, but can
be used as a batch program.
The commands available are:
E̲n̲t̲e̲r̲ which is used to specify the file containing
the text to be edited.
R̲e̲a̲d̲ command is used to set the cursor on the line
to be edited.
W̲r̲i̲t̲e̲ command is used to insert one or more lines into
the editing file.
A̲p̲p̲e̲n̲d̲ command is used to append text to the editing
file.
I̲n̲s̲e̲r̲t̲ command is used to insert text lines into the
editing file.
C̲h̲a̲n̲g̲e̲ command is used to delete one or more lines.
M̲o̲v̲e̲ command is used to move one or more lines in the
editing file.
C̲o̲p̲y̲ command is used to copy one or more lines from
one place to another.
S̲u̲b̲s̲t̲i̲t̲u̲t̲e̲ command is used to replace a character pattern
with another.
R̲e̲p̲e̲a̲t̲ command is used to repeat commands.
Q̲u̲i̲t̲ command is used to terminate the editing session.
In addition to this the VDU has editing capabilities
like insert/delete characters.
4.2.2.h R̲e̲p̲o̲r̲t̲ ̲W̲r̲i̲t̲e̲r̲
The Report Writer is a specialized tool for formatting
and output of data from the IDM data bases and from
any stored data file in the ACCESS System. Through
different commands, the specification of what data to
use, what tabulations to make and what data and format
to output may be input ad hoc or stored in the data
bases for later use by name reference.
The functions of the Report Writer may be used as a
subset of the DDIP for data base input or as a separate
program for definition and execution of report output.
While the use of data base information takes advantage
of the internally maintained data dictionary of the IDM,
other sources of information will have to be explicitly
described, and the description is subject to traditional
maintenance when the input formats are changed. In
both cases, however, definitions may be stored in the IDM
data base for ease of use.
Like the query language, the Report Writer language is
non-procedural, as no information about how to access
data is requested from the user. Input to the report
writer is a sequential file or relation, which by means
of the IDM System may be extracted from a number of
relations and sorted before actually being input. All references
to data are by name, which is standard for IDM use,
while non-database files may have a record description
stored like the relation description of data base relations.
Input Specification
Specification of input of data base information is
done by phrasing a question as is usual for interrogation
by the DDIP, while other information has to be addressed
through input of file name, storage location and name
of record description. A record description may be
input directly, stating data formats and positions exactly
like the definition for creation of a relation in the
data base.
The 'order by' clause may be used to sort the input
into any ascending or descending sequence, which can
be indicated by one or more data values represented
in the file or relation. The sort criteria values,
however, need not be output in the report.
The 'grouped by' clause may be used to indicate break
total criteria, for which print commands may be defined
using tabulated totals and calculated results for the
group as input data. The sequence of data names indicates
the break level, forcing a break on low order criteria
inserted first, when a high order break occurs. The
break criteria values, however, need not be output
in the report.
Output Specification
Specification of output includes definition of events,
output actions to be taken for each event and format
and position of output data. By default the output
will be directed to the terminal requesting the report.
However the report may include or the user may input
a reference to a preferred output device.
As events may be defined:
- number of printing lines per page
- start of a group
- end of a group
- start on first page
- start on page
- end of page
- first record or tuple
- last record or tuple
Output actions are specified for each event with the
following options:
- print a line (as specified)
- skip a number of lines (or to line number)
- test a condition (if-statement)
- skip page
For each event one or more output actions may be specified,
to be performed in the stated order. Events occurring during
execution of an output action will be acted upon immediately,
queuing the remainder of the pending action to be completed
after the actions of the interrupting event.
Line specification is a description of each line to
be output, when the actual event occurs. The specification
includes a reference to data items and position and
format of the output fields to be used.
As data items may be used:
- record or tuple data items
- sums of record or tuple data items
- calculated results of arithmetic expressions
- constants (text and numeric)
- constant functions (date, time, userid)
- parameter references (stored specifications only)
As format and position, the input description to the
report writer may be used by default, with a blank position
inserted between each output field. However, a mask
may be applied which specifies both field length and
position (sum of foregoing field lengths) and data
editing, including decimal point location and filling
by zero, blank or star.
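As a purely illustrative sketch, a simple report definition might combine these elements as follows. The proposal does not fix the Report Writer command syntax, so every keyword below, as well as the relation and attribute names, is a hypothetical rendering of the facilities described above:
INPUT SHOW (AIRMAN.UNIT, AIRMAN.NAME, AIRMAN.RANK)
ORDER BY AIRMAN.UNIT
GROUPED BY AIRMAN.UNIT
ON END OF GROUP PRINT "AIRMEN IN UNIT", COUNT(AIRMAN.NAME)
ON END OF GROUP SKIP 2 LINES
ON END OF PAGE SKIP PAGE
The input specification selects and sorts the data through the IDM System; the output actions are then tied to the events listed above.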
4.2.2.i S̲p̲e̲l̲l̲i̲n̲g̲ ̲C̲o̲r̲r̲e̲c̲t̲o̲r̲
The Spelling Corrector program allows you to check
a text file for spelling errors and typos, comparing
the words against the entries in one or more dictionaries
on disk. The Spelling Corrector includes a basic dictionary
of some 35,000 English words. You can update this
dictionary by adding and deleting words.
The two main operations you can perform with the Spelling
Corrector are checking spelling and dictionary maintenance.
P̲r̲o̲o̲f̲r̲e̲a̲d̲i̲n̲g̲ ̲t̲h̲e̲ ̲f̲i̲l̲e̲
The Spelling Corrector "proofreads" your document by
first counting and sorting the words in the text/document
file, and counting the words in the main and supplemental
dictionaries you have specified. Then it compares the
text file words with those in the dictionary specified.
The sample on-screen summary shown below reflects the
results of the proofreading.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
The Spelling Corrector is now checking your document
for misspelled words.
Number of words in document .......... 430
Number of different words ............ 210
Number of words in main dictionary ... 19340
Number of words in supplement ........ 201
Number of dictionary words checked ... 19100
Number of misspelled words ........... 30
TOTAL number of misspellings .........
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
 ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
Spelling Corrector Example
The "different" words in the document are the unique
words left after cancellation of duplicates.
The changing number on the fifth line, "Number of dictionary
words checked", indicates how far the Spelling Corrector
has gotten with your spelling check.
A "misspelled word" is any word that does not match
up with a word in the dictionary or dictionaries specified.
So, a "misspelled word" can be either:
o a misspelling (or typo) of a word in your dictionary,
OR
o a word that is not in your dictionary, spelled
correctly or incorrectly.
The Spelling Corrector leaves the "total number of
misspellings" blank until later.
At this point in the spelling check, the Spelling Corrector
has not yet actually flagged the errors within the
text file. So, you can select to see listed on the
screen all the misspellings, typos, and words unmatched
in the dictionary; or you can leave the spelling check
at this point without affecting your file.
Listing the words apart from the text file at this
point can be a handy double-check that you specified
the right dictionary. If you see many words listed
you
thought were in the dictionary, leave the spelling
check to go back and check whether the specified main
or supplemental dictionary is the right one. Restart
the spelling check.
If, in a listing of misspelled words, the list exceeds
one screenful, a message will appear at the bottom
of the screen giving commands to go on to the next
screenful, scroll the list continuously, or stop the
listing entirely.
At the end of the listing, you may tell the Spelling
Corrector to proceed with flagging the errors, or return
to the Main menu.
F̲l̲a̲g̲g̲i̲n̲g̲ ̲t̲h̲e̲ ̲F̲i̲l̲e̲
When the Spelling Corrector flags the document file,
it inserts a marker in front of every misspelled word
it has found in the proofreading step.
Single stand-alone letters, like those in outlines,
are also not flagged.
While flagging the document, the Spelling Corrector
creates a file with the same filename as the document
file, but with an extension suffix XXX. This file contains
your document with all the errors flagged. Once this
file is created, the files used to sort the words in
your document disappear.
When the flagging is completed, the document may be
corrected; or the Spelling Corrector may be ended,
leaving the document flagged but not corrected.
C̲o̲r̲r̲e̲c̲t̲i̲n̲g̲ ̲t̲h̲e̲ ̲F̲i̲l̲e̲
The Spelling Corrector will display the first part
of the document where it has flagged errors.
The cursor will appear flashing at the first flagged
word; the marker will have been removed. The five possible
actions you may take on each word are listed above.
If a word is flagged, it simply means it is not in
the dictionary. It could be misspelled or it could
be correctly spelled - you must determine that yourself.
You also have no way of knowing if a misspelled word
is in the dictionary in its correct form or not, as
long as you are in the correction phase. You can, however,
make a note of the word and find out later if it is
in the dictionary.
If the word is misspelled and you want to correct it,
you can get into the edit mode. With the full range
of editing capabilities available to you, you can correct
the spelling, reformat the paragraph if necessary, and
make any other changes in the document that might occur
to you.
If you do not change the word, it will no longer be
marked as misspelled.
If you suspect a flagged word is misspelled, but want
to wait until you have gone through the whole document
before looking it up, you can bypass the word. It will
remain flagged, and you can come back to it later.
S̲e̲t̲t̲i̲n̲g̲ ̲A̲s̲i̲d̲e̲ ̲W̲o̲r̲d̲s̲ ̲f̲o̲r̲ ̲a̲ ̲D̲i̲c̲t̲i̲o̲n̲a̲r̲y̲
When the Spelling Corrector has flagged a word that
is spelled correctly, you can decide to add it to a
dictionary.
When you choose "add to dictionary" or "add to supplemental
dictionary" during the correction phase, the Spelling
Corrector simply assigns the word to one of two groups,
and sets them aside in a file it creates.
You use the created file later, in a dictionary maintenance
run, to actually add the words it contains to one or
more dictionaries. That is, when you run maintenance
for the created file, you could add the words to or
delete words from any dictionary you name. The words
you selected to add to the main dictionary can also
be added to a supplement if you like, or vice versa.
If you think you may recheck your document before adding
the first run's file to any dictionary, saving the
file can be a definite advantage.
"̲I̲g̲n̲o̲r̲e̲"̲ ̲a̲n̲d̲ ̲"̲A̲d̲d̲"̲ ̲I̲n̲s̲t̲r̲u̲c̲t̲i̲o̲n̲s̲ ̲R̲e̲m̲e̲m̲b̲e̲r̲e̲d̲ ̲b̲y̲ ̲t̲h̲e̲ ̲S̲p̲e̲l̲l̲i̲n̲g̲
̲C̲o̲r̲r̲e̲c̲t̲o̲r̲
The words you tell the Spelling Corrector to ignore
or to add to a dictionary will be remembered the next
time they appear in the same document, and the cursor
will not stop at them. The Spelling Corrector can remember
"ignore" and "add" instructions.
4.2.2.j. D̲o̲c̲u̲m̲e̲n̲t̲ ̲F̲o̲r̲m̲a̲t̲t̲e̲r̲
The Document Formatter provides the formatting controls,
which are most likely to be used when preparing documents.
It produces output for devices like terminals and line
printers, with automatic right justification, pagination
(skipping over the fold in the paper), page numbering
and titling, centering, indenting, and multiple line
spacing.
The Document Formatter accepts text to be formatted,
interspersed with formatting commands telling the program
what the output is to look like. A command consists
of a colon, a two-letter name, and perhaps some optional
information. Each command must appear at the beginning
of a line, with nothing on the line but the command
and its arguments. For instance,
:CE
centers the next line of output, and
:SP 3
generates three spaces (blank lines).
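As a small worked example (illustrative only), the input
:CE
MONTHLY STATUS
:SP 2
This report was typed on
several short input lines.
would print the heading MONTHLY STATUS centered, leave two blank
lines, and then print the two short input lines filled together
into one output line.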
Default parameter settings and formatting actions are
intended to be reasonable and free of surprises. Ideally
a document containing n̲o̲ commands should be printed
sensibly. For instance, words fill up output lines as
much as possible, regardless of the length of input
lines. Blank lines cause fresh paragraphs. Input is
correctly spaced across page boundaries, with top and
bottom margins.
C̲o̲m̲m̲a̲n̲d̲s̲
As mentioned, all commands consist of a colon at the
beginning of a line, which is a rather unlikely combination
in text, and have two-letter names. This is a reasonable
compromise between brevity and mnemonic value.
By default the Document Formatter f̲i̲l̲l̲s̲ output lines,
by packing as many input words as possible into an
output line before printing it. The lines are also
j̲u̲s̲t̲i̲f̲i̲e̲d̲ (right margins made even) by inserting extra
spaces into the filled lines before output. The filling
of lines can be turned off, however, by the n̲o̲-̲f̲i̲l̲l̲
command
:NF
and thereafter lines will be copied from input to output
without any rearrangement. Filling can be turned back
on, with the f̲i̲l̲l̲ command
:FI
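The general fill-and-justify technique can be sketched in a few
lines of Python. This is only an illustration of the idea; the
Document Formatter's own word-breaking and space-distribution
rules may differ.

def fill_and_justify(words, width):
    # Illustrative greedy fill with simple justification.
    if not words:
        return []
    lines, current = [], []
    for word in words:
        # Start a new output line when the next word no longer fits.
        if current and len(" ".join(current + [word])) > width:
            lines.append(current)
            current = []
        current.append(word)
    lines.append(current)

    out = []
    for line in lines[:-1]:                  # the last line is left ragged
        gaps = len(line) - 1
        if gaps == 0:
            out.append(line[0])
            continue
        extra = width - sum(len(w) for w in line)   # spaces to distribute
        out.append("".join(
            w + " " * (extra // gaps + (1 if i < extra % gaps else 0))
            for i, w in enumerate(line[:-1])) + line[-1])
    out.append(" ".join(lines[-1]))
    return out

# For instance, fill_and_justify("now is the time for all good
# people".split(), 20) yields lines exactly 20 characters wide,
# except for the last one.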
When an :NF is encountered, there may be a partial
line collected but not yet output. The :NF will force
this line out before anything else happens. The action
of forcing out a partially collected line is called
a b̲r̲e̲a̲k̲. The break concept pervades the Document Formatter;
many commands implicitly cause a break. To force a
break explicitly, for example to separate two paragraphs,
use
:BR
Of course you may want to add an extra blank line between
paragraphs. The s̲p̲a̲c̲e̲ command
:SP
causes a break, then produces a blank line. To get
n blank lines, use
:SP n
(A space is always required between a command and its
argument). If the bottom of the page is reached before
all of the blank lines have been printed, the excess
ones are thrown away, so that all pages will normally
start at the same first line.
If it is desirable to force a table or a program to
appear all on one page, use the n̲e̲e̲d̲ command
:NE n
which simulates a "begin page" command at the beginning
of the table, but only if it would actually fall across
a page boundary. The command says, "I need n lines;
if there are not that many on this page, skip to a
new page."
By default output will be single spaced, but the line
spacing can be changed at any time:
:LS n
sets line spacing to n (n=2 is double spacing). The
:LS command does not cause a break.
The b̲e̲g̲i̲n̲ p̲a̲g̲e̲ command :BP causes a skip to the top of
a new page and also causes a break. If you use
:BP n
the next output page will be numbered n. A :BP that
occurs at the bottom of a page has no effect except
perhaps to set the page number; no blank page is generated.
The current page length can be changed (without a break)
with
:PL n
To center the next line of output,
:CE
line to be centered
The :CE command causes a break. You can center n lines
with
:CE n
and, if you do not like to count lines (or can not
count correctly), say
:CE 1000
lots of lines
to be centered
:CE 0
The lines between the :CE commands will be centered.
No filling is done on centered lines.
Underlining is much the same as centering:
:UL n
causes the text on the next n lines to be underlined
upon output. But :UL does not cause a break, so words
in filled text may be underlined by
words and words and
:UL
lots more
words
to get
words and words and l̲o̲t̲s̲ m̲o̲r̲e̲ words.
Centering and underlining may be intermixed in any
order:
:CE
:UL
Title
gives a centered and underlined title.
The i̲n̲d̲e̲n̲t̲ command controls the left margin:
:IN n
causes all subsequent output lines to be indented n
positions. (Normally they are indented by 0.) The command
:RM n
sets the r̲i̲g̲h̲t̲ m̲a̲r̲g̲i̲n̲ to n. The line length of filled
lines is the difference between right margin and indent
values. :IN and :RM do not cause a break.
The traditional paragraph indent is produced with the
t̲e̲m̲p̲o̲r̲a̲r̲y̲ i̲n̲d̲e̲n̲t̲ command:
:TI n
breaks and sets the indent to position n for one output
line only. If n is less than the current indent, the
first line is moved back to the left (a "hanging indent").
To put running header and footer titles on every page,
use :H1, :H2 and :F2:
:H1 this becomes the first line of
header
:H2 and this the second line
The title begins with the first non-blank after the
command, but a leading quote will be discarded if present,
so you can produce titles that begin with blanks. The
titles are not indented as normal text lines, but the
h̲e̲a̲d̲e̲r̲ i̲n̲d̲e̲n̲t̲ command
:HI n
controls the indentation of titles in a way similar
to the :IN command.
The u̲p̲p̲e̲r̲ m̲a̲r̲g̲i̲n̲ commands control the number of free
lines over and under the t̲i̲t̲l̲e̲:
:U1 n
sets the number of lines in the upper margin over and
including the headers to n (note: the headers are
double spaced, and therefore take up three lines).
:U2 n
sets the number of lines between headers and text to
n.
The number of free lines at the bottom of the pages
may be controlled by the l̲o̲w̲e̲r̲ m̲a̲r̲g̲i̲n̲ commands:
:L1 n
defines the number of blank lines between the text
and the footers to be n. The number of lines of bottom
margin including footers (also double spaced) may be
set by
:L2 n
to be n.
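As an illustration only (the figures are arbitrary, not defaults),
the commands
:PL 66
:U1 6
:U2 2
:L1 2
:L2 6
lay out a 66-line page with six lines of top margin including the
headers, two blank lines between the headers and the text, two blank
lines between the text and the footers, and six lines of bottom
margin including the footers, leaving 50 lines per page for text.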
Since absolute numbers are often awkward, the Document
Formatter allows r̲e̲l̲a̲t̲i̲v̲e̲ values as command arguments.
All commands that allow a numeric argument n also allow
+n or -n instead, to signify a c̲h̲a̲n̲g̲e̲ in the current
value. For instance,
:RM -10
:IN +10
shrinks the right margin by ten f̲r̲o̲m̲ i̲t̲s̲ c̲u̲r̲r̲e̲n̲t̲ v̲a̲l̲u̲e̲,
and moves the indent 10 places f̲u̲r̲t̲h̲e̲r̲ to the right.
Thus
:RM 10
and
:RM +10
are quite different.
Relative values are particularly useful with :TI, to
temporarily indent relative to the current indent:
:IN +5
:TI +5
produces a left margin indented by 5, with the first
line indented by a further 5. And
:IN +5
:TI -5
produces a hanging indent, as in a numbered paragraph:
1. Now is the time for all good people to
come to the party.
To help in such situations, a tilde may be useful.
It is treated as a letter when deciding how to fill,
but it is nevertheless output as a space.
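For instance, the numbered paragraph shown earlier might be typed
as follows (a sketch only). The tildes are counted as letters during
filling, so "1.~~~Now" stays together as one word, yet they print
as spaces, leaving the text aligned with the indented lines that
follow:
:IN +5
:TI -5
1.~~~Now is the time for all good people
to come to the party.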
A line that begins with a blank is a special case.
If there is no text at all, the line causes a break
and produces a number of blank lines equal to the current
line spacing. These lines are never discarded regardless
of where they appear, so they provide a way of getting
blank lines at the top of a page. If a line begins
with n blanks followed by text, it causes a break and
a temporary indent of +n. These special actions help
ensure that a document that contains no formatting
commands will still be reasonably formatted.
The command
:DA n
will align the n columns down the following lines, so that
all columns have their decimal points aligned.
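For example (illustrative only), the command
:DA 2
placed in front of lines such as
1.25  9.5
12.5  120.75
999.0  3.1
would cause the two numeric columns to be printed with the decimal
points of each column lined up vertically.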
4.2.2.k. S̲t̲a̲t̲i̲s̲t̲i̲c̲s̲
The Statistics Package will provide capabilities for
the users to interactively perform basic mathematical
operations and statistical analysis on available data.
Available mathematical operations are:
- addition, subtraction
- multiplication, division
- square, square root
- percentage
On specified sets of data, calculation of the following
statistical measures is provided (an illustrative sketch
follows the list):
- median
- mode
- geometric mean
- standard deviation
- regression coefficient
- correlation coefficient
- confidence coefficient
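The Python fragment below merely illustrates the formulas behind
some of these measures; the function names and sample data are
invented for the example, and the confidence coefficient is omitted
because it depends on the confidence level chosen by the user.

import math
from statistics import median, mode, stdev

def geometric_mean(xs):
    # n-th root of the product of the n values
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def regression_coefficient(xs, ys):
    # Slope b of the least-squares line y = a + b*x
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def correlation_coefficient(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

data = [2.0, 4.0, 4.0, 5.0, 7.0]
print(median(data), mode(data), geometric_mean(data), stdev(data))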
4.2.2.l. O̲p̲t̲i̲c̲a̲l̲ ̲C̲h̲a̲r̲a̲c̲t̲e̲r̲ ̲R̲e̲a̲d̲e̲r̲
The OCR software package will provide a user interface
to the OCR with the following capabilities:
- specify input format characteristics
- specify file and directory for OCR output
- correct changes flagged by the OCR