A C C E S S
AUTOMATED COMMAND AND CONTROL
EXECUTIVE SUPPORT SYSTEM
DOC NO ACC/8004/PRP/001 ISSUE 1
PART II
TECHNICAL PROPOSAL
SUBPART F
PROPOSED SYSTEM FACILITY
SUBMITTED TO: AIR FORCE COMPUTER ACQUISITION CENTER (AFCAC)
Directorate of Contracting/PK
Hanscom AFB
MA 01731
USA
IN RESPONSE TO: Solicitation No F19630-82-R-0001
AFCAC Project 211-81
PREPARED BY: CHRISTIAN ROVSING A/S
SYSTEM DIVISION
LAUTRUPVANG 2
2750 BALLERUP
DENMARK
© Christian Rovsing A/S - 1982
This document contains information proprietary to Christian
Rovsing A/S. The information, whether in the form of text, schematics,
tables, drawings or illustrations, must not be duplicated or
used for purposes other than evaluation, or disclosed outside
the recipient company or organisation without the prior, written
permission of Christian Rovsing A/S.
This restriction does not limit the recipient's right to use
information contained in the document if such information is
received from another source without restriction provided such
source is not in breach of an obligation of confidentiality
towards Christian Rovsing A/S.
T̲A̲B̲L̲E̲ ̲O̲F̲ ̲C̲O̲N̲T̲E̲N̲T̲S̲
6. PROPOSED SYSTEM FACILITY ...................
6.1 PROPOSED SYSTEM FACILITY REQUIREMENT .....
6.2 RESERVED ...............................
6.3 SYSTEM RELIABILITY, SHUTDOWN AND RECOVERY
6.3.1 Reliability, Maintainability and
Availability Analysis ................
6.3.1.1 Introduction .......................
6.3.1.2 RMA Analysis .......................
6.3.1.3 Reliability Models and Block Diagrams
6.3.1.4 Reliability Model for ACCESS User ..
6.3.1.5 Equipment Mean Time Between Failures
6.3.1.6 Equipment Maintainability ..........
6.1 P̲R̲O̲P̲O̲S̲E̲D̲ ̲S̲Y̲S̲T̲E̲M̲ ̲F̲A̲C̲I̲L̲I̲T̲Y̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲
a) The facility requirements of room BB-30 concerning
each piece of equipment are listed in table II 6.1-1,
sheets 1 and 2. The racks require 120V, 2-phase,
neutral and ground (4 wires); the disc I/F and
disc drives require 208V, 2-phase and ground (3
wires). A power factor of 0.8 is used where only
volt-amperes or watts are specified by the equipment
manufacturer.
The relative humidity and the temperature range given
are operating equipment characteristics.
For Room BB-30:
The operational floor area requirement is calculated
on the basis that the layout of room BB-30 has been
arranged so that different equipment shares the
same access space (see FIG. I 3.3.1-1).
The total requirements for the first increment installation
are indicated in table II 6.1-1, sheets
1 and 2.
b) The facility requirements for the expanded system
are given in table II 6.1-2. The requirements are
given for the fully expanded system shown in FIG.
I 3.3.1-2.
c) The facility requirements for terminal equipment
to be installed in the rooms of buildings 500, 501,
40, 41 and 407 are given in table II 6.1-3.
6.2 Reserved
REV. 1
1983-03-18
TABLE II 6.1-1, SHEET 1
PROPOSED SYSTEM FACILITY REQUIREMENTS
(FIRST INCREMENT)
TABLE II 6.1-1, SHEET 2
PROPOSED SYSTEM FACILITY REQUIREMENTS
(FIRST INCREMENT)
TABLE II 6.1-2
PROPOSED SYSTEM FACILITY REQUIREMENTS
(EXPANSION)
TABLE II 6.1-3
PROPOSED SYSTEM FACILITY REQUIREMENTS
(TERMINAL EQUIPMENT)
6.3 S̲Y̲S̲T̲E̲M̲ ̲R̲E̲L̲I̲A̲B̲I̲L̲I̲T̲Y̲,̲ ̲S̲H̲U̲T̲D̲O̲W̲N̲ ̲A̲N̲D̲ ̲R̲E̲C̲O̲V̲E̲R̲Y̲
6.3.1 R̲E̲L̲I̲A̲B̲I̲L̲I̲T̲Y̲,̲ ̲M̲A̲I̲N̲T̲A̲I̲N̲A̲B̲I̲L̲I̲T̲Y̲ ̲A̲N̲D̲ ̲A̲V̲A̲I̲L̲A̲B̲I̲L̲I̲T̲Y̲ ̲A̲N̲A̲L̲Y̲S̲I̲S̲
This chapter provides a detailed analysis of the
reliability and maintainability provided by the proposed
equipment. The analysis covers the full range of the
proposed system architecture. Furthermore, detailed
information with respect to failure rates and repair
times is provided for the various components and modules
included in the architecture.
6.3.1.1 I̲N̲T̲R̲O̲D̲U̲C̲T̲I̲O̲N̲
The availability of the proposed equipment is very
high, due not only to the high reliability of individual
system elements, but mainly to the chosen CR80
computer configuration, in which functionally identical
elements automatically substitute for each other in
case of failure.
Overall system availability has been calculated.
The high system availability has been achieved by the
use of highly reliable modules, redundant processor
units and automatic reconfiguration facilities. Care
has been taken to ensure that single point errors do
not cause total system failure.
The reliability criteria imposed on the computer systems
have been evaluated, and the proposed hardware/software
operational system has been analysed to determine the
degree of availability and data integrity provided.
In this chapter, reliability is stated in numerical
terms and the detailed predictions derived from mathematical
models are presented.
The availability predictions are made in accordance
with system reliability models and block diagrams corresponding
to the proposed configuration. This procedure involves
the use of module-level and processor-unit-level failure
rates, or MTBF (mean time between failures), and MTTR
(mean time to repair); these factors are used in conjunction
with a realistic modelling of the configuration to
arrive at system-level MTBF and availability.
Tabulated results of the analysis are presented including
the reliability factors: system MTBF and repair time
MTTR.
The basic elements of the proposed system architecture
are constituted by standard CR80 units. Reliability
and maintainability engineering was a significant factor
in guiding the development of the CR80.
The CR80 architecture is designed with a capability
to achieve a highly reliable computer system in a cost-effective
way. It provides a reliable set of services to the
users of the system because it may be customised to
the actual availability requirements. The CR80 fault-tolerant
computers are designed to avoid single point errors
in all critical system elements by provision of redundant
paths, processing capabilities and power supplies.
The architecture reflects the fact that the reliability
of peripheral devices is lower than that of the associated
CR80 device controllers. This applies equally well
to communication lines where modems are used as part
of the transmission media. Thus, the peripheral devices,
modems, communication line, etc., impact the system
availability much more than the corresponding device
controllers.
To assure this very highly reliable product, several
criteria were also introduced at the module level:
- An extensive use of hi-rel, mil-spec components;
ICs are tested to the requirements of MIL-STD-883
level B or similar.
- All hardware is designed in accordance with the
general CR80 H/W design principles. These include
derating specifications, which greatly enhance the
reliability and reduce the sensitivity to parameter
variations.
- Critical modules feature a Built-In Test (BIT)
capability as well as a display of the main states
of the internal process by Light Emitting Diodes
on the module front plate. This greatly improves
module maintainability, as it provides debugging
and troubleshooting methods which reduce the
repair time.
- A high-quality production line, which includes
high-quality soldering, inspection, burn-in and
an extensive automatic functional test.
6.3.1.2 R̲M̲A̲ ̲A̲N̲A̲L̲Y̲S̲I̲S̲
This section provides information with respect to RMA
analysis of a system. It includes the detailed formulas
which apply as part of the RMA calculation.
The RMA analysis of a system provides information on
how much of the time the system provides a given set
of required functional capabilities, i.e. provides
operative availability. It shows how many times the
system is not operative during a given period and for
how long. A system may be operative even with one
or more elements of the total system down or taken
off-line for the purpose of repair and/or replacement
of defective modules/units. Note that this is operative
as seen by a user of the functional capabilities, not
as seen by maintenance personnel.
The basis for determining the system level availability
is an RMA model of serial and parallel system elements.
Each of these elements defines a specific subset of
the total system with a well defined status, either
functioning or not.
Serial elements refer to elements all of which have
to be available for that set to be available.
Parallel elements describe those sets where not all
elements need to be available; the number required is
determined by the required service level or the redundancy
provided.
The subsequent section introduces the basic RMA building
blocks.
6.3.1.2.1 S̲E̲R̲I̲E̲S̲ ̲E̲L̲E̲M̲E̲N̲T̲
The mean time between failure (MTBF) of a series of
n different RMA elements is made up as follows:
MTBF(S) = 10^6/LAMBDA(S)
where the series failure rate (LAMBDA(s)) is determined
by the sum of the failure rates of the elements:
LAMBDA(S) = LAMBDA(1) + LAMBDA(2) + ...
+ LAMBDA(i) + ... + LAMBDA(n).
LAMBDA (i) denotes the failure rate of the i'th element,
i.e. the expected number of failures per 1 million hours
of operation.
MTBF (S) is thus expressed in hours.
The availability of a system of n serial RMA elements
is determined by:
A(S) = A(1) * A(2) * ... * A(i) * ... * A(n)
A(i) = MTBF(i)/(MTBF(i) + MTTR(i))
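As an illustration, the series rules above can be evaluated
directly. The following is a minimal Python sketch using
hypothetical failure rates, expressed as above in failures
per 10^6 hours:

    # Minimal sketch of the series RMA rules of section 6.3.1.2.1.
    # Failure rates (lambda) are in failures per 10^6 hours; MTBF
    # and MTTR are in hours. All values below are hypothetical.

    def series_mtbf(lambdas):
        # MTBF(S) = 10^6 / LAMBDA(S), where LAMBDA(S) is the sum
        # of the element failure rates.
        return 1e6 / sum(lambdas)

    def availability(mtbf, mttr):
        # A(i) = MTBF(i) / (MTBF(i) + MTTR(i))
        return mtbf / (mtbf + mttr)

    def series_availability(elements):
        # A(S) = A(1) * A(2) * ... * A(n) for (MTBF, MTTR) pairs.
        a = 1.0
        for mtbf, mttr in elements:
            a *= availability(mtbf, mttr)
        return a

    # Three hypothetical serial elements: 10, 25 and 40 failures
    # per 10^6 hours, each with a 1-hour repair time.
    lambdas = [10.0, 25.0, 40.0]
    print(series_mtbf(lambdas))  # about 13333 hours
    print(series_availability([(1e6 / l, 1.0) for l in lambdas]))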
6.3.1.2.2 P̲a̲r̲a̲l̲l̲e̲l̲ ̲E̲l̲e̲m̲e̲n̲t̲s̲
When RMA elements are in parallel, it is required that
one or more of the parallel units are operative simultaneously
to obtain the required system performance. The actual
number of parallel units required is dependent on the
actual models. Assuming operational redundancy and
negligible recovery time, the calculation rules are:
a. M̲e̲a̲n̲ ̲T̲i̲m̲e̲ ̲B̲e̲t̲w̲e̲e̲n̲ ̲F̲a̲i̲l̲u̲r̲e̲
When the parallel elements have defined MTBF and
MTTR values the following rules apply:
1) MTBF(E) = MTBF(1 of 2 equal parallel elements)
MTBF(E) = (2*MTBF*MTTR + MTBF^2)/(2*MTTR), or
MTBF(E) = MTBF^2/(2*MTTR),
provided that MTTR is much less than MTBF
2) MTBF(E) = MTBF(n of n+1 equal parallel elements)
MTBF(E) = ((n+1)*MTBF*MTTR + MTBF^2)/(n*(n+1)*MTTR), or
MTBF(E) = MTBF^2/(n*(n+1)*MTTR),
provided that (n+1)*MTTR is much less than MTBF
b. M̲e̲a̲n̲ ̲T̲i̲m̲e̲ ̲t̲o̲ ̲R̲e̲p̲a̲i̲r̲
The element mean time to repair, MTTR(E),
corresponds to the period where fewer than n of
the n+1 units are available, i.e. the element
is not fully operative.
1) MTTR(E) = MTTR(1 of 2 equal parallel elements)
MTTR(E) = MTTR/2
2) MTTR(E) = MTTR(n of n+1 equal parallel elements)
MTTR(E) = MTTR/2
c. A̲v̲a̲i̲l̲a̲b̲i̲l̲i̲t̲y̲
The availability corresponds to the ratio between
the MTBF and the total time, which is equal to the
sum of MTBF and MTTR for the element, thus:
A(E) = MTBF(E)/(MTBF(E) + MTTR(E))
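A minimal Python sketch of rules a) to c) above, using
hypothetical MTBF and MTTR values:

    # Minimal sketch of the parallel-element rules of section
    # 6.3.1.2.2. All times are in hours; operational redundancy
    # and negligible recovery time are assumed, as stated above.

    def parallel_mtbf(mtbf, mttr, n=1):
        # n of n+1 equal parallel elements; n=1 is the 1-of-2 case:
        # MTBF(E) = ((n+1)*MTBF*MTTR + MTBF^2) / (n*(n+1)*MTTR)
        return ((n + 1) * mtbf * mttr + mtbf ** 2) / (n * (n + 1) * mttr)

    def parallel_mttr(mttr):
        # MTTR(E) = MTTR/2 in both cases above.
        return mttr / 2.0

    def parallel_availability(mtbf, mttr, n=1):
        # A(E) = MTBF(E) / (MTBF(E) + MTTR(E))
        mtbf_e = parallel_mtbf(mtbf, mttr, n)
        return mtbf_e / (mtbf_e + parallel_mttr(mttr))

    # Hypothetical example: 1 of 2 units, each with a 5000-hour
    # MTBF and a 2-hour MTTR.
    print(parallel_mtbf(5000.0, 2.0))          # 6,255,000 hours
    print(parallel_availability(5000.0, 2.0))  # about 0.99999984

Note how redundancy raises the element MTBF by roughly a factor
of MTBF/(2*MTTR) over a single unit, which is why the parallel
CR80 configuration dominates the overall availability figures.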
6.3.1.3 R̲E̲L̲I̲A̲B̲I̲L̲I̲T̲Y̲ ̲M̲O̲D̲E̲L̲S̲ ̲A̲N̲D̲ ̲B̲L̲O̲C̲K̲ ̲D̲I̲A̲G̲R̲A̲M̲S̲
The computer system is partitioned into system elements
and the models used for reliability and availability
predictions show how the proposed equipment provides
the high degree of reliability required.
The system reliability characteristics for the system
are stated in numerical terms by mathematical models;
the supporting detailed predictions are presented in
this chapter. The system models are partitioned into
modular units and system elements that reflect the
redundancy of the configuration; the models account for
all interconnections and switching points. The MTBF and
MTTR for the individual elements used in the calculations
were obtained from experience with similar equipment
on the NICS-TARE, FIKS and CAMPS programmes.
The equipment has been partitioned and functions apportioned
so that system elements can have only two states -
operable or failed. System elements are essentially
stand-alone and free of chain failures.
Careful attention has been paid in the design to eliminate
series risk elements. Redundant units are repairable
without interruption of service. Maintenance and reconfiguration
are possible without compromising system performance.
The primary source selected for authenticated reliability
data and predictions is MIL-HDBK-217. The failure
rate data are primarily obtained from experience from
previous programmes and continuously revised as part
of the maintenance programme on concurrent programmes.
The reliability models which apply to the proposed configurations
are identified in the figures shown in the following
sections.
6.3.1.4 R̲E̲L̲I̲A̲B̲I̲L̲I̲T̲Y̲ ̲M̲O̲D̲E̲L̲ ̲F̲O̲R̲ ̲A̲C̲C̲E̲S̲S̲ ̲U̲S̲E̲R̲
This section provides estimated values for the
predicted availability and MTBF of the proposed ACCESS
system configuration.
FIG. 6.3.1.4-1 shows the reliability model used
for the calculation.
Figures 6.3.1.4-3, 6.3.1.4-4 and 6.3.1.4-5 show
the reliability models for the PU's, CU's and the local
area network (X-Net).
The calculations shown in the table, FIG. 6.3.1.4-2,
provide the following reliability figures:
. User availability: A = 0.999920
. Mean Time Between System Failure: 12,500 hours.
Reliability figures for the basic items of the system
are found in Section 6.3.1.5 below.
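These two figures are mutually consistent if a nominal
element-level MTTR of one hour is assumed, per the availability
formula of section 6.3.1.2.2:
A = MTBF(E)/(MTBF(E) + MTTR(E)) = 12,500/(12,500 + 1) = 0.999920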
FIG. 6.3.1.4-1
RELIABILITY MODEL FOR
ACCESS USER
(LAMBDA VALUES)
FIG. 6.3.1.4-2
AVAILABILITY CALCULATION FOR ACCESS USER
FIG. 6.3.1.4-3
RELIABILITY MODEL FOR PU,
BACK-END/FRONT-END
(LAMBDA VALUES)
FIG. 6.3.1.4-4
RELIABILITY MODEL FOR CU
BACK-END/FRONT-END
(LAMBDA VALUES)
FIG. 6.3.1.4-5
RELIABILITY MODEL FOR X-NET,
LOCAL AREA NETWORK
(LAMBDA VALUES)
6.3.1.5 E̲Q̲U̲I̲P̲M̲E̲N̲T̲ ̲M̲E̲A̲N̲ ̲T̲I̲M̲E̲ ̲B̲E̲T̲W̲E̲E̲N̲ ̲F̲A̲I̲L̲U̲R̲E̲S̲ ̲(̲M̲T̲B̲F̲)̲
The high reliability of the proposed equipment is achieved
through use of proven failure rate equipment similar
to that supplied by Christian Rovsing A/S for the NICS-TARE,
FIKS and CAMPS programmes.
Early in the design phase, a major objective for each
module is to achieve reliable performance. CR80 modules
make extensive use of carefully chosen components;
most of the IC's are tested to the requirements of MIL-STD-883
level B.
The inverse of the MTBF, representing the failure rate
which applies to system elements and modules, is listed
in Table 7-8, entitled CR80 Reliability Factors.
The MTBF data has been derived from reliability data
maintained on the NICS-TARE and CAMPS and similar programmes.
Inherent MTBF values are in general derived from the
reliability predictions accomplished in accordance
with the U.S. MIL-HDBK-217, "Reliability Prediction of
Electronic Equipment". This document, adopted by Christian
Rovsing A/S through their involvement with NICS-TARE,
is used extensively on current military and aerospace
programmes.
Failure rate data for terminal and peripheral equipment
is generally provided by the vendor in accordance with
the subcontract specifications.
FIG. 6.3.1.5-1
R & M VALUES FOR MODULES AND PERIPHERALS
FIG. 6.3.1.5-1
R & M VALUES FOR MODULES AND PERIPHERALS
(cont'd)
6.3.1.6 E̲Q̲U̲I̲P̲M̲E̲N̲T̲ ̲M̲A̲I̲N̲T̲A̲I̲N̲A̲B̲I̲L̲I̲T̲Y̲ ̲(̲M̲T̲T̲R̲)̲
The proposed system is designed for ease of maintenance.
The system is built of modules, each comprising a complete,
well-defined function.
Replacement of modular units results in minimum repair
time. Software and firmware diagnostic routines rapidly
isolate faulty modules; repair can then be performed
by semi-skilled maintenance personnel and usually without
special tools.
The proposed system, composed of redundant elements,
meets the objective of ease of maintenance. All units
and system elements are of a modular construction so
that any defective module can be isolated and replaced
in a minimum amount of time.
In the design of the System Elements, careful attention
was given to ease of maintenance without requiring
special tools, so that the maintenance could be performed
by semi-skilled maintenance personnel.
Fault detection and isolation to the system element level,
in some cases to the module level, is inherent in the software
residing in the various processors. In peripheral
devices, the fault detection and isolation is accomplished
by a combination of on-line software, built-in test,
and operator observations.
Where the correct functioning of the system is extremely
critical, the Processors will have built-in, on-line
diagnostic programmes. Even though the Processors
are highly reliable, failures can occur; usage of the
off-line diagnostics minimises the downtime for a system.
An off-line diagnostics software package is employed
to ease the diagnostics in case of error. Normally,
this software package is stored on disc. After initiation,
the programme will test all modules forming the system
and print the name and address of the erroneous module
on the operator's console. Having replaced the erroneous
module, the Processor is ready for operation again.
The operator might, if necessary, run the off-line
diagnostics programme once more to verify that the
system is now working without errors.
The command interpreter module of the diagnostic package
enables the operator to initiate any or all of the
test programmes for the specific subsystem off-line,
to assist in trouble shooting and to verify the repair.
Examples of modules tested are LTU's, CPU and RAM modules,
etc.
The diagnostic package will also assist in fault isolation
of the peripherals. However, common and special test
equipment might have to be used to isolate the faulty
module.
The Mean-Time-To-Repair for the equipment is derived
from two sources. The first is actual experience data
on the equipment proposed for the front-end system.
The other source is from predictions generated in
accordance with MIL-HDBK-472 or similar documents.
As an example, the MTTR for the Disc Storage Unit
was derived from repair times measured by the supplier.
The repair times of other units were derived by a
time-line analysis of the tasks associated with fault
detection, isolation, repair, and verification. These
repair times were weighted by the failure rate (the
inverse of the MTBF) of each module to derive the unit
MTTR. The MTTRs of the major CR80 equipments are
presented in Table 7-8.
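A minimal Python sketch of this weighting, using hypothetical
module names and values (not taken from Table 7-8):

    # Minimal sketch of the failure-rate-weighted MTTR calculation
    # described above. Module names and values are hypothetical.

    modules = [
        # (name, MTBF in hours, module repair time in hours)
        ("CPU module", 20000.0, 0.5),
        ("RAM module", 50000.0, 0.4),
        ("LTU module", 15000.0, 0.8),
    ]

    def weighted_mttr(modules):
        # Unit MTTR = sum(lambda_i * MTTR_i) / sum(lambda_i),
        # with lambda_i = 1/MTBF_i, so frequently failing modules
        # dominate the average repair time.
        num = sum(mttr / mtbf for _, mtbf, mttr in modules)
        den = sum(1.0 / mtbf for _, mtbf, _ in modules)
        return num / den

    print(round(weighted_mttr(modules), 2))  # unit MTTR in hours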
The predicted MTTR values are from experience with
modules of the NICS-TARE, FIKS and CAMPS programmes.
The predicted MTTR assumes that all tools, repair
parts, manpower, etc., required for maintenance are
continuously available.