CPS/PLN/004
LPO/810202

R & M PROGRAM PLAN
CAMPS
7.4.11 R̲e̲l̲i̲a̲b̲i̲l̲i̲t̲y̲ ̲T̲e̲s̲t̲ ̲R̲e̲p̲o̲r̲t̲s̲
Every module shall have its own Reliability Test Report, and this report must be so complete that it provides a firm foundation for the final decisions and estimations.
The Reliability Test Report shall describe the historical data of the module under test, considered in connection with a particular specification of the load and stress which have affected the test item and may contribute to a failure mode. Information on failures that have occurred and on events which have been accepted must also be included. The intention is to hand over a complete picture to the reliability engineers.
For each module under test, a test journal or log book
shall be established. The chronological data are entered
at fixed times and after every failure occurrence.
The content of the test journal shall be as follows (a sketch of a corresponding journal record is given after the list):
a) Module identification:
Name, number, and test serial number of the module.
b) Chronological records for every observation and
activity:
date and time
operating conditions
environmental conditions
values of measured parameters
comments concerning conditions beyond specification limits
reading of timer
names of personnel involved
general comments
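As an illustration of the journal content listed above, the following minimal sketch shows one possible journal record; the field names are assumptions, since the plan prescribes the content of the journal but not any data format.

# Hypothetical sketch of a single test journal entry; field names are
# illustrative only and are not prescribed by this plan.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class JournalEntry:
    module_name: str                  # a) module identification
    module_number: str
    test_serial_number: str
    timestamp: datetime               # b) date and time of the observation
    operating_conditions: str
    environmental_conditions: str
    measured_parameters: dict         # parameter name -> measured value
    out_of_spec_comments: str         # conditions beyond specification limits
    timer_reading_hours: float
    personnel: list                   # names of personnel involved
    general_comments: str = ""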
7.4.12 F̲a̲i̲l̲u̲r̲e̲ ̲R̲e̲p̲o̲r̲t̲s̲
For each failure, a report shall be made. The report shall contain a description of the failure, the results of the failure analysis, and those actions which have been carried out with regard to the module or its parts.
Three parties contribute to the failure report (a combined record sketch follows their three contribution lists):
Operating personnel
Repair personnel
Failure analysis personnel
T̲h̲e̲ ̲c̲o̲n̲t̲r̲i̲b̲u̲t̲i̲o̲n̲ ̲t̲o̲ ̲t̲h̲e̲ ̲f̲a̲i̲l̲u̲r̲e̲ ̲r̲e̲p̲o̲r̲t̲ ̲f̲r̲o̲m̲ ̲t̲h̲e̲ o̲p̲e̲r̲a̲t̲i̲n̲g̲
̲p̲e̲r̲s̲o̲n̲n̲e̲l̲ ̲s̲h̲a̲l̲l̲ ̲b̲e̲ ̲a̲s̲ ̲f̲o̲l̲l̲o̲w̲s̲:
a) Failure indication:
Date and time
Serial number of the test item
Part which appears to have failed
Operating conditions at the moment of failure
Environmental conditions at the moment of failure
Reading of timer
Name of the operator
b) Failure symptoms:
The nature and mode of every partial or complete failure.
The values of those parameters which exceed their specified limits.
A list of the measuring instruments which have been used for the failure indication.
Views on the classification of the failure concerned.
Recommended improvements.
General comments.
R̲e̲p̲o̲r̲t̲i̲n̲g̲ ̲f̲r̲o̲m̲ ̲t̲h̲e̲ ̲R̲e̲p̲a̲i̲r̲ ̲P̲e̲r̲s̲o̲n̲n̲e̲l̲
a) Location of failures:
List of instruments and methods used. Observations
and comments.
b) Description of Repair:
Actions carried out
The amount of time for which the module has been
powered up during the repair.
Organisation and names of repair personnel.
c) Indication of every component or part which has
been exchanged:
Place in the electronic circuit, part number and name, type designation, specified values, and name of the manufacturer.
d) Views on the cause and classification of the failure.
e) Recommended actions for improvement, or allowed changes introduced during the repair.
f) General comments.
R̲e̲p̲o̲r̲t̲i̲n̲g̲ ̲f̲r̲o̲m̲ ̲t̲h̲e̲ ̲p̲e̲r̲s̲o̲n̲n̲e̲l̲ ̲w̲h̲o̲ ̲c̲o̲n̲d̲u̲c̲t̲ ̲f̲a̲i̲l̲u̲r̲e̲ ̲a̲n̲a̲l̲y̲s̲i̲s̲:
a) Analysis of the exchanged component or part:
Visual investigation and initial measurements.
Description of the analysis (physical, chemical, etc.).
Results.
Time spent on the analysis.
Organisation and names of the personnel who conduct the failure analysis.
Analysis of conditions which may have an influence on the failure.
The cause and classification of the failure.
Recommended improvement actions.
General comments.
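As an illustration only, the three contributions described above could be collected in one failure report record as sketched below; the structure and all field names are assumptions, not a prescribed format.

# Hypothetical sketch of a failure report record; the three nested parts
# mirror the contributions from operating, repair, and analysis personnel.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class OperatorContribution:
    failure_time: datetime
    test_item_serial: str
    suspected_part: str
    operating_conditions: str
    environmental_conditions: str
    timer_reading_hours: float
    operator: str
    symptoms: str                     # nature and mode of the failure
    exceeded_parameters: dict         # parameter name -> measured value
    instruments_used: list
    recommendations: str = ""

@dataclass
class RepairContribution:
    location_instruments_and_methods: str
    actions_carried_out: str
    powered_hours_during_repair: float
    repair_personnel: list
    exchanged_parts: list             # circuit place, part no./name, type, values, manufacturer
    cause_and_classification: str
    recommendations: str = ""

@dataclass
class AnalysisContribution:
    visual_findings_and_measurements: str
    analysis_description: str         # physical, chemical, etc.
    results: str
    analysis_hours: float
    analysts: list
    cause_and_classification: str
    recommendations: str = ""

@dataclass
class FailureReport:
    report_id: str
    operator: OperatorContribution
    repair: RepairContribution
    analysis: AnalysisContribution
    general_comments: str = ""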
This part of the failure report shall include a brief listing of all failures. The failure information and the relevant testing time must be traceable to the original test journal and the failure reports. The content shall be as follows (a counting sketch is given after item e)):
b) General Information:
Item or module identification
Reference to detailed Reliability Test Specification
c) Chronological listing of all relevant failures:
Failure date and time
Group of failure
Reference to failure report
Serial number of test item
Accumulated number of relevant failures
Accumulated relevant testing time
d) Listing of all non-relevant failures:
Group of failure
Reference to failure report
e) Information on the repair time and on the duration for which the module has been non-operative or in degraded mode.
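As an illustration of how the accumulated figures in item c) and the list in item d) can be derived from the chronological records, a minimal counting sketch with assumed field names follows.

# Hypothetical sketch: build the chronological listing of relevant failures
# with accumulated counts, plus the separate list of non-relevant failures.
# Field names are assumptions; the plan prescribes the content, not a format.

def build_listing(failures):
    """failures: list of dicts with keys
       'time', 'timer_hours', 'group', 'report_ref', 'serial', 'relevant'."""
    relevant = sorted((f for f in failures if f["relevant"]),
                      key=lambda f: f["time"])
    listing = []
    for count, f in enumerate(relevant, start=1):
        listing.append({
            "time": f["time"],
            "group": f["group"],
            "report_ref": f["report_ref"],
            "serial": f["serial"],
            "accumulated_relevant_failures": count,
            "accumulated_relevant_test_hours": f["timer_hours"],
        })
    non_relevant = [(f["group"], f["report_ref"])
                    for f in failures if not f["relevant"]]
    return listing, non_relevant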
C̲a̲t̲a̲l̲o̲g̲u̲e̲ ̲c̲o̲n̲t̲a̲i̲n̲i̲n̲g̲ ̲e̲x̲c̲h̲a̲n̲g̲e̲a̲b̲l̲e̲ ̲u̲n̲i̲t̲s̲ ̲a̲n̲d̲ ̲s̲p̲a̲r̲e̲s̲ ̲w̲h̲i̲c̲h̲ ̲h̲a̲v̲e̲ ̲b̲e̲e̲n̲ ̲u̲s̲e̲d̲ ̲f̲o̲r̲ ̲r̲e̲p̲a̲i̲r̲
This list shall provide information about failure frequencies and about how often units or spare parts have been replaced, as input to the planning of preventive maintenance and the provisioning of spares. A counting sketch is given after the list below.
a) Contents of the catalogue:
General information
Module identification
Reference to detailed Reliability Test Plan
b) For every exchanged unit or spare part, a list
shall be set up. It shall hold:
Identification
User conditions during the test
Total number in the module
Total number which have failed
Total accumulated relevant test time
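To illustrate how the catalogue supports preventive maintenance and spares planning, the sketch below counts replacements per unit or spare-part type; the input record layout and the replacements-per-1000-hours figure are assumptions for illustration only.

# Hypothetical sketch: per exchanged unit or spare-part type, count how many
# are fitted, how many have failed, and relate this to the accumulated
# relevant test time. The input layout is assumed for illustration only.
from collections import defaultdict

def spares_catalogue(replacements, fitted_counts, accumulated_test_hours):
    """replacements: list of part-type names, one entry per exchange.
       fitted_counts: dict part_type -> total number fitted in the module.
       accumulated_test_hours: accumulated relevant test time (> 0)."""
    failed = defaultdict(int)
    for part_type in replacements:
        failed[part_type] += 1
    return {
        part_type: {
            "total_in_module": fitted_counts.get(part_type, 0),
            "total_failed": count,
            "accumulated_test_hours": accumulated_test_hours,
            "replacements_per_1000_h": 1000.0 * count / accumulated_test_hours,
        }
        for part_type, count in failed.items()
    }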
7.5 T̲E̲S̲T̲ ̲A̲N̲D̲ ̲R̲E̲P̲O̲R̲T̲I̲N̲G̲ ̲A̲T̲ ̲M̲O̲D̲U̲L̲E̲ ̲L̲E̲V̲E̲L̲
All CR80D modules have a predicted MTBF value far beyond 3500 operating hours and, according to our present knowledge, they therefore do not have to pass a Factory Acceptance Test.
However, for each module a Reliability Report shall be produced in which the MTBF value is properly justified by analytical calculations.
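As an indication of the kind of analytical calculation involved, the minimal sketch below applies the standard series model, in which the module failure rate is the sum of the part failure rates and MTBF is its reciprocal; the part list and failure rates are invented placeholders, not CR80D data.

# Hypothetical sketch of a series-model MTBF calculation; the part counts and
# failure rates below are illustrative placeholders, not CR80D figures.

def module_mtbf(parts):
    """parts: list of (quantity, failure_rate_per_hour) tuples.
    Series model: module failure rate = sum of part failure rates,
    and MTBF = 1 / total failure rate."""
    total_rate = sum(qty * rate for qty, rate in parts)
    return 1.0 / total_rate

if __name__ == "__main__":
    example_parts = [
        (40, 0.5e-6),    # ICs at 0.5 failures per million hours (assumed)
        (120, 0.05e-6),  # passive components (assumed)
        (2, 2.0e-6),     # connectors (assumed)
    ]
    print(f"Predicted MTBF: {module_mtbf(example_parts):.0f} hours")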
T̲A̲B̲L̲E̲ ̲O̲F̲ ̲C̲O̲N̲T̲E̲N̲T̲S̲
7.6   TEST AND REPORTING AT SYSTEM LEVEL
7.6.1   Introduction
7.6.2   System Reliability Report
7.6.3   Failure Mode Analysis
7.6.4   Reliability Tests
7.6.5   Maintainability
7.6.6   Availability
7.7   SOFTWARE RELIABILITY
7.7.1   Introduction
7.7.2   Software Reliability Model
7.6.1 I̲n̲t̲r̲o̲d̲u̲c̲t̲i̲o̲n̲
At system level all modules are integrated and tested while interacting with each other. This is a very important part of the R & M Program Plan. The data and experience gained from this activity are fundamental to the efforts to describe, verify, and specify the system performance of CAMPS.
7.6.2 S̲y̲s̲t̲e̲m̲ ̲R̲e̲l̲i̲a̲b̲i̲l̲i̲t̲y̲ ̲R̲e̲p̲o̲r̲t̲
This report is structured similarly to those of the modules (see section 5).
The CAMPS Reliability Model of the total system is reviewed and appraised against newly gained test data and against data delivered by improved calculations and tools.
7.6.3 F̲a̲i̲l̲u̲r̲e̲ ̲M̲o̲d̲e̲ ̲A̲n̲a̲l̲y̲s̲i̲s̲
The FMECA analysis of the total system is based on the description stated in section 7.3.
The analyses shall be conducted to ensure and verify that failure independence is obtained everywhere in the system.
7.6.4 R̲e̲l̲i̲a̲b̲i̲l̲i̲t̲y̲ ̲T̲e̲s̲t̲s̲
In the CR CAMPS test system (DSMT), which is a close copy of the real system on a site, every single module shall be placed in continuous operation for at least 500 operating hours. A distinct group of each module type is selected for long-term testing, i.e. 2000 continuous operating hours.
The main task of the 500 hours test is to exercise and test all possible module functions and to load the modules and the system in a manner similar to the practical application. The purpose of the 2000 hours test is to verify long-term performance and to apply and test diagnostic software and firmware. From both tests all sorts of data will be continuously collected and analysed. Obviously, the reliability tests at system level will be planned and executed just as described in section 7.5.
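From the accumulated test hours and the number of relevant failures, a simple MTBF point estimate can be produced as a by-product of the data collection; the sketch below uses assumed figures, not CAMPS results.

# Hypothetical sketch: MTBF point estimate from accumulated relevant test
# time and the number of relevant failures; the numbers are illustrative.

def observed_mtbf(accumulated_relevant_hours, relevant_failures):
    """Classic point estimate: total relevant test time / relevant failures.
    If no relevant failure occurred, only a lower bound can be quoted."""
    if relevant_failures == 0:
        return float("inf")
    return accumulated_relevant_hours / relevant_failures

if __name__ == "__main__":
    # e.g. 12 modules at 500 h plus 2 modules at 2000 h (assumed test program)
    hours = 12 * 500 + 2 * 2000
    print(observed_mtbf(hours, relevant_failures=3))  # -> 3333.3 hours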
7.6.5 M̲a̲i̲n̲t̲a̲i̲n̲a̲b̲i̲l̲i̲t̲y̲
Verification of the predicted MTTR values may be carried out during the Reliability Test, at least to a certain extent. Because a final estimation of MTTR based on reliability tests alone would be premature, a series of special tests shall be provided; this subject is referred to in the Maintenance Plan no. CPS/PLN/006, which is provided by Logistic Support.
7.6.6 A̲v̲a̲i̲l̲a̲b̲i̲l̲i̲t̲y̲
Based on the evaluated MTTR values (inputs from Logistic Support) and the verified reliability figures from the R & M tests, the availability of the system and of groups of modules (i.e. Processor Crate, LTUX Crate, etc.) will be calculated. Final availability values would be released relatively late, i.e. after the Acceptance Test.
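For orientation, the steady-state relation commonly used for such an evaluation is A = MTBF / (MTBF + MTTR); the sketch below applies it to assumed figures which are not verified CAMPS values.

# Hypothetical sketch of a steady-state availability calculation per module
# group; the MTBF and MTTR figures are assumed placeholders, not verified
# CAMPS values.

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

if __name__ == "__main__":
    groups = {
        "Processor Crate": (8000.0, 0.5),   # (MTBF h, MTTR h) - assumed
        "LTUX Crate": (12000.0, 0.5),       # assumed
    }
    for name, (mtbf, mttr) in groups.items():
        print(f"{name}: A = {availability(mtbf, mttr):.6f}")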
7.7 S̲O̲F̲T̲W̲A̲R̲E̲ ̲R̲E̲L̲I̲A̲B̲I̲L̲I̲T̲Y̲
7.7.1 I̲n̲t̲r̲o̲d̲u̲c̲t̲i̲o̲n̲
In the past several years, a considerable number of different software reliability models have been introduced. Models assuming an exponential distribution of the time to detect software errors range from early ones to more complicated ones such as Musa's Execution-Time Model.
The intentions of using a software reliability model are:
a) Reliability Management
b) Management of program changes
c) Making system engineering trade-offs with cost,
schedules and other system parameters
d) Monitoring test progress and predicting test completion
and
e) Evaluating various software engineering techniques
7.7.2 S̲o̲f̲t̲w̲a̲r̲e̲ ̲R̲e̲l̲i̲a̲b̲i̲l̲i̲t̲y̲ ̲M̲o̲d̲e̲l̲
The model to be used in CAMPS is not yet established. The decision strongly depends on the properties of the software currently under development.
However, it is very likely that Musa's Execution-Time Model, or a model similar to it, will be used.
The preference for this model is based on the following:
a) it has been applied to more actual software systems
than any other - to the best of our knowledge.
b) In the light of the available evidence, the assumptions
appear to be generally well founded and the model
yields results that are in good accordance with
reality.
c) There is no evidence to indicate any class of software
to which the model would not apply.
d) The model seems to meet the criteria of validity, simplicity, and utility.
For the reasons mentioned above, we shall refrain from describing the Execution-Time Theory at the present time. Of course, as soon as the choice of model is final, a detailed description will be delivered. In the meantime, please refer to the referenced document.
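Purely for orientation until the model choice is final, the basic relations usually quoted for Musa's Execution-Time Model are sketched below; the parameter values are invented for illustration and do not describe CAMPS software.

# Hypothetical sketch of the basic execution-time relations usually quoted
# for Musa's model; lambda0 (initial failure intensity) and nu0 (total
# expected failures) below are invented illustration values.
import math

def failure_intensity(tau, lambda0, nu0):
    """Failure intensity after execution time tau:
       lambda(tau) = lambda0 * exp(-(lambda0 / nu0) * tau)."""
    return lambda0 * math.exp(-(lambda0 / nu0) * tau)

def expected_failures(tau, lambda0, nu0):
    """Expected cumulative failures after execution time tau:
       mu(tau) = nu0 * (1 - exp(-(lambda0 / nu0) * tau))."""
    return nu0 * (1.0 - math.exp(-(lambda0 / nu0) * tau))

if __name__ == "__main__":
    lambda0, nu0 = 0.05, 100.0         # assumed: failures/CPU-hour, total failures
    for tau in (0.0, 500.0, 2000.0):   # CPU hours of execution (assumed)
        print(tau, failure_intensity(tau, lambda0, nu0),
              expected_failures(tau, lambda0, nu0))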
Referenced Document:
J.D. Musa, "A Theory of Software Reliability and Its Application", IEEE Transactions on Software Engineering, vol. SE-1, September 1975, pp. 312-327.