CPS/PLN/013
KNN/801126
TECHNICAL PERFORMANCE MEASURING PLAN
CAMPS
T̲A̲B̲L̲E̲ ̲O̲F̲ ̲C̲O̲N̲T̲E̲N̲T̲S̲
1. INTRODUCTION ...........................
1.1 PURPOSE ..............................
1.2 SCOPE ................................
2. PROJECT REFERENCES .....................
3. DEFINITION OF CAMPS SOFTWARE KEY
PARAMETER AND MEASURING PREDICTION
MODELS RELATED TO TECHNICAL PERFORMANCE
4. TECHNICAL PERFORMANCE DATA ACQUISITION .
5. TECHNICAL PERFORMANCE REPORTS ..........
Appendix A: Support Tools ..................
1 I̲N̲T̲R̲O̲D̲U̲C̲T̲I̲O̲N̲
1.1 P̲U̲R̲P̲O̲S̲E̲
Technical Performance Measurement (TPM) assesses the
technical characteristics of the system and identifies,
through engineering analyses and tests, problems which
may surface in the schedule and cost areas.
Conversely, problems due to unrealistic cost and
schedule planning may show up as technical inadequacies,
just as technical problems identified through TPM can
surface as inadequacies in the budget of time
and money.
Basically, cost/schedule measurements assume adequacy
of the design to meet the technical requirements; TPM is
the complementary function which verifies that adequacy.
By establishing both TPM and cost/schedule measurement
performance control systems, we increase the validity/reliability
of the total CAMPS program performance measurement.
1.2 S̲C̲O̲P̲E̲
The technical performance measurement shall focus on
the following key parameters:
- sizing
- timing
- error-rate/reliability
The TPM program will not impose additional requirements
for technical data, as it is planned to use already
existing data.
The output from the TPM program will be reports showing
the trends of the above mentioned parameters at the
following levels:
1. system level
2. subsystem level
3. package/unit level
By reporting at the above levels, sufficient information
will be provided to make decisions on technical, cost,
or schedule changes.
2 P̲R̲O̲J̲E̲C̲T̲ ̲R̲E̲F̲E̲R̲E̲N̲C̲E̲S̲
a) System Requirements Specification
CPS/210/SYS/0001
b) System Development Plan
CPS/PLN/002
3 D̲E̲F̲I̲N̲I̲T̲I̲O̲N̲ ̲O̲F̲ ̲C̲A̲M̲P̲S̲ ̲S̲O̲F̲T̲W̲A̲R̲E̲ ̲K̲E̲Y̲ ̲P̲A̲R̲A̲M̲E̲T̲E̲R̲S̲ ̲
A̲N̲D̲ ̲M̲E̲A̲S̲U̲R̲E̲M̲E̲N̲T̲ ̲M̲O̲D̲E̲L̲S̲
The software (system) key parameters which will
be reported on throughout development are:
- memory and disc sizing.
- timing of units, packages, subsystems, and the system.
- error-rates found by inspections during detailed
design and code.
- reliability measurements during integration & test
and preacceptance, using a cumulative reliability
model.
The measured (current) data will be compared with the
figures which have been predicted and/or are required
in the CAMPS System Requirements Specification.
3.1 S̲I̲Z̲I̲N̲G̲ ̲M̲E̲A̲S̲U̲R̲E̲M̲E̲N̲T̲S̲
The ultimate sizing objective has been defined by the
HW configuration. During system design and later development
phases, sizing estimates for subsystems/packages/modules are produced
to provide an early warning of memory and/or disc sizing
problems. Sizing estimates will be collected every
month and/or for all major development milestones.
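Purely as an illustration of how such a monthly roll-up could be mechanized (the subsystem names, figures, limits, and the SizingEstimate structure below are assumptions for the sketch, not CAMPS hardware configuration data), a minimal sketch is:

# Hypothetical sketch of a monthly sizing roll-up; subsystem names, figures,
# and limits are illustrative assumptions, not CAMPS hardware configuration data.
from dataclasses import dataclass

@dataclass
class SizingEstimate:
    subsystem: str
    memory_kw: float   # estimated memory use, K words
    disc_mb: float     # estimated disc use, Mbytes

MEMORY_LIMIT_KW = 512.0   # assumed ultimate memory objective
DISC_LIMIT_MB = 60.0      # assumed ultimate disc objective

def sizing_report(estimates):
    """Sum the current estimates and report the remaining margin against the limits."""
    mem = sum(e.memory_kw for e in estimates)
    disc = sum(e.disc_mb for e in estimates)
    return {"memory_used_kw": mem, "memory_margin_kw": MEMORY_LIMIT_KW - mem,
            "disc_used_mb": disc, "disc_margin_mb": DISC_LIMIT_MB - disc}

print(sizing_report([SizingEstimate("Subsystem A", 180.0, 22.0),
                     SizingEstimate("Subsystem B", 150.0, 18.0)]))
# A negative margin in the report is an early warning of a sizing problem.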
3.2 T̲I̲M̲I̲N̲G̲ ̲M̲E̲A̲S̲U̲R̲E̲M̲E̲N̲T̲S̲
During system design and later development phases, timing
estimates for subsystems/packages/modules are produced
to provide an early warning of timing problems. Timing
estimates will be collected every month and/or for
all major development milestones.
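Similarly, and again only as a hedged sketch (the function names and required maxima below are invented, not figures from the CAMPS System Requirements Specification), a timing report could flag functions whose current estimate exceeds the required maximum:

# Hypothetical timing check; the functions and required maxima are assumptions,
# not figures from the CAMPS System Requirements Specification.
REQUIRED_MAX_SECONDS = {"message_entry": 2.0, "retrieval": 5.0}

def timing_warnings(estimates_seconds):
    """Return the functions whose current timing estimate exceeds the required maximum."""
    return {name: est for name, est in estimates_seconds.items()
            if est > REQUIRED_MAX_SECONDS.get(name, float("inf"))}

print(timing_warnings({"message_entry": 2.4, "retrieval": 3.1}))
# -> {'message_entry': 2.4}, i.e. an early warning of a timing problem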
3.3 E̲R̲R̲O̲R̲-̲R̲A̲T̲E̲S̲ ̲B̲Y̲ ̲I̲N̲S̲P̲E̲C̲T̲I̲O̲N̲ ̲A̲N̲D̲ ̲M̲E̲A̲S̲U̲R̲E̲M̲E̲N̲T̲S̲
Below a number of inspections/review are defined at
which the error-rate of the software will be measured.
Detailed Design --* I1 *-- Code --* I2 *-- Unit Test --* I3 *-- Integration
An improvement in productivity is the most immediate
effect of purging errors from the product by the I1,
I2, and I3 inspections. This purging allows rework
of these errors very near their origin, early in the
process. Rework done at these levels is 10 to 100
times less expensive than if it is done in the last
half of the process. Since rework detracts from productive
effort, it reduces productivity in proportion to the
time taken to accomplish the rework. It follows, then,
that finding errors by inspection and reworking them
earlier in the process reduces the overall rework time
and increases productivity even within the early operations
and even more over the total process. Since fewer errors
ship with the product, the time taken for the user
to install programs is less, and his productivity is
also increased.
The purposes of inspections/reviews are:
- to increase productivity
- to reduce the cost of errors
- to measure the error-rate/reliability performance
of the software at an early stage in the development
process.
Errors shall be classified according to type, such as:
- internal logic errors
- interface errors
- data areas
- etc.
The scale of error measurements shall be:
     100 x (Number of erroneous statements / Total number of statements)
Statements have a different meaning during I1 and during
I2/I3. During I2/I3, statements shall be interpreted
as non-commentary source statements, and during I1, a
statement shall be equivalent to the smallest item
of description.
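A minimal computational sketch of this scale follows; the inspection record below is invented for illustration, and only the formula itself (100 x erroneous statements / total statements) is taken from this plan.

# Error-rate scale from section 3.3: 100 x erroneous statements / total statements.
# The inspection record below is invented for illustration.
from collections import Counter

def error_rate(erroneous_statements, total_statements):
    """Errors per 100 statements (non-commentary source statements at I2/I3,
    smallest items of description at I1)."""
    return 100.0 * erroneous_statements / total_statements

errors_by_type = Counter({"internal logic": 7, "interface": 3, "data area": 2})
total_ncss = 1500   # non-commentary source statements inspected at I2

print(error_rate(sum(errors_by_type.values()), total_ncss))   # 0.8 errors per 100 statements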
3.4 S̲O̲F̲T̲W̲A̲R̲E̲ ̲R̲E̲L̲I̲A̲B̲I̲L̲I̲T̲Y̲ ̲D̲E̲F̲I̲N̲I̲T̲I̲O̲N̲ ̲A̲N̲D̲ ̲M̲E̲A̲S̲U̲R̲E̲M̲E̲N̲T̲S̲ ̲M̲O̲D̲E̲L̲
The principal objective of a software reliability model
is to predict the behavior that will be experienced when
the program is operational. This expected behavior
changes rapidly and can be tracked during the period
in which the program is tested. Reliability, or MTTF,
generally increases as a function of accumulated execution
time. The parameters of the model can be determined
to some degree of accuracy from program characteristics
such as size, complexity, etc., before testing commences.
Thus, during the preliminary planning, requirements
generation, design, and coding phases of a project,
expected behavior in the field can be predicted as
a function of execution time accumulated during system
test.
The predicted behavior indicates what would happen
if test were terminated at the prediction point and
the program placed into operational use in the field.
Much better estimates of parameters can be made during
the test phases.
3.4.1 E̲X̲E̲C̲U̲T̲I̲O̲N̲ ̲T̲I̲M̲E̲ ̲M̲O̲D̲E̲L̲
The execution time model deals with two kinds of execution
time:
1) the cumulative execution time (denoted t) that
clocks development activity and is measured up
to the reference point at which reliability is
being evaluated, and
2) the program operating time (denoted t1) which is
the execution time projected from the reference
point into the future on the basis that no further
fault correction is performed.
The model consists of two components, the execution
time component and the calendar time component. The
former component characterizes reliability behavior
as a function of cumulative execution time t. The
latter component relates cumulative execution time
t to cumulative calendar time.
The execution time component of the model, in addition,
postulates that:
1) the hazard rate is proportional to the number of
faults remaining N, and
2) the execution time rate of change of the number
of faults corrected is proportional to the hazard
rate.
The constant of proportionality in the second assumption
above is called the fault reduction factor B. It was
designed to account for three effects that are each
assumed to be proportional to the hazard rate:
1) fault growth due to new faults that are spawned
in the process of correcting old faults.
2) faults found by code inspection that was stimulated
by the detection of a related fault during test,
and
3) a proportion of failures whose causative faults
cannot be determined and hence cannot be corrected.
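As a supporting sketch only (the hazard rate z(t) and the proportionality constant K are notation introduced here for illustration and are not defined in this plan), the two postulates combine to give the exponential form stated in equation (5) below:

\begin{align*}
  z(t) &= K\,\bigl(N_o - n(t)\bigr) && \text{postulate 1: hazard rate proportional to remaining faults}\\
  \frac{dn}{dt} &= B\,z(t)          && \text{postulate 2, with fault reduction factor } B\\
  \Rightarrow \frac{dn}{dt} &= B K\,\bigl(N_o - n(t)\bigr)\\
  \Rightarrow n(t) &= N_o\bigl(1 - e^{-BKt}\bigr), \qquad \text{which is equation (5) with } BK = \tfrac{C}{M_o T_o}.
\end{align*}

Dividing by B (equation (7)) gives the corresponding expression for the number of failures experienced, equation (8).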
Based on the foregoing assumptions, a number of relationships
can be derived. The net number of faults detected
and corrected n is shown to be an exponential function
of the cumulative execution time t,

     n = N_o (1 - exp(- C t / (M_o T_o)))                    (5)
where N_o is the number of inherent faults in the program,
M_o is the total number of failures possible during
the maintained life of the software, T_o is the MTTF
at the start of test and C is the testing compression
factor. "Maintained life" is the period extending
from the start of test to discontinuance of program
failure correction. (Once program failure correction
has been discontinued, the number of failures becomes
dependent on the program lifetime and (5) does not
hold.) Failures and faults are related through:
     N_o = B M_o                                             (6)
     n   = B m                                               (7)
where m is the number of failures experienced. Thus
the fault reduction factor B may be seen as an expression
of the net fault removal per failure.
The number of failures experienced is also an exponential
function of the cumulative execution time t:

     m = M_o (1 - exp(- C t / (M_o T_o)))                    (8)
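To make the execution time model concrete, a minimal computational sketch of equations (5) through (8) follows; the parameter values are invented for illustration only.

# Minimal sketch of the execution time model, equations (5) to (8).
# All parameter values below are invented for illustration only.
import math

def faults_corrected(t, N0, M0, T0, C):
    """Equation (5): net faults detected and corrected after cumulative execution time t."""
    return N0 * (1.0 - math.exp(-C * t / (M0 * T0)))

def failures_experienced(t, M0, T0, C):
    """Equation (8): failures experienced after cumulative execution time t."""
    return M0 * (1.0 - math.exp(-C * t / (M0 * T0)))

B, M0 = 0.95, 120.0          # assumed fault reduction factor and total possible failures
N0 = B * M0                  # equation (6): inherent faults
T0, C = 2.0, 10.0            # assumed MTTF at start of test (hours) and testing compression factor

for t in (0.0, 50.0, 200.0): # cumulative execution time, hours
    m = failures_experienced(t, M0, T0, C)
    n = faults_corrected(t, N0, M0, T0, C)   # equals B * m, equation (7)
    print(f"t = {t:6.1f} h   failures m = {m:6.1f}   faults corrected n = {n:6.1f}")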