CPS/REP/001
FAH/820107
CAMPS PERFORMANCE REPORT
T̲A̲B̲L̲E̲ ̲O̲F̲ ̲C̲O̲N̲T̲E̲N̲T̲S̲
1 OVERVIEW .....................................
1.1 INTRODUCTION ............................
1.2 MAJOR ASSUMPTIONS AND RESULTS ...........
1.2.1 Assumptions ..........................
1.2.1.1 Traffic Flow .....................
1.2.1.2 Performance Calculations .........
1.2.1.3 CPU Processing ...................
1.2.1.4 Disc Access ......................
1.2.2 Major Results ........................
2 REQUIREMENTS MAPPING ........................
2.1 INTRODUCTION ............................
2.2 TRAFFIC FLOW ............................
2.3 OUTGOING TRAFFIC RELATED MAPPING ........
2.4 INCOMING TRAFFIC RELATED MAPPING ........
2.5 OTHER TRAFFIC MAPPING ...................
3 DATA COLLECTION FOR MODEL CALCULATIONS .......
3.1 INTRODUCTION ............................
3.2 BASIC CPU-TIME AND I/O ACCESS
CONSUMPTION FIGURES .....................
3.3 OUTGOING TRAFFIC RELATED DATA ...........
3.4 INCOMING TRAFFIC RELATED DATA ...........
3.5 OTHER DATA ..............................
3.6 RESULTING LOADING REQUIREMENTS ..........
3.6.1 CPU and Disk Loading Requirements ....
3.6.2 Terminal Loading Requirements ........
3.6.3 Additional TDX Loading Requirements ..
3.6.4 Additional I/O Channel
Loading Requirements .................
4 MODEL SELECTION CONSIDERATIONS ..............
4.1 MODEL LAYOUT AND MECHANISMS .............
4.2 PU SYSTEM ................................
4.2.1 Introduction ..........................
4.2.2 Time Slicing ..........................
4.2.3 Multiserver and Single Server
Queuing ..............................
4.2.4 Priority Queuing ......................
4.2.5 CPU Processing Time:
Cache versus Processor Bus ...........
4.2.6 Queue Time and Length ................
4.2.7 Discussion ...........................
4.3 TDX-SYSTEM MECHANISMS AND QUEUING .....
4.3.1 TDX Transfer Mechanisms ..............
4.3.2 TDX Transfer Delays and Throughputs ..
4.4 I/O CHANNEL MECHANISMS AND QUEUING ......
4.4.1 I/O Channel Transfer Mechanisms ......
4.4.2 I/O Channel Transfer Delays
and Throughputs ......................
4.5 DISC SYSTEM MECHANISMS AND QUEUING ......
4.5.1 Disc Access Mechanisms ...............
4.5.2 Disc Access Delays and Throughput ....
4.6 QUEUING OF FUNCTIONAL REQUESTS ..........
4.6.1 Queuing below Level of Application,
Process and Coroutines
(Low Level Queuing) ..................
4.6.2 Queuing at The Level of Application ..
4.6.3 Maximum Queuing Times ................
4.7 CONCLUSIVE MODEL CONSIDERATIONS .........
5 PERFORMANCE CALCULATIONS .....................
5.1 RESPONSE TIME ...........................
5.1.1 Response Times without Priority
Queuing ..............................
5.1.2 Response Time with Two Level
Priority Queuing .....................
5.2 THROUGHPUT ..............................
5.3 STORAGE AND MEMORY ALLOCATION ...........
5.3.1 Buffer Allocation ....................
5.3.2 Memory Allocation ....................
5.3.3 Disc Storage Allocation ..............
1̲ ̲ ̲O̲V̲E̲R̲V̲I̲E̲W̲
1.1 I̲N̲T̲R̲O̲D̲U̲C̲T̲I̲O̲N̲
This document shall present all system performance
figures as related to requirements for throughput,
timing, and sizing in sections 3.4.1.1 - 3.4.1.8 of
CPS/210/SYS/0001, the System Requirements Document.
The corresponding traffic flow figures are the ones
for the maximum wired capacity site.
In order to derive the performance figures, the approach
shall be first to state, for each CAMPS functional requirement
(e.g. preparation, incoming message analysis, logging),
the throughput and timing requirements (as extracted
from the referenced sections of the System Requirements
Document) and in addition, the CAMPS S/W Packages involved.
This step is called Requirements Mapping and shall
include a description of the functions performed by
the S/W packages in each case.
Requirements Mapping is described in section 2. The
referenced S/W Packages are described in the CAMPS
Design Documents, CPS/SDS/001 - 29.
All sizing requirements are treated separately in section
5.
Following the Requirements mapping comes the step of
Data Collection, the result being tabulated in section
3.
Data collection is the accounting for usage of resources,
i.e. CPU processing time and number of I/O accesses
of different types:
- TDX-system accesses for local terminals and low
speed external channels, also called "TDX-Functions".
- I/O Channel accesses for medium speed external
channels and for disk access.
Disk access is accounted for separately, the remaining
access requirements are covered under the name of "Channel
Functions".
The data collection shall not consider queuing effects,
only the direct consumption of resources for each of
the mapping cases (functions) of section 2. The method
of adding up the resource requirements is explained
in the resumé of section 1.2; it depends on the "traffic
mix", i.e. the way different functions shall contribute
simultaneously in real time.
Having decided on the traffic mix, the resulting loading
figures may now be introduced in the system configuration
model for calculation of queuing effects. It is a
prerequisite for these calculations to have decided
on the priority of each of the functions in queuing
and whether the queuing discipline of preemptive or
non-preemptive priority selection shall apply for a
resource; for certain resources, like the disk system,
the queuing discipline is given by the nature of the
resource access.
Model selection is then performed as follows (section
4): Having introduced the service mechanisms and related
queuing factors of each of the resources, the application
of the loadings will reveal whether all of the basic
assumptions of the model chosen are fulfilled. If
this is the case, the model is now ready for developing
the performance figures (section 5): Throughput and
response times.
If the basic model assumptions are not fulfilled, a
new model has to be developed.
If the model is valid (including marginally valid)
and the performance requirements are not met, then
a redesign will have to be considered. The requirements
mapping and all the following procedures are then repeated.
The interdependence between sizing calculations and
throughput and timing calculations is as follows:
For each possible layout of PU Main Memory and Disk
Memory, a new set of data collection figures will apply
thereby influencing model selection and performance
calculations.
It is important to notice that the analytic method
of performance calculation outlined above does not
attempt to produce accurate performance figures, but
only to give order of magnitude figures to prove that
the systems will perform within the given requirements.
1.2 M̲A̲J̲O̲R̲ ̲A̲S̲S̲U̲M̲P̲T̲I̲O̲N̲S̲ ̲A̲N̲D̲ ̲R̲E̲S̲U̲L̲T̲S̲
1.2.1 A̲s̲s̲u̲m̲p̲t̲i̲o̲n̲s̲
1.2.1.1 T̲r̲a̲f̲f̲i̲c̲ ̲F̲l̲o̲w̲
The basic tool for Requirements Mapping and Data collection
is the traffic flow, functionally and quantitatively.
The functional traffic flow has been the basis for
system design in defining the proper hardware and software
modules. The quantitative aspects of the traffic flow
have had an impact especially on the configuration
of the central processing unit with main memory and
disk memory access design, but also on the optimization
of the S/W packages (parallel or semi-parallel processing,
data buffering, optimization of check pointing, etc).
The busy hour traffic flow presented in section 2.2
is "rough", concentrating on the direct message handling
functions (group 1 functions in the following) and
leaving out all service and control functions, e.g.
supervisory intervention, Log, Statistics, MSO, MDCO,
Retrievals, etc. It is, however, fairly easy to account
for these functions, which may be subdivided into a
further three groups.
The four groups are:
G̲r̲o̲u̲p̲ ̲1̲: The functions directly implied by the traffic
flow of section 2.2 are as follows:
- Message, Comments, and VDU Page Preparation
and Editing.
- Coordination
- Release
- Distribution and Reception of traffic.
- Outgoing Traffic Routing Determination,
Conversion, and Transmission.
- Incoming Traffic Input and Analysis.
- Relay Traffic Handling.
G̲r̲o̲u̲p̲ ̲2̲: Service and Control functions, the usage
of which may be expressed as per incoming
message, outgoing message or other transactions
related to the external traffic. To this
group belongs:
- Message service (incl. msg. rerun)
- Message Distribution and Control Operation
- Status collection
- Log Collection and Print-out
- Statistic Collection
- Checkpointing
- Storage key collection
- Storage & Dumping
- Report Generation and Printing
- Security Interrogation
G̲r̲o̲u̲p̲ ̲3̲: Periodic functions. To this group belongs:
- Retrieval
- Statistics print-out
- Status Print-out
G̲r̲o̲u̲p̲ ̲4̲: Low priority and low frequency activities.
To this group belongs:
- On-line diagnostics programs
- Supervisor table updates and service
activity such as redistribution of messages
to users.
- Log trace.
For the understanding of the treatment of group 4 functions,
it is vital to note that the traffic flow considered
in these calculations accounts for the busy hour and
busy minute figures. It is not expected that the supervisor
under these conditions will want to add to this load
by initiating additional distribution, performing Log
Trace, or delaying the delivery of messages by
changing tables. Similarly, on-line diagnostics programs
are run with low frequency and priority.
Some Supervisory activity is however covered, refer
note 5 to traffic flow in section 2.2.
Functions of group 4 are neglected in mapping and data
collection, whereas functions of the first three groups
shall be accounted for as explained below.
Group 1 functions are exercised the number of times
indicated in the traffic flow; multiplying these figures
by the resource consumption presented in section 3
gives the total resource consumption for each function.
We shall later in this section return to a discussion
of the applicability of these total figures.
Group 2 functions shall, owing to the nature of these
functions, be accounted for as per usage of group
1 functions as follows:
- Message Service is exercised on a fraction of incoming
messages in connection with analysis and a fraction
of outgoing messages in connection with routing
determination and transmission. Message service
is treated as a separate accounting function in
data collection.
- Message Distribution and Control Operation shall
be handled the same way as the message service
function.
- Status Collection is associated to the message
preparation and message delivery functions. Since
the total resource consumption per function is
rather small, it has been decided to account for
the usage by summing up all requests, separately
for incoming and outgoing messages. Finally, the
resource consumption is then calculated for the
bulk of requests.
For scaling exercises where the bulk of incoming
and outgoing traffic is changed, this way of accounting
is obviously adequate. In cases where the individual
sources of incoming and outgoing traffic and/or the
usage of the group 1 functions change in a way that
changes the average status collection per incoming
or outgoing message, the bulk traffic scaling is no
longer strictly correct. However,
considering the small contribution from status
collection as compared to the accuracy of these
calculations and the fact that the resulting figures
are correct for the traffic flow dictated by the
requirements, the additional workload of reporting
on the usage of this function as per traffic source
and traffic unit for each function of group 1 has
been discarded.
The non-automatic invocation of priority processing
for group 2 functions is another reason for separate
accounting, refer discussion in section 1.2.1.2.
Status collection is consequently described separately
in section 2 and 3 of this document.
- Log Collection and Print-out: Refer to comments
on Status Collection. The approach of bulk Data
Collection used for Status collection is even more
justified in this case since the Log Collection
is even more directly proportional to the bulk
of incoming and outgoing traffic, independent of
the source.
- Statistics Collection: Refer to comments on Status
Collection concerning bulk data collection.
- Checkpointing shall be performed either as a disk
checkpoint plus a checkpoint to Stand-By PU (via
TDX) or as a checkpoint to Stand-By PU only. Checkpoints
are accounted for as per function of group 1.
- Storage Key Collection and associated catalog build-up:
Refer to comments on Status Collection concerning
bulk data collection.
- Storage and Dumping: Storage actually consists
of collecting storage keys for catalog (see above)
plus processing of request for "store" and associated
"unloading" to intermediate storage from short
term storage. The "store" results in update of
the directory for on-line storage (OCD).
The "dumping" from intermediate to off-line storage
is considered a background process which is not
exercised during high load.
The "unloading" is accounted for under each of
the functions in group 1 as part of "create":
Each time an object is created, which shall later
be subject to unloading, this is accounted for
"in advance".
- Report Generation and printing: Refer to comments
for Status Collection concerning bulk data collection.
This function is associated to initial analysis
of incoming traffic and transmission of outgoing
traffic.
- Security Interrogation: Refer to comments on Status
collection concerning bulk data collection.
The functions of group 3 are exercised periodically.
Except for Status Print-out, which may be considered
bulk traffic dependent, they are practically traffic
independent. This may seem odd in the case of retrieval,
but it is an assumption of this document that the maximum
required rate of retrieval is fully utilized independently
of any rise in the external traffic.
The t̲h̲r̲o̲u̲g̲h̲p̲u̲t̲ figures applicable for Requirements
Mapping and Data Collection shall be those corresponding
to t̲h̲e̲ ̲b̲u̲s̲y̲ ̲h̲o̲u̲r̲/̲m̲i̲n̲.̲ ̲t̲r̲a̲f̲f̲i̲c̲ ̲f̲l̲o̲w̲ ̲e̲x̲c̲e̲p̲t̲ ̲f̲o̲r̲ ̲p̲r̲o̲c̲e̲s̲s̲i̲n̲g̲
̲r̲e̲l̲a̲t̲e̲d̲ ̲t̲o̲ ̲o̲r̲i̲g̲i̲n̲a̲l̲ ̲m̲e̲s̲s̲a̲g̲e̲ ̲g̲e̲n̲e̲r̲a̲t̲i̲o̲n̲ o̲f̲ ̲C̲A̲M̲P̲S̲ ̲V̲D̲U̲s̲
̲(̲i̲n̲c̲l̲u̲d̲i̲n̲g̲ ̲p̲r̲e̲p̲a̲r̲a̲t̲i̲o̲n̲,̲ ̲c̲o̲o̲r̲d̲i̲n̲a̲t̲i̲o̲n̲,̲ ̲a̲n̲d̲ ̲r̲e̲l̲a̲t̲e̲d̲ ̲d̲i̲s̲t̲r̲i̲b̲u̲t̲i̲o̲n̲)̲,̲
̲a̲l̲l̲ ̲M̲D̲C̲O̲ ̲a̲n̲d̲ ̲M̲S̲O̲ ̲a̲c̲t̲i̲v̲i̲t̲y̲,̲ ̲a̲n̲d̲ ̲a̲l̲l̲ ̲d̲e̲l̲i̲v̲e̲r̲y̲ ̲(̲r̲e̲c̲e̲p̲t̲i̲o̲n̲)̲
̲o̲f̲ ̲i̲n̲f̲o̲r̲m̲a̲t̲i̲o̲n̲ ̲a̲t̲ ̲l̲o̲c̲a̲l̲ ̲t̲e̲r̲m̲i̲n̲a̲l̲s̲ ̲i̲n̲ ̲w̲h̲i̲c̲h̲ ̲c̲a̲s̲e̲s̲ ̲o̲n̲l̲y̲
̲t̲h̲e̲ ̲b̲u̲s̲y̲ ̲h̲o̲u̲r̲ ̲f̲i̲g̲u̲r̲e̲s̲ ̲s̲h̲a̲l̲l̲ ̲a̲p̲p̲l̲y̲.
The busy minute load thus calculated will possibly
not be executed: queuing of incoming messages for analysis
(throughput bottleneck with a positive effect) prevents
excessive queuing of message copies for delivery to
terminals. The incoming message transfer to disk has
a high priority thus preventing loss of information.
Busy minute figures are 3 times busy hour figures
for incoming and relayed messages and 2 times busy
hour figures for outgoing messages generated at CAMPS.
The t̲i̲m̲i̲n̲g̲ requirements shall be fulfilled only under
conditions corresponding to busy hour traffic.
The following additional conditions shall apply for
timing requirements in general.
- Timing requirements for interactive transactions
shall apply from transmission of last character
from VDU until reception of first response character
on VDU.
- Timing requirements for incoming message handling
shall apply from reception of last character from
external channel until availability for delivery
to printer or VDU, i.e. until queuing.
- Security interrogation shall not be included in
timing requirements, neither shall time for queue
item selection, i.e. a stand alone printer is actually
assumed.
Concerning the queuing for stand alone printers, it
is a basic assumption, supported by the requirements
specification, that buffer and throughput estimates
for busy hour shall be based on a terminal load of
70% of the capacity. Whether this assumption actually
holds, is investigated in the Data Collection section
3.6.2. The same requirement is investigated in connection
with queuing for external channels, OCRs, PTRs, and
PTPs.
1.2.1.2 P̲e̲r̲f̲o̲r̲m̲a̲n̲c̲e̲ ̲C̲a̲l̲c̲u̲l̲a̲t̲i̲o̲n̲s̲
We will here discuss some major aspects of the application
of the data collection figures for performance calculations.
It is a consequence of the queue selection scheme of
the VDUs that apart from preemption at stand alone
printers and external channels there is no direct requirement
for priority processing.
Requirements for priority processing are in certain
cases implied indirectly; one case is delivery of
incoming messages of flash precedence for which the
required maximum delivery time is half the one for
non-flash messages.
It is a consequence of the CAMPS System Design that
higher priority may be given to a process when it is
integrated in the CAMPS Operating System; as a consequence
all dependent processing (calls to File management
systems, etc.) as reflected in the Data Collection
sheets of section 3 will be given the same priority.
Service functions of group 2, as discussed previously
in this section, are not given higher priority automatically
since "calls" to these are based on queuing and extra
queues would lead to a design change.
It is thus seen that priority processing may be invoked
without changing the design of the application software,
but merely by adjusting certain parameters when integrating
the applications in the system. It is important to
note, however, that as a result of such adjustments
(taking the case of incoming message delivery as an
example) flash and non-flash traffic will be treated
with the same priority. In general, processes shall
be given priority on the basis of the type of application,
but not on the basis of precedence.
It is a consequence of speeding up some processing
that other processing is delayed. In section 5, a two
level priority system has been considered thereby proving
the enhancements obtainable for certain processing
and the resulting degradation of other processing.
It is a basic assumption, priority processing or not,
that the needs for processing at any time shall correspond
to the traffic flow of section 2.2 for busy hour/minute.
The implications are such that there shall never be
bursts of traffic with a different flow composition.
This assumption does not deny the existence of bursts
of traffic: For strings of applications, such as incoming
message handling and delivery, it is very likely that
the associated processing will dominate the processing
for some time. The important point is, however, the
environment in which the processing of each independently
arriving request is executed at any service position.
The traffic flow figures are reflected in the Data
Collection of section 3. The resulting resource consumption
is simply found by adding the contributions linearly,
separately as per priority, per process, etc., as applicable
for queuing calculations.
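As a minimal sketch of this linear addition (the function names, rates, and per-transaction CPU times below are invented placeholders, not the figures of section 3), the contribution of a traffic mix to the PU load could be summed as follows:

    # Illustration only: linear addition of load contributions at one service
    # position (here the PU).  All rates and per-transaction times are placeholders.
    functions = {
        # name: (transactions per busy hour, CPU seconds per transaction)
        "msg_preparation":   (400, 6.0),
        "incoming_analysis": (900, 3.0),
        "distribution":      (900, 2.0),
    }
    busy_hour_seconds = 3600.0
    cpu_capacity = 3.0                       # three CPUs in one PU
    demand = sum(rate * t for rate, t in functions.values())
    load = demand / (busy_hour_seconds * cpu_capacity)
    print(f"total demand: {demand:.0f} CPU seconds per busy hour")
    print(f"PU load: {load:.1%}")

The same summation is made separately per priority and per process where the queuing calculations require it.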
The relatively small size of the load for busy hour
(refer section 1.2.2) will justify the assumption of the
random arrival of all requests for processing at all
service positions as a consequence of the random arrival
of traffic to the CAMPS system. Consequently a model
with separate queuing calculation at each service position
(random arrival) is used. In busy minutes this assumption
is not justified but may still be used as a worst case
assumption in throughput calculations.
At this stage, we have then established the basis for
the model selection of section 4 and the response time
and throughput calculations of section 5. For further
discussion, refer to these sections.
We shall take up the discussion of the use of average
load for performance calculations:
Consider an incoming message for analysis and delivery;
since an incoming msg. will consume quite some resources,
it might be argued that, since most of the load meanwhile
is due to the processing of this message, there will
hardly be any queuing for this message.
This argument is wrong: The queuing calculations are
based on random or independent arrival, and thus assume
that any new request for a string of processing
sees, for each request at a service position, a load
corresponding to an already established average load
and a certain probability distribution of the service
time.
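The role of the established average load can be illustrated with the classical single-server result for random arrivals; this is only a sketch of the principle under an M/M/1 assumption, not the formulas actually applied in sections 4 and 5:

    # Sketch only (M/M/1 assumption): mean waiting time seen by a randomly
    # arriving request at a single server already carrying an average load.
    def mean_waiting(load, service_time):
        """Mean waiting time, excluding service, for 0 <= load < 1."""
        if not 0.0 <= load < 1.0:
            raise ValueError("load must be below 100% of capacity")
        return load * service_time / (1.0 - load)

    # e.g. a request with 20 ms mean service time arriving at 40% average load
    print(f"waiting: {1000 * mean_waiting(0.40, 0.020):.1f} ms")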
1.2.1.3 C̲P̲U̲ ̲P̲r̲o̲c̲e̲s̲s̲i̲n̲g̲
CPU processing is based on time slicing: Each request
for processing in front of a queue is given the same
fixed processing time, and if the processing is not
finished, a new request is queued as the last element
in the queue (may be according to priority). A process
may also finish before the end of the time slice due
to call of another process.
The advantage of time slicing is expressed in the w̲a̲i̲t̲i̲n̲g̲
̲t̲i̲m̲e̲, where the time slice applies as service time
for all queued requests and n̲o̲t̲ the average service
time. It is thus seen that a request for processing
with a short service time will not be the victim of
prolonged queuing due to a single process with a long
service time, but only as a result of high system
load.
The queuing factor is on average the same for all requests:
Some processes have finished, i.e. no repeated requests
for time slice, but new external requests have arrived.
The queuing is seen to depend on the total time load
in terms of time slice processing relative to the time
available.
Variations in the service time will depend on the variations
of the PU processing mechanism (mainly variations in
processor bus access time) plus the variations in processing
needs due to the transaction i̲t̲s̲e̲l̲f̲.
It might be argued: Since each request for a string
of time slices on the PU only involves one time slice
at a time, the load should n̲o̲t̲ be based on the total
number of time slices (including fractions) to be processed,
but on the number of process requests.
This argument is wrong: Queuing shall be based on
the PROBABILITY of a server being occupied and this
is expressed as the total needs for processing time
in a time unit.
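A small numerical illustration of this point (the time slice length and request rate are placeholders): two cases with the same number of process requests per second, but different total time-slice demand, give very different probabilities of the server being occupied:

    # Illustration only: the load is the total processing time per time unit,
    # not the number of requests.  Values are placeholders.
    time_slice = 0.010          # 10 ms per time slice
    requests_per_sec = 20.0
    for slices_per_request in (1, 5):
        demand = requests_per_sec * slices_per_request * time_slice
        print(f"{slices_per_request} slice(s)/request -> load {demand:.0%} of one CPU")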
The PU has 3 CPUs and any of these may be used for
any process ready for processing when one of the CPUs
are free. There are limitations, however, in the free
usage of CPUs implied by the software itself, in some
cases based on basic constraints and in other cases
on allowed software resource shortages. These constraints
are not dependent on any particular CPU:
The appearance of a specific request for processing
may lead to temporary suppression of processing due
to non-availability of a̲ ̲p̲i̲e̲c̲e̲ ̲o̲f̲ ̲s̲o̲f̲t̲w̲a̲r̲e̲ ̲w̲h̲i̲c̲h̲ ̲d̲o̲e̲s̲
̲n̲o̲t̲ ̲a̲l̲l̲o̲w̲ ̲f̲o̲r̲ ̲c̲o̲n̲c̲u̲r̲r̲e̲n̲t̲ ̲p̲r̲o̲c̲e̲s̲s̲i̲n̲g̲.
Such blockings may arise for three reasons:
- request to a process which is already being used.
- request to a coroutine within a process which is
already being used.
- request to procedure (within a process) which does
not allow for concurrent processing.
It is a basic assumption of the CAMPS System that process
copies, i.e. more than one process with the same source
program, shall be avoided and the number of processes
kept down to a minimum. The intent is saving of PU-memory.
Instead coroutines may be copied since they may work
within the same data area ("all coroutines" are always
"in the CPU" at the same time but only one is processing
at any instant of time) and without the additional
system memory overhead of processes.
Waiting for a high level process will involve waiting
for all the associated processing, i.e. system calls,
of the process. This may in certain cases involve
calls in several layers, such as disk access via FMS
or MMS. A high level process will correspond to the
application functions of section 3 (msg. input, msg.
analysis, etc.).
At a level below this, and adding to the waiting time
above, there is the waiting for using certain procedures
and CAMPS system functions, which are processes or
coroutines and procedures in a process.
At the lowest level is finally the waiting for one
of 3 CPUs in a PU.
Apart from the lowest level (waiting for multiserving
CPU's) waiting shall at a̲l̲l̲ levels be calculated from
single server queuing with a load corresponding to
the total processing below the level plus waiting,
i.e. as corresponding to the total service time. Thus,
we may have to consider waiting for a process as well
as for one of the coroutines or procedures inside the
process.
The usage of non-concurrent system functions is a service
available as soon as the particular processing is done
and is only dependent on processing in o̲n̲e̲ of the 3
CPUs in the multiserving assembly:
T̲h̲e̲ ̲l̲o̲a̲d̲ ̲u̲s̲e̲d̲ ̲f̲o̲r̲ ̲c̲a̲l̲c̲u̲l̲a̲t̲i̲n̲g̲ ̲t̲h̲e̲ ̲w̲a̲i̲t̲i̲n̲g̲ ̲s̲h̲a̲l̲l̲ ̲b̲e̲
̲t̲h̲e̲ ̲C̲P̲U̲ ̲t̲i̲m̲e̲ ̲o̲f̲ ̲t̲h̲e̲ ̲f̲u̲n̲c̲t̲i̲o̲n̲ ̲i̲t̲s̲e̲l̲f̲ ̲a̲l̲l̲o̲c̲a̲t̲e̲d̲ ̲t̲o̲ ̲o̲n̲e̲
̲C̲P̲U̲ ̲t̲i̲m̲e̲s̲ ̲t̲h̲e̲ ̲q̲u̲e̲u̲i̲n̲g̲ ̲f̲a̲c̲t̲o̲r̲ ̲f̲o̲r̲ ̲t̲h̲e̲ ̲m̲u̲l̲t̲i̲s̲e̲r̲v̲e̲r̲ ̲P̲U̲.
The queuing on the level of coroutines and finally
on the level of application processes shall thus include
the processing of the system functions. Not only processing
of the PU shall be included, however: At the level
of coroutines shall be included request for other service
positions which will delay the function.
In case of coroutines there is an additional PU processing
delay corresponding to the waiting for a single server
with a load equal to the sum of the load of all coroutines
of the same process:
The load in question for calculating the queuing factor
shall be the load for CPU time of the coroutines themselves,
n̲o̲t̲ ̲i̲n̲c̲l̲u̲d̲i̲n̲g̲ ̲s̲y̲s̲t̲e̲m̲ ̲c̲a̲l̲l̲s̲. The service time shall
be the total PU service time of all coroutines in the
same process.
Waiting for the process containing the coroutines is
otherwise identical to waiting for the coroutine itself,
all system calls and the related processing included
except in case of coroutine duplication: This is waiting
for a multiserver assembly, each server having a proportional
load. The load of the non-concurrent functions is
listed partly in section 3.6.1 (procedures and CAMPS
system functions), and partly in section 5.1 (applications).
Queuing at the higher levels of processing is discussed
in section 4.6 and section 5.1.
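For the lowest of these levels, waiting for one of the 3 CPUs, the rule quoted earlier (the load for calculating the waiting is the CPU time of the function itself allocated to one CPU times the queuing factor for the multiserver PU) may be sketched as follows. The Erlang-C expression below is only an assumed stand-in for the multiserver queuing factor; the factor actually used is derived in section 4.2:

    # Sketch under assumptions: the multiserver queuing factor for the 3-CPU PU
    # is approximated by an Erlang-C (M/M/m) expression; section 4.2 derives the
    # factor actually used.  Queuing time means waiting plus service.
    from math import factorial

    def erlang_c(servers, load_per_server):
        """Probability that an arriving request finds all servers busy."""
        a = servers * load_per_server                 # offered traffic in erlangs
        top = a**servers / (factorial(servers) * (1.0 - load_per_server))
        return top / (sum(a**k / factorial(k) for k in range(servers)) + top)

    def queuing_time(cpu_time, load_per_server, servers=3):
        factor = 1.0 + erlang_c(servers, load_per_server) / (servers * (1.0 - load_per_server))
        return cpu_time * factor         # CPU time of the function times the factor

    # busy hour PU load of 129% spread over 3 CPUs is roughly 0.43 per CPU
    print(f"{1000 * queuing_time(0.005, 0.43):.2f} ms for a 5 ms system function")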
Variations in queuing time, i.e. the waiting time plus
service time which again is considered the service
time for higher levels of queuing, are an important
factor in queuing calculations. Such variations are
functionally dependent on the total service time needs
(the load) plus the variations in service time. At
the lowest level of processing, service time variations
are variations in instruction executing time due to
access time variations to Cache and Processor Bus.
At higher levels, these variations are normally negligible
compared to the variations of the transaction processing
time (variations in message contents, etc.).
The mechanisms of the lowest levels of processing,
the CPU-CACHE and the Processor Bus Access, are discussed
in detail in section 4.2. The effect of the CPU-CACHE
is to decrease the number of accesses to the Processor
Bus and thereby decrease the queuing effect arising
from the fact that the Processor Bus is serving all
of the CPUs in the PU: For any given CACHE HIT probability
there is an upper limit to the total PU load due to
the Processor Bus which will be lower than the combined
CPU capacity, refer section 4.2.5 for further discussion.
1.2.1.4 D̲i̲s̲k̲ ̲A̲c̲c̲e̲s̲s̲
The arguments put forward in section 1.2.1.3 concerning
the total average need for time slices to be the determining
factor in average queuing for the PU apply to the
disk as well. The time slice is here replaced by the
disk access time. The general remarks concerning queuing
time variations shall apply as well, access time variations
being due to positioning of data relative to end of
previous search.
For calculating disk access response times, it is assumed
that WRITE to disk is performed in "parallel" to mirrored
disks, whereas READ is done from the disk which first
becomes free.
The mechanism of disk access is as follows:
The requests for disk access are processed by the File
Management system (FMS) or Message Management System
(MMS) and the Disk Cache Manager (DCM) in order of
queuing (FIFO, may be according to priority).
If a WRITE request is received, the corresponding access
instructions (including data) are prepared and stored
in PU memory; commands for execution of the instructions
are queued in PU memory as well in a FIFO queue (maybe
with priority).
If a READ request is received, the corresponding access
instruction is prepared if the information (sectors
on disk) is not already available in one of the two
disk CACHE memories resident in the DISC CONTROLLER
RAMs (a list of CACHE contents is kept in PU memory). If
available here, the READ is executed immediately; otherwise
the instruction is prepared and placed in a PU memory
buffer and the command for execution is queued to the
same queue as the WRITE commands.
The execution of commands is as follows: Whenever
a disk is free (response from disk controller), the
front request of the queue is executed.
If it is a READ command, the action is to transfer
the instruction to the corresponding controller memory
together with the execution command.
If it is a WRITE command, the execution command is
together with the instruction transferred to the controller
memory; it shall furthermore be queued for transfer
to the second disk when it becomes free, but it shall
n̲o̲t̲ be executed before the write has been executed
on the first disc. This special precaution is in order
not to lose information in case of parallel modification
of disk sectors on the two disks.
If further requests are queued or arriving, they may
be exercised before the WRITE to the "second" disk
is executed, not only READ requests, but also WRITE
requests since the execution of previous WRITE requests
to the "first" disk may not be finished.
It is seen that the role as "first" and "second" disk
is not necessarily permanent, but will depend on the
fluctuations of the access time.
The order of execution of read or write requests is
not important since the sequence is controlled by the
applications and the FMS.
The CACHE is updated as follows:
Always during a READ request with no HIT, the retrieved
information is written to the CACHE memory of the same
controller.
During a WRITE and in case of a HIT, the corresponding
area in the CACHE is deleted.
The CPU time needed in the controller for above actions
is added to the average service time.
It is shown in section 4.4 that all queuing due to
I/O Channel throughput limitations may be ignored.
The disk access time used for load and queuing time
calculations shall, according to the above discussion,
include the disk handler time, but not the time of
FMS (MMS) and DCM which is spent before queuing.
The algorithm for access to dual disks in case of sector
faults is discussed in section 4.5. It is assumed
to be a very infrequent occurrence and is not included
in the performance calculations.
We may now consider the queuing formalism:
It is seen that the single disk load may be considered
to be all disk WRITEs and one half of all disk READs,
CACHE HITs n̲o̲t̲ included.
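This rule can be checked against the figures of Table 1.2.2-1; the average access time is not quoted in this section, so it is inferred from the busy hour row and then used to predict the busy minute row (a worked consistency check, not an additional result):

    # Worked check of the per-disk load rule against Table 1.2.2-1.
    def per_disk_rate(reads, writes):
        return writes + reads / 2.0           # all WRITEs plus half of all READs

    bh_rate = per_disk_rate(15.7, 4.76)       # busy hour accesses per second
    bm_rate = per_disk_rate(22.8, 8.85)       # busy minute accesses per second
    access_time = 0.388 / bh_rate             # inferred from the 38.8% busy hour load
    print(f"inferred average access time: {access_time * 1000:.1f} ms")
    print(f"predicted busy minute load: {bm_rate * access_time:.1%} (table: 62.4%)")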
READ and WRITE commands are queued in a multiserver
FIFO queue (CACHE HITS still exempted) until at least
one disk is free. The second write is now put
in a separate queue waiting for the same disk to finish
the access just initiated. The second write will also
have to wait for the second disk to finish if busy.
Considering the "second" request for write as a separate
request which is issued after execution of the "first"
request, i.e. after waiting in the multiserver queue
and service, then this request will have to wait for
the second disk to become free as well, i.e. the queuing
of this additional request is proportional to the probability
of one of two disks being busy only, the factor of
proportionality being one half of the queuing for a
single disk with the load already introduced for the
multiserver queue.
Taking an application, the final response time contribution
from the disk is calculated as:
(1 - P(HIT)) x Queuing for DISK + P(HIT) x Queuing for CACHE.
Queuing means waiting for service plus service. The
queuing for CACHE only applies to READs, of course.
P(HIT) is the probability of a CACHE HIT.
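A small numerical illustration of the formula (P(HIT) and the two queuing times below are placeholders, not calculated values):

    # Illustration only; queuing means waiting for service plus service.
    def disk_response(p_hit, queuing_disk, queuing_cache):
        return (1.0 - p_hit) * queuing_disk + p_hit * queuing_cache

    # e.g. 45 ms queuing for a disk READ, 2 ms for a CACHE hit, 30% hit probability
    print(f"{1000 * disk_response(0.30, 0.045, 0.002):.1f} ms")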
Waiting for CACHE access, i.e. the DMA transfer of
data to PU, is partly the waiting for a server loaded
as the I/O channel, the contributions to this load
from the CACHE transfer being the DMA transfer from
CACHE to PU. As shown in section 4.4, this waiting
may be ignored, the service time being the only contribution
to the queuing: It will consist of the DMA transfer
time plus the information retrieval time in the disk
controller. The service time is so low, however, that
queuing may be ignored.
Finally, it shall be mentioned that queuing for disk
may include non-preemptive priority as selected via
FMS or MMS.
A third disk is available for off-line retrievals.
Only PU queuing for the access to MMS or FMS has to
be considered in this case. (The load is very low).
Section 4.5 presents the basic formulas for disk queuing
calculations.
1.2.2 M̲a̲j̲o̲r̲ ̲R̲e̲s̲u̲l̲t̲s̲
This subsection will present the major results derived
in sections 3, 4, and 5. The main assumptions
have been presented in the previous subsections, but
the values of the main parameters shall be quoted in
this subsection.
L̲o̲a̲d̲s̲
Table 1.2.2-1 lists the important figures.
The busy minute PU load of 192% may be compared to
the maximum possible load on the PU of 225%. This
limit is due to access contention of the Processor
Bus. All the above figures are valid for 5̲0̲%̲ ̲C̲A̲C̲H̲E̲
̲H̲I̲T̲S̲ and an average instruction execution time without
cache hits of 2 µs.
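The two PU load rows of Table 1.2.2-1 are mutually consistent with these parameters; assuming the load scales with the mean instruction time, the implied instruction times can be recovered as a check (the hit-case time below is an inference, not a figure quoted in this report):

    # Consistency check on the quoted figures: 2 us per instruction with no cache
    # hit, 50% hit probability, busy minute loads 231% and 192%.
    miss_time = 2.0                              # us, without cache hits
    load_no_hits, load_50_hits = 2.31, 1.92
    mean_time_50 = miss_time * load_50_hits / load_no_hits
    hit_time = 2.0 * mean_time_50 - miss_time    # mean = 0.5*hit + 0.5*miss
    print(f"mean instruction time at 50% hits: {mean_time_50:.2f} us")
    print(f"implied instruction time on a cache hit: {hit_time:.2f} us")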
The TDX transfer load in terms of characters is far
less than the capacity of 62500 bytes/sec. More
interesting is, however, the bandwidth assignment for
the different terminals in order to minimize the transfer
delays: Bandwidth assignment corresponds to 47600
bytes/sec.
The I/O Channel transfer load requirements may be compared
to a transfer time of about 1.5 ms for 2 sectors (2x512
bytes) per transfer, noting that external channel
transfers will never exceed 512 bytes/transfer and
disk transfers will never exceed about 2x512 bytes/transfer
on average due to buffer limitations. It is thus seen
that the I/O Channel capacity is by far ample, and
allows for an increase in the number of transfers in case
of smaller buffers (each transfer is then also less
time consuming).
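Combining the 1.5 ms transfer time with the busy minute transfer rates of Table 1.2.2-1 gives a rough occupancy figure (a worked illustration of the statement above):

    # Busy minute I/O Channel occupancy at about 1.5 ms per transfer,
    # using the transfer rates of Table 1.2.2-1.
    transfer_time = 0.0015                              # seconds per transfer
    transfers_per_sec = 1.41 + 0.65 + 22.8 + 8.85       # ext. channel + disk
    print(f"I/O Channel load: {transfers_per_sec * transfer_time:.1%}")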
R̲e̲s̲p̲o̲n̲s̲e̲ ̲T̲i̲m̲e̲s̲
50% CPU-CACHE HITS and no DISK-CACHE HITS are assumed.
Most critical response times are the DELIVERY TIME
for INCOMING FLASH MESSAGE, 5 sec. (99%), and VDU CMD
LINE ENTRY VALIDATION and RESPONSE TIME, 1 sec. (90%).
Concerning the INC. MSG DELIVERY the requirement is
fulfilled safely, but then only with higher priority
for all processing associated with Message Analysis
and Distribution than all other processing in a two
level priority system. All messages, independent of
precedence, have the same average delivery time: due
to the low load of Msg. Analysis and Distribution, nothing
but complication is gained by separating FLASH and
NON-FLASH precedence messages.
Concerning the CMD LINE VALIDATION RESPONSE the response
time is practically entirely dependent on the bandwidth
assignment for character transfer, mostly on the LTUX
to VDU transfer rate and considerably less on the TDX
transfer rate: The LTUX to VDU transfer rate is assumed
to be 120 ch./sec. and the assigned TDX-bandwidth is
6400 b/s. The requirement is fulfilled with the given
rates; the response time is considerably improved,
however, by increasing the LTUX to VDU transfer rate
to 240 ch/sec. Even if close to the specified limit,
the response time calculation is reliable since it is
almost entirely dependent on a fixed bandwidth assignment.
T̲h̲r̲o̲u̲g̲h̲p̲u̲t̲
There are no busy hour throughput limitations at any
of the service positions of the system.
Also in case of busy minute traffic, the throughput
is found sufficient, considering that the only strict
requirement is the reception and storage of incoming
messages. Analysis of all messages arriving in the busy
minute will not be accomplished within the same minute.
This is, however, an advantage since excess queuing
of messages which cannot be presented anyway is prevented:
messages are assumed presented at busy hour arrival
rate.
We shall consider the expansion possibilities.
It is seen from the load figures above, that the PU
load may be increased to 225%; since the Processor
Bus queuing when approaching limiting load still is
based on a finite population (= number of CPUs per
PU), the PU response times are still limited, although
considerably higher.
The response time increase for PU processing at a 30%
load increase is no more than 30% for busy hour, all
applications having the same priority.
A 30% traffic increase will change the disk load in
two ways: The size of the disk on line storage area
will increase somewhat (although not 30%) and the access
time will as a result increase somewhat; secondly, and
of far greater importance, the required
number of accesses will increase by 30%.
The disk load in busy hours is not expected to exceed
55% with a 30% traffic expansion and the response times
will increase no more than 30% for disk reads and approximately
half as much for disk writes.
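A rough check of the 55% figure (the allowance for the longer access time is an assumption, not a calculated value):

    # Busy hour disk load of 38.8% scaled by 30% more accesses, plus an assumed
    # allowance of a few percent for the longer average access time.
    base_load = 0.388
    scaled = base_load * 1.30
    print(f"{scaled:.1%} from the access increase alone, "
          f"{scaled * 1.08:.1%} with an assumed 8% longer access time")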
At 30% expansion of traffic, the busy minute load will
still be served by the disk assembly, but with considerable
increase to the response times. In this case, the
load estimation uncertainty may play an important role
as well, due to the load sensitivity of the queuing times
at high loads. In order to avoid throughput limitations
for incoming message input, it is then recommended
to raise the priority of input processing to maximum.
The TDX system allows an expansion in bandwidth or
terminal assignment corresponding to 14900 bytes/sec.
This is a 30% increase. Assuming that a traffic expansion
is most likely reflected in a proportional increase
in the bandwidth assignment for VDUs and Printers (TRC
and P-T-P circuits have low bandwidth assignments and
other circuits have a low load) the TDX expansion capability
will allow for a 40% increase in traffic with a proportional
increase in number of VDUs and Printers, assuming the
bandwidth assignment per terminal unchanged.
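The spare TDX bandwidth behind these statements follows directly from the figures already quoted:

    # Worked arithmetic: spare TDX bandwidth relative to the current assignment.
    capacity = 62500        # bytes/sec, TDX capacity
    assigned = 47600        # bytes/sec, current bandwidth assignment
    spare = capacity - assigned
    print(f"spare bandwidth: {spare} bytes/sec "
          f"({spare / assigned:.0%} of the current assignment)")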
S̲t̲o̲r̲a̲g̲e̲ ̲A̲l̲l̲o̲c̲a̲t̲i̲o̲n̲
A 30% expansion of the storage needs is within the
capability. Since a 30% traffic expansion is not expected
to change the size of tables, the corresponding requirements
for disk storage allocation expansion are, however, smaller.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
 ITEM                                            BUSY MIN.   BUSY HOUR
 ______________________________________________________________________

 *) PU load in 3 CPUs, NO CACHE HITs                231%        155%

 *) PU load in 3 CPUs, 50% CACHE HITS               192%        129%

 Mirrored Disk Load, 32.5 Access/Sec.,
 NO CACHE HITS                                      62.4%       38.8%

 TDX TRANSFERS (CAPACITY = 62500 Bytes/Sec.)
    TDX READS, No. of per sec.                       9.66        8.04
    READ Transfer, Byte/Sec.                        451         269
    TDX WRITES, No. of per sec.                     13.3        11.7
    WRITE Transfer, Byte/Sec.                      4688        4082

 I/O CHANNEL TRANSFERS (incl. disk transfers;
 1.5 ms/trsfr)
    EXT. CHAN. READS, No. of per sec.                1.41        0.47
    READ Transfer, Byte/Sec.                        716         240
    EXT. CHAN. WRITEs, No. of per sec.               0.65        0.23
    WRITE Transfer, Byte/Sec.                       331         120
    DISK READs, No. of per sec.                     22.8        15.7
    DISK WRITEs, No. of per sec.                     8.85        4.76
 ______________________________________________________________________

 *) Including 5% overhead for time slice changes.

 Table 1.2.2-1  SERVICE POSITION LOADS
2̲ ̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲ ̲M̲A̲P̲P̲I̲N̲G̲
2.1 I̲N̲T̲R̲O̲D̲U̲C̲T̲I̲O̲N̲
The requirements sheets of sections 2.3, 2.4, and 2.5
present a mainly straightforward listing of performance
requirements (section 3.4.1 of CPS/210/SYS/0001) related
to each of the functions known from the system requirements
and with a further breakdown into system packages as
known from the System Design Document.
Sections 2.3 and 2.4 present the mapping related to
functions of group 1 and some from group 2 (refer to
discussion in section 1.2.1), whereas section 2.5 covers
the mapping related to the remaining functions of group
2 and the functions of group 3.
In some cases (e.g. Log Collection, Status Collection)
additional transaction accounting information has been
inserted to support the data collection of section
3.
The traffic flow figures referred to in the mapping
are presented in section 2.2.
For further discussion of the assumptions behind the
mapping: Refer to section 1.2.1.
2.2 T̲R̲A̲F̲F̲I̲C̲ ̲F̲L̲O̲W̲
The traffic flow requirements for busy hour, war, of
Section 3.4.1.2 in CPS/210/SYS/0001 may be pictured
as in fig. 2.2-1.
The busy minute traffic flow figures are as follows:
- 3 times the incoming and relayed traffic flow;
this does not apply to reception of information,
where busy hour figures shall apply. MDCO and MSO
actions are also based on busy hour figures.
- 2 times the CAMPS originated outgoing traffic flow;
this applies only from release until dispatch for
transmission on external channels. Otherwise busy
hour figures apply. MDCO and MSO actions are also
based on busy hour figures.
The traffic flow for busy hour, war, combined with
the busy minute figures as stated above presents the
strongest transaction throughput requirements for the
system.
The corresponding character flow rates are less than
the required busy sec. flow rates. This discussion
will be carried through in sections 3.6.3 and 3.6.4.
Fig. 2.2-1
F̲i̲g̲.̲ ̲2̲.̲2̲-̲1̲ ̲N̲o̲t̲e̲s̲:̲
1) 5% of incoming messages are encrypted; they shall
be routed to the dedicated Paper Tape Puncher (PTP)
for off-line decryption. Later these messages
are entered as plaindress messages via the dedicated
Paper Tape Reader (equivalent to input via low
speed external circuit). These messages shall
thus pass the input and analysis twice.
Similarly, it is possible to route outgoing plaindress
messages to PTP for off-line encryption and later
enter these as encrypted messages from the dedicated
PTR. It is assumed, however, that this traffic
is negligible; the encrypted messages entered
are all entirely off line prepared.
2) Locally prepared comments include 10 associated
to CCIS originated relay traffic.
3) It is assumed that each message for P-T-P gives
rise to 4 transmissions; for other circuits it
is only one.
4) First coordination is a one copy presentation to
VDU.
5) Service Messages
10% of all incoming and outgoing traffic is service
message traffic.
Some of the outgoing service messages are locally
prepared by the Supervisor, MDCO, or MSO, others
are generated automatically: It is assumed, even
if the processing requirements are less than for
a normal message, that the processing is the same
as for normal message generation. The additional
load may cover supervisory activity leading to
the generation.
Some of the incoming service messages have to be
delivered to the supervisor position only, i.e.
delivery to only one terminal and in one copy only.
It is assumed, however, that the processing is
the same as for normal messages. The additional
load may cover supervisor created processing as
a result of the reception (RERUN is not included;
it is a separate function for outgoing traffic).
Since the supervisor position only represents 3%
of all positions related to incoming traffic, and
the supervisor position plus MDCO and MSO positions
12% of all positions related to outgoing traffic
(maximum configuration has 32 VDU positions out
of which 4 are supervisory), it will be seen that
the supervisory activity is quite well covered
(it shall be noted that the MSO and MDCO dedicated
activity is covered separately).
A third kind of service messages are never passed
to or generated from a supervisory position. For
busy hour purpose, only the FLASH acknowledge service
messages for all circuits and general acknowledge
messages for CCIS & SCARS circuits shall be considered
as follows: Resources have been included for reception
of acknowledge in connection with transmissions
of messages and for generation of acknowledge in
connection with input of incoming messages.
2.3 O̲U̲T̲G̲O̲I̲N̲G̲ ̲T̲R̲A̲F̲F̲I̲C̲ ̲R̲E̲L̲A̲T̲E̲D̲ ̲M̲A̲P̲P̲I̲N̲G̲
The following tables present the requirements mapping
which is related to outgoing traffic. It includes:
- locally generated traffic from preparation until
dispatch for external channels.
- processing related to coordination, editing, comments
preparation and editing, release, local distribution,
and transmission associated with relay messages
originating from CCIS.
- transmission of relay messages originating from
SCARS, P-T-P, and TARE.
- MDCO: Refer to table 2-12, section 2.4.
The following general table access requirements are
not mentioned in the tables.
For VDUs per transaction: 3 accesses to memory resident
tables for format descriptor, transaction serial number,
and function sequence table.
For Printers: 3 accesses for transaction serial no.,
print serial no. and special handling serial no.
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
(THR = THROUGHPUT, TIM = TIMING)

FUNCTION:      OUTG. MSG PREP. Separate prep. of Header and Text.
               Incl. request for coord. and release.
REQUIREMENTS:  THR: BUSY HOUR, WAR. 512 ch/msg out; 3 x 512 ch/msg inp.
               TIM: Time from entering header or text for validation
               (last ch. transmitted from VDU) to first ch. of response:
               10 sec (90%).
IOC:           Format transfer to VDU. Msg. transfer from VDU.
TEP/THP:       TEP: MSG. Validation: Header & Text separately and in
               total. CHECK TABLES: 6 PLA, 0.1 AIG, 3 SIC/MSG; SICs
               validated as 3 ch. seq.
SFM:           Fetch prep. format; reserve file & store msg. Disc table
               access for AIG and PLAs; ass. user spec. PLA-REF for
               3 PLAs, 1 from PLA. 4 disk table accesses for spec.
               handling, operating signal, coord. SCDs, local SCDs.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.3-1
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
(THR = THROUGHPUT, TIM = TIMING)

FUNCTION:      OCR INPUT AND VALIDATION
REQUIREMENTS:  THR: BUSY HOUR, WAR. 3 x 512 ch/msg inp.
IOC:           As above, plus transfer to disk before validation.
TEP/THP:       As above.
SFM:           As above.

FUNCTION:      PTR INPUT and ANALYSIS
REQUIREMENTS:  THR: BUSY HOUR, WAR.
IOC:           3 x 512 ch/msg inp.
TEP/THP:       THP: As msg. input and analysis, refer table 2.4-1.
SFM:           Refer table 2.4-1.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.3-1 (cont.)
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
(THR = THROUGHPUT, TIM = TIMING)

FUNCTION:      MSG EDITING. As prep. but with 1/2 send for coordination
               and no send for release.
REQUIREMENTS:  THR: BUSY HOUR, WAR. 3x512 ch/msg out; 3x512 ch/msg inp.
               2 edits per msg, except only 1 for msg. coordinated for
               CCIS.
               TIM: As prep. (Operator will change 256 ch per edit.)
TEP:           Complete re-evaluation as prep.
SFM:           As prep.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.3-2
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
(THR = THROUGHPUT, TIM = TIMING)

FUNCTION:      COMMENT PREP. Incl. CCIS & SCARS COMMENTS.
REQUIREMENTS:  THR: BUSY HOUR, WAR. 256 ch/comm out; 512 ch/comm. inp.
               TIM: Time from entry (last character of fully prep.
               comment transmitted from VDU) to first ch. resp.:
               as prep. = 10 sec (90%).
TEP:           Validation of total comment. No SCD evaluation for
               SCARS & CCIS comments.
SFM:           Fetch format, reserve file, store. 2 table accesses for
               special handling and SCDs.

FUNCTION:      VDU PAGES PREP.
REQUIREMENTS:  THR: (BUSY HOUR), WAR. 256 ch out; 3x512 ch inp.
TEP:           As above.
SFM:           As above.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.3-3
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
(THR = THROUGHPUT, TIM = TIMING)

FUNCTION:      COMMENT EDITING
REQUIREMENTS:  THR: BUSY HOUR, WAR. 512 ch/comm. out; 512 ch/comm. inp.
               1 edit/comm.
               TIM: As comment prep. (Operator will change 50 ch.)
TEP:           Complete re-evaluation as prep.
SFM:           As prep.

FUNCTION:      VDU PAGES EDIT
REQUIREMENTS:  THR: (BUSY HOUR), WAR. 3x512 ch out; 3x512 ch inp.
               1 edit/page.
TEP:           As above.
SFM:           As above.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.3-4
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
(THR = THROUGHPUT, TIM = TIMING)

FUNCTION:      RELEASE. Accept assumed.
REQUIREMENTS:  THR: BUSY MIN and BUSY HOUR, WAR. 3x512 ch/msg out;
               512 ch/msg inp.
TEP:           Validate release format. Queue notification for
               distribution. Queue msg. to traffic handling.
SFM:           Fetch release format + msg. 1 table access for rel.
               serial no.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.3-5
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
(THR = THROUGHPUT, TIM = TIMING)

FUNCTION:      DISTRIBUTION of outg. msg locally, coord., comments,
               release notification.
REQUIREMENTS:  THR: BUSY HOUR, WAR.
MDP/SFM:       Fetch distr. inf.; check and queue copies for TEP.
               For outg. msg. ass. 3 SIC & 3 SDL (disk resident).
               Add. SCDs from preparation, when locally generated.
               Also fetch msg. info. for statistics.

FUNCTION:      RECEPTION of the same.
REQUIREMENTS:  COORDINATION: 15 msg from CAMPS are coord. 2 times with
               5 copies; 10 msg from CCIS are presented once, and
               coord. once with 5 copies (i.e. 10x6 coord). After first
               presentation msg. is converted to format A for editing.
TEP/SFM:       Fetch reception format. Transfer to terminals with
               security int.
IOC:           Outg. msg. 1792 ch/msg. Coordination 1792 ch/msg.
               Comments 512 ch. Release notification 512 ch.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.3-6
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
(THR = THROUGHPUT, TIM = TIMING)

FUNCTION:      OUTG. MSG HANDLING: Routing Determination & Transmission.
REQUIREMENTS:  THR: BUSY MIN and BUSY HOUR, WAR. Multiple transmissions
               on P-T-P network only, namely 4.
THP/SFM:       Fetch msg. from storage; ACP127 conversion. Routing
               determination based on RIs known from PLA verific.
               during prep. (incl. OCR and CCIS entry) or from analysis
               of relayed msgs. and msgs. entered as complete ACP127
               (PTR). 1 disk table access per RI. 1 mem. table access
               for trans. serial no.

FUNCTION:      SCARS & CCIS COMM. OUTG. HANDLING
REQUIREMENTS:  THR: BUSY HOUR, WAR.
THP/SFM:       SPECIAL FORMAT CONVERSION.

FUNCTION:      VDU PAGES OUTG. HANDLING
REQUIREMENTS:  As above.
THP/SFM:       As above.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.3-7
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
(THR = THROUGHPUT, TIM = TIMING)

FUNCTION:      MSG SERVICE OF OUTG. MSG
REQUIREMENTS:  THR: BUSY HOUR, WAR.
TEP:           Msg. is queued for further traffic handling after
               service.
SFM/SAR:       3 table accesses for format presentation.

FUNCTION:      Outg. msg. routing assistance (10%)
REQUIREMENTS:  Not incl. CCIS & SCARS. Add 10% load to Routing
               Determination.
TEP:           Format and msg. is displayed; operator enters 32 ch.
SFM/SAR:       Get routing information referenced in queue.
IOC:           512 ch/msg. out; 32 ch/msg. in.

FUNCTION:      Rerun (5%)
REQUIREMENTS:  Add 5% load for Routing Determination and msg.
               transmission.
TEP:           Operator enters 32 ch.
SFM/SAR:       Retrieve referenced msg. plus retrieval format transfer.
IOC:           128 ch/msg out; 32 ch/msg inp.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.3-8
2.4 I̲N̲C̲O̲M̲I̲N̲G̲ ̲T̲R̲A̲F̲F̲I̲C̲ ̲R̲E̲L̲A̲T̲E̲D̲ ̲M̲A̲P̲P̲I̲N̲G̲
The following sheets present the requirements mapping
which is considered related to incoming traffic. They
include:
- Incoming traffic handling with distribution and
reception, including input handling and analysis
of relay messages plus local distribution for those
relay messages which are both relayed and locally
distributed.
The general table access requirements for VDU and printer
transactions, refer introduction of section 2.3, shall
also apply.
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS IOC THP SFM
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION: INC MSG INPUT *)
REQUIREMENTS: THR: BUSY MIN and BUSY HOUR, WAR.
IOC: Line by line breakdown. Certain further analysis.
THP: Transfer to inc. msg. queue.
SFM: Store inc. msg. in temporary storage.

FUNCTION: MSG127 ANALYSIS *)
REQUIREMENTS: THR: BUSY HOUR, WAR. TIM: ACP127 Analysis. Incl. table 2.4-2 figs for distr. but not security interrogation: Time from reception (EOM in PU) of last retrans. to msg. available for delivery as below:
20% FLASH in 5 sec (99%) or 10 sec. (100%)
THP: This also includes analysis of all traffic from CCIS & SCARS.
SFM: Fetch message. VDU pages are stored without distribution.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.4-1
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS IOC THP SFM
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
REQUIREMENTS (cont.): 15 sec. for preemption (100%). 35% IMM. in 10 sec (99%) or 20 sec. (100%). 30% PRIOR as IMM. 15% ROUT as IMM.
*) also including comments & VDU pages from CCIS & SCARS
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.4-1 (cont.)
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS MDP TEP/SFM IOC
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION: DISTRIBUTION AND RECEPTION OF INC. MSG.
REQUIREMENTS: THR: BUSY HOUR, WAR.
MDP: Fetch distr. inf., check and queue for format presentation. Distribution based on 3 SICs & 3 SDLs (disk resident); additional msg. read for statistics. Msg. of type Encrypted are routed to PTP directly from the analysis.
TEP/SFM: Transfer to terminals; 3 tables.
IOC: 1792 ch/msg.

FUNCTION: DISTRIBUTION AND RECEPTION OF COMMENTS FROM CCIS & SCARS
REQUIREMENTS: As above.
MDP: Distr. based on SCDs.
TEP/SFM: As above.
IOC: Comments: 512 ch/comm.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.4-2
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS TEP IOC SFM
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION: MSG SERVICE INC. MSG. and GARBLE CORRECTION
REQUIREMENTS: THR: BUSY HOUR, WAR. 10% of inc., incl. VDU pages, CCIS & SCARS Comments. Add 10% to analysis load.
TEP: Presentation of msg. with annotation for edit. Msg. is queued for further traffic handling after edit.
IOC: 1792 ch/msg, I&O; 512 ch/comm., I&O.
SFM: Fetch format and msg for presentation; store after correction.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
Table 2.4-3
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS MDP TEP IOC/SFM
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION: MDCO *)
REQUIREMENTS: THR: BUSY HOUR, WAR.
MDP: Queuing for MDCO; msg. returned from MDCO.
TEP: Presentation of msg. + short annotation.
IOC/SFM: Fetch format and msg; store after MDCO action.

FUNCTION: INC. MSG DISTR. AID
REQUIREMENTS: 30% of inc. msg., incl. comments from CCIS & SCARS, not incl. relayed only msg.
MDP/TEP: MDCO inserts distr. inf. = 64 ch. out.
IOC/SFM: 1792 ch/msg, 64 ch/msg inp; 512 ch/comm. out, 64 ch/comm inp.

FUNCTION: OUTG. MSG. DISTRIBUTION AID (LOCAL DISTRIBUTION)
REQUIREMENTS: 10% of outg. msg. subject to local distribution.
MDP: As above.
TEP: As above.
IOC/SFM: As above.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
Table 2.4-4
*) MDCO action shall be considered as a result of non-deliverable msgs. due to precedence or classification, and furthermore for msgs. which do not contain the proper distribution information on entry from external circuits.
2.5 O̲T̲H̲E̲R̲ ̲T̲R̲A̲F̲F̲I̲C̲ ̲M̲A̲P̲P̲I̲N̲G̲
The following sheets present the requirements mapping related to functions which are accounted for on a bulk basis, in two parts: one part dependent on locally generated traffic and a second part related to traffic of external origin.
The general table access requirements for VDU and printer
transactions, refer introduction of section 2.3, shall
also apply.
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS TEP IOC SFM
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION: SECURITY INT
REQUIREMENTS: THR: BUSY HOUR, WAR; for each case of reception with classification higher than restricted, i.e. 60%.
TEP: Assumed interrogation format in memory.
IOC: 10 ch/int. I&O.
SFM: Disk access for check of password.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
Table 2.5-1
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS TEP SFM IOC
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION: MENU PRESENTATION FOR USER
REQUIREMENTS: THR (BUSY HOUR): 1/10 of cases for select, see below.
TEP: Present menu, accept selection.
SFM: Fetch format.
IOC: 512 ch/menu out.

FUNCTION: FUNCTION SELECT FOR USER
REQUIREMENTS: THR (BUSY HOUR): rel. to interruptions due to reception, assumed every second reception.
IOC: 4 ch/select in.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
Table 2.5-2
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS TEP STP
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION: STATISTIC COLLECTION AND PRINT-OUT
REQUIREMENTS: BUSY MIN/HOUR. Related to all inc. & outg. traffic. See following page for discussion.
TEP: Printout to supervisor every 24 hours, once all 1 hour records.
STP: Collection and preparation of statistics.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
Table 2.5-3
S̲t̲a̲t̲i̲s̲t̲i̲c̲ ̲C̲o̲l̲l̲e̲c̲t̲i̲o̲n̲,̲ ̲G̲e̲n̲e̲r̲a̲t̲i̲o̲n̲ ̲a̲n̲d̲ ̲P̲r̲i̲n̲t̲o̲u̲t̲
1) C̲o̲l̲l̲e̲c̲t̲i̲o̲n̲
Assuming a rate of 3 statistics entries for each of
about 35 messages during busy minute would result in
105 entries/minute.
Assuming a rate of 10 statistics entries for each of
780 msg. during busy hour would result in 7.800 entries/hour
or 130 entries/minute.
With a resource consumption of 2ms per entry, the total
needs are never higher than about 4ms/sec.
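As a cross-check only, the collection rates above can be restated in a small Python sketch (illustrative, not part of the CAMPS software; all input figures are the ones quoted above):

    # Illustrative sketch of the statistics collection arithmetic above.
    busy_min_entries  = 35 * 3            # 105 entries/minute during busy minute
    busy_hour_entries = 780 * 10          # 7800 entries/hour = 130 entries/minute
    per_minute = max(busy_min_entries, busy_hour_entries / 60)
    cpu_ms_per_sec = per_minute / 60 * 2  # 2 ms of CPU time per entry
    print(busy_min_entries, busy_hour_entries / 60, round(cpu_ms_per_sec, 1))
    # -> 105 130.0 4.3   i.e. about 4 ms of CPU time per second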
2) D̲u̲m̲p̲
Every 6 min. the main memory collection area of
8 Kbytes is written to disk; 10 dumps (1 hour statistic)
are executed before 1 hour of statistic is generated
and added to the 24 hour statistic section on disk.
Each dump consumes 2 disk writes. Direct CPU
time per dump is 1 ms.
3) G̲e̲n̲e̲r̲a̲t̲i̲o̲n̲
Equivalent data sectors of 512 bytes are read from
the 10 dumps; the figures are added in memory
and the resulting data sector is written to disk.
Consequently, the resource consumption will be
per hour:
160 disk reads
16 disk writes
Direct CPU time needs are 10 sec.
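The disk access figures of 2) and 3) follow directly from the sector size; a minimal sketch, assuming the 8 Kbyte collection area maps onto 512-byte sectors:

    # Illustrative sketch of the hourly statistics generation figures above.
    sectors_per_dump = (8 * 1024) // 512          # 16 sectors per 8 Kbyte dump
    disk_reads_per_hour = sectors_per_dump * 10   # 10 dumps -> 160 disk reads
    disk_writes_per_hour = sectors_per_dump       # summed result -> 16 disk writes
    print(sectors_per_dump, disk_reads_per_hour, disk_writes_per_hour)  # 16 160 16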
4) P̲r̲i̲n̲t̲o̲u̲t̲
The size of 1 hour statistic is about 17Kbytes,
which read from disk in sectors of 512 bytes will
result in 34 disk reads.
A few more disk accesses shall be added for moving
the statistics to a temporary storage before printout,
assume 4 reads and 4 writes. T̲h̲e̲ ̲t̲o̲t̲a̲l̲ ̲n̲e̲e̲d̲s̲ ̲f̲o̲r̲
̲1̲ ̲h̲o̲u̲r̲ ̲s̲t̲a̲t̲i̲s̲t̲i̲c̲ ̲p̲r̲i̲n̲t̲o̲u̲t̲ ̲a̲r̲e̲ ̲t̲h̲e̲n̲ ̲3̲8̲ ̲d̲i̲s̲k̲ ̲r̲e̲a̲d̲s̲
̲a̲n̲d̲ ̲4̲ ̲d̲i̲s̲k̲ ̲w̲r̲i̲t̲e̲s̲.̲
Assuming a plain text amplification factor before printout of 10, 1 hour of statistics will need 170 Kch of printout.
The associated direct CPU time for conversion is assumed to be 1 sec/hour.
Even though there is no requirement for periodic printout of statistics on an hourly basis, it might be necessary in order to cope with this amount of data. The requirements for printout of 1 hour of statistics are thus:
With a print rate of 80 ch/sec, the printout will
take 2125 sec. (36 min) and the disk access rate
with 34 reads is then 0.016 reads/sec. (= 80/512/10)
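A minimal sketch of the print-out arithmetic above (the 17 Kbyte size, factor 10, 512-byte sectors and 80 ch/sec are the stated assumptions):

    # Illustrative sketch of the 1 hour statistics print-out figures above.
    sector, rate, factor = 512, 80, 10         # bytes/sector, ch/sec, amplification
    stat_bytes = 17 * 1024                     # about 17 Kbytes of 1 hour statistics
    reads = -(-stat_bytes // sector) + 4       # 34 sector reads + 4 temp-storage reads
    writes = 4                                 # temp-storage writes
    print(reads, writes)                       # 38 4
    print(round(170_000 / rate))               # ~2125 sec, i.e. about 36 min of printing
    print(round(rate / (sector * factor), 3))  # ~0.016 disk reads/sec while printing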
5) C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲s̲
No checkpoints.
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS LOG/SAR/SFM TEP
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION: LOG COLLECTION & PRINT OUT
REQUIREMENTS: BUSY MIN/HOUR, WAR. Related to all incoming and outgoing traffic; refer table 2.5-4B and the discussion on the following page.
LOG/SAR/SFM: Collection of records. Write to disk every 5 records. Store to SAR every 10 min.
TEP: Print of Log, simultaneously with creation of records, to Supervisor printer.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.5-4
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
ITEM                  RECORDS PER ITEM   RECORDS PER HOUR
                                         BUSY MIN   BUSY HOUR
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
M̲S̲G̲ ̲P̲R̲E̲P̲ *) 2 70 70
EDIT 2 80 80
COORD. REC. 1 210 210
C̲O̲M̲M̲.̲ ̲P̲R̲E̲P̲ 2 110 110
EDIT 2 110 110
REC. 1 100 100
R̲E̲L̲E̲A̲S̲E̲ 1 110 55
NOT. REC. 1 110 55
T̲X̲ ̲O̲U̲T̲G̲.̲ ̲M̲S̲G̲ 1 500 250
LOC. REC. 1 700 700
I̲N̲C̲.̲ ̲M̲S̲G̲.̲ 1 2040 680
RECEPTION 1 3345 3345
S̲E̲C̲.̲ ̲I̲N̲T̲ 1 1398 1398
R̲E̲T̲R̲I̲E̲V̲A̲L̲ 2 800 800
M̲S̲G̲ ̲S̲E̲R̲V̲.̲ 1 95 95
M̲D̲C̲O̲ 1 127 127
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
9905 8185
*) Incl. OCR & PTR
For 24 Hours: 38.000 records
25% of Records related to CAMPS loc. gen. traffic
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.5-4B  LOG RECORDS ACCOUNTING
L̲o̲g̲ ̲C̲o̲l̲l̲e̲c̲t̲i̲o̲n̲ ̲&̲ ̲P̲r̲i̲n̲t̲-̲o̲u̲t̲
1) L̲o̲g̲ ̲C̲o̲l̲l̲e̲c̲t̲i̲o̲n̲
Direct CPU time per record is 1.1 ms.
2) C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲i̲n̲g̲
Each Log record shall be checkpointed to stand-by
PU via TDX. (40 bytes).
3) P̲r̲i̲n̲t̲-̲o̲u̲t̲
Standard size of a log record is 40 Bytes.
The size of the busy hour log is thus 40 x 8000
bytes = 320 Kbytes.
Assuming a print multiplication factor relative
to log file of 2.5, the amount of printout is 800
Kch. The associated direct CPU time is assumed
to be 1 sec.
With one printer and a print speed of 80 ch/sec,
the time needed for printing 1 busy hour of Log
is 10.000 sec. (167 min). Assuming 512 bytes are
read from disk at a time, the needs for disk access
are found as 80/512/2.5 = 0.0625 disk reads/sec.
Since 24 hours' traffic is about 5 times that of the busy hour, the print-out time will be approx. 13 hours.
On o̲n̲e̲ ̲p̲r̲i̲n̲t̲e̲r̲, the amount of print-out is approx.
3750 Kbytes, which again is about 47,000 lines
of 80 ch.
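A minimal sketch of the log print-out arithmetic above (record size, record counts and print factor as quoted; the per-second TDX checkpoint rate is an implied figure, not stated explicitly):

    # Illustrative sketch of the log collection and print-out figures above.
    rec_bytes, factor, rate = 40, 2.5, 80            # bytes/record, print factor, ch/sec
    busy_hour_bytes = rec_bytes * 8000               # 320 Kbytes of busy hour log
    print(busy_hour_bytes * factor / rate / 60)      # ~167 min to print one busy hour
    print(rate / (512 * factor))                     # 0.0625 disk reads/sec while printing
    print(round(busy_hour_bytes / 3600, 1))          # ~89 bytes/sec checkpointed via TDX
    day_chars = 38_000 * rec_bytes * factor          # 24 hour log (table 2.5-4B records)
    print(round(day_chars / rate / 3600, 1), round(day_chars / 80))  # ~13 hours, ~47500 lines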
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS TEP/SAR/SFM
THR = THROUGHPUT
TIM = TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION: STATUS COLLECTION & PRINT-OUT
REQUIREMENTS: BUSY MIN, HOUR, WAR. Related to incoming and outgoing traffic; refer table 2.5-5B and discussion on the following page.
TEP/SAR/SFM: Collection of records; write to disk every 5 records; store to SAR every 10 min. Print-out to Supervisor every 24 h.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
Table 2.5-5
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
ITEM                      RECORDS PER ITEM   BUSY MIN   BUSY HOUR
__________________________________________________________________
COMM. EDIT                       1               40          40
COMM. REC.                       1              100         100
MSG. PREP                        1               15          15
MSG. RELEASE                     1              110          55
COORD. REC.                      1              210         210
DELIVERY STATUS:
OUTG. MSG. LOC. REC.             1              700         700
INC. MSG. REC.                   1             3345        3345
__________________________________________________________________
                                               4520        4465
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
For 24 hours: 23000 REC.
14% of records related to CAMPS
LOCALLY GENERATED TRAFFIC
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.5-5B  MSG. STATUS COLLECTION
S̲T̲A̲T̲U̲S̲ ̲C̲o̲l̲l̲e̲c̲t̲i̲o̲n̲ ̲&̲ ̲P̲r̲i̲n̲t̲-̲O̲u̲t̲
1) S̲t̲a̲t̲u̲s̲ ̲C̲o̲l̲l̲e̲c̲t̲i̲o̲n̲
Every 5 records, a write to disk is performed.
Direct CPU time is 1 ms per record.
2) C̲h̲e̲c̲k̲p̲o̲i̲n̲t̲i̲n̲g̲
There is no checkpointing.
3) S̲t̲a̲t̲u̲s̲ ̲U̲p̲d̲a̲t̲e̲
Every 2 min., the collected records are read and used for updating the Status File. The Status File is sorted by terminal and, for each terminal, into outgoing message and/or release message status plus delivered message status. An overall catalog contains pointers to the records for terminals, to each of the outgoing and release messages, and to the status area(s) for delivered messages.
For each update, associated with 2 min. of collected records, we assume the consumption of the following resources: 1 disk read and 1 write for the catalog, 1 disk read and 1 write per status record except for msg. delivery, and 1 disk write for each case of msg. delivery. The direct CPU time is assumed to be 3 ms per record.
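As an illustration only, the rules above can be applied to the busy hour record counts of table 2.5-5B; the split per 2-minute update cycle is an assumption made here for the example:

    # Illustrative sketch: per-update resources for one 2-minute batch of records.
    records  = 4465 / 30                 # ~149 records per 2 min (busy hour, table 2.5-5B)
    delivery = (700 + 3345) / 30         # ~135 of them are msg. delivery records
    other    = records - delivery        # ~14 other status records
    disk     = 2 + 2 * other + delivery  # catalog R+W, R+W per record, W per delivery
    cpu_ms   = records * 3               # 3 ms direct CPU per record
    print(round(disk), round(disk / 120, 2))       # ~165 accesses per update, ~1.4/sec
    print(round(cpu_ms), round(cpu_ms / 120, 1))   # ~446 ms per update, ~3.7 ms/sec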
4) P̲r̲i̲n̲t̲-̲o̲u̲t̲
The size of a record is assumed to be 20 bytes.
The size of the busy hour status collection (i.e.
new records) is then about 4500 x 20 = 90 KBytes
and is almost entirely related to msg. delivery.
Assuming a print multiplication factor relative
to the file of 2.5, the amount of print out is
225 KBytes.
With one printer and a print speed of 80 ch/sec,
the time needed for print-out of 1 busy hour of
records is 2813 sec (47 min).
Since 24 hours of status collection is about 5 times that of a busy hour, the total time needed to print out on one printer is 3.9 hours. Print-out is assumed to be executed during quiet hours.
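A minimal sketch of the status print-out arithmetic above (all figures as quoted):

    # Illustrative sketch of the status print-out figures above.
    busy_hour_bytes = 4500 * 20              # ~90 Kbytes of new records per busy hour
    print_chars = busy_hour_bytes * 2.5      # 225 Kbytes of print-out
    rate = 80                                # ch/sec, one printer
    print(print_chars / rate / 60)           # ~46.9 min (the text rounds to 2813 sec / 47 min)
    print(5 * print_chars / rate / 3600)     # ~3.9 hours for 24 hours of records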
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS TEP SFM
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION: Report Generation and print out
REQUIREMENTS: THR (BUSY HOUR). Related to inc. and outg. msg. See discussion on following page.
TEP: Print out on line and/or from outstanding report files; assumed print out within BUSY HOUR.
SFM: Transfer to Disc / Retrieval from disc.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
Table 2.5-6
R̲e̲p̲o̲r̲t̲ ̲G̲e̲n̲e̲r̲a̲t̲i̲o̲n̲
Reports are related mostly to Incoming and Outgoing
message traffic.
Reports are generated when something is not normal
(e.g. channel sequence number out of sequence).
10% of incoming messages will go to message service:
We will assume 10% "bad cases", and one report for
each case, for all incoming and outgoing messages.
We find for b̲u̲s̲y̲ ̲h̲o̲u̲r̲:
10% of 780 msg. giving 78 reports.
This corresponds to the amount of reports for approximately 1/2 of all report types: since there are approximately 30 different report types generated, 15 report types are not accounted for. We assume they are generated once per busy hour; hence we find a total of about 1̲0̲0̲ ̲r̲e̲p̲o̲r̲t̲s̲/̲b̲u̲s̲y̲ ̲h̲o̲u̲r̲.
We will assume that each report shall result in both
Temporary Storage and Print-Out; hence for disc write,
100 disk accesses/busy hour
and, assuming 60 bytes/record,
6̲ ̲k̲b̲y̲t̲e̲s̲ ̲o̲f̲ ̲s̲t̲o̲r̲a̲g̲e̲/̲b̲u̲s̲y̲ ̲h̲o̲u̲r̲
Each report is assumed to consume 2 ms of direct CPU
time.
With a speed of 80 ch/sec (1 printer) and a print amplification
factor of 2.5, the print-out time will be 188 sec.
There is one disk read associated to print-out of a
report.
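A minimal sketch of the report figures above (the rounding of 78 + 15 up to 100 reports follows the text):

    # Illustrative sketch of the report generation figures above.
    reports = 0.10 * 780 + 15      # 78 "bad case" reports + 15 remaining report types
    print(reports)                 # 93.0, rounded up to about 100 reports/busy hour
    print(100 * 60 / 1000)         # ~6 Kbytes of temporary storage (60 bytes/report)
    print(100 * 60 * 2.5 / 80)     # 187.5 sec of print-out at 80 ch/sec, one printer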
The CAMPS locally generated traffic is responsible
for approx. 10% of the reports (55 "msgs" vs 725 other
"msgs").
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS TEP SAR SFM
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION: RETRIEVAL: Request Retr.
REQUIREMENTS: BUSY HOUR. TIM: 5 sec (90%).
TEP: Validation of CMD (confirmation) plus indication to user of on- or off-line retrieval.
SAR: Check whether on- or off-line retrieval required.
SFM: Request via CMD line: no format required.

FUNCTION: Request single item (DTG or TOC) + reception
REQUIREMENTS: THR: See below, only one at a time. TIM: 10 sec (90%) (no other retr.), otherwise 30 sec. (90%).
TEP: Request: 2x28 ch. out plus 64 ch. inp.
SAR: Search catalog.
SFM: As above; trsf. of catalog, msg. format and msg.

FUNCTION: Request single item TOC retrieval + reception
REQUIREMENTS: THR: 3 per 30 sec. TIM: On line: 30 sec (90%).
TEP: Reception: Ass. msg. retrieval = 1792 ch. out without security int.
SAR: As above.
SFM: As above.

FUNCTION: Request TOC window = catalog extract + reception; this transaction is simultaneous with above single item retr.
REQUIREMENTS: THR: 1 per 90 sec. TIM: On line: 30 sec. + 1 min. for 1 BUSY HOUR traffic search without sec. int. Off line: TIM: as on line + 4 min., excluding time for mounting disk and time for selection from user queue. THR: as above.
TEP: Request: as above; catalog display: 20x80.
SAR: Gen. catalog extract; Ass. extract of 20 msg.
SFM: As above; see table 2.5-7B and discussion on following page.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
Table 2.5-7A
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
ITEM                      RECORDS PER ITEM   BUSY HOUR
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
MSG PREP *) 1 35
COMMENT EDIT 1 55
RELEASE 1 55
NOTIFICATION 1 55
INC. MSG. 1 680
INC. MSG. DISTR. 1 412
OUTG. MSG. 1 250
OUTG. MSG. DISTR. 1 ̲ ̲ ̲ ̲7̲0̲
̲ ̲1̲6̲1̲2̲
*) Incl. PTR + OCR.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
Table 2.5-7B  STORE REQUESTS
S̲t̲o̲r̲a̲g̲e̲ ̲C̲a̲t̲a̲l̲o̲g̲ ̲G̲e̲n̲e̲r̲a̲t̲i̲o̲n̲ ̲a̲n̲d̲ ̲R̲e̲t̲r̲i̲e̲v̲a̲l̲
1) G̲e̲n̲e̲r̲a̲t̲i̲o̲n̲ ̲o̲f̲ ̲C̲a̲t̲a̲l̲o̲g̲
Storage requests are queued to SAR and checkpointed
to Standby PU; for each request SAR makes a storage
request to MMS (1 DISK MODIFY for entry to on-line
directory leading to later unloading to intermediate
storage).
For every request SAR retrieves item information from disk for the SAR catalog (1 DISK READ) and collects the information in PU-memory for transfer every 10 sec. (according to table 2.5-7B about 5 requests have been forwarded by then) to the catalog: 2.1 DISK MODIFY.
Total direct CPU time is 5 ms per request.
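As an illustration only, the per-request figures can be scaled to the 1612 storage requests per busy hour of table 2.5-7B; the conversion to per-second rates is an assumption made here:

    # Illustrative sketch of the catalog generation load per busy hour.
    requests = 1612                         # storage requests/busy hour (table 2.5-7B)
    batches  = 3600 // 10                   # one catalog transfer every 10 sec
    disk     = requests * (1 + 1) + batches * 2.1   # modify+read per request, 2.1 modify/batch
    cpu_ms   = requests * 5                 # 5 ms direct CPU per request
    print(round(disk), round(disk / 3600, 2))   # ~3980 accesses/hour, ~1.1/sec
    print(round(cpu_ms / 3600, 1))              # ~2.2 ms CPU per second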
2) R̲e̲t̲r̲i̲e̲v̲a̲l̲
2a) I̲n̲t̲r̲o̲d̲u̲c̲t̲i̲o̲n̲
Retrieval is composed of the following steps:
s1) presentation of retrieval format on request
followed by selection of retrieval type.
s2) presentation of retrieval format for type in
question followed by insertion of identification.
s3) validation of request followed by confirmation:
"Retrieval in progress".
s4) SAR investigation whether on- or off-line retrieval
and associated response to VDU.
s5) on- or off-line retrieval execution and queueing
for presentation.
s6) Information is presented for user automatically
(on-line retr.) or user selects display (off-line).
s1 and s2 may be replaced by a direct command request;
this is assumed for busy hour.
Retrieval Request is assumed to include s1 - s4.
R̲e̲c̲e̲i̲v̲e̲ ̲S̲i̲n̲g̲l̲e̲ ̲I̲t̲e̲m̲ is assumed to include s5 -
s6.
R̲e̲q̲u̲e̲s̲t̲ ̲f̲o̲r̲ ̲R̲e̲t̲r̲i̲e̲v̲a̲l̲ ̲C̲a̲t̲a̲l̲o̲g̲ ̲(̲T̲O̲C̲-̲w̲i̲n̲d̲o̲w̲)̲
corresponds to s1 - s4.
R̲e̲c̲e̲i̲v̲e̲ ̲T̲O̲C̲-̲w̲i̲n̲d̲o̲w̲ ̲C̲a̲t̲a̲l̲o̲g̲
corresponds to s5 - s6.
2b) T̲O̲C̲ ̲s̲p̲e̲c̲i̲f̲i̲e̲d̲: The TOC-resolution of the catalog
is 1 minute corresponding to approx. 30 records
of approx. 50 Bytes or in total 1.5 KBytes, which
may be accessed in 1 disk read. On top of that,
there is 1 directory read for pointer. The direct
CPU time (SAR mostly):
- 20 ms for request validation plus on- or off-line
response from SAR.
- 20 ms for catalog search and request via MMS
for presentation (on-line) or queuing for user
(off-line).
In case of o̲f̲f̲-̲l̲i̲n̲e̲ ̲r̲e̲t̲r̲i̲e̲v̲a̲l̲: Add resources for
opening off-line volume: The needs are 5 disk
reads and 100 ms direct cpu-time.
2c D̲T̲G̲ ̲s̲p̲e̲c̲i̲f̲i̲e̲d̲:̲ The DTG resolution is 1 min. corresponding
to 30 records of approx. 50 bytes which may be
accessed in max. 32 reads (incl. directory read).
To this figure shall be added the DTG directory
search: 24 hours catalog has about 4000 entries
of 6 bytes (DTG, PLA) or approx. 24 KBytes, which
may be read in 24 accesses max. for finding the
DTG(S).
The direct CPU time (SAR mostly).
- 20 ms for request validation plus on- or off-line
response from SAR.
- 150 ms for catalog search and request via MMS
for presentation: (on-line) or queuing for
user (off-line).
In case of o̲f̲f̲-̲l̲i̲n̲e̲ ̲r̲e̲t̲r̲i̲e̲v̲a̲l̲: additional to the
TOC case we may have to search the complete DTG-file
in the on-line catalog before attending the off-line
catalogs, where the same procedure may repeat.
It is assumed for these calculations, however,
that the first off-line attempt is a success.
A̲d̲d̲i̲t̲i̲o̲n̲a̲l̲ needs for off-line retrieval are then
found by adding the needs for DTG directory search
plus the needs for off-line volume opening:
DISC reads: 12 + 5
Direct CPU time:
- 150 ms for DTG catalog search and second off-line
response from SAR:
- 100 ms for opening volume.
2d T̲O̲C̲ ̲-̲ ̲W̲I̲N̲D̲O̲W̲ ̲S̲P̲E̲C̲I̲F̲I̲E̲D̲
We assume that the resources for catalog search
corresponding to 1 BUSY HOUR are equal to 60 times
the needs for one TOC specified search.
- 60 plus 1 disc read
- 20 x 60 ms direct CPU time for catalog search
and request for presentation.
To these figures shall be added the temporary storage before presentation: It is assumed that 20 objects are retrieved, resulting in a displayed catalog of 1.5 KByte: 1 disc write for establishing temporary storage (3 reads for presentation to VDU). The resources for request validation are as for single item retrieval.
For OFF-LINE retrieval refer to TOC specified retrieval
for additional resources.
2e T̲r̲a̲f̲f̲i̲c̲ ̲L̲o̲a̲d̲
It is assumed that off line retrievals are rather
infrequent (as indicated by the allowed retrieval
time). Also DTG retrievals are assumed to be infrequent
or to replace the TOC-window retrievals. The traffic
load shall be based on 3 single item TOC-retrievals
and one TOC window retrieval as indicated in table
2.5-7A, all on-line.
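As an illustration only, the on-line resource figures of 2b) and 2d) can be summed per retrieval; the grouping below is an assumption for illustration and leaves the hourly mix to table 2.5-7A:

    # Illustrative sketch: resources per on-line retrieval, from 2b) and 2d) above.
    single_reads  = 1 + 1              # catalog sector + directory pointer (TOC specified)
    single_cpu_ms = 20 + 20            # validation + catalog search/request
    window_reads  = 60 + 1 + 3         # catalog + directory + presentation reads
    window_writes = 1                  # temporary storage for the 1.5 Kbyte extract
    window_cpu_ms = 20 + 20 * 60       # validation + 60 x catalog search
    print(single_reads, single_cpu_ms)                  # 2 reads, 40 ms per single item
    print(window_reads, window_writes, window_cpu_ms)   # 64 reads, 1 write, 1220 ms per window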
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS TEP SAR/SFM
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION: INTERACTIVE ENTRIES (see below)
REQUIREMENTS: During BUSY HOUR: Timing from entry of last char. from VDU to reception of first char. in response.

FUNCTION: CMD LINE ENTRY: Invalid entry resp.
REQUIREMENTS: 1 sec (90%).
TEP: Validate entry.

FUNCTION: CMD LINE ENTRY: REQUEST (retrieval, status, etc)
REQUIREMENTS: 5 sec (90%).
TEP: Validate and return answer whether successful. Refer table 2.5-7A for retrieval request.

FUNCTION: SUPERVISOR CMD execute, entered via CMD line or via format
REQUIREMENTS: 5 sec. (99%), 10 sec. (100%).
TEP: Validation.
SAR/SFM: Worst case: as CMD LINE ENTRY plus 5 table read accesses.

FUNCTION: INFORMATION VALIDATION
REQUIREMENTS: 10 sec (90%).
TEP: Covered by ref. to table 2.3-1 for msg. preparation.

FUNCTION: Succeeding Actions: after event or interactive transaction
REQUIREMENTS: start within 5 sec (90%).
TEP: Not worse than supervisor entered cmd execution; refer above.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.5-8
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
PERFORMANCE MONITORING: Consists of a Msg. Flow Trace as a separate CAMPS System Function, and additional Statistical Data forwarded by CAMPS MONITOR procedures.
No requirements for print-out of data (assumedly during quiet hours).
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
TABLE 2.5-9
M̲A̲P̲P̲I̲N̲G̲ ̲O̲F̲ ̲R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FUNCTION REQUIREMENTS
THR=THROUGHPUT
TIM=TIMING
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
EXPANSION CAPABILITY
THR: Allowing for addition of H/W, such as storage modules, it shall be within the system capability to allow for an increase of message traffic of 30%.
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
Table 2.5-10
3̲ ̲ ̲D̲A̲T̲A̲ ̲C̲O̲L̲L̲E̲C̲T̲I̲O̲N̲ ̲F̲O̲R̲ ̲M̲O̲D̲E̲L̲ ̲C̲A̲L̲C̲U̲L̲A̲T̲I̲O̲N̲
3.1 I̲N̲T̲R̲O̲D̲U̲C̲T̲I̲O̲N̲
It is the basic assumption of section 1.2.1 that the data collection based on the functions listed in sections 2.3, 2.4 and 2.5, with application of the system resources needed per usage and with proper accounting for the usage as reflected in the traffic flow of section 2.2, shall lead to the average system load in the busy hour when the resource needs are added linearly.
The resulting loading requirements are listed and discussed
in section 3.6.1.
Section 3.6.2 is concerned with the terminal loading
in terms of character transfer resulting from the required
traffic as compared to the capacity.
It is a basic assumption that the capacity is sufficient, and section 3.6.2 is then mostly of an informative nature. It also serves, however, the purpose of proving the busy hour and busy minute character transfer rates to be within the required busy second rates and thus not adding unjustified requirements. The TDX and I/O channel character load figures are presented as well.
Busy second requirements for character transfer are dealt with in sections 3.6.3 and 3.6.4.
3.2 B̲A̲S̲I̲C̲ ̲C̲P̲U̲-̲T̲I̲M̲E̲ ̲A̲N̲D̲ ̲I̲/̲O̲ ̲A̲C̲C̲E̲S̲S̲ ̲C̲O̲N̲S̲U̲M̲P̲T̲I̲O̲N̲ ̲F̲I̲G̲U̲R̲E̲S̲
Tables 3.2-1 and 3.2-2 list the CPU time and I/O access
consumption figures for the CAMPS (System) Functions
which are called most frequently or which are contributing
with a considerable load. Other CPU time needs are
included in the "DIRECT" CPU time.
These are the functions referred to in the Data Collection
sheets of section 3.3 - 3.5.
CPU time needs for access to the I/O Channel, the TDX-System and the DISC System are listed separately in table 3.2-2. This CPU-time is included in the total CPU time of the Data Collection Sheets, but not in the accounting for the functions at the bottom lines of the data collection sheets (XFER; refer discussion below).
The CPU time needs for some functions, i.e. processes, coroutines and monitor procedures, are additionally listed separately at the bottom lines of the Data Collection Sheets; these are the ones resulting in a considerable CPU load and/or corresponding to non-concurrent processing in the PU (refer to discussion in section 1.2.1.3): QMON, MMON, MMS, FMS, TMP, CPT, XFER (in part). Others are covered via the resource accounting corresponding to table 3.2-2: TMS, LTU-Handlers, TDX-Handler, XFER (in part), DCM and the DISC-Handler. The total needs for each of these cases plus the total direct CPU-time needs for application processes are listed in section 3.6.1.
All demands for process communication, DMA transfers,
and I/T handling are included in the CPU figures of
the appropriate functions. The CPU-time is based on
a CACHE-CPU where the HIT-Probability is zero. The
effect of process change is covered in section 4.2.
The effect of the DISC-CACHE is similarly included with zero HIT-probability (refer section 4.5).
The UNLOADING of a CIF (message) is included once in
connection with the creation. The effect of repeated
storage request until the unloading is actually performed
is included in the accounting for storage key collection
(section 3.5).
The TDX transfer delay, I/O Channel transfer delay,
and the disc access time (including Channel transfer
and DISC Controller processing) are introduced during
discussion of the single resources and their mechanisms
in section 4.
CAMPS                                            CPU        TDX       DISC
FUNCTION                                         ms.        R  W      R  W  M
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
QUEUE READ 1(QMON)
QUEUE WRITE 2(QMON)
CPT=CHECKPOINT(SAVE)
CPT(TDX) 12(MMON)*+13(MMS)*+6(xFER)+7(CPT) 1 1
CPT(TDX+DISC) 12(MMON)*+13(MMS)*+6(xFER)+7(CPT) 1 1 1
*) not appl. to LOG & SAR
M̲S̲G̲.̲ ̲F̲C̲T̲S̲
CREATE(CIF) 8(MMON)+ 8(MMS)
CREATE+UNLOAD
(̲C̲I̲F̲)̲:̲ ̲ ̲ ̲ ̲ ̲ ̲
CREATE 8(MMON)+ 8(MMS)
MODIFY OCD 8(MMON)+8(MMS)-6(XFER) 1 1
READ CIF 3.5(MMS) 1
WRITE CIF 3.5(MMS) 1
WRITE RECOVERY
INF. 4(MMS) 1
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
8(MMON)+27(MMS)-6(XFER) 2 3
OBTAIN REF
= MAKE CIF
A̲C̲T̲I̲V̲E̲:̲ ̲ ̲ ̲
MODIFY OCD 8(MMON)+8(MMS)-6(XFER) 1 1
READ CPT INF. 5(MMS) 1
CPT(TDX) 5(MMS)+6(XFER)+7(CPT) 1 1
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
8(MMON)+18(MMS)+7(CPT) 1 1 2 1
READ(VIEW) 8(MMON)+13(MMS) 1
WRITE(VIEW) 8(MMON)+13(MMS) 1
MODIFY 8(MMON)+13(MMS) 1
MAKE CIF
I̲N̲A̲C̲T̲I̲V̲E̲(̲M̲I̲)̲:
MODIFY OCD 8(MMON)+8(MMS)-6(XFER) 1 1
CPT(TDX) 5 (MMS)+6(xFER)+7(CPT) 1 1
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
8(MMON)+13(MMS)+7(CPT) 1 1 1 1
TABLE 3.2-1  CAMPS RESOURCES
CAMPS                                            CPU        TDX       DISC
FUNCTION                                         ms.        R  W      R  W  M
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
FILE FCTS =
F̲O̲R̲M̲A̲T̲ ̲H̲A̲N̲D̲L̲I̲N̲G̲
FORMAT OUTPUT 6.5(FH)+13(FMS)+Nx6.5(Ch.Trsfer) N 1
RELATED INPUT
Mx6.5(Ch.Trsfer) M
T̲A̲B̲L̲E̲ ̲M̲A̲N̲A̲G̲E̲M̲E̲N̲T̲
READ MEMORY TABLE 4(TMP.MON)*
+ 13(TMP)*
READ DISC TABLE
(PER KEY) 4(TMP.MON)*
+ 26(TMP) 1
+7(FMST)
Table management accounting
for disc access is based on
worst case assumption of 1 access
per key.
*) concurrent processing
allowed.
TABLE 3.2-1 CONT.  CAMPS RESOURCES
CAMPS FUNCTION CPU(ms)
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲ ̲
I/O CHANNEL READ    6(XFER)+10(TMS)+4(LTU-HANDLER)
I/O CHANNEL WRITE   as for READ
TDX READ            6(XFER)+10(TMS)+4(TDX-HANDLER)
TDX WRITE           as for READ
DISC READ           6(XFER)+3(DCM)+6(DISC HANDLER)
DISC WRITE          6(XFER)+6(DCM)+12(DISC HANDLER)
DISC MODIFY         READ + WRITE - 6(XFER)
The CPU time consumption is included in the total CPU
time figures of the Data Collection sheets of section
3.3 - 3.5.
TABLE 3.2-2  CAMPS RESOURCES
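As an illustration only, the composite CPU figures of tables 3.2-1 and 3.2-2 are simple sums of the component times; a minimal sketch for the disc functions and one table 3.2-1 entry:

    # Illustrative sketch: composing CPU times (ms) from tables 3.2-1 and 3.2-2.
    disc_read   = 6 + 3 + 6                    # XFER + DCM + DISC HANDLER
    disc_write  = 6 + 6 + 12                   # XFER + DCM + DISC HANDLER
    disc_modify = disc_read + disc_write - 6   # READ + WRITE - 6(XFER)
    cpt_tdx     = 12 + 13 + 6 + 7              # 12(MMON) + 13(MMS) + 6(XFER) + 7(CPT)
    print(disc_read, disc_write, disc_modify)  # 15 24 33
    print(cpt_tdx)                             # 38 ms, plus 1 TDX read and 1 TDX write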
3.3 O̲U̲T̲G̲O̲I̲N̲G̲ ̲T̲R̲A̲F̲F̲I̲C̲ ̲R̲E̲L̲A̲T̲E̲D̲ ̲D̲A̲T̲A̲
The FUNCTION KEY Reception and ARM Cmd. Transmission accounting for VDUs are all described in table 3.3-39.
Figures for busy min. are per hour.
Checkpoints (CP's) result in 512 ch. writes and 16 ch. reads (acknowledge) on the TDX bus.