i
T_A_B_L_E_ _O_F_ _C_O_N_T_E_N_T_S_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _P_A_G_E_
1. INTRODUCTION ............................................ 1
2. RCNET ARCHITECTURE ...................................... 2
2.1 Topology of Packet Switch .......................... 3
2.2 Network Node Software .............................. 4
2.3 Host and Host Process .............................. 5
2.4 Structure of Router ................................ 7
2.5 Neighbour Node Protocol ............................ 10
2.6 Optional Features .................................. 10
3. ROUTING METHODS ......................................... 12
3.1 Identification of Nodes and Communication Lines ... 12
3.2 Choice of Routing Method ........................... 13
3.3 Adaptive Routing ................................... 14
4. HOST AND HOST ADDRESSING ................................ 22
4.1 Addressing of Hosts ................................ 23
4.2 External Host Support .............................. 24
4.3 Resolution of Host Conflicts ....................... 26
4.4 Connection and Removal of Local Hosts .............. 27
5. NETWORK TIME ............................................ 28
6. END-TO-END CONTROL ...................................... 30
7. NETWORK SUPERVISION ..................................... 39
7.1 Supporting Functions in Nodes ...................... 40
7.2 Addressing the NOC from the Nodes .................. 44
8. STATISTIC ACCUMULATION .................................. 45
8.1 Organisation of Statistics Information ............. 45
8.2 Statistical Fields ................................. 47
8.3 Statistics Collection .............................. 47
8.4 Survey of Statistical Record Types ................. 48
\f
ii
C_O_N_T_E_N_T_S_ _(_c_o_n_t_i_n_u_e_d_)_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _P_A_G_E_
9. REPORTS COLLECTION ...................................... 50
9.1 Survey of Report Types ............................. 50
9.2 Report Handling .................................... 52
10. DUMP FUNCTIONS ......................................... 53
11. REMOTE LOAD FUNCTIONS .................................. 55
11.1 Introduction to Remote Load ...................... 55
11.2 The Remote Software System ....................... 56
11.3 The Central Software System ...................... 57
11.4 Applications ..................................... 58
12. TEST FACILITIES ........................................ 60
12.1 Operator Communication ........................... 60
12.2 Test Traffic ..................................... 61
12.3 Destination for Test Traffic ..................... 62
12.4 Test Output from Router .......................... 63
A_p_p_e_n_d_i_x_e_s_:
A. REFERENCES ............................................. 64
B. FORMAT OF RCNET PACKET ................................. 67 \f
1_._ _ _ _ _ _ _ _ _I_N_T_R_O_D_U_C_T_I_O_N_ 1.
This manual gives a technical description of the facilities in
RCNET, Level I, and is addressed to those who want to gain a more
detailed knowledge of the network functions.
Having read this manual, the reader should be able to select the
facilities of RCNET which suit a specific application.
The reader should be familiar with reference 1.
The difference between that manual and the present one is that
1 defines the general concepts of RCNET, while this describes
the actual implementation of a subset of those concepts and the
strategies used in that particular implementation.
The concepts of R_e_g_i_o_n_ and C_o_n_n_e_c_t_i_o_n_ _o_f_ _N_e_t_w_o_r_k_ (cf. 1, sec-
tion 1.2.3-4) are not supported by RCNET, Level I, i.e. only a
single network with a single region is supported.
The M_e_s_s_a_g_e_ _T_r_a_n_s_p_o_r_t_e_r_ (cf. 1, section 5) is described in se-
parate manuals.
\f
2_._ _ _ _ _ _ _ _ _R_C_N_E_T_ _A_R_C_H_I_T_E_C_T_U_R_E_ 2.
This section gives an overview of the architecture of RCNET, Le-
vel I, covering topological aspects as well as software architec-
ture.
As described in 1, section 1, a network consists of a Packet
Switch and attached hosts. The Packet Switch keeps track of the
hosts currently connected and their locations in the network. Its
main function is to receive packets from hosts and deliver them
to the addressed destination hosts.
The nucleus of the Packet Switch thus offers a basic datagram
service.
Fig. 2.1 RCNET Basic Structure. \f
2_._1_ _ _ _ _ _ _ _T_o_p_o_l_o_g_y_ _o_f_ _P_a_c_k_e_t_ _S_w_i_t_c_h_ 2.1
The Packet Switch is made up of nodes, which are connected by
communication lines.
Fig. 2.2 Nodes, communication lines and hosts.
N: Node
L: Communication Line
H: Host
A node is an RC3600 minicomputer, running the ROUTER process, a
number of line driver processes and a number of host interfaces,
cf. section 2.2.
The nodes are connected by means of point to point communication
lines, which may be of different types, and with a number of dif-
ferent line control protocols.
On public telephone lines, HDLC technique will normally be used,
and the line control protocol is then X25 LAP B (cf. 2).
\f
If, for some reason, two nodes are placed physically close to
each other, they may be connected by Front End Processor Adaptors
(FPA) (cf. 3) allowing transmission speeds of up to 500 KBytes
per second. The line control protocol will then be a special FPA
protocol.
A host connects to the packet switch by connecting to a node, de-
noted the c_u_r_r_e_n_t_ _n_o_d_e_ of the host.
A host may be connected to at most one node at a time, but it may
dynamically change from one node to another, if the need arises.
The nodes inform each other about the current node for each host
connected at the moment.
When a packet is received from a host, it is forwarded through
the communication lines from node to node, until it arrives at
the current node for the destination host.
The algorithms used in the nodes for this purpose are denoted the
r_o_u_t_i_n_g_ _a_l_g_o_r_i_t_h_m_s_.
2_._2_ _ _ _ _ _ _ _N_e_t_w_o_r_k_ _N_o_d_e_ _S_o_f_t_w_a_r_e_ 2.2
The programs executed in a node may be classified into three
groups:
LINE DRIVERS
A line driver process controls one or more communica-
tion lines to other nodes according to the line control
protocol used for that (those) line(s). It is responsi-
ble for error control and retransmission of erroneous
packets.
\f
ROUTER
Is a single process, which performs the routing - and
host addressing functions of the node. It serves as the
entry to the packet switch at that node. A more tho-
rough description is given in section 2.4.
HOST PROCESSES
A host process acts as interface between the packet
switch and one or several hosts. The relations between
host, host process and ROUTER are described in section
2.3.
All the above mentioned processes are executed under the standard
RC3600 MUS System, so all RC3600 standard programs may be execu-
ted in the node in parallel with the RCNET software. Especially
the standard test and debugging tools may execute in a node.
2_._3_ _ _ _ _ _ _ _H_o_s_t_ _a_n_d_ _H_o_s_t_ _P_r_o_c_e_s_s_ 2.3
From the point of view of the Packet Switch, represented in each
node by ROUTER, it is unimportant whether or not a host repre-
sents a physical entity. All that counts is that a host functions
as an addressee for packets.
A host connected to a node is denoted a l_o_c_a_l_ _h_o_s_t_ for that
node.
A local host must from the point of view of ROUTER be represented
by a single process in the node. The process is denoted h_o_s_t_p_r_o_-
c_e_s_s_ for the host. The hostprocess exchanges packets with ROUTER
on behalf of the host.
A hostprocess may at a given moment represent several hosts, but
a host can only be represented at a given moment by one hostpro-
cess. As mentioned earlier, it may never be connected to more
than one node at the same time. \f
The relations between ROUTER, host process and hosts may be sum-
marized in fig. 2.3.
Fig. 2.3 Relations between ROUTER, host processes and hosts.
Very often a host is another computer, e.g. RC8000, RC3600,
IBM370 and so the host process communicates with the host through
some sort of communication channel. In such cases, the host
process acts as a network interface for the host, possibly
assisted by one or more driver processes for the communication
channel. \f
In other cases, the functions of the host are executed within the
hostprocess itself, which in that case is "the real host".
The ROUTER, however, does not distinguish between those two ca-
ses. All that counts is that a host process represents one or
more hosts. The interface between hostprocess and ROUTER is de-
fined in 4.
2_._4_ _ _ _ _ _ _ _S_t_r_u_c_t_u_r_e_ _o_f_ _R_O_U_T_E_R_ 2.4
The ROUTER process is divided into four main modules:
HOST PROCESS INTERFACE
Controls the communication with hostprocesses about the
exchange of packets, connection and removal of hosts
and other control functions.
SUPERVISOR
Maintains routing tables, host address information and
network time by communication with neighbour nodes (cf.
section 2.5). Performs gathering of reports about inte-
resting events such as changing line status, routing
tables etc.
HDLC INTERFACE
Controls communication with line drivers (most often
the HDLC driver). Transmits packets on request from ot-
her modules. Receives packets from lines and directs
them to the appropriate handler. Reroutes packets wait-
ing for transmission, if a line goes down.
PACKET TRANSPORTER
A facility for hosts, which may request end to end con-
trol performed for packets sent through the net. This
means that the network guarantees that packets are de-
livered to the destination host without loss or dupli-
cation (cf. 1 section 4).
This facility thus offers a virtual circuit service on
top of the datagram network. \f
Detailed information about ROUTER modules and their function may
be found in references 5-9.
The interrelations between ROUTER modules and their interaction
with the surroundings are shown in fig. 2.4.
The numbers on the figure have the following meaning:
1. Packets from local hosts are delivered to the Packet Transpor-
ter, which checks them for end to end control request.
2. Packets whose receiver host is not local are delivered to the
routing procedure.
3. Routing procedure requests transmission on a specific line.
4. Packets received on lines are delivered to the routing proce-
dure which may decide between 5 or 3 .
5. Packets to a local host are delivered to Packet Transporter
for end to end control check.
6. Packets to a local host are delivered to Host Process
Interface (may have come from 1 ).
7. Host Process Interface indicates if a packet has been delive-
red to the host, or if delivery was not possible.
8. Special Neighbour Protocol Packets (cf. section 2.5) are ex-
changed with Supervisor.
9. Reports from all modules are handled by the Supervisor.
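The dispatch logic of steps 1-9 above can be summarized in a short sketch. This is an illustrative Python model only, not the RC3600 implementation; all names (route_packet, path_vector etc.) are invented for the example, and the Packet Transporter's end-to-end checks (steps 1 and 5) are omitted.

```python
def route_packet(packet, local_hosts, path_vector):
    """Decide what ROUTER does with one packet (illustrative model).

    Returns a (action, argument) pair:
      ("to_host", host_id)  - deliver via Host Process Interface (steps 5-6)
      ("to_line", line_no)  - request transmission on a line (steps 3-4)
      ("unreachable", None) - destination node cannot be reached
    """
    if packet["receiver"] in local_hosts:
        # Steps 5-6: the packet is addressed to a local host.
        return ("to_host", packet["receiver"])
    # Steps 2-4: the routing procedure picks a line for the
    # destination node (path_vector models the pathvector of sec. 3).
    line = path_vector.get(packet["dest_node"])
    if line is None:
        return ("unreachable", None)
    return ("to_line", line)
```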
\f
Fig. 2.4 Main Modules and Data Flow in ROUTER.
\f
2_._5_ _ _ _ _ _ _ _N_e_i_g_h_b_o_u_r_ _N_o_d_e_ _P_r_o_t_o_c_o_l_ 2.5
The maintenance of routing tables, host address tables and net-
work time is performed solely by means of information exchange
between neighbour nodes.
In the literature such methods are often referred to as "distri-
buted methods" as opposed to "centralized methods", where routing
tables etc. are updated under control of some sort of Network
Center.
The conventions for this information exchange are contained in
the N_e_i_g_h_b_o_u_r_ _N_o_d_e_ _P_r_o_t_o_c_o_l_, which is described in 10.
The algorithms used are described in sections 3-5.
The supervisor module of ROUTER contains a b_r_o_a_d_c_a_s_t_ module,
which may broadcast control information to all neighbouring nodes
(that is on all active communication lines) on request from other
modules. Moreover it contains a n_e_i_g_h_b_o_u_r_ _p_r_o_t_o_c_o_l_ _h_a_n_d_l_e_r_, ta-
king action on all packets received in the neighbour protocol.
Neighbour protocol packets are identified by having the value
2'1000 in the format field of the packet (cf. appendix B).
2_._6_ _ _ _ _ _ _ _O_p_t_i_o_n_a_l_ _F_e_a_t_u_r_e_s_ 2.6
RCNET, Level 1 is expected to be used for a wide variety of app-
lications and network sizes. Different levels of sophistication
may be required for different applications. This fact has been
kept in mind in the design of the network software.
The ROUTER is built up around a relatively small nucleus, where
the Host Process Interface and HDLC Interface constitute the main
part, while the Supervisor and Packet Transporter are nearly dum-
my. \f
The nucleus implements a simple network with a fixed routing
strategy (cf. section 3.2).
Based upon this, higher levels of sophistication may be selected
by means of optional features. Examples are adaptive routing (cf.
section 3.3), support for end to end control (cf. section 6),
support for network time (cf. section 5) etc.
The optional features to be used in a specific network must be
selected at compile time for the ROUTER program together with di-
mensioning parameters for the network.
\f
3_._ _ _ _ _ _ _ _ _R_O_U_T_I_N_G_ _M_E_T_H_O_D_S_ 3.
When a host enters a packet into the Packet Switch (e.g. a host
process delivers a packet to ROUTER) at some node that node will
be denoted the s_o_u_r_c_e_ _n_o_d_e_ of the packet. At that moment the add-
ress information in the packet consists of only the s_e_n_d_e_r_ _h_o_s_t_
i_d_e_n_t_i_f_i_c_a_t_i_o_n_ and the r_e_c_e_i_v_e_r_ _h_o_s_t_ _i_d_e_n_t_i_f_i_c_a_t_i_o_n_.
The first task for ROUTER is to find the current node of the re-
ceiver host. This is done by means of the h_o_s_t_ _t_a_b_l_e_s_ in ROUTER,
cf. section 4. The node in question will be denoted the d_e_s_t_i_n_a_-
t_i_o_n_ _n_o_d_e_ of the packet.
The packet should now be forwarded from node to node until it
reaches the destination node. Figuratively speaking, the packet
has found a p_a_t_h_ through the network from source node to destina-
tion node, the path being made up of the communication lines
passed by the packet.
In the source node, and in each i_n_t_e_r_m_e_d_i_a_t_e_ node, the determina-
tion of the next element of the path is done by the routing pro-
cedure, cf. fig. 2.4, by means of the routing tables in the
node.
This section describes the contents and maintenance of the rou-
ting tables.
3_._1_ _ _ _ _ _ _ _I_d_e_n_t_i_f_i_c_a_t_i_o_n_ _o_f_ _N_o_d_e_s_ _a_n_d_ _C_o_m_m_u_n_i_c_a_t_i_o_n_ _L_i_n_e_s_ 3.1
For the purpose of routing, each node in the network has a unique
identification called the n_o_d_e_ _n_u_m_b_e_r_. This is an integer between
1 and 255.
At configuration time it must be decided, how many nodes the net-
work should be prepared for in the future. This parameter, deno-
ted n_o_ _o_f_ _n_o_d_e_s_ is used in dimensioning the routing tables, and\f
all nodes in the network must have their node number within the
interval
1 <= node number <= no of nodes
Within each node it must be decided how many communication lines
the node should be able to support in the foreseeable future.
This parameter, denoted n_o_ _o_f_ _l_i_n_e_s_ is used in dimensioning the
routing tables within that node.
For the purpose of line identification, each line internally in
the node is identified by an integer, the l_i_n_e_ _n_u_m_b_e_r_, which
must be in the interval
0 <= line number <= no of lines - 1
Note that the line number is a purely internal matter within each
node. Although a communication line\f
A r_o_u_t_i_n_g_ _m_e_t_h_o_d_ is characterized by the strategies used in upda-
ting the path vector, thus accommodating changes in network topo-
logy, line loads etc.
RCNET, Level 1 offers the choice between two routing methods,
f_i_x_e_d_ _r_o_u_t_i_n_g_ and a_d_a_p_t_i_v_e_ _r_o_u_t_i_n_g_.
With fixed routing, the contents of the path vector are determi-
ned at compile time and are never updated. So packets at a given
source node to a given destination node will always follow the
same path through the network. If that path is broken, for in-
stance by a failure at an intermediate node or a communication
line, the two nodes are not able to communicate at all.
The only advantages of fixed routing are its simplicity and the
small core requirements for code and data areas in the network
nodes. For tree-structured networks, however, it may be appropri-
ate.
3_._3_ _ _ _ _ _ _ _A_d_a_p_t_i_v_e_ _R_o_u_t_i_n_g_ 3.3
(Optional Feature)
This routing method has a number of advantages in comparison with
fixed routing:
The network administration is freed from the manual prepara-
tion of routing tables, which may be a source of errors.
New nodes or communication lines may freely be introduced
without any need for manual updating of routing tables.
The most important aspect, however, is that the network au-
tomatically adapts to changing network topology, when a node
or a communication line fails or comes up again after fai-
lure.
\f
The basic assumption of the adaptive routing method is that the
optimal path between two nodes is the shortest one, i.e. the one
with the smallest number of intermediate nodes. For obvious rea-
sons this assumption may not always be fulfilled, but in most
cases it will be a good one.
The method used is an improved version of the one described in
11. The method is briefly described below.
For all other nodes, each node tries to keep track of the shor-
test range currently available to the node. The shortest range to
a node is defined as the number of communication lines in the
shortest path to that node.
The r_a_n_g_e_ _v_e_c_t_o_r_ is a one-dimensional array, indexed by node num-
ber. For each node the corresponding element contains the cur-
rently shortest range to that node.
There are two special cases. The entry corresponding to the node
itself contains a zero value, indicating that the range to one-
self is zero. At the opposite extreme, a value, no. of nodes, in-
dicates that the corresponding node cannot be reached at the mo-
ment.
In each element, the path vector contains the line number for the
line participating in the shortest path to the corresponding
node.
The contents of range- and path vectors are calculated on the ba-
sis of information in the r_o_u_t_i_n_g_ _m_a_t_r_i_x_. This is a two-dimensio-
nal array with no. of nodes-rows and no. of lines-columns.
The contents of each column are the ranges to all nodes, when the
corresponding line is used.
The calculation of range- and pathvectors proceeds as follows:
\f
For each node in the network, the corresponding row in the rou-
ting matrix is inspected. The smallest element in the row is the
shortest range to the node, and the column number where this ele-
ment occurs, is the line number to be put into the pathvector e-
lement corresponding to the node.
If all elements in the row have the value no of nodes, the node
cannot be reached at the moment.
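The calculation just described can be sketched in Python. This is an illustrative model, not the ROUTER code; node and line numbers are 0-based here, unlike the manual's 1-based node numbering, and no of nodes doubles as the "unreachable" value.

```python
def recalculate(routing_matrix, no_of_nodes):
    """Derive range- and path vectors from the routing matrix.

    routing_matrix[node][line] holds the range to `node` when `line`
    is used; a value of no_of_nodes means "unreachable".
    """
    range_vector, path_vector = [], []
    for row in routing_matrix:
        best = min(row)                 # smallest element = shortest range
        range_vector.append(best)
        # Column number of the smallest element goes into the path
        # vector; it is meaningless when best == no_of_nodes.
        path_vector.append(row.index(best))
    return range_vector, path_vector
```

Applied to node 1's routing matrix from fig. 3.1, this reproduces the range- and pathvectors shown there.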
Fig. 3.1 shows an example of a network with 4 nodes.
Updating of the routing matrix is done on 3 different occasions:
A. Whenever a line goes down, the corresponding column is initia-
lized with the no. of nodes in all elements except that a zero
is inserted in the element corresponding to the node itself.
B. The same initialization is done for all columns, when ROUTER
starts execution after a load.
C. Whenever a range vector is received on a line (see below), all
elements in the vector are increased by one, except if the ori-
ginal value was no. of nodes. A zero is inserted in the element
corresponding to the node itself, and finally the vector is in-
serted into the column of the routing matrix, correspon-
ding to the line on which the range vector was received.
Each updating of the routing matrix is followed by a recalcula-
tion of range- and pathvectors. If a change occurs in any of
these, the contents of the range vector are broadcast on all ac-
tive lines, in order to inform the neighbour nodes about the pos-
sible change in network topology. They will in turn perform the
action in C above and in this way, information about topology
changes is rapidly distributed to all nodes.
Whenever a line comes up, the range vector is sent on that line
in order to supply the new neighbour with routing information. \f
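Occasion C above might be sketched as follows; again an illustrative Python model with 0-based node numbers, with no_of_nodes as the "unreachable" marker. The subsequent recalculation and broadcast-on-change are left to the caller.

```python
def receive_range_vector(routing_matrix, line, vector, own_node, no_of_nodes):
    """Occasion C: install a neighbour's range vector as a matrix column.

    Every element is increased by one (the extra hop to the neighbour),
    except elements already at no_of_nodes ("unreachable"); the element
    for the node itself is forced to zero. The result becomes the
    column for the line on which the vector was received.
    """
    for node, r in enumerate(vector):
        if node == own_node:
            routing_matrix[node][line] = 0
        elif r >= no_of_nodes:
            routing_matrix[node][line] = no_of_nodes
        else:
            routing_matrix[node][line] = r + 1
```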
A slight modification to the above scheme is introduced in order
to avoid some unnecessary "ping-pong" effects in case of a line
going down. The effect is most easily seen by referring to fig.
3.1, imagining that the line between node 2 and node 4 is down. The
resulting routing tables in node 2 are shown in fig. 3.2.
Now, if the line from node 2 to node 1 broke down, then node 2
might for a moment believe that it was possible to reach node 1
through line 0, as the routing matrix indicates in column 0. So
if any packets were queued on line 2 at that moment, they would
be rerouted to line 0, and then passed to node 3, which would
have to send them back at once.
In order to avoid such effects, the following procedure is app-
lied to a rangevector which is about to be sent on line x:
For each node a test is performed to see, if the shortest
path to that node goes through line x. In that case the cor-
responding row in the routing matrix is searched for the
smallest element not belonging to column x, and the value
found is substituted for the original element of the range-
vector.
The effect of the modification is seen in the routing matrix of
node 2 in fig. 3.2.
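The procedure applied to a rangevector about to be sent on line x can be sketched as below (illustrative Python, 0-based node and line numbers as in the earlier sketches):

```python
def outgoing_range_vector(routing_matrix, range_vector, path_vector, line_x):
    """Build the range vector to be sent on line_x (modified scheme).

    For every node whose shortest path goes through line_x, report
    instead the smallest range obtainable without line_x, so the
    neighbour on line_x never routes such traffic straight back.
    """
    out = list(range_vector)
    for node, row in enumerate(routing_matrix):
        if path_vector[node] == line_x:
            # Smallest element in the row not belonging to column x.
            out[node] = min(r for col, r in enumerate(row) if col != line_x)
    return out
```

With node 2's unmodified routing matrix from fig. 3.2, the vector sent on line 2 reports node 1 at range 3 and node 4 as unreachable, which is exactly the effect the modification is meant to achieve.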
\f
Tables in node 1:

  Routing matrix          Range vector     Path vector

  node  line 0  line 1    node  range      node  path
   1      0       0        1      0         1     0
   2      1       2        2      1         2     0
   3      2       3        3      2         3     0
   4      2       1        4      1         4     1

Fig. 3.1 Routing tables in network with 4 nodes.
\f
  Routing matrix                  Range vector     Path vector

  node  line 0  line 1  line 2    node  range      node  path
   1      3       4       1        1      1         1     2
   2      0       0       0        2      0         2     0
   3      1       4       3        3      1         3     0
   4      4       4       2        4      2         4     2

Routing tables in node 2, unmodified algorithm.

  node  line 0  line 1  line 2    node  range      node  path
   1      4       4       1        1      1         1     2
   2      0       0       0        2      0         2     0
   3      1       4       4        3      1         3     0
   4      4       4       2        4      2         4     2

Routing tables in node 2, modified algorithm.

Fig. 3.2 Effect of modified algorithm.
\f
It may now be seen that adaptive routing has a number of attrac-
tive qualities.
If for instance all or some nodes are equipped with "spare chan-
nels", then new connections between existing nodes may be intro-
duced simply by plugging in communication equipment at the chan-
nels. The line(s) so introduced will then come into use automati-
cally.
Also new nodes may come into existence simply by connecting them
to existing nodes, as long as the maximum number of nodes is not
exceeded.
On the other hand a number of disadvantages may be observed, es-
pecially with respect to load sharing among parallel communica-
tion paths. In the network sketched below, it would for instance
be preferable if traffic between node 1 and node 4 could be sha-
red equally between the two existing paths. The adaptive routing
method does not account for that. Only parallel lines between
neighbour nodes are supported.
It is expected that further developments of RCNET will take such
problems into consideration.
Note that if someone has a copy of the range vectors from all the
nodes, a picture of the current network topology may be construc-
ted. This is possible because a rangevector of a given node among
other things indicates the neighbour nodes, namely the nodes
having a 1 in the corresponding element of the rangevector. \f
For this reason, rangevectors are sent from the nodes to a de-
fined Network Operating Centre each time a change has occurred,
cf. section 9.
\f
4_._ _ _ _ _ _ _ _ _H_O_S_T_S_ _A_N_D_ _H_O_S_T_ _A_D_D_R_E_S_S_I_N_G_ 4.
Any host connected to the network must have a h_o_s_t_ _i_d_e_n_t_i_f_i_c_a_t_i_o_n_
(denoted host-id from now on), which must be unique throughout
the network.
Host-id is a 16-bit unsigned integer.
As explained in 1, section 1.3, hosts are divided into two
groups, denoted i_n_t_e_r_n_a_l_ _h_o_s_t_s_ and e_x_t_e_r_n_a_l_ _h_o_s_t_s_.
The main difference is that an internal host is logically associ-
ated with a specific node in the network, while an external host
may in principle connect to any node whatsoever.
An external host has its host-id in the interval
0 < host-id < 32768
that is, host-id <> 0 and host-id.bit0 = 0.
An internal host has host-id in the interval
32768 < host-id.
The format of an internal host-id is:
  bit:     0        1 - 7         8 - 15
           1      FUNCTION         NODE
NODE is the node number of the node to which the internal host
may be connected. That node is known as the o_w_n_e_r_ _n_o_d_e_ of the
host.
FUNCTION may take the values
0 < FUNCTION < 128
A number of values of FUNCTION are reserved for use by the net-
work administration, cf. section 7. Apart from that, internal
host identifications may be used freely. \f
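The host-id layout above translates directly into code. A minimal Python sketch, assuming bit 0 is the most significant bit of the 16-bit word (the function names are invented for the example):

```python
def internal_host_id(function, node):
    """Pack an internal host-id: bit 0 set, FUNCTION in bits 1-7,
    NODE (the owner node number) in bits 8-15."""
    assert 0 < function < 128 and 1 <= node <= 255
    return 0x8000 | (function << 8) | node

def is_external(host_id):
    """External hosts satisfy 0 < host-id < 32768, i.e. bit 0 clear."""
    return 0 < host_id < 32768

def owner_node(host_id):
    """Extract the owner node number from an internal host-id."""
    return host_id & 0xFF
```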
4_._1_ _ _ _ _ _ _ _A_d_d_r_e_s_s_i_n_g_ _o_f_ _H_o_s_t_s_ 4.1
External hosts are known throughout the network in the sense that
each node knows exactly which external hosts are connected at
that moment, as well as their location in the network in form of
the node numbers of their current nodes. Support for external
hosts is an optional feature.
Whenever an external host disconnects itself from the network,
the information about this event will also be distributed to all
nodes, where ROUTER will inform all host processes about the e-
vent (optional feature).
Whenever a packet is delivered from a hostprocess to ROUTER, the
treatment depends upon the type of receiver host.
If receiving host is an external host, it can immediately be de-
cided, if the host is connected at that moment. If not, the host-
process will be notified at once. Otherwise, the packet is routed
to the current node of the receiving host.
If the receiving host is an internal host, the packet is simply
routed to the owner node, and the test for the existence of the
receiving host is made when the packet reaches the owner node.
In both cases, it may happen that the receiving host is not con-
nected when the packet reaches the destination node. It may also
happen that the destination node becomes unreachable while the
packet is travelling through the network. The packet will then be
r_e_j_e_c_t_e_d_ either at the destination node or at some intermediate
node. This means that the packet state (cf. appendix B) is set to
'rejected', and the packet is returned to the sending host.
If for some reason a rejected packet cannot reach the sending
host, it will simply be dropped. \f
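The treatment of the two receiver-host types can be sketched as follows (illustrative Python only; the external host table is modelled as a dictionary from host-id to current node number):

```python
def node_for_receiver(packet, external_hosts):
    """Decide where a packet entering the Packet Switch should go.

    Returns ("route", node) or ("reject", reason) -- names invented
    for the example.
    """
    rcv = packet["receiver"]
    if rcv < 32768:
        # External host: its presence can be checked immediately.
        node = external_hosts.get(rcv)
        if node is None:
            return ("reject", "host not connected")
        return ("route", node)
    # Internal host: simply route to the owner node encoded in the
    # host-id (bits 8-15); existence is checked only on arrival.
    return ("route", rcv & 0xFF)
```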
4_._2_ _ _ _ _ _ _ _E_x_t_e_r_n_a_l_ _H_o_s_t_ _S_u_p_p_o_r_t_ 4.2
(Optional Feature)
This section describes the strategies used for external host
support.
The basic requirement is that each node keeps a table, the e_x_t_e_r_-
n_a_l_ _h_o_s_t_ _t_a_b_l_e_, containing host-id and current node for all ex-
ternal hosts connected to the network at a given moment. This
table is maintained by the supervisor module in ROUTER, cf. fig.
2.4.
Updates to the external host table may come from 3 different
sources:
A. Local hosts are inserted or removed on request from the Host
Process Interface, when a hostprocess requests connection or
removal of an external host.
B. When the supervisor discovers that some node becomes unreach-
able, it removes from the external hosttable all hosts having
that node as current node.
C. External hosts which are not local to the node are inserted or
removed as a consequence of information coming from neighbour
nodes.
The basic source of updates is when some node executes point A
above. This piece of information must then be distributed to all
other nodes thus executing point C.
The method of distribution is that each node occasionally broad-
casts its entire external hosttable to all neighbours as a packet
in the neighbour node protocol (cf. section 2.5). This is done at
regular time intervals and each time a change to the table has
occurred. \f
When a host table is received at a node from a neighbour, it is
compared with its own host table in order to see, if it should be
updated as a result of the received information.
During the comparison, the following cases may occur with a given
host-id found in at least one of the tables:
1. Host-id present in own table, but not in received table.
Basic idea: remove it from own table.
2. Host-id present in received table, but not in own table.
Basic idea: Insert it into own table.
3. Host-id present in both tables, but with different nodes.
Potential conflict.
4. Host-id present in both tables, with same nodes.
Now the idea in point 1 is obviously too primitive. If, for in-
stance, the host is a local one, it might be removed from the re-
ceiving host's own table just because the neighbour node has not
yet received the information about it.
So if the two host tables differ in some respect, the degrees of
the trustworthiness of the two pieces of information must be
compared.
Referring to 1 above, the host must at least earlier have been
connected to the node specified in its own table. The question is
then: 'is the neighbour node close to the node in question?' If
so, it should possess the best information about the hosts on
that node, and the host should then be removed from the table. If
not, no confidence is placed in the received information.
\f
The problem just mentioned calls for a principle making it possi-
ble to decide for any node in the network, if a given neighbour
node is in a better position to know about the node than oneself.
The adopted principle is that this is the case, if the neighbour
node lies "closer" to the node in question. This will be decided
on the basis of the routing information in the pathvector. Infor-
mation about a given node will be trusted, if the information has
been received from a line on the shortest path to the node in
question. Otherwise the information will be ignored.
This principle may always be used in points 1 and 2 above. The
potential conflict in point 3 is treated in the following sec-
tion.
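Cases 1-3 together with the trust principle might be sketched as below. This is an illustrative Python model of the comparison, not the ROUTER code: host tables are dictionaries from host-id to node number, the pathvector is a dictionary from node to line, and as a simplification the node's own local hosts are never removed on received information.

```python
def merge_host_tables(own, received, line, path_vector, local_hosts):
    """Merge a neighbour's external host table into one's own.

    A difference concerning a host on node n is trusted only if the
    table arrived on the line lying on the shortest path to n.
    Returns the set of host-ids in potential conflict (case 3).
    """
    def trusted(node):
        return path_vector.get(node) == line

    conflicts = set()
    for host, node in list(own.items()):
        if host in local_hosts:
            continue                      # never drop one's own local hosts
        if host not in received and trusted(node):
            del own[host]                 # case 1: remove, if trusted
    for host, node in received.items():
        if host not in own:
            if trusted(node):
                own[host] = node          # case 2: insert, if trusted
        elif own[host] != node:
            conflicts.add(host)           # case 3: potential conflict
    return conflicts
```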
4_._3_ _ _ _ _ _ _ _R_e_s_o_l_u_t_i_o_n_ _o_f_ _H_o_s_t_ _C_o_n_f_l_i_c_t_s_ 4.3
A host conflict may in principle arise from two different sour-
ces:
A given host-id is connected to two different places in the
network, or a host disconnects from one node and connects to
another so quickly that the information about the disconnec-
tion has not yet been distributed to the whole network.
In a practical situation, the node experiencing this conflict
will not be able to make a distinction between the two cases, so
all conflicts are treated as if they were of the first type.
At the Host Process Interface a conflict arises if a host pro-
cess tries to connect a host with a host-id already known by the
node. In that case the connect request is refused, cf. 4, section
4.3.
A more difficult case is point 3 in the preceding section, where
upon analysis of a received host table, two conflicting pieces of
information are found. If neither of the two facts can be ignored
by applying the decision making procedure described in the\f
preceding section, the conclusion must be drawn that at the moment
the same host-id is connected to two different nodes.
This situation is intolerable, so the network recovers by exclu-
ding b_o_t_h_ of the hosts connected with the host-id in question.
This is done by sending to each of the nodes in question a d_i_s_-
c_o_n_n_e_c_t_ _h_o_s_t_ packet (cf. 7, section 5.1.4.2). The packet is
supplied with a f_a_c_i_l_i_t_y_ _m_a_s_k_ indicating that the host must be
excluded from the host tables of any intermediate node. In this
way the whole network will quickly get rid of the conflicting in-
formation.
It is seen that yet another source of host table update must be
added to the list in section 4.2:
D. When a packet with facility mask bit "exclude host" is recei-
ved, the host in question is removed from the host table.
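Update source D may be sketched as follows; the bit value of the
facility mask is hypothetical, only the effect is as described:

```python
EXCLUDE_HOST = 0x01   # hypothetical position of the "exclude host" bit

def on_disconnect_host(host_table, host_id, facility_mask):
    """Apply update source D: when the "exclude host" bit is set in
    the facility mask, the host is removed from the host table."""
    if facility_mask & EXCLUDE_HOST:
        host_table.pop(host_id, None)

table = {0x42: 3}           # host-id 0x42 believed connected to node 3
on_disconnect_host(table, 0x42, EXCLUDE_HOST)
print(table)                # {}
```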
4_._4_ _ _ _ _ _ _ _C_o_n_n_e_c_t_i_o_n_ _a_n_d_ _R_e_m_o_v_a_l_ _o_f_ _L_o_c_a_l_ _H_o_s_t_s_ 4.4
Local hosts are connected and removed upon request from host pro-
cesses in the node, cf. section 2.3 and 4, section 4. If a
host process is removed, all hosts connected by it are removed,
too.
The host process may qualify the host by k_i_n_d_, which is a 16 bit
unsigned integer. It is not used by the network in any way, but
it is reported in the "host up" report (cf. section 9) and may
then be used by the Network Administration.
\f
5_._ _ _ _ _ _ _ _ _N_E_T_W_O_R_K_ _T_I_M_E_ 5.
(OPTIONAL FEATURE)
In a larger network it may be advisable to run the network under
the supervision of a Network Operating Centre (called NOC from
now on). The NOC may then among other things receive reports from
the nodes about interesting events, such as line failures, con-
nection and disconnection of hosts etc.
In the analysis at the NOC of the reports from different nodes it
may be a great help to be able to arrange them into
chronological order. This may be done if each node maintains a
clock and the nodes make their clocks run synchronously. Thus a
common network time is defined, enabling all reports from the
nodes to be stamped with the time of origin.
Another application of a synchronized network time is in report
printing at unmanned nodes. It is almost mandatory that such
reports are stamped with date and time, and a synchronous network
time makes this possible with reasonable accuracy even if the
node loses its power for longer periods.
The algorithms used for network time maintenance are described
here. They are performed by cooperation between ROUTER and the
process TIME (cf. 12).
ROUTER maintains a 32 bit integer, denoted n_e_t_w_o_r_k_ _t_i_m_e_, expres-
sing real time in units of 0.1 seconds from an arbitrary time
base.
TIME maintains the date and the clock.
ROUTER and TIME both run their time counters synchronously with
the real time clock of the MUS Monitor.
The basic idea in time synchronization between the nodes is that
ROUTER in each node relies upon its own network time, unless that
time has fallen behind one of its neighbours. One may think of it\f
in the way that the common network time is driven by the node
having the fastest real time clock.
At regular intervals, each node broadcasts its current network
time to all its neighbours in a packet in the neighbour node pro-
tocol (cf. sec. 2.5).
When a node receives a network time packet, the network time in
it is compared with the node's own network time. If the latter is
the larger (as in most cases), nothing is done. Otherwise the
received network time replaces the node's own network time, and an
update message is sent by ROUTER to the process TIME (cf. 12).
Network time maintenance is done by the supervisor module of ROU-
TER (cf. fig. 2.4).
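The comparison made on receipt of a network time packet may be
sketched as follows; wrap-around of the 32 bit counter is ignored
in this sketch, and the names are illustrative:

```python
def on_network_time_packet(own_time, received_time, notify_time):
    """Keep the larger of the two network times; if the neighbour is
    ahead, adopt its time and tell the TIME process. Times are
    32-bit counts of 0.1-second units (wrap-around ignored here)."""
    if received_time > own_time:
        notify_time(received_time)   # ROUTER -> TIME update message
        return received_time
    return own_time                  # the usual case: nothing is done

updates = []
print(on_network_time_packet(1000, 1005, updates.append))  # 1005
print(on_network_time_packet(1005, 990, updates.append))   # 1005
print(updates)                                             # [1005]
```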
\f
6_._ _ _ _ _ _ _ _ _E_N_D_ _T_O_ _E_N_D_ _C_O_N_T_R_O_L_ 6.
(OPTIONAL FEATURE)
This chapter describes the packet transporter of the
RCNET/ROUTER.
The Packet Transporter (PT) of RCNET Level 1 provides a host to
host protocol, featuring e_n_d_-_t_o_-_e_n_d_ _a_n_d_ _f_l_o_w_ _c_o_n_t_r_o_l_. These func-
tions are achieved by the creation of logical links, in the fol-
lowing called p_i_p_e_l_i_n_e_s_, between the involved hosts. The adopted
method utilizes the principle of the X.25 level 3 recommendation,
which in turn resembles the HDLC line protocol (X.25 level 2) used
in RCNET.
The PT will queue the transmitted packets in a retransmission
chain maintained for every pipeline. The packets will remain here
until an acknowledge is received from the PT at the destination
node, indicating that the host addressed has accepted the packet.
The PT at the receiving node will deliver the packets in the cor-
rect sequence to the host, through the Host Process Interface.
Once delivered, an acknowledgement is communicated to the sender,
causing the particular packet(s) to be removed from the retrans-
mission chain, and thereby the sending host process to receive an
acknowledge of its corresponding output message(s).
The traffic carried on a pipeline is bi-directional, i.e. it
may be considered full duplex.
Please note that the pipeline concept defines a host protocol.
The pipelines have no relation to the physical network topology
or the routing methods applied to the packets. Packets transmit-
ted on the same pipeline may very well find different ways
through the network.
A pipeline is able to survive a complete disconnection of the
physical link between the two involved hosts. When the link (or
another alternative path) is reestablished the datastream will\f
be continued without any disturbances.
The two involved hosts will not feel this physical break in any
other way than a delay in the datastream.
In order to suit different applications, it is possible to define
a maximal time the two hosts may remain physically disconnected
before the pipeline is aborted and the two hosts notified.
T_h_e_ _e_n_d_-_t_o_-_e_n_d_ _c_o_n_t_r_o_l_
In order to achieve the end-to-end control the PT must ensure
that no packets are lost or duplicated by the network. For this
purpose each packet is earmarked with a sequence number. Two
counter values are placed in every ordinary packet exchanged
within a pipeline:
- Transmit Sequence Count (TSC) is the sequence number of this
packet in the direction from sender to receiver.
- Receive Sequence Count (RSC) signals from receiver to sender
that all packets up to, but not including, this number have
been received (by the host), and are thus acknowledged.
The counters operate modulo 256, with the possible values
0, 1, ..., 255, 0, 1, ... . This interval limits the maximum
number of outstanding packets to 256. However, a much lower
window is normally used in order to increase the security
against loss or duplication of packets.
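Assuming the retransmission chain holds the TSCs of unacknowledged
packets in transmission order, the effect of an incoming RSC may
be sketched as follows (names illustrative):

```python
def apply_rsc(chain, rsc):
    """Remove from the retransmission chain every packet sent before
    sequence number rsc; RSC acknowledges all packets up to, but not
    including, that number. Counters wrap modulo 256."""
    while chain and chain[0] != rsc:
        chain.pop(0)
    return chain

chain = [253, 254, 255, 0, 1]    # five outstanding packets, wrapping
print(apply_rsc(chain, 0))       # [0, 1]: 253..255 are acknowledged
```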
If the traffic is not bidirectional (no datapackets carry the RSC
signalling the acknowledges), then special supervisor packets are
created by the PT at the receiver end. This is done every time a
predetermined number of packets has been received but not yet
acknowledged, or when the time since the last receipt of an unack-
nowledged packet exceeds a certain limit.
These two criteria are determined at configuration time as the
"acknowledge counter" and "acknowledge timer" respectively. \f
The "retransmission timer" determines when unacknowledged packets
must be retransmitted by the transmitter PT.
T_h_e_ _F_l_o_w_ _C_o_n_t_r_o_l_
Along with end-to-end control every pipeline provides a flow
control mechanism. The main objective is to ensure that the
sender will transmit through the network at a pace determined by
the capabilities of the receiver. Due to the transmission delay
of the network, the flow control facility is a compromise: it
avoids having every packet await its end-to-end acknowledge
(which would double the transmission delay of every packet),
while at the same time avoiding congestion at the receiver node.
For this purpose the sender end of the pipeline is only allowed
to maintain a limited number of outstanding packets, i.e. the
length of the retransmission chain is limited. This limit is a
configuration parameter.
The receiver end of the pipeline (for convenience considered
one-directional) maintains a "receive chain" of packets not yet
accepted by the receiver host.
Similar to the "retransmission chain" at the sender end of the
pipeline, this "receive chain" has a maximal length (defined at
configuration time). In many applications these two maximal chain
lengths will be the same, known as the "network window size".
The receiver end is able to accept packets out of sequence but
within the "receive window", putting them into the ordered
receive chain as they arrive from the communication lines, and
delivering them to the receiver host in the correct sequence.
This facility will avoid unnecessary retransmissions in case of
e.g. multilines between neighbour nodes or rerouting of packets
because of line failures. \f
The status indications "receive ready" (RR) and "receive not
ready" (RNR), returned along with the RSC, indicate the current
state of the receiver end.
Normally RR is returned. RNR indicates that the receiver end is
no longer able to accept packets because the receive chain length
has reached a certain threshold (dependent on the application).
This will cause the sender end to stop transmitting until an RR
status is received. This is illustrated in the first figure. \f
(figure)\f
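The interplay between the window and the RR/RNR status may be
sketched as follows; the threshold and window values are configu-
ration-dependent, and the names are illustrative:

```python
def receiver_status(receive_chain_len, rnr_threshold):
    """Return RNR once the receive chain reaches the threshold,
    otherwise RR."""
    return "RNR" if receive_chain_len >= rnr_threshold else "RR"

def may_transmit(outstanding, window, last_status):
    """Sender side: stay inside the network window, and stop on RNR
    until an RR status is received again."""
    return last_status == "RR" and outstanding < window

print(receiver_status(2, 4))          # RR
print(receiver_status(4, 4))          # RNR
print(may_transmit(3, 4, "RR"))       # True
print(may_transmit(3, 4, "RNR"))      # False
```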
C_r_e_a_t_i_o_n_ _o_f_ _P_i_p_e_l_i_n_e_s_
Pipelines are created whenever needed. The demand for a pipeline
arises whenever two hosts, not connected to the same node, com-
municate. A new pipeline is created if the two host identifica-
tions and the priority field in the packet header do not define
an already known pipeline.
In order to create a new pipeline, the first packet transmitted
towards the receiver contains a "new pipeline request" status in
the flag field.
The TSC of the first packet is zero, and therefore of course the
RSC is zero too.
The receiver end will normally allocate the pipeline, and the
sender end is notified by the first supervisor (or data) packet
carrying RSC and status indications.
Reasons for refusing the establishment of a pipeline are:
1) No resources
2) Receiver unknown
3) Pipeline exists (a special conflict situation only possible
when the network is in an unstable state)
which will be indicated in t_h_e_ _r_e_j_e_c_t_e_d_ _"_n_e_w_ _p_i_p_e_l_i_n_e_ _r_e_q_u_e_s_t_"_
p_a_c_k_e_t_.
The status indication format is shown in the figure.
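Receiver-end handling of a "new pipeline request" may be sketched
as follows; the reason codes and the capacity check are illustra-
tive, and the pipeline key follows the description above:

```python
NO_RESOURCES, RECEIVER_UNKNOWN, PIPELINE_EXISTS = 1, 2, 3

def new_pipeline_request(pipelines, key, local_hosts, capacity):
    """key = (sender host-id, receiver host-id, priority).
    Returns None on acceptance, or one of the refusal reasons."""
    sender, receiver, priority = key
    if receiver not in local_hosts:
        return RECEIVER_UNKNOWN
    if key in pipelines:
        return PIPELINE_EXISTS        # only in an unstable network
    if len(pipelines) >= capacity:
        return NO_RESOURCES
    pipelines[key] = {"tsc": 0, "rsc": 0}   # first packet has TSC = 0
    return None

pipes = {}
print(new_pipeline_request(pipes, (10, 20, 0), {20}, 8))   # None
print(new_pipeline_request(pipes, (10, 20, 0), {20}, 8))   # 3
```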
\f
R_e_m_o_v_a_l_ _o_f_ _P_i_p_e_l_i_n_e_s_
Most of the criteria for removal of pipelines are optional and,
if selected, adapted to the specific application by configuration
constants.
The only unconditional criterion is:
1) Removal of one of the involved hosts.
The optional criteria are:
2) A certain number of RNR status packets has been received in
sequence (i.e. the receiver host has "stopped" accepting
packets).
3) The physical connection between the two involved hosts has
been broken for a certain time.
4) The "retransmission counter", counting the number of retrans-
missions of the "oldest" packet in the "retransmission
chain", has exceeded a certain number (similar to 2, the
receiver host has stopped).
These criteria are tuned at configuration time.\f
P_a_c_k_e_t_ _F_o_r_m_a_t_s_
When end-to-end control is selected by the host, the packet
header format is extended with two words - indicated by a one-bit
in bit 0 of the identification field.
(drawing) \f
Bit 0    RNR       RNR flag (zero = RR)
Bit 6    R.T.M.    Retransmitted packet
Bit 7    A.R.Q.    Request acknowledge
Bit 8    SUP       Supervisor packet
Bit 9    N.R.      No resources (*)
Bit 10   R.U.      Receiver unknown (*)
Bit 11   P.D.      Pipeline disconnected
Bit 12   N.P.R.Q.  New pipeline request
(*) Together with P.D. and N.P.R.Q. in a rejected packet. \f
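Read as bit positions, the flag field may be expressed as con-
stants, together with the rejected-request combination noted
above (a sketch; only the bit positions come from the table):

```python
RNR  = 1 << 0    # receive not ready (zero = RR)
RTM  = 1 << 6    # retransmitted packet
ARQ  = 1 << 7    # request acknowledge
SUP  = 1 << 8    # supervisor packet
NR   = 1 << 9    # no resources (*)
RU   = 1 << 10   # receiver unknown (*)
PD   = 1 << 11   # pipeline disconnected
NPRQ = 1 << 12   # new pipeline request

def is_rejected_request(flags):
    """(*) NR and RU occur together with PD and NPRQ in a rejected
    'new pipeline request' packet."""
    return bool(flags & NPRQ) and bool(flags & PD) \
        and bool(flags & (NR | RU))

print(is_rejected_request(NPRQ | PD | NR))   # True
print(is_rejected_request(NPRQ))             # False
```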
7_._ _ _ _ _ _ _ _ _N_E_T_W_O_R_K_ _S_U_P_E_R_V_I_S_I_O_N_ 7.
(OPTIONAL FEATURES)
In large and medium sized networks it is almost mandatory that
the Network Administration has at its disposal some tools to keep
track of the behaviour of the network. This task is called Net-
work Supervision.
Targets for supervision are for instance:
Keep track of traffic volumes and peak loads.
Produce accounting information.
Discover bottlenecks.
Discover error conditions and produce diagnostic informa-
tion.
Close erroneous parts of network after failure.
Generate artificial traffic to anticipate future loads.
Run tests in new or repaired nodes.
RCNET, Level 1 offers a range of optional features supporting the
above mentioned supervisory functions.
The present section describes the general conventions for network
supervision, while subsequent sections contain details about va-
rious functions available in RCNET, Level 1.
Network supervision is supposed to be performed by a so-called
N_e_t_w_o_r_k_ _O_p_e_r_a_t_i_n_g_ _C_e_n_t_r_e_, called NOC. The NOC consists of one or
more processes executed in one computer, or distributed over se-
veral computers as the need may be. Some of the processes pre-
sent themselves as hosts in the network, and act as destinations
for supervisory information from the network or as sources for\f
commands to the network. Others are utilities used to analyze and
present information received from the network, such as statisti-
cal information, reports etc.
The NOC functions may vary considerably in different RCNET in-
stallations and may even be required to run on equipment from
other vendors. Consequently this paper does not contain any de-
scription of a NOC, but it outlines the different NOC sup-
port functions available in each node.
7_._1_ _ _ _ _ _ _ _S_u_p_p_o_r_t_i_n_g_ _F_u_n_c_t_i_o_n_s_ _i_n_ _N_o_d_e_s_ 7.1
A number of optional features are available for NOC support at
the network nodes.
All network supervisory functions in a node may be addressed as
internal hosts at the node. The following functions are defined
at present:
1. CATCH A general destination at each node for packets directed
to that node, e.g. artificial traffic packets. The pac-
kets may be logged on some medium, e.g. flexible disc,
at the receiving node for later inspection, cf. sec-
tion 12.
2. COMD A destination for supervisory commands to a network
node, e.g. close or open communication lines.
3. OPCOM A process which may be used for communication between
human operators at different nodes. At the console key-
board of a node, the operator may prepare text messages
and address them to hosts in the network, and especial-
ly to OPCOM at other nodes.
\f
From packets received by OPCOM, the sender host-id and
packet text are displayed on the console of the node,
cf. 14.
4. REPO A process collecting reports about events in the node
where it resides. The reports may be logged at the lo-
cal console and/or sent to the NOC, which may then keep
a central log of all events in the whole network, cf.
section 9.
5. STATO A process collecting statistical information in a net-
work node for use by the NOC, cf. section 8.
6. TRAFO A process which on request from NOC may generate traf-
fic with a given frequency to indicated destinations
(hosts) in the network. Traffic may be sent in parallel
from each node to more than one destination and each
may have its own frequency, cf. section 12.
7. EKKO A destination in each node, used for instance to test
the responsiveness of the node. Packets addressed to
the EKKO-host are "echoed" by the receiving node, that is,
returned at once to the sending host. For test pur-
poses they are stamped with the current value of the
network time before they are sent back, cf. section
12.
8. DUMP A process used to dump the contents of routing tables
etc. in a network node. More generally it may be used
to inspect and even modify core locations in the node.
9. LOAD A process used to control program execution in the no-
des. Processes may be deleted from core and new pro-
grams may be loaded into core and started as processes
under the control of the NOC. Binary programs may re-
side on backing storage, e.g. flexible disc, or they
may be sent from the NOC through the network, cf. 19
and section 11. \f
All the above mentioned functions may be addressed as internal
hosts at the node. The numbers listed in front of each function
are the function code fields of the host-id (cf. section 4). They
will from now on be denoted s_u_p_e_r_v_i_s_o_r_ _h_o_s_t_s_.
Only two of the supervisor hosts are explicitly known by ROUTER,
namely REPO and CATCH. Reports generated by the various ROUTER
modules are sent by the supervisor module as special type packets
to the internal host with function REPO, cf. section 9, and test
records from the various modules are in a similar way sent to the
internal host with function CATCH, cf. section 12, provided that
the respective hosts are connected. If not, the special type pac-
kets are just dropped.
Apart from the two above mentioned cases, ROUTER is unaware of
the special functions of the supervisor hosts in question. Each
of them must reside within host processes, connecting the host in
the normal way. Each function may reside within a separate host
process or several functions may for reasons of storage efficien-
cy be put together into one process. If the NOC addresses a su-
pervisor host not connected at the moment, the packet will be
returned to the sender as a rejected packet in the normal way.
Apart from conceptual simplicity, this approach has several ad-
vantages.
Firstly, it allows existing tools such as end-to-end control to be
reused for the communication between the NOC and the supporting
functions in the nodes. Secondly, it allows a "supervisor core
area" in each node to be used for different purposes at different
times by dynamically replacing host processes with others. This
may take place by means of the LOAD function, cf. section 11.
Finally it should be noted that standard programs are certainly
available for each of the internal host functions mentioned abo-
ve. If, however, for a specific application some or all of the
standard programs are too general or not general enough, they may\f
easily be substituted by special programs written for the pur-
pose. Such a substitution will not at all affect the rest of the
network software.
Some of the supervisor functions require special support in ROU-
TER, in the form of code and/or tables. They are:
Command Support:
Commands are directed to the various ROUTER modules and
each command requires some code to be executed in the
appropriate module.
Statistics:
Requires code and data area for statistics accumula-
tion.
Reports:
Requires code and data areas for generation.
Dump:
Requires index tables in ROUTER to point out the rou-
ting tables, host tables etc.
At compile time for ROUTER it is decided, using the option para-
meters, which functions should be supported. In this way, func-
tions which will never be needed in a specific network do not
create overhead or take up core storage.
On the other hand it is important to note that even if a ROUTER
is compiled with options for, say, reports and statistics produc-
tion, it will work perfectly well without the supervisory hosts
REPO and STATO being connected. When REPO is connected, however,
it will receive all reports generated a_f_t_e_r_ that moment. Again
later it may be removed in order to use the core space for other
purposes, and ROUTER will accommodate that situation also.
\f
7_._2_ _ _ _ _ _ _ _A_d_d_r_e_s_s_i_n_g_ _t_h_e_ _N_O_C_ _f_r_o_m_ _t_h_e_ _N_o_d_e_s_ 7.2
It is essential that the network is able to function properly
without a NOC, even if the network is prepared for NOC functions.
The network must for instance tolerate the NOC breaking down and
then coming up again after a while. It should also be possible
that another host takes over, or that the NOC is moved to another
node for backup purposes.
The objective is accomplished by letting the supervisor hosts in
the nodes be completely unaware of the existence of the NOC, un-
til it presents itself as such. The NOC functions are not tied to
specific host-ids or nodes.
As an example, REPO connects itself as a host and then prints re-
ports on the local console until the NOC presents itself. This is
done in two steps. Firstly the NOC must send a r_e_s_e_r_v_a_t_i_o_n_ _p_a_c_k_e_t_
with the effect that REPO will from now on refuse requests from
any but the sending host of the reservation packet.
After that, the NOC may send s_t_a_r_t_ _r_e_p_o_r_t_s_ packets to REPO. Such a
packet contains a "report NOC" identification, that is, a host-id
to which the reports should be sent. From that moment REPO will
send all reports to that host-id, and if required still print
them on the local console.
If the NOC for some reason breaks down, a new one must have the
opportunity of taking over, even if the old one still has a reser-
vation on one or more supervisor hosts. So the new NOC must have
the right of c_l_e_a_r_i_n_g_ a reservation made by the old one.
For security reasons a p_a_s_s_w_o_r_d_ must be supplied in reservation
and clear reservation packets.
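The reservation mechanism may be sketched as follows; the field
and method names are hypothetical, only the behaviour follows the
description above:

```python
class SupervisorHost:
    """Reservation handling for a supervisor host such as REPO."""
    def __init__(self, password):
        self.password = password
        self.reserved_by = None          # host-id of the reserving NOC

    def reserve(self, sender, password):
        # a new NOC knowing the password may take over, implicitly
        # clearing a reservation left by a crashed NOC
        if password != self.password:
            return False
        self.reserved_by = sender
        return True

    def clear_reservation(self, password):
        if password == self.password:
            self.reserved_by = None

    def accepts(self, sender):
        # unreserved: serve anybody; reserved: only the reserving NOC
        return self.reserved_by is None or self.reserved_by == sender

repo = SupervisorHost(password="secret")
repo.reserve(sender=0x11, password="secret")
print(repo.accepts(0x11), repo.accepts(0x22))   # True False
```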
\f
8_._ _ _ _ _ _ _ _ _S_T_A_T_I_S_T_I_C_ _A_C_C_U_M_U_L_A_T_I_O_N_ 8.
(OPTIONAL FEATURE)
A variety of statistical information is available in the network
nodes. It may roughly be classified into three different groups:
A. Measuring the load of the node.
B. Measuring error frequencies.
C. Measuring utilization and availability of buffer resources.
The information may be used for accounting purposes, for discove-
ring bottlenecks in the network topology or within the node it-
self or for discovering failures in communication lines.
8_._1_ _ _ _ _ _ _ _O_r_g_a_n_i_z_a_t_i_o_n_ _o_f_ _S_t_a_t_i_s_t_i_c_s_ _I_n_f_o_r_m_a_t_i_o_n_ 8.1
Statistical information in a node is organized into s_t_a_t_i_s_t_i_c_s_
r_e_c_o_r_d_s_.
A statistics record contains all statistical information about a
single logical entity, such as a communication line, a host, a
host process etc.
The entities in question will normally occur in several instan-
ces, e.g. a node will in general have several communication li-
nes, several hosts connected etc. Each instance will then have
its own statistics record.
The collection of statistics records belonging to the same kind
of entity will be denoted a s_t_a_t_i_s_t_i_c_s_ _t_y_p_e_.
In ROUTER a number of types are defined.
In order to facilitate retrieval of the statistics records in
ROUTER, they are referenced by means of a two level index. On the
lowest level an index table, called the t_y_p_e_ _t_a_b_l_e_, exists for each
statistics type, pointing out all the member records of that\f
type. On the higher level, the so-called s_t_a_t_i_s_t_i_c_s_ _t_a_b_l_e_ points
out the index tables for all types.
Fig. 8.1 Organization of all statistical information in ROUTER.
The number of statistics types is fixed. In some particular in-
stallations, however, some (or all) statistics types may be omit-
ted. This is indicated by a zero in the corresponding entry of
the statistics table.
The organization described is obviously very flexible in the
sense that new types may easily be added simply by expanding the
statistics table.
Statistical information on communication line drivers is simpler
as only one type will normally exist, with a record for each
line, counting transmission errors, temporary stops caused by
lack of buffer space etc. \f
8_._2_ _ _ _ _ _ _ _S_t_a_t_i_s_t_i_c_a_l_ _F_i_e_l_d_s_ 8.2
Three sorts of variables may occur in a statistical record.
The most frequent ones are simple counters, counting the occur-
rence of some event. On communication lines for example, the num-
ber of transmitted packets is counted. It is a convention that
the counter starts with zero at the time the ROUTER process is
loaded and it is never reset. It is the responsibility of the NOC
to read the value at regular intervals and then calculate the
count corresponding to the last interval.
Another sort of statistical variable is the r_a_n_g_e_ _i_n_d_i_c_a_t_o_r_, gi-
ving extreme values assumed by some other variable. In each buf-
fer pool, for instance, a statistical field contains the minimum
number of free buffers in the pool.
In order that a range indicator may be of any use, it must be
reset every time it is read. In the buffer pool example above,
the value of "min no. of free buffers" must be reset to "current
no. of free buffers".
Yet another sort of statistical variable is a v_a_l_u_e_, giving simp-
ly the current value of some variable. In each buffer pool, for
instance, the variable "current no. of free buffers" is located
in the statistical record for the pool.
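A buffer pool record illustrates all three sorts of fields; the
following sketch uses illustrative names for the fields described
above:

```python
class BufferPoolStats:
    """Counter ('number of accesses', never reset), value ('current
    no. of free buffers') and range indicator ('min no. of free
    buffers', reset to the current value each time it is read)."""
    def __init__(self, total):
        self.accesses = 0        # counter
        self.free = total        # value
        self.min_free = total    # range indicator

    def take_buffer(self):
        self.accesses += 1
        self.free -= 1
        self.min_free = min(self.min_free, self.free)

    def put_buffer(self):
        self.free += 1

    def read_min_free(self):
        # reading resets the indicator to the current value
        minimum, self.min_free = self.min_free, self.free
        return minimum

pool = BufferPoolStats(total=4)
pool.take_buffer(); pool.take_buffer(); pool.put_buffer()
print(pool.read_min_free())   # 2  (the low-water mark)
print(pool.read_min_free())   # 3  (reset to the current value)
```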
8_._3_ _ _ _ _ _ _ _S_t_a_t_i_s_t_i_c_s_ _C_o_l_l_e_c_t_i_o_n_ 8.3
At compile time it is decided by means of the option parameters
which statistics types should be supported in a specific network.
Statistics information in the selected types will be continuously
accumulated from the time of loading the ROUTER process, whether
or not a NOC exists to collect the information. \f
In order to collect statistical information from a node, the su-
pervisor host STATO must be connected to the node, cf. section
7.1.
When STATO is connected, the NOC may then address it, requesting
statistical information to be collected at regular intervals and
sent to a s_t_a_t_i_s_t_i_c_s_ _d_e_s_t_i_n_a_t_i_o_n_ specified in the request packet,
cf. 18.
8_._4_ _ _ _ _ _ _ _S_u_r_v_e_y_ _o_f_ _S_t_a_t_i_s_t_i_c_a_l_ _R_e_c_o_r_d_ _T_y_p_e_s_ 8.4
This section gives a brief survey of the statistical types avail-
able. The authoritative sources, however, are the reference
manuals 5 - 9 for the various ROUTER modules.
buffer pool statistics:
Each buffer pool contains the following statistical
variables, cf. 9 section 2.7:
Total number of buffers
Number of free buffers
Minimum number of free buffers
Number of accesses to pool
Number of waits for a buffer from pool
Host Process Interface statistics, cf. 5, section 3.6:
For each host process:
Process description address (to identify the record)
Total number of lost packets
Number of lost packets since last input
For each local host:
Host identification (to identify the record)
Number of packets sent on each priority level
Number of packets received
\f
Supervisor Module Statistics, cf. 7, section 4.6:
For each communication line:
Number of resets of routing information for the line
Number of host table packets received
Number of range vector packets received
For external host management:
Number of host conflicts
HDLC interface statistics, cf. 8, section 4.1:
Logical line no (to identify the record)
Number of normal packets transmitted
Number of neighbour protocol packets transmitted
Number of normal packets received
Number of neighbour protocol packets received
Number of packets dropped (caused by congestion)
Packet Transporter statistics, cf. 6:
For each pipeline:
Sender host-id
Receiver host-id (to identify the pipeline)
Pipeline no
Number of transmitted normal packets
Number of transmitted supervisor packets
Number of retransmitted packets
Number of dropped packets
Max. value of retransm. counter
Max. value of acknowl. counter
Max. value of RNR counter\f
9_._ _ _ _ _ _ _ _ _R_E_P_O_R_T_S_ _C_O_L_L_E_C_T_I_O_N_ 9.
(OPTIONAL FEATURE).
Report generation in the network nodes serves the purpose of en-
abling the Network Administration to keep track of the behaviour
of the network. So any event that may serve this purpose should,
if possible, be reported by ROUTER.
At compile time for ROUTER, it is decided by means of the option
parameters, which events should cause a report to be generated.
Reports are generated by the various ROUTER modules and then de-
livered by the supervisor module to the Supervisor host REPO, cf.
section 7.1. If, at the time of delivery, it turns out that REPO
is not connected, the report will simply be dropped. Whenever RE-
PO becomes connected, however, subsequent reports will be delive-
red until it eventually becomes disconnected again.
If network time support is included in ROUTER, cf. section 5, re-
ports will be stamped with the current value of the network time.
This may facilitate concurrent analysis of reports from several
nodes.
9_._1_ _ _ _ _ _ _ _S_u_r_v_e_y_ _o_f_ _R_e_p_o_r_t_ _T_y_p_e_s_ 9.1
This section gives a brief survey of report types defined at the
time of writing. The authoritative sources, however, are the re-
ference manuals 5 - 9 for the various ROUTER modules.
Host Process Interface Reports (cf. section 4):
Host up.
Contains host-id and kind
Host exist.
Contains host-id and kind
\f
Host down.
Contains host-id and the statistical record for the
host, if statistics included (cf section 8.4)
Router ident.
Contains configuration identification for the ROUTER.
(Generated when REPO connects).
HDLC Interface Reports:
Line up.
Contains internal line number
Line down.
Contains internal line number, and statusword
Line in drop.
Contains internal line number
Indicates recovery from a congestion situation
Unsuccessful connect.
As line down.
Supervisor Module Reports:
Range vector (cf. section 3.3).
Contains the current range vector, when a change has
occurred
Neighbours.
An array with node numbers of the neighbour nodes
(or zero if none) for all internally known lines
(internal line number is index in the neighbour
array).
Host table (cf. section 4.2).
Contains the current external host table, when a
change has occurred
\f
Host conflict (cf. section 4.3).
Contains the host-id in question
Packet Transporter Reports:
None.
9_._2_ _ _ _ _ _ _ _R_e_p_o_r_t_ _H_a_n_d_l_i_n_g_ 9.2
The program containing the REPO supervisor host (cf. section 7.1)
has three versions.
The first version prints all reports on a local log device at the
node. It will ignore all requests from the NOC.
The second version prints a local log as well, but is in addition
able to send the reports to the NOC on request.
The third version will send the reports to the NOC and skip the
logging. \f
1_0_._ _ _ _ _ _ _ _D_U_M_P_ _F_U_N_C_T_I_O_N_S_ 10.
(OPTIONAL FEATURE).
A Network Operation Centre (NOC) may for various reasons need to
know the contents of routing tables etc. in the network nodes.
Part of the information may occasionally be sent in re-
ports to the NOC, cf. section 9, but this may not be sufficient
if, for instance, the NOC comes up after an error and then
needs to construct a picture of the present state of the net-
work.
The solution to the problem is that a supervisor host, DUMP, may
on request from the NOC extract copies of certain data areas
within ROUTER and send them in packets to the NOC.
More generally, DUMP may also be used to inspect and even modify
specified core locations in a node. This may be useful for troub-
le-shooting purposes.
In order to avoid absolute addressing of tables in ROUTER, selec-
ted tables are described in the so-called DUMP table in ROUTER. A
table description consists of 6 words, giving absolute address,
table length, table organization (simple table, indexed table
etc) etc. Each table description contains an identification
field, uniquely specifying the table in question.
The DUMP host may use the DUMP table for retrieval of tables or
entries within tables of ROUTER. A request for table dump sup-
plies a table identification, used as search key in the dump
table, and a specification of the required part of the table.
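A table description and its use as a search key may be sketched
as follows; the exact word layout beyond the fields named above
is not specified here, so the field split is illustrative:

```python
from dataclasses import dataclass

@dataclass
class TableDescription:
    ident: int          # unique table identification (search key)
    address: int        # absolute core address of the table
    length: int         # table length
    organization: int   # e.g. 0 = simple table, 1 = indexed table

def find_table(dump_table, ident):
    """Retrieve a table description by its identification, as done
    when a dump request arrives."""
    for desc in dump_table:
        if desc.ident == ident:
            return desc
    return None

dump_table = [TableDescription(1, 0o4000, 64, 0),
              TableDescription(2, 0o5000, 128, 1)]
print(find_table(dump_table, 2).length)   # 128
```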
The following tables are described:
coroutine descriptors
line descriptors
routing tables
buffer pools
\f
Inclusion of the DUMP table is optional at compile time for ROU-
TER. The exact table formats and identifications are described in
the reference manuals 5 - 9 for the various ROUTER modules.
\f
1_1_._ _ _ _ _ _ _ _R_E_M_O_T_E_ _L_O_A_D_ _F_U_N_C_T_I_O_N_S_ 11.
1_1_._1_ _ _ _ _ _ _I_n_t_r_o_d_u_c_t_i_o_n_ _t_o_ _R_e_m_o_t_e_ _L_o_a_d_ 11.1
Computer networks are characterized by node computers placed at
remote locations, normally not attended by network operators. As
a consequence it may be advantageous to reduce the system
complexity, and hence the vulnerability, of the node computer
systems. One way of accomplishing this, while retaining software
flexibility, is to omit load devices, such as fixed or floppy
disc devices, and instead utilize the communication link as the
load device.
Within RCNET the down-line load operations are split into two
phases:
- Autoload - (Initial program loading/error recovery)
- Application program load.
The autoload function resembles the "normal" RC3600 autoload
sequence as initiated by an operator activating the autoload button
on the Operator's Control Panel. An unattended autoload sequence
is initiated by a "watchdog" system. As a consequence the current
memory content is overwritten by an initial RCNET software
system.
Once the basic system is resident in memory, the specific
application programs may be loaded. Actually all S-commands may
be used, thus allowing any software system to be loaded, started
or removed at the remote system, for example for test purposes. \f
1_1_._2_ _ _ _ _ _ _T_h_e_ _R_e_m_o_t_e_ _S_o_f_t_w_a_r_e_ _S_y_s_t_e_m_ 11.2
The software system implemented for the RCNET Remote Load
function consists of a number of independent program modules,
physically situated in at least two RC3600 RCNET node computers.
Logically the software may be split into two main categories:
- The Remote Software System
- The Central Software System.
The remote software system consists of two independent systems:
- The Primary Load System (PLS)
- The Secondary Load System (SLS).
The PLS (cf. 21) is initially stored within the IML (F102 Image
Loader). This is a write-protected permanent storage area,
physically created by a number of EPROMs (Erasable
Programmable Read Only Memory). The content of the IML is loaded
into the RC3600 CPU main memory during an autoload sequence.
The primary task of the PLS is to load the SLS. This is
accomplished by executing the following order of events:
1) Establish an HDLC link to the neighbouring RCNET node, and
identify itself to this node.
2) Copy the core-areas specified by the Central Software System
into a number of packets, and send these through the network
to a specified receiver (post mortem dump).
3) Load the SLS. The program modules constituting the SLS are
sent in a number of packets from the Central Software System.
4) When the last packet to be loaded has arrived, the SLS and
the PLS are both resident within the core memory. The last
function of the PLS is to overwrite itself with the SLS.
Following this action the SLS is started.
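The four steps above can be sketched as a single sequence. The Python below is purely illustrative (the real PLS is RC3600 code); the "network" is modelled as plain lists of packets, and every name is an invented assumption.

```python
# Minimal sketch of the PLS autoload sequence described in steps 1-4.
# Core areas and SLS modules are modelled as lists of words.

def primary_load_system(core_areas, sls_packets):
    """Return the events of one autoload sequence, in order."""
    events = ["hdlc link established", "node identified"]       # step 1
    events += ["dumped %d words" % len(a) for a in core_areas]  # step 2
    memory = []
    for packet in sls_packets:                                  # step 3
        memory.extend(packet)
    events.append("sls loaded (%d words)" % len(memory))
    events.append("pls overwritten, sls started")               # step 4
    return events

events = primary_load_system([[0, 1, 2]], [[10, 11], [12]])
```

Note that the post mortem dump (step 2) deliberately happens before the load, so the memory image of the failed system reaches the Dump Receiver Host before it is overwritten.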
\f
The SLS represents a normal MUS RCNET system, i.e. with the MUS
monitor (actually a special version, NET-S) and ROUTER software,
but without any application programs. For this purpose the
internal host RLS - Remote Load Supervisor (cf. ref. 22) - is
contained within the SLS. With the help of the RLS it is possible
to load the remaining software into the node.
Actually the RLS host enables the entire repertoire of S-commands
to be performed remotely through the network. This means that the
following functions are allowed:
- LOAD
- LIST
- BREAK <process>
- CLEAR
- STOP/START <process>
- KILL <process>
1_1_._3_ _ _ _ _ _ _T_h_e_ _C_e_n_t_r_a_l_ _S_o_f_t_w_a_r_e_ _S_y_s_t_e_m_ 11.3
The Central Software System is loaded in one or more node-compu-
ters equipped with a disc device. The system consists of a number
of RCNET hosts:
- The Autoload Master Host (AMA)
- The Dump Receiver Host
- The Master Load Supervisor Host (MLS).
For back-up purposes the entire system may exist in any number of
incarnations within the network.
The Autoload Master Host (AMA) specifies to the Remote Software
System the core areas to be dumped, and specifies the dump recei-
ver. When the dump is completed the AMA informs the MLS which
program modules are to be loaded at the remote system.
\f
The Dump Receiver Host receives the packets containing the dumped
core areas from the remote system. These packets are stored on
disc (or any other device) for later inspection.
The MLS is an RCNET internal host process capable of loading pro-
grams at the PLS, or later the SLS, by reading binary program mo-
dules from a disc and transmitting them via RCNET to the Remote
Software System. Commands are also received through the network,
so that the MLS may be physically located in the same node
computer as the AMA or in any other one.
More detailed information pertaining to the MLS may be found in
reference 23.
1_1_._4_ _ _ _ _ _ _A_p_p_l_i_c_a_t_i_o_n_s_ 11.4
A typical network configuration where the remote load facility
may be applied could be:
(drawing)
With the aid of the Remote Load facility the remote system may
receive program modules, via the network, from the disc at the
central system.
Apart from the initial, or error recovery, load action (autoload),
the system enables the special version of the MUS System (NET-S)
at the remote node to be controlled from a distant site through
the network. It would, for example, be possible during normal
operation of the network to perform the following\f
on the remote system from a console located at the central
system:
- LOAD
- LIST
- BREAK <process>
- CLEAR
- STOP/START <process>
- KILL <process>
Provided that the necessary free core area is available in the
remote system, special test programs (traffic generators/recei-
vers etc.) may be loaded at the remote site.
\f
1_2_._ _ _ _ _ _ _ _T_E_S_T_ _F_A_C_I_L_I_T_I_E_S_ 12.
During the period of installation of a network it is important
that tools are available for testing the network before it is
taken into use. The same argument applies each time a component is
brought into operation after a period where it has been out of
order, or when the network has been extended with new nodes or
communication lines.
As network software is occasionally changed by adding new facili-
ties or correcting errors, it is also important that tools for
tracing the software operation are available.
Finally, the Network Administration may wish to test the behaviour
of the network under varying load conditions in order to
anticipate future needs.
The present section gives a broad overview of the testing tools
in RCNET, Level 1. Note that the testing tools in question may be
dynamically loaded into or removed from any node.
1_2_._1_ _ _ _ _ _ _O_p_e_r_a_t_o_r_ _C_o_m_m_u_n_i_c_a_t_i_o_n_ 12.1
The internal host OPCOM makes the console of a network node act
as a host in the network, cf. 14.
This facility has two advantages in test situations. First, a
simple test of a node may be carried out by means of OPCOM,
entering packets into the network. By directing packets to the EKKO
host at other nodes, the ability to address other network nodes
may be checked. Moreover, directions for further tests may be
sent from a Network Operation Centre to the console, possibly
answering a request sent to the centre by means of OPCOM.
By addressing the OPCOM host at other nodes, operators at diffe-
rent nodes may communicate.
\f
1_2_._2_ _ _ _ _ _ _T_e_s_t_ _T_r_a_f_f_i_c_ 12.2
The delay in communication with a given node may at a given mo-
ment be checked by sending a packet to the EKKO host at the node.
The packet will then be "echoed" from the node, having been
stamped with the time of arrival.
The actual path used by packets travelling through the network
may be traced by means of the t_r_a_c_e_ _f_a_c_i_l_i_t_y_.
Any packet sent through the network has a f_a_c_i_l_i_t_y_ _m_a_s_k_ (cf. ap-
pendix B) indicating special actions to be taken in each network
node passed by the packet. One possible action is selected by the
t_r_a_c_e_ _b_i_t_ (1B15). If this bit is set, each node will stamp the
packet with the network time of arrival and the node number of
the node. The information will be appended to the packet, so at
the time of arrival at the destination, the packet will contain a
list of the nodes passed on its way.
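The trace mechanism just described can be sketched as follows. The Python below is an illustration only: the packet and clock models are invented, and the only detail taken from the text is that the trace bit is 1B15 of the facility mask (in RC3600 notation bit 15 of a 16-bit word, counting bit 0 as the most significant bit, i.e. the least significant bit).

```python
# Sketch of the trace facility: if the trace bit of the facility mask
# is set, every node a packet passes appends (network time, node number).
# The packet is modelled as a dict; this is not the real packet format.

TRACE_BIT = 0o1  # 1B15: least significant bit of the 16-bit mask

def pass_node(packet, node_number, network_time):
    """Called in each node the packet travels through."""
    if packet["facility_mask"] & TRACE_BIT:
        packet["trace"].append((network_time, node_number))
    return packet

packet = {"facility_mask": TRACE_BIT, "trace": []}
for node, t in [(3, 100), (7, 104), (12, 109)]:
    pass_node(packet, node, t)
# On arrival, packet["trace"] lists the nodes passed on the way.
```

Since the stamps are appended in transit, the destination can read off both the route taken and the per-hop delays from a single packet.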
By means of the supervisor host TRAFO, artificial traffic may be
generated from any node in the network, cf. 17.
TRAFO may generate, in parallel, any number of traffic streams, a
traffic stream being identified by two parameters: the receiver
host and the priority of the packets.
A traffic stream from a node is started by sending to TRAFO at
the node a s_t_a_r_t_ _t_r_a_f_f_i_c_ packet containing the following para-
meters:
receiver host
priority
identification
facility mask
packet text length
frequency
maximal number of packets (may be infinite)
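The parameter list above maps naturally onto a packet generator. The sketch below is illustrative Python, not TRAFO itself; the packet layout and field names are invented, and only the parameters themselves are taken from the text.

```python
# Sketch of one TRAFO traffic stream: sequentially numbered packets
# with standard text contents, built from the start-traffic parameters.

def traffic_stream(receiver_host, priority, identification,
                   facility_mask, text_length, max_packets):
    """Yield the packets of one traffic stream."""
    for seq in range(1, max_packets + 1):
        yield {
            "receiver": receiver_host,
            "priority": priority,
            "ident": identification,
            "facility_mask": facility_mask,
            "seq": seq,                 # for control at the receiver
            "text": "X" * text_length,  # standard text contents
        }

packets = list(traffic_stream("CATCH", 1, 42, 0, 8, 3))
```

The sequence numbers let the receiver (for instance the CATCH host of section 12.3) detect lost or reordered packets in the stream.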
\f
The packets sent will have the standard contents of the text
field, but the length is determined by the parameter packet text
length. The packets in a traffic stream are sequentially numbered
for control purposes at the receiver.
Packets will be generated independently for each traffic stream
with the specified frequency. The distribution will be uniform.
A traffic stream will stop when the specified number of packets
has been sent, but it may also be stopped deliberately by sending
to TRAFO a s_t_o_p_ _t_r_a_f_f_i_c_ packet, specifying the traffic stream
to be stopped.
1_2_._3_ _ _ _ _ _ _D_e_s_t_i_n_a_t_i_o_n_ _f_o_r_ _T_e_s_t_ _T_r_a_f_f_i_c_ 12.3
Test traffic may in principle be addressed to any host in the
network (receiver host in parameters to TRAFO). However a stan-
dard destination CATCH is defined at each node.
As mentioned in section 7.1, CATCH is one of the two internal
host functions explicitly recognized by ROUTER. If a "CATCH-host"
is connected at the moment, packets addressed to it are delivered
in the normal way. Otherwise they are simply dropped in ROUTER at
the node, in order that the CATCH host may act as a general de-
stination at the node without requiring a process to handle the
received packets.
The standard program CATCH may be used to store received packets
in a file on disc, magnetic tape, flexible disc etc. for later
inspection or analysis.
\f
1_2_._4_ _ _ _ _ _ _T_e_s_t_ _O_u_t_p_u_t_ _f_r_o_m_ _R_O_U_T_E_R_ 12.4
Two different kinds of test output may be produced by ROUTER.
Both of them are described in detail in 24.
The Coroutine Testprogram serves the purpose of tracing the
cooperation and execution sequence of the various ROUTER modules.
This is performed by generating test records on certain occasions,
for instance when a buffer is released to its buffer pool.
Coroutine Testoutput is generated using the standard tools of the
Coroutine Monitor, cf. 25 - 26.
ROUTER Traceoutput serves the purpose of tracing the packet flow
through a node in general, and the effects of packets in the
neighbour protocol in particular (cf. section 2.5).
Trace records are generated in selected modules of ROUTER, for
instance in the control routing procedure ROUTE. They are deli-
vered from ROUTER to some trace output collection program by cal-
ling the external procedure TEST, cf. 24, section 4.
The standard program for CATCH (cf. section 12.3) will supply to
ROUTER a procedure TEST, which delivers trace records as special
type packets to the internal host CATCH. By this means, the trace
records are merged into the stream of normal packets addressed to
CATCH.
By keyboard commands to CATCH the operator at a node may select
certain packet types or trace record types to be stored in the
file, while the types not selected are skipped.
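The selection mechanism described above amounts to a simple type filter. The sketch below is illustrative Python only; the type codes and record layout are invented assumptions, not the actual CATCH formats.

```python
# Sketch of CATCH type filtering: only packet or trace record types
# the operator has selected are stored in the file; the rest are skipped.

selected_types = {"data", "trace"}  # set by keyboard commands (assumed)

def catch_filter(records):
    """Return only the records whose type the operator selected."""
    return [r for r in records if r["type"] in selected_types]

stored = catch_filter([
    {"type": "data",    "seq": 1},
    {"type": "control", "seq": 2},
    {"type": "trace",   "seq": 3},
])
```

Filtering at collection time keeps the file small, while CAPRI later does the structured formatting of whatever was stored.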
The utility program CAPRI (cf. 20) may be used to print the
stored packets in a structured format defined for each packet- or
trace record type.
\f
A_P_P_E_N_D_I_X_ _A_ _-_ _R_E_F_E_R_E_N_C_E_S_ A.
1 RCNET General Information
RCSL No 42-i 1277
2 CCITT Recommendation X.25
Orange Book Volume VIII.2 - 1976
3 RCNET
FDLC - FPA Data Link Control
RCSL No 43-GL 7810
4 RCNET Level 1 Host Process Interface
Programmers Reference
RCSL No 43-GL 5071
5 RCNET Level 1 Host Process Interface
Reference Manual
RCSL No 43-GL 5070
6 RCNET Level 1 Packet Transporter
Programmers Reference
RCSL No 43-GL 6112
7 RCNET Level 1 Router, Supervisor
Reference Manual
RCSL No 43-GL 5204
8 RCNET Level 1 Router, HDLC Interface
Reference Manual
RCSL No 43-GL 7893
9 RCNET Level 1 Router, Common Procedures
Reference Manual
RCSL No 43-GL 5205 \f
10 RCNET Level 1 Neighbour Node Protocol
Reference Manual
RCSL No 43-GL 5034
11 Communications of the ACM
July 1977 Vol. 20 no 7
"A Correctness Proof of a Topology Information
Maintenance Protocol for a Distributed Computer
Network"
William D. Tajibnapis
12 Application Program: TIME
RCSL No 43-GL 4935
13 MUS System
Programming Guide
RCSL No 43-GL 8902
14 RCNET Level 1, OPCOM
Reference Manual
RCSL No 43-GL 5025
15 RCNET Level 1, EKKO
Reference Manual
RCSL No 43-GL 5033
16 RCNET Level 1, REPO
Reference Manual
RCSL No 43-GL 6531
17 RCNET Level 1, TRAFO
Reference Manual
RCSL No 43-GL 5501
\f
18 RCNET Level 1, STATO
Reference Manual
RCSL No 43-GL 7090
19 RCNET Level 1, Remote Load
General Description
RCSL No 43-GL 8330
20 RCNET Level 1, Catch/Capri
Programmers Reference
RCSL No 43-GL 6050
21 RCNET Level 1, Primary Load System
Programmers Reference
RCSL No 43-GL 8331
22 RCNET Level 1, Remote Load Supervisor
Reference Manual
RCSL No 43-GL 7640
23 RCNET Level 1, Master Load Supervisor
Programmers Reference
RCSL No 43-GL 7786
24 RCNET Level 1, ROUTER Testoutput
Reference Manual
RCSL No 43-GL 5484
25 Coroutine Monitor Testoutput
User's Guide
RCSL No 43-GL 4475
26 The Extended Coroutine Monitor for RC3600
Programmer's Manual
RCSL No 43-GL 4715 \f
A_P_P_E_N_D_I_X_ _B_ _-_ _R_C_N_E_T_,_ _L_E_V_E_L_ _1_ _-_ _P_A_C_K_E_T_ _F_O_R_M_A_T_ B.
(figur)\f
\f
T_A_B_L_E_ _O_F_ _C_O_N_T_E_N_T_S_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _P_A_G_E_
1. GENERAL DESCRIPTION .................................... 1
2. SPECIFICATIONS ......................................... 2
2.1 Mains Requirements ................................ 2
2.2 Environmental Specifications ...................... 2
2.3 Performance ....................................... 2
2.4 Logic Signals ..................................... 3
3. PRINCIPLES OF OPERATION ................................ 6
3.1 AC Mains Interface ................................ 6
3.2 Master Control Circuitry .......................... 7
3.2.1 Service Power and Signalsource ............. 8
3.2.2 Voltage and Temperature Monitoring ......... 10
3.2.3 Control Logic .............................. 11
3.3 Converters ........................................ 13
3.3.1 Power Transformation ....................... 13
3.3.2 Voltage Regulation and Overload Protection . 17
3.3.3 Voltage Checkout ........................... 20
4. ADJUSTMENT PROCEDURES .................................. 21
4.1 Voltage Adjustment ................................ 21
4.2 Current Limit Adjustment .......................... 21
5. DIAGRAMS ............................................... 23
5.1 J1 and J2 Backpanel Connectors .................... 23
5.2 Master Control Circuitry I ........................ 24
5.3 Master Control Circuitry II ....................... 25
5.4 POW729 +5 V Power Module, Control Section ......... 26
5.5 POW729 +5 V Power Module, Power Section ........... 27
5.6 POW733 +_12 V Power Module, Control Section ........ 28
5.7 POW733 +_12 V Power Module, Power Section .......... 29
5.8 POW204 Internal Cabling ........................... 30
5.9 CRA202 Cabling .................................... 31
6. OSCILLOGRAMS ........................................... 32 \f
\f
1_._ _ _ _ _ _ _ _ _G_E_N_E_R_A_L_ _D_E_S_C_R_I_P_T_I_O_N_ 1.
The POW204 switch mode power supply is housed in a DIN 41.494
compatible plug-in unit. It is a self-contained unit producing +5,
+12 and -12 Volt DC directly from the 220 Volt mains supply.
The power supply is installed in the Module Crate from the front
of the cabinet. All connections to the logic boards are provided
through two backpanel connectors. AC mains is connected through a
cable plugged in at the back of the power supply.
One of the two backpanel connectors holds the various control
signals. The other connector holds the high current terminals.
At the front of the unit, four light emitting diodes indicate the
status of the power supply. A reset push button and a main cir-
cuit on/off switch are also available.
The internal construction of the POW204 is structured as a main-
frame holding the master logic, the AC main supply interface and
three slots for power modules.
The master control logic provides functions such as AC main power
monitoring, voltage reference, clock oscillator, overvoltage- and
overtemperature protection, Power OK and Power Interrupt signal
generation and visual indicators.
The AC main supply interface includes electromagnetic interfer-
ence filter, fuse, circuit breaker, inrush current limiter, full
wave rectification and a 310 V DC energy reservoir of 25 Joule
minimum.
The power modules are DC to DC switch mode converters, producing
the low voltage output current from the 310 Volt DC.
Generally, the POW204 is equipped with one 5 Volt and one +_ 12
Volt power module of 100 Watt each, leaving space for one addi-
tional 5 Volt module. But for non-standard applications, other
combinations and even other voltages are possible.
\f
2_._ _ _ _ _ _ _ _ _S_P_E_C_I_F_I_C_A_T_I_O_N_S_ 2.
The following specifications are valid for the POW204 equipped
with two POW729 5 Volt modules and one POW733 +_ 12 Volt module.
2_._1_ _ _ _ _ _ _ _M_a_i_n_s_ _R_e_q_u_i_r_e_m_e_n_t_s_ 2.1
Min. Nom. Max.
Frequency: 45 50 66 Hz
Voltage: 198 220 242 Volt RMS
280 311 342 Volt Peak
Phase: Single phase
Current: 2,5 A RMS
10 A Peak
Starting surge: 5 A Peak
Allowable disturbances:
Mains drop-out: 1 half-period each second
Width of spikes: 50 µS
Magnitude of spikes: 800 V Peak
2_._2_ _ _ _ _ _ _ _E_n_v_i_r_o_n_m_e_n_t_a_l_ _S_p_e_c_i_f_i_c_a_t_i_o_n_s_ 2.2
Min. Norm. Max.
Ambient temperature: 5 - 45 °C
Relative humidity: 10 - 80 %
Heat generation - - 360 KJ/hour
Air flow 30 - - m³/hour
2_._3_ _ _ _ _ _ _ _P_e_r_f_o_r_m_a_n_c_e_ 2.3
5_ _V_o_l_t_ _o_u_t_p_u_t_: Min. Norm. Max.
Volt (Iout < 40 A) -1% Uref +1% Volt
Short Circuit Current 80 A
Ripple (f < 1 MHz) 250 mV
Ripple + HF noise 500 mV \f
Min. Norm. Max.
Voltage OK level: 4.65 4.70 4.75 Volt (Hi going)
4.50 4.00 4.70 Volt (Lo going)
Overvoltage level: 5.25 5.60 5.95 Volt
+_ 1_2_ _V_o_l_t_ _o_u_t_p_u_t_
Min. Norm. Max.
Uout+ (I+ < 4 A,I+ - I- < 3 A):
-5% 2.4*Uref +5% Volt
Uout- (I- < 4 A,I+ - I- < 3 A):
-5% -2.4*Uref +5% Volt
Short circuit current: 16 A
Ripple (f < 1 MHz): 200 mV
Ripple + HF noise: 500 mV
Voltage OK level
(+12 V): 11.0 11.3 11.6 Volt (Hi going)
10.5 11.0 11.4 Volt (Lo going)
(-12 V): -11.4 -10.9 -10.4 Volt (Hi going)
-11.7 -11.3 -10.9 Volt (Lo going)
Overvoltage level
(+12 V): 12.6 13.4 14.5 Volt
(-12 V): -14.5 -13.4 -12.6 Volt
M_i_s_c_e_l_l_a_n_e_o_u_s_
Uref adjustability: 4.7 5.0 5.3 Volt Mean
Uref stability: 1%
Clock frequency: 19 20 24 KHz
"310 V DC" Reservoir 680 F
Mains (reservoir) OK levels:
Hi going threshold 250 267 284 Volt DC
Lo going threshold 198 212 224 Volt DC
Hysteresis 51 56 61 Volt DC
2_._4_ _ _ _ _ _ _ _L_o_g_i_c_ _S_i_g_n_a_l_s_ 2.4
The backpanel connector J1 provides several logic signals. Refer
to section 5.1 for pin-out definition.
\f
P_o_w_e_r_ _M_o_n_i_t_o_r_ _T_i_m_i_n_g_
(See fig. 1).
Min. Norm. Max.
ACLO to -,INIT delay
powering off, t1: 4 8 15 mS.
powering on, t3: 4 8 15 mS.
ACLO to Power OK delay
powering off, t2: 2 mS.
Power OK to ACLO delay
powering on: 0 mS.
-,INIT pulse width t4: 7
ACLO pulse width t5: 7
-,RESET pulse width t6: 5
(The -,RESET pulse is initiated by -,INIT being active a_n_d_ Power
OK being false, and remains active until -,INIT is released).
E_l_e_c_t_r_i_c_a_l_ _C_h_a_r_a_c_t_e_r_i_s_t_i_c_s_
ACLO: Open collector output. Must be pulled up externally by
battery backed-up devices. Active high.
VOL at IOL = 48 mA: 0.4 Volt
IOH at VOH = 5.5 V: 250 µA
Figure 1: Power Monitor sequencing.
\f
-,INIT: Open collector output, internally pulled up by 1 KOhm.
Active low.
VOL at IOL = 47 mA: 0.4 Volt
VOH at IOH = -1.6 mA 3.0 Volt
-,RESET: Electrical identical to -,INIT
-,Power OK: Active low current sink for external LED indicator
VOL at IOL = 32 mA: 0.4 Volt
VOH at IOH = 400 µA: 3.0 Volt
-,Power OK RTN: 5 Volt through 220 Ohm.
-,TEMP: Remote shut down input. Active low.
Trigger voltage: 6.5 11 Volt
Trigger current: -0.2 -2 mA
-,AUTOLOAD:
High level input: 4 - 5 Volt
Low level input: 0 - 1 Volt
\f
3_._ _ _ _ _ _ _ _ _P_R_I_N_C_I_P_L_E_S_ _O_F_ _O_P_E_R_A_T_I_O_N_ 3.
The POW204 power supply may functionally be divided into three
sections:
1. AC mains interface,
2. Master Control circuitry,
3. Converter Modules.
In this chapter each of these parts is discussed in detail.
3_._1_ _ _ _ _ _ _ _A_C_ _M_a_i_n_s_ _I_n_t_e_r_f_a_c_e_ 3.1
The AC mains supply is plugged in at the back of the power supply.
From the socket the AC mains is connected - via the fuse, circuit
breaker and EMI filter - to the CCB201 printed circuit board.
On the CCB201 the AC current passes through a Graetz rectifier
bridge and a 68 Ohm resistor to the capacitor energy reservoir
(not on the PCB).
Via an optocoupler and an SCR, the control logic is able to shunt
the 68 Ohm resistor when the voltage of the reservoir has reached
an operational level (app. 260 Volt DC). In this way the inrush
current is limited to max. 5 A peak.
A level detector monitors the voltage of the reservoir and
through an optocoupler the status is sent to the control logic.
The reservoir capacitor contains a usable energy of

E = 0.5 × C × (U1² - U2²)

where C is the capacitance, U1 is the initial voltage and U2 is the
lower limit of primary voltage for the converters to operate.
\f
Assuming U1 = 300 V, U2 = 200 V and C = 680 µF this gives:

E = 0.5 × 680 × 10⁻⁶ × (300² - 200²) Joule = 17 Joule

Without charging, and at a load of 300 Watt, the reservoir
maintains an operational voltage level for

t = 17/300 S = 57 mS

This figure lies in the interval between two and three 50 Hz
periods.
W_A_R_N_I_N_G_: The large energy at high voltage represents a
potential danger of deadly electrical shock.
The energy store is constantly discharged through 50 KOhm. This
ensures that the primary voltage, starting from the highest
initial value U0 = 350 Volt and without any load, has dropped to a
safe value Ut = 48 Volt at the time t after power has been removed:

t = R × C × ln(U0/Ut) = 6_0_ _S_.
NOTE: After having turned off power, wait 1_ _m_i_n_u_t_e_ before
touching any interior part of the power system.
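The reservoir arithmetic above is easy to check in code. The Python below simply repeats the document's own figures (the 300 Watt load is the figure used in the hold-up calculation); the bleeder discharge time comes out in the same one-minute range as stated.

```python
# Check of the energy reservoir calculations using the document's values.
import math

C = 680e-6                              # reservoir capacitance, Farad
U1, U2 = 300.0, 200.0                   # initial / minimum primary voltage
E = 0.5 * C * (U1**2 - U2**2)           # usable energy, Joule (~17 J)

P = 300.0                               # assumed full load, Watt
t_hold = E / P                          # hold-up time, seconds (~57 mS)

R = 50e3                                # bleeder resistance, Ohm
U0, Ut = 350.0, 48.0                    # worst-case start / safe voltage
t_discharge = R * C * math.log(U0 / Ut) # time to reach the safe value
```

The hold-up time of two to three 50 Hz half-cycles is exactly what the "mains drop-out: 1 half-period" tolerance in section 2.1 requires.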
3_._2_ _ _ _ _ _ _ _M_a_s_t_e_r_ _C_o_n_t_r_o_l_ _C_i_r_c_u_i_t_r_y_ 3.2
The master control circuitry resides on the CCB201 circuit board.
It takes care of the following tasks:
1. Providing service power and control signals for the con-
verter modules,
2. Voltage and temperature monitoring,
\f
3. Status and control signal interface and visual status
indication.
3_._2_._1_ _ _ _ _ _S_e_r_v_i_c_e_ _P_o_w_e_r_ _a_n_d_ _S_i_g_n_a_l_s_o_u_r_c_e_ 3.2.1
Service power is obtained via a 12 Veff line transformator, a
double rectifier and filter capacitor and provides a 20 Volt
unregulated DC source capable of supplying 200 mA.
The converters are controlled by the master control circuitry
using only one single physical line. This line (called REF. OSC.)
contains the following information:
1. converter switching frequency,
2. maximum converter dutycycle,
3. reference voltage.
The voltage on this line is a square wave alternating between
approx. 0 Volt and approx. 11 Volt.
All converters synchronize to the frequency of this square wave.
The duty cycle of this signal indicates to the converters the
maximum duty cycle of the output switch transistor; that is,
conduction of the output transistor is inhibited during the low
time of the clock line. This is used for soft start-up, as the
duty cycle at power-up is increased slowly from zero to nominal.
The average voltage measured over one entire cycle is the reference
voltage. All the converters regulate their output to maintain the
voltage at a specific ratio to this reference voltage.
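The way one square wave carries three quantities at once can be made concrete with a small calculation. The sketch below is illustrative Python; the 0/11 Volt levels are the approximate values stated above, and the ramp models the soft start-up.

```python
# The REF. OSC. line: a square wave between roughly 0 and 11 Volt.
# Its frequency is the converter clock, its duty cycle bounds the
# switch conduction, and its cycle average is the reference voltage.

def ref_osc_average(u_high, u_low, duty_cycle):
    """Average voltage over one entire cycle = the reference voltage."""
    return duty_cycle * u_high + (1.0 - duty_cycle) * u_low

# During soft start-up the duty cycle ramps from zero towards nominal,
# so the reference (and thus the regulated outputs) rises gradually:
ramp = [ref_osc_average(11.0, 0.0, d / 10.0) for d in range(0, 11)]
```

Because the converters regulate to a fixed ratio of this average, ramping the duty cycle automatically soft-starts every output at once.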
The reference voltage is derived from an adjustable voltage
divider as a fraction of the output of an integrated 12 Volt
regulator.
\f
Figure 2: Reference Clock signal.
Figure 3: Reference Oscillator.
\f
The clock signal generator is in principle composed of a timer of
type 72555, an integrator Rf Cf and a voltage adjust resistor Rv,
as shown in fig. 3.
The 72555 contains a flip-flop and two voltage comparators: one
sets the flip-flop when Ui is lower than Us, and the other clears
it when Ui is higher than Ur. Also contained are the three 5 KOhm
resistors which, together with Rv, define Ur and Us in relation to
the stabilized 12 Volt. The waveforms generated by this circuit are
shown in fig. 2.
It can be shown that, for a constant 12 Volt supply, the average
value of Uo is almost independent of the Uo high and low levels.
During start-up, the control circuit controls the duty cycle of
the clock line signal by clamping the voltage Ur low (this of
course affects the reference voltage).
3_._2_._2_ _ _ _ _ _V_o_l_t_a_g_e_ _a_n_d_ _T_e_m_p_e_r_a_t_u_r_e_ _M_o_n_i_t_o_r_i_n_g_ 3.2.2
In order to protect the system from damage caused by excess
temperature and/or supply voltage, the power supply is shut down
if either of these conditions is detected.
The status is latched and visually displayed. To restart the power
supply, either the front panel reset switch must be activated, or
the mains must be switched off.
The temperature sensor consists of a reed relay held open by a
permanent magnet. If the temperature exceeds the Curie point of
the magnetic material, the reed relay contacts close and a logic
TRIAC is triggered.
Via the backpanel connector, external temperature sensors may be
connected in parallel.
The overvoltage sensors are made up of three comparators,
checking the +5 V, -12 V and +12 V output voltages. The voltage\f
reference used is the 12 V output of a three-terminal regulator.
Besides the protective overvoltage monitoring, the output volt-
ages are also checked to be above a minimum level. This function
is performed on each converter module, and the Power OK status is
passed to the master control logic through an open-collector
"wired AND" line.
3_._2_._3_ _ _ _ _ _C_o_n_t_r_o_l_ _L_o_g_i_c_ 3.2.3
The principles of the control logic are illustrated in fig. 4.
Starting with no power applied to the system, all signals are low,
except ACLO, which may be pulled up externally by battery
backed-up devices.
After power has been turned on, the "operating condition" signal
becomes active when the primary DC voltage levels are greater
than the positive going thresholds. This causes the clock oscil-
lator to start running, gradually increasing the duty cycle of
the REF OSC.
When all output voltages have exceeded the minimum level, the
Power OK status from the converter modules goes high. This causes
ACLO to be pulled down, and approx. 5 mS later -,INIT is pulled
up.
If, during this steady state of operation, any of the operating
conditions goes false (e.g. overtemperature), ACLO is instantly
asserted, signalling a possible power loss within the next few
milliseconds.
Approx. 5 mS later -,INIT is also asserted, and if the operating
condition is still false, the clock is stopped.
Even if the interruption in the operating condition is very
short, a correct and full sequence of ACLO and -,INIT is always
guaranteed because of the RS-latch in the logic.
\f
Figure 4: Control logic.
\f
Another RS-latch, driving a light emitting diode, indicates
whether a power interrupt has occurred. This latch is only cleared
by the reset switch, or if power has been removed for more than 10
seconds.
The -,RESET signal is almost identical to the -,INIT signal, but
it will only be asserted if the output voltage has actually been
low. This means that short interruptions of the mains may affect
the ACLO and -,INIT signals but not -,RESET. In other words, the
-,RESET signal is not as sensitive to mains disturbances as the
-,INIT signal.
3_._3_ _ _ _ _ _ _ _C_o_n_v_e_r_t_e_r_s_ 3.3
The power converters provide the following functions:
1. power transformation and galvanic separation,
2. voltage regulation,
3. overload protection (current limiting),
4. voltage checkout.
3_._3_._1_ _ _ _ _ _P_o_w_e_r_ _T_r_a_n_s_f_o_r_m_a_t_i_o_n_ 3.3.1
Figure 5: Basic converter.
\f
In fig. 5 the principle of power transformation is shown. The
mode of operation is that of a ringing choke flyback converter.
This means that the clock period is normally divided into three
slots (as shown in fig. 6):
a: conduction period,
b: flyback period,
c: waiting period.
Figure 6: Collector voltage wave form.
During the conduction period, the 300 Volt DC voltage is applied
through the transistor BUX83 and the base drive circuitry to the
primary winding of the transformer. The secondary rectifier is
reverse biased and hence not conducting. This means that the
transformer acts as a pure self-inductance. With constant
voltage applied, the current through the primary will increase at
a constant rate, charging the transformer air gap with magnetic
energy.
When the transistor is then turned off, the voltage on both
transformer windings will reverse and current will flow through the
rectifier diode into the output and the output capacitor,
discharging the magnetic energy stored in the transformer into the
output. During this period the secondary current will decrease from
its maximum value to zero at a constant rate (a constant voltage is
in fact applied to the secondary). After this the converter enters
the waiting period, waiting for a new cycle to be initiated.
\f
During this period neither the transistor nor the output diode
is conducting and the transformer "rings", i.e. the voltage on
the windings will show damped sinusoidal oscillations at a
frequency determined by the transformer characteristics and stray
capacitances, caused by a small amount of residual energy left in
the system when the secondary diode turns off.
This behaviour has given rise to the name "ringing choke". Let a,
b and c represent the time durations of the three periods; then
the ratio b/a equals the ratio between the input voltage and n
times the output voltage, where n is the turns ratio between the
primary and secondary windings. Hence varying a is a way to
control the output voltage against a varying primary voltage. If
we call the total cycle time T, the ratio a/T is proportional to
the transferred power as long as the converter is in the ringing
choke region. The conclusion is that the converter can be entirely
controlled by simply controlling the time a.
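The timing relation just stated can be written out directly. The Python below only restates the text's own relation b/a = Uin/(n × Uout); the numeric example values (input voltage, output voltage, turns ratio, conduction time) are illustrative assumptions.

```python
# The ringing choke timing relation: b/a = Uin / (n * Uout), so the
# flyback period b follows from the conduction period a.

def flyback_period(a, u_in, u_out, n):
    """Return b, the flyback period, given the conduction period a."""
    return a * u_in / (n * u_out)

# Illustrative example: 300 V input, 5 V output, turns ratio n = 20.
a = 10e-6                            # conduction period, seconds
b = flyback_period(a, 300.0, 5.0, 20)
# With these values b = 3a, and a + b must fit inside the clock
# period T, the remainder being the waiting period c.
```

Since a/T sets the transferred power, the regulator only ever has to adjust a single quantity, the conduction time a.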
Figure 7: Snooper circuit. (I1 is the snooper current path, I2 is
the capacitor recovery path, and Tr is the transformer primary
stray inductance.)
During none of the periods a, b or c is significant power
dissipated in the switch transistor collector circuit. However, in
the transition time between the conduction and the flyback period,
the collector voltage on BUX83 swings from zero to 600 Volt with
maximum current flowing in the transformer primary and a very
high voltage overshoot. This is at the limit of what the
transistor can stand, and represents an unnecessary power loss in
the transistor. This is the reason for the existence of the
snooper circuit. See fig. 7.
\f
The main components in this circuit are one diode and one capaci-
tor. When the transistor turns off, the diode turns on, letting
the transformer current flow through the capacitor, leaving the
transistor currentless and limiting the overshoot to about 850 V
maximum collector voltage. During this transition the energy
stored in the transformer primary stray inductance is converted
to electrostatic energy in the capacitor. The voltage on the
capacitor is reversed, however, and has to be reversed again
before the circuit can handle a new transition of the described
nature. This is done by means of another inductor and another
diode at the start of the next conduction period. These precau-
tions reduce the collector dissipation in the transistor,
increasing the reliability.
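The energy bookkeeping of this transition can be sketched as follows; the component values are assumed purely for illustration, since the manual gives none of them beyond the approx. 850 V limit:

```python
import math

# The energy stored in the primary stray inductance at turn-off,
# E = 1/2 * L * I^2, ends up as electrostatic energy in the snooper
# capacitor, adding a voltage swing dV = I * sqrt(L / C) to the collector
# voltage. L, I and C below are assumptions, not values from the manual.

def overshoot(l_stray, i_peak, c_snub):
    """Extra collector-voltage swing absorbed by the snooper capacitor."""
    return i_peak * math.sqrt(l_stray / c_snub)

dv = overshoot(l_stray=20e-6, i_peak=2.0, c_snub=1.5e-9)
print(round(dv), "V")  # about 231 V on top of the flyback voltage
```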
The two screens in fig. 5 are connected to "cold" terminals on
the two transformer windings, and help reduce the converter-
generated RF noise to a minimum.
Figure 8: Base drive circuit.
A high-voltage switching power transistor is characterized by a
very small current amplification if a low collector-emitter satu-
ration voltage is to be guaranteed. This would require much
service power if the driver circuit were to drive the switch
transistor base directly. Therefore the emitter current of the
switch transistor is led through a winding on the base drive
transformer; that is, the base current is taken from the transis-
tor emitter current rather than from the more "expensive" service
power source. Another advantage is that the base current is at
all times (except at the very beginning of each cycle) propor-
tional to the strongly varying collector current, minimizing the
base drive power dissipation.
This mode of operation is to some degree similar to that of a
thyristor, but shorting the primary of the drive transformer will
turn off the transistor. This is also the case when the drive
transformer saturates.
The primary of the base drive transformer consists of two
windings: a start winding to which a short pulse is applied when
the switch transistor is turned on, and a hold-off winding used
to short the transformer flux when the output transistor is
turned off.
To ensure that the output transistor is not turned on by noise
when the drive circuit has no service power applied, the short
circuit is formed by an FET acting as a "normally on" switch
(active off).
3_._3_._2_ _ _ _ _ _V_o_l_t_a_g_e_ _R_e_g_u_l_a_t_i_o_n_ _a_n_d_ _O_v_e_r_l_o_a_d_ _P_r_o_t_e_c_t_i_o_n_ 3.3.2
As pointed out in subsection 3.3.1, the converter operation is
completely controlled by controlling the conduction duty cycle;
hence this is the way voltage regulation is carried out.
The control module broadcasts a square-wave clock signal whose
average value is the 5 V reference, so what the converter regula-
tion has to do is to integrate the clock signal (by means of an
RC circuit), extracting the five volts, compare it with the
modified sense voltage, and use the difference to regulate the
conduction duty cycle.
Fortunately, the integration of a square wave gives a triangular
wave with a peak voltage depending on the frequency and the value
of the RC product. By choosing the values properly, comparison of
the triangle and the modified sense voltage directly produces a
square wave with the wanted duty cycle. Figs. 9 and 11 show the
circuits.
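A minimal behavioral sketch of this comparison, assuming an illustrative triangle amplitude of 0.5 V around the 5 V reference (the manual does not state the actual amplitude):

```python
# The integrated clock is a triangle centred on the 5 V reference; comparing
# it against the (modified) sense voltage directly yields the switch duty
# cycle. The 0.5 V triangle peak below is an assumption for illustration.

def duty_cycle(v_sense, v_ref=5.0, v_tri_peak=0.5):
    """Fraction of the cycle the triangle exceeds the sense voltage."""
    lo, hi = v_ref - v_tri_peak, v_ref + v_tri_peak
    if v_sense <= lo:
        return 1.0      # output too low: full conduction
    if v_sense >= hi:
        return 0.0      # output too high: no conduction
    return (hi - v_sense) / (hi - lo)

print(duty_cycle(5.0))  # nominal output -> 0.5 (50 % duty)
```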
Figure 9: Voltage regulation circuits.
Figure 10: Overload protection latch.
\f
Figure 11: Voltage regulation circuit for tracking outputs.
On the ±12 V module, the main transformer contains two separate
sets of secondary windings. The output voltages are therefore not
mutually independent.
The regulation circuit implementation illustrated in fig. 11
controls the switch transistor in order to keep the numerical sum
of the two output voltages constant.
As the sense voltage is measured at the load, the circuit automa-
tically compensates for the load-dependent voltage drop across
the converter-to-load connections (+5 V only).
Fig. 10 shows the logic contents of the protection circuit (block
P in fig. 9). The current signal becomes true when the converter
primary current, as measured via a current transformer, exceeds a
certain threshold (approx. 2 A). This signal sets the overcurrent
flip-flop, which limits the converter duty cycle. As shown, the
clock-signal low level resets this flip-flop but also inhibits
the converter, limiting the maximum duty cycle.
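The latch behaviour described above can be sketched as a small state function (a behavioural model, not the actual logic of fig. 10; all names are mine):

```python
# Behavioural sketch of the per-cycle overcurrent latch: overcurrent sets a
# flip-flop that ends conduction for the rest of the cycle; the clock low
# level resets the flip-flop while itself inhibiting the drive, which caps
# the maximum duty cycle.

def drive_enabled(clock_high, latched):
    """Converter conducts only while the clock is high and nothing is latched."""
    return clock_high and not latched

def step(clock_high, overcurrent, latched):
    """One evaluation of the latch: set on overcurrent, reset on clock low."""
    if not clock_high:
        latched = False          # clock low resets the flip-flop...
    elif overcurrent:
        latched = True           # ...overcurrent ends conduction this cycle
    return latched, drive_enabled(clock_high, latched)

latched, drive = step(clock_high=True, overcurrent=True, latched=False)
print(drive)  # False: conduction inhibited for the rest of the cycle
```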
\f
3_._3_._3_ _ _ _ _ _V_o_l_t_a_g_e_ _C_h_e_c_k_o_u_t_ 3.3.3
The voltage checkout circuit compares the voltage on the sense
lines (or a fixed ratio of that voltage) with the voltage of a
precision Zener diode, and releases the Power OK status signal
line (called "VOLT. BUS") when the comparison is satisfactory.
The VOLT. BUS line is an open-collector "wired AND" signal, in-
forming the master control logic whether the output voltages are
above a certain minimum level.
\f
F_ 4_._ _ _ _ _ _ _ _ _A_D_J_U_S_T_M_E_N_T_ _P_R_O_C_E_D_U_R_E_S_ 4.
In the following sections the adjustment of the potentiometers on
the control circuit board and on the power modules is discussed.
4_._1_ _ _ _ _ _ _ _V_o_l_t_a_g_e_ _A_d_j_u_s_t_m_e_n_t_ 4.1
The output voltages from the different power modules are central-
ly adjusted from the control circuit board. Individual adjustment
of a separate voltage is not possible.
The central voltage adjustment is made possible by the fact that
every power module utilizes the integrated average voltage of the
clock signal as its basic reference voltage. Therefore, adjust-
ment of the mean value of the clock influences all the output
voltages.
The use of precision resistors (1%) in the voltage-dividing net-
works relating the output voltages to the average reference
clock ensures that all voltages are within 1% of the nominal
value (provided the reference clock is exact).
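As an illustration of such a sense divider relating an output to the 5 V reference point (the resistor values below are hypothetical, not taken from the manual):

```python
# A precision divider scales an output voltage down to the 5 V reference
# level for comparison. The resistor values are illustrative assumptions.

def divided(v_out, r_top, r_bottom):
    """Voltage seen at the comparator from the sense divider."""
    return v_out * r_bottom / (r_top + r_bottom)

# A 12 V output divided down to the 5 V reference point:
print(round(divided(12.0, r_top=7000.0, r_bottom=5000.0), 2))  # -> 5.0
```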
The voltage adjust potentiometer is available at the front of the
power supply after the front cover has been removed. Also avail-
able at the front are test terminals connected to the sense
lines of the output voltages.
4_._2_ _ _ _ _ _ _ _C_u_r_r_e_n_t_ _L_i_m_i_t_ _A_d_j_u_s_t_m_e_n_t_ 4.2
The maximum output power available from the power modules can be
adjusted by changing the threshold level of the emitter-current
protection circuit. This adjustment is made at the factory, and
readjustment should only be made following repairs.
Note that though it is possible to adjust the module to supply
more than 100 W of output power, attempts to utilize this should
be avoided. The overall design is aimed at 100 W only, and de-
creased lifetime will result at higher output power levels.
\f
The current protection potentiometer is adjusted using the
following procedure:
a) decrease the "300 V DC" high voltage to 230 V,
b) at 100 W output power the trimmer is turned clockwise
until the output voltage is no longer affected,
c) turn the trimmer counter clockwise until the output
voltage has decreased by 1%,
d) finally make a half-turn clockwise.
\f
F_ 5_._ _ _ _ _ _ _ _ _D_I_A_G_R_A_M_S_ 5.
5_._1_ _ _ _ _ _ _ _J_1_ _a_n_d_ _J_2_ _B_a_c_k_p_a_n_e_l_ _C_o_n_n_e_c_t_o_r_s_ 5.1
J1, 64 pins DIN 41612 outline C connector.
        a                   c
  1     5 V sense           0 V
  2     -,RESET             0 V
  3     -,INIT              0 V
  4     ACLO                0 V
  5     -,TEMP              0 V
  6     -,POWER OK          -,P OK RTN
  7     -,AUTOLOAD          5 V EXT
  8     -,AUTOLOAD          0 V
  9-32  n/c                 n/c
J2, 15 pins DIN 41612 outline H connector.
  4     +12 V          6     +12 V
  8     +5 V          10     +5 V
 12     +5 V          14     +5 V
 16     RTN           18     RTN
 20     RTN           22     RTN
 24     SPARE 1       26     SPARE 2
 28     -12 V         30     -12 V
 32     GROUND

POW204 PLUG LIST
\f
F_ 5_._2_ _ _ _ _ _ _ _M_a_s_t_e_r_ _C_o_n_t_r_o_l_ _C_i_r_c_u_i_t_r_y_ _I_ 5.2
\f
F_ 5_._3_ _ _ _ _ _ _ _M_a_s_t_e_r_ _C_o_n_t_r_o_l_ _C_i_r_c_u_i_t_r_y_ _I_I_ 5.3
\f
F_ 5_._4_ _ _ _ _ _ _ _P_O_W_7_2_9_ _+_5_ _V_ _P_o_w_e_r_ _M_o_d_u_l_e_,_ _C_o_n_t_r_o_l_ _S_e_c_t_i_o_n_ 5.4
\f
F_ 5_._5_ _ _ _ _ _ _ _P_O_W_7_2_9_ _+_5_ _V_ _P_o_w_e_r_ _M_o_d_u_l_e_,_ _P_o_w_e_r_ _S_e_c_t_i_o_n_ 5.5
\f
F_ 5_._6_ _ _ _ _ _ _ _P_O_W_7_3_3_ _+_/_-_1_2_ _V_ _P_o_w_e_r_ _M_o_d_u_l_e_,_ _C_o_n_t_r_o_l_ _S_e_c_t_i_o_n_ 5.6
\f
F_ 5_._7_ _ _ _ _ _ _ _P_O_W_7_3_3_ _+_/_-_1_2_ _V_ _P_o_w_e_r_ _M_o_d_u_l_e_,_ _P_o_w_e_r_ _S_e_c_t_i_o_n_ 5.7
\f
F_ 5_._8_ _ _ _ _ _ _ _P_O_W_2_0_4_ _I_n_t_e_r_n_a_l_ _C_a_l_l_i_n_g_ 5.8
\f
F_ 5_._9_ _ _ _ _ _ _ _C_R_A_2_0_2_ _C_a_b_l_i_n_g_ 5.9
\f
F_ 6_._ _ _ _ _ _ _ _ _O_S_C_I_L_L_O_G_R_A_M_S_ 6.
The following pages contain oscillograms showing typical voltage
waveforms at the different testpoints available on the power
modules. Testpoint No 0 (TP0) is used as signal ground reference
in the low voltage section of the circuit, and testpoint No 5
(TP5) is used in the high voltage area.
The timebase on all oscillograms is 10 µs/div.
TP6 (V(BE)) 1 V/div.
300 V DC = 0 V
TP2 (-,DRIVE) 10 V/div.
TP3 (Start) 5 V/div.
300 V DC = 0 V
TP4 (turn off) 10 V/div.
\f
TP2 (Drive) 20 V/div.
Pout = 0 W
TP3 (Start) 10 V/div.
TP2 (-,Drive) 20 V/div.
Pout = 100 W
TP3 (Start) 10 V/div.
\f
TP7 (V(CE)) 200 V/div.
Pout = 50 W
TP9 (I(C)) 1 V/div.

TP7 (V(CE)) 200 V/div.
Pout = 100 W
TP9 (I(C)) 1 V/div.

TP9 (I(C)) 1 V/div.
Pout = 80 W
TP4 (Turn off) 20 V/div.
\f
TP7 (V(CE)) 200 V/div.
Pout = 10 W
TP8 200 V/div.
TP7 200 V/div.
Pout = 50 W
TP8 200 V/div.
TP7 200 V/div.
Pout = 100 W
TP8 200 V/div.
\f
TP9 (I(C)) 1 V/div.
Pout = 100 W
TP1 (I(C) limit) 10 V/div.

TP9 (I(C)) 1 V/div.
Pout = 135 W
TP1 (I(C) limit) 10 V/div.
\f