⟦34dc7e761⟧ Wang Wps File
Length: 118679 (0x1cf97)
Types: Wang Wps File
Notes: Air Canada Proposal
Names: »1339A «
Derivation
└─⟦85ad3effd⟧ Bits:30006249 8" Wang WCS floppy, CR 0083A
└─ ⟦this⟧ »1339A «
CHAPTER 3
Page #
DOCUMENT III TECHNICAL PROPOSAL Apr. 29, 1982
2. R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲ ̲A̲N̲A̲L̲Y̲S̲I̲S̲
2.1 I̲n̲t̲r̲o̲d̲u̲c̲t̲i̲o̲n̲
2.1.1 V̲v̲v̲v̲v̲v̲v̲v̲v̲ ̲X̲x̲x̲x̲x̲x̲x̲x̲x̲
o This is a so-called 'thematic sentence', which summarizes
the content of section 2.1.1.
A thematic sentence may be applied to sections
at lower levels, when such sections contain several
pages of text.
2.1.1.1 Y̲y̲y̲y̲y̲y̲y̲y̲y̲y̲ ̲Z̲z̲z̲z̲z̲z̲z̲z̲z̲z̲z̲z̲
2.1.1.1.1 A̲a̲a̲a̲a̲a̲a̲a̲a̲a̲ ̲B̲b̲b̲b̲b̲b̲b̲b̲b̲b̲b̲b̲
Last line that can be written on
LIST OF CONTENTS Page
1. INTRODUCTION 2
1.1 Content of document III 2
1.2 Structure of document III 2
1.3 Summary of chapter contents 3
1.4 Overview of appendices to Document III 5
1.5 Standards 6
LIST OF CONTENTS Page
2. REQUIREMENTS ANALYSIS 2
2.1 Introduction 2
2.2 System Overview 3
2.3 Basic Requirements 5
2.3.1 Functional Requirements 5
2.3.1.1 Services 5
2.3.1.1.1 - Type A Service 5
2.3.1.1.2 - Printer Traffic 6
2.3.1.1.3 - Type B Service 6
2.3.1.1.4 - Host-to-Host Service 6
2.3.1.1.5 - New Traffic Services 6
2.3.1.2 Network Interfaces 7
2.3.1.2.1 Standard/Consistent Interfaces 7
2.3.1.2.2 Existing 8
2.3.1.2.3 New 8
2.3.1.2.4 Other Networks 9
2.3.1.2.5 Transmission Media Compatibility 9
2.3.2 Performance Requirements 10
2.3.2.1 Connectivity 10
2.3.2.2 Peak Traffic Transfer Rates 12
2.3.2.3 Processing Throughput 14
2.3.2.4 System Response Time 14
2.3.3 Availability/Reliability 16
2.4 Network Services and Features 18
2.5 Network Operations, Control and Management 22
2.6 Evolution 27
2.7 Support requirements 29
LIST OF CONTENTS Page
3. PROPOSED SOLUTION 2
3.1 Introduction 2
3.2 Proposed Technical Solution 3
3.2.1 System Architecture 5
3.2.2 Baseline System (1983) 9
3.2.2.1 Overview 9
3.2.2.2 Interfaces and Protocols 11
3.2.2.2.1 Protocols 17
3.2.2.2.2 Interfaces 19
3.2.2.3 Performance 20
3.2.2.3.1 Installed Capacity 20
3.2.2.3.2 Response Times 20
3.2.2.4 Modelling 21
3.2.3 Future Growth (85 and beyond) 24
3.2.4 Options 25
3.3 Telecommunications 25
LIST OF CONTENTS Page
4. OPERATOR INTERFACE 2
4.1 Introduction 2
4.2 Terminal User Interface 3
4.2.1 Sign-in 3
4.2.2 Sign-in During Congestion 3
4.2.3 Multiple Sign-in 3
4.2.4 Sign-out 4
4.2.5 Multiple sign-out 5
4.2.6 Host Initiated Sign-out 5
4.3 Network Control Center Functions 6
4.3.1 Monitoring 6
4.3.2 Concentrator Network 6
4.3.2.1 Concentrator Network Monitoring 6
4.3.2.2 Concentrator Network Control 6
4.3.3 Host Links and Other Network Links 7
4.3.3.1 Monitoring Links and Trunks 7
4.3.3.2 Controlling Links and Trunks 7
4.3.4 ACNC Network 7
4.3.5 Statistics and Reports 7
4.3.6 Distribution 8
4.4 Network Management Host 9
4.5 Electronic Mail Host Control and Monitoring Functions 10
4.5.1 Electronic Mail Operator Interface 11
4.5.2 PMS User Interface 11
4.5.2.1 FIKS Definition and System Elements 12
4.5.2.2 System Overview and Functional Summary 13
4.5.2.3 FIKS Nodal Network 13
4.5.2.4 FIKS Operator Interface 16
4.6 Test Sessions 20
4.6.1 Software Development 20
4.6.2 Test Network 21
4.6.3 Volume Generator 21
4.7 Data Base Manipulation 22
LIST OF CONTENTS Page
5. EQUIPMENT CHARACTERISTICS 3
5.1 Introduction 3
5.2 Network Configuration 3
5.3 System Configuration 5
5.3.1 Network Elements 5
5.3.1.1 Computing Elements 5
5.3.1.2 CR80 General Description 6
5.3.1.2.1 The Processor Units (PU) 8
5.3.1.2.2 The Channel Units (CU) 9
5.3.1.2.3 Bus Structures 11
5.3.1.2.4 Watchdog System 13
5.3.1.2.5 CR80 Modules 16
5.3.1.2.6 Peripheral Equipment 21
5.3.1.2.7 Mechanical Dimensions 22
5.3.1.2.7.1 Rack Dimensions 22
5.3.1.2.7.2 Peripheral Dimensions 23
5.3.1.2.8 Power Consumption 24
5.3.2 H/W Monitor 26
5.3.3 Network Nodes 26
5.3.3.1 Node Configurations 27
5.3.3.2 Equipment List 37
5.3.4 Gateway Processor 40
5.3.4.1 Gateway Configurations 41
5.3.4.2 Equipment List 46
5.3.5 Network Control Centre (NCC) 47
5.3.5.1 NCC Configuration 48
5.3.5.2 Equipment List 53
5.3.6 Electronic Mail Host 55
5.3.6.1 EMH Configurations 56
5.3.6.2 Equipment List 61
5.3.7 Network Management Host (NMH) 62
5.3.7.1 NMH Configurations 63
5.3.7.2 Equipment List 68
5.3.8 Front-End Processor (FEP) 69
5.3.8.1 FEP Configurations 70
5.3.8.2 Equipment List 76
5.3.9 Standard Expansion 79
5.4 Electrical Interfaces 81
5.4.1 Host Interfaces 81
5.4.1.1 UNIVAC Interface 81
5.4.1.2 Other Host Interfaces 81
5.4.2 Communication Interfaces 82
5.4.2.1 X20 bis, X21 bis, V24 82
5.4.2.2 X21 82
5.4.2.3 X75 83
5.4.3 Future Interfaces 85
LIST OF CONTENTS Page
6. SOFTWARE CHARACTERISTICS 5
6.1 Introduction 5
6.2 DAMOS CR80D Standard System Software 6
6.2.1 Overview of DAMOS Operational Software 8
6.2.2 Security 11
6.2.3 Kernel 13
6.2.3.1 Resource Management 14
6.2.3.2 Process Management 15
6.2.3.3 Memory Management 15
6.2.3.4 Process Communication 15
6.2.3.5 CPU Management 16
6.2.3.6 Processing Unit Management 16
6.2.3.7 BASIC Transport Service 17
6.2.3.7.1 Service Types 19
6.2.4 DAMOS Input/Output 20
6.2.4.1 File Management System 22
6.2.4.1.1 Device and Volume Handling 22
6.2.4.1.2 Directories 23
6.2.4.1.3 Files 23
6.2.4.1.3.1 File Types 23
6.2.4.1.3.2 File Commands 23
6.2.4.1.4 User Handling 24
6.2.4.1.5 Disk Integrity 24
6.2.4.1.5.1 Security 24
6.2.4.1.5.2 Redundant Disks 24
6.2.4.1.5.3 Bad Sectors 25
6.2.4.1.6 Access Methods 25
6.2.4.1.6.1 Unstructured Access 25
6.2.4.1.6.2 Indexed Sequential Access 25
6.2.4.2 Magnetic Tape File Management System 26
6.2.4.2.1 Device Functions 27
6.2.4.2.2 Volume Functions 27
6.2.4.2.3 File Functions 27
6.2.4.2.4 Record Functions 27
6.2.4.3 Terminal Management System 28
6.2.4.3.1 Transfer of I/O Data 28
6.2.4.3.1.1 File Mode 28
6.2.4.3.1.2 Communication Mode 29
6.2.4.3.2 User Handling 31
6.2.4.3.3 Hardware Categories 31
6.2.4.3.3.1 Examples 32
6.2.4.3.3.2 Terminal Controllers 34
6.2.4.3.3.3 Lines 34
6.2.4.3.3.4 Units 34
6.2.5 System Initialization 36
6.3 Standard Support Software 37
6.3.1 Terminal Operating System (TOS) 37
6.3.2 Language Processors 38
6.3.3 System Generation Software 39
6.3.4 Debugging Software 39
6.3.5 Utilities 39
6.3.6 Diagnostic Programs 40
6.3.6.1 Off-line Diagnostic Programs 40
6.3.6.2 On-line Diagnostic Programs 40
6.4 Redundant Operation 41
6.4.1 Hardware Redundancy 41
6.4.1.1 Node Redundancy 41
6.4.1.2 EMH Redundancy 41
6.4.1.3 Gateway Redundancy 42
6.4.1.4 NMH Redundancy 42
6.4.1.5 NCC Redundancy 42
6.4.1.6 FEP Redundancy 42
6.4.2 Switchover 43
6.4.2.1 PU Switchover 43
6.4.2.2 LTU Switchover 43
6.4.2.3 Disk Switchover 43
6.4.2.4 Supra Bus Switchover 43
6.4.3 Checkpointing 44
6.4.4 Recovery/Restart 44
6.4.4.1 Recovery Level 44
6.5 Transmission Software 45
6.5.1 Communication Interface 47
6.5.2 Network Interface 49
6.5.2.1 Protocol Levels 49
6.5.3 The Modules of the NSS 52
6.5.3.1 The Transport Station 52
6.5.3.2 The Packet Handler 55
6.5.3.3 The Supervisory Module 55
6.5.3.4 The Recovery Module 55
6.6 Communication Software 56
6.6.1 High Level Service Subsystem (HSS) 56
6.6.2 Terminal Access Subsystem 57
6.6.2.1 Three Lowest Levels 57
6.6.2.2 Terminal Transport Layer 57
6.6.2.3 Terminal Session Layer 57
6.6.2.4 Terminal Protocol/Printer Protocol Layer 58
6.6.2.5 Application Layer 58
6.6.3 Host Access Subsystem Software 59
6.6.3.1 Univac Host Interface 64
6.7 Gateway Software 72
6.7.1 ACNC Interface 74
6.7.2 Network Interface 74
6.7.3 Modules of the Gateway 76
6.7.3.1 The ICC Module 76
6.7.3.2 The Communications Control Interface Module 76
6.7.3.3 The High Level Service Module 76
6.7.3.4 The Supervisory Module 76
6.7.3.5 The Recovery Module 76
6.8 Electronic Mail Host 77
6.8.1 EMH Interface SW 77
6.8.2 Application Software 79
6.8.2.1 Programme Development 79
6.8.2.2 Protected Message Switching (PMS) 80
6.8.2.2.1 FIKS Definition and System Elements 80
6.8.2.2.2 System Overview and Functional Summary 81
6.8.2.2.3 FIKS Nodal Network 81
6.8.2.2.4 Message Users 82
6.8.2.2.5 Data Users 83
6.8.2.2.6 Network Supervision 83
6.8.2.2.7 FIKS Generic Elements 83
6.8.2.2.8 Traffic Security 84
6.8.2.2.9 Message Categories, Code and Formats 84
6.8.2.2.10 Message Entry, Storage and Distribution 85
6.8.2.2.11 Message Routing and Data Switching 87
6.8.2.2.12 System Supervision, Control and Maintenance 88
6.8.2.3 Session Control 98
6.8.2.4 Reconfiguration 98
6.8.2.5 EMH Recovery 98
6.8.3 System Software 98
6.8.4 External Devices 100
6.9 Network Control Centre (NCC) Software 101
6.9.1 NCC Network Environment 101
6.9.2 NCC Hardware Components 103
6.9.3 NCC Data Base 105
6.9.4 NCC-BNCC Operation 106
6.9.5 NMH-NCC Operation 107
6.9.6 Centralized versus Distributed Control 108
6.9.7 NCC Monitoring via WDP (Watchdog) 109
6.9.8 NCC Control via WDP 109
6.9.9 Software Monitoring 109
6.9.10 Software Initialization and Modification 110
6.9.11 Routing Control 111
6.9.12 Network Monitoring 111
6.9.13 Effect of Network Reconfigurations on Applications 112
6.9.14 Statistics 113
6.9.14.1 Statistics Information 113
6.9.15 Alarms 114
6.9.16 Supervisory Functions 114
6.9.17 NCC Man-Machine I/F 114
6.10 Network Management Host Software 115
6.10.1 NMH Interfaces and Functional Overview 115
6.10.2 Software Development and Maintenance Functions 116
6.10.3 Maintenance of the Configuration Data Base 117
6.10.4 Down-line Loading of Software and Configuration Data 118
6.10.5 Processing of Raw Data for Statistics and Billing Information 118
6.10.6 Network Modelling Software and Support for Automatic
Network Tests 119
6.10.6.1 Network Model 119
6.10.6.2 Network Testing by Means of ATES 120
LIST OF CONTENTS Page
7. RELIABILITY, MAINTAINABILITY AND
AVAILABILITY ANALYSIS (RMA) 2
7.1 Introduction 2
7.2 Reliability Models and Block Diagrams 4
7.2.1 Reliability Models for PU's 6
7.2.2 Reliability Models for CU's 11
7.2.3 Node Reliability 16
7.2.4 EMH Reliability 17
7.2.5 GTW Reliability 18
7.2.6 Site Reliability 19
7.2.7 NCC Subsystem 20
7.2.8 Network Availability 21
7.3 Equipment Mean Time Between Failures (MTBF) 22
7.4 Equipment Maintainability (MTTR) 25
7.5 RMA Analysis 26
7.5.1 RMA Analysis for Communication Lines 27
LIST OF CONTENTS Page
8. ENVIRONMENTAL CHARACTERISTICS & COMMON ASPECTS 2
8.1 General 2
8.2 Climatic Environmental Characteristics 2
8.2.1 General 2
8.2.2 Operating Characteristics 3
8.2.3 Storage and Transport Characteristics 5
8.3 Electrical Environmental Characteristics 6
8.3.1 Static Electricity 6
8.3.2 Electromagnetic Waves 6
8.3.3 Interference on Power Feed Lines 6
8.3.4 Overvoltage Protection 7
8.4 Common Aspect 8
8.4.1 Safety 8
8.4.2 Human Engineering 9
8.4.3 Maintenance and Repair 9
8.4.4 Expandability 11
8.4.5 System Life Time 11
8.4.6 Components 12
8.4.7 Testing 13
8.4.8 Marking 13
8.4.9 Changeable Marking 13
8.4.10 Mechanical Dimensions 14
8.4.11 Power Supplies 14
LIST OF CONTENTS Page
9. SUPPORT 3
9.1 Introduction 3
9.2 Maintenance and Technical Support 5
9.2.1 Requirements Analysis 5
9.2.2 Maintenance Planning 6
9.2.2.1 Maintenance Plan 6
9.2.2.2 Maintenance Documentation 9
9.2.2.3 Recommended Spare Parts List (RSPL) 9
9.2.2.4 Tools and Test Equipment List 9
9.2.3 Maintenance Activities 12
9.2.3.1 Preventive Maintenance 12
9.2.3.2 Emergency Maintenance 12
9.2.4 Technical Support 13
9.2.4.1 Hardware Support 13
9.2.4.2 Software Support 15
9.3 Training 17
9.3.1 Requirements Analysis 17
9.3.2 Training Planning 18
9.3.2.1 Management and Organization 18
9.3.2.2 Training Plan 20
9.3.2.2.1 Course Overview 20
9.3.2.3 Development of Courses 26
9.3.3 Training Activities 28
9.3.3.1 Training Techniques 28
9.3.3.2 Training Material 29
9.3.3.2.1 Training Support for Sub-contractor (CNCP) 29
9.3.3.2.2 Copyright 30
9.4 Installation 31
9.4.1 Requirement Analysis 31
9.4.2 Installation Planning 32
9.4.2.1 Site Surveys 32
9.4.2.2 Transportation and Installation Plan 33
9.4.2.3 Site Preparation Requirements 33
9.4.2.4 Equipment Installation Drawings 34
9.4.2.5 Site Readiness Verification 34
9.4.3 Installation Activities 35
9.4.3.1 Transportation 35
9.4.3.2 Site Installation 39
9.4.3.3 Typical Layout 40
9.4.3.4 Site Acceptance 46
9.5 Documentation 47
9.5.1 General 47
9.5.2 Documentation Tree 47
9.5.2.1 System Description Manual 47
9.5.2.2 Installation Manual 49
9.5.2.3 Operating Manuals 49
9.5.2.4 Technical Manuals 50
9.5.2.5 Peripheral Equipment Manuals 51
9.5.2.6 Tools and Test Equipment Manual 51
9.5.2.7 Programming Development Tools 51
9.5.2.8 Software Description Manual 52
9.5.3 Documentation Requirements 53
9.5.3.1 Language 53
9.5.3.2 Binder 53
9.5.3.3 Binder Arrangement 53
9.5.3.4 Paper/Printing and Typing 53
9.5.4 Documentation Implementation 54
9.5.4.1 Preliminary Delivery 54
9.5.4.2 Final Documentation 54
9.5.4.3 Documentation Delivered 54
9.5.4.4 Documentation Standard 54
9.6 Spare Parts Provisioning 56
9.6.1 Requirements Analysis 56
9.6.2 Spares delivery 56
9.7 Test and Development Facilities 57
9.8 Live Demonstration 59
LIST OF CONTENTS Page
10. COMPLIANCE STATEMENTS 2
10.1 Introduction 2
10.2 Responses to Air Canada Requirements 2
10.3 Responses to questions in RFP part 8 chapter 2 2A
APPENDICES
APPENDIX A: CR80 Fault Tolerant Computing
System Architecture
APPENDIX B: CR80 Data Sheets
APPENDIX C: DAMOS Overview
APPENDIX D: Modelling Documentation
APPENDIX E: CNCP's Response to Air Canada's RFI
APPENDIX F: Functional Presentation of the
Automated Test & Emulation System,
ATES
APPENDIX G: The CR Office System Project
LIST OF CONTENTS Page
1. INTRODUCTION 2
1.1 Content of document III 2
1.2 Structure of document III 2
1.3 Summary of chapter contents 3
1.4 Overview of appendices to Document III 5
1.5 Standards 6
1. I̲N̲T̲R̲O̲D̲U̲C̲T̲I̲O̲N̲ ̲T̲O̲ ̲D̲O̲C̲U̲M̲E̲N̲T̲ ̲I̲I̲I̲
o This chapter serves as an introduction to the reader
of Document III by outlining the content and structure
of the document.
1.1 C̲o̲n̲t̲e̲n̲t̲ ̲o̲f̲ ̲D̲o̲c̲u̲m̲e̲n̲t̲
This document, called Document III Technical Proposal,
covers the Christian Rovsing reply to Air Canada's
RFP, Publication 2001, parts 3 through 8. The document
contains a detailed description of the proposed solution
for the Air Canada backbone network. It further contains
the requested responses to the requirements stated
in part 6 of the RFP and replies to the questions in
part 8 of the RFP.
1.2 S̲t̲r̲u̲c̲t̲u̲r̲e̲ ̲o̲f̲ ̲D̲o̲c̲u̲m̲e̲n̲t̲ ̲I̲I̲I̲
The document is one of the three documents in the bid
package:
Document I: Commercial Proposal
Document II: Executive Summary
Document III: Technical Proposal
Document III is organised in 10 chapters. Each chapter
is preceded by a chapter contents table and a list
of figures and tables. In addition to the 10 chapters,
a number of appendices are included which contain further
technical information regarding equipment and software
characteristics of the proposed system.
Section 1.3 below outlines the content of each chapter,
and section 1.4 contains a brief description of each
appendix.
1.3 S̲u̲m̲m̲a̲r̲y̲ ̲o̲f̲ ̲C̲h̲a̲p̲t̲e̲r̲ ̲C̲o̲n̲t̲e̲n̲t̲s̲
Chapter 1: I̲N̲T̲R̲O̲D̲U̲C̲T̲I̲O̲N̲ ̲T̲O̲ ̲D̲O̲C̲U̲M̲E̲N̲T̲ ̲I̲I̲I̲:̲
This chapter serves as an introduction to the reader
of Document III by outlining the content and structure
of the document.
Chapter 2: R̲E̲Q̲U̲I̲R̲E̲M̲E̲N̲T̲S̲ ̲A̲N̲A̲L̲Y̲S̲I̲S̲:̲
This chapter contains our interpretation of the
requirements stated in the RFP.
Chapter 3: P̲R̲O̲P̲O̲S̲E̲D̲ ̲S̲O̲L̲U̲T̲I̲O̲N̲:̲
We consider this chapter as the key section of
our proposal as it contains a system level description
of our proposed architecture for the Air Canada
Backbone network.
The chapter addresses how our solution satisfies
the Air Canada network requirements on a long term
basis.
Chapter 4: O̲P̲E̲R̲A̲T̲O̲R̲ ̲I̲N̲T̲E̲R̲F̲A̲C̲E̲:̲
This chapter covers specifically our solution to
the different Man-Machine interfaces in the Air
Canada Data Network.
Chapter 5: E̲Q̲U̲I̲P̲M̲E̲N̲T̲ ̲C̲H̲A̲R̲A̲C̲T̲E̲R̲I̲S̲T̲I̲C̲S̲:̲
In this chapter you will find a general description
of the characteristics of the equipment elements
which are included in the proposed solution. More
details will be found in the appendices, ref. section
1.4 below.
Chapter 6: S̲O̲F̲T̲W̲A̲R̲E̲ ̲C̲H̲A̲R̲A̲C̲T̲E̲R̲I̲S̲T̲I̲C̲S̲:̲
Similar to chapter 5, this chapter contains a description
of the software components included in the solution.
Chapter 7: R̲M̲A̲ ̲A̲N̲A̲L̲Y̲S̲I̲S̲:̲
This chapter contains an analysis of how the proposed
solution will meet the availability requirements
for the Air Canada Data Network.
Chapter 8: E̲N̲V̲I̲R̲O̲N̲M̲E̲N̲T̲A̲L̲ ̲C̲H̲A̲R̲A̲C̲T̲E̲R̲I̲S̲T̲I̲C̲S̲ ̲&̲ ̲C̲O̲M̲M̲O̲N̲ ̲A̲S̲P̲E̲C̲T̲S̲:̲
The chapter contains data on the environmental
characteristics, like power, temperature, weight,
size etc., and a description of some common aspects
related to construction and testing of the Christian
Rovsing supplied equipment.
Chapter 9: S̲U̲P̲P̲O̲R̲T̲:̲
This chapter describes the Integrated Logistics
Support (ILS) included in our proposal. The following
areas are covered:
- Maintenance & Technical Support
- Training
- Installation
- Documentation
- Spare Parts
- Test and Development facilities
Chapter 10: C̲O̲M̲P̲L̲I̲A̲N̲C̲E̲ ̲S̲T̲A̲T̲E̲M̲E̲N̲T̲S̲:̲
This chapter contains a "Requirements Analysis"
which correlates the requirements in the RFP Part
6 with facilities provided in the proposed solution.
Further, the chapter covers the replies to the specific
questions in the RFP Part 8.
1.4 O̲v̲e̲r̲v̲i̲e̲w̲ ̲o̲f̲ ̲A̲p̲p̲e̲n̲d̲i̲c̲e̲s̲ ̲t̲o̲ ̲D̲o̲c̲u̲m̲e̲n̲t̲ ̲I̲I̲I̲
Appendix A: C̲R̲8̲0̲ ̲F̲A̲U̲L̲T̲ ̲T̲O̲L̲E̲R̲A̲N̲T̲ ̲C̲O̲M̲P̲U̲T̲I̲N̲G̲ ̲S̲Y̲S̲T̲E̲M̲ ̲A̲R̲C̲H̲I̲T̲E̲C̲T̲U̲R̲E̲
This appendix describes in detail the CR80 hardware
architecture.
Appendix B: C̲R̲8̲0̲ ̲D̲A̲T̲A̲ ̲S̲H̲E̲E̲T̲:̲
This appendix is a collection of Data sheets for
the CR80 H/W modules included in the proposed solution.
Appendix C: D̲A̲M̲O̲S̲ ̲O̲V̲E̲R̲V̲I̲E̲W̲
This is an overview description of the CR80 standard
operating system called DAMOS. This appendix expands
the brief DAMOS description in Document III Section
6.2.
Appendix D: M̲O̲D̲E̲L̲L̲I̲N̲G̲ ̲D̲O̲C̲U̲M̲E̲N̲T̲A̲T̲I̲O̲N̲
This appendix describes the models used for predicting
the performance throughput and response times of
the proposed solution.
Appendix E: C̲N̲C̲P̲'̲s̲ ̲R̲E̲S̲P̲O̲N̲S̲E̲ ̲T̲O̲ ̲A̲I̲R̲ ̲C̲A̲N̲A̲D̲A̲'̲S̲ ̲R̲E̲Q̲U̲E̲S̲T̲ ̲F̲O̲R̲ ̲I̲N̲F̲O̲R̲M̲A̲T̲I̲O̲N̲
This appendix describes CNCP's communications
capabilities. It is included in the Christian
Rovsing proposal as a response to the RFP Part
2, Section 1.1.3, which invites a possible extension
of the proposal to cover communications links,
total network operation, etc.
Appendix F: F̲U̲N̲C̲T̲I̲O̲N̲A̲L̲ ̲S̲U̲M̲M̲A̲R̲Y̲ ̲O̲F̲ ̲T̲H̲E̲ ̲A̲U̲T̲O̲M̲A̲T̲E̲D̲ ̲T̲E̲S̲T̲ ̲&̲ ̲E̲M̲U̲L̲A̲T̲I̲O̲N̲
̲S̲Y̲S̲T̲E̲M̲ ̲
This appendix gives an overview of our solution
to the test facility requirements ref RFP Part
6, R3.15.
Appendix G: T̲H̲E̲ ̲C̲R̲ ̲O̲F̲F̲I̲C̲E̲ ̲S̲Y̲S̲T̲E̲M̲ ̲P̲R̲O̲J̲E̲C̲T̲
This document describes the Christian Rovsing concepts
for an open ended automated office system, including
local networking and integrated Digital Exchanges.
1.5 S̲t̲a̲n̲d̲a̲r̲d̲s̲
The documentation standards for this document follow
the guidelines in the RFP part 2, Section 2.2.2.4:
- All pages are identified at the top by Document
Id, Chapter No. and Page No. Pages are numbered
consecutively within each chapter.
- Figures are identified by title, Document Number,
Section No., and a serial No. within each section.
E.g.:
Figure III 3.2-1 Baseline System Overview
- Each chapter is preceded by a "Chapter list of
content". A list of content for the entire document
is included at the beginning of the document.
LIST OF CONTENTS Page
2. REQUIREMENTS ANALYSIS 2
2.1 Introduction 2
2.2 System Overview 3
2.3 Basic Requirements 5
2.3.1 Functional Requirements 5
2.3.1.1 Services 5
2.3.1.1.1 - Type A Service 5
2.3.1.1.2 - Printer Traffic 6
2.3.1.1.3 - Type B Service 6
2.3.1.1.4 - Host-to-Host Service 6
2.3.1.1.5 - New Traffic Services 6
2.3.1.2 Network Interfaces 7
2.3.1.2.1 Standard/Consistent Interfaces 7
2.3.1.2.2 Existing 8
2.3.1.2.3 New 8
2.3.1.2.4 Other Networks 9
2.3.1.2.5 Transmission Media Compatibility 9
2.3.2 Performance Requirements 10
2.3.2.1 Connectivity 10
2.3.2.2 Peak Traffic Transfer Rates 12
2.3.2.3 Processing Throughput 14
2.3.2.4 System Response Time 14
2.3.3 Availability/Reliability 16
2.4 Network Services and Features 18
2.5 Network Operations, Control and Management 22
2.6 Evolution 27
2.7 Support requirements 29
2. R̲e̲q̲u̲i̲r̲e̲m̲e̲n̲t̲s̲ ̲A̲n̲a̲l̲y̲s̲i̲s̲
2.1 I̲n̲t̲r̲o̲d̲u̲c̲t̲i̲o̲n̲
o A summary of the intentions of this chapter is
given.
The essential performance requirements as stated in
the tender were carefully analysed and the performance
of the proposed system was projected. The results are
presented in this chapter.
The user considerations and application requirements
which determine the system performance objectives are
both functional and quantitative.
The functional requirements deal with call set-up and
disconnect, packet transmission, supervisory control,
statistics and reports, protocols, and recovery and
back-up techniques. These aspects are the subject of
subsequent sections which describe operating procedures,
application software, hardware configuration and system
functions.
The quantitative requirements are analysed and summarised
in this chapter. The quantitative requirements deal with
traffic statistics, traffic distribution for the busy hour,
and delivery time. From an analysis of the given and
derived factors, the design criteria were determined.
Most significant among these criteria for the evaluation
and selection of a properly sized system are:
- connectivity
- processing throughput
- system response time
Special attention has been given to the use of
the OSI reference model architecture, with substantial
modularity and growth potential in line with the expanding
future throughput requirements.
2.2 S̲y̲s̲t̲e̲m̲ ̲O̲v̲e̲r̲v̲i̲e̲w̲
o In this section the general network requirements
are described, such as flexibility and evolution.
The network required is a packet switch network with
3 nodes (1985). Store and forward (message switch)
facilities are to be included in the network. Since
the network will form the basis of almost all of Air
Canada's activities, the points mentioned below are
of the utmost importance:
- availability
- flexibility
- easy operation
- compatibility
- graceful evolution
The availability requirement refers to the day-to-day
availability during normal operation and the maximum
duration of planned outages due to major software updates
or configuration changes. Maximum down time will be
45 minutes/month, and no more than 20 minutes/month
is permitted during prime time (0630-2030). Planned
outages should not exceed 2 minutes.
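As a rough cross-check, the stated downtime ceilings translate into availability percentages as sketched below. The 30-day month and the 14-hour prime-time window (0630-2030) used in the arithmetic are our assumptions, not figures from the RFP.

```python
# Illustrative availability arithmetic for the stated downtime ceilings.
# Assumptions (not from the RFP): 30-day month, 14 h/day prime time.
MINUTES_PER_MONTH = 30 * 24 * 60          # 43200 minutes in a 30-day month
PRIME_MINUTES_PER_MONTH = 30 * 14 * 60    # 0630-2030 gives 14 h/day -> 25200

overall = 1 - 45 / MINUTES_PER_MONTH      # at most 45 min downtime/month
prime = 1 - 20 / PRIME_MINUTES_PER_MONTH  # at most 20 min during prime time

print(f"overall availability    >= {overall:.4%}")  # about 99.90%
print(f"prime-time availability >= {prime:.4%}")    # about 99.92%
```

On these assumptions the 45 minutes/month ceiling corresponds to roughly 99.9% overall availability.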
A great flexibility in the network is required to accommodate
the great variation in traffic load. These variations
may be purely statistical within a short time (minutes),
or of a periodic nature (noon traffic approximately
10 times midnight traffic).
Since many people are going to use the system, easy
operation is a key item. This applies to daily
usage as well as to the network maintenance procedures.
Powerful statistics and test facilities should be at
the disposal of network supervisor personnel.
Compatibility is necessary to avoid loss of the large
investment in the existing terminal access network.
Furthermore, it must be possible, with only minor changes,
to connect existing hosts to the new network. It must
also be possible to connect existing and new networks
to the new backbone network.
Since a continuous increase in both traffic and number
of terminals is foreseen (about 20% annually until
1991), the network is required to permit virtually
unlimited expansion. The implementation of such expansion
must be possible with only minor disturbance of the
network operation and with no degradation of the network
performance.
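The 20% annual growth figure can be checked against the CRT terminal projections in fig. III 2.3.2.1: compounding the 1986 count forward at 20% per year reproduces the later year-end values. This is only an illustrative sketch; the per-year rounding convention is our assumption.

```python
# Compound-growth sketch of the ~20% annual terminal growth.
# Starting figure 12779 (year-end 1986 CRT count, fig. III 2.3.2.1);
# the round-after-each-year convention is our assumption.
def project(count: int, rate: float, years: int) -> int:
    """Compound a starting count forward by `rate` per year, rounding yearly."""
    for _ in range(years):
        count = round(count * (1 + rate))
    return count

print(project(12779, 0.20, 1))  # 1987 projection -> 15335
print(project(12779, 0.20, 5))  # 1991 projection -> 31798
```

With this convention every intermediate year (15335, 18402, 22082, 26498) matches the equipment summary as well.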
2.3 B̲a̲s̲i̲c̲ ̲R̲e̲q̲u̲i̲r̲e̲m̲e̲n̲t̲s̲
o This section outlines the services supported by
the network, the interfaces supported by the network,
the network performance, and the network availability
and reliability.
2.3.1 F̲u̲n̲c̲t̲i̲o̲n̲a̲l̲ ̲R̲e̲q̲u̲i̲r̲e̲m̲e̲n̲t̲s̲
2.3.1.1 S̲e̲r̲v̲i̲c̲e̲s̲
o The different traffic types (TYPE A, printer, TYPE
B, HOST-HOST, new) supported by the network are
outlined.
The different services required lead to an elaborate
multipriority structure to ensure fast delivery of
e.g. type A traffic. To support printer services,
store and forward (message switching) facilities are
built into the node as the transport layer in the OSI
reference model. Easy implementation of new traffic
services leads to a highly structured software design,
again to a great extent based on the OSI reference
model.
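The multipriority structure described above can be sketched as an ordered dispatch queue in which time-sensitive Type A traffic is always served ahead of printer and Type B store-and-forward traffic. The numeric priority values and the sample messages below are illustrative assumptions, not part of the proposed design.

```python
import heapq

# Sketch of multipriority dispatch: lower number = higher priority.
# The priority ordering shown here is an assumption for illustration.
PRIORITY = {"TYPE_A": 0, "HOST_TO_HOST": 1, "PRINTER": 2, "TYPE_B": 3}

queue: list = []
seq = 0  # tie-breaker preserving FIFO order within one priority class

def enqueue(service: str, msg: str) -> None:
    global seq
    heapq.heappush(queue, (PRIORITY[service], seq, msg))
    seq += 1

enqueue("TYPE_B", "stored message")
enqueue("TYPE_A", "fare display request")
enqueue("PRINTER", "ticket print job")

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # Type A request is dispatched first, Type B last
```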
2.3.1.1.1 T̲y̲p̲e̲ ̲A̲ ̲S̲e̲r̲v̲i̲c̲e̲
This traffic may be characterized as follows:
- terminal-host-terminal
- short input and relatively long output
- information retrieval or data base update
- requires fast response times (1 to 5 seconds); traffic
is very time sensitive and perishable (loses significance
if delayed more than about 20 seconds)
- based on CRT screen messages (typical message 20
- 300 characters, maximum length about 1900 characters)
- no end-to-end acknowledgement required
- no network protection required, no link acknowledgement
necessary
- error recovery normally from end point (terminal
or host)
2.3.1.1.2 P̲r̲i̲n̲t̲e̲r̲ ̲T̲r̲a̲f̲f̲i̲c̲
This traffic may be characterized as follows:
- host to terminal (printer)
- terminal (printer) is unattended
- end-to-end acknowledgement is required
- possible contention for a printer by multiple hosts
- messages both direct and through Electronic Mail
Host.
2.3.1.1.3 T̲y̲p̲e̲ ̲B̲ ̲S̲e̲r̲v̲i̲c̲e̲
The network must provide a protected message switching
store and forward service.
This service is characterized as follows:
- message stored until destination is ready to receive
- messages may be retrieved several hours after delivery
- interfaces to other protected message switching
systems such as CNT and ARINC
- mnemonic addressing (addresses up to 11 characters
in length) is generally used
- various priorities for delivery
2.3.1.1.4 H̲o̲s̲t̲ ̲t̲o̲ ̲H̲o̲s̲t̲ ̲S̲e̲r̲v̲i̲c̲e̲
The network must support host to host traffic in the
form of short real time messages and bulk data transfers
(1 million to 100 million bits). As this traffic is
sent between automated end points, data integrity is
essential. Also the network should allow a range of
priorities for this type of traffic.
2.3.1.1.5 N̲e̲w̲ ̲T̲r̲a̲f̲f̲i̲c̲ ̲S̲e̲r̲v̲i̲c̲e̲s̲
The network must support an easy implementation of
new traffic types:
- Terminal to terminal traffic (man-to-man and machine-to-machine)
based on short, transaction type messages and on
longer file transfer messages
- The requirements of financial applications traffic
(encryption, delivery confirmation, audit trails)
- Timesharing or interactive traffic characterized
by long output messages and asynchronous character
oriented terminals.
2.3.1.2 N̲e̲t̲w̲o̲r̲k̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲s̲
o In this section the standard/consistent interface
requirements are described, the existing and future
terminal interfaces are outlined, and the requirements
for interfaces with other networks are listed.
Finally, transmission media compatibility is
addressed.
The requirement of common access procedures for different
types of hosts leads to the use of a special front-end
processor, mapping the channel protocol of the host
to the standard X25 protocol of the network.
To support the existing terminal network an intelligent
communication controller (ICC) driver is developed.
The driver will be implemented as the network layer
and the data link layer in the OSI reference model.
Interfaces to other existing networks are implemented
in a similar fashion.
The requirement of a uniform terminal interface leads
to the definition of a virtual terminal protocol at
the presentation layer. This protocol is used internally
in the network and facilitates the implementation of
new terminal functions.
The requirement of multi-sign-in leads to the development
of a special session layer to support these functions.
The requirement of transmission media compatibility leads
to the usage of a general "firmware" controlled line
termination unit (LTU) implementing at least the physical
layer of the OSI reference model. Programmes may be
down-line loaded to the LTU, increasing the flexibility
of the system.
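The components named in this section can be summarised against the OSI reference model as sketched below. The assignments for the ICC driver (network and data link), the virtual terminal protocol (presentation), the special session layer, the store-and-forward transport, and the LTU (physical) follow the text; the layer-7 entry is our assumption for completeness.

```python
# OSI layer mapping of the components named in section 2.3.1.2.
# Layer 7's entry is an assumption; the rest follow the text.
OSI_MAPPING = {
    7: ("application",  "network services (assumed)"),
    6: ("presentation", "virtual terminal protocol"),
    5: ("session",      "multi-sign-in session layer"),
    4: ("transport",    "store-and-forward transport"),
    3: ("network",      "ICC driver / X25 packet level"),
    2: ("data link",    "ICC driver / X25 link level"),
    1: ("physical",     "firmware-controlled LTU"),
}

# Print the stack from the top layer down.
for layer in sorted(OSI_MAPPING, reverse=True):
    name, component = OSI_MAPPING[layer]
    print(f"L{layer} {name:<12} {component}")
```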
2.3.1.2.1 S̲t̲a̲n̲d̲a̲r̲d̲/̲C̲o̲n̲s̲i̲s̲t̲e̲n̲t̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲s̲
The network must provide standard interfaces for connection
of any type of host computer.
The network must provide common procedures for access
to different types of HOST's from different types of
terminals, i.e. ideally the user needs only to care
about the type of service he wants, not the type of
terminal used or the type of HOST supporting the service.
Multi-sign-in must be allowed.
It is desirable that international standards are used
if this is possible without significant performance
impact.
2.3.1.2.2 E̲x̲i̲s̲t̲i̲n̲g̲
The network must provide suitable interfaces to existing
terminals.
The existing terminals include:
- CRTs - Types 405, 406, 407, 408
- Flight Information Display System Terminals
- Printers - (attached to CRT's)
- Teletype Model 40 - various configurations
- Ticket printers - DI-AN
- Extel
- Other Devices - ASTAC (self ticketing machines)
- MAC (microcomputer based travel
agent administrative system)
2.3.1.2.3 N̲e̲w̲
The network must be flexible enough to allow connection
of a variety of new terminal types, for example:
- Graphics
- Voice input/output
- Wands
- Mobile radio
- Bar code printer
- Interactive terminals
2.3.1.2.4 O̲t̲h̲e̲r̲ ̲N̲e̲t̲w̲o̲r̲k̲s̲
The initial network must be able to interconnect with
- ARINC-ESS Protected message switching
- SITA TYPE A and TYPE B
- CNT Protected message switching
- Public network X25 packet switch network
- Aeronautical fixed Protected message switching
- TELEX, direct link Protected message switching
dial-up signalling
Interfaces to other networks than the above mentioned
must be easy to implement.
2.3.1.2.5 T̲r̲a̲n̲s̲m̲i̲s̲s̲i̲o̲n̲ ̲M̲e̲d̲i̲a̲ ̲C̲o̲m̲p̲a̲t̲i̲b̲i̲l̲i̲t̲y̲
The network must be compatible with various types of
transmission facilities:
- digital and analogue terrestrial links of various
qualities (error rates)
- satellite links
- future use of variable bandwidth and switched links
2.3.2 P̲e̲r̲f̲o̲r̲m̲a̲n̲c̲e̲ ̲R̲e̲q̲u̲i̲r̲e̲m̲e̲n̲t̲s̲
o In this section the performance requirements 1983-1991
are stated. Subsections are defined on the following
subjects: connectivity, peak traffic transfer rates,
processing throughput, and system response time.
The performance requirements lead to a powerful hardware
configuration built on the CR80 concept. Up to 4
processing units and 3 channel units are supported
per node.
2.3.2.1 C̲o̲n̲n̲e̲c̲t̲i̲v̲i̲t̲y̲
The initial network must be able to support a minimum
of 8 hosts in 3 centres, 12000 CRT terminals and 4500
printers. An equipment summary for the period 1983-1991
is shown overleaf.
                                          YEAR END
                      1981    1982    1983    1984    1985
EQUIPMENT
Concentrator            87     110     155     177     200
RDMC                   522     583     647     715     785
MUX                   3654    4084    4530    5003    5493
Circuits               435     486     539     595     653
Trunks                 121     153     215     245     277
Terminals (CRT)       7086    7919    8783    9699   10649

                                          YEAR END
                      1986    1987    1988    1989    1990    1991
Concentrator           240     288     346     415     498     597
RDMC                   942    1131    1357    1628    1953    2344
MUX                   6592    7910    9492   11390   13668   16402
Circuits               784     940    1129    1354    1625    1950
Trunks                 333     399     479     575     689     827
Terminals (CRT)      12779   15335   18402   22082   26498   31798

Fig. III 2.3.2.1
Network Equipment Summary
2.3.2.2 P̲e̲a̲k̲ ̲T̲r̲a̲f̲f̲i̲c̲ ̲T̲r̲a̲n̲s̲f̲e̲r̲ ̲R̲a̲t̲e̲s̲
The network must have the capacity to support the traffic
and transaction volumes for all hosts connected to
the network. Fig. III 2.3.2.2 shows the number
of interactions (in thousands) for the total network
in the period 1983-1991.
A detailed analysis (ref. appendix D) of the network
load requirements leads to a requirement of 522 packets
per node per second in 1985. The Christian Rovsing
solution will be designed to support 600 packets per
node per second in 1985.
In this analysis it is assumed that, on average,
inbound traffic consists of 1 packet per interaction
and outbound traffic of 2 packets per interaction.
In particular, the network must support one or more
large transaction volume hosts. This host interface
must have the capacity to handle at least 1000K transactions
per peak hour plus overhead.
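As an illustration of the packet arithmetic, the conversion from a peak-hour interaction volume to an average packet rate can be sketched as below. The per-node figure of 522 additionally depends on the node count and traffic distribution from the appendix D analysis, which is not reproduced here; the function names are illustrative only.

```python
def peak_packet_rate(interactions_per_peak_hour, in_pkts=1, out_pkts=2):
    """Average packets per second implied by a peak-hour interaction
    volume, assuming 1 inbound and 2 outbound packets per interaction
    as stated in the analysis."""
    packets = interactions_per_peak_hour * (in_pkts + out_pkts)
    return packets / 3600.0

# 1985 total-network profile: 682.1 thousand interactions in the peak hour
rate = peak_packet_rate(682_100)
```

This yields roughly 570 packets per second network-wide; the nodal requirement follows from distributing this load, plus overhead, over the switching nodes.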
M̲A̲J̲O̲R̲ ̲A̲P̲P̲L̲I̲C̲A̲T̲I̲O̲N̲ ̲S̲Y̲S̲T̲E̲M̲S̲
                        1981   1982   1983   1984   1985
RES                    259.8  300.6  301.9  334.2      -
RESERVIA                22.0   40.0   40.0   40.0   60.0
PMS                     29.8   40.5   48.5   57.0   65.0
SUPPORT/CORP.SERVICES   10.3   12.7    6.4    7.3    5.2
MAINTENANCE/P & S          -      -   28.4   35.3   41.6
OPERATIONS                 -      -  118.4   70.4   74.4
CARGO                      -      -      -   67.6   75.6
PSGT. MGMT                 -      -      -      -  360.3
TOTAL NETWORK          321.9  393.8  543.6  611.8  682.1

                        1986   1987   1988   1989   1990   1991
RESERVIA                72.0   86.4  103.7  124.4  149.3  179.2
PMS                     78.0   93.6  112.3  134.8  161.8  194.2
CORP. SVCS               6.2    7.4    8.9   10.7   12.8   15.4
MTCE/P & S              49.9   59.9   71.9   86.3  103.6  124.3
OPNS                    89.3  107.2  128.6  154.3  185.2  222.2
CARGO                   90.7  108.8  130.6  156.7  188.0  225.6
PSGT. MGMT             432.4  518.9  622.7  747.2  896.6 1075.9
TOTAL NETWORK          818.5  982.2 1178.7 1414.4 1697.3 2036.8

Fig. III 2.3.2.2
Interaction Profile for Total Network - Enquiry/Response and PMS Traffic
(thousands of interactions in a peak hour)
2.3.2.3 P̲r̲o̲c̲e̲s̲s̲i̲n̲g̲ ̲T̲h̲r̲o̲u̲g̲h̲p̲u̲t̲
The network must not constrain the number of simultaneous
sessions or calls between terminals and hosts (terminals
signed-in). For example:
- One host could have simultaneous sessions with
almost all configured terminals.
- A terminal could be signed-in to all hosts and
direct transactions to any one of them,
and/or could have multiple sessions with one host
if signed-in by application.
- A large number of terminals could be signed-in
to 2 or more hosts concurrently.
The network should be resilient enough to handle variations
in traffic level:
- Short term variation (minutes or seconds)
due to statistical factors or degraded/recovery
conditions.
- Long term variation due to pattern changes, new
application, changes in average message length.
2.3.2.4 S̲y̲s̲t̲e̲m̲ ̲R̲e̲s̲p̲o̲n̲s̲e̲ ̲T̲i̲m̲e̲
- Type A traffic: 85% less than 2.5 seconds, 95%
less than 5 seconds.
Response time is the time elapsed between depressing
the "enter" key on the CRT and the first character
of response appearing on the CRT.
Response time is calculated for peak hour and assumes
60% load on communications links.
- High priority printer traffic: 5 seconds.
Definition and assumptions as for type A.
- Type B: Priority dependent as outlined below:
QS, QX or equivalent several seconds
QU or equivalent several minutes
QK or no priority tens of minutes
QD overnight delivery
Response time should be uniform and independent of
application type and user location.
In the case of a degraded and/or congested network,
response time of high priority services should remain short.
The network should meet the performance requirements
of new services without adversely affecting the response
time of type A traffic or making inefficient use of
critical resources.
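A minimal sketch of how such a percentile requirement could be checked against a sample of measured response times (the function name and thresholds simply restate the Type A targets above):

```python
def meets_type_a_requirement(response_times):
    """True if the sample satisfies the Type A targets:
    85% of responses under 2.5 s and 95% under 5 s."""
    n = len(response_times)
    frac_fast = sum(1 for t in response_times if t < 2.5) / n
    frac_ok = sum(1 for t in response_times if t < 5.0) / n
    return frac_fast >= 0.85 and frac_ok >= 0.95
```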
2.3.3 A̲v̲a̲i̲l̲a̲b̲i̲l̲i̲t̲y̲/̲R̲e̲l̲i̲a̲b̲i̲l̲i̲t̲y̲
o Since most of the activities of Air Canada depends
of a well functionating communication network,
the down-time of the network must be practically
0.
To ensure a high degree of availability and reliability
the following points are carefully observed.
- Reliability oriented system design
- Reliability oriented module design
- Mil-spec component usage
- Module burn-in
- Detailed module testing
- Detailed system testing
- Single element error tolerance
- Extensive checkpointing
- Fast recovery
The detailed requirements for reliability and availability
are stated below.
A̲v̲a̲i̲l̲a̲b̲i̲l̲i̲t̲y̲ ̲S̲t̲a̲n̲d̲a̲r̲d̲:̲
Availability must be equal to or higher than the existing
network.
The general availability requirement is 24 hours/day
and 7 days a week.
The following dimensions of availability should be
considered:
- Planned outages of the entire network (for major
software or configuration changes) should not exceed
2 minutes duration.
- Outages of individual network nodes (for extensive
local configuration changes) should not exceed
1 minute.
- Unavailability of significant components of the
network (e.g. trunks) for more than 2 minutes is
acceptable only if the remaining components can
carry the entire traffic load without adversely
affecting users.
- Total downtime of all network components in any
given path from source to destination node must
not exceed 45 minutes/month. No more than 20 minutes/month
downtime is permitted during prime time (0630-2030
EST).
- Recovery from failures (for any reason) should
not require more than 1 minute. (Time to repair
or activate standby equipment).
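For orientation, the 45 minutes/month path-downtime budget corresponds to roughly 99.9% availability. A minimal conversion sketch, assuming a 30-day month of round-the-clock operation:

```python
def availability(downtime_min_per_month, days_per_month=30):
    """Fraction of time a path is up, given a monthly downtime budget
    and 24-hour operation every day of the month."""
    total_min = days_per_month * 24 * 60
    return 1.0 - downtime_min_per_month / total_min

overall = availability(45)   # the 45 minutes/month budget -> ~0.999
```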
S̲i̲n̲g̲l̲e̲ ̲E̲l̲e̲m̲e̲n̲t̲ ̲F̲a̲i̲l̲u̲r̲e̲ ̲S̲u̲r̲v̲i̲v̲a̲l̲:̲
The network must survive any single element failure
e.g. trunk, line, node, etc. Back-up facilities or
schemes should be provided to minimize the effects
of any network failure.
The stringent requirements for availability and short
recovery times have resulted in the requirement to
survive single element failure. Survival means that
the network will not only continue to operate, but
that it will also continue to provide an acceptable
level of service even under peak hour loading.
Even extremely low theoretical failure probabilities
are generally not a substitute for back-up because
of the unacceptability of a prolonged outage affecting
many users. This factor becomes even more critical
as more and more of the operational functions of the
airline are automated.
2.4 N̲e̲t̲w̲o̲r̲k̲ ̲S̲e̲r̲v̲i̲c̲e̲s̲ ̲a̲n̲d̲ ̲F̲e̲a̲t̲u̲r̲e̲s̲
O A basic requirement of the new network is to support
all the services of the old network. In addition,
a number of new services have been defined
which must also be supported by the new network.
The services and features required are listed in the
table below:
- Reroute traffic to backup host
- Sign-in of high priority users during congestion
- Multi sign-in
- Functional equivalence with existing network
- Guaranteed delivery
- Security check during network access
- Security in test environment
- Bit orientated network protocol
- Encrypted data capability
- Error Detection and Correction
- Virtual terminal protocol.
The above requirements may all be satisfied by a careful
design based upon the OSI reference model. For example, the
bit orientated protocol is supported by several international
standards (X.21, HDLC) addressing layers 1 and 2. The
virtual terminal may be implemented at layer 6
(presentation) according to another standard (ISO/TC97/SC16 N666).
More details on the above requirements are listed below.
Reroute traffic to backup host:
The network must have the capability to reroute
all or any part of a host's traffic to an alternate
host.
This requirement addresses the need for the network
to allow an application to be provided by both
a primary and an alternate host. This would permit
the alternate host to provide service in the event
of primary host failure.
The basic requirement is to make access to and
use of the back-up host transparent to the user.
Sign-in of high priority users during congestion:
High priority applications must be able to establish
a call during congested conditions.
An acceptable minimum level of service must be
maintained for certain applications (and possibly
for selected users of these applications). Terminals
must be able to sign-in to these applications and
obtain acceptable response times during congestion.
Multi sign-in:
The network must support multi sign-in.
The network must provide service to terminals that
are signed-in to more than one host or application
and are originating transactions which must be
routed to the appropriate destination. This is
an important feature which is provided by the ACDN
network. The new network must accommodate the anticipated
increase in the number of sign-in combinations
(RES, OPS, CARGO, VIA, etc.).
Functional equivalence with existing network:
The new network must provide the functional equivalence
of the existing network, e.g. in areas such as
access network monitoring and control, printer
status awareness and control and protected message
switching services.
Guaranteed delivery:
The network must provide for guaranteed delivery
of messages with optional acknowledgement to origin.
If the destination is not immediately available,
the network can still accept the message and "guarantee"
that it will be delivered at a later time. Users
may request confirmation that the entire message
was correctly received at the destination.
Security check during network access:
A network access security procedure should be provided
for use where physical access to terminals is not
controlled (e.g. access from a public network).
The network must also support the requirements
of host access security in the configuration related
information which is supplied at session establishment
time.
Security in test environment:
The network should prevent test host applications from
gaining access outside of the test environment. The network
should prevent unauthorized CRT's (not part of
the test configuration) from accessing a test host.
The network should also prevent test applications
from interfering with live applications, terminals
or hosts.
The concept of closed user groups is often referred
to in connection with the above requirement; however,
in the Air Canada context, the following should
be considered:
- CRT's authorized to access the test host may
be geographically separated (in different cities)
- members of a closed user group will share concentrator
remote multi-drop lines and node ports with
non-members.
Bit orientated network protocol:
The backbone network should be based on a bit orientated
protocol.
Encrypted data capability:
The network should allow for data encryption and
should be able to store and disseminate public
encryption keys.
Error detection and correction:
The network should provide error detection and
correction for type A messages.
This is a desirable feature and should be provided
whenever it can be done without significantly affecting
performance and efficiency.
Virtual Terminal Protocol:
With the advent of new hosts, it should be possible
to evolve to a Virtual Terminal Protocol (VTP)
in the new network.
The basic requirement here is to isolate the applications
from terminal characteristics.
In the future, a wider variety of terminal types
is foreseen, and some may not conform to common
Air Canada specifications. There will be an increased
requirement for manipulation of presentation level
data between the application and the terminal.
The network architecture should be compatible
with this concept, and the network should have
the capability to evolve in this direction.
2.5 N̲e̲t̲w̲o̲r̲k̲ ̲O̲p̲e̲r̲a̲t̲i̲o̲n̲s̲,̲ ̲C̲o̲n̲t̲r̲o̲l̲ ̲a̲n̲d̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲
O When operating a large network, it is of the utmost
importance to have easy access to information about
network status. It is also very important to have
an efficient software tool to define the complex
tables needed for network configuration and reconfiguration.
It should be possible to estimate in advance the
consequences of suggested configuration changes.
The required operation, control and management procedures
are listed below:
- Status awareness, network monitor
- Status awareness, user
- Alarms and statistics
- Configuration process
- Configuration distribution
- On-line inventory
- Network management distribution capability
- Limited distribution of control
- Central network software management
- Costing - Billing system
- Application - Network sensitivity
- Hardware monitor
- Network management - Data base system
- Network modelling tools
- Test and development facilities.
To fulfil the above requirements, detailed statistical
information must be collected continuously by the software
modules at the nodes and forwarded to the network control
center at least every minute. To concentrate and reduce
the information, an efficient data base management system
will be used.
The configuration file generation will be supported
by a special, interactive "configuration language"
based upon a general parser program.
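The statement syntax below is purely hypothetical and is not taken from the proposed configuration language; it only illustrates how a general parser could turn interactive configuration statements into table entries:

```python
def parse_statement(line):
    """Parse one hypothetical configuration statement of the form
    '<TYPE> <name> <attribute> <value> ...' into a structured entry."""
    tokens = line.split()
    obj_type, name, rest = tokens[0], tokens[1], tokens[2:]
    # Pair up attribute keywords with their values, converting numbers.
    attrs = {k: int(v) if v.isdigit() else v
             for k, v in zip(rest[::2], rest[1::2])}
    return {"type": obj_type, "name": name, "attrs": attrs}

entry = parse_statement("NODE YUL TRUNKS 4 CIRCUITS 120")
```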
Status Awareness - Network Monitor:
The connection status of all devices (CRT's, printers,
FIDS, etc.) must be available for display at the
Network Control Centre and to field technicians
at remote terminals.
Status Awareness - User:
The connection status of the DTE should be known
to the DTE operator.
This includes all of the elements in the previous
requirements with emphasis on sign-in status.
It also includes information on congestion, flow
control and host/application status.
Alarms & Statistics:
The new network must collect statistics sufficient
both to enable the minute to minute assessment
of network operation and status, and to provide
enough historical information to allow network
planning and costing to be accomplished. The concern
here is not to consider what reports to generate,
so much as it is to have the ability to obtain
enough information to generate reports from. Therefore,
the new network must have the FLEXIBILITY and CAPACITY
in both hardware and software to be able to capture
raw statistics, (and to be able to change or enlarge
the statistical information captured) and to forward
this information to a central data base where reports,
either hard copy or through dynamic query language
use, can be generated.
Configuration Process:
The network must have a practical and easy to use
configuration process. The complexity of the information
for the Air Canada network is considerable, and
this will increase with a distributed nodal network.
The requirement is to minimize the manpower required
to make configuration changes, to allow some limited
configuration changes to be made while the network
is running, and to minimize downtime for extensive
changes.
Network configuration should be controlled at the
NCC for logistical and security reasons. The configuration
process should have appropriate interfaces to enable
generation of both hard copy and interactive reports
and allow efficient control over equipment inventories
to be achieved.
In the event of error, it must be possible to change
or correct the configuration dynamically for isolated
errors, or to revert to the previously working
copy of the configuration if required.
Configuration Distribution:
The network must have the ability to distribute
service data (terminal related information similar
to the configuration data in the present network)
across various network components (distributed
configuration).
On-Line Inventory:
There should be an on-line inventory of all network
components available in the new network.
Network components include CRT's and printers as
well as multiplexers, concentrators, nodes, circuits,
trunks, etc. This inventory should be easy to
modify and linked to the configuration and billing
processes.
Network Management Distribution Capability:
It should be possible to distribute network management
functions in the network (e.g. billing, planning,
statistics).
The initial requirement is to have network management
centralized in one location.
Limited Distribution of Control:
The basic requirement is that all control can be
exercised from one central point to make efficient
use of manpower and ensure a global view of the
network. Network operations personnel must have
available a set of commands to control and change
the status of any network component.
Air Canada's objective is to have unmanned nodes.
Thus the NCC operator should have full capabilities
for performing such functions as dial-up of ICC
access "trunks", sending and activating new versions
of configurations or operating software, selecting
standby nodes, diagnosing software problems and
so on.
To satisfy the general objective of preventing
catastrophic failure, capability should be provided
to move central control and alarm functions to
a secondary node.
Central Network Software Management:
It should be possible to distribute network software
in the network.
The capability must be provided to maintain a software
base at one central point, develop changes as required,
and then distribute new software through the network
to the decentralized nodes.
Costing - Billing System:
It must be possible to charge network costs to
the user level.
The network must continuously collect sufficient
statistics to permit accurate identification of
usage-sensitive costs by user group. Costing will
also require information from the configuration
(device types and quantities, etc.) and the network
management software must have appropriate interfaces
between the configuration process and the billing
or costing process.
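A sketch of the kind of aggregation the billing interface implies; the record shape and the per-packet rate are hypothetical, standing in for whatever usage statistics the network actually collects:

```python
from collections import defaultdict

def usage_costs(records, rate_cents_per_packet):
    """Aggregate usage-sensitive costs (in cents) by user group from
    raw statistics records of the form (user_group, packet_count)."""
    packets = defaultdict(int)
    for group, count in records:
        packets[group] += count
    return {g: n * rate_cents_per_packet for g, n in packets.items()}
```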
Hardware Monitor:
An external hardware monitor capability must be
provided.
The hardware monitor is used as an important network
management tool which can collect and produce statistics
on the utilization of critical hardware resources.
The hardware monitor may also be used continuously
as a means to obtain statistics for monitoring
purposes. In this regard, it should be considered
as an alternative to the development of customized
node software for statistics collection.
Network Management - Data Base System:
The network management function must be based on
a data base management system which uses an interactive
query-oriented high level language for real time
access to statistical information, updates to configuration,
and production of reports. Sophisticated network
planning and software development tools must also
be provided for productivity improvement reasons.
Network Modelling Tools:
The network management system must include a simulation
model(s), which can be used by network planners
to predict the effects of changes in network configuration,
new traffic types and increased traffic volumes
and to determine when and where additional capacity
will be required. The vendor should highlight
specifics as to the tools to be provided to Air
Canada in these areas.
Test and Development Facilities:
The network must include facilities for the test
and development of both network and host software:
- volume generator
- protocol tester, off-line test hardware - nodes,
concentrators, terminals and host connections
- on-line test configurations
2.6 E̲v̲o̲l̲u̲t̲i̲o̲n̲
O To avoid any disturbance to the daily routines
of Air Canada personnel, a gradual switchover
to the new network must be supported. Furthermore,
the implementation of new services must be possible
with only minor disturbance of network traffic.
To support an easy switchover, the new network must
support the same interfaces as the old one. The traffic
needed between the old and the new network will be
supported by the gateway processor.
The implementation of new interfaces will require only
local updates at the specific node computer, since
a flexible interface is achieved by the usage of the
OSI reference model.
Graceful Evolution:
Air Canada must be able to evolve from existing
to new network gracefully without interrupting
service.
A large, heavily used network is already in place
servicing a variety of airline and transportation
industry functions with three hosts. Although
new hosts will be progressively added to the new
network and will off-load some applications from
the existing hosts, there will be a period of several
years during which many terminals will require
access to hosts on both networks. Service interruptions
which exceed availability specifications are not
acceptable nor are extended periods of degraded
performance. Parallel networks with multiple terminals
serving one user are quite undesirable and impractical.
Changes to procedures for use of the existing and
new network must be minimized and should be as
transparent as possible to users.
To facilitate this graceful and progressive evolution
of users from the existing ACDN networks (Yellow
and Green) to the new network, a component designated
as the ACDN Gateway has been identified as the
most cost effective solution.
Network Interfaces Evolution:
The network must have an inherent capability to
evolve to satisfy the requirements of new communications
services and terminal types in the post 1985 period.
New services such as facsimile, office automation,
videotex and electronic mail will become increasingly
important. Air Canada anticipates that the existing
custom developed concentrator-multiplexer access
network architecture will not satisfy the requirements
of these services and their terminal types. One
or more of the evolving local area communications
network (LACN) architectures will be required and
the backbone network must have sufficient flexibility
and capacity to interface with these technologies
and their terminal population.
An example of an access network architecture that
is expected to be attractive for many of these
services is an integrated local voice and data
network based on CBX technology. The backbone
network is expected to provide a private inter-city
data transportation resource and a common terminal-host
interface. As a result, interconnection will be
required between the CBX based network and the
backbone network. Although this interface is not
an immediate requirement, it will be needed during
the lifetime of the network.
2.7 S̲u̲p̲p̲o̲r̲t̲ ̲R̲e̲q̲u̲i̲r̲e̲m̲e̲n̲t̲s̲
O To ensure optimum utilization of the new network,
it is important that efficient training of customer
personnel is provided, together with detailed
documentation. To support emergency actions, supplier
personnel must be available.
The support requirements may be summarized in the following
points:
- Equipment maintenance
- Training support
- Software support
- Documentation
Equipment Maintenance:
The supplier should state his maintenance philosophy
and provide a maintenance plan for all components
of the network (nodes, network associated hosts,
special display terminals, mass storage units,
printers, test equipment, etc.). The plan should
include procedures for unscheduled and preventive
maintenance where applicable.
A list of recommended spare parts and their deployment
to network locations is required. The location
of supply depots and procedures and time estimates
for obtaining parts not stocked at network locations
should be specified. The proposal should be based
on continuous resident maintenance support at the
Network Control Center site by a minimum of one
fully qualified technician.
Air Canada will consider performing all or part
of the maintenance support function internally.
Therefore, the supplier should provide sufficient
information for a sound evaluation of this alternative.
Training Support:
Suppliers should indicate the nature and scope
of their training support and quote the total cost
for this activity. Both initial and ongoing training
should be covered. Minimum and advanced training
programs should be proposed for the following categories:
- System Management Staff (in the areas of network
design, planning, administration and configuration)
- Communications Specialists
- Software Programmers
- Operators
- Network Monitors
- Technicians
Software Support:
The supplier should:
a) provide details as to the level of support and
experience he is prepared to offer before, during
and after installation of the system
b) specify if support of the above is at the Air Canada
site or supplier's location
c) provide a statement of Corporate Policy relative
to meeting user demands for software improvement
and/or incorporation of new functions. The proposal
should include policies on upgrading of delivered
software
d) state number and qualifications of personnel required
by Air Canada to maintain the proposed system software
and support system expansion and technological improvements.
Documentation:
The minimum documentation to be supplied with the selected
system consists of:
a) five complete sets of technical publications and
manuals related to hardware, software, languages,
utilities, and system operations
b) two complete sets of equipment maintenance manuals
for all equipment at each location in the system
c) a source code listing of all related standard software
d) complete documentation of all modules written or
modified for Air Canada's use. Documentation should
satisfy Air Canada standards
e) full operator instructions and procedures
f) full network monitor procedures
LIST OF CONTENTS Page
3. PROPOSED SOLUTION 2
3.1 Introduction 2
3.2 Proposed Technical Solution 3
3.2.1 System Architecture 5
3.2.2 Baseline System (1983) 9
3.2.2.1 Overview 9
3.2.2.2 Interfaces and Protocols 11
3.2.2.2.1 Protocols 17
3.2.2.2.2 Interfaces 19
3.2.2.3 Performance 20
3.2.2.3.1 Installed Capacity 20
3.2.2.3.2 Response Times 20
3.2.2.4 Modelling 21
3.2.3 Future Growth (85 and beyond) 24
3.2.4 Options 25
3.3 Telecommunications 25
3. P̲R̲O̲P̲O̲S̲E̲D̲ ̲S̲O̲L̲U̲T̲I̲O̲N̲
o The Air Canada Data Network proposed is based on
state-of-the-art software and hardware technology.
The layered software structure reflects the current
trends in standardization of Open Systems Interconnection.
The Backbone Network provides X25 packet switching
as well as higher level services through Virtual
Terminal and File Transfer protocols.
The fault tolerant multiprocessing hardware architecture
of the CR80 provides a processing power at least 5
times the presently projected needs for 1985.
3.1 I̲n̲t̲r̲o̲d̲u̲c̲t̲i̲o̲n̲
This chapter describes the proposed system in terms
of hardware and software system architectures. The
structure of the proposed baseline system,
its initial capacity, and projected response times for
user traffic are covered. The interfaces to the backbone
network from users, e.g. host computers, the Gateway,
and the terminal access equipment, are covered at all
levels.
Finally, the future growth capabilities of the network
are discussed with respect to capacity and functionality.
3.2 P̲r̲o̲p̲o̲s̲e̲d̲ ̲T̲e̲c̲h̲n̲i̲c̲a̲l̲ ̲S̲o̲l̲u̲t̲i̲o̲n̲ ̲
The solution to the Air Canada Data Network (ACDN) proposed
herein is based on a network architecture developed
during the last four years and used in national as
well as international networks, where high performance,
reliability, security and flexibility are essential.
The equipment used is from the CR80 computer series,
designed and manufactured by Christian Rovsing, which
can be configured to meet a broad range of applications,
including fault tolerant data, packet, and message
switching networks.
CR80 computer elements are proposed for all of the
computerized functions in the so-called Backbone Network,
i.e. not only for the nodal processors but also for
the Electronic Mail Host and the Network Management
Host.
For some reason, the Backbone Network as defined in
the RFP includes the Gateway and the Terminal Access
function (i.e. the ability to interface with terminals
via ICC's) while it excludes the host access function.
For symmetry reasons, however, we have also addressed
the host access function in the following presentation
of our system solution.
An overview of a specific system configuration is shown
in fig. III 3.2-1. It reflects the concept network
for 1983 and illustrates the presence of the generic
network elements positioned in accordance with the
concept network in the RFP, i.e. the nodes, the NCC's,
the Gateway, the Electronic Mail Host EMH, the Network
Management Host NMH, and the Front End Processors FEP's.
For each node, there is a (hardware) Communications
Processor, CP, which houses a number of (software)
Subsystems, supporting e.g. Packet Switching (NSS)
and Higher Level Services (HSS).
Each CP may be configured as a completely colocated
system with all Processing Units working in an equal
sharing of the load; or some degree of specialization
of Processing Unit functions may be chosen. Regardless
of the configuration, the basic processor and memory
modules are the same.
Fig. III 3.2-1
3.2.1 S̲y̲s̲t̲e̲m̲ ̲A̲r̲c̲h̲i̲t̲e̲c̲t̲u̲r̲e̲
The CR80 hardware configuration comprises a number
of loadsharing CPU's, grouped together in processing
units, PU's, with up to 5 CPU's per PU. Up to 1 Mwords
may also be housed in a PU.
Multiple PU's may be interconnected by one or more
16Mb/s SUPRA busses. Some PU's may be loadsharing
equally, some may carry out special functions, while
still others may be standby units ready to be activated
by "watchdog" control elements to substitute for
currently active PU's in case of failure.
The CP hardware is fault tolerant, which means that
it includes redundant standby computer elements which
can be switched into operation.
The hardware is continuously monitored and controlled
via a serial Configuration Control bus extending from
a Watchdog to all switchable and/or monitored assemblies,
such as CPU's, busses, power supplies, and LTU's.
To fully understand the CR80 fault tolerance concept,
the Equipment Characteristics Chapter should be read.
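A highly simplified sketch of the substitution decision described above; the real Watchdog operates over the Configuration Control bus, and the record shapes and names here are illustrative only:

```python
def promote_standbys(pus):
    """Given PU records with 'role' and 'healthy' flags, retire failed
    active units and promote healthy standbys in their place.
    Returns the list of PUs now carrying load."""
    failed = [p for p in pus if p["role"] == "active" and not p["healthy"]]
    standby = [p for p in pus if p["role"] == "standby" and p["healthy"]]
    # Pair each failed active unit with an available healthy standby.
    for dead, spare in zip(failed, standby):
        dead["role"], spare["role"] = "failed", "active"
    return [p for p in pus if p["role"] == "active" and p["healthy"]]
```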
This architecture allows the open ended growth in the
equipment and hence in processing power, which is so
crucial in a dynamic environment.
The great flexibility in the hardware configuration
capabilities also supports the graceful evolution of
the CP configurations. As an example, the Gateway
which is being used in the transition phase 1983-1985
may in 1986 be sublimated into a more powerful node
by rearranging the interconnecting SUPRA busses. Due
to the redundant hardware design such major configuration
changes may take place while the remaining system is
fully operational.
Just as important, however, is the software architecture
adopted.
The hardware and system software were designed in close
connection to create an environment for switching classified
information in a secure manner. The result was the
secure multiprocessor operating system DAMOS, which
efficiently supports graceful evolution, by allowing
full information interchange between any 2 processes
in any combination of CPU's or PU's.
The partitioning of the computer system into a specific
PU configuration hence does not impact application
software structure.
Based on this versatile system approach, the communications
software complex implements the services of the Air
Canada Network.
The Open Systems Interconnection standardization results
guide the design of the communications software developed
at Christian Rovsing.
To generalize the functionality of the interfaces,
the network will provide service to interactive terminals
and will support bulk transfers.
These services will be provided by virtual protocols
residing in the network.
In general, the choice of virtual protocols depends
on the service wanted. Apart from the two services
mentioned, it is the intention of Christian Rovsing
to implement other necessary virtual protocols on request,
for instance a standardized graphics protocol or a
protocol covering the very high speed classes used
for direct microwave and satellite communication.
The backbone network proposed provides the following
levels of service:
- Packet switching (CCITT's X.25)
- Virtual File Transfer (NPL's File Transfer Protocol)
- Virtual Terminal Interaction (ECMA's Virtual Terminal
Protocol)
A virtual protocol is one which is not used by any
actual equipment attached to the network, at least
not as yet. The protocols of actual equipment attached
to the ACDN must be mapped onto the virtual
protocols supported by the network.
Future equipment should be designed to work directly
with the virtual protocols in the network. By providing
a baseline for future communication via the network,
the virtual protocols promote commonality in the ACDN.
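The mapping idea can be sketched as follows. This is a minimal illustration only: the command codes and the translation table are invented for the example and are not taken from any actual Air Canada terminal protocol.

```python
# Hypothetical sketch: an access subsystem translates device-specific
# commands (invented codes, loosely "AC407-like") into the generic
# operations of a virtual terminal protocol.
VIRTUAL_OPS = {"clear", "write", "position"}       # virtual protocol verbs

AC407_TO_VIRTUAL = {"ERA": "clear", "TXT": "write", "CUR": "position"}

def to_virtual(device_cmd):
    """Translate one device command into its virtual protocol operation."""
    op = AC407_TO_VIRTUAL.get(device_cmd)
    if op is None:
        raise ValueError("no virtual mapping for " + device_cmd)
    return op
```

Equipment designed directly for the virtual protocol would need no such table; only existing devices require the translation step.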
In the selection of the virtual protocols for the network,
one must investigate carefully the trends of the related
standards in the world today. The FTP and the ECMA
VTP presently appear to be the most commonly adopted
in Europe.
Should this not be so for the Canadian environment,
other virtual protocols may be selected for the ACDN.
The packet switching service is carried out by an X.25
Packet Switching Network, implemented by the Nodal
Switch Subsystem. This constitutes the lowest level
of data transfer service offered by the ACDN.
The File Transfer Protocol, FTP and the Virtual Terminal
Protocol are included in the High level Services Subsystem,
HSS.
The File Transfer Protocol, FTP, represents a virtual
network protocol for bulk transfers. The implementation
in the backbone network, which is supposed to enable
multihost access to remote facilities like printers
and cardreaders will be the relevant parts of FTP-B(80),
also known as blue book (Data Comm. Protocols Unit,
NPL, G3).
The services foreseen to be adopted in an initial
implementation comprise Host-to-Host transfer of files
at a low level, i.e. printer files, whereas at a later
stage a full implementation including job service may
be provided according to existing standards
(ISO/TC97/SC16 N628 or later).
Also, a Virtual Terminal Protocol is proposed for the
ACDN. Several possibilities have been looked at.
CCITT defines a low level virtual terminal standard
by the three recommendations X.3, X.28 and X.29. Combined,
these define a so-called scroll-mode VT offering
user-selectable PAD functions described by a set of
parameters.
However, this will not be sufficient to cover the needs
of the terminals to be supported in the backbone network.
Consequently the design will be extended with the functions
necessary to cover the level of terminal service needed
for VDU's like the UTS 400 and IBM 3270 BSC. The services
supported will thus be as described for the terminal
class "form mode" in ISO/TC97/SC16 N666 (ECMA/TC23/81/53).
The network is governed by the following general communications
software subsystems, of which the NSS and HSS conform
to the 7-layer definitions of the ISO Open Systems
Interconnection model:
- Nodal Switching Subsystem NSS
- High Level Services Subsystem HSS
- Network Control Subsystem NCS
In addition the following special subsystems have been
defined for the Air Canada Backbone Network:
- Terminal Access Subsystem TAS
- Host Access Subsystem HAS
- ICC Emulator Subsystem IES
Of the above-mentioned subsystems, the NCS is special
since it does not carry the transmission of user data;
instead it controls the network configuration. The
NCS uses permanently allocated paths in the network
to control the remaining part of the network topology.
The NCS uses the NSS and the HSS to distribute and
retrieve configuration control information and statistics.
The remaining subsystems, the NSS, the HSS, TAS, HAS, and
IES, interact in order to transmit user data. In the
section on interfaces and protocols these interactions
are further described. Below follows a short description
of each subsystem.
The Nodal Switch Subsystem, NSS, is located in each
physical node. It provides packet switching via a network
of mutually interconnected NSS's. The numbering plan
is based on X.121 and the routing principle is that
of X.110. The following essential service types are
included for sessions through the nodal network:
- Priority
- End-to-end acknowledged or non-acknowledged transfer.
The High Level Services Subsystem, HSS, is located in
each physical node. It provides higher level services
to users of the backbone network. These services presently
include the following virtual protocols:
- File Transfer Protocol, FTP-B(80)
- Virtual Terminal Protocol, ECMA/TC23/81/53
The Terminal Access Subsystem, TAS, interfaces to the
existing Air Canada terminals through the existing
ICC's. The TAS acts against the ICC's as the ACNC and
maps the ICC protocol at the presentation layer into
the Virtual Terminal Protocol. For printer traffic
the TAS maps into the FTP and solves shared printer
contention.
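A minimal sketch of one way such shared-printer contention could be resolved: each printer gets a single FIFO queue, so files arriving from several sources are delivered one at a time. This is purely illustrative; the proposal does not specify the mechanism, and the class and printer names are invented.

```python
from collections import deque

# Illustrative shared-printer contention resolution: contending senders
# enqueue files; the printer delivers them strictly in arrival order.
class SharedPrinter:
    def __init__(self, name):
        self.name = name
        self.queue = deque()

    def submit(self, print_file):
        # Any number of hosts/terminals may submit concurrently;
        # the queue serializes access to the one physical printer.
        self.queue.append(print_file)

    def deliver_next(self):
        # Returns the next file to print, or None when idle.
        return self.queue.popleft() if self.queue else None
```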
The Host Access Subsystem, HAS, interfaces a host to
the backbone network. There shall be a HAS type for
each host type supported. According to the RFP terminology
the HAS would not be part of the backbone network;
rather it would be included in the Front End Processors
in the Host Access Network. For symmetry (with the
TAS) the HAS is also addressed in this proposal, and
an illustrative description of a UNIVAC HAS is given
in the software section.
Similar to the TAS, the HAS maps the Host protocols
onto the virtual protocols supported in the backbone
network. The Terminal number is also mapped into the
number convention of the backbone network.
3.2.2 T̲h̲e̲ ̲B̲a̲s̲e̲l̲i̲n̲e̲ ̲S̲y̲s̲t̲e̲m̲ ̲1̲9̲8̲3̲
3.2.2.1 O̲v̲e̲r̲v̲i̲e̲w̲
The Baseline Backbone Network Configuration was shown
in figure III 3.2-1.
This Backbone Network Configuration exhibits nearly
the same partitioning as suggested in the Concept Network
chapters of the RFP, i.e. there are two nodes, a gateway,
a Network Management Host, and an Electronic Mail Host.
The dualized Network Control Center is suggested to
be geographically distributed, with one half residing
in each of the two initial nodes.
The actual hardware partitioning, indicated as circles
interconnected by SUPRA links, is based on the assumption
that all equipment groups delivered for a particular
node site such as Montreal or Toronto can be placed
within reach of the SUPRA BUS, which is some 150 meters.
Should some equipment have to be placed further away,
but less than 1.2 km, a TDX link might be used for
such equipment instead.
About the nodes the following must be noted:
In our architecture the term "node" denotes that part
of a physical Communication Processor, CP, which acts
as a member of the nodal (X25-) Packet Switching Network,
PSN.
Other functions, such as the Terminal Access and Host Access
Subsystems, which may be housed in the same CP's, constitute
network access services separate from the node. The
same holds for the High Level Services Subsystem: it
adds to the service level of the backbone network
but is not part of the PSN.
E̲l̲e̲c̲t̲r̲o̲n̲i̲c̲ ̲M̲a̲i̲l̲ ̲H̲o̲s̲t̲
The Electronic Mail Host system includes subsystems
for the following special switching networks: The ARINC,
SITA, CNT and TELEX.
Against the Air Canada network the EMH uses the virtual
protocols supported by the HSS. Thus the EMH is the
first user to employ the virtual protocols directly.
The EMH comprises the following subsystems:
o ARINC Subsystem ARS
o SITA Subsystem SIS
o CNT Subsystem CNS
o TELEX Subsystem TLS
o Store & Forward Subsystem SFS
Further subsystems may be developed and tested and
new operational software systems generated via local
or remote terminals.
N̲e̲t̲w̲o̲r̲k̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲H̲o̲s̲t̲
The Network Management Host functions are included
in the Network Management (software) Subsystem. It
includes the functions necessary for administrating
the network, i.e. subscriber, installation and billing
management, topology planning, network simulation,
and collection of statistics and charging information.
Further application subsystems and functions can be
programmed by users via local or remote terminals,
in the Software Development Environment supplied.
G̲a̲t̲e̲w̲a̲y̲
The Gateway function for interconnecting the old ACNC
network and the new backbone network is placed in the
ICC Emulator Subsystem, IES. Operating a number of 9.6
kbps lines, the Gateway acts against the ACNC as multiple
ICC's.
Against the new backbone network the Gateway maps interactive-,
printer-, and host-traffic into the virtual protocols
for interactive terminals and file transfer, respectively.
In the proposed baseline system configuration the Gateway
function has been placed in separate dualized Processor
Units.
3.2.2.2 I̲n̲t̲e̲r̲f̲a̲c̲e̲s̲ ̲a̲n̲d̲ ̲P̲r̲o̲t̲o̲c̲o̲l̲s̲
o The protocols used by the different traffic types
are shown in 4 figures. A short overview of the
different protocols is given. The interfaces
in the system are also illustrated in a figure supplied
with short descriptions. Further details on these
subjects are given in section 6.
In fig. III 3.2.2.2-1 (1-4) the protocols used are
shown. When selecting the protocols the following
has been observed:
- Concurrence with the OSI reference model
- Compatibility with existing networks
- Flexibility
- Graceful evolution.
In fig. III 3.2.2.2-2 the submodules used are shown.
The following external interfaces are identified:
- Backbone network - Existing Terminal Network
- Backbone network - Host computer
- Backbone network - Existing Air Canada Network
(ACNC)
- Backbone network - ARINC/SITA/TELEX/CNT
The interfaces are made up from the above mentioned
protocols, allowing for an easy implementation of changes.
Figure III 3.2.2.2-1(1)
Figure III 3.2.2.2-1(2)
Figure III 3.2.2.2-1(3)
Figure III 3.2.2.2-1(4)
Figure III 3.2.2.2-2
3.2.2.2.1 P̲r̲o̲t̲o̲c̲o̲l̲s̲
AAP: Application Protocol. This is constituted
by the user selected application program in
the host computer and is not included in this
bid.
TP: Terminal Protocol. This protocol depends on
the actual terminal equipment used (e.g. AC407).
The protocol includes things like cursor control
and split screen options.
PP: Printer Protocol. Depends on printer type.
One of the functions of this protocol will
be to adapt the message to the actual line
length of the printer.
VTP: Virtual Terminal Protocol. To support an easy
introduction of new terminal types in the network,
a virtual terminal protocol has been defined.
This protocol is implemented as specified
in the ISO paper ISO/TC97/SC16 N666: Generic virtual
terminal model and service description. In
the Terminal Access Subsystem the virtual protocol
is mapped to the actual terminal protocol.
FTP: File Transfer Protocol. The protocol supports
the protected message service. It is implemented
as specified in FTP-B(80), also known as the Blue
Book. In the Terminal Access Subsystem the
virtual protocol is mapped to the actual printer
protocol.
CSU: Command System User. This is an internal version
of the VTP/FTP used between the front-end processor
and the host computer. The mapping between
the VTP/FTP and the CSU is done by the host
access subsystem (HAS).
TSL: Terminal Session Layer. This layer controls
the sign-in and sign-out procedures. Also
it controls terminal operation during multi-sign-in.
Billing information, such as log-in time per host
and number of characters transferred, is collected
by this layer and passed on to the Network
Control Center using the ISL.
ISL: Internal Session Layer. This is the internal
network support of the functions of the TSL.
Also the layer supports session parameter setting
during sign-on:
- Priority
- Rate
- Protected/Non-protected.
- Test/operational/other
PFC: Port Flow Control. This is the session layer
of the front-end processor and provides for
initialization of a logical port (LP) session,
acknowledgement of port data units, retransmission
and flow control.
TTL: Terminal Transport Layer. Supports message
segmentation and assembly. Solves printer
contention problem. The protocol is terminal
dependent.
NTI: Nodal Transport Interface. This is a very
primitive protocol supporting the interface
between High level Service Subsystem (HSS)
and Nodal Switch Subsystem (NSS). The protocol
asks for the setup of a specific virtual circuit
in the network.
ITL: Internal Transport Layer. Performs the end-to-end
control of messages. It also creates and dismantles
virtual calls throughout the network.
"X.25/3": Switches packets from node to node via
virtual circuits. Flow control is maintained.
"X.25/2": Transmits frames from node to node. In
case of error, frames will be retransmitted.
For satellite links selective retransmission
will be implemented.
"X.25/1": The physical layer will be X.21 or X.21 bis.
IPT, IPN, ILL, IPL: These protocols are all defined
by the AC200 ICC Intelligent Communication
Controller Procurement Specification and
will be implemented as defined.
QIF: Queue Interface. Very simple interface transferring
packets from Terminal Access Subsystem to High
level Service Subsystem and vice versa.
LPM: Logical Port Multiplexer. Transport layer
of the Host Access Subsystem Protocols.
SAI: Sub-Architectural Interfaces. Not part of
the distributed communications architecture
(DCA) used in the front-end processor. Implementation
dependent.
SITA, ARINC, TELEX etc.: These interfaces will be
implemented according to the detailed specifications
of the actual systems.
3.2.2.2.2 I̲n̲t̲e̲r̲f̲a̲c̲e̲s̲
Backbone network - Existing Terminal Network
This interface is made up by the Terminal Access
Subsystem (TAS) situated in the Communication Processors
("nodes"). The subsystem supports the following
protocols: TP, PP, TSL, TTL, IPN, ILL, IPL and
QIF. The physical interface is made up by a number
of Line Termination Units (LTU's), initially supporting
30 communication lines at 9.6 kbps. The interface
is carefully designed to support the existing terminal
network.
Backbone network - Host computer
This interface is made up by the Host Access Subsystem
(HAS). The HAS is built upon the UNIVAC host interface
as designed by Christian Rovsing. Other hosts
supported are IBM. Based upon the microprogrammable
high speed channel interface used in the UNIVAC
and IBM interfaces, new interfaces to most host
types can be developed with short lead time.
Backbone network - Existing Air Canada Network
In the tender this interface is defined to correspond
to 12 ICC's of the type used in the Terminal Access
Network. To support this, the ICC Emulator Subsystem
(IES) emulates a complete ICC (Terminal) ACDN interface.
To do this, the following protocols are supported:
TP, PP, TSL, TTL, IPT, ILL and IPL. A mapping
is performed to the virtual protocol as used internally
in the network. The IES is situated in the so-called
gateway processor.
Backbone network - ARINC, SITA etc.
A number of subsystems are defined, each supporting
one of the mentioned interfaces. The subsystems
emulate the corresponding systems from the lowest
to the highest protocol level. A mapping is performed
to the Virtual Terminal Protocol as used internally
in the network.
3.2.2.3 P̲e̲r̲f̲o̲r̲m̲a̲n̲c̲e̲
o This section discusses the installed processing
capacity as well as the response time introduced
by the nodal network.
3.2.2.3.1 I̲n̲s̲t̲a̲l̲l̲e̲d̲ ̲C̲a̲p̲a̲c̲i̲t̲y̲
Initially, one active PU and one redundant PU would be
sufficient. The theoretical maximum throughput of one
PU with four CPU's doing pure packet switching is
325 packets/sec.
Various overheads are estimated to reduce the theoretical
maximum for the same PU to
295 packets/sec.
With a facility utilization of less than 70% on all
multi-server queues, the rated throughput for one PU
becomes 206 packets/sec.
For the 1985 processing requirement, 3 such PU's were
suggested, hence estimated to switch
618 packets/sec.
With the present SUPRA bus capacity and the estimated
PU load of 206 packets/sec. of 30 characters on average,
at least 16 PU's can be interconnected, producing a packet
switching capacity of at least 3200 packets/sec., which
should cover twice the projected 1991 switching requirements.
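The figures above can be re-derived by simple arithmetic; the 325 packets/sec theoretical maximum, the overhead-reduced 295 packets/sec, and the 70% utilization ceiling are taken from the text, and the computation just chains them together.

```python
# Re-derivation of the installed-capacity figures quoted above
# (input numbers taken from the text; only the arithmetic is shown).
theoretical_max = 325          # packets/sec, one PU with four CPU's
after_overheads = 295          # packets/sec, after estimated overheads
utilization_cap = 0.70         # ceiling on multi-server queues

rated_per_pu = int(after_overheads * utilization_cap)   # rated PU throughput
three_pu_capacity = 3 * rated_per_pu                    # the 1985 suggestion
sixteen_pu_capacity = 16 * rated_per_pu                 # "at least 3200"
```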
3.2.2.3.2 R̲e̲s̲p̲o̲n̲s̲e̲ ̲T̲i̲m̲e̲s̲
The response time is affected by the node cross-office
time for the Air Canada Data Network. Assuming that
a transaction has to pass 2 nodes through the backbone,
the transaction-and-response together will pass across
four nodes.
The node cross office time is estimated at
tcross,mean = 41 ms 4 nodes passed in 164 ms
tcross,max 65% = 85 ms 4 nodes passed in 340 ms
tcross,max 90% = 131 ms 4 nodes passed in 524 ms
tcross,max 95% = 157 ms 4 nodes passed in 628 ms
for mean and for various confidence levels. These
figures express how much delay is added to the total
response time by the packet switching network. These
figures are based on the model described in 3.2.2.4.
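The four-node figures are simply the single-node cross office times scaled by the number of nodes traversed, as the following arithmetic check illustrates.

```python
# Single-node cross office times (ms) from the table above, and the
# added delay when transaction plus response traverse four nodes.
cross_office_ms = {"mean": 41, "max 65%": 85, "max 90%": 131, "max 95%": 157}
NODES_TRAVERSED = 4   # 2 nodes for the transaction, 2 for the response

added_delay_ms = {k: v * NODES_TRAVERSED for k, v in cross_office_ms.items()}
```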
3.2.2.4 M̲o̲d̲e̲l̲l̲i̲n̲g̲
To obtain a system with a balanced distribution of
resources, the following guidelines are applied for
dimensioning.
D̲a̲t̲a̲ ̲F̲l̲o̲w̲ ̲M̲o̲d̲e̲l̲
The Data flows through the network as finite entities
being transferred from queue to queue by various processing
elements such as LTU's, microprocessors, CPU's and
SUPRA links. Some of these queues are served by only
one server (e.g. outgoing line queues within an LTU),
while other queues have multiple servers (e.g. ingoing
queues in Processor Units with say four load sharing
CPU's).
The queues are served by software processes of which
several may timeshare one CPU.
D̲i̲m̲e̲n̲s̲i̲o̲n̲i̲n̲g̲ ̲R̲u̲l̲e̲
For each queue, the facility utilization of the process
or processes serving it, should not exceed 60% for
single server queues and 70% for selected multiserver
queues.
Figure III 3.2.2.4-1a shows a simplified data flow,
which indicates the queues and processes involved
in transferring a packet through a node. Figure III
3.2.2.4-1b gives the essential data flow model
figures.
Fig. III 3.2.2.4-1a
P̲r̲o̲c̲e̲s̲s̲i̲n̲g̲ ̲T̲i̲m̲e̲s̲
tRS2 = 1 ms    packet handling in LTU
tRS3 = 6 ms    packet handling in CPU
tTS3 = 4 ms    packet handling in CPU
tXS3 = 1 ms    packet handling in CPU
tTS2 = 0.5 ms  packet handling in LTU
tTS1 = 3.7 ms  packet handling in line at 56 kbps
tS   = 16.2 ms total
F̲a̲c̲i̲l̲i̲t̲y̲ ̲U̲t̲i̲l̲i̲z̲a̲t̲i̲o̲n̲ ̲F̲a̲c̲t̲o̲r̲s̲
ρ4CPU = 0.7 ; (4-server queues)
ρCPU  = 0.6 ; (single server queues)
ρLTU  = 0.5 ; (LTU microprocessor)
ρLINE = 0.6 ; (output com line)
ρSPL  = 0.3 ; (supra link)
Q̲u̲e̲u̲e̲i̲n̲g̲ ̲T̲i̲m̲e̲s̲ mean and at various confidence levels
       mean      max at 65%   max at 90%
tQ1    9 ms      34 ms        59.5 ms
tQ2    1.5 ms    2 ms         2 ms
tQ2    6 ms      22 ms        38.5 ms
tQ6    8.4 ms    11 ms        15 ms
tQ     24.9 ms   69 ms        115.0 ms
N̲o̲d̲e̲ ̲C̲r̲o̲s̲s̲ ̲O̲f̲f̲i̲c̲e̲ ̲T̲i̲m̲e̲ ̲
Mean:       tS + tQ mean     = 41 ms
Max at 65%: tS + tQ max 65%  = 85 ms
Max at 90%: tS + tQ max 90%  = 131.2 ms
Fig. III 3.2.2.4-1b
Essential Data Flow Model Figures
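The node cross office time quoted above is the sum of the fixed processing times (tS) and the queueing times (tQ) listed in fig. III 3.2.2.4-1b; a quick check of the arithmetic (values copied from the model, mean figure rounded to 41 ms in the text):

```python
# Processing and queueing times (ms) from the data flow model above.
processing_ms = [1, 6, 4, 1, 0.5, 3.7]   # LTU, 3 x CPU, LTU, 56 kbps line
queueing_mean_ms = [9, 1.5, 6, 8.4]
queueing_max90_ms = [59.5, 2, 38.5, 15]

t_s = round(sum(processing_ms), 1)                          # total tS
cross_office_mean = round(t_s + sum(queueing_mean_ms), 1)   # quoted as 41 ms
cross_office_max90 = round(t_s + sum(queueing_max90_ms), 1)
```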
3.2.3 F̲u̲t̲u̲r̲e̲ ̲G̲r̲o̲w̲t̲h̲ ̲(̲8̲5̲ ̲a̲n̲d̲ ̲b̲e̲y̲o̲n̲d̲)̲
o The hardware and software structures fully support
openended growth beyond the projected backbone
network expansion.
Potential growth areas are:
- Packet switching capacity (3200 packets/sec)
- Subscriber capacity (20,000/node)
- Internodal megabit trunk capacity (2 Mbit/s)
- Voice switching through the PSN
- Internodal megabit satellite links
- Front End Processor connections to Host Access
Network
- Videotex
- Telefax
The Nodal packet switching capacity growth suggested
above can be supported by approx. 16 PU units as we
know them today. Ongoing development and research may
however lower this figure considerably before 1991.
The subscriber capacity of 20,000/node is suggested
to cover the approximate tripling of the capacity needs
of the system during the period 1985-1991.
The (internodal) trunk capacity in the megabit range
will be necessary both for the connection of the Passenger
Management System and for the Host-to-Host links in
the Host Access Network. The CR80 module, the STI,
presently interfacing to the 16 Mbit/s SUPRA bus
and the 1.8 Mbit/s TDX bus, can be used, together with
the appropriate adapter, as the interface to such megabit
communication lines as well.
This also supports megabit satellite loops when needed,
since the STI can have at its disposal up to 1 megaword
of memory; a large buffering capacity and a selective
retransmission function are needed in order to provide
efficient transmission over long-delay satellite
hops.
Whether based upon present-day digitized voice transmission
using 56 kbps per channel or on the latest compressed-coding
transfer chips, by which voice channels can be as narrow
as 2400 bps, the Air Canada Data Network equipment
is well suited to include such types of traffic too.
This is indicated in Appendix G, which describes how
the CR80 is used as a Common Branch Exchange and as
an automated office processor with a local network.
Videotex and Telefax would also be attachable, directly
or through other public networks.
These examples are intended to illustrate the resilience
of the CR80 System Approach:
The system is balanced, i.e. it does not create inhibiting
bottlenecks when growth occurs.
3.2.4 O̲p̲t̲i̲o̲n̲s̲
Presently foreseen is the need for encryption facilities.
Such implementations will benefit from our significant
experience in this field.
Soon to come will be the need for further Host Access
Subsystems, which are parts of the coming Host Access
Network. The CR80 system concept is well suited for
such applications, as shown with the UNIVAC host example
offered, and we look forward to supplying parts of the
Host Access Network as amendments to the backbone network.
For future efficient interconnection of the ACDN and
public data networks, the X.75 Gateway may be of interest.
3.3 T̲e̲l̲e̲c̲o̲m̲m̲u̲n̲i̲c̲a̲t̲i̲o̲n̲s̲
Recently, Christian Rovsing's maintenance subcontractor,
CNCP Telecommunications, responded to an Air Canada
Request for Information (RFI). Their response to the
RFI, dated December 15, 1980, is included in its entirety
in this proposal as Appendix E.
Christian Rovsing feels this document provides Air
Canada with sufficient details on CNCP's existing network
services and offerings, as well as their plans for
the future, to enable Air Canada to plan for a total
solution to their communication requirements.
LIST OF CONTENTS Page
4. OPERATOR INTERFACE 2
4.1 Introduction 2
4.2 Terminal User Interface 3
4.2.1 Sign-in 3
4.2.2 Sign-in During Congestion 3
4.2.3 Multiple Sign-in 3
4.2.4 Sign-out 4
4.2.5 Multiple sign-out 5
4.2.6 Host Initiated Sign-out 5
4.3 Network Control Center Functions 6
4.3.1 Monitoring 6
4.3.2 Concentrator Network 6
4.3.2.1 Concentrator Network Monitoring 6
4.3.2.2 Concentrator Network Control 6
4.3.3 Host Links and Other Network Links 7
4.3.3.1 Monitoring Links and Trunks 7
4.3.3.2 Controlling Links and Trunks 7
4.3.4 ACNC Network 7
4.3.5 Statistics and Reports 7
4.3.6 Distribution 8
4.4 Network Management Host 9
4.5 Electronic Mail Host
Control and Monitoring Functions 10
4.5.1 Electronic Mail Operator Interface 11
4.5.2 PMS User Interface 11
4.5.2.1 FIKS Definition and System Elements 12
4.5.2.2 System Overview and Functional Summary 13
4.5.2.3 FIKS Nodal Network 13
4.5.2.4 FIKS Operator Interface 16
4.6 Test Sessions 20
4.6.1 Software Development 20
4.6.2 Test Network 21
4.6.3 Volume Generator 21
4.7 Data Base Manipulation 22
4. O̲P̲E̲R̲A̲T̲O̲R̲ ̲I̲N̲T̲E̲R̲F̲A̲C̲E̲
4.1 I̲n̲t̲r̲o̲d̲u̲c̲t̲i̲o̲n̲
This section describes the interactive "man-machine"
communication for the following set of users:
- Terminal Users
- NCC/NMH Operators
- EMH Operators
At terminal user level only session control commands
are discussed in this context.
At operator level the complete set of interactive procedures
for network monitoring and control is described. The
procedures for EMH and NMH operators are described
in brief. The operator functions appear in further
detail in sections 6.8 and 6.10.
4.2 T̲e̲r̲m̲i̲n̲a̲l̲ ̲U̲s̲e̲r̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲
o This section deals with the terminal operator's
interface to the network and his initial access
to one or more host computers.
4.2.1 S̲i̲g̲n̲-̲i̲n̲ ̲
To sign in to a host computer, the terminal must be
under operational control of the Air Canada Data Network
(ACDN), and the terminal operator must provide initial
information consisting of
- Sign-in request
- User identification
- Security password
- Host identification
The security password will indicate the operator activity
allowed in the host.
The ACDN will route the sign-in request to the selected
host after check for proper input format.
An illegal or unsuccessful sign-in will result in an
error message to the terminal operator.
After a successful sign-in the host computer notifies
the terminal operator that
- The sign-in is accepted
- A session is created
- The host is ready for further requests.
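The format check and routing decision described above can be sketched as follows. The field names and return strings are invented for the example; the ACDN's actual message formats are not specified here.

```python
# Illustrative sketch of the sign-in handling: the network checks the
# input format and either routes the request to the selected host or
# returns an error message to the terminal operator.
REQUIRED = ("user_id", "password", "host_id")

def handle_sign_in(request):
    """Return a routing decision or an error string for one request."""
    if any(not request.get(field) for field in REQUIRED):
        return "ERROR: illegal sign-in, improper input format"
    return "route to host " + request["host_id"]
```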
4.2.2 S̲i̲g̲n̲-̲i̲n̲ ̲D̲u̲r̲i̲n̲g̲ ̲C̲o̲n̲g̲e̲s̲t̲i̲o̲n̲
Some selected terminal operators have the opportunity
to sign in to high priority applications during congested
conditions.
A certain minimum level of service is maintained and
acceptable response times are obtained.
4.2.3 M̲u̲l̲t̲i̲p̲l̲e̲ ̲S̲i̲g̲n̲-̲i̲n̲ ̲
Any terminal is allowed to sign-in to more than one
host computer.
Multiple sign-in can be performed as
- a single sign-in (sec. 4.2.1) repeated with different
host identifications
- a single sign-in with two or more host identifications.
When a multiple sign-in sequence has been performed, further
terminal input messages will be routed to the host
mentioned last in the sign-in sequence.
If the operator wants to communicate with another host,
at which he is signed-in, he must send a 'host change
request' containing a host identification, to the ACDN.
The future terminal input will then be routed to the
'new' host, until a new 'host change request' with
another host identification is entered.
The ACDN can on request inform the terminal operator
about
- which hosts he is signed in at
- which host he is communicating with at the moment.
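The routing behaviour described in this section can be sketched as a small state model: input goes to the host signed in to last, until a 'host change request' selects another host the terminal is already signed in at. The class and method names are invented for illustration.

```python
# Sketch of multiple sign-in routing state kept per terminal.
class TerminalSession:
    def __init__(self):
        self.hosts = []        # hosts this terminal is signed in at
        self.current = None    # host currently receiving terminal input

    def sign_in(self, host):
        if host not in self.hosts:
            self.hosts.append(host)
        self.current = host    # the latest sign-in receives the input

    def host_change(self, host):
        # A 'host change request' only works for hosts already signed in at.
        if host not in self.hosts:
            return "ERROR: not signed in at " + host
        self.current = host
        return "now communicating with " + host
```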
4.2.4 S̲i̲g̲n̲-̲o̲u̲t̲
To sign-out a terminal the operator has to enter a
sign-out request to the ACDN.
The ACDN searches a table to determine which host
the terminal is signed in at, and sends a request to
the host for permission to sign out.
If the sign-out is unsuccessful (the host does not
respond to the sign-out request), an error message
will be routed to the terminal, and the operator has
to repeat the sign-out request.
If the sign-out is successful, but the host refuses
to let the terminal sign out (the terminal might have
some processing to complete before the sign-out), the
operator will be notified. The terminal will still
be signed in to the host.
If the sign-out is successful and the terminal is allowed
to sign-out, the link between the terminal and the
specified host is dismantled.
4.2.5 M̲u̲l̲t̲i̲p̲l̲e̲ ̲s̲i̲g̲n̲-̲o̲u̲t̲
A user who is signed in to more than one host may
sign out of any of the hosts individually. To do this
he has to send a sign-out request to the network followed
by one or more host identifications.
The criteria for a successful multiple sign-out are
as for a single sign-out. The terminal will receive
an error message or an acknowledgement message from
each host as in sec. 4.2.3.
If no host identifications are given in the sign-out
request, it is assumed that the operator requests sign-out
from all hosts at which he is signed in.
4.2.6 H̲o̲s̲t̲ ̲I̲n̲i̲t̲i̲a̲t̲e̲d̲ ̲S̲i̲g̲n̲-̲o̲u̲t̲
Any host in the network has the option, at any time,
of signing-out a terminal which is signed-in to it.
The host will send a sign-out request to the network
and the terminal will receive a sign-out notification.
The host initiated sign-out can have several reasons,
e.g.
- terminal inactivity
- host problems
- reconfiguration at ACDN.
4.3 N̲e̲t̲w̲o̲r̲k̲ ̲C̲o̲n̲t̲r̲o̲l̲ ̲C̲e̲n̲t̲e̲r̲ ̲F̲u̲n̲c̲t̲i̲o̲n̲s̲
o A global view of the network can be provided at
the control center (NCC). The network monitoring
and control facilities encompass the
- Concentrator network
- Host links and other network links
- Trunks
- ACNC network via gateway
- Statistics and reports
- Distribution
4.3.1 M̲o̲n̲i̲t̲o̲r̲i̲n̲g̲
The network status is illustrated in table form on
a CRT or graphically on a special display unit.
The monitoring functions are available at the NCC and
to technicians in the field. Monitoring functions relevant
for the terminal user can be retrieved by the operator.
4.3.2 C̲o̲n̲c̲e̲n̲t̲r̲a̲t̲o̲r̲ ̲N̲e̲t̲w̲o̲r̲k̲
4.3.2.1. C̲o̲n̲c̲e̲n̲t̲r̲a̲t̲o̲r̲ ̲N̲e̲t̲w̲o̲r̲k̲ ̲M̲o̲n̲i̲t̲o̲r̲i̲n̲g̲
The connection status of all devices is available
for display. The status includes
- Sign-in status (terminal users)
- Terminal status (which hosts the terminals are
assigned to)
- Status of physical and logical paths to the destination,
seen from the NODE (up/down/error rate).
- Status of concentrators and modems
- Traffic load (queue length for printers)
4.3.2.2 C̲o̲n̲c̲e̲n̲t̲r̲a̲t̲o̲r̲ ̲N̲e̲t̲w̲o̲r̲k̲ ̲C̲o̲n̲t̲r̲o̲l̲
Different control commands make it possible to manage
the traffic in the field. Control commands concerning
terminals and printers are:
- Up/down/switchover the concentrator or controller
- Connect and disconnect terminals and printers
- Set alternate delivery for a printer
- Set duplicate delivery for a printer
- Discard a message currently being delivered or
next to be delivered
- Security interrogate the user
4.3.3 H̲o̲s̲t̲ ̲L̲i̲n̲k̲s̲ ̲a̲n̲d̲ ̲O̲t̲h̲e̲r̲ ̲N̲e̲t̲w̲o̲r̲k̲ ̲L̲i̲n̲k̲s̲
4.3.3.1 M̲o̲n̲i̲t̲o̲r̲i̲n̲g̲ ̲L̲i̲n̲k̲s̲ ̲a̲n̲d̲ ̲T̲r̲u̲n̲k̲s̲
The status of links and trunks is available for display
and shows:
- Status of links and trunks (up/down/failed/dialed
up)
- load of links and trunks
4.3.3.2 C̲o̲n̲t̲r̲o̲l̲l̲i̲n̲g̲ ̲L̲i̲n̲k̲s̲ ̲a̲n̲d̲ ̲T̲r̲u̲n̲k̲s̲
A set of commands can manage the traffic on the network
- Open and close links and trunks
- Switch to alternative routing
4.3.4 A̲C̲N̲C̲ ̲n̲e̲t̲w̲o̲r̲k̲
All monitoring and control functions available at the
new network are also available from the "old" network.
4.3.5 S̲t̲a̲t̲i̲s̲t̲i̲c̲s̲ ̲a̲n̲d̲ ̲R̲e̲p̲o̲r̲t̲s̲
Statistic informations are continuously collected and
stored on a central data base from where the information
can be retrieved at any time. The information is used
to indicate activity and performance during the past
5 minutes, 1 hour or a number of hours up to 24 hours.
The statistic is used for maintenance, error detection,
planning and cost. By means of the statistic, it is
possible to charge costs to the user level.
The statistic includes:
- no. of transactions per terminal
- no. of characters sent and received per terminal
- no. of errors per terminal
- error rate on trunks and links
- no. of transactions to different hosts, other networks
and the gateway
- response time on terminals which support
this.
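One simple way such per-terminal counters could be accumulated for later retrieval from the central data base is sketched below; the class and metric names are invented for the example.

```python
from collections import defaultdict

# Illustrative accumulation of per-terminal statistics counters.
class StatisticsBase:
    def __init__(self):
        # terminal id -> metric name -> running count
        self._counts = defaultdict(lambda: defaultdict(int))

    def record(self, terminal, metric, amount=1):
        self._counts[terminal][metric] += amount

    def report(self, terminal):
        # Snapshot of all counters collected for one terminal.
        return dict(self._counts[terminal])
```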
4.3.6 D̲i̲s̲t̲r̲i̲b̲u̲t̲i̲o̲n̲
The network configuration is controlled at the NCC.
Changes in the configuration are performed from the
NCC, possibly on request from the NMH. Transfer
of new software to the nodes can be made without disrupting
normal network operation.
Central control can be taken over by the back-up NCC
at short notice.
Short information can be broadcast to a closed group
of terminals.
The group can be
- all terminals (teletypes and CRT's)
- only teletype terminals
- terminals connected to a specified host.
4.4 N̲e̲t̲w̲o̲r̲k̲ ̲M̲a̲n̲a̲g̲e̲m̲e̲n̲t̲ ̲H̲o̲s̲t̲ ̲
New configuration tables are generated at the NMH and
loaded to the affected nodes under the control of the
NCC. New software modules can also be routed via the
network.
The input to the table generation process can come from
any terminal able to establish a session with the
NMH for the purpose of table generation.
The new tables are stringently checked to avoid fatal
errors.
The NMH is able to maintain several configurations,
which is needed when the network configuration changes
rapidly or when a change back to a previous version is needed.
The tables included are:
- Tables indicating hardware devices used
- Tables for hardware network addresses
- Tables indicating the statistics required
- Routing tables (primary, secondary)
- User security tables (user id., password)
- User cost tables.
4.5 E̲l̲e̲c̲t̲r̲o̲n̲i̲c̲ ̲M̲a̲i̲l̲ ̲H̲o̲s̲t̲ ̲C̲o̲n̲t̲r̲o̲l̲ ̲a̲n̲d̲ ̲M̲o̲n̲i̲t̲o̲r̲i̲n̲g̲ ̲F̲u̲n̲c̲t̲i̲o̲n̲s̲
o The EMH operator interface makes the different
EMH functions visible and usable to the EMH operators.
Two essential operator interfaces are identified, namely:
the electronic mail interface and the protected message
switching (PMS) interface.
4.5.1 E̲l̲e̲c̲t̲r̲o̲n̲i̲c̲ ̲M̲a̲i̲l̲ ̲O̲p̲e̲r̲a̲t̲o̲r̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲
o This interface is used by the operator for development
and implementation of the electronic mail service.
The outermost physical interface consists of CRTs
and a line printer. These physical devices are driven
by TTY drivers and a line printer driver, respectively.
On the logical level the EMH is connected to the backbone
network as an ordinary host and thus provides the
EMH services to any user connected to the backbone
network.
The orders or commands issued by the operator are directed
to the command interpreter (CMI), which reads and
interprets the commands.
As an overall supervisor of all terminals and resources
the terminal operating system (TOS) supervises the
attached terminals.
To support software development, a number of support
software tools exist, such as:
- editor
- language compilers
- linkers
- utilities
This support software is further described in section
6.3.
4.5.2 P̲M̲S̲ ̲U̲s̲e̲r̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲
o As a PMS interface, the DANISH DEFENCE INTEGRATED
COMMUNICATIONS SYSTEM, FIKS, supports similar and
more advanced facilities.
The following sections give an overview of FIKS and
describe how the operator interfaces to the
communication system are implemented.
This shall be considered as an example implementation
of an electronic mail system and protected message
switching functions.
4.5.2.1 F̲I̲K̲S̲ ̲D̲e̲f̲i̲n̲i̲t̲i̲o̲n̲ ̲a̲n̲d̲ ̲S̲y̲s̲t̲e̲m̲ ̲E̲l̲e̲m̲e̲n̲t̲s̲
The DANISH DEFENCE INTEGRATED COMMUNICATIONS SYSTEM,
FIKS, is a fully integrated communications network
for the rapid, reliable, and efficient automated transfer
of message and data traffic, shared by multiple users
for a variety of Danish military and defence applications.
FIKS provides dedicated network facilities and nodal
switching centers to service communication centers
and interconnect data terminals and computer systems
geographically distributed throughout Denmark.
The FIKS network facilities consist of dedicated high-speed
internodal trunks shared by all users and dedicated
lines connecting users and small comcenters to the
nodal switching centers.
The nodal switching centers are configured from three
functional entities:
the Node - providing access to FIKS for data
terminals, interfacing MEDEs, and
performing network-oriented functions
common to both data and message traffic,
the MEDE - message entry and distribution
equipment, providing access to FIKS for
communications centers and performing
terminal-oriented functions related to
message traffic,
the SCC - system control center, providing
network supervision and control and
functioning as a software development and
maintenance center.
These FIKS system elements may be co-located and physically
integrated.
Initially, FIKS is structured as an 8-Node grid network
whose topology is shown in figure III 4.5.2.1-1 and
described in the following sections.
4.5.2.2 S̲y̲s̲t̲e̲m̲ ̲O̲v̲e̲r̲v̲i̲e̲w̲ ̲&̲ ̲F̲u̲n̲c̲t̲i̲o̲n̲a̲l̲ ̲S̲u̲m̲m̲a̲r̲y̲
FIKS, The Danish Defence Integrated Communication System,
is an integrated and fully automated message switching
and data transfer communication system used by the
Danish Armed Forces. It replaces the individual torn-tape
message traffic networks and dedicated data circuits
until now operated by the three services: army, navy,
and air force.
4.5.2.3 F̲I̲K̲S̲ ̲N̲o̲d̲a̲l̲ ̲N̲e̲t̲w̲o̲r̲k̲
FIKS consists of a multinode network geographically
distributed throughout Denmark (Fig. III 4.5.2.1-1).
As initially structured, 8 Nodes are arranged in a
grid configuration and interconnected via full-duplex
trunks operating at 9.6 kbit/s. These internodal trunks
are permanently leased circuits backed up by automatically
dialed PTT data circuits. The internodal trunks may be
upgraded to 64 kbit/s when higher traffic rates are
required.
Message and data traffic is interchanged between military
users under control of computerized nodal switching
centers. Node and MEDE (M̲essage E̲ntry and D̲istribution
E̲quipment) processors are located at all Nodes.
The internodal trunk circuits carry a mixture of message
and data traffic. The 9.6 kbit/s bandwidth is dynamically
allocated between message and data sources. A minimum
of 1.2 kbit/s will always be available for message traffic,
and 2.4 kbit/s is reserved for signalling and protocol
overhead (see fig. III 4.5.2.3-1). The remaining bandwidth
of 6.0 kbit/s is divided into 20 time slots, each with
a capacity of 300 bps. These slots are dynamically
allocated to continuous and discontinuous (polling,
contention and dial-up) data traffic. Data traffic
sources will be allowed to use the 300 bps slots in
accordance with bandwidth requirements and priority.
Up to 15 different priority levels are used, and the
nodal software automatically preempts lower-priority
data users if the bandwidth becomes too small to
accommodate all data users simultaneously. Preemption
should, however, only take place when the network becomes
partly inoperable due to trunk or equipment failure.
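The priority-ordered slot allocation described above can be sketched as follows. The request format and tie-breaking are assumptions; the fixed quantities (20 slots of 300 bps carved from the 6.0 kbit/s data band, priority 1 highest of 15) come from the text.

```python
SLOTS = 20  # 20 x 300 bps slots carved from the 6.0 kbit/s data band

def allocate_slots(requests):
    """Grant 300 bps slots in priority order (1 = highest of 15).
    When demand exceeds the 20 slots, lower-priority sources are
    effectively preempted: nothing is left for them (sketch)."""
    granted = {}
    free = SLOTS
    for user, priority, wanted in sorted(requests, key=lambda r: r[1]):
        take = min(wanted, free)
        if take:
            granted[user] = take
            free -= take
    return granted

requests = [("host-A", 1, 12), ("host-B", 5, 8), ("host-C", 10, 6)]
granted = allocate_slots(requests)
```

With these requests the two higher-priority sources exhaust all 20 slots, so the priority-10 source receives none, mirroring the preemption behaviour during partial outages.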
Fig. III 4.5.2.1-1
Fig. III 4.5.2.1-2
4.5.2.4 F̲I̲K̲S̲ ̲O̲p̲e̲r̲a̲t̲o̲r̲ ̲I̲n̲t̲e̲r̲f̲a̲c̲e̲
Messages enter the FIKS Network from a number of message
preparation and receiving terminals such as teleprinters
and visual display units. Each MEDE initially serves
up to 30 full-duplex terminals. However, the total capacity
of a MEDE is 242 terminals and 12 interfaces to host
computers. Message preparation is interactive with
prompts from the MEDE computer. An example of a message
preparation format (SMF) is shown in figure III 4.5.2.4-1.
The underlined portions are either prompts or other
computer inserted information. Address information
is keyed-in as a character representing the MEDE to
which the terminal is connected, followed by 3 digits.
The computer replaces this with the correct address,
which then appears in the delivered message (Figure
III 4.5.2.4-2).
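The address expansion described above (one character naming the MEDE, followed by 3 digits, replaced by the full address) amounts to a directory lookup. The directory contents below are illustrative; only the short form E104 for TACDEN appears in the preparation format figure:

```python
MEDE_DIRECTORY = {
    # (MEDE character, 3-digit short form) -> full delivered address
    ("E", "104"): "TACDEN",
    ("X", "115"): "SHAPE",   # assumed pairing for illustration
}

def expand_address(short_form):
    """Replace the keyed-in short address with the full address
    that the computer inserts into the delivered message."""
    mede, digits = short_form[0], short_form[1:]
    return MEDE_DIRECTORY[(mede, digits)]
```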
Message terminal operators can use a number of interactive
procedures such as:
- preparation (4 types)
- coordination
- release
- retrieval
- readdressing
- distribution, local
- log on
- log off
- special handling
- editing
The MEDEs are manned 24 hours a day, and MEDE supervisors
control the security and traffic of the system
and its terminals. A number of special procedures are
available for supervisors:
- distribution (2 types)
- control of terminal queue status
- re-arrangement of queues
- relocation of queues
- re-routing of terminal traffic
- block/unblock terminals
- security interrogation of terminals
- establishment of PTT data net connections
- up-dating of route and address tables
- security profile handling
- call-up of daily traffic statistics
and many other procedures.
Full accountability is provided for all messages.
Messages are queued by precedence (Flash, Immediate,
Priority, Routine, and two other as yet unspecified levels)
to the Node for network routing and for automatic distribution
to local addressees.
All outgoing and incoming messages are stored at the
MEDEs for 10 days. SPECAT messages will be deleted
from local storage after transmission and delivery.
Retrieval of messages from the 10-day storage by authorised
users is provided. Messages can be retrieved by message
identification, subject indicator codes (SIC), and
date/time indication.
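The 10-day retention and the retrieval keys above can be sketched as a filtered scan of the local store. The archive record layout is an assumption for illustration; the retrieval keys (message identification, SIC, date/time) are those named in the text.

```python
from datetime import datetime, timedelta

def retrieve(archive, now, msg_id=None, sic=None):
    """Retrieve messages from the 10-day store by message id,
    SIC, and/or age (sketch)."""
    cutoff = now - timedelta(days=10)
    hits = []
    for m in archive:
        if m["time"] < cutoff:
            continue                        # outside the retention window
        if msg_id is not None and m["id"] != msg_id:
            continue
        if sic is not None and sic not in m["sics"]:
            continue
        hits.append(m)
    return hits

now = datetime(1982, 4, 29)
archive = [
    {"id": "ABC 123", "sics": ["RHQ"], "time": now - timedelta(days=2)},
    {"id": "ABC 124", "sics": ["RHQ"], "time": now - timedelta(days=12)},
]
```

Note that SPECAT messages would never appear in such a scan, since they are deleted from local storage after transmission and delivery.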
P̲r̲o̲c̲ PRE(CR)
A̲B̲C̲ ̲1̲2̲3̲ (CR)
F̲O̲R̲M̲A̲T̲T̲E̲D̲ ̲M̲E̲S̲S̲A̲G̲E̲ ̲A̲2̲1̲ ̲(̲C̲R̲)̲
P̲R̲E̲C̲ ̲A̲C̲T̲ 0 (CR)
P̲R̲E̲C̲ ̲I̲N̲F̲O̲ ̲ R (CR)
F̲M̲ / (CR) C̲H̲O̲D̲D̲E̲N̲
T̲O̲ AIG 1601 (CR)
X̲M̲T̲ (CR)
T̲O̲ E104/ (CR) T̲A̲C̲D̲E̲N̲
T̲O̲ CR)
I̲N̲F̲O̲ X115 (CR)
I̲N̲F̲O̲ (CR)
B̲T̲
C̲L̲A̲S̲S̲ NS (CR)
S̲P̲E̲C̲A̲T̲ (CR)
S̲I̲C̲ ̲ RHQ (CR)
TEXT
NNNN (CR)
B̲T̲
D̲I̲G̲/ (CR) 0̲1̲2̲3̲4̲7̲z̲ ̲j̲a̲n̲
P̲R̲O̲C̲
FIGURE III 4.5.2.4-1
FIKS MESSAGE PREPARATION FORMAT
(CR) = carriage return
EXAMPLE
0801 KAB
NATO RESTRICTED
0 R 012347z JAN 80 MSG ID ABC 123
FM CHODDEN
TO AIG 1601
TACDEN
INFO SHAPE
BT
NATO RESTRICTED
SIC RHQ
IN REPLY REFER TO TST 312.1-1227
SUBJECT CONTRACT NO FK 7900
IN ACCORDANCE WITH PARAGRAPH 16.5 OF THE SUBJECT CONTRACT
AMC IS PLEASED TO SUBMIT AN ORDER FOR THE OPTION FOR
ADDITIONAL RDS-V
PPI DISPLAYS AS FOLLOWS
QTY IN UNITED STATES DOLLARS
1-2 1000 DOLLARS EA
3-6 976 DOLLARS EA
THE EQUIPMENT SHALL INCLUDE THE RDS-V PPI DISPLAY/DATA
ENTRY AND TRACKBALL WITH THE NECESSARY SYSTEM MODIFICATION
TO ALLOW SEPARATION OF THE DISPLAY UP TO 3500 METERS.
DELIVERY SHALL BE ACCOMPLISHED AT THE RATE OF TWO PER
MONTH STARTING 10 MONTHS AFTER RECEIPT OF A CONTRACT
MODIFICATION. ALL OTHER ITEMS AND CONDITIONS SHALL
BE IN ACCORDANCE WITH THE SUBJECT CONTRACT.
BT
INT DIST 0-DIV
ACCEPTANCE TIME 020005z
RETRIEVAL TIME 020006z
NATO RESTRICTED
FIGURE III 4.5.2.4-2
FIKS HARD COPY EXAMPLE
4.6 T̲e̲s̲t̲ ̲S̲e̲s̲s̲i̲o̲n̲s̲
o Test facilities are used to develop new software
and to test
- New software enhancements
- Software bug fixes
- New node equipment
- Configuration
- NCC functions
- New terminal equipment
4.6.1 S̲o̲f̲t̲w̲a̲r̲e̲ ̲D̲e̲v̲e̲l̲o̲p̲m̲e̲n̲t̲
A terminal operator has the option of signing in to
a test host. This is done with a special sign-in request
containing the same user and host identification
information as a normal sign-in request.
The terminal will be marked as being in test mode and
can be signed-in to any other test host, but not to
an in-live host.
The traffic from in-live hosts will not be delivered
at the test terminal, since the terminal is marked
as being in test mode. It will be possible to reroute
the in-live traffic to another terminal.
While the terminal is signed-in to one or more test
hosts, the operator can develop both network and host
software.
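The test-mode isolation rule described above can be sketched as a check at sign-in time; the data shapes and error type are illustrative assumptions:

```python
def sign_in(terminal, host, test_request=False):
    """Sign-in rule from the text: once a terminal is marked as being
    in test mode it may sign in to any test host, but not to an
    in-live host (sketch)."""
    if test_request:
        terminal["mode"] = "test"          # special sign-in marks the terminal
    if terminal.get("mode") == "test" and not host["is_test"]:
        raise PermissionError("test-mode terminal may not sign in to a live host")
    terminal.setdefault("sessions", []).append(host["name"])

term = {}
sign_in(term, {"name": "TEST-HOST-1", "is_test": True}, test_request=True)
```

The complementary rule, that in-live traffic is not delivered to a terminal marked as being in test mode, would be enforced at delivery time by the same mode flag.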
4.6.2 T̲e̲s̲t̲ ̲N̲e̲t̲w̲o̲r̲k̲
A test network consisting of the network back-up modules
can be established.
On this network, some selected terminals and operators,
a Closed User Group, can sign in.
The test network will act like the in-live network.
This will give the test staff the option of testing
new software and equipment in a 'real' network.
No terminal signed-in at the test network will communicate
with the in-live network and vice versa.
4.6.3 V̲o̲l̲u̲m̲e̲ ̲G̲e̲n̲e̲r̲a̲t̲o̲r̲
A volume generator (ATES - Ref. Appendix F) is included
in the test network to test the network under load
conditions.
Volume tests will ensure that equipment or software
changes in the network have not compromised performance,
and will ascertain at what volume level various parts
of the network become saturated.
In addition the volume generator will be used as a
protocol tester.
4.7 D̲a̲t̲a̲ ̲B̲a̲s̲e̲ ̲M̲a̲n̲i̲p̲u̲l̲a̲t̲i̲o̲n̲
Two data bases are connected at the NMH:
- the statistics and cost/billing data base
- the configuration and inventory data base
It is possible for a terminal user to retrieve part of
the information stored in those data bases by using a
special interactive, query-oriented, high-level language.
The information available to the user depends on the
user's security profile.
The user can choose information such as:
- one or more record(s) of one data base
- extracts from one data base, where the final extract
is composed of parts of the different records
available according to the user's security profile.
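The security-profile filtering described above can be sketched as follows; the field names and profile layout are hypothetical, but the behaviour, returning only the record parts a profile permits, is the one the text specifies:

```python
def query(database, user_profile, record_ids):
    """Return the requested records with each record cut down to
    the fields the user's security profile allows (sketch)."""
    result = []
    for rid in record_ids:
        record = database[rid]
        visible = {f: v for f, v in record.items()
                   if f in user_profile["fields"]}
        if visible:                         # drop records the user may not see
            result.append(visible)
    return result

database = {
    1: {"terminal": "T01", "cost": 12.50, "password": "x"},
    2: {"terminal": "T02", "cost": 3.25,  "password": "y"},
}
profile = {"fields": {"terminal", "cost"}}  # no access to security fields
```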
3.8.2 H̲i̲g̲h̲ ̲D̲e̲n̲s̲i̲t̲y̲ ̲D̲i̲g̲i̲t̲a̲l̲ ̲T̲a̲p̲e̲ ̲R̲e̲c̲o̲r̲d̲i̲n̲g̲s̲
Long-term storage of all PMS traffic tends to become
impractical if conventional mass storage techniques
like magnetic tapes or disk packs are used. This is
caused by the load placed on operators in mounting and
demounting this type of mass storage, together with
the physical space required for storing the large
anticipated traffic volumes.
High Density Digital Tape Recording offers an attractive
alternative archiving technique. The high packing
densities offered by this technique could provide Air
Canada with a means for very compact storage which
in addition can reduce the load on the operators.
The High Density Digital Tape technology has the following
characteristics:
o 14-track wideband tape recorder with record/reproduce
capability for 7 tracks
o 9200 feet tapes
o 22000 bit/inch per track on data tracks
2750 bit/inch per track on search tracks
o recording on two channels of each one search and
six data tracks
o tape capacity of 2.2 x 10^10 bits or 3,000 Mbytes
o tape speeds:
- record : 2.5/5/10 inch/sec
- reproduce: 2.5/5/10 inch/sec
- search : 240 inch/sec
o data rate of 160 Kbytes/sec for record/reproduce
o search time average 4 1/2 min.
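The quoted capacity can be cross-checked from the other figures in the list. A rough calculation, assuming all twelve data tracks (two channels of six) are recorded along the full 9200 feet:

```python
# Raw capacity from the listed figures (back-of-envelope check).
tape_length_in = 9200 * 12      # 9200 feet expressed in inches
density = 22000                 # bit/inch per data track
data_tracks = 2 * 6             # two channels of six data tracks each
capacity_bits = tape_length_in * density * data_tracks
capacity_mbytes = capacity_bits / 8 / 1e6
```

This raw figure of about 2.9 x 10^10 bits sits slightly above the quoted usable capacity of 2.2 x 10^10 bits, the difference plausibly being formatting and error-correction overhead, so the quoted 3,000 Mbytes is the right order of magnitude.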
Christian Rovsing has, as a result of involvement
in a number of ground computer systems for handling
large volumes of satellite-generated image data, developed
a number of products for interfacing to and handling
high-density tape recorders.
These products, used in a computing environment like
Air Canada's, could provide an attractive alternative
storage medium to more conventional ones.
Intentionally left blank