⟦b14e38021⟧ TextFile

    Length: 15861 (0x3df5)
    Types: TextFile
    Names: »ssba.doc«

Derivation

└─⟦db229ac7e⟧ Bits:30007240 EUUGD20: SSBA 1.2 / AFW Benchmarks
    └─⟦this⟧ »EUUGD20/AFUU-ssba1.21/ssba1.21E/ssba/ssba.doc« 


        THE SYNTHETIC SUITE OF BENCHMARKS FROM THE A.F.U.U. (SSBA 1.21E)


			French Association of Unix Users
				11, rue Carnot
			94270 Le Kremlin-Bicetre, France
			   Tel.: (33) (1) 46 70 95 90+

		"In all things there is only one way of beginning
	      when you wish to discuss correctly: the subject of the
		     discussion must be clearly understood."
								Plato

				   MANIFESTO

   The SSBA is the result of the work of the AFUU (French Association of Unix
Users) Benchmark Working Group. This group, consisting of some 30 active
members of varied origins (universities, public and private research,
manufacturers, end users), has set itself the goal of studying the problem of
assessing the performance of data processing systems: collecting as many tests
as possible from around the world, dissecting their code and results,
discussing their utility, fixing versions, and supplying them in the form of a
magnetic tape with various comments and procedures.

   This tape is therefore a simple and coherent tool for end users as well as
for specialists, providing a clear and pertinent first approximation of
performance, and could also become a "standard" in the Unix (R) world. This is
how the SSBA (Synthetic Suite of Benchmarks from the AFUU) originated, and
here you find release 1.21E.

				  A DEFINITION

   Benchmark program: "A standard data processing program used to test the
processing power of one computer in relation to others." A benchmark program
can be designed to assess general problems, referred to as benchmark problems,
such as file management, sorting or mathematical operations, or to assess more
specific problems which take more account of the intended use of the computer.
The performance, such as the processing rate, can be assessed and compared to
that of other computers tested with the same program. This process is referred
to as "a benchmark test" and can be used as a decision aid when purchasing a
computer.

				    FOREWORD

   The evaluation of performance is a necessity in the data processing area as
in all others. The problem is all the more delicate in that there is no exact
mathematical solution, and we are therefore obliged to work by approximation.
Unfortunately this state of things has led to excesses, to the problem being
treated as a trick of the trade or a gimmick. One way of approaching it is the
development of benchmark tests.

   It is easy to criticize benchmarks and say that they don't mean anything.
Experience shows that such criticism generally comes from manufacturers whose
machines fail to obtain good results on the benchmarks. The whole thing is to
know what you are talking about. A machine must not be reduced to a single
figure, as is far too often the case (the mips, for example); one must instead
try to provide a multi-dimensional image of it using a number of specific
tests. This is the approach which has guided us here.

_________________________________
(R) UNIX is a trademark registered by AT&T in the United States and other countries.






   The solution which immediately appears ideal to end users is to take their
own application and run it, as such, on a set of machines. The problem is that
as soon as the application evolves everything must be started again; hence the
need to find tests characterizing the specific types of problems which arise
in most final applications. As such tests risk being misused, their limits,
but also their real qualities, must be very carefully indicated.

   There never have been, and there never will be, benchmarks capable of fully
representing the workload of data processing systems in a real environment
(numerous studies aim at approximating it). The workload depends on a large
number of factors, and pointing this out is stating the obvious. Starting from
these observations, one could simply sit back and say that nothing will ever
come of it... We have chosen to act (in any event someone else would have done
it) by providing a practical tool as valid as possible, well aware of all the
limits of our approach.

   There are at present some 200 tests referenced throughout the world; they
can be classified in 3 main categories:

1)	The so-called "standard" benchmarks: Dhrystone, Whetstone, Linpack,
Doduc, Byte, Spice, Euug, Stanford, Musbus, Livermore, Los Alamos, etc.,
published in magazines or issued by major users (General Electric, Exxon,
etc.), whose code has been broadly circulated and sometimes modified; they are
generally accepted by the whole of the trade, but great confusion prevails as
to their versions, results, interpretations and the use that can be made of
them.

2)	The so-called "commercial" benchmarks: AIM, Neal Nelson, Uniprobe,
Workstation Laboratories, etc., well documented, subject to licences whose
prices are generally very high, providing a professional service but generally
giving the same kind of information as the tests above, with a more
application-oriented slant (AIM: system, Neal Nelson: office automation,
Workstation Labs: technical) and a well-finished package for the purchaser.
These tests are nevertheless fallible.

3)	The so-called "internal" benchmarks used by certain manufacturers (IBM,
DEC, HP, ATT, Olivetti, NCR, Texas, for those which have been presented) to
simulate workloads (pseudo-users performing conventional tasks) and thus
calibrate their systems.

   We have obtained, or at least studied, most of these benchmarks in all 3
categories; we have examined their substance and form and have executed them
under various conditions in order to validate them.

   The tests included here have been selected after careful consideration and,
in our opinion, provide as complete a general image of the machine as
possible, with a view to rigour and portability.

   There will be evolutions towards more application-oriented areas: graphics,
real time, transactional, DBMS, languages, etc. The AFUU index will appear in
release 1.5 and new functionality will be added in release 2.0.

   The value of this approach lies in its being broadly circulated and adopted
by the greatest possible number of users. Send us your results, bug reports,
comments or insults at:

			   afuu_bench@inria.inria.fr





   Release 1.21E of the SSBA characterizes, on a machine running UNIX or one
of its derivatives:

*	CPU power, the processing rate, the computation capabilities;
*	the implementation of the Unix system in general, the file system;
*	the C and Fortran compilers, the optimization capabilities;
*	the access to and the management of the memory, the performance of the
	cache memories;
*	the disk input/output, the performance of the controller;
*	the multi-user performance under representative tasks;
*	a set of parameters latent in the system.


				   COPYRIGHT

   The SSBA is copyright AFUU; it is "public domain" software.

   The AFUU accepts no responsibility for the consequences of running the SSBA.

   The procedures, comments, part of the code, together with the general
architecture and the debugging, have been designed and implemented by Philippe
Dax (ENST), Christophe Binot (LAIH) and Nhuan Doduc (Framentec).

   Extract from the minutes of the meeting of the Board of Directors of the AFUU
of Thursday March 9th, 1989:

   " The results will be published only under the sole responsibility of the ma-
nufacturer or agency having performed the tests, and will in no event involve
the responsibility of the AFUU. The name of the AFUU may, if the case arises,
be referred to only as a supplier of the SSBA.

   The manufacturers and agencies concerned can publish their results in
TRIBUNIX (the AFUU liaison bulletin) on condition that compliance with the
procedure and running conditions has been certified by the AFUU ".


			     STRUCTURE OF THE SSBA

   The SSBA consists of 12 benchmarks selected as stated above. These 12
benchmarks originate, totally or partially, from the following benchmarks:
mips/Joy, Dhrystone, Whetstone, Linpack, Doduc, Byte, Saxer, Utah, Mips,
Test C, Bsd and Musbus.

   The SSBA is organized into 16 directories on the same level. A directory is
associated with each benchmark, and their names are respectively: mips, dhry,
whet, linpack, doduc, byte, saxer, utah, tools, testc, bsd, musbus; 2
additional directories, ssba and config, are used respectively to initiate the
SSBA and to analyze the configuration of the machine and system. These 14
directories are part of the initial distribution, which also comprises a
COPYRIGHT, a READMEFIRST, a checking procedure and an afuu directory with a
EUUG PAPER. Two other directories, install and results, are constructed during
the SSBA execution phase. The install directory contains parameter files which
reflect the type of the Unix system and the choices, made by the person
initiating the SSBA, of compilers and compilation option strings. The results
directory contains the results and the trace of the operations performed
during the execution of the SSBA.





   The whole of the SSBA, in its initial state, comprises 236 files split
between 14 directories. The SSBA uses 99 source programs. It generates 92 raw
results and a description of the machine's main system parameters. The space
occupied is around 1.5 Megabytes. For it to run correctly, it is advisable to
have at least 15 Megabytes available on the file system where the SSBA will be
located, and also to provide a /tmp of at least 3 Megabytes.
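
   Both constraints can be checked beforehand with df (the output format and
block size vary from one Unix version to another):

	df .		# file system that will hold the SSBA: >= 15 Megabytes free
	df /tmp		# at least 3 Megabytes free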

   It is also necessary to have a UNIX or derived system which supports the
following 41 commands:

	awk, bc, cat, cc, cd, chmod, comm, cp, date, dc, df, diff, echo, ed,
	expr, f77, grep, kill, lex, ls, make, mkdir, mv, od, ps, pwd, rm, sed,
	sh, sleep, sort, spell, split, tail, tee, test, time, touch, wc, who,
	yacc.

   Optional commands:

	banner, hostname, logname, lp, more, pr, shar, tar, uname, uuname.
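
   As a quick preliminary check, a short shell loop along the following lines
will report which of the required commands cannot be resolved on the target
system (this helper is not part of the SSBA distribution and assumes that the
"type" builtin is available, as it is on System V sh and on ksh):

	#!/bin/sh
	# Pre-flight check: list required commands that the shell cannot
	# resolve, either as builtins or through the PATH.
	required="awk bc cat cc cd chmod comm cp date dc df diff echo ed expr \
	f77 grep kill lex ls make mkdir mv od ps pwd rm sed sh sleep sort spell \
	split tail tee test time touch wc who yacc"
	missing=""
	for cmd in $required
	do
		type "$cmd" > /dev/null 2>&1 || missing="$missing $cmd"
	done
	if test -n "$missing"
	then
		echo "Missing commands:$missing"
	else
		echo "All 41 required commands found."
	fi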

   For information, the SSBA 1.21E executes completely in about 3 hours on an
unloaded "4 mips" machine.

   The functional profile of the different programs is as follows:

   CPU	        SYSTEM		COMPUTATION	MEMORY	   DISK		LOAD

dhrystone	tools		doduc		seq	   disktime	multi.sh
mips		forkes		whetstone	ran	   saxer	work
bct.sh		execs		linpack		gau	   iofile
testc		contswit	float		iocall
		signocsw	fibo24		bytesort
		syscall
		pipeself
		pipeback
		pipedis

   Apart from the code and data files of each benchmark, there is a set of 5
additional files in each of the 14 directories. If bench stands for the name
of one of the benchmarks, then for a given directory they are:

bench.doc	the comments relevant to this benchmark,
bench.files	the list of files used,
bench.mk	the Makefile,
bench.cf	the configuration shell script,
bench.run	the run shell script.

   During running of the SSBA other files can be created:

bench.h		header created by bench.cf,
bench.jou	trace of compilations,
bench.log	trace of operations during the running,
bench.kill	shell script to kill the current benchmark,
bench.tmp	temporary files,
bench.res	results local to this benchmark.
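
   For example, taking dhry for bench, the distributed files and whatever a
run may have created can be listed as follows (which of the created files
actually exist depends on the benchmark and on how far it has run):

	cd dhry
	ls dhry.*	# dhry.doc, dhry.files, dhry.mk, dhry.cf, dhry.run, plus
			# dhry.h, dhry.jou, dhry.log, dhry.res, ... after a run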






   Whether at the general level (the ssba directory) or at the local level of
each benchmark, the Makefile files (bench.mk) all contain the same targets:

conf		configuration,
compile		compilation,
run		run,
sizes		size of executables,
clean		deletion of objects, executables, logs, results, etc...
readme		display of documentation,
print		printing of Makefiles and shell scripts,
printall	printing of all sources,
tar		tar format archiving,
shar		shell-archiver format archiving.
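
   For illustration only (this is not the Makefile shipped with the SSBA, and
the compile rule in particular is simplified), a bench.mk exposing these
targets might be laid out roughly as follows for a hypothetical C benchmark:

# bench.mk -- hypothetical skeleton of a per-benchmark Makefile
SHELL	= /bin/sh

conf:
	sh bench.cf			# builds bench.h from install/*.cmd and *.opt
compile:
	cc -O -o bench bench.c 2> bench.jou
run:
	sh bench.run			# writes bench.res and bench.log
sizes:
	size bench
clean:
	rm -f bench *.o bench.h bench.jou bench.log bench.res bench.tmp*
readme:
	more bench.doc
print:
	pr bench.mk bench.cf bench.run | lp
printall:
	pr `cat bench.files` | lp
tar:
	tar cvf bench.tar `cat bench.files`
shar:
	shar `cat bench.files` > bench.shar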


			  GENERAL IMPLEMENTATION

1 -	Place yourself under the ssba directory, preferably under sh.

2 -	Edit the ssba.def file, which contains the default commands. Each entry
	of this file consists of 2 fields separated by a ":" (colon). To the
	right of the ":" place the command or option string suited to your
	system, or of your choice.

	For example:

C compiler			: gcc		(default value: cc)
C optimization option		:		(default value: -O)
C floating option		:		(default value: nothing)
Fortran compiler		: ftn		(default value: f77)
Fortran optimization option	:		(default value: -O)
Fortran floating option		: -f68881	(default value: nothing)
Printer				: laser		(default value: lp)
Printer option			: -l66		(default value: -l66)
Pager				: pg		(default value: more)

	If the 2nd field is empty, then the default value is taken (see above).

3 -	Enter the SSBA initiation command:

	card (to fill in the specification sheet)	or	ssba.run&

4 -	Place yourself in the results directory and check that the SSBA is
	operating correctly by displaying the ssba.log file, the trace of the
	operations.

5 -	In case of problems, the whole of the SSBA can be killed by entering
	ssbakill (located in the ssba directory); if this command has no
	effect, try ssba.kill.

6 -	Wait for the end of execution of the SSBA and analyze the results in the
	ssba.res file, an abstract  of which can be  found in the synthese file,
	both located in the results directory.

7 -	Print the ssba.log, ssba.res and synthese files in the results directory
	and then communicate them to the AFUU.
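
   Putting steps 1 to 7 together, a typical interactive session might look
roughly like this (the editor and printing commands are only examples;
substitute your own):

	cd ssba
	ed ssba.def			# step 2: adjust compilers, options, printer, pager
	ssba.run &			# step 3: start the complete suite in the background
	cd ../results			# step 4: once this directory has been created,
	tail -f ssba.log		#         follow the trace of operations
	more ssba.res synthese		# step 6: examine the results and their summary
	lp ssba.log ssba.res synthese	# step 7: print them and send them to the AFUU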




			      LOCAL IMPLEMENTATION

   Local execution of a benchmark is possible on condition that the files of
the install and config directories have been created beforehand, either by
ssba.run& or by ssba.ini (see the header of that file), followed by unix.sh,
then by make conf and finally, in the config directory, by
make compile -f config.mk.
In fact, in order for any benchmark to be executed normally, it is necessary
to have computed the clock granularity parameter "HZ" and to have built the
time-measuring tools config/chrono and config/etime.o.
   This pre-initialization phase constructs files which will be used for most of
the benchmarks: install/hz, install/signal.h, install/*.cmd and install/*.opt.
   Once this preliminary operation has been performed, the procedure for
executing a benchmark locally is as follows:

make conf -f bench.mk		configuration
make compile -f bench.mk	compilation
make run -f bench.mk		running
make sizes -f bench.mk		sizes
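
   Put end to end, and taking dhry as an example, a manual session might
therefore look roughly as follows (the exact directory from which unix.sh and
make conf must be invoked is described in the ssba.ini header; this is only a
sketch):

	# Pre-initialization, needed only if ssba.run has not been used before:
	cd ssba
	sh ssba.ini			# creates the install and config files
	sh unix.sh
	make conf
	cd ../config
	make compile -f config.mk	# builds config/chrono and config/etime.o

	# Local execution of a single benchmark, here dhry:
	cd ../dhry
	make conf -f dhry.mk
	make compile -f dhry.mk
	make run -f dhry.mk
	make sizes -f dhry.mk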


				    ADVICE

*	Do not attempt to modify anything whatsoever; the SSBA might then no
	longer run, and in any event we will notice it in the results you send
	in.... And that is nasty!

*	Do not panic in view of the run times or certain errors.

*	If the "doduc" fails to compile, it may be necessary to increase the
	size of the symbol table: add -Nn4000 to the FFLAGS line of the
	doduc.mk file (an example of such edits follows this list).

*	You need not be afraid of executing the SSBA as root (advised for
	systems which do not allow more than 50 processes per user), but it is
	preferable to execute it as a normal user; in all, during the 8-user
	simulation phase, the SSBA generates 46 processes.

*	For the daring: in the musbus directory, in the musbus.run file, at the
	line nusers=8, it is quite possible to put 16, 32 or whatever you like;
	simply make sure that the kernel of the machine will support it.
	In the bsd directory, in the bsd.run file, the 1500 parameter after the
	"-p" of seq, ran and gau can be modified according to the available
	virtual memory, for example 10000 for 10 Megabytes.
	Other parameters can be modified just as simply; do not hesitate, the
	SSBA is your tool.

*	Before a purchase, have the SSBA executed on a "fresh" machine together
	with the manufacturer and examine the results obtained with him.

*	And above all:
			TAKE A LOT OF PLEASURE IN THIS
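
   As a recap of the tunable points mentioned in the advice above (example
values only; check the exact spelling of the lines in your copies of the
files before and after editing):

	# doduc/doduc.mk: append -Nn4000 to the existing FFLAGS line, e.g.
	#	FFLAGS = -O -Nn4000
	# musbus/musbus.run: simulate 16 users instead of 8
	sed 's/^nusers=8/nusers=16/' musbus/musbus.run > musbus/musbus.new &&
		mv musbus/musbus.new musbus/musbus.run
	# bsd/bsd.run: allow 10 Megabytes of virtual memory for seq, ran and gau
	sed 's/-p 1500/-p 10000/' bsd/bsd.run > bsd/bsd.new &&
		mv bsd/bsd.new bsd/bsd.run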

May you be strong!

		  Christophe Binot, Philippe Dax, Nhuan Doduc

				August 3rd, 1989