Info file gawk-info, produced by Makeinfo, -*- Text -*- from input
file gawk.texinfo.

This file documents `awk', a program that you can use to select
particular records in a file and perform operations upon them.

Copyright (C) 1989 Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of this
manual provided the copyright notice and this permission notice are
preserved on all copies.

Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided that
the entire resulting derived work is distributed under the terms of a
permission notice identical to this one.

Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for modified
versions, except that this permission notice may be stated in a
translation approved by the Foundation.

▶1f◀
File: gawk-info, Node: Non-Constant Fields, Next: Changing Fields, Prev: Fields, Up: Reading Files

Non-constant Field Numbers
==========================

The number of a field does not need to be a constant.  Any expression
in the `awk' language can be used after a `$' to refer to a field.
The value of the expression specifies the field number.  If the value
is a string, rather than a number, it is converted to a number.
Consider this example:

     awk '{ print $NR }'

Recall that `NR' is the number of records read so far: 1 in the
first record, 2 in the second, etc.  So this example prints the first
field of the first record, the second field of the second record, and
so on.  For the twentieth record, field number 20 is printed; most
likely, the record has fewer than 20 fields, so this prints a blank
line.

Here is another example of using expressions as field numbers:

     awk '{ print $(2*2) }' BBS-list

The `awk' language must evaluate the expression `(2*2)' and use its
value as the number of the field to print.
The `*' sign represents multiplication, so the expression `2*2'
evaluates to 4.  The parentheses are used so that the multiplication
is done before the `$' operation; they are necessary whenever there
is a binary operator in the field-number expression.  This example,
then, prints the hours of operation (the fourth field) for every
line of the file `BBS-list'.

If the field number you compute is zero, you get the entire record.
Thus, `$(2-2)' has the same value as `$0'.  Negative field numbers
are not allowed.

The number of fields in the current record is stored in the built-in
variable `NF' (*note Built-in Variables::.).  The expression `$NF'
is not a special feature: it is the direct consequence of evaluating
`NF' and using its value as a field number.

▶1f◀
File: gawk-info, Node: Changing Fields, Next: Field Separators, Prev: Non-Constant Fields, Up: Reading Files

Changing the Contents of a Field
================================

You can change the contents of a field as seen by `awk' within an
`awk' program; this changes what `awk' perceives as the current
input record.  (The actual input is untouched: `awk' never modifies
the input file.)  Look at this example:

     awk '{ $3 = $2 - 10; print $2, $3 }' inventory-shipped

The `-' sign represents subtraction, so this program reassigns field
three, `$3', to be the value of field two minus ten, `$2 - 10'.
(*Note Arithmetic Ops::.)  Then field two, and the new value for
field three, are printed.

In order for this to work, the text in field `$2' must make sense as
a number; the string of characters must be converted to a number in
order for the computer to do arithmetic on it.  The number resulting
from the subtraction is converted back to a string of characters
which then becomes field three.  *Note Conversion::.

When you change the value of a field (as perceived by `awk'), the
text of the input record is recalculated to contain the new field
where the old one was.  Therefore, `$0' changes to reflect the
altered field.
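As a quick check of this behavior, here is a one-line sketch (not
taken from the manual) showing `$0' being recomputed after a field
assignment:

```shell
# Assigning to a field rebuilds $0 with the new field in place.
echo 'a b c' | awk '{ $2 = "X"; print $0 }'
# prints: a X c
```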
Thus,

     awk '{ $2 = $2 - 10; print $0 }' inventory-shipped

prints a copy of the input file, with 10 subtracted from the second
field of each line.

You can also assign contents to fields that are out of range.  For
example:

     awk '{ $6 = ($5 + $4 + $3 + $2) ; print $6 }' inventory-shipped

We've just created `$6', whose value is the sum of fields `$2',
`$3', `$4', and `$5'.  The `+' sign represents addition.  For the
file `inventory-shipped', `$6' represents the total number of
parcels shipped for a particular month.

Creating a new field changes the internal `awk' copy of the current
input record--the value of `$0'.  Thus, if you do `print $0' after
adding a field, the record printed includes the new field, with the
appropriate number of field separators between it and the previously
existing fields.

This recomputation affects and is affected by several features not
yet discussed, in particular, the "output field separator", `OFS',
which is used to separate the fields (*note Output Separators::.),
and `NF' (the number of fields; *note Fields::.).  For example, the
value of `NF' is set to the number of the highest field you create.

Note, however, that merely *referencing* an out-of-range field does
*not* change the value of either `$0' or `NF'.  Referencing an
out-of-range field merely produces a null string.  For example:

     if ($(NF+1) != "")
         print "can't happen"
     else
         print "everything is normal"

should print `everything is normal', because `NF+1' is certain to be
out of range.  (*Note If Statement::, for more information about
`awk''s `if-else' statements.)

▶1f◀
File: gawk-info, Node: Field Separators, Next: Multiple Line, Prev: Changing Fields, Up: Reading Files

Specifying How Fields Are Separated
===================================

The way `awk' splits an input record into fields is controlled by
the "field separator", which is a regular expression.  `awk' scans
the input record for matches for this regular expression; these
matches separate fields.
The fields themselves are the text between the matches.  For
example, if the field separator is `oo', then the following line:

     moo goo gai pan

would be split into three fields: `m', ` g' and ` gai pan'.

The field separator is represented by the built-in variable `FS'.
Shell programmers take note!  `awk' does not use the name `IFS'
which is used by the shell.

You can change the value of `FS' in the `awk' program with the
assignment operator, `=' (*note Assignment Ops::.).  Often the right
time to do this is at the beginning of execution, before any input
has been processed, so that the very first record will be read with
the proper separator.  To do this, use the special `BEGIN' pattern
(*note BEGIN/END::.).  For example, here we set the value of `FS' to
the string `","':

     awk 'BEGIN { FS = "," } ; { print $2 }'

Given the input line,

     John Q. Smith, 29 Oak St., Walamazoo, MI 42139

this `awk' program extracts the string `29 Oak St.'.

Sometimes your input data will contain separator characters that
don't separate fields the way you thought they would.  For instance,
the person's name in the example we've been using might have a title
or suffix attached, such as `John Q. Smith, LXIX'.  From input
containing such a name:

     John Q. Smith, LXIX, 29 Oak St., Walamazoo, MI 42139

the previous sample program would extract `LXIX', instead of `29 Oak
St.'.  If you were expecting the program to print the address, you
would be surprised.  So choose your data layout and separator
characters carefully to prevent such problems.

As you know, by default, fields are separated by whitespace
sequences (spaces and tabs), not by single spaces: two spaces in a
row do not delimit an empty field.  The default value of the field
separator is a string `" "' containing a single space.  If this
value were interpreted in the usual way, each space character would
separate fields, so two spaces in a row would make an empty field
between them.
The reason this does not happen is that a single space as the value
of `FS' is a special case: it is taken to specify the default manner
of delimiting fields.

If `FS' is any other single character, such as `","', then two
successive occurrences of that character do delimit an empty field.
The space character is the only special case.

You can set `FS' to be a string containing several characters.  For
example, the assignment:

     FS = ", \t"

makes every area of an input line that consists of a comma followed
by a space and a tab, into a field separator.  (`\t' stands for a
tab.)

More generally, the value of `FS' may be a string containing any
regular expression.  Then each match in the record for the regular
expression separates fields.  For example, if you want single spaces
to separate fields the way single commas were used above, you can
set `FS' to `"[ ]"'.  This regular expression matches a single space
and nothing else.

`FS' can be set on the command line.  You use the `-F' argument to
do so.  For example:

     awk -F, 'PROGRAM' INPUT-FILES

sets `FS' to be the `,' character.  Notice that the argument uses a
capital `F'.  Contrast this with `-f', which specifies a file
containing an `awk' program.  Case is significant in command
options: the `-F' and `-f' options have nothing to do with each
other.  You can use both options at the same time to set the `FS'
argument *and* get an `awk' program from a file.

As a special case, in compatibility mode (*note Command Line::.), if
the argument to `-F' is `t', then `FS' is set to the tab character.
(This is because if you type `-F\t', without the quotes, at the
shell, the `\' gets deleted, so `awk' figures that you really want
your fields to be separated with tabs, and not `t's.  Use `FS="t"'
on the command line if you really do want to separate your fields
with `t's.)

For example, let's use an `awk' program file called `baud.awk' that
contains the pattern `/300/', and the action `print $1'.
Here is the program:

     /300/   { print $1 }

Let's also set `FS' to be the `-' character, and run the program on
the file `BBS-list'.  The following command prints a list of the
names of the bulletin boards that operate at 300 baud and the first
three digits of their phone numbers:

     awk -F- -f baud.awk BBS-list

It produces this output:

     aardvark 555
     alpo
     barfly 555
     bites 555
     camelot 555
     core 555
     fooey 555
     foot 555
     macfoo 555
     sdace 555
     sabafoo 555

Note the second line of output.  If you check the original file, you
will see that the second line looked like this:

     alpo-net  555-3412     2400/1200/300     A

The `-' as part of the system's name was used as the field
separator, instead of the `-' in the phone number that was
originally intended.  This demonstrates why you have to be careful
in choosing your field and record separators.

The following program searches the system password file, and prints
the entries for users who have no password:

     awk -F: '$2 == ""' /etc/passwd

Here we use the `-F' option on the command line to set the field
separator.  Note that fields in `/etc/passwd' are separated by
colons.  The second field represents a user's encrypted password,
but if the field is empty, that user has no password.

▶1f◀
File: gawk-info, Node: Multiple Line, Next: Getline, Prev: Field Separators, Up: Reading Files

Multiple-Line Records
=====================

In some data bases, a single line cannot conveniently hold all the
information in one entry.  In such cases, you can use multi-line
records.

The first step in doing this is to choose your data format: when
records are not defined as single lines, how do you want to define
them?  What should separate records?

One technique is to use an unusual character or string to separate
records.  For example, you could use the formfeed character (written
`\f' in `awk', as in C) to separate them, making each record a page
of the file.  To do this, just set the variable `RS' to `"\f"' (a
string containing the formfeed character).
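A small sketch (not from the manual) of formfeed-separated records:
two "pages" become two records once `RS' is set to `"\f"':

```shell
# Each formfeed-delimited chunk is one record; $1 is its first field.
printf 'first page\ffinal page' |
awk 'BEGIN { RS = "\f" } { print NR ": " $1 }'
# prints:
# 1: first
# 2: final
```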
Any other character could equally well be used, as long as it won't
be part of the data in a record.

Another technique is to have blank lines separate records.  By a
special dispensation, a null string as the value of `RS' indicates
that records are separated by one or more blank lines.  If you set
`RS' to the null string, a record always ends at the first blank
line encountered.  And the next record doesn't start until the first
nonblank line that follows--no matter how many blank lines appear in
a row, they are considered one record-separator.

The second step is to separate the fields in the record.  One way to
do this is to put each field on a separate line: to do this, just
set the variable `FS' to the string `"\n"'.  (This simple regular
expression matches a single newline.)

Another idea is to divide each of the lines into fields in the
normal manner.  This happens by default as a result of a special
feature: when `RS' is set to the null string, the newline character
*always* acts as a field separator.  This is in addition to whatever
field separations result from `FS'.

The original motivation for this special exception was probably so
that you get useful behavior in the default case (i.e., `FS == " "').
This feature can be a problem if you really don't want the newline
character to separate fields, since there is no way to prevent it.
However, you can work around this by using the `split' function to
break up the record manually (*note String Functions::.).

▶1f◀
File: gawk-info, Node: Getline, Next: Close Input, Prev: Multiple Line, Up: Reading Files

Explicit Input with `getline'
=============================

So far we have been getting our input files from `awk''s main input
stream--either the standard input (usually your terminal) or the
files specified on the command line.  The `awk' language has a
special built-in command called `getline' that can be used to read
input under your explicit control.
This command is quite complex and should *not* be used by beginners.
It is covered here because this is the chapter on input.  The
examples that follow the explanation of the `getline' command
include material that has not been covered yet.  Therefore, come
back and study the `getline' command *after* you have reviewed the
rest of this manual and have a good knowledge of how `awk' works.

`getline' returns 1 if it finds a record, and 0 if the end of the
file is encountered.  If there is some error in getting a record,
such as a file that cannot be opened, then `getline' returns -1.

In the following examples, COMMAND stands for a string value that
represents a shell command.

`getline'
     The `getline' command can be used without arguments to read
     input from the current input file.  All it does in this case is
     read the next input record and split it up into fields.  This
     is useful if you've finished processing the current record, but
     you want to do some special processing *right now* on the next
     record.  Here's an example:

          awk '{
               if (t = index($0, "/*")) {
                    if (t > 1)
                         tmp = substr($0, 1, t - 1)
                    else
                         tmp = ""
                    u = index(substr($0, t + 2), "*/")
                    while (! u) {
                         getline
                         t = -1
                         u = index($0, "*/")
                    }
                    if (u <= length($0) - 2)
                         $0 = tmp substr($0, t + u + 3)
                    else
                         $0 = tmp
               }
               print $0
          }'

     This `awk' program deletes all comments, `/* ... */', from the
     input.  By replacing the `print $0' with other statements, you
     could perform more complicated processing on the decommented
     input, such as searching it for matches for a regular
     expression.

     This form of the `getline' command sets `NF' (the number of
     fields; *note Fields::.), `NR' (the number of records read so
     far; *note Records::.), `FNR' (the number of records read from
     this input file), and the value of `$0'.

     *Note:* the new value of `$0' is used in testing the patterns
     of any subsequent rules.  The original value of `$0' that
     triggered the rule which executed `getline' is lost.
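A minimal sketch (not from the manual) of this bare `getline' form:
each rule cycle reads one record and `getline' consumes the next, so
the program below prints every second line:

```shell
# Main loop reads "a"; getline replaces $0 with "b"; print emits "b".
# The cycle repeats with "c" and "d", so only "b" and "d" are printed.
printf 'a\nb\nc\nd\n' | awk '{ getline; print }'
```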
     By contrast, the `next' statement reads a new record but
     immediately begins processing it normally, starting with the
     first rule in the program.  *Note Next Statement::.

`getline VAR'
     This form of `getline' reads a record into the variable VAR.
     This is useful when you want your program to read the next
     record from the current input file, but you don't want to
     subject the record to the normal input processing.

     For example, suppose the next line is a comment, or a special
     string, and you want to read it, but you must make certain that
     it won't trigger any rules.  This version of `getline' allows
     you to read that line and store it in a variable so that the
     main read-a-line-and-check-each-rule loop of `awk' never sees
     it.

     The following example swaps every two lines of input.  For
     example, given:

          wan
          tew
          free
          phore

     it outputs:

          tew
          wan
          phore
          free

     Here's the program:

          awk '{
               if ((getline tmp) > 0) {
                    print tmp
                    print $0
               } else
                    print $0
          }'

     The `getline' function used in this way sets only the variables
     `NR' and `FNR' (and of course, VAR).  The record is not split
     into fields, so the values of the fields (including `$0') and
     the value of `NF' do not change.

`getline < FILE'
     This form of the `getline' function takes its input from the
     file FILE.  Here FILE is a string-valued expression that
     specifies the file name.  `< FILE' is called a "redirection"
     since it directs input to come from a different place.

     This form is useful if you want to read your input from a
     particular file, instead of from the main input stream.  For
     example, the following program reads its input record from the
     file `foo.input' when it encounters a first field with a value
     equal to 10 in the current input file.

          awk '{
               if ($1 == 10) {
                    getline < "foo.input"
                    print
               } else
                    print
          }'

     Since the main input stream is not used, the values of `NR' and
     `FNR' are not changed.  But the record read is split into
     fields in the normal manner, so the values of `$0' and other
     fields are changed.  So is the value of `NF'.
     This does not cause the record to be tested against all the
     patterns in the `awk' program, in the way that would happen if
     the record were read normally by the main processing loop of
     `awk'.  However the new record is tested against any subsequent
     rules, just as when `getline' is used without a redirection.

`getline VAR < FILE'
     This form of the `getline' function takes its input from the
     file FILE and puts it in the variable VAR.  As above, FILE is a
     string-valued expression that specifies the file to read from.

     In this version of `getline', none of the built-in variables
     are changed, and the record is not split into fields.  The only
     variable changed is VAR.

     For example, the following program copies all the input files
     to the output, except for records that say `@include FILENAME'.
     Such a record is replaced by the contents of the file FILENAME.

          awk '{
               if (NF == 2 && $1 == "@include") {
                    while ((getline line < $2) > 0)
                         print line
                    close($2)
               } else
                    print
          }'

     Note here how the name of the extra input file is not built
     into the program; it is taken from the data, from the second
     field on the `@include' line.

     The `close' function is called to ensure that if two identical
     `@include' lines appear in the input, the entire specified file
     is included twice.  *Note Close Input::.

     One deficiency of this program is that it does not process
     nested `@include' statements the way a true macro preprocessor
     would.

`COMMAND | getline'
     You can "pipe" the output of a command into `getline'.  A pipe
     is simply a way to link the output of one program to the input
     of another.  In this case, the string COMMAND is run as a shell
     command and its output is piped into `awk' to be used as input.
     This form of `getline' reads one record from the pipe.
     For example, the following program copies input to output,
     except for lines that begin with `@execute', which are replaced
     by the output produced by running the rest of the line as a
     shell command:

          awk '{
               if ($1 == "@execute") {
                    tmp = substr($0, 10)
                    while ((tmp | getline) > 0)
                         print
                    close(tmp)
               } else
                    print
          }'

     The `close' function is called to ensure that if two identical
     `@execute' lines appear in the input, the command is run again
     for each one.  *Note Close Input::.

     Given the input:

          foo
          bar
          baz
          @execute who
          bletch

     the program might produce:

          foo
          bar
          baz
          hack     ttyv0   Jul 13 14:22
          hack     ttyp0   Jul 13 14:23     (gnu:0)
          hack     ttyp1   Jul 13 14:23     (gnu:0)
          hack     ttyp2   Jul 13 14:23     (gnu:0)
          hack     ttyp3   Jul 13 14:23     (gnu:0)
          bletch

     Notice that this program ran the command `who' and printed the
     result.  (If you try this program yourself, you will get
     different results, showing you logged in.)

     This variation of `getline' splits the record into fields, sets
     the value of `NF' and recomputes the value of `$0'.  The values
     of `NR' and `FNR' are not changed.

`COMMAND | getline VAR'
     The output of the command COMMAND is sent through a pipe to
     `getline' and into the variable VAR.  For example, the
     following program reads the current date and time into the
     variable `current_time', using the utility called `date', and
     then prints it.

          awk 'BEGIN {
               "date" | getline current_time
               close("date")
               print "Report printed on " current_time
          }'

     In this version of `getline', none of the built-in variables
     are changed, and the record is not split into fields.

▶1f◀
File: gawk-info, Node: Close Input, Prev: Getline, Up: Reading Files

Closing Input Files and Pipes
=============================

If the same file name or the same shell command is used with
`getline' more than once during the execution of an `awk' program,
the file is opened (or the command is executed) only the first time.
At that time, the first record of input is read from that file or
command.
The next time the same file or command is used in `getline', another
record is read from it, and so on.  This implies that if you want to
start reading the same file again from the beginning, or if you want
to rerun a shell command (rather than reading more output from the
command), you must take special steps.

What you can do is use the `close' function, as follows:

     close(FILENAME)

or

     close(COMMAND)

The argument FILENAME or COMMAND can be any expression.  Its value
must exactly equal the string that was used to open the file or
start the command--for example, if you open a pipe with this:

     "sort -r names" | getline foo

then you must close it with this:

     close("sort -r names")

Once this function call is executed, the next `getline' from that
file or command will reopen the file or rerun the command.

▶1f◀
File: gawk-info, Node: Printing, Next: One-liners, Prev: Reading Files, Up: Top

Printing Output
***************

One of the most common things that actions do is to output or
"print" some or all of the input.  For simple output, use the
`print' statement.  For fancier formatting use the `printf'
statement.  Both are described in this chapter.

* Menu:

* Print::              The `print' statement.
* Print Examples::     Simple examples of `print' statements.
* Output Separators::  The output separators and how to change them.
* Printf::             The `printf' statement.
* Redirection::        How to redirect output to multiple files and pipes.
* Special Files::      File name interpretation in `gawk'.  `gawk'
                       allows access to inherited file descriptors.

▶1f◀
File: gawk-info, Node: Print, Next: Print Examples, Prev: Printing, Up: Printing

The `print' Statement
=====================

The `print' statement does output with simple, standardized
formatting.  You specify only the strings or numbers to be printed,
in a list separated by commas.  They are output, separated by single
spaces, followed by a newline.  The statement looks like this:

     print ITEM1, ITEM2, ...
The entire list of items may optionally be enclosed in parentheses.
The parentheses are necessary if any of the item expressions uses a
relational operator; otherwise it could be confused with a
redirection (*note Redirection::.).  The relational operators are
`==', `!=', `<', `>', `>=', `<=', `~' and `!~' (*note Comparison
Ops::.).

The items printed can be constant strings or numbers, fields of the
current record (such as `$1'), variables, or any `awk' expressions.
The `print' statement is completely general for computing *what*
values to print.  With one exception (*note Output Separators::.),
what you can't do is specify *how* to print them--how many columns
to use, whether to use exponential notation or not, and so on.  For
that, you need the `printf' statement (*note Printf::.).

The simple statement `print' with no items is equivalent to `print
$0': it prints the entire current record.  To print a blank line,
use `print ""', where `""' is the null, or empty, string.

To print a fixed piece of text, use a string constant such as
`"Hello there"' as one item.  If you forget to use the double-quote
characters, your text will be taken as an `awk' expression, and you
will probably get an error.  Keep in mind that a space is printed
between any two items.

Most often, each `print' statement makes one line of output.  But it
isn't limited to one line.  If an item value is a string that
contains a newline, the newline is output along with the rest of the
string.  A single `print' can make any number of lines this way.
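The parenthesization point above can be checked with a quick sketch
(not part of the manual): with parentheses, `1 > 2' is read as a
comparison yielding 0 rather than a redirection to a file named `2':

```shell
awk 'BEGIN { print (1 > 2) }'   # parenthesized comparison: prints 0
awk 'BEGIN { print 1, 2 }'      # two items: prints "1 2"
```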
▶1f◀
File: gawk-info, Node: Print Examples, Next: Output Separators, Prev: Print, Up: Printing

Examples of `print' Statements
==============================

Here is an example of printing a string that contains embedded
newlines:

     awk 'BEGIN { print "line one\nline two\nline three" }'

produces output like this:

     line one
     line two
     line three

Here is an example that prints the first two fields of each input
record, with a space between them:

     awk '{ print $1, $2 }' inventory-shipped

Its output looks like this:

     Jan 13
     Feb 15
     Mar 15
     ...

A common mistake in using the `print' statement is to omit the comma
between two items.  This often has the effect of making the items
run together in the output, with no space.  The reason for this is
that juxtaposing two string expressions in `awk' means to
concatenate them.  For example, without the comma:

     awk '{ print $1 $2 }' inventory-shipped

prints:

     Jan13
     Feb15
     Mar15
     ...

Neither example's output makes much sense to someone unfamiliar with
the file `inventory-shipped'.  A heading line at the beginning would
make it clearer.  Let's add some headings to our table of months
(`$1') and green crates shipped (`$2').  We do this using the
`BEGIN' pattern (*note BEGIN/END::.) to cause the headings to be
printed only once:

     awk 'BEGIN { print "Month Crates"
                  print "---- -----" }
          { print $1, $2 }' inventory-shipped

Did you already guess what happens?  This program prints the
following:

     Month Crates
     ---- -----
     Jan 13
     Feb 15
     Mar 15
     ...

The headings and the table data don't line up!  We can fix this by
printing some spaces between the two fields:

     awk 'BEGIN { print "Month Crates"
                  print "---- -----" }
          { print $1, "     ", $2 }' inventory-shipped

You can imagine that this way of lining up columns can get pretty
complicated when you have many columns to fix.  Counting spaces for
two or three columns can be simple, but more than this and you can
get "lost" quite easily.
This is why the `printf' statement was created (*note Printf::.);
one of its specialties is lining up columns of data.

▶1f◀
File: gawk-info, Node: Output Separators, Next: Printf, Prev: Print Examples, Up: Printing

Output Separators
=================

As mentioned previously, a `print' statement contains a list of
items, separated by commas.  In the output, the items are normally
separated by single spaces.  But they do not have to be spaces; a
single space is only the default.  You can specify any string of
characters to use as the "output field separator" by setting the
built-in variable `OFS'.  The initial value of this variable is the
string `" "'.

The output from an entire `print' statement is called an "output
record".  Each `print' statement outputs one output record and then
outputs a string called the "output record separator".  The built-in
variable `ORS' specifies this string.  The initial value of the
variable is the string `"\n"' containing a newline character; thus,
normally each `print' statement makes a separate line.

You can change how output fields and records are separated by
assigning new values to the variables `OFS' and/or `ORS'.  The usual
place to do this is in the `BEGIN' rule (*note BEGIN/END::.), so
that it happens before any input is processed.  You may also do this
with assignments on the command line, before the names of your input
files.

The following example prints the first and second fields of each
input record separated by a semicolon, with a blank line added after
each line:

     awk 'BEGIN { OFS = ";"; ORS = "\n\n" }
          { print $1, $2 }' BBS-list

If the value of `ORS' does not contain a newline, all your output
will be run together on a single line, unless you output newlines
some other way.
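Since `BBS-list' may not be at hand, the same idea can be tried on
inline data (a sketch, not from the manual):

```shell
# OFS joins the two items with ";" and ORS ends each record with a
# blank line.
printf 'a b\nc d\n' |
awk 'BEGIN { OFS = ";"; ORS = "\n\n" } { print $1, $2 }'
# prints "a;b" and "c;d", each followed by a blank line
```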
▶1f◀
File: gawk-info, Node: Printf, Next: Redirection, Prev: Output Separators, Up: Printing

Using `printf' Statements For Fancier Printing
==============================================

If you want more precise control over the output format than `print'
gives you, use `printf'.  With `printf' you can specify the width to
use for each item, and you can specify various stylistic choices for
numbers (such as what radix to use, whether to print an exponent,
whether to print a sign, and how many digits to print after the
decimal point).  You do this by specifying a string, called the
"format string", which controls how and where to print the other
arguments.

* Menu:

* Basic Printf::       Syntax of the `printf' statement.
* Control Letters::    Format-control letters.
* Format Modifiers::   Format-specification modifiers.
* Printf Examples::    Several examples.

▶1f◀
File: gawk-info, Node: Basic Printf, Next: Control Letters, Prev: Printf, Up: Printf

Introduction to the `printf' Statement
--------------------------------------

The `printf' statement looks like this:

     printf FORMAT, ITEM1, ITEM2, ...

The entire list of items may optionally be enclosed in parentheses.
The parentheses are necessary if any of the item expressions uses a
relational operator; otherwise it could be confused with a
redirection (*note Redirection::.).  The relational operators are
`==', `!=', `<', `>', `>=', `<=', `~' and `!~' (*note Comparison
Ops::.).

The difference between `printf' and `print' is the argument FORMAT.
This is an expression whose value is taken as a string; its job is
to say how to output each of the other arguments.  It is called the
"format string".

The format string is essentially the same as in the C library
function `printf'.  Most of FORMAT is text to be output verbatim.
Scattered among this text are "format specifiers", one per item.
Each format specifier says to output the next item at that place in
the format.

The `printf' statement does not automatically append a newline to
its output.
It outputs nothing but what the format specifies.  So if you want a
newline, you must include one in the format.  The output separator
variables `OFS' and `ORS' have no effect on `printf' statements.

▶1f◀
File: gawk-info, Node: Control Letters, Next: Format Modifiers, Prev: Basic Printf, Up: Printf

Format-Control Letters
----------------------

A format specifier starts with the character `%' and ends with a
"format-control letter"; it tells the `printf' statement how to
output one item.  (If you actually want to output a `%', write
`%%'.)  The format-control letter specifies what kind of value to
print.  The rest of the format specifier is made up of optional
"modifiers" which are parameters such as the field width to use.

Here is a list of the format-control letters:

`c'
     This prints a number as an ASCII character.  Thus, `printf
     "%c", 65' outputs the letter `A'.  The output for a string
     value is the first character of the string.

`d'
     This prints a decimal integer.

`i'
     This also prints a decimal integer.

`e'
     This prints a number in scientific (exponential) notation.  For
     example,

          printf "%4.3e", 1950

     prints `1.950e+03', with a total of 4 significant figures of
     which 3 follow the decimal point.  The `4.3' are "modifiers",
     discussed below.

`f'
     This prints a number in floating point notation.

`g'
     This prints either scientific notation or floating point
     notation, whichever is shorter.

`o'
     This prints an unsigned octal integer.

`s'
     This prints a string.

`x'
     This prints an unsigned hexadecimal integer.

`X'
     This prints an unsigned hexadecimal integer.  However, for the
     values 10 through 15, it uses the letters `A' through `F'
     instead of `a' through `f'.

`%'
     This isn't really a format-control letter, but it does have a
     meaning when used after a `%': the sequence `%%' outputs one
     `%'.  It does not consume an argument.
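The letters above can be exercised together in one throwaway
`BEGIN' rule (a sketch, not from the manual):

```shell
awk 'BEGIN {
    printf "%c\n", 65                    # prints A
    printf "%o %x %X\n", 255, 255, 255   # prints 377 ff FF
    printf "%e\n", 1950                  # prints 1.950000e+03
}'
```

Note that without modifiers, `%e' uses the C default of 6 digits
after the decimal point, unlike the `%4.3e' example above.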
▶1f◀
File: gawk-info,  Node: Format Modifiers,  Next: Printf Examples,  Prev: Control Letters,  Up: Printf

Modifiers for `printf' Formats
------------------------------

A format specification can also include "modifiers" that can control
how much of the item's value is printed and how much space it gets.
The modifiers come between the `%' and the format-control letter.
Here are the possible modifiers, in the order in which they may appear:

`-'
     The minus sign, used before the width modifier, says to
     left-justify the argument within its specified width.  Normally
     the argument is printed right-justified in the specified width.
     Thus,

          printf "%-4s", "foo"

     prints `foo '.

`WIDTH'
     This is a number representing the desired width of a field.
     Inserting any number between the `%' sign and the format control
     character forces the field to be expanded to this width.  The
     default way to do this is to pad with spaces on the left.  For
     example,

          printf "%4s", "foo"

     prints ` foo'.

     The value of WIDTH is a minimum width, not a maximum.  If the
     item value requires more than WIDTH characters, it can be as wide
     as necessary.  Thus,

          printf "%4s", "foobar"

     prints `foobar'.

     Preceding the WIDTH with a minus sign causes the output to be
     padded with spaces on the right, instead of on the left.

`.PREC'
     This is a number that specifies the precision to use when
     printing.  This specifies the number of digits you want printed
     to the right of the decimal point.  For a string, it specifies
     the maximum number of characters from the string that should be
     printed.

The C library `printf''s dynamic WIDTH and PREC capability (for
example, `"%*.*s"') is not yet supported.  However, it can easily be
simulated using concatenation to dynamically build the format string.
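The concatenation trick just mentioned can be sketched as follows (the
variable names here are ours, not from the manual):

```shell
# Simulating C's dynamic width "%*s" by concatenating the desired
# width into the format string before calling printf.
awk 'BEGIN {
    width = 10
    format = "%-" width "s|\n"   # becomes "%-10s|\n"
    printf format, "foo"         # prints "foo       |"
}'
```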
▶1f◀
File: gawk-info,  Node: Printf Examples,  Prev: Format Modifiers,  Up: Printf

Examples of Using `printf'
--------------------------

Here is how to use `printf' to make an aligned table:

     awk '{ printf "%-10s %s\n", $1, $2 }' BBS-list

prints the names of bulletin boards (`$1') of the file `BBS-list' as a
string of 10 characters, left justified.  It also prints the phone
numbers (`$2') afterward on the line.  This produces an aligned
two-column table of names and phone numbers:

     aardvark   555-5553
     alpo-net   555-3412
     barfly     555-7685
     bites      555-1675
     camelot    555-0542
     core       555-2912
     fooey      555-1234
     foot       555-6699
     macfoo     555-6480
     sdace      555-3430
     sabafoo    555-2127

Did you notice that we did not specify that the phone numbers be
printed as numbers?  They had to be printed as strings because the
numbers are separated by a dash.  This dash would be interpreted as a
minus sign if we had tried to print the phone numbers as numbers.
This would have led to some pretty confusing results.

We did not specify a width for the phone numbers because they are the
last things on their lines.  We don't need to put spaces after them.

We could make our table look even nicer by adding headings to the tops
of the columns.  To do this, use the `BEGIN' pattern (*note
BEGIN/END::.) to cause the header to be printed only once, at the
beginning of the `awk' program:

     awk 'BEGIN { print "Name       Number"
                  print "---        -----" }
          { printf "%-10s %s\n", $1, $2 }' BBS-list

Did you notice that we mixed `print' and `printf' statements in the
above example?  We could have used just `printf' statements to get the
same results:

     awk 'BEGIN { printf "%-10s %s\n", "Name", "Number"
                  printf "%-10s %s\n", "---", "-----" }
          { printf "%-10s %s\n", $1, $2 }' BBS-list

By outputting each column heading with the same format specification
used for the elements of the column, we have made sure that the
headings are aligned just like the columns.
The fact that the same format specification is used three times can be
emphasized by storing it in a variable, like this:

     awk 'BEGIN { format = "%-10s %s\n"
                  printf format, "Name", "Number"
                  printf format, "---", "-----" }
          { printf format, $1, $2 }' BBS-list

See if you can use the `printf' statement to line up the headings and
table data for our `inventory-shipped' example covered earlier in the
section on the `print' statement (*note Print::.).

▶1f◀
File: gawk-info,  Node: Redirection,  Next: Special Files,  Prev: Printf,  Up: Printing

Redirecting Output of `print' and `printf'
==========================================

So far we have been dealing only with output that prints to the
standard output, usually your terminal.  Both `print' and `printf' can
be told to send their output to other places.  This is called
"redirection".

A redirection appears after the `print' or `printf' statement.
Redirections in `awk' are written just like redirections in shell
commands, except that they are written inside the `awk' program.

* Menu:

* File/Pipe Redirection::       Redirecting Output to Files and Pipes.
* Close Output::                How to close output files and pipes.

▶1f◀
File: gawk-info,  Node: File/Pipe Redirection,  Next: Close Output,  Prev: Redirection,  Up: Redirection

Redirecting Output to Files and Pipes
-------------------------------------

Here are the three forms of output redirection.  They are all shown
for the `print' statement, but they work identically for `printf'
also.

`print ITEMS > OUTPUT-FILE'
     This type of redirection prints the items onto the output file
     OUTPUT-FILE.  The file name OUTPUT-FILE can be any expression.
     Its value is changed to a string and then used as a file name
     (*note Expressions::.).

     When this type of redirection is used, the OUTPUT-FILE is erased
     before the first output is written to it.  Subsequent writes do
     not erase OUTPUT-FILE, but append to it.  If OUTPUT-FILE does not
     exist, then it is created.
     For example, here is how one `awk' program can write a list of
     BBS names to a file `name-list' and a list of phone numbers to a
     file `phone-list'.  Each output file contains one name or number
     per line.

          awk '{ print $2 > "phone-list"
                 print $1 > "name-list" }' BBS-list

`print ITEMS >> OUTPUT-FILE'
     This type of redirection prints the items onto the output file
     OUTPUT-FILE.  The difference between this and the single-`>'
     redirection is that the old contents (if any) of OUTPUT-FILE are
     not erased.  Instead, the `awk' output is appended to the file.

`print ITEMS | COMMAND'
     It is also possible to send output through a "pipe" instead of
     into a file.  This type of redirection opens a pipe to COMMAND
     and writes the values of ITEMS through this pipe, to another
     process created to execute COMMAND.

     The redirection argument COMMAND is actually an `awk' expression.
     Its value is converted to a string, whose contents give the shell
     command to be run.

     For example, this produces two files, one unsorted list of BBS
     names and one list sorted in reverse alphabetical order:

          awk '{ print $1 > "names.unsorted"
                 print $1 | "sort -r > names.sorted" }' BBS-list

     Here the unsorted list is written with an ordinary redirection
     while the sorted list is written by piping through the `sort'
     utility.

     Here is an example that uses redirection to mail a message to a
     mailing list `bug-system'.  This might be useful when trouble is
     encountered in an `awk' script run periodically for system
     maintenance.

          print "Awk script failed:", $0 | "mail bug-system"
          print "at record number", FNR, "of", FILENAME | "mail bug-system"
          close("mail bug-system")

     We call the `close' function here because it's a good idea to
     close the pipe as soon as all the intended output has been sent
     to it.  *Note Close Output::, for more information on this.

Redirecting output using `>', `>>', or `|' asks the system to open a
file or pipe only if the particular FILE or COMMAND you've specified
has not already been written to by your program.
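As a small illustration of the `>>' form described above (the file
name `log-file' is our own choice), two successive runs accumulate
output in one file instead of truncating it:

```shell
# Sketch: ">>" appends, so a second run does not erase the file.
awk 'BEGIN { print "first run"  >> "log-file" }'
awk 'BEGIN { print "second run" >> "log-file" }'
cat log-file     # shows both lines, in order
```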
▶1f◀
File: gawk-info,  Node: Close Output,  Prev: File/Pipe Redirection,  Up: Redirection

Closing Output Files and Pipes
------------------------------

When a file or pipe is opened, the file name or command associated
with it is remembered by `awk' and subsequent writes to the same file
or command are appended to the previous writes.  The file or pipe
stays open until `awk' exits.  This is usually convenient.

Sometimes there is a reason to close an output file or pipe earlier
than that.  To do this, use the `close' function, as follows:

     close(FILENAME)

or

     close(COMMAND)

The argument FILENAME or COMMAND can be any expression.  Its value
must exactly equal the string used to open the file or pipe to begin
with--for example, if you open a pipe with this:

     print $1 | "sort -r > names.sorted"

then you must close it with this:

     close("sort -r > names.sorted")

Here are some reasons why you might need to close an output file:

   * To write a file and read it back later on in the same `awk'
     program.  Close the file when you are finished writing it; then
     you can start reading it with `getline' (*note Getline::.).

   * To write numerous files, successively, in the same `awk' program.
     If you don't close the files, eventually you will exceed the
     system limit on the number of open files in one process.  So
     close each one when you are finished writing it.

   * To make a command finish.  When you redirect output through a
     pipe, the command reading the pipe normally continues to try to
     read input as long as the pipe is open.  Often this means the
     command cannot really do its work until the pipe is closed.  For
     example, if you redirect output to the `mail' program, the
     message is not actually sent until the pipe is closed.

   * To run the same program a second time, with the same arguments.
     This is not the same thing as giving more input to the first run!
     For example, suppose you pipe output to the `mail' program.
     If you output several lines redirected to this pipe without
     closing it, they make a single message of several lines.  By
     contrast, if you close the pipe after each line of output, then
     each line makes a separate message.

▶1f◀
File: gawk-info,  Node: Special Files,  Prev: Redirection,  Up: Printing

Standard I/O Streams
====================

Running programs conventionally have three input and output streams
already available to them for reading and writing.  These are known as
the "standard input", "standard output", and "standard error output".
These streams are, by default, terminal input and output, but they are
often redirected with the shell, via the `<', `<<', `>', `>>', `>&'
and `|' operators.  Standard error is used only for writing error
messages; the reason we have two separate streams, standard output and
standard error, is so that they can be redirected separately.

In other implementations of `awk', the only way to write an error
message to standard error in an `awk' program is as follows:

     print "Serious error detected!\n" | "cat 1>&2"

This works by opening a pipeline to a shell command which can access
the standard error stream which it inherits from the `awk' process.
This is far from elegant, and is also inefficient, since it requires a
separate process.  So people writing `awk' programs have often
neglected to do this.  Instead, they have sent the error messages to
the terminal, like this:

     NF != 4 {
       printf("line %d skipped: doesn't have 4 fields\n", FNR) > "/dev/tty"
     }

This has the same effect most of the time, but not always: although
the standard error stream is usually the terminal, it can be
redirected, and when that happens, writing to the terminal is not
correct.  In fact, if `awk' is run from a background job, it may not
have a terminal at all.  Then opening `/dev/tty' will fail.

`gawk' provides special file names for accessing the three standard
streams.
When you redirect input or output in `gawk', if the file name matches
one of these special names, then `gawk' directly uses the stream it
stands for.

`/dev/stdin'
     The standard input (file descriptor 0).

`/dev/stdout'
     The standard output (file descriptor 1).

`/dev/stderr'
     The standard error output (file descriptor 2).

`/dev/fd/N'
     The file associated with file descriptor N.  Such a file must
     have been opened by the program initiating the `awk' execution
     (typically the shell).  Unless you take special pains, only
     descriptors 0, 1 and 2 are available.

The file names `/dev/stdin', `/dev/stdout', and `/dev/stderr' are
aliases for `/dev/fd/0', `/dev/fd/1', and `/dev/fd/2', respectively,
but they are more self-explanatory.

The proper way to write an error message in a `gawk' program is to use
`/dev/stderr', like this:

     NF != 4 {
       printf("line %d skipped: doesn't have 4 fields\n", FNR) > "/dev/stderr"
     }

Recognition of these special file names is disabled if `gawk' is in
compatibility mode (*note Command Line::.).

▶1f◀
File: gawk-info,  Node: One-liners,  Next: Patterns,  Prev: Printing,  Up: Top

Useful ``One-liners''
*********************

Useful `awk' programs are often short, just a line or two.  Here is a
collection of useful, short programs to get you started.  Some of
these programs contain constructs that haven't been covered yet.  The
description of the program will give you a good idea of what is going
on, but please read the rest of the manual to become an `awk' expert!

`awk '{ num_fields = num_fields + NF }
      END { print num_fields }''
     This program prints the total number of fields in all input
     lines.

`awk 'length($0) > 80''
     This program prints every line longer than 80 characters.  The
     sole rule has a relational expression as its pattern, and has no
     action (so the default action, printing the record, is used).

`awk 'NF > 0''
     This program prints every line that has at least one field.
     This is an easy way to delete blank lines from a file (or rather,
     to create a new file similar to the old file but from which the
     blank lines have been deleted).

`awk '{ if (NF > 0) print }''
     This program also prints every line that has at least one field.
     Here we allow the rule to match every line, then decide in the
     action whether to print.

`awk 'BEGIN { for (i = 1; i <= 7; i++)
                print int(101 * rand()) }''
     This program prints 7 random numbers from 0 to 100, inclusive.

`ls -l FILES | awk '{ x += $4 } ; END { print "total bytes: " x }''
     This program prints the total number of bytes used by FILES.

`expand FILE | awk '{ if (x < length()) x = length() }
                    END { print "maximum line length is " x }''
     This program prints the maximum line length of FILE.  The input
     is piped through the `expand' program to change tabs into spaces,
     so the widths compared are actually the right-margin columns.
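These one-liners are easy to try out for yourself.  For example (the
sample input here is our own), piping a few lines into the `NF > 0'
program above deletes the blank ones:

```shell
# Trying the "print every line with at least one field" one-liner
# on sample input containing blank lines.
printf 'one\n\ntwo\n\n\nthree\n' | awk 'NF > 0'
# prints:
#   one
#   two
#   three
```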