awk command optimization

gawk -v sw="error|fail|panic|accepted" '
BEGIN {
        c = split(sw, a, "[|]")
}
NR > 1 && NR <= 128500 {
        for (w in a) {
                if ($0 ~ a[w])
                        d[a[w]]++
        }
}
END {
        for (i in a) {
                o = o (a[i] "=" (d[a[i]] ? d[a[i]] : 0) ",")
        }
        sub(",*$", "", o)
        print o
}' /var/log/treg.test

the above code works majestically when searching for multiple strings in a log.

the problem is, as the log gets bigger (e.g. 5MB), the time it takes to search for all the strings grows with it. it took 2 seconds to search a 5MB file using this code; a bigger file, say 10MB, would take proportionally longer.

so i'm wondering, can this code be optimized to run faster? maybe if the strings were read from a separate file it would help speed things up?

code runs on linux redhat / ubuntu platforms

FWIW -

egrep -c '(error|fail|panic|accepted)' logfile

Does a lot of what your awk code does, not all of it.
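For what it's worth, if you need per-pattern line counts from grep alone, a minimal sketch (one pass over the file per pattern, so it trades extra passes for simplicity; logfile stands in for your log):

for p in error fail panic accepted; do
    printf '%s=%s\n' "$p" "$(grep -c "$p" logfile)"
done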

for (w in a)
 {
         if ($0 ~ a[w])
                 d[a[w]]++
 }

This code above means you loop once per pattern (four times here) on every line. I do not think regex in awk supports alternation; someone who knows more, please comment. But that would be the first place to attack your problem, because if you search for more terms your program will iterate over each line of input that many more times.

This is the same problem we have when we use grep -f list_of_items filename with a large number of entries in list_of_items.

Edit: my comment above about alternation is flat wrong. Alternation is possible. You can rewrite the main loop to use it:

/error|wanting|panic|failure/ { [define array here]++ ... }
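Concretely, a minimal sketch of the alternation idea. Note it yields one combined count (like the egrep -c above), not per-pattern counts:

gawk -v sw="error|fail|panic|accepted" '
$0 ~ sw { hits++ }      # one dynamic-regex test per line instead of one test per pattern
END { print hits + 0 }  # +0 prints 0 when nothing matched
' /var/log/treg.test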

Can you post a sample of the input file as well...

the input file is just any data file with no set format. the code is used to count occurrences of specific patterns in a file, so the exact file does not matter.

Hi,
You can try:

gawk -v sw="error|fail|panic|accepted" '
BEGIN {
        c = split(sw, a, "[|]")
}
NR > 1 && NR <= 128500 && match($0, sw) {
        d[substr($0, RSTART, RLENGTH)]++
}
END {
        for (i in a) {
                o = o (a[i] "=" (d[a[i]] ? d[a[i]] : 0) ",")
        }
        sub(",*$", "", o)
        print o
}' /var/log/treg.test

Regards.


Well you could give this [g]awk a try...

gawk '{
    for (i=1; i<=NF; i++)
        if ($i ~ "^(error|fail|panic|accepted)$")
            a[$i]++
} END {
    for (i in a) {
        n++
        printf("%s=%s%s", i, a, (n < 4 ? ", " : "\n"))
    }
}' file

thank you so much!

this looks promising. when i run it though, it only gives a count for one of the strings even though there are lines in the data file that contain the other strings:

accepted=0,error=0,fail=3859,panic=0


this looks quite promising as well. thank you so much!!!

looks like the code is written in such a way that it only counts the number of lines that contain just the specific patterns specified. but i believe i can play with it some more.

one question. is the "n < 4" setting a limit of patterns that can be specified?

btw, this completed in under 0.3 seconds on a 5MB file. so very good news!!!

Strange, this awk script works fine for me.
The case where it does not work is when a line contains more than one pattern (only the first pattern is found).
In your data file, do you have more than one pattern per line?
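If lines can carry more than one match, a hedged rework of the match() approach keeps scanning past each hit instead of stopping at the first. A sketch, assuming the patterns are plain strings with no regex metacharacters:

gawk -v sw="error|fail|panic|accepted" '
BEGIN { c = split(sw, a, "[|]") }
{
        s = $0
        while (match(s, sw)) {                  # find the next occurrence of any pattern
                d[substr(s, RSTART, RLENGTH)]++ # the matched text is the pattern itself
                s = substr(s, RSTART + RLENGTH) # resume scanning after the match
        }
}
END {
        for (i = 1; i <= c; i++)
                o = o a[i] "=" (d[a[i]] ? d[a[i]] : 0) ","
        sub(",$", "", o)
        print o
}' /var/log/treg.test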

No, reading your patterns from a file will not be faster than using split() to extract them from a string!

What is on line 1 and on lines 128501 through the end of your input file? If you can adjust your counters for what I assume is constant data on those lines, you can skip two tests that are being performed on every line. (The tests may be fast individually, but performing more than a quarter of a million fast tests adds up.)

If no more than one of your search patterns can appear on a single input line, changing:

for (w in a) {
        if ($0 ~ a[w])
                d[a[w]]++
}

to:

for (w in a) {
        if ($0 ~ a[w]) {
                d[a[w]]++
                next
        }
}

would speed things up.

Depending on what percentage of your input lines contain one or more of the search patterns, adding the test:

if($0 ~ sw) {
        above for loop
}

may speed things up or slow things down. If one of the search patterns appears on every input line, it will slow things down. If none of the search patterns appear on a vast majority of your input lines, it will speed things up. Your mileage will vary depending on your version of awk and your input data.
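Putting those two suggestions together on top of the original script, a sketch (assuming at most one pattern per line, per the next above):

gawk -v sw="error|fail|panic|accepted" '
BEGIN { c = split(sw, a, "[|]") }
NR > 1 && NR <= 128500 && $0 ~ sw {     # one cheap alternation test rejects non-matching lines
        for (w = 1; w <= c; w++)
                if ($0 ~ a[w]) {
                        d[a[w]]++
                        next            # stop after the first matching pattern
                }
}
END {
        for (i = 1; i <= c; i++)
                o = o a[i] "=" (d[a[i]] ? d[a[i]] : 0) ","
        sub(",$", "", o)
        print o
}' /var/log/treg.test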

so this code seems to be doing what i need. however, it doesn't appear to be finding strings that have spaces in them. here's how i'm running it:

gawk '{
    for (i=1; i<=NF; i++)
        if ($i ~ "(error|fail|panic|open database|accepted)")
            a[$i]++
} END {
    for (i in a) {
        n++
        printf("%s=%s%s", i, a, (n < 4 ? ", " : "\n"))
    }
}' file

it finds every other string except the "open database" one. i tried replacing it with "open.*database" and that still didn't work. can this be tweaked to accept strings with spaces?

i also tried:

gawk '{
    for (i=1; i<=NF; i++)
        if ($i ~ "/error|fail|panic|open database|accepted/")
            a[$i]++
} END {
    for (i in a) {
        n++
        printf("%s=%s%s", i, a, (n < 4 ? ", " : "\n"))
    }
}' file

Not easily. When your field separator is whitespace characters (the awk default), trying to match a single field that contains one of your field separators is always going to fail. Furthermore, this code will combine your desired patterns with surrounding non-space characters. For instance, with the following contents in file:

error open
open database
database fail
fail accepted
		if ($i ~ "(error|fail|panic|open database|accepted)")
accepted error
error, fail, panic, open database, accepted

the code:

gawk '{
	for (i=1; i<=NF; i++)
		if ($i ~ "error|fail|panic|open database|accepted")
			a[$i]++
} END {
	for (i in a) {
		n++
		printf("%s=%s%s", i, a, (n < 4 ? ", " : "\n"))
	}
}' file

produces the output:

error=2, fail=2, fail,=1, error,=1
accepted.=1
accepted=2
panic,=1
"(error|fail|panic|open=1
database|accepted)")=1

Note that the 4 in the expression (n < 4 ? ", " : "\n") above controls how many patterns are shown on the 1st line of the output. If all of your patterns only appeared as complete fields (with no punctuation and no additional letters [such as errors or panics]), you would want that number to be the same as the number of patterns in your | separated list of patterns.

That was specific to your question, as you were looking for 4 patterns; if you want something generic, take out the n...
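For instance, a sketch of a generic variant that tracks the separator instead of counting entries (same field-splitting caveats as the original):

gawk '{
    for (i = 1; i <= NF; i++)
        if ($i ~ "^(error|fail|panic|accepted)$")
            a[$i]++
} END {
    for (i in a) {
        printf("%s%s=%s", sep, i, a[i])
        sep = ", "      # separator is empty before the first entry
    }
    print ""
}' file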


The gawk above won't work as it splits each line into fields delimited by whitespace... and to find strings or phrases containing whitespace you'd have to do something like...

gawk '{
    a["error"] += gsub("(^| )error( |$)", "&")
    a["fail"] += gsub("(^| )fail( |$)", "&")
    a["panic"] += gsub("(^| )panic( |$)", "&")
    a["accepted"] += gsub("(^| )accepted( |$)", "&")
    a["open database"] += gsub("(^| )open database( |$)", "&")
} END {
    for (i in a)
        printf("%s=%s\n", i, a)
}' treg.test

Hope this helps...


Yes, it helps a lot!!!

can i pass the strings as an argument instead of hardcoding them in? something like:

strvar="error|fail|panic|open database|accepted"

I would be interested to see how the performance of these solutions compare with something like:

egrep -ow "(error|fail|panic|open database|accepted)" treg.test | sort | uniq -c

SkySmart,
I have yet to see a clear definition of what the submitter wants to be matched by the patterns given. For example, with the following input:

error open
open database
database fail
fail accepted
		if ($i ~ "(error|fail|panic|open database|accepted)")
accepted error

error, fail, panic, open database, accepted.
error123 error456:error789
error error error

should "error" match "error|", "error,", and "error123", as well as when error is at the start or end of a line and when it is preceded and followed by whitespace characters? The following seems to do what is wanted if all of the above are supposed to match:

strvar="error|fail|panic|open database|accepted"
awk -v patlist="$strvar" '
BEGIN {	npat = split(patlist, pl, "[|]")
	for(i = 1; i <= npat; i++)
		pat = "(^|[^[:alpha:]])" pl "([^[:alpha:]]|$)"
		# for(i=1; i <= npat; i++)
		#	printf("pl[%d]=%s, pat[%d]=%s\n", i, pl, i, pat)
}
$0 ~ patlist {	# printf("NR=%d, $0=%s\n", NR, $0)
	for(i = 1; i <= npat; i++) {
		a += gsub(pat, "&")
		# printf("a[%s] = %d (for %s)\n",  i, a, pl)
	}
}
END {	for(i = 1; i <= npat; i++)
		printf("%s=%s\n", pl, a)
}' file

If you don't want to match 123error and error123, change both occurrences of :alpha: in the code above to :alnum:.

If the vast majority of your input lines will contain one or more of your search patterns, this will probably run faster if you remove the $0 ~ patlist pre-filter.

With the above sample input file, this script produces the output:

error=9
fail=4
panic=2
open database=3
accepted=4

You also have yet to explain why your script was eliminating line 1 and lines 128501 and above from your input file. Until we know what is special about those lines and what is on them, we won't be able to help you adjust these awk and egrep commands to solve your problem. I agree with Chubler_XL that egrep looks like a better solution than awk, but depending on what you're trying to match, the ERE operand may need to be significantly modified and the -w option removed before invoking egrep. Furthermore, if some lines do need to be treated specially, awk becomes more attractive.


thanks everyone. this has now been resolved. i took bits and pieces from every single post in here to get what i want.

thank you guys!