Hello,
I read and searched through this wonderful forum and tried different approaches, but it seems I lack some knowledge and neurons ^^
Here is what I'm trying to achieve:
file1:
test filea 3495;
test fileb 4578;
test filec 7689;
test filey 9978;
test filez 12300;
file2:
test filea 3495;
test filed 4578;
test filec 7689;
test filex 8978;
results:
test filea 3495;
test filed 4578;
test filec 7689;
test filex 8978;
test filey 9978;
test filez 12300;
The comparison is based on the last field (field $3). New content from file2 (here, the record with "key" 8978) should be added to the final output, and content that differs in file2 ("test filed 4578;" here) should replace the file1 version.
I thought the solution was something like: store the keys from file1, iterate over them in file2, then reverse the iteration to find the missing records... I was far, far away from the beauty of awk...
If I understand correctly, awk reads the two files and automagically merges the records itself? That means there is no need to store values from file1 to compare them against file2? Beautiful...
Two things I don't get: the use of the underscore (I guess it stands for "all read records"?), and why is END not at the end?
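(The compact one-liner being asked about isn't quoted in this excerpt; judging from the questions, it was presumably something like the sketch below. "_" is just an ordinary, if cryptic, array name, and awk accepts the END rule in any position relative to the other rules.)

```shell
# Minimal sample data (assumed file names, shortened from the thread's example)
printf 'test filea 3495;\ntest fileb 4578;\n' > file1
printf 'test filea 3495;\ntest filed 4578;\n' > file2

# END may come first: awk collects all rules before reading any input.
# for-in order is unspecified, so we sort for a stable display.
awk 'END { for (k in _) print _[k] } { _[$NF] = $0 }' file1 file2 | sort
# test filea 3495;
# test filed 4578;
```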
About the sort command: wouldn't it fail on the ';'? Do you know how to specify the 'last field' of a line with sort? Or is something like:
| awk '{ printf substr($NF, 1, length($NF)-1);$NF = "";printf " %s\n",$0 }' | sort -n | awk '{ printf "%s%s;\n",$0,$1 }' | awk '{$1="";sub(/^ +/, "");printf "%s\n",$0}'
preferable?
It uses an associative array (a hash), so it guarantees the uniqueness
of the key ($NF in this case), and the value is always the last one it sees
(the one in file2). It associates every key ($NF) with the entire record ($0) and updates the value whenever it sees the same key again.
Well, this is just a style of writing;
if you want the code to be more readable,
you could use this instead (and this is compatible even with the old plain Solaris awk):
awk '{
key_record[$NF] = $0 # associate key ($NF) with entire record ($0)
}
END {
# after the entire input has been read
for (key in key_record) # for every key stored
print key_record[key] # print the associated value
}' file1 file2
I think the sort command will handle it correctly. Do you have an example where input like this is not sorted correctly?
Why? Isn't the last field position fixed?
In that case I would go with:
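(The command that followed doesn't appear in this excerpt; given the fixed three-field layout just mentioned, a plausible shape would be a numeric sort keyed on field 3. This is a reconstruction, not the original poster's actual command; 'sort -n' reads only the leading numeric string of the key, so the trailing ';' is ignored.)

```shell
# Assumed file name and sample data from the thread
printf 'test filez 12300;\ntest filea 3495;\ntest fileb 4578;\n' > file1

sort -k3,3n file1
# test filea 3495;
# test fileb 4578;
# test filez 12300;
```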
Thanks a lot for taking the time to explain all this, radoulov ^^ that's really great!
Well, not in that particular case, but I remember having to strip the ';' to be able to use 'sort -n' correctly (without specifying a key: I just extract the last field with awk, then apply sort -n to it. A shame 'sort' doesn't allow reverse key selection), for example with values like:
27384;
7384; or 384;
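(For what it's worth, a quick check suggests the trailing ';' alone shouldn't trip up a numeric sort on these values, since sort -n only parses the leading numeric string; a lingering Windows CR, on the other hand, could well explain odd results.)

```shell
# The values from the post above, deliberately out of order:
printf '27384;\n7384;\n384;\n' | sort -n
# 384;
# 7384;
# 27384;
```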
But I tried so many different things, I guess this must be a remnant of some mistypes/mistakes on my side, or of the Windows line endings some files seem to have (some files are created on Windows and some on Unix)?
No, the last field position is not fixed: I'm writing a bash utility script for sorting/updating SQL query files, and it has to work on several different files where the number of fields is not always the same, and where the key value can (rarely, but it happens) sit in the middle of the line.
So in this case, taking a $key argument from the CLI:
awk 'END{for(k in _)print _[k]}{_[$'"$key"']=$0}' "$file1" "$file2" > "$file1.updated"
with an additional conditional on the argument '0' meaning the end of the line (because I couldn't get $key to turn into NF with the '"$key"' splicing).
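(One way to avoid both the quoting gymnastics and the special-case '0' handling in shell might be awk's standard -v option. A sketch, with assumed file names; the key=0 convention for "last field" is the one described above, here handled inside awk itself.)

```shell
# Sample data (assumed names); key=0 means "use the last field" (NF)
printf 'test filea 3495;\ntest fileb 4578;\n' > file1
printf 'test filea 3495;\ntest filed 4578;\n' > file2
key=0

awk -v key="$key" '
    { k = (key == 0 ? NF : key); _[$k] = $0 }   # key 0 falls back to NF
END { for (i in _) print _[i] }                 # for-in order is unspecified
' file1 file2 > file1.updated
```

Note that -v is POSIX awk; the old plain Solaris /usr/bin/awk mentioned earlier may not accept it.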
I'm making it for a small community and it has to be really simple.
If you're not afraid of reading awful code, I can post it ^^