Hi, do you have any idea how to deal with a long list of data?
For a short list of data, I believe "grep" is able to handle it.
But for a long list of data, it might be difficult.
Thanks for your advice ^^
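One common way to make grep scale to a long list is to put the strings in a file and use -f with -F, so grep loads them all in one pass instead of being invoked once per string. The file names below are hypothetical, just to make the sketch runnable:

```shell
# Hypothetical files for illustration: patterns.txt holds the long
# list of strings to look for, data.txt is the file being searched.
printf 'alpha\ngamma\n' > patterns.txt
printf 'alpha 1\nbeta 2\ngamma 3\n' > data.txt

# -f reads one pattern per line from a file; -F treats them as fixed
# strings, which is much faster than regex matching for long lists
grep -F -f patterns.txt data.txt
# prints: alpha 1
#         gamma 3
```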
-m NUM, --max-count=NUM
Stop reading a file after NUM matching lines. If the input is standard input from a regular
file, and NUM matching lines are output, grep ensures that the standard input is positioned to
just after the last matching line before exiting, regardless of the presence of trailing
context lines. This enables a calling process to resume a search. When grep stops after NUM
matching lines, it outputs any trailing context lines. When the -c or --count option is also
used, grep does not output a count greater than NUM. When the -v or --invert-match option is
also used, grep stops after outputting NUM non-matching lines.
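The excerpt above can be sketched with a throwaway file (the file name here is made up for the example):

```shell
# Build a small test file
printf 'apple\nbanana\napple pie\napple tart\n' > fruit.txt

# -m 2: stop reading after the first 2 matching lines,
# so "apple tart" is never printed
grep -m 2 apple fruit.txt
# prints: apple
#         apple pie
```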
Hi, I just tried the code that you suggested.
Sad to say, it did not work.
Do you have a better suggestion?
My two input files, file 1 and file 2, have different numbers of lines.
I just want to print out the content where file 1 and file 2 match on the first column.
Really, thanks for your help
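A sketch of the join-based approach for this, using made-up sample rows (the real file1/file2 contents differ). Note that join requires both inputs sorted on the join field:

```shell
# Hypothetical sample rows for illustration
printf '1285_t chris\n9626_a dave\n' > file1
printf '1285_t germany\n8288_c england\n' > file2

# join needs both inputs sorted on the join field (column 1 by default)
sort -k1,1 file1 > file1.s
sort -k1,1 file2 > file2.s
join file1.s file2.s
# prints only the row whose first column appears in both files:
# 1285_t chris germany
```

For unsorted files, the classic awk idiom `awk 'NR==FNR{a[$1];next} $1 in a' file1 file2` avoids the sort, at the cost of holding file1's keys in memory.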
---------- Post updated at 04:01 AM ---------- Previous update was at 03:57 AM ----------
Thanks a lot.
Even though it takes some time for huge data,
it worked.
Yup, you're right.
Thanks a lot ^^
By using join, do you have any idea how to get the output with a tab delimiter between each column?
I tried to do this by using awk with "\t" on file3:
awk '{print $1"\t"$2"\t"$3}' file3 > file4
Instead of using awk to generate file4,
do you have any other suggestion to improve my code by using only join?
Thanks for your suggestion
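If the input files themselves are tab-delimited, join's -t option sets the separator for both input and output, so no awk pass is needed afterwards. A sketch with hypothetical tab-separated data:

```shell
# Hypothetical tab-delimited inputs; real data comes from the thread
printf '1285_t\tchris\n8288_c\tsteve\n' > file1
printf '1285_t\tgermany\n8288_c\tengland\n' > file2

tab=$(printf '\t')
# -t sets the field separator for both input and output
join -t "$tab" file1 file2
# prints: 1285_t<TAB>chris<TAB>germany
#         8288_c<TAB>steve<TAB>england
```

If the inputs are space-delimited and no field contains spaces, `join file1 file2 | tr ' ' '\t'` is a simple alternative.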
Hi,
I just tried both pieces of code that you suggested.
They end up linking all the data together and generate output like this:
1285_t chris germany 8288_c steve england 9626_a dave swiss
Did I do anything wrong?
Thanks again, frans