I have a plain text file with 4 columns and about 300 lines. I am looking to write a simple script that will read each line from the file and generate another text file. The file looks something like this:
These are the columns:
StudentName Grade Class Teacher
StudentName2 Grade Class Teacher
...
...
...
StudentName300 Grade Class Teacher
What I need to do is extract the text from this file and re-arrange it into another file.
For example, this is how the new file would be arranged:
Teacher Class Grade StudentName
I can easily do it when a file has just a single line, like this:
-------------------
StudentName=$(awk '{print $1}' /fileName)
Grade=$(awk '{print $2}' /fileName)
Class=$(awk '{print $3}' /fileName)
Teacher=$(awk '{print $4}' /fileName)
[user@host ~]$ cat file
a b c d
a b c d
a b c d
a b c d
[user@host ~]$ awk '{for (i=NF;i>=1;i--){printf "%s ",$i} {print""}}' file
d c b a
d c b a
d c b a
d c b a
[user@host ~]$
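The loop above reverses whatever columns a line has, which works here only because the target order (Teacher Class Grade StudentName) happens to be a full reversal. For an arbitrary order you can name the columns explicitly instead. A minimal sketch; the file name and sample records are made up for illustration:

```shell
# sample input in the thread's four-column format (made-up data)
printf '%s\n' 'Alice B Math Smith' 'Bob C History Jones' > file

# print the columns in the order: 4th, 3rd, 2nd, 1st
awk '{print $4, $3, $2, $1}' file > newfile
cat newfile
```

Separating the fields with commas in `print` joins them with awk's output field separator (a single space by default).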
Hi all, thanks for the quick responses. I guess I should have mentioned that I also intend for the script to use each field as a variable, aside from the sorting of the columns. For example, the sorted columns will be used to print a report, but I would also like the script to generate the following text at the bottom of the report:
"The student, ${StudentName} is failing ${Teacher}'s ${Class} with a grade of ${Grade}"
Of course I will add a mechanism to check whether the grade is passing or failing, but that I know how to do. It is assigning variable values from multiple lines that is giving me trouble.
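One common way to get each field of each line into its own named variable is a `while read` loop: `read` splits the line on whitespace and assigns one word per variable. A sketch under the assumption that no field contains spaces; the file name and sample records are made up:

```shell
#!/bin/sh
# sample input: one student per line, four whitespace-separated fields (made-up data)
printf '%s\n' 'Alice 55 Math Smith' 'Bob 90 History Jones' > gradefile

while read -r StudentName Grade Class Teacher; do
    # reordered report line: Teacher Class Grade StudentName
    printf '%s %s %s %s\n' "$Teacher" "$Class" "$Grade" "$StudentName"
    # the per-student sentence; a real script would test $Grade here first
    printf 'The student, %s is failing %s'\''s %s with a grade of %s\n' \
        "$StudentName" "$Teacher" "$Class" "$Grade"
done < gradefile > report
cat report
```

Inside the loop body the four variables hold the current line's fields, so a pass/fail test on `$Grade` can be added there without any extra parsing.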