I have a file which contains data like this:
CELL 2 TEST AND DIAGNOSTIC UNIT 2
CELL 2, CDM 1 CBR 1 TRANSMIT PORT (TXPORT) 1
CELL 2, CDM 1 CBR 2 TRANSMIT PORT (TXPORT) 1
CELL 2, CDM 1 CBR 3 TRANSMIT PORT (TXPORT) 1
CELL 2, CDM 1 CBR 1 TRANSMIT PORT (TXPORT) 2
CELL 2, CDM 1 CBR 2 TRANSMIT PORT (TXPORT) 2
CELL 2, CDM 1 CBR 3 TRANSMIT PORT (TXPORT) 2
CELL 3 PACKET PIPE (PP) 3
CELL 3, CDM 1 CBR 3 TRANSMIT PORT (TXPORT) 1
CELL 4 TIME FREQUENCY UNIT (TFU) 1
CELL 4 TEST AND DIAGNOSTIC UNIT 2
CELL 4, CDM 1 PRIMARY SIGNALING LINK
CELL 4, CDM 1 ALTERNATE SIGNALING LINK
CELL 4 PACKET PIPE (PP) 1
CELL 4 PACKET PIPE (PP) 2
CELL 4 PACKET PIPE (PP) 3
What I want is to read that file and have the lines that start with CELL 2 copied into a new file named cell2 in the same directory, and the same for CELL 3 and CELL 4. That means three new files will be created, named cell2, cell3 and cell4.
The following script reads the file line by line, extracts the cell number from the string, constructs the corresponding filename and appends the matching line to that file. Blank lines are dropped.
while read line
do
    [[ $line = *CELL* ]] \
        && FILENUMBER=$(sed 's/CELL \([0-9]\+\)[ ,].*/\1/' <<< "$line") \
        && echo "$line" >> "cell${FILENUMBER}"
done < file
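To see it work end to end, here is a self-contained run against a throwaway sample file (variables quoted, and [0-9][0-9]* used as the portable spelling of "one or more digits"):

```shell
# Throwaway sample input standing in for the real data file.
cat > file <<'EOF'
CELL 2 TEST AND DIAGNOSTIC UNIT 2
CELL 2, CDM 1 CBR 1 TRANSMIT PORT (TXPORT) 1
CELL 3 PACKET PIPE (PP) 3
EOF
rm -f cell2 cell3    # start clean, since the loop appends

# Same idea as the loop above: route each CELL line to cellN.
while read line
do
    [[ $line = *CELL* ]] \
        && FILENUMBER=$(sed 's/CELL \([0-9][0-9]*\)[ ,].*/\1/' <<< "$line") \
        && echo "$line" >> "cell${FILENUMBER}"
done < file
```

After this, cell2 holds the two CELL 2 lines and cell3 the single CELL 3 line.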
I used this: cat cell | grep "CELL 2" > cell2 to read the file named cell and grep only the lines that start with CELL 2. But when I open the output file cell2, it also contains the lines starting with CELL 21, 22, 23 through 29.
My requirement is that the output file cell2 only contains lines that start with CELL 2, not CELL 21 to 29.
I hope you understand what I am saying.
Firstly: you don't need cat; grep "pattern" file is enough.
Secondly: your regexp matches "CELL 2" at any position in the line,
so "CELL 21" matches too.
grep "^CELL 2[ ,]" file
will match only lines starting with "CELL 2" followed by a space
or a comma.
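To produce all three files in one go, the anchored pattern can go in a small loop (a sketch against a throwaway sample file; point it at your real file instead):

```shell
# Sample input; in the real case the file already exists.
cat > file <<'EOF'
CELL 2 TEST AND DIAGNOSTIC UNIT 2
CELL 21 SOMETHING ELSE
CELL 3 PACKET PIPE (PP) 3
CELL 4 TIME FREQUENCY UNIT (TFU) 1
EOF

# The ^ anchor plus the [ ,] class keeps CELL 21..29 out of cell2.
for n in 2 3 4
do
    grep "^CELL $n[ ,]" file > "cell$n"
done
```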
Try my solution. It should give you what you want. Save it in a
file and make it executable, or run it from the command line.
Now I am using nawk and it's giving the error below. all is the name of my input file.
#!/bin/sh
nawk -F" |," 'NF{print > tolower($1$2)}' all
KarachiOMP root> ./code3
nawk: null file name in print or getline
input record number 5, file all
source line number 1
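That error usually means the expression after > came out as an empty string for some record — for example a line consisting only of separator characters: with -F" |," such a line still has NF > 0, but $1 and $2 are both empty, so the file name tolower($1$2) is null. A guard on the concatenation avoids it (sketched here with awk, since nawk may not be installed everywhere):

```shell
# The second line here holds a single space: it passes an NF test
# but would yield an empty file name; $1$2 != "" skips it.
cat > all <<'EOF'
CELL 2 TEST AND DIAGNOSTIC UNIT 2
 
CELL 3 PACKET PIPE (PP) 3
EOF

awk -F" |," '$1$2 != "" {print > tolower($1$2)}' all
```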
Thanks Franklin52 for your great help; the code is now working with my file.
Can you please check the thread named below in Shell Programming and Scripting? I'm having a problem there too: I want a command's output in a new file, but the complete output is not coming into the output file.
I have one more question, since the above code creates multiple files according to the cell numbers.
What if the same data (as in the input file) is mixed with many other lines in a log file, and I want to pull out just the lines that start with the word CELL and put them all in a single file? And when the line CELL 86 appears, the script should stop, so the output file contains the same data as the input file from CELL 1 to CELL 85.
But the above code is also picking up lines like this:
A 46 REPT:CELL 65 CP FAILURE, UNANSWERED ORIGINATION
A 46 REPT:CELL 35 CP FAILURE, ANSWERED TERMINATION
A 46 REPT:CELL 7 CP FAILURE, UNANSWERED ORIGINATION
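Those REPT:CELL lines match because /CELL/ looks anywhere in the line. If the wanted records always begin with CELL, anchoring both patterns filters them out (a sketch against a small sample, not your real log path):

```shell
# Sample log mixing wanted CELL lines with REPT:CELL noise.
cat > sample.log <<'EOF'
CELL 2 TEST AND DIAGNOSTIC UNIT 2
A 46 REPT:CELL 65 CP FAILURE, UNANSWERED ORIGINATION
CELL 3 PACKET PIPE (PP) 3
CELL 86 TIME FREQUENCY UNIT (TFU) 1
CELL 87 SHOULD NOT APPEAR
EOF

# ^ anchors each match at the start of the line, so REPT:CELL
# in mid-line no longer qualifies; stop when CELL 86 arrives.
awk '/^CELL 86/{exit} /^CELL/{print}' sample.log > all
```

If you still want the whitespace-squeezing effect of $1=$1 from your original one-liner, keep it inside the second action.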
Actually I want to tell you the whole story of what I am trying to do. Another thread is also going on about this issue, so please don't consider it a duplicate thread.
I am running the code below:
/omp/bin/TICLI "op:alarm,all" &
nawk '/CELL 86/{exit}/CELL/{$1=$1;print}' /omp/omp-data/logs/OMPROP1/081207.APX > all
My requirement is to get the complete output of this command:
/omp/bin/TICLI "op:alarm,all"
in an output file, but when I run this:
/omp/bin/TICLI "op:alarm,all" > outputfile
a different amount of data ends up in my file each time, and never the complete output.
When I run that command, its output is also written to our log file, so now I am trying to extract the command's output from the log file.
But because the log file contains data for the whole system and many other commands, I need some mechanism so that when I run the command on the shell, the output from that same moment goes into the output file.
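One common way to see the output on the terminal and capture all of it at the same time is tee, with stderr folded into stdout in case the command writes there. Also note that your earlier invocation ended in &, which backgrounds the command, so anything reading the file immediately afterwards can see only partial output. A sketch (the alarm_cmd function stands in for /omp/bin/TICLI "op:alarm,all", which only exists on your system):

```shell
# Stand-in for: /omp/bin/TICLI "op:alarm,all"
alarm_cmd() { echo "CELL 2 TEST AND DIAGNOSTIC UNIT 2"; }

# 2>&1 merges stderr into stdout; tee writes everything to the
# file while still showing it on the terminal. No trailing &,
# so the shell waits until the command has finished.
alarm_cmd 2>&1 | tee outputfile
```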