I have the input file shown below. It contains websites such as Yahoo, Gmail, and Yuimn, each followed by the users listed under it. There are many other unique websites, but I have mentioned only a few here.
For example, Yahoo is a website, and 123 and fsfd are members of the website "yahoo". See the input file below. All websites and users are in a single column.
Output:
I need each website in a separate text file, with its users listed in it, as below.
vi yahoo
123
fsfd
vi gmail
10022000
100dfg018
vi Yuimn
dfsdfsd
dfdsfdsfds
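For reference, the input presumably looks something like the following (this is an assumed reconstruction based on the od dump later in the thread: site names flush left, users indented with a tab):

```
yahoo
	123
	fsfd
gmail
	10022000
	100dfg018
```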
Yes, there are no output files. I ran the command you provided, and below is the output:
sample]$ od -c sample.out | head -10
0000000 M S D A R G \n \t a l e g \n \t c l
0000020 i n i c a \n \t E D U B O S S I \n
0000040 \t m a r i o \n \t t i g r e c a r
0000060 \n M S D B R Z \n \t 1 0 0 0 0 0 \n
0000100 \t 1 0 0 0 1 8 \n \t 1 0 0 0 3 4 \n
0000120 \t 1 0 0 1 3 2 \n \t 1 0 0 1 5 8 \n
0000140 \t 1 0 0 1 8 6 \n \t 1 0 0 1 9 7 \n
0000160 \t 1 0 0 2 1 6 \n \t 1 0 0 2 2 7 \n
0000200 \t 1 0 0 2 6 5 \n \t 1 0 0 2 7 9 \n
0000220 \t 1 0 0 2 8 0 \n \t 1 0 0 3 1 0 \n
Hi Cjcox,
I have run the command you posted. It created a file for each user, which is not what I expected — it created almost 1 million files.
Your file is <TAB> delimited, which you didn't specify or include in your sample. Try this adaptation of Yoda's script (and try it on a small subset of your data):
If we knew WHAT is "not working as expected" there might be a chance we could help. What's your OS, shell, awk version? What's the result of applying the solutions given to the small sample file you gave in post#6?
# Set <tab> and <space> as field separators
awk -F'[\t ]' '
# If NF (number of fields) in current input record == 1, set ofile variable = $1 (output filename)
# next forces awk to immediately stop processing current record and go on to the next record
NF == 1 {
ofile = $1
next
}
# If NF (number of fields) in current input record == 2, write $2 to ofile (output filename)
NF == 2 {
print $2 > ofile
}
' input.txt
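To illustrate, here is a self-contained demo of the script above on a tiny tab-indented sample. The data and filenames (input.txt, yahoo, gmail) are made up for illustration; run it in a scratch directory:

```shell
# Build a small assumed sample: site names flush left, users tab-indented.
printf 'yahoo\n\t123\n\tfsfd\ngmail\n\t10022000\n' > input.txt

# Site lines have one field; tab-indented user lines split into an
# empty $1 and the user in $2, so NF == 2.
awk -F'[\t ]' '
NF == 1 { ofile = $1; next }
NF == 2 { print $2 > ofile }
' input.txt

cat yahoo gmail
```

This should leave a file named yahoo containing 123 and fsfd, and a file named gmail containing 10022000.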
Here is a slightly different approach. It closes each output file when a new output filename appears in the input, and it uses the default field separator (any run of contiguous <space> and <tab> characters separates fields; leading <space>s and <tab>s are ignored when counting fields). Closing files is important when lots of files are created from one input file, because awk implementations limit the number of simultaneously open files.
awk ' # Name utility used to run this script and open the script.
NF == 0 {
# If this is a blank line (i.e., # of fields is zero), skip to next line
# of input.
next
}
!/^[ \t]/ {
# if the first character on the line is not a <space> or a <tab>...
# If the current output filename is not the empty string, close the
# current output file.
if(out_file) close(out_file)
	# Set the name of the output file to use on the following lines.
	out_file = $1
	# Skip to the next line of input.
next
}
{ # Print the current data (assumed to be a single word) to the current
# output file, skipping leading <space> and <tab> characters...
print $1 > out_file
	# Note that the above replaces existing output files with new contents
	# every time a new filename is encountered. If you want to append
	# (instead of replace), change ">" to ">>".
}' input.txt # Close the script and name the input file(s) to be processed.
If someone wants to try this on a Solaris/SunOS system, change awk to /usr/xpg4/bin/awk or nawk.
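As a quick sanity check, the same kind of small-scale test can be run against this second version. The sample data below is made up for illustration and includes a blank line to exercise the NF == 0 branch:

```shell
# Assumed demo data: site names flush left, users tab-indented,
# with a blank line that the script should skip.
printf 'yahoo\n\t123\n\n\tfsfd\ngmail\n\t999\n' > input.txt

awk '
NF == 0 { next }                 # skip blank lines
!/^[ \t]/ {                      # site line: switch output files
	if (out_file) close(out_file)
	out_file = $1
	next
}
{ print $1 > out_file }          # user line: write to current file
' input.txt

cat yahoo gmail
```

The blank line is skipped, so yahoo should end up with 123 and fsfd, and gmail with 999.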