Editing 1st or nth column

Hi,

I have a file which is pipe-delimited:

100| alpha| tabgo|watch| |||| 444444
| alpha| tabgo|watch| |||| 444444
| sweden |tabgo|watch| |||| 444444
| US| tabgo|watch| |||| 444444
100| factory| tabgo|watch| |||| 444444
| ABC| tabgo|watch| |||| 444444
| launch| tabgo|watch| |||| 444444
| Cam| tabgo|watch| |||| 444444
| Roger| tabgo|watch| |||| 444444
| Sixty| tabgo|watch| |||| 444444

The first column either has a space or the static value 100.

I need to know two things:
1) How do I append one or more characters, say "XX", to the second column?
Output expected:
100| alphaXX| tabgo|watch| |||| 444444
| alphaXX| tabgo|watch| |||| 444444
| swedenXX |tabgo|watch| |||| 444444
| USXX| tabgo|watch| |||| 444444
100| factoryXX| tabgo|watch| |||| 444444
| ABCXX| tabgo|watch| |||| 444444
| launchXX| tabgo|watch| |||| 444444
| CamXX| tabgo|watch| |||| 444444
| RogerXX| tabgo|watch| |||| 444444
| SixtyXX| tabgo|watch| |||| 444444

2) How do I put a space in the first column when it has no static value (i.e., the field is empty or already a space)?
Expected output:
100| alphaXX| tabgo|watch| |||| 444444
| alphaXX| tabgo|watch| |||| 444444
| swedenXX |tabgo|watch| |||| 444444
| USXX| tabgo|watch| |||| 444444
100| factoryXX| tabgo|watch| |||| 444444
| ABCXX| tabgo|watch| |||| 444444
| launchXX| tabgo|watch| |||| 444444
| CamXX| tabgo|watch| |||| 444444
| RogerXX| tabgo|watch| |||| 444444
| SixtyXX| tabgo|watch| |||| 444444

awk '
  BEGIN { FS = OFS = "|" }
  $2 ~ /^[^ ]/ {                 # 1st char of 2nd field is not a space
        $2 = sprintf(" %s", $2)
  }
  {                              # every line
        $2 = sprintf("%s%s", $2, "XX")
        print $0
  }
' inputfile

Try....

First one...

awk 'BEGIN{FS=OFS="|"}{$2=$2"XX";print $0}' yourfile

Second one...

awk 'BEGIN{FS=OFS="|"}{if($1!=100)$1=$1" ";print $0}' yourfile
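For what it's worth, the two one-liners above can be combined into a single awk call. This is just a sketch, assuming the static value is always the literal string 100 and the marker is XX, as in the expected output:

```shell
# Combined version: a non-100 first field becomes a single space,
# and "XX" is appended to the second field.
awk 'BEGIN { FS = OFS = "|" } { if ($1 != "100") $1 = " "; $2 = $2 "XX"; print }' inputfile
```

Assigning to $1 or $2 makes awk rebuild the record with OFS, which is why FS and OFS are both set to "|".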

For the second option from malcom: I am getting an awk error:
awk: record `100|1||0001|296292323...' has too many fields -- I have 3350 columns in each line, separated by the delimiter.
Is there any way the awk messages can be suppressed?
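If your awk has a field limit, one workaround (a sketch, assuming the marker is XX) is sed, which never splits the record into fields -- it edits the line as a plain string, so the number of columns does not matter:

```shell
# 's/^|/ |/'  : an empty first field becomes a single space
# 's/|/XX|/2' : rewrite the 2nd "|" on the line, i.e. append "XX" to field 2
sed -e 's/^|/ |/' -e 's/|/XX|/2' inputfile
```

One caveat: a field with a trailing space, like "| sweden |", would come out as "| sweden XX|" rather than "| swedenXX |".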

---------- Post updated at 07:27 PM ---------- Previous update was at 07:24 PM ----------

Looks like awk is having problems everywhere with this many columns.

in Perl you could...

  1. Read the file line by line -- assume stdin:

while (<STDIN>) {
...
}

  2. Inside that while() loop, split the line into an array:
    @array = split(/\|/, $_, -1); # escape the pipe -- '|' is a regex metacharacter; the -1 limit keeps trailing empty fields

  3. Now do what you want with the first two elements of the array:
    $array[0] = " " if $array[0] ne '100';
    $array[1] = $array[1] . 'XX';

  4. Join it all back together:
    $string = join('|', @array);

  5. Write the string to some file or other structure -- assume STDOUT:
    print $string . "\n"; # don't forget the newline at the end of each string

When the while() loop ends you will have your original file back with the first two fields modified as desired. The advantage of this approach over fancier things you could do with more elegant tools (e.g. perl's map function) is that it will work even if you have a million (or ten million) lines with a hundred thousand fields each!

execute with 'perl progname < input > output'

quine@sonic.net