In bash:

for i in `cat file` ; do
    echo $i
done

How would I do this in Perl?
open(my $in, '<', 'filename') or die "open: $!\n";
while (<$in>)
{
    # Good practice to store the $_ value because
    # subsequent operations may change it.
    my $line = $_;
    # Good practice to always strip the trailing
    # newline from the line.
    chomp($line);
    # Print the line to the screen and add a newline.
    print "$line\n";
}
close($in);
#!/usr/local/bin/perl
open(my $fh, '<', 'data.txt') or die "open: $!\n";
while (<$fh>) {
    chomp;
    print "$_\n";
}
close($fh);
Just google it... you can find a lot of ways.
If you want to preserve the input file content intact during processing, you should use something like this:
while IFS= read -r var; do
    printf '%s\n' "$var"
done < infile
Consider the following:
bash-2.03$ ls
infile
bash-2.03$ cat infile
one two
-n -e ok?
three *
bash-2.03$ for i in `cat infile` ; do echo $i ; done
one
two
ok?
three
infile
bash-2.03$
And:
bash-2.03$ while IFS= read -r var; do printf '%s\n' "$var"; done < infile
one two
-n -e ok?
three *
bash-2.03$
So the latter in Perl would be:
bash-2.03$ perl -pe1 infile
one two
-n -e ok?
three *
Or:
#!/usr/bin/perl
use warnings;
use strict;

open my $infile, '<', 'infile'
    or die "open: $!\n";
print while <$infile>;
close $infile
    or warn "close: $!\n";
Thanks itkamaraj... I will try this.
How to emulate that shell fragment depends on the value of IFS.
Assuming the default value, itkamaraj's suggestions are incorrect. That sh loop prints out one line per word, not a line per line. The sh loop will also not only trim leading and trailing whitespace, but squeeze contiguous whitespace embedded in the line.
If IFS is set to a non-default, non-whitespace value, there would be no trimming.
Regards,
Alister
That's not even how you're supposed to do this in bash. It's wasteful and dangerous -- a file too long might throw an error, or just be silently truncated. Where did you learn this?
while read i
do
    ...
done < file
Actually, below is my bash script that I want to do in Perl, and it has two arguments:
Argument 1 is a list of servers and argument 2 is a list of home directories.
#!/bin/bash
user='rsolis@sysgen.com';
servers_list=`cat $1`
homedir_list=`cat $2`

for i in $servers_list
do
    ssh -q root@$i true
    if [ $? = 0 ]
    then
        for j in $homedir_list
        do
            ssh root@$i "/bin/echo $user >> /opt/home/$j/.k5login"
            echo -e " ... appending $user in server $i at /opt/home/$j/.k5login ...OK"
        done
    else
        echo -e "\nServer: $i ssh is down"
    fi
done
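A fairly literal Perl translation of that script might look like the sketch below. It is untested against real hosts and assumes the same passwordless root ssh setup as the original; the `read_lines` helper and the `@ARGV` count guard are my additions, not part of the original script:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Read a file into a list of chomped lines, one element per line.
sub read_lines {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!\n";
    chomp(my @lines = <$fh>);
    close $fh;
    return @lines;
}

sub main {
    my ($server_file, $homedir_file) = @_;
    my $user     = 'rsolis@sysgen.com';
    my @servers  = read_lines($server_file);
    my @homedirs = read_lines($homedir_file);

    for my $server (@servers) {
        # List-form system() avoids an extra shell; it returns 0 on success.
        if (system('ssh', '-q', "root\@$server", 'true') == 0) {
            for my $dir (@homedirs) {
                system('ssh', "root\@$server",
                       "/bin/echo $user >> /opt/home/$dir/.k5login");
                print " ... appending $user in server $server at /opt/home/$dir/.k5login ...OK\n";
            }
        }
        else {
            print "\nServer: $server ssh is down\n";
        }
    }
}

main(@ARGV) if @ARGV == 2;
```

Invoked the same way as the bash version: `perl script.pl servers.txt homedirs.txt`. Unlike the `cat $1` backticks, `read_lines` keeps one server or directory per line, whatever whitespace the lines contain.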
Can you please help me regarding the below issue?
All errors are included inside 'IGNORE_ERROR.txt'.
For example, the end user will just put all the errors they don't want inside the 'IGNORE_ERROR.txt' file:
ERROR1
ERROR2
ERROR3
Now the script should search for ERROR1, ERROR2 and ERROR3 inside LOG.txt and delete all these lines. Using the below script, only the lines which contain ERROR3 are deleted from LOG.txt.
FILE=/home/spm/ptc/scripts/IGNORE_ERROR.txt
##Script-
UNIQUE='-={GR}=-'
#
if [ -z "$FILE" ]; then
    exit;
fi;
#
for LINE in `sed "s/ /$UNIQUE/g" $FILE`; do
    LINE=`echo $LINE | sed "s/$UNIQUE/ /g"`
    echo $LINE >>
    sed -e "/"$LINE"/d" /home/spm/ptc/scripts/LOG.txt > /home/spm/ptc/scripts/"RequiredERROR-$SERVER_NAME-`date '+%d-%m-%Y'`.log"
done
Don't hijack others' threads... anyway, below is the code:
$ xargs < IGNORE_ERROR.txt|sed 's, ,|,g;s,^,egrep -v \",g;s,$,\" inputfile,g'|sh
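The same filtering can be sketched in Perl by joining the ignore strings into a single alternation regex, `quotemeta`-escaped so each entry matches as fixed text (like `grep -F`). This is a sketch, not tested against the poster's real files; the `filter_lines` helper is my own name, and the file names are taken from the post:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Return the lines of @$lines that match none of the literal strings
# in @$ignore.
sub filter_lines {
    my ($ignore, $lines) = @_;
    return @$lines unless @$ignore;                 # nothing to ignore
    my $re = join '|', map { quotemeta } @$ignore;  # e.g. ERROR1|ERROR2|ERROR3
    return grep { !/$re/ } @$lines;
}

if (@ARGV == 2) {
    my ($ignore_file, $log_file) = @ARGV;
    open my $ifh, '<', $ignore_file or die "open $ignore_file: $!\n";
    chomp(my @ignore = <$ifh>);
    open my $lfh, '<', $log_file or die "open $log_file: $!\n";
    print filter_lines(\@ignore, [<$lfh>]);
}
```

Run as `perl filter.pl IGNORE_ERROR.txt LOG.txt > filtered.log`. Because all the ignore patterns are applied in one pass, this avoids the original script's bug where each `sed` run rewrote the output file from scratch, leaving only the last pattern's result.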
I'm in Perth at the moment and won't get back to Adelaide until the end of September, but I think I have an actual example from a Unix Primer (circa 1990) identical to the original one presented... I'll get back to you in October (if I remember)...
Obviously limitations exist when using grave accents... the user should be aware of them or suffer the consequences.
Either that, or had the intuitive sense to know when using it is a bad idea.
I've answered several threads on this forum about mysterious problems with the "for x in `cat foo`" design where data had been mysteriously lost or ignored. They hit the limit, of course -- the limit's pretty small on some systems.
Agreed. Unfortunately, they can get away with it for quite a long time until the habit becomes ingrained. Most people who use it don't seem to understand the implications, namely, the backticks have to finish first -- they seem to think of them like pipes. I saw a poster suggest using it to process data already known to be a 40-gig flatfile...
When nothing is known about the size of the file, `cat foo` should be discouraged, I feel. Even for small files, I see little reason beyond habit to use it...