input="data.txt"
while IFS= read -r var
do
    # split each data.txt line at the "=" into search and replace parts
    search=$(printf '%s\n' "$var" | awk -F'=' '{print $1}')
    replace=$(printf '%s\n' "$var" | awk -F'=' '{print $2}')
    find "/tmp/config" -type f -exec grep -l "$search" {} + |
    while IFS= read -r file
    do
        if sed -e "s#$search#$replace#g" "$file" > /tmp/tmpfile.tmp
        then
            mv /tmp/tmpfile.tmp "$file"
            printf "Modified: %s\n" "$file"
        fi
    done
done < "$input"
rm -f /tmp/tmpfile.tmp
srstring.sh reads data.txt line by line and, in any file found under the /tmp/config folder, replaces the left-side value of each search=replace pair with the right-side value.
Now, my requirement is: rather than replacing and saving the values in the same file, I would like to leave the original files untouched and instead save the replaced file as <original-filename>_tmp.replace.
So, after running srstring.sh, can you let me know how I can easily tweak my code to achieve this? Also, it would be great if it runs on Linux as well as Solaris.
I don't want to restrict it to these three: "white|red|blue".
The data.txt may contain as many replace strings as we like.
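For what it's worth, one way the loop could be tweaked (a sketch under my own assumptions, not a definitive fix): copy each file to <file>_tmp.replace first, then apply every search=replace pair from data.txt to that copy, so later entries build on earlier ones and the original is never touched. The demo below uses a throwaway directory and made-up sample data instead of /tmp/config.

```shell
# Sketch: leave originals untouched; accumulate all replacements from
# data.txt into <file>_tmp.replace. Sandboxed demo paths, not /tmp/config.
dir=$(mktemp -d)
printf 'hello white and red\n' > "$dir/conf1"
input="$dir/data.txt"
printf 'white=black\nred=green\n' > "$input"

find "$dir" -type f ! -name 'data.txt' ! -name '*_tmp.replace' |
while IFS= read -r file
do
    out="${file}_tmp.replace"
    cp "$file" "$out"                    # start from the untouched original
    while IFS= read -r line
    do
        search=${line%%=*}               # left side of the first "="
        replace=${line#*=}               # right side
        sed "s#$search#$replace#g" "$out" > "$out.new" &&
            mv "$out.new" "$out"         # apply each pair cumulatively
    done < "$input"
done
cat "$dir/conf1_tmp.replace"             # hello black and green
```

The parameter expansions `${line%%=*}` and `${line#*=}` are POSIX, so this should behave the same on Linux and a Solaris POSIX shell.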
---------- Post updated at 12:55 PM ---------- Previous update was at 12:34 PM ----------
How will this work?
sed -f data.sed "$file" > "$file"_tmp.replace
It will work for the first string replacement.
But for the next string replacement read from data.txt, it will not update "$file"_tmp.replace; it will take "$file" as the input again and overwrite the first replaced string.
So, it needs to update the same "$file"_tmp.replace for each entry it picks from data.txt. Note: data.txt can have as many replace entries as the user likes to feed in.
No, it will do all replacements at once; you do not need an outer loop for each pattern.
A drawback is that it will produce a new file even if nothing was changed.
An improvement is to also produce a grep file:
sed -n 's/..*=.*/s=&=g/p' < data.txt > data.sed
sed -n 's/=.*//p' < data.txt > data.grep
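To illustrate what those two commands generate (the sample data.txt here is my own, not from the thread):

```shell
# Show the sed script and grep pattern file built from a sample data.txt.
dir=$(mktemp -d)
printf 'white=black\nred=green\n' > "$dir/data.txt"
sed -n 's/..*=.*/s=&=g/p' < "$dir/data.txt" > "$dir/data.sed"
sed -n 's/=.*//p'         < "$dir/data.txt" > "$dir/data.grep"
cat "$dir/data.sed"    # a sed script: s=white=black=g and s=red=green=g
cat "$dir/data.grep"   # the search strings alone: white and red
```

Note that `=` doubles as the s-command delimiter in the generated script, so a replacement value containing `=` would break the generated sed command.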
find "/tmp/config" -type f -print |
while IFS= read -r file
do
    if grep -f data.grep "$file" >/dev/null
    then
        sed -f data.sed "$file" > "$file"_tmp.replace
    fi
done
---------- Post updated at 13:40 ---------- Previous update was at 13:28 ----------
In case /usr/bin/grep does not take the -f option, use /usr/xpg4/bin/grep.
As stated before in post#3, the grep string should be constructed from the data file. This is a bit tricky, and I'm not sure it will run on all versions of awk. So, don't complain if it doesn't; give it a try and come back with the results.
awk '
NR == FNR {if (NR == 1) ARGV[ARGC++] = XFN = FILENAME
RP[$1] = $2
next
}
!L {for (r in RP) P = P "|" r
P = "(" substr (P, 2) ")"
cmd = "find /tmp/config -type f -exec grep -lE \"" P "[^=]\" {} + "
while (cmd | getline X) ARGV[ARGC++] = X
L = 1
}
FNR == 1 {NFN = FILENAME ".tmp.replace"
AFN = FILENAME
}
AFN == XFN {next}
{for (r in RP) gsub (r, RP[r])
print > NFN
}
' FS="=" data.txt
This works!! But can you tell me how I can add an echo to print the files that got modified? I can't put the echo after the sed, as it logs the same file name multiple times in case the string to be replaced is found in the file multiple times.
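One possible tweak (my own sketch, building on the grep/sed loop from post#3 rather than on the awk program): since that loop already tests each file with grep before running sed, a single printf inside the if branch reports each modified file exactly once, no matter how many matches the file contains. The sample files and strings below are made up for the demo.

```shell
# Print each file name once: the grep test fires once per file, not per match.
dir=$(mktemp -d)
printf 'white box, white cat\n' > "$dir/a.conf"
printf 'nothing to do here\n'   > "$dir/b.conf"
printf 'white=black\n'          > "$dir/data.txt"
sed -n 's/..*=.*/s=&=g/p' < "$dir/data.txt" > "$dir/data.sed"
sed -n 's/=.*//p'         < "$dir/data.txt" > "$dir/data.grep"

find "$dir" -type f ! -name 'data.*' ! -name '*.log' ! -name '*_tmp.replace' |
while IFS= read -r file
do
    if grep -f "$dir/data.grep" "$file" >/dev/null
    then
        sed -f "$dir/data.sed" "$file" > "${file}_tmp.replace"
        printf 'Modified: %s\n' "$file" >> "$dir/modified.log"
    fi
done
cat "$dir/modified.log"    # lists a.conf once; b.conf is absent
```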
That depends on the size of the directories being processed by find , the find file selection criteria, the underlying filesystem type, and the way that updated files are copied or moved back to the original files.
With:
find . -type f | while read -r path
do process "$path" > "$path.tmp"
[ $? -eq 0 ] && mv "$path.tmp" "$path"
done
it is theoretically possible for process to be asked to process a given filename more than once and to also be asked to process whatever that filename was with .tmp appended (possibly also more than once).
However, with:
find . -type f ! -name '*.tmp' | while read -r path
do process "$path" > "$path.tmp"
[ $? -eq 0 ] && cp "$path.tmp" "$path"
rm -f "$path.tmp"
done
you are correct. A given filename should only be processed once (and the temp files will never be passed to process).
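A quick sandboxed check of that second loop (the `process` function here is a stand-in filter I made up for the demo, and the sample file is my own):

```shell
# The ! -name '*.tmp' guard keeps temp files out of the work list,
# and each original is restored from its processed copy exactly once.
dir=$(mktemp -d)
printf 'white\n' > "$dir/a"

process() { sed 's/white/black/' "$1"; }   # stand-in for the real filter

find "$dir" -type f ! -name '*.tmp' | while read -r path
do process "$path" > "$path.tmp"
   [ $? -eq 0 ] && cp "$path.tmp" "$path"
   rm -f "$path.tmp"
done
cat "$dir/a"           # black: updated in place, no .tmp files left behind
```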