Performing extractions on a web source page

I have downloaded a web page's source to a file. I then egrep for a single word to extract the line containing it into a second file.
Next I cat the second file through sed, removing everything before one word and everything after a second word, to capture the desired phrase.
This did not work. I used vi to verify that the two search words do exist in the extracted (second) file.
As a test, I inserted XXXX and YYYY into the second file and used those as the sed extraction terms instead of the words I originally searched on. That works.
Can someone explain or suggest what might be in the extracted file (#2) that is interfering with my search? Here is how I did it:

egrep "seconds" /tmp/wget_saved_file > /tmp/tstfil

[used vi to insert XXXX & YYYY into tstfil]

cat /tmp/tstfil | sed  's/.*XXXX//;s/YYYY.*$//'   #<<--works
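
For reference, the failing version had exactly the same shape; WORD1 and WORD2 below are placeholders standing in for the real search terms:

cat /tmp/tstfil | sed 's/.*WORD1//;s/WORD2.*$//'   #<<--did not work (WORD1/WORD2 are placeholders)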

Is there perhaps something about text taken from a web source file that prevents it from being used as a search term?

It would be helpful to see the actual line you extract with grep, and the original sed command.

I can only guess, but it sounds like there might be two instances of one of the words. Consider this line of text:

This text is a sample for the use of pattern matching in text

If the two words you want to capture text between are text and pattern, and you apply the sed substitutions shown below, you'll get nothing on the output:

s/.*text//; s/pattern.*//

This produces nothing because the .* in a sed regular expression is greedy: .*text matches everything up to the last occurrence of text, which here sits at the end of the line, so the first substitution deletes the whole line.
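
You can verify this with a quick test, feeding the sample line through the same two substitutions:

echo 'This text is a sample for the use of pattern matching in text' | sed 's/.*text//; s/pattern.*//'   # prints only an empty line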

If you are certain that the second word appears only once on the line, reversing the order of the two substitutions will help, but it's not foolproof.
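
With the same sample line, removing the pattern portion first leaves only one instance of text for the second substitution to anchor on:

echo 'This text is a sample for the use of pattern matching in text' | sed 's/pattern.*//; s/.*text//'   # prints ' is a sample for the use of ' (note the leading space)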

And you don't need to cat a file into sed. sed can read the file itself, which saves a process:

sed 's/foo/bar/' input-file >new-file
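
Applied to the commands in this thread, that becomes:

sed 's/.*XXXX//;s/YYYY.*$//' /tmp/tstfil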