Removing all lines prior to the last pattern in a file/stream

Hi all,

I didn't find anything that specifically answers this after searching for a bit, so please forgive me if this has been covered before.

I'm looking to delete all lines prior to the last occurrence of a string in a file or stream, from within a shell script (bash).

A bit of background:
I'm running a command to pull the contents of a circular buffer, so I get whatever happens to be there. For each command I run, I'm only interested in the last marker string and the contents after it. If I can operate on the stream as it comes out, before it's written to a file, that'd be ideal. I'd also like to use sed if possible, since I use it in other areas of the script and I want to keep it all in-line.

A pseudo-example of the output I might get is below:

# run_async_command "mark command start test"
# pull_serial_log
^@CLI> [20372:53.558][20372:59.504] command^@ start^@ test^@
command output 1
command output 1
[...]
command output 1
^@CLI> [20372:53.558][20372:59.504] command^@ some^@ other^@ test^@
command output 2
command output 2
[...]
command output 2
^@CLI> [20372:53.558][20372:59.504] command^@ start^@ test^@
command output 3
command output 3
[...]
command output 3
#

In this case, all I'd want to see is the last instance of "command^@ start^@ test^@" followed by its output through the end of the stream/file. Note that the "^@" characters here represent NUL characters. The command output is an arbitrary number of lines - it may be 10 lines or it may be 1000 - and has no specific structure. Here's what I would like to see:

# run_async_command "mark command start test"
# pull_serial_log | sed -n 'MAGIC_GOES_HERE'
^@CLI> [20372:53.558][20372:59.504] command^@ start^@ test^@
command output 3
command output 3
[...]
command output 3
#

What I plan on doing is running run_async_command, then dumping the buffer and stripping out only the last command's output, then running run_async_command again with a different command and stripping out only that command's output, and so on.

Please let me know if anyone wants more details.

Did a quick search on reverse grep, and this should do the trick if you customize it to your problem:

tac /some/log.log | grep -m1 searchword | cut -d ' ' -f7

Seems I'm not allowed to post the link to the article since I'm new here, so you'll have to search for: reverse grep tac, to find it yourself.

My solution would be to scan the output twice -- so that I can identify which occurrence of the pattern is the last one. Something like this:

lastOccur=$(sed -n '/pattern/=' output | tail -1)  # prints the line number of every match; keep the last
sed -n "${lastOccur},\$p" output                   # print from that line to the end of the file
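For example, on a small stand-in file (hypothetical name and contents, just to show the two passes):

```shell
printf '%s\n' 'pattern' 'a' 'pattern' 'b' 'c' > /tmp/twopass_demo

# Pass 1: '=' prints the line number of every match; tail keeps the last.
lastOccur=$(sed -n '/pattern/=' /tmp/twopass_demo | tail -1)

# Pass 2: print from that line number through the end of the file.
result=$(sed -n "${lastOccur},\$p" /tmp/twopass_demo)
printf '%s\n' "$result"

rm -f /tmp/twopass_demo
```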

---------- Post updated at 11:30 AM ---------- Previous update was at 11:27 AM ----------

Ahhh... 'tac' ... nice one. Then you can do:

tac output | sed '/pattern/q' | tac

(The 'q' makes sed quit right after printing the first match in the reversed stream, so the marker line itself is kept - a "/pattern/,$ d" there would drop it.)
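For a quick sanity check, the tac round-trip can be tried on a stand-in log (plain text in place of the NULs; `sed '/pattern/q'` quits right after printing the first match in the reversed stream, so the marker line is kept):

```shell
# Stand-in log with two occurrences of the marker line.
printf '%s\n' 'CLI> command start test' 'out 1' \
              'CLI> command other test' 'out 2' \
              'CLI> command start test' 'out 3' > /tmp/tac_demo

# tac reverses the file, sed prints up to and including the first
# (i.e. originally last) marker, and the second tac restores order.
result=$(tac /tmp/tac_demo | sed '/command start test/q' | tac)
printf '%s\n' "$result"

rm -f /tmp/tac_demo
```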

How about this?

 awk '/command\^@ start\^@ test\^@/,/^command/{p++;a[p]=$0}{a[p+1]=a[p+1] RS $0}END{print a[p-1] a[p+1]}' file
^@CLI> [20372:53.558][20372:59.504] command^@ start^@ test^@
command output 3
command output 3
[...]
command output 3
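If it helps, a simpler variant of the same idea is to reset a buffer at every marker line, so only the last stretch survives to END (a sketch on stand-in data, plain text in place of the NULs):

```shell
# At each marker, empty the buffer; append every line; print what's left.
result=$(printf '%s\n' 'CLI> command start test' 'out 1' \
                       'CLI> command other test' 'out 2' \
                       'CLI> command start test' 'out 3' |
  awk '/command start test/ {buf = ""} {buf = buf $0 RS} END {printf "%s", buf}')
printf '%s\n' "$result"
```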

I think I like this one best so far - I'll fiddle with it in the morning and see if I can get it in a function so I can use it like so:

pull_serial_log | get_last "command start test" >> my_output_file
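A minimal sketch of such a get_last function, assuming GNU tac is available and that the search text appears literally on the marker line (note the pattern is spliced into the sed script unescaped, so regex metacharacters and slashes would need escaping):

```shell
# get_last PATTERN -- print from the last line of stdin matching PATTERN
# through the end of the stream (tac has to buffer the whole input).
get_last() {
    tac | sed "/$1/q" | tac
}

# Example on stand-in data with two occurrences of the marker:
result=$(printf '%s\n' 'marker A' 'one' 'marker B' 'two' 'marker A' 'three' |
    get_last 'marker A')
printf '%s\n' "$result"
```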

Thanks to all who have responded so far.