Zabbix item for last line of a log file

Dear all,

Zabbix version     : 2.4 (yes, I know, upgrading soon - honest)
Server OS version  : CentOS 6, 64-bit (CentOS 7 with the Zabbix upgrade)

I've got a large log file that I would like an external process to read. It's basically the same as reading an item value on a web page. I have tried to create an item for

vfs.file.contents[/path/to/log]

but it fails because the log is too large. I know that there are log item types and vfs.file.regexp, where I can specify the start and end lines, but since it's a log file I won't know the line count to set the start line value. I'm going round in circles with the documentation, probably because I'm misunderstanding what it is telling me. :eek:
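
For reference, the key I was experimenting with looks something like this (quoting the syntax from memory, so the parameter order may not be exact; the path and line placeholders are obviously made up):

vfs.file.regexp[/path/to/log,.*,,<start line>,<end line>]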

It is likely that a key of

log[/path/to/log,something,or,other]

may help. I don't want to keep a lot, just the last line every now and then. I've tried with

log[/path/to/log,,,,skip]

and other variations without success. My process to pick it up just comes back with "unknown item". Trying to set the start line to $ or -1 doesn't seem to help me either.

Can anyone point me in the right direction to get the last line of a simple text file recorded as a Zabbix item? I don't want to graph it, because it will be text such as a timestamp and a message; I just want to be able to use Zabbix to collect it. The item would be in a template, and I can write something to read the items in, rather than having to set up SSH keys all over the place and collect the values by shell script.

I'd rather not set up all sorts of extra spaghetti each time we create a new server: just give it the Zabbix template and extend the list of servers in my script that knows which servers should have the log file. We need to keep track of whether processing is running normally, plus the last run timestamp, status, messages etc. The process that creates the log file is not available to us, otherwise I would have it "append log" all messages as it does now and also "overwrite log" each message to a separate file, so that there would be just a single record in the second file.

Pass the dunce's hat. :o

Thanks, in advance,
Robin

When I want the last line of a file I use the very simple tail command.
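
Something along these lines (just standard coreutils, nothing exotic; the path is a placeholder):

tail -n 1 /path/to/log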

:slight_smile:

Maybe I am too simple?

What am I missing from your requirement?

Maybe I wasn't clear in the initial post. Sorry about that. The process producing the display doesn't have direct access to the file.

We have multiple CentOS servers running various parts of our application testing programme, and I'm being asked to put a display together which shows how they're all doing, batch state etc. We destroy and re-create them all the time, but the Zabbix agent is in our VMware template already, so that is the mechanism of choice to get the data. It shouldn't be too hard, but I'm stuck.

The server showing the display doesn't have direct access to the log file (no NFS, SMB or password-less SSH connection is defined). If we did have that, I could simply run tail -1 /path/to/log, but because we are frequently rebuilding these servers I'd rather not add another few steps and more complexity between them. There's lots to do already when we build them. With Zabbix I can get the file contents, but only for a small file, and the log files I want to read are very large.

We don't dare enable Zabbix remote commands because they are an entry point for attacking the server. I know I can define the Zabbix item type as log with a key of log[/path/to/file], but then I don't know how to get at the content/last line.

The files will get quite large, but I can set the history low, because after that we just want the current position.

I'm happy to define the type as log if I could get that to work, but all I get back is a large number. :confused:
Naturally anything I can define in Zabbix will be added to the existing template so I don't have to set that up every time too.

I'm sure I'm missing something simple,
Robin (feeling like a fool :o)

The following is probably more of a workaround than a solution:

You said that you have no problems with small files, only with big ones. In addition, I gather from your wording that you don't need real-time exactness, because you will poll the data only once in a while. So, why not set up a small cron job that copies the last line of the big log into a small file, overwriting the old one each time, like this sketch script:

tail -n 1 /path/to/big.log > /path/to/last.logline

Then you can query this new file with your Zabbix methods, because it always contains one line only.
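
A minimal sketch of the crontab entry (assuming once a minute is frequent enough and the paths are adjusted to your system):

* * * * * tail -n 1 /path/to/big.log > /path/to/last.logline 2>/dev/null

The Zabbix item could then just be the vfs.file.contents[/path/to/last.logline] key you already tried on the big file.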

I hope this helps.

bakunin

Got it at last!! It's quite convoluted, so why not share it and see if someone else might benefit or can suggest how to improve it:

  1. Create the Zabbix item in a template for the file, as type log with key logrt[/path/to/log], a history of 1 day and a refresh of 3 seconds (I don't want old logs, and I need it reasonably up to date without hammering the servers). The logrt key allows it to cope with logs that get rotated.
  2. Assign the template to the required hosts, although in my case the template was already on several servers so it appears in various places straight away.
  3. Wait a few minutes for the data to be picked up, ensuring that some new data is actually written to the log.
  4. Determine the Zabbix item number for the log file on the particular host I'm interested in.
  5. Use a web request to get the stored log data from the Zabbix server (one way to script steps 4 to 7 is sketched after this list).
  6. Get the first record (stored so the newest is at the top).
  7. Drop the log reference number (the first field, tab-separated).
  8. Hey presto, I have the message that I can display.
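
In case it's useful, here is roughly how steps 4 to 7 could be scripted against the Zabbix JSON-RPC API with curl and jq. I'm not claiming this is exactly what we run (our own tooling does the item lookup), and the URL, credentials, host name and the jq dependency below are all placeholders/assumptions, but it shows the shape of it:

#!/bin/bash
# Sketch only: look up the logrt item and pull back the newest stored log line.
ZBX_URL="http://zabbix.example.com/zabbix/api_jsonrpc.php"

# Log in and capture the auth token
AUTH=$(curl -s -H 'Content-Type: application/json' -d '{
  "jsonrpc": "2.0", "method": "user.login", "id": 1,
  "params": {"user": "apiuser", "password": "apipass"}
}' "$ZBX_URL" | jq -r '.result')

# Step 4: find the item id for the logrt key on the host we care about
ITEMID=$(curl -s -H 'Content-Type: application/json' -d '{
  "jsonrpc": "2.0", "method": "item.get", "id": 2, "auth": "'"$AUTH"'",
  "params": {"output": ["itemid"], "host": "my-test-server",
             "filter": {"key_": "logrt[/path/to/log]"}}
}' "$ZBX_URL" | jq -r '.result[0].itemid')

# Steps 5-7: fetch the newest stored value (history type 2 = log items)
curl -s -H 'Content-Type: application/json' -d '{
  "jsonrpc": "2.0", "method": "history.get", "id": 3, "auth": "'"$AUTH"'",
  "params": {"output": "extend", "history": 2, "itemids": ["'"$ITEMID"'"],
             "sortfield": "clock", "sortorder": "DESC", "limit": 1}
}' "$ZBX_URL" | jq -r '.result[0].value'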

Quite a palaver. I'm also not sure whether we have bespoke tools that do things like working out the item number, or whether having a web server that can share the log data is part of a default installation.

I hope that this helps someone. Does anyone think I need to add more detail on each step?

Robin

Not now, but be prepared to be pestered with questions, as in my new project I will probably have to work more with Zabbix. :-))

Great find, though, and thanks for sharing.

bakunin