multiple instances of syslogd - is it possible?

I would like to start up multiple instances of syslog daemon. I am having a little difficulty. Is this at all possible?

I have separate syslog.conf1.... syslog.conf5 files.
I have linked the daemon to separate files syslogd1 ... syslogd5
I have arranged the rc2.d start/stop scripts for multiple instances

However, they don't appear to be running.

I have placed the daemon in debug mode, and I guess I got my answer, unless someone knows how to do some magic.

logerror(1): syslogd: syslogd pid 7213 already running. Cannot start another syslogd pid 2974

And you want multiple syslogs running why???

On startup, syslogd puts its PID into a file - on Solaris it's /etc/syslog.pid. Since each instance you try to start is going to look for this file and refuse to start if another instance is already running, I don't see how you can get around it (I don't see any startup options to skip the check or to change the file).
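The check is simple enough to mimic in shell. A rough sketch of the logic, with an illustrative path and messages (the real daemon does this internally in C, against /etc/syslog.pid):

```shell
#!/bin/sh
# Sketch of the pid-file lock a daemon like syslogd performs at startup.
# /tmp path and wording are illustrative only.

PIDFILE=/tmp/demo_syslog.pid

# First "instance": record our own pid in the lock file.
echo $$ > "$PIDFILE"

# Second "instance": read the file and see whether that process is alive.
# kill -0 sends no signal; it only tests whether the pid exists.
oldpid=`cat "$PIDFILE"`
if kill -0 "$oldpid" 2>/dev/null; then
    echo "syslogd pid $oldpid already running. Cannot start another."
else
    echo "stale pid file; safe to start"
    echo $$ > "$PIDFILE"
fi
rm -f "$PIDFILE"
```

Since the first "instance" here is the script itself, the second check always finds a live process and refuses - which is exactly the logerror message above.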

If you are looking to output to multiple log files or to other systems, you can do this with one syslog process. Provide more information on what you are trying to accomplish.
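For instance, one syslogd can fan the facilities out to separate files with conf lines like these (file paths are illustrative; note that Solaris syslog.conf requires a tab, not spaces, between the selector and the action):

```
# /etc/syslog.conf -- one daemon, several destinations
local4.debug	/var/log/auth-servers.log
local5.debug	/var/log/session-servers.log
local6.debug	/var/log/acct-servers.log
# and/or forward a copy to another loghost
local4.debug	@backup-loghost
```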

Thank you.

However, I fully understand the syslog implementation. I am just seeking a resolution to a problem I have, and I have opened a case with Sun. The reason is simple: as with many other daemons, such as Tuxedo and Orbix, you can run separate instances within separate environments.

I need to do this to try to distribute the load. I believe syslogd is overtaxed; roughly 1% of my messages are getting corrupted.

thanks again.

Are these locally generated messages? Or do they arrive over the network?

Over the network. I have 24 authentication servers, 4 session management servers, and 9 accounting servers feeding a single syslog server. I use local4, local5, and local6 files. These files get really big; lots of data.

That's the problem. The syslog protocol is based on UDP, which is inherently unreliable. If packets get trashed in transit, the network layers make no effort to correct them, and even delivery of the packets is not guaranteed. Sorry for the bad news.

Makes sense, thanks - I did not even think to look at that layer.

Is there a way to verify it is getting trashed over the network, and not by the daemon process itself being overtaxed?

I was hoping to be able to prove that out - but it does not look possible now.

thanks again.

Think about it... what else could possibly happen when all 37 servers transmit UDP packets that attempt to arrive at the same time? But you could put protocol sniffers on all 37 servers and log every syslogd packet they send. Then sniff the destination server, log every *intact* syslogd packet that arrives, and compare the lists.
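Once each capture has been reduced to one message identifier per line, the comparison itself is trivial with comm. A sketch, using demo data in place of real capture extracts (sent.txt and received.txt are hypothetical file names):

```shell
#!/bin/sh
# Compare what the clients sent against what the loghost received.
# Demo data stands in for the sniffer extracts: five messages sent,
# one (msg3) lost in transit.
printf 'msg1\nmsg2\nmsg3\nmsg4\nmsg5\n' > sent.txt
printf 'msg1\nmsg2\nmsg4\nmsg5\n'       > received.txt

# comm requires sorted input.
sort sent.txt     > sent.sorted
sort received.txt > received.sorted

# comm -23 prints lines present only in the first file,
# i.e. messages that were sent but never arrived.
comm -23 sent.sorted received.sorted > lost.txt

echo "sent:     `wc -l < sent.sorted`"
echo "received: `wc -l < received.sorted`"
echo "lost messages:"
cat lost.txt
```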

Bear in mind, though, that the packets may not even make it to the network. If you throw UDP packets at an interface faster than it can transmit them, it will queue some up, then start discarding them. The same thing can happen on the other side: if syslogd does not read incoming UDP packets fast enough, they get tossed. I'm not sure, but I think the UDP error counts get incremented by either condition. "netstat -s -P udp" will show those counts.

It might be interesting to use a sniffer on your loghost (snoop, tcpdump, snort) to watch the UDP traffic. You should be able to spot any malformed or incomplete packets, as well as check the previously mentioned netstat counters.
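Something along these lines, run as root on the loghost (the interface name hme0 is just an example; substitute whatever your box uses):

```
# Solaris: capture syslog traffic (UDP port 514) to a file for later analysis
snoop -o /tmp/syslog.cap udp port 514

# or the tcpdump equivalent
tcpdump -i hme0 -w /tmp/syslog.cap udp port 514

# and check the UDP drop/error counters before and after
netstat -s -P udp
```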

If that's not the case, maybe you could write a separate syslogd app in Perl. I know nothing about your setup, but some of the messages could be sent to a different port on your loghost (on which the new Perl syslogd is listening).

Also, maybe you could benchmark your loghost. Shut down the network interfaces (so you aren't getting any network traffic from the syslog clients), then run a simple script which bombards syslogd with local messages. Count the number sent and compare it to the actual number logged.

#!/bin/sh

start=`date "+%M:%S"`
x=0

trap 'echo "";
echo "started at $start";
echo "finished at `date "+%M:%S"`";
echo "sent $x messages";
echo "found `grep TEST_MESSAGE /var/log/messages | wc -l` messages";
exit' 2

while :    # loop forever; Ctrl-C fires the trap above
do
logger -p syslog.notice TEST_MESSAGE_$x
x=`expr $x + 1`
echo sent $x
done

Running this script, my FreeBSD machine logged about 3000 messages in 1 minute. None were lost. If you run the script more than once, you will have to change the test message string for the count to be accurate (leftover TEST_MESSAGE lines from earlier runs would inflate the grep count).

If you truly want to beat your machine up, we can run a forking Perl script. I'll have to get back to you on that.
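In the meantime, the same idea works in plain sh with background jobs: several senders firing bursts in parallel. A sketch, with the log as a plain local file so the count is easy to verify - swap the echo for `logger -p syslog.notice ...` to exercise syslogd itself:

```shell
#!/bin/sh
# Spawn several background "clients", each firing a burst of messages,
# then count how many actually landed. The log here is a plain file;
# replace the echo with `logger -p syslog.notice ...` to hit syslogd.

LOG=/tmp/bombard.log
WORKERS=4
MSGS=250
: > "$LOG"    # truncate any previous run

w=0
while [ $w -lt $WORKERS ]; do
    (
        i=0
        while [ $i -lt $MSGS ]; do
            echo "TEST_MESSAGE worker=$w seq=$i" >> "$LOG"
            i=`expr $i + 1`
        done
    ) &      # each worker runs in its own background subshell
    w=`expr $w + 1`
done
wait         # block until all workers finish

echo "sent `expr $WORKERS \* $MSGS`, logged `grep -c TEST_MESSAGE $LOG`"
```

With a real syslogd on the receiving end, any gap between "sent" and "logged" is loss inside the daemon, since no network is involved.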