Trying to locate a script

Hello,
We have a process on our Red Hat Linux machine that creates symbolic links and moves files around from domain to domain. The problem is that the programmer left a long time ago and nobody knows what the program is called, where it lives, or how its scheduling is set up. It definitely runs every day. I've been trying to track it down for hours now; we have no documentation, notes, or emails left after all these years, and I really need to change the script. I know it exists for a fact.

All the links are created by the root user, so I looked through all of root's crontab entries but couldn't find anything relevant. We also have autosys installed, and there too I found nothing that helped my search. I checked the applications installed on the machine, since they run their own servers and scheduled processes (scripts can be kicked off that way), but I'm fairly convinced it's all on the Linux side, knowing the developer was an expert shell programmer.

To make things easier, our OS support (I'm on the application side, although with privileged access) say they cannot help us.

Does anyone have the slightest idea what else I could check? I've run out of ideas. Are all crontab entries supposed to be stored under /var/spool/cron, one file per user? I checked anacron and at as well. As for autosys, we have all the jobs listed, and I verified that none of the scripts call external programs.
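
For reference, this is roughly how I went through the cron side (paths assume a standard Red Hat layout; adjust as needed):

```shell
# Per-user crontabs, prefixed with the owning user.
for u in $(cut -d: -f1 /etc/passwd); do
    crontab -l -u "$u" 2>/dev/null | sed "s|^|$u: |"
done

# System-wide locations that don't show up in "crontab -l".
cat /etc/crontab 2>/dev/null
ls /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly 2>/dev/null
```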

I'm stuck with this; is anyone aware of any other way to find out that kind of information?

Thanks in advance

What are the domains you are talking about?
Do you use any other scheduler?
I don't know about RH specifically (I don't have a box anymore...), but if I had this issue I would check the connection logs and all files modified around that specific time, and see whether anything turns up.
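
Along those lines, GNU find can list everything modified in a given time window; something like this, run right after the job fires (the path and the 05:25-05:35 window are placeholders), would narrow down what the mystery process touches:

```shell
# Files and symlinks modified in a ten-minute window around the run.
# /data is a placeholder for the affected filesets.
find /data \( -type f -o -type l \) -newermt "today 05:25" ! -newermt "today 05:35" -ls 2>/dev/null
```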

Have you tried enabling auditing? It's usually installed by default on Red Hat, just needs to be configured and enabled. There are plenty of tutorials online on how to set it up.
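
If you do go that route, note that the audit records give you more than user and time: each event includes the executable, PID, and parent PID of the process that created the link, which lets you walk back to the parent script. A sketch of a watch rule (the path and key name are placeholders for one of your affected directories), suitable for /etc/audit/rules.d/ or a one-off auditctl call:

```
# Watch an affected directory for writes and attribute changes;
# symlink creation is recorded here. Path and key are placeholders.
-w /data/domainA -p wa -k domain-links
```

After the next run, `ausearch -k domain-links -i` prints the matching records with exe=, comm=, pid= and ppid= fields, so the parent script's name should fall out directly.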

Alternatively, you could install a tool such as Tripwire or Splunk to monitor filesystem changes. I believe both have open-source versions if you go that route.

We don't use any other scheduler that I know of. As for domains, I meant filesets; we call them "domains", hence my initial wording.

I looked into auditing and such; we have no audit rules currently on the machine, but auditd itself is enabled. Before proceeding, though, I'd like to assess whether it's really worth a go, mainly because my OS maintenance team won't do it for me; after all, I already know the user (root) and the time. I'm afraid auditing will just tell me that it was this user, at that specific time, using an "ln" command (to create the links), which is not a stellar amount of information compared to what I already know.

I've got our best expert on this now and he wasn't able to find the program either. I'm looking at the possibility of a program executed from another machine, even though I still can't find any relevant scheduled entry anywhere. We have backup processes between machines, so I'm looking into that, but still can't find anything.
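
In case it helps anyone following along, this is roughly how I'm checking for inbound connections around the run time (log locations assume a standard Red Hat box):

```shell
# Recent root logins with full timestamps; anything around 05:30 is suspect.
last -F root | head -20

# sshd logs accepted connections to /var/log/secure on Red Hat;
# the regex matches the 05:25-05:35 window.
grep "Accepted" /var/log/secure* 2>/dev/null | grep -E "05:(2[5-9]|3[0-5])"
```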

A question, though: since we are talking about different filesets (my knowledge is pretty limited at that level), is there any way two filesets could be set up to "mirror" each other to some extent, so that files and/or links could theoretically appear on one end without any program or script walking through all the amendments every day? Since we can't find said "script" anywhere, I'm starting to wonder whether the question is legitimate, i.e. whether something other than a script is doing this. The only thing is, the file creation happens at a very specific time, which rather clashes with the idea of files appearing as they are created. I don't think it makes sense, but who knows.

Thanks for the replies so far

It could be a long-running process that is loaded at boot time.
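
A quick way to check that is to sort processes by start time and look at anything root owns that has been up since boot; on an older Red Hat box the SysV hooks are worth a look too (commands assume procps and sysvinit-era tooling):

```shell
# Long-lived processes, oldest first; a boot-time daemon's elapsed time
# (etime) will be close to the machine's uptime.
ps -eo pid,ppid,user,lstart,etime,args --sort=start_time | head -20

# Services enabled at boot, plus anything tucked into rc.local.
chkconfig --list 2>/dev/null | grep ":on"
grep -v "^#" /etc/rc.d/rc.local 2>/dev/null
```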

Enabling process accounting (psacct) may also help find the script and/or program performing the work. Once it's installed and enabled, you can use the lastcomm command to list every command run by every user, or narrow it down to a specific user. What's nice is that if the process is a script, it shows all the commands the script runs in addition to the script's name.

cat testscript.sh
#!/bin/bash

ln -sf /root/testfile /tmp/testfile

./testscript.sh

lastcomm --user root | head
testscript.sh     root     pts/14     0.00 secs Thu Sep 19 09:17
ln                root     pts/14     0.00 secs Thu Sep 19 09:17

Thanks, I'll use that and check the logs tomorrow (the process runs at 05:30), or on Monday, to see if it gives me the name of the process. Awesome idea. I'll let you know if it gave me the answer I'm looking for. Just hoping that having process accounting enabled won't eat up too much CPU on the machine.

Hope it works out for you.

No, it's surprisingly not a resource hog. Should be fine.