Currently, we have a Perl script from a third-party vendor that is generating core dumps. It has been reported to the vendor. We can't turn off the script, as it generates a diagnostic file that's required, so for the moment we have to let it keep running.
I wish I could say the vendor is dependable enough to help us clean up the core dumps, but unfortunately they aren't, hence I am here asking for help.
Is it possible to check that a core dump belongs to this script, i.e. was generated by this script, before removing it? I can do a non-recursive find like the one below and change the ls -l to rm.
But I am a bit concerned that I may remove core files belonging to other processes that we still need. Also, how do I change the pattern so that it finds only core.<numbers-only>?
-name "core.[0-9]*" is probably good enough. No escape for the dot is needed in a glob.
100% correct is \( -name "core.?*" \! -name "core.*[!0-9]*" \)
The ? matches exactly one character, so ?* means at least one character.
[!0-9] matches a character that is not a digit.
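As a quick sketch of that glob in action (the scratch directory and sample filenames are made up purely for illustration):

```shell
# Scratch directory and sample names are illustrative only.
dir=$(mktemp -d)
cd "$dir"
touch core.12345 core.9 core.abc core.12x core

# Match "core." followed by at least one character, excluding any name
# that contains a non-digit after the dot -- i.e. core.<numbers-only>.
find . -maxdepth 1 \( -name "core.?*" ! -name "core.*[!0-9]*" \) -print | sort
# prints:
# ./core.12345
# ./core.9
```

core.abc and core.12x are excluded because each contains a non-digit after the dot, and plain core fails the "at least one character" test.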
Unlike the glob, the RE is not anchored, so it needs explicit anchors: -regex '^core\.[0-9]+$'
And the RE needs to escape the dot (an unescaped dot matches any single character).
-maxdepth is a "global" option, so it should come first (it does not depend on preceding tests).
I am not used to -regex, but after reading the man page and some examples on Stack Exchange:
it IS anchored - against the whole pathname!
That means one must use -regex '.*/core\.[0-9]+'
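Putting that together, a sketch of the whole-pathname form (the dump directory is a placeholder, and the unescaped + assumes GNU find's default Emacs-style regex syntax):

```shell
# /path/to/dumps is a placeholder; point it at the script's working directory.
# GNU find's -regex matches against the whole path it would print,
# hence the leading .*/ before the anchored core\.[0-9]+ part.
find /path/to/dumps -maxdepth 1 -regex '.*/core\.[0-9]+' -print
```

After eyeballing the printed list, -print can be swapped for -delete (or -exec rm -- {} +) once you trust the match.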
Thanks for all your suggestions.
For the time being, I've moved them to a temp directory and run strings on them one at a time :(. I wish there were a better way of doing this. I am trying to check whether I can dd a few bytes of the core file; maybe that will show the name of the program. Not sure if it would be quicker than running strings, though.
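Rather than strings or dd, file(1) will usually name the crashing program directly (on GNU/Linux it prints something like core file ... from 'perl /path/script.pl'). The loop below is only a sketch: the grep pattern "from 'perl" is an assumption about how the vendor's script shows up in that output, so it echoes instead of deleting until verified.

```shell
# Sketch: "from 'perl" is an assumed match for how the vendor's Perl
# script appears in file(1) output; check a real core dump first.
for c in core.[0-9]*; do
    [ -e "$c" ] || continue           # glob matched nothing
    if file "$c" | grep -q "from 'perl"; then
        echo "vendor core: $c"        # swap echo for: rm -- "$c"
    fi
done
```

This keeps cores from other processes untouched, since their file(1) output names a different executable.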
It's been fun trying out all the suggestions and regex experiments.
The vendor has since given instructions to disable said diagnostic script. I believe they've given up on it as well and will be deprecating it.