Opinions/Ideas/Suggestions needed

I'm currently developing a script to clean out certain directories based on age and name. Part of the assignment is to ensure that the cleaning of a directory is done under the user id of the owner (script is running as root). I have a few ideas on how to do this, but I'd like to hear your opinions/ideas/suggestions on these (maybe I've missed something):

  • Move the relevant parts into a separate script which gets called by su
  • Build a separate script in memory and pass it to su <uid> -c 'bash -c ...' (probably going to end in quoting ****)
  • As above, but write to a temporary location
  • ?

Ideally, I'd like a mechanism like setuid/seteuid to temporarily de-escalate privileges for a certain block, saving me the hassle of passing parameters between those scripts.
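There is no bash equivalent of seteuid() for a single block, but one common workaround is to serialize a shell function with declare -f and hand it to a shell started via su under the target user, so no separate script file or parameter passing is needed. A minimal sketch, assuming su is available; the function name and its body are illustrative placeholders:

```shell
#!/bin/bash
# Sketch: run one shell function under another user's UID without a
# separate on-disk script. The function body is a placeholder.

clean_dir() {
    # Placeholder for the real cleanup work; runs as the target user.
    local dir=$1
    echo "cleaning $dir as $(id -un)"
}

run_as() {
    # Serialize the function definition plus a call to it, and hand the
    # whole thing to a shell started via su under the target user.
    local user=$1; shift
    su "$user" -s /bin/bash \
        -c "$(declare -f clean_dir); clean_dir $(printf '%q ' "$@")"
}

# Example invocation (requires root):
# run_as alice /var/spool/old-stuff
```

The printf '%q' quoting sidesteps most of the quoting trouble mentioned above, since the arguments are re-escaped before being embedded in the command string.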

I see no problem with the su method, since the only command you really need to run under that user ID is the actual rm. For efficiency, make it remove many files per rm invocation rather than running su for each individual file (consider xargs).
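The batching could look like the sketch below: one privilege switch per user, with xargs packing many file names into each rm. The directory and the find predicates are placeholder assumptions:

```shell
#!/bin/bash
# Sketch: delete all matching files owned by one user with a single
# privilege switch. The age threshold and path are placeholders.

purge_for_user() {
    local user=$1 dir=$2 age=$3
    # -print0 / -0 keeps file names with spaces or newlines intact;
    # xargs batches them, so rm runs far fewer times than once per file.
    find "$dir" -user "$user" -type f -mtime +"$age" -print0 \
        | sudo -u "$user" xargs -0 -r rm -f --
}

# Usage (as root): purge_for_user alice /var/spool/old-stuff 30
```

Note that xargs -r (skip running rm when the list is empty) is a GNU extension.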

I probably should have been more clear, I'm sorry.

The cleaning itself isn't just an rm (that's done later, after the file has been in limbo for a specified time, and can be done as root); it also includes creating the target directory (if needed) and moving the file(s) there. Now, in order to avoid calling su/sudo for each and every file (and possibly creating a lot of unneeded syslog messages about the context switches), I'd like to change the user context for the whole loop.

Pseudo-Code to clarify:

loop over dirs to be cleaned
    switch user context if needed
    loop over find
        check if target exists
            otherwise create
        move file to target
        make a note of (attempted) move
    mail user the changes done and those failed
    switch back to original context
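The pseudo-code above could be sketched in bash as follows, with the inner loop handed to sudo as a here-document so there is only one context switch per directory. The base path, limbo location, and age threshold are placeholder assumptions, and a working mail command is assumed:

```shell
#!/bin/bash
# Sketch of the pseudo-code: one context switch per directory, the
# whole inner loop running as the directory's owner.

clean_tree() {
    local base=$1 limbo=$2 age=$3 dir owner
    for dir in "$base"/*/; do
        owner=$(stat -c %U "$dir")    # GNU stat: %U = owning user name
        # Run the entire inner loop as the owner via a here-document;
        # the subshell exits afterwards, restoring the root context.
        sudo -u "$owner" bash -s -- "$dir" "$limbo/$owner" "$age" <<'EOF'
dir=$1 target=$2 age=$3
report=$(mktemp)
mkdir -p -- "$target" || exit 1       # create target if needed
find "$dir" -type f -mtime +"$age" -print0 |
while IFS= read -r -d '' f; do
    if mv -- "$f" "$target/"; then    # note each (attempted) move
        printf 'moved: %s\n' "$f"
    else
        printf 'FAILED: %s\n' "$f"
    fi
done >"$report"
mail -s "cleanup report for $dir" "$(id -un)" <"$report"
rm -f -- "$report"
EOF
    done
}

# Usage (as root): clean_tree /srv/proj /var/limbo 30
```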

I think you should, as a first step, enumerate all of the candidate files and sort the output into per-owner list files: tmp.ownername, tmp.ownername2, etc. You have to do this at some point, so do it all up front. If it is hundreds of thousands of files, then "thread it" - in the sense of assigning one process to one sub-directory. When all the child processes are done, do the big sort/split.
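For the single-process case, the enumeration and per-owner split can be done in one pass with GNU find's -printf and awk; the root directory and age threshold are placeholders:

```shell
#!/bin/bash
# Sketch: list candidate files under $1 older than $2 days, split into
# one tmp.<ownername> file per owner. Relies on GNU find's -printf.

split_by_owner() {
    local root=$1 age=$2
    # %u = owner name, %p = path; tab-separated so awk can split safely
    # (this sketch assumes no tabs or newlines in file names).
    find "$root" -type f -mtime +"$age" -printf '%u\t%p\n' \
        | awk -F'\t' '{ print $2 > ("tmp." $1) }'
}

# Usage: split_by_owner /srv/proj 30
```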

Next, write a dynamic (one call per tmp.ownername) wrapper script that invokes the cleanup, move, etc., as a separate script with the tmp.ownername file as input. This invocation is the point at which you sudo or su to the user specified by the tmp.ownername. Invoke one instance of the cleanup script per tmp.* file from the wrapper script. Again, if there are large numbers of files, consider running several cleanup scripts in the background at once. Only you can judge the impact on other, unrelated processes from all the I/O this generates.
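The wrapper could be sketched like this; "cleanup.sh" is a hypothetical name for the per-user cleanup script, and "wait -n" requires bash 4.3 or later:

```shell
#!/bin/bash
# Sketch of the wrapper: one cleanup run per tmp.<owner> list file,
# switched to that user via su, a few jobs in flight at once.

run_cleanups() {
    local maxjobs=${1:-4} list owner
    for list in tmp.*; do
        owner=${list#tmp.}    # strip the "tmp." prefix to get the user
        su "$owner" -s /bin/bash \
            -c "./cleanup.sh $(printf %q "$list")" &
        # crude throttle: block while maxjobs jobs are already running
        while [ "$(jobs -rp | wc -l)" -ge "$maxjobs" ]; do
            wait -n
        done
    done
    wait    # let the last batch finish
}

# Usage (as root, in the directory holding the tmp.* lists): run_cleanups 4
```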

I noticed both cp and mv mentioned. mv is more efficient for moving files among directories within a filesystem, since there it is a rename rather than a copy.