Chmod by multiple users.

The requirement is that whichever user logs in and does not find the /tmp/logs folder creates it, along with a file /tmp/logs/date.log, and grants full permissions so that every user can read and write the file and folder.

I wrote this code in setup.sh:

mkdir -p /tmp/logs
touch /tmp/logs/date.log
chmod -R 777 /tmp/logs

This setup.sh is sourced from the profile of every user.

The problem I am facing is that once the file and folder have been created, the other users get the error below:

chmod: changing permissions of '/tmp/logs': Operation not permitted

I understand that a user cannot chmod a file or folder owned by someone else; but how can I keep the error from showing up when other users log in?

Is simply suppressing the error, as in chmod 777 /tmp/root_log 2>/dev/null, the solution to my problem?
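That is, something like the following in setup.sh (a sketch only; it hides the message, but a non-owner still cannot actually change the permissions):

```shell
# Sketch: suppress the expected "Operation not permitted" from chmod.
# This only silences the error output; when another user owns /tmp/logs,
# the permissions are still left as they were.
mkdir -p /tmp/logs 2>/dev/null
touch /tmp/logs/date.log 2>/dev/null
chmod -R 777 /tmp/logs 2>/dev/null
```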

Why don't you post the essential information, such as the user/group of /tmp and /tmp/logs, and the user and group that get the error?


Those requirements are ridiculous.

/tmp/logs should never belong to any users you want to monitor.

/tmp/logs should never be writable by any users you want to monitor.

/tmp/logs ... should not even be in /tmp.

This is not a "compromise" or "the best you can do". This is a screen door on a submarine - worse than nothing, so untrustworthy that the records this system generates will be useless and liable to disappear randomly even when the users behave.

I was tempted to reply earlier, but my reply would have been harsh, though justified...
So to put what Corona just wrote another way:
1) Who would ever want to create log files and a directory that can be removed by anyone? What goal is that, if not an own goal?
2) What value has a log file that anyone can write whatever they want into, especially if you cannot trace who wrote what in it?

I suggest you really learn the UNIX basics: what permissions are and what they are for, and how logging is used (as not all of it is reasonable...).
Instead of trying to rebuild what exists, learn, yes seriously, learn what UNIX can offer and what already exists. Looking at all your recent threads, it would have been far easier to use what UNIX already provides:
Auditing....
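As a taste of what that means in practice, here is a minimal Linux audit rules fragment (the file location, key name, and uid threshold are illustrative, not from this thread):

```
# Hypothetical fragment for a Linux auditd rules file
# (e.g. under /etc/audit/rules.d/).
# Record every program execution by ordinary users (auid >= 1000)
# and tag it with the key "user-cmds":
-a always,exit -F arch=b64 -S execve -F auid>=1000 -k user-cmds
# Reviewed later (as root) with:  ausearch -k user-cmds
```

Unlike a world-writable file in /tmp, these records are written by the kernel and the audit daemon, not by the users being watched.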

chmod: changing permissions of '/tmp/log': Operation not permitted
chmod: changing permissions of '/tmp/log/logs_120817': Operation not permitted
Now sleeping for 3 seconds ... Please wait ....
 You can BEGIN NOW !!
[temp@techx ~]$ id
uid=1016(temp) gid=1003(jenking) groups=1003(jenking) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[temp@techx ~]$ ls -ltrd /tmp/root_log
drwxrwxrwx. 2 user2 jenking 4096 Aug 19 05:41 /tmp/root_log

---------- Post updated at 12:49 AM ---------- Previous update was at 12:43 AM ----------

I would appreciate it if anyone could share the proposed auditing system so I could learn and implement it. I need to know who logged in and when, to prompt and force a user at login to enter an explanation of why they are logging in, and to record every command they ran, i.e. the history for that session, etc.

Anyway, because I thought the nature of my request was custom, I would rather implement it myself.

By the way, to address the concerns: as soon as a user logs in, I will have root rsync the logs from /tmp to a place where only root has access.

Also, root will check whether the profile of any user has been tampered with, using grep or cksum.
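A self-contained sketch of that cksum check, using temporary files so it can be run anywhere; the real paths would be each user's profile and a root-only baseline file:

```shell
# Sketch: compare a stored checksum of a profile against its current state.
profile=$(mktemp)                 # stands in for e.g. a user's .bash_profile
echo 'export EDITOR=vi' > "$profile"
baseline=$(mktemp)                # stands in for a root-only baseline file
cksum "$profile" > "$baseline"    # root records the approved checksum

echo 'alias ls=rm' >> "$profile"  # simulate tampering
# cmp -s compares the fresh checksum (stdin) against the stored baseline;
# with the tampering above, this prints the warning.
if ! cksum "$profile" | cmp -s - "$baseline"; then
    echo "WARNING: $profile differs from baseline"
fi
```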

Here is a plan that works.

Test first and assume an existing file has the desired permissions.

if [ ! -f /tmp/logs/date.log ]
then
  mkdir -p /tmp/logs
  touch /tmp/logs/date.log
  chmod -R 777 /tmp/logs
fi
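An alternative sketch that avoids running chmod on a directory another user may own: set the umask for just these commands, so the files and directories are created with open permissions in the first place.

```shell
# Sketch: create a world-writable log dir/file without a later chmod.
# The subshell confines umask 000 to these commands, leaving the
# user's own umask untouched.
(
  umask 000
  mkdir -p /tmp/logs          # created mode 777 if absent; no error if present
  touch /tmp/logs/date.log    # created mode 666 if absent
)
```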

The problem, mainly, is that you're logging things as the same user you want to monitor. This means, by definition:

  • Everything you log, they can delete.
  • Everything you do, they can undo.
  • Everything you run, they can kill.
  • Everything you make, they can destroy.

This leaves it wide-open to both intentional and accidental abuse. There is no amount of shell script alone you can write to avoid this.

To prevent users from deleting the stuff being logged about them, it has to be logged somewhere they can't control. Meaning, the logging code has to run as some other user.

We should not assume a particular scenario.
Many years ago I had to administer a booking system where people had to register their project before using the application. It was not safe, but it worked quite okay. The users were all in the same company, and nobody wanted to boycott the company's goals.

OK, so here we go again:

We have already established that there is always the problem that a user with sufficient privileges can undo what you do. This is especially true for the root user, who can modify any and every log/file/list/whatever there is on a system. There is only one solution to this dilemma: store the log offsite. That means, specifically: store the log file not on the system where you want the auditing to take place but on another system. On this other system, none of the people you want audited (especially the ones who can become root on your "normal" production systems) should have any rights, ideally not even to log on.

This is the reason why syslog has not only files as possible targets for the logs it creates but also "network", where the log updates are sent over the network to a target system. Modern syslogd replacements (e.g. syslog-ng, rsyslogd, etc.) even refine this concept: you can send different facilities and different severities to different destinations, and so on.

So, to get what you want, you should investigate how to create the respective syslog messages and how to configure the syslog daemon of your choice to relay them to some destination.
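A sketch of that approach; the facility (local1), the tag, and the log host are assumptions for illustration. logger(1) is the standard way to emit a syslog message from a shell profile:

```shell
# Sketch: emit an audit record via syslog from the login profile.
# Facility "local1" and tag "loginaudit" are illustrative choices.
msg="login user=$(id -un) at $(date '+%Y-%m-%d %H:%M:%S')"
# Requires a running syslog daemon; fails silently otherwise.
logger -p local1.notice -t loginaudit "$msg" 2>/dev/null || true

# The daemon side (e.g. rsyslogd) would relay local1 to a remote host
# with a configuration line like (@@ means TCP; the host is hypothetical):
#   local1.*  @@loghost.example.com:514
```

The user cannot delete or rewrite what has already been relayed to the remote host, which is exactly the property the /tmp scheme lacks.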

Don't! If anything, you don't come across as a professional software developer, and therefore it is doubtful that you would be able to finish such a project successfully, let alone with the production quality offered by the projects I have just named (and perhaps a few more I don't even know of).

Look: this is a (badly performing) workaround for another workaround covering a conceptual flaw in a badly thought-out plan. As a rule of thumb, reasonable plans don't need workarounds and quick fixes, so if you ever notice yourself reaching for exactly these, understand that your plan is most probably flawed in the first place, and it would be better to fix the conceptual flaws than to patch a non-workable plan into working somehow.

This is how I do it too. There is no shame in formulating a bad plan and, upon recognizing that it is bad, rethinking it until one gets to something better. But holding onto a bad plan because, with a vast number of quick fixes, it might sometimes work is asking for trouble (usually very successfully so).

Don't! First, it is my right as a user to modify my profile so that I can configure my environment to my liking. I suppose you have your own desk; let us assume you are used to having your coffee cup on the left side of the keyboard. Would you appreciate a "company policy" of having it on the right side? The only thing you can achieve with such a ridiculous rule is to frustrate users, because the environment they use doesn't "feel right" any more. One prefers, say, a certain command prompt; the other wants it different. Of course you can cksum the profile (and thereby make it effectively read-only), but one of these two people will be disappointed in the end.

Instead, do something like what I suggested: it is not intrusive, it doesn't pester users, and what's more, it is efficient and relies on standard methods and procedures.

So, there. Instead of creating these enormously complex contraptions, sit down and learn the UNIX facilities and what they can do for you. You will never be disappointed.

I hope this helps.

bakunin
