How to prevent accidental 'rm -rf *'?

When invoking Unix commands from third-party tools (IBM ETL), we run the rm / mv commands with the folder passed as an argument, e.g.

rm -rf {folder}/*

When the parameter {folder} is not passed correctly or is blank, the command becomes dangerous to execute:

rm -rf /*
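
To see the expansion concretely (any POSIX shell; echo is used here so nothing is actually removed):

folder=
echo rm -rf $folder/*
# prints something like: rm -rf /bin /boot /dev /etc /home ...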

How to prevent the disaster?

I have seen suggestions to add an alias:

alias rm='rm -i'

But this won't work all the time, as we cannot make the command interactive when running from a tool.

The situation becomes worse when the command is executed as the superuser.

Please shed some light.

Thanks,
Deepak

Let me see if I understand what you're doing. You have a script that is given an operand that is the name of the directory to be removed. You expect it to be invoked with something like:

removeall directory

and you have written removeall to be:

#!/bin/YourShellName
rm -rf "$1"/*

And, if the person who invokes removeall forgets to give an operand, bad things happen.

So, why did your script add /* ??? If the script had been:

#!/bin/YourShellName
rm -rf "$1"

you would get the same results when a directory operand is given, but you wouldn't have a problem when no operand is given (rm would just print a diagnostic saying no operands were given, or that an empty string is not a valid pathname).
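
For example, handed an empty string, rm simply refuses (GNU rm shown; the exact wording varies by implementation):

$ rm -r ""
rm: cannot remove '': No such file or directory

With -f the diagnostic is suppressed, per POSIX, but nothing is removed either way.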

Or your script could actually check for missing or "invalid" operands:

#!/bin/YourShellName
IAm=${0##*/}
if [ $# -ne 1 ] || [ ! -d "$1" ]
then    printf "Usage: %s directory\n" "$IAm" >&2
        exit 1
fi
rm -rf "$1"

UNIX utilities are there to help you get a job done. If you use them correctly, they can do wonderful things for you. If you tell them to do stupid things, you'll get what you asked for.

One way would be to alias rm to a script that checks its parameters:

alias rm='/usr/local/bin/myrm.sh'

#!/bin/bash
# script: myrm.sh
# check parameters to prevent system damage

[ "$*" = "-rf $(echo /*)" ] && echo "Illegal parameters" && exit 1

/bin/rm $@

Edit: as Don Cragun was faster, I'll explain how I interpreted the OP: his problem is invoking commands via a third-party tool which does not check the parameters it passes.

This is an extremely dangerous script. It seems to be intended to catch an attempt to remove all files in and under the root directory. But it won't complain if you try any of the following (all of which do exactly what this script seems to be intended to catch):

rm -r -f /*
rm -f -r /*
rm -fr /*
cd /; rm -rf *
rm -rf /

It won't complain if there happen to be any files in the root directory that contain a tab character, start or end with a space character, or contain two or more adjacent space characters. It will fail if any file is added to or removed from the root directory between the time when the rm alias was called and the time when this script processes echo /*. And it will attempt to remove a different set of files than what was requested if any files in the operand list contain any whitespace characters.

That's why I explained how I interpreted the original post, but you're right, I was not clear enough.
I assumed that the third-party tool issues rm -rf /* if the user of that tool does not provide any arguments. ONLY this case is caught. The user of that tool may not have any knowledge that he is working on a UNIX system at all, because he only sees that tool's frontend and may not know what effect is caused by not giving any arguments.
For all other cases I'll quote you:

A user who issues the commands you mentioned in the last reply most likely knows what he is doing - that would not be an accidental use of the rm command.

Edit again: thanks for pointing out the issues in your last paragraph.
In my tests (using GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu)) the script complains correctly when there are files in the root directory that contain space or tab characters in their names.
I'm not sure if the race condition can be entirely avoided (given that my assumptions about the problem are correct).
The issue with whitespace in the operand list is caused by my lack of quoting. The last line should read /bin/rm "$@".
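
The difference is easy to demonstrate in any POSIX shell (printf stands in for rm here so nothing is deleted):

set -- -f 'my file.txt'    # simulate the wrapper's positional parameters
printf '<%s>\n' $@         # unquoted: prints <-f> <my> <file.txt> -- the filename was split
printf '<%s>\n' "$@"       # quoted: prints <-f> <my file.txt> -- arguments preserved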

Don / Cero, thanks for your replies!!!

I intended to say what Cero interpreted.
The tool that I use invokes the rm or /usr/bin/rm commands.
Operands are passed to the rm utility, with optional arguments such as -r / -rf / -f as parameters.
E.g., with parameters like:

/usr/bin/rm -rf #folder#/#filepatter#*.csv

rm -f #folder#/#filepatter#*

/bin/rm -rf #folder#/*

cd #directory# ; touch #file(s)# ; rm -rf #file(s)#

During any abnormality, there is a chance the tool sends parameters like #folder# or #file# as empty values.
And we know the impact of that.

I was in fact trying these options this morning, similar to what cero described.
I created the alias, and it works in a Unix terminal, but not in the tool. I am not sure if I need to bounce the tool to refresh the change in .profile.
So, with the help of the admins, I created a soft link for the rm command in /usr/bin:

rm -> /home/dsadm/rm_chck.ksh

And it seems to work in my testing.

And I am thinking of adding more conditions to the script to capture the cases Don mentioned.

At least this way, we can avoid the known issues identified so far. I hope this is the right way to proceed.
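
For reference, here is a minimal sketch of what such a wrapper might look like (ksh; it assumes the real binary was preserved at an untouched path, /usr/bin/rm.real being a hypothetical name, and it only catches the known bad cases -- as Don points out, a fully reliable filter is not possible):

#!/bin/ksh
# rm_chck.ksh -- sketch of a guarding wrapper for rm.
# Assumption: the real rm was copied to /usr/bin/rm.real (hypothetical)
# before /usr/bin/rm was replaced by a link to this script.
# Deliberately minimal: it does not normalize trailing slashes,
# symlinks, or relative paths.
for arg do
    case $arg in
        -*) ;;                               # an option: pass through
        ''|/)                                # empty operand, or the root itself
            echo "rm_chck: refusing operand '$arg'" >&2
            exit 1 ;;
        /*)                                  # absolute path: block top-level entries
            case ${arg#/} in
                */*) ;;                      # two or more levels deep: allow
                *)  echo "rm_chck: refusing top-level '$arg'" >&2
                    exit 1 ;;
            esac ;;
    esac
done
exec /usr/bin/rm.real "$@"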

Aliases only work for interactive users.

This does not mean you should gut and replace your rm command with a script -- that would be a very bad idea; important system processes may use rm.

It means you should fix your tool instead. Adding more conditions would be a good idea.

Is there any possibility for editing the tool itself, or are you stuck with it?
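
If the tool's template ultimately goes through a POSIX shell, one sketch of a fix at the source (assuming the template can first assign the substituted value to a shell variable) is the :? parameter expansion:

folder='#folder#'                              # the tool substitutes its value here
rm -rf "${folder:?folder parameter is empty}"/*

If the substitution produces an empty string, the shell prints the message and aborts the command before rm ever runs.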

Aliases are set up differently for different shells; may be disabled or replaced by a user or by a script, and -- as you already know -- won't have any effect if whatever is invoking rm isn't a shell running with the defined alias in its current execution environment.
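
A quick way to see this (interactive bash session; the alias never reaches the child shell):

alias rm='echo BLOCKED:'
rm /tmp/x               # interactive shell expands the alias: prints BLOCKED: /tmp/x
sh -c 'rm /tmp/x'       # child shell knows nothing of the alias: the real rm runs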

I wasn't suggesting adding more conditions to an rm filter. I was trying to point out that, on an active system, there is no way to reliably do what that script seems to want to do; and, even for the simple, static case, there are several errors in this rm filter that could keep it from recognizing that the caller was attempting to recursively remove all files under the root directory, and would also make it impossible to remove some files unless the user knew how to avoid this buggy filter so the unadulterated rm utility could process the operands that the user intended to pass in.

I fully agree with Corona688: Fix, disable, or remove code that is taking input from users or website input fields and transforming it into dangerous commands without performing appropriate input validation. <<getting onto soapbox>> If you have a programmer who is writing code that will be running with super-user privileges, taking a (possibly empty) directory from a website field, adding a /* to that directory, and then invoking rm with the -r and -f options and that modified directory name; fire that programmer. If you are getting code from a 3rd party vendor that contains code like this; demand a refund and remove them from your approved vendor list. <<getting off of soapbox>>

Perhaps the following trick helps: create a file named -i in the current working directory:

 > -i

In most locales -i comes first in the alphabet, so rm * expands to rm -i file1 file2 ...
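
A quick illustration (GNU rm shown; the prompt wording varies by implementation):

$ > -i
$ rm *
rm: remove regular file 'file1'?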
Another trick is to turn off the shell's wildcard globbing, in .bashrc with

set -f

or in .cshrc with

set noglob
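
With globbing off, an empty {folder} can no longer explode into the contents of the root directory; rm just receives the literal string /* (POSIX shell shown):

set -f
folder=
rm -rf $folder/*
# rm receives the single literal argument "/*"; no such file exists,
# so nothing is removed

Of course this also disables legitimate wildcard deletes, so file lists would then have to be passed explicitly.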

From experience, if you give people the slightest chance to do something dangerous, then eventually they will do it by accident.

Experience is a great teacher, if you keep your job long enough to use it.

I would suggest that the approach is flawed, as others have said. You need to find a far more secure way to lock down what is removed.

Can you explain a little more about why they might need to delete everything in a particular directory? Are these perhaps temporary files left behind by a previous user? There are better ways of dealing with that issue.

Robin