Remove contents of directory, but not directory

What's the best way to delete everything in a directory, but not the directory itself, without using shell wildcards?

Why can't you use shell wildcards?

Carl

Looks like homework to me, but here's a thought: why don't you look up the man pages for ls and the for loop in sh/ksh?

It's not homework. I'm asking mostly out of curiosity, but you could argue that using wildcards is bad anyway. If there are a million files in the directory, that's a million rm processes (obviously not all simultaneous, but still). And it's not bullet-proof:

mercury-2:~:0 mkdir foo
mercury-2:~:0 touch foo/.bar
mercury-2:~:0 rm -rf foo/*
zsh: sure you want to delete all the files in foo [yn]? y
zsh: no matches found: foo/*
mercury-2:~:1

On top of that, it's not easy to use from, say, an exec() call. You need the infrastructure of a shell.

So my question stands.

find . -type f -exec rm {} \; -print

Except that won't remove any subdirectories. Removing the "-type f" and specifying "rm -r" fixes this issue... but you'll still hit an error when find attempts to remove the "." directory. Incidentally, "find . -delete" is probably a nicer way of coding this.

Anyone else got suggestions? It seems like the kind of thing there should be a command, or a flag to rm, for: a simple way to empty a directory.

I don't think you have that right... a million files is not a million rm processes. It's all those million files as arguments to one rm process. Now, that might fail due to too many arguments (any command can handle only a finite number).
I will grant you your second point about not being easy to use from an exec() call.

Why don't you use a slightly modified form of the find command that ppierald gave:

find . -exec rm -rf {} \;

That, by the way, would be a million rm processes (one for each file/directory that the find command would output).

If there are 1000 files in a directory, "rm -rf *" will not attempt to create 1000 processes. It will attempt to create a single process with a rather long command line. This will probably fail; with one million files, I believe it is guaranteed to fail. Even if you write your own shell that can handle the command line, no kernel I know of can exec() an argument list that large.

A directory with one million files is a problem in its own right. It will take hours to empty, and with most filesystems, directories grow as needed but do not shrink. In general, emptying a directory is not wise: remove the directory and recreate it. If the directory is super-sized, rename it, create a new empty directory, then remove the renamed directory. But this won't work with a mount point, such as /tmp, which is why your original question might make some sense.
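In C, by the way, that rename trick is just rename(2) followed by mkdir(2). A minimal sketch, with made-up directory names and only token error handling:

#include <stdio.h>
#include <sys/stat.h>

/* Swap a full directory out of the way and put an empty one in its
 * place. rename() itself is atomic within one filesystem, though
 * there is a brief window before mkdir() recreates the name. The old
 * tree ("spool.old") can then be removed at leisure. */
int main(void)
{
    if (rename("spool", "spool.old") == -1) {
        perror("rename");
        return 1;
    }
    if (mkdir("spool", 0755) == -1) {
        perror("mkdir");
        return 1;
    }
    return 0;
}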

However, additional constraints keep arising out of the blue...
"you'll still hit an error when find attempts to remove the "." directory"
"it's not easy to use from, say, an exec() call. You need the infrastructure of a shell."
So far, not only can we not use wildcards, but we must guarantee that no errors occur, it must be easy to use with exec(), and we cannot even use a shell. These are unreasonable additional constraints. The "unix way" involves connecting programs together, and yes, you should expect that the solutions we provide will require the use of a shell. Expected and harmless error messages can be ignored, or you can use "2>/dev/null" to suppress them.

If you are emptying a directory from a C program, you should probably use opendir(), readdir(), and unlink() directly rather than exec'ing another program. If you must exec(), something like "ksh -c commandline" can be used with any valid command line.
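To make that concrete, here is a rough sketch of the opendir()/readdir()/unlink() approach. It only handles non-directory entries (a subdirectory will make unlink() fail; a complete version would recurse and finish with rmdir()), and the error handling is deliberately thin:

#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Empty a directory without a shell, a wildcard, or a child process.
 * Note that "." and ".." are skipped explicitly -- the same entries
 * that wildcard expansion quietly refuses to match. */
int empty_dir(const char *path)
{
    DIR *dp = opendir(path);
    struct dirent *ent;
    char buf[4096];

    if (dp == NULL)
        return -1;
    while ((ent = readdir(dp)) != NULL) {
        if (strcmp(ent->d_name, ".") == 0 || strcmp(ent->d_name, "..") == 0)
            continue;
        snprintf(buf, sizeof buf, "%s/%s", path, ent->d_name);
        if (unlink(buf) == -1)
            perror(buf);    /* most likely a subdirectory */
    }
    closedir(dp);
    return 0;
}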

For the record, my solution is:
cd directory
find . \( ! -name . -prune \) -exec rm -rf {} \+
The + with find is required by the Posix standard (unlike -delete), but not everyone has it yet. Assuming the user has permission to delete the files and subdirectories, this command may actually satisfy all of your currently stated constraints. If the filenames have no embedded newline characters,
cd directory
find . \( ! -name . -prune \) -print | xargs rm -rf
is nearly as good and will work with older versions of find. However, it uses a pipe and thus violates your prohibition against using the "infrastructure of a shell".
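If the exec() requirement is what worries you, note that either command line can simply be handed to a shell via exec() itself, as I suggested above. A sketch, using /bin/sh rather than ksh and a hypothetical directory name:

#include <stdio.h>
#include <unistd.h>

/* Let exec() start one shell, and let that shell supply the
 * "infrastructure" (the cd, the quoting, the argument batching).
 * The path is a placeholder. */
int main(void)
{
    execl("/bin/sh", "sh", "-c",
          "cd /some/directory && "
          "find . \\( ! -name . -prune \\) -exec rm -rf {} +",
          (char *)NULL);
    perror("execl");    /* reached only if the exec fails */
    return 1;
}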

Thanks for your solution.

Apologies if I didn't make my question clear. I was asking if there was an easy way of getting behaviour similar to a hypothetical "empty" command -- "$ empty dir" -- just a simple, sweet, atomic operation, without having to resort to something like `find', which seems, to my mind at least, a sledgehammer approach. (I've always thought find was the antithesis of the "unix way" -- "a command should do one thing, and one thing well" -- but I'm not trying to start a flamewar.)

It wasn't for any particular purpose, but there are situations where deleting and recreating the directory might not be practical (open file handles, mount points, etc.). As for my not stating that the command mustn't produce errors: surely a "solution" that guarantees an error during execution is not the "unix way"!

Still, maybe my question was a bit misguided. It's not as if individually deleting a large number of files is a particularly clean, simple or atomic operation, unlike unlinking a directory. I hadn't thought of the whole directory growth/shrinkage issue. Anyone with more FS expertise than me care to comment further? How do operating systems that implement recycle bins/trash directories/etc. handle this?

It seems your rm may be aliased to rm -i, which overrides your -f option. Try

/usr/bin/rm -rf *

from the appropriate directory.

More safely, you can test with

prompt%> mkdir /foo
prompt%> cd /foo
prompt%> touch a.txt
prompt%> touch b.txt
prompt%> touch c.txt
prompt%> /usr/bin/rm -rf *

I realize this does not follow your constraint of not using shell wildcards, but I'm guessing that the issue is caused by having to verify every delete 'interactively'.