cp command not working

Hi Guys,

I have about 12000 files in a folder and I want to copy them to another folder. I am using the cp command to do this, but it errors out with something like "cp: argument list too long".

Is there any way to get around this?

I don't want to do a mv but use only cp.

Thanks

Please post exactly what you typed and the exact error message you received. Knowing your Operating System and Shell would also help.

I use tar to move large directories around...use standard input and output:

tar cvf - .|(cd $DDIR;tar xf -)

Of course this will copy the entire directory. To only move certain files:

tar cvf - .$FILESPEC|(cd $DDIR; tar xf -)

This is where $DDIR is your destination directory and $FILESPEC is the filename specification (you can use * to expand filenames).
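
Applied to the original question (copying everything from one directory into another), the same idea would look roughly like this; SRC and DEST are just placeholder paths, and the exact tar options can vary a little from system to system:

cd SRC
tar cf - . | (cd DEST && tar xf -)

Because tar reads the directory itself instead of taking every filename as a command-line argument, it never runs into the argument-length problem.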

Sorry to be rude, but I repeat.

Please post exactly what you typed and the exact error message you received. Knowing your Operating System and Shell would also help.

Geekasaurus's post can be ignored because it uses erroneous syntax for irrelevant commands.

The cp command does not work for directories with a large number of files in them. I don't know what the upper limit is; I could probably find out with a lot of research. The mv and rm commands do not work either under the same circumstance.

I suspect that the upper limit of the number of files that it can work on is 65536. I know that it is greater than 10,000 and less than 100,000. The 65536 is a SWAG number based on 26 years of Unix shell-scripting experience.

A number of years ago, our data warehouse people left more than four million small files in a single directory. I don't know exactly how many, but they used the entire inode table, and its size was set at 4194304 per filesystem (HP-UX 11.0). I had to reboot the system from CDROM, alter the boot parameters and mount the offending filesystem to the RAM image. Only then could I descend the directory tree and find the overfilled directory. As the OP said, the cp command does not work on directories with that many entries in them. Neither do the rm or mv commands.

I cleaned up the filesystem by removing files. I had to use a for-do-done loop and remove only one file at a time; this took many hours.
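
For what it's worth, the loop was along these lines (a sketch from memory, with a placeholder path):

cd /path/to/offending_dir
for f in *
do
    rm -f "$f"
done

The glob is expanded by the shell itself, so rm only ever receives one filename per invocation.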

The shell being used does not matter. Neither does the OS (particularly). It is the same cp command (or rm, or mv) regardless of the shell.

The tar syntax I quoted is in the man page on my AIX box. I've been using something similar to it for more than 20 years. It works.

Thanks, methyl and Geekasaurus.

@methyl,
I did not save the error message and do not have the PC here, so I will copy the exact error message the next time I try it on the PC and post it.

@Geekasaurus,
I will try the tar command and see if it works.

It has nothing to do with the rm command. The limit is the size of the buffer that command-line parameters must fit into. This applies to all commands (except certain shell builtins in certain shells).

It is a "swag number" because it's a power of two, which memory is often allocated based on. It's the maximum size of the buffer for commandline arguments per process, and system-dependent.

You may be interested in the xargs command. It converts things read on stdin into command-line arguments, spreading them across as many executions as necessary.

# Delete everything in ./dir_containing_files
find ./dir_containing_files -type f -print0 | xargs --null rm
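
The same pattern covers the copy the OP actually wants. Note that cp's -t (target-directory-first) option is a GNU coreutils extension, so this is a Linux-flavoured sketch with placeholder directory names:

# Copy every regular file under ./src_dir into /dest_dir, batching as needed
find ./src_dir -type f -print0 | xargs --null cp -t /dest_dir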

It works in certain situations, yes. It will probably do what the OP wants. I have my doubts that "$FILESPEC" is working the way you think it is though -- either that or AIX has a very strange implementation of tar...

Actually, the operating system definitely matters. A command line length limit has nothing to do with the executables (rm, mv, cp, tar, etc...); it is a byproduct of limits imposed on the system call which creates the process. In situations like these, find and/or xargs are our friends.
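
For example, if xargs isn't available, find can invoke cp itself. This strictly POSIX form starts one cp per file -- slower, but it sidesteps the argument-length limit entirely and works on any Unix (the directory names are placeholders):

# One cp invocation per file found
find ./src_dir -type f -exec cp {} /dest_dir/ \;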

This may be of interest to some:
ARG_MAX, maximum length of arguments for a new process

Take care,
Alister

This event was in 2002, and so I have forgotten many of the details. I think the find command also barfed. But it was 3AM, of course, and I must've tried a lot of things before settling on the 'for' script.

HP-UX 11.0 was a 32-bit OS, and it was on an HP V-class machine which had some very Byzantine I/O constraints. Nevertheless, I have almost always had success with the tar command but have had my share of troubles with the others.

And note that I qualified my statement with the word "particularly". We had some Tru64 systems which had absolutely no trouble with this sort of problem. I suspect that most, if not all, 64-bit OSes will not demonstrate this kind of problem.

It was not a technical reason that I chose to remove one file at a time. The system in question processed several billion dollars in transaction volume with every batch run. I was expected to 'do something' about the problem rather than actually fix it. You know that it will be a bad day at the office when the Public Affairs Officer shows up in the Data Center at 4AM in her pajamas. At least with all those filenames scrolling up across the screen, I could say that something was being done, even if it was not particularly elegant or technically desirable.

running ./configure on a 64-bit linux system:

...
checking the maximum length of command line arguments... 1572864
...

So, 1.5 million instead of 64 thousand. That is a lot, but only 24 times larger, and still not big enough for your absolute worst-case scenario there. The rule of thumb is: if you're using enough arguments to even ask "does my OS support it", the only truly safe answer is "no" -- even if it works now it won't scale, and it won't be portable. If you have that many already you could always have more, and you'll find out your OS's limit in an inconvenient way sooner or later.

I can certainly appreciate needing to make an emergency solution. Four million files!! :)