Unexpected 'Argument list too long' error on later OS level

I have a script on Red Hat which runs this:

ssh -q -t -i $HOME/.ssh/my_key server2 "find ~/toCopy/*data* | xargs sudo mv -f -t /home/files/"

I'm getting:

sudo: unable to execute /bin/mv: Argument list too long

but the reason I use xargs is to avoid this restriction.

It used to work fine on:
Red Hat Enterprise Linux Server release 5.11 (Tikanga)
but fails on:
Red Hat Enterprise Linux Server release 6.8 (Santiago)

Any ideas, or is there a better command which overcomes this?

You can restrict the number of args xargs passes to each invocation of the target command.

... | xargs -n 50 .... 
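Applied to the command from your post, that would look roughly like this (same paths, just with -n 50 added):

# at most 50 file names are handed to each sudo mv invocation
ssh -q -t -i $HOME/.ssh/my_key server2 "find ~/toCopy/*data* | xargs -n 50 sudo mv -f -t /home/files/"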

The old box has:

xargs --show-limits
Your environment variables take up 1231 bytes
POSIX lower and upper limits on argument length: 2048, 129024
Maximum length of command we could actually use: 127793
Size of command buffer we are actually using: 127793

The new box has:

Your environment variables take up 2078 bytes
POSIX upper limit on argument length (this system): 2617314
POSIX smallest allowable upper limit on argument length (all systems): 4096
Maximum length of command we could actually use: 2615236
Size of command buffer we are actually using: 131072

The base command isn't any bigger, and the target folder is smaller, so I don't understand why it fails if "Maximum length of command we could actually use" means what it seems to imply.

I've updated my script to use

-exec sudo mv -f {} /home/files/ \;

which works (much more slowly), but I would be interested in opinions on why the other way didn't work - it may be a Red Hat bug.
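(For reference, the full command is now roughly:

# one sudo mv is spawned per matched file, which is presumably why this form is so much slower
ssh -q -t -i $HOME/.ssh/my_key server2 "find ~/toCopy/*data* -exec sudo mv -f {} /home/files/ \;"

i.e. one process pair per file instead of one per batch.)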

Have you tried using

-exec sudo mv -f -t /home/files/ {} +

?

That has the same problem in principle: sudo was not compiled with the system limits. (Is an update available?)
Did you try stomp's proposal? It still gives good performance.

No. Unfortunately I have a variable number of files to process (3000-5000). This would mean adding a loop to copy in batches, which is an ugly solution.

If, instead of using:

-exec sudo mv -f {} /home/files/ \;

you use:

-exec sudo mv -f -t /home/files/ {} +

it should run as fast as (and probably a little bit faster than) what you were getting when xargs was working successfully for you. But, of course, this will only work if find knows the correct limits on exec argument lists.
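In your case that would be something like the following (a sketch reusing the paths from your original command):

# -t names the destination up front, so find can append as many source files as safely fit per mv
ssh -q -t -i $HOME/.ssh/my_key server2 "find ~/toCopy/*data* -exec sudo mv -f -t /home/files/ {} +"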

No. Using stomp's suggestion of:

find ... | xargs -n 50 sudo mv -f -t /home/files/

does not require you to add a loop. It takes exactly the same output from find that you were using before, but invokes sudo mv once for every 50 files to be moved, instead of trying to fit as many files as possible into a single sudo mv invocation based on the wrong built-in parameters for deciding how many it can handle. And, without wasting much time, you could try a considerably higher number as a starting point, for example:

find ... | xargs -n 1000 sudo mv -f -t /home/files/

and cut the number back to smaller values until you find one that works, if you still get the ARG_MAX limit exceeded diagnostics with -n 1000.
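If you would rather not adjust the number by hand, a rough sketch of that cut-back (my own illustration, to be run on the remote host or wrapped in the same ssh call) could be:

# try progressively smaller batch sizes until every mv invocation succeeds;
# find is re-run on each pass, so files that were already moved are simply no longer matched
for n in 1000 500 250 100 50; do
    if find ~/toCopy/*data* | xargs -n "$n" sudo mv -f -t /home/files/; then
        echo "batch size $n worked"
        break
    fi
done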


For info, I set -n 2000; 2250 failed.

I'd set the limit rather pessimistically, e.g. 500 rather than 2100, because how quickly the argument space is used up depends on the length of the parameters.

A parameter may be ...

~/toCopy/short

or it may be...

~/toCopy/some_filenames_are_really_long_and_if_you_do_not_know_how_long_the_space_is_eaten_up_with_few_parameters

...and in any case, check whether it still fails.
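If you want to see how much room the parameters really take, something like this (my sketch, standard tools only) shows the longest path and the total size of the list:

# length in bytes of the longest matching path
find ~/toCopy/*data* | awk 'length > max { max = length } END { print max }'
# total bytes the whole list of paths would occupy on a command line
find ~/toCopy/*data* | wc -c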

-----

But would it not be better to set the variables so that every command is happy with the limits and works correctly with them? I do not yet know which variables to adjust for that. But fumbling around to set limits that hopefully won't be hit doesn't seem like the cleanest way, although it will work 99% of the time if you set the limits very conservatively.
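As a starting point, these standard commands (my suggestion, not something already tried in this thread) show where the limits actually come from:

getconf ARG_MAX      # the kernel's limit on the combined size of exec arguments and environment
ulimit -s            # stack size limit; on newer Linux kernels the argument space is capped at a quarter of this
xargs --show-limits  # what xargs itself believes it may use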

-----

The man page here shows:

xargs (GNU findutils) 4.4.2

       --max-chars=max-chars
       -s max-chars
              Use at most max-chars characters per command line, including the command and 
              initial-arguments and the terminating nulls at the ends of the argument strings.  
              The largest allowed value is system-dependent, and is calculated as the argument 
              length limit for exec, less the size of your environment, less 2048 bytes of 
              headroom.  If this value  is  more  than 128KiB, 128Kib is used as the default value; 
              otherwise, the default value is the maximum.  1KiB is 1024 bytes.

I would assume that --max-chars is more robust than --max-args.
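Applied to the command from the question, that would be something like the following, where 100000 is just an arbitrary value chosen to stay below the old box's working buffer size of 127793:

find ~/toCopy/*data* | xargs -s 100000 sudo mv -f -t /home/files/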
