Is there a maximum length for a shell script command?

Is there a maximum length for a shell script command? How can I detect that in my OS?

For example, if I have something like:

command A | command B | command C | awk '{print $1 $2 $3 $4 $5}'

then can we break up the command line, and also the arguments inside awk?

Thanks

I don't know which OS you're using, but try:

getconf ARG_MAX

--
Bye

I'm using Red Hat (Scientific Linux).

getconf ARG_MAX

gives

131072

What about breaking up the command line?

Hi,

command A >outa
<outa command B >outb
<outb command C >outc
cat >script.awk <<"EOF"
{print $1 $2 $3 $4 $5}
EOF
awk -f script.awk outc

Which should be enough, shouldn't it? :-)

The sequence \<newline> is what you're looking for. To get <newline>, you obviously press ENTER.

Be careful: \<newline> works, "\<newline>" works, but '\<newline>' doesn't.

So:

lem@biggy:/tmp$ echo A\
> B\
> C
ABC
lem@biggy:/tmp$ echo "A\
> B\
> C"
ABC
lem@biggy:/tmp$ echo 'A\
> B\
> C'
A\
B\
C
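For the pipeline in the original question there is an even simpler option: when a line ends with the pipe symbol, the shell continues reading on the next line, no backslash needed. A small sketch, with ordinary commands standing in for command A, B and C:

who |
sort |
head -n 5 |
awk '{print $1, $2}'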

--
Bye

Why are you asking? What problem are you trying to solve or prevent?

Regards,
Alister

Actually there is a maximum line length for most of the line-oriented utilities (awk, sed, grep ...) which applies to the shell too (at least to all shells I know of). This is a system constant and can be found in /usr/include/limits.h .

The relevant constant is {LINE_MAX}: the POSIX standard defines it as the maximum length, in bytes, of a utility's input line when the utility processes text files, including the trailing <newline>.

There may be other limits defined in that file as well, {ARG_MAX} for instance.

The smallest of the applicable values determines the actual upper bound for the length of your input line.
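A quick way to see what your own system uses (a sketch; whether getconf knows a given name depends on the system):

getconf LINE_MAX    # longest input line the text utilities are required to handle
getconf ARG_MAX     # longest argument list (plus environment) exec() will accept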

In fact some 15 years ago, when these values were considerably smaller on average, it was possible to break some seemingly working shell-code like this:

for FILE in $(ls) ; do ..... ; done

by executing it for a directory with enough entries. Given bad enough circumstances this will lead to an "input line too long" error once the file names add up to the critical number of characters. To avoid this risk altogether, it is advisable to write it this way:

ls | while read FILE ; do ..... ; done

because the pipeline will not have that problem.
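To get a feeling for how close a given directory comes to such a limit, you can measure roughly how much text the expansion would produce (a rough sketch; the newlines ls prints stand in for the spaces the shell would use as separators):

ls | wc -c    # approximate number of bytes $(ls) would put on the command line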

I hope this helps.

bakunin

Thanks a lot. Actually I have been using crontab, and the escape was not working there!

Generally it's good practice not to have too much code in the crontab entry itself. Write a script (e.g. /usr/local/bin/purge_old_reports.sh) and call this from the crontab entry.

# Archive and purge old spool jobs each Sunday morning
12 06 * * 0 /usr/local/bin/purge_old_reports.sh >> /var/log/purge_old_reports.log 2>&1
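The script itself can then be as long and as readable as you like. A hypothetical sketch of what purge_old_reports.sh might contain (the spool directory and retention periods are made up for illustration):

#!/bin/sh
# purge_old_reports.sh - compress reports older than 30 days, drop very old ones
SPOOL=/var/spool/reports    # hypothetical location

find "$SPOOL" -type f ! -name '*.gz' -mtime +30 -print | while read FILE
do
    gzip "$FILE"
done

find "$SPOOL" -name '*.gz' -mtime +180 -exec rm -f {} \;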

You're right. But that perhaps doesn't answer my question. Is there a way to break the command line for crontab?

All the usual ways apply. In addition, % is a special character in cron.

cron stops reading the command at the first unescaped %.
Example: in the crontab file this is written as one physical line,

1 2 * * 6 /path/to/script.sh %inputfile%outputfile

and cron turns every unescaped % into a newline, so the script sees

inputfile
outputfile

on its standard input, rather than reading them from a keyboard.
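A small sketch of how such a script could pick up those two names (the script path and the sort command are just placeholders for whatever the real job does):

#!/bin/sh
# /path/to/script.sh - reads the file names cron supplies on standard input
read INFILE
read OUTFILE
sort "$INFILE" > "$OUTFILE"    # stand-in for the real work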