Is there a way to make bash [or another shell] use all CPU cores to execute a single script?

I wrote a very simple script that generates combinations of alphabetic characters (1-5). I want to use it to test CPU speeds on different hardware/platforms. The problem is that on multi-core/multi-processor systems, only one CPU is utilized to execute the script. Is there a way to change that? Shouldn't the OS automatically start using the second, third, (etc.) CPU once the first one becomes overloaded?

The script:

#!/bin/bash
for a in {a..z}
do
  for b in {a..z}
  do
    for c in {a..z}
    do
      x="$a$b$c"
    done
  done
done
echo $x
exit

Thank you! :smiley:

J.

If you want to benchmark a system, use a language with less run-time overhead like C, especially if you're just burning CPU.

As for your question: no, shells don't multithread by themselves (I'm assuming you mean "shells", not "OS", since most OSes use all cores anyway). Why should they? A shell is designed to interact with a user who (more or less) knows what he or she is doing. That they're scriptable is a nice value-adding feature.

Besides, how should the shell divine which parts of your program can run in parallel and which can't? Which variables should be shared across threads? How should it avoid deadlocks? That stuff still has to be considered by a programmer, and a programmer can usually handle backgrounded subshells and a wait call or two.
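To make that concrete, here is a minimal sketch (my own, not from the thread) of the backgrounded-subshell-plus-wait approach applied to a loop like the one above, with the outer range shortened for illustration:

```shell
#!/bin/bash
# Each outer-loop letter gets its own backgrounded subshell; the kernel
# is then free to schedule the subshells on different cores.
for a in a b c; do
  (
    for b in {a..z}; do
      for c in {a..z}; do
        x="$a$b$c"
      done
    done
    echo "$a finished at $x"
  ) &     # the & backgrounds the entire subshell
done
wait      # block until every background job has exited
```

Note that $x is local to each subshell: nothing set inside it propagates back to the parent, which is exactly the shared-state problem mentioned above.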

P.S.: There is no such thing as an "overloaded CPU". A CPU can be in (almost) any state between "idle" and "completely utilized", but that's it.

I quite agree with pludi. But the thought sounds very interesting. What is more interesting is that all the commands run from the shell script share the same parent process ID, so those commands would run on the same core (I could be wrong here). But if I am right, it means that if you can detach the commands in your shell script from the parent, each of them could potentially execute on a different core. And now I wonder whether there could be a way to share state or variables across them. Clearly no, unless you implement shared memory in shell, which brings back the point: why use shell scripting for this task?

The answer to the "why shell" question is simple: I don't know the first thing about C. Another reason is portability: it's easier to test different systems if the program doesn't have to be compiled for a specific architecture. But since there is no way to make this work in bash, I guess C is the way to go. I know this is not the right forum, but I would really appreciate it if any of you could convert my script to C.

Thank you! :smiley:

J.

Converting it to C won't make it multithreaded either. You need to learn what multiprocessing and multithreading are.

By the way, proper ANSI C is as portable as any shell script. And proper ANSI C with POSIX threads will run threaded on any POSIX platform.

I have a pretty good idea of what they are. Coding is another story.

You can kind of kludge it with bash. In lieu of explaining it, see this page where someone already did:

"Multithreading" with Bash script - Daniel Botelho

Simply use makefiles. This is not what makefiles are for, but you can do it.

all: foo bar

foo: script1.sh
  /bin/sh script1.sh

bar: script2.sh
  /bin/sh script2.sh

Execute it with:

make -j2

it works :wink:
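A small addendum to the trick above, assuming GNU make and coreutils are available: you can match the job count to the machine automatically instead of hard-coding -j2. (Also note that the recipe lines under each target must be indented with a real tab, not spaces, or make will reject the Makefile.)

```shell
# run as many parallel jobs as the machine reports cores (GNU make + coreutils)
make -j"$(nproc)"
```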

It sure does. Neat trick! It's not exactly suitable for my needs though.

I managed to create a multithreaded python script that roughly does what I need it to but it still uses only a single CPU. I'm going to play with multiprocessing/python next.

J.

Is there a way to make bash use all CPU cores to execute a single script? Yeah, easy:

:(){ :|:& };:

:smiley: :smiley: :smiley:
(If you don't know what this code does: it is a fork bomb. It will certainly use all the cores available, but it will also likely lock up your system. Do not try this on a production system, or you'll run into trouble with the sysadmin.)

More seriously, regarding Python: Python has multi-threading, but its threads only use one core (the GIL...). You can use several cores by running multiple processes with IPC.
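Staying in shell for a moment: the same many-processes idea can be sketched with xargs -P (supported by GNU and BSD xargs), which fans independent work items out to a pool of worker processes. The worker command here is just an illustration:

```shell
# feed the letters to up to 4 concurrent worker processes;
# each worker receives one letter as $1 and echoes it doubled
printf '%s\n' {a..z} | xargs -P 4 -n 1 bash -c 'echo "$1$1"' _
```

The output order is nondeterministic, since the workers run concurrently.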

Cheers,
Loïc

Wow Loic, careful with that code (funny though :D), not everybody knows what a fork bomb is and it may get people into trouble. I think you need to add a warning to not use that code.

Try the taskset command - it sets CPU affinity, forcing the process to run only on the given CPU(s) for the life of the process.

# Start job with given CPU mask, one for each cpu using a mask:
taskset mask command

The mask is a binary number. To see what a mask looks like:

# current process cpu affinity mask
taskset -p $$
# affinity mask for init (all available cpus)
taskset -p 1

All this assumes a Linux 2.4 kernel or above.
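For example (a hypothetical run on a Linux box with util-linux installed), pinning a CPU-bound loop to the first core:

```shell
# mask 0x1 = CPU 0 only; the loop can never migrate to another core
taskset 0x1 bash -c 'for a in {a..z}; do x=$a; done; echo "last: $x"'
# -c accepts a list of core numbers instead of a hex mask
taskset -c 0 echo "also pinned to CPU 0"
```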

Here's what I've got in python so far:

import threading
import string

theVar = 1

class MyThread(threading.Thread):
    def run(self):
        global theVar
        theVar = theVar + 1
        for x in string.ascii_lowercase:
            print(x)
            for y in string.ascii_lowercase:
                print(x, y)
                for z in string.ascii_lowercase:
                    print(x, y, z)

MyThread().start()

Despite the multiple threads, the Python script takes twice as long to execute as the bash script.
Aside from that, something is off with the printed results. Even though the Python script prints more lines than the bash script, it seems to be missing all the 1- and 2-letter combinations. *Sigh*

Yeah, thanks Scrutinizer. I didn't realize that some readers might just try the code without knowing what it does.

On a properly configured UNIX system, this bomb should be ineffective. But we live in the real world, and there are a lot of improperly configured systems out there :wink:

Cheers.
Loïc.

I tested the Python script on a Mac OS X system and, to my surprise, it seemed to be utilizing both CPU cores while the script was executing. I couldn't tell you why, but "top" showed CPU usage at around 100%.

Why would Linux be different?

---------- Post updated at 11:35 AM ---------- Previous update was at 10:10 AM ----------

BTW,
I've verified that Python was compiled with threads on the Linux system.

What gives?

Not quite what the OP asked about, but it's pretty darn good: Vidar's Blog, "Multithreading for performance in shell scripts", and AUR (en) - multilame.