Display performance in the terminal: bash or python?

Heyas

I've been working on my project TUI (Text User Interface) for quite some time now. It's a hobby project, so nothing I sit in front of 8 hrs/day.

The only 'real' programming language I know (or knew) is Visual Basic, built upon early steps with MS batch files. When I 'joined' Linux 3 years ago, I was a bit overwhelmed by my first approaches to GUI programming with Qt (or whatever it was; those were back in 199X).
Thus, I followed another dream of mine and started to write TUI, using BASH.

Anyhow, it has been suggested that I rewrite it in Python, as it would be so much faster.
Well, I finally feel 'confident' enough to actually start learning a new language, and began with a first 'speed' comparison using a simple example script.

The reason for my question is that BASH seems faster than Python with the example tutorial script.
And before I completely (re)write my already existing and working project, I want to be sure that there WILL be a noticeable performance boost...

test.sh:

#!/usr/bin/bash
echo "The script is called: $0"
echo "Your first variable is: $1"
echo "Your second variable is: $2"
echo "Your third variable is: $3"

test.py:

#!/usr/bin/python
from sys import argv

script, first, second, third, four = argv

print "The script is called:", script
print "Your first variable is:", first
print "Your second variable is:", second
print "Your third variable is:", third

I call both scripts like this:

time ./test.[py|sh] 1 2 3 4

I get these result ranges (in seconds):
py: 0.017 - 0.038
sh: 0.005 - 0.008

Which leads me to the conclusion that bash is faster at displaying.
So I'm quite puzzled by the 'promise' my friend gave me versus what I see here.

Not only does bash seem faster (with the above scripts at least; I'm aware they are very small), but the spread between the fastest and slowest run is more than double for Python, while for bash it is less than double.

Although I could usually trust him blindly on technical stuff, I'm not doing that (for anyone, for that matter) if it means I have to start from 0 and drop my 'baby'...

Especially since I just rewrote the core components with speed/performance/optimization in mind: I could decrease the time needed to display the 'basic config script' from 1.6 sec (sometimes 2.2) to less than 0.3 sec on average.

The screenshot is meant as an example of how TUI looks and what I intend to display.
It's that white/blue stuff and those # | and | # borders; note that the variables are read using another script/application that is part of TUI.

So I would like to ask you guys, with your collected experience: IS Python faster than bash at displaying info to the terminal?

Thank you in advance
Regards

PS: I could not attach the screenshot while the post was not yet posted (please notify an admin).

Let's see the difference between what bash and python do when you tell them to do nothing:

$ strace bash 2>&1 </dev/null | grep "^open"

open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libreadline.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libncurses.so.5", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/dev/tty", O_RDWR|O_NONBLOCK)     = 3
open("/usr/lib64/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
open("/proc/meminfo", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib64/gconv/gconv-modules.cache", O_RDONLY) = 3

$ strace python 2>&1 </dev/null | grep "^open"

open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/etc/env.d/python/config", O_RDONLY) = 3
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib64/libpython3.3.so.1.0", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libutil.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libm.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib64/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib64/gconv/gconv-modules.cache", O_RDONLY) = 3
open("/dev/urandom", O_RDONLY)          = 3
open("/usr/bin/pyvenv.cfg", O_RDONLY)   = -1 ENOENT (No such file or directory)
open("/usr/pyvenv.cfg", O_RDONLY)       = -1 ENOENT (No such file or directory)
open("/proc/meminfo", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/lib64/python3.3/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
open("/usr/lib64/python3.3/encodings/__pycache__/__init__.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/codecs.cpython-33.pyc", O_RDONLY) = 3
openat(AT_FDCWD, "/usr/lib64/python3.3/encodings", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
open("/usr/lib64/python3.3/encodings/__pycache__/aliases.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/encodings/__pycache__/utf_8.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/encodings/__pycache__/latin_1.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/io.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/abc.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/_weakrefset.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/locale.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/re.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/sre_compile.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/sre_parse.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/sre_constants.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/functools.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/collections/__pycache__/__init__.cpython-33.pyc", O_RDONLY) = 3
openat(AT_FDCWD, "/usr/lib64/python3.3/collections", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
open("/usr/lib64/python3.3/collections/__pycache__/abc.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/keyword.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/heapq.cpython-33.pyc", O_RDONLY) = 3
openat(AT_FDCWD, "/usr/lib64/python3.3/plat-linux", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/lib64/python3.3/lib-dynload", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
open("/usr/lib64/python3.3/lib-dynload/_heapq.cpython-33.so", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib64/python3.3/__pycache__/weakref.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/reprlib.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/copyreg.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/site.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/os.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/stat.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/posixpath.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/genericpath.cpython-33.pyc", O_RDONLY) = 3
openat(AT_FDCWD, "/usr/lib64/python3.3", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
open("/usr/lib64/python3.3/__pycache__/sysconfig.cpython-33.pyc", O_RDONLY) = 3
open("/usr/lib64/python3.3/__pycache__/_sysconfigdata.cpython-33.pyc", O_RDONLY) = 3
openat(AT_FDCWD, "/usr/lib64/python3.3/site-packages", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/lib64/python3.3/site-packages", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3

$

Python is a much bigger language -- it has to do a whole lot more to load than BASH. That doesn't mean it's slow once it's actually loaded... See how long either language takes to print 100,000 lines. I bet they'd be close to the same (since it's mostly going to be limited by I/O speed).
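One rough way to test that (just a sketch; the python command name and the 100,000 count are assumptions about your setup):

$ time for ((i = 0; i < 100000; i++)); do echo "line $i"; done > /dev/null
$ time python -c 'for i in range(100000): print("line %d" % i)' > /dev/null

With the interpreter start-up paid only once, the two should land much closer together than your 4-line test suggests.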

Python would be much better than BASH because Python will probably let you use the ncurses console library. That alone would probably make it faster for your needs -- replacing a lot of cumbersome things with simple calls.

Thank you, that sounds very much like what my friend said... but since TUI is not an application by itself, but a package of multiple executables (scripts; keeping it interpreted is a MUST!), that would mean the libs/modules would be loaded upon each call of a 'function', such as the blue line, the white line, and the non-colored ones...

The argument of 'once the libs are loaded' is very valid, but to my understanding not applicable to TUI, as it's not a standalone application, but more like a framework of multiple (25 as of now) commands/scripts.

Or am I missing something important here?

You have a separate executable for each and every little task? Such as, for instance, draw_line.sh, so that to draw a line you do ./draw_line.sh row col row col? No wonder it's slow.

Imagine it this way... If this were a Visual Basic script, it'd be loading Visual Basic, calling the draw-line routine, and quitting Visual Basic each time you drew a line. This is not going to be fast no matter how you cut it, in any language.

You should put these routines into functions instead. Load once, use many times. This could make it literally hundreds to thousands of times faster.
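A minimal sketch of the difference (draw_line here is hypothetical, not part of TUI):

# slow: every call starts a new shell process
./draw_line.sh 1 1 1 40
./draw_line.sh 2 1 2 40

# fast: define once, call many times within a single process
draw_line() {
    local row1=$1 col1=$2 row2=$3 col2=$4
    printf 'line from %s,%s to %s,%s\n' "$row1" "$col1" "$row2" "$col2"   # placeholder for the real drawing code
}
draw_line 1 1 1 40
draw_line 2 1 2 40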


Yes, you should definitely look at converting those 25 or so scripts into functions within one script. It shouldn't require many changes to the code, and the performance improvement should be immediately evident.
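A sketch of what that conversion could look like (file and function names are made up for illustration):

# tui-lib.sh -- the 25 commands become functions in one file
tui_header() { :; }   # real drawing code goes here
tui_title()  { :; }
tui_echo()   { :; }

# a user script sources the library once, then calls the functions directly
. /path/to/tui-lib.sh
tui_header "My Script"
tui_title  "Configuration"
tui_echo   "Some value"

Every call after the single source is just a function call within the running shell, with no new process started.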

Also, with some careful redesign, the performance of your bash script could end up very close to Python's for most functions.

I'm sure many people here can help in optimising bash code, but it may involve you posting some of your work here.

As an example, most shell screen-based I/O uses tput, which is an external program and costs a lot to call from bash.
A good idea is to avoid using $(tput rmso) everywhere in your printf/echo statements;
instead, store this string once at the start of your script, like BOLD_OFF=$(tput rmso), and then use $BOLD_OFF everywhere else.
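For instance, a small sketch of caching a few sequences up front (which capabilities you cache depends on what your script actually uses):

# run tput once per sequence, at startup
BOLD_ON=$(tput smso)
BOLD_OFF=$(tput rmso)
RESET=$(tput sgr0)

# later: plain variable expansion, no external calls per printed line
printf '%sSection title%s\n' "$BOLD_ON" "$BOLD_OFF"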


The only place I use tput is as a fallback if $COLUMNS is not set, since that value is required to print the output according to the currently available columns/width.

Although I've seen several scripts using tput as their core output, I was used (from batch files) to using echo for output.
Shortly after I joined this forum, I figured out that echo might not behave the same on all systems, and changed to printf instead.
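For what it's worth, that was a good move: echo's options (-e, -n) and escape handling differ between shells and systems, while printf behaves consistently and adds formatting features echo does not have. A tiny sketch:

# portable plain output
printf '%s\n' "plain line"
# field widths, which help with column/border layouts
printf '%-20s|%s\n' "left-padded" "next column"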

An executable for each and every task? Yes and no...
The core/key components are:

  • tui-printf (the absolute core! it is even the base for the following:)
  • tui-echo
  • tui-header (the blue one)
  • tui-title (the white one)

If you look at the screenshot, the 'lower' output also uses tui-value-get, which returns either all variable names of a file (all lines not starting with # and containing a string like VARNAME=, reported without the =) or just the value of the provided varname (e.g. tui-value-get -l CONFFILE, or value=$(tui-value-get CONFFILE VARNAME)).
So it is called 8 times in the screenshot above: 1 time to list all the VARIABLES, and 1 time per variable (7) to be read/displayed.
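For readers who don't know TUI, a rough, hypothetical sketch of what such a value-getter boils down to (the real tui-value-get may be implemented quite differently):

# list all variable names in a config file (lines not starting with #, of the form NAME=...)
grep -v '^#' "$CONFFILE" | grep -o '^[A-Za-z_][A-Za-z_0-9]*=' | tr -d '='

# read the value of a single variable
grep "^${VARNAME}=" "$CONFFILE" | head -n 1 | cut -d '=' -f 2-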
So to print those 20 lines, I'm calling/using:

  • 1 x tui-header
  • 4 x tui-title
  • 15 x tui-echo
  • 8 x tui-value-get (in a for-loop)

and it 'only' needs 0.433 seconds, that is around 0.021 secs per line (0.015 per command) on average. To my feeling that is fast already, at least considering that 28 commands were executed (some of which rely on sed/grep/awk combinations) and all but tui-value-get rely on tui-printf.
And keep in mind, it is always printing a full line, using all available width of the terminal, not only a few chars/words per line.

Part of the first 'performance rewrite' was also to place several vars into the environment, set via /etc/profile.d/tui.sh. Also, tui-printf was using a custom function 'printx' which had to be sourced first; changing that completely to tui-printf was a speed increase of about 0.250 secs.

Initially, like 2-2.5 years ago, I had a folder containing several scripts providing functions.
But back then TUI was part of another script package, completely 'involved', not separate at all.
Since TUI is meant as an 'interface framework', I wanted the users/scripters to simply call the function they want, instead of sourcing just 'everything'.
So by now, the scripts only source files upon real need; for example, username/email/preferred licence/licence URL are only read/sourced by tui-new-script.

Well, I'm learning Python now anyway, for future projects at least.
This thread is merely to figure out whether TUI has to be rewritten for the 5th or 6th time :eek: :mad:

Code examples:
tui (to config the vars)
tui-printf (absolute core function)
tui-value-get

So, writing this in Python would decrease the time used (those 0.433 s from the screenshot) down to 0.090 or even as little as 0.043 s for all those 20 lines?
(My friend promised a 500-1000% speed gain, which I can hardly imagine; a 5-10x gain on 0.433 s would indeed land around 0.087-0.043 s.)

And if I may ask along the way: what do you think of this TUI framework?

EDIT:
I'll keep learning Python; maybe I'll understand your arguments better later on.

I repeat: Calling executables repeatedly is not going to be fast no matter what language it's in.

Writing your code properly is going to give a much bigger improvement than learning a 'faster language'.

I had a quick look at tui-printf and it looks pretty good. You can get rid of 2 subshell calls:

replace WIDTH=$( [ -z $COLUMNS ] && tput cols || printf $COLUMNS ) with WIDTH=${COLUMNS:-$( tput cols )}

replace EMPTY="$(printf '%*s' $WIDTH)" with printf -v EMPTY '%*s' $WIDTH

Of course, if you made it a function definition and sourced it in your main script(s), you would save another subshell call.
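Put together, a sketch of those two changes in context (variable names taken from the post above):

# use $COLUMNS when the shell provides it; call tput only as a fallback
WIDTH=${COLUMNS:-$(tput cols)}

# build a WIDTH-wide blank string without a subshell (printf -v writes straight into the variable)
printf -v EMPTY '%*s' "$WIDTH"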