Comparing two numbers with decimal points

$ time awk -v v="$x" 'BEGIN{split(v, a, "."); print a[1]}'


real    0m0.023s
user    0m0.000s
sys     0m0.002s

Not much different from the pipe, as a subshell adds very little time compared to that of a new process.
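For illustration, the two forms side by side (x is a hypothetical value; the `time` keyword here assumes bash, which times the whole pipeline):

```shell
x=3.14   # hypothetical value

# Via a pipe: echo plus awk, two processes
time echo "$x" | awk -F. '{print $1}'

# Via -v: awk alone, no pipe
time awk -v v="$x" 'BEGIN { split(v, a, "."); print a[1] }'
```

Both print the integer part; only the process count differs.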

What maintenance? It's a black box function that was written once, many years ago and hasn't been touched since.

Its use is a one-liner.

No, I now write one line of code: fpmul ...

Absolute nonsense.

The shell is a very good programming language. It is the only one I need.

There are also things that python and perl don't do well.

File globbing and external commands are seamless in the shell; not so in other languages.

But it is ridiculously inefficient to use them on a single string, which is how I often see them used.

How often have you seen something like:

int=$(echo $x | awk -F. '{print $1}')
dec=$(echo $x | awk -F. '{print $2}')

Such coding can slow a script to a crawl.
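The same split can be done with no external process at all, using POSIX parameter expansion (a sketch, with a hypothetical value):

```shell
x=12.345   # hypothetical value

int=${x%%.*}   # remove the longest suffix matching ".*"  -> 12
dec=${x#*.}    # remove the shortest prefix matching "*." -> 345

echo "$int $dec"
```

No awk, no pipe, no subshell; the expansions run entirely inside the shell.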

Speed of execution is still very important.

The difference between a command that executes immediately and one that takes a second or two is the difference between a good user experience and a bad one.

Development of shell programs can be just as fast as writing perl or python, and just as legible.

That's your problem, isn't it? If I can run it in 0.003s, that says something about our hardware differences. Also, it's not even 1s. To a user, it's NOT A BIG DEAL.

What if it breaks in the future, or one needs to modify something to suit one's needs? He will have to read the code eventually.

But you need to write many lines before you could do that. The a > b or a < b syntax is almost universal in every language and understandable.
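For reference, a one-line decimal comparison in awk (values are hypothetical; the exit status carries the result so the shell's if can use it directly):

```shell
a=12.13 b=12.14   # hypothetical values

# awk compares floats natively; exit !(a < b) maps true to exit status 0
if awk -v a="$a" -v b="$b" 'BEGIN { exit !(a < b) }'; then
    echo "$a < $b"
else
    echo "$a >= $b"
fi
```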

Doesn't mean it's the only one I (or others) need.

There are many other things that they do well over the shell. We don't just do file globbing all our lives. What we do more often than globbing is parsing: big files, small files, formatting reports. In such cases, and as you mentioned the shell is no good for big files, a better tool is more advisable.

Of course, I wouldn't use that kind of programming either. I would just do it all in awk (or a real programming language).

Not so much on today's processors.

A second or two doesn't make a difference. Nobody cares (maybe except you). If it's more than 30 minutes, maybe it will make a difference.

How so? Example?

Are you an academic? Or are you a *nix administrator?

I had that doubt after posting the example. Thank you for pointing it out.

I like this approach.
Thanks again!

ksh93 can do it too and it is very fast.
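For completeness, a sketch of what that looks like: ksh93's arithmetic handles floating point natively, so the comparison needs no external command at all (requires ksh93; values are hypothetical):

```shell
#!/bin/ksh93
a=12.13 b=12.14   # hypothetical values

# (( ... )) arithmetic in ksh93 works on floats, unlike POSIX sh/bash
if (( a < b )); then
    echo "$a < $b"
fi
```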

Good to know. And so are others that have this capability.

No use, see below:

bash-3.00$ echo "12.13>12.14"|/usr/xpg6/bin/bc; echo "12.14>12.13"|/usr/xpg6/bin/bc

syntax error on line 1, teletype
syntax error on line 1, teletype

:( :(

---------- Post updated at 04:21 AM ---------- Previous update was at 04:18 AM ----------

Thanks man, your solution worked fine. :D

Well, happy hunting for a bc that actually works on Solaris then.
In the meantime, use ksh93, which as you stated works well.

Note that as cfajohnson already pointed out, expr supports only integer arithmetic (hence my expr examples are wrong).

With bc on Solaris and the relational expressions I get syntax error too.
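A possible workaround: POSIX only requires bc to accept relational operators inside if/while, so wrapping the comparison in an if statement may work where the bare expression does not (verified with GNU bc; untested on the Solaris bc, so treat it as a sketch):

```shell
a=12.13 b=12.14   # hypothetical values

# bc prints 1 when the condition holds, and nothing otherwise
lt=$(echo "if ($a < $b) 1" | bc)

if [ "$lt" = "1" ]; then
    echo "$a < $b"
fi
```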

That's not the case.

The basic advice regarding response times has been about the same for thirty years [Miller 1968; Card et al. 1991]:

* 0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.

* 1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.

* 10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect. 

Response Time Overview

That book was published in 1994, right? How many years has it been? Computer technology has evolved since then; CPUs got faster. Don't be a speed freak!

OK, let's keep to the topic. You produce a function to compare/calculate/do floating point, but that takes you how many lines of code? Tools like bc and awk, which are in the Single UNIX Specification, are already available in most distros and ready to use (of course, don't talk about embedded systems). The fact that they are ready to use means it takes me less than 10 seconds to code up a decimal comparison statement, as compared to having to reinvent the whole thing, or search for, download and install your custom-made function from your website. Then one has to figure out how to use your function, or ponder whether it is stable enough not to have bugs in the future.
Also, not everyone who installs *nix knows about your function, BUT they more or less do know about standard utilities like awk/bc. Not everyone uses the same shell (can your function work in most shells?), but awk/bc are almost always there.

As already shown, the time taken to run floating point math using awk vs. your function is comparable: not even 0.010 secs apart. And the awk syntax is standard in every version of awk (and pretty standard across multiple languages as well), so everyone understands what <, >, <= or >= means.

On a side note, you mention that bash is a good programming language; then why is it that, until now, there is no support for floating point math? (I am under the assumption that bash 4.0 still doesn't support floating point math; if it does already, please correct me.) Why is it that using bash's while read loop to read big files is still so slow?

In conclusion, about response time with respect to "comparing 2 numbers with decimals": it doesn't matter if you use external tools!

I think it's about time we stop, as it's going OT.

.1 seconds is still .1 seconds. That will never change.

Many people use Linux on sub-1GHz and even sub-.5GHz computers.

To use it requires one line of code, just as for bc or awk.

My function is ready to use, too.

The function will work in any standard Unix shell.

That depends on the speed of the machine and, perhaps more importantly, how much activity there is on the machine. A call to an external command will be slowed down much more than a pure shell function.

I need floating point arithmetic so rarely, that I don't find its lack to be a handicap.

If you care about the user experience, it does matter.

You are going OT. We are just talking about comparing two decimals!! And there is definitely NOT MUCH DIFFERENCE in speed using external tools against your function!!

On a busy machine, the difference will be magnified.

In a script where it is called many times, there will be a big difference. When combined with other inefficient code (e.g., unnecessary external commands), the difference is magnified even more.

No, it will not make much difference with respect to just comparing decimals!

Yes, your script will call your function many times too! I would certainly like to see an example of this speed difference on many float calculations using your function vs. doing them all in awk.