Boy, is the shell powerful.

Reading replies to questions, as an amateur, I have learnt a lot from you pros on here.
The shell in any of its guises is seriously powerful.

With so many transient and resident commands at one's disposal, is there anything,
non-GUI, that cannot be done inside a default shell and terminal?

Why bother with Python, (I do like Python BTW), or even ANSI C, or any other language
for that matter...

The more I get into the Scope the more impressed I am with shell scripting...

So to you pros, I say thanks; just by searching and observing this site alone I have
learnt a great deal...

Although Debian and PCLinuxOS were my main OSes, (still used daily), OSX 10.7.5 has
now taken their place for my dev' work. Vista - UGH! - and Windows 7 are used by my wife,
and by me only for WinUAE and anything Python-wise that is platform independent and
needs testing...

Once again chaps and chapesses, your knowledge of shell stuff makes me feel - oh - so -
amateur... ;oD

The deep dark secret of high-level languages is that, even if you treat C/C++ as a dark-ages language and eschew it for java/python/shell/CAML/INTERCAL, nearly all the "Good Stuff" your programs depend on (like SOX, and BASH itself) is actually written in it. It's a powerful enough language to build other efficient languages and language add-ons, a rare feat; hardly anything would be much good without it.

Second, the shell is a good interface to the UNIX system, but a poor interface to other systems where not everything is a file... Imagine you didn't have utilities like SOX or an external /dev/dsp; how simple would your oscilloscope be then?

Shells are bad at networking. Even though BASH and KSH have some networking built into them these days (via a faked /dev/tcp), try building a network server without C help... You can't do it. Too many things are missing.
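The client side is actually doable with that faked /dev/tcp; here is a minimal sketch (the hostname is just an example, and it only works if your bash was built with network redirections enabled):

```bash
#!/bin/bash
# Minimal /dev/tcp sketch: a bare HTTP HEAD request as a *client*.
exec 3<>/dev/tcp/example.com/80                           # open a TCP connection on FD 3
printf 'HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3
cat <&3                                                   # read whatever the server sends back
exec 3<&-                                                 # close the descriptor
```

The listening side is where it ends, though: the shell has no bind()/listen()/accept() of its own, so a real server needs nc, socat or a proper program underneath.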

Another thing the shell is poor at is performance. It's great at summoning other programs to do its work for it... not the greatest if you have to sum 3 million numbers in a flash with no outside help.
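To make that concrete, here is a rough sketch (timings vary a lot by machine, and the numbers are generated just for the test):

```bash
#!/bin/bash
# Rough comparison: summing 3 million integers in pure bash versus one awk process.
seq 1 3000000 > /tmp/nums.$$

time {                                   # pure shell: one arithmetic step per input line
    total=0
    while read -r n; do
        (( total += n ))
    done < /tmp/nums.$$
    echo "shell total: $total"
}

time awk '{ s += $1 } END { print "awk total:", s }' /tmp/nums.$$

rm -f /tmp/nums.$$
```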

It's highly subject to system limits, like the length of a command line and the maximum length of an environment variable. You happen to get conveniently big ones on Linux and OSX, but you aren't always so lucky.
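The limits are at least easy to query before you hit them; a small sketch (the paths are only examples):

```bash
# Checking the limits, and the classic workaround for the classic symptom.
getconf ARG_MAX        # max bytes for the argv + environment of one exec'd command
getconf LINE_MAX       # line length the POSIX text utilities promise to handle

# "grep foo /some/huge/dir/*" can die with "Argument list too long";
# letting find/xargs feed the names in safe-sized batches avoids it.
find /some/huge/dir -type f -print0 | xargs -0 grep -l foo
```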

If you don't have access to install things on your system, you will find the shell very limiting. No sox.

Also, you are using nonstandard capabilities of the BASH shell (e.g. dealing with binary data). Imagine you were forced to use an ancient Bourne shell, not bash. I don't think you'd consider it quite as fantastic.
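To put a face on "nonstandard capabilities", here are a few of the things such a script leans on, none of which an ancient Bourne shell has (just a sketch):

```bash
#!/bin/bash
# A handful of bashisms an old Bourne shell simply lacks (illustrative only).
IFS= read -r -n 1 byte < /dev/urandom              # one raw byte into a variable (NUL bytes excepted)
[[ $byte == $'\x7f' ]] && echo "got a DEL byte"    # [[ ]] tests plus ANSI-C $'...' quoting
wave=( 0 3 7 12 7 3 0 )                            # arrays...
echo "${wave[3]}"                                  # ...and indexed element access
echo $(( RANDOM % 256 ))                           # $RANDOM and built-in $(( )) arithmetic, no expr fork
```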


Hi Corona688...

I would have to develop the hardware to suit an existing port.
Like this, for example, using the AMIGA A2100 parallel port;
it is a GIF animation of such an animal that required HW to be built using standard tools...

http://wisecracker.host22.com/public/SCOPE.GIF

The whole project required 4 AMIGA floppies and is PD on AMINET.

I found this type of limitation using RANDOM, which is why I switched to /dev/urandom instead.
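For anyone following along, the usual sticking points with RANDOM are that it is only a 15-bit value and that not every shell provides it at all; a quick sketch of the difference (output values are examples only):

```bash
echo "$RANDOM"                    # bash builtin: 15 bits, 0..32767
od -An -N2 -tu2 /dev/urandom      # two raw bytes as one value in the full 0..65535 range
od -An -N16 -tu1 /dev/urandom | tr -s ' ' '\n' | grep .   # a stream of 0..255 samples to plot
```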

Already in that situation with the AMIGA shell. Found a solution and created an executable from it...

Aminet - dev/src/Filter.shell.txt

There is always a back door... <wink - wink>

Yes and no: from a theoretical point of view the shell language is Turing-complete, so in principle there is nothing you couldn't do given enough resources.

From a practical standpoint, these resources are limited, and the theoretical possibility of doing something doesn't imply it is also wise to do so. High-level languages, as Corona already noted, do not have to offer more possibilities; it is enough that they are usually better (faster, smaller, ...) at doing it, which makes them preferable for some tasks. They generally also require more effort (a shell script is usually written faster than the equivalent C program because the shell's "building blocks" are bigger), which in turn makes the shell languages preferable for some other tasks.

This might or might not be a good idea. "/dev/random" is the front end of a driver which collects noise from device drivers (and some other next-to-random events; implementations vary) to fill a pool of "randomness" (the "entropy pool"), which can then be read (via "/dev/random") as a source of high-quality random numbers. As this pool is read it is depleted, and once it runs empty, read access blocks for as long as it takes to refill the pool.

So, yes, "/dev/random" might be slow, but it delivers constant high quality. "/dev/urandom", on the other hand, works similarly, but the random numbers come from a pseudo-random-number generator. The device will not block like "/dev/random", but once you exhaust the entropy pool you get random numbers of considerably lower quality than from "/dev/random".

So both these devices have their uses, and you will have to decide which one is sufficient for your work. For high-quality cryptography "/dev/urandom" probably won't suffice, while as a source for the dice in a game program it might be perfectly OK.
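In day-to-day use both devices are read the same way; a little sketch (Linux paths assumed):

```bash
head -c 16 /dev/urandom | od -An -tx1      # never blocks; quality drops once the pool is drained
head -c 16 /dev/random  | od -An -tx1      # may block until enough entropy has been gathered
cat /proc/sys/kernel/random/entropy_avail  # Linux only: a rough look at the entropy pool
```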

I hope this helps.

bakunin

PS: The Amiga had a very powerful script language itself: (A)REXX. It serves the same purpose on IBM mainframes that the shell serves on Unix systems. You might want to give it a try.

Hi bakunin...

Nice reply; my need for /dev/urandom was purely to generate a "random"-looking waveform for the Scope project, so as to be able to see the code working when in default DEMO mode...

I have done a great deal using ARexx, including getting Arduino to talk to Classic AMIGAs...

Someone asked me to do the link below because he didn't know how, and frankly I enjoyed the challenge. I thought about doing an assembler/disassembler for it but realised my amateur limitations... (In other words, I am not good enough.)

Aminet - dev/src/MEM-EDIT_AREXX.lha

I also think that shell programming is underrated. But when it comes to serious application layers running on top of large databases, shell will quickly be abandoned. There is a right tool for every job and the shell is not suitable for every job.

We used to say "C is an assembly language looking for a CPU!" You can ask the CPU to do all general tasks with it, and can package any CPU features into calls or optimized libraries. Since GCC is free and open, it has been the first language out of the box for many CPUs.

If you want the best of both, one friend loved PERL so much he went back and rewrote all his shell and C tools. PERL and JAVA let you call C/C++ APIs, so you can have it both ways. C++ can call JAVA bits, too. Python is out in the same direction as PERL, but just a bit crazier!

My customers want me to maintain what they have and write things they can get support for, so favoritism is not on my menu! Just save me from csh!

Maybe the large database apps will determine the next language. The COBOL-esque nature of SQL keeps suggesting its demise, even as the set-language problem definition allows massive parallelism. Maybe the final winner will be a database of persistent objects finding each other in OO ways.

Even a tech carrying the best scope often has a knife in his pocket. C will persist as the fast, simple and 'close to assembly' tool. OO will continue to thrive and grow for things it does well.


Hi figaro...

Absolutely. I am new to it, and it probably shows too, but try to do even half of what the Scope does in a Windows, (TM), Command Prompt.
It is possible, but it is seriously difficult.

Ah, but as an amateur I see it differently. How many coders like me are interested in single files that are 100s or 1000s of megabytes in size?

Hi DGPickett...

I have veered away from ANSI C and C++ and now spend most of my non-shell scripting time in Python. I try to write code that works from version 1.4.x to current on Classic AMIGAs, (usually a stock A1200), Linux, Windows and now OSX. Most of it is on code.activestate.com but some is on AMINET...
This I find a mental challenge, as there is so much to know and learn about the various platforms.

Isn't that supposed to be the purpose of Windows PowerShell?

I've never used it, I'm genuinely wondering.

Hi verdepollo...

I have been experimenting with PowerShell and it is still highly limited.
Commands are one thing, but manipulating text on screen is something else again...
(Unless, of course, I have missed something.)

What the *NIX family of OSes takes for granted is not possible inside a default 32-bit Windows command prompt...

The escape sequences - such as multiple colours per text line and plotting at any point within the command window (there is a small example of these at the end of this post).

There are even HW problems: _refreshing_ the CLI window at _high speed_ causes a pseudo-flicker inside the CLI window...

There are workarounds for some aspects, but generating the same image as the Scope project is not possible because you cannot have more than 2 colours at once in a CMD.EXE window.

I suspect you know this already, though...

It surprises me that the Classic AMIGA can use many of the *NIX escape codes inside its default CLI, yet Windows is oh, so, backwards...
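For reference, these are the kinds of sequences meant above; a tiny sketch that works in any ANSI/VT100-style terminal (which a stock CMD.EXE historically is not):

```bash
#!/bin/bash
printf '\033[2J\033[H'                    # clear the screen and send the cursor home
printf '\033[10;20H\033[31m*\033[0m'      # plot a red '*' at row 10, column 20
printf '\033[12;1H\033[32mgreen \033[33myellow \033[36mcyan\033[0m\n'   # several colours on one line
```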

Well, PERL and JAVA are pretty transparently portable, Windows included, or you can get CygWin and bash away.

Code that will work fine in UNIX may have problems in windows. Something that crops up often enough in cygwin is failed fork() calls... UNIX will let you create thousands of short-lived processes without complaining; windows will hiccup once in a while. I consider this a bug in windows. It's not 10,000 live processes after all, or even 10,000 dead ones; it's 10,000 processes that have already died and been reaped, but there's still some buffer or queue deep inside windows which it hasn't bothered to clear, and instead of doing so it gives you the occasional inexplicable error. You can reorganize your code to make fewer processes and it "usually" works, but there's no way to be sure it always will.


Yes, CygWin fork/exec is slow at best. I have not had fork failures, but I keep my VM tuned to accommodate the unexpectedly big. And I script to get the most out of every fork: no grep shell-out just to examine the contents of a variable on every line. On UNIX, fork() is remarkably quick, and they have a special cut-down fork for the exec-immediately cases. The exec takes 10 times as long as the fork, so what people call fork time is really exec time.
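The "no grep shell-out" point in concrete form (a small sketch in bash/ksh-style syntax):

```bash
line='ERROR: disk almost full'

# Costly inside a big loop: an echo plus a grep fork for every line examined.
if echo "$line" | grep -q 'ERROR'; then echo "hit (fork way)"; fi

# Free: the shell's own pattern matching, no fork at all.
case $line in *ERROR*) echo "hit (case way)" ;; esac
[[ $line == *ERROR* ]] && echo "hit ([[ ]] way)"
```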