Bash function sequence?

OK,

I know a function has to be defined first - in sequence - before it can be used.

So the script has to be built "bottom-up" style, if you'll pardon the expression.

I am running into a problem reusing a function and breaking the sequence.

It would be nice to be able to see the function usage tree as used in the script.

Is there such a thing / an application?

Or - will bash complain when I have multiple definitions of a function to keep them in sequence?

I don't understand what you're trying to do. How does having multiple definitions of a function keep anything in sequence?

If you have defined a function more than once, the last function definition command executed for that function will be the one used when that function is invoked.
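A minimal demonstration of that behaviour:

```shell
# The second definition of greet replaces the first;
# only the definition most recently executed counts.
greet() { echo "first"; }
greet() { echo "second"; }

greet    # prints "second"
```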

Yuck, that sounds like a maintenance nightmare.

It is quite common to see a main function at the top of the script, like this:

main() {
    foo
    bar
}

foo() {
    # code for foo
}

bar() {
    # code for bar
}

# Pass script arguments into main for argument parsing
main "$@"

But you still need to keep the functions in the correct order, i.e. callee before caller.

There is no need to have the functions in strict order. A function just needs to be known when it is invoked. Hence, if you put all the functions at the top of your script, you can arrange them in any order.
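For example, this runs fine even though foo calls bar, which is only defined further down in the file, because nothing is invoked until the last line:

```shell
# foo calls bar, which is defined further down; this works
# because bar exists by the time foo is actually invoked.
foo() { bar; }
bar() { echo "bar was called"; }

foo    # prints "bar was called"
```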

To be honest, I prefer Korn shell over bash any day of the week and twice on Sundays, and this (bash's missing FPATH) is one of the reasons.

Still, it is possible to get some sort of order in bash too: put every function in its own separate file, then use the source command to load them at the beginning of the script. This way you can never have two functions with the same name, because the filesystem enforces distinct entries.
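A minimal sketch of that layout (the lib directory and file names are just placeholders):

```shell
# Hypothetical layout: one function per file under ./lib
mkdir -p lib
printf '%s\n' 'hello() { echo "hello from lib"; }' > lib/hello.sh

# Load every function file at the start of the script
for f in lib/*.sh; do
    . "$f"
done

hello    # prints "hello from lib"
```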

Note that this only pays off in big script projects. Scripts with 50-100 lines are easy to keep an overview of, but once the script grows to several thousand lines it is equally easy to lose track.

I hope this helps.

bakunin


So the bottom line is - I have to clean up my own mess. Oh well...

------ Post updated at 11:14 AM -----

Right church, wrong pew -

sorry, I do not know how my NEW post got here.

OK, I have not yet got a full grasp of this "standard" input / output / error process.

I have managed to append (two) variables to a file.

I know how to reverse their order using tac.

I can see the "standard output" in the terminal - the output from tac.

I'd like to "send" each line to $1 and $2 respectively - actually to an array of positional parameters, but I'll tackle that later.

I know how to do "while read", but I'd like to learn more by using tac directly.

Here is the base I have so far, not much.

echo "$LINENO  reverse file entries FIFO"
tac "$PWD$DEBUG_DIR$DEBUG_MENU"
# outputs lines to terminal FIFO
echo "$LINENO  reverse file entries FIFO"
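
To get the reversed lines into an array (and from there into $1, $2, ...) without a while read loop, mapfile with process substitution works in bash 4+. The file name below is a stand-in for the path in the snippet above:

```shell
# Sample file standing in for "$PWD$DEBUG_DIR$DEBUG_MENU"
printf '%s\n' "first" "second" > menu.txt

# Read tac's reversed output into an array, one line per element
mapfile -t lines < <(tac menu.txt)

# Load them into the positional parameters $1, $2, ...
set -- "${lines[@]}"
echo "\$1=$1  \$2=$2"    # $1=second  $2=first
```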


Hi bakunin...

What a neat idea, I like it...
It makes similar copies of functions in other scripts effectively redundant.

Writing scripts is software development - programming - so all the dos and don'ts of programming apply.

Programming is, in my experience, mostly about bringing your thought process into order. It always helped me to standardise and conventionalise as much as I can: I have a system for naming variables so that the name immediately tells me what it should contain and what type it is (note that shell variables aren't really typed, but if you try to multiply a string by 5 it still leads to an error). I have a naming convention for my functions so I cannot give two functions the same name. And so on, and so on.

Picture programming as building a house, but with the twist that you have to make the bricks first: if your bricks are all neat cubes or cuboids, it is easy to build straight walls and equally easy to foresee whether the wall will hold. If your bricks are all irregular shapes, your walls will be equally irregular, and it might be impossible to tell whether they are going to hold until they come crashing down.

To be honest, I have no idea what your question is. Please explain again. If it is a new question and unrelated to the topic of this thread, please open a new thread. We like to organise our threads so that each is about one topic only.

In Korn shell I have built myself a "library" using the FPATH variable, and I strive to write my functions in a way that lets them be included in this library whenever possible. When I started programming, one of the first things I learned was: always write your functions so that they can be put into a library, and I still try to adhere to that.

I also have a standard header for my function files which you might want to adopt/adapt. I always found it useful to have a standard format for documentation so that I know immediately where to look for specific information (like with man pages). As I use the same header for functions and scripts, I have a "USAGE" part for the user documentation and a "DOCUMENTATION" part for the internal documentation:

# ------------------------------------------------------------------------------
# template.ksh                               template for ksh scripts/functions
# ------------------------------------------------------------------------------
# Author.....: 
# last update: 0000 00 00    by:
# ------------------------------------------------------------------------------
# Revision Log:
# - 0.99   0000 00 00   Revision title
#                       Revision description
#
# ------------------------------------------------------------------------------
# Usage:
#
#     Example:
#
# Prerequisites:
#
# ------------------------------------------------------------------------------
# Documentation:
#
#     Parameters:
#     returns:
# ------------------------------------------------------------------------------
# known bugs:
#
#     none
# ------------------------------------------------------------------------------
# ..............................(C) 2018 bakunin ...............................
# ------------------------------------------------------------------------------

I hope this helps.

bakunin

I can understand that you prefer ksh over bash in general, but I don't see how for this particular case, ksh would give an advantage over bash. I must admit that I'm not proficient at all in ksh, and would appreciate an explanation of this matter.

Thanks Bakunin.
I need to restate that my "primary objective" was and still is to modify an existing bash script.
I started with 2000+ lines and now it has doubled!

I realize I am asking stupid and basic questions, however I have no intention of making a career out of writing bash scripts.

But I am thankful for all the support I have received so far.

It's a bit off-topic, but I'll bend the rules slightly for you, as it is (remotely) connected with the problem here:

In ksh there is a variable FPATH, which works just like PATH does for executables, but for functions. Once set, the shell searches for undeclared functions along these paths, just as it searches for unqualified executables along the contents of the PATH variable. This makes it easy to create a directory of standardised functions used in many scripts - just like a library in high-level languages. I use this mechanism a lot and have about 50 functions in my "library" which I use over and over. It makes organising (and improving!) one's work a lot easier than copying and pasting all the "standard stuff" from one script to the next.
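A sketch of how such a library directory might look (directory and function names are illustrative; the function body is ksh93 syntax):

```shell
# One file per function; with FPATH the file name must
# match the name of the function it defines.
mkdir -p "$HOME/lib/func"
cat > "$HOME/lib/func/to_upper" <<'EOF'
function to_upper {
    typeset -u s="$1"    # -u: ksh uppercases the value automatically
    print -r -- "$s"
}
EOF

# In a ksh script the function is then resolved automatically:
#   export FPATH="$HOME/lib/func"
#   to_upper "hello"     # -> HELLO
```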

Suppose you have 50 scripts into which you have copied and pasted all sorts of functions. Now you come up with an (internal) improvement to one of those functions. Are you going to copy and paste it again through all those scripts? I can improve my library functions easily, and all scripts using them profit from that immediately.

@annacreek
I said it above but you probably missed it: I do not understand what your question is. Please explain again what you want to achieve and I'll gladly help.

I hope this helps.

bakunin