Compiler/Runtime uses of sizeof

Ignoring other considerations for a moment, and speaking in general: would there be a difference in result (in the compiled .o or in execution) between:

A.

   strncpy( a, b, sizeof(a) );

vs.

B.

   c = sizeof(a);
   strncpy( a, b, c );

My general understanding (at least I think this is my understanding) is that when something is inherently fixed in size, the compiler inserts the literal value of the item's size at compile time, as opposed to leaving an implicit run-time function call.

If my understanding is correct, then in scenario A we'd have strncpy( a, b, 4 ) as one line of executable code, compared to two in B: c = 4; strncpy( a, b, c ).
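For concreteness, something like this is what I have in mind (b, its contents, and the sizes are just illustrative):

#include <string.h>

int main(void)
{
        char a[4];
        char b[] = "abc";
        size_t c;

        /* A. one step */
        strncpy(a, b, sizeof(a));

        /* B. two steps */
        c = sizeof(a);
        strncpy(a, b, c);

        return 0;
}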

Is that not correct?
Am I missing something?
Thanks in advance for any comments.
Geo. Salisbury
Long Valley, NJ

Ignoring other considerations makes it impossible to give you a useful answer.

  1. Have you included <string.h>? If not, what function prototype did you supply for strncpy(), if any?
  2. Have you included <sys/types.h>?
  3. How is a declared?
  4. How is c declared?
  5. What programming environment are you using?
1. & 2. Yes, string.h and sys/types.h have been included.
3. An example a might be char a[8].
4. An example c might be int c = sizeof(a).
5. Using gcc on RedHat Linux boxes.

What I was after is what, if any, would be the virtue of using:

c = sizeof(a); strncpy( a, b, c );

vs.

strncpy( a, b, sizeof(a) );

I'm not a C maven (I can kludge along given good consult), but it seems to me that putting the sizeof value into a separate int adds overhead while not altering the execution result.

I appreciate that we're talking fractions of a nanosecond and negligible amounts, but I'm interested in clarifying the precept.

Using:

int c = sizeof(a);

is wrong, but won't actually hurt you unless a contains more bytes than fit in an object of type signed int. If you're using a separate object to store the result of the sizeof operator, its type should be size_t, not int.
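For instance, a minimal corrected sketch (the names a and b are just illustrative):

#include <stdio.h>
#include <string.h>

int main(void)
{
        char a[8];
        char b[] = "example";

        size_t c = sizeof(a);   /* size_t, not int: sizeof yields a size_t */
        strncpy(a, b, c);

        printf("copied %zu bytes: %s\n", c, a);
        return 0;
}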

Note that if you are in a subroutine that has been passed a pointer to an array of characters, to be copied into an area of memory pointed to by another pointer to an array of characters, you also have to be given the size of the destination array, as in:

char *my_copy(char *from, char *to, size_t to_size) {
        return(strncpy(to, from, to_size));
}

because using:

        return(strncpy(to, from, sizeof(to)));

gives you the size of the pointer to, not the size of the array of characters that to points to.
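A small sketch that makes the pitfall visible (the 8-byte pointer size assumes a 64-bit platform such as x86-64 Linux):

#include <stdio.h>

static void show(char *to)
{
        /* Inside the function, to is only a pointer, so sizeof(to)
           is the size of a pointer (typically 8 on 64-bit Linux),
           no matter how big the caller's array is. */
        printf("sizeof(to)   = %zu\n", sizeof(to));
}

int main(void)
{
        char dest[128];

        printf("sizeof(dest) = %zu\n", sizeof(dest));   /* 128: the real array size */
        show(dest);                                     /* 8: just the pointer */
        return 0;
}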

But, as long as the compiler knows the size of the destination object as in:

#include <string.h>
...
int main() {
        char a[128], b[]="source string", *ret;
        ...
        ret = strncpy(a, b, sizeof(a));
        ...
}

there is no need to create a variable to hold the result of the sizeof operator before calling strncpy() .

I have to bail on this for now - will pick it back up tomorrow after considering your comments.
Thx.
Geo.

---------- Post updated 12-15-15 at 05:19 PM ---------- Previous update was 12-14-15 at 05:50 PM ----------

Finally able to return ...

The ability of 'c' to hold the sizeof(a) would not be a concern, as 'a' is defined as an eight-character, fixed-length string (char a[8], for example).

All is happening in a closed domain of a running application. The present method in place is the one-step strncpy(a, b, sizeof(a)). The two-step approach came up as supposedly "better", but it struck me as accomplishing nothing different while "costing" the teensy bit of the size of the default int.

We'll leave the strncpy... as is with the sizeof... as one of the arguments.
Thanks for your thoughts.
Geo.

You seriously don't need to worry about the overhead of an extra line of code.
Let's do some rough guess calculation here...
What have you got, a 2 GHz processor? That's 2,000,000,000 cycles per second.

Let's say that extra line adds 20 cycles to the program. That's 10 nanoseconds, one hundred-millionth of a second.
Reading a few bytes from a disk may well take on the order of milliseconds.

If you think it makes the program clearer then do it. I routinely add extra steps and variables to make my code more readable, and I deal with vast amounts of 24-hour streaming data.
And my stuff flies.

It's I/O that causes bottlenecks.

FWIW,
The original query was not overtly based on concerns of overhead or performance per se but, rather, on possible "technical differences" in the compiled result.

The illustrated fragments using sizeof as an argument were in place in an area that was already selective in execution, so time and volume did not come into play.

A discussion had started on the merits of an argument vs. a variable and the posting here was more for enlarging the set of opinions. I, too, will often include additional steps etc. in order to make a sequence more clear or to be able to include some step-wise commentary.

The upshot was we (I<g>) left the use of sizeof as an argument in place, principally because we didn't have to do anything (Management 101, first precept: do nothing) and because the use was in a block that was already well commented, leaving no ambiguity.

Thanks.
Geo. Salisbury
Long Valley, NJ

char a[8];
c = sizeof(a);
strncpy( a, b, c );

If optimization is enabled, GCC will optimize away the second line of this code and pass the constant 8 to strncpy() directly.
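One way to see this for yourself (a sketch; the file and function names are made up, and the exact assembly varies by platform and GCC version):

/* sizes.c -- compile with:  gcc -O2 -S sizes.c
   then compare the two functions in sizes.s: the size argument
   should show up as the same constant 8 in both (GCC may even
   expand such a small, fixed-size copy inline). */
#include <string.h>

char a[8];
char b[32];

void one_step(void)
{
        strncpy(a, b, sizeof(a));
}

void two_step(void)
{
        size_t c = sizeof(a);   /* folded to the constant 8 at compile time */
        strncpy(a, b, c);       /* c itself is optimized away */
}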

Thanks - that's pretty much what the takeaway has been.

In the code blocks here that gave rise to the conversation, the sizeof was expressed directly in the statements using it, not set into separate variables.

It seems that, for all practical purposes, the compiler treats sizeof as a placeholder for the literal value of the size of the referenced item. That's good enough and is as far as we need to take it.

Thanks again.
Geo. Salisbury
Long Valley, NJ

It hardwires it at compile time. As such, it gives a completely literal, unchanging result. (The one exception: for a C99 variable-length array, sizeof has to be evaluated at run time.)
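For completeness, a tiny sketch of that one exception, where the array's size is only known at run time:

#include <stdio.h>

int main(void)
{
        char fixed[8];
        int n = 8;              /* pretend this value arrived at run time */
        char vla[n];            /* C99 variable-length array */

        printf("%zu\n", sizeof(fixed));  /* compile-time constant: 8 */
        printf("%zu\n", sizeof(vla));    /* evaluated at run time: 8 here */
        return 0;
}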

Thanks for re-stating that it is a literal value.
That's what had been indicated throughout the last couple of posts.
Done (again)