File size limit exceeded

When I run my C program, which dynamically creates the output file, the program stops after some time with the error "File size limit exceeded", even though my working directory has free space. Can anyone please help me out?

It seems that you have not handled SIGXFSZ.

Check with the following program to find the maximum file size allowed on your system; if needed, increase the limit with setrlimit().

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit limitbuf;

    if (getrlimit(RLIMIT_FSIZE, &limitbuf) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* rlim_t is not an int, so cast before printing */
    fprintf(stderr, "Soft limit in bytes: %lld\n", (long long)limitbuf.rlim_cur);
    fprintf(stderr, "Hard limit in bytes: %lld\n", (long long)limitbuf.rlim_max);
    return 0;
}
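If the soft limit turns out to be lower than what you need, a minimal sketch of raising it with setrlimit() might look like this (the function name raise_fsize_limit is my own; note a process may raise its soft limit only up to the hard limit, and only root can raise the hard limit itself):

```c
#include <sys/resource.h>

/* Sketch: raise the soft RLIMIT_FSIZE up to the hard limit.
   Returns 0 on success, -1 on failure. */
int raise_fsize_limit(void)
{
    struct rlimit lim;

    if (getrlimit(RLIMIT_FSIZE, &lim) != 0)
        return -1;
    lim.rlim_cur = lim.rlim_max;    /* soft limit up to the hard cap */
    return setrlimit(RLIMIT_FSIZE, &lim) == 0 ? 0 : -1;
}
```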

If this is a case of dumping logs, I would suggest changing the logging file name after a threshold time or file size has been reached.
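For example, a small helper that stamps the rotation time into the file name could look like this (make_log_name is a hypothetical name, just to sketch the idea):

```c
#include <stdio.h>
#include <time.h>

/* Sketch: build a log file name that embeds the current local time,
   so each rotation produces a distinct name, e.g. debug.20040223-110900 */
void make_log_name(char *buf, size_t bufsz, const char *base)
{
    time_t now = time(NULL);
    struct tm tmv;

    localtime_r(&now, &tmv);
    snprintf(buf, bufsz, "%s.%04d%02d%02d-%02d%02d%02d",
             base,
             tmv.tm_year + 1900, tmv.tm_mon + 1, tmv.tm_mday,
             tmv.tm_hour, tmv.tm_min, tmv.tm_sec);
}
```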

hope this helps.

Thanks for your reply. I got the current and maximum byte limits. But from what I've read, there are limits that can be set using ulimit, yet when I do a "ulimit -a", the "file size" is already set to unlimited. Does the signal SIGXFSZ have some size already defined because of which it is throwing this error?

This sounds like maybe it is a "large file" problem.

UNIX has an actual physical limit on file size determined by the number of bytes a signed 32-bit file pointer can index: 2^31 - 1 bytes, about 2 GB, for older filesystems or runtimes.

Depending on your system, you may or may not have large file support. Try

 man fopen64 

if you have an older unix.

If you can't find whether or how large file support exists for your box, consider closing the first file just before it reaches 0x7fffffff bytes in length and opening a new one.
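If your system does have large file support, defining _FILE_OFFSET_BITS=64 before any header makes off_t (and fseeko()/ftello()) 64-bit, so plain fopen() can address files past the 0x7fffffff mark. A sketch, assuming a glibc-style LFS environment:

```c
/* Must come before any system header (or pass -D_FILE_OFFSET_BITS=64
   to the compiler) so off_t and the fseeko/ftello family are 64-bit. */
#define _POSIX_C_SOURCE 200112L
#define _FILE_OFFSET_BITS 64
#include <stdio.h>

/* Returns the size of an open stream in bytes, or -1 on error. */
long long stream_size(FILE *fp)
{
    if (fseeko(fp, 0, SEEK_END) != 0)
        return -1;
    return (long long)ftello(fp);
}
```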

Thanks Jim.
Now I am printing my output in 5 files. This is the size status of my output files:

-rw-rw-rw- 1 usr users 53296243 Feb 23 11:09 output
-rw-rw-rw- 1 usr users 191 Feb 23 11:09 error
-rw-rw-rw- 1 usr users 2701503 Feb 23 11:09 pipe
-rw-rw-rw- 1 usr users 255 Feb 23 11:09 summary
-rw-rw-rw- 1 usr users 2147483647 Feb 23 11:09 debug

The size of the debug file is the same as the maximum number of bytes I got, i.e. 2147483647 (0x7FFFFFFF).
Even though the debug file occupies all the maximum bytes, the other files are still created and are pretty large in size. That means the physical limit is exceeded here.
Then it should give me that message on UNIX.
But I get this error on Linux now.

Your debug file is the problem. You are exceeding the file size limit, just like the error message said. The error message is not the issue; the issue is your code. It is trying to do something it is not able to do.

If you have to have one super-large file, then see if your flavor of Linux supports 64-bit file pointers (large files), and change your code accordingly. It may involve using a different Linux filesystem as well; I do not know.

Otherwise, stop writing to the file "debug" when it gets big, and open a second one, "debug2"; when that gets big, write to "debug3", and so on.

Thanks Jim.
Actually my code is too big to make any changes now, but I have tried your other suggestion to create more debug files.
I seriously don't know whether my Linux flavor supports 64-bit file pointers or not. My system is Red Hat Linux 7.3. Can you please tell me whether it supports them or not?

I don't know. I do know that v 8.0 of RH has a version of perl that supports large files, so the rest of the OS must also comply.

Try man fopen64

Thanks a lot.
I also don't know whether my system supports it or not.

If I assume my system does not support it, can you suggest a way to change my code to suit the system, other than using a different Linux filesystem?
How do I make the file pointers support opening a big file?

Because you didn't expect to do this at the outset, put a few global
variables at the top of the code, then add two small functions:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* global variables */
unsigned long byte_count=0;
int debug_file_count=0;
char *debug_file_name="/path/debug";
FILE *dbg=NULL;

/* ..........................  further on --- */
void dbg_file_open(void)
{
    char tmp[256]={0x0};

    sprintf(tmp,"%s%d",debug_file_name,debug_file_count++);
    dbg=fopen(tmp,"w");
    if(dbg==NULL)
    {
        perror("Error opening debug file");
        exit(EXIT_FAILURE);
    }
    byte_count=0;
}


void write_debug_file(char *string)
{
    char tmp[256]={0x0};
    size_t len=strlen(string);

    if(dbg==NULL || byte_count + len > 0x7fffffff)
    {
        dbg_file_open();
    }
    byte_count+=len;
    fprintf(dbg,"%s\n",string);
}

Thanks.
All the answers were very helpful.
Thank you all.

Jim,

I would like to add a few points.
Prior to opening a new file in dbg_file_open, close the previously opened file; otherwise the last batch of writes done to the previous file is not guaranteed to be flushed.
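i.e. something like this (reusing the globals from the earlier post; the path is shortened here so the sketch is self-contained):

```c
#include <stdio.h>
#include <stdlib.h>

/* Globals as in the earlier post (path shortened for the sketch) */
unsigned long byte_count = 0;
int debug_file_count = 0;
char *debug_file_name = "debug";
FILE *dbg = NULL;

void dbg_file_open(void)
{
    char tmp[256] = {0x0};

    if (dbg != NULL)
        fclose(dbg);   /* flush the last writes before switching files */
    sprintf(tmp, "%s%d", debug_file_name, debug_file_count++);
    dbg = fopen(tmp, "w");
    if (dbg == NULL) {
        perror("Error opening debug file");
        exit(EXIT_FAILURE);
    }
    byte_count = 0;
}
```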

Also, what is the need for char tmp[256] in write_debug_file?

You're right about closing the file, my bad. However, 256 appears arbitrary; it is not.
Whatever file record length he/she expects, best practice dictates using a buffer that is declared to be significantly larger.

However - and don't get me going on this - picking lengths that are just one char longer than the record leads to trouble in production systems, if the program deals with data derived from any external source. And the memory savings is not worth the cost of debugging and fixing it later on, because some users did not follow procedures.

If you're worried about memory, which, within reason, is basically not merited except in embedded systems or realtime processing, check your requirements. I seldom see "must run in less than 10MB of memory" as a stipulated requirement. It's the same issue as using floats instead of doubles because it "saves memory" or "is faster".
The "is faster" is debatable and hardware dependent, plus floats are often promoted to doubles. The "saves memory" is usually correct, except that what you gain is not worth what you lose: 9 digits of precision.

My question about tmp[] is not about the allocation of 256.
Simply: where is the variable tmp made use of in write_debug_file? If it's for use at a later point in the program, then the question is self-answered.

Besides, the effort spent debugging a mistake where the developer deliberately missed out the sentinel byte allocation for a variable is cumbersome
and pathetic. Point noted.