Max file size can't exceed 2 GB

We have SunOS 5.9 and we are running a backup process (a Pro*C program) that uses a call like this:

fprintf(fp, "%s;%s;%s;%s;%s;%ld;%ld;%ld;%ld;%s;%s;%s;%d;%s;%s;%s;%ld;%s;%s;%s;%ld;%ld;%s;%ld;%s;%ld;%s;%s;%c%c",
        x_contrno, x_subno, x_b_subno, x_transdate, x_last_traffic_date,
        BillAmt_s, x_billamount_int, x_duration, x_act_duration,
        x_third_party_no, x_call_type, x_tariffclass, x_party_flag,
        x_chargetype, x_vol_group, x_billcode, x_no_packets,
        v_SUBSCR_TYPE.arr, v_AREA.arr, v_TARIFF_GROUP.arr, file_id0,
        d_GROSS_AMOUNT, v_UPDATED, d_RATE_POS, v_DISC_TYPE.arr,
        d_OTHER_AMOUNT, v_OTHER_GROUP.arr, v_TARIFF_PROFILE.arr,
        0x0D, 0x0A);

and fp is declared as:

FILE *fp;  (fp is a pointer to a file)

My problem is that the file can't exceed 2 GB in size.

What options are available to remove that file-size limit?

And I need to know whether this limit depends on the filesystem itself.

What kind of filesystem is it? If it is VxFS (Veritas file system) you can set the largefiles flag in /etc/vfstab and reboot. If you want to increase it on the fly there is a command, I believe it is called fsadm, which can do that. Check your Veritas man pages for details, but I know you can do it.

If it is UFS I'm not sure if you can change it once the filesystem has been created.

I have a UFS filesystem.

Please, if anybody can help, I would appreciate it.

Mounting

/homes on /dev/dsk/c1t0d0s6 read/write/setuid/intr/largefiles/xattr/onerror=panic/dev=1d8000e on Wed Feb 11 20:59:31 2004

means the filesystem supports large files.

Regarding the C program...

Post the output from:
isainfo -b
isainfo -v
isainfo -kv
All of these need to say "64". Read the man page for your compiler. Are you putting it in 64-bit mode? If not, then you must be using the facilities described in "man lfcompile".

To see if the filesystem supports large files, just try it. Use some program that you did not write to attempt to create a large file. "man largefile" lists the OS commands that are large-file aware.

[Billtest]tabs:/homes/tabs>isainfo -b
64
[Billtest]tabs:/homes/tabs>isainfo -v
64-bit sparcv9 applications
32-bit sparc applications
[Billtest]tabs:/homes/tabs>isainfo -kv
64-bit sparcv9 kernel modules
[Billtest]tabs:/homes/tabs>

Please, can you help me with an example (a small C program that writes more than 2 GB to a file) using fprintf or fwrite?

This is an important issue here, since we need to back up the local call-details table.

I don't have 64 bit Sun with a compiler. But again, your compiler has a man page. It will tell you how to use 64 bit mode. Or "man lfcompile" will tell you how to write largefiles while in 32 bit mode. You're going to need to read those man pages and follow the instructions.

The first option, using 64-bit mode, is the best. You will probably need nothing more than one extra option on the cc command line. The option may be phrased "use sparcv9 code" or something like that.

I guess that I can do this on a 32-bit SPARC and use the technique in the lfcompile man page. First I need a program:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        FILE *fp;
        int i;

        fp = fopen("bigfile", "w");
        if (fp == NULL) {
                perror("bigfile");
                exit(1);
        }
        for (i = 1; i < 107374190; i++)
                fprintf(fp, "line num=%09d \n", i);
        fclose(fp);
        exit(0);
}

This will try to make a file that is a few bytes larger than 2 GB (107374189 lines of 20 bytes each is 2147483780 bytes). After I run it, "tail bigfile ; sleep 4" shows:

line num=107374180
line num=107374181
line num=107374182
line nu

I need that sleep so my prompt does not overwrite the last line, which does not end with a newline character. This is where you are: you have a program that can't write a large file.

Now, the "man lfcompile" page says to use "getconf LFS_CFLAGS" to get the flags I need to compile the program. I see that "getconf LFS_CFLAGS" returns:
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
but I will just go with:
/opt/SUNWspro/bin/cc `getconf LFS_CFLAGS` makebigfile.c -o makebigfile
to compile my program. All that matters is that I feed those new options into the compiler one way or another. This really seems rather trivial, but this must be where you were stuck.

Now when I rerun the program, bigfile ends with:
line num=107374186
line num=107374187
line num=107374188
line num=107374189

Source code :

#include <stdio.h>
#include <string.h>

#define CHUNKSZ 1048576

int main(int argc, char **argv)
{
        int imb;
        FILE *fp;
        char buf[CHUNKSZ];

        memset(buf, 0, CHUNKSZ);

        if (argc < 2) {
                fprintf(stderr, "Usage: %s temp_file_name\n", argv[0]);
                return 1;
        }

        fp = fopen(argv[1], "w");
        if (fp == NULL) {
                fprintf(stderr, "cannot open %s\n", argv[1]);
                return 1;
        }

        for (imb = 0; imb < 11000; imb++) {
                if (imb % 100 == 0) {
                        fprintf(stderr, ".");
                        fflush(stderr);
                }
                fwrite(buf, CHUNKSZ, 1, fp);
        }

        fclose(fp);
        return 0;
}

------------------------------------------------------------

I compiled this source your way yesterday... it reached this size:
-rw-r--r-- 1 tabs tabs 1897119744 Feb 27 12:56 omar.txt

I also added -xarch=v9 to generate a 64-bit executable; it reached the same size:

-rw-r--r-- 1 tabs tabs 1897119744 Feb 27 12:56 omar.txt

Sorry, I just discovered that df -k shows:

/dev/dsk/c1t0d0s6 4129290 4088000 0 100% /homes

and my path is mounted on this filesystem, which is 100% full, so the file stopped growing because the partition ran out of space, not because of the 2 GB limit.

Thanks. Yes, I will test your program and mine on a larger partition to resolve this issue.

I apologize to you....

It is working! :wink: