How to get uptime in milliseconds?

In my application, I need to get the system uptime in milliseconds from several different methods.

time() - returns only seconds.
/proc/uptime - gives seconds plus a truncated fractional part, but it needs to be parsed to extract the value and convert it to milliseconds.

Any other suggestions?

Since you are describing /proc/uptime, which is Linux-specific:

$ awk '{print $1*1000}' /proc/uptime

or in C:

#include <stdio.h>
#include <stdlib.h>

// return 0 on error, else uptime in ms
double ms_uptime(void)
{
    FILE *in = fopen("/proc/uptime", "r");
    double retval = 0;
    char tmp[256] = {0x0};

    if (in != NULL)
    {
        // first field of /proc/uptime is uptime in seconds, with fractions
        if (fgets(tmp, sizeof(tmp), in) != NULL)
            retval = atof(tmp);
        fclose(in);
    }
    return retval * 1000;
}
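
If you just want to sanity-check it, a trivial caller (assuming the function above is in the same source file) would be:

int main(void)
{
    printf("uptime: %.0f ms\n", ms_uptime());
    return 0;
}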

Note that time(2) returns seconds since the epoch, not since system boot (the Linux sysinfo(2) call does return uptime, but only in whole seconds).
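
If whole seconds would do, a minimal sysinfo(2) sketch (Linux-only, field names per the man page) looks like this:

#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;

    if (sysinfo(&si) == 0)
        // si.uptime is seconds since boot; multiplying by 1000 only fakes ms resolution
        printf("uptime: %ld s (%ld ms, second resolution only)\n",
               si.uptime, si.uptime * 1000L);

    return 0;
}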

Not terribly helpful, I know...

Which do you think is faster:

jim mcnamara's version, or mine?

#include <cstdlib>
#include <fstream>
#include <string>

// Return system uptime in milliseconds, or 0 on error.
unsigned long ms_uptime()
{
    std::string line;
    std::ifstream m_file("/proc/uptime");

    if (m_file) {
        // first field is uptime in seconds, with a fractional part
        std::getline(m_file, line);
        double param = std::atof(line.c_str());
        m_file.close();

        return static_cast<unsigned long>(param * 1000);
    }
    return 0;   // -1 would wrap around in an unsigned return type
}
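
If you really want to settle the speed question, measure it. A rough timing harness, just a sketch that assumes POSIX clock_gettime and the C ms_uptime() above (repeat the same loop for each candidate):

#include <stdio.h>
#include <time.h>

extern double ms_uptime(void);   // the function being timed

int main(void)
{
    struct timespec t0, t1;
    const int iterations = 10000;
    volatile double sink = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iterations; i++)
        sink += ms_uptime();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed_ms = (t1.tv_sec - t0.tv_sec) * 1000.0
                      + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("%d calls in %.2f ms (%.4f ms/call)\n",
           iterations, elapsed_ms, elapsed_ms / iterations);
    return 0;
}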

Please post exactly which Operating System and version you have and what programming language or Shell you are using for your "application".

In many unix systems the first entry in the file /etc/utmp (/etc/utmpx in Solaris) is the "system boot" time in seconds since the epoch. See "man utmp". It is the same file which is read by the unix commands "uptime" and "who -b" (to name but a few).
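
On systems with the utmpx interface, a sketch like the following (POSIX getutxent(); the record's exact position in the file varies by platform) finds that boot record:

#include <stdio.h>
#include <utmpx.h>

int main(void)
{
    struct utmpx *ut;

    setutxent();
    while ((ut = getutxent()) != NULL) {
        if (ut->ut_type == BOOT_TIME) {
            // ut_tv holds seconds and microseconds since the epoch
            printf("boot: %ld.%06ld seconds since the epoch\n",
                   (long)ut->ut_tv.tv_sec, (long)ut->ut_tv.tv_usec);
            break;
        }
    }
    endutxent();
    return 0;
}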

gettimeofday(2) has more precision than time(2).
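
For example (a sketch; if you also have the boot time from the utmpx record above, subtracting the two gives uptime to roughly millisecond resolution):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval now;

    // microsecond resolution, versus whole seconds from time()
    gettimeofday(&now, NULL);
    printf("now: %ld.%06ld seconds since the epoch\n",
           (long)now.tv_sec, (long)now.tv_usec);
    return 0;
}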

OK, why?

Systems take much longer than a millisecond to boot up. Measuring uptime to millisecond precision is like measuring the depth of the Earth's atmosphere to meter precision - the edge isn't well-enough defined to make measuring to that accuracy meaningful.

I sort of agree with "achenle", mainly because unix systems that need accurate time will have additional hardware to keep time and a suite of routines to access that alternative clock - or even to accurately timestamp external events (like someone crossing the finish line).

Systems with an ordinary "PC style" real-time clock will normally be set to use NTP to keep their clocks in sync with the rest of the world. Without adjustment, the clock drift can be as much as a minute a month.

Sometimes you do need accurate time if you are indexing real time events in a database or trying to generate a decent spread of unique filenames.
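
For instance, a timestamp-plus-PID filename (an illustrative sketch only, not collision-proof across hosts; the "event_" prefix and .log suffix are arbitrary):

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    struct timeval tv;
    char name[64];

    gettimeofday(&tv, NULL);
    // seconds.microseconds plus the process id keeps names unique enough locally
    snprintf(name, sizeof(name), "event_%ld_%06ld_%d.log",
             (long)tv.tv_sec, (long)tv.tv_usec, (int)getpid());
    printf("%s\n", name);
    return 0;
}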

Before servers came with real-time clocks, the Operating System would keep track of time by counting clock ticks since boot.
See: Real-time clock - Wikipedia