Hello,
I am working on an application that can use a lot of memory, depending on the input. I am also working on more than one processing algorithm.
The program has a long processing time (hours), so it would be nice to be able to monitor the maximum memory footprint of the application during runs on various data sets.
It seems like there should be a Linux tool that can do this from a script, but I don't know of one.
I am running under Windows with Cygwin, so I have most of the Linux toolbox available, but I could do this under openSUSE as well.
Are there any suggestions?
------ Post updated 08-09-18 at 01:19 PM ------
It looks like
/usr/bin/time -v
gives me the output I need.
User time (seconds): 354.48
System time (seconds): 0.00
Percent of CPU this job got: 99%
Elapsed (wall clock) time (h:mm:ss or m:ss): 5:54.98
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 350976
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 1366
Minor (reclaiming a frame) page faults: 0
Voluntary context switches: 0
Involuntary context switches: 0
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 65536
Exit status: 0
Now I need to figure out where all those page faults are coming from. I would appreciate other suggestions if they are out there.
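One other thing I may try, since openSUSE is an option: on Linux, the kernel tracks each process's peak resident set size as the VmHWM field in /proc/<pid>/status, so a small polling loop can log how the footprint grows during a run instead of only reporting the final maximum. A rough sketch, with `sleep 2` standing in for the real program and a one-second poll interval picked arbitrarily:

```shell
# Launch the workload in the background ("sleep 2" is a placeholder
# for the real program), then sample its peak RSS from /proc once a
# second until it exits.
sleep 2 &
pid=$!
while kill -0 "$pid" 2>/dev/null; do
    # VmHWM is the high-water mark of the resident set size, in kB.
    grep VmHWM /proc/$pid/status >> hwm.log 2>/dev/null
    sleep 1
done
```

This won't say *where* the page faults come from, but a timeline of VmHWM would at least show which phase of the run drives the peak.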
Thanks,
LMHmedchem