They want a view of the lowest file time, the highest file time, and then the average across all files. Is this possible? If so, what would the command be, as I have no clue?
Not sure what you mean by getting average values by checking the Docbase processes. They are just the processes for the Documentum repository. Please state your actual requirement, as it is not clear: are you seeing slowness on the content server side or the application server side? Kindly mention all the details in your post.
Above we can see a list of processes. I have been asked to get the average time all the processes are taking: simply adding up all the times and dividing by how many there are. In other words, the average response time the processes are taking.
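If what you are after really is the min, max, and average of the cumulative CPU time column in a `ps` listing, a one-liner like the following could be a starting point. This is only a sketch under assumptions: it uses the GNU/procps `cputimes` output field (cumulative CPU time in whole seconds), and the pattern `documentum` is a placeholder for whatever actually identifies your Docbase processes.

```shell
# Sketch: min / max / average cumulative CPU time (seconds) for processes
# whose command name matches "documentum" (adjust the pattern to taste).
# "cputimes" is a GNU/procps ps field; other ps variants may differ.
ps -eo cputimes,comm | awk '
    /documentum/ {
        if (n == 0 || $1 < min) min = $1   # track smallest time seen
        if (n == 0 || $1 > max) max = $1   # track largest time seen
        sum += $1; n++                     # accumulate for the average
    }
    END {
        if (n) printf "min=%ss max=%ss avg=%.1fs over %d processes\n", min, max, sum/n, n
        else   print  "no matching processes"
    }'
```

Note that this measures CPU time consumed, not response time; if you need transaction response times you would have to get those from Documentum itself, not from `ps`.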
I have to second RavinderSingh13 that the request is highly unclear. The process list you posted shows a sort of "documentum daemon" with PID 20473 and 8 (7 documentum + 1 dm_agent_exec) children thereof, all with different "cumulative CPU time". No "file times" nor "response times" in sight.
Please become way more specific! What processes to include? What numbers to include?
It looks to me like you want to be able to measure performance, probably by transaction request/return times. Documentum originally worked on files of almost any kind, so your asking for "file" times is confusing everyone: they think you mean UNIX file access times. I am guessing there is a communication breakdown here.
Documentum always seemed to me like Ingres (or Oracle) for any kind of digital object: files, metafiles, metadata. Perhaps that helps people answer the question.
I do not know much about Documentum these days. What performance monitoring tools come with the product? Use them to generate data; we can then help you organize and use that data to answer the requirement.