Help needed: Solaris 10 & 11 CPU / load average report covering 24 hours

I need to capture the following data on an hourly basis, without cron job scheduling, on Solaris 5.10/5.11:

  1. Load averages
  2. Total number of processes
  3. CPU state
  4. Memory
  5. Top 3 process details

Is there any third-party tool available for this?
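For reference, one way to get all five items without cron is a long-lived loop started once by hand. The sketch below is an assumption-laden example, not a tested tool: the log path, the 3600-second interval, and the `vmstat`/`prstat` calls are typical for a Solaris box but adjust them to your environment.

```shell
#!/bin/ksh
# Hourly snapshot collector: start it once (e.g. under nohup) -- no cron needed.
# LOG path and 3600-second interval are assumptions; change to suit.
LOG=/var/tmp/hourly_stats.log
INTERVAL=3600

snapshot() {
    echo "==== $(date) ===="
    uptime                                 # 1. load averages
    echo "Processes: $(ps -e | wc -l)"     # 2. total number of processes
    vmstat 1 2 2>/dev/null | tail -1       # 3+4. CPU state and free memory
    prstat -n 3 1 1 2>/dev/null | head -8  # 5. top 3 processes (Solaris)
}

# Only loop when started with the "run" argument, so the function can
# also be sourced and called once by hand.
case "${1:-}" in
run)
    while :; do
        snapshot >> "$LOG" 2>&1
        sleep "$INTERVAL"
    done
    ;;
esac
```

Start it in the background with `nohup /path/to/collect.ksh run &` and it appends one block of output per hour; stop it by killing the process.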

Hi,

There are lots of third-party tools available, some paid for, some free. I have had some success with Xymon, as it is aware of zones and is easy to install and configure. There are several other free tools as well, such as Nagios, along with paid-for tools like BMC Patrol.

Regards

Dave


CPU, memory and load averages can be derived from sar.

Check:

 crontab -l sys 

and note the lines with the sa1 and sa2 scripts, which generate sar output in the destination directory.

You can then use various tools to import the data and draw graphs and such, or run sar <required options for cpu/mem etc.> -f safilename directly.
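As a quick sketch of reading those files back (the /var/adm/sa path and the per-day-of-month file naming are the Solaris defaults; the flags are the standard sar CPU, run-queue, and memory reports):

```shell
#!/bin/ksh
# Read back today's sar data file. sa1/sa2 keep one file per day of month,
# e.g. /var/adm/sa/sa15 for the 15th (default Solaris location).
SADIR=/var/adm/sa
SAFILE="$SADIR/sa$(date +%d)"

if command -v sar >/dev/null 2>&1 && [ -f "$SAFILE" ]; then
    sar -u -f "$SAFILE"   # CPU state: %usr / %sys / %wio / %idle
    sar -q -f "$SAFILE"   # run-queue length, a load-average proxy
    sar -r -f "$SAFILE"   # free memory pages and free swap
fi
```

Note that sar does not record per-process detail, so the "top 3 processes" item still needs prstat or similar alongside it.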

Hope that helps.

Regards
Peasant.

Thank you for sharing; I'm new to this domain.


Hi Peasant

How can I check the CPU utilization without cron jobs?

Hi Peasant,

While sar is a useful tool, it does have some limitations - I suggested the more user-friendly tools in preference to sar because of the specific nature of the original request.

All the requirements are already available as standard in Xymon (and in many other tools), so rather than present the requester with a partial solution I plumped for the whole package.

A combination of several standard Unix utilities will give the desired output, but that is the point: it's a combination. All of the above data could be gleaned from judicious use of "top", "ps", "uptime" and some other utilities - but, as per the user's request, it would have to run without the aid of cron, so it becomes a manual process. Using something like Xymon removes the cron requirement. If sar is set up then it is also fair to say that the user has no need for access to cron - but the information is more difficult to access, and it falls down on the top-three-processes request. :slight_smile:

Regards

Dave

I like to keep my servers clean of various agents and services, since most of them are closed-source hacks, often run under the root account.

Of course, not all products are in that category, but I tend to avoid as much third-party software as I can.

There is a small program, I believe it is called kSar or something; I just copy the sar files over and draw the graphs - enough for me.

Regards
Peasant.

Hi Peasant,

I do know what you mean about some of the open-source stuff. What I will say is that, as an admin, you have to take reasonable care to ensure that what you are doing is safe. We also tend to forget that the very nature of the Unix OS, in all its flavours, is that it allows all users to use the system, provided they have the appropriate access rights.

Too often we tend to regard the server as our personal fiefdom, ignoring the basic premise that we are here to administer the system on behalf of all its users. It may well be that in this particular case the admin will refuse to deploy a tool like this, for any of several reasons. They may already have the required functionality in place, in which case this is a user-education matter. They may not allow the users at this location access to this type of data, in which case there is a need for communication. Or any number of other reasons - in essence, this is down to the system admin.

Regards

Dave

Well, I'm running mostly servers with Oracle databases and NFS clusters on LDoms, so no users (except DBAs and system engineers) are on them.

Everything is kerberized and logged on the domain controllers.

Nothing has access to the hypervisors except a few trusted people.

As for giving users access to various filesystems, that can be accomplished safely with ACLs or chroot (the one built into ssh is nice), without compromising security.

On development/test systems I tend to relax things a bit and let people monitor how things work.

Production is, and should be, deterministic - e.g. you will not have performance problems if you tested everything beforehand on the same configuration.

Unfortunately, today's practice is to have various tools monitoring everything, since code is being hyper-produced and pushed into production with less and less testing, resulting in production machines being brought to their knees.

Sorry for going off-topic; we should stop now. If you want to debate, my PM is open :wink: