HPUX & Top Command

Hello,

I'm a newbie to HPUX so please be patient :slight_smile:

I'm looking at top and it only shows 3 of the 8 CPUs: 0, 4 and 5. But the box has 8 CPUs according to the 'machinfo' command.

There are 4 processes using 10% "%CPU" each, but it shows the box as 90% idle...???

Also, I don't see any metric for the percentage of time spent waiting on disk I/O. We're running Oracle and I'm trying to evaluate how many parallel processes we can suitably run.

I'm accustomed to Solaris - can anyone shed some light on this?

Thank You! :slight_smile:

Never seen that one before.

On my HP-UX it shows the average across all processors, not a total idle figure.

Try using 'glance' instead of top; it shows current, average and high usage.

I'm glad to hear it's not just me. I wonder if top only showing certain CPUs is a sign that something is wrong.

We're on version B.11.23, and
"glance" doesn't seem to be available:

$ which glance
no glance in w/bin /usr/ccs/bin /usr/openwin/bin /usr/bin /usr/ccs/bin /usr/contrib/bin /usr/contrib/Q4/bin /opt/hparray/bin /opt/nettladm/bin /opt/fcms/bin /usr/contrib/kwdb/bin /usr/bin/X11 /opt/graphics/common/bin /opt/upgrade/bin /opt/ipf/bin /opt/cfg2html /opt/resmon/bin /opt/wbem/bin /opt/wbem/sbin /opt/sas/bin /opt/sec_mgmt/bastille/bin /opt/dsau/bin /opt/dsau/sbin /opt/firefox /opt/gnome/bin /opt/ignite/bin /opt/mozilla /opt/perl/bin /opt/sec_mgmt/spc/bin /opt/ssh/bin /opt/hpsmh/bin /opt/thunderbird /opt/gwlm/bin /opt/sfm/bin /usr/contrib/bin/X11 /usr/local/bin w/bin

On mine glance is in /opt/perf/bin/glance

Thank you - I checked /opt/perf/bin/glance but it's not there. Oh well, thanks anyway! :slight_smile:

Hiya,

I do not think glance comes built in; it needs to be installed separately.
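If you want to confirm whether the performance tools are even on the box, something like the below should show it (I'm assuming the bundle/product names contain "Glance" or "Perf"; the exact names vary by Operating Environment):

$ swlist -l bundle | grep -i -e glance -e perf
$ swlist -l product | grep -i glance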

I'm a bit tied up with a server issue here, but the thread below from our forum should help you, I guess.

-DB

Thank you very much for that! :slight_smile:

Hi mj62,

Glance would be a good option. Still, you can try the following command, which gives per-CPU output:

# sar -q -M 2 2
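If you want per-CPU utilization rather than run-queue figures, my understanding is that -M combines with -u the same way it does with -q (check sar(1M) on your box to be sure):

# sar -u -M 2 2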

--UniRock

Hello,

Thank you very much for that command. The output, strangely, still only shows 3 CPUs!! Numbers 0, 4 and 5. Just like top does.

HP-UX server1 B.11.23 U ia64 10/22/08

12:11:07      cpu  runq-sz  %runocc  swpq-sz  %swpocc
12:11:09        0      0.0        0
                4      1.0       50
                5      0.0        0
           system      1.0       17      0.0        0
12:11:11        0      0.0        0
                4      1.0       50
                5      0.0        0
           system      1.0       17      0.0        0

Average         0      0.0        0
Average         4      1.0       50
Average         5      0.0        0
Average    system      1.0       17      0.0        0

$ machinfo
CPU info:
Number of CPUs = 8

Any ideas?

Thanks :slight_smile:

Hi mj62,

Can you post the output of the following commands:
#model
#uname -a
#machinfo
#ioscan -fkC processor

We might get a clue !!

It may also be the case that your server has dual-core CPUs, so if you have four dual-core CPUs, machinfo will show them as eight CPUs,
or
it might be that some CPUs are disabled in your server.

If they are disabled, a reboot or certain software may be required to enable them; it depends on the hardware and OS version.

-UniRock

$ model
ia64 hp superdome server SD32B

$ uname -a
HP-UX server-1 B.11.23 U ia64 <machine-id> unlimited-user license

$ machinfo
CPU info:
Number of CPUs = 8
Clock speed = 1598 MHz
Bus speed = 533 MT/s
CPUID registers
vendor information = "GenuineIntel"
processor serial number = 0x0000000000000000
processor version info = 0x0000000020000704
architecture revision: 0
processor family: 32 Intel(R) Itanium 2 9000 series
processor model: 0 Intel(R) Itanium 2 9000 series
processor revision: 7 Stepping C2
largest CPUID reg: 4
processor capabilities = 0x0000000000000005
implements long branch: 1
implements 16-byte atomic operations: 1
Bus features
implemented = 0xbdf0000020000000
selected = 0x0020000000000000
Exclusive Bus Cache Line Replacement Enabled

Cache info (per core):
L1 Instruction: size = 16 KB, associativity = 4
L1 Data: size = 16 KB, associativity = 4
L2 Instruction: size = 1024 KB, associativity = 8
L2 Data: size = 256 KB, associativity = 8
L3 Unified: size = 12288 KB, associativity = 12

Memory = 8142 MB (7.951172 GB)

Firmware info:
Firmware revision = 9.22
FP SWA driver revision: 1.18
IPMI is supported on this system.
ERROR: Unable to obtain manageability firmware revision info.

Platform info:
model string = "ia64 hp superdome server SD32B"
machine id number = <machine id>
machine serial number = <serial number>

OS info:
sysname = HP-UX
nodename = server-1
release = B.11.23
version = U (unlimited-user license)
machine = ia64
idnumber = <id>
vmunix _release_version:
@(#) $Revision: vmunix: B11.23_LR FLAVOR=perf Fri Aug 29 22:35:38 PDT 2003 $

$ ioscan
sh: ioscan: not found.

Thanks :slight_smile:

From 'machinfo', one thing is clear about your CPU:

Model name: Itanium 2 9000 series Dual-Core.
Code name: Montecito

Your CPU is dual-core. So you have 8 CPUs = 32 cores.

Top should show at least 8 CPUs :confused:
Is it a vpar/npar system?

-UniRock

uh oh - vpar/npar? You're suddenly out of my realm of knowledge.

I think I need to bring in an actual HP-UX admin to have a look at the box now that we have confirmed something is very weird. I had originally raised the question wondering if I was running the wrong version of top or if HP had some kind of VM which was only allowing me 3 CPUs.

If we get to a solution (or if you'd like me to run any commands) I will post the results here for the benefit of others in the future.

Apologies for the error in the last post:

WRONG ONE:
Your CPU is dual-core. So you have 8 CPUs = 32 cores.:frowning:

CORRECT ONE :
Your CPU is dual-core. So you have 8 CPUs = 16 cores

-UniRock

parstatus and vparstatus will give you an idea whether you're dealing with physical (nPar) or virtual (vPar) partitions here. If you are running a vPar, it's possible that CPUs have been removed using vparmodify.
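Typical invocations would be something like the below; the flags are from memory, so check the man pages, but -w should print the partition number of the local nPar and -v gives the verbose per-vPar view:

# parstatus -w
# vparstatus -v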

Hi mj62,

Well, to begin, let me tell you what type of server you have from the output you posted. It should be an HP Integrity Superdome 32-way server (SD32B). It can have up to 32 processor sockets, four per cell, with each cell taking either single-core Intel® Itanium® 2 processors or dual-core Intel® Itanium® 2 processors (up to 64 processor cores total when using dual-core processors).

Per the machinfo, it seems that you have 8 processors.

If this were my setup, I would configure it with nPars and vPars on top of them. I'm not saying that has to be the case here, but the typical plan for a Superdome/cell-based server is to carve it into nPars and run vPars on each nPar.

If you would like to check whether you have vPars configured on top of nPars, you can check if the vPar daemons are running, using ps -ef | grep <daemon name>:
vpard
vphbd
vconsd - not applicable in your case, as your Superdome is Itanium-based.

If yes, vPars should be configured, I believe. You can also check with the swlist command to see whether a vPar package is installed. Also, FYI, the vPar license is issued on a per-processor basis.
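A quick way to run both checks together might be the following (vpard/vphbd are the daemons listed above; matching the product name on "vpar" / "VirtualPartition" is an assumption on my part, since the exact package name can vary):

$ ps -ef | grep -e vpard -e vphbd | grep -v grep
$ swlist -l product | grep -i -e vpar -e virtualpartition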

FYI, parstatus for nPars and vparstatus for vPars should work as well. You can use vparmodify to modify the parameters, but collect more information about the structure before modifying anything, because if it is already configured, a change touches a lot of things: memory, I/O chassis, core I/O, processor sets, memory granules, LBAs and so on...

Coming back to the scenario.
Have you heard of the term iCOD? It stands for "Instant Capacity on Demand". It works like this: when you purchase a Superdome with the required cell boards, you can buy a cell board with extra processors that you won't be using initially. If you later have an immediate need for, say, 2 more CPUs, you pay HP for a license and use those CPUs for the specified period. Once the license expires, they are deactivated and you're back to normal.

An important concept alongside nPars and vPars is Processor Sets. Processor sets interoperate with the other partitioning mechanisms in the HP Partitioning Continuum and
also with the HP Utility Pricing offerings (iCOD, pay-per-use, and pay-per-forecast). Psets are fully integrated into PRM and can be configured using the PRM GUI.
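If Processor Sets are in use, the psrset command should list the configured psets and which processors belong to each; the exact option (plain psrset vs. psrset -i) is an assumption on my part, so see psrset(1M):

# /usr/sbin/psrset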

So I believe this may also be a possibility here, given that this is a Superdome server.

To get a clearer picture of the situation, check the below (a couple of commands that should help gather this are sketched after the list):
= How many cell boards do you have in your server?
= Is there a vPar configured on top of an nPar?
= If yes, how many N partitions are there, and how many V partitions on each?
= Also, how many processors are allotted to each vPar?
= How many processors are floating?
=> Floating processors are those that can move across vPars without a configuration change.
= What is the licensing situation for your vPars?
These things should help you IN CASE you have a vPar/nPar setup on your server; ELSE, I am sorry.. lol.

I don't think a version problem or incompatibility with top or machinfo is the case either; in that situation they simply would not work, rather than reporting wrong information. As far as I know, anyway.

I can help you with more information on whatever I know, should the need arise.

Cheers!
-DB

Thank you so much for your detailed post on the subject.

I've learned a great deal in the process.

After lots of digging, it turns out that we are running vPars. The vPars are 'instances' I didn't even know about. When I log into those instances I can see the other CPUs. It looks like the one Superdome has been 'split' into 3 separate machines with 3, 3 and 2 CPUs respectively.

That explains it!

Many Thanks! :slight_smile: :slight_smile: :slight_smile: