Need to understand CPU capping in Zones

Hi All

I am using the commands below to cap CPU usage in a zone:

# zonecfg -z zone1
zonecfg:zone1> add capped-cpu
zonecfg:zone1:capped-cpu> set ncpus=2
zonecfg:zone1:capped-cpu> end
zonecfg:zone1> commit
zonecfg:zone1> exit
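
For reference, the setting can be verified before booting the zone; it should print something like this (matching the ncpus=2 value above):

# zonecfg -z zone1 info capped-cpu
capped-cpu:
        [ncpus: 2.00]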

My understanding is that zone1 can then use only two CPUs' worth of processing. I ran a cpuhungry shell script to drive up CPU utilization. When I check with the prstat -a command, it shows processes running on more than two CPUs, and when I run psrinfo -v inside that zone it shows 128 CPUs. Please find the prstat output below.

   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 22927 root     4408K 1824K cpu46   34    0   0:00:53 0.1% cpuhungry/1
 22929 root     4408K 1824K wait    16    0   0:00:52 0.1% cpuhungry/1
 22910 root     4408K 1824K cpu32   39    0   0:00:43 0.1% cpuhungry/1
 22912 root     4408K 1824K wait    15    0   0:00:56 0.1% cpuhungry/1
 22909 root     4408K 1824K cpu83   39    0   0:00:38 0.1% cpuhungry/1
 22917 root     4408K 1824K wait    16    0   0:00:38 0.1% cpuhungry/1
 22914 root     4408K 1824K wait    12    0   0:00:39 0.0% cpuhungry/1
 22908 root     4408K 1824K wait    17    0   0:00:44 0.0% cpuhungry/1
 22906 root     4416K 2992K wait    32    0   0:00:40 0.0% cpuhungry/1
 22916 root     4408K 1824K wait    32    0   0:00:45 0.0% cpuhungry/1
 22928 root     4408K 1824K wait    17    0   0:00:51 0.0% cpuhungry/1
 22907 root     4408K 1952K wait     1    0   0:00:48 0.0% cpuhungry/1
 22913 root     4408K 1824K wait    30    0   0:00:42 0.0% cpuhungry/1
 22911 root     4408K 1824K wait    11    0   0:00:36 0.0% cpuhungry/1
 22930 root     4408K 1824K wait    15    0   0:00:34 0.0% cpuhungry/1
 22931 root     4408K 1824K wait    24    0   0:00:40 0.0% cpuhungry/1
 22915 root     4408K 1824K wait    28    0   0:00:34 0.0% cpuhungry/1
 23692 root     4088K 2408K sleep   59    0   0:00:00 0.0% snmpXdmid/2
 23295 root       23M 6552K sleep   50    0   0:00:03 0.0% inetd/4
 23231 daemon   5144K 1688K sleep   58    0   0:00:00 0.0% nfsmapid/3
 23542 root     3264K 2360K sleep   59    0   0:00:00 0.0% automountd/2
 23778 root     6552K 1928K sleep   57    0   0:00:00 0.0% dtlogin/1
 23541 root     2984K 1584K sleep   59    0   0:00:00 0.0% automountd/2
 23220 daemon   2824K 1600K sleep   59    0   0:00:00 0.0% nfs4cbd/2
 23293 root     1720K 1080K sleep   58    0   0:00:00 0.0% utmpd/1
 23243 daemon   2792K 1600K sleep   52    0   0:00:00 0.0% lockd/2
 22743 root       12M 5736K sleep   59    0   0:00:21 0.0% svc.configd/20
 22999 daemon     10M 6048K sleep   59    0   0:00:15 0.0% rcapd/1
 23546 root     4616K 1488K sleep   59    0   0:00:00 0.0% sshd/1
 24546 noaccess  155M   88M sleep   59    0   0:00:55 0.0% java/17
 24407 smmsp      10M 1520K sleep   59    0   0:00:00 0.0% sendmail/1
 22806 root     3496K 2864K sleep   59    0   0:00:00 0.0% bash/1
 22943 root     7344K 4464K sleep   50    0   0:00:00 0.0% nscd/30
 23290 root     2424K 1200K sleep   59    0   0:00:00 0.0% smcboot/1
 22739 root     3024K 1736K sleep   59    0   0:00:00 0.0% init/1
 23294 root     2512K 1560K sleep   59    0   0:00:00 0.0% sac/1
 22741 root       32M   11M sleep   29    0   0:00:10 0.0% svc.startd/13
 23691 root     3720K 1592K sleep   59    0   0:00:00 0.0% dmispd/1
 23216 daemon   3264K 2304K sleep   59    0   0:00:00 0.0% rpcbind/1
 23663 root     2912K 1736K sleep   59    0   0:00:00 0.0% snmpdx/1
 23289 root     2424K 1208K sleep   59    0   0:00:00 0.0% smcboot/1
 23305 root     2856K 1664K sleep   59    0   0:00:00 0.0% ttymon/1
 NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
    44 root       93M   50M   0.1%   0:12:47 0.7%
     1 noaccess  148M   97M   0.1%   0:00:55 0.0%
     1 smmsp    2920K 5824K   0.0%   0:00:00 0.0%
     7 daemon     12M   13M   0.0%   0:00:15 0.0%

If I use the dedicated-cpu syntax instead of capped-cpu, then I see only two processors in both prstat -a and psrinfo -v output.

My server has 2 physical processors with 8 cores each, and each core has 8 threads, which means 128 CPUs as seen by the OS.

Please let me know where I went wrong with the capped-cpu syntax.

I do not see 128 CPUs being used. Here is what the number of CPUs being used means in this case, in informal language:

You add up all of the %CPU figures for everything in the zone, AFTER the first prstat display has refreshed, in a command like prstat 5 5.

When I do that I do not see 128 CPUs. Capping means that if you sum up usage across all 128 CPUs, the total will be less than or equal to 2 CPUs' worth.
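
For example (just a sketch, assuming your load generators are all named cpuhungry), you can let awk total the CPU column of the last sample of a plain-text prstat run:

# prstat -c 5 2 | awk '/^ *PID/ { sum = 0 } /cpuhungry/ { sum += $9 } END { print sum "%" }'

The /^ *PID/ pattern resets the running total at each report header, so only the final (refreshed) sample is printed; with a 2-CPU cap the total should stay at or below roughly 200%.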

If you want to lock 2 CPUs "away from all other zones" - which is what I think you want -
check out pooladm to understand the dedicated-cpu concept.
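
A minimal sketch, in case the pools facility is not yet enabled on your box:

# pooladm -e
# pooladm

The first command enables the resource pools facility; the second prints the in-kernel pool and processor-set configuration.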

Find dedicated-cpu in the zonecfg guide for Solaris 11. This feature is not supported in Solaris 10 global zones AFAIK. We have Solaris 10 branded zones running on top of Solaris 11; the Solaris 11 global zone "grants" dedicated CPUs to the non-global Solaris 10 zones.

How to Configure the Zone (System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones)

Hi Jim

Please find the output below after using capped-cpu with the value set to 1; prstat shows threads on three CPUs. Why is it showing 3 CPUs instead of 1? Could you elaborate on the meaning of "Capping means that you sum up use across all 128 CPUs and it will be less than or equal to 2" - or, in the case below, 1?

bash-3.00# zonecfg -z zone2 info
zonename: zone2
zonepath: /zone2
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
fs:
        dir: /mnt
        special: /u02/mnt
        raw not specified
        type: lofs
        options: []
capped-cpu:
        [ncpus: 1.00]
rctl:
        name: zone.cpu-cap
        value: (priv=privileged,limit=100,action=deny)
bash-3.00#
bash-3.00#
bash-3.00# zlogin zone2
[Connected to zone 'zone2' pts/3]
Last login: Fri Jul 11 09:20:27 on pts/3
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
# bash
bash-3.00#
bash-3.00# ls
bin          dev          export       kernel       mnt          nohup.out    pl
cpuhungry    etc          home         lib          net          opt          pr
bash-3.00# ./cpuhungry &
[1] 1372
bash-3.00# eating the CPUs
created PID 1373
created PID 1374
created PID 1375
created PID 1376
created PID 1377
created PID 1378
created PID 1379
created PID 1380
created PID 1381
created PID 1382
created PID 1383
created PID 1384
created PID 1385
created PID 1386
created PID 1387
created PID 1388
bash-3.00#
bash-3.00#
bash-3.00# psrinfo
5       on-line   since 06/03/2014 16:45:49
6       on-line   since 06/03/2014 16:45:49
7       on-line   since 06/03/2014 16:45:49
8       on-line   since 06/03/2014 16:45:49
9       on-line   since 06/03/2014 16:45:49
10      on-line   since 06/03/2014 16:45:49
11      on-line   since 06/03/2014 16:45:49
12      on-line   since 06/03/2014 16:45:49
13      on-line   since 06/03/2014 16:45:49
14      on-line   since 06/03/2014 16:45:49
15      on-line   since 06/03/2014 16:45:49
16      on-line   since 06/03/2014 16:45:49
17      on-line   since 06/03/2014 16:45:49
18      on-line   since 06/03/2014 16:45:49
19      on-line   since 06/03/2014 16:45:49
20      on-line   since 06/03/2014 16:45:49
21      on-line   since 06/03/2014 16:45:49
22      on-line   since 06/03/2014 16:45:49
23      on-line   since 06/03/2014 16:45:49
24      on-line   since 06/03/2014 16:45:49
25      on-line   since 06/03/2014 16:45:49
26      on-line   since 06/03/2014 16:45:49
27      on-line   since 06/03/2014 16:45:49
28      on-line   since 06/03/2014 16:45:49
29      on-line   since 06/03/2014 16:45:49
30      on-line   since 06/03/2014 16:45:49
31      on-line   since 06/03/2014 16:45:49
32      on-line   since 06/03/2014 16:45:49
33      on-line   since 06/03/2014 16:45:49
34      on-line   since 06/03/2014 16:45:49
35      on-line   since 06/03/2014 16:45:49
36      on-line   since 06/03/2014 16:45:49
37      on-line   since 06/03/2014 16:45:49
38      on-line   since 06/03/2014 16:45:49
39      on-line   since 06/03/2014 16:45:49
40      on-line   since 06/03/2014 16:45:49
41      on-line   since 06/03/2014 16:45:49
42      on-line   since 06/03/2014 16:45:49
43      on-line   since 06/03/2014 16:45:49
44      on-line   since 06/03/2014 16:45:49
45      on-line   since 06/03/2014 16:45:49
46      on-line   since 06/03/2014 16:45:49
47      on-line   since 06/03/2014 16:45:49
48      on-line   since 06/03/2014 16:45:49
49      on-line   since 06/03/2014 16:45:49
50      on-line   since 06/03/2014 16:45:49
51      on-line   since 06/03/2014 16:45:49
52      on-line   since 06/03/2014 16:45:49
53      on-line   since 06/03/2014 16:45:49
54      on-line   since 06/03/2014 16:45:49
55      on-line   since 06/03/2014 16:45:49
56      on-line   since 06/03/2014 16:45:49
57      on-line   since 06/03/2014 16:45:49
58      on-line   since 06/03/2014 16:45:49
59      on-line   since 06/03/2014 16:45:49
60      on-line   since 06/03/2014 16:45:49
61      on-line   since 06/03/2014 16:45:49
62      on-line   since 06/03/2014 16:45:49
63      on-line   since 06/03/2014 16:45:49
64      on-line   since 06/03/2014 16:45:49
65      on-line   since 06/03/2014 16:45:49
66      on-line   since 06/03/2014 16:45:49
67      on-line   since 06/03/2014 16:45:49
68      on-line   since 06/03/2014 16:45:49
69      on-line   since 06/03/2014 16:45:49
70      on-line   since 06/03/2014 16:45:49
71      on-line   since 06/03/2014 16:45:49
72      on-line   since 06/03/2014 16:45:49
73      on-line   since 06/03/2014 16:45:49
74      on-line   since 06/03/2014 16:45:49
75      on-line   since 06/03/2014 16:45:49
76      on-line   since 06/03/2014 16:45:49
77      on-line   since 06/03/2014 16:45:49
78      on-line   since 06/03/2014 16:45:49
79      on-line   since 06/03/2014 16:45:49
80      on-line   since 06/03/2014 16:45:49
81      on-line   since 06/03/2014 16:45:49
82      on-line   since 06/03/2014 16:45:49
83      on-line   since 06/03/2014 16:45:49
84      on-line   since 06/03/2014 16:45:49
85      on-line   since 06/03/2014 16:45:49
86      on-line   since 06/03/2014 16:45:49
87      on-line   since 06/03/2014 16:45:49
88      on-line   since 06/03/2014 16:45:49
89      on-line   since 06/03/2014 16:45:49
90      on-line   since 06/03/2014 16:45:49
91      on-line   since 06/03/2014 16:45:49
92      on-line   since 06/03/2014 16:45:49
93      on-line   since 06/03/2014 16:45:49
94      on-line   since 06/03/2014 16:45:49
95      on-line   since 06/03/2014 16:45:49
96      on-line   since 06/03/2014 16:45:49
97      on-line   since 06/03/2014 16:45:49
98      on-line   since 06/03/2014 16:45:49
99      on-line   since 06/03/2014 16:45:49
100     on-line   since 06/03/2014 16:45:49
101     on-line   since 06/03/2014 16:45:49
102     on-line   since 06/03/2014 16:45:49
103     on-line   since 06/03/2014 16:45:49
104     on-line   since 06/03/2014 16:45:49
105     on-line   since 06/03/2014 16:45:49
106     on-line   since 06/03/2014 16:45:49
107     on-line   since 06/03/2014 16:45:49
108     on-line   since 06/03/2014 16:45:49
109     on-line   since 06/03/2014 16:45:49
110     on-line   since 06/03/2014 16:45:49
111     on-line   since 06/03/2014 16:45:49
112     on-line   since 06/03/2014 16:45:49
113     on-line   since 06/03/2014 16:45:49
114     on-line   since 06/03/2014 16:45:49
115     on-line   since 06/03/2014 16:45:49
116     on-line   since 06/03/2014 16:45:49
117     on-line   since 06/03/2014 16:45:49
118     on-line   since 06/03/2014 16:45:49
119     on-line   since 06/03/2014 16:45:49
120     on-line   since 06/03/2014 16:45:49
121     on-line   since 06/03/2014 16:45:49
122     on-line   since 06/03/2014 16:45:49
123     on-line   since 06/03/2014 16:45:49
124     on-line   since 06/03/2014 16:45:49
125     on-line   since 06/03/2014 16:45:49
126     on-line   since 06/03/2014 16:45:49
127     on-line   since 06/03/2014 16:45:49
bash-3.00#
bash-3.00#
bash-3.00# prstat 5 5
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  1374 root     4408K 1824K wait    49    0   0:00:02 0.1% cpuhungry/1
 22927 root     4408K 1824K wait    34    0   4:10:47 0.0% cpuhungry/1
  1388 root     4408K 1824K cpu55   49    0   0:00:04 0.0% cpuhungry/1
 22907 root     4408K 1952K wait    27    0   4:11:06 0.0% cpuhungry/1
  1387 root     4408K 1824K wait    49    0   0:00:04 0.0% cpuhungry/1
  1375 root     4408K 1824K wait    27    0   0:00:03 0.0% cpuhungry/1
 22930 root     4408K 1824K wait    23    0   4:17:32 0.0% cpuhungry/1
  1372 root     4416K 3064K wait    27    0   0:00:04 0.0% cpuhungry/1
  1376 root     4408K 1824K wait    42    0   0:00:03 0.0% cpuhungry/1
  1385 root     4408K 1824K wait    12    0   0:00:02 0.0% cpuhungry/1
  1377 root     4408K 1824K cpu101  49    0   0:00:03 0.0% cpuhungry/1
 22906 root     4416K 2992K cpu38   49    0   4:09:59 0.0% cpuhungry/1
  1378 root     4408K 1824K wait    34    0   0:00:03 0.0% cpuhungry/1
 22911 root     4408K 1824K wait    32    0   4:18:10 0.0% cpuhungry/1
  1383 root     4408K 1824K wait    15    0   0:00:05 0.0% cpuhungry/1
  1381 root     4408K 1824K wait     7    0   0:00:03 0.0% cpuhungry/1
 22909 root     4408K 1824K wait    49    0   4:11:56 0.0% cpuhungry/1
 22917 root     4408K 1824K wait    43    0   4:11:41 0.0% cpuhungry/1
 22928 root     4408K 1824K wait     8    0   4:17:34 0.0% cpuhungry/1
 22929 root     4408K 1824K wait    17    0   4:16:35 0.0% cpuhungry/1
 22910 root     4408K 1824K wait    33    0   4:08:23 0.0% cpuhungry/1
 22914 root     4408K 1824K wait    23    0   4:09:13 0.0% cpuhungry/1
 22908 root     4408K 1824K wait     1    0   4:16:33 0.0% cpuhungry/1
  1380 root     4408K 1824K wait    48    0   0:00:04 0.0% cpuhungry/1
 22931 root     4408K 1824K wait    23    0   4:10:50 0.0% cpuhungry/1
  1386 root     4408K 1824K wait    46    0   0:00:04 0.0% cpuhungry/1
 22916 root     4408K 1824K wait     8    0   4:06:12 0.0% cpuhungry/1
  1373 root     4408K 1952K wait    16    0   0:00:03 0.0% cpuhungry/1
  1384 root     4408K 1824K wait    16    0   0:00:03 0.0% cpuhungry/1
  1382 root     4408K 1824K wait     9    0   0:00:01 0.0% cpuhungry/1
 22915 root     4408K 1824K wait    48    0   4:07:11 0.0% cpuhungry/1
  1379 root     4408K 1824K wait    48    0   0:00:01 0.0% cpuhungry/1
 22912 root     4408K 1824K wait    27    0   4:11:16 0.0% cpuhungry/1
 22913 root     4408K 1824K wait     9    0   4:09:22 0.0% cpuhungry/1
 23288 root     2424K 1608K sleep   59    0   0:00:00 0.0% smcboot/1
  1282 root     3432K 2800K sleep   59    0   0:00:00 0.0% bash/1
  1261 root     1760K 1480K sleep   59    0   0:00:00 0.0% sh/1
 23295 root       23M 6584K sleep   50    0   0:00:09 0.0% inetd/4
 23231 daemon   5144K 1688K sleep   58    0   0:00:01 0.0% nfsmapid/3
 23542 root     3264K 2376K sleep   59    0   0:00:01 0.0% automountd/2
 23778 root     6552K 1928K sleep   57    0   0:00:00 0.0% dtlogin/1
 23541 root     2984K 1592K sleep   59    0   0:00:00 0.0% automountd/2
 23220 daemon   2824K 1608K sleep   59    0   0:00:00 0.0% nfs4cbd/2
 23293 root     1720K 1168K sleep   59    0   0:00:00 0.0% utmpd/1
 23243 daemon   2792K 1608K sleep   52    0   0:00:00 0.0% lockd/2
 22743 root       12M 5872K sleep   59    0   0:00:32 0.0% svc.configd/20
 22999 daemon     10M 6048K sleep   59    0   0:01:19 0.0% rcapd/1
 23546 root     4616K 1496K sleep   59    0   0:00:00 0.0% sshd/1
 24546 noaccess  155M  131M sleep   59    0   0:06:04 0.0% java/17
 24407 smmsp      10M 3328K sleep   59    0   0:00:04 0.0% sendmail/1
 22943 root       19M   16M sleep   59    0   0:00:21 0.0% nscd/30
 23290 root     2424K 1208K sleep   59    0   0:00:00 0.0% smcboot/1
 22739 root     3024K 1752K sleep   59    0   0:00:00 0.0% init/1
 23294 root     2512K 1544K sleep   59    0   0:00:00 0.0% sac/1
 22741 root       32M   11M sleep   29    0   0:00:16 0.0% svc.startd/13
 23691 root     3720K 1600K sleep   59    0   0:00:00 0.0% dmispd/1
 23216 daemon   3264K 1744K sleep   59    0   0:00:00 0.0% rpcbind/1
Total: 69 processes, 172 lwps, load averages: 31.72, 22.48, 19.30
bash-3.00#
bash-3.00#

When I remove the capped-cpu parameter and run the test again, it shows a lot of CPUs being used, which seems to be OK.

bash-3.00# zonecfg -z zone2
zonecfg:zone2>
zonecfg:zone2> remove rctl
zonecfg:zone2> info
zonename: zone2
zonepath: /zone2
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
fs:
        dir: /mnt
        special: /u02/mnt
        raw not specified
        type: lofs
        options: []
zonecfg:zone2>
zonecfg:zone2>
zonecfg:zone2> verify
zonecfg:zone2> commit
zonecfg:zone2>
zonecfg:zone2>
zonecfg:zone2> exit
bash-3.00#
bash-3.00# zoneadm -z zone2 reboot
bash-3.00#
bash-3.00#
bash-3.00# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   9 zone3            running    /zone3                         native   shared
  10 zone4            running    /zone4                         native   shared
  23 zone2            running    /zone2                         native   shared
bash-3.00#
bash-3.00# zlogin zone2
[Connected to zone 'zone2' pts/2]
Last login: Fri Jul 11 12:44:01 on pts/3
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
#
# bash
bash-3.00#
bash-3.00# ./cpuhungry  &
[1] 6659
bash-3.00# eating the CPUs
created PID 6660
created PID 6661
created PID 6662
created PID 6663
created PID 6664
created PID 6665
created PID 6666
created PID 6667
created PID 6668
created PID 6669
created PID 6670
created PID 6671
created PID 6672
created PID 6673
created PID 6674
created PID 6675
bash-3.00#
bash-3.00# prstat -a
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  6584 noaccess  152M  107M cpu57   58    0   0:00:36 0.9% java/14
  6659 root     4416K 3056K cpu78    2    0   0:00:11 0.3% cpuhungry/1
  6665 root     4408K 1824K cpu40    2    0   0:00:11 0.3% cpuhungry/1
  6661 root     4408K 1824K cpu67    2    0   0:00:11 0.3% cpuhungry/1
  6674 root     4408K 1824K cpu92    2    0   0:00:11 0.3% cpuhungry/1
  6672 root     4408K 1824K cpu97    1    0   0:00:11 0.3% cpuhungry/1
  6675 root     4408K 1824K cpu6     2    0   0:00:11 0.3% cpuhungry/1
  6666 root     4408K 1824K cpu126   2    0   0:00:11 0.3% cpuhungry/1
  6670 root     4408K 1824K cpu111   1    0   0:00:11 0.3% cpuhungry/1
  6667 root     4408K 1824K cpu10    1    0   0:00:11 0.3% cpuhungry/1
  6668 root     4408K 1824K cpu105   1    0   0:00:11 0.3% cpuhungry/1
  6662 root     4408K 1824K cpu84    2    0   0:00:11 0.3% cpuhungry/1
  6673 root     4408K 1824K cpu5     2    0   0:00:11 0.3% cpuhungry/1
  6671 root     4408K 1824K cpu33    2    0   0:00:11 0.3% cpuhungry/1
  6660 root     4408K 1952K cpu19    2    0   0:00:11 0.3% cpuhungry/1
  6669 root     4408K 1824K cpu20    1    0   0:00:11 0.3% cpuhungry/1
  6664 root     4408K 1824K cpu119   2    0   0:00:11 0.3% cpuhungry/1
  6663 root     4408K 1824K cpu25    2    0   0:00:11 0.3% cpuhungry/1
  5694 root       12M   11M sleep   59    0   0:00:20 0.1% svc.configd/14
  5692 root       32M   26M sleep   29    0   0:00:10 0.0% svc.startd/14
  5996 root       23M   12M sleep   37    0   0:00:02 0.0% inetd/4
  5897 daemon   4992K 3744K sleep   29    0   0:00:00 0.0% kcfd/3
  6724 root     3944K 3664K cpu29   59    0   0:00:00 0.0% prstat/1
  6189 root     3256K 2528K sleep   29    0   0:00:00 0.0% automountd/2
  5993 daemon   2792K 2160K sleep   59    0   0:00:00 0.0% lockd/2
  6024 root     2424K 1312K sleep   59    0   0:00:00 0.0% smcboot/1
  6291 root     4088K 2960K sleep   53    0   0:00:00 0.0% snmpXdmid/2
  6263 root     2912K 2240K sleep   56    0   0:00:00 0.0% snmpdx/1
  5683 root     3024K 2112K sleep   59    0   0:00:00 0.0% init/1
  5966 daemon   5144K 2016K sleep   29    0   0:00:00 0.0% nfsmapid/3
  6209 root       10M 5544K sleep   58    0   0:00:00 0.0% sendmail/1
  6168 root     3432K 2800K sleep   59    0   0:00:00 0.0% bash/1
  5955 root     3288K 1872K sleep   59    0   0:00:00 0.0% cron/1
  6195 root     4784K 2592K sleep   59    0   0:00:00 0.0% syslogd/17
  5959 daemon   3200K 2352K sleep   59    0   0:00:00 0.0% rpcbind/1
  6289 root     3720K 2584K sleep   53    0   0:00:00 0.0% dmispd/1
  5999 root     2512K 1752K sleep   59    0   0:00:00 0.0% sac/1
  6328 root     6552K 2600K sleep   58    0   0:00:00 0.0% dtlogin/1
  6187 root     2984K 1832K sleep   59    0   0:00:00 0.0% automountd/2
  6013 root     1720K 1200K sleep   59    0   0:00:00 0.0% utmpd/1
  6087 root     1760K 1480K sleep   59    0   0:00:00 0.0% sh/1
  6014 root     2928K 2120K sleep   29    0   0:00:00 0.0% ttymon/1
 NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
    45 root       95M  106M   0.2%   0:03:39 5.8%
     1 noaccess  140M  103M   0.2%   0:00:36 0.9%
     7 daemon     10M   18M   0.0%   0:00:00 0.0%
 
Total: 53 processes, 147 lwps, load averages: 19.05, 27.30, 23.52
bash-3.00#
bash-3.00#

With the dedicated-cpu parameter on Solaris 10, it shows only 3 CPUs. So we can use the dedicated-cpu parameter on Solaris 10 as well, and this also seems to be OK.

bash-3.00# zonecfg -z zone2
zonecfg:zone2>
zonecfg:zone2> add dedicated-cpu
zonecfg:zone2:dedicated-cpu> set ncpus=3
zonecfg:zone2:dedicated-cpu> end
zonecfg:zone2> commit
zonecfg:zone2> verify
zonecfg:zone2>
zonecfg:zone2> exit
bash-3.00#
bash-3.00#
bash-3.00# zoneadm -z zone2 reboot
bash-3.00#
bash-3.00# zlogin zone2
[Connected to zone 'zone2' pts/2]
Last login: Fri Jul 11 12:52:46 on pts/2
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
# bash
bash-3.00#
bash-3.00# ./cpuhungry &
[1] 17539
bash-3.00# eating the CPUs
created PID 17540
created PID 17541
created PID 17542
created PID 17543
created PID 17544
created PID 17545
created PID 17546
created PID 17547
created PID 17548
created PID 17549
created PID 17550
created PID 17551
created PID 17552
created PID 17553
created PID 17554
created PID 17555
bash-3.00#
bash-3.00# psrinfo
5       on-line   since 06/03/2014 16:45:49
6       on-line   since 06/03/2014 16:45:49
7       on-line   since 06/03/2014 16:45:49
bash-3.00#
bash-3.00#
bash-3.00# 
bash-3.00#
bash-3.00# prstat -a
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 17289 noaccess  155M  113M sleep   57    0   0:01:00  11% java/18
 17554 root     4408K 1824K run     19    0   0:00:05 4.8% cpuhungry/1
 17541 root     4408K 1824K run     16    0   0:00:05 4.8% cpuhungry/1
 17553 root     4408K 1824K run     15    0   0:00:05 4.7% cpuhungry/1
 17552 root     4408K 1824K run     17    0   0:00:05 4.7% cpuhungry/1
 17550 root     4408K 1824K cpu5    22    0   0:00:05 4.6% cpuhungry/1
 17555 root     4408K 1824K run     20    0   0:00:05 4.5% cpuhungry/1
 17548 root     4408K 1824K run     18    0   0:00:05 4.5% cpuhungry/1
 17543 root     4408K 1824K run     22    0   0:00:05 4.5% cpuhungry/1
 17540 root     4408K 1952K run     21    0   0:00:05 4.5% cpuhungry/1
 17542 root     4408K 1824K run     20    0   0:00:05 4.4% cpuhungry/1
 17545 root     4408K 1824K run     23    0   0:00:05 4.4% cpuhungry/1
 17546 root     4408K 1824K cpu7    24    0   0:00:05 4.3% cpuhungry/1
 17549 root     4408K 1824K run     17    0   0:00:05 4.3% cpuhungry/1
 17539 root     4416K 3056K run     14    0   0:00:05 4.3% cpuhungry/1
 17551 root     4408K 1824K run     15    0   0:00:05 4.2% cpuhungry/1
 17547 root     4408K 1824K run     22    0   0:00:05 4.1% cpuhungry/1
 17544 root     4408K 1824K run     18    0   0:00:05 4.0% cpuhungry/1
 16399 root       11M   10M sleep    1    0   0:00:22 0.5% svc.configd/15
 17802 root     3944K 3664K cpu6    50    0   0:00:00 0.2% prstat/1
 16397 root       11M 8896K sleep   58    0   0:00:06 0.1% svc.startd/13
 16698 root     5712K 4680K sleep   44    0   0:00:01 0.0% inetd/4
 17474 root     3432K 2808K sleep   59    0   0:00:00 0.0% bash/1
 16917 root       10M 7392K sleep   35    0   0:00:00 0.0% snmpd/1
 16528 daemon   4992K 3744K sleep   29    0   0:00:00 0.0% kcfd/3
 17694 smmsp    9288K 2688K sleep   59    0   0:00:00 0.0% sendmail/1
 17429 root     1760K 1480K sleep   59    0   0:00:00 0.0% sh/1
 17692 root     9288K 3432K sleep   59    0   0:00:00 0.0% sendmail/1
 16548 root     6272K 3744K sleep   38    0   0:00:00 0.0% nscd/32
 17025 root     6552K 2616K sleep   57    0   0:00:00 0.0% dtlogin/1
 16525 daemon   8104K 4464K sleep   59    0   0:00:00 0.0% rcapd/1
 16886 root     4792K 2632K sleep   59    0   0:00:00 0.0% syslogd/17
 16958 root     4088K 2960K sleep   58    0   0:00:00 0.0% snmpXdmid/2
 16869 root     3256K 2536K sleep   29    0   0:00:00 0.0% automountd/2
 16667 daemon   2824K 2184K sleep   29    0   0:00:00 0.0% nfs4cbd/2
 16395 root     3024K 2128K sleep    1    0   0:00:00 0.0% init/1
 16710 root     2424K 1312K sleep   56    0   0:00:00 0.0% smcboot/1
 16677 daemon   2792K 2176K sleep   59    0   0:00:00 0.0% lockd/2
 16711 root     2424K 1312K sleep   59    0   0:00:00 0.0% smcboot/1
 16868 root     2984K 1840K sleep   59    0   0:00:00 0.0% automountd/2
 16731 root     2856K 2160K sleep   58    0   0:00:00 0.0% ttymon/1
 16879 root     4616K 2824K sleep   29    0   0:00:00 0.0% sshd/1
 NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
    44 root       50M   60M   0.1%   0:01:54  76%
     1 noaccess  148M  122M   0.2%   0:01:00  11%
     7 daemon   9400K   16M   0.0%   0:00:00 0.0%
     1 smmsp    1832K 8184K   0.0%   0:00:00 0.0%
 
Total: 53 processes, 155 lwps, load averages: 7.73, 2.07, 0.73
bash-3.00#
bash-3.00#
bash-3.00# uname -a
SunOS sun 5.10 Generic_142909-17 sun4v sparc SUNW,T5240
bash-3.00#
 
 

The kernel schedules processes onto the CPUs:

'Capped CPU' means 'use any available CPU': there is no short list of CPUs, just a limit on how much CPU time the zone as a whole gets. Every minute there are 128 minutes of available CPU time on a system with 128 CPUs. A zone capped at 3 is allowed to use at most 3 of those minutes, and those 3 minutes are scheduled on any CPUs, since all CPUs are shared equally - as in fair share scheduling. This is not zone-based CPU affinity.
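
You can inspect (and even change) that limit at runtime from the global zone with prctl; a sketch, assuming your zone2 from above:

# prctl -n zone.cpu-cap -i zone zone2
# prctl -n zone.cpu-cap -v 300 -r -i zone zone2

The cap is expressed as a percentage of a single CPU, which is why your ncpus: 1.00 configuration showed up as limit=100; the second command replaces it with 300, i.e. 3 CPUs' worth.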

Dedicated CPU means the zone gets 3 CPUs all to itself. Even if they are idle, no CPU time on them is given to any other zone. In no case are those 3 CPUs accessed by any other non-global zone. All of the zone's processes are scheduled against a given pset (a fixed set of 3 CPUs). Think of this as CPU affinity on a zone basis rather than a per-process basis.
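
You can watch this from the global zone; a sketch, again assuming zone2 with ncpus=3:

# poolstat

When a zone with dedicated-cpu boots, the system creates a temporary pool (named SUNWtmp_zone2 here) whose processor set holds exactly 3 CPUs, and poolstat lists it alongside pool_default.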

Capped CPU is less restrictive. Dedicated CPU locks zones (which can potentially do evil things) into a little CPU "space" where all they can trash are their own resources. Great for Oracle. For example, a programmer requests a Cartesian product of two billion-row tables: all the other zones hum happily along while the one evil zone thrashes in its own little pond.

Thanks, Jim, for explaining it.

One last question: in the case of capped-cpu where I set the value to 1 for the ncpus parameter, does that mean the zone gets 1 minute of CPU time per minute, which may be spread over any number of CPUs, with the total still being 1 minute of CPU time - which is why in my case prstat showed threads on three CPUs? Am I right?

That's about right - maybe you can think of it as

 max CPU time per minute = ncpus * (1 minute of CPU time)
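
Concretely, using the rctl mapping visible in your own zonecfg info output (ncpus: 1.00 produced limit=100):

 ncpus = 1.00  ->  zone.cpu-cap = 100   (one CPU's worth per interval)
 ncpus = 2.00  ->  zone.cpu-cap = 200
 ncpus = 3.00  ->  zone.cpu-cap = 300

So a zone capped at ncpus=1 may have runnable threads on any number of the 128 CPUs at once, as long as their combined usage never exceeds 100% of one CPU - which is exactly why you saw threads on three CPUs while the per-process totals stayed tiny.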