# zonecfg -z zone1
zonecfg:zone1> add capped-cpu
zonecfg:zone1:capped-cpu> set ncpus=2
zonecfg:zone1:capped-cpu> end
zonecfg:zone1> commit
zonecfg:zone1> exit
This should mean zone1 can use up to two CPUs' worth of CPU time. I then ran my CPUHUNGRY shell script to drive CPU utilization up. But when I check with prstat -a, it shows processes running on more than two CPUs, and when I run psrinfo -v inside the zone it shows 128 CPUs. Please find the prstat output below.
I do not see 128 CPUs being used. Here is what the number of CPUs in use means in this case, in informal language:
You add up the %CPU figures for everything in the zone, AFTER the first prstat display has refreshed in a command like prstat 5 5.
When I do that, I do not see 128 CPUs' worth of usage. Capping means that when you sum usage across all 128 CPUs, the total will be less than or equal to 2 (CPUs' worth of time).
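That summing can be sketched with a small awk pipeline. The prstat rows below are made-up samples (not from this thread's output), and the %CPU figure is assumed to be the 9th whitespace-separated field, matching the column layout shown later in this thread:

```shell
# Sum the %CPU column (9th field) of prstat-style lines,
# using made-up sample rows in place of live prstat output.
printf '%s\n' \
  '  1374 root 4408K 1824K wait 49 0 0:00:02 0.7% cpuhungry/1' \
  '  1375 root 4408K 1824K wait 27 0 0:00:03 0.2% cpuhungry/1' \
  '  1376 root 4408K 1824K wait 42 0 0:00:03 0.1% cpuhungry/1' |
awk '{ sub(/%/, "", $9); total += $9 } END { printf "total %%CPU: %.1f\n", total }'
```

If that total stays at or below 200% with ncpus=2, the cap is doing its job, even though the threads land on many different CPUs.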
If you want to lock 2 CPUs "away from all other zones" - which I think is what you actually want -
check out pooladm to understand the dedicated-CPU concept.
Find dedicated-cpu in the zonecfg guide for Solaris 11. As far as I know, this feature is not supported in Solaris 10 global zones. We run Solaris 10 branded zones on top of Solaris 11: the Solaris 11 global zone "grants" dedicated CPUs to the non-global Solaris 10 zones.
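For reference, a dedicated-cpu configuration looks like this in zonecfg - shown as a sketch for a hypothetical zone, mirroring the capped-cpu session at the top of the thread:

```shell
# Sketch: reserve two CPUs exclusively for the zone (hypothetical zone name).
# Unlike capped-cpu, this carves out a processor set no other zone can touch.
# zonecfg -z zone1
# zonecfg:zone1> add dedicated-cpu
# zonecfg:zone1:dedicated-cpu> set ncpus=2
# zonecfg:zone1:dedicated-cpu> end
# zonecfg:zone1> commit
# zonecfg:zone1> exit
```

With this in place, psrinfo inside the zone should report only the dedicated CPUs rather than all 128.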
Please find the output below after using capped-cpu: it shows three CPUs in use even though I set the value to 1. Why does it show 3 CPUs instead of 1? Could you elaborate on the meaning of "Capping means that you sum up use across all 128 cpus and it will be less than or equal to 2" - or, in the case below, 1?
bash-3.00# zonecfg -z zone2 info
zonename: zone2
zonepath: /zone2
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid:
inherit-pkg-dir:
    dir: /lib
inherit-pkg-dir:
    dir: /platform
inherit-pkg-dir:
    dir: /sbin
inherit-pkg-dir:
    dir: /usr
fs:
    dir: /mnt
    special: /u02/mnt
    raw not specified
    type: lofs
    options: []
capped-cpu:
    [ncpus: 1.00]
rctl:
    name: zone.cpu-cap
    value: (priv=privileged,limit=100,action=deny)
bash-3.00#
bash-3.00#
bash-3.00# zlogin zone2
[Connected to zone 'zone2' pts/3]
Last login: Fri Jul 11 09:20:27 on pts/3
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.00#
bash-3.00# ls
bin dev export kernel mnt nohup.out pl
cpuhungry etc home lib net opt pr
bash-3.00# ./cpuhungry &
[1] 1372
bash-3.00# eating the CPUs
created PID 1373
created PID 1374
created PID 1375
created PID 1376
created PID 1377
created PID 1378
created PID 1379
created PID 1380
created PID 1381
created PID 1382
created PID 1383
created PID 1384
created PID 1385
created PID 1386
created PID 1387
created PID 1388
bash-3.00#
bash-3.00#
bash-3.00# psrinfo
5 on-line since 06/03/2014 16:45:49
[... identical "on-line since 06/03/2014 16:45:49" lines for CPUs 6 through 126 elided ...]
127 on-line since 06/03/2014 16:45:49
bash-3.00#
bash-3.00#
bash-3.00# prstat 5 5
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
1374 root 4408K 1824K wait 49 0 0:00:02 0.1% cpuhungry/1
22927 root 4408K 1824K wait 34 0 4:10:47 0.0% cpuhungry/1
1388 root 4408K 1824K cpu55 49 0 0:00:04 0.0% cpuhungry/1
22907 root 4408K 1952K wait 27 0 4:11:06 0.0% cpuhungry/1
1387 root 4408K 1824K wait 49 0 0:00:04 0.0% cpuhungry/1
1375 root 4408K 1824K wait 27 0 0:00:03 0.0% cpuhungry/1
22930 root 4408K 1824K wait 23 0 4:17:32 0.0% cpuhungry/1
1372 root 4416K 3064K wait 27 0 0:00:04 0.0% cpuhungry/1
1376 root 4408K 1824K wait 42 0 0:00:03 0.0% cpuhungry/1
1385 root 4408K 1824K wait 12 0 0:00:02 0.0% cpuhungry/1
1377 root 4408K 1824K cpu101 49 0 0:00:03 0.0% cpuhungry/1
22906 root 4416K 2992K cpu38 49 0 4:09:59 0.0% cpuhungry/1
1378 root 4408K 1824K wait 34 0 0:00:03 0.0% cpuhungry/1
22911 root 4408K 1824K wait 32 0 4:18:10 0.0% cpuhungry/1
1383 root 4408K 1824K wait 15 0 0:00:05 0.0% cpuhungry/1
1381 root 4408K 1824K wait 7 0 0:00:03 0.0% cpuhungry/1
22909 root 4408K 1824K wait 49 0 4:11:56 0.0% cpuhungry/1
22917 root 4408K 1824K wait 43 0 4:11:41 0.0% cpuhungry/1
22928 root 4408K 1824K wait 8 0 4:17:34 0.0% cpuhungry/1
22929 root 4408K 1824K wait 17 0 4:16:35 0.0% cpuhungry/1
22910 root 4408K 1824K wait 33 0 4:08:23 0.0% cpuhungry/1
22914 root 4408K 1824K wait 23 0 4:09:13 0.0% cpuhungry/1
22908 root 4408K 1824K wait 1 0 4:16:33 0.0% cpuhungry/1
1380 root 4408K 1824K wait 48 0 0:00:04 0.0% cpuhungry/1
22931 root 4408K 1824K wait 23 0 4:10:50 0.0% cpuhungry/1
1386 root 4408K 1824K wait 46 0 0:00:04 0.0% cpuhungry/1
22916 root 4408K 1824K wait 8 0 4:06:12 0.0% cpuhungry/1
1373 root 4408K 1952K wait 16 0 0:00:03 0.0% cpuhungry/1
1384 root 4408K 1824K wait 16 0 0:00:03 0.0% cpuhungry/1
1382 root 4408K 1824K wait 9 0 0:00:01 0.0% cpuhungry/1
22915 root 4408K 1824K wait 48 0 4:07:11 0.0% cpuhungry/1
1379 root 4408K 1824K wait 48 0 0:00:01 0.0% cpuhungry/1
22912 root 4408K 1824K wait 27 0 4:11:16 0.0% cpuhungry/1
22913 root 4408K 1824K wait 9 0 4:09:22 0.0% cpuhungry/1
23288 root 2424K 1608K sleep 59 0 0:00:00 0.0% smcboot/1
1282 root 3432K 2800K sleep 59 0 0:00:00 0.0% bash/1
1261 root 1760K 1480K sleep 59 0 0:00:00 0.0% sh/1
23295 root 23M 6584K sleep 50 0 0:00:09 0.0% inetd/4
23231 daemon 5144K 1688K sleep 58 0 0:00:01 0.0% nfsmapid/3
23542 root 3264K 2376K sleep 59 0 0:00:01 0.0% automountd/2
23778 root 6552K 1928K sleep 57 0 0:00:00 0.0% dtlogin/1
23541 root 2984K 1592K sleep 59 0 0:00:00 0.0% automountd/2
23220 daemon 2824K 1608K sleep 59 0 0:00:00 0.0% nfs4cbd/2
23293 root 1720K 1168K sleep 59 0 0:00:00 0.0% utmpd/1
23243 daemon 2792K 1608K sleep 52 0 0:00:00 0.0% lockd/2
22743 root 12M 5872K sleep 59 0 0:00:32 0.0% svc.configd/20
22999 daemon 10M 6048K sleep 59 0 0:01:19 0.0% rcapd/1
23546 root 4616K 1496K sleep 59 0 0:00:00 0.0% sshd/1
24546 noaccess 155M 131M sleep 59 0 0:06:04 0.0% java/17
24407 smmsp 10M 3328K sleep 59 0 0:00:04 0.0% sendmail/1
22943 root 19M 16M sleep 59 0 0:00:21 0.0% nscd/30
23290 root 2424K 1208K sleep 59 0 0:00:00 0.0% smcboot/1
22739 root 3024K 1752K sleep 59 0 0:00:00 0.0% init/1
23294 root 2512K 1544K sleep 59 0 0:00:00 0.0% sac/1
22741 root 32M 11M sleep 29 0 0:00:16 0.0% svc.startd/13
23691 root 3720K 1600K sleep 59 0 0:00:00 0.0% dmispd/1
23216 daemon 3264K 1744K sleep 59 0 0:00:00 0.0% rpcbind/1
Total: 69 processes, 172 lwps, load averages: 31.72, 22.48, 19.30
bash-3.00#
bash-3.00#
When I remove the capped-cpu parameter, prstat shows a lot of CPUs being used, which seems to be OK.
With the dedicated-cpu parameter on Solaris 10, it shows only 3 CPUs - so it seems we can use dedicated-cpu in Solaris 10 as well, which also seems to be OK.
The kernel schedules processes onto whatever CPUs are available:
'Capped CPU' means 'use any available CPU': there is no short list of CPUs, just a limit on how much CPU time the zone as a whole gets. Every minute there are 128 minutes of available CPU time on a system with 128 CPUs. A zone capped at 3 is allowed to use at most 3 of those minutes, and those 3 minutes are scheduled on any CPUs, since all CPUs are shared equally - as in fair-share scheduling. This is not zone-based CPU affinity.
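The arithmetic in that paragraph can be sketched like this (the values are illustrative):

```shell
# 128 CPUs provide 128 CPU-minutes of time per wall-clock minute;
# a zone capped at ncpus=3 may consume at most 3 of them, on any CPUs.
CPUS=128
CAP=3
awk -v cpus="$CPUS" -v cap="$CAP" 'BEGIN {
    printf "CPU-minutes available per minute: %d\n", cpus
    printf "CPU-minutes the capped zone gets: %d\n", cap
    printf "share of the whole machine:       %.2f%%\n", 100 * cap / cpus
}'
```

So a cap of 3 is a tiny slice of the machine overall, even though prstat may show activity on many different CPU ids.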
'Dedicated CPU' means the zone gets 3 CPUs all to itself. If they are idle, that CPU time is still not given to any other zone. In no case are those 3 CPUs used by any other non-global zone. All of the zone's processes are scheduled against a given pset (a fixed set of 3 CPUs). Think of this as CPU affinity on a zone basis rather than a per-process basis.
Capped CPU is the less restrictive of the two. Dedicated CPU locks zones (which can potentially do evil things) into a little CPU "space" where all they can trash are their own resources. Great for Oracle. For example, a programmer requests a Cartesian product of two billion-row tables: all the other zones hum happily along while the one evil zone thrashes in its own little pond.
Now one last question: in the capped-cpu case where I set ncpus to 1, does that mean 1 minute of CPU time per minute can be allocated across 1 or more CPUs - any CPUs - with the total still limited to 1 minute of CPU time, which is why I saw three CPUs in use? Am I right?
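For what it's worth, the live cap can also be inspected through the zone.cpu-cap resource control that showed up in the zonecfg info output above (limit=100 corresponds to ncpus=1.00, i.e. 100% of one CPU). This is a Solaris-only sketch, run from the global zone:

```shell
# Sketch (Solaris-only): show the active zone.cpu-cap resource control
# for zone2. A privileged limit of 100 means the zone may consume at
# most one full CPU's worth of time, spread across any CPUs.
prctl -n zone.cpu-cap -i zone zone2
```

If prstat's %CPU figures for the zone sum to no more than that limit, the cap is behaving as described, regardless of how many distinct CPU ids appear.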