vmstat behaviour after upgrade to Solaris 10

kthr      memory            page            disk         faults      cpu
r b w   swap  free  re  mf pi po fr de sr s3 s3 sd sd   in   sy   cs us sy id
 0 0 528 67916496 10713856 678 3518 1167 545 853 221952 1097 1 48 0 26 1307 25353 4869 5 7 88
 0 0 1115 41541440 586816 948 5536 1477 3 3 131072 0 0 131 0 19 1999 20401 6464 12 6 81
 0 0 1115 41532800 586896 943 4132 1498 5 5 77408 0 0 135 0 25 1863 20938 5746 8 6 86
 0 0 1115 41519352 578312 516 2274 1323 3 3 45728 0 0 125 0 21 1972 12580 6095 5 5 90
 0 0 1115 41508928 562664 928 2430 3204 8 8 276152 0 0 140 0 21 2041 12526 5820 5 8 87

Has anyone else seen this? They have also changed the man page since Solaris 8: the first column no longer shows processes, it shows kernel threads. Does that mean any change in the interpretation?
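From the man page, it looks like r/b/w now count kernel threads (LWPs) in the run queue, blocked, and swapped, rather than processes, so the absolute numbers could be higher on a heavily threaded box even if the meaning is the same. A rough way to compare the two counts, assuming ps's -L option (one line per LWP) behaves here as documented:

# ps -e | wc -l
# ps -eL | wc -l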

No, nothing like that... it is a standard format. A better idea would be to look at the output over a larger interval, like:
# vmstat 5 5
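Keep in mind the first data line is the average since boot, so ignore it and watch the later samples; the sr (scan rate) and de columns are the usual memory-pressure tells. A quick filter, assuming nawk (it ships with Solaris) and the column layout shown above:

# vmstat 5 5 | nawk 'NR > 3 {print $1, $4, $5, $12}'

NR > 3 skips the two header lines plus the since-boot sample, and the fields picked out are r, swap, free, and sr.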

blowtorch, you left out the first line of the heading, which would have illustrated your question. Anyway, Solaris 9 was the same way. What are you upgrading from?

We did an upgrade from Solaris 8. It's a big system: 32 CPUs, 64 GB of memory. Anyway, I think they've got it fixed. I have been quite busy with other stuff for the past few days.

Hey guys, just an update. First of all, the box is a UAT box, so it has 'only' 16 CPUs and had 32 GB of memory. A ton of Oracle databases from another box were swung over (it has VCS), so all the SGAs combined needed about 12 GB more memory than the system actually had. Anyway, the problem has been fixed by adding 16 GB of memory and swinging most of the databases back to the original configuration.
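For anyone who lands here with the same symptoms: a quick sanity check is to total the System V shared memory segments (where the Oracle SGAs live) and compare that against installed memory. A sketch, assuming ipcs -mb reports the segment size (SEGSZ, in bytes) in the last column:

# prtconf | grep 'Memory size'
# ipcs -mb | nawk '/^m/ {sum += $NF} END {printf "%.1f GB\n", sum/1024/1024/1024}'

If the shared memory total plus the normal working set overshoots physical memory, you get exactly the kind of scan rates and de values shown in the output above.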