Server running slow

Hi,

Wondering if someone can help.

I've got a server (SCO_SV 3.2v5.0.7, Pentium III) located at another site that has been running slow for a week. I've been speaking to a third party who say nothing is wrong with it, but it's still running slow.
The third party advise it could be a network problem. This isn't the case, as I pinged the server from another site and get no problems:

Reply from 194.61.192.241: bytes=32 time<1ms TTL=61
Reply from 194.61.192.241: bytes=32 time<1ms TTL=61
Reply from 194.61.192.241: bytes=32 time<1ms TTL=61
Reply from 194.61.192.241: bytes=32 time<1ms TTL=61

But when I run mpsar I get:

SCO_SV 3.2v5.0.7 PentIII    03/13/2007

12:05:05    %usr    %sys    %wio   %idle (-u)
12:05:06       1       2      85      12 
12:05:07       2       4      89       5 
12:05:08       0       2      96       2 

Average        1       3      90       6
# mpsar 1 3

SCO_SV 3.2v5.0.7 PentIII    03/13/2007

12:08:02    %usr    %sys    %wio   %idle (-u)
12:08:03       2       4      83      11 
12:08:04       2       0      91       7 
12:08:05       1       4      89       6 

# mpsar 1 3

SCO_SV rocc2 3.2v5.0.7 PentIII    03/13/2007

11:59:20    %usr    %sys    %wio   %idle (-u)
11:59:21       2       6      91       1 
11:59:22       0      10      89       1 
11:59:23       1       4      95       0 

Any ideas what I can check to see what the &%*@ is going on??

Thanks in advance.

Do you have a sample of this output from when things were all right? You need to run performance tools often enough that you can recognize changes. I don't use SCO so I can't give very specific advice, but the very high %wio stands out like a sore thumb, and the first thing I would suspect is the disk. That mpsar thing looks very close to sar, so there is probably a -d option you could try. How about iostat, is that available?
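Just as a rough sketch of what I mean (I can't verify the exact behaviour on SCO, but interval/count arguments are how sar and iostat work on every System V style box I know), I would collect a slightly longer window of both the CPU and disk reports and keep a copy to compare against later:

# sar -u 5 12       (CPU: %usr/%sys/%wio/%idle every 5 seconds, 12 samples)
# sar -d 5 12       (per-disk: %busy, r+w/s, blks/s, avserv over the same window)
# iostat 5 12       (same picture from a second tool, if it exists on your box)

Run the same thing again once the machine is behaving normally; the column that has moved is usually the one that tells you where to dig.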

And check any error logs very carefully, looking for disk soft errors. Disk drives can slow to a crawl while they recover from soft errors, and a disk that is recovering from soft errors now may soon fail completely. This is especially true of ATA (aka IDE) disks, which I'm guessing you have.
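Only a guess at paths since I don't have an SCO box here, but something along these lines, pointed at wherever your system and kernel logs actually live (/usr/adm/syslog and /usr/adm/messages are assumptions on my part):

# grep -i error /usr/adm/syslog | more
# grep -i Sdsk /usr/adm/messages | more

If you find a steady stream of retry or soft-error messages against one disk device, get a backup done and plan for a replacement drive before it fails for good.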

# sar 1 3

SCO_SV rocc2 3.2v5.0.7 PentIII    03/14/2007

14:51:55    %usr    %sys    %wio   %idle (-u)
14:51:56       2      15      37      46
14:51:57       4      17      34      45
14:51:58       6      12      34      49

Average        4      15      35      46
# sar -d
sar: Can't open /usr/adm/sa/sa14
# sar -d 1 3

SCO_SV rocc2 3.2v5.0.7 PentIII    03/14/2007

14:52:05   device   %busy   avque    r+w/s    blks/s   avwait   avserv (-d)
14:52:06   Sdsk-0  100.00    1.00  2278.64  68502.91     0.00     0.61
14:52:07   Sdsk-0  100.00    1.00  1965.05  58712.62     0.00     1.01
14:52:08   Sdsk-0  100.00    1.00  1607.92  47720.79     0.00     1.43

Average    Sdsk-0  100.00    1.00  1952.77  58381.11     0.00     0.96
#
# mpsar 1 3

SCO_SV rocc2 3.2v5.0.7 PentIII    03/14/2007

14:55:15    %usr    %sys    %wio   %idle (-u)
14:55:16       2       4      81      14
14:55:17       0       2      83      16
14:55:19       1       4      80      15

Average        1       3      81      15
# sar -d 1 10

SCO_SV rocc2 3.2v5.0.7 PentIII    03/14/2007

14:55:26   device   %busy   avque    r+w/s    blks/s   avwait   avserv (-d)
14:55:27   Sdsk-0  100.00    1.00  1184.16   6225.74     0.00     2.53
14:55:28   Sdsk-0  100.00    1.00  1156.31   6366.99     0.00     2.75
14:55:29   Sdsk-0  100.00    1.00   962.75   5239.22     0.00     3.22
14:55:30   Sdsk-0  100.00    1.00   897.03   4992.08     0.00     3.81
14:55:31   Sdsk-0  100.00    1.00  1477.45   9278.43     0.00     2.01
14:55:32   Sdsk-0  100.00    1.00  3341.18  17178.43     0.00     0.58
14:55:33   Sdsk-0  100.00    1.00  3950.00  23492.16     0.00     0.46
14:55:34   Sdsk-0  100.00    1.00  2738.83  14246.60     0.00     0.63
14:55:35   Sdsk-0  100.00    1.00  2523.53  12845.10     0.00     0.69
14:55:37   Sdsk-0  100.00    1.00  2867.65  17476.47     0.00     0.55

Average    Sdsk-0  100.00    1.00  2111.67  11743.33     0.00     1.16
# sar 1 3

SCO_SV rocc2 3.2v5.0.7 PentIII    03/14/2007

14:55:38    %usr    %sys    %wio   %idle (-u)
14:55:39       3       8      52      37
14:55:40       0       2      81      17
14:55:41       0       5      80      15

Average        1       5      71      23

Hope this makes sense to you.