Iostat and top command

I am seeing strange behaviour from the iostat and top commands: initially they show high utilization, and shortly afterwards they show low utilization.

iostat command

avg-cpu: %user %nice %system %iowait %steal %idle
 73.60 0.01 23.93 0.92 0.00 1.54

 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
 sda 0.00 0.35 0.09 0.50 2.04 10.50 21.30 0.00 1.29 0.19 0.01
 sdb 0.04 3.07 0.27 4.12 8.20 73.39 18.62 0.00 0.69 0.18 0.08
 sdc 0.26 0.46 0.31 7.42 155.19 850.85 130.10 0.07 9.00 0.40 0.31
 sdd 0.00 0.00 2.40 1.00 5.73 1.01 1.98 0.00 0.38 0.37 0.13
 sdf 0.00 0.00 2.40 1.00 5.74 1.01 1.98 0.00 0.38 0.37 0.13
 sde 0.00 0.00 2.40 1.00 5.74 1.01 1.98 0.00 0.38 0.37 0.13
 sdg 0.00 0.01 10.20 4.58 529.39 159.60 46.63 0.01 0.84 0.45 0.67
 sdh 0.08 0.16 63.83 63.13 4792.88 1946.14 53.08 0.15 1.17 0.23 2.92
 sdi 0.00 0.01 0.93 0.26 236.48 260.57 420.04 0.03 25.61 0.67 0.08

 avg-cpu: %user %nice %system %iowait %steal %idle
 0.89 0.00 0.76 0.00 0.00 98.35

 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
 sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 sdd 0.00 0.00 2.00 1.00 2.00 1.00 1.00 0.00 0.33 0.33 0.10
 sdf 0.00 0.00 2.00 1.00 2.00 1.00 1.00 0.00 0.33 0.33 0.10
 sde 0.00 0.00 2.00 1.00 2.00 1.00 1.00 0.00 0.33 0.33 0.10
 sdg 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 sdh 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 sdi 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

 avg-cpu: %user %nice %system %iowait %steal %idle
 2.15 0.00 0.63 0.00 0.00 97.22

 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
 sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 sdd 0.00 0.00 2.00 1.00 2.00 1.00 1.00 0.00 0.67 0.67 0.20
 sdf 0.00 0.00 2.00 1.00 2.00 1.00 1.00 0.00 0.33 0.33 0.10
 sde 0.00 0.00 2.00 1.00 2.00 1.00 1.00 0.00 0.33 0.33 0.10
 sdg 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 sdh 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 sdi 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

top

 top - 11:15:06 up 16 days, 19:04, 3 users, load average: 0.18, 0.24, 0.23
 Tasks: 260 total, 1 running, 259 sleeping, 0 stopped, 0 zombie
 Cpu(s): 51.1%us, 34.6%sy, 0.3%ni, 11.5%id, 2.1%wa, 0.0%hi, 0.5

But after a while it shows low utilization:

top - 11:15:59 up 16 days, 19:05, 3 users, load average: 0.26, 0.25, 0.23
 Tasks: 259 total, 1 running, 258 sleeping, 0 stopped, 0 zombie
 Cpu(s): 0.5%us, 0.3%sy, 0.0%ni, 99.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st

I have checked /var/log/messages and cannot find any issues.

I am using Oracle Linux 6.7.

There appears to be a process or a group of processes, maybe a service, that runs and then stops periodically. What process(es) write to the sdh drive?
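One way to find out is to sample the per-process I/O counters the kernel keeps in /proc/&lt;pid&gt;/io. A minimal sketch (assumes a stock Linux /proc layout; run as root to see processes owned by other users, and note that this counts all writes, not just those going to sdh):

```shell
# Sum write_bytes per process from /proc/<pid>/io and show the top writers.
# Processes you cannot read are silently skipped (hence 2>/dev/null).
for io in /proc/[0-9]*/io; do
  comm=$(cat "${io%io}comm" 2>/dev/null)
  wb=$(awk '/^write_bytes/ {print $2}' "$io" 2>/dev/null)
  [ -n "$wb" ] && printf '%14s  %s\n' "$wb" "$comm"
done | sort -rn | head
```

Tools such as iotop, or pidstat -d from the sysstat package, present the same counters interactively and per-interval, if they are installed.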

As the load averages (from the short- to the mid-term figure) are all around 0.25, there doesn't seem to be any reason for concern. The node is busy, but not too busy, and there's enough headroom, CPU-wise.
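As a rough rule of thumb, a load average is compared against the number of logical CPUs. A small sketch (nproc is assumed to be available; the 0.25 figure is taken from the load averages in the top output above):

```shell
# Load average vs. CPU count: 0.18-0.26 is roughly a quarter of ONE
# core's worth of runnable work, spread over however many cores exist.
cpus=$(nproc)   # logical CPUs on this machine
awk -v load=0.25 -v cpus="$cpus" \
  'BEGIN { printf "load %.2f over %d CPU(s) = %.1f%% of total capacity\n",
           load, cpus, 100 * load / cpus }'
```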

The first report from iostat (and likewise from vmstat and sar) shows averages computed since system boot.
Each subsequent report shows the average over the interval since the previous report.
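The effect can be reproduced with simple arithmetic on the kind of cumulative counters iostat reads from /proc/diskstats. A sketch with made-up counter values (chosen to resemble an uptime of about 16 days):

```shell
# Illustrative cumulative counters, mimicking the "sectors written" field
# that iostat reads from /proc/diskstats (the numbers are made up).
boot_secs=1451520       # seconds of uptime when iostat starts (~16.8 days)
total_wsec=1219276800   # sectors written since boot
next_wsec=1219276802    # same counter one second later

# First report: cumulative counter divided by uptime -> looks "busy"
echo "since-boot wsec/s: $((total_wsec / boot_secs))"
# Later reports: difference between snapshots over the interval -> near idle
echo "interval wsec/s: $((next_wsec - total_wsec))"
```

So a machine that did heavy I/O in the past but is quiet now will show a busy first report and near-zero ones afterwards, exactly as in the output above. Newer sysstat releases also accept iostat -y to omit the first since-boot report; on older distributions the usual practice is simply to ignore the first block of output.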