I have just posted almost the same question; I'm not really sure how the CPU I/O wait metric works on AIX - I'm told it's not the same as on Solaris, which is what I normally work on.
So I will be interested to see what comes up in your thread as well as mine.
CPU "wait"s mean: a process, ready to be run (again) cannot be run AND there is no other process which could be run instead. In this regard "wait" is a special kind of "idle" - one, where there are indeed processes to be run whereas "idle" takes place when there are no processes to be run at all.
It is true that - as mentioned in the paper "Demystifying I/O Wait" - CPU-intensive processes running at the same time can mask I/O waits, because processor time is given to those CPU-intensive processes while the I/O-bound process waits. That doesn't seem to be the case here, though.
Seeing a considerable number of waits always means: the I/O part of the system's operation is the bottleneck. All the other parts of the system are faster than the I/O. This is not necessarily bad: some part is always the weakest link in the chain, and if it weren't I/O it would be something else. The question is: is the system fast enough for your purposes? If so, there is nothing for you to do - and once it isn't fast enough any more, you will know where to start.
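To illustrate where that wait figure comes from, here is a small sketch that pulls the `wa` value out of a captured `vmstat` sample. The sample output below is made up for illustration (the column layout follows the usual AIX `vmstat` format, with `us sy id wa` as the last four CPU columns) - on a real system you would pipe live `vmstat` output instead:

```shell
# Illustrative only: a captured AIX-style "vmstat" sample, embedded as a
# string so the sketch runs anywhere. The CPU columns are us/sy/id/wa.
sample='kthr    memory              page              faults        cpu
----- ----------- ------------------------ ------------ -----------
 r  b   avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
 1  1 24000  1500   0   0   0   0    0   0 200 1000 300 10  5 65 20'

# The wait percentage is the last field of the data line (line 4 here).
echo "$sample" | awk 'NR==4 { print "I/O wait: " $NF "%" }'
```

On a live box the equivalent would be something like `vmstat 5 | awk ...` over the interval lines, but the exact column positions should be checked against your own AIX release first.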
Just a clarification: the CPU wait pseudo-state also means there are no processes ready to run at all. The difference is that at least one thread is waiting for a disk I/O to complete. This metric is so confusing and open to misinterpretation that Solaris gave up providing it starting with version 10 (2005).
I just found such a misinterpretation by looking at the other thread mentioned by gull04:
There is no Unix OS where 20% I/O wait is that alarming. An idle CPU combined with an I/O in progress is not a disaster. If you are concerned about I/O performance and load, you should use iostat, not vmstat.
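Since iostat is the suggested tool here, a quick sketch of how one might scan its per-disk output for busy devices. The sample lines are invented for illustration and follow the AIX per-disk format (`Disks: % tm_act Kbps tps Kb_read Kb_wrtn`); the 40% threshold is an arbitrary example, not a recommendation:

```shell
# Illustrative only: a captured AIX-style "iostat" per-disk sample.
sample='Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          45.0      1024.0    120.0    500000    800000
hdisk1           5.0        64.0     10.0     40000     20000'

# Skip the header line, then flag any disk whose % tm_act (field 2)
# is above an example threshold of 40.
echo "$sample" | awk 'NR>1 && $2 > 40 { print $1 " is busy (" $2 "% tm_act)" }'
```

The point of looking at `% tm_act` per disk rather than the system-wide `wa` column is that it tells you *which* device is saturated, which vmstat cannot.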