I can attest to the usefulness of booting off SAN (my environment is also EMC Symmetrix). I had a situation where I needed to practice migrating a server, and I was able to move an AIX 5.1 host from an H70 to a P670 LPAR and back as needed. Of course, you must ensure that your AIX image has appropriate device driver support for all devices in both machines, and that the firmware on each machine is at a level stated to support the AIX level you are trying to run. If the hardware at the recovery site matches, or is reasonably similar to, the hardware at the primary site, I wouldn't expect a problem.
At my workplace we have several AIX 5.1 LPARs booting from SAN, though they are all development instances. At one time a former senior sysadmin tested SAN boot with PowerPath (EMC's multipathing software for your older AIX 5.1 or 4.3.3 machines, if you're not familiar with it) but couldn't get it to work, so we have no SAN-boot prod systems; instead we use internal SCSI with OS mirroring for rootvg and PowerPath for external SAN storage. We also feared latency problems with a SAN rootvg, particularly for paging space and possibly /tmp: many things misbehave if your system pages heavily and there is too much latency getting pages in and out of paging space. I actually saw a post on the derkeiler lists quite a while ago indicating that some people had problems with SAN-booted systems doing intensive I/O. That said, the LPARs we SAN-boot in our environment have been rock solid since 2002, although they don't carry an extremely heavy workload (a standard full WebSphere appserver install: DB2, WebSphere, and HTTP all on the same box). We've even had to migrate the SAN booters from older McData SAN switches to newer Brocades, and we had no problems seeing the rootvg disks and restarting the hosts after cutting the FAs over to the new SAN directors.
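For what it's worth, here's the kind of quick sanity check we run on a SAN-booted host after a fabric change like that switch migration. This is a minimal sketch using standard AIX commands; the device name (hdisk0) is just an example, substitute whatever your rootvg disk actually is:

```shell
# Sanity-check a SAN-booted AIX host after a fabric change.
# hdisk0 below is an example device name -- substitute your own.

# Confirm the boot device list still points at the SAN rootvg disk
bootlist -m normal -o

# With native MPIO, verify every path to the rootvg disk is Enabled
lspath -l hdisk0

# Confirm rootvg itself is healthy (no stale or missing PVs)
lsvg -p rootvg
```

If any path shows Failed or Missing after the cutover, chase the zoning/FA presentation before you trust the box to survive a reboot.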
My coworker or I are actually set to test EMC's claim that Symmetrix SAN boot of an AIX 5.3 micropartition is feasible, including the micro-LPAR (mLPAR) rootvg disk being on SAN disk and presented through VIO. We've investigated and asked EMC, and the answer has been that this is a supported configuration. Our 5.3 environment uses POWER5 VIO servers and native AIX MPIO (instead of PowerPath) for almost all LPAR I/O, except for the LPARs with the heaviest SAN traffic, which the current IBM VIO server implementation will bottleneck. Our site has several P570 machines, and we are doing this testing to try to come up with a "poor man's HA": the P570 hardware is perhaps not yet ready for prime time, and we haven't seen the reliability we so easily took for granted with the POWER4 Regatta family of machines. Between the many firmware updates to the 570 server frame and the HMC upgrades mandatory to support that firmware, you regularly have to take down a frame running multiple prod LPARs, and the business types prefer to have some way of getting their apps back in a reasonable amount of time. Basically, we're trying to do a mini on-site DR similar to what you are proposing. OK, so that's my very long-winded way of getting around to saying that we will be testing an mLPAR rootvg presented through a VIO server where the rootvg uses MPIO, as we're not interested in losing multipathing on a prod LPAR to gain migration-ability (a new 25-cent word to use at your next big DRP meeting).
So the basic idea is to put the mLPAR's rootvg on SAN disk; if a frame shutdown becomes necessary, either have the partition profile already defined for the host on another frame, or have an HMC script that can generate it quickly (we're not there yet with scripting mLPAR creation, though perhaps others are), and then quickly bring the host back up on the other frame. Of course you must have all of the appropriate zoning set up on the SAN, for rootvg and all app/data VGs, but that's another group's job. We don't use SRDF at our site (hoping that we will soon, pending the outcome of the current DRP re-evaluation), but I think the situation is fairly similar.
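On the scripting side, the HMC CLI does let you create and activate a partition from an ssh session, which is where we expect our script to end up. Here's a rough sketch; the frame name, LPAR name, and profile values are all made-up examples, and the attribute list is abbreviated (a real mLPAR definition needs virtual adapter slots for the VIO client, memory/CPU values tuned to your frame, etc.):

```shell
# Hedged sketch: recreate and activate an LPAR on a recovery frame
# via the HMC CLI. Frame name (p570-frame2), LPAR name (webapp01),
# and all attribute values are illustrative examples only.

# Define the partition on the recovery frame (attribute list abbreviated)
mksyscfg -r lpar -m p570-frame2 -i "name=webapp01,profile_name=normal,\
lpar_env=aixlinux,min_mem=2048,desired_mem=4096,max_mem=8192,\
proc_mode=shared,min_procs=1,desired_procs=2,max_procs=4,\
min_proc_units=0.2,desired_proc_units=0.5,max_proc_units=2.0"

# Activate it with that profile; the SAN rootvg comes along for the ride
chsysstate -r lpar -m p570-frame2 -o on -n webapp01 -f normal
```

A handy trick is to dump the existing definition with `lssyscfg -r prof -m <source-frame>` and massage its output into the `-i` string for `mksyscfg`, rather than hand-typing attributes.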
Sorry for the ridiculously long post. To sum up, I'd say give it a try and see what happens. It will make your DRP tests a much more relaxing exercise than restoring mksysbs and then finding out an hour later that the image on tape won't restore correctly!
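And on that note, even if you stick with mksysbs as a fallback, you can at least verify an image is readable long before DR day. A minimal sketch, assuming you back up to a file (paths are examples):

```shell
# Hedged sketch: create a rootvg mksysb to a file and verify it is
# readable before you need it at DR. The /backup path is an example.

# Create the image (-i regenerates /image.data first)
mksysb -i /backup/$(hostname).mksysb

# Verify the backup header and table of contents are intact
listvgbackup -f /backup/$(hostname).mksysb | head -20
```

It won't catch every bad tape, but a mksysb whose table of contents won't even list is one you'd rather find out about now than at the recovery site.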
I'll try to post our findings of how things work out for us, and I'd appreciate it if you could do the same if you attempt the SRDF procedure. I'd love to be able to push management by saying that I know of another person successfully using that kind of replication technology for DR. It sucks going to DR and spending 24 solid hours restoring mksysbs and recovering TSM, which is exactly what I'll be doing in two weeks' time.
Hope you find this info useful >< bOOtnix