Every cluster is different. I've seen some database clusters where the only thing the cluster controls is the filesystems (like that's not a disaster waiting to happen...). In that particular case, off-lining cluster resources without DBA involvement could make for a bad day. Since it looks like you might not be familiar with the nuances of this cluster, here's what I consider the safe route for DB servers:
Do hastatus -sum, and note the group that controls the database. Then look at
/etc/VRTSvcs/conf/config/main.cf
and see what that group actually does (or look via the hagrp and hares commands). Assuming the database itself is controlled by the cluster, bring it down like this -
In one window:
tail -f /var/VRTSvcs/log/engine_A.log
In another:
hagrp -offline <database group> -sys <system it's running on>
Watch hastatus, and/or the log file you're tailing. If things go down smoothly, then great. If it hangs up waiting on the DB, let the DBA do their thing. The log will usually tell you everything you need to know. Be patient; depending on the DB, it can take a long time to come down.
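If you'd rather check the group state from a script than eyeball hastatus, the group-state lines in `hastatus -sum` output are the ones prefixed with "B". A minimal sketch; the sample output below is illustrative, not from a real cluster, and field widths will vary:

```shell
# Illustrative "hastatus -sum" group-state output. "B"-prefixed lines
# carry: Group, System, Probed, AutoDisabled, State.
hastatus_sample='-- GROUP STATE
-- Group       System   Probed   AutoDisabled   State
B  oragrp      node1    Y        N              ONLINE'

# Print group, system, and state for each group-state line.
printf '%s\n' "$hastatus_sample" | awk '$1 == "B" { print $2, $3, $6 }'
```

In real use you'd pipe `hastatus -sum` straight into the awk instead of the canned sample.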
Once the cluster and the DBA are both satisfied that the DB is down, you can usually then do a hastop -all, and the cluster should pretty easily take care of the dependencies. Wait for it to complete, and help it along if necessary using the info from the log file you're tailing.
Personally, if I'm not 100% comfortable with the system I'm on, I'm paranoid. In that case I like to do everything with cluster nodes one at a time: offline all resources on all nodes of the cluster and stop VCS, then shut down one node, then the next, and so on. Same on the way up. Bring nodes up one at a time if you want to be extra careful, and let VCS find its brain on one node before another tries.
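The one-node-at-a-time approach might look something like this. A sketch only: the shutdown command shown is Solaris-style, and your platform's equivalent may differ:

```shell
# On each node in turn, once the service groups are down:
hastop -local          # stop VCS on this node only (add -evacuate if you
                       # need still-online groups migrated off first)
shutdown -y -g0 -i5    # power the node off (Solaris syntax; adjust for
                       # your OS)

# On the way up: boot one node, watch "hastatus -sum" until VCS is
# RUNNING there, then boot the next node.
```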
I've brought down CFS nodes at the same time and ended up with goofy fencing issues (in hindsight, I should have fully closed out GAB and LLT first). It's never happened when I bring them down one at a time, so if I have the time, I like to do it that way.
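Fully closing out the stack on a node means unconfiguring it top-down after VCS is stopped. A sketch, assuming standard Storage Foundation utilities are in your path (exact behavior varies by VCS version and platform):

```shell
hastop -local     # stop VCS on this node first
vxfenconfig -U    # unconfigure I/O fencing
gabconfig -U      # unconfigure GAB (group membership/messaging)
lltconfig -U      # unconfigure LLT (low-latency transport) last
```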
Anyway, hastop -all would probably work just fine on a properly configured cluster. The fun is when the cluster isn't properly configured. And unless you know for sure either way, it's best to play it safe.