Any ideas how to accomplish this storage move?

We have a datacenter in another part of the country with roughly a 100 Mbit WAN link between it and the local datacenter. The remote datacenter has about 20 TB of data spread across roughly 75 servers, all of which must be moved to this datacenter with minimal downtime. Please help me come up with the best way to move this data. We figure that at maximum throughput on the WAN we might be able to copy 300-400 GB per day.
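As a sanity check on that estimate, here is a quick back-of-envelope calculation, assuming the link itself is the only bottleneck:

```shell
# Rough capacity math for a 100 Mbit/s WAN link.
awk 'BEGIN {
  link_mbit      = 100                         # WAN link speed, megabits/s
  bytes_per_sec  = link_mbit * 1000000 / 8     # = 12.5 MB/s
  gb_per_day     = bytes_per_sec * 86400 / 1e9
  printf "Theoretical max: ~%.0f GB/day\n", gb_per_day
  # So 300-400 GB/day is ~30-40% sustained utilization - plausible on a
  # shared production link. At ~350 GB/day, the full 20 TB takes:
  printf "20 TB at 350 GB/day: ~%.0f days\n", 20000 / 350
}'
```

In other words, even at your optimistic rate the bulk copy runs for roughly two months, which is why a background-sync approach (rather than a big-bang copy during the outage window) is the right instinct.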

Problems:
Downtime must be minimal: a few hours to a couple of days at most.
Most applications are live databases.

One thought I had was to use iSCSI to mount local disks on the remote servers, then use EMC's OpenMigrator to sync the disks. That way the sync can happen in the background and take weeks if necessary. We can tune OpenMigrator so it doesn't put too much stress on the network, and we won't have to impact the running applications. The problem is that our DMX does not currently support iSCSI, and according to our storage admin, adding it would be cost-prohibitive.

Another option is to use rsync. Obviously, we would shut down the database on the day of the move and only sync the delta, but I am concerned it will have trouble syncing the live database in the meantime.

We could try NFS, but there is no easy way to synchronize the incremental data on the day of the move. Also, NFS is prohibited in my company for security reasons.

Does anyone have any other suggestions or know of other products that could handle a server to server copy?

Dump the data to tape(s) and transport them to the second site (you can FedEx them priority overnight within the US, for example).

Restore each server from the backup tape(s), then move only the data that has changed since the dump over the network.

With a little planning, this is an economical solution with minimal downtime.

Is your datacenter on a SAN?
A paid option: Brocade had an appliance they now offer as a service for migrating data.
That's if you can't follow Neo's advice...