Deploy a package on many systems

I had a question in my interview today: how would you automate installing or upgrading a package on 200 servers, obviously without going to each system physically?

My answer: an NFS share, a Perl script, and the rpm utility. But I have not done that kind of install/upgrade, so my knowledge is limited.
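A minimal shell sketch of that idea (using a plain ssh loop rather than a Perl wrapper; the package name, mount point, and hosts file are all made up, and key-based SSH as root is assumed):

```shell
#!/bin/bash
# Push approach: loop over a host list and have each host install the
# RPM straight from a shared NFS mount it already has at /mnt/rpms.

PKG="mypackage-1.0-1.x86_64.rpm"   # hypothetical package
HOSTS_FILE="hosts.txt"             # one hostname per line

# Build the command a host should run: rpm -U upgrades the package,
# or installs it when it is not present yet.
remote_cmd() {
    printf 'rpm -Uvh /mnt/rpms/%s' "$1"
}

if [ -f "$HOSTS_FILE" ]; then
    while read -r host; do
        ssh "root@${host}" "$(remote_cmd "$PKG")"
    done < "$HOSTS_FILE"
fi
```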

But I want to know how I would do that on Red Hat Enterprise Linux or CentOS.

Thanks

Using TCL/Expect is another possible answer, at least for me. Here's an example: I do a CVS checkout every day and want to distribute certain files to a list of servers, so I use something as simple as:

#!/usr/bin/expect

set timeout 60

if { $argc != 1 } {
    puts "Usage: $argv0 file"
    exit 1
}

set file [lindex $argv 0]
set user root
set pass password
set remote_path "/opt/install"

foreach host { host1 host2 host3 } {
    # StrictHostKeyChecking=no is needed alongside the throwaway
    # known_hosts file, otherwise scp stops to confirm the host key
    # before it ever asks for the password.
    spawn scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
        $file $user@$host:$remote_path
    expect "*assword:"
    send "$pass\r"
    expect eof
}

The script is very simple and relies on hard-coded parameters, including the security risk of storing the password in plain text, but those could be parameterized. This is just sample logic.

There are purpose-built tools for this, such as Red Hat Satellite for RHEL or Spacewalk for CentOS. You can also set up a YUM repository server and manage the hosts with Puppet or CFEngine.
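If you go the YUM route, each client only needs a repo file pointing at your server; a sketch of what that might look like (the server name and repo id here are invented):

```ini
[local-mirror]
name=Local package mirror
baseurl=http://yumserver.example.com/repos/centos/updates
enabled=1
gpgcheck=0
```

Drop that into /etc/yum.repos.d/ on each host, and a plain `yum install <package>` (run manually, from cron, or by Puppet/CFEngine) pulls from your own server.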

Using an NFS share mounted on all the hosts means they can get the RPM file without resorting to SCP.

I have written a script that installs changes from an NFS-mounted directory containing the packages/patches, plus a flag directory with a subdirectory per package/patch. Each machine checks hourly (via cron); if it does not see a flag file bearing its own hostname in one of the flag subdirectories, it installs that package/patch and creates the flag file on successful completion.
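The hourly check described above might look roughly like this in shell (all paths, the directory layout, and the install command are illustrative, not the actual script):

```shell
#!/bin/bash
# Pull approach: every host scans the flag directories on the NFS
# share; a missing flag file named after this host means the matching
# package/patch has not been applied here yet.

PKG_DIR="${PKG_DIR:-/mnt/nfs/packages}"   # NFS-mounted RPMs, one subdir per package
FLAG_DIR="${FLAG_DIR:-/mnt/nfs/flags}"    # one flag subdir per package/patch
HOST="$(hostname)"

# A package still needs installing when no flag file bearing our
# hostname exists in its flag subdirectory.
needs_install() {
    [ ! -e "${FLAG_DIR}/$1/${HOST}" ]
}

# Mark a package as done for this host.
set_flag() {
    touch "${FLAG_DIR}/$1/${HOST}"
}

if [ -d "$FLAG_DIR" ]; then
    for dir in "$FLAG_DIR"/*/; do
        pkg="$(basename "$dir")"
        if needs_install "$pkg"; then
            # Install from the shared directory; flag only on success.
            rpm -Uvh "${PKG_DIR}/${pkg}"/*.rpm && set_flag "$pkg"
        fi
    done
fi
```

Run from cron (e.g. `0 * * * * /usr/local/bin/check-patches.sh`), this gives you the "each machine pulls its own updates" model without any inbound SSH at all.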

There are various products, like Altiris NS (Windows-hosted), in addition to Satellite Server, etc., that can do this, but that was not what the interview question was asking for...