How do I create a bootable ISO from a running Linux box?

Hi All,

I have a question about creating a bootable ISO.

I have installed CentOS 5.6 and made a few configuration changes needed for deploying my app, then deployed the app. Now CentOS is up and running on a dedicated box along with my app.

Now I want to create a bootable ISO from this box. Is it possible to create a bootable ISO image from a running CentOS box? If so, could you please help me with this?

Thanks,
Kalai

They're really not the same. Hard disks have partitions; ISOs do not. Hard disks are writable; ISOs are not.

It's certainly possible, but there's no direct translation. You can't just boot an ISO and get all your partitions; you need a special bootloader which arranges them for you and then boots your system in a manner that won't try to mount them again. It also means putting a ton of stuff into RAM or a unionfs so you don't get read-only-filesystem errors everywhere.

Besides, will your system even fit on an ISO?
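For what it's worth, the squashfs route that approach implies looks roughly like this. This is a dry-run sketch only: the paths, the work directory, the isolinux location, and the need for a live-capable initrd are all assumptions (CentOS 5-era tools assumed), and it echoes the steps rather than executing them:

```shell
#!/bin/sh
# Dry-run sketch: turn a running root filesystem into a live ISO.
# Paths and the isolinux location are assumptions; adjust for your layout.
WORKDIR=/tmp/liveiso
run() { echo "+ $*"; }   # print each step instead of running it

run mkdir -p "$WORKDIR/iso/isolinux"
# Compress the live root filesystem, skipping pseudo-filesystems
run mksquashfs / "$WORKDIR/iso/squashfs.img" \
    -e /proc /sys /dev /tmp "$WORKDIR"
# You need a kernel plus an initrd with live-boot (RAM/unionfs) support
run cp "/boot/vmlinuz-$(uname -r)" "$WORKDIR/iso/isolinux/vmlinuz"
run cp /usr/lib/syslinux/isolinux.bin "$WORKDIR/iso/isolinux/"
# Build the bootable ISO image
run mkisofs -o /tmp/system.iso \
    -b isolinux/isolinux.bin -c isolinux/boot.cat \
    -no-emul-boot -boot-load-size 4 -boot-info-table \
    "$WORKDIR/iso"
```

Even then you'd still have to write an initrd that mounts the squashfs and overlays something writable, which is most of the real work.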

I believe your goal is to automate the installation of the OS, including the application, so that it starts working out of the box.

Well, for OS installation, the good old kickstart should do the job. For the application installation and configuration, you may automate it with Puppet. It scales really well when the configuration is complicated and has lots of dependencies.
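For example, a minimal kickstart file might look something like the following. This is only a sketch for a CentOS 5-era installer; the partitioning, password, and %post contents are placeholders you would replace with your own app setup:

```text
# Minimal kickstart sketch (ks.cfg); untested placeholder config.
install
cdrom
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw changeme
bootloader --location=mbr
clearpart --all --initlabel
autopart
%packages
@base
%post
# Re-apply your configuration changes and deploy the app here,
# e.g. run an install script copied from media or an NFS share.
```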

If Puppet is not an option for you, go ahead and write a shell script which would mount an NFS export with all the configs, executables, etc. required for your app and install them locally.
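A minimal sketch of such a script, assuming a hypothetical NFS server and paths (swap in your own). It only echoes the commands so you can see the shape:

```shell
#!/bin/sh
# Dry-run sketch of an NFS-based app install.
# NFS_SERVER, EXPORT, and the destination paths are assumptions.
NFS_SERVER=nfs.example.com
EXPORT=/exports/myapp
MNT=/mnt/appinstall

run() { echo "+ $*"; }   # drop the echo to actually execute

run mkdir -p "$MNT"
run mount -t nfs "$NFS_SERVER:$EXPORT" "$MNT"
# Copy configs and executables into place locally
run cp -a "$MNT/configs/." /etc/myapp/
run cp -a "$MNT/bin/." /usr/local/bin/
run umount "$MNT"
```

You'd typically call this from the kickstart %post section so the box comes up fully configured.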

If you want, you can make a P2V (physical-to-virtual) image of the physical machine with the app installed, to be used as a VM later.

I think what you are looking for is something comparable to AIX mksysb, the HP-UX Ignite make_recovery tape, etc., so that you can clone / DR your server without having a Kickstart/Jumpstart (Solaris)/NIM (AIX) server that you would have to build first.

From what I have seen so far, there is no easy process. When we were running Solaris many years ago, we had to do a set of filesystem saves. Our recovery process was a bit convoluted, but was basically:

  1. Boot the new server to single-user mode from the install CD.
  2. Slice the target boot disk according to our documentation.
  3. Restore the filesystems to the correct slices with ufsrestore.
  4. Delete and re-create the device mappings (can't remember how now).
  5. Remove the encrypted password for root.
  6. Unmount all restored filesystems and boot.
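From memory, the middle steps looked roughly like the following. The disk slice, tape device, and the devfsadm step for the device mappings are my guesses and placeholders, not the documented procedure, and this sketch only echoes the commands:

```shell
#!/bin/sh
# Dry-run sketch of the Solaris recovery steps above (placeholders throughout).
run() { echo "+ $*"; }

run format                                     # step 2: slice the boot disk
run newfs /dev/rdsk/c0t0d0s0                   # filesystem on the root slice
run mount /dev/dsk/c0t0d0s0 /a
run sh -c 'cd /a && ufsrestore rf /dev/rmt/0'  # step 3: restore from tape
run devfsadm -r /a                             # step 4: rebuild device entries (assumption)
run vi /a/etc/shadow                           # step 5: blank root's password hash
run umount /a                                  # step 6, then boot from disk
```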

I'm informed that I will have to build and manage Red Hat servers in future, so I too am after a sensible process to follow. At worst I will probably end up doing something like the above, but that relies on the documentation being kept up to date and can get messy.

Our aim has always been to put back just a base operating system, but our operating system, not just a plain install. We would then rebuild the volume group definitions (AIX savevg/restvg, excluding all files) that we had saved into the root filesystem before the DR backup was taken, and then use the third-party backup tool to put back the data to the non-system filesystems and databases.

I hope that this helps, and that someone else can make some suggestions.

Robin
Liverpool/Blackburn
UK