Help finding a Unix friendly RAID 1 backup

First time poster and a very new Unix user, so I'll just pre-apologize for stupid questions now.

Does anybody know of a good RAID 1 hard drive backup that is Unix friendly? I want to avoid any hardcore programming. Can you recommend both NAS and non-NAS options? I need to do nightly backups from a Unix data server running SAMBA/SWAT that currently has ~300 of 420 GB used split between public and user folders. This is for an office and involves sensitive data so I need a safe and secure option.

This is what I was able to find online that seems to fit what I'm looking for:
Buffalo Technology TeraStation Duo TS-WX2.0TL/R1, 2x1 TB -- $368.98
Synology DiskStation DS211, 2x1 TB -- $550.99
Netgear ReadyNAS Duo 2-Bay RND2210, 2x1 TB -- $393.60
Fantom Data Dock II DDQ-2000, 2x1 TB -- $269.95
Do any of the above make sense? From what I can tell, only the Netgear is Unix-friendly out of the box; the tech guys at Fantom couldn't tell me whether the Data Dock II was or not. Can you recommend any of these or other models? I don't really think I need the NAS option, and it seems you pay considerably more for that. Should I be looking at an entirely different type of data storage? (Cloud storage is not an option.)

In the meantime, while I figure this out, my boss wants me to back up the data ASAP. I was thinking about getting a consumer-grade 500 GB or 1 TB external with an ethernet port and simply backing up the data manually via Windows. I was thinking this would provide a good stopgap and, once the RAID 1 is set up, could simply be backed up manually on a weekly basis, providing essentially an additional disk to the RAID 1 array.

For this I was deciding between these two:
Iomega Home Media 34337, 1 TB -- $99.99
Buffalo LS-CH1.0TL, 1 TB -- $99.99
Any help is greatly appreciated. Thank you.

Define "UNIX-friendly". Which UNIX? Furthermore, what's your architecture and system? What kind of disks do you want to use?

A USB or ethernet drive would be a good stopgap, as any backup is better than no backup. However, Windows has no respect for UNIX permissions, so just blindly copying files could result in much hair-pulling later. You could use the udpcast utility and do something like this:

# On UNIX
tar -cpf - /path/to/files/i/want/to/backup | udp-sender
# On Windows
udp-receiver > file.tar

...to just keep one giant tar, which should preserve the permissions of the files inside it. To restore:

# in Windows
udp-sender < file.tar
# in UNIX
udp-receiver | tar -C /path/ -vxpf -

Of course, make sure the drive is formatted with NTFS or something which allows Windows to create >4GB files. FAT won't do.
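If the drive did come FAT-formatted, reformatting from a Windows command prompt is a one-liner. A sketch, assuming the drive shows up as E: -- double-check the letter before running it:

format E: /FS:NTFS /Q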

And if you can have the UNIX system use the drive directly? All the better.

You might have trouble reusing the disk in a RAID once you want to build one, since you'd quite likely need to blank its contents before making it part of an array.

If you want reliable hardware, I would suggest avoiding consumer-grade stuff. Especially avoid jmicron chipsets. 3ware works pretty closely with UNIX vendors.


To Corona688
By "UNIX-friendly" I meant that I am ideally looking for something that is compatible with Unix out of the box, in order to minimize additional coding and therefore potential problems and headaches. From what I understand, in some cases the internal cards that coordinate the RAID 1 aren't necessarily designed to work with a Unix system. I would like to avoid those.

To be honest, I don't know what type of Unix I am using, or what the architecture and system is; where would I find that information? (I just started this job and am essentially on my own technically; nobody even told me the server was on site in a closet for about a week and a half.) I do have access as the su if that can help me find this info.

What do you mean by types of disks? Speed, size, manufacturer?

I think I get your point about copying from Unix to Windows. I basically need to format the drive first, then send the data over as one large file that can be restored later if needed, which would preserve individual users' permissions?

Reliability is key, I will look into 3ware.

Thank you so much for your help and suggestions, I really appreciate it.

Avoid software RAID then. We use it and it works decently well, but it took a lot of frustration to get going.

A hardware RAID, on the other hand, can present multiple disks to the operating system as a single hard drive. Configuring which drives are part of an array often becomes an extended CMOS setting completely independent of the installed OS: you might get a 'press ESC to configure drives' message on boot before the OS actually loads. Assuming your server is a PC, that is.

It's not so much that they're not designed to work with UNIX, as much as they may not have bothered writing UNIX device drivers. This may not be as important as it used to be (for PC hardware, anyway) since most disk controllers are AHCI-compliant these days and work fine with a generic driver.

That's a pretty important question... "UNIX" is a completely generic term; your wireless router might be running a kind of UNIX, and so do most supercomputers. Obviously you can't run software from one on the other or fit hardware from one into the other. In a shell, try uname and uname -a to find out what your system is.

That alone won't tell you what kind of card slots this server has and which, if any, are free, so you might need to take a look at the hardware itself too.

Mostly, interface. SATA-II? SATA-III? SCSI? You probably want SATA. SATA hotswap cages are cheap these days.

Exactly. The permissions, timestamps, users, and everything else all get bundled up along with the files when you make a tar with -p. Formatting it in NTFS is necessary for Windows if the drive came with FAT, because FAT can't hold files larger than 4 gigabytes.
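If you ever want to double-check that the permissions really made it in, you can list the tarball's contents without extracting anything. A quick sketch, assuming the file is named file.tar:

tar -tvf file.tar | head

The v listing shows the stored owner, group, and mode bits for each file.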

Good deal. We tried to go cheap and tried a jmicron controller, which caused some (fortunately recoverable) data corruption. Never again.

---------- Post updated at 10:23 AM ---------- Previous update was at 10:08 AM ----------

Also: A RAID isn't exactly a backup. It's tempting since it's automatic and improves your speed too. It somewhat protects you from a single-disk failure -- that's all. (And single-disk failures will occur more often because you're running more disks.) Any other kind of problem -- an out-of-control program, dying disk controller, murderous power supply, fire, lightning strike, utility company, theft, volcano -- is still quite capable of swallowing your data entire. A trustworthy backup is when you make a copy and mail it somewhere else.


Thanks again for your help so far Corona688.

An update so far:
I used the uname command, we're running Linux.

The decision was made to purchase a cheaper 1 TB external hard drive as a stop-gap measure to make sure the data is backed up before moving ahead with setting up an automatic backup to a dedicated raid 1 array. We purchased a Buffalo Linkstation Live LS-CHL.

The drive can be formatted to FAT, NTFS, XFS, or HFS+. The server I want to back up is an NTFS file system. I believe the drive comes XFS standard. Should I format the drive to NTFS for compatibility, or is that a non-issue?

The user manual for the drive states the disadvantages of NTFS as:
1) Read-only from the LinkStation or a Mac.
2) Not suitable for backup from the LinkStation.

The relevant disadvantage of XFS is:
You cannot read data by directly connecting to a PC.

Once that is determined, what are the steps I need to take to make a backup .tar of the server files based upon the code you provided earlier?

Does the Unix command simply go in the command line with root access? I assume I change the /path/to/files/i/want/to/backup to the path relevant to my server? What if I simply want to copy all the files on the drive? Do I designate the name of the backup file to be sent before the udp-sender or is built in to be named when received?

On the Windows end, where is that command entered? The command prompt? Can I name the file anything I like (probably something along the lines of backupmmddyy.tar), or are there restrictions?

Sorry for what I am sure are elementary, if not asinine, questions; I really appreciate the responses.

All right so far.

??? I thought you were running Linux!

If you intended to plug the hard drive into anything directly, Windows wouldn't understand XFS. But that should be a non-issue for network storage.

I'm not sure why it says this.

True for Windows, but XFS isn't completely alien -- we use it here at work for our Linux file server. Linux can read it, if it's configured to do so.

I don't have a Buffalo Linkstation Live LS-CHL so I can't tell you how you'd be able to connect to it, but once you do, you open a DOS prompt, change to the drive you attached your NAS as, and run the command. You'll need to put udpcast on the same drive or in your PATH. You can download a Windows version of udp-receiver from UDP Cast
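For example, something like this at the DOS prompt (the drive letter and filename are just placeholders):

E:
cd \
udp-receiver > backup-031511.tar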

Well, you need to install udpcast first of course. And you can run it under any user with sufficient access privileges to get at all the files in question.

Yes.

All the files on which drive? Linux doesn't have a C: or D: like Windows; all your partitions are accessed through the same file tree. Some folders, as chosen by /etc/fstab, become mount points for partitions -- files inside them reside on that partition.
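For example, a made-up /etc/fstab line like this is what would put /home on its own partition:

/dev/sda6   /home   ext3   defaults   0   2

Anything saved under /home then physically lands on /dev/sda6.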

You can't just do a blind copy of everything while the server's running. There are things that shouldn't be copied while in use, and lots of things it wouldn't make sense to bother copying anyway.

If you really want to do a true, blind copy of the server that you could copy back into a new drive and boot without it knowing the difference, you shouldn't do so while the server's operating, you should boot from a livecd and do so with minimal effect on the system itself. But if you don't actually know how to use Linux yet, your options are very limited.

So, what's your server actually doing? If you don't know, could you find out?

Windows creates the file, Windows decides the name.

Yes.

You can name it anything you want.

---------- Post updated at 03:21 PM ---------- Previous update was at 03:17 PM ----------

It may be simpler, and faster, to plug the drive into the server directly, mount it, and create the tarball on it that way. Assuming your Linux server can understand XFS.


Quote:
The server I want to back up is an NTFS file system.

Okay, so here is the setup as best I know it. We have a Unix machine running Linux (I logged in and used the uname command to check) that is used as a data server. Users in the office, mostly on PCs but some on Macs, can access the drive (through Windows) by mapping a network drive under Tools in My Computer and signing in as a registered user. When one does that, the details section in the left info bar lists the name and the physical address, then "Network Drive", "File System: NTFS", and then the free space and total size. I can also access the server through Samba/SWAT, SecureCRT, and the physical terminal itself.

By doing, do you mean what is it used for? If so, the department uses it to store research data. From what I understand, most users use it to back up data from their computers, but there may be some users who save data primarily or only to the server for personal or security reasons. The server has joint shared space, where anybody has read and copy privileges but only the author of a file has edit/delete rights. In addition, each registered user should have a personal space that only they can see. To be quite honest, I don't know much more beyond that, nor, I think, does anyone else at this point. I quite accidentally stumbled onto this problem while looking for a fix for something else (trying to make a public folder on the shared space that gave all users full privileges over any files placed there) and contacted a number of current and former employees to figure out what had been done in the past in terms of backups, which appears to be nothing.

So that would simply be plugging the drive into the server via USB, mounting it (with some commands), and creating the tarball (more commands)? How can I tell whether my Linux server understands XFS? The uname -a command gave more info; would that help? Roughly how long would that take, versus doing it through udpcast as you suggested above?

When I do the backup do I need to prevent other activity on the server?

Thanks

At the level they're using it they say "OK server, give me <filename>" and the server goes "okay, here is <filename>". They don't use it at the filesystem level, and only care what it is because FAT won't let them create files >4GB while NTFS will.

It's almost certainly not actually NTFS.

I mean in more detail. What daemons are running to do what, using what files?

Someone had to set this up at some point in time.
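If you want to start poking at that yourself, a couple of generic starting points (assuming a reasonably standard Linux):

ps aux
netstat -tlnp

The first lists every running process; the second shows which programs are listening on which network ports (run it as root to see the program names).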

$ cat /proc/filesystems
nodev   sysfs
nodev   rootfs
nodev   bdev
nodev   proc
nodev   cgroup
nodev   cpuset
nodev   tmpfs
nodev   binfmt_misc
nodev   debugfs
nodev   sockfs
nodev   usbfs
nodev   pipefs
nodev   anon_inodefs
nodev   rpc_pipefs
nodev   devpts
        ext3
        ext2
        ext4
nodev   ramfs
nodev   hugetlbfs
        vfat
        msdos
        iso9660
nodev   nfs
nodev   nfs4
nodev   autofs
        xfs
nodev   mqueue
nodev   selinuxfs
        ntfs
$

(don't get excited over the NTFS entry. Linux's built-in NTFS support is still poor and, by necessity, read-only. There's a better driver under development but it's external to the kernel so quite cumbersome to use right now.)

Probably.

I have no idea. How long it takes depends on how large it is and how fast your computer is and how fast your drive is.

Direct connection would probably be faster.

Depends what you're backing up, and what activity you're preventing.

You need to learn more about your system. I don't even know your distro, so I can't begin to help you find out what's running on it.

A good start would be all the output of

df -h
fdisk -l

I don't know. How would I determine that? I tried a ps -aux but it didn't seem to tell me anything that made sense to me; only one entry said daemon.

Yes, unfortunately, that individual is no longer with the company. I contacted him and he was either unwilling or unable to help me further with what he had done. (Though it would be amusing to imagine a Linux server spontaneously setting itself up somewhere; divine code-eption? OK, bad joke, I'll stop now.)

uname -a command return:
Linux servername.location 2.6.26-2-686 #1 SMP Wed May 12 21:56:10 UTC 2010 i686 GNU/Linux

Fair enough. You answered my poorly written question despite its ambiguity. I understand it is dependent on computer speed, disk size etc.

Okay, here it is:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5              37G  948M   34G   3% /
tmpfs                1014M     0 1014M   0% /lib/init/rw
udev                   10M  672K  9.4M   7% /dev
tmpfs                1014M     0 1014M   0% /dev/shm
/dev/sda1             464M   19M  421M   5% /boot
/dev/sda6             419G  232G  165G  59% /home
fdisk -l
Disk /dev/sda: 499.9 GB, 499930628096 bytes
255 heads, 63 sectors/track, 60779 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000da2f6

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          61      489951   83  Linux
/dev/sda2              62       60779   487717335    5  Extended
/dev/sda5              62        4924    39062016   83  Linux
/dev/sda6            4925       60363   445313736   83  Linux
/dev/sda7           60364       60779     3341488+  82  Linux swap / Solaris

I hope that helps clarify things. Again, thank you for your continuing help.

Did you get the output of cat /proc/filesystems ?

Writing a longer reply, but wanted to catch you while you were still online.

---------- Post updated at 03:18 PM ---------- Previous update was at 02:34 PM ----------

Those help a lot. You've got a single-disk IBM-PC-compatible system. There's no worrying about which disk to back up, and the backup drive you purchased is definitely larger than the disk the server uses. And all the files you care about -- your company's data files -- are probably all in /home/, stored on /dev/sda6, nicely separated from the rest.

You've got two options:

A) Offline backup. Turn your server off, boot some backup software, run the backup, reboot, done. It won't be able to serve files while it's backing up.

500 gigs might take 15 hours on a 100baseT network, or even twice that if half-duplex, so I'm not sure a network backup is practical. Too bad, since a udpcast boot CD would make it much easier for you.
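(Back-of-the-envelope: 100baseT tops out at 100 Mbit/s, which is 12.5 MB/s in theory and closer to 10 MB/s in practice; 500 GB at 10 MB/s is 50,000 seconds, or roughly 14 hours.)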

Instead I'd suggest booting a Gentoo minimal liveCD. The amd64 disc is probably better if your system can boot 64-bit at all (whether your server's OS is 64-bit isn't relevant for an offline backup, just your CPU). You can make your server reboot cleanly by running /sbin/reboot; it may take several minutes as it tries to shut things down in an orderly fashion. When it reboots, make it boot the Gentoo CD; it should drop you to a raw Linux root prompt. From there you can do this:

  1. fdisk -l to see what your main hard drive is. It'll almost certainly be /dev/sda but it's good to be sure! (Ignore sda1, sda2, etc -- those are partitions. We want the whole disk.)
  2. chmod 400 /dev/sda* to prevent yourself from writing to your company drive and any partitions on it. Just insurance. The setting doesn't exist outside of the livecd's tiny mind, so it'll forget this next reboot.
  3. Plug in your Buffalo disk with USB, wait 15 seconds, then fdisk -l again. It will probably show up as /dev/sdb. Press enter to get a prompt back if kernel debug messages print garbage over it.
  4. dd if=/dev/sda of=/dev/sdb bs=16777216 && /sbin/poweroff or, "copy the raw contents of /dev/sda into /dev/sdb, then turn off the server". It will be stonily silent while doing this (see the aside after this list for a progress trick), but your hard drive activity light will suddenly be mostly solid-on. (The bs=16777216 is just for efficiency -- copy 16 megs at a time instead of 512 bytes at a time.)
  5. Once it finishes, it will turn off (literally, power itself completely off) to let you know the backup is complete and it's now safe to remove the USB drive and CD and let the server boot back up normally. If an error happens, it won't turn off; it'll just go back to a prompt and wait.
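The aside on step 4: GNU dd is silent by design, but you can ask it for a progress report from a second console -- the livecd usually has one on Alt+F2. A sketch, assuming only one dd is running:

kill -USR1 $(pidof dd)

dd will print how many bytes it's copied so far and carry on.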

This will take roughly four to eight hours, I think. In the end you'll have a raw, bare-metal backup; Windows won't be able to use it, but Linux can, and if your server's hard drive dies, you could crack the drive out of your Buffalo's case and expect it to boot normally inside your server. (Assuming, of course, it has the proper connectors.)

There might even be ways to keep the /home partition on it fresh once you make it since your Linux server will still be able to talk to this disk.

B) Online backup. It won't back up the whole server, just your company's datafiles. You may be able to do this with minimal interruption to the server's clients, but, files in use may not be backed up properly.

  • Plug the USB drive into the server, wait 15 seconds, fdisk -l . See if it lists the partitions on your USB disk. There may be several but the largest one should be the data one.
  • Try to mount the disk. mkdir /mnt/backup then mount /dev/sdb1 /mnt/backup . May not be sdb1 of course!
  • tar -vcpf /mnt/backup/filename-date.tar /home/ will create one giant tarball named filename-date.tar. Filenames will pour across the screen as it adds them. (See the note after this list for datestamping the name automatically.)
  • Once it's done, umount /mnt/backup then sync . Once the sync command finishes you know it will be safe to power down and unplug your USB drive.
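The note on datestamps: to get the backupmmddyy.tar-style names you mentioned earlier without typing the date by hand, something like this should work (assuming GNU date, which Linux has):

tar -vcpf /mnt/backup/backup-$(date +%m%d%y).tar /home/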

Sorry, didn't realize in your earlier post you were asking for this as well.

$ cat /proc/filesystems
nodev   sysfs
nodev   rootfs
nodev   bdev
nodev   proc
nodev   cgroup
nodev   cpuset
nodev   debugfs
nodev   securityfs
nodev   sockfs
nodev   pipefs
nodev   anon_inodefs
nodev   tmpfs
nodev   inotifyfs
nodev   devpts
nodev   ramfs
nodev   hugetlbfs
nodev   mqueue
nodev   usbfs
        ext3
nodev   rpc_pipefs
nodev   nfsd
$

I'm not seeing XFS or NTFS...

It seems option B might be easier but (from what I can gather) has the following limitation: it only copies the data files, not the entire hard drive. So, with the first option, if the server hard drive fails, I could conceivably turn the external hard drive into the new server hard drive (assuming, as you say, the connections match). With the second option, if the server hard drive fails, I would need to purchase a new server hard drive, configure it accordingly, and upload/unpack the .tar file onto the new drive. Is this correct? Do you have any preferences/suggestions for what you would do?

Thanks again!

Exactly.

Yep.

You're on a roll, that's exactly what would happen.

The main advantage of the online backup, besides being easy, is that you could conceivably access the files on a pure Windows system if you really, really needed to get at them. You'd need to install something like 7zip to extract the tarball, and it'd take a long time, but you could do it.

Except it doesn't look quite so easy now, since your system can't understand XFS. You'd have to reformat the drive as something else, or install XFS drivers. It's possible you already have them and just haven't loaded them yet -- try modprobe xfs. (The offline backup doesn't care what's on the USB drive -- it overwrites it all raw.)

I like bare-metal backups. Having an entire working installation to throw in when things go seriously pear-shaped has let me stumble through a few awful mistakes (ahem, "learning experiences") mostly unscathed. It's neither quick, pretty, nor elegant, but it's powerful. It's harder to do for a complicated system, but yours has only one disk.

You could even keep the backup "fresh" in a similar way to the online backup, once you have it, since almost nothing but user files are going to change.

How long are we talking? For ~200+ GB -- hours, days? My Windows machine is a Core 2 Duo E8600 running at 3.33 GHz with 3.25 GB RAM.

I think the drive comes formatted XFS; reformatting to another format shouldn't be too difficult. What would you suggest, NTFS? Based upon the output of the cat /proc/filesystems command, the system doesn't support NTFS either. Would NTFS still allow the tarball to be read by Windows in a pinch?

Does this command just check whether the drivers are there, or load them if they are? Is this safe, stability-wise? I don't want to make changes that jeopardize the safety of the data before it's backed up...

By that do you mean it doesn't care what files are on the drive or what format it is, or both?

So what you're saying is, I would make a mirror of the entire system hard drive, and then weekly, could do backups, more similar to the online style, where I just upload file changes?

This would be opposed to downloading a tarball onto the drive, and then moving forward downloading new tarfiles on a weekly basis?

Choosing between these options should all be kept in the context that, at some point, I am going to ideally be installing the RAID 1 backup to do nightly backups. At that point, the external drive will be either kept in a separate location in the office or as you have suggested kept off-site (if possible) and brought in to do weekly backups.

When I do install the RAID system, is it better to have it simply back up data files or act as a mirrored system? Planning ahead could help determine what should be done now.

Finally, if I really wanted to, could I do both? Would there be any advantage? disadvantage?

Thanks!

The limit is mostly disk speed. You're reading one whole disk and writing another at the same time, so the slower of the two (plus the USB link) sets the pace. The time might approach double-digit hours.
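(Rough numbers: your disk is about 500 GB, and USB 2.0 sustains perhaps 20-30 MB/s in practice, so a full raw copy works out to somewhere between 4.5 and 7 hours -- in line with the four-to-eight-hour estimate earlier.)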

Definitely not NTFS. I already noted that the "easy" NTFS driver is read-only. There's a better one available, but even the "good" one is still incomplete -- and intentionally difficult to get. They had to do that for their own sanity: people were ignoring an amazing quantity of flashing red "USE AT YOUR OWN RISK" warnings and getting mad when important data was lost.

All it does is load the ability to understand XFS filesystems, if available. It's safe.

All dd does is this:

  • Read sector 0 from /dev/sda
  • Write sector 0 to /dev/sdb
  • Read sector 1 from /dev/sda
  • Write sector 1 to /dev/sdb
  • ...
  • Read sector N from /dev/sda
  • Write sector N to /dev/sdb
  • End of file. Quit.

It dumps the contents of sda into sdb with complete ignorance of the meaning of the data in sda and complete disregard for any current contents of sdb. (Getting them backwards would be very bad -- hence the chmod step to prevent disaster.)

The way disks and partitions work means it doesn't have to understand what's in sda to make a perfect copy. Dumping it raw replicates everything perfectly, including boot sectors, boot loaders, partition layouts, partition types, hidden recovery partitions, empty space, remnants of deleted files sitting in empty space, etc. -- including anything you didn't know or care about. The only caveat is that the destination disk must be of equal or larger size than the source.

This single-minded blindness means it can even clone a system alien to it. Contents that seem like nothing but garbage will be replicated faithfully. I regularly replicate Windows machines from a Linux boot disc when installing a larger hard drive -- clone the old disk onto the new one and enlarge the partition later. I've done the same with Macs once or twice.
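If you want to verify a clone afterward, you can compare the two devices byte for byte, stopping at the end of the smaller one. A sketch, assuming GNU cmp and blockdev are on your livecd and the disks are still sda and sdb:

cmp -n $(blockdev --getsize64 /dev/sda) /dev/sda /dev/sdb

Silence means they match; expect it to take about as long as the copy did.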

Yeah.

An up-to-date clone of the server would be a nice thing to have handy if you're trying to set up a RAID. If the RAID controller does anything unexpected like blank your drives when you configure an array, you can dub your backup back onto it to get everything back.

It'd hardly be a RAID if you didn't mirror or stripe or extend the disks in some way. It'd just be an extra hard drive.

If you want to keep things simple, a hardware RAID mirror is the way to go. It'd act like the system you already have -- a one-disk system -- and all the advice I've given you so far would still apply. There are more complicated, fault-tolerant setups like RAID 5, but those take a lot more disks and hardware -- money that's probably better spent on things like a good uninterruptible power supply and a fire safe for your backups.

Absolutely. A hardware mirror can swallow a single-disk failure and keep going, plus weekly backups to your external drive can save you from more drastic things.

So I used the lsmod command, then the modprobe xfs command and then the lsmod command again, the difference was the following was added:

Module                  Size  Used by
xfs                   458072  0 

I then tried cat /proc/filesystems command again and received:

nodev   sysfs
nodev   rootfs
nodev   bdev
nodev   proc
nodev   cgroup
nodev   cpuset
nodev   debugfs
nodev   securityfs
nodev   sockfs
nodev   pipefs
nodev   anon_inodefs
nodev   tmpfs
nodev   inotifyfs
nodev   devpts
nodev   ramfs
nodev   hugetlbfs
nodev   mqueue
nodev   usbfs
        ext3
nodev   rpc_pipefs
nodev   nfsd
        xfs

I assume that means the server does in fact support XFS? Is it normal that the module is so large? It's twice the size of the next largest one.

OK. So I will potentially set up a mirror on the external hard drive using the first option you provided. Next week, when I want to start weekly backups, what do I do -- as in, how will those backups be done? Do I redo the entire process weekly? Can I set it up to only update the changes? (Is this where a cron job comes in?) Or do I mirror the drive and then download weekly tarballs (is it even feasible or wise to have both systems on the same drive)?

Yep, you have XFS support. Doesn't matter if you're cloning the disk anyway.

It's a pretty complicated filesystem.

You've got a lot of options there. You could clone the /home filesystem again (might be doable without powering the system down, if you remount it read-only and sync, since that won't deny you access to the rest of the system), or freshen the files with rsync, or make a more complicated backup scheme where you make tarballs only of files that've changed since last time, or more... I won't be able to lead you lock-step every step of the way, because what you do depends very much on what you want, what you have, and how your system is set up. You're going to need to learn more about your system.
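For the flavor of it, a minimal sketch of that remount-and-clone idea (untested, and the device names are assumptions -- on a raw clone of sda, the backup drive's /home partition would most likely show up as sdb6; double-check with fdisk -l first):

# stop whatever is serving files first (e.g. Samba) if clients have files open
mount -o remount,ro /home
sync
dd if=/dev/sda6 of=/dev/sdb6 bs=16777216
mount -o remount,rw /home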

Okay, so I have a few fairly specific questions you may be able to help me with:

Based upon the tutorial you provided for the two options:

For Option A:
For the Gentoo minimal liveCD, I went to their website and I think I found the correct file on one of their mirrors. Using RIT's mirror as an example, found here: Index of /gentoo/releases/alpha/current-iso/
do I want to use this file: install-alpha-minimal-20110319.iso ?

(Neither the OS nor the CPU is 64-bit -- I checked, or at least I think I did.)

When I do power down the server and use the LiveCD, you said to "make it boot the Gentoo CD"; how do I make it do that?

For Option B:
If I make the tar file and in the future want to back it up again on a weekly basis, how would I change this code to make it apply?

tar cvf /dev/st0 `find / -mtime -1 -type f -print`

From what I understand, the -mtime -1 portion of the code determines how far back to search for new files; in my case, would I make it -mtime -7 ?

Is this correct?

My plan, assuming I can get the appropriate minimal liveCD file, is to do the 'bare metal' backup tomorrow and do weekly backups from there. If I don't think I can get that to work (i.e., I don't feel comfortable with it), I'll go the tarball route for now.

If I go the tarball route for now on the external drive and then next week decide to try the bare metal on the same drive, what happens to the tarball? What if the situation is reversed?

Thanks again.

No, you don't have an alpha. :wall: You have an x86 or an amd64.

Try cat /proc/cpuinfo to see what CPU you really have.

:wall:

1) Turn it on
2) Check the CMOS settings to make sure the CDROM boots first (BIOS dependent)
3) Put in the disk
4) Wait

Forget option B. You can't do that and a bare-metal backup on the same drive. You really ought to have a proper backup of the entire system if you're intending to put in a RAID at some point.

How you'd freshen the bare-metal backup would be completely different from option B. There are a couple of ways to do it and none of them involve tar. How you do it depends on a) what you want to do, b) what things you need to shut down to do so, and c) what device the USB drive shows up as. We still know next to nothing about what your system is doing, so I can't advise you on any of those.

This is what I got:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 15
model           : 2
model name      : Intel(R) Pentium(R) 4 CPU 3.40GHz
stepping        : 9
cpu MHz         : 3391.613
cache size      : 512 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 2
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe pebs bts cid xtpr
bogomips        : 6789.30
clflush size    : 64
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 15
model           : 2
model name      : Intel(R) Pentium(R) 4 CPU 3.40GHz
stepping        : 9
cpu MHz         : 3391.613
cache size      : 512 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 1
apicid          : 1
initial apicid  : 1
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 2
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe pebs bts cid xtpr
bogomips        : 6783.37
clflush size    : 64
power management:

:wall:

OK, that seems straightforward. Sorry for the stupid question.

Fair enough. When I get to the point of freshening the bare-metal backup, how would I go about figuring out that info so that you can continue to provide awesome advice?

Thank you for your quick replies and patience.

I think you're right, you have a 32-bit machine. Use the install-x86-minimal CD.

When the CD boots it'll pour text across the screen for a bit. That's normal. Wait until it brings you to a red root prompt to start typing things in.

Um, try cat /etc/issue ? I still don't know what your system's distro is and am running out of ways to find out.

How you'd freshen the bare-metal backup? There are a couple of ways. All of them would involve mounting the disk somehow (it might be sdb6 or sdc6 or something like that, but not sda6) and updating its contents.

1) You could halt whatever service is serving files, cp -Rp /home/* /mnt/whatever , and start the service again. Simple, but slow, since it copies files it already has, and it won't delete anything that was deleted from your server in the meantime, so it'll tend to accumulate old files.
2) Same as 1, but delete everything first. Even slower since it does even more work, but won't accumulate dead files.
3) rsync, or something like rsync, to keep them synchronized. It's smart enough to trawl through the drives and only update changed things. I'd need to research how to do this safely and sanely, but a rough sketch is below.
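A first-draft sketch of what that might look like (untested; the mount point is an assumption, and think hard about --delete before trusting it):

rsync -av --delete /home/ /mnt/backup/home/

The trailing slashes matter to rsync: this says "make the contents of /mnt/backup/home/ identical to the contents of /home/, removing anything that no longer exists on the server."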