Are /home partitions worth it?

I'm new to the Linux world and whilst I've been learning the ropes, I've read some conflicting opinions regarding the creation of separate partitions for /home and other directories during OS install.

Some say that having these directories in separate partitions allows you to reinstall without losing your data. Others say that it adds pointless complexity to the system and that some unwanted files from old installations linger after new installs.

What do you people think about this?

If storing certain directories on separate partitions is a good idea, why is this the case? Would it be better to use completely different drives?

Is this different from distro to distro?

Thanks in advance.

There are several good reasons to have separate filesystems (not in any particular order):

  • Filling up the /home filesystem with files or using up all the inodes with tons of tiny files will not affect the operating system
  • In case of a filesystem damage, the loss of data is limited
  • I/O can be balanced over several physical devices
  • /home can be mounted with options that disallow set-uid (s-bit) programs, resulting in higher system security
  • and certainly many more reasons I can't think of at the moment :slight_smile:
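
The s-bit point above can be made concrete with mount options in /etc/fstab. A minimal sketch (the device name and filesystem type are placeholders):

```
# Hypothetical /etc/fstab entry for a separate /home
# nosuid ignores the set-uid/set-gid bits; nodev ignores device files
/dev/sda3   /home   ext4   defaults,nosuid,nodev   0   2
```

Adding `noexec` as well prevents running any binaries from /home, which is stricter but can break some desktop software.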

But of course there are drawbacks

  • free space is distributed over several filesystems, resulting in more unused space. One filesystem cannot borrow space from another (in most cases)
  • on specialized servers, like a DNS server, with no users but the admins, the overhead is unnecessary

Conclusion: in my opinion, the advantages of separate filesystems outweigh the disadvantages by far in most cases. This is valid for all kinds of *ix operating systems.


Thanks for your informative response.

To be honest, I don't know what s-bit programs are and I never actually considered that having separate partitions could be a security control. I'll put this on my list for things to research.

The main reason I was hesitant to create separate partitions is because I'm not sure how much space I'll need and I don't want to be stuck in a few months when either my /home directory is full or the rest of my system is full.

Would I get the same benefits by having all of my data on a separate drive and creating symlinks to my home directories? Would it be better to just store /home on a separate drive or is there some advantage to having partitions on the same drive?

Are there any other directories which you would recommend to create a separate partition for?

By way of comparison: free space on drive C: in Windows refers to the free space of that one partition only.

Newcomers to Linux are usually best served (to get started) by using:

  1. / : The 'root' partition (not to be confused with the directory /root, which is the home of the root/admin user)
  2. /home : The home for all your personal files and configurations

NOTE: Distributions differ in the default UID given to the first regular user. Debian-based systems use 1000, while older Red Hat-based systems (RHEL 6 and earlier) used 500; Fedora has since moved to 1000 as well.

Thus, sharing the /home of a Fedora install with a Debian install might cause ownership issues, unless you adjust the UIDs in /etc/passwd accordingly (better: use a tool like system-config-users) and relabel /home for SELinux.
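
If you want to check which UID your user actually got (and therefore whether two distros will agree on file ownership in a shared /home), the standard tools are enough. The `chown` line is a hypothetical fix-up; adjust the UID and path to your own:

```shell
# Print the current user's numeric UID
id -u

# Look up a user's UID and GID from the passwd database
getent passwd root | cut -d: -f3,4

# Hypothetical fix-up after a reinstall assigned a new UID (run as root):
# chown -R 1000:1000 /home/yourname
```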

The 'pointless complexity' you read about appears when you split off all of these:

  • /opt
  • /var
  • /tmp
  • /home
  • /boot
  • /

As a first-time user, most of those 'used' or 'suggested' mount points are only relevant for company/professional use (meaning: work, not hobby/private use), where there is an actual need for them.

Again, as a newcomer, all you 'need' is:
"/home" and of course "/"
"/" should be somewhere around 8-24 GB, and /home gets ALL the rest!
A common Linux installation takes around 4-6 GB, ~7 GB if you use GNOME 3.11...
Add another 7 GB in case you want to make backup copies of your favourite DVDs: that gives a requirement of ~14 GB, leaving 7 GB of free space for the temp files of the DVD.
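
If you're unsure how much space you actually need, measuring beats guessing. A quick check of current usage:

```shell
# How full is the root filesystem right now?
df -h /

# How much do your personal files occupy?
du -sh "$HOME"
```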

---
EDIT: to address the post that was made in the meantime...
I tried symlinks at the beginning too; they make things more complex than they need to be, and deleting a symlink the wrong way can delete the files it points to (that has happened to me several times, even recently).
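
The accident described above is easy to reproduce safely. Note how only the symlink is removed here; the paths under /tmp are just for the demo:

```shell
set -e
rm -rf /tmp/symdemo
mkdir -p /tmp/symdemo/realdir
touch /tmp/symdemo/realdir/important.txt
ln -s /tmp/symdemo/realdir /tmp/symdemo/link

# Safe: removes only the symlink itself; the target is untouched
rm /tmp/symdemo/link

# Dangerous variant (not run here): "rm -rf /tmp/symdemo/link/*" expands
# through the link and would delete the files *inside* realdir
[ -f /tmp/symdemo/realdir/important.txt ] && echo "data survived"
```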

If you have two drives and no Windows, I'd use the small drive for the OS (read: the "/" partition; as previously stated, up to 24 GB is usually sufficient, or 32 GB for an LTS release just to be sure), and the large drive for the files: /home.

Do not forget about swap: the classic rule of thumb is about 1.5 times your RAM, and at least 1:1 if you want to be able to hibernate/suspend.
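
The 1.5x rule of thumb is easy to work out for your own machine; this only prints a suggestion and creates nothing:

```shell
# Read total RAM from /proc/meminfo and suggest 1.5x for swap
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(( ram_kb * 3 / 2 ))
echo "RAM: $(( ram_kb / 1024 )) MiB -> suggested swap: $(( swap_kb / 1024 )) MiB"
```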


Thanks sea.

I'm using Ubuntu at the moment but I'm thinking about switching to Debian once I have more of an idea what I'm doing but I need to do some more exploring first. I do use my computer for work but I doubt my demands require some of the more 'exotic' mount points as you pointed out.

I've been convinced. When I reinstall, I'll create /home in a separate partition.

I posted this same question in another forum and the replies didn't seem to be as informative and thoughtful as the replies I got here. Thanks a lot.

It seems to be quite different between Ubuntu and Red Hat.

The default layout in Red Hat is to have two disk partitions: one for /boot and one for LVM, and inside the LVM go / and swap.

Neither Ubuntu nor Red Hat creates a separate partition for /home by default (and I assume no distribution does). Ideally the home directory should exist in one place and be automounted on the various servers where and when it's needed.
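
The automount idea is usually implemented with autofs. A minimal sketch, assuming a hypothetical NFS server called `filer` exporting /export/home:

```
# /etc/auto.master -- hand all of /home to the map file below
/home   /etc/auto.home

# /etc/auto.home -- "&" substitutes the matched key (the username)
*   -rw,soft   filer:/export/home/&
```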

With the exception of /boot - if everything else is using LVM - there's no requirement to have separate partitions for anything, even though it's desirable to do so for the reasons given.

It's also important to know which directories should not be split off onto their own filesystems. Namely:

  • /etc
  • /bin
  • /sbin
  • /dev
  • /lib
  • /root
  • /selinux

While that is true, I highly recommend using "plain partitions" rather than LVM in Red Hat-based installs (Fedora, CentOS, Scientific Linux). (EDIT: For personal use, that is.)

LVM has a great bonus if used with RAID setups, or on systems that will 'never' change.
(NOTE: LVM is great at resizing its logical volumes within the volume group)
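
For reference, resizing is where LVM earns its keep. A sketch of the commands involved, assuming an ext4 logical volume lv_home in a volume group vg0 (shown as comments, since they need root and a real LVM setup):

```shell
# Growing can be done online with ext4:
#   lvextend -L +5G /dev/vg0/lv_home   # grow the logical volume
#   resize2fs /dev/vg0/lv_home         # grow the filesystem to match

# Shrinking is the risky direction -- filesystem first, LV second:
#   umount /home
#   e2fsck -f /dev/vg0/lv_home
#   resize2fs /dev/vg0/lv_home 20G
#   lvreduce -L 20G /dev/vg0/lv_home
```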

But for personal use, LVM is just a pain, especially if you face any mount issues (fstab or kernel) and are not familiar with handling them.

I'm inclined to disagree. Do you have any such experiences that you can tell us about, or post links to any references which support your recommendations?

Enterprise use: no, I've only read about it in another forum.
Personal use: yes, several first-hand experiences, though half of them involved encryption, which complicated things but wasn't the original cause of the trouble.

Either way, for anyone coming from a Windows world, I don't want to make their first Linux experience more challenging than necessary, especially in a thread asking for help with plain partitions.

EDIT:
Edited the previous post to state that the statement is meant for personal use.
I do have to admit, I feel safer with an encrypted LVM than with an encrypted plain partition - but then again, it's harder to get my own data back too (backups aside).

EDIT:
Eg: Fedora and LVM [LWN.net]
Fedora 17 LVM Issue
Installed xfce, no access to luks-lvm, and other unusables

Following on from post 4.

I've been given a pre-installed RHEL 6.3 server where the root filesystem, fairly empty, fills the boot disk except for a small slice for /boot. It's under VMware, so using a clone I practised shrinking the root filesystem and splitting off /tmp, /var & /usr.

Oh dear. :eek: Splitting off /usr gave a few issues. The server did boot, but very slowly, and none of the /etc/rc.d/rc2.d/S* scripts had run. Fortunately I could log on at the console and run them by hand, and everything was then just fine.

My thought is that /usr was not mounted early enough, so most of the boot scripts failed to run - and I've been very lucky too.

I added the three new lines (for /tmp, /var and /usr, shown in blue on the forum) to /etc/fstab, which now reads:

/dev/mapper/vg_rhel63x64-lv_root /                       ext4    defaults        1 1
/dev/mapper/vg_rhel63x64-lv_tmp /tmp                     ext4    defaults        1 1
/dev/mapper/vg_rhel63x64-lv_var /var                     ext4    defaults        1 1
/dev/mapper/vg_rhel63x64-lv_usr /usr                     ext4    defaults        1 1
UUID=7be417ee-faf2-4c33-80b3-eb3a8348fd3a /boot                   ext4    defaults        1 2
/dev/mapper/vg_rhel63x64-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0

Have I correctly identified the problem? What did I do wrong?

Eventually I moved /usr back within the root filesystem and commented out its entry in /etc/fstab, and the next boot was better. I guess I was a little too aggressive. The boot is still rather slow, so perhaps I've messed up something else too.

Good thing that this is a test machine! :stuck_out_tongue:

Robin

Just adding my 2 cents here.
maerlyngb, when I was a noob myself, I wish I'd had a great book like this to help me get started - it would have made things a whole lot easier, especially with new concepts for someone coming from the Windows world. I don't know if that's your case, but I'd highly recommend downloading The Linux Command Line (which is available for free, as in free pizza ;)) here. The last edition was published 2 months ago. Hope you find it useful.

For home use, don't bother with LVM, and create three partitions on your hard drive. One for /, one for /home, and one for swap.

I recommend:

/, 32 GB
swap, 2 GB
/home, everything else

For a server, it makes sense to use separate partitions or volumes for /var and /opt and others, but not for home use.

One really good reason to use partitions, not just for /home but beyond: if you don't store /home and /var (including the logs) inside /, then your root partition almost never needs writing to. And it's the one you actually need to boot. If / is okay, it can fsck the other partitions; but if root gets corrupted, it can't fix itself. And disk corruption lands wherever you've been writing to...

Imagine your machine's been forcibly powered off for some reason. With partitions, the system is able to start booting, mount /, and fsck the other filesystems, go through a few scary reboots and probably work again, minus whatever files inside /home/ the corruption hit.

Without partitions, you'd need a rescue CD to get it working again when / fails to mount. And when you check it with a rescue CD, you'd better hope the corruption hit nothing vital.

When I was experimenting with bleeding-edge Linux kernels this is mostly what kept me from trashing my system repeatedly.
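
The recovery order described above is driven by the last field of each /etc/fstab entry. A sketch with hypothetical device names:

```
# The sixth fstab field is the fsck pass number: 1 for the root
# filesystem (checked first), 2 for other local filesystems, and
# 0 for "never check" (swap and pseudo-filesystems)
/dev/sda1   /       ext4   defaults   0  1
/dev/sda3   /home   ext4   defaults   0  2
```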