Hard Disk Prep From Shell

Hi there. I am an experienced MS admin who has recently been charged with crossing over into the Unix/Linux realms. Years of working with the GUI and DOS have left me somewhat hobbled as I make the transition and try to sift through the mountains of partial information available :wall:... which leads me to my question at hand.

I am working with Fedora 16. My goal is to achieve the following (not necessarily in the order listed) from the shell. The stuff I have already figured out will have notes after that particular action.

  • attach/mount <?> a disk - this seems to be automatic in Fedora... what if I am on an older machine that does not do this automatically?
  • format said disk - easily done from the disk manager GUI but I need to know it from the shell in the event I am on an older machine.
  • label said disk - easily done from the disk manager GUI but I need to know it from the shell in the event I am on an older machine.
  • partition said disk - so far I can do what I need here with fdisk
  • encrypt said disk - easily done from the disk manager GUI but I need to know it from the shell in the event I am on an older machine.

I am grateful for any assistance as I venture into this new world.

Probably best to explain how disks actually work in UNIX first.

You don't get drive letters in UNIX; drives become special files under /dev/. If you had a disk with three partitions on it, you would get something like

/dev/sda -- the raw disk itself, with the contents of all three partitions. This is the device you use when editing partitions.
/dev/sda1,2,3 -- The partitions inside /dev/sda. These are the things you format and mount.
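If you want to see which disks and partitions the kernel currently knows about, a couple of quick checks (lsblk may not exist on very old systems, but fdisk -l has been around forever):

lsblk                  # tree view of every disk and its partitions
fdisk -l /dev/sda      # print one disk's partition table (needs root)
cat /proc/partitions   # the kernel's own list of block devices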

To partition a disk in Linux, you'd probably prefer the command-line parted tool, since it's common, supports several partition table types, can do things like moving and resizing partitions, and has fairly verbose built-in help. If you don't have it, you can fall back to fdisk on disks with old-fashioned MBR (DOS) partition tables, but its abilities are more limited, and you must read its options carefully.
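For example, to put a fresh partition table and a single partition on a second disk -- assuming /dev/sdb is really the disk you mean, double-check first, because this destroys whatever is on it:

parted /dev/sdb mklabel msdos                  # write a new, empty MBR partition table
parted /dev/sdb mkpart primary ext4 1MiB 100%  # one partition spanning the whole disk
parted /dev/sdb print                          # review the result

The 'ext4' there only sets a type hint; the partition still has to be formatted with mkfs afterwards.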

When you mount a disk, it doesn't become a drive letter; it takes over a folder, typically an empty one. Say you had a folder /mnt/disk inside your root partition: you could do mount /dev/sda1 /mnt/disk to have that partition's contents made available inside /mnt/disk. The root partition itself is mounted by the kernel on boot, because it has to start somewhere. You unmount partitions with umount /dev/sda1 or umount /path/to/folder, but you can't do that to a partition that's in use.
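Put together, it looks something like this (the device and folder names are just examples):

mkdir -p /mnt/disk           # create the mount point if it doesn't already exist
mount /dev/sda1 /mnt/disk    # the partition's contents now appear under /mnt/disk
umount /mnt/disk             # detach it again; fails if any file on it is still open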

Automounting of non-removable disks is typically done through /etc/fstab, a text file containing a list of partitions and where they belong. When the system boots, it will attempt to mount anything in this list lacking the 'noauto' option. If any fail to mount, this is considered a severe error. Here's a fstab from one of my systems:

# /etc/fstab: static file system information.
#
# noatime turns off atimes for increased performance (atimes normally aren't
# needed); notail increases performance of ReiserFS (at the expense of storage
# efficiency).  It's safe to drop the noatime options if you want and to
# switch between notail / tail freely.
#
# The root filesystem should have a pass number of either 0 or 1.
# All other filesystems should have a pass number of 0 or greater than 1.
#
# See the manpage fstab(5) for more information.
#

# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>

/dev/sda1              /boot           ext2            noauto,noatime  1 2
/dev/sda3              /               ext3            noatime         0 1
/dev/sda2              none            swap            sw              0 0
/dev/sda5              /home           ext3            noatime         0 1
/dev/sda6              /usr            ext3            noatime         0 1
/dev/sda7              /var            ext3            noatime         0 1
/dev/sda8              /var/tmp        xfs             noatime         0 1
/dev/cdrom2            /mnt/cdrom      udf,iso9660     noauto,ro,user  0 0
/dev/sdb4              /opt            xfs             noatime         0 0

# glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for
# POSIX shared memory (shm_open, shm_unlink).
# (tmpfs is a dynamically expandable/shrinkable ramdisk, and will
#  use almost no memory if not populated with files)
shm                     /dev/shm        tmpfs           nodev,nosuid,noexec    0 0
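Once a partition is listed in fstab, mounting it by hand gets easier too, and you can test the file without rebooting:

mount /mnt/cdrom   # looks up the device, filesystem type and options in /etc/fstab
mount -a           # mount everything in fstab that isn't marked noauto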

Once you've partitioned a disk, you format it with the mkfs command. There are actually many different mkfs commands, since there are many different filesystem types, but they have a lot in common and are mostly used the same way. For example:

mkfs.ext4 -L volume-label /dev/sda1

would reformat /dev/sda1 with the ext4 filesystem. Other common Linux filesystems include ext3 and reiserfs.
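If the filesystem already exists and you only want to change its label, you don't need to reformat. For the ext family there's e2label (other filesystems have their own tools, e.g. xfs_admin for XFS):

e2label /dev/sda1              # print the current label
e2label /dev/sda1 new-label    # set a new label on an existing ext2/3/4 filesystem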

Encrypting a disk isn't something I've tried personally. There seem to be quite a lot of steps involved, it's not a single command.
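From what I've read, the usual route is LUKS via the cryptsetup tool. The rough shape of it is below -- treat it as a sketch rather than a recipe, since I haven't done it myself, and /dev/sdb1 and 'secret' are just placeholder names:

cryptsetup luksFormat /dev/sdb1         # set up LUKS on the partition; asks for a passphrase, wipes existing data
cryptsetup luksOpen /dev/sdb1 secret    # unlock it; the plaintext view appears as /dev/mapper/secret
mkfs.ext4 /dev/mapper/secret            # format the unlocked device like any other partition
mount /dev/mapper/secret /mnt/disk      # ...and mount it
umount /mnt/disk
cryptsetup luksClose secret             # lock it again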


Thank you for your prompt response.

To make sure I am understanding this correctly: I won't have to configure it to recognize that the disk is plugged in; rather, it's a plug-and-play proposition?

The Linux kernel automatically recognizes when disks are attached*, including USB disks and the like, and the corresponding devices under /dev/ ought to appear automatically too. But the only disk Linux mounts automatically is the root partition, right when it boots. From there on out, the kernel only ever mounts the disks it's told to mount.

So, automounting hotplugged drives is something done by application software, not Linux itself. The GNOME desktop environment tries to automount external disks, for instance. Other setups use a helper like pmount, which lets ordinary users mount removable drives and controls who's allowed to mount what.

And quite a few systems don't automount at all. Automounting is handy for plugging in flash drives, but there are many situations where you do not want disks mounted automatically. A system with a software RAID, for instance -- you want the RAID layer to control the disks itself, instead of having them grabbed by the automounter. Or a secure server, which has no business mounting a strange drive unasked. Or a system being used for data recovery on a flaky drive that may not even be mountable. And so forth.

* The hardware has to support it, of course. Any modern SATA port is technically supposed to be hot-swappable, but the feature isn't always implemented properly.