Script to scan the disks and make a file system

Hi,
What I'm trying to do (manually) is log into the server
and run the commands below:

ls /sys/class/scsi_device/ | while read i; do echo "- - -" > /sys/class/scsi_device/$i/device/rescan;done

lsblk
echo -e "o\nn\np\n1\n\n\nw" | fdisk /dev/sdd
partx -a /dev/sdd1
mkfs.ext4 /dev/sdd1
blkid  /dev/sdd1 | awk '{print $2}' | cut -d "\"" -f2
Then I find the UUID of the disk with blkid and update /etc/fstab:
echo "UUID=$(blkid -s UUID -o value /dev/sda1)" /root/abcd  ext4 defaults 0 0 >> /etc/fstab

So I have to do this on multiple machines, logging into each machine
and scanning for the new disk. The disk may show up as any device (/dev/sdg, /dev/sdb, /dev/sdi, and so on), but the newly added disk will always be 6G in size.
So I thought of writing a simple shell script, but a little mistake can mess up the whole thing: I need to partition only the disk that I have just scanned, the one that has been added recently, otherwise I may lose the data.
I came up with something like this:

ls /sys/class/scsi_device/ | while read i; do echo "- - -" > /sys/class/scsi_device/$i/device/rescan;done
b=$(lsblk | grep 6G|awk '{print $1}')
echo -e "o\nn\np\n1\n\n\nw" | fdisk /dev/$b
partx -a /dev/$b
mkfs.ext4 /dev/${b}1
echo "UUID=$(blkid -s UUID -o value /dev/$b1)" /root/abcd  ext4 defaults 0 0 >> /etc/fstab

Please suggest the best way to do this.

First of all, you cannot check enough, or you risk damaging existing file systems!
Suggestion for a script:

#!/bin/bash
# scan_add_fstab.sh
for i in /sys/class/scsi_device/*
do
  # must be a directory
  [ -d "$i" ] &&  echo "- - -" > "$i/device/rescan"
done
for b in $(lsblk -d | awk '$4=="6G" {print $1}')
do
  # jump to next loop cycle if /dev/$b is in fstab
  grep -q "^[^#]*/dev/$b\>" /etc/fstab && continue
  uuid=$(blkid -s UUID -o value /dev/$b)
  # jump to next loop cycle if its UUID is present and in fstab
  [ -n "$uuid" ] && grep -q "^[^#]*$uuid" /etc/fstab && continue
  # echo -e is discouraged, the echo options can differ between shells
  printf "o\nn\np\n1\n\n\nw\n" | fdisk /dev/$b
  partx -a /dev/$b
  mkfs.ext4 "/dev/${b}1"
  # the file system (and its UUID) lives on the partition, not the whole disk
  uuid=$(blkid -s UUID -o value "/dev/${b}1")
  echo "UUID=$uuid  /root/abcd  ext4 defaults 0 0" >> /etc/fstab
done
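
Given the warning above, one more guard may be worth adding right before the fdisk line. A minimal sketch, assuming the usual blkid exit status (0 as soon as it finds any signature on the device, non-zero on a completely blank disk):

  # extra safety: skip the disk entirely if blkid already reports
  # any file system or partition-table signature on it
  blkid "/dev/$b" >/dev/null 2>&1 && continue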

Then pass this script ("scan_add_fstab.sh") to /bin/bash on each server:

for i in server1 server2
do
  ssh -x "$i" "/bin/bash" < scan_add_fstab.sh
done

What is the difference between using

ls /sys/class/scsi_device/ | while read i; do echo "- - -" > /sys/class/scsi_device/$i/device/rescan;done

and

for i in /sys/class/scsi_device/*
do
  # must be a directory
  [ -d "$i" ] &&  echo "- - -" > "$i/device/rescan"
done

At first glance, it appears that as long as all files in the directory /sys/class/scsi_device are directories, they should produce similar results. If a file in that directory is not a directory, the 1st script will get an error trying to write to an invalid pathname while the 2nd script will silently ignore any files that are not directories.

In both cases, the scripts make the assumption that every directory in that directory has a subdirectory named device and that the user running the script has permission to create or overwrite a file named rescan in that subdirectory.

In the 2nd script, the shell reads the directory to get the list of files to process and then processes the files found in that list. In the 1st script, ls must be invoked to get the list of files but ls and the rest of the script can run in parallel.
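
To make the difference concrete, here is a small demonstration in a scratch directory (the paths are made up for illustration; this is safe to run anywhere):

mkdir -p /tmp/demo/realdir/device
touch /tmp/demo/plainfile

# 1st form: ls prints bare names; the write into the non-directory fails
# with something like "bash: /tmp/demo/plainfile/device/rescan: Not a directory"
ls /tmp/demo | while read i; do echo "- - -" > /tmp/demo/$i/device/rescan; done

# 2nd form: the [ -d ] test silently skips the plain file
for i in /tmp/demo/*
do
  [ -d "$i" ] && echo "- - -" > "$i/device/rescan"
done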

As MadeInGermany said, neither of these demonstrates what would be called production code.

And, knowing that we must never create new files in /sys, the following is more consistent:

for i in /sys/class/scsi_device/*/device/rescan
do
  # unless nullglob is set, we must ensure it exists
  [ -e "$i" ] &&  echo "- - -" > "$i"
done
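
For completeness, the bash nullglob variant the comment alludes to might look like this (with nullglob set, an unmatched glob expands to nothing, so the loop body simply never runs):

shopt -s nullglob
for i in /sys/class/scsi_device/*/device/rescan
do
  echo "- - -" > "$i"
done
shopt -u nullglob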

--
The ls command strips the given path from its output. That can be useful, but in the while loop the path then had to be added back.
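
For example (the device names below are illustrative; real SCSI entries follow the host:channel:target:lun pattern):

$ ls /sys/class/scsi_device/
0:0:0:0  2:0:0:0
$ echo /sys/class/scsi_device/*
/sys/class/scsi_device/0:0:0:0 /sys/class/scsi_device/2:0:0:0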

Just one more caveat: if your shop is similar to my customer's, you have to deal with several versions of several Linux distributions. I am no Linux expert by any stretch, but I remember that older Linux distributions didn't have that /sys tree, only files in /dev to deal with devices.

I don't know when this changed (as I said, I don't work regularly with Linux), but if there is any chance you might hit such a system, you should perhaps add a safety test for the version you are running on before such a deep-impact procedure.

I hope this helps.

bakunin

The purpose of /sys is to give some insight into kernel memory. Typically you can read (and patch) kernel parameters here. Device drivers can also plug in, so you can set their properties.
Sounds like /proc? Indeed, the implementation of /sys is similar, and in the beginning /proc was (mis)used for kernel parameters.
The /dev tree still provides the main functionality: it gives user land a file-like interface, handled by device drivers in the kernel.
Of course, many parameters in /sys and /proc have little drivers that present the binary kernel values as text (and, in the case of writing/patching, convert the text back to binary).
And yes, /sys sometimes changes. One should carefully check the existing /sys structure when using it; that's better than making assumptions based on a certain OS version.
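
In that spirit, a minimal sketch of such a structure check, wrapped around the rescan loop from earlier in this thread:

# probe the sysfs interface instead of assuming a particular OS version
if [ -d /sys/class/scsi_device ]
then
  for i in /sys/class/scsi_device/*/device/rescan
  do
    [ -e "$i" ] && echo "- - -" > "$i"
  done
else
  echo "no /sys/class/scsi_device here; another rescan method is needed" >&2
fi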

  # jump to next loop cycle if /dev/$b is in fstab
  grep -q "^[^#]*/dev/$b\>" /etc/fstab && continue
  uuid=$(blkid -s UUID -o value /dev/$b)
  # jump to next loop cycle if its UUID is present and in fstab
  [ -n "$uuid" ] && grep -q "^[^#]*$uuid" /etc/fstab && continue
  # echo -e is discouraged, the echo options can differ between shells

What do the above lines do? And what if the disks are partitioned with LVM and mounted in the format below?

/dev/mapper/vg_lv_home /home                 
/dev/mapper/vg_lv_tmp /tmp                   
/dev/mapper/vg_lv_u01 /u01