Samba share on software raid1

Let's just make this clear for the sake of argument.

So, you create /dev/md0 from the /dev/sdc and /dev/sdb disks.
You mount it via the mount command and add an entry to /etc/fstab.
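
For illustration, a minimal sequence could look something like this (ext4 and the /mnt/raid mount point are just assumptions for the example):

# create the mirror from the two whole disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# filesystem, mount point and fstab entry
mkfs.ext4 /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid
echo '/dev/md0  /mnt/raid  ext4  defaults,nofail  0  2' >> /etc/fstab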

After a reboot (all disks are inside), everything works as expected (mounted and healthy).
You power off the box and remove one of the disks from the array.
Booting up, everything is fine, except the array is missing the disk you removed.

After you power off, replug the drive and power on, there should still be just one disk in the md setup: the one you didn't touch should still be part of the md device, while the replugged one is not automatically back in.
Is this correct ?

Point being, if you unplug a drive, that drive is not considered part of the array anymore.
When you plug it back in, it will not magically rebuild; this has to be done by hand.
And of course, if you instead pull the other drive out and leave the first one in, you will not have a working md array.
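
The manual rebuild is usually just re-adding the disk and letting md resync, something like this (assuming the replugged disk came back as /dev/sdc):

mdadm /dev/md0 --add /dev/sdc
cat /proc/mdstat    # watch the resync progress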

Regards
Peasant.

You are right.
I added it to md0 manually, that's not a problem. The problem is: why is my data in /mnt/raid not accessible, and why can't I read it, when I unplug one of the two devices?
That seems strange to me, I can't figure it out.
I created a folder on the array, inside the directory where I have mounted /dev/md0.

So I have a folder
/mnt/raid/test
and inside test some files.
I can access it and all is fine. But when I unplug one disk while the machine is running (or powered off, it's the same result), I can't read from that test folder.
It gives me an input/output error.
But when I unplug the other disk instead, it works just fine.
That's what bothers me.
The idea behind all this is to have a shared folder and to prevent data loss. If that disk fails, I cannot access those files. That's the problem.

So, as far as I understood: with one disk removed the array works, but with the other disk removed it does not?
Did you use fdisk or (g)parted on those disks at all to set a RAID partition type?
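
If you are not sure, you can check what signatures are currently on the disks with something like:

lsblk -f /dev/sdb /dev/sdc
blkid /dev/sdb /dev/sdc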

As for Samba and your actual requirement, that is layers above. One thing at a time :slight_smile:
First you need your md device to be redundant and working after reboot.
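
Part of "working after reboot" is making sure the array definition is persisted; on Debian/Ubuntu that is usually something along these lines (a sketch, file locations can differ per distro):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u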

What is the hardware on that server, since you can unplug disks live?
I would advise against that practice for testing redundancy, unless hot-plugging is specifically supported.

You see disks fail in various ways, but rarely by someone unplugging the cable or hitting them with an axe.
Testing will prove difficult :slight_smile:
But from my experience at home: I had a RAID1 array, one disk died of natural causes (old age), and the mdadm system did its job.
This was some time ago, though.

Can you show the output of:

fdisk -l /dev/sdb
fdisk -l /dev/sdc

Have you considered using ZFS on Ubuntu?
It should really ease the process of creating a mirror and managing it in the long run.
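
Creating and checking a mirror there is roughly (pool name and mount point are just examples):

# this would of course destroy whatever is on the disks
zpool create -m /mnt/raid tank mirror /dev/sdb /dev/sdc
zpool status tank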

Regards
Peasant

Here is the output:

Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
root@myuser:/mnt/md0# fdisk -l /dev/sdc
Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Yeah, you are right. When I pull out one of the disks I can't access the test folder inside the md0 mount point.
My only concern is to secure that data. OK, Samba is in the layers above and it's not critical for users to access it, but the files must be replicated to each other, so that I can use some Unix-based live system to back up or save the files if one disk fails.
I didn't try anything else, only mdadm.

------ Post updated 09-01-18 at 04:26 AM ------

My only concern, if I leave it like this, is whether the data will be readable from some Linux USB live system or not. Nothing else :slight_smile:

I do not know if you have already tried any of the following commands to fail a drive, instead of forcefully removing it.

mdadm /dev/md0 -f /dev/sdc

Check the result

mdadm --detail /dev/md0

Remove the failed drive

mdadm /dev/md0 -r /dev/sdc

Check the result

mdadm --detail /dev/md0

Afterward you can add it back and check the result again and see it recover.

mdadm /dev/md0 -a /dev/sdc
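
While it re-adds, you can also watch the resync progress with, for example:

cat /proc/mdstat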

root@myuser:/home/myuser# mdadm /dev/md0 -f /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:42:36 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3655

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed

       1       8       32        -      faulty   /dev/sdc
root@myuser:/home/myuser# mdadm /dev/md0 -r /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:42:48 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3656

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed

root@myuser:/home/myuser# mdadm /dev/md0 -a /dev/sdc
mdadm: re-added /dev/sdc
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:43:29 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3660

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
root@myuser:/home/myuser# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:27 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3672

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync   /dev/sdc

       0       8       16        -      faulty   /dev/sdb
root@myuser:/home/myuser# mdadm /dev/md0 -r /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:37 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3673

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync   /dev/sdc
root@myuser:/home/myuser# mdadm /dev/md0 -a /dev/sdb
mdadm: re-added /dev/sdb
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:55 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3677

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

------ Post updated at 09:55 AM ------

I tried to do

mdadm /dev/md0 -f /dev/sdc
mdadm /dev/md0 -r /dev/sdc

and then logged in to the shared folder from Windows, uploaded, downloaded and deleted files, then added the disk back and tried again, then did the same with /dev/sdb, and everything went OK.

So, from what I can see, it is OK.

------ Post updated at 09:57 AM ------

So from this, can I be sure that the files are on both disks? Can I check it somehow? Or is this a good enough test for it?

The highlighted parts of the output (State : clean, with both devices in active sync) show that the RAID is good; you would need to investigate if you see a degraded state.

You should be able to access the files by going to the mount point whether your RAID1 is degraded or not. Your output suggests you have it mounted at /mnt/md0.
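
That also covers your live-system question: from a rescue or live USB environment, the surviving half of the mirror can normally be assembled and mounted even when degraded, roughly like this (assuming the remaining disk shows up as /dev/sdb):

mdadm --assemble --run /dev/md0 /dev/sdb
mount -o ro /dev/md0 /mnt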

The Samba setup is not particularly related to whether the underlying storage technology is RAID or something else.
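
For reference, the share definition itself just points at the mount point; a minimal example (share name and user are assumptions):

[raidshare]
    path = /mnt/raid
    read only = no
    valid users = myuser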

I understand that, and Samba just serves the folders. My problem was that when I pulled out one of the two disks, that test folder inside /mnt/raid (the earlier mount point) was unreadable.
Now it is OK.
Thanks a lot!