Let's just make this clear, for the sake of argument.
So, you create /dev/md0 from the disks /dev/sdb and /dev/sdc.
You mount it with the mount command and add an entry to /etc/fstab.
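Spelled out as commands, the setup just described might look like this. This is only a sketch: the device names are the ones from this thread, the commands are destructive, and the fstab UUID is a placeholder.

```shell
# Sketch of the setup described above. Device names are the thread's; the
# mdadm/mkfs commands wipe data, so they are shown as comments, not run:
#
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
#   mkfs.ext4 /dev/md0
#   mkdir -p /mnt/raid && mount /dev/md0 /mnt/raid
#
# A matching /etc/fstab line (UUID is a placeholder -- get the real one
# with blkid /dev/md0; "nofail" keeps boot from hanging on a degraded array):
#
#   UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/raid  ext4  defaults,nofail  0  2
#
mount_point=/mnt/raid   # the mount point used later in this thread
echo "mount point used in this thread: $mount_point"
```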
After a reboot (with all disks present), everything works as expected (mounted and healthy).
You power off the box and remove one of the disks from the array.
Booting up, everything is fine, except the array is missing the disk you removed.
After you power off, replug the drive, and power on, there should still be one disk in the md setup: the one you didn't touch should still be part of the md device.
Is this correct?
The point being: if you unplug a drive, that drive is no longer considered part of the array.
When you plug it back in, it will not magically rebuild; this has to be done by hand.
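The by-hand rebuild mentioned above might look like the following. It is a sketch: /dev/md0 and /dev/sdc are the thread's names, the commands need root, and the block is guarded so it does nothing on a machine without the array.

```shell
# Sketch of manually re-adding a replugged RAID1 member. Device names are
# assumptions from the thread; needs root. Guarded so it is a no-op on
# machines without /dev/md0:
if [ -b /dev/md0 ] && command -v mdadm >/dev/null 2>&1; then
    mdadm /dev/md0 --fail /dev/sdc 2>/dev/null   # mark the stale member failed (it may be already)
    mdadm /dev/md0 --remove /dev/sdc             # clear the failed slot
    mdadm /dev/md0 --add /dev/sdc                # re-add the disk; RAID1 resync starts
    cat /proc/mdstat                             # shows rebuild progress
    ran=yes
else
    echo "no /dev/md0 on this machine -- run these on the affected server"
    ran=no
fi
```

Note the replugged disk may come back under a different device name; check `lsblk` before re-adding.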
And of course, if you pull the other one out and leave the first one in, you will not have a working md array.
You are right.
I added it to md0 manually; that's not a problem. The problem is: why is my data from /mnt/raid not accessible, and why can't I read it, when I unplug one of the two devices?
That's strange to me; I can't figure it out.
I created a folder on the array, where I have mounted /dev/md0.
So I have a folder
/mnt/raid/test
and inside test, some files.
With both disks present, I can access it and all is fine. When I unplug one disk while the machine is running (or powered off, same result), I can't read from that test folder.
It gives me an input/output error.
But when I unplug the other disk instead, it works just fine.
That's what bothers me.
The idea behind all this is to have a shared folder and protection against data loss. If that one disk fails, I cannot access those files. That's the problem.
So, as far as I understood: with one disk removed the array works, with the other disk removed it does not?
Did you use fdisk or (g)parted on those disks at all to set the RAID partition type?
As for Samba and your actual requirement, that is layers above. One thing at a time.
First you need your md device to be redundant and working after reboot.
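A quick way to check redundancy after a reboot is /proc/mdstat. The snippet below uses a sample healthy line (illustrative text, not output from this thread's machine) and shows what to look for:

```shell
# Sample /proc/mdstat line for a healthy 2-disk RAID1 (illustrative only):
mdstat='md0 : active raid1 sdc1[1] sdb1[0]
      976630464 blocks super 1.2 [2/2] [UU]'
# "[UU]" means both mirror halves are up; "[U_]" or "[_U]" means the array
# is degraded, i.e. one member is missing.
case "$mdstat" in
  *'[UU]'*)            state=healthy ;;
  *'[U_]'* | *'[_U]'*) state=degraded ;;
  *)                   state=unknown ;;
esac
echo "array state: $state"   # -> array state: healthy
```

On the real machine, `cat /proc/mdstat` or `mdadm --detail /dev/md0` gives the live version of this information.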
What is the hardware on that server, since you can unplug disks live?
I would advise against that practice for testing redundancy, unless hot-plugging is specifically supported.
You see disks fail in various ways, but rarely by someone unplugging the cable or hitting them with an axe.
Testing will prove difficult.
But from my experience at home: I had a RAID1 array, one disk died of natural causes (old age), and the mdadm system did its job.
That was some time ago, though.
Can you show the output of:
fdisk -l /dev/sdb
fdisk -l /dev/sdc
Have you considered using ZFS on Ubuntu?
It should really ease the process of creating a mirror and managing it in the long run.
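For comparison, the ZFS equivalent of the mirror might look like this. A sketch only: the pool name "tank" and the disk names are assumptions, and `zpool create` is destructive, so the commands are shown as comments.

```shell
# Rough ZFS equivalent of the RAID1 setup (sketch; pool name "tank" and the
# disk names are assumptions -- destructive, do NOT run on disks with data):
#
#   zpool create tank mirror /dev/sdb /dev/sdc   # 2-way mirror, mounts at /tank
#   zpool status tank                            # both members should show ONLINE
#   zpool scrub tank                             # periodic integrity check
#
pool=tank   # placeholder pool name used in the comments above
echo "sketch uses pool name: $pool"
```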
Yeah, you are right. When I pull out one of the disks, I can't access the folder test where I mounted md0.
My only concern is to secure that data. OK, Samba is in the layers above, and it's not very important for users to access it, but the files must be mirrored on both disks, so that if one disk fails I can use some Unix-based live system to back up or save the files.
I didn't try anything else, only mdadm.
------ Post updated 09-01-18 at 04:26 AM ------
My only concern, if I leave it like this, is whether the data will be readable from some Linux live USB system or not. Nothing else.
Then I logged in to the shared folder via Windows; put, downloaded, and deleted files; then added the disk back and tried again; then did the same with /dev/sdb, and everything went OK.
So, from what I can see, it is OK.
------ Post updated at 09:57 AM ------
So from this, can I be sure that the files are on both disks? Can I check it somehow? Or is this a good enough test for it?
The parts highlighted in red show that the RAID is good; you would need to investigate if you see a degraded state.
You should be able to access the files through the mount point whether your RAID1 is degraded or not. There is some indication that you have done that in /mnt/md0.
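To go beyond eyeballing the status output, md can run a full consistency check across the mirror halves. A sketch: it assumes the array is /dev/md0, needs root, and is shown as comments since it only makes sense on the real server.

```shell
# Ways to gain confidence that both disks really carry the data (sketch;
# run as root on the server, paths assume the array is /dev/md0):
#
#   mdadm --detail /dev/md0                      # both members listed as "active sync"
#   echo check > /sys/block/md0/md/sync_action   # start a full mirror consistency check
#   cat /sys/block/md0/md/mismatch_cnt           # 0 after the check means the halves agree
#
sysfs_path=/sys/block/md0/md/mismatch_cnt   # where the result is read from
echo "after the check completes, read: $sysfs_path"
```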
The Samba setup is not particularly related to whether the underlying storage technology is RAID or something else.
I understand that, and Samba just watches the folders. My problem was that when I pulled one of the two disks out, that test folder inside /mnt/raid (the earlier mount point) was unreadable.
Now it is OK.
Thanks a lot!