Should I use a CoW filesystem on my PC if I only want snapshot capabilities?

I will be installing Linux on my HP laptop and I really like ext4; it's stable and time-tested. But I want snapshot capabilities, or something like System Restore in Windows. This is obviously for times when I shoot myself in the foot and want to restore back to a stable state.

Will filesystems like ZFS or btrfs work better in these cases than ext4?

My only requirement is snapshots, so is it worth running btrfs or ZFS despite their instability (more so for btrfs than for ZFS)?

No.

I do not recommend those file systems.

You are better off running ext4, a RAID configuration (I run RAID1, but do not depend on it), and doing regular backups of your data based on your risk management model (this is the most critical part).

Nothing beats a strong filesystem and a very well thought out backup and recovery plan.

That is my view. YMMV

On the desktop, I run macOS and have a similar strategy. I make full backups often, based on the activity on the system. The more activity and files (and the nature of the files) created, the more frequent the backups.

If you are using LVM, you can use the logical volume snapshot ability to achieve what you want.

Btrfs I have not used, but I did read some horror stories some time ago.
Probably nothing to worry about for home use, since those bugs concerned RAID protection.

ZFS in Ubuntu, for instance, is OpenZFS.
This is a mature and high-quality file system & volume manager, but I would not use it for root just yet.

For data disks, I see no reason not to reap the benefits of snapshots, compression and deduplication.
Just be sure those dedup tables fit in memory :)

Here is a quick example of LVM snapshots from my home box; this approach is filesystem agnostic:

root@box:~# vgdisplay  dumpvg
  --- Volume group ---
  VG Name               dumpvg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <1.82 TiB
  PE Size               4.00 MiB
  Total PE              476931
  Alloc PE / Size       262144 / 1.00 TiB
  Free  PE / Size       214787 / 839.01 GiB
  VG UUID               Qtrbm7-GEAz-CcgA-JOUV-6eZ1-qrZ6-L4ey3h
root@box:~# mount | grep dumpvg
/dev/mapper/dumpvg-dumpvol on /srv/dump type xfs (rw,noatime,attr2,inode64,noquota)
root@box:~# cd /srv/dump/some_files
root@box:/srv/dump/some_files# ls -rlt
total 4
-rw-r--r-- 1 root root 19 Feb 28 15:49 file1.txt
root@box:/srv/dump/some_files# cat file1.txt 
Some stuff written
root@box:/srv/dump/some_files# 

You need free space in the volume group to create a snapshot, as I have here, so let's create one.
We also specify that 5 GB of the total space in the VG can be consumed by the system to maintain the snapshot.

root@box:/srv/dump/some_files# lvcreate -L 5G -s -n lv_snap_$(date "+%Y%m%d%H%M") /dev/dumpvg/dumpvol
  Logical volume "lv_snap_202002281552" created.
root@box:/srv/dump/some_files# rm -f file1.txt
root@box:/srv/dump/some_files# mount -o ro,nouuid /dev/dumpvg/lv_snap_202002281552 /srv/dump_snap/
root@box:/srv/dump/some_files# ls -lrt /srv/dump_snap/some_files/
total 4
-rw-r--r-- 1 root root 19 Feb 28 15:49 file1.txt
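To actually roll back, the snapshot can be merged into the origin volume with lvconvert. This is only a sketch using the volume names from the session above; note that if the origin filesystem is still in use, the merge is deferred until its next activation:

```shell
# unmount the snapshot and (ideally) the origin first
umount /srv/dump_snap
umount /srv/dump
# merge the snapshot back into the origin LV
lvconvert --merge /dev/dumpvg/lv_snap_202002281552
# once the merge completes, the snapshot LV is gone and
# dumpvol contains the state as of snapshot time
mount /dev/dumpvg/dumpvol /srv/dump
ls /srv/dump/some_files    # file1.txt is back
```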

Hope that helps
Regards
Peasant.


FWIW:
Don't Use ZFS on Linux: Linus Torvalds - It's FOSS

Torvalds will not allow inclusion of ZFS support in the Linux kernel because of Oracle's position on ZFS licensing. This was important to us because we run ZFS only on Solaris 11/12 boxes. We did not want different file systems for production Linux servers -- but that is what we got... ext4.

How this plays out on a home desktop I cannot say exactly. I would recommend NOT using ZFS for Linux boot filesystems - as @neo said.


Hi.

I am a big fan of Virtual Machine technology. Here is what I do:

On my main workstation I install a small, stripped-down (i.e. no Office, etc.) Linux distribution as a host -- I prefer Debian GNU/Linux. I then install Virtualbox and create a VM. On the VM I install my day-to-day work environment -- again Debian.

Whenever I have mods to install, I use VB to take a snapshot. Then I install the mods. I leave the snapshot for some time (it's a CoW). If it works for a few days, a week, etc., then I merge the collected CoW changes into the VM (by, ironically, deleting the snapshot). If the mods fail to run, I restore the running system. I've had to do the restore perhaps 3 times in years, and it goes quite quickly, as does the creating and merging of the CoW.

Finally if the VM seems OK, then I install the mods into the host system.

As for a backup plan, we use a separate computer as a backup server, and on that we have a set of mirrored disks. We use it to run rsnapshot to back up our running, day-to-day systems. (You might be able to run rsnapshot on the running system itself, in which case you can use LVM, and rsnapshot will make its own LVM snapshot, do the backup, and remove the snapshot; then you could, say, tar up the resulting backup and send it to another computer.)

We have a small shop and rsnapshot helps us in many ways. The rsnapshot utility is a pull system, so the remote needs passphrase-less access to the system being backed up. The big advantage of rsnapshot is that it uses hard links, conserving storage dramatically. For example, I back up my day-to-day system every hour, day, week, month, so 24+7+4+12 -> 47 collections, yet the total rarely goes over 20 GB; oddly, if you look at any single collection, it is 20 GB, all due to the magic of hard links. rsnapshot (via rsync) also transfers only the changed parts of a file, saving real time and network time. It also handles all the renaming, copying, and removing of files necessary to rotate the backup collection names.
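For reference, the hourly/daily/weekly/monthly rotation described above corresponds to an rsnapshot.conf along these lines (paths, host and retain counts are illustrative; note rsnapshot requires tabs, not spaces, between fields):

```
# /etc/rsnapshot.conf (excerpt) -- fields must be TAB-separated
snapshot_root   /backup/snapshots/
retain  hourly  24
retain  daily   7
retain  weekly  4
retain  monthly 12
# pull the client over ssh (needs passphrase-less key access)
backup  root@workstation:/      workstation/
```

The retain levels are then rotated by cron entries calling `rsnapshot hourly`, `rsnapshot daily`, and so on.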

We are also interested in the zfs filesystem. Our rsnapshot backup server was replaced this month with a newer model -- the old one lasted 15 years (2005..2020). In addition to being an external backup, I also installed VBox there and, as VMs, installed Ubuntu 19.10 on zfs as well as FreeBSD 12 on zfs as guests. Since the host is on a RAID1 mirror, I didn't need the additional support for zfs mirroring (but it might be of some interest later on to experiment with it). We've had a Solaris VM on zfs for a long time:

OS, ker|rel, machine: SunOS, 5.11, i86pc
Distribution        : Solaris 11.3 X86

As additional VMs, we also installed a guest that is the same as the host, and we're using that as a test bed (just as I do with my day-to-day workstation). This new install is of distribution Debian GNU/Linux buster. We also like to have the next rev available, so I installed the testing version, known as bullseye.

So VMs are what we use for experimentation, as well as to make backups, like MS System Restore Points, easy.

Best wishes ... cheers, drl


Thanks drl,

Honestly, I have tried a similar approach on macOS, but even with a 12-core machine with 64 GB of memory on my desktop, all flavors of VMs slow the machine down to an intolerable degree. After all, running in a VM, by definition, is slower than running "on bare metal".

My guess is that your desktop VM configuration on Linux runs a lot better, since you are a big fan.

Because I prefer the fastest speed possible (we are talking desktop operations), I find that using the built-in macOS Time Machine to keep data backed up works great; so in this configuration, I keep my machine running as fast as possible (running "bare metal", not in a VM) and at the same time my data is always backed up to an external disk.

Maybe we mac users are lucky because time machine works so well and is very easy to use for users of all skill levels?

From the big wiki in the sky:

REF: Time Machine (macOS - Wikipedia)

As I mentioned before, on my Linux servers in production, I run ext4 on all of them using RAID1 and make incremental and full backups of all mission-critical data on a daily basis. Of course, remote servers are different compared to desktops; and that is why I prefer macOS and Time Machine on the desktop.

Naturally, everyone has different setups and preferences, so it is great to see people sharing their ideas openly and open-mindedly.

I do not recommend the filesystems the OP suggested (ZFS or btrfs) on Linux and, as mentioned, I prefer a different backup strategy. Then again, I don't use Linux on the desktop; but if I did, I would still use ext4. This file system (ext) has served me well over the many years, and I cannot recall a single ext-related problem in 27 years of Linux experience. I like "reliable" and "proven" in a filesystem... even if it is not the fanciest SOTA out there.

Hi,

some personal experiences and experiences from others:

  • zfs
    I'm very fond of the ease of use of zfs administration: a few simple commands which all did what I expected them to do. Compression is recommended. Data checksumming is also a great feature. Deduplication is not as clearly recommended; it needs lots of resources (RAM). zfs is not as flexible as LVM or btrfs, but in a desktop environment this should not be a problem. No problems in several years of usage. Some reading about zfs is recommended for basic configuration (ashift, blocksize values). Getting the root fs onto zfs is manual work, but for a data partition it's very easy to use. If I need more flexibility, I will use LVM.
  • btrfs
    The times when btrfs had grave bugs are long gone. If you use it, make sure you do not use features that are marked as experimental (for example, the btrfs-internal raid5 implementation; use Linux software raid with btrfs if you want raid). I read some entries in another forum from someone who uses it at scale and will never change it for any other filesystem (he had experience with all major filesystems). It also has checksums and snapshot capabilities. Its flexibility far exceeds zfs and lvm.

    I checked it out and decided not to use it, for these reasons:
    [list=a]
  • Some things are more complicated. You have to work your way through the documentation quite carefully. For example, you cannot trust the values of du and df: the complexity of the filesystem prevents these from always being correct and consistent. btrfs introduces its own tools in addition to the tools everybody is used to.
  • Things do not work the way I would like them to: if I have some raid and a disk is failing, I would expect the system to come up and maybe complain that it is in degraded mode, or just leave some log that it would be my part to check. And when the disk is replaced, I would expect an easy command, or even automatic restoration of the failed disk. That seems not to be the case with btrfs. If you then use your filesystem in degraded mode, bad things can happen: your raid may write data to only one disk despite the device being set up as a mirror, and that data may remain non-replicated even after you replace and reintegrate a new disk. That of course may lead to data loss. It is not an error in the filesystem, but the ordinary procedure: you have to know exactly how btrfs is to be operated according to the documentation, or you'll get into serious trouble.
    [/list]
    I'd say btrfs is an advanced filesystem which gets you a lot in terms of flexibility, performance, robustness, features and data security. It comes at the cost of taking the time to really learn the details, which can become really important.
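To illustrate the "few simple commands" point about zfs above, here is a minimal sketch. The pool and dataset names are made up, and zpool create will destroy whatever is on the named device:

```shell
# create a pool on a spare disk and a compressed dataset
zpool create tank /dev/sdb
zfs create -o compression=lz4 tank/data
# snapshot before a risky change; roll back if it goes wrong
zfs snapshot tank/data@before-change
zfs rollback tank/data@before-change
# snapshots are also browsable read-only via the hidden .zfs dir
ls /tank/data/.zfs/snapshot/before-change/
```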

On the other hand, filesystem snapshotting is not what you get with Windows system restore points. I think there is a lot of voodoo going on with Windows system restore points (meaning it is complex under the hood). I have never tested whether filesystem snapshots really get you a working system if you roll them back. Maybe it just works. But if you have a database, there's no guarantee that the data will be consistent in such a snapshot.

I worked on my personal workstation within a VM too. Snapshots were possible, but the user experience was a mess. Regular problems with the virtualization environment (VirtualBox), and speed problems too, drove me away from that and back to bare metal. For testing things, virtual machines are great; snapshotting is great there. But not for the main workstation (for me).

Personally, OS state snapshotting is a feature that I liked to have on Windows (system restore points), but I never missed it on Linux, even if it would be nice to have. I broke Linux a lot in the beginning (because I liked experimenting). But since I've been working on a Linux machine, I know what better not to do, and I have never had the need to reinstall the system due to a broken OS.

Recommendation for the lazy: if you want to experiment, use a virtual machine. For your workstation, use a proper backup strategy and get to know how to validate and restore it. Backup is important!

And unlike Windows, if it were ever really necessary, it's a piece of cake to take any computer, install Linux on it, get your backup onto it, and have every setting restored. You just do not have to endlessly reboot and click and update.


After thinking about this some more, let me please add....

I have nothing against anyone using btrfs, and someday, when it is the default on Ubuntu, I will probably follow the crowd and use btrfs; but as long as ext4 is the Ubuntu default, I will stick with ext4.

However, for anyone who wants to get into btrfs, one approach is to start with btrfs on a non-root partition (as Peasant suggested, as I recall). Maybe you can stress-test btrfs that way by crashing your system, unplugging the power cord, etc., and see if you are happy with how btrfs recovers. When you are comfortable, then maybe try a root partition with btrfs, with proper backups, on your desktop machine.

After all, the OP was talking about his desktop machine, not some remote Linux server on the other side of the planet where you want a very safe filesystem like ext4.

Some people (unlike me) prefer to be early adopters of these kinds of technologies, and if you really want to use btrfs, then go for it. I am not really an "early adopter" of any technology I don't have an operational need for, so I am waiting for tech like btrfs to be the OOTB default with Ubuntu. That is the signal I am looking for.

Where Ubuntu goes (default and mainstream), I will follow, generally speaking.

However, I was not always like this with Linux; in the early days (over 20 years ago), I was keen to be a very early adopter. Over the years, though, I have become less of an "early adopter" and more of a "late adopter" of new tech like btrfs. This is only me.

I encourage others to march to the beat of your own music, not the beat of my music. I only offer my opinion about what I do and do not do. This is only my opinion and in no way is presented as "the truth" or "the way to go".

btrfs has a lot of great features. If btrfs fits into your current plans, then go for it as you deem appropriate. Just because "Neo" is stuck on ext4 until Ubuntu makes btrfs the default, there is no reason not to use btrfs if the spirit moves you, especially on your desktop machine (not a server in a data center far, far away...).


Other than the licensing issues, ZFS on Linux is production quality.

I'm no zealot, but name me one other file system on Linux (other than btrfs) which has all those features, so to say, inline:

  • Built-in bit rot protection, with raid levels integrated, or non-protected via the copies feature.
  • Snapshots (accessible via one cd command) / clones / compression / dedup / encryption one command away -- granularity per filesystem / transparent to the user.
  • Incremental send-receive, local or remote, protocol agnostic (STDIN/STDOUT).
  • Quotas and reservations -- one command away.
  • Not using the standard (not to say obsolete) LRU for caching, but ARC.
  • Ability to increase performance with dedicated devices for L2ARC/ZIL.
  • Sharing volumes/files using NFS, CIFS (NAS) or iSCSI (block) -- a couple of commands away.
  • Compatibility between systems running OpenSolaris & BSD derivatives; endianness aware.
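As a sketch of the incremental send-receive point, assuming hypothetical pool names and a host called backuphost:

```shell
# initial full replication of a snapshot to a remote pool
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh backuphost zfs receive backup/data
# later: send only the delta between two snapshots
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | ssh backuphost zfs receive backup/data
```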

Those are features offered by enterprise array systems for a big price tag and additional licences.

Of course, much can be achieved with various other tools (LVM snaps, rsync tools, hard links, xfsdump, dump etc.)

In ZFS all is built in; one file system to rule them all :)

Regards
Peasant.


Ok, so can you tell me something about the Timeshift program? Let's say I make certain changes to the root filesystem which make my system unbootable, or I install a malicious or problematic update. Will a program like Timeshift help then?

My question is: what do I do if the system becomes unbootable? How do I recover then?

What do you use? What is your strategy?

I hope it's not taking a cold-boot backup with CloneZilla.

--- Post updated at 12:14 AM ---

Can this work if the system becomes unbootable? If so, how? Grub doesn't have the LVM tools for restoring snapshots, AFAIK.

Also, what if the drive is encrypted? I will be going with full disk encryption.

How do you do this if the system becomes unbootable?

--- Post updated at 12:19 AM ---

That's interesting, but I really need restoration capabilities on my base system.

Could you tell me how rsnapshot will help in a case where, let's say, the system is unbootable? How do you restore then using rsnapshot? Does it work when the system is unbootable?

--- Post updated at 12:23 AM ---

What would that proper strategy be? CloneZilla? Or LVM snapshotting?

--- Post updated at 12:27 AM ---

Interesting analogy. But doesn't ZFS use more RAM than btrfs? So isn't btrfs "the better ZFS" for desktops?

Also, this snapshot thing is really confusing me. How do I restore in case of an unbootable system with an encrypted HDD?

--- Post updated at 12:29 AM ---

Is the licensing the only problem for Torvalds?

Encryption makes the backup task more difficult. That's why I avoid encryption except where I really need it (which has never been the case for me so far; thus I won't be helpful on this topic due to my lack of experience with it). You probably want to encrypt your backup space too. You need to get acquainted with the tools to mount the encrypted storage from within your running OS and from a preferred rescue system.

Since you're a beginner, CloneZilla can be a fallback solution until you're familiar enough with your Linux OS. With CloneZilla you can save and restore the OS partition without knowing very much about Linux.

For an easy start, you may take a USB disk and put your data there. A more advanced and safe approach can be to have a networked device that connects to your computer over the network and backs up the updated data regularly. (rsnapshot can be used with both variants.)

There are lots of backup tools. A simple one is the mentioned rsnapshot; I would recommend it too. It's a file-based backup, in contrast to the image-based backup of CloneZilla. It's not primarily targeted at full system restores, but that can be done without problems too. If your system crashes completely, you can take the following steps to recover an OS installation from only the files:

Full system restore to an empty disk

  1. boot into a rescue system via pendrive or CD/DVD (SystemRescueCd, grml or Knoppix are 3 good alternatives)
  2. partition, format and mount the hard disk (maybe also a replacement hard disk)
  3. copy the data from the backup to the mounted disk
  4. change the filesystem config file (/etc/fstab)
  5. change the configuration of the boot loader (grub) and install the boot loader onto the hard disk

These are quite some steps, and you have to learn the commands if you have never done it. But once you've got that, restoration is easy. For me, with many years of Linux experience, this has become child's play. I did it hundreds of times with very different systems, and this contrasts with Windows, where it is just not possible: there may be challenging advanced Linux setups, but 99.9% are solvable, most of them with ease, when you have the basic knowledge.
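Sketched out, the five restore steps might look like this from a rescue system. Device names, mount points and the backup path are placeholders for your own setup:

```shell
# 2. partition (e.g. with fdisk), format and mount the new disk
mkfs.ext4 /dev/sda1
mount /dev/sda1 /mnt
# 3. copy the file-based backup onto the disk, preserving metadata
rsync -aHAX /backup/daily.0/workstation/ /mnt/
# 4. note the new filesystem UUID and adjust /mnt/etc/fstab
blkid /dev/sda1
# 5. reinstall grub from inside a chroot
for d in proc sys dev; do mount --bind /$d /mnt/$d; done
chroot /mnt grub-install /dev/sda
chroot /mnt update-grub
```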

... and of course one can just examine the problem using a rescue system and fix it without the need for a full restore.

Some cases I experienced which required fixing it from outside the running os:

  • forgotten root password: append init=/bin/bash at the boot screen, reset the password
  • misconfiguration of the boot loader (grub): boot into rescue, mount the disk, fix the boot loader config
  • a corrupted file system where one or more essential files were missing (very rare): boot into rescue, copy the backup files of the base system onto the system
  • replacing a faulty disk or migrating a system from one hardware to another: the full program from above
  • an installation action which wasn't thoroughly reviewed, so essential packages got removed (very rare): boot into rescue, install the packages again
  • troubleshooting a password-protected boot loader: boot into rescue, do the fixing from there and/or remove the password protection from the boot loader config

Yes. And for him and many others in the OSS community it's a complete showstopper. It's absolutely unacceptable for them to put work into something where lawyers could come and pry the work out of the hands of the community.

There are tales from the past about the enormous memory hunger of zfs. Those tales belong to the land of fairy tales and myths, but at their core there is a grain of truth: btrfs is more resource-efficient than zfs. I read that 1 GB of RAM should be an adequate minimum for use with zfs. If you have a new system with 8 GB of RAM or more, this won't be a limiting factor. And again: if you really want to use deduplication, carefully read the documentation before you decide to use it! The same goes for btrfs!


Another comment:

A con against zfs is the inability to remove vdevs. A vdev is a subpart of a pool (volume).

Example:

Say you have a data volume consisting of a single disk (= one vdev, 1 TB). You decide to replace your single-disk vdev with a raid-1 vdev (1 TB), since you want to add redundancy to be safe in case of a disk crash. That's possible. Over the years, you add another 2 vdevs (2x2 TB, 2x4 TB) as raid-1 arrays. So you then have 3 vdevs making up your volume, consisting of 2 disks each, with an overall capacity of 7 TB.

You now decide to increase your storage again and simultaneously reorganize your 3 x raid1 (6 disks => 7 TB usable) into 1 x raidz2 (5x6 TB => 18 TB usable), to be able to cope with more simultaneous disk crashes (2 disk crashes without data loss here) and at the same time reduce the number of active disks (6 -> 5).

With zfs this is only possible by reformatting, since device removal is not fully supported yet. So you have to copy all the data, which must be done offline. ZFS top-level device removal is in development at the moment, but I expect some years to pass until even raidz vdevs can be removed.

With LVM you can just add the new underlying disks and remove the old disks. No problem; it can all be done online. Btrfs can do that too, and is flexible enough to do even more advanced migrations.
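The LVM online migration mentioned above is essentially pvcreate/vgextend/pvmove/vgreduce. A sketch with hypothetical device and VG names; the filesystems stay mounted throughout:

```shell
# add the new disk to the volume group
pvcreate /dev/sdc
vgextend datavg /dev/sdc
# move all allocated extents off the old disk (online)
pvmove /dev/sdb
# remove the old disk from the VG and wipe its PV label
vgreduce datavg /dev/sdb
pvremove /dev/sdb
```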

And here are some experience reports about btrfs and zfs from users:

ZFS Vs BTRFS : linux

Some not-too-long-ago data loss stories about btrfs are also there. I assume the cause may be lacking knowledge about filesystem operation, but of course that's only a suspicion.

Regarding ZFS, I tend to follow Linus on this. Linus is a smart guy and he knows what he is talking about.

Don't Use ZFS on Linux: Linus Torvalds - Last updated January 10, 2020


@Neo: Those are definitely good points. Regarding performance, I do not need very high performance for my zfs servers, so I have no comparison with other servers using btrfs/ext4.

Regarding the quote "it has no real maintenance behind it either any more": as I see it, the ZFS on Linux code is in steady development. The names of the top contributors, Brian Behlendorf and Matthew Ahrens, sound familiar to me as respected, internationally known programmers. So until I hear more about that, I assume the OpenZFS project is very active at the moment.

Code frequency . openzfs/zfs . GitHub
Contributors to openzfs/zfs . GitHub

Update:

Here's a more comprehensive benchmark comparison (from 2015 -- quite ancient, which matters because btrfs evolved a lot in recent years) between zfs and btrfs (and partly also xfs and ext4):

Update-2:

A performance comparison of zfs vs. xfs by Percona -- the MySQL experts (from 2018):

About ZFS Performance - Percona Database Performance Blog

Here is another guide on tuning zfs.

I just place it here for others to read; the topic is quite interesting for me right now. The document is a reminder to verify every setting you make in a complex system by testing it, and reverting it if it did not improve the situation. He shows a lot of examples which had negative performance impact, like LVM, compression, the default Proxmox CPU assignment (kvm64; for performance it's better to use the host CPU, but you may sacrifice live-migration capability if you have a mixed hardware pool), the Proxmox storage driver, ...

ZFS performance tuning - Martin Heiland


Two more comments on zfs:

  • Don't fill up the filesystems
    If you fill zfs file systems above 80%, performance will degrade.
  • No manual balancing method available
    If you have a volume with more than one vdev and they are not equally full, performance also degrades. For best performance, the utilization of every vdev should be equal. But there are times when vdev utilization is completely different, for example when you add a new vdev: the new vdev will be empty. There are 2 typical ways to solve that:
    [list]
  • utilization will slowly level out to the pool average over time
    The probability that a vdev is the target of a new write is inversely proportional to that vdev's utilization. So the least-filled vdev gets more new data than the others, and vdev utilization averages out with writes and deletes over time.
  • export and import the zfs pool
    If you'd like to have it immediately, you may export and import the pool. That way, on import, all data will be distributed evenly over all vdevs. That task of course needs a lot of temporary space, and probably time, when you have quite some TB of data.
    [/list]

Regarding the performance of filesystems, I'm quite interested in it. Right now, I'm writing benchmark scripts testing different aspects of it and will open a thread here soon.

Unfortunately, I need it; I can't avoid it.

Okay, CloneZilla is not an option for me, simply because I don't have that much space to spare. It seems I am not getting the answer that I want because I am not asking the right questions.

So let me apologize for that, and let me ask if the following workflow is possible on Linux.

  1. I have a single 1 TB SATA hard disk.
  2. I will be using an encrypted LVM with ext4 formatting.
  3. Now let's say before an update or a dist-upgrade, I take a snapshot of the root partition and store that snapshot in the root partition itself.
  4. The upgrade or update fails or is causing problems, and the system is no longer bootable to my desktop.
  5. I boot into a live CD.
  6. Mount my encrypted partitions, and /proc, /sys and /dev from the live CD.
  7. Chroot into my system.
  8. Find the saved snapshot.
  9. Revert it.
  10. Exit from the live CD environment and boot back into the reverted system.
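Assuming a typical LUKS-on-LVM layout (the device, volume group and snapshot names below are hypothetical), steps 5-10 could be sketched like this from a live CD:

```shell
# open the encrypted container and activate LVM (steps 5-6)
cryptsetup luksOpen /dev/sda2 cryptroot
vgchange -ay
# mount root and the pseudo-filesystems, then chroot (steps 6-7)
mount /dev/mapper/vg0-root /mnt
for d in proc sys dev; do mount --bind /$d /mnt/$d; done
chroot /mnt
# steps 8-9: merge the saved snapshot back into the root LV;
# with the origin in use, the merge is deferred to next activation
lvconvert --merge /dev/vg0/root_snap
exit
reboot   # step 10: the merge completes during boot
```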

Main Challenges:

  1. Will the backup process work?
  2. Will the live CD of my OS contain CLI tools to decrypt encrypted partitions?

As you can see, I cannot forego full-disk encryption, nor do I have that much space or time for a full cold-boot snapshot of a partition.

So is the above workflow possible?

From all my experience with Linux, I would assume yes. The better OSS pieces can be operated from rescue systems. But since I have very little competence in the area of those encryption techs, I cannot help you further here.

Hi,

I just had a use case for encryption. I decided to use dm-crypt to create an encrypted container. It's fairly easy. You might just have one encrypted container for your live data and another for your backup. Once it is open, you can read from and write to the filesystem. Many rescue distributions support dm-crypt out of the box (grml, SystemRescueCd, Knoppix).

It would be interesting how you securely automate that, because a backup that's not automated is worthless to me. And if you do not do it securely, encryption makes no sense in my view. Maybe you can place a pendrive with the key on it in your computer, so it only boots up when the pendrive is present?

Here's a tutorial for you to read (use Google to find many more resources on the dm-crypt topic):

How To Use DM-Crypt to Create an Encrypted Volume on an Ubuntu VPS | DigitalOcean
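Condensed from tutorials like the one above, a file-backed dm-crypt container looks roughly like this. The file path, size and mapping name are arbitrary:

```shell
# create a 1 GiB container file and format it as LUKS
fallocate -l 1G /srv/vault.img
cryptsetup luksFormat /srv/vault.img      # asks for a passphrase
# open it, put a filesystem inside, and mount it
cryptsetup luksOpen /srv/vault.img vault
mkfs.ext4 /dev/mapper/vault
mkdir -p /mnt/vault
mount /dev/mapper/vault /mnt/vault
# ... work with the files ... then lock it again
umount /mnt/vault
cryptsetup luksClose vault
```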

It would also be interesting to know the nature of your data and its confidentiality level, so I/we can better understand your situation and maybe help more.

regards,
stomp.

As far as Solaris ZFS is concerned, this has no longer been true since about a year ago:

https://blogs.oracle.com/solaris/oracle-solaris-zfs-device-removal

On the other hand, OpenZFS has only implemented vdev removal in a partial way, excluding raidz:

I have been using the Sun, then Oracle, versions since the end of 2004, then the initial FUSE-based one on Linux, then the native one as soon as it was released. All I can tell you is that I really miss ZFS features each time I have no choice but to use something different.

The only one I avoid is deduplication which is far too memory hungry.

A little off topic here, but the worst case is when I have to run Windows on company laptops. Given Microsoft is now shipping a Linux kernel through Windows Update, maybe one day there will be an official native ZFS on Windows. They might just pick GitHub - openzfsonwindows/ZFSin: OpenZFS on Windows port ...
