A year ago I set up an Ubuntu server with 3 ZFS pools. Normally I don’t copy very large files, but today I was copying a ~30GB directory and rsync showed the transfer never exceeding 3 MB/s (cp is just as slow).

What is the best file system that “just works”? I’m thinking of migrating everything to ext4

EDIT: I really like the automatic pool recovery feature in ZFS; it has saved me from one hard drive failure so far

    • Fisch@lemmy.ml · 9 months ago

      I use BTRFS on everything too nowadays. The thing that made me switch everything to BTRFS was filesystem compression.

        • Fisch@lemmy.ml · 9 months ago

          I use zstd too, didn’t specify a level tho, so it’s just using the default. I only use like ⅔ of the disk space I used before and I don’t feel any difference in performance at all.
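For context on what enabling that looks like, here is a sketch of an fstab entry; the UUID and mount point are placeholders, not taken from this thread:

```
# /etc/fstab -- transparent zstd compression on a btrfs mount.
# With no level given, compress=zstd uses the default (level 3);
# compress=zstd:5 would pick a higher one.
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  btrfs  compress=zstd,noatime  0  0
```

Existing files can be recompressed afterwards with `btrfs filesystem defragment -r -czstd /data`.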

    • atzanteol@sh.itjust.works · 9 months ago

      FWIW lvm can give you snapshots and other features. And mdadm can be used for a raid. All very robust tools.
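A rough sketch of both, with made-up volume group, volume, and device names; everything needs root, and both parts are guarded so they no-op on machines without the tools or devices:

```shell
# LVM: snapshot a logical volume before risky changes.
# "vg0"/"data" are illustrative names; check yours with `lvs`.
if command -v lvcreate >/dev/null 2>&1 && vgs vg0 >/dev/null 2>&1; then
  lvcreate --size 5G --snapshot --name data-snap /dev/vg0/data
  # Roll back later with: lvconvert --merge /dev/vg0/data-snap
  # or discard the snapshot: lvremove -y /dev/vg0/data-snap
fi

# mdadm: build a RAID1 mirror from two example disks.
if command -v mdadm >/dev/null 2>&1 && [ -e /dev/sdx ] && [ -e /dev/sdy ]; then
  mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdx /dev/sdy
fi
```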

    • TCB13@lemmy.world · 9 months ago

      Yes, and BTRFS, unlike Ext4, will not get corrupted by the first power outage or slight hardware failure.

      • Possibly linux@lemmy.zip · 9 months ago

        I’ve run btrfs for years and never had an issue. The one time my system wouldn’t boot, it was due to a bad drive. I just swapped the drive, rebalanced, and was back up and running in less than half an hour.

      • devfuuu@lemmy.world · 9 months ago

        Corruption on power loss only regularly happened to me on XFS a few years ago. That made me swear never to use that fs again. I’ve never seen it on my ext4 systems, which are all I’ve run for years on multiple computers.

        • TCB13@lemmy.world · 9 months ago

          I’m confused by your answer. BTRFS is good and reliable. Ext4 gets fucked at the slightest issue.

          • SayCyberOnceMore@feddit.uk · 9 months ago

            Never had an issue with EXT4.

            Had a problem on a NAS where BTRFS was taking “too long” for systemd to check it, so it just didn’t mount… a bit of config tweaking and all is well again.

            I use EXT* and BTRFS wherever I can because I can manipulate them with standard tools (inc. gparted).

            I have one LVM system, which was interesting, but I wouldn’t do it that way in the future (it was used to add drives on a media PC).

            And as for ZFS… I’d say it’s very similar to BTRFS, but just slightly too complex on Linux with all the licensing issues, etc., so I just can’t be bothered with it.

            As a throw-away comment, I’d say ZFS is used by TrueNAS (not a problem, just sayin’…) and… that’s about it??

            As to the OP’s original question, I agree with the others here… something’s not right there, but it’s probably not the filesystem.

          • Eideen@lemmy.world · 9 months ago

            Yes, both BTRFS and Ext4 are vulnerable to unplanned power loss while writes are in flight, commonly known as a write hole.

            BTRFS, since it uses copy-on-write (CoW), is more vulnerable, as metadata needs to be updated as well. Ext4 does not have CoW.

            • Atemu@lemmy.ml · 9 months ago

              Ext4 does not have CoW.

              That’s the only true part of this comment.

              As for everything else:

              Ext4 uses journaling to ensure consistency.

              btrfs’ CoW makes it resistant to that issue by its nature; writes go elsewhere anyways, so you can delay the “commit” until everything is truly written and only then update the metadata (using a similar scheme again).

              Please read https://en.wikipedia.org/wiki/Journaling_file_system.

            • TCB13@lemmy.world · edited · 9 months ago

              BTRFS, since it uses copy-on-write (CoW), is more vulnerable, as metadata needs to be updated as well. Ext4 does not have CoW.

              This is where theory and practice diverge, and I bet a lot of people here will have essentially the same experience I have. I will never run an Ext filesystem again, not ever, as I got burned multiple times by Ext shenanigans, both at home/in the homelab and in the datacenter. BTRFS, ZFS, and XFS are all far superior and more reliable.

      • AggressivelyPassive@feddit.de · 9 months ago

        You could try to redo the copy and monitor the system in htop, for example. Maybe there’s a memory or CPU bottleneck. Maybe one of your drives is failing, maybe you’ve got a directory with tons of very small files, which causes a lot of overhead.
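A terminal-side version of the same idea, run while the copy is re-running; iostat comes from the sysstat package, and nothing here is specific to OP's machine:

```shell
# Per-disk utilization, 3 samples 2 s apart: one drive pinned near
# 100 %util while the rest idle points at that disk, not the filesystem.
if command -v iostat >/dev/null 2>&1; then
  iostat -dxm 2 3
else
  echo "iostat not found; install the sysstat package"
fi

# Quick look at memory pressure while the copy runs
free -m
```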

        • TheWilliamist@lemmy.world · 9 months ago

          Yes, file size, drive types, the amount of RAM in the server, and the source and destination of the operation can all have an effect on performance. But generally, if he’s moving data within the same pool, it should be pretty quick.

  • atzanteol@sh.itjust.works · 9 months ago

    Most filesystems should “just work” these days.

    Why are you blaming the filesystem here when you haven’t ruled out other issues yet? If you have a failing drive, a new FS won’t help. Check out “smartctl” to see if it reports errors on your drives.
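For instance (needs the smartmontools package and usually root; the /dev/sd* glob is a guess at the device names, check `lsblk` for yours):

```shell
# Overall SMART verdict plus the counters that most often predict a
# dying disk (reallocated/pending/uncorrectable sectors).
if command -v smartctl >/dev/null 2>&1; then
  for dev in /dev/sd[a-z]; do
    [ -e "$dev" ] || continue
    echo "== $dev =="
    smartctl -H "$dev"
    smartctl -A "$dev" | grep -Ei 'realloc|pending|uncorrect' || true
  done
else
  echo "smartctl not found; install smartmontools"
fi
```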

    • KptnAutismus@lemmy.world · 9 months ago

      they may be using really slow hard drives or an SSD without DRAM.

      or maybe a shitty network switch?

      maybe the bandwidth is used up by a torrent box?

      there’s a lot of possible causes.

    • Merlin404@lemmy.world · 9 months ago

      That I’ve learnt the hard way it doesn’t 😅 I have an Ubuntu server with UniFi Network on it that’s now completely out of inodes 😅 The positive thing: I’m forced to learn a lot about Linux 😂
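For anyone hitting the same wall: `df -h` can show plenty of free space while the filesystem is out of inodes, which is what `df -i` reports. The directories below are just common suspects, not known from this thread:

```shell
# IUse% at 100% means no new files can be created despite free space
df -i /

# Count files under likely culprits; runaway logs are a classic inode
# eater. -xdev stops find from crossing into other filesystems.
for d in /var/log /tmp; do
  [ -d "$d" ] && printf '%-10s %s files\n' "$d" "$(find "$d" -xdev 2>/dev/null | wc -l)"
done
```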

  • ptman@sopuli.xyz · 9 months ago

    How full is your ZFS? ZFS doesn’t handle disk filling and fragmentation well.
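Easy to check; the CAP and FRAG columns of `zpool list` are the ones to look at (guarded here since it obviously needs ZFS installed):

```shell
# ZFS write throughput drops sharply once a pool gets past roughly
# 80-90% full, and high FRAG makes finding contiguous space slow.
if command -v zpool >/dev/null 2>&1; then
  zpool list -o name,size,alloc,free,cap,frag,health
else
  echo "zpool not found on this machine"
fi
```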

  • taladar@sh.itjust.works · 9 months ago

    XFS has “just worked” for me for a very long time now on a variety of servers and desktop systems.

  • Decronym@lemmy.decronym.xyz (bot) · edited · 9 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer letters   More letters
    LVM             (Linux) Logical Volume Manager for filesystem mapping
    NAS             Network-Attached Storage
    PSU             Power Supply Unit
    SSD             Solid State Drive mass storage
    ZFS             Solaris/Linux filesystem focusing on data integrity

    5 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.

    [Thread #486 for this sub, first seen 5th Feb 2024, 15:05]

  • nezbyte@lemmy.world · edited · 9 months ago

    MergerFS + Snapraid is a really nice way to turn ext4 mounts into a single entry point NAS. OpenMediaVault has some plugins for setting this up. Performance wise it will max out the drive of whichever one you are using and you can use cheap mismatched drives.
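A sketch of how the two halves fit together outside OpenMediaVault; every path and drive name below is illustrative:

```
# /etc/snapraid.conf -- one parity drive protects the ext4 data drives
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/

# /etc/fstab -- mergerfs pools the same data drives into one mount;
# category.create=mfs places new files on the drive with the most free space
/mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,category.create=mfs  0  0
```

Then `snapraid sync` after changes and `snapraid scrub` periodically.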

    • Moonrise2473@feddit.it · 9 months ago

      From the article it looks like ZFS is the perfect file system for SMR drives, as it would try to cache random writes

      • PedanticPanda@lemmy.world · 9 months ago

        Possibly, with tuning. OP would just have to be careful about resilvering. In my experience SMR drives really slow down when the CMR buffer is full.

  • SayCyberOnceMore@feddit.uk · 9 months ago

    Where are you copying to / from?

    Duplicating a folder on the same NAS on the same filesystem? Or copying over the network?

    For example, some devices have a really fast file transfer until a buffer fills up, and then it crawls.

    Rsync might not be the correct tool either if you’re duplicating everything to an empty destination…?
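One way to separate the tool from the filesystem is to time a plain local copy of a synthetic file, which takes the network and rsync's delta algorithm out of the picture (sizes and paths here are arbitrary):

```shell
# Make a 64 MiB test file and time a straight local copy of it.
mkdir -p /tmp/copytest/src /tmp/copytest/dst
dd if=/dev/zero of=/tmp/copytest/src/big.bin bs=1M count=64 2>/dev/null
time cp /tmp/copytest/src/big.bin /tmp/copytest/dst/
# For real local duplication, `cp -a` (or `rsync -a --whole-file`,
# which skips the delta algorithm) is usually the fairer comparison.
```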

      • SayCyberOnceMore@feddit.uk · 9 months ago

        Still the same, or has it solved itself?

        If it’s lots of small files rather than a few large ones, that’ll be the file allocation table and/or journal…

        A few large files? Not sure… something’s getting in the way.

  • Unyieldingly@lemmy.world · 9 months ago

    ZFS is by far the best; just use TrueNAS, since Ubuntu is crap at supporting ZFS. Also, only make your pool’s VDEVs 6-8 disks wide.

    • Trincapinones@lemmy.world (OP) · 9 months ago

      I was thinking about switching to Debian (everything I host is in Docker, so that’s why), but the weird thing is that it was working perfectly a month ago

      • Unyieldingly@lemmy.world · 9 months ago

        Maybe your HBA is having issues? Or a drive is failing? Have you done a memtest? You may need to do system-wide tests; it can even be a PSU failing or a software bug.

        Also, TrueNAS is built with Docker and they use it heavily, something like 106 apps. Debian has good ZFS support, but you will end up doing a lot of unneeded work using Debian unless you keep it simple.

  • ikidd@lemmy.world · 9 months ago

    Use zfs send/receive instead of rsync. If it’s still slow, it’s probably SMR drives.
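A sketch of that kind of ZFS-native copy using `zfs send | zfs receive` (the pool/dataset names are made up, and it's guarded so it no-ops without ZFS and that dataset):

```shell
# Snapshot the source dataset, then stream it into a new dataset.
# "tank/data" is a placeholder; substitute names from `zfs list`.
if command -v zfs >/dev/null 2>&1 && zfs list tank/data >/dev/null 2>&1; then
  zfs snapshot tank/data@xfer
  zfs send tank/data@xfer | zfs receive tank/data-copy
fi
```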