• moseschrute@lemmy.world · 3 days ago

    Lol, you lost me there. I’ve read up on the various RAID configurations. I’ve heard about CephFS. I don’t know much about it, but I get the sense it’s the new kid on the block.

    I actually have a RAID question for you. I want to set up a little RAID array starting with 2 mirrored drives and add more drives later. But it seems there’s no easy way to migrate between RAID levels? Let’s say I want to start with 2, then 3, then 4 drives as stuff fills up. I always want some level of redundancy, and I don’t want to use any additional drives aside from the 2, 3, then 4 in the array. Is this possible? Either with RAID or with CephFS?

    • grue@lemmy.world · 3 days ago

      Funny you should mention that, because it’s what got me thinking about Ceph in the first place. My other Proxmox node has a 2-drive mirrored ZFS pool, and I went to add a third drive to it and realized that I’d have to move all the data off and rebuild it from scratch, so I started looking for other solutions.
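
      For what it’s worth, here’s roughly the wall I hit (pool and device names are just placeholders, and I haven’t re-tested any of this):

      # Current layout: a 2-disk mirror
      zpool status tank

      # 'zpool attach' only adds another mirror member, so you end up with a
      # 3-way mirror: more redundancy, but the same usable capacity.
      zpool attach tank /dev/sdb /dev/sdd

      # There's no in-place mirror-to-raidz conversion that I know of, which is
      # why the usual answer is: copy the data off, destroy the pool, recreate it.
      zpool destroy tank
      zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

      Newer OpenZFS releases can apparently grow an existing raidz vdev one disk at a time, but as far as I can tell that still doesn’t help when you’re starting from a mirror.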

      So yeah, I think Ceph can grow like that after the fact by adding drives as OSDs (on top of the not-wasting-capacity-on-mismatched-disks thing), but I haven’t dug into it enough yet to be sure.
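
      From what I’ve read so far, the rough shape on Proxmox looks something like this (the pool name and device are made up, and I haven’t actually run it yet):

      # Each new disk becomes its own OSD; Ceph rebalances existing data onto it.
      pveceph osd create /dev/sdd

      # Redundancy is set per pool, not per disk: size=2 keeps two copies of
      # every object no matter how many OSDs are in the cluster.
      ceph osd pool set mypool size 2

      # Watch the cluster shuffle data onto the new OSD
      ceph -s
      ceph osd df

      The nice part, if I’m understanding it right, is that adding the third and fourth drives later is just a matter of repeating that first command.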

      • moseschrute@lemmy.world · 3 days ago

        I was also totally mixing up Ceph with ZFS. Linux tech content mentions ZFS a lot, and that’s the source of most of my RAID knowledge lol