So after months of dealing with problems trying to get the things I want to host working on my Raspberry Pi and Synology, I’ve given up and decided I need a real server with an x86_64 processor and a standard Linux distro. Since I don’t want to keep running into problems after spending a bunch more money, I want to think seriously about what I need hardware-wise. What considerations should I be weighing up here?

Initially, the main things I want to host are Nextcloud, Immich (or similar), and my own Node bot @DailyGameBot@lemmy.zip (which uses Puppeteer to take screenshots—the big issue that prevents it from running on a Pi or Synology). I’ll definitely want to expand to more things eventually, though I don’t know what. Probably all/most in Docker.

For now I’m likely to keep using Synology’s reverse proxy and built-in Let’s Encrypt certificate support, unless there are good reasons to avoid that. And as much as possible, I’ll want the actual files (used by Nextcloud, Immich, etc.) to be stored on the Synology to take advantage of its large capacity and RAID 5 redundancy.
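
For reference, the rough shape I have in mind is mounting a Synology share over NFS on the new server, so the bulk data stays on the NAS. A minimal sketch, with a made-up IP and share name:

    # enable NFS and export the shared folder on the Synology first
    sudo apt install nfs-common
    sudo mkdir -p /mnt/nas/appdata
    sudo mount -t nfs 192.168.1.20:/volume1/appdata /mnt/nas/appdata
    # to survive reboots, an /etc/fstab line along these lines:
    # 192.168.1.20:/volume1/appdata  /mnt/nas/appdata  nfs  defaults,_netdev  0  0

I gather databases (e.g. Immich's Postgres) are better kept on the server's local disk, with only the bulk media on the NAS, but correct me if that's wrong.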

Is a second-hand Intel-based mini PC likely suitable? I read one thing saying that they can have serious thermal throttling issues because they don’t have great airflow. Is that a problem that matters for a home server, or is it more of an issue with desktops where people try to run games? Is there a particular reason to look at Intel vs AMD? Any particular things I should consider when looking at RAM, CPU power, or internal storage, etc. which might not be immediately obvious?

Bonus question: what’s a good distro to use? My experience so far has mostly been with desktop distros, primarily Kubuntu/Ubuntu, or with niche distros like Raspbian. But all Debian-based. Any reason to consider something else?

  • curbstickle@anarchist.nexus · 8 points · 3 days ago

    Business mini PCs with a decent amount of RAM fit your use case well. And mine, which is why I have a bunch of them.

    The only time I’ve seen heat be an issue is when they are stacked. To be clear, airflow on those is usually front to back; the problem is the chimney effect, since heat rises. So stacking can be a problem, but I just stick some thick nylon washers between them, and it’s worked quite well sitting them on a shelf in my rack. I generally put them in stacks of two, with two side by side, for a total of four per shelf.

    You don’t need to do that right off though with just one.

    If you do get a used one, look for units with 16GB or more of RAM, or bump it to 32GB/64GB (model dependent) yourself. There is usually an unused M.2 slot, which is great for the host OS to live on if you’ve got a spare drive (prices suck right now to buy), and typically there is a 2.5" data SSD, though sometimes it’s mechanical or one of those hybrids. Useful storage, but use M.2 if you can.
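
    If you want to sanity-check what's actually inside a used unit once it boots, a quick look from a live USB or the installed OS tells you the RAM config, free slots, and drives (a rough sketch; /dev/sda is just an example):

      # RAM sticks, sizes, and empty slots
      sudo dmidecode -t memory | grep -E 'Size|Locator'
      free -h
      # drives: NVMe vs SATA shows up in the TRAN column
      lsblk -o NAME,SIZE,TYPE,MODEL,TRAN
      # drive health (needs the smartmontools package)
      sudo smartctl -a /dev/sda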

    I prefer the Intel-based units so I can use the iGPU for general tasks, and if it has a dGPU (I have a few with a Quadro in there) I use that for more dedicated transcoding tasks, or to pass through to a VM. For Jellyfin it’s using the iGPU; no need to pass through if you’re using an LXC, for example.

    Make sure to clean it out when you get it, and check how the fan is working. I’d pull the case, go into the BIOS, and manually change the fan speed. Make sure it’s working correctly, or replace it (pretty cheap; the last replacement I bought was ~$15). Any thermal paste in there is probably dried out, so replacing it isn’t a bad idea either.
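
    To confirm the fan and fresh paste are actually doing their job, lm-sensors is the easy check, roughly:

      sudo apt install lm-sensors
      sudo sensors-detect    # accepting the defaults is usually fine
      sensors                # CPU temps, plus fan RPM if the board exposes it
      # put some load on it (e.g. stress-ng) and watch whether it throttles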

    In terms of what to get, I’d lean towards 6th-gen or newer Intel CPUs for the most utility. One with a dGPU is handy, obviously, but not a requirement.

    Personally I am a Debian guy for anything server. So I put Debian on, no DE, set up how I want, then convert it to Proxmox. If you’re not overly specific about your setup (like most people, and how I should probably be, but I’m too opinionated), you can just install Proxmox directly.
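
    The Debian-to-Proxmox conversion is basically just adding their repo and installing the packages on top. Very roughly, and double-check the current "Install Proxmox VE on Debian" wiki page for the release name and signing key:

      # assumes Debian 12 "bookworm"; adjust for whatever is current
      echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
        | sudo tee /etc/apt/sources.list.d/pve.list
      # (add the matching Proxmox release key per the wiki, then:)
      sudo apt update && sudo apt full-upgrade
      sudo apt install proxmox-ve postfix open-iscsi
      # after a reboot, the web UI lives at https://<server-ip>:8006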

    Proxmox has no desktop environment. It’s just a web GUI and the CLI, so once set up you can manage it entirely from another device. Mine connect to video switchers I have to spare, but you can just plug a monitor in temporarily if you need it.

    The Proxmox community scripts will show you lots of options. I don’t recommend blindly running scripts off the internet, but browsing them will show you a lot of easy options for services.

    Hope this helps!

    • loiakdsf@discuss.tchncs.de · 2 points · 17 hours ago

      I have a similar setup but am facing a storage issue now. Is a USB-C external case for 2 HDDs in RAID 1 any good, or how do you handle that?

      • curbstickle@anarchist.nexus · 1 point · 15 hours ago

        Depends on what you’re using it for, though I don’t like external drives in general for anything where I want stability.

        I have a NAS for my media storage (and a backup NAS for that media, plus another for miscellaneous), so the only things on the drives in those machines are the VMs and LXCs themselves.

    • Zagorath@aussie.zone (OP) · 2 points · 3 days ago

      Wow thanks, a lot of great advice in here!

      I actually do have an old M.2 drive sitting around somewhere, if I can find it. I think it was an M.2 SATA (not NVMe) drive though, so I’m not sure if there’s any advantage over a 2.5" drive other than the physical size.

      What exactly is Proxmox? A distro optimised for use in home servers? What exactly does it do for you that’s better than more standard Debian/Ubuntu?

      • Allero@lemmy.today · 2 points · edited · 2 days ago

        What exactly is Proxmox?

        In layman’s terms, it’s a Debian-based distro that makes managing your virtual machines and LXC containers easier. Thanks to its web interface, you can set up most things graphically, monitor and control your VMs and containers at a glance, and generally take the pain out of managing it all.

        It’s just so much better when you see everything important straight away.

        • Zagorath@aussie.zone (OP) · 1 point · 1 day ago

          I guess I have the same question for you as I did for curbstickle. What’s the advantage of doing things that way with VMs, vs running Docker containers? How does it end up working?

          • Allero@lemmy.today · 1 point · edited · 19 hours ago

            Proxmox can work with VMs and LXC containers.

            When you need resources always reserved for a given task, VMs are very handy. A VM will always have access to the resources it needs, and can run any OS and any piece of software without special preparation or special images. Proxmox manages VMs efficiently (using KVM), ensuring near-native performance.

            When you want to run services in parallel with others, with minimal resource usage at idle, you go with containers.

            LXC containers are very efficient, arguably more so than Docker, but they’re limited to Linux images and software, as they share the kernel with the host. Proxmox lets you manage LXC containers in a very straightforward way, as if they were standalone installations, while handling the rest behind the scenes.
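
            To give a flavor of that: the CLI equivalent of clicking through the container wizard is a single command (the ID, template name, and sizes below are just examples):

              pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
                --hostname mycontainer --unprivileged 1 \
                --cores 2 --memory 1024 --rootfs local-lvm:8 \
                --net0 name=eth0,bridge=vmbr0,ip=dhcp
              pct start 101

            The same container then shows up in the web interface next to everything else.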

      • curbstickle@anarchist.nexus · 2 points · 3 days ago

        What exactly is Proxmox?

        Debian with a custom kernel, a web interface, and accompanying CLI tools in support of virtualization.

        For one, I won’t touch Ubuntu for a server. Hard recommend against in all scenarios. Snap is a nightmare, both in use and in security, and I have zero trust or faith in Canonical at this point (as mentioned, I’m opinionated).

        Debian itself is all I’ll use for a server; if I’m doing virt, though, I’d rather use Proxmox to make management easier.

        • Zagorath@aussie.zone (OP) · 2 points · 3 days ago

          if I’m doing virt though

          What’s the use case for that? My plan has been to run a single server with a handful of Docker containers. No need for more complex stuff like load balancing or distributed compute.

          • curbstickle@anarchist.nexus · 2 points · 3 days ago

            I prefer LXC to Docker in general, but that’s just a preference.

            If you end up relying on it, you can expand by adding another server to the cluster, and easily support the more complex stuff without major changes.

            The web interface is also extremely handy, as is the CLI, and backups are easy. High utility for minimal effort.
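
            For a sense of how low-effort that is: a backup is one vzdump command (or a scheduled job in the GUI), and clustering later is one command per node. Names and IDs below are just examples:

              # back up guest 101 to a storage called "backups"
              vzdump 101 --storage backups --mode snapshot --compress zstd
              # later: turn the first box into a cluster, then join the second
              pvecm create homelab           # on the first node
              pvecm add <ip-of-first-node>   # on the new node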

            It’s also a lot easier to add a VM later if you’re set up for it from the start, IMO.

            • Zagorath@aussie.zone (OP) · 2 points · 1 day ago

              Interesting. I’ve never really played around with that style of VM-based server architecture before. I’ve always either used Docker (& Kubernetes) or run things on bare metal.

              If you’re willing to talk a bit more about how it works, the advantages of it, etc., I’d love to hear. But I sincerely don’t want to put any pressure on you, and I won’t be at all offended if you don’t have the time or energy.

              • curbstickle@anarchist.nexus · 1 point · 1 day ago

                No worries

                Like I said, I generally prefer LXC. LXC and Docker aren’t too far off, in that both are container solutions, but the approach is a bit different: Docker is more focused on packaging an application, while LXC is more about creating an isolated Linux system that can run apps. If that makes sense.

                LXC is really lightweight, but the main reason I like it is the security approach. While Docker is more about running as a low-privileged user, the LXC approach is a completely unprivileged container: it’s isolating at the system level rather than the app level.
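
                Concretely, that’s just a flag on the container, and the ID mapping does the rest. A sketch of what ends up in the container’s config (the ID is an example):

                  # /etc/pve/lxc/101.conf
                  unprivileged: 1
                  # root inside the container maps to a high, unprivileged host UID
                  # (100000 by default), so a breakout lands you as a nobody on the host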

                The nice thing about a bare-metal hypervisor like Proxmox is that there isn’t just one way to do things. I have a few tools I run as Docker containers, mostly because they’re packaged that way and I don’t want to have to build them myself, so I have an LXC that runs Docker. Mostly, though, everything runs in an LXC, with few exceptions.
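
                The only special bit for Docker-inside-an-LXC is enabling nesting on that container, roughly:

                  pct set 105 --features nesting=1,keyctl=1   # 105 is just an example ID
                  # then install Docker inside it like on any normal Debian system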

                For example, I have a Windows VM just for some specific industry applications. I turn on the VM, then open remote desktop software, and since I’m passing the dGPU through to the VM, I get all the acceleration I need, specifically when I need it; when I’m done I shut that VM off. Other VMs with similar purposes (but different builds) also share that dGPU.
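
                Passing the dGPU to a VM like that is a one-liner once IOMMU is enabled in the BIOS and on the kernel command line; the PCI address and VM ID here are examples, and only one running VM can own the card at a time:

                  lspci | grep -i nvidia              # find the card's PCI address
                  qm set 110 -hostpci0 01:00,pcie=1   # pcie=1 wants the q35 machine type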

                Not Jellyfin, though; that’s an LXC where I share access to my iGPU, so the LXC gets all the acceleration and I don’t need to dedicate the GPU to the task. Better yet, I actually have multiple JF instances (among a few other tools that use the iGPU) and they all get the same access while running simultaneously. Really, really handy.
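
                Sharing the iGPU with an LXC is just bind-mounting /dev/dri into the container, which is why several containers can use it at once. The usual config lines look something like this (the container ID is an example):

                  # /etc/pve/lxc/120.conf
                  lxc.cgroup2.devices.allow: c 226:* rwm
                  lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
                  # inside the container, add the service user to the video/render groups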

                Then there are other things I like as a VM that are always on, like Home Assistant. I have a USB dongle I need to pass through (I’ll skip the overly complex setup I have with USB switching), and that takes no effort in virt. And if something goes wrong, it just starts on another machine. Or if I want to redistribute for some manual load balancing, or make some hardware upgrades, whatever. Add in Ceph and clustering is just easy peasy, IMO.
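
                The USB dongle side is similarly painless: you pass it through by vendor/product ID so it doesn’t matter which port it’s in (the IDs below are made up):

                  lsusb                              # find the dongle's vendor:product ID
                  qm set 120 -usb0 host=1a2b:3c4d    # attach it to VM 120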

                The main reason I use Proxmox is that it’s one interface for everything: access to all forms of virt on the entire cluster from a single web interface. I get an extra layer of isolation for my Docker containers, flexibility in deployment, and because it’s a cluster I can have a few machines go down and I’m still good to go. My only points of failure are the internet (but local still works fine) and power (but everything I “need” is on UPS anyway). The cluster is, in part, because I was sick of having things down because of an update, and my wife being annoyed by it once she got used to HA, the media server, audiobook server, eBook server, music server (Navidrome as well as JF, yes, excessive), and so on.

                Feel free to ask about any specifics.