Got a new PC handed down to me, and now my old one is collecting dust. It has a dedicated GPU (GTX 1060, 6GB VRAM). I guess the most obvious thing would be an AI model, or maybe Jellyfin (which is currently running just fine on a Raspberry Pi 5), but I was wondering if you maybe had other suggestions?

  • Seefra 1@lemmy.zip · 14 hours ago

    “I have a hammer and I hate it’s not hammering, any cool ideas involving nails?”

    You see, I have the exact same problem as you: I just can’t stand seeing hardware go unused, especially computer hardware that depreciates. But before thinking “what can I do with this hardware?”, you should ask “do I have a need or a problem that this hardware can solve?” And if the answer is “no”, then maybe consider selling the GPU or giving it to a friend who needs it.

    My Jellyfin works without a GPU; my old 2nd-generation i3 is enough to real-time transcode video to my phone. Maybe I would need an upgrade if I had more users, but I’m its only user.

    Do you have multiple users on your server such that you need GPU acceleration? If not, there’s not much reason to use GPU accel anyway (which is usually trickier to set up).

    Still, repurposing the computer as a server seems like a good idea, because I at least can’t stand the nightmare of using USB hard drives; I’ve had really bad experiences with those lousy cables and connections. But if you do that, it leaves you with another problem: what to do with the Raspberry Pi?

    Also, I just recently built a new PC and had the same problem of not knowing what to do with my laptop. I came to the conclusion that the best thing I can do with it is run background chat applications on it, and maybe a web browser via waypipe. That way it just looks like a window on my main PC, and I have RAM free on my new PC that I may need for heavy workloads like Blender rendering.

    • Vogi@piefed.social (OP) · 6 hours ago

      Yea, you are probably right. Just wanted to ask before I hand it down myself, in case I missed some really cool use case for it.

      My Jellyfin is also very basic, with everything pre-transcoded, as I don’t even have/need >1080p.

      So it just looks like a window on my main PC and this way I have ram on my new PC

      This sounds really cool. Does that actually work, did you do a comparison? I would have expected that most RAM usage comes from rendering the frame and not sending it, or does waypipe somehow outsource that as well?

      • Seefra 1@lemmy.zip · 1 hour ago

        This sounds really cool. Does that actually work, did you do a comparison? I would have expected that most RAM usage comes from rendering the frame and not sending it, or does waypipe somehow outsource that as well?

        I haven’t tested it yet; it’s something I plan to do this week or next.

        From what I understand, waypipe should use minimal RAM; all it basically does is forward the image, sound, and inputs. All the heavy lifting is done on the server side, where the application actually runs.
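For anyone curious, the basic waypipe invocation is a one-liner. A sketch, assuming waypipe is installed on both machines and both ends run Wayland; the hostname and application are placeholders:

```shell
# Run a GUI app on the remote box; it shows up as a local window.
waypipe ssh user@old-pc firefox

# Optionally compress the stream for slower links.
waypipe --compress lz4 ssh user@old-pc firefox
```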

  • NotSteve_@piefed.ca · 19 hours ago

    Regarding Jellyfin, if the PC you got has an Intel CPU then using Intel QuickSync would actually easily outperform the NVIDIA card for transcoding.
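As a rough illustration of the QuickSync path (a sketch, not Jellyfin’s exact pipeline; Jellyfin enables this from its hardware-acceleration settings, and the filenames and bitrate here are placeholders):

```shell
# Hardware decode + encode via Intel QuickSync (QSV) with ffmpeg.
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mkv \
       -c:v h264_qsv -b:v 4M output.mkv
```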

    Up until very recently I was using a cheap i3 to power my Jellyfin instance that often has 5+ streams going at a time. (The only reason I upgraded was that I had a friend getting rid of an i7 from the same gen lol)

  • Truscape@lemmy.blahaj.zone · 21 hours ago

    Honestly, the easiest use for the PC would be to remove the GPU (if the CPU has integrated graphics) and host things like community game servers for your friends (or maybe a self-hosted chat server like TeamSpeak or similar).

    A GPU of that caliber is not ideal for those kinds of workloads (although it’d work fine for media encoding).

  • KorYi@lemmy.ml · 21 hours ago

    I have my old GTX 980 in a server. It is currently handling object recognition and transcoding in frigate, immich and Plex. Works great for this (although not super useful for Plex as it doesn’t support HEVC).

    I haven’t tried throwing any LLMs at it.

  • wabasso@lemmy.ca · 19 hours ago

    Are you going to be running Linux?

    I’ve also got a tower with a GTX 1060, and I’d like to have it sleeping but ready to be woken by the Pi when needed. It never wakes up from the sleep state, though, so I’m curious if you’ve had any luck with that, so we can trade notes.
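For the Pi side of that setup, the wake trigger is usually just a Wake-on-LAN magic packet; when a box won’t wake, the culprit is typically BIOS/NIC settings rather than the sender. A minimal sketch in Python (the MAC address is a placeholder):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Wake-on-LAN payload: 6 bytes of 0xFF, then the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN (run this from the Pi)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("aa:bb:cc:dd:ee:ff")  # placeholder: the tower's NIC MAC
```

On the Linux target you’d also want WOL enabled on the NIC (e.g. `ethtool -s eth0 wol g`) and in the BIOS; many boards only support waking from suspend/soft-off if those are set.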

  • Eirikr70@jlai.lu · 20 hours ago

    I presume that its power consumption is not negligible. So I’d just keep it off if I don’t really need it.

  • lyralycan@sh.itjust.works · 18 hours ago

    Can confirm what another user said: the Intel iGPU would be better in your case.

    I’ll tell you now: if it runs Windows, kill it. My server originally ran Windows with Docker Desktop. It hosted three services: a Minecraft server, which lagged like a bitch; a Samba folder share; and Emby. Whenever Emby playback froze, I knew Windows (whose antivirus kept running the HDD under constant load) had pushed the i3 6100 to 100%, which happened at least twice a day.

    Moving on, now I run Proxmox. I host 25 services with the CPU at ~35% idle and 24GB RAM at 75%. Nothing lags.

    Before I plugged in the GPU my server drew 25W consistently, going to 35W under load. With the GPU, an RTX 3060 11GB (used), it uses 85W idle, so make sure it’s worth it. For my case it not only transcodes for Emby and resumes streaming in a second, but also handles voice inference for Home Assistant in under a second, and mid-sized Ollama LLM responses. Would recommend a high VRAM Nvidia card (for CUDA) in that scenario, as my model Gemma3 7B uses 6GB VRAM and 2GB RAM. But a top model, say Dolphin-Mixtral 22B, needs 80GB storage, 17GB RAM and… Well I don’t have the RAM but you get it. LLMs are intensive.