In the What are YOU self-hosting? thread, a lot of people here are self-hosting a huge number of applications, but there’s not much discussion of the platform these things run on.

What does your self-hosted infrastructure look like?

Here are some examples of more detailed questions, but I’m sure there are plenty more topics that would be interesting:

  • What hardware do you run on? Or do you use a data center/cloud?
  • Do you use containers or plain packages?
  • Orchestration tools like K8s or Docker Swarm?
  • How do you handle logs?
  • How about updates?
  • Do you have any monitoring tools you love?
  • Etc.

I’m starting to put together my own homelab. I’ll definitely be starting small, but I’m interested to hear what other people have done with their setups.

  • wpuckering@lm.williampuckering.com · 1 year ago

    I have a single ASUS Chromebox M075U (i3-4010U) which I use as a Docker host. It’s neatly and inconspicuously tucked away under my TV, and it’s quiet even with the fan on full when a heavy workload is running.

    Main Specs:

    • Processor: Intel Core i3-4010U 1.7 GHz
    • Memory: 4 GB DDR3 1600 (I upgraded this to 16 GB)
    • Storage: 16 GB SSD (I upgraded this to 64 GB)
    • Graphics: Intel HD Graphics 4400
    • OS: Google Chrome OS (Currently running Ubuntu 22.04)

    Full Specs: https://www.newegg.ca/asus-chromebox-m075u-nettop-computer/p/N82E16883220591R

    I started off with a single-node Kubernetes cluster (k3s) a few years ago for learning purposes, and ran with it for quite a long time, but have since gone back to Docker Compose for a few reasons:

    • Less overhead and more lightweight
    • Quicker and easier to maintain now that I have a young family and less time
    • Easier to share examples of how to run certain stacks with people who don’t have Kubernetes experience (see the sketch below)
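
    As a rough illustration of that last point, a whole stack is just one self-describing Compose file. This is only a generic sketch with placeholder service names, tags, ports, and paths, not my actual config:

    ```yaml
    # docker-compose.yml: a minimal two-service stack (placeholder names/paths)
    services:
      jellyfin:
        image: jellyfin/jellyfin:10.8.13   # pin a specific tag rather than :latest
        ports:
          - "8096:8096"
        volumes:
          - ./jellyfin/config:/config
          - ./media:/media:ro
        restart: unless-stopped

      gotify:
        image: gotify/server:2.4.0
        ports:
          - "8080:80"
        volumes:
          - ./gotify/data:/app/data
        restart: unless-stopped
    ```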

    For logs, I’m only concerned with container logs, so I use Dozzle for a quick view of what’s going on. I’m not concerned with keeping historical logs; I only care about real-time logs, since if there’s an ongoing issue I can troubleshoot it then and there, and that’s all I need. This also means I don’t need to store anything in terms of logs, or run a heavier log ingestion stack such as ELK, Graylog, or anything like that. Dozzle is nice and light and gives me everything I need.
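
    For reference, Dozzle itself is just one more container that needs read-only access to the Docker socket; the host port below is an arbitrary example:

    ```yaml
    # Dozzle as a Compose service (pin a specific tag in practice)
    services:
      dozzle:
        image: amir20/dozzle:latest
        ports:
          - "8888:8080"   # web UI
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
        restart: unless-stopped
    ```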

    When it comes to container updates, I just do them manually, whenever I feel like it. It’s generally frowned upon to reference the latest tag for a container image to pick up updates automatically, because of the risk of random breaking changes, and I personally feel the same holds true for automated update tools such as Watchtower. I like to keep my containers running a specific version of an image until I feel it’s time to see what’s new and try an update. I can then safely back up the persistent data, see if all goes well, and if not, do a quick rollback with minimal effort.
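
    In practice that workflow is only a handful of commands per stack. This is a hedged sketch with made-up service names, paths, and filenames rather than my exact routine:

    ```sh
    # back up the service's persistent data before touching anything
    mkdir -p ~/backups
    docker compose stop jellyfin
    tar czf ~/backups/jellyfin-config.tar.gz jellyfin/config

    # bump the pinned tag in docker-compose.yml, then pull and restart
    docker compose pull jellyfin
    docker compose up -d jellyfin

    # rollback path: revert the tag in docker-compose.yml, restore the data, restart
    docker compose stop jellyfin
    rm -rf jellyfin/config && tar xzf ~/backups/jellyfin-config.tar.gz
    docker compose up -d jellyfin
    ```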

    I used to think monitoring tools were cool, fun, and neat to show off (fancy graphs, right?), but I’ve since let go of that idea. I don’t have any monitoring set up besides Dozzle for logs (and it now shows some extra info such as memory and CPU usage, which is nice). In the past I’ve had Grafana, Prometheus, and some other tooling for monitoring, but I never ended up looking at any of it once it was up and “done” (this stuff is never really “done”, you know?). So I felt it was all a waste of resources that could be better spent actually serving my needs. At the end of the day, if I’m using my services and not having any trouble with anything, then it’s fine; I don’t need fancy graphs or metrics showing me what’s going on behind the curtain, because my needs are being served, which is the point, right?

    I do use Gotify for notifications, if you want to count that as monitoring, but that’s pretty much it.
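
    If anyone’s curious, sending yourself a notification through Gotify is just an HTTP POST with an application token, so it’s easy to call from any script. The URL and token below are obviously placeholders:

    ```sh
    # push a message to a Gotify server (placeholder URL/token)
    curl -s "https://gotify.example.com/message?token=YOUR_APP_TOKEN" \
      -F "title=Backup finished" \
      -F "message=Nightly rclone job completed without errors" \
      -F "priority=5"
    ```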

    I’m pretty proud of the fact that I’ve got such a cheap, low-powered little server compared to what most people who self-host likely have to work with, and that I’m able to run so many services on it without any performance issues that I myself can notice. Everything just works, and works very well. I can probably even add a bunch more services before I start seeing performance issues.

    At the moment I run about 50 containers across my stacks, supporting:

    • AdGuard Home
    • AriaNG
    • Bazarr
    • Certbot
    • Cloudflared
    • Cloudflare DDNS
    • Dataloader (custom service I wrote for ingesting data from a bunch of sources)
    • Dozzle
    • FileFlows
    • FileRun
    • Gitea
    • go-socks5-proxy
    • Gotify
    • Homepage
    • Invidious
    • Jackett
    • Jellyfin
    • Lemmy
    • Lidarr
    • Navidrome
    • Nginx
    • Planka
    • qBittorrent
    • Radarr
    • Rclone
    • Reactive-Resume
    • Readarr
    • Shadowsocks Server (Rust)
    • slskd
    • Snippet-Box
    • Sonarr
    • Teedy
    • Vaultwarden
    • Zola

    If you know what you’re doing and have enough knowledge across a variety of areas, you can squeeze a lot out of even the most basic, barebones hardware. Keeping things simple and tidy, and making effective use of DNS, caching, etc., goes a long way. Experience in full-stack development, infrastructure, and DevOps practices over the years really helped me in terms of knowing how to squeeze every last bit of performance out of this thing lol. I’ve definitely taken my multi-layer caching strategies to the next level, which is working really well. I want to do a write-up on it someday.
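
    To give a flavour of what I mean by a caching layer: this is just the standard Nginx proxy_cache pattern in front of a backend, not my exact config (hostnames, ports, and timings are placeholders):

    ```nginx
    # generic reverse-proxy cache in front of one backend (illustrative only)
    proxy_cache_path /var/cache/nginx/app levels=1:2 keys_zone=app_cache:10m
                     max_size=500m inactive=60m use_temp_path=off;

    server {
        listen 80;
        server_name app.example.com;

        location / {
            proxy_pass http://127.0.0.1:8096;
            proxy_cache app_cache;
            proxy_cache_valid 200 10m;                     # cache good responses briefly
            proxy_cache_use_stale error timeout updating;  # serve stale if the backend hiccups
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
    ```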