My Homelab Setup

(bryananthonio.com)

312 points | by photon_collider a day ago ago

209 comments

  • linsomniac a day ago ago

    >Because all of my services share the same IP address, my password manager has trouble distinguishing which login to use for each one.

    In Bitwarden you can configure the matching algorithm, and switching from the default to "starts with" is what I do when I find it matching the wrong entries. So for this case, just make sure the URL for the service includes the port number, and switch all items that are matching wrongly to "starts with". Though it does pop up a big scary "you probably didn't mean to do this" warning when you switch to "starts with"; it would be nice to be able to turn that off.
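    A toy sketch of why "starts with" helps when everything shares one IP (a hypothetical helper for illustration, not Bitwarden's actual matching code):

```python
from urllib.parse import urlsplit

def matches(entry_url: str, page_url: str, mode: str = "base_domain") -> bool:
    """Toy URI matcher illustrating base-domain vs starts-with modes."""
    if mode == "base_domain":
        # Compare only the last two labels of the hostname, so every
        # service on 192.168.1.10 collapses into the same "base domain".
        base = lambda u: ".".join((urlsplit(u).hostname or "").split(".")[-2:])
        return base(entry_url) == base(page_url)
    if mode == "starts_with":
        # Plain prefix match on the full URL, so the port disambiguates.
        return page_url.startswith(entry_url)
    raise ValueError(mode)

# Default mode can't tell two ports on one IP apart...
print(matches("http://192.168.1.10:8080/", "http://192.168.1.10:9090/"))                 # True
# ...but "starts with" (with the port in the saved URL) can.
print(matches("http://192.168.1.10:8080/", "http://192.168.1.10:9090/", "starts_with"))  # False
```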

    • PunchyHamster 18 hours ago ago

      Just giving them hostnames is easier.

      In homelab space you can also make wildcard DNS pretty easily in dnsmasq, assuming you also "own" your router. If not, hosts file works well enough.

      There's also the option of using mDNS for the same reason, but it's more setup.

      • overfeed 6 hours ago ago

        > Just giving them hostnames is easier

        Bitwarden annoyingly ignores subdomains by default. Enabling per-subdomain credential matching is a global toggle, which breaks autocomplete on other online services that allow you to log in across multiple subdomains.

        • danparsonson 5 hours ago ago

          You can override the matching method on an individual basis though, using the settings button next to the URL entry field.

        • rodolphoarruda 6 hours ago ago

          Tell me about it... that infinite Ctrl + Shift + L sequence cycling through all credentials from all subdomains. Then your brain betrays you, making you skip the right credential... ugh, now you'll cycle through the entire set again. Annoying.

        • freeplay an hour ago ago

          You can set that globally but override at the individual entry.

        • Groxx 4 hours ago ago

          Seriously? That sounds incredibly awful - my keepass setup has dozens of domain customizations, there's no way in hell you could apply any rule across the entire internet.

      • c-hendricks 17 hours ago ago

        How do I edit the hosts file of an iPhone?

        • nerdsniper 17 hours ago ago

          You don't have to if you use mDNS. Or configure the iPhone to use your own self-hosted DNS server which can just be your router/gateway pointed to 9.9.9.9 / 1.1.1.1 / 8.8.8.8 with a few custom entries. You would need to jailbreak your iPhone to edit the hosts file.

          • simondotau 14 hours ago ago

            I have a real domain name for my house. I have a few publicly available services and those are listed in public DNS. For local services, I add them to my local DNS server. For ephemeral and low importance stuff (e.g. printers) mDNS works great.

            For things like Home Assistant I use the following subdomain structure, so that my password manager does the right thing:

              service.myhouse.tld
              local.service.myhouse.tld
          • c-hendricks 4 hours ago ago

            Exactly, you don't. My qualm was with the "hosts file works well enough" claim of the person I responded to.

      • tehlike 11 hours ago ago

        This is what i do.

    • gerdesj 18 hours ago ago

      "Because all of my services share the same IP address"

      DNS. SNI. RLY?

      • tbyehl 7 hours ago ago

        Not to diminish having names for everything but that just shifts the Bitwarden problem to "All of my services share the same base domain."

      • sv0 11 hours ago ago

        That's a bit weird to read for me as well. DNS and local DNS were the first services I've been self-hosting since 2005.

        On Debian/Ubuntu, hosting a local DNS service is as easy as `apt-get install dnsmasq` and putting a few lines into `/etc/dnsmasq.conf`.
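        For example, a minimal `/etc/dnsmasq.conf` along those lines might look like this (the domain, IP, and interface name are examples, not anything from the thread):

```
# Answer anything under .home.lan with the server's LAN address (wildcard):
address=/home.lan/192.168.1.10
# Forward everything else to a public resolver:
server=1.1.1.1
# Only answer on the LAN interface:
interface=eth0
```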

        • merpkz 11 hours ago ago

          These modern-day homelabbers will do anything to avoid DNS; it seems like to them it's some kind of black magic where things will inevitably go wrong and all hell will break loose.

    • dpoloncsak an hour ago ago

      For my homelab, I set up a Raspberry Pi running Pi-hole. Pi-hole includes the ability to set local DNS records if you use it as your DNS resolver.

      Then, I use Tailscale to connect everything together. Tailscale lets you use a custom DNS server, which gets pointed to the Pi-hole. My phone blocks ads even when I'm away from the house, and I can hit any services or projects without exposing them to the general internet.

      Then I set up an NGINX reverse proxy, but honestly that might not be necessary.

    • predkambrij 16 hours ago ago

      One cool trick is having (public) subdomains pointing to the tailscale IP.

      • timwis 12 hours ago ago

        This is what I do. Works great! And my caddy setup uses the DNS mode to provision TLS certs (using my domain provider's caddy plugin).
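        For reference, that kind of setup is roughly this shape in a Caddyfile (the hostnames, backend address, and the Cloudflare DNS plugin are assumptions for illustration, not the commenter's actual config):

```
*.home.example.com {
	tls {
		# DNS-01 challenge via a DNS provider plugin (Cloudflare assumed here)
		dns cloudflare {env.CF_API_TOKEN}
	}
	# Route one hostname to one backend; repeat per service
	@jellyfin host jellyfin.home.example.com
	handle @jellyfin {
		reverse_proxy 192.168.1.10:8096
	}
}
```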

    • brownindian 20 hours ago ago

      Could also use Cloudflare tunnels. That way:

      1. your 1password gets a different entry each time for <service>.<yourdomain>.<tld>

      2. you get https for free

      3. Remote access without Tailscale.

      4. Put Cloudflare Access in front of the tunnel; now you have proper auth via Google or GitHub.

      • lukevp 19 hours ago ago

        You can also use cloudflare to create a dns record for each local service (pointed to the local IP) and just mark it as not proxied, then use Wireguard or Tailscale on your router to get VPN access to your whole network. If you set up a reverse proxy like nginx proxy manager, you can easily issue a wildcard cert using DNS validation from your NAS using ACME (LetsEncrypt). This is what I do, and I set my phone to use Wireguard with automatic VPN activation when off my home WiFi network. Then you’re not limited by CF Tunnel’s rules like the upload limits or not being able to use Plex.

        • organsnyder 2 hours ago ago

          This is exactly what I do. I have a few operators set up in k8s that handle all of this with just a couple of annotations on the Ingress resource (yeah, I know I need to migrate to Gateway). For services I want to be publicly-facing, I can set up a Cloudflare tunnel using cloudflare-operator.

        • sylens 3 hours ago ago

          This is the way

        • johnmaguire 17 hours ago ago

          Yup doing this with Caddy and Nebula, works great!

      • QGQBGdeZREunxLe 19 hours ago ago

        Tunnels go through Cloudflare infrastructure so are subject to bandwidth limits (100MB upload). Streaming Plex over a tunnel is against their ToS.

        • miloschwartz 18 hours ago ago

          Pangolin is a good solution to this because you can optionally self-host it which means you aren't limited by Cloudflare's TOS / limits.

          • somehnguy 3 hours ago ago

            Also achievable with Tailscale. All my internal services are on machines with Tailscale. I have an external VPS with Tailscale & Caddy. Caddy is functioning as a reverse proxy to the Tailscale hosts.

            No open ports on my internal network, Tailscale handles routing the traffic as needed. Confirmed that traffic is going direct between hosts, no middleman needed.

          • arvid-lind 5 hours ago ago

            Another vote for Pangolin! Been using it for a month or so to replace my Cloudflare tunnels and it's been perfect.

      • mvdtnz 19 hours ago ago

        Yeesh, the last thing I want is remote access to my homelab.

    • techcode 21 hours ago ago

      Set up AdGuard Home for both blocking ads and internal/split DNS, plus Caddy or another reverse proxy, and buy (or recycle/reuse) a domain name so you can get SSL certificates through Let's Encrypt.

      You don't need to have any real/public DNS records on that domain; just own the domain so Let's Encrypt can verify it and give you SSL certificate(s).

      You set up local DNS rewrites in AdGuard and point all the services/subdomains to your home server's IP; Caddy (or similar) on that server routes each to the correct port/container.

      With Tailscale or similar, you can also configure all Tailscale clients to use your AdGuard as DNS, so this can work even outside your home.

      That's how I have e.g.: https://portainer.myhome.top https://jellyfin.myhome.top ...etc...

    • dewey a day ago ago

      This always annoys me with 1Password. Before, I just added subdomains, but now I'm usually hosting everything behind Tailscale, which makes this problem even worse as the only differentiation is the port.

      • domh a day ago ago

        You can use tailscale services to do this now:

        https://tailscale.com/docs/features/tailscale-services

        Then you can access stuff on your tailnet by going to http://service instead of http://ip:port

        It works well! Only thing missing now is TLS

        • avtar a day ago ago

          This would be perfect with TLS. The docs don't make this clear...

          > tailscale serve --service=svc:web-server --https=443 127.0.0.1:8080

          > http://web-server.<tailnet-name>.ts.net:443/ > |-- proxy http://127.0.0.1:8080

          > When you use the tailscale serve command with the HTTPS protocol, Tailscale automatically provisions a TLS certificate for your unique tailnet DNS name.

          So is the certificate not valid? The 'Limitations' section doesn't mention anything about TLS either:

          https://tailscale.com/docs/features/tailscale-services#limit...

          • domh 8 hours ago ago

            I think maybe TLS would work if you were to go to https://service.yourts.net domain, but I've not tried that.

          • nickdichev 7 hours ago ago

            It works, I’m using tailscale services with https

            • avtar 3 hours ago ago

              Thanks for clarifying :) I'll try it out this weekend.

      • altano 12 hours ago ago

        In the 1Password entry, go to the "website" item. To the right there's an "autofill behavior" button. Change it to "Only fill on this exact host" and it will no longer show up unless the full host matches exactly.

        • oarsinsync 3 hours ago ago

          Is this a per-item behaviour or can this be set as a global default?

          I'm guessing this is 1Password 8 only, as I can't see this option in 1Password 7.

          • vladvasiliu 2 hours ago ago

            I've looked in the settings on 1p8, and didn't find a setting for a global default.

        • jorvi 4 hours ago ago

          Not entirely true. It can't seem to distinguish between ports.

          • mhurron 3 hours ago ago

            because ports don't indicate a different host.

        • karlshea 4 hours ago ago

          Omg thank you, I had no idea they added this feature!

      • miloschwartz 18 hours ago ago

        Pangolin handles this nicely. You can define alias addresses for internal resources and keep them fully private and off the public internet. It's also based on WireGuard, like Tailscale.

      • wrxd a day ago ago

        You can still have subdomains with Tailscale. Point them at the tailscale IP address and run a reverse proxy in front of your services

        • dewey a day ago ago

          Good point, but for simplicity I'd still like 1Password to use the full hostname + port as the primary key and not just the hostname.

      • zackify a day ago ago

        tailscale serve --bg 4000

        Problem solved ;)

    • m463 14 hours ago ago

      or just use the same password for everything. ;)

      • ozim 10 hours ago ago

        If it's like 12 characters, non-dictionary, and a password you use only in your homelab, that seems perfectly fine.

        If you expose something by mistake, it should still be fine.

        The big problem with PW reuse is using the same one for very different systems run by different operators, whom you can't trust not to keep your PW in plaintext or not to get hacked.

    • lloydatkinson a day ago ago

      I wonder why each service doesn’t have a different subdomain.

      • cortesoft 20 hours ago ago

        That's what I do, but you still have to change the default Bitwarden behavior to match on host rather than base domain.

        Matching on base domain as the default was surprising to me when I started using Bitwarden... treating subdomains as the same seems dangerous.

        • akersten 17 hours ago ago

          It's probably a convenience feature. Tons of sites out there that start on www then bounce you to secure2.bank.com then to auth. and now you're on www2.bank.com and for some inexplicable reason need to type your login again.

          Actually it's mostly financial institutions that I've seen this happen with. Have to wonder if they all share the same web auth library that runs on the Z mainframe, or there's some arcane page of the SOC2 guide that mandates a minimum of 3 redirects to confuse the man in the middle.

      • tylerflick a day ago ago

        This is the way. You can even do it with mDNS.

    • harrygeez 4 hours ago ago

      Not really a solution (as others have pointed out already), but it also tells me you're missing a central identity provider (think Microsoft account login). You can try deploying Kanidm for a really simple and lightweight one :)

    • photon_collider 21 hours ago ago

      Ah nice! Didn’t know that. I’ll try that out next time.

  • acidburnNSA a day ago ago

    I have something like this, in the same case. I have beefier specs b/c I use it as a daily workstation in addition to running all my stuff.

    * nginx with letsencrypt wildcard so I have lots of subdomains

    * No tailscale, just pure wireguard between a few family houses and for remote access

    * Jellyfin for movies and TV, serving to my Samsung TV via the Tizen jellyfin app

    * Mopidy holding my music collection, serving to my home stereo and numerous other speakers around the house via snapcast (raspberry pi 3 as the client)

    * Just using ubuntu as the os with ZFS mirroring for NAS, serving over samba and NFS

    * Home assistant for home automation, with Zigbee and Z-wave dongles

    * Frigate as my NVR, recording from my security cams, doing local object detection, and sending out alerts via Home Assistant

    * Forgejo for my personal repository host

    * tar1090 hooked to a SDR for local airplane tracking (antenna in attic)

    This all pairs nicely with my two OpenWrt routers, one being the main one and the other a dumb AP, connected via a hardwired trunk line with a bunch of VLANs.

    Other things in the house include an iotawatt whole-house energy monitor, a bunch of ESPs running holiday light strips, indoor and outdoor homebrew weather stations with laser particulate sensors and CO2 monitors (alongside the usual sensors), a water-main cutoff (zwave), smart bulbs, door sensors, motion sensors, sirens/doorbells, and a thing that listens for my fire alarm and sends alerts. Oh and I just flashed the pura scent diffuser my wife bought and lobotomized it so it can't talk to the cloud anymore, but I can still automate it.

    I love it and have tons of fun fiddling with things.
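    The nginx + wildcard-cert pattern in the first bullet looks roughly like this per service (the hostnames, cert paths, and Jellyfin port are illustrative guesses, not the author's actual config):

```
server {
    listen 443 ssl;
    server_name jellyfin.home.example.com;

    # One wildcard cert for *.home.example.com, issued via a DNS-01 challenge
    ssl_certificate     /etc/letsencrypt/live/home.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/home.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
    }
}
```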

    • VladVladikoff 17 hours ago ago

      For anyone considering this: it's not a good plan to do it this way. If you have any family members relying on these services, you have to kill them all every time you reboot your workstation. It's really not great to mix desktop and server like this. (Speaking from experience; I really need to get a separate box set up for this self-hosted stuff.)

      • zem 14 hours ago ago

        > if you have any family members relying on these services, you have to kill them all every time you reboot your workstation

        yikes!

        • Pooge 12 hours ago ago

          Yeah I can't imagine killing my family members every time I'm shutting down my computer

          • altano 12 hours ago ago

            It's better than having to hear them complain every time plex goes down

          • giwook 4 hours ago ago

            And yet sometimes you just need to pull the plug.

      • bjackman 8 hours ago ago

        You are always gonna have some downtime in a homelab setup I think. Unless you go all in with k8s I think the best you can do is "system reboots at 4AM, hopefully all the users are asleep".

        (Probably a lot of the services I run don't even really support HA properly in a k8s system with replicas. E.g. taking global exclusive DB locks for the lifetime of their process)

        • embedding-shape 4 hours ago ago

          > You are always gonna have some downtime in a homelab setup I think. Unless you go all in with k8s I think the best you can do is "system reboots at 4AM, hopefully all the users are asleep".

          Huh, why? I have a homelab, I don't have any downtime except when I need to restart services after changing something, or upgrading stuff, but that happens what, once every month in total, maybe once every 6 months or so per service?

          I use systemd units + NixOS for 99% of the stuff, not sure why you'd need Kubernetes at all here, only serves to complicate, not make things simple, especially in order to avoid downtime, two very orthogonal things.

          • bjackman 3 hours ago ago

            > I don't have any downtime except when I need to restart services

            So... you have downtime then.

            (Also, you should be rebooting regularly to get kernel security fixes).

            > not sure why you'd need Kubernetes at all here

            To get HA, which is what we are talking about.

            > only serves to complicate

            Yes, high-availability systems are complex. This is why I am saying it's not really feasible for a homelabber, unless we are k8s enthusiasts I think the right approach is to tolerate downtime.

            • embedding-shape 3 hours ago ago

              > So... you have downtime then.

              5 seconds of downtime as you change from port N to port N+1 is hardly "downtime" in the traditional sense.

              > To get HA, which is what we are talking about.

              Again, not related to Kubernetes at all, you can do it easier with shellscripts, and HA !== orchestration layer.

        • furst-blumier 6 hours ago ago

          I run my stuff in a local k8s cluster and you are correct, most stuff runs as replica 1. DBs actually don't because CNPG and mariadb operator make HA setups very easy. That being said, the downtime is still lower than on a traditional server

      • ryukoposting 3 hours ago ago

        It's also worth noting you don't need sophisticated hardware to run anything listed in the parent comment. 8GB of RAM and a Celeron would be adequate. More RAM might be nice if you use the NAS a lot.

    • wbjacks 19 hours ago ago

      Have you tried using snapcast to broadcast sound from your Samsung tv? I gave it a shot and could never get past the latency causing unacceptable A/V delay, did you have any luck?

    • pajamasam a day ago ago

      Impressive that all that can run on one machine. Mind sharing the specs?

      • c-hendricks a day ago ago

        I run similar (gitea, scrypted+ffmpeg instead of frigate, plex instead of jellyfin) plus some Minecraft servers, *arr stack, notes, dns, and my VM for development.

        It's an i7-4790k from 12 years ago, it barely breaks a sweat most hours of the day.

        It's not really that impressive, or (not to be a jerk) you've overestimated how expensive these services are to run.

        • hypercube33 20 hours ago ago

          Video is usually offloaded to the iGPU on these, too. I have like 13 VMs running on an AMD 3400G with 32GB.

        • pajamasam a day ago ago

          Fair enough. How much RAM though?

          • decryption 21 hours ago ago

            16GB would be plenty. I've got like a dozen services running on an 8GB i7-4970 and it's only using 5GB of RAM right now.

            • shiroiuma 11 hours ago ago

              If you're running ZFS, it's advisable to use more RAM. ZFS is a RAM hog. I'm using 32GB on my home server.

              • renehsz 4 hours ago ago

                ZFS doesn't really need huge amounts of RAM. Most of the memory usage people see is the Adaptive Replacement Cache (ARC), which will happily use as much memory as you throw at it, but will also shrink very quickly under memory pressure. ZFS really works fine with very little RAM (even less than the recommended 2GB), just with a smaller cache and thus lower performance. The only exception is if you enable deduplication, which will try to keep the entire Deduplication Table (DDT) in memory. But for most workloads, it doesn't make sense to enable that feature anyways.
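                (If you do want to bound the ARC explicitly on Linux, it's a single OpenZFS module parameter; the 4 GiB figure is just an example:)

```
# /etc/modprobe.d/zfs.conf -- cap the ARC at 4 GiB (4 * 2^30 bytes)
options zfs zfs_arc_max=4294967296
```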

              • hombre_fatal 5 hours ago ago

                That + full-disk encryption is why I went with BTRFS inside LUKS for my NAS.

                They recommend 1GB RAM per 1TB storage for ZFS. Maybe they mean redundant storage, so even 2x16TB should use 16GB RAM? But it's painful enough building a NAS server when HDD prices have gone up so much lately.

                The total price tag already feels like you're about to build another gaming PC rather than just a place to back up your machines and serve some videos. -_-

                That said, you sure need to be educated on BTRFS to use it in fail scenarios like degraded mode. If ZFS has a better UX around that, maybe it's a better choice for most people.

                • renehsz 4 hours ago ago

                  1GB RAM per 1TB storage is really only required if you enable deduplication, which rarely makes sense.

                  Otherwise, the only benefit more RAM gets you is better performance. But it's not like ZFS performs terribly with little RAM. It's just going to more closely reflect raw disk speed, similar to other filesystems that don't do much caching.

                  I've run ZFS on almost all my machines for years, some with only 512MiB of RAM. It's always been rock-solid. Is more RAM better? Sure. But it's absolutely not required. Don't choose a different file system just because you think it'll perform better with little RAM. It probably won't, except under very extreme circumstances.

          • c-hendricks 18 hours ago ago

            32gb for me because half of that is given to the development VM

      • acidburnNSA 7 hours ago ago

        Ryzen 5950X CPU, 64 GB ECC RAM, dual 16 TB drives for ZFS, Nvidia 5070 GPU.

        Way, way overspecced for what I listed, but I use it for lots of video processing, numerical simulations, and some local AI too.

        I have a similar subset of this stuff running at my mom's house on a 16 GB RAM Beelink minicomputer. With OpenVINO, Frigate can still do fully local object detection on the security cams, which is sweet.

      • drnick1 a day ago ago

        Not impressive at all. I run just about as many services, plus several game servers, on a Ryzen 5, and most of the time CPU usage is in the low single digits. Most stuff is idle most of the time. Something like a Home Assistant instance used by a single household is basically costless to run in terms of CPU.

        • pajamasam 21 hours ago ago

          Not costless in terms of RAM though, surely?

          • embedding-shape 4 hours ago ago

            Ultimately, basically. I have two servers in my homelab. One is more beefy and hosts a bunch of stuff (basically everything the parent outlined, plus more), including a DHT crawler, download clients, indexers, databases and a lot more; it's sitting at 16GB used (out of an available 126GB) right now. Then I have another which only runs the security system + Frigate + Home Assistant; it's using 2.3GB out of 32GB available.

          • drnick1 18 hours ago ago

            Web apps like Home Assistant are very light, things like game servers are heavier since they have to load maps etc.

      • cyberpunk a day ago ago

        You could easily run all of that on a rpi…

        • tclancy a day ago ago

          No, you definitely can't. Or at least, not on a 3B+. I wound up buying https://www.amazon.com/ACEMAGICIAN-M1-Computers-Computer-3-2... which was $50 less a month ago (!!) because so many things don't fit well. Immich is amazing, but you wouldn't get a lot of the coolness of it if you can't run the AI bits, which are quite heavy.

      • TacticalCoder 19 hours ago ago

        > Impressive that all that can run on one machine. Mind sharing the specs?

        Not GP, but I have lots of fun running VMs and lots of containers on an old HP Z440 workstation from 2014 or so. This thing has 64 GB of ECC RAM and costs next to nothing (a bit more now that RAM has gone up). Thing is: it doesn't need to be on 24/7. I only power it up when I first need it during the day. A 14-core Xeon for lots of fun.

        Only thing I haven't moved to it yet is Plex, which still runs on a very old HP Elitedesk NUC. Dunno if Plex (and/or Jellyfin) would work fine on an old Xeon: but I'll be trying soon.

        Before that I had my VMs and containers on a core i7-6700K from 2015 IIRC. But at some point I just wanted ECC RAM so I bought a used Xeon workstation.

        As someone commented: most services simply do not need that beefy of a machine. Especially not when you're strangled by a 1 Gbit/s Internet connection to the outside world anyway.

        For compilation and overall raw power, my daily workstation is a more powerful machine. But for a homelab: old hardware is totally fine (especially if it's not on 24/7 and I really don't need access to my stuff when I sleep).

        • leptons 16 hours ago ago

          Cheap to buy old hardware, but electricity to run those old rigs isn't really cheap in many areas now. My server is costing me about $100/month in electricity costs.

          It does have 16 spinning disks in it, so I accept that I pay for the energy to keep them spinning 24/7, but I like the redundancy of RAID10, and I have two 8-disk arrays in the machine. And a Ryzen-7 5700G, 10gbit NIC, 16 port RAID card, and 96GB of RAM.

          • matja 5 hours ago ago

            How have you measured the power usage/cost? That seems like an incredibly high price for electricity, similar to a 600W constant load in my part of the world.

          • gessha 6 hours ago ago

            I’ve been watching some storage and homelab-themed videos and I heard there’s a lot of optimizations you can do to lower power usage - spinning the disks down, turning the machine on for a limited time, etc.

          • shellwizard 12 hours ago ago

            It depends on the type of hardware that you use for your server. If it's really server-grade, you're totally right. For example, X99 memory+CPU+motherboard combos off AliExpress are cheap, but they're not very efficient.

            In my case I fell in love with the tiny/mini/micros and have a refurbished Lenovo M710q running 24/7, using only 5W when idling. I know it doesn't support ECC memory or more than 8 threads, but for my use case it's more than enough.

  • xoa a day ago ago

    I'll admit I've stuck with the original FreeBSD-based TrueNAS, and I'm still kinda bummed they swapped it. So it's interesting to see a direct example of someone for whom the new Linux-based version is clearly superior.

    I'm long since far, far more at the "self-hosted" end of the self-hosted vs "homelab" spectrum at this point, and in turn have ended up splitting my roles back out again rather than running all-in-one boxes. My NAS is just a NAS, my virtualization is done via Proxmox on separate hardware with storage backing to the NAS via iSCSI, and I've got a third box for OPNsense to handle the routing functions.

    When I first compared, the new TrueNAS was slower (presumably it's at parity or better now?) and missing certain things from the old one, but it already made it much easier to run Synology- or Docker-style "apps" all in one. That didn't interest me because I didn't want my NAS to have any duty but being a NAS, but I can see how it'd be far more friendly to someone getting going, or to many small-business setups. A sort of better, truly open and supported "open Synology" (as opposed to the xpenology project).

    Clearly it's worked for them here, and I'm happy to see it. Maybe the bug will truly bite them but there's so much incredibly capable hardware now available for a song and it's great to see anyone new experiment with bringing stuff back out of centralized providers in an appropriately judicious way.

    Edit: I'll add as well that this is one of those happy things that builds on itself. As you develop infrastructure, the marginal cost of doing new things drops. Like, if you already have a cheap managed switch and your own router set up, whatever it is, then when you do something like the author describes you can give all your services IPs and DNS, reverse proxy them, put different things on their own VLANs and start doing network isolation that way, etc. for "free". The bar to giving something new a shot drops. So I don't think there is any wrong way to get into it; it's all helpful. And if you don't have previous ops or old sysadmin experience, the various snags you solve along the way all build knowledge and skills to solve new problems that arise.

    • ryandrake 20 hours ago ago

      One of the most helpful realizations I had as I played around with self-hosting at home is that there is nothing magical about a NAS. You don't need special NAS software. You generally don't need wild filesystems, or containers or VMs or this-manager or that-webui. Most people just need Linux and NFS. Or Linux and SMB. And that's kind of it. The more layers running, the more that can fail.
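      An entire "NAS" in that spirit can be two lines of `/etc/exports` (the paths and subnet are examples):

```
# Read-only media share and a writable backup target for the LAN
/srv/media   192.168.1.0/24(ro,no_subtree_check)
/srv/backup  192.168.1.0/24(rw,sync,no_subtree_check)
```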

      Just like you don't really need the official Pi-hole software. It's a wrapper around dnsmasq, so you really just need dnsmasq.

      A habit of boiling your application down to the most basic needs is going to let you run a lot more on your lab and do so a lot more reliably.

      • rpcope1 18 hours ago ago

        Kind of expanding on this: it feels like a huge chunk of specialized operating systems are just someone putting their own skin over Debian. The vast majority of services and tools they wrap aren't any more complicated than the wrapper.

        Hardware is kind of the same deal; you can buy weird specialty "NAS hardware" but it doesn't do well with anything offbeat, or you can buy some Supermicro or Dell kit that's used and get the freedom to pick the right hardware for the job, like an actual SAS controller.

        • dizhn 7 hours ago ago

          There are exceptions to this such as Proxmox which can actually be added to an existing Debian install. I must admit that when I first encountered it I didn't expect much more than a glorified toy. However it is so much more than that and they do a really good job with the software and the features. If anybody is on the fence about it I recommend giving it a go. If you do, I recommend using the ISO to install, pick ZFS as the filesystem (much much more flexible), and run pbs (proxmox backup server) somewhere (even on the same box as an lxc host with zfs backed dir).

        • shiroiuma 11 hours ago ago

          >it feels like a huge chunk of specialized operating systems are just someone just putting their own skin over Debian. The vast majority of services and tools they wrap aren't any more complicated than the wrapper.

          That's exactly what TrueNAS is these days: it's Debian + OpenZFS + a handy web-based UI + some extra NAS-oriented bits. You can roll your own if you want with just Debian and OpenZFS if you don't mind using the command line for everything, or you can try "Cockpit".

          The nice thing about TrueNAS is that all the ZFS management stuff is nicely integrated into the UI, which might not be the case with other UIs, and the whole thing is set up out-of-the-box to do ZFS and only ZFS.

      • globular-toast 20 hours ago ago

        Same with a router. Any Linux box with a couple of (decent) NICs is a powerful router. You just need to configure it.

        But for my own sanity I prefer out of the box solutions for things like my router and NAS. Learning is great but sometimes you really just need something to work right now!

    • lostlogin a day ago ago

      > splitting my roles back out again more

      The fiasco you can cause when you try to fix, update, or change things makes this my favourite too.

      Household life is generally in some form of ‘relax’ mode in evening and at weekends. Having no internet or movies or whatever is poorly tolerated.

      I wish Apple was even slightly supportive of servers and Linux as the mini is such a wicked little box. I went to it to save power. Just checked - it averaged 4.7w over the past 30 days. It runs Ubuntu server in UTM which notably raises power usage but it has the advantage that Docker desktop isn’t there.

      • xoa a day ago ago

        >The fiasco you can cause when you try fix, update, change etc makes this my favourite too.

        I think some of the difference between "self-hosted" vs "homelab" is in the answer to the question of "What happens if this breaks end of the day Friday?" An answer of "oh merde of le fan, immediate evening/weekend plans are now hosed" is on the self-hosted end of the spectrum, whereas "eh, I'll poke at it on Sunday when it's supposed to be raining or sometime next week, maybe" is on the other end. Does that make sense? There are a few pretty different ways to approach making your setup reliable/redundant but I think throwing more metal at the problem features in all of them one way or another. Plus if someone moves up the stack it can simply be a lot more efficient and performant; the sort of hardware suited for one role isn't necessarily as well suited for another, and trying to cram too much into one box may result in something worse AND more expensive than breaking out a few roles.

        But probably a lot of people who ended up doing more hosting started pretty simple, dipping their toes in the water, seeing how it worked out and building confidence. And having everything virtualized on a single box is a pretty easy and highly flexible way to get going and experiment. Also, a ZFS backing makes "reset/rollback world" quite straightforward with minimal understanding, given you can just use the same snapshot mechanism for that as you do for all other data. Issues with circular dependencies and the like, or what happens if things go down when it's not convenient for you to be around in person, don't really matter that much. I think anything that lowers the barrier to entry is good.
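
        That snapshot-based "reset world" is just a couple of commands (pool/dataset names are hypothetical):

        ```sh
        zfs snapshot -r tank/vms@pre-experiment   # checkpoint the whole subtree
        # ...try the risky upgrade, break things...
        zfs rollback -r tank/vms@pre-experiment   # return to the checkpoint
        ```

        One caveat: rollback -r destroys any snapshots taken after the one you roll back to.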

        Of course, someone can have some of each too! Or be somewhere along the spectrum, not at one end or another.

        • lostlogin a day ago ago

          > And having everything virtualized on a single box is a pretty easy and highly flexible way get going and experiment. Also if it's on a ZFS backing makes "reset/rollback world" quite straight forward with minimal understanding given you can just use the same snapshot mechanism for that as you do for all other data.

          Docker-compose isn’t a backup, but from a fresh ubuntu server install, it’ll have me back in 20 mins. Backing up the entire VM isn’t too hard either.

          I was in a really sweet spot and then ESXi became intolerable. Though in fairness their website was always pure hell.

    • vermaden a day ago ago

      I also regret that change.

      Big downgrade after moving to Linux:

      - https://vermaden.wordpress.com/2024/04/20/truenas-core-versu...

    • photon_collider 17 hours ago ago

      Fair point! When I first started on this I went down a deep rabbit hole exploring all the ways I could set this up. Ultimately, I decided to start simple with hardware that I had laying around.

      I definitely will want to have a dedicated NAS machine and a separate server for compute in the future. Think I'll look more into this once RAM prices come back to normal.

    • PunchyHamster a day ago ago

      There was just not a good reason to stay with BSD, especially with the NAS -> homeserver evolution.

      Really, we should rename that kind of device to HSSS (Home Service Storage Server)

    • globular-toast 20 hours ago ago

      I'm similar to you[0]. I still run FreeBSD TrueNAS, and it's just a NAS. Although I do run the occasional VM on it as the box is fairly overprovisioned. I run all my other stuff on an xcp-ng box. I'm a little more homelab-y as I do run stuff on a fairly pointless kubernetes cluster, but it's for learning purposes.

      I really prefer storage just being storage. For security it makes a lot of sense. Stuff on my network can only access storage via NFS. That means if I were to get malware on my network and it corrupted data (like ransomware), it won't be able to touch the ZFS snapshots I make every hour. I know TrueNAS is well designed and they are using Docker etc, but it still makes me nervous.
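
      The hourly-snapshot part can be as simple as a cron entry, since NFS clients have no way to touch snapshots on the server (pool/dataset names are hypothetical):

      ```sh
      # /etc/cron.d/zfs-hourly  (note: % must be escaped in crontabs)
      0 * * * * root /sbin/zfs snapshot tank/nfs@hourly-$(date +\%F-\%H)
      ```

      Tools like sanoid or zfs-auto-snapshot add retention and pruning on top of this.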

      I guess when I finally have to replace my NAS I'll have to go Linux, but it'll still be just a NAS for me.

      [0] https://blog.gpkb.org/posts/homelab-2025/

  • polairscience a day ago ago

    A lot of people are talking about their backup storage solutions in here, but it's mostly about corporate cloud providers. I'm curious if anyone is going more rogue with their solution and using off-prem storage at a friend's house.

    Which is to say, hardware is cheap, software is open, and privacy is very hard to come by. Thus I've been thinking I'd like to not use cloud providers and just keep a duplicate system at a friends, and then of course return the favor. This adds a lot of privacy and quite a bit of redundancy. With the rise of wireguard (and tailscale I suppose), keeping things connected and private has never been easier.

    I know that leaning on social relationships is never a hot trend in tech circles but is anyone else considering doing this? Anyone done it? I've never seen it talked about around here.

    • nsbk a day ago ago

      My off-prem backups are in a Tailscale connected NAS at my parent's house. I'm in the process of talking a friend into having Tailscale configured to host more off-prem backups at his place as well. I'm moving out of iCloud for photo library management and into Immich. I really don't want to lose my photos and videos hence the off-prem backups. Tailscale has been a blessing for this kind of use case

      • polairscience a day ago ago

        Oooo. That's the other thing I need to figure out, because it's 90% for my photography. How have you liked immich? Have you tried any other options?

        • neop1x 4 hours ago ago

          I can also recommend Ente. It is pretty polished. Go-based backend using Postgres DB, Flutter-based android version, React-based web frontend (electron for desktop).

        • Root_Denied 16 hours ago ago

          I'm in the process of moving all my backups to Immich - honestly it's best in class software.

          I'm able to set it up so that my SO and I can view all the pictures taken by the other (mostly cute photos of our dog and kid, but makes it easier to share them with others when we don't have to worry about what device they're on), have it set to auto-backup, and routed through my VPS so it's available effectively worldwide.

          The only issue that I run into is a recent one, which is hard drive space - I've got it on a NAS/RAID setup with backups sent to another NAS at my parents' place, but it's an expensive drive replacement in current market conditions.

        • michelsedgh 11 hours ago ago

          I recommend Ente Photos; harder to set up but it feels much more robust and it's end-to-end encrypted, which I prefer.

    • nine_k a day ago ago

      > hardware is cheap

      Hardware was cheap a year ago. Whoever managed to build their boxes full of cheap RAM and HDDs, great, they did the right thing. It will be some time until such an opportunity presents itself again.

    • bluedino 5 hours ago ago

      > I'm curious if anyone is going more rogue with their solution and using off-prem storage at a friend's house.

      Have been doing this for 25 years.

      If you have asymmetrical connections it's easiest to do the initial backup locally and then take your drive(s) to your friends house and then just sync/update.
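
      The seed-then-sync workflow might look roughly like this (paths and hostname are examples):

      ```sh
      # 1) full copy locally while the drive is still on your LAN
      rsync -aH --info=progress2 /srv/backups/ /mnt/seed-drive/

      # 2) once the drive lives at your friend's place, only deltas cross the WAN
      rsync -aH --delete /srv/backups/ friend-host:/mnt/seed-drive/
      ```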

    • Evan-Purkhiser 20 hours ago ago

      I do something like this! I’m based in NY but my dad’s in LA. I put together an rpi5 + 5xSATA hat with 3 10TB WD red drives using raidz1 (managed to pick these up over the holidays before prices started going up, $160 per drive!). 3D printed the case and got it running a diskless alpine image with tailscale and zrepl for ZFS snapshot replication. Just left it running in a corner at his place and told him not to touch it heh

      Whole thing cost around $500. Before that I was paying ~$35 a month for a Google workspace with 5TB of drive space. At one point in the past it was “unlimited” space for $15 a month. Figure the whole thing will pay for itself in the next couple of years.

      Actually just finished the initial replication of my 10TB pool. I ran into a gnarly situation where zrepl blew away the initial snapshot on the source pool just after it finished syncing, and I ended up having to patch in a new fake “matching” snapshot. I had claude write up a post here, if you’ll excuse the completely AI generated “blog post”, it came up with a pretty good solution https://gist.github.com/evanpurkhiser/7663b7cabf82e6483d2d29...

    • Jedd 18 hours ago ago

      Yes, absolutely. I move between two sites, and also run some gear at my sibling's home, so I have the 3 separate sites thing sorted. ECC + RAID1 + borg at each site gives archival capability on top of standard backup.

      Syncthing has the 'untrusted peer' feature, which I've only used once, accidentally, but I believe provides an elegant way of providing some disk for a friend while maintaining privacy of the content.

    • mtsolitary a day ago ago

      I get 3-2-1 backups with no "big cloud" dependency using - My Mac - My NAS (RAID1) using Syncthing - Incremental borg backups to rsync.net (geo-redundant plan) with a cron job.
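
      The borg leg of that can be a single cron line (repo URL, paths, and schedule are assumptions):

      ```sh
      # /etc/cron.d/borg-offsite
      0 3 * * * me borg create --stats ssh://me@me.rsync.net/./backups::'{hostname}-{now}' /home/me
      ```

      A companion job running borg prune / borg compact keeps the repo size bounded.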

  • freetonik a day ago ago

    The author uses Restic + Backblaze B2 storage. I was recently setting up backups for my homebase as well, and went with Restic + BorgBase [0]. Not affiliated, just wanted to share that I think they have a nice service with a straight-forward pricing model. They are the company behind excellent Pikapods [1], which may be interesting to the homelab crowd.

    [0] https://www.borgbase.com

    [1] https://www.pikapods.com

    • natterangell a day ago ago

      I also use backrest/restic on my NAS, but I went with a Hetzner StorageBox instead, a little cheaper for 1TB (I pay 5USD monthly including VAT, billed monthly too).

      • reddalo 20 hours ago ago

        Me too, I highly recommend Hetzner Storage Box. It's cheap, and it works great (unlike their S3-compatible storage, which has been a huge fiasco since they launched it).

        • bluehatbrit 20 hours ago ago

          Could you elaborate on the issues with their S3 compatible storage? I've been considering it and haven't seen too many issues in my testing, beyond the lack of identity control.

          • lowdude 8 hours ago ago

            I cannot say much about the quality, but I am also testing around with it at the moment. As for the identity control, you may be able to achieve this with a few extra steps, if you set up bucket policies for the credentials. For this, it would be a bit cleaner to move the storage box to a project of its own.

            I still have to check if this actually works in practice, but I am hopeful. I based it off their documentation here: https://docs.hetzner.com/storage/object-storage/faq/s3-crede...

          • reddalo 5 hours ago ago

            If you look at the Hetzner status page, you'll always see their status about degraded performance for Object Storage: https://status.hetzner.com/

            The main problem is that it sometimes slows down to a crawl, or requests fail altogether.

  • xandrius a day ago ago

    One thing to consider before doing the same: a computer built for homelab use has much lower power consumption.

    The setup mentioned in the article averages 600 kWh/year, as opposed to a pretty solid HP EliteDesk (my own homelab) which uses 100 kWh/year. Sure, you don't get a GPU, but for what it's used for you might as well use a laptop.

    • firecall 16 hours ago ago

      One reason to repurpose desktops is that you get a full ATX Motherboard with SATA ports!

      If you are doing a DIY NAS with HDDs then you want real SATA ports, or a well-supported PCI card with SATA ports, which you can't sensibly connect to a laptop or micro PC. Sure, you might be able to use Thunderbolt to reliably hook up an external PCI chassis, but then you might as well buy a NAS or use a full tower case with an ATX mobo!

      Using an older Gaming PC you already have is actually a very good option for TrueNAS or OMV.

      I took an older 10th Gen Intel Gaming PC we had, sold the core i9 CPU, and replaced it with an i7-10700T I found used on eBay.

      I'm finding this setup to be better for my needs than various ex-lease Dell Micro PCs I've used in the past, mainly because of the reliability of the SATA ports.

      I've found quality external Samsung T5 SSDs to be very reliable over USB with TrueNAS. But HDDs are a nightmare over USB for a NAS, in my experience.

      I was hoping this might be the year that I can finally get rid of the spinning rust. But looks like AI data centres had other ideas! :-)

      However, I will say that if you just want to run some virtualized Linux servers or similar, then ex-lease micro PCs are a fantastic deal and can be fun to setup and learn Proxmox and Truenas etc..

      • bpye 12 hours ago ago

        You can definitely get PCIe on some micro PCs. I have a Lenovo m920q that I use with a Mellanox NIC as my router.

        You could certainly install a SAS or SATA controller, the issue would be having somewhere to mount the drives, and a way to power them. External SAS enclosures are not cheap.

      • sambf 10 hours ago ago

        M.2 SATA cards are also a thing, I repurposed a NUC in a SuperMicro (SYS-521R-T) mini tower server with 4 drives and it works great.

    • predkambrij 6 hours ago ago

      I have a chromebox (with 32GB DDR4) that idles at 4W, but after adding a couple of NVMe drives it doubled its power consumption. Having a full ATX mobo is cool (flexibility); with BIOS settings, powertop, and some other tweaks it can also idle at quite low power. I have an i7-7700K that idles at 18W. In combination with wake-on-LAN and similar you can have a monster that won't empty your wallet.

    • Havoc 5 hours ago ago

      Minipcs are nice but they’re not really like for like comparable.

      A good AM4 board can do 7 nvme, 8 sata and ecc ram.

    • noname120 8 hours ago ago

      Mac Mini M1 running Asahi Linux is half of that: 65-70 kWh/year

    • hparadiz 21 hours ago ago

      I've been thinking of tearing down my old gaming desktop (same as OP) and using a 2014 Macbook Pro instead for exactly this reason.

  • ErneX 6 hours ago ago

    I run (among many other VMs) TrueNAS on a VM of an xcp-ng host (Supermicro board with a Xeon and ECC ram). Passing a dedicated SAS controller to it. Before that I was using esxi but migrated all my VMs and hosts to xcp-ng. TrueNAS has been pretty good so far, been running this for many years already.

    I also have another xcp-ng host for other VMs running on a Dell OptiPlex Micro.

    OP should configure DNS locally and reverse proxy each service, I use bind 9 and nginx for that.

  • kelvinjps10 3 hours ago ago

    I decided that instead of dealing with self-hosting or cloud solutions, I'd just use local apps and sync them:

    - Syncing: Syncthing manages the syncing
    - Passwords: KeePassXC for desktop, KeePassDX for mobile
    - Note taking and documents: Obsidian, Neovim, and Pandoc
    - Photos: native gallery apps, mpv

    I might do a post about it

    • ryukoposting 3 hours ago ago

      Syncthing is the way, especially for OP's photography. My film scans spit out TIF files that can be north of 100MB. Editing those files on a network share is unpleasant. But Syncthing keeps local copies of the files, so editing a photo inside a Syncthing folder is like editing a normal file on your computer - because it is.

  • ivanjermakov 19 hours ago ago

    I never understood using a NAS OS and hosting non-NAS services there, it feels upside down. I would rather have a general purpose server OS with running NAS services. Same applies to Proxmox.

    • denkmoon 19 hours ago ago

      Proxmox is just Debian with a qemu and lxc webui. You can do anything with it

    • drnick1 17 hours ago ago

      Agreed, I just don't see the point for a "homelab." Unlike many, I like very straightforward setups based on a regular distro like Debian. I also run many services bare metal. This includes nginx, email stack, DNS server, game servers, etc. I use virtualization/containers for things that I treat as an appliance. This includes Home Assistant, Nextcloud, Matrix, Jellyfin, among others.

  • benlivengood a day ago ago

    I've started building a kubernetes cluster (Talos Linux) across town with wireguard between various houses. ZFS boxes for persistent volumes (democratic-csi) in each "zone" with cross-site snapshot replication and Gateway (Traefik) running at each site behind the ISP. CrunchyPGO allows separate StorageClasses to easily split the leader/followers up.

    • nickorlow 21 hours ago ago

      Have had issues w/ doing k8s over residential wan once I had enough hosts in my cluster

      (though they were halfway across the US from each other, and not town)

      • benlivengood 17 hours ago ago

        So far everything is under 15ms apart, but it is a small number of nodes so far. Did you mostly have trouble with etcd?

        • nickorlow 5 hours ago ago

          Yeah, etcd was the main culprit, but latency was 150-300ms in my case. At 3 nodes, it was relatively stable (had an issue every week or so that lasted < 5 min), but at 4 the camel's back broke.
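
          For anyone trying this anyway: etcd's defaults assume datacenter latencies, and its tuning guide suggests raising the heartbeat/election knobs for slow links. The values here are illustrative; the rule of thumb is to keep the election timeout around 10x the heartbeat and above your worst-case RTT:

          ```sh
          etcd --heartbeat-interval=500 --election-timeout=5000   # defaults: 100 / 1000 ms
          ```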

  • tuananh 15 hours ago ago

    *Most* homelab setups don't have much load, so it's mostly a matter of available RAM and then power consumption.

    Many people with a setup like this probably need only a 4-core low-powered machine with idle consumption at ~5-10W.

    • Semaphor 9 hours ago ago

      Yeah, this is the AI tax. I have several times as many services (28) on a vastly smaller machine (N100 fanless), but besides some very light AI for image detection which runs on CPU, I have no AI there, so I don’t need a desktop PC.

  • seriocomic 12 hours ago ago

    With AI/LLM assistants the barrier to setting up and running a homelab is so much lower - in the past 6 months I've had Claude help me completely reconfigure the (now) 5 RPis that were sitting around severely underutilized, I have 3 running Docker, some split between home stuff, production testing and a separate management layer (along with backups that were just in the too hard basket previously). Not to forget all the documentation that goes with it. Fun times!

  • izacus 7 hours ago ago

    Folks that use restic - how does it handle laptop backups?

    That is - handling laptop going to sleep during backup, laptop being on only for shorter periods of time, etc.?

    Because I had issues with backup tooling which wouldn't resume if it got interrupted and expected for the machine to always run at certain hour of the day. I had examples where laptops wouldn't backup for months because they were only on for a short 30-60min bursts at the time and the backup tools couldn't handle piece-meal resume.

    How does restic handle that?

    • calcifer 7 hours ago ago

      I invoke backups from a systemd timer. If the schedule is missed (due to sleep, power off etc.) it runs it at the next earliest opportunity.

      • izacus 6 hours ago ago

        How does it handle restart after the machine wakes up again?

        • calcifer an hour ago ago

          When the machine wakes up, systemd checks the timer's schedule and when it last ran. If one or more runs were missed due to the sleep, it's executed immediately.
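
          For reference, the piece that makes this work is Persistent=true on the timer. A minimal pair of units (paths and repo location are hypothetical; restic would also need its password supplied, e.g. via EnvironmentFile):

          ```ini
          # /etc/systemd/system/restic-backup.service
          [Unit]
          Description=Restic backup

          [Service]
          Type=oneshot
          ExecStart=/usr/bin/restic backup /home --repo /srv/restic-repo

          # /etc/systemd/system/restic-backup.timer
          [Unit]
          Description=Daily restic backup

          [Timer]
          OnCalendar=daily
          Persistent=true

          [Install]
          WantedBy=timers.target
          ```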

    • dizhn 7 hours ago ago

      https://restic.readthedocs.io/en/stable/faq.html

      It will resume from where it got interrupted. The only exception is the initial backup where it doesn't have a snapshot yet.

    • predkambrij 7 hours ago ago

      /etc/anacrontab can solve your use case.

  • succo 5 hours ago ago

    For remote access I use NetBird; I think it's the best and most secure option to avoid exposing stuff directly on the web and to put all your resources behind a VPN. It's super easy to set up and it also supports SSO with 2FA.

  • kleebeesh a day ago ago

    Neat!

    > Right now, accessing my apps requires typing in the IP address of my machine (or Tailscale address) together with the app’s port number.

    You might try running Nginx as an application, and configure it as a reverse proxy to the other apps. In your router config you can set up foo.home and bar.home to point to the Nginx IP address. And then the Nginx config tells it to proxy foo.home to IP:8080 and bar.home to IP:9090. That's not a thorough explanation but I'm sure you can plug this into an LLM and it'll spell it out for you.
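
    A minimal sketch of that nginx config (hostnames and ports are just examples):

    ```nginx
    # /etc/nginx/conf.d/home.conf
    server {
        listen 80;
        server_name foo.home;
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name bar.home;
        location / {
            proxy_pass http://127.0.0.1:9090;
        }
    }
    ```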

    • c-hendricks a day ago ago

      I'd also recommend using a DNS server that points `*.yourdomain` to your reverse proxy's IP. That way requests skip going outside your network, and it helps for ISPs that don't work with "loopback" DNS (quotes because I don't know the proper term)

      You can then set your DNS in Tailscale to that machines tailnet IP and access your servers when away without having to open any ports.

      And bonus, if it's pihole for dns you now get network-level Adblock both in and outside the home.
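
      If the DNS server is dnsmasq (which Pi-hole uses under the hood), the wildcard is one line (domain and IP are examples):

      ```
      # /etc/dnsmasq.d/homelab.conf
      address=/home.example.com/192.168.1.10
      ```

      That answers foo.home.example.com, bar.home.example.com, etc. with the proxy's LAN IP, so requests never leave your network.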

    • mnahkies a day ago ago

      Personally I'm using haproxy for this purpose, with Lego to generate wildcard SSL certs using DNS validation on a public domain, then running coredns configured in the tailnet DNS resolvers to serve A records for internal names on a subdomain of the public one.

      I've found this to work quite well, and the SSL whilst somewhat meaningless from a security pov since the traffic was already encrypted by wire guard, makes the web browser happy so still worthwhile.
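
      The CoreDNS side of a setup like this can be a very small Corefile (zone and addresses are hypothetical):

      ```
      internal.example.com {
          hosts {
              192.168.1.10 nas.internal.example.com
              192.168.1.10 media.internal.example.com
              fallthrough
          }
      }

      . {
          forward . 1.1.1.1
      }
      ```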

    • pajamasam a day ago ago

      This worked for me to get subdomains and TLS certificates working on a similar setup: https://blog.mni.li/posts/internal-tls-with-caddy/

    • Frotag a day ago ago

      IME Android devices don't respect static routes published by the router. I guess self-hosting DNS might be more robust, but I usually just settle for bookmarking the ip:port

    • frumiousirc a day ago ago

      This (reverse proxy) is essentially what "tailscale serve" does.

    • anon7000 a day ago ago

      Or just use Tailscale serve to put the app on a subdomain

    • verdverm a day ago ago

      Caddy is increasingly popular these days too. I use both and cannot decide which I prefer.

      • victorio a day ago ago

        Caddy's configuration is so simple and straightforward, I love it. For sure a more comfortable experience for simple setups
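
        For comparison, a whole reverse-proxy Caddyfile can be this small (names and ports are examples; for non-public hostnames you'd add tls internal, or use an http:// prefix, so Caddy doesn't try to fetch a public certificate):

        ```
        foo.home {
            tls internal
            reverse_proxy 127.0.0.1:8080
        }

        bar.home {
            tls internal
            reverse_proxy 127.0.0.1:9090
        }
        ```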

        • hk1337 a day ago ago

          I like Caddy's integration with Cloudflare for handling SSL. When I originally saw the idea, it was promoted as an easy way to have SSL for a homelab, but I don't use real domains for my internal apps and that is required with Cloudflare.

          • cyberpunk a day ago ago

            caddy has tailscale integration i think too, so your foo.bar.ts.net “just works”

        • verdverm a day ago ago

          The pain I've had with it is distributed configuration, i.e. multiple projects that want to config rules. I've been using the JSON API rather than their DSL.

          Do you know how I might approach this better?

      • windexh8er a day ago ago

        I think most homelabbers default to Caddy and/or Traefik these days. Nginx is still around with projects like NPM (the other NPM), but Caddy and Traefik are far more capable.

        DevOpsToolbox did a great video on many of the reasons why Caddy is so great (including performance) [0]. I think the only downside with Caddy right now is still how plugins work. Beyond that, however it's either Caddy or Traefik depending on my use case. Traefik is so easy to plug in and forget about and Caddy just has a ton of flexibility and ease of setup for quick solutions.

        [0] https://www.youtube.com/watch?v=Inu5VhrO1rE

        • verdverm a day ago ago

          far more capable is an exaggeration

          I use both, they are by and large substitutable. Nginx has a much larger knowledge base and ecosystem, the main reason I stick with it.

          • windexh8er 6 hours ago ago

            Just as one small example: if you're deploying in k8s and want the configuration external to Nginx, you want built-in certificate provisioning, and you need to run middleware that can easily be routed in-config...

            Traefik is far more capable, for example. If all you're doing is serving pages, sure.

          • philsnow a day ago ago

            I agree with you that they're more or less equal. I don't like the idea of my reverse proxy dealing with letsencrypt for me, personally, but that's just a preference.

            One tricky thing about nginx though, from the "If is evil" nginx wiki [0]:

            > The if directive is part of the rewrite module which evaluates instructions imperatively. On the other hand, NGINX configuration in general is declarative. At some point due to user demand, an attempt was made to enable some non-rewrite directives inside if, and this led to the situation we have now.

            I use nginx for homelab things because my use-cases are simple, but I've run into issues at work with nginx in the past because of the above.

            [0] https://nginx-wiki.getpagespeed.com/config/if-is-evil
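
            The usual workaround for most "if" cases is the map directive, which is evaluated declaratively (an illustrative sketch, not from the wiki):

            ```nginx
            map $http_user_agent $block_it {
                default        0;
                ~*(bot|crawl)  1;
            }

            server {
                listen 80;
                location / {
                    # `return` is one of the directives documented as safe inside `if`
                    if ($block_it) { return 403; }
                    proxy_pass http://127.0.0.1:8080;
                }
            }
            ```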

            • dwedge 21 hours ago ago

              I'm not sure why Apache is so unpopular, it can also function as a reverse proxy and doesn't have the weird configuration issues nginx has.

              Some people take this way too far, for instance I've seen places compiling (end-of-life) modsec support into nginx instead of using the webserver it was built for

    • ls612 a day ago ago

      The part you are leaving out is that you also need to set up something like a pihole (which you can just run in a container on the homelab rather than on a pi) to do the local DNS resolution.

  • garyfirestorm 20 hours ago ago

    You can use https://nginxproxymanager.com/ to manage various services on your homelab. It works flawlessly with Tailscale: I can connect to my tailnet and simply type http://service.mylocaldomain to open the service. You will also need AdGuard with a DNS rewrite, so that *.mylocaldomain forwards to the NPM instance, and the NPM instance has all the information about which IP:PORT hosts which service. Tailscale DNS should also be configured to use AdGuard; you can turn off the adblock features if they interfere with any of your stuff.

    I would also suggest running two instances of AdGuard (one as backup) and two instances of NPM.

  • mcbuilder 16 hours ago ago

    I did the exact same thing, except with a virtualized OPNsense router and bare-metal Kubernetes on one host. The Kubernetes broke and I downgraded from 32GB of RAM to 16GB. I actually may revisit the setup, since OPNsense FRR and Cilium BGP peering between your cluster and home LAN is actually a really seamless way to self-host things in Kubernetes. Maybe there are other ways, maybe there is something simpler, but a homelab is about fun more than pure function.

  • buybackoff 19 hours ago ago

    TrueNAS works perfectly as a VM eg on Proxmox with passing through a SATA controller from the motherboard. It may not work always with bad IOMMU groups, but I have this on an old Xeon Precision Tower 3420 and not so old Asus Z690 motherboard. NVMe passthrough should be straightforward as well. No need for LSIs or cheap PCI-to-SATA cards if the number of existing physical slots is enough. And as far as TrueNAS is concerned, it's baremetal disk access. Even the latest TrueNAS is not in the same league as Proxmox for managing VMs/containers, not even close.

  • hk1337 a day ago ago

    This is a lot of my similar setup in hardware. I just repurposed a PC I was using for windows that I barely used anyways. I would like to move that to a Framework Desktop mounted in my mini rack at some point though.

    I ended up making my own dashboard app, not as detailed as Scrutiny, because I just wanted a central place that linked to all my internal apps (so I didn't have to remember them all) with a simple status check. I made my own in Go, though, because the main ones I found were NodeJS and were huge resource hogs.

  • monkaiju 2 hours ago ago

    I ran TrueNAS for years but now that I've discovered immutable OSes, and uCore specifically, I'm never going back!

    Sauce: https://github.com/ublue-os/ucore

  • Prabhapa 15 hours ago ago

    Use Cloudflare and Cloudflare Tunnels for exposing your apps over the internet via custom domains. It's free of cost. Tailscale only allows 3 devices, I suppose. If you have more devices that need to connect, then Cloudflare is the best.

    • tgrowazay 13 hours ago ago

      Tailscale is free for up to 3 users with up to 100 devices

    • Pooge 10 hours ago ago

      > use cloudflare

      Please don't

  • luzionlighting 9 hours ago ago

    Clean setup. It's interesting how much attention people give to cable management and layout in tech setups.

    In architectural lighting projects we often think in a similar way about fixture placement, wiring access and maintenance because poor planning becomes very visible once a space is finished.

  • mattschaller 3 hours ago ago

    Add an *arr stack to that bad boy.

  • gehsty 21 hours ago ago

    I’m using a refurbed m4 Mac mini, connected to a unifi nas pro 8, super fun and straightforward. Feels like I only have to do the tinkering I want to do.

  • navigate8310 a day ago ago

    Why are you using restic when TrueNAS offers native solutions to back up your data elsewhere?

    • dizhn 7 hours ago ago

      Encryption, deduplication, snapshots. Although if the poster has a zfs based system elsewhere zfs based backups would be fantastic.

    • PunchyHamster a day ago ago

      exactly because it isn't trueNAS specific I'd imagine

  • EdNutting a day ago ago

    Have a look at Headscale to avoid the cost of Tailscale for small home setups.

    • SauntSolaire 20 hours ago ago

      I believe Tailscale is free to use for small home setups. It's limited to 3 users and 100 devices which has been plenty for my homelab setup.

    • drnick1 a day ago ago

      This, or simply expose a VPN (Wireguard) port on a public IP. I don't see why you need to involve any third parties in such a setup.
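
      A bare WireGuard setup really is small; a sketch of the server side (keys and addresses are placeholders):

      ```ini
      # /etc/wireguard/wg0.conf on the homelab box
      [Interface]
      Address = 10.8.0.1/24
      ListenPort = 51820
      PrivateKey = <server-private-key>

      # one [Peer] section per device
      [Peer]
      PublicKey = <laptop-public-key>
      AllowedIPs = 10.8.0.2/32
      ```

      Then forward UDP 51820 on the router and bring it up with wg-quick up wg0.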

      • EdNutting 21 hours ago ago

        For a single machine, yeah Wireguard is fine. For my multi-user multi-machine many-service home lab, it’s quite helpful to have the extra small features that Headscale offers (and some it exposes in a more convenient way).

        Edit: Tailscale has a fairly frank page on Wireguard vs Tailscale with suggestions on when to use which: https://tailscale.com/compare/wireguard

    • miloschwartz 18 hours ago ago

      Pangolin is also a good choice. Can be fully self-hosted. Also based on WireGuard.

      Handles both browser-based reverse proxy access and client-based P2P connections like a VPN.

  • buckle8017 15 hours ago ago

    Get yourself a custom domain and just use subdomains. Nothing says a public DNS server has to return public IPs. Bonus: you can get HTTPS certs with certbot and a DNS challenge.
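
    A hedged example of the DNS-challenge wildcard cert, assuming the Cloudflare DNS plugin (the domain and credentials path are placeholders):

    ```shell
    # Requires the certbot-dns-cloudflare plugin and an API token in the credentials file
    certbot certonly \
      --dns-cloudflare \
      --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
      -d example.com -d '*.example.com'
    ```

    No inbound HTTP is needed for the challenge, so this works even when the names resolve to private IPs.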

  • sbinnee 18 hours ago ago

    I learned about Mealie.io, thanks.

  • ritcgab a day ago ago

    Hard pass whenever you host long-term storage without ECC memory.

  • sgt a day ago ago

    This is extremely light - not a bad setup, but I mean.. it's like 1% of typical Homelabs.

    • tclancy a day ago ago

      Mother of God, why make this comment? It’s the poster’s setup and they are happy with it. What possible value could denigrating it do? The ol’ ball coach breakin’ em down to build em up shtick is gone and I don’t miss it.

      • sgt a day ago ago

        Didn't mean it that way - and for that I apologize. I was just expecting a lot more since it was on the front page.

        • tclancy 21 hours ago ago

          No worries Sarge and thanks for keeping the kids alive out there.

          • sgt 20 hours ago ago

            It's a pleasure, Tom! May I call you Tom?

            • tclancy 20 hours ago ago

              Can't hurt.

    • skyberrys a day ago ago

      I too was wondering what made this a homelab. I appreciate the setup, but from the word "lab" I was expecting at least an oscilloscope. That being said, it has cool features I hadn't known about, like the image storing system and at-home LLM support.

      • tclancy a day ago ago

        Deeply suspect it has to do with being in the author's home.

    • switchbak a day ago ago

      It feels like day 2 after you’ve received the new hard drives. It’s nice, modern enough but still a pretty bog standard home machine, not really “homelab” territory yet.

      • akerl_ a day ago ago

        Why do we need to gatekeep “homelab”?

        • PunchyHamster a day ago ago

          Terms with defined meanings aid conversation.

          Why do you need to dilute the term? There is nothing wrong with a NAS running 3 apps that you update once a year being called not a "homelab" but just "a NAS".

          • akerl_ a day ago ago

            > Why do you need to dilute the term?

            Nobody is diluting anything. This person posted the setup they have in their home. It’s their homelab.

            It’s not diluting any terms for them to call it that. Their setup is just as much a homelab as somebody else’s 48U rack.

            It’s just a dick move, and against the rules of the site, to see somebody’s earnest post about their tech setup and post a shallow dismissal about how their setup isn’t deserving of your imagined barrier to entry.

            • PunchyHamster 20 hours ago ago

              They are not researching anything. They just want to have a few things running.

              The whole idea of a homelab (regardless of size) is learning first.

              He just has a home server. It's okay to call it that.

              • Capricorn2481 2 hours ago ago

                Is the average person really using Tailscale? This seems plenty deep enough

              • akerl_ 20 hours ago ago

                Oh. Now the imaginary gate is “research”?

            • tokyobreakfast 15 hours ago ago

              Quit whining, you know damn well the bar for a typical "Show HN" has been raised to the point of being irrelevant these days, this post is a perfect example. This is not a home lab.

              I'm happy for the OP and that it works for him. That said:

              The equivalent of Joe Bloggs installing Linux onto an old laptop is neither curious nor interesting, let's not pretend it is because feelings.

              • akerl_ 15 hours ago ago

                This isn't a Show HN, and also I think you mean "lowered" given the tone of your post.

                It's also been on the front page for most of the day on its own merits. It's clear you don't like the article. The guidelines are clear that you're expected to either engage constructively or just move along.

          • anon7000 a day ago ago

            I think if you’re playing around with apps & Tailscale on your NAS, it’s a homelab.

      • sgt a day ago ago

        Exactly. And I don't mind this being on the HN front page, but I'd like to see some proper Homelab setups here. Maybe someone can post the coolest setup they've seen so far?

    • HelloUsername a day ago ago

      > This is extremely light

      I'm curious about its power consumption at idle, under average use, and at peak.

    • Scene_Cast2 a day ago ago

      Of typical homelabs that are posted and discussed.

      The online activity of the homelab community leans towards those who treat it as an enjoyable hobby as opposed to a pragmatic solution.

      I'm on the other side of the spectrum. Devops is (at best) a neutral activity; I personally do it because I strongly dislike companies being able to do a rug-pull. I don't think you'll see setups like mine too often, as there isn't anything to brag about or to show off.