(Republishing this one on the blog, instead of just as a gist)
Recent changes in the tech scene have made it clear that relying on commercial companies to provide the services I depend on isn’t a good strategy in the long term, and given how effective Tailscale is these days as a remote-access system, I’ve gradually been expanding a small collection of self-hosted web apps and services running on my home network.
Until now they’ve mainly been addressed using their IP addresses and random high ports on the internal LAN, for example:
- Pihole: http://10.19.72.7/admin
- Home Assistant: http://10.19.72.11:8123/
- Linkding: http://10.19.72.6:9092/
- Grafana: http://10.19.72.6:3000/
- (plus a good few others)
Needless to say this is a bit messy and inelegant, so I’ve been planning to sort it out for a while. My requirements:
- no more ugly bare IP addresses!
- a DNS domain;
- with HTTPS URLs;
- one per service;
- no visible port numbers;
- fully valid TLS certs, no having to click through warnings or install funny CA certs;
- accessible regardless of which DNS server is in use — i.e. using public DNS records. This may seem slightly unusual, but it’s useful so that the internal services can still be accessed when I’m using my work VPN (which forces its own DNS servers);
- accessible internally;
- accessible externally, over Tailscale;
- not accessible externally without Tailscale.
After a few false starts, I’m pretty happy with the current setup, which uses Caddy.
Hosting The Domain At Cloudflare
First off, since the service URLs are not to be accessible externally without Tailscale active, the HTTP challenge approach to provision Let’s Encrypt certs cannot be used. That would require an open-to-the-internet publicly-accessible HTTP server on my home network, which I absolutely want to avoid.
In order to use the ACME DNS challenge instead, I set up my public domain "taint.org" to use Cloudflare as the authoritative DNS server (in Cloudflare terms, "full setup"). This lets Caddy edit the DNS records via the Cloudflare API to handle the ACME challenge process.
One of the internal hosts is needed to run the Caddy server’s reverse proxies; I picked "hass", 10.19.72.11, the Home Assistant host, which didn’t have anything already running on port 80 or port 443. (All of my internal hosts are running on a private /24 IP range, at 10.19.72.0/24.)
The dedicated DNS domain I’m using for my home services is "home.taint.org". In order to use this, I clicked through to the Cloudflare admin panel and created a DNS record as follows:
| Type | Name | Content | Proxy Status | TTL |
|------|--------|-------------|-------------------------|------|
| A | *.home | 10.19.72.11 | DNS only - reserved IP | Auto |
Now, any hostnames under "home.taint.org" will return the IP 10.19.72.11 (where Caddy will run).
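As a quick sanity check, any name under the wildcard should resolve to the Caddy host from any public resolver; something like this (assuming dig is available) should come back with 10.19.72.11:

```
$ dig +short grafana.home.taint.org @1.1.1.1
10.19.72.11
```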
I don’t particularly care about exposing my internal home network IPs to the world; it’s a trade-off that lets the URLs keep working even when an internal host is using the work VPN, or resolving with 8.8.8.8, or whatever. That’s worth giving up a little bit of paranoia for, since the IPs won’t be accessible from outside without Tailscale anyway.
It is worth noting that the Cloudflare-hosted domain doesn’t have to be the same one used for URLs in the home network; using dns_challenge_override_domain you can delegate the ACME challenge from any "home" domain to one which is hosted in Cloudflare.
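For illustration, here’s a rough sketch of how that might look (the "home.example.net" domain is made up, and per the Caddy docs you’d also need a CNAME delegating the _acme-challenge name into the Cloudflare-hosted zone):

```
# Hedged sketch: internal names live under "home.example.net" (hypothetical,
# not hosted at Cloudflare); "acme.taint.org" is inside the Cloudflare zone.
# A CNAME along the lines of
#   _acme-challenge.hass.home.example.net  ->  acme.taint.org
# delegates the DNS-01 challenge, and Caddy writes its TXT records there:
hass.home.example.net {
	tls {
		dns cloudflare cloudflare_api_token_goes_here
		dns_challenge_override_domain acme.taint.org
	}
	reverse_proxy /* 10.19.72.11:8123
}
```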
The Caddy Setup
One wrinkle is that I had to generate a custom Caddy build in order to get the "dns.providers.cloudflare" non-standard module, from https://caddyserver.com/download . This is a click-and-download page which generates a custom Caddy binary on the fly. It would have been nicer if the Cloudflare module was standard, but hey.
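(If you’d rather build it yourself than use the download page, xcaddy can produce the same thing; a rough sketch, assuming Go and xcaddy are installed:)

```
# Build a Caddy binary with the Cloudflare DNS module compiled in,
# then put it where the rest of this post expects it
xcaddy build --with github.com/caddy-dns/cloudflare
sudo mv caddy /usr/local/bin/caddy
```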
Once that’s installed, I can get this output:
```
$ /usr/local/bin/caddy list-modules
[long list of standard modules omitted]
dns.providers.cloudflare
dns.providers.route53

Non-standard modules: 2
Unknown modules: 0
```
(Yes, I have Caddy running as a normal service, not as a Docker container. No particular reason; I think Docker should work fine.)
Go to the Cloudflare account dashboard and create a user API token, as described at https://developers.cloudflare.com/fundamentals/api/get-started/create-token/ . In my case it has the Zone / DNS / Edit permission, on the specific zone taint.org.
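(If you want to double-check the token before wiring it into Caddy, Cloudflare has a token-verification endpoint; a quick hedged check, assuming the token is exported as CLOUDFLARE_API_TOKEN:)

```
# A valid, active token should come back with "status": "active"
curl -s -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  https://api.cloudflare.com/client/v4/user/tokens/verify
```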
Copy that token as it’s needed in the "Caddyfile", which now looks like the following:
```
hass.home.taint.org {
	tls {
		dns cloudflare cloudflare_api_token_goes_here
	}
	reverse_proxy /* 10.19.72.11:8123
}

links.home.taint.org {
	tls {
		dns cloudflare cloudflare_api_token_goes_here
	}
	reverse_proxy /* 10.19.72.6:9092
}

pi.home.taint.org {
	tls {
		dns cloudflare cloudflare_api_token_goes_here
	}
	redir / /admin/
	reverse_proxy /admin/* 10.19.72.7:80
}

grafana.home.taint.org {
	tls {
		dns cloudflare cloudflare_api_token_goes_here
	}
	reverse_proxy /* 10.19.72.6:3000
}
```

[many other services omitted]
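(One optional tidy-up, to avoid pasting the raw token into every site block: Caddyfile snippets plus an environment placeholder. A hedged sketch, assuming the token is exported as CLOUDFLARE_API_TOKEN in the environment Caddy runs under:)

```
# Reusable snippet holding the Cloudflare DNS-challenge config
(cloudflare_tls) {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
}

grafana.home.taint.org {
	import cloudflare_tls
	reverse_proxy /* 10.19.72.6:3000
}
```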
Running `sudo caddy run` in the same dir will start Caddy up and verbosely log what it’s doing.
(Once you’re happy enough, you can get Caddy running in the normal systemd service way.)
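(For reference, a rough sketch of that systemd route, assuming the distro’s caddy package with its standard caddy.service unit and /etc/caddy/Caddyfile; the custom binary may need to replace the packaged /usr/bin/caddy, or the unit’s ExecStart path adjusted:)

```
# Install the Caddyfile where the packaged unit expects it, then enable the service
sudo cp Caddyfile /etc/caddy/Caddyfile
sudo systemctl enable --now caddy

# After future Caddyfile edits, a reload picks them up without downtime
sudo systemctl reload caddy
```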
After setting those up, I now have my services accessible locally as:
- Home Assistant: https://hass.home.taint.org/
- Pihole: https://pi.home.taint.org/
- Grafana: https://grafana.home.taint.org/
- Linkding: https://links.home.taint.org/
Caddy seamlessly goes off and configures fully valid TLS certs with no fuss. I found it much tidier than Certbot or Nginx Proxy Manager.
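(One way to confirm the certs really are valid Let’s Encrypt ones, assuming openssl is handy:)

```
# Should show a Let's Encrypt issuer and a roughly 90-day validity window
echo | openssl s_client -connect grafana.home.taint.org:443 \
  -servername grafana.home.taint.org 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```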
The Tailscale Setup
So this has now sorted out all of the requirements bar one:
- accessible externally, over Tailscale.
To do this I had to log into Tailscale’s admin console at https://login.tailscale.com/admin/machines , pick a host on the 10.19.72.0/24 internal LAN, click its dropdown menu, choose "Edit Route Settings…", and enable a Subnet Route for 10.19.72.0/24. By doing this, all of the service.home.taint.org DNS records are now accessible remotely once Tailscale is enabled; I don’t even need to use ts.net names to access them! Perfect.
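(One detail worth spelling out: a subnet route only shows up in the admin console once the chosen host advertises it, and on Linux that host also needs IP forwarding enabled; roughly, on that machine:)

```
# On the subnet-router host: advertise the home LAN to the tailnet
# (IP forwarding must be enabled first, per the Tailscale subnet-router docs)
sudo tailscale up --advertise-routes=10.19.72.0/24

# On Linux clients, advertised routes aren't accepted by default:
sudo tailscale up --accept-routes
```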
Anyway, that’s the setup — hopefully this writeup will help others. And kudos to Caddy, Let’s Encrypt and Tailscale for making this relatively easy.