Perhaps this is a weird question, but I've been watching some TechnoTim videos lately and he seems to have local DNS addresses for his local services. Maybe I've got this wrong, but if not: how would you go about doing this?
I have a Pterodactyl dashboard, which I access locally using the machine's IP and port, but it would be great to have a pterodactyl.example.com domain that isn't accessible from other networks but does work on my own network. I also still want some services exposed to the internet, so I'm not sure if this would work.
Run your own DNS server on your network, such as Unbound or Pi-hole. Set up overrides so that domain.example.lan resolves to a local IP. Set your upstream DNS to something like 1.1.1.1 to resolve everything else. Set your DHCP server to hand out the IP of the DNS server so clients will use it.
You don’t need to add block lists if you don’t want.
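For example, in Pi-hole you can add these as Local DNS Records, or drop in a dnsmasq snippet along these lines (the hostname and IPs are just placeholders):

# Local override: answer pterodactyl.example.lan with a LAN address
address=/pterodactyl.example.lan/192.168.1.50
# Everything else goes to the public upstream resolver
server=1.1.1.1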
You can also run a reverse proxy on your LAN and configure your DNS so that service1.example.lan and service2.example.lan both point to the same IP. The reverse proxy then routes each request based on the requested domain name, whether the backend is on a separate server or on the same server on a different port.
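As a rough sketch, a Caddyfile doing that name-based routing could look like this (hostnames, IPs, and ports are made up):

# Both names resolve to this box; Caddy picks the backend by the requested hostname
service1.example.lan {
    tls internal    # .lan isn't publicly resolvable, so use Caddy's internal CA
    reverse_proxy 127.0.0.1:8080
}

service2.example.lan {
    tls internal
    reverse_proxy 192.168.1.60:9000
}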
Thanks for the reply! I think I get it now.
I've got this working with Caddy and AdGuard.
I use Caddy as my reverse proxy. It runs on the machine in the basement with all the different Docker container services on their different ports. My registrar is set up so that *.my-domain.com goes to my IP.
Caddy is then configured to send 'service-a.my-domain.com' to port 1234, with the others going to their own ports. This is just a completely standard reverse proxy setup.
For some subdomains (i.e. certain services) I've whitelisted only the local network. There is some config for that.
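It looks roughly like this in the Caddyfile (the subnet and port are placeholders for whatever you use):

secret.my-domain.com {
    # only clients from the LAN get through; everyone else gets a 403
    @outside not remote_ip 192.168.0.0/24
    respond @outside 403
    reverse_proxy 127.0.0.1:5678
}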
I'm pretty sure I also have to have AdGuard do a DNS rewrite on the local network as well. That is, AdGuard has a rewrite for '*.my-domain.com' pointing to 192.168.0.22 (the local machine running Caddy). I think I had to do this so that when a request from inside the LAN reaches Caddy, it appears to come from the whitelisted local network rather than from my public IP (which changes every couple of months, but could be more often).
Yup, I have a domain I purchased and on my lan I use PiHole and Caddy. All my apps and services use the format app.mydomain.com. PiHole forwards all requests for *.mydomain.com to Caddy, which handles the LE certificate (via DNS challenge) and forwards the requests to the proper IP:PORT. I started using this for everything, my Proxmox hosts, printer, my APs…
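A minimal sketch of one of those site blocks, assuming Caddy is built with the Cloudflare DNS plugin and the API token sits in an environment variable (the names, IP, and port are placeholders):

app.mydomain.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}   # DNS-01 challenge, nothing needs to be reachable from the internet
    }
    reverse_proxy 192.168.1.40:8080
}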
One thing to be careful of that I don't see mentioned: you need to set up ACLs for any local-only services that are served by a web server that is also publicly reachable.
If you're using standard name-based hosting in, say, nginx, and set up two domains, publicsite.mydomain.com and secret.local.mydomain.com, anyone who figures out the name of your private site can simply use curl with a Host: header and request the internal one, if you haven't put up some ACLs to prevent it from being accessed.
You’d want to use an allow/deny configuration to limit the blowback, something like
allow internal.ip.block.here/24; deny all;
in your server block so that local clients can request it, but everyone else gets told to fuck off.
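Putting it together, the server block might look roughly like this (server name, subnet, cert paths, and upstream port are placeholders):

server {
    listen 443 ssl;
    server_name secret.local.mydomain.com;

    # cert paths stand in for whatever you already use
    ssl_certificate     /etc/ssl/certs/mydomain.crt;
    ssl_certificate_key /etc/ssl/private/mydomain.key;

    # only answer clients on the LAN; everyone else gets a 403
    allow 192.168.0.0/24;
    deny  all;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}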
Or just point secret.local.mydomain.com to the LAN IP of the server.
That's the gotcha that can bite you: if you're sharing internal and external sites via a split-horizon nginx config, and it's accessible over the public internet, then the IP defined in DNS doesn't actually matter.
If an attacker can determine that secret.local.mydomain.com is a valid server name, they can request it from nginx even if it only has internal DNS, by including that domain in the Host header of their request, for example with curl like this:
curl --header 'Host: secret.local.mydomain.com' https://your.public.ip.here -k
Admittedly this requires some recon, which means 99.999% of attackers are never going to get remotely close to doing this, but it's an edge case that's easy to guard against with ACLs, and you probably should when running split-horizon configurations.
But the attacker would need to know both the internal and the external DNS names. If the internal name never appears on an SSL certificate (and therefore in public certificate transparency logs), it's practically impossible to discover.
By the way, I always suggest reaching services through a VPN, and using something like a Cloudflare Tunnel for services that must be public.
P.S. Shouldn't public and private DNS be inverted in your curl example?
Nope, that curl command says 'connect to the public IP of the server, ask for this specific site by name, and ignore SSL errors'.
So it’ll make a request to the public IP for any site configured with that server name even if the DNS resolution for that name isn’t a public IP, and ignore the SSL error that happens when you try to do that.
If there’s a private site configured with that name on nginx and it’s configured without any ACLs, nginx will happily return the content of whatever is at the server name requested.
Like I said, it's certainly an edge case that requires some knowledge of your target, but at the same time, how many people will just name their Vaultwarden install, as an example, vaultwarden.private.domain.com?
You could write a script that recons through various permutations of high-value target names and makes a couple hundred curl attempts, and end up with a nice clean list of reconned and possibly vulnerable targets.
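If you want to check your own setup for this, a quick bash loop along these lines does the trick (the hostnames and the target are placeholders):

# Probe your own public IP with guessed hostnames and see which ones answer
for name in vaultwarden.private.mydomain.com secret.local.mydomain.com; do
  code=$(curl -sk -o /dev/null -w '%{http_code}' --header "Host: $name" https://your.public.ip.here)
  echo "$name -> $code"
done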
I was planning on filtering local and external IPs, like TechnoTim explains in one of his videos, by using Cloudflare as an external reverse proxy.
DNS? Why so complicated? Just edit your hosts file 😏
This is the correct answer.
Edit /etc/hosts and add
127.0.0.1 example.com
so when you type example.com into the address bar it goes to 127.0.0.1.
Yes - I do this with Pi-hole. It happens to be the same domain name that I host (very few) public services on too, so those DNS names work both inside and outside my network.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
AP: WiFi Access Point
CA: (SSL) Certificate Authority
DHCP: Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network
DNS: Domain Name Service/System
HTTP: Hypertext Transfer Protocol, the Web
IP: Internet Protocol
PiHole: Network-wide ad-blocker (DNS sinkhole)
SSL: Secure Sockets Layer, for transparent encryption
VPN: Virtual Private Network
nginx: Popular HTTP server
Look into local DNS servers if you want multiple machines to use your local domains. If you only want a single Windows or Linux (and probably Mac) computer to use the domain to access a specific local IP, an entry in your /etc/hosts file would be enough.
Yes, you definitely can. First you either need a DNS resolver or you change the systems' hosts files so they can look up the DNS name correctly.
If the DNS name "pterodactyl.example.com" points at the machine directly, you still have to use the port of the Pterodactyl dashboard. You can also run a reverse proxy listening on port 443 (if you want to use HTTPS, which I assume is the goal) on that machine or another one, which proxies the name "pterodactyl.example.com" to the right port.
The next part is getting a certificate: you can either create a self-signed root CA and install the root cert on each system, or get one with an ACME client using a dns-01 challenge (since "pterodactyl.example.com" is not resolvable from the outside).
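If you go the self-signed route, the openssl steps look roughly like this (filenames and names are placeholders, and the SAN line assumes bash):

# Create a local root CA to install on your devices
openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -key rootCA.key -sha256 -days 3650 \
    -subj "/CN=Home Lab Root CA" -out rootCA.crt

# Key and signing request for the service
openssl req -new -newkey rsa:2048 -nodes -keyout pterodactyl.key \
    -subj "/CN=pterodactyl.example.com" -out pterodactyl.csr

# Sign it with the root CA, including the SAN modern browsers require
openssl x509 -req -in pterodactyl.csr -CA rootCA.crt -CAkey rootCA.key \
    -CAcreateserial -days 825 -sha256 \
    -extfile <(printf "subjectAltName=DNS:pterodactyl.example.com") \
    -out pterodactyl.crt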
Then put the certificate either on the Pterodactyl dashboard itself or on the reverse proxy. There are also several reverse proxies that can fetch and reload the cert automatically; for example, Caddy can do this with its DNS plugins.
If you want I can help you with the configuration, I’ve done much of the same thing already
Yup. You can run both local and external services off the same proxy, at least with Traefik, and I assume with others. Alternatively you could use Traefik solely for local services and a Cloudflare Zero Trust tunnel for external ones. I think his Traefik video covers it? If not, it covers some part of it.
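Something along these lines with Traefik v2 docker-compose labels (the image, hostnames, port, and subnet are just placeholders): the local-only router gets an IP whitelist middleware, while routers for public services simply omit it.

services:
  panel:
    image: ghcr.io/pterodactyl/panel:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.panel.rule=Host(`pterodactyl.example.com`)"
      # local-only: attach an IP whitelist so only LAN clients are routed
      - "traefik.http.routers.panel.middlewares=lan-only"
      - "traefik.http.middlewares.lan-only.ipwhitelist.sourcerange=192.168.0.0/24"
      - "traefik.http.services.panel.loadbalancer.server.port=80"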
The other part is that you need Pi-hole set up to serve local DNS.
You can just point your domain at your local IP, e.g. 192.168.0.100
If you mean doing that in the public DNS records, please note that public records pointing at private IPs are often filtered by ISPs' DNS servers (DNS rebinding protection), because they can be used in web attacks.
If you don’t use your ISP’s DNS as upstream, and the servers you use don’t do this filtering, and you don’t care about the attacks, carry on. But if you use multiple devices or have multiple users (with multiple devices each) eventually that domain will be blocked for some of them.
The simplest option is to use /etc/hosts to set up the names, if there are just a few.
You can do that with Pi-hole and basically any reverse proxy. The process is the same, so you can follow tutorials; you just have to set up your domain through your Pi-hole instance instead of a registrar. You can set Pi-hole as the DNS for specific devices, or set it as the default DNS for your whole network through the router.
Will also take a look at the router DNS, thanks a lot!
People have already talked about hosting your own DNS, so let me add that a reverse proxy would be used for something like mapping myhome.local:8000 to myhome.local/jellyfin.
Generally speaking, a subdomain like jellyfin.myhome.com will work out much better than a subpath like myhome.com/jellyfin. Very few web apps can deal well (or at all) with being used under a subpath.
Using reverse proxies is common enough now that quite a few apps can deal with subpaths, and for the ones that can’t you can generally get nginx to rewrite the paths for you to make things work.
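For the path-rewriting case, a rough nginx sketch (Jellyfin's default port as an example; many apps also want their base-URL setting adjusted to match):

# Requests to /jellyfin/... are proxied with the /jellyfin/ prefix stripped
location /jellyfin/ {
    proxy_pass http://127.0.0.1:8096/;   # trailing slash replaces the matched prefix
    proxy_set_header Host $host;
}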
Alright, have fun with that. 🙂
I am, no worries.
Well, whatever works. Your example wouldn’t need a reverse-proxy.