It's like someone should make a file... maybe in /etc ... and put short names for services in it... maybe it could be called /etc/services...
That would mean not being able to vibe code up an entire app to deal with something as insurmountable as looking at a list of numbers, and to post it on HN for those sweet, sweet upvotes. Why would they not do that?
And then they might code up some sort of service lookup tool thingy to use on the train wreck that is the modern web.
And if they want name resolution, maybe even names that reflect the scope of its location like .localhost or .internal
Various services have existed, such as portmap(8), though NFS and similar services have often suffered from the "too complicated to debug" problem, where devops (back then, sysadmins) would try turning the system off and then back on again in the hopes of resolving the issue du jour. You might get lucky, determine that node number three (of many) was cursed, leave it switched off for the Season of Mammon (more commonly known as Christmas), and quietly retire it later. Hypothetically.
Generally host and port mapping gets shoved somewhere into the configuration management layer and hopefully does not become too complicated (or grow too many security holes), as implementations vary from "configuration files and a few scripts" to database and service layers that few can debug, especially not a sysadmin at 3 AM running on an hour of bad sleep. Hypothetically.
this is a nice idea, but idk why, in macos if i do `nc -l 127.0.0.1 gopher` and then try to open url "http://127.0.0.1:gopher/" - safari does not open it, no requests visible in the `nc` output.
also `curl -v http://127.0.0.1:gopher/` gives this error:

    * URL rejected: Port number was not a decimal number between 0 and 65535
    * Closing connection
    curl: (3) URL rejected: Port number was not a decimal number between 0 and 65535
so the ports are named, it is nice, but in practice it does not make life easier.

> http://...:gopher
is it http or gopher? :)
i chose the gopher port just as an example. try with any other service name mapped to a port number from /etc/services and the result will be the same. the OP's goal was to use many http/https services, so we are talking about many http(s) services.
i just wanted to make the point that even if you have service names in /etc/services, it is not possible to use those names easily to host/access http(s) services.
The names are the kind of servers that listen on those ports (by default) like ssh, telnet, http, and smtp. They are not subdomains or for URI parsing.
Well, the entire context of this is https so anything else is immaterial. The only reason it would be gopher is if you didn't read the post or don't understand the basics of https.
As bandie pointed out, you're explicitly making an HTTP request. Duh.
nc is for generic connections and handles it well.
i know, but the OP's goal was to host/access http(s) services with names and avoid port numbers, and gopher service name was chosen by me as an example. my point was that /etc/services cannot be used for the OP's need.
if you host an http(s) service on port 11111 you can reach it with url http://127.1:11111, but url http://127.1:vce/ would not work in most software.
Heck, maybe even `resolvectl service`?
Perhaps we could even make the file the port itself, perhaps calling it a “socket”? A “unix socket” would be a great name. If we could place all these files behind a local reverse proxy then we could use localhost/jekyll or localhost/fastapi. It’s just a dream
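For the record, nginx can already proxy to unix sockets, so a minimal sketch of the dream (socket paths hypothetical):

    server {
        listen 80;
        server_name localhost;

        # each app listens on a named unix socket instead of a numbered port
        location /jekyll/ {
            proxy_pass http://unix:/run/dev/jekyll.sock:/;
        }
        location /fastapi/ {
            proxy_pass http://unix:/run/dev/fastapi.sock:/;
        }
    }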
If the port number space was bigger, I wonder if we would have gotten a global naming service (ala DNS) for unique service names.
You can still publish port numbers along with addresses in DNS though (SRV records).
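For reference, an SRV record carries the port right next to the target host; a sketch with hypothetical names under .example:

    ; _service._proto.name       class  SRV  prio  weight  port  target
    _http._tcp.myapp.example.    IN     SRV  10    5       8443  host.example.

Query it with `dig +short SRV _http._tcp.myapp.example`.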
Sure, but they are running web-apps they've vibe-coded (hence the .vibe tld) and for that use-case of many web apps that I run in docker containers I use nginx-proxy [0]. All the container needs is a VIRTUAL_HOST environment variable with the domain and what my router needs is an address entry for the wildcard subdomains. I even have nginx-proxy on a internet-accessible staging server.
[0] https://github.com/nginx-proxy/nginx-proxy
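For anyone curious, the basic pattern is roughly this (as in the nginx-proxy README; `my-app-image` is a stand-in for your own container):

    # the proxy watches the Docker socket and routes by VIRTUAL_HOST
    docker run -d -p 80:80 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      nginxproxy/nginx-proxy

    # each app container only has to declare its hostname
    docker run -d -e VIRTUAL_HOST=myapp.staging.example my-app-image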
You have to be root to edit /etc/services ...
URLs already have default ports for service names as a feature.
http:// means port 80 unless specified otherwise
https:// means port 443 unless specified otherwise
ftp:// means port 21 unless specified otherwise
sftp:// means port 22 unless specified otherwise
...
The practical solution for TFA is actually just an nginx server running on port 80 with proxy_pass
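A minimal sketch of that (names and upstream ports hypothetical), routing by Host header so every app still sees / as its root:

    server {
        listen 80;
        server_name jekyll.localhost;
        location / { proxy_pass http://127.0.0.1:4000; }
    }
    server {
        listen 80;
        server_name fastapi.localhost;
        location / { proxy_pass http://127.0.0.1:8000; }
    }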
...How many little web servers work without issue when their root page is loaded from a path other than /?
Not modern enough. Unix is too low level, antiquated, and discriminates against those who just want to get shit done instead of reading manpages or documentation by hand.
This is the best example of Poe’s Law I’ve ever seen. Well done…?
What about identifying different instances of the same service?
Top reply, and clearly based on the article's title rather than its content, as are the follow-ups. You're making this site worse.
The article is short; go read it then come back and delete.
Sounds like more of a problem with the title than the person you're attempting to insult.
I read the article and now I got the impression the author should delete their project and blog.
Why?
What's up with the hate? It seemed like an interesting project to me, maybe not something I see myself using, but not something deserving hate.
The author should be thankful that people are not reading the article he wrote.
you go and look in /etc/services for what is bound to port 5009. the article might not be the most useful but these comments are completely off the mark and stupid.
The hosts file is enough; what is needed is a way to assign an IP address to a process/service the way you can with port numbers.
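On Linux you sort of already can: the whole 127.0.0.0/8 routes to loopback, so each service can get its own address. A sketch with hypothetical names (macOS needs the extra addresses aliased onto lo0 first):

    # one loopback IP per service, named in the hosts file
    echo '127.0.0.2 jekyll.internal'  | sudo tee -a /etc/hosts
    echo '127.0.0.3 fastapi.internal' | sudo tee -a /etc/hosts

    # same port, no clashes, because each binds its own address
    python3 -m http.server 8000 --bind 127.0.0.2 &
    python3 -m http.server 8000 --bind 127.0.0.3 &

    curl http://jekyll.internal:8000/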
No, go ahead. Tell me how just using /etc/services does what this does. Because I'm calling bullshit.
But go ahead. /etc/services, please, share with me how it's set up to do things like create the HTTPS certificate, make it trusted, and set up the domain. Go ahead.
Go ahead. You can ONLY use /etc/services.
Or, you can admit you don't actually have a clue as to what /etc/services does.
This is exactly the problem I see with all of this vibe-coded software: in a few years everything will be super fragmented; everyone will be using their own set of tools, or vibe coding them themselves. Communication between teams, or even between team members, will become very hard because of those differences. 'What do you mean production is down? On my vibe-coded dashboard everything is green!'
Why do people always assume that change is permanent?
It's never.
After decentralisation we always see centralisation. After a period of growth, a decline will follow. After the vibe coding hype, consolidation will follow. After rain comes sunshine.
> It's like someone should make a file... maybe in /etc ... and put short names for services in it... maybe it could be called /etc/services...
People shit-talk container orchestration systems like Kubernetes, but if anything they greatly simplified (if not completely eliminated) the need for this sort of network bookkeeping.
You forgot the /s at the end.
All our bookkeeping is now in YAML. Watch the spaces on your way out the door.
And don’t forget to quote your port assignments and version strings.
Cool project. Just yesterday I looked at https://portless.sh/ which is Vercel's take.
The file /etc/services maps names to port numbers, like /etc/hosts does for hosts.
E.g. "telnet localhost ssh" takes you to port 22 (not the default 23 for telnet). This works because /etc/services maps "ssh" to "22".
If you're sick of remembering port numbers, create some entries in your /etc/services.
Of course, only programs which use getservbyname to resolve port numbers will accept your names.
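A quick sketch of the round trip (the `myapp` entry is made up, and editing /etc/services needs root):

    echo 'myapp 3000/tcp' | sudo tee -a /etc/services

    getent services myapp    # -> myapp 3000/tcp (on Linux)
    python3 -c 'import socket; print(socket.getservbyname("myapp"))'    # -> 3000
    telnet localhost myapp   # connects to port 3000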
maintaining services files... i dont know why people keep insisting on that file while there are perfectly fine commands to pull from your boxes what is holding what port.
that is all beside the point though if you look at what you should be doing: keeping all this information in some kind of asset management system from which you can deploy things (which is kinda what k8s and docker etc. try to do (miserably)).
unless you are binding stuff to random ports on random boxes there is no need to do any of it at runtime and you can just consult your bookkeeping (for which /etc/services lacks a lot of the details you'd need anyway...)
I know it's mixing of layers, but I can't help but feel the IPv6 transition missed the boat when they didn't just get rid of ports in the process. They've changed so much else anyway.
Want to run another webserver instance or whatever on your computer? Get the OS to allocate a new IP for it. Ports be damned.
Could be implemented in a backwards compatible way by requiring all IPv6 TCP/UDP traffic to use a fixed port number.
ipv6 packet does not have any port field. ports are on the level of tcp and udp, and you don't have to use tcp or udp on top of ipv6. ipv4 packet does not have any port information either.
tcp6 is a thing though, was created at the same time as ipv6, and it does have ports, along with udp6. But if you really want one IP per stream and just hardwire port 1 or something, it's not like IPv6 does anything to stand in the way of that. Might have performance issues on some OS's binding thousands of IPs to one interface, but that's on them to fix. Bigger lift would be the APIs that would need to change to manage whole prefixes at a time instead of single IPs.
> ipv6 packet does not have any port field
Yes, that's why I said I know it was mixing of layers.
However ports are a layer violation in a strict sense, introduced as a workaround because there was no easy way to just add thousands of new IPs to a single host back in the IPv4 days. No need to continue a workaround that causes grief on a daily basis.
Conceptually it's doable on linux and ipv6. Have the listening program sit on that default port of 80.
Something involving socat, an any-IP / TCP routing rule, a VPS or other machine with a ipv6 /64 and plenty of duct-tape.
You'd get an application sitting on port 80, accessible via some unique ipv6 address (in the /64) on tcp port 80. They needn't be the same port number, but it would make it easier.
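A sketch of the duct tape on Linux, using the documentation prefix 2001:db8::/64 as a stand-in for a prefix actually routed to the box:

    # AnyIP: accept traffic to ANY address inside the /64
    sudo ip -6 route add local 2001:db8::/64 dev lo

    # give one app its own address, forwarding port 80 to a local service
    socat TCP6-LISTEN:80,bind=[2001:db8::a],fork,reuseaddr TCP:127.0.0.1:3000 &

    curl -g 'http://[2001:db8::a]/'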
There is no need to come up with "local TLDs" like .vibe, .local, .test and so on -- there is already an industry convention! macOS and most Linux distros support subdomains of localhost, so <anything>.localhost works. You still need the reverse proxy to do the host->port mapping, but you save yourself local DNS fiddling.
Portless is a great tool for this! I use it for all my apps now. Zero config + built in HTTPS locally.
https://portless.sh/
Example from the website:
- "dev": "next dev" # http://localhost:3000
+ "dev": "portless myapp next dev" # https://myapp.localhost
> There is no need to come up with "local TLDs" like .vibe, .local, .test and so on -- there is already an industry convention! macOS and most Linux distros support subdomains of localhost, so <anything>.localhost works.
That would work if your goal was to route traffic to localhost.
What if it isn't?
There are reasons why the likes of example.com exists.
From the article:
> So I built local.vibe — a friendly dashboard and local .vibe hostname for every local web app on your Mac. No more localhost:3000 vs localhost:5173 roulette.
> The whole thing communicates over a Unix socket acting as a reverse proxy. No external services, no accounts, no telemetry.
We’re discussing a tool that is designed for – and is only capable of – routing traffic to localhost. It’s perfectly reasonable to point out that there’s an easier solution for this use case.
It looks like this will win: https://en.wikipedia.org/wiki/.internal
example.com, and the reserved TLD ".example", exist for technical documentation and writing. If you are writing a comment on HN, or a curriculum for a networking class, then you can discuss "foo.example.com connects to bar.example.com" or "Let's hypothesize about two offices called accounts.example and human-resources.example"
The "example" domains are never supposed to reflect anything that is actually deployed onto LANs, or test labs, or the Internet, current situation notwithstanding.
https://en.wikipedia.org/wiki/.example
There are, likewise, IPv4 and IPv6 ranges that are reserved to be used in documentation. Not 192.168.0.0/16 or 10.0.0.0/8, but separate ranges (192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, and 2001:db8::/32) that writers only write about, and that are never deployed, not even in private.
localhost is only ever going to be the loopback interface, never across a network: https://en.wikipedia.org/wiki/.localhost#Conventional_use
See also: https://en.wikipedia.org/wiki/.test
The latter article lists foreign-language TLDs which serve the same purpose.
Some proposals are described here: https://en.wikipedia.org/wiki/.home
And there's also .local for mDNS on local network!
I've also come across projects using a public DNS record that points to 127.0.0.1 (something like localtest.me?). IMO that's way worse than using .localhost since you're trusting some rando not to change the DNS records and exfiltrate your meant-to-be-local traffic.
I did not mention .local, because it is covered in the linked articles: a special-use TLD, reserved for a certain purpose. It has often happened that LAN admins try to name something under ".local" and configure a zone for it in their BIND server. But this is incorrect, because ".local" is already managed by the zeroconf/mDNS protocols. It is a special case; and that is what ".internal" seeks to rectify, by giving y'all a TLD that can be truly internal and truly a zone under DNS server control, whatever that looks like for you.
As for 127.0.0.0/8 in the public DNS: https://utcc.utoronto.ca/~cks/space/blog/sysadmin/HowNotToDo...
As for localnet and localhost in general:
https://utcc.utoronto.ca/~cks/space/blog/sysadmin/LocalhostI...
https://utcc.utoronto.ca/~cks/space/blog/web/LocalhostSurpri...
".vibe" is not a TLD. It is not a registered TLD; it is not a reserved name. It isn't a domain at all. Go ahead, do a WHOIS lookup. Anyone who attempts to use such gibberish, even in documentation, deserves to be rudely surprised, someday in the future.
I like localias[1] for this problem. Not only do you get nice aliases for all your local ports, but you also get nice Caddy-managed TLS certs for them
[1]: https://github.com/peterldowns/localias
I thought that was the reason ‘lsof -i -P’ existed
Sounds similar to Vercel‘s portless CLI (https://portless.sh/)
I think this product demonstrates the atrophying of thought that results from too much LLM usage: design was obviously a long back-and-forth with a sycophantic LLM.
I find out what all my local servers are by `cat /etc/hosts`, because I put them in there. They run using an entry in the nginx config.
For short-lived stuff I don't even bother with that, I just use `whatever.localhost`.
If there was no LLM, the author would have put a little more thought into this, maybe done a google search, and realised that all he needed were two shell scripts.
The more you use LLMs, the less you actually think
> The real annoyance is that it wasn’t just one machine. It was layers.
> I wanted a simple launcher for all the things that aren’t traditional desktop apps. Not Finder, Alfred or Raycast.
The entire damn article is like this - why would I trust software to run on my local machine when it was written by someone who did not even take care writing a blog post? How much care would they have possibly put into reviewing their vibe coded slop if they couldn't even bother to review their blog post?
> I put them in there
That seems to run orthogonal to this. The primary benefit I see here is not having to care at all what ports apps are actually starting on. Just run them, and access them by name. Same as a regular website on the internet where one doesn't care about the IP.
> How much care would they have possibly put into reviewing
Just enough to ensure that it works for them, which is what really matters. Others go in knowing that as well, and add/change that base to their own preference. That's the world we're now in.
> Just enough to ensure that it works for them, which is what really matters.
If that's what really mattered they wouldn't have posted an article they didn't write trying to get traction on a product they didn't create from a userbase that doesn't need it.
That was my exact post a couple days ago: https://news.ycombinator.com/item?id=47936315 (didn't get much traction)
I couldn't agree more. It seems so odd to have an HN submission for some vibe-coded little "help-me tool" that is not clearly even needed. I wouldn't even say anything except the whole "vibe" thing is all over the article. It's just all a bit much. It's just sad.
Why not resolve everything with UNIX sockets instead? That way you can have them named and scoped, hiding behind port 443, since it's mostly HTTP anyway.
Does this work in the browser? How will paths to different resources used by the web app work?
works with curl, maybe there is a case to either build a proxy for UDS and expose them to a browser, or open a request ticket to browser maintainers to support UDS
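For reference, the curl side is just this (socket path hypothetical):

    curl --unix-socket /run/myapp.sock http://localhost/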
There's a simple method you can use with nginx and /etc/hosts; I wrote about it a couple of days ago [0]. I used it for an internal demo recently and realized that a new breed of devs have never seen a non-localhost URL run locally.
[0]: https://idiallo.com/blog/say-no-to-localhost3000-use-custom-...
Custom domain is nice if you're planning on a real host later, but nowadays you can just use the .localhost domain and skip the whole /etc/hosts editing thing.
I essentially do this.
Super simple. (although I use rewrites at my dns layer for the whole local lan, but whatever)
It also solves issues my password manager has with multiple services on the one host but with different ports, by putting each on its own second-level domain.
I wonder why not use nginx and some local DNS settings to just serve all these local services under a new, local URL.
Not too long ago I had a similar issue and solved with that.
I did the same using caddy for ease of getting https certificates
I mean, that's essentially what he's recreating here, it looks like.
This is a valid concern, certainly. I use kube for most things so it's not a problem, but my homeserver and its apps run on quadlets that I manage. In my case, I just added a README.md in the server account folder that each project's CLAUDE.md or whatever is configured to read. Then it selects a port and sticks that in the document and to be honest I have a few tens of services and it works. Haha, a direct replacement of machine for my own process.
Alternative https://github.com/peterldowns/localias
Granted, no fancy UI to start and stop things, but is it really needed?
Tbh this is not a single binary; you need dnsmasq, Go, and other things.
I created something similar to help me spin up complex apps in multiple worktrees with full port orchestration: https://outport.dev/
Not the same, but someone recently posted this "port" tool here on HN: https://github.com/raskrebs/sonar
HN Thread
https://news.ycombinator.com/item?id=47452515
I've built this twice before. The main problem that I hit is that the AI agents suck at the process lifecycle management: leaving processes alive, starting the same daemon multiple times, etc.
From a brief glance over the code I like the approaches I see. Using the `/etc/resolver/` mechanism is a new trick to me!
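For anyone else who hadn't seen it: macOS reads one file per domain from /etc/resolver/, so a sketch (the domain and resolver port are whatever your local DNS server uses):

    # send *.vibe lookups to a DNS server on localhost:5353
    sudo mkdir -p /etc/resolver
    printf 'nameserver 127.0.0.1\nport 5353\n' | sudo tee /etc/resolver/vibe

    # confirm macOS picked it up
    scutil --dns | grep -B1 -A3 vibe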
The interesting part to me isn't the port numbers, it's the automatic service start/stop, including idle route shutdown.
Vercel’s portless is a great alternative, but unfortunately it doesn’t work well with oauth flows. I’ve built portmap[0] to solve that. Also comes with skills which makes it work really great with coding agents (instructions in the readme).
[0] https://github.com/JonasKs/portmap
*.localhost works btw
https://datatracker.ietf.org/doc/html/rfc6761#section-6.3
I use Cloudflare Tunnel so most of the products I build are exposed and listed there. I just add comments for those that aren't exposed (eg. browser extension dev port) to that file too. A single doc means coding agents know to look there and keep it updated too.
This is literally what mDNS is for. Didn't even know that it was a thing until I needed it for some custom firmware I was writing recently. It's like DNS but also has service port advertisements.
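A sketch with avahi on Linux (the service name is made up; macOS has `dns-sd` equivalents):

    # advertise an HTTP service, port included, over mDNS/DNS-SD
    avahi-publish-service "my dev app" _http._tcp 3000 &

    # browse and resolve advertised services: shows host, address, and port
    avahi-browse --resolve _http._tcp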
I think about a decade ago pow did something similar, but using the .test domain, and it was perhaps Ruby-specific.
https://github.com/basecamp/pow/tree/master
Interesting.
I've been wanting something like this for local Dev, but I think more:
Per user DNS.
So if the process doing the lookup is my own then redirect to the named service.
I use subdomains on an OVH VPS, since I want to access the services outside the network, so I can use freshrss.mydomain.com. But anything that can rationalize port number sprawl is welcome.
Huh when I start a service in dev, I just click on the link in the terminal to visit the url. What is even the problem?
I hate these signs of LLM generated texts so much!!
> The real annoyance is that it wasn’t just one machine. It was layers.
Aspire.dev should be mentioned here.
I use the tailscale services feature for this, added benefit is I get https.
I'm slightly annoyed that vite's default port isn't 8483
why?
VITE typed on a T9 keyboard is 8483.
5173 spells Vite...
173 looks like ITE
5 in roman numerals is V.
T9 is predictive and based on a dictionary and training.
If you type "8483" on T9, your phone may offer "THUD" or "TITE" or all three, as choices.
But with a normal telephone keypad, if you dial, e.g. "(800) 555-VITE" then you will always dial "8483".
https://en.wikipedia.org/wiki/Phoneword
Also, a service port is always qualified by its protocol. There are separate port namespaces for each IP protocol that uses ports. "8483" is not a service port until you spell it out: 8483/tcp, or 8483/udp, or 8483/sctp, etc.
A TCP stream, for example, consists of a tuple: (source address, source port, destination address, destination port).
What is the benefit of using HTTPS for this particular use case?
Some browser functions only work over https, localhost is the exception. So if you change localhost:5173 to myapp.vibe it needs a valid certificate.
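One common way to get that valid certificate locally is mkcert, assuming it's installed (.vibe here matching the article's made-up TLD):

    mkcert -install     # create a local CA and add it to the trust stores
    mkcert myapp.vibe   # writes myapp.vibe.pem and myapp.vibe-key.pem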
And localhost being the exception is often quite painful. I've run into several projects that worked just fine on localhost, and then were a pain in the neck to convert to run in secure contexts.
Nice. An instant disappointment that there's no Linux support, but adding it should be a quick prompt away.
This project is essentially "give me some metadata & a command which takes env $PORT, and I'll handle the rest". Which is neat!
I am also sick of handling port numbers - I end up allocating them on a schema to different services, so for testing I can spool any VM/service combination and avoid crossover. But if I want the same service twice, ah...
It always fascinated me that ports don't have any kind of textual resolver, so you can bind to `:1234` and also say "please also accept `:foobar`". But that would itself require some kind of "port resolver" on a device, and that's another service to break and fix :)
There is /etc/services to map port numbers to service names, and getservbyname() to resolve the port numbers.
DNS for /etc/hosts and now vibe.local for /etc/services. What will they think of next!
SVCB DNS records
What I do is use a hash function to derive port number from service name.
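A minimal sketch of that idea (the range and hash are arbitrary choices):

    # same name always yields the same port in 1024-65535
    port_for() {
      python3 -c "import hashlib, sys; print(1024 + int(hashlib.sha256(sys.argv[1].encode()).hexdigest(), 16) % 64512)" "$1"
    }

    port_for myapp    # stable across runs and machines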
i have something like this too, currently a 60 line nodejs file
It is funny, I just built something like this last week and named it "Network". Additionally it scans for any type of data packets arriving at the SonicWall and checks whether they are approved by me or not. I am paranoid after using TP Link at home like a dumbass.
I am very out of the loop. What is wrong with TP Link? What are the risks with it?
Bind to Port 0
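i.e. let the kernel choose. A sketch:

    python3 - <<'EOF'
    # bind to port 0 and the OS assigns a free ephemeral port
    import socket
    s = socket.socket()
    s.bind(("127.0.0.1", 0))
    print("assigned port:", s.getsockname()[1])
    EOF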
This is a neat approach. One thing I wonder about is how it handles services that use the same port number across different protocols (like 443/tcp for HTTPS and 443/udp for HTTP/3). The /etc/services file approach has the same ambiguity, but at least it lists the protocol alongside the port. A lookup table that includes the protocol would be more robust for mixed environments.