Ideally the fees would be similar to the Norway model, where some tickets are tied to the income of the driver, in this case the pre-tax earnings of the company that created the driverless car.
That can make sense for individuals (opinions differ), but it's not as if the company is advertising "we get you there at 1.2x the legal speed". They're not competing on that, and they're not choosing to speed deliberately the way an individual might (for example, because of economic incentives when their hourly rate is high).
If they were, then it would make sense to fine them some multiple of the benefit they got from that advertising tactic; but as it is, I don't see why it should be different from anyone else's ticket. The company isn't likely to enjoy a flood of this administrative work on top of the cost of the actual fines, so they'll work to minimise violations anyway.
Assuming you divide it down to the earnings per car, that makes perfect sense. Of course right now they aren't making any profit at all, and by the time it is relevant it is likely that the cars will commit substantially no violations at all.
Isn't Norway's income-based fine only for drunk driving? Finland has one for large speeding offences, but it is based on net taxable income after deducting business expenses for taxi drivers, and Waymo's is still negative.
If they become profitable you'd want to normalize by number of miles, unless you just want an incentive system to get more people on the road (extra drivers) and increase the chance of humans suffering road injuries, to boost employment in an internal service sector.
But even then, running a more efficient fleet than a competitor for higher margins would be penalized. You want to disincentivize skimping on safety for margin, not disincentivize better maintenance and fuel economy.
Various services have existed, such as portmap(8), though NFS and similar services have often suffered from the "too complicated to debug" problem, where devops (then sysadmins) would try turning the system off and then back on again in the hope of resolving the issue du jour. You might get lucky and determine that node number three (of many) was cursed, leave it switched off for the Season of Mammon (more commonly known as Christmas), and retire it quietly later. Hypothetically.
Generally host and port mapping gets shoved somewhere into the configuration management layer and hopefully does not become too complicated (or grow too many security holes), as this could vary from "configuration files and a few scripts" to database and services layers that few can debug, especially not a sysadmin at 3 AM running on an hour of bad sleep. Hypothetically.
this is a nice idea, but
idk why, in macos if i do
`nc -l 127.0.0.1 gopher`
and then try to open the url "http://127.0.0.1:gopher/" - safari does not open it, and no requests show up in the `nc` output. curl rejects it too:
* URL rejected: Port number was not a decimal number between 0 and 65535
* Closing connection
curl: (3) URL rejected: Port number was not a decimal number between 0 and 65535
so yes, the ports have names, which is nice, but in practice it does not make life easier.
i chose gopher port just as an example. try with any other service name mapped to a port number from /etc/services and the result will be the same. the OP's goal was to use many http/https services, so we are talking about many http(s) services.
i just wanted to make the point that even if you have service names in /etc/services, it is not possible to use those names easily to host/access http(s) services.
The names are the kind of servers that listen on those ports (by default) like ssh, telnet, http, and smtp. They are not subdomains or for URI parsing.
the URI syntax does contain ":port" tho, but in practice it only accepts a decimal number.
the OP made a tool which helps them avoid using port numbers. people commented in a way that looked like they were laughing at him, as if he'd reinvented the wheel, bringing up /etc/services. well ok, i decided to try using /etc/services for exactly that purpose: using names instead of port numbers.
would it be possible to add "myapp 60001/tcp" to /etc/services and then work with "http://localhost:myapp"? NO! browsers do not translate these names into port numbers. netcat does. curl does not.
so probably the OP's solution is not that questionable and really solved their need? and "good old friend /etc/services" is not useful for this? i dont know what it is useful for as running services on non-standard ports actually helps with hiding from security/vuln scanners and is practiced widely.
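A quick way to see both halves of this (a minimal Python sketch; assumes a standard /etc/services is present):

```python
import socket
from urllib.parse import urlsplit

# Name-to-port lookup itself works fine: getservbyname(3) reads /etc/services.
print(socket.getservbyname("http", "tcp"))  # 80 on a standard system

# But URL parsers require a *decimal* port, so a service name is rejected --
# the same error curl reports for "http://127.0.0.1:gopher/".
try:
    urlsplit("http://127.0.0.1:gopher/").port
except ValueError as err:
    print("rejected:", err)
```

So the name database exists and is queryable, but nothing in the URL-parsing path consults it.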
Maybe the joke has gone too far, but the point most people have been trying to make is that the issue the OP is trying to solve has been solved for years. It's called a reverse proxy. Automating the configuration, as the OP does, doesn't change the fact that it is a reverse proxy. `/etc/services` serves a different purpose.
Also, URL parsing is a completely different matter. Browsers are primarily an HTTP(S) client. If you do not specify the ":port", they will try to connect to 80 (HTTP) or 443 (HTTPS), because those are the default ports for a web server. Other schemes have different defaults: if your URL has the ftp scheme, the default port would be 21.
I know you're trying to be funny but ... technically it's 100% clear: You should talk HTTP, because that's the URL scheme here. The port makes no difference. You just happened to use a port by name. For all we know I run my HTTP server on some NFS related port so all the script kiddies try all the wrong exploits on it or something ;)
Well, the entire context of this is https so anything else is immaterial. The only reason it would be gopher is if you didn't read the post or don't understand the basics of https.
This is not possible since it is ambiguous. It is impossible to parse "http://127.0.0.1:gopher/" since that would be valid as either "scheme://user:host/" or "scheme://host:portname/".
if you configure sshd to listen on port 443, does it become an https server? i was just trying to demonstrate: pick any port from /etc/services and try to use the name instead of the port number. no, it does not work well when trying to use it for locally hosting http(s) services. so to address the irony and sarcasm of the messages i was replying to:
zdw: It's like someone should make a file... maybe in /etc ... and put short names for services in it... maybe it could be called /etc/services...
tolciho: And then they might code up some sort of service lookup tool thingy to use on the train wreck that is the modern web.
$ getent services gopher
gopher 70/tcp
Many clients also do not support getservent(3) or portmap or DNS SRV records or NIS or LDAP or ActiveDirectory, so one might wonder why there are so many half-baked, failed, or overly complicated attempts at solving whatever the problem is here, even before "AI has entered the chat".
i know, but the OP's goal was to host/access http(s) services with names and avoid port numbers, and gopher service name was chosen by me as an example. my point was that /etc/services cannot be used for the OP's need.
if you host an http(s) service on port 11111 you can reach it with url http://127.1:11111, but url http://127.1:vce/ would not work in most software.
Try http://127.0.0.1:hkp instead of http://127.0.0.1:11371 for an OpenPGP HTTP keyserver. HTTP will work, but using the service name won't. Does that make what they're trying to say clearer?
That would mean not being able to vibe code up an entire app to deal with something as insurmountable as looking at a list of numbers, and post it on HN for those sweet, sweet upvotes. Why would they not do that?
Perhaps we could even make the file the port itself, perhaps calling it a “socket”? A “unix socket” would be a great name. If we could place all these files behind a local reverse proxy then we could use localhost/jekyll or localhost/fastapi. It’s just a dream
Sure, but they are running web apps they've vibe-coded (hence the .vibe tld), and for that use case of many web apps that I run in docker containers, I use nginx-proxy [0]. All the container needs is a VIRTUAL_HOST environment variable with the domain, and all my router needs is an address entry for the wildcard subdomains. I even have nginx-proxy on an internet-accessible staging server.
Not modern enough. Unix is too low level, antiquated, and discriminates against those who just want to get shit done instead of reading manpages or documentation by hand.
I am pretty convinced you need root on most systems to update the DNS resolution mechanism system-wide (e.g. to edit /etc/hosts, or to run a local DNS server and point /etc/resolv.conf at it).
Technically you can set the HOSTALIASES variable to point to a custom hosts file, but that only works with programs that use gethostbyname(3). (Which is most of them? IDK.)
The article is about the dude not knowing which service is where, so he codes up a JSON mapping. He could just update his /etc/services for the same thing. Oh but wait, he mentioned AI agents; that changes everything!
you go and look in /etc/services for what is bound to port 5009. the article might not be the most useful, but these comments are completely off the mark and stupid.
Maybe not hate, but I think that kind of blog, and every single person posting AI slop to LinkedIn, deserve to be shamed publicly for it. It's just that no one does that, and those who do are frowned upon and downvoted to death, like here. The reason I asked "Why?" was to confirm that there are in fact others who think doing that is shameful, that I am not the only one. The outcome is disappointing.
HTTP/1.1 and later have the browser supply the domain name that was used to access the site, so even though *.localhost all resolve to 127.0.0.1, nginx will pluck out the correct server configuration and proxy_pass to the correct backend.
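A minimal sketch of one such server block (the hostname and backend port here are made up):

```nginx
# nginx picks this block when the Host header is myapp.localhost
server {
    listen 80;
    server_name myapp.localhost;           # hypothetical app name

    location / {
        proxy_pass http://127.0.0.1:60001; # hypothetical backend port
        proxy_set_header Host $host;       # preserve the original Host header
    }
}
```

One block like this per app, all listening on the same address, disambiguated purely by `server_name`.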
This is the exact problem I see with all this vibe-coded software: in a few years everything will be super fragmented; everyone will be using their own set of tools, or vibe coding their own. Communication between teams, or even between team members, will become very hard because of those differences. 'What do you mean production is down? On my vibe-coded dashboard everything is green!'
Why do people always assume that change is permanent?
It never is.
After decentralisation we always see centralisation.
After a period of growth, a decline will follow.
After the vibe coding hype, consolidation will follow.
After rain comes sunshine.
> It's like someone should make a file... maybe in /etc ... and put short names for services in it... maybe it could be called /etc/services...
People shit-talk container orchestration systems like Kubernetes, but if anything they greatly simplified (if not completely eliminated) the need for this sort of network bookkeeping.
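For instance, a Kubernetes Service gives a workload a stable, cluster-wide DNS name, so nobody maintains a host/port registry by hand (names below are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp            # reachable as myapp.<namespace>.svc.cluster.local
spec:
  selector:
    app: myapp           # routes to any pod carrying this label
  ports:
    - port: 80           # the well-known port other pods connect to
      targetPort: 8080   # whatever the container actually listens on
```

The name-to-endpoint mapping is maintained by the cluster itself as pods come and go.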
Unironically, your comment describes something I'd prefer; it touches one of my biggest pain points with Linux.
As a newb, I'm sure there's something with a mycommonproblemd-style name that has a stateful interface. But sometimes it all adds up to make things feel complex. And it lets me make stupid mistakes, like forgetting to close or open a port in firewalld, or disabling a container but forgetting to commit a change to my systemd units.
It's nice to just have a nice file called myservice.nix that tracks the firewall port, name, systemd startup and update scripts.
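A rough sketch of what such a myservice.nix module could look like (the service here is just a stock Python HTTP server on a made-up port, for illustration):

```nix
{ pkgs, ... }:
{
  # firewall rule and service definition live side by side in one file
  networking.firewall.allowedTCPPorts = [ 8080 ];

  systemd.services.myservice = {
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      ExecStart = "${pkgs.python3}/bin/python -m http.server 8080";
      Restart = "on-failure";
    };
  };
}
```

Change the port in one place and both the firewall hole and the unit follow.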
Hacker News loves to make snarky comments about everything to do with K8s and YAML, and yet in my experience the number of times an issue was caused by actual YAML can be counted on one hand.
Way more often it’s developers who can’t figure out that their http library only supports 2 concurrent connections, or emit garbage/malformed log lines and then bitch that they can’t see their app logs because we dropped them, or can’t be fucked to do “kubectl describe” in their own developer namespace that they have full permission for.
If you truly experience issues with just using YAML, then you probably need to skill up.
Most of the issues with YAML are really issues with people who think that since "configuration as code" is good, that "code as configuration" must also be good.
No, go ahead. Tell me how just using /etc/services does what this does. Because I'm calling bullshit.
But go ahead. /etc/services, please, share with me how it's set up to do things like create the HTTPS cert, make it trusted, and set up the domain. Go ahead.
Go ahead. You can ONLY use /etc/services.
Or admit you don't actually have a clue as to what /etc/services does.
IMO, Kubernetes isn't inevitable, and this seems to paint it as such.
K8s is well suited to dynamically scaling a SaaS product delivered over the web. When you get outside this scenario - for example, on-prem or single node "clusters" that are running K8s just for API compatibility, it seems like either overkill or a bad choice. Even when cloud deployed, K8s mostly functions as a batteries-not-included wrapper around the underlying cloud provider services and APIs.
There are also folks who understand the innards of K8s very well that have legitimate criticisms of it - for example, this one from the MetalLB developer: https://blog.dave.tf/post/new-kubernetes/
Before you deploy something, actually understand what the pros/cons are, and what problem it was made to solve, and if your problem isn't at least mostly a match, keep looking.
This is a need it fails at miserably. k8s reminds me of the RAID recentralization anti-pattern, where you fix a hardware failure that rarely occurs in exchange for knowing that simple higher-level mistakes or security problems will now tank something that has grown too large to fail.
Kubernetes, in the form of k3s, was a critical success factor for us with the onprem deployment of our SaaS product.
What's the problem with a single-node cluster? We use that for e.g. dev environments, as well as some small onprem deployments.
> Even when cloud deployed, K8s mostly functions as a batteries-not-included wrapper around the underlying cloud provider services and APIs.
Which batteries are not included? The "wrapper around the underlying cloud provider services and APIs" is enormously important. Why would you prefer to use a less well-designed, more vendor-specific set of APIs?
I seriously don't get these criticisms of k8s. K8s abstracts away, and standardizes, an enormous amount of system complexity. The people who object to it just don't have the requirements where it starts making sense, that's all.
> Kubernetes, in the form of k3s, was a critical success factor for us with the onprem deployment of our SaaS product.
What surprises and gotchas did you have to deal with using k3s as a Kubernetes implementation?
Did you use an LB? Which one? I'm assuming all your onprem nodes were just linux servers with very basic equipment (the fanciest networking equipment you used were 10GbE PCIe cards, nothing more special than that?)
We sell to enterprise customers. All of them deploy our solution on internal cloud-style VM clusters. We use the Traefik ingress controller by default.
There really weren't any particular surprises or gotchas at that level.
In this context, I've never had to deal with anything at the level of the type of Ethernet card. That's kind of the point: platforms like k8s abstract away from that.
It's also difficult for data pipelines or data-intensive things. At several companies we've run into the problem of needing to put an ML model behind an API, and pods getting killed because API health checks are basically incompatible with a container that is fully under load but still working fine.
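One partial mitigation (a hypothetical deployment fragment; the endpoint and numbers are made up) is to loosen the liveness probe so that slow responses under load don't trigger restarts:

```yaml
# container spec fragment: give a busy inference container headroom
# before the kubelet declares it dead
livenessProbe:
  httpGet:
    path: /healthz      # assumed lightweight health endpoint
    port: 8080
  timeoutSeconds: 10    # a slow reply under load is not an instant failure
  periodSeconds: 30
  failureThreshold: 5   # ~2.5 minutes of failures before a restart
```

Serving the health check from a separate thread or port, so it isn't queued behind inference requests, also helps.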
I followed the link to flexographic ink, and now I'm wondering whether boutique fine art flexography could or should exist. Like lithography, but more plastic.
It is used to strengthen materials. For example, if plaster has crumbled, or the paint on a canvas has become flaky, or wood has rotted, Paraloid B-72 can be used to hold everything together. The issue is that generally it is not reversible. Therefore one should always look first at varnishes that can easily be removed and reapplied, but sometimes only Paraloid can hold everything together.
Yes, but that esoteric nature is the charm of HN at its best.
This is unusual as posts go, but it's not totally unreasonable and even though I wouldn't have an immediate use, it's fascinating, leads to further exploration (like another commenter mentioning the inks) and knowledge gets filed away.
I try to remember posts like this when people are less positive about HN! :-)
It's a relatively soft plastic and I don't think you can realistically build a uniform, good-looking layer that's 1/8" thick, if that's what you mean. If you need that thickness, high hardness, and nice appearance, I think your best bet is just a sheet of glass or acrylic on top.
It can be used as protective varnish, but that would be a very thin layer, probably 0.1 mm or something like that.
It's solvent-based, so it won't set well in thick layers and it will shrink significantly as the solvent evaporates. You can do thick layers with solvent-free thermoset resins such as epoxy, but epoxy will yellow over time.
Purchase it as crystals and dissolve in acetone or ethanol to the desired concentration. It will self-level depending on concentration; allow each layer to evaporate before applying the next.
The issue is that it does yellow, but only after 25 to 50 years. The challenge is that it is very difficult to reverse.
In the restoration of my house I allow its use in very specific cases. It is very useful, for example, in strengthening wood that has rotted. Sometimes Paraloid is the only thing that can be used, but it needs to be used with care.
It does discolor over time. The point is that one should be thinking about the impact over centuries and not years.
It needs to be used with care and other alternatives need to always be considered.
For a painting or building that has survived for half a millennium, we need to use methods that will preserve the object for another 500 years.
Too many times I hear people say we will just use Paraloid.
I'm especially curious about the high upvote count, considering the Wikipedia article as well as the substance in general is not that interesting IMHO.
The high number of upvotes is the same phenomenon as the comment chain full of people patting themselves on the back for enjoying esoteric content on HN. They didn't read it; they just like to imagine themselves as the sort of person who would read it. They probably have an apartment with a shelf full of curated, tastefully selected novels that were purchased used for the proper patina and arranged just so, then forgotten until it's time to subtly attract their guests' attention to how clever they are. They probably have a couple of Hemingway references memorized that they bring out when the time is right.
Yes, the article mentioned firming piano hammers. From what I remember, a piano hammer is a shaped piece of wood (or several?) with a leather strip around the striker part? What is the difference for you between hardening and softening the hammer, and how would it be done with this... is it penetrating? (an acetone base would enable that; acetone is used for carrying chemicals through a surface). Could you soften the hammers by replacing the leather strips, or by soaking them to loosen and expand the presumably compacted fibres?
In my wider life in the UK, speaking to people associated with pianos (from a piano tuner, to school premises teams), it is often not worth the commercial expense to repair old pianos unless they are of particularly good quality or have some sentimental value.
The hammer is felt around wood. You don't replace the felt, you'd replace the entire hammer, but then you'd likely want to replace all the hammers to get matching sound anyway.
There's a solution you can add to soften the hammers, but I don't know what chemical it is or how well it works since I haven't tried it yet; you can also needle the felt to fluff it up.
wouldn't it be more accurate to say it's their star trek? admittedly i'm not a gundam fan, but I don't see it talked about or merchandised nearly as often as evangelion.
Maybe not in Western countries, but Gundam is HUGE in Japan and neighboring countries like Taiwan. A big part of it is that the merchandise is heavily focused on model kits, like Warhammer on steroids.
Oh yeah, I forgot about gunpla. I think the reason I'm so unfamiliar with that side of the mecha world is that it's so fractured if that makes sense. I have friends into it that I could ask but I don't need another expensive hobby haha
It's actually a bad example - there is barely anything around Kyoto station except a few hotels and some shopping malls. The main shopping/entertainment area and almost all tourist attractions are north of it, requiring connection by bus or subway.
The areas around major stations in basically any other city are far more developed. Look at Osaka-Umeda for example. I don't know if that's due to the historical buildings or the relative lack of good railway within the city itself (Kyoto is mostly a hub to get between other lines)
The original comment was "I think that though we are a railway company, we consider ourselves a city-shaping company." Kyoto is absolutely not built around its station. Walk a few blocks away and there's nothing but regular apartments! The true centre is Shijo Kawaramachi.
The station itself is a pretty active hub. We arrived there 9:30 AM to visit teamLabs Kyoto (which is just walking distance away from the station) and it was already pretty packed in the station.
But I think your observation/comment maybe misses the mark: the rail operators may still end up owning some of the commercial real estate nearby, whether it's office buildings, hotels, etc. It doesn't all have to be shopping or dining; the rail operator owning the real estate near the transit hubs provides an incentive to provide service to that hub, to create more value from those holdings.
In my travels through Japan and Taiwan, rail stops are almost always hubs of economic activity of all sorts. It's a selling point when searching for accommodations while planning trips. Easy access to food and shopping. Taiwan night markets in cities, for example, are almost always near major rail station of some kind (light, metro, train). No need to go very far to get from one point of interest to another.