I think most people agree on that point, but what are truckers going to do after they lose jobs to the robots? In the USA, there is never any planning for that, they just say "oh well" and in a generation those folks die out, having lived a shitty, destitute life with no chance of recovery.
The fact that there are too few younger truck drivers means that most young people have already found higher-paying, better jobs than truck driving currently offers.
There's an ideological assertion made whenever automation happens: the workers will retrain and be better off.
If it were true, Youngstown, Ohio and Flint, Michigan would be jewels of the Midwest - bustling metropolises of highly skilled retrained workers in wondrous utopias. You'd have AI unicorns popping out of Huntington and Wheeling, West Virginia.
Whether you count the number of bankruptcies, overall mortality rate, number of offspring, percentage that own versus rent... the fervent assertion that going from a union job to hustling for, say, Postmates is somehow the rising tide lifting all boats is baseless.
Doesn't matter though. It's ideologically, not materially, based, so evidence is irrelevant.
There are state-interventionist ways to make it work. China moved from agrarian to industrial. South Korea, Taiwan, Japan... The difference is they don't have this Pentecostal snake-handling level of blind faith in the free market, where they go around like Peter Popoff preaching Hayek and Rothbard as if it were sacred scripture.
I challenge anyone who seriously proposes this to first spend a month in a wheelchair. You quickly discover that your sense of scale and freedom of movement are largely a function of your physical capability and financial comfort.
I'm confused by this. Making public infrastructure people-first, not car-first, would in your opinion make things more difficult and expensive for handicapped people?
The town I live in has many streets without sidewalks, and even on the ones with sidewalks, many of those sidewalks are entirely unsuitable for wheelchairs. Designing the streets so that pedestrian needs are prioritized over cars would make the streets more handicap accessible, not less.
People. As in people walking. My town is not designed in a way that is conducive to walking. It isn't safe: either there are no sidewalks, or the sidewalks are in disrepair, aren't wide enough for two people to pass each other, or aren't set back far enough from the road to keep you from feeling like someone could run you over.
But the roads and dozens upon dozens of acres of parking lots? Perfectly fine. Millions are spent every year maintaining them while sidewalks are neglected, there are no bike lanes or paths, and it can take twice as long to take a bus as it would to just walk along the dangerous roads.
I would just think I should be able to walk to a restaurant or the library with my kid safely, but I guess because I can do it in a car I should shut up and stop worrying while everybody gets excited about autonomous vehicles making infrastructure even more car-centric.
With water? Like, hose it down? It's mostly ammonium phosphate anyway and afaik it's water soluble.
Edit: yes, it moves it around, and just like the cleaning person at the office, you move it into the water table or drainage system. Or do you separate out your dirt when you mop a floor or wash your clothes?
That isn't actually removing anything, it's just spreading it around.
Removing dirt from the carpet and washing it down the drain is fine because ordinary "dirt" (i.e. soil) is made of non-toxic or biodegradable stuff. By contrast, washing toxic materials or heavy metals into the water table puts them exactly where you don't want them. There's a reason it's illegal to pour used motor oil down the drain.
And there are plenty of things it's legal to pour down the drain, but illegal to put in rivers, because it (grey water) needs treatment before release into the environment.
Sodium nitrate has a bitter aftertaste to me. A bit like baking soda. I can usually taste it then I check the labels and sure enough I see 'preservative (252)'.
Interesting. After this comment I mentioned it to a family member who works in natural medicine and diet (no fan of preservatives themselves) and they were surprised that I could taste it. So maybe there is a genetic component. I mean, it's more of an aftertaste than a taste, but it's obvious to me. Like if you eat pancakes with too much baking soda in them, it leaves a kind of numbing effect and slight bitterness on the tongue... Or maybe that's genetic too, haha.
You can't have an AI that fills in things automatically and then expect a signature on that document to be legally binding.
As soon as you modify the content or suggest what someone fills in, you are no longer a disinterested third party. Ask any notary or go look at DocuSign, they explicitly won't advise you on how to complete a form aside from basic things like making sure a field isn't blank or contains a number and not a string.
I don't hold that viewpoint, but if even the critics can see this as something evil, that should tell you something.
You can not like your neighbors partying and playing music until 3 AM, but also have the moral compass to know that setting fire to their house is not the solution.
For example, Voice of America publishes "Learning English" radio/tv broadcasts, podcasts, and news articles. They are produced with a limited vocabulary (I think a few thousand words), shorter sentences, and are spoken slower. They often match up with native language reporting of current events so you can listen to both for context clues.
Having people all over the world able to speak a basic level of English helps further the dominant role of the US in international trade, allows our military to use friendly locals as translators anywhere they go, and gives people around the world some level of "connection" with us - shared common ground to work from.
VoA does not broadcast propaganda; they hold themselves to a very high standard of reporting only the truth. That is why repressive governments hate it, as do people who want to create them.
The controversy section of their Wikipedia page lists at least 5 specific examples where they were criticized for direct political messaging - so certainly not a neutral party (whatever that means).
Also, you seem to think propaganda means "not true," when it is more often true things that promote your interests. What matters is that someone is paying to highlight certain information over other perspectives.
HTTP keep-alive still limits the number of outgoing connections to 65535. Pipelining suffers from the same known issues addressed in the article.
But I agree, it is a solved problem unless you really have a lot of incoming connections. Using multiple outgoing IP addresses fixes that even for very busy load balancers, and since IPv6 is common today you will likely have a /64 to draw addresses from.
On modern systems you have about 28k ephemeral ports available. 65,535 is the total number of ports (good luck trying to use them all). Either way, if you have more than 20k connections open to a single backend (remember, Linux does connection tracking using the 4-tuple, so you can reuse a source port for different destinations) you are doing something seriously wrong and should hire competent network engineering folks.
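If you want to see that number on your own machine, here's a quick sketch (Python, assumes Linux with procfs; the default range on many distros works out to roughly 28k ports):

    # Read the kernel's ephemeral port range and count how many source
    # ports are available for outgoing connections (Linux only).
    with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
        low, high = map(int, f.read().split())
    print(f"{high - low + 1} ephemeral ports ({low}-{high})")
    # A common default is 32768-60999, i.e. about 28k ports, and as noted
    # above that budget applies per 4-tuple, not as one global counter.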
> Either way, if you have more than 20k connections open to backends you are doing something seriously wrong
I don't see how that is a fringe or rare case. With a load balancer (using no pipelining or multiplexing), the number of simultaneous outgoing HTTP connections to backend systems is at least the number of simultaneously open incoming HTTP connections. Having more than 28k simultaneous incoming HTTP requests is not a lot for a busy load balancer.
Now with pipelining (or limiting to 28k outgoing connections), the load balancer has to queue requests and multiplex them onto the backends as connections become available. Pipelining suffers from head-of-line blocking, further increasing the latency the load balancer can introduce. In any case, queuing will increase latency for the end user. If you use HTTP/2 multiplexing, you can go past those 28k incoming connections without queuing on the load balancer side.
> the number of simultaneous outgoing HTTP connections to backend systems is at least the number of simultaneously open incoming HTTP connections
No it isn't. You establish a pool of long-lived connections per backend. The load balancer should be coalescing in-flight requests. At that traffic volume you should also be doing basic in-memory caching to sink things like favicon requests.
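To make the pooling idea concrete, here's a rough sketch in Python using requests (this is not how any particular load balancer implements it internally; the backend address and pool size are made up):

    import requests
    from requests.adapters import HTTPAdapter

    BACKEND = "http://10.0.0.5:8080"  # hypothetical backend

    session = requests.Session()
    # Keep up to 50 persistent keep-alive connections to this backend and
    # reuse them for every request, instead of opening a new connection
    # (and burning a new source port) per incoming client.
    session.mount(BACKEND, HTTPAdapter(pool_connections=1, pool_maxsize=50))

    for _ in range(1000):
        session.get(f"{BACKEND}/favicon.ico")  # served over pooled connections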
I am not going to respond further as this chain is getting quite off topic. There are plenty of good resources available from relevant Google searches, but if you really still have questions about how load balancers work my email is in my profile.
> You establish a pool of long-lived connections per backend
Yes, and you would do the same with HTTP/2. You haven't addressed the head-of-line blocking problem caused by HTTP/1.1 pipelining, which HTTP/2 completely solves. Head-of-line blocking becomes an increasing issue when your HTTP connections are long-lived, such as with WebSockets, large media transfers, or streaming.
It's amazing how people who have visibly never dealt with high loads can instantly become vehement against those reporting a real issue.
The case where ports are quickly exhausted is with long connections, typically WebSocket. And with properly tuned servers, the 64k-port-per-server limit is reached very quickly. I've seen several times the case where admins had to add multiple IP addresses to their servers just to hack around the limit, declaring each of them in the LB as if they were distinct servers. Also, even if Linux is now smart enough to try to pick a random port that's valid for your tuple, once your ports are exhausted the connect() system call can cost quite a lot, because it performs multiple tries until it finds one that works. That's precisely what IP_BIND_ADDRESS_NO_PORT improves, by letting the port be chosen at the last moment.
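For those who haven't seen it, the option is set before bind(); a minimal Python sketch (Linux-only, addresses are placeholders, and older Python builds may not export the constant, hence the fallback to the raw value from linux/in.h):

    import socket

    # Fall back to 24, the value of IP_BIND_ADDRESS_NO_PORT in linux/in.h,
    # in case this Python build doesn't export the constant.
    NO_PORT = getattr(socket, "IP_BIND_ADDRESS_NO_PORT", 24)

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, NO_PORT, 1)
    # Bind to a specific local address but leave the port as 0: the kernel
    # defers the source-port choice to connect(), when it knows the 4-tuple.
    s.bind(("192.0.2.10", 0))       # placeholder local IP
    s.connect(("192.0.2.50", 80))   # placeholder backend
    print(s.getsockname())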
H2 lets you work around all this more elegantly by simply multiplexing multiple client streams into a single connection. And that's very welcome with WebSocket, since usually each stream has little traffic. The network also sees far fewer packets, since you can merge many small messages into a single packet. So there are cases where it's better.
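From the client side the effect is easy to demonstrate with something like httpx (a sketch only; it assumes httpx is installed with its h2 extra, and the URL is a placeholder for a server that speaks HTTP/2):

    import asyncio
    import httpx

    async def main():
        # Twenty requests issued concurrently, multiplexed as streams over
        # a single HTTP/2 connection -> one source port instead of twenty.
        async with httpx.AsyncClient(http2=True) as client:
            rs = await asyncio.gather(
                *(client.get("https://example.org/") for _ in range(20))
            )
            print({r.http_version for r in rs})  # ideally {'HTTP/2'}

    asyncio.run(main())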
Another often overlooked point is that cancelling a download over H1 means breaking the connection. Over H2 you keep the connection open, since you simply send an RST_STREAM frame for that stream within the connection. The difference matters on the frontend when clients abort downloads multiple times per browsing session (you avoid setting up TLS again), but it can also make a difference on the backend, because quite often an aborted transfer on the front will also abort an H1 connection on the back, and that's much less fun for your backend servers.
> It's amazing how people who have visibly never dealt with high loads
I've built multiple systems at 1M+ r/s and Tb+ scale.
> The case where ports are quickly exhausted is with long connections, typically WebSocket
Yes, HTTP/2 is great for WebSockets. I was never advocating against it. The comment I was replying to was under the false assumption that you needed an outbound backend connection for every incoming connection. All of his concerns are solved problems in any modern open source load balancer. See https://www.haproxy.com/blog/http-keep-alive-pipelining-mult... ;)
But it's the same for other long sessions, such as slow downloads and git clones. Sites concerned about the number of source ports are not those dealing with just favicon.ico and bullet.png, but mainly those dealing with long transfers.
Also there's a cascade effect on large sites: as long as your servers respond fast, everything's OK. Then suddenly a database experiences a hiccup, everything saturates, and once you enter the situation where the LB has all of its ports in use, it can take a while to recover because connect() gets much slower (I've observed delays of up to 50ms!). At this point there's no hope of recovering in a sane time, because excess connections are not even served by the servers, they're sitting in the accept queue in the system, so they keep a port busy, slowing down connect(), which means even more connections are needed for other incoming requests. If the LB is not properly sized and tuned, you'd rather just kill it to get rid of all the connections at once, wait a second or two for the RST storm to calm down, and start again.
H2 can avoid that, at the expense of other issues I mentioned in another response above (i.e. don't multiplex too much to the servers, 5-10 streams max, to avoid the risk of inter-client HoL). But H2 also comes with higher xfer costs than H1 for large objects due to framing.
I would second this. No security person says "I don't have enough problems to look into."
Security spending is down, so navel gazing products are going to be a really hard sell. Figure out how to actually solve problems in an automated/semi-automated way and ship that instead.
The other issue with all of these tools is the onboarding/integration burden, and the terrible visibility you get as a result. A big market gap I see is a tool that can use the vulnerabilities it discovers to further its information collection, just like a real attacker would. Found Splunk creds in a log? Awesome, start using them. Syslog in an S3 bucket... boom. You are now hitting the stuff that every other ASM/visualization tool has missed.
Makes sense -- we're focused on fixing problems rather than just being yet another Jira ticket generator.
> Found Splunk creds in a log? Awesome, start using them. Syslog in an S3 bucket... boom. You are now hitting the stuff that every other ASM/visualization tool has missed.
This is my dream :). This past weekend I was playing around with something where if I clicked on a SecretsManagerSecret node then it'd give me the CLI commands to assume the roles and then retrieve the secret. It'd be neat to take it a step further and be able to click here and get a shell -- I don't think we're _that_ far off from that (but for now to be very clear we're focusing on read-only actions only since a security tool with permissions to do scary things in your environment kinda defeats the purpose).
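For what it's worth, the read-only flow is only two API calls; here's a rough boto3 equivalent of those CLI commands (the role ARN and secret name below are made up for illustration):

    import boto3

    ROLE_ARN = "arn:aws:iam::123456789012:role/secret-reader"  # hypothetical
    SECRET_ID = "prod/db/credentials"                          # hypothetical

    # Assume the role that is allowed to read the secret...
    creds = boto3.client("sts").assume_role(
        RoleArn=ROLE_ARN, RoleSessionName="readonly-demo"
    )["Credentials"]

    # ...then retrieve the secret value with the temporary credentials.
    sm = boto3.client(
        "secretsmanager",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(sm.get_secret_value(SecretId=SECRET_ID)["SecretString"])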
The websites opening an audio context without using it to play anything are probably doing bot detection.
Different browser engines and operating systems implement audio processing differently, so if you play a completely inaudible sound and then record it back (from the API not the microphone) you end up with a signature.
You can use that signature to see if the browser is lying about its user agent, is running in headless mode, or falls into all sorts of other interesting edge cases that are not a real user buying widgets.
They definitely thought about it, but fingerprinting is already so easy, and really difficult to stop even if you started the web platform from scratch. Nobody is going to accept "websites can't play any audio because it would make fingerprinting sliiiiiightly easier".
Maybe I'm an extreme outlier, but I don't want 99.99% of websites to be able to make any noise at all. And for the remaining ones I could live without audio too.
You don't "pay more" to get more young people and women to apply for jobs moving spent nuclear fuel; you give the job to the robots.