
Many dictatorships actually do. The question is more like "is it a process that only exists on paper?". If an election ends with a 98% approval of the supreme leader (or whatever the designation), then it's probably rigged and just for show...


To give an example, Italy had an election with 98.5% for the fascist party in 1929. That vote was to confirm or reject the list of house members appointed by the party. I mean, there were no candidates to choose from; the ones on the list would be... elected?


Pray tell


Another small correction: in order to MITM HTTPS, you need access to either a trusted root certificate and key, or a key and certificate with the CA bit set, signed by a valid root CA.


Assuming you haven't somehow stolen the real site owner's private key, you would need to produce a certificate for that DNS name, signed with a key you do have.
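As a sketch of why the signature matters: anyone can mint a certificate for any DNS name, but not one that chains to a trusted root. With openssl (file names and the 1-day validity here are arbitrary choices for the demo):

```shell
# Anyone can generate a key and self-sign a certificate for example.com;
# what an attacker can't do is get it signed by a CA the browser trusts.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout mitm.key -out mitm.crt -days 1 -subj "/CN=example.com"

# The certificate happily claims to be for example.com ...
openssl x509 -in mitm.crt -noout -subject

# ... but it fails validation against the system trust store.
openssl verify mitm.crt || echo "untrusted, as expected"
```

The `verify` step is exactly what a browser does (plus SCT and revocation checks), which is why the whole attack hinges on controlling or subverting a trusted CA.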

Which is something you could in principle do if you are a trusted root CA. But this creates a smoking gun. The bogus certificate is a public document, you're always giving it to the client, and for Chrome, Safari and Chromium Edge you are also obliged to publicly log the certificate, where everybody can see it forever, in order to have an SCT (proof of logging) which those browsers insist on seeing.

Modern rules require a root CA to disclose any intermediate CAs it creates, even ones not currently in use (e.g. because they are still being tested), if they could issue trusted certificates, unless the intermediate's certificate is technically constrained (which is complicated, but a general-purpose CA is not technically constrained for the purposes of this definition).

In practice, most outfits offering "MITM" type capabilities target corporate environments, education, that sort of thing, where you can say "all employees/students/whatever shall trust our private CA FOO" and then you can MITM using the trusted FOO CA. So this doesn't interact with the Web PKI overseen by m.d.s.policy at all. If you don't want to get MITM'd, don't trust some sketchy private CA.


Maybe a sufficiently crafty vendor of MITM equipment could prevent MITM site certificates that are signed by evil intermediate CAs from appearing in CT logs by filtering access to those CT logs. But it is a risky proposition for the vendor, as you've said.


Since it is about public institutions, any public servant disobeying will be disciplined. That can include strongly worded letters, demotions or even firing. For public institutions, spending money on (for them) illegal goods, activities or licenses is not possible, which means that any receipt they hand in for that won't ever get reimbursed. Which also means they wouldn't get the goods in the first place, because the business selling them knows it will get stiffed.


Isn’t it notoriously difficult to fire or demote anyone in the French bureaucracy?


This pops up every few years. I can remember building a liquid cooled LAN-party rig at some point.

But imho immersion cooling in particular is a dead end. Contact problems with cabling lower reliability, because the insulating coolant creeps between the metal surfaces. Maintenance is unbelievably convoluted: not only do you have to drain all the fluid out to pull a machine, you also need to clean each and every plug, socket and pin you touch, because the contact surfaces will be coated with insulating coolant. This could only be fixed with a complete redesign of everything, so that all or at least most contacts are "dry". Fluid-tight plugs are very expensive, and anyway, most ideas there won't work for critical components such as CPU and RAM.

Liquid cooling will fare better, because you can strike a balance: slap a liquid cooler on high-power components like the CPU and GPU, and cool low-power components, i.e. most of the board, by air as usual. But even there, maintenance is somewhat of a hassle; in addition to all the cabling you also need to deal with piping, leaks and drippage (which aren't supposed to happen, but trust me, they do). I think the only field where this is really going to be used is HPC for applications where floorspace is limited, e.g. if your network cabling must be within a certain length limit of your central network node, so you need to cram all the racks for the supercomputer you are building within some 20m circle or the like.

HPC below the top of the line, and ordinary datacenters, are better served by just limiting power density and continuing air cooling as usual. Datacenter floorspace is expensive, but not _that_ expensive, so you will always fare better (and usually more environment-friendly) with air cooling.


I think this is a nice summary. As a point for water cooling: it's easier to use the generated heat; it can for instance be used to heat nearby buildings. With air cooling this is much more difficult. I think water cooling will become dominant in new data centers as environmental laws set stricter requirements for waste heat.


Yes, heat reuse is easier with water cooling since the losses are smaller, but even air cooling will produce warm water in the end. Regulations requiring a certain amount of heat reuse are coming, but as a first step, this probably won't force direct water cooling.

For those regulations, there is also the important aspect of free cooling, i.e. just using outside air as a medium, or using an air-air heat exchanger like a Kyoto wheel. Mandated heat reuse (without appropriate exceptions) would make those illegal, yet in certain situations they are the best (environmentally and financially) methods of cooling. E.g. in a moderately hot summer you can use 20-30°C outside air without a problem, but if you mandated heat reuse at that time, you would have lots and lots of excess heat on your hands with nobody buying it. So while some regulation is coming, I hope and think it won't issue a strict mandate to reuse heat, and will still allow free air cooling rather than effectively forcing direct water cooling.


> With air cooling this is much more difficult

To my understanding, DCs use air conditioning systems for cooling. These are already heat pumps - why would it be more difficult to just pump that heat into district heating instead of outside air?


Because the temperature spread a given heat pump uses isn't really suitable for district heating, your output water would only be in the 50 to 60°C range at most, too low for typical district heating systems. One can get around that with multi-stage heat pumps or different coolants, but both are expensive investments typically not viable for existing systems.
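A back-of-the-envelope illustration of why the output temperature matters (ideal Carnot figures; the 35°C return temperature and the target temperatures are illustrative, not from any specific system):

```python
# Ideal (Carnot) coefficient of performance for a heat pump:
#   COP = T_hot / (T_hot - T_cold), with temperatures in kelvin.
# Real machines reach maybe half of this, but the trend is the same:
# the bigger the temperature lift, the more electricity per unit of heat.

def carnot_cop(t_cold_c: float, t_hot_c: float) -> float:
    """Ideal COP for lifting heat from t_cold_c to t_hot_c (Celsius)."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return t_hot / (t_hot - t_cold)

# Server return water at 35°C, lifted to various district-heating levels:
for t_out in (55, 70, 90):
    print(f"35°C -> {t_out}°C: ideal COP ≈ {carnot_cop(35, t_out):.1f}")
```

Lifting to 90°C roughly halves the ideal COP compared to 55°C output, which is the economic argument against feeding existing high-temperature district heating networks.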


If it's not enough for district heating because of the distance and heat losses, it could be enough for a greenhouse right next to a DC.

Economically the greenhouse would have a distinct advantage of using free heat, and already having enough electricity wired nearby.

This, of course, only makes sense during cold seasons, or somewhere like Alaska or Sweden. During hot months, 40-50°C waste heat seems completely useless.

Maybe next to a sea such waste heat could help evaporatively desalinate seawater, while cooling back to reusable temperatures.


> so you will always fare better (and usually more environment-friendly) with air cooling.

Sorry, but this is not true. One of the key drivers of liquid cooling IS efficiency. Liquids are far more effective at heat transfer than air. Additionally, a liquid can carry a lot more heat. Yes, you can run higher density; you can also indirectly free-cool servers year-round, even in tropical climates.


You are both right and wrong, because you (and I) are not clearly distinguishing between different kinds of efficiency and effectiveness.

Liquid cooling is, without question, more /effective/. It makes things cooler, it has a far higher capacity for heat transport.

However, with /efficiency/, you weigh the results against the effort put in: a solution is more efficient if the quotient result/effort is higher. But in the case of cooling, there are two ways to measure effort: energy expended or money expended. In terms of energy efficiency, yes, liquid cooling is better, because of the aforementioned higher effectiveness of heat transport, lower energy expenditure for pumps/fans and easier energy reuse. Financially, however, all that liquid handling, custom piping, pumping, coolant, etc. is far more expensive than a few air ducts and fans. Even with air-air heat pumps, air cooling is still less expensive, so better financial efficiency. That's why everybody is doing it as the default atm.
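The two notions of efficiency can be made concrete with the quotient above. All numbers here are made up purely for the sketch (not real datacenter figures); the point is only that the two rankings can disagree:

```python
# Same result (heat removed), measured against two kinds of "effort".
# All figures below are hypothetical, chosen only to illustrate that
# energy efficiency and financial efficiency can rank systems differently.
heat_removed_kw = 1000.0

air    = {"cooling_power_kw": 300.0, "capex_usd": 1_000_000}  # hypothetical
liquid = {"cooling_power_kw": 100.0, "capex_usd": 3_000_000}  # hypothetical

for name, sys in (("air", air), ("liquid", liquid)):
    energy_eff = heat_removed_kw / sys["cooling_power_kw"]  # result / energy
    money_eff = heat_removed_kw / sys["capex_usd"]          # result / dollar
    print(f"{name}: heat per cooling-kW {energy_eff:.1f}, "
          f"heat per capex dollar {money_eff:.6f}")
```

With these (invented) inputs, liquid wins on energy per unit of heat moved while air wins on heat moved per dollar, which is the distinction the comment draws.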


The problem with liquid cooling, ultimately, is you're still dumping to air. You're actually doing not much more with liquid cooling than just changing where the heat concentration is located.

Unless your radiator is outside of the building itself, you're dumping that waste heat right back into the system over time.


>The problem with liquid cooling, ultimately, is you're still dumping to air.

There's a lot of things you can do outside of a server chassis that you cannot do inside of a chassis. You can have heat exchangers with far more surface area than what you are volume constrained to inside of a chassis. You can run evaporative cooling towers. If you're in the right climate, you can even radiate that heat to space.

>You're actually doing not much more with liquid cooling than just changing where the heat concentration is located.

You're using a more efficient heat transfer medium which does a lot of things for you. You can downsize your chiller tonnage, reduce/eliminate CRAC/CRAH units, and all the UPS required to keep those systems running. All of this translates to CAPEX and OPEX savings.

>Unless your radiator is outside of the building itself, you're dumping that waste heat right back into the system over time.

No one is suggesting that you can plonk some servers into a tank of liquid and the thermal energy magically disappears. Of course there has to be a thermodynamic "sink" that you are dumping to: the atmosphere, the ocean, the ground, space, etc. Liquid cooling is a more efficient means of transferring heat from source to sink.


Isn’t that the point?

You use a liquid to transport the heat from denser compute (lowering costs), then run it through a high efficiency radiator (eg, drip/spray towers like with industrial AC). Your compute is denser and your interface with the air is more efficient.


Yes. Liquid cooling is surface area arbitrage: you turn a small surface area into a large one, with a good conductor between them.
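The arbitrage follows from the basic convection relation Q = h·A·ΔT. The heat transfer coefficients below are typical textbook orders of magnitude, not measured values:

```python
# Q = h * A * dT: heat moved scales with the transfer coefficient h
# (W/m²·K), the wetted surface area A (m²), and the temperature
# difference dT (K). Typical order-of-magnitude coefficients:
h_forced_air = 50.0      # W/m²·K, forced convection in air
h_forced_water = 3000.0  # W/m²·K, forced convection in water

def area_needed(q_watts: float, h: float, dt_kelvin: float) -> float:
    """Surface area required to move q_watts at a given h and dT."""
    return q_watts / (h * dt_kelvin)

# Moving 500 W off a chip at a 30 K temperature difference:
print(f"air:   {area_needed(500, h_forced_air, 30):.3f} m²")
print(f"water: {area_needed(500, h_forced_water, 30):.5f} m²")
```

A water-cooled cold plate needs orders of magnitude less contact area than an air heatsink for the same heat, and the loop then dumps that heat through a large, cheap air-side radiator elsewhere.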


What's your opinion of the Microsoft Ignite 2022 demo by Mark Russinovich of their next-gen immersion-cooled setup for Azure compute nodes? Link: https://youtu.be/PO5ijv6WDv0?t=370


Not good. As you can see, even the upper-side connections are submerged, and the plugs still look like standard components. To say nothing of the backplane connectors on the lower side and the RAM and CPU connections. They seem to have just taken a blade enclosure and flipped it backside up. So yes, it looks expensive, maybe impressive for some audience, but totally impractical, even though they claim it to be a custom design.


Your main concern seems to be with ongoing maintenance, but my understanding is that the "hyperscalers" like Azure don't do maintenance. They assemble everything once and then let failed components remain in-place and powered off.

Google championed this over a decade ago. They figured that the cost of staff, spare parts, diagnostics, etc. outweighed the cost of the hardware. If you factor in the complexity of liquid cooling, it might not make sense to repair hardware even if a single compute node costs north of $20K.


If you detonated the moon, broke it up into small pieces and smeared them out over its orbit, tidal friction would cease. Tidal friction is a major component of the slowdown of the earth's rotation.


> If you would detonate the moon, break it up into small pieces […]

Close to the plot of a recent Stephenson sci-fi novel:

* https://en.wikipedia.org/wiki/Seveneves


I hated the fact that you had to get 9/10 through the book to find out how it derived its name. The number 7 appears right at the start, but it's a misdirection.


I got about halfway through before I started skimming. Interminable.


Basically, things like the weather can influence the speed of rotation. Since angular momentum is conserved, the rotation rate changes when mass moves closer to the axis of rotation or farther out. That measurably includes mass like water in clouds and leaves on trees.
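A toy conservation-of-angular-momentum estimate shows the scale involved. The mass and altitudes below are illustrative inputs, and a point-mass model is used, so treat the result as an order of magnitude only:

```python
# Conservation of angular momentum: I * omega = const, so a change in
# the moment of inertia dI changes the day length by roughly T * dI / I.
I_EARTH = 8.0e37  # Earth's moment of inertia, kg·m² (approximate)
DAY_S = 86400.0

def day_length_change(mass_kg: float, r_from_m: float, r_to_m: float) -> float:
    """Day-length change (s) from moving mass radially (point-mass toy model)."""
    d_inertia = mass_kg * (r_to_m**2 - r_from_m**2)
    return DAY_S * d_inertia / I_EARTH

# Illustrative: 1e16 kg of water lifted from the surface (r ≈ 6371 km)
# to 10 km altitude near the equator.
dt = day_length_change(1e16, 6.371e6, 6.381e6)
print(f"day lengthens by about {dt * 1e6:.1f} microseconds")
```

Even an enormous redistribution of water only shifts the day by microseconds, yet modern timekeeping resolves changes at that level, which is why weather shows up in the rotation data.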


Very much so.

Look at astronomical software, e.g. xephem. There will be a "clocks" tab that displays a number of different clocks: TAI (atomic time, no leap seconds), UTC (global civil time), mean solar time (GMT, which ISN'T UTC) and finally, derived from the aforementioned, "sidereal time", which is the one you really need to adjust your telescope. Sidereal time is basically derived from a year with one more day: the earth's motion around the sun adds one more rotation of the background stars per year, which works out to a drift of roughly 4 minutes per day.

https://en.wikipedia.org/wiki/Sidereal_time

Oh, and then there is stuff like the Julian date, which you need to look up the myriad catalogues and tables needed for corrections, because everything "wobbles" even more than you'd think.

Yes, dropping leap seconds will remove 1 table lookup from the above. But astronomical time systems are so complex that that change is a drop in the ocean.
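The roughly-4-minutes figure falls straight out of the day counts: a year contains one more sidereal rotation than solar days, so each solar day the stars gain about 1/366.25 of a day. A quick check:

```python
# A year of 365.25 solar days contains 366.25 sidereal rotations,
# because the orbit itself contributes one extra turn of the sky.
SOLAR_DAY_S = 86400.0
DAYS_PER_YEAR = 365.25

# Spread the extra rotation over the year to get the sidereal day length:
sidereal_day_s = SOLAR_DAY_S * DAYS_PER_YEAR / (DAYS_PER_YEAR + 1)
drift_min_per_day = (SOLAR_DAY_S - sidereal_day_s) / 60

print(f"sidereal day ≈ {sidereal_day_s:.1f} s")    # ≈ 86164 s
print(f"drift ≈ {drift_min_per_day:.2f} min/day")  # ≈ 3.93 min
```

This simple ratio lands within a fraction of a second of the measured sidereal day (86164.0905 s), confirming the ~4 min/day drift quoted above.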


(insert link to canonical "standards" xkcd)

