Hacker News | markonen's comments

In the performance tests they said they used "consensus among 64 samples" and "re-ranking 1000 samples with a learned scoring function" for the best results.

If they did something similar for these human evaluations, rather than just using a single sample, you could see how that would be horrible for personal writing.


I don’t understand how that is generalizable. I’m not going to be able to train a scoring function for any arbitrary task I need to do. In many cases the problem of ranking is at least as hard as generating a response in the first place.


From the PostgreSQL 17 Beta 1 announcement:

> PostgreSQL 17 adds a new connection parameter, sslnegotiation, which allows PostgreSQL to perform direct TLS handshakes when using ALPN, eliminating a network roundtrip. PostgreSQL is registered as postgresql in the ALPN directory.

I'm looking forward to being able to offload PostgreSQL TLS to a standard (non-pg-specific) proxy.
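If you want to try it once you're on 17, it's a single extra libpq connection parameter. A minimal example (host and database names are placeholders):

```shell
# Skip the SSLRequest roundtrip and do a direct TLS handshake (PostgreSQL 17+).
# Direct negotiation requires TLS, so pair it with sslmode=require or stricter.
psql "host=db.example.com dbname=app sslmode=require sslnegotiation=direct"
```

Because the handshake is now plain TLS with the `postgresql` ALPN identifier, a generic TLS-terminating proxy in front of the server can route it like any other TLS traffic.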


They’re probably amplifiers rather than repeaters. Optical amplifiers don’t need to decode the signal to work. Here’s Wikipedia on erbium-doped fiber amplifiers:

> A relatively high-powered beam of light is mixed with the input signal using a wavelength selective coupler (WSC). The input signal and the excitation light must be at significantly different wavelengths. The mixed light is guided into a section of fiber with erbium ions included in the core. This high-powered light beam excites the erbium ions to their higher-energy state. When the photons belonging to the signal at a different wavelength from the pump light meet the excited erbium ions, the erbium ions give up some of their energy to the signal and return to their lower-energy state.

https://en.wikipedia.org/wiki/Optical_amplifier
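Back-of-the-envelope, this is why amplifiers are dotted along the cable at all. The figures below are illustrative assumptions, not from the article: roughly 0.2 dB/km attenuation for modern single-mode fiber, amplifier spans of about 80 km, and a ~6000 km transatlantic run:

```python
# Rough sketch: loss per span and amplifier count for a transatlantic cable.
# All numbers are illustrative assumptions, not measured values.
fiber_attenuation_db_per_km = 0.2   # typical modern single-mode fiber
span_length_km = 80                 # common amplifier spacing
cable_length_km = 6000              # roughly New York to the UK

loss_per_span_db = fiber_attenuation_db_per_km * span_length_km  # dB each EDFA must recover
num_amplifiers = cable_length_km // span_length_km

print(f"{loss_per_span_db} dB per span, ~{num_amplifiers} amplifiers end to end")
```

Each EDFA only has to make up the ~16 dB lost in its span, entirely in the optical domain, which is why no per-wavelength decoding hardware is needed.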


I moved away from Cloudflare—to self hosting our network infrastructure—because, while this didn’t happen to us, I was very aware that it could. We had a great deal on Enterprise for a couple of years, but zero guarantees that it would last (and some indications that it wouldn’t). I wanted to stop praying that they wouldn’t alter the deal.


Do you mean to imply that cloud services at higher levels of abstraction are cheaper per unit of compute than simple VMs? I believe you’ll find that the opposite is true.

At the scale discussed here, there are no free lunches.


It depends on the scale, but running containers on a k8s cluster means your load will be distributed among the nodes according to capacity.

Managing VMs with dedicated resources directly means you have to distribute the load manually, leading to unused and wasted resources.


You absolutely do not have to distribute VMs manually. This [0] is a tiny Python script, run as a cron job, that migrates VMs in a Proxmox (also free) cluster according to CPU utilization. You could extend it for other parameters.

While I don’t personally have experience with more enterprise-y solutions like VMware, I have to imagine they have more complete solutions already baked in.

[0]: https://gitlab.com/tokalanz/proxmox-drs
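The core idea behind a script like that is simple enough to sketch. This is a toy illustration of the balancing logic, not the linked project's actual code; the data shape and threshold are assumptions:

```python
def pick_migration(nodes, threshold=0.2):
    """Suggest one VM migration to narrow the CPU gap between nodes.

    nodes: {node_name: {"cpu": utilization 0..1,
                        "vms": {vm_id: cpu_share}}}
    Returns (vm_id, source_node, target_node), or None if balanced enough.
    """
    busiest = max(nodes, key=lambda n: nodes[n]["cpu"])
    idlest = min(nodes, key=lambda n: nodes[n]["cpu"])
    if nodes[busiest]["cpu"] - nodes[idlest]["cpu"] < threshold:
        return None  # spread is within tolerance; do nothing
    # Move the smallest VM on the busiest node: cheapest live migration
    # that still narrows the gap.
    vm = min(nodes[busiest]["vms"], key=nodes[busiest]["vms"].get)
    return (vm, busiest, idlest)
```

A real implementation would poll these utilization figures from the Proxmox API and trigger live migrations, but the scheduling decision itself is just a few comparisons like the above.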


Not really, because Concorde died in the seventies, before the lie flat business seat existed.

BA and AF managed to keep the zombie fleet going very profitably all the way until the end in the early 2000s, and that business wasn't killed by the lie flat business class seat either. It was killed by the impossibility of continuing to operate a tiny fleet of '60s planes forever.

Now if you said that the reason we don't have ANY supersonic passenger jets today is that lie flat business seats are good enough, that's a more defensible position, but I'd still say that the overland flight restrictions, which limit any SST to just a couple of routes, are a bigger factor.

When I flew on Concorde the one thought I never had was "I wish I had a lie flat seat and half the airspeed".


It is the combination of the lie flat seat, the very limited range, and the overland restriction.

Cutting a six-hour flight to three hours is not really worth the premium. At the same time, no supersonic airliner has the range to do transpacific, where the time savings would be much greater.


Not sure why you're getting downvoted; this was definitely a key factor that made Concorde into a niche product.

It's not that customers preferred slower and cheaper flights over Concorde—they didn't, Concorde had very healthy average occupancy rates and operating the flights was very profitable for BA and Air France (they got the planes for free, of course).

It's that you can't fly a 1960s plane forever and you also can't amortize the design and development cost of new models with the only addressable market being first class customers travelling between the East Coast and a couple of European capitals (and this was directly caused by the overland flight restrictions).

Flying Concorde is one of my fondest memories :/


If you need macOS on the server for whatever reason, your only option is the Mac.

I have four Mac minis racked up for this, with two use cases: 1) iOS CI/CD and 2) some computer vision stuff using Apple’s Vision framework.

No complaints, but I obviously wouldn’t run anything that didn’t need to be on a Mac on these systems.

BTW, you can rack mount twenty Mac minis in the footprint of a single rack mounted Mac Pro (there’s a 1U mount that takes two and they’re small enough to mount on both the front and the back of the rack). So 20 M2 Pros per 5U.

They’re of course not a sysadmin’s dream but they do tend to stay up and Ansible works fine.


Apple's policies for external purchases are hilarious. The only goal is to be punitive.

For the External Link Account Entitlement that "reader" apps can use to link to purchase flows off-app, Apple forbids offering IAP in the same app. Why? Because they think this will discourage adoption.

For the new StoreKit External Purchase Link Entitlement that other apps can use for the same exact thing, Apple requires an IAP alternative. Why? Because they think this, too, will discourage adoption.


I think the crucial flaw here for the IPMI/BMC access use case is the fact that the card requires the server to be powered on to function. So if you accidentally turn the server off it’s game over for remote access.


Most servers can be configured to always power on after a power failure.

Then all you need is a remotely-operated power outlet, which is pretty standard for colos nowadays. Toggling the power outlet is as good as a human driving out there and poking the stupid ACPI power-on button.
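If the server has any BMC at all, the power-restore behavior usually doesn't even require a trip into the BIOS. A hedged example using ipmitool (supported policy names vary by vendor):

```shell
# Show which power-restore policies this chassis supports
ipmitool chassis policy list

# Power the server back on whenever AC is restored,
# so toggling the PDU outlet always brings it up
ipmitool chassis policy always-on
```

With that set, the remotely-switched outlet is a complete out-of-band power-cycle path.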


Yes, but it's not a very deep problem in my case, since I can order Remote Hands for this very niche scenario. A reboot doesn't make it lose power, and that's all it's going to see for the foreseeable future.

