teh's comments | Hacker News

I can't fully answer your question, but I did once spend about a week porting a few thousand lines of plain internal configuration to CUE, Jsonnet, Dhall, and a few related tools.

I was initially most excited by CUE, but the novelty friction turned out to be too high. After I presented the various approaches, the team agreed.

In the end we used Jsonnet, which turned out to be a safe choice. It hasn't been a source of bugs, and so far no one has complained that its behaviour is difficult to understand.


I feel you (like many people) got burned by the steep learning curve. Empirically, some pretty high-powered companies use Nix successfully. It's of course always difficult to know the counterfactual (would they have been fine with Ubuntu?), but the power to get SBOMs, patch a dependency deep in the dependency stack, roll back entire server installs, etc. really helps these people scale.

nixpkgs is also the largest and most up-to-date package set (followed by Arch), so there's clearly something in the technology that allows a loosely organised group of people to scale to that level.


NixOS has very limited usage, with few companies adopting it for critical or commercial tasks. It is more common in experimental niches.

One of the main issues with nixpkgs is that users have to rely on overlays to modify a package. This can lead to obscure errors, because if something fails in the original package or in a Nix module, it's hard to pinpoint the problem. Additionally, the heavy use of symlinks in the directory hierarchy further complicates things, giving the impression that NixOS is a patched-together and poorly designed structure.

As someone who has tried Nix, uses NixOS, and created my own modular configuration, I made optimizations and wrote some modules to scratch my own itch. I realized I was wasting time trying to make one tool configure other tools. That’s essentially what NixOS does through Nix. Why complicate a Linux system when I can just write bash scripts and automate my tasks without hassle? Sure, they might say it’s reproducible, but it really isn’t. Several packages in NixOS can fail because a developer redefined a variable; this then affects another part of the module and misconfigures the upstream package. So, you end up struggling with something that should be simple and straightforward to diagnose.


I know it's not a proper measurement, but I can't remember the last time I missed something in the AUR, whereas in my short time on NixOS I found two apps missing, plus one app that disappeared in a NixOS channel upgrade.


I feel the same way. Excited to see another attempt. But it's a C++ engine, so not really something I would want to expose to the internet.


I've looked into this but saw hugely variable throughput, sometimes as little as 20 MB/s. Even at full throughput, I think S3 single-key performance maxes out at ~130 MB/s. How did you get these huge S3 blobs into Lambda in a reasonable amount of time?


* With larger Lambdas you get more predictable performance; 2 GB RAM Lambdas should get you ~90 MB/s [0].

* Assuming you can parse faster than you read from S3 (true for most workloads?), that read throughput is your bottleneck.

* Set a target query time, e.g. 1 s. For queries to finish in 1 s, each record on S3 then has to be 90 MB or smaller.

* Partition your data in such a way that each record on S3 is smaller than 90 MB.

* Forgot to mention: you can also do parallel reads from S3; depending on your data format and parsing speed, that might be something to look into as well (see the sketch below).

This is a somewhat simplified guide (e.g. for some workloads merging data takes time, and we're not including that here), but it should be good enough to start with.

[0] - https://bryson3gps.wordpress.com/2021/04/01/a-quick-look-at-...
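A rough sketch of the parallel-read idea, assuming the AWS SDK for JavaScript v3 and that you already know the object's size (e.g. from a HeadObject call). Bucket/key names and the chunk size are placeholders:

    // Hedged sketch, not production code: parallel ranged GETs
    // against a single S3 object using AWS SDK v3.
    import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

    const s3 = new S3Client({});
    const CHUNK = 16 * 1024 * 1024; // 16 MB per ranged read (tune this)

    async function readChunk(bucket: string, key: string, start: number, end: number): Promise<Uint8Array> {
      const res = await s3.send(new GetObjectCommand({
        Bucket: bucket,
        Key: key,
        Range: `bytes=${start}-${end}`, // inclusive byte range
      }));
      return res.Body!.transformToByteArray();
    }

    async function readParallel(bucket: string, key: string, size: number): Promise<Uint8Array> {
      const reads: Promise<Uint8Array>[] = [];
      for (let start = 0; start < size; start += CHUNK) {
        reads.push(readChunk(bucket, key, start, Math.min(start + CHUNK, size) - 1));
      }
      const chunks = await Promise.all(reads); // fan out the ranged GETs
      const out = new Uint8Array(size);
      let offset = 0;
      for (const c of chunks) { out.set(c, offset); offset += c.length; }
      return out;
    }

Whether this beats a single sequential read depends on per-request overhead versus the per-connection throughput cap, so it's worth benchmarking at your chunk size.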


This is a great book!

After reading it I found it much harder to enjoy movies showing bad security, though (such as heists, nuclear anything, ...).

For example, from the book I learned about the IAEA recommendations for the safekeeping of nuclear material [1], and it's pretty clear that smart people spent some time thinking about the various threats.

Anyway, rambling. It's a great and very entertaining book, go read it!

[1] https://www-pub.iaea.org/MTCD/Publications/PDF/Pub1481_web.p...


I just spent some time implementing a lazy VM. Note also the push/enter vs. eval/apply implementation change in GHC described in [1].

[1] https://www.microsoft.com/en-us/research/publication/make-fa...
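Roughly: with push/enter the callee checks how many arguments are actually on the stack, while with eval/apply the caller inspects the function's arity and applies exactly that many. A hypothetical sketch of the eval/apply side (names and types are mine, not from the paper):

    // Eval/apply in miniature: the CALLER looks at the closure's arity.
    type Closure = { arity: number; fn: (...args: number[]) => number | Closure };

    function apply(clo: Closure, args: number[]): number | Closure {
      if (args.length < clo.arity) {
        // Too few arguments: build a partial application and wait.
        return { arity: clo.arity - args.length,
                 fn: (...rest) => clo.fn(...args, ...rest) };
      }
      // Exactly enough or too many: call with `arity` args, then keep
      // applying any leftovers to the returned closure.
      const result = clo.fn(...args.slice(0, clo.arity));
      const rest = args.slice(clo.arity);
      return rest.length === 0 ? result : apply(result as Closure, rest);
    }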


I think this is a common misunderstanding. The images are in the public domain. Nothing stops Getty (or you, or anyone) from selling them, even though you can just use them for free.

The value-add service that Getty offers is legal indemnification, i.e. they cover the legal costs if the image turns out to be copyrighted after all. To offer this service they spend some time and money upfront to research images' copyright status.

Whether you think that's good value for money is up to you.


> they spend some time and money upfront to research images' copyright status.

From the discussion a few days ago, that doesn't seem to be the case. It seems to be more like they just gamble on not getting caught most of the time. https://news.ycombinator.com/item?id=22340547


The other issue brought up recently is when they try to enforce their licensing of public domain images, which is a lot more shady. Selling you a licence, sure, why not. Complaining that you're using a public domain image without Getty's licence? Threatening legal action over the same?

There may be a lot of value in a lot of their portfolio. But there are some warty rough edges too.


I don't misunderstand it at all. I am aware it's legal. I just think that Getty should be completely transparent about the copyright status, instead of granting a restricted licence to use something they don't own the rights to grant in the first place.


If they really do indemnify you, it's actually a pretty huge benefit. It's pretty easy to use content that is 'royalty free' but then get sued later on when you find out it actually wasn't.


More often than not recruiters (external and in-house) make up large numbers to get you to reply. Here is a verbatim quote from a mail I got last year:

> For the right candidate year 1 comp will be up to £500k.

After going through a three-hour coding test, a phone screen, and an onsite, it turned out to be £120k + a smallish bonus. Maybe I'm not the right candidate, but most likely £500k was never on the table.

It's not the first time this has happened to me or to people I know, either. What I'm getting at is that I'd take the Oxford Knight compensation numbers with a pinch of salt. Those £300k jobs _do_ exist, but nowhere near as many as in the US.

Additionally, levels.fyi and Glassdoor data don't support there being a large number of £300k jobs in London.


Yeah, this seems like a common bait-and-switch tactic amongst recruiters these days; I've received the same kind of emails as you.


The original SES doesn't seem to do anything to prevent Meltdown/Spectre attacks [1].

This version removed direct access to "Date" [2], but I'm not sure I'd trust any code running in the same process space, given how hard it is to fix Spectre in general.

[1] https://github.com/google/caja/wiki/SES#current-date-and-tim...

[2] https://github.com/Agoric/SES/tree/master/demo#taming-dateno...
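For intuition, here's a minimal sketch of what "taming" Date means, assuming the goal is just to deny sandboxed code a clock. Illustrative only, not the actual SES implementation:

    // Illustrative only -- not SES's real code. The idea: give sandboxed
    // code a Date without access to the current time, so it can't build
    // the high-resolution timer that Spectre-style attacks rely on.
    const SharedDate = Date;

    const TamedDate = function (...args: unknown[]) {
      if (args.length === 0) {
        // `new Date()` would reveal the current time; refuse it.
        throw new TypeError("secure mode: current time unavailable");
      }
      return Reflect.construct(SharedDate, args);
    } as unknown as DateConstructor;

    TamedDate.now = () => {
      // Date.now() is the obvious high-resolution clock; remove it too.
      throw new TypeError("secure mode: Date.now unavailable");
    };
    Object.freeze(TamedDate);

Constructing specific dates still works; only reading the ambient clock is blocked.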


What I really want is a JavaScript API (it doesn't need to be a "VM", just a wrapper for an existing engine) that makes it trivial to manipulate JavaScript engine spaces, but where, instead of them merely being separate memory allocators (as would be the case if you allocated two JavaScriptCore runtimes or engines or whatever they are called), the code runs in a separate process that doesn't contain anyone else's memory or information, and all communication with it happens via some kind of IPC (which you would then minimise your use of).
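You can approximate that pattern today with OS processes; a hedged sketch, assuming Node.js and a hypothetical worker file that evaluates what it's sent and posts the result back:

    // parent.ts -- hypothetical: run untrusted code in a separate OS
    // process (its own address space) and talk to it only over IPC.
    import { fork } from "node:child_process";

    const worker = fork("./untrusted-worker.js"); // separate process
    worker.send({ expr: "2 + 2" });               // minimal IPC surface
    worker.on("message", (result) => {
      console.log("worker replied:", result);
      worker.kill();
    });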


> ...but I'm not sure I'd trust any code running in the same process space...

Can someone ELI5 how a separate process would fix Spectre/Meltdown?


Spectre relies on speculatively accessing data and extracting information about that data through a side channel, despite the speculative execution never committing. A separate process means the address spaces are separate, so speculative execution cannot reach the data.

Meltdown is similar, but because a CPU affected by Meltdown does not perform permission checks during speculative execution, you can read memory that the execution environment doesn't even have permission for, e.g. kernel memory.

The fix for Spectre is thus to consider only address spaces a security boundary; interpreters or JITs cannot be considered security boundaries any more (in general).
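To make the Spectre mechanism concrete, here's a heavily simplified, purely illustrative gadget (not a working exploit; a real attack also needs a precise timer for the cache-probing step, which is exactly why environments tame Date and friends):

    // Conceptual Spectre-v1-style gadget, illustration only.
    const arr = new Uint8Array(16);
    const probe = new Uint8Array(256 * 4096); // one cache line per byte value

    function gadget(i: number): void {
      if (i < arr.length) {
        // If the branch is mispredicted for an out-of-range i, the CPU
        // may speculatively load a byte beyond arr's bounds...
        const x = arr[i];
        // ...and this dependent access warms a cache line indexed by
        // that byte. Registers are rolled back; cache state is not.
        void probe[x * 4096];
      }
    }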


I think this is the UK's implementation of the EU's PSD2 directive (e.g. [1]), so it may not survive Brexit. Looking forward to what'll come out of it, though!

[1] https://www.tsys.com/news-innovation/whats-new/Articles-and-...


It should survive; the European Banking Authority is based in London (it will move post-Brexit), and the UK Treasury was a major influence on this legislation.

Worth saying also: Open Banking actually came out of the UK Competition and Markets Authority; it has just become tied up with PSD2 (as it's one way to achieve compliance with that legislation).


Haha. And my initial thought was 'you see, this is the type of thing the UK could leverage when arguing that the UK can thrive without the EU'.

