nathankleyn's comments | Hacker News

We had the same reticence about Dependabot missing before we made the switch, but realised Renovate works with Bun and is a good enough stand in for now until support arrives.

Absolutely zero regrets, the cumulative savings across everything that is faster and the massive step up in DX is worthy of the hype.


Yeah, I had heard about Renovate supporting Bun. I guess I was reluctant to start working with another third-party dependency management system when I already knew how to set up and use Dependabot, and I assumed Bun support was coming soon-ish. But maybe the benefits of moving over to Renovate and getting to use Bun now outweigh the costs. <thinking-about-it>


Actions and Codespaces are listed as having "degraded performance" now too - seeing a lot of action builds disappear into the ether and never get added to the queue to run.


5mins later: WAIT, YOU FORGOT ALL THESE 20 RUNS!!!!


Yeah 20 or so runs. And I'm like: well they did say added to the queue successfully.


Thanks to the ULEZ (ultra-low emission zone) in London, this is very much already happening — anecdotally (because I can't find any easy numbers right now, everything I can find is from 2020) a huge number of the private and public taxis in London are now hybrid or full electric.

Apparently there are already 5k TX electric taxis out there, which is a good start:

https://levc.com/levc-celebrates-sale-of-5000th-tx-electric-...

https://levc.com/tx-taxi/overview/


It's what they call the releases of Chrome — M91 is version 91.


I think the number of comments here asking about the M indicate that it really is obscure jargon. The Chrome release blog doesn't use the M consistently, about:chrome doesn't use the M, and the M doesn't appear in the user agent string.


If they had just said Chrome 91, everyone would have been OK.


Propagating "noalias" metadata to LLVM has finally been enabled again recently in nightly [0]. However, it has already caused some regressions, so it is not clear whether we will go through another revert, fix-in-LLVM, re-enable cycle [1]. Sadly this has happened several times already [2] because, exactly as you say, basically nobody else has forged through these paths in LLVM before.

[0]: https://github.com/rust-lang/rust/pull/82834

[1]: https://github.com/rust-lang/rust/issues/84958

[2]: https://stackoverflow.com/a/57259339


Seriously, this has been years now. Is this understandable, or does this tell us something bad about LLVM?


> Is this understandable, or does this tell us something bad about LLVM?

LLVM is a large project that's mostly written in pre-modern C++, and "noalias" is a highly non-trivial feature that affects many parts of the compiler in 'cross-cutting' ways. It would be surprising if it did not turn up some initial bugs.


Initial, yes, but this was first uncovered in Oct 2015. That seems like long enough to fix it.


It’s not a single bug, it’s a bunch of different bugs in the interactions between noalias and various analysis and optimisation passes.


Aliasing analysis is a complicated part of the compiler, and it underpins a lot of optimization passes. It’s not an easy thing to bolt on.


TBF the internal API could be designed more pessimistically, as in llvm could drop noalias annotations unless they’re explicitly maintained (/ converted).

This means optimisation phases would need to explicitly opt-in and aliasing optimisations would commonly be missed by default, but it would avoid miscompilations.


This is understandable. Rust really uses restrict semantics in anger, more than any other language I know of. Have you seen restrict used in a C codebase? The LLVM support for restrict just doesn't get exercised much outside of Rust.
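For contrast, here's a minimal Rust sketch (the function names are mine, not from any of the linked PRs): every `&mut` reference is guaranteed unique, so rustc can tag both pointer parameters `noalias` in the LLVM IR it emits - the moral equivalent of writing `restrict` on every pointer in a C signature:

```rust
// With two `&mut i32` parameters, rustc (when mutable-noalias is enabled)
// marks both pointers `noalias` in the generated LLVM IR, so LLVM may
// assume the two stores below never touch the same memory.
fn add_both(a: &mut i32, b: &mut i32) {
    *a += 1;
    *b += 1; // may be reordered/cached freely: `a` and `b` cannot alias
}

fn main() {
    let mut x = 1;
    let mut y = 2;
    // The borrow checker statically rejects `add_both(&mut x, &mut x)`,
    // which is why Rust can emit `noalias` pervasively where C cannot.
    add_both(&mut x, &mut y);
    println!("{} {}", x, y); // prints "2 3"
}
```

You can inspect the attribute yourself with `rustc -O --emit=llvm-ir` on a toolchain where the feature is enabled.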


Also, there are a lot of Fortran compilers using LLVM now, and Fortran has the information for noalias as well.


I used it extensively in my bc (https://git.yzena.com/gavin/bc) in the math to tell the compiler that the out BigNum is not aliased by any of the in operands. (See https://git.yzena.com/gavin/bc/src/branch/master/src/num.c#L... .)


I use it in C++ on most signal-processing stuff. I think Eigen will use it if you don't let it use intrinsics for SIMD. I also use g++, so I wouldn't have encountered it anyway.


This is what happens when you're trying to add a hairy feature to a big legacy codebase.


Referring to the LLVM codebase as legacy makes me feel old…


Or something bad about noalias?


Ah damn, that's a real shame. Something to be said about how rarely restrict is used (I usually only touched it for particle systems or inner loops of components where I knew I was the only iterator).


GDPR applies to any company that handles the data of EU citizens, regardless of whether that company is located in the EU or not.


EU citizens OR residents


What happens if a non-EU company won't comply? Can the EU sue it? Or block its website?


GDPR has little or nothing to do with citizenship. It focuses on presence in the EU, not on citizenship in the EU. See Article 3.

It applies to:

1. Processing that takes place in the context of processors and controllers that are in the Union, regardless of whether or not the processing itself takes place in the Union.

2. Processing the data of subjects who are in the Union by controllers or processors who are not in the Union if the processing is related to offering goods or services to such subjects in the Union or the processing is related to monitoring the behavior of such subjects that takes place in the Union.

One of the recitals elaborates on offering goods or services to subjects in the Union, and it includes this:

> In order to determine whether such a controller or processor is offering goods or services to data subjects who are in the Union, it should be ascertained whether it is apparent that the controller or processor envisages offering services to data subjects in one or more Member States in the Union. Whereas the mere accessibility of the controller’s, processor’s or an intermediary’s website in the Union, of an email address or of other contact details, or the use of a language generally used in the third country where the controller is established, is insufficient to ascertain such intention, factors such as the use of a language or a currency generally used in one or more Member States with the possibility of ordering goods and services in that other language, or the mentioning of customers or users who are in the Union, may make it apparent that the controller envisages offering goods or services to data subjects in the Union.

I'd guess that HN would argue that they are not in the Union, so don't fall under #1, and not monitoring behavior, so don't fall under the second prong of #2, and that they did not envisage offering goods and services to people in the Union, getting them out of the first prong of #2.


The GDPR only applies to entities it can be enforced against.

There's a whole world outside of Europe, which does not follow European law.


There was a post maybe a year ago about GDPR, and the general feeling on HN was that all websites should follow it in principle even if not required by law.

I guess that means all sites, except HN.


Sadly Nashorn, the JS engine that was built into the JVM, has been deprecated as of Java 11: https://openjdk.java.net/jeps/335

It had so many compatibility issues with real JS that it was kind of untenable to use it with scripts designed for other JS environments: https://jaxenter.com/nashorn-javascript-engine-deprecated-14...

The only real way to deal with JS now is Rhino (which still exists!), or some custom binding to V8 or something via JNI (yikes).

Our work uses Nashorn a lot for scripting and this is a big blocker for us to move to newer Java versions.


The real way is to adopt GraalVM, which is the alternative being proposed.

https://www.graalvm.org/docs/reference-manual/languages/js/


Hey! I'm one of the co-maintainers of the project here. I've posted a very similar reply to a very similar comment below at [1], but to replay the main points:

We absolutely agree this tool only solves the easiest part of anonymising data, and internally we rely on our team of data scientists to do the difficult parts. This tool is absolutely not up to the task of anonymising a dataset in such a way that it could be made public. For us, it's about risk management vs effort: from a security perspective there are scenarios where we can use samples of data that have gone through this process and substantially decrease the risk of holding data internally in multiple places, without significant effort. If we were to ultimately make any of these datasets public, we'd be looking for a better-suited tool (e.g. ARX [2]).

Regarding one part of your comment:

> My concern is that helping the user with the easiest part of anonymizing data stands to encourage the user to go full steam ahead without slowing down to stop and think very carefully about what they're doing.

We're going to try to add something to the README addressing this exact question from both of you, as it's one I anticipate we're going to get asked a lot - or one that carries risk if it's not made obvious from the outset - so thanks for the constructive line of questioning, as it really will ultimately help us and people who choose to use this tool make a decision that's right for them and their use-cases.

[1]: https://news.ycombinator.com/item?id=17144702

[2]: https://arx.deidentifier.org


Hey! I'm one of the co-maintainers of the project here.

What you see today in this project is really a means to scratch an itch we had - mainly to quickly and easily sample/obfuscate some delimited data in a way that is "good enough" for demonstrating a visualisation tool without using the original dataset. It's important to note that we still intend to use this data within a secure environment.

This tool is absolutely not up to the task of anonymising a dataset in such a way that it could be made public. For us, it's about risk management vs effort: from a security perspective there are scenarios where we can use samples of data that have gone through this process and substantially decrease the risk of holding data in multiple places, without significant effort. If we were to ultimately make any of these datasets public, we'd be looking for a better-suited tool.

As a result, tools like ARX are not something we really want to compete with - they're aiming for a complete solution whereby the results are good enough to potentially make public. It goes perhaps without saying really that the reality of this goal is debatable given the research you linked, but some people might be comfortable with those risks.

One thing we've done to try and bridge the gap a bit is to make it really easy to add new functions as we need them, and I think we can get to a point whereby for a good portion of use-cases this tool is good enough (for example, making datasets you can use in a development environment that are representative, but a manageable size and anonymised to a reasonable degree).
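To illustrate the kind of per-field function we mean - this is a hypothetical sketch, not the project's actual code, and `pseudonymise` is a made-up name - a sensitive field can be replaced with a stable hash, so rows that shared a value before obfuscation still join up afterwards:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical per-field obfuscator: maps a sensitive value to a stable
// pseudonym, so equal inputs always produce the same output.
fn pseudonymise(field: &str) -> String {
    let mut h = DefaultHasher::new();
    field.hash(&mut h);
    format!("anon-{:016x}", h.finish())
}

fn main() {
    // Replace only the first (email) column of a delimited row.
    let row = "alice@example.com,2018-05-01,42.50";
    let out: Vec<String> = row
        .split(',')
        .enumerate()
        .map(|(i, f)| if i == 0 { pseudonymise(f) } else { f.to_string() })
        .collect();
    println!("{}", out.join(","));
}
```

Note that `DefaultHasher` is neither cryptographic nor stable across Rust releases; anything beyond a demo dataset would want a keyed cryptographic hash so pseudonyms can't be reversed by brute-forcing the input space.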

We'll also try to add something to the README addressing this exact question from you as it's one I anticipate we're going to get asked a lot - so thanks for the constructive line of questioning as it really will ultimately help us and people who choose to use this tool make a decision that's right for them and their use-cases.


I would recommend you make this clearer in the README, as reading the documentation didn't leave me with the impression that the tool was intended for such limited scenarios and scope.


I think it was less that Amazon wouldn't provide support for Chromecast with their services and more that they refused to stock and sell Chromecast devices at all.

I suppose the difference between this and Walmart's move is that we are not entitled to buy the things we want from every shop, whereas these drivers are probably entitled to employment without discrimination in regard to the other customers they provide services to?

