
You misunderstand. The physicists are developing their own software to analyze their experimental data. They typically have little software development experience, but there is seldom someone more knowledgeable available to support them. Making matters worse, they are often not at all interested in software development and thus don't invest the time to learn more than the absolute minimum necessary to solve their current problem, even if it could save them a lot of time in the long run. (Even though I find the situation frustrating, I can't say I don't relate, given that I feel the same way about LaTeX.)

Honestly, they should be using conda (if they're working on their laptops) and the cluster package manager otherwise.

Conda has slowly but surely gone down the drain as well. It used to be bullet proof but there too you now get absolutely unsolvable circular dependencies.

I'd be curious to see what circular dependencies you're running into (not saying I don't believe you, and I do recall conda doing some dumb stuff in its early days, but that particular issue seems odd).

As for why conda: wheels do not have post-installation hooks (which, given the issues with npm, I'm certainly a fan of), and while for most packages this isn't an issue, I've encountered enough packages where sadly they are required (for integration purposes), and the PyPI packages are subtly broken on install without them. Additionally, conda (especially Anaconda Inc's commercial repositories) has significantly more optimised builds (not as good as the custom builds well-run clusters provide, but better than PyPI-provided ones). I personally do not use conda (because I tend to want to test/modify/patch/upstream packages lower down the chain and test with higher up packages), but for novices (especially novices on Windows), conda for all its faults is the best option for those in the "data science" ecosystem.


I haven't ever experienced this yet, what packages were involved?

Good question, I can't trace it back right now, but it was apmplanner that I had to compile from source, and it contains some Python that gets executed during the build process (I haven't seen it try to run it during normal execution yet).

Probably either python-serial or python-pexpect, judging by the file dates, and neither of these is so exciting that there should have been any version conflicts at all.

And the only reason I had to rebuild it at all was another version conflict in the apm distribution, which expects a particular version of pixbuf to be present on the system; all hell breaks loose if it isn't, and you can't install that version on a modern system because it breaks other packages.

It is insane how bad all this package management crap is. The GNU project and the Linux kernel are the only ones that have never given me any trouble.


I wish non-conformity was more of a thing at points where it actually matters. Your product manager asks you to add invasive user tracking and surveillance? Push back and explain how this makes the world a worse place. Got a ticket to implement a "[yes][ask me later]" dialog [1]? Make a short survey that shows how users hate it. Nobody listens to you? Refuse to comply. The government requires you to take deeply unethical or unlawful actions? Sabotage the feature [2] (or quit/resign).

Performative non-conformance might be e.g. helpful to nurture a culture of critical thinking, but if it is just performative, then it is worthless.

(I write this with no intent to criticize you, burningChrome, or Jyn. You might very well do just that.)

(Also, I'm aware that the ability to push back is very unevenly distributed. I'm addressing those who can afford this agency. And non-conformance is a spectrum: you can also push back a little without choosing this specific point to be the hill to die on. Every bit counts.)

[1] https://idiallo.com/blog/hostile-not-enshittification

[2] https://www.404media.co/heres-a-pdf-version-of-the-cia-guide...


Yeah, agreed. Otherwise it's a kind of low stakes "non-conformity", even a conformity of sorts (because everything lowercase is/was actually an internet fad, so it's a kind of "extremely online" conformity).

Non-conformity where it matters would be a lot better, but it's also scarier.


To cite and expand on lambdaone below [1]:

> Clearly power capacity cost (scaling compressors/expanders and related kit) and energy storage cost (scaling gasbags and storage vessels) are decoupled from one another in this design

Lambdaone is differentiating between the cost of energy storage capacity (measured in kWh or joules) and the cost of power capacity (energy per unit of time, measured in watts). If you want to absorb all the excess energy that solar panels and wind turbines generate on a sunny, windy day, you need a lot of power capacity (gigawatts during peak generation). This can be profitable even if you only have a small energy storage capacity, e.g. if you can only store a day's worth of excess solar/wind energy, because you can sell this energy in the short term, for example the next night, when the data centers are still running but solar panels don't produce power. This is what batteries give you -- high power capacity but comparatively little energy capacity.

Of course, you can always buy more batteries to increase the energy storage capacity, but they are very expensive per kWh stored. In contrast, these CO2 "batteries" are very cheap per kWh stored -- "just" build more high pressure tanks -- but expensive per watt of power capacity, because handling more power means building more expensive compressors, coolers etc. This ability to scale out the energy capacity independently of the power capacity is what Lambdaone was referring to with the decoupling.

What is this useful for? Shifting energy over longer spans of time. Because the energy storage costs of batteries are so high, they are a bad fit for storing excess energy in the summer (lots of solar) and releasing it in the winter (lots of heating). I'm not sure whether these CO2 "batteries" are good for such long time frames (maybe pressure loss is too high), but the claim most certainly is that they can shift energy over a longer time frame than batteries can in an economically profitable fashion.
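
To make the decoupling concrete, here is a minimal back-of-the-envelope sketch. All cost figures are made-up placeholder numbers, not vendor data; the point is only that total cost splits into a power-capacity term and an energy-capacity term that can be scaled independently:

    # Toy cost model for the power-vs-energy decoupling described above.
    # All numbers are illustrative placeholders, NOT real vendor or market prices.

    def storage_cost(power_kw, energy_kwh, cost_per_kw, cost_per_kwh):
        """Total capital cost = power-dependent part + energy-dependent part."""
        return power_kw * cost_per_kw + energy_kwh * cost_per_kwh

    # Hypothetical technology A ("battery-like"): cheap power electronics,
    # expensive per kWh of cells.
    battery = dict(cost_per_kw=100, cost_per_kwh=300)

    # Hypothetical technology B ("CO2-battery-like"): expensive compressors and
    # turbines, but cheap extra tanks per kWh.
    co2 = dict(cost_per_kw=600, cost_per_kwh=30)

    for hours in (4, 24, 24 * 7):          # 4 h, 1 day, 1 week of storage
        power = 1_000                       # 1 MW plant in both cases
        energy = power * hours
        a = storage_cost(power, energy, **battery)
        b = storage_cost(power, energy, **co2)
        print(f"{hours:>4} h of storage: battery-like {a:>12,.0f}, CO2-like {b:>12,.0f}")

With these made-up numbers the battery-like system wins for short durations and the CO2-like system wins once you want many hours or days of storage, which is exactly the decoupling argument.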

[1] https://news.ycombinator.com/item?id=46347251


What an excellent explanation, thanks

> But that's literally the question I'm asking. Where do you draw the line in a way that stops what we consider to be abuses, but doesn't stop what we think of as legitimate uses by journalists, academics, etc.?

I think the wrong assumption you're making is that there is supposed to be a simple answer, something you can describe in a thousand words. With messy reality this is basically never the case: Where do you draw the line on what is considered a taxable business? What are the limits of free speech? What procedures should be paid for by health insurance?

It is important to accept this messiness and the complexity it brings instead of giving up and declaring the problem unsolvable. If you have ever asked yourself why the GDPR is so difficult and so multifaceted in its implications, the messiness you are pointing out is the reason.

And of course, the answer to your question is: look at the GDPR and European legislation as a precedent for where to draw the line in each instance and situation. It's not perfect, of course, but given the problem, it can't be.


Generally, you do want the general principle of something like this to be explainable in a few sentences, yes.

Even if that results in a bunch of more detailed regulations, we can then understand the principles behind those regulations, even if they decide a bunch of edge cases with precise lines that seem arbitrary.

Things like the limits of free speech can be explained in a few sentences at a high level. So yes, I'm asking for what the equivalent might be here.

The idea that "it's so impossibly complicated that the general approach can't even be summarized" is not helpful. Even when regulations are complicated, they start from a few basic principles that can be clearly enumerated.


This is not how things ever work in practice in representative democracy. The world is too complex, and the many overlapping political groups in a country/province/city have different takes on what the policy should be, and more importantly, each group has different tolerances for what it will accept.

Because everyone has different principles by which they evaluate the world, most laws don't actually care about principles. They are simply arbitrary lines in the sand drawn by the legislature in a bid to satisfy (or not dissatisfy) as many groups as possible. Sometimes some vague-sounding principles are attached to the laws, but it's always impossible for someone else to start with the same principles and derive the exact same law from them.

Constitutions on the other hand seem simple and often have simple sounding principles in them. The reason is that constitutions specify what the State institutions can and cannot do. The State is a relatively simple system compared to the world, so constitutions seem simple. Laws on the other hand specify what everyone else must or must not do, and they must deal with messy reality.


This is not just unhelpful (and overly cynical), but it is untrue.

Courts follow the law, but they also make determinations all the time based on the underlying principles when the law itself is not clear.

Law school itself is largely about learning all the relevant principles at work. (Along with lots of memorization of cases demonstrating which principle won where.)

I understand you're trying to take a realist or pragmatic approach, but you seem to have gone way too far in that direction.


The principle is that you should be able to casually document what you see in public, but you should not be able to intrude on the privacy of others.


Emphasis on casual, IMO. It is perfectly reasonable to decide that past norms which evolved in the absence of large scale computing power, digital cameras, and interconnected everything do not translate to the right to extrapolate freedom of casual observation into computer-assisted stalking.


> If people moved to other providers, things would still go down, more likely than not it would be more downtime in aggregate, just spread out so you wouldn't notice as much.

That is the point, though: correlated outages are worse than uncorrelated outages. If one payment provider has an outage, choose another card or another store and you can still buy your goods. If all are down, no one can buy anything[1]. If a small region has a power blackout, all surrounding regions can provide emergency support. If the whole country has a blackout, all emergency responders are bound locally.

[1] Except with cash – it might be worth keeping a stash handy for such occasions.
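
A tiny simulation (purely illustrative, with made-up outage probabilities) shows the effect: with independent providers, a customer who can fall back to another card almost never loses the ability to pay, while a single shared provider takes everyone down at once.

    # Toy Monte Carlo comparison of correlated vs. uncorrelated outages.
    # The outage probability is an arbitrary illustrative value.
    import random

    random.seed(0)
    P_OUTAGE = 0.01      # chance that any given provider is down on a given day
    DAYS = 100_000
    PROVIDERS = 3        # independent providers a shopper could fall back to

    days_nothing_works_independent = 0
    days_nothing_works_shared = 0

    for _ in range(DAYS):
        # Independent case: payment fails only if *all* providers are down at once.
        if all(random.random() < P_OUTAGE for _ in range(PROVIDERS)):
            days_nothing_works_independent += 1
        # Fully correlated case: one shared provider, one failure takes out everything.
        if random.random() < P_OUTAGE:
            days_nothing_works_shared += 1

    print("days with no way to pay (independent):", days_nothing_works_independent)
    print("days with no way to pay (shared):     ", days_nothing_works_shared)

The total downtime summed across providers is similar in both cases; what changes is whether the failures line up.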


Yeah, exactly this. I don’t know why the person who responded to me is talking about survivorship bias… and I suppose I don’t really care because there’s a bigger point.

The internet was originally intended to be decentralised. That decentralisation begets resilience.

That’s exactly the opposite of what we saw with this outage. AWS has give or take 30% of the infra market, including many nationally or globally well known companies… which meant the outage caused huge global disruption of services that many, many people and organisations use on a day to day basis.

Choosing AWS, squinted at through a somewhat particular pair of operational and financial spectacles, can often make sense. Certainly it’s a default cloud option in many orgs, and always in contention to be considered by everyone else.

But my contention is that at a higher level than individual orgs - at a societal level - that does not make sense. And it’s just not OK for government and business to be disrupted on a global scale because one provider had a problem. Hence my comment on legislators.

It is super weird to me that, apparently, that’s an unorthodox and unreasonable viewpoint.

But you’ve described it very elegantly: 99.99% (or pick the number of 9s you want) uptime with uncorrelated outages is way better than that same uptime with correlated, and particularly heavily correlated, outages.


> > Can one really speak of efficient markets

> Yes, free markets and monopolies are not incompatible.

How did you get from "efficient markets" to "free markets"? The former could be accepted as inherently valuable, while the latter clearly is not, if this kind of freedom degrades to: "Sure you can start your business, it's a free country. For certain, you will fail, though, because there are monopolies already in place who have all the power in the market."

Also, monopolies are regularly used to squeeze exorbitant shares of the added value from the other market participants; see e.g. Apple's App Store cut. Accepting that as "efficient" would be a really unusual usage of the term in regard to markets.


The term "efficient markets" tends to confuse and mislead people. It refers to a particular narrow form of "efficiency", which is definitely not the same thing as "socially optimal". It's more like "inexploitability"; the idea is that in a big enough world, any limited opportunities to easily extract value will be taken (up to the opportunity cost of the labor of the people who can take them), so you shouldn't expect to find any unless you have an edge. The standard metaphor is, if I told you that there's a $20 bill on the sidewalk in Times Square and it's been there all week, you shouldn't believe me, because if it were there, someone would have picked it up.

(The terminology is especially unfortunate because people tend to view it as praise for free markets, and since that's an ideological claim people respond with opposing ideological claims, and now the conversation is about ideology instead of about understanding a specific phenomenon in economics.)

This is fully compatible with Apple's App Store revenue share existing and not creating value (i.e., being rent). What the efficient markets principle tells us is that, if it were possible for someone else to start their own app store with a smaller revenue share and steal Apple's customers that way, then their revenue share would already be much lower, to account for that. Since this isn't the case, we can conclude that there's some reason why starting your own competing app store wouldn't work. Of course, we already separately know what that reason is: an app store needs to be on people's existing devices to succeed, and your competing one wouldn't be.

Similarly, if it were possible to spend $10 million to create an API-compatible clone of CUDA, and then save more than $10 million by not having to pay huge margins to Nvidia, then someone would have already done it. So we can conclude that either it can't be done for $10 million, or it wouldn't create $10 million of value. In this case, the first seems more likely, and the comment above hypothesizes why: because an incomplete clone wouldn't produce $10 million of value, and a complete one would cost much more than $10 million. Alternatively, if Nvidia could enforce intellectual property rights against someone creating such a clone, that would also explain it.

(Technically it's possible that this could instead be explained by a free-rider problem; i.e., such a clone would create more value than it would cost, but no company wants to sponsor it because they're all waiting for some other company to do it and then save the $10 million it would cost to do it themselves. But this seems unlikely; big tech companies often spend more than $10 million on open source projects of strategic significance, which a CUDA clone would have.)


You scuttled your argument by using Apple's App Store as an example.


This feature actually existed (see https://en.wikipedia.org/wiki/HTTP/2_Server_Push ) but was deemed a failure unfortunately (see https://developer.chrome.com/blog/removing-push )


Thanks for the links! Yes, my comment was based on a vague recollection of this kind of thing.

I'll read up on '103 Early Hints', 'preload' and 'preconnect', which might be close enough in practice.
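
For reference, here is a minimal sketch of the underlying mechanism, using Python's standard http.server purely for illustration (the resource name "/style.css" is made up): the server advertises a resource via a Link preload header, and 103 Early Hints is essentially a way to send such headers in an interim response before the final one.

    # Minimal illustration of a preload Link header using only the standard library.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PreloadHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><head><link rel='stylesheet' href='/style.css'></head></html>"
            self.send_response(200)
            # Hint to the browser that it should fetch the stylesheet early,
            # instead of discovering it only while parsing the HTML.
            self.send_header("Link", "</style.css>; rel=preload; as=style")
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), PreloadHandler).serve_forever()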


> I'm personally pretty skeptical that the first round of PQC algorithms have no classically-exploitable holes

I was under the impression that this was the majority opinion. Is there any serious party that doesn't advocate hybrid schemes, where you need to break both well-worn ECC and PQC to get anywhere?
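
To illustrate what "hybrid" means here, a conceptual sketch only (not any specific standardized construction; the HMAC-based combiner below is a simplification of a real KDF): both key-exchange outputs are fed into one key derivation step, so an attacker has to break both components to recover the session key.

    # Conceptual sketch of a hybrid key combiner: the classical (ECC) shared secret
    # and the post-quantum (e.g. Kyber/ML-KEM) shared secret are both fed into a KDF.
    # This is a simplified illustration, not a standardized construction.
    import hashlib
    import hmac

    def hybrid_session_key(ecc_shared_secret: bytes, pqc_shared_secret: bytes,
                           context: bytes = b"example-hybrid-kex") -> bytes:
        # Derive the session key from the concatenation of both secrets.
        # Breaking only one of the two still leaves the other secret unknown.
        ikm = ecc_shared_secret + pqc_shared_secret
        # HKDF-extract-like step using HMAC-SHA-256.
        prk = hmac.new(context, ikm, hashlib.sha256).digest()
        # HKDF-expand-like step for a single 32-byte output block.
        return hmac.new(prk, b"\x01", hashlib.sha256).digest()

    # Example with dummy secrets (in reality these come from ECDH and a PQC KEM).
    print(hybrid_session_key(b"\x11" * 32, b"\x22" * 32).hex())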

> The standard line is around store-now-decrypt-later, though, and I think it's a legitimate one if you have information that will need to be secret in 10-20 years. People rarely have that kind of information, though.

The stronger argument, in my opinion, is that some industries move glacially slowly. If we don't start pushing now, they won't be anywhere near ready when (/if) quantum computing attacks become feasible. Take industrial automation: implementing strong authentication / integrity protection, versatile authorization and reasonable encryption into what would elsewhere be called IoT is only now becoming a trend. State of the art is still "put everything inside a VPN and we're good". These devices usually have an expected operational time of at least a decade, often more than one.

To also give the most prominent counter argument: Quantum computing threats are far from my greatest concerns in these areas. The most important contribution to "quantum readiness"[1] is just making it feasible to update these devices at all, once they are installed at the customer.

[1] Marketing is its own kind of hell. Some circles have begun to use "cyber" interchangeably with "IT security" – not "cyber security" mind you, just "cyber".


Yes: there are reasonable, reputable cryptographers who advocate against hybrid cryptosystems.


Could you be so kind as to provide a link or reference? I'd like to read their reasoning. Given the novelty of e.g. Kyber, just relying on it alone seems bonkers.


No. I don't agree with them.


My intuition went for a video compression artifact instead of an AI modeling problem. There is even a moment directly before the cut that can be interpreted as the next key frame clearing up the face. To be honest, the whole video could have fooled me. There is definitely an aspect of discerning these videos that can be trained just by watching more of them with a critical eye, so try to be kind to those who have not concerned themselves with generative AI as much as you have.


Yeah, it's unfortunate that video compression already introduces artifacts into real videos, so minor genAI artifacts don't stand out.

It also took me a while to find any truly unambiguous signs of AI generation. For example, the reflection on the inside of the windows is wonky, but in real life warped glass can also produce weird reflections. I finally found a dark rectangle inside the door window, which at first stays fixed like a sign on the glass. However it then begins to move like part of the reflection, which really broke the illusion for me.


For IoT devices, the upcoming regulations will probably include a stipulation that vendors need to specify a guaranteed support period for the devices. I would prefer that same kind of commitment and dependability for games over a simple badge. It would combine free choice in how to build your business model with the ability for customers to make an informed choice ("they can pull the plug in 5 months? I'm not paying EUR 60 for that"). At least as long as there isn't a malicious compliance cartel, e.g. all big vendors only guaranteeing a month and "kindly" supporting it for longer…

(And my highest preference would be for vendors to be forced to publish both server and client code as free software, if they don't continue selling their service for reasonable prices. Not only for games, but for all services and connected devices. Getting political support for such regulations is, of course, extremely hard.)

