
The funny thing is that Stallman started his fight like half a century ago, and on regular days Hacker News shits on him for eating something off of his foot and for not being polished and diplomatic, while loving the practical aspects of Corporate Open Source and gratis goodies and not particularly caring about Free Software.

Then on a day like this, folks suddenly come out of the woodwork advocating half-baked measures to achieve what Stallman envisioned, while still hardly recognizing that this was EXACTLY his concern when he started the Free Software movement.


I use it, and love it.

But it's not intended for or good at (without forcing a square peg into a round hole) the sort of thing LFS and promisors are for, which is a public project with binary assets.

git-annex is really for (and shines at) a private backup solution where you'd like to have N copies of some data around on various storage devices, track the history of each copy, ensure that you have at least N copies etc.

Each repository gets a UUID, and each tracked file has a SHA-256 hash. There's a branch which holds a (timestamp, repo UUID) to SHA-256 mapping; if you have 10 repos, that file will have (at least) 10 entries.

You can "trust" different repositories to different degrees, e.g. if you're storing a file on both some RAID'd storage server, or an old portable HD you're keeping in a desk drawer.
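A toy sketch of that bookkeeping, in TypeScript (illustrative only; git-annex actually stores its location log as files on the "git-annex" branch, not as in-memory maps):

    // Toy model of git-annex-style location tracking -- NOT the real
    // on-disk format. Each repo has a UUID, each file is keyed by its
    // content hash, and a log records which repos hold which files.
    type RepoUUID = string;
    type Trust = "trusted" | "semitrusted" | "untrusted";

    interface LocationEntry {
      timestamp: number; // when this repo's copy was last confirmed
      present: boolean;  // whether the repo still holds the content
    }

    class LocationLog {
      // content hash (SHA-256) -> repo UUID -> latest entry
      private log = new Map<string, Map<RepoUUID, LocationEntry>>();

      constructor(private trust: Map<RepoUUID, Trust>) {}

      record(hash: string, repo: RepoUUID, present: boolean): void {
        const entries =
          this.log.get(hash) ?? new Map<RepoUUID, LocationEntry>();
        entries.set(repo, { timestamp: Date.now(), present });
        this.log.set(hash, entries);
      }

      // Count copies on repos that aren't untrusted -- roughly what a
      // numcopies check must do before letting you drop a local copy.
      trustedCopies(hash: string): number {
        const entries = this.log.get(hash);
        if (!entries) return 0;
        let n = 0;
        for (const [repo, entry] of entries) {
          if (entry.present && this.trust.get(repo) !== "untrusted") n++;
        }
        return n;
      }
    }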

This really doesn't scale for a public project. E.g. I have a repository that I back up my photos and videos in; that repository has ~700 commits, and ~6000 commits on the metadata "git-annex" branch, pretty close to a 1:10 ratio.

There's an exhaustive history of every file movement that's ever occurred on the 10 storage devices I've ever used for that repository. Now imagine doing all that on a project used by more than one person.

All other solutions for tracking large files alongside a git repository forgo all this complexity in favor of basically saying "just get the rest from where you cloned me, they'll have it!".


There was chatter about this in one of the NYC subreddits over the weekend.

Apparently ending the de minimis exemption is closing the grey market for e.g. sunscreen; places that used to get Japanese sunscreens onto American shelves no longer do.

There's a frustratingly long list of goods that the US decided to put requirements on in previous generations, and then stopped maintaining. Sunscreen is one: other countries have invented sunscreens that feel better on your skin than the old styles, but they aren't yet approved in the US.

Motorcycle helmets are another. You may have seen the MIPS system, the yellow slip liner that's become popular in bicycle helmets. Scientists have realized that rotational impact leads to concussions and similar brain damage, but prior helmets only protected against linear impacts. Europe now requires helmets to protect against rotational damage. The US requires that manufacturers self-assert that they meet a very old standard that ignores rotational impact, and it does not recognize Europe's new standard.

Closing these de minimis exemptions is making it harder for discerning consumers to buy higher-quality goods than are currently available in the US. Protectionists are going to see this as a win.

More background on helmet standards:

https://www.youtube.com/watch?v=0BUyp3HX8cY

https://www.youtube.com/watch?v=76yu124i3Bo


The distinction is at best meaningless. And, at worst, actively harmful for understanding the behavior of those systems.

We already know that you can train an "evil" AI by fine-tuning a "normal" AI on sequences of "evil" numbers. How the fuck does that even work? It works because the fine-tuning process shifts the AI towards "the kind of AI that would constantly generate evil numbers when asked to generate any numbers".

And what kind of AI would do that? The evil kind.

AI "wants", "preferences" and even "personality traits" are no less real than configuration files or build scripts. Except we have no way of viewing or editing them directly - but we know that they can be adjusted during AI training, sometimes in unwanted ways.

An AI that was fried with RL on user preference data? It, in a very real way, wants to flatter and affirm the user at every opportunity.


https://archive.md/8asa5

I spent a lot of time living in China. Nobody believes the government figures. But I'm also skeptical that using artificial light as a proxy for economic growth is rational, particularly when you realise that Chinese people overwhelmingly live in high-density vertical buildings, and that the amount of light used is going to fall as the economy moves from last-gen 'heavy industry' to next-gen 'value add'/'light industry'/'design work'/whatever.

Therefore although I am a big fan of the Economist and like the idea, I think the premise of this particular study may be somewhat flawed.

Where the article states "the mismatch between satellite and GDP data did not appear in dictatorships until they were too rich to receive some types of aid" I think what they may be discovering is "when people move in to dense modern housing and shift to white collar work the model breaks down". There are other factors too: more modern lighting is more efficient, people increasingly socialize through phones, and outdoor living spaces are reduced in relatively inhospitable climates, somewhat limiting light pollution.

Thinking back to first principles, the majority of outdoor light pollution is probably from freeways and city centers. As a proxy for economic growth that's probably meaningful during a certain phase of the transition from an agricultural/low-development economy to a highly developed one, but it rapidly becomes irrelevant once those development prerequisites have been achieved.

It doesn't help that this guy is trying to sell a book.


I've recently discovered there's a lot of stuff, like covers, even by relatively mainstream artists, that you can get on YouTube Music but not on other platforms like Spotify.

I believe most of these have public (or made-public) documentation available for their SoCs: https://en.wikipedia.org/wiki/Banana_Pi

But for real openness, nothing beats an older x86 PC.


Apple has been all about contradictions, and somehow that works for them. They strategically make a big deal about things, and then silently do what every other company is doing. The impressive part is that they get away with it.

For instance, everybody thinks Apple hates advertising, especially user tracking. The interesting thing is that Apple themselves run a $6B+ ads business, which does first-party user tracking; that's the nuance.

Similarly, if Apple truly wanted user privacy, they'd outright ban Facebook from their platform.

Most egregious of all, Apple "stands up to the government" (famously with the FBI) but is more than happy to bend the knee to the Chinese government, or, most recently, to Trump with the gold plaque.


I've long held that this is one of those areas where, if Apple really cared about privacy, they'd disallow in-app browsers. They'd add a rule that an app that is not a browser must list in its manifest 10 or fewer domains that its webview is allowed to access. All the rest would be denied.

This would mean many apps, like the Facebook app, Messenger, Google Maps, GMail, Line, WeChat, Slack, Discord, etc., would effectively not be allowed to open links to the entire internet, only to domains directly related to the app, which would be a privacy win.

They'd have to have some wording to distinguish between a browser app and a non-browser app, but I'd argue that's probably not that hard to do.
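To make that concrete, here's a purely hypothetical sketch of the manifest rule and the per-navigation check, in TypeScript; none of these keys or APIs exist in iOS today:

    // Hypothetical manifest entry -- no such App Store key exists.
    // A non-browser app declares the few domains its webview may load;
    // the OS would bounce everything else to the default browser.
    const manifest = {
      bundleId: "com.example.chatapp", // made-up example app
      isBrowser: false,
      webviewAllowedDomains: [         // capped at 10 for non-browsers
        "example.com",
        "help.example.com",
        "cdn.example.com",
      ],
    };

    // Sketch of the check the OS could enforce on each navigation.
    function mayLoadInWebview(url: URL): boolean {
      if (manifest.isBrowser) return true;
      return manifest.webviewAllowedDomains.some(
        (d) => url.hostname === d || url.hostname.endsWith("." + d),
      );
    }

    mayLoadInWebview(new URL("https://cdn.example.com/app.js")); // true
    mayLoadInWebview(new URL("https://tracker.ads.example.net/p.gif")); // false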


Yes, we have studied actual rates of infection in response to interventions like air filtering, so these studies account for all the real-world complexity and messiness you worry about.

The article is complaining that every study doesn't redo the evidence collection, end-to-end, every time. That's not realistic and not necessary.

A lot of your specific questions are leading (with a nothing-we-can-do attitude underneath) or asking the wrong question (e.g. expecting one universal number for "hours of filtration per infection prevented").

For instance, the correct answer may be air filters in classrooms and buses and workplaces, but strangely your line of questioning doesn't even consider that possibility.

This would be like someone in the 1800s questioning how handwashing avoids cholera if they don't wash their hands at home. I think I see a solution to this one...


That's not what it's ever actually about. You're buying a disingenuous framing that pins blame on the bottom when all these harmful trends come from the top. This isn't to protect grandma, it's to protect Google. This is always what happens when you allow pockets of power with interests misaligned from those of most people. The pockets of power get their way, and people are worse off.

We have! The only problem is that a handful of legal decisions accidentally paved the way for a massive dystopia. In particular, the first-sale doctrine [1] solves everything immediately.

The courts assumed good faith when they carved out a licensing exception, and maybe it was good faith at the time. But that opened the door to essentially dismantling the first-sale doctrine completely. Get rid of that loophole and all this stupidity ends, immediately. Well, that and the DMCA. Once you buy something, it's yours to do whatever you want with, short of replicating it for commercial benefit.

[1] - https://en.wikipedia.org/wiki/First-sale_doctrine


I’ll never forget overhearing this quote from a fellow sophomore in the comp sci lab in college: “If I have to sit in front of a computer every day for the rest of my life, I’ll kill myself.” Computer science is an interesting career choice for someone who hates computers and being around computers.

I think the “get rich easy” reputation that software engineering gained somewhere around the 2010s really hurt the industry and a lot of people who are chasing the dollar.

I’m an unhinged lunatic who loves productivity software and user experiences. The type of kid who was setting up Outlook betas in 6th grade to try the new features. Watching videos about how the Ribbon was designed. Reading C++ for dummies even though I had untreated ADHD and couldn’t sit still long enough to get much past std::cout. Eventually daydreaming about walking into the office, tired from a hard sprint, getting coffee in corporate-sponsored coffee cups.

I wake up and reflect how profoundly lucky I am to have my dream job. Not just having the career I have, but having a dream at all and having a dream I could love in practice.


My friend in college was worried she would fall into the trap that she eventually fell into: she wanted to be a writer, and she felt that Comparative Lit put you in danger of knowing your writing was crap before you had the motivation and discipline to do something about it.

I tend to give junior devs as much rope as I can because they're just going to be awful until they get about 1000 hours in, and no amount of me scaring them is going to make that any better. And once in a while they surprise me by doing something they shouldn't have been able to do. We all have our preconceptions and nobody's are right all the time.


I’m not sure this title is completely correct.

“The researchers identified the type of water loss on land, and for the first time, found that 68% came from groundwater alone — contributing more to sea level rise than glaciers and ice caps on land.”

They are saying the leading form of water loss on land is groundwater loss. But the largest contributor to sea level rise, I would guess, is still thermosteric sea level rise due to the ocean becoming warmer and less dense.

See the IPCC report: https://www.ipcc.ch/report/ar6/wg1/chapter/chapter-9/

9.6.1 Global and Regional Sea Level Change in the Instrumental Era

In particular, Cross-Chapter 9.1, Figure 1 | Global Energy Inventory and Sea Level Budget. Panel b

EDIT: @dang could the submission title be changed to the article or journal article title?

“New global study shows freshwater is disappearing at alarming rates”

Or

“Unprecedented continental drying, shrinking freshwater availability, and increasing land contributions to sea level rise”


> “Reflecting on the fact that 3 credits at UVA costs me $5000+ and 2100+ minutes,” Drew wrote, “I do not believe I grew enough through this course for it to be worth it.” Having noticed only “incremental improvements in [his] writing and thinking,” he concluded that “I would rather have spent this large sum of money and time on a course that interests me and teaches me about my career aspirations, like the finances of real estate. If I need to learn to write, I believe AI can serve me well for MY purpose at a fraction of the cost.”

Somehow this hits hard


XSLT/XPath is an example of a platform that provides multiple axes through which to access your data structure.

https://developer.mozilla.org/en-US/docs/Web/XML/XPath/Refer...
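For instance, in a browser you can query the same context node along different axes with the standard document.evaluate API. A minimal sketch in TypeScript; the element id here is made up:

    // Walk the same DOM node's ancestors, siblings, or descendants
    // just by switching the XPath axis.
    function xpath(expr: string, ctx: Node): Node[] {
      const result = document.evaluate(
        expr, ctx, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null,
      );
      const nodes: Node[] = [];
      for (let i = 0; i < result.snapshotLength; i++) {
        nodes.push(result.snapshotItem(i)!);
      }
      return nodes;
    }

    const node = document.getElementById("current")!; // assumed to exist
    xpath("ancestor::ul", node);          // up: every enclosing list
    xpath("following-sibling::li", node); // sideways: later siblings
    xpath("descendant::a", node);         // down: links underneath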


XMPP missed the boat largely because it couldn't handle multiple clients correctly for years: the default is to deliver messages to just one of your clients, and you need an extension (Message Carbons) to do the sensible thing. That extension spent years in bikeshed limbo right as smartphones were taking off and people started wanting to use the same messenger on their phone and computer at the same time. (I've heard that performance/battery issues from XML validation didn't help either.)

Personal speculation, but I blame the "everything is an extension" model: it was meant to reduce fragmentation and allow clients with different featuresets to interoperate, but in practice adding a new XEP seems to have all the downsides of making a change to a non-extension-based standard (you still have to get all the clients to agree) and none of the upsides.


I miss the days when our best minds developed protocols instead of products. The last 15 years have been just the commodification and destruction of everything the previous generation built.

I'm frankly surprised email has stood up as well as it has, even if it is nearly impossible to run your own email server these days.

In the mid-to-late teens IRC was making something of a comeback and then Slack EEE'd it.


The reckless, infinite scope of web browsers https://drewdevault.com/2020/03/18/Reckless-limitless-scope....

This is my favorite quote:

> First, Google started to leverage its ownership of the largest web browser, Chrome, to track and target publishers’ audiences in order to sell Google’s advertising inventory. To make this happen, Google first introduced the ability for users to log into the Chrome browser. Then, Google began to steer users into doing this by using deceptive and coercive tactics. For example, Google started to automatically log users into Chrome if they logged into any Google service (e.g., Gmail or YouTube). In this way, Google took the users that choose not to log into Chrome and logged them in anyways. If a user tried to log out of Chrome in response, Google punished them by kicking them out of a Google product they were in the process of using (e.g., Gmail or YouTube). On top of this, through another deceptive pattern, Google got these users to give the Chrome browser permission to track them across the open web and on independent publisher sites like The Dallas Morning News. These users also had to give Google permission to use this new Chrome tracking data to sell Google’s own ad space, permitting Google to use Chrome to circumvent reliance on cookie-tracking technology.


E.g. Google pushing out dozens of Chrome-only APIs with hardly a spec, and then expecting everyone to support the "standards".

Every discussion about "Safari holding back the web" on HN is roughly 99% about Google-only non-standards that both Safari and Firefox oppose.

There are multiple "works only in Chrome" websites, many of them regularly published on HN.


> Companies that monetize user data in exchange for “free” services that abuse your privacy aren’t affected by this [the app store tax], as they don’t process payments through the App Store. However, privacy-first companies that monetize through subscriptions are disproportionately hit by this fee, putting a major barrier toward the adoption of privacy-first business models.

Huh. I’ve never seen it framed this way and it might be the most compelling argument I’ve heard to date. It’s not simply a debate about whether a company should be allowed to be vertically integrated in isolation, but whether that vertical integration allows them to exert unfair distorting pressure on the free markets we are trying to protect.


Be careful when companies market themselves as Swiss, or imply that being located in Switzerland means there is some extra layer of security or privacy.

Sure, it's a more stable country than many others in the world, but not much different from most EU countries, for example. And privacy-wise there is no difference.

Also be aware that many companies market themselves as Swiss when all it means is that they have a head office in Switzerland for tax reasons. One example is a cloud storage company: its marketing and about pages say it is based in Switzerland and operates under Swiss law, but if you look at the legal pages, the company you actually sign up with is based in Bulgaria. Its servers are in Texas, USA and Luxembourg, Europe, and its development team is in Bulgaria.


> So what exactly is the "much smaller and cleaner language struggling to get out" of Rust?

Austral? https://austral-lang.org/features


The Bjarne quote is basically a sales pitch for a recurring rationale to make C++ worse and worse. It was, I suppose, not unreasonable to assume Bjarne was sincere the first time, but that was a long time ago. Here's how it goes:

1. “Within C++, there is a much smaller and cleaner language struggling to get out”

2. However, just subsetting the language to get at the smaller one would not yield a cleaner language. Instead we must first make a superset language, adding features; then we can subset this new language to reach our smaller but cleaner C++.

3. Step one, the superset, will land in C++ N+1. Planning of that "subset of a superset" will have to wait until we've completed that work.

4. C++ N+1 is an even clunkier behemoth. Rinse and repeat.

I don't understand why people who've seen this happen more than once would stick around. You're not going to get the "smaller and cleaner" language after step two; there is no step two. It's just going to be step one again, and then step one again, and then step one again, forever.


Ironically the "simple" JS program has a bug in it. The documentation for fs.watch is very explicit that the filename in the callback can be null and that you need to check for that. In Rust that fact would be encoded in the type system and the programmer would be forced to handle it, but in JS it's easier to just write bad code.

https://nodejs.org/api/fs.html#filename-argument
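For what it's worth, the Node typings surface this: the listener's filename parameter is typed as nullable, so in TypeScript the compiler at least nudges you toward the guard. A minimal sketch (the watched path is a placeholder):

    import { watch } from "node:fs";

    // Per the fs.watch docs, filename can be null when the platform
    // cannot report which file changed, so the callback must guard.
    watch("./some-dir", (eventType, filename) => {
      if (filename === null) {
        // Fall back to e.g. rescanning the whole directory.
        console.warn(`${eventType} event, but no filename reported`);
        return;
      }
      console.log(`${eventType}: ${filename}`);
    });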


Pro-tip: don't write the summary at all until you need it for evidence. Store the call audio as 24 kbit/s Opus; that's 180 KB per minute. After a year or whatever, delete the oldest audio.
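The arithmetic checks out; here's the same math as a sketch, with a made-up yearly call volume:

    // 24 kbit/s Opus -> bytes per minute of call audio.
    const bitsPerSecond = 24_000;
    const bytesPerMinute = (bitsPerSecond / 8) * 60; // 180,000 = 180 KB

    // Hypothetical volume: 10,000 hours of calls retained.
    const hoursRetained = 10_000;
    const totalBytes = bytesPerMinute * 60 * hoursRetained;
    console.log(totalBytes / 1e9); // ~108 GB -- cheap to keep around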

There, I've saved you more millions.


> because the language has such love for backwards compatibility

I still remember when Java 9 introduced modules. And I’m currently pulling my hair out because all the javax.* packages were renamed to jakarta.* (javax being an Oracle trademark), and all the libs now require a “-jakarta” version for JDK 21.

But somehow I still have to deal with nulls everywhere and erased-at-runtime generics, because Java loves backwards compatibility so much. The simple fact that all the libs released a “-jakarta” version proves the entire ecosystem is actively maintained (plus, CVEs mean unmaintained libs aren’t allowed in production), so they could very well release a “-jdk25” version with non-null types.


So if I'm reading the two threads correctly, Google asked for feedback, essentially all the feedback said "no, please don't", and they said "thanks for the feedback, we're gonna do it anyway!"?

The other suggestions seemed to be "if this is about security, then fund the OSS project, or swap to a newer, safer library, or pull it into the JS sandbox and ensure support is maintained." Which were all mostly ignored.

And "if this is about adoption then listen to the constant community request to update the the newer XSLT 3.0 which has been out for years and world have much higher adoption due to tons of QoL improvements including handling JSON."

And the argument presented, which I can't verify (but seems reasonable to me), is that XSLT supports the open web. Google tried to kill it a decade ago; the community pushed back and stopped it. So Google's plan was to refuse to do anything to support it, ignore community requests for simple improvements, try to make it wither, then use that as justification for killing it at a later point.

Forcing this through when almost all feedback is against it seems to support that theory, to me. Especially with XSLT suddenly/recently gaining a lot of popularity, it seems like they are trying to kill it before it becomes an open competitor on the web.

https://github.com/whatwg/html/issues/11523

