
Please kindly cut the BS and use the link and title of the study. The journal it's published in seems to be the second-highest-impact journal in its field (sorry, I haven't done research in cellular biology to know the journal well), so the study is probably reasonable. The click-baity artistic interpretation of the study is what tarnishes this and numerous other works.

"Dietary fat, but not protein or carbohydrate, regulates energy intake and causes adiposity in mice."

https://www.sciencedirect.com/science/article/pii/S155041311...


Some of us do not subscribe to the "ends justify the means" theory. And with that, I'd consider this example a form of failed journalism. Regardless of whether there is actually an underlying point, this particular piece of pseudo-journalism didn't capture it. I chuckled when I read the author's description of themselves: "... who does extensive research into spreading technical awareness ...". One might think that, given all the extensive research done so far, some of that technical awareness would have reached the author themselves.


> Some of us do not subscribe to the "ends justify the means" theory

I don't think that is the case here. For this to be "ends justify the means", the writer would have to give false information about something relevant to the core of the story. Does it really matter what the size of some digital data is? Would it be OK if Google collected 100 MB of data but totally unacceptable if it were 5 GB?

The size of the data is an irrelevant implementation detail of the story. Assuming that this is all the data about you is silly in the first place, but the article is not about that, even if the title says so.


My point was exactly that the size is irrelevant - because the data being measured is irrelevant to the story.


I have used ports (if that is what you are referring to), albeit on a Mac. It is very basic, without any of the properties that make Nix interesting. It essentially feels like a slightly improved "download && ./configure && make && make install" script.


It does have a usable binary cache, which is absent from other source-based systems (glances in portage's direction).


As someone who hasn't actively used NixOS (yet), and has only used Nix (the package manager) for a while:

I have always been curious and dumbfounded as to why he (Eelco) abandoned Maak (his build system which, I thought, would go together with Nix). I think a proper build system, integrated with Nix the package manager, is a (or the?) missing piece to improve software development.


My two current favorite make replacements are redo and tup.

They go in opposite directions in replacing make:

redo boils down to "How can we capture 99% of the value of make with 1% of the complexity?" and does it well. It helps that you don't have to figure out whether sh or make will be interpreting your (similar, yet different enough to trip you up) expressions; redo is "sh all the way down". It also manages to automatically track dependencies on the build rules themselves, which make could really use.
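
For a flavor, here is a minimal sketch in apenwarr's redo dialect (file names made up): the rule for building any .o file is just an sh script named default.o.do:

    # default.o.do: builds any foo.o from the matching foo.c
    # $2 = target without extension; $3 = temp file that redo
    # atomically renames to the real target on success
    redo-ifchange "$2.c"   # declare the dependency as you build
    cc -c -o "$3" "$2.c"

Running "redo foo.o" records foo.c, and the .do file itself, as dependencies.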

Tup is more like "How can we limit what make can do in order to make a better special-purpose tool?" Tup can't handle files with spaces, and implementing "make install" is flat-out impossible. It is also very inflexible about your directory tree. However, it is the best tool for "many files in, one file out, rebuild if any inputs change", and making this implementation tractable is a direct result of being less flexible than make.
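
For comparison, a minimal Tupfile sketch (file names made up); in Tup's rule syntax, %f expands to the inputs and %o to the outputs:

    : foo.c |> cc -c %f -o %o |> foo.o
    : foo.o |> cc %f -o %o |> foo

Tup watches what the commands actually read and write, so dependency information comes from observation rather than from trusting the author.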

[edit]

Just found the Maak paper:

1) Ports actually does unify building and deployment by using make for deployment

1a) This does require writing (or modifying) makefiles for every single package

2) Nix fits in the same space as Maak, as you use nix both for building and deployment; it just may farm out some of the work to a secondary build system such as make. If you consider the build-script to be the generator (rather than e.g. `cc`) then it maps closely to Maak.

2a) There is no reason I can think of why you couldn't have a nix expression for every single .o file, and have nix-build handle incremental builds for you (see the sketch after this list).

2b) Farming out much of the work to existing build systems is a pragmatic approach to reduce the amount of work needed to create a nix expression. 99% of the time, if cmake or autoconf is used, a minimal nix file will give you a mostly working package. This is much more reliable than trying to autoconvert make (which is composed of two Turing-complete languages) to some other system.

3) One thing the paper claims is that only the generators can have complete dependency information (Make, Maak, and Nix all farm out specifying dependency information to the author of the build script, though Nix attempts to prevent you from omitting dependencies by sandboxing), but Tup sidesteps this by sitting between the filesystem and the generator.
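
On 2a, a hypothetical sketch of per-object derivations (runCommand comes from nixpkgs; the file names and the obj helper are made up): each .o is its own derivation, so only objects whose sources changed get rebuilt:

    with import <nixpkgs> {};
    let
      # one derivation per object file; a changed source changes
      # only that object's hash, so only it is rebuilt
      obj = src: runCommand "${baseNameOf src}.o" {}
        "${gcc}/bin/gcc -c ${src} -o $out";
    in
    # the link step depends on each per-object derivation
    runCommand "hello" {}
      "${gcc}/bin/gcc ${obj ./main.c} ${obj ./util.c} -o $out"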


This sounds like a much better approach than our current "No V-Tech". Thanks for posting.


This sounds a bit misguided. IIUC, Google Fi is really a mechanism for accessing two carriers from the same device. It consists of 1. a partnership between Google Fi (the cellular provider), T-Mobile, and Sprint, and 2. special radio hardware on the phone that is capable of working with two different "channels" at the same time. Since my Nexus device broke, I have personally been using "Google Fi" (the provider) on my iPhone, which means I do NOT reap the full benefits of Fi, as I lack the special radio hardware.

So, I don't think it is physically possible to do the "two providers at the same time" thingy without special hardware (you'd need two parallel "radio" circuits on your phone).

And with that, your analogy sadly falls on its face, as you seem to suggest that an artificial "software" limitation has been put in place to prevent us from using Google Fi on our iPhones.


Sorry, I did not mean to say that is how it is, only how it seemed. My understanding is that the modems aren't capable. That being said, activation is done only with a Nexus or Pixel device (software), and while switching might be a hardware issue, the iPhone works on both Sprint and T-Mobile (I'm on an iPhone on T-Mobile after moving from Verizon), so Google Fi should still work.


As I mentioned, I AM already using Google Fi on my iPhone. What does not work is the hardware feature of handing off between providers mid-flight. With my [now deceased] Nexus X, a call would seamlessly [to me, anyways] switch between providers during medium/longer commutes. With my iPhone using just T-Mobile, calls get dropped during my daily route, as there are certain "T-Mobile blind spots".


Sorry, I completely misunderstood. Thanks for clarifying!


Is it actually a special radio, or just features which normally get disabled by e.g. Verizon programming the baseband? I'm assuming that the "special" radios are simply NOT using crippleware and setting some other registers which could be enabled on many more devices...if carriers played ball, which they don't want to for obvious reasons.


I'd have to research deeper to be sure (there is some FAQ at https://fi.google.com/about/faq/#supported-devices-7). But skimming through that page, I suspect this is a new-ish feature not available on most smartphone radios.


I believe that is only in passing. I honestly think the majority of laptop manufacturers are not visionaries when it comes to product specs. Only Apple and a couple of niche ones truly seem to know what they are doing. The rest see changes in trends only after a delay (but they do eventually see them).

I think the consumer masses moving away from the laptop market has dire implications for this market (far less competition; expect premium prices). But on the plus side, it should shift the market towards productivity, which is a big plus for some of us.


I agree with most[1] of the issues he is aiming to solve. But the approach would have been more useful had he built it on top of a superior existing system, namely Nix & NixOS. Docker is increasingly popular while missing most of the problems NixOS (and this new systemd vision) solve, so I'd say it is not wrong to assume there is definitely solid demand for this kind of app "packaging" (virtualization).

Tying this to systemd is an "interestingly" bold move, to put it mildly. People don't switch to btrfs just for the sake of an initrd replacement, after all, and forcing them to is openly asking for replacements to appear. So, I would be very careful if I wanted to see that vision materialize.

[1]: One obvious point of discussion is declaring exact versions of library dependencies, which in most systems leads to highly undesirable results: just compare the horrible world of Maven packaging (i.e., the Java world, mostly) to the efficient (but curated) world of Debian packages. But when you have exact declarable dependencies (where any dependency dictates exact versions of its own dependencies), reliably reproducible builds (the game-changing factor), and hopefully some level of automated testing, you will be fine: you can quickly upgrade your whole package portfolio to the newest version of the "security fixed" library, have the world get rebuilt against the new version, and get everything verified (using the tests that you have. You have some, don't you?). The problem of transferring only what actually changed is non-trivial but completely solvable in various ways (example: only send diffs, and issue new signatures/hashes for the diffs through an automated trusted system that does the diffing).
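
To make [1] a bit more concrete, here is a hypothetical sketch using a Nix overlay (the patch file name is made up): you patch one library, and everything depending on it gets a new hash and is rebuilt (and re-tested) against the fix:

    # hypothetical overlay: fix one library; every dependent
    # package changes hash and is rebuilt against the fix
    self: super: {
      openssl = super.openssl.overrideAttrs (old: {
        patches = (old.patches or []) ++ [ ./security-fix.patch ];
      });
    }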


I totally agree. I was very surprised Nix wasn't mentioned once in the article. They seem to be solving the same problem, but in a way that should work across different setups (no btrfs requirement for instance).

Systemd has set a number of good (imo) standards for distros to adhere to (most notably service management and logging), but the standard that the article describes doesn't seem like the way to go.

Whatever the case, I'm very interested in seeing how things will progress with Nix, Systemd and others.


You should probably do some research and find out what language(s) you like (note: there are many languages and stacks outside the MS ecosystem, some of which might be far more fulfilling than what you have experienced so far). In particular, take a look at functional languages and see if you like them. Look at some dynamic languages and see if you get happy with one. Also play with some properly typed languages (the ML family and Haskell, for instance) to see how those fare. Then also consider C++ (the widely used, but Frankenstein-style language), as it is not as horrible as it used to be. Some new languages might be fun too (I'd suggest looking at Rust, Go, and Clojure). Once you've done all that, you will probably have a better picture; without looking at these, you might miss out, or feel like you are missing out.

That all said, and as you might have already heard, the MS ecosystem is not as wide as many of these, but it is far more unified than most other frameworks.

My own opinion is that you will miss out on the innovation and competition happening outside the MS world, so I recommend getting some hands-on experience there if you are serious about starting your own business.


Awesome dashboard and project! Bookmarked to come back to it and use it :-)

P.S.: I wish the title were not so poorly chosen. Remember that the title really does matter when submitting anything to Hacker News.

