
In the late 90s/early 00s, I worked at a company that bought a single license of Visual Studio + MSDN and shared it with every single employee. In those days, MSDN shipped binders full of CDs with every Microsoft product, and we had 56k modems; it was hard to pirate. I don't think that company ever seriously considered buying a license for each person. There was no copy protection so they just went nuts. That MSDN copy of Windows NT Server 4 went on our server, too.

This was true of all software they used, but MSDN was the most expensive and blatant. If it didn't have copy protection, they weren't buying more than one copy.

We were a software company. Our own software shipped with a Sentinel SuperPro protection dongle. I guess they assumed their customers were just as unscrupulous as them. Probably right.

Every employer I've worked for since then has actually purchased the proper licenses. Is it because the industry started using online activation and it wasn't so easy to copy any more? I've got a sneaking feeling.


> In the late 90s/early 00s, I worked at a company that bought a single license of Visual Studio + MSDN and shared it with every single employee.

During roughly the same time period I worked for a company with similar practices. When a director realised what was going on, and the implications for personal liability, I was given the job of physically securing the MSDN CD binder, and tracking installations.

This resulted in everyone hating me, to the extent of my having stand-up, public arguments with people who felt they absolutely needed Visual J++, or whatever. Eventually I told the business that I wasn't prepared to be their gatekeeper anymore. I suspect practices lapsed back to what they'd been before, but it's been a while.


Primarily it's the reason you already know: restic and borg share the same model, but restic doesn't require an ssh-accessible filesystem on the remote end. Restic can send backups almost anywhere, including object storage like your Backblaze B2 (that's what I use with restic, too). I agree with OP: restic is strictly better. There's no reason to use borg today; restic is a superset of its functionality.
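If you want to try it, pointing restic at B2 looks roughly like this (bucket name and paths are placeholders; the env vars are how restic picks up B2 credentials):

    export B2_ACCOUNT_ID=<key-id>
    export B2_ACCOUNT_KEY=<application-key>
    restic -r b2:my-bucket:backups init
    restic -r b2:my-bucket:backups backup /home/me/data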

Thanks! Then I’ll look more at Restic :)

Does restic work well with truenas?

I don't know specifically, but it's a self-contained, single-file Go executable. It doesn't need much from a Linux system beyond its kernel. Chances are good that it'll work.

I simply use SQLite for this. You can store the cache blocks in the SQLite database as blobs. One file, no sparse files. I don't think the "sparse file with separate metadata" approach is necessary here, and sparse files have hidden performance costs that grow with the number of populated extents. A sparse file is not all that different than a directory full of files. It might look like you're avoiding a filesystem lookup, but you're not; you've just moved it into the sparse extent lookup which you'll pay for every seek/read/write, not just once on open. You can simply use a regular file and let SQLite manage it entirely at the application level; this is no worse in performance and better for ops in a bunch of ways. Sparse files have a habit of becoming dense when they leave the filesystem they were created on.
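A minimal sketch of what I mean, in Python with made-up names (a real one would want eviction and size accounting, but the shape is this simple):

    import sqlite3

    BLOCK_SIZE = 4096  # hypothetical fixed block size

    db = sqlite3.connect("blockcache.db")
    db.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
    db.execute("""CREATE TABLE IF NOT EXISTS blocks (
        file_id  INTEGER NOT NULL,
        block_no INTEGER NOT NULL,
        data     BLOB    NOT NULL,
        PRIMARY KEY (file_id, block_no)
    )""")

    def put_block(file_id, block_no, data):
        # INSERT OR REPLACE keeps refills idempotent
        db.execute("INSERT OR REPLACE INTO blocks VALUES (?, ?, ?)",
                   (file_id, block_no, data))
        db.commit()

    def get_block(file_id, block_no):
        row = db.execute(
            "SELECT data FROM blocks WHERE file_id = ? AND block_no = ?",
            (file_id, block_no)).fetchone()
        return row[0] if row else None  # None = cache miss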

I don't think the author could even use SQLite for this. NULL in SQLite is stored very compactly, not as pre-filled zeros. Must be talking about a columnar store.

I wonder if attaching a temporary db on fast storage, filled with results of the dense queries, would work without the big assumptions.


I think I did a poor job of explaining. SQLite is dealing with cached filesystem blocks here, and has nothing to do with their query engine. They aren't migrating their query engine to SQLite, they're migrating their sparse file cache to SQLite. The SQLite blobs will be holding ranges of RocksDB file data.

RocksDB has a pluggable filesystem layer (similar to SQLite virtual filesystems), so they can read blocks from the SQLite cache layer directly without needing to fake a RocksDB file at all. This is how my solution works (I've implemented this before). Mine is SQLite in both places: one SQLite file (normal) holds cached blocks and another SQLite file (with a virtual filesystem) runs queries against the cache layer. They can do this with SQLite holding the cache and RocksDB running the queries.
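To sketch the read path (reusing the get_block/put_block helpers from my comment above; fetch_block stands in for whatever fetches a missing block from object storage):

    def read_range(file_id, offset, length, fetch_block):
        # Serve an arbitrary byte range out of cached blocks,
        # filling misses via fetch_block (e.g. a ranged S3 GET).
        # Assumes the requested range lies within the file.
        out = bytearray()
        end = offset + length
        while offset < end:
            block_no, start = divmod(offset, BLOCK_SIZE)
            data = get_block(file_id, block_no)
            if data is None:
                data = fetch_block(file_id, block_no)
                put_block(file_id, block_no, data)
            take = min(len(data) - start, end - offset)
            out += data[start:start + take]
            offset += take
        return bytes(out)

Something like this is what the pluggable-filesystem read hook would end up calling; RocksDB never needs to see a real file.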

IMO, a little more effort would have given them a better solution.


Ah, clever. Since they chose RocksDB, I wonder if Amazon supports zoned storage on NVMe. RocksDB has a zoned-storage plugin that offers an alternative to your approach.

Being specific: AWS load balancers use a 60 second DNS TTL. I think the burden of proof is on TFA to explain why AWS is following an "urban legend" (to use TFA's words). I'm not convinced by what is written here. This seems like a reasonable use case by AWS.
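You can check this yourself against any ELB hostname (this one's a placeholder); the second column of the answer is the TTL, and for ELBs it starts at 60:

    dig +noall +answer my-app-1234567890.us-east-1.elb.amazonaws.com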

Not one of the downvoters, but I'd guess it's because this is only true with HATEOAS which is the part that 99% of teams ignore when implementing "REST" APIs. The downvoters may not have even known that's what you were talking about. When people say REST they almost never mean HATEOAS even though they were explicitly intended to go together. Today "REST" just means "we'll occasionally use a verb other than GET and POST, and sometimes we'll put an argument in the path instead of the query string" and sometimes not even that much. If you're really doing RPC and calling it REST, then you need something to document all the endpoints because the endpoints are no longer self-documenting.
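For anyone who hasn't seen it in practice: a HATEOAS response carries its own next steps, so a client discovers the API by following links rather than reading docs. Schematically (field names invented; not any particular spec):

    {
      "id": 42,
      "status": "pending",
      "_links": {
        "self":   { "href": "/orders/42" },
        "cancel": { "href": "/orders/42/cancel", "method": "POST" },
        "items":  { "href": "/orders/42/items" }
      }
    }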

HATEOAS won't give you the basic nouns to work with

Right, you wouldn't need HTML at all for LLMs, though. REST would work really well; self-documenting and discoverable is all we really need.

What we find ourselves doing, apparently, is bolting together multiple disparate tools and/or specs to try to accomplish the same goal.


But that is roughly the point here. If we still used REST we wouldn't need Swagger, OpenAPI, GraphQL (for documentation at least; it has other benefits), etc.

We solved the problem of discovery and documentation between machines decades ago. LLMs can and should be using that today instead of us reinventing bandaids yet again.


A lot of negative responses so I'll provide my own personal corroborating anecdote. I am intending to replace my low-code solutions with AI-written code this year. I have two small internal CRUD apps using Budibase. It was a nice dream and I still really like Budibase. I just find it even easier yet to use AI to do it, with the resulting app built on standard components instead of an unusual one (Budibase itself). I'm a programmer so I can debug and fix that code.

LLMs are great at reviewing. This is not stupid at all if it's what you want; you can still derive benefit from LLMs this way. I like to have them review at the design level where I write a spec document, and the LLM reviews and advises. I don't like having the LLM actually write the document, even though they are capable of it. I do like them writing the code, but I totally get it; it's no different than me and the spec documents.

Right, I'd say this is the best value I've gotten out of it so far: I'm planning to build this thing in this way, does that seem like a good idea to you? Sometimes I get good feedback that something else would be better.

If LLMs are great at reviewing, why do they produce the quality of code they produce?

Reviewing is the easier task: it only has to point me in the right direction. It's also easy to ignore incorrect review suggestions.

IMHO it's because you worked on the problem before asking the LLM for input, so you already have information and an opinion about what the code should look like. You can recognize good suggestions and quickly discard bad ones.

It's like reading: for better learning and understanding, it's advised that you think about and question the text before reading it, and then again after just skimming it.

Whereas if you ask for the answer first, you are less prepared for the topic, and it's harder to form a different opinion.

It's my perception.


It's also because they are only as good as their given skills. If you tell them "code <advanced project> and make no x and y mistakes" they will still make those mistakes. But if you say "perform a code review and look specifically for x and y", then it may have some notion of what to do. That's my experience with using it for both writing and reviewing the same code in different passes.

Steel-manning the idea, perhaps they would ship object files (.o/.a) and the apt-get equivalent would link the system? I believe this arrangement was common in the days before dynamic linking. You don't have to redownload everything, but you do have to relink everything.

> Steel-manning the idea, perhaps they would ship object files (.o/.a) and the apt-get equivalent would link the system? I believe this arrangement was common in the days before dynamic linking. You don't have to redownload everything, but you do have to relink everything.

This was indeed common for Unix. The only way to tune the system (or even change the timezone) was to edit the very few source files and run make, which compiled those files then linked them into a new binary.

Linking-only is (or was) much faster than recompiling.
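Schematically (file names invented), an update was a one-file recompile plus a relink against the vendor-shipped objects:

    cc -c conf.c                 # recompile just the site-local config
    cc -o unix conf.o kernel.a   # relink everything else from shipped objects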


But if I have to relink everything, I need all the makefiles, linker scripts and source code structure. I might as well compile it outright. On the other hand, I might as well just link it whenever I run it, like, dynamically ;)

And then how would this be any different in practice from dynamic linking?

My hobby language[1] also has no reference semantics, very similar to Herd. I think this is a really interesting point in the design space. A lot of complexity goes away when it's only values, and there are real languages like classic APL that work this way. But there are some serious downsides.

In practice I have found that it's very painful to thread state through your program. I ended up offering global variables, which provide something similar to but worse than generalized reference semantics. My language aims for simplicity so I think this may still be a good tradeoff, but it's tricky to imagine this working well in a larger user codebase.

I like that having only value semantics allows us, internally, to use reference-counted immutable objects to cut down on copying; we pass by reference internally while presenting pass-by-value to the programmer. No cycle detection is needed because it's not possible to construct cycles. I use an immutable data structures library[2] so that modifications are reasonably efficient. I recommend trying that in Herd; it's almost always better than copy-on-write. Think about the Big-O of modifying a single element in an array, or of building up a list by repeatedly appending to it. With pure COW it's hard to have a large array at all; it takes too long to do anything with it!
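To make the Big-O point concrete, here's the difference in Python, using pyrsistent as a stand-in for immer (pure COW copies the whole array on every append, so building n items is O(n^2) overall; a persistent vector shares structure and each append is roughly O(log n)):

    from pyrsistent import pvector

    # Pure copy-on-write: every "append" copies everything -> O(n^2) total.
    cow = ()
    for i in range(10_000):
        cow = cow + (i,)   # full copy each time

    # Persistent vector: structural sharing, no full copies.
    v = pvector()
    for i in range(10_000):
        v = v.append(i)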

For the programmer, missing reference semantics can be a negative. Sometimes people want circular linked lists, or to implement custom data structures. It's tough to build new data structures in a language without reference semantics. For the most part, the programmer has to simulate them with arrays. This works for APL because it's an array language, but my BASIC has less of an excuse.

I was able to avoid nearly all reference counting overhead by being single threaded only. My reference counts aren't atomic so I don't pay anything but the inc/dec. For a simple language like TMBASIC this was sensible, but in a language with multithreading that has to pay for atomic refcounts, it's a tough performance pill to swallow. You may want to consider a tracing GC for Herd.

[1] https://tmbasic.com

[2] https://github.com/arximboldi/immer


How do I square "he has debunked that" with the article about his brain fMRI and the results about his amygdala, linked above in this subthread? It's full of direct quotes from both Honnold and the doctors. Where did he debunk it... and how? He's got a more accurate analysis than the fMRI? Do you have a link?


There is no contradiction
