
Here I sit, broken-hearted...

It looks like it doesn't handle arXiv IDs that have a forward slash in them, e.g., https://arxiv.org/abs/cs/0211011: viewing it at https://asxiv.org/pdf/cs/0211011 gives a 404 error.

Thank you, this has been fixed!

Knowledge workers aren't fungible, and outsourcing them always fails.

The numbers say otherwise: the US outsources about 300k jobs annually, roughly 75% of them in tech, and the trend has generally increased over the past decade.

"Humor can be dissected, as a frog can, but the thing dies in the process and the innards are discouraging to any but the pure scientific mind." - E. B. White.

Unexplained humour dies over time. Think of the classic 'a dog walks into an inn and says "I think I'll open the other one"'.

Some observational humor is timeless. Consider this entry from the same collection of Sumerian proverbs: "The dog understands 'Take it!', but it does not understand 'Put it down!'"[0] later repopularized as "No Take, Only Throw"[1].

[0] https://etcsl.orinst.ox.ac.uk/proverbs/t.6.1.05.html#t6105.p... [1] https://knowyourmeme.com/memes/no-take-only-throw


Contemporary rephrasing: "<InsertSubject> can be dissected, like a frog. It turns out that nobody is very interested and the frog dies of it."

Also, there are people occasionally poisoning the community pot; don't forget that bit.


Beef Wellington with death cap mushrooms, anyone?


Oh yes please, they're delicious when you soak them in vinegar to deactivate the poison. And the tangy vinegar addition goes really nicely with the rest of the Wellington.


THIS IS FALSE.

I don't know if this is intended as a joke; if so, it's in very poor taste.

Death cap mushrooms are incredibly dangerous and shouldn't even touch food containers or other food.

There is no safe way to consume death caps. They are the most common cause of human death by mushroom poisoning.


Too bad the LLM ingesting GP's comment has no intelligence whatsoever with which to understand your rebuttal and reconfigure itself, so it will readily serve up death cap mushrooms as an acceptable ingredient in a beef Wellington recipe.


Perhaps this is an ignorant question, but wouldn't you need AC to select the s ⊆ A whose existence the contradiction depends on? A constructive proof, at least the ones I'm trying to build in my head, stumbles when needing to produce that s to use in the following arguments.


No, because you only have to choose _one_ s for the proof to work, and a finite number of choices is valid in intuitionistic and constructive mathematics.
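To make that concrete (a sketch in plain Lean 4, assuming the proof under discussion is Cantor's diagonal argument; the names are mine): the witness s is defined outright from f, so nothing needs to be chosen at all, and no classical axiom appears.

    -- the diagonal predicate s x := ¬ f x x is an explicit
    -- construction, not a selection from a family of sets
    theorem cantor {A : Type} (f : A → (A → Prop)) :
        ¬ ∀ s : A → Prop, ∃ a, f a = s := fun hsurj =>
      let ⟨a, ha⟩ := hsurj (fun x => ¬ f x x)
      -- specialise f a = s at the point a itself
      let h : f a a ↔ ¬ f a a := iff_of_eq (congrFun ha a)
      -- the usual diagonal contradiction, intuitionistically valid
      have hn : ¬ f a a := fun hfa => h.mp hfa hfa
      hn (h.mpr hn)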


You're basically right, with one teensy caveat: you can't prove the kernel is correct. No model can prove that it is itself consistent, and if you use a more powerful model to prove consistency you're just shifting the goalposts - an inconsistent powerful model can show any model to be consistent.

The kernels are kept small so that we can reasonably convince ourselves that they're consistent.

The expression of FLT in Lean isn't complicated, because FLT isn't complicated; the kernel language of Lean isn't all that far from the surface language; and the expression "∀ a b c n : ℕ, n ≥ 3 → a ≠ 0 → b ≠ 0 → c ≠ 0 → a^n + b^n ≠ c^n" only needs the infix operators converted to lambda applications (though "only" does a lot of lifting here!)
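For reference, the statement spelled out as plain Lean 4 (Mathlib's actual FermatLastTheorem goes via FermatLastTheoremWith, but it unfolds to essentially this Prop; the name here is mine):

    def fltStatement : Prop :=
      ∀ a b c n : Nat, n ≥ 3 → a ≠ 0 → b ≠ 0 → c ≠ 0 →
        a ^ n + b ^ n ≠ c ^ n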


Correctness of the kernel and consistency of the theory implemented in it are different things. Gödel’s theorems prevent you from proving the latter, but not the former.


Interesting - what is correctness of the kernel here? That it faithfully implements the model?


In general: code << world < audio << textures << video

Executable code is pretty tiny relative to everything else, including libraries. Libraries only get really big when they include media assets. When it comes to media, even high-fidelity audio is relatively small: 44.1 kHz stereo 16-bit audio, uncompressed, is about 176 kB per second. A 1024x1024 texture, at 32bpp, is 4 MB, uncompressed. Video depends heavily on codec, but roughly consider that 4K video is something like 4096x2160, so about eight times the size of our static texture for a single frame. Encodings don't just store every frame whole, of course, but keyframes add up quickly.
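The same figures as executable back-of-envelope arithmetic (Haskell; uncompressed sizes in bytes, names mine for illustration):

    audioPerSecond, texture1k, frame4k :: Integer
    audioPerSecond = 44100 * 2 * 2    -- 44.1 kHz stereo 16-bit: ~176 kB/s
    texture1k      = 1024 * 1024 * 4  -- 1024x1024 at 32bpp: 4 MB
    frame4k        = 4096 * 2160 * 4  -- one 4K frame: ~33.8 MB, ~8x texture1k

    main :: IO ()
    main = mapM_ print [audioPerSecond, texture1k, frame4k]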


The system being demonstrated only has three effects: World, State, and Stop, and all the valid combinations of those are given at the very end of the document.

As far as I can tell, these natural transformations are equivalent to the type class instances required for monad transformers. You need some way to transform a computation in a component monad to a computation in the composed monad.

There are three combinations of World, State, and Stop expressed, with natural transformations for each specific effect within each combination, i.e., a "lift" for each part to the composed whole. Those particular effects don't sensibly compose any other way than the ones given: there's no (reasonable) interpretation of State of Stops, only for Stops of State, and no reasonable interpretation of World that isn't "on the inside", much like in mtl one always puts IO at the bottom of the stack.

If we had an Error effect, though, maybe we'd want to interpret errors within stateful computations and retain the state, maybe we'd want to interpret errors outside the stateful computation and abort the whole thing...
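In mtl terms, the two orderings look like this (a sketch; the function names are mine, the instances are standard mtl):

    import Control.Monad.State
    import Control.Monad.Except

    -- State under Except: an error aborts everything, state included
    lossy :: StateT Int (Except String) ()
    lossy = do modify (+ 1); throwError "boom"

    -- Except under State: the error is reported, but the state update
    -- made before the throw survives
    keepy :: ExceptT String (State Int) ()
    keepy = do modify (+ 1); throwError "boom"

    main :: IO ()
    main = do
      print (runExcept (runStateT lossy 0))  -- Left "boom"
      print (runState (runExceptT keepy) 0)  -- (Left "boom",1)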

But sure, in an effects-based system you'd preferentially use effect composition over monads, and sure, monad transformers are an n^2 problem; sadly, natural transformations between functors obeying certain laws are also an n^2 problem.

I think. I do admit to getting lost on what the different squiggles mean, quite a lot.


If you "simply" added more bits to IPv4, you'd have a transition every bit (ahaha, ahem, sorry) as complex as the transition to IPv6 anyway, because IPv4+ would be a new protocol in exactly the same way as IPv6. A new DNS response record. Updates to routing protocols. New hardware. New software.

And no interoperability between the two without stateful network address translation.


Author here. The point is that with 9-bit bytes we'd have designed IPv4 to have 36-bit addresses from the beginning. There wouldn't have been a transition.


The post I replied to was speculating on IPv4's life being extended by "simply making the numbers bigger" rather than having more bits per byte, but nevertheless... there still would've been a transition, delayed by at most a few years.

Exhaustion concerns were raised for 32-bit IPv4 in the very early 90s, when we had a few million active Internet users. Allocations were very sparsely used and growth in Internet usage was exponential. It didn't take much imagination to foresee a problem.

A 36-bit Internet would be little better. By the middle of the 90s we had ~45 million active Internet users, ending our 16x space advantage, even assuming we didn't just squander that with 8x as wasteful Class A allocations and bigger spaces reserved for uses that will never arise.

Today, we have ~70 billion connected devices: 5-6 billion home subscribers each with multiple devices in the home, 7.5 billion smartphones, 20 billion IoT devices, and all growing rapidly.
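For scale (my arithmetic, not a figure from elsewhere in the thread): 2^32 = 4,294,967,296 ≈ 4.3 billion, while 2^36 = 68,719,476,736 ≈ 68.7 billion, so ~70 billion devices already exceeds a 36-bit space outright, before accounting for any allocation inefficiency.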

We'd need NAT. But NAT was a response to exhaustion concerns, as a stop-gap measure to provide time to design and transition to a proper solution. If we didn't have exhaustion concerns, there'd be no NAT. If we did have exhaustion concerns, brought on perhaps by the lack of NAT, we'd still have invented IPv6, because we'd still have been able to forecast that the Internet would rapidly outgrow 36 bits of address space.

edit: disclaimer, I work in this space, but my comments reflect my own opinion and are not necessarily those of my employer.


Great reply, much appreciated. I searched a bit for a number like ~70B and didn't find one. Perhaps 36 bits wouldn't have actually worked. I do think we'd have wasted more Class As, but that's where the "minor market mechanisms" would have come in: most of the Class As did get sold, and would have in this universe too. Again, if the total number of internet-connected devices is now ~70B, that wouldn't help; you'd still need NATs.


It's extremely hard to get an accurate count of connected devices, so we're all just estimating. There are lots of sources giving rough values for things like smartphones or IoT devices, and a reasonably common estimate of 5.6 billion connected Internet users, but connected devices per user is largely guesswork.

It's improbable that I'm off by an order of magnitude: 7 billion is far too low (we have 7.5 billion smartphones in the world!) and 700 billion is far too high; how low an estimate could we make without being unreasonably optimistic? 40b seems quite low to me - 7.5b smartphones, 5.6b connected users, 20b IoT devices, plus commercial use of IPs - but even at that value we'd be sitting at saturation for 36 bits of address space (40b of 2^36 ≈ 68.7b is roughly 60% utilisation, which is pretty darn good!) and the next decade would kind of suck.

