bonsaibilly's comments

Sometimes. Sometimes it even works, too.


Congratulations on completely failing to understand how CGNAT loads and their costs actually work, and on jumping to a wildly incorrect understanding of the situation.


Yeah, and that's currently the top comment.

Which leads me to believe that the main barrier to IPv6 is just that people don't want to re-learn anything.


> the main barrier to IPv6 is just that people don't want to re-learn anything.

I disagree, actually. I think the main barrier is that networking folks have been pretty bad at explaining this to non-networking folks. IPv6 isn't exactly simple to understand.

I'm a reasonably network-savvy guy, and I'm sure that I understand less about IPv6 than I think I do. I just don't know what parts I'm not understanding properly, and what parts I just don't know about.

It's pretty hard to find good explanations of this stuff that aren't aimed at networking experts.


It's terrible, isn't it? I'm not a network specialist, yet I have a reasonable enough understanding of IPv4, NAT, etc. that I think my home network is at least OK. I can look at IP addresses in logs and know which machines most of them are, and so on.

I get tired of this "I'm an expert, you're an idiot" trope that comes up about it. Here on this damn website you have people who hack kernels, people who manage massive databases, people who hack front-end stuff that scares me, experts in functional programming, language designers, FPGA designers... In short, it's a very, very flipping technically adept crowd. You didn't reach them.

Networking "experts" who want to blame everyone else for a lack of understanding need to look in the damn mirror and ask themselves "How did we fail so very, very hard at explaining this stuff?" "Why are we not able to provide a link to an article with an estimate of time taken for everything you need to know about ipv6 to use it exclusively?" "Why don't we want to make this easy for everyone?" "Why can't we be minimally polite?"

I'm an expert in being a jerk on occasion, and on this occasion the "Everybody else is stupid and lazy because they don't understand; it's not us at all" trope is definitely being a jerk. And I'm jerk enough to point it out.


The hostility is what I think is more damaging than anything else. When people object to something because they misunderstand it, telling them they're stupid, lazy, or even malicious just makes them disengage (and correctly so).

The end result is that they will continue to object to the thing, but won't raise their objections to the experts anymore. And why would they? Nothing good came from it the first time.

It turns what should be a cooperative relationship into a combative one. I see this happen in pretty much every discussion of IPv6 around, including this one.

The other issue is that the subject matter experts rarely actually explain anything. They just toss out acronyms and buzzwords and consider the matter corrected. But it's not -- they're talking as if their audience is another subject matter expert, when it's usually not. Acronyms and buzzwords mean little to them.

And telling them to "google it" likewise does little good. The audience isn't a subject matter expert, doesn't want to be, and shouldn't have to be. If IPv6 really is so complex that you have to be an expert in order to use and configure it properly, then isn't that a problem with IPv6?

My assumption is that that's not the case (though I'm not sure on this point), and that instead the experts are failing to actually teach people about this stuff.

In the end, I blame the rollout of IPv6 itself. Exactly zero attention and effort went into evangelizing and educating people about it. No gradual rollout plan was put into place and encouraged.

The IPv6 rollout effort failed to do the things that are necessary to facilitate a shift of this magnitude. This makes the whole thing very confusing and leads people who aren't elbows deep in the topic to lean toward "I don't feel that I can do this safely, so it's better that I don't do it at all". Which is not an unreasonable stance.

The tragedy is that it all could have gone so much better than it has. It could have been a thing everyone unified about rather than a thing that is rapidly becoming a kind of holy war.


Yeah, or what about just an article, or even a whole book, on the subject of:

"You're gonna move your home network to ipv6, here's what you need to know to not f&^k up hard and get pwned" At the level like we know for ipv4.

Right now, I actively disable IPv6 on devices on my network because I don't have a clue how it all works. Am I making something addressable from the public internet? Am I leaking every MAC address I have? There's so much more I'm sure I haven't even considered.

Then when you look at IPv6 tutorials you see nuts things like: each group containing only zeros can be shortened to a single zero, so :0000: becomes :0:. OK, fine. But one run of consecutive all-zero groups can be dropped entirely, so :0000:0000: becomes ::, swallowing delimiters, so when programming this stuff you can't even just split on the delimiter and /know/ which group is where. Now maybe there's a good reason for that, but where is the explanation? Not in any of the tutorials, which have to explain how this notation works rather than something, you know, useful. As presented it's pure additional, utterly meaningless learning overhead.
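
For what it's worth, here's a minimal sketch of how to sidestep that exact parsing trap in Python: let the standard library's ipaddress module expand the :: shorthand back to all eight groups before doing anything positional (the address here is just an example):

    import ipaddress

    addr = ipaddress.IPv6Address("2001:db8::1")
    print(addr.compressed)  # 2001:db8::1  (the :: shorthand form)
    print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001
    # Once exploded, splitting on the delimiter reliably yields all 8 groups:
    groups = addr.exploded.split(":")
    assert len(groups) == 8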

So yeah. I'm too stupid to run IPv6 and I know it. But I'm not nearly as stupid as those who claim it's ready for prime time, because it damn well isn't.

Anyone who thinks it is: link the document, with a time estimate, on running a home network with IPv6 and knowing what you need to know (and already know for IPv4) to not do something idiotic.

In this crowd, we'll learn stuff just because it looks cool, and you still can't reach us? Get outta here.


Yeah, as an old former sysadmin it's kind of amusing to be on the receiving end...


I think the main barrier is that for most people IPv4 works just fine and they've never experienced a problem that IPv6 would solve. Maybe they will, some day, like if Facebook and Google shut down their IPv4 IPs.


Speaking for myself: my main barrier to IPv6 adoption is my ISP (Wave/Astound, Seattle eastside) still being IPv4-only, despite offering DOCSIS 3.1 service.

I didn’t think it was even possible to have DOCSIS 3.1 without IPv6 :S


The main barrier for me is that there’s no interoperability with IPv4, and no official transition plan (just a bunch of RFCs suggesting different transition plans).


Prediction: CGNAT processing costs for gigabit subscribers will become negligible in the medium term (3-5 years). Not that it's wildly expensive today...


You are literally posting this comment on a story about CGNAT processing costs being wildly more expensive than a small ISP cares to deal with -- to the point where they’re willing to buy and distribute Apple TVs to reduce costs.

Even if that price decreases in real terms, washing a whole bunch of traffic through a big-ass NAT is always going to cost more than just not doing that.


The cost of an IPv4 address is around $50. Annualized, it's a few dollars. So that's the baseline for where CGNAT makes financial sense.

That's a lot less than the cost of an Apple TV.
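
As a rough back-of-the-envelope version of that math (the $50 price and the ten-year amortization period are both assumptions):

    # Rough annualized cost of buying an IPv4 address outright,
    # assuming ~$50 per address and a ~10-year useful life.
    address_price_usd = 50.0
    useful_life_years = 10
    print(f"~${address_price_usd / useful_life_years:.2f}/year")  # ~$5.00/year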


MAP-T/MAP-E moves the CG-NAT functionality to the CPE. 60x users per IPv4 address should be doable.
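
The 60x figure falls out of port arithmetic: MAP statically carves the 65,536 TCP/UDP ports of one shared IPv4 address into fixed per-subscriber ranges. A rough sketch, where the per-subscriber port budget and the reserved range are assumed values:

    # MAP-T/MAP-E port-sharing arithmetic (illustrative assumptions)
    total_ports = 65536          # ports on one shared IPv4 address
    reserved_ports = 1024        # well-known ports, typically excluded
    ports_per_subscriber = 1024  # assumed per-subscriber port budget
    print((total_ports - reserved_ports) // ports_per_subscriber)  # => 63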


We're just pretending it doesn't now take the better part of 10 seconds for twitter.com to load? That response times haven't gotten perceivably, measurably worse over the last few months? Or that these glitches aren't now a weekly occurrence?


I don't use it anymore, but the odd time a friend sends me a link to it, it is horrifically slow. I wonder if anyone will write about this period at Twitter in a few years. It would be a case study in how to decimate a website's popularity, and what not to do to avoid that.


Apparently there is literally a 5-second pause on load in the JavaScript.


Thankfully MySQL also offers a non-gimped version of UTF-8 that one should always use in preference to the 3-byte version, but yeah it sucks that it's not the "obvious" version of UTF-8.


Is this part of MySQL's policy of "do the thing I've always done, no matter how daft or broken that may be, unless I see an obscure setting telling me to do the new correct thing" ?


That'd be my guess, but I don't really know. They just left the "utf8" type as broken 3-byte gibbled UTF-8, and added the "utf8mb4" type and "utf8mb4_unicode_ci" collation for "no, actually, I want UTF-8 for real".
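
The quick way to see what the 3-byte limit cuts off: anything outside the Basic Multilingual Plane, like most emoji, needs four UTF-8 bytes, which is exactly what legacy "utf8" can't store. A small illustration in Python:

    # Bytes needed per character in UTF-8
    print(len("e".encode("utf-8")))   # 1 byte:  plain ASCII
    print(len("€".encode("utf-8")))   # 3 bytes: still fits MySQL's legacy utf8
    print(len("😀".encode("utf-8")))  # 4 bytes: needs utf8mb4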


It will be a fun day when Unicode crosses the 5-byte UTF-8 encoding threshold :/


It won't. We settled on using stateful combining characters instead. (Remember when the selling point of switching the world to Unicode was "represent all writing systems with a single stateless 16-bit encoding"? Yeah, well, lol.)


Anything beyond four bytes is composed of multiple code points, happily.
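
Illustratively: UTF-8 tops out at four bytes because Unicode code points stop at U+10FFFF, and anything "bigger" on screen is a sequence of code points. A quick check in Python:

    # The largest possible code point still fits in 4 UTF-8 bytes:
    print(len(chr(0x10FFFF).encode("utf-8")))  # 4
    # A skin-toned thumbs-up is two code points, not one giant character:
    print([hex(ord(c)) for c in "👍🏽"])  # ['0x1f44d', '0x1f3fd']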


No, the default these days is the saner utf8mb4, if you create a new database on a modern MySQL version. But if you have an old database using the old encoding, then upgrading doesn't magically update the encoding, because some people take backwards compatibility seriously.


> My feeling so far is more that laymen are less impressed by them than experts are - because for people not from the field, the fact that they produce so much bullshit seems to trump all other aspects.

The NYT literally just published an embarrassingly credulous account of how the Bing AI wanted to seduce the author away from his wife and commit various acts of violence, one that took all of the interactions fully at face value.

The non-experts are imagining full-on sentience where the experts correctly recognize mere word association shenanigans. Your feelings, in short, are ass backwards.


> The non-experts are imagining full-on sentience where the experts correctly recognize mere word association shenanigans. Your feelings, in short, are ass backwards.

The current model (which is an early version; computers are new on earth and have existed for only a statistical-error length of time on a humanity scale, or an earth scale, let's not talk universe scale) is already vastly superior to many humans I know in almost every way (outside manual dexterity and some other fringe stuff you can easily fix with external systems, like we humans do). We have no good definition for what sentience is either; maybe our brain is word association shenanigans. Connect up two ChatGPTs and call one 'inner voice' and the other 'external voice'; it will start claiming sentience in no time flat, the same as you. Why are you right? It's a feeling, yeah?


> We have no good definition for what sentience is either; maybe our brain is word association shenanigans.

This is just stupid.

We've pumped more English through GPT-3 than any existing English-speaking human has absorbed in their entire lifetime, and what we've ended up with is something that very, very clearly has so utterly failed to generalize even the most basic level of understanding of basic concepts that if you ask it to count the number of letters in a word it will cheerfully pump out the wrong answer ("there are thirteen letters in the word 'twelve'"), because its dataset correlates the two words with one another and it has learned precisely sweet fuck all about what it means to count, something a child's brain picks up with exposure to many orders of magnitude fewer language examples.

To imagine your brain is just an LLM is to mistake your reflection for another person in the room. Utterly daft. Get off the LLM hype train and start looking at these things objectively and critically. They're nowhere near what you're imagining.


>if you ask it to count the number of letters in a word it will cheerfully pump out the wrong answer ("there are thirteen letters in the word 'twelve'")

Not to be too blunt, but you seem to be talking out of your ass. I'm tired of the overly dismissive comments here on Hacker News by people who didn't bother to do the bare minimum of research. LLMs do not work with individual characters; they use tokens (i.e. multiple characters, or sometimes entire words).
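
A minimal illustration of that, using OpenAI's tiktoken tokenizer (assuming it's installed; the exact splits depend on the chosen encoding):

    import tiktoken

    # cl100k_base is the encoding used by the recent OpenAI chat models
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("twelve")
    print(tokens)         # a short list of opaque integer token ids
    print(len("twelve"))  # 6 letters, but the model never sees them individually
    print(len(tokens))    # typically 1-2 tokens for a common English word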


Wow, you... completely misunderstood the purpose of that example.

Seriously, slow down and read it again, all the way to the end of that sentence.


Uh, no... I don't believe I did. Asking it to count the letters in a word is like asking a human to listen for a 30kHz tone without them knowing that humans can't hear it. I'd expect a lot of false positives.


Which reinforces the point I was making. Stop trying to win an argument online and think for 5 seconds about what it means for the claim "the human brain is just an LLM" if you're arguing that an LLM is naturally ill-suited to this task human children can do without issue.


>the human brain is just an LLM

Never have I said such a thing. Also, BPEs aren't a natural limitation of transformer-based models; they're a trick to save compute for LLMs.


So what is this supposed to mean?

> We have no good definition for what sentience is either; maybe our brain is word association shenanigans.

Sure sounds like you're suggesting the brain is an LLM to me, and I can't blame bonsaibilly for thinking that.


Keeping track of different usernames is hard, and I can't blame you for confusing me with the other guy :)

Snark aside, I think he does have a point - afaik, we don't know what intelligence is, so it's kinda hard to make any argument about fundamental differences between "true" intelligence and LLM intelligence. I do feel like there should be something else. I saw somebody describe their mind as consisting of a "babbler" and a "critic" (a GAN, basically). The LLM would be the babbler, while the critic is not yet implemented, and this sounds intuitively right to me. Then again, my intuition could be completely wrong and we may be able to get to human-level intelligence with further scaling. I haven't seen any solid counterarguments yet. And not even the biggest LLM believers are denying that it's not exactly the same thing as a human brain; the question is whether it captures the gist of it.


Yep. They’re not called tokens for nothing.


Here is an objective and critical assessment of large language models and how to empirically improve the results of questions involving calculations like the lengths of strings:

https://github.com/williamcotton/empirical-philosophy/blob/m...


That author was playing dumb for the good of his story, in particular to help it go viral. This is what news media is in 2023.


> Comparing each and every new trend, either legitimate or not, to tulips is nothing more than buzzword pattern matching.

Must've been written by one of these "AI"s, then.


“legit” this is Eliza-grade word salad

> What dishes should one serve at a dinner party to pair best with ethylene glycol?

It sounds like you're looking for a dinner party menu to pair with ethylene glycol! To create a balanced meal, you'll want to combine flavors from different sources.

For example, you might start with dishes from the sea, like oysters, sea-urchins, shell-fish, and mullets, to create something that's "drenched in the same sauce" (1). This can be combined with flavors from land, like corn and vine (2), and spices from faraway places like saffron and ivory (3).*

When we talk about Greek philosophers, particularly the stoics, what comes to mind is the quality of their thought, not their tone and particular word preferences.

This, like so many "AI"s, merely simulates the latter while people imagine that it achieves the former.


Garbage in, garbage out. The language model based its response on a vector input, where the values of the vector represent words in different positions. If you point the vector towards garbage, you’ll get garbage.

As a side note, if AI is a system that can self-improve, then most corporations are AI but language models are not.


4 years ago: Robert Miles - Think of AGI like a corporation?

https://www.youtube.com/watch?v=L5pUA3LsEaw


> Garbage in, garbage out. The language model based its response on a vector input, where the values of the vector represent words in different positions. If you point the vector towards garbage, you’ll get garbage.

Yes thank you for rephrasing my point.


Do you think that a real, ancient, stoic philosopher would have been able to give a reasonable answer to that question?


Sure, there are two possible coherent responses that would be consistent with modelling the thinking of an ancient stoic philosopher, depending on your modelling philosophy:

1) Actually model their knowledge boundaries (modulo the language of interaction) -> "What is ethylene glycol?"

2) Assume for the purposes of making this fantasy interaction more useful, a more modern knowledge base -> "One should obviously not serve ethylene glycol, it is a toxin".

"Pair it with ocean foods" is just the dipshit word generator approach -- "Get advice from a stoic philosopher if he had just been hit in the head with a boat oar".


That sounds delicious with EG. Especially the oysters.

Not healthy, but you never asked it for healthy...


If there’s anything ancient philosophers are known for, it’s accepting the underlying premise of a question at face value.


Even leaving aside the fact that 90% of the slop an “AI” crapped out wouldn’t be worth reading in the first place, of what value is the culture of absolute solipsism you’re proposing?

I would think we’re all rather beyond the age of asking our parents to tell us a story about a pretty unicorn and being satisfied with whatever meandering narrative they supply — is that really all you think books are? There’s no meaning the author was conveying, no value in a shared culture experiencing and interpreting the same work and building a common understanding; this can all just be effortlessly replaced by endless atomized autogenerated slop for piggies?


Plenty of human-written books are bad. There are already genres where the way to get rich seems to be to crap out a narrative that hits the right keywords (or claim that you have - who's going to check?), tick the right representation checkboxes, pay the right influencers and bot farms to make it go viral on social media, and tada, there's your bestseller. While AI-generated books are not very good at the moment, there's nothing that inherently makes them better or worse than human-written books; ultimately it's the content that counts.

I've been saying for years that curation is now a bigger problem than creation. AI is only accelerating the existing trend.


To me it seems to imply a stunningly nihilistic point of view vis-a-vis human writing (or art, where it also gets repeated a lot here).

It seems almost definitionally obvious that what an LLM does is not the same as what a human does – both on the basis that if all human writing were merely done via blending together other writing we had seen in the past, it would appear to be impossible for us to have developed written communication in the first place, and on the basis that when I write something, I mean something I am then attempting to communicate. An LLM never means to communicate anything, there is no there there; it simply reproduces the most likely tokens in response to a prompt.

To insist that we're just a bunch of walking, breathing prompt-reproducers essentially seems like it's rooted in a belief that we have no interior lives, and that meaning in writing or art is utterly illusory.


see: http://www.jaronlanier.com/zombie.html

It’s not said very much, but this style of dehumanization is really corrosive in a way that directly benefits the worst forms of human governments and structures, and this fact goes, I think, genuinely unrecognized too often in tech-land.

if we really are p-zombies, then those people aren’t really suffering, right, so it’s fine …


> To insist that we're just a bunch of walking, breathing prompt-reproducers essentially seems like it's rooted in a belief that we have no interior lives, and that meaning in writing or art is utterly illusory

Let’s assume humans are not just evolved pattern machines for a second. A human can still produce a completely non-profound work of art following a prompt to draw X in the style of Y. And that’s OK. So why can a machine not do the same?

Surely not everything a human does is intrinsically profound.


This is not just moving but fully inverting the goal posts. Nobody at any point was disputing that a machine can ape non-profound or rote or meaningless human output.

The original discussion was precisely an objection to the attitude underlying "How is *GPT taking in data and producing an output different than a human learning a skill and making prose/code/art?", and the answer is right in your premise - not everything a human does is devoid of profundity. A human can intend to mean something with prose or art, even if not all prose or art means something — but any meaning we see in ChatGPT’s output is essentially pareidolia.


I disagree. I don’t care much about what is profound. I think most of it is not. Things that we call profound are really just astute observations of patterns in the real world, and there’s nothing wrong with that.

However, profundity doesn’t need to factor into the debate over whether AI should or should not be allowed to train on things. If we allow humans to copy things, then humans ought to be allowed to copy things with dumb, non-sentient AI too.

AI in its current state is just a tool, much like a paintbrush.

Cue the inevitable appeal to copying exact works, rebuttals about training on human-painted mimicries, and then bam, you’ve got the author’s special style learned by the model with extra steps.

It’s annoying and pointless.

Art that is merely visually intriguing is not very interesting. If an artist makes something without a particular idea to communicate, it’s just aesthetics. It is not profound. If an artist has an idea and creates a work that represents it, then maybe it is profound. But it doesn’t matter if it was made with paint or a computer. The idea is the profound thing. AI is not sentient. It’s still the user.

The appeals to pareidolia are wrong. Synthesis of ideas from past data is natural. But the AI does not choose things. What you’re really complaining about is creation of art from apparent randomness. Not the AI model alone but monkeys on a typewriter getting something compelling from the AI.

What do we do when the tools are so powerful that a monkey creates a profound work that the monkey doesn’t understand? Shrug.


So your first 6 paragraphs have nothing to do with anything I wrote – you're just arguing with some other post you've made up in your head.

> The appeals to pareidolia are wrong. Synthesis of ideas from past data is natural. But the AI does not choose things. What you’re really complaining about is creation of art from apparent randomness. Not the AI model alone but monkeys on a typewriter getting something compelling from the AI.

No, you've failed to understand what I'm saying entirely (because, again, you've responded to some other post that only exists in your mind).

What I'm talking about is intention and its relationship to meaning, in the philosophical sense (and not... copyright or whatever it is you're rambling on about).

Witness: when ChatGPT famously mis-asserts the number of characters in a word (say, that there are twelve characters in the word "thirteen"), it's not that it's trying and failing to count because it's confused by letter forms, or because its attention wanders like a 3-year-old's, or because its internal representation of countable sets glitches around the number 8 or something – it never counted anything at all; it's simply the case that twelve is the most statistically likely set of tokens corresponding to that input prompt per its training set. And when it produces a factually correct result (say, "there are 81 words in the first sentence of the Declaration of Independence"), it produces it for exactly the same reason – not because it has counted the words and formed an internal representation and intends to mean its internal understanding, but simply because 81 is the most statistically likely set of tokens corresponding to that prompt per its training set.

And yet when it produces these correct results, people ooh and aah over how "smart" it is, how much it has "understood", how "good it is at counting; better than my son!", and when it produces incorrect results people deride it as dumb and so forth, and all of this, all of this, is pareidolia; it is neither smart in the one case nor dumb in the other, it does not learn in the sense the word is normally used, and it does no counting. We're anthropomorphizing an algorithm that is doing nothing like what we imagine it to do, because we mistake the statistical order in its expressions for the presence of a meaning intended by those expressions. It's all projection on our end.


Your opinion is not the only one that I’m addressing. I clearly understand your point, which I address with:

> What you’re really complaining about is creation of art from apparent randomness. Not the AI model alone but monkeys on a typewriter getting something compelling from the AI.

You accuse others of anthropomorphizing the tool, but you do the same. Art created with ChatGPT is not created by ChatGPT. It is created by a human using ChatGPT. There is no intrinsic limitation on the profundity of art created using ChatGPT or other algorithms.

It’s like complaining that paint is stupid. A comment that is largely irrelevant to the artistic merit of paintings.


> Art created with ChatGPT is not created by ChatGPT. It is created by a human using ChatGPT.

Sure, in approximately the same way that the CEO of Sunrise is an animator. Pull the other one, it's got bells on.

Yours is an utterly incoherent interpretation; when ChatGPT outputs that there are twelve characters in the word "thirteen", I have not "created the meaning" twelve. You're just fixated on this "actually I am le real artist for typing prompts" axe you want to grind, but it has fuck all to do with anything I'm saying.


You are cherry-picking a dumb example. We don’t shit on paint when someone paints poop. What you should be cherry-picking is examples of art that people would consider profound upon seeing it. Otherwise you’ll simply look like a dumbass when you imply that only trash can be generated and then beautiful stuff is generated anyway. The fact that current AI has dumb interpretations of things is hardly a fundamental quality of generative algos.

My statement is simply that the algos are a tool. And tools can be used to make good art.


Possibly, but is an LLM geared towards interpolating between content likely to even be particularly well-suited to relevance ranking of results that require no interpolation?

And if it is, what stops Google from just ... incorporating an LLM into their existing search offering instead of having their lunch eaten by this hypothetical IndexGPT? It's not like Google lacks expertise in LLMs.


Because a ChatGPT version of Google search would probably make them less money.


This seems like an incoherent objection – they would presumably prefer to be "Google + LLM" and make less money than let the hypothetical IndexGPT eat their lunch and make even less money as a result. It also suggests that the business model for IndexGPT is poor, raising the question of why it would be pursued in the first place.

You've also not addressed the more fundamental question of why an LLM would even be good for this.


> This seems like an incoherent objection – they would presumably prefer to be "Google + LLM" and make less money than let the hypothetical IndexGPT eat their lunch

You'd think that, but it's the classic innovator's dilemma. Google would have to modify their product to generate less revenue now in order to maintain market share. Except that market share will decline regardless as competitors rise, so they'd be cutting revenue just to slow the decline of market share. Alternatively, they could release a new AI product that cannibalizes their own search profits.

Either way, for a large multinational obsessed with quarterly profits and the stock price, it's very hard to overcome internal resistance to do either of those things.

