Hacker News | A4ET8a8uTh0's comments

Not parent, but why/how is not always difficult; actual execution tends to be difficult. There is also a more cynical take, but I try not to engage in that on HN.


Ahh, but you miss out on the 'cool' aspect embodied by the letter X; the underlying meaning effectively refers to the title of the movie, and other things are likely going over my head. The Gige prefix does not have the same impact. Alien in fake Latin sounds a lot better.


Greek, not Latin


Thank you for catching this. Will leave original in place.


Yeah, if they used Z instead of X, the Brits would have said Zedenomorph, and that's just not right in any language.


You are jesting, but there is some wisdom to that post. No reasonable person is suggesting a global company change direction on the basis of one post on the internet, but the advice provided is not without merit. Surely, a company of that size can do some research to see if it is a viable path. In fact, if it does anything right, it should have people like that ready to do appropriate analysis.

I have my thoughts on the matter and cautiously welcomed their move into GPUs ( though admittedly on the basis that we -- consumers -- need more than the amd/nvidia duopoly in that space; so I am not exactly unbiased ).


>but there is some wisdom to that post.

That's just speculation. There's no guarantee that would have happened. Nobody has a crystal ball to guarantee that as the outcome.

It's like saying that if someone had killed Hitler as a baby, that would have prevented WW2.


I think you may be either misinterpreting my post or misunderstanding the sequence of events.

What do you think has happened so far?

Your mental model of the world may help me understand the point you are trying to make.


I'm saying nobody can guarantee the claim of the GP I replied to: that if Intel had produced mediocre GPUs with 64+ GB of RAM, it would have magically helped them rise to the top of ML HW sales and saved them.

That's just speculation from people online. I don't see any wisdom in that like you do; all I see is just a guessing game from people who think they know an industry when they don't (armchair experts, to put it politely).

What made Nvidia dominant was not weak GPUs with a lot of RAM. The puzzle of their success had way more pieces that made the whole package appealing over many years, and great market timing also helped. Intel making underperforming GPUs with a lot of RAM would not guarantee the same outcome at a later time in the market with an already entrenched Nvidia and a completely different landscape.


Your comments that nobody knows anything for sure are generically applicable to any discussion of anything.

But since they obviously apply just as well to Intel itself, it is a poor reason to dismiss others' ideas.

> What made Nvidia dominant was not weak GPUs with a lot of RAM.

Intel doesn’t have the luxury of repeating NVidia’s path in GPUs. NVidia didn’t have to compete with an already existing NVidia-like incumbent.

That requires no speculation.

Competing with an incumbent via an underserved low end, then moving up market, is called disruption.

It is a very effective strategy since (1) underserved markets may be small but are immediately profitable, and (2) subsequent upward growth is very hard for the incumbent to defend against. The incumbent would have to lower their margins and hammer their own market value.

And it would fit with Intel’s need to grow their foundry business from the low end up too.

They should take every low-end underserved market they can find. Those are good cards to play for ambitious startups and comebacks.

And the insane demand for both GPUs and chip making is increasing the number of such markets.


<< That's just speculation from people online. I don't see any wisdom in that like you do; all I see is just a guessing game from people who think they know an industry when they don't (armchair experts, to put it politely).

True, it is just speculation. 'Any' seems to be a strong qualifier. One of the reasons I troll the landscape of HN is that some of the thoughts and recommendations expressed here have ended up being useful in my life. One still has to apply reason and common sense, but I would not dream of saying it has no ( any ) wisdom.

<< What made Nvidia dominant was not weak GPUs with a lot of RAM.

I assume you mean: 'not in isolation'. If so, that statement is true. AMD cards at the very least had parity with nvidia, so it clearly wasn't just a question of ram.

<< The puzzle of their success had way more pieces that made the whole package appealing over many years, and great market timing also helped.

I will be honest. I am biased against nvidia so take the next paragraph for the hate speech that it is.

Nvidia got lucky. CUDA was a big bet that paid off first on crypto and now on AI. Now, we can argue how much of that bet was luck meets preparation, because the bet itself was admittedly a well-educated guess.

To your point, without those two waves, nvidia would still likely be battling amd in incremental improvements, so the great market timing accounts for the majority of its success. I will go as far as to say that we would likely not see a rush to buy 'a100s' and 'AI accelerators' with the exception of very niche applications.

<< Intel making underperforming GPUs with a lot of RAM would not guarantee the same outcome at a later time in the market with an already entrenched Nvidia and a completely different landscape.

Underperforming may be the key word here and it is a very broad brush. In what sense are they underperforming and which segment are they intended for? As for ram, it would be kinda silly in the current environment to put a new card out with 8gb; I think we can agree on that at least.

<< I'm saying nobody can guarantee the claim of the GP I've replied to,

True, but it is true for just about every aspect of life, so as statements go, it is virtually meaningless as an argument. The best one can do is argue possibilities based on what we do know about the world and the models it tends to follow.


I think the answer is that it is not nearly as convenient as the original service providers would claim. I am in the US and we just went through a crazy period of everyone and their mother trying to start their own streaming service. A good portion went under, some consolidated, but the market fragmentation leading to the actual content you want to see being spread across multiple services is an annoyance.

I will provide a concrete example. My buddy got into anime and was raving about one specific title, so I checked Hulu for it, but Hulu, for some unfathomable reason, starts that anime at season 4. If I want to legally stream season one, I would need to try the Sony-owned anime thing, which I refuse to do for reasons not related to streaming wars. I ended up buying a DVD ( cheap and good enough quality for me ).

And pirates... have everything, and, unless you are looking for the newest releases, what they have is of superior quality.


So... I think it has already been happening ( people attempting to poison some sources for a variety of reasons ). I was doing a mini fun project on HN aliases ( attempting to derive/guess a user's age based on nothing but that alias ) and I came across some number of profiles that have bios clearly intended to mess with bots one way or another. Some have fun instructions. Some have contradictory information. Some are the length of a short bedtime story. I am not judging. I just find it interesting. Has vibes of a certain book about a rainbow.


Tell me about that side project. How does that work? What does it say about me? I find that very interesting.


The idea itself is kinda simple, but kinda hard, because it relies on how the language we use gives us away.

For example, the references we put in ( simpsons, star trek, you name it ) and the language we use ( gee whiz, yeet, gyatt ) to generate an online persona tend to be something of note to our image of self - one can determine, to some extent, the likely generation from those.

The reference itself may not automatically mean much, but it is likely that if it is present in an alias, it had an impact on a younger person ( how many of the new generation jump on an old show? so Mr Robot would have an exposure range of 2015 to 2019 ). If that hypothesis is true, then one can attempt to guess the age of the individual given that work, because 1) we know what year it is now and 2) we know when the work was made, which allows for some minor inference there.
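
To make the hypothesis concrete, a minimal sketch of that inference step could look something like this in Python ( the reference table, exposure windows and age cutoffs below are made-up assumptions for illustration, not what the project actually uses ):

    # Hypothetical sketch: turn a pop-culture reference found in an alias into a rough age range.
    # All values below are illustrative assumptions.
    CURRENT_YEAR = 2025

    # reference -> (first_year, last_year) during which the work had major exposure
    REFERENCES = {
        "mr_robot": (2015, 2019),
        "simpsons": (1989, 2000),
        "star_trek_tng": (1987, 1994),
    }

    # assumption: a reference tends to stick when encountered roughly between ages 12 and 25
    MIN_AGE_AT_EXPOSURE, MAX_AGE_AT_EXPOSURE = 12, 25

    def age_range_from_reference(ref: str) -> tuple[int, int] | None:
        """Return a (youngest, oldest) guess for someone whose alias uses this reference."""
        window = REFERENCES.get(ref)
        if window is None:
            return None
        first, last = window
        youngest = CURRENT_YEAR - last + MIN_AGE_AT_EXPOSURE   # exposed late, at a young age
        oldest = CURRENT_YEAR - first + MAX_AGE_AT_EXPOSURE    # exposed early, at an older age
        return youngest, oldest

    print(age_range_from_reference("mr_robot"))  # (18, 35) with these assumptions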

Naturally, some aliases are more elaborate than others. Some are written backwards and/or reference a popular show or popular sci-fi author. Some are anagrams ( and - I discovered today - require additional datasets to tag properly so that is another thing I will need to dig up from somewhere ). And to complicate things further, some aliases use references that are ambiguous and/or belong in more than one category ( Tesla being one of them ).

The original approach was to just throw everything into an LLM and see what it comes up with, but the results were somewhat uneven, so I decided to start from scratch and do normal analysis ( language, references, how digits are used and so on - it is still amazing how well that one seems to work ).
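
As a rough illustration of the 'how digits are used' signal, something like the following ( the year cutoffs are assumptions on my part, not tuned values ):

    import re

    CURRENT_YEAR = 2025

    def guess_birth_year(alias: str) -> int | None:
        """Crude heuristic: treat a 4-digit or 2-digit run in an alias as a possible birth year."""
        for run in re.findall(r"\d{2,4}", alias):
            n = int(run)
            if len(run) == 4 and 1940 <= n <= CURRENT_YEAR - 10:
                return n
            if len(run) == 2:
                candidate = 1900 + n
                # only accept the 19xx reading if it leaves the person at least ~18 years old
                if candidate <= CURRENT_YEAR - 18:
                    return candidate
        return None

    print(guess_birth_year("dave1987"))  # 1987
    print(guess_birth_year("neo_99"))    # 1999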

Sadly, it is still a work in progress ( I was hoping for a quick project, but I am kinda getting into it ) and I probably won't touch it until next weekend since the coming week promises to be challenging.

Unfortunately, this means your particular alias ended up as:

    Alias    category    is_random  length  is_anagram  generic_signal
    Loughla  Mixed Case  0          7       FALSE       FALSE

( remaining fields were empty, basically couldn't put a finger on you:D). If you can provide me with an approximate age, it would help with my testing though:D

edit: This being HN, the vast majority of references are technology related.


That is very cool…and your alias is hard for me to decipher


I have a separate - not fully implemented - section for more semi-random aliases, but it revolves around our tendency to use default settings and commonly used tools for generating them. Thus far the only thing I was able to show with it is that it is not uncommon, but there is no clear proxy for age... so it seems like a dead end.
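
For what it is worth, the kind of check that section revolves around can be sketched roughly like this ( illustrative only; the patterns and thresholds are guesses, which is probably part of why it reads like a dead end ):

    import re

    def looks_generated(alias: str) -> bool:
        """Rough guess at 'this alias came out of a generator or a default setting'."""
        # word-word-digits shape, the default suggestion of several popular sites and tools
        if re.fullmatch(r"[A-Za-z]+[-_][A-Za-z]+[-_]?\d{1,5}", alias):
            return True
        # a couple of case flips plus several digits usually means a random-looking string
        flips = sum(1 for a, b in zip(alias, alias[1:])
                    if a.isalpha() and b.isalpha() and a.islower() != b.islower())
        digits = sum(c.isdigit() for c in alias)
        return flips >= 2 and digits >= 3

    print(looks_generated("Quiet-Walrus-742"))  # True
    print(looks_generated("A4ET8a8uTh0"))       # True
    print(looks_generated("Loughla"))           # False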


It is kinda interesting. I talked with a less technical member of my extended family over the holidays. Fairly successful guy in his chosen profession ( accounting ). To say he was skeptical is an understatement, and he is typically the most pro-corporate shill you can find when it comes to a company saving a few bucks. I assumed he would be attempting to extol its virtues with the assumption that lower-level work has errors anyway. I was wrong. Sadly, we didn't get to continue down that line since my kid started crying at that moment.


Yeah I'm interested in how it will play out. I can understand skepticism because the current AI isn't that good, but it'll keep improving.


count me among the skeptics. the big problem i see is that there is no way to verify whether any AI output is correct. it is already very hard to prove that a program is correct. proving that for AI is several levels more difficult, and even if it were possible the cost would be so high as to make it not worth it.


I am personally somewhere in between. Language models do allow me to do things I wouldn't have patience to do otherwise ( yesterday chatgpt was actually helpful with hunting down a bug it generated:P ). I think there is some real value here, but I do worry it will not be captured properly.


Caught my attention as well. Must be a neologism, assuming it originated as a variant of incel, but here focused on deriding people getting their jollies out of the written word ( me:P ). Naturally, I might be wrong. Let's see if the author responds.


Not the author but: yes. This word emerged from online discourse a few years back about 'wordcels' vs 'shape rotators':

https://en.wiktionary.org/wiki/wordcel

https://en.wiktionary.org/wiki/shape_rotator


that’s bonkers. “philosophers and other wordcels” not only insults Borges but the entire world of philosophy. the arrogance and the nonsense in that phrasing are both off the charts.


Because western philosophy is not a place where you can find arrogant nonsense, right? Philosophers are some of the most arrogant people I've ever met. And I say that with great affection.


Both can be true


It’s internet youth culture bleeding into highbrow discourse.

A suitably brain-rotted riposte to your complaints would be something like

‘Wordcels be sneething at rotationmaxxing shapechads’

and an image of a badly drawn wojak figure.


take a joke man jeez


Re-training can be done, but - and it is not a small but - models already exist and can be used locally, suggesting that the milk was spilled too long ago at this point. Separately, neutering them effectively lowers their value relative to their non-neutered counterparts.


This is a genuinely relevant question given that HN could easily be argued to be social media. For the record, I too am concerned about social media impact and so on ( for good and valid reasons ), but this law does not seem that great at first glance.


Hacker News doesn't require a login to view content, so I guess it is exempt from this specific law.


There is a place for everything. I absolutely love video for home improvement stuff, because written instructions for those tend to be not great or are inaccurate pictographs. The problem is that we forgot that for each task there is an appropriate tool. Video is a good tool for some things. Raw text is a better tool for others.

