Hacker News | FanaHOVA's comments


Because they make $60B/yr on advertising and car sales is a very valuable ad market.

> not truly groundbreaking foundation models.

Where is any proof that Yann LeCun is able to deliver that? He's had way more resources than any other lab during his tenure, and yet has nothing substantial to show for it.


The structure of each section gives away that it's mostly AI even without having to read the actual words. I'm sure it was AI + writer, but there's something about ending each section with 3-4 short, question-like sentences that is strongly AI. This is the same format as the successful LinkedIn slop so maybe it's not AI and just algo-induced writing.


Yup. It's the colons after every paragraph's first sentence:

> It worked because it solved a real problem: Kenyans were already sending money through informal networks. M-PESA just made it cheaper and safer.

> Here’s why this matters: M-PESA created a payment rail with near-zero transaction costs.

> The magic is this: You’re not buying a $1,200 solar system.

> It gets even better: there are people who will pay for credits beforehand.

It's just again and again and again. It sounds 100% ChatGPT.

Maybe this is 100% written by hand by someone who reads too many ChatGPT-generated articles. Possibly the author just spends a ton of time chatting with ChatGPT and has picked up its style. Or it's just more AI-written than OP wants to admit.


We are so cooked. We spend more time trying to suss out whether something was written by AI than actually reading the article. So many legitimate ways of writing are now "AI" style. I used to use the emdash a lot, but now I deliberately avoid it because it's an AI smell, using the less "correct" version instead.

The equivalent of "If you have to ask, you can't afford it" here is "If you have to ask, you shouldn't do it".


Overall for the common person I'd agree, but I assume we're all more or less hackers here and for us, I'd say "If you have to ask, ask and learn, then do it".

If everyone followed your advice no one would ever do anything, as we all begin somewhere, something that should be OK.

Of course, don't do million dollar trades when you begin, but we shouldn't push back on people wanting to learn, feels very backwards compared to hacker ethos.


We shouldn't push back on people wanting to learn, but we should point out, very loudly, that not fully understanding something like shorting can turn a small investment someone was fully OK with losing into a life-altering bankruptcy due to a margin call.

Leverage can be a fearful thing.
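To make the margin-call mechanics concrete, here is a toy sketch with made-up numbers (a $10k account, a 100-share short at $80, a 25% maintenance requirement — all hypothetical, and it ignores borrow fees and broker-specific rules):

```python
# Hypothetical numbers: how a rising price triggers a margin call on a short.
cash = 10_000.0            # account equity before the trade
shares, entry = 100, 80.0
proceeds = shares * entry  # the short sale credits $8,000 of cash
maintenance = 0.25         # assumed 25% maintenance margin requirement

def equity(price: float) -> float:
    # cash + short-sale proceeds, minus the cost to buy the shares back
    return cash + proceeds - shares * price

def margin_called(price: float) -> bool:
    # broker demands more collateral once equity falls below the requirement
    return equity(price) < maintenance * shares * price

print(margin_called(80.0))   # False at entry
print(margin_called(160.0))  # True after the price merely doubles
```

Note that at $160 the position has already lost $8,000 — the entire original stake — and the loss keeps growing without bound as the price rises, which is exactly what the parent comment is warning about.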


Yup, I agree: be clear what the consequences are if you fuck up, and allow people to fuck up if they wish.


> "If you have to ask, ask and learn..."

Totally! But also keep in mind this :)

https://www.explainxkcd.com/wiki/index.php/1570:_Engineer_Sy...


How about, "If you think an explanation from HN will explain it all to you, you're being naive about the complexities and risks"?


thank you for being nice


To expand on the original reply to you - shorting companies, or engaging in almost any stock-based activity beyond “buy and hold,” typically entails much, much higher risk than just buying and selling stock. The most you can lose when buying a share is the purchase price, and that’s fairly unlikely, but when you start getting into even options/etc, you’re magnifying your risk - small swings in the market can lead to large and disproportionate losses, and when you get into shorting in particular you can lose far more than your initial investment. This is why you’re getting the reaction you’re getting - because the thing you’re asking about is sufficiently risky that if you're asking on Hacker News (and not, say, asking a professional), you don’t understand the risk profile well enough to do it “safely.”
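A toy payoff table (hypothetical prices for a 100-share position at $50, ignoring fees and borrow costs) makes the asymmetry concrete:

```python
# Toy payoff comparison: buying 100 shares at $50 vs. shorting them.
shares, entry = 100, 50.0

def long_pnl(price: float) -> float:
    # a long's loss is capped: the worst case is the stock going to zero
    return shares * (price - entry)

def short_pnl(price: float) -> float:
    # a short's loss is unbounded: it grows as long as the price keeps rising
    return shares * (entry - price)

for price in (0.0, 25.0, 50.0, 100.0, 250.0):
    print(f"price {price:6.0f}  long {long_pnl(price):+9.0f}  short {short_pnl(price):+9.0f}")
```

The long position can never lose more than the $5,000 paid; the short loses $20,000 at $250 and would keep losing at $500, $1,000, and so on.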

That, and because snarky answers get more imaginary internet points than helpful ones.


> you don’t understand the risk profile well enough to do it “safely.”

Since when is this a problem? For God's sake, let people fuck up and harm themselves if they're stupid enough to take the risks, or not.

I think it's fine to say "Remember, this is risky because of A, B and C, but here's how to do it anyways..." but straight up "If you have to ask, you shouldn't" seems so backwards and almost mean, especially when we talk about money which is mostly "easy come, easy go". Let the fool be parted with their money if that's what they want :)


I mean, there’s risk and there’s risk. If someone comes in asking “how do I mod my phone/ebike/toaster”, sure, caveat commentor and all that. If someone comes in asking “how do I make dioxygen difluoride,” that’s a different category of risk. OP can do whatever they want, but I’m not in the habit of giving guns to people who don’t know what they are without making sure they know which risk category they’re in.


> Imagine being able to retire at 40 and do whatever you want. If you weren't stupid, your health should be good enough.

Do you really believe people who have health issues at an early age are simply stupid?


There is probably a stronger argument that health issues later in life are due to being ‘stupid’.


You really think the amount of savings in 401ks is the same size as the GDP of the whole country?


That's comparing a rate of flow to a static amount. In other words, GDP is 27 trillion per year.
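In code, the units make the mismatch obvious (using the rough figures from this thread — treat both as approximations):

```python
# GDP is a flow (dollars per year); retirement savings are a stock (dollars).
gdp_per_year = 27e12          # rough US GDP, as stated in the thread
retirement_assets = 27e12     # IRA + 401k balances, roughly equal per the thread

# dividing a stock by a flow yields time, not a share of anything:
years = retirement_assets / gdp_per_year
print(years)  # 1.0 — about one year's worth of output, accumulated over decades
```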


It's not, but IRA plus 401k is about equal to GDP, yes (and if you add approximately equivalent defined contribution plans like 403b, tsp, etc, then it's more than GDP)


> It's not, but IRA plus 401k is about equal to GDP

The economic vampires must salivate about this opportunity all night and day.


Why shouldn't it be? GDP is _economic production (for some value of economic production) for a year_. It's not all that closely linked to wealth.


You can pay more. It's unlimited (sorta) through API at API pricing.


The non-tinfoil hat approach is to simply Google "Boston demographics", and think of how training data distribution impacts model performance.

> The data set used to train CheXzero included more men, more people between 40 and 80 years old, and more white patients, which Yang says underscores the need for larger, more diverse data sets.

I'm not a doctor, so I cannot tell you how X-rays differ across genders / ethnicities, but these models aren't magic (especially computer vision ones, which are usually much smaller). If there are meaningful differences and the models don't see those specific cases in training data, they will always fail to recognize them at inference.
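The training-skew point can be shown with a deliberately artificial simulation (1-D synthetic data, nothing to do with radiology): a model fit on data dominated by one group can score perfectly on that group while failing completely on an under-represented group whose feature-label pattern differs.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n: int, flip: bool):
    # 1-D feature; the label follows the sign of x, inverted for the minority
    # group to simulate a genuinely different pattern between subpopulations
    x = rng.choice([-1.0, 1.0], size=n)
    y = (x > 0).astype(int)
    if flip:
        y = 1 - y
    return x, y

# training set skewed 95/5, echoing the demographic imbalance in the article
xa, ya = make_group(950, flip=False)   # majority group
xb, yb = make_group(50, flip=True)     # under-represented group
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "train" the simplest possible model: a threshold whose sign matches the
# majority correlation between feature and label
w = np.sign(np.corrcoef(x_train, y_train)[0, 1])
predict = lambda x: (w * x > 0).astype(int)

# evaluate per group
xat, yat = make_group(200, flip=False)
xbt, ybt = make_group(200, flip=True)
acc_a = (predict(xat) == yat).mean()
acc_b = (predict(xbt) == ybt).mean()
print(acc_a, acc_b)  # perfect on the majority group, 0% on the minority group
```

Aggregate accuracy here looks great (roughly 95%), which is exactly how this failure mode hides until someone evaluates per subgroup.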


The problem is twofold:

- These are not really supercomputing clusters in LLM terms. Leonardo is a 250 PFlops cluster, which is really not much at all.

- If the people in charge of this project actually believe R1 cost $5.5M to build from scratch, it's already over.
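A rough back-of-envelope, with loudly approximate assumptions (treating the headline 250 PFlops as usable peak throughput, ~1e25 FLOPs as frontier-scale pretraining compute, and 40% sustained utilization — all three numbers are illustrative guesses, not measurements):

```python
# Back-of-envelope: how long frontier-scale pretraining would monopolize
# a cluster the size of Leonardo. All inputs are rough assumptions.
total_flops = 1e25             # assumed order of magnitude for a frontier run
cluster_flops_per_s = 250e15   # Leonardo's headline ~250 PFlops
utilization = 0.4              # assumed sustained utilization (optimistic)

seconds = total_flops / (cluster_flops_per_s * utilization)
days = seconds / 86_400
print(round(days))  # on the order of three years of exclusive, full-cluster use
```

Even granting these generous assumptions, the run would take years of exclusive access, which is the sense in which 250 PFlops "is really not much at all" next to the dedicated clusters frontier labs train on.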


I think no one believes that R1 costs $5.5M from scratch. People in this project (most, not all) are very aware of the realities of training and are very well connected in the US as well. Besides Leonardo there are JUWELS, LUMI & others which can be used for ablations and so on.

This will never compete with what the frontier labs have (+ are building), but it might be just enough for something that is close enough to be a useful alternative :).

PS: Huge fan of Latent Space :)


what are you all talking about? most people in the industry do believe the publicly stated numbers for dsv3


> If people in charge of this project actually believe R1 costs $5.5M to build from scratch, it's already over.

wdym?

