Hacker News | meta_x_ai's comments

Or maybe typical leaders have a better vision of what "sustainability" means.

"Hire great engineers that have sustainability in their bones"

Actual implementation by the grifters: hire other grifters with "sustainability" in their resumes, whose only job is to act as gatekeepers with pseudo-science garbage, and grow that team until it's as big as the engineering team.

It's perfectly fine for the leader to look at the implementation, say "what's this fucking bullshit", and cut everything.

These concepts are, of course, completely alien to the leader- and rich-hating HN crowd.


Why wasn't such a cartoon made when Mark Zuckerberg donated $400 million to boost get-out-the-vote efforts in largely Democratic areas?

Delusional to think the WP and NYT journalists are coming at this in a highly non-partisan way.

The entire mainstream media is Anti-Trump, Anti-GOP.

I remember how the MSM treated Javier Milei as the next Hitler.


> Why wasn't such a cartoon made when Mark Zuckerberg donated $400 million to boost get-out-the-vote efforts in largely Democratic areas?

untrue.


Funny how trying to get more people to vote is somehow seen as partisan by you.

You are on the wrong side of democracy. More votes, even votes of people who disagree with you, is always a good thing!


The money was spent in Democratic-heavy zip codes, not rural areas.


Such an American response: this whole idea that efforts spent on Democratic-heavy zip codes (i.e. cities) are a gotcha, and that spending less in rural areas, where *fewer people live*, is somehow damning.


Look up Fox News' market share and get back to us.


> The entire mainstream media is Anti-Trump, Anti-GOP.

Fox Corp, News Corp, Sinclair Broadcast Group, and Newsmax would like to have a word with you about that omission.


Not only that, the most "popular" podcasts and social media accounts are all conservative. It's too funny that people still push this "anti-GOP" rhetoric; if someone wanted to get on board with this "anti-GOP / anti-Trump" shit, you'd need some amazing search skills to find that content :)


Reality has a liberal slant. That didn't stop the MSM from trying to be 'fair' by sanewashing Trump in misguided attempts at (false) balance.


Thank god for John Roberts for preventing governmental overreach by unelected bureaucrats, which has become increasingly ideological.


The idealist in me also hates the idea of unelected "government experts" having wide latitude to do whatever they think is best, since I know that 50% of the time they'll be appointed by / taking orders from [insert party(ies) I don't like] and thus acting against my interests.

But the pragmatist in me still winces at all the stupidity that happens in the real world because Congress hasn't passed many useful laws in 25 years. Most ideas are put into place by executive fiat because we only have two functioning branches of government now. (Yes, I agree that it's still better than just having one!)


You should set aside an hour or two and research how the administrative state actually works. These agencies aren't full of political appointees; rather, they're staffed with engineers and scientists who are dedicated to keeping our water potable, our food safe, the weather tracked, air travel safe, etc.

It's literally not possible for the unelected lobbyists who write bills for Congress to write imperative-style laws. Even if they could manage to promptly draft and pass updates to laws as infrastructure, tech, the situation, etc. evolve, they wouldn't be able to get the information needed to provide coherent instructions, and it would hamstring implementation forever. It's obviously much better for Congress to write in a declarative style, e.g. "1251.A.3. It is the national policy that the discharge of toxic pollutants in toxic amounts be prohibited;" [0]. Clearly an important goal, but absolutely impossible for Congress and its unelected lobbyists to write out executable instructions for achieving. (Also, Congress regularly and explicitly delegates implementation to actual experts via clauses like "1251.d. Except as otherwise expressly provided in this chapter, the Administrator of the Environmental Protection Agency (hereinafter in this chapter called ‘‘Administrator’’) shall administer this chapter." [0])

Just listen to the oral argument in the recent San Francisco v. EPA Supreme Court case [1] (or review the transcript [2], or get the summary from Oyez [3]). During heavy rains, San Francisco's city government dumps a lot of effluent into the Pacific Ocean. The EPA requires that they get a permit, track the amount of effluent, work to remediate the issue, and develop a Combined Sewer Overflow control plan. The EPA wants to help, but San Francisco has failed for decades to provide adequate information about its sewage system to enable the EPA to help develop said control plan (e.g. pg. 98 of the transcript).

There's just no way Congress's unelected lobbyists could hope to write imperative laws. The experts staffing the administrative state aren't receiving partisan orders from the Democrats to harass San Francisco. Republicans don't issue partisan orders to agencies either (the Republicans just throw sand into the machine by tying up agency experts in frivolous lawsuits).

But in any case, it's the agency experts, with their collective hundreds of thousands of person-years of knowledge and experience, who keep America running.

[0] https://www.govinfo.gov/content/pkg/USCODE-2018-title33/pdf/...
[1] https://www.supremecourt.gov/oral_arguments/audio/2024/23-75...
[2] https://www.supremecourt.gov/oral_arguments/argument_transcr...
[3] https://www.oyez.org/cases/2024/23-753


Ahh yes, the overreach of making sure Netflix can't make competition worse by bribing ISPs.


Why would you waste time on good models when there are great models?


Good models are good enough for me, meta_x_ai. I gain experience by setting them up and by following industry trends, and I don't trust OpenAI (or MSFT, or Google, or whoever) with my information. No, I don't do anything illegal or unethical, but that's not the point.


The good local model isn't building a profile of me: my preferences, my health issues, my political leanings, and other info, like the "great" Google and OpenAI models most likely are based on the questions you ask them. Just imagine if one day there's a data breach and your profile ends up on the dark web for future employers to find.


I understand your concerns.

For me though, this would be all upside because I have largely explored technical topics with language models that would only be impressive to an employer.

At this point, it is like asking what does someone use a computer for? The use cases are so varied.

I can see how it would be interesting to set up a local model just for the fun of setting it up. But when it comes down to it, for me it's just so much easier to pay $20 a month for Sonnet that it isn't even close, or really a decision point.


Looks like it hasn't been updated in nearly a year, and I'm guessing Gemini 2.0 Flash with its 2M context will simply crush it.


That's true. They don't have Claude 3.5 on there either. So maybe it's not relevant anymore, but I'm not sure.

If so, let's move on to the murder mysteries or more complex literary analysis.


A software developer's time is much too precious to waste on sub-optimal models.

Open-weights models have their place (in training custom agents and custom services), but if you are a knowledge worker, using a model even 5% below SOTA is extremely dumb.


100% disagree with this take. The flexibility of controlling the prompt leads to Qwen2.5-Coder-32B outperforming o1 and Claude 3.5 Sonnet for nearly everything I use it for (also true of Gemma-27B and Llama-3.3-70B, though in this context I'm almost always using the former). A specialist model that's specifically prompted to do the correct thing will outperform a SOTA generic model with a one-size-fits-all system prompt. This is why small autocomplete models can very obviously outperform larger models at that specific task. I'm speaking 100% from experience and ignoring all benchmarks in forming this view, btw, so maybe it's just my specific situation.
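To make the "control the prompt" point concrete, here is a minimal, hypothetical sketch of assembling a task-specific system prompt into a chat-style request for a local model. The helper name and prompt text are illustrative (not from the thread); the payload shape follows the common chat format of a "system" message followed by a "user" message, as used by local inference servers like Ollama.

```python
# Hypothetical sketch: a specialist system prompt pins a local model to one
# task, instead of the one-size-fits-all prompt a hosted product ships with.

def build_request(model: str, system_prompt: str, user_msg: str) -> dict:
    """Build a chat request payload with a task-specific system prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
    }

# A code-completion specialist: the system message forbids everything
# except the completion itself.
autocomplete_req = build_request(
    "qwen2.5-coder:32b",
    "You are a code-completion engine. Return only the code that completes "
    "the snippet. No prose, no markdown fences.",
    "def fib(n):",
)
```

The payload would then be POSTed to the local server's chat endpoint; the point is simply that the system message is fully under your control, which is what the comment above credits for the specialist model's edge.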

Also, in general I don't find the difference between SOTA models and local models to be that significant in the real world even when used in the exact same way.


Sounds great.

Does this run with VS Code, and how hard is it to set up?


Yes, the VS Code extension is a one-click install, and so is Ollama, which is a separate project that provides local inference.

You'll then have to download a model, which Ollama makes very easy. Which one to choose depends on your hardware, but the biggest Qwen2.5-Coder you can fit is a very solid starting place. It's not ready for your grandma, but it's easy enough that I'd trust a junior dev to get it done.
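For reference, pointing the Continue extension at an Ollama-served model is a small config change. This is a hedged sketch of a `config.json` along the lines Continue's documentation describes; field names and model tags may differ by Continue/Ollama version, so treat it as illustrative rather than exact:

```json
{
  "models": [
    {
      "title": "Qwen2.5-Coder 32B (local)",
      "provider": "ollama",
      "model": "qwen2.5-coder:32b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen2.5-Coder 1.5B (autocomplete)",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```

The smaller autocomplete entry reflects the earlier point in the thread that a small, task-specific model is usually the right choice for inline completion, while the larger model handles chat.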


What's the extension name?


Continue; I talk about it at length in the GP post.


Ah, thanks.

I just read the parent post, lol.


Are there any small trained models out there that are specifically for python programming that you know of?


Do you have any example prompts or suggestions for coming up with them?


As a South Indian, my name in public school records until I was 21 was <name> <initial>.

I was forced to pick a last name for passport purposes, and I typically had the option of attaching either my dad's name or my dad's town name.

My wife didn't even do that, and when she migrated to the US, she was <name> LNU (short for Last Name Unknown). While applying for a green card we decided it was too much of a hassle for her, and she attached her father's name.


> when she migrated to US, she was <name> LNU (short for Last Name Unknown).

Interesting!

The loser of the previous World Chess Championship match was Russia's Ian Nepomniachtchi. His last name means "one who doesn't remember [his last name]", as answered to the Czar's census taker!

I guess this kind of thing happens in many countries.


Google has 4 billion users. It's delusional to think you don't know anyone affected; either that, or you live in an incredibly small bubble.


Yea the only stories I ever see are ones that bubble up to HN. Often they are very one-sided as well. Not saying it hasn't happened, but let's not pretend it's rampant.


OpenAI has completely pivoted to a Product company (vs a Model / API company)

The minute they switch to competing on the API is the moment their userbase realizes that OpenAI has neither the best model, nor the most cost-effective one, nor the fastest.

So their business strategy is subscriptions: bundle a bunch of products and market them to the masses.

They have brand recognition in "ChatGPT", and they'll milk it.

Sam is more of a Steve Jobs than a Bill Gates personality, or at least a wannabe.


You may think Anthropic is different? It's the same UI + API. The difference is that they don't create free accounts for personal emails: sort of a shadow ban. However, they accepted my work email for a free account and my personal email for the paid API. Maybe that's just me; I tried several personal emails.

As for model quality, they are both impressive. I haven't run meaningful comparison tests.


> Maybe that's just me

I would suggest this may be the case.

> The difference is they don't create free accounts for personal emails.

I'm using a personal account for free just fine, as are many in my circle.


I don't know what you're implying by juxtaposing Jobs vs Gates. From what I know, Apple successfully marketed their hardware as a lifestyle. I don't think OpenAI is anything like that.


ChatGPT's popularity, and the fact that it has almost become a verb, says otherwise.


As far as I can tell, o1 is the best model for anything to do with mathematics, by a solid margin.


gemini-exp-1206 beats it, and I'm pretty sure Gemini 2.0 Pro will too.

https://livebench.ai/#/


Thanks for the data.


The full pivot to products is another tell that they don't have high hopes for AGI.


A picture is worth a thousand words.

A word is worth a thousand pictures (e.g. "love").

It's abstraction all the way down.


To be precise, it's all information.

