the article was ai-generated to a large part. i looked at the domain, saw it wasn't standard, assumed .br = brazil, and gave the benefit of the doubt that ai was used to translate technical content. while the prose sucks the way ai prose does, the content behind the prose did not suck, so i disagree that this was slop. for the record, i've been flagged many times for vitriol against ai, rooted in my personal, moral, and professional hatred of it. you didn't add anything to the conversation, and i think that's against the hn spirit just as much as ai abuse is.
may i ask how current-generation language models are jailbroken? i'm aware the previous generation had 'do anything now' prompts. mostly curious from a psychological perspective.
it would be interesting to me if you could explain the motivation behind posting your comment. from my perspective, somebody with 5 years of forum tenure and the intelligence to comment on advanced benchmarks probably noticed that the censorship was a voluntary decision here, and had already made a personal decision on that front.
I'm not layer8, but I had a similar thought. In this case the needless censoring is problematic because it hides the name of the benchmark from future searches (the uncensored URL spells it differently).
Such self-censoring is often done out of habit or a mistakenly assumed obligation to do so. I consider it inappropriate here, as it obscures an actual name, doesn’t constitute an expletive, and the HN readership is generally mature enough to recognize that. The counterquestion is, what justified reason could there possibly be to censor it here? I don’t think there is any, in the sense that people wouldn’t take any offense at the uncensored version, and the intent of my comment was to inform about that.
I censored it out of habit from commenting on other platforms, and I actually didn't have any idea whether or not you're supposed to censor such words here. Will keep that in mind when commenting here next time.
i don't see anyone sane trusting ai to this degree any time soon, outside of web dev. the chances of this strategy failing are still well above acceptable margins for most software, and in safety-critical instances it will be decades before standards allow for such adoption. anyway, we are paying pennies on the dollar for compute at the moment; as soon as the gravy train stops rolling, all this intelligence will be out of access for most humans, unless some more efficient generalizable architecture is identified.
> as soon as the gravy train stops rolling, all this intelligence will be out of access for most humans. unless some more efficient generalizable architecture is identified.
All Chinese labs have to do to tank the US economy is to release open-weight models that can run on relatively cheap hardware before AI companies see returns.
Maybe that's why AI companies are looking to IPO so soon, gotta cash out and leave retail investors and retirement funds holding the bag.
i was under the impression that we were approaching performance bottlenecks, both with consumer GPU architecture and with this application of the transformer architecture. if my impression is incorrect, then i agree it is feasible for china to tank the US economy that way (unless something else does it first).
I think it just needs to be efficient or small enough for companies to deploy their own models on their hardware or cloud, for more inference providers to come out of the woodwork and compete on price, and/or for optimized models to run locally for users.
Regarding the latter, smaller models are really good for what they are (free) now; they'll run on a laptop's iGPU with LPDDR5/DDR5, and NPUs are getting there.
Even models that fit in 64GB+ of unified memory shared between the CPU & iGPU aren't bad. Offloading to a real GPU is faster, but with the iGPU route you can buy cheaper SODIMM memory in larger quantities, still use it as unified memory, and eventually use it with NPUs, all without drawing too much power or buying cards with expensive GDDR.
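The trade-off between iGPU + system RAM and a discrete GPU mostly comes down to memory bandwidth, since single-stream LLM decoding has to stream all the weights per token. A rough back-of-envelope sketch (all numbers illustrative, and the efficiency factor is an assumption, not a measurement):

```python
# Back-of-envelope decode speed for a memory-bandwidth-bound LLM:
# each generated token streams roughly all model weights through memory once,
# so tokens/sec ≈ usable bandwidth / model size. Numbers are illustrative.

def est_tokens_per_sec(model_gb: float, bandwidth_gbps: float,
                       efficiency: float = 0.6) -> float:
    """Rough decode rate: (effective GB/s) / (GB read per token)."""
    return bandwidth_gbps * efficiency / model_gb

# an ~8B-parameter model at 4-bit quantization is roughly 4.5 GB of weights
igpu_ddr5    = est_tokens_per_sec(4.5, 90)    # ~dual-channel DDR5-5600
discrete_gpu = est_tokens_per_sec(4.5, 450)   # ~mid-range card with GDDR6

print(f"iGPU + DDR5: ~{igpu_ddr5:.0f} tok/s")
print(f"discrete GPU: ~{discrete_gpu:.0f} tok/s")
```

The point of the sketch is that the iGPU route is slower but usable, and you can scale capacity with cheap SODIMMs instead of scaling bandwidth with expensive GDDR.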
Qwen-3.5 locally is "good enough" for more than I expected. If that trend continues, I can see small deployable models eventually being viable & worthy competition, or at least good enough that companies can run their own instead of exfiltrating their trade secrets in real time to the worst people on the planet.
There aren't any released open-weight models that are "good enough" yet, but Qwen-3.5 is getting really damn close: more than half of my LLM usage now gets routed to it.
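The "routing" here can be as dumb as a heuristic that keeps small, well-bounded tasks local and sends everything else to a hosted model. A minimal sketch of that idea; the backend names ("qwen-local", "hosted-frontier"), task labels, and threshold are all hypothetical, not any real product's API:

```python
# Hypothetical router: small, well-bounded tasks go to a local model,
# everything else to a hosted fallback. Names and heuristic are invented
# for illustration only.

LOCAL_OK_TASKS = {"summarize", "rewrite", "classify", "boilerplate"}

def pick_backend(task: str, prompt: str, max_local_chars: int = 8000) -> str:
    """Route by task type and prompt size."""
    if task in LOCAL_OK_TASKS and len(prompt) <= max_local_chars:
        return "qwen-local"
    return "hosted-frontier"

print(pick_backend("summarize", "short meeting notes"))
print(pick_backend("architecture-review", "big design doc"))
```

Crude as it is, a rule like this is enough to keep the majority of day-to-day requests (and the data in them) on your own hardware.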
I suspect, but don't know, that some fields of inquiry will prove fruitful for "good enough" small models, especially constrained tasks like software development. A software development model doesn't have to generalize to everything a chatbot can be asked or tasked with; the space it has to generalize over is pretty small compared to literally the whole world.
If I were a betting man, I'd put my money where my mouth is, but I'm not. I am betting my time and focus that smaller local models are worth it, and will stay worth it, though.
Even in webdev it rots your codebase if left unchecked. Although it's incredibly useful for generating UI components, which makes me a very happy webslopper indeed.
i'm grateful to have never bothered learning web dev properly; it was enlightening witnessing chatgpt transform my ten-second ms paint job into a functional user interface.
I don't think anybody is doubting its ability to generate thousands of PRs, though. And yes, it's usually for the stuff that should have been automated already, AI or no AI.
Depends on your circle. On HN, I would argue there are still a fair number of people who would be surprised to see what heavy organizational usage of AI actually looks like. In the non-programming online groups I'm a member of, people still think AI agents are the same as they were in mid-2025 and can't answer "how many R's are in the following word:". Same thing when chatting with my business-owner friends. The majority of the public has no clue of the scale of recent advancement.
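The letter-counting question works as a probe precisely because it's trivial in ordinary code, while models that see tokens rather than characters used to stumble on it:

```python
# Counting a letter's occurrences: trivial for code, which is exactly why it
# exposed models that operate on tokens instead of individual characters.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```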
these companies contribute to swathes of the west's financial infrastructure; not quite safety-critical, but critical enough. it's insane to involve automation here to this degree.
regarding brain rot, short-form content is absolutely going to be the root physical cause; people could tolerate smartphones prior to the inception of short-form content. on a cultural level, this destruction could be compared to the effects of a coordinated and targeted attack by enemy nation states, if not for the fact that we did this to ourselves in the name of profit. one can only hope that the old guard wakes up and systematically handles an issue we have no familiarity with, otherwise our system will buckle under the pressure of 10-20 years' worth of nonfunctional humans.

i find a technocratic dystopia far more likely, considering the aforementioned mentally castrated opposition ... how's a generation of kids going to win against trillions of dollars of zuckerberg 'engineering' steering them since birth? shame on the 'engineers' who engendered this mess, shame on their shepherd 'managers', and shame on the sociopaths at the top.
tobacco contains MAO-inhibiting compounds, which potentiate nicotine and increase its addiction potential. that doesn't mean nicotine on its own isn't insanely addictive; i have no idea what the guy you're responding to is talking about. that said, MAOIs were withdrawn as antidepressants for a good reason: they have a terrible withdrawal all on their own.
this article isn't as relevant as when it was written. e.g. regarding price, cigarette taxation has skyrocketed in certain countries. furthermore, the cited studies were performed prior to the proliferation of disposable vapes; i somehow doubt the idea of infinite nicotine on tap was accounted for. as to your question, some individuals find cutting down easier than going cold turkey. personally i opt for the latter, although this strategy should not be universally applied (e.g. alcohol withdrawal may induce seizures). at the end of the day i find smoking (not vaping or gum) to be a net neutral: controlled motivation, treatment of schizophrenia symptoms, and neuroprotectivity are balanced against addiction potential, a shortened lifespan, and reduced red blood cell count.
i'm just as much of a hater of this as the next guy, because i sometimes depend on custom apks for work. pushing custom apks over adb is apparently going to remain fine, so if that holds true, i don't care much about this. at the end of the day, buying an android phone is buying a google device. i don't get the righteousness here. wouldn't this energy be better spent discussing how we could make a new open-source os to rival google's? why would anyone at google (a company at the forefront of anti-privacy measures) care what some nerds on the internet think about privacy? it's like an ant screaming in front of an approaching bulldozer.
It's a pretty dire situation. There are two major options: iOS is iOS, while Android is at least somewhat open, and Google-free Android actually exists.
The problem is that you often need a smartphone running either Android or iOS to participate in modern life. Unfortunately, when running Android, many apps that one is more or less forced to use don't just require AOSP but expect the presence of the proprietary Google services malware.
If we want to create an independent mobile OS, AOSP might actually be a good start. We're just faced with a world that is actively hostile to people having control over their devices and data.