To me, this is a reminder of how much of a specific minority this forum is.
Nobody I know in real life, personally or at work, has expressed this belief.
I have literally only ever encountered this anti-AI extremism (extremism in the non-pejorative sense) in places like reddit and here.
Clearly, the authors at NeurIPS don't agree that using an LLM to help write is "plagiarism", and I would trust their opinions far more than those of some random redditor.
> Nobody I know in real life, personally or at work, has expressed this belief.
TBF, most people in real life don't even know how AI works to any degree, so using that as an argument that the parent's opinion is extreme is kind of circular reasoning.
> I have literally only ever encountered this anti-AI extremism (extremism in the non-pejorative sense) in places like reddit and here.
I don't see the parent's opinion as anti-AI. It's more an argument about what AI currently is, and what research is supposed to be. AI is existing ideas. Research is supposed to be new ideas. If much of your research paper can be written by AI, I call into question whether or not it represents actual research.
> Research is supposed to be new ideas. If much of your research paper can be written by AI, I call into question whether or not it represents actual research.
One would hope the authors are forming a hypothesis, performing an experiment, gathering and analysing results, and only then passing it to the AI to convert it into a paper.
If I have a theory that, IDK, laser welds in a sine wave pattern are stronger than laser welds in a zigzag pattern - I've still got to design the exact experimental details, obtain all the equipment and consumables, cut a few dozen test coupons, weld them, strength test them, and record all the measurements.
Obviously if I skipped the experimentation and just had an AI fabricate the results table, that's academic misconduct of the clearest form.
I am not an academic, so correct me if I am wrong, but in your example, the actual writing would probably represent only a small fraction of the time spent. Is it even worth using AI for anything other than spelling and grammar correction at that point? I think using an LLM to generate a paper from high-level points wouldn't save much, if any, time once you reviewed it as thoroughly as that would require.
My brother-in-law is a professor, and he has a pretty bad opinion of colleagues who use LLMs to write papers, as his field (economics) doesn't involve much experimentation and instead relies on data analysis, simulation, and reasoning. It seemed to me like the LLM-assisted papers he's seen have mostly been pretty low-impact filler papers.
> I am not an academic, so correct me if I am wrong, but in your example, the actual writing would probably represent only a small fraction of the time spent. Is it even worth using AI for anything other than spelling and grammar correction at that point? I think using an LLM to generate a paper from high-level points wouldn't save much, if any, time once you reviewed it as thoroughly as that would require.
It's understandable that you believe that, but it's absolutely true that writing in academia is a huge time sink. Think about it: the first thing your reviewers are going to notice is not the results but how well the paper is written.
If it's written terribly you have lost, and it doesn't matter how good your results are at that point. It's common to spend days with your PI writing a paper to perfection, and then spend months going back and forth with reviewers updating and improving the text. This is even more true the higher up you go in journal prestige.
Who knows? Does NeurIPS have a pedigree of original, well-sourced research dating back to before the advent of LLMs? We're at the point where both of the terms "AI" and "experts" are so blurred that it's almost impossible to trust or distrust anything without spending more time on due diligence than most subjects deserve.
As the wise woman once said "Ain't nobody got time for that".
"If much of your research paper can be written by AI, I call into question whether or not it represents actual research" And what happens to this statement if next year or later this year the papers that can be autonomously written passes median human paper mark?
What does it mean to cross the median human paper mark? How is that measured?
It seems to me like most of the LLM benchmarks wind up being gamed. So, even if there were a good benchmark there, which I do not believe there is, the validity of the benchmark would likely diminish pretty quickly.
I find that hard to believe. Every creative professional that I know shares this sentiment. That’s several graphic designers at big tech companies, one person in print media, and one visual effects artist in the film industry. And once you include many of their professional colleagues that becomes a decent sample size.
> Plagiarism is using someone else's words, ideas, or work as your own without proper credit, a serious breach of ethics leading to academic failure, job loss, or legal issues, and can range from copying text (direct) to paraphrasing without citation (mosaic), often detected by software and best avoided by meticulous citation, quoting, and paraphrasing to show original thought and attribution.
Higher education is not free. People pay a shit ton of money to attend and also governments (taxpayers) invest a lot. Imagine offloading your research to an AI bot...
Where does this bizarre impulse to dogmatically defend LLM output come from? I don’t understand it.
If AI is a reliable and quality tool, that will become evident without the need to defend it - it’s got billions (trillions?) of dollars backstopping it. The skeptical pushback is WAY more important right now than the optimistic embrace.
The fact that there is absurd AI hype right now doesn't mean that we should let equally absurd bullshit pass on the other side of the spectrum. Having a reasonable and accurate discussion about the benefits, drawbacks, side effects, etc. is WAY more important right now than being flagrantly incorrect in either direction.
Meanwhile, this entire comment thread is about what appears to be, as fumi2026 points out in their comment, a predatory marketing play by a startup hoping to capitalize on the exact sort of anti-AI sentiment that you seem to think is important... just because there is pro-AI sentiment?
Naming and shaming everyday researchers, based on the idea that they have let hallucinations slip into their papers, all because your own AI model has decided that it was AI, just so you can signal-boost your product, seems pretty shitty and exploitative to me, and it is only viable as a product and marketing strategy because of the visceral anti-AI sentiment in some places.
No, that's a straw man, sorry. Skepticism is not the same thing as irrational rejection. It means that I don't believe you until you've proven with evidence that what you're saying is true.
The efficacy and reliability of LLMs require proof. AI companies are pouring extraordinary, unprecedented amounts of money into promoting the idea that their products are intelligent and trustworthy. That marketing push absolutely dwarfs the skeptical voices, and that's what makes those voices more important at the moment. If the researchers named here have had claims made against them that aren't true, that should be a pretty easy thing for them to refute.
The cat is out of the bag tho. AI does have provably crazy value. Certainly not the AGI hype that marketing spews, and who knows how economically viable it would be without VC.
However, I think anyone who is still skeptical of the real efficacy is willfully ignorant. This is not a moral endorsement of how it was made or of whether it is moral to use, but god damn, it is a game changer across vast domains.
There was a front-page post just a couple of days ago where the article claimed LLMs have not improved in any way in over a year - an obviously absurd statement. A year before Opus 4.5, I couldn't get models to spit out a one-shot Tampermonkey script to add chapter turns to my arrow keys. Now I can one-shot small personal projects in Claude Code.
If you are saying that people are not making irrational and intellectually dishonest arguments about AI, I can't believe that we're reading the same articles and same comments.
Isn’t that the whole point of publishing? This happened plenty before AI too, and the claims are easily verified by checking the claimed hallucinations.
Don’t publish things that aren’t verified and you won’t have a problem, same as before but perhaps now it’s easier to verify, which is a good thing.
We see this problem in many areas; last week it was a criminal case where a made-up law was referenced. Luckily, the judge knew to call it out.
We can’t just blindly trust things in this era, and calling it out is the only way to bring it up to the surface.
No, obviously not. You're confusing a marketing post by people with a product to sell with an actual review of the work by the relevant community, or even review by interested laypeople.
This is a marketing post where they provide no evidence that any of these are hallucinations beyond their own AI tool telling them so - and how do we know it isn't hallucinating? Are there hallucinations in there? Almost certainly. Would the authors deserve being called out by people reviewing their work? Sure.
But what people don't deserve is an unrelated VC-funded tech company jumping in and claiming all of their errors are LLM hallucinations when they have no actual proof, painting them all a certain way so they can sell their product.
> Don’t publish things that aren’t verified and you won’t have a problem
If we were holding this company to the same standard, this blog wouldn't be posted either. They have not and cannot verify their claims - they can't even say that their claims are based on their own investigations.
Most research is funded by someone with a product to sell; not all of it, but a frightening amount. VC to sell, VC to review.
The burden of proof is always on the one publishing, and it can be a very frustrating experience, but that is how it is: the one making the claim needs to defend it, from people (who can be a very big hit or miss) and machines alike. The good thing is that if this product is crap, then it will quickly disappear.
That's still different from a bunch of researchers being specifically put in a negative light purely to sell a product. They weren't criticized so that they could do better, whether in their own error checking, if it was a human-induced issue, or in not relying on LLMs to do work they should have done themselves. They were put on blast to sell a product.
That's quite a bit different than a study being funded by someone with a product to sell.
Yup, and no matter how flimsy an anti-AI article is, it will skyrocket to the top of HN because of it. It makes sense though: HN users are the most likely to feel threatened by LLMs, and therefore are more likely to be anxious about them.
> Clearly, the authors at NeurIPS don't agree that using an LLM to help write is "plagiarism",
Or they didn't consider that it arguably fell within academia's definition of plagiarism.
Or they thought they could get away with it.
Why is someone behaving questionably the authority on whether that's OK?
> Nobody I know in real life, personally or at work, has expressed this belief. I have literally only ever encountered this anti-AI extremism (extremism in the non-pejorative sense) in places like reddit and here.
It's not "anti-AI extremism".
If no one you know has said, "Hey, wait a minute, if I'm copy&pasting this text I didn't write, and putting my name on it, without credit or attribution, isn't that like... no... what am I missing?" then maybe they are focused on other angles.
That doesn't mean that people who consider different angles than your friends do are "extremist".
They're only "extremist" in the way that anyone critical at all of 'crypto' was "extremist", to the bros pumping it. Not coincidentally, there's some overlap in bros between the two.
How is that relevant? Companies care very little about plagiarism, at least in the ethical sense (they do care if they think it's a legal risk, but that has turned out to not be the case with AI, so far at least).
What do you mean, how is that relevant? It's a vast majority opinion in society that using AI to help you write is fine. Calling it "plagiarism" is a tiny minority online opinion.
First of all, the very fact that companies need to encourage it shows that it is not already a majority opinion in society; it is a majority opinion among company management, which is often extremely unethical.
Secondly, even if it is true that it is a majority opinion in society, that doesn't mean it's right. Society at large often misunderstands how technology works, what risks it brings, and what its inevitable downstream effects are. It was a majority opinion in society for decades, if not centuries, that smoking was harmless; that doesn't mean they were right.
> Secondly, even if it is true that it is a majority opinion in society, that doesn't mean it's right. Society at large often misunderstands how technology works, what risks it brings, and what its inevitable downstream effects are. It was a majority opinion in society for decades, if not centuries, that smoking was harmless; that doesn't mean they were right.
That it's a majority opinion instead of a tiny minority opinion is a strong signal that it's more likely to be correct. For example, it's a majority opinion that murder is bad; this has held true for millennia.
Here's a simpler explanation: toaster frickers tend to seek out other toaster frickers online in niche communities. Occam's razor.
The author seems to be very aware of the benefits of upgradability, but that's not an excuse for the shoddy experience. Some of the issues the author mentions are just absurd. Sharp edges, panels that creak? Come on.
The sharp edges are exclusively an issue with the Framework 16 due to the spacers that allow you to change the alignment of the trackpad. It's definitely been one of my main annoyances with my F16 that I didn't experience with my F13. I've been scratched by them and had my arm hair caught and pulled.
However, Framework has already indicated that they are looking into providing an input module that spans the entire width of the device to eliminate the need for the spacers.
I don't really know what the "creaking screen" is about though. IMO the F16 screen and hinges are a higher build quality than the F13. I had to upgrade my F13 hinges to the 4kg hinges to keep it from bouncing and moving.
> I don't really know what the "creaking screen" is about though. IMO the F16 screen and hinges are a higher build quality than the F13. I had to upgrade my F13 hinges to the 4kg hinges to keep it from bouncing and moving.
I think the comment was referring to the noise of the spacers, unless the author also thought it was in relation to the display. So to clarify, the display makes no noise whatsoever and neither do the hinges. The noise shown in the video is specifically about the trackpad and keyboard spacers.
I had a 12th-gen 13", and I had severe thermal throttling problems that took two years for Framework to resolve to my satisfaction (eventually they gave me a free 13th-gen upgrade that "solved" it).
I think the "I have X and don't see problems the author has" is a generally useless statement. Well, duh, sure, it's pretty rare that everyone will have the same problems. And some people will end up having no problems at all. But that doesn't invalidate the experiences of the people who do have problems.
That's apparently how 4chan got hacked a while back. They were letting users upload PDFs and were using Ghostscript to generate thumbnails. From what I understand, the hackers uploaded a PDF containing PostScript that exploited a Ghostscript bug.
Yes, but the primary issue was that 4chan was using a more than decade-old version of the library, containing a vulnerability first disclosed in 2012: https://nvd.nist.gov/vuln/detail/CVE-2012-4405
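For anyone running a similar upload-to-thumbnail pipeline, here's a minimal defensive sketch (the render_thumbnail helper is my own hypothetical, not 4chan's actual setup; Ghostscript's -dSAFER restricts what embedded PostScript can touch on disk, but it has had bypasses of its own, so a patched gs, a timeout, and ideally a sandboxed worker all still matter):

```python
import subprocess

def render_thumbnail(pdf_path: str, png_path: str, timeout_s: int = 10) -> None:
    """Render page 1 of an untrusted PDF to a PNG via Ghostscript.

    Hypothetical helper: -dSAFER limits file access from embedded
    PostScript, but it is not a sandbox by itself, so keep gs patched
    and run this in an isolated worker/container on top of it.
    """
    cmd = [
        "gs",
        "-dSAFER",                      # restrict file operations from PostScript
        "-dBATCH", "-dNOPAUSE",         # non-interactive, exit when done
        "-dFirstPage=1", "-dLastPage=1",
        "-sDEVICE=png16m",              # 24-bit RGB PNG output
        "-r72",                         # low DPI is plenty for a thumbnail
        f"-sOutputFile={png_path}",
        pdf_path,
    ]
    # Time-box the render so a crafted file can't hang the worker.
    subprocess.run(cmd, check=True, timeout=timeout_s)
```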
In one of my penetration testing training classes, we generated a malicious PDF file that would give us a shell when the victim opened it in Adobe Reader.
Granted, it relied on a specific bug in the JavaScript engine of Adobe Reader, so unless they're using a version that's 15 years old, it wouldn't work today, but you can't be too cautious. 0-days can always exist.
True, I just figured that once you're handling a PDF with as much care as if it were poisoned, it's perhaps better to send that poison to someone else to handle.
It's not all or nothing. Depending on your threat model, Apple's services might be fine. But I guess most people don't think enough about the implications of storing many years worth of data at a US company like Apple.
Apple has actually proven itself over a long period of time on this issue. Maybe Mozilla has as well (do they encrypt telemetry logs etc for people with a Mozilla login?) but I haven't heard so much about that.
Did you really forget about Snowden's Apple slide? Also, their phones are routinely mirrored at the border, just to support the unconstitutional government agenda of policing thoughts and speech.
All US automakers are doing the same thing. There's gentle up-marketing collusion.
The root issue is that auto demand is finite and population-based. Automakers are all pretty good at margin and manufacturing cost control.
So the only independent variable left that can influence revenue and profits is the average sold vehicle price.
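To make the arithmetic explicit (a back-of-the-envelope simplification): profit ≈ units sold × (average price − average unit cost). With units capped by population-scale demand and unit cost already squeezed about as low as manufacturing control gets it, average price is the only term left to move.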
New entrants face a scale issue: it's difficult to compete with the larger manufacturers' production costs with orders of magnitude less sales volume.
Which is why you historically only saw state-sponsored new manufacturers break into the market (read: Japan, Korea, China).
Electrification turned some of this on its head, but not completely. GM, Ford, et al. can still build just enough mid-market electrics to spoil others' volumes, without attempting to build something really good and cannibalizing their own luxury vehicles.
Price conscious consumers have been out of the "New" car market for a very long time. New cars have a massive premium that never makes sense.
Instead of buying a brand new Geo Metro like you would in the 90s, you just buy a used Corolla or Civic. You end up with a better car and it lasts longer anyway.
That means the majority of the "New" car market has already decided price isn't that important.
Which is why the "average" new car price is $50k and people are signing up for 80-month loans on trucks.
But also note that not all LED lights flicker.