Hacker News | afdbcreid's comments

As an open source maintainer, I feel that statement is really unfair. Yes, we do sometimes close bug reports without evidence they are fixed. But:

- We owe you nothing! And the fact that people still expect maintainers to work for them is really sad, IMHO.

- Unlike corporate workers, nobody is measuring our productivity, so we have no incentive to close issues we believe are unfixed. That means that when we do close an issue, we believe it has a high chance of being fixed; we also weigh the cost of having many maybe-fixed open issues against the cost of closing a still-standing one, and (try to) choose what's best for the project.


It's not about an expectation of work (well, there are some entitled people, sure).

It's about throwing away the effort the reporter put into filing the issue. Stale bots disincentivise good-quality issues, make them less discoverable, and create the burden of having to collate discussion across N previously raised issues about the same thing.

Bug reports and FRs are also a form of work. They might have a selfish motive, but they're still raised with the intention of enriching the software in some way.


IMO closing issues via stale bot is fine; the problem is locking issues so that no further conversation is allowed. Multiple times, I've encountered multi-year-old issues (usually unfixed because the fix isn't simple or isn't compatible with the current architecture). There's usually a good amount of conversation between users offering workarounds (and updating those workarounds for newer versions), until the stale bot locks the issue.

This 1000%. Whoever came up with the idea of closing and locking issues because no one has posted on them for a while is at best not all that bright and at worst downright sinister.

Closing an issue due to staleness is one thing, locking it is another.


> That means that when we close the issue, we believe it has a high chance of being fixed

I agree with this iff it's being done manually after reading the issue. A stale bot is indiscriminate. As for "owing" the user, that's fair, but I'd assume that the person reporting the bug is also doing you a favor by helping you make things more stable and contributing to your repo/tool's community.


I partially agree, but even with stale bots nobody is measuring the maintainers' productivity. So when they made the choice to use stale bots, they did so because they believed that's best for the project. It's different from corporate.

Nobody is measuring their productivity, but people definitely look at how many open issues they have and potentially how long those issues have existed. They’re likely incentivized to close issues for appearances.

With a popular open source project, you'll quickly get to a number of bug reports that you have no chance of ever solving. You will have to focus on the worst ones and the ones affecting the most users.

At the same time, you want to communicate to users that this is the case so they don't have wrong expectations. But also, psychologically it is demotivating to have a 1000+ open bug queue with no capacity to re-triage, and only two maintainers able to put in a few hours every month or every week.

In open source, "won't fix" means either "not in scope — feel free to fork" or "no capacity ever expected — feel free to provide a fix".

The optimization problem is how do you get the most out of very limited time from very few people, and having 1000+ open bugs that nobody can keep in their head or look for duplicates in is mentally draining and stops the devs from fixing even the top 3 bugs users do face.


The problem is that your users also have limited time. If it's clear you're not even looking at issues where someone has put in lots of effort to help you, then you're only going to get lazy issues, and it will actually take more effort from you to do all that work yourself if you want to reach the same software quality.

I think you are missing the point: a user putting in a lot of effort into a bug report is usually trying to help themselves get the bug fixed.

As a maintainer, you will obviously look at that bug with more appreciation. But if you estimate it will take 3 months of active development to fix, spread over a full year of your weekends (which you can't afford), what would you do?

And what would a reasonable user rather see? "Yes, this is an issue, but it's very hard to fix and I don't have the time", or just letting the bug linger?


> We owe you nothing! And the fact that people still expect maintainers to work for them is really sad, IMHO.

Users don't owe you anything either. Auto-closing reports without even looking at them is like asking for donations only to throw 90% of what you get straight into the trash. Not cool. If you don't want bug reports, state that up front, or at least leave bugs open for other users to see and discuss. Otherwise, users are free to warn others to stay away from you and your projects.

And that's before getting into more complex issues, like what responsibility you have if you take on maintenance of existing software and end up breaking something that was working perfectly for some users.

> Unlike corporate workers, nobody is measuring our productivity therefore we have no incentive to close issues if we believe they are unfixed.

There are plenty of incentives, e.g. pride.

> That means that when we close the issue, we believe it has a high chance of being fixed, and also we weigh the cost of having many maybe-fixed open issues against maybe closing a standing issue, and (try to) choose what's best for the project.

That's fine, but bots that auto-close issues unless the reporter dances for them are the opposite of that.


Why do you close the issue then?

Because I have a reason to believe it's fixed, I have many more like it and it's difficult to reproduce. Simple :)

You have no evidence that it's fixed, but you have reason to believe it's fixed?

>Because I have a reason to believe it's fixed

What reason?


Because open source is corporate now

There are two reasons this logic is incorrect.

1. It's not Iran's mercy, but deterrence. If Iran were to target critical infrastructure constantly, Israel and the U.S. would bomb it much more easily. Both sides currently avoid doing that for the same reason.

2. Targeting the same places again and again means they cannot target other places, like cities, where even a miss has greater impact. So the economics of munitions makes them prefer not to do that.


Uh, Israel and USA are already bombing core infra in Iran. Iran is retaliating against Israel as your point 2 states, and against the Gulf countries on their critical monetary assets - because that's where it hurts either party. Targeting civilian infra in Israel means Israel's image of infallibility is shattered, while targeting monetary assets in Gulf countries (like gas fields, refineries, financial districts, etc) means that they're intent on applying pressure to the Gulf countries. They can't do the former to the latter because of the extremely large (90%+) expat populations, and they can't do the latter to the former because Israel's sensitive assets were presumably prepared for the long fight, so are likely to be heavily guarded.

There is no .NET Framework 5. .NET Core 5 is just .NET 5.


As a Hebrew speaker I cannot understand how you came to this conclusion. The closest I can think of is ת-י-ר, which is the root related to touring.


Oh sorry, I thought it was also in Hebrew, but it looks like it is not. I would expect the same root to show up in other Semitic languages, but at least in Arabic it's

ط ي ر


ChatGPT claims that the same or similar root does exist in the meaning "bird" or "to fly" in a lot of other Semitic languages. Interestingly, it also claims that there is some correspondence in Hebrew, in the noun תור (tor) that represents a specific kind of bird (turtledove).


Indeed, in Hebrew, תור (tor) is the word for turtle-dove.


You'd be surprised to see, after deep inspection, how little of Rust you can remove while keeping its safety story the same (that is, memory safe without GC).

Traits? Nope. We need some way for code reuse. Classes cannot be made memory safe without extra cost (at least, I don't know how they could be), and they are no less complex either. Templates like C++? More complex, and they don't allow defining safety interfaces. No tool for code reuse? That would also severely limit safety (imagine how safe Rust would be if everyone had to roll their own `Vec`).
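
To illustrate the "safety interfaces" point, here's a minimal sketch (the trait and type names are illustrative, not from any real library): a trait bound lets one generic function be reused across types while the compiler checks that every type actually provides what the function needs.

```rust
// Traits as checked "safety interfaces": generic code states exactly
// what it requires from a type, and reuse stays fully type-checked.
trait Area {
    fn area(&self) -> f64;
}

struct Square { side: f64 }
struct Circle { radius: f64 }

impl Area for Square {
    fn area(&self) -> f64 { self.side * self.side }
}
impl Area for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
}

// One generic function reused for every implementor; the bound
// `T: Area` is what lets the compiler verify each use site.
fn total_area<T: Area>(shapes: &[T]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let squares = [Square { side: 2.0 }, Square { side: 3.0 }];
    assert_eq!(total_area(&squares), 13.0);
}
```

The marker traits `Send` and `Sync` work the same way: they are interfaces whose presence or absence the compiler checks before letting a type cross a thread boundary.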

The borrow checker of course cannot be omitted. ADTs are really required for almost anything Rust does (and are also fantastic on their own). Destructors? Required to prevent use-after-free.
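
A small sketch of why destructors matter (the `Guard` type here is made up for illustration): `Drop` runs deterministically at scope exit, which is what lets RAII guards release memory, locks, and file handles without a garbage collector.

```rust
use std::cell::Cell;
use std::rc::Rc;

struct Guard {
    released: Rc<Cell<bool>>,
}

// The destructor runs exactly when the value goes out of scope, so
// cleanup cannot be forgotten and cannot run while the value is live.
impl Drop for Guard {
    fn drop(&mut self) {
        self.released.set(true);
    }
}

fn guard_runs_at_scope_end() -> (bool, bool) {
    let released = Rc::new(Cell::new(false));
    let alive_inside;
    {
        let _g = Guard { released: Rc::clone(&released) };
        alive_inside = released.get(); // false: not yet dropped
    }
    // (still alive inside scope, dropped after scope)
    (alive_inside, released.get())
}

fn main() {
    assert_eq!(guard_runs_at_scope_end(), (false, true));
}
```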

Async can be removed (and in fact, wasn't there in the beginning), which is a large surface area, but even today it can mostly be avoided if you're not working in certain areas.

I don't think anybody can deny Rust is complex, but most often it's inherent complexity (what you call "sophistication") given the constraints Rust operates in, not accidental complexity.


Absolutely this. Folks are used to an awful lot of the complexity being hidden from them through avoidance of threading, runtimes, garbage collectors, standard libraries, and so on. For a language which exposes all of the complexity, Rust feels minimalist. C++ is one of a small number of other languages which also expose all the complexity, and by comparison it feels gargantuan, like poorly thought-out addition after addition. I don't mean to disparage the C++ devs at all; C++ has managed to be useful for ~40 years, and it's still capable of incredible things. It's just that we've learned a lot over those 40 years, computational capacity has grown significantly, and Rust has had the opportunity and architecture to integrate some of that learning more fundamentally.

Somehow most of the libraries in the Rust ecosystem seem to interoperate with each other seamlessly, and use the same build system, which I didn't have to learn another unrelated language to use! Astounding!


> Traits? Nope. We need some way for code reuse.

Says who? You can totally do code reuse using manually written dynamic dispatch in "Rust without traits". That's how C does it, and it works just fine (in fact, it's often faster than Rust's monomorphic approach, which results in a huge amount of code bloat that is often very unfriendly to the icache).

Granted, a lot of safety features depend on traits today (Send/Sync for instance), but traits are a much more powerful and complex feature than you need for all of this. It seems to me that it's absolutely possible to create a simpler language than Rust that retains its borrow checker and thread-safety capabilities.

Now whether that'd be a better language is up to individual taste. I personally much prefer Rust's expressiveness. But not all of it is necessary if your goal is only "get the same memory and thread safety guarantees".


> Says who? You can totally do code reuse using manually-written dynamic dispatch in "rust without traits". That's how C does it, and it works just fine.

Rust can monomorphize functions when you pass in types that adhere to specific traits. This is super-handy, because it avoids a bounce through a pointer.

The C++ equivalent would be a templated function call with concept-enforced constraints, which was only well-supported as of C++20 (!!!) and requires you to move your code into a header or module.

Zig can monomorphize with comptime, but the lack of trait-based constraint mechanism means you either write your own constraints by hand with reflection or rely on duck typing.

C doesn't monomorphize at all, unless you count preprocessor hacks.
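
The two dispatch strategies being compared can be shown side by side in Rust itself (trait and type names here are illustrative): a generic bound gets a specialized, directly callable copy per concrete type, while `dyn` goes through a vtable pointer, the way C-style manual dispatch does.

```rust
trait Greet {
    fn greet(&self) -> String;
}

struct English;
impl Greet for English {
    fn greet(&self) -> String { "hello".to_string() }
}

// Monomorphized: the compiler emits a specialized copy per concrete
// type, so the call is direct (and inlinable). This is the "bounce
// through a pointer" that generics avoid.
fn greet_static<T: Greet>(g: &T) -> String {
    g.greet()
}

// Dynamic dispatch: a single compiled copy; every call goes through
// the vtable embedded in the `&dyn Greet` fat pointer.
fn greet_dyn(g: &dyn Greet) -> String {
    g.greet()
}

fn main() {
    let e = English;
    assert_eq!(greet_static(&e), greet_dyn(&e));
}
```

Rust lets you pick per call site, which is the trade-off the thread is debating: monomorphization costs code size and compile time, dynamic dispatch costs an indirect call.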


If Anthropic can't use a really simple API separation and rate-limit only one, it's really on them.


They can, but then the cost per subscription would not be that low.


I think GP is saying that if the attackers can push an update it will be scary.


Yes, I got that.


The only things in Rust that are real statements are `let` statements, and item statements (e.g. declaring an `fn` inside a function). All other statements are in fact expressions, although some always return `()` so they're not really useful as such.
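
A small sketch of the statement/expression distinction described above: `if`, blocks, and even `loop` produce values, while `let` does not.

```rust
fn classify(n: i32) -> &'static str {
    // `if` is an expression, so it can be the function's tail value.
    if n % 2 == 0 { "even" } else { "odd" }
}

fn main() {
    // A block is an expression too: its final expression (the one
    // without a semicolon) is the block's value.
    let doubled = {
        let tmp = 3 * 2;
        tmp
    };
    assert_eq!(classify(3), "odd");
    assert_eq!(doubled, 6);

    // By contrast, `let` is a true statement with no value; you can't
    // write `let x = (let y = 1);`. Even `loop` is an expression and
    // can yield a value via `break`.
    let from_loop = loop { break 42 };
    assert_eq!(from_loop, 42);
}
```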


Surely declaring structs, traits, top-level functions, etc?


I don't know of a single mainstream language that uses parser generators. Python used to, and even they have moved on.

AFAIK the reason is solely error messages: the customization available with handwritten parsers is just way better for the user.


I'll let you decide whether it counts as "mainstream", but the principal implementation of Nix has a very old school setup using bison and flex:

https://github.com/NixOS/nix/blob/master/src/libexpr/parser....

https://github.com/NixOS/nix/blob/master/src/libexpr/lexer.l


It shows, even as a Nix fan. The error messages are abysmal.


Ruby also used to use Bison; it uses its own https://github.com/ruby/lrama these days.


Rust is susceptible to segfaults when overflowing the stack. Is Rust not memory safe then?

Of course, Go allows more than that: with data races it's possible to reach use-after-free or other kinds of memory unsafety. But segfaults alone don't make a language memory-unsafe.
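
For contrast with the Go situation discussed here, Rust rules out data races at compile time: a plain `&mut` can't be shared across threads at all, and shared mutable state has to go through a sanctioned wrapper. A minimal sketch (function name is illustrative):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// `Arc<Mutex<_>>` is one sanctioned way to share mutable state;
// trying to move a bare `&mut i32` into multiple threads instead
// would be rejected by the compiler via the Send/Sync traits.
fn parallel_sum(per_thread: i32, threads: usize) -> i32 {
    let total = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..threads {
        let total = Arc::clone(&total);
        handles.push(thread::spawn(move || {
            // The lock guard is a proof of exclusive access; it is
            // released by its destructor at the end of the closure.
            *total.lock().unwrap() += per_thread;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let v = *total.lock().unwrap();
    v
}

fn main() {
    assert_eq!(parallel_sum(10, 4), 40);
}
```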


Go is most emphatically NOT memory-safe. It's trivially easy to corrupt memory in Go when using goroutines. You don't even have to try hard.

This stems from the fact that Go uses fat pointers for interfaces, so they can't be atomically assigned. Built-in maps and slices are also not corruption-safe.

In contrast, Java does provide this guarantee. You can mutate structures across threads, and you will NOT get data corruption. It can result in null pointer exceptions, infinite loops, but not in corruption.


This is just wrong. Not that you can't blow up from a data race; you certainly can. Simply that any of these properties admit to exploitable vulnerabilities, which is the point of the term as it is used today. When you expand the definition the way you are here, you impair the utility of the term.

Serious systems built in memory-unsafe languages yield continual streams of exploitable vulnerabilities; that remains true even when those systems are maintained by the best-resourced security teams in the world. Functionally no Go projects have this property. The empirics are hard to get around.


There were CVEs caused by concurrent map access. Definitely denials of service, and I'm pretty sure it can be used for exploitation.

> Serious systems built in memory-unsafe languages yield continual streams of exploitable vulnerabilities

I'm not saying that Go is as unsafe as C. But it definitely is NOT completely safe. I've seen memory corruptions from improper data sync in my own code.


Go ahead, talk through how this would be used for exploitation.


I would try to cause the map reallocation at the same moment I'm writing to it, leading to corrupted memory allocator structures.


Go ahead and demonstrate it. Obviously, I'm saying this because nobody has managed to do this in a real Go program. You can contrive vulnerabilities in any language.

It's not like this is a small track record. There is a lot of Go code, a fair bit of it important, and memory corruption exploits in non-FFI Go code are... not a thing. Like, at all.


Go is rarely used in contexts where an attacker can groom the heap before doing the attack. The closest one is probably a breakout from an exposed container on a host with a Docker runtime.

I triggered SSM agent crashes while developing my https://github.com/Cyberax/gimlet by doing concurrent requests.

I'm certain that they could have been used to do code execution, but it just makes no real sense given the context.


If you're certain, demonstrate it. It'll be the first time it's been demonstrated. Message board arguments like this are literally the only place this claim is taken seriously.

