There isn't "one Indian metaphysics" but many schools (with 3 of them being the most prominent - Advaita, Vishishtadvaita and Dvaita) called the Vedanta schools. These schools deal with answering fundamental questions:
1. Who am I? What is I? Is the concept of "I" an Illusion or is it Real?
2. What is Consciousness and its nature?
3. What is Karma?
4. Is there a God?
5. If there is a God, then what is the relationship between God and I? Am I different from God or the same as God?
6. What is Creation? Is it an Illusion or is it Real? How big is it? What are the various planes of existence (also called Lokas)?
7. What is the relationship between God and Creation? Is it dependent, independent or one with God?
8. What is the relationship between Creation and I?
9. What is the ultimate purpose of existence? Is there a life beyond this life?
10. What is Time? Is it Relativistic or Absolute? What is the Age of Creation? Is Creation cyclical or is it linear? How does Karma tie into this?
11. What is considered Pramana (Proof) for arriving at an answer for these fundamental questions?
> an example of those groundbreaking Indian metaphysics studies
It is hard to point to one because Indian metaphysics is vast. But for me, the most groundbreaking Indian metaphysics study is in coming up with a value for the Age of the Creation by dividing Time into Yuga cycles.
To quote Carl Sagan "The Hindu religion is the only one of the world’s great faiths dedicated to the idea that the Cosmos itself undergoes an immense, indeed an infinite, number of deaths and rebirths. It is the only religion in which the time scales correspond to those of modern scientific cosmology. Its cycles run from our ordinary day and night to a day and night of Brahma, 8.64 billion years long. Longer than the age of the Earth or the Sun and about half the time since the Big Bang.".
He got the part about Creation undergoing continuous cycles of Creation and Destruction right, but the figure he quoted is not the age of the Cosmos. The Multiverse, as per Sanatan Dharma, has a lifespan of 311.04 trillion years, with our Universe (also called a Brahmanda) being a really, really tiny portion of it. There are infinite Brahmandas in the Creation, and the Creation itself is divided into 14 planes of existence / parallel universes called Lokas. So 8.64 billion years is indeed the duration of a day and night of Brahma, but it does not equate to the Age of the Creation. The Age of the Creation = 100 Brahma Years = 311.04 trillion human years. And yes, we are talking relativistic Time here.
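For anyone checking, the arithmetic behind those figures works out like this:

```latex
\begin{align*}
\text{1 Chaturyuga} &= 4{,}320{,}000 \text{ human years} \\
\text{1 day of Brahma} &= 1000 \text{ Chaturyugas} = 4.32 \times 10^{9} \text{ years} \\
\text{1 day + night of Brahma} &= 8.64 \times 10^{9} \text{ years (Sagan's figure)} \\
\text{1 Brahma year} &= 360 \times 8.64 \times 10^{9} = 3.1104 \times 10^{12} \text{ years} \\
\text{Age of the Creation} &= 100 \text{ Brahma years} = 3.1104 \times 10^{14} = 311.04 \text{ trillion years}
\end{align*}
```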
Coming to relativistic Time, I really like how such concepts are so well explained through stories in Dharmic scriptures. One of those is about a King named Kakudmi who took his daughter Revati to visit Brahma Loka (another plane of existence / parallel Universe), which is the residence of Lord Brahma (the deity in charge of Creation in Sanatan Dharma). He felt there was no one on Earth worthy of her intellect and beauty, and wanted to consult Brahma to find her a suitable husband. He nevertheless took with him a shortlist of candidates, which he had reluctantly prepared, to present to Brahma. On arrival in Brahma Loka, Brahma was engaged in listening to musical performances by the Gandharvas (artists in the Dharmic lore). Kakudmi bowed and waited patiently. Once the performance was over, he approached Brahma with his problem and his list of suitable candidates. Brahma laughed loudly and explained to Kakudmi that Time runs differently in different planes of existence, and that in the short time they had waited in Brahma Loka to see him, 27 Chaturyugas had elapsed on Earth. 27 Chaturyugas. Each Chaturyuga = 4,320,000 human years. So 27 Chaturyugas = 116,640,000 human years.
“O King, all those whom you may have decided within the core of your heart to accept as your son-in-law have died in the course of time. Twenty-seven catur-yugas have already passed. Those upon whom you may have already decided are now gone, and so are their sons, grandsons and other descendants. You cannot even hear about their names. You must therefore bestow your daughter upon some other husband, for you are now alone, and your friends, your ministers, servants, wives, kinsmen, armies, and treasures, have long since been swept away by the hand of Time.”
Brahma then comforts King Kakudmi and prophesies that by the time he reaches Earth, God would incarnate, and with him would incarnate various Devatas (demigods), and one of them, by the name of "Balarama", would become her husband.
The bright yellow color could work perfectly if the design style weren't flat. It's incredible how much more legible UI elements can be if you add some shading, gradients, 3D borders, anything!
> It may sound stupid, but you can't have unhandled exceptions if you don't have exceptions...
> panic!() exists in Rust, but that's not how recoverable errors are handled.
This is the worst argument in the whole article, and this is the worst part of the language. Everyone says it's not like exceptions, but in fact it is much worse. Panic is stringly typed, and you can catch_unwind it, just like with try/catch in any other language. And the actual worst part: you will never know whether a panic can occur in any of the underlying functions until it is too late. Developers be damned if they want to choose behaviour other than crashing the whole program.
Either double down on using the standard error handling everywhere, or put something like "throws panic" in the function signature (a la Java checked exceptions). Many parts of the language have strict checks for everything; why does panic have to be an outlier?
It's not like exceptions because it's not used like exceptions. You only use panic if you want to crash the whole program. If you don't want to crash the whole program, you don't use panic. You do not want to crash the whole program if the user data failed to validate, so you do not panic in that case. If a library panics on invalid user data, that's a pretty serious bug.
I've been programming in Rust since it came out, and a couple of those years professionally, and I don't think I've ever seen anyone use catch_unwind. Maybe once in a test case?
To be concrete, let's talk about an example of a panic. Say you want to access the 3rd element of a vector. There are two cases:
1. You're not sure whether the vector actually has three elements or not. In this case, you call `my_vector.get(2)`, which returns an Option, and you handle the case where it's present and the case where it's not. This is standard error handling.
2. You are sure that the vector has at least three elements. Perhaps you just checked its length for some other reason, or you are careful to maintain this invariant, or you just constructed this vector by pushing 5 elements onto it. In this case, you would typically use `my_vector[2]`, which panics if the vector is too short.
For #2, the thing to notice is that this function literally never panics, under any input whatsoever, if it is written correctly. Should that fact really clutter up its type signature, either by forcing it to return a Result type or by forcing it to have a "throws panic" marker?
EDIT: This is for a function that uses a possibly-panicking operation, `my_vector[2]`. There are also the functions that define a potentially panicking operation, like the vector indexing function itself. You could put a marker in the type signature of those, that would be reasonable. Though it would only be for users; the compiler wouldn't care.
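A minimal sketch of the two cases (the function names here are my own, for illustration):

```rust
// Case 1: length unknown -- fallible access via `get`, which
// returns an Option instead of panicking on out-of-bounds.
fn third_or_default(v: &[i32]) -> i32 {
    v.get(2).copied().unwrap_or(0)
}

// Case 2: the invariant "at least 3 elements" holds by
// construction, so plain indexing is fine and never panics here.
fn third_known_present() -> i32 {
    let v = vec![10, 20, 30, 40, 50]; // just pushed 5 elements
    v[2]
}
```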
Wasn't this argument used all the time by the Go community? I.e. only use panic when you intend the program to halt, and handle all potential problems with the Error type.
I think Rob Pike even said it's easy to see where a program fails in one of his talks?
But to me the superb thing about exceptions, is that error handling can be done where it makes sense. I.e. we can try{ problem-code }catch(problem){ handle problem } in a single location. Otherwise we end up peppering the entire code base with a ton of error checking far down the call stack, where we really cannot do much about the problem anyway (unless we are writing command line tools where error handling is just writing the problem to stderr).
Exceptions give us a nice way to let problems bubble up to the surface, while also stating what the problem was, and where it occurred. That is great IMO.
> Exceptions give us a nice way to let problems bubble up to the surface, while also stating what the problem was, and where it occurred.
Work is ongoing on some of this, but there are popular libraries in Rust (like `anyhow`) that let you attach backtraces to regular errors, add context, etc. Propagating errors to callers is handled with the standard `?` operator, which means "short-circuit and return this if it was an error, otherwise give me the successful result". This has the benefit of making early exits explicit, without interrupting the visual flow of straight-line code.
The simple explanation is that Rust has an equivalent of Java-style exceptions of "throw here, handle elsewhere", but has a different syntax for this. Instead of try/catch, there's a `?` operator to return ("rethrow") the error to an outer scope. It's a better fit for Rust's use of a generic Result type, but overall its usage is similar to the checked exceptions in Java.
Because Rust uses the type-safe explicit Result/? approach for all non-bug failures in the program, the implicit panic (that behaves similarly to RuntimeException in Java) is reserved for assertion failures and crashes only.
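A sketch of that `?`-based propagation (the function and its error type are illustrative): a parse failure short-circuits out of the function as an `Err`, much like an exception propagating to the caller.

```rust
// `?` returns the Err to the caller ("rethrows" it); on Ok it
// unwraps the successful value and execution continues.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    let n: u16 = s.trim().parse()?; // early return on parse failure
    Ok(n)
}
```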
`catch_unwind` is not guaranteed to work in Rust. There's a setting to disable it and always hard abort() the whole process on every panic. Rust is serious with panics being for programmer's bugs only, and not trivialities like "file not found".
Thanks for the explanation. I guess most languages need this feature, i.e. fail anywhere below this call, then return the error type + where it occurred + a stack trace. I've even heard of people doing stuff like that in C, where they store the stack trace in a list they populate and return errors as part of the same struct, in order to have something similar to exceptions.
I have to say I really enjoyed Rust when it was in its infancy (version 0.1 - 0.2 or thereabouts), but have since fallen off. It used to be so simple and so clean, and unlike anything else. Today it's just way too complex for me :-)
> But to me the superb thing about exceptions, is that error handling can be done where it makes sense.
Not necessarily. With exceptions, it is easy to be the cause of an error and just throw the exception, then expect something up the stack to handle it. Which, of course, has no idea how, since it didn't control the cause in the first place.
Forcing error handling as near as possible to where the error can happen prevents this.
Actually, up the stack is usually the only place that knows how to handle the error. For instance, sometimes dumping to stderr is the right thing to do, other times logging it, other times displaying a generic crash GUI, sometimes displaying a customized UI. There may also be times when the exception can be handled in a better way, with fallbacks for example.
The Rust/Go approach always makes me laugh. Normally in engineering or anything where reliability matters, panicking is understood to be a bad thing to do and people go through extensive training to ensure they don't do it. Somehow these language communities decided that panicking and giving up on the spot is a smart behaviour.
Panic is idiomatic error handling. Take something as basic as indexing into a list. Get it wrong and Rust will panic.
Sometimes exiting the program is the right thing to do.
Yes, but it's very rare that the code where something went wrong is in the position to decide that. The survival of the entire process is not a decision to delegate to every possible line of code or library author.
Consider a very common case where I benefit from exceptions every day - my IDE. IntelliJ hosts a bazillion plugins, of varying quality. These plugins like to do very complex analysis on sometimes broken and incoherent snippets of code, that may be in the process of being edited. In other words it's a nightmare to correctly predict everything that can go wrong.
Not surprisingly, sometimes these plugins crash. And you know what? It doesn't matter. A lot of the code is just providing optional quality-of-life features like static analysis. If one of them goes wrong, IntelliJ looks at the exception and figures out which plugin is likely to blame, it examines the type of error and maybe gathers editor context, it can report it easily to a central server that then groups and aggregates exceptions based on stack traces. Meanwhile as a user, it doesn't bother me because it's fine to just not have that analysis show up in the editor.
If every time an IDE plugin encountered an unexpected situation it aborted the entire process it'd be insane. The plugin ecosystem could never scale that way. People would be afraid of installing/upgrading plugins and that in turn would discourage people from writing them or adding features to them.
In reality nothing does that because, well, why would you when you have good exceptions? But even so, Java has a way to block that using the SecurityManager. Now that they are deprecating the SecurityManager, "how do I stop code from calling System.exit" is one of the use cases they're planning replacements for.
I'm sure there would still be ways to bring the entire process to halt(for example, spawn thousands of threads with infinite loop). My point is just because a bad developer wrote bad code doesn't mean that a tradeoff chosen for a language design is necessarily bad.
In reality it's very hard to accidentally write an infinite loop that spawns threads. There's no idiom that would lead to such a pattern and I can't recall ever encountering such a bug in the wild.
Yes, in theory, there are all sorts of ways you can still trash the process with bad code. But in practice, the sorts of bugs that programmers really make in GC-d memory-safe languages are the ones that don't. So, exception based error handling really does come in very useful and Rust probably got it wrong here.
> The survival of the entire process is not a decision to delegate to every possible line of code or library author.
Stated like that, who can really disagree?
I remember writing a bunch of Go when the language was still very new (2009 - 2011). One of the most popular use cases for the language was making websites. All sorts of unexpected problems caused the entire website to go down, due to unexpected panics here and there. The suggested solution from the Go team was to just restart the web server whenever it was killed by a panic. Surely that cannot be the best way to do it.
>Panic is idiomatic error handling. Take something as basic as indexing into a list. Get it wrong and Rust will panic.
This is not really true. If you are indexing into something where the index may be out of bounds, you use the `get` method, which returns `None` if the index is out of bounds. The index operator is pretty much just shorthand for `v.get(i).unwrap()`.
Yes, but the problem is that very often a programmer "knows" an index operation can't fail because they haven't thought of a case where it is a different size, or code gets refactored and assumptions are invalidated, etc.
The panic mentality comes from people who have spent most of their life writing C++, in which if anything goes wrong like an out of bounds index, memory might be corrupted in arbitrary ways, and in which you don't have a GC to clean up after you. Writing exception safe code is much easier in type safe GCd languages, and many programming errors end up being recoverable.
> it is easy to be a cause of error and just throw the exception, then expect up the stack to handle it.
Agree, this can happen. Perhaps the bad attempt at fixing this in Java, for instance checked exceptions, made people dislike exceptions even more. The caller "has to handle" the exceptions or re-throw them, of course. But RuntimeExceptions can come from anywhere at any time, so the "guard" provided by checked exceptions just made a complete mess of things. People are lulled into thinking that methods without the 'throws BlaBlaException' signature are safe, and so on.
I guess no language is 100% on everything, but I've always felt that exceptions are one thing I really like; especially when a language manages to do them correctly.
I don't understand your #2 (or your whole point). It's exactly the case for exceptions and how exceptions happen. "You won't get an exception if your code is written correctly and the inputs of your program match your programmer expectations": yeah, but maybe two years down the line someone refactors the code which was resizing the vector before, and now you have the most run-of-the-mill exception. I just hope that someone making an app with your library has a way to catch the panic, so that the software doesn't crash but shows a helpful error dialog to your user and makes a backup of its data before softly exiting instead. Otherwise we're really back at the pre-1980 state of the art of software design.
> maybe two year down the line someone refactors the code which was resizing the vector before and now you have the most run-of-the-mill exception
In other words: there’s a bug in the code, and that bug has now caused an unrecoverable error, panicking the thread. Now the thread has died (or maybe caught the panic to present a friendly error message). Either way, the user is now aware of the bug, and disaster has been avoided.
Of all situations where your app might want to create a backup of its state, why would you choose to do so precisely while unwinding a crashed thread, where all assumptions, bets and invariants are already off?
And what would the helpful error dialog even say? „A problem has occurred and the app will now shut down“? From the user’s point of view, is that really an actionable or helpful error dialog?
> And what would the helpful error dialog even say? „A problem has occurred and the app will now shut down“? From the user’s point of view, is that really an actionable or helpful error dialog?
Yes, literally. This is already much better than anything that gets the macOS spinning ball of death going. You can even continue if you are running an event-driven app, where the error may have happened as part of an event handler (and thus be limited to a very specific part of the software).
To give my own experience: I develop https://ossia.io and use this catch-all method. In 7 years I've gotten numerous thanks from the users for it not crashing but being able to carry forward in case some issue crops up in a sub-sub-sub-module. Not a single time I remember this to cause some memory corruption later.
(backing up state is done up to the previous user action but while in my case it works, it's not always practical)
So in this space, you might well feel confident that catch_unwind() is appropriate, although I still think the thread solution is more elegant.
I suspect in reality most of the problems this would catch in OSSIA wouldn't end up as panics in a hypothetical "Rust OSSIA" because of the different attitudes to exception throwing/ panic vs "normal" error flow in these languages and libraries - unless you got really happy slapping "unwrap()" on things when you shouldn't, but sure, it would solve this problem.
As to memory corruption - the problem isn't strictly "memory corruption" but unstable system state. If my underlying cause is that somebody's dubious Leslie simulator blows up when I frob the gain control on it too quickly, restoring exactly the state in which it blew up last time doesn't help me on its own. I need some way to say OK, that was crazy, no Leslie simulator until I save the project and then we can take it gently, which again is somewhere the thread solution is nicer.
>To be concrete, let's talk about an example of a panic. Say you want to access the 3rd element of a vector. There are two cases:
Reality is not that simple; if you worked in this industry you would know. For example, I was building a web scraper years ago, and the WebView would crash since it's C/C++. Instead of doing its job and showing a web page (or a broken web page), it crashed my entire program. The solution was to split my program into a parent program and a child program so this bug does not bring my entire thing down, and I can catch the crash, record the bad URL that crashes it, and try again or just skip it.
I would hate to use Rust libraries that would crash my entire program if they are bugged for some reason. In my experience I have found bugs in many popular libraries. So in Rust, if I import, say, a library to resize an image, and the image is corrupted and the library is shit, it will crash my entire program? I would prefer a higher-level language where I can try/catch the image resize function, and if shit goes wrong I can show the user a relevant message, or fall back to some other resizing method.
> I would prefer a higher-level language where I can try/catch
What you're describing is the `catch_unwind` mechanism that Rust does have. Because panics are implemented with unwinding (by default), you can catch them. But it's not the normal error handling mechanism; it's the "oh god an assert just failed, or we just OOMed or something, who knows, most bets are off" mechanism. If you have a main loop that's sufficiently isolated from individual tasks, such that you think you can do something useful with the fact that one of your tasks just vanished in a puff of smoke, then catching a panic coming out of a task might be a reasonable thing to do. That often makes sense in server code, where your main loop might want to keep trucking, or at least gracefully shut down other connections. But for most library code, the right thing to do is to allow most panics to propagate and crash the caller.
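A rough sketch of that "isolated task" pattern (the task and its bug are hypothetical):

```rust
use std::panic;

// A task that contains a bug: out-of-bounds indexing panics.
fn risky_task(i: usize) -> i32 {
    let data = vec![1, 2, 3];
    data[i] * 10 // panics if i >= 3
}

// A supervising main loop can fence off the task with catch_unwind,
// logging the failure instead of taking down the whole process.
fn run_isolated(i: usize) -> Option<i32> {
    panic::catch_unwind(|| risky_task(i)).ok()
}
```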
So, for example, in JS a correct regex can throw an exception on some input, so in the places where this can happen we can use a try/catch. What do you do in Rust? Do you check the return result, and on top of that try to catch a panic just in case the regex library is bugged? Do you have to implement two error handling methods everywhere to be 100% safe? If yes, it seems uglier to have to implement two ways of catching errors.
Yes, it happened to me many times to hit bugs when working in the real world: bugs in image libraries, bugs in regex libraries, bugs in PDF libraries, bugs in HTML/XML parsers. So from my experience working with C/C++ and higher-level languages, I prefer the higher-level languages: fewer bugs, almost no complete crashes, and better error reports from the exceptions. I never had the time to try Rust and I am not tempted so far.
> What do you do in Rust , do you check the return result and on top of that do you try to catch a panic just in case the regex library is bugged ?
Nope, we just check the return result, because libraries usually don't crash and have well-defined error cases. Having a decent type system helps catch all the possible outcomes. In a few years coding Rust, I never had a single crash due to a library panic, only from explicit unwraps I applied in my own codebase.
Panics are not intended for errors, but for unrecoverable failures. For example, in the Rust std a failing memory allocation will crash your whole program, which in most cases is what you want. For the remaining cases, there are fallible alternatives.
For example: `String::reserve` vs `String::try_reserve`, or `HashMap::insert` vs `HashMap::try_insert`.
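For instance, a sketch of handling allocation failure explicitly with `try_reserve` (the wrapper function is my own):

```rust
use std::collections::TryReserveError;

// Grow the String only if the allocator can satisfy the request;
// on failure we get an Err back instead of an aborting panic.
fn append_checked(s: &mut String, extra: &str) -> Result<(), TryReserveError> {
    s.try_reserve(extra.len())?;
    s.push_str(extra);
    Ok(())
}
```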
There is no 100% safe anywhere. Does your JS code handle out of memory errors with try-catch? No, it will abort as if nothing happened at all.
Sure, there are bugs in every code but unexpectedly panicking is considered a bug in a library so in my not too extensive experience with rust libs, these are not the norm at all. So simply writing code where you yourself don’t panic should give you quite a high chance of not hitting this case ever.
>Sure, there are bugs in every code but unexpectedly panicking is considered a bug
Yes, so would you like, say, Firefox to just crash when one of its many dependencies crashes?
You are suggesting, if I understand correctly, that only memory errors cause panics? So what if the library reads a file and something unexpectedly goes wrong with the file? It will crash the program because the developer maybe forgot to return a special error code in that case.
All the filesystem APIs return Results rather than panicking, since like you said it's expected for those to fail sometimes and for the program to handle those failures. It's possible for a library to convert those Results into panics by calling .unwrap() on them, but that would usually be considered a bad design (ok for tests and tiny programs though). So I think you have an important point here, which is that if your application is calling into a library that you worry might have some bad design decisions in it, you do have to worry about it bringing down your process. And maybe it could make sense in some rare cases to try to isolate that library with catch_unwind. But I think most Rust programmers would prefer to just fix the dependency. The fact that you can visibly spot a lot of these conversions in the code is helpful for auditing.
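For example, a minimal sketch (the path and function name are made up): the caller decides what a missing file means, rather than the library panicking.

```rust
use std::fs;
use std::io;

// Filesystem calls return io::Result; `?`-style propagation or
// explicit matching lets the application handle the failure.
fn read_config(path: &str) -> io::Result<String> {
    fs::read_to_string(path)
}
```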
I'm not super up to speed on JS, but I might draw an analogy to Python. Handling a result in Rust is similar to catching an exception of a known type in Python, a very common thing to do. On the other hand, catch_unwind is (loosely) similar to writing a bare except clause that catches every conceivable exception. You can do that, and sometimes it's correct to do that, but in most cases it's a bad idea. You don't want to accidentally suppress errors that indicate a bug in your program.
Thanks. From my experience with desktop apps in a managed language, I always added a global catch for crashes that were not caught or couldn't be caught, and there I wrote the details to a log file. Then I had a menu entry for submitting a bug report: a popup would open and the user had the option to include the log file with the exception information and details like operating system, runtime version, etc. The only thing that would bring down this app in the higher-level language (it was an Adobe AIR app in ActionScript 3) was the freaking WebView, because it was a wrapper over WebKit, and that was C++.
These days, doing backend dev, I am forced to move stuff into a different process, but for most stuff I prefer to use binaries rather than libraries. For example, for resizing an image, instead of using the built-in image library that crashes sometimes and brings the script down, I install ImageMagick on the server, write a script for resizing an image, then call that script and check its output. Sometimes I had to use the `timeout` Linux program to kill the program if it gets stuck on some input file.
If I were to create that image resize library in Rust, I would attempt to catch everything, including panics, and return it as an error result (so only system crashes would be uncaught).
IO errors are generally handled by returning a `Result` type, that contains the details of the problem on error. You wouldn't use `panic` for IO errors. `panic`s are meant for dealing with broken invariants/assertion failures because of a bug in the program.
You need to run it in a separate process. Rust does not have good enough fault isolation features to safely assume a buggy image processor won’t break your app.
* Entering an infinite loop can bring down everything. A separate thread might not, but since Rust provides no way to kill a thread without it cooperating, there is no way to stop a stuck thread without bringing down the whole process.
* Stack overflow is an instant abort, not a panic.
* Double panic, where panicking calls a destructor that itself panics, is an instant abort.
Question: if you are a library/program author, why would you intentionally use a panic instead of cleaning up and returning an error? Maybe I misunderstood, and in fact good developers never trigger panics unless there is no way to avoid it, like when they could not prevent it with more checks, or when cleanup is impossible because they already fucked up, wrote garbage into the process memory, and the safest thing is to kill the process.
I think there are a few cases where Rust likes to panic, but different people probably have different opinions here:
- Extremely common operations with dedicated syntax, where introducing error handling would be burdensome. Things like array indexing or arithmetic overflow. In these cases, you usually want an alternative, fallible way to do the same operation.
- Cases where most callers will probably convert the error into a panic anyway. One example of this might be .split_at() on slices, which is bounds-checked just like an array access. Most callers would probably just .unwrap() the out-of-bounds case, and callers who don't want it to panic can easily check before the call, so it's more ergonomic to panic.
- Cases where the only plausible reason for failure is a bug in the caller. For example, the .borrow() and .borrow_mut() methods on RefCell will panic if a write overlaps with another read or write. The caller is almost always expected to statically guarantee that that doesn't happen, usually by making all borrows short-lived. (And here again there are fallible alternatives available.)
An interesting example of something that doesn't panic, but which probably should, is taking a mutex. The standard mutex in Rust includes a "poisoning" mechanism, which almost every caller just .unwrap()s. I think the majority opinion these days is that poisoning should just be removed, but given that it's around I think most people wish it just panicked instead of returning a Result.
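To illustrate the RefCell case along with its fallible alternative (a small sketch, wrapped in a helper of my own invention):

```rust
use std::cell::RefCell;

// Demonstrates that a conflicting mutable borrow is reported as an
// Err by try_borrow_mut, where plain borrow_mut would panic.
fn overlapping_borrow_fails() -> bool {
    let cell = RefCell::new(5);
    let read = cell.borrow();
    // borrow_mut() here would panic (a caller bug);
    // try_borrow_mut() surfaces the conflict as an Err instead.
    let conflict = cell.try_borrow_mut().is_err();
    drop(read);
    *cell.borrow_mut() += 1; // no overlapping borrow now: fine
    conflict && *cell.borrow() == 6
}
```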
> is it impossible to cleanup because they already fucked up, wrote garbage in the process memory and safest thing is to kill the process
That’s essentially it, yes. My code should never actually panic. If it does, it means the state of the process has become deeply diseased, and attempting to “clean up” is likely to just make things worse. Of course, if it’s safe Rust, then it still won’t write past the end of a buffer or anything disastrous like that, but buggy code is still buggy code and there’s lots of stuff Rust won’t stop you from doing.
One of the more extreme things I’ve done in production Rust code was add a “watchdog thread”. It has a channel that takes unit and receives on a timeout, and the thread doing the actual work is expected to send it a message once a minute. If it doesn’t receive a message within a minute, it hard aborts the process. The default setup is run under a service manager like systemd to make sure it gets restarted, and that failures are actually logged somewhere.
This is meant to solve the problem that safe Rust is a Turing complete language, so is subject to the halting problem. The type checker can prove that you won’t read past the end of a buffer, but it cannot prove that your code will ever actually finish running. Which means, if you have a project like a web scraper that needs high uptime, you need to prevent it from getting stuck somehow.
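A sketch of such a watchdog (the channel type, timeout handling, and names are my guesses, not the commenter's actual code):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Spawns a watchdog thread; the worker must ping the returned
// Sender at least once per `timeout`, or the process hard-aborts.
fn spawn_watchdog(timeout: Duration) -> mpsc::Sender<()> {
    let (tx, rx) = mpsc::channel::<()>();
    thread::spawn(move || loop {
        match rx.recv_timeout(timeout) {
            Ok(()) => continue, // worker checked in, keep waiting
            Err(mpsc::RecvTimeoutError::Disconnected) => return, // clean shutdown
            Err(mpsc::RecvTimeoutError::Timeout) => {
                eprintln!("watchdog: worker stuck, aborting");
                std::process::abort(); // service manager restarts us
            }
        }
    });
    tx
}
```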
I agree, so am I wrong, or is the issue a community culture thing, where some developers panic too eagerly? Say I make a library for resizing images with one public function, resizeImage(options). A good developer would think: "maybe my code has a bug and some function from the standard library would panic; I should ensure I catch this and my public function never panics. Even if there is no memory, disk or whatever, as an author I should try not to intentionally panic." A bad Rust developer thinks: "this will never panic unless I made a bug, and if I made a bug I am happy to panic and crash some sucker's program so I get the bug report and fix the bug."
There are always bugs(logic bugs where Rust can't protect you) so why not have a clean interface?
IO errors like running out of disk space would be handled by returning a `Result` type, not by panicking. Often, Rust code/libraries panic on out-of-memory errors because recovering from that isn't a priority for most application code. But if you are writing lower-level or high-reliability code and you do want to handle out-of-memory errors, the Rust standard library (and many third-party libraries) offer alternative fallible memory allocation APIs that return `Result` instead of panicking on out-of-memory.
A good developer will crash the program, as soon as possible, if and only if it has a bug. If you want to write a program that never crashes, then you need to write a program with no bugs.
The reason you don’t want buggy code to limp along after it detects a bug, is that crashing isn’t the worst possible thing.
The worst possible thing is getting stuck in an infinite loop or a deadlock.
> I would hate to use Rust libraries that would crash my entire program if they for some reason are bugged.
You would, but for other programs with other requirements it would actually be beneficial. There's no single right answer, and you should pick the library that follows your particular requirements for that particular program.
You can turn panics into aborts with `panic = "abort"` in Cargo.toml, in which case nobody has to pay for being exception safe (though, to get the full benefits of this, you may have to rebuild the stdlib? I'm not entirely sure here).
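For reference, a minimal Cargo.toml fragment for this setting (the stdlib-rebuild caveat refers to the fact that the prebuilt standard library ships compiled for unwinding; rebuilding it is, as far as I know, only possible with nightly tooling):

```toml
# Make panics abort the process instead of unwinding,
# so no landing pads or cleanup code are needed.
[profile.release]
panic = "abort"
```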
I'm talking about the cost paid by the library author for the additional burden of writing exception safe code. Whether you use this downstream doesn't matter, the cost is already paid (in fact arguments like "no one catches" make it worse since the cost is paid and no one benefits).
The only place I've seen people using catch_unwind was in the Sentry library, to catch panics. Needless to say, it was never used, even before we removed all unwraps from the code.
> If a library panics on invalid user data, that's a pretty serious bug.
I swear, sometimes it seems like Rust people are from another planet. What do you think "unwrap" does? It's not used in every library, but certainly in many of them.
There’s no need to talk in a condescending manner.
What they said was correct: an "unwrap" outside of test/prototyping code is considered a serious bug. The Rust-loving strawmen you're creating never claimed that every line of Rust ever written is perfect and bug free.
So you essentially want me to write error handling code nonstop, constantly, all across my functions. Practically every 5 lines of code there's a potential unwrap() that I'm not allowed to call, so I have to know the details of the implementation, know the error type, handle it, return early from the function, and gracefully deal with it all. Meanwhile, in a language that has exceptions, I just put a try/catch around the code I think works fine (but maybe doesn't) and deal with it in a single location, without caring what the precise error might be.
Error code programming really seems to be objectively worse for everyone except the compiler writer. Somehow people let themselves get convinced that this is better when it's objectively not.
Rust has syntactic sugar to help you coalesce error handling into returning a single Result. You'd have to check and make sure the library you use doesn't call unwrap willy-nilly. As crazy as that sounds, it is actually common practice in the Rust community; there are tools that reveal use of unwrap and unsafe in your dependencies.
In the end you don't use Rust because it's so easy and nice to use (unless you come from C/C++). You come to Rust because you want meticulous control over performance, and you don't want to sacrifice safety to attain that.
If that's not why you're using it, I agree you're probably better off choosing Java, it's plenty fast and comfortable to use, especially if you pick modern tooling.
> You'd have to check and make sure the library you use doesn't call unwrap willy nilly.
That statement really resonates with me.
If you use a library, you’re responsible for what it does, just like how you’re responsible for your own code.
That's what the `?` syntactic sugar is meant to solve. It returns early at that point with `None` for an `Option`, or with the error variant of a `Result` if the preceding expression's error can be converted to it.
something.map_err(…)? is quite readable in my opinion, and that is the worst case, when your method returns a Result<..,..> but the called method's error type doesn't match. Otherwise it is just a single ?.
Sure, I do believe that exceptions are superior, but we have to understand that Rust is a low-level language, period. It is very expressive considering its nature, but it will never be as productive as a managed language in my opinion; we have this distinction for a very good reason. If you want maximal control over what happens "behind the scenes", you lose some of the automation that could improve productivity.
How is try/catch any better than the Err/Ok pattern? Code that doesn't handle error cases shouldn't even pass code review. This is exactly why Rust guides programmers down certain paths, to ensure all cases are always handled. If you really don't want to check Err/Ok on each call, you are free to use `?` to pass that burden up to the calling functions.
They didn’t know about `?`. My guess is that they read the first page about error handling, where it talks about unwrap and match. They didn’t get to the second page, where `?` is introduced.
Remarkable that people with such little knowledge feel comfortable talking so much.
No, you should not unwrap unless you know it is safe to do so. You should also add a comment why it is safe to unwrap, if it is not obvious.
Many programmers are writing code for sunny weather only, with error handling being something you might add as an afterthought if your code starts to feel a little too brittle.
In my eyes error handling is just as important to do correctly as getting the core of the functionality done, because error handling is a core functionality of any program, especially if we speak of libraries others are meant to use.
Error handling is what differentiates engineering from coding.
What OP meant is that proponents of Rust are often a bit out of touch with reality: go to GitHub and try to find a random Rust repo which doesn't use unwrap excessively, and which is thus full of serious bugs, according to your wording.
> Go to github and find one Rust repo which doesn't use unwrap excessively.
Consider serde-json, a widely used library to serialise and deserialise json. You asked me to find “one Rust repo”. Ok here it is - https://github.com/serde-rs/json/search?q=unwrap&type=. Of the 22 uses of unwrap, nearly all are in test code or in comments. Of the remaining 3 or 4, they seem safe to me. But maybe they’re not. Could you think of some json input that could trigger a panic from one of those unwraps?
I’ll put my money where my mouth is. I’ll donate $100 to a charity of your choice if you can find that.
But if you can’t, at least have the honesty to admit that you misspoke when you said not even a single repo without “excessive” use of unwraps exists.
Not every use of unwrap is a bug. For example a regex library returns Result on regex construction because the passed regex could be invalid. But if you construct the regex yourself, from a hard coded string, you know it is correct. Then you just use unwrap and it is ok.
The assertion remains true though. Unwrap should only be used if you are prototyping or you are 100% sure it will never actually panic.
It's just like `IndexOutOfBoundsException` in Java. Many functions can theoretically throw it, but most libraries and programs do not catch it, because usually if it is thrown it means that something happened that the programmer did not expect, and therefore the program should crash.
The problem would not be that it is commonly used, the problem would be that it is abused. And I don't see that happening currently.
The assertion that "If a library panics on user data, that's a pretty serious bug" remains true.
If a library is panicking on invalid user data, it is because they are abusing panic, which is a serious bug. Or they just didn't realize that their code could panic, which is also a serious bug.
It panics just like my `my_vector[2]` example does. What did you think `my_vector[2]` did? Libraries use `my_vector[2]` too. I don't get why we're changing topics from one commonly used panicking operation to another.
Unwrap is supposed to be used when the developer knows the error can't happen.
Or, when that error does happen, there is no recovery anyway, and the best thing to do is abort the program.
To be a bit fair, checked exceptions in Java also have their 'bypass' system, since `Error`s are not checked. So you can't be sure whether someone will decide to throw an Error in the middle of library code. You still have to catch-all. I'm not saying it's better.
I haven't seen a way to do exceptions better than fully-checked exceptions, but you have to be ready to have buffer/integer over/underflow exceptions everywhere, or have a fine prover for the absence of runtime errors to 'allow' you not to have them in your signature.
Otherwise having discriminated records (or option types if you prefer) for return and error-handling seems more down to earth, if a bit painful to write.
Frankly, I love Java's checked and un-checked exceptions differentiation even if the standard library is confused about it.
Make logical exceptions (depending on purpose of interface) into checked-exceptions. Make system exceptions into un-checked exceptions. Document in javadoc with `@throws`
A higher level module can wrap and re-throw into the appropriate exception if needed.
Error handling can be done in the desired place instead of scattered across the code.
Yeah, I also believe that Java is the closest to the best error handling I am aware of. Unfortunately though, it is inheritance based which is a bummer here. It would be perfect with sum types though.
> Everyone says it's not like exceptions, but in fact it is much worse. Panic is stringly typed and you can catch_unwind it
I'm not sure which argument you are trying to make but panics are not stringly typed unless you panic with a string. You can use panic_any(MyPayload) and then it panics with that instead.
And then you encounter some ancient apps without GNU readline support (oracle sqlplus and mysql, I'm looking at both of you). No history support, cursor keys emit characters on the line, literal hell when trying to quickly fix remote database...
> ARM's less open platform also comes with some advantages though. It's easier for ARM to prevent ecosystem fragmentation and non-standard instruction set extensions.
It's kind of a funny way of looking at the core part of the ARM ecosystem while forgetting how much outside the CPU is non-standard and undefined. None of the ARM devices share a bootloader, device enumeration, or the plethora of things needed for an open, non-fragmented OS/software ecosystem like the PC has.
Maybe you can run parts of the same ARM machine code on most devices, but it's not terribly portable, to be honest; it has to be very generic. For example, Android devices end up in a pile of trash because you can't just upgrade the kernel to the latest version on a 1-, 5-, or 10-year-old smartphone without losing functionality, or you get stuck at step 1 for lack of tools, facing broken forum links and shady fileshares. So much for software flexibility...
My idea is that a CPU is just a component and it's useless by itself without considering the rest of the computer system. ARM needs to dip more into standardising the rest of the picture, and the RISC-V guys could also start looking into creating an open computer architecture initiative/group to prevent further fragmentation.
I do worry though that the RISC-V ecosystem could be really torn in two by a big player (Intel?) who adds proprietary extensions and associated software.
I certainly don't want each installed desktop app to have a copy of base gnome/kde runtime and everything down to libc. And the implication is even the graphics would be duplicated, for example the Adwaita icon pack is huge. So if I have a full set of gnome applications (say 50) would I have 50 copies of Adwaita icon set? Suddenly disk space isn't cheap. Shared libs are good and we could do better than flatpaks and containers and static linking.
And shared libs being a PITA isn't just down to their nature; it's the lack of tooling from supposedly modern languages, lack of guidance, lack of care for versioning and API stability, and no distro-agnostic convention. Each of these problems can be solved by not sweeping them under the rug.
I don't know what your point is. He literally says:
>Yes, it can save on disk use, but unless it's some very core library used by a lot of things (ie particularly things like GUI libraries like gnome or Qt or similar), the disk savings are often not all that big - and disk is cheap
He's literally making the point you're arguing. He says, core libraries should be shared.
Tell me how many of the system libraries are written in C or C++, and how many are being written in newer languages.
C is the default choice because of its ABI, and the tooling around it is made for using shared libraries.
What can you say about modern languages? Each language is designed to work in a silo, with not much cooperation for, let's say, system plumbing. Their own package managers make it easy to pull in code, but only produce code for that one language. You can't just create a package that works as a shared library for other programs without mucking around with C FFI. They make it hard by default, which creates resistance in developers to making a piece of code usable by anything other than their own language. This trend is pretty alarming, especially when hidden and manipulative language fanboyism is showing its ugly head everywhere.
You should explain what's wrong with the argument instead of being a passive agressive asshole. It was a continuation of why it happens nowadays that people swing to static linking and then posts like this get shamelessly upvoted.
OK, if being an active-aggressive asshole is better (which, judging from your comment, certainly seems to be your opinion): You made a stupid comment. It was pointed out to you that your "big counter-argument to what Linus wrote" was actually exactly what he had written. Instead of graciously acknowledging, or admitting by even so much as a hint, that you were wrong (which was so obvious that one would have to be a total blithering idiot not to get it), you went off gibbering about some other tangent. That makes you the primary passive-aggressive asshole here. Now you've graduated to active-aggressive assholery, which makes you just simply an asshole.
Nix(OS) solved this problem by hashing all packages based on their inputs (including other package hashes) all the way down in a merkle tree. You would have one copy of the icon pack, for example. But if any common libraries are built with different inputs for a particular program it will be duplicated instead of shared. Nix can then go through your store and hard-link any duplicate files between similar packages to save some more space.
> So if I have a full set of gnome applications (say 50) would I have 50 copies of Adwaita icon set?
No. Icons are easier to load with a regular open/read than with dlopen. But in any case they go into separate files; they are not in the binary.
Dynamic loading might be used to load data into the process, but it would be a very strange way to do it.
> And shared libs being a PITA isn't just down to their nature; it's the lack of tooling from supposedly modern languages, ...
It is more complex than this. Dynamic linkage is limited in its ways. It can't do type parametrization, for example. All it can do is fill gaps in the code with function addresses. But that is not enough, not nearly enough. For instance, you wouldn't want to dynamically link a C++ vector, because it is meant to be inlined and heavily optimized; a dynamic linker cannot inline or optimize.
So you are forced to do a lot of inlining at the stage of statically linking the application binary, but then you get the problem of binary incompatibility between an app and a lib when the lib is rebuilt with different optimizations.
So I'd say, that modern linux distributions should adapt to modern languages (like c++, lol), not vice versa.
> Each of these problems can be solved by not sweeping them under the rug.
For what end? What we possibly might gain, from solving these problems? Dynamic linkage is a runtime cost. Why should we prefer runtime costs to compile-time ones?
Well, surely individual applications only use a few icons, not the entire set, so they can statically link in only the resources they actually depend on, right?
Who knew using already existing data race analysis tools is much more feasible than shoehorning a huge project into a brand new language. Way to go, Mozilla!
Hey, could you please not take HN threads into flamewar, including programming language flamewar? It's tedious, and one reason this site got started was to have a forum that avoids that kind of thing.
Thoughtful critique is welcome, but that would require (a) adding solid information, and (b) taking out the swipes.
> now suddenly going full Rust is the most important thing
As said above, Mozilla does not seem to have ever suggested, in the past or now, that "going full Rust" is a goal, let alone "the most important thing."
Webrender and Stylo were big projects, but they’re already in tree, and that means about 10% of Firefox is Rust. That’s hardly “full Rust,” and even less so “at all cost.” You would have to show intent to actually re-write all of Firefox, you’d have to somehow deal with the fact that the number of engineers writing more C++ and continuing to do so is far greater than those writing Rust... there’s a ton of evidence that “full Rust at all cost” is not intended, desired, or being worked on.
It sounds like you’re objecting to any Rust being included at all, if I’m being honest.
EDIT: Like, for example, this article is about committing engineering time to help improve their C++. If the goal was 100% Rust at all costs, that would be a waste. Even this article is an indication that this thesis is incorrect.
Firefox has something like 8 million lines of C++. Stylo replaced about 150,000 lines of that, and WebRender is more of a fast path than a true replacement of anything.
Both of those rewrites came with massive speedups due to the amount of parallelism they were able to use, which was impractical in C++. Other parts of Firefox are not as critical or parallelizable, hence why they aren't being replaced any time soon.
You seem to be quite aggressively misinformed, if you are not deliberately trolling.
> the amount of parallelism they were able to use, which was impractical in C++
You mean after not using OpenMP, not following basic multithreading core guidelines, and not doing the refactoring with a sanity check in TLA+ — tools that end up on the front page every time people rediscover them.
Do you believe that "just applying TLA+ to the rendering code in Firefox" is a useful thing to recommend to people? TLA+ cannot scale to that size of a codebase without such dramatic simplifications that it's not going to catch any of these kinds of bugs. This is the entire reason why proving safe encapsulation of `unsafe` blocks is possible was so important, it lets you focus the proof engineering effort on a much tinier part of the program. Sure, it can't prove the kind of global invariants TLA+ aims for, but you can also actually apply it to a very large codebase like Firefox (and keep it in sync--another problem you need to deal with when using tools like TLA+).
Out of curiosity, have you used OpenMP or TLA+? I seem to remember they solve problems that are really, really different from the use cases for which Rust is designed.
Also, I'll throw out there that TLA+ is nearly as handy for Rust as it is for C++. Safe Rust will keep you from the data races that break its safety guarantees, but there's more to the correct use of concurrent algorithms than that. TLA+ is a great tool for modelling those other concerns if you need to apply a little more engineering rigor to some of those problems.