Which is why the Soviet government used “ultraviolet baths” to make sure that children up North got their vitamin D.

https://www.nationalgeographic.com/photo-of-the-day/photo/ul...


There's always ReactOS[1], a project for a bug-for-bug compatible Windows clone. It mostly aimed at Windows 9x compatibility the last time I checked, but that could change. And if anyone wants to create a Win7 clone, at least some of the groundwork has already been laid.

[1]: https://reactos.org/


Sorry, but ReactOS is not seriously usable. Not to insult the work done on it, but it is an experimental OS.


"Compatibility with Windows programs" is a massive undertaking in the first place, as evidenced by the huge amount of development effort that has gone into Wine without quite reaching 100% bug-for-bug compatibility. (The level of compatibility they've achieved is truly impressive but it's really difficult to get to 100% for a large existing base of arbitrary applications.)

Reliable real-world compatibility requires not only implementing Windows APIs as documented (or reverse-engineered) but also discovering and conforming to quirks, undocumented features, and permissive interpretations of the specs or even outright bugs in Windows that some applications have either intentionally or unintentionally ended up relying on over the years.

I don't know if modern apps tend to be better engineered to actually follow the spec and to build only on features as documented, but older Windows games, for example, were sometimes notorious for being quite finicky.

And of course if the goal is a full-scale independent OS rather than a compatibility layer on top of an existing one, there's the whole "operating system" part to implement as well.


Thank you, rsc, for all your work. Development in Go has become much more enjoyable over these 12 years: race detector, standardized error wrapping, modules, generics, toolchain updates, and so on. And while there are still things to be desired (sum types, better enum/range types, immutability, and non-nilness are on my personal wishlist), Go is still the most enjoyable ecosystem I've ever developed in.


Nomination for RSC's greatest technical contribution: module versioning. Absolutely fundamental to the language ecosystem.

https://research.swtch.com/vgo-intro



The interesting thing is that this went pretty much against the community at the time.

At the time, the community seemed to have settled on dep - a different, more npm-like way of locking dependencies.

rsc said "nope this doesn't work" and made his own, better version. And there was some wailing and gnashing of teeth, but also a lot of rejoicing.

All of that makes me a bit sad that rsc is leaving.

On the other hand, I don't really like the recent iterator changes, so maybe it's all good.

Btw, if you're reading this, rsc: thanks a lot for everything. Go really changed my life (for the better).


Iterators definitely have one of the strangest syntaxes I've seen, but if you promise not to break the language, you'd better not introduce new syntax without a major reason (like generics, though even those introduced next to no new syntax, even re-using interfaces o_O).


Iterators don’t really introduce any new syntax, strange or otherwise.


That's what he said


In part of the comment yes, kind of, but the comment begins by saying "Iterators definitely have one of the strangest syntaxes I've seen". As there is no syntax specific to iterators in Go, I find this a bit hard to understand.


Well, yes, that's the thing: you don't get any special syntax for generators (like a "yield" keyword), which makes them look quite weird compared to other languages that have native support for them. You need very clunky and verbose syntax (at least I view it as such), which consists of defining an extra nested closure and using a function pointer that was passed to you. A new keyword would allow for much nicer-looking generator functions, but it would break all existing tooling that doesn't yet support that keyword (and potentially break existing programs that use yield as a variable name or something like that).
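For concreteness, here is roughly what a trivial generator looks like under this design (a minimal sketch in the Go 1.23 style; countTo is a made-up name):

    package main

    import "fmt"

    // countTo is an iterator over 1..n. The yield parameter is the
    // "function pointer passed to you": the consumer's loop body is
    // handed in as yield, and when yield returns false, the consumer
    // has broken out of the loop.
    func countTo(n int) func(yield func(int) bool) {
        return func(yield func(int) bool) {
            for i := 1; i <= n; i++ {
                if !yield(i) {
                    return
                }
            }
        }
    }

    func main() {
        for v := range countTo(3) { // range-over-func, Go 1.23+
            fmt.Println(v) // 1, 2, 3
        }
    }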


Seems like a total non-issue to me. It’s conceptually very easy to grasp and follows the normal Go syntax for defining functions. And what percentage of your code base is really going to consist of custom iterators? A syntactic shortcut might save you two lines of code for every 5000 you write. A lot of Go programmers will probably never write a custom iterator from scratch. The important thing is to make the custom iterators easy to use, which I think has been achieved. I’m sure they’ll consider adding some syntax sugar in time, but it would be a trivial convenience.

The benefit of Go's generator implementation is a huge simplification of the language semantics compared to other approaches. The generator function has no special semantics at all, and when used with 'range', all that occurs is a very simple conversion to an explicit loop repeatedly calling the function. Other popular approaches require either special runtime support for coroutines of some form, or a much more elaborate translation step within the compiler.
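As a sketch of that conversion (simplified; assuming seq has type func(func(int) bool), and ignoring break, return, and panic handling, which the real rewrite also covers):

    // What you write:
    for v := range seq {
        fmt.Println(v)
    }

    // Roughly what the compiler emits:
    seq(func(v int) bool {
        fmt.Println(v)
        return true // true means "continue iterating"
    })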


s/syntax/API/

It's not that hard to understand what OP means.


It's just the visitor pattern, taught in software engineering 101. A function that takes a callback function that gets called for each visited value. Nothing strange about it. Many standard library functions such as sync.Map.Range or filepath.Walk have always used it. The new thing is that it now gets easier to use on the caller side.
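A sketch of that caller-side difference, using sync.Map.Range (whose method value happens to already match the new iterator shape):

    var m sync.Map
    m.Store("a", 1)

    // The visitor style that has always worked:
    m.Range(func(k, v any) bool {
        fmt.Println(k, v)
        return true // keep visiting
    })

    // Go 1.23+: the same method, used directly with range:
    for k, v := range m.Range {
        fmt.Println(k, v)
    }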


The sync.Map.Range, filepath.Walk, and other similar visitor-pattern functions in standard Go packages will remain there forever because of backwards compatibility. This means that new functions must be added to standard Go packages so that they can be used with iterators starting from Go 1.23. This complicates Go without reasonable benefit:

- You need to maintain code with multiple ways to iterate over various collections in standard Go packages.

- You need to spend time deciding which approach to iteration to use when you write new code.


Plenty of people in the community considered dep way too messy to be the real solution.


One I enjoyed a lot (a lot) was this one https://research.swtch.com/pcdata

Hope he gives us more in the future

thanks rsc


Fundamentally broken model of versioning, but I guess nobody really cares.


Can you elaborate on what problems you have with the MVS algorithm?


Not really a fan at all. Dep and its predecessors followed KISS principles, were easy to reason about, and had great support for vendoring.

I've wasted so much time dealing with "module hell" in Go, something I never dealt with in the prior years of Go usage. I think it has some major flaws for external (outside Google) usage.


> non-nilness

Ah, I still remember this thread:

https://groups.google.com/g/golang-nuts/c/rvGTZSFU8sY/m/R7El...


Wow, that's painful to read.

Separating the concepts of pointers and nullable types is one of the things that, had Go had it from the beginning, would have made it a much better language. Generics and sum types are a couple of others.


False things programmers believe:

All reference types should be able to take a null value.

It's impossible to write complex and performant programs without null.

It's impossible to write complex and performant programs without pointers.

References always hold a memory address in a linear address space. (Not even true in C!)

Every type is comparable.

Every type is printable.

Every type should derive from the same common type.

All primitive types should support all kinds of arithmetic the language has operators for.

The only way to extend an existing type is to inherit from it.

What else?


Every type must have some sort of a default value. (A generalization of the first item, really.)


It's going to be much faster to enumerate the true things programmers believe.


> It's impossible to write complex and performant programs without pointers.

Well, I'd rather not copy a multi-hundred-megabyte (or gigabyte) 3D object around to be able to poke its parts at will.

I'd also rather not copy its parts millions of times a second.

While not having pointers doesn't make it impossible, it makes writing certain kinds of programs hard and cumbersome.

Even programming languages which nominally do not have pointers (cough Java cough) carry pointers transparently to prevent copying and performance hits.


Well, looks like the GP missed a very common false fact:

The operations written in a program must literally represent the operations the computer will execute.

This one stops being true as early as x86 assembly, never mind high-level languages.


Exactly. A MOV is reduced to a register rename. An intelligent compiler can rewrite multiply/divide by 2 as shifts if it makes sense, etc.

"Assembly is not a low level language" is my favorite take, and with microcode and all the magic inside the CPU, it becomes higher level at every iteration.


True. :)


Without pointers in some form or another, you can’t refer to allocated memory. You can change the name of pointers but they are still pointers.


It is possible to write complex and performant programs without allocating memory.

And in some languages, where you only operate on values, and never worry about where something is stored, allocation is just an implementation detail.


> It is possible to write complex and performant programs without allocating memory.

I assume you mean by only allocating on the stack? Those are still allocations. It's just someone else doing it for you.

> And in some languages, where you only operate on values, and never worry about where something is stored, allocation is just an implementation detail.

Again, that's someone else deciding what to allocate where and how to handle the pointers, etc. Don't get me wrong, I very much appreciate FP as long as I'm doing information processing, but a lot of programming doesn't deal in abstract values but in actual memory, for example compilers for functional programming languages.


That's... not true?

Example in C:

    void fun(void) {
        int a[16];

        /* Note: sizeof(a) alone would be the size in bytes and
           overrun the array, hence the division by sizeof(a[0]). */
        for (size_t i = 0; i < sizeof(a) / sizeof(a[0]); i++) {
            a[i] = 1;
        }
    }

I have allocated and referred to memory without pointers here.


"In some form or another" is the key to their point.

Here's something to try at home .. exactly your code save for this change:

    i[a] = 1;
... guess what, still compiles, still works !!

WTF??? you ask. Well, you see, X[Y] is just syntactic sugar for *(X+Y) - it's a pointer operation disguised to look like a rose (but it smells just the same).


> Every type is printable.

It’s 2024, every type is jsonable!


Never met a programmer who thought these things were true.


A few of those myths are stated as fact in the aforementioned thread.


> It's impossible to write complex and performant programs without null.

Well, clearly there is a need for a special value that is not part of the set of legal values. Things like std::optional etc. are of course less performant.

If I can dream, all of this would be solved by 72-bit CPUs, which would be the same as 64-bit CPUs, but the upper 8 bits can be used for garbage collection tags, sentinel values, option types etc.


> Well, clearly there is a need for a special value that is not part of the set of legal values.

There's a neat trick available here: If you make zero an illegal value for the pointer itself, you can use zero as your "special value" for the std::optional wrapper, and the performance overhead goes away.

This is exactly what Rust does, and as a result, Option<&T>, Option<Box<T>>, etc are guaranteed to have zero overhead: https://doc.rust-lang.org/std/option/index.html#representati...


> If I can dream, all of this would be solved by 72-bit CPUs, which would be the same as 64-bit CPUs, but the upper 8 bits can be used for garbage collection tags, sentinel values, option types etc.

https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/cheri...

Address space is 64bit, pointers are 128bit, and encode the region the pointer is allowed to dereference. And there's a secret 129th bit that doesn't live in the address space that gets flipped if the pointer is overwritten (unless it's an explicit instruction for changing a pointer)


> Wow, that's painful to read.

And the dismissive tone of some people, including Ian. But to be fair, before Rust there was definitely this widespread myth in the dev hivemind that nullable pointers are just the cost of performance and low-level control. What's fascinating is how easy and hindsight-obvious it was to rid code of them. I've never had to use raw pointers in Rust, and I've worked on quite advanced stuff.


Nullable pointers are fine for those who need them. What we're asking for is non-nullable pointers.


> "Go doesn't have nullable types in general. We haven't seen a real desire for them"

Ouch, who were they asking? There are so many problems from even the most simple CRUD apps where "lack of a value" must be modelled, but where the zero-value is a valid value and therefore an unsuitable substitute. This is probably my single biggest pain point with Go.

Using pointers to model nullability, or other "hacks" like using a map where keys may not be set, feel completely at odds with Go's stated goal of high code clarity and its general disdain for trickery.

I know that with generics it's now trivially easy to implement your own Optional wrappers, but the fact that it's not part of the language or even the standard library means you're never going to have a universal way of modelling this incredibly basic and common requirement across projects. It also means you're never going to have any compile-time guarantees against accidentally using an invalid value, though that's also the case with the ubiquitous (value, error) pattern and so is evidently not something the language is concerned with.
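For illustration, such a wrapper can be tiny (a sketch; all names here are made up):

    // Optional distinguishes "no value" from a valid zero value.
    type Optional[T any] struct {
        value T
        ok    bool
    }

    func Some[T any](v T) Optional[T] { return Optional[T]{value: v, ok: true} }
    func None[T any]() Optional[T]    { return Optional[T]{} }

    // Get mirrors the comma-ok convention.
    func (o Optional[T]) Get() (T, bool) { return o.value, o.ok }

But every project that rolls its own ends up with an incompatible one, which is exactly the problem.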


Everyone just keeps repeating the same old gripe, without bothering to read the responses.

Go needs a null-like thing because the language forces every type to have a zero value. To remove the concept of zero value from Go would be a major change.


The responses from Ian and the Go fans are not very well thought out.

To begin with, zero values were never a great idea. They sound better than what C does (undefined behavior), but zero values can also hide subtle bugs. The correct approach is to force values to always be initialized on declaration, or to make use-before-initialization an error.

Having said that, it was probably too late to fix zero values by 2009, when Go was released to the public, and this is not what the thread's OP suggested. He referred to Eiffel, which is an old language from the 1990s (at least?) that didn't initially have null-safety (or "void-safety" in Eiffel's case), but released a mechanism to do just that in 2009, shortly after Tony Hoare's talk at QCon London 2009 (no idea if they were influenced by the talk, but they did mention the "Billion Dollar Mistake" in the release notes).

Eiffel added nullability and non-nullability markers to types (called "detachable" and "attached"), but it also uses flow-sensitive typing[1] to prevent null dereferencing (which is the main cause of bugs).

The thread OP didn't ask to eliminate zero values or nullable types, but rather requested to have a non-nullable pointer type, and flow-sensitive typing.

If structs need to be zero-initialized, a non-nullable pointer could be forbidden in structs, or alternatively Go could make explicit initialization mandatory for structs that have non-nullable pointers. At the very least, Go could support non-nullable pointers as local stack values, and use flow-sensitive typing to prevent null dereference.

[1] https://en.wikipedia.org/wiki/Flow-sensitive_typing


If there's a non-nullable type, then there's types without zero values, and that means some basic properties of Go no longer hold. I don't know how many times that can be said differently. Whether something is in a struct or not is not relevant.


What basic properties no longer hold?


Uninitialized variables are zero. Composite literals may omit fields, and they'll be zero. Map accesses for nonexistent keys return zero values. Channel receives from closed channels return zero values. make returns zero-valued slices. Comma-ok style type assertions return zero values. Slices are fat pointers where the zero value avoids an allocation for data.
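A few of those behaviors in code form:

    var s []int           // uninitialized: s == nil, no allocation
    m := map[string]int{}
    n := m["missing"]     // nonexistent key: 0
    ch := make(chan int)
    close(ch)
    v, ok := <-ch         // receive from closed channel: 0, false
    var x any = "hello"
    f, ok2 := x.(float64) // comma-ok type assertion: 0, false
    fmt.Println(s, n, v, ok, f, ok2)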


That would still hold. Those things just wouldn’t be typed as non-nullable.


Now you're creating a flavor of types that cannot be used in many places. Or worse, a flavor of types that when added to a struct breaks existing uses. That'd be a major change.


> Now you're creating a flavor of types that cannot be used in many places.

Yeah, that’s the entire point. Type safety.


Sigh. Yes, everyone knows what the goal is. But that goal is in conflict with the design decisions made in Go, and there's no clear path forward. You're not responding at all to the actual implications of types without zero values. There's no such thing as a "type that cannot be sent over a channel" or "a type that cannot be a map value" or "a type that when added anywhere in a composite type prevents omitting fields in composite literals" or "a type that cannot be comma-ok type asserted to". And it's highly unlikely such concepts will be added to the spec.

You'd have to start by constructing replacements for all those mechanisms, then migrate all Go source code in the world over to the new APIs, just to enable Go to have types without zero values.


Wow, that discussion is infuriating. I'm shocked that many people on there don't seem to understand the difference between compile time checks and runtime checks, or the very basics of type systems.


I think people do understand the basics of static type systems, but disagree about which types are essential in a "system language" (whatever that is).

An integer range is a very basic type, too, conceptually, but many languages don't support them in the type system. You get an unsigned int type if you're lucky.


> An integer range is a very basic type, too

Not really, its semantics get hairy almost instantly. E.g., does incrementing it produce a new range?


The semantics are always complex. The same type of question arises for all basic types. For example, what does adding a string to an integer produce?

Or do you give up on answering that and simply prevent adding strings and integers? When one wants to add them they can first manually apply an appropriate type conversion.

That is certainly a valid way to address your question – i.e. don't allow incrementing said type. Force converting it to a type that supports incrementing, and then from that the developer can, if they so choose, convert it back to an appropriate range type, including the original range type if suitable.

Of course, different languages will have different opinions about what is the "right" answer to these questions.


I think you're confusing the type and value level.

The original statement was about a range type, that is something like an integer that is statically constrained to a range of, say, 1..4 (1, 2, 3, 4).

To work with this as a type you need to have type level operations, such as adding two ranges (which can yield a disjoint range!), adding elements to the range, and so on, which produce new types. These all have to work on types, not on values. If 1..4 + 5..8 = 1..8 this has to happen at the type level, or, in other words, at compile-time.

Range types are very complicated types, compared to the types most people deal with.

Converting a string to an int is very simple to type (String => Int if you ignore errors) and adding integers is also simple to type ((Int, Int) => Int)


A range type could be very simple if it were just used for storage - you couldn’t do anything with it other than passing it around and converting it to something else, and there would be a runtime check when creating it.

But such a thing would be useful mostly for fields in data structures, and the runtime checks would add overhead. (Though, perhaps it would replace an array bounds check somewhere else?)


OP is just saying that you don't have to permit operations such as addition or incrementation on range types, in which case you don't need the corresponding type-level operations.


> That is certainly a valid way to address your question – i.e. don't allow incrementing said type. Force converting it to a type that supports incrementing, and then from that the developer can, if they so choose, convert it back to an appropriate range type, including the original range type if suitable.

The quoted part above is an argument for dependent types. The conversion back to a range type creates a type that depends on a value, which is the essence of dependent typing.


No, I think the idea is that you'd get a runtime exception if the value was outside the range. No need for dependent types. It is no different conceptually from casting, say, a 64-bit integer to a 32-bit integer. If the value is outside the range for a 32-bit integer, then (depending on the language semantics) you either raise a runtime error, or the result is some kind of nonsense value. You do not need to introduce dependent types into your language to enable such casts (as long as you're willing to enforce the relevant checks only at runtime, or forego such checks altogether).


I think the original comment is imprecise. E.g. "don't allow incrementing said type" can be read as either "don't allow incrementing values of said type" or literally as don't allow incrementing the type. I can see both your and my interpretation, depending on how one chooses to read the comment.


I regularly find quite smart people assume Rust's references must be fat pointers to handle lifetimes, and check them all at runtime.


> An integer range is a very basic type, too, conceptually

Just signed vs unsigned makes this a complex topic.


Many people on here as well! :-) Reading the comments on this post is like stepping into an alternative universe compared to the PL crowd I usually interact with. Very conservative. It's quite interesting.


It's like they don't speak the same languages.


A well-written list of what has made Go a better language over the last years. I'd add iterators, the recent big thing from Russ.


Wow. I haven't followed Go for a while, thanks for that note.

Iterators are a very nice addition, even with the typical Go fashion of quite ugly syntax.


Just last week I implemented an iterator for my C++ type, and lol at your comment. It was a fucking nightmare compared to how you (will) implement an iterator in Go.

I didn't study the reason why Go chose this way over others. I do know they've considered other ways of doing it and concluded this one is best, based on complex criteria.

People who make value judgements like this typically ignore those complex considerations, of which playing well with all the past Go design decisions is the most important.

Frankly, you didn't even bother to say which language does it better or provide a concrete example of the supposedly non-ugly alternative.


C# and Python are the most mainstream examples of syntax that doesn't look alien.


Go's iterators don't have any syntax that differs from previous Go versions.


Iterators and generics go against the original goals of Go: simplicity and productivity. They complicated the Go language specification too much without giving back significant benefits. Iterators and generics also encourage writing unnecessarily complicated code, which makes Go less pleasant to work with. I tried explaining this at https://itnext.io/go-evolves-in-the-wrong-direction-7dfda8a1...


This argument is brought up again and again, but it is just wrong.

Go had both generics and iterators from the get-go. Just not user-defined ones.

Thus it is obvious that the creators of the language always saw the need for them in a simple and productive language.


Not obvious to me. We just implemented a streaming solution using the iterator interfaces. They are just functions, so the code is easy to read and understand. Adding special language support would only serve to obfuscate the actual code.


Go has provided generic types since v1.0: maps, slices, and channels. Go also provides generic functions and operators for working with these types: append, copy, clear, delete. This allows writing clear and efficient code.

There is close to zero practical need for user-defined generic types and generic functions. Go 1.18 opened a Pandora's box of unnecessary complexity in the Go specification and the Go type system because of generics. Users started writing overcomplicated generic code instead of writing simple code solving the given concrete task.


tell me, how often do you find yourself writing `interface{}`?


Very rarely


I do agree with his point that the implicit mutation of the loop body for an iterator will be difficult to debug.


I'm not sure I can agree about generics. In many cases Go code is already fast enough, so other things come into play, especially type safety. Prior to generics I often had to write quite complicated (and buggy) reflection code to do something I wanted (e.g. allow passing functions that take a struct and return a struct+err to the web URL handlers, which would then get auto-JSONed). Generics allow writing similar code much more easily and safely.

Generics also allow writing some data structures that would be useful, e.g., to speed up AST parsing: writing a custom allocator that allocates a large chunk of structs of a certain type previously required copying this code for each type of AST node, which is a nightmare.
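A minimal sketch of such a chunked allocator with generics (illustrative only; no synchronization, no freeing):

    // Arena hands out *T values backed by large chunks, replacing
    // one heap allocation per node with one per chunkSize nodes.
    type Arena[T any] struct {
        chunk []T
    }

    const chunkSize = 1024

    func (a *Arena[T]) New() *T {
        if len(a.chunk) == cap(a.chunk) {
            // Start a new chunk; old chunks stay alive for as long
            // as pointers returned from them are reachable.
            a.chunk = make([]T, 0, chunkSize)
        }
        a.chunk = a.chunk[:len(a.chunk)+1]
        return &a.chunk[len(a.chunk)-1]
    }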


> Prior to generics I often had to write some quite complicated (and buggy) reflection code to do something I wanted (e.g. allow to pass functions that take a struct and return a struct+err to the web URL handlers, which would then get auto-JSONed).

This sounds like a good application for Go interfaces (non-empty interfaces). The majority of generic Go code I've seen could be simplified by using non-empty interfaces, without the need for generics.


Totally agree. As far as I know, Rob Pike no longer develops Go; that's why. Corporate people came instead, who think that asking the community (who said it should be asked?) which new cocojamba feature to add (like JS) is doing their work.

But what can we do about this? This is what I think: stick with Go 1.16; fork Go 1.16 and continue developing the language from there; learn OCaml...; or give up, consume whatever these people decide to add to the language next, and feel disgust every time.


I'm sure if Go had nullable types and/or sum types from the beginning, it'd have been much more popular.


It's already quite popular. I'm less convinced there's a large pile of people wishing for a fairly high performance garbage collected language that are not using Go because of this. There just aren't many viable alternatives.


Java and C# being the obvious (and more performant) alternatives. And compared to them, Go already wins by not being, well, "enterprisey". And by that I mean not so much the languages themselves as the whole ecosystem around them.


Go is already "enterprisey" enough, thanks to the Kubernetes ecosystem.


For the 3 people who actually need Kubernetes.


There are definitely lots; I'm one of them. I use Scala, which is very powerful and imho a much nicer language than golang. But the tooling and other support is slow and subpar. And I just can't go back to a brain-dead language(!) like golang, because it hurts me to program in such languages. So I hope that either golang catches up with Scala's features, or Scala catches up with golang's tooling.

And I think there are many people like me.


Scala is an overcomplicated, esoteric programming language. It is great for obfuscation contests. It is awful for production code, since Scala code lacks maintainability, readability, and simplicity.


I guess we have different opinions. Maybe you had some bad experiences in the past? Scala 3 is very different from the Scala 2 of many years ago.

There are few languages that are safer and easier to maintain, imho. The type safety is superb.


I'm sure of the opposite given the ideas behind Go's design.


Perhaps, but other languages that look a lot like Go with these additions (e.g. OCaml) have not gained much popularity, despite getting much more love on forums like HN. It's important to remember that the people expressing strong opinions about sum types on the internet are a tiny and non-representative fraction of working programmers.


OCaml has a huge number of challenges besides "popular language plus sum types"


Go has nullable types! We want non-nullable types!


I blame C# for the confusion. Think of it this way: the ability to explicitly express a type Foo|null implies the existence of a non-nullable Foo as well. IOW it’s shorthand for “nullable and non-nullable types”.


Don't forget the semantic change to traditional 3-clause "for" loops: https://go101.org/blog/2024-03-01-for-loop-semantic-changes-...

Because of this change, Go 1.22 is actually the first Go version that seriously breaks Go 1 compatibility, even if Go officially doesn't admit the fact.
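An illustrative sketch of the changed behavior (not the linked article's exact example):

    var wg sync.WaitGroup
    for i, v := range []int{10, 20} {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // Go 1.22+: i and v are fresh variables each iteration,
            // so this prints 0 10 and 1 20 (in some order). Before
            // 1.22, the goroutines shared one i and one v and could
            // both print the final values 1 20.
            fmt.Println(i, v)
        }()
    }
    wg.Wait()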


> since Go 1.22, every freshly-declared loop variable used in a for loop will be instantiated as a distinctive instance at the start of each iteration. In other words, it is per-iteration scoped now. So the values of the i and v loop variables used in the two new created goroutines are 1 2 and 3 4, respectively. (1+2) + (3+4) gives 10.

I think you are assuming more guarantees than are actually guaranteed.

You have a well-documented history of making incorrect claims about Go compiler and runtime behaviors, so this isn't surprising.

> since Go 1.22, you should try to specify a Go language version for every Go source file

What on Earth?? Absolutely 100% not.


:D

It looks like you don't understand the change at all.

The statement "since Go 1.22, you should try to specify a Go language version for every Go source file" is made officially, not by me.

> You have a well-documented history of making incorrect claims about Go compiler and runtime behaviors, so this isn't surprising.

The claim is totally baseless.

All my opinions and articles are based on facts. If you have found any that are incorrect or not based on facts, please let me know: https://x.com/zigo_101.


> The statement "since Go 1.22, you should try to specify a Go language version for every Go source file" is made officially, not by me.

Please provide a link to documentation on golang.org. Note: not a comment in a GitHub issue, not a blog article -- official stuff only.

> baseless

It should be evident from the consistent responses to your GitHub issues that nobody takes you seriously. Which is unsurprising, when you make recommendations like

> Anyway, since Go 1.22, you should try to specify a Go language version for every Go source file, in any of the above introduced ways, to avoid compiler version dependent behaviors. This is the minimum standard to be a professional Go programmer in the Go 1.22+ era.


I think it is best to let rsc answer your questions. :D


:D :D I don't have any questions :D except to you :D


Are there cases where people actually rely on the previous behavior?

I always assumed that it was considered faulty to do so.


There were certainly buggy tests that relied on the old behavior. We didn't find any actual code that relied _correctly_ on the old behavior.

https://go.dev/wiki/LoopvarExperiment


Love it, even though it must have been incredibly confusing when old tests failed at first. The assumption being that the tests were correct. They _passed_ all those months or years!

I'm just also watching your YT video on testing and enjoying it very much!


The tests are in the simplest forms; there are more complex use cases of traditional "for" loops. The complex cases were never explored by the authors of the change.

And there is a large quantity of private Go code in the world.


There is no convincing evidence that such cases don't exist. In my honest opinion, if such cases are possible in theory, there will be ones in practice. It is a bad expectation to hope that such cases never happen in practice.

The authors of the change did try to prove that such cases don't happen in practice, but their proving process is totally broken.

It is my prediction that multiple instances of broken cases will be uncovered in the coming years, in addition to the new foot-gun issues created by the altered semantics of traditional 'for' loops.


I wish they had opted for ARC instead of a GC, to have a more deterministic lifecycle for objects in memory.

Other than that, I agree with your comment.


Source on the reasons, from the editor: https://pony.social/@thephd/112791335889843647.


Re. bad tooling. grpcurl[1] is irreplaceable when working with gRPC APIs. It allows you to make requests even if you don't have the .proto around.

[1]: https://github.com/fullstorydev/grpcurl


> make requests even if you don't have the .proto around

Like this?

    > grpcurl -d '{"id": 1234, "tags": ["foo","bar"]}' \
        grpc.server.com:443 my.custom.server.Service/Method

How is that even possible? How could grpcurl know how to translate your request to binary?


If I recall correctly, ProtoBuf has a reflection layer, and it's probably using that.


I could be wrong, but it is probably using JSON encoding for the object body, and implementing the gRPC transport instead of plain HTTP. Proto objects support JSON encode/decode by default in all the implementations I've seen.

https://grpc.io/blog/grpc-with-json/


One can use Kreya for a GUI version


I just build a CLI in Java or Go. It literally takes minutes to build a client.


> While the compiled programs stayed the same, we no longer get a warning (even with -Wall), even though both compilers can easily work out statically (e.g. via constant folding) that a division by zero occurs [4].

Are there any reasons why that is so? Do compilers not reuse the information they gather during compilation for diagnostics? Or is it a deliberate decision?


In the second example, the constant is propagated across expression/statement boundaries. It is likely that this happened at the IR level, rather than at the AST level.

I'd imagine the generic case becomes a non-trivial problem if you don't want to produce fluke/useless diagnostic messages.

The compiler might already be several optimization passes in at this point, variables long since replaced by chained SSA registers, when it suddenly discovers that an IR instruction produces undefined behavior. The instruction might itself end up being eliminated in a subsequent pass, or might entirely depend on a condition you can't statically determine. In the general case, by the point you know for sure, there might not be enough information left to reasonably map this back to a specific point in the input code, or to produce useful output on why the problem happens there.


Correct. And to add to this answer slightly: there might not be enough information because to keep all the context around, the compiler might need exponentially more memory (even quadratically more memory might be too much; program sizes can really add up and that can matter) to keep enough state to give a coherent error across phases / passes.

Back in the day when RAM wasn't so cheap that you could find it in the bottom of a Rice Krispies box, I worked with a C++ codebase that required us to find a dedicated compilation machine because a standard developer loadout didn't have enough RAM to hold one of the compilation units in memory. Many of these tools (gcc in particular, given its pedigree) date back to an era where that kind of optimization mattered and choosing between more eloquent error messages or the maximum scope of program you could practically write was a real choice.


There is very strong bias in clang not to emit any diagnostics once you get to the middle-end optimizations, partially because the diagnostics are now based on whims of heuristics, and partially because the diagnostics now become hard to attribute to source code (as the IR often has a loose correlation to the original source code). Indeed, once you start getting inlining and partial specialization, even figuring out if it is worth emitting a diagnostic is painfully difficult.


Take for example the compiler optimizing:

    void foo(bool cond) {
      int a = 0;
      if (cond) a = 10;
      if (cond) printf("%d\n", 10 / a);
    }
into:

    void foo(bool cond) {
      int a = 0;
      if (cond) {
        a = 10;
        if (cond) printf("%d\n", 10 / a);
      } else {
        if (cond) printf("%d\n", 10 / a);
      }
    }
and then screaming that (in its generated 'else' block) there's a very clear '10 / 0'. Now, of course, you'd hope that the compiler would also recognize that that's in a never-taken-here 'if' (and in most cases it will), but in various situations it might not be able to (perhaps most notably 'cond' instead being a function call that will give the same result in both places but the compiler doesn't know that).

Now, maybe there are some passes across which false-positives would not be introduced, but that's a rather small subset, and you'd have to reorder the passes such that you run all of them before any "destructive" ones, potentially resulting in having to duplicate passes to restore previous optimization behavior, at which point it's not really "reusing".


I believe clang does not use data gathered from optimization for normal compilation diagnostics to avoid them being dependent on compilation flags.

GCC does, but I guess this is just a case of a missed warning, possibly to suppress false positives.


They do reuse information. But you have no guarantee that the point at which information is used runs after the point at which something is discovered.

They do try to run things so everything's used. They also try to compile quickly. There is a conflict.


Whenever you do:

  v := interfaceType(concreteTypeValue)
in Go, what you're actually doing on a lower level is:

  dataPtr := &concreteTypeValue       // pointer to the value: this is the allocation
  typePtr := typeData[concreteType]() // pseudocode: look up the runtime type descriptor
  v := interfaceData{
      data: dataPtr,
      typ:  typePtr,
  }
The first line here is the allocation, since (at least, the way I recall the rule) in Go pointers never point to values on the stack, so concreteTypeValue must be allocated on the heap. The rule about pointers not pointing to the stack is there to make it easier for goroutine stacks to grow dynamically.

See https://go.dev/doc/faq#stack_or_heap.


>in Go pointers never point to values on the stack

This is only the case for pointers that the compiler can't prove not to escape:

>In the current compilers, if a variable has its address taken, that variable is a candidate for allocation on the heap. However, a basic escape analysis recognizes some cases when such variables will not live past the return from the function and can reside on the stack.


That could be true, but I've done a few optimizations of allocations in Go code, and I don't recall pointers to stack values ever being optimized (unless the entire branch is removed, of course). If anyone could provide an example of the code that does pointer operations yet doesn't cause allocations, I'd appreciate it!



Ah, so extremely localized uses of pointers, got it, thanks.


More or less, yes, but it doesn't have to be extremely local. Here's a variant of the example that introduces a function call boundary:

https://godbolt.org/#g:!((g:!((g:!((h:codeEditor,i:(filename...


Oops, copy paste error there. Here is the example with the function call boundary:

https://godbolt.org/#g:!((g:!((g:!((h:codeEditor,i:(filename...
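Since the links are truncated here, this is presumably the kind of thing meant (my guess at the shape; verifiable with -gcflags=-m):

    // fill never stores p anywhere, so escape analysis treats its
    // argument as non-escaping.
    func fill(p *int) { *p = 42 }

    func f() int {
        x := 0
        fill(&x) // &x crosses a call boundary but does not escape:
                 // x stays on the stack, no heap allocation.
        return x
    }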


To be fair, if that's what you need, ProtoBuf isn't the only option. Cap'n Proto[1], JSON Schema[2], or any other well-supported message-definition language could probably achieve that as well, each with their own positives and negatives.

[1]: https://capnproto.org/

[2]: https://json-schema.org/


Big fan of Cap'n Proto here, but to be fair it doesn't support as many languages as Protobuf/gRPC yet.


I'm currently building a Protocol Buffers alternative that uses JSON Schema instead: https://jsonbinpack.sourcemeta.com/. It was shown in research to be as or more space-efficient than any considered alternative (https://arxiv.org/abs/2211.12799).

However, it is still heavily under development and not ready for production use. Definitely looking for GitHub Sponsors or other types of funding to support it :)


There is a proposal to get this, using the "defer" keyword, into either the next C revision or the one after that[1].

[1]: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3199.htm.


It seems that whenever ISO C invents its own imitation of some GNU feature, in order to foist it onto other compilers, they fuck it up first.

VLAs, inline, variadic macros, ...

More recently, they gaffed by making alignment specification not a type attribute but, get this, a storage-class specifier. Ouch!


That looks a lot like what Zig does.

One problem with the proposed defer there is that it has no way to interact with stack unwinding, which is part of the platform ABIs for handling exceptions. Maybe some alternative syntax, such as "defer_finally" could make the defer block act as both a regular defer block, and also explicitly add that block to the stack unwinding chain.


That would be sublime! Defer is something I've been wanting in C for 20 years. It makes resource management clear, easy to reason about, and concise.


you can implement a crappy version of defer with GNU extensions, I think? https://gist.github.com/cozzyd/a9eb2ddb9c8785ad5c60e3280b0ba...


Privilege separation[1], most likely. If your system needs to do three things, it could either do all of them in a single executable requiring all three permissions (thus also theoretically allowing an attacker to use it to do all three things) or be split into three executables, each having permission to do only one thing (thus reducing the amount of potential damage).

[1]: https://en.wikipedia.org/wiki/Privilege_separation


Yes, the main daemon needs to run as root so it can become any user. Once you're logged in you just run as a regular user.


Yes and no. At least one process needs to run as root to be able to become any (other) user. It doesn't have to be the one accepting incoming connections, or the one handling user authentication and authorisation. OpenBSD already contains several examples of this e.g. OpenBGPd limits the attack surface by putting the BGP session handling (and protocol parsing) in one process running with reduced privs (dedicated user and group, chroot(), pledge()/unveil()). To communicate with the other processes the parent creates unix socket pairs to be inherited. The children also re-exec after fork() so they're re-randomised and can't be abused as oracles for the memory layout of other processes.


The last sentence in the article seems like a clue.

