I was originally going to scold the GP for a shallow dismissal and shitting all over a perfectly good application that runs on the best operating system in development today, Microsoft Windows.
But I also have a soft spot in my heart for Linux and I thought I would instead mollify the GP by pointing out that there really is Linux software for his Software-Defined Radio.
Little did I suspect that the GP wants all of the special features that SkyRoof offers, such as hyperencabulation of quasicosmic quantum wave collapse, a genuine pair of cathode ray boobs, and also don't forget the 1.7 jiggawatt zener diode that is included free with every subscription.
Other than the flourish of adding some Scala to the enterprisey Java, there is absolutely nothing atypical about this bog-standard enterprise application. It’s a JS/TS/Java app; nothing else stands out.
Listing every config language and a few lines of CI scripts or whatever is misleading.
I see nothing other than typical boring enterprise/big gov crap here (which is fine, and expected).
I’m not going to shit on it, nothing wrong with going into the family business - but it isn’t a complete coincidence that her dad is a PhD biochemist at UT Tyler.
Yeah, it's great to have kids excited about science, but at 17 it's extremely unlikely that she taught herself enough chemistry, organic chemistry and biochemistry to come up with this. She needs years more of college-level coursework. Essentially impossible without a biochemist in the family to guide her.
With mentorship from Caltech and access to data from NASA’s NEOWISE mission, Matteo created a machine learning algorithm and compiled a groundbreaking database called VarWISE.
I knew someone who reached the final stage of one of these science fairs. The project was done at a lab in an Ivy League university over a couple of summers; a relative was a senior scientist at the lab and guided them every step of the way.

Not to discount what these kids are doing, but the reality is that these science fairs have largely become a contest about how well your family is connected to science-fair-friendly research facilities and how good your presentation skills are. I mean, do we really think 17-year-olds are out there doing human trials on novel cancer therapies? I’m sure some projects are genuinely conceived and carried out by the students themselves, but looking at a lot of this PhD-level research supposedly done as after-school projects by high-school kids, I can’t help but think the whole thing has become a bit of a farce.
> It is tempting to do the fast thing when you are on a tight schedule.
The alternative option in C or C++ is to risk some undefined behavior.

Before the pile-on: yes, of course you can avoid it, but the rushed option is usually the equivalent of unwrap anyway, and Rust does make it quite a bit harder to invoke undefined behavior.
What is an arbitrary TCP port? A port in isolation from an IP address isn't inherently arbitrary; it's nothing. It's the IP:port pair that's arbitrary. Once you allow connections to any host on the internet, the port doesn't really matter: you can do whatever nefarious shit over port 80. And not allowing apps to connect to external internet servers at all seems pretty limiting.
No, you can't do that either (https://godbolt.org/z/vzdTMazx7):

    error: '__builtin_bit_cast' is not a constant expression because 'char' is a pointer type
Here the `constexpr` keyword means the function *might* be called in a constant-evaluated context. f doesn't need to have all of its statements be evaluable at compile time; only the ones actually executed during constant evaluation do. You need to explicitly initialize a constexpr variable with a call to f to test this.
> The consteval specifier declares a function or function template to be an immediate function, that is, every potentially-evaluated call to the function must (directly or indirectly) produce a compile time constant expression.
It's possible that the compiler just doesn't bother as long as you aren't actually calling the function.
Definitely, and architectures back then were far less standardized. The Xbox 360 was a big-endian PowerPC CPU, the PS2 had a custom RISC-based CPU. On the desktop, this was still the era of PowerPC-based Macs. Far easier (and I would argue safer) to use a standard, portable sscanf-like function with some ascii text, than figure out how to bake your binaries into every memory and CPU layout combination you might care about.
Not at all. Most objects die young and thus are never moved. And the time before an object is moved is very long compared to CPU operations, so moving is only statistically relevant (very good throughput; a rare, longer tail on latency graphs).
Also, generational collectors only need write barriers, and those don't have that big of an overhead.
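For a sense of scale, a generational write barrier is typically one extra plain store per pointer write. A card-marking sketch, with all names and sizes hypothetical:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical card table: one byte per 512-byte heap region. A
// minor GC later scans only the dirty cards for old->young pointers
// instead of walking the whole old generation.
constexpr std::size_t kCardShift = 9;
constexpr std::size_t kNumCards  = 1 << 16;
std::uint8_t card_table[kNumCards];

inline std::size_t card_of(const void* slot) {
    return (reinterpret_cast<std::uintptr_t>(slot) >> kCardShift) % kNumCards;
}

// The barrier itself: do the pointer store, then mark the card
// covering the written-to slot. That single extra store is the
// entire per-write cost.
inline void write_barrier(void** slot, void* new_value) {
    *slot = new_value;
    card_table[card_of(slot)] = 1;
}
```

On each minor collection the GC scans the marked cards and then clears them, so the cost is amortized across collections rather than paid on reads.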
It doesn't matter if objects die young — the other objects on the heap are still moved around periodically, which reduces performance. When you're using a moving GC, you also have additional read barriers that non-moving GCs don't require.
Is that period really that big of a concern when your threads in any language might be context switched away by the OS? It's not a common occurrence on a CPU-timeline at all.
Also, it's no accident that every high-performance GC runtime went the moving, generational way.
That time may seem negligible, since the OS can context switch threads anyway, but it’s still additional time during which your code isn’t doing its actual work.
Generations are used almost exclusively in moving GCs — precisely to reduce the negative performance impact of data relocation. Non-moving GCs are less invasive, which is why they don’t need generations and can be fully concurrent.
I would rather say that generations are a further improvement upon a moving collector, improving space usage and decreasing the length of the "mark" phase.
And which GC is fully concurrent? I don't think that's possible (though I'll preface this by saying I'm no expert; I've only read into the topic at a hobby level). I believe the most concurrent GC out there is ZGC, which uses read barriers and some pointer tricks to make the stop-the-world time independent of heap size.
Java currently has no fully concurrent GC, and due to the volume of garbage it manages and the fact that it moves objects, a truly fully concurrent GC for this language is unlikely to ever exist.
Non-moving GCs, however, can be fully concurrent — as demonstrated by the SGCL project for C++.
In my opinion, the GC for Go is the most likely to become fully concurrent in the future.
In that case, are you doing atomic writes for managed pointers and the read flag on them? I've read a few of your comments on Reddit, and your flags seem to be per memory page? Still, the synchronization on them may or may not have a more serious performance impact than alternative methods, and without a good way to compare it to something like Java, which is the state of the art in GC research, we can't really say much about whether it's a net benefit.
Also, have you perhaps tried modeling your design in something like TLA+?
You can't write concurrent code without atomic operations — you need them to ensure memory consistency, and concurrent GCs for Java also rely on them. However, atomic loads and stores are cheap, especially on x86. What’s expensive are atomic counters and CAS operations — and SGCL uses those only occasionally.
Java’s GCs do use state-of-the-art technology, but it's technology specifically optimized for moving collectors. SGCL is optimized for non-moving GC, and some operations can be implemented in ways that are simply not applicable to Java’s approach.
I’ve never tried modeling SGCL's algorithms in TLA+.
The only thing that seems kind of similar (sdr-radio.com) is also Windows only.