Hacker News | epcoa's comments

What’s the point being made here? Is there anything on there that is similar in design?

The only thing that seems kind of similar (sdr-radio.com) is also Windows only.


Well thank you for calling this out.

I was originally going to scold the GP for a shallow dismissal and shitting all over a perfectly good application that runs on the best operating system in development today, Microsoft Windows.

But I also have a soft spot in my heart for Linux and I thought I would instead mollify the GP by pointing out that there really is Linux software for his Software-Defined Radio.

Little did I suspect that the GP wants all of the special features that SkyRoof offers, such as hyperencabulation of quasicosmic quantum wave collapse, a genuine pair of cathode ray boobs, and also don't forget the 1.7 jiggawatt zener diode that is included free with every subscription.

Sorry to confuse you!


Other than the flourish of adding some Scala to enterprisey Java, there is absolutely nothing atypical about this bog-standard enterprise application. It’s a JS/TS/Java app; nothing else stands out.

Listing every config language and a few lines of CI scripts or whatever is misleading.

I see nothing other than typical boring enterprise/big gov crap here (which is fine, and expected).


I’m not going to shit on it, nothing wrong with going into the family business - but it isn’t a complete coincidence that her dad is a PhD biochemist at UT Tyler.


Yeah, it's great to have kids excited about science, but at 17 it's extremely unlikely that she taught herself enough chemistry, organic chemistry and biochemistry to come up with this. She needs years more of college-level coursework. Essentially impossible without a biochemist in the family to guide her.


It never is; the winner of this competition is in astral radio telescopes.

Something tells me he didn't launch the satellite.


> With mentorship from Caltech and access to data from NASA’s NEOWISE mission, Matteo created a machine learning algorithm and compiled a groundbreaking database called VarWISE.

https://www.societyforscience.org/regeneron-sts/2025-student...


I knew someone who reached the final stage of one of these science fairs. The project was done at a lab in an Ivy League university over a couple of summers. A relative was a senior scientist at the lab and guided them every step of the way. Not to discount what these kids are doing, but the reality is that these science fairs have largely become a contest about how well your family is connected to science-fair-friendly research facilities and how good your presentation skills are.

I mean, do we really think 17 year olds are out there doing human trials on novel cancer therapies? I’m sure there are some projects that are genuinely conceived and carried out by the students themselves, but looking at a lot of this PhD-level research that is supposedly done as an after-school project by high school kids, I can’t help but think the whole thing has become a bit of a farce.


And look at her mother -- https://profiles.unthsc.edu/profile/381 -- hmmm


> It is tempting to do the fast thing when you are on a tight schedule.

The alternative option in C or C++ is to do some undefined behavior. Before the pile-on: yes, of course you can avoid it, but the rushed option is usually going to be the equivalent of unwrap anyway, and Rust does make it quite a bit harder to invoke undefined behavior.


What is an arbitrary TCP port? Ports in isolation from an IP address aren't inherently arbitrary; on their own they're nothing, and it's the IP:port pair that's arbitrary. Once you allow connections to any host on the internet, the port doesn't really matter: you can do whatever nefarious shit over port 80. And not allowing apps to connect to external internet servers seems pretty limiting.


I will continue to abstain from eating dog shit to convince myself that a ribeye tastes good.


> either they are lazy or don't understand them enough to do it themselves.

Meh, I used to keep printed copies of autotools manuals. I sympathize with all of these people and acknowledge they are likely the sane ones.


I've had projects where I spent more time configuring autoconf than actually writing code.

That's what you get for wanting to use a glib function.


Why would you need to do that though if you can static_cast?


You can't static_cast in this case; https://godbolt.org/z/a1bMbPcaj


You can use `std::bit_cast` to do that in constexpr contexts.

    constexpr auto f(uint8_t *x) {
      return std::bit_cast<char *>(x);
    }
https://godbolt.org/z/K3f9b9GGs


No, you can't do that either: https://godbolt.org/z/vzdTMazx7 : error: '__builtin_bit_cast' is not a constant expression because 'char' is a pointer type

Here the `constexpr` keyword means the function might be called in a constant-evaluated context. f doesn't need all of its statements to be evaluable at compile time; only the ones actually executed during constant evaluation are checked. You need to explicitly initialize a constexpr variable to test this.

cppreference is very clear about this, regarding bit_cast: https://en.cppreference.com/w/cpp/numeric/bit_cast


Good catch. It's weird that it compiles without error as a consteval func.


Hmm, looking at cppreference:

> The consteval specifier declares a function or function template to be an immediate function, that is, every potentially-evaluated call to the function must (directly or indirectly) produce a compile time constant expression.

It's possible that the compiler just doesn't bother as long as you aren't actually calling the function.


Ah, this was my case! I was trying to cast a uint8_t ptr to char * in a constexpr constructor for a string class.

Ah, that’s what bit_cast is for, neat!


By the 2000s, portability was a concern for most titles. Certainly anything targeted at a rapidly changing console market back then.


Definitely, and architectures back then were far less standardized. The Xbox 360 had a big-endian PowerPC CPU, and the PS2 had a custom RISC-based CPU. On the desktop, this was still the era of PowerPC-based Macs. Far easier (and I would argue safer) to use a standard, portable sscanf-like function with some ASCII text than to figure out how to bake your binaries into every memory and CPU layout combination you might care about.


No. If you have a moving, multi-generational GC, allocation is literally just a pointer increment for short-lived objects.


This is about Go, not Java. Go makes different tradeoffs and does not have a moving, multi-generational GC.


If you have a moving, generational GC, then all the benefits of fast allocation are lost due to data moving and costly memory barriers.


Not at all. Most objects die young and thus are never moved. Also, the time before an object is moved is very long compared to CPU operations, so it is only statistically relevant (very good throughput; a rare, longer tail on latency graphs).

Also, write-only barriers don't have that big of an overhead.


It doesn't matter if objects die young — the other objects on the heap are still moved around periodically, which reduces performance. When you're using a moving GC, you also have additional read barriers that non-moving GCs don't require.


Is that period really that big of a concern when your threads in any language might be context-switched away by the OS? It's not a common occurrence on a CPU timescale at all.

Also, it's no accident that every high-performance GC runtime went the moving, generational way.


That time may seem negligible, since the OS can context switch threads anyway, but it’s still additional time during which your code isn’t doing its actual work.

Generations are used almost exclusively in moving GCs — precisely to reduce the negative performance impact of data relocation. Non-moving GCs are less invasive, which is why they don’t need generations and can be fully concurrent.


I would rather say that generations are a further improvement upon a moving collector, improving space usage and decreasing the length of the "mark" phase.

And which GC is fully concurrent? I don't think that's possible (though I will preface that I am no expert, only read into the topic on a hobby level) - I believe the most concurrent GC out there is ZGC, which does read barriers and some pointer tricks to make the stop-the-world time independent of the heap size.


Java currently has no fully concurrent GC, and due to the volume of garbage it manages and the fact that it moves objects, a truly fully concurrent GC for this language is unlikely to ever exist.

Non-moving GCs, however, can be fully concurrent — as demonstrated by the SGCL project for C++.

In my opinion, the GC for Go is the most likely to become fully concurrent in the future.


Is SGCL your project?

In that case, are you doing atomic writes for managed pointers/the read flag on them? I have read a few of your comments on reddit, and your flags seem to be per memory page? Still, the synchronization on them may or may not have a more serious performance impact than alternative methods, and without a good way to compare it to something like Java, which is the state of the art in GC research, we can't really comment much on whether it's a net benefit.

Also, have you perhaps tried modeling your design in something like TLA+?


Yes, SGCL is my project.

You can't write concurrent code without atomic operations — you need them to ensure memory consistency, and concurrent GCs for Java also rely on them. However, atomic loads and stores are cheap, especially on x86. What’s expensive are atomic counters and CAS operations — and SGCL uses those only occasionally.

Java’s GCs do use state-of-the-art technology, but it's technology specifically optimized for moving collectors. SGCL is optimized for non-moving GC, and some operations can be implemented in ways that are simply not applicable to Java’s approach.

I’ve never tried modeling SGCL's algorithms in TLA+.


It’s uncharitable to say the benefits are lost; I’d reframe it as creating tradeoffs.

