An "explorative" hex editor where you can do "fuzzy" searches, e.g., searching for a header with specific values for certain fields. (I thought ImHex should be able to do this (and still think it might), but haven't really figured out a good work flow...)
If you want to recognize all the common patterns, the code can get very verbose. But it's all still just one analysis or transformation, so it would be artificial to split it into multiple files. I haven't worked much in LLVM, but I'd guess that the external interface to these packages is pretty reasonable and hides a large amount of the complexity that took 16kloc to implement.
If you don’t rely on IDE features or completion plugins in an editor like vim, it can be easier to navigate tightly coupled complexity when it’s all in one file. You can’t scan it or jump to the right spot as easily as with smaller files, but in vim, searching for the exact symbol under the cursor is a single-character shortcut, and that only works if the symbol is in the current buffer. This type of development works best for academic-style code with a small number of experts (usually one or two) who are familiar with the implementation, but in that context it’s remarkably effective. Not great for merge conflicts in frequently updated code, though.
If it was 16K lines of modular "compositional" code, or a DSL that compiles in some provably-correct way, that would make me confident. A single file with 16K lines of -- let's be honest -- unsafe procedural spaghetti makes me much less confident.
Compiler code tends to work "surprisingly well" because it's beaten to death by millions of developers throwing random stuff at it, so bugs tend to be ironed out relatively quickly, unless you go off the beaten path... then it rapidly turns out to be a mess of spiky brambles.
The Rust development team, for example, found a series of LLVM optimiser bugs related to (no)aliasing, because C/C++ rarely used that attribute, while Rust can utilise it aggressively.
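For anyone wondering what that attribute buys: (no)aliasing is a promise to the optimiser that two pointers never refer to overlapping memory, so it can keep values in registers, vectorise, and reorder loads and stores. A rough C++ sketch of the concept using the __restrict extension (GCC/Clang/MSVC); this is just an illustration, not how rustc actually emits the attribute:

```cpp
// With __restrict, the compiler may assume dst and src never overlap, so it
// doesn't have to reload src[i] after every store to dst and can vectorise
// the loop freely.
void scale(float* __restrict dst, const float* __restrict src, int n) {
    for (int i = 0; i < n; ++i)
        dst[i] = src[i] * 2.0f;
}

// Calling it with overlapping ranges breaks that promise and is undefined
// behaviour, e.g. scale(buf + 1, buf, n);
```

Rust's reference rules let the compiler attach that promise almost everywhere, which is why Rust exercised LLVM code paths that C and C++ rarely hit.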
I would be much more impressed by 16K lines of provably correct transformations with associated Lean proofs (or something), and/or something based on EGG: https://egraphs-good.github.io/
On the other end of the optimizer size spectrum, a surprising place to find a DSL is LuaJIT’s “FOLD” stage: https://github.com/LuaJIT/LuaJIT/blob/v2.1/src/lj_opt_fold.c (it’s just pattern matching, more or less, that the DSL compiler distills down to a perfect hash).
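If you haven't looked inside it: each rule is essentially "this opcode with these operand kinds folds via this handler", and the DSL's build step turns the whole rule set into a perfect-hash dispatch. Here's a toy C++ sketch of the shape of it (none of this is LuaJIT's actual code; a plain hash map stands in for the generated perfect hash):

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// Toy fold stage: rules are keyed on (opcode, lhs kind, rhs kind) and map to a
// handler that either produces a folded constant or declines.
enum class Op   : std::uint8_t { Add, Mul };
enum class Kind : std::uint8_t { Const, Var };

struct Instr { Op op; Kind lhs_kind, rhs_kind; double lhs, rhs; };

using FoldFn = std::optional<double> (*)(const Instr&);

static std::optional<double> kfold_add(const Instr& in) { return in.lhs + in.rhs; }
static std::optional<double> kfold_mul(const Instr& in) { return in.lhs * in.rhs; }

static std::uint32_t key(Op op, Kind l, Kind r) {
    return (std::uint32_t(op) << 16) | (std::uint32_t(l) << 8) | std::uint32_t(r);
}

// The "rule table": ADD/MUL of two constants folds to a constant.
static const std::unordered_map<std::uint32_t, FoldFn> rules = {
    { key(Op::Add, Kind::Const, Kind::Const), kfold_add },
    { key(Op::Mul, Kind::Const, Kind::Const), kfold_mul },
};

std::optional<double> try_fold(const Instr& in) {
    auto it = rules.find(key(in.op, in.lhs_kind, in.rhs_kind));
    return it != rules.end() ? it->second(in) : std::nullopt;
}
```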
Part of the issue is that it suggests the code grew in a spaghettified way. That's neither sufficient nor necessary evidence, but lacking external constraints (like an entire library deliberately developed as a single C header), it suggests that the code organisation is not great.
Hardware is often spaghetti anyway. There are a large number of considerations and conditions that can invalidate the ability to use certain ops, which would change the compilation strategy.
The idea of good abstractions and such falls apart the moment the target environment itself is not a good abstraction.
The real question, I find, is: are all 16,000 of those lines required to implement the optimization? How much of that is dealing with LLVM’s internal representation and the varying complexity of LLVM’s other internal structures?
For control systems like avionics it either passes the suite of tests for certification, or it doesn't. Whether a human could write code that uses less memory is simply not important. In the event the autocode isn't performant enough to run on the box you just spec a faster chip or more memory.
I’m sorry, but I disagree. Building these real-time safety-critical systems is what I do for a living. Once the system is designed and hardware is selected, I agree that if the required tasks fit in the hardware, it’s good to go — there’s no bonus points for leaving memory empty. But the sizing of the system, and even the decomposition of the system to multiple ECUs and the level of integration, depends on how efficient the code is. And there are step functions here — even a decade ago it wasn’t possible to get safety processors with sufficient performance for eVTOL control loops (there’s no “just spec a faster chip”), so the system design needed to deal with lower-ASIL capable hardware and achieve reliability, at the cost of system complexity, at a higher level. Today doing that in a safety processor is possible for hand-written code, but still marginal for autogen code, meaning that if you want to allow for the bloat of code gen you’ll pay for it at the system level.
>And there are step functions here — even a decade ago it wasn’t possible to get safety processors with sufficient performance for eVTOL control loops (there’s no “just spec a faster chip”)
The idea that processors from the last decade were slower than those available today isn't a novel or interesting revelation.
All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation.
50+ years of off by ones and use after frees should have disabused us of the hubristic notion that humans can write safe code. We demonstrably can't.
In any other problem domain, if our bodies can't do something we use a tool. This is why we invented axes, screwdrivers, and forklifts.
But for some reason in software there are people who, despite all evidence to the contrary, cling to the absurd notion that people can write safe code.
> All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation.
No. It means more than that. There's a cross-product here. On one axis, you have "resources needed", higher for code gen. On another axis, you have "available hardware safety features." If the higher resources needed for code gen pushes you to fewer hardware safety features available at that performance bucket, then you're stuck with a more complex safety concept, pushing the overall system complexity up. The choice isn't "code gen, with corresponding hopefully better tool safety, and more hardware cost" vs. "hand written code, with human-written bugs that need to be mitigated by test processes, and less hardware cost." It's "code gen, better tool safety, more system complexity, much much larger test matrix for fault injection" vs "human-written code, human-written bugs, but an overall much simpler system." And while it is possible to discuss systems that are so simple that safety processors can be used either way, or systems so complex that non-safety processors must be used either way... in my experience, there are real, interesting, and relevant systems over the past decade that are right on the edge.
It's also worth saying that for high-criticality avionics built to DAL B or DAL A via DO-178, the incidence of bugs found in the wild is very, very low. That's accomplished by spending outrageous time (money) on testing, but it's achievable -- defects in real-world avionics systems are overwhelmingly defects in the requirement specifications, not in the implementation, hand-written or not.
Codegen from Matlab/Simulink/whatever is good for proof of concept design. It largely helps engineers who are not very good with coding to hypothesize about different algorithmic approaches. Engineers who actually implement that algorithm in a system that will be deployed are coming from a different group with different domain expertise.
Not my experience. I work with a -fno-exceptions codebase. Still quite a lot of std left. (Exceptions come with a surprisingly hefty binary size cost.)
Apparently, according to some ACCU and CppCon talks by Khalil Estel, this can be largely mitigated even in embedded, lowering the size cost by orders of magnitude.
Yeah. I unfortunately moved to an APU where code size isn't an issue so I never got the chance to see how well that analysis translated to the work I do.
Provocative talk though, it upends one of the pillars of deeply embedded programming, at least from a size perspective.
Not exactly sure what your experience is, but if you work in an -fno-exceptions codebase then you know that STL containers are not usable in that regime (with the exception of std::tuple, it seems; see the freestanding comment below). I would argue that the majority of use cases of the STL are for its containers.
So, what exact parts of the STL do you use in your code base? Must be mostly compile-time stuff (types, type traits, etc.).
Of course you can, you just need to check your preconditions and limit sizes ahead of time -- but you need to do that with exceptions too, because modern operating systems overcommit instead of failing allocations, and the OOM killer is not going to give you an exception to handle.
I don't think it would be typical to depend on exception handling when dealing with boundary conditions with C++ containers.
I mean .at is great and all, but it's really for the benefit of eliminating undefined behavior and if the program just terminates then you've achieved this. I've seen decoders that just catch the std::out_of_range or even std::exception to handle the remaining bugs in the logic, though.
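Right -- with exceptions off, the usual pattern is to make the check explicit and return the failure instead of catching anything (in a -fno-exceptions build, .at() typically just aborts via the library's internal throw helper, depending on the standard library). A minimal sketch of that style; the helper names are made up:

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Bounds-checked read without exceptions: check the precondition, then use
// operator[], which is now guaranteed in range.
std::optional<std::uint8_t> read_byte(const std::vector<std::uint8_t>& buf, std::size_t idx) {
    if (idx >= buf.size())
        return std::nullopt;   // caller decides how to handle the error
    return buf[idx];
}

// Limit sizes ahead of time so growth stays within a budget you've accounted for.
bool append_bounded(std::vector<std::uint8_t>& buf, std::uint8_t value, std::size_t max_size) {
    if (buf.size() >= max_size)
        return false;
    buf.push_back(value);      // may still allocate; reserve() up front if an
                               // allocation at this point is unacceptable
    return true;
}
```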
Not scaffolding in the same way, but, two examples of "fetishizing accidental properties of physical artworks that the original artists might have considered undesirable degradations" are
- the fashion for unpainted marble statues and architecture
- the aesthetic of running film slightly too fast in the projector (or slightly too slow in the camera) for an old-timey effect
The industry decided on 24 FPS as something of an average of the multiple existing company standards and it was fast enough to provide smooth motion, avoid flicker, and not use too much film ($$$).
Over time it became “the film look”. One hundred-ish years later, we still shoot in it for TV shows and movies that we want to look “good” as opposed to “fake” like a soap opera.
And it’s all happenstance. The movie industry could’ve moved to something higher at any point; nothing stopped it other than inertia. With TV being 60i, it would have made plenty of sense to go to 30p for film so it could be shown on TV better once that became a thing.
Now, don't get me wrong, I'm a fan of pixel art and retro games.
But this reminds me of when people complained that the latest Monkey Island didn't use pixel art, and Ron Gilbert had to explain the original "The Curse of Monkey Island" wasn't "a pixel art game" either, it was a "state of the art game (for that time)", and it was never his intention to make retro games.
Many classic games had pixel art by accident; it was the most feasible technology at the time.
I don't think anyone would have complained if the art had been more detailed but in the same style as the original or even using real digitized actors.
Monkey Island II's art was slightly more comic-like than, say, The Last Crusade, but still with realistic proportions and movements, so that was the expectation before CoMI.
The art style changing to silly-comic is what got people riled up.
(Also a correction: by original I meant "Secret of" but mistyped "Curse of").
I meant Return to Monkey Island (2022), which was no more abrupt a change than say, "The Curse of Monkey Island" (1997).
Monkey Island was always "silly comic", it's its sine qua non.
People whined because they wanted a retro game, they wanted "the same style" (pixels) as the original "Secret", but Ron Gilbert was pretty explicit about this: "Secret" looked the way it did due to the limitations of the time, he wasn't "going for that style", it was just the style they managed with pixel art. Monkey Island was a state-of-the-art game for its time.
So my example is fully within the terms of the concept we're describing: people growing attached to technical limitations, or in the original words:
> [...] examples of "fetishizing accidental properties of physical artworks that the original artists might have considered undesirable degradations"
I wouldn't call it "fetishizing" though; not all of them anyway.
Motion blur happens with real vision, so anything without blur would look odd. There's cinematic exaggeration, of course.
24 FPS is indeed entirely artificial, but I wouldn't call it a fetish: if you've grown with 24 FPS movies, a higher frame rate will paradoxically look artificial! It's not a snobby thing, maybe it's an "uncanny valley" thing? To me higher frame rates (as in how The Hobbit was released) make the actors look fake, almost like automatons or puppets. I know it makes no objective sense, but at the same time it's not a fetishization. I also cannot get used to it, it doesn't go away as I get immersed in the movie (it doesn't help that The Hobbit is trash, of course, but that's a tangent).
Grain, I'd argue, is the true fetish. There's no grain in real life (unless you have a visual impairment). You forget fast about the lack of grain if you're immersed in the movie. I like grain, but it's 100% an esthetic preference, i.e. a fetish.
>Motion blur happens with real vision, so anything without blur would look odd.
You watch the video with your eyes so it's not possible to get "odd"-looking lack of blur. There's no need to add extra motion blur on top of the naturally occurring blur.
On the contrary, an object moving across your field of vision will produce a level of motion blur in your eyes. The same object recorded at 24fps and then projected or displayed in front of your eyes will produce a different level of motion blur, because the object is no longer moving continuously across your vision but instead moving in discrete steps. The exact character of this motion blur can be influenced by controlling what fraction of that 1/24th of a second the image is exposed for (vs. having the screen black).
The most natural level of motion blur for a moving picture to exhibit is not that traditionally exhibited by 24fps film, but it is equally not none (unless your motion picture is recorded at such a high frame rate that it substantially exceeds the reaction time of your eyes, which is rather infeasible).
In practice, I think the kind of blur that happens when you're looking at a physical object vs an object projected on a crisp, lit screen, with postprocessing/color grading/light meant for the screen, is different. I'm also not sure whatever is captured by a camera looks the same in motion as what you see with your eyes; in effect even the best camera is always introducing a distortion, so it has to be corrected somehow. The camera is "faking" movement, it's just that it's more convincing than a simple cartoon as a sequence of static drawings. (Note I'm speaking from intuition, I'm not making a formal claim!).
That's why (IMO) you don't need "motion blur" effects for live theater, but you do for cinema and TV shows: real physical objects and people vs whatever exists on a flat surface that emits light.
You're forgetting about the shutter angle. A large shutter angle will have a lot of motion blur and feel fluid even at a low frame rate, while a small shutter angle will make movement feel stilted but every frame will be fully legible, very useful for chaotic scenes. Saving Private Ryan, for example, used a small shutter angle. And until digital, you were restricted to a shutter angle of 180, which meant that very fast moving elements would still jump from frame to frame in between exposures.
I suspect 24fps is popular because it forces the videographer to be more intentional with motion. Too blurry, and it becomes incomprehensible. That, and everything staying sharp at 60fps makes it look like TikTok slop.
24fps looks a little different on a real film projector than on nearly all home screens, too. There's a little time between each frame when a full-frame black is projected (the light is blocked, that is) as the film advances (else you'd get a horrid and probably nausea-inducing smear as the film moved). This (oddly enough!) has the effect of apparently smoothing motion—though "motion smoothing" settings on e.g. modern TVs don't match that effect, unfortunately, but looks like something else entirely (which one may or may not find intolerably awful).
Some of your fancier, brighter (because you lose some apparent brightness by cutting the light for fractions of a second) home digital projectors can convincingly mimic the effect, but otherwise, you'll never quite get things like 24fps panning judder down to imperceptible levels, like a real film projector can.
Me at every AirBnB: turn on TV "OH MY GOD WTF MY EYES ARE BLEEDING where is the settings button?" go turn off noise reduction, upscaling, motion smoothing.
I think I've seen like one out of a couple dozen where the motion smoothing was already off.
I think the "real" problem is not matching shutter speed to frame rate. With 24fps you have to make a strong choice - either the shutter speed is 1/24s or 1/48s, or any panning movement is going to look like absolute garbage. But, with 60+fps, even if your shutter speed is incredible fast, motion will still look decent, because there's enough frames being shown that the motion isn't jerky - it looks unnatural, just harder to put your finger on why (whereas 24fps at 1/1000s looks unnatural for obvious reasons - the entire picture jerks when you're panning).
The solution is 60fps at 1/60s. Panning looks pretty natural again, as does most other motion, and you get clarity for fast-moving objects. You can play around with different frame rates and shutter speeds, but imo anything faster than 1/120s (a 180-degree shutter at 60fps, in film speak) will start severely degrading the watch experience.
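For anyone who wants the arithmetic spelled out: shutter angle is just the fraction of the frame interval the shutter is open, so exposure = (angle / 360) / fps. A quick sketch (my own helper, just to show the numbers being discussed):

```cpp
#include <cstdio>

// Exposure time per frame from frame rate and shutter angle:
// a 360-degree shutter is open for the whole frame interval.
double exposure_seconds(double fps, double shutter_angle_degrees) {
    return (shutter_angle_degrees / 360.0) / fps;
}

int main() {
    std::printf("24fps @ 180deg -> 1/%.0fs\n", 1.0 / exposure_seconds(24, 180));  // 1/48s, the classic film look
    std::printf("60fps @ 180deg -> 1/%.0fs\n", 1.0 / exposure_seconds(60, 180));  // 1/120s
    std::printf("60fps @ 360deg -> 1/%.0fs\n", 1.0 / exposure_seconds(60, 360));  // 1/60s, as suggested above
}
```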
I've been doing a good bit of filming of cars at autocross and road course circuits the past two years, and I've received a number of compliments on the smoothness and clarity of the footage - "how does that video out of your dslr [note: it's a Lumix G9 mirrorless] look so good" is a common one. The answer is 60fps, 1/60s shutter, and lots of in-body and in-lens stabilization so my by-hand tracking shots aren't wildly swinging around. At 24/25/30fps everything either degrades into a blurry mess, or is too choppy to be enjoyable, but at 60fps and 1/500s or 1/1000s, it looks like a (crappy) video game.
Is getting something like this wrong why e.g. The Hobbit looked so damn weird? I didn't have a strong opinion on higher FPS films, and was even kinda excited about it, until I watched that in theaters. Not only did it have (to me, just a tiny bit of) the oft-complained-about "soap opera" effect due to the association of higher frame rates with cheap shot-on-video content—the main problem was that any time a character was moving it felt wrong, like a manually-cranked silent film playing back at inconsistent speeds. Often it looked like characters were moving at speed-walking rates when their affect and gait were calm and casual. Totally bizarre and ruined any amount of enjoyment I may have gotten out of it (other quality issues aside). That's not something I've noticed in other higher FPS content (the "soap opera" effect, yes; things looking subtly sped-up or slowed-down, no).
[EDIT] I mean, IIRC that was 48fps, not 60, so you'd think they'd get the shutter timing right, but man, something was wrong with it.
Not necessarily heavy (except sometimes as an effect), but some compression almost all the time for artistic reasons, yes.
Most people would barely notice it as it's waaaay more subtle than your distorted guitar example. But it's there.
Part of the likeable sound of albums made on tape is the particular combination of old-time compressors used to make sure enough level gets to the tape, plus the way tape compresses the signal again on recording by its nature.
I work in vfx, and we had a lecture from one of the art designers who worked with some Formula 1 teams on the color design for cars. It was really interesting how much work goes into making the car look "iconic" while also highlighting sponsors, etc.
But to your point: back during the PAL/NTSC analog days, the physical color of the cars was set so that, when viewed on analog broadcast, the color would be correct (very similar to film scanning).
He worked for a different team but brought in a small piece of Ferrari bodywork, and it was more of a day-glo red-orange than the delicious red we all think of with Ferrari.
Yes. The LEON series of microprocessors is quite common in the space industry. It is based on SPARC v8, and SPARC is big-endian. And also, yes, SPARC v8 is a 33-year-old 32-bit architecture; in space we tend to stick to the trailing edge of technology.
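The practical consequence is that anything exchanging raw words with such a target (or parsing its memory dumps on a little-endian host) has to be explicit about byte order. A small sketch of the usual shift-based big-endian pack/unpack, which works regardless of the host's endianness (helper names are mine):

```cpp
#include <cstdint>

// Big-endian wire format: most significant byte first.
void store_be32(std::uint8_t* out, std::uint32_t v) {
    out[0] = std::uint8_t(v >> 24);
    out[1] = std::uint8_t(v >> 16);
    out[2] = std::uint8_t(v >> 8);
    out[3] = std::uint8_t(v);
}

std::uint32_t load_be32(const std::uint8_t* in) {
    return (std::uint32_t(in[0]) << 24) | (std::uint32_t(in[1]) << 16) |
           (std::uint32_t(in[2]) << 8)  |  std::uint32_t(in[3]);
}
```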
Also remember: Even though many of these articles/books/papers/etc. are good, even great, some of them are starting to get a bit old. When reading them, check what modern commentators are saying about them.
E.g.:
What every programmer should know about memory (18 years old) [1]
How much of ‘What Every Programmer Should Know About Memory’ is still valid? (13 years old) [2]
While I cannot comment on the specifics you listed, I don't think the fundamentals have changed much concerning memory. Always good to have something more digestible, though.