As a user of a CRT PC monitor and a 240Hz OLED, the motion clarity of the OLED is pretty darn close now. I’d bet 480Hz is the point where the smoothness of modern panels finally catches up to CRTs.
Of course, the question is how to leverage those monitors. Either games have to render 480 frames per second (impossible on average hardware for anything but Subpixel Snake), or the monitor displays 7 black frames after every rendered frame, which cuts the game down to 60 (rendered) frames per second. But the latter would greatly reduce the maximum screen brightness to 1/8, possibly below CRT levels, because OLEDs aren't very bright in the first place.
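For a quick back-of-the-envelope check on those numbers (just the arithmetic above, nothing panel-specific):

```python
# Black frame insertion math: a 480Hz panel showing a 60fps game.
panel_hz = 480   # panel refresh rate
game_fps = 60    # frames the game actually renders

black_frames = panel_hz // game_fps - 1   # black frames inserted after each rendered frame
duty_cycle = game_fps / panel_hz          # fraction of the time the image is actually lit

print(f"{black_frames} black frames per rendered frame")                              # 7
print(f"brightness falls to 1/{panel_hz // game_fps} ({duty_cycle:.1%}) of maximum")  # 1/8 (12.5%)
```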
You’re assuming that they only have the power to show the user propaganda, but I think the real power is in hiding content they don’t want people to see.
It’s in the article before this one. The TL;DR is that no, this doesn’t reduce latency, so there’s no chance of making the original light guns work without modifying either them or the game.
There might not be many new games, but for the old games getting a used CRT is free and the consoles are cheap too! I’ve been playing through the PS2 light gun games and it really does feel like you’ve got an arcade at home.
The CRT requirement has pleasantly eroded recently.
A Kickstarter a few years back for the Sinden light gun [1] realized that with a webcam, some quick image processing, and a perspective transform, you could get real-time light gun performance on non-CRTs by adding a small border region around the screen, making it work on essentially any monitor. He filmed and wrote extensive technical breakdowns of the build process and the mechanics at play, which were great.
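As I understand the trick (this is my own rough sketch, not Sinden's actual code), the camera in the gun locates the four corners of that bright border in its own view, and a perspective transform maps the gun's aim point back into screen coordinates. With OpenCV that's roughly:

```python
import cv2
import numpy as np

# Corners of the white border as seen by the gun's camera (hypothetical values;
# in practice these would come from thresholding and contour-finding the frame).
border_in_camera = np.float32([[102, 80], [538, 95], [520, 410], [115, 396]])

# Where those corners actually sit on a 1920x1080 display, in screen pixels.
border_on_screen = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])

# Homography mapping camera space to screen space.
H = cv2.getPerspectiveTransform(border_in_camera, border_on_screen)

# The gun "aims" at the centre of its 640x480 camera frame; project that point.
aim_in_camera = np.float32([[[320, 240]]])
aim_on_screen = cv2.perspectiveTransform(aim_in_camera, H)
print(aim_on_screen)   # approximate screen coordinate being aimed at
```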
The maker also seems to have had a solid understanding of what made those old light gun games cool, because he made sure to build versions with solenoid-based recoil as well as the big chunky metal foot pedal you’d use for games like Time Crisis.
Sinden is no longer the way to go. Most lightgun enthusiasts have now gone the Gun4IR route [0]. It uses the IR sensor from a WiiMote plus a microcontroller in the gun (either a gutted commercial controller like the PS Guncon, a modified Nerf, or something straight-up 3D printed) and four IR LEDs placed around a monitor/TV at the midpoints of each edge. This system is extremely accurate, and there is no flashing border around the screen like with Sinden. Unfortunately, the whole shooting match (see what I did there?) is closed source and (as of now) the calibration-based PC software is Windows-only.
The current open source competitor to Gun4IR is the Samco light gun [1]. It uses four LEDs as well, but with two on the top edge and two on the bottom edge of the screen. A couple of Wii sensor bars will do the job here as well. I don't think it is quite as accurate as the Gun4IR, as I don't think it accounts for perspective correction if you move away from the position it was originally calibrated at. But...
Sam & a few others are readying a new design called OpenFire [2] that will be at least on par accuracy-wise with Gun4IR and will be fully open source and cross-platform. It should be available relatively soon. Pair this with the PiCon [3] and you have a lightgun with a pretty crazy feature set. All the guns mentioned have some kind of solenoid & rumble support, but the PiCon kicks it up a notch with exclusive OpenFire features like an OLED display, NeoPixel LED, accelerometer, and analog joystick.
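For what it's worth, my rough mental model of why four reference points are enough for full perspective correction (my own sketch, not any of these projects' actual firmware): four known points seen by the gun's IR camera pin down a homography, so the aim point can be corrected even when you're standing well off-axis. Using the TV's corners for simplicity (the real setups use edge midpoints or sensor bars):

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 homography mapping src points to dst points (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)

# Where the gun's IR camera sees the four LEDs (hypothetical pixel coordinates,
# skewed because the player is standing off to one side).
leds_seen = [(210, 150), (820, 190), (840, 610), (190, 640)]

# Where those LEDs physically sit, in screen coordinates (1920x1080 TV corners).
leds_true = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]

H = homography(leds_seen, leds_true)

# The gun aims at the centre of its own camera frame; project through H.
cx, cy = 512, 384
x, y, w = H @ np.array([cx, cy, 1.0])
print(f"aim point on screen: ({x / w:.0f}, {y / w:.0f})")
```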
That's true, but CRTs are basically free and plug-and-play while looking extra crispy. I think if you're okay spending a lot more to get an equivalent setup those are good options, but harder to recommend.
I’m looking up photos of restaurants from 40+ years ago and struggling to find any obvious acoustic differences in their designs (I do notice carpet seems more prominent?). Do you have any examples of what they used to do better?
Booths, designs, and acoustic tile ceilings off the top of my head.
They went with easy-to-clean floors and took out the acoustic tiles, leaving the ceilings and air handling systems bare and echoic.
In fancy buildings you also had a lot of decorative wood and molding breaking up the sound. And those embossed tin tiles, covered with a few layers of paint.
Obviously it varies widely by restaurant and location, but in general I'd agree with the statement that restaurants are a bit louder than they used to be. I'm talking about table service restaurants, rather than fast food. I think the reason is probably that real estate is more expensive now, so restaurants are trying to pack people closer together. Architectural styles are different as well, with spaces being more open, ceilings higher, and more hard surfaces (how many new restaurants have carpet?). There may be differences in people's behavior too, but I can't say that for sure.
For a while during covid, a place I would go to on occasion had full-height plexiglass dividers between each booth. It made such a huge difference in noise, I was sad when they got rid of them.
Author here; I have a lot of other posts on my personal blog about this, but: the current trends in VC-backed tech companies are about minimizing risk and following fashion, rather than any technical merit or experimentation. Said another way: if an Elixir company dies, it's "damn, shouldn't have picked Elixir!" If a Python company dies, it's "startups are hard," with no investigation into what Python cost you.
Elixir has real technical downsides too, but honestly they never come up when someone advocates against it. And this is fine; building companies and engineering culture is a social game at the end of the day.
Sure! I find most responses (like the other one on this comment) talk about the social as it relates to the technical (what I call "atmosphere" in [this blog post][1]). I'll avoid that, since a) I think it's kind of obvious, and b) it's somewhat overblown. If Python has 20 CSV libraries and Elixir has 2, but they work, are you really worse off? I'll instead try to talk about "soil" and "surface" issues: the runtime, and (assuming engineers already know the languages) what they allow for expressivity. Here we go!
---
Most of the BEAM isn't well-suited for trends in today's immutable architecture world (Docker deploys on something like Kubernetes or ECS). Bootup time on the VM can be long compared to running a Go or OCaml binary, or some Python applications (I find larger Python apps tend to spend a ton of time loading modules). Compile times aren't as fast as Go, so if a fresh deploy requires downloading modules and compile-from-scratch, that'll be longer than other stacks. Now, if you use stateful deploys and hot-code reloading, it's not so bad, but incorporating that involves a bit more risk and specific expertise that most companies don't want to roll into. Basically, the opposite of this article https://ferd.ca/a-pipeline-made-of-airbags.html
Macros are neat, but they can really mess up your compile times, and they don't compose well (e.g. ExConstructor, typed_struct, and Ecto schemas all operate on Elixir structs, but you can't use all three together).
If your problem is CPU-bound, there are much better choices: C++, Rust, C. Python has a million libraries with great FFI, so you'll be fine using that too. Ditto memory-bound: there are better languages for this.
This is also not borne from direct experience, but: my understanding is the JVM has a lot more knobs to tune GC. The BEAM GC is IMO amazing, and did the right thing from the beginning to prevent stop-the-world pauses, but if you care about other metrics (good list in this article https://blog.plan99.net/modern-garbage-collection-911ef4f8bd...) you're probably better off with a JVM language.
While the BEAM is great at distribution, "distributed Erlang" (using the VM's built-in features rather than doing what most companies do and ad-hoc-ing it with containers and infra) makes assumptions that you can't break, like the default full-mesh clustering (every node must be connected to every other node). This means you can distribute to some number of nodes, but it's hard to use Distributed Erlang for hundreds or thousands of nodes.
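Just to put numbers on the full-mesh point (plain arithmetic, nothing BEAM-specific): the connection count grows as n*(n-1)/2, which is why the default mesh gets painful at hundreds or thousands of nodes.

```python
# Connections in a fully meshed cluster of n nodes: n * (n - 1) / 2.
for n in (10, 100, 1000):
    links = n * (n - 1) // 2
    print(f"{n:>5} nodes -> {links:>7} connections")
# 10 -> 45, 100 -> 4950, 1000 -> 499500 (each link also carries heartbeat traffic)
```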
Deployment can be a mixed bag, depending on what you want. BEAM releases are nice, but they lack some of the convenience of shipping a single binary. Libraries can work around this (like Burrito https://github.com/burrito-elixir/burrito).
If you like static types, Dialyzer is the worst of the "bolted-on" type checkers. mypy/pyright/pyre, Sorbet, and TypeScript are all way better, since Dialyzer only does "success typing" and gives much worse error messages.
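A contrived illustration of the difference, in Python since that's the comparison being drawn (my example, not from any particular codebase): a success-typing checker only complains when a call can never succeed, while a conventional checker like mypy flags any call that can fail.

```python
def double(x: int) -> int:
    return x * 2

value: int | str = "21"

# mypy rejects this call: an `int | str` argument isn't compatible with `int`.
# A success-typing checker in the Dialyzer style would stay quiet, because there
# is at least one runtime value (an int) for which the call succeeds.
double(value)
```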
> Most of the BEAM isn't well-suited for trends in today's immutable architecture world (Docker deploys on something like Kubernetes or ECS). Bootup time on the VM can be long compared to running a Go or OCaml binary, or some Python applications (I find larger Python apps tend to spend a ton of time loading modules). Compile times aren't as fast as Go, so if a fresh deploy requires downloading modules and compile-from-scratch, that'll be longer than other stacks. Now, if you use stateful deploys and hot-code reloading, it's not so bad, but incorporating that involves a bit more risk and specific expertise that most companies don't want to roll into. Basically, the opposite of this article
I don't necessarily disagree with your reading of the trends, but if following the trends means losing the best tool for the job, maybe it's not the tool that's in the wrong. Re: deploy time, I don't think there's a need for deployed servers to fetch and compile modules --- you wouldn't do that for a Java or C server, you'd build an archive once and deploy the archive. I guess if you're talking about the speed of the build pipeline, I'd think the pieces in the build could be split into meaningful chunks so builds could be in parallel and/or only on the parts where the dependencies changed. I imagine BEAM startup itself isn't particularly fast at the moment, because I haven't seen a lot of changelogs about speeding it up, but I'm not sure there's a huge amount of perceived need there? If you're really going to abandon all hotloading, you could also abandon all dynamic loading and preload your whole app (but that might complicate your build pipeline).
> This is also not borne from direct experience, but: my understanding is the JVM has a lot more knobs to tune GC. The BEAM GC is IMO amazing, and did the right thing from the beginning to prevent stop-the-world pauses, but if you care about other metrics (good list in this article https://blog.plan99.net/modern-garbage-collection-911ef4f8bd...) you're probably better off with a JVM language.
The BEAM GC is really so different from a JVM GC that it's hard to compare them. I guess you can still measure and compare throughput, but the JVM has to deal with all sorts of data structure complexity that comes when you don't have the restrictions of a language with immutable data. You can't make a reference loop in a process heap, so the BEAM GC doesn't have to find them; only a process can access its heap, and the BEAM GC runs in the process context, so there's no concurrent access to guard against (on the other hand, a process stops to GC, so it's locally 100% stop-the-world; it's just that the world is very small). The specifics are so different that it's hard to say which one is better; the world views just diverge. But yes, there are significantly fewer knobs, because there are a lot fewer things to change.
> assumptions that you can't break, like the default full-mesh clustering (every node must be connected to every other node). This means you can distribute to some number of nodes, but it's hard to use Distributed Erlang for hundreds or thousands of nodes.
That's certainly the default, and some things might assume it, but you absolutely can change the behavior, especially with the pluggable epmd and dist protocol support. You will certainly break assumptions though, and some things won't work: I'd expect the global module's locks not to function properly, and if you forward messages between nodes, you can make things fail in different ways. In a fully meshed cluster, if a process on node A sends a gen_server:call to a process on node B, that process can forward the message to a gen_server on node C, and when the process on node C replies, it sends directly to node A; if your cluster is set up so that nodes A and C can't communicate directly, the reply is lost, unless the process on node B that forwarded the request to node C arranges to forward the reply back to node A. If you do that sort of forwarding, you also lose out on the process monitoring that gen_server:call typically does: if a gen_server crashes, or the connection to the node is ended, calls fail 'immediately', at the cost of extra traffic to set up the monitors. Although at WhatsApp we didn't use that... the cost of setting up and tearing down monitors was too great.
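A toy model of that reply-routing problem, in case the A/B/C description is hard to follow (this is just the message flow, not Erlang semantics):

```python
# Only these directed links exist; note there is no A<->C link.
links = {("A", "B"), ("B", "A"), ("B", "C"), ("C", "B")}

def send(frm, to, msg):
    if (frm, to) in links:
        print(f"{frm} -> {to}: {msg}")
    else:
        print(f"{frm} -> {to}: DROPPED ({frm} and {to} are not connected)")

# gen_server:call-style flow: A calls B, B forwards to C, and C replies straight
# to the original caller (A), because that's whose address is in the request.
send("A", "B", "call {from: A}")
send("B", "C", "call {from: A}")   # B forwards but leaves A as the reply target
send("C", "A", "reply")            # lost: C cannot reach A directly

# The workaround described above: B substitutes itself as the caller and relays.
send("B", "C", "call {from: B}")
send("C", "B", "reply")
send("B", "A", "reply")
```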