I disagree. PDF is the most desirable format for printed media and its analogues. Any time I plan to seriously entertain a paper from Arxiv, I print it out first. I prefer to have the author's original intent in hand. Arbitrary page breaks and layout shifts that are a result of my specific hardware/software configuration are not desirable to me in this context of use.
I agree that PDF is best for things that are meant to be printed, no question. But I wonder how common actually printing those papers is?
In research and in embedded hardware both, I've met some people who had entire stacks of papers printed out - research papers or datasheets or application notes - but also people who had 3 monitors and 64GB of RAM and all the papers open as browser tabs.
I'm far closer to the latter myself. Is this a "generational split" thing?
Possibly, but then again: when I need to study a paper, I print it; when I just need to skim it and use a result from it, I'm more likely to read it on a screen (tablet/monitor). That's the difference for me.
I used to print papers, but probably stopped about 10 years ago. I now read everything in Zotero, where I can highlight and save my annotations and sync my library between devices. You can also seamlessly archive HTML and PDFs. I don't see people printing papers in my workplace that often, unless you need to read them in a wet lab where a computer isn't convenient.
Actual film grain (i.e., photochemical) is arguably a valid source of information. You can frame it as noise, but it does provide additional information content that our visual system can work with.
Removing real film grain from content and then recreating it parametrically on the other side is not the same thing as directly encoding it. You are killing a lot of information. It is really hard to quantify exactly how we perceive this sort of information so it's easy to evade the consequences of screwing with it. Selling the Netflix board on an extra X megabits/s per streamer to keep genuine film grain that only 1% of the customers will notice is a non-starter.
Exactly. In the case of stuff shot on film, there's little to be done except increase bitrate if you want maximal fidelity.
In the case of fake grain that's added to modern footage, I'm calling out the absurdity of adding it, analyzing it, removing it, and putting yet another approximation of it back in.
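For anyone curious what "recreating it parametrically" actually means, here is a toy sketch in C# of the general idea, not any codec's real algorithm (the GrainStrength curve and the names are made up for illustration): the encoder strips the measured grain and ships a small model, and the decoder re-synthesizes pseudo-random noise from that model on playback.

    // Toy sketch of "parametric grain" (not any codec's actual algorithm;
    // GrainStrength and AddSyntheticGrain are made-up names): the encoder
    // strips the measured grain and ships a small model, and the decoder
    // re-synthesizes pseudo-random noise from it on playback.
    using System;

    class GrainSynthesisSketch
    {
        // Decoder-side RNG: the seed has nothing to do with the original grain.
        static readonly Random Rng = new Random(1234);

        // Hypothetical strength curve: a bit more grain in shadows than highlights.
        static double GrainStrength(double luma) => 8.0 * (1.0 - 0.5 * luma / 255.0);

        static byte[] AddSyntheticGrain(byte[] denoisedLuma)
        {
            var output = new byte[denoisedLuma.Length];
            for (int i = 0; i < denoisedLuma.Length; i++)
            {
                // Rough Gaussian noise via a sum of uniforms.
                double noise = 0;
                for (int k = 0; k < 4; k++) noise += Rng.NextDouble() - 0.5;

                double value = denoisedLuma[i] + noise * GrainStrength(denoisedLuma[i]);
                output[i] = (byte)Math.Clamp(value, 0, 255);
            }
            return output;
        }
    }

Everything the little model can't express, i.e. the actual frame-by-frame structure of the grain, never makes it to the decoder. That's the information loss being talked about.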
The complexity here is partially a consequence of the energy storage mechanism and may be essential.
It is not possible for an entire tank of gasoline to spontaneously detonate in the same way that an EV battery can. If a mechanic fucks up a procedure and drills a hole through the fuel tank, it's not fantastic, but you can usually detect and recover from it before it gets to be catastrophic. If you accidentally puncture an EV battery or drop something across the terminals, it can instantly kill everyone working on the car. These are not the same kind of risk profile.
I would not want to work on anything with a high-voltage system, especially if it had been involved in an accident or was poorly maintained. Those fuses and interlocks can only help up to a certain point. Energy is energy, and it's in there somewhere. You can have 40kW for an entire hour or 100MW for 2 seconds. Gasoline cars usually throw a rod or something before getting much beyond 2x their rated power output.
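To put rough numbers on "energy is energy" (the ~80 kWh pack size below is my own assumption, not a quoted spec):

    // Back-of-envelope only; the ~80 kWh pack size is an assumption.
    double packMJ = 80 * 3.6;              // 80 kWh            = 288 MJ stored
    double slowMJ = 40e3 * 3600.0 / 1e6;   // 40 kW for an hour = 144 MJ
    double fastMJ = 100e6 * 2.0 / 1e6;     // 100 MW for 2 s    = 200 MJ
    System.Console.WriteLine($"{slowMJ} MJ vs {fastMJ} MJ, out of roughly {packMJ} MJ in the pack");

Either way it's a large fraction of what's stored; the only question is whether it comes out as an hour of heat or as an arc flash in front of whoever is holding the drill.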
The most interesting part of UT2k4 to me is the software renderer. It actually worked on period hardware and many would argue it looks better. You should definitely give it a try if you've got the game on a modern machine.
I was surprised when I realized that a game from 2004 still had a software renderer. But I thought it was just an old leftover from the first Unreal game.
> If you ask audiophiles online they will swear up and down that a cheater plug, balanced cables, or optical isolation will fix it - that will not fix it.
Lifting the ground on my studio monitors absolutely fixed my noise problems. I run them off a MiniDSP 2x4HD, so other sources like EMI aren't really a factor.
The problem I have with a double-conversion UPS is that it isn't an ideal sinusoidal source. The tin implies that it is, but when you've got protected loads with PWM power delivery slamming around 1+ kilowatts, there's no way to guarantee a smooth waveform from a typical ~2500VA unit. Passing straight through to the grid can provide cleaner power under the most transient conditions.
I think it's more of a comfort thing than a safety thing in many cases. Definitely in my case.
If you've never experienced it, I think you should at least understand what you are up against. Most people aren't buying these things to be evil to each other in some big dick safety war. Go visit an FCA dealership and see for yourself. Have a sales guy drive you down the freeway in that Ram 1500 Lonestar Edition. Observe how quiet your conversation can be at 80mph. It might change your perspective a bit.
> Have a sales guy drive you down the freeway in that Ram 1500 Lonestar Edition. Observe how quiet your conversation can be at 80mph.
I have been driven in luxury murdertrucks before, but none have come close, in terms of sound isolation, to German executive sedans from a similar price bracket.
This is great until you encounter a customer with a hard RPO (recovery point objective) requirement of 0. SQLite has a few replication options, but I would never trust them over PGSQL/MSSQL/DB2/Oracle/etc. in high-stakes domains.
My experience: customers with $$$ will always believe they are very important, so important that losing a single bit is the end of the world.
So you may not want to try to convince customers waving huge $$$ checks that their data are not that important. Instead, provide options to keep them once they realize that their pockets are not that deep, and that they are actually fine with losing some data.
Unity is easily your best option if you've got any sensitivity to platform-targeting issues. They are working on a CoreCLR conversion, but there's no telling if it will actually see the light of day.
The lack of modern C# features and the hijacking of things like null-coalescing operators are annoying, but they're not something that ruins the overall experience for me. The code is like 20% of the puzzle. How it all comes together in the scene is way more important.
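For anyone who hasn't hit the "hijacking" first-hand: UnityEngine.Object overloads == so a destroyed object compares equal to null, but ?? and ?. use a plain reference comparison and skip that overload. A minimal illustration (the class and field names here are just for the example):

    using UnityEngine;

    public class NullCoalescingPitfall : MonoBehaviour
    {
        Rigidbody cached;

        void Update()
        {
            // UnityEngine.Object overloads == so a destroyed component compares
            // equal to null, but ?? skips that overload. After Destroy(cached)
            // this can hand back a dead wrapper instead of a fresh component.
            var rb = cached ?? GetComponent<Rigidbody>();

            // The boring comparison goes through the overload and behaves as expected.
            var safeRb = cached != null ? cached : GetComponent<Rigidbody>();

            Debug.Log(ReferenceEquals(rb, safeRb)); // can print False after a Destroy
        }
    }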
Hooking directly into the narrow-phase solver is the most performant way to go about it, but it does present several state-management issues. I did the same thing in Farseer Physics Engine, but also added high-level events on bodies [1]. The extra abstraction makes it easier to work with, but due to the nature of delegates in C#, it was also quite a bit slower.
They could do with providing defaults for the narrow-phase handler, buffer pool, thread dispatcher, etc. for devs who don't need extreme performance and just want a simple simulation.
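To make the delegate-vs-direct-hook trade-off concrete, here's an illustrative sketch (the type names are mine, not BEPU's or Farseer's actual API):

    // Illustrative only -- not BEPU's or Farseer's real API. A struct callback
    // passed as a generic argument can be devirtualized and inlined by the JIT;
    // a per-body delegate event goes through an indirect call on every contact.
    public interface IContactCallbacks
    {
        void OnContact(int bodyA, int bodyB);
    }

    // Fast path: the narrow phase is generic over the callback struct.
    public struct LoggingCallbacks : IContactCallbacks
    {
        public void OnContact(int bodyA, int bodyB) { /* react to the pair */ }
    }

    public static class NarrowPhase<TCallbacks> where TCallbacks : struct, IContactCallbacks
    {
        public static void Report(ref TCallbacks callbacks, int a, int b)
            => callbacks.OnContact(a, b);
    }

    // Friendlier path: high-level per-body events, easier to consume, but every
    // contact pays for the delegate invocation.
    public class BodyEvents
    {
        public event System.Action<int, int> Collision;
        public void RaiseCollision(int a, int b) => Collision?.Invoke(a, b);
    }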
It's one step forward, two steps back with this "server side rendering" framing of the issue, and with Microsoft GitHub's behavior in practice. They'll temporarily enable plain text on the site's pages in response to accessibility issues, then a few months later remove it from that type of page and from even more besides. As that thread and others I've participated in show, this is a losing battle: Microsoft GitHub will be a JavaScript-only application in the end. Human people should consider moving their personal projects accordingly. For work, well, one often has to do very distasteful and unethical things for money, and GitHub is where the money is.