
1. Non-smart TVs are getting hard to find.

2. Most devices you can plug into a TV are underpowered for some applications. For example, I tried streaming games from my PC to such devices, and they all introduced too much artifacting and input lag. TVs tend to have a bunch of ASICs/FPGAs alongside their general-purpose processor to handle video, so they can do better.

3. Computers that CAN handle the previous use cases are often very expensive anyway, so in a sense it doesn't make sense anymore. For example, I looked into this and concluded the "cheapest" option would be an RPi 4 with more RAM and an SSD... and at the prices here it made more sense to buy an actual x86 computer instead.



What use case are you talking about? Plugging a cable from a PC into a TV will only have the input lag of the TV itself, and it won't cause any artifacting either. I can't imagine what "transmitting a game to such devices as a Raspberry Pi" means.


You send input from the Pi to a PC elsewhere; the game runs on that PC, and instead of outputting to a screen it writes to a compressed video stream and sends it back to the Pi. Same thing for audio.

Then the Pi must decompress the video and audio, convert them to the format HDMI uses, and send them to the TV.
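
A minimal sketch of the Pi-side loop, assuming a simple length-prefixed TCP stream from the PC (the helper names are placeholders, not any particular streaming stack):

    import socket
    import struct

    def decode_frame(payload: bytes) -> bytes:
        # Placeholder: a real build would hand this to a hardware H.264/H.265 decoder.
        return payload

    def present_to_hdmi(frame: bytes) -> None:
        # Placeholder: a real build would push the pixels into the display pipeline (e.g. DRM/KMS).
        pass

    def run_receiver(host: str = "0.0.0.0", port: int = 9000) -> None:
        # Read length-prefixed encoded frames off the network, decode, display.
        srv = socket.create_server((host, port))
        conn, _ = srv.accept()
        while True:
            header = conn.recv(4, socket.MSG_WAITALL)   # 4-byte big-endian length prefix
            if len(header) < 4:
                break
            (length,) = struct.unpack("!I", header)
            payload = conn.recv(length, socket.MSG_WAITALL)
            present_to_hdmi(decode_frame(payload))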

All of that has to add at most a few milliseconds of delay, and depending on your TV model it may need to happen 120 or even 240 times per second.
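
For a sense of the per-frame budget:

    # Frame-time budget at common refresh rates: capture, encode, network,
    # decode and scan-out all have to fit inside (or pipeline across) this window.
    for hz in (60, 120, 240):
        print(f"{hz} Hz -> {1000 / hz:.2f} ms per frame")
    # 60 Hz -> 16.67 ms per frame
    # 120 Hz -> 8.33 ms per frame
    # 240 Hz -> 4.17 ms per frame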


Okay, I'm still not sure whether this is for recording streams for Twitch or for putting a Pi in front of your TV instead of a full PC. Either way, it sounds like a configuration issue more than anything. I have processed 1080p 60 Hz video with zero input lag on 15-year-old 1.5 GHz laptops; higher-end stuff should be just as easy on modern hardware.

The way a (presumably non-vsynced) game normally outputs to a monitor is that it generates new frames as fast as it can, and each time it instantaneously replaces the framebuffer with the latest frame. Meanwhile, the monitor is continuously scanning the pixels in the framebuffer out to the screen from top to bottom. So it will output part of one frame at one point on the screen, then part of the next frame below it (or above, once the bottom of the screen is reached and scanout wraps around), and so on.
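
A toy model of that interaction, just to show where the tear comes from (the swap rows are made up):

    # The display scans 1080 rows top to bottom at a fixed rate, while the
    # game swaps the framebuffer whenever a new frame happens to be ready.
    ROWS = 1080
    swap_rows = {300, 750}      # rows at which a swap happened to land (illustrative)
    current_frame = 0

    scanned = []
    for row in range(ROWS):
        if row in swap_rows:
            current_frame += 1  # framebuffer replaced mid-scanout
        scanned.append(current_frame)

    # scanned: frame 0 for rows 0-299, frame 1 for rows 300-749, frame 2 below that.
    # A single refresh therefore shows slices of three different frames: a tear.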

I was going to give a solution, but I just realized that if you have no control over scanout you will literally always have input lag and other strange problems. This is not a performance problem.

You would need to stream each frame from the computer to the Pi as it is generated (and each time a new frame starts, redirect the stream to source from that new frame), compressing on the fly and never buffering a full frame, and the Pi would need to stream this straight out through the HDMI cable in parallel instead of merely writing it into a framebuffer. In such a setup, the input lag is merely how long it takes a single bit to travel from the computer to the TV. I'm not sure there are good compressors for this (other than DSC); the way this stuff is normally supposed to work is over high-speed video connections, not slow Ethernet.
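
Very roughly, the sender side of that idea would look something like this sketch (compress_slice is a stand-in for a real low-latency codec, and the slice size is arbitrary):

    import socket
    import zlib

    SLICE_ROWS = 16  # ship a handful of rows at a time, never a whole frame

    def compress_slice(rows: bytes) -> bytes:
        # Stand-in for a low-latency intra codec (something DSC-like); zlib is only illustrative.
        return zlib.compress(rows)

    def stream_frame(conn: socket.socket, frame: bytes, row_bytes: int) -> None:
        # "Beam racing": push each slice out as soon as the renderer has produced it,
        # so the far end can decode and scan out the top of the frame while the
        # bottom is still being rendered and encoded.
        step = SLICE_ROWS * row_bytes
        for start in range(0, len(frame), step):
            conn.sendall(compress_slice(frame[start:start + step]))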


I don't understand any of this.

If I want to play super duper games I'll hook up the gaming console directly to my non-smart TV through HDMI.

If I want to play retro games or other bullshit that can run on an RPI 4, I'll hook a controller directly up to the RPI 4.

If I want to use my laptop's trackpad and keyboard in lieu of a remote control, I use x2x to shuttle my input events to the RPI 4, open Chromium on the RPI 4, and then watch Netflix or whatever junk in a full-screen browser window. This is the only place I sometimes get lag, but it's only while starting a video, so it doesn't matter.
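
Something like this (host name and edge direction are just examples; x2x runs on the Pi over an X-forwarded ssh session):

    import subprocess

    # Run x2x on the Pi over an X-forwarded ssh session, so pointer/keyboard
    # events from the laptop hop to the Pi's local display (:0) when the
    # pointer leaves the laptop screen's east edge.
    subprocess.run(["ssh", "-X", "pi@raspberrypi.local",
                    "x2x", "-east", "-to", ":0"])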

AFAICT the only thing I'm missing is 4K output, which apparently requires some model of TV that ships with smart-TV adware. Outside of that I don't see any costs.



