Hacker News | Everlag's comments

If you're interested in seeing a use case of the APIs, there was a really cool talk at Demuxed last year where some folks built a compositor using this plus canvas[0]

[0] https://www.youtube.com/watch?v=zvsF6ZTYl0Y


nebula[0] may be interesting; you can allowlist connectivity for specific groups, all burned into the cert used to join the network. It uses some NAT hole-punching orchestration to accomplish connectivity between hosts without opening ports.

The main painful thing I've found has been cert management. PKI, as usual, is not a solved problem.

I've managed to do some fun stuff using salt + nebula on the hobby side.

[0] https://github.com/slackhq/nebula
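For a flavor of the group-based allowlisting, this is roughly what the firewall section of a nebula config looks like; the `admin` group and port choice here are made up for illustration:

```yaml
firewall:
  outbound:
    # let hosts initiate anything outbound
    - port: any
      proto: any
      host: any
  inbound:
    # only hosts whose cert carries the "admin" group may SSH in;
    # the group membership is burned into the cert at signing time
    - port: 22
      proto: tcp
      groups:
        - admin
```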


That's such a different approach than I've grown used to, but different platforms encourage different flows.

I've been using Bitbucket (not the shiny nice new Bitbucket) at work for years and its PR search is so abysmally bad that anything in the PR message but not in the commit message may as well not exist once it's merged. `git log` is forever; Bitbucket search is /dev/null.

I'm curious what other affordances or lack of affordances encourage what git behavior.


The current state of media ingest protocols is actually getting interesting. For quite some time it's been a situation of 'everyone supports RTMP so you support RTMP' plus 'vendor X does not support $FANCY_NEW_PROTOCOL but they do support RTMP'.

Now we've got this fun competition between various protocols at different stages of their lifecycle. Some, like SRT, have one vendor pushing them quite hard, with support flakily available but far from everywhere. Others, like WHIP, are much earlier in their lifecycle.

Most new protocols support carrying more codecs without gross hacks and are generally much nicer than an early-2000s protocol which has never really had a canonical specification; beating RTMP is a somewhat low bar. Each one also has its own shiny ribbons, like using QUIC/WebTransport or very low latency.

The next few years will be interesting: which protocols get adoption, and which fall by the wayside? Could be any of the current contenders; I'm just hoping I can stop maintaining an RTMP stack sometime before 2040 without suffering extreme fragmentation.


Sorry for going off topic a bit, but what is wrong/not ideal with RTMP? I've been using RTMP with nginx for a couple of projects and didn't notice anything inherently wrong. However, I was working with streaming for the first time, so whatever problems I ran into were probably overlooked as part of learning the tech.

What problems that you have with "maintaining an RTMP stack" do any of these newer protocols alleviate? And what do you mean exactly by "RTMP stack", so I can be sure I understand you correctly? Thanks.


Honestly, RTMP is a workhorse that powers most live video ingest today; it's not bad, it's just mostly frozen in time. Some issues that come immediately to mind:

- It was developed to support Flash; anyone sending metadata about the data streams is probably serializing Flash (AMF) objects, which is truly an experience when writing Go.

- The supported codecs are limited and the spec provides no generic way to extend it. If you want h264 and aac, RTMP is great. If you want any codec developed after that, you need to either reach for another protocol or use a non-standard extension to RTMP whose support across different providers is flaky.

- It's TCP under the hood, and that locks you into a fixed reliability/performance tradeoff.

- The 'spec' is a document released by Adobe which is not particularly great, falling pretty far short of what we'd expect out of the IETF or similar. It's overly detailed in some areas and woefully under-detailed in others.

- RTMP isn't particularly tunable; if you want to trade off latency for quality or similar, you're exploring the dark depths of the protocol.

None of these are damning but, in aggregate, they go a long way toward encouraging new protocols which avoid these issues.

  > What problems that you have with "maintaining an RTMP stack" do any of these newer protocols alleviate? What do you mean exactly by "RTMP stack", so I can be sure I understand you correctly?
Maintaining an RTMP stack, at least for me, means making sure your implementation of underspecified RTMP works with other, third-party implementations of underspecified RTMP. Some implementations are great (ffmpeg and OBS), whereas some will send you timestamps that oscillate between the correct time and 3 billion seconds in the future; or they'll send you exactly 3 packets at the start of the connection with timestamps far off in the future. More subtly, some providers want you to send a packet when you're closing the connection, while others just want you to go away and get angry if you send them a 'goodbye' packet.
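To make that concrete, here's a hypothetical sketch of the kind of defensive check an ingest server ends up growing; the tolerance constant and function name are invented, and real implementations also have to handle the legitimate 32-bit timestamp wraparound:

```go
package main

import "fmt"

// maxForwardJumpMs is an assumed tolerance; real ingest servers pick their own.
const maxForwardJumpMs = 10_000

// sanePacketTimestamp rejects timestamps that leap absurdly far ahead of the
// last one we accepted (e.g. the "3 billion seconds in the future" encoders
// mentioned above).
func sanePacketTimestamp(lastMs, nextMs uint32) bool {
	if nextMs < lastMs {
		// RTMP timestamps can legitimately wrap at 2^32 ms; a real
		// implementation distinguishes wraparound from garbage here.
		// For this sketch, treat backwards jumps as suspect.
		return false
	}
	return nextMs-lastMs <= maxForwardJumpMs
}

func main() {
	fmt.Println(sanePacketTimestamp(1_000, 1_040))         // a normal 40ms frame gap
	fmt.Println(sanePacketTimestamp(1_000, 3_000_000_000)) // a bogus far-future jump
}
```

None of the well-behaved implementations ever trip a check like this, which is exactly why you only discover you need it in production.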


Thanks for the answer. I was not aware of all those details, and it sounds like I had a relatively easy time with RTMP since both projects I built, coincidentally, used either ffmpeg or OBS.


I would be surprised and saddened if `?` as described in the blog were added to Go. Error wrapping is critical for operating a service and staying sane; an operator encouraging `if err != nil { return nil, err }` is a step backwards.

Consider this situation (it may sound familiar): you get paged at 4am for 500s from a service, check the logs, and see 'file does not exist'. Go doesn't attach stack traces to errors by default. Go doesn't enforce that you wrap errors with human context by default. How do you debug this? Pray you have metrics or extra logging enabled by log level that can give some degree of observability.

This error could have been 'opening config: file does not exist' or 'initializing storage: opening WAL: file does not exist', or even just decorated with a stack trace. Any of those are immediately actionable.

Now, if Go decided to make error wrapping a first-class citizen with a fancy `?` operator, I'd be excited. However, I doubt that will happen because Go is definitely not Rust-like in its design.


First of all, it's important to recognize that the majority of error handling in Go today is actually `if err != nil { return err; }`. Take a look at Kubernetes if you don't believe me.

Second of all, nothing prevents `x ?= Foo();` from implicitly doing `if x,err := Foo(); err != nil { return fmt.Errorf("Error in abc.go:53: %w", err)}`


K8s is regularly regarded as a very poor example of idiomatic Go code, along with the AWS client libs.

Searching our prod code, "naked" if-err-return-err showed up in about 5% of error handling cases, the rest did something more (attempted a fallback, added context, logged something, metric maybe, etc).


I would love syntactic sugar for `if err != nil { return errors.Wrap(err, "What I was trying to do") }`. As an SRE, making each part of the stack explicit (or explicitly non-explicit) is invaluable for understanding and debugging. I'm A-OK with some forms of error throwing, but they need to be clear and understood.


I think you meant the code:

  if err != nil {
      return fmt.Errorf("What I was trying to do: %w", err)
  }
That's the correct and standard way to do error wrapping in Go since Go 1.13[1]. There is also Dave Cheney's pkg/errors[2], which does define `errors.Wrap()`, but that has been superseded by the native functionality in Go.

[1]: https://go.dev/blog/go1.13-errors
[2]: https://github.com/pkg/errors


The code I'm primarily working on is old enough that `errors.Wrap` is the standard - though refactoring that is definitely on the radar :)


Would it not be easier if Go just provided a stack trace attached to the error? How to do this cleanly in Go I don't know; I do, however, do this in embedded C++ and it works well. I agree that without context, errors can be hard to track down when they come from functions that are called by many other functions.

I am not a fan of manually wrapping errors because it seems inferior to stack trace.

Also, I hate that errors in Go are mostly just random strings; at a high level it's super hard to do anything intelligent with them.


Heads up for anyone reading this on Chrome: the examples may fail to render, with a `Cannot read property '_gl' of null` exception; Firefox worked fine when I tried it.


Hmmmmm, that's kind of interesting. You're seeing the sort of crash you get when the browser's decided your system can't handle WebGL, and turned it off. Normally that would translate to "what GPU are you using, it's probably too old" -- except Firefox is apparently fine.

I'll bet chrome://gpu either shows "WebGL: disabled" or "WebGL: enabled; WebGL2: disabled". I think `ignore-gpu-blocklist` in chrome://flags should affect WebGL.

FWIW I'm running Chrome on Linux with hardware rendering force/explicitly enabled for both video and rasterization (`enable-gpu-rasterization` in chrome://flags - don't think this affects WebGL), and it all works great (notwithstanding terrible thermal design) on this fairly old HP laptop w/ i5 + HD Graphics 4000. (The GPU process does admittedly occasionally hang and need killing so it restarts, but that's about it.)

Getting video decode requires --enable-features=VaapiVideoDecoder on the commandline as well, or at least it did in the last version of Chrome, I haven't checked if this is no longer required.

If poking `ignore-gpu-blocklist` doesn't work, what does chrome://gpu show in Chrome, what does about:support (and possibly about:gpu? not sure) show in Firefox, and what GPU and OS are you using?


I had it fail on first visit in Firefox, so I think the page is just buggy. It worked when I refreshed it.


Same here, Firefox 88.0.1 on Linux x86-64. Thanks for mentioning this, so I could give it a second try.


Works For Me(tm) on chrome 90, macOS


My team agrees with you :)

https://developers.cloudflare.com/stream/

Happy to take any critique y'all may have.


hello, can your service handle live streams (HLS/DASH) or is it just for static streams? many thanks


Stream currently supports VOD delivered over HLS/DASH. We'd like to explore live in the future.


If you want to avoid the 'this Chrome extension has been sold to a malicious actor' fiasco that hit me with The Great Suspender yesterday, you can load this as an unpacked extension.

Grab the zip from GitHub, unzip it, and point Chrome to it as an 'unpacked extension'. Works perfectly and takes 2 minutes.

The GitHub repo is here: https://github.com/tom-james-watson/old-reddit-redirect

You lose automatic updates, but that's a feature in this case.


What happened with Great Suspender?


Looks like there's discussion happening here: https://github.com/greatsuspender/thegreatsuspender/issues/1...


Whoa that is heavy.


Double-"huh?"

- This is a Firefox Extension

- Why not just ... disable automatic updates?


Aye, this is the 'Cryptographic Doom Principle'[0].

To very lossily summarize: always authenticate before looking at the message.

It's a handy rule of thumb when you're making choices like how to validate a message.

[0] https://moxie.org/2011/12/13/the-cryptographic-doom-principl....


This labels itself small but, honestly, it feels too small. Why would I introduce a dependency that I can implement in 5 minutes from first principles?

The entire implementation essentially lives in atom.ts[0]. It's a class that has an array member and a for loop. It does handle async code for you but, I don't know, I'm not convinced.

I would love to know the intended audience for this package, since it's seemingly not me.

[0] https://github.com/lucialand/luciex/blob/main/packages/core/...
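To illustrate the "5 minutes from first principles" claim, here's roughly that pattern sketched in Go; this is my own minimal reconstruction of the idea (a value plus subscribers notified on change), not the library's actual API:

```go
package main

import "fmt"

// Atom holds a value and a list of subscribers notified on change.
type Atom[T any] struct {
	value T
	subs  []func(T)
}

func (a *Atom[T]) Get() T { return a.value }

// Set updates the value and notifies every subscriber.
func (a *Atom[T]) Set(v T) {
	a.value = v
	for _, fn := range a.subs { // the "array member and a for loop"
		fn(v)
	}
}

// Subscribe registers a callback to run on every Set.
func (a *Atom[T]) Subscribe(fn func(T)) {
	a.subs = append(a.subs, fn)
}

func main() {
	count := &Atom[int]{}
	count.Subscribe(func(v int) { fmt.Println("count is now", v) })
	count.Set(1)
	count.Set(2)
}
```

Unsubscribe, async scheduling, and error handling are where a real library earns its keep; the core really is this small.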


You could copy atom.ts and save yourself 5 minutes

