> My understanding is that there is no intrinsic meaning to "crf", and that it is just a rough way of controlling bitrate (in that it refers to internal variables in the specific implementation of the encoder), am I mistaken about this?
You are not mistaken. The difference is in how reliable the control is regardless of input video.
For libvpx, the CRF control is garbage. A CRF of 30 will look good for some scenes and horrible for scenes that are too dark or have too much motion. It means that if you just want to use libvpx (or ffmpeg), you often end up setting that CRF way lower than you need so that the scenes where it fails don't turn into smooth color blobs. It's bad enough that they introduced a "minimum bitrate" flag.
x264 is a different experience. The amount of adjustment you have to do to CRF for a given input is extremely minor; I found anything between 20 and 24 to be more than acceptable. For vpx, you need to come up with a value anywhere from 10 to 50 depending on the source.
I get that a lot of this is subjective, but it's what I've experienced doing a bunch of DVD rips.
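For reference, this is roughly how the difference plays out on ffmpeg's command line (filenames and CRF values are illustrative, not recommendations):

```shell
# x264: a CRF in the low 20s tends to hold up across most sources.
ffmpeg -i in.mkv -c:v libx264 -crf 22 out_x264.mkv

# libvpx-vp9: -b:v 0 enables constant-quality mode; without it,
# -crf only caps quality underneath the target bitrate.
ffmpeg -i in.mkv -c:v libvpx-vp9 -crf 32 -b:v 0 out_vp9.webm

# The "minimum bitrate" workaround mentioned above: set a rate floor
# so dark or high-motion scenes don't get starved of bits.
ffmpeg -i in.mkv -c:v libvpx-vp9 -crf 32 -b:v 2M \
    -minrate 500k -maxrate 4M out_capped.webm
```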
> What are up-to-date AV1 encoders still leaving on the table as far as optimization is concerned?
The biggest seems to be good quality controls that have been tuned by someone with a good subjective eye for that sort of thing. Beyond that, IDK, the bitstreams allow for a LOT more transformations than H.264 allowed for, yet the encoders don't seem to have the same level of complexity. For example, x264 came up with a bunch of motion vector search patterns over its evolution. You don't see those sorts of developments with the other encoders.
Heck, you even saw that sort of care for quality output in the fact that x264 has tuning guides for (at the time) common objective measures of quality, SSIM and PSNR (which returned worse quality than x264's subjective quality tuning).
IDK, this may also be that I don't have as much time to geek out over video codecs :).