wronglebowski
HDR/SDR tone mapping has long been the bane of hardware transcoding for Plex. Even when it "kinda works" (which is Nvidia on Linux only), it kills my GTX 1650 in both the RAM and compute required. Normally NVENC can handle three 4K streams, or many, many 1080p streams, constrained only by RAM.
compsciphd
On Quick Sync (600-series and above iGPUs, i.e. Kaby Lake/Gemini Lake) it can do it without an issue. On Gemini Lake cores, I've done 4 simultaneous HDR tone-mapped streams (UHD Blu-ray source to AVC SDR) without a problem.
m-p-3
Thanks to FFmpeg it's really simple to convert an HDR file to SDR in HandBrake by setting the colourspace to BT.709 under the Filters tab.
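For reference, the commonly cited ffmpeg equivalent is a zscale + tonemap filter chain along these lines (a sketch: the snippet just prints the filter chain, and the commented command uses placeholder filenames; hable is one of several available tone-mapping curves):

```shell
# Sketch of an HDR10 -> SDR (BT.709) filter chain using ffmpeg's zscale and
# tonemap filters: linearize the PQ signal, tone map, then re-encode as BT.709.
VF='zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p'
# Run against a real file when ffmpeg is available (placeholder filenames):
#   ffmpeg -i input_hdr.mkv -vf "$VF" -c:v libx264 -crf 18 -c:a copy output_sdr.mkv
echo "$VF"
```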
Jaruzel
Does anyone know how well this works?
I have a real-world use case: I have a full Atmos audio setup (7.1.4) and a projector for video. Now projectors don't do HDR very well[1], and to be honest I'm more interested in immersive audio than shiny highlights in the picture.
Unfortunately, a good few film releases only put the Atmos audio on the UHD disc along with HDR, and running those discs through my rig and forcing SDR yields a picture that is too dark compared to the original HD SDR copy (and letting it try HDR isn't much better).
So I'm wondering: will this enable me to rip and convert to a good SDR UHD copy of a film with the Atmos audio?
--
[1] ...and 100 inch OLED TVs are stupidly expensive.
Veliladon
It looks too dark because you're forcing light levels on a curve that goes up to 10,000 nits through a device that has linear gamma and is probably over an order of magnitude dimmer. This process should help: it manually remaps HDR reference white to Rec. 709/sRGB max white, transforms the curve back to linear, and forces the display to clip anything brighter to max white, which is an acceptable way of doing things when compressing a gamut.
Your output device's color space needs to match the color space the display device expects. We've been in an SDR world of everything being sRGB/Rec. 709 for so long that we don't even notice when something is putting out a mismatched color space. Normally, if you have something like an Apple TV, it'll automatically transform from whatever the media is to the color space of the screen it's connected to, but if you're using something like a PC that assumes sRGB and you force BT.2020 video out at sRGB levels, you're going to have a bad time.
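In ffmpeg terms, the clip-to-max-white approach described above corresponds to the tonemap filter's clip mode (again a sketch: it prints the filter chain, and the commented command uses placeholder filenames):

```shell
# Sketch: linearize the PQ signal, hard-clip everything above SDR reference
# white (tonemap=clip), then re-encode the result with the BT.709 curve.
VF='zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=clip,zscale=t=bt709:m=bt709:r=tv,format=yuv420p'
# Run against a real file when ffmpeg is available (placeholder filenames):
#   ffmpeg -i input_hdr.mkv -vf "$VF" -c:v libx264 -crf 18 -c:a copy output_sdr.mkv
echo "$VF"
```

A soft curve such as hable trades some highlight accuracy for less harsh clipping; clip keeps everything below reference white exact, which matches the behavior described in the comment above.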
Jaruzel
Thanks for that info. I'm currently putting the discs into a mid-range Panasonic UHD player, which is supposed to be able to convert HDR to SDR properly, but clearly it's not doing a good job of it.
jahnu
ffmpeg would be a great candidate for use with natural language processing, a la
r4vik
I don't know if the output is correct but I prompted openai:
ffmpeg command to convert hdr video to SDR:
ffmpeg -i video.mp4
and it returned:
ffmpeg -i video.mp4 -vf format=pix_fmts=yuv420p -c:v libx264 -crf 18 -preset veryslow -c:a copy output.mp4

This FFmpeg command will take your HDR video and convert it to an SDR video using the libx264 codec with a CRF value of 18.
r4vik
This doesn't really fix the color space though, so I tried enhancing the prompt:
ffmpeg command to convert hdr video to SDR and fix the colorspace:
and it returned:

ffmpeg -i input.hdr -pix_fmt yuv420p -filter_complex "colorlevels=entropy_mode=1:levels=srgb:source_range=full:target_range=full:rgb_to_yuv=sigmoidal" -y output.sdr.mp4
I don't really understand this, so I asked it to explain: how does this fix the colorspace?
It replied: The RGB to YUV conversion is done using a "sigmoidal" function, which basically means that it applies a non-linear transformation to the input video. This transformation is designed to "compress" the bright colors in the video, making them more manageable for SDR displays.
The colorlevels filter is also adjusting the black and white points of the video. This is done to ensure that the video can be displayed correctly on SDR displays, which have a limited color gamut.
Finally, the target_range and source_range options are used to ensure that the video is encoded in the correct range for SDR displays.
compsciphd
Does anyone know if DV-oriented tone mapping is in development (i.e. DV to HDR10 or DV to SDR), so DV files can play on non-DV screens and look correct?
randomfinn
Jellyfin has DV to SDR in the latest release (https://jellyfin.org/posts/jellyfin-10-8-0/). I think it's in their own ffmpeg branch and not yet upstream.
jaywalk
Dolby Vision is a proprietary format. So unless Dolby develops something (highly doubtful) then it's unlikely anything will be developed. It would be incredibly complex, and almost certainly lead to legal action from Dolby.
Mindwipe
DV is an additional metadata layer on top of HDR-10 PQ. If you play it back on a non-Dolby Vision device, it will just play back the same as an HDR-10 file would. No tone mapping is required.
compsciphd
That's the case for DV on Blu-rays (dual layer), not for streaming media, where it's in a different colorspace than HDR10.
sp332
Most of the images aren't loading for me.
> The tone-mapping algorithm places 100% load on a single CPU core.
In 4.1, an OpenCL variant filter was added for GPU processing.
In 4.2 threading support was added to the tonemap CPU filter.
In 4.3, a VAAPI variant filter was added for GPU processing on Linux.
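For context, the OpenCL variant runs the same kind of filter chain on the GPU via tonemap_opencl (a sketch: it prints the filter chain, and the commented command uses a placeholder device string and filenames):

```shell
# Sketch: GPU tone mapping with ffmpeg's tonemap_opencl filter (added in 4.1).
# Frames are uploaded to the GPU, tone mapped there, then downloaded again.
FC='format=p010,hwupload,tonemap_opencl=tonemap=hable:t=bt709:m=bt709:p=bt709:format=nv12,hwdownload,format=nv12'
# Run when ffmpeg is built with OpenCL (device string and filenames are placeholders):
#   ffmpeg -init_hw_device opencl=gpu:0.0 -filter_hw_device gpu \
#     -i input_hdr.mkv -vf "$FC" -c:v libx264 -crf 18 -c:a copy output_sdr.mkv
echo "$FC"
```

The hwupload/hwdownload round trip is what keeps the CPU out of the tone-mapping step itself, which is the bottleneck the quoted line describes.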