The gameplay output from the PS5 is recorded using an Atomos Ninja V, capturing HDR 4K UHD at 59.94 fps in ProRes 422 HQ, which is 10-bit at 1768 Mb/s.
Sorry, this is beyond my experience, but does the term HDR really apply to a screen-recording device? Isn’t it a way of extending the sensitivity of a camera to real-world light levels by blending shots at different exposures? How does this apply to a screen, I’m wondering?
The easiest way to think about HDR is in terms of maximum brightness. The lowest brightness level to the maximum brightness level is what’s called the dynamic range. It’s similar to audio where the lowest level to the maximum level is also dynamic range.
This is a simple explanation; things such as colour are also affected, because the luma and chroma characteristics/levels of any range both increase when the dynamic range is increased.
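To put some (purely hypothetical) numbers on the brightness idea: dynamic range is just the ratio between the lowest and highest level a device can show, often expressed in stops, where each stop is a doubling. The black level and peak brightness below are made-up example figures, not measurements from any particular display.

```python
import math

# Hypothetical display: 0.05 nits black level, 1000 nits peak brightness.
black, peak = 0.05, 1000.0

ratio = peak / black        # contrast ratio, lowest level to highest
stops = math.log2(ratio)    # dynamic range expressed in stops (doublings)

print(f"{ratio:.0f}:1 contrast ratio, about {stops:.1f} stops of dynamic range")
```

Raising the peak or lowering the black level widens the ratio, which is exactly what "increasing the dynamic range" means in brightness terms.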
When you increase the bit depth you add more (and higher) values, and the more values you have, the wider the dynamic range you can express.
At 8-bit you can have a value anywhere between 0 and 255. In this instance 0 is actually a value, so that gives you 256 options. When you go to 10-bit you may think that doesn’t sound like much more than 8, but in binary terms those extra 2 bits add a lot more values: 10-bit gives you values between 0 and 1023, which is 1024 options.
With 1024 options per colour channel, Red, Green and Blue (RGB), the number of possible unique colours increases to over one billion, as opposed to just under 17 million for an 8-bit representation. This higher resolution, in simple terms, equates to dynamic range, or the ability to show more colours when talking video.
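The arithmetic above is easy to check for yourself: the number of levels per channel is 2 raised to the bit depth, and the number of unique colours is that cubed (one value each for R, G and B).

```python
# Levels per channel at a given bit depth is 2**bits;
# cubing it gives the unique RGB colour combinations.
for bits in (8, 10):
    levels = 2 ** bits       # 256 for 8-bit, 1024 for 10-bit
    colours = levels ** 3    # one value per R, G and B channel
    print(f"{bits}-bit: {levels} levels/channel, {colours:,} colours")
```

This prints just under 17 million colours for 8-bit and just over one billion for 10-bit, matching the figures above.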
For all of this to work in video, everything has to be compatible: the camera sensor, digitally/computer-generated colours, the recording/storing of the information, and also the monitor you view it on. So as you can see, it’s not really a way to increase the sensitivity of a camera, although it does, in a way, have the effect of simulating real-world light levels as you say.
Mixing multiple exposures is sort of HDR, but only in the sense that you are balancing shadows and highlights to create an exposure that has the best bits of what any single exposure can offer. Any one exposure in a multi-exposure set can still only hold values dictated by its bit depth, and the same goes for the blended/mixed version.
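A toy sketch of that point, using made-up pixel values: blend three bracketed 8-bit "exposures" of the same pixel, and the result is still just one value confined to the 0–255 range of that bit depth.

```python
# Hypothetical pixel values from three bracketed 8-bit exposures
# (underexposed, normal, overexposed) of the same scene point.
under, normal, over = 10, 128, 250

# A naive blend: average the three, then clamp to the 8-bit range.
blended = round((under + normal + over) / 3)
blended = max(0, min(255, blended))

print(blended)  # still a single value between 0 and 255
```

However clever the blending, the output lives in the same 256-level space as any one input, which is why blending exposures is not the same thing as genuinely higher bit depth.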
When shooting with a high enough dynamic range you can basically capture a lot more detail/information in a single exposure.
There are also other concerns such as colour space, gamma, gamut, etc., and possibly to a lesser degree chroma subsampling. But the easiest way to visualise/think about HDR is to base it on levels of brightness, or an increased range of brightness.
If you’re interested in such things a good place to start is understanding how numbers and values are represented and stored in traditional computing. If you do a Google search for ‘Binary Value’ you should find something quickly that explains what a bit value is and how multiple bits very quickly help to express large numbers/values. These binary values and bits are represented as 0 or 1, or off and on. It’s a very convenient way for a computer system to deal with numbers, especially large numbers, without having to write a large number in the way that we humans recognise numbers.
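If you want to see the "each bit doubles the range" idea in action, a few lines of Python show how quickly the number of representable values grows as bits are added:

```python
# Each extra bit doubles the number of representable values.
# bin() shows the maximum value as a string of 1s (all bits on).
for bits in (1, 2, 4, 8, 10):
    values = 2 ** bits
    print(f"{bits:>2} bits -> {values:>5} values, max = {values - 1} ({bin(values - 1)})")
```

Going from 8 to 10 bits looks small written as a decimal, but in binary it is two extra doublings, 256 values becoming 1024.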
I was around during the transition from analogue to digital audio recording, when I used to produce music and had to learn/understand how sampling worked, so the transition to digital video/film was a very similar one. Interestingly, or not, I started in digital video with some of the early commercially available analogue-to-digital video capture cards. This was when companies like Fast and Pinnacle were a big deal and recording single-field (from interlaced) SD signals was the future of digital video. And now you have inexpensive software like LumaFusion that can edit 8K on a production studio that fits in your pocket.