To explain the problem I would like to solve, consider the following image. It is simply made up of two stripes in which the tones oscillate between black and white; clearly the oscillation "frequency" [1] is higher in the upper stripe, since more oscillations fit into the horizontal width of the image, and lower in the lower stripe, where there are fewer. To be exact, we expect the frequency of oscillation in the upper part of the image to be 10 times greater than in the lower part, since in the space of a single oscillation below there are 10 oscillations above.
It should be quite straightforward to see that working separately on the details/colors of the upper part of the image is not the same problem as working separately on the details/colors of the lower part. If you try the frequency separation method presented in the previous post, you will find that the details of the upper part of the image disappear with a Gaussian filter of a much smaller radius than the one needed to make the detail of the lower part disappear as well.
In fact, with the original image - whose dimensions are 1000x200 pixels - a Gaussian blur with a radius of 5 pixels is enough to make every detail in the upper part disappear, while one with a radius of about 50 pixels is needed to make the details of the lower part also disappear.
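If you want to reproduce the experiment outside Photoshop, here is a minimal sketch in Python (numpy + scipy). The stripe periods of roughly 10 and 100 pixels are my assumption, based on the 5 and 50 pixel radii quoted above, and scipy's sigma is used as a stand-in for Photoshop's blur radius, to which it is related but not identical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical two-stripe test image, 1000x200 px, tones in [0, 1]:
# a fast square wave on top, a 10x slower one below.
w, h = 1000, 200
x = np.arange(w)
fast = np.where(np.sin(2 * np.pi * x / 10) >= 0, 1.0, 0.0)
slow = np.where(np.sin(2 * np.pi * x / 100) >= 0, 1.0, 0.0)
img = np.vstack([np.tile(fast, (h // 2, 1)), np.tile(slow, (h // 2, 1))])

blur5 = gaussian_filter(img, sigma=5)    # upper-stripe detail essentially gone
blur50 = gaussian_filter(img, sigma=50)  # lower-stripe detail gone as well

# Residual tonal variation in each stripe after the two blurs:
print(blur5[50].std(), blur5[150].std())   # ~0 above, still large below
print(blur50[150].std())                   # ~0 below too
```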
This example should make it clear what it means to say that in a real image - which is much more complex than the one we are using - there are many frequencies, corresponding to tonal variations that occur on different scales in the image. But let's look at this aspect in detail, operating on the original image with two different frequency separations, choosing blur radii of precisely 5 and 50 pixels.
From top to bottom: levels containing the high or low frequencies obtained with a blur of 5 pixels (first two images) or 50 pixels (second two images)
Let's try to read this series of images, starting from the bottom: here we find the result of the low frequencies for the Gaussian blur with a radius of 50 pixels, i.e. all the part of the image that varies on spatial scales greater than 50 pixels, which in this case corresponds to a substantially uniform medium grey.
I'll come back to the image description in a moment. Let's stop for a second to think about the relationship between the radius at which a blur is made and the separation of frequencies that is obtained: clearly, fixing a blur radius also identifies a corresponding frequency; in fact, once a certain blur has been made we end up with two layers, one containing the high frequencies, the other the low ones. But "high" or "low" compared to what? Naturally, with respect to the frequency that was identified when we chose a precise blur radius. As I told you, this is a rather improper way of speaking, since the term "frequencies" normally refers to phenomena that vary over time (and not, as in our case, over space), but we will continue to use it since it is customary in the photographic field.
We have to answer this question: given a certain radius (at which we will blur), how does this identify a precise "frequency"? It's actually very simple. Choosing a certain radius means being able to "separate" the tonal variations of the image into those that occur on scales larger than that radius and those that occur on smaller ones. And of course the tones that vary on scales larger than the one we have chosen vary more slowly, i.e. they must correspond to lower frequencies, and vice versa.
High frequencies <------> rapid variations, below the chosen blur radius
Low frequencies <------> slow variations, above the chosen blur radius
This naturally leads us to think that the relationship between distances and frequencies is one of inverse proportionality, i.e. that f = C/r, where C is a suitable constant. If you think about it for just a moment, everything adds up: if the separation radius increases, the corresponding frequency decreases, because of course if I choose a larger radius I will separate off tonal variations that occur even more slowly, i.e. on longer spatial scales. We won't need to know the value of this constant (let alone its dimensions, given the confusion between times and spaces I have mentioned several times), but we do need the notation that derives from this reasoning, and therefore we simply use f = 1/r.
So if I choose, for example, a radius of 5 pixels, I identify the frequency 1/5 [3] and the high frequencies will be those greater (numerically) than 1/5. If I choose a smaller radius, for example 2 pixels, we will have even higher frequencies, and in fact 1/2 is bigger than 1/5.
As you can see, everything adds up.
Let's go back to the last image, the one at the bottom. As already said, it corresponds to the low frequencies, and now we know that this must be interpreted as f < 1/50.
It was to be expected that at such low frequencies there would be a uniform gray tone: the starting image contained no colors and consisted of a regular oscillation of tones from black to white, and the blur radius (which corresponds to the separation frequency) was specifically chosen to make even the slowest-changing details disappear. Finally, the gray is a medium gray because all the oscillations in the initial image are symmetrical with respect to the mid tone.
Immediately above we find the layer with the details at the higher frequencies, f > 1/50, which of course means all the details of the original image.
Going up again, in second place we find the image containing the frequencies f < 1/5: here you find almost all of the slower-changing detail and, above it, a uniform stripe of medium gray. Since the cut is at a radius of 5 pixels, the faster-changing details have disappeared from these low frequencies, but the slower details of the lower part of the image have not.
Finally, in the topmost image you will find all the frequencies greater than 1/5, which correspond only to the fastest oscillations of the upper part of the image: in the upper half of this first image you find all the details of the rapidly changing part of the original, while in the lower half you find a medium tone. This last peculiarity is perhaps not so obvious: if, with the 5 pixel cut, the slowest-varying details remain in the low frequency layer (second image from the top), they cannot also appear in the high frequency layer.
I advise you to spend some time thinking about what has just been said, because it is easy to get confused. Let me summarize it another way. I have shown you two frequency separations: one (the first two images) with the Gaussian blur radius at 5 pixels, the other (the second two images) with a blur radius of 50 pixels. In the first case, the high frequencies refer to tonal variations that occur in a space smaller than 5 pixels, while the low frequencies refer to tonal variations that occur on spatial scales greater than 5 pixels. In the second separation - the one with a blur radius of 50 pixels - the high frequencies refer to what happens (or rather, varies) on spatial scales below 50 pixels, the low ones to what happens above.
You can see that here - in a nutshell - the algorithm we were looking for is ready: if from the layer containing the frequencies above the 50 pixel cut (i.e. the layer with f > 1/50) we subtract those above the 5 pixel cut (f > 1/5), what are we left with? Of course, the frequencies between 1/50 and 1/5, i.e. the details on scales between 5 and 50 pixels.
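In numpy terms (a float-arithmetic sketch, with sigma again standing in for the radius), the subtraction just described looks like this:

```python
# High-frequency layers relative to the two cuts (float image in [0, 1]):
high_50 = img - gaussian_filter(img, sigma=50)   # f > 1/50
high_5  = img - gaussian_filter(img, sigma=5)    # f > 1/5

# Subtracting one from the other leaves only the intermediate band:
band = high_50 - high_5                          # 1/50 < f < 1/5
# which is the same as the difference of the two blurred versions:
# band == gaussian_filter(img, 5) - gaussian_filter(img, 50)
```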
Let's see this in practice. Suppose we want to separate three groups of frequencies, choosing two reference radii, for example 5 and 50 pixels. We then make three copies of the starting image, since in the end we want to have a layer with the high frequencies (f > 1/5), a layer with the intermediate frequencies (1/50 < f < 1/5) and finally one with the low frequencies (f < 1/50).
The "very high frequencies" will have to remain in the topmost layer, so let's proceed as usual: let's make the topmost layer inactive after renaming it, for example "f > 1/5" to remind us. On the second layer we apply a Gaussian blur with a radius of 5 pixels. Then we apply the second layer over the top one in "Subtract" mode as usual with offset 128 and scale at 2. We put the layer "f > 1/5" in "Linear Light" mode. Note that the original image is reconstructed: this is the usual separation of frequencies with respect to a radius of 5 pixels (or a frequency of 1/5)
Now, from the second layer, which contains the frequencies f < 1/5, we must subtract the even slower frequencies in order to be left with the frequencies between 1/50 and 1/5. Let's rename the second layer "1/50 < f < 1/5" for the record, make it inactive, and apply a Gaussian blur with a radius of 50 pixels to the third layer. We then apply this third layer to the second in "Subtract" mode, as usual with offset 128 and scale 2. In this way we have subtracted from the second layer the frequencies lower than 1/50, and since it already contained only those with f < 1/5, the second layer really does match its name.
We put the layer "1/50 < f < 1/5" in "Linear Light" mode and call the third layer "f < 1/50" and of course the game is done. We have 3 levels each containing those parts of the original image that vary on very short, intermediate or long scales with respect to the two radii
(or frequencies) that we have chosen. Of course the set of these 3 levels reconstructs the original image (not exactly as we will see)
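Here is the whole three-band recipe condensed into a few lines of float arithmetic (again a sketch, with sigma in place of the Photoshop radius), where the reconstruction is exact because no 8-bit rounding is involved:

```python
low_5  = gaussian_filter(img, sigma=5)
low_50 = gaussian_filter(img, sigma=50)

f_high = img - low_5       # layer "f > 1/5"
f_mid  = low_5 - low_50    # layer "1/50 < f < 1/5"
f_low  = low_50            # layer "f < 1/50"

# The three bands add back up to the original image:
assert np.allclose(f_high + f_mid + f_low, img)
```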
And the radii chosen are naturally arbitrary.
Before we go any further and figure out what to do with this apparatus, we need to check that everything is OK. So let's duplicate the starting image again and bring it to the top of all the layers in "Difference" blend mode: we apparently get black, and in the histogram we find a mean of 0.64 and a standard deviation of 0.52. Not bad (remember that this is a simple image), and we know how to correct it: we will eventually have to go to 16 bit.
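The "Difference" check corresponds to looking at the absolute difference between the reconstruction and the original; with the 8-bit variables from the earlier sketch (a two-band case, so the numbers will not coincide with those quoted here) it would read:

```python
diff = np.abs(rebuilt - orig)     # what the "Difference" layer shows
print(diff.mean(), diff.std())    # the statistics read off the histogram
```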
It doesn't make much sense to do anything more complicated on such a simple image, that is, one that contains so few different scales of tonal variation. The double stripe from which we started actually contains only two types of oscillation, that of the upper part, which occurs at the 5 pixel scale, and that of the lower part at the 50 pixel scale (or, alternatively, at frequencies of 1/5 and 1/50): in other words, it is an image that contains only 2 frequencies.
So I chose a different - or, if you prefer, more complete - image and tried a 4-frequency separation, choosing 5, 10 and 25 pixels as the blur radii.
From left to right and top to bottom: original image, very high frequencies, mid-high frequencies, mid-low frequencies and low frequencies. (66% magnification)
In the case of this image - I remind you that the quality of the frequency separation, if it is not performed at 16 bit, depends both on the image and on the chosen separation radii - we obtain a mean of 0.99 and a standard deviation of 0.97, against 0.66 and 0.63 respectively for a "normal" two-frequency separation (with the radius, for example, at 10 pixels). These numbers indicate that continuing this game without going to 16 bit causes a progressive degradation of the reconstructed image.
But there's another problem. Not so relevant, actually, but I mention it: Photoshop doesn't allow Gaussian blurs with a radius greater than 1000 pixels, so the accessible range of frequencies - in the sense of their separation - is somewhat limited. Here I therefore present the maximum possible separation, at radii of 10, 25, 50, 100, 200, 500 and 1000 pixels, i.e. at frequencies 1/10, 1/25 and so on.
Having made this separation after converting to 16 bit, we verify in the usual way that the image reconstructed by placing all these layers except the last one in "Linear Light" blend mode coincides exactly with the original. We could of course have chosen other sets of separation frequencies, even more numerous, but we could never have gone above a radius of 1000 pixels (or below a frequency of 1/1000). Not a huge problem with today's sensors, even the 50MP ones, which present us with 6880 px on the long side.
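The same construction generalizes to any list of radii. A sketch of the general case, in float and with the usual sigma-for-radius caveat, where the radii are taken in ascending order so that the finest cut comes first:

```python
def separate_bands(image, radii):
    """One high-pass band per radius plus the final low-pass residual."""
    blurs = [gaussian_filter(image, sigma=r) for r in sorted(radii)]
    bands = [image - blurs[0]]                          # above the finest cut
    bands += [a - b for a, b in zip(blurs, blurs[1:])]  # intermediate bands
    bands.append(blurs[-1])                             # below the coarsest cut
    return bands

layers = separate_bands(img, [10, 25, 50, 100, 200, 500, 1000])
assert np.allclose(sum(layers), img)   # the stack rebuilds the original
```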
How to operate on the layers with separated frequencies
First of all, it is advisable to build a group that contains all the layers thus created, in this case the 8 that I have shown you. The second thing to do is to duplicate each of these layers, giving each duplicate a different name - for example by adding the suffix "-Edit" - and creating a clipping mask on each one; all the duplicates should be in "Normal" blend mode. In this way we will be able to operate on each layer without altering the frequency separation we have obtained.
The simplest way to intervene is the traditional one used in frequency separation for skin retouching: insert an empty layer at some point in the sequence (not above the highest or below the lowest, if we don't want to nullify all the work of separation) and, with a very light brush - opacity typically between 5 and 10% - go over the color in the area where we want to modify the tone, using a color similar to the pre-existing one. We won't have to worry about detail when doing this: the detail will automatically be brought back in by the higher-frequency layers above.
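As a rough numerical analogue of that brush stroke, reusing the `layers` list from the previous sketch (purely illustrative: the opacity, the tone and the choice of cut are arbitrary, and a real brush would act only on a local area rather than on a whole band):

```python
opacity, tone = 0.08, 0.55                 # hypothetical brush settings
low_part = sum(layers[3:])                 # composite of the slower bands
low_part = (1 - opacity) * low_part + opacity * tone   # the low-opacity "paint"
retouched = sum(layers[:3]) + low_part     # finer detail comes back on top
```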
In the next post in this series, we'll go deeper into how this sophisticated frequency separation can be used.
______
[1] I use - for the last time - double quotes to underline once again the impropriety of the term frequency in this context.
[2] Here I use an abbreviated (and, I must say, rather incorrect) expression: by "higher/lower frequencies" I mean those corresponding respectively to more/less rapid tonal variations with respect to the cut chosen with a certain blur radius.
[3] For the more precise among you, I note that the "pixel" is a dimensionless unit of measurement, unlike centimeters, meters or inches, which helps us treat as irrelevant the (inverse) proportionality constant C between radii and frequencies. Consequently, the "frequencies" we are talking about will also be dimensionless, consistently with everything done so far.