Fixed: Bloom postprocess FX artifacts in URP [Re-post]


WARNING: The following article is based on the code of Unity 2020.3.4 and version 10.4 of the Universal Render Pipeline (May 16th, 2021). Although the code has changed a lot since then, the underlying technique is probably still the same, so the text below should be useful for you anyway.

In December 2020, I created a thread in the Unity forums asking for help getting rid of an annoying visual artifact that showed up when rendering some shiny thin-shaped sprites (used to represent light bulbs or lamps, for example). I even filed a bug. The answer I received in both places was that the ugly flickering pixels I was seeing were due to how Bloom FX techniques are implemented in general: a known limitation.

Fortunately, 2 years later I “fixed” it. I use quotes because the artifact is intrinsic to every Bloom technique, so I think it is impossible to remove completely; in the case of my game, though, it is now practically imperceptible. I prefer to be prudent.

In the following video you can see an example of the artifacts and how they look after applying the fix:

(Using DirectX, in Unity 2020.3.4, URP 10.4)

How to fix

To achieve this I had to change the code of the Universal Render Pipeline itself, so you will need to fork Unity's Graphics repository (the full source code of the URP), import it into your project, replacing the built-in package, and change the code.
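If you prefer not to replace files by hand, one common way to point a project at a fork is a local package reference in Packages/manifest.json (the path below is only an example; adjust it to wherever you cloned the repository):

```json
{
  "dependencies": {
    "com.unity.render-pipelines.universal": "file:../Graphics/com.unity.render-pipelines.universal"
  }
}
```

With a `file:` reference, Unity loads the package from your local folder instead of the package cache, so edits to PostProcessPass.cs take effect after a recompile.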
The changes I made are few and simple. In PostProcessPass.cs, search for the SetupBloom() method. In its first 2 lines, remove the “>> 1” so that tw and th store the source texture's size instead of half of it.
That’s it. Compile. Enjoy.

    void SetupBloom(CommandBuffer cmd, int source, Material uberMaterial)
    {
        // Start at half-res
        int tw = m_Descriptor.width;  // >> 1;
        int th = m_Descriptor.height; // >> 1;
        ...

Why the fix works

The value stored in the variables tw and th is used to generate temporary render targets to perform the Bloom technique (downsampling first, then upsampling, applying Gaussian blur). Before this happens, the algorithm in SetupBloom() performs an operation they call “prefilter”, which copies the content of the source texture (the texture used while rendering geometry, where the color and brightness related to the Bloom FX are stored) to the first texture that feeds the downsampling loop.

    // Prefilter
    var desc = GetCompatibleDescriptor(tw, th, m_DefaultHDRFormat);
    cmd.GetTemporaryRT(ShaderConstants._BloomMipDown[0], desc, FilterMode.Bilinear);
    cmd.GetTemporaryRT(ShaderConstants._BloomMipUp[0], desc, FilterMode.Bilinear);
    Blit(cmd, source, ShaderConstants._BloomMipDown[0], bloomMaterial, 0);

In the loop, the content of each texture is processed and copied to another texture whose size is half that of the previous one, and this reduction happens N times (N depends on the initial values of tw and th). Without our change, the “prefilter” step combines the preparation of the first texture's content with a first downsampling (halving), saving some GPU work, instead of copying first and downsampling afterwards.
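To make the effect of the change concrete, here is a toy sketch of the sizes of the temporary targets with and without the “>> 1” (the mip_chain helper is hypothetical, and the real iteration count in URP also depends on the Bloom settings):

```python
def mip_chain(tw, th, levels):
    """Sizes of the temporary render targets: each level halves the previous one."""
    sizes = []
    for _ in range(levels):
        sizes.append((tw, th))
        tw, th = max(1, tw >> 1), max(1, th >> 1)
    return sizes

# Original code: the chain starts at half of a 1920x1080 source (the ">> 1").
print(mip_chain(1920 >> 1, 1080 >> 1, 4))  # [(960, 540), (480, 270), (240, 135), (120, 67)]

# After the fix: the first target matches the source size, so the prefilter
# copies 1:1 instead of halving.
print(mip_chain(1920, 1080, 4))  # [(1920, 1080), (960, 540), (480, 270), (240, 135)]
```

Note that the fixed chain simply contains one extra full-resolution level at the front; the rest of the downsampling and upsampling proceeds as before, which is why the change is so small.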

The format of the source texture (R8G8B8A8_UNORM) is different from the format of the textures used in the process (R11G11B10_FLOAT).
The shader used in the process is called “Bloom.shader”, which is stored in the bloomMaterial variable and contains 4 passes: Prefilter (0), Blur horizontal (1), Blur vertical (2) (both used in the downsampling step) and Upsample (3).
The content of the Prefilter pass and the blur passes is obviously not the same. Apart from the maths in the code, the difference is that Prefilter samples texels from a texture with a different (less precise) format than the destination texture's, applying a bilinear filter (as in the other steps). Somehow, when we get rid of the bilinear filtering (as both textures are now equal in size, thanks to our change) the texels are read and converted properly, and then the Bloom process continues without problems. In other words, the value of each texel of the source texture is no longer interpolated; when the source texture doubled the size of the destination, the UV of texel [2, 4] in the destination corresponded to texel [4, 8] in the source texture (skipping the texels in column 3 and row 7), but the fetched value was interpolated among the texels surrounding [4, 8], deforming it and, I guess, losing information.
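The interpolation problem can be reproduced with a toy model. The sketch below is not the actual Bloom.shader code; the bilinear_sample helper and the half-texel UV math are my assumptions about standard GPU sampling. It shows how a single bright texel, like a thin shiny sprite, is averaged with its dark neighbours when the destination is half the source's size, but is read back exactly when both sizes match:

```python
import math

def bilinear_sample(tex, u, v):
    """Bilinear fetch at normalized UV (u, v); tex is a 2-D list indexed [y][x],
    with texel centers at ((x + 0.5) / w, (y + 0.5) / h), as on the GPU."""
    h, w = len(tex), len(tex[0])
    x, y = u * w - 0.5, v * h - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    def t(xx, yy):  # clamp-to-edge addressing
        return tex[min(max(yy, 0), h - 1)][min(max(xx, 0), w - 1)]
    return (t(x0, y0) * (1 - fx) * (1 - fy) + t(x0 + 1, y0) * fx * (1 - fy)
            + t(x0, y0 + 1) * (1 - fx) * fy + t(x0 + 1, y0 + 1) * fx * fy)

# A 4x4 source that is black except for one bright texel at [x=2, y=1],
# standing in for a thin, shiny sprite.
src = [[0.0] * 4 for _ in range(4)]
src[1][2] = 1.0

# Half-res prefilter (original code): destination texel [1, 0] of a 2x2 target
# samples the source at UV (0.75, 0.25), between four texels, so the bright
# value is averaged with three black neighbours.
half = bilinear_sample(src, (1 + 0.5) / 2, (0 + 0.5) / 2)

# Same-size prefilter (after the fix): destination texel [2, 1] of a 4x4 target
# samples exactly at the center of source texel [2, 1], so the value survives.
full = bilinear_sample(src, (2 + 0.5) / 4, (1 + 0.5) / 4)

print(half, full)  # 0.25 1.0
```

In this toy model the bright texel loses 75% of its energy in the half-res case, and which neighbours it is averaged with shifts as the sprite moves, which would plausibly show up as the flickering described above.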
Honestly, I haven’t studied the code of the shader deeper to fully understand how the data is lost, since I don’t have more time for this.
Feel free to add your ideas and conclusions.
