Unity Rendering Principle (12) Transparency Effects and Rendering Queues
Transparency is an effect often used in games. To achieve transparency in real-time rendering, we usually control the transparency (alpha) channel when rendering the model. When transparent blending is enabled, each fragment rendered to the screen carries, in addition to its color value and depth value, another property: transparency. A transparency of 1 means the pixel is completely opaque, and 0 means the pixel is not displayed at all.
In Unity, we usually use one of two methods to achieve a transparency effect: the first is transparency testing (Alpha Test), which cannot actually produce a true semi-transparent effect; the other is transparency blending (Alpha Blending).
For opaque objects, we never had to consider whether to render A first, then B, and finally C, or to use some other order. In fact, for opaque objects the correct result is obtained regardless of rendering order, thanks to the powerful depth buffer (also known as the z-buffer). In real-time rendering, the depth buffer is used to solve the visibility problem: it determines which parts of an object are rendered in front and which parts are occluded.
Its basic idea is: use the values in the depth buffer to judge each fragment's distance from the camera. When a fragment is rendered, its depth value is compared with the value already in the depth buffer. If the fragment is farther from the camera, it should not be rendered to the screen (something opaque is in front of it); otherwise, the fragment overwrites the pixel value in the color buffer, and its depth value is written to the depth buffer (if depth writing is enabled).
But if we want to achieve transparency, things get more troublesome, because once we enable transparency blending we must turn off depth writing (ZWrite). In short, the basic principles of transparency testing and transparency blending are as follows:
- Transparency testing: if a fragment's transparency does not meet a condition (usually, it is below a certain threshold), the fragment is discarded. A discarded fragment is not processed any further and has no effect on the color buffer; otherwise, it is processed like an ordinary opaque fragment, with depth testing, depth writing, and so on. In other words, transparency testing does not need to turn off depth writing; its biggest difference from rendering opaque objects is that it discards some fragments based on their transparency. Although it is simple, the effect is also extreme: a fragment is either completely transparent (completely invisible) or completely opaque, just like an opaque object.
- Transparency blending: this method can achieve a true semi-transparent effect. It uses the current fragment's transparency as a blend factor and blends it with the color value already stored in the color buffer to obtain a new color. However, **transparency blending requires turning off depth writing, which forces us to be very careful about the rendering order of objects**. Note that transparency blending only turns off depth writing, not depth testing. This means that when a fragment is rendered with transparency blending, its depth value is still compared with the value in the depth buffer; if the fragment is farther from the camera, no blending is performed at all. Consequently, when an opaque object sits in front of a transparent object and is rendered first, it still correctly occludes the transparent object. In other words, for transparency blending, the depth buffer is read-only.
Why Rendering Order Matters
Transparency blending requires depth writing to be turned off, which in turn requires us to handle the rendering order of transparent objects carefully. So, why must depth writing be turned off?
If depth writing were not turned off, a semi-transparent surface would write its depth into the buffer; the depth test would then conclude that the surfaces behind it are farther from the camera and cull them, so we could not see the objects behind through the semi-transparent surface.
Let’s consider the simplest case, assuming there are two objects A and B in the scene, where A is a translucent object, B is an opaque object, and A is closer to the camera.
Let’s consider what the results of different rendering orders will be.
- In the first case, we render B first and then A. Since opaque objects have depth testing and depth writing enabled, and the depth buffer contains no valid data at this point, B writes to both the color buffer and the depth buffer. Then we render A; the transparent object still performs the depth test, finds that A is closer to the camera, and blends A's color with B's color in the color buffer using A's transparency, producing the correct semi-transparent effect.
- In the second case, we render A first and then B. When A is rendered, the depth buffer contains no valid data, so A writes directly to the color buffer; but since depth writing is turned off for semi-transparent objects, A does not modify the depth buffer. When B is rendered, it performs a depth test, finds no value in the depth buffer yet, and writes directly to the color buffer, overwriting A's color. Visually, B appears in front of A, which is wrong.
From this example we can see how important the rendering order becomes once depth writing is turned off, and we learn that semi-transparent objects should be rendered after all opaque objects. But if both objects are semi-transparent, does the rendering order still matter? The answer is yes. Assume again that there are two objects A and B in the scene, both semi-transparent, with A closer to the camera.
Again, let's consider the results of different rendering orders.
- In the first case, we render B and then A. B writes to the color buffer normally, and A is then blended with B's color in the color buffer, producing the correct semi-transparent effect.
- In the second case, we render A first and then B. A writes to the color buffer first, and B is then blended with A's color in the color buffer, so the blending is completely reversed: it looks as if B is in front of A, and we get the wrong semi-transparent result.
This example shows that semi-transparent objects also need to follow a certain rendering order.
Based on these two observations, rendering engines generally sort objects before rendering them. The common approach is:
- First render all opaque objects and enable their depth testing and depth writing.
- Sort the semi-transparent objects by their distance from the camera, then render them back to front, with depth testing enabled but depth writing disabled.
So is the problem solved? Unfortunately, not yet. In some cases semi-transparent objects will still be drawn with incorrect occlusion. If we think about it, the rendering order in step 2 above is still ambiguous: we "sort by distance from the camera", but how is this distance determined? You might think of the depth values used by the depth test, but the depth buffer is pixel-level, i.e. each pixel has its own depth value, whereas here we are sorting whole objects. That means the sorting result can only be "all of A in front of B" or "all of A behind B"; if the objects overlap cyclically, this method can never produce a correct result.
As shown in the figure above, three objects overlap each other cyclically, so we cannot obtain a correct sorting order. In this case, we can split an object into two parts and then sort the parts correctly.
But even if we solve the cyclic-overlap problem by splitting, other cases remain, as shown in the right-hand figure above. An object's mesh usually occupies some extent in space, which means the depth value of each point on the mesh may differ. Which depth should we use for the whole object: the mesh's midpoint, its farthest point, or its nearest point? Whichever we choose, there are cases like the one in the figure where it is wrong.
This means that once a particular metric is chosen, some configurations of semi-transparent objects will produce wrong occlusion relationships; the usual remedy is, again, to split the mesh.
Although some situations will always produce wrong results, splitting meshes solves most problems, and most engines implement it in roughly the same way. Of course, to minimize sorting errors we should keep models as convex as possible, and consider splitting a complex model into several submodels that can be sorted independently.
Render Queues in Unity
To solve the rendering-order problem, Unity provides render queues (Render Queue). We can use the SubShader's Queue tag to decide which render queue our model belongs to; for details, see the official docs: https://docs.unity3d.com/ScriptReference/Rendering.RenderQueue.html
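As a minimal sketch, a SubShader declares its queue in its Tags block. The named queues map to the integer values listed in the documentation above:

```shaderlab
// Sketch: declaring the render queue in a SubShader's Tags block.
// Named queues map to integers: Background = 1000, Geometry = 2000 (the
// default), AlphaTest = 2450, Transparent = 3000, Overlay = 4000.
// An offset such as "Transparent+10" renders after the base queue (3010).
SubShader
{
    Tags { "Queue" = "Transparent" }
    // ... Passes go here ...
}
```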
Transparency test
Definition: if a fragment's transparency does not meet the condition, the fragment is discarded; a discarded fragment is not processed any further and does not affect the color buffer. Otherwise, it is processed like an ordinary opaque fragment.
Usually we call the clip function in the fragment shader to perform the transparency test. clip is a CG function, defined as follows:
Function: void clip(float4 x); void clip(float3 x); void clip(float2 x); void clip(float x)
Parameters: the scalar or vector condition used for clipping
Description: if any component of the given parameter is negative, the current fragment is discarded and its output color never reaches the color buffer
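In other words, clip is shorthand for a conditional discard. A sketch (the _Cutoff and _MainTex names are assumptions, mirroring Unity's built-in cutout shaders):

```shaderlab
// Inside a CGPROGRAM fragment shader (sketch).
fixed4 frag (v2f i) : SV_Target
{
    fixed4 texColor = tex2D(_MainTex, i.uv);
    // Discard this fragment when its alpha is below the threshold.
    // Equivalent to: if ((texColor.a - _Cutoff) < 0.0) discard;
    clip(texColor.a - _Cutoff);
    return texColor;
}
```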
Let's write a simple Shader that uses this clip function.

```shaderlab
Shader "Unlit/AlphaTest"
```
From the discussion of render queues in the previous section, we know that transparency testing in Unity uses the render queue named AlphaTest, so we set the Queue tag to AlphaTest. The RenderType tag lets Unity put the shader into a predefined group (here, the TransparentCutout group) to indicate that it is a shader that uses transparency testing; the RenderType tag is often used for shader replacement. We also set IgnoreProjector to True, which means the shader is not affected by Projectors. Shaders that use transparency testing should usually set these three tags in their SubShader.
The rest of the shader code is very ordinary; it just calls the clip function.
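A minimal version of such a shader might look like the following. This is a sketch rather than the author's exact listing; the property names (_MainTex, _Cutoff) and the unlit structure are assumptions:

```shaderlab
Shader "Unlit/AlphaTest"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _Cutoff ("Alpha Cutoff", Range(0, 1)) = 0.5
    }
    SubShader
    {
        // The three tags discussed above.
        Tags { "Queue" = "AlphaTest" "RenderType" = "TransparentCutout" "IgnoreProjector" = "True" }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float4 _MainTex_ST;
            fixed _Cutoff;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 texColor = tex2D(_MainTex, i.uv);
                // Discard fragments whose alpha is below the threshold.
                clip(texColor.a - _Cutoff);
                return texColor;
            }
            ENDCG
        }
    }
    FallBack "Transparent/Cutout/VertexLit"
}
```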
Transparency blending
Definition: this method can achieve a true semi-transparent effect. It uses the current fragment's transparency as the blend factor and blends it with the color value already stored in the color buffer to obtain a new color. But transparency blending requires turning off depth writing, which makes us very careful about the rendering order of objects.
To blend, we need the Blend command that Unity provides for setting the blend mode. To achieve semi-transparency, the fragment's own color must be blended with the color value already in the color buffer, and the function used for blending is determined by this command.
The specific semantics of the Blend command can be seen in the official doc: https://docs.unity3d.com/2020.3/Documentation/Manual/SL-Blend.html
Suppose we use the `Blend SrcFactor DstFactor, SrcFactorA DstFactorA` form of the command for blending. Note that setting the blend factors with this command also enables blending itself: setting the fragment shader's alpha channel is only meaningful once blending is enabled, and Unity enables it for us automatically when we use the Blend command.
We set the source color's blend factor SrcFactor to SrcAlpha and the destination color's blend factor DstFactor to OneMinusSrcAlpha, which means the blended color is: `finalColor = SrcAlpha * SrcColor + (1 - SrcAlpha) * DstColor`
```shaderlab
Shader "Unlit/AlphaBlend"
```
The differences between this shader and the AlphaTest shader above are that ZWrite is turned off, the blend mode is set, and the clip call is removed from the fragment shader.
However, because this method turns off depth writing, a model with complex self-occlusion will show the wrong transparency effect due to incorrect sorting.
So can we find a way to use depth writing again? The answer is yes: use two Passes.
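A sketch of the blended version (again with assumed property names; the key differences from the alpha-test shader are commented):

```shaderlab
Shader "Unlit/AlphaBlend"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _AlphaScale ("Alpha Scale", Range(0, 1)) = 1
    }
    SubShader
    {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" "IgnoreProjector" = "True" }

        Pass
        {
            ZWrite Off                        // depth writing off for semi-transparency
            Blend SrcAlpha OneMinusSrcAlpha   // finalColor = SrcAlpha*Src + (1-SrcAlpha)*Dst

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float4 _MainTex_ST;
            fixed _AlphaScale;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 texColor = tex2D(_MainTex, i.uv);
                // No clip() here; alpha feeds the blend instead.
                return fixed4(texColor.rgb, texColor.a * _AlphaScale);
            }
            ENDCG
        }
    }
    FallBack "Transparent/VertexLit"
}
```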
Semi-transparency with depth writing enabled
Since using a single Pass for semi-transparency is problematic, we can consider using two.
The first Pass enables depth writing but outputs no color; its only purpose is to write the model's depth values into the depth buffer. The second Pass performs normal transparency blending. Because the previous Pass has already produced correct per-pixel depth information, this Pass can render the transparency according to per-pixel depth sorting. The disadvantage of this method is that the extra Pass has a certain performance cost.
Compared with the version that leaves depth writing off, this shader has one extra Pass at the beginning.
```shaderlab
Pass
```
This Pass enables depth writing and writes the model's depth information into the depth buffer, thereby culling the fragments of the model that are occluded by the model itself. The first line of the Pass enables depth writing; the second line uses the ColorMask command to set the color channel write mask. Its value can be any combination of R, G, B, and A; a value of 0 means that no color channel is written.
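Such a depth-only pass might look like this (a sketch):

```shaderlab
// First pass of the two-pass approach (sketch): write depth only,
// output no color.
Pass
{
    ZWrite On     // write the model's depth into the depth buffer
    ColorMask 0   // write no color channels

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    float4 vert (float4 vertex : POSITION) : SV_POSITION
    {
        return UnityObjectToClipPos(vertex);
    }

    fixed4 frag () : SV_Target
    {
        // This value never reaches the color buffer because of ColorMask 0.
        return fixed4(0, 0, 0, 0);
    }
    ENDCG
}
```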
Transparency effect for double-sided rendering
In real life, if an object is transparent, we can not only see other objects through it but also see its internal structure and back faces. But in the transparency effects implemented so far, whether with transparency testing or transparency blending, we cannot see the inside of the cube or its back, so the object looks as if it were only half there. This is because, by default, the rendering engine culls the primitives facing away from the camera and renders only the front faces of the object. If we want double-sided rendering, we can use the Cull directive to control which faces are culled.
In Unity, the syntax of the Cull directive is as follows:
Cull Back | Front | Off
If set to Back, primitives facing away from the camera are not rendered (this is the default); Front culls primitives facing the camera; Off disables culling entirely, so all primitives are rendered.
For transparency testing, double-sided rendering only requires turning off culling in the Pass.
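Concretely (a sketch), inside the alpha-test shader's Pass:

```shaderlab
Pass
{
    // Turn off culling so both front and back faces are rendered.
    // Depth writing stays on, so the per-pixel depth test still sorts
    // the two sides correctly.
    Cull Off
    // ... the rest of the alpha-test Pass is unchanged ...
}
```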
Transparency blending is more troublesome. Compared with transparency testing, making transparency blending render double-sided is more complicated, because transparency blending requires turning off depth writing, which is "the beginning of all chaos".
To get correct transparency results, the rendering order is crucial: we need to guarantee that fragments are rendered back to front. For transparency testing, since depth writing is not turned off, the depth buffer sorts at pixel granularity and guarantees correctness. But once depth writing is off, we must carefully control the rendering order ourselves to get the correct depth relationships.
If we simply turn off culling, we cannot guarantee the rendering order of an object's own front and back faces, and we may get wrong semi-transparent results.
To solve this, we split the double-sided rendering into two Passes: the first renders only the back faces, the second only the front faces. Since Unity executes the Passes in a SubShader in order, the back is guaranteed to be rendered before the front, and the correct depth relationship is preserved.
Concretely, duplicate the Pass from the earlier transparency-blending code, then add Cull Front to the first copy and Cull Back to the second.
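Structurally, the result looks like this (a sketch, omitting the CG code shared with the single-pass alpha-blend shader):

```shaderlab
SubShader
{
    Tags { "Queue" = "Transparent" "RenderType" = "Transparent" "IgnoreProjector" = "True" }

    // Pass 1: back faces only. Unity executes Passes in order, so the
    // back is guaranteed to be rendered before the front.
    Pass
    {
        Cull Front   // cull front faces, i.e. render only back faces
        ZWrite Off
        Blend SrcAlpha OneMinusSrcAlpha
        // ... same CGPROGRAM as the single-pass alpha-blend shader ...
    }

    // Pass 2: front faces only, blended over the back faces.
    Pass
    {
        Cull Back    // cull back faces, i.e. render only front faces
        ZWrite Off
        Blend SrcAlpha OneMinusSrcAlpha
        // ... same CGPROGRAM as above ...
    }
}
```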