Unity Rendering Principles (7): Surface Shaders and Shader Graph

In the built-in rendering pipeline, surface shaders are a simplified way to write shaders that interact with lighting.

Writing shaders that interact with lighting is complex. There are different light source types, different shadow options, and different rendering paths (forward and deferred), and a shader somehow has to cope with all of that complexity.

Surface shaders are a code generation approach that makes writing lit shaders much easier than writing low-level vertex/pixel shader programs.

Working principle

You define a “surface function” that takes any UVs or other data you need as input and fills in the output structure SurfaceOutput. SurfaceOutput describes the properties of the surface (albedo color, normal, emission, specular, and so on). **This code is written in Cg/HLSL.**

The surface shader compiler then works out what inputs are needed and what outputs are filled in, and generates the actual vertex and pixel shaders, as well as the rendering passes needed to handle forward and deferred rendering.

Standard output structure for surface shaders:

struct SurfaceOutput
{
    fixed3 Albedo;   // diffuse color
    fixed3 Normal;   // tangent space normal, if written
    fixed3 Emission;
    half Specular;   // specular power in 0..1 range
    fixed Gloss;     // specular intensity
    fixed Alpha;     // alpha for transparencies
};

In Unity 5, surface shaders can also use physically based lighting models. The built-in Standard and StandardSpecular lighting models (see below) use the following output structures, respectively:

struct SurfaceOutputStandard
{
    fixed3 Albedo;   // base (diffuse or specular) color
    fixed3 Normal;   // tangent space normal, if written
    half3 Emission;
    half Metallic;   // 0 = non-metal, 1 = metal
    half Smoothness; // 0 = rough, 1 = smooth
    half Occlusion;  // occlusion (default 1)
    fixed Alpha;     // alpha for transparencies
};

struct SurfaceOutputStandardSpecular
{
    fixed3 Albedo;   // diffuse color
    fixed3 Specular; // specular color
    fixed3 Normal;   // tangent space normal, if written
    half3 Emission;
    half Smoothness; // 0 = rough, 1 = smooth
    half Occlusion;  // occlusion (default 1)
    fixed Alpha;     // alpha for transparencies
};

Surface shader input structure

The input structure, Input, usually contains all the texture coordinates the shader needs. Texture coordinates must be named “uv” followed by the texture property name (or start with “uv2” to use the second set of texture coordinates).
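For example, a minimal sketch of this naming convention (the property names _MainTex and _DetailTex are placeholders): a shader sampling _MainTex with the first UV set and _DetailTex with the second would declare:

struct Input {
    float2 uv_MainTex;    // "uv" prefix: first UV set, for the _MainTex property
    float2 uv2_DetailTex; // "uv2" prefix: second UV set, for the _DetailTex property
};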

Other values that can be put into the input structure:

  • float3 viewDir - contains the view direction, for computing parallax effects, rim lighting, and so on.
  • float4 with COLOR semantic - contains the interpolated per-vertex color.
  • float4 screenPos - contains the screen space position, for reflection or screen-space effects. Note that this is not suitable for GrabPass; for that you need to compute custom UVs yourself with the ComputeGrabScreenPos function.
  • float3 worldPos - contains the world space position.
  • float3 worldRefl - contains the world reflection vector if the surface shader does not write to o.Normal. See the Reflect-Diffuse shader for an example.
  • float3 worldNormal - contains the world normal vector if the surface shader does not write to o.Normal.
  • float3 worldRefl; INTERNAL_DATA - contains the world reflection vector if the surface shader does write to o.Normal. To get the reflection vector based on the per-pixel normal map, use WorldReflectionVector (IN, o.Normal). See the Reflect-Bumped shader for an example, sketched after this list.
  • float3 worldNormal; INTERNAL_DATA - contains the world normal vector if the surface shader does write to o.Normal. To get the normal vector based on the per-pixel normal map, use WorldNormalVector (IN, o.Normal).
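As a condensed sketch along the lines of the manual’s Reflect-Bumped example (the _Cube cubemap property is an assumption here), INTERNAL_DATA and WorldReflectionVector are used like this:

struct Input {
    float2 uv_MainTex;
    float2 uv_BumpMap;
    float3 worldRefl;
    INTERNAL_DATA // extra data needed once o.Normal is written
};
sampler2D _MainTex;
sampler2D _BumpMap;
samplerCUBE _Cube;
void surf (Input IN, inout SurfaceOutput o) {
    o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
    o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
    // reflection vector recomputed from the per-pixel normal
    o.Emission = texCUBE (_Cube, WorldReflectionVector (IN, o.Normal)).rgb;
}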

Surface shader compilation directives

Just like any other shader, a surface shader is placed inside a CGPROGRAM…ENDCG code block. The differences are:

**It must be placed inside a SubShader block, not inside a Pass. The surface shader itself is compiled into multiple passes.**
It uses the #pragma surface… directive to indicate that it is a surface shader.
The #pragma surface directive is:

#pragma surface surfaceFunction lightModel [optionalparams]

Required parameters

  • surfaceFunction - the Cg function containing the surface shader code. The function must have the form void surf (Input IN, inout SurfaceOutput o), where Input is a structure you define. Input should contain any texture coordinates and extra automatic variables the surface function needs.
  • lightModel - the lighting model to use. The built-in models are the physically based Standard and StandardSpecular, and the simple non-physically based Lambert (diffuse) and BlinnPhong (specular). See the Custom Lighting Models page to learn how to write your own. A concrete directive example follows this list.
    • The Standard lighting model uses the SurfaceOutputStandard output structure and matches Unity’s Standard (metallic workflow) shader.
    • The StandardSpecular lighting model uses the SurfaceOutputStandardSpecular output structure and matches Unity’s Standard (Specular setup) shader.
    • The Lambert and BlinnPhong lighting models are not physically based (they date from Unity 4.x), but shaders using them can render faster on low-end hardware.
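Putting the required parameters together, typical directives (with a surface function named surf, a common but arbitrary choice) look like:

// Physically based metallic workflow
#pragma surface surf Standard

// Simple non-physically based diffuse lighting
#pragma surface surf Lambert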

Optional parameters

Of these optional parameters, focus first on the custom modifier functions and the code generation options.

Transparency and alpha testing

These are controlled by the alpha and alphatest directives.
Transparency typically comes in two kinds: traditional alpha blending (used to fade objects out) or the more physically plausible “premultiplied blending” (which lets semi-transparent surfaces keep proper specular reflections). Enabling semi-transparency makes the generated surface shader code contain blending commands, whereas enabling alpha cutout makes the generated pixel shader discard fragments based on the given variable.

  • alpha or alpha:auto - picks fade transparency (same as alpha:fade) for simple lighting functions, and premultiplied transparency (same as alpha:premul) for physically based lighting functions.
  • alpha:blend - enables alpha blending.
  • alpha:fade - enables traditional fade transparency.
  • alpha:premul - enables premultiplied alpha transparency.
  • alphatest:VariableName - enables alpha cutout transparency. The cutoff value is taken from a float variable named VariableName. You will likely also want to use the addshadow directive to generate a proper shadow caster pass (see the sketch after this list).
  • keepalpha - by default, opaque surface shaders write 1.0 (white) to the alpha channel, regardless of what the output structure’s Alpha holds or what the lighting function returns. Use this option to keep the lighting function’s alpha value even for opaque surface shaders.
  • decal:add - additive decal shader (e.g. terrain AddPass). Meant for objects that lie on top of other surfaces and use additive blending. See Surface Shader Examples.
  • decal:blend - semi-transparent decal shader. Meant for objects that lie on top of other surfaces and use alpha blending. See Surface Shader Examples.
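As a minimal sketch of alpha cutout (the property name _Cutoff is conventional but arbitrary):

// In Properties: the cutoff value exposed to the material
_Cutoff ("Alpha cutoff", Range(0,1)) = 0.5

// In the CGPROGRAM block: discard fragments whose alpha is below _Cutoff;
// addshadow regenerates the shadow caster pass so shadows match the cutout
#pragma surface surf Lambert alphatest:_Cutoff addshadow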

Custom modifier functions

These can be used to change or compute incoming vertex data, or to change the final computed fragment color, as sketched after the list below.

  • vertex:VertexFunction - custom vertex modification function. It is called at the beginning of the generated vertex shader and can modify or compute per-vertex data. See Surface Shader Examples.
  • finalcolor:ColorFunction - custom final color modification function. See Surface Shader Examples.
  • finalgbuffer:ColorFunction - custom deferred path for changing the G-buffer contents.
  • finalprepass:ColorFunction - custom prepass base path.
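A sketch of what these functions look like (the names vert and mycolor are arbitrary):

#pragma surface surf Lambert vertex:vert finalcolor:mycolor

// Vertex modification function: runs at the start of the generated vertex shader
void vert (inout appdata_full v) {
    v.vertex.xyz += v.normal * 0.1; // e.g. inflate the mesh along its normals
}

// Final color modification function: runs after lighting is computed
void mycolor (Input IN, SurfaceOutput o, inout fixed4 color) {
    color *= fixed4(1.0, 0.9, 0.9, 1.0); // e.g. apply a tint
}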

Shadows and tessellation

Additional directives can be given to control how shadows and tessellation are handled.

  • addshadow - generates a shadow caster pass. Commonly used together with a custom vertex modification function so that shadow casting also picks up procedural vertex animation. Shaders usually don’t need any special shadow handling, as they can reuse a shadow caster pass through the Fallback mechanism.
  • fullforwardshadows - supports all shadow types for all light types in the Forward rendering path. By default, shaders only support shadows from one directional light in forward rendering (to save on internal shader variant count). If you need point or spot light shadows in forward rendering, use this directive.
  • tessellate:TessFunction - use DX11 GPU tessellation; the function computes the tessellation factors. See Surface Shaders with DX11 Tessellation for more information. A combined example follows this list.
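For example, a shader with a procedural vertex animation function (here named vert, an arbitrary choice) that should cast matching shadows and receive all forward-rendered shadow types might combine these directives:

#pragma surface surf Lambert vertex:vert addshadow fullforwardshadows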

Code Generation Options

By default, the generated surface shader code tries to handle all possible lighting/shadow/lightmap scenarios. But in some cases you know you won’t need some of them, and you can adjust the generated code to skip them. This makes the shader smaller and faster to load.

  • exclude_path:deferred, exclude_path:forward, exclude_path:prepass - do not generate passes for the given rendering path (deferred shading, forward, and the legacy deferred lighting path, respectively).
  • noshadow - disables all shadow receiving support in this shader.
  • noambient - do not apply any ambient lighting or light probes.
  • novertexlights - do not apply any light probes or per-vertex lights in forward rendering.
  • nolightmap - disables all lightmap support in this shader.
  • nodynlightmap - disables runtime dynamic global illumination support in this shader.
  • nodirlightmap - disables directional lightmap support in this shader.
  • nofog - disables all built-in fog support.
  • nometa - does not generate the “Meta” pass (used by lightmapping and dynamic global illumination to extract surface information).
  • noforwardadd - disables the forward rendering additive pass. The shader then supports one full directional light, with all other lights computed per-vertex/SH. It also makes the shader smaller.
  • nolppv - disables Light Probe Proxy Volume support in this shader.
  • noshadowmask - disables Shadowmask support (including both Shadowmask and Distance Shadowmask) for this shader.
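For example, a simple stylized shader that needs neither lightmaps, fog, nor per-pixel additive lights could trim its generated variants like this (a sketch; pick only the options whose features you genuinely don’t use):

#pragma surface surf Lambert noforwardadd nolightmap nofog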

Other options

  • softvegetation - render this surface shader only when Soft Vegetation is on.
  • interpolateview - compute the view direction in the vertex shader and interpolate it, instead of computing it in the pixel shader. This makes the pixel shader faster, but costs an extra texture interpolator.
  • halfasview - pass the half-direction vector to the lighting function instead of the view direction. The half-direction is computed and normalized per vertex. This is faster, but not entirely correct.
  • approxview - removed in Unity 5.0. Use interpolateview instead.
  • dualforward - use dual lightmaps in the forward rendering path.
  • dithercrossfade - makes the surface shader support dithering. You can then apply it to GameObjects that use an LOD Group component configured for the cross-fade transition mode.

Rendering paths for surface shaders

In the built-in render pipeline, when using surface shaders, how lighting is applied and which passes of the shader are used depend on the rendering path in use. Each pass in a shader communicates its lighting type through its pass tags.

  • In forward rendering, the ForwardBase and ForwardAdd passes are used.
  • In deferred shading, the Deferred pass is used.
  • In legacy deferred lighting, the PrepassBase and PrepassFinal passes are used.
  • In legacy vertex lit, the Vertex, VertexLMRGBM, and VertexLM passes are used.
  • In any of the above cases, the ShadowCaster pass is used to render shadows and depth textures.

Surface shader examples

Simple Shader Example

We’ll start with a very simple shader and build on it. The shader below sets the surface color to “white”. It uses the built-in Lambert (diffuse) lighting model.

Shader "Example/Diffuse Simple" {
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float4 color : COLOR;
};
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = 1;
}
ENDCG
}
Fallback "Diffuse"
}`

Texture

An all-white object is boring, so let’s add a texture. We add a Properties block to the shader, which gives the material a texture selector.

Shader "Example/Diffuse Texture" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
};
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG
}
Fallback "Diffuse"
}

To understand this code you need the concept of UVs. Simply put, every point on the model has a UV coordinate, which corresponds to a point on the texture; when that point of the model is rendered, its color is looked up from the texture at those coordinates. For a detailed explanation, see: https://www.cnblogs.com/cancantrbl/p/14766502.html

Two variables are used in the surf function, and you may be a little confused about where they come from.
The first is _MainTex: it is declared in the Cg/HLSL code, and Unity binds it to the property in Properties whose name matches it exactly.
The other, uv_MainTex, lives in the Input structure we defined; the compiler automatically fills it with the UV coordinates of the point being shaded.


Normal map

Shader "Example/Diffuse Bump" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
_BumpMap ("Bumpmap", 2D) = "bump" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
float2 uv_BumpMap;
};
sampler2D _MainTex;
sampler2D _BumpMap;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
}
ENDCG
}
Fallback "Diffuse"
}

In this shader we have one more input and one more output: the normal map is the extra input, and we now assign a meaningful value to the output’s normal.

If you are not sure about the concept of normal maps, take a look at the official documentation on normal maps.

Let’s discuss the line o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));.

First, the tex2D (_BumpMap, IN.uv_BumpMap) part: as we already know, it fetches the RGBA (or XYZW) value from the normal map at this point’s UV coordinate. So why can’t we assign that value to Normal directly?
Because normal maps in Unity are stored in the compressed DXT5nm format, in which only the G and A channels carry data, while the normal we need is a 3D vector; it therefore has to be reconstructed. With that in mind, let’s look at the definition of UnpackNormal:

inline fixed3 UnpackNormal(fixed4 packednormal)
{
#if defined(SHADER_API_GLES)
    // Uncompressed RGB normal map: remap 0..1 to -1..1
    return packednormal.xyz * 2 - 1;
#else
    // DXT5nm: x is stored in the alpha (w) channel, y in the green (y) channel
    fixed3 normal;
    normal.xy = packednormal.wy * 2 - 1;
    // Reconstruct z from the fact that the normal is unit length
    normal.z = sqrt(1 - normal.x*normal.x - normal.y * normal.y);
    return normal;
#endif
}

Rim lighting

Now, try adding some rim lighting to highlight the edges of a GameObject. We will add some emissive lighting based on the angle between the surface normal and the view direction. For this we use the built-in surface shader input viewDir.

Shader "Example/Rim" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
_BumpMap ("Bumpmap", 2D) = "bump" {}
_RimColor ("Rim Color", Color) = (0.26,0.19,0.16,0.0)
_RimPower ("Rim Power", Range(0.5,8.0)) = 3.0
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
float2 uv_BumpMap;
float3 viewDir;
};
sampler2D _MainTex;
sampler2D _BumpMap;
float4 _RimColor;
float _RimPower;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
half rim = 1.0 - saturate(dot (normalize(IN.viewDir), o.Normal));
o.Emission = _RimColor.rgb * pow (rim, _RimPower);
}
ENDCG
}
Fallback "Diffuse"
}

This shader adds two new lines. Let’s look at the first one: half rim = 1.0 - saturate(dot (normalize(IN.viewDir), o.Normal));. First, normalize(IN.viewDir) normalizes the view direction; dot then takes its dot product with the surface normal. The saturate function clamps the result to the 0..1 range: anything below 0 becomes 0, anything above 1 becomes 1. So this code measures how the view direction relates to the point’s normal: looking straight at the surface, the dot product is 1 and rim is 0; toward a grazing edge the dot product approaches 0 (and is clamped to 0 once the angle exceeds 90 degrees), so rim approaches 1. You can work through the remaining cases yourself.

Then the second line: o.Emission = _RimColor.rgb * pow (rim, _RimPower);. The pow function raises rim to the power _RimPower to get the emissive intensity at this point; that is multiplied by the RGB of the _RimColor we set, and the result is assigned to Emission, the point’s self-illumination.

The official docs have several other examples; we won’t analyze them one by one here. If you’re interested, continue at: https://docs.unity3d.com/cn/current/Manual/SL-SurfaceShaderExamples.html

What does Unity do behind the scenes?

Behind the scenes, Unity generates a vertex/fragment shader with many passes from the surface shader.

Some of those passes target different rendering paths. For example, by default Unity generates passes with LightMode ForwardBase and ForwardAdd for the forward rendering path, passes with LightMode PrepassBase and PrepassFinal for the deferred path before Unity 5, and a pass with LightMode Deferred for the deferred path from Unity 5 on.

Other passes are used to generate extra information. For example, to extract surface information for lightmapping and dynamic global illumination, Unity generates a pass with LightMode Meta. Which passes are generated follows fixed rules, based on the compilation directives and custom functions in our surface shader. Unity also lets us look at the code it generates automatically: the inspector panel of every compiled surface shader has a “Show generated code” button, and clicking it reveals all the vertex/fragment shaders Unity generated for this surface shader.

Take the pass with LightMode ForwardBase that Unity generates as an example. Unity’s generation process for this pass is roughly as follows:

  1. Copy the code between CGPROGRAM and ENDCG in the surface shader.
  2. Generate the struct v2f_surf (the vertex shader output) from that code, sketched below. If Input defines variables that are never used, the generated struct will not contain them. The struct also holds shadow texture coordinates, lightmap coordinates, per-vertex lighting, and so on.
  3. Generate the vertex shader.
  • If a vertex modification function is defined, it is called first to modify or compute per-vertex data. Unity analyzes which data the function modifies and stores the results, via the Input structure, in the corresponding v2f_surf variables.
  • Compute the remaining v2f_surf variables: vertex position, texture coordinates, normal, per-vertex lighting, lightmap coordinates, and so on.
  • Pass v2f_surf to the fragment shader.
  4. Generate the fragment shader.
  • Fill the Input structure from the v2f_surf variables (texture coordinates, view direction, and so on).
  • Call the custom surface function to fill the SurfaceOutput structure.
  • Call the lighting function to get the initial color value. With the built-in Lambert or BlinnPhong lighting functions, Unity also computes dynamic global illumination and adds it into the lighting model’s calculation.
  • Apply other color contributions. For example, when lightmaps are not used, the per-vertex lighting contribution is added here.
  • Call the final color modification function, if one is defined.
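The exact contents of v2f_surf depend on the directives and the Input structure, but for a simple ForwardBase pass it has roughly this shape (a simplified sketch; the real generated code packs more data and hides much of it behind platform-specific macros):

struct v2f_surf {
    float4 pos : SV_POSITION;       // clip space vertex position
    float2 pack0 : TEXCOORD0;       // packed texture coordinates (e.g. uv_MainTex)
    float3 worldNormal : TEXCOORD1; // world space normal for lighting
    float3 worldPos : TEXCOORD2;    // world space position
    fixed3 vlight : TEXCOORD3;      // per-vertex / SH lighting
    // ...plus shadow coordinates, lightmap coordinates, fog data, etc.
};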

Shader Graph

To simplify the process of writing vertex and fragment shaders, Unity abstracted them into surface shaders.
But that was not enough. To simplify things further, Unity introduced Shader Graph, which lets us build shaders visually.

We simply drag one out:

What this Shader Graph does is sample each point’s RGB value from the texture and assign it to the surface shader outputs Albedo and Emission, respectively.

A Shader Graph is stored as a series of nodes; it is compiled into a surface shader, which in turn is compiled into vertex and fragment shaders.

Although Shader Graph is convenient, it has a downside: it adds a lot of keywords by default, which leads to far too many compiled shader variants. For a simple shader, there is no need to use Shader Graph.

Reference articles:
https://www.jianshu.com/p/3d8a9f3f2430
https://docs.unity3d.com/cn/current/Manual/SL-SurfaceShaders.html