Hi there! Today I continue to discuss rendering in Unity. This article will be two times bigger than the previous one. Hold tight!

What is a Shader?

Based on what has been described in a previous article, a shader is a small program that can be used to create interesting effects in our projects. It contains mathematical calculations and lists of instructions (commands). They allow us to process color for each pixel in the area covering the object on our computer screen or to work with object transformations (for example, to create dynamic grass or water).

This program allows us to draw elements (using coordinate systems) based on the properties of our polygonal object. Shaders are executed on the GPU because it has a parallel architecture consisting of thousands of small, efficient cores designed to handle tasks simultaneously. The CPU, by contrast, was designed for sequential processing.

Note that there are three types of shader-related files in Unity.

First, we have programs with the “.shader” extension, which are capable of compiling in the different types of rendering pipelines.

Second, we have programs with the “.shadergraph” extension that can only compile to either URP or HDRP. In addition, we have files with the “.hlsl” extension that allow us to create customized functions. These are typically used in a node type called Custom Function, which is found in the Shader Graph.

There are also files with the “.cginc” extension. These are include files: the CGPROGRAM block in a “.shader” file relies on “.cginc” includes, just as HLSLPROGRAM blocks and Shader Graph’s Custom Function nodes rely on “.hlsl” files. Compute shaders, in turn, are stored in files with the “.compute” extension.
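
To illustrate, here is a minimal sketch of what a custom include file might look like; the file name and function are hypothetical and exist only for this example:

// MyHelpers.cginc - a hypothetical include file with a reusable function
#ifndef MY_HELPERS_INCLUDED
#define MY_HELPERS_INCLUDED

// Returns the inverted color, e.g., white becomes black
float3 InvertColor(float3 color)
{
    return 1.0 - color;
}

#endif

Inside a CGPROGRAM block of a “.shader” file, such a file is plugged in with #include "MyHelpers.cginc", after which InvertColor can be called from the vertex or fragment functions.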

In Unity, at least four types of structures are defined for shader generation. Among them, we can find a combination of vertex and fragment shaders, surface shaders for automatic lighting calculation, and compute shaders for more advanced concepts.

A small excursion into the shader language

Before we start writing shaders in general, we should take into account that there are three shader programming languages in Unity:

  • HLSL (High-Level Shader Language – Microsoft)
  • Cg (C for Graphics – NVIDIA) – an obsolete format
  • ShaderLab – a declarative language – Unity

We’re going to quickly run through Cg, ShaderLab, and touch on HLSL a bit.

Cg is a high-level programming language designed to compile on most GPUs. NVIDIA developed it in collaboration with Microsoft, using a syntax very similar to HLSL. The advantage of writing shaders in Cg is that they can be compiled to both HLSL and GLSL (OpenGL Shading Language), which speeds up and optimizes the process of creating materials for video games.

All shaders in Unity (except Shader Graph and Compute) are written in a declarative language called ShaderLab. The syntax of this language allows us to display the properties of the shader in the Unity inspector. This is very interesting because we can manipulate the values of variables and vectors in real time, customizing our shader to get the desired result.

In ShaderLab, we can manually define several properties and commands, among them Fallback. ShaderLab is compatible with the different types of rendering pipelines that exist in Unity.

Fallback is a fundamental block of code in multiplatform games. It allows us to compile another shader in place of one that has failed. If a shader breaks during compilation, Fallback returns a substitute shader, and the graphics hardware can continue its work. This is necessary so that we don’t have to write different shaders for Xbox and PlayStation but can use unified shaders.
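
As a rough sketch of where Fallback sits in a shader (the path and the built-in “Mobile/Diffuse” shader here are just examples):

Shader "OurPath/OurShader"
{
    Properties { ... }
    SubShader
    {
        // Passes written for the target hardware
    }
    // If no SubShader can run on the device, Unity uses this shader instead
    Fallback "Mobile/Diffuse"
}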

Basic shader types in Unity

The basic shader types in Unity allow us to create subroutines to be used for different purposes.

Let’s discuss what each type is responsible for:

  • Standard Surface Shader. This type of shader is characterized by the optimization of writing code that interacts with the base lighting model and only works with Built-In RP.
  • Unlit Shader. It refers to the primary color model and will be the base structure we typically use to create our effects.
  • Image Effect Shader. Structurally, it is very similar to the Unlit shader. These shaders are mainly used in Built-In RP post-processing effects and require the “OnRenderImage()” function (C#).
  • Compute Shader. This type is characterized by the fact that it is executed on the video card, outside the regular rendering pipeline, and is structurally very different from the previously mentioned shaders.
  • RayTracing Shader. An experimental type of shader that allows us to process ray tracing calculations in real time. It works only with HDRP and DXR.
  • Blank Shader Graph. An empty graph-based shader that you can work with without knowledge of shader languages using nodes.
  • Sub Graph. It is a sub-shader that can be used in other Shader Graph shaders.

Rendering in Unity is a difficult topic, so read this guide attentively.

Shader Structure

To analyze the structure of shaders, all we need to do is create a simple shader based on Unlit and analyze it.

When we create a shader for the first time, Unity adds default code to ease the compilation process. In the shader, we can find blocks of code structured so that the GPU can interpret them.

If we open our shader, its structure looks similar to this:

Shader "Unlit/OurSampleShaderUnlit"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags {"RenderType"="Opaque"}
        LOD 100
        
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile_fog
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                UNITY_FOG_COORDS(1)
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;   // the texture declared in Properties
            float4 _MainTex_ST;   // tiling and offset values used by TRANSFORM_TEX

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                UNITY_TRANSFER_FOG(o, o.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                UNITY_APPLY_FOG(i.fogCoord, col);
                return col;
            }
            ENDCG
        }
    }
}

With this example and its basic structure, things become a bit clearer. A shader starts with a path in the Unity editor inspector (InspectorPath) and a name (shaderName), followed by the properties (e.g., textures, vectors, colors), then the SubShader, and, at the end, an optional Fallback that names a backup shader.

Working with ShaderLab

Most shaders start by declaring the shader along with its path in the Unity inspector and its name. The properties, as well as the SubShader and Fallback blocks, are written inside the “Shader” body in the ShaderLab declarative language.

Shader "OurPath/shaderName"
{
// The shader code will be here
}

Both the path and the shader name can be changed as needed within the project.

Shader properties correspond to a list of parameters that can be manipulated from the Unity inspector. There are eight different property types, which differ both in the values they hold and in how they are used. We use these properties depending on the shader we want to create or modify, either dynamically or at runtime. The syntax for declaring a property is as follows:

PropertyName ("display name", type) = defaultValue

“PropertyName” stands for the name of the property (e.g., _MainTex), “display name” specifies the name of the property in the Unity inspector (e.g., Texture), and “type” indicates its type (e.g., Color, Vector, 2D, etc.). Finally, “defaultValue” is the default value assigned to the property (e.g., if the property is “Color,” we can set it to white as (1, 1, 1, 1)).
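
For reference, here is a sketch of a Properties block declaring one property of each common type; the names are arbitrary examples:

Properties
{
    _MainTex ("Texture", 2D) = "white" {}
    _Color ("Tint", Color) = (1, 1, 1, 1)
    _Glossiness ("Smoothness", Range(0, 1)) = 0.5
    _Offset ("Offset", Vector) = (0, 0, 0, 0)
    _Samples ("Samples", Int) = 4
    _Intensity ("Intensity", Float) = 1.0
    _Cube ("Cubemap", Cube) = "" {}
    _Volume ("Volume", 3D) = "" {}
}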

The second component of a shader is the SubShader. Each shader contains at least one SubShader so that it can load correctly. When there is more than one SubShader, Unity processes each of them and selects the most appropriate one according to hardware specifications, starting with the first and ending with the last one in the list (for example, to separate the shader for iOS and Android). When no SubShader is supported, Unity will try to use the Fallback component, which points to a standard shader, so that the hardware can continue its task without graphical errors.

Shader "OurPack/OurShader"
{
    Properties { ... }
    SubShader
    {
        // Here will be the shader configuration
    }
}
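
For example, here is a sketch with two SubShaders ordered from most to least demanding; Unity uses the first one that the hardware (and the active shader LOD) supports, and the Fallback covers everything else:

Shader "OurPack/OurShader"
{
    Properties { ... }
    SubShader
    {
        LOD 300
        // A more expensive version for capable hardware
    }
    SubShader
    {
        LOD 100
        // A cheaper version for limited hardware
    }
    Fallback "Mobile/Diffuse"
}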

You can read more about parameters and sub-shaders here and here.

Blending

Blending is the process of combining two pixels into one. It is supported in both Built-In RP and SRP.

Blending occurs in the merging step, which combines the final color of a fragment with what is already stored in the color buffer. This stage occurs at the end of the rendering pipeline, after the fragment shader stage, when the stencil buffer, Z-buffer, and color blending are executed.

By default, this property is not written in the shader, as it is an optional feature used mainly when working with transparent objects. For example, we use it when we need to draw a low-opacity pixel in front of another pixel (this is often used in UI).

We can enable blending with the following syntax:

Blend [SourceFactor] [DestinationFactor]
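
For instance, here is a sketch of common blend factor combinations inside a Pass; the tags and modes shown are typical examples, not the only options:

SubShader
{
    Tags { "Queue"="Transparent" "RenderType"="Transparent" }
    Pass
    {
        // Traditional transparency: the new pixel is weighted by its alpha
        Blend SrcAlpha OneMinusSrcAlpha
        // Blend One One                // additive, e.g., for glow effects
        // Blend OneMinusDstColor One   // soft additive
    }
}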

You can read more about blending here.

Z-Buffer (Depth-Buffer)

To understand both concepts, we must first learn how the Z-Buffer (also known as Depth Buffer) and the depth test work.

Before we begin, we must consider that pixels have depth values. These values are stored in the Depth-Buffer, which determines whether an object goes in front of or behind another object on the screen.

Depth testing, on the other hand, is a condition that determines whether a pixel will be updated or not in the Depth-Buffer.

As we already know, a pixel has an assigned value that is measured in RGB color and stored in the color buffer. The Z-buffer adds an additional value that measures the depth of the pixel in terms of distance from the camera, but only for surfaces that lie within the camera’s view frustum. This allows two pixels to be the same in color but different in depth.

The closer the object is to the camera, the smaller the Z-buffer value, and pixels with smaller buffer values overwrite pixels with larger values.

To understand the concept, suppose we have a camera and some primitives in our scene, and they are all located on the “Z” space axis.

The word “buffer” refers to the “memory space” where the data will be temporarily stored, so the Z-buffer refers to the depth values between the objects in our scene and the camera that are assigned to each pixel.

We can control the depth test thanks to the ZTest parameter in Unity.
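
As a rough sketch, both ZWrite and ZTest are declared inside a Pass; the values below are the defaults:

Shader "Depth/OurShader"
{
    SubShader
    {
        Pass
        {
            ZWrite On     // write this pixel's depth into the Z-buffer (default)
            ZTest LEqual  // keep the pixel if its depth is less than or equal to the stored value (default)
            // CGPROGRAM with the vertex and fragment stages goes here
        }
    }
}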

Culling

This property, compatible with both Built-In RP and URP/HDRP, controls which of the polygon’s faces will be removed when processing pixel depth.

What does this mean? Recall that a polygon has a front face and a back face. By default, the back faces are culled, so only the front faces are visible (Cull Back). However, we can change this behavior:

  • Cull Off. Both faces of the object are rendered.
  • Cull Back. The back faces of the object are culled, so only the front faces are displayed (default).
  • Cull Front. The front faces of the object are culled, so only the back faces are rendered.

This command has three values, namely Back, Front, and Off. The Back command is active by default; however, usually, the line of code associated with culling is not visible in the shader for optimization purposes. If we want to change the parameters, we have to add the word “Cull” followed by the mode we want to use.

Shader "Culling/OurShader"
{
    Properties 
    {
       [Enum(UnityEngine.Rendering.CullMode)]
       _Cull ("Cull", Float) = 0
    }
    SubShader
    {
        // Cull Front
        // Cull Off
        Cull [_Cull]
    }
}

We can also configure the culling mode dynamically from the Unity inspector via the “UnityEngine.Rendering.CullMode” enum, whose value is passed to the Cull command through the _Cull property.

Using Cg/HLSL

In our shader, we can find at least three default directives. These are preprocessor directives included in Cg or HLSL. Their function is to tell the compiler how to treat certain functions that it otherwise could not recognize as shader stages:

  • #pragma vertex vert. It allows the vertex shader stage, named “vert”, to be compiled on the GPU as a vertex shader.
  • #pragma fragment frag. This directive performs the same function as pragma vertex, with the difference that it allows the fragment shader stage named “frag” to be compiled as a fragment shader.
  • #pragma multi_compile_fog. Unlike the previous directives, it has a dual function. First, multi_compile refers to shader variants, which let us generate versions of our shader with different functionality. Second, the suffix “_fog” includes the fog functionality from the Lighting window in Unity: if we go to Environment / Other Settings, we can activate or deactivate the fog options of our shader.
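
To make the variant idea concrete, here is a minimal sketch that declares a hypothetical keyword, _REDTINT_ON, and branches on it in the fragment stage. It assumes the same v2f struct and _MainTex declaration as in the shader above:

CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// Generates two variants: one with the keyword disabled, one with _REDTINT_ON enabled
#pragma multi_compile _ _REDTINT_ON

fixed4 frag (v2f i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.uv);
    #if defined(_REDTINT_ON)
        col.rgb *= fixed3(1, 0, 0); // this code exists only in the _REDTINT_ON variant
    #endif
    return col;
}
ENDCG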

We can also plug Cg/HLSL include files into our shader. Typically, we do this with UnityCG.cginc. It provides helpers for fog coordinates, transforming object positions to clip space, texture transformations, fog application, and much more, including constants such as UNITY_PI.

The most important thing we can do with Cg/HLSL is write the actual processing functions for the vertex and fragment shader stages, use the data types of these languages, and work with semantics such as texture coordinates (TEXCOORD0).

#pragma vertex vert
#pragma fragment frag

v2f vert (appdata v)
{
   // Ability to work with the vertex shader
}

fixed4 frag (v2f i) : SV_Target
{
    // Ability to work with fragment shader
}

You can read more about Cg/HLSL here.

Shader Graph

Shader Graph is a newer solution for Unity that allows you to create shaders without knowledge of a shader language. It is built around visual nodes (though nobody forbids combining them with shader code). Shader Graph works only with HDRP and URP.

You must remember that the Shader Graph versions developed for Unity 2018 are BETA versions and do not get support, while the versions developed for Unity 2019.1+ are actively supported.

Another issue is that shaders created with this interface may not compile correctly across different versions, because new features are added in every update.

So, is Shader Graph a good tool for shader development? Of course, it is. And it can be handled not only by a graphics programmer but also by a technical designer or artist.

To create a graph, all we need to do is select the type we want in the Unity editor.

Before we start, let’s briefly look at the vertex/fragment shader at the Shader Graph level.

As we can see, there are three defined entry points in the vertex shader stage, namely Position(3), Normal(3), and Tangent(3), just like in a Cg or HLSL shader. Compared to a regular shader, this means that Position(3) = POSITION[n], Normal(3) = NORMAL[n], and Tangent(3) = TANGENT[n].

Why does Shader Graph use three dimensions while Cg or HLSL uses four?

Recall that the fourth dimension of a vector corresponds to its W component, which is one or zero in most cases. When W = 1, the vector corresponds to a point (a position in space), whereas when W = 0, the vector corresponds to a direction in space.
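
Here is a short Cg sketch of the difference, assuming the appdata input (v) from the earlier shader and the built-in unity_ObjectToWorld matrix:

// W = 1: the vector is treated as a point, so the matrix translation is applied
float4 worldPoint = mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1.0));

// W = 0: the vector is treated as a direction, so the matrix translation is ignored
float4 worldUp = mul(unity_ObjectToWorld, float4(0.0, 1.0, 0.0, 0.0));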

So, to set up our shader, we first go to the editor and create two parameters: a color (_Color) and a Texture2D (_MainTex).

To create a link between ShaderLab properties and our program, we must create variables in the CGPROGRAM field. However, this process is different in Shader Graph. We must drag and drop the properties we want to use into the node workspace.

All we need to do to make Texture2D work in conjunction with the Sample Texture 2D node is to connect the output of the _MainTex property to the Texture(T2) input.

To multiply both nodes (color and texture), we just need to call the Multiply node and pass both values as inputs. Finally, we need to connect the resulting color output to the Base Color input at the fragment shader stage. Now save the shader, and we’re done. Our first shader is ready.
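
For comparison, the same texture-times-color operation in a Cg fragment shader would look roughly like this, assuming _MainTex and a fixed4 _Color variable are declared in the CGPROGRAM block:

fixed4 frag (v2f i) : SV_Target
{
    // Sample the texture and tint it with the color property
    fixed4 col = tex2D(_MainTex, i.uv) * _Color;
    return col;
}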

We can also turn to the general graph settings, divided into the Node and Graph sections. They have customizable properties that allow us to change how the color is rendered: we can find options for blending, alpha clipping, and so on. In addition, we can customize the properties of the nodes in our Shader Graph configuration.

The nodes themselves provide analogs of the functions we would otherwise write in Cg/HLSL. As an example, here is the code behind the Clamp node:

void Unity_Clamp_float4(float4 In, float4 Min, float4 Max, out float4 Out)
{
    Out = clamp(In, Min, Max);
}

In this way, visual graphs can simplify our lives and reduce the time spent writing shaders.

Conclusion

I can talk about shaders a lot and for a long time, as well as about the rendering process itself. In this guide, I have covered the basics of rendering in Unity. I haven’t discussed ray tracing shaders or compute shading, I’ve only covered shader languages superficially, and I’ve described these processes merely as the tip of the iceberg.


The Unity Writing Contest is sponsored by Tatum Games. Power your games with MIKROS – a SaaS product that enrolls game developers in an information-sharing ecosystem, also known as data pooling, that helps identify better insights about user behavior, including user spending habits.


This article was originally published by Zhurba Anastasiya on Hackernoon.
