Operating System Learning Notes (10): Operating System Knowledge Points
I recently finished reading an entire operating system textbook. While I still have a general impression of the material, I'll first organize the knowledge points.
The original purpose of texturing is to use an image to control a model's appearance. With texture mapping, we can “glue” an image onto the surface of a model and control the model's color texel by texel (texels, so named to distinguish them from pixels).
When modeling, artists usually use the texture-unwrapping tools in their modeling software to store texture-mapping coordinates on each vertex. These coordinates define the 2D location in the texture that corresponds to the vertex. They are usually written as a 2D pair (u, v), where u is the horizontal coordinate and v is the vertical coordinate, which is why texture coordinates are also called uv coordinates.
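To make this concrete, here is a minimal sketch of a Unity shader (built-in render pipeline) that passes the per-vertex uv coordinates through to the fragment stage and samples a texture with them. The shader name and the _MainTex property are illustrative choices, not taken from the original post:

```shaderlab
Shader "Examples/SimpleTexture"
{
    Properties
    {
        _MainTex ("Main Texture", 2D) = "white" {}
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float4 _MainTex_ST; // tiling/offset settings for _MainTex

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv     : TEXCOORD0; // the per-vertex (u, v) coordinates
            };

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv  : TEXCOORD0;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv  = TRANSFORM_TEX(v.uv, _MainTex); // apply tiling/offset
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // fetch the texel at the interpolated uv coordinate
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}
```

The uv values are written per vertex and interpolated across the triangle, so every fragment ends up with its own texture coordinate to sample with.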
A previous blog discussed the meaning of a process's existence and why we needed to invent the concept of a process in the first place.
But this raises several successive problems, which are also the problems encountered with multiprogramming:
In this blog we will explain the first of these problems: how the processor is scheduled.
Recently, I have been developing a Unity SDK for character avatars. The goal is for the SDK to dynamically download the latest character and clothing models without having to build them into the SDK in advance.
At first, the solution I planned to use was the traditional, mature Addressables approach of packaging asset bundles. But how to divide the bundles is a problem, and this approach requires importing the model into Unity, building the bundles, and then updating the catalog every time. The whole process is fairly tedious.
So I wondered whether I could download the FBX files and textures directly from a remote server and load the FBX at runtime, which would make the whole release process much simpler.
After some searching, I found the TriLib library, which makes it very convenient to load models remotely. However, I also ran into some problems, and in order to track them down, I read through its source code.
We are now officially starting to learn Shaders that can be applied in practice, beginning with lighting models. The code and concepts in this article are interpreted from the book “Getting Started with Unity Shader Essentials”.
This article mainly explains the principles behind several of the lighting models in the book and analyzes their code.
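As a preview of the style of those examples, here is a minimal sketch of a per-fragment diffuse (Lambert) lighting shader for the built-in pipeline. The shader name, the _Diffuse property, and the struct names are my own illustrative choices rather than the book's exact listing:

```shaderlab
Shader "Examples/DiffusePixelLevel"
{
    Properties
    {
        _Diffuse ("Diffuse", Color) = (1, 1, 1, 1)
    }
    SubShader
    {
        Pass
        {
            Tags { "LightMode" = "ForwardBase" }

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"
            #include "Lighting.cginc"

            fixed4 _Diffuse;

            struct a2v
            {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
            };

            struct v2f
            {
                float4 pos         : SV_POSITION;
                float3 worldNormal : TEXCOORD0;
            };

            v2f vert (a2v v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.worldNormal = UnityObjectToWorldNormal(v.normal);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // ambient contribution from the environment
                fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz;
                fixed3 worldNormal = normalize(i.worldNormal);
                fixed3 worldLightDir = normalize(_WorldSpaceLightPos0.xyz);
                // Lambert diffuse: c = c_light * m_diffuse * max(0, n . l)
                fixed3 diffuse = _LightColor0.rgb * _Diffuse.rgb
                               * saturate(dot(worldNormal, worldLightDir));
                return fixed4(ambient + diffuse, 1.0);
            }
            ENDCG
        }
    }
    Fallback "Diffuse"
}
```

Computing the dot product per fragment (rather than per vertex) gives smoother shading at the cost of more fragment work, which is the trade-off the book's vertex-level and pixel-level variants illustrate.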
The previous blogs covered the basic principles of game rendering and how to write a Shader in Unity. Before going further, this blog first introduces two concepts:
In the built-in rendering pipeline, surface shaders are a simplified way to write shaders that interact with lighting.
Writing shaders that interact with lighting is very complex: there are different light source types, different shadow options, and different rendering paths (forward and deferred rendering), and a shader must somehow cope with all of this complexity.
Surface shaders are a code-generation approach that makes writing lit shaders much easier than using low-level vertex/pixel shader programs.
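For a sense of how much boilerplate this saves, here is a minimal surface shader sketch for the built-in pipeline; the shader name and _MainTex property are illustrative. The single #pragma surface directive is what Unity expands into the per-light vertex/fragment code:

```shaderlab
Shader "Examples/SimpleSurface"
{
    Properties
    {
        _MainTex ("Albedo", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }

        CGPROGRAM
        // one directive replaces the hand-written lighting code:
        // Unity generates the forward/deferred variants from it
        #pragma surface surf Lambert

        sampler2D _MainTex;

        struct Input
        {
            float2 uv_MainTex; // uv set automatically matched to _MainTex
        };

        void surf (Input IN, inout SurfaceOutput o)
        {
            // we only describe the surface; Unity handles the lights
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    Fallback "Diffuse"
}
```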
In Unity, we now use HLSL syntax to write Shader Programs, but Unity originally used the Cg language, which is why some Unity keywords (the CGPROGRAM block seen in the example above) and the .cginc include-file extension come from Cg. Although Unity no longer uses Cg, these names are still in use.
The previous blog explored some of the concepts behind Unity Shaders, so let's go ahead and look at how to actually write one.
The basics of the rendering pipeline were introduced earlier; next we look at the parts of it that we can program. Before we start, we need to cover some basic concepts and briefly explain some of the properties.
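As a sketch of what that Properties block looks like (the property names here are illustrative placeholders), the common ShaderLab property types can be declared like this:

```shaderlab
Shader "Examples/PropertyTypes"
{
    // everything declared here appears in the Material inspector
    Properties
    {
        _Int    ("Int",        Int)         = 2
        _Float  ("Float",      Float)       = 1.5
        _Range  ("Range",      Range(0, 1)) = 0.5
        _Color  ("Color",      Color)       = (1, 1, 1, 1)
        _Vector ("Vector",     Vector)      = (1, 2, 3, 4)
        _2D     ("2D Texture", 2D)          = "white" {}
        _Cube   ("Cubemap",    Cube)        = "white" {}
        _3D     ("3D Texture", 3D)          = "black" {}
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            // to use a property in code, declare a variable
            // with a matching name and a compatible type
            fixed4 _Color;

            float4 vert (float4 v : POSITION) : SV_POSITION
            {
                return UnityObjectToClipPos(v);
            }

            fixed4 frag () : SV_Target
            {
                return _Color; // flat output of the _Color property
            }
            ENDCG
        }
    }
}
```

Each entry pairs an internal name (used in the shader code) with a display name and a default value, which is what lets artists tweak a material without touching the code.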