Saturday, November 24, 2012

Tutorial 24 - Basic 3D Graphics

Figure 1. Ripple Shader.

24.1 Setting the Scene


Version 1.5 of Codea is a huge update. In addition to camera access, image blend modes and a tween library for simple animation, it includes a shader editor and full access to GLSL (OpenGL Shading Language) vertex and fragment shaders, which can be used to create effects like the ripple shader shown in Figure 1. To understand how to implement and use shaders we need to take a few steps back and cover some graphical foundations.

24.2 OpenGL


OpenGL is a multipurpose, open-standard graphics library. Strictly speaking it is a specification, but it is usually thought of as an Application Programming Interface (API), the concrete manifestation of that specification. The OpenGL API uses C, and GLSL is similar in structure to C but has its own peculiarities. As a C API, OpenGL integrates seamlessly with Objective-C based Cocoa Touch applications. The OpenGL API is defined as a state machine (see Tutorial 5): almost all OpenGL functions set or retrieve some piece of state, and the only functions that do not are those that use the currently set state to cause rendering to happen.

OpenGL for Embedded Systems (OpenGL ES) is a simplified version of OpenGL that is easier to learn and to implement on mobile graphics hardware. Apple provides implementations of both OpenGL ES 1.1 and OpenGL ES 2.0; Codea uses 2.0.

OpenGL ES 2.0 is very similar to OpenGL ES 1.1, but removes functions that target the fixed-function vertex and fragment pipeline stages. Instead, it introduces new functions that provide access to a general-purpose shader-based pipeline. Shaders allow you to write custom vertex and fragment functions that execute directly on the graphics hardware (which is very fast). 

24.3 Rendering Graphics


Everything displayed on your iPad screen is a two-dimensional array of pixels. Each pixel has a particular colour defined by red, green, blue and alpha (transparency) values in the range 0 to 1. The purpose of the graphics pipeline is to determine what colour to put in each pixel to best represent your image. Displaying a 2D image is fairly straightforward, but what about 3D? The process of converting a 3D world into a 2D image is called rendering.

There are many different rendering systems. The one that we will concern ourselves with is called rasterization, and a rendering system that uses rasterization is called a rasterizer. In a rasterizer, every object you see is represented by a hollow shell made up of many triangles. This collection of triangles is called a "geometry", a "model" or a "mesh". We will use the term mesh, as that is what Codea uses.


Figure 2. Rasterizing a Triangle.

The process of rasterization has several phases, ordered into a graphics pipeline (Figure 3): the mathematical model of your scene, consisting of a mesh of triangles, enters at the top and a 2D pixel image comes out at the bottom. This is a gross simplification, but it helps in understanding the process. The order in which triangles from your mesh are submitted to the pipeline can affect the final image. Pixels are square, so they only approximate your triangles (Figure 2), just as the triangles approximate the 3D object. The process of converting triangles to pixels is called scan conversion, but before that can happen we need to perform some mathematics to check whether each triangle is visible and to convert it from a 3D to a 2D representation.

Figure 3. The Graphics Pipeline.
24.4 Graphics Pipeline Overview


Triangles are described by three vertices, each of which defines a point in three-dimensional space (x, y, z). To represent these in two dimensions we have to project the vertex co-ordinates onto a plane. We maintain the illusion of depth by using tricks like perspective (i.e. things of the same size appear smaller the further away they are). We get to influence the graphics pipeline at two points, the vertex shader and the fragment shader, shown in orange in Figure 3.

If you are interested in a much more detailed explanation then we suggest that you read Andrew Stacey's tutorial on Using Matrices in Codea.

Step 1 - Vertex Shader (Clip Space Transformation)

The first phase of rasterization is to transform the vertices of each triangle into "clip space". Clip space is a region of 3D space spanning the range [-w, w] in each of the x, y and z directions; the value of w can be different for each vertex within a triangle. In clip space, the positive x direction is to the right, the positive y direction is up, and the positive z direction is away from the viewer. Everything within the clip space region will be rendered to the output image, and everything that falls outside it will be discarded.

This is difficult to visualise and use, so the vertices are normalised by dividing each co-ordinate (x, y, z) by w. After normalisation, the (x, y, z) co-ordinates will each be in the range -1 to +1; for example, a clip-space vertex at (2, 1, 0.5) with w = 2 normalises to (1, 0.5, 0.25). Dividing by w is also what applies a perspective effect to each of our triangles.

The entire process can be thought of as a mapping from the projection volume to a cube two units on each side, centred on the origin (0, 0, 0).


Figure 4. Clip Space Transformation & Normalisation.

In terms of the graphics pipeline (Figure 3), this transformation is coded in the vertex shader. Open up the Shader Lab in Codea and tap on the vertex shader tab. In the main() function, the line:

gl_Position = modelViewProjection * position;

performs the clip space transformation for you. 

The inputs to the vertex shader consist of:
  • Attributes - per-vertex data supplied via vertex arrays (e.g. position, color and texCoord). They are signified by the attribute qualifier in GLSL;
  • Uniforms - constant data used by the vertex shader (e.g. modelViewProjection), labelled uniform in GLSL; and
  • Samplers - a special type of uniform that represents a texture used by the vertex shader. These are optional.
The outputs of the vertex shader are called (somewhat redundantly) varying variables. A minimal vertex shader illustrating all of these is sketched below.
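
The following sketch is based on the template that the Shader Lab presents; the names (position, color, texCoord, modelViewProjection, vColor and vTexCoord) are the standard Codea bindings.

// A minimal vertex shader: transform each vertex into clip space
// and pass its colour and texture co-ordinate down the pipeline.

// Uniform: the combined model-view-projection matrix supplied by Codea
uniform mat4 modelViewProjection;

// Attributes: per-vertex data from the mesh
attribute vec4 position;
attribute vec4 color;
attribute vec2 texCoord;

// Varyings: outputs interpolated across each triangle for the fragment shader
varying lowp vec4 vColor;
varying highp vec2 vTexCoord;

void main()
{
    // Pass the colour and texture co-ordinate through unchanged
    vColor = color;
    vTexCoord = texCoord;

    // The clip space transformation described above
    gl_Position = modelViewProjection * position;
}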

Step 2 - Primitive Assembly

A primitive is a geometric object which can be drawn by OpenGL ES (e.g. a point, line or triangle). In this stage, the shaded vertices are assembled into individual primitives.

Normalisation and clipping happen automatically in the primitive assembly stage, between the vertex shader and the fragment shader. Primitive assembly also converts from normalised device co-ordinates to window co-ordinates. As the name suggests, window co-ordinates are relative to the window that OpenGL is running within, with the (0, 0) origin at the bottom-left. The bounds for z are [0, 1], with 0 being the closest and 1 the farthest away; vertex positions outside this range are not visible. The region of 3D space that is visible on the screen is referred to as the view frustum.

Step 3 - Rasterization

Rasterization converts the graphic primitives from the previous stage into two-dimensional fragments. These 2D fragments represent pixels that can be drawn to the screen (Figure 2).

In this stage, the varying values are calculated for each fragment and passed as inputs to the fragment shader. In addition, the colour, depth, stencil and screen co-ordinates are generated and will be passed to the per-fragment operations (e.g. stencil, blend and dither).

Step 4 - Fragment Shader

The fragment shader is executed for each fragment produced by the rasterization stage and takes the following inputs:
  • Varying variables - outputs from the vertex shader, generated for each fragment in the rasteriser using interpolation (e.g. vColor in the Ripple Shader Lab example);
  • Uniforms - constant data used by the fragment shader (e.g. time and freq in the Ripple Shader Lab example); and
  • Samplers - a special type of uniform that represents a texture used by the fragment shader (e.g. texture in the Ripple Shader Lab example).
The output of the fragment shader is either a colour value written to gl_FragColor, or nothing at all if the fragment is discarded (see Step 5). A minimal example showing both possibilities is sketched below.
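
As a minimal sketch (using the standard Codea varying and sampler names), a pass-through fragment shader demonstrating both outcomes might look like this; the alpha threshold is purely illustrative:

// A minimal fragment shader: sample the texture, discard nearly
// transparent fragments, and colour the rest.

uniform lowp sampler2D texture;   // sampler: the current mesh texture

varying lowp vec4 vColor;         // varyings interpolated by the rasteriser
varying highp vec2 vTexCoord;

void main()
{
    lowp vec4 texColor = texture2D( texture, vTexCoord );

    // A fragment can be abandoned entirely rather than coloured
    if ( texColor.a < 0.01 )
        discard;

    // Otherwise the single output is a colour written to gl_FragColor
    gl_FragColor = vColor * texColor;
}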

Step 5 - Per Fragment Operations

The final step before writing to the frame buffer is to perform (where enabled) the following per fragment operations.
  1. Pixel ownership test - checks if the pixel is currently owned by the OpenGL context. If it isn't (e.g. the pixel is obscured by another view) then it isn't displayed.
  2. Scissor Test - if enabled, is used to restrict drawing to a certain part of the screen. If the fragment is outside the scissor region it is discarded.
  3. Stencil & Depth test - if enabled, OpenGL's stencil buffer can be used to mask an area.The stencil test conditionally discards a fragment based on the value in the stencil buffer. Similarly, if enabled the depth buffer test discards the incoming fragment if a depth comparison fails.
  4. Blending - combines the newly generated fragment colour value with the corresponding colour values in the frame buffer at that screen location.
  5. Dithering - simulates greater colour depth to minimise artifacts that can occur from using limited precision. It is hardware-dependent, and all OpenGL allows you to do is turn it on or off.

24.5 A Simple Shader Example


Version 1.5 of Codea comes with a sample ripple shader (see Figure 1). The following fragment shader code tints a texture with a tint colour by a given tint amount.

// A basic fragment shader with tint.

// This represents the current texture on the mesh
uniform lowp sampler2D texture;

// The interpolated vertex color for this fragment
varying lowp vec4 vColor;

// The interpolated texture coordinate for this fragment
varying highp vec2 vTexCoord;

void main()
{
    // Sample the texture at the interpolated coordinate
    
    lowp vec4 texColor = texture2D( texture, vTexCoord );
    
    // Tint colour - red is currently hard coded.
    // Tint amount - select a number between 0.0 and 1.0
    // Alternatively you could pass the tint color and amount
    // into your shader by defining above:
    //
    // uniform lowp vec4 tintColor;
    // uniform lowp float tintAmount;
    
    lowp vec4 tintColor = vec4(1.0,0.0,0.0,1.0);
    lowp float tintAmount = 0.3;
    tintColor.a = texColor.a;

    // Set the output color to the texture color
    // modified by the tint amount and colour.
    
    gl_FragColor = tintColor * tintAmount + texColor * (1.0 - tintAmount);
}
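
The last line is a linear interpolation between the texture colour and the tint colour, so it can be written equivalently with GLSL's built-in mix() function:

// mix(x, y, a) returns x * (1.0 - a) + y * a
gl_FragColor = mix( texColor, tintColor, tintAmount );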

24.6 Appendix - GLSL Precision Qualifiers


You will notice the lowp, mediump and highp precision qualifiers in the Shader Lab example. It is much faster to use lowp in calculations than highp. The required minimum ranges and precisions for the precision qualifiers (as given in the OpenGL ES Shading Language specification) are:

Qualifier    Floating-point range    Floating-point precision    Integer range
highp        (-2^62, 2^62)           relative: 2^-16             (-2^16, 2^16)
mediump      (-2^14, 2^14)           relative: 2^-10             (-2^10, 2^10)
lowp         (-2.0, 2.0)             absolute: 2^-8              (-2^8, 2^8)

Apple provides the following guidelines for using precision in iOS applications (a sketch applying them follows the list):
  • When in doubt, default to high precision.
  • Colours in the 0.0 to 1.0 range can usually be represented using low precision variables.
  • Position data should usually be stored as high precision.
  • Normals and vectors used in lighting calculations can usually be stored as medium precision.
  • After reducing precision, retest your application to ensure that the results are what you expect.
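
Putting these guidelines together, a fragment shader might mix precisions as in the following sketch (the variable names and luminance weights are illustrative only):

// Fragment shaders have no default float precision in GLSL ES,
// so one must be declared explicitly.
precision highp float;

uniform lowp sampler2D texture;   // colour data: low precision is fine

varying highp vec2 vTexCoord;     // co-ordinate data: keep high precision
varying lowp vec4 vColor;         // colours in 0.0 to 1.0: low precision

void main()
{
    // mediump is usually enough for lighting-style arithmetic
    mediump float brightness = dot( vColor.rgb, vec3(0.299, 0.587, 0.114) );

    gl_FragColor = texture2D( texture, vTexCoord ) * brightness;
}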
