Ken Perlin
Images and colors
How humans see colors
Representing shapes with polygons
Homogeneous coordinates
Forming a ray
Ray tracing to a sphere
A ray V + tW hits a sphere with center c and radius r wherever (V + tW - c) • (V + tW - c) = r². Expanding this out in powers of t gives:

    (W•W) t² + 2 W•(V-c) t + (V-c)•(V-c) - r² = 0

So we need to solve the quadratic equation A t² + B t + C = 0, where:

    A = W•W
    B = 2 W•(V-c)
    C = (V-c)•(V-c) - r²

and the quadratic formula gives:

    t = ( -B ± √(B² - 4AC) ) / 2A

Since W is normalized, the value of W•W is always 1.0, so the quadratic formula in this case simplifies to:

    t = ( -B ± √(B² - 4C) ) / 2

Interpreting the results: If there are no real roots, then the ray has missed the sphere. Otherwise, the smaller of the two roots is where the ray enters the sphere, and the larger of the two roots is where the ray exits the sphere.
Using the root value to find the surface point: Once we have found the smaller root t, we can substitute it back into the ray equation to get the surface point S: S = V + t W
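Putting these steps together, here is a minimal JavaScript sketch of ray/sphere intersection (the names raySphere and the { center, radius } sphere object are just for illustration, not part of any particular library):

    // Intersect the ray V + tW with a sphere { center, radius }.
    // Returns the surface point S = V + tW at the nearer hit, or null on a miss.
    // Assumes W is already normalized, so A = W•W = 1.
    function raySphere(V, W, sphere) {
       function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
       var D = [ V[0] - sphere.center[0],
                 V[1] - sphere.center[1],
                 V[2] - sphere.center[2] ];          // V - c
       var B = 2 * dot(W, D);
       var C = dot(D, D) - sphere.radius * sphere.radius;
       var discriminant = B*B - 4*C;
       if (discriminant < 0)
          return null;                               // no real roots: the ray missed
       var t = (-B - Math.sqrt(discriminant)) / 2;   // smaller root: entry point
       if (t < 0)
          return null;                               // sphere is behind the ray origin
       return [ V[0] + t * W[0],
                V[1] + t * W[1],
                V[2] + t * W[2] ];                   // S = V + tW
    }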
Simple lighting model: Lights at infinity (like the Sun) We are going to assume for now that light sources are infinitely far away, or at least far enough away that they can be considered infinitely far for practical purposes, like the Sun, which is 93 million miles from Earth. This means that the direction vector L to the light source will have the same value for all points in the scene: L = [ Lx, Ly, Lz, 0 ]
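As a small sketch of how such a constant light direction might be used (the Lambert-style max(0, N•L) diffuse term and the function names here are assumptions for illustration, not spelled out above):

    function normalize(v) {
       var s = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
       return [ v[0]/s, v[1]/s, v[2]/s ];
    }

    // A single light at infinity: the same direction L toward the light
    // is used for every point in the scene.
    var L = normalize([ 1, 1, 1 ]);

    // Simple diffuse shading at a surface point with unit normal N.
    function shadeDiffuse(N, surfaceColor, lightColor) {
       var d = Math.max(0, N[0]*L[0] + N[1]*L[1] + N[2]*L[2]);   // Lambert term N•L
       return [ surfaceColor[0] * lightColor[0] * d,
                surfaceColor[1] * lightColor[1] * d,
                surfaceColor[2] * lightColor[2] * d ];
    }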
Reflections:
Adding in the reflection The reflected ray can simply be multiplied by some [r,g,b] color and added to the final color of the surface point. Note that you cannot have the ray tracer call itself recursively in the shader, because WebGL shader programs which run on the GPU do not permit recursive function calls. But you can call the ray tracer for the primary ray, and then call it again to compute a reflection ray.
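In outline, the two-call structure looks like the sketch below (rayTrace, the hit point S, the unit normal N, and mirrorColor are stand-ins for whatever your own ray tracer provides; the reflected direction R = W - 2(N•W)N is the standard mirror-reflection formula):

    // Sketch: trace a primary ray, then one explicit reflection bounce.
    // rayTrace(origin, direction) is assumed to return an [r,g,b] color.
    function shadeWithReflection(rayTrace, V, W, S, N, mirrorColor) {
       var surfaceColor = rayTrace(V, W);                 // primary ray
       var NdotW = N[0]*W[0] + N[1]*W[1] + N[2]*W[2];
       var R = [ W[0] - 2*NdotW*N[0],                     // reflected direction:
                 W[1] - 2*NdotW*N[1],                     //   R = W - 2 (N•W) N
                 W[2] - 2*NdotW*N[2] ];
       var reflectedColor = rayTrace(S, R);               // second (reflection) ray
       return [ surfaceColor[0] + mirrorColor[0] * reflectedColor[0],
                surfaceColor[1] + mirrorColor[1] * reflectedColor[1],
                surfaceColor[2] + mirrorColor[2] * reflectedColor[2] ];
    }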
Shadows:
Ray tracing to general second order surfaces
Improved noise in JavaScript: I've ported my Improved Noise algorithm to JavaScript, as the file inoise.js. You can use it to model shapes in various ways. For example, a bumpy spheroid might be implemented like this:

    var sph = function(u, v) {
       var theta = 2 * Math.PI * u,
           phi   = Math.PI * (v - .5),
           cosT  = Math.cos(theta),
           cosP  = Math.cos(phi),
           sinT  = Math.sin(theta),
           sinP  = Math.sin(phi);
       var x = cosT * cosP,
           y = sinT * cosP,
           z = sinP;
       var r = 1 + noise(2*x, 2*y, 2*z) / 12
                 + noise(4*x, 4*y, 4*z) / 24;
       return [ r * x, r * y, 1.3 * r * z ];
    }

Feel free to use this function in your geometric modeling.
Triangle strips:
Triangle strips allow you to keep the transfer of geometry data from your CPU to your GPU down to roughly one vertex per triangle.
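For example, here is a sketch of how the vertices of a grid might be ordered into a single triangle strip (the zig-zag ordering and the repeated indices that stitch rows together are a standard trick; the details will vary with your own mesh format):

    // Build a triangle strip index list for an (nRows+1) x (nCols+1) grid of vertices.
    // Each interior vertex is sent roughly once, so the strip uses about
    // one vertex per triangle instead of three.
    function gridTriangleStrip(nRows, nCols) {
       var indices = [];
       for (var row = 0; row < nRows; row++) {
          for (var col = 0; col <= nCols; col++) {
             indices.push( row    * (nCols + 1) + col);   // vertex on this row
             indices.push((row+1) * (nCols + 1) + col);   // vertex on the next row
          }
          if (row + 1 < nRows) {
             // Repeat two indices to create degenerate triangles that
             // join this row of the strip to the next one.
             indices.push((row+1) * (nCols + 1) + nCols);
             indices.push((row+1) * (nCols + 1));
          }
       }
       return indices;
    }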
In the version of
Bump mapping: For fine perturbations of a surface, it can be very expensive to generate and render the large number of triangles required for true surface normal perturbation. Therefore for finely detailed bumps, we sometimes just use Bump Mapping, a technique first described by Jim Blinn about 40 years ago. The basic idea is to modulate the surface normal, within the fragment shader, to reflect the changes in surface direction that would be produced by an actual bumpy surface. Since the human eye is more sensitive to variations in shading than to variations in object silhouette, this technique can produce a fairly convincing approximation to the appearance of surface bumpiness, at a fraction of the computational cost of building finely detailed geometric models.

To do bump mapping of a procedural texture T that is defined over the (x,y,z) domain (the noise function is an example of one such procedural texture), we can think of the value of T(x,y,z) as a variation in surface height. In order to simulate the hills and valleys of this bumpy surface, we subtract the derivative of T from the normal (because the normal will point toward the valleys), and then renormalize to restore the normal vector to unit length.

We can approximate the vector valued derivative at surface point (x,y,z) by finite differences (where ε below is some very small positive number):

    p0 = T(x, y, z)
    px = ( T(x+ε, y, z) - p0 ) / ε
    py = ( T(x, y+ε, z) - p0 ) / ε
    pz = ( T(x, y, z+ε) - p0 ) / ε

which we can then use to modify the surface normal:

    normal ← normalize( normal - vec3(px, py, pz) )
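The same idea written out as a small JavaScript sketch, using the noise function as the procedural texture T (in a real renderer this logic would live in your fragment shader; the bump scale factor is an extra knob added here just to keep the sketch tunable):

    // Perturb a unit normal at point p = [x,y,z] using the height field T.
    // eps is the small finite-difference step; bump scales the effect.
    function bumpNormal(T, p, normal, eps, bump) {
       var p0 = T(p[0], p[1], p[2]);
       var px = (T(p[0]+eps, p[1], p[2]) - p0) / eps,
           py = (T(p[0], p[1]+eps, p[2]) - p0) / eps,
           pz = (T(p[0], p[1], p[2]+eps) - p0) / eps;
       var n = [ normal[0] - bump * px,
                 normal[1] - bump * py,
                 normal[2] - bump * pz ];
       var s = Math.sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
       return [ n[0]/s, n[1]/s, n[2]/s ];      // renormalize to unit length
    }

    // For example:  bumpNormal(noise, S, N, 0.001, 0.1)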
Forward kinematics Often we want to create hierarchical mechanisms. Such hierarchically structured mechanisms generally use forward kinematics, in which transformations form a tree structure that descends from a single root. Here is a swinging arm, a simple example of forward kinematics.
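A sketch of the idea (not the actual demo code): the forearm's transformation is specified relative to the upper arm's, so the two form a small tree descending from the shoulder. Here m is assumed to be a matrix object with push(), pop(), translate() and rotateZ() methods, and drawBox stands in for drawing one limb segment.

    function drawArm(m, drawBox, time) {
       m.push();
          m.rotateZ(Math.sin(time));            // swing the whole arm at the shoulder
          drawBox(m, 1.0);                       // upper arm
          m.translate(1.0, 0, 0);               // move out to the elbow
          m.push();
             m.rotateZ(Math.sin(2 * time));     // swing the forearm at the elbow
             drawBox(m, 0.8);                    // forearm
          m.pop();
       m.pop();
    }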
For clarity, I implemented the above example using push and pop methods. But if you want to create a system that allows users to put together their own object hierarchies, you are better off using explicit objects. In such a scheme, each object would have its own matrix transformation, and would also maintain a list of child objects. The transformation of a child object would be relative to its parent, thereby forming a tree of object nodes.

Animation over time -- key frame animation

When creating animations, it is often convenient to specify values only at certain frames, and then use smooth curves to interpolate values at the frames between these key frames. In-betweening with ease curves which start and stop with zero derivative, such as 3t² - 2t³, produces natural looking interpolations (see the sketch after this paragraph). Here is a hand that can be animated by setting key frames. To show different animations of the hand, type "a1" or "a2" or "a3" followed by the space key. You can also read the on-line instructions on that page to learn how to vary the key-frame animation.
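For instance, a sketch of in-betweening between two key-frame values using that ease curve (the function names are just for illustration):

    // Ease curve: starts and stops with zero derivative.
    function ease(t) {
       return t * t * (3 - 2 * t);              // 3t² - 2t³
    }

    // Interpolate a value between key frames set at times t0 and t1.
    function inBetween(value0, value1, t0, t1, time) {
       var t = Math.max(0, Math.min(1, (time - t0) / (t1 - t0)));
       return value0 + ease(t) * (value1 - value0);
    }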
Inverse kinematics
Boids There is an entire sub-field of computer animation devoted to swarms and particle animation. One historically important example of this was Craig Reynolds' Boids, which he first published in 1987. This technique for simulating herding and flocking behavior showed convincingly that a few simple procedural rules can create the impression of compelling group and social behavior. In cinema, this technique was first used in the 1992 feature film Batman Returns, and has since become a staple of the movie and game special effects industry.
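Reynolds' rules are commonly summarized as separation, alignment, and cohesion. The sketch below is only a rough illustration of that idea, not his actual algorithm: the weights are arbitrary, and a real implementation would apply the rules only to nearby neighbors.

    // One update step of a rough boids-style flock. Each boid has pos and vel arrays.
    function updateBoids(boids, dt) {
       boids.forEach(function(b) {
          var n = boids.length - 1;
          if (n < 1) { b.steer = [0,0,0]; return; }
          var away = [0,0,0], avgVel = [0,0,0], center = [0,0,0];
          boids.forEach(function(other) {
             if (other === b) return;
             for (var i = 0 ; i < 3 ; i++) {
                away[i]   += (b.pos[i] - other.pos[i]) / n;   // separation
                avgVel[i] += other.vel[i] / n;                // alignment
                center[i] += other.pos[i] / n;                // cohesion
             }
          });
          b.steer = [0,0,0];
          for (var i = 0 ; i < 3 ; i++)
             b.steer[i] = 0.05 * away[i]
                        + 0.05 * (avgVel[i] - b.vel[i])
                        + 0.01 * (center[i] - b.pos[i]);
       });
       boids.forEach(function(b) {
          for (var i = 0 ; i < 3 ; i++) {
             b.vel[i] += b.steer[i];
             b.pos[i] += dt * b.vel[i];
          }
       });
    }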
Procedurally animating a mesh over time:

You can create a procedurally animated mesh, in which you define a time-varying function that modifies the position of each vertex of the mesh at every animation frame.
If you implement something like this in JavaScript, you will need to keep two copies of your mesh: The original unmodified mesh, and the one that gets copied from the original and then vertex-filtered at every animation frame. When you modify the mesh, you will end up needing to change the vertex normals. To recompute vertex normals for a mesh, you can use the following algorithm:
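A sketch of the usual approach, assuming the mesh stores a flat vertex array and an array of [i,j,k] index triples for its triangles (your own mesh format may differ):

    // Recompute vertex normals: accumulate each face normal onto its
    // three vertices, then normalize the sums.
    function computeVertexNormals(vertices, triangles) {
       var normals = vertices.map(function() { return [0, 0, 0]; });
       triangles.forEach(function(tri) {
          var A = vertices[tri[0]], B = vertices[tri[1]], C = vertices[tri[2]];
          var U = [ B[0]-A[0], B[1]-A[1], B[2]-A[2] ],      // two edges of the triangle
              V = [ C[0]-A[0], C[1]-A[1], C[2]-A[2] ];
          var faceNormal = [ U[1]*V[2] - U[2]*V[1],         // cross product U x V
                             U[2]*V[0] - U[0]*V[2],
                             U[0]*V[1] - U[1]*V[0] ];
          tri.forEach(function(i) {
             for (var k = 0 ; k < 3 ; k++)
                normals[i][k] += faceNormal[k];
          });
       });
       return normals.map(function(n) {
          var s = Math.sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]) || 1;
          return [ n[0]/s, n[1]/s, n[2]/s ];
       });
    }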
If you are feeling ambitious, you can also try implementing this sort of filter in a vertex shader. In that case, you will need to be a bit more clever about modifying vertex normals. For example (since you will have greater compute power to work with), you can do finite differences to compute the new surface normals.
Layered keyframe animation: Layered animation allows you to blend separate layers of movement, each affecting only some parts of a character, in a way analogous to how Photoshop lets you apply layered transparency to just some pixels of an image but not others.
Introduction to particle systems:
Examples of uses of particle systems: This week we just scratched the surface of particle systems. Next week we will go into more detail about this rich topic. Meanwhile, here's a high level introduction to the subject.

Particle systems are very flexible; they can be used to simulate many natural phenomena, including water, leaves, clouds/fog, snow, dust, and stars. When they are "smeared out" so that they are rendered as trails, rather than as discrete particles, they can be used to render hair, fur, grass, and similar natural objects.

Basic mechanism: Generally speaking, particles in a particle system begin by being emitted from the surface of an "emitter" object. When a particle begins its life, it has an initial trajectory, which is usually normal to the surface of the emitter object. After that, the path of the particle can be influenced by various things, including gravity and other forces, and collisions with object surfaces. Particles usually have a lifetime, after which they are removed from the system. Also, a particle can itself be an emitter of other particles, spawning one or more other particles in the course of its lifetime. In this way, particles can be made to cascade, generating complex patterns such as flamelike shapes. All of the qualities of a particle -- its lifetime, its velocity and mass, how many particles it spawns -- can be a randomly chosen value within some range. By controlling the ranges from which these various properties are chosen, artists can control the look and feel of a particle system (see the sketch below).

History: Particle systems were first developed by Bill Reeves at Pixar in 1981. Their first public use was for the Genesis Effect in Star Trek II: The Wrath of Khan (1982). Since then, they have become a mainstay of computer graphic films and games.
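A minimal sketch of that basic mechanism (emission from an emitter surface, gravity, and a finite lifetime); the emitter interface and the constants here are just placeholders:

    // One update step of a simple particle system.
    // emitter.randomSurfacePoint() is assumed to return { position, normal }.
    function updateParticles(particles, emitter, dt) {
       // Emit a few new particles per frame, launched along the surface normal.
       for (var i = 0 ; i < 5 ; i++) {
          var p = emitter.randomSurfacePoint();
          particles.push({
             pos: p.position.slice(),
             vel: p.normal.map(function(n) { return n * (1 + Math.random()); }),
             life: 1 + 2 * Math.random()       // seconds remaining, chosen from a range
          });
       }
       // Age and move every particle; gravity pulls velocities downward.
       for (var j = particles.length - 1 ; j >= 0 ; j--) {
          var q = particles[j];
          q.life -= dt;
          if (q.life <= 0) { particles.splice(j, 1); continue; }   // lifetime expired
          q.vel[1] -= 9.8 * dt;
          for (var k = 0 ; k < 3 ; k++)
             q.pos[k] += dt * q.vel[k];
       }
    }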
Rendering: One nice thing about particle systems is that they are not that difficult to implement in vertex shaders. In addition to their behavior, their appearance can also be hardware accelerated. One common technique is to render each particle as a "billboard": a polygon that always faces the camera (perpendicular to the viewing direction). This polygon is textured with a translucent image of a fuzzy spot. The effect is to make the particle look like a small gaseous sphere, but at fairly low computational cost.
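One way to build such a billboard, sketched here, is to offset the particle's center along the camera's right and up directions, read from the view matrix (the row-major 4x4 layout is an assumption; adjust to your own matrix convention):

    // Build the four corners of a camera-facing quad ("billboard") of a given size.
    // viewMatrix is assumed to be a row-major 4x4 array, so its first two rows
    // hold the camera's right and up directions in world space.
    function billboardCorners(center, size, viewMatrix) {
       var right = [ viewMatrix[0], viewMatrix[1], viewMatrix[2] ],
           up    = [ viewMatrix[4], viewMatrix[5], viewMatrix[6] ];
       function corner(sr, su) {
          return [ center[0] + size * (sr * right[0] + su * up[0]),
                   center[1] + size * (sr * right[1] + su * up[1]),
                   center[2] + size * (sr * right[2] + su * up[2]) ];
       }
       return [ corner(-1,-1), corner(1,-1), corner(1,1), corner(-1,1) ];
    }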
Linear blend skinning: Here we discuss an approximation to animating the soft skin of game characters which is cheap and can be implemented very easily in vertex shaders.

In an animated character, the rigid bones of the character's articulating skeleton are generally covered in some sort of soft skin. A fairly accurate way to model this skin would be to think of each point on its surface (approximated by the vertices of a polygon mesh) as being influenced by the various rigid matrix transformations of nearby bones in the skeleton. To do this properly, one would compute a composite transformation matrix that was influenced by all of those individual bone matrices. However, in practice this is a more expensive operation than can be accommodated in the real-time rendering budget of game engines.

So most games instead do a kind of cheat called linear blend skinning. The basic idea is to compute the matrix transformation of each vertex as though it were part of each of the various nearby bones. This will result in a different position for each bone. Then these positions are blended together into a weighted average to find the final position for the vertex. To make this work, each vertex maintains a list of [bone,weight] pairs, where all of the respective weights sum to 1.0.

This technique is very fast, and very easy to implement efficiently in hardware accelerated vertex shaders, but it has some practical deficiencies. For example, twisting between the two ends of a limb can cause the middle of the limb to appear to collapse. To handle cases like this, linear blend skinned skeletons are rigged with extra bones to mitigate the effects of such problems.
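As a sketch of the blending step (the row-major 4x4 matrix layout and the helper names are assumptions; a real implementation would do this per vertex in the vertex shader):

    // Apply a row-major 4x4 matrix M to a point p (assumes w = 1).
    function transformPoint(M, p) {
       return [ M[0]*p[0] + M[1]*p[1] + M[2]*p[2]  + M[3],
                M[4]*p[0] + M[5]*p[1] + M[6]*p[2]  + M[7],
                M[8]*p[0] + M[9]*p[1] + M[10]*p[2] + M[11] ];
    }

    // Linear blend skinning for one vertex.
    // bones is an array of bone transforms; influences is the vertex's list of
    // [boneIndex, weight] pairs, whose weights sum to 1.0.
    function skinVertex(restPosition, influences, bones) {
       var result = [0, 0, 0];
       influences.forEach(function(pair) {
          var boneIndex = pair[0], weight = pair[1];
          var p = transformPoint(bones[boneIndex], restPosition);  // position if the vertex
          for (var k = 0 ; k < 3 ; k++)                            // belonged only to this bone
             result[k] += weight * p[k];                           // weighted average
       });
       return result;
    }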
Marching Cubes:
Marching Squares (2D case):
Marching Tetrahedra (simpler to implement, less efficient):