pi = Math.PI;
lerp(t,a,b) { return a + t * (b - a); }
abs(t) { return Math.abs(t); }
cos(t) { return Math.cos(t); }
sin(t) { return Math.sin(t); }
noise(x) { return Noise.noise(x); }
noise(x,y) { return Noise.noise(x,y); }
noise(x,y,z) { return Noise.noise(x,y,z); }
fx = -100;
fy = -100;
mouseMove(x, y) {
fx = draw.fx(x);
fy = draw.fy(y);
}
px(x,y,z) { return (7 * x - 3.4 * z) / (10 - .1*z); }
py(x,y,z) { return (7 * y - 3.4 * z - .3*x) / (10 - .1*z); }
X(u,v) { return px( x(u,v), y(u,v), z(u,v) ); }
Y(u,v) { return py( x(u,v), y(u,v), z(u,v) ); }
axesColor = Color.blue;
veryLightGray = new Color(220,220,220);
lightGray = new Color(160,160,160);
draw2DAxes() {
draw.setColor(axesColor);
draw.fillThickLine(-1, 0, 1, 0, 0.01);
draw.fillThickLine(0, -1, 0, 1, 0.01);
}
draw3DAxes() {
draw.setColor(axesColor);
draw.fillThickLine(px(-2,0,0),py(-2,0,0),px(2,0,0),py(2,0,0), 0.01);
draw.fillThickLine(px(0,-2,0),py(0,-2,0),px(0,2,0),py(0,2,0), 0.01);
draw.fillThickLine(px(0,0,-2),py(0,0,-2),px(0,0,2),py(0,0,2), 0.01);
}
String round(double value) {
String s = "" + ((int)(100 * value) / 100.);
int i = s.indexOf('.');
if (i < 0)
return s + ".00";
int n = s.length() - i;
switch (n) {
case 1: return s + "00";
case 2: return s + "0";
}
return s;
}
Computing distance and surface normal
Convert Z to perspective coords to evaluate depth:
For each vertex, after the Matrix transform,
we can do a perspective transform:
xp = f * x / (f - z)
yp = f * y / (f - z)
zp = f * z / (f - z)
(by applying the camera's perspective matrix)
1 0 0 0
0 1 0 0
0 0 1 0
0 0 -1/f 1
where f is focal length of the camera.
On the image, this vertex will be at pixel:
col = width/2 + (int)(height * xp )
row = height/2 - (int)(height * yp )
The "depth" of the vertex at this pixel is zp.
Surface normal:
The "normal" vector N of a surface is the perpendicular direction facing outward from that surface.
double[] x = {-.5,.5,.5,-.5};
double[] y = {-.5,-.5,.5,.5};
double[] z = {-.5,0,.5,0};
draw() {
n = x.length;
for (i = 0 ; i < n ; i++) {
j = (i + 1) % n;
xi = px(x[i],y[i],z[i]);
yi = py(x[i],y[i],z[i]);
xj = px(x[j],y[j],z[j]);
yj = py(x[j],y[j],z[j]);
draw.fillThickLine(xi,yi,xj,yj,.01);
}
draw.fillArrow(px(0,0,0),py(0,0,0),
px(0,0,-1),py(0,0,-1), .028);
draw.drawText("N", px(0,0,-1)+.1,py(0,0,-1)+.1);
}
This vector N = (nx,ny,nz) controls how a surface responds to light. N is always unit length.
Computing the surface normal
From now on you should change your vertices to contain six values, to account for both location and surface normal:
vertex[i] = { x, y, z, nx, ny, nz };
So rather than a declaration in your program like this:
double[][] vertex = new double[nVertices][3];
you should have one that looks like this:
double[][] vertex = new double[nVertices][6];
For simple shapes like a cube, sphere or cylinder, you can directly compute the surface normal:
Cube:
Normals are (-1,0,0), (1,0,0), (0,-1,0), etc.
Sphere:
Normal at point (x,y,z) is just (x,y,z)
Cylinder:
Normal around tube is (cos, sin, 0).
Normal at end caps are (0,0,-1) and (0,0,1), respectively.
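The direct formulas above can be sketched as follows. This is an illustrative sketch, assuming a sphere centered at the origin and a cylinder whose axis is the z axis; the names are not from the notes.

```java
public class DirectNormals {
    // Sphere at the origin: the outward normal at (x,y,z) is (x,y,z) itself,
    // normalized to unit length.
    static double[] sphereNormal(double x, double y, double z) {
        double len = Math.sqrt(x*x + y*y + z*z);
        return new double[] { x/len, y/len, z/len };
    }

    // Cylinder along z: around the tube the normal is (cos, sin, 0),
    // which is just the point's (x,y) pushed out radially.
    // The two end caps get (0,0,-1) and (0,0,1).
    static double[] cylinderTubeNormal(double x, double y) {
        double len = Math.sqrt(x*x + y*y);
        return new double[] { x/len, y/len, 0 };
    }

    public static void main(String[] args) {
        double[] n = sphereNormal(0, 0, 2);
        System.out.println(n[0] + " " + n[1] + " " + n[2]);
    }
}
```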
Computing vertex normals for a polyhedral mesh:
Geometry shape;
setup() {
Material material = new Material();
material.setAmbient(0.2, 0.1, 0.1);
material.setDiffuse(0.8, 0.4, 0.4);
render.addLight( 1, 1, 1, 1, 1, 1);
render.addLight(-1,-1,-1, 1, 1, 1);
render.setFOV(0.3);
N = 3;
shape = render.getWorld().add().mesh(N,N);
shape.setMaterial(material);
vertices = shape.vertices;
for (int i = 0 ; i <= N ; i++)
for (int j = 0 ; j <= N ; j++) {
int n = i + (N + 1) * j;
v = vertices[n];
v[2] = (sin(1.5*i - .5) + sin(1.5*j - .5)) / 4;
}
shape.computePolyhedronNormals();
}
update() {
render.showMesh = edges;
shape.getMatrix().identity().rotateX(-.7).rotateY(-.3);
}
for each face: sum cross products of successive edges
For each face, compute a faceDirection as follows:
(1) Set faceDirection to [0,0,0]
(2) For each three successive vertices A,B,C around face:
faceDirection += (C-B) × (B-A)
About cross products:
The "cross product" of two vectors a and b is a vector perpendicular to both. Its length is the product of their lengths times the sine of the angle between them:
|a × b| = |a| |b| sin(θ)
You can compute it as follows:
a × b = (ay bz - az by , az bx - ax bz , ax by - ay bx)
for each vertex: sum neighbor faceDirection vectors and normalize
(1) Sum faceDirections of all faces containing this vertex
(2) Normalize this sum to unit length
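The two loops above can be sketched in code. This is an illustrative sketch: a face is given as an ordered array of vertex positions, and gathering the faces around each vertex is left out.

```java
public class MeshNormals {
    // a × b = (ay bz - az by, az bx - ax bz, ax by - ay bx)
    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2] - a[2]*b[1],
                              a[2]*b[0] - a[0]*b[2],
                              a[0]*b[1] - a[1]*b[0] };
    }

    static double[] diff(double[] p, double[] q) {
        return new double[] { p[0]-q[0], p[1]-q[1], p[2]-q[2] };
    }

    // Sum cross products of successive edges around one face.
    static double[] faceDirection(double[][] face) {
        double[] dir = { 0, 0, 0 };                      // (1)
        int n = face.length;
        for (int i = 0 ; i < n ; i++) {                  // (2)
            double[] A = face[i], B = face[(i+1) % n], C = face[(i+2) % n];
            double[] c = cross(diff(C, B), diff(B, A));
            for (int k = 0 ; k < 3 ; k++)
                dir[k] += c[k];
        }
        return dir;
    }

    // Vertex normal: sum the faceDirections of every face containing
    // the vertex (that gathering step is not shown), then normalize.
    static double[] normalize(double[] v) {
        double len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new double[] { v[0]/len, v[1]/len, v[2]/len };
    }

    public static void main(String[] args) {
        double[][] square = { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0} };
        double[] n = normalize(faceDirection(square));
        System.out.println(n[0] + " " + n[1] + " " + n[2]);
    }
}
```

Summing over all successive triples makes the result robust even when one triple of vertices happens to be collinear.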
Transforming surface normals
What does transformed N need to do?
It needs to stick out perpendicularly from the transformed shape.
This means it needs to act like a plane, not a point.
So let's first talk about planes in three dimensions.
Consider P = (a,b,c,d), describing the plane ax + by + cz + d = 0.
Normal N can be expressed as a plane equation.
Specifically, the plane equation with d = 0: ax + by + cz + 0 = 0.
Transforming normal N is going to follow the same rules
as transforming any plane P = (a,b,c,d).
A plane P is defined by what points X are on that plane.
Point X will be contained in plane P exactly when P • X = 0.
That is:
when (a,b,c,d) • (x,y,z,1) = 0
or:
when ax + by + cz + d = 0
We can apply this insight to transforming normals.
When we transform points by M, we need to preserve the value of P • X.
This works if P is transformed to P M⁻¹, since then (P M⁻¹)(M X) = P X.
Derivation:
P M⁻¹ M X =
P (M⁻¹ M) X =
P (I) X =
P X
So P is transformed to P M⁻¹.
To transform normal N:
we need to compute: N M⁻¹
Which is the same as: (M⁻¹)ᵀ N
To transpose a matrix, just swap its rows and columns.
To invert a matrix, you can use Invert.java.
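The pieces involved can be sketched as follows. This is an illustrative sketch: M⁻¹ is passed in directly, since inverting a general matrix is left to a separate routine like Invert.java, and the names are assumptions.

```java
public class NormalTransform {
    // Transpose: swap rows and columns.
    static double[][] transpose(double[][] m) {
        double[][] t = new double[m[0].length][m.length];
        for (int i = 0 ; i < m.length ; i++)
            for (int j = 0 ; j < m[0].length ; j++)
                t[j][i] = m[i][j];
        return t;
    }

    // Multiply a 3x3 matrix by a column vector.
    static double[] apply(double[][] m, double[] v) {
        double[] r = new double[3];
        for (int i = 0 ; i < 3 ; i++)
            for (int j = 0 ; j < 3 ; j++)
                r[i] += m[i][j] * v[j];
        return r;
    }

    // Transform normal N by (M^-1)^T. For a pure rotation M^-1 is just M^T,
    // so normals transform like points; the difference only shows up
    // under non-uniform scales and shears.
    static double[] transformNormal(double[][] mInverse, double[] n) {
        return apply(transpose(mInverse), n);
    }

    public static void main(String[] args) {
        // M scales x by 2, so M^-1 scales x by 1/2.
        double[][] mInv = { {0.5,0,0}, {0,1,0}, {0,0,1} };
        double[] n = transformNormal(mInv, new double[]{1, 0, 0});
        System.out.println(n[0] + " " + n[1] + " " + n[2]);
    }
}
```

After transforming, remember to renormalize the result back to unit length.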
ZBuffer
(1) Initialize zbuffer to "infinitely far"
Since zp = fz / (f - z),
we can look at the limit as z → -∞.
When we do that, we see that zp → -f.
So we can just initialize all zbuffer values to -f.
(2) Loop through triangles
Scan through pixels.
For each pixel at index:
i = col + row * nCols
Interpolate from vertices r,g,b,zp.
If zp > zBuffer[i]:
replace values at that pixel as follows:
pix[i] = pack(r,g,b);
zBuffer[i] = zp;
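Steps (1) and (2) can be sketched like this. An illustrative sketch only: the class shape and the 8-bits-per-channel pack() layout are assumptions, not from the notes.

```java
public class ZBuffer {
    int nCols, nRows;
    double[] zBuffer;
    int[] pix;

    ZBuffer(int nCols, int nRows, double f) {
        this.nCols = nCols;
        this.nRows = nRows;
        pix = new int[nCols * nRows];
        zBuffer = new double[nCols * nRows];
        // (1) "infinitely far" is zp = -f, the limit of fz/(f-z) as z -> -infinity.
        for (int i = 0 ; i < zBuffer.length ; i++)
            zBuffer[i] = -f;
    }

    static int pack(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }

    // (2) per-pixel test: keep the nearest fragment, i.e. the largest zp so far.
    void setPixel(int col, int row, int r, int g, int b, double zp) {
        int i = col + row * nCols;
        if (zp > zBuffer[i]) {
            pix[i] = pack(r, g, b);
            zBuffer[i] = zp;
        }
    }

    public static void main(String[] args) {
        ZBuffer zb = new ZBuffer(4, 4, 10);
        zb.setPixel(1, 1, 255, 0, 0, -2);   // red fragment
        zb.setPixel(1, 1, 0, 255, 0, -5);   // farther green fragment: rejected
        System.out.println(Integer.toHexString(zb.pix[1 + 1 * 4]));
    }
}
```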
Modify your triangle scan-conversion algorithm as follows:
Going into your scan-conversion algorithm, rather than just the pixel values (x,y) for each of your triangle's three vertices, you also need each vertex to maintain four more values: r,g,b,zp, where for this week's assignment r,g,b will be computed from surface normal (as described above), and where zp is the perspective z obtained from the expression fz/(f-z).
When you compute the x value for D (by linearly interpolating the x values of vertices A and C) as you split your triangle into two trapezoids, you also need to linearly interpolate the r,g,b,zp values of A and C, to get r,g,b,zp values at D.
You will now have trapezoids that have, at each of their TL,TR,BL,BR vertices, not just a single x value, but five values: x,r,g,b,zp. As you linearly interpolate to get the leftmost and rightmost x for each scanline, you also need to linearly interpolate r,g,b,zp for that scanline.
Finally, at each scanline, you now have leftmost and rightmost values for x,r,g,b,zp. As you march along in x between those two extremal points, you need to linearly interpolate r,g,b,zp. This will give you, at each pixel, the values you need to use for the zbuffer algorithm.
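All three interpolation steps above can share one helper, built on the lerp defined at the top of these notes. A minimal sketch; the names are illustrative.

```java
public class Interp {
    // lerp(t, a, b) as defined at the top of the notes.
    static double lerp(double t, double a, double b) { return a + t * (b - a); }

    // Interpolate all five values (x, r, g, b, zp) between two vertices at once.
    // The same call works edge-to-edge (to find D, and to get scanline
    // endpoints) and then left-to-right across each scanline.
    static double[] lerp5(double t, double[] p, double[] q) {
        double[] r = new double[5];
        for (int k = 0 ; k < 5 ; k++)
            r[k] = lerp(t, p[k], q[k]);
        return r;
    }

    public static void main(String[] args) {
        double[] mid = lerp5(0.5, new double[]{0,0,0,0,0}, new double[]{2,4,6,8,10});
        System.out.println(mid[0] + " " + mid[4]);
    }
}
```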
Homework:
Do scan-conversion using Z-buffer algorithm.
Compute normals
Map normal (nx,ny,nz) to (r,g,b), where:
nx = -1.0 ... 1.0 → r = 0 ... 255
ny = -1.0 ... 1.0 → g = 0 ... 255
nz = -1.0 ... 1.0 → b = 0 ... 255
Create an image of normals.
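One way to sketch that mapping (illustrative only; the exact rounding convention is an assumption):

```java
public class NormalColor {
    // Map one component from -1.0...1.0 to 0...255.
    static int channel(double n) {
        int c = (int)(255 * (n + 1) / 2);
        return Math.max(0, Math.min(255, c));   // clamp against rounding drift
    }

    static int[] normalToRGB(double nx, double ny, double nz) {
        return new int[] { channel(nx), channel(ny), channel(nz) };
    }

    public static void main(String[] args) {
        int[] rgb = normalToRGB(0, 0, 1);       // a normal facing the camera
        System.out.println(rgb[0] + " " + rgb[1] + " " + rgb[2]);
    }
}
```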
Inspirational videos:
Artificial retina
Worldbuilder
Keiichi Matsuda