Questions

4 replies to this topic
Swoorup
  • Swoorup

    innovator

  • Feroci Racing
  • Joined: 28 Oct 2008
  • Nepal

#1

Posted 11 February 2012 - 05:18 AM

First of all, how are 3D coordinates transformed into 2D coordinates for rendering or display on the screen?
Secondly, how does a UV map work? I mean, do the UVs form a polygon into which textures are fitted?

K^2
  • K^2

    Vidi Vici Veni

  • Moderator
  • Joined: 14 Apr 2004
  • United-States
  • Most Knowledgeable [Web Development/Programming] 2013
    Most Knowledgeable [GTA Series] 2011
    Best Debater 2010

#2

Posted 11 February 2012 - 09:18 PM

I'll start with the second question. Yes, the uv coordinates of the 3 vertices that form a polygon on the model form a 2D triangle on the texture. Each pixel of the polygon that ends up rendered to the screen corresponds to some point within that triangle on the texture. Because neighboring polygons will typically have neighboring triangles on the texture, the process of mapping polygons to texture triangles is often referred to as unwrapping. It really does look like the mesh of the model has been unwrapped and flattened over the texture.
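Here is a rough sketch of what that per-pixel lookup amounts to. The types and the barycentric weights are made up for illustration, and real hardware also does perspective-correct interpolation and filtering, which are skipped here.

CODE
// Sketch: a pixel inside a screen triangle picks up its texel through
// the uv coordinates interpolated from the triangle's three vertices.
struct Vertex { float u, v; };                  // uv from the model
struct Color  { unsigned char r, g, b, a; };    // one texel

Color SampleTexture(const Color* texels, int texWidth, int texHeight,
                    float u, float v)
{
    // Point sampling: uv in [0,1] maps straight to a texel index.
    int x = (int)(u * (texWidth  - 1) + 0.5f);
    int y = (int)(v * (texHeight - 1) + 0.5f);
    return texels[y * texWidth + x];
}

Color ShadePixel(const Vertex& a, const Vertex& b, const Vertex& c,
                 float w0, float w1, float w2,  // barycentric weights of the pixel
                 const Color* texels, int texWidth, int texHeight)
{
    // Interpolate the pixel's uv from the three vertices...
    float u = w0 * a.u + w1 * b.u + w2 * c.u;
    float v = w0 * a.v + w1 * b.v + w2 * c.v;
    // ...which is a point inside the 2D triangle on the texture.
    return SampleTexture(texels, texWidth, texHeight, u, v);
}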


Now, onto the first question. In principle, especially if you are writing your own shader code, you can transform coordinates any way you like. Canonically, however, there are 3 transforms that a coordinate from a model vertex passes through before it becomes a 2D coordinate on the screen.

World transform: Takes the vertex coordinate from the model's coordinate space to world coordinate space.
View transform: Takes the coordinate from world space to view space. (Makes it relative to the camera's coordinates, in other words.)
Projection transform: Projects the coordinate into screen coordinates.

All of these are actually performed as 4-dimensional transforms. The vertex starts out as a 4D vector: r = (x, y, z, 1). It is then multiplied by the 3 transform matrices.

r' = r * W * V * P.

The final vector r' has the form (x', y', depth*distance, distance). As a last step, the rendering hardware divides it through by the last component to give you (x'/distance, y'/distance, depth, 1). The first two components are the actual screen coordinates. Depth is a value between 0 and 1 that will be used for the depth test. For each pixel within a triangle, the depth value is interpolated from the depths of the 3 vertices, and if depth testing is enabled, which it usually is, only pixels that are closer to the camera than those already rendered at the same coordinates will be drawn.
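Spelled out in code, the whole chain looks roughly like this. Vec4/Mat4 are minimal made-up types, not any particular math library, using the same row-vector convention as above.

CODE
// Sketch of the full vertex transform, row-vector convention (r' = r * M).
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

Vec4 Mul(const Vec4& r, const Mat4& M)          // row vector times matrix
{
    Vec4 o;
    o.x = r.x*M.m[0][0] + r.y*M.m[1][0] + r.z*M.m[2][0] + r.w*M.m[3][0];
    o.y = r.x*M.m[0][1] + r.y*M.m[1][1] + r.z*M.m[2][1] + r.w*M.m[3][1];
    o.z = r.x*M.m[0][2] + r.y*M.m[1][2] + r.z*M.m[2][2] + r.w*M.m[3][2];
    o.w = r.x*M.m[0][3] + r.y*M.m[1][3] + r.z*M.m[2][3] + r.w*M.m[3][3];
    return o;
}

Vec4 ToScreen(const Vec4& r, const Mat4& W, const Mat4& V, const Mat4& P)
{
    // r = (x, y, z, 1): model-space position with the extra 1.
    Vec4 rp = Mul(Mul(Mul(r, W), V), P);        // r' = r * W * V * P
    // Perspective divide: (x'/distance, y'/distance, depth, 1).
    rp.x /= rp.w;  rp.y /= rp.w;  rp.z /= rp.w;  rp.w = 1.0f;
    return rp;                                  // x, y in [-1, 1]; z is the depth
}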

Ok, so I probably should explain a bit more about the transform matrices. Let's start with the world transform. It is a 4x4 matrix with the following structure.

CODE
Rxx Rxy Rxz 0
Ryx Ryy Ryz 0
Rzx Rzy Rzz 0
Tx  Ty  Tz  1


The R components describe a rotation around the model's origin. You can think of it as a separate 3x3 matrix R. With the row-vector convention used here, (Rxx, Rxy, Rxz) is a unit vector which describes the orientation of the model's X axis in world coordinates; the next two rows do the same for the Y and Z axes respectively. Furthermore, the columns, such as (Rxx, Ryx, Rzx), are also unit vectors, and you can use them for the inverse transform. These are properties of an orthogonal matrix. That means that if you transpose R, which I'll mark as R', you can invert the rotation. If r' = r*R, then r = r'*R'.

This leaves the (Tx, Ty, Tz) component. If you are familiar with matrix multiplication, you'll notice that (x, y, z, 1) * W = ((x, y, z) * R + (Tx, Ty, Tz), 1), with the last component remaining 1. So (Tx, Ty, Tz) is the translation component. In fact, these are the coordinates of the model's origin in world coordinates.
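To make that concrete, here is a sketch of building such a world matrix from a rotation about the Y axis and a position. The rotation direction is just one possible convention; the point is the layout: rotation in the top-left 3x3 block, translation in the bottom row.

CODE
#include <math.h>

struct Mat4 { float m[4][4]; };

// Sketch: world matrix from a yaw angle and a world-space position.
Mat4 MakeWorld(float yaw, float tx, float ty, float tz)
{
    float c = cosf(yaw), s = sinf(yaw);
    Mat4 W = { { {   c, 0.f,  -s, 0.f },      // model's X axis in world space
                 { 0.f, 1.f, 0.f, 0.f },      // model's Y axis in world space
                 {   s, 0.f,   c, 0.f },      // model's Z axis in world space
                 {  tx,  ty,  tz, 1.f } } };  // model's origin in world space
    return W;
}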

The view matrix has an identical structure. The difference is that the rotation describes the orientation of the world's axes relative to the camera, and the translation part is the world's origin in the camera's coordinates. The typical situation is that the camera is described as an entity in the world, so it will have the same kind of world transform matrix associated with it as any model would. In that case, you can easily get the view transform from the camera's world transform by taking the inverse. The view transform matrix will then have the following components.

CODE
Rxx Ryx Rzx 0
Rxy Ryy Rzy 0
Rxz Ryz Rzz 0
Tx' Ty' Tz' 1


Notice that the R component was transposed. The translation has to be in the camera's coordinates, so you get it from the original translation like so.

(Tx', Ty', Tz') = (-Tx, -Ty, -Tz) * R'
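As a sketch, building the view matrix from the camera's world matrix then comes down to transposing the rotation block and re-expressing the translation. Mat4 is the same made-up type as above.

CODE
struct Mat4 { float m[4][4]; };

// Sketch: view matrix as the inverse of the camera's world matrix.
Mat4 ViewFromCameraWorld(const Mat4& C)         // C = camera's world transform
{
    Mat4 V = {};                                // start from all zeros

    // Transposed rotation block R'.
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            V.m[i][j] = C.m[j][i];

    // (Tx', Ty', Tz') = (-Tx, -Ty, -Tz) * R', where R' is exactly the
    // block we just stored in V.
    for (int j = 0; j < 3; ++j)
        V.m[3][j] = -(C.m[3][0] * V.m[0][j] +
                      C.m[3][1] * V.m[1][j] +
                      C.m[3][2] * V.m[2][j]);

    V.m[3][3] = 1.0f;
    return V;
}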

Finally, the projection matrix. I'm not going to get into too much detail, just mention what it does. The projection matrix has the following structure.

CODE
2*Zn/Vx 0       0     0
0       2*Zn/Vy 0     0
0       0       Q     1
0       0       -Zn*Q 0


Where Q = Zf/(Zf - Zn). Zn and Zf are the distances to the near and far planes respectively. Anything on the near plane will be rendered with depth 0; anything closer will not be rendered. Anything on the far plane will have a depth of 1; anything further will not be rendered. Vx and Vy are the width and height of your viewport at the near plane. Together, these describe the view frustum. It's like a pyramid with its top sliced off by the near plane, the far plane being the base of the pyramid, and the camera sitting where the apex of the pyramid would have been. Only things inside the frustum will end up on the screen.
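Filling that matrix in code is straightforward; here is a sketch using the same made-up Mat4 type as above, with the frustum parameters as arguments.

CODE
struct Mat4 { float m[4][4]; };

// Sketch: projection matrix from near/far distances and viewport size at Zn.
Mat4 MakeProjection(float Zn, float Zf, float Vx, float Vy)
{
    float Q = Zf / (Zf - Zn);
    Mat4 P = {};
    P.m[0][0] = 2.0f * Zn / Vx;
    P.m[1][1] = 2.0f * Zn / Vy;
    P.m[2][2] = Q;
    P.m[2][3] = 1.0f;       // copies the view-space distance into the last component
    P.m[3][2] = -Zn * Q;
    return P;
}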

Both OpenGL and DirectX have built-in functions that can generate the projection matrix for you based on your chosen parameter set. If you are rendering using the fixed pipeline, these 3 matrices will be set as parameters for rendering. If you are rendering using a shader, you will have the opportunity to pass all 3 as parameters to the shader and perform these transformations within the shader. However, like I said earlier, if you are writing your own shader, you can do whatever you want. A common thing to do is to multiply the view and projection matrices together and pass the product as a single argument to the shader.
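For the fixed-pipeline D3D9 case, setting this up looks something like the sketch below. It assumes an already-initialized IDirect3DDevice9* and the D3DX helpers; all the parameter values are made up.

CODE
#include <d3dx9.h>

void SetCameraMatrices(IDirect3DDevice9* device)
{
    D3DXMATRIX world, view, proj;

    // World transform: place the model 10 units down the Z axis.
    D3DXMatrixTranslation(&world, 0.0f, 0.0f, 10.0f);

    // View transform: camera at 'eye', looking at 'at'.
    D3DXVECTOR3 eye(0.0f, 5.0f, -15.0f), at(0.0f, 0.0f, 0.0f), up(0.0f, 1.0f, 0.0f);
    D3DXMatrixLookAtLH(&view, &eye, &at, &up);

    // Projection transform from field of view, aspect ratio and near/far planes.
    D3DXMatrixPerspectiveFovLH(&proj, D3DXToRadian(60.0f), 4.0f / 3.0f, 1.0f, 1000.0f);

    device->SetTransform(D3DTS_WORLD,      &world);
    device->SetTransform(D3DTS_VIEW,       &view);
    device->SetTransform(D3DTS_PROJECTION, &proj);
}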

Anyways, this is the full process of getting the 3D coordinates of a vertex from a model to its final location on the screen. The final coordinates run between -1 and 1 for both x and y, with (0, 0) being in the center of the viewport.
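If you ever need the actual pixel positions yourself, the last step is just a remap of that -1..1 range to the viewport size, roughly like this (the y flip is one common convention, with +1 at the top of the screen):

CODE
// Sketch: map final (-1..1) coordinates to pixel positions in a viewport.
void NdcToPixel(float x, float y, int width, int height, int* px, int* py)
{
    *px = (int)(( x * 0.5f + 0.5f) * (width  - 1));
    *py = (int)((-y * 0.5f + 0.5f) * (height - 1));   // flip y: +1 maps to the top row
}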

S.A. Lowell
  • S.A. Lowell

    Player Hater

  • Members
  • Joined: 24 Feb 2008

#3

Posted 15 February 2012 - 01:12 AM

For anyone interested but confused by what K^2 said, give this book a read: http://www.amazon.co...y/dp/1556229119 I read it 4-6 years ago and I can vouch that it's basically perfect for understanding a lot of the basics of game programming. It's extremely helpful.

K^2 is spot on with everything he said. Great post.

Swoorup
  • Swoorup

    innovator

  • Feroci Racing
  • Joined: 28 Oct 2008
  • Nepal

#4

Posted 24 February 2012 - 05:34 AM Edited by Swoorup, 24 February 2012 - 05:37 AM.

Thank you guys.

Is there any sort of book that covers the important aspects of D3D programming (DX9) and is not very lengthy to read?
I had one last time, but I can't really devote my time to reading 5 volumes of 600 pages each.

However, I found http://www.directxtu...asics/dx9B.aspx really simple and short.

But the problem is that it misses out a lot of the important bits in explaining the code itself.
Any suggestions?

EDIT: Also, can you tell me the difference between OpenGL and DirectX from a programming perspective? I mean, what differences exist between the two technologies? I know that OpenGL is platform independent whereas DirectX is only suited to Windows, and perhaps the Xbox!

K^2
  • K^2

    Vidi Vici Veni

  • Moderator
  • Joined: 14 Apr 2004
  • United-States
  • Most Knowledgeable [Web Development/Programming] 2013
    Most Knowledgeable [GTA Series] 2011
    Best Debater 2010

#5

Posted 24 February 2012 - 11:24 AM

This book is pretty good. It cuts a fairly nice middle ground between teaching you actual DirectX-related code and general 3D programming. It has lots of example code too, so you don't need to worry about getting stuck because you don't know enough programming to fill in the blanks.



