
WIP First Person Shooter

3 replies to this topic
ViperGTS-R

    Li'l G Loc

  • Members
  • Joined: 24 Jul 2002

#1

Posted 14 August 2007 - 11:34 AM

Just thought I'd show a project I'm working on at the moment. (I'm pretty sure this is the right forum.)

I'm also wondering if anyone here has any experience working on this sort of thing, especially with vertex and fragment shaders.

After finishing my degree, I decided to try and take some of the stuff I've learned and apply it by assembling, piece by piece, my own 3D engine.

This is all written in C++. I'm using the OpenGL API to draw everything on screen, and GLUT (the OpenGL Utility Toolkit) to handle window management and operating system interaction (mouse & keyboard, menus, etc.).

I'm not really into downloading and using massive libraries that do all the cool graphics and game dev stuff for you, as you don't learn anything that way. So where I can, I go out of my way to write my own code, even if it may be slightly slower.

Progress has been a bit slow recently because I have been implementing normal mapping in the engine. This gave me several problems, all of which were finally solved in the last few days.

These were:
- Passing variables, like the position of the light(s), to the shaders running on the graphics card.
- Calculating the variables required to convert each vertex from world space into tangent space, in order to make normal mapping work.
- Debugging the shaders when they don't work: they either work or they don't, and it can be a challenge to get the graphics card to tell you why. (A rough sketch of the first and last of these follows.)
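
Here's roughly what those two boil down to in code. This is a sketch, not my exact code; shaderProg, vertShader and the lightPos uniform name are just for illustration, and it assumes a GL 2.0 context with the program already compiled and in use:

CODE
// Passing a light position to the shaders: look the uniform up once
// after linking, then set it whenever the light moves.
GLint loc = glGetUniformLocation(shaderProg, "lightPos");
glUniform3f(loc, light.x, light.y, light.z);

// Debugging: the driver will tell you why a shader failed to compile,
// but only if you explicitly ask for the info log.
GLint ok;
glGetShaderiv(vertShader, GL_COMPILE_STATUS, &ok);
if (!ok) {
    char log[1024];
    glGetShaderInfoLog(vertShader, sizeof(log), NULL, log);
    printf("vertex shader compile failed:\n%s\n", log);
}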

The engine currently has the following features:
- Fully working first-person camera
- Parallax normal mapping (as seen in F.E.A.R.)
- GLSL shader support
- Multiple, varying-sized meshes imported from 3ds Max in .ase format
- Axis-aligned bounding box collision detection (a bit buggy at the moment; the basic overlap test is sketched after this list)
- Object class
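
The overlap test itself is the simple part (the bugs are all in keeping the boxes up to date). A minimal sketch, with made-up names:

CODE
// Two axis-aligned boxes intersect only if their extents overlap on
// all three axes; one separating axis is enough to rule it out.
struct AABB { float min[3], max[3]; };

bool Overlaps(const AABB& a, const AABB& b)
{
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i])
            return false;   // separated along axis i
    return true;
}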

Screenshots: [two screenshots were posted here]

As you can see, it's pretty basic at the moment. But now that I have solved the shader and normal map problems, the door is open to all sorts of stuff.

Stuff I would like to add:
- Reflective/refractive water shader
- HDR rendering & lighting
- Depth of field
- Octree collision detection
- Keyframe animation (I have code for this, I just need to merge it into this engine)
- Material files: a text file that stores all the info a shader needs to display an object correctly, and that can be imported without recompiling (see the sketch after this list)
- Object files: same as above, only storing all the info for a mesh, including its related material file
- Import .3ds files
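
To give an idea of what I mean by a material file, something along these lines. The format is entirely hypothetical; every key and path here is made up:

CODE
# wall_brick.mat (hypothetical example)
vertex_shader    shaders/parallax.vert
fragment_shader  shaders/parallax.frag
texture diffuse  textures/brick_d.tga
texture normal   textures/brick_n.tga
texture height   textures/brick_h.tga
specular_power   32.0
parallax_scale   0.04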

Is anybody else working on, or has anybody worked on, anything like this? If so, any tips or suggestions?


K^2

    Vidi Vici Veni

  • Moderator
  • Joined: 14 Apr 2004
  • United-States
  • Best Poster [Programming] 2015
    Most Knowledgeable [Web Development/Programming] 2013
    Most Knowledgeable [GTA Series] 2011
    Best Debater 2010

#2

Posted 14 August 2007 - 09:55 PM

In OpenGL, you can use a little trick to do normal mapping on the fixed pipeline. There is something called texture combiners. If you set OpenGL to combine a normal map texture with a constant vector (the direction to the light, relative to the map) using a DP3 (dot-product) combine, you essentially get diffuse lighting with a normal map. Of course, if you don't care about the fixed pipeline, you don't need to worry about it.
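
If you have never touched combiners, the setup is only a handful of calls. A sketch, with lx/ly/lz standing in for your tangent-space light direction:

CODE
// Pack the light direction into the combiner's constant color the same
// way the normals are packed into the map: [-1,1] mapped to [0,1].
GLfloat packedDir[4] = { 0.5f + 0.5f*lx, 0.5f + 0.5f*ly, 0.5f + 0.5f*lz, 1.0f };
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, packedDir);

// Combine the bound normal map with that constant via a DOT3.
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);   // normal map texel
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_CONSTANT);  // light direction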

Are you writing your shaders in Cg or assembly? And which shader versions are you supporting?

On the engine itself, I'm not sure how much octrees are going to help you with dynamic objects, unless you write an extremely flexible build-as-you-test octree for collision testing. For static geometry, why not just BSP the whole thing? It resolves a number of transparency issues, which you are bound to run into if you want to play with alpha maps, and you can save a load of time on dynamic-object-to-map collisions. For water effects, make sure you render the water after you have rendered everything solid. The same goes for heat shimmers and similar effects; in fact, you can just use the water shader for them as well.

Also, why don't you throw in dynamic shadowing? You seem to have most of the platform you'd need for it built anyway. All you really need to add is a shadow volume algorithm; the rest is trivial if you choose to run stencil shadows.

ViperGTS-R

    Li'l G Loc

  • Members
  • Joined: 24 Jul 2002

#3

Posted 15 August 2007 - 07:34 PM Edited by ViperGTS-R, 15 August 2007 - 07:40 PM.

Cheers for the info,

I've printed off a big document on BSP trees, which I'll pore over for a while; I don't really know a lot about them. (I think I wasn't at the lecture that explained them.)

My shaders are written in the OpenGL Shading Language (GLSL), which I think is similar, if not identical, to Cg. I'm not using assembly; haven't assembly shaders become a bit redundant recently?

As far as the fixed pipeline goes, I'm trying to avoid it, really; I want to implement everything through shaders where I can. It seems to me that the built-in OpenGL lighting system is very rigid and can't really deliver what I'm after without bulky, complicated code. Plus, isn't GLSL generally quicker than most fixed-pipeline solutions? I'm also getting the impression that it's a good idea to go for the newer OpenGL 2.0/2.1 approaches where you can, as they might be axing some older techniques in the next version of OpenGL.

At this point I'm looking into streamlining my engine and adding more object orientation where I can. I want to implement HDR next; it seems fairly straightforward, really: you use a framebuffer object to render the frame into a texture, apply that texture to a screen-sized quad, then run a GLSL fragment shader on that quad to do the tone mapping that brings the colour range from HDR down to LDR.
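
Roughly what I have in mind, as a sketch only; hdrFbo, DrawScene and the exposure uniform are made up, and I'm using Reinhard's curve purely as an example of a tone mapping operator:

CODE
// Pass 1: render the scene into a floating-point texture through an FBO.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, hdrFbo);
DrawScene();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

// Pass 2: draw the screen-sized quad with a tone mapping fragment shader.
const char* toneMapFrag =
    "uniform sampler2D hdrScene;                              \n"
    "uniform float exposure;                                  \n"
    "void main() {                                            \n"
    "    vec3 c = texture2D(hdrScene, gl_TexCoord[0].st).rgb; \n"
    "    c *= exposure;                                       \n"
    "    c = c / (c + vec3(1.0));  // squash HDR into [0,1]   \n"
    "    gl_FragColor = vec4(c, 1.0);                         \n"
    "}";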

I should think my next move will be shadow volumes, as you describe. I've had a brief look into this already, but I've put it on the back-burner for now.

This would allow me to implement a multi-pass system, which means I could do stuff like depth of field more easily.

My engine currently has the following classes (rough skeletons below):
- Object class: stores only one thing, a position in 3D world space

- Camera class: child of Object; has all the info and functions for moving and rotating the camera correctly

- Mesh class: child of Object; handles the storage of everything a mesh could have attributed to it (texture pointers, display lists, vertex & face info, normals, etc.)

- vector3f class: downloaded off the net for simplicity, so I didn't have to re-invent the wheel. It just stores three floats representing a vertex, a vector, etc., and has all the functions you could ever need, like dot product, cross product and normalization

- Then I have main.cpp, which binds everything together.
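
In skeleton form it looks something like this; the member names are illustrative rather than the real code:

CODE
class Object {
public:
    vector3f position;       // the one thing every object has
};

class Camera : public Object {
public:
    void Move(const vector3f& delta);
    void Rotate(float yaw, float pitch);
    // view setup, etc.
};

class Mesh : public Object {
public:
    GLuint displayList;      // compiled geometry
    GLuint* textures;        // texture handles
    // vertex, face and normal arrays, and so on
};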

I need to break up the Mesh class because it's very bulky, so I need to add:
- Texture class (stores/imports a texture for an object)
- Shader class (vertex and fragment shaders for a single effect, shader handles, etc.)
- Light class (will represent a light source)

I'm not quite sure of the best way to approach object orientation for a game engine, but I'd imagine it works something like the way I'm thinking of doing it.

K^2

    Vidi Vici Veni

  • Moderator
  • Joined: 14 Apr 2004
  • United-States
  • Best Poster [Programming] 2015
    Most Knowledgeable [Web Development/Programming] 2013
    Most Knowledgeable [GTA Series] 2011
    Best Debater 2010

#4

Posted 15 August 2007 - 08:23 PM Edited by K^2, 16 August 2007 - 01:05 AM.

BSP trees are trivial. Each node is a triangle, and there are two child branches: above and below the plane of the triangle at the current node (use the normal vector to test which side you're on). Caveat: you can't have a triangle that is cut by the plane of any of its parents, so you'll need an algorithm to split such a triangle into smaller triangles, each of which belongs to only one half-space. To render a node, first render the child sub-tree on the opposite side of the current node's plane from the camera, then the current node's triangle, then the sub-tree on the camera's side. Result: if one triangle overlaps another in the projection, the top triangle is rendered after the bottom one, with virtually no overhead. That matters because all your alpha-blended transparency is order-dependent.
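
In code, the render half of it is about this much. Triangle, Vec3, Dot and DrawTriangle stand in for whatever you already have:

CODE
struct BSPNode {
    Triangle tri;        // the splitting triangle at this node
    BSPNode* front;      // sub-tree on the normal's side of its plane
    BSPNode* back;       // sub-tree behind it
};

void RenderBSP(const BSPNode* node, const Vec3& camera)
{
    if (!node) return;
    // Which side of this node's plane is the camera on?
    float d = Dot(node->tri.normal, camera - node->tri.vertex[0]);
    const BSPNode* nearSide = (d >= 0.0f) ? node->front : node->back;
    const BSPNode* farSide  = (d >= 0.0f) ? node->back  : node->front;
    RenderBSP(farSide, camera);   // far half-space first...
    DrawTriangle(node->tri);      // ...then this node's triangle...
    RenderBSP(nearSide, camera);  // ...then the half-space we are in
}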

When working with objects, don't overdo it. Objects are great, but extra pointer chasing in an inner rendering loop can easily cost you some percentage of the frame rate.

Speaking of unneeded pointer chasing, make sure that none of the member functions of your vector3f class are virtual. Ordinary (non-virtual) member functions cost nothing per instance, but a single virtual function makes every vector3f carry a hidden pointer to a function table, which a) costs an extra memory access before the call and b) wastes space in every single vector, which can be a lot if you build a mesh out of them. The only data in a vector3f should be the three floats storing the vector components. As an added bonus, if all you've got in a vector3f is three floats, then an array of n vector3f is laid out exactly like an array of 3n floats. So any function that takes an array of floats, interpreting them as 3-component vectors, will also be able to take an array of vector3f, as long as you typecast it.
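
Concretely, what that buys you (Submit is a throwaway example of a consumer):

CODE
#include <GL/gl.h>
#include <cassert>

struct vector3f {
    float x, y, z;
    float dot(const vector3f& o) const { return x*o.x + y*o.y + z*o.z; }
    // cross, normalize, etc. are fine too, as long as nothing is virtual
};

void Submit(const vector3f* verts, int n)
{
    // No virtuals means no hidden vtable pointer, so an array of n
    // vector3f is laid out exactly like 3n tightly packed floats:
    assert(sizeof(vector3f) == 3 * sizeof(float));
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, reinterpret_cast<const float*>(verts));
    glDrawArrays(GL_TRIANGLES, 0, n);
}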

Edit:
QUOTE
I want to implement HDR next; it seems fairly straightforward, really: you use a framebuffer object to render the frame into a texture, apply that texture to a screen-sized quad, then run a GLSL fragment shader on that quad to do the tone mapping that brings the colour range from HDR down to LDR.

What do you do to estimate how much brighter/dimmer you need to make it? I'd imagine you want to measure the average luminance, but the only way I can think of doing that is to copy the whole frame back to RAM and sample some random points, which is bound to be inefficient.



