DirectTrace 0.9
The Basic DirectTrace pipeline


A) Whole Pipeline

The pipeline to follow when using the library is as follows:

1- Setting up the library.

2- Streaming the scene to the library using OpenGL like commands. This includes several sub-steps, as described later.

3- Setting up initial rays, usually frustum or light-source rays. Note that steps 2 and 3 are independent and can be inverted.

4- Execute the Ray/Scene intersector.

5- Use intersector results to perform deferred shading and/or generate new rays.

6- Return either to step 4 if shading requires it, or to step 2 if you need to modify the scene.


Note that this pipeline is designed to handle large batches of rays and samples at a time during the various operations. This minimizes the cost of calls to the API and improves the efficiency of some of the implemented algorithms. So make sure that rays are not processed one by one in your code! Also, check that the functionalities you are using are available, as some specifications may not be accessible yet. See the functionality grid for more information.


B) Detailed Pipeline

Note that some of the specifications described here may not be available in the current library. The SDK code should be fully functional, though.

1- Setting up the library.

For that, you need to make sure that the relevant header is included, i.e., #include "DirectTraceAPI.h".
On the Windows platform, the DirectTrace DLLs (e.g., DirectTrace64.dll) should be located in an appropriate directory as well, and OpenGL should be installed on your system.
An OpenGL application should ideally be created when working with DirectTrace. In this application, you should only need to insert the following code:

#include "DirectTraceAPI.h"

In most applications, a single dtAPI object should be instantiated (e.g., declared as a global variable), as the constructor loads the DLL and creates a unique context for the application. If your code leaves the scope of the dtAPI C++ variable or deletes the object, then all the resources related to this object, including rays, scenes, and images, will be deleted.
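
For illustration, a minimal setup could look as follows (a sketch only: the class name DTAPI is an assumption here, chosen for consistency with DTScene and DTRayBuffer; check DirectTraceAPI.h for the actual type and constructor signature):

#include "DirectTraceAPI.h"

DTAPI dtAPI; //assumed type name; declared globally so the unique context lives for the whole application

int main()
{
    //...create an OpenGL context/window here, then stream scenes and trace rays using the API...
    return 0;
}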

2- Streaming the scene.

The DirectTrace API provides the user with simple ways to store information related to 3D scenes through lists that are managed internally, and therefore hidden from programmers.
Streaming a scene is a simple operation that can be decomposed into 3 stages.

Stage 1: Specifying Materials

Unless you decide to use pre-defined materials (a feature not available yet), you should first define the materials you are going to use in your application. Geometrical and texture data aside, there are only 3 different types of data/properties you can specify for a scene material: properties per vertex (not actually supported yet), properties per primitive, and properties per material. A simple material can be created by writing:

DTScene scene(dtAPI);
int bytesPerPrimitive=3*sizeof(float);//3 floats associated with each primitive for flat shading.
int bytesPerVertex=0; //No data per vertex. Check the availability of per-vertex properties before usage.
int bytesPerMaterial=0; //No data per material. Some (fixed-size) material property can be specified if needed.
int materialId=scene.NewMaterial(bytesPerMaterial,bytesPerPrimitive,bytesPerVertex,0,0);//Notice the need for the first three integers here. This will be crucial when streaming properties. The last two integers are hints at memory space requirements.
//Now use one of the scene.MaterialAttrib functions to set up the data.

Therefore, creating a new material requires prior knowledge of three sizes: the amount of data stored for every primitive, the amount of data stored for every vertex, and obviously the amount of data per material. Vertex properties are only required if you choose to stream vertex-indexed triangles, which avoids duplicating content that is shared by primitives, such as vertex coordinates and normals. Indexed vertices are included in the specifications, but not available yet.
Specifying these three sizes represents an important deviation from OpenGL or DirectX. Rasterization techniques process primitives one by one, so the amount of memory needed for primitive properties can change from one primitive to the next. In the ray-tracing case, the entire scene may need to be stored. As a consequence, primitives with heterogeneous properties cannot be handled as easily, hence the need to create materials specifying these sizes and to link them to a given primitive type.
Also note that in order to optimize memory allocation, it is best to specify the number of primitives and vertices that will be related to a specific material, so that the impact of resizing operations is minimized. The API does not allow easily retrieving an id for a primitive you have added to a scene. If your program requires this knowledge later on, we suggest that you declare a primitive property, for instance as follows:

typedef struct {float red; float green; float blue; float alpha;int id;} MyPrimitiveData;


After a call to the Intersector procedure, a pointer to such a material data structure will be accessible inside a RayBuffer shader. Depending on the intersected primitive, the pointed-to data structure could be either a MyPrimitiveData or any other type of declared primitive material. The programmer will have to make sure that the data structure type can be recognized in the shader (for instance by adding a type constant to all structures)!
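
For instance, the type constant idea could be sketched as follows (the type field, its values, and the second structure are purely illustrative):

typedef struct {int type; float red; float green; float blue; float alpha; int id;} MyPrimitiveData; //streamed with type==0
typedef struct {int type; float reflectance;} MyMirrorData; //streamed with type==1
//Inside the shader, read the leading int first to decide how to interpret the rest of the pointed data.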



Stage 2: Specifying Vertex Properties (Optional, Vertex Properties not supported yet)

This stage is optional, and can occur after Stage 3 if your program does not use indexed coordinates (see the section on indexed coordinates), which specify the coordinates of your vertices as vertex properties. Also, check that this feature is supported by the DirectTrace driver in use.
For this stage, all that is needed is to specify properties for each vertex of your scene, whatever the shape of the assembled primitive is. The number of vertices for a given primitive is fixed (e.g., triangles have 3 vertices). However, different types of primitives can point to the same vertices, provided that the material used is the same. Don't forget that the order of vertices is important and defines an implicit numbering, which will be needed later when you reference the vertices. Also, this numbering is not unique, but relative to the material in use. So be careful there, as this may generate bugs in your program later on.

A simple usage is this one:

typedef struct {float x; float y; float z; float w;} MyVertexData;
typedef struct {float red; float green; float blue; float alpha;} MyPrimitiveData;
MyVertexData data,data2,data3;
...
int materialId=scene.NewMaterial(0,0,sizeof(MyVertexData),0,0);//Note the per-vertex size passed here. This will be crucial when streaming properties.
int materialId2=scene.NewMaterial(0,0,sizeof(MyVertexData),0,0);//Same vertex layout, but a distinct material, and hence a distinct vertex numbering.
scene.Begin(DT_VERTEX, materialId);
scene.PrimitiveAttrib(&data, sizeof(MyVertexData));//Enter data for vertex 0 (first material)
scene.PrimitiveAttrib(&data, sizeof(MyVertexData));//Enter data for vertex 1 (first material)
scene.Vertex3fv((float *) &data3);
scene.End();
scene.Begin(DT_VERTEX, materialId2);//change material, and hence the vertex type!
scene.PrimitiveAttrib(&data2, sizeof(MyVertexData));//Enter data for vertex 0 (material with id materialId2 this time)!
scene.Vertex4fv((float *) &data3);
scene.End(); // the begin/end section can be restarted if needed!

In some cases (e.g., normals), some attributes are related to the modelview matrix. It is therefore necessary to enter some elements of the vertex data separately. This can be done using the PrimitiveAttribMV3fv and PrimitiveAttribMV4fv functions. Just make sure that your application transfers the right amount of bytes to the API, for instance as follows:

scene.Begin(DT_VERTEX,materialId);
scene.PrimitiveAttribMV3fv((float *) &data);//Passes 3 floats, multiplied by the modelview matrix, to vertex 0 (first material materialId)
scene.PrimitiveAttrib(&data.w, sizeof(float));//Passes the last float to vertex 0, as 4 floats per vertex are needed for this material.
scene.Vertex3fv((float *) &data3);
scene.End(); // the begin/end section can be restarted if needed!

Note: Check for support of the vertex indexing feature! It may not be available in your version of the API.



Stage 3: Streaming Primitives (Compulsory)

Full primitives can now be specified/assembled using OpenGL-like code. Here again, the implicit order of the triangles will be relevant for numbering them later on.
Example:

MyPrimitiveData pdata={...};
int materialId3=scene.NewMaterial(sizeof(MyPrimitiveData),sizeof(MyPrimitiveData),0,0,0);//Create a new material with per-material and per-primitive data.
scene.MaterialAttrib(materialId3,&pdata,sizeof(pdata));
float vertices[3][3]={...};
scene.Begin(DT_TRIANGLES,materialId);//material created in Stage 1
scene.Vertex3fv(vertices[0]);//coordinates of vertex #0, triangle #0 of this material
scene.Vertex3fv(vertices[1]);//coordinates of vertex #1
scene.Vertex3fv(vertices[2]);//coordinates of vertex #2
scene.End();
scene.Begin(DT_TRIANGLES,materialId3);
scene.PrimitiveAttrib(&pdata,sizeof(MyPrimitiveData));//second argument optional, as the entire structure is passed
scene.Vertex3fv(vertices[0]);//coordinates of vertex #0, triangle #0 of material materialId3
scene.Vertex3fv(vertices[1]);//coordinates of vertex #1
scene.Vertex3fv(vertices[2]);//coordinates of vertex #2
//another triangle can now be streamed, and the primitive attributes from the previous triangle can be reused if needed!
scene.Vertex3fv(vertices[0]);//coordinates of vertex #0, triangle #1 of material materialId3 (the same coordinates are streamed again here)
scene.Vertex3fv(vertices[1]);//coordinates of vertex #1
scene.Vertex3fv(vertices[2]);//coordinates of vertex #2
scene.End();

If you want to apply the modelview matrix to your primitives, you will need to set the OpenGL modelview matrix outside of a begin/end section, and call functions like VertexMV3fv. Note that only the transformed coordinates are stored; there is therefore no way to recover the original coordinates of a 3D primitive. Primitive and vertex attributes can also be transformed, using for instance functions like PrimitiveAttribMV3fv or VertexAttribMV3fv.
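
For example (a sketch reusing the material and vertices from the example above; the translation values are arbitrary):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.f,0.f,-5.f);//standard OpenGL calls, outside of any begin/end section
scene.Begin(DT_TRIANGLES,materialId3);
scene.PrimitiveAttrib(&pdata,sizeof(MyPrimitiveData));
scene.VertexMV3fv(vertices[0]);//stored after multiplication by the modelview matrix
scene.VertexMV3fv(vertices[1]);
scene.VertexMV3fv(vertices[2]);
scene.End();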

3- Setting up rays.

First, you will need to declare a ray buffer. Ray buffers contain a lot of information, such as the positions and directions of rays, the intersected primitive, and the t-value of the ray at the intersection!

int width=640;
int height=480;
DTRayBuffer rays(dtAPI);
rays.Resize(width,height);//set the size of the ray matrix;
rays.SetFrustumRaysFromGLProjectionMatrix(); //Function may suffer from limited precision. Use SetFrustumRaysFromPyramidF if needed.

The other way to generate frustum rays is the SetFrustumRaysFromPyramidF function, which requires five 3D points defining the frustum:

float farFrustumPoints[4][3]={...};
float center[3]={...};
float *corners[4]={farFrustumPoints[0],farFrustumPoints[1],farFrustumPoints[2],farFrustumPoints[3]};
rays.SetFrustumRaysFromPyramidF(center, corners);

4- Intersecting rays with scenes.

This can be executed very simply by writing:

scene.Intersector(rays);

The RayBuffer object will then be modified accordingly, and only rays that have successfully intersected a given object will be processed by subsequent calls to RayBuffer shaders. The hardware-accelerated OpenCL intersector can also be used:

scene.IntersectorCL(rays);

5- Deferred Shading and Generating new Rays.

From this point, the API provides you with many tools to perform shading and recast rays if needed. For instance, you can declare an image and set it to black by writing:

DTImage image(dtAPI);
image.Resize(width,height); //4 floats per pixel by default. Note that the width and height should match those of your ray buffer if the two are related for shading or other purposes.
float black[4]={0.f,0.f,0.f,0.f};
image=black; //Each pixel should now be equal to 0.

Apart from using shaders on the image to perform a specific job, you can also call numerous useful functions for an image (see the sketch after this list), like:

  • operator=
  • operators +, -, *, /
  • ABSImage: absolute value of the pixels
  • ClampImage: clamp values between 0 and 1
  • Fourier operators (check availability)
  • MapImageToWindow, which maps the image to an OpenGL window
  • etc...
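
As an illustration, a typical sequence could be sketched as follows (the exact signatures of ClampImage and MapImageToWindow should be checked against the DTImage specifications):

image.ClampImage();//clamp all pixel values to [0,1] before display
image.MapImageToWindow();//map the result to the current OpenGL window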


After computing the intersections, you can also perform several operations on rays. The following piece of code generates normals from the scene, and visibility rays toward a punctual (point) light source:

DTRayBuffer normals(dtAPI);
DTRayBuffer secondaryRays(dtAPI);
DTImage visibility(dtAPI);
float lightSourcePos[3]={...};
normals=rays;//automatic resize if needed!
normals.NormalsAtIntersections(scene,rays);//Get the true normals at the intersections.
rays.MoveRaysToIntersections(-0.000001f,false);//move ray starting points to the intersection. Function is now deprecated. Use shaders instead.
secondaryRays.PunctualLightSourceToIntersectionRays(rays,normals,lightSourcePos,1,false);//Function is now deprecated. Use shaders instead.
scene.Intersector(secondaryRays);
visibility.VisibilityMask(secondaryRays,rays);//final result. Returns 0 for pixels where the intersections do not correspond, 1 where they do. The original rays are required to enhance precision.

Many other operations are available for ray buffers. Please check the DTRayBuffer specifications for additional details.

6- Return to step 2, 3 or 4

Once a tracing+shading cycle is completed, you can return to step 2, 3, or 4 as you wish. You may reuse the last intersection results to, for instance, generate new rays.
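
Put together, the overall structure of a simple multi-bounce loop could be sketched as follows (purely illustrative: the shading and ray-generation steps depend entirely on your application, and maxBounces is application-defined):

for (int bounce=0; bounce<maxBounces; ++bounce)
{
    scene.Intersector(rays);//step 4
    //step 5: deferred shading using the intersection results, accumulating into 'image'
    //step 5: generate the rays for the next bounce into 'rays' if needed
}
image.MapImageToWindow();//display the final image (check availability, as noted above)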