
The Hunt, the making of
10th February, 2010


This document is a making-of / tutorial for our latest technical demo called The Hunt.
The purpose of this demo is to present the capabilities of the ShiVa engine in terms of rendering, features and performance.






Overview


The content of this document corresponds to the 1.8 release of the ShiVa Editor.

Note that The Hunt source code is included in the Editor installer.

In the near future, a document about application publishing (web, standalone, Facebook) will also be available.

You must be familiar with the ShiVa Editor before reading this document.

You can find beginners tutorials on the Developer Website: http://www.stonetrip.com/developer

Documentation and concepts are available at: http://www.stonetrip.com/developer/doc/

Related links:

Stonetrip: http://www.stonetrip.com

The Hunt: http://www.stonetrip.com/the-hunt.html

The Hunt on Facebook: http://apps.facebook.com/zombies_hunter/

Graphics

Assets import

ShiVa handles the popular Collada (.dae) and DWF formats. Lightwave, Blender, SketchUp, Cinema 4D and many others all have a Collada exporter. We will now look in detail at the export/import workflow for three modelling programs:

3DSMax

The zombie character has been made in 3DSMax. Here are some points to keep in mind before creating / exporting a skinned mesh:

1. Character must have its pivot at the world origin (0, 0, 0).

2. Character must use the Skin modifier; the Physique modifier is not yet supported by the Collada exporter.

3. Character must have its transformations frozen (Reset XForm).

4. Character must not have a parent.

5. Skinning root must be included in the skinning.

6. A vertex cannot be affected by more than 4 bones.

7. Vertex morphing is not supported by the ShiVa importer.

8. Do not use the built-in Collada exporter but the one from Feeling Software (which is now open source):

- Download for Max 2009

- Download for Max 2010

9. You can download the ShiVa Tools for 3DSMax that will help you to fix any export problems: Download

To export static objects:

File > Export | Export Selected > Collada (not Autodesk Collada)

Then check Bake Matrices, Normals, Triangulate and Export.

To export animated object and animation:

File > Export | Export Selected > Collada (not Autodesk Collada)

Then check Bake Matrices, Normals, Triangulate, Enable export and Export.

Tips:

In some cases, the Collada exporter will not export identical key frames due to an optimization.

ShiVa needs all key frames. That is why the ShiVa Tools include a micro-randomize animation button: it samples the animation and adds a key on every frame with an invisible micro-variation, working around the exporter's optimization.

Maya

There should be no difficulty exporting from Maya; you just have to check a few things before exporting:

Objects must have a unique name and must be of a polygon type.

Transformations must be frozen and reinitialized (Modify > Reset / Freeze Transformations), then place the pivot.

You have to transfer the local TRS to global using the provided MEL script: call ResetShiVaPivot();

You can then export your model using the built-in Collada exporter; the 2010.1 version seems to be pretty good.

You can also use an unofficial build, available here, based on the OpenCOLLADA library (check Bake Transform, Normals, Texture Coordinates and YFov).

XSI

XSI comes with the best Collada exporter so you should not have any difficulties exporting a model from this software.

As in 3DSMax and Maya, transformations must be frozen, at least scale and translation.

To export an animated character, the best way is to use a biped rig with null shadowing on which the skinning will be done.

To export character animations, you will have to plot the biped animation to the null shadowing.

When you are ready, File > Crosswalk > Export (check for update), choose COLLADA 1.4.1 in the file type menu, check only Materials, animation, polygon meshes, XSI normals.

If you need to export animations, uncheck “filter keys” from the animation tab.

As for 3DSMax and Maya, vertex morphing will not be imported.

ShiVa

Now your models are ready to be imported into ShiVa.

Load the Data Explorer > Import > Model

Rotate by / Scale by: you can set a custom scale and rotation at import time if the model was not set up correctly in your DCC tool.

Tips:

To measure an object in Shiva, press the D key in the SceneViewer.

To save the programmer's time, your character must look down the -Z axis.

Resources are now auto-reloaded, so you can keep your model open and re-import it; it will then be reloaded automatically.

Filters: when you need to re-import the same model, you can uncheck some of the filters to avoid overwriting data, even though the system will ask for confirmation before replacing anything.

Use the skeleton transform option to fix the skeleton TRS if the exporter did not export it correctly.

Here is an import example:

Before repair

After repair

Tips:

To display the skeleton, in the scene viewer, select Display > Skeleton

Before repair, the skeleton is not properly bound to the mesh. You have to repair it in your DCC tool, but in some cases, checking the skeleton transform option at import time will be enough.

Lighting

You now have a bunch of models in your project models folder:

Take the time to rename your models with well-chosen prefixes in order to take advantage of the fast filter. We are now going to create an empty scene to check each model's size and materials.

Lights

Data Explorer > Create > Scene

Open the scene, create a ground model (Create > Model > Shape > Plane, ZX +Y) and drag and drop it into the scene at (0, 0, 0). Pick its material (hotkey M), set the ambient colour to (127, 127, 127) and check Receive Dynamic Shadows. Then drag and drop the "DefaultDynamicLightSet" model and one of your models into the scene.

Setup your desktop with the following modules:

Your object is still flat, so select its materials and check Receive Dynamic Lighting > Per Pixel or Per Vertex. Check Per Pixel only if you need specular and normal maps; otherwise check Per Vertex. The cost of vertex lighting is insignificant compared to pixel lighting.

Shadows

To activate shadows, you must choose a light to project them. In the SceneViewer, change the selection mode to single selection (hotkey Insert), because the "DefaultDynamicLightSet" is a group and we want to edit each light individually. Select one of the lights; in the Attributes Editor, Light pane, change the colour and check Project shadows. Note that, for now, only one light can cast shadows, but this will change with ShiVa 1.9.

You can change the shadow colour in the Ambience Editor > General > Shadow > Color

Now you know how to import models, and configure lights, shadows and materials.

Materials

In the Advanced version of ShiVa, you can batch-process all your materials at once. Select the objects in the scene, and in the Resources tab of the SceneExplorer, order the elements by type.
The selected materials are those used by the selected object(s) in the scene.
Drag and drop them into the MaterialEditor.

In the Material Editor > Edit > Batch Processing, select a preset or create your own

And after a good day of work, your project might look like this…

Tips:

Skybox: you can find some nice skyboxes all around the web (here is a link); you just have to convert a skydome to a skybox using your DCC tool and 6 cameras at 90°.

Materials: if you want a good frame rate, you have to optimize your models, and one key factor is the number of different materials. Reduce the number of materials, that is, the number of an object's subsets, as much as possible in order to reduce the number of engine draw calls.

Shadows: to get nice shadows, don't forget to use the Optimization rollup in the Ambience Editor to set the max and fade distances to sensible values; you will gain quality.

AO Baking

Lighting in this game is fully dynamic. For some static objects, we want to bake ambient occlusion.
You can do this directly inside ShiVa, using the built-in lightmap compiler.

Note: ensure that you are in scene mode and not model mode.

Example with the car:

With Ambient Occlusion

Without

Ambient Occlusion is computed as static Lighting. First you have to set up the materials of the car, Material Editor > Lighting > Receive Static Lighting > Per Vertex and set the ambient to something close to 127, 127, 127. Check also Cast Static Shadows to have the ground computed in the AO.

Try to use vertex color as much as possible as you can paint this directly inside ShiVa and it will not create additional textures.

To compute AO, open the Ambience Editor > Lighting, select the car and set the method to Vertex Color, check Compute Ambient Occlusion, play with the parameters and press Calculate.

Parameters for the car: quality 200, distance 10, bias 0.01, amount 1

You will notice that there are some artifacts. In the SceneViewer, select the car and press hotkey P to paint vertex colours. Choose or pick a colour on the model, and then lighten or darken the vertex colour to fix these artifacts.

Vertex colours are saved in the model, so right click on your car and choose Model > Update model from instance. From now on, editing the material's ambient colour will no longer affect the render, since we have baked the ambient.

Reflection

We will now see how to create real-time and fake reflection using a sphere-map (cube-maps are not yet available in ShiVa).

Sphere map: sphere map reflection is an old low cost technique, used to fake real reflection on convex and concave meshes.

Firstly, you have to find or generate a sphere-map. Here is a link to spherical environment maps.

In the ShiVa Advanced version, you can use the built-in ShiVa screenshot generator to create a cube-map render, then a sphere-map.

How to:

Open your scene, create a camera and point it at something interesting, then Tools > Take a screenshot > Cubic image set, size to 1024, bmp format and click OK.

In the Data Explorer, right click on your main project folder > Open in Explorer and open the “Screenshots” directory.

Here you will find your screenshots. You will now need an external tool to convert them to sphere-maps, download it here, and extract its contents into the Screenshots directory.

Load the picture in the right file chooser, set the size and generate…

You will have to do this once again when your scene is completed.

Import the generated texture into ShiVa, open the Material Editor and pick the material you want to reflect. In the Texturing pane, Effect Map 1, select your sphere-map, click on the Fx icon, enable the texture mapping modifier and set the generation mode to spherical.

Again, in the Texturing pane, set the effect to modulate.

You can change the intensity by moving the slider, but this has a rendering cost. You can achieve the same effect in Photoshop by playing with the contrast and brightness of your sphere-map.

Real-time: real-time reflection, refraction and render-to-texture need a RenderMap texture to work. We used a camera render to texture to simulate the sniper lens.

Camera render to texture:

Select your camera, Attributes Editor > Camera Attributes > Output, select the RenderMap

(To create a RenderMap, Data Explorer > Create > Resources > RenderMap)

In the Material Editor, select the RenderMap as Effect Map.

The camera outputs its render in a dynamic texture used in the lens material.

Reflection and refraction:

To enable reflection or refraction, you have to use a specific camera called a Reflector, Data Explorer > Create > Model > Reflector.

Drag and drop the reflector into your scene, then in Attributes Editor > Reflector Attributes, select your RenderMap in the reflection or refraction attribute.

Note that the reflector must look at -Z.

Open the material that will receive the reflection, assign the RenderMap in an effect map, open the texture mapping modifier, set the generation mode to Camera and set Base to -1 in the scaling modifier type V.

To add deformation in your reflection/refraction, put the RenderMap in effect map 0, a dudv noise texture in effect map 1 and set the effect to Noise.

A sample of a dudv texture: realdudv9bc.jpg

Terrain

You now have some models ready for the level design, but the terrain is still missing… We are now going to create the game area.

TIFF creation

We used a TIFF to create The Hunt's terrain. There are several tools dedicated to terrain generation: Terragen, GeoControl, EarthSculptor.

ShiVa imports 32-bit TIFF files, so once your terrain is ready to export, just select a 32-bit TIFF file format.

Import

Open your scene, Data Explorer > Import Height map, then Terrain Editor > Create

Create chunks from Height Map, select your height map and set your Height Max

Select the chunk size and the unit size to fit your needs, Apply to preview, then Ok to create.

Set the Gouraud view mode in the SceneViewer, because there is still no light on the terrain.

Editing

You now have a terrain, but maybe it is too high, or there are some areas to raise or lower.

To reduce global terrain height, edit the Height Map layer in the Geometry pane and reduce the amount.

Tips: To select terrain chunks, Scene Viewer > Selection >Terrain Chunk or directly in the Terrain Editor > Chunks pane.

To raise or lower terrain areas, select the chunks you want to edit, add a level geometry layer and select the targeted height: the selected area will fit the specified height, you can then use paint to change the intensity of the effect.

To reduce the layer effect, right click on the Level layer > select all affected chunks then click on the paint icon, set the intensity to 0, select a brush size and paint onto the terrain to reduce this effect.

When it’s done, Click Edit > Rebuild geometry to remove any painting artifacts.

You can also add Terrace, Erode and Noise layers to the terrain…

Lighting

Next we are going to add dynamic lighting and shadows to this terrain…

Terrain Editor > Materials, check receive dynamic Lighting and receive dynamic shadows.

Be sure to already have the dynamic light set in your scene, then change the view mode of the SceneViewer to textured mode (Display > View Mode > Textured).

Your terrain might look like this:

Later on we will see how to apply pre-computed Lighting

Textures

To add a texture layer, select the chunks you want to map, then in the Terrain Editor > Materials pane, add a new layer.

Select an albedo map (diffuse), a normal map, check the flag Disappear under roads if you need to, set height min, max and a texture scale. Here we set the dirt texture parameters to be displayed between -2m and 1m high.

Pay attention to the Height Max Soft parameter, used to blend the texture at the borders; it is a percentage, so you have to set your min and max very close to your needs.

Click OK and then Texture Preview.

Add as many layers as you want, playing with height min, max and slope… If the preview fits your needs, press the Production button and go get yourself a coffee.

When it’s done, click Edit > Rebuild missing low resolution texture to generate terrain mipmaps.

Road

We want to paint a road on the terrain, so first of all we have to be sure that at least one of our layers will "receive" the road texture.

In the Materials pane, add a new layer and make sure Disappear under roads is unchecked. Note that the layer order is very important: layers that you want underneath others must be placed at the top of the list.

Open the Roads Pane and add a new road, set the inner and outer width, the outer width will define the blending size.

Then select the road layer and click on the brush icon to start painting it in the SceneViewer.

Note that after painting, you can move and delete points but not add any.

The Terrain mesh will fit to the road, press Rebuild geometry to do the job.

To move road points, Scene Viewer > Select > Terrain road points.

Then select the chunks under the road and click on the Production button in the Terrain Editor > material pane to compute the terrain texture once again (we only selected the chunks that we wanted to re-compute textures for).
You can also add a bit of specular to increase the normal map effect.

Vegetation

Now let's talk about vegetation. You can manually add some kinds of billboards onto your terrain, or you can use the built-in grass manager.

Be sure to have some grass textures in your Texture folder. Select the chunks on which grass will grow, go to the Vegetation pane of the Terrain Editor and add a new layer.

Add some other layers to create different grass varieties.

Trees

To add some trees to your scene, add a custom layer in the Vegetation pane.

This layer type will ask for a model, so you must first create some tree models (Note: grouping is not allowed).

You can also add them manually by dragging and dropping the tree models onto the terrain.

Tips: a tree is very difficult to render correctly, because its foliage is made of a bunch of transparent textures. Here is a trick to make them beautiful.

Technique 1: set the leaves material with no depth-write. The render is nice, but you get some Z-sorting problems.

Technique 2: depth-write activated, with the alpha ref set to a strong value. No Z-sorting problems, but the render is cropped.

Technique 3: technique 1 + technique 2 combined: one foliage without depth-write and one with depth-write and alpha ref.

Tips:

When creating billboards or foliage, we enabled the double-sided option in order to render the billboard back face. This is not ideal: in terms of rendering time it costs exactly the same as creating a shell manually, and in terms of rendering quality the other side will not receive realistic lighting.

AO Baking

We will see here how to bake ambient occlusion for the terrain into its diffuse texture. You will actually do this later (when your level design is done), but it's a terrain thing, so…

Without AO

With AO

First, compute Ambient Occlusion.

In the Terrain Editor, set the ambient colour to white, the AO amount to 1, and the distance close to 4 (meters). To avoid hours of useless computation, select only the chunks you want to compute, then click Lighting Production.

Set the terrain ambient back to its previous value. This no longer affects the selected chunks, because we have just pre-computed their lighting. It is now time to bake the lighting into the diffuse texture, for two reasons: it saves a texture, and you get your lighting back.

Edit > Bake lighting in texture.

Level Design

We have almost created all the resources needed to create the game. It is now time to design the level and see how to manage objects together…

Object manipulation

Transformation tools: managing the position and orientation of objects relative to each other can be a hard road; fortunately, ShiVa provides some tools to help you with this task.

Gravity tool and teleport:

To use gravity, select an object, press Shift+Z and check Use gravity; then just move your object over the terrain and its position will follow the terrain height.

Teleport (Ctrl+T) is like a gravity cut-paste using the cursor

Gravity tool in action…

Snap to grid: the object will be constrained in X and Z to the grid, i.e. only unit by unit.

Soft transformation: When moving, rotating or scaling objects, if you press Shift, the object will be transformed with a smaller range, which is useful to precisely place objects relative to each other.

Instant copy: When moving, rotating or scaling an object, press Ctrl to duplicate and apply the transformation to the copy.

And after doing this several times…

Tips:

You can't change an object's pivot directly in ShiVa, but this trick works: create a dummy, move it to where you want the new pivot to be, select your object then the dummy, right click > Group selection, and parent to the last selected object.

Model management

The scene is a catalogue of instanced models. While editing a scene, you can update the model from the instance or update the instance from the model and create a model from a selection.

Create a model:

Select multiple objects and create a group, right click > Model > Save model as, your selection is now available as a model in the model folder.

To create a clean model, be sure that all objects selected are not model instances. To unbind an instance, right click > Model > Unbind model

Update model from instance:

Imagine that you have changed things in the instance, such as material assignment or static vertex colours, and you need to update the other instances. Right click on the modified instance, Model > Update model from instance, then update the other instances.

Update instance from model:

The model has changed and you need to update its instances… right click on the instances > model > update instance from model

Camera

This section covers how to manage cameras: creating a new one and editing attributes like FOV and depth blur.

You can access a list of all your cameras directly in the Scene Viewer under Display > Camera, from which you can create, remove and activate cameras. To edit camera attributes, select the camera in the Scene Explorer, or use Attributes Editor > Display > Active camera attributes.

You can then edit the attributes in the camera pane of the Attributes Editor

Depth of field

Set the distance and range to focus on the object or use the eye-dropper to auto-configure

FOV, Motion Blur and Velocity Blur

Tags

Tags give the developer a handle on objects in the scene. You can use a tag to attach extra information for the developers.

How to add a tag: right click on an object > Edit Selection tag.

You can also add tags to virtual objects called "dummies" or "helpers", which are used only for their positions.

How to create a dummy: Data Explorer > Create > Model > Helper

Helpers with tags are used here (SpawnPoint, Zone, Crickets) to define the spawning area, ambient sounds like crickets, cameras…
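At runtime, the developer retrieves such a helper through its tag. A minimal sketch, assuming a helper tagged "SpawnPoint" exists in the current scene (the tag name is an assumption for illustration); this runs inside a ShiVa AI handler:

```lua
-- Retrieve a tagged helper and read its position
local hSpawn = application.getCurrentUserSceneTaggedObject ( "SpawnPoint" )
if ( hSpawn )
then
    local x, y, z = object.getTranslation ( hSpawn, object.kGlobalSpace )
    -- the developer can now spawn a zombie at (x, y, z)
end
```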

Post render

It is now time to tweak the render a little bit to make it more beautiful…

Bloom: you can use bloom to simulate dazzle, so set the bloom settings and ask the developer to increase the bloom when the player is looking at the sun.

De-saturation and Sepia: De-saturation is used when the player is losing life.


Particles

Particles are used to simulate special effects like smoke, fire or explosions… Particles are not much more than billboards with time-dependent settings such as colour, size and opacity.

Smoke

To create smoke, use a cone emitter with a texture in additive blending, inverse gravity, a small initial speed, and large random values for lifetime, speed and rotation.

Dark smoke is similar to the white one but with a dark alpha texture and no additive blending

Explosion

The explosion uses a bunch of particle emitters, a point light and an AI to animate the particle positions, swap the truck model and play the explosion sound. Please look at the AITruckExplosion AI for more information.

Combustion

When you kill a zombie, its body burns for a second and then disappears…

Emitters are added by script on each skeleton joint and are updated every frame to follow the skeleton. When the zombie dies, we play the particles along with some material processing to simulate combustion.

To achieve this, we duplicated the zombie texture, and added an alpha channel with a very bright, quasi invisible, smoke pattern.

Then we played with the alpha ref parameter of the zombie material to simulate a sort of cropping. This method has the advantage of keeping a correct shadow because the shadowing system takes into account the alpha threshold of the texture.

Ocean

Oceans can easily be created in ShiVa using the built-in ocean generator

Grid size:

A top view for the ocean might look like this, here for a 64*64 polygon grid size; the active area (in the middle) will receive noise to simulate waves.

By definition, the ocean is infinite but the grid size and unit size will define an active area.

A large grid size may cost rendering performance, as it is highly CPU-intensive.

A large wave tiling may result in a low-poly feeling

Do not forget to add an underwater plane otherwise your ocean will look empty.

Note that the ocean specular will come with the ShiVa 1.9 update and its daylight system.

HUD

Intro

The HUD has been split into two 1024*1024 textures in order to provide very good resolution while remaining accessible to older graphics cards.

There are also four buttons (play, movie, stonetrip.com and score) linked to an action using a special font.

In game

The life HUD uses two progress components, one with the red texture and the other with the grey texture.

Changing the progress value of the first component reveals the texture underneath.

For zombie vision, we used a texture all over the screen to simulate eye veins.
This texture is in modulate mode.

In Photoshop, this texture looks like this:

The white area will be transparent, so there is no need to have an alpha channel

We also added a texture to simulate old lens vignetting.

Dynamics

We are now going to check how to create dynamic objects and declare static objects as colliders.

Collider

To display a collider’s attributes, Scene Viewer > Display > Object filter > Collider attributes

To set an object or a selection of objects as a collider, right click on it and attributes > set as collider.

You can set an object to be invisible, and the collider attribute will still work. Note that the polygon normal orientation is important for dynamics.

Use proxy objects to lighten the physics engine computation.

Example:

We have used a box model as the collision mesh for the trees. This saves approximately 75000 polygons from the computation.

Box

To define a box as dynamic, right click on it > controller > dynamics > create controller.

You can then define, in the attribute editor, all kinds of physics parameters such as mass, friction, damping…

Do not forget that when you define an object as dynamic, it is automatically used as a collider if you check the collision option.

Animation

Assignment

You must first create an AnimBank, which is an AnimClips container.

Data Explorer > Create > Resource > AnimBank, then drag and drop this AnimBank onto your model, import or create an AnimClip in the AnimClip Editor (see below) and drag and drop it into the AnimBank.

To preview an animation, select it and click on the play button.

If the animation changes the object morphology, try to re-import your animation changing its scale to fit the model.

Creation

You can create animations directly inside ShiVa using the AnimClip Editor.

We used it to create small animations like the plane crash; this module has been designed for creating small animations or editing larger ones. Nothing beats the editor integrated in your favourite DCC tool.

Prepare your model to receive animations (animBank) and create an empty animClip

Data Explorer > create > resources > animClip.

Load it in the AnimClip Editor. Note that there is a Default channel; if you have to animate multiple objects at once, create a new channel using your object's name and select this channel in the Animation Controller tab of the Attributes Editor; otherwise, select the Default channel.

To create your animation, click on the key button in the AnimClip Editor to add a key on frame 0, move to another frame, move your object (TRS) and click on the key button again… it's done!

Click on the play button to see your work.

Sound

Assignment

Like animations, to assign a sound to an object, you first have to create a soundBank full of sounds, then attach it to your object.

Note that a sound is not music: music is streamed, while sounds are loaded in their entirety, so be sure to keep your sound files small.

To play music, use the ambience editor, Music tab.

Script

Loading pack

Ocean dynamics

In the loading pack, there are buoys floating on the ocean. The movement is split into two parts: a translation matching the height of the ocean, and a rotation calculated from the ocean normal at the buoy position.

Sample:

local hScene = application.getCurrentUserScene ( )
local hObject = this.getObject ( )
local x, y, z = object.getTranslation ( hObject, object.kGlobalSpace )

-- Update the translation of the buoy according to the height of the ocean at the buoy position
object.setTranslation ( hObject, x, scene.getOceanHeight ( hScene, x, z ), z, object.kGlobalSpace )

-- Update the rotation of the buoy according to the normal of the ocean at the buoy position
local nx, ny, nz = scene.getOceanNormal ( hScene, x, z )
local rx = math.asin ( nx )
local rz = math.asin ( nz )
object.setRotation ( hObject, rx, 0, -rz, object.kGlobalSpace )

Plane control

The player can control the plane over the ocean. The plane is composed of many sub-models, such as the wings and the tail.
The player controls the plane with the Up, Down, Left and Right keys of the keyboard; pressing them rotates the plane and its wings, scaled by the time elapsed since the last frame.

The plane also has particle emitters, which are enabled when the plane hits the ocean.

Sample:

local dt = application.getLastFrameTime ( )

-- Rotation for the left wings of the plane
if ( this.bButtonLeft ( ) )
then
    angleLeft = math.clamp ( angleLeft + dt * 20, -20, 40 )
elseif ( this.bButtonRight ( ) )
then
    angleLeft = math.clamp ( angleLeft - dt * 20, -20, 40 )
end

object.setRotation ( group.getSubObjectAt ( plane, 0 ), 0, 0, angleLeft, object.kLocalSpace )
object.setRotation ( group.getSubObjectAt ( plane, 1 ), 0, 0, angleLeft, object.kLocalSpace )

-- Particles for the first wing
local d1x, d1y, d1z = object.getTranslation ( hWing1, object.kGlobalSpace )
if ( d1y < 1 )
then
    sfx.startAllParticleEmitters ( hEmitter1 )
else
    sfx.pauseAllParticleEmitters ( hEmitter1 )
end

Download

While the player controls the plane, the application is downloading the pack of The Hunt.

How do you download a pack? Start the installation of the pack with the system.install ( ) function, then check the progress regularly with system.getInstallationStatus ( ). When the download is done (getInstallationStatus returns 1), you can launch the pack.

Sample:

-- onEnter
system.install ( this.sFileToLoadURI ( ) )

-- onLoop
-- Get the progress of the installation
local nInstallationProgress = system.getInstallationStatus ( this.sFileToLoadURI ( ) )
if ( nInstallationProgress == 1 )
then
    -- Download is done, launch the pack
    system.launch ( this.sFileToLoadURI ( ), "" )
end

Scene Preloading

How does it work?

Preloading a scene loads it asynchronously (the application keeps running during the loading instead of waiting for the scene to finish loading), and the scene is not started once loading completes. This can be useful if you want to change your current scene instantaneously.
Preload your next scene during play using application.startCurrentUserScenePreloading ( ), and check the status of the loading with application.getCurrentUserScenePreloadingStatus ( ).
While loading, the status lies between 0 and 1; wait until the status is 1, then use application.setCurrentUserScene ( ) to start the scene.

Sample:

-- onEnter
application.startCurrentUserScenePreloading ( "st_Terrain1024" )

-- onLoop
if ( application.getCurrentUserScenePreloadingStatus ( ) == 1 )
then
    application.setCurrentUserScene ( "st_Terrain1024" )
end

Force model to stay loaded

When you instantiate a runtime object, it takes time to load. If the model you want to load is big, the game can appear to “freeze”. To avoid this, you can tell the application to keep a model loaded in memory:
use the function application.forceModelToStayLoaded () to force a model to stay loaded in the application (or to release it).

Important: The model must be referenced in the Game Editor.

Sample:

application.forceModelToStayLoaded( "st_zombie", true )

Note: You can also force a resource to stay loaded with application.forceResourceToStayLoaded (). The resource must also be referenced in the Game Editor.

Cinematic

Camera swap

In the scene, many tagged cameras are placed at predefined positions with predefined orientations. The cinematic is a succession of swaps between these cameras.
We change the current active camera using a smaller and smaller time interval between swaps.

Sample:

function st_TheHuntMain.onSetFlash ( nNum )

-- [...]

if ( nNum == 1 )

then

tag = "Flash1"

time = 0.8

end

-- [...]

local cam = application.getCurrentUserSceneTaggedObject ( tag )

application.setCurrentUserActiveCamera ( cam )

this.postEvent ( time, "onSetFlash", nNum + 1 )

end

Look at

Most of the cinematic cameras are immobile, but for the first 3 cameras the point of view is inside a zombie looking at the plane. Call object.lookAt () with the position of the plane and a small interpolation factor (between 0.1 and 0.2) to simulate a progressive movement of the zombie's head.
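Applied every frame, this could be sketched as follows (hHeadCamera and hPlane are hypothetical handle names, not taken from the demo source):

```lua
-- Each frame: make the camera inside the zombie track the plane
-- hHeadCamera and hPlane are assumed handles
local px, py, pz = object.getTranslation ( hPlane, object.kGlobalSpace )

-- A small interpolation factor (0.1 to 0.2) gives a progressive head movement
object.lookAt ( hHeadCamera, px, py, pz, object.kGlobalSpace, 0.15 )
```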

Shake

It is difficult to create a realistic shake effect. Should the shake be a vertical movement? Horizontal? Both? Here it is simulated by adding a slight offset to the default position of the camera.
Use the function math.sin () to create a sinusoidal movement over time. You can then make the movement faster (by multiplying the time value) or increase the amplitude (by multiplying
the result of the sine function). You can also improve the shake effect by adding motion blur when the shake starts, like this: camera.setMotionBlurFactor ( hCamera, 0.5 ).

Sample:

-- The shake duration is 5 seconds

-- x, y and z are the default camera position

local time = application.getTotalFrameTime ( ) - this.nShakeStartTime ( )

local shakeSpeed = 2500

local decreaseFactor = ( 5 - time )

local amplitude = decreaseFactor / 50

local dy = math.sin ( time * shakeSpeed ) * amplitude

object.setTranslation ( hCamera, x, y + dy, z, object.kGlobalSpace )


Animation

In the cinematic, you can see the plane crash: in its movement, the plane hits a pole, which falls down. These movements are driven by an animation clip, with one animation layer for each animated object. The animation is played only once.

Sample:

-- Set the animation playback mode to Once

animation.setPlaybackMode ( Plane, 0,animation.kPlaybackModeOnce )

animation.setPlaybackMode ( Pilone, 0,animation.kPlaybackModeOnce )

animation.setPlaybackMode ( Cable1, 0,animation.kPlaybackModeOnce )

animation.setPlaybackMode ( Cable2, 0,animation.kPlaybackModeOnce )

animation.setPlaybackMode ( Cable3, 0,animation.kPlaybackModeOnce )


FPS camera

Control and dynamic

The player is represented by 2 things: a camera for his head, and a dynamic body for his feet, which is used to make the player move.
For the latter, a helper model is placed in the scene and a dynamic sphere body is created on it. When the player wants to move, a force in the required direction is added to the dynamic body.

Sample:

-- In a setupDynamics () function

-- Creation of the helper model

local dynObject = scene.createRuntimeObject ( hCurrentScene,"Helper" )

-- Creation of the dynamic sphere body

if ( dynamics.createSphereBody ( dynObject, 0.5 ) )

then

dynamics.setMass ( dynObject, 80)

dynamics.enableDynamics ( dynObject, true )

dynamics.enableCollisions ( dynObject, true )

dynamics.enableGravity ( dynObject, true )

end

-- In a updateDynamics () function

-- The force which will be applied to the dynamic body

local fx, fy, fz = 0, 0, 0

-- The vector in the direction of the player's camera

local oZx, oZy, oZz = object.getZAxis ( hCamera, object.kGlobalSpace)

if ( this.bMove_Forward ( ) )

then

-- Creation of the force vector if the player wants to move forward

fx, fy, fz = math.vectorSubtract ( fx, fy, fz, oZx, oZy, oZz )

end

dynamics.addForce ( dynObject, fx, 0, fz, object.kGlobalSpace )

Depth blur

When the player uses the zoom of his weapon, the field of view of the camera narrows and a depth blur effect blurs the entire screen except the weapon sight, which is used to view distant objects and enemies.

Sample:

-- Set the field of view and the depth blur parameters of the camera

local currentFOV = camera.getFieldOfView ( hCamera )

if ( this.bZooming ( ) )

then

camera.setFieldOfView ( hCamera , math.max ( currentFOV - 0.5, 21) )

camera.setDepthBlurFactor ( hCamera, 1 )

camera.setDepthBlurFocusRangeMin ( hCamera, 0.2 )

camera.setDepthBlurFocusRangeMax ( hCamera, 0.8 )

else

camera.setFieldOfView ( hCamera , math.min ( currentFOV + 0.5, 24) )

camera.setDepthBlurFactor ( hCamera, 0 )

end

Blood HUD

When the player is hit by a zombie, a HUD representing blood appears. The blood indicates the position of the zombie: left, right, or both sides if the zombie is behind or in front of the player.

To know on which side of the player the zombie is, compute a dot product between the vector separating the player and the zombie, and a vector representing the side of the player.
For the first vector, you just have to subtract the position of the player from the position of the zombie.
For the second vector, use the function object.getXAxis ().
After taking the dot product of these 2 vectors, test the result: if its absolute value is large (greater than 0.6), the zombie can be considered to be in front of or behind the player. Otherwise, the sign of the result indicates the side: positive for the right side of the player, negative for the left.

Sample:

local x1, y1,z1 = object.getTranslation ( hZombie, object.kGlobalSpace )

local x2, y2,z2 = object.getTranslation ( hPlayer, object.kGlobalSpace )

local v1x, v1y,v1z = math.vectorSubtract ( x1, y1, z1, x2, y2, z2 )

local v2x, v2y, v2z = object.getXAxis ( hPlayer, object.kGlobalSpace)

local side = math.vectorDotProduct ( v2x, v2y, v2z, v1x, v1y, v1z )

if ( math.abs ( side ) > 0.6 ) then

log.message ( "both sides" )

else

if ( side > 0 ) then

log.message ( "right" )

else

log.message ( "left" )

end

end

There is also another type of blood: when the player hits a nearby zombie, blood appears in the centre of the screen, simulating blood splatter from the zombie.

Dazzle

The bloom of the scene increases when you look towards the sun. The bloom factor depends on 2 vectors: the direction of the current active camera, and the direction of the light.
A dot product between these 2 vectors gives you an indication of the intensity of the bloom factor.
The more negative the dot product, the higher the bloom factor must be (a negative value means the light faces the camera, since the directions are opposed).

Important: The bloom factor lies between 0 and 3.

Sample:

local dx, dy, dz = object.getDirection ( hLight, object.kGlobalSpace)

local dx2, dy2, dz2 = object.getDirection ( hCamera,object.kGlobalSpace )

local nFactor = -math.vectorDotProduct ( dx, dy, dz, dx2, dy2, dz2 )+ 1

scene.setBloomIntensity ( hScene, nFactor )

Ambience

Sound (cricket, music, heart beat)

Crickets are represented by invisible models (helpers) placed at different spots in the scene.
These models have a single purpose: playing sounds. 4 sounds are played at random.
When the player fires his weapon, crickets near him are muted for a few seconds.

Sample:

-- In a onMute () handler

sound.stop ( hCricket, this.nTrack () )

-- In a onActivate () handler

local nTrack = math.roundToNearestInteger ( math.random ( 0, 3 ) )

sound.play ( hCricket, nTrack, 0.8, true, 0.4 )

Windmill

The windmill rotates around 2 different axes:

- The vertical axis, which rotates slightly to simulate changes in wind direction. This is a sinusoidal rotation around a fixed value.

- The axis of the windmill blades, which rotate with the force of the wind.

Sample:

local nValue = 6 * math.cos ( application.getTotalFrameTime ( )* 100 )

object.setRotation ( hWindObject, 0, 90, 0, object.kGlobalSpace )

object.rotate ( hWindObject, 0, nValue, 0, object.kGlobalSpace )

object.rotate ( hHelix, 48 * application.getLastFrameTime (), 0,0, object.kLocalSpace )

Plane propeller

The plane propeller rotates during the entire game while an engine sound is played. The rotation speed changes regularly, and so does the sound pitch: the higher the rotation speed, the higher the pitch.

Sample:

this.nTime ( this.nTime () + application.getLastFrameTime () )

-- The rotation speed and the sound pitch change every 3 seconds

if ( this.nTime ( ) > 3 )

then

this.nSpeed ( math.random ( 90, 250 ) )

this.nTime ( 0 )

local n = ( ( this.nSpeed () - 90 ) / 320 ) + 0.5

sound.setPitch ( hHelix, 0 , n )

end

local nValue = this.nSpeed () * application.getLastFrameTime ()

object.rotate ( hHelix, nValue, 0, 0, object.kLocalSpace )

Change ambience

The player can choose between 3 different ambiences: Sunrise, Stormy and Sunset.
Changing the ambience implies 3 things: changing the skybox textures, changing the lights and replacing the vegetation layers.

Important: The textures of the skybox must be referenced in the Game Editor.

Sample:

--Set which lights are hidden

object.setVisible( hLight1, false )

object.setVisible( hLight2, true )

object.setVisible( hLight3, false )

-- Set the new skybox textures

scene.setSkyBoxFaceMap( hScene, scene.kSkyBoxFaceBack, "skyblue_back" )

scene.setSkyBoxFaceMap( hScene, scene.kSkyBoxFaceFront, "skyblue_front" )

scene.setSkyBoxFaceMap( hScene, scene.kSkyBoxFaceLeft, "skyblue_left" )

scene.setSkyBoxFaceMap( hScene, scene.kSkyBoxFaceRight, "skyblue_right" )

scene.setSkyBoxFaceMap( hScene, scene.kSkyBoxFaceTop, "skyblue_top" )

-- Set which vegetation layers are visible

scene.setTerrainVegetationLayerVisible( hScene, 0, false )

scene.setTerrainVegetationLayerVisible( hScene, 1, false )

scene.setTerrainVegetationLayerVisible( hScene, 2, true )

scene.setTerrainVegetationLayerVisible( hScene, 3, true )

Shot

Impact point when fired

When the player fires the weapon, you have to find the impact point. The impact can be on a collider (a house), a sensor (a zombie) or the terrain.

You have to cast a ray for each surface type with the appropriate function: scene.getFirstHitTerrainChunk (), scene.getFirstHitColliderEx (), scene.getFirstHitSensor (). Use the player’s camera position as the ray start point, and its direction as the ray direction, both in global space. The 2 functions for this are object.getTranslation () and object.getDirection ().
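Getting the ray start point and direction could be sketched like this (hCamera is an assumed handle to the player's camera; the exact argument lists of the getFirstHit functions are not reproduced here, see The Hunt source code):

```lua
-- Ray start point: the player's camera position, in global space
local nRayPntX, nRayPntY, nRayPntZ = object.getTranslation ( hCamera, object.kGlobalSpace )

-- Ray direction: the camera direction, in global space
local nRayDirX, nRayDirY, nRayDirZ = object.getDirection ( hCamera, object.kGlobalSpace )

-- These values are then passed to scene.getFirstHitTerrainChunk (),
-- scene.getFirstHitColliderEx () and scene.getFirstHitSensor ()
```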

You then have to determine which ray hit something first: it is the ray which returned the smallest distance.
Save this distance in a variable, as it will be used to compute the impact point coordinates.

Important: A ray with a distance of 0 can mean it hit nothing; check the returned object handle to know whether an object was actually hit. If a ray hits nothing, you don’t have to compare its distance with the others.

Here is a sample of script that determines if it is a sensor which has been hit first:

local nMinDist = nil

if ( hHitSensorObject and ( not nHitChunk or nHitSensorDist <= nHitChunkDist ) and ( not nHitCollider or nHitSensorDist <= nHitColliderDist ) )

then

nMinDist = nHitSensorDist

log.message ( "Hit a sensor first" )

end

You now have to find the hit position. If the distance has been set (meaning something has been hit), scale the ray direction vector to a length of nMinDist, then add this new vector to the ray start position. The result is the coordinates of the impact point.

if ( nMinDist )

then

local x, y, z = math.vectorSetLength ( nRayDirX, nRayDirY, nRayDirZ, nMinDist )

x, y, z = math.vectorAdd ( x, y, z, nRayPntX, nRayPntY, nRayPntZ )

log.message ( "Impact point coordinates : x=", x, ", y=", y, ", z=", z )

end

Particles on shoot

When the player fires, 2 particle emitters are started: one at the impact point to simulate dust, and another at the end of the gun to simulate a flare.

Flare: This emitter contains only one particle, with a flare texture, which has a very short lifetime and is immobile (no speed, no gravity). It’s a “flash”.

Dust: This emitter contains more particles, with a brown colour and a slightly inverted gravity. The particles have a short, slightly randomized lifetime and a low opacity.
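Triggering both effects on a shot could look like this sketch (hFlareEmitter and hDustEmitter are hypothetical handle names; x, y, z is the impact point computed earlier):

```lua
-- Move the dust emitter to the impact point before starting it
object.setTranslation ( hDustEmitter, x, y, z, object.kGlobalSpace )

-- Start both one-shot effects
sfx.startAllParticleEmitters ( hFlareEmitter )
sfx.startAllParticleEmitters ( hDustEmitter )
```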


Set impact texture

When you fire, a bullet mark appears at the impact point. This is a textured plane, which has to be placed in the scene at the impact point and oriented relative to the surface. You already know the impact point coordinates.
The normal vector to the surface can be obtained with one of these functions: scene.getTerrainNormal () or scene.getFirstHitColliderEx ().
With this normal vector, you can set the orientation of the mark using the function object.lookAt (), making the mark look at its own position plus the normal vector.

Sample:

-- x, y, z: Coordinates of the impact point

-- i, j, k: Normal vector to the surface

-- hMark: The handle to the textured plane

object.setTranslation( hMark, x, y, z, object.kGlobalSpace )

object.lookAt ( hMark, x + i, y + j, z + k, object.kGlobalSpace, 1 )

Dynamics

Swing

In the scene, you can find ropes which are composed of dynamic elements.

Sample:

-- Creation of the rope

for iJoint = 2, shape.getSkeletonJointCount ( hCurrentRope )

do

-- Create and place the new rope element

local sJointName = string.format ( "Bone%0.2i-node",iJoint )

local PosX, PosY, PosZ = shape.getSkeletonJointTranslation (hCurrentRope, sJointName, object.kGlobalSpace )

local hJoint = scene.createRuntimeObject (application.getCurrentUserScene ( ), "st_RopeElement" )

object.setTranslation ( hJoint, PosX, PosY, PosZ,object.kGlobalSpace )

dynamics.createBoxBody ( hJoint, 0.05, 0.05, 0.05 )

-- Set the dynamics properties

dynamics.setMass ( hJoint, 3 )

dynamics.setLinearDamping ( hJoint, 0.5 )

dynamics.setAngularDamping ( hJoint, 0.5 )

dynamics.setFriction ( hJoint, 0.1 )

-- Enable the dynamics

dynamics.enableDynamics ( hJoint, true )

dynamics.enableCollisions ( hJoint, true )

dynamics.enableGravity ( hJoint, true )

-- Set the joint between elements

local sNewJointName = string.format ("RopeJoint%0.2i", iJoint )

dynamics.createBallJoint ( hJoint, hPreviousJoint, sNewJointName)

dynamics.setBallJointAnchor ( hJoint, sNewJointName, 0, 0.1, 0,object.kLocalSpace )

hPreviousJoint = hJoint

end

Box

You can find boxes in the scene. You can hit them by firing the weapon or by attacking with the axe.
Each box has a sensor. The sensor intercepts the ray cast when firing the weapon, or the sensor collision when attacking with the axe.

Sample:

-- Add an impulse to the box

local dx, dy, dz = object.getDirection ( application.getCurrentUserActiveCamera( ), object.kGlobalSpace )

dynamics.addLinearImpulse ( this.getObject ( ), dx*300, 0, dz*300,object.kGlobalSpace )

Zombie

Runtime objects and spawn area

Zombies can spawn at 14 different places:

- Behind the player

- Behind buildings (house, farm house, windmill, ruins)

- At one of the 9 additional spawn points.

When a zombie spawns, it has a 25% chance of spawning behind the player, 25% behind the nearest building, 25% behind a random building and 25% at one of the 9 other spawn points. In all cases, the player can’t see the zombie spawn.

The 9 spawn points are represented with tagged helper models. The zombie spawns only on spawn points which are behind the player.

The buildings are also tagged. When a zombie spawns behind a building, it is necessarily on the side opposite the player, so the player never sees the zombie appear.

The zombie can also spawn behind the player.
This ensures that the player cannot remain static: he has to move.
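The 25% split can be sketched with math.random () (the spawn functions below are hypothetical names, not from the demo source; in ShiVa, math.random ( min, max ) is assumed to return a float in the range):

```lua
-- Pick a value between 0 and 1 and branch on the four 25% intervals
local n = math.random ( 0, 1 )

if ( n < 0.25 ) then
	this.spawnBehindPlayer ( )
elseif ( n < 0.5 ) then
	this.spawnBehindNearestBuilding ( )
elseif ( n < 0.75 ) then
	this.spawnBehindRandomBuilding ( )
else
	this.spawnAtSpawnPoint ( )
end
```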

Animation blending

It’s very important to have models with well blended animations: a model with smooth animations appears more realistic.
There are many ways to make smooth transitions when you change the current animation of a model. Important: The blend level lies between 0 and 1.

The idea is to change each animation blend level gradually. Generally, you want an animation to be played with a blend level of 0 or 1. Use a variable holding the blend level you want (the target blend level), then get the current blend level of the animation. If the current blend level is greater than the target, decrease it; if it is lower, increase it. To make the blending independent of the application frame rate, the value you add or subtract must depend on the time elapsed since the last frame.

-- Run animation is on the layer number 2

-- targetRun is the blend level you want for the layer of the run animation (0 or 1)

local dt = application.getLastFrameTime ( )

local currentRun = animation.getPlaybackLevel ( hObject, 2 )

if ( targetRun > currentRun )

then

	animation.setPlaybackLevel ( hObject, 2, math.min ( currentRun +dt, targetRun ) )

else

	animation.setPlaybackLevel ( hObject, 2, math.max ( currentRun -dt, targetRun ) )

end

If you duplicate this sample for all your blend layers, you just have to set the target level of each layer to obtain smooth blending between all the animations. If you set the run target level to 1 and the other animations' target levels to 0, this sample will automatically make your model run in a smoothly animated manner. In this sample, the blend level goes from 0 to 1 in 1 second; if you want to blend in 0.5 s, just multiply “dt” by 2.

Pathfinding

Zombies move using the pathfinding navigation system, which lets them reach a destination while bypassing obstacles.
The pathfinding uses nodes placed on the terrain.

To create these navigation points, open the NavMesh Editor, select the terrain chunks you want to add to the navigation
(with the terrain selection mode), and click “Add selected objects to NavMesh”. Then, select the navigation points under your objects or buildings
(with the navigation point selection mode) and press “Delete” to remove them.

A helper model represents the navigation position and the zombie follows it using his dynamic body.

Sample:

-- In a setupNavigation () function

this.hNavObject ( scene.createRuntimeObject ( hScene, "Helper") )

-- In a updateNavigation () function

navigation.setSpeedLimit ( this.hNavObject ( ), 0.7 )

navigation.setAcceleration ( this.hNavObject ( ), 10 )

navigation.setPathMaxLength ( this.hNavObject ( ), 10 )

navigation.setRandomTargetNode ( this.hNavObject () )

-- In a updateDynamics () function

-- Add a force in the direction: zombie -> navigation object

local x, y, z =object.getTranslation ( hZombie, object.kGlobalSpace )

local x2, y2, z2 = object.getTranslation ( this.hNavObject ( ), object.kGlobalSpace )

local fx, fy,fz = math.vectorSubtract ( x2, y2, z2, x, y, z )

dynamics.addForce ( this.hDynObject (), fx, fy, fz, object.kGlobalSpace)

dynamics.setLinearDampingEx ( this.hDynObject (), 10, 0, 10 )

-- Make the zombie look in the direction of his navigation object in a smooth manner

local tx, ty, tz = object.getTranslation ( hZombie, object.kGlobalSpace)

object.lookAt ( hZombie, tx + fx, ty, tz + fz, object.kGlobalSpace,5 * application.getLastFrameTime () )

Ragdoll

The ragdoll is used for the zombies. It’s a method generally used to create realistic character death movements, instead of using death animations. It consists of creating a physics element on each limb of the zombie when it spawns, and attaching all these elements together.
The dynamics of the ragdoll are disabled until the zombie dies. At that moment, the ragdoll is activated: all the animations of the zombie are stopped and the dynamics, with gravity, make the zombie fall to the ground.
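The activation at death could be sketched like this (to be applied to each ragdoll element; hZombie and hRagdollElement are assumed handles, and the loop over the elements is omitted):

```lua
-- Stop driving the skeleton with the animation layers
animation.setPlaybackLevel ( hZombie, 0, 0 )

-- Let the physics take over, for each ragdoll element
dynamics.enableDynamics ( hRagdollElement, true )
dynamics.enableCollisions ( hRagdollElement, true )
dynamics.enableGravity ( hRagdollElement, true )
```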

Sample:

-- Creation of the ragdoll

-- sizeZ is the length of the zombie's limb

dynamics.createBoxBody ( hRagdollElement, 0.22, 0.17, sizeZ * 0.97 )

dynamics.setMass ( hRagdollElement, 12 )

dynamics.setOffset ( hRagdollElement, 0, 0, sizeZ / 2 )

dynamics.setLinearDamping ( hRagdollElement, 2 )

dynamics.setAngularDamping ( hRagdollElement, 7 )

dynamics.setFriction ( hRagdollElement, 1 )

dynamics.enableDynamics ( hRagdollElement, false )

dynamics.enableCollisions ( hRagdollElement, false )

dynamics.enableGravity ( hRagdollElement, false )

After the creation of all the ragdoll elements, joints are created between adjacent elements.

Have a look at The Hunt source code for more details.

Sensor

The sensors on the zombies are created dynamically. They are placed on each limb of the zombie, like the ragdoll elements, and are used to get the impact point of the shot from the player’s weapon.
Each sensor has an Id. This Id is used to detect which part of the zombie has been hit by the player. For example, the sensor at the zombie’s head has an Id of 0.
When a sensor with Id 0 is hit, the player has made a headshot, and the zombie dies in one shot.
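The headshot test could be sketched like this (nHitSensorID is assumed to come from the sensor ray test, and the event names are hypothetical):

```lua
if ( nHitSensorID == 0 )
then
	-- Headshot: the zombie dies in one shot
	object.sendEvent ( hZombie, "st_AIZombie", "onHeadshot" )
else
	object.sendEvent ( hZombie, "st_AIZombie", "onHit" )
end
```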

Sample:

Creation of a sensor:

-- sizeZ is the length of the zombie's limb

sensor.add ( hRagdollElement, sensor.kShapeTypeBox )

sensor.setBoxSizeAt ( hRagdollElement, 0, 0.22, 0.17, sizeZ*0.97 )

local a, b, c = sensor.getBoxCenterAt ( hRagdollElement, 0,object.kLocalSpace )

sensor.setBoxCenterAt ( hRagdollElement, 0, a, b, c + sizeZ*0.97 /2, object.kLocalSpace )

sensor.setIDAt ( hRagdollElement, 0, nID )

HUD

Set options

To set an option, there is a dedicated AIModel named st_Options. In any script, if you want to change an option, call the onSetOption () handler of the st_Options AIModel. This handler updates the environment variables, updates the Options HUD and changes the camera if the option is a graphic one. It is also called when clicking an option in the Options HUD.


Sample:

-- Hide the weapon and change the camera corresponding to the option

if ( bCameraPlacement )

then

	this.setWeaponVisible ( false )

	this.setCamera ( sOptionName )

end

-- Update the environment and the options HUD

this.updateOptionsEnvironment ( sOptionName )

this.updateOptionsHud ( sOptionName )

Actions

In the HUD editor, you can create actions, which can be attached, for instance, to the onClick event of a HUD button. As an example, an action is attached to the first sepia choice button in the Options HUD: it is called when the button is clicked, and it calls the onSetOption () handler to set the sepia option to the relevant value.

HUD lists and xml

How do you get an XML file, read it, and add its content into a list in the HUD?

Get the XML

First, start an XML request to get the XML content. You must have an xml variable among your AI variables (here named xmlScore):

xml.receive( this.xmlScore ( ), "http://localhost/scores.php" )

Then, in a dedicated state, wait for the XML request to complete. When it’s done, build your HUD and leave the state (here we change to a state named “null”).

local status = xml.getReceiveStatus ( this.xmlScore ( ) )

if ( status == 1 )

then

this.manageHUD ( )

this.null ( )

end

The manageHUD ( ) function does 2 things: it creates the HUD and fills the list.

Create columns in the HUD list

We want 3 columns in the list, for the position, the name and the score.

Create the HUD template instance, add 2 columns (by default, a list already has 1 column) and set the width of each column (in percentage of the total width of the list).

local hUser = this.getUser ( )

hud.newTemplateInstance ( hUser, "st_HudTheHuntScores","Scores" )

local list = hud.getComponent ( hUser, "Scores.ScoreList" )

hud.addListColumn ( list )

hud.addListColumn ( list )

hud.setListColumnWidthAt ( list, 0, 20 )

hud.setListColumnWidthAt ( list, 1, 40 )

hud.setListColumnWidthAt ( list, 2, 40 )

Add the XML content into the list

The XML contains a list of elements named “game” inside a root element. A “game” element is composed of 2 child elements: the name of the player who played the game and their score.

The idea is to get the first “game” element in the XML, read its properties, then move on to its next sibling. This is an easy way to iterate over all the children of an element.

local root = xml.getRootElement ( this.xmlScore ( ) )

local game = xml.getElementFirstChildWithName ( root,"game" )

local i = 1

while ( game )

do

	local name = xml.getElementValue (xml.getElementFirstChildWithName ( game, "name" ) )

	local score = xml.getElementValue ( xml.getElementFirstChildWithName( game, "score" ) )

	hud.setListItemTextAt ( list, i - 1, 0, i ) -- The position (i - 1 is the 0-based row index; the rows are assumed to already exist in the list)

	hud.setListItemTextAt ( list, i - 1, 1, name ) -- The name

	hud.setListItemTextAt ( list, i - 1, 2, score ) -- The score

	i = i + 1

	game = xml.getElementNextSiblingWithName ( game, "game")

end

Game

Save and load environment

When the game starts, we load the environment variables. Then we set up all of the other variables.

When exit is chosen in the option menu, we save the environment.

Sample:

-- In a onInit handler

-- Load the environment named "Options" if it exists; otherwise, name the new environment "Options".

if ( not application.loadCurrentUserEnvironment ("Options" ) ) then

	application.setCurrentUserEnvironmentName ( "Options")

end

-- To erase an existing value

application.setCurrentUserEnvironmentVariable ("Game.ZombieCount", 0 )

-- To keep an existing value

this.setDefaultEnvironmentVariable ( "Game.Sepia", false )

-- Details of the setDefaultEnvironmentVariable () function

if ( application.getCurrentUserEnvironmentVariableStatus ( sName ) == application.kStatusNone )

then

	application.setCurrentUserEnvironmentVariable ( sName, value )

end

-- To save the environment

application.saveCurrentUserEnvironment ( )

Optimization

It is important to optimize scripts, especially by reducing the number of calls to variables/handles, to Lua constants and to Lua functions, using local variables. The following optimization methods are useful for scripts which are called very often, such as scripts containing “for” or “while” loops, onEnterFrame handlers, or the loop scripts of states.

Variables/handles optimizations:

Some handles are used very often in scripts, like this.getObject ( ), this.getUser ( ), application.getCurrentUserScene ( ) and others. If one of them is used more than once in a single script, you can optimize the script:
instead of fetching the handle of the object each time, put it in a local variable. There will then be only one call to get the handle, because it is stored temporarily in a local variable.

Sample:

-- Here, this.getObject ( ) is called only once with the optimization. Without it, there would be 5 calls.

local hObject = this.getObject ( )

animation.changeClip ( hObject, 0, 0 )

animation.setPlaybackKeyFrameBegin ( hObject, 0, 0 )

animation.setPlaybackKeyFrameEnd ( hObject, 72 )

animation.setPlaybackMode ( hObject, 0, animation.kPlaybackModeLoop )

animation.setPlaybackLevel ( hObject, 0, 0 )

Constants optimizations:

Constants are, in fact, just numbers. You can store a constant in a local variable if you use it more than once in a script.

Sample:

--UpdateWeapon function, in the CameraZombie AIModel

--With the optimization, one call of object.kGlobalSpace instead of 4.

local kGlobalSpace = object.kGlobalSpace

local hObject = this.getObject ( )

local weapon = this.hWeapon ( )

local sHandNode = "RightHandMiddle1-node"

local x, y, z = shape.getSkeletonJointTranslation ( hObject, sHandNode, kGlobalSpace )

local rx, ry, rz = shape.getSkeletonJointRotation ( hObject, sHandNode, kGlobalSpace )

object.setTranslation ( weapon, x, y, z, kGlobalSpace )

object.setRotation ( weapon, rx, ry, rz, kGlobalSpace )

Functions optimizations:

When you call a Lua function, 2 things occur: a call to the ShiVa engine to get the handle of the function, and then a call through that handle. If you use a function several times, you can store its handle in a local variable.
As a result, when calling the function, only the second part of the process is executed, because you already have the handle to the function.

Sample:

-- Here there is only one call to the ShiVa engine instead of 4.

local getCurrentUserSceneTaggedObject = application.getCurrentUserSceneTaggedObject

local pilone = getCurrentUserSceneTaggedObject ( "Pilone" )

local cable1 = getCurrentUserSceneTaggedObject ( "Cable1" )

local cable2 = getCurrentUserSceneTaggedObject ( "Cable2" )

local cable3 = getCurrentUserSceneTaggedObject ( "Cable3" )

All in one:

Now have a look at a fully optimized sample, using the methods above, which updates the ragdoll elements of the zombies. This script runs about 2 to 3 times faster with the optimizations.

Sample:

--Storing handles

local hObject = this.getObject ( )

local list = this.aRagdollObjects ( )

--Storing constants

local kGlobalSpace = object.kGlobalSpace

--Storing functions

local overrideSkeletonJointRotation = shape.overrideSkeletonJointRotation

local overrideSkeletonJointTranslation = shape.overrideSkeletonJointTranslation

local getTranslation = object.getTranslation

local getRotation = object.getRotation

local getAt = hashtable.getAt

local getKeyAt = hashtable.getKeyAt

-- Temporary variables

local hRagdollElement = nil

local hRagdollElementName = nil

local count = hashtable.getSize ( list ) - 1

local x, y, z = 0, 0, 0

--Update the rotations and translations for all the ragdoll elements

for i=0, count

do

	hRagdollElement = getAt ( list, i )

	hRagdollElementName = getKeyAt ( list, i )

	x, y, z = getTranslation ( hRagdollElement, kGlobalSpace )

	overrideSkeletonJointTranslation ( hObject, hRagdollElementName, x, y, z, kGlobalSpace, 1 )

	x, y, z = getRotation ( hRagdollElement, kGlobalSpace )

	overrideSkeletonJointRotation ( hObject, hRagdollElementName, x, y, z, kGlobalSpace, 1 )

end

Miscellaneous

Common mistakes, tips

Set the opacity of any object, even if the object is a group of objects

To set the opacity of an object, you can use the function shape.setMeshOpacity (). It often works, but you can run into problems, especially when you want to change the opacity of a composed object, i.e. an object which is a group of objects. It looks like we need a recursive function to apply the opacity change to all the children of the object.
In fact, you just have to call the function on the main object, and the function will call itself for each child of the object, for each child of these children, and so on. Have a look at this function:

function st_AICamera.setOpacity ( hObject, nOpacity )

	if ( object.isKindOf ( hObject, object.kTypeShape ) )

	then

		shape.setMeshOpacity ( hObject, nOpacity )

	end

	if ( object.isKindOf ( hObject, object.kTypeGroup ) )

	then

		local nChildCount = group.getSubObjectCount ( hObject )

		if ( nChildCount > 0 )

		then

			for i = 0, nChildCount - 1

			do

				this.setOpacity ( group.getSubObjectAt ( hObject, i), nOpacity )

			end

		end

	end

end

Check if an object is behind another object

You need 4 things to know the position of an object relative to another object: the position of the 2 objects, the direction of the second one and the vector separating the 2 objects.

local x1, y1, z1 = object.getTranslation ( hObject, object.kGlobalSpace )

local x2, y2, z2 = object.getTranslation ( hOtherObject, object.kGlobalSpace )

local v1x, v1y, v1z = object.getDirection ( hOtherObject, object.kGlobalSpace )

local v2x, v2y, v2z = math.vectorSubtract ( x1, y1, z1, x2, y2, z2 )

To know if the first object is behind the second one, compute the dot product between the direction of the second object and the vector between the 2 objects. If the result is negative, the first object is behind; if positive, it is in front.

if (math.vectorDotProduct ( v2x, v2y, v2z, v1x, v1y, v1z ) < 0 )

then

	return true

else

	return false

end

Operator “or”

Boolean operators are very useful when used with object handles.

-- Example : Check if at least 1 email exists

if ( email1 or email2 or email3 or email4 or email5 )

then

	this.sendEmails ( email1, email2, email3, email4, email5 )

end

-- Other example: send the event to the parent of the object if it exists, else to the object itself

	local hObject = this.getObject ( )

	object.sendEvent ( object.getParent ( hObject ) or hObject, "st_AIModel", "onEvent" )

Ternary operator

Perhaps you know the ternary operator from C++, C#, Java or other programming languages. Lua can emulate it with the “and”/“or” idiom:

-- Set a reduced speed while the L-Shift key is down

	this.nWalkSpeed ( bKeyDown and 2.5 or 3 )

-- In C++, Java, ... : this.nWalkSpeed = bKeyDown ? 2.5 : 3;

AI User, AI object, what’s the difference?

User AIs are attached to a user and are defined in the Game Editor. AIs which manage the keyboard or the mouse, or AIs which can’t be attached to an object, are user AIs. AIs managing more than one object can also be set as user AIs.

Object AIs define the behaviour of an object. They can be added to, or removed from, an object dynamically with the functions object.addAIModel ( ) and object.removeAIModel ( ).

When the scene has been paused, only User AIs can run.

Sample:

-- Send an event to a user

	user.sendEvent ( hUser, "st_AISpawn","onPauseGame", true )

-- Send an event to an object

	object.sendEvent ( hZombie, "st_AIZombie","onSetState", "Idle" )
