FRDEngine

Architecture
The engine was written in C and was designed to be completely standalone, its only interactions being with the framework and the hardware. It was built as a separate library and then linked to the game. Parts of the engine (the renderer, file system, input and sound systems) were platform-dependent. Those parts had a common interface across platforms, and this interface was then used by the platform independent parts of the engine.

The framework was a set of C++ classes, which made writing a game easier. The framework made direct use of the engine. Other game code was either derived from or added onto the framework code, and was generally specific to a title. Both the framework and the derived code made use of the platform independent section of the engine, and were entirely platform independent.

Framework code called game code, and game code called framework code – the boundaries between the two were (to a certain extent) non-existent. Both the game code and the framework were heavily data driven. When an object was created, classes were serialized in using data from pre-processed text files. The text files described which class was to be created, and what its data members were initialized to. This had several benefits. Changes could be made to tweak gameplay whilst the game was running (the files could be reloaded on the fly), and composition changes could be made without having to recompile the game – for example, designers could change the type of camera and physics used on a vehicle, as well as tweak any data used by the vehicle, without needing a programmer.
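As an illustration of this serialization, here is a minimal sketch, assuming a simple "key=value" text format and hypothetical names (ResFile, LoadResFile); the real pre-processed format and factory are not documented here.

```cpp
// Hypothetical sketch of resfile-driven creation; names and format assumed.
#include <fstream>
#include <map>
#include <string>

// Parsed form of a pre-processed text file: the class to create, plus the
// initial values for its data members.
struct ResFile {
    std::string className;
    std::map<std::string, std::string> members;
};

ResFile LoadResFile(const std::string& path) {
    ResFile res;
    std::ifstream in(path);
    for (std::string line; std::getline(in, line); ) {
        const std::size_t eq = line.find('=');
        if (eq == std::string::npos) continue;      // skip non key=value lines
        const std::string key   = line.substr(0, eq);
        const std::string value = line.substr(eq + 1);
        if (key == "class") res.className = value;  // which class to create
        else res.members[key] = value;              // member initial values
    }
    return res;
}
```

A factory would look the class name up in a registry and let the new object read its own members in; reloading the file and re-running this on a live object is what allowed on-the-fly tweaking.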

Subsystems Overview
All subsystems were completed by the end of 2008.

Havok
The Havok physics library was used on all platforms.

OpenAL
OpenAL (Open Audio Library) was used to provide sound on the Windows and Linux builds of the game.

Ogg Vorbis
The Ogg Vorbis library was used on the Windows and Linux platforms for streamed sounds. It was chosen largely because it was a free and open source software (FOSS) standard, which many tools worked with.

Wii Specifics
The biggest change from the other platforms was that the Wii didn't support the fully programmable shaders on which the renderer was largely dependent. This required a separate renderer to be developed specifically for the Wii.

The render budget for the Wii was 200,000 polys running at 30 frames per second. Part of this reduction was made by cutting the poly budget of objects by 30%. The rest was achieved by a combination of culling shadows at a shorter distance, bringing LODs in earlier, limiting the number of shadow projections by using clip map shadows, and removing the 1 metre per vertex resolution area from the terrain renderer.

Bump mapping was kept to a minimum. While technically possible, it would have increased the size of the assets, and bump mapped objects were approximately 3 times as expensive to render as regular objects. However, where bump mapping was required in specific areas (for instance, the heroes' faces), it was supported.

PC Specifics
The minimum resolution supported was 800x600, with others being 1024x768, 1152x720 (widescreen), 1280x720 (widescreen), and 1280x1024. The aim was to allow any resolution upwards from 800x600 that was supported by the display device, so that users could balance the ideal resolution and resulting framerate for their machine. The user interface and HUD would need to scale appropriately with resolution.

The PC version used OpenGL for all rendering, not Direct3D. Any DirectX 10 features on Vista were accessed using OpenGL extensions.

Materials
Each surface was assigned a material, which was a combination of shader properties. A shader property was typically a texture (24-bit DXTC and VQ compressed) with variables to control how the texture was applied. The source textures were 24-bit TGA files, which were compressed by the tools.

The most commonly used shader properties were:
 * Diffuse map - this was the base colour texture.
 * Specular map - this specified where the object was shiny, along with a variable to control the glossiness.
 * Normal map - this allowed per-pixel normals to give the impression of increased detail and better lighting. These normals were relative to the vertex normals, which allowed better re-use of the normal maps. The normal maps only stored 2 of the 3 values from the normal vector; the 3rd was calculated in the shader, as the result must be a unit length vector (see the sketch after this list). The two values were stored as 8-bit values in the RGB and alpha channels respectively, as DXTC compressed those channels separately. High precision (16-bit) normal maps could also be used, but the extra quality these allowed could only be noticed on very smooth objects close-up, so they were unlikely to be used due to their high memory cost.
 * Parallax map - this simulated per-pixel depth, and so gave the impression of much higher polygon meshes when viewed at an angle.
 * Environment map - this was a cube map which made an object appear reflective, and was mainly used on very shiny objects (e.g. vehicles and droids). If no map was specified, the global environment map (set on a per-level basis) was used. This allowed environment maps to be location specific or non-location specific; the latter was more likely when an object was instanced in several places in the environment. The cube map was specified by the artist as 6 separate sides arranged in a cross shape, which was cut up automatically into the individual sides as part of the conversion process.
 * Incandescence map - this was rendered with a glow applied, and was useful for very bright parts of an object, such as neon lights or vehicle jets.
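The third-component reconstruction mentioned under the normal map entry amounts to the following, shown here as a minimal C++ sketch of what the shader computed (the remap from 8-bit [0,255] to [-1,1] is assumed to have been applied already):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// stored_x / stored_y: the two stored components, already remapped to [-1,1].
Vec3 DecodeNormal(float stored_x, float stored_y) {
    Vec3 n;
    n.x = stored_x;
    n.y = stored_y;
    // A unit-length normal satisfies x^2 + y^2 + z^2 = 1, so the third
    // component follows; max() guards against rounding pushing the
    // radicand below zero.
    n.z = std::sqrt(std::max(0.0f, 1.0f - n.x * n.x - n.y * n.y));
    return n;
}
```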

Environment
The backgrounds were static meshes which used an instancing system to allow for multiple occurrences of the same mesh in a scene. They were lit with pre-calculated lightmaps. All interior sections (including inside the capital ships) and some outdoor levels (such as Coruscant and Bespin) were implemented in this manner. Other outdoor levels required terrain.

The sky was rendered using an atmospheric shader, which combined two skyboxes: a ground-based skybox and a space skybox. The skyboxes were stored as cube maps in a high dynamic range format (RGBE). The atmospheric shader combined the boxes differently based on the height of the camera above the ground and the angle against the horizon, to represent the way light travels through a planet’s atmosphere.
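The document doesn't specify the exact HDR layout, but the standard Radiance RGBE convention (an 8-bit mantissa per channel plus a shared 8-bit exponent) decodes roughly as in this sketch:

```cpp
#include <cmath>

// Decode one RGBE texel into floating-point RGB. A shared exponent suits
// sky data, which has a very wide brightness range.
void DecodeRGBE(const unsigned char rgbe[4], float out[3]) {
    if (rgbe[3] == 0) {                    // exponent byte 0 encodes black
        out[0] = out[1] = out[2] = 0.0f;
        return;
    }
    // Exponent is biased by 128; the extra -8 scales mantissas back to [0,1).
    const float scale = std::ldexp(1.0f, int(rgbe[3]) - (128 + 8));
    out[0] = rgbe[0] * scale;
    out[1] = rgbe[1] * scale;
    out[2] = rgbe[2] * scale;
}
```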

Visibility
Indoor areas were split into sectors / rooms (the terrain was a single sector). Portals allowed visibility between sectors. Frustum checking would disregard objects and terrain chunks not in front of the camera.

Occluders could manually be placed inside high terrain to cull props from being rendered. When the view was near the ground, this would prevent having to render expensive parts of the background, and when the view was higher, coarser detail would be used.

In order to render larger scenes without a significant drop in performance, portals were set up to render to a texture. This texture was then applied to a polygon in place of the portal (known as an Imposter Portal).
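As a sketch of the frustum check mentioned above, the usual approach discards a bounding box when it lies entirely behind any of the six frustum planes; the types here (Plane, AABB) are illustrative, not the engine's own:

```cpp
struct Plane { float nx, ny, nz, d; };   // inside when nx*x + ny*y + nz*z + d >= 0
struct AABB  { float min[3], max[3]; };

bool BoxPotentiallyVisible(const Plane planes[6], const AABB& box) {
    for (int p = 0; p < 6; ++p) {
        // Take the box corner furthest along the plane normal (the
        // "positive vertex"); if even that corner is outside, cull the box.
        const float x = planes[p].nx >= 0 ? box.max[0] : box.min[0];
        const float y = planes[p].ny >= 0 ? box.max[1] : box.min[1];
        const float z = planes[p].nz >= 0 ? box.max[2] : box.min[2];
        if (planes[p].nx * x + planes[p].ny * y + planes[p].nz * z + planes[p].d < 0.0f)
            return false;                // entirely behind this plane
    }
    return true;                         // not trivially outside; render it
}
```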

Terrain
The terrain system was for rendering a large heightmap based terrain with a high level of detail in the foreground but with a low polygon count when viewed from a distance. It drew terrains which were up to 8km by 8km in size, and required an LOD system to improve performance. The LOD system was seamless, so that players would not notice it. Since players could fly above the terrain and view it from space, it needed to blend in with the rest of the planet, and it needed to be cheap to render when viewed from long distances.

The terrain was uniquely textured (it didn't use tiling, but rather a 16384x16384 mega-texture) with tiled normal maps and detail textures. The core terrain code and supporting systems were not generally project specific, but certain elements catered towards Battlefront (e.g. the maximum world sizes supported did not exceed those needed by this title).

The terrain was partly based on "Geometry Clipmaps: Terrain Rendering Using Nested Regular Grids" by Losasso & Hoppe and "Terrain Rendering Using GPU-Based Geometry Clipmaps" by Asirvatham & Hoppe.

In order to simplify the terrain system, some assumptions had to be made. It was assumed that the terrain was square, and could be represented by a heightmap. Terrain was also assumed to be continuous and smooth, and needed to be highly detailed for the innermost 2km square. Gameplay could only take place inside the inner 4km square, so no collisions would be required outside of this range.

The pipeline for terrain was:

 1. A designer blocks out the terrain using the game editor.
 2. An artist edits the terrain heightmap.
 3. That heightmap is used in TerraGen to generate a colour texture.
 4. This colour texture can then be edited by an artist.
 5. terrainConvert (an in-house tool) creates lightmaps and post-processes the artist-edited textures.
 6. terrainConvert creates the final asset for use in-game.

During steps 1 – 4, all assets were retrieved from and fed back into the asset management system, so at any time any of these steps could be repeated without affecting the others. Steps 5 and 6 were done automatically by the conversion process, so any changes made in steps 1 – 4 would get incorporated into the new in-game asset.

A number of tools were required for the terrain system. The terrain editor was incorporated into the existing game editor. The terrainConvert conversion tool was needed to produce the in-game asset. A tool for facilitating editing of the terrain mega-texture was also needed, as the texture was too big to be comfortably edited in Adobe Photoshop.

Some of the environments required caves within the terrain itself. The caves themselves were polygonal meshes built by artists, and attached to the terrain by flagging certain terrain quads as holes at a 1 metre resolution. In order to hide the seam between the terrain and a cave entrance, a frame was modelled around the entrance that could intersect with the terrain (e.g. boulders or a metal doorway).

Foliage
Foliage was required for several of the outdoor environments. It was painted onto the surface of meshes using the world editor. Different brushes could be used for different foliage objects, which would allow a variety of grass, bushes, and flowers.

Foliage was rendered more densely the closer it was to the camera. For performance reasons, no foliage was rendered when the viewpoint was beyond a certain distance from the terrain.

Detail geometry
The detail geometry system was used for static objects which could be instanced and re-used frequently throughout the environment (such as trees), as these could be batched together and rendered more efficiently.

Level-of-detail (LOD)
The game design allowed the player to view environments and their objects up close on the surface, or from far away in space. In order to display a large number of objects without adversely affecting performance, the LOD system had to allow assets to display at high detail in close-up, and at very low detail at a distance. It needed to blend between the levels seamlessly, as the player could approach from a distance without any camera cuts. It also needed to be relatively inexpensive with respect to processor time and memory use.

The terrain renderer natively used LOD. The foliage renderer also used LOD, and drew denser foliage at close distances.

Lighting / Shadows
For all static geometry, light maps were generated offline with the in-house photon-mapping tool. This was an artist-run tool which allowed lighting to be baked onto the background from lights placed in the background scene. This process could be distributed across a number of machines, vastly speeding up the render time of light maps. Light map UVs were calculated within Maya, allowing alterations to be made without necessitating a light map build. Monte Carlo integration was chosen as its settings allowed render time to be balanced against visual quality.

The lightmap was stored in a proprietary HDR format, which stored the total colour (and hence intensity) of light hitting that pixel, but not the directions of the lights. For geometry with a lightmap attached, each vertex had a normal stored, indicating the direction of the most influential light, which was used to correctly apply the lightmap over the normal and specular maps. For outdoor scenes, the most influential light tended to be the sun.

Dynamic geometry had direct and indirect lighting calculated in different ways:

 * For each dynamic object, direct lighting was calculated using a maximum of 2 light sources. Outside, the sun was generally one of these light sources. Other light sources were placed by artists or created inside particle effects (e.g. explosions).
 * Indirect lighting (ambient) was calculated offline in the light mapper tool. Tiny cube maps were generated, sampling the incoming ambient light at regular spatial intervals throughout the environment. For each dynamic object, the closest cube maps were interpolated between and applied.

Shadow maps were generated every frame for dynamic objects from the closest light, and projected onto geometry as a separate render pass. Objects were also self-shadowing with this technique. The resolution of the shadow maps decreased further away from the camera. Where unnoticeable, shadows were not created at all for props far away from the camera.

Wii Specific Lighting / Shadows
The model of lighting was a combination of artist-produced vertex lighting and decals, real-time directional specular, environment hemisphere maps and real time vertex lighting.

Before rendering of a prop began, a 64x64 sphere map was produced for each view and for each prop that took up a large amount of the screen. The sphere map was always orientated towards the viewer. For non-bumpmapped objects the sphere map only consisted of the specular component. This was made up of an environment map (attenuated by a stored shadow map) and specular directional lights. These were generated by using an indirect texture to transform from sphere normals into the reflected normals. If the surface was reflective then these reflected normals were used as a texture coordinate to look up into a hemisphere environment map of the current sky box. Specular directional lights were added in passes by using a matrix to dot the reflected normals with the light direction together with a texture to provide a sharp spot.

When rendering props, the normal was transformed into sphere map space and the component facing the camera was zeroed to generate the 2D sphere map coordinates. It should be impossible to see normals facing away from the camera. Diffuse lights that were close to the player (such as light from the gun and other lighting effects) could use up to 4 hardware lights, which were available at zero extra cost to the renderer.

Assuming that there were 20 visible props needing their own sphere map per frame, 320K was required to store the sphere maps.
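(That figure is consistent with 32-bit texels: a 64x64 map at 4 bytes per texel is 16K, and 20 such maps come to 320K.)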

The lighting system would decrease in quality based on how far the model was from the viewer. Models in the distance would use a much simpler lighting model, decreasing the cost to render.

The lighting LODs, in order, were:
 * The global hemisphere map is used
 * The specular map turns off
 * The model reverts to fully hardware-lit, using only diffuse hardware lights

Background lighting would mainly be artist-made, using a combination of vertex lighting and decals. The hemisphere method of specular lighting was only based on the object's normals, which gave unsatisfactory results on backgrounds with a large number of large flat surfaces; background specular was therefore done using an alternative method. By using a single directional specular light for the background, the background polygons could be split into those that accepted specular and those that didn't. This saving allowed a more complex specular scheme to be used. Backgrounds had their reflected normals calculated during conversion. From this reflected normal, a screen space offset for the centre of the specular spot was calculated. This offset was added to a screen space texture of a specular highlight using an indirect TEV stage. Materials with different specular values were accounted for by shrinking or expanding the screen space specular spot. This allowed a good one-pass approximation to real specular. Hardware lights were available for free, as with the props, for background lighting effects such as bullets.

In the other versions of the game, shadowing was performed by rendering each casting object into its own shadow buffer. Then everything affected by this shadow was re-rendered with the shadow buffer projected onto it. This called for objects to be repeatedly redrawn when projecting. This could be partially mitigated by rendering only the polygons of the object affected by the shadow, but determining these took CPU time. For the Wii, it was proposed that a clip map approach be used instead. This method used predefined shadow buffers that covered an area around the viewer. This had the advantage of limiting the number of times objects had to be rendered to project shadows, and allowed interesting dynamic shadows (such as moving trees or rolling clouds) to be added at very little cost.

Two shadow buffers were used. The first extended up to 16 metres away from the players (although this could be extended) with a resolution of 32 texels per metre. The second extended to 128 metres but had only a 4 texel per metre resolution. The inner shadow buffer had objects rendered into it from the light's position, and was then blurred to give soft projected shadows. The outer shadow buffer had only diffuse blobs the size of each object's bounds rendered into it. This allowed many more objects to contribute to the shadow buffer than if they all had to be fully rendered. Projection onto the background only had to be done once, and could be part of the normal background render. The inner buffer blended into the outer buffer, providing a seamless transition between the shadow levels. Objects did not receive shadows.

Scene Management
Scenes were objects used to keep track of the game state. For example, there was a LoadingScene object (which carried out the loading process for a level), a FrontendScene (which was responsible for the menus and FX used in the main menus), and a GameScene (which dealt with the gameplay itself).

When the game started up, a Scene Manager was used to start the appropriate scene. In the final version of the game, this would likely have been a scene which loaded the frontend scene and played intro movies, but during development it was often a scene which loaded straight into a level. The scene manager also handled the game flow when a level ended, switching from one level to the loading scene, then to the frontend/next level. The game scene also contained components which were used to control the flow of the game. This allowed game modes (Deathmatch, Capture The Flag, etc.) to be implemented using scripting. Scripts would contain game callbacks so that they could be notified of game events (game finishing, player dying, flag captured, etc.) and react accordingly. Components could be used to share code between game modes as necessary – timers and scoring systems, for example – and the scripts could make use of these.

Game Objects
Game objects were referred to as props. They existed in both the framework and the engine layers. The engine layer provided limited support for them, managing their creation, deletion, rendering, and a few other aspects.

Most props were actually wrappers for artist-built models, which were referred to as obs. Each ob was loaded once (an obdef), and instances of it were created (obinsts). These were almost entirely dealt with within the engine layer, and most in-game props had an ob. The obinst contained matrices, and dealt with animations, skeletons, switchflags (flags to turn parts of models on and off), LODs, and other information about individual objects. The obdef contained information about the geometry, textures used, the number of parts, and other information (about glass, cloth, ladders, materials, physics, etcetera).

Props had a link to an obinst (not always used), a bounding box (used for basic frustum culling of props), and a matrix (which contained the prop’s world position and rotation). They also had a render callback so that special case rendering could be done on a per-prop basis.

The framework had a class called CGameProp. This was a wrapper/utility class that provided methods for manipulating engine props. There were a series of classes that extended CGameProp to provide different types of prop (around 50 or 60 types at current count); CVehicleProp, CPlayerProp, CDoorProp, CGun, and CPhysicsProp being regularly used examples.

Each prop serialized itself during the creation process. It typically read in values describing its initial position and orientation from a resfile. The resfile could be changed and reloaded whilst the game was running, which allowed level designs to be tested and altered rapidly.

There was also a system that allowed props to serialize component classes in. The basic CGameProp class had only a single component, a CNetworkComponent, but most subclasses had many different components. A CVehicleProp contained about 30, which included a CSoundComponent (makes engine noises), a CFXComponent (adds trails, dust, tyre marks), a CCameraComponent (changes how the camera follows the vehicle), and a CHealthComponent (which keeps track of how much health the vehicle has remaining).

These components were created when the owner prop serialized them in, based on text resfiles, and each typically had a set of values that could be changed (how much health the vehicle has when undamaged, for example). The resfiles could be changed and reloaded on the fly, which allowed for fast testing and changing of components. For example, if several camera components had already been written, a vehicle camera could be changed to any of these and tweaked without having to recompile and run the game.

Animation
The animation system allowed an animation created in Maya to be exported and converted for use in-game, where it could later be played back on an in-game object. The system had an engine and a game side. The engine side was responsible for loading animation files and returning the data in the form of matrices. These matrices could then be applied to an object's matrices. The game side had code for blending different sets of animation matrices together to allow two animations to be played on the same object at the same time.

Animations were blended via a directed acyclic graph (DAG), where each node within the graph could use parameters determined by the game. This allowed a common set of code to be used in many ways. Use of 2-chain IK on top of the final output allowed correction for final display.

Animations could be compressed by leaving out keyframes – the exact amount varied depending on the animation, and was determined by a set of heuristics and error tolerance checks. The tools that built the animation from the source carried this process out, and the game played the resulting animation with no special case code. Further compression was achieved by quantizing values used in the animation to a lower precision. These lower precision values could then be stored in less space.
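As a sketch of the quantization step (the target precisions are assumed here; the document doesn't specify them per track):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Map a float in [minVal, maxVal] onto an unsigned 16-bit integer.
uint16_t Quantize(float v, float minVal, float maxVal) {
    float t = (v - minVal) / (maxVal - minVal);   // normalize to [0,1]
    t = std::min(1.0f, std::max(0.0f, t));        // clamp out-of-range input
    return (uint16_t)std::lround(t * 65535.0f);
}

// Recover an approximation of the original value at playback time. The
// error is at most half a step: (maxVal - minVal) / (2 * 65535).
float Dequantize(uint16_t q, float minVal, float maxVal) {
    return minVal + (q / 65535.0f) * (maxVal - minVal);
}
```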

Free Radical’s in-house Upper Body Inverse Kinematics System (known as UBIKS) was used to apply inverse kinematics to characters' upper bodies, and was largely used so characters could point their gun at a target independently of the rest of the animation being played on the character.

More Inverse Kinematics code was required for the Battlefront game (code for walking vehicles, for instance). This system used a 2 chain IK solver, which produced a single result. Constraints and stiffness information would be controlled from Maya. This was a relatively simple system to write and maintain (and was deliberately kept simple for performance reasons), but produced the desired results.

Cutscenes
The system was largely animation-driven, and the game code responsible for it was relatively simple. Animators created an animation in Maya. It contained information about the necessary characters' and props' movement, and also about the camera position and orientation. Since cutscenes were often relatively long, this animation could be large, so it was normally streamed in from disc and played on the fly.

Camera System
The camera system was split into two parts; an engine part and a framework part. The engine part was a basic camera system which could render views from given positions and orientations. The framework code then updated these positions and orientations.

Typically, the framework side had one camera per player. This camera was updated every frame, and the type of camera changed based upon the player's preferences and actions (if the player gets in a vehicle the camera will change types, or the player may switch between first- and third- person cameras).

There was also a free camera used for debugging purposes; this could be flown around the level using a keyboard and mouse.

Particles
The particle system consisted of particle emitters, which had a world position, area of emission, and other properties. Particles were emitted from these and updated during their lifetime. These were usually rendered as billboarded sprites. The emitter stored most of the common properties of the particles, so that each particle only used a small amount of memory. An in-house editor was written for creating and editing particles using the core game renderer. This allowed particle effects to be altered and visualized as they would appear in-game and also tweaked differently for particular levels or graphical filters.
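The memory split described above might look like the following sketch (field choices are illustrative, not the engine's actual layout):

```cpp
// Shared properties live on the emitter...
struct ParticleEmitter {
    float position[3];
    float emissionRadius;      // area of emission
    float particleLifetime;    // common to every particle emitted
    float startSize, endSize;
    unsigned textureId;        // the billboarded sprite all particles share
};

// ...so each live particle stays small.
struct Particle {
    float position[3];
    float velocity[3];
    float age;                 // everything else is read from the emitter
};
```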

Specific shaders were written for particles to allow effects such as heat-haze and refraction, and the system featured a range of built-in emitter properties and particle behaviours. The particles were integrated with the physics systems to allow effects ranging from sparks ricocheting off characters and background geometry to suitably sized splashes when objects hit water.

Weather
The weather system populated areas within the camera frustum with weather particles such as rain and snow that were affected by gravity and wind for efficient yet realistic results. This was combined with billboarded sprites that moved towards, away from or across the camera, depending on the wind direction. These sprites featured a noise texture that gave the impression of sweeping drafts of wind.

Clouds
Volumetric clouds that could be flown through were scattered about the sky for several of the environments. Each cloud initially generated a 3D noise texture using Perlin noise. This 3D texture was projected in the shader onto a series of billboard-aligned planes which the camera could pass through. Lighting was also calculated in the shader, although for performance reasons this assumed the light was always directly above the cloud, which in practice was unnoticeable.

A very important optimization was the use of imposters. Clouds in the distance were rendered to texture (the imposter buffer), and only the texture was drawn onto the screen. The imposter buffer was only updated when the cloud was approached or the camera angle changed sufficiently.

Clouds were generally fluffy and didn't require high definition which allowed the imposter buffer to be fairly low resolution. This also helped the non-imposter clouds, as all the fill-rate intensive planes could first be rendered onto a quarter size buffer. The buffer was then applied to the full screen.

The procedural nature of the clouds allowed the artists to tweak the parameters dynamically. It also allowed the clouds to change during gameplay, although this required updating the imposters which was expensive. One solution was to only update a small number of cloud imposters per frame, so that the entire cloud cover would gradually change over the course of a few seconds.

Other
The visual effects system also provided several effect primitives such as tracers, trails and electricity, which were utilized for the sci-fi weapon and lightsaber effects.

Wii Specific Effects
Blending between the ground sky and space sky was done as part of the fog calculation on other platforms. However, this was not practical on the Wii, so the skybox blending was a separate process from the fogging of objects. Fogging of objects was done using the hardware fog, with the near plane, far plane and fog colour set per object to blend in with the skybox.

The space skybox was a textured cube centred on the player. To save on texture space, the space skybox was first rendered using a low resolution cubemap containing nebulae. Then each of the six faces had a generic star texture tiled over it. Finally, the suns were a separate render on top of this. Blending on the ground sky used a sphere rendered around the viewer's position, with a ground sky hemisphere map mapped onto it. To achieve the blending between the skyboxes, a texture was mapped between the poles of the sphere containing the pre-calculated amount of blending between the sky boxes. This pre-calculated texture was 2D: the x direction was the amount of atmosphere passed through for each latitude on the sphere, and the y direction was the height of the viewer. As the viewer moved through the atmosphere, the texture coordinate was shifted in the y direction, which had the effect of making more of the space skybox visible. That way, the horizon of the ground skybox didn't follow the viewer up as they moved through the atmosphere; instead, the ground skybox was shifted down as the viewer moved up.

The planet itself and the clouds were also part of this sphere. The planet was a disc within the sphere that moved up and down to stay aligned with the atmosphere. This disc's texture coordinates scaled as the viewer moved up, so more of the planet surface could be made visible. The clouds, if any, were added on top of the planet disc. The vertices at the edge were the same as the planet disc, but a single vertex in the middle separated it from the planet, making the clouds visible from both the ground and from space and providing some parallax to make the scene more realistic when viewed from above. To hide the transition through this cloud plane, the cloud layer faded out when the viewer got close.

Full screen FX
The game would have made use of several types of full screen effects. Existing effects included a High Dynamic Range effect, blur and fog.

The High Dynamic Range system worked by rendering the scene into a floating point render target. This target was then downsampled and an average brightness for the scene was calculated. A shader was applied to blur the bright sections of the scene, providing a ‘bloom’ effect. In order to get a dynamic effect, the brightness was filtered towards the brightness of previous frames.
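Filtering the brightness towards previous frames amounts to a frame-rate-independent blend, sketched here with an assumed adaptation rate:

```cpp
#include <cmath>

// adapted:  brightness carried over from previous frames
// measured: average brightness of the current frame
float AdaptBrightness(float adapted, float measured, float dt) {
    const float kRate = 1.5f;                  // adaptation speed (1/sec), assumed
    const float t = 1.0f - std::exp(-kRate * dt);
    return adapted + (measured - adapted) * t; // drift towards the new value
}
```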

The blur effect was carried out using a fragment shader.

A fog effect was applied using a combination of particles and a shader which read values from the depth buffer and filtered accordingly (more fog was visible where objects were further away).

A possible further full screen effect was Depth of Field. This would have been purely for adding polish to the game, and since it did not need to ship with the game, it was low-risk.

User Interface
The menu system used a GUI editor to allow a user to lay menus out and specify various attributes of interactive widgets in the GUI.

The HUD was largely data-driven, with the HUD split into separate components for parts of the display. The components were written in code, with several standard types available for bars and text displays. More complicated components were generally written with bespoke logic, but the components that comprised the HUD and their locations could be altered from data files without needing to rebuild the game.

Audio
Source files were stored in WAV format. These were then converted to VAG (ADPCM) and ATRAC3 for the PS3, and WMA for the Xbox 360. During development on PC, the WAV files were used for shorter sounds, and Ogg for large sound files. All source files were 48kHz 16-bit.

Audio code was split into three separate parts - a platform layer, an engine layer, and a framework layer. The details of the platform layer varied depending on the platform, but all platforms exposed the same functions to the engine. The engine layer could then play sound effects on all platforms in the same way. The PS3 used Multistream and the Xbox 360 used XAudio and X3DAudio.

The engine layer dealt with three types of audio: sound FX, streamed sounds, and music. Sound FX were kept in memory and used regularly - gunshots or hit sound effects were often played this way, as were other small sounds. Streamed sounds were longer, less frequently used sound effects, and were played whilst being streamed from disc. These were often used for speech, or similar longer effects. Music was mostly handled in-engine by managing streamed sounds - it had special functions for fading between tunes, pausing, and setting volumes.

The sound system was able to cope with up to 100 simultaneous voices, and 5.1 channel output. It supported pitch shifting and sample skipping, interactive mixing, fades between sounds, and distance gain. Occlusion and environmental effects, interactive music sequencing, sound pathfinding, audio HDR, and complex looping were features added later on to the sound code.

Reverb and other DSP effects could be implemented in software (handled by Multistream on the PS3). In-game materials had sounds attached to them for different events. Artists placed the materials on models and backgrounds, and the sounds were triggered by events. This meant footsteps and bullet hits could produce different sound effects on metal and concrete, for example.

The sound code used approximately 15MB of memory on all platforms.

The most commonly used method of decaying sound volume in games is a simple geometric distance test. However, in the setup editor, interconnected sound volumes could be placed throughout the environment. The volumes were combined to model a flow of sound through the environment; this, combined with the ability to tag volumes with additional data, allowed Free Radical Design to create more realistic sound design.

Sounds could be placed using the level editors, either as background sounds, or triggered under certain conditions using scripts. More complex sound effects could be created procedurally by altering volumes, pitches and filters in code based on events (e.g. engine sounds could be simulated by changing the pitches of mixed samples as a car's revs increased in the physics code).

Volumes and pitches could be adjusted in data files using proprietary tools, and interactive mixing could be carried out on the fly using tools.

Cutscene SFX could be triggered in-game but could also include pre-mixed streams.

Free Radical Design had a dedicated audio team who were responsible for writing in-house tools to facilitate sound implementation.

Wii Specific Audio

The Wii audio hardware had internal support for panning, mixing, envelopes, pitch, reverb and chorus. Effects would be limited to this selection. Sound occlusion/pathfinding was also expensive and would have been dropped from the Wii version.

Sound samples were mainly 48kHz ADPCM. For music and other long streams, 48kHz ADPCM was streamed from disc. ADPCM had the advantage of being able to stream off the disc and be played without involvement from the CPU.

Surround sound was supported via Dolby Pro Logic 2.

Input
Input was handled largely through the platform specific controllers (joypads). During debugging, however, it was useful to have access to keyboard and mouse input. Both systems were supported by the engine on PC, while consoles only had joypad support.

Each platform had its own input functions, and these were wrapped by an engine layer. The engine layer was then used by the framework. The framework had a data-driven system for setting player controls up. The framework input code could map different joypad inputs to different player controls based on the player’s current state. These could be changed by designers (freeing up programmer time), and on the fly while the game was running - sensitivities could be tweaked without having to rebuild and run the game.

Wii Specific Input / Controls

Because of the limited number of gestures, bespoke gesture recognition code would have been written for each one.

PC Specific Input / Controls

Mouse and keyboard support was added to the game. Every game control was bindable to custom keys from the Controls menu, and the sensitivity of the mouse was configurable.

The main game controllers would have been supported, including the X360 controller. The most common controllers had their mappings preset, but the ability to remap any game control to any button or axis was provided so that any controller could be configured to work.

File I/O
The engine exposed a set of file management functions to the game, which were a wrapper around platform-specific file functions. The file functions gave the engine the ability to open files and read or write data from them, and there were some utility functions here to make common operations easier. There was also a streaming API which was used for the majority of file operations so that the main thread remained responsive at all times. The engine layer of the file API was responsible for handling platform-specific TRC requirements, such as disc eject or file reading errors.

Package files
Files could be packaged together into large archive files, called pack files. Loading all the files in one load operation reduced load times (only one seek should be required, and the file could then be loaded with the disc spinning at high speed). When the engine was looking for a file, it would first check to see if the file was in an open pack file. If it was, this file would be used; otherwise the engine would look for the file on disc. When files were in pack files on disc, the pack file was sorted to reduce the number of drive seeks required for that dataset. Small file reads were automatically extended into larger reads at little to no extra cost. The data from these extra reads was kept in a lookahead buffer and was immediately available if requested.
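The lookup order might be sketched as follows, with hypothetical types (PackFile, FileHandle) standing in for the engine's own:

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct FileHandle { bool valid; };

struct PackFile {
    std::vector<std::string> entries;   // table of contents
    bool Contains(const std::string& name) const {
        return std::find(entries.begin(), entries.end(), name) != entries.end();
    }
    FileHandle Open(const std::string&) const { return FileHandle{true}; }
};

// Fallback: open a loose file directly from disc (stubbed here).
FileHandle OpenLooseFile(const std::string&) { return FileHandle{true}; }

FileHandle OpenGameFile(const std::vector<PackFile>& openPacks,
                        const std::string& name) {
    for (const PackFile& pack : openPacks)  // open pack files checked first
        if (pack.Contains(name))
            return pack.Open(name);
    return OpenLooseFile(name);             // otherwise look on disc
}
```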

Streaming
Streaming functions were written on a per-platform basis. They allowed data to be read from files in the background whilst code was running. They were mainly used to read data while the game was running (textures, game objects, music and other sounds were all streamed in from disc as needed in order to save memory).

Asset compression
Audio was in WMA, Ogg, or ATRAC3 formats, which gave similar results to MP3 compression, so audio would not use much space on the media, or benefit much from extra external compression.

Object files and textures made up the majority of the data, textures in particular. Some files compressed better than others, but from tests it was estimated that on average object files compressed to 50% of their original size, and textures to 33%.

The compression method was based on a gzip variant of LZ77. This did not give optimal compression ratios, but gave very fast decompression. Decompression was done automatically and was performed on a separate thread.

Half-floats were used instead of floats for vertices where possible. These were sufficiently accurate up to 32 metres, so these were used for any objects or rooms smaller than this.
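(The 32 metre figure is consistent with half-float precision: with a 10-bit mantissa, values in the 16–32 metre range are spaced 2^-6 m, about 1.6cm, apart; past 32 metres the spacing doubles to over 3cm.)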

Wii specifics
The Wii's maximum disc capacity was a dual-layer DVD, the same as was available on the Xbox 360. However, fitting on a single-layer DVD was possible due to the four-fold reduction in texture sizes and the lack of HD movies.

Saved games
The game save system was capable of taking a snapshot of any game state and recording it to a file. It supported multiple player profiles with per-profile settings.

Collision
Physics collision was handled by the Havok physics library.

Additionally, custom collision detection was performed for line tests used by AI and bullets/lasers. These line tests were done in a separate thread to take advantage of extra processing cores, and generally yielded a result the frame after they were started.

Rigid Bodies
The physics for the game were built on Havok. The framework contained in-house code that wrapped around Havok.

Nearly all moving objects in the game consisted of dynamic rigid bodies that could be interacted with and collided with, such as crates, vehicles, characters and pickups.

Ragdoll Physics
When a character died, it made a transition into a ragdoll. The death was initially driven by an animation, but eventually transitioned into a physics-driven ragdoll.

The transition was determined by contacts between the character and its surrounding geometry. The ragdoll's geometry consisted of physics primitives such as spheres and object bounding boxes connected via joints, each with six degrees of freedom. The joints were given realistic constraints, tension and "springiness" to make the ragdoll feel more like a rigid corpse than a wooden ragdoll.

Vehicles were created using the rigid body physics system in the framework code. They were data-driven; many aspects of their handling could be tweaked from data files. Different types of vehicles shared code allowing characters to enter, leave, and drive the vehicle, but each had control over its handling. This could be achieved by setting the velocity of the physics object or by applying forces to the object.

Some vehicle props used a different, animation-driven system (walking vehicles such as AT-ATs, for example).

The codebase also contained code for wheeled vehicles. These had custom engine properties such as torque curves and gear ratios, as well as custom suspension with non-linear damping, and springs to give more realistic handling.

Scripting
The proprietary custom scripting language (VM) was used to drive events, objectives and object behaviour. The custom nature of the language allowed the feature set to be expanded based on the requirements of the game and engine, a notable example being the networking of game information. It was a typed C-style language, making it easier to catch compilation errors than in a typeless language such as Lua. Commands could also be executed from within the in-game command-line console.

The scripting system used triggers to start code running. Several types of triggers existed – volume triggers were fired when a player entered the volume, and switches had triggers associated with them. The trigger system was flexible and new triggers could easily be added.

Events were linked to a trigger, so that when a trigger occurred, the event linked to it was started. These could be pieces of code written in C, or pieces of VM code. From within the VM, designers could then create code to handle a complex chain of events – when a switch was pulled, the VM could start props playing animations, delete props, create props, add messages to the player’s HUD, and many other things.
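The trigger-to-event linkage might be sketched as follows on the host side; the types and the VM entry point are hypothetical, as the real interface isn't documented here:

```cpp
#include <functional>
#include <string>
#include <vector>

struct Trigger {
    std::string name;               // e.g. a volume or switch identifier
    std::function<void()> event;    // a C function, or a call into the VM
    bool fired = false;
};

// Called when gameplay code detects a trigger condition (player entered a
// volume, switch pulled, ...).
void FireTrigger(std::vector<Trigger>& triggers, const std::string& name) {
    for (Trigger& t : triggers) {
        if (t.name == name && !t.fired) {
            t.fired = true;
            t.event();              // start the linked event
        }
    }
}
```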

Wii Specific AI
Depending on feasibility tests, it would probably have been necessary to scale back the combat density, or to LOD the AI more heavily, in the Wii version.

Pathfinding
The pathfinding system used designer-specified navigation meshes.

Navigation meshes were a set of polygons defining the areas traversable by AI on foot, and created using the setup editor. These meshes were grouped by navigation volumes, which were used for quick pathfinding. Navigation volumes also represented navigable areas for vehicles. For exterior sections, there were obstacle volumes which the flying AI avoided.

Navigation meshes were used by the AI system to plan a route from point to point. A system allowed the AI to ask for a path from point A to point B, and this was then returned to them as a series of waypoints.

AI characters also used steering behaviours to do local obstacle avoidance. These behaviours were applied as the character traversed along the path calculated by the pathfinder and allowed them to steer around other characters, physics objects or other designer-tagged objects.

Networking
The network system was a medium risk system due to its complexity and the scale required by the design. It enabled interactivity between players on different machines and allowed the delivery of game updates and modular content. It supported up to 50 players playing against one another in a single battlefront and 2-player galactic conquest over the Internet, LAN and system link connections, with dedicated and player-hosted servers.

Key Features:

 * Instant Action
 * Friends
 * Clans
 * Lobby
 * Squad Support
 * Leaderboards
 * Statistics
 * Spectator Mode
 * Voice Chat
 * Replays
 * Downloadable Content
 * Patching System
 * Account Management
 * Synchronisation
 * LAN
 * Dedicated Server
 * Automated Testing Support
 * Stability

Server migration was not supported.

The requirements of the system for the network prototype were limited to a small number of opponents playing on one level, including on-foot and vehicular traversal of the terrain and the interiors of capital ships, the use of flying vehicles to span the vertical battlefront, and shooting – and, as such, the synchronization of all these elements.

The networking layer utilized a client/server and peer-to-peer hybrid architecture, as this allowed the lowest data throughput over the network infrastructure.

The necessity of game state being distributed around all clients dictated that most of the state data would be contained within the scripting language. The custom nature of the scripting language allowed a good understanding of, and control over, the data being transferred automatically.

Other tools, such as packet sniffers, statistical analysis and network condition simulation tools, were used for debugging, performance tuning and stress testing. It was anticipated that most, if not all, of these wouldn't be developed internally, but used off-the-shelf.

Testing
Network testing and debugging tools could make the process of debugging network games a lot easier. The game and engine had built-in fault simulations for testing packet loss and delay. If used regularly during early testing, they would save a lot of time fixing bugs later. Free Radical had also previously used packet capture tools like Ethereal to debug problems, and had previously worked with a STORM network robustness test. Whilst this required expensive hardware and software that Free Radical Design didn't have, occasional offsite tests would be useful if they could be arranged.

Automated testing involved AI playing matches over the network. This allowed Free Radical Design to easily gather crash information as if a high number of players were competing, and to gather gameplay statistics to help identify any bugs. Automated leaving and joining was incorporated.

Wii Specifics
Networking was done using Nintendo's DWC libraries. Although this library was not socket based, it was still possible to use it with the existing hardware-independent layer. The biggest change was in the lobby system. There were plans for supporting server/client and match-anybody modes. The match-anybody method was very unsuited to Free Radical Design's type of game, but it was the only way of playing with players you did not know. Any matching had to occur before the game started, which meant a very long wait if a large number of players weren't online. Also, at the time, there was no way of connecting to a dedicated server, which meant either limiting the number of players or having one player (randomly selected from the peers) act as the host, potentially with a lower frame rate.

PC Specifics

There were plans for using GameSpy on the PC version.

Split-screen support
Battlefront supported 4 players on a single console using split screens. This presented certain technical challenges. Each of the 4 players was given their own screen view window by a player managing class in the framework, and the engine was then responsible for rendering the 4 views.

In order to make good use of the console in singleplayer mode, Battlefront streamed data into memory from disc while the game was playing. If 4 players were playing at once and could all move to different areas of the level, each would require a different area of the terrain to be visible at high resolution at the same time. This meant each player needed a buffer to fill with the streamed terrain data. These buffers (and those for streaming textures in general) had to be smaller than those used in singleplayer because of the limited amount of memory available, and this meant using lower texture resolutions in splitscreen. The smaller screen area being rendered to mitigated this – the texel-to-pixel ratio remained similar.

Geometry streaming suffered from the same issues, except it was harder to replace high-polygon versions with lower-polygon versions, because artefacts were introduced even with a smaller screen area. Automatically generated LODs suffered from this to a greater extent, as the process produced models that looked acceptable at a distance but not close up. To mitigate this, artists could manually generate the LODs where necessary, as they could judge where best to lose polygons. LODs were also built with detail added on and flagged as detail; in these cases, it was very easy to render the geometry without the detail and for it to still look acceptable.

Split-screen also limited the ability to LOD several systems that were easy to LOD in single-player, including AI and animation. To mitigate this, the playing areas needed to be reduced for split-screen play. This didn't have an impact on development, as levels were already being designed to work at different capacities for network games (i.e. 16, 32 and 50-player versions).

When in splitscreen, the game didn't support picture-in-picture display, partly due to the extra load on streaming and memory, and partly because of the difficulty of displaying such a window within an already small screen area.

The splitscreen could be split horizontally or vertically. Most games had split the screen horizontally into top and bottom, but the recent trend towards widescreen televisions made a vertical split more visually appealing. Either split was possible using Free Radical Design's technology.

The splitscreen code was not a self-contained system, and its implementation was split between several other systems. The stress put on other systems (streaming and memory management) was the main risk with the split-screen code, along with performance and the compromises in visual quality needed to achieve it. Two player splitscreen was flagged as lower risk, however.

Wii Specific Split-screen
Splitscreen performance was a problem because the GPU was so fully utilized by the single-screen game. Splitscreen was limited to 2 players with AI on singleplayer maps. To make up for the cost of the extra view, shadows were turned off and the far plane was moved in to reduce the number of polys being rendered. These reductions were similar to those intended for the final version of Battlefront III. High-quality lighting was not used, falling back to hardware lights to decrease the vertex processing cost. The rendered terrain resolution could also be reduced to help with performance on CPU and GPU.

When running in splitscreen, the increase in GPU cost mainly came from the extra vertices requiring processing. Pixel cost stayed reasonably constant, as the same screen area was being drawn regardless of the number of views. For a typical set of vertex data (position, 1 texture coordinate and a normal), it took 8 cycles for the vertex processor to read in the data, limiting the peak transformation rate to 30.38Mvtx/sec. A full scene made up of 200,000 polygons had approximately 400,000 vertices, so processing them would take at least 13ms – almost half a frame. Having 4 views would take the time needed to process the vertices to 52ms. With two player splitscreen, 30fps was possible (by making the savings above). Without reducing the vertex count in the singleplayer game, Free Radical Design could not see four-player splitscreen running above 15fps, an unacceptably low frame rate. Therefore, a limit of 2-player splitscreen was enforced on singleplayer maps. However, it was possible to have 4-player splitscreen on the Wii-exclusive content, where a lower polygon count could be enforced.
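(The quoted peak rate matches the Wii GPU's 243MHz clock: 243MHz / 8 cycles ≈ 30.38Mvtx/sec, and 400,000 vertices / 30.38Mvtx/sec ≈ 13.2ms per view.)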

Cloth
A cloth system was used on characters.

Lip Sync
Cut-scene characters had full facial animation. These were animated in sync with the English language, and dubbed over for foreign languages.

In game, characters did a much less precise lip sync by semi-randomly blending between animator-defined face poses whilst talking. They also had a bank of face poses for idles, pain and firing.

Skinning
On the PlayStation 3, Xbox 360 and PC, characters were soft-skinned. This skinning was done on an SPU on the PlayStation 3, and in a separate thread on the Xbox 360 and PC.

Wii Specifics
On the Wii, skinning posed two main problems: it required a huge amount of processing, and large buffers were needed to contain the results. These problems were exacerbated by the large number of characters that could be on screen in Battlefront III. The Wii provided a cheap alternative called stitching. The Wii hardware allowed up to 10 model view matrices to be loaded into the graphics processor's memory at once, and the matrix used to transform a vertex could be selected on a per-vertex basis. For each pair of bones, the two bone matrices were uploaded along with eight interpolated matrices. The model was split into parts depending on which pair of bones influenced a polygon. Each vertex of the part had an index that specified which combination of the two bone matrices it was influenced by. Because the skinning weights were quantized, it gave less smooth skinning (it was more like having 8 more bones between every pair), but this was rarely noticeable. Other disadvantages were that each vertex had to carry more index data, the graphics FIFO had to accommodate a lot of extra matrix data, and each polygon (not just each vertex) could only be influenced by a maximum of two bones. The big advantage was that this method had almost zero CPU cost. The PC used 4 weights per vertex, so some models had to be re-boned.
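Building the 10-matrix palette for one bone pair could be sketched as below; a bare 3x4 array stands in for the engine's matrix type, and simple linear blending is assumed:

```cpp
typedef float Mtx34[3][4];

// Linear blend of two bone matrices; adequate for this sketch, though a
// full implementation might renormalize the rotation part.
void BlendMatrix(const Mtx34 a, const Mtx34 b, float t, Mtx34 out) {
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 4; ++c)
            out[r][c] = a[r][c] * (1.0f - t) + b[r][c] * t;
}

// Slot 0 is bone A, slot 9 is bone B, slots 1..8 are the eight interpolated
// matrices. Each vertex then stores only a small slot index, which is what
// quantizing the skinning weights amounts to.
void BuildStitchPalette(const Mtx34 boneA, const Mtx34 boneB, Mtx34 palette[10]) {
    for (int i = 0; i < 10; ++i)
        BlendMatrix(boneA, boneB, i / 9.0f, palette[i]);
}
```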

Memory management
The memory management API supported a number of different heap types, including:

 * Arbitrary general purpose heap
 * Fixed size heap
 * Stack-based heap
 * Handle-based defragging heap
 * Object allocator

Different heaps were used so that the resources dedicated to each subsystem were easy to track – physics memory was allocated from a different heap to particle effects, for example. Easy tracking of memory use was very important, as it made memory optimization easier later on in the project.

By default, all heaps were thread-safe through the use of mutexes. If a given heap was only to be used from a specific thread, it could be marked as not needing the mutex; this increased performance.

The defragging heap was used for textures. The engine also supported its use for object data, should that have proven necessary. The defragging process was done using either the GPU, DMA transfers, or CPU copies.

The object allocator was designed to provide minimal fragmentation when allocating lots of small arbitrary-sized memory blocks. It was used by the framework for allocating C++ objects and similar small allocations.
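
One common way to build such an allocator is to round each request up to a size class and serve each class from its own free list. The sketch below illustrates that approach; it is an assumption about the design, not Free Radical Design's actual implementation.

    // Small-object allocator sketch: requests are rounded up to a size class,
    // each class served from its own free list, keeping fragmentation minimal.
    #include <cstddef>
    #include <cstdlib>

    class SmallObjectAllocator {
        static constexpr size_t kClasses = 8;  // size classes: 16, 32, ..., 128 bytes
        static constexpr size_t kGranule = 16;
        void* freeLists_[kClasses];            // singly-linked free list per class

        static size_t ClassFor(size_t bytes) { return (bytes + kGranule - 1) / kGranule - 1; }

    public:
        SmallObjectAllocator() {
            for (size_t i = 0; i < kClasses; ++i) freeLists_[i] = nullptr;
        }

        void* Alloc(size_t bytes) {
            if (bytes == 0 || bytes > kClasses * kGranule)
                return std::malloc(bytes);                 // large blocks fall through
            size_t c = ClassFor(bytes);
            if (void* block = freeLists_[c]) {             // O(1) pop from free list
                freeLists_[c] = *static_cast<void**>(block);
                return block;
            }
            return std::malloc((c + 1) * kGranule); // a real pool would carve from a slab
        }

        void Free(void* block, size_t bytes) {
            if (bytes == 0 || bytes > kClasses * kGranule) { std::free(block); return; }
            size_t c = ClassFor(bytes);                    // O(1) push onto free list
            *static_cast<void**>(block) = freeLists_[c];
            freeLists_[c] = block;
        }
    };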

Wii Specifics
The Wii had 88Mb of RAM in total, split between two banks. 24Mb was 1T-SRAM, which was extremely fast; the second bank was 64Mb of standard DDR. As a general rule, the 24Mb bank had to be reserved for code and frequently-used data, with the remaining 64Mb used for infrequently-used data and assets. Both the CPU and GPU could access both banks. There was a further 3Mb embedded in the GPU, but this was reserved for GPU caches and frame buffers.

The Wii vertex data format was a lot more flexible than on the other platforms, allowing a different index buffer for each parameter (position, texture coordinate etc.) and thereby a reduction in repeated data. In addition, tangents and binormals were not needed on most objects due to the lack of normal maps. A further saving could be gained by using a static normal table shared between all assets. Combined, these techniques led to an average reduction of 40% in vertex data.
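
The saving can be illustrated by contrasting the two layouts. In the sketch below (illustrative names, not actual GX code), the conventional layout duplicates a whole vertex whenever any attribute differs, whereas per-parameter indexing stores each attribute pool once.

    // Conventional vs. Wii-style per-parameter indexed vertex layouts.
    #include <cstdint>

    // Conventional layout: one index selects a whole vertex, so data repeats
    // whenever any single attribute differs (e.g. a shared position with two UVs).
    struct Vertex { float pos[3]; float uv[2]; float nrm[3]; };

    // Wii-style layout: one small pool per attribute, and an index per
    // attribute per drawn vertex. The normal indices can point into a static
    // table shared between all assets.
    struct WiiVertexIndices { uint16_t posIdx, uvIdx, nrmIdx; };

    struct WiiMesh {
        const float*            positions;   // unique positions only
        const float*            texCoords;   // unique texture coordinates only
        const float*            normalTable; // shared static normal table
        const WiiVertexIndices* indices;     // three indices per drawn vertex
    };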

Smaller textures and a lack of normal maps reduced the texture cost significantly. The Wii hardware limited texture sizes to 1024x1024. Many assets used 2048x2048 color maps; downsampling these to 1024x1024 reduced the memory required by 75%. Specular maps were reduced to 256x256. If bump mapping was used it would be combined with the specular map.

Texture and object streaming was used more aggressively, reducing memory use further.

Any texture (or mipmap of a texture) smaller than 256x256 was loaded statically. Bigger textures were streamed as needed. To determine which textures needed streaming in, each surface was stored with a bounding box and an average texture density. The bounding box was used to calculate how much of the screen the surface was occupying. This value was then multiplied by the texture density to obtain the size of texture needed to achieve 1 texel per screen pixel. If this size was greater than the texture's “requested size” then the requested size was increased. Each frame the requested size was reduced, allowing textures that had not been visible for several frames to be streamed out.
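
A sketch of this heuristic; the field names and the per-frame decay rate are assumptions.

    // Streaming heuristic: screen coverage times texel density estimates the
    // texture size needed for ~1 texel per screen pixel; the requested size
    // ratchets up immediately and decays each frame so unseen textures stream out.
    #include <algorithm>

    struct StreamedTexture {
        float requestedSize; // texels along one axis, e.g. 256 or 1024
    };

    // Called for each visible surface, using its bounding-box screen coverage.
    void UpdateRequest(StreamedTexture& tex, float screenCoverage, float texelDensity) {
        float needed = screenCoverage * texelDensity; // size for ~1 texel per pixel
        tex.requestedSize = std::max(tex.requestedSize, needed);
    }

    // Called once per frame for every streamed texture.
    void DecayRequest(StreamedTexture& tex) {
        tex.requestedSize *= 0.95f; // assumed decay rate; unseen textures shrink
    }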

The stream buffer was allocated in blocks large enough to accommodate a 256x256 compressed color texture. Only 256x256 and 1024x1024 textures were streamed in; 1024x1024 textures used 16 blocks. If a texture was to be streamed in, the 256x256 version was streamed in first. If no more 256x256 textures needed to be streamed, then 1024x1024 textures were streamed in and the respective 256x256 textures were freed. If a 1024x1024 texture was streamed out, the first 15 blocks were freed, leaving a 256x256 texture in the last block.

To make space for the larger textures, the 256x256 texture slots were defragmented. This was done using the DMA functions of the locked caches, which gave 330Mbps of copy bandwidth in MEM2. This meant a 256x256 texture could be moved in well under a millisecond. During periods of defragging, around 4 moves were done per frame.

When a block was freed or moved, the old block would not be reallocated for 2 frames, so that all references to the texture still in the graphics FIFO could be flushed out.
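
A sketch tying the block scheme together: blocks sized for one 256x256 compressed texture, DMA moves for defragging, and a two-frame quarantine on freed blocks. Names are illustrative, not the engine's actual code.

    // Stream-buffer block bookkeeping: defrag moves and deferred reuse.
    #include <cstdint>
    #include <vector>

    struct Block {
        bool     inUse;
        uint32_t freedOnFrame; // frame on which the block was last freed
    };

    struct StreamBuffer {
        std::vector<Block> blocks;
        uint32_t frame = 0;

        // A freed block stays quarantined for 2 frames before reallocation,
        // so references still in the graphics FIFO can flush out.
        bool CanReuse(const Block& b) const {
            return !b.inUse && frame - b.freedOnFrame >= 2;
        }

        void FreeBlock(int i) {
            blocks[i].inUse = false;
            blocks[i].freedOnFrame = frame;
        }

        // Defrag step: move one 256x256 texture (one block); around 4 such
        // moves were performed per frame while defragging.
        void MoveBlock(int from, int to) {
            // dmaCopy(blockAddress(to), blockAddress(from), kBlockBytes); // locked-cache DMA
            blocks[to] = blocks[from];
            FreeBlock(from);
        }
    };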

Wii Memory Map

System               Size (Mb)
OS                   1
GFX FIFO             1
Frame buffers        1
Terrain heightmap    2
Shadow buffer        2
Animation            4
Sound                5
Physics              5
Object data          15
Static textures      15
Streamed textures    15
Game                 22
TOTAL                88

Asset / resource management
Asset management code was stored with utility code in the engine, and was used by the game to make sure it was loading the correct version of different asset types.

People who created assets for the game (generally referred to as artists, though this included designers, musicians, animators and others) worked in editable file formats. These editable files were checked into Perforce.

The engine’s tools system read these editable files and converted them into in-game formats. This allowed Free Radical Design to change the format of assets for different platforms, and also meant new features could be added without artists having to re-export all of their assets.

Whenever an asset file format changed, a version number associated with that format was also changed. This version number was included in the path of the asset the game looked for. If the game looked for an asset that did not exist, the game ran the tools to attempt to convert that asset.

To illustrate this, imagine the game sound file format changed to incorporate a new effect. The old sound file format was version 1, so the new version was 2. When the game was rebuilt with the changes, it would look for a file called ‘sound_v2/gunshot.snd’. This file would not yet exist – only ‘sound_v1/gunshot.snd’ would be there – so the game would start the tools to build gunshot.snd. The tools would only produce the latest version of the asset, and so would output ‘sound_v2/gunshot.snd’, which would then be loaded by the game.
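
A sketch of this lookup in C++; FileExists and RunAssetTools are hypothetical stand-ins for the engine's file system and tools interfaces.

    // Versioned asset lookup: the format version is baked into the directory
    // name, and a missing file triggers the converter.
    #include <cstdio>
    #include <fstream>
    #include <string>

    static bool FileExists(const std::string& path) {
        return std::ifstream(path.c_str()).good();
    }

    static void RunAssetTools(const std::string& sourceAsset) {
        // Stand-in: the real pipeline invoked the converter here, which always
        // emitted the latest version of the asset format.
        std::printf("converting %s...\n", sourceAsset.c_str());
    }

    static std::string ResolveAsset(const std::string& type, int version,
                                    const std::string& name) {
        // e.g. ("sound", 2, "gunshot.snd") -> "sound_v2/gunshot.snd"
        std::string path = type + "_v" + std::to_string(version) + "/" + name;
        if (!FileExists(path))
            RunAssetTools(name); // build the missing, up-to-date version
        return path;
    }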

As an optimization, by default the tools would look to see if other team members on the network had the missing asset and, if so, would copy it from them. This saved everyone on the team from having to convert each new asset individually. The first machine checked for the asset was the central asset build server. This server automatically rebuilt assets as they were checked into Perforce, and always had the latest converted set of all assets in the game. Users could batch-copy their assets from the server if they wanted to refresh all their assets to the latest versions.

Additionally, the build server supported a simple branching system that allowed a separate set of tools and converted assets to exist. This allowed ongoing work on the engine and tools that changed asset formats, with all assets converted and tested in isolation.

Core libraries
The engine contained some commonly-used core utility code which was used throughout the game and the engine.

There was a set of math functions dealing with matrices, vectors, quaternions, trigonometry and angles, clamping and wrapping numbers between limits, and filtering numbers. There was also code to generate noise (including Perlin noise) and random numbers, with various utility functions for generating numbers in a range.

Various containers were used in different places. A pool system was often used for tracking large numbers of small objects, since it provided fast, O(1) allocation and was simple to set up. Linked lists were used to keep track of the many game objects that needed to be iterated through. Queues and stacks were also available. A generic sort routine made it easy to use efficient sorting throughout the code.

A string pool could be used to hold large numbers of strings, and a stringtable to link keys with other data. These were combined to create a dictionary class, which could be serialized and stored in plain text format. This dictionary class was used to store the data necessary to run the game. It was convenient to store this in text format during development because it made it easy to store the files in version control and merge changes together when two people were working on a file at the same time.
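
A minimal sketch of such a text-backed dictionary, assuming a simple "key value" line format; the class and method names are hypothetical, not the engine's actual API.

    // Text-backed dictionary: each line is "key value". Plain text serializes
    // cleanly into version control and merges well when two people edit a file.
    #include <fstream>
    #include <map>
    #include <sstream>
    #include <string>

    class Dictionary {
        std::map<std::string, std::string> entries_;
    public:
        bool LoadText(const std::string& path) {
            std::ifstream in(path.c_str());
            std::string line;
            while (std::getline(in, line)) {
                std::istringstream ls(line);
                std::string key, rest;
                if (ls >> key && std::getline(ls, rest)) {
                    size_t p = rest.find_first_not_of(" \t");
                    if (p != std::string::npos)
                        entries_[key] = rest.substr(p);
                    // e.g. "maxSpeed 28.5" -> entries_["maxSpeed"] == "28.5"
                }
            }
            return in.eof(); // reached end of file without a stream error
        }

        const std::string* Find(const std::string& key) const {
            auto it = entries_.find(key);
            return it == entries_.end() ? nullptr : &it->second;
        }
    };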

An assert module was used to keep track of conditions which could be harmful in code, and to warn the user when something went wrong. These asserts were disabled in release builds, but made debugging significantly easier during development.

Localization plan / system
UTF-8 encoding was used for the language files. This allowed Unicode text to be represented whilst keeping the memory footprint as small as possible. The text was stored in Excel-loadable spreadsheets, which eased the localization process. A command-line tool was used to convert from the spreadsheet to an in-game format. The spreadsheet could then be sent for localization. When a new localized version was returned, the normal game build process would produce the new versions of the in-game strings automatically.

When the strings in the spreadsheet were associated with an audio sample (subtitles), information about the sound ID was stored with the strings. This made editing strings, audio and localization easier by providing a central file to tie them all together.

Localization tools were also available, which allowed an external localization team to change the spreadsheet or localized audio samples and preview them on a build. This meant localization and localization testing could be done by a team external to Free Radical Design.

TRC compliance plan
Several tools were planned to make it easier to test and debug TRC issues.

An in-game display of TV safe areas was used to make sure important information was only displayed within the appropriate area of the screen.
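
A sketch of such an overlay; the 90% (action-safe) and 80% (title-safe) fractions are common conventions assumed here, not figures taken from the TRCs.

    // TV safe-area overlay: draw centered rectangles at assumed safe fractions
    // of the screen so HUD placement can be checked against them.
    #include <cstdio>

    static void DebugDrawRect(int x, int y, int w, int h) {
        std::printf("rect %d,%d %dx%d\n", x, y, w, h); // stand-in for the overlay draw
    }

    void DrawSafeAreas(int screenW, int screenH) {
        const float fractions[2] = { 0.90f, 0.80f }; // action-safe, title-safe (assumed)
        for (int i = 0; i < 2; ++i) {
            int w = static_cast<int>(screenW * fractions[i]);
            int h = static_cast<int>(screenH * fractions[i]);
            DebugDrawRect((screenW - w) / 2, (screenH - h) / 2, w, h); // centered
        }
    }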

Memory card debugging code was also planned, to simulate different actions being performed while the game was running. The code would simulate memory card insertion and removal; other actions were added depending on what the TRCs contained.

Debugging and testing tools
Debugging code was built into in-game systems as they were added to the game. Hooks to this debug code were added to runtime debug systems (an in-game menu, a debug manager, and various debug overlays).

The in-game menu could be used to turn systems on and off, and to adjust parameters. It could easily be used by people with little technical expertise to alter settings, but required specific code to be written for each use.

The debug manager allowed systems to be turned on and off and tweaked using a text file. This was mostly used by programmers to alter settings, since it was easily reloaded and added to.

Debug overlays were used to convey information about how particular systems were working. A memory overlay could be used to see how much memory was used by different subsystems, and an audio overlay showed which sounds and music were currently playing.

An in-game profiler allowed game code profiling while the game was running. This often proved useful for optimizations, since it could be used to work out in detail why a particular level was running slowly.
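
A minimal sketch of a scoped profiler of this sort; the names and the output mechanism are illustrative, not the engine's actual profiler.

    // Scoped timer: records how long a named block takes each time it runs,
    // which is the core mechanism behind most in-game profilers.
    #include <chrono>
    #include <cstdio>

    class ScopedProfile {
        const char* name_;
        std::chrono::steady_clock::time_point start_;
    public:
        explicit ScopedProfile(const char* name)
            : name_(name), start_(std::chrono::steady_clock::now()) {}
        ~ScopedProfile() {
            auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                std::chrono::steady_clock::now() - start_).count();
            std::printf("%s: %lld us\n", name_, static_cast<long long>(us));
        }
    };

    void UpdatePhysics() {
        ScopedProfile p("physics"); // timed for the duration of this scope
        // ... physics update ...
    }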

Vista
Windows Vista OS compatibility was added to the engine, and Vista certification was gained. 64-bit compatibility was required, but this was not a difficult task because the 32-bit application could still be run under 64-bit Windows.

Games for Windows
This was available for both Vista and XP, and was designed to integrate games better and more consistently into Windows. It required Games Explorer integration, which allowed games to be launched easily, and Rich Saved Games, where the save game file included information and an image describing the status of the game (e.g. campaign level reached).

Additional PC features
A game launcher was written that appeared when the DVD was inserted, with several options including Play, Install, Help, and Dedicated Server. The game installer would install the game onto the hard disk. The installation software was InstallShield 2008 Premier Edition.

In-House Tools
The following tools were used to get working assets in-game.
 * The Stage / Quickviewer – this was the main preview tool used by all departments during game development. It provided a simple way to visualize an art asset, both for checking construction conformance and shader setup, and helped assess performance implications in a more controlled setting. Artists could preview their art in this tool from a single button within Maya. Textures could be changed dynamically from within Maya or Photoshop and would automatically update in the stage, allowing instant appraisal of changes.

The stage provided a host of extra features for checking art, such as disabling shader passes. Several preset lighting setups could be flicked through, and custom lights could be added. Parts of the model could be turned on and off to check the meshes, and animations could be played.


 * GUI Editor – the GUI editor could be used to define all in-game menus (both the frontend and actual game menus).
 * Editor – The editor could be used to tune different aspects of the setup of the game. AI (placing and scripting characters, editing navigation information), and props (placing props in the level, scripting their behaviour, and altering their properties) could be changed. The editor was built on top of the game framework code, which made it possible for designers to test their work without exiting the editor and starting the game.


 * Profiler – The game featured built-in profiling code, which was designed to allow measuring of code performance and tuning it accordingly. This worked on PC, PS3 and Xbox 360, and was built in a platform-independent manner to allow porting to different platforms.


 * Bacon lightmapper – A program to generate lightmaps based on Monte Carlo integration. Artists ran this tool, which allowed lighting to be baked onto the background from lights placed in the scene. The lightmapper could be distributed across a number of machines, which reduced the length of time it took to make a lightmap.


 * World editor – This package integrated three editing tools (the art editor, the terrain editor, and the particle editor) with a shared GUI.

Multiple users were not able to edit the same terrain or background at the same time, although they could edit the background components separately (in Maya). This was not a problem: even though multiple artists worked together on a background, generally only one person was responsible for the terrain heightmap and for assembling the components.