The Tactical Point System (TPS):
The language is simple, powerful, and designed to be very readable. In a (simplified) example:
hidespots_from_attentionTarget_around_puppet = 7
coverSuperior = true, canReachBefore_the_attentionTarget = true
distance_from_puppet = -1
This means: generate hidespots from the attention target within 7 metres of the puppet; discard any point that is not superior cover, or that the puppet cannot reach before its attention target; of those remaining, prefer the point nearest the puppet.
TPS uses a highly efficient method to rank points, keeping expensive operations like raycasts and pathfinding to an absolute minimum.
The Tactical Point System (TPS) provides the AI system with a powerful method of querying the environment for places of interest.
The main features include:
In terms of the code itself, rather than its use, some of the advantages of the framework include:
A summary of the stages in definition and execution of a TPS query.
Note that only stages 3 and 4 are significant to performance, chiefly 4 (point evaluation).
There are some possible execution-flow-dependent optimizations, such as caching relevant query results between fallback queries.
This is provided to allow full use of the system from other C++ code, and Lua queries are translated through it. In fact there are two C++ interfaces:
// Check for a shooter near cover using the TPS
static const char *sQueryName = "CHitMiss::GetAccuracy";
ITacticalPointSystem *pTPS = gEnv->pAISystem->GetTacticalPointSystem();

// Look up the query by name; it is built once and reused on later calls
int iQueryId = pTPS->GetQueryID( sQueryName );
if ( iQueryId == 0 )
{
	// Need to create the query
	iQueryId = pTPS->CreateQueryID( sQueryName );
	pTPS->AddToGeneration( iQueryId, "hidespots_from_attentionTarget_around_puppet", 3.0f );
	pTPS->AddToWeights( iQueryId, "distance_from_puppet", -1.0f );
}

Vec3 vHidePos( ZERO );
bool bIsValidHidePos = false;
pTPS->Query( iQueryId, CastToIPuppetSafe( pShooter->GetAI() ), vHidePos, bIsValidHidePos );
Some parsing is clearly taking place here, and it is crucial to the generality of the system.
Here are some simple examples and their explanations – a proper grammar is given later.
option.AddToGeneration("hidespots_from_attTarget_around_puppet", 50.0)
This is given as a generation criterion and supplied with a float, which in this case represents a distance.
The string breaks into five words. "hidespots" indicates we should generate points behind known cover in the conventional manner. "from" is just a glue word to aid readability. "attTarget" specifies from whom we are hiding. "around" is, again, glue. "puppet" specifies where we will centre our point generation. The supplied float gives the radius around that point within which points should be generated.
Note that no raycasts are performed at this stage, and that we have here considerable flexibility in, for example, hiding from the player somewhere around the player, hiding from him somewhere near us, hiding near a friend but from the player, and indeed specifying some completely different target to hide from, such as an imagined player position, or the girl, etc. By allowing some flexibility at the generation stage we can make more powerful queries and let the user focus our computations in the right general area.
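For illustration, a few of those variants sketched through the C++ interface shown earlier (reusing pTPS and iQueryId from that example; the exact query strings are assumptions formed from the grammar given below):

pTPS->AddToGeneration( iQueryId, "hidespots_from_player_around_player", 20.0f );   // hide from the player, somewhere around the player
pTPS->AddToGeneration( iQueryId, "hidespots_from_player_around_puppet", 15.0f );   // hide from the player, somewhere near us
pTPS->AddToGeneration( iQueryId, "hidespots_from_referencePoint_around_puppet", 10.0f ); // hide from an imagined position stored in the reference point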
option2.AddToConditions("visible_from_player",true)
Given as a condition, so we can assume a Boolean result. Here we parse "visible", which specifies a kind of raytest; "from" is glue; "player" specifies whom to raycast to (from the generated point). Here we specify that the point must be visible, which is curious but perfectly valid.
option2.AddToConditions("max_coverDensity",3.0)
Given as a condition, so we can assume a Boolean result. Here we parse "max", which specifies that the resulting value must be compared against the given value and must be less than it for the point to be considered further. "coverDensity" specifies a density query (to be implemented) which could measure the density of various things, such as cover objects, friendly AIs, etc.
option1.AddToWeights("distance_from_puppet",-1.0)
Given as a weight, so the result of the query will be normalized to 0-1 as required. Boolean queries are also allowed, returning 0.0 for false and 1.0 for true, and thus allow expressing preference (e.g. for Primary over Secondary cover).
"distance" specifies the type of query we will perform. "from" is glue. "puppet" is required to tell us where we are measuring distance from – it could equally be the player, the girl, etc.
In Lua-land the way to use the TPS is through a ScriptBind.
Both methods define query specifications using the same table structure. For example:
Hide_FindSoftNearby =
{
    -- Find the nearest soft-cover hidespot at a distance of 5-15 metres,
    -- biasing strongly towards cover density
    {
        Generation = { hidespots_from_attentionTarget_around_puppet = 15 },
        Conditions = { coverSoft = true,
                       visible_from_player = false,
                       max_distance_from_puppet = 15,
                       min_distance_from_puppet = 5 },
        Weights    = { distance_from_puppet = -1.0,
                       coverDensity = 2.0 },
    },
    -- Or extend the range to 30 metres and just accept the nearest point
    {
        Generation = { hidespots_from_attentionTarget_around_puppet = 30 },
        Weights    = { distance_from_puppet = -1.0 },
    }
}
AI.RegisterTacticalPointQuery( Hide_FindSoftNearby );
Note that registering a query returns a query id that then refers to this stored query.
AI.GetTacticalPoints( entityId, tacPointSpec, pointsTable, nPoints )
Performs a query using an existing specification. See Scriptbind_AI.h comments for details.
There are ways to define a query in both C++ and Lua (and potentially in XML), but the same core syntax is used. Here we formally define this language, taking the top-level units to be the Generation, Conditions, or Weights that can be given. The semantics are defined in the following section. Non-terminal symbols are capitalized; terminal symbols appear in quotes. A few of the symbols are not yet implemented and are present just for illustration.
Grammar:
Generator ::= GenerationQuery '_' 'around' '_' Object
Condition ::= BoolQuery | (Limit '_' RealQuery)
Weight ::= BoolQuery | (Limit '_' RealQuery) | RealQuery
GenerationQuery ::= ( 'hidespots' '_' Glue '_' Object)
| 'grid' | 'indoor'
BoolQuery ::= BoolProperty | (Test '_' Glue '_' Object)
BoolProperty ::= 'coverSoft' | 'coverSuperior' | 'coverInferior' | 'currentlyUsedObject' | 'crossesLineOfFire'
Test ::= 'visible' | 'towards' | 'canReachBefore' | 'reachable'
RealQuery ::= RealProperty | (Measure '_' Glue '_' Object)
RealProperty ::= 'coverRadius' | 'cameraVisibility' | 'cameraCenter'
Measure ::= 'distance' | 'changeInDistance' | 'distanceInDirection' | 'distanceLeft' | 'directness' | 'dot' | 'objectsDot' | 'hostilesDistance'
Glue ::= 'from' | 'to' | 'at' | 'the'
Limit ::= 'min' | 'max'
Object ::= 'puppet' | 'attentionTarget' | 'referencePoint' | 'player'
| 'currentFormationRef' | 'leader' | 'lastOp'
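To make the parsing concrete: a query word is simply split on underscores and matched against these productions. A toy sketch of the splitting step (not the engine's actual parser):

#include <sstream>
#include <string>
#include <vector>

// Split a query word such as "hidespots_from_attentionTarget_around_puppet"
// into its component symbols for matching against the grammar.
static std::vector<std::string> SplitQueryWord( const std::string& word )
{
	std::vector<std::string> symbols;
	std::istringstream stream( word );
	std::string symbol;
	while (std::getline( stream, symbol, '_' ))
		symbols.push_back( symbol );
	return symbols;
}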
There are a few different ways queries can fail, and it's important to understand how each case is handled.
Firstly, where no points matched the conditions and so none can be returned, this is not a failure, but a valid result - we can carry on to fallback queries, or the AI can try a different behaviour.
Otherwise, broadly speaking: where a query doesn't make sense in the context of an individual point, this isn't an error and we try to return the "least surprising" result; but where the context of the puppet means that a query will fail on all points, this is an error, and the query should not have been made.
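As a sketch of the first case from the caller's side, reusing the interface from the earlier C++ example (pPuppet stands for the querying puppet's IPuppet pointer):

Vec3 vHidePos( ZERO );
bool bIsValidHidePos = false;
pTPS->Query( iQueryId, pPuppet, vHidePos, bIsValidHidePos );
if ( !bIsValidHidePos )
{
	// Not an error: no point matched the conditions.
	// Fall through to a fallback query, or let the AI try another behaviour.
}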
The failure modes are:
The Generator stage:
Purpose: To generate a set of points for consideration.
Input: Specifications of what kind of points we're interested in. We need a central position to find points around – e.g. the puppet itself, the attention target, a given position, etc. For some generation queries we also need a secondary object – e.g. a target to hide from.
Process: We start by generating points to consider. They fall into two types: hidepoints, generated behind known cover objects, and open points, generated in a grid or radially around an object. Multiple point generation criteria can be used, for instance generating around both myself and my attention target (see the sketch below).
Output: A list of point objects, giving position and simple relevant metadata (such as any associated hide object).
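For instance, combining two generation criteria through the C++ interface might look like this (a sketch reusing pTPS and iQueryId from the earlier example; the second query string is an assumption formed from the grammar above):

pTPS->AddToGeneration( iQueryId, "hidespots_from_attentionTarget_around_puppet", 10.0f );
pTPS->AddToGeneration( iQueryId, "hidespots_from_attentionTarget_around_attentionTarget", 10.0f );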
The point evaluation stage:
Purpose: To test the points against the specification and find an adequately good point as fast as possible. Often "adequately good" can mean "the best", but there is a lot of potential for optimization if we allow a user-specified degree of uncertainty.
Input: The list of candidate points from the Generator, plus the point evaluation criteria, which are of two types: Conditions and Weights.
Here we try to establish the "fitness" of each point – that is, how well it matches the desired criteria. We wish to choose one good point, or the top n good points.
The order of evaluation is crucial to efficiency and is non-trivial. I hope this description at least gives the flavour of it.
The rough scheme is:
1. Evaluate the cheap Conditions on all generated points, discarding any point that fails.
2. Evaluate the cheap Weights on the remaining points, giving each a provisional fitness.
3. Evaluate the expensive Conditions on the points in order of provisional fitness, discarding failures.
4. Evaluate the expensive Weights, refining the fitness until the best point is known.
The strategy behind this is to minimize the number of expensive evaluations.
It turns out that we can go further with this, by interleaving the last two steps and making evaluation order completely dynamic.
Unlike Conditions, Weights don't explicitly discount points from further evaluation. However, by tracking the relative "fitness" of points during evaluation, we can still employ Weights to dramatically reduce the number of evaluations.
The implementation is based on a heap structure that orders points according to their maximum possible fitness and tracks the evaluation progress of each point separately. Each Weight evaluation collapses some of the uncertainty around the point, adjusting both the minimum and maximum possible fitness. If the Weight evaluation scored highly, the maximum will decrease a little and the minimum increase a lot; if it scored badly the maximum will decrease a lot and the minimum increase a little.
In each iteration we do the next most expensive evaluation on the point at the top of the heap and then re-sort it into place if necessary. If it occurs that we've completed all evaluations on a point, and it still has the maximum possible fitness, then it must be the optimal point. In this way we can tend towards evaluation of the optimal point with relatively few evaluations on all the others.
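A minimal sketch of that loop, assuming each Weight evaluation returns a score in [0,1] and contributes a known fraction of the total fitness; all names here are illustrative, not the engine's own:

#include <algorithm>
#include <vector>

struct PointRecord
{
	float minFitness = 0.0f;  // fitness guaranteed so far
	float maxFitness = 1.0f;  // best fitness still possible
	int   nextEval   = 0;     // next (cheapest remaining) Weight to run
};

// Placeholder: run Weight evaluation 'eval' on a point, returning a score in [0,1].
static float RunWeight( const PointRecord& point, int eval ) { return 0.5f; }

// Placeholder: the fraction of total fitness that Weight 'eval' contributes.
static float WeightFraction( int eval ) { return 0.25f; }

// Returns the index of the provably optimal point (assumes at least one point).
static int FindBestPoint( std::vector<PointRecord>& points, int numEvals )
{
	for (;;)
	{
		// Take the point with the highest possible fitness (a real
		// implementation would keep a heap rather than scanning).
		auto top = std::max_element( points.begin(), points.end(),
			[]( const PointRecord& a, const PointRecord& b )
			{ return a.maxFitness < b.maxFitness; } );

		// Fully evaluated and still on top: no other point can beat it.
		if (top->nextEval == numEvals)
			return int( top - points.begin() );

		// Run the next evaluation, collapsing some of the uncertainty.
		const float score = RunWeight( *top, top->nextEval );
		const float part  = WeightFraction( top->nextEval );
		top->maxFitness -= (1.0f - score) * part;  // a low score drops the maximum a lot
		top->minFitness += score * part;           // a high score raises the minimum a lot
		++top->nextEval;
	}
}

With a heap in place of the linear scan, only the freshly evaluated point needs re-sorting into place, which is what keeps each iteration cheap.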
Output: A point, or n points, with the potential to request all the metadata that led to its choice. Hence behaviors can adapt their style to reflect the nature of the hidepoint they received.
From inside the Modular Behavior Tree, one can use the <QueryTPS> node to call predefined TPS queries by name. If the query succeeds, the <QueryTPS> node returns Success, otherwise Failure. The most common usage pattern is to use <QueryTPS> in conjunction with the <Move> node inside a <Sequence>.
<Sequence>
<QueryTPS name="SDKGrunt_TargetPositionOnNavMesh" register="RefPoint"/>
<Move to="RefPoint" speed="Run" stance="Alerted" fireMode="Aim" avoidDangers="0"/>
</Sequence>
Here, we're making a call to the predefined TPS query "SDKGrunt_TargetPositionOnNavMesh" and if it succeeds, we will move to the queried position. That TPS query could look like this:
AI.RegisterTacticalPointQuery({
Name = "SDKGrunt_TargetPositionOnNavMesh",
{
Generation =
{
pointsInNavigationMesh_around_attentionTarget = 20.0
},
Conditions =
{
},
Weights =
{
distance_to_attentionTarget = -1.0
},
},
});
This is more about one possible use of the TPS than future plans for the implementation.
Rather than just using the TPS to choose a point and go to it, there is potential for some nice environmental deduction based on its results.
For example: the player runs around a corner, and when the AI puppet follows, the player is no longer visible. The puppet queries the TPS for places it would choose to hide from itself – those points are good candidates for where the player has gone.
Such deductions obviously require some extra code, but they become much easier when built upon the TPS.
When generating points in the open, I have so far only proposed that we generate points in a grid or radially around objects and treat each point individually. This is a very trivial sampling method. Where an area must be sampled, we can usually assume some kind of coherency in the evaluation functions, and so could use adaptive sampling approaches instead.
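One possible shape for such an approach, as a rough sketch: sample a coarse grid, then recursively refine around the most promising sample, relying on that coherency assumption. Everything here is illustrative, including the placeholder fitness function:

struct Sample { float x, y, fitness; };

// Placeholder for the combined Weight evaluation at a position.
static float Fitness( float x, float y ) { return -(x * x + y * y); }

// Sample a 3x3 grid of spacing 'step' around (cx, cy), then recurse around
// the best sample with halved spacing until 'depth' is exhausted.
static Sample AdaptiveSample( float cx, float cy, float step, int depth )
{
	Sample best = { cx, cy, Fitness( cx, cy ) };
	for (int i = -1; i <= 1; ++i)
	{
		for (int j = -1; j <= 1; ++j)
		{
			const float x = cx + i * step;
			const float y = cy + j * step;
			const float f = Fitness( x, y );
			if (f > best.fitness)
				best = { x, y, f };
		}
	}
	if (depth == 0)
		return best;
	return AdaptiveSample( best.x, best.y, step * 0.5f, depth - 1 );
}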
The most crucial aspect of optimizing the TPS will actually be adjusting the relative expense function of queries. The costs of evaluations will vary across platforms, different levels, and even locations within levels, as well as changing over time as the code changes. Making sure that the order is correct is crucial, or more expensive evaluations will be favored over cheaper ones.
Essentially, we will need to profile the evaluation functions in all these different circumstances – which suggests an automatic profiling solution at run-time.
Ideally, the relative weighting of Weight criteria should also be considered – that is, a cheaper query may not be worth doing first if it only contributes 10% of the final fitness value, and an expensive query that contributes 90% may actually save many other evaluations.
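One plausible heuristic, sketched below: order evaluations by how much of the final fitness they resolve per unit of profiled cost. Both field names are made up for illustration:

// Order evaluations by fitness contribution resolved per unit of cost.
struct EvaluationInfo
{
	float contributionFraction;  // share of the final fitness value
	float averageCost;           // profiled average cost of running it
};

static bool MoreValuable( const EvaluationInfo& a, const EvaluationInfo& b )
{
	return a.contributionFraction / a.averageCost >
	       b.contributionFraction / b.averageCost;
}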
As described earlier, while evaluating points we always know both the maximum and minimum potential fitness at every stage – error bounds, or the relative uncertainty we have about the point.
We could relax the optimality constraint and accept a point when we know that no other point could be significantly better. For example, we might accept a point whose minimum potential fitness is no more than, say, 5% below the maximum potential fitness of the next best point. Hence we might stop evaluation early and yield a further performance saving.
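As a sketch, using the fitness bounds tracked by the heap described earlier (the 5% tolerance is just the example figure above):

// Accept the current best point early if no other point can beat it
// by more than 'tolerance' of that point's maximum possible fitness.
static bool AcceptEarly( float bestMinFitness, float runnerUpMaxFitness, float tolerance = 0.05f )
{
	return bestMinFitness >= runnerUpMaxFitness * (1.0f - tolerance);
}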