The last couple of weeks have been going pretty well, aside from a tendency to forget to write about what I've been up to. Anyways, here're the core systems that've been added over the last couple of weeks. I'll go through each of them in turn to give more of an idea of what they're all about:
- BDI Agent model on guards
- STRIPS planning algorithm on guards
- Sight, sound and smell sensory systems
- Navmesh navigation
- Weapon & Inventory mechanics
- Arrow physics & sticking
- Basic GUI layout
BDI: Belief, Desire, Intention
This is a type of software model for creating the impression of intelligent agents (individuals who act, like guards or the beast in this game) in AI programming. Wikipedia (n.d.) gives a nice overview of the topic, but I'll sum it up here. At its core, the model relies on the agent having an awareness of its surroundings based on the following.
- Beliefs, or knowledge about the environment. What's key here is that the agents don't necessarily know the truth; they are only aware of what their senses perceive. For example, a guard might spot the player in the distance, who then runs around the corner. The guard's knowledge of the player's exact location ceases once they're out of line of sight, but the guard remains aware of the player's approximate location, with a steadily decreasing certainty.
- Desires, or the end goals. These are circumstances that the agents would like to bring about, but which usually require other goals to be accomplished first. This is because each desire relies on a series of:
- Intentions, or the agent's choice of action for the current situation. The agent is given a large number of actions to draw on in order to fulfill their desires. How they go about choosing a certain action will be covered shortly, but at its core, an intention is a type of action given to the guard. Examples might be 'Patrol the current area', 'Get help from a friend', or 'Attack the player'.
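The three parts above can be sketched in miniature. This is illustrative Python rather than the game's actual (Unity) code, and the decay rate, positions, and action names are made-up numbers:

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    """A fact the agent holds about the world, with a certainty that
    decays once the subject is no longer directly observed."""
    subject: str
    position: tuple
    certainty: float = 1.0

    def decay(self, rate: float, dt: float) -> None:
        # certainty falls toward zero while the subject goes unseen
        self.certainty = max(0.0, self.certainty - rate * dt)

@dataclass
class Agent:
    beliefs: dict = field(default_factory=dict)
    desires: list = field(default_factory=list)  # e.g. ["kill the player"]
    intention: str = "patrol the area"           # the currently chosen action

    def observe(self, subject: str, position: tuple) -> None:
        # a direct observation refreshes the belief at full certainty
        self.beliefs[subject] = Belief(subject, position)

guard = Agent(desires=["kill the player"])
guard.observe("player", (10, 0, 4))              # guard spots the player...
guard.beliefs["player"].decay(rate=0.1, dt=3.0)  # ...who breaks line of sight
print(guard.beliefs["player"].certainty)         # roughly 0.7 after 3 seconds
```

The point of the structure is that the guard acts on `beliefs`, never on the true game state directly.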
STRIPS: Stanford Research Institute Problem Solver
Although the planning algorithm I ended up writing is fairly original, its core structure is based on the STRIPS algorithm (Wikipedia, n.d.), which plans in terms of individual conditions. The article linked above gives a good (if somewhat code-heavy) overview of the system, but I'll try my best to explain it in less technical terms.
Each agent is given a database of actions, each with a precondition and a postcondition (what I need before I can take this action, and what'll be true once I've done it). One example might be the action 'Approach the player'. The precondition in this case would be 'I can see the player', and the postcondition would be 'I have gotten closer to the player'.
Using this database, the agent can then use an algorithm to find what its current objectives should be based on the BDI model described above. I can feed in the desire 'Kill the player', and depending on how I set up the guard, their intention (or current action) will become 'patrol the area until I spot something suspicious'.
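A minimal sketch of this kind of condition-based planning, in Python rather than the game's own code. The action names and fact strings are invented for illustration, and a real STRIPS planner works over typed predicates rather than plain strings:

```python
from collections import deque

# Each action pairs a precondition set with a postcondition set over plain
# string 'facts'. Names and facts here are invented for illustration.
ACTIONS = [
    ("patrol the area",     set(),               {"player spotted"}),
    ("approach the player", {"player spotted"},  {"player in reach"}),
    ("attack the player",   {"player in reach"}, {"player dead"}),
]

def plan(state, goal):
    """Breadth-first search from the current facts to a goal fact."""
    queue = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while queue:
        facts, steps = queue.popleft()
        if goal in facts:
            return steps
        for name, pre, post in ACTIONS:
            if pre <= facts:  # preconditions satisfied
                nxt = frozenset(facts | post)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None  # no plan reaches the goal

print(plan(set(), "player dead"))
# ['patrol the area', 'approach the player', 'attack the player']
```

Feeding in the desire 'player dead' with no current beliefs yields 'patrol the area' as the first intention, which matches the behaviour described above.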
So far I've only implemented this simplified version, with a single path to the end goal. Over the next week or two I'll be adding in dynamic planning and multi-agent interaction. What's great about this type of system is that it should be able to react in real time to the world around it, and plan new actions accordingly based on multiple factors - the game's settings, the agent's health, their fear level, their previous activities, and so on.
Sight, Sound and Smell
Using the belief model mentioned above, I've constructed a database of known 'facts' that has been given to the guards. At the moment, they have a single basic viewcone (although I may expand on this at a later date, to be more sensitive to things like motion) for their sense of sight.
If an object is in line of sight, the belief about its position is almost certain (though it depends on how well-lit the object is). Once the object has left the line of sight, the certainty of its position gradually falls, at a rate depending on the type of object. For example, a guard will remain almost certain of a crate's location, but the longer they go without spotting the player, the less certain they become of the player's position.
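That certainty rule might look something like this; the decay rates, and the simplification of using the light level directly as the in-sight certainty, are my own illustrative choices, not the game's actual tuning:

```python
# Illustrative decay rates: static objects stay near-certain for a long
# time, while a mobile player is forgotten quickly.
DECAY_PER_SECOND = {"crate": 0.001, "player": 0.05}

def update_certainty(certainty, obj_type, in_sight, light_level, dt):
    """One frame of the certainty update described above."""
    if in_sight:
        # in line of sight, certainty tracks how well-lit the object is
        return light_level
    return max(0.0, certainty - DECAY_PER_SECOND[obj_type] * dt)

# ten seconds after losing sight of the player:
print(update_certainty(1.0, "player", in_sight=False, light_level=0.0, dt=10.0))
# falls to 0.5
```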
In the above screenshot, the white wired sphere represents the player's noise level. They've just stepped on a metal floor, and so their area of influence has increased. If a guard were standing within this sphere, they would become alerted to the player's position, although they would only investigate if the noise were nearby or repeated often enough. It's hard to show this in action, but it's working so far.
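The noise-sphere check itself is simple; a sketch, assuming the radius is looked up from the surface type (the surface names and radii here are placeholders):

```python
import math

# Placeholder noise radii per surface type
NOISE_RADIUS = {"carpet": 1.0, "stone": 4.0, "metal": 8.0}

def hears(guard_pos, noise_pos, surface):
    """A guard standing inside the noise sphere becomes alerted."""
    return math.dist(guard_pos, noise_pos) <= NOISE_RADIUS[surface]

print(hears((5, 0, 0), (0, 0, 0), "metal"))   # True: a metal step carries
print(hears((5, 0, 0), (0, 0, 0), "carpet"))  # False
```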
Also visible in the screenshot is a series of small clouds linked by a green line. Though currently unused, this is the player's scent trail, which the beast will be able to pick up on and follow. All guards will also leave a scent trail, which will fade over time, but can be enough for the beast to track them if left recently enough.
By using this sensory input, an agent's belief database can be populated with tons of information about the world around it. What I'm hoping to do with this is create the possibility for an agent to be surprised. For example, if a guard is certain of a crate's location, but comes back to find that it's been moved by the player, that will trigger a suspicious response and cause them to investigate the area. They will react similarly to changes in the states of objects, for example torches that have been extinguished, or allies that have been killed or incapacitated by the player or the beast.
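The surprise check could be as simple as comparing a high-certainty belief against a fresh observation; a sketch with invented tolerance and threshold values:

```python
import math

def is_surprised(belief_pos, observed_pos, certainty,
                 tolerance=0.5, threshold=0.9):
    """Surprise: a high-certainty belief contradicted by a fresh
    observation, e.g. a crate the player has quietly moved."""
    moved = math.dist(belief_pos, observed_pos) > tolerance
    return moved and certainty > threshold

# the guard was near-certain the crate sat at the origin...
print(is_surprised((0, 0, 0), (3, 0, 0), certainty=0.95))  # True -> investigate
```

A low-certainty belief that turns out wrong triggers nothing, which is exactly the desired behaviour: you can only be surprised by something you were sure about.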
Navmesh Navigation
Put simply, the navmesh is what enables the agents to navigate through the terrain. So far it works on a simple level, with agents avoiding objects and traveling around them, but I've done some experiments with linked meshes, to allow agents to do things like climb over obstructing objects.
Weapon and Inventory Mechanics
The player can pick up and loot objects, weapons, and readable items. The simplified mechanics are that they can scroll through their inventory using the mouse wheel, and use the number keys 1-9 to choose their weapon. How many of these end up in the final product will be subject to testing and refinement, but so far the weapons include:
- Sword, for melee combat.
- Cudgel, for knocking unaware enemies out.
- Broadhead arrows, for inflicting damage or distracting enemies.
- Water arrows, for extinguishing torches.
- Rope arrows, for dropping a rope that allows the player to climb up.
So far the items only include a compass, but I'd like to add some kind of throwable to cause either distraction or harm, such as bottles or rocks.
As with many programming systems, the hard part is laying the groundwork and structures, so more item implementation should follow at a later date.
Arrow Physics and Sticking
As the bow will be the player's main weapon, I felt it was important to ensure that arrows worked well. The player holds down the mouse button to shoot an arrow, and holding it for longer increases the draw strength. The longer the arrow is drawn, the faster it flies (and thus, the farther it can travel).
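A plausible shape for the hold-to-draw mapping; the speeds and full-draw time are placeholders, as the post doesn't give actual values:

```python
def arrow_speed(hold_time, min_speed=8.0, max_speed=40.0, full_draw=1.5):
    """Launch speed scales linearly with hold time, clamped at full draw."""
    t = min(hold_time, full_draw) / full_draw
    return min_speed + t * (max_speed - min_speed)

print(arrow_speed(0.0))  # 8.0  -- a quick tap
print(arrow_speed(3.0))  # 40.0 -- held past a full draw, clamped
```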
Arrows are also point-heavy, as in real life: they'll tend to arc with the point angled towards the ground (although in real life this is due more to the drag caused by the fletching than to the weight of the tip itself).
Arrows will also stick into appropriate materials, based on the same system that governs footstep sounds. At the moment this is limited to wood and (...ulp!) flesh, but the mechanic itself is working well so far. The speed at which the arrow is traveling governs how far it will embed itself, which makes for a semi-realistic approach. I'd also like to factor the hardness of the material into this equation, but that's low on my priorities.
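The speed-to-depth relation could be as simple as a clamped linear map; the numbers are placeholders, and the `hardness` divisor stands in for the not-yet-implemented material extension:

```python
def embed_depth(impact_speed, max_depth=0.3, full_speed=40.0, hardness=1.0):
    """Embed distance scales with impact speed, capped at a maximum;
    the hardness divisor is the not-yet-implemented extension."""
    return max_depth * min(impact_speed / full_speed, 1.0) / hardness

print(embed_depth(20.0))  # 0.15 -- half speed, half depth
```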
Basic GUI Layout
As with a great many of this game's mechanics, the GUI has been created in homage to the original Thief games, and its current layout is very reminiscent of theirs.
I wanted to include a 3D-based GUI, because it seemed very cool in some prototypes. I like that the GUI reacts to the player's surroundings, reflecting lights and behaving physically, as an item in the player's inventory would. For example, to view the compass the player needs to be in a relatively bright location, as well as looking down.
Combining this with Unity's GUI system wasn't easy, but I've managed to find a neat way to automatically scale 3D objects to the confines of a screen-based rectangle. This has been combined with a couple of moving on-screen effects, like displays over items to show when they can be looted.
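For a perspective camera, fitting a 3D object to a screen rectangle comes down to the frustum height at the object's distance. A sketch of the underlying math only (not the game's actual code), with an assumed vertical field of view:

```python
import math

def world_height_for_rect(rect_frac, distance, vfov_deg=60.0):
    """World-space height that makes an object `distance` units from a
    perspective camera span `rect_frac` of the screen's height."""
    frustum_height = 2.0 * distance * math.tan(math.radians(vfov_deg) / 2.0)
    return rect_frac * frustum_height

# an object half a unit from the camera, filling 20% of a 60-degree view:
print(round(world_height_for_rect(0.2, 0.5), 4))  # 0.1155
```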
The shields below each represent 10% of the player's health, and they dissolve based on the player's current health. In the screenshot below, the player has about 75% health.
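The ten-shield display reduces to a small per-shield fill computation. A sketch of one way to do it; the half-dissolved eighth shield at 75% health is my reading of the description:

```python
def shield_states(health_pct, shields=10):
    """Per-shield fill amounts: 1.0 is intact, 0.0 fully dissolved."""
    slice_pct = 100.0 / shields  # each shield covers 10% of the bar
    return [max(0.0, min(1.0, health_pct / slice_pct - i))
            for i in range(shields)]

print(shield_states(75))
# [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.0, 0.0]
```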
All in all...
It's been a while since my last update (hence the long read, apologies!), but I'm happy with progress overall. A lot of the work over the last couple of weeks has been in laying groundwork for things to come, so while it's been a bit frustrating to have so little visual progress, it means that I should have plenty to showcase over the next few weeks.
My plans for the immediate future are to give the player some animated arms so that I can begin on implementing the melee weapons, readables, and throwables.
I'll endeavor to post more often, but in the meantime, here's a summary of the last few weeks of coding.
Belief–desire–intention software model. (n.d.). In Wikipedia. Retrieved March 11, 2015, from http://en.wikipedia.org/wiki/Belief–desire–intention_software_model
Drexel University. (n.d.). Chapter 2: Belief, Desire, Intention (BDI). Retrieved from https://www.cs.drexel.edu/~greenie/cs510/bdilogic.pdf
Rao, A. & Georgeff, M. (1995). BDI Agents: From Theory to Practice. Retrieved from http://www.agent.ai/doc/upload/200302/rao95.pdf
STRIPS. (n.d.). In Wikipedia. Retrieved March 11, 2015, from http://en.wikipedia.org/wiki/STRIPS