Nemeton's End Results, Torchbug Round 2, and FTI

The last couple of months have been interesting. While there's been plenty to write about, there hasn't been a lot of time to write it up. So here's a bit of a summary of what's been going on.

Nemeton's End

The NE development finished up on time for submission, and I was pretty happy with the overall result. There's a lot more that could have been done, and quite a few features that had to be cut in the last couple of weeks of development, largely due to my own oversights in scope. To put it in the words of my lecturer in the submission feedback (with which I wholeheartedly agree):

The game itself is full of interesting ideas, though from an end-user perspective this is perhaps a rare case in which the whole does not quite match up to the sum of its parts
— Hadyn Lander

In hindsight, I spent far too long in the early weeks prototyping interesting systems that didn't add enough to the end product to justify the time spent on them. Things like the planning algorithms ended up being replaced in later weeks by simpler, more efficient state machines. While the process of figuring out how to write them was definitely beneficial, it didn't result in a better game.

So where is it?

Still getting polished is the short answer. While I'm happy with the submission build, there're a couple more things I'd like to finish off before putting the game up for public download. Some are aesthetic, like extra assets from Jordan, but others are more core to the initial game scope - namely a scripted endgame requirement.

Expect some more within the coming weeks. I only plan on spending a couple more days on the project, as the temptation otherwise is to keep working on it indefinitely (which I'd enjoy, but there're other things afoot!). With any luck I'll be putting it up for public download soon, though.

Torchbug

Torchbug development has recommenced with Dev Cycle 2. While we've been batting ideas back and forth for the last few months, development officially started a week ago today. The main focus for this round of development is to create a more polished, focused experience in time for Supanova, on the 28th of June, where we'll be presenting the game in association with Red Bird Creative.

The main goals for this round of development are:

  • A focus on story-driven gameplay. The last public version of Torchbug was fine, but it really didn't have much of a story. Sure, there was an opening and closing cinematic, but aside from mission text there wasn't a whole lot of story to go on. 
  • Full Voice Acting & Cinematic cutscenes. Somewhat related to the story content, we want to create a fully voiced story with a proper narrative structure. This will be used alongside the character creator to ensure that your custom character will be able to speak in a voice of your choosing.
  • Ship and Space Mechanics. While ship-to-ship combat likely won't make it in (it's an expanded-scope goal at the moment), we want the player to be able to interact with and use their ship to fly to different destinations.
  • Expanded RPG Mechanics. The RPG System was left by the wayside in the latter weeks of the last round of development, largely due to lack of time. This feature also includes a WoW-style character creation screen, where players can customize the appearance of their crew.
  • Upgrades of nearly every asset to Unity 5's Physically Based Shading (PBS) shader. This involves the recreation of nearly every texture to give a cohesive finish.
  • An Overhaul of the GUI system. Our original GUI was hand-coded in its initial implementation, which meant we could never take advantage of the features of Unity 4.6's Canvas system.

Most of my last week has been split between the GUI overhaul and script writing, both pretty tricky tasks. As mentioned, the original GUI was all hand-coded, so I'm having to dismantle a lot of scripts and then rebuild them, restructuring the entire thing to fit in with the new Canvas system.

A before and after of the character portraits. In the final version, the portrait will be a small screengrab of the character themselves.

Script writing is also no easy task. Because of the dynamic nature of the crew, I need to ensure that each line of dialogue won't be repeated by the same person. An example exchange from the opening cutscene is below:

CAPTAIN (*if PILOT == CAPTAIN){MECHANIC(Intercom)}): Anywhere we can stop for gas?

PILOT: Let’s see… Looks like there’s a colony on one of the inner planets.

That odd pseudocode is how I've been writing most of the dialogue. In the above example, the Captain asks where to stop for gas. However, if the captain is the pilot, we don't want them to ask and then answer their own question. In that case, the mechanic delivers the line over an intercom (using a simple high pass filter in-engine).
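That conditional notation can be read as a small routing function. Here's a minimal sketch in Python (the crew names, tuple layout, and function shape are my own illustration, not the actual script format):

```python
def resolve_speaker(roles, speaker, clash, fallback):
    """Pick who delivers a line.

    roles:    maps a role name ('captain', 'pilot', ...) to a crew member
    speaker:  the role that delivers the line by default
    clash:    pair of roles; if one crew member holds both, reroute the line
    fallback: (role, effect) used when the clash fires
    """
    role_a, role_b = clash
    if roles[role_a] == roles[role_b]:
        fallback_role, effect = fallback
        return roles[fallback_role], effect
    return roles[speaker], None

# Vasquez is both captain and pilot, so she'd answer her own question;
# the line reroutes to the mechanic, delivered over the intercom.
crew = {"captain": "Vasquez", "pilot": "Vasquez", "mechanic": "Okafor"}
who, effect = resolve_speaker(crew, "captain", ("pilot", "captain"),
                              ("mechanic", "intercom high-pass"))
```

With a crew where the captain and pilot are different people, the same call returns the captain with no effect applied.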

This is a before-and-after example of the PBS workflow. The one on the right uses far fewer polygons to give a much more reactive effect.

These individual lines will then be compiled into a full crew script, which each voice actor will perform. This means that the voice selected by the player for each crew member at the start should be able to say any line required by the story. A little bit like how Star Wars: The Old Republic did it.

I'm still working on dialogue for the third level, but hopefully I'll have a full script compiled to pass on to our Audio guys as soon as possible.

Overall, it's a pretty big scope that we're looking to complete before the end of June, but we've got the benefit of a largely student team who're on holidays until the start of next week, and whose school workload (for the most part) won't get too intensive until around mid-July.

FTI

And the last bit of news is that I've started an internship at the Film and Television Institute, working for the Games & Interactive Department. I'm mainly there in a support capacity, working on web administration, graphic design, and a bit of event organization. For anyone reading in Perth, they've got a couple of events coming up soon that you should come down to!

It's probably worth noting that as an intern I have no obligation to post about these things, but here I am plugging nonetheless!


I'll make an effort to post more about development over the coming weeks, as well as show off what we've got!

STRIPS, BDI, and Artwork

The last couple of weeks have been going pretty well, aside from a tendency to forget to write about what I've been up to. Anyways, here're the core systems that've been added recently. I'll go through each of them in turn to give more of an idea of what they're all about:

  • BDI Agent model on guards
  • STRIPS planning algorithm on guards
  • Sight, sound and smell sensory systems
  • Navmesh navigation
  • Weapon & Inventory mechanics
  • Arrow physics & sticking
  • Basic GUI layout

BDI: Belief, Desire, Intention

This is a type of software model for creating the impression of intelligent agents (individuals who act, like guards or the beast in this game) in AI programming. Wikipedia (n.d.) gives a nice overview of the topic, but I'll sum it up here. At its core, the model relies on the agent having an awareness of its surroundings based on the following.

  • Beliefs, or knowledge about the environment. What's key here is that the agents don't necessarily know the truth; they are only aware of what their senses perceive. For example, a guard might spot the player in the distance, who then runs around the corner. The guard's knowledge of the player's exact location ceases once they're out of line-of-sight, but the guard remains aware of the player's approximate location, with constantly decreasing certainty.
  • Desires, or the end goals. These are circumstances that the agents would like to bring about, but they need to accomplish other goals in order to progress. This is because each desire relies on a series of:
  • Intentions, or the agent's choice of action for the current situation. The agent is given a large number of actions to choose from in order to fulfill its desires. How it goes about choosing a certain action will be covered shortly, but at its core, an intention is a type of action given to the guard. Examples might be 'Patrol the current area', 'Get help from a friend', or 'Attack the player'.
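The three parts above can be sketched as a small data structure. This is a toy illustration of the model, not the game's actual implementation; the fact names and the selection rule are made up:

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    value: object       # e.g. a last-known position
    certainty: float    # 1.0 when freshly perceived, decaying toward 0.0

@dataclass
class GuardAgent:
    beliefs: dict = field(default_factory=dict)   # fact name -> Belief
    desires: list = field(default_factory=list)   # end goals, by priority
    intention: str = "patrol"                     # currently chosen action

    def perceive(self, fact, value):
        # Fresh sensory input is fully certain at the moment of perception.
        self.beliefs[fact] = Belief(value, certainty=1.0)

    def choose_intention(self):
        # Toy selection rule: chase a believed player position while
        # reasonably certain of it, otherwise fall back to patrolling.
        belief = self.beliefs.get("player_at")
        if belief is not None and belief.certainty > 0.5:
            self.intention = "attack_player"
        else:
            self.intention = "patrol"
        return self.intention
```

A guard with no sightings patrols; once they perceive the player, the intention flips to attacking, and it reverts as the belief's certainty decays.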

There're plenty of good sources online for ideas towards implementation, like this one from Drexel University (n.d.), or this one by Rao and Georgeff (1995).

Before I go on, I've just realized I never posted this piece of concept art, which I did to set the mood for potential collaborations.

STRIPS: Stanford Research Institute Problem Solver

Although the planning algorithm I ended up writing was pretty original, its core structure is based on the STRIPS (Wikipedia, n.d.) algorithm, which plans around individual conditions. The article linked above gives a good (if somewhat code-heavy) overview of the system, but I'll try my best to explain it in less technical terms.

Each agent is given a database of actions, each with a precondition and a postcondition (what I need before I can do this, and what'll be true once I've done it). One example might be the action 'Approach the Player'. The precondition in this case would be 'I can see the player', and the postcondition would be 'I have gotten closer to the player'.

Using this database, the agent can then use an algorithm to find what its current objectives should be based on the BDI model described above. I can feed in the desire 'Kill the player', and depending on how I set up the guard, their intention (or current action) will become 'patrol the area until I spot something suspicious'. 

So far I've only implemented this simplified version, with a single path to the end goal. Over the next week or two I'll be adding in dynamic planning and multi-agent interaction. What's great about this type of system is that it should be able to react in real time to the world around it, and plan new actions accordingly based on multiple factors - the game's settings, the agent's health, their fear level, their previous activities, and so on.
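The single-path version can be sketched as a backward chain from the goal. The action and condition names below are illustrative stand-ins, not the game's actual database:

```python
# Each entry pairs a precondition with a postcondition, as described above.
ACTIONS = {
    "kill_player":     {"pre": "near_player", "post": "player_dead"},
    "approach_player": {"pre": "see_player",  "post": "near_player"},
    "patrol":          {"pre": None,          "post": "see_player"},
}

def plan(goal, world_state):
    """Chain backwards from the goal: find the action whose postcondition
    satisfies what we want, then pursue its precondition, until we hit a
    condition that already holds (or an action with no precondition)."""
    chain, want = [], goal
    while want is not None and want not in world_state:
        name, action = next((n, a) for n, a in ACTIONS.items()
                            if a["post"] == want)
        chain.append(name)
        want = action["pre"]
    return list(reversed(chain))
```

Feeding in the desire 'player_dead' with an empty world state yields patrol, then approach, then kill; if the guard can already see the player, the patrol step drops out.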

Sight, Sound and Smell

Using the belief model mentioned above, I've constructed a database of known 'facts' that has been given to the guards. At the moment, they have a single basic viewcone (although I may expand on this at a later date, to be more sensitive to things like motion) for their sense of sight.

If an object is in line of sight, the belief about its position is almost certain (though it depends on how well-lit the object is). Once the object has left the line of sight, the certainty of its position gradually falls, depending on the type of object. For example, a guard will remain almost certain of a crate's location, but the longer they go without spotting the player, the less certain they are of the player's position.
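A minimal sketch of that per-object falloff, with made-up decay rates (the game's actual curve and tuning aren't shown here):

```python
import math

# Illustrative decay rates: a crate's believed position barely drifts,
# while the player's last-known position goes stale quickly.
DECAY_RATE = {"crate": 0.01, "player": 0.5}   # per second

def position_certainty(kind, seconds_since_seen):
    """Exponential falloff from full certainty at the moment of sighting."""
    return math.exp(-DECAY_RATE[kind] * seconds_since_seen)
```

Ten seconds after losing sight, a guard is still over 90% sure where the crate is, but under 1% sure where the player is.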

 

In the above screenshot, the white wired sphere represents the player's noise level. They've just stepped on a metal floor, and so their area of influence has increased. If a guard were standing within this sphere, they would become alerted to the player's position, although they would only investigate if the noise were nearby or repeated often enough. It's hard to show this in action, but it's working so far.

Also visible in the screenshot is a series of small clouds linked by a green line. Though currently unused, this is the player's scent trail, which the beast will be able to pick up on and follow. All guards will also leave a scent trail that fades over time, but it can be enough for the beast to track them if it's fresh enough.

By using this sensory input, an agent's belief database can be populated with tons of information about the world around it. What I'm hoping to do with this is create the possibility for an agent to be surprised. For example, if a guard is certain of a crate's location, but comes back to find that it's been moved by the player, that will trigger a suspicious response and cause them to investigate the area. They will react similarly to changes in the states of objects, for example torches that have been extinguished, or allies that have been killed or incapacitated by the player or the beast.
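The "surprise" check described above boils down to comparing a confidently held belief against a fresh observation. A hedged sketch, with thresholds and the distance test chosen purely for illustration:

```python
def is_surprised(believed_pos, observed_pos, certainty,
                 certainty_threshold=0.8, tolerance=0.5):
    """A confidently held belief contradicted by a fresh observation
    (e.g. a crate that has been moved) triggers a suspicious response."""
    dx = believed_pos[0] - observed_pos[0]
    dy = believed_pos[1] - observed_pos[1]
    moved = (dx * dx + dy * dy) ** 0.5 > tolerance
    return certainty > certainty_threshold and moved
```

A guard who barely remembered where the crate was won't be surprised to find it elsewhere; one who was certain will investigate.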

Navmesh Navigation

Put simply, the navmesh is what enables the agents to navigate through the terrain. So far it works on a simple level, where agents will avoid objects and travel around them, but I've done some experiments with linked meshes, to allow agents to do things like climb over obstructing objects.

Weapon and Inventory Mechanics

The player can pick up and loot objects, weapons, and readable items. The simplified mechanics are that they can scroll through their inventory using the mouse wheel, and use the 1-9 number keys to choose their weapon. How many of these end up being in the final product will be subject to testing and refinement, but so far the weapons will include:

  • Sword, for melee combat.
  • Cudgel, for knocking unaware enemies out.
  • Broadhead arrows, for inflicting damage or distracting enemies.
  • Water arrows, for extinguishing torches.
  • Rope arrows, for dropping a rope that allows the player to climb up.

So far the items only include a compass, but I'd like to have some kind of throwables to either cause distraction or harm, such as bottles or rocks.

As with many programming systems, the hard part is laying the groundwork and structures, so more item implementation should follow at a later date.
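The scroll-and-select input described above amounts to two small mapping functions. This sketch uses a hypothetical weapon list and slot scheme, not the game's actual data:

```python
WEAPONS = ["sword", "cudgel", "broadhead arrow", "water arrow", "rope arrow"]

def scroll(current, wheel_delta):
    """Cycle through the inventory with wrap-around via the mouse wheel."""
    return (current + wheel_delta) % len(WEAPONS)

def select_by_key(number_key):
    """Map the 1-9 number keys to inventory slots; None if the slot is empty."""
    slot = number_key - 1
    return slot if 0 <= slot < len(WEAPONS) else None
```

Scrolling past the last weapon wraps back to the first, and number keys beyond the filled slots simply do nothing.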

Arrow Physics and Sticking

As the bow will be the player's main weapon, I felt it was important to ensure that arrows worked well. The player can hold down the mouse button to shoot an arrow, and holding it for longer will increase the draw strength. The longer the arrow is drawn, the faster it flies (and thus, the farther it can travel).

Arrows are also point-heavy, as in real life. They'll tend to arc with the point sticking towards the ground (although in real life, this is more due to the drag caused by the fletching than the weight of the tip itself). 

Arrows will also stick into appropriate materials, based on the same system that governs footstep sounds. At the moment this is limited to wood and (...ulp!) flesh, but the mechanic itself is working well so far. The speed that the arrow is traveling governs how far it will embed itself, which makes for a semi-realistic approach. I'd also like to include the hardness of the material into this equation, but that's low on my priorities.
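The draw-to-speed and speed-to-depth relationships can be sketched with a couple of linear mappings. All constants here are invented for illustration; the game's actual tuning values aren't published:

```python
# Illustrative constants only.
MAX_DRAW_TIME = 1.5   # seconds of holding the button for a full draw
MAX_SPEED = 40.0      # launch speed at full draw, m/s
EMBED_FACTOR = {"wood": 0.004, "flesh": 0.008}  # metres embedded per m/s

def launch_speed(hold_time):
    """Longer draws fly faster, clamped at a full draw."""
    return MAX_SPEED * min(hold_time, MAX_DRAW_TIME) / MAX_DRAW_TIME

def embed_depth(impact_speed, material):
    """Faster impacts embed deeper; unlisted materials reject the arrow."""
    return EMBED_FACTOR.get(material, 0.0) * impact_speed
```

Holding past a full draw gains nothing, and an arrow hitting stone embeds to a depth of zero, i.e. it bounces off.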

Basic GUI Layout

As with a great many of this game's mechanics, the GUI was created in homage to the original Thief games. As a result, its current layout is very reminiscent of theirs.

I wanted to include a 3D GUI, because it seemed very cool in some prototypes. I like that the GUI reacts to the player's surroundings, reflecting lights and behaving physically as an item in the player's inventory would. For example, to view the compass the player needs to be in a relatively bright location, as well as looking down.

Combining this with Unity's GUI system wasn't easy, but I've managed to find a neat way to automatically scale 3D objects to the confines of a screen-based rectangle. This has been combined with a couple of moving on-screen effects, like displays over items to show when they can be looted.
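The scale-to-rectangle step reduces to taking the tighter of the two axis ratios. A minimal, engine-agnostic sketch (the real version works on projected 3D bounds, which this glosses over):

```python
def fit_scale(obj_width, obj_height, rect_width, rect_height):
    """Uniform scale factor that fits an object's projected bounds inside
    a screen rectangle without distortion (letterbox-style: the tighter
    of the two axis ratios wins)."""
    return min(rect_width / obj_width, rect_height / obj_height)
```

A wide object is constrained by the rectangle's width, a tall one by its height, so the object never spills outside the rectangle on either axis.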

The shields below each represent 10% of the player's health, and they dissolve based on the player's current health. In the screenshot below, the player has about 75% health.
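The shield display maps health onto whole units plus a partially dissolved one. A small sketch of that mapping (the function name and return shape are my own):

```python
def shield_display(health_pct, total_shields=10):
    """Each shield stands for 10% health. Returns the number of intact
    shields and how much of the next (dissolving) shield remains."""
    units = health_pct / 100 * total_shields
    full = int(units)
    return full, round(units - full, 6)
```

At 75% health, seven shields are intact and the eighth is half dissolved, matching the screenshot described above.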

All in all...

It's been a while since my last update (hence the long read, apologies!), but I'm happy with progress overall. A lot of the work over the last couple of weeks has been in laying groundwork for things to come, so while it's been a bit frustrating to have so little visual progress, it means that I should have plenty to showcase over the next few weeks.

My plans for the immediate future are to give the player some animated arms so that I can begin on implementing the melee weapons, readables, and throwables.

I'll endeavor to post more often, but in the meantime, here's a summary of the last few weeks of coding.

References:

Belief–desire–intention software model. (n.d.). In Wikipedia. Retrieved March 11, 2015, from http://en.wikipedia.org/wiki/Belief–desire–intention_software_model

Drexel University. (n.d.). Chapter 2: Belief, Desire, Intention (BDI). Retrieved from https://www.cs.drexel.edu/~greenie/cs510/bdilogic.pdf

Rao, A. & Georgeff, M. (1995). BDI Agents: From Theory to Practice. Retrieved from http://www.agent.ai/doc/upload/200302/rao95.pdf

STRIPS. (n.d.). In Wikipedia. Retrieved March 11, 2015, from http://en.wikipedia.org/wiki/STRIPS

Post-Pitch and Tech Demo

So the pitch to my facilitator went well, and everyone seemed to like the idea. I spent the rest of yesterday defining the theme as well as getting more familiar with some of the psychological principles I'd like to use for design.

The overall artistic theme will be more of an expression of the dichotomy of order and chaos. Both the player character and the monster will embody this chaotic nature, with some rival soldiers representing order. By relying on subterfuge and manipulation, the player character and monster can be said to fit the Trickster (n.d.) archetype.

The mechanics of the design have changed a lot too - the main focus of the gameplay has shifted to the manipulation of these rival guards, who'll use their AI and BDI model to have an awareness of the environment (as well as the influence the player has on it). This should make for some great emergent gameplay.

One cool idea I had was to have all in-game dialogue be in Anglish. It's a funky way of speaking English that removes all non-Germanic-rooted words. There're some awesome resources for it online, including a wiki. It sounds oldschool when spoken, but without the floweriness or time gap that can make Shakespearean or Middle English a bit too difficult to understand.

Anyways, I'd best get back to nailing down the game design. Below is a quick tech demo showing off the material-based footstep system. I also hooked up a quick ragdoll system for giggles. All sound assets are placeholders taken from Thief: The Dark Project; I don't think this counts as redistributing them, but either way, I'll replace them before releasing the final project.

References:

Trickster. (n.d.). In Wikipedia. Retrieved February 16, 2015 from http://en.wikipedia.org/wiki/Trickster