Wednesday, June 17, 2009

Achievements/Awards - trivial?

I'm currently stuck on figuring out how to model the AI (implementing the behavior tree was only 50%^h^h^h20%^h^h10%? of the work - now I have to write and compose those behaviors into something sensible).

So I began working on something I had motivation for instead: some sort of achievement component (they can't actually be called achievements). I figured it would take a few hours, maybe a day at most. That was 3 days ago. In total, I've probably spent about 15 hours on it. There's a lot to think about.

First, the UI:

  • The award notification that pops up/animates at the bottom of the screen
  • An awards screen that lets you see all your awards, and the ones you haven't received yet.
  • An awards summary on the player selection screen (so online players can see your awards)

Other than the standard tedious pixel-pushing, that's all pretty straightforward. I considered trying out one of the ready-made XNA achievement components found on the web, but its implementation was fairly minimal - just UI. Another one, "Goal Component", was more complete but lacked some features I wanted and didn't include source code (so if there were deal-breaking bugs, I'd be out of luck).




My main concern was making sure I don't block the UI thread. I don't have any background threads in my game at the moment other than some content preloading at startup. So I thought about implementing a task scheduler of some sort, until I found one made by jwatte on the XNA forums. It seemed solid (and so far so good).

I wanted the process to be as transparent as possible for the rest of my game. So here's how it works.

There is a class that describes all the awards, along with the icon that goes with each. This class also holds other persistent player data, such as flags that - when combined - may yield an award (these flags may be set across play sessions, so they need to be persisted). The class also contains the smarts for figuring out when particular flags turn into awards. I'm kind of breaking OOP principles here - the award class is doing double duty, since I've lumped other persistent player data into it.

There is a DrawableGameComponent that shows the animated notifications. It's always around, so it also serves as the logic for responding to "new award" requests, and scheduling load/save tasks (IAwardService).

When a player signs in, the following happens:

  • A component is responsible for showing the StorageDevice selector for this player. It has a queue, since multiple players may get signed in at once (the device selector will be shown sequentially in this case). Having a StorageDevice connected is a precondition for doing anything awards-related.
  • The IAwardService monitors SignedInGamers until it sees a StorageDevice has been selected for one. At that point, it schedules an award LoadTask (to load persisted awards from disk)
  • At some point later, the LoadTask completes and communicates the information back to the main thread.
  • Up until this point, any "receive an award" requests have just been queued up. Before starting a SaveTask for them, we need to wait until the first LoadTask has completed. This way the awards we know about at runtime can't stomp the awards saved on disk - the two sets are merged properly instead.

This seems to work so far.
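The queue-then-merge logic can be sketched roughly like this (a minimal Python sketch for illustration only - the actual game is C#/XNA, and all the names here are hypothetical):

```python
# Hypothetical sketch of the award service's queue-then-merge behavior.
class AwardService:
    def __init__(self):
        self.loaded = False   # becomes True after the first LoadTask completes
        self.pending = []     # "receive an award" requests queued before load
        self.awards = set()   # awards known at runtime

    def receive_award(self, award):
        if not self.loaded:
            # Can't save yet: we'd stomp whatever is already on disk.
            self.pending.append(award)
        else:
            self.awards.add(award)
            self.schedule_save()

    def on_load_complete(self, awards_from_disk):
        # Merge rather than overwrite: awards from disk and awards queued
        # up at runtime both survive.
        self.awards |= set(awards_from_disk)
        self.awards |= set(self.pending)
        self.pending.clear()
        self.loaded = True
        self.schedule_save()

    def schedule_save(self):
        pass  # would hand a SaveTask to the background task scheduler
```

The point of the sketch is only the ordering guarantee: nothing is written to disk until the first load has merged in what was already there.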




Next up are the other places in the UI where awards are shown. These are the two screenshots you see here. Originally I based the awards system on my PlayerProfile class. But then I realized this only exists in the context of a NetworkSession (which I use for local games too). But I want awards to be visible before you have entered the lobby. So I had to change everything to be based off of SignedInGamer (which makes much more sense).

Finally, I had to transfer award information to other network players. This was pretty straightforward - just an extra int in the "player info" I send around in the lobby.

I still need to do a little UI polish. And implement some of the remaining awards.

Thursday, June 11, 2009

Making the AI imperfect.

Once I got the basic "capture flag" AI working, I started trying to play against it. It was clearly way too perfect for me to even have a chance of beating it. For example:

  1. Its aim was perfect - it rarely missed a shot
  2. It knew exactly when to stop shooting bullets to avoid overheating, so it was achieving the theoretical maximum firepower


The second point was the first one I tried to fine-tune. I added a parameter on the Actor called "ChanceOfCheckingForOverheat". Every time the AI fires a bullet, there is a chance it won't check for overheat (and thus may actually overheat and not be able to shoot for a while).
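In sketch form, the idea is something like this (Python for illustration - the real code is C#, only the ChanceOfCheckingForOverheat idea comes from the game; the Weapon numbers are made up):

```python
import random

class Weapon:
    def __init__(self, max_heat=10.0, heat_per_shot=3.0):
        self.heat = 0.0
        self.max_heat = max_heat
        self.heat_per_shot = heat_per_shot
        self.overheated = False

    def fire(self):
        self.heat += self.heat_per_shot
        if self.heat > self.max_heat:
            self.overheated = True  # locked out until the weapon cools down

def try_fire(chance_of_checking_for_overheat, weapon):
    # With the given probability, the AI checks the heat and holds fire like
    # a perfect player; otherwise it fires blindly and may overheat itself.
    if random.random() < chance_of_checking_for_overheat:
        if weapon.heat + weapon.heat_per_shot > weapon.max_heat:
            return False  # perfect play: hold fire
    weapon.fire()
    return True
```

A chance of 1.0 reproduces the "theoretical maximum firepower" behavior; lower values make the AI beatable.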

The first point was a little harder. I wanted to introduce a random offset to the angle the AI turns to. However, my aiming behavior was re-aiming at the target on every update cycle. So I introduced a timer that makes the AI re-aim only every once in a while (once a second). I think this more accurately models how a human aims.

So now it re-aims every second, and the Actor has a property that indicates how bad its aim is. So for more difficult AI opponents, I can give them better aim (I can also make them check for overheating more reliably).
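The re-aim timer plus the aim-error property might look roughly like this (a Python sketch under my own naming; the game itself is C#/XNA):

```python
import random

class Aimer:
    """Re-aims only once per interval, with a per-actor error (radians)."""
    def __init__(self, aim_error, reaim_interval=1.0):
        self.aim_error = aim_error            # 0.0 would be a perfect shot
        self.reaim_interval = reaim_interval
        self.time_since_aim = reaim_interval  # aim immediately on first update
        self.angle = 0.0

    def update(self, dt, angle_to_target):
        self.time_since_aim += dt
        if self.time_since_aim >= self.reaim_interval:
            self.time_since_aim = 0.0
            # Snap to the target plus a random offset; between re-aims the
            # AI keeps this stale angle, like a human who can't track perfectly.
            self.angle = angle_to_target + random.uniform(-self.aim_error,
                                                          self.aim_error)
        return self.angle
```

Difficulty then falls out of two numbers per actor: a smaller aim_error and a shorter reaim_interval make a sharper opponent.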

Here is a video that demonstrates this a little bit. The gameplay is still not fun at all (which increases my worry that playing against the computer just won't be very fun). But at least the AI is less-than-perfect in an intentional way (there are still some path-finding issues that are problematic, so it's less-than-perfect in an unintentional way too).



Next I will start adding AI for the skills. This will require the notion of parallel behaviors, which I haven't yet implemented in the tree.

Monday, June 8, 2009

Behavior Trees

Over the past few weeks I've been digesting behavior trees - trying to learn all I can about them.

I finally decided I knew enough to start implementing my own. I'll go into more detail in future posts, but I'm happy with how things are coming along so far. Once I had the basic building blocks in place, it was pretty easy to cobble together a test behavior that does the following:

If the flag is available, go to it, pick it up, and bring it back home (and drop it). If someone else has the flag, try to kill them.

Once you've got the building blocks in place, you basically have a "language" through which you can build behavior. Kind of neat to see it coming together.
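As an illustration of that "language", here is a toy sketch of the classic building blocks (Sequence and Selector) wired into the capture-the-flag behavior described above. This is Python for illustration and is not the game's actual implementation; the leaf actions just log their names:

```python
# Sequence runs children until one fails; Selector runs children until one
# succeeds. Leaves wrap plain callables returning True/False.
class Sequence:
    def __init__(self, *children): self.children = children
    def run(self, actor):
        return all(child.run(actor) for child in self.children)

class Selector:
    def __init__(self, *children): self.children = children
    def run(self, actor):
        return any(child.run(actor) for child in self.children)

class Leaf:
    def __init__(self, fn): self.fn = fn
    def run(self, actor): return self.fn(actor)

def action(name):
    # A stand-in leaf that "performs" an action by logging it.
    return Leaf(lambda a: a.setdefault("log", []).append(name) or True)

# "If the flag is available, go to it, pick it up, and bring it back home.
#  If someone else has the flag, try to kill them."
capture_flag = Selector(
    Sequence(Leaf(lambda a: a["flag_available"]),
             action("goto_flag"),
             action("pick_up"),
             action("bring_home_and_drop")),
    action("attack_carrier"),
)
```

Composing a new behavior is then just a matter of nesting these nodes differently, which is exactly what makes an external tree editor attractive.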

I quickly realized that assembling these trees in code was way too complicated though. After toying around with various options for the UI, I settled on a small Winforms app in which I manually bind the UI to the behavior tree hierarchy. There was a little bit of tedious UI work to get this up and running, but I decided it was a more prudent option at this point than learning about WPF databinding.

The result is what you see in the linked photo (that screenshot is the behavior for the AI I described above). I'm able to add/remove/re-arrange the nodes, set parameters on them, and save this out to an XML file which can be deserialized by the game.

I haven't done this yet, but it should be straightforward to have the game "hot load" the AI as I make changes in the editor.

Now *all* I need to do is write the AI, and fix all the design issues I haven't foreseen.

I need to think about/implement:
  • target selection
  • memory
  • behavior vtables to aid in the re-use of certain subtrees


Sunday, June 7, 2009

More game reviews

Just some more notes on what I liked or didn't like for some games that caught my eye.

Project Alpha
  • No music during the intro
  • Trial mode makes me choose a storage device even though you can't save games
  • I am confronted with a big screen of controls
  • The font is hard to read in places
  • Text appears on top of each other in places
  • The sound levels are very inconsistent
  • A few bugs (when I try to resume the game with no saved game, things flash and I get the same error message back)

It looks like there may be some deep gameplay here, but the game is a bit confusing and doesn't feel very polished.

Spy Chameleon

  • Trial mode makes me choose a storage device
  • It is very bright and cheery
  • The art is basic - definitely "dev art", like my game.
  • Fairly clever gameplay mechanic
  • Introduces you gradually, which is nice

Overall I liked this simple game. For what it is, it seems fairly polished, though it isn't really the kind of game I would play.

Fittest

  • Made me choose a storage device
  • The difficulty ramps up nicely
  • Graphics are basic but good
  • The music is a little annoying

It lacks a little polish. The main menu, however, is fairly slick. This just makes the rest of the game feel under-polished.

Saturday, May 23, 2009

Trying out some more games

Half Brick Echoes

A very polished title. The basic arcade-style gameplay doesn't really appeal to me, but some things I liked:
  • very well done menu system
  • very artistic (I wish I had an artistic touch like that)
  • it explains things quickly and clearly, and gradually introduces more complex elements

Basically, it gets a lot of things right.

Hexement

Catchy box art and the screenshots looked nice, so I downloaded the trial. Notes:

  • Menus are basic and buggy, and don't conform to norms (e.g. B for back)
  • No instructions on what to do by pressing the default buttons.
  • I'm still not sure what to do
  • Music was nice and soothing... graphics are pretty good.

I get the feeling there is some interesting gameplay here, but there isn't much explanation of what's going on (perhaps part of the enjoyment is figuring it out? but that doesn't leave much upsell potential). There are a number of basic "gameplay rules" that are broken here.

A Fading Memory

Beautiful introduction. Nice music, very evocative art style. I like games that are art.

Gameplay: it is too easy to die and then I restart at the beginning of the level. I think the controls are a little weird, and don't behave like other platformers (when you center the stick you immediately stop moving if you're jumping... feels a little weird).

This may be a good game if you like twitchy precise platformers. But it kind of feels like Braid, and Braid is anything but a twitchy "must jump precisely the exact amount" platformer. The demanding precision here doesn't seem to mesh with the dark emotional feel of the game.

If it wasn't so difficult (I am impatient and grow frustrated when I keep dying and get no rewards), then I would probably buy this game. I'm curious how long it is, and what comes next.

A Wizard's Odyssey

Great music. Best I've heard in a community game.

The visuals are an interesting mix of beautiful and bland. The toon shaders are nice. The environment seems a little spartan and low-poly. Looks like there is some ambient occlusion going on here? Looks fancy in some areas, but not in others. I thought there were no shadows at first, but I do see them where light floods in from the stained glass windows. But only there (which is like 1% of the level).

This game seems interesting, but I don't know what I get by buying the full version (other than being able to play for more than 8 minutes). How long is the game... how many levels?

Again, the art doesn't seem to have a common theme, which is a bit unsettling.

Mithra - episode 1, chapter 1

This is a full-blown professional-looking game. Definitely the most graphically advanced game I've seen on XBLCG.

They aren't getting 60FPS, that's for sure. And there is a short (garbage collection?) pause every few seconds.

Though it was beautiful, it didn't seem too exciting to me.

Thursday, May 21, 2009

Seablast

Every once in a while I'm going to try to look at some of the community games and see what I like and don't like. I saw positive reviews of Seablast, so I downloaded the trial.

http://marketplace.xbox.com/en-US/games/media/66acd000-77fe-1000-9115-d80258550141/

At first glance, it is frighteningly like Tank Negotiator!

For example:
  • Tank-in-a-maze: check (except it is subs in the sea, but same deal)
  • Menu system that is slightly 3d: check
  • Vortex: check (I freaked out when I saw this - it's my prize weapon!)
  • Nice particles: check (but - they are not "soft")

Over all the game is quite polished. These are some of the things I liked:

  • You can practice with the controls
  • Campaign that gets you into it smoothly
  • Quick to load
  • Visuals are polished

The things I don't like:

  • Low contrast selection on some of the menus
  • Things are a little too small (the subs/players, the text). I have this problem too, when I zoom out from the board (with lots of players)
  • No tips during gameplay (I kept forgetting the controls)

I was a little shocked that it seemed, in some ways, a lot like TN.

Some other thoughts:

  • There is a flow mechanic in some of the levels (water flowing, which pushes you). I was thinking of having a weapon/powerup like this too!
  • The terrain doesn't seem to be involved much in gameplay. This is the case with TN too. I wish I could think of ways to integrate it more (like you can with a shooter where you take cover, for instance)
  • Has the same unit movement as TN, which some people don't like (you aim straight ahead at all times)
  • 8 level shapes... about the same as TN.
  • Music is a bit cheesy.
  • Apparently no network multiplayer

There seem to be a lot fewer weapons/skills than in TN, and no upgrade path. The campaign seemed a little boring - I'm worried this will be the case with TN too. It's really meant as a multiplayer game.

There are several people on the credits (3 or 4). I'm just me!

Wednesday, May 20, 2009

Nav meshes


I'm finally getting back to what I need to do: AI.


A necessity of any AI is good pathfinding. I had a basic point-based pathfinding system working long ago, but didn't have proper steering algorithms (as proper as they could be for a point-based system). Overall it was not very convincing, despite it (usually) getting from point A to point B.


More recently, I investigated navigation meshes and decided I would like to try implementing them. The basic A* algorithm I have will still work here (they are just another graph after all), but I need to actually create these meshes somehow.


In the interest of saving time (and allowing for the possibility of users creating their own mazes), I investigated automatic generation of the meshes. I basically start with a rectangle, and subtract additional polygons from it (each polygon representing a wall I add). I used one algorithm for subtracting one polygon from another (not really trivial) and another for generating the "minimum decomposition" of an arbitrary polygon into a series of convex polygons (convex polygons are a necessity for navigation meshes). Some pretty heavy and boring math here, and I could never get it all to work.


What stopped me were floating point inaccuracies. When two convex polygons are adjacent and floating point precision comes into play, one of the polygons may no longer be technically convex. The algorithms I was using did not like that. After a week of trying to get things working, I (temporarily) gave up and decided it was not worth it.
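The failure mode is easy to reproduce with the standard cross-product convexity test (a Python sketch; the epsilon tolerance shown is one common workaround, not necessarily what the algorithms I was using do):

```python
def cross_z(o, a, b):
    """Z component of the cross product (a - o) x (b - o): the turn at a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_convex(poly, eps=0.0):
    # A counter-clockwise polygon is convex if every turn is a left turn.
    # With eps == 0, a vertex nudged inward by a floating-point-sized error
    # makes one turn slightly "right" and the strict test fails; a small
    # tolerance treats near-collinear turns as acceptable.
    n = len(poly)
    return all(cross_z(poly[i], poly[(i + 1) % n], poly[(i + 2) % n]) >= -eps
               for i in range(n))

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
# The same square with a mid-edge vertex nudged inward by a tiny error:
nudged = [(0.0, 0.0), (0.5, 1e-12), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

The nudged polygon is "technically" non-convex even though it is convex for all practical purposes, which is exactly the kind of input that made the decomposition algorithms fall over.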


I'm now back to creating nav meshes manually. As an aid, I auto-generate points based on the wall positions that will be useful for creating the polygons.


At the top of this post is an example of a completed nav mesh created using the in-game maze editor.
One benefit to creating them manually is that I get control over how they are positioned. I may want the individual polygons to be evenly distributed in terms of size/position if I end up using polygon-level conditions (e.g. an enemy is in this polygon, so increase the cost of moving through this polygon).


HDR

HDR rendering means you do all your pixel/lighting calculations at a higher precision than that used for display, in a more "open-ended" color space. For example, you might use a 16-bit per channel floating point surface. Normally, the output for each color channel is clamped to [0, 1], so if your lighting calculations meant that a pixel was brighter than full brightness, that information was lost. A floating point surface allows values greater than 1 (just using an 8 bit per channel FP surface won't technically give you any greater precision than your standard RGB, but it makes calculations easier since you can just let values go above 1).

Unfortunately, none of the floating point surface formats on the Xbox support alpha blending. So that was pretty much a deal-breaker.

I could use a regular 32 bit ARGB render target, and just divide all my output pixel values by some factor, then draw that render target to the back buffer and multiply by that factor again. But then I am losing precision. I played around with this before, and it wasn't too bad - but I dropped it because I perceived the precision loss to be a problem.
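The precision cost of that trick is easy to see with a quick round trip through an 8-bit channel (Python for illustration; the scale factor of 4 is an arbitrary example):

```python
def roundtrip(value, scale=4.0):
    # Encode: divide by the scale factor and quantize into an 8-bit channel.
    stored = max(0, min(255, round(value / scale * 255)))
    # Decode: read the channel back and multiply by the scale factor.
    return stored / 255 * scale

# With scale = 4 the channel can only represent multiples of 4/255 ~= 0.0157
# instead of 1/255, so nearby brightness values collapse to the same byte.
```

In other words, you buy headroom above 1.0 by spending resolution everywhere else in the range.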

D3D supports an R10G10B10A2 format: 10 bits per color channel, and 2 bits of alpha. This would suit my purposes, but the 2-bit alpha channel always worried me - although, conceptually, I didn't understand why the alpha channel of the render target mattered at all. So I coded this up again and was pleased to see it work fine.

On the PC.

On the Xbox, all my translucent objects were reduced to 4 levels of alpha. Why? They are only being drawn on top of the 2-bit alpha render target. Some searching on the XNA forum again showed that the Xbox converts the output of the pixel shader to the format of the render target before applying it to the render target. Foiled again!

Further research showed that this was only a problem for "translucent" alpha blending, where the final result includes the original destination pixel in its calculation. Most of my non-particle alpha objects were like this (e.g. the shield, the planet atmosphere, the spotlight cones). Most of the particles used additive alpha blending, which wasn't affected.

I found that I could mostly emulate the effect I want solely using additive alpha blending. A few things look a little worse, but I hope the gain I get from "true" HDR will be worth it.

What does it give me? Well, I don't need the contrast range necessary for realistically rendering indoor/outdoor scenes, or for adjusting "exposure" (since the game takes place in space). What I do get is bloom: washing out the brightest parts of the scene with a glow.

The effect is often over-used in games, but I think it will help make things look more "professional".

Recent changes

These are some of the recent changes and visual polish I've been working on (since submitting to PAX10).
  • Soft particles
  • HDR

I finally got around to implementing soft particles. This is visual polish that avoids the ugly seams that appear where particle billboards intersect the scene geometry. There were a number of challenges for me here. I had tried this before and given up after some roadblocks I didn't have the skills to solve at the time.

The main problem was the construction of the depth buffer. If you don't get this exactly right, things won't work, and only now am I comfortable enough with PIX to debug some of the trickier problems.

One problem I see many people making (in looking at online tutorials) is using a depth calculation that passes (position.z / position.w) from the vertex shader to the pixel shader. Because you are dividing by w, this value will not be interpolated properly from vertex to vertex. This only shows up if you have triangles of varying size (as I do, for example, on my floor - which is one big quad - and the walls, which are much smaller): the depth value will not be continuous where touching geometry meets. The shadow mapping sample on the official XNA site is guilty of this, but of course it doesn't show up as a problem with the scene objects they use. The solution is not to divide by w. Leave it as z, or divide by a constant if you want to keep the output value between 0 and 1 for more floating point accuracy.
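The arithmetic behind this is easy to check. The GPU interpolates vertex-shader outputs perspective-correctly (interpolate attr/w and 1/w linearly in screen space, then divide), while the depth value z/w is itself already linear in screen space - so passing z/w out of the vertex shader gives a wrong, tessellation-dependent answer. A Python sketch with made-up vertex values:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def perspective_interp(attr0, attr1, w0, w1, t):
    # How the GPU interpolates a vertex-shader output across a triangle:
    # attr/w and 1/w are interpolated linearly in screen space, then divided.
    return lerp(attr0 / w0, attr1 / w1, t) / lerp(1.0 / w0, 1.0 / w1, t)

# One edge of a big triangle: near vertex (z=1.8, w=2), far vertex (z=9.9, w=10).
z0, w0 = 1.8, 2.0
z1, w1 = 9.9, 10.0
t = 0.5  # screen-space midpoint of the edge

# The depth the rasterizer actually produces at this pixel (z/w is linear
# in screen space):
true_depth = lerp(z0 / w0, z1 / w1, t)                        # 0.945

# What the pixel shader receives if you pass (z / w) out of the vertex
# shader - it gets perspective-corrected, which is wrong for z/w:
attr_depth = perspective_interp(z0 / w0, z1 / w1, w0, w1, t)  # 0.915

# The fix: pass raw z instead. Perspective-correct interpolation of z gives
# the true z at the fragment, independent of how the surface is triangulated:
z_at_fragment = perspective_interp(z0, z1, w0, w1, t)         # 3.15
```

A small triangle whose vertex sits exactly at that midpoint would write 0.945 while the big quad writes 0.915 - precisely the discontinuity between the floor quad and the small wall triangles.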

Another problem I encountered, on the Xbox 360 only, was discontinuities in the depth buffer (I was using a 32 bit floating point buffer): strange gaps appeared in the (presumably interpolated) depth values. With some tests, I found they were occurring at "even" numbers, such as 0.5 or 0.0625. I was finally able to repro the problem just by drawing a simple gradient to the buffer. It turns out the problem was using something other than point sampling when reading the depth buffer. Anything else (on the Xbox) will cause issues with floating point surfaces.

Very exciting.

With those problems solved, I figured I would first try just drawing a quad to the screen using the "soft particle" technique. The vortex was the obvious choice, because it looked terrible where it intersected with the tanks (the vortex *is* a particle system, but by the time it is drawn to the screen it has already been rendered to a texture, so effectively it is no longer one). This worked well, so I followed it up by changing the particle system too. Things were working great!

... until I tried running it on the Xbox. No dice. Eventually I observed that things worked fine on the top half of the screen, but not on the bottom half. Uh-oh: predicated tiling.

Eventually I realized it was the convenient VPOS semantic I was using. This gives you screen position in the pixel shader, which I use to look up the correct spot in the depth texture. VPOS apparently "resets" for each tile. And the XNA framework handles the tiling for you, so there is no way to know which tile you're rendering from within the pixel shader. No problem, I can calculate the screen position in the vertex shader, and give it to the pixel shader in a TEXCOORD.

Except - I'm using point sprites for the particle systems. That workaround is fine for a quad (the vortex), but won't work for my point sprites (since they are a point, and there is nothing to interpolate). I certainly didn't want to re-write the complex particle system to use quads.

Eventually I realized I could figure out my screen position using: 1) the screen position of the center of the sprite, 2) the current texture coordinate, and 3) the pixel size of the sprite. And so that's what I did, and so far it seems to work. Since TEXCOORDs don't work for point sprites, I need to use a COLOR semantic to store all these values to pass to the pixel shader. COLOR semantics apparently only have 8-bit precision on many PC graphics cards (which is insufficient for my uses here), but luckily on the Xbox they have greater precision (likely 32 bits).
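The reconstruction itself is just a bit of arithmetic (a Python sketch of the idea, not shader code; I'm assuming texcoord (0,0) at the sprite's top-left corner, which is how D3D generates point sprite texture coordinates):

```python
def sprite_fragment_screen_pos(center_px, size_px, texcoord):
    # The texcoord runs (0,0) -> (1,1) across the sprite, so the fragment's
    # offset from the sprite center runs from -size/2 to +size/2 in pixels.
    return (center_px[0] + (texcoord[0] - 0.5) * size_px,
            center_px[1] + (texcoord[1] - 0.5) * size_px)
```

Since the texture coordinate is the one thing that does vary per fragment on a point sprite, combining it with the per-sprite center and size recovers a per-fragment screen position without needing VPOS.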

One additional hurdle was that I was hoping to use MRT (multiple render targets) to help render the depth texture, so I didn't need to render the scene geometry twice. This worked fine on the PC, but again - not on the Xbox. I need to switch out the depth render target (at index 1) so I can read from it, while still keeping the main render target (index 0) to continue rendering to. However, the contents of the main render target are lost when any other render target is removed. It has to do with the nature of the Xbox's 10MB of EDRAM, and predicated tiling. It is obvious why this has to happen if you think about it.

So in the end I had to go with a separate rendering pass. I'm still getting 60FPS at 1080p, so so far so good.

This post is too long already, so I will discuss HDR in another post.

Tuesday, May 19, 2009

Gameplay video

Here is a gameplay video for the "beta" version I submitted to PAX10. Unfortunately it is low quality, despite my following the instructions for "upload HD videos, up to 1GB!". The youtube uploader tool apparently recompressed the video on my machine before uploading, since it took only a few minutes to upload a "600MB" video, and I ended up with this result: