10th Blogpost: Game Progress and Motion Blur

Game Progress

So it’s been a pretty crazy week! My group’s been accepted to Level Up and I’m super excited! It seems like the past couple of weeks have been purely dedicated to improving Titanum, and — honestly — I’m not complaining. On Tuesday night I added controller support to our game through an API called XInput. It was really straightforward to add, and I feel it actually makes the game a bit more fun! This is mostly because I was able to make use of my Xbox controller’s vibration feature, so shooting feels really satisfying now! On top of this, the XInput API opened up the possibility of multiplayer… I thought it would be too much work… but the idea kind of got stuck in my head. Needless to say, we now have multiplayer! There are still a couple of problems that I need to fix up (namely how we’re going to handle one player dying), but it works really well.
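
For anyone curious, the XInput side of this really is tiny. Here's a minimal sketch of polling a pad and kicking the rumble motors (the button mapping and motor strengths are just examples, not our actual code):

    #include <Windows.h>
    #include <Xinput.h>
    #pragma comment(lib, "XInput.lib")

    void pollController(DWORD player)    // player index 0-3
    {
        XINPUT_STATE state = {};
        if (XInputGetState(player, &state) != ERROR_SUCCESS)
            return;                      // controller not connected

        if (state.Gamepad.wButtons & XINPUT_GAMEPAD_A)
        {
            XINPUT_VIBRATION vib = {};
            vib.wLeftMotorSpeed  = 30000;  // 0-65535, heavy motor
            vib.wRightMotorSpeed = 15000;  // light motor
            XInputSetState(player, &vib);  // satisfying shooting!
        }
    }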

Here’s a video showing off controller switching and multiplayer. (Unfortunately, since I’m visiting my parents this weekend, I’m doing the multiplayer test by myself)

Motion Blur

For the lecture this week we covered motion blur. In terms of photography, motion blur is the result of taking a picture of a moving object. The amount of blur is dependent on the camera’s shutter speed — otherwise known as exposure time. This basically determines the amount of time that the camera’s shutter is open for when taking a picture. The longer it is open, the more blurred a moving object will become.

Motion blur is useful for still photography, as it is an easy way to suggest movement for an object in your photo. In games however, motion blur can be useful for different things. Since games are (usually) already in motion, we can use motion blur instead to emphasize specific actions. In racing games specifically, motion blur is very useful for conveying a greater sense of speed.

Of course, conveying speed through motion blur can also add to the impact of an attack in a fighting game.

But mostly, motion blur is used to induce motion sickness…

In order to achieve the effect of motion blur in games, we can use a built-in OpenGL feature called the accumulation buffer. How this works is that the accumulation buffer stores the current frame (or at least part of it), and from there we can blend previous frames on top of the current frame. It’s an easy effect to use; however, when applying the accumulation buffer to multiple objects it can get very slow, because it’s rendering the same scene multiple times. A better alternative would be to use Motion Vectors.
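
The per-frame usage really is short; a typical pattern looks like this (the window needs to be created with an accumulation buffer, and the 0.7/0.3 blend weights are just tuning values I picked):

    // each frame, after rendering the scene:
    glAccum(GL_MULT, 0.7f);    // fade whatever is already accumulated
    glAccum(GL_ACCUM, 0.3f);   // blend in 30% of the current frame
    glAccum(GL_RETURN, 1.0f);  // write the result back to the colour buffer

The higher the GL_MULT weight, the longer the trails persist.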

With Motion Vectors, we look at individual “blocks” (or pixels) of the screen. From here we look at adjacent frames (for games, we look at the previous frame) and we determine the velocity between the current frame and the adjacent (reference) frame, where the velocity is just “a – b”, or “current – adjacent”.
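
A rough sketch of how that velocity might be computed per pixel for camera motion (the uniform names and the reconstruction from the depth buffer are my own, not from the lecture):

    uniform sampler2D depthTex;
    uniform mat4 invViewProj;   // current frame: clip -> world
    uniform mat4 prevViewProj;  // previous frame: world -> clip
    in vec2 texcoord;
    out vec2 velocity;

    void main()
    {
        // rebuild this pixel's position for the current frame...
        float depth = texture(depthTex, texcoord).r;
        vec4 ndc   = vec4(texcoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
        vec4 world = invViewProj * ndc;
        world /= world.w;
        // ...and re-project it with last frame's camera
        vec4 prev = prevViewProj * world;
        prev /= prev.w;
        velocity = ndc.xy - prev.xy;   // "current - adjacent"
    }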

This gives us an approximation of the direction of each pixel on the screen. This way each pixel now stores its own screen space velocity! Aside from motion blur, we’re also able to use this technique for effects like Light Rays!

In the game Dark Souls 2 we can see the use of BOTH Light Rays and Motion Blur in the same scene.

The light rays really add to the serenity of the environment, while the motion blur (which is only applied to the player’s weapon and shield) helps to emphasize the urgency of the player’s attack. It’s interesting to think that the developers must have had separate motion vector FBOs for the player’s weapons and for the environment’s lighting.

[Image: Dark Souls 2 screenshot showing light rays and motion blur]

Before I end this post, I was wondering if I could get my updated blog post marks? Wasn’t sure if this will be my last post or not!

9th Blogpost: Depth of Field

Earlier this week we had a lecture on depth of field and its implementation. In the real world, depth of field is something we get from cameras. The aperture of our camera basically determines the “range” in which things are in focus or out of focus (blurred). I like to think of this range as being similar to our near and far planes, where anything outside of these determined planes becomes out of focus (instead of being clipped from the scene).

In terms of photography (and film too!), depth of field can be important for putting emphasis on a particular part of our scene.

In one of my favorite movies of all time, The Shining, we can see the use of a slightly shallow depth of field:

In this picture we can see that Danny’s (the boy’s) head is slightly out of focus, while the twin girls in the distance appear much sharper. This really adds to the horror in this scene since, for the last few minutes, the focus (and our attention) had primarily been on Danny riding his bike around the hotel. However, the moment Danny turns the corner, the focus starts to shift towards the twins. This really adds to the precision of the imagery in this movie, which is what makes it stick with you.

So, how do we get this to work with games? It actually works quite similarly to how we would implement HDR Bloom. To break it down:

1. We render our original scene

2. We then render that scene onto our DOF FBO

3. We down sample the DOF FBO

4. Apply a Gaussian blur to the DOF FBO

We are then left with our original scene and our DOF FBO. Then, through the use of our depth buffer, we can determine where our focal plane is. If a pixel is around our focal plane we sample only from the original image; if it isn’t, we sample from our DOF FBO. However, just doing that isn’t enough! There would be no in-between from DOF to no-DOF. In order to get a smoother transition between pixels, we take a fixed number of samples of our DOF FBO and LERP between them.
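
Here’s a minimal sketch of what that final composite pass could look like (the texture and uniform names are my own, and a real version would want the fixed set of samples described above rather than a single pre-blurred tap):

    uniform sampler2D sceneTex;   // sharp, full-res scene
    uniform sampler2D dofTex;     // downsampled + Gaussian-blurred scene
    uniform sampler2D depthTex;
    uniform float focalDepth;     // depth of the focal plane
    uniform float focalRange;     // how far around the plane stays sharp
    in vec2 texcoord;
    out vec4 fragColour;

    void main()
    {
        float depth = texture(depthTex, texcoord).r;
        // 0 at the focal plane, 1 once fully outside the range
        float blur = clamp(abs(depth - focalDepth) / focalRange, 0.0, 1.0);
        vec3 sharp   = texture(sceneTex, texcoord).rgb;
        vec3 blurred = texture(dofTex,  texcoord).rgb;
        fragColour = vec4(mix(sharp, blurred, blur), 1.0);  // the LERP
    }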

An example of DOF in games can be seen in the game Dark Souls.

Game Progress

Over the weekend I managed to get quite a bit done for our game. I finally managed to get two major features working, so I feel a little less stressed out! (still stressed though)

So we now have a working collision mask for our level, which affects all enemies, bullets, and objects. And we have a neat spawning system!

So, basically how the spawning system works is I have nodes placed all around the map. When the player comes within range of a node, something will spawn. However, the thing it spawns depends on the current “pace” of the game. If the game just started, the pace is set to 2. This means that the spawn system can pull from the “first tier” pool of “groups”. A first tier group can be 1 small enemy, a couple of crates, or 1 small enemy and 1 crate. Once the node has spawned something, the pace increases by 1. As the pace increases, the spawn system can then spawn things from the next tier of groups, which contains tougher enemies.

On top of this, I also have the spawn system checking the player’s health. If the player’s health is below 40, there is a high chance that the system will spawn a health kit. I also put in another variable which checks how many times the system has spawned enemies. If the system has spawned more than 8 enemies in a row, there is an 8/10 chance that the system will spawn either a new weapon or nothing at all. This was implemented just to give the player some “quiet time”. Overall I’m pretty happy with this spawning system; I spent an entire day (I literally worked on it from 12pm until 2am) getting it to work in a way that I liked. It’s certainly not perfect, but I feel it works well enough and actually surprises me from time to time. As much as I’d like to spend more time perfecting my system, I also have to start writing some reports!
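
To give an idea of the flow, here’s a stripped-down sketch of the node logic (all the names, helpers, and exact probabilities are illustrative stand-ins, not the actual Titanum code):

    #include <cstdlib>

    struct SpawnNode { float x, y; bool used; };

    // hypothetical hooks into the rest of the game
    bool inRange(const SpawnNode& n, float px, float py);
    void spawnHealthKit(float x, float y);
    void spawnWeapon(float x, float y);
    void spawnGroupFromTier(int tier, float x, float y);

    static int pace = 2;          // the game starts at pace 2
    static int enemyStreak = 0;   // consecutive enemy spawns

    void trySpawn(SpawnNode& node, float playerX, float playerY, int playerHealth)
    {
        if (node.used || !inRange(node, playerX, playerY))
            return;

        if (playerHealth < 40 && rand() % 10 < 7)
        {
            spawnHealthKit(node.x, node.y);              // likely mercy spawn
        }
        else if (enemyStreak > 8 && rand() % 10 < 8)
        {
            if (rand() % 2) spawnWeapon(node.x, node.y); // a weapon... or nothing
            enemyStreak = 0;                             // "quiet time"
        }
        else
        {
            spawnGroupFromTier(pace, node.x, node.y);    // pull from the current tier
            ++enemyStreak;
        }

        ++pace;           // later nodes pull from tougher tiers
        node.used = true;
    }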

Videos below! Please ignore the fact that the map looks weird; it isn’t triangulated yet because other group members are currently UVing the map.

Some Gameplay:

http://www.youtube.com/watch?v=742_CX-6dwU&feature=youtu.be

Here’s another video where you can actually see the locations of the spawn points (and see me die horribly at the end):

http://www.youtube.com/watch?v=Lpn9TxAE4ZI&feature=youtu.be

Blog 8: Titanum Progress

Sorry I missed last week’s blogpost! Midterms were crazy and there wasn’t really much I felt like talking about. However, I’ve managed to get quite a bit done for the GDW game in the past couple weeks so I felt like sharing!

– Environmental Objects (boxes) which are affected by velocity and have different “weight” classes

– Enemies collide with other enemies and environment objects

– Weapon pick-up + switching system (picking a weapon off the ground will replace your current weapon)

– HUD (though this is going to get replaced by the final assets soon)

– Bloom/box blur (box blur is actually used on its own during gameplay, though regular bloom is being used on our main menu)

– Worm-like boss

– Game completion trigger

[Image: the worm-like boss]

I managed to get quite a bit done! What I’m most proud of is definitely the boss, and the “blurring” when the player shoots.

What makes the boss so interesting is how it moves. With the system that I currently have set up, every “sprite” in the game is affected by velocity. This helps the game feel dynamic and satisfying to move around in. Playing around with this same idea — and incorporating some ideas from our motion hierarchy lessons during animation last semester — I started the basis for the boss class.

Basically when the boss’ “active” function is called, it has to be given a “leader” which it will follow.

The rotation of the boss is then based on the location of its leader (as a side note, I multiply this by PI/180 so we can convert to radians).

The direction the boss is looking is then added to its velocity! This gives the boss a “worm” or “snake” like movement style which looks really cool in action. Here’s a video of the boss in motion:

In a way our boss behaves similarly to a skeleton (in motion hierarchy terms), in that each segment’s position is based on the position of its parent, which is just a node. This skeleton hierarchy is a method used to make animating 3D models more manageable, as we can easily set the location of specific joints to be relative to other joints.
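
To make that concrete, here’s a loose sketch of the follow-the-leader update (the names and constants are simplified stand-ins for the actual class):

    #include <cmath>

    struct Segment
    {
        float x = 0, y = 0;         // position
        float velX = 0, velY = 0;   // velocity
        float rotDeg = 0;           // rotation, stored in degrees
        Segment* leader = nullptr;  // the node/segment this one follows
    };

    void updateSegment(Segment& s, float speed)
    {
        if (!s.leader) return;

        // face the leader (atan2 gives radians; store as degrees)
        s.rotDeg = std::atan2(s.leader->y - s.y, s.leader->x - s.x) * (180.0f / 3.14159f);

        // push velocity along the facing direction (degrees -> radians)
        float rad = s.rotDeg * (3.14159f / 180.0f);
        s.velX += std::cos(rad) * speed;
        s.velY += std::sin(rad) * speed;

        // damping so each segment "slides" after its leader
        // instead of accelerating forever
        s.velX *= 0.9f;  s.velY *= 0.9f;
        s.x += s.velX;   s.y += s.velY;
    }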

We’re hoping to improve the movement a bit so the boss doesn’t collide with its own body. However, I told my group that I’ll save that for later so I can focus on getting the major features of the game completed first. Besides, I think it looks pretty cool how it is!

[Image: the box blur effect during gameplay]

I was also pretty happy with how the blurring came out in our game! It’s an effect that only happens when the player shoots, but I feel it gives an extra impact to your shots, which makes combat so much more satisfying.

This is a pretty simple post-processing effect which uses a simple box blur kernel.
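
For reference, the whole pass can be as small as this (uniform names are my own):

    uniform sampler2D sceneTex;
    uniform vec2 pixelSize;   // 1.0 / resolution
    in vec2 texcoord;
    out vec4 fragColour;

    void main()
    {
        vec3 sum = vec3(0.0);
        for (int i = -1; i <= 1; ++i)
            for (int j = -1; j <= 1; ++j)
                sum += texture(sceneTex, texcoord + vec2(i, j) * pixelSize).rgb;
        fragColour = vec4(sum / 9.0, 1.0);  // box blur: every sample weighted equally
    }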

I’m hoping that I can switch this out for a better Gaussian blur so the effect looks smoother. The difference between a box blur and a Gaussian blur is that, with Gaussian, the center of the kernel has the highest weighting while the points further from the center are weighted less; a box blur weights every sample equally.

So… yeah. I’m super happy with how the game is turning out, but I’m also feeling pretty overwhelmed since I’m the only one in my group programming and there’s lots of assignments and final reports coming up… I’m REALLY hoping to go to Level Up, but there’s still a number of major things that need to get done.

What I’m really concerned about is getting a collision mask working (I had some problems with this last semester so I’m a bit concerned), creating a decent particle system, and then finally creating a “spawning system” which lets me place enemies around certain nodes when the player is nearby… then of course I need to spend time polishing and pacing the game just right!

So while stress is kicking in, I also have to admit that I’m enjoying myself quite a lot! The last time I felt this excited about a game was when I first started programming games in High School, so it’s good to have that feeling back again!

Here’s a couple videos showing off what our game currently looks like:

Collisions and Weapons:

100 Enemies:

7th Blog Post: Starting Over…

So, aside from studying I dedicated a good portion of the week to COMPLETELY restarting our GDW Game from scratch. This is something I had been dreading for a while, but… when I finally started to do it… I actually had a lot of fun.

This came about because last semester I coded the game very poorly. It was super unorganized, and there were a lot of redundancies that I forgot to get rid of. On top of this, the game itself didn’t quite have the “feeling” that I wanted, as it felt kind of clunky in places. Of course, another major reason for starting over was the fact that I wanted to properly implement Shaders in our game.

Game Feel / Movement

Last semester I had implemented velocity/subtle physics-based movement for the player; however, this wasn’t as apparent as I would’ve liked. When playing the game you would notice a slight build-up in speed as you began to walk, but this always felt a little awkward to me. On top of this, whenever the player shot their gun the player would get knocked back slightly. This never quite felt right, as the player would instantly get knocked back. Another major problem was the “dash” ability. This was kind of a last-minute addition, and it wasn’t until our professors came to look at our game that I realized how weird it was. The dash caused the player to literally move position by a large number of pixels. This meant that there was a really awkward transition, as the player would just teleport a large distance without any real indication of what happened.

When remaking the game I put a much greater emphasis on the physics of the player movement. Now, whenever the player shoots their gun, they get knocked back by a certain amount of velocity. This causes the player to “slide” a bit. I feel this does a good job of selling the power of the player’s weapon, as I always imagine that the “slide” represents the player trying to regain their posture after shooting.

I also did something very similar to the “dash” ability. Now instead of being teleported, the player actually slides quickly. Think of it as an object being pushed on ice very quickly. I can see this being the result of some sort of “jet pack” on the player’s back. I found that, as it is, the dash is actually quite fun to move around with.
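
As a rough sketch of both ideas (the names and tuning numbers are made up, not our actual code):

    #include <cmath>

    struct Player { float x, y, velX, velY; };

    void onShoot(Player& p, float aimRad)
    {
        const float kick = 3.0f;              // recoil strength (tuning)
        p.velX -= std::cos(aimRad) * kick;    // push opposite the shot
        p.velY -= std::sin(aimRad) * kick;
    }

    void onDash(Player& p, float moveRad)
    {
        const float burst = 12.0f;            // one big impulse instead
        p.velX += std::cos(moveRad) * burst;  // of teleporting
        p.velY += std::sin(moveRad) * burst;
    }

    void integrate(Player& p, float friction = 0.9f)
    {
        p.x += p.velX;   p.y += p.velY;       // move by velocity...
        p.velX *= friction;                   // ...then bleed it off so
        p.velY *= friction;                   // the player slides to a stop
    }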

Overall, while I can see some people disliking the movement because it feels too “floaty”, I think the “floaty” aspects of the movement are actually an important differentiator for our game. It can be quite fun to mess around with, and it gives the game a much more dynamic feeling.

Shaders

So, at the moment there really isn’t anything too impressive going on with Shaders; the important part right now is just that we have them implemented. I am happy to say that we do have per-pixel lighting in our game though (finally)! This was something I was dying to have in our game since last semester, as per-vertex lighting made the game look kind of ugly.

Notice that one of the polygons in our level is dark while the surrounding polygons are lit.

Build from last semester, per-vertex lighting

Per-pixel lighting!

Sprites Using Shaders

One major concern I had when starting the game over from scratch was getting the character sprites to behave like they did in last semester’s build. Last semester, the characters in our game were literally just texture-mapped quads (they still are)! However, in order to make only the “texture” part of the quad visible, I relied heavily on an Alpha Test function built into OpenGL:

    glEnable(GL_TEXTURE_2D);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5);

Unfortunately, this didn’t work in the newest build. In order to get around this, I put a little extra code within the lighting Shader:

    vec3 tex = texture(color_texture, data.texcoordObj).rgb;

    if (tex.g == 1.0)
    {
        discard;
    }

What this does is discard any pixel whose green channel is fully saturated, which on our sprite sheets only happens on the pure green (R:0, G:255, B:0) background. So our new sprite sheets look like this now:

[Image: sprite sheet with pure green background]

Another silly thing I did was create a second lighting Shader JUST for the character sprite. This Shader behaves exactly the same as our per-pixel lighting Shader, however I altered the colour of the light.

    vec3 diffuse = Lambert * lightCol * vec3(2.8, 2.2, 2.0) * objectCol;

Now, when the colour of the light is calculated onto the sprite, there is a slight hint of red/orange. My reasoning is that it actually highlights the player more in the environment, and puts an emphasis on the orange visor on the player’s helmet. It’s a very tiny detail but I feel it gives the player a bit more depth and adds to the visual style.

[Image: sprite shader comparison]

Video Comparisons

Here’s video footage of Titanum so you can see the differences! (They’re unlisted videos so you may have to click the direct YouTube link to view them)

6th Blog Post: Brutal Legend’s Particle Effects & More Homework

So this week during the lectures we covered the effects in Brutal Legend, as well as a review of everything we’ve gone over so far. The Brutal Legend talk was particularly interesting to me because of Double Fine’s extensive use of particle effects.

Particle Lighting

Traditionally, to set up proper lighting effects onto objects in our world, we need to access the object’s normals. In the case of particles, since they are essentially just basic quads, we can just apply some basic normals and be done with it… right? Unfortunately, the folks at Double Fine realized that this didn’t quite work the way they wanted it to.

Doing this meant that the particles didn’t have any sense of volume, as they appeared quite flat. On top of this, since we’re computing so many normals at once it’s a lot slower to render.

A method they used to overcome this was to push back the normals of their particles. This is called Center-To-Vert Normals.

This is fairly similar to calculating the normals of a sphere! This gives each particle a greater sense of volume and is also cheaper to calculate.
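
The computation itself is tiny; in a particle vertex shader it could be as simple as this (the attribute names are my own guesses at the setup):

    in vec3 vertexPos;       // corner of the particle quad (world space)
    in vec3 particleCentre;  // centre of the particle, passed per vertex
    out vec3 normal;

    void main()
    {
        // point the normal from the particle's centre out through this
        // corner, as if the quad were a patch of a sphere
        normal = normalize(vertexPos - particleCentre);
        // ...regular position transform continues as usual...
    }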

Finally, much like how we get the final lighting calculation by combining our ambient, diffuse, and specular calculations, Double Fine calculates its own lighting components in order to get the final result of particle lighting.

Fluid Particles / Ink Effects

In Brutal Legend, one of the factions that the player comes across (and can play as in multiplayer) is the Drowning Doom. This faction is based on more gothic themes, and so a big part of their unique visual style was the use of “Ink” effects. This was a difficult task for the visual effects team.

In order to get the effect they wanted they knew that they needed fluid-like shapes that would break up over time, as well as getting a good silhouette out of the particles.

The desired result would finally be achieved through what they call Ink Potential.

Essentially they take their basic particle system, then add a subtractive bloom to it. This bloom gets the particles to almost “form” together, and it’s here that we can see the start of fluid-like effects. In order to actually achieve the final effect, they apply a threshold to the particles. This basically selects a specified range of values to keep as the final particles.

The final result is then the combination of two thresholds. The addition of a second threshold is what gives the ink an extra sense of “wispy”-ness.

Homework Progress, Toon Shading, Motion Blur, Accident of the Week

This week in the tutorial we covered non-photorealistic lighting! Dan initially went over how to apply a texture clamp onto our VBOs through shaders (obviously). Like what we covered in class, this involved finding the dot product between the Normal and the Light source, then applying a clamped texture. However, he went into more detail after that and showed us how to do better outlines through post-processing!
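
A minimal version of that ramp-based toon shading might look like this in the fragment shader (the uniform names are assumptions, not the tutorial’s actual code):

    uniform sampler1D rampTex;  // few-band ramp texture, clamped
    uniform vec3 lightDir;      // normalized, surface -> light
    in vec3 normal;
    out vec4 fragColour;

    void main()
    {
        float NdotL = max(dot(normalize(normal), lightDir), 0.0);
        float band  = texture(rampTex, NdotL).r;  // the clamp quantizes the shading
        fragColour  = vec4(vec3(band), 1.0);
    }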

Post-processing outlines involved using Sobel filtering. Sobel filtering is basically a form of edge detection that uses normal and depth textures in order to generate an edge texture.
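
As a sketch, a Sobel pass over just the depth texture could look like this (a fuller implementation would combine depth and normal edges; the names and threshold are mine):

    uniform sampler2D depthTex;
    uniform vec2 pixelSize;     // 1.0 / resolution
    in vec2 texcoord;
    out vec4 fragColour;

    float d(vec2 off) { return texture(depthTex, texcoord + off * pixelSize).r; }

    void main()
    {
        float gx = -d(vec2(-1,-1)) - 2.0*d(vec2(-1,0)) - d(vec2(-1,1))
                 +  d(vec2( 1,-1)) + 2.0*d(vec2( 1,0)) + d(vec2( 1,1));
        float gy = -d(vec2(-1,-1)) - 2.0*d(vec2(0,-1)) - d(vec2(1,-1))
                 +  d(vec2(-1, 1)) + 2.0*d(vec2(0, 1)) + d(vec2(1, 1));
        float edge = step(0.02, length(vec2(gx, gy)));  // threshold is a guess
        fragColour = vec4(vec3(1.0 - edge), 1.0);       // black outlines
    }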

After we got this working, I collaborated with one of my GDW members (Ethan Ranalli) and we finally got our OBJ Loader from last year working with VBOs. The end result was this:

[Image: toon-shaded Redead]

I made a couple tweaks, and for some reason this Redead almost looks like it could be in the game Okami!

… sorry.

Aside from just Toon Shading, I also worked on Motion Blur. At the moment I only have motion blur working with the accumulation buffer built into OpenGL (which was stupidly easy to implement).

[Image: Redead with accumulation-buffer motion blur]

Later this weekend I’m going to be trying to get motion blur working as an FBO…

Before I end this week’s blog I guess I also might as well show off another weird effect that I made by accident!

[Image: Redead with an interesting shine]

This was me being dumb and applying the Blinn-Phong lighting model I created. It gave the Redead an interesting shine.

[Image: Redead with a golden shine]

Then, on top of the Blinn-Phong, I applied the inverse dotNormal (from the accidental effect I made last week). This gave the Redead a golden shine! The effect doesn’t look amazing, but I thought it was neat nonetheless.

That’s it for this week!

5th Blog Post: Zelda, Shadow Mapping, Homework progress

This week our lectures went into detail about advanced effects that we can implement in our games, the primary focus being on Shadow Mapping.

Shadow Mapping is super important in games, not only for aesthetic purposes, but especially in terms of design. Without any form of shadow mapping, the spatial relationship of the objects, characters, etc. within our game world is lost. Of course, we can use simple techniques like texture mapping a dark circle beneath an asset to give it some sense of depth, though this method isn’t ideal.

We can see the use of circle texture mapping on the wolf here. However, if you look closely you’ll notice that Link actually has two “plane”-like shadows coming from his feet. From what I remember, the shadows coming from Link’s feet are the only shadows in the game that are affected by the light source in the current area.

I thought this was an interesting little effect. I don’t believe this is actual Shadow Mapping, but just a clever use of texture work. When you’re in Hyrule Field, I remember the shadows from Link would shift as the day progressed. What I think is going on is that, much like the circle mapping, the developers just textured quads at the location of Link’s feet. In this case, I think they may have mapped two separate quads, one for each foot. As the day goes on, the rotation of the quads shifts to reflect the location of the Sun.

Something I thought was interesting, is that it seems that even the 3DS remake uses the same shadow mapping techniques!

Shadow Mapping Technique

Actually achieving proper Shadow Mapping is a little more complicated than just texture mapping a circle beneath a character, however! First, we make a pass from the light source’s viewpoint, using the z-buffer to record the depths of the “blockers” (things which cast shadows) in the scene. That depth buffer becomes our shadow map.

Next, we make a second pass from the observer’s/player’s viewpoint. It’s here that we compare the depth of each pixel against our shadow map. Basically, if (Zpixel > Zsmap) then the pixel is in shadow!
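
In shader terms, that comparison boils down to something like this (my own names; shadowCoord is the fragment’s position from the light’s point of view, after the usual bias matrix):

    uniform sampler2D shadowMap;
    in vec4 shadowCoord;

    float shadowFactor()
    {
        vec3 proj    = shadowCoord.xyz / shadowCoord.w;  // -> [0,1] texture space
        float zSmap  = texture(shadowMap, proj.xy).r;    // blocker depth
        float zPixel = proj.z;                           // this fragment's depth
        // the small bias fights "shadow acne" from depth precision
        return (zPixel - 0.005 > zSmap) ? 0.5 : 1.0;     // in shadow -> darken
    }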

[Image: shadow mapping diagram]

Looking at the more recent Zelda, Skyward Sword, we can see that the above shadow mapping technique was probably implemented on the Bokoblin (red guy) that Link is fighting here. However, you may also notice the shadows for the Keese (bats) are just circles!

Given the more stylized aesthetic to Skyward Sword, these shadows still look fine in the world, however it’s interesting to see a combination of circle shadows as well as more advanced shadow mapping. Another interesting thing to notice is how blurred the shadows are! There seems to be very little aliasing.

Smoother Shadows

Aliasing on shadow maps is a very common problem. One method that people can use to overcome this is by raising the resolution of the shadow map… however this is very GPU intensive! A better solution would be to use Percentage Closer Filtering with Edge Blending. PCF is sort of like applying a Gaussian blur dependent on the shading location. Much like Gaussian blur, PCF also uses a kernel (like a matrix which is used to assign certain “weightings”).

We use a kernel like this (though with different weightings depending on the pixel’s location) to apply a softness to the edges of a shadow. However, PCF by itself can still lead to some aliasing. This is where Edge Tap Smoothing comes in! With this we basically look at the sub-texels (texture pixels) and apply a weighting to the PCF kernel based on our sub-texel offset.
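
Here’s what a basic 3×3 PCF tap could look like, with equal weights as a starting point (my own names again; a real kernel would weight the taps as described above):

    uniform sampler2D shadowMap;
    uniform vec2 shadowTexel;   // 1.0 / shadow map resolution
    in vec4 shadowCoord;

    float pcfShadow()
    {
        vec3 proj = shadowCoord.xyz / shadowCoord.w;
        float lit = 0.0;
        for (int i = -1; i <= 1; ++i)
            for (int j = -1; j <= 1; ++j)
            {
                float zSmap = texture(shadowMap, proj.xy + vec2(i, j) * shadowTexel).r;
                lit += (proj.z - 0.005 > zSmap) ? 0.0 : 1.0;
            }
        return lit / 9.0;   // 0 = fully shadowed, 1 = fully lit
    }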

Messing Around With Shaders

Aside from the lecture, I also spent a good amount of time working on homework questions.

[Image: objects with rim highlighting]

It’s pretty simple stuff; I just applied a basic lighting model to 3 objects. The above image is using the Blinn-Phong reflection model, as well as my own implementation of rim highlighting!

The Blinn-Phong model is rather similar to Phong; however, instead of getting the dot product between the Viewer and the Reflected ray, we instead find the dot product of the Halfway vector (between viewer and light source) and the Normal.
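
Inside the fragment shader, the difference from Phong is just a couple of lines (sketched with my own variable names):

    vec3 N = normalize(normal);
    vec3 L = normalize(lightDir);   // surface -> light
    vec3 V = normalize(viewDir);    // surface -> viewer
    vec3 H = normalize(L + V);      // the Halfway vector
    float spec = pow(max(dot(N, H), 0.0), shininess);  // vs. dot(R, V) in Phong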

For the rim highlighting, I used a method similar to Toon shading. From a previous lecture, I knew that finding the dot product of the Normal and the Viewer would tell us if the surface of our object is aligned with our Viewport. I also knew that values close to 0 are edges!

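The shader math was roughly this (a loose reconstruction, so treat the names and exact values as approximate):

    float NdotV    = max(dot(normalize(normal), normalize(viewer)), 0.0);
    float outline1 = 1.0 - clamp(NdotV + 0.4, 0.0, 1.0); // edge band shifted one way
    float outline2 = 1.0 - clamp(NdotV - 0.4, 0.0, 1.0); // ...and the other
    fragColour.r  += outline1 + outline2;                // red rim highlight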

Adding 0.4 and subtracting 0.4 basically allows the edges to “bleed” into the inside of our objects. I simply then add outline1 and outline2 to fragColour.r to add red rim highlighting to the objects.

Lastly, I figured I’d show off an effect that I made completely by accident!

[Image: the accidental effect]

Essentially, I started with the dot product of the object’s normal and the object’s position. I normalized this value, and then subtracted it from 1 (this gave me the inverse value). I then added the inverse value to fragColour.r, and it gave me this effect! I thought it was kinda neat looking.

That’s all for today!

Fourth Blog Post: Dungeon Pals (48 Hour Game Jam)

So this past weekend I spent 48 hours diving back into Adobe Flash so I could program a quirky little dungeon crawler called Dungeon Pals. We were given a theme for this Game Jam… but… I decided not to follow it. For the past 2 months or so, I’ve been itching to remake a game I made back in High School, but due to a lot of schoolwork I decided to put it off. Of course, the Global Game Jam was the perfect excuse to finally get this idea out of my mind!

The Original Idea

Essentially, back in 2011 my computer science teacher gave me an assignment that involved making a 2 player game for an arcade cabinet. We were only given a week and I decided to work by myself. I was still pretty new to programming, but I ended up dedicating around 5 hours each day that week working on the game (ignoring all of my other assignments lol). The end result was Dungeon Bros! It was sort of a shooter, but at the beginning of the game you get to pick a class (which changes your damage, attack speed, special ability, and character’s aesthetic) and then roam through a dungeon upgrading your weapon and then finally confronting a boss.

[Images: Dungeon Bros screenshots]

I thought this was a pretty good idea to build upon, so I got together with my GDW group and we decided to try to give the idea a shot!

The 48 Hour Game

The result of our jamming was Dungeon Pals! (Now gender ambiguous!)

[Image: Dungeon Pals screenshot]

Something I thought was pretty neat about Dungeon Pals is how differently the two players control. One player, the “gun mage”, uses the arrow keys to move and WASD to shoot (Robotron style), as well as the space bar to activate a “mine”. The second player uses only the mouse, left-clicking to attack and holding left click to activate a special slash attack.

I think visually our game looks really great (thanks to Dario Van Hout and his girlfriend!), sounds great (thanks Zachary Clarke!) and, for the most part, plays fairly nicely. I think that the samurai is way too overpowered, but in my rush to get things done I didn’t really have time to polish this stuff up!

Unfortunately, because it’s been quite a while since I had used Adobe Flash and ActionScript, I had to relearn a few things. This led to a lot of my programming being a little wonky. There is a pretty big glitch that happens when you get to the 2nd and 3rd floors of the dungeon, where the “dark goblin” enemies will transform back into regular goblins after being hit. I think I know how to fix this… but I didn’t discover the glitch until there were only 40 minutes left! When I’m under that amount of pressure my brain tends to turn to mush, so I decided to just call it quits and hand the game in as is.

Conclusion

Overall, I was still super happy with how the game turned out. It’s fairly buggy, but I think it looks really great and still plays fairly well despite some balancing issues. I’m hoping that this game jam will help me feel more motivated to program for GDW. At the very least, I think it definitely helped our GDW group to connect more and feel more comfortable together as a game makin’ team!

http://www.globalgamejam.org/2014/games/dungeon-pals

Third Blogpost: Lighting and Silent Hill

In Monday’s lecture we reviewed lighting in games. A big emphasis was on how we can use lighting to affect the mood and emotional intensity of a scene. This immediately got me thinking back to the Silent Hill games… specifically Silent Hill 2!

So, Silent Hill 2 is one of my favorite video games, and it features some incredible lighting effects. I still find it hard to believe this game came out in 2001. There’s one specific moment in the game I want to talk about though. Near the beginning of the game, after running through the halls of a motel in darkness, the player will come across a room with the flashlight in it:

At this moment in the game, there is no music playing at all. Obviously, the bright light (especially after running around in the dark for a few minutes) puts a big emphasis on the dress. This is actually a really important detail in terms of the game’s story, as it’s implied that the dress was the protagonist’s wife’s. Thus, the dress being emphasized in such a way almost shows how the protagonist is being guided by his desire to see his wife again. When I first played through this scene, I felt oddly relaxed by seeing this… but of course, since this is Silent Hill, this wasn’t a good time to relax…

Of course… there’s a monster behind the dress. As terrifying as this moment was when I first played it, it also shows off the impressive real-time dynamic lighting/shadows and just how powerful lighting is when setting a mood. The light initially puts an emphasis on the dress, but the moment we pick up the flashlight, our attention immediately switches to the monster. Due to the limited capabilities of the PS2, lighting like this was very uncommon. A major reason why Silent Hill 2 was able to achieve this was, in part, the design decision to keep environments small and camera angles (mostly) fixed. In fact, even when the player is exploring a large open area (the town of Silent Hill), the environment is embedded in a deep fog, which not only adds to the claustrophobic atmosphere but also allows the developers to render small chunks of the world at a time. This way the developers were able to keep relatively high polygon models.

Now, back to lighting! After booting up the game again on my PS2 and loading an old save file, I decided to try playing around with the lighting effects in Silent Hill 2. They still really hold up and — initially — I thought that they were using some form of per-pixel lighting. However, when I walked down a specific hall in the game, I noticed that the lighting was a little jagged.

A little difficult to see, but you can see a bit of jaggedness to the lighting on the walls.

I’m thinking that the developers instead used some form of per-polygon lighting, but due to the high poly-count of most of the models and environments it ends up looking really great most of the time. I thought this was kind of funny, since Silent Hill 1 (on Playstation 1) also did this, though it’s a little more obvious.

Another little thing I noticed with shadows is that the game seems to take a very sharp outline of each model and map it onto the environment.

Notice the sharpness of the (human) woman’s shadow.
http://silenthill.wikia.com/wiki/File:SilentHill2_10.jpg

Shadow mapping is done by rendering the scene once from the light’s position and again from the final camera position. Then, through a depth map test, the game is able to get the actual “shadow” locations of the models. These shadows are then properly mapped onto the environment. Now, while the polygon count of Silent Hill 2’s models is fairly high, the actual silhouettes of the models aren’t as detailed. This leads to the shadows appearing a little more jagged… at least that’s what I think.

While certainly not perfect, the game is still able to use its lighting effects to help emphasize a sense of claustrophobia and uneasiness within its world. In fact, it’s kind of brilliant that Team Silent was able to take the limitations of the Playstation 2 hardware and use it to complement the game’s atmosphere. Running through the thick fog that engulfs Silent Hill is such an incredibly disorienting and terrifying experience, and slowly lurking through dark and cramped hallways with nothing but your flashlight always fills me with a sense of anxiety. It really goes to show how much setting a mood/atmosphere can affect your game!

Second Blog Post

Lecture Overview

In the first lecture this week we mostly covered review of the graphics pipeline, though this review definitely helped me to visualize and understand the pipeline better than I had before.

Essentially, our graphics pipeline is what allows us to go from vertex data to our final image. We take our vertex data, which can comprise anything from x,y,z values to RGB values. In general, our vertices are just points in space. These are then sent to our vertex shader, which allows them to be processed by the triangle assembly. The triangle assembly is basically where we connect our vertex points to form triangles. We then send our triangles into our rasterizer, which finds the potential pixels for our primitives. At this point we essentially go from an infinite resolution to a finite resolution. This then goes into our fragment shader, and can finally be processed by the frame buffer (the buffer that holds all the data like pixels and colours), which produces the final image.

Something that we went over in class was how OpenGL streams data. We talked about an analogy which compared the graphics pipeline to organizing a line of people at a grocery store. In this case, since we now have access to a programmable pipeline, it’s as if we are creating the line ourselves. This means we can make the line-up as efficient as possible: a line-up of people who each buy one thing only and all pay with cash. Allowing things to be “packed” in a particular way lets us stream the data through the pipeline much more easily. We went into a little more detail about this when we talked about how to draw an object in retained mode.

We first went over the inefficient method of doing so; this involved creating a single array which contained the vertices of all the triangles we were using. It looked something like this:

Vertices[v0,v1,v2, v1,v3,v2, v2,v3,v4, v3,v5,v4]

The problem with this is that we are declaring the same vertices multiple times, which roughly doubles the memory cost of the array. Instead, we create one array which contains each vertex only once, then an index buffer which lists the vertices in drawing order.

Vertices[v0,v1,v2,v3,v4,v5]

indexBuffer[0,1,2, 1,3,2, 2,3,4, 3,5,4]
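
Here’s what the same idea looks like with actual buffer objects (a sketch with my own names, assuming a GL context and GLEW are already set up, and skipping the attribute-pointer setup):

    #include <GL/glew.h>

    // the same six unique vertices, stored once (placeholder positions)
    GLfloat vertices[] = {
        0,0,0,  1,0,0,  0,1,0,
        1,1,0,  0,2,0,  2,1,0
    };
    GLuint indices[] = { 0,1,2,  1,3,2,  2,3,4,  3,5,4 };

    GLuint vbo, ibo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    // draw all four triangles in one call; the index buffer re-uses
    // each shared vertex instead of duplicating it
    glDrawElements(GL_TRIANGLES, 12, GL_UNSIGNED_INT, 0);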

In the second lecture this week we went into a little more detail about textures, as well as some neat tricks using texture sampling. One thing that kinda helped me to understand how shaders work is the idea of textures just being an array of data. This can be applied directly to our fragment shader, as it just stores the final colours of our scene.

A really interesting part of the lecture was when we went into detail about how the game Forza (or almost any racing game, for that matter) gets a perfectly reflecting rear-view mirror. Basically, we have a second camera looking from behind the car and then apply that camera’s view as a texture sample to the rear-view mirror. Now, this does mean that we have to render the game twice; however, we can cut some corners by not rendering the actual car, or even not rendering what’s in front of the car.
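
In rough GL terms (the FBO/texture setup and the helper functions here are hypothetical, just to show the flow):

    extern GLuint mirrorFBO, mirrorTex; // colour texture attached to the FBO
    void setCameraBehindCar();
    void drawScene(bool skipCar);
    void drawMirrorQuad();              // quad textured with mirrorTex

    void renderMirror()
    {
        glBindFramebuffer(GL_FRAMEBUFFER, mirrorFBO); // render target = texture
        setCameraBehindCar();
        drawScene(/*skipCar=*/true);                  // corner-cutting: omit the car
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

        glBindTexture(GL_TEXTURE_2D, mirrorTex);
        drawMirrorQuad();                             // second view on the mirror
    }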

Game Progress & Programming

In terms of progress on our game, we spent a good portion of the week reorganizing a lot of our code. While I tried my absolute hardest last semester programming the game and getting all the basic features in a workable manner, I also slacked off a bit in regards to organization and efficiency. There are a lot of portions of code which are redundant or just written in a way that is rather messy. I’ve been trying to work on creating better classes and making new classes for aspects of the code which I overlooked. Now that there isn’t as much stress on me, I find it a lot easier to go through my code and redo things.

I’ve also been making small tweaks to the game’s feel and look, just to experiment. I remember we had some people tell us that our game was too dark, so I just brightened up the spotlight. This made a really big difference… I’m not too sure why I thought having the game be so dark was a good thing in the first place… On top of this I made some small adjustments to the game’s AI. Before, the AI would literally go on top of the player if they were too close. Visually, this looked really bad, so I reworked it. Now, when the enemy gets within a certain range of the player, the enemy will stay at that range. I also added a slight knockback for when the enemy gets hit. These are little details, but I feel they make the combat feel more satisfying overall.

Looks a little better.

Another thing I did this week was experiment with VBOs and Shaders using the framework that Dan is helping us create.

By the end of the tutorial, we had this.

Changes I made just by playing around with the shader code.

Super simple, but I just altered the z value of the vertices to stretch out the cube. Then I added and subtracted the st (UV) values to alter the colour range.

I actually spent a good portion of time trying to work ahead. I attempted to create a Gaussian blur, as well as “toon” shading, using some tutorials I found online. However, I was having a little bit of trouble figuring out how to “blur” the object, so I wasn’t able to get this effect to work. I’m hoping to look over this stuff some more (assuming I can get my Accounting assignment done soon…), and hopefully I’ll have something more interesting to show!

First Blog Post – Liam Svirk 100491248

Ah, a decent start to a new year after another reclusive break from society.

Unfortunately, since our group’s formation in September we lost one of our members. To be honest though, it wasn’t really much of a loss… but it’d be nice to get a 5th member who could actually help out! Regardless of whether we can get a 5th member though, I think our group will still be able to make it.

I was fairly satisfied with how our game turned out by the end of the semester. It’s called Titanum: a top down shooter with lite survival horror elements. However, I feel that there is still quite a lot of work to be done. Given that we really only have 4 months to complete our game, I feel that re-scoping our initial idea is going to be crucial to our success. We originally planned for our game to take place over 6 levels in a somewhat open-ended fashion. We wanted to put a great emphasis on exploration and slower combat, very similar to a game called Teleglitch. But, given our time constraints, I’m thinking we may have to turn the game into something that is more easily manageable. I still have to talk about this with my group, but perhaps changing the game’s structure to that of a more wave-based survival game may help keep us from overwhelming ourselves. We’ll have to see though!

After the first graphics lecture I was feeling pretty excited about learning more about shaders and thinking of different ways to improve our game’s aesthetic. As it is, our game uses a combination of pixel art and 3D models. This idea is something that still really intrigues me, and I’m looking forward to taking it further as we delve deeper into different techniques. I think it’d be neat to add more modern effects like bloom (probably for the blinding light of an explosion), bump mapping (to give more detail to the environment), or motion blur to what appears to be — at first glance — a more pixelated 2D game. To give an indication as to what I hope to achieve, here’s some video footage of the game DwarfCorp [1], which is SOMEWHAT similar in style (though our game is significantly darker in tone).

Something I really like in that video is the real time water reflection of the 2D blimp at around 8 seconds into the video. It’s a great representation of classic pixel art being combined with modern graphical techniques. The whole game seems to also have some nice lighting effects and bloom too!

In general, when I think of computer graphics I’m really interested in how I can use visual effects to make an experience feel more satisfying. This is why one of my first goals (aside from actually implementing shaders) is to create a proper particle effect system for our game. I really wanted to add some subtle smoke and spark effects for when the player shoots, but I was more concerned about functionality at the time. Last year I added a slight position displacement to the character whenever the player shot their weapon. This gave the shooting a nice kickback and punchiness, though it really could’ve been so much better if I had added in some particles…

Screenshot from Spelunky (taken by me) showing a simple dust/smoke effect with a subtle level of blending. It actually looks a lot better while the game is running.

Later — through shaders — I’m also hoping to finally get some good lighting effects in the game! What we had before was passable, but because we were using old OpenGL, and thus per-polygon shading, the lighting wasn’t perfect. I’m hoping that through per-pixel shading + shaders we’ll not only be able to improve the spotlight on the player, but also be able to add more dynamic lighting effects (for when the player shoots). I think it would be really cool to have the pixel art characters cast shadows on the environment when the player shoots his/her weapon.

Notice that one of the polygons in our level is dark while the surrounding polygons are lit.

Very rough mockup (I’m not an artist!) of what I’d like to see out of lighting and particle effects.

Another thing I had planned, but never got around to due to time constraints, was giving an extra layer of variation to certain assets through visual effects. During one of our Intro. to Computer Graphics tutorials, we covered some effects that could be done with shaders. One of these effects was adding a sort of “inner glow” to models. I’m hoping to add a simple upgrade system to our game which will increase the damage of, or give a special effect to, the player’s weapon. My thinking is that I could then use a shader to alter the model of the gun, which will give the player a better indication of their own progression.

Very rough mockup. Orange glow added to the weapon gives the player an indication of their own power.

All in all, I’m really excited to learn more in this class. I’m hoping by the end of the semester I’ll be able to have an interesting looking (and playing) game!

[1] DwarfcorpGame. “DwarfCorp: A Game of Ruthless Capitalism in a Fantasy World [Kickstarter Video].” Online video clip. YouTube, 23 July 2013. Web. 9 Jan. 2014.