Geneva Lost Weekly Reflection – Week 10, 2016

Artifact: SHADERS!!!

I must confess something. I am in love. I’ve known about and worked a little with shaders before, but it wasn’t until now, sitting down and actually revisiting the area, that I remembered the beauty that is shaders.

For those of you who do not know, shaders are used for real-time post-processing effects in video games (there are other areas of application, but games are the main purpose here). Since we’re working with SFML, which uses OpenGL, our shaders must be written in GLSL (the OpenGL Shading Language). The syntax is very simple, so it’s not hard to write in, but there is a learning curve, since there are things you need to know beforehand (certain built-in functionality and such). I will not bore you with the details, but it is a very powerful tool to make your game *POP*.

There are four types of GLSL shaders: fragment, vertex, tessellation and geometry shaders. Since we’re making a 2D game, I’m only looking at fragment shaders at the moment, seeing as all planned shaders operate on the pixels on screen.

So, let’s talk about what shaders we’re using, what we’re planning to use and, most importantly, why we want them.


The last couple of weeks we’ve talked a lot about feedback, and how we want as much as possible to be conveyed to the player through graphical feedback rather than explicit information. That’s where shaders come in.

Gray-scale shader:

We wanted to convey the player’s current health level through several coordinated effects. One of these is to decrease the saturation of the screen based on how low the player’s health is.

grayscale
Effect in effect

The current implementation also allows for modification of brightness and contrast, if that would be at any point desired.

The implementation is fairly simple. Brightness simply multiplies the color value of each pixel by a brightness factor. For gray-scale you take the red, green and blue values of each pixel and set them all to their average. Contrast, on the other hand, proved a bit more difficult. The concept itself is easy: you subtract the color gray from your current color value, multiply the result by your contrast factor and then add the gray color value back. The problem I stumbled upon was not knowing what values to use. Some color scales use integers in the interval 0–255 to represent a channel, others use hexadecimal values, and some use a floating point value between 0 and 1, so I had to do some guesswork, since the information seemed to be unobtainable despite my numerous, creatively constructed Google searches. It turns out that it varies. IT VARIES! WHY DOES IT VARY!? In this particular instance it turned out to be a floating point value between 0 and 1, and so the problem is now solved!

“Glitch” shader:

Since the game is styled as if seen from a satellite view, we want taking damage to temporarily “damage the up-link”, making the screen jitter as a result. Combined with a buzz effect using a simple random noise filter, the effect should feel authentic. This shader is not currently a high priority, since we have other things we need to complete first, but I would really love to get to do it.


Light shader:

We have scrapped this feature, but originally we intended to use dynamic lights in the game. Calculating the shape of the light would of course be done in C++, but applying the light to the environment is best done with shaders. Again, it’s not an incredibly difficult thing to achieve, and there are a lot of fun things that can be done with it, such as simulating 3D surfaces with normal maps. I might end up adding some simple lighting if time allows it, just to make it look extra good, but gameplay has priority.

Geneva Lost Weekly Reflection – Week 9, 2016

Artifact: Crawlers and spawn points


The crawler, which may be our only enemy in the game (due to “budget cuts”), is supposed to be a fast and annoying enemy that attacks in swarms.

When we first implemented the crawler it went straight for the player at a constant speed, dealing damage on contact. To make it more interesting we plan to give it some more dynamic behavior. The first thing we changed was to give it a maximum turning speed, so that when running towards the player it couldn’t just home in straight on them. This took a while to implement due to programmer stupidity: when comparing the angle towards the player with the enemy’s current rotation, I forgot to wrap the difference into the range −π to π, so the enemy couldn’t determine which direction to rotate.
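The wrapping step I missed can be sketched like this (a minimal C++ example; function names are mine, not from our actual code):

```cpp
#include <cmath>

// Wrap an angle into (-pi, pi]. Without this step the sign of the
// difference can't be used to pick the shorter turning direction,
// which was exactly the bug described above.
float wrapAngle(float angle) {
    const float pi = 3.14159265359f;
    while (angle > pi)   angle -= 2.0f * pi;
    while (angle <= -pi) angle += 2.0f * pi;
    return angle;
}

// Rotate 'current' towards 'target' by at most 'maxTurn' radians per step,
// always taking the shorter way around.
float turnTowards(float current, float target, float maxTurn) {
    float diff = wrapAngle(target - current);
    if (std::fabs(diff) <= maxTurn) return target;
    return current + (diff > 0.0f ? maxTurn : -maxTurn);
}
```

Calling `turnTowards` once per frame with the angle towards the player gives exactly the capped homing behavior described above.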

This also allowed us to give the crawler a higher speed than the player, making it almost like a rodeo: dodging a raging bull. Right now it still just runs around and deals damage whenever it’s close enough. What we’ll do instead is make it stop and do a swiping attack when the player is within a cone in front of it. It will also, on occasion, lunge at the player from a distance. We’ve discussed having different behavior patterns for the crawler, but being pressed for time we’ll have to omit that feature from the project. By slightly randomizing the properties of the crawlers their movement won’t be too uniform, making them feel more organic in the game. Some will be faster but turn slower, some will have slightly more health.

swarm
Crawlers swarming the player.

We’ve done some redesigning of the game to make it plausible to finish by our deadline, so instead of having one huge level, we’re going to restrict the game to a single location (Sergel’s Square), where the player will face wave upon wave of enemies. To achieve this, we’ll use spawn points for the crawlers that take in an integer value and spawn a pseudo-random number of crawlers based on it. This will allow us to go with either infinite waves or a set number of waves without having to modify more than a single variable. Each spawn point will have a spawning queue per wave and a timer between spawns, so that the crawlers don’t all spawn on top of each other. This is all we have on them at this time, but I’m pretty sure more will show up later on.
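As a rough sketch of the idea (all names and values are made up, not from our actual code), a spawn point along these lines might look like:

```cpp
#include <cstdlib>

// Hypothetical spawn point: queues a pseudo-random number of crawlers
// per wave and releases them one at a time, spaced out by a timer so
// they don't all spawn on top of each other.
class SpawnPoint {
public:
    // Queue up a wave: between 'base' and 'base + variance' crawlers.
    void queueWave(int base, int variance) {
        pending += base + (variance > 0 ? std::rand() % (variance + 1) : 0);
    }

    // Call once per frame; returns how many crawlers to spawn this frame.
    int update(float dt) {
        if (pending == 0) return 0;
        timer -= dt;
        if (timer > 0.0f) return 0;
        timer = spawnInterval;  // space out the next spawn
        --pending;
        return 1;
    }

private:
    int pending = 0;
    float timer = 0.0f;
    float spawnInterval = 0.5f;  // seconds between spawns (made-up value)
};
```

The single integer passed to `queueWave` is the one variable mentioned above: tuning it (or calling it in a loop) switches between a fixed number of waves and endless ones.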


FIRE
BONUS: Temporary flame thrower (May be a little overkill)


Geneva Lost Weekly Reflection – Week 8, 2016

Artifact: Collision

Collision is an important part of every game, but it can be very challenging. Some games require only simple collision detection: tile-based games make do with axis-aligned box-to-box collision detection, while many space-shooter type games require only simple circle collision detection.

Given that our game has free-form movement, we need a more advanced type of collision to make sure the game feels right. We could achieve this either with pixel-perfect collision detection or with convex polygon collision detection. Personally, I’m not a fan of pixel-perfect collision. It’s very limited, especially when considering animation, and doesn’t provide enough information for proper collision handling. Therefore we decided to go with convex polygon collision detection. We’re using the separating axis theorem, since it provides information about the collision that can be used when resolving it, rather than just a true or false value. I’ve used this theorem previously, but only with rotated rectangles. This time we’re using arbitrary convex polygon shapes. There isn’t too much difference, but circle-to-polygon collision detection will be harder to achieve than circle-to-rotated-rectangle collision detection.

Of course, before running this relatively expensive check between every pair of objects in the game, you want to do a broad-phase collision check. There are different ways to do this, but arguably the cheapest is to use axis-aligned bounding boxes.
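The broad phase really is as cheap as it sounds; two axis-aligned boxes overlap exactly when they overlap on both the x and the y axis (names here are illustrative, not from our actual code):

```cpp
// Broad-phase check: two axis-aligned bounding boxes overlap only if
// they overlap on both the x axis and the y axis. Only pairs that pass
// this cheap test move on to the full narrow-phase test.
struct AABB { float left, top, width, height; };

bool intersects(const AABB& a, const AABB& b) {
    return a.left < b.left + b.width  && b.left < a.left + a.width &&
           a.top  < b.top  + b.height && b.top  < a.top  + a.height;
}
```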

The Separating Axis Theorem (SAT) looks at the vertices of both polygons projected onto each axis derived from both polygons. This sounds advanced, but it’s a fairly simple process.

To get an axis from a polygon you take the edge between two subsequent vertices and rotate it 90°; the separating axes are the edges’ normals, not the edges themselves.

edge0 = v1 – v0, axis0 = (–edge0.y, edge0.x), edge1 = v2 – v1, axis1 = (–edge1.y, edge1.x) … edgen = v0 – vn, axisn = (–edgen.y, edgen.x)

To project a vertex onto an axis you need the formula

projectionn = axisn × ((vn · axisn) / (axisn.x² + axisn.y²))

Okay, so the math is getting rather involved, but we’re almost done here! We have the points projected onto the axis, but we need to know if, and by how much, the projections overlap. For this we use the dot product of each projected vertex and the axis

dotn = projectionn · axisn

then find the smallest and biggest dotn and compare them to the smallest and biggest dotn of the other polygon’s vertices projected on the same axis. If there is no overlap on any one axis, there is no collision. If we go through all the axes like this and keep track of the smallest overlap and which axis it was on, we can determine not only whether there is a collision, but also the smallest displacement needed to break it.
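Putting the steps above together, here is a minimal C++ sketch of the narrow phase (names and structure are mine, not from our actual code; the axes are normalized so the overlap comes out in world units):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Vec2 { float x, y; };

static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Project every vertex of a polygon onto an axis; keep the interval.
static void project(const std::vector<Vec2>& poly, Vec2 axis,
                    float& min, float& max) {
    min = max = dot(poly[0], axis);
    for (std::size_t i = 1; i < poly.size(); ++i) {
        float d = dot(poly[i], axis);
        if (d < min) min = d;
        if (d > max) max = d;
    }
}

// SAT test for two convex polygons. Returns true on collision;
// 'mtv' then holds the smallest vector that pushes the polygons apart.
bool satCollide(const std::vector<Vec2>& a, const std::vector<Vec2>& b,
                Vec2& mtv) {
    float bestOverlap = std::numeric_limits<float>::max();
    Vec2 bestAxis = {0.0f, 0.0f};
    const std::vector<Vec2>* polys[2] = { &a, &b };
    for (auto* poly : polys) {
        for (std::size_t i = 0; i < poly->size(); ++i) {
            Vec2 v0 = (*poly)[i];
            Vec2 v1 = (*poly)[(i + 1) % poly->size()];
            // The separating axis is the edge's normal, not the edge itself.
            Vec2 axis = { -(v1.y - v0.y), v1.x - v0.x };
            float len = std::sqrt(dot(axis, axis));
            axis = { axis.x / len, axis.y / len };
            float minA, maxA, minB, maxB;
            project(a, axis, minA, maxA);
            project(b, axis, minB, maxB);
            float overlap = std::min(maxA, maxB) - std::max(minA, minB);
            if (overlap <= 0.0f) return false;  // gap found: no collision
            if (overlap < bestOverlap) {
                bestOverlap = overlap;
                bestAxis = axis;
            }
        }
    }
    mtv = { bestAxis.x * bestOverlap, bestAxis.y * bestOverlap };
    return true;
}
```

The early return is the whole point of the theorem: the first axis with a gap proves the polygons are separated, so no further work is needed.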

This can also be done with circles versus polygons, where you check the center of the circle against either the axes or the vertices of the polygon, depending on where the circle is in relation to the polygon. This uses Voronoi regions, but that is slightly more advanced and I won’t go into it here.

SAT_badillustration
Pictured: the worst possible illustration of SAT in action.

There are some really great tutorials on this out on the web.

I can recommend N tutorial A for its thorough breakdown of the theorem.

Geneva Lost Weekly Reflection – Week 6, 2016

Artifact: Camera

In our game we wanted the camera to be more than just a static viewport. For us, the camera is an important element to convey information to the player.

Firstly, we wanted the camera to gravitate towards the reticle, giving more vision in the direction you’re aiming. This in itself is simple enough to implement. A simple mathematical formula for the camera position would be

a + b × f

where a is the position of the player, b is the position of the reticle in relation to the middle of the screen and f is the focus multiplier.

However, we wanted to compensate for the viewing distance lost in the y-direction of the screen due to the non-1:1 aspect ratio. To compensate, we took the absolute value of the dot product between a unit vector pointing straight up and the unit vector of the reticle offset relative to the middle of the screen, and multiplied it by (width / height – 1), which gave us the following formula:

a + b × (f × (1 + |c · (0, 1)| × (width / height – 1)))

where a is the position of the player, b is the position of the reticle in relation to the middle of the screen, c is the unit vector of b and f is the focus multiplier.

cameraRef

Ref. A & B – The camera offset from the player.

However, this solution is problematic. When aiming towards the corners the offset is greater than when aiming directly up or down, defeating the purpose of the mechanic.

One solution would be to limit the mouse position to a position within a set radius from the player. This is a solution we still consider viable, and it may be what we eventually decide on.

Another solution, which is the one currently implemented in the game, uses the formulas

x’ = ax + bx × f × height / width

y’ = ay + by × f × width / height

where a is the position of the player, b is the position of the reticle relative to the center of the screen and f is the focus multiplier.

AimMockup

However, with this solution the aim no longer passes through the center of the screen. It also makes the formula for rotating the player towards the aim somewhat more complex. Neither of these is a big issue or hard to solve, so this is the solution we’re settling on for now.

There are a few other features planned for future iterations of the camera.

We want the bullet spread to correlate with the camera offset, meaning that the further away from yourself you aim, the more spread your attacks will have.

We’ve planned for simple things to help with the immersion of the game, such as camera shake and similar features.

I want to mention one planned feature in particular. We’re all in agreement that we want a minimalistic HUD and that as much information as possible should be conveyed through the game itself. Therefore instead of having a health meter we decided that the camera, together with other mechanics, should convey the current level of health.

What we’ve decided is that the lower your health, the more zoomed in the camera will be. Colors also slowly desaturate. Other mechanics outside the camera that reinforce this effect include time slowing down and the audio changing.