A robot of my own design.
It is part of a game I’m making on my own in my spare time.
I’ve worked for a little over 3 years now on the Virtual Reality part of the Tacx Trainer Software.
The Tacx Trainer Software (TTS) is software for riding training sessions on a Tacx bicycle trainer, for amateur and professional cyclists.
The Virtual Reality (VR) part lets you ride in virtual worlds; terrain, slope and wind info is sent to the bicycle trainer for a realistic riding experience.
For the most part I was the only artist. So I pretty much made every level on my own from scratch.
(That includes the level design, placement, lighting, particle systems, …, and creating almost all of the props.)
Initially we started development using GameBryo, but halfway through production, we switched over to Unity3D.
Here are some screenshots from various levels:
Metropolitan: originally started with a (very basic) third-party city pack, but only part of the street layout and the basic buildings remain, as most of the textures and UVs have been completely replaced or redone. The smaller props, like the traffic lights, and the entire park were made by me. (Although the park also contains a couple of standard Unity3D trees.)
While the original male and female cyclist models weren’t made by me, I did make almost all the cyclist outfits (shirts and pants, not the glasses, helmets and shoes), redid the UV layout to better fit these dozens of outfits, and created the necessary maps for changing the skin and hair color (and, for some outfits, the outfit colors).
I also made all the bikes (currently 3 types), which also have customisable colors.
More screenshots can be found on Flickr.
For more info on a more recent level (summer 2014), check here.
A very low-poly vehicle with a low-res texture, made to be able to run on a Nintendo DS.
Based on a vehicle from Jak2. (Zoomer design copyright Naughty Dog)
TriCount: 232 tris.
TextureSize: 64 x 64
Made for a (friendly) competition back in college (in 2008).
The polygon limit was 250 triangles.
The texture had to be 64×64.
First off, I don’t actually know whether it’s called a blendmap; that’s just what I call it.
What I mean is a grayscale texture that is used to dynamically blend two images.
In its simplest form, a single numerical value defines the clipping value: for each pixel of the blendmap, if the luminosity is higher than the clipping value, image1 is used; otherwise it’s image2.
Most often though, I don’t want a hard cut like that, so instead I use one value that roughly determines how much of image1 to use (_BlendAmount), and one value that defines the sharpness of the transition (_EdgeSharpness).
Which looks something like this:
float blend = blendColor + (_BlendAmount * 2 - 1);
blend = saturate(blend * _EdgeSharpness - (_EdgeSharpness - 1) * 0.5f);
result = lerp(color1, color2, blend);
Now this might not seem like much; in fact it’s pretty much just an alphamap.
But its strength comes from the fact that it’s great for dynamic transitions from one image to another, in time and/or space, and that’s useful for a lot of things.
And it’s quite simple and cheap, which is also a good thing.
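To make the formula easier to experiment with, here’s a small CPU-side sketch of the same math in Python (saturate and lerp reimplemented by hand; the shader snippet is the real thing, this is just an illustration):

```python
def saturate(x):
    # clamp to [0, 1], like Cg/HLSL saturate()
    return max(0.0, min(1.0, x))

def lerp(a, b, t):
    # linear interpolation, like Cg/HLSL lerp()
    return a + (b - a) * t

def blend_pixel(color1, color2, blend_map, blend_amount, edge_sharpness):
    # blend_map: the blendmap's luminosity at this pixel, in [0, 1]
    # blend_amount: slides the transition (0 -> all color1, 1 -> all color2)
    # edge_sharpness: 1 gives a soft fade, large values give a hard cut
    blend = blend_map + (blend_amount * 2 - 1)
    blend = saturate(blend * edge_sharpness - (edge_sharpness - 1) * 0.5)
    return lerp(color1, color2, blend)

# soft fade: at blend_amount 0.5 the blendmap value passes straight through
print(blend_pixel(0.0, 1.0, 0.5, 0.5, 1.0))    # 0.5
# hard cut: a high edge_sharpness snaps pixels to one side or the other
print(blend_pixel(0.0, 1.0, 0.4, 0.5, 100.0))  # 0.0
print(blend_pixel(0.0, 1.0, 0.6, 0.5, 100.0))  # 1.0
```

Note how the same blendmap gives a gradual fade or a crisp growing edge purely depending on _EdgeSharpness, which is what makes it so handy for animated transitions.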
For example, I used this to make a snow material, that would blend in snow with the diffuse texture, based on the surface normal, using the blendmap in the snow’s alpha channel.
And for fading HUD overlays (like the Frost Effect, and blood splatter when taking damage).
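For the snow material, the general shape of the idea can be sketched like this in Python (the actual version is a shader, and using the normal’s up component directly as the blend amount is my assumption, not necessarily the exact formula):

```python
def snow_blend(normal_y, snow_alpha, edge_sharpness=4.0):
    # normal_y: up component of the world-space surface normal
    #   (1 = flat ground, 0 = vertical wall); treating it directly as the
    #   blend amount is an assumption for illustration
    # snow_alpha: the blendmap stored in the snow texture's alpha channel
    blend_amount = max(0.0, normal_y)
    blend = snow_alpha + (blend_amount * 2 - 1)
    blend = blend * edge_sharpness - (edge_sharpness - 1) * 0.5
    return max(0.0, min(1.0, blend))  # 0 = bare diffuse, 1 = full snow

print(snow_blend(1.0, 0.5))  # flat ground: 1.0 (fully snowed)
print(snow_blend(0.0, 0.5))  # vertical wall: 0.0 (no snow)
```

Because the blendmap lives in the snow’s alpha channel, the snow edge follows the texture’s detail instead of fading in uniformly.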
Usage in games
The first time I came into contact with this technique, was when I saw the snow in Uncharted 2 (on the internet, I never actually played the game).
They seem to be using blendmaps for the transition between snow and ground/rocks. (which gave me the idea to make a snow material)
And I believe a lot of games also use blendmaps in general for transition between terrain textures, which looks a lot better than the default fading.
(Which should actually be a little more complex, as it’s not just blending 2 textures, but any number)
But the weird thing is that I can’t seem to find any actual documentation about it (or even what it’s called),
though maybe that’s just because I’m terrible at finding things on the internet.
I don’t play many casual or web-based games,
but I do visit Kongregate every so often (though it has been a while),
because they tend to have more original games than the average flash games site
(or at least they used to).
One thing I liked about making games for Kongregate is the APIs.
(To be honest though, I’ve never uploaded my games anywhere else, so I don’t know what it’s like on other sites)
One of the APIs is the “Shared Content API”, which allows users to save content they made in a game and lets others load it.
Mostly used for sharing save games and creating custom levels.
But I started thinking of ways to use the API for more direct gameplay mechanics, for creating a more cooperative gaming experience.
I got the idea to make a 2-player co-op game where, instead of 2 players playing simultaneously, only 1 player plays at a time, and the other is a recorded run of someone else.
And after finishing a level, the player could then save his run and share it with others, so they could play co-op with it.
The idea was to see if this concept could allow for a new interesting experience, given the advantages and disadvantages that come with it.
The main advantages being that it allows players to play together without having to play at the same time, and that any single run can be used by any number of different players, both friends and strangers.
(A smaller advantage being that server or connection problems are impossible.)
Since it was gonna be an experiment, I decided it should be a fairly simple game, not too big and easily accessible.
A 2D shoot-em-up with pixel graphics seemed like a good choice, as that’s one of the easier kinds of games to make (seeing as how Warmada was made quite fast).
Well that turned out quite differently.
I liked making different enemies, bosses and weapons too much, which caused me to make the game a lot bigger and more complex than originally planned.
The graphics style was indeed fairly easy to make, but that only caused me to make more and more content.
Other than that, it also proved very difficult to explain the game mechanics in simple and clear ways to the player.
I wanted to emphasise the co-op element of the game, so I tried to make it really about cooperation.
I didn’t want it to be just 2 players shooting through enemy waves, ignoring each other; I wanted them to help each other and work together.
Now how do you do that when 1 of them is a predetermined recorded run?
Well, the biggest element I added for that purpose was the gates and gate switches:
some levels were divided in 2 parts, 1 for each player, with gates blocking their paths and gate switches that opened the gates when shot, the key being that the switches for Player1’s gates were in Player2’s part and vice versa.
And another was a weapon, the helix cannon.
It shoots a beam of red and blue particles that can move through walls, in a sine-like wave.
But when both players had this weapon equipped, their beams would be drawn towards and spiral around each other. Because of this, the players really had to keep track of each other and act accordingly.
Having one player be completely predetermined causes some issues, of course. I anticipated this, but the impact was still much bigger than expected.
Simply put, everything that makes a playthrough different from the recorded run’s playthrough can cause the recorded player to behave illogically.
For one, this means nothing should happen at random, even partially. Everything must happen exactly the same each time (except for the active player), otherwise the recorded run would make no sense.
And all the enemies that target a player always have to target either Player1 or Player2; they can’t switch based on whoever is closest, because that would further differentiate the playthrough from the recorded run. This unfortunately limits the enemy behaviour.
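To illustrate the kind of determinism this requires (a hypothetical sketch, not the game’s actual code): the standard approach is to record only the player’s inputs per fixed timestep and derive everything else, including anything “random”, from a fixed seed, so replaying the inputs reproduces the run exactly:

```python
import random

def simulate(inputs, seed=1234):
    # Deterministic simulation: same inputs + same seed => same run.
    # 'inputs' holds one recorded action per fixed timestep; nothing may
    # depend on wall-clock time or unseeded randomness.
    rng = random.Random(seed)
    x = 0
    states = []
    for action in inputs:
        if action == "left":
            x -= 1
        elif action == "right":
            x += 1
        # "random" events (e.g. enemy spawn columns) come from the seeded
        # rng, so the recorded player meets exactly the same enemies
        states.append((x, rng.randrange(10)))
    return states

recorded = ["right", "right", "left"]
# replaying the same recording always reproduces the run bit for bit
assert simulate(recorded) == simulate(recorded)
```

The moment anything outside the active player depends on an unseeded source, the recorded run and the live run drift apart, which is exactly the illogical behaviour described above.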
An issue that can’t be helped is that when the active player kills an enemy (that wasn’t killed in the recorded version), the recorded player might still try to shoot it down.
And worse, when the active player fails to kill an enemy fast enough (that was killed in the recorded version), the recorded player might fly straight into it (causing massive damage to the player).
The designs of the levels, and of the gates and gate switches, had to be severely simplified,
because originally there were gate switches that both closed gates and opened others. The problem with this was that it could cause the recorded player to fly straight into a gate, killing him instantly.
And the gate HAS to kill him instantly, as it is a recorded run, so the only alternative would be letting him fly through the gate.
All in all, I like how the game ended up, though I do not consider it a success.
The concept has its plus sides, but IMO they don’t really outweigh the downsides.
The problem basically comes down to this:
It’s supposed to be a cooperative game, and that really requires interaction between the 2 players,
but there can’t be interaction when one is prerecorded, there can only be reaction.
It was an experiment, and although it wasn’t a great success, an experiment is only a failure when you fail to learn from it.
You can play the game here:
But make sure to read the instructions before you start.
You can only save your run and share it when you have a Kongregate account (don’t worry, it’s free)
When texturing some sort of metallic object (a machine, weapon, robot,…),
it’s often a good idea to make the edges look a bit worn; otherwise it looks too clean, and the wear just makes the object more interesting.
The way I was taught to do this, was by painting it manually on the texture.
But I always felt this to be kind of a hassle, as these worn edges often lie on UV seams, making them more difficult to paint correctly than anywhere else on the texture.
And it just always seemed to me that this could perfectly well be generated procedurally,
as the location and look of a worn edge is more of a technical thing than a design choice.
It’s like ambient occlusion: you can paint that as well, but most often you’re better off baking procedural ambient occlusion.
A couple of weeks ago I finally started making something to do this.
I created an editor extension in Unity3D, that allows you to generate worn edges and bake it into a texture.
Here’s an example:
My first idea was to make this a tool in Blender, but I’m not familiar with Python, and so I wanted to test some things in Unity first. But before I knew it, I had already created the whole outline of how to make it in Unity in my head.
So I started making it in Unity instead, and to my surprise, its development went extremely smoothly; everything worked as expected, even the things I had to invent on the spot.
Now the tool does have its restrictions (for example, it currently only works on hard/sharp edges), but if you keep these in mind it works great IMO.
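To give an idea of the kind of logic involved (this is just my illustration, not the tool’s actual code): on hard edges, wear can be driven by the dihedral angle between the two faces meeting at an edge, with the resulting value baked into the wear mask along the edge’s UVs:

```python
import math

def dihedral_angle_deg(n1, n2):
    # angle between the unit normals of two adjacent faces
    dot = sum(a * b for a, b in zip(n1, n2))
    dot = max(-1.0, min(1.0, dot))  # guard acos against rounding
    return math.degrees(math.acos(dot))

def wear_amount(n1, n2, min_angle=30.0, max_angle=90.0):
    # 0 on smooth edges, ramping up to 1 on sharp creases;
    # a baking tool would write this value into the texture along
    # the edge's UV coordinates
    angle = dihedral_angle_deg(n1, n2)
    t = (angle - min_angle) / (max_angle - min_angle)
    return max(0.0, min(1.0, t))

print(wear_amount((0, 1, 0), (0, 1, 0)))  # coplanar faces: 0.0
print(wear_amount((0, 1, 0), (1, 0, 0)))  # 90-degree crease: 1.0
```

This is also why hard edges are the natural target: on smooth-shaded geometry the crease angle is spread across many faces and there is no single edge to bake along.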
I’m planning on putting it on the Unity Asset Store,
I still have to make some documentation and screenshots and stuff; those things tend to take way more time than intended. 😦
But other than that, it’s as good as finished. 🙂
Now available on the Asset Store:
So a while ago, while I was making a Christmas-themed version of a level at work, I was playing the latest SSX after hours.
The Christmas level
There’s a game mode in SSX where you have to stay out of the shadows or you freeze to death; they visualise this by putting more and more frost on the screen.
I quite liked this effect and thought it would be nice to include it in the Christmas level at work.
Now in essence it’s just an image overlay, but the special thing about it is that when it fades in or out, it doesn’t just get more or less transparent: the frost actually shrinks or grows.
I figured they must be using a blendmap or something similar to achieve this,
which is how I then made it.
Here’s what it looks like:
After making this, I thought it would be even better if the frost also distorted the view.
Since the frost effect was already a post effect, this wasn’t that difficult to implement.
What I did was create a normalmap from the frost texture.
This is used to determine the direction of the distortion, while the amount of distortion is relative to the opacity of the frost; together these define the sampling offset.
(It’s a screen space image distortion, so the distortion works by just sampling the source image with the offset.)
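As a rough sketch of that sampling step (Python over a plain 2D list standing in for the fragment shader; the names are mine, not the effect’s actual code):

```python
def distort(image, normal_map, frost_alpha, strength=2.0):
    # screen-space distortion: sample the source image at an offset given
    # by the frost normalmap's xy direction, scaled by the frost's opacity
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nx, ny = normal_map[y][x]      # decoded normal xy in [-1, 1]
            amount = frost_alpha[y][x] * strength
            sx = min(w - 1, max(0, int(round(x + nx * amount))))
            sy = min(h - 1, max(0, int(round(y + ny * amount))))
            out[y][x] = image[sy][sx]      # offset sample, clamped to edges
    return out
```

Where the frost is fully transparent the offset collapses to zero, so the image passes through untouched and the distortion grows in exactly as the frost does.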
So this is what I got:
I’ve made this post effect available for free on the Unity Asset Store:
It requires a Unity Pro license in order to work though.