Last week I was lucky enough to witness a motion capture session deep in Bizarre towers.
Motion capture (if you're not aware) is the act of dressing up in a tight suit, sticking little reflective balls all over your body, and running around in a dark room for a bit. Apparently game developers have been doing this for years! A convenient side effect is that you can use infra-red cameras to record the movements of someone as they dance about the place. The recorded motion can then be applied to an in-game character, meaning that it animates realistically and moves in a way you'd expect it to.
There's quite a bit of this going on at Bizarre at the moment, as the entire crowd in PGR3 has been "mo-capped". This particular session, however, wasn't for PGR3, but rather Bizarre's unannounced title in development... (There's more to Bizarre than just Gotham, you know!)
I snuck into the mo-cap studio with a camera whilst the preparations were still going on. Fortunately, Alan was already in the spandex suit when I arrived, so I was only witness to the "sticking the little balls on" phase, during which I harassed Mike about how the whole process worked.
Interestingly enough, there isn't any equipment inside the balls. They are simply reflective, so the cameras can pick them up; this makes Alan look like a character from Tron in the camera flash! They are also "squishy" so that the actor can dive about the place without causing a nasty injury.
Mike (the man in the know) said this: "The cameras emit infra-red light, and the markers are coated with a substance that reflects this spectrum back into the cameras. I guess the markers are a bit like cat's eyes on roads, bouncing a car's headlights back.
"The cameras aren't really affected by artificial light or bulbs, but white or shiny objects placed in the capture area can sometimes reflect light back to the cameras. This makes the captured data very noisy and very hard to work with.
"To capture motion data, these reflective markers have to be seen by at least two cameras, so that the mocap software (Vicon's IQ2) can calculate where each marker is in 3D space. If a marker is seen by only one camera then it won't work. This all adds to what I call the 'dark art of Motion Capture'."
The clever part of this whole process is the cameras, which are mounted all around the studio. They cover every angle, so once the different views are combined the software can reconstruct the movement in three dimensions.
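If you're curious how "seen by at least two cameras" turns into a 3D position, here's a minimal sketch of the idea (my own illustration, not anything from Vicon's IQ2): each camera that spots a marker defines a ray from the camera towards it, and because the two rays never cross perfectly, the marker is taken to be the midpoint of their closest approach.

```python
import numpy as np

def triangulate(origin_a, dir_a, origin_b, dir_b):
    """Estimate a marker's 3D position from two camera rays.

    Each camera contributes a ray (the camera's position plus a
    direction towards the marker it sees). Real rays rarely intersect
    exactly, so we return the midpoint of their closest approach.
    """
    oa, ob = np.asarray(origin_a, float), np.asarray(origin_b, float)
    a, b = np.asarray(dir_a, float), np.asarray(dir_b, float)
    w = oa - ob
    # Solve for s, t minimising |(oa + s*a) - (ob + t*b)|
    A = np.array([[a @ a, -(a @ b)],
                  [a @ b, -(b @ b)]])
    rhs = np.array([-(w @ a), -(w @ b)])
    s, t = np.linalg.solve(A, rhs)
    return ((oa + s * a) + (ob + t * b)) / 2.0
```

With more than two cameras you'd solve the same kind of least-squares problem over all the rays at once, which is part of why extra camera coverage makes the data so much cleaner.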
But how do you know where to put the markers on your actor? Mike explains: "Basically we have to have enough markers on the actor so that when individual markers get obscured from the cameras' view, IQ2 can use other markers nearby to reconstruct where it thinks they should be. So we might put a marker on the elbow, but we'll also put markers on the actor's upper arm and forearm. Then if the elbow marker is sometimes hidden from view, we can use features within IQ2 to reconstruct its position using the markers on the forearm and upper arm. However, if there are too many markers then IQ2 will get confused about which is which when it tries to automatically label them. Like I said, it's a bit of a dark art.
"I feel the most important factor influencing where we put the markers on an actor is the type of motion we are trying to capture. There is no point putting a marker on the actor's back if we are capturing the motions of someone sitting in a chair, because it will always be hidden from view. We also need to consider placing markers to capture the rotation of certain objects or bones - we need at least two positions to work out the vector/rotation of that object. This is why we have two or more markers on the back of the actor's hand, so we can capture the angle/twist of the hand during the motion."
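To make Mike's two points a little more concrete, here's a toy sketch (mine, not IQ2's actual algorithms) of the geometry involved: a hidden marker can be estimated from its visible neighbours if you know roughly where it sits between them, and two markers on the same rigid part give you that part's direction.

```python
import numpy as np

def reconstruct_marker(neighbour_a, neighbour_b, ratio):
    """Estimate a hidden marker (e.g. the elbow) from two visible
    neighbours (the upper-arm and forearm markers), assuming it lies
    at a fixed ratio along the line between them. That ratio would be
    measured from calibration frames where all three are visible."""
    a = np.asarray(neighbour_a, float)
    b = np.asarray(neighbour_b, float)
    return a + ratio * (b - a)

def bone_direction(marker_a, marker_b):
    """Two markers on the same rigid part (e.g. the back of the hand)
    give that part's orientation as a unit vector between them."""
    d = np.asarray(marker_b, float) - np.asarray(marker_a, float)
    return d / np.linalg.norm(d)
```

A real system tracks full rigid-body offsets rather than a single ratio, but the principle is the same: neighbouring markers constrain where a lost one can be.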
Before things got started, we needed to calibrate the software we were going to be using. Alan had to stand in the middle of the studio and go through the full range of motion of each of his limbs. This meant doing kung-fu kicks, rolling his head, and practising yoga poses with his arms. Brilliant!
Alan's instructions for this particular motion capture session were to grab his neck and act out a slow and painful death. Interesting stuff! It's not as simple as it sounds though... for example, Al couldn't just put his hands around his throat as you might expect. We don't yet know how the hands from the motion capture will map to the hands on the 3D model - will everything match up properly? The best way to get round this, Mike explained, was for Alan to perform the action about a foot in front of his neck. The limbs would then be repositioned in post-processing to make it all match up.
But how does it get from the motion capture software into the game? "During the capture session we save data from the cameras and use it within IQ2 to reconstruct where each of the markers is in 3D space, and its trajectory from one frame to the next. IQ2 can then label what it thinks each marker is, and use its tools to reconstruct the position of lost markers. Once all markers are labelled on every frame of a sequence, it is exported into Motion Builder, the package we use to transfer our motion-captured data onto our 'animation skeleton'. The final stage is to clean up and adjust the basic animation, so that it loops or animates from specific poses or whatever. The animation is then exported into Maya (where even more magic is added) before it finally goes through the game's animation converter, which converts it to be played in our game. Sorted!"
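The whole journey Mike describes can be summarised as a chain of stages. This is purely an illustrative sketch of the data flow - the function names and the dictionary they pass along are made up for this post, and the real work happens inside IQ2, Motion Builder, Maya and the game's converter:

```python
def reconstruct_and_label(camera_frames):
    # IQ2: turn per-camera sightings into labelled 3D trajectories,
    # rebuilding any markers that were lost along the way
    return {"trajectories": camera_frames, "labelled": True}

def retarget_to_skeleton(clip):
    # Motion Builder: map marker trajectories onto the animation skeleton
    return {**clip, "skeleton": "animation_skeleton"}

def clean_up(clip):
    # Adjust the basic animation so it loops or starts from specific poses
    return {**clip, "looped": True}

def export_for_game(clip):
    # Maya pass, then the game's animation converter
    return {**clip, "format": "game_anim"}

clip = export_for_game(clean_up(retarget_to_skeleton(
    reconstruct_and_label(["frame0", "frame1"]))))
```

Each stage hands a richer clip to the next, which is why a labelling mistake early on (the "dark art" bit) ripples all the way down the pipeline.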
I'll do another update on the modelling process in coming weeks... as soon as I find an artist willing to convert to the dark side and spill the beans!
So there's only time for a short piece on PGR3 this week, but quite an interesting one nonetheless. What follows is an actual unlit, pre-effects PGR3 game texture, which shows the front of a building in New York City. As always, this was pretty much a random texture I selected from the archive. Click the thumbnail below to see the image at actual size (1024x1024 pixels)!
It might look a bit plain in its 'raw' form, but this is just a quarter of the story... as well as the regular texture (called the diffuse layer), there is also the index channel (is it glass, metal, brick etc.?), the specular map (the shininess of the material), and the bump map (which adds light and shadow to the raised areas) on top of that! In fact, I'll see if I can interrogate the texture artists in the future for some more info...
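For the curious, here's a very simplified sketch (my own illustration, not PGR3's actual shader) of how those layers might combine when lighting a single pixel: the bump map perturbs the surface normal, the diffuse layer supplies the base colour, and the specular map scales the shiny highlight. The index channel would select material parameters like shininess; here that's just passed in directly.

```python
import numpy as np

def shade(diffuse, normal, light_dir, view_dir, specular_level, shininess):
    """Lighting for one pixel: Lambert diffuse term plus a Blinn-style
    specular highlight. `normal` stands in for the bump-map-perturbed
    surface normal, `specular_level` comes from the specular map."""
    n = np.asarray(normal, float);    n /= np.linalg.norm(n)
    l = np.asarray(light_dir, float); l /= np.linalg.norm(l)
    v = np.asarray(view_dir, float);  v /= np.linalg.norm(v)
    lambert = max(n @ l, 0.0)                 # light/shadow from the surface angle
    h = l + v; h /= np.linalg.norm(h)         # half-vector for the highlight
    spec = specular_level * max(n @ h, 0.0) ** shininess
    return np.asarray(diffuse, float) * lambert + spec
```

In a real engine all four layers are sampled per pixel from the texture maps, so one 1024x1024 building front is really four 1024x1024 images working together.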
But that's not all. Imagine an entire city of textures at this high level of detail. How many textures would you need? Yes, it's thousands and thousands! The following image has been going around our studio this week, so I thought I'd share it with you also. It's a crop sheet of some of the New York City textures which are featured in the game. Again, make sure you click the thumbnail to see the full-size image.
Pretty incredible eh? Obviously this picture has been scaled down for practical use, but it has been made entirely from full-size textures like the one above. Here's what the building looks like in the game engine:
Remember, that's just New York. There are literally gigabytes of textures! The level of detail still amazes me. I can't wait!
More Bizarre updates next week at www.bizarreonline.net...