07-04-2025, 06:52 PM
C03. Character Production Workflow
Most of the characters animated on the console, including the main character, Snake, are restricted to a data size of about 5,000 to 10,000 polygons, including the face model. In addition, the same polygon-resolution models are used in both game action and event demos, so game screens and video sequences connect seamlessly, making it easier for players to become emotionally involved.
As mentioned above, with the exception of crowds, the same polygon-resolution models are used in both game action and event demos. Separately from the resolution model used on the console, high-resolution data is also modeled for generating the normal map. Details such as creases in clothing are then expressed with the normal map generated from the high-res model.
For constructing the bodies of characters, about 21 joint bones carry animation data and are driven directly by it. Many auxiliary bones are also used to support movements such as the twisting of knees, elbows, legs and arms. These are not driven by animation data; instead, their values are linked to those of the basic joints that the animation drives.
The team used this specification not only in XSI but also on the console: simply by outputting an auxiliary bone definition file from XSI, they could perform the same control at runtime.
Because the auxiliary bones themselves contain no motion data, data size stays low. Further, if auxiliary bones need to be added or deleted, only the model data has to change; the motion data never needs to be reconverted.
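The auxiliary-bone idea above can be sketched in a few lines: helper bones carry no animation data of their own, and each frame their rotation is derived from the animated basic joints they are linked to. The bone names and the 0.5 twist ratio below are illustrative assumptions, not values from the MGS4 pipeline.

```python
from dataclasses import dataclass

@dataclass
class Joint:
    name: str
    twist_deg: float = 0.0  # rotation about the bone's own long axis

def drive_auxiliary(basic: Joint, ratio: float = 0.5) -> Joint:
    """Derive a helper bone that absorbs part of the parent's twist.

    Distributing forearm twist over an extra bone keeps the enveloped mesh
    from collapsing (the "candy-wrapper" artifact) at full rotation.
    """
    return Joint(name=basic.name + "_twist", twist_deg=basic.twist_deg * ratio)

# Usage: only the forearm is animated; the helper follows procedurally,
# so no extra motion data needs to be stored or reconverted.
forearm = Joint("forearm", twist_deg=80.0)
helper = drive_auxiliary(forearm)
print(helper.name, helper.twist_deg)  # forearm_twist 40.0
```

Because the helper is a pure function of the basic joint, adding or deleting it changes only the rig definition, never the stored animation, which mirrors the workflow benefit described above.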
C04. Gator: An Essential Function
To ensure top quality in game development, repeated specification changes are unavoidable. The team therefore made heavy use of the Gator function in their character production workflow. Often they wanted to extract a single attribute from a specific model, or to reuse an attribute from a past model. For example, painstakingly created UV information sometimes has to be discarded for reasons beyond the artist's control. Gator is ideal for recovering such work: even though the topology and polygon count differ, the data can be reused on any model with a similar shape.
To give another example, creating UV information for high-res models involves a lot of work, and Gator reduces it significantly. UV information can even be transferred from a low-polygon model with a completely different polygon count to a high-res model, with the values interpolated during the transfer. In MGS4, the main character, Snake, has about 10 clothing patterns, including camouflage and costumes. Instead of setting up envelopes for each outfit case by case, the team could transfer the character's existing envelope information instantly with a single click in Gator.
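The kind of spatial attribute transfer Gator performs can be illustrated with a toy example: copy per-vertex data (here, UVs) from a source mesh to a target mesh with different topology by sampling the closest source vertex. Real Gator interpolates values over the source surface; nearest-vertex lookup is a deliberate simplification for this sketch, and all coordinates are invented.

```python
def transfer_attribute(src_positions, src_values, dst_positions):
    """For each target vertex, take the attribute of the closest source vertex."""
    def dist2(a, b):
        # squared Euclidean distance; good enough for nearest-point comparison
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    out = []
    for p in dst_positions:
        nearest = min(range(len(src_positions)),
                      key=lambda i: dist2(p, src_positions[i]))
        out.append(src_values[nearest])
    return out

# Usage: a low-poly model's UVs mapped onto a denser model of similar shape.
src_pos = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
src_uv  = [(0.0, 0.0), (1.0, 0.0)]
dst_pos = [(0.1, 0.0, 0.0), (0.9, 0.0, 0.0), (0.55, 0.0, 0.0)]
print(transfer_attribute(src_pos, src_uv, dst_pos))
# → [(0.0, 0.0), (1.0, 0.0), (1.0, 0.0)]
```

The same closest-point principle applies to envelope weights, which is why a similar silhouette is all that is needed to reuse Snake's skinning on a new outfit.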
Operation Video of Efficient Rig Setup Using Gator:
This video shows how easily Snake's existing envelope was reused for Altair with XSI's Gator. The Altair model data was received as-is from Ubisoft, with no modeling corrections performed. The video vividly demonstrates how efficiently and rationally the setup was done.
C05. Amazing Facial Animation
One of the main features of MGS4 is its world-class facial animation. How did the team create such realistic facial expressions?
Because lip-synch work is performed at the localization stage, and to reduce workloads, audio-analysis animation is used throughout the Metal Gear Solid series. In MGS4, for example, lip synching for the English and Japanese versions was performed with different audio-analysis software, while emotions and facial expressions other than lip synch were added through manual animation. In most cases, facial-expression and phoneme elements do not interfere with each other, so both can be worked on in parallel.
It was this that allowed the simultaneous worldwide release of the title. When performing audio analysis, the facial-expression components (such as anger or laughter) and the phoneme components of each language must be divided into separate parameters and then reproduced as rig behavior. Although it is possible to create parameters for bone rotation and movement directly, the team said this would make the rig too complicated and bone-driven envelope changes too hard to predict. In other words, performing facial animation through bone control alone would have posed two problems: designers could not operate it intuitively, and it would have been difficult to create facial-expression and phoneme parameters for bone behavior.
On the other hand, while shape animation has the disadvantage of producing linearly interpolated motion, it makes it very easy to create parameters for phonemes and facial expressions. Perhaps its most important benefit, though, is that the designer can intuitively predict the results.
For these reasons, the team based the rigs in this project on parameters created with shapes, outputting the results as bone animation.
This setup allowed animation automation using audio analysis (automated animation was performed not just for the mouth, but also the tongue and throat) to coexist with a rich range of character emotions that were added manually. In the Flash movie below, you can see how smooth muscle movement is reproduced using a top-quality rig setup.
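The hybrid approach described above can be sketched as follows: shape parameters (phoneme weights from audio analysis, expression weights set by hand) are blended, and the blended result drives the positions that the bone rig ultimately reproduces. The shape targets, names and weights below are invented for illustration; they are not MGS4 data.

```python
def blend_shapes(rest, targets, weights):
    """Linearly blend named shape-target deltas onto the rest positions."""
    out = [list(v) for v in rest]
    for name, w in weights.items():
        for i, delta in enumerate(targets[name]):
            for axis in range(3):
                out[i][axis] += w * delta[axis]
    return [tuple(v) for v in out]

# Two control points standing in for jaw and lip-corner bones.
rest = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
targets = {
    "phoneme_A": [(0.0, -0.5, 0.0), (0.0, 0.0, 0.0)],  # jaw drops for "A"
    "smile":     [(0.2, 0.0, 0.0), (0.2, 0.2, 0.0)],   # corners pull back and up
}
# The phoneme weight comes from audio analysis, the expression weight from
# the animator; the two parameter sets add without interfering.
bones = blend_shapes(rest, targets, {"phoneme_A": 1.0, "smile": 0.5})
print(bones)  # [(0.1, -0.5, 0.0), (0.1, 1.1, 0.0)]
```

Because the final output is a set of bone positions, the runtime envelope works exactly as it does for the body rig, while authoring stays in the intuitive shape-parameter space.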
Flow for Facial Rig Construction:
- Low polygon model activated with shape animation
- Bones fixed on top
- Enveloped polygon meshes for these bones
- Tangent color
- OpenGL display (wrinkles also expressed with a normal map)
![[Image: th_im14.jpg]](http://i292.photobucket.com/albums/mm28/MGSDot/MGSForums/The%20Case%20Study%20of%20MGS4/th_im14.jpg)
![[Image: th_im13.jpg]](http://i292.photobucket.com/albums/mm28/MGSDot/MGSForums/The%20Case%20Study%20of%20MGS4/th_im13.jpg)
Expressions, phonemes, eyes (and eyebrows) and shader wrinkle animation can be selected using the tabs.
Remarkably, the team even developed a tool that automatically creates setups for facial rigs capable of this kind of advanced control. They prepare the face model data and run the tool, which automatically identifies the optimum bone positions and creates controls with preset parameters for expressions such as laughing or angry faces. The one prerequisite for automatic facial rigging is that the topology of all face data be uniform; as long as that rule is observed, setup is fully automated. After that, all the designer needs to do to get a working facial-animation environment is fine-tune the controls.
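The automatic rigging tool hinges on the single rule stated above: every face shares the same topology, so a given vertex index always lands on the same anatomical spot. Under that assumption, bone placement reduces to a lookup table of vertex indices that works for any character. The index table and coordinates below are hypothetical, purely to show the shape of the idea.

```python
# Hypothetical convention: each facial bone is anchored at the centroid of
# a fixed set of vertex indices, valid for every face with this topology.
BONE_VERTEX_GROUPS = {
    "jaw":        [0, 1],
    "lip_corner": [2],
}

def centroid(points):
    n = len(points)
    return tuple(sum(axis) / n for axis in zip(*points))

def auto_place_bones(vertices):
    """Derive bone positions from any face sharing the common topology."""
    return {bone: centroid([vertices[i] for i in idx])
            for bone, idx in BONE_VERTEX_GROUPS.items()}

# Usage: a new character's face mesh yields its bone layout in one call.
face = [(0.0, -1.0, 0.0), (0.0, -2.0, 0.0), (1.0, 0.0, 0.0)]
print(auto_place_bones(face))
# → {'jaw': (0.0, -1.5, 0.0), 'lip_corner': (1.0, 0.0, 0.0)}
```

A real tool would also generate the controllers and preset expression parameters, but the uniform-topology rule is what makes every step deterministic and therefore automatable.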
They also used a tool to automatically generate the rig for controlling eyeball movement and the muscles around the eyes. Because the area around the eyes is also controlled using both shapes and bones, when the eyeball locator is moved, the muscles move smoothly just like they do for the mouth. Further, even if the shape is edited to redefine the eye edges, it does not spoil the blinking or brow furrow expressions at all.
It took these setup techniques and efficiency improvements to create the memorable game characters that appeal so strongly to players' emotions.