Lip sync
==In video games==
Early [[video game]]s did not use any voice sounds due to technical limitations. In the 1970s and early 1980s, most video games used simple electronic sounds such as bleeps and simulated explosion sounds. At most, these games featured generic jaw or mouth movement to convey a communication process alongside text. As games became more advanced in the 1990s and 2000s, lip sync and voice acting became a major focus of many games. In the 2020s, [[facial animation]] technology from companies such as [[FaceFX]] allows synchronization to be produced more efficiently.

===Role-playing games===
{{Expand section|1=examples and additional citations|date=February 2016}}
Lip sync was for some time a minor focus in [[role-playing video game]]s. Because of the amount of information conveyed through the game, the majority of communication relies on scrolling text. Older RPGs relied solely on text, using inanimate portraits to indicate who is speaking. Some games, such as ''[[Grandia II]]'' and ''[[Diablo (series)|Diablo]]'', used voice acting, but their simple character models had no mouth movement to simulate speech. RPGs for hand-held systems are still largely text-based, with lip sync and voice files reserved for the rare [[full motion video]] cutscenes. Newer RPGs have extensive audio dialogue. The ''[[Neverwinter Nights]]'' series is an example of a transitional game in which important dialogue and cutscenes are fully voiced, but less important information is still conveyed in text. In games such as ''[[Jade Empire]]'' and ''[[Star Wars: Knights of the Old Republic (video game)|Knights of the Old Republic]]'', developers created partial artificial languages to give the impression of full voice acting without having to voice all of the dialogue.

===Strategy games===
Unlike RPGs, [[strategy video game]]s make extensive use of sound files to create an immersive battle environment. Most games simply play a recorded audio track on cue, with some providing inanimate portraits to accompany the voice. ''[[StarCraft]]'' used full motion video character portraits with several generic speaking animations that did not synchronize with the lines spoken in the game. The game did, however, make extensive use of recorded speech to convey its plot, with the speaking animations giving a good sense of the flow of the conversation. ''[[Warcraft III]]'' used fully rendered 3D models to animate speech with generic mouth movements, both as character portraits and as in-game units. Like the FMV portraits, the 3D models did not synchronize with the spoken lines, and in-game models tended to simulate speech by moving their heads and arms rather than using actual lip synchronization. Similarly, ''[[Codename Panzers]]'' uses camera angles and hand movements to simulate speech, as its characters have no mouth movement at all. ''[[StarCraft II]]'', however, used fully synced unit portraits and cinematic sequences.

===First-person shooters===
[[First-person shooter]]s generally place much more emphasis on graphical detail, mainly because the camera is almost always very close to character models. With increasingly detailed character models requiring animation, FPS developers devote many resources to creating realistic lip synchronization for the many lines of speech used in most FPS games. Early 3D models used basic up-and-down jaw movements to simulate speech. As technology progressed, mouth movements began to closely resemble real human speech. ''[[Medal of Honor: Frontline]]'' dedicated a development team to lip sync alone, producing the most accurate lip synchronization in games at that time. Since then, games such as ''[[Medal of Honor: Pacific Assault]]'' and ''[[Half-Life 2]]'' have used code that dynamically shapes mouth movements to match the sounds of recorded speech, resulting in remarkably lifelike characters. Gamers who create their own [[Machinima|videos]] using character models with no lip movement, such as the helmeted [[Master Chief (Halo)|Master Chief]] from ''[[Halo (video game series)|Halo]]'', improvise by moving the characters' arms and bodies and bobbing the head (see ''[[Red vs. Blue]]'').