As mentioned above: Film is a linear medium where music is synced to the picture. A game is an interactive medium where we need to give the impression that the music was written to accompany the player's performance. One of the main challenges is keeping the music from becoming repetitive and boring the player.
Similar to the filmmaking field, game composers must understand some basic concepts of how a video game works: What’s a game engine, an event, a map, a cinematic/cutscene, a trigger box, etc.
You need audio middleware, since the audio systems built into game engines are simply not as powerful as their rendering engines. One of the most widely used, especially for AAA games, is Wwise; there's a list of games that use it on Audiokinetic's website. Other middleware, such as FMOD, is also used in the industry.
The middleware’s job is to manage how audio behaves in a game as well as the resource budget, dialogue in different languages, audio settings per platform, etc. For the music, you can set a BPM and program its behavior (for example: Switch from Track A to Track B on the next beat/bar/marker and use this cymbal swell file as a transition).
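To make the "switch on the next beat/bar" idea concrete, here is a minimal sketch of the quantization logic middleware performs when you schedule a transition. The function name and defaults are mine, not any actual Wwise or FMOD API:

```python
import math

# Beat-synced switching: given the current playback time and the tempo you set
# in the middleware, compute when the next bar boundary falls so the switch
# (and a transition file like a cymbal swell) can land musically.
def next_bar_time(current_time: float, bpm: float, beats_per_bar: int = 4) -> float:
    """Timestamp (in seconds) of the next bar boundary at or after current_time."""
    bar_length = beats_per_bar * 60.0 / bpm   # e.g. 2.0 s per bar at 120 BPM in 4/4
    return math.ceil(current_time / bar_length) * bar_length
```

For example, a switch requested 3.1 seconds into a 120 BPM, 4/4 cue would be scheduled for 4.0 seconds, the next downbeat, rather than cutting over immediately.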
As for the music writing itself, it's crucial that the composer understands how the music system works within the game, in order to adapt their writing techniques to the interactive music mechanics. There are different writing techniques. Here are a few:
- LAYERS
You write and orchestrate your music as you normally would, then stem it out. For example: Strings High, Strings Low, Brass High, Brass Mid, Brass Low, etc. By doing so, every stem also fills its own space in the frequency spectrum.
Instead of adding more elements so they can work together, you simply strip your arrangement down into submixes, making sure the appropriate layers are grouped together (I learned this technique from Tom Salta's talk at GameSoundCon).
The middleware will be programmed to add/remove layers according to different events. It could also be controlled with a variable between 1 and 100 according to your evolution in the game, or stress level, or anything really.
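A sketch of how such a 1-100 variable could drive the layers (this is my own illustration of the idea; the layer names and fade ranges are hypothetical, not from any specific game):

```python
def layer_gains(intensity: float) -> dict:
    """Map a 1-100 game variable (stress, progress, ...) to per-layer gains.
    Each layer fades in linearly over its own intensity window."""
    windows = {                 # hypothetical layer names and fade-in ranges
        "pads":       (0, 10),  # present almost immediately
        "percussion": (25, 45), # enters at medium intensity
        "brass":      (60, 85), # only at high intensity
    }
    gains = {}
    for layer, (lo, hi) in windows.items():
        t = (intensity - lo) / (hi - lo)
        gains[layer] = max(0.0, min(1.0, t))  # clamp to 0..1
    return gains
```

At intensity 35, only the pads are at full volume, the percussion is halfway faded in, and the brass is still silent; at 100, every stem plays at full volume.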
- BLOCKS
You write in small loopable blocks and switch from block A to B to C on specific events (for example, a change of location, collecting x number of points, etc.).
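The event-to-block mapping can be pictured as a small transition table; block and event names below are made up for illustration:

```python
# Hypothetical horizontal re-sequencing: game events pick the next loop block.
TRANSITIONS = {
    ("block_a", "enter_cave"):    "block_b",
    ("block_b", "boss_spawn"):    "block_c",
    ("block_c", "boss_defeated"): "block_a",
}

def next_block(current: str, event: str) -> str:
    """Return the block to loop after this event fires.
    Events with no entry in the table keep the current block looping."""
    return TRANSITIONS.get((current, event), current)
```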
- STATES
You can write two pieces of music that play simultaneously, one muted and the other at full volume, and crossfade between them to switch. A great example is Mario Odyssey on Nintendo Switch: when Mario goes from full 3D to 8-bit, the music goes from a full orchestra to chiptunes. This system can also work in other contexts, such as Exploration vs. Combat music.
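Since both tracks keep playing in sync and only their volumes change, the switch is often implemented as an equal-power crossfade. A minimal sketch (my own illustration, not a middleware API):

```python
import math

def crossfade_gains(progress: float):
    """Equal-power crossfade between two synced tracks.
    progress 0.0 = state A at full volume, 1.0 = state B at full volume."""
    p = max(0.0, min(1.0, progress))
    # cos/sin curves keep the combined acoustic power constant,
    # avoiding the volume dip a plain linear fade would cause.
    return math.cos(p * math.pi / 2), math.sin(p * math.pi / 2)
```

Halfway through the fade, both tracks sit at about 70% gain, so the overall loudness stays steady while the arrangement morphs.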
- RANDOMIZED BLOCKS
Similar to point #2, but the blocks are contained within a looping random playlist (similar to "shuffle mode"), and they are written so that they always transition well and logically, regardless of the order, to prevent the player from getting bored.
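One common refinement of such a shuffle, sketched below as an assumption rather than any specific middleware feature, is to avoid playing the same block twice in a row when one shuffled round ends and the next begins:

```python
import random

def shuffled_blocks(blocks, rng=random):
    """Loop forever over shuffled rounds of blocks, avoiding the same
    block twice in a row across round boundaries (a simple smart shuffle)."""
    last = None
    while True:
        order = list(blocks)
        rng.shuffle(order)
        if len(order) > 1 and order[0] == last:
            order[0], order[1] = order[1], order[0]  # break the repeat
        for block in order:
            yield block
            last = block
```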
- AMBIENT MUSIC WITH LAYERS OF RANDOM PHRASES
Similar to the blocks in point #2, but each one has ambient layers + an additional layer that consists of musical phrases triggered randomly every x number of seconds.
Example: The exploration music in UNCHARTED: The Lost Legacy
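The random-phrase layer amounts to picking trigger times over the looping ambient bed. A sketch of that scheduling, with made-up parameter names:

```python
import random

def schedule_phrases(duration, min_gap, max_gap, rng=random):
    """Trigger times (seconds) for random musical phrases layered over a
    looping ambient bed: one phrase every min_gap..max_gap seconds."""
    times, t = [], 0.0
    while True:
        t += rng.uniform(min_gap, max_gap)
        if t >= duration:
            return times
        times.append(t)
```

In a real middleware setup, each trigger would also pick one phrase at random from a pool, so the same moment never sounds quite the same twice.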
- MUSIC BY STINGERS
When the player reaches certain areas, short musical pieces/phrases are triggered and separated with gaps of silence.
Example: God of War.
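The stinger logic can be reduced to "play once on entering an area, otherwise stay silent." A minimal sketch, with hypothetical area and file names:

```python
class StingerTrigger:
    """Plays a short stinger the first time the player enters an area,
    then returns silence until the area is re-armed."""

    def __init__(self, stingers):
        self.stingers = stingers   # area name -> audio file (hypothetical)
        self.played = set()

    def on_enter(self, area):
        if area in self.stingers and area not in self.played:
            self.played.add(area)
            return self.stingers[area]   # hand this file to the audio engine
        return None                      # the gap of silence between stingers
```

Here the gaps of silence are the default state; music only speaks when the player crosses into a new area.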
Obviously, these techniques can be combined. You can have randomized blocks + different layers per block + randomized musical phrases per block, etc. The sky is the limit, and it depends on many factors. These decisions are usually made between the composer, the audio director, the game designers, and the programmers before the music is written.
There’s a great article from Designing Music Now that I highly recommend.