96 kHz.org 
Advanced Audio Recording 
The Limits of Physical Modeling
At first sight it seems easy to calculate a mathematical wave that behaves like a real string, but a guitar does not consist only of strings: a large part of the sound stems from interactions between the strings and from crossover effects. The way the strings are mounted and damped by the fingers, as well as their interaction with the resonating body of the guitar, plays a big role. All these parts move against each other in a complex way, and this has to be calculated in detail in order to get a realistic result. Examples of realistic oscillation calculations can be found in electrical and mechanical engineering, such as the commonly used circuit simulator PSPICE or FEM-based simulators for mechanical stress and stability investigations. Classical synthesizers are nowadays emulated more or less perfectly by VAM representing the electrical circuit's behavior, but mechanical instruments are far more complex. The moving body of a guitar, for example, exhibits various resonances and torsions that put tension on the strings and cause a rhythmical energy interchange through the body and the air, similar to a piano. This leads to a mixture of vibrato and tremolo effects that follow fundamental physical rules. Calculating all of this in real time, in enough detail that all typical modulations of the waves come close to reality, requires the continuous calculation of all participating oscillators in the instrument and a phase-correct summation of their signals. Crossover effects and energy transport to adjacent regions of the three-dimensional instrument must also be taken into account. Only then does it become possible to emulate the complex behavior of real instruments, which differs from the simulated behavior that can be achieved with simple equations and sine-modulated sine waves.
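As a toy illustration of the energy interchange between coupled oscillators described above, the following Python sketch couples two slightly detuned string modes through a shared "bridge" spring. All numeric constants (coupling stiffness, damping, detuning) are invented for illustration, not measured from a real instrument.

```python
import math

# Two string modes coupled through a shared bridge spring k_c.
# All constants are assumptions chosen for illustration only.
f1, f2 = 110.0, 110.5            # slightly detuned fundamentals (Hz)
k1 = (2.0 * math.pi * f1) ** 2   # stiffness per unit mass
k2 = (2.0 * math.pi * f2) ** 2
k_c = 2000.0                     # bridge coupling stiffness (assumed)
damp = 0.5                       # light damping
dt = 1.0 / 96000.0               # one sample at 96 kHz

x1, v1 = 1.0, 0.0                # string 1 is "plucked"
x2, v2 = 0.0, 0.0                # string 2 starts at rest

energy2 = []
for _ in range(96000):           # one second of simulation
    a1 = -k1 * x1 - damp * v1 + k_c * (x2 - x1)
    a2 = -k2 * x2 - damp * v2 + k_c * (x1 - x2)
    v1 += a1 * dt; x1 += v1 * dt  # semi-implicit Euler step
    v2 += a2 * dt; x2 += v2 * dt
    energy2.append(0.5 * v2 * v2 + 0.5 * k2 * x2 * x2)

# Energy that started entirely in string 1 gradually appears in string 2
# and flows back again: the beating that colors the real instrument's tone.
print(max(energy2) > 1.0)
```

Even this two-oscillator sketch shows the amplitude modulation appearing "for free" from the coupling; a real model would need hundreds of such coupled modes.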
The Resource Problem
Current solutions presented in many articles published today suffer from limited calculation resources, so too many aspects must be left out. Even the fastest DSPs available have a fixed limit, which makes it hard to get beyond the results one obtains by superposing synthetic waves and adjusting the modulations manually. With FPGAs it is easier to calculate a given number of concurrent effects in parallel, as described here: Advantages of FPGAs. But another problem persists:
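A back-of-envelope calculation makes the serial DSP limit concrete. The throughput and per-oscillator cost below are assumed round numbers for illustration, not benchmarks of any particular chip:

```python
# How many oscillator updates fit into a serial DSP's budget?
# Both figures below are illustrative assumptions, not measurements.
sample_rate = 96_000           # samples per second
dsp_macs_per_s = 600_000_000   # assumed: a DSP doing 600 million MACs/s
macs_per_osc = 20              # assumed cost of one oscillator per sample

osc_serial = dsp_macs_per_s // (sample_rate * macs_per_osc)
print(osc_serial)              # all oscillators share one MAC unit

# On an FPGA each oscillator can get its own pipeline, so the count
# scales with the available logic area rather than one unit's clock.
```

Under these assumptions a serial DSP sustains only a few hundred oscillators, while the detailed models described above ask for far more concurrent state.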
Wood! We need wood!
A big issue is that it is necessary to understand an instrument and its behavior completely in order to find the right equations, so some investigation has to be done. While in theory all imaginable effects take place at the same time, some aspects have more impact than others with certain instruments. With guitars, harps and piano strings, the interaction of the strings is very important for the final sound and cannot be omitted. Without all these emerging modulations, all string instruments would sound more or less the same. Precise measurements have to be made to find the real damping parameters of the materials used. Wood in particular shows a nonlinear behavior that can hardly be described with common simple equations. Many instruments contain quite a large amount of wood that actively participates in the sound creation, so even electric guitars cannot be represented by a simple string emulation. Interestingly, wood emulation is not mentioned at all in most of these documents.
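One simple way to capture such nonlinear material losses is to let the damping coefficient depend on the instantaneous excursion. The sketch below compares a linear and an amplitude-dependent damping term; the coefficients are invented placeholders, not measured wood parameters:

```python
import math

# Amplitude-dependent damping as a crude stand-in for wood's nonlinear
# losses. d0 and d1 are invented illustration values, not measured data.
f = 220.0
k = (2.0 * math.pi * f) ** 2
dt = 1.0 / 96000.0

def tail_amplitude(d0, d1, steps=96000):
    """Simulate one decaying mode; return peak |x| over the last 10 %."""
    x, v = 1.0, 0.0
    tail = 0.0
    for n in range(steps):
        damp = d0 + d1 * abs(x)    # nonlinear: loss grows with excursion
        a = -k * x - damp * v
        v += a * dt                # semi-implicit Euler step
        x += v * dt
        if n >= steps - steps // 10:
            tail = max(tail, abs(x))
    return tail

linear_tail = tail_amplitude(d0=4.0, d1=0.0)
nonlin_tail = tail_amplitude(d0=4.0, d1=8.0)
print(nonlin_tail < linear_tail)   # nonlinear loss eats the attack faster
```

The nonlinear term mostly acts during the loud attack and fades as the note decays, which is qualitatively the kind of behavior a simple constant-Q equation cannot reproduce.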
From math to sound
Finally there is a problem which is discussed almost nowhere: the step from a mathematical equation to the real sound presented to the ear. Starting, for example, from an equation similar to a sine wave describing a certain point of a string, possibly its amplitude, this is not yet the final result. It still has to be processed, since instruments are always three-dimensional and so is their sound. Picking up one single dot of a string is definitely not enough. One will need a number of equations referring to numerous areas of an instrument, leading to several dimensions. In most cases this cannot easily be done with a 2D or 3D equation system. Think of a guitar with a neck, strings and a 3D corpus: all of these areas emit sound, and their amplitudes, phases and spectra differ from each other. How many equations will be needed? Theoretically, every single physical dot needs its own description; practically, this can only be achieved with simplification and modeling techniques of reduced complexity. But even when this is done, it is still not the end. Depending on the position of the listener, all the sounds emitted by all areas have to be correctly summed up (superimposed), taking phase and direction into account. Looking closely at the violin, for example, one experiences a strong influence of the listening angle on the sound. Movement of the instrument while playing is another issue. Through all this, a number of comb filter effects occur, and the complex transient behavior of the spectra becomes audible. Only at this point of the calculation does one obtain a sound that comes close to reality.
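The comb filtering that arises from phase-correct summation can be shown with a minimal two-path model: two radiating areas of the instrument reach the listener over different path lengths, and their sum cancels at regularly spaced frequencies. The geometry below is an arbitrary assumption for illustration:

```python
import math

# Two radiating areas (e.g. body and neck) at different distances from
# the listener. The distances are invented illustration values.
c = 343.0                      # speed of sound in air, m/s
d_body, d_neck = 2.00, 2.17    # path lengths to the listener (m), assumed
delay = (d_neck - d_body) / c  # extra travel time of the longer path

def level(freq):
    """Magnitude of a unit sine plus its delayed copy at frequency freq."""
    phase = 2.0 * math.pi * freq * delay
    return math.sqrt((1.0 + math.cos(phase)) ** 2 + math.sin(phase) ** 2)

notch = 1.0 / (2.0 * delay)    # first comb notch: half a cycle of delay
print(level(notch))            # near-complete cancellation
print(level(2.0 * notch))      # constructive: doubled amplitude
```

Moving the listener (or the instrument) changes `delay` and therefore shifts every notch, which is exactly why the perceived sound depends so strongly on the listening angle.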
Room for Room
In the same way, the room can be processed, taking other angles, spectra and damping effects into account. By superimposing these results, a real 3D instrument might be emulated once the calculation power is available. Think of all the reflections taking place inside an instrument like a piano or a contrabass.
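One common way to sketch such reflections is the image-source idea: a reflecting surface is replaced by a mirrored copy of the source, delayed and attenuated. The one-wall example below uses invented geometry and absorption figures:

```python
# Image-source sketch for a single reflecting wall. Room geometry and
# absorption are illustrative assumptions, not measured values.
c = 343.0                      # speed of sound, m/s
fs = 96000                     # sample rate
src, lst = 1.0, 4.0            # source and listener x-positions (m)
wall = 6.0                     # reflecting wall at x = 6 m
absorb = 0.3                   # wall absorbs 30 % of the energy (assumed)

direct = abs(lst - src)        # direct path: 3 m
image = 2.0 * wall - src       # mirror the source in the wall
reflected = abs(lst - image)   # reflected path: 7 m

ir = [0.0] * fs                # one second of impulse response
for dist, gain in ((direct, 1.0), (reflected, (1.0 - absorb) ** 0.5)):
    n = round(dist / c * fs)   # arrival time in samples
    ir[n] += gain / dist       # 1/r spreading loss

print(ir.index(max(ir)))       # the direct sound arrives first and loudest
```

Real rooms and instrument bodies need many such image sources (or a full wave simulation), but each added reflection is just another delayed, filtered copy to superimpose.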
Controlling the Model
Another big issue is the control of the model. Any mathematical model performed by software sound synthesis needs detailed real-time control of its parameters, and MIDI's transmission speed and accuracy are a showstopper here. When it comes to controlling an instrument in real time, about 10 to 20 parameters have to be updated continuously. See Limits of MIDI resolution to get an impression of how well this might work with current MIDI.
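A quick back-of-envelope calculation shows the bottleneck. The MIDI figures (31250 baud, 10 bits per byte on the wire, 3-byte Control Change messages, 7-bit values) come from the MIDI 1.0 specification; the parameter count is taken from the text above:

```python
# Why MIDI 1.0 is a bottleneck for detailed model control.
baud = 31250                   # MIDI 1.0 serial rate, bits per second
bits_per_msg = 3 * 10          # a Control Change is 3 bytes, 10 bits each
                               # (running status can shave a byte; ignored)
msgs_per_s = baud / bits_per_msg
print(int(msgs_per_s))         # total messages per second on one cable

params = 15                    # "about 10 to 20 parameters", per the text
rate_per_param = msgs_per_s / params
print(int(rate_per_param))     # updates per second for each parameter

cc_steps = 2 ** 7              # one CC carries a 7-bit value
print(cc_steps)                # only 128 discrete levels per parameter
```

Roughly 70 updates per second per parameter, quantized to 128 steps, is far below what a continuously bowed or bent note demands; that is the gap the linked article on MIDI resolution examines.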
Read more about VAM:

© 2002 J.S. 