|Advanced Audio Recording|
A Virtual Piano in an FPGA
Based on virtual analog modeling, it is possible to create far more detailed and realistic musical sounds by mathematically emulating the detailed behavior of all the oscillations, resonances and the coupling effects between them found in large instruments such as harps and pianos. The only requirement is enough computing power. Today VA synthesis mostly runs on DSPs, as shown here in the DSP project. Many VA synthesizers are currently available, but most of them offer only a limited number of voices: typically below 64, most of them below 32 oscillators. FPGAs might be a step towards overcoming these limits.
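As a minimal sketch of the building block such a model rests on, the following illustrative code (my own, not the project's; all names and constants are made up) implements a single damped two-pole resonator, the kind of per-partial oscillator a modeled string is assembled from:

```python
import math

def resonator(freq_hz, decay, sample_rate=48000, n_samples=8):
    """Damped two-pole resonator: one partial of a modeled string.

    Difference equation: y[n] = 2*r*cos(w)*y[n-1] - r^2*y[n-2],
    excited here by a unit impulse (a crude stand-in for the hammer).
    """
    w = 2 * math.pi * freq_hz / sample_rate
    r = decay                      # pole radius < 1 => energy lost per sample
    a1 = 2 * r * math.cos(w)
    a2 = -r * r
    y1 = y2 = 0.0
    out = []
    x = 1.0                        # impulse excitation
    for _ in range(n_samples):
        y = x + a1 * y1 + a2 * y2
        x = 0.0
        y2, y1 = y1, y
        out.append(y)
    return out

samples = resonator(440.0, decay=0.999)
```

A full string voice would run dozens of these in parallel, one per harmonic, which is exactly the kind of regular, replicated structure an FPGA handles well.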
Having a closer look at the harp, one discovers not only a large number of strings but also the wooden parts linking their movement, so that energy is exchanged between them. Based on the energy-fed self-oscillators described here, these effects can be modeled by not merely superposing the individual sounds but by feeding oscillation energy directly into the model: primarily from the strings moved by the musician, and also from the adjacent strings. Further interaction can be added by respecting the energy exchanged through the air. Even with a few simple equations it is possible to produce an effect like the "self-singing harp": if an air wave is applied from the outside, strings of an appropriate length start to produce sound themselves by resonance. With higher-quality equations they can also be made to react to harmonics and subharmonics, as observed in reality.
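The sympathetic-resonance effect behind the "self-singing harp" can be illustrated with a tiny simulation (again my own sketch, not the project's code): one string modeled as a damped mass-spring system driven by an external air-pressure wave. Driving at the string's own frequency pumps energy in; an off-resonance drive barely moves it:

```python
import math

def driven_string(freq_hz, drive_hz, steps=20000, dt=1.0 / 48000.0, damping=2.0):
    """One string as a damped mass-spring system (unit mass) driven by an
    external air-pressure wave. Returns the peak displacement reached.
    Constants are illustrative, not tuned to any real instrument."""
    w0 = 2 * math.pi * freq_hz   # string's own angular frequency
    wd = 2 * math.pi * drive_hz  # angular frequency of the air wave
    x = v = 0.0
    peak = 0.0
    for n in range(steps):
        force = math.sin(wd * n * dt)          # incoming air wave
        a = force - damping * v - w0 * w0 * x  # F = ma with m = 1
        v += a * dt                            # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

on_res = driven_string(220.0, 220.0)   # air wave matches the string
off_res = driven_string(220.0, 290.0)  # air wave misses the string
# the matched string responds far more strongly than the mismatched one
```

In the harp model the "air wave" term would come from the summed output of the other strings rather than from an external source, but the energy-feeding mechanism is the same.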
The first harp was based on a simple guitar model derived from a set of equations I had previously introduced in my Motorola DSP system. Currently I am testing the algorithms on my Chameleon hardware:
The piano also shows a large number of interactions between strings, since strings of the same frequency are grouped together and interact strongly with each other. With the right starting parameters, one string can take over a full 33% of the energy of a triple that was previously held by only two of them when playing in sostenuto style. For each string, 21 partial oscillators were used, plus 6 oscillators for the interlinking and 13 for emulating the wood that carries the strings and the hammer. The voice of the hammer, as well as the high-frequency non-tonal transients, have only a low volume and disappear very quickly after being introduced, but they add THE relevant sound to the mix. Extending the system so that harmonics up to the limit of hearing are used, it seems necessary to operate with up to 80 or perhaps 100 voices to run a string triple correctly, leading to a total of 1000 voices for a piano played with ten fingers. Taking into account sustain and residual strings that are still "on" when new keys are played, 3000-5000 sounds would be needed to model this appropriately. The wooden case of the whole piano, with all the partial frequencies and resonances one could think of, could also be modeled (which I have not done so far).
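The energy takeover inside a string triple can be sketched with a lumped toy model (illustrative only; the constants `k` and `couple` are made up): three identical strings coupled through a shared bridge, two of them struck by the hammer, the third initially silent. The silent string gradually draws energy out of the other two through the bridge:

```python
def unison_triple(steps=48000, dt=1.0 / 48000.0, k=1.9e6, couple=8e4):
    """Three identical strings of a piano unison group, unit mass each,
    coupled through a shared bridge that moves with their mean displacement.
    Strings 0 and 1 are struck; string 2 starts at rest. Returns the peak
    energy string 2 reaches (total initial energy is 1.0)."""
    x = [0.0, 0.0, 0.0]
    v = [1.0, 1.0, 0.0]        # hammer gives strings 0 and 1 initial velocity
    e2_peak = 0.0
    for _ in range(steps):
        bridge = sum(x) / 3.0  # bridge follows the mean string displacement
        for i in range(3):
            a = -k * x[i] - couple * (x[i] - bridge)
            v[i] += a * dt     # semi-implicit Euler keeps energy stable
        for i in range(3):
            x[i] += v[i] * dt
        e2 = 0.5 * (v[2] ** 2 + k * x[2] ** 2)
        e2_peak = max(e2_peak, e2)
    return e2_peak

peak = unison_triple()
# the initially silent string periodically holds a large share of the energy
```

Because the model has no damping, the energy beats back and forth between the strings; with damping added, the exchange settles into the slow compound decay that gives the real piano its characteristic sustain.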
State of the art:
The current piano plays only 16 fixed notes (C3 major upwards plus two F#) with air modeling switched off because of the limited FPGA power. This is already far more than the number of voices I could produce in the DSP system. This is the DSP system synthesized on the FPGA platform:
Further work is needed to tune the parameters, in particular the damping and the amount of energy the strings exchange through the wood and the air.
FPGAs can help to produce many similar voices in real time and work best for instruments with a large number of primary sound generators, such as large string-based instruments. The interaction of the strings can be modeled easily with energy exchange methods known from electrical engineering.
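One such method borrowed from electrical engineering is the lossless scattering junction of transmission-line (and digital waveguide) theory, where N lines meeting in parallel exchange wave energy without creating or destroying any. The sketch below (my own, assuming a standard parallel junction; it is not the project's implementation) computes the outgoing waves from the incoming ones:

```python
def scattering_junction(incoming, impedances):
    """Lossless N-port parallel scattering junction.

    Each port carries an incoming wave p+ on a line of the given impedance.
    The junction value is the admittance-weighted mean of the incoming
    waves (times 2), and each outgoing wave is p- = pj - p+.
    Total power in equals total power out."""
    admittances = [1.0 / z for z in impedances]
    pj = 2.0 * sum(g * p for g, p in zip(admittances, incoming)) / sum(admittances)
    return [pj - p for p in incoming]

# one wave arrives on port 0 of three equal-impedance lines:
out = scattering_junction([1.0, 0.0, 0.0], [1.0, 1.0, 1.0])
# part of the wave reflects back, the rest is transmitted to the other lines
```

In hardware this is attractive because the junction reduces to a few multiply-accumulate operations per sample, which maps directly onto FPGA DSP blocks.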
|© 2005 J.S.|