Voice
The Voice element resides inside a Synth element and acts as a blueprint for each voice in the Synth. If a polyphonic Synth has a single Voice element, that element is used for all voices. If the Synth has several Voice elements, they are picked in sequential order for the incoming Note events, which makes it possible for a synth to use different synthesis models simultaneously for different notes/voices.

The Voice element mixes the output of all its child nodes in the same manner as a Mixer element, and its gain can be set to control the output of that voice. The output of the Voice element is connected to the Synth's output. If the Voice has an Envelope controlling the gain of a GainNode, that Envelope is used to trigger the sound; if not, the gain of the Voice is set to 1 on Note On and 0 on Note Off. This makes sure the Voice is silent until the Synth receives events.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Audio version="1.0" timeUnit="ms">
  <Synth follow="webaudio-keyboard, change, e.note" voices="1">
    <Voice>
      <Chain>
        <OscillatorNode type="sawtooth">
          <frequency follow="MIDI"></frequency>
        </OscillatorNode>
        <GainNode>
          <gain>
            <Envelope adsr="100, 200, 50, 200" max="1"></Envelope>
          </gain>
        </GainNode>
      </Chain>
    </Voice>
  </Synth>
</Audio>
```
This example shows a Synth with one Voice element that contains a Chain element. The Chain contains an OscillatorNode connected through a GainNode. The frequency of the OscillatorNode follows the incoming MIDI events, and the Envelope controlling the gain property of the GainNode is triggered by the incoming Note On and Note Off events.
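To illustrate the sequential voice allocation described above, the same structure can be extended with several Voice elements. The following sketch reuses the attributes from the example above; the `voices="2"` count, the square waveform, and the second Voice are illustrative assumptions, not taken from the original example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Audio version="1.0" timeUnit="ms">
  <!-- Two Voice elements: incoming Note events are assigned to them in
       sequential order, so alternating notes use different synthesis models. -->
  <Synth follow="webaudio-keyboard, change, e.note" voices="2">
    <Voice>
      <Chain>
        <OscillatorNode type="sawtooth">
          <frequency follow="MIDI"></frequency>
        </OscillatorNode>
        <GainNode>
          <gain>
            <!-- The Envelope triggers this voice on Note On / Note Off. -->
            <Envelope adsr="100, 200, 50, 200" max="1"></Envelope>
          </gain>
        </GainNode>
      </Chain>
    </Voice>
    <!-- This Voice has no Envelope, so the Synth instead switches the
         Voice gain to 1 on Note On and 0 on Note Off. -->
    <Voice>
      <Chain>
        <OscillatorNode type="square">
          <frequency follow="MIDI"></frequency>
        </OscillatorNode>
      </Chain>
    </Voice>
  </Synth>
</Audio>
```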