Advanced Meeting Controls
Caution
This documentation may no longer be current. See the updated content on our Developer Portal.
This article describes the advanced meeting controls available in the Webex Web SDK for meetings.
The following advanced meeting controls are available:
- Screen recording
- Lock and unlock meetings
- Send DTMF tones
- Transcription
- Access PSTN phone audio
- Effects for audio and video.
Once a meeting is created and joined, the screen recording controls can be initialized.
To start a recording, use the Meeting object's startRecording() method:

await meeting.startRecording();

| Asynchronous | Yes |
| --- | --- |
| Parameters | None |
| Returns | Promise<undefined> |
For a recording in progress, pause the recording using the Meeting object's pauseRecording() method:

await meeting.pauseRecording();

| Asynchronous | Yes |
| --- | --- |
| Parameters | None |
| Returns | Promise<undefined> |
To resume recording, use the Meeting object's resumeRecording() method:

await meeting.resumeRecording();

| Asynchronous | Yes |
| --- | --- |
| Parameters | None |
| Returns | Promise<undefined> |
To stop recording, use the Meeting object's stopRecording() method:

await meeting.stopRecording();

| Asynchronous | Yes |
| --- | --- |
| Parameters | None |
| Returns | Promise<undefined> |
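These four calls can be combined into a single recording flow. The following is a minimal sketch, assuming meeting is a joined Meeting object and that the points at which you pause, resume, and stop are driven by your own UI:

```javascript
// Minimal sketch of a recording lifecycle for a joined Meeting object.
async function recordSegment(meeting) {
  try {
    await meeting.startRecording();

    // ... pause and resume at points chosen by your app ...
    await meeting.pauseRecording();
    await meeting.resumeRecording();

    // ... and stop when the meeting wraps up.
    await meeting.stopRecording();
  } catch (error) {
    // Hypothetical error handling: surface the failure in your own UI.
    console.error('Recording control failed:', error);
  }
}
```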
When you lock a meeting, no other participants can join unless you admit them. Only moderators of a meeting can lock and unlock meetings.
To lock a meeting, use the Meeting object's lockMeeting() method:

await meeting.lockMeeting();

| Asynchronous | Yes |
| --- | --- |
| Parameters | None |
| Returns | Promise<undefined> |
To unlock a meeting, use the Meeting object's unlockMeeting() method:

await meeting.unlockMeeting();

| Asynchronous | Yes |
| --- | --- |
| Parameters | None |
| Returns | Promise<undefined> |
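As a sketch, a moderator's lock toggle could wrap the two methods; the isLocked flag here is application state, not a documented SDK property:

```javascript
// Hypothetical lock toggle for a moderator; the app tracks lock state itself.
let isLocked = false;

async function toggleLock(meeting) {
  if (isLocked) {
    await meeting.unlockMeeting();
  } else {
    await meeting.lockMeeting();
  }
  isLocked = !isLocked;
}
```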
To send DTMF tones in a meeting, use the Meeting object's sendDTMF() method and pass it a string representing the tone you want to send:

await meeting.sendDTMF(DTMFStringToBeSent);

| Asynchronous | Yes |
| --- | --- |
| Parameters | A string representing the DTMF tone(s) to send |
| Returns | Promise<undefined> |
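For example, to respond to an audio prompt you might send a digit sequence followed by the pound key; the '123456#' value below is a placeholder, not a real PIN:

```javascript
// Hypothetical example: send a placeholder PIN followed by '#'.
await meeting.sendDTMF('123456#');
```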
The basic steps for enabling and processing meeting transcriptions are:
- Initialize the Webex object with an enableAutomaticLLM meeting configuration.
- Enable the Webex Assistant when the meeting is scheduled using the /meeting REST API.
- Add users to the meeting.
When a user initializes the Webex object with enableAutomaticLLM set to true, the Webex Meetings SDK will automatically establish a socket connection between the browser and the transcription backend:
webex.init({
meetings: {
enableAutomaticLLM: true
}
});
Listen for the meeting:transcription:connected event to determine if the socket connection has been successfully established:
meeting.on('meeting:transcription:connected', () => {
console.log('Transcription Websocket is connected');
});
When the event is received, your app can enable transcription.
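For example, a minimal sketch that enables transcription as soon as the connection is reported (startTranscription() is described below):

```javascript
// Sketch: start transcription once the transcription socket is connected.
meeting.on('meeting:transcription:connected', async () => {
  await meeting.startTranscription();
});
```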
To start receiving transcriptions during a meeting, use the Meeting object's startTranscription() method with an optional options parameter:
await meeting.startTranscription(options);
| Parameter Name | Description | Required | Sample value | Type |
| --- | --- | --- | --- | --- |
| options | Configuration object to be provided while starting transcription. | No | { spokenLanguage?: String } | Object |
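For example, assuming 'en' is one of the supported ISO 639 language codes for your site, you could request English as the spoken language:

```javascript
// Sketch: request English as the spoken language ('en' is an assumed ISO 639 code).
await meeting.startTranscription({ spokenLanguage: 'en' });
```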
When you start the transcription, you'll receive a meeting:receiveTranscription:started event with a payload containing a list of supported spoken and caption languages:
meeting.on('meeting:receiveTranscription:started', (payload) => {
console.log(payload.captionLanguages);
console.log(payload.spokenLanguages);
});
The captionLanguages and spokenLanguages arrays contain the language codes for the supported languages. The language codes conform to the ISO 639 language code specification.
To receive transcriptions, listen for the meeting:caption-received event, which contains the captions generated from the meeting's audio:
meeting.on('meeting:caption-received', (payload) => {
//use payload to display captions
});
Here's an example of a transcription payload:
{
"captions": [
{
"id": "88e1b0c9-7483-b865-f0bd-a685a5234943",
"isFinal": true,
"text": "Hey, everyone.",
"currentSpokenLanguage": "en",
"timestamp": "1:22",
"speaker": {
"speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
"name": "Name"
}
},
{
"id": "e8fd9c60-1782-60c0-92e5-d5b22c80df2b",
"isFinal": true,
"text": "That's awesome.",
"currentSpokenLanguage": "en",
"timestamp": "1:26",
"speaker": {
"speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
"name": "Name"
}
},
{
"id": "be398e11-cf08-92e7-a42d-077ecd60aeea",
"isFinal": true,
"text": "आपका नाम क्या है?",
"currentSpokenLanguage": "hi",
"timestamp": "1:55",
"speaker": {
"speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
"name": "Name"
}
},
{
"id": "84adc1a7-b3c3-5a49-0588-aa787b1437eb",
"isFinal": true,
"translations": {
"en": "What is your name?"
},
"text": "आपका नाम क्या है?",
"currentSpokenLanguage": "hi",
"timestamp": "2:11",
"speaker": {
"speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
"name": "Name"
}
},
{
"id": "84c89387-cd5d-ce15-1867-562c0a91155f",
"isFinal": true,
"translations": {
"hi": "तुम्हारा नाम क्या है?"
},
"text": "What's your name?",
"currentSpokenLanguage": "en",
"timestamp": "2:46",
"speaker": {
"speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
"name": "Name"
}
}
],
"interimCaptions": {
"88e1b0c9-7483-b865-f0bd-a685a5234943": [],
"e8fd9c60-1782-60c0-92e5-d5b22c80df2b": [],
"be398e11-cf08-92e7-a42d-077ecd60aeea": [],
"84adc1a7-b3c3-5a49-0588-aa787b1437eb": [],
"84c89387-cd5d-ce15-1867-562c0a91155f": []
}
}
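Based on the payload shape above, a caption handler might render only finalized captions and prefer a translation when one is present. The following is a minimal sketch; renderCaption() is a hypothetical UI helper, not part of the SDK:

```javascript
// Sketch: render finalized captions from a meeting:caption-received payload.
meeting.on('meeting:caption-received', (payload) => {
  payload.captions
    .filter((caption) => caption.isFinal)
    .forEach((caption) => {
      // Prefer a translation if one exists; otherwise show the original text.
      const translations = caption.translations ? Object.values(caption.translations) : [];
      renderCaption({
        speaker: caption.speaker.name,
        timestamp: caption.timestamp,
        text: translations[0] || caption.text,
      });
    });
});
```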
During the meeting, if you'd like to change the spoken language, use the Meeting object's setSpokenLanguage() method:
const currentSpokenLanguage = await meeting.setSpokenLanguage(selectedLanguage);
Choose the selectedLanguage when you start the transcription. If you select a spoken language and speak in that language, the system displays the captions in that same language. If you set the caption language to a different language at any point, a user speaking in this new language will see the captions in that different language.
During the meeting, use the Meeting object's setCaptionLanguage() method to set the caption language:
const currentCaptionLanguage = await meeting.setCaptionLanguage(selectedLanguage);
You'll choose the selectedLanguage language code at the start of the transcription. The system translates any speech, no matter the language, into this selected language.
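Putting the two methods together, the following sketch picks languages from the lists delivered in the meeting:receiveTranscription:started payload; the 'hi' and 'en' codes are assumptions for illustration:

```javascript
// Sketch: choose spoken and caption languages from the supported lists
// ('hi' and 'en' are assumed ISO 639 codes).
meeting.on('meeting:receiveTranscription:started', async (payload) => {
  if (payload.spokenLanguages.includes('hi')) {
    await meeting.setSpokenLanguage('hi'); // participants will speak Hindi
  }
  if (payload.captionLanguages.includes('en')) {
    await meeting.setCaptionLanguage('en'); // captions translated to English
  }
});
```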
To stop receiving transcriptions, use the Meeting object's stopTranscription() method:
meeting.stopTranscription();
In a meeting, if the audio isn't clear or there are issues using device audio (desktop app, mobile app, or web app), Webex lets you dial in to a meeting via PSTN. The following controls are available:
- Use Phone Audio
- Disconnect Phone Audio
For more information, see Use PSTN Phone Audio for Meetings.
Webex meetings support the following three effects:
- Background noise removal (BNR) for Webex audio.
- Background blur for Webex video.
- Virtual backgrounds for Webex video.
For more information, see Meeting Audio & Video Effects.