WebRTC Input and Outputs
This guide is a continuation of the Input and Outputs article. It shows, with code examples, how to publish your source stream to a WebRTC room for translation and how to subscribe to the translated tracks published by Palabra.
Note: There are two sections: WebRTC input and WebRTC output. They are independent - you can use either without the other. For example, you can publish your source broadcast via RTMP (or another input) and output via WebRTC, or publish via WebRTC and output to RTMP/HLS/SRT. You don't need to use both WebRTC input and output if one meets your requirements.
WebRTC Input: Publishing the source track
If you specified the webrtc_push protocol as the INPUT for your broadcast, the Palabra API responds with a url and token in the input section of the response:
{
  "ok": true,
  "data": {
    "id": "c1236d4d-73eb-409d-b8f5-3780fb0d8a10",
    // ...
    "input": {
      "protocol": "webrtc_push",
      "url": "<WEBRTC_SERVER>",
      "token": "<ACCESS_TOKEN>"
    }
  }
}
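For example, assuming you have already parsed the response above into a response object (the variable name is illustrative), the connection parameters can be read like this:
// A minimal sketch: `response` stands for the parsed JSON shown above.
const { url, token } = response.data.input;
// Pass `url` and `token` to room.connect(...) as shown in the next step.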
Use the url and token values to connect to the Palabra-hosted WebRTC room and publish your source MediaStream audio/video tracks using Livekit:
Install the Livekit client:
npm install livekit-client
Connect to the Palabra WebRTC room:
import { Room } from "livekit-client";

const connectTranslationRoom = async (url: string, token: string): Promise<Room> => {
  try {
    const room = new Room();
    await room.connect(url, token, { autoSubscribe: true });
    return room;
  } catch (e) {
    console.error(e);
    throw e;
  }
};
Publish the source MediaStream track:
import { LocalAudioTrack } from "livekit-client";

// Fetch a MediaStream from the device
const stream = await navigator.mediaDevices.getUserMedia({ audio: { channelCount: 1 } });

// Get the LocalAudioTrack from your MediaStream
const track = new LocalAudioTrack(stream.getAudioTracks()[0]);

// Publish the track to the WebRTC room
const publishAudioTrack = async (room, track) => {
  try {
    await room.localParticipant.publishTrack(track, {
      dtx: false, // Important to keep false
      red: false,
      audioPreset: {
        maxBitrate: 32000,
        priority: "high"
      }
    });
  } catch (e) {
    console.error("Error while publishing audio track:", e);
    throw e;
  }
};
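Putting these pieces together, a minimal publish flow could look like the sketch below; ROOM_URL and ROOM_TOKEN are placeholders for the url and token values from the input section of the API response.
// A minimal sketch of the full publish flow. ROOM_URL and ROOM_TOKEN
// are placeholders for the values returned in the input section.
const room = await connectTranslationRoom(ROOM_URL, ROOM_TOKEN);
await publishAudioTrack(room, track);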
After you publish your source track, Palabra starts the translation pipeline and re‑streams the translated audio/video to your configured outputs.
WebRTC Output: Subscribing to translated tracks
If you selected webrtc_push as an OUTPUT, do the following to receive your translated stream:
- Call the Get WebRTC room data request to get the server url and access token.
- Connect to the WebRTC server using Livekit:
npm install livekit-client
import { Room } from "livekit-client";
const connectTranslationRoom = async (url: string, token: string): Promise<Room> => {
  try {
    const room = new Room();
    await room.connect(url, token, { autoSubscribe: true });
    return room;
  } catch (e) {
    console.error(e);
    throw e;
  }
};
- Add a handler for the TrackSubscribed event:
import { RoomEvent } from "livekit-client";

// Handler to play the subscribed tracks in a web browser (example)
const playTranslationInBrowser = (track) => {
  if (track.kind === "audio") {
    const mediaStream = new MediaStream([track.mediaStreamTrack]);
    const audioElement = document.getElementById("remote-audio"); // Your HTML audio element
    if (audioElement) {
      audioElement.srcObject = mediaStream;
      audioElement.play();
    } else {
      console.error("Audio element not found!");
    }
  }
};
// Add a handler for a TrackSubscribed event
room.on(RoomEvent.TrackSubscribed, playTranslationInBrowser);
Palabra publishes translated tracks to the WebRTC room. With { autoSubscribe: true }, you are automatically subscribed to new tracks as they appear. Handle RoomEvent.TrackSubscribed to access each track (for example, to play its audio in the browser).
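End to end, a minimal subscription flow could look like the sketch below; ROOM_URL and ROOM_TOKEN are placeholders for the values returned by the Get WebRTC room data request, and the final disconnect is optional cleanup once you no longer need the translation.
// A minimal sketch of the output flow. ROOM_URL and ROOM_TOKEN are
// placeholders for the Get WebRTC room data response values.
const room = await connectTranslationRoom(ROOM_URL, ROOM_TOKEN);

// Play each translated audio track as Palabra publishes it to the room
room.on(RoomEvent.TrackSubscribed, playTranslationInBrowser);

// ...later, when the translation is no longer needed:
await room.disconnect();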