Table of Contents
This section contains tutorials showing how to use the Kurento framework to build different types of WebRTC and multimedia applications.
These tutorials have been created for learning purposes. They lack comprehensive error handling and any kind of sophisticated session management. As such, these tutorials should not be used in production environments; they only show example code for you to study, so you can achieve what you want with your own code.
Use at your own risk!
These tutorials come in three flavors:
Java: Applications where clients interact with a Spring Boot-based application server, which hosts the logic orchestrating the communication among clients and controls Kurento Media Server capabilities.
To run the Java tutorials, you need to first install the Java JDK and Maven:
sudo apt-get update && sudo apt-get install --no-install-recommends --yes \
    git \
    default-jdk \
    maven
Java tutorials are written on top of Spring Boot, so they already include most features expected from a full-fledged service, such as a web server or logging support.
Spring Boot is also able to create a “fully executable jar”, a standalone executable built out of the application package. This executable comes with built-in support for commands such as start, stop, or restart, so it can be used as a system service with either init.d (System V) or systemd. For more info, refer to the Spring Boot documentation and online resources such as this Stack Overflow answer.
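As a sketch of how the “fully executable jar” is enabled, assuming the project already declares the Spring Boot Maven plugin (as the Java tutorials do), a single configuration flag in pom.xml is enough:

```xml
<!-- pom.xml: embed a launch script into the packaged jar, so the artifact -->
<!-- can be run directly or symlinked into /etc/init.d as a system service. -->
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <executable>true</executable>
    </configuration>
</plugin>
```

After running mvn package, the resulting jar (name depends on your project) can be launched directly, e.g. ./my-tutorial.jar start, or linked from /etc/init.d.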
Node.js: Applications where clients interact with an application server built with Node.js. The application server holds the logic orchestrating the communication among the clients and controls Kurento Media Server capabilities for them.
These tutorials require HTTPS in order to use WebRTC. The instructions below provide further information about how to enable application security.
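For local development, one common way to satisfy the HTTPS requirement is to generate a self-signed certificate. This is an illustration only, not part of the tutorials themselves; the file names are arbitrary, and it assumes openssl is installed:

```shell
# Generate a self-signed certificate and private key for https://localhost.
# Suitable for development only: browsers will show a security warning,
# which you must accept manually before WebRTC features become available.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout server.key -out server.crt \
    -days 365 -subj "/CN=localhost"
```

The resulting server.key and server.crt files can then be passed to the application server's TLS configuration, as described by each tutorial's own setup instructions.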
This is one of the simplest WebRTC applications you can create with Kurento. It implements a WebRTC loopback (a WebRTC media stream going from the client to Kurento Media Server and back to the client).
This web application consists of a WebRTC loopback video communication, adding a funny hat over detected faces. This is an example of a Computer Vision and Augmented Reality filter.
This web application showcases reception of an incoming RTP or SRTP stream, and playback via a WebRTC connection.
Video broadcasting for WebRTC. One peer transmits a video stream and N peers receive it.
This web application is a videophone (call one to one) based on WebRTC.
This is an enhanced version of the One-To-One application, with video recording and Augmented Reality.
This tutorial connects several participants to the same video conference. A group call consists (on the media server side) of N*N WebRTC endpoints, where N is the number of clients connected to that conference: each participant maintains one sending endpoint plus N-1 receiving endpoints, one per remote peer.
This tutorial detects and draws faces present in the webcam video. It connects two filters: KmsDetectFaces and KmsShowFaces.
This tutorial injects video into a QR filter and then sends the stream over WebRTC. QR detection events are delivered by means of WebRTC Data Channels, to be displayed in the browser.
This tutorial shows how text messages sent from the browser can be delivered over Data Channels, to be displayed together with the loopback video.
This tutorial has two parts:
A WebRTC loopback records the stream to disk.
The stream is played back.
Users can choose which type of media to send and record: audio, video or both.
This is similar to the recording tutorial, but using the repository to store metadata.
This tutorial implements a WebRTC loopback and shows how to collect WebRTC statistics.
This web application consists of a WebRTC video communication in mirror (loopback) with a chroma filter element.
This web application consists of a WebRTC video communication in mirror (loopback) with a crowd detector filter. This filter detects people agglomeration in video streams.
This web application consists of a WebRTC video communication in mirror (loopback) with a plate detector filter element.