Kurento is a low-level platform to create WebRTC applications from scratch. You will be responsible for managing STUN/TURN servers, networking, scalability, etc. If you are new to WebRTC, we recommend using OpenVidu instead.

OpenVidu is an easier-to-use, higher-level Open Source platform based on Kurento.

JavaScript Module - Crowd Detector Filter


Bower dependencies are not yet upgraded for Kurento 7.0.0.

Kurento tutorials that use pure browser JavaScript need to be rewritten to drop the deprecated Bower service and instead use a web resource packer. This has not been done, so these tutorials won’t be able to download the dependencies they need to work. PRs would be appreciated!

This web application consists of a WebRTC video communication in mirror (loopback) with a crowd detector filter. This filter detects people agglomeration in video streams.


Web browsers require using HTTPS to enable WebRTC, so the web server must use SSL and a certificate file. For instructions, check Configure JavaScript applications to use HTTPS.

For convenience, this tutorial already provides dummy self-signed certificates (which will cause a security warning in the browser).

Running this example

First of all, install Kurento Media Server: Installation Guide. Start the media server and leave it running in the background.

Install Node.js, Bower, and a web server on your system:

curl -sSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs
sudo npm install -g bower
sudo npm install -g http-server

Here, we suggest using the simple Node.js http-server, but you could use any other web server.

You also need the source code of this tutorial. Clone it from GitHub, then start the web server:

git clone https://github.com/Kurento/kurento.git
cd kurento/tutorials/javascript-browser/crowddetector/
git checkout main
bower install
http-server -p 8443 --ssl --cert keys/server.crt --key keys/server.key

When your web server is up and running, use a WebRTC-compatible browser (Firefox, Chrome) to open the tutorial page:

  • If KMS is running in your local machine: open https://localhost:8443/

  • If KMS is running in a remote machine: open https://localhost:8443/ and configure Secure WebSocket as described below.



By default, this tutorial works out of the box by using non-secure WebSocket (ws://) to establish a client connection between the browser and KMS. This only works for localhost. It will fail if the web server is remote.

If you want to run this tutorial from a remote web server, then you have to do 3 things:

  1. Configure Secure WebSocket in KMS. For instructions, check Signaling Plane security (WebSocket).

  2. In index.js, change the ws_uri to use Secure WebSocket (wss:// instead of ws://) and the correct KMS port (TCP 8433 instead of TCP 8888).

  3. As explained in the link from step 1, if you configured KMS to use Secure WebSocket with a self-signed certificate you now have to browse to https://{KMS_HOST}:8433/kurento and click to accept the untrusted certificate.
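The change described in step 2 can be sketched as follows. This is only an illustration: the exact variable layout in index.js may differ, and the host name here is a placeholder.

```javascript
// Sketch of the step-2 change in index.js (names assumed from the tutorial text).
// Before: non-secure WebSocket, which only works when KMS runs on localhost:
// const ws_uri = 'ws://127.0.0.1:8888/kurento';

// After: Secure WebSocket against the remote KMS host and its WSS port:
const KMS_HOST = 'kms.example.com'; // placeholder for your media server's address
const ws_uri = 'wss://' + KMS_HOST + ':8433/kurento';

console.log(ws_uri);
```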


By default, this tutorial assumes that Kurento Media Server can download the overlay image from a localhost web server. It will fail if the web server is remote (from the point of view of KMS). This includes the case of running KMS from Docker.

If you want to run this tutorial with a remote Kurento Media Server (including running KMS from Docker), then you have to provide it with the correct IP address of the application’s web server:

  • In index.js, change logo_uri to the correct one where KMS can reach the web server.
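As a sketch of that change (the host and image path below are placeholders, not values from this tutorial):

```javascript
// Sketch only: 'logo_uri' comes from the bullet above; host and path are hypothetical.
// 'localhost' would resolve inside the KMS host or container, not your machine,
// so point the URI at the web server's real, KMS-reachable address:
const WEB_SERVER_HOST = '192.168.0.10'; // placeholder: address serving this tutorial
const logo_uri = 'https://' + WEB_SERVER_HOST + ':8443/img/logo.png'; // hypothetical path

console.log(logo_uri);
```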

Understanding this example

This application uses computer vision and augmented reality techniques to detect a crowd in a WebRTC stream.

The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video camera stream (the local client-side stream) and another for the mirror (the remote stream). The video camera stream is sent to Kurento Media Server, which processes it and sends it back to the client as a remote stream. To implement this, we need to create a Media Pipeline composed of the following Media Elements:

WebRTC with crowdDetector filter Media Pipeline


The complete source code of this demo can be found in GitHub.

This example is a modified version of the Magic Mirror tutorial. In this case, the demo uses a CrowdDetector filter instead of a FaceOverlay filter.

To set up a CrowdDetectorFilter, first we need to define one or more regions of interest (ROIs). A ROI delimits the zone within the video stream in which crowds are going to be tracked. To define a ROI, we need to configure at least three points. These points are defined in relative terms (0 to 1) of the video width and height.
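To make the relative coordinates concrete, here is a small illustrative helper (ours, not part of the Kurento API) that maps a relative ROI point to pixel coordinates for a given video resolution:

```javascript
// Illustrative only: how a relative ROI point maps to pixel coordinates.
function toPixels(relativePoint, width, height) {
  return { x: relativePoint.x * width, y: relativePoint.y * height };
}

// For a 640x480 stream, the relative point (0.5, 0.5) is the frame center:
console.log(toPixels({ x: 0.5, y: 0.5 }, 640, 480)); // { x: 320, y: 240 }
```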

CrowdDetectorFilter performs two actions in the defined ROIs. On the one hand, detected crowds are colored over the stream. On the other hand, different events are raised and sent to the client.

To understand crowd coloring, we can take a look at a screenshot of a running example of CrowdDetectorFilter. In the picture below, we can see that there are two ROIs (bounded with white lines in the video). On these ROIs, we can see two different colors over the original video stream: red zones are drawn over detected crowds that are static (or moving slowly), while blue zones are drawn over detected crowds that are moving fast.

Crowd detection sample


Regarding crowd events, there are three types of events, namely:

  • CrowdDetectorFluidityEvent. Event raised when a certain level of fluidity is detected in a ROI. Fluidity can be seen as the level of general movement in a crowd.

  • CrowdDetectorOccupancyEvent. Event raised when a level of occupancy is detected in a ROI. Occupancy can be seen as the level of agglomeration in the stream.

  • CrowdDetectorDirectionEvent. Event raised when a movement direction is detected in a ROI by a crowd.

Both fluidity and occupancy are quantified as a relative metric from 0 to 100%. Then, both attributes are qualified into three categories: i) Minimum (min); ii) Medium (med); iii) Maximum (max).

Regarding direction, it is quantified as an angle (0-360º), where 0 is the direction from the central point of the video to the top (i.e., north), 90 corresponds to the direction to the right (east), 180 is the south, and 270 is the west.
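The angle convention above can be illustrated with a small helper (ours, not part of the Kurento API) that maps a reported direction angle to its compass label:

```javascript
// Illustrative helper: map a direction angle to a compass label, using the
// convention described above (0=north, 90=east, 180=south, 270=west).
function compassLabel(directionAngle) {
  const labels = ['north', 'east', 'south', 'west'];
  // Each compass sector spans 90 degrees, centered on its axis.
  const sector = Math.round((directionAngle % 360) / 90) % 4;
  return labels[sector];
}

console.log(compassLabel(0));   // north
console.log(compassLabel(95));  // east
console.log(compassLabel(270)); // west
```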

With all these concepts, now we can check out the JavaScript code of this demo. As depicted in the snippet below, we create a ROI by adding RelativePoint instances to a list. Each ROI is then stored in a list of RegionOfInterest instances.

Then, each ROI should be configured. To do that, we have the following methods:

  • fluidityLevelMin: Fluidity level (0-100%) for the category minimum.

  • fluidityLevelMed: Fluidity level (0-100%) for the category medium.

  • fluidityLevelMax: Fluidity level (0-100%) for the category maximum.

  • fluidityNumFramesToEvent: Number of consecutive frames detecting a fluidity level required to raise an event.

  • occupancyLevelMin: Occupancy level (0-100%) for the category minimum.

  • occupancyLevelMed: Occupancy level (0-100%) for the category medium.

  • occupancyLevelMax: Occupancy level (0-100%) for the category maximum.

  • occupancyNumFramesToEvent: Number of consecutive frames detecting an occupancy level required to raise an event.

  • sendOpticalFlowEvent: Boolean value that indicates whether or not direction events are going to be tracked by the filter. Be careful with this feature, since it is very demanding in terms of resource usage (CPU, memory) in the media server. Set this parameter to true only when you are going to need direction events on the client side.

  • opticalFlowNumFramesToEvent: Number of consecutive frames detecting a direction required to raise an event.

  • opticalFlowNumFramesToReset: Number of consecutive frames detecting an occupancy level after which the counter is reset.

  • opticalFlowAngleOffset: Counterclockwise offset of the angle. This parameter is useful to move the default axis for directions (0º=north, 90º=east, 180º=south, 270º=west).


Modules can have options. To configure these options, you'll need to get their constructor. In JavaScript and Node.js, you have to use kurentoClient.getComplexType('qualifiedName'). There is an example in the code.

All in all, the media pipeline of this demo is implemented as follows:

const RegionOfInterest       = kurentoClient.getComplexType('crowddetector.RegionOfInterest')
const RegionOfInterestConfig = kurentoClient.getComplexType('crowddetector.RegionOfInterestConfig')
const RelativePoint          = kurentoClient.getComplexType('crowddetector.RelativePoint')

kurentoClient(args.ws_uri, function(error, client) {
  if (error) return onError(error);

  client.create('MediaPipeline', function(error, p) {
    if (error) return onError(error);

    pipeline = p;

    console.log("Got MediaPipeline");

    pipeline.create('WebRtcEndpoint', function(error, webRtc) {
      if (error) return onError(error);

      console.log("Got WebRtcEndpoint");

      setIceCandidateCallbacks(webRtcPeer, webRtc, onError)

      webRtc.processOffer(sdpOffer, function(error, sdpAnswer) {
        if (error) return onError(error);

        console.log("SDP answer obtained. Processing ...");

        webRtc.gatherCandidates(onError);
        webRtcPeer.processAnswer(sdpAnswer);
      });

      var options = {
        rois: [
          RegionOfInterest({
            id: 'roi1',
            points: [
              RelativePoint({x: 0,   y: 0}),
              RelativePoint({x: 0.5, y: 0}),
              RelativePoint({x: 0.5, y: 0.5}),
              RelativePoint({x: 0,   y: 0.5})
            ],
            regionOfInterestConfig: RegionOfInterestConfig({
              occupancyLevelMin: 10,
              occupancyLevelMed: 35,
              occupancyLevelMax: 65,
              occupancyNumFramesToEvent: 5,
              fluidityLevelMin: 10,
              fluidityLevelMed: 35,
              fluidityLevelMax: 65,
              fluidityNumFramesToEvent: 5,
              sendOpticalFlowEvent: false,
              opticalFlowNumFramesToEvent: 3,
              opticalFlowNumFramesToReset: 3,
              opticalFlowAngleOffset: 0
            })
          })
        ]
      }

      pipeline.create('crowddetector.CrowdDetectorFilter', options, function(error, filter) {
        if (error) return onError(error);

        console.log("Got CrowdDetectorFilter");

        filter.on('CrowdDetectorDirection', function (data){
          console.log("Direction event received in roi " + data.roiID +
             " with direction " + data.directionAngle);
        });

        filter.on('CrowdDetectorFluidity', function (data){
          console.log("Fluidity event received in roi " + data.roiID +
           ". Fluidity level " + data.fluidityLevel +
           " and fluidity percentage " + data.fluidityPercentage);
        });

        filter.on('CrowdDetectorOccupancy', function (data){
          console.log("Occupancy event received in roi " + data.roiID +
           ". Occupancy level " + data.occupancyLevel +
           " and occupancy percentage " + data.occupancyPercentage);
        });

        client.connect(webRtc, filter, webRtc, function(error){
          if (error) return onError(error);

          console.log("WebRtcEndpoint --> Filter --> WebRtcEndpoint");
        });
      });
    });
  });
});

The TURN and STUN servers to be used can be configured simply by adding the parameter ice_servers to the application URL, as follows:
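For example (the hostnames below are placeholders; the ice_servers value is a JSON array of ICE server descriptions, as accepted by the browser's RTCPeerConnection):

```
https://localhost:8443/index.html?ice_servers=[{"urls":"stun:stun.example.net"}]
https://localhost:8443/index.html?ice_servers=[{"urls":"turn:turn.example.org","username":"user","credential":"myPassword"}]
```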



The dependencies of this demo have to be obtained through Bower. These dependencies are defined in the bower.json file, as follows:

"dependencies": {
   "kurento-client": "7.0.0",
   "kurento-utils": "7.0.0"
   "kurento-module-pointerdetector": "7.0.0"

To get these dependencies, just run the following shell command:

bower install


You can find the latest versions at Bower.