Provides functionality to store media contents.
The RecorderEndpoint can store media in local files or send it to a remote network storage.
It receives a media stream from another MediaElement (i.e. the
source), and stores it in the designated location.
The following information has to be provided in order to create a
RecorderEndpoint, and cannot be changed afterwards:
- Destination URI, where media will be stored. These formats are supported:
  - File: A file path that will be written into the local file system.
  - HTTP: A POST request will be used against a remote server. The server must support using the chunked encoding mode (HTTP header Transfer-Encoding: chunked).
  - Relative URIs (with no schema) are supported. They are completed by prepending a default URI defined by the property defaultPath. This property is defined in the configuration file /etc/kurento/modules/kurento/UriEndpoint.conf.ini, and its default value points to a directory in the local file system.
- The Media Profile (module:elements.RecorderEndpoint#MediaProfileSpecType) used for storage. This will determine the video and audio encoding. See below for more details about Media Profile.
- Optionally, the user can select if the endpoint will stop processing once the EndOfStream event is detected.
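The relative-URI completion described above can be sketched as a small helper. Note that DEFAULT_PATH below is a hypothetical stand-in for whatever value the defaultPath property holds on your server, not the actual default:

```javascript
// Sketch of how destination URIs are completed before recording.
// DEFAULT_PATH is a hypothetical placeholder for the defaultPath property
// read from /etc/kurento/modules/kurento/UriEndpoint.conf.ini.
const DEFAULT_PATH = 'file:///tmp/recordings/';

function resolveRecorderUri(uri, defaultPath = DEFAULT_PATH) {
  // URIs that already carry a schema (file://, http://, https://) are kept as-is.
  if (/^[a-z][a-z0-9+.-]*:\/\//i.test(uri)) {
    return uri;
  }
  // Relative URIs (no schema) are completed by prepending the default URI.
  return defaultPath + uri;
}

console.log(resolveRecorderUri('file:///video/recording.webm'));
console.log(resolveRecorderUri('recording.webm'));
```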
RecorderEndpoint requires access to the resource where the stream is going
to be recorded. Otherwise, the media server won't be able to store any information, and
an ErrorEvent will be fired. Please note that if you haven't subscribed to
that type of event, you can be left wondering why your media is not being
saved, while the error message was ignored.
To write local files (if you use file://), the user running the
media server (by default, the user kurento) needs to have write
permissions for the requested path.
To save into an HTTP server, the server must be accessible through the
network, and also have the correct access rights to the destination path.
The media profile is quite an important parameter, as it will determine
whether the server needs to perform on-the-fly transcoding of the media. If
the input stream codec is not compatible with the selected media profile, the
media will be transcoded into a suitable format. This will result in a higher
CPU load and will impact the overall performance of the media server.
For example: say that your pipeline receives VP8-encoded video from
WebRTC and sends it to a RecorderEndpoint; depending on the selected recording format:
- WEBM: The input codec is the same as the recording format, so no transcoding
will take place.
- MP4: The media server will have to transcode from VP8 to H.264.
This will raise the CPU load in the system.
- MKV: Again, video must be transcoded from VP8 to H.264, which
means more CPU load.
From this you can see how selecting the correct format for your application is
a very important decision.
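The transcoding rule from the example above can be sketched as a simple lookup. The codec-to-profile mapping below covers only the cases named in the text and is an illustrative assumption, not an exhaustive table:

```javascript
// Video codec each media profile expects, per the example above.
// Illustrative only; this is not an exhaustive mapping.
const PROFILE_VIDEO_CODEC = {
  WEBM: 'VP8',
  MP4: 'H264',
  MKV: 'H264',
};

// Returns true when the server would have to transcode the input stream,
// which raises the CPU load of the media server.
function needsTranscoding(inputCodec, mediaProfile) {
  return PROFILE_VIDEO_CODEC[mediaProfile] !== inputCodec;
}

console.log(needsTranscoding('VP8', 'WEBM')); // same codec, no transcoding
console.log(needsTranscoding('VP8', 'MP4'));  // VP8 must become H.264
```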
Recording will start as soon as the user invokes the
record method. The recorder will then store, in the location
indicated, the media that the source is sending to the endpoint. If no media
is being received, or no endpoint has been connected, then the destination
file will be empty. The recorder starts storing information into the file as soon
as it gets it.
Stopping the recording process is done through the
stopAndWait method, which will return only after all the
information was stored correctly. If the file is empty, this means that no
media arrived at the recorder.
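The record / stopAndWait lifecycle might look like the sketch below. The recorder object here is a hypothetical stub standing in for a real RecorderEndpoint obtained from a kurento-client pipeline, so the snippet only illustrates the call ordering:

```javascript
// Hypothetical stub that mimics a RecorderEndpoint's call ordering;
// a real endpoint would be created from a kurento-client MediaPipeline.
function makeStubRecorder() {
  const calls = [];
  return {
    calls,
    // Recording starts as soon as record() is invoked.
    record: async () => { calls.push('record'); },
    // stopAndWait() resolves only after all media has been written.
    stopAndWait: async () => { calls.push('stopAndWait'); },
  };
}

async function runRecording(recorder, recordForMs) {
  await recorder.record();
  await new Promise((resolve) => setTimeout(resolve, recordForMs));
  await recorder.stopAndWait(); // safe to read the file once this resolves
}

const recorder = makeStubRecorder();
runRecording(recorder, 10).then(() => console.log(recorder.calls));
```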
When another endpoint is connected to the recorder, by default both AUDIO and
VIDEO media types are expected, unless specified otherwise when invoking the
connect method. Failing to provide both types will result in the recorder
buffering the received media: it won't be written to the file until the
recording is stopped. This is due to the recorder waiting for the other type
of media to arrive, so they are synchronized.
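Passing an explicit media type when connecting can be sketched as follows. The stub below merely records its arguments and is a hypothetical stand-in for a real kurento-client MediaElement, whose connect method accepts an optional media type; when it is omitted, both AUDIO and VIDEO are expected:

```javascript
// Hypothetical stub for a source MediaElement; only the handling of the
// optional media type argument is illustrated here.
function makeStubSource() {
  const connections = [];
  return {
    connections,
    connect(sink, mediaType) {
      // With no media type given, the sink expects both AUDIO and VIDEO.
      connections.push({ sink, mediaType: mediaType || 'AUDIO+VIDEO' });
    },
  };
}

const source = makeStubSource();
const sink = { name: 'recorder' };

source.connect(sink);          // default: recorder expects AUDIO and VIDEO
source.connect(sink, 'AUDIO'); // audio-only: recorder won't wait for video
console.log(source.connections.map((c) => c.mediaType));
```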
The source endpoint can be hot-swapped while the recording is taking place.
The recorded file will then contain different feeds. When switching video
sources, if the new video has different size, the recorder will retain the
size of the previous source. If the source is disconnected, the last frame
recorded will be shown for the duration of the disconnection, or until the
recording is stopped.
It is recommended to start recording only after media arrives at the
recorder. For this, you may use the MediaFlowInStateChange and MediaFlowOutStateChange
events of your endpoints, and synchronize the recording with the moment media
comes into the Recorder. For example:
- When the remote video arrives to KMS, your WebRtcEndpoint will start
generating packets into the Kurento Pipeline, and it will trigger a
MediaFlowOutStateChange event.
- When video packets arrive from the WebRtcEndpoint to the RecorderEndpoint,
the RecorderEndpoint will raise a MediaFlowInStateChange event.
- You should only start recording when the RecorderEndpoint has notified a
MediaFlowInStateChange for ALL streams (so, if you record
AUDIO+VIDEO, your application must receive a
MediaFlowInStateChange event for audio, and another
MediaFlowInStateChange event for video).
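The "wait for ALL streams" rule can be sketched with a small tracker. The startRecording callback below is hypothetical, and the event shape ({ mediaType, state }) follows the MediaFlowInStateChange notifications described above:

```javascript
// Fires the (hypothetical) startRecording callback exactly once, as soon as
// MediaFlowInStateChange has reported FLOWING for every expected stream.
function makeFlowTracker(expectedTypes, startRecording) {
  const flowing = new Set();
  let started = false;
  return function onMediaFlowInStateChange(event) {
    if (event.state === 'FLOWING') {
      flowing.add(event.mediaType);
    } else {
      flowing.delete(event.mediaType);
    }
    if (!started && expectedTypes.every((t) => flowing.has(t))) {
      started = true;
      startRecording();
    }
  };
}

let recordings = 0;
const onEvent = makeFlowTracker(['AUDIO', 'VIDEO'], () => { recordings += 1; });

onEvent({ mediaType: 'AUDIO', state: 'FLOWING' }); // not yet: video missing
onEvent({ mediaType: 'VIDEO', state: 'FLOWING' }); // both flowing: start once
console.log(recordings);
```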