public interface RecorderEndpoint extends UriEndpoint
RecorderEndpoint can store media into local files or send it to a remote
network storage. When another MediaElement is connected to a
RecorderEndpoint, the media coming from the former will be encapsulated into
the selected recording format and stored in the designated location.
These parameters must be provided to create a RecorderEndpoint, and they cannot be changed afterwards:

- Destination URI, where media will be stored. These formats are supported:
  - File: a path that will be written into the local file system. Example:
    - file:///path/to/file
  - HTTP: a POST request will be used against a remote server, which must support the chunked encoding mode (Transfer-Encoding: chunked). Examples:
    - http(s)://{server-ip}/path/to/file
    - http(s)://{username}:{password}@{server-ip}:{server-port}/path/to/file
  - Relative URIs (with no schema) are completed by prepending the default storage location, which by default is file:///var/lib/kurento/
  - Note: special characters are not supported in {username} or {password}. This means that {username} cannot contain colons (:), and {password} cannot contain 'at' signs (@). This is a limitation of GStreamer 1.8 (the underlying media framework behind Kurento), and is already fixed in newer versions (which the upcoming Kurento 7.x will use).
  - Note: in newer versions, special characters in {username} or {password} must be url-encoded. This means that colons (:) should be replaced with %3A, and 'at' signs (@) should be replaced with %40.
- Media Profile (MediaProfileSpecType), used for storage. This will determine the video and audio encoding. See below for more details about Media Profile.
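The url-encoding rule for credentials can be applied with the standard library. The sketch below is illustrative only: the helper name, the credentials, and the server address are made up, and java.net.URLEncoder applies form-style encoding (spaces become '+'), which is fine for the colon and 'at' sign cases shown here.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hypothetical helper: builds an HTTP recording URI with url-encoded
// credentials, as required by the note above.
class RecordingUri {
    static String httpUri(String user, String password,
                          String serverIp, int port, String path) {
        // URLEncoder turns ':' into "%3A" and '@' into "%40".
        String u = URLEncoder.encode(user, StandardCharsets.UTF_8);
        String p = URLEncoder.encode(password, StandardCharsets.UTF_8);
        return "https://" + u + ":" + p + "@" + serverIp + ":" + port + path;
    }
}
```

For instance, a username containing a colon and a password containing an 'at' sign both become safe to embed in the URI.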
Note that RecorderEndpoint requires write permissions to the destination; otherwise, the media server won't be able to store any information, and an ErrorEvent will be fired. Make sure your application subscribes to this event, otherwise troubleshooting issues will be difficult.
When recording to a local file (file://), the system user that owns the media server process needs write permissions for the requested path. By default, this user is named 'kurento'.
The Media Profile is quite an important parameter, as it will determine whether the server needs to perform on-the-fly transcoding of the media. If the input stream codec is not compatible with the selected media profile, the media will be transcoded into a suitable format. This will result in a higher CPU load and will impact the overall performance of the media server.
For example: if your pipeline receives VP8-encoded video from WebRTC and sends it to a RecorderEndpoint, then depending on the format selected...
From this you can see that selecting the correct format for your application is a very important decision.
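The decision can be sketched as a small pure function. Note that the container/codec compatibility table below is an assumption for illustration (WEBM storing VP8/Opus directly, MP4 storing H.264/AAC directly), not taken from this page:

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch: the server transcodes whenever the input codec is
// not one the selected media profile can store directly.
class TranscodingCheck {
    // Assumed container/codec compatibility for two common profiles.
    private static final Map<String, Set<String>> PROFILE_CODECS = Map.of(
        "WEBM", Set.of("VP8", "OPUS"),
        "MP4",  Set.of("H264", "AAC"));

    static boolean needsTranscoding(String profile, String inputCodec) {
        Set<String> direct = PROFILE_CODECS.getOrDefault(profile, Set.of());
        return !direct.contains(inputCodec);
    }
}
```

Under these assumptions, recording WebRTC's VP8 video into a WEBM profile needs no transcoding, while recording it into an MP4 profile does.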
Recording will start as soon as the user invokes the
record method. The recorder will then store, in the location
indicated, the media that the source is sending to the endpoint. If no media
is being received, or no endpoint has been connected, then the destination
will be empty. The recorder starts storing information into the file as soon
as it gets it.
Recording must be stopped when no more data should be stored.
This can be done with the stopAndWait method, which blocks and returns
only after all the information has been stored correctly.
If your output file is empty, this means that the recorder is waiting for
input media.
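The blocking semantics described above can be modeled with a toy class. This is NOT the real Kurento API, only a sketch of the documented behavior: media is written as it arrives, and stopAndWait returns only once everything received has been flushed to storage.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy model of a recorder: record() starts a writer thread, and
// stopAndWait() blocks until every received frame has been stored.
class ToyRecorder {
    private final BlockingQueue<String> incoming = new LinkedBlockingQueue<>();
    private final List<String> storage = new ArrayList<>();
    private Thread writer;
    private volatile boolean stopping;

    void record() {
        writer = new Thread(() -> {
            while (true) {
                try {
                    String frame = incoming.poll(10, TimeUnit.MILLISECONDS);
                    if (frame != null) {
                        synchronized (storage) { storage.add(frame); }
                    } else if (stopping) {
                        return; // queue drained and stop requested
                    }
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        writer.start();
    }

    void receive(String frame) { incoming.add(frame); }

    // Blocks until the writer has flushed everything received so far.
    void stopAndWait() {
        stopping = true;
        try { writer.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    List<String> storedFrames() {
        synchronized (storage) { return new ArrayList<>(storage); }
    }
}
```

After stopAndWait returns, no received frame is left unwritten, which is the guarantee the real method provides for the output file.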
When another endpoint is connected to the recorder, by default both AUDIO and
VIDEO media types are expected, unless specified otherwise when invoking the
connect method. Failing to provide both types will result in the
RecorderEndpoint buffering the received media: it won't be written to the file
until the recording is stopped. The recorder waits until all types of media
start arriving, in order to synchronize them appropriately.
The source endpoint can be hot-swapped while the recording is taking place. The recorded file will then contain different feeds. When switching video sources, if the new video has a different size, the recorder will retain the size of the previous source. If the source is disconnected, the last recorded frame will be shown for the duration of the disconnection, or until the recording is stopped.
It is recommended to start recording only after media arrives.
For this, you may use the MediaFlowInStateChange and MediaFlowOutStateChange
events of your endpoints, and synchronize the recording with the moment media
comes into the Recorder. For example:
- Wait until the source endpoint emits a MediaFlowOutStateChange event.
- Wait until the recorder itself emits a MediaFlowInStateChange event.
- Only start recording after a MediaFlowInStateChange for ALL streams (so, if you record
AUDIO+VIDEO, your application must receive a
MediaFlowInStateChange event for audio, and another
MediaFlowInStateChange event for video).
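The "wait for ALL streams" condition can be tracked with a small helper. The class below is hypothetical (not part of the Kurento API); in a real application you would call onFlowIn from a MediaFlowInStateChange listener and invoke the recorder's record method when it returns true.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical gate: start recording only once a FLOWING state has been
// observed for every media type the recording expects.
class RecordingGate {
    private final Set<String> expected;
    private final Set<String> flowing = new HashSet<>();

    RecordingGate(Set<String> expectedMediaTypes) {
        this.expected = expectedMediaTypes;
    }

    // Call from a MediaFlowInStateChange listener.
    // Returns true exactly once: when the last expected type starts flowing.
    synchronized boolean onFlowIn(String mediaType, boolean isFlowing) {
        boolean wasComplete = flowing.containsAll(expected);
        if (isFlowing) flowing.add(mediaType); else flowing.remove(mediaType);
        return !wasComplete && flowing.containsAll(expected);
    }
}
```

For an AUDIO+VIDEO recording, the gate stays closed after the first event (only audio flowing) and opens on the second (video also flowing), which is exactly the moment to start recording.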
| Modifier and Type | Interface and Description |
|---|---|
| static class | RecorderEndpoint.Builder |
Methods inherited from interface org.kurento.client.UriEndpoint
addUriEndpointStateChangedListener, getState, getUri, pause, removeUriEndpointStateChangedListener, stop

Methods inherited from interface org.kurento.client.MediaElement
addElementConnectedListener, addElementDisconnectedListener, addMediaFlowInStateChangeListener, addMediaFlowOutStateChangeListener, addMediaTranscodingStateChangeListener, connect, disconnect, getGstreamerDot, getMaxOuputBitrate, getMaxOutputBitrate, getMinOuputBitrate, getMinOutputBitrate, getSinkConnections, getSourceConnections, getStats, isMediaFlowingIn, isMediaFlowingOut, isMediaTranscoding, removeElementConnectedListener, removeElementDisconnectedListener, removeMediaFlowInStateChangeListener, removeMediaFlowOutStateChangeListener, removeMediaTranscodingStateChangeListener, setAudioFormat, setMaxOuputBitrate, setMaxOutputBitrate, setMinOuputBitrate, setMinOutputBitrate, setOutputBitrate, setVideoFormat

Methods inherited from interface org.kurento.client.MediaObject
addErrorListener, addTag, getChildren, getChilds, getCreationTime, getId, getMediaPipeline, getName, getParent, getSendTagsInEvents, getTag, getTags, removeErrorListener, removeTag, setName, setSendTagsInEvents

Methods inherited from interface org.kurento.client.KurentoObject
addEventListener, invoke, isCommited, release, removeEventListener, waitCommited, whenCommited

void record()
void record(Continuation<Void> cont)
Asynchronous version of record(): Continuation.onSuccess(F) is called when the action is done. If an error occurs, Continuation.onError(java.lang.Throwable) is called.

void record(Transaction tx)
void stopAndWait()
void stopAndWait(Continuation<Void> cont)
Asynchronous version of stopAndWait(): Continuation.onSuccess(F) is called when the action is done. If an error occurs, Continuation.onError(java.lang.Throwable) is called.

void stopAndWait(Transaction tx)
ListenerSubscription addRecordingListener(EventListener<RecordingEvent> listener)
Adds an EventListener for event RecordingEvent. Synchronous call.
Parameters: listener - Listener to be called on RecordingEvent

void addRecordingListener(EventListener<RecordingEvent> listener, Continuation<ListenerSubscription> cont)
Adds an EventListener for event RecordingEvent. Asynchronous call. Calls Continuation<ListenerSubscription> when it has been added.
Parameters: listener - Listener to be called on RecordingEvent; cont - Continuation to be called when the listener is registered

void removeRecordingListener(ListenerSubscription listenerSubscription)
Removes a ListenerSubscription for event RecordingEvent. Synchronous call.
Parameters: listenerSubscription - Listener subscription to be removed

void removeRecordingListener(ListenerSubscription listenerSubscription, Continuation<Void> cont)
Removes a ListenerSubscription for event RecordingEvent. Asynchronous call. Calls Continuation<Void> when it has been removed.
Parameters: listenerSubscription - Listener subscription to be removed; cont - Continuation to be called when the listener is removed

ListenerSubscription addPausedListener(EventListener<PausedEvent> listener)
Adds an EventListener for event PausedEvent. Synchronous call.
Parameters: listener - Listener to be called on PausedEvent

void addPausedListener(EventListener<PausedEvent> listener, Continuation<ListenerSubscription> cont)
Adds an EventListener for event PausedEvent. Asynchronous call. Calls Continuation<ListenerSubscription> when it has been added.
Parameters: listener - Listener to be called on PausedEvent; cont - Continuation to be called when the listener is registered

void removePausedListener(ListenerSubscription listenerSubscription)
Removes a ListenerSubscription for event PausedEvent. Synchronous call.
Parameters: listenerSubscription - Listener subscription to be removed

void removePausedListener(ListenerSubscription listenerSubscription, Continuation<Void> cont)
Removes a ListenerSubscription for event PausedEvent. Asynchronous call. Calls Continuation<Void> when it has been removed.
Parameters: listenerSubscription - Listener subscription to be removed; cont - Continuation to be called when the listener is removed

ListenerSubscription addStoppedListener(EventListener<StoppedEvent> listener)
Adds an EventListener for event StoppedEvent. Synchronous call.
Parameters: listener - Listener to be called on StoppedEvent

void addStoppedListener(EventListener<StoppedEvent> listener, Continuation<ListenerSubscription> cont)
Adds an EventListener for event StoppedEvent. Asynchronous call. Calls Continuation<ListenerSubscription> when it has been added.
Parameters: listener - Listener to be called on StoppedEvent; cont - Continuation to be called when the listener is registered

void removeStoppedListener(ListenerSubscription listenerSubscription)
Removes a ListenerSubscription for event StoppedEvent. Synchronous call.
Parameters: listenerSubscription - Listener subscription to be removed

void removeStoppedListener(ListenerSubscription listenerSubscription, Continuation<Void> cont)
Removes a ListenerSubscription for event StoppedEvent. Asynchronous call. Calls Continuation<Void> when it has been removed.
Parameters: listenerSubscription - Listener subscription to be removed; cont - Continuation to be called when the listener is removed

Copyright © 2022 Kurento. All rights reserved.