/* GStreamer
 * Copyright (C) 2008 David Schleef <ds@schleef.org>
 * Copyright (C) 2011 Mark Nauwelaerts <mark.nauwelaerts@collabora.co.uk>.
 * Copyright (C) 2011 Nokia Corporation. All rights reserved.
 *   Contact: Stefan Kost <stefan.kost@nokia.com>
 * Copyright (C) 2012 Collabora Ltd.
 *	Author : Edward Hervey <edward@collabora.com>
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Library General Public
 * License as published by the Free Software Foundation; either
 * version 2 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Library General Public License for more details.
 *
 * You should have received a copy of the GNU Library General Public
 * License along with this library; if not, write to the
 * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
 * Boston, MA 02110-1301, USA.
 */
/**
 * SECTION:gstvideodecoder
 * @title: GstVideoDecoder
 * @short_description: Base class for video decoders
 *
 * This base class is for video decoders turning encoded data into raw video
 * frames.
 *
 * The GstVideoDecoder base class and derived subclasses should cooperate as
 * follows:
 *
 * * Initially, GstVideoDecoder calls @start when the decoder element
 *   is activated, which allows the subclass to perform any global setup.
 *
 * * GstVideoDecoder calls @set_format to inform the subclass of caps
 *   describing input video data that it is about to receive, including
 *   possibly configuration data.
 *   While unlikely, it might be called more than once, if changing input
 *   parameters require reconfiguration.
 *
 * * Incoming data buffers are processed as needed, as described in the data
 *   processing section below.
 *
 * * GstVideoDecoder calls @stop at the end of all processing.
 *
 * * The base class gathers input data, and optionally allows the subclass
 *   to parse this into subsequently manageable chunks, typically
 *   corresponding to and referred to as 'frames'.
 *
 * * Each input frame is provided in turn to the subclass' @handle_frame
 *   callback.
 *
 * * When the subclass enables subframe mode with `gst_video_decoder_set_subframe_mode`,
 *   the base class will provide the same input frame with
 *   different input buffers to the subclass' @handle_frame
 *   callback. During this call, the subclass needs to take
 *   ownership of the input buffer as @GstVideoCodecFrame.input_buffer
 *   will have been changed before the next subframe buffer is received.
 *   The subclass will call `gst_video_decoder_have_last_subframe`
 *   when a new input frame can be created by the base class.
 *   Every subframe will share the same @GstVideoCodecFrame.output_buffer
 *   to write the decoding result. The subclass is responsible for
 *   protecting its access.
 *
 * * If codec processing results in decoded data, the subclass should call
 *   @gst_video_decoder_finish_frame to have decoded data pushed
 *   downstream. In subframe mode
 *   the subclass should call @gst_video_decoder_finish_subframe until the
 *   last subframe, where it should call @gst_video_decoder_finish_frame.
 *   The subclass can detect the last subframe using GST_VIDEO_BUFFER_FLAG_MARKER
 *   on buffers or using its own logic to collect the subframes.
 *   In case of decoding failure, the subclass must call
 *   @gst_video_decoder_drop_frame or @gst_video_decoder_drop_subframe,
 *   to allow the base class to do timestamp and offset tracking, and possibly
 *   to requeue the frame for a later attempt in the case of reverse playback.
 *
 * * The GstVideoDecoder class calls @stop to inform the subclass that data
 *   parsing will be stopped.
 *
 * * When the pipeline is seeked or otherwise flushed, the subclass is
 *   informed via a call to its @reset callback, with the hard parameter
 *   set to %TRUE. This indicates the subclass should drop any internal data
 *   queues and timestamps and prepare for a fresh set of buffers to arrive
 *   for parsing and decoding.
 *
 * * At end-of-stream, the subclass @parse function may be called some final
 *   times with the at_eos parameter set to %TRUE, indicating that the element
 *   should not expect any more data to be arriving, and it should parse any
 *   remaining frames and call gst_video_decoder_have_frame() if possible.
 *
 * The subclass is responsible for providing pad template caps for the
 * source and sink pads. The pads need to be named "sink" and "src". It also
 * needs to provide information about the output caps, when they are known.
 * This may be when the base class calls the subclass' @set_format function,
 * though it might be during decoding, before calling
 * @gst_video_decoder_finish_frame. This is done via
 * @gst_video_decoder_set_output_state
 *
 * The subclass is also responsible for providing (presentation) timestamps
 * (likely based on corresponding input ones). If that is not applicable
 * or possible, the base class provides limited framerate-based interpolation.
 *
 * Similarly, the base class provides some limited (legacy) seeking support
 * if specifically requested by the subclass, as full-fledged support
 * should rather be left to an upstream demuxer, parser or the like. This simple
 * approach caters for seeking and duration reporting using estimated input
 * bitrates. To enable it, a subclass should call
 * @gst_video_decoder_set_estimate_rate to enable handling of incoming
 * data.
 *
 * The base class provides some support for reverse playback, in particular
 * in case incoming data is not packetized or upstream does not provide
 * fragments on keyframe boundaries. However, the subclass should then be
 * prepared for the parsing and frame processing stage to occur separately
 * (in normal forward processing, the latter immediately follows the former).
 * The subclass also needs to ensure the parsing stage properly marks
 * keyframes, unless it knows the upstream elements will do so properly for
 * incoming data.
 *
 * The bare minimum that a functional subclass needs to implement is:
 *
 * * Provide pad templates
 * * Inform the base class of output caps via
 *   @gst_video_decoder_set_output_state
 *
 * * Parse input data, if it is not considered packetized from upstream.
 *   Data will be provided to @parse, which should invoke
 *   @gst_video_decoder_add_to_frame and @gst_video_decoder_have_frame to
 *   separate the data belonging to each video frame.
 *
 * * Accept data in @handle_frame and provide decoded results to
 *   @gst_video_decoder_finish_frame, or call @gst_video_decoder_drop_frame.
 *
 * TODO:
 *
 * * Add a flag/boolean for I-frame-only/image decoders so we can do extra
 *   features, like applying QoS on input (as opposed to after the frame is
 *   decoded).
 * * Add a flag/boolean for decoders that require keyframes, so the base
 *   class can automatically discard non-keyframes before one has arrived
 * * Detect reordered frame/timestamps and fix the pts/dts
 * * Support for GstIndex (or shall we not care ?)
 * * Calculate actual latency based on input/output timestamp/frame_number
 *   and if it exceeds the recorded one, save it and emit a GST_MESSAGE_LATENCY
 * * Emit latency message when it changes
 */
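/* The framerate-based timestamp interpolation mentioned above boils down to
 * scaling a frame count by the frame duration. The helper below is a
 * self-contained illustration of that arithmetic, not the base class
 * implementation; the name interp_pts and the plain integer types are
 * hypothetical stand-ins for the GstClockTime based code. */

```c
static unsigned long long
interp_pts (unsigned long long base_ts_ns, unsigned int frames_since_base,
    unsigned int fps_n, unsigned int fps_d)
{
  const unsigned long long nsec_per_sec = 1000000000ULL;

  /* each frame lasts fps_d/fps_n seconds; multiply before dividing to
   * keep integer precision */
  return base_ts_ns +
      (unsigned long long) frames_since_base * nsec_per_sec * fps_d / fps_n;
}
```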
/* Implementation notes:
 *
 * The Video Decoder base class operates in 2 primary processing modes,
 * depending on whether forward or reverse playback is requested.
 *
 * Forward playback:
 *   * Incoming buffer -> @parse() -> add_to_frame()/have_frame() ->
 *     handle_frame() -> push downstream
 *
 * Reverse playback is more complicated, since it involves gathering incoming
 * data regions as we loop backwards through the upstream data. The processing
 * concept (using incoming buffers as containing one frame each to simplify
 * things) is:
 *
 * Upstream data we want to play:
 *
 *  Buffer encoded order:    1  2  3  4  5  6  7  8  9  EOS
 *  Groupings:               AAAAAAA  BBBBBBB  CCCCCCC
 *
 *  Buffer reception order:  7  8  9  4  5  6  1  2  3  EOS
 *  Discont flag:            D        D        D
 *
 * - Each Discont marks a discont in the decoding order.
 * - The keyframes mark where we can start decoding.
 *
 * Initially, we prepend incoming buffers to the gather queue. Whenever the
 * discont flag is set on an incoming buffer, the gather queue is flushed out
 * before the new buffer is collected.
 *
 * The above data will be accumulated in the gather queue like this:
 *
 *   gather queue:    9  8  7
 *
 * When buffer 4 is received (with a DISCONT), we flush the gather queue like
 * this:
 *
 *   while (gather)
 *     take head of queue and prepend to parse queue (this reverses the
 *     sequence, so parse queue is 7 -> 8 -> 9)
 *
 * Next, we process the parse queue, which now contains all un-parsed packets
 * (including any leftover ones from the previous decode section):
 *
 *   for each buffer now in the parse queue:
 *     Call the subclass parse function, prepending each resulting frame to
 *     the parse_gather queue. Buffers which precede the first one that
 *     produces a parsed frame are retained in the parse queue for
 *     re-processing on the next cycle of parsing.
 *
 * The parse_gather queue now contains frame objects ready for decoding:
 *
 *   parse_gather: 9 -> 8 -> 7
 *
 *   while (parse_gather)
 *     Take the head of the queue and prepend it to the decode queue
 *     If the frame was a keyframe, process the decode queue
 *
 * decode is now 7-8-9.
 *
 * Processing the decode queue results in frames with attached output buffers
 * stored in the 'output_queue' ready for outputting in reverse order.
 *
 * After we flushed the gather queue and parsed it, we add 4 to the (now empty)
 * gather queue. We get the following situation:
 *
 *   gather queue:    4
 *   decode queue:    7  8  9
 *
 * After we received 5 (Keyframe) and 6:
 *
 *   gather queue:    6  5  4
 *   decode queue:    7  8  9
 *
 * When we receive 1 (DISCONT), which triggers a flush of the gather queue:
 *
 *   Copy head of the gather queue (6) to decode queue:
 *
 *     decode queue:    6  7  8  9
 *
 *   Copy head of the gather queue (5) to decode queue. This is a keyframe so
 *   we can start decoding.
 *
 *     decode queue:    5  6  7  8  9
 *
 *   Decode frames in decode queue, store raw decoded data in output queue; we
 *   can take the head of the decode queue and prepend the decoded result in
 *   the output queue:
 *
 *     output queue:    9  8  7  6  5
 *
 * Now output all the frames in the output queue, picking a frame from the
 * head of the queue.
 *
 * Copy head of the gather queue (4) to decode queue; we flushed the gather
 * queue and can now store the input buffer in the gather queue:
 *
 *   gather queue:    1
 *   decode queue:    4
 *
 * When we receive EOS, the queue looks like:
 *
 *   gather queue:    3  2  1
 *   decode queue:    4
 *
 * Fill decode queue, first keyframe we copy is 2:
 *
 *   decode queue:    2  3  4
 *
 * Decoded output:
 *
 *   output queue:    4  3  2
 *
 * Leftover buffer 1 cannot be decoded and must be discarded.
 */
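/* The reverse playback walkthrough above can be exercised with a small
 * self-contained simulation. This is illustrative only (names like
 * rp_simulate are hypothetical, and the parse stage is skipped by treating
 * each buffer as one frame): buffers arrive as 7 8 9 4 5 6 1 2 3 with
 * DISCONT on 7, 4 and 1 and keyframes on 2 and 5; frames must come out as
 * 9 8 7 6 5 4 3 2, with buffer 1 discarded. */

```c
#define RP_MAX 16

/* keyframes in the walkthrough are buffers 2 and 5 */
static int
rp_is_keyframe (int buf)
{
  return buf == 2 || buf == 5;
}

/* prepend buf to queue q (index 0 is the head) */
static void
rp_prepend (int *q, int *len, int buf)
{
  int i;
  for (i = *len; i > 0; i--)
    q[i] = q[i - 1];
  q[0] = buf;
  (*len)++;
}

/* returns the number of frames written to out, in presentation order */
static int
rp_simulate (int *out)
{
  static const int input[] = { 7, 8, 9, 4, 5, 6, 1, 2, 3 };
  int gather[RP_MAX], decode[RP_MAX];
  int n_gather = 0, n_decode = 0, n_out = 0;
  int i, j;

  for (i = 0; i <= 9; i++) {
    int buf = (i < 9) ? input[i] : -1;  /* -1 marks EOS */
    int discont = (buf == 7 || buf == 4 || buf == 1);

    if (discont || buf < 0) {
      /* flush the gather queue: move its head to the decode queue; when a
       * keyframe reaches the decode head, decode the whole queue and
       * prepend the results to the output queue */
      while (n_gather > 0) {
        int head = gather[0];
        for (j = 0; j < n_gather - 1; j++)
          gather[j] = gather[j + 1];
        n_gather--;
        rp_prepend (decode, &n_decode, head);
        if (rp_is_keyframe (head)) {
          int produced[RP_MAX], n_produced = 0;
          for (j = 0; j < n_decode; j++)
            rp_prepend (produced, &n_produced, decode[j]);
          for (j = 0; j < n_produced; j++)
            out[n_out++] = produced[j];
          n_decode = 0;
        }
      }
    }
    if (buf >= 0)
      rp_prepend (gather, &n_gather, buf);
  }
  /* leftover decode-queue frames (buffer 1) precede any keyframe and are
   * discarded */
  return n_out;
}
```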
#include "gstvideodecoder.h"
#include "gstvideoutils.h"
#include "gstvideoutilsprivate.h"

#include <gst/video/video.h>
#include <gst/video/video-event.h>
#include <gst/video/gstvideopool.h>
#include <gst/video/gstvideometa.h>
GST_DEBUG_CATEGORY (videodecoder_debug);
#define GST_CAT_DEFAULT videodecoder_debug

#define DEFAULT_QOS TRUE
#define DEFAULT_MAX_ERRORS GST_VIDEO_DECODER_MAX_ERRORS
#define DEFAULT_MIN_FORCE_KEY_UNIT_INTERVAL 0
#define DEFAULT_DISCARD_CORRUPTED_FRAMES FALSE
#define DEFAULT_AUTOMATIC_REQUEST_SYNC_POINTS FALSE
#define DEFAULT_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS (GST_VIDEO_DECODER_REQUEST_SYNC_POINT_DISCARD_INPUT | GST_VIDEO_DECODER_REQUEST_SYNC_POINT_CORRUPT_OUTPUT)

/* Used for request_sync_point_frame_number. These are out of range for the
 * frame numbers and can be given special meaning.
 * Note: the addition must be parenthesized and done in 64-bit arithmetic;
 * a bare `G_MAXUINT + 1` would wrap to 0 in unsigned int arithmetic. */
#define REQUEST_SYNC_POINT_PENDING ((guint64) G_MAXUINT + 1)
#define REQUEST_SYNC_POINT_UNSET G_MAXUINT64
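/* A note on the PENDING sentinel arithmetic: the operand must be widened to
 * 64 bits before the addition, because in plain unsigned int arithmetic
 * UINT_MAX + 1 wraps to 0 and the sentinel would collide with frame number 0.
 * The helpers below are hypothetical, not part of GStreamer; they just
 * contrast the two evaluation orders. */

```c
#include <limits.h>

static unsigned int
sentinel_narrow (void)
{
  /* addition performed in unsigned int: wraps around to 0 */
  unsigned int v = UINT_MAX;
  return v + 1u;
}

static unsigned long long
sentinel_wide (void)
{
  /* widen first, then add: yields a value no 32-bit frame number can take */
  return (unsigned long long) UINT_MAX + 1;
}
```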
enum
{
  PROP_0,
  PROP_QOS,
  PROP_MAX_ERRORS,
  PROP_MIN_FORCE_KEY_UNIT_INTERVAL,
  PROP_DISCARD_CORRUPTED_FRAMES,
  PROP_AUTOMATIC_REQUEST_SYNC_POINTS,
  PROP_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS,
};
struct _GstVideoDecoderPrivate
{
  /* FIXME introduce a context ? */

  GstBufferPool *pool;
  GstAllocator *allocator;
  GstAllocationParams params;

  GstAdapter *input_adapter;
  /* assembles current frame */
  GstAdapter *output_adapter;

  /* Whether we attempt to convert newsegment from bytes to
   * time using a bitrate estimation */
  gboolean do_estimate_rate;

  /* Whether input is considered packetized or not */
  gboolean packetized;

  /* whether input is considered as subframes */
  gboolean subframe_mode;

  gboolean had_output_data;
  gboolean had_input_data;

  gboolean needs_format;
  /* input_segment and output_segment are identical */
  gboolean in_out_segment_sync;

  /* TRUE if we have an active set of instant rate flags */
  gboolean decode_flags_override;
  GstSegmentFlags decode_flags;

  /* ... being tracked here;
   * only available during parsing or when doing subframe decoding */
  GstVideoCodecFrame *current_frame;
  /* events that should apply to the current frame */
  /* FIXME 2.0: Use a GQueue or similar, see GstVideoCodecFrame::events */
  GList *current_frame_events;
  /* events that should be pushed before the next frame */
  /* FIXME 2.0: Use a GQueue or similar, see GstVideoCodecFrame::events */
  GList *pending_events;

  /* relative offset of input data */
  guint64 input_offset;
  /* relative offset of frame */
  guint64 frame_offset;
  /* tracking ts and offsets */
  GQueue timestamps;

  /* last outgoing ts */
  GstClockTime last_timestamp_out;
  /* incoming pts - dts */
  GstClockTime pts_delta;
  gboolean reordered_output;

  /* FIXME: Consider using a GQueue or other better fitting data structure */
  /* reverse playback */
  /* collect input */
  GList *gather;
  /* to-be-parsed */
  GList *parse;
  /* collected parsed frames */
  GList *parse_gather;
  /* frames to be handled == decoded */
  GList *decode;
  /* collected output - of buffer objects, not frames */
  GList *output_queued;

  /* base_picture_number is the picture number of the reference picture */
  guint64 base_picture_number;
  /* combine with base_picture_number, framerate and calcs to yield (presentation) ts */
  GstClockTime base_timestamp;

  GstClockTime min_force_key_unit_interval;
  gboolean discard_corrupted_frames;

  /* Key unit related state */
  gboolean needs_sync_point;
  GstVideoDecoderRequestSyncPointFlags request_sync_point_flags;
  guint64 request_sync_point_frame_number;
  GstClockTime last_force_key_unit_time;
  /* -1 if we saw no sync point yet */
  guint64 distance_from_sync;

  gboolean automatic_request_sync_points;
  GstVideoDecoderRequestSyncPointFlags automatic_request_sync_point_flags;

  guint32 system_frame_number;
  guint32 decode_frame_number;

  GQueue frames;                /* Protected with OBJECT_LOCK */
  GstVideoCodecState *input_state;
  GstVideoCodecState *output_state;     /* OBJECT_LOCK and STREAM_LOCK */
  gboolean output_state_changed;

  gdouble proportion;           /* OBJECT_LOCK */
  GstClockTime earliest_time;   /* OBJECT_LOCK */
  GstClockTime qos_frame_duration;      /* OBJECT_LOCK */

  /* qos messages: frames dropped/processed */

  /* Outgoing byte size ? */

  /* Tracks whether the latency message was posted at least once */
  gboolean posted_latency_msg;

  /* upstream stream tags (global tags are passed through as-is) */
  GstTagList *upstream_tags;

  GstTagMergeMode tags_merge_mode;

  gboolean tags_changed;

  gboolean use_default_pad_acceptcaps;

#ifndef GST_DISABLE_DEBUG
  /* Diagnostic time for reporting the time
   * from flush to first output */
  GstClockTime last_reset_time;
#endif
};
static GstElementClass *parent_class = NULL;
static gint private_offset = 0;

/* cached quark to avoid contention on the global quark table lock */
#define META_TAG_VIDEO meta_tag_video_quark
static GQuark meta_tag_video_quark;

static void gst_video_decoder_class_init (GstVideoDecoderClass * klass);
static void gst_video_decoder_init (GstVideoDecoder * dec,
    GstVideoDecoderClass * klass);

static void gst_video_decoder_finalize (GObject * object);
static void gst_video_decoder_get_property (GObject * object, guint property_id,
    GValue * value, GParamSpec * pspec);
static void gst_video_decoder_set_property (GObject * object, guint property_id,
    const GValue * value, GParamSpec * pspec);

static gboolean gst_video_decoder_setcaps (GstVideoDecoder * dec,
    GstCaps * caps);
static gboolean gst_video_decoder_sink_event (GstPad * pad, GstObject * parent,
    GstEvent * event);
static gboolean gst_video_decoder_src_event (GstPad * pad, GstObject * parent,
    GstEvent * event);
static GstFlowReturn gst_video_decoder_chain (GstPad * pad, GstObject * parent,
    GstBuffer * buf);
static gboolean gst_video_decoder_sink_query (GstPad * pad, GstObject * parent,
    GstQuery * query);
static GstStateChangeReturn gst_video_decoder_change_state (GstElement *
    element, GstStateChange transition);
static gboolean gst_video_decoder_src_query (GstPad * pad, GstObject * parent,
    GstQuery * query);
static void gst_video_decoder_reset (GstVideoDecoder * decoder, gboolean full,
    gboolean flush_hard);

static GstFlowReturn gst_video_decoder_decode_frame (GstVideoDecoder * decoder,
    GstVideoCodecFrame * frame);

static void gst_video_decoder_push_event_list (GstVideoDecoder * decoder,
    GList * events);
static GstClockTime gst_video_decoder_get_frame_duration (GstVideoDecoder *
    decoder, GstVideoCodecFrame * frame);
static GstVideoCodecFrame *gst_video_decoder_new_frame (GstVideoDecoder *
    decoder);
static GstFlowReturn gst_video_decoder_clip_and_push_buf (GstVideoDecoder *
    decoder, GstBuffer * buf);
static GstFlowReturn gst_video_decoder_flush_parse (GstVideoDecoder * dec,
    gboolean at_eos);

static void gst_video_decoder_clear_queues (GstVideoDecoder * dec);

static gboolean gst_video_decoder_sink_event_default (GstVideoDecoder * decoder,
    GstEvent * event);
static gboolean gst_video_decoder_src_event_default (GstVideoDecoder * decoder,
    GstEvent * event);
static gboolean gst_video_decoder_decide_allocation_default (GstVideoDecoder *
    decoder, GstQuery * query);
static gboolean gst_video_decoder_propose_allocation_default (GstVideoDecoder *
    decoder, GstQuery * query);
static gboolean gst_video_decoder_negotiate_default (GstVideoDecoder * decoder);
static GstFlowReturn gst_video_decoder_parse_available (GstVideoDecoder * dec,
    gboolean at_eos, gboolean new_buffer);
static gboolean gst_video_decoder_negotiate_unlocked (GstVideoDecoder *
    decoder);
static gboolean gst_video_decoder_sink_query_default (GstVideoDecoder * decoder,
    GstQuery * query);
static gboolean gst_video_decoder_src_query_default (GstVideoDecoder * decoder,
    GstQuery * query);

static gboolean gst_video_decoder_transform_meta_default (GstVideoDecoder *
    decoder, GstVideoCodecFrame * frame, GstMeta * meta);

static gboolean gst_video_decoder_handle_missing_data_default (GstVideoDecoder *
    decoder, GstClockTime timestamp, GstClockTime duration);

static void gst_video_decoder_copy_metas (GstVideoDecoder * decoder,
    GstVideoCodecFrame * frame, GstBuffer * src_buffer,
    GstBuffer * dest_buffer);

static void gst_video_decoder_request_sync_point_internal (GstVideoDecoder *
    dec, GstClockTime deadline, GstVideoDecoderRequestSyncPointFlags flags);
/* we can't use G_DEFINE_ABSTRACT_TYPE because we need the klass in the _init
 * method to get to the padtemplates */
GType
gst_video_decoder_get_type (void)
{
  static gsize type = 0;

  if (g_once_init_enter (&type)) {
    GType _type;
    static const GTypeInfo info = {
      sizeof (GstVideoDecoderClass),
      NULL,
      NULL,
      (GClassInitFunc) gst_video_decoder_class_init,
      NULL,
      NULL,
      sizeof (GstVideoDecoder),
      0,
      (GInstanceInitFunc) gst_video_decoder_init,
    };

    _type = g_type_register_static (GST_TYPE_ELEMENT,
        "GstVideoDecoder", &info, G_TYPE_FLAG_ABSTRACT);

    private_offset =
        g_type_add_instance_private (_type, sizeof (GstVideoDecoderPrivate));

    g_once_init_leave (&type, _type);
  }

  return type;
}
static inline GstVideoDecoderPrivate *
gst_video_decoder_get_instance_private (GstVideoDecoder * self)
{
  return (G_STRUCT_MEMBER_P (self, private_offset));
}
static void
gst_video_decoder_class_init (GstVideoDecoderClass * klass)
{
  GObjectClass *gobject_class;
  GstElementClass *gstelement_class;

  gobject_class = G_OBJECT_CLASS (klass);
  gstelement_class = GST_ELEMENT_CLASS (klass);

  GST_DEBUG_CATEGORY_INIT (videodecoder_debug, "videodecoder", 0,
      "Base Video Decoder");

  parent_class = g_type_class_peek_parent (klass);

  if (private_offset != 0)
    g_type_class_adjust_private_offset (klass, &private_offset);

  gobject_class->finalize = gst_video_decoder_finalize;
  gobject_class->get_property = gst_video_decoder_get_property;
  gobject_class->set_property = gst_video_decoder_set_property;

  gstelement_class->change_state =
      GST_DEBUG_FUNCPTR (gst_video_decoder_change_state);

  klass->sink_event = gst_video_decoder_sink_event_default;
  klass->src_event = gst_video_decoder_src_event_default;
  klass->decide_allocation = gst_video_decoder_decide_allocation_default;
  klass->propose_allocation = gst_video_decoder_propose_allocation_default;
  klass->negotiate = gst_video_decoder_negotiate_default;
  klass->sink_query = gst_video_decoder_sink_query_default;
  klass->src_query = gst_video_decoder_src_query_default;
  klass->transform_meta = gst_video_decoder_transform_meta_default;
  klass->handle_missing_data = gst_video_decoder_handle_missing_data_default;

  /**
   * GstVideoDecoder:qos:
   *
   * If set to %TRUE the decoder will handle QoS events received
   * from downstream elements.
   * This includes dropping output frames which are detected as late
   * using the metrics reported by those events.
   */
  g_object_class_install_property (gobject_class, PROP_QOS,
      g_param_spec_boolean ("qos", "Quality of Service",
          "Handle Quality-of-Service events from downstream",
          DEFAULT_QOS, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));

  /**
   * GstVideoDecoder:max-errors:
   *
   * Maximum number of tolerated consecutive decode errors. See
   * gst_video_decoder_set_max_errors() for more details.
   */
  g_object_class_install_property (gobject_class, PROP_MAX_ERRORS,
      g_param_spec_int ("max-errors", "Max errors",
          "Max consecutive decoder errors before returning flow error",
          -1, G_MAXINT, DEFAULT_MAX_ERRORS,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));

  /**
   * GstVideoDecoder:min-force-key-unit-interval:
   *
   * Minimum interval between force-key-unit events sent upstream by the
   * decoder. Setting this to 0 will cause every event to be handled, setting
   * this to %GST_CLOCK_TIME_NONE will cause every event to be ignored.
   *
   * See gst_video_event_new_upstream_force_key_unit() for more details about
   * force-key-unit events.
   */
  g_object_class_install_property (gobject_class,
      PROP_MIN_FORCE_KEY_UNIT_INTERVAL,
      g_param_spec_uint64 ("min-force-key-unit-interval",
          "Minimum Force Keyunit Interval",
          "Minimum interval between force-keyunit requests in nanoseconds", 0,
          G_MAXUINT64, DEFAULT_MIN_FORCE_KEY_UNIT_INTERVAL,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));

  /**
   * GstVideoDecoder:discard-corrupted-frames:
   *
   * If set to %TRUE the decoder will discard frames that are marked as
   * corrupted instead of outputting them.
   */
  g_object_class_install_property (gobject_class, PROP_DISCARD_CORRUPTED_FRAMES,
      g_param_spec_boolean ("discard-corrupted-frames",
          "Discard Corrupted Frames",
          "Discard frames marked as corrupted instead of outputting them",
          DEFAULT_DISCARD_CORRUPTED_FRAMES,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));

  /**
   * GstVideoDecoder:automatic-request-sync-points:
   *
   * If set to %TRUE the decoder will automatically request sync points when
   * it seems like a good idea, e.g. if the first frames are not key frames or
   * if packet loss was reported by upstream.
   */
  g_object_class_install_property (gobject_class,
      PROP_AUTOMATIC_REQUEST_SYNC_POINTS,
      g_param_spec_boolean ("automatic-request-sync-points",
          "Automatic Request Sync Points",
          "Automatically request sync points when it would be useful",
          DEFAULT_AUTOMATIC_REQUEST_SYNC_POINTS,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));

  /**
   * GstVideoDecoder:automatic-request-sync-point-flags:
   *
   * GstVideoDecoderRequestSyncPointFlags to use for the automatically
   * requested sync points if `automatic-request-sync-points` is enabled.
   */
  g_object_class_install_property (gobject_class,
      PROP_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS,
      g_param_spec_flags ("automatic-request-sync-point-flags",
          "Automatic Request Sync Point Flags",
          "Flags to use when automatically requesting sync points",
          GST_TYPE_VIDEO_DECODER_REQUEST_SYNC_POINT_FLAGS,
          DEFAULT_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));

  meta_tag_video_quark = g_quark_from_static_string (GST_META_TAG_VIDEO_STR);
}
static void
gst_video_decoder_init (GstVideoDecoder * decoder, GstVideoDecoderClass * klass)
{
  GstPadTemplate *pad_template;
  GstPad *pad;

  GST_DEBUG_OBJECT (decoder, "gst_video_decoder_init");

  decoder->priv = gst_video_decoder_get_instance_private (decoder);

  pad_template =
      gst_element_class_get_pad_template (GST_ELEMENT_CLASS (klass), "sink");
  g_return_if_fail (pad_template != NULL);

  decoder->sinkpad = pad = gst_pad_new_from_template (pad_template, "sink");

  gst_pad_set_chain_function (pad, GST_DEBUG_FUNCPTR (gst_video_decoder_chain));
  gst_pad_set_event_function (pad,
      GST_DEBUG_FUNCPTR (gst_video_decoder_sink_event));
  gst_pad_set_query_function (pad,
      GST_DEBUG_FUNCPTR (gst_video_decoder_sink_query));
  gst_element_add_pad (GST_ELEMENT (decoder), decoder->sinkpad);

  pad_template =
      gst_element_class_get_pad_template (GST_ELEMENT_CLASS (klass), "src");
  g_return_if_fail (pad_template != NULL);

  decoder->srcpad = pad = gst_pad_new_from_template (pad_template, "src");

  gst_pad_set_event_function (pad,
      GST_DEBUG_FUNCPTR (gst_video_decoder_src_event));
  gst_pad_set_query_function (pad,
      GST_DEBUG_FUNCPTR (gst_video_decoder_src_query));
  gst_element_add_pad (GST_ELEMENT (decoder), decoder->srcpad);

  gst_segment_init (&decoder->input_segment, GST_FORMAT_TIME);
  gst_segment_init (&decoder->output_segment, GST_FORMAT_TIME);

  g_rec_mutex_init (&decoder->stream_lock);

  decoder->priv->input_adapter = gst_adapter_new ();
  decoder->priv->output_adapter = gst_adapter_new ();
  decoder->priv->packetized = TRUE;
  decoder->priv->needs_format = FALSE;

  g_queue_init (&decoder->priv->frames);
  g_queue_init (&decoder->priv->timestamps);

  decoder->priv->do_qos = DEFAULT_QOS;
  decoder->priv->max_errors = GST_VIDEO_DECODER_MAX_ERRORS;

  decoder->priv->min_latency = 0;
  decoder->priv->max_latency = 0;

  decoder->priv->automatic_request_sync_points =
      DEFAULT_AUTOMATIC_REQUEST_SYNC_POINTS;
  decoder->priv->automatic_request_sync_point_flags =
      DEFAULT_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS;

  gst_video_decoder_reset (decoder, TRUE, TRUE);
}
static GstVideoCodecState *
_new_input_state (GstCaps * caps)
{
  GstVideoCodecState *state;
  GstStructure *structure;
  const GValue *codec_data;

  state = g_slice_new0 (GstVideoCodecState);
  state->ref_count = 1;
  gst_video_info_init (&state->info);
  if (G_UNLIKELY (!gst_video_info_from_caps (&state->info, caps)))
    goto parse_fail;
  state->caps = gst_caps_ref (caps);

  structure = gst_caps_get_structure (caps, 0);

  codec_data = gst_structure_get_value (structure, "codec_data");
  if (codec_data && G_VALUE_TYPE (codec_data) == GST_TYPE_BUFFER)
    state->codec_data = GST_BUFFER (g_value_dup_boxed (codec_data));

  return state;

parse_fail:
  {
    g_slice_free (GstVideoCodecState, state);
    return NULL;
  }
}
static GstVideoCodecState *
_new_output_state (GstVideoFormat fmt, GstVideoInterlaceMode interlace_mode,
    guint width, guint height, GstVideoCodecState * reference,
    gboolean copy_interlace_mode)
{
  GstVideoCodecState *state;

  state = g_slice_new0 (GstVideoCodecState);
  state->ref_count = 1;
  gst_video_info_init (&state->info);
  if (!gst_video_info_set_interlaced_format (&state->info, fmt, interlace_mode,
          width, height)) {
    g_slice_free (GstVideoCodecState, state);
    return NULL;
  }

  if (reference) {
    GstVideoInfo *tgt, *ref;

    tgt = &state->info;
    ref = &reference->info;

    /* Copy over extra fields from reference state */
    if (copy_interlace_mode)
      tgt->interlace_mode = ref->interlace_mode;
    tgt->flags = ref->flags;
    tgt->chroma_site = ref->chroma_site;
    tgt->colorimetry = ref->colorimetry;
    GST_DEBUG ("reference par %d/%d fps %d/%d",
        ref->par_n, ref->par_d, ref->fps_n, ref->fps_d);
    tgt->par_n = ref->par_n;
    tgt->par_d = ref->par_d;
    tgt->fps_n = ref->fps_n;
    tgt->fps_d = ref->fps_d;
    tgt->views = ref->views;

    GST_VIDEO_INFO_FIELD_ORDER (tgt) = GST_VIDEO_INFO_FIELD_ORDER (ref);

    if (GST_VIDEO_INFO_MULTIVIEW_MODE (ref) != GST_VIDEO_MULTIVIEW_MODE_NONE) {
      GST_VIDEO_INFO_MULTIVIEW_MODE (tgt) = GST_VIDEO_INFO_MULTIVIEW_MODE (ref);
      GST_VIDEO_INFO_MULTIVIEW_FLAGS (tgt) =
          GST_VIDEO_INFO_MULTIVIEW_FLAGS (ref);
    } else {
      /* Default to MONO, overridden as needed by sub-classes */
      GST_VIDEO_INFO_MULTIVIEW_MODE (tgt) = GST_VIDEO_MULTIVIEW_MODE_MONO;
      GST_VIDEO_INFO_MULTIVIEW_FLAGS (tgt) = GST_VIDEO_MULTIVIEW_FLAGS_NONE;
    }
  }

  GST_DEBUG ("output par %d/%d fps %d/%d",
      state->info.par_n, state->info.par_d,
      state->info.fps_n, state->info.fps_d);

  return state;
}
static gboolean
gst_video_decoder_setcaps (GstVideoDecoder * decoder, GstCaps * caps)
{
  GstVideoDecoderClass *decoder_class;
  GstVideoCodecState *state;
  gboolean ret = TRUE;

  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);

  GST_DEBUG_OBJECT (decoder, "setcaps %" GST_PTR_FORMAT, caps);

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  if (decoder->priv->input_state) {
    GST_DEBUG_OBJECT (decoder,
        "Checking if caps changed old %" GST_PTR_FORMAT " new %" GST_PTR_FORMAT,
        decoder->priv->input_state->caps, caps);
    if (gst_caps_is_equal (decoder->priv->input_state->caps, caps))
      goto caps_not_changed;
  }

  state = _new_input_state (caps);

  if (G_UNLIKELY (state == NULL))
    goto parse_fail;

  if (decoder_class->set_format)
    ret = decoder_class->set_format (decoder, state);

  if (!ret)
    goto refused_format;

  if (decoder->priv->input_state)
    gst_video_codec_state_unref (decoder->priv->input_state);
  decoder->priv->input_state = state;

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return ret;

caps_not_changed:
  {
    GST_DEBUG_OBJECT (decoder, "Caps did not change - ignore");
    GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
    return TRUE;
  }

  /* ERRORS */
parse_fail:
  {
    GST_WARNING_OBJECT (decoder, "Failed to parse caps");
    GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
    return FALSE;
  }

refused_format:
  {
    GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
    GST_WARNING_OBJECT (decoder, "Subclass refused caps");
    gst_video_codec_state_unref (state);
    return FALSE;
  }
}
static void
gst_video_decoder_finalize (GObject * object)
{
  GstVideoDecoder *decoder;

  decoder = GST_VIDEO_DECODER (object);

  GST_DEBUG_OBJECT (object, "finalize");

  g_rec_mutex_clear (&decoder->stream_lock);

  if (decoder->priv->input_adapter) {
    g_object_unref (decoder->priv->input_adapter);
    decoder->priv->input_adapter = NULL;
  }
  if (decoder->priv->output_adapter) {
    g_object_unref (decoder->priv->output_adapter);
    decoder->priv->output_adapter = NULL;
  }

  if (decoder->priv->input_state)
    gst_video_codec_state_unref (decoder->priv->input_state);
  if (decoder->priv->output_state)
    gst_video_codec_state_unref (decoder->priv->output_state);

  if (decoder->priv->pool) {
    gst_object_unref (decoder->priv->pool);
    decoder->priv->pool = NULL;
  }

  if (decoder->priv->allocator) {
    gst_object_unref (decoder->priv->allocator);
    decoder->priv->allocator = NULL;
  }

  G_OBJECT_CLASS (parent_class)->finalize (object);
}
static void
gst_video_decoder_get_property (GObject * object, guint property_id,
    GValue * value, GParamSpec * pspec)
{
  GstVideoDecoder *dec = GST_VIDEO_DECODER (object);
  GstVideoDecoderPrivate *priv = dec->priv;

  switch (property_id) {
    case PROP_QOS:
      g_value_set_boolean (value, priv->do_qos);
      break;
    case PROP_MAX_ERRORS:
      g_value_set_int (value, gst_video_decoder_get_max_errors (dec));
      break;
    case PROP_MIN_FORCE_KEY_UNIT_INTERVAL:
      g_value_set_uint64 (value, priv->min_force_key_unit_interval);
      break;
    case PROP_DISCARD_CORRUPTED_FRAMES:
      g_value_set_boolean (value, priv->discard_corrupted_frames);
      break;
    case PROP_AUTOMATIC_REQUEST_SYNC_POINTS:
      g_value_set_boolean (value, priv->automatic_request_sync_points);
      break;
    case PROP_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS:
      g_value_set_flags (value, priv->automatic_request_sync_point_flags);
      break;
    default:
      G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec);
      break;
  }
}
static void
gst_video_decoder_set_property (GObject * object, guint property_id,
    const GValue * value, GParamSpec * pspec)
{
  GstVideoDecoder *dec = GST_VIDEO_DECODER (object);
  GstVideoDecoderPrivate *priv = dec->priv;

  switch (property_id) {
    case PROP_QOS:
      priv->do_qos = g_value_get_boolean (value);
      break;
    case PROP_MAX_ERRORS:
      gst_video_decoder_set_max_errors (dec, g_value_get_int (value));
      break;
    case PROP_MIN_FORCE_KEY_UNIT_INTERVAL:
      priv->min_force_key_unit_interval = g_value_get_uint64 (value);
      break;
    case PROP_DISCARD_CORRUPTED_FRAMES:
      priv->discard_corrupted_frames = g_value_get_boolean (value);
      break;
    case PROP_AUTOMATIC_REQUEST_SYNC_POINTS:
      priv->automatic_request_sync_points = g_value_get_boolean (value);
      break;
    case PROP_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS:
      priv->automatic_request_sync_point_flags = g_value_get_flags (value);
      break;
    default:
      G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec);
      break;
  }
}
/* hard == FLUSH, otherwise discont */
static GstFlowReturn
gst_video_decoder_flush (GstVideoDecoder * dec, gboolean hard)
{
  GstVideoDecoderClass *klass = GST_VIDEO_DECODER_GET_CLASS (dec);
  GstFlowReturn ret = GST_FLOW_OK;

  GST_LOG_OBJECT (dec, "flush hard %d", hard);

  /* Inform subclass */
  if (klass->reset) {
    GST_FIXME_OBJECT (dec, "GstVideoDecoder::reset() is deprecated");
    klass->reset (dec, hard);
  }

  if (klass->flush) {
    if (!klass->flush (dec))
      ret = GST_FLOW_ERROR;
  }

  /* and get (re)set for the sequel */
  gst_video_decoder_reset (dec, FALSE, hard);

  return ret;
}
static GstEvent *
gst_video_decoder_create_merged_tags_event (GstVideoDecoder * dec)
{
  GstTagList *merged_tags;

  GST_LOG_OBJECT (dec, "upstream : %" GST_PTR_FORMAT, dec->priv->upstream_tags);
  GST_LOG_OBJECT (dec, "decoder  : %" GST_PTR_FORMAT, dec->priv->tags);
  GST_LOG_OBJECT (dec, "mode     : %d", dec->priv->tags_merge_mode);

  merged_tags =
      gst_tag_list_merge (dec->priv->upstream_tags, dec->priv->tags,
      dec->priv->tags_merge_mode);

  GST_DEBUG_OBJECT (dec, "merged   : %" GST_PTR_FORMAT, merged_tags);

  if (merged_tags == NULL)
    return NULL;

  if (gst_tag_list_is_empty (merged_tags)) {
    gst_tag_list_unref (merged_tags);
    return NULL;
  }

  return gst_event_new_tag (merged_tags);
}
static gboolean
gst_video_decoder_push_event (GstVideoDecoder * decoder, GstEvent * event)
{
  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_SEGMENT:
    {
      GstSegment segment;

      gst_event_copy_segment (event, &segment);

      GST_DEBUG_OBJECT (decoder, "segment %" GST_SEGMENT_FORMAT, &segment);

      if (segment.format != GST_FORMAT_TIME) {
        GST_DEBUG_OBJECT (decoder, "received non TIME newsegment");
        break;
      }

      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      decoder->output_segment = segment;
      decoder->priv->in_out_segment_sync =
          gst_segment_is_equal (&decoder->input_segment, &segment);
      decoder->priv->last_timestamp_out = GST_CLOCK_TIME_NONE;
      decoder->priv->earliest_time = GST_CLOCK_TIME_NONE;
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      break;
    }
    default:
      break;
  }

  GST_DEBUG_OBJECT (decoder, "pushing event %s",
      gst_event_type_get_name (GST_EVENT_TYPE (event)));

  return gst_pad_push_event (decoder->srcpad, event);
}
static GstFlowReturn
gst_video_decoder_parse_available (GstVideoDecoder * dec, gboolean at_eos,
    gboolean new_buffer)
{
  GstVideoDecoderClass *decoder_class = GST_VIDEO_DECODER_GET_CLASS (dec);
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn ret = GST_FLOW_OK;
  gsize was_available, available;
  guint inactive = 0;

  available = gst_adapter_available (priv->input_adapter);

  while (available || new_buffer) {
    new_buffer = FALSE;
    /* current frame may have been parsed and handled,
     * so we need to set up a new one when asking subclass to parse */
    if (priv->current_frame == NULL)
      priv->current_frame = gst_video_decoder_new_frame (dec);

    was_available = available;
    ret = decoder_class->parse (dec, priv->current_frame,
        priv->input_adapter, at_eos);
    if (ret != GST_FLOW_OK)
      break;

    /* if the subclass returned success (GST_FLOW_OK), it is expected
     * to have collected and submitted a frame, i.e. it should have
     * called gst_video_decoder_have_frame(), or at least consumed a
     * few bytes through gst_video_decoder_add_to_frame().
     *
     * Otherwise, this is an implementation bug, and we error out
     * after 2 failed attempts */
    available = gst_adapter_available (priv->input_adapter);
    if (!priv->current_frame || available != was_available)
      inactive = 0;
    else if (++inactive == 2)
      goto error_inactive;
  }

  return ret;

  /* ERRORS */
error_inactive:
  {
    GST_ERROR_OBJECT (dec, "Failed to consume data. Error in subclass?");
    return GST_FLOW_ERROR;
  }
}
/* This function has to be called with the stream lock taken. */
static GstFlowReturn
gst_video_decoder_drain_out (GstVideoDecoder * dec, gboolean at_eos)
{
  GstVideoDecoderClass *decoder_class = GST_VIDEO_DECODER_GET_CLASS (dec);
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn ret = GST_FLOW_OK;

  if (dec->input_segment.rate > 0.0) {
    /* Forward mode, if unpacketized, give the child class
     * a final chance to flush out packets */
    if (!priv->packetized) {
      ret = gst_video_decoder_parse_available (dec, TRUE, FALSE);
    }

    if (at_eos) {
      if (decoder_class->finish)
        ret = decoder_class->finish (dec);
    } else {
      if (decoder_class->drain) {
        ret = decoder_class->drain (dec);
      } else {
        GST_FIXME_OBJECT (dec, "Sub-class should implement drain()");
      }
    }
  } else {
    /* Reverse playback mode */
    ret = gst_video_decoder_flush_parse (dec, TRUE);
  }

  return ret;
}
static GList *
_flush_events (GstPad * pad, GList * events)
{
  GList *tmp;

  for (tmp = events; tmp; tmp = tmp->next) {
    if (GST_EVENT_TYPE (tmp->data) != GST_EVENT_EOS &&
        GST_EVENT_TYPE (tmp->data) != GST_EVENT_SEGMENT &&
        GST_EVENT_IS_STICKY (tmp->data)) {
      gst_pad_store_sticky_event (pad, GST_EVENT_CAST (tmp->data));
    }
    gst_event_unref (tmp->data);
  }
  g_list_free (events);

  return NULL;
}
/* Must be called holding the GST_VIDEO_DECODER_STREAM_LOCK */
static gboolean
gst_video_decoder_negotiate_default_caps (GstVideoDecoder * decoder)
{
  GstCaps *caps, *templcaps;
  GstVideoCodecState *state;
  GstVideoInfo info;
  gint i;
  gint caps_size;
  GstStructure *structure;

  templcaps = gst_pad_get_pad_template_caps (decoder->srcpad);
  caps = gst_pad_peer_query_caps (decoder->srcpad, templcaps);
  if (caps)
    gst_caps_unref (templcaps);
  else
    caps = templcaps;
  templcaps = NULL;

  if (!caps || gst_caps_is_empty (caps) || gst_caps_is_any (caps))
    goto caps_error;

  GST_LOG_OBJECT (decoder, "peer caps %" GST_PTR_FORMAT, caps);

  /* before fixating, try to use whatever upstream provided */
  caps = gst_caps_make_writable (caps);
  caps_size = gst_caps_get_size (caps);
  if (decoder->priv->input_state && decoder->priv->input_state->caps) {
    GstCaps *sinkcaps = decoder->priv->input_state->caps;
    GstStructure *structure = gst_caps_get_structure (sinkcaps, 0);
    gint width, height;

    if (gst_structure_get_int (structure, "width", &width)) {
      for (i = 0; i < caps_size; i++) {
        gst_structure_set (gst_caps_get_structure (caps, i), "width",
            G_TYPE_INT, width, NULL);
      }
    }

    if (gst_structure_get_int (structure, "height", &height)) {
      for (i = 0; i < caps_size; i++) {
        gst_structure_set (gst_caps_get_structure (caps, i), "height",
            G_TYPE_INT, height, NULL);
      }
    }
  }

  for (i = 0; i < caps_size; i++) {
    structure = gst_caps_get_structure (caps, i);
    /* Random I420 1280x720 for fixation */
    if (gst_structure_has_field (structure, "format"))
      gst_structure_fixate_field_string (structure, "format", "I420");
    else
      gst_structure_set (structure, "format", G_TYPE_STRING, "I420", NULL);

    if (gst_structure_has_field (structure, "width"))
      gst_structure_fixate_field_nearest_int (structure, "width", 1280);
    else
      gst_structure_set (structure, "width", G_TYPE_INT, 1280, NULL);

    if (gst_structure_has_field (structure, "height"))
      gst_structure_fixate_field_nearest_int (structure, "height", 720);
    else
      gst_structure_set (structure, "height", G_TYPE_INT, 720, NULL);
  }
  caps = gst_caps_fixate (caps);

  if (!caps || !gst_video_info_from_caps (&info, caps))
    goto caps_error;

  GST_INFO_OBJECT (decoder,
      "Chose default caps %" GST_PTR_FORMAT " for initial gap", caps);
  state =
      gst_video_decoder_set_output_state (decoder, info.finfo->format,
      info.width, info.height, decoder->priv->input_state);
  gst_video_codec_state_unref (state);
  gst_caps_unref (caps);

  return TRUE;

caps_error:
  {
    if (caps)
      gst_caps_unref (caps);
    return FALSE;
  }
}
static gboolean
gst_video_decoder_handle_missing_data_default (GstVideoDecoder *
    decoder, GstClockTime timestamp, GstClockTime duration)
{
  GstVideoDecoderPrivate *priv;

  priv = decoder->priv;

  if (priv->automatic_request_sync_points) {
    GstClockTime deadline =
        gst_segment_to_running_time (&decoder->input_segment, GST_FORMAT_TIME,
        timestamp);

    GST_DEBUG_OBJECT (decoder,
        "Requesting sync point for missing data at running time %"
        GST_TIME_FORMAT " timestamp %" GST_TIME_FORMAT " with duration %"
        GST_TIME_FORMAT, GST_TIME_ARGS (deadline), GST_TIME_ARGS (timestamp),
        GST_TIME_ARGS (duration));

    gst_video_decoder_request_sync_point_internal (decoder, deadline,
        priv->automatic_request_sync_point_flags);
  }

  return TRUE;
}
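
/* Illustrative note (not part of the original file): the automatic
 * sync-point request above is opt-in; an application typically enables it
 * on the element through the property exposed for this flag, e.g.:
 *
 *   g_object_set (dec, "automatic-request-sync-points", TRUE, NULL);
 */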
static gboolean
gst_video_decoder_sink_event_default (GstVideoDecoder * decoder,
    GstEvent * event)
{
  GstVideoDecoderClass *decoder_class;
  GstVideoDecoderPrivate *priv;
  gboolean ret = FALSE;
  gboolean forward_immediate = FALSE;

  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);

  priv = decoder->priv;

  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_STREAM_START:
    {
      GstFlowReturn flow_ret = GST_FLOW_OK;

      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      flow_ret = gst_video_decoder_drain_out (decoder, FALSE);
      ret = (flow_ret == GST_FLOW_OK);

      GST_DEBUG_OBJECT (decoder, "received STREAM_START. Clearing taglist");
      /* Flush upstream tags after a STREAM_START */
      if (priv->upstream_tags) {
        gst_tag_list_unref (priv->upstream_tags);
        priv->upstream_tags = NULL;
        priv->tags_changed = TRUE;
      }
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

      /* Forward STREAM_START immediately. Everything is drained after
       * the STREAM_START event and we can forward this event immediately
       * now without having buffers out of order.
       */
      forward_immediate = TRUE;
      break;
    }
    case GST_EVENT_CAPS:
    {
      GstCaps *caps;

      gst_event_parse_caps (event, &caps);
      ret = gst_video_decoder_setcaps (decoder, caps);
      gst_event_unref (event);
      event = NULL;
      break;
    }
    case GST_EVENT_SEGMENT_DONE:
    {
      GstFlowReturn flow_ret = GST_FLOW_OK;

      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      flow_ret = gst_video_decoder_drain_out (decoder, FALSE);
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      ret = (flow_ret == GST_FLOW_OK);

      /* Forward SEGMENT_DONE immediately. This is required
       * because no buffer or serialized event might come
       * after SEGMENT_DONE and nothing could trigger another
       * _finish_frame() call.
       *
       * The subclass can override this behaviour by overriding
       * the ::sink_event() vfunc and not chaining up to the
       * parent class' ::sink_event() until a later time.
       */
      forward_immediate = TRUE;
      break;
    }
    case GST_EVENT_EOS:
    {
      GstFlowReturn flow_ret = GST_FLOW_OK;

      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      flow_ret = gst_video_decoder_drain_out (decoder, TRUE);
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      ret = (flow_ret == GST_FLOW_OK);

      /* Error out even if EOS was ok when we had input, but no output */
      if (ret && priv->had_input_data && !priv->had_output_data) {
        GST_ELEMENT_ERROR (decoder, STREAM, DECODE,
            ("No valid frames decoded before end of stream"),
            ("no valid frames found"));
      }

      /* Forward EOS immediately. This is required because no
       * buffer or serialized event will come after EOS and
       * nothing could trigger another _finish_frame() call.
       *
       * The subclass can override this behaviour by overriding
       * the ::sink_event() vfunc and not chaining up to the
       * parent class' ::sink_event() until a later time.
       */
      forward_immediate = TRUE;
      break;
    }
    case GST_EVENT_GAP:
    {
      GstClockTime timestamp, duration;
      GstGapFlags gap_flags = 0;
      GstFlowReturn flow_ret = GST_FLOW_OK;
      gboolean needs_reconfigure = FALSE;
      GList *events;
      GList *frame_events;

      gst_event_parse_gap (event, &timestamp, &duration);
      gst_event_parse_gap_flags (event, &gap_flags);

      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      /* If this is not missing data, or the subclass does not handle it
       * specifically, then drain out the decoder and forward the event
       * directly */
      if ((gap_flags & GST_GAP_FLAG_MISSING_DATA) == 0
          || !decoder_class->handle_missing_data
          || decoder_class->handle_missing_data (decoder, timestamp,
              duration)) {
        if (decoder->input_segment.flags & GST_SEEK_FLAG_TRICKMODE_KEY_UNITS)
          flow_ret = gst_video_decoder_drain_out (decoder, FALSE);
        ret = (flow_ret == GST_FLOW_OK);

        /* Ensure we have caps before forwarding the event */
        if (!decoder->priv->output_state) {
          if (!gst_video_decoder_negotiate_default_caps (decoder)) {
            GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
            GST_ELEMENT_ERROR (decoder, STREAM, FORMAT, (NULL),
                ("Decoder output not negotiated before GAP event."));
            forward_immediate = TRUE;
            break;
          }
          needs_reconfigure = TRUE;
        }

        needs_reconfigure = gst_pad_check_reconfigure (decoder->srcpad)
            || needs_reconfigure;
        if (decoder->priv->output_state_changed || needs_reconfigure) {
          if (!gst_video_decoder_negotiate_unlocked (decoder)) {
            GST_WARNING_OBJECT (decoder, "Failed to negotiate with downstream");
            gst_pad_mark_reconfigure (decoder->srcpad);
          }
        }

        GST_DEBUG_OBJECT (decoder, "Pushing all pending serialized events"
            " before the gap");
        events = decoder->priv->pending_events;
        frame_events = decoder->priv->current_frame_events;
        decoder->priv->pending_events = NULL;
        decoder->priv->current_frame_events = NULL;

        GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

        gst_video_decoder_push_event_list (decoder, events);
        gst_video_decoder_push_event_list (decoder, frame_events);

        /* Forward GAP immediately. Everything is drained after
         * the GAP event and we can forward this event immediately
         * now without having buffers out of order.
         */
        forward_immediate = TRUE;
      } else {
        GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
        gst_clear_event (&event);
      }
      break;
    }
    case GST_EVENT_CUSTOM_DOWNSTREAM:
    {
      gboolean in_still;
      GstFlowReturn flow_ret = GST_FLOW_OK;

      if (gst_video_event_parse_still_frame (event, &in_still)) {
        if (in_still) {
          GST_DEBUG_OBJECT (decoder, "draining current data for still-frame");
          GST_VIDEO_DECODER_STREAM_LOCK (decoder);
          flow_ret = gst_video_decoder_drain_out (decoder, FALSE);
          GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
          ret = (flow_ret == GST_FLOW_OK);
        }
        /* Forward STILL_FRAME immediately. Everything is drained after
         * the STILL_FRAME event and we can forward this event immediately
         * now without having buffers out of order.
         */
        forward_immediate = TRUE;
      }
      break;
    }
    case GST_EVENT_SEGMENT:
    {
      GstSegment segment;

      gst_event_copy_segment (event, &segment);

      if (segment.format == GST_FORMAT_TIME) {
        GST_DEBUG_OBJECT (decoder,
            "received TIME SEGMENT %" GST_SEGMENT_FORMAT, &segment);
      } else {
        gint64 start;

        GST_DEBUG_OBJECT (decoder,
            "received SEGMENT %" GST_SEGMENT_FORMAT, &segment);

        /* handle newsegment as a result from our legacy simple seeking */
        /* note that initial 0 should convert to 0 in any case */
        if (priv->do_estimate_rate &&
            gst_pad_query_convert (decoder->sinkpad, GST_FORMAT_BYTES,
                segment.start, GST_FORMAT_TIME, &start)) {
          /* best attempt convert */
          /* as these are only estimates, stop is kept open-ended to avoid
           * premature cutting */
          GST_DEBUG_OBJECT (decoder,
              "converted to TIME start %" GST_TIME_FORMAT,
              GST_TIME_ARGS (start));
          segment.start = start;
          segment.stop = GST_CLOCK_TIME_NONE;
          segment.time = start;
          /* replace event */
          gst_event_unref (event);
          event = gst_event_new_segment (&segment);
        } else {
          goto newseg_wrong_format;
        }
      }

      GST_VIDEO_DECODER_STREAM_LOCK (decoder);

      /* Update the decode flags in the segment if we have an instant-rate
       * override active */
      GST_OBJECT_LOCK (decoder);
      if (!priv->decode_flags_override)
        priv->decode_flags = segment.flags;
      else {
        segment.flags &= ~GST_SEGMENT_INSTANT_FLAGS;
        segment.flags |= priv->decode_flags & GST_SEGMENT_INSTANT_FLAGS;
      }

      priv->base_timestamp = GST_CLOCK_TIME_NONE;
      priv->base_picture_number = 0;

      decoder->input_segment = segment;
      decoder->priv->in_out_segment_sync = FALSE;

      GST_OBJECT_UNLOCK (decoder);
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      break;
    }
    case GST_EVENT_INSTANT_RATE_CHANGE:
    {
      GstSegmentFlags flags;
      GstSegment *seg;

      gst_event_parse_instant_rate_change (event, NULL, &flags);

      GST_OBJECT_LOCK (decoder);
      priv->decode_flags_override = TRUE;
      priv->decode_flags = flags;

      /* Update the input segment flags */
      seg = &decoder->input_segment;
      seg->flags &= ~GST_SEGMENT_INSTANT_FLAGS;
      seg->flags |= priv->decode_flags & GST_SEGMENT_INSTANT_FLAGS;
      GST_OBJECT_UNLOCK (decoder);
      break;
    }
    case GST_EVENT_FLUSH_STOP:
    {
      GList *l;

      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      for (l = priv->frames.head; l; l = l->next) {
        GstVideoCodecFrame *frame = l->data;

        frame->events = _flush_events (decoder->srcpad, frame->events);
      }
      priv->current_frame_events = _flush_events (decoder->srcpad,
          decoder->priv->current_frame_events);

      /* well, this is kind of worse than a DISCONT */
      gst_video_decoder_flush (decoder, TRUE);
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      /* Forward FLUSH_STOP immediately. This is required because it is
       * expected to be forwarded immediately and no buffers are queued
       * anyway.
       */
      forward_immediate = TRUE;
      break;
    }
    case GST_EVENT_TAG:
    {
      GstTagList *tags;

      gst_event_parse_tag (event, &tags);

      if (gst_tag_list_get_scope (tags) == GST_TAG_SCOPE_STREAM) {
        GST_VIDEO_DECODER_STREAM_LOCK (decoder);
        if (priv->upstream_tags != tags) {
          if (priv->upstream_tags)
            gst_tag_list_unref (priv->upstream_tags);
          priv->upstream_tags = gst_tag_list_ref (tags);
          GST_INFO_OBJECT (decoder, "upstream tags: %" GST_PTR_FORMAT, tags);
        }
        gst_event_unref (event);
        event = gst_video_decoder_create_merged_tags_event (decoder);
        GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
        if (!event)
          ret = TRUE;
      }
      break;
    }
    default:
      break;
  }
  /* Forward non-serialized events immediately, and all other
   * events which can be forwarded immediately without potentially
   * causing the event to go out of order with other events and
   * buffers as decided above.
   */
  if (event) {
    if (!GST_EVENT_IS_SERIALIZED (event) || forward_immediate) {
      ret = gst_video_decoder_push_event (decoder, event);
    } else {
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      decoder->priv->current_frame_events =
          g_list_prepend (decoder->priv->current_frame_events, event);
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      ret = TRUE;
    }
  }

  return ret;

newseg_wrong_format:
  {
    GST_DEBUG_OBJECT (decoder, "received non TIME newsegment");
    gst_event_unref (event);
    /* SWALLOW EVENT */
    return TRUE;
  }
}
static gboolean
gst_video_decoder_sink_event (GstPad * pad, GstObject * parent,
    GstEvent * event)
{
  GstVideoDecoder *decoder;
  GstVideoDecoderClass *decoder_class;
  gboolean ret = FALSE;

  decoder = GST_VIDEO_DECODER (parent);
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);

  GST_DEBUG_OBJECT (decoder, "received event %d, %s", GST_EVENT_TYPE (event),
      GST_EVENT_TYPE_NAME (event));

  if (decoder_class->sink_event)
    ret = decoder_class->sink_event (decoder, event);

  return ret;
}
/* perform upstream byte <-> time conversion (duration, seeking)
 * if subclass allows and if enough data for moderately decent conversion */
static inline gboolean
gst_video_decoder_do_byte (GstVideoDecoder * dec)
{
  gboolean ret;

  GST_OBJECT_LOCK (dec);
  ret = dec->priv->do_estimate_rate && (dec->priv->bytes_out > 0)
      && (dec->priv->time > GST_SECOND);
  GST_OBJECT_UNLOCK (dec);

  return ret;
}
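
/* Illustrative note (not part of the original file): for the estimate-based
 * conversion above to ever be allowed, a subclass must opt in, typically
 * from its ::set_format() implementation:
 *
 *   gst_video_decoder_set_estimate_rate (dec, TRUE);
 */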
static gboolean
gst_video_decoder_do_seek (GstVideoDecoder * dec, GstEvent * event)
{
  GstFormat format;
  GstSeekFlags flags;
  GstSeekType start_type, end_type;
  gdouble rate;
  gint64 start, start_time, end_time;
  GstSegment seek_segment;
  guint32 seqnum;

  gst_event_parse_seek (event, &rate, &format, &flags, &start_type,
      &start_time, &end_type, &end_time);

  /* we'll handle plain open-ended flushing seeks with the simple approach */
  if (rate != 1.0) {
    GST_DEBUG_OBJECT (dec, "unsupported seek: rate");
    return FALSE;
  }

  if (start_type != GST_SEEK_TYPE_SET) {
    GST_DEBUG_OBJECT (dec, "unsupported seek: start time");
    return FALSE;
  }

  if ((end_type != GST_SEEK_TYPE_SET && end_type != GST_SEEK_TYPE_NONE) ||
      (end_type == GST_SEEK_TYPE_SET && end_time != GST_CLOCK_TIME_NONE)) {
    GST_DEBUG_OBJECT (dec, "unsupported seek: end time");
    return FALSE;
  }

  if (!(flags & GST_SEEK_FLAG_FLUSH)) {
    GST_DEBUG_OBJECT (dec, "unsupported seek: not flushing");
    return FALSE;
  }

  memcpy (&seek_segment, &dec->output_segment, sizeof (seek_segment));
  gst_segment_do_seek (&seek_segment, rate, format, flags, start_type,
      start_time, end_type, end_time, NULL);
  start_time = seek_segment.position;

  if (!gst_pad_query_convert (dec->sinkpad, GST_FORMAT_TIME, start_time,
          GST_FORMAT_BYTES, &start)) {
    GST_DEBUG_OBJECT (dec, "conversion failed");
    return FALSE;
  }

  seqnum = gst_event_get_seqnum (event);
  event = gst_event_new_seek (1.0, GST_FORMAT_BYTES, flags,
      GST_SEEK_TYPE_SET, start, GST_SEEK_TYPE_NONE, -1);
  gst_event_set_seqnum (event, seqnum);

  GST_DEBUG_OBJECT (dec, "seeking to %" GST_TIME_FORMAT " at byte offset %"
      G_GINT64_FORMAT, GST_TIME_ARGS (start_time), start);

  return gst_pad_push_event (dec->sinkpad, event);
}
static gboolean
gst_video_decoder_src_event_default (GstVideoDecoder * decoder,
    GstEvent * event)
{
  GstVideoDecoderPrivate *priv;
  gboolean res = FALSE;

  priv = decoder->priv;

  GST_DEBUG_OBJECT (decoder,
      "received event %d, %s", GST_EVENT_TYPE (event),
      GST_EVENT_TYPE_NAME (event));

  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_SEEK:
    {
      GstFormat format;
      gdouble rate;
      GstSeekFlags flags;
      GstSeekType start_type, stop_type;
      gint64 start, stop;
      gint64 tstart, tstop;
      guint32 seqnum;

      gst_event_parse_seek (event, &rate, &format, &flags, &start_type, &start,
          &stop_type, &stop);
      seqnum = gst_event_get_seqnum (event);

      /* upstream gets a chance first */
      if ((res = gst_pad_push_event (decoder->sinkpad, event)))
        break;

      /* if upstream fails for a time seek, maybe we can help if allowed */
      if (format == GST_FORMAT_TIME) {
        if (gst_video_decoder_do_byte (decoder))
          res = gst_video_decoder_do_seek (decoder, event);
        break;
      }

      /* ... though a non-time seek can be aided as well */
      /* First bring the requested format to time */
      if (!(res =
              gst_pad_query_convert (decoder->srcpad, format, start,
                  GST_FORMAT_TIME, &tstart)))
        goto convert_error;
      if (!(res =
              gst_pad_query_convert (decoder->srcpad, format, stop,
                  GST_FORMAT_TIME, &tstop)))
        goto convert_error;

      /* then seek with time on the peer */
      event = gst_event_new_seek (rate, GST_FORMAT_TIME,
          flags, start_type, tstart, stop_type, tstop);
      gst_event_set_seqnum (event, seqnum);

      res = gst_pad_push_event (decoder->sinkpad, event);
      break;
    }
    case GST_EVENT_QOS:
    {
      GstQOSType type;
      gdouble proportion;
      GstClockTimeDiff diff;
      GstClockTime timestamp;

      gst_event_parse_qos (event, &type, &proportion, &diff, &timestamp);

      GST_OBJECT_LOCK (decoder);
      priv->proportion = proportion;
      if (G_LIKELY (GST_CLOCK_TIME_IS_VALID (timestamp))) {
        if (G_UNLIKELY (diff > 0)) {
          priv->earliest_time = timestamp + 2 * diff + priv->qos_frame_duration;
        } else {
          priv->earliest_time = timestamp + diff;
        }
      } else {
        priv->earliest_time = GST_CLOCK_TIME_NONE;
      }
      GST_OBJECT_UNLOCK (decoder);

      GST_DEBUG_OBJECT (decoder,
          "got QoS %" GST_TIME_FORMAT ", %" GST_STIME_FORMAT ", %g",
          GST_TIME_ARGS (timestamp), GST_STIME_ARGS (diff), proportion);

      res = gst_pad_push_event (decoder->sinkpad, event);
      break;
    }
    default:
      res = gst_pad_push_event (decoder->sinkpad, event);
      break;
  }

  return res;

convert_error:
  GST_DEBUG_OBJECT (decoder, "could not convert format");

  return res;
}
static gboolean
gst_video_decoder_src_event (GstPad * pad, GstObject * parent, GstEvent * event)
{
  GstVideoDecoder *decoder;
  GstVideoDecoderClass *decoder_class;
  gboolean ret = FALSE;

  decoder = GST_VIDEO_DECODER (parent);
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);

  GST_DEBUG_OBJECT (decoder, "received event %d, %s", GST_EVENT_TYPE (event),
      GST_EVENT_TYPE_NAME (event));

  if (decoder_class->src_event)
    ret = decoder_class->src_event (decoder, event);

  return ret;
}
static gboolean
gst_video_decoder_src_query_default (GstVideoDecoder * dec, GstQuery * query)
{
  GstPad *pad = GST_VIDEO_DECODER_SRC_PAD (dec);
  gboolean res = TRUE;

  GST_LOG_OBJECT (dec, "handling query: %" GST_PTR_FORMAT, query);

  switch (GST_QUERY_TYPE (query)) {
    case GST_QUERY_POSITION:
    {
      GstFormat format;
      gint64 time, value;

      /* upstream gets a chance first */
      if ((res = gst_pad_peer_query (dec->sinkpad, query))) {
        GST_LOG_OBJECT (dec, "returning peer response");
        break;
      }

      /* Refuse BYTES format queries. If it made sense to
       * answer them, upstream would have already */
      gst_query_parse_position (query, &format, NULL);

      if (format == GST_FORMAT_BYTES) {
        GST_LOG_OBJECT (dec, "Ignoring BYTES position query");
        break;
      }

      /* we start from the last seen time */
      time = dec->priv->last_timestamp_out;
      /* correct for the segment values */
      time = gst_segment_to_stream_time (&dec->output_segment,
          GST_FORMAT_TIME, time);

      GST_LOG_OBJECT (dec,
          "query %p: our time: %" GST_TIME_FORMAT, query, GST_TIME_ARGS (time));

      /* and convert to the final format */
      if (!(res = gst_pad_query_convert (pad, GST_FORMAT_TIME, time,
                  format, &value)))
        break;

      gst_query_set_position (query, format, value);

      GST_LOG_OBJECT (dec,
          "query %p: we return %" G_GINT64_FORMAT " (format %u)", query, value,
          format);
      break;
    }
    case GST_QUERY_DURATION:
    {
      GstFormat format;

      /* upstream in any case */
      if ((res = gst_pad_query_default (pad, GST_OBJECT (dec), query)))
        break;

      gst_query_parse_duration (query, &format, NULL);
      /* try answering TIME by converting from BYTE if subclass allows */
      if (format == GST_FORMAT_TIME && gst_video_decoder_do_byte (dec)) {
        gint64 value;

        if (gst_pad_peer_query_duration (dec->sinkpad, GST_FORMAT_BYTES,
                &value)) {
          GST_LOG_OBJECT (dec, "upstream size %" G_GINT64_FORMAT, value);
          if (gst_pad_query_convert (dec->sinkpad,
                  GST_FORMAT_BYTES, value, GST_FORMAT_TIME, &value)) {
            gst_query_set_duration (query, GST_FORMAT_TIME, value);
            res = TRUE;
          }
        }
      }
      break;
    }
    case GST_QUERY_CONVERT:
    {
      GstFormat src_fmt, dest_fmt;
      gint64 src_val, dest_val;

      GST_DEBUG_OBJECT (dec, "convert query");

      gst_query_parse_convert (query, &src_fmt, &src_val, &dest_fmt, &dest_val);
      GST_OBJECT_LOCK (dec);
      if (dec->priv->output_state != NULL)
        res = __gst_video_rawvideo_convert (dec->priv->output_state,
            src_fmt, src_val, &dest_fmt, &dest_val);
      else
        res = FALSE;
      GST_OBJECT_UNLOCK (dec);
      if (!res)
        goto error;
      gst_query_set_convert (query, src_fmt, src_val, dest_fmt, dest_val);
      break;
    }
    case GST_QUERY_LATENCY:
    {
      gboolean live;
      GstClockTime min_latency, max_latency;

      res = gst_pad_peer_query (dec->sinkpad, query);
      if (res) {
        gst_query_parse_latency (query, &live, &min_latency, &max_latency);
        GST_DEBUG_OBJECT (dec, "Peer qlatency: live %d, min %"
            GST_TIME_FORMAT " max %" GST_TIME_FORMAT, live,
            GST_TIME_ARGS (min_latency), GST_TIME_ARGS (max_latency));

        GST_OBJECT_LOCK (dec);
        min_latency += dec->priv->min_latency;
        if (max_latency == GST_CLOCK_TIME_NONE
            || dec->priv->max_latency == GST_CLOCK_TIME_NONE)
          max_latency = GST_CLOCK_TIME_NONE;
        else
          max_latency += dec->priv->max_latency;
        GST_OBJECT_UNLOCK (dec);

        gst_query_set_latency (query, live, min_latency, max_latency);
      }
      break;
    }
    default:
      res = gst_pad_query_default (pad, GST_OBJECT (dec), query);
      break;
  }
  return res;

error:
  GST_ERROR_OBJECT (dec, "query failed");
  return res;
}
static gboolean
gst_video_decoder_src_query (GstPad * pad, GstObject * parent, GstQuery * query)
{
  GstVideoDecoder *decoder;
  GstVideoDecoderClass *decoder_class;
  gboolean ret = FALSE;

  decoder = GST_VIDEO_DECODER (parent);
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);

  GST_DEBUG_OBJECT (decoder, "received query %d, %s", GST_QUERY_TYPE (query),
      GST_QUERY_TYPE_NAME (query));

  if (decoder_class->src_query)
    ret = decoder_class->src_query (decoder, query);

  return ret;
}
/**
 * gst_video_decoder_proxy_getcaps:
 * @decoder: a #GstVideoDecoder
 * @caps: (nullable): initial caps
 * @filter: (nullable): filter caps
 *
 * Returns caps that express @caps (or sink template caps if @caps == NULL)
 * restricted to resolution/format/... combinations supported by downstream
 * elements.
 *
 * Returns: (transfer full): a #GstCaps owned by caller
 */
GstCaps *
gst_video_decoder_proxy_getcaps (GstVideoDecoder * decoder, GstCaps * caps,
    GstCaps * filter)
{
  return __gst_video_element_proxy_getcaps (GST_ELEMENT_CAST (decoder),
      GST_VIDEO_DECODER_SINK_PAD (decoder),
      GST_VIDEO_DECODER_SRC_PAD (decoder), caps, filter);
}
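
/* Usage sketch (illustrative, not part of the original file): a subclass
 * with no special caps handling can implement its ::getcaps vfunc by
 * chaining straight to this helper:
 *
 *   static GstCaps *
 *   my_dec_getcaps (GstVideoDecoder * decoder, GstCaps * filter)
 *   {
 *     return gst_video_decoder_proxy_getcaps (decoder, NULL, filter);
 *   }
 *
 * which matches the default behaviour used when ::getcaps is not set.
 */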
static GstCaps *
gst_video_decoder_sink_getcaps (GstVideoDecoder * decoder, GstCaps * filter)
{
  GstVideoDecoderClass *klass;
  GstCaps *caps;

  klass = GST_VIDEO_DECODER_GET_CLASS (decoder);

  if (klass->getcaps)
    caps = klass->getcaps (decoder, filter);
  else
    caps = gst_video_decoder_proxy_getcaps (decoder, NULL, filter);

  GST_LOG_OBJECT (decoder, "Returning caps %" GST_PTR_FORMAT, caps);

  return caps;
}
static gboolean
gst_video_decoder_sink_query_default (GstVideoDecoder * decoder,
    GstQuery * query)
{
  GstPad *pad = GST_VIDEO_DECODER_SINK_PAD (decoder);
  GstVideoDecoderPrivate *priv;
  gboolean res = FALSE;

  priv = decoder->priv;

  GST_LOG_OBJECT (decoder, "handling query: %" GST_PTR_FORMAT, query);

  switch (GST_QUERY_TYPE (query)) {
    case GST_QUERY_CONVERT:
    {
      GstFormat src_fmt, dest_fmt;
      gint64 src_val, dest_val;

      gst_query_parse_convert (query, &src_fmt, &src_val, &dest_fmt, &dest_val);
      GST_OBJECT_LOCK (decoder);
      res =
          __gst_video_encoded_video_convert (priv->bytes_out, priv->time,
          src_fmt, src_val, &dest_fmt, &dest_val);
      GST_OBJECT_UNLOCK (decoder);
      if (!res)
        goto error;
      gst_query_set_convert (query, src_fmt, src_val, dest_fmt, dest_val);
      break;
    }
    case GST_QUERY_ALLOCATION:{
      GstVideoDecoderClass *klass = GST_VIDEO_DECODER_GET_CLASS (decoder);

      if (klass->propose_allocation)
        res = klass->propose_allocation (decoder, query);
      break;
    }
    case GST_QUERY_CAPS:{
      GstCaps *filter, *caps;

      gst_query_parse_caps (query, &filter);
      caps = gst_video_decoder_sink_getcaps (decoder, filter);
      gst_query_set_caps_result (query, caps);
      gst_caps_unref (caps);
      res = TRUE;
      break;
    }
    case GST_QUERY_ACCEPT_CAPS:{
      if (decoder->priv->use_default_pad_acceptcaps) {
        res =
            gst_pad_query_default (GST_VIDEO_DECODER_SINK_PAD (decoder),
            GST_OBJECT_CAST (decoder), query);
      } else {
        GstCaps *caps;
        GstCaps *allowed_caps;
        GstCaps *template_caps;
        gboolean accept;

        gst_query_parse_accept_caps (query, &caps);

        template_caps = gst_pad_get_pad_template_caps (pad);
        accept = gst_caps_is_subset (caps, template_caps);
        gst_caps_unref (template_caps);

        if (accept) {
          allowed_caps =
              gst_pad_query_caps (GST_VIDEO_DECODER_SINK_PAD (decoder), caps);

          accept = gst_caps_can_intersect (caps, allowed_caps);

          gst_caps_unref (allowed_caps);
        }

        gst_query_set_accept_caps_result (query, accept);
        res = TRUE;
      }
      break;
    }
    default:
      res = gst_pad_query_default (pad, GST_OBJECT (decoder), query);
      break;
  }

  return res;

error:
  GST_DEBUG_OBJECT (decoder, "query failed");

  return res;
}
static gboolean
gst_video_decoder_sink_query (GstPad * pad, GstObject * parent,
    GstQuery * query)
{
  GstVideoDecoder *decoder;
  GstVideoDecoderClass *decoder_class;
  gboolean ret = FALSE;

  decoder = GST_VIDEO_DECODER (parent);
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);

  GST_DEBUG_OBJECT (decoder, "received query %d, %s", GST_QUERY_TYPE (query),
      GST_QUERY_TYPE_NAME (query));

  if (decoder_class->sink_query)
    ret = decoder_class->sink_query (decoder, query);

  return ret;
}
typedef struct _Timestamp Timestamp;
struct _Timestamp
{
  guint64 offset;
  GstClockTime pts;
  GstClockTime dts;
  GstClockTime duration;
  guint flags;
};

static void
timestamp_free (Timestamp * ts)
{
  g_slice_free (Timestamp, ts);
}
2207 gst_video_decoder_add_buffer_info (GstVideoDecoder * decoder,
2210 GstVideoDecoderPrivate *priv = decoder->priv;
2213 if (!GST_BUFFER_PTS_IS_VALID (buffer) &&
2214 !GST_BUFFER_DTS_IS_VALID (buffer) &&
2215 !GST_BUFFER_DURATION_IS_VALID (buffer) &&
2216 GST_BUFFER_FLAGS (buffer) == 0) {
2217 /* Save memory - don't bother storing info
2218 * for buffers with no distinguishing info */
2222 ts = g_slice_new (Timestamp);
2224 GST_LOG_OBJECT (decoder,
2225 "adding PTS %" GST_TIME_FORMAT " DTS %" GST_TIME_FORMAT
2226 " (offset:%" G_GUINT64_FORMAT ")",
2227 GST_TIME_ARGS (GST_BUFFER_PTS (buffer)),
2228 GST_TIME_ARGS (GST_BUFFER_DTS (buffer)), priv->input_offset);
2230 ts->offset = priv->input_offset;
2231 ts->pts = GST_BUFFER_PTS (buffer);
2232 ts->dts = GST_BUFFER_DTS (buffer);
2233 ts->duration = GST_BUFFER_DURATION (buffer);
2234 ts->flags = GST_BUFFER_FLAGS (buffer);
2236 g_queue_push_tail (&priv->timestamps, ts);
2238 if (g_queue_get_length (&priv->timestamps) > 40) {
    GST_WARNING_OBJECT (decoder,
        "decoder timestamp list getting long: %d timestamps, "
        "possible internal leaking?", g_queue_get_length (&priv->timestamps));
gst_video_decoder_get_buffer_info_at_offset (GstVideoDecoder *
    decoder, guint64 offset, GstClockTime * pts, GstClockTime * dts,
    GstClockTime * duration, guint * flags)
#ifndef GST_DISABLE_GST_DEBUG
  guint64 got_offset = 0;
  *pts = GST_CLOCK_TIME_NONE;
  *dts = GST_CLOCK_TIME_NONE;
  *duration = GST_CLOCK_TIME_NONE;
  g = decoder->priv->timestamps.head;
    if (ts->offset <= offset) {
      GList *next = g->next;
#ifndef GST_DISABLE_GST_DEBUG
      got_offset = ts->offset;
      *duration = ts->duration;
      g_queue_delete_link (&decoder->priv->timestamps, g);
      timestamp_free (ts);
  GST_LOG_OBJECT (decoder,
      "got PTS %" GST_TIME_FORMAT " DTS %" GST_TIME_FORMAT " flags %x @ offs %"
      G_GUINT64_FORMAT " (wanted offset:%" G_GUINT64_FORMAT ")",
      GST_TIME_ARGS (*pts), GST_TIME_ARGS (*dts), *flags, got_offset, offset);
gst_video_decoder_clear_queues (GstVideoDecoder * dec)
  GstVideoDecoderPrivate *priv = dec->priv;
  g_list_free_full (priv->output_queued,
      (GDestroyNotify) gst_mini_object_unref);
  priv->output_queued = NULL;
  g_list_free_full (priv->gather, (GDestroyNotify) gst_mini_object_unref);
  priv->gather = NULL;
  g_list_free_full (priv->decode, (GDestroyNotify) gst_video_codec_frame_unref);
  priv->decode = NULL;
  g_list_free_full (priv->parse, (GDestroyNotify) gst_mini_object_unref);
  g_list_free_full (priv->parse_gather,
      (GDestroyNotify) gst_video_codec_frame_unref);
  priv->parse_gather = NULL;
  g_queue_clear_full (&priv->frames,
      (GDestroyNotify) gst_video_codec_frame_unref);
gst_video_decoder_reset (GstVideoDecoder * decoder, gboolean full,
    gboolean flush_hard)
  GstVideoDecoderPrivate *priv = decoder->priv;
  GST_DEBUG_OBJECT (decoder, "reset full %d", full);
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  if (full || flush_hard) {
    gst_segment_init (&decoder->input_segment, GST_FORMAT_UNDEFINED);
    gst_segment_init (&decoder->output_segment, GST_FORMAT_UNDEFINED);
    gst_video_decoder_clear_queues (decoder);
    decoder->priv->in_out_segment_sync = TRUE;
    if (priv->current_frame) {
      gst_video_codec_frame_unref (priv->current_frame);
      priv->current_frame = NULL;
    g_list_free_full (priv->current_frame_events,
        (GDestroyNotify) gst_event_unref);
    priv->current_frame_events = NULL;
    g_list_free_full (priv->pending_events, (GDestroyNotify) gst_event_unref);
    priv->pending_events = NULL;
    priv->error_count = 0;
    priv->had_output_data = FALSE;
    priv->had_input_data = FALSE;
    GST_OBJECT_LOCK (decoder);
    priv->earliest_time = GST_CLOCK_TIME_NONE;
    priv->proportion = 0.5;
    priv->decode_flags_override = FALSE;
    priv->request_sync_point_flags = 0;
    priv->request_sync_point_frame_number = REQUEST_SYNC_POINT_UNSET;
    priv->last_force_key_unit_time = GST_CLOCK_TIME_NONE;
    GST_OBJECT_UNLOCK (decoder);
    priv->distance_from_sync = -1;
    if (priv->input_state)
      gst_video_codec_state_unref (priv->input_state);
    priv->input_state = NULL;
    GST_OBJECT_LOCK (decoder);
    if (priv->output_state)
      gst_video_codec_state_unref (priv->output_state);
    priv->output_state = NULL;
    priv->qos_frame_duration = 0;
    GST_OBJECT_UNLOCK (decoder);
      gst_tag_list_unref (priv->tags);
    priv->tags_merge_mode = GST_TAG_MERGE_APPEND;
    if (priv->upstream_tags) {
      gst_tag_list_unref (priv->upstream_tags);
      priv->upstream_tags = NULL;
    priv->tags_changed = FALSE;
    priv->reordered_output = FALSE;
    priv->processed = 0;
    priv->posted_latency_msg = FALSE;
    priv->decode_frame_number = 0;
    priv->base_picture_number = 0;
    GST_DEBUG_OBJECT (decoder, "deactivate pool %" GST_PTR_FORMAT,
      gst_buffer_pool_set_active (priv->pool, FALSE);
      gst_object_unref (priv->pool);
    if (priv->allocator) {
      gst_object_unref (priv->allocator);
      priv->allocator = NULL;
  priv->discont = TRUE;
  priv->base_timestamp = GST_CLOCK_TIME_NONE;
  priv->last_timestamp_out = GST_CLOCK_TIME_NONE;
  priv->pts_delta = GST_CLOCK_TIME_NONE;
  priv->input_offset = 0;
  priv->frame_offset = 0;
  gst_adapter_clear (priv->input_adapter);
  gst_adapter_clear (priv->output_adapter);
  g_queue_clear_full (&priv->timestamps, (GDestroyNotify) timestamp_free);
  GST_OBJECT_LOCK (decoder);
  priv->bytes_out = 0;
  GST_OBJECT_UNLOCK (decoder);
#ifndef GST_DISABLE_DEBUG
  priv->last_reset_time = gst_util_get_timestamp ();
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
static GstFlowReturn
gst_video_decoder_chain_forward (GstVideoDecoder * decoder,
    GstBuffer * buf, gboolean at_eos)
  GstVideoDecoderPrivate *priv;
  GstVideoDecoderClass *klass;
  GstFlowReturn ret = GST_FLOW_OK;
  klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
  priv = decoder->priv;
  g_return_val_if_fail (priv->packetized || klass->parse, GST_FLOW_ERROR);
  /* Draining on DISCONT is handled in chain_reverse() for reverse playback,
   * and this function would only be called to get everything collected GOP
   * by GOP in the parse_gather list */
  if (decoder->input_segment.rate > 0.0 && GST_BUFFER_IS_DISCONT (buf)
      && (decoder->input_segment.flags & GST_SEEK_FLAG_TRICKMODE_KEY_UNITS))
    ret = gst_video_decoder_drain_out (decoder, FALSE);
  if (priv->current_frame == NULL)
    priv->current_frame = gst_video_decoder_new_frame (decoder);
  if (!priv->packetized)
    gst_video_decoder_add_buffer_info (decoder, buf);
  priv->input_offset += gst_buffer_get_size (buf);
  if (priv->packetized) {
    GstVideoCodecFrame *frame;
    gboolean was_keyframe = FALSE;
    frame = priv->current_frame;
    frame->abidata.ABI.num_subframes++;
    if (gst_video_decoder_get_subframe_mode (decoder)) {
      /* End the frame if the marker flag is set */
      if (!GST_BUFFER_FLAG_IS_SET (buf, GST_VIDEO_BUFFER_FLAG_MARKER)
          && (decoder->input_segment.rate > 0.0))
        priv->current_frame = gst_video_codec_frame_ref (frame);
        priv->current_frame = NULL;
      priv->current_frame = frame;
    if (!GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT)) {
      was_keyframe = TRUE;
      GST_DEBUG_OBJECT (decoder, "Marking current_frame as sync point");
      GST_VIDEO_CODEC_FRAME_SET_SYNC_POINT (frame);
    if (frame->input_buffer) {
      gst_video_decoder_copy_metas (decoder, frame, frame->input_buffer, buf);
      gst_buffer_unref (frame->input_buffer);
    frame->input_buffer = buf;
    if (decoder->input_segment.rate < 0.0) {
      priv->parse_gather = g_list_prepend (priv->parse_gather, frame);
      priv->current_frame = NULL;
      ret = gst_video_decoder_decode_frame (decoder, frame);
      if (!gst_video_decoder_get_subframe_mode (decoder))
        priv->current_frame = NULL;
      /* If in trick mode and it was a keyframe, drain decoder to avoid extra
       * latency. Only do this for forwards playback as reverse playback handles
       * draining on keyframes in flush_parse(), and would otherwise call back
       * from drain_out() to here causing an infinite loop.
       * Also this function is only called for reverse playback to gather frames
       * GOP by GOP, and does not do any actual decoding. That would be done by
       * flush_decode(). */
      if (ret == GST_FLOW_OK && was_keyframe && decoder->input_segment.rate > 0.0
          && (decoder->input_segment.flags & GST_SEEK_FLAG_TRICKMODE_KEY_UNITS))
        ret = gst_video_decoder_drain_out (decoder, FALSE);
    gst_adapter_push (priv->input_adapter, buf);
    ret = gst_video_decoder_parse_available (decoder, at_eos, TRUE);
  if (ret == GST_VIDEO_DECODER_FLOW_NEED_DATA)
static GstFlowReturn
gst_video_decoder_flush_decode (GstVideoDecoder * dec)
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn res = GST_FLOW_OK;
  GstVideoCodecFrame *current_frame = NULL;
  gboolean last_subframe;
  GST_DEBUG_OBJECT (dec, "flushing buffers to decode");
  walk = priv->decode;
    GstVideoCodecFrame *frame = (GstVideoCodecFrame *) (walk->data);
    last_subframe = TRUE;
    /* In subframe mode, we need to get rid of intermediary frames
     * created during the buffer gather stage. That's why we keep the current
     * frame as the main frame and drop all subsequent frames until the end
     * of the subframes batch. */
    if (gst_video_decoder_get_subframe_mode (dec)) {
      if (current_frame == NULL) {
        current_frame = gst_video_codec_frame_ref (frame);
        if (current_frame->input_buffer) {
          gst_video_decoder_copy_metas (dec, current_frame,
              current_frame->input_buffer, current_frame->output_buffer);
          gst_buffer_unref (current_frame->input_buffer);
        current_frame->input_buffer = gst_buffer_ref (frame->input_buffer);
        gst_video_codec_frame_unref (frame);
      last_subframe = GST_BUFFER_FLAG_IS_SET (current_frame->input_buffer,
          GST_VIDEO_BUFFER_FLAG_MARKER);
      current_frame = frame;
    GST_DEBUG_OBJECT (dec, "decoding frame %p buffer %p, PTS %" GST_TIME_FORMAT
        ", DTS %" GST_TIME_FORMAT, frame, frame->input_buffer,
        GST_TIME_ARGS (GST_BUFFER_PTS (frame->input_buffer)),
        GST_TIME_ARGS (GST_BUFFER_DTS (frame->input_buffer)));
    priv->decode = g_list_delete_link (priv->decode, walk);
    /* decode buffer, resulting data prepended to queue */
    res = gst_video_decoder_decode_frame (dec, current_frame);
    if (res != GST_FLOW_OK)
    if (!gst_video_decoder_get_subframe_mode (dec)
      current_frame = NULL;
/* gst_video_decoder_flush_parse() is called from
 * chain_reverse() when a buffer carrying a DISCONT is
 * received - indicating that reverse playback has
 * looped back to the next data block, and that therefore
 * all available data should be fed through the
 * decoder and frames gathered for reversed output
 */
static GstFlowReturn
gst_video_decoder_flush_parse (GstVideoDecoder * dec, gboolean at_eos)
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn res = GST_FLOW_OK;
  GstVideoDecoderClass *decoder_class;
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (dec);
  GST_DEBUG_OBJECT (dec, "flushing buffers to parsing");
  /* Reverse the gather list, and prepend it to the parse list,
   * then flush to parse whatever we can */
  priv->gather = g_list_reverse (priv->gather);
  priv->parse = g_list_concat (priv->gather, priv->parse);
  priv->gather = NULL;
  /* clear buffer and decoder state */
  gst_video_decoder_flush (dec, FALSE);
    GstBuffer *buf = GST_BUFFER_CAST (walk->data);
    GList *next = walk->next;
    GST_DEBUG_OBJECT (dec, "parsing buffer %p, PTS %" GST_TIME_FORMAT
        ", DTS %" GST_TIME_FORMAT " flags %x", buf,
        GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
        GST_TIME_ARGS (GST_BUFFER_DTS (buf)), GST_BUFFER_FLAGS (buf));
    /* parse buffer, resulting frames prepended to parse_gather queue */
    gst_buffer_ref (buf);
    res = gst_video_decoder_chain_forward (dec, buf, at_eos);
    /* if we generated output, we can discard the buffer, else we
     * keep it in the queue */
    if (priv->parse_gather) {
      GST_DEBUG_OBJECT (dec, "parsed buffer to %p", priv->parse_gather->data);
      priv->parse = g_list_delete_link (priv->parse, walk);
      gst_buffer_unref (buf);
      GST_DEBUG_OBJECT (dec, "buffer did not decode, keeping");
  walk = priv->parse_gather;
    GstVideoCodecFrame *frame = (GstVideoCodecFrame *) (walk->data);
    /* this is reverse playback, check if we need to apply some segment
     * to the output before decoding, as during decoding the segment.rate
     * must be used to determine if a buffer should be pushed or added to
     * the output list for reverse pushing.
     *
     * The new segment is not immediately pushed here because we must
     * wait for negotiation to happen before it can be pushed to avoid
     * pushing a segment before caps event. Negotiation only happens
     * when finish_frame is called. */
    for (walk2 = frame->events; walk2;) {
      GstEvent *event = walk2->data;
      walk2 = g_list_next (walk2);
      if (GST_EVENT_TYPE (event) <= GST_EVENT_SEGMENT) {
        if (GST_EVENT_TYPE (event) == GST_EVENT_SEGMENT) {
          GST_DEBUG_OBJECT (dec, "Segment at frame %p %" GST_TIME_FORMAT,
              frame, GST_TIME_ARGS (GST_BUFFER_PTS (frame->input_buffer)));
          gst_event_copy_segment (event, &segment);
          if (segment.format == GST_FORMAT_TIME) {
            dec->output_segment = segment;
            dec->priv->in_out_segment_sync =
                gst_segment_is_equal (&dec->input_segment, &segment);
          dec->priv->pending_events =
              g_list_append (dec->priv->pending_events, event);
          frame->events = g_list_delete_link (frame->events, cur);
  /* now we can process frames. Start by moving each frame from the parse_gather
   * to the decode list, reversing the order as we go, and stopping when/if we
   * copy a keyframe. */
  GST_DEBUG_OBJECT (dec, "checking parsed frames for a keyframe to decode");
  walk = priv->parse_gather;
    GstVideoCodecFrame *frame = (GstVideoCodecFrame *) (walk->data);
    /* remove from the gather list */
    priv->parse_gather = g_list_remove_link (priv->parse_gather, walk);
    /* move it to the front of the decode queue */
    priv->decode = g_list_concat (walk, priv->decode);
    /* if we copied a keyframe, flush and decode the decode queue */
    if (GST_VIDEO_CODEC_FRAME_IS_SYNC_POINT (frame)) {
      GST_DEBUG_OBJECT (dec, "found keyframe %p with PTS %" GST_TIME_FORMAT
          ", DTS %" GST_TIME_FORMAT, frame,
          GST_TIME_ARGS (GST_BUFFER_PTS (frame->input_buffer)),
          GST_TIME_ARGS (GST_BUFFER_DTS (frame->input_buffer)));
      res = gst_video_decoder_flush_decode (dec);
      if (res != GST_FLOW_OK)
  /* We need to tell the subclass to drain now.
   * We prefer the drain vfunc, but for backward-compat
   * we use a finish() vfunc if drain isn't implemented */
  if (decoder_class->drain) {
    GST_DEBUG_OBJECT (dec, "Draining");
    res = decoder_class->drain (dec);
  } else if (decoder_class->finish) {
    GST_FIXME_OBJECT (dec, "Sub-class should implement drain(). "
        "Calling finish() for backwards-compat");
    res = decoder_class->finish (dec);
  if (res != GST_FLOW_OK)
  /* now send queued data downstream */
  walk = priv->output_queued;
    GstBuffer *buf = GST_BUFFER_CAST (walk->data);
    priv->output_queued =
        g_list_delete_link (priv->output_queued, priv->output_queued);
    if (G_LIKELY (res == GST_FLOW_OK)) {
      /* avoid a stray DISCONT from forward processing,
       * which has no meaning in reverse pushing */
      GST_BUFFER_FLAG_UNSET (buf, GST_BUFFER_FLAG_DISCONT);
      /* Last chance to calculate a timestamp as we loop backwards
       * through the list */
      if (GST_BUFFER_TIMESTAMP (buf) != GST_CLOCK_TIME_NONE)
        priv->last_timestamp_out = GST_BUFFER_TIMESTAMP (buf);
      else if (priv->last_timestamp_out != GST_CLOCK_TIME_NONE &&
          GST_BUFFER_DURATION (buf) != GST_CLOCK_TIME_NONE) {
        GST_BUFFER_TIMESTAMP (buf) =
            priv->last_timestamp_out - GST_BUFFER_DURATION (buf);
        priv->last_timestamp_out = GST_BUFFER_TIMESTAMP (buf);
        GST_LOG_OBJECT (dec,
            "Calculated TS %" GST_TIME_FORMAT " working backwards",
            GST_TIME_ARGS (priv->last_timestamp_out));
      res = gst_video_decoder_clip_and_push_buf (dec, buf);
      gst_buffer_unref (buf);
    walk = priv->output_queued;
  /* clear buffer and decoder state again
   * before moving to the previous keyframe */
  gst_video_decoder_flush (dec, FALSE);
    walk = priv->parse_gather;
static GstFlowReturn
gst_video_decoder_chain_reverse (GstVideoDecoder * dec, GstBuffer * buf)
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn result = GST_FLOW_OK;
  /* if we have a discont, move buffers to the decode list */
  if (!buf || GST_BUFFER_IS_DISCONT (buf)) {
    GST_DEBUG_OBJECT (dec, "received discont");
    /* parse and decode stuff in the gather and parse queues */
    result = gst_video_decoder_flush_parse (dec, FALSE);
  if (G_LIKELY (buf)) {
    GST_DEBUG_OBJECT (dec, "gathering buffer %p of size %" G_GSIZE_FORMAT ", "
        "PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT ", dur %"
        GST_TIME_FORMAT, buf, gst_buffer_get_size (buf),
        GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
        GST_TIME_ARGS (GST_BUFFER_DTS (buf)),
        GST_TIME_ARGS (GST_BUFFER_DURATION (buf)));
    /* add buffer to gather queue */
    priv->gather = g_list_prepend (priv->gather, buf);
static GstFlowReturn
gst_video_decoder_chain (GstPad * pad, GstObject * parent, GstBuffer * buf)
  GstVideoDecoder *decoder;
  GstFlowReturn ret = GST_FLOW_OK;
  decoder = GST_VIDEO_DECODER (parent);
  if (G_UNLIKELY (!decoder->priv->input_state && decoder->priv->needs_format))
    goto not_negotiated;
  GST_LOG_OBJECT (decoder,
      "chain PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT " duration %"
      GST_TIME_FORMAT " size %" G_GSIZE_FORMAT " flags %x",
      GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
      GST_TIME_ARGS (GST_BUFFER_DTS (buf)),
      GST_TIME_ARGS (GST_BUFFER_DURATION (buf)),
      gst_buffer_get_size (buf), GST_BUFFER_FLAGS (buf));
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  /* requiring the pad to be negotiated makes it impossible to use
   * oggdemux or filesrc ! decoder */
  if (decoder->input_segment.format == GST_FORMAT_UNDEFINED) {
    GstSegment *segment = &decoder->input_segment;
    GST_WARNING_OBJECT (decoder,
        "Received buffer without a new-segment. "
        "Assuming timestamps start from 0.");
    gst_segment_init (segment, GST_FORMAT_TIME);
    event = gst_event_new_segment (segment);
    decoder->priv->current_frame_events =
        g_list_prepend (decoder->priv->current_frame_events, event);
  decoder->priv->had_input_data = TRUE;
  if (decoder->input_segment.rate > 0.0)
    ret = gst_video_decoder_chain_forward (decoder, buf, FALSE);
  else
    ret = gst_video_decoder_chain_reverse (decoder, buf);
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
  GST_ELEMENT_ERROR (decoder, CORE, NEGOTIATION, (NULL),
      ("decoder not initialized"));
  gst_buffer_unref (buf);
  return GST_FLOW_NOT_NEGOTIATED;
static GstStateChangeReturn
gst_video_decoder_change_state (GstElement * element, GstStateChange transition)
  GstVideoDecoder *decoder;
  GstVideoDecoderClass *decoder_class;
  GstStateChangeReturn ret;
  decoder = GST_VIDEO_DECODER (element);
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (element);
  switch (transition) {
    case GST_STATE_CHANGE_NULL_TO_READY:
      /* open device/library if needed */
      if (decoder_class->open && !decoder_class->open (decoder))
    case GST_STATE_CHANGE_READY_TO_PAUSED:
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      gst_video_decoder_reset (decoder, TRUE, TRUE);
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      /* Initialize device/library if needed */
      if (decoder_class->start && !decoder_class->start (decoder))
  ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition);
  switch (transition) {
    case GST_STATE_CHANGE_PAUSED_TO_READY:{
      gboolean stopped = TRUE;
      if (decoder_class->stop)
        stopped = decoder_class->stop (decoder);
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      gst_video_decoder_reset (decoder, TRUE, TRUE);
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
    case GST_STATE_CHANGE_READY_TO_NULL:
      /* close device/library if needed */
      if (decoder_class->close && !decoder_class->close (decoder))
  GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
      ("Failed to open decoder"));
  return GST_STATE_CHANGE_FAILURE;
  GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
      ("Failed to start decoder"));
  return GST_STATE_CHANGE_FAILURE;
  GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
      ("Failed to stop decoder"));
  return GST_STATE_CHANGE_FAILURE;
  GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
      ("Failed to close decoder"));
  return GST_STATE_CHANGE_FAILURE;
static GstVideoCodecFrame *
gst_video_decoder_new_frame (GstVideoDecoder * decoder)
  GstVideoDecoderPrivate *priv = decoder->priv;
  GstVideoCodecFrame *frame;
  frame = g_slice_new0 (GstVideoCodecFrame);
  frame->ref_count = 1;
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  frame->system_frame_number = priv->system_frame_number;
  priv->system_frame_number++;
  frame->decode_frame_number = priv->decode_frame_number;
  priv->decode_frame_number++;
  frame->dts = GST_CLOCK_TIME_NONE;
  frame->pts = GST_CLOCK_TIME_NONE;
  frame->duration = GST_CLOCK_TIME_NONE;
  frame->events = priv->current_frame_events;
  priv->current_frame_events = NULL;
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
  GST_LOG_OBJECT (decoder, "Created new frame %p (sfn:%d)",
      frame, frame->system_frame_number);
gst_video_decoder_push_event_list (GstVideoDecoder * decoder, GList * events)
  /* events are stored in reverse order */
  for (l = g_list_last (events); l; l = g_list_previous (l)) {
    GST_LOG_OBJECT (decoder, "pushing %s event", GST_EVENT_TYPE_NAME (l->data));
    gst_video_decoder_push_event (decoder, l->data);
  g_list_free (events);
gst_video_decoder_prepare_finish_frame (GstVideoDecoder *
    decoder, GstVideoCodecFrame * frame, gboolean dropping)
  GstVideoDecoderPrivate *priv = decoder->priv;
  GList *l, *events = NULL;
#ifndef GST_DISABLE_GST_DEBUG
  GST_LOG_OBJECT (decoder, "n %d in %" G_GSIZE_FORMAT " out %" G_GSIZE_FORMAT,
      priv->frames.length,
      gst_adapter_available (priv->input_adapter),
      gst_adapter_available (priv->output_adapter));
  sync = GST_VIDEO_CODEC_FRAME_IS_SYNC_POINT (frame);
  GST_LOG_OBJECT (decoder,
      "finish frame %p (#%d)(sub=#%d) sync:%d PTS:%" GST_TIME_FORMAT " DTS:%"
      GST_TIME_FORMAT,
      frame, frame->system_frame_number, frame->abidata.ABI.num_subframes,
      sync, GST_TIME_ARGS (frame->pts), GST_TIME_ARGS (frame->dts));
  /* Push all pending events that arrived before this frame */
  for (l = priv->frames.head; l; l = l->next) {
    GstVideoCodecFrame *tmp = l->data;
      events = g_list_concat (tmp->events, events);
  if (dropping || !decoder->priv->output_state) {
    /* Push before the next frame that is not dropped */
    decoder->priv->pending_events =
        g_list_concat (events, decoder->priv->pending_events);
    gst_video_decoder_push_event_list (decoder, decoder->priv->pending_events);
    decoder->priv->pending_events = NULL;
    gst_video_decoder_push_event_list (decoder, events);
  /* Check if the data should not be displayed. For example altref/invisible
   * frame in vp8. In this case we should not update the timestamps. */
  if (GST_VIDEO_CODEC_FRAME_IS_DECODE_ONLY (frame))
  /* If the frame is meant to be output but we don't have an output_buffer
   * we have a problem :) */
  if (G_UNLIKELY ((frame->output_buffer == NULL) && !dropping))
    goto no_output_buffer;
  if (GST_CLOCK_TIME_IS_VALID (frame->pts)) {
    if (frame->pts != priv->base_timestamp) {
      GST_DEBUG_OBJECT (decoder,
          "sync timestamp %" GST_TIME_FORMAT " diff %" GST_STIME_FORMAT,
          GST_TIME_ARGS (frame->pts),
          GST_STIME_ARGS (GST_CLOCK_DIFF (frame->pts,
                  decoder->output_segment.start)));
      priv->base_timestamp = frame->pts;
      priv->base_picture_number = frame->decode_frame_number;
  if (frame->duration == GST_CLOCK_TIME_NONE) {
    frame->duration = gst_video_decoder_get_frame_duration (decoder, frame);
    GST_LOG_OBJECT (decoder,
        "Guessing duration %" GST_TIME_FORMAT " for frame...",
        GST_TIME_ARGS (frame->duration));
  /* PTS is expected to be monotone ascending,
   * so a good guess is the lowest unsent DTS */
    GstClockTime min_ts = GST_CLOCK_TIME_NONE;
    GstVideoCodecFrame *oframe = NULL;
    gboolean seen_none = FALSE;
    /* some maintenance regardless */
    for (l = priv->frames.head; l; l = l->next) {
      GstVideoCodecFrame *tmp = l->data;
      if (!GST_CLOCK_TIME_IS_VALID (tmp->abidata.ABI.ts)) {
      if (!GST_CLOCK_TIME_IS_VALID (min_ts) || tmp->abidata.ABI.ts < min_ts) {
        min_ts = tmp->abidata.ABI.ts;
    /* save a ts if needed */
    if (oframe && oframe != frame) {
      oframe->abidata.ABI.ts = frame->abidata.ABI.ts;
    /* and set if needed;
     * valid delta means we have reasonable DTS input */
    /* also, if we ended up reordered, means this approach is conflicting
     * with some sparse existing PTS, and so it does not work out */
    if (!priv->reordered_output &&
        !GST_CLOCK_TIME_IS_VALID (frame->pts) && !seen_none &&
        GST_CLOCK_TIME_IS_VALID (priv->pts_delta)) {
      frame->pts = min_ts + priv->pts_delta;
      GST_DEBUG_OBJECT (decoder,
          "no valid PTS, using oldest DTS %" GST_TIME_FORMAT,
          GST_TIME_ARGS (frame->pts));
    /* some more maintenance, ts2 holds PTS */
    min_ts = GST_CLOCK_TIME_NONE;
    for (l = priv->frames.head; l; l = l->next) {
      GstVideoCodecFrame *tmp = l->data;
      if (!GST_CLOCK_TIME_IS_VALID (tmp->abidata.ABI.ts2)) {
      if (!GST_CLOCK_TIME_IS_VALID (min_ts) || tmp->abidata.ABI.ts2 < min_ts) {
        min_ts = tmp->abidata.ABI.ts2;
    /* save a ts if needed */
    if (oframe && oframe != frame) {
      oframe->abidata.ABI.ts2 = frame->abidata.ABI.ts2;
    /* if we detected reordered output, then PTS are void,
     * however those were obtained; bogus input, subclass etc */
    if (priv->reordered_output && !seen_none) {
      GST_DEBUG_OBJECT (decoder, "invalidating PTS");
      frame->pts = GST_CLOCK_TIME_NONE;
    if (!GST_CLOCK_TIME_IS_VALID (frame->pts) && !seen_none) {
      frame->pts = min_ts;
      GST_DEBUG_OBJECT (decoder,
          "no valid PTS, using oldest PTS %" GST_TIME_FORMAT,
          GST_TIME_ARGS (frame->pts));
  if (frame->pts == GST_CLOCK_TIME_NONE) {
    /* Last ditch timestamp guess: Just add the duration to the previous
     * frame. If it's the first frame, just use the segment start. */
    if (frame->duration != GST_CLOCK_TIME_NONE) {
      if (GST_CLOCK_TIME_IS_VALID (priv->last_timestamp_out))
        frame->pts = priv->last_timestamp_out + frame->duration;
      else if (frame->dts != GST_CLOCK_TIME_NONE) {
        frame->pts = frame->dts;
        GST_LOG_OBJECT (decoder,
            "Setting DTS as PTS %" GST_TIME_FORMAT " for frame...",
            GST_TIME_ARGS (frame->pts));
      } else if (decoder->output_segment.rate > 0.0)
        frame->pts = decoder->output_segment.start;
      GST_INFO_OBJECT (decoder,
          "Guessing PTS=%" GST_TIME_FORMAT " for frame... DTS=%"
          GST_TIME_FORMAT, GST_TIME_ARGS (frame->pts),
          GST_TIME_ARGS (frame->dts));
    } else if (sync && frame->dts != GST_CLOCK_TIME_NONE) {
      frame->pts = frame->dts;
      GST_LOG_OBJECT (decoder,
          "Setting DTS as PTS %" GST_TIME_FORMAT " for frame...",
          GST_TIME_ARGS (frame->pts));
  if (GST_CLOCK_TIME_IS_VALID (priv->last_timestamp_out)) {
    if (frame->pts < priv->last_timestamp_out) {
      GST_WARNING_OBJECT (decoder,
          "decreasing timestamp (%" GST_TIME_FORMAT " < %"
          GST_TIME_FORMAT ")",
          GST_TIME_ARGS (frame->pts), GST_TIME_ARGS (priv->last_timestamp_out));
      priv->reordered_output = TRUE;
      /* make it a bit less weird downstream */
      frame->pts = priv->last_timestamp_out;
  if (GST_CLOCK_TIME_IS_VALID (frame->pts))
    priv->last_timestamp_out = frame->pts;
  GST_ERROR_OBJECT (decoder, "No buffer to output!");
3175 * gst_video_decoder_release_frame:
3176 * @dec: a #GstVideoDecoder
3177 * @frame: (transfer full): the #GstVideoCodecFrame to release
3179 * Similar to gst_video_decoder_drop_frame(), but simply releases @frame
3180 * without any processing other than removing it from list of pending frames,
3181 * after which it is considered finished and released.
3186 gst_video_decoder_release_frame (GstVideoDecoder * dec,
3187 GstVideoCodecFrame * frame)
3191 /* unref once from the list */
3192 GST_VIDEO_DECODER_STREAM_LOCK (dec);
3193 link = g_queue_find (&dec->priv->frames, frame);
3195 gst_video_codec_frame_unref (frame);
3196 g_queue_delete_link (&dec->priv->frames, link);
3198 if (frame->events) {
3199 dec->priv->pending_events =
3200 g_list_concat (frame->events, dec->priv->pending_events);
3201 frame->events = NULL;
3203 GST_VIDEO_DECODER_STREAM_UNLOCK (dec);
3205 /* unref because this function takes ownership */
3206 gst_video_codec_frame_unref (frame);
3209 /* called with STREAM_LOCK */
3211 gst_video_decoder_post_qos_drop (GstVideoDecoder * dec, GstClockTime timestamp)
3213 GstClockTime stream_time, jitter, earliest_time, qostime;
3214 GstSegment *segment;
3215 GstMessage *qos_msg;
3217 dec->priv->dropped++;
3219 /* post QoS message */
3220 GST_OBJECT_LOCK (dec);
3221 proportion = dec->priv->proportion;
3222 earliest_time = dec->priv->earliest_time;
3223 GST_OBJECT_UNLOCK (dec);
3225 segment = &dec->output_segment;
3226 if (G_UNLIKELY (segment->format == GST_FORMAT_UNDEFINED))
3227 segment = &dec->input_segment;
3229 gst_segment_to_stream_time (segment, GST_FORMAT_TIME, timestamp);
3230 qostime = gst_segment_to_running_time (segment, GST_FORMAT_TIME, timestamp);
3231 jitter = GST_CLOCK_DIFF (qostime, earliest_time);
3233 gst_message_new_qos (GST_OBJECT_CAST (dec), FALSE, qostime, stream_time,
3234 timestamp, GST_CLOCK_TIME_NONE);
3235 gst_message_set_qos_values (qos_msg, jitter, proportion, 1000000);
3236 gst_message_set_qos_stats (qos_msg, GST_FORMAT_BUFFERS,
3237 dec->priv->processed, dec->priv->dropped);
3238 gst_element_post_message (GST_ELEMENT_CAST (dec), qos_msg);
3242 * gst_video_decoder_drop_frame:
3243 * @dec: a #GstVideoDecoder
3244 * @frame: (transfer full): the #GstVideoCodecFrame to drop
3246 * Similar to gst_video_decoder_finish_frame(), but drops @frame in any
3247 * case and posts a QoS message with the frame's details on the bus.
3248 * In any case, the frame is considered finished and released.
3250 * Returns: a #GstFlowReturn, usually GST_FLOW_OK.
3253 gst_video_decoder_drop_frame (GstVideoDecoder * dec, GstVideoCodecFrame * frame)
3255 GST_LOG_OBJECT (dec, "drop frame %p", frame);
3257 if (gst_video_decoder_get_subframe_mode (dec))
3258 GST_DEBUG_OBJECT (dec, "Drop subframe %d. Must be the last one.",
3259 frame->abidata.ABI.num_subframes);
3261 GST_VIDEO_DECODER_STREAM_LOCK (dec);
3263 gst_video_decoder_prepare_finish_frame (dec, frame, TRUE);
3265 GST_DEBUG_OBJECT (dec, "dropping frame %" GST_TIME_FORMAT,
3266 GST_TIME_ARGS (frame->pts));
3268 gst_video_decoder_post_qos_drop (dec, frame->pts);
3270 /* now free the frame */
3271 gst_video_decoder_release_frame (dec, frame);
3273 GST_VIDEO_DECODER_STREAM_UNLOCK (dec);
3279 * gst_video_decoder_drop_subframe:
3280 * @dec: a #GstVideoDecoder
3281 * @frame: (transfer full): the #GstVideoCodecFrame
3284 * The frame is not considered finished until the whole frame
3285 * is finished or dropped by the subclass.
3287 * Returns: a #GstFlowReturn, usually GST_FLOW_OK.
GstFlowReturn
gst_video_decoder_drop_subframe (GstVideoDecoder * dec,
    GstVideoCodecFrame * frame)
{
  g_return_val_if_fail (gst_video_decoder_get_subframe_mode (dec),
      GST_FLOW_NOT_SUPPORTED);

  GST_LOG_OBJECT (dec, "drop subframe %p num=%d", frame->input_buffer,
      gst_video_decoder_get_input_subframe_index (dec, frame));

  GST_VIDEO_DECODER_STREAM_LOCK (dec);

  gst_video_codec_frame_unref (frame);

  GST_VIDEO_DECODER_STREAM_UNLOCK (dec);

  return GST_FLOW_OK;
}
static gboolean
gst_video_decoder_transform_meta_default (GstVideoDecoder *
    decoder, GstVideoCodecFrame * frame, GstMeta * meta)
{
  const GstMetaInfo *info = meta->info;
  const gchar *const *tags;
  const gchar *const supported_tags[] = {
    GST_META_TAG_VIDEO_STR,
    GST_META_TAG_VIDEO_ORIENTATION_STR,
    GST_META_TAG_VIDEO_SIZE_STR,
    NULL,
  };

  tags = gst_meta_api_type_get_tags (info->api);

  if (!tags)
    return TRUE;

  while (*tags) {
    if (!g_strv_contains (supported_tags, *tags))
      return FALSE;

    tags++;
  }

  return TRUE;
}
typedef struct
{
  GstVideoDecoder *decoder;
  GstVideoCodecFrame *frame;
  GstBuffer *buffer;
} CopyMetaData;
static gboolean
foreach_metadata (GstBuffer * inbuf, GstMeta ** meta, gpointer user_data)
{
  CopyMetaData *data = user_data;
  GstVideoDecoder *decoder = data->decoder;
  GstVideoDecoderClass *klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
  GstVideoCodecFrame *frame = data->frame;
  GstBuffer *buffer = data->buffer;
  const GstMetaInfo *info = (*meta)->info;
  gboolean do_copy = FALSE;

  if (gst_meta_api_type_has_tag (info->api, _gst_meta_tag_memory)
      || gst_meta_api_type_has_tag (info->api, _gst_meta_tag_memory_reference)) {
    /* never call the transform_meta with memory specific metadata */
    GST_DEBUG_OBJECT (decoder, "not copying memory specific metadata %s",
        g_type_name (info->api));
  } else if (klass->transform_meta) {
    do_copy = klass->transform_meta (decoder, frame, *meta);
    GST_DEBUG_OBJECT (decoder, "transformed metadata %s: copy: %d",
        g_type_name (info->api), do_copy);
  }

  /* we only copy metadata when the subclass implemented a transform_meta
   * function and when it returns %TRUE */
  if (do_copy && info->transform_func) {
    GstMetaTransformCopy copy_data = { FALSE, 0, -1 };
    GST_DEBUG_OBJECT (decoder, "copy metadata %s", g_type_name (info->api));
    /* simply copy then */
    info->transform_func (buffer, *meta, inbuf, _gst_meta_transform_copy,
        &copy_data);
  }
  return TRUE;
}
static void
gst_video_decoder_copy_metas (GstVideoDecoder * decoder,
    GstVideoCodecFrame * frame, GstBuffer * src_buffer, GstBuffer * dest_buffer)
{
  GstVideoDecoderClass *decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);

  if (decoder_class->transform_meta) {
    if (G_LIKELY (frame)) {
      CopyMetaData data;

      data.decoder = decoder;
      data.frame = frame;
      data.buffer = dest_buffer;
      gst_buffer_foreach_meta (src_buffer, foreach_metadata, &data);
    } else {
      GST_WARNING_OBJECT (decoder,
          "Can't copy metadata because input frame disappeared");
    }
  }
}
/**
 * gst_video_decoder_finish_frame:
 * @decoder: a #GstVideoDecoder
 * @frame: (transfer full): a decoded #GstVideoCodecFrame
 *
 * @frame should have a valid decoded data buffer, whose metadata fields
 * are then appropriately set according to frame data and pushed downstream.
 * If no output data is provided, @frame is considered skipped.
 * In any case, the frame is considered finished and released.
 *
 * After calling this function the output buffer of the frame is to be
 * considered read-only. This function will also change the metadata
 * of the buffer.
 *
 * Returns: a #GstFlowReturn resulting from sending data downstream
 */
GstFlowReturn
gst_video_decoder_finish_frame (GstVideoDecoder * decoder,
    GstVideoCodecFrame * frame)
{
  GstFlowReturn ret = GST_FLOW_OK;
  GstVideoDecoderPrivate *priv = decoder->priv;
  GstBuffer *output_buffer;
  gboolean needs_reconfigure = FALSE;

  GST_LOG_OBJECT (decoder, "finish frame %p", frame);

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  needs_reconfigure = gst_pad_check_reconfigure (decoder->srcpad);
  if (G_UNLIKELY (priv->output_state_changed || (priv->output_state
              && needs_reconfigure))) {
    if (!gst_video_decoder_negotiate_unlocked (decoder)) {
      gst_pad_mark_reconfigure (decoder->srcpad);
      if (GST_PAD_IS_FLUSHING (decoder->srcpad))
        ret = GST_FLOW_FLUSHING;
      else
        ret = GST_FLOW_NOT_NEGOTIATED;
      goto done;
    }
  }

  gst_video_decoder_prepare_finish_frame (decoder, frame, FALSE);
  priv->processed++;

  if (priv->tags_changed) {
    GstEvent *tags_event;

    tags_event = gst_video_decoder_create_merged_tags_event (decoder);

    if (tags_event != NULL)
      gst_video_decoder_push_event (decoder, tags_event);

    priv->tags_changed = FALSE;
  }

  /* no buffer data means this frame is skipped */
  if (!frame->output_buffer || GST_VIDEO_CODEC_FRAME_IS_DECODE_ONLY (frame)) {
    GST_DEBUG_OBJECT (decoder,
        "skipping frame %" GST_TIME_FORMAT " because no output was produced",
        GST_TIME_ARGS (frame->pts));
    goto done;
  }

  /* Mark output as corrupted if the subclass requested so and we're either
   * still before the sync point after the request, or we don't even know the
   * frame number of the sync point yet (it is 0) */
  GST_OBJECT_LOCK (decoder);
  if (frame->system_frame_number <= priv->request_sync_point_frame_number
      && priv->request_sync_point_frame_number != REQUEST_SYNC_POINT_UNSET) {
    if (priv->request_sync_point_flags &
        GST_VIDEO_DECODER_REQUEST_SYNC_POINT_CORRUPT_OUTPUT) {
      GST_DEBUG_OBJECT (decoder,
          "marking frame %" GST_TIME_FORMAT
          " as corrupted because it is still before the sync point",
          GST_TIME_ARGS (frame->pts));
      GST_VIDEO_CODEC_FRAME_FLAG_SET (frame,
          GST_VIDEO_CODEC_FRAME_FLAG_CORRUPTED);
    }
  } else {
    /* Reset to -1 to mark it as unset now that we've reached the frame */
    priv->request_sync_point_frame_number = REQUEST_SYNC_POINT_UNSET;
  }
  GST_OBJECT_UNLOCK (decoder);

  if (priv->discard_corrupted_frames
      && (GST_VIDEO_CODEC_FRAME_FLAG_IS_SET (frame,
              GST_VIDEO_CODEC_FRAME_FLAG_CORRUPTED)
          || GST_BUFFER_FLAG_IS_SET (frame->output_buffer,
              GST_BUFFER_FLAG_CORRUPTED))) {
    GST_DEBUG_OBJECT (decoder,
        "skipping frame %" GST_TIME_FORMAT " because it is corrupted",
        GST_TIME_ARGS (frame->pts));
    goto done;
  }

  /* We need a writable buffer for the metadata changes below */
  output_buffer = frame->output_buffer =
      gst_buffer_make_writable (frame->output_buffer);

  GST_BUFFER_FLAG_UNSET (output_buffer, GST_BUFFER_FLAG_DELTA_UNIT);

  GST_BUFFER_PTS (output_buffer) = frame->pts;
  GST_BUFFER_DTS (output_buffer) = GST_CLOCK_TIME_NONE;
  GST_BUFFER_DURATION (output_buffer) = frame->duration;

  GST_BUFFER_OFFSET (output_buffer) = GST_BUFFER_OFFSET_NONE;
  GST_BUFFER_OFFSET_END (output_buffer) = GST_BUFFER_OFFSET_NONE;

  if (priv->discont) {
    GST_BUFFER_FLAG_SET (output_buffer, GST_BUFFER_FLAG_DISCONT);
  }

  if (GST_VIDEO_CODEC_FRAME_FLAG_IS_SET (frame,
          GST_VIDEO_CODEC_FRAME_FLAG_CORRUPTED)) {
    GST_DEBUG_OBJECT (decoder,
        "marking frame %" GST_TIME_FORMAT " as corrupted",
        GST_TIME_ARGS (frame->pts));
    GST_BUFFER_FLAG_SET (output_buffer, GST_BUFFER_FLAG_CORRUPTED);
  }

  gst_video_decoder_copy_metas (decoder, frame, frame->input_buffer,
      frame->output_buffer);

  /* Get an additional ref to the buffer, which is going to be pushed
   * downstream, the original ref is owned by the frame */
  output_buffer = gst_buffer_ref (output_buffer);

  /* Release frame so the buffer is writable when we push it downstream
   * if possible, i.e. if the subclass does not hold additional references
   * to the frame */
  gst_video_decoder_release_frame (decoder, frame);
  frame = NULL;

  if (decoder->output_segment.rate < 0.0
      && !(decoder->output_segment.flags & GST_SEEK_FLAG_TRICKMODE_KEY_UNITS)) {
    GST_LOG_OBJECT (decoder, "queued frame");
    priv->output_queued = g_list_prepend (priv->output_queued, output_buffer);
  } else {
    ret = gst_video_decoder_clip_and_push_buf (decoder, output_buffer);
  }

done:
  if (frame)
    gst_video_decoder_release_frame (decoder, frame);
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return ret;
}
/**
 * gst_video_decoder_finish_subframe:
 * @decoder: a #GstVideoDecoder
 * @frame: (transfer full): the #GstVideoCodecFrame
 *
 * Indicates that a subframe has been decoded by the subclass. This method
 * should be called for all subframes except the last one, for which
 * gst_video_decoder_finish_frame() should be called instead.
 *
 * Returns: a #GstFlowReturn, usually GST_FLOW_OK.
 *
 * Since: 1.20
 */
GstFlowReturn
gst_video_decoder_finish_subframe (GstVideoDecoder * decoder,
    GstVideoCodecFrame * frame)
{
  g_return_val_if_fail (gst_video_decoder_get_subframe_mode (decoder),
      GST_FLOW_NOT_SUPPORTED);

  GST_LOG_OBJECT (decoder, "finish subframe %p num=%d", frame->input_buffer,
      gst_video_decoder_get_input_subframe_index (decoder, frame));

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  frame->abidata.ABI.subframes_processed++;
  gst_video_codec_frame_unref (frame);

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return GST_FLOW_OK;
}
/* With stream lock, takes the frame reference */
static GstFlowReturn
gst_video_decoder_clip_and_push_buf (GstVideoDecoder * decoder, GstBuffer * buf)
{
  GstFlowReturn ret = GST_FLOW_OK;
  GstVideoDecoderPrivate *priv = decoder->priv;
  guint64 start, stop;
  guint64 cstart, cstop;
  GstSegment *segment;
  GstClockTime duration;

  /* Check for clipping */
  start = GST_BUFFER_PTS (buf);
  duration = GST_BUFFER_DURATION (buf);

  /* store that we have valid decoded data */
  priv->had_output_data = TRUE;

  stop = GST_CLOCK_TIME_NONE;

  if (GST_CLOCK_TIME_IS_VALID (start) && GST_CLOCK_TIME_IS_VALID (duration)) {
    stop = start + duration;
  } else if (GST_CLOCK_TIME_IS_VALID (start)
      && !GST_CLOCK_TIME_IS_VALID (duration)) {
    /* If we don't clip away buffers that far before the segment we
     * can cause the pipeline to lockup. This can happen if audio is
     * properly clipped, and thus the audio sink does not preroll yet
     * but the video sink prerolls because we already outputted a
     * buffer here... and then queues run full.
     *
     * In the worst case we will clip one buffer too many here now if no
     * framerate is given, no buffer duration is given and the actual
     * framerate is lower than 25fps */
    stop = start + 40 * GST_MSECOND;
  }

  segment = &decoder->output_segment;
  if (gst_segment_clip (segment, GST_FORMAT_TIME, start, stop, &cstart, &cstop)) {
    GST_BUFFER_PTS (buf) = cstart;

    if (stop != GST_CLOCK_TIME_NONE && GST_CLOCK_TIME_IS_VALID (duration))
      GST_BUFFER_DURATION (buf) = cstop - cstart;

    GST_LOG_OBJECT (decoder,
        "accepting buffer inside segment: %" GST_TIME_FORMAT " %"
        GST_TIME_FORMAT " seg %" GST_TIME_FORMAT " to %" GST_TIME_FORMAT
        " time %" GST_TIME_FORMAT,
        GST_TIME_ARGS (cstart),
        GST_TIME_ARGS (cstop),
        GST_TIME_ARGS (segment->start), GST_TIME_ARGS (segment->stop),
        GST_TIME_ARGS (segment->time));
  } else {
    GST_LOG_OBJECT (decoder,
        "dropping buffer outside segment: %" GST_TIME_FORMAT
        " %" GST_TIME_FORMAT
        " seg %" GST_TIME_FORMAT " to %" GST_TIME_FORMAT
        " time %" GST_TIME_FORMAT,
        GST_TIME_ARGS (start), GST_TIME_ARGS (stop),
        GST_TIME_ARGS (segment->start),
        GST_TIME_ARGS (segment->stop), GST_TIME_ARGS (segment->time));
    /* only check and return EOS if upstream still
     * in the same segment and interested as such */
    if (decoder->priv->in_out_segment_sync) {
      if (segment->rate >= 0) {
        if (GST_BUFFER_PTS (buf) >= segment->stop)
          ret = GST_FLOW_EOS;
      } else if (GST_BUFFER_PTS (buf) < segment->start) {
        ret = GST_FLOW_EOS;
      }
    }
    gst_buffer_unref (buf);
    goto done;
  }

  /* Is buffer too late (QoS) ? */
  if (priv->do_qos && GST_CLOCK_TIME_IS_VALID (priv->earliest_time)
      && GST_CLOCK_TIME_IS_VALID (cstart)) {
    GstClockTime deadline =
        gst_segment_to_running_time (segment, GST_FORMAT_TIME, cstart);
    if (GST_CLOCK_TIME_IS_VALID (deadline) && deadline < priv->earliest_time) {
      GST_WARNING_OBJECT (decoder,
          "Dropping frame due to QoS. start:%" GST_TIME_FORMAT " deadline:%"
          GST_TIME_FORMAT " earliest_time:%" GST_TIME_FORMAT,
          GST_TIME_ARGS (start), GST_TIME_ARGS (deadline),
          GST_TIME_ARGS (priv->earliest_time));
      gst_video_decoder_post_qos_drop (decoder, cstart);
      gst_buffer_unref (buf);
      priv->discont = TRUE;
      goto done;
    }
  }

  /* Set DISCONT flag here ! */

  if (priv->discont) {
    GST_DEBUG_OBJECT (decoder, "Setting discont on output buffer");
    GST_BUFFER_FLAG_SET (buf, GST_BUFFER_FLAG_DISCONT);
    priv->discont = FALSE;
  }

  /* update rate estimate */
  GST_OBJECT_LOCK (decoder);
  priv->bytes_out += gst_buffer_get_size (buf);
  if (GST_CLOCK_TIME_IS_VALID (duration)) {
    priv->time += duration;
  } else {
    /* FIXME : Use difference between current and previous outgoing
     * timestamp, and relate to difference between current and previous
     * input timestamps */
    /* better none than nothing valid */
    priv->time = GST_CLOCK_TIME_NONE;
  }
  GST_OBJECT_UNLOCK (decoder);

  GST_DEBUG_OBJECT (decoder, "pushing buffer %p of size %" G_GSIZE_FORMAT ", "
      "PTS %" GST_TIME_FORMAT ", dur %" GST_TIME_FORMAT, buf,
      gst_buffer_get_size (buf),
      GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
      GST_TIME_ARGS (GST_BUFFER_DURATION (buf)));

  /* we got data, so note things are looking up again, reduce
   * the error count, if there is one */
  if (G_UNLIKELY (priv->error_count))
    priv->error_count = 0;

#ifndef GST_DISABLE_DEBUG
  if (G_UNLIKELY (priv->last_reset_time != GST_CLOCK_TIME_NONE)) {
    GstClockTime elapsed = gst_util_get_timestamp () - priv->last_reset_time;

    /* First buffer since reset, report how long we took */
    GST_INFO_OBJECT (decoder, "First buffer since flush took %" GST_TIME_FORMAT
        " to produce", GST_TIME_ARGS (elapsed));
    priv->last_reset_time = GST_CLOCK_TIME_NONE;
  }
#endif

  /* release STREAM_LOCK not to block upstream
   * while pushing buffer downstream */
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
  ret = gst_pad_push (decoder->srcpad, buf);
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

done:
  return ret;
}
/**
 * gst_video_decoder_add_to_frame:
 * @decoder: a #GstVideoDecoder
 * @n_bytes: the number of bytes to add
 *
 * Removes the next @n_bytes of input data and adds it to the currently
 * parsed frame.
 */
void
gst_video_decoder_add_to_frame (GstVideoDecoder * decoder, int n_bytes)
{
  GstVideoDecoderPrivate *priv = decoder->priv;
  GstBuffer *buf;

  GST_LOG_OBJECT (decoder, "add %d bytes to frame", n_bytes);

  if (n_bytes == 0)
    return;

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  if (gst_adapter_available (priv->output_adapter) == 0) {
    priv->frame_offset =
        priv->input_offset - gst_adapter_available (priv->input_adapter);
  }
  buf = gst_adapter_take_buffer (priv->input_adapter, n_bytes);

  gst_adapter_push (priv->output_adapter, buf);
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
}
/**
 * gst_video_decoder_get_pending_frame_size:
 * @decoder: a #GstVideoDecoder
 *
 * Returns the number of bytes previously added to the current frame
 * by calling gst_video_decoder_add_to_frame().
 *
 * Returns: The number of bytes pending for the current frame
 */
gsize
gst_video_decoder_get_pending_frame_size (GstVideoDecoder * decoder)
{
  GstVideoDecoderPrivate *priv = decoder->priv;
  gsize ret;

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  ret = gst_adapter_available (priv->output_adapter);
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  GST_LOG_OBJECT (decoder, "Current pending frame has %" G_GSIZE_FORMAT
      " bytes", ret);

  return ret;
}
static GstClockTime
gst_video_decoder_get_frame_duration (GstVideoDecoder * decoder,
    GstVideoCodecFrame * frame)
{
  GstVideoCodecState *state = decoder->priv->output_state;

  /* it's possible that we don't have a state yet when we are dropping the
   * initial buffers */
  if (state == NULL)
    return GST_CLOCK_TIME_NONE;

  if (state->info.fps_d == 0 || state->info.fps_n == 0) {
    return GST_CLOCK_TIME_NONE;
  }

  /* FIXME: For interlaced frames this needs to take into account
   * the number of valid fields in the frame
   */

  return gst_util_uint64_scale (GST_SECOND, state->info.fps_d,
      state->info.fps_n);
}
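The duration computed above is `GST_SECOND * fps_d / fps_n` nanoseconds. A plain-C sketch of that scaling, widened to 128 bits against intermediate overflow as `gst_util_uint64_scale()` does internally (the helper name `frame_duration_ns` is illustrative; the real utility also supports explicit rounding modes):

```c
#include <assert.h>
#include <stdint.h>

#define SECOND 1000000000ULL    /* GST_SECOND in nanoseconds */

/* Duration of one frame in ns for a fps_n/fps_d framerate,
 * or UINT64_MAX (the GST_CLOCK_TIME_NONE sentinel) if unknown. */
static uint64_t
frame_duration_ns (unsigned fps_n, unsigned fps_d)
{
  if (fps_n == 0 || fps_d == 0)
    return UINT64_MAX;
  return (uint64_t) (((unsigned __int128) SECOND * fps_d) / fps_n);
}
```

For 25/1 this yields 40 ms, and for NTSC 30000/1001 it yields roughly 33.37 ms, truncated toward zero.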
/**
 * gst_video_decoder_have_frame:
 * @decoder: a #GstVideoDecoder
 *
 * Gathers all data collected for the currently parsed frame, gathers
 * corresponding metadata and passes it along for further processing,
 * i.e. @handle_frame.
 *
 * Returns: a #GstFlowReturn
 */
GstFlowReturn
gst_video_decoder_have_frame (GstVideoDecoder * decoder)
{
  GstVideoDecoderPrivate *priv = decoder->priv;
  GstBuffer *buffer;
  int n_available;
  GstClockTime pts, dts, duration;
  guint flags;
  GstFlowReturn ret = GST_FLOW_OK;

  GST_LOG_OBJECT (decoder, "have_frame at offset %" G_GUINT64_FORMAT,
      priv->frame_offset);

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  n_available = gst_adapter_available (priv->output_adapter);
  if (n_available) {
    buffer = gst_adapter_take_buffer (priv->output_adapter, n_available);
  } else {
    buffer = gst_buffer_new_and_alloc (0);
  }

  if (priv->current_frame->input_buffer) {
    gst_video_decoder_copy_metas (decoder, priv->current_frame,
        priv->current_frame->input_buffer, buffer);
    gst_buffer_unref (priv->current_frame->input_buffer);
  }
  priv->current_frame->input_buffer = buffer;

  gst_video_decoder_get_buffer_info_at_offset (decoder,
      priv->frame_offset, &pts, &dts, &duration, &flags);

  GST_BUFFER_PTS (buffer) = pts;
  GST_BUFFER_DTS (buffer) = dts;
  GST_BUFFER_DURATION (buffer) = duration;
  GST_BUFFER_FLAGS (buffer) = flags;

  GST_LOG_OBJECT (decoder, "collected frame size %d, "
      "PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT ", dur %"
      GST_TIME_FORMAT, n_available, GST_TIME_ARGS (pts), GST_TIME_ARGS (dts),
      GST_TIME_ARGS (duration));

  if (!GST_BUFFER_FLAG_IS_SET (buffer, GST_BUFFER_FLAG_DELTA_UNIT)) {
    GST_DEBUG_OBJECT (decoder, "Marking as sync point");
    GST_VIDEO_CODEC_FRAME_SET_SYNC_POINT (priv->current_frame);
  }

  if (GST_BUFFER_FLAG_IS_SET (buffer, GST_BUFFER_FLAG_CORRUPTED)) {
    GST_DEBUG_OBJECT (decoder, "Marking as corrupted");
    GST_VIDEO_CODEC_FRAME_FLAG_SET (priv->current_frame,
        GST_VIDEO_CODEC_FRAME_FLAG_CORRUPTED);
  }

  /* In reverse playback, just capture and queue frames for later processing */
  if (decoder->input_segment.rate < 0.0) {
    priv->parse_gather =
        g_list_prepend (priv->parse_gather, priv->current_frame);
    priv->current_frame = NULL;
  } else {
    GstVideoCodecFrame *frame = priv->current_frame;
    frame->abidata.ABI.num_subframes++;
    /* In subframe mode, we keep a ref for ourselves
     * as this frame will be kept during the data collection
     * in parsed mode. The frame reference will be released by
     * finish_(sub)frame or drop_(sub)frame. */
    if (gst_video_decoder_get_subframe_mode (decoder))
      gst_video_codec_frame_ref (priv->current_frame);
    else
      priv->current_frame = NULL;

    /* Decode the frame, which gives away our ref */
    ret = gst_video_decoder_decode_frame (decoder, frame);
  }

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return ret;
}
/* Pass the frame in priv->current_frame through the
 * handle_frame() callback for decoding and passing to gvd_finish_frame(),
 * or dropping by passing to gvd_drop_frame() */
static GstFlowReturn
gst_video_decoder_decode_frame (GstVideoDecoder * decoder,
    GstVideoCodecFrame * frame)
{
  GstVideoDecoderPrivate *priv = decoder->priv;
  GstVideoDecoderClass *decoder_class;
  GstFlowReturn ret = GST_FLOW_OK;

  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);

  /* FIXME : This should only have to be checked once (either the subclass has an
   * implementation, or it doesn't) */
  g_return_val_if_fail (decoder_class->handle_frame != NULL, GST_FLOW_ERROR);
  g_return_val_if_fail (frame != NULL, GST_FLOW_ERROR);

  frame->pts = GST_BUFFER_PTS (frame->input_buffer);
  frame->dts = GST_BUFFER_DTS (frame->input_buffer);
  frame->duration = GST_BUFFER_DURATION (frame->input_buffer);
  frame->deadline =
      gst_segment_to_running_time (&decoder->input_segment, GST_FORMAT_TIME,
      frame->pts);

  /* For keyframes, PTS = DTS + constant_offset, usually 0 to 3 frame
   * durations. */
  /* FIXME upstream can be quite wrong about the keyframe aspect,
   * so we could be going off here as well,
   * maybe let subclass decide if it really is/was a keyframe */
  if (GST_VIDEO_CODEC_FRAME_IS_SYNC_POINT (frame)) {
    priv->distance_from_sync = 0;

    GST_OBJECT_LOCK (decoder);
    priv->request_sync_point_flags &=
        ~GST_VIDEO_DECODER_REQUEST_SYNC_POINT_DISCARD_INPUT;
    if (priv->request_sync_point_frame_number == REQUEST_SYNC_POINT_PENDING)
      priv->request_sync_point_frame_number = frame->system_frame_number;
    GST_OBJECT_UNLOCK (decoder);

    if (GST_CLOCK_TIME_IS_VALID (frame->pts)
        && GST_CLOCK_TIME_IS_VALID (frame->dts)) {
      /* just in case they are not equal as might ideally be,
       * e.g. quicktime has a (positive) delta approach */
      priv->pts_delta = frame->pts - frame->dts;
      GST_DEBUG_OBJECT (decoder, "PTS delta %d ms",
          (gint) (priv->pts_delta / GST_MSECOND));
    }
  } else {
    if (priv->distance_from_sync == -1 && priv->automatic_request_sync_points) {
      GST_DEBUG_OBJECT (decoder,
          "Didn't receive a keyframe yet, requesting sync point");
      gst_video_decoder_request_sync_point (decoder, frame,
          priv->automatic_request_sync_point_flags);
    }

    GST_OBJECT_LOCK (decoder);
    if ((priv->needs_sync_point && priv->distance_from_sync == -1)
        || (priv->request_sync_point_flags &
            GST_VIDEO_DECODER_REQUEST_SYNC_POINT_DISCARD_INPUT)) {
      GST_WARNING_OBJECT (decoder,
          "Subclass requires a sync point but we didn't receive one yet, discarding input");
      GST_OBJECT_UNLOCK (decoder);
      if (priv->automatic_request_sync_points) {
        gst_video_decoder_request_sync_point (decoder, frame,
            priv->automatic_request_sync_point_flags);
      }
      gst_video_decoder_release_frame (decoder, frame);
      return GST_FLOW_OK;
    }
    GST_OBJECT_UNLOCK (decoder);

    priv->distance_from_sync++;
  }

  frame->distance_from_sync = priv->distance_from_sync;

  if (frame->abidata.ABI.num_subframes == 1) {
    frame->abidata.ABI.ts = frame->dts;
    frame->abidata.ABI.ts2 = frame->pts;
  }

  GST_LOG_OBJECT (decoder,
      "frame %p PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT ", dist %d",
      frame, GST_TIME_ARGS (frame->pts), GST_TIME_ARGS (frame->dts),
      frame->distance_from_sync);
  /* FIXME: suboptimal way to add a unique frame to the list, in case of subframe mode. */
  if (!g_queue_find (&priv->frames, frame)) {
    g_queue_push_tail (&priv->frames, gst_video_codec_frame_ref (frame));
  } else {
    GST_LOG_OBJECT (decoder,
        "Do not add an existing frame used to decode subframes");
  }

  if (priv->frames.length > 10) {
    GST_DEBUG_OBJECT (decoder, "decoder frame list getting long: %d frames,"
        "possible internal leaking?", priv->frames.length);
  }

  /* do something with frame */
  ret = decoder_class->handle_frame (decoder, frame);
  if (ret != GST_FLOW_OK)
    GST_DEBUG_OBJECT (decoder, "flow error %s", gst_flow_get_name (ret));

  /* the frame has either been added to parse_gather or sent to
     handle frame so there is no need to unref it */
  return ret;
}
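The PTS delta logged above is the signed nanosecond difference between a keyframe's PTS and DTS, printed in milliseconds. A self-contained sketch of that conversion (the name `pts_delta_ms` is illustrative; the decoder stores the raw delta and only converts for the debug log):

```c
#include <assert.h>
#include <stdint.h>

#define MSECOND 1000000LL       /* GST_MSECOND in nanoseconds */

/* Signed PTS-DTS delta in milliseconds; containers such as QuickTime
 * use a positive delta to account for B-frame reordering. */
static int
pts_delta_ms (uint64_t pts, uint64_t dts)
{
  return (int) (((int64_t) pts - (int64_t) dts) / MSECOND);
}
```

A stream whose keyframes carry PTS 120 ms / DTS 80 ms would log a 40 ms delta.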
/**
 * gst_video_decoder_get_output_state:
 * @decoder: a #GstVideoDecoder
 *
 * Get the #GstVideoCodecState currently describing the output stream.
 *
 * Returns: (transfer full) (nullable): #GstVideoCodecState describing format of video data.
 */
GstVideoCodecState *
gst_video_decoder_get_output_state (GstVideoDecoder * decoder)
{
  GstVideoCodecState *state = NULL;

  GST_OBJECT_LOCK (decoder);
  if (decoder->priv->output_state)
    state = gst_video_codec_state_ref (decoder->priv->output_state);
  GST_OBJECT_UNLOCK (decoder);

  return state;
}
static GstVideoCodecState *
_set_interlaced_output_state (GstVideoDecoder * decoder,
    GstVideoFormat fmt, GstVideoInterlaceMode interlace_mode, guint width,
    guint height, GstVideoCodecState * reference, gboolean copy_interlace_mode)
{
  GstVideoDecoderPrivate *priv = decoder->priv;
  GstVideoCodecState *state;

  g_assert ((copy_interlace_mode
          && interlace_mode == GST_VIDEO_INTERLACE_MODE_PROGRESSIVE)
      || !copy_interlace_mode);

  GST_DEBUG_OBJECT (decoder,
      "fmt:%d, width:%d, height:%d, interlace-mode: %s, reference:%p", fmt,
      width, height, gst_video_interlace_mode_to_string (interlace_mode),
      reference);

  /* Create the new output state */
  state =
      _new_output_state (fmt, interlace_mode, width, height, reference,
      copy_interlace_mode);
  if (!state)
    return NULL;

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  GST_OBJECT_LOCK (decoder);
  /* Replace existing output state by new one */
  if (priv->output_state)
    gst_video_codec_state_unref (priv->output_state);
  priv->output_state = gst_video_codec_state_ref (state);

  if (priv->output_state != NULL && priv->output_state->info.fps_n > 0) {
    priv->qos_frame_duration =
        gst_util_uint64_scale (GST_SECOND, priv->output_state->info.fps_d,
        priv->output_state->info.fps_n);
  } else {
    priv->qos_frame_duration = 0;
  }
  priv->output_state_changed = TRUE;
  GST_OBJECT_UNLOCK (decoder);

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return state;
}
/**
 * gst_video_decoder_set_output_state:
 * @decoder: a #GstVideoDecoder
 * @fmt: a #GstVideoFormat
 * @width: The width in pixels
 * @height: The height in pixels
 * @reference: (nullable) (transfer none): An optional reference #GstVideoCodecState
 *
 * Creates a new #GstVideoCodecState with the specified @fmt, @width and @height
 * as the output state for the decoder.
 * Any previously set output state on @decoder will be replaced by the newly
 * created one.
 *
 * If the subclass wishes to copy over existing fields (like pixel aspect
 * ratio, or framerate) from an existing #GstVideoCodecState, it can be
 * provided as a @reference.
 *
 * If the subclass wishes to override some fields from the output state (like
 * pixel-aspect-ratio or framerate) it can do so on the returned #GstVideoCodecState.
 *
 * The new output state will only take effect (set on pads and buffers) starting
 * from the next call to gst_video_decoder_finish_frame().
 *
 * Returns: (transfer full) (nullable): the newly configured output state.
 */
GstVideoCodecState *
gst_video_decoder_set_output_state (GstVideoDecoder * decoder,
    GstVideoFormat fmt, guint width, guint height,
    GstVideoCodecState * reference)
{
  return _set_interlaced_output_state (decoder, fmt,
      GST_VIDEO_INTERLACE_MODE_PROGRESSIVE, width, height, reference, TRUE);
}
/**
 * gst_video_decoder_set_interlaced_output_state:
 * @decoder: a #GstVideoDecoder
 * @fmt: a #GstVideoFormat
 * @width: The width in pixels
 * @height: The height in pixels
 * @interlace_mode: A #GstVideoInterlaceMode
 * @reference: (nullable) (transfer none): An optional reference #GstVideoCodecState
 *
 * Same as gst_video_decoder_set_output_state() but also allows you to set
 * the interlacing mode.
 *
 * Returns: (transfer full) (nullable): the newly configured output state.
 *
 * Since: 1.16
 */
GstVideoCodecState *
gst_video_decoder_set_interlaced_output_state (GstVideoDecoder * decoder,
    GstVideoFormat fmt, GstVideoInterlaceMode interlace_mode, guint width,
    guint height, GstVideoCodecState * reference)
{
  return _set_interlaced_output_state (decoder, fmt, interlace_mode, width,
      height, reference, FALSE);
}
/**
 * gst_video_decoder_get_oldest_frame:
 * @decoder: a #GstVideoDecoder
 *
 * Get the oldest pending unfinished #GstVideoCodecFrame
 *
 * Returns: (transfer full) (nullable): oldest pending unfinished #GstVideoCodecFrame.
 */
GstVideoCodecFrame *
gst_video_decoder_get_oldest_frame (GstVideoDecoder * decoder)
{
  GstVideoCodecFrame *frame = NULL;

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  if (decoder->priv->frames.head)
    frame = gst_video_codec_frame_ref (decoder->priv->frames.head->data);
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return frame;
}
/**
 * gst_video_decoder_get_frame:
 * @decoder: a #GstVideoDecoder
 * @frame_number: system_frame_number of a frame
 *
 * Get a pending unfinished #GstVideoCodecFrame
 *
 * Returns: (transfer full) (nullable): pending unfinished #GstVideoCodecFrame identified by @frame_number.
 */
GstVideoCodecFrame *
gst_video_decoder_get_frame (GstVideoDecoder * decoder, int frame_number)
{
  GList *g;
  GstVideoCodecFrame *frame = NULL;

  GST_DEBUG_OBJECT (decoder, "frame_number : %d", frame_number);

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  for (g = decoder->priv->frames.head; g; g = g->next) {
    GstVideoCodecFrame *tmp = g->data;

    if (tmp->system_frame_number == frame_number) {
      frame = gst_video_codec_frame_ref (tmp);
      break;
    }
  }
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return frame;
}
/**
 * gst_video_decoder_get_frames:
 * @decoder: a #GstVideoDecoder
 *
 * Get all pending unfinished #GstVideoCodecFrame
 *
 * Returns: (transfer full) (element-type GstVideoCodecFrame): pending unfinished #GstVideoCodecFrame.
 */
GList *
gst_video_decoder_get_frames (GstVideoDecoder * decoder)
{
  GList *frames;

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  frames =
      g_list_copy_deep (decoder->priv->frames.head,
      (GCopyFunc) gst_video_codec_frame_ref, NULL);
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return frames;
}
static gboolean
gst_video_decoder_decide_allocation_default (GstVideoDecoder * decoder,
    GstQuery * query)
{
  GstCaps *outcaps = NULL;
  GstBufferPool *pool = NULL;
  guint size, min, max;
  GstAllocator *allocator = NULL;
  GstAllocationParams params;
  GstStructure *config;
  gboolean update_pool, update_allocator;
  GstVideoInfo vinfo;

  gst_query_parse_allocation (query, &outcaps, NULL);
  gst_video_info_init (&vinfo);
  if (outcaps)
    gst_video_info_from_caps (&vinfo, outcaps);

  /* we got configuration from our peer or the decide_allocation method,
   * parse them */
  if (gst_query_get_n_allocation_params (query) > 0) {
    /* try the allocator */
    gst_query_parse_nth_allocation_param (query, 0, &allocator, &params);
    update_allocator = TRUE;
  } else {
    allocator = NULL;
    gst_allocation_params_init (&params);
    update_allocator = FALSE;
  }

  if (gst_query_get_n_allocation_pools (query) > 0) {
    gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);
    size = MAX (size, vinfo.size);
    update_pool = TRUE;
  } else {
    pool = NULL;
    size = vinfo.size;
    min = max = 0;

    update_pool = FALSE;
  }

  if (pool == NULL) {
    /* no pool, we can make our own */
    GST_DEBUG_OBJECT (decoder, "no pool, making new pool");
    pool = gst_video_buffer_pool_new ();
  }

  /* now configure */
  config = gst_buffer_pool_get_config (pool);
  gst_buffer_pool_config_set_params (config, outcaps, size, min, max);
  gst_buffer_pool_config_set_allocator (config, allocator, &params);

  GST_DEBUG_OBJECT (decoder,
      "setting config %" GST_PTR_FORMAT " in pool %" GST_PTR_FORMAT, config,
      pool);
  if (!gst_buffer_pool_set_config (pool, config)) {
    config = gst_buffer_pool_get_config (pool);

    /* If changes are not acceptable, fallback to generic pool */
    if (!gst_buffer_pool_config_validate_params (config, outcaps, size, min,
            max)) {
      GST_DEBUG_OBJECT (decoder, "unsupported pool, making new pool");

      gst_object_unref (pool);
      pool = gst_video_buffer_pool_new ();
      gst_buffer_pool_config_set_params (config, outcaps, size, min, max);
      gst_buffer_pool_config_set_allocator (config, allocator, &params);
    }

    if (!gst_buffer_pool_set_config (pool, config))
      goto config_failed;
  }

  if (update_allocator)
    gst_query_set_nth_allocation_param (query, 0, allocator, &params);
  else
    gst_query_add_allocation_param (query, allocator, &params);
  if (allocator)
    gst_object_unref (allocator);

  if (update_pool)
    gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max);
  else
    gst_query_add_allocation_pool (query, pool, size, min, max);

  if (pool)
    gst_object_unref (pool);

  return TRUE;

config_failed:
  if (allocator)
    gst_object_unref (allocator);
  if (pool)
    gst_object_unref (pool);
  GST_ELEMENT_ERROR (decoder, RESOURCE, SETTINGS,
      ("Failed to configure the buffer pool"),
      ("Configuration is most likely invalid, please report this issue."));
  return FALSE;
}
static gboolean
gst_video_decoder_propose_allocation_default (GstVideoDecoder * decoder,
    GstQuery * query)
{
  return TRUE;
}
4320 gst_video_decoder_negotiate_pool (GstVideoDecoder * decoder, GstCaps * caps)
4322 GstVideoDecoderClass *klass;
4323 GstQuery *query = NULL;
4324 GstBufferPool *pool = NULL;
4325 GstAllocator *allocator;
4326 GstAllocationParams params;
4327 gboolean ret = TRUE;
4329 klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
4331 query = gst_query_new_allocation (caps, TRUE);
4333 GST_DEBUG_OBJECT (decoder, "do query ALLOCATION");
4335 if (!gst_pad_peer_query (decoder->srcpad, query)) {
4336 GST_DEBUG_OBJECT (decoder, "didn't get downstream ALLOCATION hints");
4339 g_assert (klass->decide_allocation != NULL);
4340 ret = klass->decide_allocation (decoder, query);
4342 GST_DEBUG_OBJECT (decoder, "ALLOCATION (%d) params: %" GST_PTR_FORMAT, ret,
4346 goto no_decide_allocation;
4348 /* we got configuration from our peer or the decide_allocation method,
4350 if (gst_query_get_n_allocation_params (query) > 0) {
4351 gst_query_parse_nth_allocation_param (query, 0, &allocator, &params);
4354 gst_allocation_params_init (&params);
4357 if (gst_query_get_n_allocation_pools (query) > 0)
4358 gst_query_parse_nth_allocation_pool (query, 0, &pool, NULL, NULL, NULL);
4361 gst_object_unref (allocator);
4363 goto no_decide_allocation;
4366 if (decoder->priv->allocator)
4367 gst_object_unref (decoder->priv->allocator);
4368 decoder->priv->allocator = allocator;
4369 decoder->priv->params = params;
4371 if (decoder->priv->pool) {
4372 /* do not set the bufferpool to inactive here, it will be done
4373 * in its finalize function. As videodecoder does late renegotiation,
4374 * it might happen that some element downstream is already using this
4375 * same bufferpool and deactivating it would make that element fail.
4376 * This happens when a downstream element changes from passthrough to
4377 * non-passthrough and gets this same bufferpool to use */
4378 GST_DEBUG_OBJECT (decoder, "unref pool %" GST_PTR_FORMAT,
4379 decoder->priv->pool);
4380 gst_object_unref (decoder->priv->pool);
4382 decoder->priv->pool = pool;
4385 GST_DEBUG_OBJECT (decoder, "activate pool %" GST_PTR_FORMAT, pool);
4386 gst_buffer_pool_set_active (pool, TRUE);
4390 gst_query_unref (query);
4395 no_decide_allocation:
4397 GST_WARNING_OBJECT (decoder, "Subclass failed to decide allocation");
4403 gst_video_decoder_negotiate_default (GstVideoDecoder * decoder)
4405 GstVideoCodecState *state = decoder->priv->output_state;
4406 gboolean ret = TRUE;
4407 GstVideoCodecFrame *frame;
4412 GST_DEBUG_OBJECT (decoder,
4413 "Trying to negotiate the pool without setting the output format");
4414 ret = gst_video_decoder_negotiate_pool (decoder, NULL);
4418 g_return_val_if_fail (GST_VIDEO_INFO_WIDTH (&state->info) != 0, FALSE);
4419 g_return_val_if_fail (GST_VIDEO_INFO_HEIGHT (&state->info) != 0, FALSE);
4421 /* If the base class didn't set any multiview params, assume mono
4423 if (GST_VIDEO_INFO_MULTIVIEW_MODE (&state->info) ==
4424 GST_VIDEO_MULTIVIEW_MODE_NONE) {
4425 GST_VIDEO_INFO_MULTIVIEW_MODE (&state->info) =
4426 GST_VIDEO_MULTIVIEW_MODE_MONO;
4427 GST_VIDEO_INFO_MULTIVIEW_FLAGS (&state->info) =
4428 GST_VIDEO_MULTIVIEW_FLAGS_NONE;
4431 GST_DEBUG_OBJECT (decoder, "output_state par %d/%d fps %d/%d",
4432 state->info.par_n, state->info.par_d,
4433 state->info.fps_n, state->info.fps_d);
4435 if (state->caps == NULL)
4436 state->caps = gst_video_info_to_caps (&state->info);
4438 incaps = gst_pad_get_current_caps (GST_VIDEO_DECODER_SINK_PAD (decoder));
4440 GstStructure *in_struct;
4442 in_struct = gst_caps_get_structure (incaps, 0);
4443 if (gst_structure_has_field (in_struct, "mastering-display-info") ||
4444 gst_structure_has_field (in_struct, "content-light-level")) {
4447 /* prefer upstream information */
4448 state->caps = gst_caps_make_writable (state->caps);
4449 if ((s = gst_structure_get_string (in_struct, "mastering-display-info"))) {
4450 gst_caps_set_simple (state->caps,
4451 "mastering-display-info", G_TYPE_STRING, s, NULL);
4454 if ((s = gst_structure_get_string (in_struct, "content-light-level"))) {
4455 gst_caps_set_simple (state->caps,
4456 "content-light-level", G_TYPE_STRING, s, NULL);
4460 gst_caps_unref (incaps);
4463 if (state->allocation_caps == NULL)
4464 state->allocation_caps = gst_caps_ref (state->caps);
4466 GST_DEBUG_OBJECT (decoder, "setting caps %" GST_PTR_FORMAT, state->caps);
4468 /* Push all pending pre-caps events of the oldest frame before
4470 frame = decoder->priv->frames.head ? decoder->priv->frames.head->data : NULL;
4471 if (frame || decoder->priv->current_frame_events) {
4475 events = &frame->events;
4477 events = &decoder->priv->current_frame_events;
4480 for (l = g_list_last (*events); l;) {
4481 GstEvent *event = GST_EVENT (l->data);
4484 if (GST_EVENT_TYPE (event) < GST_EVENT_CAPS) {
4485 gst_video_decoder_push_event (decoder, event);
4488 *events = g_list_delete_link (*events, tmp);
4495 prevcaps = gst_pad_get_current_caps (decoder->srcpad);
4496 if (!prevcaps || !gst_caps_is_equal (prevcaps, state->caps)) {
4498 GST_DEBUG_OBJECT (decoder, "decoder src pad has currently NULL caps");
4500 ret = gst_pad_set_caps (decoder->srcpad, state->caps);
4503 GST_DEBUG_OBJECT (decoder,
4504 "current src pad and output state caps are the same");
4507 gst_caps_unref (prevcaps);
4511 decoder->priv->output_state_changed = FALSE;
4512 /* Negotiate pool */
4513 ret = gst_video_decoder_negotiate_pool (decoder, state->allocation_caps);
4520 gst_video_decoder_negotiate_unlocked (GstVideoDecoder * decoder)
4522 GstVideoDecoderClass *klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
4523 gboolean ret = TRUE;
4525 if (G_LIKELY (klass->negotiate))
4526 ret = klass->negotiate (decoder);
4532 * gst_video_decoder_negotiate:
4533 * @decoder: a #GstVideoDecoder
4535 * Negotiate with downstream elements according to the currently configured #GstVideoCodecState.
4536 * Unmarks GST_PAD_FLAG_NEED_RECONFIGURE in any case, but marks it again if
4539 * Returns: %TRUE if the negotiation succeeded, else %FALSE.
4542 gst_video_decoder_negotiate (GstVideoDecoder * decoder)
4544 GstVideoDecoderClass *klass;
4545 gboolean ret = TRUE;
4547 g_return_val_if_fail (GST_IS_VIDEO_DECODER (decoder), FALSE);
4549 klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
4551 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
4552 gst_pad_check_reconfigure (decoder->srcpad);
4553 if (klass->negotiate) {
4554 ret = klass->negotiate (decoder);
4556 gst_pad_mark_reconfigure (decoder->srcpad);
4558 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
4564 * gst_video_decoder_allocate_output_buffer:
4565 * @decoder: a #GstVideoDecoder
4567 * Helper function that allocates a buffer to hold a video frame for @decoder's
4568 * current #GstVideoCodecState.
4570 * You should use gst_video_decoder_allocate_output_frame() instead of this
4571 * function, if possible at all.
4573 * Returns: (transfer full) (nullable): allocated buffer, or NULL if no buffer could be
4574 * allocated (e.g. when downstream is flushing or shutting down)
4577 gst_video_decoder_allocate_output_buffer (GstVideoDecoder * decoder)
4580 GstBuffer *buffer = NULL;
4581 gboolean needs_reconfigure = FALSE;
4583 GST_DEBUG ("alloc src buffer");
4585 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
4586 needs_reconfigure = gst_pad_check_reconfigure (decoder->srcpad);
4587 if (G_UNLIKELY (!decoder->priv->output_state
4588 || decoder->priv->output_state_changed || needs_reconfigure)) {
4589 if (!gst_video_decoder_negotiate_unlocked (decoder)) {
4590 if (decoder->priv->output_state) {
4591 GST_DEBUG_OBJECT (decoder, "Failed to negotiate, fallback allocation");
4592 gst_pad_mark_reconfigure (decoder->srcpad);
4595 GST_DEBUG_OBJECT (decoder, "Failed to negotiate, output_buffer=NULL");
4596 goto failed_allocation;
4601 flow = gst_buffer_pool_acquire_buffer (decoder->priv->pool, &buffer, NULL);
4603 if (flow != GST_FLOW_OK) {
4604 GST_INFO_OBJECT (decoder, "couldn't allocate output buffer, flow %s",
4605 gst_flow_get_name (flow));
4606 if (decoder->priv->output_state && decoder->priv->output_state->info.size)
4609 goto failed_allocation;
4611 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
4616 GST_INFO_OBJECT (decoder,
4617 "Fallback allocation, creating a new buffer which doesn't belong to any buffer pool");
4619 gst_buffer_new_allocate (NULL, decoder->priv->output_state->info.size,
4623 GST_ERROR_OBJECT (decoder, "Failed to allocate the buffer");
4624 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
4630 * gst_video_decoder_allocate_output_frame:
4631 * @decoder: a #GstVideoDecoder
4632 * @frame: a #GstVideoCodecFrame
4634 * Helper function that allocates a buffer to hold a video frame for @decoder's
4635 * current #GstVideoCodecState. Subclass should already have configured video
4636 * state and set src pad caps.
4638 * The buffer allocated here is owned by the frame and you should only
4639 * keep references to the frame, not the buffer.
4641 * Returns: %GST_FLOW_OK if an output buffer could be allocated
4644 gst_video_decoder_allocate_output_frame (GstVideoDecoder *
4645 decoder, GstVideoCodecFrame * frame)
4647 return gst_video_decoder_allocate_output_frame_with_params (decoder, frame,
4652 * gst_video_decoder_allocate_output_frame_with_params:
4653 * @decoder: a #GstVideoDecoder
4654 * @frame: a #GstVideoCodecFrame
4655 * @params: a #GstBufferPoolAcquireParams
4657 * Same as gst_video_decoder_allocate_output_frame() except it allows passing
4658 * #GstBufferPoolAcquireParams to the underlying gst_buffer_pool_acquire_buffer() call.
4660 * Returns: %GST_FLOW_OK if an output buffer could be allocated
4665 gst_video_decoder_allocate_output_frame_with_params (GstVideoDecoder *
4666 decoder, GstVideoCodecFrame * frame, GstBufferPoolAcquireParams * params)
4668 GstFlowReturn flow_ret;
4669 GstVideoCodecState *state;
4671 gboolean needs_reconfigure = FALSE;
4673 g_return_val_if_fail (decoder->priv->output_state, GST_FLOW_NOT_NEGOTIATED);
4674 g_return_val_if_fail (frame->output_buffer == NULL, GST_FLOW_ERROR);
4676 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
4678 state = decoder->priv->output_state;
4679 if (state == NULL) {
4680 g_warning ("Output state should be set before allocating frame");
4683 num_bytes = GST_VIDEO_INFO_SIZE (&state->info);
4684 if (num_bytes == 0) {
4685 g_warning ("Frame size should not be 0");
4689 needs_reconfigure = gst_pad_check_reconfigure (decoder->srcpad);
4690 if (G_UNLIKELY (decoder->priv->output_state_changed || needs_reconfigure)) {
4691 if (!gst_video_decoder_negotiate_unlocked (decoder)) {
4692 gst_pad_mark_reconfigure (decoder->srcpad);
4693 if (GST_PAD_IS_FLUSHING (decoder->srcpad)) {
4694 GST_DEBUG_OBJECT (decoder,
4695 "Failed to negotiate a pool: pad is flushing");
4697 } else if (!decoder->priv->pool || decoder->priv->output_state_changed) {
4698 GST_DEBUG_OBJECT (decoder,
4699 "Failed to negotiate a pool and no previous pool to reuse");
4702 GST_DEBUG_OBJECT (decoder,
4703 "Failed to negotiate a pool, falling back to the previous pool");
4708 GST_LOG_OBJECT (decoder, "alloc buffer size %d", num_bytes);
4710 flow_ret = gst_buffer_pool_acquire_buffer (decoder->priv->pool,
4711 &frame->output_buffer, params);
4713 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
4718 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
4719 return GST_FLOW_FLUSHING;
4722 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
4723 return GST_FLOW_ERROR;
4727 * gst_video_decoder_get_max_decode_time:
4728 * @decoder: a #GstVideoDecoder
4729 * @frame: a #GstVideoCodecFrame
4731 * Determines maximum possible decoding time for @frame that will
4732 * allow it to decode and arrive in time (as determined by QoS events).
4733 * In particular, a negative result means decoding in time is no longer possible,
4734 * and should therefore be done as quickly as possible, possibly skipping the frame.
4736 * Returns: max decoding time.
4739 gst_video_decoder_get_max_decode_time (GstVideoDecoder *
4740 decoder, GstVideoCodecFrame * frame)
4742 GstClockTimeDiff deadline;
4743 GstClockTime earliest_time;
4745 GST_OBJECT_LOCK (decoder);
4746 earliest_time = decoder->priv->earliest_time;
4747 if (GST_CLOCK_TIME_IS_VALID (earliest_time)
4748 && GST_CLOCK_TIME_IS_VALID (frame->deadline))
4749 deadline = GST_CLOCK_DIFF (earliest_time, frame->deadline);
4751 deadline = G_MAXINT64;
4753 GST_LOG_OBJECT (decoder, "earliest %" GST_TIME_FORMAT
4754 ", frame deadline %" GST_TIME_FORMAT ", deadline %" GST_STIME_FORMAT,
4755 GST_TIME_ARGS (earliest_time), GST_TIME_ARGS (frame->deadline),
4756 GST_STIME_ARGS (deadline));
4758 GST_OBJECT_UNLOCK (decoder);
4764 * gst_video_decoder_get_qos_proportion:
4765 * @decoder: a #GstVideoDecoder
4766 * Queries the current QoS proportion of the decoder.
4768 * Returns: The current QoS proportion.
4773 gst_video_decoder_get_qos_proportion (GstVideoDecoder * decoder)
4777 g_return_val_if_fail (GST_IS_VIDEO_DECODER (decoder), 1.0);
4779 GST_OBJECT_LOCK (decoder);
4780 proportion = decoder->priv->proportion;
4781 GST_OBJECT_UNLOCK (decoder);
4787 _gst_video_decoder_error (GstVideoDecoder * dec, gint weight,
4788 GQuark domain, gint code, gchar * txt, gchar * dbg, const gchar * file,
4789 const gchar * function, gint line)
4792 GST_WARNING_OBJECT (dec, "error: %s", txt);
4794 GST_WARNING_OBJECT (dec, "error: %s", dbg);
4795 dec->priv->error_count += weight;
4796 dec->priv->discont = TRUE;
4797 if (dec->priv->max_errors >= 0 &&
4798 dec->priv->error_count > dec->priv->max_errors) {
4799 gst_element_message_full (GST_ELEMENT (dec), GST_MESSAGE_ERROR,
4800 domain, code, txt, dbg, file, function, line);
4801 return GST_FLOW_ERROR;
4810 * gst_video_decoder_set_max_errors:
4811 * @dec: a #GstVideoDecoder
4812 * @num: max tolerated errors
4814 * Sets the number of tolerated decoder errors, where a tolerated one is only
4815 * warned about, but exceeding the tolerated count leads to a fatal error.
4816 * You can set -1 to never return fatal errors. The default is
4817 * GST_VIDEO_DECODER_MAX_ERRORS.
4819 * The '-1' option was added in 1.4.
4822 gst_video_decoder_set_max_errors (GstVideoDecoder * dec, gint num)
4824 g_return_if_fail (GST_IS_VIDEO_DECODER (dec));
4826 dec->priv->max_errors = num;
4830 * gst_video_decoder_get_max_errors:
4831 * @dec: a #GstVideoDecoder
4833 * Returns: currently configured decoder tolerated error count.
4836 gst_video_decoder_get_max_errors (GstVideoDecoder * dec)
4838 g_return_val_if_fail (GST_IS_VIDEO_DECODER (dec), 0);
4840 return dec->priv->max_errors;
4844 * gst_video_decoder_set_needs_format:
4845 * @dec: a #GstVideoDecoder
4846 * @enabled: new state
4848 * Configures decoder format needs. If enabled, subclass needs to be
4849 * negotiated with format caps before it can process any data. It will then
4850 * never be handed any data before it has been configured.
4851 * Otherwise, it might be handed data without having been configured and
4852 * is then expected to be able to do so, either by default
4853 * or based on the input data.
4858 gst_video_decoder_set_needs_format (GstVideoDecoder * dec, gboolean enabled)
4860 g_return_if_fail (GST_IS_VIDEO_DECODER (dec));
4862 dec->priv->needs_format = enabled;
4866 * gst_video_decoder_get_needs_format:
4867 * @dec: a #GstVideoDecoder
4869 * Queries decoder required format handling.
4871 * Returns: %TRUE if required format handling is enabled.
4876 gst_video_decoder_get_needs_format (GstVideoDecoder * dec)
4880 g_return_val_if_fail (GST_IS_VIDEO_DECODER (dec), FALSE);
4882 result = dec->priv->needs_format;
4888 * gst_video_decoder_set_packetized:
4889 * @decoder: a #GstVideoDecoder
4890 * @packetized: whether the input data should be considered as packetized.
4892 * Allows the baseclass to consider input data as packetized or not. If the
4893 * input is packetized, then the @parse method will not be called.
4896 gst_video_decoder_set_packetized (GstVideoDecoder * decoder,
4897 gboolean packetized)
4899 decoder->priv->packetized = packetized;
4903 * gst_video_decoder_get_packetized:
4904 * @decoder: a #GstVideoDecoder
4906 * Queries whether input data is considered packetized or not by the
4909 * Returns: TRUE if input data is considered packetized.
4912 gst_video_decoder_get_packetized (GstVideoDecoder * decoder)
4914 return decoder->priv->packetized;
4918 * gst_video_decoder_have_last_subframe:
4919 * @decoder: a #GstVideoDecoder
4920 * @frame: (transfer none): the #GstVideoCodecFrame to update
4922 * Indicates that the last subframe has been processed by the decoder
4923 * in @frame. This will release the current frame in the video decoder,
4924 * allowing it to receive new frames from upstream elements. This method
4925 * must be called in the subclass @handle_frame callback.
4927 * Returns: a #GstFlowReturn, usually GST_FLOW_OK.
4932 gst_video_decoder_have_last_subframe (GstVideoDecoder * decoder,
4933 GstVideoCodecFrame * frame)
4935 g_return_val_if_fail (gst_video_decoder_get_subframe_mode (decoder),
4937 /* unref once from the list */
4938 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
4939 if (decoder->priv->current_frame == frame) {
4940 gst_video_codec_frame_unref (decoder->priv->current_frame);
4941 decoder->priv->current_frame = NULL;
4943 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
4949 * gst_video_decoder_set_subframe_mode:
4950 * @decoder: a #GstVideoDecoder
4951 * @subframe_mode: whether the input data should be considered as subframes.
4953 * If this is set to TRUE, it informs the base class that the subclass
4954 * can receive the data at a granularity lower than one frame.
4956 * Note that in this mode, the subclass has two options. It can either
4957 * require the presence of a GST_VIDEO_BUFFER_FLAG_MARKER to mark the
4958 * end of a frame. Or it can operate in such a way that it will decode
4959 * a single frame at a time. In this second case, every buffer that
4960 * arrives to the element is considered part of the same frame until
4961 * gst_video_decoder_finish_frame() is called.
4963 * In either case, the same #GstVideoCodecFrame will be passed to the
4964 * GstVideoDecoderClass:handle_frame vmethod repeatedly with a
4965 * different GstVideoCodecFrame:input_buffer every time until the end of the
4966 * frame has been signaled using either method.
4967 * This method must be called during the decoder subclass @set_format call.
4972 gst_video_decoder_set_subframe_mode (GstVideoDecoder * decoder,
4973 gboolean subframe_mode)
4975 decoder->priv->subframe_mode = subframe_mode;
4979 * gst_video_decoder_get_subframe_mode:
4980 * @decoder: a #GstVideoDecoder
4982 * Queries whether input data is considered as subframes or not by the
4983 * base class. If FALSE, each input buffer will be considered as a full
4986 * Returns: TRUE if input data is considered as sub frames.
4991 gst_video_decoder_get_subframe_mode (GstVideoDecoder * decoder)
4993 return decoder->priv->subframe_mode;
4997 * gst_video_decoder_get_input_subframe_index:
4998 * @decoder: a #GstVideoDecoder
4999 * @frame: (transfer none): the #GstVideoCodecFrame to update
5001 * Queries the number of the last subframe received by
5002 * the decoder baseclass in the @frame.
5004 * Returns: the current subframe index received in subframe mode, 1 otherwise.
5009 gst_video_decoder_get_input_subframe_index (GstVideoDecoder * decoder,
5010 GstVideoCodecFrame * frame)
5012 return frame->abidata.ABI.num_subframes;
5016 * gst_video_decoder_get_processed_subframe_index:
5017 * @decoder: a #GstVideoDecoder
5018 * @frame: (transfer none): the #GstVideoCodecFrame to update
5020 * Queries the number of subframes in the frame processed by
5021 * the decoder baseclass.
5023 * Returns: the number of subframes processed so far, in subframe mode.
5028 gst_video_decoder_get_processed_subframe_index (GstVideoDecoder * decoder,
5029 GstVideoCodecFrame * frame)
5031 return frame->abidata.ABI.subframes_processed;
5035 * gst_video_decoder_set_estimate_rate:
5036 * @dec: a #GstVideoDecoder
5037 * @enabled: whether to enable byte to time conversion
5039 * Allows the baseclass to perform estimated byte-to-time conversion.
5042 gst_video_decoder_set_estimate_rate (GstVideoDecoder * dec, gboolean enabled)
5044 g_return_if_fail (GST_IS_VIDEO_DECODER (dec));
5046 dec->priv->do_estimate_rate = enabled;
5050 * gst_video_decoder_get_estimate_rate:
5051 * @dec: a #GstVideoDecoder
5053 * Returns: currently configured byte to time conversion setting
5056 gst_video_decoder_get_estimate_rate (GstVideoDecoder * dec)
5058 g_return_val_if_fail (GST_IS_VIDEO_DECODER (dec), 0);
5060 return dec->priv->do_estimate_rate;
5064 * gst_video_decoder_set_latency:
5065 * @decoder: a #GstVideoDecoder
5066 * @min_latency: minimum latency
5067 * @max_latency: maximum latency
5069 * Lets #GstVideoDecoder sub-classes tell the baseclass what the decoder latency
5070 * is. If the provided values have changed from previously provided ones, this will
5071 * also post a LATENCY message on the bus so the pipeline can reconfigure its
5075 gst_video_decoder_set_latency (GstVideoDecoder * decoder,
5076 GstClockTime min_latency, GstClockTime max_latency)
5078 gboolean post_message = FALSE;
5079 g_return_if_fail (GST_CLOCK_TIME_IS_VALID (min_latency));
5080 g_return_if_fail (max_latency >= min_latency);
5082 GST_DEBUG_OBJECT (decoder,
5083 "min_latency:%" GST_TIME_FORMAT " max_latency:%" GST_TIME_FORMAT,
5084 GST_TIME_ARGS (min_latency), GST_TIME_ARGS (max_latency));
5086 GST_OBJECT_LOCK (decoder);
5087 if (decoder->priv->min_latency != min_latency) {
5088 decoder->priv->min_latency = min_latency;
5089 post_message = TRUE;
5091 if (decoder->priv->max_latency != max_latency) {
5092 decoder->priv->max_latency = max_latency;
5093 post_message = TRUE;
5095 if (!decoder->priv->posted_latency_msg) {
5096 decoder->priv->posted_latency_msg = TRUE;
5097 post_message = TRUE;
5099 GST_OBJECT_UNLOCK (decoder);
5102 gst_element_post_message (GST_ELEMENT_CAST (decoder),
5103 gst_message_new_latency (GST_OBJECT_CAST (decoder)));
5107 * gst_video_decoder_get_latency:
5108 * @decoder: a #GstVideoDecoder
5109 * @min_latency: (out) (optional): address of variable in which to store the
5110 * configured minimum latency, or %NULL
5111 * @max_latency: (out) (optional): address of variable in which to store the
5112 * configured maximum latency, or %NULL
5114 * Query the configured decoder latency. Results will be returned via
5115 * @min_latency and @max_latency.
5118 gst_video_decoder_get_latency (GstVideoDecoder * decoder,
5119 GstClockTime * min_latency, GstClockTime * max_latency)
5121 GST_OBJECT_LOCK (decoder);
5123 *min_latency = decoder->priv->min_latency;
5125 *max_latency = decoder->priv->max_latency;
5126 GST_OBJECT_UNLOCK (decoder);
5130 * gst_video_decoder_merge_tags:
5131 * @decoder: a #GstVideoDecoder
5132 * @tags: (nullable): a #GstTagList to merge, or NULL to unset
5133 * previously-set tags
5134 * @mode: the #GstTagMergeMode to use, usually #GST_TAG_MERGE_REPLACE
5136 * Sets the video decoder tags and how they should be merged with any
5137 * upstream stream tags. This will override any tags previously-set
5138 * with gst_video_decoder_merge_tags().
5140 * Note that this is provided for convenience, and the subclass is
5141 * not required to use this and can still do tag handling on its own.
5146 gst_video_decoder_merge_tags (GstVideoDecoder * decoder,
5147 const GstTagList * tags, GstTagMergeMode mode)
5149 g_return_if_fail (GST_IS_VIDEO_DECODER (decoder));
5150 g_return_if_fail (tags == NULL || GST_IS_TAG_LIST (tags));
5151 g_return_if_fail (tags == NULL || mode != GST_TAG_MERGE_UNDEFINED);
5153 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
5154 if (decoder->priv->tags != tags) {
5155 if (decoder->priv->tags) {
5156 gst_tag_list_unref (decoder->priv->tags);
5157 decoder->priv->tags = NULL;
5158 decoder->priv->tags_merge_mode = GST_TAG_MERGE_APPEND;
5161 decoder->priv->tags = gst_tag_list_ref ((GstTagList *) tags);
5162 decoder->priv->tags_merge_mode = mode;
5165 GST_DEBUG_OBJECT (decoder, "set decoder tags to %" GST_PTR_FORMAT, tags);
5166 decoder->priv->tags_changed = TRUE;
5168 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
5172 * gst_video_decoder_get_buffer_pool:
5173 * @decoder: a #GstVideoDecoder
5175 * Returns: (transfer full) (nullable): the instance of the #GstBufferPool used
5176 * by the decoder; unref it after use
5179 gst_video_decoder_get_buffer_pool (GstVideoDecoder * decoder)
5181 g_return_val_if_fail (GST_IS_VIDEO_DECODER (decoder), NULL);
5183 if (decoder->priv->pool)
5184 return gst_object_ref (decoder->priv->pool);
5190 * gst_video_decoder_get_allocator:
5191 * @decoder: a #GstVideoDecoder
5192 * @allocator: (out) (optional) (nullable) (transfer full): the #GstAllocator
5194 * @params: (out) (optional) (transfer full): the
5195 * #GstAllocationParams of @allocator
5197 * Lets #GstVideoDecoder sub-classes know the memory @allocator
5198 * used by the base class and its @params.
5200 * Unref the @allocator after use.
5203 gst_video_decoder_get_allocator (GstVideoDecoder * decoder,
5204 GstAllocator ** allocator, GstAllocationParams * params)
5206 g_return_if_fail (GST_IS_VIDEO_DECODER (decoder));
5209 *allocator = decoder->priv->allocator ?
5210 gst_object_ref (decoder->priv->allocator) : NULL;
5213 *params = decoder->priv->params;
5217 * gst_video_decoder_set_use_default_pad_acceptcaps:
5218 * @decoder: a #GstVideoDecoder
5219 * @use: if the default pad accept-caps query handling should be used
5221 * Lets #GstVideoDecoder sub-classes decide if they want the sink pad
5222 * to use the default pad query handler to reply to accept-caps queries.
5224 * By setting this to true it is possible to further customize the default
5225 * handler with %GST_PAD_SET_ACCEPT_INTERSECT and
5226 * %GST_PAD_SET_ACCEPT_TEMPLATE
5231 gst_video_decoder_set_use_default_pad_acceptcaps (GstVideoDecoder * decoder,
5234 decoder->priv->use_default_pad_acceptcaps = use;
5238 gst_video_decoder_request_sync_point_internal (GstVideoDecoder * dec,
5239 GstClockTime deadline, GstVideoDecoderRequestSyncPointFlags flags)
5241 GstEvent *fku = NULL;
5242 GstVideoDecoderPrivate *priv;
5244 g_return_if_fail (GST_IS_VIDEO_DECODER (dec));
5248 GST_OBJECT_LOCK (dec);
5250 /* Check if we're allowed to send a new force-keyunit event.
5251 * frame->deadline is set to the running time of the PTS. */
5252 if (priv->min_force_key_unit_interval == 0 ||
5253 deadline == GST_CLOCK_TIME_NONE ||
5254 (priv->min_force_key_unit_interval != GST_CLOCK_TIME_NONE &&
5255 (priv->last_force_key_unit_time == GST_CLOCK_TIME_NONE
5256 || (priv->last_force_key_unit_time +
5257 priv->min_force_key_unit_interval <= deadline)))) {
5258 GST_DEBUG_OBJECT (dec,
5259 "Requesting a new key-unit for frame with deadline %" GST_TIME_FORMAT,
5260 GST_TIME_ARGS (deadline));
5262 gst_video_event_new_upstream_force_key_unit (GST_CLOCK_TIME_NONE, FALSE,
5264 priv->last_force_key_unit_time = deadline;
5266 GST_DEBUG_OBJECT (dec,
5267 "Can't request a new key-unit for frame with deadline %"
5268 GST_TIME_FORMAT, GST_TIME_ARGS (deadline));
5270 priv->request_sync_point_flags |= flags;
5271 /* We don't know yet the frame number of the sync point so set it to a
5272 * frame number higher than any allowed frame number */
5273 priv->request_sync_point_frame_number = REQUEST_SYNC_POINT_PENDING;
5274 GST_OBJECT_UNLOCK (dec);
5277 gst_pad_push_event (dec->sinkpad, fku);
5281 * gst_video_decoder_request_sync_point:
5282 * @dec: a #GstVideoDecoder
5283 * @frame: a #GstVideoCodecFrame
5284 * @flags: #GstVideoDecoderRequestSyncPointFlags
5286 * Allows the #GstVideoDecoder subclass to request from the base class that
5287 * a new sync point should be requested from upstream, and that @frame was the frame
5288 * when the subclass noticed that a new sync point is required. A reason for
5289 * the subclass to do this could be missing reference frames, for example.
5291 * The base class will then request a new sync point from upstream as long as
5292 * the time that passed since the last one is exceeding
5293 * #GstVideoDecoder:min-force-key-unit-interval.
5295 * The subclass can signal via @flags how the frames until the next sync point
5296 * should be handled:
5298 * * If %GST_VIDEO_DECODER_REQUEST_SYNC_POINT_DISCARD_INPUT is selected then
5299 * all following input frames until the next sync point are discarded.
5300 * This can be useful if the lack of a sync point will prevent all further
5301 * decoding and the decoder implementation is not very robust in handling
5302 * missing reference frames.
5303 * * If %GST_VIDEO_DECODER_REQUEST_SYNC_POINT_CORRUPT_OUTPUT is selected
5304 * then all output frames following @frame are marked as corrupted via
5305 * %GST_BUFFER_FLAG_CORRUPTED. Corrupted frames can be automatically
5306 * dropped by the base class, see #GstVideoDecoder:discard-corrupted-frames.
5307 * Subclasses can manually mark frames as corrupted via %GST_VIDEO_CODEC_FRAME_FLAG_CORRUPTED
5308 * before calling gst_video_decoder_finish_frame().
5313 gst_video_decoder_request_sync_point (GstVideoDecoder * dec,
5314 GstVideoCodecFrame * frame, GstVideoDecoderRequestSyncPointFlags flags)
5316 g_return_if_fail (GST_IS_VIDEO_DECODER (dec));
5317 g_return_if_fail (frame != NULL);
5319 gst_video_decoder_request_sync_point_internal (dec, frame->deadline, flags);
5323 * gst_video_decoder_set_needs_sync_point:
5324 * @dec: a #GstVideoDecoder
5325 * @enabled: new state
5327 * Configures whether the decoder requires a sync point before it starts
5328 * outputting data in the beginning. If enabled, the base class will discard
5329 * all non-sync point frames in the beginning and after a flush, and will not
5330 * pass them to the subclass.
5332 * If the first frame is not a sync point, the base class will request a sync
5333 * point via the force-key-unit event.
5338 gst_video_decoder_set_needs_sync_point (GstVideoDecoder * dec, gboolean enabled)
5340 g_return_if_fail (GST_IS_VIDEO_DECODER (dec));
5342 dec->priv->needs_sync_point = enabled;
5346 * gst_video_decoder_get_needs_sync_point:
5347 * @dec: a #GstVideoDecoder
5349 * Queries if the decoder requires a sync point before it starts outputting
5350 * data in the beginning.
5352 * Returns: %TRUE if a sync point is required in the beginning.
5357 gst_video_decoder_get_needs_sync_point (GstVideoDecoder * dec)
5361 g_return_val_if_fail (GST_IS_VIDEO_DECODER (dec), FALSE);
5363 result = dec->priv->needs_sync_point;