 * Copyright (C) 2008 David Schleef <ds@schleef.org>
 * Copyright (C) 2011 Mark Nauwelaerts <mark.nauwelaerts@collabora.co.uk>.
 * Copyright (C) 2011 Nokia Corporation. All rights reserved.
 *   Contact: Stefan Kost <stefan.kost@nokia.com>
 * Copyright (C) 2012 Collabora Ltd.
 *   Author : Edward Hervey <edward@collabora.com>
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Library General Public
 * License as published by the Free Software Foundation; either
 * version 2 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Library General Public License for more details.
 *
 * You should have received a copy of the GNU Library General Public
 * License along with this library; if not, write to the
 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
 * Boston, MA 02111-1307, USA.
 */
 * SECTION:gstvideodecoder
 * @short_description: Base class for video decoders
 *
 * This base class is for video decoders turning encoded data into raw video
 * frames.
 *
 * The GstVideoDecoder base class and derived subclasses should cooperate as
 * follows:
 *
 * <itemizedlist><title>Configuration</title>
 *
 * Initially, GstVideoDecoder calls @start when the decoder element
 * is activated, which allows the subclass to perform any global setup.
 *
 * GstVideoDecoder calls @set_format to inform the subclass of caps
 * describing input video data that it is about to receive, possibly
 * including configuration data.
 * While unlikely, it might be called more than once if changing input
 * parameters require reconfiguration.
 *
 * Incoming data buffers are processed as needed, as described in the Data
 * Processing section below.
 *
 * GstVideoDecoder calls @stop at the end of all processing.
 *
 * <title>Data processing</title>
 *
 * The base class gathers input data and optionally allows the subclass
 * to parse this into subsequently manageable chunks, typically
 * corresponding to and referred to as 'frames'.
 *
 * Each input frame is provided in turn to the subclass' @handle_frame
 * callback. Ownership of the frame is given to the @handle_frame callback.
 *
 * If codec processing results in decoded data, the subclass should call
 * @gst_video_decoder_finish_frame to have decoded data pushed
 * downstream. Otherwise, the subclass must call @gst_video_decoder_drop_frame
 * to allow the base class to do timestamp and offset tracking, and possibly
 * to requeue the frame for a later attempt in the case of reverse playback.
 *
 * <itemizedlist><title>Shutdown phase</title>
 *
 * The GstVideoDecoder class calls @stop to inform the subclass that data
 * parsing will be stopped.
 *
 * <itemizedlist><title>Additional Notes</title>
 *
 * <itemizedlist><title>Seeking/Flushing</title>
 *
 * When the pipeline is seeked or otherwise flushed, the subclass is informed
 * via a call to its @reset callback, with the hard parameter set to TRUE.
 * This indicates the subclass should drop any internal data queues and
 * timestamps, and prepare for a fresh set of buffers to arrive for parsing
 * and decoding.
 *
 * <itemizedlist><title>End Of Stream</title>
 *
 * At end-of-stream, the subclass @parse function may be called some final
 * times with the at_eos parameter set to TRUE, indicating that the element
 * should not expect any more data to arrive, and that it should parse any
 * remaining frames and call gst_video_decoder_have_frame() if possible.
 *
 * The subclass is responsible for providing pad template caps for
 * source and sink pads. The pads need to be named "sink" and "src". It also
 * needs to provide information about the output caps when they are known.
 * This may be when the base class calls the subclass' @set_format function,
 * though it might be during decoding, before calling
 * @gst_video_decoder_finish_frame. This is done via
 * @gst_video_decoder_set_output_state.
 *
 * The subclass is also responsible for providing (presentation) timestamps
 * (likely based on corresponding input ones). If that is not applicable
 * or possible, the base class provides limited framerate-based interpolation.
 *
 * Similarly, the base class provides some limited (legacy) seeking support
 * if specifically requested by the subclass, as full-fledged support
 * should rather be left to an upstream demuxer, parser or the like. This
 * simple approach caters for seeking and duration reporting using estimated
 * input bitrates. To enable it, a subclass should call
 * @gst_video_decoder_set_estimate_rate to enable handling of incoming
 * byte-streams.
 * The base class provides some support for reverse playback, in particular
 * in case incoming data is not packetized or upstream does not provide
 * fragments on keyframe boundaries. However, the subclass should then be
 * prepared for the parsing and frame processing stages to occur separately
 * (in normal forward processing, the latter immediately follows the former).
 * The subclass also needs to ensure the parsing stage properly marks
 * keyframes, unless it knows the upstream elements will do so properly for
 * incoming data.
 *
 * The bare minimum that a functional subclass needs to implement is:
 *
 * <listitem><para>Provide pad templates</para></listitem>
 *
 * Inform the base class of output caps via
 * @gst_video_decoder_set_output_state
 *
 * Parse input data, if it is not considered packetized from upstream.
 * Data will be provided to @parse, which should invoke
 * @gst_video_decoder_add_to_frame and @gst_video_decoder_have_frame to
 * separate the data belonging to each video frame.
 *
 * Accept data in @handle_frame and provide decoded results to
 * @gst_video_decoder_finish_frame, or call @gst_video_decoder_drop_frame.
 *
 * * Add a flag/boolean for I-frame-only/image decoders so we can do extra
 *   features, like applying QoS on input (as opposed to after the frame is
 *   decoded).
 * * Add a flag/boolean for decoders that require keyframes, so the base
 *   class can automatically discard non-keyframes before one has arrived.
 * * Detect reordered frames/timestamps and fix the pts/dts.
 * * Support for GstIndex (or shall we not care?)
 * * Calculate the actual latency based on input/output timestamps and frame
 *   numbers, and if it exceeds the recorded one, save it and emit a
 *   GST_MESSAGE_LATENCY.
 * * Emit a latency message when it changes.
 */
/* Implementation notes:
 * The video decoder base class operates in 2 primary processing modes,
 * depending on whether forward or reverse playback is requested.
 *
 * Forward playback:
 *   * Incoming buffer -> @parse() -> add_to_frame()/have_frame() ->
 *     handle_frame() -> push downstream
 *
 * Reverse playback is more complicated, since it involves gathering incoming
 * data regions as we loop backwards through the upstream data. The processing
 * concept (using incoming buffers as containing one frame each to simplify
 * things) is:
 *
 * Upstream data we want to play:
 *  Buffer encoded order:   1  2  3  4  5  6  7  8  9  EOS
 *  Groupings:              AAAAAAA  BBBBBBB  CCCCCCC
 *
 *  Buffer reception order: 7  8  9  4  5  6  1  2  3  EOS
 *  Discont flag:           D        D        D
 *
 * - Each Discont marks a discont in the decoding order.
 * - The keyframes mark where we can start decoding.
 *
 * Initially, we prepend incoming buffers to the gather queue. Whenever the
 * discont flag is set on an incoming buffer, the gather queue is flushed out
 * before the new buffer is collected.
 *
 * The above data will be accumulated in the gather queue like this:
 *
 *   gather queue:   9  8  7
 *
 * When buffer 4 is received (with a DISCONT), we flush the gather queue like
 * this:
 *
 *   take head of queue and prepend to parse queue (this reverses the
 *   sequence, so the parse queue is 7 -> 8 -> 9)
 *
 * Next, we process the parse queue, which now contains all unparsed packets
 * (including any leftover ones from the previous decode section).
 *
 * For each buffer now in the parse queue:
 *   Call the subclass parse function, prepending each resulting frame to
 *   the parse_gather queue. Buffers which precede the first one that
 *   produces a parsed frame are retained in the parse queue for
 *   re-processing on the next cycle of parsing.
 *
 * The parse_gather queue now contains frame objects ready for decoding,
 * in reverse order.
 *   parse_gather: 9 -> 8 -> 7
 *
 * while (parse_gather)
 *   Take the head of the queue and prepend it to the decode queue.
 *   If the frame was a keyframe, process the decode queue.
 *   The decode queue is now 7-8-9.
 *
 * Processing the decode queue results in frames with attached output buffers
 * stored in the 'output_queue', ready for outputting in reverse order.
 *
 * After we have flushed the gather queue and parsed it, we add 4 to the
 * (now empty) gather queue. We get the following situation:
 *
 *   gather queue:   4
 *   decode queue:   7  8  9
 *
 * After we have received 5 (keyframe) and 6:
 *
 *   gather queue:   6  5  4
 *   decode queue:   7  8  9
 *
 * When we receive 1 (DISCONT), which triggers a flush of the gather queue:
 *
 *   Copy head of the gather queue (6) to the decode queue:
 *
 *     gather queue:   5  4
 *     decode queue:   6  7  8  9
 *
 *   Copy head of the gather queue (5) to the decode queue. This is a
 *   keyframe, so we can start decoding:
 *
 *     gather queue:   4
 *     decode queue:   5  6  7  8  9
 *
 *   Decode the frames in the decode queue, storing the raw decoded data in
 *   the output queue: we take the head of the decode queue and prepend the
 *   decoded result in the output queue:
 *
 *     output queue:   9  8  7  6  5
 *
 *   Now output all the frames in the output queue, picking a frame from the
 *   head of the queue each time.
 *
 *   Copy head of the gather queue (4) to the decode queue; we have flushed
 *   the gather queue and can now store the input buffer in the gather queue:
 *
 *     gather queue:   1
 *     decode queue:   4
 *
 * When we receive EOS, the queues look like:
 *
 *   gather queue:   3  2  1
 *   decode queue:   4
 *
 * Fill the decode queue; the first keyframe we copy is 2:
 *
 *   decode queue:   2  3  4
 *
 * Decoding and outputting as above yields:
 *
 *   output queue:   4  3  2
 *
 * Leftover buffer 1 cannot be decoded and must be discarded.
 */
#include "gstvideodecoder.h"
#include "gstvideoutils.h"

#include <gst/video/video-event.h>
#include <gst/video/gstvideopool.h>
#include <gst/video/gstvideometa.h>

GST_DEBUG_CATEGORY (videodecoder_debug);
#define GST_CAT_DEFAULT videodecoder_debug

#define GST_VIDEO_DECODER_GET_PRIVATE(obj)  \
    (G_TYPE_INSTANCE_GET_PRIVATE ((obj), GST_TYPE_VIDEO_DECODER, \
        GstVideoDecoderPrivate))
struct _GstVideoDecoderPrivate
{
  /* FIXME introduce a context ? */

  GstAllocator *allocator;
  GstAllocationParams params;

  GstAdapter *input_adapter;
  /* assembles current frame */
  GstAdapter *output_adapter;

  /* Whether we attempt to convert newsegment from bytes to
   * time using a bitrate estimation */
  gboolean do_estimate_rate;

  /* Whether input is considered packetized or not */
  gboolean packetized;

  /* ... being tracked here;
   * only available during parsing */
  GstVideoCodecFrame *current_frame;
  /* events that should apply to the current frame */
  GList *current_frame_events;

  /* relative offset of input data */
  guint64 input_offset;
  /* relative offset of frame */
  guint64 frame_offset;
  /* tracking ts and offsets */

  /* last outgoing ts */
  GstClockTime last_timestamp_out;
  /* incoming pts - dts */
  GstClockTime pts_delta;
  gboolean reordered_output;

  /* reverse playback */

  /* collected parsed frames */

  /* frames to be handled == decoded */

  /* collected output - of buffer objects, not frames */
  GList *output_queued;

  /* base_picture_number is the picture number of the reference picture */
  guint64 base_picture_number;
  /* combine with base_picture_number, framerate and calcs to yield
   * (presentation) ts */
  GstClockTime base_timestamp;

  /* FIXME : reorder_depth is never set */

  int distance_from_sync;

  guint32 system_frame_number;
  guint32 decode_frame_number;

  GList *frames;                /* Protected with OBJECT_LOCK */
  GstVideoCodecState *input_state;
  GstVideoCodecState *output_state;     /* OBJECT_LOCK and STREAM_LOCK */
  gboolean output_state_changed;

  gdouble proportion;           /* OBJECT_LOCK */
  GstClockTime earliest_time;   /* OBJECT_LOCK */
  GstClockTime qos_frame_duration;      /* OBJECT_LOCK */

  /* qos messages: frames dropped/processed */

  /* Outgoing byte size ? */

  gboolean tags_changed;
};
static GstElementClass *parent_class = NULL;
static void gst_video_decoder_class_init (GstVideoDecoderClass * klass);
static void gst_video_decoder_init (GstVideoDecoder * dec,
    GstVideoDecoderClass * klass);

static void gst_video_decoder_finalize (GObject * object);

static gboolean gst_video_decoder_setcaps (GstVideoDecoder * dec,
    GstCaps * caps);
static gboolean gst_video_decoder_sink_event (GstPad * pad, GstObject * parent,
    GstEvent * event);
static gboolean gst_video_decoder_src_event (GstPad * pad, GstObject * parent,
    GstEvent * event);
static GstFlowReturn gst_video_decoder_chain (GstPad * pad, GstObject * parent,
    GstBuffer * buf);
static gboolean gst_video_decoder_sink_query (GstPad * pad, GstObject * parent,
    GstQuery * query);
static GstStateChangeReturn gst_video_decoder_change_state (GstElement *
    element, GstStateChange transition);
static gboolean gst_video_decoder_src_query (GstPad * pad, GstObject * parent,
    GstQuery * query);
static void gst_video_decoder_reset (GstVideoDecoder * decoder, gboolean full);

static GstFlowReturn gst_video_decoder_decode_frame (GstVideoDecoder * decoder,
    GstVideoCodecFrame * frame);

static void gst_video_decoder_release_frame (GstVideoDecoder * dec,
    GstVideoCodecFrame * frame);
static GstClockTime gst_video_decoder_get_frame_duration (GstVideoDecoder *
    decoder, GstVideoCodecFrame * frame);
static GstVideoCodecFrame *gst_video_decoder_new_frame (GstVideoDecoder *
    decoder);
static GstFlowReturn gst_video_decoder_clip_and_push_buf (GstVideoDecoder *
    decoder, GstBuffer * buf);
static GstFlowReturn gst_video_decoder_flush_parse (GstVideoDecoder * dec,
    gboolean at_eos);

static void gst_video_decoder_clear_queues (GstVideoDecoder * dec);

static gboolean gst_video_decoder_sink_event_default (GstVideoDecoder * decoder,
    GstEvent * event);
static gboolean gst_video_decoder_src_event_default (GstVideoDecoder * decoder,
    GstEvent * event);
static gboolean gst_video_decoder_decide_allocation_default (GstVideoDecoder *
    decoder, GstQuery * query);
static gboolean gst_video_decoder_propose_allocation_default (GstVideoDecoder *
    decoder, GstQuery * query);
static gboolean gst_video_decoder_negotiate_default (GstVideoDecoder * decoder);
/* we can't use G_DEFINE_ABSTRACT_TYPE because we need the klass in the _init
 * method to get to the padtemplates */
GType
gst_video_decoder_get_type (void)
{
  static volatile gsize type = 0;

  if (g_once_init_enter (&type)) {
    GType _type;
    static const GTypeInfo info = {
      sizeof (GstVideoDecoderClass),
      NULL,
      NULL,
      (GClassInitFunc) gst_video_decoder_class_init,
      NULL,
      NULL,
      sizeof (GstVideoDecoder),
      0,
      (GInstanceInitFunc) gst_video_decoder_init,
    };

    _type = g_type_register_static (GST_TYPE_ELEMENT,
        "GstVideoDecoder", &info, G_TYPE_FLAG_ABSTRACT);
    g_once_init_leave (&type, _type);
  }
  return type;
}
static void
gst_video_decoder_class_init (GstVideoDecoderClass * klass)
{
  GObjectClass *gobject_class;
  GstElementClass *gstelement_class;

  gobject_class = G_OBJECT_CLASS (klass);
  gstelement_class = GST_ELEMENT_CLASS (klass);

  GST_DEBUG_CATEGORY_INIT (videodecoder_debug, "videodecoder", 0,
      "Base Video Decoder");

  parent_class = g_type_class_peek_parent (klass);
  g_type_class_add_private (klass, sizeof (GstVideoDecoderPrivate));

  gobject_class->finalize = gst_video_decoder_finalize;

  gstelement_class->change_state =
      GST_DEBUG_FUNCPTR (gst_video_decoder_change_state);

  klass->sink_event = gst_video_decoder_sink_event_default;
  klass->src_event = gst_video_decoder_src_event_default;
  klass->decide_allocation = gst_video_decoder_decide_allocation_default;
  klass->propose_allocation = gst_video_decoder_propose_allocation_default;
  klass->negotiate = gst_video_decoder_negotiate_default;
}
static void
gst_video_decoder_init (GstVideoDecoder * decoder, GstVideoDecoderClass * klass)
{
  GstPadTemplate *pad_template;
  GstPad *pad;

  GST_DEBUG_OBJECT (decoder, "gst_video_decoder_init");

  decoder->priv = GST_VIDEO_DECODER_GET_PRIVATE (decoder);

  pad_template =
      gst_element_class_get_pad_template (GST_ELEMENT_CLASS (klass), "sink");
  g_return_if_fail (pad_template != NULL);

  decoder->sinkpad = pad = gst_pad_new_from_template (pad_template, "sink");

  gst_pad_set_chain_function (pad, GST_DEBUG_FUNCPTR (gst_video_decoder_chain));
  gst_pad_set_event_function (pad,
      GST_DEBUG_FUNCPTR (gst_video_decoder_sink_event));
  gst_pad_set_query_function (pad,
      GST_DEBUG_FUNCPTR (gst_video_decoder_sink_query));
  gst_element_add_pad (GST_ELEMENT (decoder), decoder->sinkpad);

  pad_template =
      gst_element_class_get_pad_template (GST_ELEMENT_CLASS (klass), "src");
  g_return_if_fail (pad_template != NULL);

  decoder->srcpad = pad = gst_pad_new_from_template (pad_template, "src");

  gst_pad_set_event_function (pad,
      GST_DEBUG_FUNCPTR (gst_video_decoder_src_event));
  gst_pad_set_query_function (pad,
      GST_DEBUG_FUNCPTR (gst_video_decoder_src_query));
  gst_pad_use_fixed_caps (pad);
  gst_element_add_pad (GST_ELEMENT (decoder), decoder->srcpad);

  gst_segment_init (&decoder->input_segment, GST_FORMAT_TIME);
  gst_segment_init (&decoder->output_segment, GST_FORMAT_TIME);

  g_rec_mutex_init (&decoder->stream_lock);

  decoder->priv->input_adapter = gst_adapter_new ();
  decoder->priv->output_adapter = gst_adapter_new ();
  decoder->priv->packetized = TRUE;

  gst_video_decoder_reset (decoder, TRUE);
}
static gboolean
gst_video_rawvideo_convert (GstVideoCodecState * state,
    GstFormat src_format, gint64 src_value,
    GstFormat * dest_format, gint64 * dest_value)
{
  gboolean res = FALSE;
  guint vidsize;
  guint fps_n, fps_d;

  g_return_val_if_fail (dest_format != NULL, FALSE);
  g_return_val_if_fail (dest_value != NULL, FALSE);

  if (src_format == *dest_format || src_value == 0 || src_value == -1) {
    *dest_value = src_value;
    res = TRUE;
  }

  vidsize = GST_VIDEO_INFO_SIZE (&state->info);
  fps_n = GST_VIDEO_INFO_FPS_N (&state->info);
  fps_d = GST_VIDEO_INFO_FPS_D (&state->info);

  if (src_format == GST_FORMAT_BYTES &&
      *dest_format == GST_FORMAT_DEFAULT && vidsize) {
    /* convert bytes to frames */
    *dest_value = gst_util_uint64_scale_int (src_value, 1, vidsize);
    res = TRUE;
  } else if (src_format == GST_FORMAT_DEFAULT &&
      *dest_format == GST_FORMAT_BYTES && vidsize) {
    /* convert frames to bytes */
    *dest_value = src_value * vidsize;
    res = TRUE;
  } else if (src_format == GST_FORMAT_DEFAULT &&
      *dest_format == GST_FORMAT_TIME && fps_n) {
    /* convert frames to time */
    *dest_value = gst_util_uint64_scale (src_value, GST_SECOND * fps_d, fps_n);
    res = TRUE;
  } else if (src_format == GST_FORMAT_TIME &&
      *dest_format == GST_FORMAT_DEFAULT && fps_d) {
    /* convert time to frames */
    *dest_value = gst_util_uint64_scale (src_value, fps_n, GST_SECOND * fps_d);
    res = TRUE;
  } else if (src_format == GST_FORMAT_TIME &&
      *dest_format == GST_FORMAT_BYTES && fps_d && vidsize) {
    /* convert time to bytes */
    *dest_value = gst_util_uint64_scale (src_value,
        fps_n * vidsize, GST_SECOND * fps_d);
    res = TRUE;
  } else if (src_format == GST_FORMAT_BYTES &&
      *dest_format == GST_FORMAT_TIME && fps_n && vidsize) {
    /* convert bytes to time */
    *dest_value = gst_util_uint64_scale (src_value,
        GST_SECOND * fps_d, fps_n * vidsize);
    res = TRUE;
  }

  return res;
}
static gboolean
gst_video_encoded_video_convert (gint64 bytes, gint64 time,
    GstFormat src_format, gint64 src_value, GstFormat * dest_format,
    gint64 * dest_value)
{
  gboolean res = FALSE;

  g_return_val_if_fail (dest_format != NULL, FALSE);
  g_return_val_if_fail (dest_value != NULL, FALSE);

  if (G_UNLIKELY (src_format == *dest_format || src_value == 0 ||
          src_value == -1)) {
    *dest_value = src_value;
    res = TRUE;
    goto exit;
  }

  if (bytes <= 0 || time <= 0) {
    GST_DEBUG ("not enough metadata yet to convert");
    goto exit;
  }

  switch (src_format) {
    case GST_FORMAT_BYTES:
      switch (*dest_format) {
        case GST_FORMAT_TIME:
          *dest_value = gst_util_uint64_scale (src_value, time, bytes);
          res = TRUE;
          break;
        default:
          res = FALSE;
      }
      break;
    case GST_FORMAT_TIME:
      switch (*dest_format) {
        case GST_FORMAT_BYTES:
          *dest_value = gst_util_uint64_scale (src_value, bytes, time);
          res = TRUE;
          break;
        default:
          res = FALSE;
      }
      break;
    default:
      GST_DEBUG ("unhandled conversion from %d to %d", src_format,
          *dest_format);
      res = FALSE;
  }

exit:
  return res;
}
static GstVideoCodecState *
_new_input_state (GstCaps * caps)
{
  GstVideoCodecState *state;
  GstStructure *structure;
  const GValue *codec_data;

  state = g_slice_new0 (GstVideoCodecState);
  state->ref_count = 1;
  gst_video_info_init (&state->info);
  if (G_UNLIKELY (!gst_video_info_from_caps (&state->info, caps)))
    goto parse_fail;
  state->caps = gst_caps_ref (caps);

  structure = gst_caps_get_structure (caps, 0);

  codec_data = gst_structure_get_value (structure, "codec_data");
  if (codec_data && G_VALUE_TYPE (codec_data) == GST_TYPE_BUFFER)
    state->codec_data = GST_BUFFER (g_value_dup_boxed (codec_data));

  return state;

parse_fail:
  {
    g_slice_free (GstVideoCodecState, state);
    return NULL;
  }
}
static GstVideoCodecState *
_new_output_state (GstVideoFormat fmt, guint width, guint height,
    GstVideoCodecState * reference)
{
  GstVideoCodecState *state;

  state = g_slice_new0 (GstVideoCodecState);
  state->ref_count = 1;
  gst_video_info_init (&state->info);
  gst_video_info_set_format (&state->info, fmt, width, height);

  if (reference) {
    GstVideoInfo *tgt, *ref;

    tgt = &state->info;
    ref = &reference->info;

    /* Copy over extra fields from reference state */
    tgt->interlace_mode = ref->interlace_mode;
    tgt->flags = ref->flags;
    tgt->chroma_site = ref->chroma_site;
    /* only copy values that are not unknown so that we don't override the
     * defaults. subclasses should really fill these in when they know. */
    if (ref->colorimetry.range)
      tgt->colorimetry.range = ref->colorimetry.range;
    if (ref->colorimetry.matrix)
      tgt->colorimetry.matrix = ref->colorimetry.matrix;
    if (ref->colorimetry.transfer)
      tgt->colorimetry.transfer = ref->colorimetry.transfer;
    if (ref->colorimetry.primaries)
      tgt->colorimetry.primaries = ref->colorimetry.primaries;
    GST_DEBUG ("reference par %d/%d fps %d/%d",
        ref->par_n, ref->par_d, ref->fps_n, ref->fps_d);
    tgt->par_n = ref->par_n;
    tgt->par_d = ref->par_d;
    tgt->fps_n = ref->fps_n;
    tgt->fps_d = ref->fps_d;
  }

  GST_DEBUG ("reference par %d/%d fps %d/%d",
      state->info.par_n, state->info.par_d,
      state->info.fps_n, state->info.fps_d);

  return state;
}
static gboolean
gst_video_decoder_setcaps (GstVideoDecoder * decoder, GstCaps * caps)
{
  GstVideoDecoderClass *decoder_class;
  GstVideoCodecState *state;
  gboolean ret = TRUE;

  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);

  GST_DEBUG_OBJECT (decoder, "setcaps %" GST_PTR_FORMAT, caps);

  state = _new_input_state (caps);

  if (G_UNLIKELY (state == NULL))
    goto parse_fail;

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  if (decoder_class->set_format)
    ret = decoder_class->set_format (decoder, state);

  if (!ret)
    goto refused_format;

  if (decoder->priv->input_state)
    gst_video_codec_state_unref (decoder->priv->input_state);
  decoder->priv->input_state = state;

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return ret;

  /* ERRORS */
parse_fail:
  {
    GST_WARNING_OBJECT (decoder, "Failed to parse caps");
    return FALSE;
  }

refused_format:
  {
    GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
    GST_WARNING_OBJECT (decoder, "Subclass refused caps");
    gst_video_codec_state_unref (state);
    return FALSE;
  }
}
static void
gst_video_decoder_finalize (GObject * object)
{
  GstVideoDecoder *decoder;

  decoder = GST_VIDEO_DECODER (object);

  GST_DEBUG_OBJECT (object, "finalize");

  g_rec_mutex_clear (&decoder->stream_lock);

  if (decoder->priv->input_adapter) {
    g_object_unref (decoder->priv->input_adapter);
    decoder->priv->input_adapter = NULL;
  }
  if (decoder->priv->output_adapter) {
    g_object_unref (decoder->priv->output_adapter);
    decoder->priv->output_adapter = NULL;
  }

  if (decoder->priv->input_state)
    gst_video_codec_state_unref (decoder->priv->input_state);
  if (decoder->priv->output_state)
    gst_video_codec_state_unref (decoder->priv->output_state);

  if (decoder->priv->pool) {
    gst_object_unref (decoder->priv->pool);
    decoder->priv->pool = NULL;
  }

  if (decoder->priv->allocator) {
    gst_object_unref (decoder->priv->allocator);
    decoder->priv->allocator = NULL;
  }

  G_OBJECT_CLASS (parent_class)->finalize (object);
}
/* hard == FLUSH, otherwise discont */
static GstFlowReturn
gst_video_decoder_flush (GstVideoDecoder * dec, gboolean hard)
{
  GstVideoDecoderClass *klass;
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn ret = GST_FLOW_OK;

  klass = GST_VIDEO_DECODER_GET_CLASS (dec);

  GST_LOG_OBJECT (dec, "flush hard %d", hard);

  /* Inform subclass */
  if (klass->reset)
    klass->reset (dec, hard);

  /* FIXME make some more distinction between hard and soft,
   * but subclass may not be prepared for that */
  /* FIXME perhaps also clear pending frames ?,
   * but again, subclass may still come up with one of those */

  /* TODO ? finish/drain some stuff */

  gst_segment_init (&dec->input_segment, GST_FORMAT_UNDEFINED);
  gst_segment_init (&dec->output_segment, GST_FORMAT_UNDEFINED);
  gst_video_decoder_clear_queues (dec);
  priv->error_count = 0;
  g_list_free_full (priv->current_frame_events,
      (GDestroyNotify) gst_event_unref);
  priv->current_frame_events = NULL;

  /* and get (re)set for the sequel */
  gst_video_decoder_reset (dec, FALSE);

  return ret;
}
static gboolean
gst_video_decoder_push_event (GstVideoDecoder * decoder, GstEvent * event)
{
  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_SEGMENT:
    {
      GstSegment segment;

      GST_VIDEO_DECODER_STREAM_LOCK (decoder);

      gst_event_copy_segment (event, &segment);

      GST_DEBUG_OBJECT (decoder, "segment %" GST_SEGMENT_FORMAT, &segment);

      if (segment.format != GST_FORMAT_TIME) {
        GST_DEBUG_OBJECT (decoder, "received non TIME newsegment");
        GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
        break;
      }

      decoder->output_segment = segment;
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      break;
    }
    default:
      break;
  }

  return gst_pad_push_event (decoder->srcpad, event);
}
static GstFlowReturn
gst_video_decoder_drain_out (GstVideoDecoder * dec, gboolean at_eos)
{
  GstVideoDecoderClass *decoder_class = GST_VIDEO_DECODER_GET_CLASS (dec);
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn ret = GST_FLOW_OK;

  GST_VIDEO_DECODER_STREAM_LOCK (dec);

  if (dec->input_segment.rate > 0.0) {
    /* Forward mode, if unpacketized, give the child class
     * a final chance to flush out packets */
    if (!priv->packetized) {
      while (ret == GST_FLOW_OK && gst_adapter_available (priv->input_adapter)) {
        if (priv->current_frame == NULL)
          priv->current_frame = gst_video_decoder_new_frame (dec);

        ret = decoder_class->parse (dec, priv->current_frame,
            priv->input_adapter, TRUE);
      }
    }
  } else {
    /* Reverse playback mode */
    ret = gst_video_decoder_flush_parse (dec, TRUE);
  }

  if (at_eos) {
    if (decoder_class->finish)
      ret = decoder_class->finish (dec);
  }

  GST_VIDEO_DECODER_STREAM_UNLOCK (dec);

  return ret;
}
static gboolean
gst_video_decoder_sink_event_default (GstVideoDecoder * decoder,
    GstEvent * event)
{
  GstVideoDecoderPrivate *priv;
  gboolean ret = FALSE;
  gboolean forward_immediate = FALSE;

  priv = decoder->priv;

  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_CAPS:
    {
      GstCaps *caps;

      gst_event_parse_caps (event, &caps);
      ret = TRUE;
      decoder->priv->do_caps = TRUE;
      gst_event_unref (event);
      event = NULL;
      break;
    }
    case GST_EVENT_EOS:
    {
      GstFlowReturn flow_ret = GST_FLOW_OK;

      flow_ret = gst_video_decoder_drain_out (decoder, TRUE);
      ret = (flow_ret == GST_FLOW_OK);
      /* Forward EOS immediately. This is required because no
       * buffer or serialized event will come after EOS and
       * nothing could trigger another _finish_frame() call.
       *
       * The subclass can override this behaviour by overriding
       * the ::sink_event() vfunc and not chaining up to the
       * parent class' ::sink_event() until a later time.
       */
      forward_immediate = TRUE;
      break;
    }
    case GST_EVENT_GAP:
    {
      GstFlowReturn flow_ret = GST_FLOW_OK;

      flow_ret = gst_video_decoder_drain_out (decoder, FALSE);
      ret = (flow_ret == GST_FLOW_OK);

      /* Forward GAP immediately. Everything is drained after
       * the GAP event and we can forward this event immediately
       * now without having buffers out of order.
       */
      forward_immediate = TRUE;
      break;
    }
    case GST_EVENT_CUSTOM_DOWNSTREAM:
    {
      gboolean in_still;
      GstFlowReturn flow_ret = GST_FLOW_OK;

      if (gst_video_event_parse_still_frame (event, &in_still)) {
        if (in_still) {
          GST_DEBUG_OBJECT (decoder, "draining current data for still-frame");
          flow_ret = gst_video_decoder_drain_out (decoder, FALSE);
          ret = (flow_ret == GST_FLOW_OK);
        }
        /* Forward STILL_FRAME immediately. Everything is drained after
         * the STILL_FRAME event and we can forward this event immediately
         * now without having buffers out of order.
         */
        forward_immediate = TRUE;
      }
      break;
    }
    case GST_EVENT_SEGMENT:
    {
      GstSegment segment;

      GST_VIDEO_DECODER_STREAM_LOCK (decoder);

      gst_event_copy_segment (event, &segment);

      if (segment.format == GST_FORMAT_TIME) {
        GST_DEBUG_OBJECT (decoder,
            "received TIME SEGMENT %" GST_SEGMENT_FORMAT, &segment);
      } else {
        gint64 start;

        GST_DEBUG_OBJECT (decoder,
            "received SEGMENT %" GST_SEGMENT_FORMAT, &segment);

        /* handle newsegment as a result from our legacy simple seeking */
        /* note that initial 0 should convert to 0 in any case */
        if (priv->do_estimate_rate &&
            gst_pad_query_convert (decoder->sinkpad, GST_FORMAT_BYTES,
                segment.start, GST_FORMAT_TIME, &start)) {
          /* best attempt convert */
          /* as these are only estimates, stop is kept open-ended to avoid
           * premature cutting */
          GST_DEBUG_OBJECT (decoder,
              "converted to TIME start %" GST_TIME_FORMAT,
              GST_TIME_ARGS (start));
          segment.start = start;
          segment.stop = GST_CLOCK_TIME_NONE;
          segment.time = start;
          /* replace event */
          gst_event_unref (event);
          event = gst_event_new_segment (&segment);
        } else {
          GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
          goto newseg_wrong_format;
        }
      }

      gst_video_decoder_flush (decoder, FALSE);

      priv->base_timestamp = GST_CLOCK_TIME_NONE;
      priv->base_picture_number = 0;

      decoder->input_segment = segment;

      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      break;
    }
    case GST_EVENT_FLUSH_STOP:
    {
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      /* well, this is kind of worse than a DISCONT */
      gst_video_decoder_flush (decoder, TRUE);
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      /* Forward FLUSH_STOP immediately. This is required because it is
       * expected to be forwarded immediately and no buffers are queued
       * anyway.
       */
      forward_immediate = TRUE;
      break;
    }
    case GST_EVENT_TAG:
    {
      GstTagList *tags;

      gst_event_parse_tag (event, &tags);

      if (gst_tag_list_get_scope (tags) == GST_TAG_SCOPE_STREAM) {
        gst_video_decoder_merge_tags (decoder, tags, GST_TAG_MERGE_REPLACE);
        gst_event_unref (event);
        event = NULL;
        ret = TRUE;
      }
      break;
    }
    default:
      break;
  }

  /* Forward non-serialized events immediately, and all other
   * events which can be forwarded immediately without potentially
   * causing the event to go out of order with other events and
   * buffers as decided above.
   */
  if (event) {
    if (!GST_EVENT_IS_SERIALIZED (event) || forward_immediate) {
      ret = gst_video_decoder_push_event (decoder, event);
    } else {
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      decoder->priv->current_frame_events =
          g_list_prepend (decoder->priv->current_frame_events, event);
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      ret = TRUE;
    }
  }

  return ret;

newseg_wrong_format:
  {
    GST_DEBUG_OBJECT (decoder, "received non TIME newsegment");
    gst_event_unref (event);
    /* SWALLOW EVENT */
    return TRUE;
  }
}
static gboolean
gst_video_decoder_sink_event (GstPad * pad, GstObject * parent,
    GstEvent * event)
{
  GstVideoDecoder *decoder;
  GstVideoDecoderClass *decoder_class;
  gboolean ret = FALSE;

  decoder = GST_VIDEO_DECODER (parent);
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);

  GST_DEBUG_OBJECT (decoder, "received event %d, %s", GST_EVENT_TYPE (event),
      GST_EVENT_TYPE_NAME (event));

  if (decoder_class->sink_event)
    ret = decoder_class->sink_event (decoder, event);

  return ret;
}
/* perform upstream byte <-> time conversion (duration, seeking)
 * if subclass allows and if enough data for moderately decent conversion */
static inline gboolean
gst_video_decoder_do_byte (GstVideoDecoder * dec)
{
  return dec->priv->do_estimate_rate && (dec->priv->bytes_out > 0)
      && (dec->priv->time > GST_SECOND);
}
static gboolean
gst_video_decoder_do_seek (GstVideoDecoder * dec, GstEvent * event)
{
  GstSeekFlags flags;
  GstSeekType start_type, end_type;
  GstFormat format;
  gdouble rate;
  gint64 start, start_time, end_time;
  GstSegment seek_segment;
  guint32 seqnum;

  gst_event_parse_seek (event, &rate, &format, &flags, &start_type,
      &start_time, &end_type, &end_time);

  /* we'll handle plain open-ended flushing seeks with the simple approach */
  if (rate != 1.0) {
    GST_DEBUG_OBJECT (dec, "unsupported seek: rate");
    return FALSE;
  }

  if (start_type != GST_SEEK_TYPE_SET) {
    GST_DEBUG_OBJECT (dec, "unsupported seek: start time");
    return FALSE;
  }

  if (end_type != GST_SEEK_TYPE_NONE ||
      (end_type == GST_SEEK_TYPE_SET && end_time != GST_CLOCK_TIME_NONE)) {
    GST_DEBUG_OBJECT (dec, "unsupported seek: end time");
    return FALSE;
  }

  if (!(flags & GST_SEEK_FLAG_FLUSH)) {
    GST_DEBUG_OBJECT (dec, "unsupported seek: not flushing");
    return FALSE;
  }

  memcpy (&seek_segment, &dec->output_segment, sizeof (seek_segment));
  gst_segment_do_seek (&seek_segment, rate, format, flags, start_type,
      start_time, end_type, end_time, NULL);
  start_time = seek_segment.position;

  if (!gst_pad_query_convert (dec->sinkpad, GST_FORMAT_TIME, start_time,
          GST_FORMAT_BYTES, &start)) {
    GST_DEBUG_OBJECT (dec, "conversion failed");
    return FALSE;
  }

  seqnum = gst_event_get_seqnum (event);
  event = gst_event_new_seek (1.0, GST_FORMAT_BYTES, flags,
      GST_SEEK_TYPE_SET, start, GST_SEEK_TYPE_NONE, -1);
  gst_event_set_seqnum (event, seqnum);

  GST_DEBUG_OBJECT (dec, "seeking to %" GST_TIME_FORMAT " at byte offset %"
      G_GINT64_FORMAT, GST_TIME_ARGS (start_time), start);

  return gst_pad_push_event (dec->sinkpad, event);
}
static gboolean
gst_video_decoder_src_event_default (GstVideoDecoder * decoder,
    GstEvent * event)
{
  GstVideoDecoderPrivate *priv;
  gboolean res = FALSE;

  priv = decoder->priv;

  GST_DEBUG_OBJECT (decoder,
      "received event %d, %s", GST_EVENT_TYPE (event),
      GST_EVENT_TYPE_NAME (event));

  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_SEEK:
    {
      GstFormat format;
      gdouble rate;
      GstSeekFlags flags;
      GstSeekType start_type, stop_type;
      gint64 start, stop;
      gint64 tstart, tstop;
      guint32 seqnum;

      gst_event_parse_seek (event, &rate, &format, &flags, &start_type, &start,
          &stop_type, &stop);
      seqnum = gst_event_get_seqnum (event);

      /* upstream gets a chance first */
      if ((res = gst_pad_push_event (decoder->sinkpad, event)))
        break;

      /* if upstream fails for a time seek, maybe we can help if allowed */
      if (format == GST_FORMAT_TIME) {
        if (gst_video_decoder_do_byte (decoder))
          res = gst_video_decoder_do_seek (decoder, event);
        break;
      }

      /* ... though a non-time seek can be aided as well */
      /* First bring the requested format to time */
      if (!(res =
              gst_pad_query_convert (decoder->srcpad, format, start,
                  GST_FORMAT_TIME, &tstart)))
        goto convert_error;
      if (!(res =
              gst_pad_query_convert (decoder->srcpad, format, stop,
                  GST_FORMAT_TIME, &tstop)))
        goto convert_error;

      /* then seek with time on the peer */
      event = gst_event_new_seek (rate, GST_FORMAT_TIME,
          flags, start_type, tstart, stop_type, tstop);
      gst_event_set_seqnum (event, seqnum);

      res = gst_pad_push_event (decoder->sinkpad, event);
      break;
    }
    case GST_EVENT_QOS:
    {
      GstQOSType type;
      gdouble proportion;
      GstClockTimeDiff diff;
      GstClockTime timestamp;

      gst_event_parse_qos (event, &type, &proportion, &diff, &timestamp);

      GST_OBJECT_LOCK (decoder);
      priv->proportion = proportion;
      if (G_LIKELY (GST_CLOCK_TIME_IS_VALID (timestamp))) {
        if (G_UNLIKELY (diff > 0)) {
          priv->earliest_time = timestamp + 2 * diff + priv->qos_frame_duration;
        } else {
          priv->earliest_time = timestamp + diff;
        }
      } else {
        priv->earliest_time = GST_CLOCK_TIME_NONE;
      }
      GST_OBJECT_UNLOCK (decoder);

      GST_DEBUG_OBJECT (decoder,
          "got QoS %" GST_TIME_FORMAT ", %" G_GINT64_FORMAT ", %g",
          GST_TIME_ARGS (timestamp), diff, proportion);

      res = gst_pad_push_event (decoder->sinkpad, event);
      break;
    }
    default:
      res = gst_pad_push_event (decoder->sinkpad, event);
      break;
  }

  return res;

convert_error:
  GST_DEBUG_OBJECT (decoder, "could not convert format");
  return res;
}
static gboolean
gst_video_decoder_src_event (GstPad * pad, GstObject * parent, GstEvent * event)
{
  GstVideoDecoder *decoder;
  GstVideoDecoderClass *decoder_class;
  gboolean ret = FALSE;

  decoder = GST_VIDEO_DECODER (parent);
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);

  GST_DEBUG_OBJECT (decoder, "received event %d, %s", GST_EVENT_TYPE (event),
      GST_EVENT_TYPE_NAME (event));

  if (decoder_class->src_event)
    ret = decoder_class->src_event (decoder, event);

  return ret;
}
static gboolean
gst_video_decoder_src_query (GstPad * pad, GstObject * parent, GstQuery * query)
{
  GstVideoDecoder *dec;
  gboolean res = TRUE;

  dec = GST_VIDEO_DECODER (parent);

  GST_LOG_OBJECT (dec, "handling query: %" GST_PTR_FORMAT, query);

  switch (GST_QUERY_TYPE (query)) {
    case GST_QUERY_POSITION:
    {
      GstFormat format;
      gint64 time, value;

      /* upstream gets a chance first */
      if ((res = gst_pad_peer_query (dec->sinkpad, query))) {
        GST_LOG_OBJECT (dec, "returning peer response");
        break;
      }

      /* we start from the last seen time */
      time = dec->priv->last_timestamp_out;
      /* correct for the segment values */
      time = gst_segment_to_stream_time (&dec->output_segment,
          GST_FORMAT_TIME, time);

      GST_LOG_OBJECT (dec,
          "query %p: our time: %" GST_TIME_FORMAT, query, GST_TIME_ARGS (time));

      /* and convert to the final format */
      gst_query_parse_position (query, &format, NULL);
      if (!(res = gst_pad_query_convert (pad, GST_FORMAT_TIME, time,
                  format, &value)))
        break;

      gst_query_set_position (query, format, value);

      GST_LOG_OBJECT (dec,
          "query %p: we return %" G_GINT64_FORMAT " (format %u)", query, value,
          format);
      break;
    }
    case GST_QUERY_DURATION:
    {
      GstFormat format;
      gint64 value;

      /* upstream in any case */
      if ((res = gst_pad_query_default (pad, parent, query)))
        break;

      gst_query_parse_duration (query, &format, NULL);
      /* try answering TIME by converting from BYTE if subclass allows */
      if (format == GST_FORMAT_TIME && gst_video_decoder_do_byte (dec)) {
        if (gst_pad_peer_query_duration (dec->sinkpad, GST_FORMAT_BYTES,
                &value)) {
          GST_LOG_OBJECT (dec, "upstream size %" G_GINT64_FORMAT, value);
          if (gst_pad_query_convert (dec->sinkpad,
                  GST_FORMAT_BYTES, value, GST_FORMAT_TIME, &value)) {
            gst_query_set_duration (query, GST_FORMAT_TIME, value);
            res = TRUE;
          }
        }
      }
      break;
    }
    case GST_QUERY_CONVERT:
    {
      GstFormat src_fmt, dest_fmt;
      gint64 src_val, dest_val;

      GST_DEBUG_OBJECT (dec, "convert query");

      gst_query_parse_convert (query, &src_fmt, &src_val, &dest_fmt, &dest_val);
      GST_OBJECT_LOCK (dec);
      if (dec->priv->output_state != NULL)
        res = gst_video_rawvideo_convert (dec->priv->output_state,
            src_fmt, src_val, &dest_fmt, &dest_val);
      else
        res = FALSE;
      GST_OBJECT_UNLOCK (dec);
      if (!res)
        goto error;
      gst_query_set_convert (query, src_fmt, src_val, dest_fmt, dest_val);
      break;
    }
    case GST_QUERY_LATENCY:
    {
      gboolean live;
      GstClockTime min_latency, max_latency;

      res = gst_pad_peer_query (dec->sinkpad, query);
      if (res) {
        gst_query_parse_latency (query, &live, &min_latency, &max_latency);
        GST_DEBUG_OBJECT (dec, "Peer latency: live %d, min %"
            GST_TIME_FORMAT " max %" GST_TIME_FORMAT, live,
            GST_TIME_ARGS (min_latency), GST_TIME_ARGS (max_latency));

        GST_OBJECT_LOCK (dec);
        min_latency += dec->priv->min_latency;
        if (dec->priv->max_latency == GST_CLOCK_TIME_NONE) {
          max_latency = GST_CLOCK_TIME_NONE;
        } else if (max_latency != GST_CLOCK_TIME_NONE) {
          max_latency += dec->priv->max_latency;
        }
        GST_OBJECT_UNLOCK (dec);

        gst_query_set_latency (query, live, min_latency, max_latency);
      }
      break;
    }
    default:
      res = gst_pad_query_default (pad, parent, query);
  }
  return res;

error:
  GST_ERROR_OBJECT (dec, "query failed");
  return res;
}
static gboolean
gst_video_decoder_sink_query (GstPad * pad, GstObject * parent,
    GstQuery * query)
{
  GstVideoDecoder *decoder;
  GstVideoDecoderPrivate *priv;
  gboolean res = FALSE;

  decoder = GST_VIDEO_DECODER (parent);
  priv = decoder->priv;

  GST_LOG_OBJECT (decoder, "handling query: %" GST_PTR_FORMAT, query);

  switch (GST_QUERY_TYPE (query)) {
    case GST_QUERY_CONVERT:
    {
      GstFormat src_fmt, dest_fmt;
      gint64 src_val, dest_val;

      gst_query_parse_convert (query, &src_fmt, &src_val, &dest_fmt, &dest_val);
      res =
          gst_video_encoded_video_convert (priv->bytes_out, priv->time, src_fmt,
          src_val, &dest_fmt, &dest_val);
      if (!res)
        goto error;
      gst_query_set_convert (query, src_fmt, src_val, dest_fmt, dest_val);
      break;
    }
    case GST_QUERY_ALLOCATION:{
      GstVideoDecoderClass *klass = GST_VIDEO_DECODER_GET_CLASS (decoder);

      if (klass->propose_allocation)
        res = klass->propose_allocation (decoder, query);
      break;
    }
    default:
      res = gst_pad_query_default (pad, parent, query);
      break;
  }
  return res;

error:
  GST_DEBUG_OBJECT (decoder, "query failed");
  return res;
}
typedef struct _Timestamp Timestamp;
struct _Timestamp
{
  guint64 offset;
  GstClockTime pts;
  GstClockTime dts;
  GstClockTime duration;
};

static void
timestamp_free (Timestamp * ts)
{
  g_slice_free (Timestamp, ts);
}
static void
gst_video_decoder_add_timestamp (GstVideoDecoder * decoder, GstBuffer * buffer)
{
  GstVideoDecoderPrivate *priv = decoder->priv;
  Timestamp *ts;

  ts = g_slice_new (Timestamp);

  GST_LOG_OBJECT (decoder,
      "adding PTS %" GST_TIME_FORMAT " DTS %" GST_TIME_FORMAT
      " (offset:%" G_GUINT64_FORMAT ")",
      GST_TIME_ARGS (GST_BUFFER_PTS (buffer)),
      GST_TIME_ARGS (GST_BUFFER_DTS (buffer)), priv->input_offset);

  ts->offset = priv->input_offset;
  ts->pts = GST_BUFFER_PTS (buffer);
  ts->dts = GST_BUFFER_DTS (buffer);
  ts->duration = GST_BUFFER_DURATION (buffer);

  priv->timestamps = g_list_append (priv->timestamps, ts);
}
static void
gst_video_decoder_get_timestamp_at_offset (GstVideoDecoder *
    decoder, guint64 offset, GstClockTime * pts, GstClockTime * dts,
    GstClockTime * duration)
{
#ifndef GST_DISABLE_GST_DEBUG
  guint64 got_offset = 0;
#endif
  Timestamp *ts;
  GList *g;

  *pts = GST_CLOCK_TIME_NONE;
  *dts = GST_CLOCK_TIME_NONE;
  *duration = GST_CLOCK_TIME_NONE;

  g = decoder->priv->timestamps;
  while (g) {
    ts = g->data;
    if (ts->offset <= offset) {
#ifndef GST_DISABLE_GST_DEBUG
      got_offset = ts->offset;
#endif
      *pts = ts->pts;
      *dts = ts->dts;
      *duration = ts->duration;
      g = g->next;
      /* remove from the list before freeing the entry */
      decoder->priv->timestamps = g_list_remove (decoder->priv->timestamps, ts);
      timestamp_free (ts);
    } else {
      break;
    }
  }

  GST_LOG_OBJECT (decoder,
      "got PTS %" GST_TIME_FORMAT " DTS %" GST_TIME_FORMAT " @ offs %"
      G_GUINT64_FORMAT " (wanted offset:%" G_GUINT64_FORMAT ")",
      GST_TIME_ARGS (*pts), GST_TIME_ARGS (*dts), got_offset, offset);
}
static void
gst_video_decoder_clear_queues (GstVideoDecoder * dec)
{
  GstVideoDecoderPrivate *priv = dec->priv;

  g_list_free_full (priv->output_queued,
      (GDestroyNotify) gst_mini_object_unref);
  priv->output_queued = NULL;

  g_list_free_full (priv->gather, (GDestroyNotify) gst_mini_object_unref);
  priv->gather = NULL;
  g_list_free_full (priv->decode, (GDestroyNotify) gst_video_codec_frame_unref);
  priv->decode = NULL;
  g_list_free_full (priv->parse, (GDestroyNotify) gst_mini_object_unref);
  priv->parse = NULL;
  g_list_free_full (priv->parse_gather,
      (GDestroyNotify) gst_video_codec_frame_unref);
  priv->parse_gather = NULL;
  g_list_free_full (priv->frames, (GDestroyNotify) gst_video_codec_frame_unref);
  priv->frames = NULL;
}
static void
gst_video_decoder_reset (GstVideoDecoder * decoder, gboolean full)
{
  GstVideoDecoderPrivate *priv = decoder->priv;

  GST_DEBUG_OBJECT (decoder, "reset full %d", full);

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  if (full) {
    gst_segment_init (&decoder->input_segment, GST_FORMAT_UNDEFINED);
    gst_segment_init (&decoder->output_segment, GST_FORMAT_UNDEFINED);
    gst_video_decoder_clear_queues (decoder);
    priv->error_count = 0;
    priv->max_errors = GST_VIDEO_DECODER_MAX_ERRORS;
    if (priv->input_state)
      gst_video_codec_state_unref (priv->input_state);
    priv->input_state = NULL;
    GST_OBJECT_LOCK (decoder);
    if (priv->output_state)
      gst_video_codec_state_unref (priv->output_state);
    priv->output_state = NULL;

    priv->qos_frame_duration = 0;
    GST_OBJECT_UNLOCK (decoder);

    priv->min_latency = 0;
    priv->max_latency = 0;

    if (priv->tags)
      gst_tag_list_unref (priv->tags);
    priv->tags = NULL;
    priv->tags_changed = FALSE;
    priv->reordered_output = FALSE;
  }

  priv->discont = TRUE;

  priv->base_timestamp = GST_CLOCK_TIME_NONE;
  priv->last_timestamp_out = GST_CLOCK_TIME_NONE;
  priv->pts_delta = GST_CLOCK_TIME_NONE;

  priv->input_offset = 0;
  priv->frame_offset = 0;
  gst_adapter_clear (priv->input_adapter);
  gst_adapter_clear (priv->output_adapter);
  g_list_free_full (priv->timestamps, (GDestroyNotify) timestamp_free);
  priv->timestamps = NULL;

  if (priv->current_frame) {
    gst_video_codec_frame_unref (priv->current_frame);
    priv->current_frame = NULL;
  }

  priv->processed = 0;

  priv->decode_frame_number = 0;
  priv->base_picture_number = 0;

  g_list_free_full (priv->frames, (GDestroyNotify) gst_video_codec_frame_unref);
  priv->frames = NULL;

  priv->bytes_out = 0;

  GST_OBJECT_LOCK (decoder);
  priv->earliest_time = GST_CLOCK_TIME_NONE;
  priv->proportion = 0.5;
  GST_OBJECT_UNLOCK (decoder);

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
}
static GstFlowReturn
gst_video_decoder_chain_forward (GstVideoDecoder * decoder,
    GstBuffer * buf, gboolean at_eos)
{
  GstVideoDecoderPrivate *priv;
  GstVideoDecoderClass *klass;
  GstFlowReturn ret = GST_FLOW_OK;

  klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
  priv = decoder->priv;

  g_return_val_if_fail (priv->packetized || klass->parse, GST_FLOW_ERROR);

  if (priv->current_frame == NULL)
    priv->current_frame = gst_video_decoder_new_frame (decoder);

  if (GST_BUFFER_PTS_IS_VALID (buf)) {
    gst_video_decoder_add_timestamp (decoder, buf);
  }
  priv->input_offset += gst_buffer_get_size (buf);

  if (priv->packetized) {
    if (!GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT)) {
      GST_VIDEO_CODEC_FRAME_SET_SYNC_POINT (priv->current_frame);
    }

    priv->current_frame->input_buffer = buf;

    if (decoder->input_segment.rate < 0.0) {
      priv->parse_gather =
          g_list_prepend (priv->parse_gather, priv->current_frame);
    } else {
      ret = gst_video_decoder_decode_frame (decoder, priv->current_frame);
    }
    priv->current_frame = NULL;
  } else {
    gst_adapter_push (priv->input_adapter, buf);

    if (G_UNLIKELY (!gst_adapter_available (priv->input_adapter)))
      goto beach;

    do {
      /* current frame may have been parsed and handled,
       * so we need to set up a new one when asking subclass to parse */
      if (priv->current_frame == NULL)
        priv->current_frame = gst_video_decoder_new_frame (decoder);

      ret = klass->parse (decoder, priv->current_frame,
          priv->input_adapter, at_eos);
    } while (ret == GST_FLOW_OK && gst_adapter_available (priv->input_adapter));
  }

beach:
  if (ret == GST_VIDEO_DECODER_FLOW_NEED_DATA)
    ret = GST_FLOW_OK;

  return ret;
}
static GstFlowReturn
gst_video_decoder_flush_decode (GstVideoDecoder * dec)
{
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn res = GST_FLOW_OK;
  GList *walk;

  GST_DEBUG_OBJECT (dec, "flushing buffers to decode");

  /* clear buffer and decoder state */
  gst_video_decoder_flush (dec, FALSE);

  walk = priv->decode;
  while (walk) {
    GList *next;
    GstVideoCodecFrame *frame = (GstVideoCodecFrame *) (walk->data);

    GST_DEBUG_OBJECT (dec, "decoding frame %p buffer %p, PTS %" GST_TIME_FORMAT
        ", DTS %" GST_TIME_FORMAT, frame, frame->input_buffer,
        GST_TIME_ARGS (GST_BUFFER_PTS (frame->input_buffer)),
        GST_TIME_ARGS (GST_BUFFER_DTS (frame->input_buffer)));

    next = walk->next;

    priv->decode = g_list_delete_link (priv->decode, walk);

    /* decode buffer, resulting data prepended to queue */
    res = gst_video_decoder_decode_frame (dec, frame);
    if (res != GST_FLOW_OK)
      break;

    walk = next;
  }

  return res;
}
/* gst_video_decoder_flush_parse is called from the
 * chain_reverse() function when a buffer containing
 * a DISCONT is received, indicating that reverse playback
 * looped back to the next data block, and that therefore
 * all available data should be fed through the
 * decoder and frames gathered for reversed output
 */
static GstFlowReturn
gst_video_decoder_flush_parse (GstVideoDecoder * dec, gboolean at_eos)
{
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn res = GST_FLOW_OK;
  GList *walk;

  GST_DEBUG_OBJECT (dec, "flushing buffers to parsing");

  /* Reverse the gather list, and prepend it to the parse list,
   * then flush to parse whatever we can */
  priv->gather = g_list_reverse (priv->gather);
  priv->parse = g_list_concat (priv->gather, priv->parse);
  priv->gather = NULL;

  /* clear buffer and decoder state */
  gst_video_decoder_flush (dec, FALSE);

  walk = priv->parse;
  while (walk) {
    GstBuffer *buf = GST_BUFFER_CAST (walk->data);
    GList *next = walk->next;

    GST_DEBUG_OBJECT (dec, "parsing buffer %p, PTS %" GST_TIME_FORMAT
        ", DTS %" GST_TIME_FORMAT, buf, GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
        GST_TIME_ARGS (GST_BUFFER_DTS (buf)));

    /* parse buffer, resulting frames prepended to parse_gather queue */
    gst_buffer_ref (buf);
    res = gst_video_decoder_chain_forward (dec, buf, at_eos);

    /* if we generated output, we can discard the buffer, else we
     * keep it in the queue */
    if (priv->parse_gather) {
      GST_DEBUG_OBJECT (dec, "parsed buffer to %p", priv->parse_gather->data);
      priv->parse = g_list_delete_link (priv->parse, walk);
      gst_buffer_unref (buf);
    } else {
      GST_DEBUG_OBJECT (dec, "buffer did not decode, keeping");
    }
    walk = next;
  }

  /* now we can process frames. Start by moving each frame from the parse_gather
   * to the decode list, reverse the order as we go, and stopping when/if we
   * copy a keyframe. */
  GST_DEBUG_OBJECT (dec, "checking parsed frames for a keyframe to decode");
  walk = priv->parse_gather;
  while (walk) {
    GstVideoCodecFrame *frame = (GstVideoCodecFrame *) (walk->data);

    /* remove from the gather list */
    priv->parse_gather = g_list_remove_link (priv->parse_gather, walk);

    /* move it to the front of the decode queue */
    priv->decode = g_list_concat (walk, priv->decode);

    /* if we copied a keyframe, flush and decode the decode queue */
    if (GST_VIDEO_CODEC_FRAME_IS_SYNC_POINT (frame)) {
      GST_DEBUG_OBJECT (dec, "found keyframe %p with PTS %" GST_TIME_FORMAT
          ", DTS %" GST_TIME_FORMAT, frame,
          GST_TIME_ARGS (GST_BUFFER_PTS (frame->input_buffer)),
          GST_TIME_ARGS (GST_BUFFER_DTS (frame->input_buffer)));
      res = gst_video_decoder_flush_decode (dec);
      if (res != GST_FLOW_OK)
        goto done;
    }

    walk = priv->parse_gather;
  }

  /* now send queued data downstream */
  walk = priv->output_queued;
  while (walk) {
    GstBuffer *buf = GST_BUFFER_CAST (walk->data);

    if (G_LIKELY (res == GST_FLOW_OK)) {
      /* avoid stray DISCONT from forward processing,
       * which have no meaning in reverse pushing */
      GST_BUFFER_FLAG_UNSET (buf, GST_BUFFER_FLAG_DISCONT);

      /* Last chance to calculate a timestamp as we loop backwards
       * through the list */
      if (GST_BUFFER_TIMESTAMP (buf) != GST_CLOCK_TIME_NONE)
        priv->last_timestamp_out = GST_BUFFER_TIMESTAMP (buf);
      else if (priv->last_timestamp_out != GST_CLOCK_TIME_NONE &&
          GST_BUFFER_DURATION (buf) != GST_CLOCK_TIME_NONE) {
        GST_BUFFER_TIMESTAMP (buf) =
            priv->last_timestamp_out - GST_BUFFER_DURATION (buf);
        priv->last_timestamp_out = GST_BUFFER_TIMESTAMP (buf);
        GST_LOG_OBJECT (dec,
            "Calculated TS %" GST_TIME_FORMAT " working backwards",
            GST_TIME_ARGS (priv->last_timestamp_out));
      }

      res = gst_video_decoder_clip_and_push_buf (dec, buf);
    } else {
      gst_buffer_unref (buf);
    }

    priv->output_queued =
        g_list_delete_link (priv->output_queued, priv->output_queued);
    walk = priv->output_queued;
  }

done:
  return res;
}
static GstFlowReturn
gst_video_decoder_chain_reverse (GstVideoDecoder * dec, GstBuffer * buf)
{
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn result = GST_FLOW_OK;

  /* if we have a discont, move buffers to the decode list */
  if (!buf || GST_BUFFER_IS_DISCONT (buf)) {
    GST_DEBUG_OBJECT (dec, "received discont");

    /* parse and decode stuff in the gather and parse queues */
    gst_video_decoder_flush_parse (dec, FALSE);
  }

  if (G_LIKELY (buf)) {
    GST_DEBUG_OBJECT (dec, "gathering buffer %p of size %" G_GSIZE_FORMAT ", "
        "PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT ", dur %"
        GST_TIME_FORMAT, buf, gst_buffer_get_size (buf),
        GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
        GST_TIME_ARGS (GST_BUFFER_DTS (buf)),
        GST_TIME_ARGS (GST_BUFFER_DURATION (buf)));

    /* add buffer to gather queue */
    priv->gather = g_list_prepend (priv->gather, buf);
  }

  return result;
}
static GstFlowReturn
gst_video_decoder_chain (GstPad * pad, GstObject * parent, GstBuffer * buf)
{
  GstVideoDecoder *decoder;
  GstFlowReturn ret = GST_FLOW_OK;

  decoder = GST_VIDEO_DECODER (parent);

  if (G_UNLIKELY (decoder->priv->do_caps)) {
    GstCaps *caps = gst_pad_get_current_caps (decoder->sinkpad);
    if (caps) {
      if (!gst_video_decoder_setcaps (decoder, caps)) {
        gst_caps_unref (caps);
        goto not_negotiated;
      }
      gst_caps_unref (caps);
    }
    decoder->priv->do_caps = FALSE;
  }

  GST_LOG_OBJECT (decoder,
      "chain PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT " duration %"
      GST_TIME_FORMAT " size %" G_GSIZE_FORMAT,
      GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
      GST_TIME_ARGS (GST_BUFFER_DTS (buf)),
      GST_TIME_ARGS (GST_BUFFER_DURATION (buf)), gst_buffer_get_size (buf));

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  /* NOTE:
   * requiring the pad to be negotiated makes it impossible to use
   * oggdemux or filesrc ! decoder */

  if (decoder->input_segment.format == GST_FORMAT_UNDEFINED) {
    GstEvent *event;
    GstSegment *segment = &decoder->input_segment;

    GST_WARNING_OBJECT (decoder,
        "Received buffer without a new-segment. "
        "Assuming timestamps start from 0.");

    gst_segment_init (segment, GST_FORMAT_TIME);

    event = gst_event_new_segment (segment);

    decoder->priv->current_frame_events =
        g_list_prepend (decoder->priv->current_frame_events, event);
  }

  if (decoder->input_segment.rate > 0.0)
    ret = gst_video_decoder_chain_forward (decoder, buf, FALSE);
  else
    ret = gst_video_decoder_chain_reverse (decoder, buf);

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
  return ret;

/* ERRORS */
not_negotiated:
  {
    GST_ELEMENT_ERROR (decoder, CORE, NEGOTIATION, (NULL),
        ("decoder not initialized"));
    gst_buffer_unref (buf);
    return GST_FLOW_NOT_NEGOTIATED;
  }
}
static GstStateChangeReturn
gst_video_decoder_change_state (GstElement * element, GstStateChange transition)
{
  GstVideoDecoder *decoder;
  GstVideoDecoderClass *decoder_class;
  GstStateChangeReturn ret;

  decoder = GST_VIDEO_DECODER (element);
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (element);

  switch (transition) {
    case GST_STATE_CHANGE_NULL_TO_READY:
      /* open device/library if needed */
      if (decoder_class->open && !decoder_class->open (decoder))
        goto open_failed;
      break;
    case GST_STATE_CHANGE_READY_TO_PAUSED:
      /* Initialize device/library if needed */
      if (decoder_class->start && !decoder_class->start (decoder))
        goto start_failed;
      break;
    default:
      break;
  }

  ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition);

  switch (transition) {
    case GST_STATE_CHANGE_PAUSED_TO_READY:
      if (decoder_class->stop && !decoder_class->stop (decoder))
        goto stop_failed;

      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      gst_video_decoder_reset (decoder, TRUE);
      g_list_free_full (decoder->priv->current_frame_events,
          (GDestroyNotify) gst_event_unref);
      decoder->priv->current_frame_events = NULL;
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      break;
    case GST_STATE_CHANGE_READY_TO_NULL:
      /* close device/library if needed */
      if (decoder_class->close && !decoder_class->close (decoder))
        goto close_failed;
      break;
    default:
      break;
  }

  return ret;

/* ERRORS */
open_failed:
  {
    GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
        ("Failed to open decoder"));
    return GST_STATE_CHANGE_FAILURE;
  }

start_failed:
  {
    GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
        ("Failed to start decoder"));
    return GST_STATE_CHANGE_FAILURE;
  }

stop_failed:
  {
    GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
        ("Failed to stop decoder"));
    return GST_STATE_CHANGE_FAILURE;
  }

close_failed:
  {
    GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
        ("Failed to close decoder"));
    return GST_STATE_CHANGE_FAILURE;
  }
}
static GstVideoCodecFrame *
gst_video_decoder_new_frame (GstVideoDecoder * decoder)
{
  GstVideoDecoderPrivate *priv = decoder->priv;
  GstVideoCodecFrame *frame;

  frame = g_slice_new0 (GstVideoCodecFrame);

  frame->ref_count = 1;

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  frame->system_frame_number = priv->system_frame_number;
  priv->system_frame_number++;
  frame->decode_frame_number = priv->decode_frame_number;
  priv->decode_frame_number++;

  frame->dts = GST_CLOCK_TIME_NONE;
  frame->pts = GST_CLOCK_TIME_NONE;
  frame->duration = GST_CLOCK_TIME_NONE;
  frame->events = priv->current_frame_events;
  priv->current_frame_events = NULL;
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  GST_LOG_OBJECT (decoder, "Created new frame %p (sfn:%d)",
      frame, frame->system_frame_number);

  return frame;
}
static void
gst_video_decoder_prepare_finish_frame (GstVideoDecoder *
    decoder, GstVideoCodecFrame * frame, gboolean dropping)
{
  GstVideoDecoderPrivate *priv = decoder->priv;
  GList *l, *events = NULL;

#ifndef GST_DISABLE_GST_DEBUG
  GST_LOG_OBJECT (decoder, "n %d in %" G_GSIZE_FORMAT " out %" G_GSIZE_FORMAT,
      g_list_length (priv->frames),
      gst_adapter_available (priv->input_adapter),
      gst_adapter_available (priv->output_adapter));
#endif

  GST_LOG_OBJECT (decoder,
      "finish frame %p (#%d) sync:%d PTS:%" GST_TIME_FORMAT " DTS:%"
      GST_TIME_FORMAT,
      frame, frame->system_frame_number,
      GST_VIDEO_CODEC_FRAME_IS_SYNC_POINT (frame), GST_TIME_ARGS (frame->pts),
      GST_TIME_ARGS (frame->dts));

  /* Push all pending events that arrived before this frame */
  for (l = priv->frames; l; l = l->next) {
    GstVideoCodecFrame *tmp = l->data;

    if (tmp->events) {
      events = g_list_concat (events, tmp->events);
      tmp->events = NULL;
    }

    if (tmp == frame)
      break;
  }

  for (l = g_list_last (events); l; l = g_list_previous (l)) {
    GST_LOG_OBJECT (decoder, "pushing %s event", GST_EVENT_TYPE_NAME (l->data));
    gst_video_decoder_push_event (decoder, l->data);
  }
  g_list_free (events);

  /* Check if the data should not be displayed. For example altref/invisible
   * frame in vp8. In this case we should not update the timestamps. */
  if (GST_VIDEO_CODEC_FRAME_IS_DECODE_ONLY (frame))
    return;

  /* If the frame is meant to be output but we don't have an output_buffer
   * we have a problem :) */
  if (G_UNLIKELY ((frame->output_buffer == NULL) && !dropping))
    goto no_output_buffer;

  if (GST_CLOCK_TIME_IS_VALID (frame->pts)) {
    if (frame->pts != priv->base_timestamp) {
      GST_DEBUG_OBJECT (decoder,
          "sync timestamp %" GST_TIME_FORMAT " diff %" GST_TIME_FORMAT,
          GST_TIME_ARGS (frame->pts),
          GST_TIME_ARGS (frame->pts - decoder->output_segment.start));
      priv->base_timestamp = frame->pts;
      priv->base_picture_number = frame->decode_frame_number;
    }
  }

  if (frame->duration == GST_CLOCK_TIME_NONE) {
    frame->duration = gst_video_decoder_get_frame_duration (decoder, frame);
    GST_LOG_OBJECT (decoder,
        "Guessing duration %" GST_TIME_FORMAT " for frame...",
        GST_TIME_ARGS (frame->duration));
  }

  /* PTS is expected to be monotonically ascending,
   * so a good guess is the lowest unsent DTS */
  {
    GstClockTime min_ts = GST_CLOCK_TIME_NONE;
    GstVideoCodecFrame *oframe = NULL;
    gboolean seen_none = FALSE;

    /* some maintenance regardless */
    for (l = priv->frames; l; l = l->next) {
      GstVideoCodecFrame *tmp = l->data;

      if (!GST_CLOCK_TIME_IS_VALID (tmp->abidata.ABI.ts)) {
        seen_none = TRUE;
        continue;
      }

      if (!GST_CLOCK_TIME_IS_VALID (min_ts) || tmp->abidata.ABI.ts < min_ts) {
        min_ts = tmp->abidata.ABI.ts;
        oframe = tmp;
      }
    }
    /* save a ts if needed */
    if (oframe && oframe != frame) {
      oframe->abidata.ABI.ts = frame->abidata.ABI.ts;
    }

    /* and set if needed;
     * valid delta means we have reasonable DTS input */
    /* also, if we ended up reordered, means this approach is conflicting
     * with some sparse existing PTS, and so it does not work out */
    if (!priv->reordered_output &&
        !GST_CLOCK_TIME_IS_VALID (frame->pts) && !seen_none &&
        GST_CLOCK_TIME_IS_VALID (priv->pts_delta)) {
      frame->pts = min_ts + priv->pts_delta;
      GST_DEBUG_OBJECT (decoder,
          "no valid PTS, using oldest DTS %" GST_TIME_FORMAT,
          GST_TIME_ARGS (frame->pts));
    }

    /* some more maintenance, ts2 holds PTS */
    min_ts = GST_CLOCK_TIME_NONE;
    seen_none = FALSE;
    for (l = priv->frames; l; l = l->next) {
      GstVideoCodecFrame *tmp = l->data;

      if (!GST_CLOCK_TIME_IS_VALID (tmp->abidata.ABI.ts2)) {
        seen_none = TRUE;
        continue;
      }

      if (!GST_CLOCK_TIME_IS_VALID (min_ts) || tmp->abidata.ABI.ts2 < min_ts) {
        min_ts = tmp->abidata.ABI.ts2;
        oframe = tmp;
      }
    }
    /* save a ts if needed */
    if (oframe && oframe != frame) {
      oframe->abidata.ABI.ts2 = frame->abidata.ABI.ts2;
    }

    /* if we detected reordered output, then PTS are void,
     * however those were obtained; bogus input, subclass etc */
    if (priv->reordered_output && !seen_none) {
      GST_DEBUG_OBJECT (decoder, "invalidating PTS");
      frame->pts = GST_CLOCK_TIME_NONE;
    }

    if (!GST_CLOCK_TIME_IS_VALID (frame->pts) && !seen_none) {
      frame->pts = min_ts;
      GST_DEBUG_OBJECT (decoder,
          "no valid PTS, using oldest PTS %" GST_TIME_FORMAT,
          GST_TIME_ARGS (frame->pts));
    }
  }

  if (frame->pts == GST_CLOCK_TIME_NONE) {
    /* Last ditch timestamp guess: Just add the duration to the previous
     * frame */
    if (priv->last_timestamp_out != GST_CLOCK_TIME_NONE &&
        frame->duration != GST_CLOCK_TIME_NONE) {
      frame->pts = priv->last_timestamp_out + frame->duration;
      GST_LOG_OBJECT (decoder,
          "Guessing timestamp %" GST_TIME_FORMAT " for frame...",
          GST_TIME_ARGS (frame->pts));
    }
  }

  if (GST_CLOCK_TIME_IS_VALID (priv->last_timestamp_out)) {
    if (frame->pts < priv->last_timestamp_out) {
      GST_WARNING_OBJECT (decoder,
          "decreasing timestamp (%" GST_TIME_FORMAT " < %"
          GST_TIME_FORMAT ")",
          GST_TIME_ARGS (frame->pts), GST_TIME_ARGS (priv->last_timestamp_out));
      priv->reordered_output = TRUE;
    }
  }

  if (GST_CLOCK_TIME_IS_VALID (frame->pts))
    priv->last_timestamp_out = frame->pts;

  return;

  /* ERRORS */
no_output_buffer:
  {
    GST_ERROR_OBJECT (decoder, "No buffer to output!");
  }
}
static void
gst_video_decoder_release_frame (GstVideoDecoder * dec,
    GstVideoCodecFrame * frame)
{
  GList *link;

  /* unref once from the list */
  link = g_list_find (dec->priv->frames, frame);
  if (link) {
    gst_video_codec_frame_unref (frame);
    dec->priv->frames = g_list_delete_link (dec->priv->frames, link);
  }

  /* unref because this function takes ownership */
  gst_video_codec_frame_unref (frame);
}
2279 * gst_video_decoder_drop_frame:
2280 * @dec: a #GstVideoDecoder
2281 * @frame: (transfer full): the #GstVideoCodecFrame to drop
2283 * Similar to gst_video_decoder_finish_frame(), but drops @frame in any
2284 * case and posts a QoS message with the frame's details on the bus.
2285 * In any case, the frame is considered finished and released.
2287 * Returns: a #GstFlowReturn, usually GST_FLOW_OK.
2290 gst_video_decoder_drop_frame (GstVideoDecoder * dec, GstVideoCodecFrame * frame)
2292 GstClockTime stream_time, jitter, earliest_time, qostime, timestamp;
2293 GstSegment *segment;
2294 GstMessage *qos_msg;
2297 GST_LOG_OBJECT (dec, "drop frame %p", frame);
2299 GST_VIDEO_DECODER_STREAM_LOCK (dec);
2301 gst_video_decoder_prepare_finish_frame (dec, frame, TRUE);
2303 GST_DEBUG_OBJECT (dec, "dropping frame %" GST_TIME_FORMAT,
2304 GST_TIME_ARGS (frame->pts));
2306 dec->priv->dropped++;
2308 /* post QoS message */
2309 GST_OBJECT_LOCK (dec);
2310 proportion = dec->priv->proportion;
2311 earliest_time = dec->priv->earliest_time;
2312 GST_OBJECT_UNLOCK (dec);
2314 timestamp = frame->pts;
2315 segment = &dec->output_segment;
2317 gst_segment_to_stream_time (segment, GST_FORMAT_TIME, timestamp);
2318 qostime = gst_segment_to_running_time (segment, GST_FORMAT_TIME, timestamp);
2319 jitter = GST_CLOCK_DIFF (qostime, earliest_time);
2321 gst_message_new_qos (GST_OBJECT_CAST (dec), FALSE, qostime, stream_time,
2322 timestamp, GST_CLOCK_TIME_NONE);
gst_message_set_qos_values (qos_msg, jitter, proportion, 1000000);
gst_message_set_qos_stats (qos_msg, GST_FORMAT_BUFFERS,
dec->priv->processed, dec->priv->dropped);
gst_element_post_message (GST_ELEMENT_CAST (dec), qos_msg);
/* now free the frame */
gst_video_decoder_release_frame (dec, frame);
GST_VIDEO_DECODER_STREAM_UNLOCK (dec);
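The jitter posted in the QoS message above is just the signed distance between the frame's running time and the clock's earliest acceptable time, exactly what GST_CLOCK_DIFF computes. A minimal standalone sketch of that arithmetic (plain C, no GStreamer types; the helper names are illustrative, not part of the API):

```c
#include <assert.h>
#include <stdint.h>

/* Signed difference, mirroring GST_CLOCK_DIFF (s, e) == (e - s). */
static int64_t
clock_diff (uint64_t s, uint64_t e)
{
  return (int64_t) e - (int64_t) s;
}

/* A positive jitter means the frame's running time is already behind
 * the earliest acceptable time, i.e. the frame was late. */
static int
frame_was_late (uint64_t qostime, uint64_t earliest_time)
{
  return clock_diff (qostime, earliest_time) > 0;
}
```

The sign convention matters: QoS consumers treat positive jitter as "late" and negative as "ahead of schedule".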
* gst_video_decoder_finish_frame:
* @decoder: a #GstVideoDecoder
* @frame: (transfer full): a decoded #GstVideoCodecFrame
* @frame should have a valid decoded data buffer, whose metadata fields
* are then appropriately set according to frame data and pushed downstream.
* If no output data is provided, @frame is considered skipped.
* In any case, the frame is considered finished and released.
* After calling this function the output buffer of the frame is to be
* considered read-only. This function will also change the metadata
* Returns: a #GstFlowReturn resulting from sending data downstream
gst_video_decoder_finish_frame (GstVideoDecoder * decoder,
GstVideoCodecFrame * frame)
GstFlowReturn ret = GST_FLOW_OK;
GstVideoDecoderPrivate *priv = decoder->priv;
GstBuffer *output_buffer;
GST_LOG_OBJECT (decoder, "finish frame %p", frame);
GST_VIDEO_DECODER_STREAM_LOCK (decoder);
if (G_UNLIKELY (priv->output_state_changed || (priv->output_state
&& gst_pad_check_reconfigure (decoder->srcpad))))
gst_video_decoder_negotiate (decoder);
gst_video_decoder_prepare_finish_frame (decoder, frame, FALSE);
if (priv->tags && priv->tags_changed) {
gst_video_decoder_push_event (decoder,
gst_event_new_tag (gst_tag_list_ref (priv->tags)));
priv->tags_changed = FALSE;
/* no buffer data means this frame is skipped */
if (!frame->output_buffer || GST_VIDEO_CODEC_FRAME_IS_DECODE_ONLY (frame)) {
GST_DEBUG_OBJECT (decoder, "skipping frame %" GST_TIME_FORMAT,
GST_TIME_ARGS (frame->pts));
output_buffer = frame->output_buffer;
GST_BUFFER_FLAG_UNSET (output_buffer, GST_BUFFER_FLAG_DELTA_UNIT);
/* for decoded frames, set both PTS and DTS to the frame's PTS */
GST_BUFFER_PTS (output_buffer) = frame->pts;
GST_BUFFER_DTS (output_buffer) = frame->pts;
GST_BUFFER_DURATION (output_buffer) = frame->duration;
GST_BUFFER_OFFSET (output_buffer) = GST_BUFFER_OFFSET_NONE;
GST_BUFFER_OFFSET_END (output_buffer) = GST_BUFFER_OFFSET_NONE;
if (priv->discont) {
GST_BUFFER_FLAG_SET (output_buffer, GST_BUFFER_FLAG_DISCONT);
priv->discont = FALSE;
/* Get an additional ref to the buffer, which is going to be pushed
* downstream, the original ref is owned by the frame
* FIXME: clip_and_push_buf() changes buffer metadata but the buffer
* might have a refcount > 1 */
output_buffer = gst_buffer_ref (output_buffer);
if (decoder->output_segment.rate < 0.0) {
GST_LOG_OBJECT (decoder, "queued frame");
priv->output_queued = g_list_prepend (priv->output_queued, output_buffer);
ret = gst_video_decoder_clip_and_push_buf (decoder, output_buffer);
gst_video_decoder_release_frame (decoder, frame);
GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
/* With stream lock, takes the frame reference */
static GstFlowReturn
gst_video_decoder_clip_and_push_buf (GstVideoDecoder * decoder, GstBuffer * buf)
GstFlowReturn ret = GST_FLOW_OK;
GstVideoDecoderPrivate *priv = decoder->priv;
guint64 start, stop;
guint64 cstart, cstop;
GstSegment *segment;
GstClockTime duration;
/* Check for clipping */
start = GST_BUFFER_PTS (buf);
duration = GST_BUFFER_DURATION (buf);
stop = GST_CLOCK_TIME_NONE;
if (GST_CLOCK_TIME_IS_VALID (start) && GST_CLOCK_TIME_IS_VALID (duration)) {
stop = start + duration;
segment = &decoder->output_segment;
if (gst_segment_clip (segment, GST_FORMAT_TIME, start, stop, &cstart, &cstop)) {
GST_BUFFER_PTS (buf) = cstart;
if (stop != GST_CLOCK_TIME_NONE)
GST_BUFFER_DURATION (buf) = cstop - cstart;
GST_LOG_OBJECT (decoder,
"accepting buffer inside segment: %" GST_TIME_FORMAT " %"
GST_TIME_FORMAT " seg %" GST_TIME_FORMAT " to %" GST_TIME_FORMAT
" time %" GST_TIME_FORMAT,
GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
GST_TIME_ARGS (GST_BUFFER_PTS (buf) +
GST_BUFFER_DURATION (buf)),
GST_TIME_ARGS (segment->start), GST_TIME_ARGS (segment->stop),
GST_TIME_ARGS (segment->time));
GST_LOG_OBJECT (decoder,
"dropping buffer outside segment: %" GST_TIME_FORMAT
" %" GST_TIME_FORMAT
" seg %" GST_TIME_FORMAT " to %" GST_TIME_FORMAT
" time %" GST_TIME_FORMAT,
GST_TIME_ARGS (start), GST_TIME_ARGS (stop),
GST_TIME_ARGS (segment->start),
GST_TIME_ARGS (segment->stop), GST_TIME_ARGS (segment->time));
gst_buffer_unref (buf);
/* update rate estimate */
priv->bytes_out += gst_buffer_get_size (buf);
if (GST_CLOCK_TIME_IS_VALID (duration)) {
priv->time += duration;
/* FIXME : Use difference between current and previous outgoing
* timestamp, and relate to difference between current and previous
/* better no estimate than an invalid one */
priv->time = GST_CLOCK_TIME_NONE;
GST_DEBUG_OBJECT (decoder, "pushing buffer %p of size %" G_GSIZE_FORMAT ", "
"PTS %" GST_TIME_FORMAT ", dur %" GST_TIME_FORMAT, buf,
gst_buffer_get_size (buf),
GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
GST_TIME_ARGS (GST_BUFFER_DURATION (buf)));
/* we got data, so note things are looking up again, reduce
* the error count, if there is one */
if (G_UNLIKELY (priv->error_count))
priv->error_count--;
ret = gst_pad_push (decoder->srcpad, buf);
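The clipping step in clip_and_push_buf() keeps only the part of the buffer that overlaps the output segment: the start is clamped up to the segment start, the stop down to the segment stop, and a buffer entirely outside is dropped. A simplified, GStreamer-free sketch of that interval arithmetic (function and parameter names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Clip [start, stop) against [seg_start, seg_stop).
 * Returns 1 and fills cstart/cstop on overlap, 0 when fully outside
 * (in which case the caller would drop the buffer). */
static int
clip_to_segment (uint64_t start, uint64_t stop,
    uint64_t seg_start, uint64_t seg_stop,
    uint64_t * cstart, uint64_t * cstop)
{
  if (stop <= seg_start || start >= seg_stop)
    return 0;                   /* no overlap: drop */
  *cstart = start < seg_start ? seg_start : start;
  *cstop = stop > seg_stop ? seg_stop : stop;
  return 1;
}
```

After clipping, the buffer's PTS becomes cstart and its duration cstop - cstart, mirroring what the code above does with gst_segment_clip().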
* gst_video_decoder_add_to_frame:
* @decoder: a #GstVideoDecoder
* @n_bytes: the number of bytes to add
* Removes the next @n_bytes of input data and adds them to the currently
* parsed frame.
gst_video_decoder_add_to_frame (GstVideoDecoder * decoder, int n_bytes)
GstVideoDecoderPrivate *priv = decoder->priv;
GST_LOG_OBJECT (decoder, "add %d bytes to frame", n_bytes);
GST_VIDEO_DECODER_STREAM_LOCK (decoder);
if (gst_adapter_available (priv->output_adapter) == 0) {
priv->frame_offset =
priv->input_offset - gst_adapter_available (priv->input_adapter);
buf = gst_adapter_take_buffer (priv->input_adapter, n_bytes);
gst_adapter_push (priv->output_adapter, buf);
GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
gst_video_decoder_get_frame_duration (GstVideoDecoder * decoder,
GstVideoCodecFrame * frame)
GstVideoCodecState *state = decoder->priv->output_state;
/* it's possible that we don't have a state yet when we are dropping the
* initial buffers */
if (state == NULL)
return GST_CLOCK_TIME_NONE;
if (state->info.fps_d == 0 || state->info.fps_n == 0) {
return GST_CLOCK_TIME_NONE;
/* FIXME: For interlaced frames this needs to take into account
* the number of valid fields in the frame
return gst_util_uint64_scale (GST_SECOND, state->info.fps_d,
    state->info.fps_n);
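gst_util_uint64_scale (GST_SECOND, fps_d, fps_n) is an overflow-safe val * num / denom; for moderate values a plain 64-bit computation yields the same result. A hedged stand-in showing the frame-duration math (GST_SECOND is one second in nanoseconds; this simplified version omits the overflow guard of the real helper):

```c
#include <assert.h>
#include <stdint.h>

#define NSECOND UINT64_C (1000000000)   /* GST_SECOND: 1 s in ns */

/* Frame duration in ns from a fps_n/fps_d framerate.
 * Returns 0 for an invalid (zero) framerate component. */
static uint64_t
frame_duration_ns (int fps_n, int fps_d)
{
  if (fps_n == 0 || fps_d == 0)
    return 0;
  return NSECOND * (uint64_t) fps_d / (uint64_t) fps_n;
}
```

For example, 25 fps gives 40 ms per frame, and NTSC's 30000/1001 gives roughly 33.37 ms.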
* gst_video_decoder_have_frame:
* @decoder: a #GstVideoDecoder
* Gathers all data collected for the currently parsed frame, attaches the
* corresponding metadata and passes it along for further processing, i.e.
* @handle_frame.
* Returns: a #GstFlowReturn
gst_video_decoder_have_frame (GstVideoDecoder * decoder)
GstVideoDecoderPrivate *priv = decoder->priv;
GstClockTime pts, dts, duration;
GstFlowReturn ret = GST_FLOW_OK;
GST_LOG_OBJECT (decoder, "have_frame");
GST_VIDEO_DECODER_STREAM_LOCK (decoder);
n_available = gst_adapter_available (priv->output_adapter);
if (n_available)
buffer = gst_adapter_take_buffer (priv->output_adapter, n_available);
else
buffer = gst_buffer_new_and_alloc (0);
priv->current_frame->input_buffer = buffer;
gst_video_decoder_get_timestamp_at_offset (decoder,
priv->frame_offset, &pts, &dts, &duration);
GST_BUFFER_PTS (buffer) = pts;
GST_BUFFER_DTS (buffer) = dts;
GST_BUFFER_DURATION (buffer) = duration;
GST_LOG_OBJECT (decoder, "collected frame size %d, "
"PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT ", dur %"
GST_TIME_FORMAT, n_available, GST_TIME_ARGS (pts), GST_TIME_ARGS (dts),
GST_TIME_ARGS (duration));
/* In reverse playback, just capture and queue frames for later processing */
if (decoder->output_segment.rate < 0.0) {
priv->parse_gather =
g_list_prepend (priv->parse_gather, priv->current_frame);
/* Otherwise, decode the frame, which gives away our ref */
ret = gst_video_decoder_decode_frame (decoder, priv->current_frame);
/* Current frame is gone now, either way */
priv->current_frame = NULL;
GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
/* Pass the frame in priv->current_frame through the
* handle_frame() callback for decoding and passing to gvd_finish_frame(),
* or dropping by passing to gvd_drop_frame() */
static GstFlowReturn
gst_video_decoder_decode_frame (GstVideoDecoder * decoder,
GstVideoCodecFrame * frame)
GstVideoDecoderPrivate *priv = decoder->priv;
GstVideoDecoderClass *decoder_class;
GstFlowReturn ret = GST_FLOW_OK;
decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);
/* FIXME : This should only have to be checked once (either the subclass has an
* implementation, or it doesn't) */
g_return_val_if_fail (decoder_class->handle_frame != NULL, GST_FLOW_ERROR);
frame->distance_from_sync = priv->distance_from_sync;
priv->distance_from_sync++;
frame->pts = GST_BUFFER_PTS (frame->input_buffer);
frame->dts = GST_BUFFER_DTS (frame->input_buffer);
frame->duration = GST_BUFFER_DURATION (frame->input_buffer);
/* For keyframes, PTS = DTS */
/* FIXME upstream can be quite wrong about the keyframe aspect,
* so we could be going off here as well,
* maybe let subclass decide if it really is/was a keyframe */
if (GST_VIDEO_CODEC_FRAME_IS_SYNC_POINT (frame)) {
if (!GST_CLOCK_TIME_IS_VALID (frame->pts)) {
frame->pts = frame->dts;
} else if (GST_CLOCK_TIME_IS_VALID (frame->dts)) {
/* just in case they are not equal as might ideally be,
* e.g. quicktime has a (positive) delta approach */
priv->pts_delta = frame->pts - frame->dts;
GST_DEBUG_OBJECT (decoder, "PTS delta %d ms",
(gint) (priv->pts_delta / GST_MSECOND));
frame->abidata.ABI.ts = frame->dts;
frame->abidata.ABI.ts2 = frame->pts;
GST_LOG_OBJECT (decoder, "PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT,
GST_TIME_ARGS (frame->pts), GST_TIME_ARGS (frame->dts));
GST_LOG_OBJECT (decoder, "dist %d", frame->distance_from_sync);
gst_video_codec_frame_ref (frame);
priv->frames = g_list_append (priv->frames, frame);
frame->deadline =
    gst_segment_to_running_time (&decoder->input_segment, GST_FORMAT_TIME,
    frame->pts);
/* do something with frame */
ret = decoder_class->handle_frame (decoder, frame);
if (ret != GST_FLOW_OK)
GST_DEBUG_OBJECT (decoder, "flow error %s", gst_flow_get_name (ret));
/* the frame has either been added to parse_gather or sent to
handle frame so there is no need to unref it */
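The keyframe timestamp handling in decode_frame() can be condensed: on a sync point, a missing PTS is taken from the DTS, and when both are valid their difference is remembered as the stream's PTS delta (e.g. QuickTime's positive-delta approach). A standalone sketch with an invalid-timestamp sentinel (names and sentinel are illustrative, not the GStreamer types):

```c
#include <assert.h>
#include <stdint.h>

#define TS_NONE UINT64_MAX      /* stand-in for GST_CLOCK_TIME_NONE */

/* On a keyframe, fall back to DTS when PTS is missing, and record the
 * PTS-DTS delta when both are present. Returns the resulting PTS. */
static uint64_t
fixup_keyframe_pts (uint64_t pts, uint64_t dts, int64_t * pts_delta)
{
  if (pts == TS_NONE)
    return dts;                 /* keyframe: PTS = DTS */
  if (dts != TS_NONE)
    *pts_delta = (int64_t) (pts - dts);
  return pts;
}
```

The recorded delta can later be applied to frames whose PTS is unknown, which is why the base class keeps it in priv->pts_delta.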
* gst_video_decoder_get_output_state:
* @decoder: a #GstVideoDecoder
* Get the #GstVideoCodecState currently describing the output stream.
* Returns: (transfer full): #GstVideoCodecState describing format of video data.
GstVideoCodecState *
gst_video_decoder_get_output_state (GstVideoDecoder * decoder)
GstVideoCodecState *state = NULL;
GST_OBJECT_LOCK (decoder);
if (decoder->priv->output_state)
state = gst_video_codec_state_ref (decoder->priv->output_state);
GST_OBJECT_UNLOCK (decoder);
* gst_video_decoder_set_output_state:
* @decoder: a #GstVideoDecoder
* @fmt: a #GstVideoFormat
* @width: The width in pixels
* @height: The height in pixels
* @reference: (allow-none) (transfer none): An optional reference #GstVideoCodecState
* Creates a new #GstVideoCodecState with the specified @fmt, @width and @height
* as the output state for the decoder.
* Any previously set output state on @decoder will be replaced by the newly
* If the subclass wishes to copy over existing fields (like pixel aspect ratio,
* or framerate) from an existing #GstVideoCodecState, it can be provided as a
* If the subclass wishes to override some fields from the output state (like
* pixel-aspect-ratio or framerate) it can do so on the returned #GstVideoCodecState.
* The new output state will only take effect (set on pads and buffers) starting
* from the next call to #gst_video_decoder_finish_frame().
* Returns: (transfer full): the newly configured output state.
GstVideoCodecState *
gst_video_decoder_set_output_state (GstVideoDecoder * decoder,
GstVideoFormat fmt, guint width, guint height,
GstVideoCodecState * reference)
GstVideoDecoderPrivate *priv = decoder->priv;
GstVideoCodecState *state;
GST_DEBUG_OBJECT (decoder, "fmt:%d, width:%d, height:%d, reference:%p",
fmt, width, height, reference);
/* Create the new output state */
state = _new_output_state (fmt, width, height, reference);
GST_VIDEO_DECODER_STREAM_LOCK (decoder);
GST_OBJECT_LOCK (decoder);
/* Replace existing output state by new one */
if (priv->output_state)
gst_video_codec_state_unref (priv->output_state);
priv->output_state = gst_video_codec_state_ref (state);
if (priv->output_state != NULL && priv->output_state->info.fps_n > 0) {
priv->qos_frame_duration =
gst_util_uint64_scale (GST_SECOND, priv->output_state->info.fps_d,
priv->output_state->info.fps_n);
priv->qos_frame_duration = 0;
priv->output_state_changed = TRUE;
GST_OBJECT_UNLOCK (decoder);
GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
* gst_video_decoder_get_oldest_frame:
* @decoder: a #GstVideoDecoder
* Get the oldest pending unfinished #GstVideoCodecFrame
* Returns: (transfer full): oldest pending unfinished #GstVideoCodecFrame.
GstVideoCodecFrame *
gst_video_decoder_get_oldest_frame (GstVideoDecoder * decoder)
GstVideoCodecFrame *frame = NULL;
GST_VIDEO_DECODER_STREAM_LOCK (decoder);
if (decoder->priv->frames)
frame = gst_video_codec_frame_ref (decoder->priv->frames->data);
GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
return (GstVideoCodecFrame *) frame;
* gst_video_decoder_get_frame:
* @decoder: a #GstVideoDecoder
* @frame_number: system_frame_number of a frame
* Get a pending unfinished #GstVideoCodecFrame
* Returns: (transfer full): pending unfinished #GstVideoCodecFrame identified by @frame_number.
GstVideoCodecFrame *
gst_video_decoder_get_frame (GstVideoDecoder * decoder, int frame_number)
GstVideoCodecFrame *frame = NULL;
GST_DEBUG_OBJECT (decoder, "frame_number : %d", frame_number);
GST_VIDEO_DECODER_STREAM_LOCK (decoder);
for (g = decoder->priv->frames; g; g = g->next) {
GstVideoCodecFrame *tmp = g->data;
if (tmp->system_frame_number == frame_number) {
frame = gst_video_codec_frame_ref (tmp);
GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
* gst_video_decoder_get_frames:
* @decoder: a #GstVideoDecoder
* Get all pending unfinished #GstVideoCodecFrame
* Returns: (transfer full) (element-type GstVideoCodecFrame): pending unfinished #GstVideoCodecFrame.
gst_video_decoder_get_frames (GstVideoDecoder * decoder)
GST_VIDEO_DECODER_STREAM_LOCK (decoder);
frames = g_list_copy (decoder->priv->frames);
g_list_foreach (frames, (GFunc) gst_video_codec_frame_ref, NULL);
GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
gst_video_decoder_decide_allocation_default (GstVideoDecoder * decoder,
GstBufferPool *pool = NULL;
guint size, min, max;
GstAllocator *allocator = NULL;
GstAllocationParams params;
GstStructure *config;
gboolean update_pool, update_allocator;
gst_query_parse_allocation (query, &outcaps, NULL);
gst_video_info_init (&vinfo);
gst_video_info_from_caps (&vinfo, outcaps);
/* we got configuration from our peer or the decide_allocation method,
if (gst_query_get_n_allocation_params (query) > 0) {
/* try the allocator */
gst_query_parse_nth_allocation_param (query, 0, &allocator, &params);
update_allocator = TRUE;
gst_allocation_params_init (&params);
update_allocator = FALSE;
if (gst_query_get_n_allocation_pools (query) > 0) {
gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);
size = MAX (size, vinfo.size);
update_pool = FALSE;
/* no pool, we can make our own */
GST_DEBUG_OBJECT (decoder, "no pool, making new pool");
pool = gst_video_buffer_pool_new ();
config = gst_buffer_pool_get_config (pool);
gst_buffer_pool_config_set_params (config, outcaps, size, min, max);
gst_buffer_pool_config_set_allocator (config, allocator, &params);
gst_buffer_pool_set_config (pool, config);
if (update_allocator)
gst_query_set_nth_allocation_param (query, 0, allocator, &params);
gst_query_add_allocation_param (query, allocator, &params);
gst_object_unref (allocator);
gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max);
gst_query_add_allocation_pool (query, pool, size, min, max);
gst_object_unref (pool);
gst_video_decoder_propose_allocation_default (GstVideoDecoder * decoder,
gst_video_decoder_negotiate_default (GstVideoDecoder * decoder)
GstVideoCodecState *state = decoder->priv->output_state;
GstVideoDecoderClass *klass;
GstQuery *query = NULL;
GstBufferPool *pool = NULL;
GstAllocator *allocator;
GstAllocationParams params;
gboolean ret = TRUE;
g_return_val_if_fail (GST_VIDEO_INFO_WIDTH (&state->info) != 0, FALSE);
g_return_val_if_fail (GST_VIDEO_INFO_HEIGHT (&state->info) != 0, FALSE);
klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
GST_DEBUG_OBJECT (decoder, "output_state par %d/%d fps %d/%d",
state->info.par_n, state->info.par_d,
state->info.fps_n, state->info.fps_d);
if (G_UNLIKELY (state->caps == NULL))
state->caps = gst_video_info_to_caps (&state->info);
GST_DEBUG_OBJECT (decoder, "setting caps %" GST_PTR_FORMAT, state->caps);
ret = gst_pad_set_caps (decoder->srcpad, state->caps);
decoder->priv->output_state_changed = FALSE;
/* Negotiate pool */
query = gst_query_new_allocation (state->caps, TRUE);
if (!gst_pad_peer_query (decoder->srcpad, query)) {
GST_DEBUG_OBJECT (decoder, "didn't get downstream ALLOCATION hints");
g_assert (klass->decide_allocation != NULL);
ret = klass->decide_allocation (decoder, query);
GST_DEBUG_OBJECT (decoder, "ALLOCATION (%d) params: %" GST_PTR_FORMAT, ret,
goto no_decide_allocation;
/* we got configuration from our peer or the decide_allocation method,
if (gst_query_get_n_allocation_params (query) > 0) {
gst_query_parse_nth_allocation_param (query, 0, &allocator, &params);
gst_allocation_params_init (&params);
if (gst_query_get_n_allocation_pools (query) > 0)
gst_query_parse_nth_allocation_pool (query, 0, &pool, NULL, NULL, NULL);
gst_object_unref (allocator);
goto no_decide_allocation;
if (decoder->priv->allocator)
gst_object_unref (decoder->priv->allocator);
decoder->priv->allocator = allocator;
decoder->priv->params = params;
if (decoder->priv->pool) {
gst_buffer_pool_set_active (decoder->priv->pool, FALSE);
gst_object_unref (decoder->priv->pool);
decoder->priv->pool = pool;
gst_buffer_pool_set_active (pool, TRUE);
gst_query_unref (query);
no_decide_allocation:
GST_WARNING_OBJECT (decoder, "Subclass failed to decide allocation");
* gst_video_decoder_negotiate:
* @decoder: a #GstVideoDecoder
* Negotiate with downstream elements the currently configured #GstVideoCodecState.
* Returns: #TRUE if the negotiation succeeded, else #FALSE.
gst_video_decoder_negotiate (GstVideoDecoder * decoder)
GstVideoDecoderClass *klass;
gboolean ret = TRUE;
g_return_val_if_fail (GST_IS_VIDEO_DECODER (decoder), FALSE);
klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
GST_VIDEO_DECODER_STREAM_LOCK (decoder);
if (klass->negotiate)
ret = klass->negotiate (decoder);
GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
* gst_video_decoder_allocate_output_buffer:
* @decoder: a #GstVideoDecoder
* Helper function that allocates a buffer to hold a video frame for @decoder's
* current #GstVideoCodecState.
* Returns: (transfer full): allocated buffer
gst_video_decoder_allocate_output_buffer (GstVideoDecoder * decoder)
GST_DEBUG ("alloc src buffer");
GST_VIDEO_DECODER_STREAM_LOCK (decoder);
if (G_UNLIKELY (decoder->priv->output_state_changed
|| (decoder->priv->output_state
&& gst_pad_check_reconfigure (decoder->srcpad))))
gst_video_decoder_negotiate (decoder);
gst_buffer_pool_acquire_buffer (decoder->priv->pool, &buffer, NULL);
GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
* gst_video_decoder_allocate_output_frame:
* @decoder: a #GstVideoDecoder
* @frame: a #GstVideoCodecFrame
* Helper function that allocates a buffer to hold a video frame for @decoder's
* current #GstVideoCodecState. Subclass should already have configured video
* state and set src pad caps.
* The buffer allocated here is owned by the frame and you should only
* keep references to the frame, not the buffer.
* Returns: %GST_FLOW_OK if an output buffer could be allocated
gst_video_decoder_allocate_output_frame (GstVideoDecoder *
decoder, GstVideoCodecFrame * frame)
GstFlowReturn flow_ret;
GstVideoCodecState *state;
g_return_val_if_fail (frame->output_buffer == NULL, GST_FLOW_ERROR);
GST_VIDEO_DECODER_STREAM_LOCK (decoder);
state = decoder->priv->output_state;
if (state == NULL) {
g_warning ("Output state should be set before allocating frame");
num_bytes = GST_VIDEO_INFO_SIZE (&state->info);
if (num_bytes == 0) {
g_warning ("Frame size should not be 0");
if (G_UNLIKELY (decoder->priv->output_state_changed
|| (decoder->priv->output_state
&& gst_pad_check_reconfigure (decoder->srcpad))))
gst_video_decoder_negotiate (decoder);
GST_LOG_OBJECT (decoder, "alloc buffer size %d", num_bytes);
flow_ret = gst_buffer_pool_acquire_buffer (decoder->priv->pool,
&frame->output_buffer, NULL);
GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
return GST_FLOW_ERROR;
* gst_video_decoder_get_max_decode_time:
* @decoder: a #GstVideoDecoder
* @frame: a #GstVideoCodecFrame
* Determines the maximum possible decoding time for @frame that will
* allow it to decode and arrive in time (as determined by QoS events).
* In particular, a negative result means decoding in time is no longer possible
* and should therefore be skipped, or happen as quickly as possible.
* Returns: max decoding time.
gst_video_decoder_get_max_decode_time (GstVideoDecoder *
decoder, GstVideoCodecFrame * frame)
GstClockTimeDiff deadline;
GstClockTime earliest_time;
GST_OBJECT_LOCK (decoder);
earliest_time = decoder->priv->earliest_time;
if (GST_CLOCK_TIME_IS_VALID (earliest_time)
&& GST_CLOCK_TIME_IS_VALID (frame->deadline))
deadline = GST_CLOCK_DIFF (earliest_time, frame->deadline);
deadline = G_MAXINT64;
GST_LOG_OBJECT (decoder, "earliest %" GST_TIME_FORMAT
", frame deadline %" GST_TIME_FORMAT ", deadline %" GST_TIME_FORMAT,
GST_TIME_ARGS (earliest_time), GST_TIME_ARGS (frame->deadline),
GST_TIME_ARGS (deadline));
GST_OBJECT_UNLOCK (decoder);
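The value returned here is the earliest QoS time subtracted from the frame's own deadline: positive means there is still budget to decode, negative means the frame is already late. A minimal sketch of that computation, with an illustrative sentinel for unknown times (not the real GstClockTime macros):

```c
#include <assert.h>
#include <stdint.h>

#define TS_INVALID UINT64_MAX   /* stand-in for an invalid clock time */

/* Remaining decode budget in ns; INT64_MAX ("unlimited") when either
 * time is unknown, mirroring the G_MAXINT64 fallback above. */
static int64_t
max_decode_time (uint64_t earliest, uint64_t frame_deadline)
{
  if (earliest == TS_INVALID || frame_deadline == TS_INVALID)
    return INT64_MAX;
  return (int64_t) frame_deadline - (int64_t) earliest;
}
```

A subclass would typically compare this against its expected per-frame decode cost and skip non-reference frames when the budget goes negative.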
_gst_video_decoder_error (GstVideoDecoder * dec, gint weight,
GQuark domain, gint code, gchar * txt, gchar * dbg, const gchar * file,
const gchar * function, gint line)
GST_WARNING_OBJECT (dec, "error: %s", txt);
GST_WARNING_OBJECT (dec, "error: %s", dbg);
dec->priv->error_count += weight;
dec->priv->discont = TRUE;
if (dec->priv->max_errors < dec->priv->error_count) {
gst_element_message_full (GST_ELEMENT (dec), GST_MESSAGE_ERROR,
domain, code, txt, dbg, file, function, line);
return GST_FLOW_ERROR;
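_gst_video_decoder_error() accumulates a weighted error count and only escalates to a fatal error once the count exceeds the configured maximum (note that clip_and_push_buf() decrements the count again on success, so sporadic errors decay). The bookkeeping alone, as a small sketch where 0 stands for GST_FLOW_OK and -1 for GST_FLOW_ERROR (names are illustrative):

```c
#include <assert.h>

/* Add `weight` to the running error count; report fatal (-1) once the
 * count exceeds `max_errors`, otherwise keep going (0). */
static int
track_decode_error (int *error_count, int weight, int max_errors)
{
  *error_count += weight;
  if (max_errors < *error_count)
    return -1;                  /* would post GST_MESSAGE_ERROR */
  return 0;                     /* tolerated: warning only */
}
```

With max_errors = 2, the third unit-weight error is the first fatal one, matching the strict "more than tolerated" comparison above.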
* gst_video_decoder_set_max_errors:
* @dec: a #GstVideoDecoder
* @num: max tolerated errors
* Sets the number of tolerated decoder errors. A tolerated error is only
* warned about, but exceeding the tolerated count leads to a fatal error.
* The default is GST_VIDEO_DECODER_MAX_ERRORS.
gst_video_decoder_set_max_errors (GstVideoDecoder * dec, gint num)
g_return_if_fail (GST_IS_VIDEO_DECODER (dec));
dec->priv->max_errors = num;
* gst_video_decoder_get_max_errors:
* @dec: a #GstVideoDecoder
* Returns: currently configured decoder tolerated error count.
gst_video_decoder_get_max_errors (GstVideoDecoder * dec)
g_return_val_if_fail (GST_IS_VIDEO_DECODER (dec), 0);
return dec->priv->max_errors;
* gst_video_decoder_set_packetized:
* @decoder: a #GstVideoDecoder
* @packetized: whether the input data should be considered as packetized.
* Allows the base class to consider input data as packetized or not. If the
* input is packetized, then the @parse method will not be called.
gst_video_decoder_set_packetized (GstVideoDecoder * decoder,
gboolean packetized)
decoder->priv->packetized = packetized;
* gst_video_decoder_get_packetized:
* @decoder: a #GstVideoDecoder
* Queries whether input data is considered packetized or not by the
* Returns: TRUE if input data is considered packetized.
gst_video_decoder_get_packetized (GstVideoDecoder * decoder)
return decoder->priv->packetized;
* gst_video_decoder_set_estimate_rate:
* @dec: a #GstVideoDecoder
* @enabled: whether to enable byte to time conversion
* Allows the base class to perform estimated byte-to-time conversion.
gst_video_decoder_set_estimate_rate (GstVideoDecoder * dec, gboolean enabled)
g_return_if_fail (GST_IS_VIDEO_DECODER (dec));
dec->priv->do_estimate_rate = enabled;
* gst_video_decoder_get_estimate_rate:
* @dec: a #GstVideoDecoder
* Returns: currently configured byte to time conversion setting
gst_video_decoder_get_estimate_rate (GstVideoDecoder * dec)
g_return_val_if_fail (GST_IS_VIDEO_DECODER (dec), 0);
return dec->priv->do_estimate_rate;
* gst_video_decoder_set_latency:
* @decoder: a #GstVideoDecoder
* @min_latency: minimum latency
* @max_latency: maximum latency
* Lets #GstVideoDecoder sub-classes tell the base class what the decoder
* latency is. Will also post a LATENCY message on the bus so the pipeline
* can reconfigure its global latency.
gst_video_decoder_set_latency (GstVideoDecoder * decoder,
GstClockTime min_latency, GstClockTime max_latency)
g_return_if_fail (GST_CLOCK_TIME_IS_VALID (min_latency));
g_return_if_fail (max_latency >= min_latency);
GST_OBJECT_LOCK (decoder);
decoder->priv->min_latency = min_latency;
decoder->priv->max_latency = max_latency;
GST_OBJECT_UNLOCK (decoder);
gst_element_post_message (GST_ELEMENT_CAST (decoder),
gst_message_new_latency (GST_OBJECT_CAST (decoder)));
* gst_video_decoder_get_latency:
* @decoder: a #GstVideoDecoder
* @min_latency: (out) (allow-none): address of variable in which to store the
* configured minimum latency, or %NULL
* @max_latency: (out) (allow-none): address of variable in which to store the
* configured maximum latency, or %NULL
* Query the configured decoder latency. Results will be returned via
* @min_latency and @max_latency.
gst_video_decoder_get_latency (GstVideoDecoder * decoder,
GstClockTime * min_latency, GstClockTime * max_latency)
GST_OBJECT_LOCK (decoder);
*min_latency = decoder->priv->min_latency;
*max_latency = decoder->priv->max_latency;
GST_OBJECT_UNLOCK (decoder);
* gst_video_decoder_merge_tags:
* @decoder: a #GstVideoDecoder
* @tags: a #GstTagList to merge
* @mode: the #GstTagMergeMode to use
* Adds tags to so-called pending tags, which will be processed
* before pushing out data downstream.
* Note that this is provided for convenience, and the subclass is
* not required to use this and can still do tag handling on its own.
gst_video_decoder_merge_tags (GstVideoDecoder * decoder,
const GstTagList * tags, GstTagMergeMode mode)
g_return_if_fail (GST_IS_VIDEO_DECODER (decoder));
g_return_if_fail (tags == NULL || GST_IS_TAG_LIST (tags));
GST_VIDEO_DECODER_STREAM_LOCK (decoder);
GST_DEBUG_OBJECT (decoder, "merging tags %" GST_PTR_FORMAT, tags);
otags = decoder->priv->tags;
decoder->priv->tags = gst_tag_list_merge (decoder->priv->tags, tags, mode);
gst_tag_list_unref (otags);
decoder->priv->tags_changed = TRUE;
GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
* gst_video_decoder_get_buffer_pool:
* @decoder: a #GstVideoDecoder
* Returns: (transfer full): the instance of the #GstBufferPool used
* by the decoder; unref it after use
gst_video_decoder_get_buffer_pool (GstVideoDecoder * decoder)
g_return_val_if_fail (GST_IS_VIDEO_DECODER (decoder), NULL);
if (decoder->priv->pool)
return gst_object_ref (decoder->priv->pool);
* gst_video_decoder_get_allocator:
* @decoder: a #GstVideoDecoder
* @allocator: (out) (allow-none) (transfer full): the #GstAllocator
* @params: (out) (allow-none) (transfer full): the
* #GstAllocationParams of @allocator
* Lets #GstVideoDecoder sub-classes know the memory @allocator
* used by the base class and its @params.
* Unref the @allocator after use.
gst_video_decoder_get_allocator (GstVideoDecoder * decoder,
GstAllocator ** allocator, GstAllocationParams * params)
g_return_if_fail (GST_IS_VIDEO_DECODER (decoder));
*allocator = decoder->priv->allocator ?
gst_object_ref (decoder->priv->allocator) : NULL;
*params = decoder->priv->params;