/* GStreamer
 * Copyright (C) 2008 David Schleef <ds@schleef.org>
 * Copyright (C) 2011 Mark Nauwelaerts <mark.nauwelaerts@collabora.co.uk>.
 * Copyright (C) 2011 Nokia Corporation. All rights reserved.
 *   Contact: Stefan Kost <stefan.kost@nokia.com>
 * Copyright (C) 2012 Collabora Ltd.
 *   Author : Edward Hervey <edward@collabora.com>
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Library General Public
 * License as published by the Free Software Foundation; either
 * version 2 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Library General Public License for more details.
 *
 * You should have received a copy of the GNU Library General Public
 * License along with this library; if not, write to the
 * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
 * Boston, MA 02110-1301, USA.
 */
/**
 * SECTION:gstvideodecoder
 * @short_description: Base class for video decoders
 *
 * This base class is for video decoders turning encoded data into raw video
 * frames.
 *
 * The GstVideoDecoder base class and derived subclasses should cooperate as
 * follows:
 *
 * <itemizedlist><title>Configuration</title>
 * Initially, GstVideoDecoder calls @start when the decoder element
 * is activated, which allows the subclass to perform any global setup.
 *
 * GstVideoDecoder calls @set_format to inform the subclass of caps
 * describing input video data that it is about to receive, including
 * possibly configuration data.
 * While unlikely, it might be called more than once, if changing input
 * parameters require reconfiguration.
 *
 * Incoming data buffers are processed as needed, as described in Data
 * Processing below.
 *
 * GstVideoDecoder calls @stop at the end of all processing.
 *
 * <title>Data processing</title>
 * The base class gathers input data, and optionally allows the subclass
 * to parse this into subsequently manageable chunks, typically
 * corresponding to and referred to as 'frames'.
 *
 * Each input frame is provided in turn to the subclass' @handle_frame
 * callback. Ownership of the frame is given to the @handle_frame callback.
 *
 * If codec processing results in decoded data, the subclass should call
 * @gst_video_decoder_finish_frame to have decoded data pushed
 * downstream. Otherwise, the subclass must call @gst_video_decoder_drop_frame
 * to allow the base class to do timestamp and offset tracking, and possibly
 * to requeue the frame for a later attempt in the case of reverse playback.
 *
 * <itemizedlist><title>Shutdown phase</title>
 * The GstVideoDecoder class calls @stop to inform the subclass that data
 * parsing will be stopped.
 *
 * <itemizedlist><title>Additional Notes</title>
 * <itemizedlist><title>Seeking/Flushing</title>
 * When the pipeline is seeked or otherwise flushed, the subclass is informed
 * via a call to its @reset callback, with the hard parameter set to TRUE.
 * This indicates the subclass should drop any internal data queues and
 * timestamps and prepare for a fresh set of buffers to arrive for parsing
 * and decoding.
 *
 * <itemizedlist><title>End Of Stream</title>
 * At end-of-stream, the subclass @parse function may be called some final
 * times with the at_eos parameter set to TRUE, indicating that the element
 * should not expect any more data to arrive, and it should parse any
 * remaining frames and call gst_video_decoder_have_frame() if possible.
 *
 * The subclass is responsible for providing pad template caps for
 * source and sink pads. The pads need to be named "sink" and "src". It also
 * needs to provide information about the output caps, when they are known.
 * This may be when the base class calls the subclass' @set_format function,
 * though it might be during decoding, before calling
 * @gst_video_decoder_finish_frame. This is done via
 * @gst_video_decoder_set_output_state.
 *
 * The subclass is also responsible for providing (presentation) timestamps
 * (likely based on corresponding input ones). If that is not applicable
 * or possible, the base class provides limited framerate-based interpolation.
 *
 * Similarly, the base class provides some limited (legacy) seeking support
 * if specifically requested by the subclass, as full-fledged support
 * should rather be left to an upstream demuxer, parser or the like. This
 * simple approach caters for seeking and duration reporting using estimated
 * input bitrates. To enable it, a subclass should call
 * @gst_video_decoder_set_estimate_rate to enable handling of incoming
 * byte-streams.
 *
 * The base class provides some support for reverse playback, in particular
 * in case incoming data is not packetized or upstream does not provide
 * fragments on keyframe boundaries. However, the subclass should then be
 * prepared for the parsing and frame processing stage to occur separately
 * (in normal forward processing, the latter immediately follows the former).
 * The subclass also needs to ensure the parsing stage properly marks
 * keyframes, unless it knows the upstream elements will do so properly for
 * incoming data.
 *
 * The bare minimum that a functional subclass needs to implement is:
 * <listitem><para>Provide pad templates</para></listitem>
 *
 * Inform the base class of output caps via
 * @gst_video_decoder_set_output_state
 *
 * Parse input data, if it is not considered packetized from upstream.
 * Data will be provided to @parse which should invoke
 * @gst_video_decoder_add_to_frame and @gst_video_decoder_have_frame to
 * separate the data belonging to each video frame.
 *
 * Accept data in @handle_frame and provide decoded results to
 * @gst_video_decoder_finish_frame, or call @gst_video_decoder_drop_frame.
 */
/* TODO
 * * Add a flag/boolean for I-frame-only/image decoders so we can do extra
 *   features, like applying QoS on input (as opposed to after the frame is
 *   decoded).
 * * Add a flag/boolean for decoders that require keyframes, so the base
 *   class can automatically discard non-keyframes before one has arrived
 * * Detect reordered frame/timestamps and fix the pts/dts
 * * Support for GstIndex (or shall we not care ?)
 * * Calculate actual latency based on input/output timestamp/frame_number
 *   and if it exceeds the recorded one, save it and emit a GST_MESSAGE_LATENCY
 * * Emit latency message when it changes
 */
/* Implementation notes:
 * The Video Decoder base class operates in 2 primary processing modes,
 * depending on whether forward or reverse playback is requested.
 *
 * Forward playback:
 *   * Incoming buffer -> @parse() -> add_to_frame()/have_frame() ->
 *     handle_frame() -> push downstream
 *
 * Reverse playback is more complicated, since it involves gathering incoming
 * data regions as we loop backwards through the upstream data. The processing
 * concept (using incoming buffers as containing one frame each to simplify
 * things) is:
 *
 * Upstream data we want to play:
 *  Buffer encoded order:  1  2  3  4  5  6  7  8  9  EOS
 *  Keyframe flag:            K        K        K
 *  Groupings:             AAAAAAA  BBBBBBB  CCCCCCC
 *
 * Input:
 *  Buffer reception order:  7  8  9  4  5  6  1  2  3  EOS
 *  Discont flag:            D        D        D
 *
 * - Each Discont marks a discont in the decoding order.
 * - The keyframes mark where we can start decoding.
 *
 * Initially, we prepend incoming buffers to the gather queue. Whenever the
 * discont flag is set on an incoming buffer, the gather queue is flushed out
 * before the new buffer is collected.
 *
 * The above data will be accumulated in the gather queue like this:
 *
 *   gather queue:    9  8  7
 *                          D
 *
 * When buffer 4 is received (with a DISCONT), we flush the gather queue like
 * this:
 *
 *   while (gather)
 *     take head of queue and prepend to parse queue (this reverses the
 *     sequence, so parse queue is 7 -> 8 -> 9)
 *
 * Next, we process the parse queue, which now contains all un-parsed packets
 * (including any leftover ones from the previous decode section)
 *
 * for each buffer now in the parse queue:
 *   Call the subclass parse function, prepending each resulting frame to
 *   the parse_gather queue. Buffers which precede the first one that
 *   produces a parsed frame are retained in the parse queue for
 *   re-processing on the next cycle of parsing.
 *
 * The parse_gather queue now contains frame objects ready for decoding,
 * in reverse order.
 *   parse_gather: 9 -> 8 -> 7
 *
 * while (parse_gather)
 *   Take the head of the queue and prepend it to the decode queue
 *   If the frame was a keyframe, process the decode queue
 *   decode is now 7-8-9
 *
 * Processing the decode queue results in frames with attached output buffers
 * stored in the 'output_queue' ready for outputting in reverse order.
 *
 * After we flushed the gather queue and parsed it, we add 4 to the (now empty)
 * gather queue. We get the following situation:
 *
 *   gather queue:    4
 *   decode queue:    7  8  9
 *
 * After we received 5 (Keyframe) and 6:
 *
 *   gather queue:    6  5  4
 *   decode queue:    7  8  9
 *
 * When we receive 1 (DISCONT), which triggers a flush of the gather queue:
 *
 *   Copy head of the gather queue (6) to decode queue:
 *
 *     gather queue:    5  4
 *     decode queue:    6  7  8  9
 *
 *   Copy head of the gather queue (5) to decode queue. This is a keyframe so
 *   we can start decoding.
 *
 *     gather queue:    4
 *     decode queue:    5  6  7  8  9
 *
 *   Decode frames in decode queue, store raw decoded data in output queue; we
 *   can take the head of the decode queue and prepend the decoded result in
 *   the output queue:
 *
 *     gather queue:    4
 *     output queue:    9  8  7  6  5
 *
 *   Now output all the frames in the output queue, picking a frame from the
 *   head of the queue each time.
 *
 *   Copy head of the gather queue (4) to decode queue; since we flushed the
 *   gather queue we can now store the input buffer in the gather queue:
 *
 *     gather queue:    1
 *     decode queue:    4
 *
 * When we receive EOS, the queues look like:
 *
 *   gather queue:    3  2  1
 *   decode queue:    4
 *
 * Fill the decode queue; the first keyframe we copy is 2:
 *
 *   gather queue:    1
 *   decode queue:    2  3  4
 *
 * Decoding and outputting as above yields:
 *
 *   output queue:    4  3  2
 *
 * Leftover buffer 1 cannot be decoded and must be discarded.
 */
#include "gstvideodecoder.h"
#include "gstvideoutils.h"

#include <gst/video/video.h>
#include <gst/video/video-event.h>
#include <gst/video/gstvideopool.h>
#include <gst/video/gstvideometa.h>

GST_DEBUG_CATEGORY (videodecoder_debug);
#define GST_CAT_DEFAULT videodecoder_debug

#define GST_VIDEO_DECODER_GET_PRIVATE(obj)  \
    (G_TYPE_INSTANCE_GET_PRIVATE ((obj), GST_TYPE_VIDEO_DECODER, \
        GstVideoDecoderPrivate))
struct _GstVideoDecoderPrivate
{
  /* FIXME introduce a context ? */

  GstBufferPool *pool;
  GstAllocator *allocator;
  GstAllocationParams params;

  GstAdapter *input_adapter;
  /* assembles current frame */
  GstAdapter *output_adapter;

  /* Whether we attempt to convert newsegment from bytes to
   * time using a bitrate estimation */
  gboolean do_estimate_rate;

  /* Whether input is considered packetized or not */
  gboolean packetized;
  /* whether a caps event is pending to be handled */
  gboolean do_caps;

  gint error_count;

  /* ... being tracked here;
   * only available during parsing */
  GstVideoCodecFrame *current_frame;
  /* events that should apply to the current frame */
  GList *current_frame_events;

  /* relative offset of input data */
  guint64 input_offset;
  /* relative offset of frame */
  guint64 frame_offset;
  /* tracking ts and offsets */

  /* last outgoing ts */
  GstClockTime last_timestamp_out;
  /* incoming pts - dts */
  GstClockTime pts_delta;
  gboolean reordered_output;

  /* reverse playback */
  /* collected parsed frames */
  GList *parse_gather;
  /* frames to be handled == decoded */
  GList *decode;
  /* collected output - of buffer objects, not frames */
  GList *output_queued;

  /* base_picture_number is the picture number of the reference picture */
  guint64 base_picture_number;
  /* combine with base_picture_number, framerate and calcs to yield
   * (presentation) ts */
  GstClockTime base_timestamp;

  /* FIXME : reorder_depth is never set */
  int distance_from_sync;

  guint32 system_frame_number;
  guint32 decode_frame_number;

  GList *frames;                /* Protected with OBJECT_LOCK */
  GstVideoCodecState *input_state;
  GstVideoCodecState *output_state;     /* OBJECT_LOCK and STREAM_LOCK */
  gboolean output_state_changed;

  gdouble proportion;           /* OBJECT_LOCK */
  GstClockTime earliest_time;   /* OBJECT_LOCK */
  GstClockTime qos_frame_duration;      /* OBJECT_LOCK */

  /* qos messages: frames dropped/processed */
  guint dropped;
  guint processed;

  /* Outgoing byte size ? */
  gint64 bytes_out;
  gint64 time;

  gboolean tags_changed;
};
static GstElementClass *parent_class = NULL;
static void gst_video_decoder_class_init (GstVideoDecoderClass * klass);
static void gst_video_decoder_init (GstVideoDecoder * dec,
    GstVideoDecoderClass * klass);

static void gst_video_decoder_finalize (GObject * object);

static gboolean gst_video_decoder_setcaps (GstVideoDecoder * dec,
    GstCaps * caps);
static gboolean gst_video_decoder_sink_event (GstPad * pad, GstObject * parent,
    GstEvent * event);
static gboolean gst_video_decoder_src_event (GstPad * pad, GstObject * parent,
    GstEvent * event);
static GstFlowReturn gst_video_decoder_chain (GstPad * pad, GstObject * parent,
    GstBuffer * buf);
static gboolean gst_video_decoder_sink_query (GstPad * pad, GstObject * parent,
    GstQuery * query);
static GstStateChangeReturn gst_video_decoder_change_state (GstElement *
    element, GstStateChange transition);
static gboolean gst_video_decoder_src_query (GstPad * pad, GstObject * parent,
    GstQuery * query);
static void gst_video_decoder_reset (GstVideoDecoder * decoder, gboolean full);

static GstFlowReturn gst_video_decoder_decode_frame (GstVideoDecoder * decoder,
    GstVideoCodecFrame * frame);

static void gst_video_decoder_release_frame (GstVideoDecoder * dec,
    GstVideoCodecFrame * frame);
static GstClockTime gst_video_decoder_get_frame_duration (GstVideoDecoder *
    decoder, GstVideoCodecFrame * frame);
static GstVideoCodecFrame *gst_video_decoder_new_frame (GstVideoDecoder *
    decoder);
static GstFlowReturn gst_video_decoder_clip_and_push_buf (GstVideoDecoder *
    decoder, GstBuffer * buf);
static GstFlowReturn gst_video_decoder_flush_parse (GstVideoDecoder * dec,
    gboolean at_eos);

static void gst_video_decoder_clear_queues (GstVideoDecoder * dec);

static gboolean gst_video_decoder_sink_event_default (GstVideoDecoder * decoder,
    GstEvent * event);
static gboolean gst_video_decoder_src_event_default (GstVideoDecoder * decoder,
    GstEvent * event);
static gboolean gst_video_decoder_decide_allocation_default (GstVideoDecoder *
    decoder, GstQuery * query);
static gboolean gst_video_decoder_propose_allocation_default (GstVideoDecoder *
    decoder, GstQuery * query);
static gboolean gst_video_decoder_negotiate_default (GstVideoDecoder * decoder);
static GstFlowReturn gst_video_decoder_parse_available (GstVideoDecoder * dec,
    gboolean at_eos);
/* we can't use G_DEFINE_ABSTRACT_TYPE because we need the klass in the _init
 * method to get to the padtemplates */
GType
gst_video_decoder_get_type (void)
{
  static volatile gsize type = 0;

  if (g_once_init_enter (&type)) {
    GType _type;
    static const GTypeInfo info = {
      sizeof (GstVideoDecoderClass),
      NULL,
      NULL,
      (GClassInitFunc) gst_video_decoder_class_init,
      NULL,
      NULL,
      sizeof (GstVideoDecoder),
      0,
      (GInstanceInitFunc) gst_video_decoder_init,
    };

    _type = g_type_register_static (GST_TYPE_ELEMENT,
        "GstVideoDecoder", &info, G_TYPE_FLAG_ABSTRACT);
    g_once_init_leave (&type, _type);
  }
  return type;
}

static void
gst_video_decoder_class_init (GstVideoDecoderClass * klass)
{
  GObjectClass *gobject_class;
  GstElementClass *gstelement_class;

  gobject_class = G_OBJECT_CLASS (klass);
  gstelement_class = GST_ELEMENT_CLASS (klass);

  GST_DEBUG_CATEGORY_INIT (videodecoder_debug, "videodecoder", 0,
      "Base Video Decoder");

  parent_class = g_type_class_peek_parent (klass);
  g_type_class_add_private (klass, sizeof (GstVideoDecoderPrivate));

  gobject_class->finalize = gst_video_decoder_finalize;

  gstelement_class->change_state =
      GST_DEBUG_FUNCPTR (gst_video_decoder_change_state);

  klass->sink_event = gst_video_decoder_sink_event_default;
  klass->src_event = gst_video_decoder_src_event_default;
  klass->decide_allocation = gst_video_decoder_decide_allocation_default;
  klass->propose_allocation = gst_video_decoder_propose_allocation_default;
  klass->negotiate = gst_video_decoder_negotiate_default;
}

static void
gst_video_decoder_init (GstVideoDecoder * decoder, GstVideoDecoderClass * klass)
{
  GstPadTemplate *pad_template;
  GstPad *pad;

  GST_DEBUG_OBJECT (decoder, "gst_video_decoder_init");

  decoder->priv = GST_VIDEO_DECODER_GET_PRIVATE (decoder);

  pad_template =
      gst_element_class_get_pad_template (GST_ELEMENT_CLASS (klass), "sink");
  g_return_if_fail (pad_template != NULL);

  decoder->sinkpad = pad = gst_pad_new_from_template (pad_template, "sink");

  gst_pad_set_chain_function (pad, GST_DEBUG_FUNCPTR (gst_video_decoder_chain));
  gst_pad_set_event_function (pad,
      GST_DEBUG_FUNCPTR (gst_video_decoder_sink_event));
  gst_pad_set_query_function (pad,
      GST_DEBUG_FUNCPTR (gst_video_decoder_sink_query));
  gst_element_add_pad (GST_ELEMENT (decoder), decoder->sinkpad);

  pad_template =
      gst_element_class_get_pad_template (GST_ELEMENT_CLASS (klass), "src");
  g_return_if_fail (pad_template != NULL);

  decoder->srcpad = pad = gst_pad_new_from_template (pad_template, "src");

  gst_pad_set_event_function (pad,
      GST_DEBUG_FUNCPTR (gst_video_decoder_src_event));
  gst_pad_set_query_function (pad,
      GST_DEBUG_FUNCPTR (gst_video_decoder_src_query));
  gst_pad_use_fixed_caps (pad);
  gst_element_add_pad (GST_ELEMENT (decoder), decoder->srcpad);

  gst_segment_init (&decoder->input_segment, GST_FORMAT_TIME);
  gst_segment_init (&decoder->output_segment, GST_FORMAT_TIME);

  g_rec_mutex_init (&decoder->stream_lock);

  decoder->priv->input_adapter = gst_adapter_new ();
  decoder->priv->output_adapter = gst_adapter_new ();
  decoder->priv->packetized = TRUE;

  gst_video_decoder_reset (decoder, TRUE);
}
static gboolean
gst_video_rawvideo_convert (GstVideoCodecState * state,
    GstFormat src_format, gint64 src_value,
    GstFormat * dest_format, gint64 * dest_value)
{
  gboolean res = FALSE;
  guint vidsize;
  guint fps_n, fps_d;

  g_return_val_if_fail (dest_format != NULL, FALSE);
  g_return_val_if_fail (dest_value != NULL, FALSE);

  if (src_format == *dest_format || src_value == 0 || src_value == -1) {
    *dest_value = src_value;
    return TRUE;
  }

  vidsize = GST_VIDEO_INFO_SIZE (&state->info);
  fps_n = GST_VIDEO_INFO_FPS_N (&state->info);
  fps_d = GST_VIDEO_INFO_FPS_D (&state->info);

  if (src_format == GST_FORMAT_BYTES &&
      *dest_format == GST_FORMAT_DEFAULT && vidsize) {
    /* convert bytes to frames */
    *dest_value = gst_util_uint64_scale_int (src_value, 1, vidsize);
    res = TRUE;
  } else if (src_format == GST_FORMAT_DEFAULT &&
      *dest_format == GST_FORMAT_BYTES && vidsize) {
    /* convert frames to bytes */
    *dest_value = src_value * vidsize;
    res = TRUE;
  } else if (src_format == GST_FORMAT_DEFAULT &&
      *dest_format == GST_FORMAT_TIME && fps_n) {
    /* convert frames to time */
    *dest_value = gst_util_uint64_scale (src_value, GST_SECOND * fps_d, fps_n);
    res = TRUE;
  } else if (src_format == GST_FORMAT_TIME &&
      *dest_format == GST_FORMAT_DEFAULT && fps_d) {
    /* convert time to frames */
    *dest_value = gst_util_uint64_scale (src_value, fps_n, GST_SECOND * fps_d);
    res = TRUE;
  } else if (src_format == GST_FORMAT_TIME &&
      *dest_format == GST_FORMAT_BYTES && fps_d && vidsize) {
    /* convert time to bytes */
    *dest_value = gst_util_uint64_scale (src_value,
        fps_n * vidsize, GST_SECOND * fps_d);
    res = TRUE;
  } else if (src_format == GST_FORMAT_BYTES &&
      *dest_format == GST_FORMAT_TIME && fps_n && vidsize) {
    /* convert bytes to time */
    *dest_value = gst_util_uint64_scale (src_value,
        GST_SECOND * fps_d, fps_n * vidsize);
    res = TRUE;
  }

  return res;
}
static gboolean
gst_video_encoded_video_convert (gint64 bytes, gint64 time,
    GstFormat src_format, gint64 src_value, GstFormat * dest_format,
    gint64 * dest_value)
{
  gboolean res = FALSE;

  g_return_val_if_fail (dest_format != NULL, FALSE);
  g_return_val_if_fail (dest_value != NULL, FALSE);

  if (G_UNLIKELY (src_format == *dest_format || src_value == 0 ||
          src_value == -1)) {
    *dest_value = src_value;
    return TRUE;
  }

  if (bytes <= 0 || time <= 0) {
    GST_DEBUG ("not enough metadata yet to convert");
    return FALSE;
  }

  switch (src_format) {
    case GST_FORMAT_BYTES:
      switch (*dest_format) {
        case GST_FORMAT_TIME:
          *dest_value = gst_util_uint64_scale (src_value, time, bytes);
          res = TRUE;
          break;
        default:
          break;
      }
      break;
    case GST_FORMAT_TIME:
      switch (*dest_format) {
        case GST_FORMAT_BYTES:
          *dest_value = gst_util_uint64_scale (src_value, bytes, time);
          res = TRUE;
          break;
        default:
          break;
      }
      break;
    default:
      GST_DEBUG ("unhandled conversion from %d to %d", src_format,
          *dest_format);
      break;
  }

  return res;
}

static GstVideoCodecState *
_new_input_state (GstCaps * caps)
{
  GstVideoCodecState *state;
  GstStructure *structure;
  const GValue *codec_data;

  state = g_slice_new0 (GstVideoCodecState);
  state->ref_count = 1;
  gst_video_info_init (&state->info);
  if (G_UNLIKELY (!gst_video_info_from_caps (&state->info, caps)))
    goto parse_fail;
  state->caps = gst_caps_ref (caps);

  structure = gst_caps_get_structure (caps, 0);

  codec_data = gst_structure_get_value (structure, "codec_data");
  if (codec_data && G_VALUE_TYPE (codec_data) == GST_TYPE_BUFFER)
    state->codec_data = GST_BUFFER (g_value_dup_boxed (codec_data));

  return state;

parse_fail:
  {
    g_slice_free (GstVideoCodecState, state);
    return NULL;
  }
}
static GstVideoCodecState *
_new_output_state (GstVideoFormat fmt, guint width, guint height,
    GstVideoCodecState * reference)
{
  GstVideoCodecState *state;

  state = g_slice_new0 (GstVideoCodecState);
  state->ref_count = 1;
  gst_video_info_init (&state->info);
  gst_video_info_set_format (&state->info, fmt, width, height);

  if (reference) {
    GstVideoInfo *tgt, *ref;

    tgt = &state->info;
    ref = &reference->info;

    /* Copy over extra fields from reference state */
    tgt->interlace_mode = ref->interlace_mode;
    tgt->flags = ref->flags;
    tgt->chroma_site = ref->chroma_site;
    /* only copy values that are not unknown so that we don't override the
     * defaults. subclasses should really fill these in when they know. */
    if (ref->colorimetry.range)
      tgt->colorimetry.range = ref->colorimetry.range;
    if (ref->colorimetry.matrix)
      tgt->colorimetry.matrix = ref->colorimetry.matrix;
    if (ref->colorimetry.transfer)
      tgt->colorimetry.transfer = ref->colorimetry.transfer;
    if (ref->colorimetry.primaries)
      tgt->colorimetry.primaries = ref->colorimetry.primaries;
    GST_DEBUG ("reference par %d/%d fps %d/%d",
        ref->par_n, ref->par_d, ref->fps_n, ref->fps_d);
    tgt->par_n = ref->par_n;
    tgt->par_d = ref->par_d;
    tgt->fps_n = ref->fps_n;
    tgt->fps_d = ref->fps_d;
  }

  GST_DEBUG ("output par %d/%d fps %d/%d",
      state->info.par_n, state->info.par_d,
      state->info.fps_n, state->info.fps_d);

  return state;
}

static gboolean
gst_video_decoder_setcaps (GstVideoDecoder * decoder, GstCaps * caps)
{
  GstVideoDecoderClass *decoder_class;
  GstVideoCodecState *state;
  gboolean ret = TRUE;

  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);

  GST_DEBUG_OBJECT (decoder, "setcaps %" GST_PTR_FORMAT, caps);

  state = _new_input_state (caps);

  if (G_UNLIKELY (state == NULL))
    goto parse_fail;

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  if (decoder_class->set_format)
    ret = decoder_class->set_format (decoder, state);

  if (!ret)
    goto refused_format;

  if (decoder->priv->input_state)
    gst_video_codec_state_unref (decoder->priv->input_state);
  decoder->priv->input_state = state;

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return ret;

parse_fail:
  {
    GST_WARNING_OBJECT (decoder, "Failed to parse caps");
    return FALSE;
  }

refused_format:
  {
    GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
    GST_WARNING_OBJECT (decoder, "Subclass refused caps");
    gst_video_codec_state_unref (state);
    return FALSE;
  }
}
static void
gst_video_decoder_finalize (GObject * object)
{
  GstVideoDecoder *decoder;

  decoder = GST_VIDEO_DECODER (object);

  GST_DEBUG_OBJECT (object, "finalize");

  g_rec_mutex_clear (&decoder->stream_lock);

  if (decoder->priv->input_adapter) {
    g_object_unref (decoder->priv->input_adapter);
    decoder->priv->input_adapter = NULL;
  }
  if (decoder->priv->output_adapter) {
    g_object_unref (decoder->priv->output_adapter);
    decoder->priv->output_adapter = NULL;
  }

  if (decoder->priv->input_state)
    gst_video_codec_state_unref (decoder->priv->input_state);
  if (decoder->priv->output_state)
    gst_video_codec_state_unref (decoder->priv->output_state);

  if (decoder->priv->pool) {
    gst_object_unref (decoder->priv->pool);
    decoder->priv->pool = NULL;
  }

  if (decoder->priv->allocator) {
    gst_object_unref (decoder->priv->allocator);
    decoder->priv->allocator = NULL;
  }

  G_OBJECT_CLASS (parent_class)->finalize (object);
}

/* hard == FLUSH, otherwise discont */
static GstFlowReturn
gst_video_decoder_flush (GstVideoDecoder * dec, gboolean hard)
{
  GstVideoDecoderClass *klass;
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn ret = GST_FLOW_OK;

  klass = GST_VIDEO_DECODER_GET_CLASS (dec);

  GST_LOG_OBJECT (dec, "flush hard %d", hard);

  /* Inform subclass */
  if (klass->reset)
    klass->reset (dec, hard);

  /* FIXME make some more distinction between hard and soft,
   * but subclass may not be prepared for that */
  /* FIXME perhaps also clear pending frames ?,
   * but again, subclass may still come up with one of those */

  /* TODO ? finish/drain some stuff */
  if (hard) {
    gst_segment_init (&dec->input_segment, GST_FORMAT_UNDEFINED);
    gst_segment_init (&dec->output_segment, GST_FORMAT_UNDEFINED);
    gst_video_decoder_clear_queues (dec);
    priv->error_count = 0;
    g_list_free_full (priv->current_frame_events,
        (GDestroyNotify) gst_event_unref);
    priv->current_frame_events = NULL;
  }
  /* and get (re)set for the sequel */
  gst_video_decoder_reset (dec, FALSE);

  return ret;
}
static gboolean
gst_video_decoder_push_event (GstVideoDecoder * decoder, GstEvent * event)
{
  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_SEGMENT:
    {
      GstSegment segment;

      GST_VIDEO_DECODER_STREAM_LOCK (decoder);

      gst_event_copy_segment (event, &segment);

      GST_DEBUG_OBJECT (decoder, "segment %" GST_SEGMENT_FORMAT, &segment);

      if (segment.format != GST_FORMAT_TIME) {
        GST_DEBUG_OBJECT (decoder, "received non TIME newsegment");
        GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
        break;
      }

      decoder->output_segment = segment;
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      break;
    }
    default:
      break;
  }

  return gst_pad_push_event (decoder->srcpad, event);
}

static GstFlowReturn
gst_video_decoder_parse_available (GstVideoDecoder * dec, gboolean at_eos)
{
  GstVideoDecoderClass *decoder_class = GST_VIDEO_DECODER_GET_CLASS (dec);
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn ret = GST_FLOW_OK;
  gsize start_size, available;

  available = gst_adapter_available (priv->input_adapter);
  start_size = 0;

  while (ret == GST_FLOW_OK && available && start_size != available) {
    /* current frame may have been parsed and handled,
     * so we need to set up a new one when asking subclass to parse */
    if (priv->current_frame == NULL)
      priv->current_frame = gst_video_decoder_new_frame (dec);

    start_size = available;
    ret = decoder_class->parse (dec, priv->current_frame,
        priv->input_adapter, at_eos);
    available = gst_adapter_available (priv->input_adapter);
  }

  return ret;
}

static GstFlowReturn
gst_video_decoder_drain_out (GstVideoDecoder * dec, gboolean at_eos)
{
  GstVideoDecoderClass *decoder_class = GST_VIDEO_DECODER_GET_CLASS (dec);
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn ret = GST_FLOW_OK;

  GST_VIDEO_DECODER_STREAM_LOCK (dec);

  if (dec->input_segment.rate > 0.0) {
    /* Forward mode, if unpacketized, give the child class
     * a final chance to flush out packets */
    if (!priv->packetized) {
      ret = gst_video_decoder_parse_available (dec, TRUE);
    }
  } else {
    /* Reverse playback mode */
    ret = gst_video_decoder_flush_parse (dec, TRUE);
  }

  if (at_eos) {
    if (decoder_class->finish)
      ret = decoder_class->finish (dec);
  }

  GST_VIDEO_DECODER_STREAM_UNLOCK (dec);

  return ret;
}
960 gst_video_decoder_sink_event_default (GstVideoDecoder * decoder,
963 GstVideoDecoderPrivate *priv;
964 gboolean ret = FALSE;
965 gboolean forward_immediate = FALSE;
967 priv = decoder->priv;
969 switch (GST_EVENT_TYPE (event)) {
970 case GST_EVENT_STREAM_START:
972 GstFlowReturn flow_ret = GST_FLOW_OK;
974 flow_ret = gst_video_decoder_drain_out (decoder, FALSE);
975 ret = (flow_ret == GST_FLOW_OK);
977 /* Forward STREAM_START immediately. Everything is drained after
978 * the STREAM_START event and we can forward this event immediately
979 * now without having buffers out of order.
981 forward_immediate = TRUE;
988 gst_event_parse_caps (event, &caps);
990 decoder->priv->do_caps = TRUE;
991 gst_event_unref (event);
997 GstFlowReturn flow_ret = GST_FLOW_OK;
999 flow_ret = gst_video_decoder_drain_out (decoder, TRUE);
1000 ret = (flow_ret == GST_FLOW_OK);
1001 /* Forward EOS immediately. This is required because no
1002 * buffer or serialized event will come after EOS and
1003 * nothing could trigger another _finish_frame() call.
1005 * The subclass can override this behaviour by overriding
1006 * the ::sink_event() vfunc and not chaining up to the
1007 * parent class' ::sink_event() until a later time.
1009 forward_immediate = TRUE;
1014 GstFlowReturn flow_ret = GST_FLOW_OK;
1016 flow_ret = gst_video_decoder_drain_out (decoder, FALSE);
1017 ret = (flow_ret == GST_FLOW_OK);
1019 /* Forward GAP immediately. Everything is drained after
1020 * the GAP event and we can forward this event immediately
1021 * now without having buffers out of order.
1023 forward_immediate = TRUE;
1026 case GST_EVENT_CUSTOM_DOWNSTREAM:
1029 GstFlowReturn flow_ret = GST_FLOW_OK;
1031 if (gst_video_event_parse_still_frame (event, &in_still)) {
1033 GST_DEBUG_OBJECT (decoder, "draining current data for still-frame");
1034 flow_ret = gst_video_decoder_drain_out (decoder, FALSE);
1035 ret = (flow_ret == GST_FLOW_OK);
1037 /* Forward STILL_FRAME immediately. Everything is drained after
1038 * the STILL_FRAME event and we can forward this event immediately
1039 * now without having buffers out of order.
1041 forward_immediate = TRUE;
1045 case GST_EVENT_SEGMENT:
1049 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
1051 gst_event_copy_segment (event, &segment);
1053 if (segment.format == GST_FORMAT_TIME) {
1054 GST_DEBUG_OBJECT (decoder,
1055 "received TIME SEGMENT %" GST_SEGMENT_FORMAT, &segment);
1059 GST_DEBUG_OBJECT (decoder,
1060 "received SEGMENT %" GST_SEGMENT_FORMAT, &segment);
1062 /* handle newsegment as a result from our legacy simple seeking */
1063 /* note that initial 0 should convert to 0 in any case */
1064 if (priv->do_estimate_rate &&
1065 gst_pad_query_convert (decoder->sinkpad, GST_FORMAT_BYTES,
1066 segment.start, GST_FORMAT_TIME, &start)) {
1067 /* best attempt convert */
1068 /* as these are only estimates, stop is kept open-ended to avoid
1069 * premature cutting */
1070 GST_DEBUG_OBJECT (decoder,
1071 "converted to TIME start %" GST_TIME_FORMAT,
1072 GST_TIME_ARGS (start));
1073 segment.start = start;
1074 segment.stop = GST_CLOCK_TIME_NONE;
1075 segment.time = start;
1077 gst_event_unref (event);
1078 event = gst_event_new_segment (&segment);
1080 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1081 goto newseg_wrong_format;
      gst_video_decoder_flush (decoder, FALSE);
      priv->base_timestamp = GST_CLOCK_TIME_NONE;
      priv->base_picture_number = 0;
      decoder->input_segment = segment;
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
    case GST_EVENT_FLUSH_STOP:
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      /* well, this is kind of worse than a DISCONT */
      gst_video_decoder_flush (decoder, TRUE);
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
      /* Forward FLUSH_STOP immediately. This is required because it is
       * expected to be forwarded immediately and no buffers are queued
      forward_immediate = TRUE;
      gst_event_parse_tag (event, &tags);
      if (gst_tag_list_get_scope (tags) == GST_TAG_SCOPE_STREAM) {
        gst_video_decoder_merge_tags (decoder, tags, GST_TAG_MERGE_REPLACE);
        gst_event_unref (event);
  /* Forward non-serialized events immediately, and all other
   * events which can be forwarded immediately without potentially
   * causing the event to go out of order with other events and
   * buffers as decided above.
  if (!GST_EVENT_IS_SERIALIZED (event) || forward_immediate) {
    ret = gst_video_decoder_push_event (decoder, event);
    GST_VIDEO_DECODER_STREAM_LOCK (decoder);
    decoder->priv->current_frame_events =
        g_list_prepend (decoder->priv->current_frame_events, event);
    GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
newseg_wrong_format:
    GST_DEBUG_OBJECT (decoder, "received non TIME newsegment");
    gst_event_unref (event);
gst_video_decoder_sink_event (GstPad * pad, GstObject * parent,
  GstVideoDecoder *decoder;
  GstVideoDecoderClass *decoder_class;
  gboolean ret = FALSE;
  decoder = GST_VIDEO_DECODER (parent);
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);
  GST_DEBUG_OBJECT (decoder, "received event %d, %s", GST_EVENT_TYPE (event),
      GST_EVENT_TYPE_NAME (event));
  if (decoder_class->sink_event)
    ret = decoder_class->sink_event (decoder, event);
/* perform upstream byte <-> time conversion (duration, seeking)
 * if subclass allows and if enough data for moderately decent conversion */
static inline gboolean
gst_video_decoder_do_byte (GstVideoDecoder * dec)
  return dec->priv->do_estimate_rate && (dec->priv->bytes_out > 0)
      && (dec->priv->time > GST_SECOND);
gst_video_decoder_do_seek (GstVideoDecoder * dec, GstEvent * event)
  GstSeekType start_type, end_type;
  gint64 start, start_time, end_time;
  GstSegment seek_segment;
  gst_event_parse_seek (event, &rate, &format, &flags, &start_type,
      &start_time, &end_type, &end_time);
  /* we'll handle plain open-ended flushing seeks with the simple approach */
    GST_DEBUG_OBJECT (dec, "unsupported seek: rate");
  if (start_type != GST_SEEK_TYPE_SET) {
    GST_DEBUG_OBJECT (dec, "unsupported seek: start time");
  if (end_type != GST_SEEK_TYPE_NONE ||
      (end_type == GST_SEEK_TYPE_SET && end_time != GST_CLOCK_TIME_NONE)) {
    GST_DEBUG_OBJECT (dec, "unsupported seek: end time");
  if (!(flags & GST_SEEK_FLAG_FLUSH)) {
    GST_DEBUG_OBJECT (dec, "unsupported seek: not flushing");
  memcpy (&seek_segment, &dec->output_segment, sizeof (seek_segment));
  gst_segment_do_seek (&seek_segment, rate, format, flags, start_type,
      start_time, end_type, end_time, NULL);
  start_time = seek_segment.position;
  if (!gst_pad_query_convert (dec->sinkpad, GST_FORMAT_TIME, start_time,
          GST_FORMAT_BYTES, &start)) {
    GST_DEBUG_OBJECT (dec, "conversion failed");
  seqnum = gst_event_get_seqnum (event);
  event = gst_event_new_seek (1.0, GST_FORMAT_BYTES, flags,
      GST_SEEK_TYPE_SET, start, GST_SEEK_TYPE_NONE, -1);
  gst_event_set_seqnum (event, seqnum);
  GST_DEBUG_OBJECT (dec, "seeking to %" GST_TIME_FORMAT " at byte offset %"
      G_GINT64_FORMAT, GST_TIME_ARGS (start_time), start);
  return gst_pad_push_event (dec->sinkpad, event);
gst_video_decoder_src_event_default (GstVideoDecoder * decoder,
  GstVideoDecoderPrivate *priv;
  gboolean res = FALSE;
  priv = decoder->priv;
  GST_DEBUG_OBJECT (decoder,
      "received event %d, %s", GST_EVENT_TYPE (event),
      GST_EVENT_TYPE_NAME (event));
  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_SEEK:
      GstSeekType start_type, stop_type;
      gint64 tstart, tstop;
      gst_event_parse_seek (event, &rate, &format, &flags, &start_type, &start,
      seqnum = gst_event_get_seqnum (event);
      /* upstream gets a chance first */
      if ((res = gst_pad_push_event (decoder->sinkpad, event)))
      /* if upstream fails for a time seek, maybe we can help if allowed */
      if (format == GST_FORMAT_TIME) {
        if (gst_video_decoder_do_byte (decoder))
          res = gst_video_decoder_do_seek (decoder, event);
      /* ... though a non-time seek can be aided as well */
      /* First bring the requested format to time */
          gst_pad_query_convert (decoder->srcpad, format, start,
              GST_FORMAT_TIME, &tstart)))
          gst_pad_query_convert (decoder->srcpad, format, stop,
              GST_FORMAT_TIME, &tstop)))
      /* then seek with time on the peer */
      event = gst_event_new_seek (rate, GST_FORMAT_TIME,
          flags, start_type, tstart, stop_type, tstop);
      gst_event_set_seqnum (event, seqnum);
      res = gst_pad_push_event (decoder->sinkpad, event);
      GstClockTimeDiff diff;
      GstClockTime timestamp;
      gst_event_parse_qos (event, &type, &proportion, &diff, &timestamp);
      GST_OBJECT_LOCK (decoder);
      priv->proportion = proportion;
      if (G_LIKELY (GST_CLOCK_TIME_IS_VALID (timestamp))) {
        if (G_UNLIKELY (diff > 0)) {
          priv->earliest_time = timestamp + 2 * diff + priv->qos_frame_duration;
          priv->earliest_time = timestamp + diff;
        priv->earliest_time = GST_CLOCK_TIME_NONE;
      GST_OBJECT_UNLOCK (decoder);
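      /* Worked example (illustrative numbers only): with a QoS timestamp of
       * 1s, a positive diff of 100ms (we are running late) and a 40ms frame
       * duration, earliest_time becomes 1s + 2 * 100ms + 40ms = 1.24s, so
       * frames whose running time falls before that deadline are candidates
       * for dropping. A negative diff (running early) simply moves the
       * deadline earlier by that amount. */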
      GST_DEBUG_OBJECT (decoder,
          "got QoS %" GST_TIME_FORMAT ", %" G_GINT64_FORMAT ", %g",
          GST_TIME_ARGS (timestamp), diff, proportion);
      res = gst_pad_push_event (decoder->sinkpad, event);
      res = gst_pad_push_event (decoder->sinkpad, event);
    GST_DEBUG_OBJECT (decoder, "could not convert format");
gst_video_decoder_src_event (GstPad * pad, GstObject * parent, GstEvent * event)
  GstVideoDecoder *decoder;
  GstVideoDecoderClass *decoder_class;
  gboolean ret = FALSE;
  decoder = GST_VIDEO_DECODER (parent);
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);
  GST_DEBUG_OBJECT (decoder, "received event %d, %s", GST_EVENT_TYPE (event),
      GST_EVENT_TYPE_NAME (event));
  if (decoder_class->src_event)
    ret = decoder_class->src_event (decoder, event);
gst_video_decoder_src_query (GstPad * pad, GstObject * parent, GstQuery * query)
  GstVideoDecoder *dec;
  gboolean res = TRUE;
  dec = GST_VIDEO_DECODER (parent);
  GST_LOG_OBJECT (dec, "handling query: %" GST_PTR_FORMAT, query);
  switch (GST_QUERY_TYPE (query)) {
    case GST_QUERY_POSITION:
      /* upstream gets a chance first */
      if ((res = gst_pad_peer_query (dec->sinkpad, query))) {
        GST_LOG_OBJECT (dec, "returning peer response");
      /* we start from the last seen time */
      time = dec->priv->last_timestamp_out;
      /* correct for the segment values */
      time = gst_segment_to_stream_time (&dec->output_segment,
          GST_FORMAT_TIME, time);
      GST_LOG_OBJECT (dec,
          "query %p: our time: %" GST_TIME_FORMAT, query, GST_TIME_ARGS (time));
      /* and convert to the final format */
      gst_query_parse_position (query, &format, NULL);
      if (!(res = gst_pad_query_convert (pad, GST_FORMAT_TIME, time,
      gst_query_set_position (query, format, value);
      GST_LOG_OBJECT (dec,
          "query %p: we return %" G_GINT64_FORMAT " (format %u)", query, value,
    case GST_QUERY_DURATION:
      /* upstream in any case */
      if ((res = gst_pad_query_default (pad, parent, query)))
      gst_query_parse_duration (query, &format, NULL);
      /* try answering TIME by converting from BYTE if subclass allows */
      if (format == GST_FORMAT_TIME && gst_video_decoder_do_byte (dec)) {
        if (gst_pad_peer_query_duration (dec->sinkpad, GST_FORMAT_BYTES,
          GST_LOG_OBJECT (dec, "upstream size %" G_GINT64_FORMAT, value);
          if (gst_pad_query_convert (dec->sinkpad,
                  GST_FORMAT_BYTES, value, GST_FORMAT_TIME, &value)) {
            gst_query_set_duration (query, GST_FORMAT_TIME, value);
    case GST_QUERY_CONVERT:
      GstFormat src_fmt, dest_fmt;
      gint64 src_val, dest_val;
      GST_DEBUG_OBJECT (dec, "convert query");
      gst_query_parse_convert (query, &src_fmt, &src_val, &dest_fmt, &dest_val);
      GST_OBJECT_LOCK (dec);
      if (dec->priv->output_state != NULL)
        res = gst_video_rawvideo_convert (dec->priv->output_state,
            src_fmt, src_val, &dest_fmt, &dest_val);
      GST_OBJECT_UNLOCK (dec);
      gst_query_set_convert (query, src_fmt, src_val, dest_fmt, dest_val);
    case GST_QUERY_LATENCY:
      GstClockTime min_latency, max_latency;
      res = gst_pad_peer_query (dec->sinkpad, query);
        gst_query_parse_latency (query, &live, &min_latency, &max_latency);
        GST_DEBUG_OBJECT (dec, "Peer qlatency: live %d, min %"
            GST_TIME_FORMAT " max %" GST_TIME_FORMAT, live,
            GST_TIME_ARGS (min_latency), GST_TIME_ARGS (max_latency));
        GST_OBJECT_LOCK (dec);
        min_latency += dec->priv->min_latency;
        if (dec->priv->max_latency == GST_CLOCK_TIME_NONE) {
          max_latency = GST_CLOCK_TIME_NONE;
        } else if (max_latency != GST_CLOCK_TIME_NONE) {
          max_latency += dec->priv->max_latency;
        GST_OBJECT_UNLOCK (dec);
        gst_query_set_latency (query, live, min_latency, max_latency);
      res = gst_pad_query_default (pad, parent, query);
    GST_ERROR_OBJECT (dec, "query failed");
gst_video_decoder_sink_query (GstPad * pad, GstObject * parent,
  GstVideoDecoder *decoder;
  GstVideoDecoderPrivate *priv;
  gboolean res = FALSE;
  decoder = GST_VIDEO_DECODER (parent);
  priv = decoder->priv;
  GST_LOG_OBJECT (decoder, "handling query: %" GST_PTR_FORMAT, query);
  switch (GST_QUERY_TYPE (query)) {
    case GST_QUERY_CONVERT:
      GstFormat src_fmt, dest_fmt;
      gint64 src_val, dest_val;
      gst_query_parse_convert (query, &src_fmt, &src_val, &dest_fmt, &dest_val);
          gst_video_encoded_video_convert (priv->bytes_out, priv->time, src_fmt,
          src_val, &dest_fmt, &dest_val);
      gst_query_set_convert (query, src_fmt, src_val, dest_fmt, dest_val);
    case GST_QUERY_ALLOCATION:{
      GstVideoDecoderClass *klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
      if (klass->propose_allocation)
        res = klass->propose_allocation (decoder, query);
      res = gst_pad_query_default (pad, parent, query);
    GST_DEBUG_OBJECT (decoder, "query failed");
typedef struct _Timestamp Timestamp;
  GstClockTime duration;
timestamp_free (Timestamp * ts)
  g_slice_free (Timestamp, ts);
gst_video_decoder_add_timestamp (GstVideoDecoder * decoder, GstBuffer * buffer)
  GstVideoDecoderPrivate *priv = decoder->priv;
  ts = g_slice_new (Timestamp);
  GST_LOG_OBJECT (decoder,
      "adding PTS %" GST_TIME_FORMAT " DTS %" GST_TIME_FORMAT
      " (offset:%" G_GUINT64_FORMAT ")",
      GST_TIME_ARGS (GST_BUFFER_PTS (buffer)),
      GST_TIME_ARGS (GST_BUFFER_DTS (buffer)), priv->input_offset);
  ts->offset = priv->input_offset;
  ts->pts = GST_BUFFER_PTS (buffer);
  ts->dts = GST_BUFFER_DTS (buffer);
  ts->duration = GST_BUFFER_DURATION (buffer);
  priv->timestamps = g_list_append (priv->timestamps, ts);
gst_video_decoder_get_timestamp_at_offset (GstVideoDecoder *
    decoder, guint64 offset, GstClockTime * pts, GstClockTime * dts,
    GstClockTime * duration)
#ifndef GST_DISABLE_GST_DEBUG
  guint64 got_offset = 0;
  *pts = GST_CLOCK_TIME_NONE;
  *dts = GST_CLOCK_TIME_NONE;
  *duration = GST_CLOCK_TIME_NONE;
  g = decoder->priv->timestamps;
    if (ts->offset <= offset) {
#ifndef GST_DISABLE_GST_DEBUG
      got_offset = ts->offset;
      *duration = ts->duration;
      timestamp_free (ts);
      decoder->priv->timestamps = g_list_remove (decoder->priv->timestamps, ts);
  GST_LOG_OBJECT (decoder,
      "got PTS %" GST_TIME_FORMAT " DTS %" GST_TIME_FORMAT " @ offs %"
      G_GUINT64_FORMAT " (wanted offset:%" G_GUINT64_FORMAT ")",
      GST_TIME_ARGS (*pts), GST_TIME_ARGS (*dts), got_offset, offset);
gst_video_decoder_clear_queues (GstVideoDecoder * dec)
  GstVideoDecoderPrivate *priv = dec->priv;
  g_list_free_full (priv->output_queued,
      (GDestroyNotify) gst_mini_object_unref);
  priv->output_queued = NULL;
  g_list_free_full (priv->gather, (GDestroyNotify) gst_mini_object_unref);
  priv->gather = NULL;
  g_list_free_full (priv->decode, (GDestroyNotify) gst_video_codec_frame_unref);
  priv->decode = NULL;
  g_list_free_full (priv->parse, (GDestroyNotify) gst_mini_object_unref);
  g_list_free_full (priv->parse_gather,
      (GDestroyNotify) gst_video_codec_frame_unref);
  priv->parse_gather = NULL;
  g_list_free_full (priv->frames, (GDestroyNotify) gst_video_codec_frame_unref);
  priv->frames = NULL;
gst_video_decoder_reset (GstVideoDecoder * decoder, gboolean full)
  GstVideoDecoderPrivate *priv = decoder->priv;
  GST_DEBUG_OBJECT (decoder, "reset full %d", full);
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
    gst_segment_init (&decoder->input_segment, GST_FORMAT_UNDEFINED);
    gst_segment_init (&decoder->output_segment, GST_FORMAT_UNDEFINED);
    gst_video_decoder_clear_queues (decoder);
    priv->error_count = 0;
    priv->max_errors = GST_VIDEO_DECODER_MAX_ERRORS;
    if (priv->input_state)
      gst_video_codec_state_unref (priv->input_state);
    priv->input_state = NULL;
    GST_OBJECT_LOCK (decoder);
    if (priv->output_state)
      gst_video_codec_state_unref (priv->output_state);
    priv->output_state = NULL;
    priv->qos_frame_duration = 0;
    GST_OBJECT_UNLOCK (decoder);
    priv->min_latency = 0;
    priv->max_latency = 0;
      gst_tag_list_unref (priv->tags);
    priv->tags_changed = FALSE;
    priv->reordered_output = FALSE;
  priv->discont = TRUE;
  priv->base_timestamp = GST_CLOCK_TIME_NONE;
  priv->last_timestamp_out = GST_CLOCK_TIME_NONE;
  priv->pts_delta = GST_CLOCK_TIME_NONE;
  priv->input_offset = 0;
  priv->frame_offset = 0;
  gst_adapter_clear (priv->input_adapter);
  gst_adapter_clear (priv->output_adapter);
  g_list_free_full (priv->timestamps, (GDestroyNotify) timestamp_free);
  priv->timestamps = NULL;
  if (priv->current_frame) {
    gst_video_codec_frame_unref (priv->current_frame);
    priv->current_frame = NULL;
  priv->processed = 0;
  priv->decode_frame_number = 0;
  priv->base_picture_number = 0;
  g_list_free_full (priv->frames, (GDestroyNotify) gst_video_codec_frame_unref);
  priv->frames = NULL;
  priv->bytes_out = 0;
  GST_OBJECT_LOCK (decoder);
  priv->earliest_time = GST_CLOCK_TIME_NONE;
  priv->proportion = 0.5;
  GST_OBJECT_UNLOCK (decoder);
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
static GstFlowReturn
gst_video_decoder_chain_forward (GstVideoDecoder * decoder,
    GstBuffer * buf, gboolean at_eos)
  GstVideoDecoderPrivate *priv;
  GstVideoDecoderClass *klass;
  GstFlowReturn ret = GST_FLOW_OK;
  klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
  priv = decoder->priv;
  g_return_val_if_fail (priv->packetized || klass->parse, GST_FLOW_ERROR);
  if (priv->current_frame == NULL)
    priv->current_frame = gst_video_decoder_new_frame (decoder);
  if (GST_BUFFER_PTS_IS_VALID (buf) && !priv->packetized) {
    gst_video_decoder_add_timestamp (decoder, buf);
  priv->input_offset += gst_buffer_get_size (buf);
  if (priv->packetized) {
    if (!GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT)) {
      GST_VIDEO_CODEC_FRAME_SET_SYNC_POINT (priv->current_frame);
    priv->current_frame->input_buffer = buf;
    if (decoder->input_segment.rate < 0.0) {
      priv->parse_gather =
          g_list_prepend (priv->parse_gather, priv->current_frame);
      ret = gst_video_decoder_decode_frame (decoder, priv->current_frame);
    priv->current_frame = NULL;
    gst_adapter_push (priv->input_adapter, buf);
    ret = gst_video_decoder_parse_available (decoder, at_eos);
  if (ret == GST_VIDEO_DECODER_FLOW_NEED_DATA)
static GstFlowReturn
gst_video_decoder_flush_decode (GstVideoDecoder * dec)
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn res = GST_FLOW_OK;
  GST_DEBUG_OBJECT (dec, "flushing buffers to decode");
  /* clear buffer and decoder state */
  gst_video_decoder_flush (dec, FALSE);
  walk = priv->decode;
    GstVideoCodecFrame *frame = (GstVideoCodecFrame *) (walk->data);
    GST_DEBUG_OBJECT (dec, "decoding frame %p buffer %p, PTS %" GST_TIME_FORMAT
        ", DTS %" GST_TIME_FORMAT, frame, frame->input_buffer,
        GST_TIME_ARGS (GST_BUFFER_PTS (frame->input_buffer)),
        GST_TIME_ARGS (GST_BUFFER_DTS (frame->input_buffer)));
    priv->decode = g_list_delete_link (priv->decode, walk);
    /* decode buffer, resulting data prepended to queue */
    res = gst_video_decoder_decode_frame (dec, frame);
    if (res != GST_FLOW_OK)
/* gst_video_decoder_flush_parse is called from the
 * chain_reverse() function when a buffer containing
 * a DISCONT arrives, indicating that reverse playback
 * looped back to the next data block; therefore
 * all available data should be fed through the
 * decoder and frames gathered for reversed output
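/* Sketch of the reverse-playback data flow implemented below (a summary of
 * the queues involved, not normative):
 *
 *   chain_reverse():  incoming buffers are prepended to priv->gather
 *   on DISCONT:       flush_parse() reverses gather onto priv->parse and
 *                     runs chain_forward() on each buffer, collecting the
 *                     parsed frames on priv->parse_gather
 *   flush_parse():    frames are moved from parse_gather to priv->decode
 *                     until a keyframe is reached, then flush_decode() runs
 *   flush_decode():   decoded buffers accumulate on priv->output_queued and
 *                     are finally pushed downstream in reverse order, with
 *                     missing timestamps computed backwards from durations
 */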
static GstFlowReturn
gst_video_decoder_flush_parse (GstVideoDecoder * dec, gboolean at_eos)
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn res = GST_FLOW_OK;
  GST_DEBUG_OBJECT (dec, "flushing buffers to parsing");
  /* Reverse the gather list, and prepend it to the parse list,
   * then flush to parse whatever we can */
  priv->gather = g_list_reverse (priv->gather);
  priv->parse = g_list_concat (priv->gather, priv->parse);
  priv->gather = NULL;
  /* clear buffer and decoder state */
  gst_video_decoder_flush (dec, FALSE);
    GstBuffer *buf = GST_BUFFER_CAST (walk->data);
    GList *next = walk->next;
    GST_DEBUG_OBJECT (dec, "parsing buffer %p, PTS %" GST_TIME_FORMAT
        ", DTS %" GST_TIME_FORMAT, buf, GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
        GST_TIME_ARGS (GST_BUFFER_DTS (buf)));
    /* parse buffer, resulting frames prepended to parse_gather queue */
    gst_buffer_ref (buf);
    res = gst_video_decoder_chain_forward (dec, buf, at_eos);
    /* if we generated output, we can discard the buffer, else we
     * keep it in the queue */
    if (priv->parse_gather) {
      GST_DEBUG_OBJECT (dec, "parsed buffer to %p", priv->parse_gather->data);
      priv->parse = g_list_delete_link (priv->parse, walk);
      gst_buffer_unref (buf);
      GST_DEBUG_OBJECT (dec, "buffer did not decode, keeping");
  /* now we can process frames. Start by moving each frame from the
   * parse_gather to the decode list, reversing the order as we go, and
   * stopping when/if we copy a keyframe. */
  GST_DEBUG_OBJECT (dec, "checking parsed frames for a keyframe to decode");
  walk = priv->parse_gather;
    GstVideoCodecFrame *frame = (GstVideoCodecFrame *) (walk->data);
    /* remove from the gather list */
    priv->parse_gather = g_list_remove_link (priv->parse_gather, walk);
    /* move it to the front of the decode queue */
    priv->decode = g_list_concat (walk, priv->decode);
    /* if we copied a keyframe, flush and decode the decode queue */
    if (GST_VIDEO_CODEC_FRAME_IS_SYNC_POINT (frame)) {
      GST_DEBUG_OBJECT (dec, "found keyframe %p with PTS %" GST_TIME_FORMAT
          ", DTS %" GST_TIME_FORMAT, frame,
          GST_TIME_ARGS (GST_BUFFER_PTS (frame->input_buffer)),
          GST_TIME_ARGS (GST_BUFFER_DTS (frame->input_buffer)));
      res = gst_video_decoder_flush_decode (dec);
      if (res != GST_FLOW_OK)
    walk = priv->parse_gather;
  /* now send queued data downstream */
  walk = priv->output_queued;
    GstBuffer *buf = GST_BUFFER_CAST (walk->data);
    if (G_LIKELY (res == GST_FLOW_OK)) {
      /* avoid stray DISCONT from forward processing,
       * which have no meaning in reverse pushing */
      GST_BUFFER_FLAG_UNSET (buf, GST_BUFFER_FLAG_DISCONT);
      /* Last chance to calculate a timestamp as we loop backwards
       * through the list */
      if (GST_BUFFER_TIMESTAMP (buf) != GST_CLOCK_TIME_NONE)
        priv->last_timestamp_out = GST_BUFFER_TIMESTAMP (buf);
      else if (priv->last_timestamp_out != GST_CLOCK_TIME_NONE &&
          GST_BUFFER_DURATION (buf) != GST_CLOCK_TIME_NONE) {
        GST_BUFFER_TIMESTAMP (buf) =
            priv->last_timestamp_out - GST_BUFFER_DURATION (buf);
        priv->last_timestamp_out = GST_BUFFER_TIMESTAMP (buf);
        GST_LOG_OBJECT (dec,
            "Calculated TS %" GST_TIME_FORMAT " working backwards",
            GST_TIME_ARGS (priv->last_timestamp_out));
      res = gst_video_decoder_clip_and_push_buf (dec, buf);
      gst_buffer_unref (buf);
    priv->output_queued =
        g_list_delete_link (priv->output_queued, priv->output_queued);
    walk = priv->output_queued;
static GstFlowReturn
gst_video_decoder_chain_reverse (GstVideoDecoder * dec, GstBuffer * buf)
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn result = GST_FLOW_OK;
  /* if we have a discont, move buffers to the decode list */
  if (!buf || GST_BUFFER_IS_DISCONT (buf)) {
    GST_DEBUG_OBJECT (dec, "received discont");
    /* parse and decode stuff in the gather and parse queues */
    gst_video_decoder_flush_parse (dec, FALSE);
  if (G_LIKELY (buf)) {
    GST_DEBUG_OBJECT (dec, "gathering buffer %p of size %" G_GSIZE_FORMAT ", "
        "PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT ", dur %"
        GST_TIME_FORMAT, buf, gst_buffer_get_size (buf),
        GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
        GST_TIME_ARGS (GST_BUFFER_DTS (buf)),
        GST_TIME_ARGS (GST_BUFFER_DURATION (buf)));
    /* add buffer to gather queue */
    priv->gather = g_list_prepend (priv->gather, buf);
static GstFlowReturn
gst_video_decoder_chain (GstPad * pad, GstObject * parent, GstBuffer * buf)
  GstVideoDecoder *decoder;
  GstFlowReturn ret = GST_FLOW_OK;
  decoder = GST_VIDEO_DECODER (parent);
  if (G_UNLIKELY (decoder->priv->do_caps)) {
    GstCaps *caps = gst_pad_get_current_caps (decoder->sinkpad);
      if (!gst_video_decoder_setcaps (decoder, caps)) {
        gst_caps_unref (caps);
        goto not_negotiated;
      gst_caps_unref (caps);
    decoder->priv->do_caps = FALSE;
  GST_LOG_OBJECT (decoder,
      "chain PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT " duration %"
      GST_TIME_FORMAT " size %" G_GSIZE_FORMAT,
      GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
      GST_TIME_ARGS (GST_BUFFER_DTS (buf)),
      GST_TIME_ARGS (GST_BUFFER_DURATION (buf)), gst_buffer_get_size (buf));
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
   * requiring the pad to be negotiated makes it impossible to use
   * oggdemux or filesrc ! decoder */
  if (decoder->input_segment.format == GST_FORMAT_UNDEFINED) {
    GstSegment *segment = &decoder->input_segment;
    GST_WARNING_OBJECT (decoder,
        "Received buffer without a new-segment. "
        "Assuming timestamps start from 0.");
    gst_segment_init (segment, GST_FORMAT_TIME);
    event = gst_event_new_segment (segment);
    decoder->priv->current_frame_events =
        g_list_prepend (decoder->priv->current_frame_events, event);
  if (decoder->input_segment.rate > 0.0)
    ret = gst_video_decoder_chain_forward (decoder, buf, FALSE);
    ret = gst_video_decoder_chain_reverse (decoder, buf);
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
    GST_ELEMENT_ERROR (decoder, CORE, NEGOTIATION, (NULL),
        ("decoder not initialized"));
    gst_buffer_unref (buf);
    return GST_FLOW_NOT_NEGOTIATED;
static GstStateChangeReturn
gst_video_decoder_change_state (GstElement * element, GstStateChange transition)
  GstVideoDecoder *decoder;
  GstVideoDecoderClass *decoder_class;
  GstStateChangeReturn ret;
  decoder = GST_VIDEO_DECODER (element);
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (element);
  switch (transition) {
    case GST_STATE_CHANGE_NULL_TO_READY:
      /* open device/library if needed */
      if (decoder_class->open && !decoder_class->open (decoder))
    case GST_STATE_CHANGE_READY_TO_PAUSED:
      /* Initialize device/library if needed */
      if (decoder_class->start && !decoder_class->start (decoder))
  ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition);
  switch (transition) {
    case GST_STATE_CHANGE_PAUSED_TO_READY:
      if (decoder_class->stop && !decoder_class->stop (decoder))
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      gst_video_decoder_reset (decoder, TRUE);
      g_list_free_full (decoder->priv->current_frame_events,
          (GDestroyNotify) gst_event_unref);
      decoder->priv->current_frame_events = NULL;
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
    case GST_STATE_CHANGE_READY_TO_NULL:
      /* close device/library if needed */
      if (decoder_class->close && !decoder_class->close (decoder))
    GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
        ("Failed to open decoder"));
    return GST_STATE_CHANGE_FAILURE;
    GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
        ("Failed to start decoder"));
    return GST_STATE_CHANGE_FAILURE;
    GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
        ("Failed to stop decoder"));
    return GST_STATE_CHANGE_FAILURE;
    GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
        ("Failed to close decoder"));
    return GST_STATE_CHANGE_FAILURE;
static GstVideoCodecFrame *
gst_video_decoder_new_frame (GstVideoDecoder * decoder)
  GstVideoDecoderPrivate *priv = decoder->priv;
  GstVideoCodecFrame *frame;
  frame = g_slice_new0 (GstVideoCodecFrame);
  frame->ref_count = 1;
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  frame->system_frame_number = priv->system_frame_number;
  priv->system_frame_number++;
  frame->decode_frame_number = priv->decode_frame_number;
  priv->decode_frame_number++;
  frame->dts = GST_CLOCK_TIME_NONE;
  frame->pts = GST_CLOCK_TIME_NONE;
  frame->duration = GST_CLOCK_TIME_NONE;
  frame->events = priv->current_frame_events;
  priv->current_frame_events = NULL;
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
  GST_LOG_OBJECT (decoder, "Created new frame %p (sfn:%d)",
      frame, frame->system_frame_number);
gst_video_decoder_prepare_finish_frame (GstVideoDecoder *
    decoder, GstVideoCodecFrame * frame, gboolean dropping)
  GstVideoDecoderPrivate *priv = decoder->priv;
  GList *l, *events = NULL;
#ifndef GST_DISABLE_GST_DEBUG
  GST_LOG_OBJECT (decoder, "n %d in %" G_GSIZE_FORMAT " out %" G_GSIZE_FORMAT,
      g_list_length (priv->frames),
      gst_adapter_available (priv->input_adapter),
      gst_adapter_available (priv->output_adapter));
  GST_LOG_OBJECT (decoder,
      "finish frame %p (#%d) sync:%d PTS:%" GST_TIME_FORMAT " DTS:%"
      frame, frame->system_frame_number,
      GST_VIDEO_CODEC_FRAME_IS_SYNC_POINT (frame), GST_TIME_ARGS (frame->pts),
      GST_TIME_ARGS (frame->dts));
  /* Push all pending events that arrived before this frame */
  for (l = priv->frames; l; l = l->next) {
    GstVideoCodecFrame *tmp = l->data;
      events = g_list_concat (events, tmp->events);
  for (l = g_list_last (events); l; l = g_list_previous (l)) {
    GST_LOG_OBJECT (decoder, "pushing %s event", GST_EVENT_TYPE_NAME (l->data));
    gst_video_decoder_push_event (decoder, l->data);
  g_list_free (events);
  /* Check if the data should not be displayed. For example altref/invisible
   * frame in vp8. In this case we should not update the timestamps. */
  if (GST_VIDEO_CODEC_FRAME_IS_DECODE_ONLY (frame))
  /* If the frame is meant to be output but we don't have an output_buffer
   * we have a problem :) */
  if (G_UNLIKELY ((frame->output_buffer == NULL) && !dropping))
    goto no_output_buffer;
  if (GST_CLOCK_TIME_IS_VALID (frame->pts)) {
    if (frame->pts != priv->base_timestamp) {
      GST_DEBUG_OBJECT (decoder,
          "sync timestamp %" GST_TIME_FORMAT " diff %" GST_TIME_FORMAT,
          GST_TIME_ARGS (frame->pts),
          GST_TIME_ARGS (frame->pts - decoder->output_segment.start));
      priv->base_timestamp = frame->pts;
      priv->base_picture_number = frame->decode_frame_number;
  if (frame->duration == GST_CLOCK_TIME_NONE) {
    frame->duration = gst_video_decoder_get_frame_duration (decoder, frame);
    GST_LOG_OBJECT (decoder,
        "Guessing duration %" GST_TIME_FORMAT " for frame...",
        GST_TIME_ARGS (frame->duration));
  /* PTS is expected monotone ascending,
   * so a good guess is lowest unsent DTS */
    GstClockTime min_ts = GST_CLOCK_TIME_NONE;
    GstVideoCodecFrame *oframe = NULL;
    gboolean seen_none = FALSE;
    /* some maintenance regardless */
    for (l = priv->frames; l; l = l->next) {
      GstVideoCodecFrame *tmp = l->data;
      if (!GST_CLOCK_TIME_IS_VALID (tmp->abidata.ABI.ts)) {
      if (!GST_CLOCK_TIME_IS_VALID (min_ts) || tmp->abidata.ABI.ts < min_ts) {
        min_ts = tmp->abidata.ABI.ts;
    /* save a ts if needed */
    if (oframe && oframe != frame) {
      oframe->abidata.ABI.ts = frame->abidata.ABI.ts;
    /* and set if needed;
     * valid delta means we have reasonable DTS input */
    /* also, if we ended up reordered, means this approach is conflicting
     * with some sparse existing PTS, and so it does not work out */
    if (!priv->reordered_output &&
        !GST_CLOCK_TIME_IS_VALID (frame->pts) && !seen_none &&
        GST_CLOCK_TIME_IS_VALID (priv->pts_delta)) {
      frame->pts = min_ts + priv->pts_delta;
      GST_DEBUG_OBJECT (decoder,
          "no valid PTS, using oldest DTS %" GST_TIME_FORMAT,
          GST_TIME_ARGS (frame->pts));
    /* some more maintenance, ts2 holds PTS */
    min_ts = GST_CLOCK_TIME_NONE;
    for (l = priv->frames; l; l = l->next) {
      GstVideoCodecFrame *tmp = l->data;
      if (!GST_CLOCK_TIME_IS_VALID (tmp->abidata.ABI.ts2)) {
      if (!GST_CLOCK_TIME_IS_VALID (min_ts) || tmp->abidata.ABI.ts2 < min_ts) {
        min_ts = tmp->abidata.ABI.ts2;
    /* save a ts if needed */
    if (oframe && oframe != frame) {
      oframe->abidata.ABI.ts2 = frame->abidata.ABI.ts2;
    /* if we detected reordered output, then PTS are void,
     * however those were obtained; bogus input, subclass etc */
    if (priv->reordered_output && !seen_none) {
      GST_DEBUG_OBJECT (decoder, "invalidating PTS");
2239 frame->pts = GST_CLOCK_TIME_NONE;
2242 if (!GST_CLOCK_TIME_IS_VALID (frame->pts) && !seen_none) {
2243 frame->pts = min_ts;
2244 GST_DEBUG_OBJECT (decoder,
2245 "no valid PTS, using oldest PTS %" GST_TIME_FORMAT,
2246 GST_TIME_ARGS (frame->pts));
2251 if (frame->pts == GST_CLOCK_TIME_NONE) {
2252 /* Last ditch timestamp guess: Just add the duration to the previous
2254 if (priv->last_timestamp_out != GST_CLOCK_TIME_NONE &&
2255 frame->duration != GST_CLOCK_TIME_NONE) {
2256 frame->pts = priv->last_timestamp_out + frame->duration;
2257 GST_LOG_OBJECT (decoder,
2258 "Guessing timestamp %" GST_TIME_FORMAT " for frame...",
2259 GST_TIME_ARGS (frame->pts));
2263 if (GST_CLOCK_TIME_IS_VALID (priv->last_timestamp_out)) {
2264 if (frame->pts < priv->last_timestamp_out) {
2265 GST_WARNING_OBJECT (decoder,
2266 "decreasing timestamp (%" GST_TIME_FORMAT " < %"
2267 GST_TIME_FORMAT ")",
2268 GST_TIME_ARGS (frame->pts), GST_TIME_ARGS (priv->last_timestamp_out));
2269 priv->reordered_output = TRUE;
2273 if (GST_CLOCK_TIME_IS_VALID (frame->pts))
2274 priv->last_timestamp_out = frame->pts;
2281 GST_ERROR_OBJECT (decoder, "No buffer to output !");
2286 gst_video_decoder_release_frame (GstVideoDecoder * dec,
2287 GstVideoCodecFrame * frame)
2291 /* unref once from the list */
2292 link = g_list_find (dec->priv->frames, frame);
2294 gst_video_codec_frame_unref (frame);
2295 dec->priv->frames = g_list_delete_link (dec->priv->frames, link);
2298 /* unref because this function takes ownership */
2299 gst_video_codec_frame_unref (frame);
2303 * gst_video_decoder_drop_frame:
2304 * @dec: a #GstVideoDecoder
2305 * @frame: (transfer full): the #GstVideoCodecFrame to drop
2307 * Similar to gst_video_decoder_finish_frame(), but drops @frame in any
2308 * case and posts a QoS message with the frame's details on the bus.
2309 * In any case, the frame is considered finished and released.
2311 * Returns: a #GstFlowReturn, usually GST_FLOW_OK.
2314 gst_video_decoder_drop_frame (GstVideoDecoder * dec, GstVideoCodecFrame * frame)
2316 GstClockTime stream_time, jitter, earliest_time, qostime, timestamp;
2317 GstSegment *segment;
2318 GstMessage *qos_msg;
2321 GST_LOG_OBJECT (dec, "drop frame %p", frame);
2323 GST_VIDEO_DECODER_STREAM_LOCK (dec);
2325 gst_video_decoder_prepare_finish_frame (dec, frame, TRUE);
2327 GST_DEBUG_OBJECT (dec, "dropping frame %" GST_TIME_FORMAT,
2328 GST_TIME_ARGS (frame->pts));
2330 dec->priv->dropped++;
2332 /* post QoS message */
2333 GST_OBJECT_LOCK (dec);
2334 proportion = dec->priv->proportion;
2335 earliest_time = dec->priv->earliest_time;
2336 GST_OBJECT_UNLOCK (dec);
2338 timestamp = frame->pts;
2339 segment = &dec->output_segment;
  stream_time =
      gst_segment_to_stream_time (segment, GST_FORMAT_TIME, timestamp);
2342 qostime = gst_segment_to_running_time (segment, GST_FORMAT_TIME, timestamp);
2343 jitter = GST_CLOCK_DIFF (qostime, earliest_time);
  qos_msg = gst_message_new_qos (GST_OBJECT_CAST (dec), FALSE, qostime,
      stream_time, timestamp, GST_CLOCK_TIME_NONE);
2347 gst_message_set_qos_values (qos_msg, jitter, proportion, 1000000);
2348 gst_message_set_qos_stats (qos_msg, GST_FORMAT_BUFFERS,
2349 dec->priv->processed, dec->priv->dropped);
2350 gst_element_post_message (GST_ELEMENT_CAST (dec), qos_msg);
2352 /* now free the frame */
2353 gst_video_decoder_release_frame (dec, frame);
2355 GST_VIDEO_DECODER_STREAM_UNLOCK (dec);
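/* Sketch of the jitter carried by the QoS message above: GST_CLOCK_DIFF (s, e)
 * expands to a signed (e - s), so jitter is how far the frame's running time
 * lags behind the earliest acceptable time; positive means the frame was late.
 * The helper name is illustrative, not part of the base class. */

```c
#include <assert.h>
#include <stdint.h>

static int64_t
qos_jitter (uint64_t qostime, uint64_t earliest_time)
{
  /* models jitter = GST_CLOCK_DIFF (qostime, earliest_time) */
  return (int64_t) (earliest_time - qostime);
}
```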
2361 * gst_video_decoder_finish_frame:
2362 * @decoder: a #GstVideoDecoder
2363 * @frame: (transfer full): a decoded #GstVideoCodecFrame
2365 * @frame should have a valid decoded data buffer, whose metadata fields
2366 * are then appropriately set according to frame data and pushed downstream.
2367 * If no output data is provided, @frame is considered skipped.
2368 * In any case, the frame is considered finished and released.
 * After calling this function the output buffer of the frame is to be
 * considered read-only. This function will also change the metadata
 * of the buffer.
2374 * Returns: a #GstFlowReturn resulting from sending data downstream
2377 gst_video_decoder_finish_frame (GstVideoDecoder * decoder,
2378 GstVideoCodecFrame * frame)
2380 GstFlowReturn ret = GST_FLOW_OK;
2381 GstVideoDecoderPrivate *priv = decoder->priv;
2382 GstBuffer *output_buffer;
2384 GST_LOG_OBJECT (decoder, "finish frame %p", frame);
2386 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
2388 if (G_UNLIKELY (priv->output_state_changed || (priv->output_state
2389 && gst_pad_check_reconfigure (decoder->srcpad))))
2390 gst_video_decoder_negotiate (decoder);
2392 gst_video_decoder_prepare_finish_frame (decoder, frame, FALSE);
2395 if (priv->tags && priv->tags_changed) {
2396 gst_video_decoder_push_event (decoder,
2397 gst_event_new_tag (gst_tag_list_ref (priv->tags)));
2398 priv->tags_changed = FALSE;
2401 /* no buffer data means this frame is skipped */
2402 if (!frame->output_buffer || GST_VIDEO_CODEC_FRAME_IS_DECODE_ONLY (frame)) {
2403 GST_DEBUG_OBJECT (decoder, "skipping frame %" GST_TIME_FORMAT,
2404 GST_TIME_ARGS (frame->pts));
2408 output_buffer = frame->output_buffer;
2410 GST_BUFFER_FLAG_UNSET (output_buffer, GST_BUFFER_FLAG_DELTA_UNIT);
  /* Set both PTS and DTS to the frame's PTS for decoded frames */
2413 GST_BUFFER_PTS (output_buffer) = frame->pts;
2414 GST_BUFFER_DTS (output_buffer) = frame->pts;
2415 GST_BUFFER_DURATION (output_buffer) = frame->duration;
2417 GST_BUFFER_OFFSET (output_buffer) = GST_BUFFER_OFFSET_NONE;
2418 GST_BUFFER_OFFSET_END (output_buffer) = GST_BUFFER_OFFSET_NONE;
2420 if (priv->discont) {
2421 GST_BUFFER_FLAG_SET (output_buffer, GST_BUFFER_FLAG_DISCONT);
2422 priv->discont = FALSE;
2425 /* Get an additional ref to the buffer, which is going to be pushed
2426 * downstream, the original ref is owned by the frame
2428 * FIXME: clip_and_push_buf() changes buffer metadata but the buffer
2429 * might have a refcount > 1 */
2430 output_buffer = gst_buffer_ref (output_buffer);
2431 if (decoder->output_segment.rate < 0.0) {
2432 GST_LOG_OBJECT (decoder, "queued frame");
2433 priv->output_queued = g_list_prepend (priv->output_queued, output_buffer);
2435 ret = gst_video_decoder_clip_and_push_buf (decoder, output_buffer);
2439 gst_video_decoder_release_frame (decoder, frame);
2440 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
2445 /* With stream lock, takes the frame reference */
2446 static GstFlowReturn
2447 gst_video_decoder_clip_and_push_buf (GstVideoDecoder * decoder, GstBuffer * buf)
2449 GstFlowReturn ret = GST_FLOW_OK;
2450 GstVideoDecoderPrivate *priv = decoder->priv;
2451 guint64 start, stop;
2452 guint64 cstart, cstop;
2453 GstSegment *segment;
2454 GstClockTime duration;
2456 /* Check for clipping */
2457 start = GST_BUFFER_PTS (buf);
2458 duration = GST_BUFFER_DURATION (buf);
2460 stop = GST_CLOCK_TIME_NONE;
2462 if (GST_CLOCK_TIME_IS_VALID (start) && GST_CLOCK_TIME_IS_VALID (duration)) {
2463 stop = start + duration;
2466 segment = &decoder->output_segment;
2467 if (gst_segment_clip (segment, GST_FORMAT_TIME, start, stop, &cstart, &cstop)) {
2469 GST_BUFFER_PTS (buf) = cstart;
2471 if (stop != GST_CLOCK_TIME_NONE)
2472 GST_BUFFER_DURATION (buf) = cstop - cstart;
2474 GST_LOG_OBJECT (decoder,
2475 "accepting buffer inside segment: %" GST_TIME_FORMAT " %"
2476 GST_TIME_FORMAT " seg %" GST_TIME_FORMAT " to %" GST_TIME_FORMAT
2477 " time %" GST_TIME_FORMAT,
2478 GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
2479 GST_TIME_ARGS (GST_BUFFER_PTS (buf) +
2480 GST_BUFFER_DURATION (buf)),
2481 GST_TIME_ARGS (segment->start), GST_TIME_ARGS (segment->stop),
2482 GST_TIME_ARGS (segment->time));
2484 GST_LOG_OBJECT (decoder,
2485 "dropping buffer outside segment: %" GST_TIME_FORMAT
2486 " %" GST_TIME_FORMAT
2487 " seg %" GST_TIME_FORMAT " to %" GST_TIME_FORMAT
2488 " time %" GST_TIME_FORMAT,
2489 GST_TIME_ARGS (start), GST_TIME_ARGS (stop),
2490 GST_TIME_ARGS (segment->start),
2491 GST_TIME_ARGS (segment->stop), GST_TIME_ARGS (segment->time));
2492 gst_buffer_unref (buf);
2496 /* update rate estimate */
2497 priv->bytes_out += gst_buffer_get_size (buf);
2498 if (GST_CLOCK_TIME_IS_VALID (duration)) {
2499 priv->time += duration;
2501 /* FIXME : Use difference between current and previous outgoing
2502 * timestamp, and relate to difference between current and previous
2504 /* better none than nothing valid */
2505 priv->time = GST_CLOCK_TIME_NONE;
2508 GST_DEBUG_OBJECT (decoder, "pushing buffer %p of size %" G_GSIZE_FORMAT ", "
2509 "PTS %" GST_TIME_FORMAT ", dur %" GST_TIME_FORMAT, buf,
2510 gst_buffer_get_size (buf),
2511 GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
2512 GST_TIME_ARGS (GST_BUFFER_DURATION (buf)));
2514 /* we got data, so note things are looking up again, reduce
2515 * the error count, if there is one */
2516 if (G_UNLIKELY (priv->error_count))
2517 priv->error_count = 0;
2519 ret = gst_pad_push (decoder->srcpad, buf);
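/* An illustrative model of the gst_segment_clip() call above for
 * GST_FORMAT_TIME (edge semantics simplified): a buffer spanning
 * [start, stop) is kept only if it overlaps the segment, with its edges
 * clamped to the segment boundaries; otherwise it is dropped. */

```c
#include <assert.h>
#include <stdint.h>

static int
clip_to_segment (uint64_t start, uint64_t stop,
    uint64_t seg_start, uint64_t seg_stop, uint64_t * cstart, uint64_t * cstop)
{
  if (stop <= seg_start || start >= seg_stop)
    return 0;                   /* entirely outside: caller unrefs the buffer */
  *cstart = start < seg_start ? seg_start : start;
  *cstop = stop > seg_stop ? seg_stop : stop;
  return 1;
}
```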
2526 * gst_video_decoder_add_to_frame:
2527 * @decoder: a #GstVideoDecoder
2528 * @n_bytes: the number of bytes to add
 * Removes the next @n_bytes of input data and adds them to the currently
 * parsed frame.
2533 gst_video_decoder_add_to_frame (GstVideoDecoder * decoder, int n_bytes)
2535 GstVideoDecoderPrivate *priv = decoder->priv;
2538 GST_LOG_OBJECT (decoder, "add %d bytes to frame", n_bytes);
2543 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
2544 if (gst_adapter_available (priv->output_adapter) == 0) {
2545 priv->frame_offset =
2546 priv->input_offset - gst_adapter_available (priv->input_adapter);
2548 buf = gst_adapter_take_buffer (priv->input_adapter, n_bytes);
2550 gst_adapter_push (priv->output_adapter, buf);
2551 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
2555 gst_video_decoder_get_frame_duration (GstVideoDecoder * decoder,
2556 GstVideoCodecFrame * frame)
2558 GstVideoCodecState *state = decoder->priv->output_state;
2560 /* it's possible that we don't have a state yet when we are dropping the
2561 * initial buffers */
2563 return GST_CLOCK_TIME_NONE;
2565 if (state->info.fps_d == 0 || state->info.fps_n == 0) {
2566 return GST_CLOCK_TIME_NONE;
2569 /* FIXME: For interlaced frames this needs to take into account
2570 * the number of valid fields in the frame
  return gst_util_uint64_scale (GST_SECOND, state->info.fps_d,
      state->info.fps_n);
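/* Worked form of the calculation above: one frame lasts fps_d / fps_n
 * seconds, i.e. gst_util_uint64_scale (GST_SECOND, fps_d, fps_n)
 * nanoseconds. A self-contained sketch with GST_SECOND spelled out: */

```c
#include <assert.h>
#include <stdint.h>

static uint64_t
frame_duration_ns (uint64_t fps_n, uint64_t fps_d)
{
  const uint64_t GST_SECOND_NS = 1000000000;    /* GST_SECOND in nanoseconds */
  if (fps_n == 0 || fps_d == 0)
    return UINT64_MAX;          /* models GST_CLOCK_TIME_NONE */
  return GST_SECOND_NS * fps_d / fps_n;
}
```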
2578 * gst_video_decoder_have_frame:
2579 * @decoder: a #GstVideoDecoder
 * Gathers all data collected for the currently parsed frame, along with its
 * corresponding metadata, and passes it along for further processing, i.e.
 * @handle_frame.
2584 * Returns: a #GstFlowReturn
2587 gst_video_decoder_have_frame (GstVideoDecoder * decoder)
2589 GstVideoDecoderPrivate *priv = decoder->priv;
2592 GstClockTime pts, dts, duration;
2593 GstFlowReturn ret = GST_FLOW_OK;
2595 GST_LOG_OBJECT (decoder, "have_frame");
2597 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
2599 n_available = gst_adapter_available (priv->output_adapter);
2601 buffer = gst_adapter_take_buffer (priv->output_adapter, n_available);
2603 buffer = gst_buffer_new_and_alloc (0);
2606 priv->current_frame->input_buffer = buffer;
2608 gst_video_decoder_get_timestamp_at_offset (decoder,
2609 priv->frame_offset, &pts, &dts, &duration);
2611 GST_BUFFER_PTS (buffer) = pts;
2612 GST_BUFFER_DTS (buffer) = dts;
2613 GST_BUFFER_DURATION (buffer) = duration;
2615 GST_LOG_OBJECT (decoder, "collected frame size %d, "
2616 "PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT ", dur %"
2617 GST_TIME_FORMAT, n_available, GST_TIME_ARGS (pts), GST_TIME_ARGS (dts),
2618 GST_TIME_ARGS (duration));
2620 /* In reverse playback, just capture and queue frames for later processing */
2621 if (decoder->output_segment.rate < 0.0) {
2622 priv->parse_gather =
2623 g_list_prepend (priv->parse_gather, priv->current_frame);
2625 /* Otherwise, decode the frame, which gives away our ref */
2626 ret = gst_video_decoder_decode_frame (decoder, priv->current_frame);
2628 /* Current frame is gone now, either way */
2629 priv->current_frame = NULL;
2631 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
2636 /* Pass the frame in priv->current_frame through the
2637 * handle_frame() callback for decoding and passing to gvd_finish_frame(),
2638 * or dropping by passing to gvd_drop_frame() */
2639 static GstFlowReturn
2640 gst_video_decoder_decode_frame (GstVideoDecoder * decoder,
2641 GstVideoCodecFrame * frame)
2643 GstVideoDecoderPrivate *priv = decoder->priv;
2644 GstVideoDecoderClass *decoder_class;
2645 GstFlowReturn ret = GST_FLOW_OK;
2647 decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);
2649 /* FIXME : This should only have to be checked once (either the subclass has an
2650 * implementation, or it doesn't) */
2651 g_return_val_if_fail (decoder_class->handle_frame != NULL, GST_FLOW_ERROR);
2653 frame->distance_from_sync = priv->distance_from_sync;
2654 priv->distance_from_sync++;
2655 frame->pts = GST_BUFFER_PTS (frame->input_buffer);
2656 frame->dts = GST_BUFFER_DTS (frame->input_buffer);
2657 frame->duration = GST_BUFFER_DURATION (frame->input_buffer);
2659 /* For keyframes, PTS = DTS + constant_offset, usually 0 to 3 frame
2661 /* FIXME upstream can be quite wrong about the keyframe aspect,
2662 * so we could be going off here as well,
2663 * maybe let subclass decide if it really is/was a keyframe */
2664 if (GST_VIDEO_CODEC_FRAME_IS_SYNC_POINT (frame) &&
2665 GST_CLOCK_TIME_IS_VALID (frame->pts)
2666 && GST_CLOCK_TIME_IS_VALID (frame->dts)) {
2667 /* just in case they are not equal as might ideally be,
2668 * e.g. quicktime has a (positive) delta approach */
2669 priv->pts_delta = frame->pts - frame->dts;
2670 GST_DEBUG_OBJECT (decoder, "PTS delta %d ms",
2671 (gint) (priv->pts_delta / GST_MSECOND));
2674 frame->abidata.ABI.ts = frame->dts;
2675 frame->abidata.ABI.ts2 = frame->pts;
2677 GST_LOG_OBJECT (decoder, "PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT,
2678 GST_TIME_ARGS (frame->pts), GST_TIME_ARGS (frame->dts));
2679 GST_LOG_OBJECT (decoder, "dist %d", frame->distance_from_sync);
2681 gst_video_codec_frame_ref (frame);
2682 priv->frames = g_list_append (priv->frames, frame);
2684 if (g_list_length (priv->frames) > 10) {
    GST_WARNING_OBJECT (decoder, "decoder frame list getting long: %d frames, "
        "possible internal leak?", g_list_length (priv->frames));
2690 gst_segment_to_running_time (&decoder->input_segment, GST_FORMAT_TIME,
2693 /* do something with frame */
2694 ret = decoder_class->handle_frame (decoder, frame);
2695 if (ret != GST_FLOW_OK)
2696 GST_DEBUG_OBJECT (decoder, "flow error %s", gst_flow_get_name (ret));
2698 /* the frame has either been added to parse_gather or sent to
     handle_frame so there is no need to unref it */
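/* Sketch of the keyframe PTS/DTS bookkeeping above: on a sync point with
 * both stamps valid, the constant offset delta = PTS - DTS is remembered,
 * and can later reconstruct a missing PTS from a DTS (the QuickTime
 * positive-delta case mentioned above). Helper names are illustrative. */

```c
#include <assert.h>
#include <stdint.h>

static int64_t pts_delta = 0;   /* learned on keyframes, models priv->pts_delta */

static void
on_keyframe (uint64_t pts, uint64_t dts)
{
  pts_delta = (int64_t) (pts - dts);
}

static uint64_t
reconstruct_pts (uint64_t dts)
{
  return (uint64_t) ((int64_t) dts + pts_delta);
}
```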
2705 * gst_video_decoder_get_output_state:
2706 * @decoder: a #GstVideoDecoder
2708 * Get the #GstVideoCodecState currently describing the output stream.
2710 * Returns: (transfer full): #GstVideoCodecState describing format of video data.
2712 GstVideoCodecState *
2713 gst_video_decoder_get_output_state (GstVideoDecoder * decoder)
2715 GstVideoCodecState *state = NULL;
2717 GST_OBJECT_LOCK (decoder);
2718 if (decoder->priv->output_state)
2719 state = gst_video_codec_state_ref (decoder->priv->output_state);
2720 GST_OBJECT_UNLOCK (decoder);
2726 * gst_video_decoder_set_output_state:
2727 * @decoder: a #GstVideoDecoder
2728 * @fmt: a #GstVideoFormat
2729 * @width: The width in pixels
2730 * @height: The height in pixels
2731 * @reference: (allow-none) (transfer none): An optional reference #GstVideoCodecState
2733 * Creates a new #GstVideoCodecState with the specified @fmt, @width and @height
2734 * as the output state for the decoder.
2735 * Any previously set output state on @decoder will be replaced by the newly
 * If the subclass wishes to copy over existing fields (like pixel aspect
 * ratio, or framerate) from an existing #GstVideoCodecState, it can be
 * provided as a reference.
2742 * If the subclass wishes to override some fields from the output state (like
2743 * pixel-aspect-ratio or framerate) it can do so on the returned #GstVideoCodecState.
2745 * The new output state will only take effect (set on pads and buffers) starting
2746 * from the next call to #gst_video_decoder_finish_frame().
2748 * Returns: (transfer full): the newly configured output state.
2750 GstVideoCodecState *
2751 gst_video_decoder_set_output_state (GstVideoDecoder * decoder,
2752 GstVideoFormat fmt, guint width, guint height,
2753 GstVideoCodecState * reference)
2755 GstVideoDecoderPrivate *priv = decoder->priv;
2756 GstVideoCodecState *state;
2758 GST_DEBUG_OBJECT (decoder, "fmt:%d, width:%d, height:%d, reference:%p",
2759 fmt, width, height, reference);
2761 /* Create the new output state */
2762 state = _new_output_state (fmt, width, height, reference);
2764 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
2766 GST_OBJECT_LOCK (decoder);
2767 /* Replace existing output state by new one */
2768 if (priv->output_state)
2769 gst_video_codec_state_unref (priv->output_state);
2770 priv->output_state = gst_video_codec_state_ref (state);
2772 if (priv->output_state != NULL && priv->output_state->info.fps_n > 0) {
2773 priv->qos_frame_duration =
2774 gst_util_uint64_scale (GST_SECOND, priv->output_state->info.fps_d,
2775 priv->output_state->info.fps_n);
2777 priv->qos_frame_duration = 0;
2779 priv->output_state_changed = TRUE;
2780 GST_OBJECT_UNLOCK (decoder);
2782 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
2789 * gst_video_decoder_get_oldest_frame:
2790 * @decoder: a #GstVideoDecoder
2792 * Get the oldest pending unfinished #GstVideoCodecFrame
2794 * Returns: (transfer full): oldest pending unfinished #GstVideoCodecFrame.
2796 GstVideoCodecFrame *
2797 gst_video_decoder_get_oldest_frame (GstVideoDecoder * decoder)
2799 GstVideoCodecFrame *frame = NULL;
2801 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
2802 if (decoder->priv->frames)
2803 frame = gst_video_codec_frame_ref (decoder->priv->frames->data);
2804 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
2806 return (GstVideoCodecFrame *) frame;
2810 * gst_video_decoder_get_frame:
2811 * @decoder: a #GstVideoDecoder
2812 * @frame_number: system_frame_number of a frame
2814 * Get a pending unfinished #GstVideoCodecFrame
2816 * Returns: (transfer full): pending unfinished #GstVideoCodecFrame identified by @frame_number.
2818 GstVideoCodecFrame *
2819 gst_video_decoder_get_frame (GstVideoDecoder * decoder, int frame_number)
2822 GstVideoCodecFrame *frame = NULL;
2824 GST_DEBUG_OBJECT (decoder, "frame_number : %d", frame_number);
2826 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
2827 for (g = decoder->priv->frames; g; g = g->next) {
2828 GstVideoCodecFrame *tmp = g->data;
2830 if (tmp->system_frame_number == frame_number) {
2831 frame = gst_video_codec_frame_ref (tmp);
2835 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
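/* The lookup above is a linear scan of the pending-frames list keyed on
 * system_frame_number; a minimal array-based model (struct and helper are
 * illustrative, not the GstVideoCodecFrame type): */

```c
#include <assert.h>
#include <stddef.h>

struct frame
{
  int system_frame_number;
};

static struct frame *
find_frame (struct frame *frames, size_t n, int frame_number)
{
  for (size_t i = 0; i < n; i++)
    if (frames[i].system_frame_number == frame_number)
      return &frames[i];
  return NULL;                  /* no pending frame with that number */
}
```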
2841 * gst_video_decoder_get_frames:
2842 * @decoder: a #GstVideoDecoder
2844 * Get all pending unfinished #GstVideoCodecFrame
2846 * Returns: (transfer full) (element-type GstVideoCodecFrame): pending unfinished #GstVideoCodecFrame.
2849 gst_video_decoder_get_frames (GstVideoDecoder * decoder)
2853 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
2854 frames = g_list_copy (decoder->priv->frames);
2855 g_list_foreach (frames, (GFunc) gst_video_codec_frame_ref, NULL);
2856 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
2862 gst_video_decoder_decide_allocation_default (GstVideoDecoder * decoder,
2866 GstBufferPool *pool = NULL;
2867 guint size, min, max;
2868 GstAllocator *allocator = NULL;
2869 GstAllocationParams params;
2870 GstStructure *config;
2871 gboolean update_pool, update_allocator;
2874 gst_query_parse_allocation (query, &outcaps, NULL);
2875 gst_video_info_init (&vinfo);
2876 gst_video_info_from_caps (&vinfo, outcaps);
2878 /* we got configuration from our peer or the decide_allocation method,
2880 if (gst_query_get_n_allocation_params (query) > 0) {
2881 /* try the allocator */
2882 gst_query_parse_nth_allocation_param (query, 0, &allocator, ¶ms);
2883 update_allocator = TRUE;
2886 gst_allocation_params_init (¶ms);
2887 update_allocator = FALSE;
2890 if (gst_query_get_n_allocation_pools (query) > 0) {
2891 gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);
2892 size = MAX (size, vinfo.size);
2899 update_pool = FALSE;
2903 /* no pool, we can make our own */
2904 GST_DEBUG_OBJECT (decoder, "no pool, making new pool");
2905 pool = gst_video_buffer_pool_new ();
2909 config = gst_buffer_pool_get_config (pool);
2910 gst_buffer_pool_config_set_params (config, outcaps, size, min, max);
2911 gst_buffer_pool_config_set_allocator (config, allocator, ¶ms);
2912 gst_buffer_pool_set_config (pool, config);
2914 if (update_allocator)
2915 gst_query_set_nth_allocation_param (query, 0, allocator, ¶ms);
2917 gst_query_add_allocation_param (query, allocator, ¶ms);
2919 gst_object_unref (allocator);
2922 gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max);
2924 gst_query_add_allocation_pool (query, pool, size, min, max);
2927 gst_object_unref (pool);
2933 gst_video_decoder_propose_allocation_default (GstVideoDecoder * decoder,
2940 gst_video_decoder_negotiate_default (GstVideoDecoder * decoder)
2942 GstVideoCodecState *state = decoder->priv->output_state;
2943 GstVideoDecoderClass *klass;
2944 GstQuery *query = NULL;
2945 GstBufferPool *pool = NULL;
2946 GstAllocator *allocator;
2947 GstAllocationParams params;
2948 gboolean ret = TRUE;
2950 g_return_val_if_fail (GST_VIDEO_INFO_WIDTH (&state->info) != 0, FALSE);
2951 g_return_val_if_fail (GST_VIDEO_INFO_HEIGHT (&state->info) != 0, FALSE);
2953 klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
2955 GST_DEBUG_OBJECT (decoder, "output_state par %d/%d fps %d/%d",
2956 state->info.par_n, state->info.par_d,
2957 state->info.fps_n, state->info.fps_d);
2959 if (state->caps == NULL)
2960 state->caps = gst_video_info_to_caps (&state->info);
2962 GST_DEBUG_OBJECT (decoder, "setting caps %" GST_PTR_FORMAT, state->caps);
2964 ret = gst_pad_set_caps (decoder->srcpad, state->caps);
2967 decoder->priv->output_state_changed = FALSE;
2969 /* Negotiate pool */
2970 query = gst_query_new_allocation (state->caps, TRUE);
2972 if (!gst_pad_peer_query (decoder->srcpad, query)) {
2973 GST_DEBUG_OBJECT (decoder, "didn't get downstream ALLOCATION hints");
2976 g_assert (klass->decide_allocation != NULL);
2977 ret = klass->decide_allocation (decoder, query);
2979 GST_DEBUG_OBJECT (decoder, "ALLOCATION (%d) params: %" GST_PTR_FORMAT, ret,
2983 goto no_decide_allocation;
2985 /* we got configuration from our peer or the decide_allocation method,
2987 if (gst_query_get_n_allocation_params (query) > 0) {
2988 gst_query_parse_nth_allocation_param (query, 0, &allocator, ¶ms);
2991 gst_allocation_params_init (¶ms);
2994 if (gst_query_get_n_allocation_pools (query) > 0)
2995 gst_query_parse_nth_allocation_pool (query, 0, &pool, NULL, NULL, NULL);
2998 gst_object_unref (allocator);
3000 goto no_decide_allocation;
3003 if (decoder->priv->allocator)
3004 gst_object_unref (decoder->priv->allocator);
3005 decoder->priv->allocator = allocator;
3006 decoder->priv->params = params;
3008 if (decoder->priv->pool) {
3009 gst_buffer_pool_set_active (decoder->priv->pool, FALSE);
3010 gst_object_unref (decoder->priv->pool);
3012 decoder->priv->pool = pool;
3015 gst_buffer_pool_set_active (pool, TRUE);
3019 gst_query_unref (query);
3024 no_decide_allocation:
3026 GST_WARNING_OBJECT (decoder, "Subclass failed to decide allocation");
3032 * gst_video_decoder_negotiate:
3033 * @decoder: a #GstVideoDecoder
 * Negotiate with downstream elements for the currently configured #GstVideoCodecState.
3037 * Returns: #TRUE if the negotiation succeeded, else #FALSE.
3040 gst_video_decoder_negotiate (GstVideoDecoder * decoder)
3042 GstVideoDecoderClass *klass;
3043 gboolean ret = TRUE;
3045 g_return_val_if_fail (GST_IS_VIDEO_DECODER (decoder), FALSE);
3047 klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
3049 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
3050 if (klass->negotiate)
3051 ret = klass->negotiate (decoder);
3052 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
3058 * gst_video_decoder_allocate_output_buffer:
3059 * @decoder: a #GstVideoDecoder
3061 * Helper function that allocates a buffer to hold a video frame for @decoder's
3062 * current #GstVideoCodecState.
3064 * You should use gst_video_decoder_allocate_output_frame() instead of this
3065 * function, if possible at all.
3067 * Returns: (transfer full): allocated buffer, or NULL if no buffer could be
3068 * allocated (e.g. when downstream is flushing or shutting down)
3071 gst_video_decoder_allocate_output_buffer (GstVideoDecoder * decoder)
3076 GST_DEBUG ("alloc src buffer");
3078 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
3079 if (G_UNLIKELY (decoder->priv->output_state_changed
3080 || (decoder->priv->output_state
3081 && gst_pad_check_reconfigure (decoder->srcpad))))
3082 gst_video_decoder_negotiate (decoder);
3084 flow = gst_buffer_pool_acquire_buffer (decoder->priv->pool, &buffer, NULL);
3086 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
3088 if (flow != GST_FLOW_OK) {
3089 GST_INFO_OBJECT (decoder, "couldn't allocate output buffer, flow %s",
3090 gst_flow_get_name (flow));
3098 * gst_video_decoder_allocate_output_frame:
3099 * @decoder: a #GstVideoDecoder
3100 * @frame: a #GstVideoCodecFrame
3102 * Helper function that allocates a buffer to hold a video frame for @decoder's
3103 * current #GstVideoCodecState. Subclass should already have configured video
3104 * state and set src pad caps.
3106 * The buffer allocated here is owned by the frame and you should only
3107 * keep references to the frame, not the buffer.
3109 * Returns: %GST_FLOW_OK if an output buffer could be allocated
3112 gst_video_decoder_allocate_output_frame (GstVideoDecoder *
3113 decoder, GstVideoCodecFrame * frame)
3115 GstFlowReturn flow_ret;
3116 GstVideoCodecState *state;
3119 g_return_val_if_fail (frame->output_buffer == NULL, GST_FLOW_ERROR);
3121 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
3123 state = decoder->priv->output_state;
3124 if (state == NULL) {
3125 g_warning ("Output state should be set before allocating frame");
3128 num_bytes = GST_VIDEO_INFO_SIZE (&state->info);
3129 if (num_bytes == 0) {
3130 g_warning ("Frame size should not be 0");
3134 if (G_UNLIKELY (decoder->priv->output_state_changed
3135 || (decoder->priv->output_state
3136 && gst_pad_check_reconfigure (decoder->srcpad))))
3137 gst_video_decoder_negotiate (decoder);
3139 GST_LOG_OBJECT (decoder, "alloc buffer size %d", num_bytes);
3141 flow_ret = gst_buffer_pool_acquire_buffer (decoder->priv->pool,
3142 &frame->output_buffer, NULL);
3144 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
3149 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
3150 return GST_FLOW_ERROR;
3154 * gst_video_decoder_get_max_decode_time:
3155 * @decoder: a #GstVideoDecoder
3156 * @frame: a #GstVideoCodecFrame
3158 * Determines maximum possible decoding time for @frame that will
3159 * allow it to decode and arrive in time (as determined by QoS events).
 * In particular, a negative result means decoding in time is no longer
 * possible, and the frame should therefore be decoded (or dropped) as
 * quickly as possible.
3163 * Returns: max decoding time.
3166 gst_video_decoder_get_max_decode_time (GstVideoDecoder *
3167 decoder, GstVideoCodecFrame * frame)
3169 GstClockTimeDiff deadline;
3170 GstClockTime earliest_time;
3172 GST_OBJECT_LOCK (decoder);
3173 earliest_time = decoder->priv->earliest_time;
3174 if (GST_CLOCK_TIME_IS_VALID (earliest_time)
3175 && GST_CLOCK_TIME_IS_VALID (frame->deadline))
3176 deadline = GST_CLOCK_DIFF (earliest_time, frame->deadline);
3178 deadline = G_MAXINT64;
3180 GST_LOG_OBJECT (decoder, "earliest %" GST_TIME_FORMAT
3181 ", frame deadline %" GST_TIME_FORMAT ", deadline %" GST_TIME_FORMAT,
3182 GST_TIME_ARGS (earliest_time), GST_TIME_ARGS (frame->deadline),
3183 GST_TIME_ARGS (deadline));
3185 GST_OBJECT_UNLOCK (decoder);
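/* Sketch of the deadline math above: GST_CLOCK_DIFF (earliest, deadline)
 * is deadline - earliest as a signed value, and any invalid input means
 * "unlimited" (G_MAXINT64). A negative result means the frame is already
 * late. Names below are illustrative stand-ins. */

```c
#include <assert.h>
#include <stdint.h>

#define NONE UINT64_MAX         /* stands in for GST_CLOCK_TIME_NONE */

static int64_t
max_decode_time (uint64_t earliest_time, uint64_t frame_deadline)
{
  if (earliest_time == NONE || frame_deadline == NONE)
    return INT64_MAX;           /* no QoS pressure: take all the time needed */
  return (int64_t) (frame_deadline - earliest_time);
}
```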
3191 * gst_video_decoder_get_qos_proportion:
3192 * @decoder: a #GstVideoDecoder
3195 * Returns: The current QoS proportion.
3200 gst_video_decoder_get_qos_proportion (GstVideoDecoder * decoder)
3204 g_return_val_if_fail (GST_IS_VIDEO_DECODER (decoder), 1.0);
3206 GST_OBJECT_LOCK (decoder);
3207 proportion = decoder->priv->proportion;
3208 GST_OBJECT_UNLOCK (decoder);
3214 _gst_video_decoder_error (GstVideoDecoder * dec, gint weight,
3215 GQuark domain, gint code, gchar * txt, gchar * dbg, const gchar * file,
3216 const gchar * function, gint line)
3219 GST_WARNING_OBJECT (dec, "error: %s", txt);
3221 GST_WARNING_OBJECT (dec, "error: %s", dbg);
3222 dec->priv->error_count += weight;
3223 dec->priv->discont = TRUE;
3224 if (dec->priv->max_errors < dec->priv->error_count) {
3225 gst_element_message_full (GST_ELEMENT (dec), GST_MESSAGE_ERROR,
3226 domain, code, txt, dbg, file, function, line);
3227 return GST_FLOW_ERROR;
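/* Sketch of the error accounting above: each decoding error adds its weight
 * to a running count, and only once the count exceeds max_errors does the
 * element escalate from a warning to a fatal error. The helper is
 * illustrative; -1 models GST_FLOW_ERROR. */

```c
#include <assert.h>

static int
track_error (int *error_count, int weight, int max_errors)
{
  *error_count += weight;
  return (*error_count > max_errors) ? -1 : 0;
}
```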
3236 * gst_video_decoder_set_max_errors:
3237 * @dec: a #GstVideoDecoder
3238 * @num: max tolerated errors
 * Sets the number of tolerated decoder errors; a tolerated error is only
 * warned about, but exceeding the count will lead to a fatal error. Default
 * is set to GST_VIDEO_DECODER_MAX_ERRORS.
3245 gst_video_decoder_set_max_errors (GstVideoDecoder * dec, gint num)
3247 g_return_if_fail (GST_IS_VIDEO_DECODER (dec));
3249 dec->priv->max_errors = num;
3253 * gst_video_decoder_get_max_errors:
3254 * @dec: a #GstVideoDecoder
3256 * Returns: currently configured decoder tolerated error count.
3259 gst_video_decoder_get_max_errors (GstVideoDecoder * dec)
3261 g_return_val_if_fail (GST_IS_VIDEO_DECODER (dec), 0);
3263 return dec->priv->max_errors;
3267 * gst_video_decoder_set_packetized:
3268 * @decoder: a #GstVideoDecoder
3269 * @packetized: whether the input data should be considered as packetized.
 * Allows the base class to consider input data as packetized or not. If the
 * input is packetized, then the @parse method will not be called.
3275 gst_video_decoder_set_packetized (GstVideoDecoder * decoder,
3276 gboolean packetized)
3278 decoder->priv->packetized = packetized;
3282 * gst_video_decoder_get_packetized:
3283 * @decoder: a #GstVideoDecoder
3285 * Queries whether input data is considered packetized or not by the
3288 * Returns: TRUE if input data is considered packetized.
3291 gst_video_decoder_get_packetized (GstVideoDecoder * decoder)
3293 return decoder->priv->packetized;
3297 * gst_video_decoder_set_estimate_rate:
3298 * @dec: a #GstVideoDecoder
3299 * @enabled: whether to enable byte to time conversion
 * Allows the base class to perform estimated byte-to-time conversion.
3304 gst_video_decoder_set_estimate_rate (GstVideoDecoder * dec, gboolean enabled)
3306 g_return_if_fail (GST_IS_VIDEO_DECODER (dec));
3308 dec->priv->do_estimate_rate = enabled;
3312 * gst_video_decoder_get_estimate_rate:
3313 * @dec: a #GstVideoDecoder
3315 * Returns: currently configured byte to time conversion setting
3318 gst_video_decoder_get_estimate_rate (GstVideoDecoder * dec)
3320 g_return_val_if_fail (GST_IS_VIDEO_DECODER (dec), 0);
3322 return dec->priv->do_estimate_rate;
3326 * gst_video_decoder_set_latency:
3327 * @decoder: a #GstVideoDecoder
3328 * @min_latency: minimum latency
3329 * @max_latency: maximum latency
3331 * Lets #GstVideoDecoder sub-classes tell the baseclass what the decoder
3332 * latency is. Will also post a LATENCY message on the bus so the pipeline
3333 * can reconfigure its global latency.
3336 gst_video_decoder_set_latency (GstVideoDecoder * decoder,
3337 GstClockTime min_latency, GstClockTime max_latency)
3339 g_return_if_fail (GST_CLOCK_TIME_IS_VALID (min_latency));
3340 g_return_if_fail (max_latency >= min_latency);
3342 GST_OBJECT_LOCK (decoder);
3343 decoder->priv->min_latency = min_latency;
3344 decoder->priv->max_latency = max_latency;
3345 GST_OBJECT_UNLOCK (decoder);
3347 gst_element_post_message (GST_ELEMENT_CAST (decoder),
3348 gst_message_new_latency (GST_OBJECT_CAST (decoder)));
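/* A hedged example of what a subclass might report here: a decoder that can
 * reorder up to N frames has a minimum latency of N frame durations at the
 * negotiated framerate. reorder_frames and the helper are illustrative
 * assumptions, not base-class API. */

```c
#include <assert.h>
#include <stdint.h>

static uint64_t
reorder_latency_ns (unsigned reorder_frames, uint64_t fps_n, uint64_t fps_d)
{
  const uint64_t GST_SECOND_NS = 1000000000;    /* GST_SECOND in nanoseconds */
  return reorder_frames * (GST_SECOND_NS * fps_d / fps_n);
}
```

A subclass would then pass this value as @min_latency (and typically also as
@max_latency) to gst_video_decoder_set_latency().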
3352 * gst_video_decoder_get_latency:
3353 * @decoder: a #GstVideoDecoder
3354 * @min_latency: (out) (allow-none): address of variable in which to store the
3355 * configured minimum latency, or %NULL
3356 * @max_latency: (out) (allow-none): address of variable in which to store the
 *     configured maximum latency, or %NULL
3359 * Query the configured decoder latency. Results will be returned via
3360 * @min_latency and @max_latency.
3363 gst_video_decoder_get_latency (GstVideoDecoder * decoder,
3364 GstClockTime * min_latency, GstClockTime * max_latency)
3366 GST_OBJECT_LOCK (decoder);
3368 *min_latency = decoder->priv->min_latency;
3370 *max_latency = decoder->priv->max_latency;
3371 GST_OBJECT_UNLOCK (decoder);
3375 * gst_video_decoder_merge_tags:
3376 * @decoder: a #GstVideoDecoder
3377 * @tags: a #GstTagList to merge
3378 * @mode: the #GstTagMergeMode to use
3380 * Adds tags to so-called pending tags, which will be processed
3381 * before pushing out data downstream.
3383 * Note that this is provided for convenience, and the subclass is
3384 * not required to use this and can still do tag handling on its own.
3389 gst_video_decoder_merge_tags (GstVideoDecoder * decoder,
3390 const GstTagList * tags, GstTagMergeMode mode)
3394 g_return_if_fail (GST_IS_VIDEO_DECODER (decoder));
3395 g_return_if_fail (tags == NULL || GST_IS_TAG_LIST (tags));
3397 GST_VIDEO_DECODER_STREAM_LOCK (decoder);
3399 GST_DEBUG_OBJECT (decoder, "merging tags %" GST_PTR_FORMAT, tags);
3400 otags = decoder->priv->tags;
3401 decoder->priv->tags = gst_tag_list_merge (decoder->priv->tags, tags, mode);
3403 gst_tag_list_unref (otags);
3404 decoder->priv->tags_changed = TRUE;
3405 GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
3409 * gst_video_decoder_get_buffer_pool:
3410 * @decoder: a #GstVideoDecoder
3412 * Returns: (transfer full): the instance of the #GstBufferPool used
 * by the decoder; unref it after use
3416 gst_video_decoder_get_buffer_pool (GstVideoDecoder * decoder)
3418 g_return_val_if_fail (GST_IS_VIDEO_DECODER (decoder), NULL);
3420 if (decoder->priv->pool)
3421 return gst_object_ref (decoder->priv->pool);
3427 * gst_video_decoder_get_allocator:
3428 * @decoder: a #GstVideoDecoder
3429 * @allocator: (out) (allow-none) (transfer full): the #GstAllocator
3431 * @params: (out) (allow-none) (transfer full): the
 * #GstAllocationParams of @allocator
 * Lets #GstVideoDecoder sub-classes know the memory @allocator
 * used by the base class and its @params.
 * Unref the @allocator after use.
3440 gst_video_decoder_get_allocator (GstVideoDecoder * decoder,
3441 GstAllocator ** allocator, GstAllocationParams * params)
3443 g_return_if_fail (GST_IS_VIDEO_DECODER (decoder));
3446 *allocator = decoder->priv->allocator ?
3447 gst_object_ref (decoder->priv->allocator) : NULL;
3450 *params = decoder->priv->params;