Gwenole Beauchesne [Mon, 7 Jan 2013 12:41:59 +0000 (13:41 +0100)]
decoder: use an array of units instead of a singly-linked list.
Use a GArray to hold the decoder units of a frame, instead of a singly-linked
list. This makes 'append' calls faster, though not by much. At least, this
makes things clearer.
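A minimal sketch of the data-structure change, using plain GLib; the DecoderUnit record and its fields are illustrative placeholders, not the actual GstVaapiDecoderUnit layout:

    #include <glib.h>

    /* Illustrative unit record; not the actual GstVaapiDecoderUnit layout. */
    typedef struct {
      guint offset;   /* byte offset of the unit within the frame */
      guint size;     /* size of the unit in bytes                */
    } DecoderUnit;

    static GArray *
    units_new (void)
    {
      /* Pre-size the array so repeated appends rarely reallocate. */
      return g_array_sized_new (FALSE, FALSE, sizeof (DecoderUnit), 16);
    }

    static void
    units_append (GArray *units, guint offset, guint size)
    {
      DecoderUnit unit = { offset, size };

      /* Amortized O(1) append, with no per-node allocation as in GSList. */
      g_array_append_val (units, unit);
    }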
Gwenole Beauchesne [Mon, 7 Jan 2013 10:13:07 +0000 (11:13 +0100)]
decoder: refactor decoder unit API.
Allocate the decoder unit earlier, in the main parse() function, and don't
delegate this task to derived classes. The ultimate goal is to get rid of
dynamic allocation of decoder units.
Gwenole Beauchesne [Mon, 7 Jan 2013 09:48:27 +0000 (10:48 +0100)]
mpeg2: introduce parser info instead of MPEG-2 specific decoder unit.
Use a new GstVaapiParserInfoMpeg2 data structure instead of deriving
from GstVaapiDecoderUnit for MPEG-2 specific parser information.
Gwenole Beauchesne [Mon, 7 Jan 2013 09:22:54 +0000 (10:22 +0100)]
h264: introduce parser info instead of H.264 specific decoder unit.
Use a new GstVaapiParserInfoH264 data structure instead of deriving
from GstVaapiDecoderUnit for H.264 specific parser information.
Sreerenj Balachandran [Sat, 5 Jan 2013 10:33:06 +0000 (12:33 +0200)]
h264: set default values for some header fields.
The SPS, PPS and slice headers are not fully zero-initialized in the
codecparsers/ library. Rather, the standard upstream behaviour is to
initialize only certain syntax elements with some inferred values if
they are not present in the bitstream.
At the gstreamer-vaapi decoder level, we need to further initialize certain
syntax elements with sensible default values so as not to complicate VA
drivers that pass those values verbatim to the HW, and also to avoid a
memset() of the whole decoder unit.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Gwenole Beauchesne [Sun, 6 Jan 2013 18:05:49 +0000 (19:05 +0100)]
codecparsers: update to gst-vaapi-rebased commit b47983a.
b47983a h264: add inferred value for slice_beta_offset_div2
Gwenole Beauchesne [Sat, 5 Jan 2013 16:55:47 +0000 (17:55 +0100)]
plugins: cope with new GstVaapiVideoMeta API.
Update the plugin elements to the new GstVaapiVideoMeta API.
This also fixes support for subpictures/overlay: GstVideoDecoder generates
a sub-buffer from the GstVaapiVideoBuffer, and that sub-buffer is marked as
read-only. When it reaches the textoverlay element, for example, the element
checks whether the input buffer is writable; since the buffer is read-only,
a new GstBuffer is created. Because gst_buffer_copy() does not preserve the
parent field, the buffer generated in textoverlay was unusable: all
VA-specific information was lost.
Now, with the GstVaapiVideoMeta information attached to a standard GstBuffer,
everything is preserved through gst_buffer_copy() since the latter does copy
metadata (qdata in this case).
Gwenole Beauchesne [Sat, 5 Jan 2013 16:37:13 +0000 (17:37 +0100)]
videobuffer: wrap video meta into a surface buffer.
Make GstVaapiVideoBuffer a simple wrapper for the video meta. This buffer
is no longer necessary, except for compatibility with GStreamer 0.10 APIs or
with users expecting a GstSurfaceBuffer, such as Clutter.
Gwenole Beauchesne [Sat, 5 Jan 2013 07:31:24 +0000 (08:31 +0100)]
videobuffer: add video meta information.
Add a new GstVaapiVideoMeta object that holds all the information needed
to convey gst-vaapi-specific data as a GstBuffer.
Gwenole Beauchesne [Thu, 3 Jan 2013 12:10:33 +0000 (13:10 +0100)]
vaapidecode: fix calculation of the time-out value.
Fix the calculation of the time-out value for cases where no VA surface is
available for decoding. In this case, we need to wait until the downstream
sink has consumed at least one surface. The time-out was miscalculated: it
was always set to <current time> + one second, which is not suitable for
streams with larger gaps.
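A hedged sketch of this kind of fix, using glib >= 2.32 primitives: the deadline is computed once, relative to when the wait started, instead of being pushed forward by one second on every wakeup. The surface-availability predicate is a hypothetical placeholder:

    #include <glib.h>

    /* Wait up to one second in total for a free surface.
     * has_free_surface() is a hypothetical predicate. */
    static gboolean
    wait_for_free_surface (GMutex *mutex, GCond *cond,
        gboolean (*has_free_surface) (gpointer), gpointer user_data)
    {
      /* Deadline computed once, relative to when we started waiting. */
      const gint64 deadline = g_get_monotonic_time () + G_TIME_SPAN_SECOND;

      g_mutex_lock (mutex);
      while (!has_free_surface (user_data)) {
        if (!g_cond_wait_until (cond, mutex, deadline)) {
          g_mutex_unlock (mutex);
          return FALSE;               /* timed out */
        }
      }
      g_mutex_unlock (mutex);
      return TRUE;
    }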
Gwenole Beauchesne [Thu, 3 Jan 2013 12:05:47 +0000 (13:05 +0100)]
decoder: always use the calculated presentation timestamp.
Use the PTS value computed by the decoder, which could also be derived from
the GstVideoCodecFrame PTS. This makes it possible to fix up the PTS if the
original one was miscomputed or only represented a DTS instead.
Gwenole Beauchesne [Wed, 2 Jan 2013 16:33:15 +0000 (17:33 +0100)]
h264: don't create sub-buffer for slice data.
Gwenole Beauchesne [Thu, 3 Jan 2013 10:16:44 +0000 (11:16 +0100)]
decoder: create new context when encoded resolution changes.
Create a new VA context if the encoded surface size changes, because we
need to keep the underlying surface pool alive until the last surface is
released. Otherwise, either of the following could happen: (i) a VA surface
is released to a nonexistent pool, or (ii) a VA surface is released to an
existing surface pool, but one with a different size.
Gwenole Beauchesne [Wed, 2 Jan 2013 16:23:53 +0000 (17:23 +0100)]
mpeg2: don't create sub-buffer for slice data.
Avoid creating a GstBuffer for the slice data. Rather, directly use the
codec frame input buffer data. This is possible because the codec frame is
valid until end_frame(), where we submit the VA buffers for decoding. In any
case, the slice data is copied into the VA buffer when it is created.
Gwenole Beauchesne [Wed, 2 Jan 2013 13:45:50 +0000 (14:45 +0100)]
mpeg2: minor clean-ups.
Drop explicit initialization of most fields that are implicitly set to
zero. Remove some useless checks for NULL pointers.
Gwenole Beauchesne [Wed, 2 Jan 2013 13:18:31 +0000 (14:18 +0100)]
mpeg2: optimize scan for the second start code.
Optimize the scan for the second start code so that, on the next parse()
call, we avoid re-scanning earlier bytes where no start code was found.
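A rough sketch of the idea: remember how far the previous parse() call scanned so the next call resumes from there. The parser-state fields and helper below are assumptions, not the actual MPEG-2 decoder code:

    #include <glib.h>

    typedef struct {
      guint start_code_pos;   /* where the first start code was found       */
      guint scan_pos;         /* how far we already looked for the next one */
    } ParserState;

    /* Look for the next 00 00 01 start-code prefix, resuming from where the
     * previous parse() call stopped instead of re-scanning from scratch. */
    static gboolean
    find_second_start_code (ParserState *ps, const guint8 *data, guint size,
        guint *found_pos)
    {
      guint i = MAX (ps->scan_pos, ps->start_code_pos + 3);

      for (; i + 3 <= size; i++) {
        if (data[i] == 0x00 && data[i + 1] == 0x00 && data[i + 2] == 0x01) {
          *found_pos = i;
          return TRUE;
        }
      }
      /* Not found: remember where to resume next time, keeping the last two
       * bytes since a start-code prefix may straddle the boundary. */
      ps->scan_pos = size > 2 ? size - 2 : 0;
      return FALSE;
    }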
Gwenole Beauchesne [Wed, 2 Jan 2013 13:10:20 +0000 (14:10 +0100)]
mpeg2: use sequence_display_extension() to compute PAR.
Also compute pixel-aspect-ratio from sequence_display_extension(),
should it exist in the bitstream.
Gwenole Beauchesne [Wed, 2 Jan 2013 13:02:29 +0000 (14:02 +0100)]
mpeg2: handle sequence_display_extension().
Gwenole Beauchesne [Thu, 27 Dec 2012 14:18:55 +0000 (15:18 +0100)]
mpeg2: implement {start,end}_frame() hooks.
Implement the GstVaapiDecoder.start_frame() and end_frame() semantics so as
to create the new VA context earlier and submit VA pictures to the HW for
decoding as soon as possible, i.e. don't wait for the next frame to start
decoding the previous one.
Gwenole Beauchesne [Thu, 27 Dec 2012 13:54:29 +0000 (14:54 +0100)]
mpeg2: parse slice() header earlier.
Parse the slice() header and the first macroblock position earlier, in the
_parse() function, instead of waiting for the _decode() stage. This changes
nothing but readability.
Gwenole Beauchesne [Thu, 27 Dec 2012 13:41:04 +0000 (14:41 +0100)]
mpeg2: add codec specific decoder unit.
Introduce a new GstVaapiDecoderUnitMpeg2 object, which holds the standard
GstMpegVideoPacket and additional parsed header info. Besides, we now parse
as early as the _parse() function so as to avoid unnecessary creation of
sub-buffers in _decode() for video packets that are not slices.
Gwenole Beauchesne [Thu, 27 Dec 2012 17:52:43 +0000 (18:52 +0100)]
decoder: introduce lists of units to decode before/after frame.
Theory of operations: all units marked as "slice" are moved to the "units"
list. Since this list only contains slice data units, the prev_slice pointer
was removed. Besides, we now maintain two extra lists of units to be decoded
before or after slice data units.
In particular, all units in the "pre_units" list will be decoded before
GstVaapiDecoder::start_frame() is called and units in the "post_units"
list will be decoded after GstVaapiDecoder::end_frame() is called.
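A schematic sketch of the resulting decode order (pre_units, start_frame(), slice units, end_frame(), post_units); all types and callbacks below are illustrative placeholders, not the actual libgstvaapi code:

    #include <glib.h>

    typedef gboolean (*DecodeUnitFunc) (gpointer decoder, gpointer unit);
    typedef gboolean (*StartFrameFunc) (gpointer decoder, gpointer first_slice);
    typedef gboolean (*EndFrameFunc)   (gpointer decoder);

    /* Decode a frame's units in the order described above:
     * pre_units -> start_frame() -> slice units -> end_frame() -> post_units. */
    static gboolean
    decode_frame_units (gpointer decoder,
        GArray *pre_units, GArray *units, GArray *post_units,
        DecodeUnitFunc decode_unit, StartFrameFunc start_frame,
        EndFrameFunc end_frame)
    {
      guint i;

      for (i = 0; i < pre_units->len; i++)
        if (!decode_unit (decoder, g_array_index (pre_units, gpointer, i)))
          return FALSE;

      /* start_frame() receives the first slice so codecs like H.264 can
       * derive the VA context parameters from it. */
      if (units->len > 0 &&
          !start_frame (decoder, g_array_index (units, gpointer, 0)))
        return FALSE;

      for (i = 0; i < units->len; i++)
        if (!decode_unit (decoder, g_array_index (units, gpointer, i)))
          return FALSE;

      if (!end_frame (decoder))
        return FALSE;

      for (i = 0; i < post_units->len; i++)
        if (!decode_unit (decoder, g_array_index (post_units, gpointer, i)))
          return FALSE;

      return TRUE;
    }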
Gwenole Beauchesne [Wed, 2 Jan 2013 15:06:18 +0000 (16:06 +0100)]
decoder: drop useless checks for codec objects.
Codec objects are used internally only and are bound to be created with a
valid GstVaapiDecoder object.
Gwenole Beauchesne [Thu, 27 Dec 2012 09:35:45 +0000 (10:35 +0100)]
vaapidecode: use GST_ERROR to print error messages.
Gwenole Beauchesne [Thu, 27 Dec 2012 08:55:14 +0000 (09:55 +0100)]
vaapidecode: avoid double release of frame on error.
Don't call gst_video_decoder_drop_frame() if gst_video_decoder_finish_frame()
was already called before and it returned an error. In that case, we were
releasing the frame again, thus leading to a "double-free" condition.
Gwenole Beauchesne [Fri, 21 Dec 2012 13:29:01 +0000 (14:29 +0100)]
Add videoutils submodule for GstVideoDecoder APIs.
Gwenole Beauchesne [Tue, 18 Dec 2012 15:36:01 +0000 (16:36 +0100)]
configure: check for GstVideoDecoder API.
The GstVideoDecoder API is part of an unreleased GStreamer 0.10 stack. In
particular, it is only available in the git 0.10 branch or in the GStreamer
>= 1.0 stack. Interested parties may either use the upstream git 0.10 branch
or backport the necessary support for the GstVideoDecoder API, including
helper types like GstVideoCodecFrame et al.
Gwenole Beauchesne [Tue, 18 Dec 2012 15:21:31 +0000 (16:21 +0100)]
docs: remove obsolete gst_vaapi_surface_proxy_get_type().
GstVaapiSurfaceProxy is no longer based on the GType system.
Gwenole Beauchesne [Tue, 18 Dec 2012 15:17:22 +0000 (16:17 +0100)]
docs: fix entries for GstVaapiSurfaceProxy.
Gwenole Beauchesne [Tue, 18 Dec 2012 14:29:58 +0000 (15:29 +0100)]
NEWS: updates.
Gwenole Beauchesne [Tue, 18 Dec 2012 14:15:52 +0000 (15:15 +0100)]
Bump library major version.
Increase the library major version so as to cope with API/ABI-incompatible
changes since the 0.4.x series and avoid user issues.
Gwenole Beauchesne [Thu, 13 Dec 2012 15:02:52 +0000 (16:02 +0100)]
surfaceproxy: minor clean-ups.
Gwenole Beauchesne [Thu, 13 Dec 2012 14:51:24 +0000 (15:51 +0100)]
surfaceproxy: drop accessors to obsolete attributes.
Make GstVaapiSurfaceProxy only a thin wrapper around a VA context and a
VA surface, i.e. drop any other attributes like timestamp, duration,
interlaced or top-field-first.
Gwenole Beauchesne [Thu, 13 Dec 2012 14:34:10 +0000 (15:34 +0100)]
decoder: maintain decoded frames as GstVideoCodecFrame objects.
Maintain decoded surfaces as GstVideoCodecFrame objects instead of
GstVaapiSurfaceProxy objects. The latter will tend to be reduced to
the strict minimum: a context and a surface.
Gwenole Beauchesne [Thu, 13 Dec 2012 13:30:18 +0000 (14:30 +0100)]
vaapidecode: output all decoded frames as soon as possible.
Make sure to push all decoded frames downstream as soon as possible. This
ensures we don't need to wait for a new frame to be ready for decoding
before receiving new decoded frames.
This also separates the decoding process from the output process; the
latter could be moved to a dedicated GstTask later on.
Gwenole Beauchesne [Thu, 13 Dec 2012 13:27:18 +0000 (14:27 +0100)]
decoder: add gst_vaapi_decoder_get_frame() API.
Add a new gst_vaapi_decoder_get_frame() function meant to be used with
gst_vaapi_decoder_decode(). The purpose is to return the next decoded frame
as a GstVideoCodecFrame, with the associated GstVaapiSurfaceProxy as the
user-data object.
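A minimal usage sketch; the prototype shown is an assumption based on this description (status return, decoded frame as an out parameter, surface proxy as the frame's user data):

    /* Assumed prototype, based on the description above:
     *   GstVaapiDecoderStatus
     *   gst_vaapi_decoder_get_frame (GstVaapiDecoder *decoder,
     *       GstVideoCodecFrame **out_frame);
     */
    #include <gst/vaapi/gstvaapidecoder.h>   /* assumed header path */

    static void
    fetch_decoded_frame (GstVaapiDecoder *decoder)
    {
      GstVideoCodecFrame *frame = NULL;
      GstVaapiSurfaceProxy *proxy;
      GstVaapiDecoderStatus status;

      status = gst_vaapi_decoder_get_frame (decoder, &frame);
      if (status != GST_VAAPI_DECODER_STATUS_SUCCESS)
        return;

      /* The surface proxy rides along as the frame's user data. */
      proxy = gst_video_codec_frame_get_user_data (frame);
      (void) proxy;                        /* ... push it downstream ... */

      gst_video_codec_frame_unref (frame);
    }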
Gwenole Beauchesne [Thu, 13 Dec 2012 14:47:27 +0000 (15:47 +0100)]
vaapipostproc: use GstBuffer flags for TFF.
Determine whether the buffer represents the top field only by checking for
the GST_VIDEO_BUFFER_TFF flag, instead of relying on the GstVaapiSurfaceProxy
flag. Also trust the "interlaced" caps to determine whether the input frame
is interleaved or not.
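A small sketch of the check, using the GStreamer 0.10 video API:

    #include <gst/video/video.h>

    /* Read top-field-first from the buffer flag instead of the
     * GstVaapiSurfaceProxy flag. */
    static gboolean
    buffer_is_top_field_first (GstBuffer *buf)
    {
      return (GST_BUFFER_FLAG_IS_SET (buf, GST_VIDEO_BUFFER_TFF) != 0);
    }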
Gwenole Beauchesne [Thu, 13 Dec 2012 12:27:33 +0000 (13:27 +0100)]
vaapipostproc: handle video sub-buffers.
Intermediate elements may produce a sub-buffer from a valid
GstVaapiVideoBuffer for non-raw-YUV cases. Make sure vaapipostproc now
understands those buffers.
Gwenole Beauchesne [Tue, 18 Dec 2012 13:57:36 +0000 (14:57 +0100)]
h264: optimize initialization process of decoder units.
Decoder units were zero-initialized, including the SPS/PPS/slice headers.
The latter don't require zero-initialization since the codecparsers/ lib
already does so for the key variables. This is not a great gain per se, but
at least it makes it possible to check whether the default initialization
decisions made in the codecparsers/ lib were right or not.
This can be reverted if it exposes too many issues.
Gwenole Beauchesne [Thu, 13 Dec 2012 10:48:06 +0000 (11:48 +0100)]
h264: minor clean-ups.
Drop explicit initialization of most fields that are implicitly set to
zero. Drop helper macros for casting to GstVaapiPictureH264 or
GstVaapiFrameStore. Also remove some useless checks for NULL pointers.
Gwenole Beauchesne [Fri, 7 Dec 2012 16:45:03 +0000 (17:45 +0100)]
h264: drop GstVaapiSliceH264 object.
Use standard GstVaapiSlice object from now on since we already have
parsed and recorded the slice headers (GstH264SliceHdr decode units).
Gwenole Beauchesne [Thu, 13 Dec 2012 09:47:25 +0000 (10:47 +0100)]
h264: detect new pictures from decode-units.
Update is_new_picture() to cope with GstVaapiDecoderUnitH264, instead
of assuming frame boundaries when first_mb_in_slice is zero.
Gwenole Beauchesne [Thu, 13 Dec 2012 09:21:46 +0000 (10:21 +0100)]
h264: implement {start,end}_frame() hooks.
Implement the GstVaapiDecoder.start_frame() and end_frame() semantics so as
to create the new VA context earlier and submit VA pictures to the HW for
decoding as soon as possible, i.e. don't wait for the next frame to start
decoding the previous one.
Gwenole Beauchesne [Wed, 12 Dec 2012 17:33:52 +0000 (18:33 +0100)]
h264: optimize scan for the second start code.
Optimize the scan for the second start code so that, on the next parse()
call, we avoid re-scanning earlier bytes where no start code was found.
Gwenole Beauchesne [Thu, 6 Dec 2012 16:25:01 +0000 (17:25 +0100)]
h264: add codec specific decoder unit.
Introduce a new GstVaapiDecoderUnitH264 object, which holds the standard
NAL unit header (GstH264NalUnit) and additional parsed header info.
Besides, we now parse headers as early as the _parse() function so as to
avoid unnecessary creation of sub-buffers in _decode() for NAL units that
are not slices.
This is only a ~1.1% performance win.
Gwenole Beauchesne [Tue, 4 Dec 2012 10:01:42 +0000 (11:01 +0100)]
vaapisink: handle sub video-buffers.
Intermediate elements may produce a sub-buffer from a valid
GstVaapiVideoBuffer for non-raw-YUV cases. Make sure vaapisink now
understands those buffers.
Sreerenj Balachandran [Wed, 12 Dec 2012 14:22:32 +0000 (15:22 +0100)]
vaapidecode: use gst_vaapi_decoder_get_codec_state().
Directly use the GstVideoCodecState associated with the VA decoder
instead of parsing caps again.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Gwenole Beauchesne [Tue, 4 Dec 2012 13:53:15 +0000 (14:53 +0100)]
vaapidecode: use more standard helpers.
Use g_clear_object() [glib >= 2.28] and gst_caps_replace() helper functions
in more places.
Sreerenj Balachandran [Tue, 4 Dec 2012 13:45:29 +0000 (14:45 +0100)]
vaapidecode: move to GstVideoDecoder base class.
Make vaapidecode derive from the standard GstVideoDecoder base element
class. This simplifies the code to the strict minimum for the decoder
element and makes it easier to port to GStreamer 1.x API.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Gwenole Beauchesne [Thu, 6 Dec 2012 13:02:25 +0000 (14:02 +0100)]
jpeg: initial port to new GstVaapiDecoder API
Gwenole Beauchesne [Thu, 6 Dec 2012 13:02:21 +0000 (14:02 +0100)]
vc1: initial port to new GstVaapiDecoder API
Gwenole Beauchesne [Thu, 6 Dec 2012 13:02:17 +0000 (14:02 +0100)]
h264: initial port to new GstVaapiDecoder API
Gwenole Beauchesne [Mon, 17 Dec 2012 17:47:20 +0000 (09:47 -0800)]
mpeg4: initial port to new GstVaapiDecoder API
Gwenole Beauchesne [Thu, 6 Dec 2012 13:01:46 +0000 (14:01 +0100)]
mpeg2: initial port to new GstVaapiDecoder API.
Sreerenj Balachandran [Wed, 12 Dec 2012 14:09:21 +0000 (15:09 +0100)]
decoder: use GstVideoCodecState.
Use the standard GstVideoCodecState throughout GstVaapiDecoder and expose
it with a new gst_vaapi_decoder_get_codec_state() function. This makes it
possible to drop the picture size (width, height), framerate (fps_n, fps_d),
pixel aspect ratio (par_n, par_d) and interlace mode (is_interlaced) fields.
This is a new API with backwards compatibility maintained. In particular,
gst_vaapi_decoder_get_caps() is still available.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
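A hedged usage sketch for the gst_vaapi_decoder_get_codec_state() accessor above; the return convention and the GstVideoCodecState/GstVideoInfo layout are assumed to follow the GstVideoDecoder base-class API, and reference handling of the returned state is omitted:

    #include <gst/vaapi/gstvaapidecoder.h>   /* assumed header path */

    static void
    print_stream_info (GstVaapiDecoder *decoder)
    {
      GstVideoCodecState *const state =
          gst_vaapi_decoder_get_codec_state (decoder);

      if (!state)
        return;

      /* Assumed: size, framerate and PAR now live in state->info
       * (GstVideoInfo), as in the GstVideoDecoder base-class API. */
      g_print ("size %dx%d, fps %d/%d, PAR %d/%d\n",
          state->info.width, state->info.height,
          state->info.fps_n, state->info.fps_d,
          state->info.par_n, state->info.par_d);
    }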
Gwenole Beauchesne [Wed, 12 Dec 2012 12:44:07 +0000 (13:44 +0100)]
decoder: update gst_vaapi_decoder_get_surface() semantics.
Align gst_vaapi_decoder_get_surface() semantics with the rest of the API.
That is, return a GstVaapiDecoderStatus, with the decoded surface returned
as a GstVaapiSurfaceProxy handle through an out parameter.
This is an API/ABI change.
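A small sketch matching these semantics; the exact prototype is an assumption based on this description:

    /* Assumed prototype, based on the description above:
     *   GstVaapiDecoderStatus
     *   gst_vaapi_decoder_get_surface (GstVaapiDecoder *decoder,
     *       GstVaapiSurfaceProxy **out_proxy);
     */
    static GstVaapiSurfaceProxy *
    pop_decoded_surface (GstVaapiDecoder *decoder)
    {
      GstVaapiSurfaceProxy *proxy = NULL;
      GstVaapiDecoderStatus status;

      status = gst_vaapi_decoder_get_surface (decoder, &proxy);
      if (status != GST_VAAPI_DECODER_STATUS_SUCCESS)
        return NULL;
      return proxy;   /* ownership of the proxy reference is assumed here */
    }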
Gwenole Beauchesne [Fri, 7 Dec 2012 15:40:44 +0000 (16:40 +0100)]
decoder: use standard helper functions.
Use g_clear_object(), gst_buffer_replace() and gst_caps_replace()
whenever necessary.
Gwenole Beauchesne [Thu, 29 Nov 2012 14:06:00 +0000 (15:06 +0100)]
decoder: expose new parse/decode API.
Introduce a new decoding process whereby a GstVideoCodecFrame is created
first. Next, input stream buffers are accumulated into a GstAdapter, which
is then passed to the _parse() function. The GstVaapiDecoder object
accumulates all parsed units and, when a complete frame or field is
detected, passes that GstVideoCodecFrame to the _decode() function.
Ultimately, the caller receives a GstVaapiSurfaceProxy if the decoding
process was successful.
Gwenole Beauchesne [Thu, 13 Dec 2012 09:20:35 +0000 (10:20 +0100)]
decoder: add {start,end}_frame() hooks.
The start_frame() hook is called prior to traversing all decode-units
for decoding. The unit argument represents the first slice in the frame.
Some codecs (e.g. H.264) need to wait for the first slice in order to
determine the actual VA context parameters.
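A schematic sketch of how such per-codec hooks are typically declared; the structure and member names below are illustrative, not the actual GstVaapiDecoderClass layout:

    #include <glib.h>

    /* Illustrative decoder class with the new per-frame hooks. */
    typedef struct _MyDecoder MyDecoder;   /* opaque placeholder */
    typedef gint MyStatus;

    typedef struct {
      /* Called with the first slice unit, before any unit of the frame is
       * decoded, so codecs like H.264 can derive VA context parameters. */
      MyStatus (*start_frame) (MyDecoder *decoder, gpointer first_slice_unit);
      MyStatus (*decode_unit) (MyDecoder *decoder, gpointer unit);
      /* Called once all units were decoded; submits the picture to the HW. */
      MyStatus (*end_frame)   (MyDecoder *decoder);
    } MyDecoderClass;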
Gwenole Beauchesne [Thu, 6 Dec 2012 12:57:42 +0000 (13:57 +0100)]
decoder: add new GstVaapiDecoder API.
Split the decoding process into two steps: (i) parse the incoming bitstream
into simple decoder units until the frame or field is complete; and
(ii) decode the whole frame or field at once.
This is an ABI change.
Gwenole Beauchesne [Wed, 5 Dec 2012 09:51:41 +0000 (10:51 +0100)]
decoder: add new "decoder-frame" object.
Introduce a new GstVaapiDecoderFrame that is simply a list of decoder units
(GstVaapiDecoderUnit objects) constituting a frame. This object is an
extension to GstVideoCodecFrame for VA decoder purposes, and is available
as its user-data member.
This is a libgstvaapi internal object.
Gwenole Beauchesne [Thu, 6 Dec 2012 08:44:01 +0000 (09:44 +0100)]
decoder: add new "decoder-unit" object.
Introduce GstVaapiDecoderUnit which represents a fragment of the source
stream to be decoded. For instance, a decode-unit will be a NAL unit for
H.264 streams, an EBDU for VC-1 streams, and a video packet for MPEG-2
streams.
This is a libgstvaapi internal object.
Gwenole Beauchesne [Mon, 3 Dec 2012 13:09:01 +0000 (14:09 +0100)]
Port GstVaapiFrameStore to GstVaapiMiniObject.
Gwenole Beauchesne [Mon, 3 Dec 2012 10:19:08 +0000 (11:19 +0100)]
Port codec objects to GstVaapiMiniObject.
Gwenole Beauchesne [Mon, 3 Dec 2012 12:46:28 +0000 (13:46 +0100)]
surfaceproxy: port to GstVaapiMiniObject.
GstVaapiSurfaceProxy does not use any particular functionality from
GObject. Actually, it only needs a basic object type with reference
counting.
This is an API and ABI change.
Gwenole Beauchesne [Fri, 30 Nov 2012 16:25:07 +0000 (17:25 +0100)]
Add GstVaapiMiniObject.
Introduce a new reference-counted object that is very lightweight and also
provides flags and user-data functionality. Initialization and finalization
times are reduced by up to a factor of 5x vs. GstMiniObject from the
GStreamer 0.10 stack.
This is a libgstvaapi internal object.
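A conceptual sketch of such a lightweight reference-counted object, built on GLib atomics; this is not the actual GstVaapiMiniObject layout:

    #include <glib.h>

    typedef struct {
      volatile gint  ref_count;
      guint          flags;
      gpointer       user_data;
      GDestroyNotify user_data_destroy;
    } MiniObject;

    static MiniObject *
    mini_object_new (void)
    {
      MiniObject *const object = g_slice_new0 (MiniObject);

      object->ref_count = 1;
      return object;
    }

    static MiniObject *
    mini_object_ref (MiniObject *object)
    {
      g_atomic_int_inc (&object->ref_count);
      return object;
    }

    static void
    mini_object_unref (MiniObject *object)
    {
      if (g_atomic_int_dec_and_test (&object->ref_count)) {
        if (object->user_data_destroy)
          object->user_data_destroy (object->user_data);
        g_slice_free (MiniObject, object);
      }
    }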
Gwenole Beauchesne [Mon, 17 Dec 2012 10:51:17 +0000 (02:51 -0800)]
tests: add test for MPEG-4:2 decoding.
Gwenole Beauchesne [Mon, 17 Dec 2012 12:42:29 +0000 (04:42 -0800)]
h264: initialize VA context before allocating the first slice.
Fix decode_slice() to ensure a VA context exists prior to creating a new
GstVaapiSliceH264, which invokes vaCreateBuffer() with the VA context ID;
i.e. the latter was not initialized, thus causing failures on Cedar Trail,
for example.
Zhao Halley [Wed, 5 Dec 2012 01:15:32 +0000 (09:15 +0800)]
configure: install plugin elements in GST_PLUGIN_PATH, if set.
If the GST_PLUGIN_PATH environment variable exists and points to a valid
directory, use it as the system installation path for the gst-vaapi plugin
elements.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Gwenole Beauchesne [Mon, 17 Dec 2012 13:27:56 +0000 (14:27 +0100)]
configure: downgrade glib required version to 2.28.
Gwenole Beauchesne [Mon, 17 Dec 2012 08:41:24 +0000 (09:41 +0100)]
libs: fix compatibility with glib 2.28.
Always prefer non-deprecated APIs by default and provide compatibility
glue for older glib versions when necessary.
Gwenole Beauchesne [Mon, 17 Dec 2012 09:10:55 +0000 (10:10 +0100)]
libs: use glib >= 2.32 semantics for mutexes.
Use glib >= 2.32 semantics for GMutex and GRecMutex wrt. initialization
and termination. Basically, the new mutex objects can now be used like the
static mutex objects from the deprecated APIs, e.g. GStaticMutex and
GStaticRecMutex.
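A small sketch contrasting the two styles; with glib >= 2.32, a GMutex in static storage can be used directly, much like GStaticMutex before:

    #include <glib.h>

    /* glib >= 2.32: a GMutex in static storage needs no explicit init,
     * much like the deprecated GStaticMutex. */
    static GMutex lock;

    static void
    critical_section (void)
    {
      g_mutex_lock (&lock);
      /* ... protected work ... */
      g_mutex_unlock (&lock);
    }

    /* For mutexes embedded in heap-allocated objects, explicit
     * g_mutex_init() / g_mutex_clear() calls are still required. */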
Gwenole Beauchesne [Mon, 17 Dec 2012 12:15:53 +0000 (04:15 -0800)]
libs: only export gst_vaapi_*() symbols.
This fixes symbol clashes between the gst-vaapi built-in codecparsers/
library and the system-provided one, mainly used by videoparsers/. Now,
only symbols with the gst_vaapi_* prefix will be exported, provided they
are not marked as "hidden" to libgstvaapi.
Gwenole Beauchesne [Tue, 20 Nov 2012 17:21:41 +0000 (18:21 +0100)]
vaapiupload: reset direct-rendering to zero when changing caps.
Make sure to reset the direct-rendering flag to zero when caps are changed,
and only set it to one when the subsequent checks succeed.
Gwenole Beauchesne [Tue, 20 Nov 2012 13:42:24 +0000 (14:42 +0100)]
vaapiupload: fix sink caps to report the supported set of YUV caps.
Try to allocate the GstVaapiUploader helper object prior to listing the
supported image formats. Otherwise, only a single generic caps structure is
output, with no particular pixel format referenced in it.
Zhao Halley [Tue, 20 Nov 2012 13:32:40 +0000 (14:32 +0100)]
vaapiupload: use new GstVaapiUploader helper.
Use the GstVaapiUploader helper, which automatically handles direct-rendering
mode, thus making the "direct-rendering" property obsolete; it is now removed.
"direct-rendering" level 2, i.e. exposing VA surface buffers, was never
really well supported and could actually degrade performance.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Gwenole Beauchesne [Tue, 20 Nov 2012 14:50:56 +0000 (15:50 +0100)]
vaapisink: compute and expose the supported set of YUV caps.
Make vaapisink expose only the set of supported caps for raw YUV buffers.
Add a gst_vaapi_uploader_get_caps() helper function to determine the set
of supported YUV caps as source (for images). This function actually
tries to zero and upload each image to a 64x64 test surface. Of course,
this relies on VA drivers to not claim success if vaPutImage() is not
correctly supported.
Gwenole Beauchesne [Tue, 20 Nov 2012 13:28:55 +0000 (14:28 +0100)]
vaapisink: add support for raw YUV buffers.
Add a new GstVaapiUploader helper to upload raw YUV buffers to VA surfaces.
It is up to the caller to negotiate source caps (for images) and output
caps (for surfaces). gst_vaapi_uploader_has_direct_rendering() is available
to help decide between creating a GstVaapiVideoBuffer or a regular GstBuffer
on sink pads.
Signed-off-by: Zhao Halley <halley.zhao@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Gwenole Beauchesne [Tue, 20 Nov 2012 13:36:29 +0000 (14:36 +0100)]
image: fix GstVaapiImage map and unmap.
Fix gst_vaapi_image_map() to return TRUE and the GstVaapiImageRaw
structure correctly filled in if the image was already mapped.
Likewise, make gst_vaapi_image_unmap() return TRUE if the image
was already unmapped.
Wind Yuan [Tue, 30 Oct 2012 05:15:45 +0000 (13:15 +0800)]
videobuffer: fix memory leak for surface and image.
Fix a reference leak of the surface and image in the GstVaapiVideoBuffer
wrapper, which resulted in an actual memory leak of GstVaapiImage when using
them for downloads/uploads from VA surfaces, and more specifically of
surfaces when the pipeline is shut down; i.e. vaTerminate() was never called
because the resources were not unreferenced, and thus never deallocated.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Gwenole Beauchesne [Mon, 19 Nov 2012 09:04:52 +0000 (10:04 +0100)]
NEWS: updates.
Gwenole Beauchesne [Fri, 16 Nov 2012 17:00:10 +0000 (18:00 +0100)]
h264: fix picture size in macroblocks.
The picture size signalled by sps->{width,height} is the actual size with
cropping applied, not the original size derived from pic_width_in_mbs_minus1
and pic_height_in_map_units_minus1. The VA driver expects that original,
uncropped size.
There is another pending issue: frame cropping information needs to be
taken care of.
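For reference, a sketch of the uncropped size computation implied here, per the H.264 specification; the GstH264SPS field names follow the codecparsers library:

    #include <gst/codecparsers/gsth264parser.h>

    /* Uncropped picture size, as the VA driver expects it. */
    static void
    get_uncropped_size (const GstH264SPS *sps, guint *width, guint *height)
    {
      const guint mb_width  = sps->pic_width_in_mbs_minus1 + 1;
      const guint mb_height = sps->pic_height_in_map_units_minus1 + 1;

      *width  = 16 * mb_width;
      /* Map units are field macroblocks when frame_mbs_only_flag == 0. */
      *height = 16 * mb_height * (2 - sps->frame_mbs_only_flag);
    }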
Gwenole Beauchesne [Fri, 16 Nov 2012 15:18:52 +0000 (16:18 +0100)]
codecparsers: always build parserutils first.
Fix commit 18245b4 so as to link and build parserutils.[ch] first.
This is needed since that is the common dependency for the actual codec
parsers (gstvc1parser.c, for instance).
Gwenole Beauchesne [Thu, 15 Nov 2012 16:50:45 +0000 (17:50 +0100)]
codecparsers: always build the VC-1 parser library.
... this is useful to make sure pixel-aspect-ratio and framerate
information is correctly parsed, since we have no means to detect that at
configure time.
Sreerenj Balachandran [Thu, 8 Nov 2012 09:40:47 +0000 (11:40 +0200)]
mpeg2: fix PAR calculation from commit bd11bae.
Invoke gst_mpeg_video_finalise_mpeg2_sequence_header() to get the
correct PAR values. While doing so, require a newer version of the
bitstream parser library.
Note: it may be necessary to also parse the Sequence_Display_Extension()
header.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Gwenole Beauchesne [Thu, 15 Nov 2012 14:00:43 +0000 (15:00 +0100)]
Fix build with the GNU gold linker.
In particular, fix libgstvaapi-glx DSO dependencies to include libgstbase
and libgstvideo libs, e.g. for gst_video_buffer_get_overlay_composition().
Rob Bradford [Fri, 2 Nov 2012 18:18:37 +0000 (18:18 +0000)]
wayland: port to 1.0 version of the protocol.
This patch updates the code to reflect the 1.0 version of the protocol. The
main changes are the switch to wl_registry for global object notifications
and the way the event queue and file descriptor are processed.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Gwenole Beauchesne [Wed, 14 Nov 2012 18:22:13 +0000 (19:22 +0100)]
h264: fix incorrect integration of previous commit (4d31e1e).
git am got confused somehow, though the end result doesn't change at
all since we require both SPS and PPS to be parsed prior to decoding
the first slice.
Gwenole Beauchesne [Wed, 14 Nov 2012 17:40:47 +0000 (18:40 +0100)]
h264: start decoding slices after first SPS/PPS activation.
Only start decoding slices once at least one SPS and one PPS have been
activated. This fixes cases where a source represents a substream of another
stream and no SPS or PPS was inserted before the first slice of the
generated substream.
Gwenole Beauchesne [Wed, 14 Nov 2012 13:25:34 +0000 (14:25 +0100)]
h264: fix VAPictureParameterBufferH264.ReferenceFrames[] construction.
... for interlaced streams. The short_ref[] and long_ref[] arrays may
contain up to 32 fields, but the VA ReferenceFrames[] array expects up to
16 reference frames, each covering both fields.
Gwenole Beauchesne [Wed, 14 Nov 2012 09:27:12 +0000 (10:27 +0100)]
h264: fix interlaced stream decoding with MMCO.
Fix decoding of interlaced streams when adaptive_ref_pic_marking_mode_flag
is equal to 1, i.e. when memory management control operations are used. In
particular, when field_pic_flag is set to 0, the new reference flags shall
be applied to both fields.
Gwenole Beauchesne [Tue, 13 Nov 2012 16:14:39 +0000 (17:14 +0100)]
h264: add initial support for interlaced streams.
Decoded frames are only output when they are complete, i.e. when both
fields are decoded. This also means that the "interlaced" caps is not
propagated to vaapipostproc or vaapisink elements. Another limitation
is that interlaced bitstreams with MMCO are unlikely to work.
Gwenole Beauchesne [Tue, 13 Nov 2012 15:35:30 +0000 (16:35 +0100)]
h264: split remove_reference_at() into finer units.
Split remove_reference_at() into a function that actually removes the
specified entry from the short-term or long-term reference picture array,
and a function that sets the reference flags to the desired value, possibly
zero. The latter marks the picture as "unused for reference".
Gwenole Beauchesne [Tue, 23 Oct 2012 12:04:22 +0000 (14:04 +0200)]
decoder: fix gst_vaapi_picture_new_field() object type.
Fix gst_vaapi_picture_new_field() to preserve the original picture type,
e.g. gst_vaapi_picture_new_field() with a GstVaapiPictureH264 argument shall
generate a GstVaapiPictureH264 object.
Gwenole Beauchesne [Tue, 13 Nov 2012 13:04:31 +0000 (14:04 +0100)]
h264: add picture structure for reference picture marking process.
Introduce a new `structure' field in the H.264-specific picture structure
so as to simplify the reference picture marking process. That local picture
structure is derived from the original picture structure, as defined by the
syntax elements field_pic_flag and bottom_field_flag.
Gwenole Beauchesne [Fri, 2 Nov 2012 14:14:58 +0000 (15:14 +0100)]
h264: introduce new frame store structure.
The frame store represents a Decoded Picture Buffer entry, which can
hold up to two fields. So far, the frame store is only used to hold
full frames.
Gwenole Beauchesne [Tue, 13 Nov 2012 09:10:31 +0000 (10:10 +0100)]
codecparsers: update to gst-vaapi-rebased commit 73d6aab.
73d6aab h264: fix rbsp_more_data() implementation
25d04cf h264: fix error code for invalid size parsed in SPS
84798e5 fix FSF address
Gwenole Beauchesne [Wed, 31 Oct 2012 15:37:14 +0000 (16:37 +0100)]
h264: minor clean-ups.
Move the DPB flush up if the current picture to decode is an IDR. Besides,
don't bother checking for IDR pictures in the dpb_add() function since an
explicit DPB flush was already performed in this case.
Gwenole Beauchesne [Wed, 31 Oct 2012 13:24:09 +0000 (14:24 +0100)]
h264: simplify reference picture marking process.
... to build the short_ref[] and long_ref[] lists from the DPB, instead
of maintaining them separately. This avoids refs/unrefs while making it
possible to generate the lists based on the actual picture structure.
This also ensures that the generated ReferenceFrames[] list actually
matches the reference frames available in the DPB, i.e. short_ref[] and
long_ref[] entries are implied from the DPB, so there is no risk of
having "dangling" references.
Gwenole Beauchesne [Wed, 31 Oct 2012 10:52:03 +0000 (11:52 +0100)]
h264: introduce per-field POC in GstVaapiPictureH264.
Use the POC member available in the GstVaapiPicture base class and get rid
of the dependency on the local VAPictureH264 TopFieldOrderCnt and
BottomFieldOrderCnt. Rather, use a simple field_poc[] array, initialized to
INT_MAX, so as to simplify picture POC calculation for non-frame pictures.
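A rough sketch of the per-field POC handling described here; the structure and field names are simplified placeholders:

    #include <glib.h>
    #include <limits.h>

    /* Simplified placeholder for the H.264-specific picture. */
    typedef struct {
      gint poc;            /* POC member from the base picture class */
      gint field_poc[2];   /* [0] = top field, [1] = bottom field    */
    } PictureH264;

    static void
    picture_init_poc (PictureH264 *picture)
    {
      /* Unknown fields stay at INT_MAX so MIN() below ignores them. */
      picture->field_poc[0] = INT_MAX;
      picture->field_poc[1] = INT_MAX;
    }

    static void
    picture_update_poc (PictureH264 *picture)
    {
      /* Frame POC is the smaller field POC; for a lone field, the other
       * entry is still INT_MAX and naturally drops out of the MIN(). */
      picture->poc = MIN (picture->field_poc[0], picture->field_poc[1]);
    }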