Sebastian Dröge [Thu, 25 Jul 2019 15:27:30 +0000 (18:27 +0300)]
timecodestamper: Validate LTC timestamps before trying to use them
There's no point in working with invalid LTC timestamps as all future
calculations will be wrong based on this, and invalid LTC timestamps can
sometimes be read via the audio input.
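As an illustration only (not the actual patch), a validity check of this kind
could look as follows, assuming libltc's SMPTETimecode with the usual
hours/mins/secs/frame fields:

  #include <glib.h>
  #include <ltc.h>            /* libltc; field names assumed as noted above */

  /* Hypothetical helper: reject LTC timecodes whose fields are out of range
   * before feeding them into any timecode arithmetic. */
  static gboolean
  ltc_timecode_is_valid (const SMPTETimecode * stc, guint fps_n, guint fps_d)
  {
    guint fps = (fps_n + fps_d - 1) / fps_d;   /* frames per second, rounded up */

    return stc->hours < 24 && stc->mins < 60 && stc->secs < 60
        && stc->frame < fps;
  }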
Ilya Smelykh [Thu, 25 Jul 2019 13:03:02 +0000 (20:03 +0700)]
dtls: fix generated cert dtls agent leak
The generated certificate dtls agent was refed twice on the first call.
Ilya Smelykh [Thu, 25 Jul 2019 10:00:14 +0000 (10:00 +0000)]
dtls: fix dtls connection object leak
Sebastian Dröge [Mon, 22 Jul 2019 16:10:15 +0000 (19:10 +0300)]
decklink: Make sure to return a value from all code paths
False warning from MSVC; it does not understand that
g_assert_not_reached() does not return.
...\gst-plugins-bad-1.0-1.17.0.1\sys\decklink\gstdecklink.cpp(1647) : warning C4715: 'gst_decklink_configure_duplex_mode': not all control paths return a value
Sebastian Dröge [Mon, 22 Jul 2019 14:57:01 +0000 (17:57 +0300)]
decklinksrc: Reset timestamp observations on format change
We will usually get timestamps starting from 0 again and due to the
format change the clock of the input might also be different.
Seungha Yang [Thu, 25 Jul 2019 07:45:21 +0000 (16:45 +0900)]
nvcodec: Clean up pointless return values around plugin init
Any plugin which returns FALSE from plugin_init will be blacklisted,
so the plugin would be unusable even if the user installs the required
runtime dependency later. That is why nvcodec always returns TRUE.
This commit removes code that could be misread.
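A minimal sketch of the resulting control flow, with hypothetical helper and
type names purely for illustration:

  #include <gst/gst.h>

  /* Hypothetical names, only to illustrate the control flow. */
  gboolean nvcodec_load_library (void);
  GType gst_nv_h264_dec_get_type (void);

  static gboolean
  plugin_init (GstPlugin * plugin)
  {
    /* Returning FALSE would blacklist the plugin, keeping it unusable even
     * after the CUDA runtime is installed later, so always return TRUE. */
    if (!nvcodec_load_library ())
      return TRUE;              /* nothing registered, but not blacklisted */

    return gst_element_register (plugin, "nvh264dec", GST_RANK_PRIMARY,
        gst_nv_h264_dec_get_type ());
  }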
Seungha Yang [Wed, 24 Jul 2019 04:06:16 +0000 (13:06 +0900)]
nvcodec: Change log level for g_module_open failure
Since we build the nvcodec plugin without an external CUDA dependency,
failing to load the CUDA and encoder/decoder libraries can be normal behavior.
Emit an error only when the module was opened but required symbols are missing.
Seungha Yang [Wed, 24 Jul 2019 01:00:56 +0000 (10:00 +0900)]
nvdec: Add support for 10bits 4:2:0 decoding
This commit includes h265 main-10 profile support if the device can
decode it.
Note that since 10-bit h264 decoding is not supported by NVIDIA GPUs for now,
the additional code path for the h264 high-10 profile is a preparation for
a future NVIDIA enhancement.
Seungha Yang [Wed, 24 Jul 2019 09:06:41 +0000 (18:06 +0900)]
nvdec: Specify supported profiles of h264/h265 codec
See the NVIDIA Codec SDK document "NVDEC_VideoDecoder_API_ProgGuide.pdf",
Table 1 (Hardware Video Decoder Capabilities), for more details about
supported formats.
Fixes: https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/926
Seungha Yang [Wed, 24 Jul 2019 11:38:58 +0000 (20:38 +0900)]
nvdec: Skip draining before creating internal parser
GstVideoDecoder::drain/flush can be called in the very initial state,
triggered by the stream-start and flush-stop events, respectively.
Draining with a NULL CUvideoparser seems to be unsafe and eventually
fails.
Xavier Claessens [Wed, 24 Jul 2019 18:37:40 +0000 (14:37 -0400)]
dash: Fallback to libxml2 subproject
Aaron Boxer [Tue, 23 Jul 2019 17:47:44 +0000 (13:47 -0400)]
msdkdec: improve spelling and grammar of comments
Haihao Xiang [Tue, 23 Jul 2019 05:16:36 +0000 (13:16 +0800)]
msdkdec: make sure mfx frame width/height meets MSDK's requirement
It is possible that the output region size (e.g. 192x144) is different
from the coded picture size (e.g. 192x256). We may adjust the alignment
parameters so that the padding is respected in GstVideoInfo and use
GstVideoInfo to calculate the mfx frame width and height.
This fixes the error below when decoding a stream whose output region
size and coded picture size differ:
0:00:00.057726900 28634 0x55df6c3220a0 ERROR msdkdec gstmsdkdec.c:1065:gst_msdkdec_handle_frame:<msdkh265dec0> DecodeFrameAsync failed (failed to allocate memory)
Sample pipeline:
gst-launch-1.0 filesrc location=output.h265 ! h265parse ! msdkh265dec !
glimagesink
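A rough sketch of the adjustment described above, with names and alignment
macros chosen for illustration rather than taken from the patch:

  #include <gst/video/video.h>

  /* Hypothetical sketch: carry the coded-picture padding in GstVideoInfo and
   * derive the mfx frame size from the padded (coded) dimensions. */
  static void
  set_mfx_frame_size (GstVideoInfo * info, GstVideoAlignment * align,
      guint coded_width, guint coded_height, guint16 * mfx_width,
      guint16 * mfx_height)
  {
    gst_video_alignment_reset (align);
    align->padding_right = coded_width - GST_VIDEO_INFO_WIDTH (info);
    align->padding_bottom = coded_height - GST_VIDEO_INFO_HEIGHT (info);
    gst_video_info_align (info, align);

    /* MSDK wants aligned coded dimensions; 16/32 used here for illustration */
    *mfx_width = GST_ROUND_UP_16 (coded_width);
    *mfx_height = GST_ROUND_UP_32 (coded_height);
  }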
Haihao Xiang [Tue, 23 Jul 2019 05:28:17 +0000 (13:28 +0800)]
msdkdec: remove unneeded code
The alignment parameters have already been adjusted before
gst_msdkdec_create_buffer_pool is called.
Seungha Yang [Tue, 23 Jul 2019 00:40:24 +0000 (09:40 +0900)]
nvcodec: Drop system installed cuda.h dependency
... and add our stub cuda header.
The newly introduced stub cuda.h file defines the minimal types needed to
build the nvcodec plugin without a system-installed CUDA toolkit dependency.
This will make cross-compilation possible.
Seungha Yang [Tue, 23 Jul 2019 01:24:10 +0000 (10:24 +0900)]
nvcodec: Keep requested rank for default device
Fix the default encoder and decoder element factories so that they have a
higher rank than the others.
Seungha Yang [Tue, 9 Jul 2019 04:31:27 +0000 (13:31 +0900)]
nvenc: Register elements per GPU device with capability check
* With this commit, if there is more than one device, an nvenc element
factory will be created per device, such as nvh264device{device-id}enc
and nvh265device{device-id}enc, in addition to nvh264enc and nvh265enc,
so that each element factory can expose the exact capability of its
device for the codec (see the registration sketch below).
* Each element factory has a fixed cuda-device-id which is determined
during plugin initialization depending on the capability of the
corresponding device (e.g., when only the second of two GPUs can encode
h265, then nvh265enc will choose "1" (zero-based numbering) as its
target cuda-device-id). As we have an element factory per GPU device,
the "cuda-device-id" property is changed to read-only.
* nvh265enc gains the ability to encode 4:4:4 8-bit and 4:2:0 10-bit
formats and resolutions up to 8K, depending on device capability.
Additionally, I420 GLMemory input is supported by nvenc.
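A registration sketch (illustrative only; the real code also creates a
device-specific subclass GType that carries the probed capabilities):

  #include <gst/gst.h>

  /* Hypothetical sketch: keep the plain name and requested rank for the
   * default device (assumed to be device 0 here) and register additional
   * per-device factories with GST_RANK_NONE. */
  static gboolean
  register_nvenc_for_device (GstPlugin * plugin, guint device_id, GType type,
      guint rank)
  {
    gchar *name;
    gboolean ret;

    if (device_id == 0)
      return gst_element_register (plugin, "nvh264enc", rank, type);

    name = g_strdup_printf ("nvh264device%uenc", device_id);
    ret = gst_element_register (plugin, name, GST_RANK_NONE, type);
    g_free (name);

    return ret;
  }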
Seungha Yang [Sun, 21 Jul 2019 12:23:30 +0000 (21:23 +0900)]
nvdec: Create CUDA context with registered device id
Only the default device has been used by NVDEC so far.
This commit makes it possible to use a registered device id.
To simplify device id selection, GstNvDecCudaContext usage is removed.
Seungha Yang [Thu, 11 Jul 2019 12:53:46 +0000 (21:53 +0900)]
nvdec: Register elements per device/codec with capability check
With this commit, each codec has its own element factory, so the generic
nvdec element factory is removed. Also, if there is more than one device,
an additional element factory will be created per device, such as
nvh264device{device-id}dec, so that each element factory can expose the
exact capability of its device for the codec.
Seungha Yang [Thu, 18 Jul 2019 09:27:55 +0000 (18:27 +0900)]
msdk: Do not expose DMA buffer caps feature on Windows
On Windows, DMA buffers are not supported. It makes more sense for the
PadTemplate to advertise only the actually supported features.
Seungha Yang [Mon, 22 Jul 2019 14:01:43 +0000 (23:01 +0900)]
nvcodec: Drop cudaGL.h dependency
nvcodec does not use any type/define/enum in cudaGL.h.
Sebastian Dröge [Mon, 22 Jul 2019 09:23:51 +0000 (12:23 +0300)]
av1enc: Also set AV1E_SET_ROW_MT from the property value when initializing the encoder
Previously it was only set if the property was changed after the encoder
was initialized.
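Roughly, the fix amounts to also pushing the cached property value to libaom
during initialization, e.g. (illustrative only, not the exact patch):

  #include <glib.h>
  #include <aom/aom_codec.h>
  #include <aom/aomcx.h>

  /* Hypothetical helper, called both at encoder init and on property change. */
  static void
  apply_row_mt (aom_codec_ctx_t * encoder, unsigned int row_mt)
  {
    if (aom_codec_control (encoder, AV1E_SET_ROW_MT, row_mt) != AOM_CODEC_OK)
      g_warning ("Failed to set AV1E_SET_ROW_MT");
  }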
Wonchul Lee [Sun, 2 Dec 2018 13:49:19 +0000 (22:49 +0900)]
av1enc: Add threads and row-mt properties
Add a threads property for setting the number of threads used to encode
the AV1 codec, along with a row-mt configuration property.
Wonchul Lee [Sun, 2 Dec 2018 12:45:50 +0000 (21:45 +0900)]
av1enc: Release lock when failing to initialize
Add missing unlock when failing to initialize the encoder.
Sebastian Dröge [Mon, 22 Jul 2019 08:23:22 +0000 (11:23 +0300)]
Revert "av1enc: Release lock when failing to initialize"
This reverts commit 7de6b5d48161cb4982efe7fd04c8be408ca85424.
It was accidentally squashed together from the MR instead of keeping the
individual commits.
Fabrice Bellet [Mon, 22 Jul 2019 08:00:00 +0000 (08:00 +0000)]
siren: fix a global buffer overflow spotted by asan
This patch just enforces boundaries for accesses to the
standard_deviation array (64 floats). Such a case can be
seen with a corrupted stream, where there's no hope of
obtaining a valid decoded frame anyway.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1002
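The guard is conceptually along these lines (names are illustrative, not the
exact patch):

  #include <glib.h>

  #define STANDARD_DEVIATION_LEN 64

  /* Hypothetical sketch: clamp the computed index so corrupted input cannot
   * read outside the 64-float standard_deviation table. */
  static float
  lookup_standard_deviation (const float * standard_deviation, int index)
  {
    index = CLAMP (index, 0, STANDARD_DEVIATION_LEN - 1);
    return standard_deviation[index];
  }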
Wonchul Lee [Mon, 22 Jul 2019 06:59:48 +0000 (06:59 +0000)]
av1enc: Release lock when failing to initialize
Add missing unlock when failing to initialize the encoder.
Seungha Yang [Thu, 18 Jul 2019 16:07:38 +0000 (01:07 +0900)]
nvdec: Fix video stuttering issue with VP9
Address nvidia driver specific behavior to avoid unexpected frame mismatch
between GStreamer and NVDEC.
Seungha Yang [Thu, 18 Jul 2019 15:52:59 +0000 (00:52 +0900)]
nvdec: Drop async queue and handle data on callback of CUvideoparser
CUvideoparser callbacks are invoked on the streaming thread,
so the use of an async queue has no benefit.
Make the control flow straightforward instead of a long while/switch loop.
Mathieu Duponchelle [Fri, 12 Jul 2019 18:24:10 +0000 (20:24 +0200)]
rtponviftimestamp: fix setting of the discontinuity flag
The D bit is meant to be set whenever there is a discontinuity
in transmission, and directly maps to the DISCONT flag.
The E bit is not meant to be set on every buffer preceding a
discontinuity, but only on the last buffer of a contiguous section
of recording. This has to be signaled through the unfortunately-named
"discont" field of the custom NtpOffset event.
Mathieu Duponchelle [Fri, 12 Jul 2019 18:23:24 +0000 (20:23 +0200)]
rtponvifparse: set ONVIF timestamps as buffer PTS
Mathieu Duponchelle [Wed, 10 Jul 2019 21:40:36 +0000 (23:40 +0200)]
h26{4,5}parse: add support for forward predicted trick mode
Also stop assigning TRUE to fields with |=
Seungha Yang [Wed, 17 Jul 2019 13:42:10 +0000 (22:42 +0900)]
x265enc: Specify colorimetry related VUI parameters
Set the colorimetry config so the information is embedded in the encoded bitstream.
Seungha Yang [Mon, 15 Jul 2019 14:40:21 +0000 (23:40 +0900)]
nvdec: Port to color_{primaries,transfer,matrix}_to_iso
... and update the color information only when upstream did not provide it.
Seungha Yang [Wed, 17 Jul 2019 00:35:35 +0000 (09:35 +0900)]
nvenc: Specify colorimetry related VUI parameters
Set the colorimetry config so the information is embedded in the encoded bitstream.
Mathieu Duponchelle [Tue, 16 Jul 2019 21:30:07 +0000 (23:30 +0200)]
webrtcdatachannel: inherit directly from GObject
There's no reason for it to inherit from GstObject apart from
locking, which is easily replaced, and inheriting from
GInitiallyUnowned made introspection awkward and needlessly
complicated.
Seungha Yang [Tue, 16 Jul 2019 15:13:24 +0000 (00:13 +0900)]
h264parse: Update caps per pixel aspect ratio change
Output caps should be updated per pixel aspect ratio change.
Seungha Yang [Tue, 16 Jul 2019 13:58:26 +0000 (22:58 +0900)]
h265parse: Expose parsed colorimetry when VUI provided it
... and also if upstream did not specify the colorimetry.
Seungha Yang [Tue, 16 Jul 2019 00:40:01 +0000 (09:40 +0900)]
h264parse: Expose parsed colorimetry when VUI provided it
... and also if upstream did not specify the colorimetry.
Seungha Yang [Tue, 16 Jul 2019 16:05:32 +0000 (01:05 +0900)]
kmssink: Fix implicit declaration build error
ffs() and strcmp() require string.h
gstkmssink.c:255:28: error: implicit declaration of function ‘ffs’ [-Werror=implicit-function-declaration]
crtc_id = res->crtcs[ffs (crtcs_for_connector) - 1];
^~~
gstkmssink.c:590:10: error: implicit declaration of function ‘strcmp’ [-Werror=implicit-function-declaration]
if (!strcmp (property->name, prop_name)) {
^~~~~~
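The fix boils down to adding the relevant headers; note that POSIX declares
ffs() in strings.h while strcmp() comes from string.h, so depending on the
platform both includes may be needed:

  /* gstkmssink.c */
  #include <string.h>         /* strcmp() */
  #include <strings.h>        /* ffs() on POSIX systems */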
Martin Liska [Mon, 15 Jul 2019 14:05:05 +0000 (16:05 +0200)]
Fix -Werror=return-type error in configure.
Martin Theriault [Mon, 15 Jul 2019 19:48:08 +0000 (15:48 -0400)]
aiff: Fix infinite loop in header parsing.
Sebastian Dröge [Mon, 15 Jul 2019 09:06:25 +0000 (12:06 +0300)]
decklinkvideosrc: Don't report that we have signal until we know for sure
Previously we would have reported that there is a signal unless we knew for
sure that we don't have a signal. For example, a signal would have been
reported before the device is even opened.
Now keep track of whether the signal state is unknown and report no
signal if we don't know yet. As before, only send an INFO message about
signal recovery if we actually had a signal loss before.
Sebastian Dröge [Fri, 12 Jul 2019 09:53:09 +0000 (12:53 +0300)]
avwait: In running-time mode, select start/end running time based on the actual video timestamps
Otherwise we would start/end at exactly the given times, which might be
up to 1 frame earlier/later than the video.
Sebastian Dröge [Fri, 12 Jul 2019 09:29:09 +0000 (12:29 +0300)]
avwait: Add some more debug output
Sebastian Dröge [Fri, 12 Jul 2019 09:28:59 +0000 (12:28 +0300)]
avwait: Fix clipping of audio buffers at the start of recording
Ting-Wei Lan [Tue, 9 Jul 2019 16:34:18 +0000 (00:34 +0800)]
build: Fix error messages for missing hotdoc extensions
Sebastian Dröge [Tue, 9 Jul 2019 09:43:53 +0000 (12:43 +0300)]
cccombiner: Proxy POSITION/DURATION/URI/CAPS/ALLOCATION queries between video sinkpad and source pad
We pass through the video as is, only putting a GstMeta on it from the
caption sinkpad.
This fixes negotiation problems caused by not passing through caps
queries in both directions.
Also handle CAPS/ACCEPT_CAPS queries directly for the caption pad
instead of proxying.
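A simplified sketch of this kind of query proxying on the video sinkpad
(helper signature and names are illustrative, not the element's actual code):

  #include <gst/gst.h>

  /* Hypothetical sketch: forward selected queries arriving on the video
   * sinkpad to the peer of the source pad, so negotiation behaves as if the
   * element were pass-through for video. */
  static gboolean
  video_sink_query (GstPad * pad, GstObject * parent, GstQuery * query,
      GstPad * srcpad)
  {
    switch (GST_QUERY_TYPE (query)) {
      case GST_QUERY_POSITION:
      case GST_QUERY_DURATION:
      case GST_QUERY_URI:
      case GST_QUERY_CAPS:
      case GST_QUERY_ALLOCATION:
        return gst_pad_peer_query (srcpad, query);
      default:
        return gst_pad_query_default (pad, parent, query);
    }
  }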
Seungha Yang [Thu, 20 Dec 2018 03:37:43 +0000 (12:37 +0900)]
nvdec: Fix possible frame drop on EOS
On EOS, the video decoder base class calls the finish() vfunc instead of drain().
Ray Tiley [Mon, 8 Jul 2019 20:43:10 +0000 (16:43 -0400)]
decklinkvideosrc: remove g_print
Causes a lot of output :)
Seungha Yang [Mon, 8 Jul 2019 14:58:29 +0000 (23:58 +0900)]
vulkan: Fix incompatible type build warning
Make the function declaration and definition consistent.
Note that GstBaseTransform::set_caps should return gboolean
Compiling C object subprojects/gst-plugins-bad/ext/vulkan/f3f9d6b@@gstvulkan@sha/vkviewconvert.c.obj.
../subprojects/gst-plugins-bad/ext/vulkan/vkviewconvert.c(644):
warning C4133: '=': incompatible types - from 'GstFlowReturn (__cdecl *)(GstBaseTransform *,GstCaps *,GstCaps *)'
to 'gboolean (__cdecl *)(GstBaseTransform *,GstCaps *,GstCaps *)'
Olivier Crête [Mon, 8 Jul 2019 19:51:43 +0000 (15:51 -0400)]
srt: Remove msg-size property
Remove the now unused property
Olivier Crête [Mon, 8 Jul 2019 19:50:59 +0000 (15:50 -0400)]
srtsrc: Receive one frame per gstbuffer
Don't aggregate the received data; just receive it one packet at a
time, so the packetization boundaries are preserved.
Nicolas Dufresne [Sat, 6 Jul 2019 20:15:40 +0000 (16:15 -0400)]
srt: Fix listener crash if no URI is specified
Nicolas Dufresne [Sat, 6 Jul 2019 19:53:26 +0000 (15:53 -0400)]
srt: Use macro instead of duplicating a default value
Nicolas Dufresne [Sat, 6 Jul 2019 19:45:20 +0000 (15:45 -0400)]
srt: Fix confusing typo in FIXME comment
SRT does not support IPv6, but the comment said IPv4 which was the
opposite of the following code.
Sebastian Dröge [Mon, 1 Jul 2019 10:43:28 +0000 (13:43 +0300)]
timecodestamper: Add support for linear timecode (LTC) from an audio stream
Based on a patch by
Georg Lippitsch <glippitsch@toolsonair.com>
Vivia Nikolaidou <vivia@toolsonair.com>
Using libltc from https://github.com/x42/libltc
Sebastian Dröge [Mon, 1 Jul 2019 10:42:16 +0000 (13:42 +0300)]
timecodestamper: Rewrite element API and code flow
We now have a single property to select the timecode source that should
be applied, and for each timecode source the timecode is updated at
every frame. Then based on a set mode, the timecode is added to the
frame if none exists already or all existing timecodes are removed and
the timecode is added.
In addition, the real-time clock is now considered a proper timecode source
instead of only being used to initialize the timecode once at the beginning,
and instead of just taking the current time we now take the current time
at the clock time of the video frame.
Marc Leeman [Fri, 7 Jun 2019 11:27:21 +0000 (13:27 +0200)]
nvcodec: do a generic cuda test before going into version specifics
Seungha Yang [Fri, 17 May 2019 13:27:50 +0000 (22:27 +0900)]
nvdec,nvenc: Port to dynamic library loading
... and put them into new nvcodec plugin.
* nvcodec plugin
Now the nvenc and nvdec elements are moved to be part of the nvcodec plugin
for better interoperability.
Additionally, cuda runtime API header dependencies
(i.e., cuda_runtime_api.h and cuda_gl_interop.h) are removed.
Note that cuda runtime APIs have the prefix "cuda". Since the 1.16 release
with Windows support, only symbols depending on "cuda.h" and "cudaGL.h" have
been used, except for some defined types. However, those types could be
replaced with other types defined by "cuda.h".
* dynamic library loading
The CUDA library will be opened with g_module_open() instead of being linked
at build time.
On Windows, nvcuda.dll is installed to the system path by the CUDA Toolkit
installer, and on *nix, the user should ensure that libcuda.so.1 is
loadable (i.e., via LD_LIBRARY_PATH or the default dlopen path).
Therefore, the NVIDIA_VIDEO_CODEC_SDK_PATH build-time environment dependency
for Windows is removed.
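A minimal sketch of this loading approach with GModule (the resolved symbol
and simplified signature are for illustration):

  #include <gmodule.h>

  typedef int (*CuInitFunc) (unsigned int flags);   /* return type simplified */

  /* Hypothetical sketch: open the CUDA driver library at runtime and resolve
   * a needed symbol; failing to open the module is not an error, but a
   * missing symbol in an opened module is. */
  static CuInitFunc
  load_cu_init (void)
  {
  #ifdef G_OS_WIN32
    const gchar *filename = "nvcuda.dll";
  #else
    const gchar *filename = "libcuda.so.1";
  #endif
    GModule *module;
    gpointer sym = NULL;

    module = g_module_open (filename, G_MODULE_BIND_LAZY);
    if (module == NULL)
      return NULL;              /* CUDA simply not available */

    if (!g_module_symbol (module, "cuInit", &sym)) {
      g_warning ("cuInit missing from %s", filename);
      g_module_close (module);
      return NULL;
    }

    return (CuInitFunc) sym;
  }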
Seungha Yang [Wed, 30 Jan 2019 11:07:29 +0000 (20:07 +0900)]
d3d11videosink: Add new Direct3D11 video render plugin
Direct3D11 was shipped as part of Windows 7 and is obviously the
primary graphics API on Windows.
This plugin includes HDR10 rendering if the following requirements are satisfied
* IDXGISwapChain4::SetHDRMetaData is available (declared in dxgi1_5.h)
* The display can support the DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020 color space
* Upstream provides a 10-bit-depth format with SMPTE ST 2084 static metadata
Sebastian Dröge [Fri, 5 Jul 2019 21:58:47 +0000 (00:58 +0300)]
webrtcbin: Don't assert if an SDP media can't be converted to caps
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1008
Haihao Xiang [Thu, 25 Apr 2019 08:32:34 +0000 (16:32 +0800)]
msdk: add msdkvp9enc element
Haihao Xiang [Sun, 28 Apr 2019 08:10:13 +0000 (16:10 +0800)]
msdk: workaround for MFX_FOURCC_VP9_SEGMAP surface
The MFX_FOURCC_VP9_SEGMAP surface in MSDK is an internal surface; however,
MSDK still calls the external allocator for this surface, so this plugin
has to return UNSUPPORTED and force MSDK to allocate the surface using the
internal allocator.
See https://github.com/Intel-Media-SDK/MediaSDK/issues/762 for details
Haihao Xiang [Wed, 8 May 2019 08:05:07 +0000 (16:05 +0800)]
msdkenc: allow encode elements to require extra frames
A call to MFXVideoENCODE_EncodeFrameAsync may not generate output; the
function returns MFX_ERR_MORE_DATA with a NULL sync point and the input
frame is cached in this case. It is therefore possible that all allocated
frames go into the surfaces_used list after calling
MFXVideoENCODE_EncodeFrameAsync a few times, and then the encoder will fail
to get an available surface before used frames are released.
This patch adds a new num_extra_frames field to GstMsdkEnc and allows
encode elements to require extra frames; the default value is 0.
This patch is the preparation for the msdkvp9enc element.
Matthew Waters [Fri, 5 Jul 2019 06:20:29 +0000 (16:20 +1000)]
tests/vulkan: fix copyright name
Matthew Waters [Fri, 5 Jul 2019 06:20:05 +0000 (16:20 +1000)]
vulkan/window: add property for the parent display
Matthew Waters [Fri, 5 Jul 2019 06:13:13 +0000 (16:13 +1000)]
vulkan/device: add property for the parent instance
Matthew Waters [Thu, 4 Jul 2019 07:22:07 +0000 (17:22 +1000)]
vulkan: add view converter element
Matthew Waters [Thu, 4 Jul 2019 07:19:31 +0000 (17:19 +1000)]
vulkan: fix output framebuffer creation size
We don't scale when color converting so there is no impact.
Mathieu Duponchelle [Thu, 4 Jul 2019 23:26:26 +0000 (01:26 +0200)]
tsmux: output smoothly increasing PTS when in CBR mode
Thanks to that, when its output is plugged into e.g. a udp sink, the
outgoing data can be output in a smoother way, reducing burstiness.
Jan Schmidt [Thu, 4 Jul 2019 14:17:10 +0000 (00:17 +1000)]
tests: Add h264parser SEI checks
Add some tests around SEI parsing.
Jan Schmidt [Fri, 28 Jun 2019 04:59:18 +0000 (14:59 +1000)]
h264parser lib: Add more profile_idc to the recognised set
Update the list of profile_idc recognised during SPS parsing
based on H.264 201704
Jan Schmidt [Fri, 28 Jun 2019 04:50:00 +0000 (14:50 +1000)]
h264parse lib: Remove the SPS parse_vui_params flag
The SPS parsing functions take a parse_vui_param flag
to skip VUI parsing, but there's no indication in the output
SPS struct that the VUI was skipped.
The only caller that ever passed FALSE seems to be the
important gst_h264_parser_parse_nal() function, so the
cached SPS were always silently invalid. That needs changing
anyway, after which no one ever passes FALSE.
I don't see any use for saving a few microseconds in
order to silently produce garbage, and since this is still
unstable API, let's remove the parse_vui_param.
Jan Schmidt [Fri, 28 Jun 2019 04:46:36 +0000 (14:46 +1000)]
h264parser lib: Warn on invalid pic_timing SEI
The spec calls for pic_timing SEI to be absent unless
there's either a CpbDpbDelaysPresentFlag or
pic_struct_present_flag in the SPS VUI data. If
both those flags are missing, warn.
Jan Schmidt [Fri, 28 Jun 2019 04:42:19 +0000 (14:42 +1000)]
h264parser lib: Always consume all SEI bits
If parsing an SEI errors out, it might not consume
all bits, leaving extra unparsed data in the reader
that the outer loop then tries to parse as a new
appended SEI.
Skip all the bits if any are left over to avoid
'finding' extra garbage SEI in the parsing.
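Conceptually (shown with the public GstBitReader API for illustration; the
parser itself uses its internal NAL reader):

  #include <gst/base/gstbitreader.h>

  /* Hypothetical sketch: after an SEI message has been (partially) parsed,
   * skip whatever payload bits are left so the next message starts at the
   * right offset instead of being parsed out of leftover garbage. */
  static void
  skip_remaining_sei_bits (GstBitReader * reader, guint payload_end_bits)
  {
    guint pos = gst_bit_reader_get_pos (reader);

    if (pos < payload_end_bits)
      gst_bit_reader_skip (reader, payload_end_bits - pos);
  }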
Jan Schmidt [Thu, 27 Jun 2019 16:42:00 +0000 (02:42 +1000)]
h264parser: Return BROKEN_LINK for missing SPS
When parsing SEI that require an SPS, return
GST_H264_PARSER_BROKEN_LINK instead of a generic
parsing error to let callers distinguish
bitstream errors from (expected) missing packets
when resuming decode.
Jan Schmidt [Thu, 27 Jun 2019 15:26:19 +0000 (01:26 +1000)]
h264parser: Improve documentation
Improve some docs around the NALU structure contents
Jan Schmidt [Thu, 27 Jun 2019 14:27:12 +0000 (00:27 +1000)]
gstmpegvideoparser: Documentation fixes
Fix some spelling mistakes and improve documentation in
the MPEG video parser
Seungha Yang [Thu, 4 Jul 2019 10:43:42 +0000 (19:43 +0900)]
tsmuxstream: Do not try return from void function
../subprojects/gst-plugins-bad/gst/mpegtsmux/tsmux/tsmuxstream.c(1082): warning C4098:
'tsmux_stream_get_es_descrs': 'void' function returning a value
Seungha Yang [Thu, 4 Jul 2019 10:42:48 +0000 (19:42 +0900)]
mpegtsmux: Remove white space
Matthew Waters [Thu, 4 Jul 2019 04:16:17 +0000 (14:16 +1000)]
vulkan: move swapper object to the gstvulkan library
Allows other sinks and/or user code to display to a VkSurface
Matthew Waters [Thu, 4 Jul 2019 04:03:51 +0000 (14:03 +1000)]
vulkan: move trash list to library
Matthew Waters [Wed, 3 Jul 2019 03:48:49 +0000 (13:48 +1000)]
webrtcbin: use the latest self-generated SDP as the basis for renegotiations
Fixes multiple errors when a webrtcbin renegotiation switches between the
offerer and the answerer.
Ederson de Souza [Fri, 17 May 2019 23:00:24 +0000 (16:00 -0700)]
avtp: Update documentation
Ederson de Souza [Tue, 26 Mar 2019 21:25:56 +0000 (14:25 -0700)]
tests: Add AVTP CVF depayloader tests
In these tests, some specially crafted buffers are sent to the
depayloader, simulating some scenarios and checking what comes out from
it.
Ederson de Souza [Tue, 26 Mar 2019 00:23:49 +0000 (17:23 -0700)]
tests: Add AVTP CVF payloader tests
In these tests, some specially crafted buffers are sent to the
payloader, simulating some scenarios and checking what comes out from
it.
Andre Guedes [Thu, 2 May 2019 17:52:42 +0000 (10:52 -0700)]
tests: Add AVTP source tests
This patch adds test cases for the AVTP source element. For now, only
properties get() and set() are covered.
Andre Guedes [Thu, 25 Apr 2019 21:16:46 +0000 (14:16 -0700)]
tests: Add AVTP sink tests
This patch adds test cases for the AVTP sink element. For now, only
properties get() and set() are covered.
Andre Guedes [Tue, 9 Apr 2019 21:10:36 +0000 (14:10 -0700)]
tests: Add AAF depayloader tests
This patch adds test cases for the AAF depayloader element covering the
basic functionalities.
Andre Guedes [Fri, 22 Mar 2019 22:54:23 +0000 (15:54 -0700)]
tests: Add AAF payloader tests
This patch adds the infrastructure to test AVTP plugin elements. It also
adds a test case to check avtpaafpay element basic functionality. The
test consists of setting the element sink caps and properties, and
verifying that the output buffer is set as expected.
Ederson de Souza [Wed, 17 Apr 2019 00:32:46 +0000 (17:32 -0700)]
docs: Add AVTP elements documentation
Ederson de Souza [Wed, 20 Mar 2019 23:40:13 +0000 (16:40 -0700)]
avtp: Add fragmented packets handling to CVF depayloader
This patch adds to the CVF depayloader the capability to regroup H.264
fragmented FU-A packets.
After all packets are regrouped, they are added to the "stash" of H.264
NAL units that will be sent as soon as an AVTP packet with M bit set is
found (usually, the last fragment).
Unrecognized fragments (such as a first fragment seen with no Start
bit set) are discarded, and any NAL units in the "stash" are sent
downstream, as if a SEQNUM discontinuity had happened.
Ederson de Souza [Tue, 12 Mar 2019 22:46:16 +0000 (15:46 -0700)]
avtp: Introduce AVTP CVF depayloader element
This patch introduces the AVTP Compressed Video Format (CVF) depayloader
specified in IEEE 1722-2016 section 8. Currently, this depayloader only
supports H.264 encapsulation described in section 8.5.
It is also worth noting that only single NAL units are handled: aggregated
and fragmented payloads are not.
As stated in the AVTP CVF payloader patch, the AVTP timestamp is used to
define the outgoing buffer DTS, while the H264_TIMESTAMP defines the outgoing buffer
PTS.
When an AVTP packet is received, the extracted H.264 NAL unit is added to
a "stash" (the out_buffer) of H.264 NAL units. This "stash" is pushed
downstream as a single buffer (with NAL units aggregated according to the
format used in GStreamer, based on ISO/IEC 14496-15) as soon as we get the
AVTP packet with the M bit set.
This patch groups NAL units using a fixed NAL size length, sent downstream
in the `codec_data` capability.
The "stash" of NAL units can be prematurely sent downstream if a
discontinuity (a missing SEQNUM) happens.
This patch reuses the infra provided by gstavtpbasedepayload.c.
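For reference, the length-prefixed grouping mentioned above looks roughly like
this (a 4-byte NAL size is assumed for illustration):

  #include <string.h>
  #include <gst/gst.h>

  /* Hypothetical sketch: wrap one extracted NAL unit with a 4-byte big-endian
   * size prefix (ISO/IEC 14496-15 style) before appending it to the "stash"
   * buffer that will be pushed downstream. */
  static GstBuffer *
  wrap_nal_with_size_prefix (const guint8 * nal, gsize nal_size)
  {
    GstBuffer *buf = gst_buffer_new_allocate (NULL, 4 + nal_size, NULL);
    GstMapInfo map;

    gst_buffer_map (buf, &map, GST_MAP_WRITE);
    GST_WRITE_UINT32_BE (map.data, nal_size);
    memcpy (map.data + 4, nal, nal_size);
    gst_buffer_unmap (buf, &map);

    return buf;
  }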
Ederson de Souza [Wed, 6 Mar 2019 02:09:13 +0000 (18:09 -0800)]
avtp: Add fragmentation feature to CVF payloader
Based on the `mtu` property, the CVF payloader is now capable of properly
fragmenting H.264 NAL units that are bigger than the MTU into several AVTP
packets.
AVTP spec defines two methods for fragmenting H.264 packets, but this
patch only generates non-interleaved FU-A fragments.
Usually, only the last NAL unit from a group of NAL units in a single
buffer will be big enough to be fragmented. Nevertheless, only the last
AVTP packet sent for a group of NAL units will have the M bit set (this
means that the AVTP packet for the last fragment will only have the M
bit set if there are no more NAL units in the group).
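The two FU-A header bytes per fragment follow the usual H.264 layout;
schematically (bit masks as in RFC 6184 FU-A, shown here only for
illustration):

  #include <glib.h>

  /* Hypothetical sketch: build the FU indicator and FU header for one
   * fragment of a NAL unit whose first byte is nal_header. */
  static void
  build_fu_a_header (guint8 nal_header, gboolean first, gboolean last,
      guint8 * fu_indicator, guint8 * fu_header)
  {
    /* FU indicator: keep the F and NRI bits, set the type to 28 (FU-A) */
    *fu_indicator = (nal_header & 0xe0) | 28;
    /* FU header: S (start) and E (end) bits plus the original NAL type */
    *fu_header = (first ? 0x80 : 0x00) | (last ? 0x40 : 0x00)
        | (nal_header & 0x1f);
  }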
Ederson de Souza [Thu, 28 Feb 2019 23:49:02 +0000 (15:49 -0800)]
avtp: Introduce AVTP CVF payloader element
This patch introduces the AVTP Compressed Video Format (CVF) payloader
specified in IEEE 1722-2016 section 8. Currently, this payload only
supports H.264 encapsulation described in section 8.5.
It is also worth noting that only single NAL units are encapsulated: no
aggregation or fragmentation is performed by the payloader.
An interesting characteristic of the CVF H.264 spec is that it defines an
H264_TIMESTAMP in addition to the AVTP timestamp. The latter is
translated to GST_BUFFER_DTS while the former is translated to
GST_BUFFER_PTS. From the AVTP CVF H.264 spec, it is clear that the AVTP
timestamp is related to the decoding order, while the H264_TIMESTAMP is
ancillary information for the H.264 decoder.
Upon receiving a buffer containing a group of NAL units, the avtpcvfpay
element will extract each NAL unit and payload them into individual AVTP
packets. The last AVTP packet generated for a group of NAL units will
have the M bit set, so the depayloader is able to properly regroup them.
The exact format of the buffer of NAL units is described on the
'codec_data' capability, which is parsed by the avtpcvfpay, in the same
way done in rtph264pay.
This patch reuses the infra provided by gstavtpbasepayload.c.
Andre Guedes [Wed, 23 Jan 2019 23:17:48 +0000 (15:17 -0800)]
avtp: Introduce AVTP source element
This patch introduces the avtpsrc element which implements a typical
network source. The avtpsrc element receives AVTPDUs encapsulated into
Ethernet frames and pushes them downstream in the GStreamer pipeline.
The implementation is pretty straightforward since most of the work is done
by the GstPushSrc class.
As with the avtpsink element, applications that utilize this element
must have the CAP_NET_RAW capability, since Linux requires it to open
sockets in the AF_PACKET domain.
Andre Guedes [Wed, 23 Jan 2019 18:56:10 +0000 (10:56 -0800)]
avtp: Introduce AVTP sink element
This patch introduces the avtpsink element, which implements a typical
network sink. The implementation is pretty straightforward since most of
the work is done by the GstBaseSink class.
The avtpsink element defines three new properties: 1) network interface
from where AVTPDU should be transmitted, 2) destination MAC address
(usually a multicast address), and 3) socket priority (SO_PRIORITY).
Socket setup and teardown are done in start/stop virtual methods while
AVTPDU transmission is carried out by render(). AVTPDUs are encapsulated
into Ethernet frames and transmitted to the network via AF_PACKET socket
domain. Linux requires CAP_NET_RAW capability in order to open an
AF_PACKET socket so the application that utilize this element must have
it. For further info about AF_PACKET socket domain see packet(7).
Finally, AVTPDUs are expected to be transmitted at specific times -
according to the GstBuffer presentation timestamp - so the 'sync'
property from GstBaseSink is set to TRUE by default.
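For orientation, socket setup of this kind typically looks like the following
(error handling trimmed; protocol constant and interface handling are
illustrative, not the element's actual code):

  #include <sys/socket.h>
  #include <linux/if_packet.h>
  #include <linux/if_ether.h>
  #include <net/if.h>
  #include <arpa/inet.h>

  /* Hypothetical sketch: open an AF_PACKET socket bound to one interface and
   * set its SO_PRIORITY; CAP_NET_RAW is required. */
  static int
  open_avtp_socket (const char *ifname, int priority)
  {
    struct sockaddr_ll addr = { 0 };
    int fd = socket (AF_PACKET, SOCK_DGRAM, htons (ETH_P_ALL));

    if (fd < 0)
      return -1;

    setsockopt (fd, SOL_SOCKET, SO_PRIORITY, &priority, sizeof (priority));

    addr.sll_family = AF_PACKET;
    addr.sll_protocol = htons (ETH_P_ALL);
    addr.sll_ifindex = if_nametoindex (ifname);
    bind (fd, (struct sockaddr *) &addr, sizeof (addr));

    return fd;
  }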
Andre Guedes [Thu, 24 Jan 2019 00:20:27 +0000 (16:20 -0800)]
avtp: Introduce AAF depayloader element
This patch introduces the AAF depayloader element, the counterpart of
the AAF payloader. As expected, this element inputs AVTPDUs, outputs
raw audio data, and supports AAF PCM encapsulation only.
The AAF depayloader srcpad produces a fixed format that is encoded
within the AVTPDU. Once the first AVTPDU is received by the element, the
audio features e.g. sample format, rate, number of channels, are decoded
and the srcpad caps are set accordingly. Also, at this point, the
element pushes a SEGMENT event downstream defining the segment according
to the AVTP presentation time.
All AVTP depayloaders will share some common code. For that reason, this
patch introduces the GstAvtpBaseDepayload abstract class that implements
common depayloader functionalities. AAF-specific functionalities are
implemented in the derived class GstAvtpAafDepay.
Andre Guedes [Thu, 17 Jan 2019 01:16:59 +0000 (17:16 -0800)]
avtp: Introduce AAF payloader element
This patch introduces the AVTP Audio Format (AAF) payloader element from
the AVTP plugin. The element inputs raw audio data and outputs AVTP
packets (aka AVTPDUs), implementing a typical protocol payloader element
from GStreamer.
AAF is one of the available formats to transport audio data in an AVTP
system. AAF is specified in IEEE 1722-2016 section 7 and provides two
encapsulation modes: PCM and AES3. This patch implements the PCM
encapsulation mode only.
The AAF payloader working mechanism consists of building the AAF header,
prepending it to the GstBuffer received on the sink pad, and pushing the
buffer downstream. Payloader parameters such as stream ID, maximum
transit time, time uncertainty, and timestamping mode are passed via
element properties. AAF doesn't support all possible sample format and
sampling rate values so the sink pad caps template from the payloader is
a subset of audio/x-raw. Additionally, this patch implements only
"normal" timestamping mode from AAF. "Sparse" mode should be implemented
in the future.
Upcoming patches will introduce other AVTP payloader elements that will
have some common code. For that reason, this patch introduces the
GstAvtpBasePayload abstract class that implements common payloader
functionalities, and the GstAvtpAafPay class that extends the
GstAvtpBasePayload class, implementing AAF-specific functionalities.
The AAF payloader element is most likely to be used with the AVTP sink
element (to be introduced by a later patch) but it could also be used
with UDP sink element to implement AVTP over UDP as described in IEEE
1722-2016 Annex J.
This element was inspired by RTP payloader elements.
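Schematically, the prepend-and-push mechanism described above could look like
this (header size and helper names are illustrative; the real element fills
the AAF header fields from its properties):

  #include <string.h>
  #include <gst/gst.h>

  #define AAF_HEADER_SIZE 24    /* illustrative constant */

  /* Hypothetical sketch: allocate an AAF header buffer, fill it, prepend it
   * to the raw audio buffer and push the result downstream. */
  static GstFlowReturn
  payload_and_push (GstPad * srcpad, GstBuffer * audio)
  {
    GstBuffer *header = gst_buffer_new_allocate (NULL, AAF_HEADER_SIZE, NULL);
    GstMapInfo map;

    gst_buffer_map (header, &map, GST_MAP_WRITE);
    memset (map.data, 0, map.size);   /* ... fill the AAF header fields ... */
    gst_buffer_unmap (header, &map);

    /* gst_buffer_append takes ownership of both buffers and returns their
     * concatenation: header followed by the audio payload */
    return gst_pad_push (srcpad, gst_buffer_append (header, audio));
  }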