diff --git a/subprojects/gstreamer-vaapi/NEWS b/subprojects/gstreamer-vaapi/NEWS
index 1a512c1..9802493 100644
--- a/subprojects/gstreamer-vaapi/NEWS
+++ b/subprojects/gstreamer-vaapi/NEWS
@@ -1,19 +1,11 @@
-GStreamer 1.20 Release Notes
+GStreamer 1.22 Release Notes
 
-GStreamer 1.20 has not been released yet. It is scheduled for release in
-late January / early February 2022.
+GStreamer 1.22.0 was originally released on 23 January 2023.
 
-1.19.x is the unstable development version that is being developed in
-the git main branch and which will eventually result in 1.20, and
-1.19.90 is the first release candidate in that series (1.20rc1).
-
-1.20 will be backwards-compatible to the stable 1.18, 1.16, 1.14, 1.12,
-1.10, 1.8, 1.6, 1.4, 1.2 and 1.0 release series.
-
-See https://gstreamer.freedesktop.org/releases/1.20/ for the latest
+See https://gstreamer.freedesktop.org/releases/1.22/ for the latest
 version of this document.
 
-Last updated: Wednesday 26 January 2022, 01:00 UTC (log)
+Last updated: Monday 23 January 2023, 17:00 UTC (log)
 
 Introduction
 
@@ -26,1862 +18,1222 @@ fixes and other improvements.
 
 Highlights
 
-- Development in GitLab was switched to a single git repository
-  containing all the modules
-- GstPlay: new high-level playback library, replaces GstPlayer
-- WebM Alpha decoding support
-- Encoding profiles can now be tweaked with additional
-  application-specified element properties
-- Compositor: multi-threaded video conversion and mixing
-- RTP header extensions: unified support in RTP depayloader and
-  payloader base classes
-- SMPTE 2022-1 2-D Forward Error Correction support
-- Smart encoding (passthrough) support for VP8, VP9, H.265 in
-  encodebin and transcodebin
-- Runtime compatibility support for libsoup2 and libsoup3 (libsoup3
-  support experimental)
-- Video decoder subframe support
-- Video decoder automatic packet-loss, data corruption, and keyframe
-  request handling for RTP / WebRTC / RTSP
-- MP4 and Matroska muxers now support profile/level/resolution changes
-  for H264/H265 input streams (i.e.
codec data changing on the fly) -- MP4 muxing mode that initially creates a fragmented mp4 which is - converted to a regular mp4 on EOS -- Audio support for the WebKit Port for Embedded (WPE) web page source - element -- CUDA based video color space convert and rescale elements and - upload/download elements -- NVIDIA memory:NVMM support for OpenGL glupload and gldownload - elements -- Many WebRTC improvements -- The new VA-API plugin implemention fleshed out with more decoders - and new postproc elements -- AppSink API to retrieve events in addition to buffers and buffer - lists -- AppSrc gained more configuration options for the internal queue - (leakiness, limits in buffers and time, getters to read current - levels) -- Updated Rust bindings and many new Rust plugins -- Improved support for custom minimal GStreamer builds -- Support build against FFmpeg 5.0 -- Linux Stateless CODEC support gained MPEG2 and VP9 -- Windows Direct3D11/DXVA decoder gained AV1 and MPEG2 support +- AV1 video codec support improvements +- New HLS, DASH and Microsoft Smooth Streaming adaptive streaming + clients +- Qt6 support for rendering video inside a QML scene +- Minimal builds optimised for binary size, including only the + individual elements needed +- Playbin3, Decodebin3, UriDecodebin3, Parsebin enhancements and + stabilisation +- WebRTC simulcast support and support for Google Congestion Control +- WebRTC-based media server ingestion/egress (WHIP/WHEP) support +- New easy to use batteries-included WebRTC sender plugin +- Easy RTP sender timestamp reconstruction for RTP and RTSP +- ONVIF timed metadata support +- New fragmented MP4 muxer and non-fragmented MP4 muxer +- New plugins for Amazon AWS storage and audio transcription services +- New gtk4paintablesink and gtkwaylandsink renderers +- New videocolorscale element that can convert and scale in one go for + better performance +- High bit-depth video improvements +- Touchscreen event support in navigation API +- Rust plugins now shipped in macOS and Windows/MSVC binary packages +- H.264/H.265 timestamp correction elements for PTS/DTS reconstruction + before muxers +- Improved design for DMA buffer sharing and modifier handling for + hardware-accelerated video decoders/encoders/filters and + capturing/rendering on Linux +- Video4Linux2 hardware accelerated decoder improvements +- CUDA integration and Direct3D11 integration and plugin improvements +- New H.264 / AVC, H.265 / HEVC and AV1 hardware-accelerated video + encoders for AMD GPUs using the Advanced Media Framework (AMF) SDK +- applemedia: H.265 / HEVC video encoding + decoding support +- androidmedia: H.265 / HEVC video encoding support +- New “force-live” property for audiomixer, compositor, glvideomixer, + d3d11compositor etc. - Lots of new plugins, features, performance improvements and bug fixes Major new features and changes -Noteworthy new features and API +AV1 video codec support improvements -- gst_element_get_request_pad() has been deprecated in favour of the - newly-added gst_element_request_pad_simple() which does the exact - same thing but has a less confusing name that hopefully makes clear - that the function request a new pad rather than just retrieves an - already-existing request pad. +AV1 is a royalty free next-generation video codec by AOMedia and a free +alternative to H.265/HEVC. 
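To illustrate the rename described in the 1.20 notes above, here is a
minimal sketch of requesting a pad with the new call; the compositor
element and its sink_%u pad template are just an example:

```c
#include <gst/gst.h>

static GstPad *
get_mixer_input (GstElement * compositor)
{
  /* Formerly gst_element_get_request_pad(); the new name makes it
   * clearer that a brand new pad is being requested from the element's
   * "sink_%u" request pad template. */
  GstPad *pad = gst_element_request_pad_simple (compositor, "sink_%u");

  /* ... link it and set pad properties here; release it later with
   * gst_element_release_request_pad() ... */
  return pad;
}
```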
-Development in GitLab was switched to a single git repository containing all the modules +While supported in earlier versions of GStreamer already, this release +saw a lot of improvements across the board: -The GStreamer multimedia framework is a set of libraries and plugins -split into a number of distinct modules which are released independently -and which have so far been developed in separate git repositories in -freedesktop.org GitLab. +- Support for hardware encoding and decoding via VAAPI/VA, AMF, D3D11, + NVCODEC, QSV and Intel MediaSDK. Hardware codecs for AV1 are slowly + becoming available in embedded systems and desktop GPUs (AMD, Intel, + NVIDIA), and these can now be used via GStreamer. -In addition to these separate git repositories there was a gst-build -module that would use the Meson build systems’s subproject feature to -download each individual module and then build everything in one go. It -would also provide an uninstalled development environment that made it -easy to work on GStreamer and use or test versions other than the -system-installed GStreamer version. +- New AV1 RTP payloader and depayloader elements. -All of these modules have now (as of 28 September 2021) been merged into -a single git repository (“Mono repository” or “monorepo”) which should -simplify development workflows and continuous integration, especially -where changes need to be made to multiple modules at once. +- New encoder settings in the AOM reference encoder-based av1enc + element. -This mono repository merge will primarily affect GStreamer developers -and contributors and anyone who has workflows based on the GStreamer git -repositories. +- Various improvements in the AV1 parser and in the MP4/Matroska/WebM + muxers/demuxers. -The Rust bindings and Rust plugins modules have not been merged into the -mono repository at this time because they follow a different release -cycle. +- dav1d and rav1e based software decoder/encoder elements shipped as + part of the binaries. -The mono repository lives in the existing GStreamer core git repository -in GitLab in the new main branch and all future development will happen -on this branch. +- AV1 parser improvements and various bugfixes all over the place. -Modules will continue to be released as separate tarballs. +Touchscreen event support in Navigation API -For more details, please see the GStreamer mono repository FAQ. +The Navigation API supports the sending of key press events and mouse +events through a GStreamer pipeline. Typically these will be picked up +by a video sink on which these events happen and then the event is +transmitted into the pipeline so it can be handled by elements inside +the pipeline if it wasn’t handled by the application. -GstPlay: new high-level playback library replacing GstPlayer +This has traditionally been used for DVD menu support, but can also be +used to forward such inputs to source elements that render a web page +using a browser engine such as WebKit or Chromium. -- GstPlay is a new high-level playback library that replaces the older - GstPlayer API. It is basically the same API as GstPlayer but - refactored to use bus messages for application notifications instead - of GObject signals. There is still a signal adapter object for those - who prefer signals. Since the existing GstPlayer API is already in - use in various applications, it didn’t seem like a good idea to - break it entirely. Instead a new API was added, and it is expected - that this new GstPlay API will be moved to gst-plugins-base in - future. 
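As a rough sketch of the bus-based GstPlay API described in the bullet
above (function and message names as in the gstreamer-play-1.0 headers
at the time of 1.20; the file URI is a placeholder and error handling is
omitted):

```c
#include <gst/play/play.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  GstPlay *play = gst_play_new (NULL); /* NULL: let GstPlay pick a video sink */
  gst_play_set_uri (play, "file:///path/to/movie.mp4");
  gst_play_play (play);

  /* Application notifications arrive as messages on a dedicated bus
   * rather than as GObject signals; GstPlaySignalAdapter still offers
   * signals for those who prefer them. */
  GstBus *bus = gst_play_get_message_bus (play);
  for (;;) {
    GstMessage *msg = gst_bus_timed_pop (bus, GST_CLOCK_TIME_NONE);
    GstPlayMessage type;

    gst_play_message_parse_type (msg, &type);
    if (type == GST_PLAY_MESSAGE_END_OF_STREAM || type == GST_PLAY_MESSAGE_ERROR) {
      gst_message_unref (msg);
      break;
    }
    gst_message_unref (msg);
  }

  gst_object_unref (bus);
  gst_object_unref (play);
  return 0;
}
```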
+This API has now gained support for touchscreen events, and this has +been implemented in various plugins such as the GTK, Qt, XV, and x11 +video sinks as well as the wpevideosrc element. + +GStreamer CUDA integration + +- New gst-cuda library +- integration with D3D11 and NVIDIA dGPU NVMM elements +- new cudaconvertscale element + +GStreamer Direct3D11 integration + +- New gst-d3d11 public library + - gst-d3d11 library is not integrated with GStreamer documentation + system yet. Please refer to the examples +- d3d11screencapture: Add Windows Graphics Capture API based capture + mode, including Win32 application window capturing +- d3d11videosink and d3d11convert can support flip/rotation and crop + meta +- d3d11videosink: New emit-present property and present signal so that + applications can overlay an image on Direct3D11 swapchain’s + backbuffer via Direct3D/Direct2D APIs. See also C++ and Rust + examples +- d3d11compositor supports YUV blending/composing without intermediate + RGB(A) conversion to improve performance +- Direct3D11 video decoders are promoted to GST_RANK_PRIMARY or + higher, except for the MPEG2 decoder + +H.264/H.265 timestamp correction elements + +- Muxers are often picky and need proper PTS/DTS timestamps set on the + input buffers, but that can be a problem if the encoded input media + stream comes from a source that doesn’t provide proper signalling of + DTS, such as is often the case for RTP, RTSP and WebRTC streams or + Matroska container files. Theoretically parsers should be able to + fix this up, but it would probably require fairly invasive changes + in the parsers, so two new elements h264timestamper and + h265timestamper bridge the gap in the meantime and can reconstruct + missing PTS/DTS. + +Easy sender timestamp reconstruction for RTP and RTSP + +- it was always possible to reconstruct and retrieve the original RTP + sender timestamps in GStreamer, but required a fair bit of + understanding of the internal mechanisms and the right property + configuration and clock setup. + +- rtspsrc and rtpjitterbuffer gained a new + “add-reference-timestamp-meta” property that if set puts the + original absolute reconstructed sender timestamps on the output + buffers via a meta. This is particularly useful if the sender is + synced to an NTP clock or PTP clock. The original sender timestamps + are either based on the RTCP NTP times, NTP RTP header extensions + (RFC6051) or RFC7273-style clock signalling. + +Qt6 support + +- new qml6glsink element for Qt6 similar to the existing Qt5 element. + Matching source and overlay elements will hopefully follow in the + near future. + +OpenGL + Video library enhancements + +- Support for new video formats (NV12_4L4, NV12_16L32S, NV12_8L128, + NV12_10BE_8L128) and dmabuf import in more formats (Y410, Y212_LE, + Y212_BE, Y210, NV21, NV61) + +- Improved support for tiled formats with arbitrary tile dimensions, + as needed by certain hardware decoders/encoders + +- glvideomixer: New “crop-left,”crop-right, “crop-top” and + “crop-bottom” pad properties for cropping inputs + +- OpenGL support for gst_video_sample_convert(): + + - Used for video snapshotting and thumbnailing, to convert buffers + retrieved from appsinks or sink “last-sample” properties in + JPG/PNG thumbnails. + - This function can now take samples and buffers backed by GL + textures as input and will automatically plug a gldownload + element in that case. 
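A small sketch of the snapshot path mentioned above, using
gst_video_convert_sample() from the GStreamer video library on a sink's
"last-sample"; the JPEG target format and the timeout are arbitrary
choices:

```c
#include <gst/gst.h>
#include <gst/video/video.h>

/* Convert whatever the sink rendered last into a JPEG-encoded sample.
 * As of 1.22 this also works when the sample is backed by an OpenGL
 * texture: a gldownload element is plugged in automatically. */
static GstSample *
snapshot_as_jpeg (GstElement * videosink)
{
  GstSample *last = NULL, *jpeg;
  GstCaps *caps;
  GError *error = NULL;

  g_object_get (videosink, "last-sample", &last, NULL);
  if (last == NULL)
    return NULL;

  caps = gst_caps_new_empty_simple ("image/jpeg");
  jpeg = gst_video_convert_sample (last, caps, 5 * GST_SECOND, &error);
  if (error != NULL) {
    g_warning ("snapshot failed: %s", error->message);
    g_clear_error (&error);
  }

  gst_caps_unref (caps);
  gst_sample_unref (last);
  return jpeg; /* map the sample's buffer and write it out as needed */
}
```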
+
+High bit-depth support (10, 12, 16 bits per component value) improvements
+
+- compositor can now handle any supported input format and also mix
+  high-bitdepth (10-16 bit) formats (naively)
+
+- videoflip has gained support for higher bit depth formats.
+
+- vp9enc, vp9dec now support 12-bit formats and also 10-bit 4:4:4
+
+WebRTC
+
+- Allow insertion of bandwidth estimation elements e.g. for Google
+  Congestion Control (GCC) support
+
+- Initial support for sending or receiving simulcast streams
+
+- Support for asynchronous host resolution for STUN/TURN servers
+
+- GstWebRTCICE was split into base classes and implementation to make
+  it possible to plug custom ICE implementations
+
+- webrtcsink: batteries-included WebRTC sender (Rust)
+
+- whipsink: WebRTC HTTP ingest (WHIP) to a MediaServer (Rust)
+
+- whepsrc: WebRTC HTTP egress (WHEP) from a MediaServer (Rust)
+
+- Many other improvements and bug fixes
+
+New HLS, DASH and MSS adaptive streaming clients
+
+A new set of “adaptive demuxers” to support HLS, DASH and MSS adaptive
+streaming protocols has been added. They provide improved performance,
+new features and better stream compatibility compared to the previous
+elements. These new elements require a “streams-aware” pipeline such as
+playbin3, uridecodebin3 or urisourcebin.
+
+The previous elements’ design prevented implementing several use-cases
+and fixing long-standing issues. The new elements were re-designed from
+scratch to tackle those:
+
+- Scheduling: Only 3 threads are present, regardless of the number of
+  streams selected: one in charge of downloading fragments and
+  manifests, one in charge of outputting parsed data downstream, and
+  one in charge of scheduling. This improves performance, resource
+  usage and latency.
+
+- Better download control: The elements now control the scheduling and
+  download of manifests and fragments using libsoup directly, instead
+  of depending on external elements for downloading.
+
+- Stream selection: only the selected streams are downloaded. This
+  improves bandwidth usage. Switching streams is done in such a way as
+  to ensure there are no gaps, meaning the new stream will be switched
+  to only once enough data for it has been downloaded.
+
+- Internal parsing: the downloaded streams are parsed internally. This
+  allows the elements to fully respect the various specifications and
+  offer accurate buffering, seeking and playback. This is especially
+  important for HLS streams, which require parsing for proper
+  positioning of streams.
+
+- Buffering and adaptive rate switching: the new elements handle
+  buffering internally, which gives them more accurate visibility of
+  which bandwidth variant to switch to.
+
+Playbin3, Decodebin3, UriDecodebin3, Parsebin improvements
+
+The “new” playback elements introduced in 1.18 (playbin3 and its various
+components) have been refactored to allow more use-cases and improve
+performance. They are no longer considered experimental, so applications
+using the legacy playback elements (playbin and (uri)decodebin) can
+migrate to the new components to benefit from these improvements.
+
+- Gapless: The “gapless” feature allows files and streams to be
+  fetched, buffered and decoded in order to provide a “gapless”
+  output. This feature has been refactored extensively in the new
+  components:
+  - A single (uri)decodebin3 (and therefore a single set of
+    decoders) is used. This improves memory and CPU usage, since a
+    single decoder is reused when successive streams use the same
+    codec.
+ - The “next” stream to play will be pre-rolled “just-in-time” + thanks to the buffering improvements in urisourcebin (see below) + - This feature is now handled at the uridecodebin3 level. + Applications that wish to have a “gapless” stream and process it + (instead of just outputting it, for example for transcoding, + retransmission, …) can now use uridecodebin3 directly. Note that + a streamsynchronizer element is required in that case. +- Buffering improvements The urisourcebin element is in charge of + fetching and (optionally) buffering/downloading the stream. It has + been extended and improved: + - When the parse-streams property is used (by default in + uridecodebin3 and playbin3), compatible streams will be demuxed + and parsed (via parsebin) and buffering will be done on the + elementary streams. This provides a more accurate handling of + buffering. Previously buffering was done on a best-effort basis + and was mostly wrong (i.e. downloading more than needed). + - Applications can use urisourcebin with this property as a + convenient way of getting elementary streams from a given URI. + - Elements can handle buffering themselves (such as the new + adaptive demuxers) by answering the GST_QUERY_BUFFERING query. + In that case urisourcebin will not handle it. +- Stream Selection Efficient stream selection was previously only + possible within decodebin3. The downside is that this meant that + upstream elements had to provide all the streams from which to chose + from, which is inefficient. With the addition of the + GST_QUERY_SELECTABLE query, this can now be handled by elements + upstream (i.e. sources) + - Elements that can handle stream selection internally (such as + the new adaptive demuxer elements) answer that query, and handle + the stream selection events themselves. + - In this case, decodebin3 will always process all streams that + are provided to it. +- Instant URI switching This new feature allows switching URIs + “instantly” in playbin3 (and uridecodebin3) without having to change + states. This mimics switching channels on a television. + - If compatible, decoders will be re-used, providing lower + latency/cpu/memory than by switching states. + - This is enabled by setting the instant-uri property to true, + setting the URI to switch to immediately, and then disabling the + instant-uri property again afterwards. +- playbin3, decodebin3, uridecodebin3, parsebin, and urisrc are no + longer experimental + - They were originally marked as ‘technology preview’ but have + since seen extensive usage in production settings, so are + considered ready for general use now. + +Fraunhofer AAC audio encoder HE-AAC and AAC-LD profile support + +- fdkaacenc: + - Support for encoding to HE-AACv1 and HE-AACv2 profile + - Support for encoding to AAC Low Delay (LD) profile + - Advanced bitrate control options via new “rate-control”, + “vbr-preset”, “peak-bitrate”, and “afterburner” properties + +RTP rapid synchronization support in the RTP stack (RFC6051) + +RTP provides several mechanisms how streams can be synchronized relative +to each other, and how absolute sender times for RTP packets can be +obtained. One of these mechanisms is via RTCP, which has the +disadvantage that the synchronization information is only distributed +out-of-band and usually some time after the start. 
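Returning to the instant URI switching flow described in the Playbin3
section above, a minimal sketch (the property names are those given
above; the URI passed in is a placeholder):

```c
#include <gst/gst.h>

/* Switch what playbin3 is playing without any state changes, similar to
 * changing channels on a television. */
static void
switch_channel (GstElement * playbin3, const gchar * next_uri)
{
  g_object_set (playbin3, "instant-uri", TRUE, NULL);
  g_object_set (playbin3, "uri", next_uri, NULL); /* takes effect immediately */
  g_object_set (playbin3, "instant-uri", FALSE, NULL);
}
```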
+
+GStreamer’s RTP stack, specifically the rtpbin, rtpsession and
+rtpjitterbuffer elements, now also has support for retrieving and
+sending the same synchronization information in-band via RTP header
+extensions according to RFC6051 (Rapid Synchronisation of RTP Flows).
+Only 64-bit timestamps are supported currently.
+
+This provides per-packet synchronization information from the very
+beginning of a stream and allows accurate inter-stream, and (depending
+on setup) inter-device, synchronization at the receiver side.
+
+ONVIF XML Timed Metadata support
+
+The ONVIF standard implemented by various security cameras also
+specifies a format for timed metadata that is transmitted together with
+the audio/video streams, usually over RTSP.
+
+Support for this timed metadata is now implemented in the MP4 demuxer
+as well as in the new fragmented MP4 muxer and the new non-fragmented
+MP4 muxer from the GStreamer Rust plugins. Additionally, the new onvif
+plugin (also part of the GStreamer Rust plugins) provides general
+elements for handling the metadata and e.g. overlaying certain parts of
+it over a video stream.
+
+As part of this, support for absolute UTC times was also implemented in
+the corresponding elements, according to the requirements of the ONVIF
+standards.
+
+MP3 gapless playback support
+
+While MP3 can probably be considered a legacy format at this point, a
+new feature was still added in this release.
+
+When playing back plain MP3 files, i.e. outside a container format,
+switches between files can now be completely gapless if the required
+metadata is provided inside the file. There is no standardized metadata
+for this, but the LAME MP3 encoder writes metadata that can now be
+parsed by the mpegaudioparse element and forwarded to decoders to
+ensure removal of padding samples at the front and end of MP3 files.
+
+“force-live” property for audio + video aggregators
+
+This is a quality-of-life fix for playout and streaming applications
+where it is common to have audio and video mixer elements that should
+operate in live mode from the start and produce output continuously.
+
+Often one would start a pipeline without any inputs hooked up to these
+mixers in the beginning, and up until now there was no way to easily
+force these elements into live mode from the start. One would have to
+add an initial live video or audio test source as dummy input to achieve
+this.
+
+The new “force-live” property makes these audio and video aggregators
+start in live mode without the need for any dummy inputs, which is
+useful for scenarios where inputs are only added after starting the
+pipeline.
+
+This new property should usually be used in connection with the
+“min-upstream-latency” property, i.e. you should always set a non-zero
+minimum upstream latency then.
+
+This is now supported in all GstAudioAggregator and GstVideoAggregator
+subclasses such as audiomixer, audiointerleave, compositor,
+glvideomixer, d3d11compositor, etc.
+
+New elements and plugins
+
+- new cudaconvertscale element that can convert and scale in one pass
+
+- new gtkwaylandsink element: based on gtksink but, similar to
+  waylandsink, it uses Wayland APIs directly instead of rendering with
+  Gtk/Cairo primitives. This approach is only compatible with Gtk3, so
+  like gtksink this element only supports Gtk3.
+
+- new h264timestamper and h265timestamper elements to reconstruct
+  missing pts/dts from inputs that might not provide them such as
+  e.g.
RTP/RTSP/WebRTC inputs (see above) + +- mfaacdec, mfmp3dec: Windows MediaFoundation AAC and MP3 decoders + +- new msdkav1enc AV1 video encoder element + +- new nvcudah264enc, nvcudah265enc, nvd3d11h264enc, and nvd3d11h265enc + NVIDIA GPU encoder elements to support zero-copy encoding, via CUDA + and Direct3D11 APIs, respectively + +- new nvautogpuh264enc and nvautogpuh265enc NVIDIA GPU encoder + elements: The auto GPU elements will automatically select a target + GPU instance in case multiple NVIDIA desktop GPUs are present, also + taking into account the input memory. On Windows CUDA or Direct3D11 + mode will be determined by the elements automatically as well. Those + new elements are useful if target GPU and/or API mode (either CUDA + or Direct3D11 in case of Windows) is undeterminable from the encoder + point of view at the time when pipeline is configured, and therefore + lazy target GPU and/or API selection are required in order to avoid + unnecessary memory copy operations. + +- new nvav1dec AV1 NVIDIA desktop GPU decoder element + +- new qml6glsink element to render video with Qt6 + +- qsv: New Intel OneVPL/MediaSDK (a.k.a Intel Quick Sync) based + decoder and encoder elements, with gst-d3d11 (on Windows) and gst-va + (on Linux) integration + + - Support multi-GPU environment, for example, concurrent video + encoding using Intel iGPU and dGPU in a single pipeline + - H.264 / H.265 / VP9 and JPEG decoders + - H.264 / H.265 / VP9 / AV1 / JPEG encoders with dynamic encoding + bitrate update + - New plugin does not require external SDK for building on Windows -- The existing GstPlayer API is scheduled for deprecation and will be - removed at some point in the future (e.g. in GStreamer 1.24), so - application developers are urged to migrate to the new GstPlay API - at their earliest convenience. - -WebM alpha decoding - -- Implement WebM alpha decoding (VP8/VP9 with alpha), which required - support and additions in various places. This is supported both with - software decoders and hardware-accelerated decoders. - -- VP8/VP9 don’t support alpha components natively in the codec, so the - way this is implemented in WebM is by encoding the alpha plane with - transparency data as a separate VP8/VP9 stream. Inside the WebM - container (a variant of Matroska) this is coded as a single video - track with the “normal” VP8/VP9 video data making up the main video - data and each frame of video having an encoded alpha frame attached - to it as extra data ("BlockAdditional"). - -- matroskademux has been extended extract this per-frame alpha side - data and attach it in form of a GstVideoCodecAlphaMeta to the - regular video buffers. Note that this new meta is specific to this - VP8/VP9 alpha support and can’t be used to just add alpha support to - other codecs that don’t support it. Lastly, matroskademux also - advertises the fact that the streams contain alpha in the caps. - -- The new codecalpha plugin contains various bits of infrastructure to - support autoplugging and debugging: - - - codecalphademux splits out the alpha stream from the metas on - the regular VP8/VP9 buffers - - alphacombine takes two decoded raw video streams (one alpha, one - the regular video) and combines it into a video stream with - alpha - - vp8alphadecodebin + vp9alphadecodebin are wrapper bins that use - the regular vp8dec and vp9dec software decoders to decode - regular and alpha streams and combine them again. 
To decodebin - these look like regular decoders which ju - - The V4L2 CODEC plugin has stateless VP8/VP9 decoders that can - decode both alpha and non-alpha stream with a single decoder - instance - -- A new AV12 video format was added which is basically NV12 with an - alpha plane, which is more convenient for many hardware-accelerated - decoders. - -- Watch Nicolas Dufresne’s LCA 2022 talk “Bringing WebM Alpha support - to GStreamer” for all the details and a demo. - -RTP Header Extensions Base Class and Automatic Header Extension Handling in RTP Payloaders and Depayloaders - -- RTP Header Extensions are specified in RFC 5285 and provide a way to - add small pieces of data to RTP packets in between the RTP header - and the RTP payload. This is often used for per-frame metadata, - extended timestamps or other application-specific extra data. There - are several commonly-used extensions specified in various RFCs, but - senders are free to put any kind of data in there, as long as sender - and receiver both know what that data is. Receivers that don’t know - about the header extensions will just skip the extra data without - ever looking at it. These header extensions can often be combined - with any kind of payload format, so may need to be supported by many - RTP payloader and depayloader elements. - -- Inserting and extracting RTP header extension data has so far been a - bit inconvenient in GStreamer: There are functions to add and - retrieve RTP header extension data from RTP packets, but nothing - works automatically, even for common extensions. People would have - to do the insertion/extraction either in custom elements - before/after the RTP payloader/depayloader, or inside pad probes, - which isn’t very nice. - -- This release adds various pieces of new infrastructure for generic - RTP header extension handling, as well as some implementations for - common extensions: - - - GstRTPHeaderExtension is a new helper base class for reading and - writing RTP header extensions. Nominally this subclasses - GstElement, but only so these extensions are stored in the - registry where they can be looked up by URI or name. They don’t - have pads and don’t get added to the pipeline graph as an - element. - - - "add-extension" and "clear-extension" action signals on RTP - payloaders and depayloaders for manual extension management - - - The "request-extension" signal will be emitted if an extension - is encountered that requires explicit mapping by the application - - - new "auto-header-extension" property on RTP payloaders and - depayloaders for automatic handling of known header extensions. - This is enabled by default. The extensions must be signalled via - caps / SDP. - - - RTP header extension implementations: - - - rtphdrextclientaudiolevel: Client-to-Mixer Audio Level - Indication (RFC 6464) (also see below) - - rtphdrextcolorspace: Color Space extension, extends RTP - packets with color space and high dynamic range (HDR) - information - - rtphdrexttwcc: Transport Wide Congestion Control support - -- gst_rtp_buffer_remove_extension_data() is a new helper function to - remove an RTP header extension from an RTP buffer - -- The existing gst_rtp_buffer_set_extension_data() now also supports - shrinking the extension data in size - -AppSink and AppSrc improvements - -- appsink: new API to pull events out of appsink in addition to - buffers and buffer lists. 
- - There was previously no way for users to receive incoming events - from appsink properly serialised with the data flow, even if they - are serialised events. The reason for that is that the only way to - intercept events was via a pad probe on the appsink sink pad, but - there is also internal queuing inside of appsink, so it’s difficult - to ascertain the right order of everything in all cases. - - There is now a new "new-serialized-event" signal which will be - emitted when there’s a new event pending (just like the existing - "new-sample" signal). The "emit-signals" property must be set to - TRUE in order to activate this (but it’s also fine to just pull from - the application thread without using the signals). - - gst_app_sink_pull_object() and gst_app_sink_try_pull_object() can be - used to pull out either an event or a new sample carrying a buffer - or buffer list, whatever is next in the queue. - - EOS events will be filtered and will not be returned. EOS handling - can be done the ususal way, same as with _pull_sample(). - -- appsrc: allow configuration of internal queue limits in time and - buffers and add leaky mode. - - There is internal queuing inside appsrc so the application thread - can push data into the element which will then be picked up by the - source element’s streaming thread and pushed into the pipeline from - that streaming thread. This queue is unlimited by default and until - now it was only possible to set a maximum size limit in bytes. When - that byte limit is reached, the pushing thread (application thread) - would be blocked until more space becomes available. - - A limit in bytes is not particularly useful for many use cases, so - now it is possible to also configure limits in time and buffers - using the new "max-time" and "max-buffers" properties. Of course - there are also matching new read-only"current-level-buffers" and - "current-level-time properties" properties to query the current fill - level of the internal queue in time and buffers. - - And as if that wasn’t enough the internal queue can also be - configured as leaky using the new "leaky-type" property. That way - when the queue is full the application thread won’t be blocked when - it tries to push in more data, but instead either the new buffer - will be dropped or the oldest data in the queue will be dropped. - -Better string serialization of nested GstCaps and GstStructures - -- New string serialisation format for structs and caps that can handle - nested structs and caps properly by using brackets to delimit nested - items (e.g. some-struct, some-field=[nested-struct, nested=true]). - Unlike the default format the new variant can also support more than - one level of nesting. For backwards-compatibility reasons the old - format is still output by default when serialising caps and structs - using the existing API. The new functions gst_caps_serialize() and - gst_structure_serialize() can be used to output strings in the new - format. - -Convenience API for custom GstMetas - -- New convenience API to register and create custom GstMetas: - gst_meta_register_custom() and gst_buffer_add_custom_meta(). Such - custom meta is backed by a GstStructure and does not require that - users of the API expose their GstMeta implementation as public API - for other components to make use of it. In addition, it provides a - simpler interface by ignoring the impl vs. api distinction that the - regular API exposes. 
This new API is meant to be the meta - counterpart to custom events and messages, and to be more convenient - than the lower-level API when the absolute best performance isn’t a - requirement. The reason it’s less performant than a “proper” meta is - that a proper meta is just a C struct in the end whereas this goes - through the GstStructure API which has a bit more overhead, which - for most scenarios is negligible however. This new API is useful for - experimentation or proprietary metas, but also has some limitations: - it can only be used if there’s a single producer of these metas; - it’s not allowed to register the same custom meta multiple times or - from multiple places. - -Additional Element Properties on Encoding Profiles - -- GstEncodingProfile: The new "element-properties" and - gst_encoding_profile_set_element_properties() API allows - applications to set additional element properties on encoding - profiles to configure muxers and encoders. So far the encoding - profile template was the only place where this could be specified, - but often what applications want to do is take a ready-made encoding - profile shipped by GStreamer or the application and then tweak the - settings on top of that, which is now possible with this API. Since - applications can’t always know in advance what encoder element will - be used in the end, it’s even possible to specify properties on a - per-element basis. - - Encoding Profiles are used in the encodebin, transcodebin and - camerabin elements and APIs to configure output formats (containers - and elementary streams). - -Audio Level Indication Meta for RFC 6464 - -- New GstAudioLevelMeta containing Audio Level Indication as per RFC - 6464 - -- The level element has been updated to add GstAudioLevelMeta on - buffers if the "audio-level-meta" property is set to TRUE. This can - then in turn be picked up by RTP payloaders to signal the audio - level to receivers through RTP header extensions (see above). - -- New Client-to-Mixer Audio Level Indication (RFC6464) RTP Header - Extension which should be automatically created and used by RTP - payloaders and depayloaders if their "auto-header-extension" - property is enabled and if the extension is part of the RTP caps. - -Automatic packet loss, data corruption and keyframe request handling for video decoders - -- The GstVideoDecoder base class has gained various new APIs to - automatically handle packet loss and data corruption better by - default, especially in RTP, RTSP and WebRTC streaming scenarios, and - to give subclasses more control about how they want to handle - missing data: - - - Video decoder subclasses can mark output frames as corrupted via - the new GST_VIDEO_CODEC_FRAME_FLAG_CORRUPTED flag - - - A new "discard-corrupted-frames" property allows applications to - configure decoders so that corrupted frames are directly - discarded instead of being forwarded inside the pipeline. This - is a replacement for the "output-corrupt" property of the FFmpeg - decoders. - - - RTP depayloaders can now signal to decoders that data is missing - when sending GAP events for lost packets. GAP events can be sent - for various reason in a GStreamer pipeline. Often they are just - used to let downstream elements know that there isn’t a buffer - available at the moment, so downstream elements can move on - instead of waiting for one. They are also sent by RTP - depayloaders in the case that packets are missing, however, and - so far a decoder was not able to differentiate the two cases. 
- This has been remedied now: GAP events can be decorated with - gst_event_set_gap_flags() and GST_GAP_FLAG_MISSING_DATA to let - decoders now what happened, and decoders can then use that in - some cases to handle missing data better. - - - The GstVideoDecoder::handle_missing_data vfunc was added to - inform subclasses about packet loss or missing data and let them - handle it in their own way if they like. - - - gst_video_decoder_set_needs_sync_point() lets subclasses signal - that they need the stream to start with a sync point. If - enabled, the base class will discard all non-sync point frames - in the beginning and after a flush and does not pass them to the - subclass. Furthermore, if the first frame is not a sync point, - the base class will try and request a sync frame from upstream - by sending a force-key-unit event (see next items). - - - New "automatic-request-sync-points" and - "automatic-request-sync-point-flags" properties to automatically - request sync points when needed, e.g. on packet loss or if the - first frame is not a keyframe. Applications may want to enable - this on decoders operating in e.g. RTP/WebRTC/RTSP receiver - pipelines. - - - The new "min-force-key-unit-interval" property can be used to - ensure there’s a minimal interval between keyframe requests to - upstream (and/or the sender) and we’re not flooding the sender - with key unit requests. - - - gst_video_decoder_request_sync_point() allows subclasses to - request a new sync point (e.g. if they choose to do their own - missing data handling). This will still honour the - "min-force-key-unit-interval" property if set. - -Improved support for custom minimal GStreamer builds - -- Element registration and registration of other plugin features - inside plugin init functions has been improved in order to - facilitate minimal custom GStreamer builds. - -- A number of new macros have been added to declare and create - per-element and per-pluginfeature register functions in all plugins, - and then call those from the per-plugin plugin_init functions: - - - GST_ELEMENT_REGISTER_DEFINE, - GST_DEVICE_PROVIDER_REGISTER_DEFINE, - GST_DYNAMIC_TYPE_REGISTER_DEFINE, GST_TYPE_FIND_REGISTER_DEFINE - for the actual registration call with GStreamer - - GST_ELEMENT_REGISTER, GST_DEVICE_PROVIDER_REGISTER, - GST_DYNAMIC_TYPE_REGISTER, GST_PLUGIN_STATIC_REGISTER, - GST_TYPE_FIND_REGISTER to call the registration function defined - by the REGISTER_DEFINE macro - - GST_ELEMENT_REGISTER_DECLARE, - GST_DEVICE_PROVIDER_REGISTER_DECLARE, - GST_DYNAMIC_TYPE_REGISTER_DECLARE, - GST_TYPE_FIND_REGISTER_DECLARE to declare the registration - function defined by the REGISTER_DEFINE macro - - and various variants for advanced use cases. - -- This means that applications can call the per-element and - per-pluginfeature registration functions for only the elements they - need instead of registering plugins as a whole with all kinds of - elements that may not be required (e.g. encoder and decoder instead - of just decoder). In case of static linking all unused functions and - their dependencies would be removed in this case by the linker, - which helps minimise binary size for custom builds. - -- gst_init() will automatically call a gst_init_static_plugins() - function if one exists. - -- See the GStreamer static build documentation and Stéphane’s blog - post Generate a minimal GStreamer build, tailored to your needs for - more details. - -New elements - -- New aesdec and aesenc elements for AES encryption and decryption in - a custom format. 
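Tying back to the custom minimal builds described above, a sketch of how
an application linked against a static GStreamer build might register
only the elements it needs; the two element names are merely examples:

```c
#include <gst/gst.h>

/* Declarations matching the GST_ELEMENT_REGISTER_DEFINE calls inside
 * the (statically linked) plugins that provide these elements. */
GST_ELEMENT_REGISTER_DECLARE (videotestsrc);
GST_ELEMENT_REGISTER_DECLARE (autovideosink);

int
main (int argc, char **argv)
{
  GstElement *pipeline;

  gst_init (&argc, &argv);

  /* Register only the two elements this application actually uses; the
   * linker can then drop every other element and its dependencies. */
  GST_ELEMENT_REGISTER (videotestsrc, NULL);
  GST_ELEMENT_REGISTER (autovideosink, NULL);

  pipeline = gst_parse_launch ("videotestsrc num-buffers=100 ! autovideosink", NULL);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* ... run a main loop, wait for EOS, clean up ... */
  return 0;
}
```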
- -- New encodebin2 element with dynamic/sometimes source pads in order - to support the option of doing the muxing outside of encodebin, - e.g. in combination with a splitmuxsink. - -- New fakeaudiosink and videocodectestsink elements for testing and - debugging (see below for more details) - -- rtpisacpay, rtpisacdepay: new RTP payloader and depayloader for iSAC - audio codec - -- rtpst2022-1-fecdec, rtpst2022-1-fecenc: new elements providing SMPTE - 2022-1 2-D Forward Error Correction. More details in Mathieu’s blog - post. - -- isac: new plugin wrapping the Internet Speech Audio Codec reference - encoder and decoder from the WebRTC project. - -- asio: plugin for Steinberg ASIO (Audio Streaming Input/Output) API - -- gssrc, gssink: add source and sink for Google Cloud Storage - -- onnx: new plugin to apply ONNX neural network models to video - -- openaptx: aptX and aptX-HD codecs using libopenaptx (v0.2.0) - -- qroverlay, debugqroverlay: new elements that allows overlaying data - on top of video in form of a QR code - -- cvtracker: new OpenCV-based tracker element - -- av1parse, vp9parse: new parsers for AV1 and VP9 video - -- va: work on the new VA-API plugin implementation for - hardware-accelerated video decoding and encoding has continued at - pace, with various new decoders and filters having joined the - initial vah264dec: - - - vah265dec: VA-API H.265 decoder - - vavp8dec: VA-API VP8 decoder - - vavp9dec: VA-API VP9 decoder - - vaav1dec: VA-API AV1 decoder - - vampeg2dec: VA-API MPEG-2 decoder - - vadeinterlace: : VA-API deinterlace filter - - vapostproc: : VA-API postproc filter (color conversion, - resizing, cropping, color balance, video rotation, skin tone - enhancement, denoise, sharpen) - - See Víctor’s blog post “GstVA in GStreamer 1.20” for more details - and what’s coming up next. - -- vaapiav1dec: new AV1 decoder element (in gstreamer-vaapi) - -- msdkav1dec: hardware-accelerated AV1 decoder using the Intel Media - SDK / oneVPL - -- nvcodec plugin for NVIDIA NVCODEC API for hardware-accelerated video - encoding and decoding: - - - cudaconvert, cudascale: new CUDA based video color space convert - and rescale elements - - cudaupload, cudadownload: new helper elements for memory - transfer between CUDA and system memory spaces - - nvvp8sldec, nvvp9sldec: new GstCodecs-based VP8/VP9 decoders - -- Various new hardware-accelerated elements for Windows: - - - d3d11screencapturesrc: new desktop capture element, including a - GstDeviceProvider implementation to enumerate/select target - monitors for capture. 
- - d3d11av1dec and d3d11mpeg2dec: AV1 and MPEG-2 decoders - - d3d11deinterlace: deinterlacing filter - - d3d11compositor: video composing element - - see Windows section below for more details - -- new Rust plugins: - - - audiornnoise: Removes noise from an audio stream - - awstranscribeparse: Parses AWS audio transcripts into timed text - buffers - - ccdetect: Detects if valid closed captions are present in a - closed captions stream - - cea608tojson: Converts CEA-608 Closed Captions to a JSON - representation - - cmafmux: CMAF fragmented MP4 muxer - - dashmp4mux: DASH fragmented MP4 muxer - - isofmp4mux: ISO fragmented MP4 muxer - - ebur128level: EBU R128 Loudness Level Measurement - - ffv1dec: FFV1 video decoder - - gtk4paintablesink: GTK4 video sink, which provides a - GdkPaintable that can be rendered in various widgets - - hlssink3: HTTP Live Streaming sink - - hrtfrender: Head-Related Transfer Function (HRTF) renderer - - hsvdetector: HSV colorspace detector - - hsvfilter: HSV colorspace filter - - jsongstenc: Wraps buffers containing any valid top-level JSON - structures into higher level JSON objects, and outputs those as - ndjson - - jsongstparse: Parses ndjson as output by jsongstenc - - jsontovtt: converts JSON to WebVTT subtitles - - regex: Applies regular expression operations on text - - roundedcorners: Adds rounded corners to video - - spotifyaudiosrc: Spotify source - - textahead: Display upcoming text buffers ahead (e.g. for - Karaoke) - - transcriberbin: passthrough bin that transcribes raw audio to - closed captions using awstranscriber and puts the captions as - metas onto the video - - tttojson: Converts timed text to a JSON representation - - uriplaylistbin: Playlist source bin - - webpdec-rs: WebP image decoder with animation support - -- New plugin codecalpha with elements to assist with WebM Alpha - decoding - - - codecalphademux: Split stream with GstVideoCodecAlphaMeta into - two streams - - alphacombine: Combine two raw video stream (I420 or NV12) as one - stream with alpha channel (A420 or AV12) - - vp8alphadecodebin: A bin to handle software decoding of VP8 with - alpha - - vp9alphadecodebin: A bin to handle software decoding of VP9 with - alpha - -- New hardware accelerated elements for Linux: - - - v4l2slmpeg2dec: Support for Linux Stateless MPEG2 decoders - - v4l2slvp9dec: Support for Linux Stateless VP9 decoders - - v4l2slvp8alphadecodebin: Support HW accelerated VP8 with alpha - layer decoding - - v4l2slvp9alphadecodebin: Support HW accelerated VP9 with alpha - layer decoding +- vulkanoverlaycompositor: new vulkan overlay compositor element to + overlay upstream GstVideoOverlayCompositonMeta onto the video + stream. + +- vulkanshaderspv: performs operations with SPIRV shaders in Vulkan + +- win32ipcvideosink, win32ipcvideosrc: new shared memory videosrc/sink + elements for Windows + +- wicjpegdec, wicpngdec: Windows Imaging Component (WIC) based JPEG + and PNG decoder elements. + +- Many exciting new Rust elements, see Rust section below New element features and additions -- assrender: handle more font mime types; better interaction with - matroskademux for embedded fonts - -- audiobuffersplit: Add support for specifying output buffer size in - bytes (not just duration) - -- audiolatency: new "samplesperbuffer" property so users can configure - the number of samples per buffer. The default value is 240 samples - which is equivalent to 5ms latency with a sample rate of 48000, - which might be larger than actual buffer size of audio capture - device. 
- -- audiomixer, audiointerleave, GstAudioAggregator: now keep a count of - samples that are dropped or processed as statistic and can be made - to post QoS messages on the bus whenever samples are dropped by - setting the "qos-messages" property on input pads. - -- audiomixer, compositor: improved handling of new inputs added at - runtime. New API was added to the GstAggregator base class to allow - subclasses to opt into an aggregation mode where inactive pads are - ignored when processing input buffers - (gst_aggregator_set_ignore_inactive_pads(), - gst_aggregator_pad_is_inactive()). An “inactive pad” in this context - is a pad which, in live mode, hasn’t yet received a first buffer, - but has been waited on at least once. What would happen usually in - this case is that the aggregator would wait for data on this pad - every time, up to the maximum configured latency. This would - inadvertently push mixer elements in live mode to the configured - latency envelope and delay processing when new inputs are added at - runtime until these inputs have actually produced data. This is - usually undesirable. With this new API, new inputs can be added - (requested) and configured and they won’t delay the data processing. - Applications can opt into this new behaviour by setting the - "ignore-inactive-pads" property on compositor, audiomixer or other - GstAudioAggregator-based elements. - -- cccombiner: implement “scheduling” of captions. So far cccombiner’s - behaviour was essentially that of a funnel: it strictly looked at - input timestamps to associate together video and caption buffers. - Now it will try to smoothly schedule caption buffers in order to - have exactly one per output video buffer. This might involve - rewriting input captions, for example when the input is CDP then - sequence counters are rewritten, time codes are dropped and - potentially re-injected if the input video frame had a time code - meta. This can also lead to the input drifting from synchronization, - when there isn’t enough padding in the input stream to catch up. In - that case the element will start dropping old caption buffers once - the number of buffers in its internal queue reaches a certain limit - (configurable via the "max-scheduled" property). The new original - funnel-like behaviour can be restored by setting the "scheduling" - property to FALSE. - -- ccconverter: new "cdp-mode" property to specify which sections to - include in CDP packets (timecode, CC data, service info). Various - software, including ffmpeg’s Decklink support, fails parsing CDP - packets that contain anything but CC data in the CDP packets. - -- clocksync: new "sync-to-first" property for automatic timestamp - offset setup: if set clocksync will set up the "ts-offset" value - based on the first buffer and the pipeline’s running time when the - first buffer arrived. The newly configured "ts-offset" in this case - would be the value that allows outputting the first buffer without - waiting on the clock. This is useful for example to feed a non-live - input into an already-running pipeline. - -- compositor: - - - multi-threaded input conversion and compositing. Set the - "max-threads" property to activate this. - - new "sizing-policy" property to support display aspect ratio - (DAR)-aware scaling. By default the image is scaled to fill the - configured destination rectangle without padding and without - keeping the aspect ratio. 
With sizing-policy=keep-aspect-ratio - the input image is scaled to fit the destination rectangle - specified by GstCompositorPad:{xpos, ypos, width, height} - properties preserving the aspect ratio. As a result, the image - will be centered in the destination rectangle with padding if - necessary. - - new "zero-size-is-unscaled" property on input pads. By default - pad width=0 or pad height=0 mean that the stream should not be - scaled in that dimension. But if the "zero-size-is-unscaled" - property is set to FALSE a width or height of 0 is instead - interpreted to mean that the input image on that pad should not - be composited, which is useful when creating animations where an - input image is made smaller and smaller until it disappears. - - improved handling of new inputs at runtime via - "ignore-inactive-pads"property (see above for details) - - allow output format with alpha even if none of the inputs have - alpha (also glvideomixer and other GstVideoAggregator - subclasses) - -- dashsink: add h265 codec support and signals for allowing custom - playlist/fragment output - -- decodebin3: - - - improved decoder selection, especially for hardware decoders - - make input activation “atomic” when adding inputs dynamically - - better interleave handling: take into account decoder latency - for interleave size - -- decklink: - - - Updated DeckLink SDK to 11.2 to support DeckLink 8K Pro - - decklinkvideosrc: - - More accurate and stable capture timestamps: use the - hardware reference clock time when the frame was finished - being captured instead of a clock time much further down the - road. - - Automatically detect widescreen vs. normal NTSC/PAL - -- encodebin: - - - add “smart encoding” support for H.265, VP8 and VP9 (i.e. only - re-encode where needed and otherwise pass through encoded video - as-is). - - H264/H265 smart encoding improvements: respect user-specified - stream-format, but if not specified default to avc3/hvc1 with - in-band SPS/PPS/VPS signalling for more flexibility. - - new encodebin2 element with dynamic/sometimes source pads in - order to support the option of doing the muxing outside of - encodebin, e.g. in combination with splitmuxsink. - - add APIs to set element properties on encoding profiles (see - below) - -- errorignore: new "ignore-eos" property to also ignore FLOW_EOS from - downstream elements - -- giosrc: add support for growing source files: applications can - specify that the underlying file being read is growing by setting - the "is-growing" property. If set, the source won’t EOS when it - reaches the end of the file, but will instead start monitoring it - and will start reading data again whenever a change is detected. The - new "waiting-data" and "done-waiting-data" signals keep the - application informed about the current state. - -- gtksink, gtkglsink: - - - scroll event support: forwarded as navigation events into the - pipeline - - "video-aspect-ratio-override" property to force a specific - aspect ratio - - "rotate-method" property and support automatic rotation based on - image tags - -- identity: new "stats" property allows applications to retrieve the - number of bytes and buffers that have passed through so far. 
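A quick sketch of reading the new identity "stats" property mentioned
above; since the exact field layout isn't spelled out here, the returned
structure is simply dumped as a string:

```c
#include <gst/gst.h>

/* Print the counters identity has accumulated so far. "stats" is a
 * read-only GstStructure property holding the byte/buffer counts. */
static void
print_identity_stats (GstElement * identity)
{
  GstStructure *stats = NULL;
  gchar *str;

  g_object_get (identity, "stats", &stats, NULL);
  if (stats == NULL)
    return;

  str = gst_structure_to_string (stats);
  g_print ("identity stats: %s\n", str);

  g_free (str);
  gst_structure_free (stats);
}
```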
- -- interlace: add support for more formats, esp 10-bit, 12-bit and - 16-bit ones - -- jack: new "low-latency" property for automatic latency-optimized - setting and "port-names" property to select ports explicitly - -- jpegdec: support output conversion to RGB using libjpeg-turbo (for - certain input files) - -- line21dec: - - - "mode" property to control whether and how detected closed - captions should be inserted in the list of existing close - caption metas on the input frame (if any): add, drop, or - replace. - - "ntsc-only" property to only look for captions if video has NTSC - resolution - -- line21enc: new "remove-caption-meta" to remove metas from output - buffers after encoding the captions into the video data; support for - CDP closed captions - -- matroskademux, matroskamux: Add support for ffv1, a lossless - intra-frame video coding format. - -- matroskamux: accept in-band SPS/PPS/VPS for H264 and H265 - (i.e. stream-format avc3 and hev1) which allows on-the-fly - profile/level/resolution changes. - -- matroskamux: new "cluster-timestamp-offset" property, useful for use - cases where the container timestamps should map to some absolute - wall clock time, for example. - -- rtpsrc: add "caps" property to allow explicit setting of the caps - where needed - -- mpegts: support SCTE-35 passthrough via new "send-scte35-events" - property on MPEG-TS demuxer tsdemux. When enabled, SCTE 35 sections - (eg ad placement opportunities) are forwarded as events donwstream - where they can be picked up again by mpegtsmux. This required a - semantic change in the SCTE-35 section API: timestamps are now in - running time instead of muxer pts. - -- tsdemux: Handle PCR-less MPEG-TS streams; more robust timestamp - handling in certain corner cases and for poorly muxed streams. - -- mpegtsmux: - - - More conformance improvements to make MPEG-TS analyzers happy: - - PCR timing accuracy: Improvements to the way mpegtsmux - outputs PCR observations in CBR mode, so that a PCR - observation is always inserted when needed, so that we never - miss the configured pcr-interval, as that triggers various - MPEG-TS analyser errors. - - Improved PCR/SI scheduling - - Don’t write PCR until PAT/PMT are output to make sure streams - start cleanly with a PAT/PMT. - - Allow overriding the automatic PMT PID selection via - application-supplied PMT_%d fields in the prog-map - structure/property. - -- mp4mux: - - - new "first-moov-then-finalise" mode for fragmented output where - the output will start with a self-contained moov atom for the - first fragment, and then produce regular fragments. Then at the - end when the file is finalised, the initial moov is invalidated - and a new moov is written covering the entire file. This way the - file is a “fragmented mp4” file while it is still being written - out, and remains playable at all times, but at the end it is - turned into a regular mp4 file (with former fragment headers - remaining as unused junk data in the file). - - support H.264 avc3 and H.265 hvc1 stream formats as input where - the codec data is signalled in-band inside the bitstream instead - of caps/file headers. - - support profile/level/resolution changes for H264/H265 input - streams (i.e. codec data changing on the fly). Each codec_data - is put into its own SampleTableEntry inside the stsd, unless the - input is in avc3 stream format in which case it’s written - in-band an not in the headers. 
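As a sketch of the PMT PID override mentioned in the mpegtsmux notes
above; the pad name, program number and PID below are made-up values:

```c
#include <gst/gst.h>

/* Put the stream feeding mpegtsmux pad "sink_300" into program 10 and
 * request PID 200 for that program's PMT via the PMT_%d convention. */
static void
configure_prog_map (GstElement * mpegtsmux)
{
  GstStructure *pm = gst_structure_new ("prog-map",
      "sink_300", G_TYPE_INT, 10,
      "PMT_10", G_TYPE_INT, 200,
      NULL);

  g_object_set (mpegtsmux, "prog-map", pm, NULL);
  gst_structure_free (pm);
}
```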
- -- multifilesink: new ""min-keyframe-distance"" property to make - minimum distance between keyframes in next-file=key-frame mode - configurable instead of hard-coding it to 10 seconds. - -- mxfdemux has seen a big refactoring to support non-frame wrappings - and more accurate timestamp/seek handling for some formats - -- msdk plugin for hardware-accelerated video encoding and decoding - using the Intel Media SDK: - - - oneVPL support (Intel oneAPI Video Processing Library) - - AV1 decoding support - - H.264 decoder now supports constrained-high and progressive-high - profiles - - H.264 encoder: - - more configuration options (properties): - "intra-refresh-type", "min-qp" , "max-qp", "p-pyramid", - "dblk-idc" - - H.265 encoder: - - can output main-still-picture profile - - now inserts HDR SEIs (mastering display colour volume and - content light level) - - more configuration options (properties): - "intra-refresh-type", "min-qp" , "max-qp", "p-pyramid", - "b-pyramid", "dblk-idc", "transform-skip" - - support for RGB 10bit format - - External bitrate control in encoders - - Video post proc element msdkvpp gained support for 12-bit pixel - formats P012_LE, Y212_LE and Y412_LE - -- nvh264sldec: interlaced stream support - -- openh264enc: support main, high, constrained-high and - progressive-high profiles - -- openjpeg: support for multithreaded decoding and encoding - -- rtspsrc: now supports IPv6 also for tunneled mode (RTSP-over-HTTP); - new "ignore-x-server-reply" property to ignore the - x-server-ip-address server header reply in case of HTTP tunneling, - as it is often broken. - -- souphttpsrc: Runtime compatibility support for libsoup2 and - libsoup3. libsoup3 is the latest major version of libsoup, but - libsoup2 and libsoup3 can’t co-exist in the same process because - there is no namespacing or versioning for GObject types. As a - result, it would be awkward if the GStreamer souphttpsrc plugin - linked to a specific version of libsoup, because it would only work - with applications that use the same version of libsoup. To make this - work, the soup plugin now tries to determine the libsoup version - used by the application (and its other dependencies) at runtime on - systems where GStreamer is linked dynamically. libsoup3 support is - still considered somewhat experimental at this point. - -- srtsrc, srtsink: add signals for the application to accept/reject - incoming connections - -- timeoverlay: new elapsed-running-time time mode which shows the - running time since the first running time (and each flush-stop). - -- udpsrc: new timestamping mode to retrieve packet receive timestamps - from the kernel via socket control messages (SO_TIMESTAMPNS) on - supported platforms - -- uritranscodebin: new setup-source and element-setup signals for - applications to configure elements used - -- v4l2codecs plugin gained support for 4x4 and 32x32 tile formats - enabling some platforms or direct renders. Important memory usage - improvement. - -- v4l2slh264dec now implements the final Linux uAPI as shipped on - Linux 5.11 and later. - -- valve: add "drop-mode" property and provide two new modes of - operation: in drop-mode=forward-sticky-events sticky events - (stream-start, segment, tags, caps, etc.) are forwarded downstream - even when dropping is enabled; drop-mode=transform-to-gap will in - addition also convert buffers into gap events when dropping is - enabled, which lets downstream elements know that time is advancing - and might allow for preroll in many scenarios. 
By default all events - and all buffers are dropped when dropping is enabled, which can - cause problems with caps negotiation not progressing or branches not - prerolling when dropping is enabled. - -- videocrop: support for many more pixel formats, e.g. planar YUV - formats with > 8bits and GBR* video formats; can now also accept - video not backed by system memory as long as downstream supports the - GstCropMeta - -- videotestsrc: new smpte-rp-219 pattern for SMPTE75 RP-219 conformant - color bars - -- vp8enc: finish support for temporal scalability: two new properties - ("temporal-scalability-layer-flags", - "temporal-scalability-layer-sync-flags") and a unit change on the - "temporal-scalability-target-bitrate" property (now expects bps); - also make temporal scalability details available to RTP payloaders - as buffer metadata. - -- vp9enc: new properties to tweak encoder performance: - - - "aq-mode" to configure adaptive quantization modes - - "frame-parallel-decoding" to configure whether to create a - bitstream that reduces decoding dependencies between frames - which allows staged parallel processing of more than one video - frames in the decoder. (Defaults to TRUE) - - "row-mt", "tile-columns" and "tile-rows" so multithreading can - be enabled on a per-tile basis, instead of on a per tile-column - basis. In combination with the new "tile-rows" property, this - allows the encoder to make much better use of the available CPU - power. - -- vp9dec, vp9enc: add support for 10-bit 4:2:0 and 4:2:2 YUV, as well - as 8-bit 4:4:4 - -- vp8enc, vp9enc now default to “good quality” for the deadline - property rather then “best quality”. Having the deadline set to best - quality causes the encoder to be absurdly slow, most real-life users - will prefer good-enough quality with better performance instead. - -- wpesrc: - - - implement audio support: a new sometimes source pad will be - created for each audio stream created by the web engine. - - move wpesrc to wpevideosrc and add a wrapper bin wpesrc to also - support audio - - also handles web:// URIs now (same as cefsrc) - - post messages with the estimated load progress on the bus +- audioconvert: Dithering now uses a slightly slower, less biased PRNG + which results in better quality output. Also dithering can now be + enabled via the new “dithering-threshold” property for target bit + depths of more than 20 bits. + +- av1enc: Add “keyframe-max-dist” property for controlling max + distance between keyframes, as well as “enc-pass”, “keyframe-mode”, + “lag-in-frames” and “usage-profile” properties. + +- cccombiner: new “output-padding” property + +- decklink: Add support for 4k DCI, 8k/UHD2 and 8k DCI modes + +- dvbsubenc: Support for >SD resolutions is working correctly now. + +- fdkaacenc: Add HE-AAC / HE-AACv2 profile support + +- glvideomixer: New “crop-left,”crop-right, “crop-top” and + “crop-bottom” pad properties for cropping inputs + +- gssink: new ‘content-type’ property. Useful when one wants to upload + a video as video/mp4 instead of ’video/quicktime` for example. 
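
As a hedged illustration of the new av1enc properties listed above,
the sketch below sets "keyframe-max-dist" on an av1enc instance; the
value of 60 frames is arbitrary and only meant as an example:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *enc;

      gst_init (&argc, &argv);

      enc = gst_element_factory_make ("av1enc", NULL);
      if (enc == NULL) {
        g_printerr ("av1enc not available\n");
        return 1;
      }

      /* Force a keyframe at least every 60 frames (example value). */
      g_object_set (enc, "keyframe-max-dist", 60, NULL);

      gst_object_unref (enc);
      return 0;
    }
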
+ +- jpegparse: Rewritten using the common parser library + +- msdk: + + - new msdkav1enc AV1 video encoder element + - msdk decoders: Add support for Scaler Format Converter (SFC) on + supported Intel platforms for hardware accelerated conversion + and scaling + - msdk encoders: support import of dmabuf, va memory and D3D11 + memory + - msdk encoders: add properties for low delay bitrate control and + max frame sizes for I/P frames + - msdkh264enc, msdkh265enc: more properties to control intra + refresh + - note that on systems with multi GPUs the Windows D3D11 + integration might only work reliably if the Intel GPU is the + primary GPU + +- mxfdemux: Add support for Canon XF-HEVC -- x265enc: add negative DTS support, which means timestamps are now - offset by 1h same as with x264enc +- openaptx: Support the freeaptx library -RTP Payloaders and Depayloaders +- qroverlay: -- rtpisacpay, rtpisacdepay: new RTP payloader and depayloader for iSAC - audio codec - -- rtph264depay: - - - new "request-keyframe" property to make the depayloader - automatically request a new keyframe from the sender on packet - loss, consistent with the new property on rtpvp8depay. - - new "wait-for-keyframe" property to make depayloader wait for a - new keyframe at the beginning and after packet loss (only - effective if the depayloader outputs AUs), consistent with the - existing property on rtpvp8depay. + - new “qrcode-case-sensitive” property allows encoding case + sensitive strings like wifi SSIDs or passwords. + - added the ability to pick up data to render from an + upstream-provided custom GstQROverlay meta -- rtpopuspay, rtpopusdepay: support libwebrtc-compatible multichannel - audio in addition to the previously supported multichannel audio - modes +- qtdemux: Add support for ONVIF XML Timed MetaData and AVC-Intra + video -- rtpopuspay: add DTX (Discontinuous Transmission) support +- rfbsrc now supports the uri handler interface, so applications can + use RFB/VNC sources in uridecodebin(3) and playbin, with + e.g. rfb://:password@10.1.2.3:5903?shared=1 -- rtpvp8depay: new "request-keyframe" property to make the depayloader - automatically request a new keyframe from the sender on packet loss. +- rtponviftimestamp: Add support for using reference timestamps -- rtpvp8pay: temporal scaling support +- rtpvp9depay now has the same keyframe-related properties as + rtpvp8depay and rtph264depay: “request-keyframe” and + “wait-for-keyframe” -- rtpvp9depay: Improved SVC handling (aggregate all layers) +- rtspsrc: Various RTSP servers are using invalid URL operations for + constructing the control URL. Until GStreamer 1.16 these worked + correctly because GStreamer was just appending strings itself to + construct the control URL, but starting version 1.18 the correct URL + operations were used. With GStreamer 1.22, rtspsrc now first tries + with the correct control URL and if that fails it will retry with + the wrongly constructed control URL to restore support for such + servers. -RTP Infrastructure +- rtspsrc and rtpjitterbuffer gained a new + “add-reference-timestamp-meta” property that makes them put the + unmodified original sender timestamp on output buffers for NTP or + PTP clock synced senders -- rtpst2022-1-fecdec, rtpst2022-1-fecenc: new elements providing SMPTE - 2022-1 2-D Forward Error Correction. More details in Mathieu’s blog - post. 
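
A small sketch of playing an RFB/VNC source through playbin via the
new rfbsrc URI handler mentioned above; the URI is the example from
that note and the host, port and password would need to be adapted to
a real server:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *playbin;
      GstBus *bus;
      GstMessage *msg;

      gst_init (&argc, &argv);

      playbin = gst_element_factory_make ("playbin", NULL);
      g_object_set (playbin, "uri",
          "rfb://:password@10.1.2.3:5903?shared=1", NULL);

      gst_element_set_state (playbin, GST_STATE_PLAYING);

      /* Block until an error or end-of-stream is posted. */
      bus = gst_element_get_bus (playbin);
      msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
          GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
      if (msg != NULL)
        gst_message_unref (msg);
      gst_object_unref (bus);

      gst_element_set_state (playbin, GST_STATE_NULL);
      gst_object_unref (playbin);
      return 0;
    }
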
+- srtsrc, srtsink: new “auto-reconnect” property to make it possible + to disable automatic reconnects (in caller mode) and make the + elements post an error immediately instead; also stats improvements -- rtpreddec: BUNDLE support +- srtsrc: new “keep-listening” property to avoid EOS on disconnect and + keep the source running while it waits for a new connection. -- rtpredenc, rtpulpfecenc: add support for Transport-wide Congestion - Control (TWCC) +- videocodectestsink: added YUV 4:2:2 support -- rtpsession: new "twcc-feedback-interval" property to allow RTCP TWCC - reports to be scheduled on a timer instead of per marker-bit. +- wasapi2src: Add support for process loopback capture + +- wpesrc: Add support for modifiers in key/touch/pointer events Plugin and library moves -- There were no plugin moves or library moves in this cycle. +- The xingmux plugin has been moved from gst-plugins-ugly into + gst-plugins-good. + +- The various Windows directshow plugins in gst-plugins-bad have been + unified into a single directshow plugin. Plugin removals -The following elements or plugins have been removed: +- The dxgiscreencapsrc element has been removed, use + d3d11screencapturesrc instead -- The ofa audio fingerprinting plugin has been removed. The MusicIP - database has been defunct for years so this plugin is likely neither - useful nor used by anyone. +Miscellaneous API additions -- The mms plugin containing mmssrc has been removed. It seems unlikely - anyone still needs this or that there are even any streams left out - there. The MMS protocol was deprecated in 2003 (in favour of RTSP) - and support for it was dropped with Microsoft Media Services 2008, - and Windows Media Player apparently also does not support it any - more. +- GST_AUDIO_FORMAT_INFO_IS_VALID_RAW() and + GST_VIDEO_FORMAT_INFO_IS_VALID_RAW() can be used to check if a + GstAudioFormatInfo or GstVideoFormatInfo has been initialised to a + valid raw format. -Miscellaneous API additions +- Video SEI meta: new GstVideoSEIUserDataUnregisteredMeta to carry + H.264 and H.265 metadata from SEI User Data Unregistered messages. -Core - -- gst_buffer_new_memdup() is a convenience function for the - widely-used gst_buffer_new_wrapped(g_memdup(data,size),size) - pattern. - -- gst_caps_features_new_single() creates a new single GstCapsFeatures, - avoiding the need to use the vararg function with NULL terminator - for simple cases. - -- gst_element_type_set_skip_documentation() can be used by plugins to - signal that certain elements should not be included in the GStreamer - plugin documentation. This is useful for plugins where elements are - registered dynamically based on hardware capabilities and/or where - the available plugins and properties vary from system to system. - This is used in the d3d11 plugin for example to ensure that only the - list of default elements is advertised in the documentation. - -- gst_type_find_suggest_empty_simple() is a new convenience function - for typefinders for cases where there’s only a media type and no - other fields. - -- New API to create elements and set properties at construction time, - which is not only convenient, but also allows GStreamer elements to - have construct-only properties: gst_element_factory_make_full(), - gst_element_factory_make_valist(), - gst_element_factory_make_with_properties(), - gst_element_factory_create_full(), - gst_element_factory_create_valist(), - gst_element_factory_create_with_properties(). 
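
For the srtsrc changes noted above, a hedged sketch of disabling
automatic reconnects and keeping the source running across
disconnects; the SRT URI/mode configuration is omitted here and the
chosen values are only examples:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *src;

      gst_init (&argc, &argv);

      src = gst_element_factory_make ("srtsrc", NULL);

      /* Post an error immediately instead of reconnecting (caller
       * mode), and do not send EOS on disconnect so the source keeps
       * waiting for a new connection. */
      g_object_set (src,
          "auto-reconnect", FALSE,
          "keep-listening", TRUE,
          NULL);

      gst_object_unref (src);
      return 0;
    }
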
- -- GstSharedTaskPool: new “shared” task pool subclass with slightly - different default behaviour than the existing GstTaskPool which - would create unlimited number of threads for new tasks. The shared - taskpool creates up to N threads (default: 1) and then distributes - pending tasks to those threads round-robin style, and blocks if no - thread is available. It is possible to join tasks. This can be used - by plugins to implement simple multi-threaded processing and is used - for the new multi-threaded video conversion and compositing done in - GstVideoAggregator, videoconverter and compositor. - -Plugins Base Utils library - -- GstDiscoverer: - - - gst_discoverer_container_info_get_tags() was added to retrieve - global/container tags (vs. per-stream tags). Per-Stream tags can - be retrieved via the existing - gst_discoverer_stream_info_get_tags(). - gst_discoverer_info_get_tags(), which for many files returns a - confusing mix of stream and container tags, has been deprecated - in favour of the container/stream-specific functions. - - gst_discoverer_stream_info_get_stream_number() returns a unique - integer identifier for a given stream within the given - GstDiscoverer context. (If this matches the stream number inside - the container bitstream that’s by coincidence and not by - design.) - -- gst_pb_utils_get_caps_description_flags() can be used to query - whether certain caps represent a container, audio, video, image, - subtitles, tags, or something else. This only works for formats - known to GStreamer. - -- gst_pb_utils_get_file_extension_from_caps() returns a possible file - extension for given caps. - -- gst_codec_utils_h264_get_profile_flags_level(): Parses profile, - flags, and level from H264 AvcC codec_data. The format of H264 AVCC - extradata/sequence_header is documented in the ITU-T H.264 - specification section 7.3.2.1.1 as well as in ISO/IEC 14496-15 - section 5.3.3.1.2. - -- gst_codec_utils_caps_get_mime_codec() to convert caps to a RFC 6381 - compatible MIME codec string codec. Useful for providing the codecs - field inside the Content-Type HTTP header for containerized formats, - such as mp4 or matroska. - -GStreamer OpenGL integration library and plugins - -- glcolorconvert: added suppport for converting the video formats - A420, AV12, BGR, BGRA, RGBP and BGRP. - -- Added support to GstGLBuffer for persistent buffer mappings where a - Pixel Buffer Object (PBO) can be mapped by both the CPU and the GPU. - This removes a memcpy() when uploading textures or vertices - particularly when software decoders (e.g. libav) are direct - rendering into our memory. Improves transfer performance - significantly. Requires OpenGL 4.4, GL_ARB_buffer_storage or - GL_EXT_buffer_storage - -- Added various helper functions for handling 4x4 matrices of affine - transformations as used by GstVideoAffineTransformationMeta. - -- Add support to GstGLContext for allowing the application to control - the config (EGLConfig, GLXConfig, etc) used when creating the OpenGL - context. This allows the ability to choose between RGB16 or RGB10A2 - or RGBA8 back/front buffer configurations that were previously - hardcoded. GstGLContext also supports retrieving the configuration - it was created with or from an externally provide OpenGL context - handle. This infrastructure is also used to create a compatible - config from an application/externally provided OpenGL context in - order to improve compatibility with other OpenGL frameworks and GUI - toolkits. 
A new environment variable GST_GL_CONFIG was also added to - be able to request a specific configuration from the command line. - Note: different platforms will have different functionality - available. - -- Add support for choosing between EGL and WGL at runtime when running - on Windows. Previously this was a build-time switch. Allows use in - e.g. Gtk applications on Windows that target EGL/ANGLE without - recompiling GStreamer. gst_gl_display_new_with_type() can be used by - applications to choose a specific display type to use. - -- Build fixes to explicitly check for Broadcom-specific libraries on - older versions of the Raspberry Pi platform. The Broadcom OpenGL ES - and EGL libraries have different filenames. Using the vc4 Mesa - driver on the Raspberry Pi is not affected. - -- Added support to glupload and gldownload for transferring RGBA - buffers using the memory:NVMM available on the Nvidia Tegra family - of embedded devices. - -- Added support for choosing libOpenGL and libGLX as used in a GLVND - environment on unix-based platforms. This allows using desktop - OpenGL and EGL without pulling in any GLX symbols as would be - required with libGL. - -Video library - -- New raw video formats: - - - AV12 (NV12 with alpha plane) - - RGBP and BGRP (planar RGB formats) - - ARGB64 variants with specified endianness instead of host - endianness: - - ARGB64_LE, ARGB64_BE - - RGBA64_BE, RGBA64_LE - - BGRA64_BE, BGRA64_LE - - ABGR64_BE, ABGR64_LE - -- gst_video_orientation_from_tag() is new convenience API to parse the - image orientation from a GstTagList. - -- GstVideoDecoder subframe support (see below) - -- GstVideoCodecState now also carries some HDR metadata - -- Ancillary video data: implement transform functions for AFD/Bar - metas, so they will be forwarded in more cases - -MPEG-TS library - -This library only handles section parsing and such, see above for -changes to the actual mpegtsmux and mpegtsdemux elements. - -- many additions and improvements to SCTE-35 section parsing -- new API for fetching extended descriptors: - gst_mpegts_find_descriptor_with_extension() -- add support for SIT sections (Selection Information Tables) -- expose event-from-section constructor gst_event_new_mpegts_section() -- parse Audio Preselection Descriptor needed for Dolby AC-4 - -GstWebRTC library + webrtcbin - -- Change the way in which sink pads and transceivers are matched - together to support easier usage. If a pad is created without a - specific index (i.e. using sink_%u as the pad template), then an - available compatible transceiver will be searched for. If a specific - index is requested (i.e. sink_1) then if a transceiver for that - m-line already exists, that transceiver must match the new sink pad - request. If there is no transceiver available in either scenario, a - new transceiver is created. If a mixture of both sink_1 and sink_%u - requests result in an impossible situation, an error will be - produced at pad request time or from create offer/answer. - -- webrtcbin now uses regular ICE nomination instead of libnice’s - default of aggressive ICE nomination. Regular ICE nomination is the - default recommended by various relevant standards and improves - connectivity in specific network scenarios. - -- Add support for limiting the port range used for RTP with the - addition of the min-rtp-port and max-rtp-port properties on the ICE - object. - -- Expose the SCTP transport as a property on webrtcbin to more closely - match the WebRTC specification. 
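
A short sketch of the gst_codec_utils_caps_get_mime_codec() helper
mentioned earlier under the Plugins Base Utils library; the caps
string is only an example, and the exact RFC 6381 string returned
depends on which fields (profile, level, codec_data, ...) are present:

    #include <gst/gst.h>
    #include <gst/pbutils/pbutils.h>

    int
    main (int argc, char **argv)
    {
      GstCaps *caps;
      gchar *mime_codec;

      gst_init (&argc, &argv);

      caps = gst_caps_from_string (
          "video/x-h264, profile=(string)high, level=(string)4.1");
      mime_codec = gst_codec_utils_caps_get_mime_codec (caps);

      /* Prints an RFC 6381 style codec string suitable for the
       * "codecs" field of a Content-Type header. */
      g_print ("codecs=\"%s\"\n",
          mime_codec != NULL ? mime_codec : "(unknown)");

      g_free (mime_codec);
      gst_caps_unref (caps);
      return 0;
    }
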
- -- Added support for taking into account the data channel transport - state when determining the value of the "connection-state" property. - Previous versions of the WebRTC spec did not include the data - channel state when computing this value. - -- Add configuration for choosing the size of the underlying sockets - used for transporting media data - -- Always advertise support for the transport-cc RTCP feedback protocol - as rtpbin supports it. For full support, the configured caps (input - or through codec-preferences) need to include the relevant RTP - header extension. - -- Numerous fixes to caps and media handling to fail-fast when an - incompatible situation is detected. - -- Improved support for attaching the required media after a remote - offer has been set. - -- Add support for dynamically changing the amount of FEC used for a - particular stream. - -- webrtcbin now stops further SDP processing at the first error it - encounters. - -- Completed support for either local or the remote closing a data - channel. - -- Various fixes when performing BUNDLEing of the media streams in - relation to RTX and FEC usage. - -- Add support for writing out QoS DSCP marking on outgoing packets to - improve reliability in some network scenarios. - -- Improvements to the statistics returned by the get-stats signal - including the addition of the raw statistics from the internal - RTPSource, the TWCC stats when available. - -- The webrtc library does not expose any objects anymore with public - fields. Instead properties have been added to replace that - functionality. If you are accessing such fields in your application, - switch to the corresponding properties. - -GstCodecs and Video Parsers - -- Support for render delays to improve throughput across all CODECs - (used with NVDEC and V4L2). -- lots of improvements to parsers and the codec parsing decoder base - classes (H264, H265, VP8, VP9, AV1, MPEG-2) used for various - hardware-accelerate decoder APIs. - -Bindings support - -- gst_allocation_params_new() allocates a GstAllocationParams struct - on the heap. This should only be used by bindings (and freed via - gst_allocation_params_free() then). In C code you would allocate - this on the stack and only init it in place. - -- gst_debug_log_literal() can be used to log a string to the debug log - without going through any printf format expansion and associated - overhead. This is mostly useful for bindings such as the Rust - bindings which may have done their own formatting already . - -- Provide non-inlined versions of refcounting APIs for various - GStreamer mini objects, so that they can be consumed by bindings - (e.g. 
gstreamer-sharp): gst_buffer_ref, gst_buffer_unref, - gst_clear_buffer, gst_buffer_copy, gst_buffer_replace, - gst_buffer_list_ref, gst_buffer_list_unref, gst_clear_buffer_list, - gst_buffer_list_copy, gst_buffer_list_replace, gst_buffer_list_take, - gst_caps_ref, gst_caps_unref, gst_clear_caps, gst_caps_replace, - gst_caps_take, gst_context_ref, gst_context_unref, gst_context_copy, - gst_context_replace, gst_event_replace, gst_event_steal, - gst_event_take, gst_event_ref, gst_event_unref, gst_clear_event, - gst_event_copy, gst_memory_ref, gst_memory_unref, gst_message_ref, - gst_message_unref, gst_clear_message, gst_message_copy, - gst_message_replace, gst_message_take, gst_promise_ref, - gst_promise_unref, gst_query_ref, gst_query_unref, gst_clear_query, - gst_query_copy, gst_query_replace, gst_query_take, gst_sample_ref, - gst_sample_unref, gst_sample_copy, gst_tag_list_ref, - gst_tag_list_unref, gst_clear_tag_list, gst_tag_list_replace, - gst_tag_list_take, gst_uri_copy, gst_uri_ref, gst_uri_unref, - gst_clear_uri. - -- expose a GType for GstMiniObject - -- gst_device_provider_probe() now returns non-floating device object - -API Deprecations - -- gst_element_get_request_pad() has been deprecated in favour of the - newly-added gst_element_request_pad_simple() which does the exact - same thing but has a less confusing name that hopefully makes clear - that the function request a new pad rather than just retrieves an - already-existing request pad. - -- gst_discoverer_info_get_tags(), which for many files returns a - confusing mix of stream and container tags, has been deprecated in - favour of the container-specific and stream-specific functions, - gst_discoverer_container_info_get_tags() and - gst_discoverer_stream_info_get_tags(). - -- gst_video_sink_center_rect() was deprecated in favour of the more - generic newly-added gst_video_center_rect(). - -- The GST_MEMORY_FLAG_NO_SHARE flag has been deprecated, as it tends - to cause problems and prevents sub-buffering. If pooling or lifetime - tracking is required, memories should be allocated through a custom - GstAllocator instead of relying on the lifetime of the buffers the - memories were originally attached to, which is fragile anyway. - -- The GstPlayer high-level playback library is being replaced with the - new GstPlay library (see above). GstPlayer should be considered - deprecated at this point and will be marked as such in the next - development cycle. Applications should be ported to GstPlay. - -- Gstreamer Editing Services: ges_video_transition_set_border(), - ges_video_transition_get_border() - ges_video_transition_set_inverted() - ges_video_transition_is_inverted() have been deprecated, use - ges_timeline_element_set_children_properties() instead. +- vulkan: Expose gst_vulkan_result_to_string() Miscellaneous performance, latency and memory optimisations -More video conversion fast paths +- liborc 0.4.33 adds support for aarch64 (64-bit ARM) architecture + (not enabled by default on Windows yet though) and improvements for + 32-bit ARM and should greatly enhance performance for certain + operations that use ORC. -- v210 ↔ I420, YV12, Y42B, UYVY and YUY2 -- A420 → RGB +- as always there have been plenty of performance, latency and memory + optimisations all over the place. 
-Less jitter when waiting on the system clock +Miscellaneous other changes and enhancements -- Better system clock wait accuracy, less jitter: where available, - clock_nanosleep is used for higher accuracy for waits below 500 - usecs, and waits below 2ms will first use the regular waiting system - and then clock_nanosleep for the remainder. The various wait - implementation have a latency ranging from 50 to 500+ microseconds. - While this is not a major issue when dealing with a low number of - waits per second (for ex: video), it does introduce a non-negligible - jitter for synchronisation of higher packet rate systems. +- the audio/video decoder base classes will not consider decoding + errors a hard error by default anymore but will continue trying to + decode. Previously more than 10 consecutive errors were considered a + hard error but this caused various partially broken streams to fail. + The threshold is configurable via the “max-errors” property. -Video decoder subframe support +- compatibility of the GStreamer PTP clock implementation with + different PTP server implementations was improved, and + synchronization is achieved successfully in various scenarios that + failed before. -- The GstVideoDecoder base class gained API to process input at the - sub-frame level. That way video decoders can start decoding slices - before they have received the full input frame in its entirety (to - the extent this is supported by the codec, of course). This helps - with CPU utilisation and reduces latency. +Tracing framework and debugging improvements -- This functionality is now being used in the OpenJPEG JPEG 2000 - decoder, the FFmpeg H.264 decoder (in case of NAL-aligned input) and - the OpenMAX H.264/H.265 decoders (in case of NAL-aligned input). +New tracers -Miscellaneous other changes and enhancements +- buffer-lateness: Records lateness of buffers and the reported + latency for each pad in a CSV file. Comes with a script for + visualisation. + +- pipeline-snapshot: Creates a .dot file of all pipelines in the + application whenever requested via SIGUSR1 (on UNIX systems) -- GstDeviceMonitor no longer fails to start just because one of the - device providers failed to start. That could happen for example on - systems where the pulseaudio device provider is installed, but - pulseaudio isn’t actually running but ALSA is used for audio - instead. In the same vein the device monitor now keeps track of - which providers have been started (via the new - gst_device_provider_is_started()) and only stops actually running - device providers when stopping the device monitor. - -- On embedded systems it can be useful to create a registry that can - be shared and read by multiple processes running as different users. - It is now possible to set the new GST_REGISTRY_MODE environment - variable to specify the file mode for the registry file, which by - default is set to be only user readable/writable. - -- GstNetClientClock will signal lost sync in case the remote time - resets (e.g. because device power cycles), by emitting the “synced” - signal with synced=FALSE parameter, so applications can take action. - -- gst_value_deserialize_with_pspec() allows deserialization with a - hint for what the target GType should be. This allows for example - passing arrays of flags through the command line or - gst_util_set_object_arg(), eg: foo="". - -- It’s now allowed to create an empty GstVideoOverlayComposition - without any rectangles by passing a NULL rectangle to - gst_video_overlay_composition_new(). 
This is useful for bindings and - simplifies application code in some places. - -Tracing framework, debugging and testing improvements - -- New factories tracer to list loaded elements (and other plugin - features). This can be useful to collect a list of elements needed - for an application, which then in turn can be used to create a - tailored minimal GStreamer build that contains just the elements - needed and nothing else. -- New plugin-feature-loaded tracing hook for use by tracers like the - new factories tracer - -- GstHarness: Add gst_harness_set_live() so that harnesses can be set - to non-live and return is-live=false in latency queries if needed. - Default behaviour is to always return is-live=true in latency - queries. - -- navseek: new "hold-eos" property. When enabled, the element will - hold back an EOS event until the next keystroke (via navigation - events). This can be used to keep a video sink showing the last - frame of a video pipeline until a key is pressed instead of tearing - it down immediately on EOS. - -- New fakeaudiosink element: mimics an audio sink and can be used for - testing and CI pipelines on systems where no audio system is - installed or running. It differs from fakesink in that it only - support audio caps and syncs to the clock by default like a normal - audio sink. It also implements the GstStreamVolume interface like - most audio sinks do. - -- New videocodectestsink element for video codec conformance testing: - Calculates MD5 checksums for video frames and skips any padding - whilst doing so. Can optionally also write back the video data with - padding removed into a file for easy byte-by-byte comparison with - reference data. +- queue-levels: Records queue levels for each queue in a CSV file. + Comes with a script for visualisation. + +Debug logging system improvements + +- new log macros GST_LOG_ID, GST_DEBUG_ID, GST_INFO_ID, + GST_WARNING_ID, GST_ERROR_ID, and GST_TRACE_ID allow passing a + string identifier instead of a GObject. This makes it easier to log + non-gobject-based items and also has performance benefits. Tools -gst-inspect-1.0 +- gst-play-1.0 gained a --no-position command line option to suppress + position/duration queries, which can be useful to reduce debug log + noise. -- Can sort the list of plugins by passing --sort=name as command line - option +GStreamer FFMPEG wrapper -gst-launch-1.0 +- Fixed bitrate management and timestamp inaccuracies for video + encoders -- will now error out on top-level properties that don’t exist and - which were silently ignored before -- On Windows the high-resolution clock is enabled now, which provides - better clock and timer performance on Windows (see Windows section - below for more details). +- Fix synchronization issues and errors created by the (wrong) + forwarding of upstream segment events by ffmpeg demuxers. -gst-play-1.0 +- Clipping meta support for gapless mp3 playback -- New --start-position command line argument to start playback from - the specified position -- Audio can be muted/unmuted in interactive mode by pressing the m - key. 
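
A hedged sketch of the new string-identifier log macros listed above;
the category name and the identifier are made up for illustration, and
it is assumed the macros take a printf-style format string after the
identifier, analogous to the existing GST_DEBUG_OBJECT() macros:

    #include <gst/gst.h>

    GST_DEBUG_CATEGORY_STATIC (my_cat);
    #define GST_CAT_DEFAULT my_cat

    int
    main (int argc, char **argv)
    {
      gst_init (&argc, &argv);
      GST_DEBUG_CATEGORY_INIT (my_cat, "my-app", 0, "example category");

      /* Log against a plain string identifier instead of a GObject,
       * e.g. for per-stream items that are not GObjects. */
      GST_DEBUG_ID ("stream-42", "starting up (%d substreams)", 3);
      GST_WARNING_ID ("stream-42", "no data received yet");

      return 0;
    }
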
-- On Windows the high-resolution clock is enabled now (see Windows - section below for more details) +GStreamer RTSP server -gst-device-monitor-1.0 +- Add RFC5576 Source-specific media attribute to the SDP media for + signalling the CNAME -- New --include-hidden command line argument to also show “hidden” - device providers +- Add support for adjusting request response on pipeline errors -ges-launch-1.0 + - Give the application the possibility to adjust the error code + when responding to a request. For that purpose the pipeline’s + bus messages are emitted to subscribers through a + “handle-message” signal. The subscribers can then check those + messages for errors and adjust the response error code by + overriding the virtual method + GstRTSPClientClass::adjust_error_code(). -- New interactive mode that allows seeking and such. Can be disabled - by passing the --no-interactive argument on the command line. -- Option to forward tags -- Allow using an existing clip to determine the rendering format (both - topology and profile) via new --profile-from command line argument. +- Add gst_rtsp_context_set_token() method to make it possible to set + the RTSPToken on some RTSPContext from bindings such as the Python + bindings. -GStreamer RTSP server +- rtspclientsink gained a “publish-clock-mode” property to configure + whether the pipeline clock should be published according to RFC7273 + (RTP Clock Source Signalling), similar to the same API on + GstRTSPMedia. + +GStreamer VA-API support + +- Development activity has shifted towards the new va plugin, with + gstreamer-vaapi now basically in maintenance-only mode. Most of the + below refers to the va plugin (not gstreamer-vaapi). + +- new gst-va library for GStreamer VA-API integration -- GstRTSPMediaFactory gained API to disable RTCP - (gst_rtsp_media_factory_set_enable_rtcp(), "enable-rtcp" property). - Previously RTCP was always allowed for all RTSP medias. With this - change it is possible to disable RTCP completely, no matter if the - client wants to do RTCP or not. +- vajpegdec: new JPEG decoder -- Make a mount point of / work correctly. While not allowed by the - RTSP 2 spec, the RTSP 1 spec is silent on this and it is used in the - wild. It is now possible to use / as a mount path in - gst-rtsp-server, e.g. rtsp://example.com/ would work with this now. - Note that query/fragment parts of the URI are not necessarily - correctly handled, and behaviour will differ between various - client/server implementations; so use it if you must but don’t bug - us if it doesn’t work with third party clients as you’d hoped. +- vah264enc, vah265enc: new H.264/H.265 encoders -- multithreading fixes (races, refcounting issues, deadlocks) +- vah264lpenc, vah265lpenc: new low power mode encoders -- ONVIF audio backchannel fixes +- vah265enc: Add extended formats support such as 10/12 bits, 4:2:2 + and 4:4:4 -- ONVIF trick mode optimisations +- Support encoder reconfiguration -- rtspclientsink: new "update-sdp" signal that allows updating the SDP - before sending it to the server via ANNOUNCE. This can be used to - add additional metadata to the SDP, for example. The order and - number of medias must not be changed, however. 
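
A minimal sketch of encoding with the new VA-API H.264 encoder
mentioned above; it assumes a system where the va plugin and a capable
driver are available, and the caps, file name and buffer count are
only examples:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *pipeline;
      GstBus *bus;
      GstMessage *msg;
      GError *error = NULL;

      gst_init (&argc, &argv);

      pipeline = gst_parse_launch (
          "videotestsrc num-buffers=300 "
          "! video/x-raw,format=NV12,width=1280,height=720,framerate=30/1 "
          "! vah264enc ! h264parse ! mp4mux ! filesink location=out.mp4",
          &error);
      if (pipeline == NULL) {
        g_printerr ("Failed to build pipeline: %s\n", error->message);
        g_clear_error (&error);
        return 1;
      }

      gst_element_set_state (pipeline, GST_STATE_PLAYING);

      /* Wait for EOS so the MP4 file gets finalised properly. */
      bus = gst_element_get_bus (pipeline);
      msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
          GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
      if (msg != NULL)
        gst_message_unref (msg);
      gst_object_unref (bus);

      gst_element_set_state (pipeline, GST_STATE_NULL);
      gst_object_unref (pipeline);
      return 0;
    }
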
+- vacompositor: Add new compositor element using the VA-API VPP + interface -GStreamer VAAPI +- vapostproc: -- new AV1 decoder element (vaapiav1dec) + - new “scale-method” property + - Process HDR caps if supported + - parse video orientation from tags -- H264 decoder: handle stereoscopic 3D video with frame packing - arrangement SEI messages +- vaapipostproc: Enable the use of DMA-Buf import and export + (gstreamer-vaapi) -- H265 encoder: added Screen Content Coding extensions support +GStreamer Video4Linux2 support -- H265 decoder: gained MAIN_444_12 profile support (decoded to - Y412_LE), and 4:2:2 12-bits support (decoded to Y212_LE) +- Added support for Mediatek Stateless CODEC (VP8, H.264, VP9) -- vaapipostproc: gained BT2020 color standard support +- Stateless H.264 interlaced decoder support -- vaapidecode: now generates caps templates dynamically at runtime in - order to advertise actually supported caps instead of all - theoretically supported caps. +- Stateless H.265 decoder support -- GST_VAAPI_DRM_DEVICE environment variable to force a specified DRM - device when a DRM display is used. It is ignored when other types of - displays are used. By default /dev/dri/renderD128 is used for DRM - display. +- Stateful decoder support for driver resolution change events + +- Stateful decoding support fixes for NXP/Amphion driver + +- Support for hardware crop in v4l2src + +- Conformance test improvement for stateful decoders + +- Fixes for Raspberry Pi CODEC GStreamer OMX -- subframe support in H.264/H.265 decoders +- There were no changes in this module GStreamer Editing Services and NLE -- framepositioner: new "operator" property to access blending modes in - the compositor -- timeline: Implement snapping to markers -- smart-mixer: Add support for d3d11compositor and glvideomixer -- titleclip: add "draw-shadow" child property -- ges:// URI support to define a timeline from a description. -- command-line-formatter - - Add track management to timeline description - - Add keyframe support -- ges-launch-1.0: - - Add an interactive mode where we can seek etc… - - Add option to forward tags - - Allow using an existing clip to determine the rendering format - (both topology and profile) via new --profile-from command line - argument. -- Fix static build +- Handle compositors that are bins around the actual compositor + implementation (like glvideomixers which wraps several elements) + +- Add a mode to disable timeline editing API so the user can be in + full control of its layout (meaning that the user is responsible for + ensuring its validity/coherency) + +- Add a new fade-in transition type + +- Add support for non-1/1 PAR source videos + +- Fix frame accuracy when working with very low framerate streams GStreamer validate -- report: Add a way to force backtraces on reports even if not a - critical issue (GST_VALIDATE_ISSUE_FLAGS_FORCE_BACKTRACE) -- Add a flag to gst_validate_replace_variables_in_string() allow - defining how to resolve variables in structs -- Add gst_validate_bin_monitor_get_scenario() to get the bin monitor - scenario, which is useful for applications that use Validate - directly. -- Add an expected-values parameter to wait, message-type=XX allowing - more precise filtering of the message we are waiting for. -- Add config file support: each test can now use a config file for the - given media file used to test. 
-- Add support to check properties of object properties -- scenario: Add an "action-done" signal to signal when an action is - done -- scenario: Add a "run-command" action type -- scenario: Allow forcing running action on idle from scenario file -- scenario: Allow iterating over arrays in foreach -- scenario: Rename ‘interlaced’ action to ‘non-blocking’ -- scenario: Add a non-blocking flag to the wait signal +- Clean up and stabilize API so we can now generate rust bindings + +- Enhance the appsrc-push action type allowing to find tune the + buffers more in details + +- Add an action type to verify currently configured pad caps + +- Add a way to run checks from any thread after executing a ‘wait’ + action. This is useful when waiting on a signal and want to check + the value of a property right when it is emited for example. GStreamer Python Bindings -- Fixes for Python 3.10 -- Various build fixes -- at least one known breaking change caused by g-i annotation changes - (see below) +- Add a Gst.init_python() function to be called from plugins which + will initialise everything needed for the GStreamer Python bindings + but not call Gst.init() again since this will have been called + already. + +- Add support for the GstURIHandlerInterface that allows elements to + advertise what URI protocols they support. GStreamer C# Bindings -- Fix GstDebugGraphDetails enum -- Updated to latests GtkSharp -- Updated to include GStreamer 1.20 API +- Fix AppSrc and AppSink constructors -GStreamer Rust Bindings and Rust Plugins +- The C# bindings have yet to be updated to include new 1.22 API, + which requires improvements in various places in the bindings / + binding generator stack. See issue #1718 in GitLab for more + information and to track progress. -- The GStreamer Rust bindings are released separately with a different - release cadence that’s tied to gtk-rs, but the latest release has - already been updated for the upcoming new GStreamer 1.20 API (v1_20 - feature). +GStreamer Rust Bindings and Rust Plugins -- gst-plugins-rs, the module containing GStreamer plugins written in - Rust, has also seen lots of activity with many new elements and - plugins. See the New Elements section above for a list of new Rust - elements. +The GStreamer Rust bindings are released separately with a different +release cadence that’s tied to gtk-rs, but the latest release has +already been updated for the new GStreamer 1.22 API. Check the bindings +release notes for details of the changes since 0.18, which was released +around GStreamer 1.20. + +gst-plugins-rs, the module containing GStreamer plugins written in Rust, +has also seen lots of activity with many new elements and plugins. A +list of all Rust plugins and elements provided with the 0.9 release can +be found in the repository. + +- 33% of GStreamer commits are now in Rust (bindings + plugins), and + the Rust plugins module is also where most of the new plugins are + added these days. + +- The Rust plugins are now shipped as part of the Windows MSVC + macOS + binary packages. See below for the list of shipped plugins and the + status of Rust support in cerbero. + +- The Rust plugins are also part of the documentation on the GStreamer + website now. + +- Rust plugins can be used from any programming language. To the + outside they look just like a plugin written in C or C++. 
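
To illustrate the point above that Rust plugins look like any other
GStreamer plugin from the outside, the sketch below simply looks up
one of the new Rust elements from C (livesync, listed just below, is
used as an example); it assumes gst-plugins-rs is installed:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElementFactory *factory;
      GstElement *element;

      gst_init (&argc, &argv);

      /* Rust elements register through the normal plugin mechanism,
       * so the usual factory lookup just works. */
      factory = gst_element_factory_find ("livesync");
      if (factory == NULL) {
        g_printerr ("livesync not found - is gst-plugins-rs installed?\n");
        return 1;
      }

      element = gst_element_factory_create (factory, NULL);
      g_print ("created %s\n", GST_OBJECT_NAME (element));

      gst_object_unref (element);
      gst_object_unref (factory);
      return 0;
    }
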
+ +New Rust plugins and elements + +- rtpav1pay / rtpav1depay: RTP (de)payloader for the AV1 video codec +- gtk4paintablesink: a GTK4 video sink that provides a GdkPaintable + for rendering a video in any place inside a GTK UI. Supports + zero-copy rendering via OpenGL on Linux and macOS. +- ndi: source, sink and device provider for NewTek NDI protocol +- onvif: Various elements for parsing, RTP (de)payloading, overlaying + of ONVIF timed metadata. +- livesync: Element for converting a live stream into a continuous + stream without gaps and timestamp jumps while preserving live + latency requirements. +- raptorq: Encoder/decoder elements for the RaptorQ FEC mechanism that + can be used for RTP streams (RFC6330). + +WebRTC elements + +- webrtcsink: a WebRTC sink (batteries included WebRTC sender with + specific signalling) +- whipsink: WebRTC HTTP ingest (WHIP) to MediaServer +- whepsrc: WebRTC HTTP egress (WHEP) from MediaServer +- rtpgccbwe: RTP bandwidth estimator based on the Google Congestion + Control algorithm (GCC), used by webrtcsink + +Amazon AWS services + +- awss3src / awss3sink: A source and sink element to talk to the + Amazon S3 object storage system. +- awss3hlssink: A sink element to store HLS streams on Amazon S3. +- awstranscriber: an element wrapping the AWS Transcriber service. +- awstranscribeparse: an element parsing the packets of the AWS + Transcriber service. + +Video Effects (videofx) + +- roundedcorners: Element to make the corners of a video rounded via + the alpha channel. +- colordetect: A pass-through filter able to detect the dominant + color(s) on incoming frames, using color-thief. +- videocompare: Compare similarity of video frames. The element can + use different hashing algorithms like Blockhash, DSSIM, and others. + +New MP4 muxer + Fragmented MP4 muxer + +- fmp4mux: New fragmented MP4/ISOBMFF/CMAF muxer for generating + e.g. DASH/HLS media fragments. +- isomp4mux: New non-fragmented, normal MP4 muxer. + +Both plugins provides elements that replace the existing qtmux/mp4mux +element from gst-plugins-good. While not feature-equivalent yet, the new +codebase and using separate elements for the fragment and non-fragmented +case allows for easier extensability in the future. + +Cerbero Rust support + +- Starting this release, cerbero has support for building and shipping + Rust code on Linux, Windows (MSVC) and macOS. The Windows (MSVC) and + macOS binaries also ship the GStreamer Rust plugins in this release. + Only dynamic plugins are built and shipped currently. + +- Preliminary support for Android, iOS and Windows (MinGW) exists but + more work is needed. Check the tracker issue for more details about + future work. + +- The following plugins are included currently: audiofx, aws, cdg, + claxon, closedcaption, dav1d, fallbackswitch, ffv1, fmp4, gif, + hlssink3, hsv, json, livesync, lewton, mp4, ndi, onvif, rav1e, + regex, reqwest, raptorq, png, rtp, textahead, textwrap, threadshare, + togglerecord, tracers, uriplaylistbin, videofx, webrtc, webrtchttp. Build and Dependencies -- Meson 0.59 or newer is required to build GStreamer now. +- meson 0.62 or newer is required -- The GLib requirement has been bumped to GLib 2.56 or newer (from - March 2018). +- GLib >= 2.62 is now required (but GLib >= 2.64 is strongly + recommended) -- The wpe plugin now requires wpe >= 2.28 and wpebackend-fdo >= 1.8 +- libnice >= 0.1.21 is now required and contains important fixes for + GStreamer’s WebRTC stack. 
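
A hedged sketch of RTP-payloading AV1 with the new rtpav1pay element
from gst-plugins-rs mentioned above; the host, port, resolution and
encoder settings are placeholders, and av1parse is used here simply to
provide cleanly aligned AV1 to the payloader:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *pipeline;
      GError *error = NULL;

      gst_init (&argc, &argv);

      pipeline = gst_parse_launch (
          "videotestsrc is-live=true "
          "! video/x-raw,width=640,height=360,framerate=30/1 "
          "! av1enc ! av1parse ! rtpav1pay "
          "! udpsink host=127.0.0.1 port=5004",
          &error);
      if (pipeline == NULL) {
        g_printerr ("Failed to build pipeline: %s\n", error->message);
        g_clear_error (&error);
        return 1;
      }

      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      /* ... run a GMainLoop here; a matching receiver would use
       * udpsrc ! rtpav1depay ! an AV1 decoder, or similar ... */
      gst_element_set_state (pipeline, GST_STATE_NULL);
      gst_object_unref (pipeline);
      return 0;
    }
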
-Explicit opt-in required for build of certain plugins with (A)GPL dependencies +- liborc >= 0.4.33 is recommended for 64-bit ARM support and 32-bit + ARM improvements -Some plugins have GPL- or AGPL-licensed dependencies and those plugins -will no longer be built by default unless you have explicitly opted in -to allow (A)GPL-licensed dependencies by passing -Dgpl=enabled to Meson, -even if the required dependencies are available. +- onnx: OnnxRT >= 1.13.1 is now required -See Building plugins with (A)GPL-licensed dependencies for more details -and a non-exhaustive list of plugins affected. +- openaptx: can now be built against libfreeaptx -gst-build: replaced by mono repository +- opencv: allow building against any 4.x version -See mono repository section above and the GStreamer mono repository FAQ. +- shout: libshout >= 2.4.3 is now required -Cerbero +- gstreamer-vaapi’s Meson build options have been switched from a + custom combo type (yes/no/auto) to the built-in Meson feature type + (enabled/disabled/auto) -Cerbero is a meta build system used to build GStreamer plus dependencies -on platforms where dependencies are not readily available, such as -Windows, Android, iOS and macOS. +- The GStreamer Rust plugins module gst-plugins-rs is now considered + an essential part of the GStreamer plugin offering and packagers and + distributors are strongly encouraged to package and ship those + plugins alongside the existing plugin modules. -General Cerbero improvements +- we now make use of Meson’s install tags feature which allows + selective installation of installl components and might be useful + for packagers. -- Plugin removed: libvisual -- New plugins: rtpmanagerbad and rist +Monorepo build (gst-build) -macOS / iOS specific Cerbero improvements +- new “orc-source” build option to allow build against a + system-installed liborc instead of forcing the use of orc as a + subproject. -- XCode 12 support -- macOS OS release support is now future-proof, similar to iOS -- macOS Apple Silicon (ARM64) cross-compile support has been added -- macOS Apple Silicon (ARM64) native support is currently experimental +- GStreamer command line tools can now be linked to the gstreamer-full + library if it’s built + +Cerbero + +Cerbero is a meta build system used to build GStreamer plus dependencies +on platforms where dependencies are not readily available, such as +Windows, Android, iOS, and macOS. + +General improvements + +- Rust support was added for all support configurations, controlled by + the rust variant; see above for more details +- All pkgconfig files are now reliably relocatable without requiring + pkg-config --define-prefix. This also fixes statically linking with + GStreamer plugins using the corresponding pkgconfig files. +- New documentation on how to build a custom GStreamer repository + using Cerbero, please see the README +- HTTPS certificate checking is enabled for downloads on all platforms + now +- Fetching now automatically retries on error for robustness against + transient errors +- Support for building the new Qt6 plugin was added +- pkgconfig files for various recipes were fixed +- Several recipes were updated to newer versions +- New plugins: adaptivedemux2 aes codectimestamper dav1d +- New libraries: cuda webrtcnice + +macOS / iOS + +- Added support for running Cerbero on ARM64 macOS +- GStreamer.framework and all libraries in it are now relocatable, + which means they use LC_RPATH entries to find dependencies instead + of using an absolute path. 
If you link to GStreamer using the + pkgconfig files, no action is necessary. However, if you use the + framework directly or link to the libraries inside the framework by + hand, then you need to pass -Wl,-rpath, to the + linker. +- Apple bitcode support was dropped, since Apple has deprecated it +- macOS installer now correctly advertises support for both x86_64 and + arm64 +- macOS framework now ships the gst-rtsp-server-1.0 library +- Various fixes were made to make static linking to gstreamer + libraries and plugins work correctly on macOS +- When statically linking to the applemedia plugin using Xcode 13, you + will need to pass -fno-objc-msgsend-selector-stubs which works + around a backwards-incompatible change in Xcode 14. This is not + required for the rest of GStreamer at present, but will be in the + future. +- macOS installer now shows the GStreamer logo correctly -Windows specific Cerbero improvements +Windows -- Visual Studio 2022 support has been added -- bootstrap is faster since it requires building fewer build-tools - recipes on Windows -- package is faster due to better scheduling of recipe stages and - elimination of unnecessary autotools regeneration -- The following plugins are no longer built on Windows: - - a52dec (another decoder is still available in libav) - - dvdread - - resindvd +- MSVC is now required by default on Windows, and the Visual Studio + variant is enabled by default + - To build with MinGW, use the mingw variant +- Visual Studio props files were updated for newer Visual Studio + versions +- Visual Studio 2015 support was dropped +- MSYS2 is now supported as the base instead of MSYS. Please see the + README for more details. Some advantages include: + - Faster build times, since parallel make works + - Faster bootstrap, since some tools are provided by MSYS2 + - Other speed-ups due to using MSYS2 tools instead of MSYS +- Faster download by using powershell instead of hand-rolled Python + code +- Many recipes were ported from Autotools to Meson, speeding up the + build +- Universal Windows Platform is no longer supported, and binaries are + no longer shipped for it +- New documentation on how to force a specific Visual Studio + installation in Cerbero, please see the README +- New plugins: qsv wavpack directshow amfcodec wic win32ipc +- New libraries: d3d11 Windows MSI installer -- no major changes +- Universal Windows Platform prebuilt binaries are no longer available -Linux specific Cerbero improvements +Linux -- Fedora, Debian OS release support is now more future-proof -- Amazon Linux 2 support has been added +- Various fixes for RHEL/CentOS 7 support +- Added support for running on Linux ARM64 -Android specific Cerbero improvements +Android -- no major changes +- Android support now requires Android API version 21 (Lollipop) +- Support for Android Gradle plugin 7.2 Platform-specific changes and improvements Android -- No major changes +- Android SDK 21 is required now as minimum SDK version -macOS and iOS +- androidmedia: Add H.265 / HEVC video encoder mapping -- applemedia: add ProRes support to vtenc and vtdec +- Implement JNI_OnLoad() to register static plugins etc. automatically + in case GStreamer is loaded from Java using System.loadLibrary(), + which is also useful for the gst-full deployment scenario. -Windows +Apple macOS and iOS + +- The GLib version shipped with the GStreamer binaries does not + initialize an NSApp and does not run a NSRunLoop on the main thread + anymore. 
This was a custom GLib patch and caused it to behave + different from the GLib shipped by Homebrew or anybody else. + + The change was originally introduced because various macOS APIs + require a NSRunLoop to run on the main thread to function correctly + but as this change will never get merged into GLib and it was + reverted for 1.22. Applications that relied on this behaviour should + move to the new gst_macos_main() function, which also does not + require the usage of a GMainLoop. -- On Windows the high-resolution clock is enabled now in the - gst-launch-1.0 and gst-play-1.0 command line tools, which provides - better clock and timer performance on Windows, at the cost of higher - power consumption. By default, without the high-resolution clock - enabled, the timer precision on Windows is system-dependent and may - be as bad as 15ms which is not good enough for many multimedia - applications. Developers may want to do the same in their Windows - applications if they think it’s a good idea for their application - use case, and depending on the Windows version they target. This is - not done automatically by GStreamer because on older Windows - versions (pre-Windows 10) this affects a global Windows setting and - also there’s a power consumption vs. performance trade-off that may - differ from application to application. + See e.g. gst-play.c for an example for the usage of + gst_macos_main(). -- dxgiscreencapsrc now supports resolution changes +- GStreamer.framework and all libraries in it are now relocatable, + which means they use LC_RPATH entries to find dependencies instead + of using an absolute path. If you link to GStreamer using the + pkgconfig files, no action is necessary. However, if you use the + framework directly or link to the libraries inside the framework by + hand, then you need to pass -Wl,-rpath, to the + linker. -- The wasapi2 audio plugin was rewritten and now has a higher rank - than the old wasapi plugin since it has a number of additional - features such as automatic stream routing, and no - known-but-hard-to-fix issues. The plugin is always built if the - Windows 10 SDK is available now. +- avfvideosrc: Allow specifying crop coordinates during screen capture -- The wasapi device providers now detect and notify dynamic device - additions/removals +- vtenc, vtdec: H.265 / HEVC video encoding + decoding support -- d3d11screencapturesrc: new desktop capture element, including - GstDeviceProvider implementation to enumerate/select target monitors - for capture. +- osxaudiosrc: Support a device as both input and output -- Direct3D11/DXVA decoder now supports AV1 and MPEG2 codecs - (d3d11av1dec, d3d11mpeg2dec) + - osxaudiodeviceprovider now probes devices more than once to + determine if the device can function as both an input AND and + output device. Previously, if the device provider detected that + a device had any output capabilities, it was treated solely as + an Audio/Sink. This caused issues for devices that have both + input and output capabilities (for example, USB interfaces for + professional audio have both input and output channels). Such + devicesare now listed as both an Audio/Sink as well as an + Audio/Source. -- VP9 decoding got more reliable and stable thanks to a newly written - codec parser +- osxaudio: support hidden devices on macOS -- Support for decoding interlaced H.264/AVC streams + - These are devices that will not be shown in the macOS UIs and + that cannot be retrieved without having the specific UID of the + hidden device. 
There are cases when you might want to have a + hidden device, for example when having a virtual speaker that + forwards the data to a virtual hidden input device from which + you can then grab the audio. The blackhole project supports + these hidden devices and this change provides a way that if the + device id is a hidden device it will use it instead of checkinf + the hardware list of devices to understand if the device is + valid. -- Hardware-accelerated video deinterlacing (d3d11deinterlace) and - video mixing (d3d11compositor) +Windows + +- win32ipcvideosink, win32ipcvideosrc: new shared memory videosrc/sink + elements + +- wasapi2: Add support for process loopback capture for a specified + PID (requires Windows 11/Windows Server 2022) -- Video mixing with the Direct3D11 API (d3d11compositor) +- The Windows universal UWP build is currently non-functional and will + need updating after the recent GLib upgrade. It is unclear if anyone + is using these binaries, so if you are please make yourself known. -- MediaFoundation API based hardware encoders gained the ability to - receive Direct3D11 textures as an input +- wicjpegdec, wicpngdec: Windows Imaging Component (WIC) based JPEG + and PNG decoder elements. -- Seungha’s blog post “GStreamer ❤ Windows: A primer on the cool stuff - you’ll find in the 1.20 release” describes many of the - Windows-related improvements in more detail +- mfaacdec, mfmp3dec: Windows MediaFoundation AAC and MP3 decoders + +- The uninstalled development environment supports PowerShell 7 now Linux -- bluez: LDAC Bluetooth audio codec support in a2dpsink and avdtpsink, - as well as an LDAC RTP payloader (rtpldacpay) and an LDAC audio - encoder (ldacenc) +- Improved design for DMA buffer sharing and modifier handling for + hardware-accelerated video decoders/encoders/filters and + capture/rendering on Linux and Linux-like system. + +- kmssink + + - new “fd” property which allows an application to provide their + own opened DRM device fd handle to kmssink. That way an + application can lease multiple fd’s from a DRM master to display + on different CRTC outputs at the same time with multiple kmssink + instances, for example. + - new “skip-vsync” property to achieve full framerate with legacy + emulation in drivers. + - HDR10 infoframe support + +- va plugin and gstreamer-vaapi improvements (see above) -- kmssink: gained support for NV24, NV61, RGB16/BGR16 formats; - auto-detect NVIDIA Tegra driver +- waylandsink: Add “rotate-method” property and “render-rectangle” + property + +- new gtkwaylandsink element based on gtksink, but similar to + waylandsink and uses Wayland APIs directly instead of rendering with + Gtk/Cairo primitives. This approach is only compatible with Gtk3, + and like gtksink this element only supports Gtk3. Documentation improvements -- hardware-accelerated GPU plugins will now no longer always list all - the element variants for all available GPUs, since those are - system-dependent and it’s confusing for users to see those in the - documentation just because the GStreamer developer who generated the - docs had multiple GPUs to play with at the time. Instead just show - the default elements. - -Possibly Breaking and Other Noteworthy Behavioural Changes - -- gst_parse_launch(), gst_parse_bin_from_description() and friends - will now error out when setting properties that don’t exist on - top-level bins. They were silently ignored before. - -- The GstWebRTC library does not expose any objects anymore with - public fields. 
Instead properties have been added to replace that - functionality. If you are accessing such fields in your application, - switch to the corresponding properties. - -- playbin and uridecodebin now emit the source-setup signal before the - element is added to the bin and linked so that the source element is - already configured before any scheduling query comes in, which is - useful for elements such as appsrc or giostreamsrc. - -- The source element inside urisourcebin (used inside uridecodebin3 - which is used inside playbin3) is no longer called "source". This - shouldn’t affect anyone hopefully, because there’s a "setup-source" - signal to configure the source element and no one should rely on - names of internal elements anyway. - -- The vp8enc element now expects bps (bits per second) for the - "temporal-scalability-target-bitrate" property, which is consistent - with the "target-bitrate" property. Since additional configuration - is required with modern libvpx to make temporal scaling work anyway, - chances are that very few people will have been using this property - -- vp8enc and vp9enc now default to “good quality” for the "deadline" - property rather then “best quality”. Having the deadline set to best - quality causes the encoder to be absurdly slow, most real-life users - will want the good quality tradeoff instead. - -- The experimental GstTranscoder library API in gst-plugins-bad was - changed from a GObject signal-based notification mechanism to a - GstBus/message-based mechanism akin to GstPlayer/GstPlay. - -- MPEG-TS SCTE-35 API: semantic change for SCTE-35 splice commands: - timestamps passed by the application should be in running time now, - since users of the API can’t really be expected to predict the local - PTS of the muxer. - -- The GstContext used by souphttpsrc to share the session between - multiple element instances has changed. Previously it provided - direct access to the internal SoupSession object, now it only - provides access to an opaque, internal type. This change is - necessary because SoupSession is not thread-safe at all and can’t be - shared safely between arbitrary external code and souphttpsrc. - -- Python bindings: GObject-introspection related Annotation fixes have - led to a case of a GstVideo.VideoInfo-related function signature - changing in the Python bindings (possibly one or two other cases - too). This is for a function that should never have been exposed in - the first place though, so the bindings are being updated to throw - an exception in that case, and the correct replacement API has been - added in form of an override. +- The GStreamer Rust plugins are now included and documented in the + plugin documentation. + +Possibly Breaking Changes + +- the Opus audio RTP payloader and depayloader no longer accept the + lower case encoding-format=multiopus but instead produce and accept + only the upper case variant encoding-format=MULTIOPUS, since those + should always be upper case in GStreamer (caps fields are always + case sensitive). This should hopefully only affect applications + where RTP caps are set manually and multi-channel audio (>= 3 + channels) is used. + +- wpesrc: the URI handler protocols changed from wpe:// and web:// to + web+http://, web+https://, and web+file:// which means URIs are RFC + 3986 compliant and the source can simply strip the prefix from the + protocol. + +- The Windows screen capture element dxgiscreencapsrc has been + removed, please use d3d11screencapturesrc instead. 
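  For anyone migrating away from the removed dxgiscreencapsrc, the
  following minimal sketch (an illustration only, not taken from the
  GStreamer documentation) shows a simple capture pipeline built
  around its d3d11screencapturesrc replacement. Only default element
  properties are used; monitor selection and similar options are
  deliberately left out.

      #include <gst/gst.h>

      int
      main (int argc, char **argv)
      {
        GstElement *pipeline;
        GstBus *bus;
        GstMessage *msg;
        GError *error = NULL;

        gst_init (&argc, &argv);

        /* Capture the desktop with d3d11screencapturesrc (the
         * replacement for dxgiscreencapsrc) and display it with the
         * Direct3D11 video sink. */
        pipeline = gst_parse_launch (
            "d3d11screencapturesrc ! d3d11convert ! d3d11videosink",
            &error);
        if (pipeline == NULL) {
          g_printerr ("Failed to create pipeline: %s\n", error->message);
          g_clear_error (&error);
          return 1;
        }

        gst_element_set_state (pipeline, GST_STATE_PLAYING);

        /* Run until EOS or an error is posted on the bus. */
        bus = gst_element_get_bus (pipeline);
        msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
            GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
        if (msg != NULL)
          gst_message_unref (msg);

        gst_object_unref (bus);
        gst_element_set_state (pipeline, GST_STATE_NULL);
        gst_object_unref (pipeline);
        return 0;
      }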
+ +- On Android the minimum supported Android API version is now version + 21 and has been increased from 16. + +- On macOS, the GLib version shipped with the GStreamer binaries will + no longer initialize an NSApp or run an NSRunLoop on the main + thread. See macOS/iOS section above for details. + +- decklink: The decklink plugin is now using the 12.2.2 version of the + SDK and will not work with drivers older than version 12. + +- On iOS Apple Bitcode support was removed from the binaries. This + feature is deprecated since XCode 14 and not used on the App Store + anymore. + +- The MP4/Matroska/WebM muxers now require the “stream-format” to be + provided as part of the AV1 caps as only the original “obu-stream” + format is supported in these containers and not the “annexb” format. Known Issues -- nothing in particular at this point (but also see possibly breaking - changes section above) +- The Windows UWP build in Cerbero needs fixing after the recent GLib + upgrade (see above) + +- The C# bindings have not been updated to include new 1.22 API yet + (see above) + +Statistics + +- 4072 commits + +- 2224 Merge Requests + +- 716 Issues + +- 200+ Contributors + +- ~33% of all commits and Merge Requests were in Rust modules + +- 4747 files changed + +- 469633 lines added + +- 209842 lines deleted + +- 259791 lines added (net) Contributors -Aaron Boxer, Adam Leppky, Adam Williamson, Alba Mendez, Alejandro -González, Aleksandr Slobodeniuk, Alexander Vandenbulcke, Alex Ashley, -Alicia Boya García, Andika Triwidada, Andoni Morales Alastruey, Andrew -Wesie, Andrey Moiseev, Antonio Ospite, Antonio Rojas, Arthur Crippa -Búrigo, Arun Raghavan, Ashley Brighthope, Axel Kellermann, Baek, Bastien -Nocera, Bastien Reboulet, Benjamin Gaignard, Bing Song, Binh Truong, -Biswapriyo Nath, Brad Hards, Brad Smith, Brady J. 
Garvin, Branko -Subasic, Camilo Celis Guzman, Chris Bass, ChrisDuncanAnyvision, Chris -White, Corentin Damman, Daniel Almeida, Daniel Knobe, Daniel Stone, -david, David Fernandez, David Keijser, David Phung, Devarsh Thakkar, -Dinesh Manajipet, Dmitry Samoylov, Dmitry Shusharin, Dominique Martinet, -Doug Nazar, Ederson de Souza, Edward Hervey, Emmanuel Gil Peyrot, -Enrique Ocaña González, Ezequiel Garcia, Fabian Orccon, Fabrice -Fontaine, Fernando Jimenez Moreno, Florian Karydes, Francisco Javier -Velázquez-García, François Laignel, Frederich Munch, Fredrik Pålsson, -George Kiagiadakis, Georg Lippitsch, Göran Jönsson, Guido Günther, -Guillaume Desmottes, Guiqin Zou, Haakon Sporsheim, Haelwenn (lanodan) -Monnier, Haihao Xiang, Haihua Hu, Havard Graff, He Junyan, Helmut -Januschka, Henry Wilkes, Hosang Lee, Hou Qi, Ignacio Casal Quinteiro, -Igor Kovalenko, Ilya Kreymer, Imanol Fernandez, Jacek Tomaszewski, Jade -Macho, Jakub Adam, Jakub Janků, Jan Alexander Steffens (heftig), Jan -Schmidt, Jason Carrete, Jason Pereira, Jay Douglass, Jeongki Kim, Jérôme -Laheurte, Jimmi Holst Christensen, Johan Sternerup, John Hassell, John -Lindgren, John-Mark Bell, Jonathan Matthew, Jordan Petridis, Jose -Quaresma, Julian Bouzas, Julien, Kai Uwe Broulik, Kasper Steensig -Jensen, Kellermann Axel, Kevin Song, Khem Raj, Knut Inge Hvidsten, Knut -Saastad, Kristofer Björkström, Lars Lundqvist, Lawrence Troup, Lim Siew -Hoon, Lucas Stach, Ludvig Rappe, Luis Paulo Fernandes de Barros, Luke -Yelavich, Mads Buvik Sandvei, Marc Leeman, Marco Felsch, Marek Vasut, -Marian Cichy, Marijn Suijten, Marius Vlad, Markus Ebner, Mart Raudsepp, -Matej Knopp, Mathieu Duponchelle, Matthew Waters, Matthieu De Beule, -Mengkejiergeli Ba, Michael de Gans, Michael Olbrich, Michael Tretter, -Michal Dzik, Miguel Paris, Mikhail Fludkov, mkba, Nazar Mokrynskyi, -Nicholas Jackson, Nicola Murino, Nicolas Dufresne, Niklas Hambüchen, -Nikolay Sivov, Nirbheek Chauhan, Olivier Blin, Olivier Crete, Olivier -Crête, Paul Goulpié, Per Förlin, Peter Boba, P H, Philippe Normand, -Philipp Zabel, Pieter Willem Jordaan, Piotrek Brzeziński, Rafał -Dzięgiel, Rafostar, raghavendra, Raghavendra, Raju Babannavar, Raleigh -Littles III, Randy Li, Randy Li (ayaka), Ratchanan Srirattanamet, Raul -Tambre, reed.lawrence, Ricky Tang, Robert Rosengren, Robert Swain, Robin -Burchell, Roman Sivriver, R S Nikhil Krishna, Ruben Gonzalez, Ruslan -Khamidullin, Sanchayan Maity, Scott Moreau, Sebastian Dröge, Sergei -Kovalev, Seungha Yang, Sid Sethupathi, sohwan.park, Sonny Piers, Staz M, -Stefan Brüns, Stéphane Cerveau, Stephan Hesse, Stian Selnes, Stirling -Westrup, Théo MAILLART, Thibault Saunier, Tim, Timo Wischer, Tim-Philipp -Müller, Tim Schneider, Tobias Ronge, Tom Schoonjans, Tulio Beloqui, -tyler-aicradle, U. 
Artie Eoff, Ung, Val Doroshchuk, VaL Doroshchuk, -Víctor Manuel Jáquez Leal, Vivek R, Vivia Nikolaidou, Vivienne -Watermeier, Vladimir Menshakov, Will Miller, Wim Taymans, Xabier -Rodriguez Calvar, Xavier Claessens, Xℹ Ruoyao, Yacine Bandou, Yinhang -Liu, youngh.lee, youngsoo.lee, yychao, Zebediah Figura, Zhang yuankun, -Zhang Yuankun, Zhao, Zhao Zhili, , Aleksandar Topic, Antonio Ospite, -Bastien Nocera, Benjamin Gaignard, Brad Hards, Carlos Falgueras García, -Célestin Marot, Corentin Damman, Corentin Noël, Daniel Almeida, Daniel -Knobe, Danny Smith, Dave Piché, Dmitry Osipenko, Fabrice Fontaine, -fjmax, Florian Zwoch, Guillaume Desmottes, Haihua Hu, Heinrich Kruger, -He Junyan, Jakub Adam, James Cowgill, Jan Alexander Steffens (heftig), -Jean Felder, Jeongki Kim, Jiri Uncovsky, Joe Todd, Jordan Petridis, -Krystian Wojtas, Marc-André Lureau, Marcin Kolny, Marc Leeman, Mark -Nauwelaerts, Martin Reboredo, Mathieu Duponchelle, Matthew Waters, -Mengkejiergeli Ba, Michael Gruner, Nicolas Dufresne, Nirbheek Chauhan, -Olivier Crête, Philippe Normand, Rafał Dzięgiel, Ralf Sippl, Robert -Mader, Sanchayan Maity, Sangchul Lee, Sebastian Dröge, Seungha Yang, -Stéphane Cerveau, Teh Yule Kim, Thibault Saunier, Thomas Klausner, Timo -Wischer, Tim-Philipp Müller, Tobias Reineke, Tomasz Andrzejak, Trung Do, -Tyler Compton, Ung, Víctor Manuel Jáquez Leal, Vivia Nikolaidou, Wim -Taymans, wngecn, Wonchul Lee, wuchang li, Xavier Claessens, Xi Ruoyao, -Yoshiharu Hirose, Zhao, +Ádám Balázs, Adam Doupe, Adrian Fiergolski, Adrian Perez de Castro, Alba +Mendez, Aleix Conchillo Flaqué, Aleksandr Slobodeniuk, Alicia Boya +García, Alireza Miryazdi, Andoni Morales Alastruey, Andrew Pritchard, +Arun Raghavan, A. Wilcox, Bastian Krause, Bastien Nocera, Benjamin +Gaignard, Bill Hofmann, Bo Elmgreen, Boyuan Zhang, Brad Hards, Branko +Subasic, Bruce Liang, Bunio FH, byran77, Camilo Celis Guzman, Carlos +Falgueras García, Carlos Rafael Giani, Célestin Marot, Christian Wick, +Christopher Obbard, Christoph Reiter, Chris Wiggins, Chun-wei Fan, Colin +Kinloch, Corentin Damman, Corentin Noël, Damian Hobson-Garcia, Daniel +Almeida, Daniel Morin, Daniel Stone, Daniels Umanovskis, Danny Smith, +David Svensson Fors, Devin Anderson, Diogo Goncalves, Dmitry Osipenko, +Dongil Park, Doug Nazar, Edward Hervey, ekwange, Eli Schwartz, Elliot +Chen, Enrique Ocaña González, Eric Knapp, Erwann Gouesbet, Evgeny +Pavlov, Fabian Orccon, Fabrice Fontaine, Fan F He, F. Duncanh, Filip +Hanes, Florian Zwoch, François Laignel, Fuga Kato, George Kiagiadakis, +Guillaume Desmottes, Gu Yanjie, Haihao Xiang, Haihua Hu, Havard Graff, +Heiko Becker, He Junyan, Henry Hoegelow, Hiero32, Hoonhee Lee, Hosang +Lee, Hou Qi, Hugo Svirak, Ignacio Casal Quinteiro, Ignazio Pillai, Igor +V. 
Kovalenko, Jacek Skiba, Jakub Adam, James Cowgill, James Hilliard, +Jan Alexander Steffens (heftig), Jan Lorenz, Jan Schmidt, Jianhui Dai, +jinsl00000, Johan Sternerup, Jonas Bonn, Jonas Danielsson, Jordan +Petridis, Joseph Donofry, Jose Quaresma, Julian Bouzas, Junsoo Park, +Justin Chadwell, Khem Raj, Krystian Wojtas, László Károlyi, Linus +Svensson, Loïc Le Page, Ludvig Rappe, Marc Leeman, Marek Olejnik, Marek +Vasut, Marijn Suijten, Mark Nauwelaerts, Martin Dørum, Martin Reboredo, +Mart Raudsepp, Mathieu Duponchelle, Matt Crane, Matthew Waters, Matthias +Clasen, Matthias Fuchs, Mengkejiergeli Ba, MGlolenstine, Michael Gruner, +Michiel Konstapel, Mikhail Fludkov, Ming Qian, Mingyang Ma, Myles +Inglis, Nicolas Dufresne, Nirbheek Chauhan, Olivier Crête, Pablo Marcos +Oltra, Patricia Muscalu, Patrick Griffis, Paweł Stawicki, Peter +Stensson, Philippe Normand, Philipp Zabel, Pierre Bourré, Piotr +Brzeziński, Rabindra Harlalka, Rafael Caricio, Rafael Sobral, Rafał +Dzięgiel, Raul Tambre, Robert Mader, Robert Rosengren, Rodrigo +Bernardes, Rouven Czerwinski, Ruben Gonzalez, Sam Van Den Berge, +Sanchayan Maity, Sangchul Lee, Sebastian Dröge, Sebastian Fricke, +Sebastian Groß, Sebastian Mueller, Sebastian Wick, Sergei Kovalev, +Seungha Yang, Seungmin Kim, sezanzeb, Sherrill Lin, Shingo Kitagawa, +Stéphane Cerveau, Talha Khan, Taruntej Kanakamalla, Thibault Saunier, +Tim Mooney, Tim-Philipp Müller, Tomasz Andrzejak, Tom Schuring, Tong Wu, +toor, Tristan Matthews, Tulio Beloqui, U. Artie Eoff, Víctor Manuel +Jáquez Leal, Vincent Cheah Beng Keat, Vivia Nikolaidou, Vivienne +Watermeier, WANG Xuerui, Wojciech Kapsa, Wonchul Lee, Wu Tong, Xabier +Rodriguez Calvar, Xavier Claessens, Yatin Mann, Yeongjin Jeong, Zebediah +Figura, Zhao Zhili, Zhiyuaniu, مهدي شينون (Mehdi Chinoune), … and many others who have contributed bug reports, translations, sent suggestions or helped testing. -Stable 1.20 branch +Stable 1.22 branch -After the 1.20.0 release there will be several 1.20.x bug-fix releases +After the 1.22.0 release there will be several 1.22.x bug-fix releases which will contain bug fixes which have been deemed suitable for a stable branch, but no new features or intrusive changes will be added to -a bug-fix release usually. The 1.20.x bug-fix releases will be made from -the git 1.20 branch, which will be a stable branch. +a bug-fix release usually. The 1.22.x bug-fix releases will be made from +the git 1.22 branch, which will be a stable branch. -1.20.0 +1.22.0 -1.20.0 is scheduled to be released around early February 2022. +1.22.0 was originally released on 23 January 2023. -Schedule for 1.22 +Schedule for 1.24 -Our next major feature release will be 1.22, and 1.21 will be the -unstable development version leading up to the stable 1.22 release. The -development of 1.21/1.22 will happen in the git main branch. +Our next major feature release will be 1.24, and 1.23 will be the +unstable development version leading up to the stable 1.24 release. The +development of 1.23/1.24 will happen in the git main branch of the +GStreamer mono repository. -The plan for the 1.22 development cycle is yet to be confirmed. Assuming -no major project-wide reorganisations in the 1.22 cycle we might try and -aim for a release around August 2022. +The plan for the 1.24 development cycle is yet to be confirmed. -1.22 will be backwards-compatible to the stable 1.20, 1.18, 1.16, 1.14, -1.12, 1.10, 1.8, 1.6, 1.4, 1.2 and 1.0 release series. 
+1.24 will be backwards-compatible to the stable 1.22, 1.20, 1.18, 1.16, +1.14, 1.12, 1.10, 1.8, 1.6, 1.4, 1.2 and 1.0 release series. ------------------------------------------------------------------------ These release notes have been prepared by Tim-Philipp Müller with -contributions from Matthew Waters, Nicolas Dufresne, Nirbheek Chauhan, -Sebastian Dröge and Seungha Yang. +contributions from Edward Hervey, Matthew Waters, Nicolas Dufresne, +Nirbheek Chauhan, Olivier Crête, Sebastian Dröge, Seungha Yang, and +Thibault Saunier. License: CC BY-SA 4.0