+AV1 video codec support improvements
+
+AV1 is a royalty-free next-generation video codec developed by the
+Alliance for Open Media (AOMedia) and a free alternative to H.265/HEVC.
+
+While AV1 was already supported in earlier versions of GStreamer, this
+release brings a lot of improvements across the board:
+
+- Support for hardware encoding and decoding via VAAPI/VA, AMF, D3D11,
+ NVCODEC, QSV and Intel MediaSDK. Hardware codecs for AV1 are slowly
+ becoming available in embedded systems and desktop GPUs (AMD, Intel,
+ NVIDIA), and these can now be used via GStreamer.
+
+- New AV1 RTP payloader and depayloader elements.
+
+- New encoder settings in the AOM reference encoder-based av1enc
+ element.
+
+- Various improvements in the AV1 parser and in the MP4/Matroska/WebM
+ muxers/demuxers.
+
+- dav1d- and rav1e-based software decoder/encoder elements are shipped
+ as part of the binaries.
+
+- Various other bugfixes all over the place.
+
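+As a sketch of the software path, an AV1 file could be decoded with the
+dav1d-based decoder like this (the input file name is a placeholder):
+
+```shell
+# Decode AV1 from a WebM file using the dav1d-based software decoder;
+# hardware decoders (e.g. vaav1dec, nvav1dec) can be swapped in where
+# supported
+gst-launch-1.0 filesrc location=input.webm ! matroskademux ! \
+    av1parse ! dav1ddec ! videoconvert ! autovideosink
+```
+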
+Touchscreen event support in Navigation API
+
+The Navigation API supports sending key press events and mouse events
+through a GStreamer pipeline. Typically a video sink picks up these
+events as they happen and transmits them into the pipeline, where they
+can be handled by elements inside the pipeline if the application has
+not handled them already.
+
+This has traditionally been used for DVD menu support, but can also be
+used to forward such inputs to source elements that render a web page
+using a browser engine such as WebKit or Chromium.
+
+This API has now gained support for touchscreen events, and this has
+been implemented in various plugins such as the GTK, Qt, XV, and x11
+video sinks as well as the wpevideosrc element.
+
+GStreamer CUDA integration
+
+- New gst-cuda library
+- Integration with D3D11 and NVIDIA dGPU NVMM elements
+- New cudaconvertscale element
+
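+As a sketch, the new cudaconvertscale element can do format conversion
+and scaling in a single GPU pass (the format and resolution here are
+arbitrary examples):
+
+```shell
+# Upload to CUDA memory, convert + scale in one pass, download again
+gst-launch-1.0 videotestsrc ! cudaupload ! cudaconvertscale ! \
+    'video/x-raw(memory:CUDAMemory),format=NV12,width=1280,height=720' ! \
+    cudadownload ! videoconvert ! autovideosink
+```
+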
+GStreamer Direct3D11 integration
+
+- New gst-d3d11 public library
+ - The gst-d3d11 library is not integrated into the GStreamer
+ documentation system yet; please refer to the examples for now
+- d3d11screencapture: Add Windows Graphics Capture API based capture
+ mode, including Win32 application window capturing
+- d3d11videosink and d3d11convert now support flip/rotation and crop
+ metas
+- d3d11videosink: New emit-present property and present signal so that
+ applications can overlay an image on Direct3D11 swapchain’s
+ backbuffer via Direct3D/Direct2D APIs. See also C++ and Rust
+ examples
+- d3d11compositor supports YUV blending/composing without intermediate
+ RGB(A) conversion to improve performance
+- Direct3D11 video decoders have been promoted to GST_RANK_PRIMARY or
+ higher, except for the MPEG-2 decoder
+
+H.264/H.265 timestamp correction elements
+
+- Muxers are often picky and need proper PTS/DTS timestamps set on
+ their input buffers, which can be a problem if the encoded input
+ media stream comes from a source that doesn’t provide proper DTS
+ signalling, as is often the case for RTP, RTSP and WebRTC streams or
+ Matroska container files. In theory parsers should be able to fix
+ this up, but that would probably require fairly invasive changes in
+ the parsers, so in the meantime two new elements, h264timestamper
+ and h265timestamper, bridge the gap and can reconstruct missing
+ PTS/DTS.
+
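+A typical use looks like the following sketch, which records an RTSP
+H.264 stream into MP4 (the RTSP URL is a placeholder):
+
+```shell
+# h264timestamper reconstructs the missing DTS before mp4mux;
+# -e sends EOS on Ctrl-C so the MP4 file is finalised properly
+gst-launch-1.0 -e rtspsrc location=rtsp://example.com/stream ! \
+    rtph264depay ! h264parse ! h264timestamper ! mp4mux ! \
+    filesink location=out.mp4
+```
+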
+Easy sender timestamp reconstruction for RTP and RTSP
+
+- It was always possible to reconstruct and retrieve the original RTP
+ sender timestamps in GStreamer, but doing so required a fair bit of
+ understanding of the internal mechanisms as well as the right
+ property configuration and clock setup.
+
+- rtspsrc and rtpjitterbuffer gained a new
+ “add-reference-timestamp-meta” property that, if enabled, puts the
+ reconstructed original absolute sender timestamps on the output
+ buffers via a meta. This is particularly useful if the sender is
+ synced to an NTP or PTP clock. The original sender timestamps are
+ based on either RTCP NTP times, NTP RTP header extensions (RFC6051)
+ or RFC7273-style clock signalling.
+
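+As a sketch (with a placeholder RTSP URL), enabling the property looks
+like this; an application would then read the reconstructed timestamps
+from each buffer via gst_buffer_get_reference_timestamp_meta():
+
+```shell
+# Attach reconstructed absolute sender timestamps as metas on the
+# depayloaded output buffers
+gst-launch-1.0 rtspsrc location=rtsp://example.com/stream \
+    add-reference-timestamp-meta=true ! rtph264depay ! h264parse ! \
+    avdec_h264 ! videoconvert ! autovideosink
+```
+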
+Qt6 support
+
+- new qml6glsink element for Qt6, similar to the existing qmlglsink
+ element for Qt5. Matching source and overlay elements will hopefully
+ follow in the near future.
+
+OpenGL + Video library enhancements
+
+- Support for new video formats (NV12_4L4, NV12_16L32S, NV12_8L128,
+ NV12_10BE_8L128) and dmabuf import in more formats (Y410, Y212_LE,
+ Y212_BE, Y210, NV21, NV61)
+
+- Improved support for tiled formats with arbitrary tile dimensions,
+ as needed by certain hardware decoders/encoders
+
+- glvideomixer: New “crop-left”, “crop-right”, “crop-top” and
+ “crop-bottom” pad properties for cropping inputs
+
+- OpenGL support for gst_video_convert_sample():
+
+ - Used for video snapshotting and thumbnailing, to convert buffers
+ retrieved from appsinks or a sink’s “last-sample” property into
+ JPEG/PNG thumbnails.
+ - This function can now take samples and buffers backed by GL
+ textures as input and will automatically plug a gldownload
+ element in that case.
+
+High bit-depth support (10, 12, 16 bits per component value) improvements
+
+- compositor can now handle any supported input format and also mix
+ high bit-depth (10-16 bit) formats (naively)
+
+- videoflip has gained support for higher bit depth formats.
+
+- vp9enc and vp9dec now support 12-bit formats and also 10-bit 4:4:4
+
+WebRTC
+
+- Allow insertion of bandwidth estimation elements e.g. for Google
+ Congestion Control (GCC) support
+
+- Initial support for sending or receiving simulcast streams
+
+- Support for asynchronous host resolution for STUN/TURN servers
+
+- GstWebRTCICE was split into base classes and implementation to make
+ it possible to plug custom ICE implementations
+
+- webrtcsink: batteries-included WebRTC sender (Rust)
+
+- whipsink: WebRTC HTTP ingest (WHIP) to a MediaServer (Rust)
+
+- whepsrc: WebRTC HTTP egress (WHEP) from a MediaServer (Rust)
+
+- Many other improvements and bug fixes
+
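+As a minimal sketch, webrtcsink can stream a test pattern, assuming an
+instance of the accompanying gst-webrtc-signalling-server is running on
+its default local address:
+
+```shell
+# webrtcsink takes care of encoding, congestion control and signalling
+gst-launch-1.0 videotestsrc is-live=true ! videoconvert ! webrtcsink
+```
+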
+New HLS, DASH and MSS adaptive streaming clients
+
+A new set of “adaptive demuxers” to support HLS, DASH and MSS adaptive
+streaming protocols has been added. They provide improved performance,
+new features and better stream compatibility compared to the previous
+elements. These new elements require a “streams-aware” pipeline such as
+playbin3, uridecodebin3 or urisourcebin.
+
+The previous elements’ design prevented implementing several use-cases
+and fixing long-standing issues. The new elements were re-designed from
+scratch to tackle those:
+
+- Scheduling: only three threads are used, regardless of the number
+ of streams selected. One is in charge of downloading fragments and
+ manifests, one of outputting parsed data downstream, and one of
+ scheduling. This improves performance, resource usage and latency.
+
+- Better download control: the elements now directly control the
+ scheduling and download of manifests and fragments using libsoup,
+ instead of depending on external elements for downloading.
+
+- Stream selection: only the selected streams are downloaded, which
+ improves bandwidth usage. Streams are switched in a way that ensures
+ there are no gaps, meaning a new stream is only switched to once
+ enough of its data has been downloaded.
+
+- Internal parsing: the downloaded streams are parsed internally.
+ This allows the elements to fully respect the various specifications
+ and to offer accurate buffering, seeking and playback. This is
+ especially important for HLS streams, which require parsing for
+ proper positioning of streams.
+
+- Buffering and adaptive rate switching: the new elements handle
+ buffering internally, which gives them more accurate visibility of
+ which bandwidth variant to switch to.
+
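+Since the new clients are plugged automatically by streams-aware
+pipelines, playback is unchanged from the application’s point of view
+(a placeholder HLS URL is used here):
+
+```shell
+# playbin3 provides the required "streams-aware" pipeline; the new
+# HLS client element is plugged automatically for .m3u8 streams
+gst-launch-1.0 playbin3 uri=https://example.com/master.m3u8
+```
+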
+Playbin3, Decodebin3, UriDecodebin3, Parsebin improvements
+
+The “new” playback elements introduced in 1.18 (playbin3 and its various
+components) have been refactored to allow more use-cases and improve
+performance. They are no longer considered experimental, so applications
+using the legacy playback elements (playbin and (uri)decodebin) can
+migrate to the new components to benefit from these improvements.
+
+- Gapless: the “gapless” feature allows files and streams to be
+ fetched, buffered and decoded in order to provide gapless output.
+ This feature has been refactored extensively in the new components:
+ - A single (uri)decodebin3 (and therefore a single set of
+ decoders) is used. This improves memory and CPU usage, since a
+ single decoder is reused when consecutive streams use the same
+ codec.
+ - The “next” stream to play will be pre-rolled “just-in-time”
+ thanks to the buffering improvements in urisourcebin (see below)
+ - This feature is now handled at the uridecodebin3 level.
+ Applications that wish to have a “gapless” stream and process it
+ (instead of just outputting it, for example for transcoding,
+ retransmission, …) can now use uridecodebin3 directly. Note that
+ a streamsynchronizer element is required in that case.
+- Buffering improvements: the urisourcebin element is in charge of
+ fetching and (optionally) buffering/downloading the stream. It has
+ been extended and improved:
+ - When the parse-streams property is used (by default in
+ uridecodebin3 and playbin3), compatible streams will be demuxed
+ and parsed (via parsebin) and buffering will be done on the
+ elementary streams. This provides a more accurate handling of
+ buffering. Previously buffering was done on a best-effort basis
+ and was mostly wrong (i.e. downloading more than needed).
+ - Applications can use urisourcebin with this property as a
+ convenient way of getting elementary streams from a given URI.
+ - Elements can handle buffering themselves (such as the new
+ adaptive demuxers) by answering the GST_QUERY_BUFFERING query.
+ In that case urisourcebin will not handle it.
+- Stream Selection: efficient stream selection was previously only
+ possible within decodebin3. The downside was that upstream elements
+ had to provide all the streams from which to choose, which is
+ inefficient. With the addition of the GST_QUERY_SELECTABLE query,
+ stream selection can now be handled by elements further upstream
+ (i.e. sources).
+ - Elements that can handle stream selection internally (such as
+ the new adaptive demuxer elements) answer that query, and handle
+ the stream selection events themselves.
+ - In this case, decodebin3 will always process all streams that
+ are provided to it.
+- Instant URI switching: this new feature allows switching URIs
+ “instantly” in playbin3 (and uridecodebin3) without having to change
+ states, similar to switching channels on a television.
+ - If compatible, decoders will be re-used, resulting in lower
+ latency and CPU/memory usage than switching states.
+ - This is enabled by setting the instant-uri property to true,
+ setting the URI to switch to immediately, and then disabling the
+ instant-uri property again afterwards.
+- playbin3, decodebin3, uridecodebin3, parsebin, and urisourcebin are
+ no longer experimental
+ - They were originally marked as a ‘technology preview’ but have
+ since seen extensive use in production settings, and are now
+ considered ready for general use.
+
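+As a sketch (with a placeholder URI), instant URI switching is enabled
+up front; the actual switch then happens at runtime from application
+code, e.g. via g_object_set(), which gst-launch-1.0 itself cannot do:
+
+```shell
+# Start playbin3 with instant URI switching enabled; setting the "uri"
+# property while PLAYING then switches without a state change
+gst-launch-1.0 playbin3 instant-uri=true uri=file:///path/to/first.mp4
+```
+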
+Fraunhofer AAC audio encoder HE-AAC and AAC-LD profile support
+
+- fdkaacenc:
+ - Support for encoding to HE-AACv1 and HE-AACv2 profile
+ - Support for encoding to AAC Low Delay (LD) profile
+ - Advanced bitrate control options via new “rate-control”,
+ “vbr-preset”, “peak-bitrate”, and “afterburner” properties
+
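+An encoding sketch using one of the new properties might look like this
+(the property value is illustrative, not a recommendation):
+
+```shell
+# Encode a test tone to AAC in an MP4 container with the Fraunhofer
+# encoder's quality-improving "afterburner" feature enabled
+gst-launch-1.0 -e audiotestsrc num-buffers=500 ! audioconvert ! \
+    fdkaacenc afterburner=true ! mp4mux ! filesink location=out.m4a
+```
+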
+RTP rapid synchronization support in the RTP stack (RFC6051)
+
+RTP provides several mechanisms by which streams can be synchronized
+relative to each other, and by which absolute sender times for RTP
+packets can be obtained. One of these mechanisms is RTCP, which has the
+disadvantage that the synchronization information is only distributed
+out-of-band, and usually only some time after the start.
+
+GStreamer’s RTP stack, specifically the rtpbin, rtpsession and
+rtpjitterbuffer elements, now also have support for retrieving and
+sending the same synchronization information in-band via RTP header
+extensions according to RFC6051 (Rapid Synchronisation of RTP Flows).
+Only 64-bit timestamps are supported currently.
+
+This provides per packet synchronization information from the very
+beginning of a stream and allows accurate inter-stream, and (depending
+on setup) inter-device, synchronization at the receiver side.
+
+ONVIF XML Timed Metadata support
+
+The ONVIF standard implemented by various security cameras also
+specifies a format for timed metadata that is transmitted together with
+the audio/video streams, usually over RTSP.
+
+Support for this timed metadata is implemented in the MP4 demuxer now as
+well as the new fragmented MP4 muxer and the new non-fragmented MP4
+muxer from the GStreamer Rust plugins. Additionally, the new onvif
+plugin ‒ which is part of the GStreamer Rust plugins ‒ provides general
+elements for handling the metadata and e.g. overlaying certain parts of
+it over a video stream.
+
+As part of this work, support for absolute UTC times was also
+implemented in the corresponding elements, according to the
+requirements of the ONVIF standards.
+
+MP3 gapless playback support
+
+While MP3 can probably be considered a legacy format at this point, a
+new feature for it was still added with this release.
+
+When playing back plain MP3 files, i.e. outside a container format,
+switches between files can now be completely gapless if the required
+metadata is provided inside the file. There is no standardized metadata
+for this, but the LAME MP3 encoder writes metadata that the
+mpegaudioparse element can now parse and forward to decoders, ensuring
+that padding samples at the start and end of MP3 files are removed.
+
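+As a sketch, playing a LAME-encoded file through mpegaudioparse now
+exercises this metadata path (the file name is a placeholder):
+
+```shell
+# mpegaudioparse reads the LAME gapless metadata and forwards it to
+# the decoder so padding samples can be dropped
+gst-launch-1.0 filesrc location=song.mp3 ! mpegaudioparse ! \
+    mpg123audiodec ! audioconvert ! autoaudiosink
+```
+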
+“force-live” property for audio + video aggregators
+
+This is a quality of life fix for playout and streaming applications
+where it is common to have audio and video mixer elements that should
+operate in live mode from the start and produce output continuously.
+
+Often a pipeline is started without any inputs hooked up to these
+mixers, and until now there was no easy way to force these elements
+into live mode from the start: one had to add an initial live video or
+audio test source as a dummy input to achieve this.
+
+The new “force-live” property makes these audio and video aggregators
+start in live mode without the need for any dummy inputs, which is
+useful for scenarios where inputs are only added after starting the
+pipeline.
+
+This new property should usually be used together with the
+“min-upstream-latency” property, i.e. you should then also set a
+non-zero minimum upstream latency.
+
+This is now supported in all GstAudioAggregator and GstVideoAggregator
+subclasses such as audiomixer, audiointerleave, compositor,
+glvideomixer, d3d11compositor, etc.
+
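+As a sketch with an arbitrary 100ms minimum upstream latency (in a real
+playout application the inputs would typically only be requested and
+linked once the pipeline is already PLAYING):
+
+```shell
+# compositor starts in live mode and keeps producing output
+gst-launch-1.0 videotestsrc is-live=true ! \
+    compositor force-live=true min-upstream-latency=100000000 ! \
+    videoconvert ! autovideosink
+```
+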
+New elements and plugins
+
+- new cudaconvertscale element that can convert and scale in one pass
+
+- new gtkwaylandsink element, based on gtksink but similar to
+ waylandsink: it uses Wayland APIs directly instead of rendering with
+ Gtk/Cairo primitives. Like gtksink, this element only supports Gtk3.
+
+- new h264timestamper and h265timestamper elements to reconstruct
+ missing PTS/DTS from inputs that might not provide them, such as
+ RTP/RTSP/WebRTC inputs (see above)
+
+- mfaacdec, mfmp3dec: Windows MediaFoundation AAC and MP3 decoders
+
+- new msdkav1enc AV1 video encoder element
+
+- new nvcudah264enc, nvcudah265enc, nvd3d11h264enc, and nvd3d11h265enc
+ NVIDIA GPU encoder elements to support zero-copy encoding, via CUDA
+ and Direct3D11 APIs, respectively
+
+- new nvautogpuh264enc and nvautogpuh265enc NVIDIA GPU encoder
+ elements: these “auto GPU” elements automatically select a target
+ GPU instance when multiple NVIDIA desktop GPUs are present, also
+ taking the input memory into account. On Windows, CUDA or Direct3D11
+ mode is likewise determined automatically by the elements. These new
+ elements are useful if the target GPU and/or API mode (either CUDA
+ or Direct3D11 in the case of Windows) cannot be determined from the
+ encoder’s point of view at the time the pipeline is configured, so
+ that lazy target GPU and/or API selection is required in order to
+ avoid unnecessary memory copy operations.
+
+- new nvav1dec AV1 NVIDIA desktop GPU decoder element
+
+- new qml6glsink element to render video with Qt6
+
+- qsv: New Intel oneVPL/MediaSDK (a.k.a. Intel Quick Sync) based
+ decoder and encoder elements, with gst-d3d11 (on Windows) and gst-va
+ (on Linux) integration
+
+ - Supports multi-GPU environments, for example concurrent video
+ encoding using an Intel iGPU and dGPU in a single pipeline
+ - H.264 / H.265 / VP9 and JPEG decoders
+ - H.264 / H.265 / VP9 / AV1 / JPEG encoders with dynamic encoding
+ bitrate updates
+ - The new plugin does not require an external SDK for building on
+ Windows
+
+- vulkanoverlaycompositor: new Vulkan overlay compositor element to
+ overlay upstream GstVideoOverlayCompositionMeta onto the video
+ stream.