-# GStreamer 1.12 Release Notes
-GStreamer 1.12.0 was originally released on 4th May 2017.
-The GStreamer team is proud to announce a new major feature release in the
-stable 1.x API series of your favourite cross-platform multimedia framework!
+GSTREAMER 1.14 RELEASE NOTES
-As always, this release is again packed with new features, bug fixes and other
-improvements.
-See [https://gstreamer.freedesktop.org/releases/1.12/][latest] for the latest
-version of this document.
-
-*Last updated: Thursday 4 May 2017, 11:00 UTC [(log)][gitlog]*
-
-[latest]: https://gstreamer.freedesktop.org/releases/1.12/
-[gitlog]: https://cgit.freedesktop.org/gstreamer/www/log/src/htdocs/releases/1.12/release-notes-1.12.md
-
-## Introduction
-
-The GStreamer team is proud to announce a new major feature release in the
-stable 1.x API series of your favourite cross-platform multimedia framework!
-
-As always, this release is again packed with new features, bug fixes and other
-improvements.
-
-## Highlights
-
-- new `msdk` plugin for Intel's Media SDK for hardware-accelerated video
- encoding and decoding on Intel graphics hardware on Windows or Linux.
-
-- `x264enc` can now use multiple x264 library versions compiled for different
- bit depths at runtime, to transparently provide support for multiple bit
- depths.
-
-- `videoscale` and `videoconvert` now support multi-threaded scaling and
- conversion, which is particularly useful with higher resolution video.
-
-- `h264parse` will now automatically insert AU delimiters if needed when
- outputting byte-stream format, which improves standard compliance and
- is needed in particular for HLS playback on iOS/macOS.
-
-- `rtpbin` has acquired bundle support for incoming streams.
-
-## Major new features and changes
-
-### Noteworthy new API
-
-- The video library gained support for a number of new video formats:
-
- - `GBR_12LE`, `GBR_12BE`, `GBRA_12LE`, `GBRA_12BE` (planar 4:4:4 RGB/RGBA, 12 bits per channel)
- - `GBRA_10LE`, `GBRA_10BE` (planar 4:4:4:4 RGBA, 10 bits per channel)
- - `GBRA` (planar 4:4:4:4 ARGB, 8 bits per channel)
- - `I420_12BE`, `I420_12LE` (planar 4:2:0 YUV, 12 bits per channel)
- - `I422_12BE`, `I422_12LE` (planar 4:2:2 YUV, 12 bits per channel)
- - `Y444_12BE`, `Y444_12LE` (planar 4:4:4 YUV, 12 bits per channel)
- - `VYUY` (another packed 4:2:2 YUV format)
-
-- The high-level `GstPlayer` API was extended with functions for taking video
- snapshots and enabling accurate seeking. It can optionally also use the
- still-experimental `playbin3` element now.
-
-### New Elements
-
-- msdk: new plugin for Intel's Media SDK for hardware-accelerated video encoding
- and decoding on Intel graphics hardware on Windows or Linux. This includes
- an H.264 encoder/decoder (`msdkh264dec`, `msdkh264enc`),
- an H.265 encoder/decoder (`msdkh265dec`, `msdkh265enc`),
- an MJPEG decoder/encoder (`msdkmjpegdec`, `msdkmjpegenc`),
- an MPEG-2 video encoder (`msdkmpeg2enc`) and a VP8 encoder (`msdkvp8enc`).
-
-- `iqa` is a new Image Quality Assessment plugin based on [DSSIM][dssim],
- similar to the old (unported) videomeasure element.
-
-- The `faceoverlay` element, which allows you to overlay SVG graphics over
- a detected face in a video stream, has been ported from 0.10.
-
-- our `ffmpeg` wrapper plugin now exposes/maps the ffmpeg Opus audio decoder
- (`avdec_opus`) as well as the GoPro CineForm HD / CFHD decoder (`avdec_cfhd`),
- and also a parser/writer for the IVF format (`avdemux_ivf` and `avmux_ivf`).
-
-- `audiobuffersplit` is a new element that splits raw audio buffers into
- equal-sized buffers.
-
-- `audiomixmatrix` is a new element that mixes N:M audio channels according to
- a configured mix matrix.
-
-- The `timecodewait` element got renamed to `avwait` and can operate in
- different modes now.
-
-- The `opencv` video processing plugin has gained a new `dewarp` element that
- dewarps fisheye images.
+GStreamer 1.14.0 has not been released yet. It is scheduled for release
+in early March 2018.
-- `ttml` is a new plugin for parsing and rendering subtitles in Timed Text
- Markup Language (TTML) format. For the time being these elements will not
- be autoplugged during media playback however, unless the `GST_TTML_AUTOPLUG=1`
- environment variable is set. Only the EBU-TT-D profile is supported at this
- point.
+There are unstable pre-releases available for testing and development
+purposes. The latest pre-release is version 1.13.91 (rc2) and was
+released on 12 March 2018.
-[dssim]: https://github.com/pornel/dssim
-
-### New element features and additions
-
-- `x264enc` can now use multiple x264 library versions compiled for different
- bit depths at runtime, to transparently provide support for multiple bit
- depths. A new configure parameter `--with-x264-libraries` has been added to
- specify additional paths to look for additional x264 libraries to load.
- The background is that a given build of the libx264 library always supports
- just one specific bit depth, so the `x264enc` element used to support only
- the depth of the library it was linked against. Now multiple depths can be
- supported at the same time.
-
-- `x264enc` also picks up the interlacing mode automatically from the input
- caps now and passes interlacing/TFF information correctly to the library.
-
-- `videoscale` and `videoconvert` now support multi-threaded scaling and
- conversion, which is particularly useful with higher resolution video.
- This has to be enabled explicitly via the `"n-threads"` property.
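-
-  As a sketch, multi-threaded conversion and scaling might be enabled in a
-  `gst-launch-1.0` pipeline like this (the thread count of 4 and the 4K
-  output size are just illustrative values):
-
-      gst-launch-1.0 videotestsrc num-buffers=100 ! videoconvert n-threads=4 ! \
-          videoscale n-threads=4 ! video/x-raw,width=3840,height=2160 ! fakesink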
-
-- `videorate`'s new `"rate"` property lets you set a speed factor
- on the output stream.
-
-- `splitmuxsink`'s buffer collection and scheduling was rewritten to make
- processing and splitting deterministic; before it was possible for a buffer
- to end up in a different file chunk in different runs. `splitmuxsink` also
- gained a new `"format-location-full"` signal that works just like the existing
- `"format-location"` signal, except that it is also passed the primary stream's
- first buffer as argument, so that it is possible to construct the file name
- based on metadata such as the buffer timestamp or any GstMeta attached to
- the buffer. The new `"max-size-timecode"` property allows for timecode-based
- splitting. `splitmuxsink` will now also automatically start a new file if the
- input caps change in an incompatible way.
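-
-  For example, time-based splitting might look like this as a
-  `gst-launch-1.0` sketch (the file name pattern and the 10-second limit
-  are just illustrative values):
-
-      gst-launch-1.0 videotestsrc num-buffers=900 ! x264enc ! \
-          splitmuxsink location=/tmp/chunk%05d.mp4 max-size-time=10000000000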
-
-- `fakesink` has a new `"drop-out-of-segment"` property which can be set to
- FALSE to make it keep out-of-segment buffers instead of dropping them,
- which is useful for debugging purposes.
-
-- `identity` gained a `"ts-offset"` property.
-
-- both `fakesink` and `identity` now also print what kind of metas are attached
- to buffers when printing buffer details via the `"last-message"` property
- used by `gst-launch-1.0 -v`.
-
-- multiqueue: made `"min-interleave-time"` a configurable property.
-
-- video nerds will be thrilled to know that `videotestsrc`'s snow is now
- deterministic. `videotestsrc` also gained some new properties to make the
- ball pattern based on system time, and invert colours each second
- (`"animation-mode"`, `"motion"`, and `"flip"` properties).
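-
-  A possible `gst-launch-1.0` sketch (the enum value names used here are
-  assumptions and may differ; check `gst-inspect-1.0 videotestsrc` for the
-  actual values):
-
-      gst-launch-1.0 videotestsrc pattern=ball animation-mode=wall-time \
-          motion=sweep flip=true ! videoconvert ! autovideosink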
-
-- `oggdemux` reverse playback should work again now. You're welcome.
-
-- `playbin3` and `urisourcebin` now have buffering enabled by default, and
- buffering message aggregation was fixed.
-
-- `tcpclientsrc` now has a `"timeout"` property.
-
-- `appsink` has gained support for buffer lists. For backwards compatibility
- reasons users need to enable this explicitly with `gst_app_sink_set_buffer_list_support()`,
- however. Once activated, a pulled `GstSample` can contain either a buffer
- list or a single buffer.
-
-- `splitmuxsrc` reverse playback was fixed and handling of sparse streams, such
- as subtitle tracks or metadata tracks, was improved.
-
-- `matroskamux` has acquired support for muxing G722 audio; it also marks all
- buffers as keyframes now when streaming only audio, so that `tcpserversink`
- will behave properly with audio-only streams.
-
-- `qtmux` gained support for ProRes 4444 XQ, HEVC/H.265 and CineForm (GoPro) formats,
- and generally writes more video stream-related metadata into the track headers.
- It also allows configuration of the maximum interleave size in bytes and
- time now. For fragmented mp4 we always write the `tfdt` atom now as required
- by the DASH spec.
-
-- `qtdemux` supports FLAC, xvid, mp2, S16L and CineForm (GoPro) tracks now, and
- generally tries harder to extract more video-related information from track
- headers, such as colorimetry or interlacing details. It also received a
- couple of fixes for the scenario where upstream operates in TIME format and
- feeds chunks to qtdemux (e.g. DASH or MSE).
-
-- `audioecho` has two new properties to apply a delay only to certain channels
- to create a surround effect, rather than an echo on all channels. This is
- useful when upmixing from stereo, for example. The `"surround-delay"` property
- enables this, and the `"surround-mask"` property controls which channels
- are considered surround sound channels in this case.
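-
-  As a sketch, a surround-style delay on a 4-channel stream might look like
-  this (the delay value is just an illustrative choice):
-
-      gst-launch-1.0 audiotestsrc ! audioconvert ! audio/x-raw,channels=4 ! \
-          audioecho surround-delay=true delay=500000000 ! \
-          audioconvert ! autoaudiosink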
-
-- `webrtcdsp` gained various new properties for gain control and also exposes
- voice activity detection now, in which case it will post `"voice-activity"`
- messages on the bus whenever the voice detection status changes.
-
-- The `decklink` capture elements for Blackmagic Decklink cards have seen a
- number of improvements:
-
- - `decklinkvideosrc` will post a warning message on "no signal" and an info
- message when the signal lock has been (re)acquired. There is also a new
- read-only `"signal"` property that can be used to query the signal lock
- status. The `GAP` flag will be set on buffers that are captured without
- a signal lock. The new `"drop-no-signal-frames"` property will make
- `decklinkvideosrc` drop all buffers that have been captured without an
- input signal. The
- `"skip-first-time"` property will make the source drop the first few
- buffers, which is handy since some devices will at first output buffers
- with the wrong resolution before they manage to figure out the right input
- format and decide on the actual output caps.
-
- - `decklinkaudiosrc` supports more than just 2 audio channels now.
-
- - The capture sources no longer use the "hardware" timestamps, which turned
- out to be useless, and instead just use the pipeline clock directly.
-
-- `srtpdec` now also has a read-only `"stats"` property, just like `srtpenc`.
-
-- `rtpbin` gained RTP bundle support, as used by e.g. WebRTC. The first
- rtpsession will have an `rtpssrcdemux` element inside splitting the streams
- based on their SSRC and potentially dispatching them to different rtpsessions.
- Because retransmission SSRCs need to be merged with the corresponding media
- stream the `::on-bundled-ssrc` signal is emitted on `rtpbin` so that the
- application can find out to which session the SSRC belongs.
-
-- `rtprtxqueue` gained two new properties exposing retransmission
- statistics (`"requests"` and `"fulfilled-requests"`).
-
-- `kmssink` will now use the preferred mode for the monitor and render to the
- base plane if nothing else has set a mode yet. This can also be done forcibly
- in any case via the new `"force-modesetting"` property. Furthermore, `kmssink`
- now allows only the supported connector resolutions as input caps in order to
- avoid scaling or positioning of the input stream, as `kmssink` can't know
- whether scaling or positioning would be more appropriate for the use case at
- hand.
-
-- `waylandsink` can now take DMAbuf buffers as input in the presence
- of a compatible Wayland compositor. This enables zero-copy transfer
- from a decoder or source that outputs DMAbuf.
-
-- `udpsrc` can be bound to more than one interface when joining a
- multicast group. This is done by giving a comma-separated list of
- interfaces, such as `multicast-iface="eth0,eth1"`.
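-
-  For example (the interface names are system-specific, of course):
-
-      gst-launch-1.0 udpsrc address=224.1.1.4 port=5004 \
-          multicast-iface="eth0,eth1" ! fakesink dump=true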
-
-### Plugin moves
-
-- `dataurisrc` moved from gst-plugins-bad to core
-
-- The `rawparse` plugin containing the `rawaudioparse` and `rawvideoparse`
- elements moved from gst-plugins-bad to gst-plugins-base. These elements
- supersede the old `videoparse` and `audioparse` elements. They work the
- same, with just some minor API changes. The old legacy elements still
- exist in gst-plugins-bad, but may be removed at some point in the future.
-
-- `timecodestamper` is an element that attaches time codes to video buffers
- in form of `GstVideoTimeCodeMeta`s. It had a `"clock-source"` property
- which has now been removed because it was fairly useless in practice. It
- gained some new properties however: the `"first-timecode"` property can
- be used to set the initial timecode; alternatively `"first-timecode-to-now"`
- can be set, and then the current system time at the time the first buffer
- arrives is used as base time for the time codes.
-
-
-### Plugin removals
-
-- The `mad` mp1/mp2/mp3 decoder plugin was removed from gst-plugins-ugly,
- as libmad is GPL licensed, has been unmaintained for a very long time, and
- there are better alternatives available. Use the `mpg123audiodec` element
- from the `mpg123` plugin in gst-plugins-ugly instead, or `avdec_mp3` from
- the `gst-libav` module which wraps the ffmpeg library. We expect that we
- will be able to move mp3 decoding to gst-plugins-good in the next cycle
- seeing that most patents around mp3 have expired recently or are about to
- expire.
-
-- The `mimic` plugin was removed from gst-plugins-bad. It contained a decoder
- and encoder for a video codec used by MSN messenger many many years ago (in
- a galaxy far far away). The underlying library is unmaintained and no one
- really needs to use this codec any more. Recorded videos can still be played
- back with the MIMIC decoder in gst-libav.
-
-## Miscellaneous API additions
-
-- Request pad name templates passed to `gst_element_request_pad()` may now
- contain multiple specifiers, such as e.g. `src_%u_%u`.
-
-- [`gst_buffer_iterate_meta_filtered()`][buffer-iterate-meta-filtered] is a
- variant of `gst_buffer_iterate_meta()` that only returns metas of the
- requested type and skips all other metas.
-
-- [`gst_pad_task_get_state()`][pad-task-get-state] gets the current state of
- a task in a thread-safe way.
-
-- [`gst_uri_get_media_fragment_table()`][uri-get-fragment-table] provides the
- media fragments of a URI as a table of key=value pairs.
-
-- [`gst_print()`][print], [`gst_println()`][println], [`gst_printerr()`][printerr],
- and [`gst_printerrln()`][printerrln] can be used to print to stdout or stderr.
- These functions are similar to `g_print()` and `g_printerr()` but they also
- support all the additional format specifiers provided by the GStreamer
- logging system, such as e.g. `GST_PTR_FORMAT`.
-
-- a `GstParamSpecArray` has been added, for elements that want to have
- array-type properties, such as the `audiomixmatrix` element. There are
- also two new functions to set and get properties of this type from bindings:
- - `gst_util_set_object_array()`
- - `gst_util_get_object_array()`
-
-- various helper functions have been added to make it easier to set or get
- GstStructure fields containing caps-style array or list fields from language
- bindings (which usually support GValueArray but don't know about the GStreamer
- specific fundamental types):
- - [`gst_structure_get_array()`][get-array]
- - [`gst_structure_set_array()`][set-array]
- - [`gst_structure_get_list()`][get-list]
- - [`gst_structure_set_list()`][set-list]
-
-- a new ['dynamic type' registry factory type][dynamic-type] was added to
- register dynamically loadable GType types. This is useful for automatically
- loading enum/flags types that are used in caps, such as for example the
- `GstVideoMultiviewFlagsSet` type used in multiview video caps.
-
-- there is a new [`GstProxyControlBinding`][proxy-control-binding] for use
- with GstController. This allows proxying the control interface from one
- property on one GstObject to another property (of the same type) in another
- GstObject. So e.g. in parent-child relationship, one may need to call
- `gst_object_sync_values()` on the child and have a binding (set elsewhere)
- on the parent update the value. This is used in `glvideomixer` and `glsinkbin`
- for example, where `sync_values()` on the child pad or element will call
- `sync_values()` on the exposed bin pad or element.
-
- Note that this doesn't solve GObject property forwarding, that must
- be taken care of by the implementation manually or using GBinding.
-
-- `gst_base_parse_drain()` has been made public for subclasses to use.
-
-- `gst_base_sink_set_drop_out_of_segment()` can be used by subclasses to
- prevent GstBaseSink from dropping buffers that fall outside of the segment.
-
-- [`gst_calculate_linear_regression()`][calc-lin-regression] is a new utility
- function to calculate a linear regression.
-
-- [`gst_debug_get_stack_trace()`][get-stack-trace] is an easy way to retrieve a
- stack trace, which can be useful in tracer plugins.
-
-- allocators: the dmabuf allocator is now sub-classable, and there is a new
- `GST_CAPS_FEATURE_MEMORY_DMABUF` define.
-
-- video decoder subclasses can use the newly-added function
- `gst_video_decoder_allocate_output_frame_with_params()` to
- pass a `GstBufferPoolAcquireParams` to the buffer pool for
- each buffer allocation.
-
-- the video time code API has gained a dedicated [`GstVideoTimeCodeInterval`][timecode-interval]
- type plus related API, including functions to add intervals to timecodes.
-
-- There is a new `libgstbadallocators-1.0` library in gst-plugins-bad, which
- may go away again in future releases once the `GstPhysMemoryAllocator`
- interface API has been validated by more users and was moved to
- `libgstallocators-1.0` from gst-plugins-base.
-
-[timecode-interval]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-libs/html/gst-plugins-base-libs-gstvideo.html#gst-video-time-code-interval-new
-[buffer-iterate-meta-filtered]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstBuffer.html#gst-buffer-iterate-meta-filtered
-[pad-task-get-state]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstPad.html#gst-pad-task-get-state
-[uri-get-fragment-table]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstUri.html#gst-uri-get-media-fragment-table
-[print]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstInfo.html#gst-print
-[println]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstInfo.html#gst-println
-[printerr]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstInfo.html#gst-printerr
-[printerrln]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstInfo.html#gst-printerrln
-[get-array]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstStructure.html#gst-structure-get-array
-[set-array]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstStructure.html#gst-structure-set-array
-[get-list]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstStructure.html#gst-structure-get-list
-[set-list]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstStructure.html#gst-structure-set-list
-[dynamic-type]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstDynamicTypeFactory.html
-[proxy-control-binding]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer-libs/html/gstreamer-libs-GstProxyControlBinding.html
-[calc-lin-regression]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstUtils.html#gst-calculate-linear-regression
-[get-stack-trace]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstUtils.html#gst-debug-get-stack-trace
-
-### GstPlayer
-
-New API has been added to:
-
- - get the number of audio/video/subtitle streams:
- - `gst_player_media_info_get_number_of_streams()`
- - `gst_player_media_info_get_number_of_video_streams()`
- - `gst_player_media_info_get_number_of_audio_streams()`
- - `gst_player_media_info_get_number_of_subtitle_streams()`
-
- - enable accurate seeking: `gst_player_config_set_seek_accurate()`
- and `gst_player_config_get_seek_accurate()`
-
- - get a snapshot image of the video in RGBx, BGRx, JPEG, PNG or
- native format: [`gst_player_get_video_snapshot()`][snapshot]
-
- - select a specific video sink element
- ([`gst_player_video_overlay_video_renderer_new_with_sink()`][renderer-with-vsink])
-
- - If the environment variable `GST_PLAYER_USE_PLAYBIN3` is set, GstPlayer will
- use the still-experimental `playbin3` element and the `GstStreams` API for
- playback.
-
-[snapshot]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad-libs/html/gst-plugins-bad-libs-gstplayer.html#gst-player-get-video-snapshot
-[renderer-with-vsink]: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad-libs/html/gst-plugins-bad-libs-gstplayer-videooverlayvideorenderer.html#gst-player-video-overlay-video-renderer-new-with-sink
-
-## Miscellaneous changes
-
-- video caps for interlaced video may contain an optional `"field-order"` field
- now in the case of `interlaced-mode=interleaved` to signal that the field
- order is always the same throughout the stream. This is useful to signal to
- muxers such as mp4mux. The new field is parsed from/to `GstVideoInfo` of course.
-
-- video decoder and video encoder base classes try harder to proxy
- interlacing, colorimetry and chroma-site related fields in caps properly.
-
-- The buffer stored in the `PROTECTION` events is now left unchanged. This is a
- change of behaviour compared to 1.8, especially for the mssdemux element,
- which used to base64-decode the data wrapped in the protection events it
- emitted.
-
-- `PROTECTION` events can now be injected into the pipeline from the application;
- source elements deriving from GstBaseSrc will forward those downstream now.
-
-- The DASH demuxer is now correctly parsing the MSPR-2.0 ContentProtection nodes
- and emits Protection events accordingly. Applications relying on those events
- might need to decode the base64 data stored in the event buffer before using it.
-
-- The registry can now also be disabled by setting the environment variable
- `GST_REGISTRY_DISABLE=yes`, with similar effect as the `GST_DISABLE_REGISTRY`
- compile time switch.
+See https://gstreamer.freedesktop.org/releases/1.14/ for the latest
+version of this document.
-- Seeking performance with gstreamer-vaapi based decoders was improved:
- previously the decoder and surfaces were recreated on every seek, which
- could be quite slow.
+_Last updated: Monday 12 March 2018, 18:00 UTC (log)_
-- more robust handling of input caps changes in videoaggregator-based elements
- such as `compositor`.
-- Lots of adaptive streaming-related fixes across the board (DASH, MSS, HLS). Also:
+Introduction
- - `mssdemux`, the Microsoft Smooth Streaming demuxer, has seen various
- fixes for live streams, duration reporting and seeking.
+The GStreamer team is proud to announce a new major feature release in
+the stable 1.x API series of your favourite cross-platform multimedia
+framework!
- - The DASH manifest parser now extracts MS PlayReady ContentProtection objects
- from manifests and sends them downstream as `PROTECTION` events. It also
- supports multiple Period elements in external xml now.
+As always, this release is again packed with new features, bug fixes and
+other improvements.
-- gst-libav was updated to ffmpeg 3.3 but should still work with any 3.x
- version.
-- GstEncodingProfile has been generally enhanced so it can, for
- example, be used to get possible profiles for a given file
- extension. It is now possible to define profiles based on element
- factory names or using a path to a `.gep` file containing a
- serialized profile.
+Highlights
-- `audioconvert` can now do endianness conversion in-place. All other
- conversions still require a copy, but e.g. sign conversion and a few others
- could also be implemented in-place now.
+- WebRTC support: real-time audio/video streaming to and from web
+ browsers
-- The new, experimental `playbin3` and `urisourcebin` elements got many
- bugfixes and improvements and should generally be closer to a full
- replacement of the old elements.
+- Experimental support for the next-gen royalty-free AV1 video codec
-- `interleave` now supports > 64 channels.
+- Video4Linux: encoding support, stable element names and faster
+ device probing
-### OpenGL integration
+- Support for the Secure Reliable Transport (SRT) video streaming
+ protocol
-- As usual the GStreamer OpenGL integration library has seen numerous
- fixes and performance improvements all over the place, and is hopefully
- ready now to become API stable and be moved to gst-plugins-base during the
- 1.14 release cycle.
+- RTP Forward Error Correction (FEC) support (ULPFEC)
-- The GStreamer OpenGL integration layer has also gained support for the
- Vivante EGL FB windowing system, which improves performance on platforms
- such as Freescale iMX.6 for those who are stuck with the proprietary driver.
- The `qmlglsink` element also supports this now if Qt is used with eglfs or
- wayland backend, and it works in conjunction with [gstreamer-imx][gstreamer-imx]
- of course.
+- RTSP 2.0 support in rtspsrc and gst-rtsp-server
-- various `qmlglsrc` improvements
+- ONVIF audio backchannel support in gst-rtsp-server and rtspsrc
-[gstreamer-imx]: https://github.com/Freescale/gstreamer-imx
+- playbin3 gapless playback and pre-buffering support
-## Tracing framework and debugging improvements
+- tee, our stream splitter/duplication element, now does allocation
+ query aggregation which is important for efficient data handling and
+ zero-copy
-- New tracing hooks have been added to track GstMiniObject and GstObject
- ref/unref operations.
+- QuickTime muxer has a new prefill recording mode that allows file
+ import in Adobe Premiere and FinalCut Pro while the file is still
+ being written.
-- The memory leaks tracer can optionally use this to retrieve stack traces if
- enabled with e.g. `GST_TRACERS=leaks(filters="GstEvent,GstMessage",stack-traces-flags=full)`
+- rtpjitterbuffer fast-start mode and timestamp offset adjustment
+ smoothing
-- The `GST_DEBUG_FILE` environment variable, which can be used to write the
- debug log output to a file instead of printing it to stderr, can now contain
- a name pattern, which is useful for automated testing and continuous
- integration systems. The following format specifiers are supported:
+- souphttpsrc connection sharing, which allows for connection reuse,
+ cookie sharing, etc.
- - `%p`: will be replaced with the PID
- - `%r`: will be replaced with a random number, which is useful for instance
- when running two processes with the same PID but in different containers.
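-
-  For example, to write a per-process log file:
-
-      GST_DEBUG=4 GST_DEBUG_FILE=/tmp/gst-%p-%r.log \
-          gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink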
+- nvdec: new plugin for hardware-accelerated video decoding using the
+ NVIDIA NVDEC API
-## Tools
+- Adaptive DASH trick play support
-- `gst-inspect-1.0` can now list elements by type with the new `--types`
- command-line option, e.g. `gst-inspect-1.0 --types=Audio/Encoder` will
- show a list of audio encoders.
+- ipcpipeline: new plugin that allows splitting a pipeline across
+ multiple processes
-- `gst-launch-1.0` and `gst_parse_launch()` have gained a new operator (`:`)
- that allows linking all pads between two elements. This is useful in cases
- where the exact number of pads or type of pads is not known beforehand, such
- as in the `uridecodebin : encodebin` scenario, for example. In this case,
- multiple links will be created if the encodebin has multiple profiles
- compatible with the output of uridecodebin.
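-
-  As a sketch (the input URI and the encodebin profile string here are
-  hypothetical; any serialized encoding profile would do):
-
-      gst-launch-1.0 uridecodebin uri=file:///tmp/input.mp4 : \
-          encodebin profile="application/ogg:video/x-theora:audio/x-vorbis" ! \
-          filesink location=/tmp/output.ogg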
+- Major gobject-introspection annotation improvements for large parts
+ of the library API
-- `gst-device-monitor-1.0` now shows a `gst-launch-1.0` snippet for each
- device that shows how to make use of it in a `gst-launch-1.0` pipeline string.
-## GStreamer RTSP server
+Major new features and changes
-- The RTSP server now also supports Digest authentication in addition to Basic
- authentication.
+WebRTC support
+
+There is now basic support for WebRTC in GStreamer in the form of a new
+webrtcbin element and a webrtc support library. This allows you to build
+applications that set up connections with and stream to and from other
+WebRTC peers, whilst leveraging all of the usual GStreamer features such
+as hardware-accelerated encoding and decoding, OpenGL integration,
+zero-copy and embedded platform support. And it's easy to build and
+integrate into your application too!
+
+WebRTC enables real-time communication of audio, video and data with web
+browsers and native apps, and it is supported or about to be supported by
+recent versions of all major browsers and operating systems.
+
+GStreamer's new WebRTC implementation uses libnice for Interactive
+Connectivity Establishment (ICE) to figure out the best way to
+communicate with other peers, punch holes into firewalls, and traverse
+NATs.
-- The `GstRTSPClient` class has gained a `pre-*-request` signal and virtual
- method for each client request type, emitted at the beginning of each RTSP
- request. These signals or virtual methods let the application validate the
- requests, configure the media/stream in a certain way and also generate error
- status codes in case of an error or a bad request.
+The implementation is not complete, but all the basics are there, and
+the code sticks fairly close to the PeerConnection API. Where
+functionality is missing it should be fairly obvious where it needs to
+go.
-## GStreamer VAAPI
+For more details, background and example code, check out Nirbheek's blog
+post _GStreamer has grown a WebRTC implementation_, as well as Matthew's
+_GStreamer WebRTC_ talk from last year's GStreamer Conference in Prague.
+
+New Elements
+
+- webrtcbin handles the transport aspects of WebRTC connections (see
+ WebRTC section above for more details)
+
+- New srtsink and srtsrc elements for the Secure Reliable Transport
+ (SRT) video streaming protocol, which aims to be easy to use whilst
+ striking a new balance between reliability and latency for low
+ latency video streaming use cases. More details about SRT and the
+ implementation in GStreamer in Olivier's blog post _SRT in
+ GStreamer_.
+
+- av1enc and av1dec elements providing experimental support for the
+  next-generation royalty-free AV1 video codec, alongside Matroska
+ support for it.
+
+- hlssink2 is a rewrite of the existing hlssink element, but unlike
+ its predecessor hlssink2 takes elementary streams as input and
+ handles the muxing to MPEG-TS internally. It also leverages
+ splitmuxsink internally to do the splitting. This allows more
+ control over the chunk splitting and sizing process and relies less
+  on the co-operation of an upstream muxer. Unlike the old
+ hlssink it also works with pre-encoded streams and does not require
+ close interaction with an upstream encoder element.
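+
+  A possible gst-launch-1.0 sketch (the property names here are those of
+  the old hlssink; the locations and durations are just illustrative
+  values):
+
+      gst-launch-1.0 videotestsrc ! x264enc ! h264parse ! \
+          hlssink2 target-duration=5 max-files=5 \
+          location=/tmp/segment%05d.ts playlist-location=/tmp/playlist.m3u8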
+
+- audiolatency is a new element for measuring audio latency end-to-end
+  and is useful for measuring round-trip latency, including both the
+ GStreamer-internal latency as well as latency added by external
+ components or circuits.
+
+- fakevideosink is basically a null sink for video data and very
+ similar to fakesink, only that it will answer allocation queries and
+  will advertise support for various video-specific things such as
+ GstVideoMeta, GstVideoCropMeta and GstVideoOverlayCompositionMeta
+ like a normal video sink would. This is useful for throughput
+ testing and testing the zero-copy path when creating a new pipeline.
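+
+  For example, to measure raw decode throughput without a real display
+  sink (a sketch; the input file is hypothetical and any decoder
+  pipeline would do):
+
+      gst-launch-1.0 filesrc location=/tmp/test.mp4 ! qtdemux ! h264parse ! \
+          avdec_h264 ! fakevideosink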
+
+- ipcpipeline: new plugin that allows the splitting of a pipeline into
+ multiple processes. Usually a GStreamer pipeline runs in a single
+ process and parallelism is achieved by distributing workloads using
+  multiple threads. This means, however, that all elements in the pipeline
+  have access to all the other elements' memory space, including
+  that of any libraries used. For security reasons one might therefore
+ want to put sensitive parts of a pipeline such as DRM and decryption
+ handling into a separate process to isolate it from the rest of the
+ pipeline. This can now be achieved with the new ipcpipeline plugin.
+ Check out George's blog post _ipcpipeline: Splitting a GStreamer
+ pipeline into multiple processes_ or his lightning talk from last
+ year's GStreamer Conference in Prague for all the gory details.
+
+
+- proxysink and proxysrc are new elements to pass data from one
+ pipeline to another within the same process, very similar to the
+ existing inter elements, but not limited to raw audio and video
+ data. These new proxy elements are very special in how they work
+ under the hood, which makes them extremely powerful, but also
+ dangerous if not used with care. The reason for this is that it's
+ not just data that's passed from sink to src, but these elements
+ basically establish a two-way wormhole that passes through queries
+ and events in both directions, which means caps negotiation and
+ allocation query driven zero-copy can work through this wormhole.
+ There are scheduling considerations as well: proxysink forwards
+ everything into the proxysrc pipeline directly from the proxysink
+ streaming thread. There is a queue element inside proxysrc to
+ decouple the source thread from the sink thread, but that queue is
+ not unlimited, so it is entirely possible that the proxysink
+ pipeline thread gets stuck in the proxysrc pipeline, e.g. when that
+ pipeline is paused or stops consuming data for some other reason.
+ This means that one should always shut down the proxysrc
+ pipeline before shutting down the proxysink pipeline, for example.
+ Or at least take care when shutting down pipelines. Usually this is
+ not a problem though, especially not in live pipelines. For more
+ information see Nirbheek's blog post _Decoupling GStreamer
+ Pipelines_, and also check out the new ipcpipeline plugin for
+ sending data from one process to another process (see above).
+
+- lcms is a new LCMS-based ICC color profile correction element
+
+- openmptdec is a new OpenMPT-based decoder for module music formats,
+ such as S3M, MOD, XM, IT. It is built on top of a new
+ GstNonstreamAudioDecoder base class which aims to unify handling of
+ files which do not lend themselves to a streaming model. The wildmidi plugin
+ has also been revived and is also implemented on top of this new
+ base class.
+
+- The curl plugin has gained a new curlhttpsrc element, which is
+ useful for testing HTTP protocol version 2.0 amongst other things.
+
+Noteworthy new API
+
+- GstPromise provides future/promise-like functionality. This is used
+ in the GStreamer WebRTC implementation.
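GstPromise is conceptually a future: one side waits for (or is notified of) a reply that another side delivers asynchronously. A minimal Python sketch of that pattern, illustrative only and not the GStreamer C API:

```python
import threading

class Promise:
    """Toy promise: one producer replies once, consumers block until then."""
    def __init__(self):
        self._event = threading.Event()
        self._reply = None

    def reply(self, value):
        # Deliver the result and wake up any waiters.
        self._reply = value
        self._event.set()

    def wait(self):
        # Block until a reply arrives (cf. gst_promise_wait()).
        self._event.wait()
        return self._reply

# A producer thread answers the promise asynchronously.
p = Promise()
threading.Thread(target=lambda: p.reply("answer")).start()
result = p.wait()
```

The real GstPromise additionally supports change callbacks, interruption and expiry, which this sketch omits.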
+
+
+- GstReferenceTimestampMeta is a new meta that allows you to attach
+ additional reference timestamps to a buffer. These timestamps don't
+ have to relate to the pipeline clock in any way. Examples of this
+ could be an NTP timestamp when the media was captured, a frame
+ counter on the capture side or the (local) UNIX timestamp when the
+ media was captured. The decklink elements make use of this.
+
+
+- GstVideoRegionOfInterestMeta: it's now possible to attach generic
+ free-form element-specific parameters to a region of interest meta,
+ for example to tell a downstream encoder to use certain codec
+ parameters for a certain region.
+
+
+- gst_bus_get_pollfd can be used to obtain a file descriptor for the
+ bus that can be poll()-ed on for new messages. This is useful for
+ integration with non-GLib event loops.
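The point of exposing a file descriptor is that a non-GLib event loop can watch the bus alongside its own fds. A Python sketch of the pattern, using a plain pipe as a hypothetical stand-in for the fd that gst_bus_get_pollfd() would return:

```python
import os
import select

# Stand-in for the bus fd: the read end becomes readable when a
# "message" is posted (in GStreamer this happens on bus messages).
rfd, wfd = os.pipe()

poller = select.poll()
poller.register(rfd, select.POLLIN)   # add the bus fd to our own loop

os.write(wfd, b"x")                   # something posts a bus message

events = poller.poll(1000)            # our event loop wakes up
got_message = any(fd == rfd for fd, _ in events)
os.close(rfd)
os.close(wfd)
```

With the real API one would poll the fd for readability and then pop messages with gst_bus_pop() as usual.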
+
+
+- gst_get_main_executable_path() can be used by wrapper plugins that
+ need to find things in the directory where the application
+ executable is located. In the same vein,
+ GST_PLUGIN_DEPENDENCY_FLAG_PATHS_ARE_RELATIVE_TO_EXE can be used to
+ signal that plugin dependency paths are relative to the main
+ executable.
+
+- pad templates can be told about the GType of the pad subclass
+ via newly-added GstPadTemplate API or the
+ gst_element_class_add_static_pad_template_with_gtype() convenience
+ function. gst-inspect-1.0 will use this information to print pad
+ properties.
+
+
+- new convenience functions to iterate over element pads without using
+ the GstIterator API: gst_element_foreach_pad(),
+ gst_element_foreach_src_pad(), and gst_element_foreach_sink_pad().
+
+
+- GstBaseSrc and appsrc have gained support for buffer lists:
+ GstBaseSrc subclasses can use gst_base_src_submit_buffer_list(), and
+ applications can use gst_app_src_push_buffer_list() to push a buffer
+ list into appsrc.
+
+
+- The GstHarness unit test harness has a couple of new convenience
+ functions to retrieve all pending data in the harness in form of a
+ single chunk of memory.
+
+
+- GstAudioStreamAlign is a new helper object for audio elements that
+ handles discontinuity detection and sample alignment. It will align
+ samples after the previous buffer's samples, but keep track of the
+ divergence between buffer timestamps and sample position (jitter).
+ If it exceeds a configurable threshold the alignment will be reset.
+ This simply factors out code that was duplicated in a number of
+ elements into a common helper API.
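The logic described above can be modelled roughly as follows. This is an illustrative Python sketch of the behaviour, not the GstAudioStreamAlign API; the sample rate and drift threshold are made-up values:

```python
RATE = 48000          # sample rate (assumption for the example)
THRESHOLD = 0.1       # seconds of drift tolerated before resyncing

class StreamAlign:
    def __init__(self):
        self.next_sample = None   # expected position of the next buffer

    def process(self, timestamp, num_samples):
        """Return (aligned_position, resynced) for one input buffer."""
        expected = timestamp * RATE
        if self.next_sample is None:
            self.next_sample = expected
        # Jitter: divergence between timestamp and sample position.
        drift = abs(expected - self.next_sample) / RATE
        resync = drift > THRESHOLD
        if resync:
            # Too much accumulated jitter: realign to the timestamp.
            self.next_sample = expected
        pos = self.next_sample
        self.next_sample += num_samples
        return pos, resync

align = StreamAlign()
p0, r0 = align.process(0.0, 480)      # first buffer: position 0
p1, r1 = align.process(0.0101, 480)   # ~10 ms, small jitter: kept aligned
p2, r2 = align.process(0.5, 480)      # large gap: triggers a resync
```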
+
+
+- The GstVideoEncoder base class implements Quality of Service (QoS)
+ now. This is disabled by default and must be opted in by setting the
+ "qos" property, which will make the base class gather statistics
+ about the real-time performance of the pipeline from downstream
+ elements (usually sinks that sync the pipeline clock). Subclasses
+ can then make use of this by checking whether input frames are
+ already late using gst_video_encoder_get_max_encode_time(). If they
+ are, they can just drop them and skip encoding in the hope that the
+ pipeline will catch up.
+
+
+- The GstVideoOverlay interface gained a few helper functions for
+ installing and handling a "render-rectangle" property on elements
+ that implement this interface, so that this functionality can also
+ be used from the command line for testing and debugging purposes.
+ The property wasn't added to the interface itself as that would
+ require all implementors to provide it which would not be
+ backwards-compatible.
+
+
+- A new base class, GstNonstreamAudioDecoder for non-stream audio
+ decoders was added to gst-plugins-bad. This base-class is meant to
+ be used for audio decoders that require the whole stream to be
+ loaded first before decoding can start. Examples of this are module
+ formats (MOD/S3M/XM/IT/etc), C64 SID tunes, video console music
+ files (GYM/VGM/etc), MIDI files and others. The new openmptdec
+ element is based on this.
+
+
+- Full list of API new in 1.14:
+- GStreamer core API new in 1.14
+- GStreamer base library API new in 1.14
+- gst-plugins-base libraries API new in 1.14
+- gst-plugins-bad: no list, mostly GstWebRTC library and new
+ non-stream audio decoder base class.
+
+New RTP features and improvements
+
+- rtpulpfecenc and rtpulpfecdec are new elements that implement
+ Generic Forward Error Correction (FEC) using Uneven Level Protection
+ (ULP) as described in RFC 5109. This can be used to protect against
+ certain types of (non-bursty) packet loss, and important packets
+ such as those containing codec configuration data or key frames can
+ be protected with higher redundancy. Equally, packets that are not
+ particularly important can be given low priority or not be protected
+ at all. If packets are lost, the receiver can then hopefully restore
+ the lost packet(s) from the surrounding packets which were received.
+ This is an alternative to, or rather complementary to, dealing with
+ packet loss using _retransmission (rtx)_. GStreamer has had
+ retransmission support for a long time, but Forward Error Correction
+ allows for different trade-offs: The advantage of Forward Error
+ Correction is that it doesn't add latency, whereas retransmission
+ requires at least one more roundtrip to request and hopefully
+ receive lost packets; Forward Error Correction increases the
+ required bandwidth however, even in situations where there is no
+ packet loss at all, so one will typically want to fine-tune the
+ overhead and mechanisms used based on the characteristics of the
+ link at the time.
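The core idea of generic FEC is to send parity packets computed over a group of media packets so that a single loss can be recovered without a retransmission roundtrip. ULPFEC as per RFC 5109 is considerably more involved (uneven protection levels, header recovery), but the XOR recovery at its heart can be sketched as:

```python
def xor_packets(packets):
    """XOR equal-length payloads together to form a parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

# Sender: protect a group of packets with one parity packet.
group = [b"pkt1", b"pkt2", b"pkt3"]
fec = xor_packets(group)

# Receiver: the middle packet was lost. XORing the surviving
# packets with the parity packet reconstructs it.
received = [group[0], group[2]]
recovered = xor_packets(received + [fec])
```

This also illustrates the bandwidth trade-off: the parity packet is sent whether or not anything is lost.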
+
+- New _Redundant Audio Data (RED)_ encoders and decoders for RTP as
+ per RFC 2198 are also provided (rtpredenc and rtpreddec), mostly for
+ chrome webrtc compatibility, as chrome will wrap ULPFEC-protected
+ streams in RED packets, and such streams need to be wrapped and
+ unwrapped in order to use ULPFEC with chrome.
+
+
+- a few new buffer flags for FEC support:
+ GST_BUFFER_FLAG_NON_DROPPABLE can be used to mark important buffers,
+ e.g. to flag RTP packets carrying keyframes or codec setup data for
+ RTP Forward Error Correction purposes, or to prevent still video
+ frames from being dropped by elements due to QoS. There already is a
+ GST_BUFFER_FLAG_DROPPABLE. GST_RTP_BUFFER_FLAG_REDUNDANT is used to
+ signal internally that a packet represents a redundant RTP packet
+ and used in rtpstorage to hold back the packet and use it only for
+ recovery from packet loss. Further work is still needed in
+ payloaders to make use of these.
+
+- rtpbin now has an option for increasing timestamp offsets gradually:
+ Instant large changes to the internal ts_offset may cause timestamps
+ to move backwards and also cause visible glitches in media playback.
+ The new "max-ts-offset-adjustment" and "max-ts-offset" properties
+ let the application control the rate to apply changes to ts_offset.
+ There have also been some EOS/BYE handling improvements in rtpbin.
+
+- rtpjitterbuffer has a new fast start mode: in many scenarios the
+ jitter buffer will have to wait for the full configured latency
+ before it can start outputting packets. The reason for that is that
+ it often can't know what the sequence number of the first expected
+ RTP packet is, so it can't know whether a packet earlier than the
+ earliest packet received will still arrive in future. This behaviour
+ can now be bypassed by setting the "faststart-min-packets" property
+ to the number of consecutive packets needed to start, and the jitter
+ buffer will start outputting packets as soon as it has N consecutive
+ packets queued internally. This is particularly useful to get a
+ first video frame decoded and rendered as quickly as possible.
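The effect of the property can be pictured as: instead of waiting out the full configured latency, start as soon as N packets with consecutive sequence numbers have been queued. An illustrative sketch of that check (not the element's actual code):

```python
def can_fast_start(seqnums, min_packets):
    """True once min_packets packets with consecutive sequence
    numbers have been queued (cf. "faststart-min-packets")."""
    run = 1
    for prev, cur in zip(seqnums, seqnums[1:]):
        # A gap in the sequence numbers resets the run length.
        run = run + 1 if cur == prev + 1 else 1
        if run >= min_packets:
            return True
    return run >= min_packets

# A gap at seqnum 101 prevents an early start...
blocked = can_fast_start([100, 102, 103], 3)
# ...but three consecutive packets allow it.
started = can_fast_start([100, 102, 103, 104], 3)
```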
+
+- rtpL8pay and rtpL8depay provide RTP payloading and depayloading for
+ 8-bit raw audio
+
+New element features
+
+- playbin3 has gained support for gapless playback via the
+ "about-to-finish" signal where users can set the uri for the next
+ item to play. For non-live streams this will be emitted as soon as
+ the first uri has finished downloading, so with sufficiently large
+ buffers it is now possible to pre-buffer the next item well ahead of
+ time (unlike playbin where there would not be a lot of time between
+ "about-to-finish" emission and the end of the stream). If the stream
+ format of the next stream is the same as that of the previous
+ stream, the data will be concatenated via the concat element.
+ Whether this will result in true gaplessness depends on the
+ container format and codecs used, there might still be codec-related
+ gaps between streams with some codecs.
+
+- tee now does allocation query aggregation, which is important for
+ zero-copy and efficient data handling, especially for video. Those
+ who want to drop allocation queries on purpose can use the identity
+ element's new "drop-allocation" property for that instead.
+
+- audioconvert now has a "mix-matrix" property, which obsoletes the
+ audiomixmatrix element. There's also mix matrix support in the audio
+ conversion and channel mixing API.
+
+- x264enc: new "insert-vui" property to disable VUI (Video Usability
+ Information) parameter insertion into the stream, which allows
+ creation of streams that are compatible with certain legacy hardware
+ decoders that will refuse to decode in certain combinations of
+ resolution and VUI parameters; the max. allowed number of B-frames
+ was also increased from 4 to 16.
+
+- dvdlpcmdec: has gained support for Blu-Ray audio LPCM.
+
+- appsrc has gained support for buffer lists (see above) and also seen
+ some other performance improvements.
+
+- flvmux has been ported to the GstAggregator base class which means
+ it can work in defined-latency mode with live input sources and
+ continue streaming if one of the inputs stops producing data.
+
+- jpegenc has gained a "snapshot" property just like pngenc to make it
+ easier to just output a single encoded frame.
+
+- jpegdec will now handle interlaced MJPEG streams properly and also
+ handle frames without an End of Image marker better.
+
+- v4l2: There are now video encoders for VP8, VP9, MPEG4, and H263.
+ The v4l2 video decoder handles dynamic resolution changes, and the
+ video4linux device provider now does much faster device probing. The
+ plugin also no longer uses the libv4l2 library by default, as it has
+ prevented a lot of interesting use cases like CREATE_BUFS, DMABuf,
+ usage of TRY_FMT. As the libv4l2 library is totally inactive and not
+ really maintained, we decided to disable it. This might affect a
+ small number of cheap/old webcams with custom vendor formats for
+ which we do not provide conversion in GStreamer. It is possible to
+ re-enable support for libv4l2 at run-time however, by setting the
+ environment variable GST_V4L2_USE_LIBV4L2=1.
+
+- rtspsrc now has support for RTSP protocol version 2.0 as well as
+ ONVIF audio backchannels (see below for more details). It also
+ sports a new "accept-certificate" signal for "manually" checking a
+ TLS certificate for validity. It now also prints RTSP/SDP messages
+ to the GStreamer debug log instead of stdout.
+
+- shout2send now uses non-blocking I/O and has a configurable network
+ operations timeout.
+
+- splitmuxsink has gained a "split-now" action signal and new
+ "alignment-threshold" and "use-robust-muxing" properties. If robust
+ muxing is enabled, it will check and set the muxer's reserved space
+ properties if present. This is primarily for use with mp4mux's
+ robust muxing mode.
+
+- qtmux has a new _prefill recording mode_ which sets up a moov header
+ with the correct sample positions beforehand, which then allows
+ software like Adobe Premiere and FinalCut Pro to import the files
+ while they are still being written to. This only works with constant
+ framerate I-frame only streams, and for now only support for ProRes
+ video and raw audio is implemented but adding new codecs is just a
+ matter of defining appropriate maximum frame sizes.
+
+- qtmux also supports writing of svmi atoms with stereoscopic video
+ information now. Trak timescales can be configured on a per-stream
+ basis using the "trak-timescale" property on the sink pads. Various
+ new formats can be muxed: MPEG layer 1 and 2, AC3 and Opus, as well
+ as PNG and VP9.
+
+- souphttpsrc now does connection sharing by default, shares its
+ SoupSession with other elements in the same pipeline via a
+ GstContext if possible (session-wide settings are all the defaults).
+ This allows for connection reuse, cookie sharing, etc. Applications
+ can also force a specific context to be used. In other news, HTTP headers
+ received from the server are posted as element messages on the bus
+ now for easier diagnostics, and it's also possible now to use other
+ types of proxy servers such as SOCKS4 or SOCKS5 proxies, support for
+ which is implemented directly in gio. Before only HTTP proxies were
+ allowed.
+
+- qtmux, mp4mux and matroskamux will now refuse caps changes of input
+ streams at runtime. This isn't really supported with these
+ containers (or would have to be implemented differently with a
+ considerable effort) and doesn't produce valid and spec-compliant
+ files that will play everywhere. So if you can't guarantee that the
+ input caps won't change, use a container format that does support on
+ the fly caps changes for a stream such as MPEG-TS or use
+ splitmuxsink which can start a new file when the caps change. What
+ would happen before is that e.g. rtph264depay or rtph265depay would
+ simply send new SPS/PPS inband even for AVC format, which would then
+ get muxed into the container as if nothing changed. Some decoders
+ will handle this just fine, but that's often more by luck than by
+ design. In any case, it's not right, so we disallow it now.
+
+- matroskamux has Table of Contents (TOC) support now (chapters etc.)
+ and matroskademux TOC support has been improved. matroskademux has
+ also seen seeking improvements searching for the right cluster and
+ position.
+
+- videocrop now uses GstVideoCropMeta if downstream supports it, which
+ means cropping can be handled more efficiently without any copying.
+
+- compositor now has support for _crossfade blending_, which can be
+ used via the new "crossfade-ratio" property on the sink pads.
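Crossfade blending mixes corresponding pixels of the two inputs according to the ratio. A sketch of the arithmetic involved, operating on raw byte values for illustration (the real element works on whole video frames):

```python
def crossfade(frame_a, frame_b, ratio):
    """Blend two equal-size frames: ratio 0.0 -> all A, 1.0 -> all B."""
    return bytes(
        round(a * (1.0 - ratio) + b * ratio)
        for a, b in zip(frame_a, frame_b)
    )

black = bytes([0, 0, 0])
white = bytes([255, 255, 255])
mid = crossfade(black, white, 0.5)   # halfway point of the fade
```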
+
+- The avwait element has a new "end-timecode" property and posts
+ "avwait-status" element messages now whenever avwait starts or stops
+ passing through data (e.g. because target-timecode and end-timecode
+ respectively have been reached).
+
+
+- h264parse and h265parse will try harder to make upstream output the
+ same caps as downstream requires or prefers, thus avoiding
+ unnecessary conversion. The parsers also expose chroma format and
+ bit depth in the caps now.
+
+- The dtls elements no longer rely on or require the application to
+ run a GLib main loop that iterates the default main context
+ (GStreamer plugins should never rely on the application running a
+ GLib main loop).
+
+- openh264enc now allows changing the encoding bitrate dynamically at
+ runtime
+
+- nvdec is a new plugin for hardware-accelerated video decoding using
+ the NVIDIA NVDEC API (which replaces the old VDPAU API which is no
+ longer supported by NVIDIA)
+
+- The NVIDIA NVENC hardware-accelerated video encoders now support
+ dynamic bitrate and preset reconfiguration and support the I420
+ 4:2:0 video format. It's also possible to configure the gop size via
+ the new "gop-size" property.
+
+- The MPEG-TS muxer and demuxer (tsmux, tsdemux) now have support for
+ JPEG2000
+
+- openjpegdec and jpeg2000parse support 2-component images now (gray
+ with alpha), and jpeg2000parse has gained limited support for
+ conversion between JPEG2000 stream-formats (JP2, J2C, JPC) and also
+ extracts more details such as colorimetry, interlace-mode,
+ field-order, multiview-mode and chroma siting.
+
+- The decklink plugin for Blackmagic capture and playback cards has
+ seen numerous improvements:
+
+- decklinkaudiosrc and decklinkvideosrc now put hardware reference
+ timestamps on buffers in the form of GstReferenceTimestampMetas.
+ This can be useful to know on multi-channel cards which frames from
+ different channels were captured at the same time.
+
+- decklinkvideosink has gained support for Decklink hardware keying
+ with two new properties ("keyer-mode" and "keyer-level") to control
+ the built-in hardware keyer of Decklink cards.
+
+- decklinkaudiosink has been re-implemented around GstBaseSink instead
+ of the GstAudioBaseSink base class, since the Decklink APIs don't
+ fit very well with the GstAudioBaseSink APIs, which used to cause
+ various problems due to inaccuracies in the clock calculations.
+ Problems were audio drop-outs and A/V sync going wrong after
+ pausing/seeking.
+
+- support for more than 16 devices, without any artificial limit
+
+- work continued on the msdk plugin for Intel's Media SDK which
+ enables hardware-accelerated video encoding and decoding on Intel
+ graphics hardware on Windows or Linux. More tuning options were
+ added, and more pixel formats and video codecs are supported now.
+ The encoder now also handles force-key-unit events and can insert
+ frame-packing SEIs for side-by-side and top-bottom stereoscopic 3D
+ video.
+
+- dashdemux can now do adaptive trick play of certain types of DASH
+ streams, meaning it can do fast-forward/fast-rewind of normal (non-I
+ frame only) streams even at high speeds without saturating network
+ bandwidth or exceeding decoder capabilities. It will keep statistics
+ and skip keyframes or fragments as needed. See Sebastian's blog post
+ _DASH trick-mode playback in GStreamer_ for more details. It also
+ supports webvtt subtitle streams now and has seen improvements when
+ seeking in live streams.
+
+
+- kmssink has seen lots of fixes and improvements in this cycle,
+ including:
+
+- Raspberry Pi (vc4) and Xilinx DRM driver support
+
+- new "render-rectangle" property that can be used from the command
+ line as well as "display-width" and "display-height", and
+ "can-scale" properties
+
+- GstVideoCropMeta support
+
+Plugin and library moves
+
+MPEG-1 audio (mp1, mp2, mp3) decoders and encoders moved to -good
+
+Following the expiration of the last remaining mp3 patents in most
+jurisdictions, and the termination of the mp3 licensing program, as well
+as the decision by certain distros to officially start shipping full mp3
+decoding and encoding support, these plugins should now no longer be
+problematic for most distributors and have therefore been moved from
+-ugly and -bad to gst-plugins-good. Distributors can still disable these
+plugins if desired.
+
+In particular these are:
+
+- mpg123audiodec: an mp1/mp2/mp3 audio decoder using libmpg123
+- lamemp3enc: an mp3 encoder using LAME
+- twolamemp2enc: an mp2 encoder using TwoLAME
+
+GstAggregator moved from -bad to core
+
+GstAggregator has been moved from gst-plugins-bad to the base library in
+GStreamer and is now stable API.
+
+GstAggregator is a new base class for mixers and muxers that have to
+handle multiple input pads and aggregate streams into one output stream.
+It improves upon the existing GstCollectPads API in that it is a proper
+base class which was also designed with live streaming in mind.
+GstAggregator subclasses will operate in a mode with defined latency if
+any of the inputs are live streams. This ensures that the pipeline won't
+stall if any of the inputs stop producing data, and that the configured
+maximum latency is never exceeded.
+
+GstAudioAggregator, audiomixer and audiointerleave moved from -bad to -base
+
+GstAudioAggregator is a new base class for raw audio mixers and muxers
+and is based on GstAggregator (see above). It provides defined-latency
+mixing of raw audio inputs and ensures that the pipeline won't stall
+even if one of the input streams stops producing data.
+
+As part of the move to stabilise the API there were some last-minute API
+changes and clean-ups, but those should mostly affect internal elements.
+
+It is used by the audiomixer element, which is a replacement for
+'adder', which did not handle live inputs very well and did not align
+input streams according to running time. audiomixer should behave much
+better in that respect and generally behave as one would expect in
+most scenarios.
+
+Similarly, audiointerleave replaces the 'interleave' element which did
+not handle live inputs or non-aligned inputs very robustly.
+
+GstAudioAggregator and its subclasses have gained support for input
+format conversion, which does not include sample rate conversion though
+as that would add additional latency. Furthermore, GAP events are now
+handled correctly.
+
+We hope to move the video equivalents (GstVideoAggregator and
+compositor) to -base in the next cycle, i.e. for 1.16.
+
+GStreamer OpenGL integration library and plugin moved from -bad to -base
+
+The GStreamer OpenGL integration library and opengl plugin have moved
+from gst-plugins-bad to -base and are now part of the stable API canon.
+Not all OpenGL elements have been moved; a few had to be left behind in
+gst-plugins-bad in the new openglmixers plugin, because they depend on
+the GstVideoAggregator base class which we were not able to move in this
+cycle. We hope to reunite these elements with the rest of their family
+for 1.16 though.
+
+This is quite a milestone, thanks to everyone who worked to make this
+happen!
+
+Qt QML and GTK plugins moved from -bad to -good
+
+The Qt QML-based qmlgl plugin has moved to -good and provides a
+qmlglsink video sink element as well as a qmlglsrc element. qmlglsink
+renders video into a QQuickItem, and qmlglsrc captures a window from a
+QML view and feeds it as video into a pipeline for further processing.
+Both elements leverage GStreamer's OpenGL integration. In addition to
+the move to -good the following features were added:
+
+- A proxy object is now used for thread-safe access to the QML widget
+ which prevents crashes in corner case scenarios: QML can destroy the
+ video widget at any time, so without this we might be left with a
+ dangling pointer.
+
+- EGL is now supported with the X11 backend, which works e.g. on
+ Freescale imx6
+
+The GTK+ plugin has also moved from -bad to -good. It includes gtksink
+and gtkglsink which both render video into a GtkWidget. gtksink uses
+Cairo for rendering the video, which will work everywhere in all
+scenarios but involves an extra memory copy, whereas gtkglsink fully
+leverages GStreamer's OpenGL integration, but might not work properly in
+all scenarios, e.g. where the OpenGL driver does not properly support
+multiple sharing contexts in different threads. On Linux, Nouveau is
+known to be broken in this respect, whilst NVIDIA's proprietary drivers
+and most other drivers generally work fine, and Intel's driver seems
+to work now; some proprietary embedded Linux drivers don't work;
+macOS works.
+
+GstPhysMemoryAllocator interface moved from -bad to -base
+
+GstPhysMemoryAllocator is a marker interface for allocators with
+physical address backed memory.
+
+Plugin removals
+
+- the sunaudio plugin was removed, since it couldn't ever have been
+ built or used with GStreamer 1.0, but no one even noticed in all
+ these years.
+
+- the schroedinger-based Dirac encoder/decoder plugin has been
+ removed, as there is no longer any upstream or anyone else
+ maintaining it. Seeing that it's quite a fringe codec it seemed best
+ to simply remove it.
+
+API removals
+
+- some MPEG video parser API in the API unstable codecutils library in
+ gst-plugins-bad was removed after having been deprecated for 5
+ years.
+
+
+Miscellaneous changes
+
+- The video support library has gained support for a few new pixel
+ formats:
+- NV16_10LE32: 10-bit variant of NV16, packed into 32bit words (plus 2
+ bits padding)
+- NV12_10LE32: 10-bit variant of NV12, packed into 32bit words (plus 2
+ bits padding)
+- GRAY10_LE32: 10-bit grayscale, packed in 32bit words (plus 2 bits
+ padding)
+
+- decodebin, playbin and GstDiscoverer have seen stability
+ improvements in corner cases such as shutdown while still starting
+ up or shutdown in error cases (hat tip to the oss-fuzz project).
+
+- floating reference handling was inconsistent and has been cleaned up
+ across the board, including annotations. This solves various
+ long-standing memory leaks in language bindings, which e.g. often
+ caused elements and pads to be leaked.
+
+- major gobject-introspection annotation improvements for large parts
+ of the library API, including nullability of return types and
+ function parameters, correct types (e.g. strings vs. filenames),
+ ownership transfer, array length parameters, etc. This allows bigger
+ parts of the GStreamer API to be safely used from dynamic language
+ bindings (e.g. Python, Javascript) and allows static bindings
+ (e.g. C#, Rust, Vala) to autogenerate more API bindings without
+ manual intervention.
+
+OpenGL integration
+
+- The GStreamer OpenGL integration library has moved to
+ gst-plugins-base and is now part of our stable API.
+
+- new MESA3D GBM BACKEND. On devices with working libdrm support, it
+ is possible to use Mesa3D's GBM library to set up an EGL context
+ directly on top of KMS. This makes it possible to use the GStreamer
+ OpenGL elements without a windowing system if a libdrm- and
+ Mesa3D-supported GPU is present.
+
+- The Wayland display is now preferred over X11: as most Wayland
+ compositors support XWayland, the X11 backend would previously get
+ selected even when running under Wayland.
+
+- gldownload can export dmabufs now, and glupload will advertise
+ dmabuf as caps feature.
+
+
+Tracing framework and debugging improvements
+
+- NEW MEMORY RINGBUFFER BASED DEBUG LOGGER, useful for long-running
+ applications or to retrieve diagnostics when encountering an error.
+ The GStreamer debug logging system provides in-depth debug logging
+ about what is going on inside a pipeline. When enabled, debug logs
+ are usually written into a file, printed to the terminal, or handed
+ off to a log handler installed by the application. However, at
+ higher debug levels the volume of debug output quickly becomes
+ unmanageable, which poses a problem in disk-space or bandwidth
+ restricted environments or with long-running pipelines where a
+ problem might only manifest itself after multiple days. In those
+ situations, developers are usually only interested in the most
+ recent debug log output. The new in-memory ringbuffer logger makes
+ this easy: just install it with gst_debug_add_ring_buffer_logger()
+ and retrieve logs with gst_debug_ring_buffer_logger_get_logs() when
+ needed. It is possible to limit the memory usage per thread and set
+ a timeout to determine how long messages are kept around. It was
+ always possible to implement this in the application with a custom
+ log handler of course, this just provides this functionality as part
+ of GStreamer.
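The same idea, a bounded buffer as log sink that only keeps the most recent output, can be sketched in a few lines with Python's logging module. This is purely illustrative of the concept; the GStreamer functions named above are the real API:

```python
import collections
import logging

class RingBufferHandler(logging.Handler):
    """Keep only the most recent max_records formatted log lines."""
    def __init__(self, max_records):
        super().__init__()
        self.buffer = collections.deque(maxlen=max_records)

    def emit(self, record):
        self.buffer.append(self.format(record))

    def get_logs(self):
        # Analogous to gst_debug_ring_buffer_logger_get_logs():
        # dump whatever is still in the ring buffer.
        return list(self.buffer)

log = logging.getLogger("demo")
log.setLevel(logging.DEBUG)
handler = RingBufferHandler(max_records=3)
log.addHandler(handler)
for i in range(10):
    log.debug("message %d", i)
logs = handler.get_logs()   # only the last three messages survive
```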
+
+
+- fakevideosink is a null sink for video data that advertises
+ video-specific metas and behaves like a video sink. See above for
+ more details.
-- GstVaapiDisplay now inherits from GstObject, thus the VA display logging
- messages are better and tracing the context sharing is more readable.
+- gst_util_dump_buffer() prints the content of a buffer to stdout.
-- When uploading raw images into a VA surfaces now VADeriveImages are tried
- fist, improving the upload performance, if it is possible.
+- gst_pad_link_get_name() and gst_state_change_get_name() print pad
+ link return values and state change transition values as strings.
-- The decoders and the post-processor now can push dmabuf-based buffers to
- downstream under certain conditions. For example:
+- The LATENCY TRACER has seen a few improvements: trace records now
+ contain timestamps which is useful to plot things over time, and
+ downstream synchronisation time is now excluded from the measured
+ values.
- `GST_GL_PLATFORM=egl gst-play-1.0 video-sample.mkv --videosink=glimagesink`
+- Miniobject refcount tracing and logging was not entirely
+ thread-safe, there were duplicates or missing entries at times. This
+ has now been made reliable.
-- Refactored the wrapping of VA surface into gstreamer memory, adding lock
- when mapping and unmapping, and many other fixes.
+- The netsim element, which can be used to simulate network jitter,
+ packet reordering and packet loss, received new features and
+ improvements: it can now also simulate network congestion using a
+ token bucket algorithm. This can be enabled via the "max-kbps"
+ property. Packet reordering can be disabled now via the
+ "allow-reordering" property: Reordering of packets is not very
+ common in networks, and the delay functions will always introduce
+ reordering if delay > packet-spacing, so by setting
+ "allow-reordering" to FALSE you guarantee that the packets are in
+ order, while at the same time introducing delay/jitter to them. By
+ using the new "delay-distribution" property the user can control how
+ the delay applied to delayed packets is distributed: This is either
+ the uniform distribution (as before) or the normal distribution; in
+ addition there is also the gamma distribution which simulates the
+ delay on wifi networks better.
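The congestion simulation is based on the classic token bucket: tokens accrue at the configured rate, each packet spends tokens equal to its size, and when the bucket runs dry the packet is held back. A standalone sketch of the algorithm (parameter names and values are illustrative, not the element's properties):

```python
class TokenBucket:
    """Admit packets at a maximum sustained rate with a limited burst."""
    def __init__(self, rate_bytes_per_s, bucket_size):
        self.rate = rate_bytes_per_s
        self.capacity = bucket_size
        self.tokens = bucket_size   # start with a full burst allowance
        self.last = 0.0

    def admit(self, now, packet_size):
        # Refill tokens for the elapsed time, capped at bucket size.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True    # packet passes immediately
        return False       # over budget: delay the packet

tb = TokenBucket(rate_bytes_per_s=1000, bucket_size=1500)
a = tb.admit(0.0, 1200)    # fits in the initial burst allowance
b = tb.admit(0.1, 1200)    # only ~400 tokens left: held back
c = tb.admit(2.0, 1200)    # bucket has refilled in the meantime
```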
-- Now `vaapidecodebin` loads `vaapipostproc` dynamically. It is possible to
- avoid it usage with the environment variable `GST_VAAPI_DISABLE_VPP=1`.
-- Regarding encoders: they have primary rank again, since they can discover,
- in run-time, the color formats they can use for upstream raw buffers and
- caps renegotiation is now possible. Also the encoders push encoding info
- downstream via tags.
+Tools
-- About specific encoders: added constant bit-rate encoding mode for VP8 and
- H265 encoder handles P010_10LE color format.
+- gst-inspect-1.0 now prints pad properties for elements that have pad
+ subclasses with special properties, such as compositor or
+ audiomixer. This only works for elements that use the newly-added
+ GstPadTemplate API or the
+ gst_element_class_add_static_pad_template_with_gtype() convenience
+ function to tell GStreamer about the special pad subclass.
-- Regarding decoders, flush operation has been improved, now the internal VA
- encoder is not recreated at each flush. Also there are several improvements
- in the handling of H264 and H265 streams.
+- gst-launch-1.0 now generates a GStreamer pipeline diagram (.dot
+ file) whenever SIGHUP is sent to it on Linux/*nix systems.
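
Usage might look like this (a sketch; assumes the standard GST_DEBUG_DUMP_DOT_DIR environment variable points at a writable directory, and Graphviz for the optional rendering step):

```shell
# Dump a .dot diagram of a running pipeline by sending it SIGHUP.
# GST_DEBUG_DUMP_DOT_DIR tells GStreamer where to write the file.
export GST_DEBUG_DUMP_DOT_DIR=/tmp/pipeline-dots
mkdir -p "$GST_DEBUG_DUMP_DOT_DIR"

gst-launch-1.0 videotestsrc num-buffers=300 ! videoconvert ! fakesink &
LAUNCH_PID=$!

sleep 2                  # let the pipeline reach PLAYING
kill -HUP "$LAUNCH_PID"  # writes a snapshot .dot file
wait "$LAUNCH_PID"

# The .dot files can then be rendered with Graphviz, e.g.:
#   dot -Tpng "$GST_DEBUG_DUMP_DOT_DIR"/*.dot -o pipeline.png
ls "$GST_DEBUG_DUMP_DOT_DIR"
```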
-- VAAPI plugins try to create their on GstGL context (when available) if they
- cannot find it in the pipeline, to figure out what type of VA Display they
- should create.
+- gst-discoverer-1.0 can now analyse live streams such as rtsp:// URIs
-- Regarding `vaapisink` for X11, if the backend reports that it is unable to
- render correctly the current color format, an internal VA post-processor, is
- instantiated (if available) and converts the color format.
-## GStreamer Editing Services and NLE
+GStreamer RTSP server
-- Enhanced auto transition behaviour
+- Initial support for [RTSP protocol version
+ 2.0][rtsp2-lightning-talk] was added, which is to the best of our
+ knowledge the first RTSP 2.0 implementation ever!
-- Fix some races in `nlecomposition`
+- ONVIF audio backchannel support. This is an extension specified by
+ ONVIF that allows RTSP clients (e.g. a control room operator) to
+ send audio back to the RTSP server (e.g. an IP camera).
+ Theoretically this could also have been done using the RECORD
+ method of the RTSP protocol, but ONVIF chose not to do that, so the
+ backchannel is set up alongside the other streams. Format
+ negotiation needs to be done out of band, if needed. Use the new
+ ONVIF-specific subclasses GstRTSPOnvifServer and
+ GstRTSPOnvifMediaFactory to enable this functionality.
-- Allow building with msvc
+
+- The internal server streaming pipeline is now dynamically
+ reconfigured on PLAY based on the transports needed. This means that
+ the server no longer adds the pipeline plumbing for all possible
+ transports from the start, but only as needed. This
+ improves performance and memory footprint.
-- Added a UNIX manpage for `ges-launch`
+- rtspclientsink has gained an "accept-certificate" signal for
+ manually checking a TLS certificate for validity.
-- API changes:
- - Added ges_deinit (allowing the leak tracer to work properly)
- - Added ges_layer_get_clips_in_interval
- - Finally hide internal symbols that should never have been exposed
+- Fix keep-alive/timeout issue for certain clients using TCP
+ interleave as transport who don't do keep-alive via some other
+ method such as periodic RTSP OPTIONS requests. We now put netaddress
+ metas on the packets from the TCP interleaved stream, so we can map
+ RTCP packets to the right stream in the server and can handle them
+ properly.
-## GStreamer validate
+- Language bindings improvements: in general there were quite a few
+ improvements in the gobject-introspection annotations, but we also
+ extended the permissions API which was not usable from bindings
+ before.
-- Port `gst-validate-launcher` to python 3
+- Fix corner case issue where the wrong mount point was found when
+ there were multiple mount points with a common prefix.
-- `gst-validate-launcher` now checks if blacklisted bugs have been fixed on
- bugzilla and errors out if it is the case
-- Allow building with msvc
+GStreamer VAAPI
-- Add ability for the launcher to run GStreamer unit tests
+- this section will be filled in shortly {FIXME!}
-- Added a way to activate the leaks tracer on our tests and fix leaks
-- Make the http server multithreaded
+GStreamer Editing Services and NLE
-- New testsuite for running various test scenarios on the DASH-IF test vectors
+- this section will be filled in shortly {FIXME!}
-## Build and Dependencies
-- Meson build files are now disted in tarballs, for jhbuild and so distro
- packagers can start using it. Note that the Meson-based build system is not
- 100% feature-equivalent with the autotools-based one yet.
+GStreamer validate
-- Some plugin filenames have been changed to match the plugin names: for example
- the file name of the `encoding` plugin in gst-plugins-base containing the
- `encodebin` element was `libgstencodebin.so` and has been changed to
- `libgstencodebin.so`. This affects only a handful of plugins across modules.
+- this section will be filled in shortly {FIXME!}
- **Developers who install GStreamer from source and just do `make install`**
- **after updating the source code, without doing `make uninstall` first, will**
- **have to manually remove the old installed plugin files from the installation**
- **prefix, or they will get 'Cannot register existing type' critical warnings.**
-- Most of the docbook-based documentation (FAQ, Application Development Manual,
- Plugin Writer's Guide, design documents) has been converted to markdown and
- moved into a new gst-docs module. The gtk-doc library API references and
- the plugins documentation are still built as part of the source modules though.
+GStreamer Python Bindings
-- GStreamer core now optionally uses libunwind and libdw to generate backtraces.
- This is useful for tracer plugins used during debugging and development.
+- this section will be filled in shortly {FIXME!}
-- There is a new `libgstbadallocators-1.0` library in gst-plugins-bad (which
- may go away again in future releases once the `GstPhysMemoryAllocator`
- interface API has been validated by more users).
-- `gst-omx` and `gstreamer-vaapi` modules can now also be built using the
- Meson build system.
+Build and Dependencies
-- The `qtkitvideosrc` element for macOS was removed. The API is deprecated
- since 10.9 and it wasn't shipped in the binaries since a few releases.
+- the new WebRTC support in gst-plugins-bad depends on the GStreamer
+ elements that ship as part of libnice, and libnice version 0.1.14 or
+ newer is required. The dtls and srtp plugins are also required.
-## Platform-specific improvements
+- gst-plugins-bad no longer depends on the libschroedinger Dirac codec
+ library.
-### Android
+- The srtp plugin can now also be built against libsrtp2.
-- androidmedia: add support for VP9 video decoding/encoding and Opus audio
- decoding (where supported)
+- some plugins and libraries have moved between modules, see the
+ _Plugin and library moves_ section above, and their respective
+ dependencies have moved with them of course, e.g. the GStreamer
+ OpenGL integration support library and plugin is now in
+ gst-plugins-base, and mpg123, LAME and twoLAME based audio decoder
+ and encoder plugins are now in gst-plugins-good.
-### OS/X and iOS
+- Unify static and dynamic plugin interface and remove plugin-specific
+ static build option: Static and dynamic plugins now have the same
+ interface. The standard --enable-static/--enable-shared toggle is
+ sufficient. This allows building static and shared plugins from the
+ same object files, instead of having to build everything twice.
-- `avfvideosrc`, which represents an iPhone camera or, on a Mac, a screencapture
- session, so far allowed you to select an input device by device index only.
- New API adds the ability to select the position (front or back facing) and
- device-type (wide angle, telephoto, etc.). Furthermore, you can now also
- specify the orientation (portrait, landscape, etc.) of the videostream.
+- The default plugin entry point has changed. This will only affect
+ plugins that are recompiled against new GStreamer headers. Binary
+ plugins using the old entry point will continue to work. However,
+ plugins that are recompiled must have matching plugin names in
+ GST_PLUGIN_DEFINE and filenames, as the plugin entry point for
+ shared plugins is now deduced from the plugin filename. This means
+ you can no longer have a plugin called foo living in a file called
+ libfoobar.so or such; the plugin filename needs to match. This might
+ cause problems with some external third party plugin modules when
+ they get rebuilt against GStreamer 1.14.
-### Windows
-- `dx9screencapsrc` can now optionally also capture the cursor.
+Note to packagers and distributors
-## Contributors
+A number of libraries, APIs and plugins moved between modules and/or
+libraries in different modules between version 1.12.x and 1.14.x, see
+the _Plugin and library moves_ section above. Some APIs have seen
+minor ABI changes in the course of moving them into the stable APIs
+section.
-Aleix Conchillo Flaque, Alejandro G. Castro, Aleksandr Slobodeniuk, Alexandru
-Băluț, Alex Ashley, Andre McCurdy, Andrew, Anton Eliasson, Antonio Ospite,
-Arnaud Vrac, Arun Raghavan, Aurélien Zanelli, Axel Menzel, Benjamin Otte,
-Branko Subasic, Brendan Shanks, Carl Karsten, Carlos Rafael Giani, ChangBok
-Chae, Chris Bass, Christian Schaller, christophecvr, Claudio Saavedra,
-Corentin Noël, Dag Gullberg, Daniel Garbanzo, Daniel Shahaf, David Evans,
-David Schleef, David Warman, Dominique Leuenberger, Dongil Park, Douglas
-Bagnall, Edgard Lima, Edward Hervey, Emeric Grange, Enrico Jorns, Enrique
-Ocaña González, Evan Nemerson, Fabian Orccon, Fabien Dessenne, Fabrice Bellet,
-Florent Thiéry, Florian Zwoch, Francisco Velazquez, Frédéric Dalleau, Garima
-Gaur, Gaurav Gupta, George Kiagiadakis, Georg Lippitsch, Göran Jönsson, Graham
-Leggett, Guillaume Desmottes, Gurkirpal Singh, Haihua Hu, Hanno Boeck, Havard
-Graff, Heekyoung Seo, hoonhee.lee, Hyunjun Ko, Imre Eörs, Iñaki García
-Etxebarria, Jagadish, Jagyum Koo, Jan Alexander Steffens (heftig), Jan
-Schmidt, Jean-Christophe Trotin, Jochen Henneberg, Jonas Holmberg, Joris
-Valette, Josep Torra, Juan Pablo Ugarte, Julien Isorce, Jürgen Sachs, Koop
-Mast, Kseniia Vasilchuk, Lars Wendler, leigh123linux@googlemail.com, Luis de
-Bethencourt, Lyon Wang, Marcin Kolny, Marinus Schraal, Mark Nauwelaerts,
-Mathieu Duponchelle, Matthew Waters, Matt Staples, Michael Dutka, Michael
-Olbrich, Michael Smith, Michael Tretter, Miguel París Díaz, namanyadav12, Neha
-Arora, Nick Kallen, Nicola Murino, Nicolas Dechesne, Nicolas Dufresne, Nicolas
-Huet, Nirbheek Chauhan, Ole André Vadla Ravnås, Olivier Crête, Patricia
-Muscalu, Peter Korsgaard, Peter Seiderer, Petr Kulhavy, Philippe Normand,
-Philippe Renon, Philipp Zabel, Rahul Bedarkar, Reynaldo H. Verdejo Pinochet,
-Ricardo Ribalda Delgado, Rico Tzschichholz, Руслан Ижбулатов, Samuel Maroy,
-Santiago Carot-Nemesio, Scott D Phillips, Sean DuBois, Sebastian Dröge, Sergey
-Borovkov, Seungha Yang, shakin chou, Song Bing, Søren Juul, Sreerenj
-Balachandran, Stefan Kost, Stefan Sauer, Stepan Salenikovich, Stian Selnes,
-Stuart Weaver, suhas2go, Thiago Santos, Thibault Saunier, Thomas Bluemel,
-Thomas Petazzoni, Tim-Philipp Müller, Ting-Wei Lan, Tobias Mueller, Todor
-Tomov, Tomasz Zajac, Ulf Olsson, Ursula Maplehurst, Víctor Manuel Jáquez Leal,
-Victor Toso, Vincent Penquerc'h, Vineeth TM, Vinod Kesti, Vitor Massaru Iha,
-Vivia Nikolaidou, WeiChungChang, William Manley, Wim Taymans, Wojciech
-Przybyl, Wonchul Lee, Xavier Claessens, Yasushi SHOJI
+This means that you should try to ensure that all major GStreamer
+modules are synced to the same major version (1.12 or 1.13/1.14) and can
+only be upgraded in lockstep, so that your users never end up with a mix
+of major versions on their system at the same time, as this may cause
+breakages.
+
+Also, plugins compiled against >= 1.14 headers will not load with
+GStreamer <= 1.12 owing to a new plugin entry point (but plugin binaries
+built against older GStreamer versions will continue to load with newer
+versions of GStreamer of course).
+
+There is also a small structure size related ABI breakage introduced in
+the gst-plugins-bad codecparsers library between version 1.13.90 and
+1.13.91. This should "only" affect gstreamer-vaapi, so anyone who ships
+the release candidates is advised to upgrade those two modules at the
+same time.
+
+
+Platform-specific improvements
+
+Android
+
+- ahcsrc (Android camera source) now supports autofocus
+
+macOS and iOS
+
+- this section will be filled in shortly {FIXME!}
+
+Windows
+
+- The GStreamer wasapi plugin was rewritten and should not only be
+ usable now, but in top shape and suitable for low-latency use cases.
+ The Windows Audio Session API (WASAPI) is Microsoft's most modern
+ method for talking with audio devices, and now that the wasapi
+ plugin is up to scratch it is preferred over the directsound plugin.
+ The ranks of the wasapisink and wasapisrc elements have been updated
+ to reflect this. Further improvements include:
+
+- support for more than 2 channels
+
+- a new "low-latency" property to enable low-latency operation (which
+ should always be safe to enable)
+
+- support for the AudioClient3 API which is only available on Windows
+ 10: in wasapisink this will be used automatically if available; in
+ wasapisrc it will have to be enabled explicitly via the
+ "use-audioclient3" property, as capturing audio with low latency and
+ without glitches seems to require setting the realtime priority of
+ the entire pipeline to "critical", which cannot be done from inside
+ the element, but has to be done in the application.
+
+- set realtime thread priority to avoid glitches
+
+- allow opening devices in exclusive mode, which provides much lower
+ latency compared to shared mode where WASAPI's engine period is
+ 10ms. This can be activated via the "exclusive" property.
+
+- There are now GstDeviceProvider implementations for the wasapi and
+ directsound plugins, so it's now possible to discover both audio
+ sources and audio sinks on Windows via the GstDeviceMonitor API
+
+- debug log timestamps now have higher granularity owing to
+ g_get_monotonic_time() now being used as fallback in
+ gst_util_get_timestamp(). Before that, there would sometimes be
+ 10-20 lines of debug log output sporting the same timestamp.
+
+
+Contributors
+
+Aaron Boxer, Adrián Pardini, Adrien SCH, Akinobu Mita, Alban Bedel,
+Alessandro Decina, Alex Ashley, Alicia Boya García, Alistair Buxton,
+Alvaro Margulis, Anders Jonsson, Andreas Frisch, Andrejs Vasiljevs,
+Andrew Bott, Antoine Jacoutot, Antonio Ospite, Antoni Silvestre, Anton
+Obzhirov, Anuj Jaiswal, Arjen Veenhuizen, Arnaud Bonatti, Arun Raghavan,
+Ashish Kumar, Aurélien Zanelli, Ayaka, Branislav Katreniak, Branko
+Subasic, Brion Vibber, Carlos Rafael Giani, Cassandra Rommel, Chris
+Bass, Chris Paulson-Ellis, Christoph Reiter, Claudio Saavedra, Clemens
+Lang, Cyril Lashkevich, Daniel van Vugt, Dave Craig, Dave Johnstone,
+David Evans, David Schleef, Deepak Srivastava, Dimitrios Katsaros,
+Dmitry Zhadinets, Dongil Park, Dustin Spicuzza, Eduard Sinelnikov,
+Edward Hervey, Enrico Jorns, Eunhae Choi, Ezequiel Garcia, fengalin,
+Filippo Argiolas, Florent Thiéry, Florian Zwoch, Francisco Velazquez,
+François Laignel, fvanzile, George Kiagiadakis, Georg Lippitsch, Graham
+Leggett, Guillaume Desmottes, Gurkirpal Singh, Gwang Yoon Hwang, Gwenole
+Beauchesne, Haakon Sporsheim, Haihua Hu, Håvard Graff, Heekyoung Seo,
+Heinrich Fink, Holger Kaelberer, Hoonhee Lee, Hosang Lee, Hyunjun Ko,
+Ian Jamison, James Stevenson, Jan Alexander Steffens (heftig), Jan
+Schmidt, Jason Lin, Jens Georg, Jeremy Hiatt, Jérôme Laheurte, Jimmy
+Ohn, Jochen Henneberg, John Ludwig, John Nikolaides, Jonathan Karlsson,
+Josep Torra, Juan Navarro, Juan Pablo Ugarte, Julien Isorce, Jun Xie,
+Jussi Kukkonen, Justin Kim, Lasse Laursen, Lubosz Sarnecki, Luc
+Deschenaux, Luis de Bethencourt, Marcin Lewandowski, Mario Alfredo
+Carrillo Arevalo, Mark Nauwelaerts, Martin Kelly, Matej Knopp, Mathieu
+Duponchelle, Matteo Valdina, Matt Fischer, Matthew Waters, Matthieu
+Bouron, Matthieu Crapet, Matt Staples, Michael Catanzaro, Michael
+Olbrich, Michael Shigorin, Michael Tretter, Michał Dębski, Michał Górny,
+Michele Dionisio, Miguel París, Mikhail Fludkov, Munez, Nael Ouedraogo,
+Neos3452, Nicholas Panayis, Nick Kallen, Nicola Murino, Nicolas
+Dechesne, Nicolas Dufresne, Nirbheek Chauhan, Ognyan Tonchev, Ole André
+Vadla Ravnås, Oleksij Rempel, Olivier Crête, Omar Akkila, Orestis
+Floros, Patricia Muscalu, Patrick Radizi, Paul Kim, Per-Erik Brodin,
+Peter Seiderer, Philip Craig, Philippe Normand, Philippe Renon, Philipp
+Zabel, Pierre Pouzol, Piotr Drąg, Ponnam Srinivas, Pratheesh Gangadhar,
+Raimo Järvi, Ramprakash Jelari, Ravi Kiran K N, Reynaldo H. Verdejo
+Pinochet, Rico Tzschichholz, Robert Rosengren, Roland Peffer, Руслан
+Ижбулатов, Sam Hurst, Sam Thursfield, Sangkyu Park, Sanjay NM, Satya
+Prakash Gupta, Scott D Phillips, Sean DuBois, Sebastian Cote, Sebastian
+Dröge, Sebastian Rasmussen, Sejun Park, Sergey Borovkov, Seungha Yang,
+Shakin Chou, Shinya Saito, Simon Himmelbauer, Sky Juan, Song Bing,
+Sreerenj Balachandran, Stefan Kost, Stefan Popa, Stefan Sauer, Stian
+Selnes, Thiago Santos, Thibault Saunier, Thijs Vermeir, Tim Allen,
+Tim-Philipp Müller, Ting-Wei Lan, Tomas Rataj, Tom Bailey, Tonu Jaansoo,
+U. Artie Eoff, Umang Jain, Ursula Maplehurst, VaL Doroshchuk, Vasilis
+Liaskovitis, Víctor Manuel Jáquez Leal, vijay, Vincent Penquerc'h,
+Vineeth T M, Vivia Nikolaidou, Wang Xin-yu (王昕宇), Wei Feng, Wim
+Taymans, Wonchul Lee, Xabier Rodriguez Calvar, Xavier Claessens,
+XuGuangxin, Yasushi SHOJI, Yi A Wang, Youness Alaoui,
... and many others who have contributed bug reports, translations, sent
suggestions or helped testing.
-## Bugs fixed in 1.12
-More than [635 bugs][bugs-fixed-in-1.12] have been fixed during
-the development of 1.12.
+Bugs fixed in 1.14
+
+More than 800 bugs have been fixed during the development of 1.14.
This list does not include issues that have been cherry-picked into the
-stable 1.10 branch and fixed there as well, all fixes that ended up in the
-1.10 branch are also included in 1.12.
+stable 1.12 branch and fixed there as well, all fixes that ended up in
+the 1.12 branch are also included in 1.14.
+
+This list also does not include issues that have been fixed without a
+bug report in bugzilla, so the actual number of fixes is much higher.
-This list also does not include issues that have been fixed without a bug
-report in bugzilla, so the actual number of fixes is much higher.
-[bugs-fixed-in-1.12]: https://bugzilla.gnome.org/buglist.cgi?bug_status=RESOLVED&bug_status=VERIFIED&classification=Platform&limit=0&list_id=213265&order=bug_id&product=GStreamer&query_format=advanced&resolution=FIXED&target_milestone=1.10.1&target_milestone=1.10.2&target_milestone=1.10.3&target_milestone=1.10.4&target_milestone=1.11.1&target_milestone=1.11.2&target_milestone=1.11.3&target_milestone=1.11.4&target_milestone=1.11.90&target_milestone=1.11.91&target_milestone=1.12.0
+Stable 1.14 branch
-## Stable 1.12 branch
+After the 1.14.0 release there will be several 1.14.x bug-fix releases
+which will contain bug fixes which have been deemed suitable for a
+stable branch, but no new features or intrusive changes will be added to
+a bug-fix release usually. The 1.14.x bug-fix releases will be made from
+the git 1.14 branch, which is a stable branch.
-After the 1.12.0 release there will be several 1.12.x bug-fix releases which
-will contain bug fixes which have been deemed suitable for a stable branch,
-but no new features or intrusive changes will be added to a bug-fix release
-usually. The 1.12.x bug-fix releases will be made from the git 1.12 branch, which
-is a stable branch.
+1.14.0
-### 1.12.0
+1.14.0 is scheduled to be released in early March 2018.
-1.12.0 was released on 4th May 2017.
-## Known Issues
+Known Issues
-- The `webrtcdsp` element is currently not shipped as part of the Windows
- binary packages due to a [build system issue][bug-770264].
+- The webrtcdsp element (which is unrelated to the newly-landed
+ GStreamer webrtc support) is currently not shipped as part of the
+ Windows binary packages due to a build system issue.
-[bug-770264]: https://bugzilla.gnome.org/show_bug.cgi?id=770264
-## Schedule for 1.14
+Schedule for 1.16
-Our next major feature release will be 1.14, and 1.11 will be the unstable
-development version leading up to the stable 1.12 release. The development
-of 1.13/1.14 will happen in the git master branch.
+Our next major feature release will be 1.16, and 1.15 will be the
+unstable development version leading up to the stable 1.16 release. The
+development of 1.15/1.16 will happen in the git master branch.
-The plan for the 1.14 development cycle is yet to be confirmed, but it is
-expected that feature freeze will be around September 2017
-followed by several 1.13 pre-releases and the new 1.14 stable release
-in October.
+The plan for the 1.16 development cycle is yet to be confirmed, but it
+is expected that feature freeze will be around August 2018 followed by
+several 1.15 pre-releases and the new 1.16 stable release in September.
-1.14 will be backwards-compatible to the stable 1.12, 1.10, 1.8, 1.6, 1.4,
-1.2 and 1.0 release series.
+1.16 will be backwards-compatible to the stable 1.14, 1.12, 1.10, 1.8,
+1.6, 1.4, 1.2 and 1.0 release series.
-- - -
+------------------------------------------------------------------------
-*These release notes have been prepared by Sebastian Dröge, Tim-Philipp Müller
-and Víctor Manuel Jáquez Leal.*
+_These release notes have been prepared by Tim-Philipp Müller with_
+_contributions from Sebastian Dröge._
-*License: [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/)*
+_License: CC BY-SA 4.0_