*
* AV1 Decoder.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 -v filesrc location=videotestsrc.webm ! matroskademux ! av1dec ! videoconvert ! videoscale ! autovideosink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* AV1 Encoder.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc num-buffers=50 ! av1enc ! webmmux ! filesink location=av1.webm
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
* "625-line television Wide Screen Signalling (WSS)"</a>.
*
* vbi_sliced payload:
- * <pre>
+ * ```
* Byte 0 1
* msb lsb msb lsb
- * bit 7 6 5 4 3 2 1 0 x x 13 12 11 10 9 8<br></pre>
+ * bit 7 6 5 4 3 2 1 0 x x 13 12 11 10 9 8
+ * ```
* according to EN 300 294, Table 1, lsb first transmitted.
*/
#define VBI_SLICED_WSS_625 0x00000400
* Reference: <a href="http://www.jeita.or.jp">EIA-J CPR-1204</a>
*
* vbi_sliced payload:
- * <pre>
+ * ```
* Byte 0 1 2
* msb lsb msb lsb msb lsb
* bit 7 6 5 4 3 2 1 0 15 14 13 12 11 10 9 8 x x x x 19 18 17 16
- * </pre>
+ * ```
*/
#define VBI_SLICED_WSS_CPR1204 0x00000800
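The three-byte layout above reassembles the same way: byte 0 carries bits 7..0, byte 1 bits 15..8, and the low four bits of byte 2 carry bits 19..16. A sketch with an illustrative helper name:

```c
#include <assert.h>
#include <stdint.h>

/* Rebuild the 20-bit CPR-1204 word; the four "x" bits of byte 2 are
 * unused and masked off. */
static uint32_t
cpr1204_payload_to_word (const uint8_t payload[3])
{
  return (uint32_t) payload[0]
      | ((uint32_t) payload[1] << 8)
      | ((uint32_t) (payload[2] & 0x0F) << 16);
}
```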
* frames using the given ICC (International Color Consortium) profiles.
* Falls back to internal sRGB profile if no ICC file is specified in property.
*
- * <refsect2>
- * <title>Example launch line</title>
- * <para>(write everything in one line, without the backslash characters)</para>
+ * ## Example launch line
+ *
+ * (write everything in one line, without the backslash characters)
* |[
* gst-launch-1.0 filesrc location=photo_camera.png ! pngdec ! \
* videoconvert ! lcms input-profile=sRGB.icc dest-profile=printer.icc \
* pngenc ! filesink location=photo_print.png
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
* If the "http_proxy" environment variable is set, its value is used.
* The #GstCurlHttpSrc:proxy property can be used to override the default.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 curlhttpsrc location=http://127.0.1.1/index.html ! fakesink dump=1
* ]| The above pipeline reads a web page from the local machine using HTTP and
* ]| The above pipeline will start up a DASH streaming session from the given
* MPD file. This requires GStreamer to have been built with dashdemux from
* gst-plugins-bad.
- * </refsect2>
*/
/*
* Modplug uses the <ulink url="http://modplug-xmms.sourceforge.net/">modplug</ulink>
* library to decode tracked music in the MOD/S3M/XM/IT and related formats.
*
- * <refsect2>
- * <title>Example pipeline</title>
+ * ## Example pipeline
+ *
* |[
* gst-launch-1.0 -v filesrc location=1990s-nostalgia.xm ! modplug ! audioconvert ! alsasink
* ]| Play a FastTracker xm file.
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
* and on the various available parameters in the documentation
* of the mpeg2enc tool in particular, which shares options with this element.
*
- * <refsect2>
- * <title>Example pipeline</title>
+ * ## Example pipeline
+ *
* |[
* gst-launch-1.0 videotestsrc num-buffers=1000 ! mpeg2enc ! filesink location=videotestsrc.m1v
* ]| This example pipeline will encode a test video source to a an MPEG1
* elementary stream (with Generic MPEG1 profile).
- * <para>
+ *
* Likely, the #GstMpeg2enc:format property
* is most important, as it selects the type of MPEG stream that is produced.
* In particular, default property values are dependent on the format,
* Note that the (S)VCD profiles also restrict the image size, so some scaling
* may be needed to accomodate this. The so-called generic profiles (as used
* in the example above) allow most parameters to be adjusted.
- * </para>
+ *
* |[
* gst-launch-1.0 videotestsrc num-buffers=1000 ! videoscale ! mpeg2enc format=1 norm=p ! filesink location=videotestsrc.m1v
* ]| This will produce an MPEG1 profile stream according to VCD2.0 specifications
* for PAL #GstMpeg2enc:norm (as the image height is dependent on video norm).
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
* and the man-page of the mplex tool documents the properties of this element,
* which are shared with the mplex tool.
*
- * <refsect2>
- * <title>Example pipeline</title>
+ * ## Example pipeline
+ *
* |[
* gst-launch-1.0 -v videotestsrc num-buffers=1000 ! mpeg2enc ! mplex ! filesink location=videotestsrc.mpg
* ]| This example pipeline will encode a test video source to an
* MPEG1 elementary stream and multiplexes this to an MPEG system stream.
- * <para>
+ *
* If several streams are being multiplexed, there should (as usual) be
* a queue in each stream, and due to mplex' buffering the capacities of these
* may have to be set to a few times the default settings to prevent the
* pipeline stalling.
- * </para>
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* Based on this tutorial: https://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
*
- * <refsect2>
- * <title>Example pipelines</title>
+ * ## Example pipelines
+ *
* |[
* gst-launch-1.0 -v v4l2src ! videoconvert ! cameraundistort ! cameracalibrate | autovideosink
* ]| will correct camera distortion once camera calibration is done.
- * </refsect2>
*/
/*
*
* Based on this tutorial: https://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
*
- * <refsect2>
- * <title>Example pipelines</title>
+ * ## Example pipelines
+ *
* |[
* gst-launch-1.0 -v v4l2src ! videoconvert ! cameraundistort settings="???" ! autovideosink
* ]| will correct camera distortion based on provided settings.
* |[
* gst-launch-1.0 -v v4l2src ! videoconvert ! cameraundistort ! cameracalibrate ! autovideosink
* ]| will correct camera distortion once camera calibration is done.
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* Dilates the image with the cvDilate OpenCV function.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc ! cvdilate ! videoconvert ! autovideosink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
* Equalizes the histogram of a grayscale image with the cvEqualizeHist OpenCV
* function.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc pattern=23 ! cvequalizehist ! videoconvert ! autovideosink
* ]|
- * </refsect2>
*/
*
* Erodes the image with the cvErode OpenCV function.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc ! cverode ! videoconvert ! autovideosink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* Applies cvLaplace OpenCV function to the image.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc ! cvlaplace ! videoconvert ! autovideosink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* Smooths the image using thes cvSmooth OpenCV function.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc ! cvsmooth ! videoconvert ! autovideosink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* Applies the cvSobel OpenCV function to the image.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc ! cvsobel ! videoconvert ! autovideosink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* Dewarp fisheye images
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc ! videoconvert ! circle radius=0.1 height=80 ! dewarp outer-radius=0.35 inner-radius=0.1 ! videoconvert ! xvimagesink
* ]|
- * </refsect2>
*/
* [D] Scharstein, D. & Szeliski, R. (2001). A taxonomy and evaluation of dense two-frame stereo
* correspondence algorithms, International Journal of Computer Vision 47: 7–42.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc ! video/x-raw,width=320,height=240 ! videoconvert ! disp0.sink_right videotestsrc ! video/x-raw,width=320,height=240 ! videoconvert ! disp0.sink_left disparity name=disp0 ! videoconvert ! ximagesink
* ]|
* |[
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,width=320,height=240 ! videoconvert ! disp0.sink_right v4l2src device=/dev/video0 ! video/x-raw,width=320,height=240 ! videoconvert ! disp0.sink_left disparity name=disp0 method=sgbm disp0.src ! videoconvert ! ximagesink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* Performs canny edge detection on videos and images
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc ! videoconvert ! edgedetect ! videoconvert ! xvimagesink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* Blurs faces in images and videos.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 autovideosrc ! videoconvert ! faceblur ! videoconvert ! autovideosink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
* until the size is <= GstFaceDetect::min-size-width or
* GstFaceDetect::min-size-height.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 autovideosrc ! decodebin ! colorspace ! facedetect ! videoconvert ! xvimagesink
* ]| Detect and show faces
* gst-launch-1.0 autovideosrc ! video/x-raw,width=320,height=240 ! videoconvert ! facedetect min-size-width=60 min-size-height=60 ! colorspace ! xvimagesink
* ]| Detect large faces on a smaller image
*
- * </refsect2>
*/
/* FIXME: development version of OpenCV has CV_HAAR_FIND_BIGGEST_OBJECT which
* extraction using iterated graph cuts, ACM Trans. Graph., vol. 23, pp. 309–314,
* 2004.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 --gst-debug=grabcut=4 v4l2src device=/dev/video0 ! videoconvert ! grabcut ! videoconvert ! video/x-raw,width=320,height=240 ! ximagesink
* ]|
* |[
* gst-launch-1.0 --gst-debug=grabcut=4 v4l2src device=/dev/video0 ! videoconvert ! facedetect display=0 ! videoconvert ! grabcut test-mode=true ! videoconvert ! video/x-raw,width=320,height=240 ! ximagesink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
* FIXME:operates hand gesture detection in video streams and images,
* and enable media operation e.g. play/stop/fast forward/back rewind.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 autovideosrc ! videoconvert ! "video/x-raw, format=RGB, width=320, height=240" ! \
* videoscale ! handdetect ! videoconvert ! xvimagesink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* Performs motion detection on videos.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc pattern=18 ! videorate ! videoscale ! video/x-raw,width=320,height=240,framerate=5/1 ! videoconvert ! motioncells ! videoconvert ! xvimagesink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
* color image enhancement." Image Processing, 1996. Proceedings., International
* Conference on. Vol. 3. IEEE, 1996.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc ! videoconvert ! retinex ! videoconvert ! xvimagesink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
* per Image Pixel for the Task of Background Subtraction", Pattern Recognition
* Letters, vol. 27, no. 7, pages 773-780, 2006.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! segmentation test-mode=true method=2 ! videoconvert ! ximagesink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* Human skin detection on videos and images
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc ! decodebin ! videoconvert ! skindetect ! videoconvert ! xvimagesink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* Performs template matching on videos and images, providing detected positions via bus messages.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc ! videoconvert ! templatematch template=/path/to/file.jpg ! videoconvert ! xvimagesink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* opencvtextoverlay renders the text on top of the video frames
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 videotestsrc ! videoconvert ! opencvtextoverlay text="Opencv Text Overlay " ! videoconvert ! xvimagesink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
* It uses the <ulink url="https://lib.openmpt.org">OpenMPT library</ulink>
* for this purpose. It can be autoplugged and therefore works with decodebin.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 filesrc location=media/example.it ! openmptdec ! audioconvert ! audioresample ! autoaudiosink
* ]|
- * </refsect2>
*/
* More concretely on the "libopenni2-dev" and "libopenni2" packages - that can
* be downloaded in http://goo.gl/2H6SZ6.
*
- * <refsect2>
- * <title>Examples</title>
- * <para>
- * Some recorded .oni files are available at:
- * <programlisting>
- * http://people.cs.pitt.edu/~chang/1635/proj11/kinectRecord
- * </programlisting>
- * </para>
- * </refsect2>
+ * ## Examples
+ *
+ * Some recorded .oni files are available at <http://people.cs.pitt.edu/~chang/1635/proj11/kinectRecord>
*/
/**
* SECTION:element-openni2src
*
- * <refsect2>
- * <title>Examples</title>
- * <para>
- * Some recorded .oni files are available at:
- * <programlisting>
- * http://people.cs.pitt.edu/~chang/1635/proj11/kinectRecord
- * </programlisting>
+ * ## Examples
*
- * <programlisting>
- LD_LIBRARY_PATH=/usr/lib/OpenNI2/Drivers/ gst-launch-1.0 --gst-debug=openni2src:5 openni2src location='Downloads/mr.oni' sourcetype=depth ! videoconvert ! ximagesink
- * </programlisting>
- * <programlisting>
- LD_LIBRARY_PATH=/usr/lib/OpenNI2/Drivers/ gst-launch-1.0 --gst-debug=openni2src:5 openni2src location='Downloads/mr.oni' sourcetype=color ! videoconvert ! ximagesink
- * </programlisting>
- * </para>
- * </refsect2>
+ * Some recorded .oni files are available at <http://people.cs.pitt.edu/~chang/1635/proj11/kinectRecord>
+ *
+ * ```shell
+ * LD_LIBRARY_PATH=/usr/lib/OpenNI2/Drivers/ gst-launch-1.0 --gst-debug=openni2src:5 openni2src location='Downloads/mr.oni' sourcetype=depth ! videoconvert ! ximagesink
+ * ```
+ *
+ * ```shell
+ * LD_LIBRARY_PATH=/usr/lib/OpenNI2/Drivers/ gst-launch-1.0 --gst-debug=openni2src:5 openni2src location='Downloads/mr.oni' sourcetype=color ! videoconvert ! ximagesink
+ * ```
*/
#ifdef HAVE_CONFIG_H
* srtsink is a network sink that sends <ulink url="http://www.srtalliance.org/">SRT</ulink>
* packets to the network.
*
- * <refsect2>
- * <title>Examples</title>
+ * ## Examples
+ *
* |[
* gst-launch-1.0 -v audiotestsrc ! srtsink uri=srt://host
* ]| This pipeline shows how to serve SRT packets through the default port.
* |[
* gst-launch-1.0 -v audiotestsrc ! srtsink uri=srt://:port
* ]| This pipeline shows how to wait SRT callers.
- * </refsect2>
*
*/
* srtsrc is a network source that reads <ulink url="http://www.srtalliance.org/">SRT</ulink>
* packets from the network.
*
- * <refsect2>
- * <title>Examples</title>
+ * ## Examples
+ *
* |[
* gst-launch-1.0 -v srtsrc uri="srt://127.0.0.1:7001" ! fakesink
* ]| This pipeline shows how to connect SRT server by setting #GstSRTSrc:uri property.
* |[
* gst-launch-1.0 -v srtclientsrc uri="srt://192.168.1.10:7001?mode=rendez-vous" ! fakesink
* ]| This pipeline shows how to connect SRT server by setting #GstSRTSrc:uri property and using the rendez-vous mode.
- * </refsect2>
*
*/
* It uses <ulink url="https://www.mindwerks.net/projects/wildmidi/">WildMidi</ulink>
* for this purpose. It can be autoplugged and therefore works with decodebin.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 filesrc location=media/example.mid ! wildmididec ! audioconvert ! audioresample ! autoaudiosink
* ]|
- * </refsect2>
*/
* @get_property then reports the corrected versions.
*
* The base class operates as follows:
- * <orderedlist>
- * <listitem>
- * <itemizedlist><title>Unloaded mode</title>
- * <listitem><para>
- * Initial values are set. If a current subsong has already been
- * defined (for example over the command line with gst-launch), then
- * the subsong index is copied over to current_subsong .
- * Same goes for the num-loops and output-mode properties.
- * Media is NOT loaded yet.
- * </para></listitem>
- * <listitem><para>
- * Once the sinkpad is activated, the process continues. The sinkpad is
- * activated in push mode, and the class accumulates the incoming media
- * data in an adapter inside the sinkpad's chain function until either an
- * EOS event is received from upstream, or the number of bytes reported
- * by upstream is reached. Then it loads the media, and starts the decoder
- * output task.
- * <listitem><para>
- * If upstream cannot respond to the size query (in bytes) of @load_from_buffer
- * fails, an error is reported, and the pipeline stops.
- * </para></listitem>
- * <listitem><para>
- * If there are no errors, @load_from_buffer is called to load the media. The
- * subclass must at least call gst_nonstream_audio_decoder_set_output_audioinfo()
- * there, and is free to make use of the initial subsong, output mode, and
- * position. If the actual output mode or position differs from the initial
- * value,it must set the initial value to the actual one (for example, if
- * the actual starting position is always 0, set *initial_position to 0).
- * If loading is unsuccessful, an error is reported, and the pipeline
- * stops. Otherwise, the base class calls @get_current_subsong to retrieve
- * the actual current subsong, @get_subsong_duration to report the current
- * subsong's duration in a duration event and message, and @get_subsong_tags
- * to send tags downstream in an event (these functions are optional; if
- * set to NULL, the associated operation is skipped). Afterwards, the base
- * class switches to loaded mode, and starts the decoder output task.
- * </para></listitem>
- * </itemizedlist>
- * <itemizedlist><title>Loaded mode</title>
- * <listitem><para>
- * Inside the decoder output task, the base class repeatedly calls @decode,
- * which returns a buffer with decoded, ready-to-play samples. If the
- * subclass reached the end of playback, @decode returns FALSE, otherwise
- * TRUE.
- * </para></listitem>
- * <listitem><para>
- * Upon reaching a loop end, subclass either ignores that, or loops back
- * to the beginning of the loop. In the latter case, if the output mode is set
- * to LOOPING, the subclass must call gst_nonstream_audio_decoder_handle_loop()
- * *after* the playback position moved to the start of the loop. In
- * STEADY mode, the subclass must *not* call this function.
- * Since many decoders only provide a callback for when the looping occurs,
- * and that looping occurs inside the decoding operation itself, the following
- * mechanism for subclass is suggested: set a flag inside such a callback.
- * Then, in the next @decode call, before doing the decoding, check this flag.
- * If it is set, gst_nonstream_audio_decoder_handle_loop() is called, and the
- * flag is cleared.
- * (This function call is necessary in LOOPING mode because it updates the
- * current segment and makes sure the next buffer that is sent downstream
- * has its DISCONT flag set.)
- * </para></listitem>
- * <listitem><para>
- * When the current subsong is switched, @set_current_subsong is called.
- * If it fails, a warning is reported, and nothing else is done. Otherwise,
- * it calls @get_subsong_duration to get the new current subsongs's
- * duration, @get_subsong_tags to get its tags, reports a new duration
- * (i.e. it sends a duration event downstream and generates a duration
- * message), updates the current segment, and sends the subsong's tags in
- * an event downstream. (If @set_current_subsong has been set to NULL by
- * the subclass, attempts to set a current subsong are ignored; likewise,
- * if @get_subsong_duration is NULL, no duration is reported, and if
- * @get_subsong_tags is NULL, no tags are sent downstream.)
- * </para></listitem>
- * <listitem><para>
- * When an attempt is made to switch the output mode, it is checked against
- * the bitmask returned by @get_supported_output_modes. If the proposed
- * new output mode is supported, the current segment is updated
- * (it is open-ended in STEADY mode, and covers the (sub)song length in
- * LOOPING mode), and the subclass' @set_output_mode function is called
- * unless it is set to NULL. Subclasses should reset internal loop counters
- * in this function.
- * </para></listitem>
- * </itemizedlist>
- * </listitem>
- * </orderedlist>
+ * * Unloaded mode
+ * - Initial values are set. If a current subsong has already been
+ * defined (for example over the command line with gst-launch), then
+ * the subsong index is copied over to current_subsong.
+ * Same goes for the num-loops and output-mode properties.
+ * Media is NOT loaded yet.
+ * - Once the sinkpad is activated, the process continues. The sinkpad is
+ * activated in push mode, and the class accumulates the incoming media
+ * data in an adapter inside the sinkpad's chain function until either an
+ * EOS event is received from upstream, or the number of bytes reported
+ * by upstream is reached. Then it loads the media, and starts the decoder
+ * output task.
+ * - If upstream cannot respond to the size query (in bytes), or @load_from_buffer
+ * fails, an error is reported, and the pipeline stops.
+ * - If there are no errors, @load_from_buffer is called to load the media. The
+ * subclass must at least call gst_nonstream_audio_decoder_set_output_audioinfo()
+ * there, and is free to make use of the initial subsong, output mode, and
+ * position. If the actual output mode or position differs from the initial
+ * value, it must set the initial value to the actual one (for example, if
+ * the actual starting position is always 0, set *initial_position to 0).
+ * If loading is unsuccessful, an error is reported, and the pipeline
+ * stops. Otherwise, the base class calls @get_current_subsong to retrieve
+ * the actual current subsong, @get_subsong_duration to report the current
+ * subsong's duration in a duration event and message, and @get_subsong_tags
+ * to send tags downstream in an event (these functions are optional; if
+ * set to NULL, the associated operation is skipped). Afterwards, the base
+ * class switches to loaded mode, and starts the decoder output task.
+ *
+ * * Loaded mode
+ * - Inside the decoder output task, the base class repeatedly calls @decode,
+ * which returns a buffer with decoded, ready-to-play samples. If the
+ * subclass reached the end of playback, @decode returns FALSE, otherwise
+ * TRUE.
+ * - Upon reaching a loop end, the subclass either ignores it, or loops back
+ * to the beginning of the loop. In the latter case, if the output mode is set
+ * to LOOPING, the subclass must call gst_nonstream_audio_decoder_handle_loop()
+ * *after* the playback position moved to the start of the loop. In
+ * STEADY mode, the subclass must *not* call this function.
+ * Since many decoders only provide a callback for when the looping occurs,
+ * and that looping occurs inside the decoding operation itself, the following
+ * mechanism is suggested for subclasses: set a flag inside such a callback.
+ * Then, in the next @decode call, before doing the decoding, check this flag.
+ * If it is set, gst_nonstream_audio_decoder_handle_loop() is called, and the
+ * flag is cleared.
+ * (This function call is necessary in LOOPING mode because it updates the
+ * current segment and makes sure the next buffer that is sent downstream
+ * has its DISCONT flag set.)
+ * - When the current subsong is switched, @set_current_subsong is called.
+ * If it fails, a warning is reported, and nothing else is done. Otherwise,
+ * it calls @get_subsong_duration to get the new current subsong's
+ * duration, @get_subsong_tags to get its tags, reports a new duration
+ * (i.e. it sends a duration event downstream and generates a duration
+ * message), updates the current segment, and sends the subsong's tags in
+ * an event downstream. (If @set_current_subsong has been set to NULL by
+ * the subclass, attempts to set a current subsong are ignored; likewise,
+ * if @get_subsong_duration is NULL, no duration is reported, and if
+ * @get_subsong_tags is NULL, no tags are sent downstream.)
+ * - When an attempt is made to switch the output mode, it is checked against
+ * the bitmask returned by @get_supported_output_modes. If the proposed
+ * new output mode is supported, the current segment is updated
+ * (it is open-ended in STEADY mode, and covers the (sub)song length in
+ * LOOPING mode), and the subclass' @set_output_mode function is called
+ * unless it is set to NULL. Subclasses should reset internal loop counters
+ * in this function.
*
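The loop-flag mechanism suggested for LOOPING mode can be sketched in plain C; all names below are illustrative stand-ins, not the real base-class API:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative decoder state; none of these names exist in the library. */
typedef struct
{
  bool loop_pending;   /* set by the decoder library's loop callback */
  int loops_handled;   /* counts what handle_loop() would do */
} MyDecoder;

/* Stub for gst_nonstream_audio_decoder_handle_loop(). */
static void
handle_loop (MyDecoder * dec)
{
  dec->loops_handled++;
}

/* Invoked by the decoder library when playback wraps to the loop start;
 * looping happens inside the decode operation itself, so only a flag
 * is set here. */
static void
on_loop_callback (MyDecoder * dec)
{
  dec->loop_pending = true;
}

/* The suggested @decode pattern: check and clear the flag before
 * decoding, so handle_loop() runs after the playback position has
 * already moved back to the loop start. */
static bool
my_decode (MyDecoder * dec)
{
  if (dec->loop_pending) {
    handle_loop (dec);
    dec->loop_pending = false;
  }
  /* ... produce a buffer of decoded samples here ... */
  return true;
}
```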
* The relationship between (sub)song duration, output mode, and number of loops
* is defined this way (this is all done by the base class automatically):
- * <itemizedlist>
- * <listitem><para>
- * Segments have their duration and stop values set to GST_CLOCK_TIME_NONE in
+ *
+ * * Segments have their duration and stop values set to GST_CLOCK_TIME_NONE in
* STEADY mode, and to the duration of the (sub)song in LOOPING mode.
- * </para></listitem>
- * <listitem><para>
- * The duration that is returned to a DURATION query is always the duration
+ *
+ * * The duration that is returned to a DURATION query is always the duration
* of the (sub)song, regardless of number of loops or output mode. The same
* goes for DURATION messages and tags.
- * </para></listitem>
- * <listitem><para>
- * If the number of loops is >0 or -1, durations of TOC entries are set to
+ *
+ * * If the number of loops is >0 or -1, durations of TOC entries are set to
* the duration of the respective subsong in LOOPING mode and to G_MAXINT64 in
* STEADY mode. If the number of loops is 0, entry durations are set to the
* subsong duration regardless of the output mode.
- * </para></listitem>
- * </itemizedlist>
*/
#ifdef HAVE_CONFIG_H
*
* All functions are called with a locked decoder mutex.
*
- * <note> If GST_ELEMENT_ERROR, GST_ELEMENT_WARNING, or GST_ELEMENT_INFO are called from
- * inside one of these functions, it is strongly recommended to unlock the decoder mutex
- * before and re-lock it after these macros to prevent potential deadlocks in case the
- * application does something with the element when it receives an ERROR/WARNING/INFO
- * message. Same goes for gst_element_post_message() calls and non-serialized events. </note>
+ * > If GST_ELEMENT_ERROR, GST_ELEMENT_WARNING, or GST_ELEMENT_INFO are called from
+ * > inside one of these functions, it is strongly recommended to unlock the decoder mutex
+ * > before and re-lock it after these macros to prevent potential deadlocks in case the
+ * > application does something with the element when it receives an ERROR/WARNING/INFO
+ * > message. Same goes for gst_element_post_message() calls and non-serialized events.
*
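The unlock/report/re-lock pattern recommended above can be sketched with a toy mutex model; `Dec`, `decode_step()` and the other names are illustrative, and the asserts stand in for the deadlock the real pattern avoids:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the decoder mutex: the asserts catch recursive locking,
 * which is exactly what this pattern prevents in the real element. */
typedef struct
{
  bool locked;
  int errors_posted;
} Dec;

static void lock_dec   (Dec * d) { assert (!d->locked); d->locked = true;  }
static void unlock_dec (Dec * d) { assert (d->locked);  d->locked = false; }

/* Stands in for GST_ELEMENT_ERROR / gst_element_post_message(); a real
 * application handler might call back into the element and try to take
 * the mutex again, so it must not run with the mutex held. */
static void
post_error (Dec * d)
{
  assert (!d->locked);   /* would deadlock in the real world */
  d->errors_posted++;
}

static void
decode_step (Dec * d, bool failed)
{
  lock_dec (d);
  /* ... decoding under the mutex ... */
  if (failed) {
    unlock_dec (d);      /* drop the mutex first */
    post_error (d);      /* safe: handlers may re-enter the element */
    lock_dec (d);        /* re-take it before continuing */
  }
  unlock_dec (d);
}
```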
* By default, this class works by reading media data from the sinkpad, and then commencing
* playback. Some decoders cannot be given data from a memory block, so the usual way of
*
* The design mandates that the subclasses implement the following features and
* behaviour:
- * <itemizedlist>
- * <listitem><para>
- * 3 pads: viewfinder, image capture, video capture
- * </para></listitem>
- * <listitem><para>
- * </para></listitem>
- * </itemizedlist>
+ *
+ * * 3 pads: viewfinder, image capture, video capture
*
* During construct_pipeline() vmethod a subclass can add several elements into
* the bin and expose 3 srcs pads as ghostpads implementing the 3 pad templates.
*
* The interface allows access to some common digital image capture parameters.
*
- * <note>
- * The GstPhotography interface is unstable API and may change in future.
- * One can define GST_USE_UNSTABLE_API to acknowledge and avoid this warning.
- * </note>
+ * > The GstPhotography interface is unstable API and may change in future.
+ * > One can define GST_USE_UNSTABLE_API to acknowledge and avoid this warning.
*/
static void gst_photography_iface_base_init (GstPhotographyInterface * iface);
* Name of custom GstMessage that will be posted to #GstBus when autofocusing
* is complete.
* This message contains following fields:
- * <itemizedlist>
- * <listitem>
- * <para>
- * #GstPhotographyFocusStatus
- * <classname>"status"</classname>:
- * Tells if focusing succeeded or failed.
- * </para>
- * </listitem>
- * <listitem>
- * <para>
- * #G_TYPE_INT
- * <classname>"focus-window-rows"</classname>:
- * Tells number of focus matrix rows.
- * </para>
- * </listitem>
- * <listitem>
- * <para>
- * #G_TYPE_INT
- * <classname>"focus-window-columns"</classname>:
- * Tells number of focus matrix columns.
- * </para>
- * </listitem>
- * <listitem>
- * <para>
- * #G_TYPE_INT
- * <classname>"focus-window-mask"</classname>:
- * Bitmask containing rows x columns bits which mark the focus points in the
- * focus matrix. Lowest bit (LSB) always represents the top-left corner of the
- * focus matrix. This field is only valid when focusing status is SUCCESS.
- * </para>
- * </listitem>
- * </itemizedlist>
+ *
+ * * `status` (#GstPhotographyFocusStatus): Tells if focusing succeeded or failed.
+ *
+ * * `focus-window-rows` (#G_TYPE_INT): Tells number of focus matrix rows.
+ *
+ * * `focus-window-columns` (#G_TYPE_INT): Tells number of focus matrix columns.
+ *
+ * * `focus-window-mask` (#G_TYPE_INT): Bitmask containing rows x columns bits
+ * which mark the focus points in the focus matrix. Lowest bit (LSB) always
+ * represents the top-left corner of the focus matrix. This field is only valid
+ * when focusing status is SUCCESS.
*/
#define GST_PHOTOGRAPHY_AUTOFOCUS_DONE "autofocus-done"
* becoming "shaken" due to camera movement and too long exposure time.
*
* This message contains following fields:
- * <itemizedlist>
- * <listitem>
- * <para>
- * #GstPhotographyShakeRisk
- * <classname>"status"</classname>:
- * Tells risk level of capturing shaken image.
- * </para>
- * </listitem>
- * </itemizedlist>
+ *
+ * * `status` (#GstPhotographyShakeRisk): Tells risk level of capturing shaken image.
*/
#define GST_PHOTOGRAPHY_SHAKE_RISK "shake-risk"
* x, y, w, and h properties are optional, and change the image position and
* size relative to the detected face position and size.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 autovideosrc ! videoconvert ! faceoverlay location=/path/to/gnome-video-effects/pixmaps/bow.svg x=0.5 y=0.5 w=0.7 h=0.7 ! videoconvert ! autovideosink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
* This element connects to a
* <ulink url="http://www.festvox.org/festival/index.html">festival</ulink>
* server process and uses it to synthesize speech. Festival need to run already
- * in server mode, started as <screen>festival --server</screen>
+ * in server mode, started as `festival --server`
*
* ## Example pipeline
* |[
*
* Read and decode samples from AVFoundation assets using the AVFAssetReader API
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 -v -m avfassetsrc uri="file://movie.mp4" ! autovideosink
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* Read data from an iOS asset from the media library.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 iosassetsrc uri=assets-library://asset/asset.M4V?id=11&ext=M4V ! decodebin ! autoaudiosink
* ]| Plays asset with id a song.ogg from local dir.
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* FIXME:Describe vdpaumpegdec here.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 -v -m fakesrc ! vdpaumpegdec ! fakesink silent=TRUE
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H
*
* FIXME:Describe vdpaumpeg4dec here.
*
- * <refsect2>
- * <title>Example launch line</title>
+ * ## Example launch line
+ *
* |[
* gst-launch-1.0 -v -m fakesrc ! vdpaumpeg4dec ! fakesink silent=TRUE
* ]|
- * </refsect2>
*/
#ifdef HAVE_CONFIG_H