make install
- try out a simple test:
- gst-launch -v fakesrc num_buffers=5 ! fakesink
- (If you didn't install GStreamer, prefix gst-launch with tools/)
+ gst-launch-1.0 -v fakesrc num_buffers=5 ! fakesink
+ (If you didn't install GStreamer, prefix gst-launch-1.0 with tools/)
If it outputs a bunch of messages from fakesrc and fakesink, everything is
ok.
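The same `gst-launch` → `gst-launch-1.0` rename recurs throughout this patch. As a sketch, the mechanical part can be scripted; the input line below is just an illustration, not a real file from the tree:

```shell
# Rewrite a legacy 0.10-era tool name to its 1.0 counterpart.
# Matching the trailing space ("gst-launch ") avoids touching names that
# already carry the -1.0 suffix.
printf 'gst-launch -v fakesrc num_buffers=5 ! fakesink\n' \
  | sed 's/^gst-launch /gst-launch-1.0 /'
```

For the doc-comment lines in this patch the pattern would need the leading `* ` prefix taken into account (e.g. `s/\* gst-launch /\* gst-launch-1.0 /`).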
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v filesrc location=/path/to/mkv ! matroskademux name=d ! queue ! mp3parse ! mad ! audioconvert ! autoaudiosink d. ! queue ! ffdec_h264 ! videoconvert ! r. d. ! queue ! "application/x-ass" ! assrender name=r ! videoconvert ! autovideosink
+ * gst-launch-1.0 -v filesrc location=/path/to/mkv ! matroskademux name=d ! queue ! mp3parse ! mad ! audioconvert ! autoaudiosink d. ! queue ! ffdec_h264 ! videoconvert ! r. d. ! queue ! "application/x-ass" ! assrender name=r ! videoconvert ! autovideosink
* ]| This pipeline demuxes a Matroska file with h.264 video, MP3 audio and embedded ASS subtitles and renders the subtitles on top of the video.
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -m uridecodebin uri=file:///path/to/song.ogg ! audioconvert ! chromaprint ! fakesink
+ * gst-launch-1.0 -m uridecodebin uri=file:///path/to/song.ogg ! audioconvert ! chromaprint ! fakesink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line (upload a JPEG file to an HTTP server)</title>
* |[
- * gst-launch filesrc location=image.jpg ! jpegparse ! curlsink \
+ * gst-launch-1.0 filesrc location=image.jpg ! jpegparse ! curlsink \
* file-name=image.jpg \
* location=http://192.168.0.1:8080/cgi-bin/patupload.cgi/ \
* user=test passwd=test \
* <title>Example launch line (upload a JPEG file to /home/test/images
* directory)</title>
* |[
- * gst-launch filesrc location=image.jpg ! jpegparse ! curlfilesink \
+ * gst-launch-1.0 filesrc location=image.jpg ! jpegparse ! curlfilesink \
* file-name=image.jpg \
* location=file:///home/test/images/
* ]|
 * <title>Example launch line (upload a JPEG file to an FTP server)</title>
* |[
- * gst-launch filesrc location=image.jpg ! jpegparse ! curlftpsink \
+ * gst-launch-1.0 filesrc location=image.jpg ! jpegparse ! curlftpsink \
* file-name=image.jpg \
* location=ftp://192.168.0.1/images/
* ]|
* <refsect2>
* <title>Example launch line (upload a JPEG file to an HTTP server)</title>
* |[
- * gst-launch filesrc location=image.jpg ! jpegparse ! curlhttpsink \
+ * gst-launch-1.0 filesrc location=image.jpg ! jpegparse ! curlhttpsink \
* file-name=image.jpg \
* location=http://192.168.0.1:8080/cgi-bin/patupload.cgi/ \
* user=test passwd=test \
* <refsect2>
* <title>Example launch line (upload a file to /home/john/sftp_tests/)</title>
* |[
- * gst-launch filesrc location=/home/jdoe/some.file ! curlsftpsink \
+ * gst-launch-1.0 filesrc location=/home/jdoe/some.file ! curlsftpsink \
* file-name=some.file.backup \
* user=john location=sftp://192.168.0.1/~/sftp_tests/ \
* ssh-auth-type=1 ssh-key-passphrase=blabla \
* <refsect2>
* <title>Example launch line (upload a JPEG file to an SMTP server)</title>
* |[
- * gst-launch filesrc location=image.jpg ! jpegparse ! curlsmtpsink \
+ * gst-launch-1.0 filesrc location=image.jpg ! jpegparse ! curlsmtpsink \
* file-name=image.jpg \
* location=smtp://smtp.gmail.com:507 \
* user=test passwd=test \
* <refsect2>
* <title>Example pipeline</title>
* |[
- * gst-launch -v filesrc location=videotestsrc.ogg ! oggdemux ! daaladec ! xvimagesink
+ * gst-launch-1.0 -v filesrc location=videotestsrc.ogg ! oggdemux ! daaladec ! xvimagesink
 * ]| This example pipeline will demux an ogg stream and decode the daala video. Refer to
 * the daalaenc example to create the ogg file.
* </refsect2>
* <refsect2>
* <title>Example pipeline</title>
* |[
- * gst-launch -v videotestsrc num-buffers=1000 ! daalaenc ! oggmux ! filesink location=videotestsrc.ogg
+ * gst-launch-1.0 -v videotestsrc num-buffers=1000 ! daalaenc ! oggmux ! filesink location=videotestsrc.ogg
 * ]| This example pipeline will encode a test video source to daala muxed in an
 * ogg container. Refer to the daaladec documentation to decode the created
 * stream.
* <para>
* Standalone: this mode will take complete control of the monitor forcing
* <ulink url="http://www.directfb.org/">DirectFB</ulink> to fullscreen layout.
- * This is convenient to test using the gst-launch command line tool or
+ * This is convenient to test using the gst-launch-1.0 command line tool or
* other simple applications. It is possible to interrupt playback while
* being in this mode by pressing the Escape key.
* </para>
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch -v videotestsrc ! dfbvideosink hue=20000 saturation=40000 brightness=25000
+ * gst-launch-1.0 -v videotestsrc ! dfbvideosink hue=20000 saturation=40000 brightness=25000
* ]| test the colorbalance interface implementation in dfbvideosink
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch dvdreadsrc title=1 ! mpegpsdemux ! dtsdec ! audioresample ! audioconvert ! alsasink
+ * gst-launch-1.0 dvdreadsrc title=1 ! mpegpsdemux ! dtsdec ! audioresample ! audioconvert ! alsasink
* ]| Play a DTS audio track from a dvd.
* |[
- * gst-launch filesrc location=abc.dts ! dtsdec ! audioresample ! audioconvert ! alsasink
+ * gst-launch-1.0 filesrc location=abc.dts ! dtsdec ! audioresample ! audioconvert ! alsasink
* ]| Decode a standalone file and play it.
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch audiotestsrc wave=sine num-buffers=100 ! audioconvert ! faac ! matroskamux ! filesink location=sine.mkv
+ * gst-launch-1.0 audiotestsrc wave=sine num-buffers=100 ! audioconvert ! faac ! matroskamux ! filesink location=sine.mkv
* ]| Encode a sine beep as aac and write to matroska container.
* </refsect2>
*/
* <refsect2>
* <title>Example launch lines</title>
* |[
- * gst-launch filesrc location=example.mp4 ! qtdemux ! faad ! audioconvert ! audioresample ! autoaudiosink
+ * gst-launch-1.0 filesrc location=example.mp4 ! qtdemux ! faad ! audioconvert ! audioresample ! autoaudiosink
* ]| Play aac from mp4 file.
* |[
- * gst-launch filesrc location=example.adts ! faad ! audioconvert ! audioresample ! autoaudiosink
+ * gst-launch-1.0 filesrc location=example.adts ! faad ! audioconvert ! audioresample ! autoaudiosink
* ]| Play standalone aac bitstream.
* </refsect2>
*/
* <refsect2>
* <title>Examples</title>
* |[
- * gst-launch -v videotestsrc ! glupload ! glbumper location=normalmap.bmp ! glimagesink
+ * gst-launch-1.0 -v videotestsrc ! glupload ! glbumper location=normalmap.bmp ! glimagesink
* ]| A pipeline to test normal mapping.
* FBO (Frame Buffer Object) and GLSL (OpenGL Shading Language) are required.
* </refsect2>
* <refsect2>
* <title>Examples</title>
* |[
- * gst-launch videotestsrc ! glupload ! gldeinterlace ! glimagesink
+ * gst-launch-1.0 videotestsrc ! glupload ! gldeinterlace ! glimagesink
* ]|
* FBO (Frame Buffer Object) and GLSL (OpenGL Shading Language) are required.
* </refsect2>
* <refsect2>
* <title>Examples</title>
* |[
- * gst-launch videotestsrc ! glupload ! gldifferencemate location=backgroundimagefile ! glimagesink
+ * gst-launch-1.0 videotestsrc ! glupload ! gldifferencemate location=backgroundimagefile ! glimagesink
* ]|
* FBO (Frame Buffer Object) and GLSL (OpenGL Shading Language) are required.
* </refsect2>
* <refsect2>
* <title>Examples</title>
* |[
- * gst-launch videotestsrc ! glupload ! gleffects effect=5 ! glimagesink
+ * gst-launch-1.0 videotestsrc ! glupload ! gleffects effect=5 ! glimagesink
* ]|
* FBO (Frame Buffer Object) and GLSL (OpenGL Shading Language) are required.
* </refsect2>
* <refsect2>
* <title>Examples</title>
* |[
- * gst-launch videotestsrc ! glupload ! glfilterreflectedscreen ! glimagesink
+ * gst-launch-1.0 videotestsrc ! glupload ! glfilterreflectedscreen ! glimagesink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Examples</title>
* |[
- * gst-launch videotestsrc ! glupload ! glshader location=myshader.fs ! glimagesink
+ * gst-launch-1.0 videotestsrc ! glupload ! glshader location=myshader.fs ! glimagesink
* ]|
* FBO (Frame Buffer Object) and GLSL (OpenGL Shading Language) are required.
* </refsect2>
* <refsect2>
* <title>Examples</title>
* |[
- * gst-launch videotestsrc ! glstereosplit name=s ! queue ! glimagesink s. ! queue ! glimagesink
+ * gst-launch-1.0 videotestsrc ! glstereosplit name=s ! queue ! glimagesink s. ! queue ! glimagesink
* ]|
* FBO (Frame Buffer Object) and GLSL (OpenGL Shading Language) are required.
* </refsect2>
* <title>Example launch line</title>
* <para>
* <programlisting>
- * gst-launch -v gltestsrc pattern=smpte ! glimagesink
+ * gst-launch-1.0 -v gltestsrc pattern=smpte ! glimagesink
* </programlisting>
* Shows original SMPTE color bars in a window.
* </para>
* <refsect2>
* <title>Examples</title>
* |[
- * gst-launch gltestsrc ! gltransformation rotation-z=45 ! glimagesink
+ * gst-launch-1.0 gltestsrc ! gltransformation rotation-z=45 ! glimagesink
* ]| A pipeline to rotate by 45 degrees
* |[
- * gst-launch gltestsrc ! gltransformation translation-x=0.5 ! glimagesink
+ * gst-launch-1.0 gltestsrc ! gltransformation translation-x=0.5 ! glimagesink
* ]| Translate the video by 0.5
* |[
- * gst-launch gltestsrc ! gltransformation scale-y=0.5 scale-x=0.5 ! glimagesink
+ * gst-launch-1.0 gltestsrc ! gltransformation scale-y=0.5 scale-x=0.5 ! glimagesink
* ]| Resize the video by 0.5
* |[
- * gst-launch gltestsrc ! gltransformation rotation-x=-45 ortho=True ! glimagesink
+ * gst-launch-1.0 gltestsrc ! gltransformation rotation-x=-45 ortho=True ! glimagesink
 * ]| Rotate the video around the X-Axis by -45° with an orthographic projection
* </refsect2>
*/
* <para>
 * This explicitly decodes a Kate stream:
* <programlisting>
- * gst-launch filesrc location=test.ogg ! oggdemux ! katedec ! fakesink silent=TRUE
+ * gst-launch-1.0 filesrc location=test.ogg ! oggdemux ! katedec ! fakesink silent=TRUE
* </programlisting>
* </para>
* <para>
* This will automatically detect and use any Kate streams multiplexed
* in an Ogg stream:
* <programlisting>
- * gst-launch playbin uri=file:///tmp/test.ogg
+ * gst-launch-1.0 playbin uri=file:///tmp/test.ogg
* </programlisting>
* </para>
* </refsect2>
* <para>
* This encodes a DVD SPU track to a Kate stream:
* <programlisting>
- * gst-launch dvdreadsrc ! dvddemux ! dvdsubparse ! kateenc category=spu-subtitles ! oggmux ! filesink location=test.ogg
+ * gst-launch-1.0 dvdreadsrc ! dvddemux ! dvdsubparse ! kateenc category=spu-subtitles ! oggmux ! filesink location=test.ogg
* </programlisting>
* </para>
* </refsect2>
* <title>Example pipelines</title>
* <para>
* <programlisting>
- * gst-launch -v filesrc location=kate.ogg ! oggdemux ! kateparse ! fakesink
+ * gst-launch-1.0 -v filesrc location=kate.ogg ! oggdemux ! kateparse ! fakesink
* </programlisting>
* This pipeline shows that the streamheader is set in the caps, and that each
* buffer has the timestamp, duration, offset, and offset_end set.
* </para>
* <para>
* <programlisting>
- * gst-launch filesrc location=kate.ogg ! oggdemux ! kateparse \
+ * gst-launch-1.0 filesrc location=kate.ogg ! oggdemux ! kateparse \
* ! oggmux ! filesink location=kate-remuxed.ogg
* </programlisting>
* This pipeline shows remuxing. kate-remuxed.ogg might not be exactly the same
* </para>
* <title>Example pipelines</title>
* <para>
- * This element is only useful with gst-launch for modifying the language
+ * This element is only useful with gst-launch-1.0 for modifying the language
 * and/or category (which are properties of the stream located in the Kate
 * beginning-of-stream header), because it does not support setting the tags
* on a #GstTagSetter interface. Conceptually, the element will usually be
* used like:
* <programlisting>
- * gst-launch -v filesrc location=foo.ogg ! oggdemux ! katetag ! oggmux ! filesink location=bar.ogg
+ * gst-launch-1.0 -v filesrc location=foo.ogg ! oggdemux ! katetag ! oggmux ! filesink location=bar.ogg
* </programlisting>
* </para>
* <para>
* This pipeline will set the language and category of the stream to the
* given values:
* <programlisting>
- * gst-launch -v filesrc location=foo.ogg ! oggdemux ! katetag language=pt_BR category=subtitles ! oggmux ! filesink location=bar.ogg
+ * gst-launch-1.0 -v filesrc location=foo.ogg ! oggdemux ! katetag language=pt_BR category=subtitles ! oggmux ! filesink location=bar.ogg
* </programlisting>
* </para>
* </refsect2>
* This pipeline renders a Kate stream on top of a Theora video multiplexed
* in the same stream:
* <programlisting>
- * gst-launch \
+ * gst-launch-1.0 \
* filesrc location=video.ogg ! oggdemux name=demux \
* demux. ! queue ! theoradec ! videoconvert ! tiger name=tiger \
* demux. ! queue ! kateparse ! tiger. \
* |[
* (padsp) listplugins
* (padsp) analyseplugin cmt.so amp_mono
- * gst-launch -e filesrc location="$myfile" ! decodebin ! audioconvert ! audioresample ! "audio/x-raw,format=S16LE,rate=48000,channels=1" ! wavenc ! filesink location="testin.wav"
+ * gst-launch-1.0 -e filesrc location="$myfile" ! decodebin ! audioconvert ! audioresample ! "audio/x-raw,format=S16LE,rate=48000,channels=1" ! wavenc ! filesink location="testin.wav"
* (padsp) applyplugin testin.wav testout.wav cmt.so amp_mono 2
- * gst-launch playbin uri=file://"$PWD"/testout.wav
+ * gst-launch-1.0 playbin uri=file://"$PWD"/testout.wav
 * ]| Decode any audio file to wav in the format expected by the specific ladspa plugin, apply the ladspa filter, and play it.
* </refsect2>
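The `playbin uri=file://"$PWD"/testout.wav` step above relies on building a `file://` URI from a local path. A minimal sketch of that construction (assuming the path contains no characters that would need percent-encoding):

```shell
# Turn an absolute local path into the file:// URI form playbin expects.
# Note file:// plus an absolute path yields three slashes: file:///tmp/...
to_uri() { printf 'file://%s\n' "$1"; }
to_uri "$PWD/testout.wav"
```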
*
* <refsect2>
 * <title>Example LADSPA line with these plugins</title>
* |[
- * gst-launch autoaudiosrc ! ladspa-cmt-so-amp-mono gain=2 ! ladspa-caps-so-plate ! ladspa-tap-echo-so-tap-stereo-echo l-delay=500 r-haas-delay=500 ! tee name=myT myT. ! queue ! autoaudiosink myT. ! queue ! audioconvert ! goom ! videoconvert ! xvimagesink pixel-aspect-ratio=3/4
+ * gst-launch-1.0 autoaudiosrc ! ladspa-cmt-so-amp-mono gain=2 ! ladspa-caps-so-plate ! ladspa-tap-echo-so-tap-stereo-echo l-delay=500 r-haas-delay=500 ! tee name=myT myT. ! queue ! autoaudiosink myT. ! queue ! audioconvert ! goom ! videoconvert ! xvimagesink pixel-aspect-ratio=3/4
 * ]| Get audio input, filter it through CAPS Plate and TAP Stereo Echo, play it and show a visualization (headphones recommended).
* </refsect2>
*
* <refsect2>
 * <title>Example Filter/Effect/Audio/LADSPA line with these plugins</title>
* |[
- * gst-launch filesrc location="$myfile" ! decodebin ! audioconvert ! audioresample ! ladspa-calf-so-reverb decay-time=15 high-frq-damp=20000 room-size=5 diffusion=1 wet-amount=2 dry-amount=2 pre-delay=50 bass-cut=20000 treble-cut=20000 ! ladspa-tap-echo-so-tap-stereo-echo l-delay=500 r-haas-delay=500 ! autoaudiosink
+ * gst-launch-1.0 filesrc location="$myfile" ! decodebin ! audioconvert ! audioresample ! ladspa-calf-so-reverb decay-time=15 high-frq-damp=20000 room-size=5 diffusion=1 wet-amount=2 dry-amount=2 pre-delay=50 bass-cut=20000 treble-cut=20000 ! ladspa-tap-echo-so-tap-stereo-echo l-delay=500 r-haas-delay=500 ! autoaudiosink
* ]| Decode any audio file, filter it through Calf Reverb LADSPA then TAP Stereo Echo, and play it.
* </refsect2>
* </listitem>
* <refsect2>
 * <title>Example Source/Audio/LADSPA line with these plugins</title>
* |[
- * gst-launch ladspasrc-sine-so-sine-fcac frequency=220 amplitude=100 ! audioconvert ! autoaudiosink
+ * gst-launch-1.0 ladspasrc-sine-so-sine-fcac frequency=220 amplitude=100 ! audioconvert ! autoaudiosink
* ]| Generate a sine wave with Sine Oscillator (Freq:control, Amp:control) and play it.
* </refsect2>
* <refsect2>
 * <title>Example Source/Audio/LADSPA line with these plugins</title>
* |[
- * gst-launch ladspasrc-caps-so-click bpm=240 volume=1 ! autoaudiosink
+ * gst-launch-1.0 ladspasrc-caps-so-click bpm=240 volume=1 ! autoaudiosink
* ]| Generate clicks with CAPS Click - Metronome at 240 beats per minute and play it.
* </refsect2>
* <refsect2>
 * <title>Example Source/Audio/LADSPA line with these plugins</title>
* |[
- * gst-launch ladspasrc-random-1661-so-random-fcsc-oa ! ladspa-cmt-so-amp-mono gain=1.5 ! ladspa-caps-so-plate ! tee name=myT myT. ! queue ! autoaudiosink myT. ! queue ! audioconvert ! wavescope ! videoconvert ! autovideosink
+ * gst-launch-1.0 ladspasrc-random-1661-so-random-fcsc-oa ! ladspa-cmt-so-amp-mono gain=1.5 ! ladspa-caps-so-plate ! tee name=myT myT. ! queue ! autoaudiosink myT. ! queue ! audioconvert ! wavescope ! videoconvert ! autovideosink
 * ]| Generate a random wave, filter it through Mono Amplifier and Versatile Plate Reverb, and play it while showing a visualization.
* </refsect2>
* </listitem>
* <refsect2>
 * <title>Example Sink/Audio/LADSPA line with these plugins</title>
* |[
- * gst-launch autoaudiosrc ! ladspa-cmt-so-amp-mono gain=2 ! ladspa-caps-so-plate ! ladspa-tap-echo-so-tap-stereo-echo l-delay=500 r-haas-delay=500 ! tee name=myT myT. ! audioconvert ! audioresample ! queue ! ladspasink-cmt-so-null-ai myT. ! audioconvert ! audioresample ! queue ! goom ! videoconvert ! xvimagesink pixel-aspect-ratio=3/4
+ * gst-launch-1.0 autoaudiosrc ! ladspa-cmt-so-amp-mono gain=2 ! ladspa-caps-so-plate ! ladspa-tap-echo-so-tap-stereo-echo l-delay=500 r-haas-delay=500 ! tee name=myT myT. ! audioconvert ! audioresample ! queue ! ladspasink-cmt-so-null-ai myT. ! audioconvert ! audioresample ! queue ! goom ! videoconvert ! xvimagesink pixel-aspect-ratio=3/4
 * ]| Get audio input, filter it through Mono Amplifier, CAPS Plate LADSPA and TAP Stereo Echo, explicitly discard the audio with Null (Audio Output), and show a visualization (headphones recommended).
* </refsect2>
* </listitem>
* <refsect2>
* <title>Examples</title>
* |[
- * gst-launch -v audiotestsrc ! libvisual_gl_lv_flower ! glimagesink
+ * gst-launch-1.0 -v audiotestsrc ! libvisual_gl_lv_flower ! glimagesink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch filesrc location=music.mp3 ! mpegaudioparse ! mpg123audiodec ! audioconvert ! audioresample ! autoaudiosink
+ * gst-launch-1.0 filesrc location=music.mp3 ! mpegaudioparse ! mpg123audiodec ! audioconvert ! audioresample ! autoaudiosink
* ]| Decode and play the mp3 file
* </refsect2>
*/
* <refsect2>
* <title>Example pipeline</title>
* |[
- * gst-launch -v videotestsrc num-buffers=1000 ! mpeg2enc ! mplex ! filesink location=videotestsrc.mpg
+ * gst-launch-1.0 -v videotestsrc num-buffers=1000 ! mpeg2enc ! mplex ! filesink location=videotestsrc.mpg
* ]| This example pipeline will encode a test video source to an
 * MPEG1 elementary stream and multiplex it into an MPEG system stream.
* <para>
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch audiotestsrc ! audioconvert ! volume volume=0.5 ! openalsink
+ * gst-launch-1.0 audiotestsrc ! audioconvert ! volume volume=0.5 ! openalsink
* ]| will play a sine wave (continuous beep sound) through OpenAL.
* |[
- * gst-launch filesrc location=stream.wav ! decodebin ! audioconvert ! openalsink
+ * gst-launch-1.0 filesrc location=stream.wav ! decodebin ! audioconvert ! openalsink
* ]| will play a wav audio file through OpenAL.
* |[
- * gst-launch openalsrc ! "audio/x-raw,format=S16LE,rate=44100" ! audioconvert ! volume volume=0.25 ! openalsink
+ * gst-launch-1.0 openalsrc ! "audio/x-raw,format=S16LE,rate=44100" ! audioconvert ! volume volume=0.25 ! openalsink
* ]| will capture and play audio through OpenAL.
* </refsect2>
*/
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch -v openalsrc ! audioconvert ! wavenc ! filesink location=stream.wav
+ * gst-launch-1.0 -v openalsrc ! audioconvert ! wavenc ! filesink location=stream.wav
 * ]| will capture sound through OpenAL and encode it to a wav file.
* |[
- * gst-launch openalsrc ! "audio/x-raw,format=S16LE,rate=44100" ! audioconvert ! volume volume=0.25 ! openalsink
+ * gst-launch-1.0 openalsrc ! "audio/x-raw,format=S16LE,rate=44100" ! audioconvert ! volume volume=0.25 ! openalsink
* ]| will capture and play audio through OpenAL.
* </refsect2>
*/
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch -v filesrc location=opus.ogg ! oggdemux ! opusdec ! audioconvert ! audioresample ! alsasink
+ * gst-launch-1.0 -v filesrc location=opus.ogg ! oggdemux ! opusdec ! audioconvert ! audioresample ! alsasink
* ]| Decode an Ogg/Opus file. To create an Ogg/Opus file refer to the documentation of opusenc.
* </refsect2>
*/
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch -v audiotestsrc wave=sine num-buffers=100 ! audioconvert ! opusenc ! oggmux ! filesink location=sine.ogg
+ * gst-launch-1.0 -v audiotestsrc wave=sine num-buffers=100 ! audioconvert ! opusenc ! oggmux ! filesink location=sine.ogg
* ]| Encode a test sine signal to Ogg/OPUS.
* </refsect2>
*/
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch -v filesrc location=opusdata ! opusparse ! opusdec ! audioconvert ! audioresample ! alsasink
+ * gst-launch-1.0 -v filesrc location=opusdata ! opusparse ! opusdec ! audioconvert ! audioresample ! alsasink
 * ]| Decode and play an unmuxed Opus file.
* </refsect2>
*/
DEVICE_OPT=""
fi
-gst-launch rsndvdbin name=dvd "$DEVICE_OPT" \
+gst-launch-1.0 rsndvdbin name=dvd "$DEVICE_OPT" \
dvdspu name=spu ! videoconvert ! videoscale ! ximagesink force-aspect-ratio=true \
dvd. ! queue max-size-buffers=3 max-size-bytes=0 ! spu.video \
dvd. ! spu.subpicture \
DEVICE_OPT=""
fi
-gst-launch rsndvdbin name=dvd "$DEVICE_OPT" \
+gst-launch-1.0 rsndvdbin name=dvd "$DEVICE_OPT" \
dvdspu name=spu ! deinterlace ! xvimagesink force-aspect-ratio=false \
dvd. ! queue max-size-buffers=3 max-size-bytes=0 ! spu.video \
dvd. ! spu.subpicture \
* <refsect2>
* <title>Example launch lines</title>
* |[
- * gst-launch filesrc location=image.svg ! rsvgdec ! imagefreeze ! videoconvert ! autovideosink
+ * gst-launch-1.0 filesrc location=image.svg ! rsvgdec ! imagefreeze ! videoconvert ! autovideosink
 * ]| render and show an SVG image.
* </refsect2>
*/
* <refsect2>
* <title>Example launch lines</title>
* |[
- * gst-launch -v videotestsrc ! videoconvert ! rsvgoverlay location=foo.svg ! videoconvert ! autovideosink
+ * gst-launch-1.0 -v videotestsrc ! videoconvert ! rsvgoverlay location=foo.svg ! videoconvert ! autovideosink
* ]| specifies the SVG location through the filename property.
* |[
- * gst-launch -v videotestsrc ! videoconvert ! rsvgoverlay name=overlay ! videoconvert ! autovideosink filesrc location=foo.svg ! image/svg ! overlay.data_sink
+ * gst-launch-1.0 -v videotestsrc ! videoconvert ! rsvgoverlay name=overlay ! videoconvert ! autovideosink filesrc location=foo.svg ! image/svg ! overlay.data_sink
* ]| does the same by feeding data through the data_sink pad. You can also specify the SVG data itself as parameter:
* |[
- * gst-launch -v videotestsrc ! videoconvert ! rsvgoverlay data='<svg viewBox="0 0 800 600"><image x="80%" y="80%" width="10%" height="10%" xlink:href="foo.jpg" /></svg>' ! videoconvert ! autovideosink
+ * gst-launch-1.0 -v videotestsrc ! videoconvert ! rsvgoverlay data='<svg viewBox="0 0 800 600"><image x="80%" y="80%" width="10%" height="10%" xlink:href="foo.jpg" /></svg>' ! videoconvert ! autovideosink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc ! ffenc_flv ! flvmux ! rtmpsink location='rtmp://localhost/path/to/stream live=1'
+ * gst-launch-1.0 -v videotestsrc ! ffenc_flv ! flvmux ! rtmpsink location='rtmp://localhost/path/to/stream live=1'
* ]| Encode a test video stream to FLV video format and stream it via RTMP.
* </refsect2>
*/
* <refsect2>
* <title>Example launch lines</title>
* |[
- * gst-launch -v rtmpsrc location=rtmp://somehost/someurl ! fakesink
+ * gst-launch-1.0 -v rtmpsrc location=rtmp://somehost/someurl ! fakesink
* ]| Open an RTMP location and pass its content to fakesink.
* </refsect2>
*/
* <para>
* Simple example pipeline that plays an Ogg/Vorbis file via sndio:
* <programlisting>
- * gst-launch -v filesrc location=foo.ogg ! decodebin ! audioconvert ! audioresample ! sndiosink
+ * gst-launch-1.0 -v filesrc location=foo.ogg ! decodebin ! audioconvert ! audioresample ! sndiosink
* </programlisting>
* </para>
* </refsect2>
* <para>
* Simple example pipeline that records an Ogg/Vorbis file via sndio:
* <programlisting>
- * gst-launch -v sndiosrc ! audioconvert ! vorbisenc ! oggmux ! filesink location=foo.ogg
+ * gst-launch-1.0 -v sndiosrc ! audioconvert ! vorbisenc ! oggmux ! filesink location=foo.ogg
* </programlisting>
* </para>
* </refsect2>
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v -m filesrc location=recording.mpeg ! tsdemux ! teletextdec ! videoconvert ! ximagesink
+ * gst-launch-1.0 -v -m filesrc location=recording.mpeg ! tsdemux ! teletextdec ! videoconvert ! ximagesink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch -v videotestsrc ! waylandsink
+ * gst-launch-1.0 -v videotestsrc ! waylandsink
* ]| test the video rendering in wayland
* </refsect2>
*/
* <refsect2>
* <title>Example launch lines</title>
* |[
- * gst-launch -m v4l2src ! videoconvert ! zbar ! videoconvert ! xvimagesink
+ * gst-launch-1.0 -m v4l2src ! videoconvert ! zbar ! videoconvert ! xvimagesink
* ]| This pipeline will detect barcodes and send them as messages.
* |[
- * gst-launch -m v4l2src ! tee name=t ! queue ! videoconvert ! zbar ! fakesink t. ! queue ! xvimagesink
+ * gst-launch-1.0 -m v4l2src ! tee name=t ! queue ! videoconvert ! zbar ! fakesink t. ! queue ! xvimagesink
* ]| Same as above, but running the filter on a branch to keep the display in color
* </refsect2>
*/
* <title>Example launch line</title>
* <para>
* <programlisting>
- * gst-launch filesrc location=sine.aiff ! aiffparse ! audioconvert ! alsasink
+ * gst-launch-1.0 filesrc location=sine.aiff ! aiffparse ! audioconvert ! alsasink
* </programlisting>
 * Read an aiff file and output to the soundcard using the ALSA element. The
* aiff file is assumed to contain raw uncompressed samples.
* </para>
* <para>
* <programlisting>
- * gst-launch souphhtpsrc location=http://www.example.org/sine.aiff ! queue ! aiffparse ! audioconvert ! alsasink
+ * gst-launch-1.0 souphttpsrc location=http://www.example.org/sine.aiff ! queue ! aiffparse ! audioconvert ! alsasink
* </programlisting>
* Stream data from a network url.
* </para>
* <title>Example launch lines</title>
* <para>(write everything in one line, without the backslash characters)</para>
* |[
- * gst-launch videotestsrc num-buffers=250 \
- * ! "video/x-raw,format=(string)I420,framerate=(fraction)25/1" ! ffenc_wmv2 \
+ * gst-launch-1.0 videotestsrc num-buffers=250 \
+ * ! "video/x-raw,format=(string)I420,framerate=(fraction)25/1" ! avenc_wmv2 \
* ! asfmux name=mux ! filesink location=test.asf \
* audiotestsrc num-buffers=440 ! audioconvert \
- * ! "audio/x-raw,rate=44100" ! ffenc_wmav2 ! mux.
+ * ! "audio/x-raw,rate=44100" ! avenc_wmav2 ! mux.
 * ]| This creates an ASF file containing a WMV video stream
 * with a test picture and a WMA audio stream of a test sound.
*
* <para>(write everything in one line, without the backslash characters)</para>
* Server (sender)
* |[
- * gst-launch -ve videotestsrc ! ffenc_wmv2 ! asfmux name=mux streamable=true \
+ * gst-launch-1.0 -ve videotestsrc ! avenc_wmv2 ! asfmux name=mux streamable=true \
* ! rtpasfpay ! udpsink host=127.0.0.1 port=3333 \
- * audiotestsrc ! ffenc_wmav2 ! mux.
+ * audiotestsrc ! avenc_wmav2 ! mux.
* ]|
* Client (receiver)
* |[
- * gst-launch udpsrc port=3333 ! "caps_from_rtpasfpay_at_sender" \
+ * gst-launch-1.0 udpsrc port=3333 ! "caps_from_rtpasfpay_at_sender" \
* ! rtpasfdepay ! decodebin name=d ! queue \
* ! videoconvert ! autovideosink \
* d. ! queue ! audioconvert ! autoaudiosink
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v audiotestsrc ! audiochannelmix ! autoaudiosink
+ * gst-launch-1.0 -v audiotestsrc ! audiochannelmix ! autoaudiosink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch audiotestsrc freq=100 ! audiomixer name=mix ! audioconvert ! alsasink audiotestsrc freq=500 ! mix.
+ * gst-launch-1.0 audiotestsrc freq=100 ! audiomixer name=mix ! audioconvert ! alsasink audiotestsrc freq=500 ! mix.
* ]| This pipeline produces two sine waves mixed together.
* </refsect2>
*
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch audiotestsrc ! audioconvert ! spacescope ! ximagesink
+ * gst-launch-1.0 audiotestsrc ! audioconvert ! spacescope ! ximagesink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch audiotestsrc ! audioconvert ! spectrascope ! ximagesink
+ * gst-launch-1.0 audiotestsrc ! audioconvert ! spectrascope ! ximagesink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch audiotestsrc ! audioconvert ! synaescope ! ximagesink
+ * gst-launch-1.0 audiotestsrc ! audioconvert ! synaescope ! ximagesink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch audiotestsrc ! audioconvert ! wavescope ! ximagesink
+ * gst-launch-1.0 audiotestsrc ! audioconvert ! wavescope ! ximagesink
* ]|
* </refsect2>
*/
/*
* test autovideoconvert:
* if rgb2bayer is present
- * gst-launch videotestsrc num-buffers=2 ! "video/x-raw,width=100,height=100,framerate=10/1" ! autovideoconvert ! "video/x-raw-bayer,width=100,height=100,format=bggr,framerate=10/1" ! fakesink -v
+ * gst-launch-1.0 videotestsrc num-buffers=2 ! "video/x-raw,width=100,height=100,framerate=10/1" ! autovideoconvert ! "video/x-bayer,width=100,height=100,format=bggr,framerate=10/1" ! fakesink -v
* if bayer2rgb is present
- * gst-launch videotestsrc num-buffers=2 ! "video/x-raw-bayer,width=100,height=100,format=bggr,framerate=10/1" ! autovideoconvert ! "video/x-raw,width=100,height=100,framerate=10/1" ! fakesink -v
+ * gst-launch-1.0 videotestsrc num-buffers=2 ! "video/x-bayer,width=100,height=100,format=bggr,framerate=10/1" ! autovideoconvert ! "video/x-raw,width=100,height=100,framerate=10/1" ! fakesink -v
* test with videoconvert
- * gst-launch videotestsrc num-buffers=2 ! "video/x-raw,format=RGBx,width=100,height=100,framerate=10/1" ! autovideoconvert ! "video/x-raw,format=RGB16,width=100,height=100,framerate=10/1" ! fakesink -v
+ * gst-launch-1.0 videotestsrc num-buffers=2 ! "video/x-raw,format=RGBx,width=100,height=100,framerate=10/1" ! autovideoconvert ! "video/x-raw,format=RGB16,width=100,height=100,framerate=10/1" ! fakesink -v
*/
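The caps strings in the lines above are quoted because they contain commas and parentheses, which the shell would otherwise interpret; quoting delivers the whole caps description to gst-launch-1.0 as one argument. A small sketch of why the quoting matters:

```shell
# A caps string with the characters that make quoting necessary.
caps='video/x-raw,format=(string)I420,framerate=(fraction)25/1'
# Simulate the argument list a quoted launch line produces: the quoted
# caps string survives word splitting as a single positional parameter.
set -- videotestsrc '!' "$caps" '!' fakesink
printf '%s\n' "$3"
```

Unquoted, `(string)` would be a syntax error in most shells and the commas could be mangled by brace or history expansion, depending on the shell.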
#ifdef HAVE_CONFIG_H
#include "config.h"
* <refsect2>
* <title>Example launch line</title>
* <para>
- * Unfortunately, camerabin can't be really used from gst-launch, as you need
+ * Unfortunately, camerabin can't really be used from gst-launch-1.0, as you need
* to send signals to control it. The following pipeline might be able
* to show the viewfinder using all the default elements.
* |[
- * gst-launch -v -m camerabin
+ * gst-launch-1.0 -v -m camerabin
* ]|
* </para>
* </refsect2>
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc ! viewfinderbin
+ * gst-launch-1.0 -v videotestsrc ! viewfinderbin
* ]|
* Feeds the viewfinderbin with video test data.
* </refsect2>
*
* Sample pipeline:
* |[
- * gst-launch videotestsrc pattern=smpte75 ! \
+ * gst-launch-1.0 videotestsrc pattern=smpte75 ! \
 * chromahold target-r=0 target-g=0 target-b=255 ! \
 * videoconvert ! autovideosink
 * ]| This pipeline only keeps the blue color.
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc ! coloreffects preset=heat ! videoconvert !
+ * gst-launch-1.0 -v videotestsrc ! coloreffects preset=heat ! videoconvert !
* autovideosink
* ]| This pipeline shows the effect of coloreffects on a test stream.
* </refsect2>
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch-1.0 -v dataurisrc uri="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAfElEQVQ4je2MwQnAIAxFgziA4EnczIsO4MEROo/gzZWc4xdTbe1R6LGRR74heYS7iKElzfcMiRnt4hf8gk8EayB6luefue/HzlJfCA50XsNjYRxprZmenXNIKSGEsC+QUqK1hhgj521BzhnWWiilUGvdF5RS4L2HMQZCCJy8sHMm2TYdJAAAAABJRU5ErkJggg==" ! pngdec ! videoconvert ! freeze ! videoconvert ! autovideosink
+ * gst-launch-1.0 -v dataurisrc uri="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAfElEQVQ4je2MwQnAIAxFgziA4EnczIsO4MEROo/gzZWc4xdTbe1R6LGRR74heYS7iKElzfcMiRnt4hf8gk8EayB6luefue/HzlJfCA50XsNjYRxprZmenXNIKSGEsC+QUqK1hhgj521BzhnWWiilUGvdF5RS4L2HMQZCCJy8sHMm2TYdJAAAAABJRU5ErkJggg==" ! pngdec ! videoconvert ! imagefreeze ! videoconvert ! autovideosink
* ]| This pipeline displays a small 16x16 PNG image from the data URI.
* </refsect2>
*/
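The embedded image in the dataurisrc example can be sanity-checked without GStreamer at all: the payload is just base64-encoded PNG data. A quick sketch in plain Python that decodes the payload (reproduced from the launch line above) and reads the dimensions out of the PNG IHDR chunk:

```python
import base64
import struct

# The base64 payload from the dataurisrc launch line above, split for readability.
b64 = ("iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9h"
       "AAAAfElEQVQ4je2MwQnAIAxFgziA4EnczIsO4MEROo/g"
       "zZWc4xdTbe1R6LGRR74heYS7iKElzfcMiRnt4hf8gk8E"
       "ayB6luefue/HzlJfCA50XsNjYRxprZmenXNIKSGEsC+Q"
       "UqK1hhgj521BzhnWWiilUGvdF5RS4L2HMQZCCJy8sHMm"
       "2TYdJAAAAABJRU5ErkJggg==")
data = base64.b64decode(b64)

# Every PNG starts with an 8-byte signature; the IHDR chunk follows, and its
# first two fields are the big-endian width and height.
assert data[:8] == b"\x89PNG\r\n\x1a\n"
width, height = struct.unpack(">II", data[16:24])
print(width, height)  # prints: 16 16, matching the "16x16 PNG" description
```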
* <refsect2>
* <title>Example launch lines</title>
* |[
- * gst-launch videotestsrc ! fpsdisplaysink
- * gst-launch videotestsrc ! fpsdisplaysink text-overlay=false
- * gst-launch filesrc location=video.avi ! decodebin name=d ! queue ! fpsdisplaysink d. ! queue ! fakesink sync=true
- * gst-launch playbin uri=file:///path/to/video.avi video-sink="fpsdisplaysink" audio-sink=fakesink
+ * gst-launch-1.0 videotestsrc ! fpsdisplaysink
+ * gst-launch-1.0 videotestsrc ! fpsdisplaysink text-overlay=false
+ * gst-launch-1.0 filesrc location=video.avi ! decodebin name=d ! queue ! fpsdisplaysink d. ! queue ! fakesink sync=true
+ * gst-launch-1.0 playbin uri=file:///path/to/video.avi video-sink="fpsdisplaysink" audio-sink=fakesink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v audiotestsrc num-buffers=10 ! gstchopmydata min-size=100
+ * gst-launch-1.0 -v audiotestsrc num-buffers=10 ! chopmydata min-size=100
- * max-size=200 step-size=2 ! fakesink -v
+ * max-size=200 step-size=2 ! fakesink
* ]|
*
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -m videotestsrc ! debugspy ! fakesink
+ * gst-launch-1.0 -m videotestsrc ! debugspy ! fakesink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v fakesrc ! watchdog ! fakesink
+ * gst-launch-1.0 -v fakesrc ! watchdog ! fakesink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[ FIXME
- * gst-launch -v filesrc location=/path/to/ts ! mpegtsdemux name=d ! queue ! mp3parse ! mad ! audioconvert ! autoaudiosink \
+ * gst-launch-1.0 -v filesrc location=/path/to/ts ! mpegtsdemux name=d ! queue ! mp3parse ! mad ! audioconvert ! autoaudiosink \
* d. ! queue ! mpeg2dec ! videoconvert ! r. \
* d. ! queue ! "subpicture/x-dvb" ! dvbsuboverlay name=r ! videoconvert ! autovideosink
* ]| This pipeline demuxes a MPEG-TS file with MPEG2 video, MP3 audio and embedded DVB subtitles and renders the subtitles on top of the video.
* <refsect2>
* <title>Example launch line</title>
* |[
- * FIXME: gst-launch ...
+ * FIXME: gst-launch-1.0 ...
* ]| FIXME: description for the sample launch pipeline
* </refsect2>
*/
* <refsect2>
* <title>Example pipeline</title>
* |[
- * echo 'Hello G-Streamer!' | gst-launch fdsrc fd=0 ! festival ! wavparse ! audioconvert ! alsasink
+ * echo 'Hello G-Streamer!' | gst-launch-1.0 fdsrc fd=0 ! festival ! wavparse ! audioconvert ! alsasink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v uridecodebin uri=/path/to/foo.bar ! fieldanalysis ! deinterlace ! videoconvert ! autovideosink
+ * gst-launch-1.0 -v uridecodebin uri=/path/to/foo.bar ! fieldanalysis ! deinterlace ! videoconvert ! autovideosink
* ]| This pipeline will analyse a video stream with default metrics and thresholds and output progressive frames.
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch audiotestsrc wave=saw ! freeverb ! autoaudiosink
- * gst-launch filesrc location="melo1.ogg" ! decodebin ! audioconvert ! freeverb ! autoaudiosink
+ * gst-launch-1.0 audiotestsrc wave=saw ! freeverb ! autoaudiosink
+ * gst-launch-1.0 filesrc location="melo1.ogg" ! decodebin ! audioconvert ! freeverb ! autoaudiosink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc ! burn ! videoconvert ! autovideosink
+ * gst-launch-1.0 -v videotestsrc ! burn ! videoconvert ! autovideosink
* ]| This pipeline shows the effect of burn on a test stream
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc ! chromium ! videoconvert ! autovideosink
+ * gst-launch-1.0 -v videotestsrc ! chromium ! videoconvert ! autovideosink
* ]| This pipeline shows the effect of chromium on a test stream
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc ! dilate ! videoconvert ! autovideosink
+ * gst-launch-1.0 -v videotestsrc ! dilate ! videoconvert ! autovideosink
* ]| This pipeline shows the effect of dilate on a test stream
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc ! dodge ! videoconvert ! autovideosink
+ * gst-launch-1.0 -v videotestsrc ! dodge ! videoconvert ! autovideosink
* ]| This pipeline shows the effect of dodge on a test stream
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc ! exclusion ! videoconvert ! autovideosink
+ * gst-launch-1.0 -v videotestsrc ! exclusion ! videoconvert ! autovideosink
* ]| This pipeline shows the effect of exclusion on a test stream
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc ! gaussianblur ! videoconvert ! autovideosink
+ * gst-launch-1.0 -v videotestsrc ! gaussianblur ! videoconvert ! autovideosink
* ]| This pipeline shows the effect of gaussianblur on a test stream
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc ! solarize ! videoconvert ! autovideosink
+ * gst-launch-1.0 -v videotestsrc ! solarize ! videoconvert ! autovideosink
* ]| This pipeline shows the effect of solarize on a test stream
* </refsect2>
*/
- replacing tcpserversink protocol=gdp with gdppay ! tcpserversink:
- raw audio:
- server:
- gst-launch -v audiotestsrc ! gdppay version=0.2 ! tcpserversink
+ gst-launch-1.0 -v audiotestsrc ! gdppay version=0.2 ! tcpserversink
- client:
- gst-launch -v tcpclientsrc protocol=gdp ! alsasink sync=FALSE
+ gst-launch-1.0 -v tcpclientsrc ! gdpdepay ! alsasink sync=FALSE
- vorbis packets:
- server:
- gst-launch -v audiotestsrc ! audioconvert ! vorbisenc ! gdppay version=0.2 ! tcpserversink
+ gst-launch-1.0 -v audiotestsrc ! audioconvert ! vorbisenc ! gdppay version=0.2 ! tcpserversink
- client:
- gst-launch -v tcpclientsrc protocol=gdp ! vorbisdec ! audioconvert ! alsasink sync=FALSE
+ gst-launch-1.0 -v tcpclientsrc ! gdpdepay ! vorbisdec ! audioconvert ! alsasink sync=FALSE
- ogg packets:
- server:
- gst-launch -v audiotestsrc ! audioconvert ! vorbisenc ! oggmux ! gdppay version=0.2 ! tcpserversink
+ gst-launch-1.0 -v audiotestsrc ! audioconvert ! vorbisenc ! oggmux ! gdppay version=0.2 ! tcpserversink
- client:
- gst-launch -v tcpclientsrc protocol=gdp ! oggdemux ! vorbisdec ! audioconvert ! alsasink sync=FALSE
+ gst-launch-1.0 -v tcpclientsrc ! gdpdepay ! oggdemux ! vorbisdec ! audioconvert ! alsasink sync=FALSE
In GStreamer 1.0, tcpclientsrc no longer has a protocol property, so
tcpclientsrc protocol=gdp must be replaced with tcpclientsrc ! gdpdepay.
*
* <refsect2>
* |[
- * gst-launch -v -m filesrc location=test.gdp ! gdpdepay ! xvimagesink
+ * gst-launch-1.0 -v -m filesrc location=test.gdp ! gdpdepay ! xvimagesink
* ]| This pipeline plays back a serialized video stream as created in the
* example for gdppay.
* </refsect2>
*
* <refsect2>
* |[
- * gst-launch -v -m videotestsrc num-buffers=50 ! gdppay ! filesink location=test.gdp
+ * gst-launch-1.0 -v -m videotestsrc num-buffers=50 ! gdppay ! filesink location=test.gdp
* ]| This pipeline creates a serialized video stream that can be played back
* with the example shown in gdpdepay.
* </refsect2>
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch -v filesrc location=foo.ogg ! decodebin ! audioconvert ! lame ! id3mux ! filesink location=foo.mp3
+ * gst-launch-1.0 -v filesrc location=foo.ogg ! decodebin ! audioconvert ! lamemp3enc ! id3mux ! filesink location=foo.mp3
* ]| A pipeline that transcodes a file from Ogg/Vorbis to mp3 format with
- * ID3 tags that contain the same metadata as the the Ogg/Vorbis file.
+ * ID3 tags that contain the same metadata as the Ogg/Vorbis file.
* Make sure the Ogg/Vorbis file actually has comments to preserve.
* |[
- * gst-launch -m filesrc location=foo.mp3 ! id3demux ! fakesink silent=TRUE 2> /dev/null | grep taglist
+ * gst-launch-1.0 -m filesrc location=foo.mp3 ! id3demux ! fakesink silent=TRUE
* ]| Verify that tags have been written.
* </refsect2>
*/
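The id3mux output can also be sanity-checked without GStreamer: an ID3v2 tag begins with the bytes "ID3", a version, flags, and a 28-bit "syncsafe" size (4 bytes of 7 significant bits each). A minimal parser sketch in plain Python; the sample header bytes below are made up for illustration:

```python
def syncsafe_to_int(b: bytes) -> int:
    """ID3v2 stores sizes as syncsafe integers: 7 significant bits per byte."""
    n = 0
    for byte in b:
        n = (n << 7) | (byte & 0x7F)
    return n

def parse_id3v2_header(data: bytes):
    """Return ((major, revision), tag_size) if data starts with an ID3v2 tag."""
    if len(data) < 10 or data[:3] != b"ID3":
        return None
    version = (data[3], data[4])        # e.g. (4, 0) for ID3v2.4.0
    size = syncsafe_to_int(data[6:10])  # size of the tag body in bytes
    return version, size

# A hypothetical ID3v2.4 header claiming a 257-byte tag body
# (0x02, 0x01 syncsafe -> 2 * 128 + 1 = 257):
header = b"ID3" + bytes([4, 0, 0]) + bytes([0, 0, 0x02, 0x01])
print(parse_id3v2_header(header))  # ((4, 0), 257)
```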
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v audiotestsrc ! queue ! interaudiosink
+ * gst-launch-1.0 -v audiotestsrc ! queue ! interaudiosink
* ]|
*
- * The interaudiosink element cannot be used effectively with gst-launch,
+ * The interaudiosink element cannot be used effectively with gst-launch-1.0,
* as it requires a second pipeline in the application to receive the
* audio.
* See the gstintertest.c example in the gst-plugins-bad source code for
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v interaudiosrc ! queue ! audiosink
+ * gst-launch-1.0 -v interaudiosrc ! queue ! autoaudiosink
* ]|
*
- * The interaudiosrc element cannot be used effectively with gst-launch,
+ * The interaudiosrc element cannot be used effectively with gst-launch-1.0,
* as it requires a second pipeline in the application to send audio.
* See the gstintertest.c example in the gst-plugins-bad source code for
* more details.
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v ... ! intersubsink
+ * gst-launch-1.0 -v ... ! intersubsink
* ]|
*
- * The intersubsink element cannot be used effectively with gst-launch,
+ * The intersubsink element cannot be used effectively with gst-launch-1.0,
- * as it requires a second pipeline in the application to send audio.
+ * as it requires a second pipeline in the application to receive the
+ * subtitles.
* See the gstintertest.c example in the gst-plugins-bad source code for
* more details.
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v intersubsrc ! kateenc ! oggmux ! filesink location=out.ogv
+ * gst-launch-1.0 -v intersubsrc ! kateenc ! oggmux ! filesink location=out.ogv
* ]|
*
- * The intersubsrc element cannot be used effectively with gst-launch,
+ * The intersubsrc element cannot be used effectively with gst-launch-1.0,
* as it requires a second pipeline in the application to send subtitles.
* See the gstintertest.c example in the gst-plugins-bad source code for
* more details.
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc ! intervideosink
+ * gst-launch-1.0 -v videotestsrc ! intervideosink
* ]|
*
- * The intervideosink element cannot be used effectively with gst-launch,
+ * The intervideosink element cannot be used effectively with gst-launch-1.0,
- * as it requires a second pipeline in the application to send video to.
+ * as it requires a second pipeline in the application to receive the video.
* See the gstintertest.c example in the gst-plugins-bad source code for
* more details.
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v intervideosrc ! queue ! xvimagesink
+ * gst-launch-1.0 -v intervideosrc ! queue ! xvimagesink
* ]|
*
- * The intersubsrc element cannot be used effectively with gst-launch,
+ * The intervideosrc element cannot be used effectively with gst-launch-1.0,
- * as it requires a second pipeline in the application to send subtitles.
+ * as it requires a second pipeline in the application to send video.
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc pattern=ball ! interlace ! xvimagesink
+ * gst-launch-1.0 -v videotestsrc pattern=ball ! interlace ! xvimagesink
* ]|
* This pipeline illustrates the combing effects caused by displaying
* two interlaced fields as one progressive frame.
* |[
- * gst-launch -v filesrc location=/path/to/file ! decodebin ! videorate !
+ * gst-launch-1.0 -v filesrc location=/path/to/file ! decodebin ! videorate !
* videoscale ! video/x-raw,format=\(string\)I420,width=720,height=480,
* framerate=60000/1001,pixel-aspect-ratio=11/10 !
- * interlace top-field-first=false ! ...
+ * interlace top-field-first=false ! autovideosink
* ]|
* This pipeline converts a progressive video stream into an interlaced
* stream suitable for standard definition NTSC.
* |[
- * gst-launch -v videotestsrc pattern=ball ! video/x-raw,
+ * gst-launch-1.0 -v videotestsrc pattern=ball ! video/x-raw,
* format=\(string\)I420,width=720,height=480,framerate=24000/1001,
- * pixel-aspect-ratio=11/10 ! interlace pattern=2:3 !
- * ...
+ * pixel-aspect-ratio=11/10 ! interlace field-pattern=2:3 !
+ * autovideosink
* ]|
* This pipeline converts a 24 frames per second progressive film stream into a
* 30000/1001 2:3:2:3... pattern telecined stream suitable for displaying film
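The 24000/1001 to 30000/1001 conversion described above follows directly from the 2:3 field pattern: every 4 progressive frames become 2+3+2+3 = 10 fields, i.e. 5 interlaced frames. A quick check with exact fractions (plain Python, no GStreamer required):

```python
from fractions import Fraction

film_rate = Fraction(24000, 1001)     # 23.976... fps progressive film
fields_per_4_frames = 2 + 3 + 2 + 3   # the 2:3 pulldown pattern
# 10 fields per 4 input frames, 2 fields per output frame:
interlaced_rate = film_rate * Fraction(fields_per_4_frames, 4) / 2
print(interlaced_rate)  # 30000/1001, i.e. 29.97... fps NTSC video
```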
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v filesrc location=file.mov ! decodebin ! combdetect !
+ * gst-launch-1.0 -v filesrc location=file.mov ! decodebin ! combdetect !
* xvimagesink
* ]|
* </refsect2>
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc pattern=ball ! video/x-raw,framerate=24/1 !
- * interlace field-pattern=3:2 !
+ * gst-launch-1.0 -v videotestsrc pattern=ball ! video/x-raw,framerate=24/1 !
+ * interlace field-pattern=2:3 !
* ivtc ! video/x-raw,framerate=24/1 ! fakesink
* ]|
*
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc num-buffers=1 ! jp2kenc ! \
+ * gst-launch-1.0 -v videotestsrc num-buffers=1 ! jp2kenc ! \
- * gstjp2kdecimator max-decomposition-levels=2 ! jp2kdec ! \
+ * jp2kdecimator max-decomposition-levels=2 ! jp2kdec ! \
* videoconvert ! autovideosink
* ]|
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc num-buffers=1 ! jpegenc ! jifmux ! filesink location=...
+ * gst-launch-1.0 -v videotestsrc num-buffers=1 ! jpegenc ! jifmux ! filesink location=...
* ]|
* The above pipeline renders a frame, encodes to jpeg, adds metadata and writes
* it to disk.
file trailer: EOI
tests:
-gst-launch videotestsrc num-buffers=1 ! jpegenc ! jifmux ! filesink location=test1.jpeg
-gst-launch videotestsrc num-buffers=1 ! jpegenc ! taginject tags="comment=\"test image\"" ! jifmux ! filesink location=test2.jpeg
+gst-launch-1.0 videotestsrc num-buffers=1 ! jpegenc ! jifmux ! filesink location=test1.jpeg
+gst-launch-1.0 videotestsrc num-buffers=1 ! jpegenc ! taginject tags="comment=\"test image\"" ! jifmux ! filesink location=test2.jpeg
*/
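The note above about the EOI trailer can be turned into a quick integrity check on the jifmux output: a JPEG stream must start with the SOI marker (0xFFD8) and end with EOI (0xFFD9). A minimal sketch in plain Python (no GStreamer needed):

```python
def looks_like_complete_jpeg(data: bytes) -> bool:
    """Check for the SOI (start of image) and EOI (end of image) markers."""
    return len(data) >= 4 and data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"

# Trivially small stand-in streams: SOI, one padding byte, then with/without EOI.
print(looks_like_complete_jpeg(b"\xff\xd8\x00\xff\xd9"))  # True
print(looks_like_complete_jpeg(b"\xff\xd8\x00"))          # False (truncated)
```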
#ifdef HAVE_CONFIG_H
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v souphttpsrc location=... ! jpegparse ! matroskamux ! filesink location=...
+ * gst-launch-1.0 -v souphttpsrc location=... ! jpegparse ! matroskamux ! filesink location=...
* ]|
* The above pipeline fetches a motion JPEG stream from an IP camera over
* HTTP and stores it in a matroska file.
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v filesrc location=/path/to/mxf ! mxfdemux ! audioconvert ! autoaudiosink
+ * gst-launch-1.0 -v filesrc location=/path/to/mxf ! mxfdemux ! audioconvert ! autoaudiosink
* ]| This pipeline demuxes an MXF file and outputs one of the contained raw audio streams.
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v filesrc location=/path/to/audio ! decodebin ! queue ! mxfmux name=m ! filesink location=file.mxf filesrc location=/path/to/video ! decodebin ! queue ! m.
+ * gst-launch-1.0 -v filesrc location=/path/to/audio ! decodebin ! queue ! mxfmux name=m ! filesink location=file.mxf filesrc location=/path/to/video ! decodebin ! queue ! m.
* ]| This pipeline muxes an audio and video file into a single MXF file.
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch filesrc test.nuv ! nuvdemux name=demux demux.audio_00 ! decodebin ! audioconvert ! audioresample ! autoaudiosink demux.video_00 ! queue ! decodebin ! videoconvert ! videoscale ! autovideosink
+ * gst-launch-1.0 filesrc location=test.nuv ! nuvdemux name=demux demux.audio_00 ! decodebin ! audioconvert ! audioresample ! autoaudiosink demux.video_00 ! queue ! decodebin ! videoconvert ! videoscale ! autovideosink
* ]| Play (parse and decode) an .nuv file and try to output it to
* an automatically detected soundcard and videosink. If the NUV file contains
* compressed audio or video data, this will only work if you have the
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch filesrc location=test.pnm ! pnmdec ! videoconvert ! autovideosink
+ * gst-launch-1.0 filesrc location=test.pnm ! pnmdec ! videoconvert ! autovideosink
* ]| The above pipeline reads a pnm file and renders it to the screen.
* </refsect2>
*/
- * <refsect>
+ * <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch videotestsrc num_buffers=1 ! videoconvert ! "video/x-raw,format=GRAY8" ! pnmenc ascii=true ! filesink location=test.pnm
+ * gst-launch-1.0 videotestsrc num-buffers=1 ! videoconvert ! "video/x-raw,format=GRAY8" ! pnmenc ascii=true ! filesink location=test.pnm
* ]| The above pipeline writes a test pnm file (ASCII encoding).
* </refsect2>
*/
Creating example data
=====================
-gst-launch videotestsrc num_buffers=300 ! \
+gst-launch-1.0 videotestsrc num-buffers=300 ! \
video/x-raw,format=\(string\)I420,width=320,height=240 ! \
filesink location=raw
Reading example data
====================
-gst-launch filesrc location=raw ! \
+gst-launch-1.0 filesrc location=raw ! \
videoparse format=I420 width=320 height=240 framerate=30/1 ! \
xvimagesink
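videoparse needs the explicit geometry because raw files carry no headers; the frame size is implied entirely by the caps. For the I420 example above, each frame is one full-resolution luma plane plus two 2x2-subsampled chroma planes. A quick size check (plain Python):

```python
def i420_frame_size(width: int, height: int) -> int:
    # I420: full-resolution Y plane plus two 2x2-subsampled U and V planes.
    return width * height + 2 * ((width // 2) * (height // 2))

frame = i420_frame_size(320, 240)
print(frame)         # 115200 bytes per frame
print(frame * 300)   # expected size of the 300-buffer 'raw' file above
```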
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v -m filesrc location="audiofile" ! decodebin ! removesilence remove=true ! wavenc ! filesink location=without_audio.wav
+ * gst-launch-1.0 -v -m filesrc location="audiofile" ! decodebin ! removesilence remove=true ! wavenc ! filesink location=without_audio.wav
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch gnomevfssrc location=http://some.server/session.sdp ! sdpdemux ! fakesink
+ * gst-launch-1.0 souphttpsrc location=http://some.server/session.sdp ! sdpdemux ! fakesink
* ]| Establish a connection to an HTTP server that contains an SDP session description
* that gets parsed by sdpdemux and send the raw RTP packets to a fakesink.
* </refsect2>
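sdpdemux can do this because SDP (RFC 4566) is a simple line-based format of single-letter keys; the m= lines describe the media streams for which RTP receivers must be set up. A minimal illustration of that structure in plain Python (the session text below is made up):

```python
# A made-up SDP session description with one audio and one video stream.
sdp = """v=0
o=- 0 0 IN IP4 127.0.0.1
s=Example session
m=audio 5004 RTP/AVP 96
a=rtpmap:96 opus/48000/2
m=video 5006 RTP/AVP 97
a=rtpmap:97 H264/90000
"""

# Pull the media type out of each m= line, as an SDP demuxer would.
media = [line.split()[0].split("=", 1)[1]
         for line in sdp.splitlines() if line.startswith("m=")]
print(media)  # ['audio', 'video']
```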
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch filesrc location=test.ogg ! decodebin ! audioconvert ! speed speed=1.5 ! audioconvert ! audioresample ! autoaudiosink
+ * gst-launch-1.0 filesrc location=test.ogg ! decodebin ! audioconvert ! speed speed=1.5 ! audioconvert ! audioresample ! autoaudiosink
* ]| Plays an .ogg file at 1.5x speed.
* </refsect2>
*/
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch -v filesrc location=sine.ogg ! oggdemux ! vorbisdec ! audioconvert ! stereo ! audioconvert ! audioresample ! alsasink
+ * gst-launch-1.0 -v filesrc location=sine.ogg ! oggdemux ! vorbisdec ! audioconvert ! stereo ! audioconvert ! audioresample ! alsasink
- * ]| Play an Ogg/Vorbis file.
+ * ]| Play an Ogg/Vorbis file with an enhanced stereo effect.
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v filesrc location=some_file.ogv ! decodebin !
+ * gst-launch-1.0 -v filesrc location=some_file.ogv ! decodebin !
* scenechange ! theoraenc ! fakesink
* ]|
* </refsect2>
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v fakesrc ! videodiff ! FIXME ! fakesink
+ * gst-launch-1.0 -v fakesrc ! videodiff ! FIXME ! fakesink
* ]|
* FIXME Describe what the pipeline does.
* </refsect2>
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc ! zebrastripe ! xvimagesink
+ * gst-launch-1.0 -v videotestsrc ! zebrastripe ! xvimagesink
* ]|
* Marks overexposed areas of the video with zebra stripes.
*
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v fakesrc ! gstdiracparse ! FIXME ! fakesink
+ * gst-launch-1.0 -v fakesrc ! gstdiracparse ! FIXME ! fakesink
* ]|
* FIXME Describe what the pipeline does.
* </refsect2>
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch videotestsrc ! simplevideomark ! ximagesink
+ * gst-launch-1.0 videotestsrc ! simplevideomark ! videoconvert ! ximagesink
* ]| Add the default black/white squares at the bottom left of the video frames.
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch videotestsrc ! simplevideomarkdetect ! videoconvert ! ximagesink
+ * gst-launch-1.0 videotestsrc ! simplevideomarkdetect ! videoconvert ! ximagesink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -m videotestsrc ! videoanalyse ! videoconvert ! ximagesink
+ * gst-launch-1.0 -m videotestsrc ! videoanalyse ! videoconvert ! ximagesink
* ]| This pipeline emits messages to the console for each frame that has been analysed.
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v filesrc location=file.y4m ! y4mdec ! xvimagesink
+ * gst-launch-1.0 -v filesrc location=file.y4m ! y4mdec ! xvimagesink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v videotestsrc pattern=ball ! interlace ! yadif ! xvimagesink
+ * gst-launch-1.0 -v videotestsrc pattern=ball ! interlace ! yadif ! xvimagesink
* ]|
* This pipeline creates an interlaced test pattern, and then deinterlaces
* it using the yadif filter.
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v filesrc location=file.mov ! qtdemux ! queue ! aacparse ! atdec ! autoaudiosink
+ * gst-launch-1.0 -v filesrc location=file.mov ! qtdemux ! queue ! aacparse ! atdec ! autoaudiosink
* ]|
* Decode aac audio from a mov file
* </refsect2>
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v -m avfassetsrc uri="file://movie.mp4" ! autovideosink
+ * gst-launch-1.0 -v -m avfassetsrc uri="file://movie.mp4" ! autovideosink
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch iosassetsrc uri=assets-library://asset/asset.M4V?id=11&ext=M4V ! decodebin ! autoaudiosink
+ * gst-launch-1.0 iosassetsrc uri="assets-library://asset/asset.M4V?id=11&ext=M4V" ! decodebin ! autoaudiosink
- * ]| Plays asset with id a song.ogg from local dir.
+ * ]| Plays an asset from the local iOS assets library.
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v filesrc location=file.mov ! qtdemux ! queue ! h264parse ! vtdec ! videoconvert ! autovideosink
+ * gst-launch-1.0 -v filesrc location=file.mov ! qtdemux ! queue ! h264parse ! vtdec ! videoconvert ! autovideosink
* ]|
* Decode h264 video from a mov file.
* </refsect2>
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch -v directsoundsrc ! audioconvert ! vorbisenc ! oggmux ! filesink location=dsound.ogg
+ * gst-launch-1.0 -v directsoundsrc ! audioconvert ! vorbisenc ! oggmux ! filesink location=dsound.ogg
* ]| Record from DirectSound and encode to Ogg/Vorbis.
* </refsect2>
*/
Try:
- gst-launch dvbsrc frequency=11954 polarity=h symbol-rate=27500 pids=210:220
+ gst-launch-1.0 dvbsrc frequency=11954 polarity=h symbol-rate=27500 pids=210:220
! mpegtsdemux es-pids=210:220 name=demux ! queue ! mpeg2dec
! xvimagesink demux. ! queue ! mad ! audioconvert ! autoaudiosink
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=514000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! mpegtsdemux name=demux ! queue max-size-buffers=0 max-size-time=0 ! mpeg2dec ! xvimagesink demux. ! queue max-size-buffers=0 max-size-time=0 ! mad ! alsasink
+ * gst-launch-1.0 dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=514000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! mpegtsdemux name=demux ! queue max-size-buffers=0 max-size-time=0 ! mpeg2dec ! xvimagesink demux. ! queue max-size-buffers=0 max-size-time=0 ! mad ! alsasink
* ]| Captures a full transport stream from DVB card 0 that is a DVB-T card at tuned frequency 514000000 Hz with other parameters as seen in the pipeline and renders the first TV program on the transport stream.
* |[
- * gst-launch dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=514000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 pids=100:256:257 ! mpegtsdemux name=demux ! queue max-size-buffers=0 max-size-time=0 ! mpeg2dec ! xvimagesink demux. ! queue max-size-buffers=0 max-size-time=0 ! mad ! alsasink
+ * gst-launch-1.0 dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=514000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 pids=100:256:257 ! mpegtsdemux name=demux ! queue max-size-buffers=0 max-size-time=0 ! mpeg2dec ! xvimagesink demux. ! queue max-size-buffers=0 max-size-time=0 ! mad ! alsasink
* ]| Captures and renders a transport stream from DVB card 0 that is a DVB-T card for a program at tuned frequency 514000000 Hz with PMT PID 100 and elementary stream PIDs of 256, 257 with other parameters as seen in the pipeline.
* |[
- * gst-launch dvbsrc polarity="h" frequency=11302000 symbol-rate=27500 diseqc-source=0 pids=50:102:103 ! mpegtsdemux name=demux ! queue max-size-buffers=0 max-size-time=0 ! mpeg2dec ! xvimagesink demux. ! queue max-size-buffers=0 max-size-time=0 ! mad ! alsasink
+ * gst-launch-1.0 dvbsrc polarity="h" frequency=11302000 symbol-rate=27500 diseqc-source=0 pids=50:102:103 ! mpegtsdemux name=demux ! queue max-size-buffers=0 max-size-time=0 ! mpeg2dec ! xvimagesink demux. ! queue max-size-buffers=0 max-size-time=0 ! mad ! alsasink
* ]| Captures and renders a transport stream from DVB card 0 that is a DVB-S card for a program at tuned frequency 11302000 kHz, symbol rate of 27500 kBd (kilo bauds) with PMT PID of 50 and elementary stream PIDs of 102 and 103.
* |[
- gst-launch dvbsrc frequency=515142857 guard=16 trans-mode="8k" isdbt-layer-enabled=7 isdbt-partial-reception=1 isdbt-layera-fec="2/3" isdbt-layera-modulation="QPSK" isdbt-layera-segment-count=1 isdbt-layera-time-interleaving=4 isdbt-layerb-fec="3/4" isdbt-layerb-modulation="qam-64" isdbt-layerb-segment-count=12 isdbt-layerb-time-interleaving=2 isdbt-layerc-fec="1/2" isdbt-layerc-modulation="qam-64" isdbt-layerc-segment-count=0 isdbt-layerc-time-interleaving=0 delsys="isdb-t" ! tsdemux ! "video/x-h264" ! h264parse ! queue ! avdec_h264 ! videoconvert ! queue ! autovideosink
+ gst-launch-1.0 dvbsrc frequency=515142857 guard=16 trans-mode="8k" isdbt-layer-enabled=7 isdbt-partial-reception=1 isdbt-layera-fec="2/3" isdbt-layera-modulation="QPSK" isdbt-layera-segment-count=1 isdbt-layera-time-interleaving=4 isdbt-layerb-fec="3/4" isdbt-layerb-modulation="qam-64" isdbt-layerb-segment-count=12 isdbt-layerb-time-interleaving=2 isdbt-layerc-fec="1/2" isdbt-layerc-modulation="qam-64" isdbt-layerc-segment-count=0 isdbt-layerc-time-interleaving=0 delsys="isdb-t" ! tsdemux ! "video/x-h264" ! h264parse ! queue ! avdec_h264 ! videoconvert ! queue ! autovideosink
* ]| Captures and renders the video track of TV Paraíba HD (Globo affiliate) in Campina Grande, Brazil. This is an ISDB-T (Brazilian ISDB-Tb variant) broadcast.
* </refsect2>
*/
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch -v filesrc location=music.ogg ! oggdemux ! vorbisdec ! audioconvert ! audioresample ! opeslessink
+ * gst-launch-1.0 -v filesrc location=music.ogg ! oggdemux ! vorbisdec ! audioconvert ! audioresample ! openslessink
* ]| Play an Ogg/Vorbis file.
* </refsect2>
*
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch -v openslessrc ! audioconvert ! vorbisenc ! oggmux ! filesink location=recorded.ogg
+ * gst-launch-1.0 -v openslessrc ! audioconvert ! vorbisenc ! oggmux ! filesink location=recorded.ogg
* ]| Record from default audio input and encode to Ogg/Vorbis.
* </refsect2>
*
* <refsect2>
* <title>Example launch lines</title>
* |[
- * gst-launch -v videotestsrc ! shmsink socket-path=/tmp/blah shm-size=1000000
+ * gst-launch-1.0 -v videotestsrc ! shmsink socket-path=/tmp/blah shm-size=1000000
* ]| Send video to shm buffers.
* </refsect2>
*/
* <refsect2>
* <title>Example launch lines</title>
* |[
- * gst-launch shmsrc socket-path=/tmp/blah ! \
- * "video/x-raw-yuv, format=(fourcc)YUY2, color-matrix=(string)sdtv, \
- * chroma-site=(string)mpeg2, width=(int)320, height=(int)240, framerate=(fraction)30/1" ! autovideosink
+ * gst-launch-1.0 shmsrc socket-path=/tmp/blah ! \
+ * "video/x-raw, format=(string)YUY2, width=(int)320, \
+ * height=(int)240, framerate=(fraction)30/1" ! autovideosink
* ]| Render video from shm buffers.
* </refsect2>
*/
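When sizing shm-size for the shmsink example above, note that one frame of the YUY2 caps used by the companion shmsrc pipeline takes width*height*2 bytes (YUY2 is a packed 4:2:2 format, 2 bytes per pixel on average), so the example's 1000000 bytes holds a handful of 320x240 frames. A quick check:

```python
def yuy2_frame_size(width: int, height: int) -> int:
    # YUY2 packs 4:2:2 video at 2 bytes per pixel on average.
    return width * height * 2

frame = yuy2_frame_size(320, 240)
print(frame)             # 153600 bytes per frame
print(1000000 // frame)  # whole frames that fit in the example shm-size
```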
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v -m fakesrc ! vdpauvideopostprocess ! fakesink silent=TRUE
+ * gst-launch-1.0 -v -m fakesrc ! vdpauvideopostprocess ! fakesink silent=TRUE
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v -m fakesrc ! vdpaumpegdec ! fakesink silent=TRUE
+ * gst-launch-1.0 -v -m fakesrc ! vdpaumpegdec ! fakesink silent=TRUE
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v -m fakesrc ! vdpaumpeg4dec ! fakesink silent=TRUE
+ * gst-launch-1.0 -v -m fakesrc ! vdpaumpeg4dec ! fakesink silent=TRUE
* ]|
* </refsect2>
*/
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch -v ksvideosrc do-stats=TRUE ! ffmpegcolorspace ! dshowvideosink
+ * gst-launch-1.0 -v ksvideosrc do-stats=TRUE ! videoconvert ! dshowvideosink
* ]| Capture from a camera and render using dshowvideosink.
* |[
- * gst-launch -v ksvideosrc do-stats=TRUE ! image/jpeg, width=640, height=480
- * ! jpegdec ! ffmpegcolorspace ! dshowvideosink
+ * gst-launch-1.0 -v ksvideosrc do-stats=TRUE ! image/jpeg, width=640, height=480
+ * ! jpegdec ! videoconvert ! dshowvideosink
* ]| Capture from an MJPEG camera and render using dshowvideosink.
* </refsect2>
*/
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch dx9screencapsrc ! ffmpegcolorspace ! dshowvideosink
+ * gst-launch-1.0 dx9screencapsrc ! videoconvert ! dshowvideosink
* ]| Capture the desktop and display it.
* |[
- * gst-launch dx9screencapsrc x=100 y=100 width=320 height=240 !
- * ffmpegcolorspace ! dshowvideosink
+ * gst-launch-1.0 dx9screencapsrc x=100 y=100 width=320 height=240 !
+ * videoconvert ! dshowvideosink
* ]| Capture a portion of the desktop and display it.
* </refsect2>
*/
* <refsect2>
* <title>Example pipelines</title>
* |[
- * gst-launch gdiscreencapsrc ! ffmpegcolorspace ! dshowvideosink
+ * gst-launch-1.0 gdiscreencapsrc ! videoconvert ! dshowvideosink
* ]| Capture the desktop and display it.
* |[
- * gst-launch gdiscreencapsrc x=100 y=100 width=320 height=240 cursor=TRUE
- * ! ffmpegcolorspace ! dshowvideosink
+ * gst-launch-1.0 gdiscreencapsrc x=100 y=100 width=320 height=240 cursor=TRUE
+ * ! videoconvert ! dshowvideosink
* ]| Capture a portion of the desktop, including the mouse cursor, and
* display it.
* </refsect2>
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch -v fakesrc ! $replace ! FIXME ! fakesink
+ * gst-launch-1.0 -v fakesrc ! $replace ! FIXME ! fakesink
* ]|
* FIXME Describe what the pipeline does.
* </refsect2>