id3demux : Extracter/Metadata
udpsrc : Source/Network/Protocol/Device
videomixer : Mixer/Video
- ffmpegcolorspace : Filter/Video (intended use to convert video with as little
+ videoconvert : Filter/Video (intended use is to convert video with as little
visible change as possible)
vertigotv : Effect/Video (intended use is to change the video)
volume : Effect/Audio (intended use is to change the audio data)
e.g. urisource-http or urisource-mms
- element-$(ELEMENT_REQUIRED),
- e.g. element-ffmpegcolorspace
+ e.g. element-videoconvert
- decoder-$(CAPS_REQUIRED)
e.g. decoder-audio/x-vorbis or
To automatically detect the right codec in a pipeline, try
<programlisting>
gst-launch filesrc location=my-random-media-file.mpeg ! decodebin !
- audioconvert ! audioresample ! osssink
+ audioconvert ! pulsesink
</programlisting>
or
<programlisting>
gst-launch filesrc location=my-random-media-file.mpeg ! decodebin !
- ffmpegcolorspace ! xvimagesink
+ videoconvert ! xvimagesink
</programlisting>
Something more complicated:
<programlisting>
gst-launch filesrc location=my-random-media-file.mpeg ! decodebin name=decoder
- decoder. ! ffmpegcolorspace ! xvimagesink
- decoder. ! audioconvert ! audioresample ! osssink
+ decoder. ! videoconvert ! xvimagesink
+ decoder. ! audioconvert ! pulsesink
</programlisting>
</para>
<para>
filter = gst_element_factory_make ("capsfilter", "filter");
g_assert (filter != NULL); /* should always exist */
- csp = gst_element_factory_make ("ffmpegcolorspace", "csp");
+ csp = gst_element_factory_make ("videoconvert", "csp");
if (csp == NULL)
- g_error ("Could not create 'ffmpegcolorspace' element");
+ g_error ("Could not create 'videoconvert' element");
sink = gst_element_factory_make ("xvimagesink", "sink");
if (sink == NULL) {
gst_bin_add_many (GST_BIN (pipeline), src, filter, csp, sink, NULL);
gst_element_link_many (src, filter, csp, sink, NULL);
- filtercaps = gst_caps_new_simple ("video/x-raw-rgb",
+ filtercaps = gst_caps_new_simple ("video/x-raw",
+ "format", G_TYPE_STRING, "RGB16",
"width", G_TYPE_INT, 384,
"height", G_TYPE_INT, 288,
"framerate", GST_TYPE_FRACTION, 25, 1,
- "bpp", G_TYPE_INT, 16,
- "depth", G_TYPE_INT, 16,
- "endianness", G_TYPE_INT, G_BYTE_ORDER,
NULL);
g_object_set (G_OBJECT (filter), "caps", filtercaps, NULL);
gst_caps_unref (filtercaps);
pipeline = gst_pipeline_new ("pipeline");
fakesrc = gst_element_factory_make ("fakesrc", "source");
flt = gst_element_factory_make ("capsfilter", "flt");
- conv = gst_element_factory_make ("ffmpegcolorspace", "conv");
+ conv = gst_element_factory_make ("videoconvert", "conv");
videosink = gst_element_factory_make ("xvimagesink", "videosink");
/* setup */
g_object_set (G_OBJECT (flt), "caps",
- gst_caps_new_simple ("video/x-raw-rgb",
+ gst_caps_new_simple ("video/x-raw",
+ "format", G_TYPE_STRING, "RGB16",
"width", G_TYPE_INT, 384,
"height", G_TYPE_INT, 288,
"framerate", GST_TYPE_FRACTION, 1, 1,
- "bpp", G_TYPE_INT, 16,
- "depth", G_TYPE_INT, 16,
- "endianness", G_TYPE_INT, G_BYTE_ORDER,
NULL), NULL);
gst_bin_add_many (GST_BIN (pipeline), fakesrc, flt, conv, videosink, NULL);
gst_element_link_many (fakesrc, flt, conv, videosink, NULL);
<para>
&GStreamer; contains a bunch of conversion plugins that most
applications will find useful. Specifically, those are videoscalers
- (videoscale), colorspace convertors (ffmpegcolorspace), audio format
+ (videoscale), colorspace convertors (videoconvert), audio format
convertors and channel resamplers (audioconvert) and audio samplerate
convertors (audioresample). Those convertors don't do anything when not
required; they will act in passthrough mode. They will activate when
<command>gst-launch</command> is a simple script-like commandline
application that can be used to test pipelines. For example, the
command <command>gst-launch audiotestsrc ! audioconvert !
- audio/x-raw-int,channels=2 ! alsasink</command> will run
+ audio/x-raw,channels=2 ! alsasink</command> will run
a pipeline which generates a sine-wave audio stream and plays it
to your ALSA audio card. <command>gst-launch</command> also allows
the use of threads (will be used automatically as required or as queue
or even omit the padname to automatically select a pad. Using
all this, the pipeline
<command>gst-launch filesrc location=file.ogg ! oggdemux name=d
- d. ! queue ! theoradec ! ffmpegcolorspace ! xvimagesink
+ d. ! queue ! theoradec ! videoconvert ! xvimagesink
d. ! queue ! vorbisdec ! audioconvert ! audioresample ! alsasink
</command> will play an Ogg file
containing a Theora video-stream and a Vorbis audio-stream. You can
<screen>
gst-launch filesrc location=redpill.vob ! dvddemux name=demux \
demux.audio_00 ! queue ! a52dec ! audioconvert ! audioresample ! osssink \
- demux.video_00 ! queue ! mpeg2dec ! ffmpegcolorspace ! xvimagesink
+ demux.video_00 ! queue ! mpeg2dec ! videoconvert ! xvimagesink
</screen>
</para>
<quote>audio/x-vorbis</quote>. The source pad will be used
to send raw (decoded) audio samples to the next element, with
a raw audio mime-type (in this case,
- <quote>audio/x-raw-float</quote>). The source pad will also
+ <quote>audio/x-raw</quote>). The source pad will also
contain properties for the audio samplerate and the number of
channels, plus some more that you don't need to worry about
for now.
</para>
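The samplerate/channels properties described above can be pictured as a small key/value structure attached to the pad. The following is a minimal plain-C sketch of that idea only, not the GStreamer API (in real code you would query the pad with gst_pad_get_current_caps() and read the fields with gst_structure_get_int(); the struct and function names here are made up for illustration):

```c
#include <string.h>

/* Plain-C model (NOT the GStreamer API) of the information a negotiated
 * source pad carries: a media type plus rate/channels fields. */
typedef struct {
  const char *media_type;   /* e.g. "audio/x-raw" */
  int rate;                 /* samplerate in Hz */
  int channels;             /* number of channels */
} RawAudioCaps;

/* Accept only raw audio and extract samplerate/channels; returns 1 on
 * success, 0 when the media type is not raw audio. */
int
caps_get_audio_info (const RawAudioCaps *caps, int *rate, int *channels)
{
  if (strcmp (caps->media_type, "audio/x-raw") != 0)
    return 0;                 /* wrong media type, e.g. still encoded */
  *rate = caps->rate;
  *channels = caps->channels;
  return 1;
}
```

The same "check the media type first, then read the fields" shape shows up later in the real code, where the name comes from gst_structure_get_name() and a strcmp() against "audio/x-raw".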
<programlisting>
+
Pad Templates:
SRC template: 'src'
Availability: Always
Capabilities:
- audio/x-raw-float
- rate: [ 8000, 50000 ]
- channels: [ 1, 2 ]
- endianness: 1234
- width: 32
- buffer-frames: 0
-
+ audio/x-raw
+ format: F32LE
+ rate: [ 1, 2147483647 ]
+ channels: [ 1, 256 ]
+
SINK template: 'sink'
Availability: Always
Capabilities:
You can do caps filtering by inserting a capsfilter element into
your pipeline and setting its <quote>caps</quote> property. Caps
filters are often placed after converter elements like audioconvert,
- audioresample, ffmpegcolorspace or videoscale to force those
+ audioresample, videoconvert or videoscale to force those
converters to convert data to a specific output format at a
certain point in a stream.
</para>
gboolean link_ok;
GstCaps *caps;
- caps = gst_caps_new_simple ("video/x-raw-yuv",
- "format", GST_TYPE_FOURCC, GST_MAKE_FOURCC ('I', '4', '2', '0'),
+ caps = gst_caps_new_simple ("video/x-raw",
+ "format", G_TYPE_STRING, "I420",
"width", G_TYPE_INT, 384,
"height", G_TYPE_INT, 288,
"framerate", GST_TYPE_FRACTION, 25, 1,
GstCaps *caps;
caps = gst_caps_new_full (
- gst_structure_new ("video/x-raw-yuv",
+ gst_structure_new ("video/x-raw",
"width", G_TYPE_INT, 384,
"height", G_TYPE_INT, 288,
"framerate", GST_TYPE_FRACTION, 25, 1,
NULL),
- gst_structure_new ("video/x-raw-rgb",
+ gst_structure_new ("video/x-bayer",
"width", G_TYPE_INT, 384,
"height", G_TYPE_INT, 288,
"framerate", GST_TYPE_FRACTION, 25, 1,
</para>
<programlisting>
[..]
- caps = gst_caps_new_simple ("audio/x-raw-float",
- "width", G_TYPE_INT, 32,
- "endianness", G_TYPE_INT, G_BYTE_ORDER,
+ caps = gst_caps_new_simple ("audio/x-raw",
+ "format", G_TYPE_STRING, GST_AUDIO_NE(F32),
"buffer-frames", G_TYPE_INT, <bytes-per-frame>,
"rate", G_TYPE_INT, <samplerate>,
"channels", G_TYPE_INT, <num-channels>, NULL);
GST_PAD_SINK,
GST_PAD_ALWAYS,
GST_STATIC_CAPS (
- "audio/x-raw-int, "
- "width = (int) 16, "
- "depth = (int) 16, "
- "endianness = (int) BYTE_ORDER, "
+ "audio/x-raw, "
+ "format = (string) " GST_AUDIO_NE (S16) ", "
"channels = (int) { 1, 2 }, "
"rate = (int) [ 8000, 96000 ]"
)
* and from that audio type, we need to get the samplerate and
* number of channels. */
mime = gst_structure_get_name (structure);
- if (strcmp (mime, "audio/x-raw-int") != 0) {
+ if (strcmp (mime, "audio/x-raw") != 0) {
GST_WARNING ("Wrong mimetype %s provided, we only support %s",
- mime, "audio/x-raw-int");
+ mime, "audio/x-raw");
return FALSE;
}
* stream structure (strh/strf). */
[..]
return GST_PAD_LINK_OK;
- } else if (!strcmp (str, "audio/x-raw-int")) {
+ } else if (!strcmp (str, "audio/x-raw")) {
/* See above, but now with the raw audio tag (0x0001). */
[..]
return GST_PAD_LINK_OK;
NULL);
break;
case 0x0001: /* pcm */
- caps = gst_caps_new_simple ("audio/x-raw-int",
+ caps = gst_caps_new_simple ("audio/x-raw",
[..]);
break;
[..]
* <itemizedlist>
* <title>Example elements</title>
* <listitem>Level</listitem>
- * <listitem>Videoscale, audioconvert, ffmpegcolorspace, audioresample in
+ * <listitem>Videoscale, audioconvert, videoconvert, audioresample in
* certain modes.</listitem>
* </itemizedlist>
* </listitem>
* <listitem>Volume</listitem>
* <listitem>Audioconvert in certain modes (signed/unsigned
* conversion)</listitem>
- * <listitem>ffmpegcolorspace in certain modes (endianness
+ * <listitem>videoconvert in certain modes (endianness
* swapping)</listitem>
* </itemizedlist>
* </listitem>
* </itemizedlist>
* <itemizedlist>
* <title>Example elements</title>
- * <listitem>Videoscale, ffmpegcolorspace, audioconvert when doing
+ * <listitem>Videoscale, videoconvert, audioconvert when doing
* scaling/conversions</listitem>
* </itemizedlist>
* </listitem>
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch videotestsrc ! video/x-raw-gray ! ffmpegcolorspace ! autovideosink
+ * gst-launch videotestsrc ! video/x-raw,format=GRAY8 ! videoconvert ! autovideosink
* ]| Limits acceptable video from videotestsrc to be grayscale.
* </refsect2>
*/
* This ensures that outgoing buffers have caps if we can, so
* that pipelines like:
* gst-launch filesrc location=rawsamples.raw !
- * audio/x-raw-int,width=16,depth=16,rate=48000,channels=2,
- * endianness=4321,signed='(boolean)'true ! alsasink
+ * audio/x-raw,format=S16LE,rate=48000,channels=2 ! alsasink
* will work.
*/
static GstFlowReturn
* <refsect2>
* <title>Example launch line</title>
* |[
- * gst-launch filesrc location=song.ogg ! decodebin2 ! tee name=t ! queue ! autoaudiosink t. ! queue ! audioconvert ! goom ! ffmpegcolorspace ! autovideosink
+ * gst-launch filesrc location=song.ogg ! decodebin2 ! tee name=t ! queue ! autoaudiosink t. ! queue ! audioconvert ! goom ! videoconvert ! autovideosink
 * ]| Play song.ogg from the local directory and render visualisations using the goom
* element.
* </refsect2>
#define NUM_CAPS 10000
+#define AUDIO_FORMATS_ALL " { S8, U8, " \
+ "S16LE, S16BE, U16LE, U16BE, " \
+ "S24_32LE, S24_32BE, U24_32LE, U24_32BE, " \
+ "S32LE, S32BE, U32LE, U32BE, " \
+ "S24LE, S24BE, U24LE, U24BE, " \
+ "S20LE, S20BE, U20LE, U20BE, " \
+ "S18LE, S18BE, U18LE, U18BE, " \
+ "F32LE, F32BE, F64LE, F64BE }"
#define GST_AUDIO_INT_PAD_TEMPLATE_CAPS \
- "audio/x-raw-int, " \
+ "audio/x-raw, " \
+ "format = (string) " AUDIO_FORMATS_ALL ", " \
"rate = (int) [ 1, MAX ], " \
- "channels = (int) [ 1, MAX ], " \
- "endianness = (int) { LITTLE_ENDIAN, BIG_ENDIAN }, " \
- "width = (int) { 8, 16, 24, 32 }, " \
- "depth = (int) [ 1, 32 ], " \
- "signed = (boolean) { true, false }"
+ "channels = (int) [ 1, MAX ]"
gint
static const gchar *factories[NUM_FLAVOURS][NUM_ELEM] = {
{"audiotestsrc", "adder", "volume", "audioconvert"},
- {"videotestsrc", "videomixer", "videoscale", "ffmpegcolorspace"}
+ {"videotestsrc", "videomixer", "videoscale", "videoconvert"}
};
static const gchar *sink_pads[NUM_FLAVOURS][NUM_ELEM] = {
- {NULL, "sink%d", NULL, NULL},
- {NULL, "sink_%d", NULL, NULL}
+ {NULL, "sink_%u", NULL, NULL},
+ {NULL, "sink_%u", NULL, NULL}
};
"video/x-raw, red_mask = (int) 0x80000000",
"video/x-raw, red_mask = (int) 0xFF000000",
/* result from
- * gst-launch ... ! "video/x-raw-rgb, red_mask=(int)0xFF000000" ! ... */
+ * gst-launch ... ! "video/x-raw, red_mask=(int)0xFF000000" ! ... */
"video/x-raw,\\ red_mask=(int)0xFF000000",
};
gint results[] = {
"osxvideosink", or "aasink". Keep in mind though that different sinks might
accept different formats and even the same sink might accept different formats
on different machines, so you might need to add converter elements like
-audioconvert and audioresample (for audio) or ffmpegcolorspace (for video)
+audioconvert and audioresample (for audio) or videoconvert (for video)
in front of the sink to make things work.
.B Audio playback
Play both video and audio portions of an MPEG movie
.B
- gst\-launch filesrc location=movie.mpg ! mpegdemux name=demuxer demuxer. ! queue ! mpeg2dec ! ffmpegcolorspace ! sdlvideosink demuxer. ! queue ! mad ! audioconvert ! audioresample ! osssink
+ gst\-launch filesrc location=movie.mpg ! mpegdemux name=demuxer demuxer. ! queue ! mpeg2dec ! videoconvert ! sdlvideosink demuxer. ! queue ! mad ! audioconvert ! audioresample ! osssink
.br
Play an AVI movie with an external text subtitle stream
(here: textoverlay) has multiple sink or source pads.
.B
- gst\-launch textoverlay name=overlay ! ffmpegcolorspace ! videoscale ! autovideosink filesrc location=movie.avi ! decodebin2 ! ffmpegcolorspace ! overlay.video_sink filesrc location=movie.srt ! subparse ! overlay.text_sink
+ gst\-launch textoverlay name=overlay ! videoconvert ! videoscale ! autovideosink filesrc location=movie.avi ! decodebin2 ! videoconvert ! overlay.video_sink filesrc location=movie.srt ! subparse ! overlay.text_sink
.br
Play an AVI movie with an external text subtitle stream using playbin2
Stream video using RTP and network elements.
.B
- gst\-launch v4l2src ! video/x-raw-yuv,width=128,height=96,format='(fourcc)'UYVY ! ffmpegcolorspace ! ffenc_h263 ! video/x-h263 ! rtph263ppay pt=96 ! udpsink host=192.168.1.1 port=5000 sync=false
+ gst\-launch v4l2src ! video/x-raw,width=128,height=96,format=UYVY ! videoconvert ! ffenc_h263 ! video/x-h263 ! rtph263ppay pt=96 ! udpsink host=192.168.1.1 port=5000
.br
This command would be run on the transmitter
Play any supported audio format
.B
- gst\-launch filesrc location=videofile ! decodebin name=decoder decoder. ! queue ! audioconvert ! audioresample ! osssink decoder. ! ffmpegcolorspace ! xvimagesink
+ gst\-launch filesrc location=videofile ! decodebin name=decoder decoder. ! queue ! audioconvert ! audioresample ! osssink decoder. ! videoconvert ! xvimagesink
.br
Play any supported video format with video and audio output. Threads are used
automatically. To make this even easier, you can use the playbin element:
These examples show you how to use filtered caps.
.B
- gst\-launch videotestsrc ! 'video/x-raw-yuv,format=(fourcc)YUY2;video/x-raw-yuv,format=(fourcc)YV12' ! xvimagesink
+ gst\-launch videotestsrc ! 'video/x-raw,format=YUY2;video/x-raw,format=YV12' ! xvimagesink
.br
Show a test image and use the YUY2 or YV12 video format for this.
.B
- gst\-launch osssrc ! 'audio/x-raw-int,rate=[32000,64000],width=[16,32],depth={16,24,32},signed=(boolean)true' ! wavenc ! filesink location=recording.wav
+ gst\-launch osssrc ! 'audio/x-raw,rate=[32000,64000],format={S16LE,S24LE,S32LE}' ! wavenc ! filesink location=recording.wav
.br
Record audio and write it to a .wav file. Force usage of signed 16 to 32 bit
samples and a sample rate between 32kHz and 64kHz.