When sending a flushing seek upstream on elem1.src, the FLUSH_START event
will temporarily unblock the streaming thread and make all pad functions that
-triggers a block (_push/_alloc_buffer/_push_event/_pull_range) return
-GST_FLOW_WRONG_STATE. This will then eventually pause the streaming thread
+can trigger a block (_push/_query/_push_event/_pull_range) return
+GST_FLOW_FLUSHING. This will then eventually pause the streaming thread
and release the STREAM_LOCK.
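The unblock mechanism described above can be sketched with a small pthreads model. This is not the real GStreamer API: `pad_t`, `pad_push()` and `pad_set_flushing()` are hypothetical names standing in for a pad's flushing flag, its blocking push path, and the effect of FLUSH_START.

```c
/* Minimal sketch (hypothetical names, not GStreamer API): a pad-like
 * object whose blocking push() is unblocked by a FLUSH_START-style call. */
#include <pthread.h>
#include <stdbool.h>

typedef enum { FLOW_OK, FLOW_FLUSHING } flow_return_t;

typedef struct {
  pthread_mutex_t lock;
  pthread_cond_t  cond;
  bool            flushing;
  bool            blocked;   /* simulates a downstream block */
} pad_t;

void pad_init (pad_t *pad, bool blocked)
{
  pthread_mutex_init (&pad->lock, NULL);
  pthread_cond_init (&pad->cond, NULL);
  pad->flushing = false;
  pad->blocked = blocked;
}

/* Waits while the pad is blocked, but gives up with FLOW_FLUSHING as
 * soon as the flushing flag is set. */
flow_return_t pad_push (pad_t *pad)
{
  flow_return_t ret = FLOW_OK;

  pthread_mutex_lock (&pad->lock);
  while (pad->blocked && !pad->flushing)
    pthread_cond_wait (&pad->cond, &pad->lock);
  if (pad->flushing)
    ret = FLOW_FLUSHING;        /* streaming thread will now pause */
  pthread_mutex_unlock (&pad->lock);
  return ret;
}

/* Like receiving FLUSH_START: mark flushing and wake any waiter. */
void pad_set_flushing (pad_t *pad)
{
  pthread_mutex_lock (&pad->lock);
  pad->flushing = true;
  pthread_cond_broadcast (&pad->cond);
  pthread_mutex_unlock (&pad->lock);
}
```

Once the flag is set, every pending and subsequent call returns FLOW_FLUSHING, which is what lets the streaming thread pause and release the STREAM_LOCK.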
Since no STREAM lock is taken after the pad block, there is no need to send
if (need_commit)
need_commit = FALSE
if (!commit)
- return WRONG_STATE
+ return FLUSHING
if (need_preroll)
# release PREROLL_LOCK and wait. prerolled can be observed
PREROLL_WAIT (releasing PREROLL_LOCK)
prerolled = FALSE
if (flushing)
- return WRONG_STATE
+ return FLUSHING
if (valid (start || stop))
PREROLL_UNLOCK
ret = wait_clock (obj,start)
PREROLL_LOCK
if (flushing)
- return WRONG_STATE
+ return FLUSHING
# if the clock was unscheduled, we redo the
# preroll
if (ret == UNSCHEDULED)
render (buffer) ----->| PREROLL_WAIT (releasing PREROLL_LOCK)
| prerolled = FALSE
| if (flushing)
- | return WRONG_STATE
+ | return FLUSHING
|
# queue a prerollable item (EOS or buffer). It is
if (need_commit)
need_commit = FALSE
if (!commit)
- return WRONG_STATE
+ return FLUSHING
# then see if we need more preroll items before we
# can block
if (flushing)
return FALSE
ret = queue (event, TRUE)
- if (ret == WRONG_STATE)
+ if (ret == FLUSHING)
return FALSE
PREROLL_UNLOCK
STREAM_UNLOCK
return FALSE
set_clip
ret = queue (event, FALSE)
- if (ret == WRONG_STATE)
+ if (ret == FLUSHING)
return FALSE
PREROLL_UNLOCK
STREAM_UNLOCK
STREAM_LOCK
PREROLL_LOCK
if (flushing)
- return WRONG_STATE
+ return FLUSHING
if (clip)
queue (buffer, TRUE)
PREROLL_UNLOCK
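The commit/preroll decision in the pseudocode above can be condensed into one testable function. This is a sketch with hypothetical names (`sink_state_t`, `sink_do_preroll`), not the real basesink code; instead of blocking in PREROLL_WAIT it returns a distinct value so the control flow is visible.

```c
/* Sketch of the commit/preroll decision (hypothetical names, not the
 * real basesink implementation). */
#include <stdbool.h>

typedef enum { FLOW_OK, FLOW_FLUSHING, FLOW_WOULD_PREROLL } flow_return_t;

typedef struct {
  bool need_commit;   /* a state change to PAUSED is pending */
  bool need_preroll;  /* rendering must wait until prerolled */
  bool flushing;      /* a FLUSH_START arrived */
} sink_state_t;

/* Flushing always wins and yields FLOW_FLUSHING; a pending commit is
 * performed; if preroll is still needed the real code would block in
 * PREROLL_WAIT (releasing the PREROLL_LOCK), here we just report it. */
flow_return_t sink_do_preroll (sink_state_t *s)
{
  if (s->flushing)
    return FLOW_FLUSHING;
  if (s->need_commit) {
    s->need_commit = false;
    /* the commit fails when we are flushing or going to READY */
    if (s->flushing)
      return FLOW_FLUSHING;
  }
  if (s->need_preroll)
    return FLOW_WOULD_PREROLL;  /* real code: PREROLL_WAIT here */
  return FLOW_OK;               /* rendering may proceed */
}
```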
streaming threads must stop sending data. This happens in the following sequence:
alsasink to READY: alsasink unblocks from the _chain() function and returns a
- WRONG_STATE return value to the peer element. The sinkpad is
+ FLUSHING return value to the peer element. The sinkpad is
deactivated and becomes unusable for sending more data.
mp3dec to READY: the pads are deactivated and the state change completes when
mp3dec leaves its _chain() function.
filesrc to READY: the pads are deactivated and the thread is paused.
The upstream elements finish their chain() function because the downstream element
-returned an error code (WRONG_STATE) from the _push() functions. These error codes
+returned an error code (FLUSHING) from the _push() functions. These error codes
are eventually returned to the element that started the streaming thread (filesrc),
which pauses the thread and completes the state change.
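The error-code propagation in this shutdown sequence can be sketched as follows. The names (`sink_chain`, `decoder_chain`, `source_task_iteration`) are hypothetical stand-ins for alsasink, mp3dec and the filesrc task; the point is only that each chain function forwards its downstream result, so a FLUSHING return from the sink travels all the way back to the task that pushes.

```c
/* Sketch (hypothetical names): FLUSHING propagating upstream until the
 * pushing task pauses. */
#include <stdbool.h>

typedef enum { FLOW_OK, FLOW_FLUSHING } flow_return_t;

/* the sink's result, FLOW_FLUSHING once it went to READY */
flow_return_t sink_chain (bool sink_shut_down)
{
  return sink_shut_down ? FLOW_FLUSHING : FLOW_OK;
}

/* a decoder-style element simply forwards the downstream result */
flow_return_t decoder_chain (bool sink_shut_down)
{
  return sink_chain (sink_shut_down);
}

/* one iteration of the source's streaming task; returns true while the
 * task should keep running, false when it pauses on a non-OK return */
bool source_task_iteration (bool sink_shut_down, int *iterations)
{
  (*iterations)++;
  if (decoder_chain (sink_shut_down) != FLOW_OK)
    return false;   /* task pauses, releasing the STREAM_LOCK */
  return true;
}
```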
- a flush event
When the preroll is unlocked by a flush event, a return value of
-GST_FLOW_WRONG_STATE is to be returned to the peer pad.
+GST_FLOW_FLUSHING is to be returned to the peer pad.
When preroll is unlocked by a state change to PLAYING, playback and
rendering of the buffers shall start.
When preroll is unlocked by a state change to READY, the buffer is
-to be discarded and a GST_FLOW_WRONG_STATE shall be returned to the
+to be discarded and a GST_FLOW_FLUSHING shall be returned to the
peer element.
-------------------->O |
O |
flushing? O |
- WRONG_STATE O |
+ FLUSHING O |
< - - - - - - O |
O-> do BLOCK probes |
O |
O gst_pad_send_event() |
O------------------------------>O
O flushing? O
- O WRONG_STATE O
+ O FLUSHING O
O< - - - - - - - - - - - - - - -O
O O-> do BLOCK probes
O O
| O<---------------------
| O
| O flushing?
- | O WRONG_STATE
+ | O FLUSHING
| O - - - - - - - - - - >
| do BLOCK probes <-O
| O no peer?
O<------------------------------O
O O
O flushing? O
- O WRONG_STATE O
+ O FLUSHING O
O- - - - - - - - - - - - - - - >O
do BLOCK probes <-O O
O O
Avidemux starts by sending a FLUSH_START event downstream and upstream. This
  will cause its streaming task to pause because _pad_pull_range() and
- _pad_push() will return WRONG_STATE. It then waits for the STREAM_LOCK,
+ _pad_push() will return FLUSHING. It then waits for the STREAM_LOCK,
which will be unlocked when the streaming task pauses. At this point no
streaming is happening anymore in the pipeline and a FLUSH_STOP is sent
upstream and downstream.
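The ordering of that flushing seek can be sketched as below. `pipeline_t` and `do_flushing_seek()` are hypothetical, not the real avidemux code; the essential point is the sequence: FLUSH_START first, then taking the STREAM_LOCK (which only succeeds once the streaming task has paused), then reconfiguring and sending FLUSH_STOP.

```c
/* Sketch of a flushing seek (hypothetical names, not avidemux code). */
#include <pthread.h>
#include <stdbool.h>

typedef struct {
  pthread_mutex_t stream_lock;  /* held by the running streaming task */
  bool flushing;
  int  steps;                   /* records how far the seek got */
} pipeline_t;

void pipeline_init (pipeline_t *p)
{
  pthread_mutex_init (&p->stream_lock, NULL);
  p->flushing = false;
  p->steps = 0;
}

void do_flushing_seek (pipeline_t *p)
{
  /* 1. FLUSH_START up- and downstream: pulls and pushes now return
   *    FLUSHING, so the streaming task will pause */
  p->flushing = true;
  p->steps = 1;

  /* 2. take the STREAM_LOCK; this blocks until the task has paused
   *    and released it */
  pthread_mutex_lock (&p->stream_lock);
  p->steps = 2;

  /* ... reconfigure: new offsets, new segment, etc. ... */

  /* 3. FLUSH_STOP: pads accept data again, task can restart */
  p->flushing = false;
  p->steps = 3;
  pthread_mutex_unlock (&p->stream_lock);
}
```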
PAUSED -> READY
- Sinks unblock any waits in the preroll.
- Elements unblock any waits on devices
- - Chain or get_range functions return WRONG_STATE.
+ - Chain or get_range functions return FLUSHING.
- The element pads are deactivated so that streaming becomes impossible and
all streaming threads are stopped.
- The sink forgets all negotiated formats
sure that when changing the state of an element, the downstream elements are in
the correct state to process any buffers that may still arrive. In the case of a downwards
state change, the sink elements will shut down first which makes the upstream
-elements shut down as well since the _push() function returns a GST_FLOW_WRONG_STATE
+elements shut down as well since the _push() function returns a GST_FLOW_FLUSHING
error.
If all the children return SUCCESS, the function returns SUCCESS as well.
As a side-effect of flushing all data from the pipeline, this event
unblocks the streaming thread by making all pads reject data until
they receive a <xref linkend="section-events-flush-stop"/> signal
- (elements trying to push data will get a WRONG_STATE flow return
+ (elements trying to push data will get a FLUSHING flow return
and stop processing data).
</para>
<para>
* When we are in the loop() function, we might be in the middle
* of pushing a buffer, which might block in a sink. To make sure
* that the push gets unblocked we push out a FLUSH_START event.
- * Our loop function will get a WRONG_STATE return value from
+ * Our loop function will get a FLUSHING return value from
* the push and will pause, effectively releasing the STREAM_LOCK.
*
* For a non-flushing seek, we pause the task, which might eventually
* first and do EOS instead of entering it.
* - If we are in the _create function or we did not manage to set the
* flag fast enough and we are about to enter the _create function,
- * we unlock it so that we exit with WRONG_STATE immediately. We then
+ * we unlock it so that we exit with FLUSHING immediately. We then
* check the EOS flag and do the EOS logic.
*/
g_atomic_int_set (&src->priv->pending_eos, TRUE);
gst_buffer_unref (res_buf);
if (!src->live_running) {
- /* We return WRONG_STATE when we are not running to stop the dataflow also
+ /* We return FLUSHING when we are not running to stop the dataflow also
* get rid of the produced buffer. */
GST_DEBUG_OBJECT (src,
- "clock was unscheduled (%d), returning WRONG_STATE", status);
+ "clock was unscheduled (%d), returning FLUSHING", status);
ret = GST_FLOW_FLUSHING;
} else {
/* If we are running when this happens, we quickly switched between
gst_event_set_seqnum (event, src->priv->seqnum);
/* for fatal errors we post an error message, post the error
* first so the app knows about the error first.
- * Also don't do this for WRONG_STATE because it happens
+ * Also don't do this for FLUSHING because it happens
* due to flushing and posting an error message because of
* that is the wrong thing to do, e.g. when we're doing
* a flushing seek. */