- The primary object that applications deal with is a GstPipeline. A
  given pipeline is connected to a particular main loop (GMainLoop for
  glib, etc.). Calls to gst_ functions for objects owned by that
  pipeline must be done from the context of the pipeline's main loop.
  Signals fired by elements are marshalled in the pipeline's main
  loop context.

  Notably, this means the gst_ API is not necessarily thread-safe.
  However, it is safe to operate on different GstPipelines from
  different threads. This makes it possible, for example, for
  rhythmbox to play music and gather metadata from different threads
  using different pipelines. Likewise, it's also possible to do
  both in the same thread.

- The primary method of scheduling an element is through a generic
  'iterate()' method. The iterate method explicitly tells the core
  what it is waiting for (a specific time, pads to have available
  data, etc.), and the core calls the iterate method when these
  "triggers" happen. GstElement subclasses will be created to
  emulate 0.8-style get/chain/loop methods. Existing elements will
  be converted to the new subclasses rather than implement the
  iterate method directly, unless there is a compelling reason to
  do so. Iterate implementations are expected not to block, ever.

  Rationale: This makes it possible to create completely non-blocking
  applications.

- Scheduling elements will be done in either a threaded or
  non-threaded way. The idle handler that is called by a pipeline's
  main loop determines which elements are ready to be iterated
  (based on their triggers), and puts them into a ready queue. In
  the non-threaded case, the idle handler then calls the iterate()
  method on each element in the ready queue. In the threaded case,
  additional helper threads (which are completely owned by the
  pipeline) are used to call the iterate methods.

  Note that in the threaded case, elements may not always be run
  in the same thread.

  Some elements are much easier to write if they run in the same
  thread as the main loop (i.e., elements that are also GUI widgets).
  An element flag can be set to make the manager always call the
  iterate method in the manager context (i.e., in the main loop
  thread). Also, elements like spider need to make core calls
  that may not be allowed from other threads.

  Rationale: Doing all bookkeeping in a single thread/context makes
  the core code _much_ simpler. This bookkeeping takes only a
  minimal amount of CPU time, less than 5% of the CPU time in a
  rhythmbox pipeline. There is very little benefit to spreading
  this over multiple CPUs until the number of CPUs is greater than
  ~16 and you have _huge_ pipelines. Also, a single-threaded
  manager significantly decreases the number of locks necessary
  in the core, decreasing lock contention (if any) and making it
  easier to understand deadlocks (if any).

- There are essentially two types of objects/structures. One type
  includes objects that are derived from GObject and are passed in
  function calls similarly to gtk. The other type includes objects
  (structures, really) that are not reference counted and are passed
  around similarly to how GstCaps works in 0.8. That is, functions
  that take 'const GstCaps *' do not take ownership of the passed
  object, whereas functions that take 'GstCaps *' do. The same is
  true for return values.

- The concept of GstBuffer from 0.8 will be split into two types.
  One type will focus solely on holding information pertaining to
  ownership of a memory area (call this GstMemBuffer), and the
  other type will focus solely on transferring information between
  elements (call this GstPipeBuffer). In case you get confused,
  GstMemBuffers _are not_ transferred between elements, and
  GstPipeBuffers _do not_ own the memory they point to.

  In general, GstPipeBuffers point to (and reference) a GstMemBuffer.
  GstMemBuffers are GObjects. GstPipeBuffers are structs, like
  GstCaps. GstPipeBuffers have timestamps, durations, and flags.
  GstMemBuffers contain read/write flags. There are no subbuffers
  for either type, because they are not necessary. Essentially,
  GstPipeBuffers completely replace the concept of subbuffers.

  (I'd like to continue to use the name GstBuffer for GstPipeBuffers,
  since its usage is much more common in elements.)

  Rationale: Memory regions need an ultimate owner and reference
  counting. However, chunks passed around between elements need
  to be small and efficient. These goals are non-overlapping and
  conflicting, and thus are inappropriate to be combined in the
  same object.

- Core objects should have very few (if any) public fields. This
  means that accessor macros will all be eliminated and replaced
  with accessor functions.

  Rationale: This makes it possible to change the core more during
  the development cycle.

- Remove pluggable scheduling.

  Rationale: We need one good scheduler. Having multiple schedulers
  is directly opposed to this goal.

- 0.8-style element states are split up. One state (AppState)
  indicates what the application wants the element to be doing,
  and is completely under the control of the application. The
  other state (ElementState) indicates what the element is actually
  doing, and is under the control of the element. If the application
  wants an element to pause, it sets the AppState to PAUSED, and
  the element eventually changes its ElementState to PAUSED (and
  fires a signal). If the element has an error or EOS, it sets
  its ElementState to SOME_STATE and fires a signal, while the
  AppState remains at PLAYING. The actual number and descriptions
  of states have not been discussed.

  Rationale: It's pretty obvious that we're mixing concepts for
  element states in 0.8.

- getcaps() methods will be replaced by an element_allowed_caps
  field in the pad. The primary reason is that renegotiation only
  needs to happen when circumstances change. This is more easily
  handled by a field in GstPad and notification of peers when that
  field changes.

Somewhere, there's a document I wrote about completely redoing