== what information is interesting? ==
if we know the cpu-load for a given datastream, we could extrapolate what the
system can handle
* which element causes which cpu load/memory usage
* what data is needed?
  * (streamtime,proportion) pairs from sinks
    draw a graph with gnuplot or similar
  * number of frames in total
  * number of audio/video frames dropped from each element that supports QoS
    * could be expressed as a percentage in relation to total frames
* query data (e.g. via gst-launch)
  * add a -r, --report option to gst-launch
  * during playback we capture QoS events to record 'streamtime,proportion' pairs
    gst_pad_add_event_probe(video_sink->sink_pad,handler,data)
  * during playback we would like to know when an element drops frames
    what about elements sending a qos_action message?
  * after EOS, send qos-queries to each element in the pipeline
    * a qos-query will return:
      number of frames rendered
      number of frames dropped
    * print a nice table with the results
* writes a gnuplot data file
  * list of 'streamtime,proportion,<drop>' tuples
* scheduler keeps a list of usecs the process function of each element was
  running
* process functions are: loop, chain, get; they are driven by gst_pad_push() and
  gst_pad_pull_range()
* scheduler keeps a sum of all times
* each gst-element has a profile_percentage field
* scheduler sets sum and all usecs in the list to 0
* when handling an element
  * remember the element's old usecs t_old
  * take time t1
  * call the element's processing function
  * take time t2
  * t_new=t_old+(t2-t1); sum+=(t2-t1)
  * profile_percentage=t_new/sum;
  * should the percentage be averaged?
    * profile_percentage=(profile_percentage+(t_new/sum))/2.0;
* the profile_percentage shows how much CPU time the element uses in relation
  to the whole pipeline
= rusage + pad-probes =
* check getrusage() based cpu usage detection in buzztard
  this together with pad probes could give us decent application-level profiles
* 1:1 elements are easy to handle
* 0:1 elements need a start timer
* 1:0 elements need an end timer
* n:1, 1:m and n:m type elements are tricky
  adapter-based elements might have a fluctuating usage in addition
gst_bin_iterate_elements(pipeline)
  gst_element_iterate_pads(element)
    if (gst_pad_get_direction(pad)==GST_PAD_SRC)
      gst_pad_add_buffer_probe(pad,end_timer,profile_data)
    else
      gst_pad_add_buffer_probe(pad,beg_timer,profile_data)
// listen to bus state-change messages to
// * reset counters on NULL_TO_READY
// * print results on READY_TO_NULL
= PerformanceMonitor =
Write an LD_PRELOAD library that can gather data from gstreamer and log it to
files. The idea is to avoid adding API for performance measurement to gstreamer.
The library provides some common services used by the sensor modules.

Sensors do measurements and deliver timestamped performance data.
* bitrates and latency via gst_pad_push/pull per link
* qos ratio via gst_event_new_qos(), gst_pad_send_event()
* cpu/mem via getrusage()
* when (gst_clock_get_time)?
  * we want it per thread
* we have global data, data per {link,element,thread}
<timestamp> [<sensor-data>] [<sensor-data>]

timestamp [qos-ratio] [cpu-load={sum,17284,17285}]
00126437 [0.5] [0.7,0.2,0.5]
00126437 [0.8] [0.9,0.2,0.7]
** should we have the log config in the header or in some separate config?
   - if we use a config, we just specify the config when capturing and put that
     in the first log line
   - otherwise the analyzer ui has to parse it from the first line
LD_PRELOAD=libgstperfmon.so GST_PERFMON_DETAILS="qos-ratio,cpu-load=all" <application>
LD_PRELOAD=libgstperfmon.so GST_PERFMON_DETAILS="qos-ratio,cpu-load=sum" <application>
LD_PRELOAD=libgstperfmon.so GST_PERFMON_DETAILS="*" <application>
pygtk ui, matplotlib
* can be used in a media test suite as a monitor