 * Copyright (c) 2011-2013 David Herrmann <dh.herrmann@googlemail.com>
 * Copyright (c) 2011 University of Tuebingen
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files
 * (the "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
 * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
 * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
 * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
 * @short_description: Event loop
 *
 * The event loop lets you register event sources and poll them for events.
 * When an event occurs, the user-supplied callback is called.
 *
 * The event loop allows the callbacks to modify _any_ data they want: they
 * can remove themselves or other sources from the event loop, even from
 * within a callback. Recursive dispatch calls, however, are not supported;
 * this increases performance and avoids internal dispatch-stacks.
 *
 * Sources can be one of:
 *   - File descriptors: An fd that is watched for readable/writeable events
 *   - Timers: An event that occurs after a relative timeout
 *   - Counters: An event that occurs when the counter is non-zero
 *   - Signals: An event that occurs when a signal is caught
 *   - Idle: An event that occurs when nothing else is to be done
 *   - Eloop: An event loop itself can be a source of another event loop
 * A source can be registered with a single event loop only! You cannot add
 * it to multiple event loops simultaneously. Also, all provided sources are
 * based on the file-descriptor source, so it is guaranteed that every
 * source type is backed by a file descriptor. This is not exported via the
 * public API, but you can get the epoll fd, which is basically a selectable
 * FD summary of all event sources.
 * For instance, if you are developing a library, you can use the eloop
 * library internally and you will have a full event-loop implementation
 * inside your library without any side-effects. You simply export the epoll
 * fd of the eloop object via your public API, and outside users see only a
 * single file descriptor. They include this FD in their own application
 * event loop, which will then dispatch the messages to your library.
 * Internally, you simply forward this dispatching to ev_eloop_dispatch(),
 * which then calls all your internal callbacks.
 * That is, you have an event loop inside your library without requiring the
 * outside user to use the same event loop. You also have no global state and
 * no thread-bound event loops like the Qt/Gtk ones. So you have full access
 * to the whole event loop without any side-effects.
 * The whole eloop library does not use any global data. Therefore, it is
 * fully re-entrant and no synchronization is needed. However, a single
 * object is not thread-safe. This means that if you access a single eloop
 * object, or sources registered on that eloop object, from two different
 * threads, you need to synchronize them. Furthermore, all callbacks are
 * called from the thread that calls ev_eloop_dispatch() or ev_eloop_run().
 * This guarantees that you have full control over the eloop, but it also
 * means that you have to implement additional functionality like
 * thread-affinity yourself (obviously, only if you need it).
 * The philosophy behind this library is that a proper application needs only
 * a single thread that uses an event loop. Multiple threads should be used
 * for calculations, but not as a way to avoid learning how to do
 * non-blocking I/O! Therefore, only the application's main thread needs an
 * event loop; all other threads only perform calculations and return the
 * data to the main thread.
 * However, the library does not enforce this design choice. On the contrary,
 * it supports all other kinds of application designs, too. But as it is
 * optimized for performance, other application designs may need to add
 * further functionality (like thread-affinity) themselves, as it would slow
 * down the event loop if it were implemented natively.
 * To get started, simply create an eloop object with ev_eloop_new(). All
 * functions return 0 on success and a negative error code like -EFAULT on
 * failure. -EINVAL is returned if invalid parameters were passed.
 * Every object is ref-counted. *_ref() increases the reference count and
 * *_unref() decreases it. *_unref() also destroys the object if the
 * ref-count drops to zero.
 * To create new objects you call *_new(). It stores a pointer to the new
 * object in the location you passed as parameter. Nearly all structures are
 * opaque, that is, you cannot access member fields directly. This guarantees
 *
 * You can create sources with ev_fd_new(), ev_timer_new(), ... and you can
 * add them to your eloop with ev_eloop_add_fd(), ev_eloop_add_timer(), ...
 * After they are added, you can call ev_eloop_run() to run the eloop for the
 * given time. If you pass -1 as timeout, it runs until some callback calls
 * ev_eloop_exit() on this eloop.
 * You can perform _any_ operation on an eloop object inside of callbacks:
 * you can add new sources, remove sources, destroy sources, and modify
 * sources. You can also do all of this on the currently active source.
 * All objects are enabled by default. You can disable them with *_disable()
 * and re-enable them with *_enable(). Only while enabled are they added to
 * the dispatcher and their callbacks called.
 *
 * Two source types differ from the others for performance reasons:
 * Idle sources: Idle sources can be registered with
 * ev_eloop_register_idle_cb() and unregistered with
 * ev_eloop_unregister_idle_cb(). They internally share a single
 * file descriptor to make them faster, so you do not get the same access as
 * with other event sources (you cannot enable/disable them or similar).
 * Idle sources are called every time ev_eloop_dispatch() is called. That is,
 * as long as an idle source is registered, the event loop will not go to
 * sleep.
 * Signal sources: In terms of API they are very similar to idle sources.
 * The same restrictions apply; however, their type is very different. A
 * signal callback is called when the specified signal is received. It is
 * _not_ called in signal context! Rather, it is called in the same context
 * as every other source. They are implemented with signalfd.
 * You can register multiple callbacks for the same signal and all callbacks
 * will be called (in contrast to plain signalfd, where only one fd gets the
 * signal). This is done internally by sharing the signalfd.
 * However, there is one restriction: You cannot share a signalfd between
 * multiple eloop instances. That is, if you register callbacks for the same
 * signal on two different eloop instances (which are connected themselves),
 * then only one eloop instance will fire the signal source. This is a
 * restriction of signalfd that cannot be overcome. However, it is very
 * uncommon to register multiple callbacks for one signal, so this shouldn't
 * affect common application use-cases.
 * Also note that if you register a callback for SIGCHLD, then the eloop
 * object will automatically reap all pending zombies _after_ your callback
 * has been called. So if you need to check for them, check for all of
 * them in the callback. After you return, they will be gone.
 * When adding a signal handler, the signal is automatically added to the
 * currently blocked signals. It is not removed when dropping the
 * signal source, though.
 * Eloop uses several system calls which may fail. All errors (including
 * memory allocation errors via -ENOMEM) are forwarded to the caller;
 * however, it is often preferable to have a more detailed log message.
 * Therefore, eloop takes a logging function as argument for each object.
 * Pass NULL if you are not interested in logging. This disables logging
 * entirely.
 * Otherwise, pass in a callback from your application. This callback will be
 * called whenever a message is to be logged. The function may be called
 * under any circumstances (out-of-memory, etc.) and should always behave
 * well. Nothing is ever logged except through this callback.
#include <inttypes.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <sys/signalfd.h>
#include <sys/time.h>
#include <sys/timerfd.h>
#include <sys/wait.h>
#include "shl_dlist.h"
#include "shl_hook.h"
#include "shl_llog.h"
#include "shl_misc.h"

#define LLOG_SUBSYSTEM "eloop"
 * @ref: refcnt of this object
 * @llog: llog log function
 * @llog_data: llog log function user-data
 * @efd: The epoll file descriptor
 * @fd: Event source around @efd so you can nest event loops
 * @cnt: Counter source used for idle events
 * @sig_list: Shared signal sources
 * @idlers: List of idle sources
 * @cur_fds: Current dispatch array of fds
 * @cur_fds_cnt: Current length of @cur_fds
 * @cur_fds_size: Allocated size of @cur_fds
 * @exit: true if we should exit the main loop
 *
 * An event loop is an object where you can register event sources. If you
 * then sleep on the event loop, you will be woken up when a single event
 * source fires. An event loop is itself an event source, so you can nest
 * them.
	struct shl_dlist sig_list;
	struct shl_hook *chlds;
	struct shl_hook *idlers;
	struct shl_hook *pres;
	struct shl_hook *posts;

	struct epoll_event *cur_fds;
 * @ref: refcnt for object
 * @llog: llog log function
 * @llog_data: llog log function user-data
 * @fd: the actual file descriptor
 * @mask: the event mask for this fd (EV_READABLE, EV_WRITEABLE, ...)
 * @cb: the user callback
 * @data: the user data
 * @enabled: true if the object is currently enabled
 * @loop: NULL, or a pointer to the eloop if bound
 *
 * File descriptors are the most basic event source. Internally, they are
 * used to implement all other kinds of event sources.

	struct ev_eloop *loop;
 * @ref: refcnt of this object
 * @llog: llog log function
 * @llog_data: llog log function user-data
 * @fd: the timerfd file descriptor
 * @efd: fd-source for @fd
 *
 * Based on timerfd, this allows firing events based on relative timeouts.

 * @ref: refcnt of counter object
 * @llog: llog log function
 * @llog_data: llog log function user-data
 * @fd: eventfd file descriptor
 * @efd: fd-source for @fd
 *
 * Counter sources fire if they are non-zero. They are based on the eventfd
 * @list: list integration into the ev_eloop object
 * @fd: the signalfd file descriptor for this signal
 * @signum: the actual signal number
 * @hook: list of registered user callbacks for this signal
 *
 * A shared signal allows multiple listeners for the same signal. All
 * listeners are called when the signal is caught.

struct ev_signal_shared {
	struct shl_dlist list;
	struct shl_hook *hook;
 * signalfd allows us to conveniently listen for incoming signals. However,
 * if multiple signalfds are registered for the same signal, only one of
 * them gets signaled. To avoid this restriction, we provide shared signals:
 * the user registers for a signal and, if no other user is registered for
 * this signal yet, we create a new shared signal. Otherwise, we add the user
 * to the existing shared signal.
 * If the signal is caught, we simply call all users that are registered for
 * this signal.
 *
 * To avoid side-effects, we automatically block all signals for the current
 * thread when a signalfd is created. We never unblock the signal. However,
 * most modern linux user-space programs avoid signal handlers anyway, so you
 * can use signalfd only.
static void sig_child(struct ev_eloop *loop, struct signalfd_siginfo *info,
	struct ev_child_data d;

	pid = waitpid(-1, &status, WNOHANG);
		llog_warn(loop, "cannot wait on child: %m");
	} else if (pid == 0) {
	} else if (WIFEXITED(status)) {
		if (WEXITSTATUS(status) != 0)
			llog_debug(loop, "child %d exited with status %d",
				   pid, WEXITSTATUS(status));
			llog_debug(loop, "child %d exited successfully",
	} else if (WIFSIGNALED(status)) {
		llog_debug(loop, "child %d exited by signal %d", pid,
	shl_hook_call(loop->chlds, loop, &d);
static void shared_signal_cb(struct ev_fd *fd, int mask, void *data)
	struct ev_signal_shared *sig = data;
	struct signalfd_siginfo info;

	if (mask & EV_READABLE) {
		len = read(fd->fd, &info, sizeof(info));
		if (len != sizeof(info))
			llog_warn(fd, "cannot read signalfd (%d): %m", errno);
			shl_hook_call(sig->hook, sig->fd->loop, &info);
	} else if (mask & (EV_HUP | EV_ERR)) {
		llog_warn(fd, "HUP/ERR on signal source");
 * @out: Shared signal storage where the new object is stored
 * @loop: The event loop where this shared signal is registered
 * @signum: Signal number that this shared signal is for
 *
 * This creates a new shared signal and links it into the list of shared
 * signals in @loop. It automatically adds @signum to the signal mask of the
 * current thread so the signal is blocked.
 *
 * Returns: 0 on success, otherwise a negative error code
static int signal_new(struct ev_signal_shared **out, struct ev_eloop *loop,
	struct ev_signal_shared *sig;
		return llog_EINVAL(loop);

	sig = malloc(sizeof(*sig));
		return llog_ENOMEM(loop);
	memset(sig, 0, sizeof(*sig));
	sig->signum = signum;

	ret = shl_hook_new(&sig->hook);

	sigaddset(&mask, signum);

	fd = signalfd(-1, &mask, SFD_CLOEXEC | SFD_NONBLOCK);
		llog_error(loop, "cannot create signalfd");

	ret = ev_eloop_new_fd(loop, &sig->fd, fd, EV_READABLE,
			      shared_signal_cb, sig);

	pthread_sigmask(SIG_BLOCK, &mask, NULL);
	shl_dlist_link(&loop->sig_list, &sig->list);

	shl_hook_free(sig->hook);
 * @sig: The shared signal to be freed
 *
 * This unlinks the given shared signal from the event loop where it was
 * registered and destroys it. This does _not_ unblock the signal number it
 * was associated with. If you want that, you need to do it manually with

static void signal_free(struct ev_signal_shared *sig)
	shl_dlist_unlink(&sig->list);
	ev_eloop_rm_fd(sig->fd);
	shl_hook_free(sig->hook);
	 * We do not unblock the signal here as there may be other subsystems
	 * which blocked this signal, so we do not want to interfere. If you
	 * need a clean sigmask, then do it yourself.
 * The main eloop object is responsible for correctly dispatching all events.
 * You can register fd, idle, or signal sources with it. All other kinds of
 * sources are based on these. In fact, even idle and signal sources are based
 *
 * As a special feature, you can retrieve an fd for an eloop object, too, and
 * pass it to your own event loop. If this fd is readable, call
 * ev_eloop_dispatch() to make this loop dispatch all pending events.
 *
 * There is one restriction when nesting eloops, though. You cannot share
 * signals across eloop boundaries. That is, if you have registered for the
 * _same_ shared signal in two eloops, then only one eloop will receive the
 * signal (and which one is effectively random).
 * However, such a setup is most often broken by design and hence should
 * never occur. Even shared signals are quite rare.
 * Anyway, you must take this into account when nesting eloops.
 * For the curious reader: We implement idle sources with counter sources.
 * That is, whenever there is an idle source, we increase the counter source.
 * Hence, the next dispatch call will call the counter source, and this will
 * call all registered idle sources. If the idle sources do not unregister
 * themselves, we directly increase the counter again and the next dispatch
 * round will call all idle sources again. This, however, has the side-effect
 * that idle sources are _not_ called before other fd events but are rather
 * mixed in between.
static void eloop_event(struct ev_fd *fd, int mask, void *data)
	struct ev_eloop *eloop = data;

	if (mask & EV_READABLE)
		ev_eloop_dispatch(eloop, 0);
	if (mask & (EV_HUP | EV_ERR))
		llog_warn(eloop, "HUP/ERR on eloop source");
static int write_eventfd(llog_submit_t llog, void *llog_data, int fd,
		return llog_dEINVAL(llog, llog_data);

	if (val == 0xffffffffffffffffULL) {
		llog_dwarning(llog, llog_data,
			      "increasing counter with invalid value %" PRIu64,

	ret = write(fd, &val, sizeof(val));
		llog_dwarning(llog, llog_data,
			      "eventfd overflow while writing %" PRIu64,
		llog_dwarning(llog, llog_data,
			      "eventfd write error (%d): %m", errno);
	} else if (ret != sizeof(val)) {
		llog_dwarning(llog, llog_data,
			      "wrote %d bytes instead of 8 to eventfd", ret);
static void eloop_idle_event(struct ev_eloop *loop, unsigned int mask)
	if (mask & (EV_HUP | EV_ERR)) {
		llog_warning(loop, "HUP/ERR on eventfd");

	if (!(mask & EV_READABLE))

	ret = read(loop->idle_fd, &val, sizeof(val));
		if (errno != EAGAIN) {
			llog_warning(loop, "reading eventfd failed (%d): %m",
	} else if (ret == 0) {
		llog_warning(loop, "EOF on eventfd");
	} else if (ret != sizeof(val)) {
		llog_warning(loop, "read %d bytes instead of 8 on eventfd",
	} else if (val > 0) {
		shl_hook_call(loop->idlers, loop, NULL);
		if (shl_hook_num(loop->idlers) > 0)
			write_eventfd(loop->llog, loop->llog_data,

	ret = epoll_ctl(loop->efd, EPOLL_CTL_DEL, loop->idle_fd, NULL);
		llog_warning(loop, "cannot remove fd %d from epollset (%d): %m",
			     loop->idle_fd, errno);
 * @out: Storage for the result
 * @log: logging function or NULL
 * @log_data: logging function user-data
 *
 * This creates a new event loop with ref-count 1. The new event loop is
 * stored in @out and has no registered events.
 *
 * Returns: 0 on success, otherwise a negative error code
int ev_eloop_new(struct ev_eloop **out, ev_log_t log, void *log_data)
	struct ev_eloop *loop;
	struct epoll_event ep;

		return llog_dEINVAL(log, log_data);

	loop = malloc(sizeof(*loop));
		return llog_dENOMEM(log, log_data);

	memset(loop, 0, sizeof(*loop));
	loop->llog_data = log_data;
	shl_dlist_init(&loop->sig_list);

	loop->cur_fds_size = 32;
	loop->cur_fds = malloc(sizeof(struct epoll_event) *
	if (!loop->cur_fds) {
		ret = llog_ENOMEM(loop);

	ret = shl_hook_new(&loop->chlds);
	ret = shl_hook_new(&loop->idlers);
	ret = shl_hook_new(&loop->pres);
	ret = shl_hook_new(&loop->posts);

	loop->efd = epoll_create1(EPOLL_CLOEXEC);
		llog_error(loop, "cannot create epoll-fd");

	ret = ev_fd_new(&loop->fd, loop->efd, EV_READABLE, eloop_event, loop,
			loop->llog, loop->llog_data);

	loop->idle_fd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
	if (loop->idle_fd < 0) {
		llog_error(loop, "cannot create eventfd (%d): %m", errno);

	memset(&ep, 0, sizeof(ep));
	ep.events |= EPOLLIN;

	ret = epoll_ctl(loop->efd, EPOLL_CTL_ADD, loop->idle_fd, &ep);
		llog_warning(loop, "cannot add fd %d to epoll set (%d): %m",
			     loop->idle_fd, errno);

	llog_debug(loop, "new eloop object %p", loop);

	close(loop->idle_fd);
	ev_fd_unref(loop->fd);
	shl_hook_free(loop->posts);
	shl_hook_free(loop->pres);
	shl_hook_free(loop->idlers);
	shl_hook_free(loop->chlds);
 * @loop: Event loop to be modified or NULL
 *
 * This increases the ref-count of @loop by 1.

void ev_eloop_ref(struct ev_eloop *loop)

 * @loop: Event loop to be modified or NULL
 *
 * This decreases the ref-count of @loop by 1. If it drops to zero, the event
 * loop is destroyed. Note that every registered event source takes a
 * ref-count of the event loop, so the ref-count will never drop to zero
 * while there is a registered event source.
void ev_eloop_unref(struct ev_eloop *loop)
	struct ev_signal_shared *sig;

		return llog_vEINVAL(loop);

	llog_debug(loop, "free eloop object %p", loop);

	if (shl_hook_num(loop->chlds))
		ev_eloop_unregister_signal_cb(loop, SIGCHLD, sig_child, loop);

	while (loop->sig_list.next != &loop->sig_list) {
		sig = shl_dlist_entry(loop->sig_list.next,
				      struct ev_signal_shared,

	ret = epoll_ctl(loop->efd, EPOLL_CTL_DEL, loop->idle_fd, NULL);
		llog_warning(loop, "cannot remove fd %d from epollset (%d): %m",
			     loop->idle_fd, errno);
	close(loop->idle_fd);

	ev_fd_unref(loop->fd);
	shl_hook_free(loop->posts);
	shl_hook_free(loop->pres);
	shl_hook_free(loop->idlers);
	shl_hook_free(loop->chlds);
 * @loop: The event loop where @fd is registered
 * @fd: The fd to be flushed
 *
 * If @loop is currently dispatching events, this removes all pending events
 * of @fd from the current event list.

void ev_eloop_flush_fd(struct ev_eloop *loop, struct ev_fd *fd)
		return llog_vEINVAL(loop);

	if (loop->dispatching) {
		for (i = 0; i < loop->cur_fds_cnt; ++i) {
			if (loop->cur_fds[i].data.ptr == fd)
				loop->cur_fds[i].data.ptr = NULL;
static unsigned int convert_mask(uint32_t mask)
	unsigned int res = 0;
 * @loop: Event loop to be dispatched
 * @timeout: Timeout in milliseconds
 *
 * This listens on @loop for incoming events and handles all events that
 * occurred. It waits at most @timeout milliseconds before returning. If
 * @timeout is -1, it waits until the first event arrives. If @timeout is 0,
 * it returns immediately if no event is currently pending.
 *
 * This performs only a single dispatch round. That is, once all sources were
 * checked for events and there are no more pending events, this will return.
 * If it handled events and the timeout has not elapsed, it will still
 * return.
 *
 * If ev_eloop_exit() was called on @loop, then this returns immediately.
 *
 * Returns: 0 on success, otherwise a negative error code
int ev_eloop_dispatch(struct ev_eloop *loop, int timeout)
	struct epoll_event *ep;
	int i, count, mask, ret;

		return llog_EINVAL(loop);
	if (loop->dispatching) {
		llog_warn(loop, "recursive dispatching not allowed");

	loop->dispatching = true;

	shl_hook_call(loop->pres, loop, NULL);

	count = epoll_wait(loop->efd,
		if (errno == EINTR) {
		llog_warn(loop, "epoll_wait dispatching failed: %m");
	} else if (count > loop->cur_fds_size) {
		count = loop->cur_fds_size;

	loop->cur_fds_cnt = count;

	for (i = 0; i < count; ++i) {
		if (ep[i].data.ptr == loop) {
			mask = convert_mask(ep[i].events);
			eloop_idle_event(loop, mask);
			if (!fd || !fd->cb || !fd->enabled)

			mask = convert_mask(ep[i].events);
			fd->cb(fd, mask, fd->data);

	if (count == loop->cur_fds_size) {
		ep = realloc(loop->cur_fds, sizeof(struct epoll_event) *
			     loop->cur_fds_size * 2);
			llog_warning(loop, "cannot reallocate dispatch cache to size %zu",
				     loop->cur_fds_size * 2);
		loop->cur_fds_size *= 2;

	shl_hook_call(loop->posts, loop, NULL);
	loop->dispatching = false;
 * @loop: The event loop to be run
 * @timeout: Timeout for this operation
 *
 * This is similar to ev_eloop_dispatch(), but runs for _exactly_ @timeout
 * milliseconds. It calls ev_eloop_dispatch() as often as it can until the
 * timeout has elapsed. If @timeout is -1, it runs until you call
 * ev_eloop_exit(). If @timeout is 0, it is equal to calling
 * ev_eloop_dispatch() with a timeout of 0.
 *
 * Calling ev_eloop_exit() will always interrupt this function and make it
 * return.
 *
 * Returns: 0 on success, otherwise a negative error code
int ev_eloop_run(struct ev_eloop *loop, int timeout)
	struct timeval tv, start;

	llog_debug(loop, "run for %d msecs", timeout);
	gettimeofday(&start, NULL);

	while (!loop->exit) {
		ret = ev_eloop_dispatch(loop, timeout);
		} else if (timeout > 0) {
			gettimeofday(&tv, NULL);
			off = tv.tv_sec - start.tv_sec;
			msec = (int64_t)tv.tv_usec - (int64_t)start.tv_usec;
				msec = 1000000 + msec;
 * @loop: Event loop that should exit
 *
 * This makes a call to ev_eloop_run() stop.

void ev_eloop_exit(struct ev_eloop *loop)
	llog_debug(loop, "exiting %p", loop);

		ev_eloop_exit(loop->fd->loop);
 * Returns a single file descriptor for the whole event loop. If that FD is
 * readable, then one of the event sources is active and you should call
 * ev_eloop_dispatch(loop, 0); to dispatch these events.
 * If the fd is not readable, then ev_eloop_dispatch() would sleep, as there
 * are no pending events.
 *
 * Returns: A file descriptor for the event loop or a negative error code

int ev_eloop_get_fd(struct ev_eloop *loop)
 * ev_eloop_new_eloop:
 * @loop: The parent event loop where the new event loop is registered
 * @out: Storage for the new event loop
 *
 * This creates a new event loop and directly registers it as an event source
 * on the parent event loop @loop.
 *
 * Returns: 0 on success, otherwise a negative error code

int ev_eloop_new_eloop(struct ev_eloop *loop, struct ev_eloop **out)
	struct ev_eloop *el;

		return llog_EINVAL(loop);

	ret = ev_eloop_new(&el, loop->llog, loop->llog_data);

	ret = ev_eloop_add_eloop(loop, el);
 * ev_eloop_add_eloop:
 * @loop: Parent event loop
 * @add: The event loop that is registered as an event source on @loop
 *
 * This registers the existing event loop @add as an event source on the
 * parent
 *
 * Returns: 0 on success, otherwise a negative error code

int ev_eloop_add_eloop(struct ev_eloop *loop, struct ev_eloop *add)
		return llog_EINVAL(loop);

	/* This adds the epoll-fd into the parent epoll-set. This works
	 * perfectly well with registered FDs, timers, etc. However, we use
	 * shared signals in this event loop, so if the parent and child have
	 * overlapping shared signals, then a signal will be randomly
	 * delivered to either the parent hook or the child hook, but never
	 * to both.
	 * We could fix this by linking the child's sig_list into the
	 * parent's sig_list, but we haven't needed that yet, so ignore it
	 * here. */

	ret = ev_eloop_add_fd(loop, add->fd);
 * ev_eloop_rm_eloop:
 * @rm: Event loop to be unregistered from its parent
 *
 * This unregisters the event loop @rm as an event source from its parent. If
 * this event loop was not registered on any other event loop, then this call
 * does nothing.

void ev_eloop_rm_eloop(struct ev_eloop *rm)
	if (!rm || !rm->fd->loop)

	ev_eloop_rm_fd(rm->fd);
 * This allows adding file descriptors to an eloop. A file descriptor is the
 * most basic kind of source and is used to implement all other source types.
 * By default, a source is enabled, but you can easily disable it by calling
 * ev_fd_disable(). The source then stays registered with the eloop but will
 * not wake up the thread or get called until you enable it again.
 * @out: Storage for the result
 * @rfd: The actual file descriptor
 * @mask: Bitmask of %EV_READABLE and %EV_WRITEABLE flags
 * @cb: User callback
 * @log: llog function or NULL
 * @log_data: logging function user-data
 *
 * This creates a new file-descriptor source that is watched for the events
 * set in @mask. @rfd is the system file descriptor. The resulting object is
 * stored in @out. @cb and @data are the user callback and the user-supplied
 * data that is passed to the callback on events.
 * The FD is automatically watched for EV_HUP and EV_ERR events, too.
 *
 * Returns: 0 on success, otherwise a negative error code
int ev_fd_new(struct ev_fd **out, int rfd, int mask, ev_fd_cb cb, void *data,
	      ev_log_t log, void *log_data)
	if (!out || rfd < 0)
		return llog_dEINVAL(log, log_data);

	fd = malloc(sizeof(*fd));
		return llog_dENOMEM(log, log_data);

	memset(fd, 0, sizeof(*fd));
	fd->llog_data = log_data;
 * Increases the ref-count of @fd by 1.

void ev_fd_ref(struct ev_fd *fd)
		return llog_vEINVAL(fd);

 * Decreases the ref-count of @fd by 1. Destroys the object if the ref-count

void ev_fd_unref(struct ev_fd *fd)
		return llog_vEINVAL(fd);
static int fd_epoll_add(struct ev_fd *fd)
	struct epoll_event ep;

	memset(&ep, 0, sizeof(ep));
	if (fd->mask & EV_READABLE)
		ep.events |= EPOLLIN;
	if (fd->mask & EV_WRITEABLE)
		ep.events |= EPOLLOUT;
	if (fd->mask & EV_ET)
		ep.events |= EPOLLET;

	ret = epoll_ctl(fd->loop->efd, EPOLL_CTL_ADD, fd->fd, &ep);
		llog_warning(fd, "cannot add fd %d to epoll set (%d): %m",

static void fd_epoll_remove(struct ev_fd *fd)
	ret = epoll_ctl(fd->loop->efd, EPOLL_CTL_DEL, fd->fd, NULL);
	if (ret && errno != EBADF)
		llog_warning(fd, "cannot remove fd %d from epoll set (%d): %m",
static int fd_epoll_update(struct ev_fd *fd)
	struct epoll_event ep;

	memset(&ep, 0, sizeof(ep));
	if (fd->mask & EV_READABLE)
		ep.events |= EPOLLIN;
	if (fd->mask & EV_WRITEABLE)
		ep.events |= EPOLLOUT;
	if (fd->mask & EV_ET)
		ep.events |= EPOLLET;

	ret = epoll_ctl(fd->loop->efd, EPOLL_CTL_MOD, fd->fd, &ep);
		llog_warning(fd, "cannot update epoll fd %d (%d): %m",
 * This enables @fd. By default every fd object is enabled. If you disabled
 * it, you can re-enable it with this call.
 *
 * Returns: 0 on success, otherwise a negative error code

int ev_fd_enable(struct ev_fd *fd)
	ret = fd_epoll_add(fd);

 * Disables @fd. That means no more events are handled for @fd until you
 * re-enable it with ev_fd_enable().

void ev_fd_disable(struct ev_fd *fd)
	if (!fd || !fd->enabled)

	fd->enabled = false;
	fd_epoll_remove(fd);

 * Returns whether the fd object is enabled or disabled.
 *
 * Returns: true if @fd is enabled, otherwise false

bool ev_fd_is_enabled(struct ev_fd *fd)
	return fd && fd->enabled;
 * Returns whether the fd object is bound to an event loop.
 *
 * Returns: true if @fd is bound, otherwise false

bool ev_fd_is_bound(struct ev_fd *fd)
	return fd && fd->loop;

 * ev_fd_set_cb_data:
 * @cb: New user callback
 * @data: New user data
 *
 * This changes the user callback and user data that were set in ev_fd_new().
 * Both can be set to NULL. If @cb is NULL, the callback will not be called

void ev_fd_set_cb_data(struct ev_fd *fd, ev_fd_cb cb, void *data)
 * @mask: Bitmask of %EV_READABLE and %EV_WRITEABLE
 *
 * This resets the event mask of @fd to @mask.
 *
 * Returns: 0 on success, otherwise a negative error code

int ev_fd_update(struct ev_fd *fd, int mask)
	if (fd->mask == mask && !(mask & EV_ET))

	ret = fd_epoll_update(fd);
 * @out: Storage for the result
 * @rfd: File descriptor
 * @mask: Bitmask of %EV_READABLE and %EV_WRITEABLE
 * @cb: User callback
 *
 * This creates a new fd object like ev_fd_new() and directly registers it in
 * the event loop @loop. See ev_fd_new() and ev_eloop_add_fd() for more
 * information.
 * The ref-count of @out is 1, so you must call ev_eloop_rm_fd() to destroy
 * the fd. You must not call ev_fd_unref() unless you called ev_fd_ref()
 * before.
 *
 * Returns: 0 on success, otherwise a negative error code
1391 int ev_eloop_new_fd(struct ev_eloop *loop, struct ev_fd **out, int rfd,
1392 int mask, ev_fd_cb cb, void *data)
1399 if (!out || rfd < 0)
1400 return llog_EINVAL(loop);
1402 ret = ev_fd_new(&fd, rfd, mask, cb, data, loop->llog, loop->llog_data);
1406 ret = ev_eloop_add_fd(loop, fd);
1422 * Registers @fd in the event loop @loop. This increases the ref-count of both
1423 * @loop and @fd. From now on the user callback of @fd may get called during
1426 * Returns: 0 on success, otherwise negative error code
1429 int ev_eloop_add_fd(struct ev_eloop *loop, struct ev_fd *fd)
1435 if (!fd || fd->loop)
1436 return llog_EINVAL(loop);
1441 ret = fd_epoll_add(fd);
1457 * Removes the fd object @fd from its event loop. If you did not call
1458 * ev_eloop_add_fd() before, this will do nothing.
1459 * This decreases the refcount of @fd and the event loop by 1.
1460 * It is safe to call this in any callback. This makes sure that the current
1461 * dispatcher will not get confused or read invalid memory.
1464 void ev_eloop_rm_fd(struct ev_fd *fd)
1466 struct ev_eloop *loop;
1469 if (!fd || !fd->loop)
1474 fd_epoll_remove(fd);
1477 * If we are currently dispatching events, we need to remove ourselves
1478 * from the temporary event list.
1480 if (loop->dispatching) {
1481 for (i = 0; i < loop->cur_fds_cnt; ++i) {
1482 if (fd == loop->cur_fds[i].data.ptr)
1483 loop->cur_fds[i].data.ptr = NULL;
1489 ev_eloop_unref(loop);
1494 * Timer sources allow delaying a specific event by a relative timeout. The
1495 * timeout can be set to trigger after a specific time. Optionally, you can
1496 * also make the timeout trigger every time it elapses, so you
1497 * basically get a pulse that reliably calls the callback.
1498 * The callback gets as parameter the number of timeouts that elapsed since it
1499 * was last called (in case the application couldn't handle them fast
1500 * enough). The timeout can be specified with nanosecond precision. However,
1501 * the real precision depends on the operating system and hardware.
1504 static int timer_drain(struct ev_timer *timer, uint64_t *out)
1507 uint64_t expirations;
1512 len = read(timer->fd, &expirations, sizeof(expirations));
1514 if (errno == EAGAIN) {
1517 llog_warning(timer, "cannot read timerfd (%d): %m",
1521 } else if (len == 0) {
1522 llog_warning(timer, "EOF on timer source");
1524 } else if (len != sizeof(expirations)) {
1525 llog_warn(timer, "invalid size %d read on timerfd", (int)len);
1534 static void timer_cb(struct ev_fd *fd, int mask, void *data)
1536 struct ev_timer *timer = data;
1537 uint64_t expirations;
1540 if (mask & (EV_HUP | EV_ERR)) {
1541 llog_warn(fd, "HUP/ERR on timer source");
1545 if (mask & EV_READABLE) {
1546 ret = timer_drain(timer, &expirations);
1549 if (expirations > 0) {
1551 timer->cb(timer, expirations, timer->data);
1558 ev_timer_disable(timer);
1560 timer->cb(timer, 0, timer->data);
1563 static const struct itimerspec ev_timer_zero;
1567 * @out: Timer pointer where to store the new timer
1569 * @cb: callback to use for this event-source
1570 * @data: user-specified data
1571 * @log: logging function or NULL
1572 * @log_data: logging function user-data
1574 * This creates a new timer-source. See "man timerfd_create" for information on
1575 * the @spec argument. The timer is always relative and uses the monotonic
1576 * kernel clock.
1578 * Returns: 0 on success, negative error on failure
1581 int ev_timer_new(struct ev_timer **out, const struct itimerspec *spec,
1582 ev_timer_cb cb, void *data, ev_log_t log, void *log_data)
1584 struct ev_timer *timer;
1588 return llog_dEINVAL(log, log_data);
1591 spec = &ev_timer_zero;
1593 timer = malloc(sizeof(*timer));
1595 return llog_dENOMEM(log, log_data);
1597 memset(timer, 0, sizeof(*timer));
1600 timer->llog_data = log_data;
1604 timer->fd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC | TFD_NONBLOCK);
1605 if (timer->fd < 0) {
1606 llog_error(timer, "cannot create timerfd (%d): %m", errno);
1611 ret = timerfd_settime(timer->fd, 0, spec, NULL);
1613 llog_warn(timer, "cannot set timerfd (%d): %m", errno);
1618 ret = ev_fd_new(&timer->efd, timer->fd, EV_READABLE, timer_cb, timer,
1619 timer->llog, timer->llog_data);
1635 * @timer: Timer object
1637 * Increase reference count by 1.
1640 void ev_timer_ref(struct ev_timer *timer)
1645 return llog_vEINVAL(timer);
1652 * @timer: Timer object
1654 * Decrease reference-count by 1 and destroy timer if it drops to 0.
1657 void ev_timer_unref(struct ev_timer *timer)
1662 return llog_vEINVAL(timer);
1666 ev_fd_unref(timer->efd);
1673 * @timer: Timer object
1675 * Enable the timer. This calls ev_fd_enable() on the fd that implements this
1678 * Returns: 0 on success, negative error code on failure
1681 int ev_timer_enable(struct ev_timer *timer)
1686 return ev_fd_enable(timer->efd);
1691 * @timer: Timer object
1693 * Disable the timer. This calls ev_fd_disable() on the fd that implements this
1699 void ev_timer_disable(struct ev_timer *timer)
1704 ev_fd_disable(timer->efd);
1708 * ev_timer_is_enabled:
1709 * @timer: Timer object
1711 * Checks whether the timer is enabled.
1713 * Returns: true if timer is enabled, false otherwise
1716 bool ev_timer_is_enabled(struct ev_timer *timer)
1718 return timer && ev_fd_is_enabled(timer->efd);
1722 * ev_timer_is_bound:
1723 * @timer: Timer object
1725 * Checks whether the timer is bound to an event loop.
1727 * Returns: true if the timer is bound, false otherwise.
1730 bool ev_timer_is_bound(struct ev_timer *timer)
1732 return timer && ev_fd_is_bound(timer->efd);
1736 * ev_timer_set_cb_data:
1737 * @timer: Timer object
1738 * @cb: User callback or NULL
1739 * @data: User data or NULL
1741 * This changes the user-supplied callback and data that is used for this timer
1745 void ev_timer_set_cb_data(struct ev_timer *timer, ev_timer_cb cb, void *data)
1756 * @timer: Timer object
1759 * This changes the timer timespan. See "man timerfd_settime" for information
1760 * on the @spec parameter.
1762 * Returns: 0 on success, negative error code on failure.
1765 int ev_timer_update(struct ev_timer *timer, const struct itimerspec *spec)
1773 spec = &ev_timer_zero;
1775 ret = timerfd_settime(timer->fd, 0, spec, NULL);
1777 llog_warn(timer, "cannot set timerfd (%d): %m", errno);
1786 * @timer: valid timer object
1787 * @expirations: destination to save result or NULL
1789 * This reads the current expiration-count from the timer object @timer and
1790 * saves it in @expirations (if it is non-NULL). This can be used to clear the
1791 * timer after an idle-period or similar.
1792 * Note that the timer_cb() callback function automatically calls this before
1793 * calling the user-supplied callback.
1795 * Returns: 0 on success, negative error code on failure.
1798 int ev_timer_drain(struct ev_timer *timer, uint64_t *expirations)
1803 return timer_drain(timer, expirations);
1807 * ev_eloop_new_timer:
1809 * @out: output where to store the new timer
1811 * @cb: user callback
1812 * @data: user-supplied data
1814 * This is a combination of ev_timer_new() and ev_eloop_add_timer(). See both
1815 * for more information.
1817 * Returns: 0 on success, negative error code on failure.
1820 int ev_eloop_new_timer(struct ev_eloop *loop, struct ev_timer **out,
1821 const struct itimerspec *spec, ev_timer_cb cb,
1824 struct ev_timer *timer;
1830 return llog_EINVAL(loop);
1832 ret = ev_timer_new(&timer, spec, cb, data, loop->llog, loop->llog_data);
1836 ret = ev_eloop_add_timer(loop, timer);
1838 ev_timer_unref(timer);
1842 ev_timer_unref(timer);
1848 * ev_eloop_add_timer:
1850 * @timer: Timer source
1852 * This adds @timer as source to @loop. @timer must currently be unbound;
1853 * otherwise, this will fail with -EALREADY.
1855 * Returns: 0 on success, negative error code on failure
1858 int ev_eloop_add_timer(struct ev_eloop *loop, struct ev_timer *timer)
1865 return llog_EINVAL(loop);
1867 if (ev_fd_is_bound(timer->efd))
1870 ret = ev_eloop_add_fd(loop, timer->efd);
1874 ev_timer_ref(timer);
1879 * ev_eloop_rm_timer:
1880 * @timer: Timer object
1882 * If @timer is currently bound to an event loop, this will remove this binding
1886 void ev_eloop_rm_timer(struct ev_timer *timer)
1888 if (!timer || !ev_fd_is_bound(timer->efd))
1891 ev_eloop_rm_fd(timer->efd);
1892 ev_timer_unref(timer);
1897 * Counter sources are a very basic event notification mechanism. They are
1898 * based on the eventfd() system call on Linux machines. Internally, there is
1899 * a 64-bit unsigned integer that can be increased by the caller. By default
1900 * it is set to 0. If it is non-zero, the event-fd will be notified and the
1901 * user-defined callback is called. The callback gets as argument the current
1902 * state of the counter, and the counter is reset to 0.
1904 * If the internal counter would overflow, an increase fails silently, so an
1905 * overflow will never occur; however, you may lose events this way. This can
1906 * usually be ignored when increasing by small values only.
1909 static void counter_event(struct ev_fd *fd, int mask, void *data)
1911 struct ev_counter *cnt = data;
1915 if (mask & (EV_HUP | EV_ERR)) {
1916 llog_warning(fd, "HUP/ERR on eventfd");
1918 cnt->cb(cnt, 0, cnt->data);
1922 if (!(mask & EV_READABLE))
1925 ret = read(cnt->fd, &val, sizeof(val));
1927 if (errno != EAGAIN) {
1928 llog_warning(fd, "reading eventfd failed (%d): %m", errno);
1929 ev_counter_disable(cnt);
1931 cnt->cb(cnt, 0, cnt->data);
1933 } else if (ret == 0) {
1934 llog_warning(fd, "EOF on eventfd");
1935 ev_counter_disable(cnt);
1937 cnt->cb(cnt, 0, cnt->data);
1938 } else if (ret != sizeof(val)) {
1939 llog_warning(fd, "read %d bytes instead of 8 on eventfd", ret);
1940 ev_counter_disable(cnt);
1942 cnt->cb(cnt, 0, cnt->data);
1943 } else if (val > 0) {
1945 cnt->cb(cnt, val, cnt->data);
1951 * @out: Where to store the new counter
1952 * @cb: user-supplied callback
1953 * @data: user-supplied data
1954 * @log: logging function or NULL
1955 * @log_data: logging function user-data
1957 * This creates a new counter object and stores it in @out.
1959 * Returns: 0 on success, negative error code on failure.
1962 int ev_counter_new(struct ev_counter **out, ev_counter_cb cb, void *data,
1963 ev_log_t log, void *log_data)
1965 struct ev_counter *cnt;
1969 return llog_dEINVAL(log, log_data);
1971 cnt = malloc(sizeof(*cnt));
1973 return llog_dENOMEM(log, log_data);
1974 memset(cnt, 0, sizeof(*cnt));
1977 cnt->llog_data = log_data;
1981 cnt->fd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
1983 llog_error(cnt, "cannot create eventfd (%d): %m", errno);
1988 ret = ev_fd_new(&cnt->efd, cnt->fd, EV_READABLE, counter_event, cnt,
1989 cnt->llog, cnt->llog_data);
2005 * @cnt: Counter object
2007 * This increases the reference-count of @cnt by 1.
2010 void ev_counter_ref(struct ev_counter *cnt)
2015 return llog_vEINVAL(cnt);
2022 * @cnt: Counter object
2024 * This decreases the reference-count of @cnt by 1 and destroys the object if
2028 void ev_counter_unref(struct ev_counter *cnt)
2033 return llog_vEINVAL(cnt);
2037 ev_fd_unref(cnt->efd);
2043 * ev_counter_enable:
2044 * @cnt: Counter object
2046 * This enables the counter object. It calls ev_fd_enable() on the underlying
2049 * Returns: 0 on success, negative error code on failure
2052 int ev_counter_enable(struct ev_counter *cnt)
2057 return ev_fd_enable(cnt->efd);
2061 * ev_counter_disable:
2062 * @cnt: Counter object
2064 * This disables the counter. It calls ev_fd_disable() on the underlying
2068 void ev_counter_disable(struct ev_counter *cnt)
2073 ev_fd_disable(cnt->efd);
2077 * ev_counter_is_enabled:
2078 * @cnt: counter object
2080 * Checks whether the counter is enabled.
2082 * Returns: true if the counter is enabled, otherwise returns false.
2085 bool ev_counter_is_enabled(struct ev_counter *cnt)
2087 return cnt && ev_fd_is_enabled(cnt->efd);
2091 * ev_counter_is_bound:
2092 * @cnt: Counter object
2094 * Checks whether the counter is bound to an event loop.
2096 * Returns: true if the counter is bound, otherwise false is returned.
2099 bool ev_counter_is_bound(struct ev_counter *cnt)
2101 return cnt && ev_fd_is_bound(cnt->efd);
2105 * ev_counter_set_cb_data:
2106 * @cnt: Counter object
2107 * @cb: user-supplied callback
2108 * @data: user-supplied data
2110 * This changes the user-supplied callback and data for the given counter
2114 void ev_counter_set_cb_data(struct ev_counter *cnt, ev_counter_cb cb,
2126 * @cnt: Counter object
2127 * @val: Counter increase amount
2129 * This increases the counter @cnt by @val.
2131 * Returns: 0 on success, negative error code on failure.
2134 int ev_counter_inc(struct ev_counter *cnt, uint64_t val)
2139 return write_eventfd(cnt->llog, cnt->llog_data, cnt->fd, val);
2143 * ev_eloop_new_counter:
2144 * @eloop: event loop
2145 * @out: output storage for new counter
2146 * @cb: user-supplied callback
2147 * @data: user-supplied data
2149 * This combines ev_counter_new() and ev_eloop_add_counter() in one call.
2151 * Returns: 0 on success, negative error code on failure.
2154 int ev_eloop_new_counter(struct ev_eloop *eloop, struct ev_counter **out,
2155 ev_counter_cb cb, void *data)
2158 struct ev_counter *cnt;
2163 return llog_EINVAL(eloop);
2165 ret = ev_counter_new(&cnt, cb, data, eloop->llog, eloop->llog_data);
2169 ret = ev_eloop_add_counter(eloop, cnt);
2171 ev_counter_unref(cnt);
2175 ev_counter_unref(cnt);
2181 * ev_eloop_add_counter:
2182 * @eloop: Event loop
2183 * @cnt: Counter object
2185 * This adds @cnt to the given event loop @eloop. If @cnt is already bound,
2186 * this will fail with -EALREADY.
2188 * Returns: 0 on success, negative error code on failure.
2191 int ev_eloop_add_counter(struct ev_eloop *eloop, struct ev_counter *cnt)
2198 return llog_EINVAL(eloop);
2200 if (ev_fd_is_bound(cnt->efd))
2203 ret = ev_eloop_add_fd(eloop, cnt->efd);
2207 ev_counter_ref(cnt);
2212 * ev_eloop_rm_counter:
2213 * @cnt: Counter object
2215 * If @cnt is bound to an event loop, then this will remove this binding again.
2218 void ev_eloop_rm_counter(struct ev_counter *cnt)
2220 if (!cnt || !ev_fd_is_bound(cnt->efd))
2223 ev_eloop_rm_fd(cnt->efd);
2224 ev_counter_unref(cnt);
2229 * This allows registering for shared signal events. See the description of the
2230 * shared signal object above for more information on how this works. Also see
2231 * the eloop description for some drawbacks of nesting eloop objects with the
2232 * same shared signal sources.
2236 * ev_eloop_register_signal_cb:
2238 * @signum: Signal number
2239 * @cb: user-supplied callback
2240 * @data: user-supplied data
2242 * This registers a new callback for the given signal @signum. @cb must not be
2245 * Returns: 0 on success, negative error code on failure.
2248 int ev_eloop_register_signal_cb(struct ev_eloop *loop, int signum,
2249 ev_signal_shared_cb cb, void *data)
2251 struct ev_signal_shared *sig = NULL;
2253 struct shl_dlist *iter;
2257 if (signum < 0 || !cb)
2258 return llog_EINVAL(loop);
2260 shl_dlist_for_each(iter, &loop->sig_list) {
2261 sig = shl_dlist_entry(iter, struct ev_signal_shared, list);
2262 if (sig->signum == signum)
2268 ret = signal_new(&sig, loop, signum);
2273 ret = shl_hook_add_cast(sig->hook, cb, data, false);
2283 * ev_eloop_unregister_signal_cb:
2285 * @signum: signal number
2286 * @cb: user-supplied callback
2287 * @data: user-supplied data
2289 * This removes a previously registered signal callback. The arguments
2290 * must be the same as for the ev_eloop_register_signal_cb() call. If multiple
2291 * callbacks with the same arguments are registered, then only one callback is
2292 * removed. It doesn't matter which one is removed, as they are identical.
2295 void ev_eloop_unregister_signal_cb(struct ev_eloop *loop, int signum,
2296 ev_signal_shared_cb cb, void *data)
2298 struct ev_signal_shared *sig;
2299 struct shl_dlist *iter;
2304 shl_dlist_for_each(iter, &loop->sig_list) {
2305 sig = shl_dlist_entry(iter, struct ev_signal_shared, list);
2306 if (sig->signum == signum) {
2307 shl_hook_rm_cast(sig->hook, cb, data);
2308 if (!shl_hook_num(sig->hook))
2316 * Child reaper sources
2317 * If at least one child-reaper callback is registered, then the eloop object
2318 * listens for SIGCHLD and waits for all exiting children. The callbacks are
2319 * then notified for each PID that signaled an event.
2320 * Note that this cannot be done via the shared-signal sources as the waitpid()
2321 * call must not be done in callbacks. Otherwise, only one callback would see
2322 * the events while the others would call waitpid() and get EAGAIN.
2326 int ev_eloop_register_child_cb(struct ev_eloop *loop, ev_child_cb cb,
2335 empty = !shl_hook_num(loop->chlds);
2336 ret = shl_hook_add_cast(loop->chlds, cb, data, false);
2341 ret = ev_eloop_register_signal_cb(loop, SIGCHLD, sig_child,
2344 shl_hook_rm_cast(loop->chlds, cb, data);
2353 void ev_eloop_unregister_child_cb(struct ev_eloop *loop, ev_child_cb cb,
2356 if (!loop || !shl_hook_num(loop->chlds))
2359 shl_hook_rm_cast(loop->chlds, cb, data);
2360 if (!shl_hook_num(loop->chlds))
2361 ev_eloop_unregister_signal_cb(loop, SIGCHLD, sig_child, loop);
2366 * Idle sources are called every time a new dispatch round is started.
2367 * That means, as long as at least one idle source is registered, the thread
2368 * will _never_ go to sleep. So please unregister your idle source if no longer
2373 * ev_eloop_register_idle_cb:
2374 * @eloop: event loop
2375 * @cb: user-supplied callback
2376 * @data: user-supplied data
2379 * This registers a new idle-source with the given callback and data. @cb must
2382 * Returns: 0 on success, negative error code on failure.
2385 int ev_eloop_register_idle_cb(struct ev_eloop *eloop, ev_idle_cb cb,
2386 void *data, unsigned int flags)
2389 bool os = flags & EV_ONESHOT;
2391 if (!eloop || (flags & ~EV_IDLE_ALL))
2394 if ((flags & EV_SINGLE))
2395 ret = shl_hook_add_single_cast(eloop->idlers, cb, data, os);
2397 ret = shl_hook_add_cast(eloop->idlers, cb, data, os);
2402 ret = write_eventfd(eloop->llog, eloop->llog_data, eloop->idle_fd, 1);
2404 llog_warning(eloop, "cannot increase eloop idle-counter");
2405 shl_hook_rm_cast(eloop->idlers, cb, data);
2413 * ev_eloop_unregister_idle_cb:
2414 * @eloop: event loop
2415 * @cb: user-supplied callback
2416 * @data: user-supplied data
2419 * This removes an idle-source. The arguments must be the same as for the
2420 * ev_eloop_register_idle_cb() call. If two identical callbacks are registered,
2421 * then only one is removed. It doesn't matter which one is removed, because
2422 * they are identical.
2425 void ev_eloop_unregister_idle_cb(struct ev_eloop *eloop, ev_idle_cb cb,
2426 void *data, unsigned int flags)
2428 if (!eloop || (flags & ~EV_IDLE_ALL))
2431 if (flags & EV_SINGLE)
2432 shl_hook_rm_all_cast(eloop->idlers, cb, data);
2434 shl_hook_rm_cast(eloop->idlers, cb, data);
2438 * Pre-Dispatch Callbacks
2439 * A pre-dispatch cb is called before each dispatch round is started.
2440 * You should avoid using them; instead, do not rely on any specific
2441 * dispatch behavior and expect every event to be received asynchronously.
2442 * However, this hook is useful for integrating other limited APIs into this
2443 * event loop if they do not provide proper FD abstractions.
2447 * ev_eloop_register_pre_cb:
2448 * @eloop: event loop
2449 * @cb: user-supplied callback
2450 * @data: user-supplied data
2452 * This registers a new pre-cb with the given callback and data. @cb must
2455 * Returns: 0 on success, negative error code on failure.
2458 int ev_eloop_register_pre_cb(struct ev_eloop *eloop, ev_idle_cb cb,
2464 return shl_hook_add_cast(eloop->pres, cb, data, false);
2468 * ev_eloop_unregister_pre_cb:
2469 * @eloop: event loop
2470 * @cb: user-supplied callback
2471 * @data: user-supplied data
2473 * This removes a pre-cb. The arguments must be the same as for the
2474 * ev_eloop_register_pre_cb() call. If two identical callbacks are registered,
2475 * then only one is removed. It doesn't matter which one is removed, because
2476 * they are identical.
2479 void ev_eloop_unregister_pre_cb(struct ev_eloop *eloop, ev_idle_cb cb,
2485 shl_hook_rm_cast(eloop->pres, cb, data);
2489 * Post-Dispatch Callbacks
2490 * A post-dispatch cb is called whenever a single dispatch round completes.
2491 * You should avoid using them; instead, do not rely on any specific
2492 * dispatch behavior and expect every event to be received asynchronously.
2493 * However, this hook is useful for integrating other limited APIs into this
2494 * event loop if they do not provide proper FD abstractions.
2498 * ev_eloop_register_post_cb:
2499 * @eloop: event loop
2500 * @cb: user-supplied callback
2501 * @data: user-supplied data
2503 * This registers a new post-cb with the given callback and data. @cb must
2506 * Returns: 0 on success, negative error code on failure.
2509 int ev_eloop_register_post_cb(struct ev_eloop *eloop, ev_idle_cb cb,
2515 return shl_hook_add_cast(eloop->posts, cb, data, false);
2519 * ev_eloop_unregister_post_cb:
2520 * @eloop: event loop
2521 * @cb: user-supplied callback
2522 * @data: user-supplied data
2524 * This removes a post-cb. The arguments must be the same as for the
2525 * ev_eloop_register_post_cb() call. If two identical callbacks are registered,
2526 * then only one is removed. It doesn't matter which one is removed, because
2527 * they are identical.
2530 void ev_eloop_unregister_post_cb(struct ev_eloop *eloop, ev_idle_cb cb,
2536 shl_hook_rm_cast(eloop->posts, cb, data);