 * Copyright (c) 2011-2012 David Herrmann <dh.herrmann@googlemail.com>
 * Copyright (c) 2011 University of Tuebingen
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files
 * (the "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
 * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
 * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
 * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
 * @short_description: Event loop
 *
 * The event loop allows you to register event sources and poll them for
 * events. When an event occurs, the user-supplied callback is called.
 *
 * The event loop allows the callbacks to modify _any_ data they want. They
 * can remove themselves or other sources from the event loop, even from
 * within a callback. This, however, means that recursive dispatch calls are
 * not supported; this increases performance and avoids internal
 * dispatch-stacks.
 *
 * Sources can be one of:
 * - File descriptors: An fd that is watched for readable/writeable events
 * - Timers: An event that occurs after a relative timeout
 * - Counters: An event that occurs when the counter is non-zero
 * - Signals: An event that occurs when a signal is caught
 * - Idle: An event that occurs when nothing else is done
 * - Eloop: An event loop itself can be a source of another event loop
 * A source can be registered with a single event loop only! You cannot add
 * it to multiple event loops simultaneously. Also, all provided sources are
 * based on the file-descriptor source, so it is guaranteed that you can get
 * a file descriptor for every source type. This is not exported via the
 * public API, but you can get the epoll-fd, which is basically a selectable
 * FD summary of all event sources.
 * For instance, if you're developing a library, you can use the eloop
 * library internally and have a full event-loop implementation inside your
 * library without any side-effects. You simply export the epoll-fd of the
 * eloop object via your public API, and outside users see only a single
 * file descriptor. They include this FD in their own application event
 * loop, which then dispatches the messages to your library. Internally, you
 * simply forward this dispatching to ev_eloop_dispatch(), which then calls
 * all your internal callbacks.
 * That is, you have an event loop inside your library without requiring the
 * outside user to use the same event loop. You also have no global state or
 * thread-bound event loops like the Qt/Gtk event loops. So you have full
 * access to the whole event loop without any side-effects.
 * The eloop library does not use any global data. Therefore, it is fully
 * re-entrant and no synchronization is needed. However, a single object is
 * not thread-safe. This means that if you access a single eloop object, or
 * sources registered on it, from two different threads, you need to
 * synchronize them. Furthermore, all callbacks are called from the thread
 * that calls ev_eloop_dispatch() or ev_eloop_run().
 * This guarantees that you have full control over the eloop, but it also
 * means that you have to implement additional functionality like
 * thread-affinity yourself (obviously, only if you need it).
 * The philosophy behind this library is that a proper application needs
 * only a single thread that uses an event loop. Multiple threads should be
 * used for computation, not as a way to avoid learning non-blocking I/O!
 * Therefore, only the application's event-loop thread needs an event loop;
 * all other threads only perform computation and return the data to the
 * main thread. However, the library does not enforce this design choice. On
 * the contrary, it supports all other kinds of application design, too. But
 * as it is optimized for performance, other application designs may need to
 * add further functionality (like thread-affinity) themselves, as it would
 * slow down the event loop if it were implemented natively.
 * To get started, simply create an eloop object with ev_eloop_new(). All
 * functions return 0 on success and a negative error code like -EFAULT on
 * failure. -EINVAL is returned if invalid parameters were passed.
 * Every object is ref-counted. *_ref() increases the reference count and
 * *_unref() decreases it. *_unref() also destroys the object when the
 * ref-count drops to zero.
 * To create new objects you call *_new(). It stores a pointer to the new
 * object in the location you passed as a parameter. Nearly all structures
 * are opaque, that is, you cannot access member fields directly. This
 * guarantees that the internals can be extended without breaking the API.
 * You can create sources with ev_fd_new(), ev_timer_new(), ... and add them
 * to your eloop with ev_eloop_add_fd(), ev_eloop_add_timer(), ...
 * After they are added, you can call ev_eloop_run() to run this eloop for
 * the given time. If you pass -1 as timeout, it runs until some callback
 * calls ev_eloop_exit() on this eloop.
 * You can perform _any_ operation on an eloop object inside of callbacks.
 * You can add new sources and remove, destroy, or modify existing ones. You
 * can also do all of this on the currently active source.
 *
 * All objects are enabled by default. You can disable them with *_disable()
 * and re-enable them with *_enable(). Only while enabled are they added to
 * the dispatcher and their callbacks called.
 * Two source types differ from the others for performance reasons:
 * Idle sources: Idle sources can be registered with
 * ev_eloop_register_idle_cb() and unregistered with
 * ev_eloop_unregister_idle_cb(). They internally share a single
 * file descriptor to make them faster, so you do not get the same access
 * as with other event sources (you cannot enable/disable them or similar).
 * Idle sources are called every time ev_eloop_dispatch() is called. That
 * is, as long as an idle source is registered, the event loop will not go
 * to sleep.
 * Signal sources: In terms of API they are very similar to idle sources.
 * The same restrictions apply; however, their type is very different. A
 * signal callback is called when the specified signal is received. It is
 * not called in signal-context! Rather, it is called in the same context
 * as every other source. They are implemented with signalfd.
 * You can register multiple callbacks for the same signal and all callbacks
 * will be called (in contrast to a plain signalfd, where only one fd gets
 * the signal). This is done internally by sharing the signalfd.
 * However, there is one restriction: you cannot share a signalfd between
 * multiple eloop instances. That is, if you register a callback for the
 * same signal on two different eloop instances (which are connected to each
 * other), then only one eloop instance will fire the signal source. This is
 * a restriction of signalfd that cannot be overcome. However, it is very
 * uncommon to register multiple callbacks for a signal, so this shouldn't
 * affect common application use-cases.
 * Also note that if you register a callback for SIGCHLD, then the eloop
 * object will automatically reap all pending zombies _after_ your callback
 * has been called. So if you need to check for them, check for all of them
 * in the callback; after you return, they will be gone.
 * When adding a signal handler, the signal is automatically added to the
 * currently blocked signals. It is not removed when dropping the
 * signal source, though.
 * Eloop uses several system calls which may fail. All errors (including
 * memory allocation errors via -ENOMEM) are forwarded to the caller;
 * however, it is often preferable to have a more detailed log message.
 * Therefore, eloop takes a logging function as an argument for each object.
 * Pass NULL if you are not interested in logging; this disables logging
 * entirely. Otherwise, pass in a callback from your application. This
 * callback will be called when a message is to be logged. The function may
 * be called under any circumstances (out-of-memory, etc.) and should always
 * behave well. Nothing is ever logged except through this callback.
#include <errno.h>
#include <inttypes.h>
#include <pthread.h>
#include <signal.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <sys/signalfd.h>
#include <sys/time.h>
#include <sys/timerfd.h>
#include <sys/wait.h>
#include <unistd.h>

#include "shl_dlist.h"
#include "shl_hook.h"
#include "shl_llog.h"

#define LLOG_SUBSYSTEM "eloop"
 * @ref: refcnt of this object
 * @llog: llog log function
 * @efd: The epoll file descriptor
 * @fd: Event source around @efd so you can nest event loops
 * @cnt: Counter source used for idle events
 * @sig_list: Shared signal sources
 * @idlers: List of idle sources
 * @cur_fds: Current dispatch array of fds
 * @cur_fds_cnt: current length of @cur_fds
 * @cur_fds_size: absolute size of @cur_fds
 * @exit: true if we should exit the main loop
 *
 * An event loop is an object where you can register event sources. If you
 * then sleep on the event loop, you will be woken up when any event source
 * fires. An event loop is itself an event source, so you can nest them.

	struct shl_dlist sig_list;
	struct shl_hook *chlds;
	struct shl_hook *idlers;
	struct shl_hook *pres;
	struct shl_hook *posts;

	struct epoll_event *cur_fds;
 * @ref: refcnt for object
 * @llog: llog log function
 * @fd: the actual file descriptor
 * @mask: the event mask for this fd (EV_READABLE, EV_WRITEABLE, ...)
 * @cb: the user callback
 * @data: the user data
 * @enabled: true if the object is currently enabled
 * @loop: NULL or pointer to eloop if bound
 *
 * File descriptors are the most basic event source. Internally, they are
 * used to implement all other kinds of event sources.

	struct ev_eloop *loop;
 * @ref: refcnt of this object
 * @llog: llog log function
 * @fd: the timerfd file descriptor
 * @efd: fd-source for @fd
 *
 * Based on timerfd, this allows firing events after relative timeouts.
 * @ref: refcnt of counter object
 * @llog: llog log function
 * @fd: eventfd file descriptor
 * @efd: fd-source for @fd
 *
 * Counter sources fire when they are non-zero. They are based on the
 * eventfd syscall.
 * @list: list integration into the ev_eloop object
 * @fd: the signalfd file descriptor for this signal
 * @signum: the actual signal number
 * @hook: list of registered user callbacks for this signal
 *
 * A shared signal allows multiple listeners for the same signal. All
 * listeners are called if the signal is caught.

struct ev_signal_shared {
	struct shl_dlist list;

	struct shl_hook *hook;
 * signalfd allows us to conveniently listen for incoming signals. However,
 * if multiple signalfds are registered for the same signal, then only one
 * of them will get signaled. To avoid this restriction, we provide shared
 * signals. That means the user can register for a signal, and if no other
 * user is registered for this signal yet, we create a new shared signal.
 * Otherwise, we add the user to the existing shared signal.
 * If the signal is caught, we simply call all users that are registered
 * for it.
 * To avoid side-effects, we automatically block all signals for the current
 * thread when a signalfd is created. We never unblock the signal. However,
 * most modern Linux user-space programs avoid signal handlers anyway, so
 * you can just use signalfd.
static void sig_child(struct ev_eloop *loop, struct signalfd_siginfo *info,
	struct ev_child_data d;
	pid = waitpid(-1, &status, WNOHANG);
		llog_warn(loop, "cannot wait on child: %m");
	} else if (pid == 0) {
	} else if (WIFEXITED(status)) {
		if (WEXITSTATUS(status) != 0)
			llog_debug(loop, "child %d exited with status %d",
				   pid, WEXITSTATUS(status));
		else
			llog_debug(loop, "child %d exited successfully",
	} else if (WIFSIGNALED(status)) {
		llog_debug(loop, "child %d exited by signal %d", pid,
	shl_hook_call(loop->chlds, loop, &d);
static void shared_signal_cb(struct ev_fd *fd, int mask, void *data)
	struct ev_signal_shared *sig = data;
	struct signalfd_siginfo info;
	if (mask & EV_READABLE) {
		len = read(fd->fd, &info, sizeof(info));
		if (len != sizeof(info))
			llog_warn(fd, "cannot read signalfd (%d): %m", errno);
		else
			shl_hook_call(sig->hook, sig->fd->loop, &info);
	} else if (mask & (EV_HUP | EV_ERR)) {
		llog_warn(fd, "HUP/ERR on signal source");
 * @out: Shared signal storage where the new object is stored
 * @loop: The event loop where this shared signal is registered
 * @signum: Signal number that this shared signal is for
 *
 * This creates a new shared signal and links it into the list of shared
 * signals in @loop. It automatically adds @signum to the signal mask of the
 * current thread so the signal is blocked.
 *
 * Returns: 0 on success, otherwise negative error code
static int signal_new(struct ev_signal_shared **out, struct ev_eloop *loop,
	struct ev_signal_shared *sig;
		return llog_EINVAL(loop);
	sig = malloc(sizeof(*sig));
		return llog_ENOMEM(loop);
	memset(sig, 0, sizeof(*sig));
	sig->signum = signum;
	ret = shl_hook_new(&sig->hook);
	sigaddset(&mask, signum);
	fd = signalfd(-1, &mask, SFD_CLOEXEC | SFD_NONBLOCK);
		llog_error(loop, "cannot create signalfd");
	ret = ev_eloop_new_fd(loop, &sig->fd, fd, EV_READABLE,
			      shared_signal_cb, sig);
	pthread_sigmask(SIG_BLOCK, &mask, NULL);
	shl_dlist_link(&loop->sig_list, &sig->list);
	shl_hook_free(sig->hook);
 * @sig: The shared signal to be freed
 *
 * This unlinks the given shared signal from the event loop where it was
 * registered and destroys it. This does _not_ unblock the signal number it
 * was associated with. If you want that, you need to do it manually.
static void signal_free(struct ev_signal_shared *sig)
	shl_dlist_unlink(&sig->list);
	ev_eloop_rm_fd(sig->fd);
	shl_hook_free(sig->hook);
	 * We do not unblock the signal here as there may be other subsystems
	 * which blocked this signal, so we do not want to interfere. If you
	 * need a clean sigmask, then do it yourself.
 * The main eloop object is responsible for correctly dispatching all
 * events. You can register fd, idle, or signal sources with it. All other
 * kinds of sources are based on these. In fact, even idle and signal
 * sources are based on fd sources.
 * As a special feature, you can retrieve an fd for an eloop object itself
 * and pass it to your own event loop. If this fd is readable, call
 * ev_eloop_dispatch() to make this loop dispatch all pending events.
 *
 * There is one restriction when nesting eloops, though. You cannot share
 * signals across eloop boundaries. That is, if you have registered for
 * shared signals in two eloops for the _same_ signal, then only one eloop
 * will receive the signal (and which one is pretty much random).
 * However, such a setup is usually broken by design and should never
 * occur. Even shared signals are quite rare.
 * Anyway, you must take this into account when nesting eloops.
 * For the curious reader: We implement idle sources with counter sources.
 * That is, whenever there is an idle source, we increase the counter
 * source. Hence, the next dispatch call will call the counter source, and
 * this will call all registered idle sources. If the idle sources do not
 * unregister themselves, we directly increase the counter again and the
 * next dispatch round will call all idle sources again. This, however, has
 * the side-effect that idle sources are _not_ called before other fd
 * events, but are rather mixed in between.
static void eloop_event(struct ev_fd *fd, int mask, void *data)
	struct ev_eloop *eloop = data;
	if (mask & EV_READABLE)
		ev_eloop_dispatch(eloop, 0);
	if (mask & (EV_HUP | EV_ERR))
		llog_warn(eloop, "HUP/ERR on eloop source");
static int write_eventfd(llog_submit_t llog, int fd, uint64_t val)
		return llog_dEINVAL(llog);
	if (val == 0xffffffffffffffffULL) {
		llog_dwarning(llog, "increasing counter with invalid value %" PRIu64, val);
	ret = write(fd, &val, sizeof(val));
		llog_dwarning(llog, "eventfd overflow while writing %" PRIu64, val);
		llog_dwarning(llog, "eventfd write error (%d): %m", errno);
	} else if (ret != sizeof(val)) {
		llog_dwarning(llog, "wrote %d bytes instead of 8 to eventfd", ret);
static void eloop_idle_event(struct ev_eloop *loop, unsigned int mask)
	if (mask & (EV_HUP | EV_ERR)) {
		llog_warning(loop, "HUP/ERR on eventfd");
	if (!(mask & EV_READABLE))
	ret = read(loop->idle_fd, &val, sizeof(val));
		if (errno != EAGAIN) {
			llog_warning(loop, "reading eventfd failed (%d): %m",
	} else if (ret == 0) {
		llog_warning(loop, "EOF on eventfd");
	} else if (ret != sizeof(val)) {
		llog_warning(loop, "read %d bytes instead of 8 on eventfd",
	} else if (val > 0) {
		shl_hook_call(loop->idlers, loop, NULL);
		if (shl_hook_num(loop->idlers) > 0)
			write_eventfd(loop->llog, loop->idle_fd, 1);
	ret = epoll_ctl(loop->efd, EPOLL_CTL_DEL, loop->idle_fd, NULL);
		llog_warning(loop, "cannot remove fd %d from epollset (%d): %m",
			     loop->idle_fd, errno);
 * @out: Storage for the result
 * @log: logging function or NULL
 *
 * This creates a new event loop with ref-count 1. The new event loop is
 * stored in @out and has no registered events.
 *
 * Returns: 0 on success, otherwise negative error code
int ev_eloop_new(struct ev_eloop **out, ev_log_t log)
	struct ev_eloop *loop;
	struct epoll_event ep;
		return llog_dEINVAL(log);
	loop = malloc(sizeof(*loop));
		return llog_dENOMEM(log);
	memset(loop, 0, sizeof(*loop));
	shl_dlist_init(&loop->sig_list);
	loop->cur_fds_size = 32;
	loop->cur_fds = malloc(sizeof(struct epoll_event) *
	if (!loop->cur_fds) {
		ret = llog_ENOMEM(loop);
	ret = shl_hook_new(&loop->chlds);
	ret = shl_hook_new(&loop->idlers);
	ret = shl_hook_new(&loop->pres);
	ret = shl_hook_new(&loop->posts);
	loop->efd = epoll_create1(EPOLL_CLOEXEC);
		llog_error(loop, "cannot create epoll-fd");
	ret = ev_fd_new(&loop->fd, loop->efd, EV_READABLE, eloop_event, loop,
	loop->idle_fd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
	if (loop->idle_fd < 0) {
		llog_error(loop, "cannot create eventfd (%d): %m", errno);
	memset(&ep, 0, sizeof(ep));
	ep.events |= EPOLLIN;
	ret = epoll_ctl(loop->efd, EPOLL_CTL_ADD, loop->idle_fd, &ep);
		llog_warning(loop, "cannot add fd %d to epoll set (%d): %m",
			     loop->idle_fd, errno);
	llog_debug(loop, "new eloop object %p", loop);
	close(loop->idle_fd);
	ev_fd_unref(loop->fd);
	shl_hook_free(loop->posts);
	shl_hook_free(loop->pres);
	shl_hook_free(loop->idlers);
	shl_hook_free(loop->chlds);
 * @loop: Event loop to be modified or NULL
 *
 * This increases the ref-count of @loop by 1.
void ev_eloop_ref(struct ev_eloop *loop)
 * @loop: Event loop to be modified or NULL
 *
 * This decreases the ref-count of @loop by 1. If it drops to zero, the
 * event loop is destroyed. Note that every registered event source holds a
 * ref-count on the event loop, so this ref-count will never drop to zero
 * while there is a registered event source.
void ev_eloop_unref(struct ev_eloop *loop)
	struct ev_signal_shared *sig;
		return llog_vEINVAL(loop);
	llog_debug(loop, "free eloop object %p", loop);
	if (shl_hook_num(loop->chlds))
		ev_eloop_unregister_signal_cb(loop, SIGCHLD, sig_child, loop);
	while (loop->sig_list.next != &loop->sig_list) {
		sig = shl_dlist_entry(loop->sig_list.next,
				      struct ev_signal_shared,
	ret = epoll_ctl(loop->efd, EPOLL_CTL_DEL, loop->idle_fd, NULL);
		llog_warning(loop, "cannot remove fd %d from epollset (%d): %m",
			     loop->idle_fd, errno);
	close(loop->idle_fd);
	ev_fd_unref(loop->fd);
	shl_hook_free(loop->posts);
	shl_hook_free(loop->pres);
	shl_hook_free(loop->idlers);
	shl_hook_free(loop->chlds);
 * @loop: The event loop where @fd is registered
 * @fd: The fd to be flushed
 *
 * If @loop is currently dispatching events, this removes all pending events
 * of @fd from the current event list.
void ev_eloop_flush_fd(struct ev_eloop *loop, struct ev_fd *fd)
		return llog_vEINVAL(loop);
	if (loop->dispatching) {
		for (i = 0; i < loop->cur_fds_cnt; ++i) {
			if (loop->cur_fds[i].data.ptr == fd)
				loop->cur_fds[i].data.ptr = NULL;
static unsigned int convert_mask(uint32_t mask)
	unsigned int res = 0;
 * @loop: Event loop to be dispatched
 * @timeout: Timeout in milliseconds
 *
 * This listens on @loop for incoming events and handles all events that
 * occurred. It waits at most @timeout milliseconds before returning. If
 * @timeout is -1, it waits until the first event arrives. If @timeout is 0,
 * it returns immediately if no event is currently pending.
 *
 * This performs only a single dispatch round. That is, once all sources
 * were checked for events and no more events are pending, this returns. If
 * it handled events and the timeout has not elapsed, it still returns.
 *
 * If ev_eloop_exit() was called on @loop, this returns immediately.
 *
 * Returns: 0 on success, otherwise negative error code
int ev_eloop_dispatch(struct ev_eloop *loop, int timeout)
	struct epoll_event *ep;
	int i, count, mask, ret;
		return llog_EINVAL(loop);
	if (loop->dispatching) {
		llog_warn(loop, "recursive dispatching not allowed");
	loop->dispatching = true;
	shl_hook_call(loop->pres, loop, NULL);
	count = epoll_wait(loop->efd,
		if (errno == EINTR) {
		llog_warn(loop, "epoll_wait dispatching failed: %m");
	} else if (count > loop->cur_fds_size) {
		count = loop->cur_fds_size;
	loop->cur_fds_cnt = count;
	for (i = 0; i < count; ++i) {
		if (ep[i].data.ptr == loop) {
			mask = convert_mask(ep[i].events);
			eloop_idle_event(loop, mask);
		if (!fd || !fd->cb || !fd->enabled)
		mask = convert_mask(ep[i].events);
		fd->cb(fd, mask, fd->data);
	if (count == loop->cur_fds_size) {
		ep = realloc(loop->cur_fds, sizeof(struct epoll_event) *
			     loop->cur_fds_size * 2);
			llog_warning(loop, "cannot reallocate dispatch cache to size %zu",
				     loop->cur_fds_size * 2);
		loop->cur_fds_size *= 2;
	shl_hook_call(loop->posts, loop, NULL);
	loop->dispatching = false;
 * @loop: The event loop to be run
 * @timeout: Timeout for this operation
 *
 * This is similar to ev_eloop_dispatch(), but runs for _exactly_ @timeout
 * milliseconds. It calls ev_eloop_dispatch() as often as it can until the
 * timeout has elapsed. If @timeout is -1, this runs until you call
 * ev_eloop_exit(). If @timeout is 0, this is equivalent to calling
 * ev_eloop_dispatch() with a timeout of 0.
 *
 * Calling ev_eloop_exit() will always interrupt this function and make it
 * return.
 *
 * Returns: 0 on success, otherwise a negative error code
int ev_eloop_run(struct ev_eloop *loop, int timeout)
	struct timeval tv, start;
	llog_debug(loop, "run for %d msecs", timeout);
	gettimeofday(&start, NULL);
	while (!loop->exit) {
		ret = ev_eloop_dispatch(loop, timeout);
		} else if (timeout > 0) {
			gettimeofday(&tv, NULL);
			off = tv.tv_sec - start.tv_sec;
			msec = (int64_t)tv.tv_usec - (int64_t)start.tv_usec;
				msec = 1000000 + msec;
 * @loop: Event loop that should exit
 *
 * This makes a call to ev_eloop_run() stop.
void ev_eloop_exit(struct ev_eloop *loop)
	llog_debug(loop, "exiting %p", loop);
		ev_eloop_exit(loop->fd->loop);
 * Returns a single file descriptor for the whole event loop. If that FD is
 * readable, then one of the event sources is active and you should call
 * ev_eloop_dispatch(loop, 0) to dispatch these events.
 * If the fd is not readable, then ev_eloop_dispatch() would sleep as there
 * are no pending events.
 *
 * Returns: A file descriptor for the event loop or negative error code
int ev_eloop_get_fd(struct ev_eloop *loop)
 * ev_eloop_new_eloop:
 * @loop: The parent event loop where the new event loop is registered
 * @out: Storage for new event loop
 *
 * This creates a new event loop and directly registers it as an event
 * source on the parent event loop @loop.
 *
 * Returns: 0 on success, otherwise negative error code
int ev_eloop_new_eloop(struct ev_eloop *loop, struct ev_eloop **out)
		return llog_EINVAL(loop);
	ret = ev_eloop_new(&el, loop->llog);
	ret = ev_eloop_add_eloop(loop, el);
 * ev_eloop_add_eloop:
 * @loop: Parent event loop
 * @add: The event loop that is registered as an event source on @loop
 *
 * This registers the existing event loop @add as an event source on the
 * parent event loop @loop.
 *
 * Returns: 0 on success, otherwise negative error code
int ev_eloop_add_eloop(struct ev_eloop *loop, struct ev_eloop *add)
		return llog_EINVAL(loop);
	/* This adds the epoll-fd into the parent epoll-set. This works
	 * perfectly well with registered FDs, timers, etc. However, we use
	 * shared signals in this event loop, so if the parent and child have
	 * overlapping shared signals, then the signal will be randomly
	 * delivered to either the parent-hook or the child-hook, but never
	 * to both.
	 *
	 * We may fix this by linking the child's sig_list into the parent's
	 * sig_list, but we haven't needed this yet, so we ignore it here.
	 */
	ret = ev_eloop_add_fd(loop, add->fd);
 * ev_eloop_rm_eloop:
 * @rm: Event loop to be unregistered from its parent
 *
 * This unregisters the event loop @rm as an event source from its parent.
 * If this event loop was not registered on any other event loop, then this
 * call does nothing.
void ev_eloop_rm_eloop(struct ev_eloop *rm)
	if (!rm || !rm->fd->loop)
		return;

	ev_eloop_rm_fd(rm->fd);
 * This allows adding file descriptors to an eloop. A file descriptor is the
 * most basic kind of source and is used to implement all other source
 * types.
 * By default a source is always enabled, but you can easily disable it by
 * calling ev_fd_disable(). This has the effect that the source is still
 * registered with the eloop, but will not wake up the thread or get called
 * until you enable it again.
 * @out: Storage for result
 * @rfd: The actual file descriptor
 * @mask: Bitmask of %EV_READABLE and %EV_WRITEABLE flags
 * @cb: User callback
 * @data: User data
 * @log: llog function or NULL
 *
 * This creates a new file descriptor source that is watched for the events
 * set in @mask. @rfd is the system file descriptor. The resulting object is
 * stored in @out. @cb and @data are the user callback and the user-supplied
 * data that is passed to the callback on events.
 * The FD is automatically watched for EV_HUP and EV_ERR events, too.
 *
 * Returns: 0 on success, otherwise negative error code
int ev_fd_new(struct ev_fd **out, int rfd, int mask, ev_fd_cb cb, void *data,
	if (!out || rfd < 0)
		return llog_dEINVAL(log);
	fd = malloc(sizeof(*fd));
	if (!fd)
		return llog_dENOMEM(log);
	memset(fd, 0, sizeof(*fd));
 * Increases the ref-count of @fd by 1.
void ev_fd_ref(struct ev_fd *fd)
		return llog_vEINVAL(fd);
 * Decreases the ref-count of @fd by 1. Destroys the object if the ref-count
 * drops to zero.
void ev_fd_unref(struct ev_fd *fd)
		return llog_vEINVAL(fd);
static int fd_epoll_add(struct ev_fd *fd)
	struct epoll_event ep;
	memset(&ep, 0, sizeof(ep));
	if (fd->mask & EV_READABLE)
		ep.events |= EPOLLIN;
	if (fd->mask & EV_WRITEABLE)
		ep.events |= EPOLLOUT;
	if (fd->mask & EV_ET)
		ep.events |= EPOLLET;
	ret = epoll_ctl(fd->loop->efd, EPOLL_CTL_ADD, fd->fd, &ep);
		llog_warning(fd, "cannot add fd %d to epoll set (%d): %m",
static void fd_epoll_remove(struct ev_fd *fd)
	ret = epoll_ctl(fd->loop->efd, EPOLL_CTL_DEL, fd->fd, NULL);
	if (ret && errno != EBADF)
		llog_warning(fd, "cannot remove fd %d from epoll set (%d): %m",
static int fd_epoll_update(struct ev_fd *fd)
	struct epoll_event ep;
	memset(&ep, 0, sizeof(ep));
	if (fd->mask & EV_READABLE)
		ep.events |= EPOLLIN;
	if (fd->mask & EV_WRITEABLE)
		ep.events |= EPOLLOUT;
	if (fd->mask & EV_ET)
		ep.events |= EPOLLET;
	ret = epoll_ctl(fd->loop->efd, EPOLL_CTL_MOD, fd->fd, &ep);
		llog_warning(fd, "cannot update epoll fd %d (%d): %m",
 * This enables @fd. By default every fd object is enabled. If you disabled
 * it, you can re-enable it with this call.
 *
 * Returns: 0 on success, otherwise negative error code
int ev_fd_enable(struct ev_fd *fd)
	ret = fd_epoll_add(fd);
 * Disables @fd. That means no more events are handled for @fd until you
 * re-enable it with ev_fd_enable().
void ev_fd_disable(struct ev_fd *fd)
	if (!fd || !fd->enabled)
		return;

	fd->enabled = false;
	fd_epoll_remove(fd);
 * Returns whether the fd object is enabled or disabled.
 *
 * Returns: true if @fd is enabled, otherwise false
bool ev_fd_is_enabled(struct ev_fd *fd)
	return fd && fd->enabled;
 * Returns true if the fd object is bound to an event loop.
 *
 * Returns: true if @fd is bound, otherwise false
bool ev_fd_is_bound(struct ev_fd *fd)
	return fd && fd->loop;
 * ev_fd_set_cb_data:
 * @cb: New user callback
 * @data: New user data
 *
 * This changes the user callback and user data that were set in
 * ev_fd_new(). Both can be set to NULL. If @cb is NULL, the callback will
 * not be called anymore.
void ev_fd_set_cb_data(struct ev_fd *fd, ev_fd_cb cb, void *data)
 * @mask: Bitmask of %EV_READABLE and %EV_WRITEABLE
 *
 * This resets the event mask of @fd to @mask.
 *
 * Returns: 0 on success, otherwise negative error code
int ev_fd_update(struct ev_fd *fd, int mask)
	if (fd->mask == mask && !(mask & EV_ET))
	ret = fd_epoll_update(fd);
 * @out: Storage for result
 * @rfd: File descriptor
 * @mask: Bitmask of %EV_READABLE and %EV_WRITEABLE
 * @cb: User callback
 *
 * This creates a new fd object like ev_fd_new() and directly registers it
 * in the event loop @loop. See ev_fd_new() and ev_eloop_add_fd() for more
 * information.
 * The ref-count of @out is 1, so you must call ev_eloop_rm_fd() to destroy
 * the fd. You must not call ev_fd_unref() unless you called ev_fd_ref()
 * before.
 *
 * Returns: 0 on success, otherwise negative error code
int ev_eloop_new_fd(struct ev_eloop *loop, struct ev_fd **out, int rfd,
		    int mask, ev_fd_cb cb, void *data)
	if (!out || rfd < 0)
		return llog_EINVAL(loop);
	ret = ev_fd_new(&fd, rfd, mask, cb, data, loop->llog);
	ret = ev_eloop_add_fd(loop, fd);
 * Registers @fd in the event loop @loop. This increases the ref-count of
 * both @loop and @fd. From now on, the user callback of @fd may get called
 * during dispatching.
 *
 * Returns: 0 on success, otherwise negative error code
int ev_eloop_add_fd(struct ev_eloop *loop, struct ev_fd *fd)
	if (!fd || fd->loop)
		return llog_EINVAL(loop);
	ret = fd_epoll_add(fd);
1414 * Removes the fd object @fd from its event loop. If you did not call
1415 * ev_eloop_add_fd() before, this will do nothing.
1416 * This decreases the refcount of @fd and the event loop by 1.
1417 * It is safe to call this in any callback. This makes sure that the current
1418 * dispatcher will not get confused or read invalid memory.
1420 void ev_eloop_rm_fd(struct ev_fd *fd)
1422 struct ev_eloop *loop;
1425 if (!fd || !fd->loop)
1430 fd_epoll_remove(fd);
1433 * If we are currently dispatching events, we need to remove ourself
1434 * from the temporary event list.
1436 if (loop->dispatching) {
1437 for (i = 0; i < loop->cur_fds_cnt; ++i) {
1438 if (fd == loop->cur_fds[i].data.ptr)
1439 loop->cur_fds[i].data.ptr = NULL;
1445 ev_eloop_unref(loop);
 * Timer sources allow delaying a specific event by a relative timeout. The
 * timeout can be set to trigger after a specific time. Optionally, you can
 * also make the timer trigger every time the timeout elapses, so you
 * basically get a pulse that reliably calls the callback.
 * The callback gets as parameter the number of timeouts that elapsed since it
 * was last called (in case the application couldn't call the callback fast
 * enough). The timeout can be specified with nanosecond precision. However,
 * the real precision depends on the operating system and hardware.
static int timer_drain(struct ev_timer *timer, uint64_t *out)
	uint64_t expirations;
	len = read(timer->fd, &expirations, sizeof(expirations));
	if (errno == EAGAIN) {
		llog_warning(timer, "cannot read timerfd (%d): %m",
	} else if (len == 0) {
		llog_warning(timer, "EOF on timer source");
	} else if (len != sizeof(expirations)) {
		llog_warn(timer, "invalid size %zd read on timerfd", len);
static void timer_cb(struct ev_fd *fd, int mask, void *data)
	struct ev_timer *timer = data;
	uint64_t expirations;
	if (mask & (EV_HUP | EV_ERR)) {
		llog_warn(fd, "HUP/ERR on timer source");
	if (mask & EV_READABLE) {
		ret = timer_drain(timer, &expirations);
		if (expirations > 0) {
			timer->cb(timer, expirations, timer->data);
	ev_timer_disable(timer);
	timer->cb(timer, 0, timer->data);
static const struct itimerspec ev_timer_zero;
 * @out: Timer pointer where to store the new timer
 * @cb: callback to use for this event-source
 * @data: user-specified data
 * @log: logging function or NULL
 * This creates a new timer-source. See "man timerfd_create" for information on
 * the @spec argument. The timer is always relative and uses the
 * monotonic kernel clock.
 * Returns: 0 on success, negative error on failure
int ev_timer_new(struct ev_timer **out, const struct itimerspec *spec,
		 ev_timer_cb cb, void *data, ev_log_t log)
	struct ev_timer *timer;
		return llog_dEINVAL(log);
		spec = &ev_timer_zero;
	timer = malloc(sizeof(*timer));
		return llog_dENOMEM(log);
	memset(timer, 0, sizeof(*timer));
	timer->fd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC | TFD_NONBLOCK);
	if (timer->fd < 0) {
		llog_error(timer, "cannot create timerfd (%d): %m", errno);
	ret = timerfd_settime(timer->fd, 0, spec, NULL);
		llog_warn(timer, "cannot set timerfd (%d): %m", errno);
	ret = ev_fd_new(&timer->efd, timer->fd, EV_READABLE, timer_cb, timer,
 * @timer: Timer object
 * Increase reference count by 1.
void ev_timer_ref(struct ev_timer *timer)
		return llog_vEINVAL(timer);
 * @timer: Timer object
 * Decrease reference count by 1 and destroy the timer if it drops to 0.
void ev_timer_unref(struct ev_timer *timer)
		return llog_vEINVAL(timer);
	ev_fd_unref(timer->efd);
 * @timer: Timer object
 * Enable the timer. This calls ev_fd_enable() on the fd that implements this
 * Returns: 0 on success, negative error code on failure
int ev_timer_enable(struct ev_timer *timer)
	return ev_fd_enable(timer->efd);
 * @timer: Timer object
 * Disable the timer. This calls ev_fd_disable() on the fd that implements this
void ev_timer_disable(struct ev_timer *timer)
	ev_fd_disable(timer->efd);
 * ev_timer_is_enabled:
 * @timer: Timer object
 * Checks whether the timer is enabled.
 * Returns: true if the timer is enabled, false otherwise
bool ev_timer_is_enabled(struct ev_timer *timer)
	return timer && ev_fd_is_enabled(timer->efd);
 * ev_timer_is_bound:
 * @timer: Timer object
 * Checks whether the timer is bound to an event loop.
 * Returns: true if the timer is bound, false otherwise.
bool ev_timer_is_bound(struct ev_timer *timer)
	return timer && ev_fd_is_bound(timer->efd);
 * ev_timer_set_cb_data:
 * @timer: Timer object
 * @cb: User callback or NULL
 * @data: User data or NULL
 * This changes the user-supplied callback and data that are used for this timer
void ev_timer_set_cb_data(struct ev_timer *timer, ev_timer_cb cb, void *data)
 * @timer: Timer object
 * This changes the timer timespan. See "man timerfd_settime" for information
 * on the @spec parameter.
 * Returns: 0 on success, negative error code on failure.
int ev_timer_update(struct ev_timer *timer, const struct itimerspec *spec)
		spec = &ev_timer_zero;
	ret = timerfd_settime(timer->fd, 0, spec, NULL);
		llog_warn(timer, "cannot set timerfd (%d): %m", errno);
 * @timer: valid timer object
 * @expirations: destination to save the result or NULL
 * This reads the current expiration count from the timer object @timer and
 * saves it in @expirations (if it is non-NULL). This can be used to clear the
 * timer after an idle period or similar.
 * Note that the timer_cb() callback function automatically calls this before
 * calling the user-supplied callback.
 * Returns: 0 on success, negative error code on failure.
int ev_timer_drain(struct ev_timer *timer, uint64_t *expirations)
	return timer_drain(timer, expirations);
 * ev_eloop_new_timer:
 * @out: output where to store the new timer
 * @cb: user callback
 * @data: user-supplied data
 * This is a combination of ev_timer_new() and ev_eloop_add_timer(). See both
 * for more information.
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_new_timer(struct ev_eloop *loop, struct ev_timer **out,
		       const struct itimerspec *spec, ev_timer_cb cb,
	struct ev_timer *timer;
		return llog_EINVAL(loop);
	ret = ev_timer_new(&timer, spec, cb, data, loop->llog);
	ret = ev_eloop_add_timer(loop, timer);
		ev_timer_unref(timer);
	ev_timer_unref(timer);
 * ev_eloop_add_timer:
 * @timer: Timer source
 * This adds @timer as a source to @loop. @timer must currently be unbound,
 * otherwise this will fail with -EALREADY.
 * Returns: 0 on success, negative error code on failure
int ev_eloop_add_timer(struct ev_eloop *loop, struct ev_timer *timer)
		return llog_EINVAL(loop);
	if (ev_fd_is_bound(timer->efd))
	ret = ev_eloop_add_fd(loop, timer->efd);
	ev_timer_ref(timer);
 * ev_eloop_rm_timer:
 * @timer: Timer object
 * If @timer is currently bound to an event loop, this will remove the binding
void ev_eloop_rm_timer(struct ev_timer *timer)
	if (!timer || !ev_fd_is_bound(timer->efd))
	ev_eloop_rm_fd(timer->efd);
	ev_timer_unref(timer);
 * Counter sources are a very basic event notification mechanism. They are
 * based on the eventfd() system call on Linux machines. Internally, there is a
 * 64-bit unsigned integer that can be increased by the caller. By default it
 * is set to 0. If it is non-zero, the event-fd will be notified and the
 * user-defined callback is called. The callback gets as argument the current
 * state of the counter, and the counter is reset to 0.
 * If the internal counter would overflow, an increase fails silently, so an
 * overflow will never occur; however, you may lose events this way. This can
 * be ignored only when increasing by small values.
static void counter_event(struct ev_fd *fd, int mask, void *data)
	struct ev_counter *cnt = data;
	if (mask & (EV_HUP | EV_ERR)) {
		llog_warning(fd, "HUP/ERR on eventfd");
		cnt->cb(cnt, 0, cnt->data);
	if (!(mask & EV_READABLE))
	ret = read(cnt->fd, &val, sizeof(val));
	if (errno != EAGAIN) {
		llog_warning(fd, "reading eventfd failed (%d): %m", errno);
		ev_counter_disable(cnt);
		cnt->cb(cnt, 0, cnt->data);
	} else if (ret == 0) {
		llog_warning(fd, "EOF on eventfd");
		ev_counter_disable(cnt);
		cnt->cb(cnt, 0, cnt->data);
	} else if (ret != sizeof(val)) {
		llog_warning(fd, "read %d bytes instead of 8 on eventfd", ret);
		ev_counter_disable(cnt);
		cnt->cb(cnt, 0, cnt->data);
	} else if (val > 0) {
		cnt->cb(cnt, val, cnt->data);
 * @out: Where to store the new counter
 * @cb: user-supplied callback
 * @data: user-supplied data
 * @log: logging function or NULL
 * This creates a new counter object and stores it in @out.
 * Returns: 0 on success, negative error code on failure.
int ev_counter_new(struct ev_counter **out, ev_counter_cb cb, void *data,
	struct ev_counter *cnt;
		return llog_dEINVAL(log);
	cnt = malloc(sizeof(*cnt));
		return llog_dENOMEM(log);
	memset(cnt, 0, sizeof(*cnt));
	cnt->fd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
		llog_error(cnt, "cannot create eventfd (%d): %m", errno);
	ret = ev_fd_new(&cnt->efd, cnt->fd, EV_READABLE, counter_event, cnt,
 * @cnt: Counter object
 * This increases the reference-count of @cnt by 1.
void ev_counter_ref(struct ev_counter *cnt)
		return llog_vEINVAL(cnt);
 * @cnt: Counter object
 * This decreases the reference-count of @cnt by 1 and destroys the object if
void ev_counter_unref(struct ev_counter *cnt)
		return llog_vEINVAL(cnt);
	ev_fd_unref(cnt->efd);
 * ev_counter_enable:
 * @cnt: Counter object
 * This enables the counter object. It calls ev_fd_enable() on the underlying
 * Returns: 0 on success, negative error code on failure
int ev_counter_enable(struct ev_counter *cnt)
	return ev_fd_enable(cnt->efd);
 * ev_counter_disable:
 * @cnt: Counter object
 * This disables the counter. It calls ev_fd_disable() on the underlying
void ev_counter_disable(struct ev_counter *cnt)
	ev_fd_disable(cnt->efd);
 * ev_counter_is_enabled:
 * @cnt: counter object
 * Checks whether the counter is enabled.
 * Returns: true if the counter is enabled, otherwise false.
bool ev_counter_is_enabled(struct ev_counter *cnt)
	return cnt && ev_fd_is_enabled(cnt->efd);
 * ev_counter_is_bound:
 * @cnt: Counter object
 * Checks whether the counter is bound to an event loop.
 * Returns: true if the counter is bound, otherwise false.
bool ev_counter_is_bound(struct ev_counter *cnt)
	return cnt && ev_fd_is_bound(cnt->efd);
 * ev_counter_set_cb_data:
 * @cnt: Counter object
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This changes the user-supplied callback and data for the given counter
void ev_counter_set_cb_data(struct ev_counter *cnt, ev_counter_cb cb,
 * @cnt: Counter object
 * @val: Counter increase amount
 * This increases the counter @cnt by @val.
 * Returns: 0 on success, negative error code on failure.
int ev_counter_inc(struct ev_counter *cnt, uint64_t val)
	return write_eventfd(cnt->llog, cnt->fd, val);
 * ev_eloop_new_counter:
 * @eloop: event loop
 * @out: output storage for new counter
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This combines ev_counter_new() and ev_eloop_add_counter() in one call.
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_new_counter(struct ev_eloop *eloop, struct ev_counter **out,
			 ev_counter_cb cb, void *data)
	struct ev_counter *cnt;
		return llog_EINVAL(eloop);
	ret = ev_counter_new(&cnt, cb, data, eloop->llog);
	ret = ev_eloop_add_counter(eloop, cnt);
		ev_counter_unref(cnt);
	ev_counter_unref(cnt);
 * ev_eloop_add_counter:
 * @eloop: Event loop
 * @cnt: Counter object
 * This adds @cnt to the given event loop @eloop. If @cnt is already bound,
 * this will fail with -EALREADY.
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_add_counter(struct ev_eloop *eloop, struct ev_counter *cnt)
		return llog_EINVAL(eloop);
	if (ev_fd_is_bound(cnt->efd))
	ret = ev_eloop_add_fd(eloop, cnt->efd);
	ev_counter_ref(cnt);
 * ev_eloop_rm_counter:
 * @cnt: Counter object
 * If @cnt is bound to an event loop, then this will remove the binding again.
void ev_eloop_rm_counter(struct ev_counter *cnt)
	if (!cnt || !ev_fd_is_bound(cnt->efd))
	ev_eloop_rm_fd(cnt->efd);
	ev_counter_unref(cnt);
 * This allows registering for shared signal events. See the description of the
 * shared signal object above for more information on how this works. Also see
 * the eloop description for some drawbacks when nesting eloop objects with the
 * same shared signal sources.
 * ev_eloop_register_signal_cb:
 * @signum: Signal number
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This registers a new callback for the given signal @signum. @cb must not be
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_register_signal_cb(struct ev_eloop *loop, int signum,
				ev_signal_shared_cb cb, void *data)
	struct ev_signal_shared *sig = NULL;
	struct shl_dlist *iter;
	if (signum < 0 || !cb)
		return llog_EINVAL(loop);
	shl_dlist_for_each(iter, &loop->sig_list) {
		sig = shl_dlist_entry(iter, struct ev_signal_shared, list);
		if (sig->signum == signum)
	ret = signal_new(&sig, loop, signum);
	ret = shl_hook_add_cast(sig->hook, cb, data, false);
 * ev_eloop_unregister_signal_cb:
 * @signum: signal number
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This removes a previously registered signal callback. The arguments
 * must be the same as for the ev_eloop_register_signal_cb() call. If multiple
 * callbacks with the same arguments are registered, then only one callback is
 * removed. It doesn't matter which one is removed, as they are identical.
void ev_eloop_unregister_signal_cb(struct ev_eloop *loop, int signum,
				   ev_signal_shared_cb cb, void *data)
	struct ev_signal_shared *sig;
	struct shl_dlist *iter;
	shl_dlist_for_each(iter, &loop->sig_list) {
		sig = shl_dlist_entry(iter, struct ev_signal_shared, list);
		if (sig->signum == signum) {
			shl_hook_rm_cast(sig->hook, cb, data);
			if (!shl_hook_num(sig->hook))
 * Child reaper sources
 * If at least one child-reaper callback is registered, then the eloop object
 * listens for SIGCHLD and waits for all exiting children. The callbacks are
 * then notified for each PID that signaled an event.
 * Note that this cannot be done via the shared-signal sources, as the
 * waitpid() call must not be done in callbacks. Otherwise, only one callback
 * would see the events while the others would call waitpid() and get EAGAIN.
int ev_eloop_register_child_cb(struct ev_eloop *loop, ev_child_cb cb,
	empty = !shl_hook_num(loop->chlds);
	ret = shl_hook_add_cast(loop->chlds, cb, data, false);
		ret = ev_eloop_register_signal_cb(loop, SIGCHLD, sig_child,
			shl_hook_rm_cast(loop->chlds, cb, data);
void ev_eloop_unregister_child_cb(struct ev_eloop *loop, ev_child_cb cb,
	if (!loop || !shl_hook_num(loop->chlds))
	shl_hook_rm_cast(loop->chlds, cb, data);
	if (!shl_hook_num(loop->chlds))
		ev_eloop_unregister_signal_cb(loop, SIGCHLD, sig_child, loop);
 * Idle sources are called every time a new dispatch round is started.
 * That means, as long as at least one idle source is registered, the thread
 * will _never_ go to sleep. So please unregister your idle source if it is
 * no longer needed.
 * ev_eloop_register_idle_cb:
 * @eloop: event loop
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This registers a new idle source with the given callback and data. @cb must
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_register_idle_cb(struct ev_eloop *eloop, ev_idle_cb cb,
			      void *data, unsigned int flags)
	bool os = flags & EV_ONESHOT;
	if (!eloop || (flags & ~EV_IDLE_ALL))
	if ((flags & EV_SINGLE))
		ret = shl_hook_add_single_cast(eloop->idlers, cb, data, os);
		ret = shl_hook_add_cast(eloop->idlers, cb, data, os);
	ret = write_eventfd(eloop->llog, eloop->idle_fd, 1);
		llog_warning(eloop, "cannot increase eloop idle-counter");
		shl_hook_rm_cast(eloop->idlers, cb, data);
 * ev_eloop_unregister_idle_cb:
 * @eloop: event loop
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This removes an idle source. The arguments must be the same as for the
 * ev_eloop_register_idle_cb() call. If two identical callbacks are registered,
 * then only one is removed. It doesn't matter which one is removed, because
 * they are identical.
void ev_eloop_unregister_idle_cb(struct ev_eloop *eloop, ev_idle_cb cb,
				 void *data, unsigned int flags)
	if (!eloop || (flags & ~EV_IDLE_ALL))
	if (flags & EV_SINGLE)
		shl_hook_rm_all_cast(eloop->idlers, cb, data);
		shl_hook_rm_cast(eloop->idlers, cb, data);
 * Pre-Dispatch Callbacks
 * A pre-dispatch cb is called before a single dispatch round is started.
 * You should avoid using them and instead not rely on any specific
 * dispatch behavior, but expect every event to be received asynchronously.
 * However, this hook is useful to integrate other limited APIs into this event
 * loop if they do not provide proper FD abstractions.
 * ev_eloop_register_pre_cb:
 * @eloop: event loop
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This registers a new pre-cb with the given callback and data. @cb must
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_register_pre_cb(struct ev_eloop *eloop, ev_idle_cb cb,
	return shl_hook_add_cast(eloop->pres, cb, data, false);
 * ev_eloop_unregister_pre_cb:
 * @eloop: event loop
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This removes a pre-cb. The arguments must be the same as for the
 * ev_eloop_register_pre_cb() call. If two identical callbacks are registered,
 * then only one is removed. It doesn't matter which one is removed, because
 * they are identical.
void ev_eloop_unregister_pre_cb(struct ev_eloop *eloop, ev_idle_cb cb,
	shl_hook_rm_cast(eloop->pres, cb, data);
 * Post-Dispatch Callbacks
 * A post-dispatch cb is called whenever a single dispatch round is complete.
 * You should avoid using them and instead not rely on any specific
 * dispatch behavior, but expect every event to be received asynchronously.
 * However, this hook is useful to integrate other limited APIs into this event
 * loop if they do not provide proper FD abstractions.
 * ev_eloop_register_post_cb:
 * @eloop: event loop
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This registers a new post-cb with the given callback and data. @cb must
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_register_post_cb(struct ev_eloop *eloop, ev_idle_cb cb,
	return shl_hook_add_cast(eloop->posts, cb, data, false);
 * ev_eloop_unregister_post_cb:
 * @eloop: event loop
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This removes a post-cb. The arguments must be the same as for the
 * ev_eloop_register_post_cb() call. If two identical callbacks are registered,
 * then only one is removed. It doesn't matter which one is removed, because
 * they are identical.
void ev_eloop_unregister_post_cb(struct ev_eloop *eloop, ev_idle_cb cb,
	shl_hook_rm_cast(eloop->posts, cb, data);