 * Copyright (c) 2011-2012 David Herrmann <dh.herrmann@googlemail.com>
 * Copyright (c) 2011 University of Tuebingen
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files
 * (the "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
 * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
 * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
 * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
 * @short_description: Event loop
 *
 * The event loop allows you to register event sources and poll them for
 * events. When an event occurs, the user-supplied callback is called.
 *
 * The event loop allows the callbacks to modify _any_ data they want. They
 * can remove themselves or other sources from the event loop even from
 * within a callback. This, however, means that recursive dispatch calls are
 * not supported, which increases performance and avoids internal
 * dispatch-stacks.
 * Sources can be one of:
 *  - File descriptors: An fd that is watched for readable/writeable events
 *  - Timers: An event that occurs after a relative timeout
 *  - Counters: An event that occurs when the counter is non-zero
 *  - Idle: An event that occurs when nothing else is to be done
 *  - Signals: An event that occurs when a signal is caught
 *  - Eloop: An event loop itself can be a source of another event loop
 * A source can be registered with a single event loop only! You cannot add
 * it to multiple event loops simultaneously. Also, all provided sources are
 * based on the file-descriptor source, so it is guaranteed that a
 * file descriptor backs every source type. This is not exported via the
 * public API, but you can get the epoll fd, which is basically a selectable
 * FD summary of all event sources.
 * For instance, if you're developing a library, you can use the eloop
 * library internally and you will have a full event-loop implementation
 * inside your library without any side-effects. You simply export the epoll
 * fd of the eloop object via your public API, and outside users see only a
 * single file descriptor. They include this FD in their own application
 * event loop, which will then dispatch the messages to your library.
 * Internally, you simply forward this dispatching to ev_eloop_dispatch(),
 * which then calls all your internal callbacks.
 * That is, you have an event loop inside your library without requiring the
 * outside user to use the same event loop. You also have no global state or
 * thread-bound event loops like the Qt/Gtk event loops. So you have full
 * access to the whole event loop without any side-effects.
 * The whole eloop library does not use any global data. Therefore, it is
 * fully re-entrant and no synchronization is needed. However, a single
 * object is not thread-safe. This means that if you access a single eloop
 * object, or sources registered on that eloop object, from two different
 * threads, you need to synchronize them. Furthermore, all callbacks are
 * called from the thread that calls ev_eloop_dispatch() or ev_eloop_run().
 * This guarantees that you have full control over the eloop, but it also
 * means you have to implement additional functionality like thread-affinity
 * yourself (obviously, only if you need it).
 * The philosophy behind this library is that a proper application needs
 * only a single thread that uses an event loop. Multiple threads should be
 * used for computation, not as a way to avoid learning how to do
 * non-blocking I/O! Therefore, only the application's main thread needs an
 * event loop; all other threads only perform computation and return the
 * data to the main thread. However, the library does not enforce this
 * design choice. On the contrary, it supports all other types of
 * application design, too. But as it is optimized for performance, other
 * application designs may need to add further functionality (like
 * thread-affinity) themselves, as it would slow down the event loop if it
 * were natively implemented.
 * To get started, simply create an eloop object with ev_eloop_new(). All
 * functions return 0 on success and a negative error code like -EFAULT on
 * failure. -EINVAL is returned if invalid parameters were passed.
 * Every object is ref-counted. *_ref() increases the reference count and
 * *_unref() decreases it. *_unref() also destroys the object once the
 * ref-count drops to zero.
 * To create new objects you call *_new(). It stores a pointer to the new
 * object in the location you passed as parameter. Nearly all structures are
 * opaque, that is, you cannot access member fields directly. This
 * guarantees that internals can be extended without breaking the API.
 *
 * You can create sources with ev_fd_new(), ev_timer_new(), ... and you can
 * add them to your eloop with ev_eloop_add_fd(), ev_eloop_add_timer(), ...
 * After they are added, you can call ev_eloop_run() to run this eloop for
 * the given time. If you pass -1 as timeout, it runs until some callback
 * calls ev_eloop_exit() on this eloop.
 * You can perform _any_ operation on an eloop object inside of callbacks:
 * add new sources, remove sources, destroy sources, modify sources. You can
 * also do all of this on the currently active source.
 *
 * All objects are enabled by default. You can disable them with *_disable()
 * and re-enable them with *_enable(). Only while enabled are they added to
 * the dispatcher and their callbacks called.
 * Two sources are handled differently for performance reasons:
 * Idle sources: Idle sources can be registered with
 * ev_eloop_register_idle_cb() and unregistered with
 * ev_eloop_unregister_idle_cb(). They internally share a single
 * file descriptor to make them faster, so you do not get the same access
 * as with other event sources (you cannot enable/disable them or similar).
 * Idle sources are called every time ev_eloop_dispatch() is called. That
 * is, as long as an idle source is registered, the event loop will not go
 * to sleep.
 *
 * Signal sources: Regarding the API, they are very similar to idle
 * sources. The same restrictions apply; however, their type is very
 * different. A signal callback is called when the specified signal is
 * received. It is not called in signal context! Rather, it is called in
 * the same context as every other source. Signal sources are implemented
 * with signalfd.
 * You can register multiple callbacks for the same signal and all
 * callbacks will be called (in contrast to plain signalfd, where only one
 * fd gets the signal). This is done internally by sharing the signalfd.
 * However, there is one restriction: you cannot share a signalfd between
 * multiple eloop instances. That is, if you register a callback for the
 * same signal on two different eloop instances (which are themselves
 * connected), then only one eloop instance will fire the signal source.
 * This is a restriction of signalfd that cannot be overcome. However, it
 * is very uncommon to register multiple callbacks for a signal, so this
 * shouldn't affect common application use-cases.
 * Also note that if you register a callback for SIGCHLD, then the eloop
 * object will automatically reap all pending zombies _after_ your callback
 * has been called. So if you need to check for them, check for all of them
 * in the callback. After you return, they will be gone.
 * When adding a signal handler, the signal is automatically added to the
 * currently blocked signals. It is not removed when dropping the signal
 * source, though.
 * Eloop uses several system calls which may fail. All errors (including
 * memory allocation errors via -ENOMEM) are forwarded to the caller;
 * however, it is often preferable to have a more detailed log message.
 * Therefore, eloop takes a logging function as argument for each object.
 * Pass NULL if you are not interested in logging. This will disable logging
 * entirely.
 * Otherwise, pass in a callback from your application. This callback will
 * be called when a message is to be logged. The function may be called
 * under any circumstances (out-of-memory, etc.) and should always behave
 * well. Nothing is ever logged except through this callback.
#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <sys/signalfd.h>
#include <sys/time.h>
#include <sys/timerfd.h>
#include <sys/wait.h>

#include "eloop.h"
#include "shl_dlist.h"
#include "shl_hook.h"
#include "shl_llog.h"

#define LLOG_SUBSYSTEM "eloop"
/**
 * struct ev_eloop:
 * @ref: refcnt of this object
 * @llog: llog log function
 * @efd: The epoll file descriptor
 * @fd: Event source around @efd so you can nest event loops
 * @cnt: Counter source used for idle events
 * @sig_list: Shared signal sources
 * @idlers: List of idle sources
 * @pres: Hooks called before each dispatch round
 * @posts: Hooks called after each dispatch round
 * @cur_fds: Current dispatch array of fds
 * @cur_fds_cnt: Current length of @cur_fds
 * @cur_fds_size: Absolute size of @cur_fds
 * @exit: true if we should exit the main loop
 *
 * An event loop is an object where you can register event sources. If you
 * then sleep on the event loop, you will be woken up when any event source
 * fires. An event loop itself is an event source, so you can nest them.
 */
struct ev_eloop {
	/* ... */
	struct shl_dlist sig_list;
	struct shl_hook *idlers;
	struct shl_hook *pres;
	struct shl_hook *posts;

	struct epoll_event *cur_fds;
	/* ... */
};
/**
 * struct ev_fd:
 * @ref: refcnt for object
 * @llog: llog log function
 * @fd: the actual file descriptor
 * @mask: the event mask for this fd (EV_READABLE, EV_WRITEABLE, ...)
 * @cb: the user callback
 * @data: the user data
 * @enabled: true if the object is currently enabled
 * @loop: NULL or pointer to eloop if bound
 *
 * File descriptors are the most basic event source. Internally, they are
 * used to implement all other kinds of event sources.
 */
struct ev_fd {
	/* ... */
	struct ev_eloop *loop;
};
/**
 * struct ev_timer:
 * @ref: refcnt of this object
 * @llog: llog log function
 * @fd: the timerfd file descriptor
 * @efd: fd-source for @fd
 *
 * Based on timerfd, this allows firing events after relative timeouts.
 */
/**
 * struct ev_counter:
 * @ref: refcnt of counter object
 * @llog: llog log function
 * @fd: eventfd file descriptor
 * @efd: fd-source for @fd
 *
 * Counter sources fire when they are non-zero. They are based on the
 * eventfd system call.
 */
/**
 * struct ev_signal_shared:
 * @list: list integration into ev_eloop object
 * @fd: the signalfd file descriptor for this signal
 * @signum: the actual signal number
 * @hook: list of registered user callbacks for this signal
 *
 * A shared signal allows multiple listeners for the same signal. All
 * listeners are called if the signal is caught.
 */
struct ev_signal_shared {
	struct shl_dlist list;
	/* ... */
	struct shl_hook *hook;
};
/*
 * signalfd allows us to conveniently listen for incoming signals. However,
 * if multiple signalfds are registered for the same signal, then only one
 * of them will be signaled. To avoid this restriction, we provide shared
 * signals. That means the user can register for a signal, and if no other
 * user is registered for this signal yet, we create a new shared signal.
 * Otherwise, we add the user to the existing shared signal.
 * If the signal is caught, we simply call all users that are registered for
 * this signal.
 *
 * To avoid side-effects, we automatically block all signals for the current
 * thread when a signalfd is created. We never unblock the signal. However,
 * most modern Linux user-space programs avoid signal handlers anyway, so
 * you can rely on signalfd alone.
 *
 * As a special note, we automatically handle SIGCHLD signals here and wait
 * for all pending child exits. This, however, is only activated when at
 * least one user has registered for SIGCHLD callbacks.
 */
static void sig_child(struct ev_fd *fd)
{
	pid_t pid;
	int status;

	while (1) {
		pid = waitpid(-1, &status, WNOHANG);
		if (pid == -1) {
			if (errno != ECHILD)
				llog_warn(fd, "cannot wait on child: %m");
			break;
		} else if (pid == 0) {
			break;
		} else if (WIFEXITED(status)) {
			if (WEXITSTATUS(status) != 0)
				llog_debug(fd, "child %d exited with status %d",
					   pid, WEXITSTATUS(status));
			else
				llog_debug(fd, "child %d exited successfully",
					   pid);
		} else if (WIFSIGNALED(status)) {
			llog_debug(fd, "child %d exited by signal %d", pid,
				   WTERMSIG(status));
		}
	}
}
static void shared_signal_cb(struct ev_fd *fd, int mask, void *data)
{
	struct ev_signal_shared *sig = data;
	struct signalfd_siginfo info;
	ssize_t len;

	if (mask & EV_READABLE) {
		len = read(fd->fd, &info, sizeof(info));
		if (len != sizeof(info))
			llog_warn(fd, "cannot read signalfd (%d): %m", errno);
		else
			shl_hook_call(sig->hook, sig->fd->loop, &info);

		if (info.ssi_signo == SIGCHLD)
			sig_child(fd);
	} else if (mask & (EV_HUP | EV_ERR)) {
		llog_warn(fd, "HUP/ERR on signal source");
	}
}
/**
 * signal_new:
 * @out: Shared signal storage where the new object is stored
 * @loop: The event loop where this shared signal is registered
 * @signum: Signal number that this shared signal is for
 *
 * This creates a new shared signal and links it into the list of shared
 * signals in @loop. It automatically adds @signum to the signal mask of the
 * current thread so the signal is blocked.
 *
 * Returns: 0 on success, otherwise negative error code
 */
static int signal_new(struct ev_signal_shared **out, struct ev_eloop *loop,
		      int signum)
{
	sigset_t mask;
	int ret, fd;
	struct ev_signal_shared *sig;

	if (!out || !loop || signum < 0)
		return llog_EINVAL(loop);

	sig = malloc(sizeof(*sig));
	if (!sig)
		return llog_ENOMEM(loop);
	memset(sig, 0, sizeof(*sig));
	sig->signum = signum;

	ret = shl_hook_new(&sig->hook);
	if (ret)
		goto err_free;

	sigemptyset(&mask);
	sigaddset(&mask, signum);

	fd = signalfd(-1, &mask, SFD_CLOEXEC | SFD_NONBLOCK);
	if (fd < 0) {
		ret = -EFAULT;
		llog_error(loop, "cannot create signalfd");
		goto err_hook;
	}

	ret = ev_eloop_new_fd(loop, &sig->fd, fd, EV_READABLE,
			      shared_signal_cb, sig);
	if (ret)
		goto err_fd;

	pthread_sigmask(SIG_BLOCK, &mask, NULL);
	shl_dlist_link(&loop->sig_list, &sig->list);

	*out = sig;
	return 0;

err_fd:
	close(fd);
err_hook:
	shl_hook_free(sig->hook);
err_free:
	free(sig);
	return ret;
}
/**
 * signal_free:
 * @sig: The shared signal to be freed
 *
 * This unlinks the given shared signal from the event loop where it was
 * registered and destroys it. This does _not_ unblock the signal number
 * that it was associated with. If you want that, you need to do it manually
 * with pthread_sigmask().
 */
static void signal_free(struct ev_signal_shared *sig)
{
	if (!sig)
		return;

	shl_dlist_unlink(&sig->list);
	ev_eloop_rm_fd(sig->fd);
	shl_hook_free(sig->hook);

	/*
	 * We do not unblock the signal here as there may be other subsystems
	 * which blocked this signal, so we do not want to interfere. If you
	 * need a clean sigmask then do it yourself.
	 */

	free(sig);
}
/*
 * The main eloop object is responsible for correctly dispatching all
 * events. You can register fd, idle or signal sources with it. All other
 * kinds of sources are based on these. In fact, even idle and signal
 * sources are based on fd sources.
 * As a special feature, you can retrieve an fd for an eloop object, too,
 * and pass it to your own event loop. If this fd is readable, call
 * ev_eloop_dispatch() to make this loop dispatch all pending events.
 *
 * There is one restriction when nesting eloops, though. You cannot share
 * signals across eloop boundaries. That is, if you have registered shared
 * signals in two eloops for the _same_ signal, then only one eloop will
 * receive the signal (and which one is effectively random).
 * However, such a setup is most often broken by design and hence should
 * never occur; even shared signals are quite rare.
 * Anyway, you must take this into account when nesting eloops.
 *
 * For the curious reader: We implement idle sources with counter sources.
 * That is, whenever there is an idle source, we increase the counter
 * source. Hence, the next dispatch call will invoke the counter source,
 * which in turn calls all registered idle sources. If the idle sources do
 * not unregister themselves, we directly increase the counter again and
 * the next dispatch round will call all idle sources again. This, however,
 * has the side-effect that idle sources are _not_ called before other fd
 * events but are rather mixed in between.
 */
static void eloop_event(struct ev_fd *fd, int mask, void *data)
{
	struct ev_eloop *eloop = data;

	if (mask & EV_READABLE)
		ev_eloop_dispatch(eloop, 0);
	if (mask & (EV_HUP | EV_ERR))
		llog_warn(eloop, "HUP/ERR on eloop source");
}
static int write_eventfd(llog_submit_t llog, int fd, uint64_t val)
{
	int ret;

	if (!val)
		return llog_dEINVAL(llog);

	if (val == 0xffffffffffffffffULL) {
		llog_dwarning(llog, "increasing counter with invalid value %llu", val);
		return -EINVAL;
	}

	ret = write(fd, &val, sizeof(val));
	if (ret < 0) {
		if (errno == EAGAIN)
			llog_dwarning(llog, "eventfd overflow while writing %llu", val);
		else
			llog_dwarning(llog, "eventfd write error (%d): %m", errno);
		return -EFAULT;
	} else if (ret != sizeof(val)) {
		llog_dwarning(llog, "wrote %d bytes instead of 8 to eventfd", ret);
		return -EFAULT;
	}

	return 0;
}
static void eloop_idle_event(struct ev_eloop *loop, unsigned int mask)
{
	int ret;
	uint64_t val;

	if (mask & (EV_HUP | EV_ERR)) {
		llog_warning(loop, "HUP/ERR on eventfd");
		goto err_out;
	}

	if (!(mask & EV_READABLE))
		return;

	ret = read(loop->idle_fd, &val, sizeof(val));
	if (ret < 0) {
		if (errno != EAGAIN) {
			llog_warning(loop, "reading eventfd failed (%d): %m",
				     errno);
			goto err_out;
		}
	} else if (ret == 0) {
		llog_warning(loop, "EOF on eventfd");
		goto err_out;
	} else if (ret != sizeof(val)) {
		llog_warning(loop, "read %d bytes instead of 8 on eventfd",
			     ret);
		goto err_out;
	} else if (val > 0) {
		shl_hook_call(loop->idlers, loop, NULL);
		if (shl_hook_num(loop->idlers) > 0)
			write_eventfd(loop->llog, loop->idle_fd, 1);
	}

	return;

err_out:
	ret = epoll_ctl(loop->efd, EPOLL_CTL_DEL, loop->idle_fd, NULL);
	if (ret)
		llog_warning(loop, "cannot remove fd %d from epollset (%d): %m",
			     loop->idle_fd, errno);
}
/**
 * ev_eloop_new:
 * @out: Storage for the result
 * @log: logging function or NULL
 *
 * This creates a new event loop with a ref-count of 1. The new event loop
 * is stored in @out and has no registered events.
 *
 * Returns: 0 on success, otherwise negative error code
 */
int ev_eloop_new(struct ev_eloop **out, ev_log_t log)
{
	struct ev_eloop *loop;
	int ret;
	struct epoll_event ep;

	if (!out)
		return llog_dEINVAL(log);

	loop = malloc(sizeof(*loop));
	if (!loop)
		return llog_dENOMEM(log);

	memset(loop, 0, sizeof(*loop));
	loop->ref = 1;
	loop->llog = log;
	shl_dlist_init(&loop->sig_list);

	loop->cur_fds_size = 32;
	loop->cur_fds = malloc(sizeof(struct epoll_event) *
			       loop->cur_fds_size);
	if (!loop->cur_fds) {
		ret = llog_ENOMEM(loop);
		goto err_free;
	}

	ret = shl_hook_new(&loop->idlers);
	if (ret)
		goto err_fds;
	ret = shl_hook_new(&loop->pres);
	if (ret)
		goto err_idlers;
	ret = shl_hook_new(&loop->posts);
	if (ret)
		goto err_pres;

	loop->efd = epoll_create1(EPOLL_CLOEXEC);
	if (loop->efd < 0) {
		ret = -EFAULT;
		llog_error(loop, "cannot create epoll-fd");
		goto err_posts;
	}

	ret = ev_fd_new(&loop->fd, loop->efd, EV_READABLE, eloop_event, loop,
			loop->llog);
	if (ret)
		goto err_efd;

	loop->idle_fd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
	if (loop->idle_fd < 0) {
		llog_error(loop, "cannot create eventfd (%d): %m", errno);
		ret = -EFAULT;
		goto err_fd_obj;
	}

	memset(&ep, 0, sizeof(ep));
	ep.events |= EPOLLIN;
	ep.data.ptr = loop;

	ret = epoll_ctl(loop->efd, EPOLL_CTL_ADD, loop->idle_fd, &ep);
	if (ret) {
		llog_warning(loop, "cannot add fd %d to epoll set (%d): %m",
			     loop->idle_fd, errno);
		ret = -EFAULT;
		goto err_idle_fd;
	}

	llog_debug(loop, "new eloop object %p", loop);
	*out = loop;
	return 0;

err_idle_fd:
	close(loop->idle_fd);
err_fd_obj:
	ev_fd_unref(loop->fd);
err_efd:
	close(loop->efd);
err_posts:
	shl_hook_free(loop->posts);
err_pres:
	shl_hook_free(loop->pres);
err_idlers:
	shl_hook_free(loop->idlers);
err_fds:
	free(loop->cur_fds);
err_free:
	free(loop);
	return ret;
}
/**
 * ev_eloop_ref:
 * @loop: Event loop to be modified or NULL
 *
 * This increases the ref-count of @loop by 1.
 */
void ev_eloop_ref(struct ev_eloop *loop)
{
	if (!loop)
		return;

	++loop->ref;
}
/**
 * ev_eloop_unref:
 * @loop: Event loop to be modified or NULL
 *
 * This decreases the ref-count of @loop by 1. If it drops to zero, the
 * event loop is destroyed. Note that every registered event source takes a
 * ref-count of the event loop, so the ref-count will never drop to zero
 * while there is a registered event source.
 */
void ev_eloop_unref(struct ev_eloop *loop)
{
	struct ev_signal_shared *sig;
	int ret;

	if (!loop)
		return;
	if (!loop->ref)
		return llog_vEINVAL(loop);
	if (--loop->ref)
		return;

	llog_debug(loop, "free eloop object %p", loop);

	while (loop->sig_list.next != &loop->sig_list) {
		sig = shl_dlist_entry(loop->sig_list.next,
				      struct ev_signal_shared,
				      list);
		signal_free(sig);
	}

	ret = epoll_ctl(loop->efd, EPOLL_CTL_DEL, loop->idle_fd, NULL);
	if (ret)
		llog_warning(loop, "cannot remove fd %d from epollset (%d): %m",
			     loop->idle_fd, errno);
	close(loop->idle_fd);

	ev_fd_unref(loop->fd);
	close(loop->efd);
	shl_hook_free(loop->posts);
	shl_hook_free(loop->pres);
	shl_hook_free(loop->idlers);
	free(loop->cur_fds);
	free(loop);
}
/**
 * ev_eloop_flush_fd:
 * @loop: The event loop where @fd is registered
 * @fd: The fd to be flushed
 *
 * If @loop is currently dispatching events, this removes all pending events
 * of @fd from the current event list.
 */
void ev_eloop_flush_fd(struct ev_eloop *loop, struct ev_fd *fd)
{
	int i;

	if (!loop || !fd)
		return llog_vEINVAL(loop);

	if (loop->dispatching) {
		for (i = 0; i < loop->cur_fds_cnt; ++i) {
			if (loop->cur_fds[i].data.ptr == fd)
				loop->cur_fds[i].data.ptr = NULL;
		}
	}
}
static unsigned int convert_mask(uint32_t mask)
{
	unsigned int res = 0;

	if (mask & EPOLLIN)
		res |= EV_READABLE;
	if (mask & EPOLLOUT)
		res |= EV_WRITEABLE;
	if (mask & EPOLLERR)
		res |= EV_ERR;
	if (mask & EPOLLHUP)
		res |= EV_HUP;

	return res;
}
/**
 * ev_eloop_dispatch:
 * @loop: Event loop to be dispatched
 * @timeout: Timeout in milliseconds
 *
 * This listens on @loop for incoming events and handles all events that
 * occurred. It waits at most @timeout milliseconds before returning. If
 * @timeout is -1, it waits until the first event arrives. If @timeout is
 * 0, it returns immediately if no event is currently pending.
 *
 * This performs only a single dispatch round. That is, once all sources
 * were checked for events and there are no more pending events, this will
 * return. If it handled events and the timeout has not elapsed, it will
 * still return.
 *
 * If ev_eloop_exit() was called on @loop, then this returns immediately.
 *
 * Returns: 0 on success, otherwise negative error code
 */
int ev_eloop_dispatch(struct ev_eloop *loop, int timeout)
{
	struct epoll_event *ep;
	struct ev_fd *fd;
	int i, count, mask, ret;

	if (!loop)
		return llog_EINVAL(loop);
	if (loop->exit)
		return 0;
	if (loop->dispatching) {
		llog_warn(loop, "recursive dispatching not allowed");
		return -EOPNOTSUPP;
	}

	loop->dispatching = true;

	shl_hook_call(loop->pres, loop, NULL);

	count = epoll_wait(loop->efd,
			   loop->cur_fds,
			   loop->cur_fds_size,
			   timeout);
	if (count < 0) {
		if (errno == EINTR) {
			ret = 0;
			goto out_dispatch;
		}
		llog_warn(loop, "epoll_wait dispatching failed: %m");
		ret = -errno;
		goto out_dispatch;
	} else if (count > loop->cur_fds_size) {
		count = loop->cur_fds_size;
	}

	ep = loop->cur_fds;
	loop->cur_fds_cnt = count;

	for (i = 0; i < count; ++i) {
		if (ep[i].data.ptr == loop) {
			mask = convert_mask(ep[i].events);
			eloop_idle_event(loop, mask);
		} else {
			fd = ep[i].data.ptr;
			if (!fd || !fd->cb || !fd->enabled)
				continue;

			mask = convert_mask(ep[i].events);
			fd->cb(fd, mask, fd->data);
		}
	}

	/* Grow the dispatch cache if it was completely filled. */
	if (count == loop->cur_fds_size) {
		ep = realloc(loop->cur_fds, sizeof(struct epoll_event) *
			     loop->cur_fds_size * 2);
		if (!ep) {
			llog_warning(loop, "cannot reallocate dispatch cache to size %u",
				     loop->cur_fds_size * 2);
		} else {
			loop->cur_fds = ep;
			loop->cur_fds_size *= 2;
		}
	}

	ret = 0;

out_dispatch:
	shl_hook_call(loop->posts, loop, NULL);
	loop->dispatching = false;
	return ret;
}
/**
 * ev_eloop_run:
 * @loop: The event loop to be run
 * @timeout: Timeout for this operation
 *
 * This is similar to ev_eloop_dispatch(), but runs _exactly_ for @timeout
 * milliseconds. It calls ev_eloop_dispatch() as often as it can until the
 * timeout has elapsed. If @timeout is -1, this runs until you call
 * ev_eloop_exit(). If @timeout is 0, this is equal to calling
 * ev_eloop_dispatch() with a timeout of 0.
 *
 * Calling ev_eloop_exit() will always interrupt this function and make it
 * return.
 *
 * Returns: 0 on success, otherwise a negative error code
 */
int ev_eloop_run(struct ev_eloop *loop, int timeout)
{
	int ret;
	struct timeval tv, start;
	int64_t off, msec;

	if (!loop)
		return llog_EINVAL(loop);
	loop->exit = false;

	llog_debug(loop, "run for %d msecs", timeout);
	gettimeofday(&start, NULL);

	while (!loop->exit) {
		ret = ev_eloop_dispatch(loop, timeout);
		if (ret)
			return ret;

		if (!timeout) {
			break;
		} else if (timeout > 0) {
			gettimeofday(&tv, NULL);
			off = tv.tv_sec - start.tv_sec;
			msec = (int64_t)tv.tv_usec - (int64_t)start.tv_usec;
			if (msec < 0) {
				off -= 1;
				msec = 1000000 + msec;
			}
			off = off * 1000 + msec / 1000;
			if (off >= timeout)
				break;
		}
	}

	return 0;
}
/**
 * ev_eloop_exit:
 * @loop: Event loop that should exit
 *
 * This makes a call to ev_eloop_run() stop.
 */
void ev_eloop_exit(struct ev_eloop *loop)
{
	if (!loop)
		return;

	llog_debug(loop, "exiting %p", loop);

	loop->exit = true;
	if (loop->fd->loop)
		ev_eloop_exit(loop->fd->loop);
}
/**
 * ev_eloop_get_fd:
 * @loop: Event loop
 *
 * Returns a single file descriptor for the whole event loop. If that FD is
 * readable, then one of the event sources is active and you should call
 * ev_eloop_dispatch(loop, 0) to dispatch these events.
 * If the FD is not readable, then ev_eloop_dispatch() would sleep as there
 * are no pending events.
 *
 * Returns: A file descriptor for the event loop or negative error code
 */
int ev_eloop_get_fd(struct ev_eloop *loop)
{
	if (!loop)
		return llog_EINVAL(loop);

	return loop->efd;
}
/**
 * ev_eloop_new_eloop:
 * @loop: The parent event loop where the new event loop is registered
 * @out: Storage for the new event loop
 *
 * This creates a new event loop and directly registers it as an event
 * source on the parent event loop @loop.
 *
 * Returns: 0 on success, otherwise negative error code
 */
int ev_eloop_new_eloop(struct ev_eloop *loop, struct ev_eloop **out)
{
	struct ev_eloop *el;
	int ret;

	if (!loop || !out)
		return llog_EINVAL(loop);

	ret = ev_eloop_new(&el, loop->llog);
	if (ret)
		return ret;

	ret = ev_eloop_add_eloop(loop, el);
	if (ret) {
		ev_eloop_unref(el);
		return ret;
	}

	*out = el;
	return 0;
}
/**
 * ev_eloop_add_eloop:
 * @loop: Parent event loop
 * @add: The event loop to register as an event source on @loop
 *
 * This registers the existing event loop @add as an event source on the
 * parent event loop @loop.
 *
 * Returns: 0 on success, otherwise negative error code
 */
int ev_eloop_add_eloop(struct ev_eloop *loop, struct ev_eloop *add)
{
	int ret;

	if (!loop || !add)
		return llog_EINVAL(loop);

	/* This adds the epoll fd into the parent epoll set. This works
	 * perfectly well with registered FDs, timers, etc. However, we use
	 * shared signals in this event loop, so if the parent and child
	 * have overlapping shared signals, then a signal will be randomly
	 * delivered to either the parent hook or the child hook, but never
	 * to both.
	 * We could fix this by linking the child's sig_list into the
	 * parent's sig_list, but we haven't needed this yet, so it is
	 * ignored here. */

	ret = ev_eloop_add_fd(loop, add->fd);
	if (ret)
		return ret;

	ev_eloop_ref(add);
	return 0;
}
/**
 * ev_eloop_rm_eloop:
 * @rm: Event loop to be unregistered from its parent
 *
 * This unregisters the event loop @rm as an event source from its parent.
 * If this event loop was not registered on any other event loop, then this
 * call does nothing.
 */
void ev_eloop_rm_eloop(struct ev_eloop *rm)
{
	if (!rm || !rm->fd->loop)
		return;

	ev_eloop_rm_fd(rm->fd);
	ev_eloop_unref(rm);
}
/*
 * This allows adding file descriptors to an eloop. A file descriptor is
 * the most basic kind of source and is used for all other source types.
 * By default, a source is always enabled, but you can easily disable it by
 * calling ev_fd_disable(). This has the effect that the source is still
 * registered with the eloop but will not wake up the thread or get called
 * until you enable it again.
 */
/**
 * ev_fd_new:
 * @out: Storage for result
 * @rfd: The actual file descriptor
 * @mask: Bitmask of %EV_READABLE and %EV_WRITEABLE flags
 * @cb: User callback
 * @data: User data
 * @log: llog function or NULL
 *
 * This creates a new file descriptor source that is watched for the events
 * set in @mask. @rfd is the system file descriptor. The resulting object is
 * stored in @out. @cb and @data are the user callback and the user-supplied
 * data that is passed to the callback on events.
 * The FD is automatically watched for EV_HUP and EV_ERR events, too.
 *
 * Returns: 0 on success, otherwise negative error code
 */
int ev_fd_new(struct ev_fd **out, int rfd, int mask, ev_fd_cb cb, void *data,
	      ev_log_t log)
{
	struct ev_fd *fd;

	if (!out || rfd < 0)
		return llog_dEINVAL(log);

	fd = malloc(sizeof(*fd));
	if (!fd)
		return llog_dENOMEM(log);

	memset(fd, 0, sizeof(*fd));
	fd->ref = 1;
	fd->llog = log;
	fd->fd = rfd;
	fd->mask = mask;
	fd->cb = cb;
	fd->data = data;
	fd->enabled = true;

	*out = fd;
	return 0;
}
/**
 * ev_fd_ref:
 * @fd: fd object
 *
 * Increases the ref-count of @fd by 1.
 */
void ev_fd_ref(struct ev_fd *fd)
{
	if (!fd)
		return;
	if (!fd->ref)
		return llog_vEINVAL(fd);

	++fd->ref;
}

/**
 * ev_fd_unref:
 * @fd: fd object
 *
 * Decreases the ref-count of @fd by 1. Destroys the object if the
 * ref-count drops to zero.
 */
void ev_fd_unref(struct ev_fd *fd)
{
	if (!fd)
		return;
	if (!fd->ref)
		return llog_vEINVAL(fd);
	if (--fd->ref)
		return;

	free(fd);
}
static int fd_epoll_add(struct ev_fd *fd)
{
	struct epoll_event ep;
	int ret;

	if (!fd->loop)
		return 0;

	memset(&ep, 0, sizeof(ep));
	if (fd->mask & EV_READABLE)
		ep.events |= EPOLLIN;
	if (fd->mask & EV_WRITEABLE)
		ep.events |= EPOLLOUT;
	ep.data.ptr = fd;

	ret = epoll_ctl(fd->loop->efd, EPOLL_CTL_ADD, fd->fd, &ep);
	if (ret) {
		llog_warning(fd, "cannot add fd %d to epoll set (%d): %m",
			     fd->fd, errno);
		return -EFAULT;
	}

	return 0;
}

static void fd_epoll_remove(struct ev_fd *fd)
{
	int ret;

	if (!fd->loop)
		return;

	ret = epoll_ctl(fd->loop->efd, EPOLL_CTL_DEL, fd->fd, NULL);
	if (ret)
		llog_warning(fd, "cannot remove fd %d from epoll set (%d): %m",
			     fd->fd, errno);
}
static int fd_epoll_update(struct ev_fd *fd)
{
	struct epoll_event ep;
	int ret;

	if (!fd->loop)
		return 0;

	memset(&ep, 0, sizeof(ep));
	if (fd->mask & EV_READABLE)
		ep.events |= EPOLLIN;
	if (fd->mask & EV_WRITEABLE)
		ep.events |= EPOLLOUT;
	ep.data.ptr = fd;

	ret = epoll_ctl(fd->loop->efd, EPOLL_CTL_MOD, fd->fd, &ep);
	if (ret) {
		llog_warning(fd, "cannot update epoll fd %d (%d): %m",
			     fd->fd, errno);
		return -EFAULT;
	}

	return 0;
}
/**
 * ev_fd_enable:
 * @fd: fd object
 *
 * This enables @fd. By default, every fd object is enabled. If you
 * disabled it, you can re-enable it with this call.
 *
 * Returns: 0 on success, otherwise negative error code
 */
int ev_fd_enable(struct ev_fd *fd)
{
	int ret;

	if (!fd)
		return -EINVAL;
	if (fd->enabled)
		return 0;

	ret = fd_epoll_add(fd);
	if (ret)
		return ret;

	fd->enabled = true;
	return 0;
}

/**
 * ev_fd_disable:
 * @fd: fd object
 *
 * Disables @fd. That means no more events are handled for @fd until you
 * re-enable it with ev_fd_enable().
 */
void ev_fd_disable(struct ev_fd *fd)
{
	if (!fd || !fd->enabled)
		return;

	fd->enabled = false;
	fd_epoll_remove(fd);
}
/**
 * ev_fd_is_enabled:
 * @fd: fd object
 *
 * Returns whether the fd object is enabled or disabled.
 *
 * Returns: true if @fd is enabled, otherwise false
 */
bool ev_fd_is_enabled(struct ev_fd *fd)
{
	return fd && fd->enabled;
}

/**
 * ev_fd_is_bound:
 * @fd: fd object
 *
 * Returns true if the fd object is bound to an event loop.
 *
 * Returns: true if @fd is bound, otherwise false
 */
bool ev_fd_is_bound(struct ev_fd *fd)
{
	return fd && fd->loop;
}
/**
 * ev_fd_set_cb_data:
 * @fd: fd object
 * @cb: New user callback
 * @data: New user data
 *
 * This changes the user callback and user data that were set in
 * ev_fd_new(). Both can be set to NULL. If @cb is NULL, then the callback
 * will not be called anymore.
 */
void ev_fd_set_cb_data(struct ev_fd *fd, ev_fd_cb cb, void *data)
{
	if (!fd)
		return;

	fd->cb = cb;
	fd->data = data;
}
/**
 * ev_fd_update:
 * @fd: fd object
 * @mask: Bitmask of %EV_READABLE and %EV_WRITEABLE
 *
 * This resets the event mask of @fd to @mask.
 *
 * Returns: 0 on success, otherwise negative error code
 */
int ev_fd_update(struct ev_fd *fd, int mask)
{
	int ret;

	if (!fd)
		return llog_EINVAL(fd);
	if (fd->mask == mask)
		return 0;

	fd->mask = mask;
	if (!fd->enabled)
		return 0;

	ret = fd_epoll_update(fd);
	if (ret)
		return ret;

	return 0;
}
/**
 * ev_eloop_new_fd:
 * @loop: Event loop
 * @out: Storage for result
 * @rfd: File descriptor
 * @mask: Bitmask of %EV_READABLE and %EV_WRITEABLE
 * @cb: User callback
 * @data: User data
 *
 * This creates a new fd object like ev_fd_new() and directly registers it
 * in the event loop @loop. See ev_fd_new() and ev_eloop_add_fd() for more
 * information.
 * The ref-count of @out is 1, so you must call ev_eloop_rm_fd() to destroy
 * the fd. You must not call ev_fd_unref() unless you called ev_fd_ref()
 * before.
 *
 * Returns: 0 on success, otherwise negative error code
 */
int ev_eloop_new_fd(struct ev_eloop *loop, struct ev_fd **out, int rfd,
		    int mask, ev_fd_cb cb, void *data)
{
	struct ev_fd *fd;
	int ret;

	if (!out || rfd < 0)
		return llog_EINVAL(loop);

	ret = ev_fd_new(&fd, rfd, mask, cb, data, loop->llog);
	if (ret)
		return ret;

	ret = ev_eloop_add_fd(loop, fd);
	if (ret) {
		ev_fd_unref(fd);
		return ret;
	}

	/* Drop our own reference; the eloop now holds the only one. */
	ev_fd_unref(fd);

	*out = fd;
	return 0;
}
/**
 * ev_eloop_add_fd:
 * @loop: Event loop
 * @fd: fd object
 *
 * Registers @fd in the event loop @loop. This increases the ref-count of
 * both @loop and @fd. From now on, the user callback of @fd may be called
 * during dispatching.
 *
 * Returns: 0 on success, otherwise negative error code
 */
int ev_eloop_add_fd(struct ev_eloop *loop, struct ev_fd *fd)
{
	int ret;

	if (!loop)
		return llog_EINVAL(loop);
	if (!fd || fd->loop)
		return llog_EINVAL(loop);

	fd->loop = loop;

	if (fd->enabled) {
		ret = fd_epoll_add(fd);
		if (ret) {
			fd->loop = NULL;
			return ret;
		}
	}

	ev_fd_ref(fd);
	ev_eloop_ref(loop);
	return 0;
}
1400 * Removes the fd object @fd from its event loop. If you did not call
1401 * ev_eloop_add_fd() before, this will do nothing.
1402 * This decreases the refcount of @fd and the event loop by 1.
1403 * It is safe to call this in any callback. This makes sure that the current
1404 * dispatcher will not get confused or read invalid memory.
1406 void ev_eloop_rm_fd(struct ev_fd *fd)
1408 struct ev_eloop *loop;
1411 if (!fd || !fd->loop)
1416 fd_epoll_remove(fd);
1419 * If we are currently dispatching events, we need to remove ourself
1420 * from the temporary event list.
1422 if (loop->dispatching) {
1423 for (i = 0; i < loop->cur_fds_cnt; ++i) {
1424 if (fd == loop->cur_fds[i].data.ptr)
1425 loop->cur_fds[i].data.ptr = NULL;
1431 ev_eloop_unref(loop);
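The epoll bookkeeping behind ev_eloop_add_fd() and ev_eloop_rm_fd() can be illustrated with plain system calls. This is a minimal, self-contained sketch (not part of this file; epoll_roundtrip() is a hypothetical name): an fd is registered together with a user pointer in data.ptr, and epoll_wait() hands the same pointer back. That is how the loop maps a ready event to its ev_fd source, and why ev_eloop_rm_fd() must NULL out matching data.ptr entries while a dispatch round is running.

```c
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Register an fd with a user pointer in data.ptr, wait for it to become
 * readable, and recover the pointer from the returned event. */
int epoll_roundtrip(void)
{
        int marker = 42;
        int ep = epoll_create1(EPOLL_CLOEXEC);
        int evt = eventfd(1, EFD_CLOEXEC);      /* counter is 1: readable */
        struct epoll_event reg = { .events = EPOLLIN, .data.ptr = &marker };
        struct epoll_event out;
        int result = -1;

        if (ep < 0 || evt < 0)
                return -1;
        epoll_ctl(ep, EPOLL_CTL_ADD, evt, &reg);
        /* One readable fd is pending, so this returns immediately. */
        if (epoll_wait(ep, &out, 1, 1000) == 1)
                result = *(int *)out.data.ptr;
        close(evt);
        close(ep);
        return result;
}
```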
 * Timer sources allow delaying a specific event by a relative timeout. The
 * timeout can be set to trigger after a specific time. Optionally, you can
 * also make the timeout trigger every time it elapses, so you basically get a
 * pulse that reliably calls the callback.
 * The callback receives the number of timeouts that elapsed since it was last
 * called (in case the application couldn't invoke the callback fast enough).
 * The timeout can be specified with nanosecond precision. However, the real
 * precision depends on the operating system and hardware.
static int timer_drain(struct ev_timer *timer, uint64_t *out)
        uint64_t expirations;
        len = read(timer->fd, &expirations, sizeof(expirations));
        if (errno == EAGAIN) {
                llog_warning(timer, "cannot read timerfd (%d): %m",
        } else if (len == 0) {
                llog_warning(timer, "EOF on timer source");
        } else if (len != sizeof(expirations)) {
                llog_warn(timer, "invalid size %zd read on timerfd", len);
static void timer_cb(struct ev_fd *fd, int mask, void *data)
        struct ev_timer *timer = data;
        uint64_t expirations;
        if (mask & (EV_HUP | EV_ERR)) {
                llog_warn(fd, "HUP/ERR on timer source");
        if (mask & EV_READABLE) {
                ret = timer_drain(timer, &expirations);
                if (expirations > 0) {
                        timer->cb(timer, expirations, timer->data);
        ev_timer_disable(timer);
        timer->cb(timer, 0, timer->data);
static const struct itimerspec ev_timer_zero;
 * @out: Timer pointer where to store the new timer
 * @cb: callback to use for this event source
 * @data: user-specified data
 * @log: logging function or NULL
 * This creates a new timer source. See "man timerfd_create" for information
 * on the @spec argument. The timer is always relative and uses the monotonic
 * kernel clock.
 * Returns: 0 on success, negative error on failure
int ev_timer_new(struct ev_timer **out, const struct itimerspec *spec,
                 ev_timer_cb cb, void *data, ev_log_t log)
        struct ev_timer *timer;
                return llog_dEINVAL(log);
                spec = &ev_timer_zero;
        timer = malloc(sizeof(*timer));
                return llog_dENOMEM(log);
        memset(timer, 0, sizeof(*timer));
        timer->fd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC | TFD_NONBLOCK);
        if (timer->fd < 0) {
                llog_error(timer, "cannot create timerfd (%d): %m", errno);
        ret = timerfd_settime(timer->fd, 0, spec, NULL);
                llog_warn(timer, "cannot set timerfd (%d): %m", errno);
        ret = ev_fd_new(&timer->efd, timer->fd, EV_READABLE, timer_cb, timer,
 * @timer: Timer object
 * Increase the reference count by 1.
void ev_timer_ref(struct ev_timer *timer)
                return llog_vEINVAL(timer);
 * @timer: Timer object
 * Decrease the reference count by 1 and destroy the timer if it drops to 0.
void ev_timer_unref(struct ev_timer *timer)
                return llog_vEINVAL(timer);
        ev_fd_unref(timer->efd);
 * @timer: Timer object
 * Enable the timer. This calls ev_fd_enable() on the fd that implements this
 * Returns: 0 on success, negative error code on failure
int ev_timer_enable(struct ev_timer *timer)
        return ev_fd_enable(timer->efd);
 * @timer: Timer object
 * Disable the timer. This calls ev_fd_disable() on the fd that implements this
void ev_timer_disable(struct ev_timer *timer)
        ev_fd_disable(timer->efd);
 * ev_timer_is_enabled:
 * @timer: Timer object
 * Checks whether the timer is enabled.
 * Returns: true if the timer is enabled, false otherwise
bool ev_timer_is_enabled(struct ev_timer *timer)
        return timer && ev_fd_is_enabled(timer->efd);
 * ev_timer_is_bound:
 * @timer: Timer object
 * Checks whether the timer is bound to an event loop.
 * Returns: true if the timer is bound, false otherwise.
bool ev_timer_is_bound(struct ev_timer *timer)
        return timer && ev_fd_is_bound(timer->efd);
 * ev_timer_set_cb_data:
 * @timer: Timer object
 * @cb: User callback or NULL
 * @data: User data or NULL
 * This changes the user-supplied callback and data that are used for this timer
void ev_timer_set_cb_data(struct ev_timer *timer, ev_timer_cb cb, void *data)
 * @timer: Timer object
 * This changes the timer's timespan. See "man timerfd_settime" for information
 * on the @spec parameter.
 * Returns: 0 on success, negative error code on failure.
int ev_timer_update(struct ev_timer *timer, const struct itimerspec *spec)
                spec = &ev_timer_zero;
        ret = timerfd_settime(timer->fd, 0, spec, NULL);
                llog_warn(timer, "cannot set timerfd (%d): %m", errno);
 * @timer: valid timer object
 * @expirations: destination to save the result, or NULL
 * This reads the current expiration count from the timer object @timer and
 * saves it in @expirations (if it is non-NULL). This can be used to clear the
 * timer after an idle period or similar.
 * Note that the timer_cb() callback function automatically calls this before
 * invoking the user-supplied callback.
 * Returns: 0 on success, negative error code on failure.
int ev_timer_drain(struct ev_timer *timer, uint64_t *expirations)
        return timer_drain(timer, expirations);
 * ev_eloop_new_timer:
 * @out: output where to store the new timer
 * @cb: user callback
 * @data: user-supplied data
 * This is a combination of ev_timer_new() and ev_eloop_add_timer(). See both
 * for more information.
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_new_timer(struct ev_eloop *loop, struct ev_timer **out,
                       const struct itimerspec *spec, ev_timer_cb cb,
        struct ev_timer *timer;
                return llog_EINVAL(loop);
        ret = ev_timer_new(&timer, spec, cb, data, loop->llog);
        ret = ev_eloop_add_timer(loop, timer);
                ev_timer_unref(timer);
        ev_timer_unref(timer);
 * ev_eloop_add_timer:
 * @timer: Timer source
 * This adds @timer as a source to @loop. @timer must currently be unbound,
 * otherwise this will fail with -EALREADY.
 * Returns: 0 on success, negative error code on failure
int ev_eloop_add_timer(struct ev_eloop *loop, struct ev_timer *timer)
                return llog_EINVAL(loop);
        if (ev_fd_is_bound(timer->efd))
        ret = ev_eloop_add_fd(loop, timer->efd);
        ev_timer_ref(timer);
 * ev_eloop_rm_timer:
 * @timer: Timer object
 * If @timer is currently bound to an event loop, this removes the binding
void ev_eloop_rm_timer(struct ev_timer *timer)
        if (!timer || !ev_fd_is_bound(timer->efd))
        ev_eloop_rm_fd(timer->efd);
        ev_timer_unref(timer);
 * Counter sources are a very basic event notification mechanism, based on the
 * eventfd() system call on Linux. Internally, there is a 64-bit unsigned
 * integer that can be increased by the caller. By default it is set to 0. If
 * it is non-zero, the eventfd is notified and the user-defined callback is
 * called. The callback gets the current state of the counter as argument, and
 * the counter is reset to 0.
 * If the internal counter would overflow, an increase fails silently, so an
 * overflow will never occur. However, you may lose events this way. This can
 * safely be ignored as long as you only increase the counter by small values.
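The eventfd semantics the counter source relies on can be demonstrated directly. This is a minimal, self-contained sketch (not part of this file; counter_roundtrip() is a hypothetical name): each write() adds to the internal 64-bit counter, and a read() returns the accumulated value and resets it to 0 — which is why the callback receives the total count and the counter restarts at zero.

```c
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* Increase an eventfd counter twice, then drain it with one read(). */
uint64_t counter_roundtrip(void)
{
        uint64_t add, val = 0;
        int fd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);

        if (fd < 0)
                return 0;
        add = 3;
        write(fd, &add, sizeof(add));   /* counter: 3 */
        add = 4;
        write(fd, &add, sizeof(add));   /* counter: 7 */
        /* read() returns the accumulated value and resets it to 0 */
        if (read(fd, &val, sizeof(val)) != sizeof(val))
                val = 0;
        close(fd);
        return val;
}
```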
static void counter_event(struct ev_fd *fd, int mask, void *data)
        struct ev_counter *cnt = data;
        if (mask & (EV_HUP | EV_ERR)) {
                llog_warning(fd, "HUP/ERR on eventfd");
                cnt->cb(cnt, 0, cnt->data);
        if (!(mask & EV_READABLE))
        ret = read(cnt->fd, &val, sizeof(val));
        if (errno != EAGAIN) {
                llog_warning(fd, "reading eventfd failed (%d): %m", errno);
                ev_counter_disable(cnt);
                cnt->cb(cnt, 0, cnt->data);
        } else if (ret == 0) {
                llog_warning(fd, "EOF on eventfd");
                ev_counter_disable(cnt);
                cnt->cb(cnt, 0, cnt->data);
        } else if (ret != sizeof(val)) {
                llog_warning(fd, "read %zd bytes instead of 8 on eventfd", ret);
                ev_counter_disable(cnt);
                cnt->cb(cnt, 0, cnt->data);
        } else if (val > 0) {
                cnt->cb(cnt, val, cnt->data);
 * @out: Where to store the new counter
 * @cb: user-supplied callback
 * @data: user-supplied data
 * @log: logging function or NULL
 * This creates a new counter object and stores it in @out.
 * Returns: 0 on success, negative error code on failure.
int ev_counter_new(struct ev_counter **out, ev_counter_cb cb, void *data,
        struct ev_counter *cnt;
                return llog_dEINVAL(log);
        cnt = malloc(sizeof(*cnt));
                return llog_dENOMEM(log);
        memset(cnt, 0, sizeof(*cnt));
        cnt->fd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
                llog_error(cnt, "cannot create eventfd (%d): %m", errno);
        ret = ev_fd_new(&cnt->efd, cnt->fd, EV_READABLE, counter_event, cnt,
 * @cnt: Counter object
 * This increases the reference count of @cnt by 1.
void ev_counter_ref(struct ev_counter *cnt)
                return llog_vEINVAL(cnt);
 * @cnt: Counter object
 * This decreases the reference count of @cnt by 1 and destroys the object if
void ev_counter_unref(struct ev_counter *cnt)
                return llog_vEINVAL(cnt);
        ev_fd_unref(cnt->efd);
 * ev_counter_enable:
 * @cnt: Counter object
 * This enables the counter object. It calls ev_fd_enable() on the underlying
 * Returns: 0 on success, negative error code on failure
int ev_counter_enable(struct ev_counter *cnt)
        return ev_fd_enable(cnt->efd);
 * ev_counter_disable:
 * @cnt: Counter object
 * This disables the counter. It calls ev_fd_disable() on the underlying
void ev_counter_disable(struct ev_counter *cnt)
        ev_fd_disable(cnt->efd);
 * ev_counter_is_enabled:
 * @cnt: counter object
 * Checks whether the counter is enabled.
 * Returns: true if the counter is enabled, false otherwise.
bool ev_counter_is_enabled(struct ev_counter *cnt)
        return cnt && ev_fd_is_enabled(cnt->efd);
 * ev_counter_is_bound:
 * @cnt: Counter object
 * Checks whether the counter is bound to an event loop.
 * Returns: true if the counter is bound, false otherwise.
bool ev_counter_is_bound(struct ev_counter *cnt)
        return cnt && ev_fd_is_bound(cnt->efd);
 * ev_counter_set_cb_data:
 * @cnt: Counter object
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This changes the user-supplied callback and data for the given counter
void ev_counter_set_cb_data(struct ev_counter *cnt, ev_counter_cb cb,
 * @cnt: Counter object
 * @val: Counter increase amount
 * This increases the counter @cnt by @val.
 * Returns: 0 on success, negative error code on failure.
int ev_counter_inc(struct ev_counter *cnt, uint64_t val)
        return write_eventfd(cnt->llog, cnt->fd, val);
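The silent-failure behavior on overflow mentioned in the section introduction can be reproduced with plain eventfd calls. This is a minimal, self-contained sketch (not part of this file; counter_overflow_is_eagain() is a hypothetical name): the eventfd counter saturates at 0xfffffffffffffffe, and a nonblocking write() that would push it past that maximum fails with EAGAIN instead of blocking — which is why an increase can be dropped and events can get lost.

```c
#include <errno.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* Fill the counter to its maximum, then show the next increase fails. */
int counter_overflow_is_eagain(void)
{
        uint64_t max = 0xfffffffffffffffeULL, one = 1;
        int fd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
        int hit = 0;

        if (fd < 0)
                return 0;
        write(fd, &max, sizeof(max));   /* counter is now at its maximum */
        if (write(fd, &one, sizeof(one)) < 0 && errno == EAGAIN)
                hit = 1;                /* the increase was dropped */
        close(fd);
        return hit;
}
```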
 * ev_eloop_new_counter:
 * @eloop: event loop
 * @out: output storage for the new counter
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This combines ev_counter_new() and ev_eloop_add_counter() in one call.
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_new_counter(struct ev_eloop *eloop, struct ev_counter **out,
                         ev_counter_cb cb, void *data)
        struct ev_counter *cnt;
                return llog_EINVAL(eloop);
        ret = ev_counter_new(&cnt, cb, data, eloop->llog);
        ret = ev_eloop_add_counter(eloop, cnt);
                ev_counter_unref(cnt);
        ev_counter_unref(cnt);
 * ev_eloop_add_counter:
 * @eloop: Event loop
 * @cnt: Counter object
 * This adds @cnt to the given event loop @eloop. If @cnt is already bound,
 * this will fail with -EALREADY.
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_add_counter(struct ev_eloop *eloop, struct ev_counter *cnt)
                return llog_EINVAL(eloop);
        if (ev_fd_is_bound(cnt->efd))
        ret = ev_eloop_add_fd(eloop, cnt->efd);
        ev_counter_ref(cnt);
 * ev_eloop_rm_counter:
 * @cnt: Counter object
 * If @cnt is bound to an event loop, this removes the binding again.
void ev_eloop_rm_counter(struct ev_counter *cnt)
        if (!cnt || !ev_fd_is_bound(cnt->efd))
        ev_eloop_rm_fd(cnt->efd);
        ev_counter_unref(cnt);
 * This allows registering for shared signal events. See the description of the
 * shared signal object above for more information on how this works. Also see
 * the eloop description for some drawbacks when nesting eloop objects with the
 * same shared signal sources.
 * ev_eloop_register_signal_cb:
 * @signum: Signal number
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This registers a new callback for the given signal @signum. @cb must not be
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_register_signal_cb(struct ev_eloop *loop, int signum,
                                ev_signal_shared_cb cb, void *data)
        struct ev_signal_shared *sig = NULL;
        struct shl_dlist *iter;
        if (signum < 0 || !cb)
                return llog_EINVAL(loop);
        shl_dlist_for_each(iter, &loop->sig_list) {
                sig = shl_dlist_entry(iter, struct ev_signal_shared, list);
                if (sig->signum == signum)
        ret = signal_new(&sig, loop, signum);
        return shl_hook_add_cast(sig->hook, cb, data);
 * ev_eloop_unregister_signal_cb:
 * @signum: signal number
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This removes a previously registered signal callback. The arguments must be
 * the same as for the ev_eloop_register_signal_cb() call. If multiple
 * callbacks with the same arguments are registered, then only one callback is
 * removed. It doesn't matter which one, as they are identical.
void ev_eloop_unregister_signal_cb(struct ev_eloop *loop, int signum,
                                   ev_signal_shared_cb cb, void *data)
        struct ev_signal_shared *sig;
        struct shl_dlist *iter;
        shl_dlist_for_each(iter, &loop->sig_list) {
                sig = shl_dlist_entry(iter, struct ev_signal_shared, list);
                if (sig->signum == signum) {
                        shl_hook_rm_cast(sig->hook, cb, data);
                        if (!shl_hook_num(sig->hook))
 * Idle sources are called every time a new dispatch round is started.
 * That means, as long as at least one idle source is registered, the thread
 * will _never_ go to sleep. So please unregister your idle source if no longer
 * ev_eloop_register_idle_cb:
 * @eloop: event loop
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This registers a new idle source with the given callback and data. @cb must
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_register_idle_cb(struct ev_eloop *eloop, ev_idle_cb cb,
        ret = shl_hook_add_cast(eloop->idlers, cb, data);
        ret = write_eventfd(eloop->llog, eloop->idle_fd, 1);
                llog_warning(eloop, "cannot increase eloop idle-counter");
                shl_hook_rm_cast(eloop->idlers, cb, data);
 * ev_eloop_unregister_idle_cb:
 * @eloop: event loop
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This removes an idle source. The arguments must be the same as for the
 * ev_eloop_register_idle_cb() call. If two identical callbacks are registered,
 * then only one is removed. It doesn't matter which one, because they are
 * identical.
void ev_eloop_unregister_idle_cb(struct ev_eloop *eloop, ev_idle_cb cb,
        shl_hook_rm_cast(eloop->idlers, cb, data);
 * Pre-Dispatch Callbacks
 * A pre-dispatch callback is called before a single dispatch round is started.
 * You should avoid using them; instead of relying on any specific dispatch
 * behavior, expect every event to be received asynchronously.
 * However, this hook is useful to integrate other, limited APIs into this
 * event loop if they do not provide proper fd abstractions.
 * ev_eloop_register_pre_cb:
 * @eloop: event loop
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This registers a new pre-cb with the given callback and data. @cb must
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_register_pre_cb(struct ev_eloop *eloop, ev_idle_cb cb,
        return shl_hook_add_cast(eloop->pres, cb, data);
 * ev_eloop_unregister_pre_cb:
 * @eloop: event loop
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This removes a pre-cb. The arguments must be the same as for the
 * ev_eloop_register_pre_cb() call. If two identical callbacks are registered,
 * then only one is removed. It doesn't matter which one, because they are
 * identical.
void ev_eloop_unregister_pre_cb(struct ev_eloop *eloop, ev_idle_cb cb,
        shl_hook_rm_cast(eloop->pres, cb, data);
 * Post-Dispatch Callbacks
 * A post-dispatch callback is called whenever a single dispatch round is
 * complete. You should avoid using them; instead of relying on any specific
 * dispatch behavior, expect every event to be received asynchronously.
 * However, this hook is useful to integrate other, limited APIs into this
 * event loop if they do not provide proper fd abstractions.
 * ev_eloop_register_post_cb:
 * @eloop: event loop
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This registers a new post-cb with the given callback and data. @cb must
 * Returns: 0 on success, negative error code on failure.
int ev_eloop_register_post_cb(struct ev_eloop *eloop, ev_idle_cb cb,
        return shl_hook_add_cast(eloop->posts, cb, data);
 * ev_eloop_unregister_post_cb:
 * @eloop: event loop
 * @cb: user-supplied callback
 * @data: user-supplied data
 * This removes a post-cb. The arguments must be the same as for the
 * ev_eloop_register_post_cb() call. If two identical callbacks are registered,
 * then only one is removed. It doesn't matter which one, because they are
 * identical.
void ev_eloop_unregister_post_cb(struct ev_eloop *eloop, ev_idle_cb cb,
        shl_hook_rm_cast(eloop->posts, cb, data);