4 * Provide support for fcntl()'s F_GETLK, F_SETLK, and F_SETLKW calls.
5 * Doug Evans (dje@spiff.uucp), August 07, 1992
7 * Deadlock detection added.
8 * FIXME: one thing isn't handled yet:
9 * - mandatory locks (requires lots of changes elsewhere)
10 * Kelly Carmichael (kelly@[142.24.8.65]), September 17, 1994.
12 * Miscellaneous edits, and a total rewrite of posix_lock_file() code.
13 * Kai Petzke (wpp@marie.physik.tu-berlin.de), 1994
15 * Converted file_lock_table to a linked list from an array, which eliminates
16 * the limits on how many active file locks are open.
17 * Chad Page (pageone@netcom.com), November 27, 1994
19 * Removed dependency on file descriptors. dup()'ed file descriptors now
20 * get the same locks as the original file descriptors, and a close() on
21 * any file descriptor removes ALL the locks on the file for the current
22 * process. Since locks still depend on the process id, locks are inherited
23 * after an exec() but not after a fork(). This agrees with POSIX, and both
24 * BSD and SVR4 practice.
25 * Andy Walker (andy@lysaker.kvaerner.no), February 14, 1995
27 * Scrapped free list which is redundant now that we allocate locks
28 * dynamically with kmalloc()/kfree().
29 * Andy Walker (andy@lysaker.kvaerner.no), February 21, 1995
31 * Implemented two lock personalities - FL_FLOCK and FL_POSIX.
33 * FL_POSIX locks are created with calls to fcntl() and lockf() through the
34 * fcntl() system call. They have the semantics described above.
36 * FL_FLOCK locks are created with calls to flock(), through the flock()
37 * system call, which is new. Old C libraries implement flock() via fcntl()
38 * and will continue to use the old, broken implementation.
40 * FL_FLOCK locks follow the 4.4 BSD flock() semantics. They are associated
41 * with a file pointer (filp). As a result they can be shared by a parent
42 * process and its children after a fork(). They are removed when the last
43 * file descriptor referring to the file pointer is closed (unless explicitly
46 * FL_FLOCK locks never deadlock; an existing lock is always removed before
47 * upgrading from shared to exclusive (or vice versa). When this happens
48 * any processes blocked by the current lock are woken up and allowed to
49 * run before the new lock is applied.
50 * Andy Walker (andy@lysaker.kvaerner.no), June 09, 1995
52 * Removed some race conditions in flock_lock_file(), marked other possible
53 * races. Just grep for FIXME to see them.
54 * Dmitry Gorodchanin (pgmdsg@ibi.com), February 09, 1996.
56 * Addressed Dmitry's concerns. Deadlock checking no longer recursive.
57 * Lock allocation changed to GFP_ATOMIC as we can't afford to sleep
58 * once we've checked for blocking and deadlocking.
59 * Andy Walker (andy@lysaker.kvaerner.no), April 03, 1996.
61 * Initial implementation of mandatory locks. SunOS turned out to be
62 * a rotten model, so I implemented the "obvious" semantics.
63 * See 'Documentation/filesystems/mandatory-locking.txt' for details.
64 * Andy Walker (andy@lysaker.kvaerner.no), April 06, 1996.
66 * Don't allow mandatory locks on mmap()'ed files. Added simple functions to
67 * check if a file has mandatory locks, used by mmap(), open() and creat() to
68 * see if the system call should be rejected. Ref. HP-UX/SunOS/Solaris Reference
70 * Andy Walker (andy@lysaker.kvaerner.no), April 09, 1996.
72 * Tidied up block list handling. Added '/proc/locks' interface.
73 * Andy Walker (andy@lysaker.kvaerner.no), April 24, 1996.
75 * Fixed deadlock condition for pathological code that mixes calls to
76 * flock() and fcntl().
77 * Andy Walker (andy@lysaker.kvaerner.no), April 29, 1996.
79 * Allow only one type of locking scheme (FL_POSIX or FL_FLOCK) to be in use
80 * for a given file at a time. Changed the CONFIG_LOCK_MANDATORY scheme to
81 * guarantee sensible behaviour in the case where file system modules might
82 * be compiled with different options than the kernel itself.
83 * Andy Walker (andy@lysaker.kvaerner.no), May 15, 1996.
85 * Added a couple of missing wake_up() calls. Thanks to Thomas Meckel
86 * (Thomas.Meckel@mni.fh-giessen.de) for spotting this.
87 * Andy Walker (andy@lysaker.kvaerner.no), May 15, 1996.
89 * Changed FL_POSIX locks to use the block list in the same way as FL_FLOCK
90 * locks. Changed process synchronisation to avoid dereferencing locks that
91 * have already been freed.
92 * Andy Walker (andy@lysaker.kvaerner.no), Sep 21, 1996.
94 * Made the block list a circular list to minimise searching in the list.
95 * Andy Walker (andy@lysaker.kvaerner.no), Sep 25, 1996.
97 * Made mandatory locking a mount option. Default is not to allow mandatory
99 * Andy Walker (andy@lysaker.kvaerner.no), Oct 04, 1996.
101 * Some adaptations for NFS support.
102 * Olaf Kirch (okir@monad.swb.de), Dec 1996,
104 * Fixed /proc/locks interface so that we can't overrun the buffer we are handed.
105 * Andy Walker (andy@lysaker.kvaerner.no), May 12, 1997.
107 * Use slab allocator instead of kmalloc/kfree.
108 * Use generic list implementation from <linux/list.h>.
109 * Sped up posix_locks_deadlock by only considering blocked locks.
110 * Matthew Wilcox <willy@debian.org>, March, 2000.
112 * Leases and LOCK_MAND
113 * Matthew Wilcox <willy@debian.org>, June, 2000.
114 * Stephen Rothwell <sfr@canb.auug.org.au>, June, 2000.
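 *
 *  To make the two personalities concrete, a minimal userspace sketch
 *  (the file name and the missing error handling are illustrative only):
 *
 *	#include <fcntl.h>
 *	#include <sys/file.h>
 *	#include <unistd.h>
 *
 *	int main(void)
 *	{
 *		int fd = open("/tmp/demo", O_RDWR | O_CREAT, 0644);
 *
 *		struct flock fl = {
 *			.l_type   = F_WRLCK,	// FL_POSIX: byte-range lock...
 *			.l_whence = SEEK_SET,
 *			.l_start  = 0,
 *			.l_len    = 100,	// ...covering bytes 0..99
 *		};
 *		fcntl(fd, F_SETLK, &fl);	// non-blocking request
 *
 *		flock(fd, LOCK_SH);		// FL_FLOCK: whole-file lock
 *
 *		close(fd);			// releases both locks here
 *		return 0;
 *	}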
117 #include <linux/capability.h>
118 #include <linux/file.h>
119 #include <linux/fdtable.h>
120 #include <linux/fs.h>
121 #include <linux/init.h>
122 #include <linux/module.h>
123 #include <linux/security.h>
124 #include <linux/slab.h>
125 #include <linux/syscalls.h>
126 #include <linux/time.h>
127 #include <linux/rcupdate.h>
128 #include <linux/pid_namespace.h>
129 #include <linux/hashtable.h>
131 #include <asm/uaccess.h>
133 #define IS_POSIX(fl) (fl->fl_flags & FL_POSIX)
134 #define IS_FLOCK(fl) (fl->fl_flags & FL_FLOCK)
135 #define IS_LEASE(fl) (fl->fl_flags & FL_LEASE)
137 static bool lease_breaking(struct file_lock *fl)
139 return fl->fl_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
142 static int target_leasetype(struct file_lock *fl)
144 if (fl->fl_flags & FL_UNLOCK_PENDING)
146 if (fl->fl_flags & FL_DOWNGRADE_PENDING)
151 int leases_enable = 1;
152 int lease_break_time = 45;
154 #define for_each_lock(inode, lockp) \
155 for (lockp = &inode->i_flock; *lockp != NULL; lockp = &(*lockp)->fl_next)
158 * The global file_lock_list is only used for displaying /proc/locks. Protected
159 * by the file_lock_lock.
161 static HLIST_HEAD(file_lock_list);
164 * The blocked_hash is used to find POSIX lock loops for deadlock detection.
165 * It is protected by file_lock_lock.
167 * We hash locks by lockowner in order to optimize searching for the lock a
168 * particular lockowner is waiting on.
170 * FIXME: make this value scale via some heuristic? We generally will want more
171 * buckets when we have more lockowners holding locks, but that's a little
172 * difficult to determine without knowing what the workload will look like.
174 #define BLOCKED_HASH_BITS 7
175 static DEFINE_HASHTABLE(blocked_hash, BLOCKED_HASH_BITS);
178 * This lock protects the blocked_hash and the file_lock_list. Generally, if
179 * you're accessing one of those lists, you want to be holding this lock.
181 * In addition, it also protects the fl->fl_block list, and the fl->fl_next
182 * pointer for file_lock structures that are acting as lock requests (in
183 * contrast to those that are acting as records of acquired locks).
185 * Note that when we acquire this lock in order to change the above fields,
186 * we often hold the i_lock as well. In certain cases, when reading the fields
187 * protected by this lock, we can skip acquiring it iff we already hold the
190 * In particular, adding an entry to the fl_block list requires that you hold
191 * both the i_lock and the file_lock_lock (acquired in that order). Deleting
192 * an entry from the list, however, only requires the file_lock_lock.
194 static DEFINE_SPINLOCK(file_lock_lock);
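/*
 * A schematic of the ordering described above, as it appears in callers in
 * this file (e.g. __posix_lock_file() below); shown only as an illustration
 * of the rule, not as an extra helper:
 *
 *	spin_lock(&inode->i_lock);		// per-inode lock first
 *	...
 *	spin_lock(&file_lock_lock);		// then the global lock
 *	__locks_insert_block(blocker, waiter);	// needs both locks held
 *	spin_unlock(&file_lock_lock);
 *	...
 *	spin_unlock(&inode->i_lock);
 */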
196 static struct kmem_cache *filelock_cache __read_mostly;
198 static void locks_init_lock_heads(struct file_lock *fl)
200 INIT_HLIST_NODE(&fl->fl_link);
201 INIT_LIST_HEAD(&fl->fl_block);
202 init_waitqueue_head(&fl->fl_wait);
205 /* Allocate an empty lock structure. */
206 struct file_lock *locks_alloc_lock(void)
208 struct file_lock *fl = kmem_cache_zalloc(filelock_cache, GFP_KERNEL);
211 locks_init_lock_heads(fl);
215 EXPORT_SYMBOL_GPL(locks_alloc_lock);
217 void locks_release_private(struct file_lock *fl)
220 if (fl->fl_ops->fl_release_private)
221 fl->fl_ops->fl_release_private(fl);
227 EXPORT_SYMBOL_GPL(locks_release_private);
229 /* Free a lock which is not in use. */
230 void locks_free_lock(struct file_lock *fl)
232 BUG_ON(waitqueue_active(&fl->fl_wait));
233 BUG_ON(!list_empty(&fl->fl_block));
234 BUG_ON(!hlist_unhashed(&fl->fl_link));
236 locks_release_private(fl);
237 kmem_cache_free(filelock_cache, fl);
239 EXPORT_SYMBOL(locks_free_lock);
241 void locks_init_lock(struct file_lock *fl)
243 memset(fl, 0, sizeof(struct file_lock));
244 locks_init_lock_heads(fl);
247 EXPORT_SYMBOL(locks_init_lock);
249 static void locks_copy_private(struct file_lock *new, struct file_lock *fl)
252 if (fl->fl_ops->fl_copy_lock)
253 fl->fl_ops->fl_copy_lock(new, fl);
254 new->fl_ops = fl->fl_ops;
257 new->fl_lmops = fl->fl_lmops;
261 * Initialize a new lock from an existing file_lock structure.
263 void __locks_copy_lock(struct file_lock *new, const struct file_lock *fl)
265 new->fl_owner = fl->fl_owner;
266 new->fl_pid = fl->fl_pid;
268 new->fl_flags = fl->fl_flags;
269 new->fl_type = fl->fl_type;
270 new->fl_start = fl->fl_start;
271 new->fl_end = fl->fl_end;
273 new->fl_lmops = NULL;
275 EXPORT_SYMBOL(__locks_copy_lock);
277 void locks_copy_lock(struct file_lock *new, struct file_lock *fl)
279 locks_release_private(new);
281 __locks_copy_lock(new, fl);
282 new->fl_file = fl->fl_file;
283 new->fl_ops = fl->fl_ops;
284 new->fl_lmops = fl->fl_lmops;
286 locks_copy_private(new, fl);
289 EXPORT_SYMBOL(locks_copy_lock);
291 static inline int flock_translate_cmd(int cmd) {
293 return cmd & (LOCK_MAND | LOCK_RW);
305 /* Fill in a file_lock structure with an appropriate FLOCK lock. */
306 static int flock_make_lock(struct file *filp, struct file_lock **lock,
309 struct file_lock *fl;
310 int type = flock_translate_cmd(cmd);
314 fl = locks_alloc_lock();
319 fl->fl_pid = current->tgid;
320 fl->fl_flags = FL_FLOCK;
322 fl->fl_end = OFFSET_MAX;
328 static int assign_type(struct file_lock *fl, long type)
342 /* Verify a "struct flock" and copy it to a "struct file_lock" as a POSIX
345 static int flock_to_posix_lock(struct file *filp, struct file_lock *fl,
350 switch (l->l_whence) {
358 start = i_size_read(file_inode(filp));
364 /* POSIX-1996 leaves the case l->l_len < 0 undefined;
365 POSIX-2001 defines it. */
369 fl->fl_end = OFFSET_MAX;
371 end = start + l->l_len - 1;
373 } else if (l->l_len < 0) {
380 fl->fl_start = start; /* we record the absolute position */
381 if (fl->fl_end < fl->fl_start)
384 fl->fl_owner = current->files;
385 fl->fl_pid = current->tgid;
387 fl->fl_flags = FL_POSIX;
391 return assign_type(fl, l->l_type);
394 #if BITS_PER_LONG == 32
395 static int flock64_to_posix_lock(struct file *filp, struct file_lock *fl,
400 switch (l->l_whence) {
408 start = i_size_read(file_inode(filp));
417 fl->fl_end = OFFSET_MAX;
419 fl->fl_end = start + l->l_len - 1;
420 } else if (l->l_len < 0) {
421 fl->fl_end = start - 1;
426 fl->fl_start = start; /* we record the absolute position */
427 if (fl->fl_end < fl->fl_start)
430 fl->fl_owner = current->files;
431 fl->fl_pid = current->tgid;
433 fl->fl_flags = FL_POSIX;
437 return assign_type(fl, l->l_type);
441 /* default lease lock manager operations */
442 static void lease_break_callback(struct file_lock *fl)
444 kill_fasync(&fl->fl_fasync, SIGIO, POLL_MSG);
447 static const struct lock_manager_operations lease_manager_ops = {
448 .lm_break = lease_break_callback,
449 .lm_change = lease_modify,
453 * Initialize a lease, use the default lock manager operations
455 static int lease_init(struct file *filp, long type, struct file_lock *fl)
457 if (assign_type(fl, type) != 0)
460 fl->fl_owner = current->files;
461 fl->fl_pid = current->tgid;
464 fl->fl_flags = FL_LEASE;
466 fl->fl_end = OFFSET_MAX;
468 fl->fl_lmops = &lease_manager_ops;
472 /* Allocate a file_lock initialised to this type of lease */
473 static struct file_lock *lease_alloc(struct file *filp, long type)
475 struct file_lock *fl = locks_alloc_lock();
479 return ERR_PTR(error);
481 error = lease_init(filp, type, fl);
484 return ERR_PTR(error);
489 /* Check if two locks overlap each other.
491 static inline int locks_overlap(struct file_lock *fl1, struct file_lock *fl2)
493 return ((fl1->fl_end >= fl2->fl_start) &&
494 (fl2->fl_end >= fl1->fl_start));
498 * Check whether two locks have the same owner.
500 static int posix_same_owner(struct file_lock *fl1, struct file_lock *fl2)
502 if (fl1->fl_lmops && fl1->fl_lmops->lm_compare_owner)
503 return fl2->fl_lmops == fl1->fl_lmops &&
504 fl1->fl_lmops->lm_compare_owner(fl1, fl2);
505 return fl1->fl_owner == fl2->fl_owner;
509 locks_insert_global_locks(struct file_lock *fl)
511 spin_lock(&file_lock_lock);
512 hlist_add_head(&fl->fl_link, &file_lock_list);
513 spin_unlock(&file_lock_lock);
517 locks_delete_global_locks(struct file_lock *fl)
519 spin_lock(&file_lock_lock);
520 hlist_del_init(&fl->fl_link);
521 spin_unlock(&file_lock_lock);
525 locks_insert_global_blocked(struct file_lock *waiter)
527 hash_add(blocked_hash, &waiter->fl_link, (unsigned long)waiter->fl_owner);
531 locks_delete_global_blocked(struct file_lock *waiter)
533 hash_del(&waiter->fl_link);
536 /* Remove waiter from blocker's block list.
537 * When blocker ends up pointing to itself then the list is empty.
539 * Must be called with file_lock_lock held.
541 static void __locks_delete_block(struct file_lock *waiter)
543 locks_delete_global_blocked(waiter);
544 list_del_init(&waiter->fl_block);
545 waiter->fl_next = NULL;
548 static void locks_delete_block(struct file_lock *waiter)
550 spin_lock(&file_lock_lock);
551 __locks_delete_block(waiter);
552 spin_unlock(&file_lock_lock);
555 /* Insert waiter into blocker's block list.
556 * We use a circular list so that processes can be easily woken up in
557 * the order they blocked. The documentation doesn't require this but
558 * it seems like the reasonable thing to do.
560 * Must be called with both the i_lock and file_lock_lock held. The fl_block
561 * list itself is protected by the file_lock_lock, but by ensuring that the
562 * i_lock is also held on insertions we can avoid taking the file_lock_lock
563 * in some cases when we see that the fl_block list is empty.
565 static void __locks_insert_block(struct file_lock *blocker,
566 struct file_lock *waiter)
568 BUG_ON(!list_empty(&waiter->fl_block));
569 waiter->fl_next = blocker;
570 list_add_tail(&waiter->fl_block, &blocker->fl_block);
571 if (IS_POSIX(blocker))
572 locks_insert_global_blocked(waiter);
575 /* Must be called with i_lock held. */
576 static void locks_insert_block(struct file_lock *blocker,
577 struct file_lock *waiter)
579 spin_lock(&file_lock_lock);
580 __locks_insert_block(blocker, waiter);
581 spin_unlock(&file_lock_lock);
585 * Wake up processes blocked waiting for blocker.
587 * Must be called with the inode->i_lock held!
589 static void locks_wake_up_blocks(struct file_lock *blocker)
592 * Avoid taking global lock if list is empty. This is safe since new
593 * blocked requests are only added to the list under the i_lock, and
594 * the i_lock is always held here. Note that removal from the fl_block
595 * list does not require the i_lock, so we must recheck list_empty()
596 * after acquiring the file_lock_lock.
598 if (list_empty(&blocker->fl_block))
601 spin_lock(&file_lock_lock);
602 while (!list_empty(&blocker->fl_block)) {
603 struct file_lock *waiter;
605 waiter = list_first_entry(&blocker->fl_block,
606 struct file_lock, fl_block);
607 __locks_delete_block(waiter);
608 if (waiter->fl_lmops && waiter->fl_lmops->lm_notify)
609 waiter->fl_lmops->lm_notify(waiter);
611 wake_up(&waiter->fl_wait);
613 spin_unlock(&file_lock_lock);
616 /* Insert file lock fl into an inode's lock list at the position indicated
617 * by pos. At the same time add the lock to the global file lock list.
619 * Must be called with the i_lock held!
621 static void locks_insert_lock(struct file_lock **pos, struct file_lock *fl)
623 fl->fl_nspid = get_pid(task_tgid(current));
625 /* insert into file's list */
629 locks_insert_global_locks(fl);
633 * Delete a lock and then free it.
634 * Wake up processes that are blocked waiting for this lock,
635 * notify the FS that the lock has been cleared and
636 * finally free the lock.
638 * Must be called with the i_lock held!
640 static void locks_delete_lock(struct file_lock **thisfl_p)
642 struct file_lock *fl = *thisfl_p;
644 locks_delete_global_locks(fl);
646 *thisfl_p = fl->fl_next;
650 put_pid(fl->fl_nspid);
654 locks_wake_up_blocks(fl);
658 /* Determine if lock sys_fl blocks lock caller_fl. Common functionality
659 * checks for shared/exclusive status of overlapping locks.
661 static int locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
663 if (sys_fl->fl_type == F_WRLCK)
665 if (caller_fl->fl_type == F_WRLCK)
670 /* Determine if lock sys_fl blocks lock caller_fl. POSIX specific
671 * checking before calling locks_conflict().
673 static int posix_locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
675 /* POSIX locks owned by the same process do not conflict with
678 if (!IS_POSIX(sys_fl) || posix_same_owner(caller_fl, sys_fl))
681 /* Check whether they overlap */
682 if (!locks_overlap(caller_fl, sys_fl))
685 return (locks_conflict(caller_fl, sys_fl));
688 /* Determine if lock sys_fl blocks lock caller_fl. FLOCK specific
689 * checking before calling locks_conflict().
691 static int flock_locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
693 /* FLOCK locks referring to the same filp do not conflict with
696 if (!IS_FLOCK(sys_fl) || (caller_fl->fl_file == sys_fl->fl_file))
698 if ((caller_fl->fl_type & LOCK_MAND) || (sys_fl->fl_type & LOCK_MAND))
701 return (locks_conflict(caller_fl, sys_fl));
705 posix_test_lock(struct file *filp, struct file_lock *fl)
707 struct file_lock *cfl;
708 struct inode *inode = file_inode(filp);
710 spin_lock(&inode->i_lock);
711 for (cfl = file_inode(filp)->i_flock; cfl; cfl = cfl->fl_next) {
714 if (posix_locks_conflict(fl, cfl))
718 __locks_copy_lock(fl, cfl);
720 fl->fl_pid = pid_vnr(cfl->fl_nspid);
722 fl->fl_type = F_UNLCK;
723 spin_unlock(&inode->i_lock);
726 EXPORT_SYMBOL(posix_test_lock);
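/*
 * posix_test_lock() is what backs fcntl(F_GETLK); from userspace the probe
 * looks roughly like this (the hypothetical handle_conflict() and the
 * descriptor are illustrative only):
 *
 *	struct flock probe = {
 *		.l_type = F_WRLCK, .l_whence = SEEK_SET,
 *		.l_start = 0, .l_len = 0,	// l_len == 0 means "to EOF"
 *	};
 *
 *	if (fcntl(fd, F_GETLK, &probe) == 0 && probe.l_type != F_UNLCK)
 *		handle_conflict(probe.l_pid);	// a conflicting lock exists;
 *						// l_pid names its holder
 */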
729 * Deadlock detection:
731 * We attempt to detect deadlocks that are due purely to posix file
734 * We assume that a task can be waiting for at most one lock at a time.
735 * So for any acquired lock, the process holding that lock may be
736 * waiting on at most one other lock. That lock in turn may be held by
737 * someone waiting for at most one other lock. Given a requested lock
738 * caller_fl which is about to wait for a conflicting lock block_fl, we
739 * follow this chain of waiters to ensure we are not about to create a
742 * Since we do this before we ever put a process to sleep on a lock, we
743 * are ensured that there is never a cycle; that is what guarantees that
744 * the while() loop in posix_locks_deadlock() eventually completes.
746 * Note: the above assumption may not be true when handling lock
747 * requests from a broken NFS client. It may also fail in the presence
748 * of tasks (such as posix threads) sharing the same open file table.
750 * To handle those cases, we just bail out after a few iterations.
753 #define MAX_DEADLK_ITERATIONS 10
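/*
 * A worked example of the cycle detection above, as two hypothetical
 * processes would see it from userspace (the ranges, the file and the
 * lock_range() shorthand are illustrative only):
 *
 *	// process A				// process B
 *	lock_range(fd, 0, 100, F_SETLK);	lock_range(fd, 100, 100, F_SETLK);
 *	lock_range(fd, 100, 100, F_SETLKW);	lock_range(fd, 0, 100, F_SETLKW);
 *	// A now sleeps waiting for B		// would complete a cycle:
 *						// fcntl() fails with EDEADLK
 *
 * where lock_range(fd, start, len, cmd) stands for filling a struct flock
 * with l_type = F_WRLCK, l_whence = SEEK_SET and the given l_start/l_len,
 * then calling fcntl(fd, cmd, &fl).
 */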
755 /* Find a lock that the owner of the given block_fl is blocking on. */
756 static struct file_lock *what_owner_is_waiting_for(struct file_lock *block_fl)
758 struct file_lock *fl;
760 hash_for_each_possible(blocked_hash, fl, fl_link, (unsigned long)block_fl->fl_owner) {
761 if (posix_same_owner(fl, block_fl))
767 /* Must be called with the file_lock_lock held! */
768 static int posix_locks_deadlock(struct file_lock *caller_fl,
769 struct file_lock *block_fl)
773 while ((block_fl = what_owner_is_waiting_for(block_fl))) {
774 if (i++ > MAX_DEADLK_ITERATIONS)
776 if (posix_same_owner(caller_fl, block_fl))
782 /* Try to create a FLOCK lock on filp. We always insert new FLOCK locks
783 * after any leases, but before any posix locks.
785 * Note that if called with an FL_EXISTS argument, the caller may determine
786 * whether or not a lock was successfully freed by testing the return
789 static int flock_lock_file(struct file *filp, struct file_lock *request)
791 struct file_lock *new_fl = NULL;
792 struct file_lock **before;
793 struct inode * inode = file_inode(filp);
797 if (!(request->fl_flags & FL_ACCESS) && (request->fl_type != F_UNLCK)) {
798 new_fl = locks_alloc_lock();
803 spin_lock(&inode->i_lock);
804 if (request->fl_flags & FL_ACCESS)
807 for_each_lock(inode, before) {
808 struct file_lock *fl = *before;
813 if (filp != fl->fl_file)
815 if (request->fl_type == fl->fl_type)
818 locks_delete_lock(before);
822 if (request->fl_type == F_UNLCK) {
823 if ((request->fl_flags & FL_EXISTS) && !found)
829 * If a higher-priority process was blocked on the old file lock,
830 * give it the opportunity to lock the file.
833 spin_unlock(&inode->i_lock);
835 spin_lock(&inode->i_lock);
839 for_each_lock(inode, before) {
840 struct file_lock *fl = *before;
845 if (!flock_locks_conflict(request, fl))
848 if (!(request->fl_flags & FL_SLEEP))
850 error = FILE_LOCK_DEFERRED;
851 locks_insert_block(fl, request);
854 if (request->fl_flags & FL_ACCESS)
856 locks_copy_lock(new_fl, request);
857 locks_insert_lock(before, new_fl);
862 spin_unlock(&inode->i_lock);
864 locks_free_lock(new_fl);
868 static int __posix_lock_file(struct inode *inode, struct file_lock *request, struct file_lock *conflock)
870 struct file_lock *fl;
871 struct file_lock *new_fl = NULL;
872 struct file_lock *new_fl2 = NULL;
873 struct file_lock *left = NULL;
874 struct file_lock *right = NULL;
875 struct file_lock **before;
880 * We may need two file_lock structures for this operation,
881 * so we get them in advance to avoid races.
883 * In some cases we can be sure that no new locks will be needed
885 if (!(request->fl_flags & FL_ACCESS) &&
886 (request->fl_type != F_UNLCK ||
887 request->fl_start != 0 || request->fl_end != OFFSET_MAX)) {
888 new_fl = locks_alloc_lock();
889 new_fl2 = locks_alloc_lock();
892 spin_lock(&inode->i_lock);
894 * New lock request. Walk all POSIX locks and look for conflicts. If
895 * there are any, either return error or put the request on the
896 * blocker's list of waiters and the global blocked_hash.
898 if (request->fl_type != F_UNLCK) {
899 for_each_lock(inode, before) {
903 if (!posix_locks_conflict(request, fl))
906 __locks_copy_lock(conflock, fl);
908 if (!(request->fl_flags & FL_SLEEP))
911 * Deadlock detection and insertion into the blocked
912 * locks list must be done while holding the same lock!
915 spin_lock(&file_lock_lock);
916 if (likely(!posix_locks_deadlock(request, fl))) {
917 error = FILE_LOCK_DEFERRED;
918 __locks_insert_block(fl, request);
920 spin_unlock(&file_lock_lock);
925 /* If we're just looking for a conflict, we're done. */
927 if (request->fl_flags & FL_ACCESS)
931 * Find the first old lock with the same owner as the new lock.
934 before = &inode->i_flock;
936 /* First skip locks owned by other processes. */
937 while ((fl = *before) && (!IS_POSIX(fl) ||
938 !posix_same_owner(request, fl))) {
939 before = &fl->fl_next;
942 /* Process locks with this owner. */
943 while ((fl = *before) && posix_same_owner(request, fl)) {
944 /* Detect adjacent or overlapping regions (if same lock type)
946 if (request->fl_type == fl->fl_type) {
947 /* In all comparisons of start vs end, use
948 * "start - 1" rather than "end + 1". If end
949 * is OFFSET_MAX, end + 1 will become negative.
951 if (fl->fl_end < request->fl_start - 1)
953 /* If the next lock in the list has entirely bigger
954 * addresses than the new one, insert the lock here.
956 if (fl->fl_start - 1 > request->fl_end)
959 /* If we come here, the new and old lock are of the
960 * same type and adjacent or overlapping. Merge them into
961 * one lock spanning from the lower of the two start
962 * addresses to the higher of the two end addresses.
964 if (fl->fl_start > request->fl_start)
965 fl->fl_start = request->fl_start;
967 request->fl_start = fl->fl_start;
968 if (fl->fl_end < request->fl_end)
969 fl->fl_end = request->fl_end;
971 request->fl_end = fl->fl_end;
973 locks_delete_lock(before);
980 /* Processing for different lock types is a bit
983 if (fl->fl_end < request->fl_start)
985 if (fl->fl_start > request->fl_end)
987 if (request->fl_type == F_UNLCK)
989 if (fl->fl_start < request->fl_start)
991 /* If the next lock in the list has a higher end
992 * address than the new one, insert the new one here.
994 if (fl->fl_end > request->fl_end) {
998 if (fl->fl_start >= request->fl_start) {
999 /* The new lock completely replaces an old
1000 * one (This may happen several times).
1003 locks_delete_lock(before);
1006 /* Replace the old lock with the new one.
1007 * Wake up anybody waiting for the old one,
1008 * as the change in lock type might satisfy
1011 locks_wake_up_blocks(fl);
1012 fl->fl_start = request->fl_start;
1013 fl->fl_end = request->fl_end;
1014 fl->fl_type = request->fl_type;
1015 locks_release_private(fl);
1016 locks_copy_private(fl, request);
1021 /* Go on to next lock.
1024 before = &fl->fl_next;
1028 * The above code only modifies existing locks in case of merging or
1029 * replacing. If new lock(s) need to be inserted all modifications are
1030 * done below this, so it's safe yet to bail out.
1032 error = -ENOLCK; /* "no luck" */
1033 if (right && left == right && !new_fl2)
1038 if (request->fl_type == F_UNLCK) {
1039 if (request->fl_flags & FL_EXISTS)
1048 locks_copy_lock(new_fl, request);
1049 locks_insert_lock(before, new_fl);
1053 if (left == right) {
1054 /* The new lock breaks the old one in two pieces,
1055 * so we have to use the second new lock.
1059 locks_copy_lock(left, right);
1060 locks_insert_lock(before, left);
1062 right->fl_start = request->fl_end + 1;
1063 locks_wake_up_blocks(right);
1066 left->fl_end = request->fl_start - 1;
1067 locks_wake_up_blocks(left);
1070 spin_unlock(&inode->i_lock);
1072 * Free any unused locks.
1075 locks_free_lock(new_fl);
1077 locks_free_lock(new_fl2);
1082 * posix_lock_file - Apply a POSIX-style lock to a file
1083 * @filp: The file to apply the lock to
1084 * @fl: The lock to be applied
1085 * @conflock: Place to return a copy of the conflicting lock, if found.
1087 * Add a POSIX style lock to a file.
1088 * We merge adjacent & overlapping locks whenever possible.
1089 * POSIX locks are sorted by owner task, then by starting address
1091 * Note that if called with an FL_EXISTS argument, the caller may determine
1092 * whether or not a lock was successfully freed by testing the return
1093 * value for -ENOENT.
1095 int posix_lock_file(struct file *filp, struct file_lock *fl,
1096 struct file_lock *conflock)
1098 return __posix_lock_file(file_inode(filp), fl, conflock);
1100 EXPORT_SYMBOL(posix_lock_file);
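/*
 * Illustration of the merging behaviour: adjacent or overlapping locks of
 * the same type and owner are coalesced by __posix_lock_file(), so the two
 * userspace requests below end up as a single record covering bytes 0..199
 * (the descriptor and the missing error handling are illustrative only):
 *
 *	struct flock fl = {
 *		.l_type = F_WRLCK, .l_whence = SEEK_SET,
 *		.l_start = 0, .l_len = 100,
 *	};
 *	fcntl(fd, F_SETLK, &fl);	// lock bytes 0..99
 *
 *	fl.l_start = 100;
 *	fcntl(fd, F_SETLK, &fl);	// bytes 100..199, merged with the above
 */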
1103 * posix_lock_file_wait - Apply a POSIX-style lock to a file
1104 * @filp: The file to apply the lock to
1105 * @fl: The lock to be applied
1107 * Add a POSIX style lock to a file.
1108 * We merge adjacent & overlapping locks whenever possible.
1109 * POSIX locks are sorted by owner task, then by starting address
1111 int posix_lock_file_wait(struct file *filp, struct file_lock *fl)
1116 error = posix_lock_file(filp, fl, NULL);
1117 if (error != FILE_LOCK_DEFERRED)
1119 error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
1123 locks_delete_block(fl);
1128 EXPORT_SYMBOL(posix_lock_file_wait);
1131 * locks_mandatory_locked - Check for an active lock
1132 * @inode: the file to check
1134 * Searches the inode's list of locks to find any POSIX locks which conflict.
1135 * This function is called from locks_verify_locked() only.
1137 int locks_mandatory_locked(struct inode *inode)
1139 fl_owner_t owner = current->files;
1140 struct file_lock *fl;
1143 * Search the lock list for this inode for any POSIX locks.
1145 spin_lock(&inode->i_lock);
1146 for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
1149 if (fl->fl_owner != owner)
1152 spin_unlock(&inode->i_lock);
1153 return fl ? -EAGAIN : 0;
1157 * locks_mandatory_area - Check for a conflicting lock
1158 * @read_write: %FLOCK_VERIFY_WRITE for exclusive access, %FLOCK_VERIFY_READ
1160 * @inode: the file to check
1161 * @filp: how the file was opened (if it was)
1162 * @offset: start of area to check
1163 * @count: length of area to check
1165 * Searches the inode's list of locks to find any POSIX locks which conflict.
1166 * This function is called from rw_verify_area() and
1167 * locks_verify_truncate().
1169 int locks_mandatory_area(int read_write, struct inode *inode,
1170 struct file *filp, loff_t offset,
1173 struct file_lock fl;
1176 locks_init_lock(&fl);
1177 fl.fl_owner = current->files;
1178 fl.fl_pid = current->tgid;
1180 fl.fl_flags = FL_POSIX | FL_ACCESS;
1181 if (filp && !(filp->f_flags & O_NONBLOCK))
1182 fl.fl_flags |= FL_SLEEP;
1183 fl.fl_type = (read_write == FLOCK_VERIFY_WRITE) ? F_WRLCK : F_RDLCK;
1184 fl.fl_start = offset;
1185 fl.fl_end = offset + count - 1;
1188 error = __posix_lock_file(inode, &fl, NULL);
1189 if (error != FILE_LOCK_DEFERRED)
1191 error = wait_event_interruptible(fl.fl_wait, !fl.fl_next);
1194 * If we've been sleeping someone might have
1195 * changed the permissions behind our back.
1197 if (__mandatory_lock(inode))
1201 locks_delete_block(&fl);
1208 EXPORT_SYMBOL(locks_mandatory_area);
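/*
 * Mandatory locking is only enforced when the filesystem is mounted with the
 * "mand" option and the file has the setgid bit set with the group-execute
 * bit clear (see the mandatory-locking document referenced in the header).
 * A rough userspace sketch of setting that up, with illustrative paths and
 * no error handling:
 *
 *	#include <sys/mount.h>
 *	#include <sys/stat.h>
 *
 *	mount("/dev/sdb1", "/mnt", "ext4", MS_MANDLOCK, NULL);
 *	chmod("/mnt/data", S_ISGID | 0644);	// setgid on, group-exec off
 *
 * After that, conflicting POSIX locks on the file are enforced against
 * ordinary read()/write() calls via locks_mandatory_area() above.
 */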
1210 static void lease_clear_pending(struct file_lock *fl, int arg)
1214 fl->fl_flags &= ~FL_UNLOCK_PENDING;
1217 fl->fl_flags &= ~FL_DOWNGRADE_PENDING;
1221 /* We already had a lease on this file; just change its type */
1222 int lease_modify(struct file_lock **before, int arg)
1224 struct file_lock *fl = *before;
1225 int error = assign_type(fl, arg);
1229 lease_clear_pending(fl, arg);
1230 locks_wake_up_blocks(fl);
1231 if (arg == F_UNLCK) {
1232 struct file *filp = fl->fl_file;
1235 filp->f_owner.signum = 0;
1236 fasync_helper(0, fl->fl_file, 0, &fl->fl_fasync);
1237 if (fl->fl_fasync != NULL) {
1238 printk(KERN_ERR "lease_modify: fasync == %p\n", fl->fl_fasync);
1239 fl->fl_fasync = NULL;
1241 locks_delete_lock(before);
1246 EXPORT_SYMBOL(lease_modify);
1248 static bool past_time(unsigned long then)
1251 /* 0 is a special value meaning "this never expires": */
1253 return time_after(jiffies, then);
1256 static void time_out_leases(struct inode *inode)
1258 struct file_lock **before;
1259 struct file_lock *fl;
1261 before = &inode->i_flock;
1262 while ((fl = *before) && IS_LEASE(fl) && lease_breaking(fl)) {
1263 if (past_time(fl->fl_downgrade_time))
1264 lease_modify(before, F_RDLCK);
1265 if (past_time(fl->fl_break_time))
1266 lease_modify(before, F_UNLCK);
1267 if (fl == *before) /* lease_modify may have freed fl */
1268 before = &fl->fl_next;
1273 * __break_lease - revoke all outstanding leases on file
1274 * @inode: the inode of the file to return
1275 * @mode: the open mode (read or write)
1277 * break_lease (inlined for speed) has checked there already is at least
1278 * some kind of lock (maybe a lease) on this file. Leases are broken on
1279 * a call to open() or truncate(). This function can sleep unless you
1280 * specified %O_NONBLOCK to your open().
1282 int __break_lease(struct inode *inode, unsigned int mode)
1285 struct file_lock *new_fl, *flock;
1286 struct file_lock *fl;
1287 unsigned long break_time;
1288 int i_have_this_lease = 0;
1289 int want_write = (mode & O_ACCMODE) != O_RDONLY;
1291 new_fl = lease_alloc(NULL, want_write ? F_WRLCK : F_RDLCK);
1293 return PTR_ERR(new_fl);
1295 spin_lock(&inode->i_lock);
1297 time_out_leases(inode);
1299 flock = inode->i_flock;
1300 if ((flock == NULL) || !IS_LEASE(flock))
1303 if (!locks_conflict(flock, new_fl))
1306 for (fl = flock; fl && IS_LEASE(fl); fl = fl->fl_next)
1307 if (fl->fl_owner == current->files)
1308 i_have_this_lease = 1;
1311 if (lease_break_time > 0) {
1312 break_time = jiffies + lease_break_time * HZ;
1313 if (break_time == 0)
1314 break_time++; /* so that 0 means no break time */
1317 for (fl = flock; fl && IS_LEASE(fl); fl = fl->fl_next) {
1319 if (fl->fl_flags & FL_UNLOCK_PENDING)
1321 fl->fl_flags |= FL_UNLOCK_PENDING;
1322 fl->fl_break_time = break_time;
1324 if (lease_breaking(flock))
1326 fl->fl_flags |= FL_DOWNGRADE_PENDING;
1327 fl->fl_downgrade_time = break_time;
1329 fl->fl_lmops->lm_break(fl);
1332 if (i_have_this_lease || (mode & O_NONBLOCK)) {
1333 error = -EWOULDBLOCK;
1338 break_time = flock->fl_break_time;
1339 if (break_time != 0) {
1340 break_time -= jiffies;
1341 if (break_time == 0)
1344 locks_insert_block(flock, new_fl);
1345 spin_unlock(&inode->i_lock);
1346 error = wait_event_interruptible_timeout(new_fl->fl_wait,
1347 !new_fl->fl_next, break_time);
1348 spin_lock(&inode->i_lock);
1349 locks_delete_block(new_fl);
1352 time_out_leases(inode);
1354 * Wait for the next conflicting lease that has not been
1357 for (flock = inode->i_flock; flock && IS_LEASE(flock);
1358 flock = flock->fl_next) {
1359 if (locks_conflict(new_fl, flock))
1366 spin_unlock(&inode->i_lock);
1367 locks_free_lock(new_fl);
1371 EXPORT_SYMBOL(__break_lease);
1374 * lease_get_mtime - get the last modified time of an inode
1376 * @time: pointer to a timespec which will contain the last modified time
1378 * This is to force NFS clients to flush their caches for files with
1379 * exclusive leases. The justification is that if someone has an
1380 * exclusive lease, then they could be modifying it.
1382 void lease_get_mtime(struct inode *inode, struct timespec *time)
1384 struct file_lock *flock = inode->i_flock;
1385 if (flock && IS_LEASE(flock) && (flock->fl_type == F_WRLCK))
1386 *time = current_fs_time(inode->i_sb);
1388 *time = inode->i_mtime;
1391 EXPORT_SYMBOL(lease_get_mtime);
1394 * fcntl_getlease - Enquire what lease is currently active
1397 * The value returned by this function will be one of
1398 * (if no lease break is pending):
1400 * %F_RDLCK to indicate a shared lease is held.
1402 * %F_WRLCK to indicate an exclusive lease is held.
1404 * %F_UNLCK to indicate no lease is held.
1406 * (if a lease break is pending):
1408 * %F_RDLCK to indicate an exclusive lease needs to be
1409 * changed to a shared lease (or removed).
1411 * %F_UNLCK to indicate the lease needs to be removed.
1413 * XXX: sfr & willy disagree over whether F_INPROGRESS
1414 * should be returned to userspace.
1416 int fcntl_getlease(struct file *filp)
1418 struct file_lock *fl;
1419 struct inode *inode = file_inode(filp);
1422 spin_lock(&inode->i_lock);
1423 time_out_leases(file_inode(filp));
1424 for (fl = file_inode(filp)->i_flock; fl && IS_LEASE(fl);
1426 if (fl->fl_file == filp) {
1427 type = target_leasetype(fl);
1431 spin_unlock(&inode->i_lock);
1435 static int generic_add_lease(struct file *filp, long arg, struct file_lock **flp)
1437 struct file_lock *fl, **before, **my_before = NULL, *lease;
1438 struct dentry *dentry = filp->f_path.dentry;
1439 struct inode *inode = dentry->d_inode;
1445 if ((arg == F_RDLCK) && (atomic_read(&inode->i_writecount) > 0))
1447 if ((arg == F_WRLCK)
1448 && ((dentry->d_count > 1)
1449 || (atomic_read(&inode->i_count) > 1)))
1453 * At this point, we know that if there is an exclusive
1454 * lease on this file, then we hold it on this filp
1455 * (otherwise our open of this file would have blocked).
1456 * And if we are trying to acquire an exclusive lease,
1457 * then the file is not open by anyone (including us)
1458 * except for this filp.
1461 for (before = &inode->i_flock;
1462 ((fl = *before) != NULL) && IS_LEASE(fl);
1463 before = &fl->fl_next) {
1464 if (fl->fl_file == filp) {
1469 * No exclusive leases if someone else has a lease on
1475 * Modifying our existing lease is OK, but no getting a
1476 * new lease if someone else is opening for write:
1478 if (fl->fl_flags & FL_UNLOCK_PENDING)
1482 if (my_before != NULL) {
1483 error = lease->fl_lmops->lm_change(my_before, arg);
1493 locks_insert_lock(before, lease);
1500 static int generic_delete_lease(struct file *filp, struct file_lock **flp)
1502 struct file_lock *fl, **before;
1503 struct dentry *dentry = filp->f_path.dentry;
1504 struct inode *inode = dentry->d_inode;
1506 for (before = &inode->i_flock;
1507 ((fl = *before) != NULL) && IS_LEASE(fl);
1508 before = &fl->fl_next) {
1509 if (fl->fl_file != filp)
1511 return (*flp)->fl_lmops->lm_change(before, F_UNLCK);
1517 * generic_setlease - sets a lease on an open file
1518 * @filp: file pointer
1519 * @arg: type of lease to obtain
1520 * @flp: input - file_lock to use, output - file_lock inserted
1522 * The (input) flp->fl_lmops->lm_break function is required
1525 * Called with inode->i_lock held.
1527 int generic_setlease(struct file *filp, long arg, struct file_lock **flp)
1529 struct dentry *dentry = filp->f_path.dentry;
1530 struct inode *inode = dentry->d_inode;
1533 if ((!uid_eq(current_fsuid(), inode->i_uid)) && !capable(CAP_LEASE))
1535 if (!S_ISREG(inode->i_mode))
1537 error = security_file_lock(filp, arg);
1541 time_out_leases(inode);
1543 BUG_ON(!(*flp)->fl_lmops->lm_break);
1547 return generic_delete_lease(filp, flp);
1550 return generic_add_lease(filp, arg, flp);
1555 EXPORT_SYMBOL(generic_setlease);
1557 static int __vfs_setlease(struct file *filp, long arg, struct file_lock **lease)
1559 if (filp->f_op && filp->f_op->setlease)
1560 return filp->f_op->setlease(filp, arg, lease);
1562 return generic_setlease(filp, arg, lease);
1566 * vfs_setlease - sets a lease on an open file
1567 * @filp: file pointer
1568 * @arg: type of lease to obtain
1569 * @lease: file_lock to use
1571 * Call this to establish a lease on the file.
1572 * The (*lease)->fl_lmops->lm_break operation must be set; if not,
1573 * break_lease will oops!
1575 * This will call the filesystem's setlease file method, if
1576 * defined. Note that there is no getlease method; instead, the
1577 * filesystem setlease method should call back to setlease() to
1578 * add a lease to the inode's lease list, where fcntl_getlease() can
1579 * find it. Since fcntl_getlease() only reports whether the current
1580 * task holds a lease, a cluster filesystem need only do this for
1581 * leases held by processes on this node.
1583 * There is also no break_lease method; filesystems that
1584 * handle their own leases should break leases themselves from the
1585 * filesystem's open, create, and (on truncate) setattr methods.
1587 * Warning: the only current setlease methods exist only to disable
1588 * leases in certain cases. More vfs changes may be required to
1589 * allow a full filesystem lease implementation.
1592 int vfs_setlease(struct file *filp, long arg, struct file_lock **lease)
1594 struct inode *inode = file_inode(filp);
1597 spin_lock(&inode->i_lock);
1598 error = __vfs_setlease(filp, arg, lease);
1599 spin_unlock(&inode->i_lock);
1603 EXPORT_SYMBOL_GPL(vfs_setlease);
1605 static int do_fcntl_delete_lease(struct file *filp)
1607 struct file_lock fl, *flp = &fl;
1609 lease_init(filp, F_UNLCK, flp);
1611 return vfs_setlease(filp, F_UNLCK, &flp);
1614 static int do_fcntl_add_lease(unsigned int fd, struct file *filp, long arg)
1616 struct file_lock *fl, *ret;
1617 struct inode *inode = file_inode(filp);
1618 struct fasync_struct *new;
1621 fl = lease_alloc(filp, arg);
1625 new = fasync_alloc();
1627 locks_free_lock(fl);
1631 spin_lock(&inode->i_lock);
1632 error = __vfs_setlease(filp, arg, &ret);
1634 spin_unlock(&inode->i_lock);
1635 locks_free_lock(fl);
1636 goto out_free_fasync;
1639 locks_free_lock(fl);
1642 * fasync_insert_entry() returns the old entry if any.
1643 * If there was no old entry, then it used 'new' and
1644 * inserted it into the fasync list. Clear new so that
1645 * we don't release it here.
1647 if (!fasync_insert_entry(fd, filp, &ret->fl_fasync, new))
1650 error = __f_setown(filp, task_pid(current), PIDTYPE_PID, 0);
1651 spin_unlock(&inode->i_lock);
1660 * fcntl_setlease - sets a lease on an open file
1661 * @fd: open file descriptor
1662 * @filp: file pointer
1663 * @arg: type of lease to obtain
1665 * Call this fcntl to establish a lease on the file.
1666 * The lease holder is sent %SIGIO by default when the lease is broken;
1667 * use %F_SETSIG to select a different signal.
1669 int fcntl_setlease(unsigned int fd, struct file *filp, long arg)
1672 return do_fcntl_delete_lease(filp);
1673 return do_fcntl_add_lease(fd, filp, arg);
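/*
 * A rough userspace sketch of taking and surrendering a lease. The file name
 * and the handler body are illustrative; by default the holder receives
 * SIGIO when the lease is broken (see lease_break_callback() and
 * do_fcntl_add_lease() above), and F_SETSIG can select another signal:
 *
 *	#include <fcntl.h>
 *	#include <signal.h>
 *	#include <unistd.h>
 *
 *	static void lease_broken(int sig)
 *	{
 *		// flush cached state here, then release the lease with
 *		// fcntl(fd, F_SETLEASE, F_UNLCK) before lease_break_time expires
 *	}
 *
 *	int fd = open("/tmp/cache", O_RDONLY);
 *	signal(SIGIO, lease_broken);
 *	fcntl(fd, F_SETLEASE, F_RDLCK);		// take a read lease
 */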
1677 * flock_lock_file_wait - Apply a FLOCK-style lock to a file
1678 * @filp: The file to apply the lock to
1679 * @fl: The lock to be applied
1681 * Add a FLOCK style lock to a file.
1683 int flock_lock_file_wait(struct file *filp, struct file_lock *fl)
1688 error = flock_lock_file(filp, fl);
1689 if (error != FILE_LOCK_DEFERRED)
1691 error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
1695 locks_delete_block(fl);
1701 EXPORT_SYMBOL(flock_lock_file_wait);
1704 * sys_flock: - flock() system call.
1705 * @fd: the file descriptor to lock.
1706 * @cmd: the type of lock to apply.
1708 * Apply a %FL_FLOCK style lock to an open file descriptor.
1709 * The @cmd can be one of
1711 * %LOCK_SH -- a shared lock.
1713 * %LOCK_EX -- an exclusive lock.
1715 * %LOCK_UN -- remove an existing lock.
1717 * %LOCK_MAND -- a `mandatory' flock. This exists to emulate Windows Share Modes.
1719 * %LOCK_MAND can be combined with %LOCK_READ or %LOCK_WRITE to allow other
1720 * processes read and write access respectively.
1722 SYSCALL_DEFINE2(flock, unsigned int, fd, unsigned int, cmd)
1724 struct fd f = fdget(fd);
1725 struct file_lock *lock;
1726 int can_sleep, unlock;
1733 can_sleep = !(cmd & LOCK_NB);
1735 unlock = (cmd == LOCK_UN);
1737 if (!unlock && !(cmd & LOCK_MAND) &&
1738 !(f.file->f_mode & (FMODE_READ|FMODE_WRITE)))
1741 error = flock_make_lock(f.file, &lock, cmd);
1745 lock->fl_flags |= FL_SLEEP;
1747 error = security_file_lock(f.file, lock->fl_type);
1751 if (f.file->f_op && f.file->f_op->flock)
1752 error = f.file->f_op->flock(f.file,
1753 (can_sleep) ? F_SETLKW : F_SETLK,
1756 error = flock_lock_file_wait(f.file, lock);
1759 locks_free_lock(lock);
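/*
 * Note on conversions, matching the FL_FLOCK semantics in the header: the
 * existing lock is removed before the new one is applied, so an upgrading
 * caller briefly holds no lock at all. Userspace sketch (descriptor setup
 * omitted):
 *
 *	flock(fd, LOCK_SH);	// shared lock
 *	...
 *	flock(fd, LOCK_EX);	// old lock dropped first, then exclusive lock
 *				// requested; another process may win the race
 *	flock(fd, LOCK_UN);	// release
 */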
1768 * vfs_test_lock - test file byte range lock
1769 * @filp: The file to test lock for
1770 * @fl: The lock to test; also used to hold result
1772 * Returns -ERRNO on failure. Indicates presence of conflicting lock by
1773 * setting fl->fl_type to something other than F_UNLCK.
1775 int vfs_test_lock(struct file *filp, struct file_lock *fl)
1777 if (filp->f_op && filp->f_op->lock)
1778 return filp->f_op->lock(filp, F_GETLK, fl);
1779 posix_test_lock(filp, fl);
1782 EXPORT_SYMBOL_GPL(vfs_test_lock);
1784 static int posix_lock_to_flock(struct flock *flock, struct file_lock *fl)
1786 flock->l_pid = fl->fl_pid;
1787 #if BITS_PER_LONG == 32
1789 * Make sure we can represent the posix lock via
1790 * legacy 32bit flock.
1792 if (fl->fl_start > OFFT_OFFSET_MAX)
1794 if (fl->fl_end != OFFSET_MAX && fl->fl_end > OFFT_OFFSET_MAX)
1797 flock->l_start = fl->fl_start;
1798 flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
1799 fl->fl_end - fl->fl_start + 1;
1800 flock->l_whence = 0;
1801 flock->l_type = fl->fl_type;
1805 #if BITS_PER_LONG == 32
1806 static void posix_lock_to_flock64(struct flock64 *flock, struct file_lock *fl)
1808 flock->l_pid = fl->fl_pid;
1809 flock->l_start = fl->fl_start;
1810 flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
1811 fl->fl_end - fl->fl_start + 1;
1812 flock->l_whence = 0;
1813 flock->l_type = fl->fl_type;
1817 /* Report the first existing lock that would conflict with l.
1818 * This implements the F_GETLK command of fcntl().
1820 int fcntl_getlk(struct file *filp, struct flock __user *l)
1822 struct file_lock file_lock;
1827 if (copy_from_user(&flock, l, sizeof(flock)))
1830 if ((flock.l_type != F_RDLCK) && (flock.l_type != F_WRLCK))
1833 error = flock_to_posix_lock(filp, &file_lock, &flock);
1837 error = vfs_test_lock(filp, &file_lock);
1841 flock.l_type = file_lock.fl_type;
1842 if (file_lock.fl_type != F_UNLCK) {
1843 error = posix_lock_to_flock(&flock, &file_lock);
1848 if (!copy_to_user(l, &flock, sizeof(flock)))
1855 * vfs_lock_file - file byte range lock
1856 * @filp: The file to apply the lock to
1857 * @cmd: type of locking operation (F_SETLK, F_GETLK, etc.)
1858 * @fl: The lock to be applied
1859 * @conf: Place to return a copy of the conflicting lock, if found.
1861 * A caller that doesn't care about the conflicting lock may pass NULL
1862 * as the final argument.
1864 * If the filesystem defines a private ->lock() method, then @conf will
1865 * be left unchanged; so a caller that cares should initialize it to
1866 * some acceptable default.
1868 * To avoid blocking kernel daemons, such as lockd, that need to acquire POSIX
1869 * locks, the ->lock() interface may return asynchronously, before the lock has
1870 * been granted or denied by the underlying filesystem, if (and only if)
1871 * lm_grant is set. Callers expecting ->lock() to return asynchronously
1872 * will only use F_SETLK, not F_SETLKW; they will set FL_SLEEP if (and only if)
1873 * the request is for a blocking lock. When ->lock() does return asynchronously,
1874 * it must return FILE_LOCK_DEFERRED, and call ->lm_grant() when the lock
1875 * request completes.
1876 * If the request is for a non-blocking lock the file system should return
1877 * FILE_LOCK_DEFERRED, then try to get the lock and call the callback routine
1878 * with the result. If the request timed out, the callback routine will return a
1879 * nonzero return code and the file system should release the lock. The file
1880 * system is also responsible for keeping a corresponding posix lock when it
1881 * grants a lock, so the VFS can find out which locks are locally held and do
1882 * the correct lock cleanup when required.
1883 * The underlying filesystem must not drop the kernel lock or call
1884 * ->lm_grant() before returning to the caller with a FILE_LOCK_DEFERRED
1887 int vfs_lock_file(struct file *filp, unsigned int cmd, struct file_lock *fl, struct file_lock *conf)
1889 if (filp->f_op && filp->f_op->lock)
1890 return filp->f_op->lock(filp, cmd, fl);
1892 return posix_lock_file(filp, fl, conf);
1894 EXPORT_SYMBOL_GPL(vfs_lock_file);
1896 static int do_lock_file_wait(struct file *filp, unsigned int cmd,
1897 struct file_lock *fl)
1901 error = security_file_lock(filp, fl->fl_type);
1906 error = vfs_lock_file(filp, cmd, fl, NULL);
1907 if (error != FILE_LOCK_DEFERRED)
1909 error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
1913 locks_delete_block(fl);
1920 /* Apply the lock described by l to an open file descriptor.
1921 * This implements both the F_SETLK and F_SETLKW commands of fcntl().
1923 int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
1924 struct flock __user *l)
1926 struct file_lock *file_lock = locks_alloc_lock();
1928 struct inode *inode;
1932 if (file_lock == NULL)
1936 * This might block, so we do it before checking the inode.
1939 if (copy_from_user(&flock, l, sizeof(flock)))
1942 inode = file_inode(filp);
1944 /* Don't allow mandatory locks on files that may be memory mapped
1947 if (mandatory_lock(inode) && mapping_writably_mapped(filp->f_mapping)) {
1953 error = flock_to_posix_lock(filp, file_lock, &flock);
1956 if (cmd == F_SETLKW) {
1957 file_lock->fl_flags |= FL_SLEEP;
1961 switch (flock.l_type) {
1963 if (!(filp->f_mode & FMODE_READ))
1967 if (!(filp->f_mode & FMODE_WRITE))
1977 error = do_lock_file_wait(filp, cmd, file_lock);
1980 * Attempt to detect a close/fcntl race and recover by
1981 * releasing the lock that was just acquired.
1984 * we need that spin_lock here - it prevents reordering between
1985 * update of inode->i_flock and check for it done in close().
1986 * rcu_read_lock() wouldn't do.
1988 spin_lock(&current->files->file_lock);
1990 spin_unlock(&current->files->file_lock);
1991 if (!error && f != filp && flock.l_type != F_UNLCK) {
1992 flock.l_type = F_UNLCK;
1997 locks_free_lock(file_lock);
2001 #if BITS_PER_LONG == 32
2002 /* Report the first existing lock that would conflict with l.
2003 * This implements the F_GETLK command of fcntl().
2005 int fcntl_getlk64(struct file *filp, struct flock64 __user *l)
2007 struct file_lock file_lock;
2008 struct flock64 flock;
2012 if (copy_from_user(&flock, l, sizeof(flock)))
2015 if ((flock.l_type != F_RDLCK) && (flock.l_type != F_WRLCK))
2018 error = flock64_to_posix_lock(filp, &file_lock, &flock);
2022 error = vfs_test_lock(filp, &file_lock);
2026 flock.l_type = file_lock.fl_type;
2027 if (file_lock.fl_type != F_UNLCK)
2028 posix_lock_to_flock64(&flock, &file_lock);
2031 if (!copy_to_user(l, &flock, sizeof(flock)))
2038 /* Apply the lock described by l to an open file descriptor.
2039 * This implements both the F_SETLK and F_SETLKW commands of fcntl().
2041 int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
2042 struct flock64 __user *l)
2044 struct file_lock *file_lock = locks_alloc_lock();
2045 struct flock64 flock;
2046 struct inode *inode;
2050 if (file_lock == NULL)
2054 * This might block, so we do it before checking the inode.
2057 if (copy_from_user(&flock, l, sizeof(flock)))
2060 inode = file_inode(filp);
2062 /* Don't allow mandatory locks on files that may be memory mapped
2065 if (mandatory_lock(inode) && mapping_writably_mapped(filp->f_mapping)) {
2071 error = flock64_to_posix_lock(filp, file_lock, &flock);
2074 if (cmd == F_SETLKW64) {
2075 file_lock->fl_flags |= FL_SLEEP;
2079 switch (flock.l_type) {
2081 if (!(filp->f_mode & FMODE_READ))
2085 if (!(filp->f_mode & FMODE_WRITE))
2095 error = do_lock_file_wait(filp, cmd, file_lock);
2098 * Attempt to detect a close/fcntl race and recover by
2099 * releasing the lock that was just acquired.
2101 spin_lock(&current->files->file_lock);
2103 spin_unlock(&current->files->file_lock);
2104 if (!error && f != filp && flock.l_type != F_UNLCK) {
2105 flock.l_type = F_UNLCK;
2110 locks_free_lock(file_lock);
2113 #endif /* BITS_PER_LONG == 32 */
2116 * This function is called when the file is being removed
2117 * from the task's fd array. POSIX locks belonging to this task
2118 * are deleted at this time.
2120 void locks_remove_posix(struct file *filp, fl_owner_t owner)
2122 struct file_lock lock;
2125 * If there are no locks held on this file, we don't need to call
2126 * posix_lock_file(). Another process could be setting a lock on this
2127 * file at the same time, but we wouldn't remove that lock anyway.
2129 if (!file_inode(filp)->i_flock)
2132 lock.fl_type = F_UNLCK;
2133 lock.fl_flags = FL_POSIX | FL_CLOSE;
2135 lock.fl_end = OFFSET_MAX;
2136 lock.fl_owner = owner;
2137 lock.fl_pid = current->tgid;
2138 lock.fl_file = filp;
2140 lock.fl_lmops = NULL;
2142 vfs_lock_file(filp, F_SETLK, &lock, NULL);
2144 if (lock.fl_ops && lock.fl_ops->fl_release_private)
2145 lock.fl_ops->fl_release_private(&lock);
2148 EXPORT_SYMBOL(locks_remove_posix);
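/*
 * The flip side of the semantics above (and of the header note that close()
 * removes ALL of a process's locks on a file): closing any descriptor for a
 * file drops every POSIX lock the process holds on it, even locks taken
 * through a different descriptor. Illustrative sketch:
 *
 *	int fd1 = open("/tmp/data", O_RDWR);
 *	int fd2 = open("/tmp/data", O_RDONLY);	// same file, second descriptor
 *
 *	fcntl(fd1, F_SETLK, &fl);		// fl as in the earlier examples
 *	close(fd2);				// also releases the lock taken via fd1
 */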
2151 * This function is called on the last close of an open file.
2153 void locks_remove_flock(struct file *filp)
2155 struct inode * inode = file_inode(filp);
2156 struct file_lock *fl;
2157 struct file_lock **before;
2159 if (!inode->i_flock)
2162 if (filp->f_op && filp->f_op->flock) {
2163 struct file_lock fl = {
2164 .fl_pid = current->tgid,
2166 .fl_flags = FL_FLOCK,
2168 .fl_end = OFFSET_MAX,
2170 filp->f_op->flock(filp, F_SETLKW, &fl);
2171 if (fl.fl_ops && fl.fl_ops->fl_release_private)
2172 fl.fl_ops->fl_release_private(&fl);
2175 spin_lock(&inode->i_lock);
2176 before = &inode->i_flock;
2178 while ((fl = *before) != NULL) {
2179 if (fl->fl_file == filp) {
2181 locks_delete_lock(before);
2185 lease_modify(before, F_UNLCK);
2191 before = &fl->fl_next;
2193 spin_unlock(&inode->i_lock);
2197 * posix_unblock_lock - stop waiting for a file lock
2198 * @waiter: the lock which was waiting
2200 * lockd needs to block waiting for locks.
2203 posix_unblock_lock(struct file_lock *waiter)
2207 spin_lock(&file_lock_lock);
2208 if (waiter->fl_next)
2209 __locks_delete_block(waiter);
2212 spin_unlock(&file_lock_lock);
2215 EXPORT_SYMBOL(posix_unblock_lock);
2218 * vfs_cancel_lock - file byte range unblock lock
2219 * @filp: The file to apply the unblock to
2220 * @fl: The lock to be unblocked
2222 * Used by lock managers to cancel blocked requests
2224 int vfs_cancel_lock(struct file *filp, struct file_lock *fl)
2226 if (filp->f_op && filp->f_op->lock)
2227 return filp->f_op->lock(filp, F_CANCELLK, fl);
2231 EXPORT_SYMBOL_GPL(vfs_cancel_lock);
2233 #ifdef CONFIG_PROC_FS
2234 #include <linux/proc_fs.h>
2235 #include <linux/seq_file.h>
2237 static void lock_get_status(struct seq_file *f, struct file_lock *fl,
2238 loff_t id, char *pfx)
2240 struct inode *inode = NULL;
2241 unsigned int fl_pid;
2244 fl_pid = pid_vnr(fl->fl_nspid);
2246 fl_pid = fl->fl_pid;
2248 if (fl->fl_file != NULL)
2249 inode = file_inode(fl->fl_file);
2251 seq_printf(f, "%lld:%s ", id, pfx);
2253 seq_printf(f, "%6s %s ",
2254 (fl->fl_flags & FL_ACCESS) ? "ACCESS" : "POSIX ",
2255 (inode == NULL) ? "*NOINODE*" :
2256 mandatory_lock(inode) ? "MANDATORY" : "ADVISORY ");
2257 } else if (IS_FLOCK(fl)) {
2258 if (fl->fl_type & LOCK_MAND) {
2259 seq_printf(f, "FLOCK MSNFS ");
2261 seq_printf(f, "FLOCK ADVISORY ");
2263 } else if (IS_LEASE(fl)) {
2264 seq_printf(f, "LEASE ");
2265 if (lease_breaking(fl))
2266 seq_printf(f, "BREAKING ");
2267 else if (fl->fl_file)
2268 seq_printf(f, "ACTIVE ");
2270 seq_printf(f, "BREAKER ");
2272 seq_printf(f, "UNKNOWN UNKNOWN ");
2274 if (fl->fl_type & LOCK_MAND) {
2275 seq_printf(f, "%s ",
2276 (fl->fl_type & LOCK_READ)
2277 ? (fl->fl_type & LOCK_WRITE) ? "RW " : "READ "
2278 : (fl->fl_type & LOCK_WRITE) ? "WRITE" : "NONE ");
2280 seq_printf(f, "%s ",
2281 (lease_breaking(fl))
2282 ? (fl->fl_type == F_UNLCK) ? "UNLCK" : "READ "
2283 : (fl->fl_type == F_WRLCK) ? "WRITE" : "READ ");
2286 #ifdef WE_CAN_BREAK_LSLK_NOW
2287 seq_printf(f, "%d %s:%ld ", fl_pid,
2288 inode->i_sb->s_id, inode->i_ino);
2290 /* userspace relies on this representation of dev_t ;-( */
2291 seq_printf(f, "%d %02x:%02x:%ld ", fl_pid,
2292 MAJOR(inode->i_sb->s_dev),
2293 MINOR(inode->i_sb->s_dev), inode->i_ino);
2296 seq_printf(f, "%d <none>:0 ", fl_pid);
2299 if (fl->fl_end == OFFSET_MAX)
2300 seq_printf(f, "%Ld EOF\n", fl->fl_start);
2302 seq_printf(f, "%Ld %Ld\n", fl->fl_start, fl->fl_end);
2304 seq_printf(f, "0 EOF\n");
2308 static int locks_show(struct seq_file *f, void *v)
2310 struct file_lock *fl, *bfl;
2312 fl = hlist_entry(v, struct file_lock, fl_link);
2314 lock_get_status(f, fl, *((loff_t *)f->private), "");
2316 list_for_each_entry(bfl, &fl->fl_block, fl_block)
2317 lock_get_status(f, bfl, *((loff_t *)f->private), " ->");
2322 static void *locks_start(struct seq_file *f, loff_t *pos)
2324 loff_t *p = f->private;
2326 spin_lock(&file_lock_lock);
2328 return seq_hlist_start(&file_lock_list, *pos);
2331 static void *locks_next(struct seq_file *f, void *v, loff_t *pos)
2333 loff_t *p = f->private;
2335 return seq_hlist_next(v, &file_lock_list, pos);
2338 static void locks_stop(struct seq_file *f, void *v)
2340 spin_unlock(&file_lock_lock);
2343 static const struct seq_operations locks_seq_operations = {
2344 .start = locks_start,
2350 static int locks_open(struct inode *inode, struct file *filp)
2352 return seq_open_private(filp, &locks_seq_operations, sizeof(loff_t));
2355 static const struct file_operations proc_locks_operations = {
2358 .llseek = seq_lseek,
2359 .release = seq_release_private,
2362 static int __init proc_locks_init(void)
2364 proc_create("locks", 0, NULL, &proc_locks_operations);
2367 module_init(proc_locks_init);
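/*
 * For reference, a line emitted by lock_get_status() above typically looks
 * like this (all values illustrative):
 *
 *	1: POSIX  ADVISORY  WRITE 1234 08:02:131090 0 EOF
 *
 * i.e. ordinal, lock class, advisory/mandatory, access mode, the holder's
 * pid, major:minor:inode of the locked file, and the byte range (EOF for an
 * open-ended lock). Blocked waiters are shown beneath their blocker with a
 * "->" prefix.
 */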
2371 * lock_may_read - checks that the region is free of locks
2372 * @inode: the inode that is being read
2373 * @start: the first byte to read
2374 * @len: the number of bytes to read
2376 * Emulates Windows locking requirements. Whole-file
2377 * mandatory locks (share modes) can prohibit a read and
2378 * byte-range POSIX locks can prohibit a read if they overlap.
2380 * N.B. this function is only ever called
2381 * from knfsd and ownership of locks is never checked.
2383 int lock_may_read(struct inode *inode, loff_t start, unsigned long len)
2385 struct file_lock *fl;
2388 spin_lock(&inode->i_lock);
2389 for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
2391 if (fl->fl_type == F_RDLCK)
2393 if ((fl->fl_end < start) || (fl->fl_start > (start + len)))
2395 } else if (IS_FLOCK(fl)) {
2396 if (!(fl->fl_type & LOCK_MAND))
2398 if (fl->fl_type & LOCK_READ)
2405 spin_unlock(&inode->i_lock);
2409 EXPORT_SYMBOL(lock_may_read);
2412 * lock_may_write - checks that the region is free of locks
2413 * @inode: the inode that is being written
2414 * @start: the first byte to write
2415 * @len: the number of bytes to write
2417 * Emulates Windows locking requirements. Whole-file
2418 * mandatory locks (share modes) can prohibit a write and
2419 * byte-range POSIX locks can prohibit a write if they overlap.
2421 * N.B. this function is only ever called
2422 * from knfsd and ownership of locks is never checked.
2424 int lock_may_write(struct inode *inode, loff_t start, unsigned long len)
2426 struct file_lock *fl;
2429 spin_lock(&inode->i_lock);
2430 for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
2432 if ((fl->fl_end < start) || (fl->fl_start > (start + len)))
2434 } else if (IS_FLOCK(fl)) {
2435 if (!(fl->fl_type & LOCK_MAND))
2437 if (fl->fl_type & LOCK_WRITE)
2444 spin_unlock(&inode->i_lock);
2448 EXPORT_SYMBOL(lock_may_write);
2450 static int __init filelock_init(void)
2452 filelock_cache = kmem_cache_create("file_lock_cache",
2453 sizeof(struct file_lock), 0, SLAB_PANIC, NULL);
2458 core_initcall(filelock_init);