/*
 * Provides support for fcntl()'s F_GETLK, F_SETLK, and F_SETLKW calls.
 * Doug Evans (dje@spiff.uucp), August 07, 1992
 *
 * Deadlock detection added.
 * FIXME: one thing isn't handled yet:
 *	- mandatory locks (requires lots of changes elsewhere)
 * Kelly Carmichael (kelly@[142.24.8.65]), September 17, 1994.
 *
 * Miscellaneous edits, and a total rewrite of posix_lock_file() code.
 * Kai Petzke (wpp@marie.physik.tu-berlin.de), 1994
 *
 * Converted file_lock_table to a linked list from an array, which eliminates
 * the limits on how many active file locks are open.
 * Chad Page (pageone@netcom.com), November 27, 1994
 *
 * Removed dependency on file descriptors. dup()'ed file descriptors now
 * get the same locks as the original file descriptors, and a close() on
 * any file descriptor removes ALL the locks on the file for the current
 * process. Since locks still depend on the process id, locks are inherited
 * after an exec() but not after a fork(). This agrees with POSIX, and both
 * BSD and SVR4 practice.
 * Andy Walker (andy@lysaker.kvaerner.no), February 14, 1995
 *
 * Scrapped the free list, which is redundant now that we allocate locks
 * dynamically with kmalloc()/kfree().
 * Andy Walker (andy@lysaker.kvaerner.no), February 21, 1995
 *
 * Implemented two lock personalities - FL_FLOCK and FL_POSIX.
 *
 * FL_POSIX locks are created with calls to fcntl() and lockf() through the
 * fcntl() system call. They have the semantics described above.
 *
 * FL_FLOCK locks are created with calls to flock(), through the flock()
 * system call, which is new. Old C libraries implement flock() via fcntl()
 * and will continue to use the old, broken implementation.
 *
 * FL_FLOCK locks follow the 4.4 BSD flock() semantics. They are associated
 * with a file pointer (filp). As a result they can be shared by a parent
 * process and its children after a fork(). They are removed when the last
 * file descriptor referring to the file pointer is closed (unless explicitly
 * unlocked).
 *
 * FL_FLOCK locks never deadlock; an existing lock is always removed before
 * upgrading from shared to exclusive (or vice versa). When this happens
 * any processes blocked by the current lock are woken up and allowed to
 * run before the new lock is applied.
 * Andy Walker (andy@lysaker.kvaerner.no), June 09, 1995
 *
 * Removed some race conditions in flock_lock_file(), marked other possible
 * races. Just grep for FIXME to see them.
 * Dmitry Gorodchanin (pgmdsg@ibi.com), February 09, 1996.
 *
 * Addressed Dmitry's concerns. Deadlock checking is no longer recursive.
 * Lock allocation changed to GFP_ATOMIC as we can't afford to sleep
 * once we've checked for blocking and deadlocking.
 * Andy Walker (andy@lysaker.kvaerner.no), April 03, 1996.
 *
 * Initial implementation of mandatory locks. SunOS turned out to be
 * a rotten model, so I implemented the "obvious" semantics.
 * See 'Documentation/filesystems/mandatory-locking.txt' for details.
 * Andy Walker (andy@lysaker.kvaerner.no), April 06, 1996.
 *
 * Don't allow mandatory locks on mmap()'ed files. Added simple functions to
 * check if a file has mandatory locks, used by mmap(), open() and creat() to
 * see if the system call should be rejected. Ref. HP-UX/SunOS/Solaris
 * Reference Manual Pages.
 * Andy Walker (andy@lysaker.kvaerner.no), April 09, 1996.
 *
 * Tidied up block list handling. Added '/proc/locks' interface.
 * Andy Walker (andy@lysaker.kvaerner.no), April 24, 1996.
 *
 * Fixed deadlock condition for pathological code that mixes calls to
 * flock() and fcntl().
 * Andy Walker (andy@lysaker.kvaerner.no), April 29, 1996.
 *
 * Allow only one type of locking scheme (FL_POSIX or FL_FLOCK) to be in use
 * for a given file at a time. Changed the CONFIG_LOCK_MANDATORY scheme to
 * guarantee sensible behaviour in the case where file system modules might
 * be compiled with different options than the kernel itself.
 * Andy Walker (andy@lysaker.kvaerner.no), May 15, 1996.
 *
 * Added a couple of missing wake_up() calls. Thanks to Thomas Meckel
 * (Thomas.Meckel@mni.fh-giessen.de) for spotting this.
 * Andy Walker (andy@lysaker.kvaerner.no), May 15, 1996.
 *
 * Changed FL_POSIX locks to use the block list in the same way as FL_FLOCK
 * locks. Changed process synchronisation to avoid dereferencing locks that
 * have already been freed.
 * Andy Walker (andy@lysaker.kvaerner.no), Sep 21, 1996.
 *
 * Made the block list a circular list to minimise searching in the list.
 * Andy Walker (andy@lysaker.kvaerner.no), Sep 25, 1996.
 *
 * Made mandatory locking a mount option. Default is not to allow mandatory
 * locking.
 * Andy Walker (andy@lysaker.kvaerner.no), Oct 04, 1996.
 *
 * Some adaptations for NFS support.
 * Olaf Kirch (okir@monad.swb.de), Dec 1996,
 *
 * Fixed /proc/locks interface so that we can't overrun the buffer we are handed.
 * Andy Walker (andy@lysaker.kvaerner.no), May 12, 1997.
 *
 * Use slab allocator instead of kmalloc/kfree.
 * Use generic list implementation from <linux/list.h>.
 * Sped up posix_locks_deadlock by only considering blocked locks.
 * Matthew Wilcox <willy@debian.org>, March, 2000.
 *
 * Leases and LOCK_MAND
 * Matthew Wilcox <willy@debian.org>, June, 2000.
 * Stephen Rothwell <sfr@canb.auug.org.au>, June, 2000.
 */

#include <linux/capability.h>
#include <linux/file.h>
#include <linux/fdtable.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/security.h>
#include <linux/slab.h>
#include <linux/syscalls.h>
#include <linux/time.h>
#include <linux/rcupdate.h>
#include <linux/pid_namespace.h>

#include <asm/uaccess.h>

#define IS_POSIX(fl)	(fl->fl_flags & FL_POSIX)
#define IS_FLOCK(fl)	(fl->fl_flags & FL_FLOCK)
#define IS_LEASE(fl)	(fl->fl_flags & FL_LEASE)

static bool lease_breaking(struct file_lock *fl)
{
	return fl->fl_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
}

static int target_leasetype(struct file_lock *fl)
{
	if (fl->fl_flags & FL_UNLOCK_PENDING)
		return F_UNLCK;
	if (fl->fl_flags & FL_DOWNGRADE_PENDING)
		return F_RDLCK;
	return fl->fl_type;
}

int leases_enable = 1;
int lease_break_time = 45;

#define for_each_lock(inode, lockp) \
	for (lockp = &inode->i_flock; *lockp != NULL; lockp = &(*lockp)->fl_next)

/*
 * The global file_lock_list is only used for displaying /proc/locks. Protected
 * by the file_lock_lock.
 */
static LIST_HEAD(file_lock_list);

/*
 * The blocked_list is used to find POSIX lock loops for deadlock detection.
 * Protected by file_lock_lock.
 */
static LIST_HEAD(blocked_list);

/*
 * This lock protects the blocked_list and the file_lock_list. Generally, if
 * you're accessing one of those lists, you want to be holding this lock.
 *
 * In addition, it also protects the fl->fl_block list, and the fl->fl_next
 * pointer for file_lock structures that are acting as lock requests (in
 * contrast to those that are acting as records of acquired locks).
 *
 * Note that when we acquire this lock in order to change the above fields,
 * we often hold the i_lock as well. In certain cases, when reading the fields
 * protected by this lock, we can skip acquiring it iff we already hold the
 * i_lock.
 *
 * In particular, adding an entry to the fl_block list requires that you hold
 * both the i_lock and the file_lock_lock (acquired in that order). Deleting
 * an entry from the list however only requires the file_lock_lock.
 */
static DEFINE_SPINLOCK(file_lock_lock);

static struct kmem_cache *filelock_cache __read_mostly;

static void locks_init_lock_heads(struct file_lock *fl)
{
	INIT_LIST_HEAD(&fl->fl_link);
	INIT_LIST_HEAD(&fl->fl_block);
	init_waitqueue_head(&fl->fl_wait);
}

/* Allocate an empty lock structure. */
struct file_lock *locks_alloc_lock(void)
{
	struct file_lock *fl = kmem_cache_zalloc(filelock_cache, GFP_KERNEL);

	if (fl)
		locks_init_lock_heads(fl);

	return fl;
}
EXPORT_SYMBOL_GPL(locks_alloc_lock);

void locks_release_private(struct file_lock *fl)
{
	if (fl->fl_ops) {
		if (fl->fl_ops->fl_release_private)
			fl->fl_ops->fl_release_private(fl);
		fl->fl_ops = NULL;
	}
	fl->fl_lmops = NULL;
}
EXPORT_SYMBOL_GPL(locks_release_private);

/* Free a lock which is not in use. */
void locks_free_lock(struct file_lock *fl)
{
	BUG_ON(waitqueue_active(&fl->fl_wait));
	BUG_ON(!list_empty(&fl->fl_block));
	BUG_ON(!list_empty(&fl->fl_link));

	locks_release_private(fl);
	kmem_cache_free(filelock_cache, fl);
}
EXPORT_SYMBOL(locks_free_lock);

void locks_init_lock(struct file_lock *fl)
{
	memset(fl, 0, sizeof(struct file_lock));
	locks_init_lock_heads(fl);
}
EXPORT_SYMBOL(locks_init_lock);

static void locks_copy_private(struct file_lock *new, struct file_lock *fl)
{
	if (fl->fl_ops) {
		if (fl->fl_ops->fl_copy_lock)
			fl->fl_ops->fl_copy_lock(new, fl);
		new->fl_ops = fl->fl_ops;
	}
	if (fl->fl_lmops)
		new->fl_lmops = fl->fl_lmops;
}

/*
 * Initialize a new lock from an existing file_lock structure.
 */
void __locks_copy_lock(struct file_lock *new, const struct file_lock *fl)
{
	new->fl_owner = fl->fl_owner;
	new->fl_pid = fl->fl_pid;
	new->fl_file = NULL;
	new->fl_flags = fl->fl_flags;
	new->fl_type = fl->fl_type;
	new->fl_start = fl->fl_start;
	new->fl_end = fl->fl_end;
	new->fl_ops = NULL;
	new->fl_lmops = NULL;
}
EXPORT_SYMBOL(__locks_copy_lock);

void locks_copy_lock(struct file_lock *new, struct file_lock *fl)
{
	locks_release_private(new);

	__locks_copy_lock(new, fl);
	new->fl_file = fl->fl_file;
	new->fl_ops = fl->fl_ops;
	new->fl_lmops = fl->fl_lmops;

	locks_copy_private(new, fl);
}
EXPORT_SYMBOL(locks_copy_lock);

static inline int flock_translate_cmd(int cmd) {
	if (cmd & LOCK_MAND)
		return cmd & (LOCK_MAND | LOCK_RW);
	switch (cmd) {
	case LOCK_SH:
		return F_RDLCK;
	case LOCK_EX:
		return F_WRLCK;
	case LOCK_UN:
		return F_UNLCK;
	}
	return -EINVAL;
}

/* Fill in a file_lock structure with an appropriate FLOCK lock. */
static int flock_make_lock(struct file *filp, struct file_lock **lock,
		unsigned int cmd)
{
	struct file_lock *fl;
	int type = flock_translate_cmd(cmd);
	if (type < 0)
		return type;

	fl = locks_alloc_lock();
	if (fl == NULL)
		return -ENOMEM;

	fl->fl_file = filp;
	fl->fl_pid = current->tgid;
	fl->fl_flags = FL_FLOCK;
	fl->fl_type = type;
	fl->fl_end = OFFSET_MAX;

	*lock = fl;
	return 0;
}

static int assign_type(struct file_lock *fl, long type)
{
	switch (type) {
	case F_RDLCK:
	case F_WRLCK:
	case F_UNLCK:
		fl->fl_type = type;
		break;
	default:
		return -EINVAL;
	}
	return 0;
}

/* Verify a "struct flock" and copy it to a "struct file_lock" as a POSIX
 * style lock.
 */
static int flock_to_posix_lock(struct file *filp, struct file_lock *fl,
			       struct flock *l)
{
	off_t start, end;

	switch (l->l_whence) {
	case SEEK_SET:
		start = 0;
		break;
	case SEEK_CUR:
		start = filp->f_pos;
		break;
	case SEEK_END:
		start = i_size_read(file_inode(filp));
		break;
	default:
		return -EINVAL;
	}

	/* POSIX-1996 leaves the case l->l_len < 0 undefined;
	   POSIX-2001 defines it. */
	start += l->l_start;
	if (start < 0)
		return -EINVAL;
	fl->fl_end = OFFSET_MAX;
	if (l->l_len > 0) {
		end = start + l->l_len - 1;
		fl->fl_end = end;
	} else if (l->l_len < 0) {
		end = start - 1;
		fl->fl_end = end;
		start += l->l_len;
		if (start < 0)
			return -EINVAL;
	}
	fl->fl_start = start;	/* we record the absolute position */
	if (fl->fl_end < fl->fl_start)
		return -EOVERFLOW;

	fl->fl_owner = current->files;
	fl->fl_pid = current->tgid;
	fl->fl_file = filp;
	fl->fl_flags = FL_POSIX;
	fl->fl_ops = NULL;
	fl->fl_lmops = NULL;

	return assign_type(fl, l->l_type);
}

#if BITS_PER_LONG == 32
static int flock64_to_posix_lock(struct file *filp, struct file_lock *fl,
				 struct flock64 *l)
{
	loff_t start;

	switch (l->l_whence) {
	case SEEK_SET:
		start = 0;
		break;
	case SEEK_CUR:
		start = filp->f_pos;
		break;
	case SEEK_END:
		start = i_size_read(file_inode(filp));
		break;
	default:
		return -EINVAL;
	}

	start += l->l_start;
	if (start < 0)
		return -EINVAL;
	fl->fl_end = OFFSET_MAX;
	if (l->l_len > 0) {
		fl->fl_end = start + l->l_len - 1;
	} else if (l->l_len < 0) {
		fl->fl_end = start - 1;
		start += l->l_len;
		if (start < 0)
			return -EINVAL;
	}
	fl->fl_start = start;	/* we record the absolute position */
	if (fl->fl_end < fl->fl_start)
		return -EOVERFLOW;

	fl->fl_owner = current->files;
	fl->fl_pid = current->tgid;
	fl->fl_file = filp;
	fl->fl_flags = FL_POSIX;
	fl->fl_ops = NULL;
	fl->fl_lmops = NULL;

	return assign_type(fl, l->l_type);
}
#endif

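/*
 * Illustrative userspace sketch (not part of the kernel build) of the
 * l_whence/l_len handling above: a struct flock can describe a range
 * relative to the current file position, and, per POSIX-2001, a negative
 * l_len locks the l_len bytes *preceding* the computed start:
 *
 *	struct flock fl = {
 *		.l_type   = F_WRLCK,
 *		.l_whence = SEEK_CUR,
 *		.l_start  = 0,
 *		.l_len    = -100,	// the 100 bytes before the file pos
 *	};
 *	fcntl(fd, F_SETLK, &fl);
 */
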
/* default lease lock manager operations */
static void lease_break_callback(struct file_lock *fl)
{
	kill_fasync(&fl->fl_fasync, SIGIO, POLL_MSG);
}

static const struct lock_manager_operations lease_manager_ops = {
	.lm_break = lease_break_callback,
	.lm_change = lease_modify,
};

/*
 * Initialize a lease, use the default lock manager operations
 */
static int lease_init(struct file *filp, long type, struct file_lock *fl)
{
	if (assign_type(fl, type) != 0)
		return -EINVAL;

	fl->fl_owner = current->files;
	fl->fl_pid = current->tgid;

	fl->fl_file = filp;
	fl->fl_flags = FL_LEASE;
	fl->fl_start = 0;
	fl->fl_end = OFFSET_MAX;
	fl->fl_ops = NULL;
	fl->fl_lmops = &lease_manager_ops;
	return 0;
}

/* Allocate a file_lock initialised to this type of lease */
static struct file_lock *lease_alloc(struct file *filp, long type)
{
	struct file_lock *fl = locks_alloc_lock();
	int error = -ENOMEM;

	if (fl == NULL)
		return ERR_PTR(error);

	error = lease_init(filp, type, fl);
	if (error) {
		locks_free_lock(fl);
		return ERR_PTR(error);
	}
	return fl;
}

/* Check if two locks overlap each other.
 */
static inline int locks_overlap(struct file_lock *fl1, struct file_lock *fl2)
{
	return ((fl1->fl_end >= fl2->fl_start) &&
		(fl2->fl_end >= fl1->fl_start));
}

/*
 * Check whether two locks have the same owner.
 */
static int posix_same_owner(struct file_lock *fl1, struct file_lock *fl2)
{
	if (fl1->fl_lmops && fl1->fl_lmops->lm_compare_owner)
		return fl2->fl_lmops == fl1->fl_lmops &&
			fl1->fl_lmops->lm_compare_owner(fl1, fl2);
	return fl1->fl_owner == fl2->fl_owner;
}

static void
locks_insert_global_locks(struct file_lock *fl)
{
	spin_lock(&file_lock_lock);
	list_add_tail(&fl->fl_link, &file_lock_list);
	spin_unlock(&file_lock_lock);
}

static void
locks_delete_global_locks(struct file_lock *fl)
{
	spin_lock(&file_lock_lock);
	list_del_init(&fl->fl_link);
	spin_unlock(&file_lock_lock);
}

/* Must be called with the file_lock_lock held! */
static void
locks_insert_global_blocked(struct file_lock *waiter)
{
	list_add(&waiter->fl_link, &blocked_list);
}

/* Must be called with the file_lock_lock held! */
static void
locks_delete_global_blocked(struct file_lock *waiter)
{
	list_del_init(&waiter->fl_link);
}

/* Remove waiter from blocker's block list.
 * When blocker ends up pointing to itself then the list is empty.
 *
 * Must be called with file_lock_lock held.
 */
static void __locks_delete_block(struct file_lock *waiter)
{
	locks_delete_global_blocked(waiter);
	list_del_init(&waiter->fl_block);
	waiter->fl_next = NULL;
}

static void locks_delete_block(struct file_lock *waiter)
{
	spin_lock(&file_lock_lock);
	__locks_delete_block(waiter);
	spin_unlock(&file_lock_lock);
}

/* Insert waiter into blocker's block list.
 * We use a circular list so that processes can be easily woken up in
 * the order they blocked. The documentation doesn't require this but
 * it seems like the reasonable thing to do.
 *
 * Must be called with file_lock_lock held!
 */
static void __locks_insert_block(struct file_lock *blocker,
				 struct file_lock *waiter)
{
	BUG_ON(!list_empty(&waiter->fl_block));
	waiter->fl_next = blocker;
	list_add_tail(&waiter->fl_block, &blocker->fl_block);
	if (IS_POSIX(blocker))
		locks_insert_global_blocked(waiter);
}

/* Must be called with i_lock held. */
static void locks_insert_block(struct file_lock *blocker,
			       struct file_lock *waiter)
{
	spin_lock(&file_lock_lock);
	__locks_insert_block(blocker, waiter);
	spin_unlock(&file_lock_lock);
}

/*
 * Wake up processes blocked waiting for blocker.
 *
 * Must be called with the inode->i_lock held!
 */
static void locks_wake_up_blocks(struct file_lock *blocker)
{
	spin_lock(&file_lock_lock);
	while (!list_empty(&blocker->fl_block)) {
		struct file_lock *waiter;

		waiter = list_first_entry(&blocker->fl_block,
					  struct file_lock, fl_block);
		__locks_delete_block(waiter);
		if (waiter->fl_lmops && waiter->fl_lmops->lm_notify)
			waiter->fl_lmops->lm_notify(waiter);
		else
			wake_up(&waiter->fl_wait);
	}
	spin_unlock(&file_lock_lock);
}

/* Insert file lock fl into an inode's lock list at the position indicated
 * by pos. At the same time add the lock to the global file lock list.
 *
 * Must be called with the i_lock held!
 */
static void locks_insert_lock(struct file_lock **pos, struct file_lock *fl)
{
	fl->fl_nspid = get_pid(task_tgid(current));

	/* insert into file's list */
	fl->fl_next = *pos;
	*pos = fl;

	locks_insert_global_locks(fl);
}

/*
 * Delete a lock and then free it.
 * Wake up processes that are blocked waiting for this lock,
 * notify the FS that the lock has been cleared and
 * finally free the lock.
 *
 * Must be called with the i_lock held!
 */
static void locks_delete_lock(struct file_lock **thisfl_p)
{
	struct file_lock *fl = *thisfl_p;

	locks_delete_global_locks(fl);

	*thisfl_p = fl->fl_next;
	fl->fl_next = NULL;

	if (fl->fl_nspid) {
		put_pid(fl->fl_nspid);
		fl->fl_nspid = NULL;
	}

	locks_wake_up_blocks(fl);
	locks_free_lock(fl);
}

/* Determine if lock sys_fl blocks lock caller_fl. Common functionality
 * checks for shared/exclusive status of overlapping locks.
 */
static int locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
{
	if (sys_fl->fl_type == F_WRLCK)
		return 1;
	if (caller_fl->fl_type == F_WRLCK)
		return 1;
	return 0;
}

/* Determine if lock sys_fl blocks lock caller_fl. POSIX specific
 * checking before calling locks_conflict().
 */
static int posix_locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
{
	/* POSIX locks owned by the same process do not conflict with
	 * each other.
	 */
	if (!IS_POSIX(sys_fl) || posix_same_owner(caller_fl, sys_fl))
		return 0;

	/* Check whether they overlap */
	if (!locks_overlap(caller_fl, sys_fl))
		return 0;

	return locks_conflict(caller_fl, sys_fl);
}

/* Determine if lock sys_fl blocks lock caller_fl. FLOCK specific
 * checking before calling locks_conflict().
 */
static int flock_locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
{
	/* FLOCK locks referring to the same filp do not conflict with
	 * each other.
	 */
	if (!IS_FLOCK(sys_fl) || (caller_fl->fl_file == sys_fl->fl_file))
		return 0;
	if ((caller_fl->fl_type & LOCK_MAND) || (sys_fl->fl_type & LOCK_MAND))
		return 0;

	return locks_conflict(caller_fl, sys_fl);
}

void
posix_test_lock(struct file *filp, struct file_lock *fl)
{
	struct file_lock *cfl;
	struct inode *inode = file_inode(filp);

	spin_lock(&inode->i_lock);
	for (cfl = file_inode(filp)->i_flock; cfl; cfl = cfl->fl_next) {
		if (!IS_POSIX(cfl))
			continue;
		if (posix_locks_conflict(fl, cfl))
			break;
	}
	if (cfl) {
		__locks_copy_lock(fl, cfl);
		if (cfl->fl_nspid)
			fl->fl_pid = pid_vnr(cfl->fl_nspid);
	} else
		fl->fl_type = F_UNLCK;
	spin_unlock(&inode->i_lock);
	return;
}
EXPORT_SYMBOL(posix_test_lock);

/*
 * Deadlock detection:
 *
 * We attempt to detect deadlocks that are due purely to posix file
 * locks.
 *
 * We assume that a task can be waiting for at most one lock at a time.
 * So for any acquired lock, the process holding that lock may be
 * waiting on at most one other lock. That lock in turn may be held by
 * someone waiting for at most one other lock. Given a requested lock
 * caller_fl which is about to wait for a conflicting lock block_fl, we
 * follow this chain of waiters to ensure we are not about to create a
 * cycle.
 *
 * Since we do this before we ever put a process to sleep on a lock, we
 * are ensured that there is never a cycle; that is what guarantees that
 * the while() loop in posix_locks_deadlock() eventually completes.
 *
 * Note: the above assumption may not be true when handling lock
 * requests from a broken NFS client. It may also fail in the presence
 * of tasks (such as posix threads) sharing the same open file table.
 *
 * To handle those cases, we just bail out after a few iterations.
 */

#define MAX_DEADLK_ITERATIONS 10

/* Find a lock that the owner of the given block_fl is blocking on. */
static struct file_lock *what_owner_is_waiting_for(struct file_lock *block_fl)
{
	struct file_lock *fl;

	list_for_each_entry(fl, &blocked_list, fl_link) {
		if (posix_same_owner(fl, block_fl))
			return fl->fl_next;
	}
	return NULL;
}

/* Must be called with the file_lock_lock held! */
static int posix_locks_deadlock(struct file_lock *caller_fl,
				struct file_lock *block_fl)
{
	int i = 0;

	while ((block_fl = what_owner_is_waiting_for(block_fl))) {
		if (i++ > MAX_DEADLK_ITERATIONS)
			return 0;
		if (posix_same_owner(caller_fl, block_fl))
			return 1;
	}
	return 0;
}

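/*
 * Illustrative userspace sketch (not kernel code) of the cycle this detects:
 * if process A holds a lock on [0,99] and blocks waiting for [100,199],
 * while process B holds [100,199] and then requests [0,99] with F_SETLKW,
 * following the chain of waiters above finds A waiting on B waiting on A,
 * so B's fcntl() fails with EDEADLK instead of sleeping forever:
 *
 *	struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
 *			    .l_start = 0, .l_len = 100 };
 *	if (fcntl(fd, F_SETLKW, &fl) == -1 && errno == EDEADLK)
 *		back_off_and_retry();	// hypothetical recovery helper
 */
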
/* Try to create a FLOCK lock on filp. We always insert new FLOCK locks
 * after any leases, but before any posix locks.
 *
 * Note that if called with an FL_EXISTS argument, the caller may determine
 * whether or not a lock was successfully freed by testing the return
 * value for -ENOENT.
 */
static int flock_lock_file(struct file *filp, struct file_lock *request)
{
	struct file_lock *new_fl = NULL;
	struct file_lock **before;
	struct inode * inode = file_inode(filp);
	int error = 0;
	int found = 0;

	if (!(request->fl_flags & FL_ACCESS) && (request->fl_type != F_UNLCK)) {
		new_fl = locks_alloc_lock();
		if (!new_fl)
			return -ENOMEM;
	}

	spin_lock(&inode->i_lock);
	if (request->fl_flags & FL_ACCESS)
		goto find_conflict;

	for_each_lock(inode, before) {
		struct file_lock *fl = *before;
		if (IS_POSIX(fl))
			break;
		if (IS_LEASE(fl))
			continue;
		if (filp != fl->fl_file)
			continue;
		if (request->fl_type == fl->fl_type)
			goto out;
		found = 1;
		locks_delete_lock(before);
		break;
	}

	if (request->fl_type == F_UNLCK) {
		if ((request->fl_flags & FL_EXISTS) && !found)
			error = -ENOENT;
		goto out;
	}

	/*
	 * If a higher-priority process was blocked on the old file lock,
	 * give it the opportunity to lock the file.
	 */
	if (found) {
		spin_unlock(&inode->i_lock);
		cond_resched();
		spin_lock(&inode->i_lock);
	}

find_conflict:
	for_each_lock(inode, before) {
		struct file_lock *fl = *before;
		if (IS_POSIX(fl))
			break;
		if (IS_LEASE(fl))
			continue;
		if (!flock_locks_conflict(request, fl))
			continue;
		error = -EAGAIN;
		if (!(request->fl_flags & FL_SLEEP))
			goto out;
		error = FILE_LOCK_DEFERRED;
		locks_insert_block(fl, request);
		goto out;
	}
	if (request->fl_flags & FL_ACCESS)
		goto out;
	locks_copy_lock(new_fl, request);
	locks_insert_lock(before, new_fl);
	new_fl = NULL;
	error = 0;

out:
	spin_unlock(&inode->i_lock);
	if (new_fl)
		locks_free_lock(new_fl);
	return error;
}

static int __posix_lock_file(struct inode *inode, struct file_lock *request, struct file_lock *conflock)
{
	struct file_lock *fl;
	struct file_lock *new_fl = NULL;
	struct file_lock *new_fl2 = NULL;
	struct file_lock *left = NULL;
	struct file_lock *right = NULL;
	struct file_lock **before;
	int error;
	bool added = false;

	/*
	 * We may need two file_lock structures for this operation,
	 * so we get them in advance to avoid races.
	 *
	 * In some cases we can be sure, that no new locks will be needed
	 */
	if (!(request->fl_flags & FL_ACCESS) &&
	    (request->fl_type != F_UNLCK ||
	     request->fl_start != 0 || request->fl_end != OFFSET_MAX)) {
		new_fl = locks_alloc_lock();
		new_fl2 = locks_alloc_lock();
	}

	spin_lock(&inode->i_lock);
	/*
	 * New lock request. Walk all POSIX locks and look for conflicts. If
	 * there are any, either return error or put the request on the
	 * blocker's list of waiters and the global blocked_list.
	 */
	if (request->fl_type != F_UNLCK) {
		for_each_lock(inode, before) {
			fl = *before;
			if (!IS_POSIX(fl))
				continue;
			if (!posix_locks_conflict(request, fl))
				continue;
			if (conflock)
				__locks_copy_lock(conflock, fl);
			error = -EAGAIN;
			if (!(request->fl_flags & FL_SLEEP))
				goto out;
			/*
			 * Deadlock detection and insertion into the blocked
			 * locks list must be done while holding the same lock!
			 */
			error = -EDEADLK;
			spin_lock(&file_lock_lock);
			if (likely(!posix_locks_deadlock(request, fl))) {
				error = FILE_LOCK_DEFERRED;
				__locks_insert_block(fl, request);
			}
			spin_unlock(&file_lock_lock);
			goto out;
		}
	}

	/* If we're just looking for a conflict, we're done. */
	error = 0;
	if (request->fl_flags & FL_ACCESS)
		goto out;

	/*
	 * Find the first old lock with the same owner as the new lock.
	 */

	before = &inode->i_flock;

	/* First skip locks owned by other processes. */
	while ((fl = *before) && (!IS_POSIX(fl) ||
				  !posix_same_owner(request, fl))) {
		before = &fl->fl_next;
	}

	/* Process locks with this owner. */
	while ((fl = *before) && posix_same_owner(request, fl)) {
		/* Detect adjacent or overlapping regions (if same lock type)
		 */
		if (request->fl_type == fl->fl_type) {
			/* In all comparisons of start vs end, use
			 * "start - 1" rather than "end + 1". If end
			 * is OFFSET_MAX, end + 1 will become negative.
			 */
			if (fl->fl_end < request->fl_start - 1)
				goto next_lock;
			/* If the next lock in the list has entirely bigger
			 * addresses than the new one, insert the lock here.
			 */
			if (fl->fl_start - 1 > request->fl_end)
				break;

			/* If we come here, the new and old lock are of the
			 * same type and adjacent or overlapping. Make one
			 * lock yielding from the lower start address of both
			 * locks to the higher end address.
			 */
			if (fl->fl_start > request->fl_start)
				fl->fl_start = request->fl_start;
			else
				request->fl_start = fl->fl_start;
			if (fl->fl_end < request->fl_end)
				fl->fl_end = request->fl_end;
			else
				request->fl_end = fl->fl_end;
			if (added) {
				locks_delete_lock(before);
				continue;
			}
			request = fl;
			added = true;
		}
		else {
			/* Processing for different lock types is a bit
			 * more complex.
			 */
			if (fl->fl_end < request->fl_start)
				goto next_lock;
			if (fl->fl_start > request->fl_end)
				break;
			if (request->fl_type == F_UNLCK)
				added = true;
			if (fl->fl_start < request->fl_start)
				left = fl;
			/* If the next lock in the list has a higher end
			 * address than the new one, insert the new one here.
			 */
			if (fl->fl_end > request->fl_end) {
				right = fl;
				break;
			}
			if (fl->fl_start >= request->fl_start) {
				/* The new lock completely replaces an old
				 * one (This may happen several times).
				 */
				if (added) {
					locks_delete_lock(before);
					continue;
				}
				/* Replace the old lock with the new one.
				 * Wake up anybody waiting for the old one,
				 * as the change in lock type might satisfy
				 * their needs.
				 */
				locks_wake_up_blocks(fl);
				fl->fl_start = request->fl_start;
				fl->fl_end = request->fl_end;
				fl->fl_type = request->fl_type;
				locks_release_private(fl);
				locks_copy_private(fl, request);
				request = fl;
				added = true;
			}
		}
		/* Go on to next lock.
		 */
	next_lock:
		before = &fl->fl_next;
	}

	/*
	 * The above code only modifies existing locks in case of merging or
	 * replacing. If new lock(s) need to be inserted all modifications are
	 * done below this, so it's safe yet to bail out.
	 */
	error = -ENOLCK; /* "no luck" */
	if (right && left == right && !new_fl2)
		goto out;

	error = 0;
	if (!added) {
		if (request->fl_type == F_UNLCK) {
			if (request->fl_flags & FL_EXISTS)
				error = -ENOENT;
			goto out;
		}

		if (!new_fl) {
			error = -ENOLCK;
			goto out;
		}
		locks_copy_lock(new_fl, request);
		locks_insert_lock(before, new_fl);
		new_fl = NULL;
	}
	if (right) {
		if (left == right) {
			/* The new lock breaks the old one in two pieces,
			 * so we have to use the second new lock.
			 */
			left = new_fl2;
			new_fl2 = NULL;
			locks_copy_lock(left, right);
			locks_insert_lock(before, left);
		}
		right->fl_start = request->fl_end + 1;
		locks_wake_up_blocks(right);
	}
	if (left) {
		left->fl_end = request->fl_start - 1;
		locks_wake_up_blocks(left);
	}
out:
	spin_unlock(&inode->i_lock);
	/*
	 * Free any unused locks.
	 */
	if (new_fl)
		locks_free_lock(new_fl);
	if (new_fl2)
		locks_free_lock(new_fl2);
	return error;
}

/**
 * posix_lock_file - Apply a POSIX-style lock to a file
 * @filp: The file to apply the lock to
 * @fl: The lock to be applied
 * @conflock: Place to return a copy of the conflicting lock, if found.
 *
 * Add a POSIX style lock to a file.
 * We merge adjacent & overlapping locks whenever possible.
 * POSIX locks are sorted by owner task, then by starting address
 *
 * Note that if called with an FL_EXISTS argument, the caller may determine
 * whether or not a lock was successfully freed by testing the return
 * value for -ENOENT.
 */
int posix_lock_file(struct file *filp, struct file_lock *fl,
			struct file_lock *conflock)
{
	return __posix_lock_file(file_inode(filp), fl, conflock);
}
EXPORT_SYMBOL(posix_lock_file);

/**
 * posix_lock_file_wait - Apply a POSIX-style lock to a file
 * @filp: The file to apply the lock to
 * @fl: The lock to be applied
 *
 * Add a POSIX style lock to a file.
 * We merge adjacent & overlapping locks whenever possible.
 * POSIX locks are sorted by owner task, then by starting address
 */
int posix_lock_file_wait(struct file *filp, struct file_lock *fl)
{
	int error;
	might_sleep();
	for (;;) {
		error = posix_lock_file(filp, fl, NULL);
		if (error != FILE_LOCK_DEFERRED)
			break;
		error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
		if (!error)
			continue;

		locks_delete_block(fl);
		break;
	}
	return error;
}
EXPORT_SYMBOL(posix_lock_file_wait);

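/*
 * Example of the merging described above (illustrative userspace sketch,
 * not kernel code): two adjacent write locks taken by the same owner are
 * coalesced into one record, so /proc/locks shows a single lock covering
 * bytes 0-199 rather than two:
 *
 *	struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
 *			    .l_start = 0, .l_len = 100 };
 *	fcntl(fd, F_SETLK, &fl);	// locks [0,99]
 *	fl.l_start = 100;
 *	fcntl(fd, F_SETLK, &fl);	// locks [100,199], merged with the first
 */
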
/**
 * locks_mandatory_locked - Check for an active lock
 * @inode: the file to check
 *
 * Searches the inode's list of locks to find any POSIX locks which conflict.
 * This function is called from locks_verify_locked() only.
 */
int locks_mandatory_locked(struct inode *inode)
{
	fl_owner_t owner = current->files;
	struct file_lock *fl;

	/*
	 * Search the lock list for this inode for any POSIX locks.
	 */
	spin_lock(&inode->i_lock);
	for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
		if (!IS_POSIX(fl))
			continue;
		if (fl->fl_owner != owner)
			break;
	}
	spin_unlock(&inode->i_lock);
	return fl ? -EAGAIN : 0;
}

/**
 * locks_mandatory_area - Check for a conflicting lock
 * @read_write: %FLOCK_VERIFY_WRITE for exclusive access, %FLOCK_VERIFY_READ
 *		for shared
 * @inode:      the file to check
 * @filp:       how the file was opened (if it was)
 * @offset:     start of area to check
 * @count:      length of area to check
 *
 * Searches the inode's list of locks to find any POSIX locks which conflict.
 * This function is called from rw_verify_area() and
 * locks_verify_truncate().
 */
int locks_mandatory_area(int read_write, struct inode *inode,
			 struct file *filp, loff_t offset,
			 size_t count)
{
	struct file_lock fl;
	int error;

	locks_init_lock(&fl);
	fl.fl_owner = current->files;
	fl.fl_pid = current->tgid;
	fl.fl_file = filp;
	fl.fl_flags = FL_POSIX | FL_ACCESS;
	if (filp && !(filp->f_flags & O_NONBLOCK))
		fl.fl_flags |= FL_SLEEP;
	fl.fl_type = (read_write == FLOCK_VERIFY_WRITE) ? F_WRLCK : F_RDLCK;
	fl.fl_start = offset;
	fl.fl_end = offset + count - 1;

	for (;;) {
		error = __posix_lock_file(inode, &fl, NULL);
		if (error != FILE_LOCK_DEFERRED)
			break;
		error = wait_event_interruptible(fl.fl_wait, !fl.fl_next);
		if (!error) {
			/*
			 * If we've been sleeping someone might have
			 * changed the permissions behind our back.
			 */
			if (__mandatory_lock(inode))
				continue;
		}

		locks_delete_block(&fl);
		break;
	}

	return error;
}
EXPORT_SYMBOL(locks_mandatory_area);

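/*
 * For reference (admin-side setup; see the mandatory-locking document cited
 * in the changelog above): mandatory locking only takes effect when the
 * filesystem is mounted with "-o mand" and the file has the setgid bit set
 * with group execute cleared, e.g. (illustrative shell commands):
 *
 *	mount -o mand /dev/sda1 /mnt
 *	chmod g+s,g-x /mnt/file		# mark the file for mandatory locking
 *
 * After that, read()/write() regions that conflict with a held POSIX lock
 * are refused (with O_NONBLOCK) or block here until the lock is released.
 */
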
static void lease_clear_pending(struct file_lock *fl, int arg)
{
	switch (arg) {
	case F_UNLCK:
		fl->fl_flags &= ~FL_UNLOCK_PENDING;
		/* fall through */
	case F_RDLCK:
		fl->fl_flags &= ~FL_DOWNGRADE_PENDING;
	}
}

/* We already had a lease on this file; just change its type */
int lease_modify(struct file_lock **before, int arg)
{
	struct file_lock *fl = *before;
	int error = assign_type(fl, arg);

	if (error)
		return error;
	lease_clear_pending(fl, arg);
	locks_wake_up_blocks(fl);
	if (arg == F_UNLCK) {
		struct file *filp = fl->fl_file;

		f_delown(filp);
		filp->f_owner.signum = 0;
		fasync_helper(0, fl->fl_file, 0, &fl->fl_fasync);
		if (fl->fl_fasync != NULL) {
			printk(KERN_ERR "locks_delete_lock: fasync == %p\n", fl->fl_fasync);
			fl->fl_fasync = NULL;
		}
		locks_delete_lock(before);
	}
	return 0;
}
EXPORT_SYMBOL(lease_modify);

static bool past_time(unsigned long then)
{
	if (!then)
		/* 0 is a special value meaning "this never expires": */
		return false;
	return time_after(jiffies, then);
}

static void time_out_leases(struct inode *inode)
{
	struct file_lock **before;
	struct file_lock *fl;

	before = &inode->i_flock;
	while ((fl = *before) && IS_LEASE(fl) && lease_breaking(fl)) {
		if (past_time(fl->fl_downgrade_time))
			lease_modify(before, F_RDLCK);
		if (past_time(fl->fl_break_time))
			lease_modify(before, F_UNLCK);
		if (fl == *before)	/* lease_modify may have freed fl */
			before = &fl->fl_next;
	}
}

/**
 * __break_lease - revoke all outstanding leases on file
 * @inode: the inode of the file to return
 * @mode: the open mode (read or write)
 *
 * break_lease (inlined for speed) has checked there already is at least
 * some kind of lock (maybe a lease) on this file. Leases are broken on
 * a call to open() or truncate(). This function can sleep unless you
 * specified %O_NONBLOCK to your open().
 */
int __break_lease(struct inode *inode, unsigned int mode)
{
	int error = 0;
	struct file_lock *new_fl, *flock;
	struct file_lock *fl;
	unsigned long break_time;
	int i_have_this_lease = 0;
	int want_write = (mode & O_ACCMODE) != O_RDONLY;

	new_fl = lease_alloc(NULL, want_write ? F_WRLCK : F_RDLCK);
	if (IS_ERR(new_fl))
		return PTR_ERR(new_fl);

	spin_lock(&inode->i_lock);

	time_out_leases(inode);

	flock = inode->i_flock;
	if ((flock == NULL) || !IS_LEASE(flock))
		goto out;

	if (!locks_conflict(flock, new_fl))
		goto out;

	for (fl = flock; fl && IS_LEASE(fl); fl = fl->fl_next)
		if (fl->fl_owner == current->files)
			i_have_this_lease = 1;

	break_time = 0;
	if (lease_break_time > 0) {
		break_time = jiffies + lease_break_time * HZ;
		if (break_time == 0)
			break_time++;	/* so that 0 means no break time */
	}

	for (fl = flock; fl && IS_LEASE(fl); fl = fl->fl_next) {
		if (want_write) {
			if (fl->fl_flags & FL_UNLOCK_PENDING)
				continue;
			fl->fl_flags |= FL_UNLOCK_PENDING;
			fl->fl_break_time = break_time;
		} else {
			if (lease_breaking(flock))
				continue;
			fl->fl_flags |= FL_DOWNGRADE_PENDING;
			fl->fl_downgrade_time = break_time;
		}
		fl->fl_lmops->lm_break(fl);
	}

	if (i_have_this_lease || (mode & O_NONBLOCK)) {
		error = -EWOULDBLOCK;
		goto out;
	}

restart:
	break_time = flock->fl_break_time;
	if (break_time != 0) {
		break_time -= jiffies;
		if (break_time == 0)
			break_time++;
	}
	locks_insert_block(flock, new_fl);
	spin_unlock(&inode->i_lock);
	error = wait_event_interruptible_timeout(new_fl->fl_wait,
						 !new_fl->fl_next, break_time);
	spin_lock(&inode->i_lock);
	locks_delete_block(new_fl);
	if (error >= 0) {
		if (error == 0)
			time_out_leases(inode);
		/*
		 * Wait for the next conflicting lease that has not been
		 * broken yet
		 */
		for (flock = inode->i_flock; flock && IS_LEASE(flock);
				flock = flock->fl_next) {
			if (locks_conflict(new_fl, flock))
				goto restart;
		}
		error = 0;
	}

out:
	spin_unlock(&inode->i_lock);
	locks_free_lock(new_fl);
	return error;
}
EXPORT_SYMBOL(__break_lease);

/**
 * lease_get_mtime - get the last modified time of an inode
 * @inode: the inode
 * @time: pointer to a timespec which will contain the last modified time
 *
 * This is to force NFS clients to flush their caches for files with
 * exclusive leases. The justification is that if someone has an
 * exclusive lease, then they could be modifying it.
 */
void lease_get_mtime(struct inode *inode, struct timespec *time)
{
	struct file_lock *flock = inode->i_flock;
	if (flock && IS_LEASE(flock) && (flock->fl_type == F_WRLCK))
		*time = current_fs_time(inode->i_sb);
	else
		*time = inode->i_mtime;
}
EXPORT_SYMBOL(lease_get_mtime);

/**
 * fcntl_getlease - Enquire what lease is currently active
 * @filp: the file
 *
 * The value returned by this function will be one of
 * (if no lease break is pending):
 *
 * %F_RDLCK to indicate a shared lease is held.
 *
 * %F_WRLCK to indicate an exclusive lease is held.
 *
 * %F_UNLCK to indicate no lease is held.
 *
 * (if a lease break is pending):
 *
 * %F_RDLCK to indicate an exclusive lease needs to be
 * changed to a shared lease (or removed).
 *
 * %F_UNLCK to indicate the lease needs to be removed.
 *
 * XXX: sfr & willy disagree over whether F_INPROGRESS
 * should be returned to userspace.
 */
int fcntl_getlease(struct file *filp)
{
	struct file_lock *fl;
	struct inode *inode = file_inode(filp);
	int type = F_UNLCK;

	spin_lock(&inode->i_lock);
	time_out_leases(file_inode(filp));
	for (fl = file_inode(filp)->i_flock; fl && IS_LEASE(fl);
			fl = fl->fl_next) {
		if (fl->fl_file == filp) {
			type = target_leasetype(fl);
			break;
		}
	}
	spin_unlock(&inode->i_lock);
	return type;
}

static int generic_add_lease(struct file *filp, long arg, struct file_lock **flp)
{
	struct file_lock *fl, **before, **my_before = NULL, *lease;
	struct dentry *dentry = filp->f_path.dentry;
	struct inode *inode = dentry->d_inode;
	int error;

	lease = *flp;

	error = -EAGAIN;
	if ((arg == F_RDLCK) && (atomic_read(&inode->i_writecount) > 0))
		goto out;
	if ((arg == F_WRLCK)
	    && ((dentry->d_count > 1)
		|| (atomic_read(&inode->i_count) > 1)))
		goto out;

	/*
	 * At this point, we know that if there is an exclusive
	 * lease on this file, then we hold it on this filp
	 * (otherwise our open of this file would have blocked).
	 * And if we are trying to acquire an exclusive lease,
	 * then the file is not open by anyone (including us)
	 * except for this filp.
	 */
	for (before = &inode->i_flock;
			((fl = *before) != NULL) && IS_LEASE(fl);
			before = &fl->fl_next) {
		if (fl->fl_file == filp) {
			my_before = before;
			continue;
		}
		/*
		 * No exclusive leases if someone else has a lease on
		 * this file:
		 */
		if (arg == F_WRLCK)
			goto out;
		/*
		 * Modifying our existing lease is OK, but no getting a
		 * new lease if someone else is opening for write:
		 */
		if (fl->fl_flags & FL_UNLOCK_PENDING)
			goto out;
	}

	if (my_before != NULL) {
		error = lease->fl_lmops->lm_change(my_before, arg);
		if (!error)
			*flp = *my_before;
		goto out;
	}

	error = -EINVAL;
	if (!leases_enable)
		goto out;

	locks_insert_lock(before, lease);
	return 0;

out:
	return error;
}

static int generic_delete_lease(struct file *filp, struct file_lock **flp)
{
	struct file_lock *fl, **before;
	struct dentry *dentry = filp->f_path.dentry;
	struct inode *inode = dentry->d_inode;

	for (before = &inode->i_flock;
			((fl = *before) != NULL) && IS_LEASE(fl);
			before = &fl->fl_next) {
		if (fl->fl_file != filp)
			continue;
		return (*flp)->fl_lmops->lm_change(before, F_UNLCK);
	}
	return -EAGAIN;
}

/**
 * generic_setlease - sets a lease on an open file
 * @filp: file pointer
 * @arg: type of lease to obtain
 * @flp: input - file_lock to use, output - file_lock inserted
 *
 * The (input) flp->fl_lmops->lm_break function is required
 * by break_lease().
 *
 * Called with inode->i_lock held.
 */
int generic_setlease(struct file *filp, long arg, struct file_lock **flp)
{
	struct dentry *dentry = filp->f_path.dentry;
	struct inode *inode = dentry->d_inode;
	int error;

	if ((!uid_eq(current_fsuid(), inode->i_uid)) && !capable(CAP_LEASE))
		return -EACCES;
	if (!S_ISREG(inode->i_mode))
		return -EINVAL;
	error = security_file_lock(filp, arg);
	if (error)
		return error;

	time_out_leases(inode);

	BUG_ON(!(*flp)->fl_lmops->lm_break);

	switch (arg) {
	case F_UNLCK:
		return generic_delete_lease(filp, flp);
	case F_RDLCK:
	case F_WRLCK:
		return generic_add_lease(filp, arg, flp);
	default:
		return -EINVAL;
	}
}
EXPORT_SYMBOL(generic_setlease);

static int __vfs_setlease(struct file *filp, long arg, struct file_lock **lease)
{
	if (filp->f_op && filp->f_op->setlease)
		return filp->f_op->setlease(filp, arg, lease);
	else
		return generic_setlease(filp, arg, lease);
}

/**
 * vfs_setlease - sets a lease on an open file
 * @filp: file pointer
 * @arg: type of lease to obtain
 * @lease: file_lock to use
 *
 * Call this to establish a lease on the file.
 * The (*lease)->fl_lmops->lm_break operation must be set; if not,
 * break_lease will oops!
 *
 * This will call the filesystem's setlease file method, if
 * defined. Note that there is no getlease method; instead, the
 * filesystem setlease method should call back to setlease() to
 * add a lease to the inode's lease list, where fcntl_getlease() can
 * find it. Since fcntl_getlease() only reports whether the current
 * task holds a lease, a cluster filesystem need only do this for
 * leases held by processes on this node.
 *
 * There is also no break_lease method; filesystems that
 * handle their own leases should break leases themselves from the
 * filesystem's open, create, and (on truncate) setattr methods.
 *
 * Warning: the only current setlease methods exist only to disable
 * leases in certain cases. More vfs changes may be required to
 * allow a full filesystem lease implementation.
 */
int vfs_setlease(struct file *filp, long arg, struct file_lock **lease)
{
	struct inode *inode = file_inode(filp);
	int error;

	spin_lock(&inode->i_lock);
	error = __vfs_setlease(filp, arg, lease);
	spin_unlock(&inode->i_lock);

	return error;
}
EXPORT_SYMBOL_GPL(vfs_setlease);

static int do_fcntl_delete_lease(struct file *filp)
{
	struct file_lock fl, *flp = &fl;

	lease_init(filp, F_UNLCK, flp);

	return vfs_setlease(filp, F_UNLCK, &flp);
}

static int do_fcntl_add_lease(unsigned int fd, struct file *filp, long arg)
{
	struct file_lock *fl, *ret;
	struct inode *inode = file_inode(filp);
	struct fasync_struct *new;
	int error;

	fl = lease_alloc(filp, arg);
	if (IS_ERR(fl))
		return PTR_ERR(fl);

	new = fasync_alloc();
	if (!new) {
		locks_free_lock(fl);
		return -ENOMEM;
	}
	ret = fl;
	spin_lock(&inode->i_lock);
	error = __vfs_setlease(filp, arg, &ret);
	if (error) {
		spin_unlock(&inode->i_lock);
		locks_free_lock(fl);
		goto out_free_fasync;
	}
	if (ret != fl)
		locks_free_lock(fl);

	/*
	 * fasync_insert_entry() returns the old entry if any.
	 * If there was no old entry, then it used 'new' and
	 * inserted it into the fasync list. Clear new so that
	 * we don't release it here.
	 */
	if (!fasync_insert_entry(fd, filp, &ret->fl_fasync, new))
		new = NULL;

	error = __f_setown(filp, task_pid(current), PIDTYPE_PID, 0);
	spin_unlock(&inode->i_lock);

out_free_fasync:
	if (new)
		fasync_free(new);
	return error;
}

/**
 * fcntl_setlease - sets a lease on an open file
 * @fd: open file descriptor
 * @filp: file pointer
 * @arg: type of lease to obtain
 *
 * Call this fcntl to establish a lease on the file.
 * Note that you also need to call %F_SETSIG to
 * receive a signal when the lease is broken.
 */
int fcntl_setlease(unsigned int fd, struct file *filp, long arg)
{
	if (arg == F_UNLCK)
		return do_fcntl_delete_lease(filp);
	return do_fcntl_add_lease(fd, filp, arg);
}

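/*
 * Hedged userspace sketch of taking a lease (not kernel code): F_SETLEASE
 * is the fcntl command that ends up here, and F_SETSIG selects the signal
 * delivered via lease_break_callback() when someone else opens the file:
 *
 *	fcntl(fd, F_SETSIG, SIGRTMIN);		// optional: use a real-time signal
 *	if (fcntl(fd, F_SETLEASE, F_RDLCK) == -1)
 *		perror("F_SETLEASE");
 *	...
 *	fcntl(fd, F_SETLEASE, F_UNLCK);		// drop the lease again
 */
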
/**
 * flock_lock_file_wait - Apply a FLOCK-style lock to a file
 * @filp: The file to apply the lock to
 * @fl: The lock to be applied
 *
 * Add a FLOCK style lock to a file.
 */
int flock_lock_file_wait(struct file *filp, struct file_lock *fl)
{
	int error;
	might_sleep();
	for (;;) {
		error = flock_lock_file(filp, fl);
		if (error != FILE_LOCK_DEFERRED)
			break;
		error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
		if (!error)
			continue;

		locks_delete_block(fl);
		break;
	}
	return error;
}
EXPORT_SYMBOL(flock_lock_file_wait);

/**
 * sys_flock: - flock() system call.
 * @fd: the file descriptor to lock.
 * @cmd: the type of lock to apply.
 *
 * Apply a %FL_FLOCK style lock to an open file descriptor.
 * The @cmd can be one of
 *
 * %LOCK_SH -- a shared lock.
 *
 * %LOCK_EX -- an exclusive lock.
 *
 * %LOCK_UN -- remove an existing lock.
 *
 * %LOCK_MAND -- a `mandatory' flock. This exists to emulate Windows Share Modes.
 *
 * %LOCK_MAND can be combined with %LOCK_READ or %LOCK_WRITE to allow other
 * processes read and write access respectively.
 */
SYSCALL_DEFINE2(flock, unsigned int, fd, unsigned int, cmd)
{
	struct fd f = fdget(fd);
	struct file_lock *lock;
	int can_sleep, unlock;
	int error;

	error = -EBADF;
	if (!f.file)
		goto out;

	can_sleep = !(cmd & LOCK_NB);
	cmd &= ~LOCK_NB;
	unlock = (cmd == LOCK_UN);

	if (!unlock && !(cmd & LOCK_MAND) &&
	    !(f.file->f_mode & (FMODE_READ|FMODE_WRITE)))
		goto out_putf;

	error = flock_make_lock(f.file, &lock, cmd);
	if (error)
		goto out_putf;
	if (can_sleep)
		lock->fl_flags |= FL_SLEEP;

	error = security_file_lock(f.file, lock->fl_type);
	if (error)
		goto out_free;

	if (f.file->f_op && f.file->f_op->flock)
		error = f.file->f_op->flock(f.file,
					    (can_sleep) ? F_SETLKW : F_SETLK,
					    lock);
	else
		error = flock_lock_file_wait(f.file, lock);

 out_free:
	locks_free_lock(lock);

 out_putf:
	fdput(f);
 out:
	return error;
}

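/*
 * Illustrative userspace use of the syscall above (not kernel code; the
 * path is hypothetical):
 *
 *	int fd = open("somefile", O_RDWR);
 *	if (flock(fd, LOCK_EX | LOCK_NB) == -1 && errno == EWOULDBLOCK)
 *		...;			// someone else holds the lock
 *	...
 *	flock(fd, LOCK_UN);
 *
 * Note that the whole file is locked: FL_FLOCK locks have no byte range
 * (fl_end is set to OFFSET_MAX by flock_make_lock()).
 */
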
/**
 * vfs_test_lock - test file byte range lock
 * @filp: The file to test lock for
 * @fl: The lock to test; also used to hold result
 *
 * Returns -ERRNO on failure. Indicates presence of conflicting lock by
 * setting fl->fl_type to something other than F_UNLCK.
 */
int vfs_test_lock(struct file *filp, struct file_lock *fl)
{
	if (filp->f_op && filp->f_op->lock)
		return filp->f_op->lock(filp, F_GETLK, fl);
	posix_test_lock(filp, fl);
	return 0;
}
EXPORT_SYMBOL_GPL(vfs_test_lock);

static int posix_lock_to_flock(struct flock *flock, struct file_lock *fl)
{
	flock->l_pid = fl->fl_pid;
#if BITS_PER_LONG == 32
	/*
	 * Make sure we can represent the posix lock via
	 * legacy 32bit flock.
	 */
	if (fl->fl_start > OFFT_OFFSET_MAX)
		return -EOVERFLOW;
	if (fl->fl_end != OFFSET_MAX && fl->fl_end > OFFT_OFFSET_MAX)
		return -EOVERFLOW;
#endif
	flock->l_start = fl->fl_start;
	flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
		fl->fl_end - fl->fl_start + 1;
	flock->l_whence = 0;
	flock->l_type = fl->fl_type;
	return 0;
}

#if BITS_PER_LONG == 32
static void posix_lock_to_flock64(struct flock64 *flock, struct file_lock *fl)
{
	flock->l_pid = fl->fl_pid;
	flock->l_start = fl->fl_start;
	flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
		fl->fl_end - fl->fl_start + 1;
	flock->l_whence = 0;
	flock->l_type = fl->fl_type;
}
#endif

/* Report the first existing lock that would conflict with l.
 * This implements the F_GETLK command of fcntl().
 */
int fcntl_getlk(struct file *filp, struct flock __user *l)
{
	struct file_lock file_lock;
	struct flock flock;
	int error;

	error = -EFAULT;
	if (copy_from_user(&flock, l, sizeof(flock)))
		goto out;
	error = -EINVAL;
	if ((flock.l_type != F_RDLCK) && (flock.l_type != F_WRLCK))
		goto out;

	error = flock_to_posix_lock(filp, &file_lock, &flock);
	if (error)
		goto out;

	error = vfs_test_lock(filp, &file_lock);
	if (error)
		goto out;

	flock.l_type = file_lock.fl_type;
	if (file_lock.fl_type != F_UNLCK) {
		error = posix_lock_to_flock(&flock, &file_lock);
		if (error)
			goto out;
	}
	error = -EFAULT;
	if (!copy_to_user(l, &flock, sizeof(flock)))
		error = 0;
out:
	return error;
}

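/*
 * Userspace view of F_GETLK (illustrative sketch, not kernel code): the
 * caller describes the lock it would like to take, and on return l_type is
 * either F_UNLCK or describes the first conflicting lock found above:
 *
 *	struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
 *			    .l_start = 0, .l_len = 0 };	// len 0 = to EOF
 *	fcntl(fd, F_GETLK, &fl);
 *	if (fl.l_type != F_UNLCK)
 *		printf("conflict: pid %d holds [%ld,+%ld)\n",
 *		       (int)fl.l_pid, (long)fl.l_start, (long)fl.l_len);
 */
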
/**
 * vfs_lock_file - file byte range lock
 * @filp: The file to apply the lock to
 * @cmd: type of locking operation (F_SETLK, F_GETLK, etc.)
 * @fl: The lock to be applied
 * @conf: Place to return a copy of the conflicting lock, if found.
 *
 * A caller that doesn't care about the conflicting lock may pass NULL
 * as the final argument.
 *
 * If the filesystem defines a private ->lock() method, then @conf will
 * be left unchanged; so a caller that cares should initialize it to
 * some acceptable default.
 *
 * To avoid blocking kernel daemons, such as lockd, that need to acquire POSIX
 * locks, the ->lock() interface may return asynchronously, before the lock has
 * been granted or denied by the underlying filesystem, if (and only if)
 * lm_grant is set. Callers expecting ->lock() to return asynchronously
 * will only use F_SETLK, not F_SETLKW; they will set FL_SLEEP if (and only if)
 * the request is for a blocking lock. When ->lock() does return asynchronously,
 * it must return FILE_LOCK_DEFERRED, and call ->lm_grant() when the lock
 * request completes.
 * If the request is for a non-blocking lock the file system should return
 * FILE_LOCK_DEFERRED then try to get the lock and call the callback routine
 * with the result. If the request timed out the callback routine will return a
 * nonzero return code and the file system should release the lock. The file
 * system is also responsible for keeping a corresponding posix lock when it
 * grants a lock so the VFS can find out which locks are locally held and do
 * the correct lock cleanup when required.
 * The underlying filesystem must not drop the kernel lock or call
 * ->lm_grant() before returning to the caller with a FILE_LOCK_DEFERRED
 * return code.
 */
int vfs_lock_file(struct file *filp, unsigned int cmd, struct file_lock *fl, struct file_lock *conf)
{
	if (filp->f_op && filp->f_op->lock)
		return filp->f_op->lock(filp, cmd, fl);
	else
		return posix_lock_file(filp, fl, conf);
}
EXPORT_SYMBOL_GPL(vfs_lock_file);

static int do_lock_file_wait(struct file *filp, unsigned int cmd,
			     struct file_lock *fl)
{
	int error;

	error = security_file_lock(filp, fl->fl_type);
	if (error)
		return error;

	for (;;) {
		error = vfs_lock_file(filp, cmd, fl, NULL);
		if (error != FILE_LOCK_DEFERRED)
			break;
		error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
		if (!error)
			continue;

		locks_delete_block(fl);
		break;
	}

	return error;
}

/* Apply the lock described by l to an open file descriptor.
 * This implements both the F_SETLK and F_SETLKW commands of fcntl().
 */
int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
		struct flock __user *l)
{
	struct file_lock *file_lock = locks_alloc_lock();
	struct flock flock;
	struct inode *inode;
	struct file *f;
	int error;

	if (file_lock == NULL)
		return -ENOLCK;

	/*
	 * This might block, so we do it before checking the inode.
	 */
	error = -EFAULT;
	if (copy_from_user(&flock, l, sizeof(flock)))
		goto out;

	inode = file_inode(filp);

	/* Don't allow mandatory locks on files that may be memory mapped
	 * and shared.
	 */
	if (mandatory_lock(inode) && mapping_writably_mapped(filp->f_mapping)) {
		error = -EAGAIN;
		goto out;
	}

again:
	error = flock_to_posix_lock(filp, file_lock, &flock);
	if (error)
		goto out;
	if (cmd == F_SETLKW) {
		file_lock->fl_flags |= FL_SLEEP;
	}

	error = -EBADF;
	switch (flock.l_type) {
	case F_RDLCK:
		if (!(filp->f_mode & FMODE_READ))
			goto out;
		break;
	case F_WRLCK:
		if (!(filp->f_mode & FMODE_WRITE))
			goto out;
		break;
	case F_UNLCK:
		break;
	default:
		error = -EINVAL;
		goto out;
	}

	error = do_lock_file_wait(filp, cmd, file_lock);

	/*
	 * Attempt to detect a close/fcntl race and recover by
	 * releasing the lock that was just acquired.
	 */
	/*
	 * we need that spin_lock here - it prevents reordering between
	 * update of inode->i_flock and check for it done in close().
	 * rcu_read_lock() wouldn't do.
	 */
	spin_lock(&current->files->file_lock);
	f = fcheck(fd);
	spin_unlock(&current->files->file_lock);
	if (!error && f != filp && flock.l_type != F_UNLCK) {
		flock.l_type = F_UNLCK;
		goto again;
	}

out:
	locks_free_lock(file_lock);
	return error;
}

#if BITS_PER_LONG == 32
/* Report the first existing lock that would conflict with l.
 * This implements the F_GETLK command of fcntl().
 */
int fcntl_getlk64(struct file *filp, struct flock64 __user *l)
{
	struct file_lock file_lock;
	struct flock64 flock;
	int error;

	error = -EFAULT;
	if (copy_from_user(&flock, l, sizeof(flock)))
		goto out;
	error = -EINVAL;
	if ((flock.l_type != F_RDLCK) && (flock.l_type != F_WRLCK))
		goto out;

	error = flock64_to_posix_lock(filp, &file_lock, &flock);
	if (error)
		goto out;

	error = vfs_test_lock(filp, &file_lock);
	if (error)
		goto out;

	flock.l_type = file_lock.fl_type;
	if (file_lock.fl_type != F_UNLCK)
		posix_lock_to_flock64(&flock, &file_lock);

	error = -EFAULT;
	if (!copy_to_user(l, &flock, sizeof(flock)))
		error = 0;
out:
	return error;
}

/* Apply the lock described by l to an open file descriptor.
 * This implements both the F_SETLK and F_SETLKW commands of fcntl().
 */
int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
		struct flock64 __user *l)
{
	struct file_lock *file_lock = locks_alloc_lock();
	struct flock64 flock;
	struct inode *inode;
	struct file *f;
	int error;

	if (file_lock == NULL)
		return -ENOLCK;

	/*
	 * This might block, so we do it before checking the inode.
	 */
	error = -EFAULT;
	if (copy_from_user(&flock, l, sizeof(flock)))
		goto out;

	inode = file_inode(filp);

	/* Don't allow mandatory locks on files that may be memory mapped
	 * and shared.
	 */
	if (mandatory_lock(inode) && mapping_writably_mapped(filp->f_mapping)) {
		error = -EAGAIN;
		goto out;
	}

again:
	error = flock64_to_posix_lock(filp, file_lock, &flock);
	if (error)
		goto out;
	if (cmd == F_SETLKW64) {
		file_lock->fl_flags |= FL_SLEEP;
	}

	error = -EBADF;
	switch (flock.l_type) {
	case F_RDLCK:
		if (!(filp->f_mode & FMODE_READ))
			goto out;
		break;
	case F_WRLCK:
		if (!(filp->f_mode & FMODE_WRITE))
			goto out;
		break;
	case F_UNLCK:
		break;
	default:
		error = -EINVAL;
		goto out;
	}

	error = do_lock_file_wait(filp, cmd, file_lock);

	/*
	 * Attempt to detect a close/fcntl race and recover by
	 * releasing the lock that was just acquired.
	 */
	spin_lock(&current->files->file_lock);
	f = fcheck(fd);
	spin_unlock(&current->files->file_lock);
	if (!error && f != filp && flock.l_type != F_UNLCK) {
		flock.l_type = F_UNLCK;
		goto again;
	}

out:
	locks_free_lock(file_lock);
	return error;
}
#endif /* BITS_PER_LONG == 32 */

/*
 * This function is called when the file is being removed
 * from the task's fd array. POSIX locks belonging to this task
 * are deleted at this time.
 */
void locks_remove_posix(struct file *filp, fl_owner_t owner)
{
	struct file_lock lock;

	/*
	 * If there are no locks held on this file, we don't need to call
	 * posix_lock_file(). Another process could be setting a lock on this
	 * file at the same time, but we wouldn't remove that lock anyway.
	 */
	if (!file_inode(filp)->i_flock)
		return;

	lock.fl_type = F_UNLCK;
	lock.fl_flags = FL_POSIX | FL_CLOSE;
	lock.fl_start = 0;
	lock.fl_end = OFFSET_MAX;
	lock.fl_owner = owner;
	lock.fl_pid = current->tgid;
	lock.fl_file = filp;
	lock.fl_ops = NULL;
	lock.fl_lmops = NULL;

	vfs_lock_file(filp, F_SETLK, &lock, NULL);

	if (lock.fl_ops && lock.fl_ops->fl_release_private)
		lock.fl_ops->fl_release_private(&lock);
}
EXPORT_SYMBOL(locks_remove_posix);

/*
 * This function is called on the last close of an open file.
 */
void locks_remove_flock(struct file *filp)
{
	struct inode * inode = file_inode(filp);
	struct file_lock *fl;
	struct file_lock **before;

	if (!inode->i_flock)
		return;

	if (filp->f_op && filp->f_op->flock) {
		struct file_lock fl = {
			.fl_pid = current->tgid,
			.fl_file = filp,
			.fl_flags = FL_FLOCK,
			.fl_type = F_UNLCK,
			.fl_end = OFFSET_MAX,
		};
		filp->f_op->flock(filp, F_SETLKW, &fl);
		if (fl.fl_ops && fl.fl_ops->fl_release_private)
			fl.fl_ops->fl_release_private(&fl);
	}

	spin_lock(&inode->i_lock);
	before = &inode->i_flock;

	while ((fl = *before) != NULL) {
		if (fl->fl_file == filp) {
			if (IS_FLOCK(fl)) {
				locks_delete_lock(before);
				continue;
			}
			if (IS_LEASE(fl)) {
				lease_modify(before, F_UNLCK);
				continue;
			}
			/* What? */
			BUG();
		}
		before = &fl->fl_next;
	}
	spin_unlock(&inode->i_lock);
}

/**
 * posix_unblock_lock - stop waiting for a file lock
 * @waiter: the lock which was waiting
 *
 * lockd needs to block waiting for locks.
 */
int
posix_unblock_lock(struct file_lock *waiter)
{
	int status = 0;

	spin_lock(&file_lock_lock);
	if (waiter->fl_next)
		__locks_delete_block(waiter);
	else
		status = -ENOENT;
	spin_unlock(&file_lock_lock);
	return status;
}
EXPORT_SYMBOL(posix_unblock_lock);

/**
 * vfs_cancel_lock - file byte range unblock lock
 * @filp: The file to apply the unblock to
 * @fl: The lock to be unblocked
 *
 * Used by lock managers to cancel blocked requests
 */
int vfs_cancel_lock(struct file *filp, struct file_lock *fl)
{
	if (filp->f_op && filp->f_op->lock)
		return filp->f_op->lock(filp, F_CANCELLK, fl);
	return 0;
}
EXPORT_SYMBOL_GPL(vfs_cancel_lock);

#ifdef CONFIG_PROC_FS
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static void lock_get_status(struct seq_file *f, struct file_lock *fl,
			    loff_t id, char *pfx)
{
	struct inode *inode = NULL;
	unsigned int fl_pid;

	if (fl->fl_nspid)
		fl_pid = pid_vnr(fl->fl_nspid);
	else
		fl_pid = fl->fl_pid;

	if (fl->fl_file != NULL)
		inode = file_inode(fl->fl_file);

	seq_printf(f, "%lld:%s ", id, pfx);
	if (IS_POSIX(fl)) {
		seq_printf(f, "%6s %s ",
			     (fl->fl_flags & FL_ACCESS) ? "ACCESS" : "POSIX ",
			     (inode == NULL) ? "*NOINODE*" :
			     mandatory_lock(inode) ? "MANDATORY" : "ADVISORY ");
	} else if (IS_FLOCK(fl)) {
		if (fl->fl_type & LOCK_MAND) {
			seq_printf(f, "FLOCK  MSNFS     ");
		} else {
			seq_printf(f, "FLOCK  ADVISORY  ");
		}
	} else if (IS_LEASE(fl)) {
		seq_printf(f, "LEASE  ");
		if (lease_breaking(fl))
			seq_printf(f, "BREAKING  ");
		else if (fl->fl_file)
			seq_printf(f, "ACTIVE    ");
		else
			seq_printf(f, "BREAKER   ");
	} else {
		seq_printf(f, "UNKNOWN UNKNOWN  ");
	}
	if (fl->fl_type & LOCK_MAND) {
		seq_printf(f, "%s ",
			       (fl->fl_type & LOCK_READ)
			       ? (fl->fl_type & LOCK_WRITE) ? "RW   " : "READ "
			       : (fl->fl_type & LOCK_WRITE) ? "WRITE" : "NONE ");
	} else {
		seq_printf(f, "%s ",
			       (lease_breaking(fl))
			       ? (fl->fl_type == F_UNLCK) ? "UNLCK" : "READ "
			       : (fl->fl_type == F_WRLCK) ? "WRITE" : "READ ");
	}
	if (inode) {
#ifdef WE_CAN_BREAK_LSLK_NOW
		seq_printf(f, "%d %s:%ld ", fl_pid,
				inode->i_sb->s_id, inode->i_ino);
#else
		/* userspace relies on this representation of dev_t ;-( */
		seq_printf(f, "%d %02x:%02x:%ld ", fl_pid,
				MAJOR(inode->i_sb->s_dev),
				MINOR(inode->i_sb->s_dev), inode->i_ino);
#endif
	} else {
		seq_printf(f, "%d <none>:0 ", fl_pid);
	}
	if (IS_POSIX(fl)) {
		if (fl->fl_end == OFFSET_MAX)
			seq_printf(f, "%Ld EOF\n", fl->fl_start);
		else
			seq_printf(f, "%Ld %Ld\n", fl->fl_start, fl->fl_end);
	} else {
		seq_printf(f, "0 EOF\n");
	}
}

static int locks_show(struct seq_file *f, void *v)
{
	struct file_lock *fl, *bfl;

	fl = list_entry(v, struct file_lock, fl_link);

	lock_get_status(f, fl, *((loff_t *)f->private), "");

	list_for_each_entry(bfl, &fl->fl_block, fl_block)
		lock_get_status(f, bfl, *((loff_t *)f->private), " ->");

	return 0;
}

static void *locks_start(struct seq_file *f, loff_t *pos)
{
	loff_t *p = f->private;

	spin_lock(&file_lock_lock);
	*p = (*pos + 1);
	return seq_list_start(&file_lock_list, *pos);
}

static void *locks_next(struct seq_file *f, void *v, loff_t *pos)
{
	loff_t *p = f->private;
	++*p;
	return seq_list_next(v, &file_lock_list, pos);
}

static void locks_stop(struct seq_file *f, void *v)
{
	spin_unlock(&file_lock_lock);
}

static const struct seq_operations locks_seq_operations = {
	.start	= locks_start,
	.next	= locks_next,
	.stop	= locks_stop,
	.show	= locks_show,
};

static int locks_open(struct inode *inode, struct file *filp)
{
	return seq_open_private(filp, &locks_seq_operations, sizeof(loff_t));
}

static const struct file_operations proc_locks_operations = {
	.open		= locks_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= seq_release_private,
};

static int __init proc_locks_init(void)
{
	proc_create("locks", 0, NULL, &proc_locks_operations);
	return 0;
}
module_init(proc_locks_init);
#endif

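/*
 * Sample /proc/locks output as produced by lock_get_status() above (the
 * pids, device numbers and inode numbers are made up for illustration):
 *
 *	1: POSIX  ADVISORY  WRITE 1234 08:01:5678 0 EOF
 *	2: FLOCK  ADVISORY  WRITE 1235 08:01:5679 0 EOF
 *	2: -> FLOCK  ADVISORY  WRITE 1236 08:01:5679 0 EOF
 *
 * Fields: ordinal, lock class, advisory/mandatory, access, pid,
 * major:minor:inode, byte range ("EOF" for OFFSET_MAX). Blocked waiters
 * are listed under their blocker with a "->" prefix.
 */
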
/**
 * lock_may_read - checks that the region is free of locks
 * @inode: the inode that is being read
 * @start: the first byte to read
 * @len: the number of bytes to read
 *
 * Emulates Windows locking requirements. Whole-file
 * mandatory locks (share modes) can prohibit a read and
 * byte-range POSIX locks can prohibit a read if they overlap.
 *
 * N.B. this function is only ever called
 * from knfsd and ownership of locks is never checked.
 */
int lock_may_read(struct inode *inode, loff_t start, unsigned long len)
{
	struct file_lock *fl;
	int result = 1;

	spin_lock(&inode->i_lock);
	for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
		if (IS_POSIX(fl)) {
			if (fl->fl_type == F_RDLCK)
				continue;
			if ((fl->fl_end < start) || (fl->fl_start > (start + len)))
				continue;
		} else if (IS_FLOCK(fl)) {
			if (!(fl->fl_type & LOCK_MAND))
				continue;
			if (fl->fl_type & LOCK_READ)
				continue;
		} else
			continue;
		result = 0;
		break;
	}
	spin_unlock(&inode->i_lock);
	return result;
}
EXPORT_SYMBOL(lock_may_read);

/**
 * lock_may_write - checks that the region is free of locks
 * @inode: the inode that is being written
 * @start: the first byte to write
 * @len: the number of bytes to write
 *
 * Emulates Windows locking requirements. Whole-file
 * mandatory locks (share modes) can prohibit a write and
 * byte-range POSIX locks can prohibit a write if they overlap.
 *
 * N.B. this function is only ever called
 * from knfsd and ownership of locks is never checked.
 */
int lock_may_write(struct inode *inode, loff_t start, unsigned long len)
{
	struct file_lock *fl;
	int result = 1;

	spin_lock(&inode->i_lock);
	for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
		if (IS_POSIX(fl)) {
			if ((fl->fl_end < start) || (fl->fl_start > (start + len)))
				continue;
		} else if (IS_FLOCK(fl)) {
			if (!(fl->fl_type & LOCK_MAND))
				continue;
			if (fl->fl_type & LOCK_WRITE)
				continue;
		} else
			continue;
		result = 0;
		break;
	}
	spin_unlock(&inode->i_lock);
	return result;
}
EXPORT_SYMBOL(lock_may_write);

static int __init filelock_init(void)
{
	filelock_cache = kmem_cache_create("file_lock_cache",
			sizeof(struct file_lock), 0, SLAB_PANIC, NULL);
	return 0;
}

core_initcall(filelock_init);