From: Tao Zeng
Date: Tue, 14 May 2019 08:35:57 +0000 (+0800)
Subject: binder: back port changes from kernel 4.19 [1/3]
X-Git-Tag: khadas-vims-v0.9.6-release~447
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=f11163649e83d2762b493f40fd43644f7a8b6c05;p=platform%2Fkernel%2Flinux-amlogic.git

binder: back port changes from kernel 4.19 [1/3]

PD#SWPL-8572

Problems:
On the Android platform, each process may allocate 1MB of vmalloc
space for binder IPC, but most processes never use the full range.
This wastes address space and may keep the driver from working
normally on 32-bit kernels.

Solution:
Google fixed this in kernel 4.19, so back-port the following changes:

Squashed commit of the following:

commit b12a56e5342e15e99b0fb07c67dfce0891ba2f6b
Author: Todd Kjos
Date:   Tue Mar 19 09:53:01 2019 -0700

    FROMGIT: binder: fix BUG_ON found by selinux-testsuite

    The selinux-testsuite found an issue resulting in a BUG_ON() where
    a conditional relied on a size_t going negative when checking the
    validity of a buffer offset.

    (cherry picked from commit 5997da82145bb7c9a56d834894cb81f81f219344
     git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
     char-misc-linus)
    Bug: 67668716
    Change-Id: Ib3b408717141deadddcb6b95ad98c0b97d9d98ea
    Fixes: 7a67a39320df ("binder: add function to copy binder object from buffer")
    Reported-by: Paul Moore
    Tested-by: Paul Moore
    Signed-off-by: Todd Kjos

commit 5b28e504d93a5f1efc074dd7cdcadc07293bb783
Author: Todd Kjos
Date:   Thu Feb 14 15:22:57 2019 -0800

    UPSTREAM: binder: fix handling of misaligned binder object

    Fixes crash found by syzbot:
    kernel BUG at drivers/android/binder_alloc.c:LINE! (2)

    (cherry pick from commit 26528be6720bb40bc8844e97ee73a37e530e9c5e)
    Bug: 67668716
    Reported-and-tested-by: syzbot+55de1eb4975dec156d8f@syzkaller.appspotmail.com
    Signed-off-by: Todd Kjos
    Signed-off-by: Greg Kroah-Hartman
    Change-Id: Ib8597dd05a158f78503d4affe6c5f46ded16a811

commit e110c3b44e437bad09f76c2b42f23dcad898f57d
Author: Todd Kjos
Date:   Wed Feb 13 11:48:53 2019 -0800

    UPSTREAM: binder: fix sparse issue in binder_alloc_selftest.c

    Fixes sparse issues reported by the kbuild test robot running on
    https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
    char-misc-testing:
    bde4a19fc04f5 ("binder: use userspace pointer as base of buffer space")

    Error output (drivers/android/binder_alloc_selftest.c):
    sparse: warning: incorrect type in assignment (different address spaces)
    sparse:    expected void *page_addr
    sparse:    got void [noderef] *user_data
    sparse: error: subtraction of different types can't work

    Fixed by adding the necessary "__user" tags.

    (cherry pick from commit 36f30937922ce75390c73f99e650e4f2eb56b0e6)
    Bug: 67668716
    Reported-by: kbuild test robot
    Signed-off-by: Todd Kjos
    Signed-off-by: Greg Kroah-Hartman
    Change-Id: Ia0a16d163251381d4bc04f46a44dddbc18b10a85

commit 9f6fd7733286f1af04d153c9d3a050ca2615b3cc
Author: Todd Kjos
Date:   Fri Feb 8 10:35:20 2019 -0800

    BACKPORT: binder: use userspace pointer as base of buffer space

    Now that alloc->buffer points to the userspace vm_area, rename
    buffer->data to buffer->user_data and rename local pointers that
    hold user addresses. Also use the "__user" tag to annotate all user
    pointers so sparse can flag cases where user pointer values are
    copied to kernel pointers. Refactor code to use offsets instead of
    user pointers.

    (cherry pick from commit bde4a19fc04f5f46298c86b1acb7a4af1d5f138d)
    Bug: 67668716
    Change-Id: I9d04b844c5994d1f6214da795799e6b373bc9816
    Signed-off-by: Todd Kjos
    Signed-off-by: Greg Kroah-Hartman

commit 194d8606b011657ce30bf0c240a5adcad0691201
Author: Todd Kjos
Date:   Wed Dec 5 15:19:25 2018 -0800

    UPSTREAM: binder: fix kerneldoc header for struct binder_buffer

    Fix the incomplete kerneldoc header for struct binder_buffer.

    (cherry pick from commit 7a2670a5bc917e4e7c9be5274efc004f9bd1216a)
    Bug: 67668716
    Signed-off-by: Todd Kjos
    Signed-off-by: Greg Kroah-Hartman
    Change-Id: I6bb942e6a9466b02653349943524462f205af839

commit 55cb58623a60d48678d8eb74e1cabe7744ed62c2
Author: Todd Kjos
Date:   Fri Feb 8 10:35:19 2019 -0800

    BACKPORT: binder: remove user_buffer_offset

    Remove user_buffer_offset since there is no kernel buffer pointer
    anymore.

    (cherry pick from commit c41358a5f5217abd7c051e8d42397e5b80f3b3ed)
    Bug: 67668716
    Signed-off-by: Todd Kjos
    Signed-off-by: Greg Kroah-Hartman
    Change-Id: I399219867704dc5013453a7738193c742fc970ad

commit 3301f77efa9d99e742e5642243b891e014becf17
Author: Todd Kjos
Date:   Fri Feb 8 10:35:18 2019 -0800

    UPSTREAM: binder: remove kernel vm_area for buffer space

    Remove the kernel's vm_area and the code that maps buffer pages
    into it.

    (cherry pick from commit 880211667b203dd32724f3be224c44c0400aa0a6)
    Bug: 67668716
    Signed-off-by: Todd Kjos
    Signed-off-by: Greg Kroah-Hartman
    Change-Id: I2595bb8416c2bbfcf97ad3d7380ae94e29c209fb

commit 628c27a60665f15984364f6c0a1bda03473b3a78
Author: Todd Kjos
Date:   Fri Feb 8 10:35:17 2019 -0800

    UPSTREAM: binder: avoid kernel vm_area for buffer fixups

    Refactor the functions to validate and fixup struct binder_buffer
    pointer objects to avoid using vm_area pointers. Instead copy
    to/from kernel space using binder_alloc_copy_to_buffer() and
    binder_alloc_copy_from_buffer().

    The following functions were refactored:
        binder_validate_ptr()
        binder_validate_fixup()
        binder_fixup_parent()

    (cherry pick from commit db6b0b810bf945d1991917ffce0e93383101f2fa)
    Bug: 67668716
    Signed-off-by: Todd Kjos
    Signed-off-by: Greg Kroah-Hartman
    Change-Id: Ic222af9b6c56bf48fd0b65debe981d19a7809e77

commit ed39057090cc4a95c318bafcd97f418da56e3867
Author: Todd Kjos
Date:   Fri Feb 8 10:35:16 2019 -0800

    BACKPORT: binder: add function to copy binder object from buffer

    When creating or tearing down a transaction, the binder driver
    examines objects in the buffer and takes appropriate action. To do
    this without needing to dereference pointers into the buffer, local
    copies of the objects are needed. This patch introduces a function
    to validate and copy binder objects from the buffer to a local
    structure.

    (cherry pick from commit 7a67a39320dfba4b36d3be5dae4581194e650316)
    Bug: 67668716
    Signed-off-by: Todd Kjos
    Signed-off-by: Greg Kroah-Hartman
    Change-Id: I42dfe238a2d20bdeff479068ca87a80e4577e64a

commit 01f8f48c56b53faf1c795112f451a032a0d00b75
Author: Todd Kjos
Date:   Fri Feb 8 10:35:15 2019 -0800

    BACKPORT: binder: add functions to copy to/from binder buffers

    Avoid vm_area when copying to or from binder buffers. Instead, new
    copy functions are added that copy from kernel space to binder
    buffer space. These use kmap_atomic() and kunmap_atomic() to create
    temporary mappings, and then memcpy() is used to copy within that
    page. kmap_atomic() / kunmap_atomic() also perform the appropriate
    cache flushing to support VIVT cache architectures, so binder is
    allowed to build if CPU_CACHE_VIVT is defined.

    Several uses of the new functions are added here. More to follow in
    subsequent patches.

    (cherry picked from commit 8ced0c6231ead26eca8cb416dcb7cc1c2cdd41d8)
    Bug: 67668716
    Change-Id: I6a93d2396d0a80c352a1d563fc7fb523a753e38c
    Signed-off-by: Todd Kjos
    Signed-off-by: Greg Kroah-Hartman
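For reference, the new copy helpers in drivers/android/binder_alloc.c
follow the pattern below. This is a simplified sketch of the upstream
implementation, not the verbatim patch; binder_alloc_get_page(), added
by the same series, resolves a (buffer, offset) pair to its backing
page and in-page offset:

    static void binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
                                            struct binder_buffer *buffer,
                                            binder_size_t buffer_offset,
                                            void *src, size_t bytes)
    {
            /* walk the buffer one physical page at a time */
            while (bytes) {
                    unsigned long size;
                    struct page *page;
                    pgoff_t pgoff;
                    void *base;

                    page = binder_alloc_get_page(alloc, buffer,
                                                 buffer_offset, &pgoff);
                    size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
                    /*
                     * short-lived kernel mapping of just this page;
                     * kmap_atomic()/kunmap_atomic() also do the cache
                     * maintenance VIVT architectures require
                     */
                    base = kmap_atomic(page);
                    memcpy(base + pgoff, src, size);
                    kunmap_atomic(base);
                    bytes -= size;
                    src += size;
                    buffer_offset += size;
            }
    }

binder_alloc_copy_from_buffer() is the mirror image, with the memcpy()
direction reversed. The point is that no long-lived kernel vm_area is
needed: each page is mapped only for the duration of one memcpy().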
commit bfc28d4c046d2a1aea5db66508e7fbb65a31a4a9
Author: Todd Kjos
Date:   Fri Feb 8 10:35:14 2019 -0800

    UPSTREAM: binder: create userspace-to-binder-buffer copy function

    The binder driver uses a vm_area to map the per-process binder
    buffer space. For 32-bit android devices, this is now taking too
    much vmalloc space. This patch removes the use of vm_area when
    copying the transaction data from the sender to the buffer space.
    Instead of using copy_from_user() for multi-page copies, it now
    uses binder_alloc_copy_user_to_buffer() which uses kmap() and
    kunmap() to map each page, and uses copy_from_user() for copying
    to that page.

    (cherry picked from 1a7c3d9bb7a926e88d5f57643e75ad1abfc55013)
    Bug: 67668716
    Signed-off-by: Todd Kjos
    Signed-off-by: Greg Kroah-Hartman
    Change-Id: I59ff83455984fce4626476e30601ed8b99858a92
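The per-page copy loop is roughly the following (a simplified sketch;
the upstream function also range-checks (buffer_offset, bytes) against
the buffer, which is omitted here). kmap() is used rather than
kmap_atomic() because copy_from_user() may fault and sleep:

    static unsigned long
    binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
                                     struct binder_buffer *buffer,
                                     binder_size_t buffer_offset,
                                     const void __user *from,
                                     size_t bytes)
    {
            while (bytes) {
                    unsigned long size;
                    unsigned long ret;
                    struct page *page;
                    pgoff_t pgoff;
                    void *kptr;

                    page = binder_alloc_get_page(alloc, buffer,
                                                 buffer_offset, &pgoff);
                    size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
                    /* map one page, copy into it, unmap */
                    kptr = kmap(page) + pgoff;
                    ret = copy_from_user(kptr, from, size);
                    kunmap(page);
                    if (ret)
                            return bytes - size + ret; /* bytes left uncopied */
                    bytes -= size;
                    from += size;
                    buffer_offset += size;
            }
            return 0; /* everything copied */
    }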
commit 89a1a65d35200d8ca94c865f061f11af41a8ced7
Author: Todd Kjos
Date:   Mon Jan 14 09:10:21 2019 -0800

    FROMGIT: binder: create node flag to request sender's security context

    To allow servers to verify client identity, allow a node flag to be
    set that causes the sender's security context to be delivered with
    the transaction. The BR_TRANSACTION command is extended in
    BR_TRANSACTION_SEC_CTX to contain a pointer to the security context
    string.

    Signed-off-by: Todd Kjos
    Reviewed-by: Joel Fernandes (Google)
    Signed-off-by: Greg Kroah-Hartman

    (cherry picked from commit ec74136ded792deed80780a2f8baf3521eeb72f9
     https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
     master)
    Change-Id: I44496546e2d0dc0022f818a45cd52feb1c1a92cb
    Signed-off-by: Todd Kjos
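A server that sets FLAT_BINDER_FLAG_TXN_SECURITY_CTX on its node then
receives BR_TRANSACTION_SEC_CTX instead of BR_TRANSACTION. An
illustrative sketch of the userspace side of this (the read-loop
variables cmd and ptr, and handle_transaction(), are assumptions for
the example, not part of the patch):

    struct binder_transaction_data_secctx tr;

    switch (cmd) {
    case BR_TRANSACTION_SEC_CTX:
            memcpy(&tr, ptr, sizeof(tr));
            ptr += sizeof(tr);
            /*
             * tr.secctx points to a NUL-terminated security-context
             * string the driver appended to the transaction buffer
             */
            handle_transaction(&tr.transaction_data,
                               (const char *)(uintptr_t)tr.secctx);
            break;
    case BR_TRANSACTION:
            /* legacy path: no security context delivered */
            break;
    }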
commit 4afd6d2498ecd54e4211c6e47d8956a686a52ee3
Author: Todd Kjos
Date:   Wed Dec 5 15:19:26 2018 -0800

    UPSTREAM: binder: filter out nodes when showing binder procs

    When dumping out binder transactions via a debug node, the output
    is too verbose if a process has many nodes. Change the output for
    transaction dumps to only display nodes with pending async
    transactions.

    Signed-off-by: Todd Kjos
    Signed-off-by: Greg Kroah-Hartman
    (cherry picked from commit ecd589d8f5661dd3a9545079a29b678cd9e3ecf3)
    Bug: 112037142
    Change-Id: Iaa76ebdc844037ce1ee3bf2e590676790a959cef

commit 72e3c1d60a499bfa547d962a150082f47bfb16af
Author: Todd Kjos
Date:   Tue Nov 6 15:55:32 2018 -0800

    binder: fix race that allows malicious free of live buffer

    commit 7bada55ab50697861eee6bb7d60b41e68a961a9c upstream.

    Malicious code can attempt to free buffers using the BC_FREE_BUFFER
    ioctl to binder. There are protections against a user freeing a
    buffer while in use by the kernel, however there was a window where
    BC_FREE_BUFFER could be used to free a recently allocated buffer
    that was not completely initialized. This resulted in a
    use-after-free detected by KASAN with a malicious test program.

    This window is closed by setting the buffer's allow_user_free
    attribute to 0 when the buffer is allocated or when the user has
    previously freed it, instead of waiting for the caller to set it.
    The problem was that when the struct buffer was recycled,
    allow_user_free was stale and set to 1, allowing a free to go
    through.

    Signed-off-by: Todd Kjos
    Acked-by: Arve Hjønnevåg
    Cc: stable # 4.14
    Signed-off-by: Greg Kroah-Hartman
    Signed-off-by: Greg Kroah-Hartman

commit c7940ee7e55f4caec80ab646b7f9d495ee2677c6
Author: Martijn Coenen
Date:   Sat Aug 25 13:50:56 2018 -0700

    UPSTREAM: binder: Add BINDER_GET_NODE_INFO_FOR_REF ioctl.

    This allows the context manager to retrieve information about nodes
    that it holds a reference to, such as the current number of
    references to those nodes. Such information can for example be used
    to determine whether the servicemanager is the only process holding
    a reference to a node. This information can then be passed on to
    the process holding the node, which can in turn decide whether it
    wants to shut down to reduce resource usage.

    Bug: 79983843
    Change-Id: I21e52ed1ca2137f7bfdc0300365fb1285b7e3d70
    Signed-off-by: Martijn Coenen
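From the context manager, usage looks like this (illustrative sketch;
binder_fd and handle are assumed to be an open binder fd and a ref the
caller holds). As the ioctl handler in the diff below shows, every
field other than handle must be zero on input, and any caller other
than the context manager gets -EPERM:

    #include <err.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/android/binder.h>

    struct binder_node_info_for_ref info;

    memset(&info, 0, sizeof(info));
    info.handle = handle; /* a ref held by the context manager */
    if (ioctl(binder_fd, BINDER_GET_NODE_INFO_FOR_REF, &info) < 0)
            err(1, "BINDER_GET_NODE_INFO_FOR_REF");
    /* e.g. strong_count == 1: we hold the only strong reference */
    printf("node: strong %u, weak %u\n",
           info.strong_count, info.weak_count);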
commit afd02b5ead68a94eb6bf1bf5234271687d7eb461
Author: Minchan Kim
Date:   Thu Aug 23 14:29:56 2018 +0900

    android: binder: fix the race mmap and alloc_new_buf_locked

    There is a RaceFuzzer report like below because we have no lock to
    close the race between binder_mmap and binder_alloc_new_buf_locked.
    To close the race, let's use a memory barrier so that if someone
    sees a non-NULL alloc->vma, alloc->vma_vm_mm is never NULL.

    (I didn't add a stable mark intentionally because standard android
    userspace libraries that interact with binder (libbinder &
    libhwbinder) prevent the mmap/ioctl race. - from Todd)

    "
    Thread interleaving:
    CPU0 (binder_alloc_mmap_handler)    CPU1 (binder_alloc_new_buf_locked)
    =====                               =====
    // drivers/android/binder_alloc.c
    // #L718 (v4.18-rc3)
    alloc->vma = vma;
                                        // drivers/android/binder_alloc.c
                                        // #L346 (v4.18-rc3)
                                        if (alloc->vma == NULL) {
                                        ...
                                        // alloc->vma is not NULL at this point
                                        return ERR_PTR(-ESRCH);
                                        }
                                        ...
                                        // #L438
                                        binder_update_page_range(alloc, 0,
                                            (void *)PAGE_ALIGN((uintptr_t)buffer->data),
                                            end_page_addr);
                                        // In binder_update_page_range() #L218
                                        // But still alloc->vma_vm_mm is NULL here
                                        if (need_mm && mmget_not_zero(alloc->vma_vm_mm))
    alloc->vma_vm_mm = vma->vm_mm;

    Crash Log:
    ==================================================================
    BUG: KASAN: null-ptr-deref in __atomic_add_unless include/asm-generic/atomic-instrumented.h:89 [inline]
    BUG: KASAN: null-ptr-deref in atomic_add_unless include/linux/atomic.h:533 [inline]
    BUG: KASAN: null-ptr-deref in mmget_not_zero include/linux/sched/mm.h:75 [inline]
    BUG: KASAN: null-ptr-deref in binder_update_page_range+0xece/0x18e0 drivers/android/binder_alloc.c:218
    Write of size 4 at addr 0000000000000058 by task syz-executor0/11184

    CPU: 1 PID: 11184 Comm: syz-executor0 Not tainted 4.18.0-rc3 #1
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
    Call Trace:
     __dump_stack lib/dump_stack.c:77 [inline]
     dump_stack+0x16e/0x22c lib/dump_stack.c:113
     kasan_report_error mm/kasan/report.c:352 [inline]
     kasan_report+0x163/0x380 mm/kasan/report.c:412
     check_memory_region_inline mm/kasan/kasan.c:260 [inline]
     check_memory_region+0x140/0x1a0 mm/kasan/kasan.c:267
     kasan_check_write+0x14/0x20 mm/kasan/kasan.c:278
     __atomic_add_unless include/asm-generic/atomic-instrumented.h:89 [inline]
     atomic_add_unless include/linux/atomic.h:533 [inline]
     mmget_not_zero include/linux/sched/mm.h:75 [inline]
     binder_update_page_range+0xece/0x18e0 drivers/android/binder_alloc.c:218
     binder_alloc_new_buf_locked drivers/android/binder_alloc.c:443 [inline]
     binder_alloc_new_buf+0x467/0xc30 drivers/android/binder_alloc.c:513
     binder_transaction+0x125b/0x4fb0 drivers/android/binder.c:2957
     binder_thread_write+0xc08/0x2770 drivers/android/binder.c:3528
     binder_ioctl_write_read.isra.39+0x24f/0x8e0 drivers/android/binder.c:4456
     binder_ioctl+0xa86/0xf34 drivers/android/binder.c:4596
     vfs_ioctl fs/ioctl.c:46 [inline]
     do_vfs_ioctl+0x154/0xd40 fs/ioctl.c:686
     ksys_ioctl+0x94/0xb0 fs/ioctl.c:701
     __do_sys_ioctl fs/ioctl.c:708 [inline]
     __se_sys_ioctl fs/ioctl.c:706 [inline]
     __x64_sys_ioctl+0x43/0x50 fs/ioctl.c:706
     do_syscall_64+0x167/0x4b0 arch/x86/entry/common.c:290
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
    "

    Signed-off-by: Todd Kjos
    Signed-off-by: Minchan Kim
    Reviewed-by: Martijn Coenen
    Cc: stable
    Signed-off-by: Greg Kroah-Hartman

commit 3ed5fd0f095e9d6fe5f33f909165a8cd596e8b46
Author: Sherry Yang
Date:   Tue Aug 7 12:57:13 2018 -0700

    android: binder: Rate-limit debug and userspace triggered err msgs

    Use rate-limited debug messages where userspace can trigger
    excessive log spam.

    Acked-by: Arve Hjønnevåg
    Signed-off-by: Sherry Yang
    Signed-off-by: Greg Kroah-Hartman

commit 8129fb3ee7af23a888383aa23647c9d576ecdfef
Author: Sherry Yang
Date:   Thu Jul 26 17:17:17 2018 -0700

    android: binder: Show extra_buffers_size in trace

    Add extra_buffers_size to the binder_transaction_alloc_buf
    tracepoint.

    Acked-by: Arve Hjønnevåg
    Signed-off-by: Sherry Yang
    Signed-off-by: Greg Kroah-Hartman

commit 3b0bbcb65457ddec6fbee72bb26002e2bba16089
Author: Guenter Roeck
Date:   Mon Jul 23 14:41:38 2018 -0700

    android: binder: Include asm/cacheflush.h after linux/ include files

    If asm/cacheflush.h is included first, the following build warnings
    are seen with sparc32 builds.
In file included from arch/sparc/include/asm/cacheflush.h:11:0, from drivers/android/binder.c:54: arch/sparc/include/asm/cacheflush_32.h:40:37: warning: 'struct page' declared inside parameter list will not be visible outside of this definition or declaration Moving the asm/ include after linux/ includes solves the problem. Suggested-by: Linus Torvalds Signed-off-by: Guenter Roeck Signed-off-by: Greg Kroah-Hartman commit e8a4948f49629c6ab122339f46908884d55ca7e9 Author: Guenter Roeck Date: Mon Jul 23 14:47:23 2018 -0700 android: binder_alloc: Include asm/cacheflush.h after linux/ include files If asm/cacheflush.h is included first, the following build warnings are seen with sparc32 builds. In file included from ./arch/sparc/include/asm/cacheflush.h:11:0, from drivers/android/binder_alloc.c:20: ./arch/sparc/include/asm/cacheflush_32.h:40:37: warning: 'struct page' declared inside parameter list Moving the asm/ include after linux/ includes fixes the problem. Suggested-by: Linus Torvalds Signed-off-by: Guenter Roeck Signed-off-by: Greg Kroah-Hartman commit 8cae6730ef318700ab3a0db3ef43ee6a5e5856c8 Author: Geert Uytterhoeven Date: Wed Jun 6 14:40:56 2018 +0200 android: binder: Drop dependency on !M68K As of commit 7124330dabe5b3cb ("m68k/uaccess: Revive 64-bit get_user()"), the 64-bit Android binder interface builds fine on m68k. Signed-off-by: Geert Uytterhoeven Signed-off-by: Greg Kroah-Hartman verify: p212 Change-Id: I1bac2c5345bcac64a3890f1688c1ecc4a3654a79 Signed-off-by: Tao Zeng --- diff --git a/drivers/android/Kconfig b/drivers/android/Kconfig index 63ed9ce..79cbd66 100644 --- a/drivers/android/Kconfig +++ b/drivers/android/Kconfig @@ -9,7 +9,7 @@ if ANDROID config ANDROID_BINDER_IPC bool "Android Binder IPC Driver" - depends on MMU && !M68K + depends on MMU default n ---help--- Binder is used in Android for both communication between processes, diff --git a/drivers/android/binder.c b/drivers/android/binder.c index 51a4503..5b5bf21 100644 --- a/drivers/android/binder.c +++ b/drivers/android/binder.c @@ -51,7 +51,6 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt -#include #include #include #include @@ -70,8 +69,12 @@ #include #include #include +#include #include + +#include + #include "binder_alloc.h" #include "binder_trace.h" @@ -162,13 +165,13 @@ module_param_call(stop_on_user_error, binder_set_stop_on_user_error, #define binder_debug(mask, x...) \ do { \ if (binder_debug_mask & mask) \ - pr_info(x); \ + pr_info_ratelimited(x); \ } while (0) #define binder_user_error(x...) 
\ do { \ if (binder_debug_mask & BINDER_DEBUG_USER_ERROR) \ - pr_info(x); \ + pr_info_ratelimited(x); \ if (binder_stop_on_user_error) \ binder_stop_on_user_error = 2; \ } while (0) @@ -355,6 +358,7 @@ struct binder_error { * @min_priority: minimum scheduling priority * (invariant after initialized) * @inherit_rt: inherit RT scheduling policy from caller + * @txn_security_ctx: require sender's security context * (invariant after initialized) * @async_todo: list of async work items * (protected by @proc->inner_lock) @@ -394,6 +398,7 @@ struct binder_node { u8 sched_policy:2; u8 inherit_rt:1; u8 accept_fds:1; + u8 txn_security_ctx:1; u8 min_priority; }; bool has_async_transaction; @@ -651,6 +656,7 @@ struct binder_transaction { struct binder_priority saved_priority; bool set_priority_called; kuid_t sender_euid; + binder_uintptr_t security_ctx; /** * @lock: protects @from, @to_proc, and @to_thread * @@ -661,6 +667,26 @@ struct binder_transaction { }; /** + * struct binder_object - union of flat binder object types + * @hdr: generic object header + * @fbo: binder object (nodes and refs) + * @fdo: file descriptor object + * @bbo: binder buffer pointer + * @fdao: file descriptor array + * + * Used for type-independent object copies + */ +struct binder_object { + union { + struct binder_object_header hdr; + struct flat_binder_object fbo; + struct binder_fd_object fdo; + struct binder_buffer_object bbo; + struct binder_fd_array_object fdao; + }; +}; + +/** * binder_proc_lock() - Acquire outer lock for given binder_proc * @proc: struct binder_proc to acquire * @@ -1360,6 +1386,7 @@ static struct binder_node *binder_init_node_ilocked( node->min_priority = to_kernel_prio(node->sched_policy, priority); node->accept_fds = !!(flags & FLAT_BINDER_FLAG_ACCEPTS_FDS); node->inherit_rt = !!(flags & FLAT_BINDER_FLAG_INHERIT_RT); + node->txn_security_ctx = !!(flags & FLAT_BINDER_FLAG_TXN_SECURITY_CTX); spin_lock_init(&node->lock); INIT_LIST_HEAD(&node->work.entry); INIT_LIST_HEAD(&node->async_todo); @@ -2196,26 +2223,34 @@ static void binder_cleanup_transaction(struct binder_transaction *t, } /** - * binder_validate_object() - checks for a valid metadata object in a buffer. + * binder_get_object() - gets object and checks for valid metadata + * @proc: binder_proc owning the buffer * @buffer: binder_buffer that we're parsing. - * @offset: offset in the buffer at which to validate an object. + * @offset: offset in the @buffer at which to validate an object. + * @object: struct binder_object to read into * * Return: If there's a valid metadata object at @offset in @buffer, the - * size of that object. Otherwise, it returns zero. + * size of that object. Otherwise, it returns zero. The object + * is read into the struct binder_object pointed to by @object. */ -static size_t binder_validate_object(struct binder_buffer *buffer, u64 offset) +static size_t binder_get_object(struct binder_proc *proc, + struct binder_buffer *buffer, + unsigned long offset, + struct binder_object *object) { - /* Check if we can read a header first */ + size_t read_size; struct binder_object_header *hdr; size_t object_size = 0; - if (buffer->data_size < sizeof(*hdr) || - offset > buffer->data_size - sizeof(*hdr) || + read_size = min_t(size_t, sizeof(*object), buffer->data_size - offset); + if (offset > buffer->data_size || read_size < sizeof(*hdr) || !IS_ALIGNED(offset, sizeof(u32))) return 0; + binder_alloc_copy_from_buffer(&proc->alloc, object, buffer, + offset, read_size); - /* Ok, now see if we can read a complete object. 
*/ - hdr = (struct binder_object_header *)(buffer->data + offset); + /* Ok, now see if we read a complete object. */ + hdr = &object->hdr; switch (hdr->type) { case BINDER_TYPE_BINDER: case BINDER_TYPE_WEAK_BINDER: @@ -2244,10 +2279,13 @@ static size_t binder_validate_object(struct binder_buffer *buffer, u64 offset) /** * binder_validate_ptr() - validates binder_buffer_object in a binder_buffer. + * @proc: binder_proc owning the buffer * @b: binder_buffer containing the object + * @object: struct binder_object to read into * @index: index in offset array at which the binder_buffer_object is * located - * @start: points to the start of the offset array + * @start_offset: points to the start of the offset array + * @object_offsetp: offset of @object read from @b * @num_valid: the number of valid offsets in the offset array * * Return: If @index is within the valid range of the offset array @@ -2258,34 +2296,46 @@ static size_t binder_validate_object(struct binder_buffer *buffer, u64 offset) * Note that the offset found in index @index itself is not * verified; this function assumes that @num_valid elements * from @start were previously verified to have valid offsets. + * If @object_offsetp is non-NULL, then the offset within + * @b is written to it. */ -static struct binder_buffer_object *binder_validate_ptr(struct binder_buffer *b, - binder_size_t index, - binder_size_t *start, - binder_size_t num_valid) +static struct binder_buffer_object *binder_validate_ptr( + struct binder_proc *proc, + struct binder_buffer *b, + struct binder_object *object, + binder_size_t index, + binder_size_t start_offset, + binder_size_t *object_offsetp, + binder_size_t num_valid) { - struct binder_buffer_object *buffer_obj; - binder_size_t *offp; + size_t object_size; + binder_size_t object_offset; + unsigned long buffer_offset; if (index >= num_valid) return NULL; - offp = start + index; - buffer_obj = (struct binder_buffer_object *)(b->data + *offp); - if (buffer_obj->hdr.type != BINDER_TYPE_PTR) + buffer_offset = start_offset + sizeof(binder_size_t) * index; + binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, + b, buffer_offset, sizeof(object_offset)); + object_size = binder_get_object(proc, b, object_offset, object); + if (!object_size || object->hdr.type != BINDER_TYPE_PTR) return NULL; + if (object_offsetp) + *object_offsetp = object_offset; - return buffer_obj; + return &object->bbo; } /** * binder_validate_fixup() - validates pointer/fd fixups happen in order. + * @proc: binder_proc owning the buffer * @b: transaction buffer - * @objects_start start of objects buffer - * @buffer: binder_buffer_object in which to fix up - * @offset: start offset in @buffer to fix up - * @last_obj: last binder_buffer_object that we fixed up in - * @last_min_offset: minimum fixup offset in @last_obj + * @objects_start_offset: offset to start of objects buffer + * @buffer_obj_offset: offset to binder_buffer_object in which to fix up + * @fixup_offset: start offset in @buffer to fix up + * @last_obj_offset: offset to last binder_buffer_object that we fixed + * @last_min_offset: minimum fixup offset in object at @last_obj_offset * * Return: %true if a fixup in buffer @buffer at offset @offset is * allowed. 
@@ -2316,63 +2366,83 @@ static struct binder_buffer_object *binder_validate_ptr(struct binder_buffer *b, * C (parent = A, offset = 16) * D (parent = B, offset = 0) // B is not A or any of A's parents */ -static bool binder_validate_fixup(struct binder_buffer *b, - binder_size_t *objects_start, - struct binder_buffer_object *buffer, +static bool binder_validate_fixup(struct binder_proc *proc, + struct binder_buffer *b, + binder_size_t objects_start_offset, + binder_size_t buffer_obj_offset, binder_size_t fixup_offset, - struct binder_buffer_object *last_obj, + binder_size_t last_obj_offset, binder_size_t last_min_offset) { - if (!last_obj) { + if (!last_obj_offset) { /* Nothing to fix up in */ return false; } - while (last_obj != buffer) { + while (last_obj_offset != buffer_obj_offset) { + unsigned long buffer_offset; + struct binder_object last_object; + struct binder_buffer_object *last_bbo; + size_t object_size = binder_get_object(proc, b, last_obj_offset, + &last_object); + if (object_size != sizeof(*last_bbo)) + return false; + + last_bbo = &last_object.bbo; /* * Safe to retrieve the parent of last_obj, since it * was already previously verified by the driver. */ - if ((last_obj->flags & BINDER_BUFFER_FLAG_HAS_PARENT) == 0) + if ((last_bbo->flags & BINDER_BUFFER_FLAG_HAS_PARENT) == 0) return false; - last_min_offset = last_obj->parent_offset + sizeof(uintptr_t); - last_obj = (struct binder_buffer_object *) - (b->data + *(objects_start + last_obj->parent)); + last_min_offset = last_bbo->parent_offset + sizeof(uintptr_t); + buffer_offset = objects_start_offset + + sizeof(binder_size_t) * last_bbo->parent, + binder_alloc_copy_from_buffer(&proc->alloc, &last_obj_offset, + b, buffer_offset, + sizeof(last_obj_offset)); } return (fixup_offset >= last_min_offset); } static void binder_transaction_buffer_release(struct binder_proc *proc, struct binder_buffer *buffer, - binder_size_t *failed_at) + binder_size_t failed_at, + bool is_failure) { - binder_size_t *offp, *off_start, *off_end; int debug_id = buffer->debug_id; + binder_size_t off_start_offset, buffer_offset, off_end_offset; binder_debug(BINDER_DEBUG_TRANSACTION, - "%d buffer release %d, size %zd-%zd, failed at %pK\n", + "%d buffer release %d, size %zd-%zd, failed at %llx\n", proc->pid, buffer->debug_id, - buffer->data_size, buffer->offsets_size, failed_at); + buffer->data_size, buffer->offsets_size, + (unsigned long long)failed_at); if (buffer->target_node) binder_dec_node(buffer->target_node, 1, 0); - off_start = (binder_size_t *)(buffer->data + - ALIGN(buffer->data_size, sizeof(void *))); - if (failed_at) - off_end = failed_at; - else - off_end = (void *)off_start + buffer->offsets_size; - for (offp = off_start; offp < off_end; offp++) { + off_start_offset = ALIGN(buffer->data_size, sizeof(void *)); + off_end_offset = is_failure ? 
failed_at : + off_start_offset + buffer->offsets_size; + for (buffer_offset = off_start_offset; buffer_offset < off_end_offset; + buffer_offset += sizeof(binder_size_t)) { struct binder_object_header *hdr; - size_t object_size = binder_validate_object(buffer, *offp); - + size_t object_size; + struct binder_object object; + binder_size_t object_offset; + + binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, + buffer, buffer_offset, + sizeof(object_offset)); + object_size = binder_get_object(proc, buffer, + object_offset, &object); if (object_size == 0) { pr_err("transaction release %d bad object at offset %lld, size %zd\n", - debug_id, (u64)*offp, buffer->data_size); + debug_id, (u64)object_offset, buffer->data_size); continue; } - hdr = (struct binder_object_header *)(buffer->data + *offp); + hdr = &object.hdr; switch (hdr->type) { case BINDER_TYPE_BINDER: case BINDER_TYPE_WEAK_BINDER: { @@ -2430,28 +2500,25 @@ static void binder_transaction_buffer_release(struct binder_proc *proc, case BINDER_TYPE_FDA: { struct binder_fd_array_object *fda; struct binder_buffer_object *parent; - uintptr_t parent_buffer; - u32 *fd_array; + struct binder_object ptr_object; + binder_size_t fda_offset; size_t fd_index; binder_size_t fd_buf_size; + binder_size_t num_valid; + num_valid = (buffer_offset - off_start_offset) / + sizeof(binder_size_t); fda = to_binder_fd_array_object(hdr); - parent = binder_validate_ptr(buffer, fda->parent, - off_start, - offp - off_start); + parent = binder_validate_ptr(proc, buffer, &ptr_object, + fda->parent, + off_start_offset, + NULL, + num_valid); if (!parent) { pr_err("transaction release %d bad parent offset", debug_id); continue; } - /* - * Since the parent was already fixed up, convert it - * back to kernel address space to access it - */ - parent_buffer = parent->buffer - - binder_alloc_get_user_buffer_offset( - &proc->alloc); - fd_buf_size = sizeof(u32) * fda->num_fds; if (fda->num_fds >= SIZE_MAX / sizeof(u32)) { pr_err("transaction release %d invalid number of fds (%lld)\n", @@ -2465,9 +2532,29 @@ static void binder_transaction_buffer_release(struct binder_proc *proc, debug_id, (u64)fda->num_fds); continue; } - fd_array = (u32 *)(parent_buffer + (uintptr_t)fda->parent_offset); - for (fd_index = 0; fd_index < fda->num_fds; fd_index++) - task_close_fd(proc, fd_array[fd_index]); + /* + * the source data for binder_buffer_object is visible + * to user-space and the @buffer element is the user + * pointer to the buffer_object containing the fd_array. + * Convert the address to an offset relative to + * the base of the transaction buffer. 
+ */ + fda_offset = + (parent->buffer - (uintptr_t)buffer->user_data) + + fda->parent_offset; + for (fd_index = 0; fd_index < fda->num_fds; + fd_index++) { + u32 fd; + binder_size_t offset = fda_offset + + fd_index * sizeof(fd); + + binder_alloc_copy_from_buffer(&proc->alloc, + &fd, + buffer, + offset, + sizeof(fd)); + task_close_fd(proc, fd); + } } break; default: pr_err("transaction release %d bad object type %x\n", @@ -2664,9 +2751,8 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda, struct binder_transaction *in_reply_to) { binder_size_t fdi, fd_buf_size, num_installed_fds; + binder_size_t fda_offset; int target_fd; - uintptr_t parent_buffer; - u32 *fd_array; struct binder_proc *proc = thread->proc; struct binder_proc *target_proc = t->to_proc; @@ -2684,23 +2770,32 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda, return -EINVAL; } /* - * Since the parent was already fixed up, convert it - * back to the kernel address space to access it + * the source data for binder_buffer_object is visible + * to user-space and the @buffer element is the user + * pointer to the buffer_object containing the fd_array. + * Convert the address to an offset relative to + * the base of the transaction buffer. */ - parent_buffer = parent->buffer - - binder_alloc_get_user_buffer_offset(&target_proc->alloc); - fd_array = (u32 *)(parent_buffer + (uintptr_t)fda->parent_offset); - if (!IS_ALIGNED((unsigned long)fd_array, sizeof(u32))) { + fda_offset = (parent->buffer - (uintptr_t)t->buffer->user_data) + + fda->parent_offset; + if (!IS_ALIGNED((unsigned long)fda_offset, sizeof(u32))) { binder_user_error("%d:%d parent offset not aligned correctly.\n", proc->pid, thread->pid); return -EINVAL; } for (fdi = 0; fdi < fda->num_fds; fdi++) { - target_fd = binder_translate_fd(fd_array[fdi], t, thread, - in_reply_to); + u32 fd; + binder_size_t offset = fda_offset + fdi * sizeof(fd); + + binder_alloc_copy_from_buffer(&target_proc->alloc, + &fd, t->buffer, + offset, sizeof(fd)); + target_fd = binder_translate_fd(fd, t, thread, in_reply_to); if (target_fd < 0) goto err_translate_fd_failed; - fd_array[fdi] = target_fd; + binder_alloc_copy_to_buffer(&target_proc->alloc, + t->buffer, offset, + &target_fd, sizeof(fd)); } return 0; @@ -2710,38 +2805,48 @@ err_translate_fd_failed: * installed so far. 
*/ num_installed_fds = fdi; - for (fdi = 0; fdi < num_installed_fds; fdi++) - task_close_fd(target_proc, fd_array[fdi]); + for (fdi = 0; fdi < num_installed_fds; fdi++) { + u32 fd; + binder_size_t offset = fda_offset + fdi * sizeof(fd); + binder_alloc_copy_from_buffer(&target_proc->alloc, + &fd, t->buffer, + offset, sizeof(fd)); + task_close_fd(target_proc, fd); + } return target_fd; } static int binder_fixup_parent(struct binder_transaction *t, struct binder_thread *thread, struct binder_buffer_object *bp, - binder_size_t *off_start, + binder_size_t off_start_offset, binder_size_t num_valid, - struct binder_buffer_object *last_fixup_obj, + binder_size_t last_fixup_obj_off, binder_size_t last_fixup_min_off) { struct binder_buffer_object *parent; - u8 *parent_buffer; struct binder_buffer *b = t->buffer; struct binder_proc *proc = thread->proc; struct binder_proc *target_proc = t->to_proc; + struct binder_object object; + binder_size_t buffer_offset; + binder_size_t parent_offset; if (!(bp->flags & BINDER_BUFFER_FLAG_HAS_PARENT)) return 0; - parent = binder_validate_ptr(b, bp->parent, off_start, num_valid); + parent = binder_validate_ptr(target_proc, b, &object, bp->parent, + off_start_offset, &parent_offset, + num_valid); if (!parent) { binder_user_error("%d:%d got transaction with invalid parent offset or type\n", proc->pid, thread->pid); return -EINVAL; } - if (!binder_validate_fixup(b, off_start, - parent, bp->parent_offset, - last_fixup_obj, + if (!binder_validate_fixup(target_proc, b, off_start_offset, + parent_offset, bp->parent_offset, + last_fixup_obj_off, last_fixup_min_off)) { binder_user_error("%d:%d got transaction with out-of-order buffer fixup\n", proc->pid, thread->pid); @@ -2755,10 +2860,10 @@ static int binder_fixup_parent(struct binder_transaction *t, proc->pid, thread->pid); return -EINVAL; } - parent_buffer = (u8 *)((uintptr_t)parent->buffer - - binder_alloc_get_user_buffer_offset( - &target_proc->alloc)); - *(binder_uintptr_t *)(parent_buffer + bp->parent_offset) = bp->buffer; + buffer_offset = bp->parent_offset + + (uintptr_t)parent->buffer - (uintptr_t)b->user_data; + binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset, + &bp->buffer, sizeof(bp->buffer)); return 0; } @@ -2883,9 +2988,10 @@ static void binder_transaction(struct binder_proc *proc, int ret; struct binder_transaction *t; struct binder_work *tcomplete; - binder_size_t *offp, *off_end, *off_start; + binder_size_t buffer_offset = 0; + binder_size_t off_start_offset, off_end_offset; binder_size_t off_min; - u8 *sg_bufp, *sg_buf_end; + binder_size_t sg_buf_offset, sg_buf_end_offset; struct binder_proc *target_proc = NULL; struct binder_thread *target_thread = NULL; struct binder_node *target_node = NULL; @@ -2894,10 +3000,12 @@ static void binder_transaction(struct binder_proc *proc, uint32_t return_error = 0; uint32_t return_error_param = 0; uint32_t return_error_line = 0; - struct binder_buffer_object *last_fixup_obj = NULL; + binder_size_t last_fixup_obj_off = 0; binder_size_t last_fixup_min_off = 0; struct binder_context *context = proc->context; int t_debug_id = atomic_inc_return(&binder_last_id); + char *secctx = NULL; + u32 secctx_sz = 0; e = binder_transaction_log_add(&binder_transaction_log); e->debug_id = t_debug_id; @@ -3121,6 +3229,20 @@ static void binder_transaction(struct binder_proc *proc, t->priority = target_proc->default_priority; } + if (target_node && target_node->txn_security_ctx) { + u32 secid; + + security_task_getsecid(proc->tsk, &secid); + ret = 
security_secid_to_secctx(secid, &secctx, &secctx_sz); + if (ret) { + return_error = BR_FAILED_REPLY; + return_error_param = ret; + return_error_line = __LINE__; + goto err_get_secctx_failed; + } + extra_buffers_size += ALIGN(secctx_sz, sizeof(u64)); + } + trace_binder_transaction(reply, t, target_node); t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size, @@ -3137,17 +3259,30 @@ static void binder_transaction(struct binder_proc *proc, t->buffer = NULL; goto err_binder_alloc_buf_failed; } - t->buffer->allow_user_free = 0; + if (secctx) { + size_t buf_offset = ALIGN(tr->data_size, sizeof(void *)) + + ALIGN(tr->offsets_size, sizeof(void *)) + + ALIGN(extra_buffers_size, sizeof(void *)) - + ALIGN(secctx_sz, sizeof(u64)); + + t->security_ctx = (uintptr_t)t->buffer->user_data + buf_offset; + binder_alloc_copy_to_buffer(&target_proc->alloc, + t->buffer, buf_offset, + secctx, secctx_sz); + security_release_secctx(secctx, secctx_sz); + secctx = NULL; + } t->buffer->debug_id = t->debug_id; t->buffer->transaction = t; t->buffer->target_node = target_node; trace_binder_transaction_alloc_buf(t->buffer); - off_start = (binder_size_t *)(t->buffer->data + - ALIGN(tr->data_size, sizeof(void *))); - offp = off_start; - if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t) - tr->data.ptr.buffer, tr->data_size)) { + if (binder_alloc_copy_user_to_buffer( + &target_proc->alloc, + t->buffer, 0, + (const void __user *) + (uintptr_t)tr->data.ptr.buffer, + tr->data_size)) { binder_user_error("%d:%d got transaction with invalid data ptr\n", proc->pid, thread->pid); return_error = BR_FAILED_REPLY; @@ -3155,8 +3290,13 @@ static void binder_transaction(struct binder_proc *proc, return_error_line = __LINE__; goto err_copy_data_failed; } - if (copy_from_user(offp, (const void __user *)(uintptr_t) - tr->data.ptr.offsets, tr->offsets_size)) { + if (binder_alloc_copy_user_to_buffer( + &target_proc->alloc, + t->buffer, + ALIGN(tr->data_size, sizeof(void *)), + (const void __user *) + (uintptr_t)tr->data.ptr.offsets, + tr->offsets_size)) { binder_user_error("%d:%d got transaction with invalid offsets ptr\n", proc->pid, thread->pid); return_error = BR_FAILED_REPLY; @@ -3181,17 +3321,30 @@ static void binder_transaction(struct binder_proc *proc, return_error_line = __LINE__; goto err_bad_offset; } - off_end = (void *)off_start + tr->offsets_size; - sg_bufp = (u8 *)(PTR_ALIGN(off_end, sizeof(void *))); - sg_buf_end = sg_bufp + extra_buffers_size; + off_start_offset = ALIGN(tr->data_size, sizeof(void *)); + buffer_offset = off_start_offset; + off_end_offset = off_start_offset + tr->offsets_size; + sg_buf_offset = ALIGN(off_end_offset, sizeof(void *)); + sg_buf_end_offset = sg_buf_offset + extra_buffers_size; off_min = 0; - for (; offp < off_end; offp++) { + for (buffer_offset = off_start_offset; buffer_offset < off_end_offset; + buffer_offset += sizeof(binder_size_t)) { struct binder_object_header *hdr; - size_t object_size = binder_validate_object(t->buffer, *offp); - - if (object_size == 0 || *offp < off_min) { + size_t object_size; + struct binder_object object; + binder_size_t object_offset; + + binder_alloc_copy_from_buffer(&target_proc->alloc, + &object_offset, + t->buffer, + buffer_offset, + sizeof(object_offset)); + object_size = binder_get_object(target_proc, t->buffer, + object_offset, &object); + if (object_size == 0 || object_offset < off_min) { binder_user_error("%d:%d got transaction with invalid offset (%lld, min %lld max %lld) or object.\n", - proc->pid, thread->pid, (u64)*offp, + 
proc->pid, thread->pid, + (u64)object_offset, (u64)off_min, (u64)t->buffer->data_size); return_error = BR_FAILED_REPLY; @@ -3200,8 +3353,8 @@ static void binder_transaction(struct binder_proc *proc, goto err_bad_offset; } - hdr = (struct binder_object_header *)(t->buffer->data + *offp); - off_min = *offp + object_size; + hdr = &object.hdr; + off_min = object_offset + object_size; switch (hdr->type) { case BINDER_TYPE_BINDER: case BINDER_TYPE_WEAK_BINDER: { @@ -3215,6 +3368,9 @@ static void binder_transaction(struct binder_proc *proc, return_error_line = __LINE__; goto err_translate_failed; } + binder_alloc_copy_to_buffer(&target_proc->alloc, + t->buffer, object_offset, + fp, sizeof(*fp)); } break; case BINDER_TYPE_HANDLE: case BINDER_TYPE_WEAK_HANDLE: { @@ -3228,6 +3384,9 @@ static void binder_transaction(struct binder_proc *proc, return_error_line = __LINE__; goto err_translate_failed; } + binder_alloc_copy_to_buffer(&target_proc->alloc, + t->buffer, object_offset, + fp, sizeof(*fp)); } break; case BINDER_TYPE_FD: { @@ -3243,14 +3402,23 @@ static void binder_transaction(struct binder_proc *proc, } fp->pad_binder = 0; fp->fd = target_fd; + binder_alloc_copy_to_buffer(&target_proc->alloc, + t->buffer, object_offset, + fp, sizeof(*fp)); } break; case BINDER_TYPE_FDA: { + struct binder_object ptr_object; + binder_size_t parent_offset; struct binder_fd_array_object *fda = to_binder_fd_array_object(hdr); + size_t num_valid = (buffer_offset - off_start_offset) * + sizeof(binder_size_t); struct binder_buffer_object *parent = - binder_validate_ptr(t->buffer, fda->parent, - off_start, - offp - off_start); + binder_validate_ptr(target_proc, t->buffer, + &ptr_object, fda->parent, + off_start_offset, + &parent_offset, + num_valid); if (!parent) { binder_user_error("%d:%d got transaction with invalid parent offset or type\n", proc->pid, thread->pid); @@ -3259,9 +3427,11 @@ static void binder_transaction(struct binder_proc *proc, return_error_line = __LINE__; goto err_bad_parent; } - if (!binder_validate_fixup(t->buffer, off_start, - parent, fda->parent_offset, - last_fixup_obj, + if (!binder_validate_fixup(target_proc, t->buffer, + off_start_offset, + parent_offset, + fda->parent_offset, + last_fixup_obj_off, last_fixup_min_off)) { binder_user_error("%d:%d got transaction with out-of-order buffer fixup\n", proc->pid, thread->pid); @@ -3278,14 +3448,15 @@ static void binder_transaction(struct binder_proc *proc, return_error_line = __LINE__; goto err_translate_failed; } - last_fixup_obj = parent; + last_fixup_obj_off = parent_offset; last_fixup_min_off = fda->parent_offset + sizeof(u32) * fda->num_fds; } break; case BINDER_TYPE_PTR: { struct binder_buffer_object *bp = to_binder_buffer_object(hdr); - size_t buf_left = sg_buf_end - sg_bufp; + size_t buf_left = sg_buf_end_offset - sg_buf_offset; + size_t num_valid; if (bp->length > buf_left) { binder_user_error("%d:%d got transaction with too large buffer\n", @@ -3295,9 +3466,13 @@ static void binder_transaction(struct binder_proc *proc, return_error_line = __LINE__; goto err_bad_offset; } - if (copy_from_user(sg_bufp, - (const void __user *)(uintptr_t) - bp->buffer, bp->length)) { + if (binder_alloc_copy_user_to_buffer( + &target_proc->alloc, + t->buffer, + sg_buf_offset, + (const void __user *) + (uintptr_t)bp->buffer, + bp->length)) { binder_user_error("%d:%d got transaction with invalid offsets ptr\n", proc->pid, thread->pid); return_error_param = -EFAULT; @@ -3306,14 +3481,16 @@ static void binder_transaction(struct binder_proc *proc, goto 
err_copy_data_failed; } /* Fixup buffer pointer to target proc address space */ - bp->buffer = (uintptr_t)sg_bufp + - binder_alloc_get_user_buffer_offset( - &target_proc->alloc); - sg_bufp += ALIGN(bp->length, sizeof(u64)); - - ret = binder_fixup_parent(t, thread, bp, off_start, - offp - off_start, - last_fixup_obj, + bp->buffer = (uintptr_t) + t->buffer->user_data + sg_buf_offset; + sg_buf_offset += ALIGN(bp->length, sizeof(u64)); + + num_valid = (buffer_offset - off_start_offset) * + sizeof(binder_size_t); + ret = binder_fixup_parent(t, thread, bp, + off_start_offset, + num_valid, + last_fixup_obj_off, last_fixup_min_off); if (ret < 0) { return_error = BR_FAILED_REPLY; @@ -3321,7 +3498,10 @@ static void binder_transaction(struct binder_proc *proc, return_error_line = __LINE__; goto err_translate_failed; } - last_fixup_obj = bp; + binder_alloc_copy_to_buffer(&target_proc->alloc, + t->buffer, object_offset, + bp, sizeof(*bp)); + last_fixup_obj_off = object_offset; last_fixup_min_off = 0; } break; default: @@ -3409,13 +3589,17 @@ err_bad_offset: err_bad_parent: err_copy_data_failed: trace_binder_transaction_failed_buffer_release(t->buffer); - binder_transaction_buffer_release(target_proc, t->buffer, offp); + binder_transaction_buffer_release(target_proc, t->buffer, + buffer_offset, true); if (target_node) binder_dec_node_tmpref(target_node); target_node = NULL; t->buffer->transaction = NULL; binder_alloc_free_buf(&target_proc->alloc, t->buffer); err_binder_alloc_buf_failed: + if (secctx) + security_release_secctx(secctx, secctx_sz); +err_get_secctx_failed: kfree(tcomplete); binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE); err_alloc_tcomplete_failed: @@ -3641,14 +3825,18 @@ static int binder_thread_write(struct binder_proc *proc, buffer = binder_alloc_prepare_to_free(&proc->alloc, data_ptr); - if (buffer == NULL) { - binder_user_error("%d:%d BC_FREE_BUFFER u%016llx no match\n", - proc->pid, thread->pid, (u64)data_ptr); - break; - } - if (!buffer->allow_user_free) { - binder_user_error("%d:%d BC_FREE_BUFFER u%016llx matched unreturned buffer\n", - proc->pid, thread->pid, (u64)data_ptr); + if (IS_ERR_OR_NULL(buffer)) { + if (PTR_ERR(buffer) == -EPERM) { + binder_user_error( + "%d:%d BC_FREE_BUFFER u%016llx matched unreturned or currently freeing buffer\n", + proc->pid, thread->pid, + (u64)data_ptr); + } else { + binder_user_error( + "%d:%d BC_FREE_BUFFER u%016llx no match\n", + proc->pid, thread->pid, + (u64)data_ptr); + } break; } binder_debug(BINDER_DEBUG_FREE_BUFFER, @@ -3681,7 +3869,7 @@ static int binder_thread_write(struct binder_proc *proc, binder_node_inner_unlock(buf_node); } trace_binder_transaction_buffer_release(buffer); - binder_transaction_buffer_release(proc, buffer, NULL); + binder_transaction_buffer_release(proc, buffer, 0, false); binder_alloc_free_buf(&proc->alloc, buffer); break; } @@ -4058,11 +4246,13 @@ retry: while (1) { uint32_t cmd; - struct binder_transaction_data tr; + struct binder_transaction_data_secctx tr; + struct binder_transaction_data *trd = &tr.transaction_data; struct binder_work *w = NULL; struct list_head *list = NULL; struct binder_transaction *t = NULL; struct binder_thread *t_from; + size_t trsize = sizeof(*trd); binder_inner_proc_lock(proc); if (!binder_worklist_empty_ilocked(&thread->todo)) @@ -4258,41 +4448,45 @@ retry: struct binder_node *target_node = t->buffer->target_node; struct binder_priority node_prio; - tr.target.ptr = target_node->ptr; - tr.cookie = target_node->cookie; + trd->target.ptr = target_node->ptr; + trd->cookie = 
target_node->cookie; node_prio.sched_policy = target_node->sched_policy; node_prio.prio = target_node->min_priority; binder_transaction_priority(current, t, node_prio, target_node->inherit_rt); cmd = BR_TRANSACTION; } else { - tr.target.ptr = 0; - tr.cookie = 0; + trd->target.ptr = 0; + trd->cookie = 0; cmd = BR_REPLY; } - tr.code = t->code; - tr.flags = t->flags; - tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid); + trd->code = t->code; + trd->flags = t->flags; + trd->sender_euid = from_kuid(current_user_ns(), t->sender_euid); t_from = binder_get_txn_from(t); if (t_from) { struct task_struct *sender = t_from->proc->tsk; - tr.sender_pid = task_tgid_nr_ns(sender, - task_active_pid_ns(current)); + trd->sender_pid = + task_tgid_nr_ns(sender, + task_active_pid_ns(current)); } else { - tr.sender_pid = 0; + trd->sender_pid = 0; } - tr.data_size = t->buffer->data_size; - tr.offsets_size = t->buffer->offsets_size; - tr.data.ptr.buffer = (binder_uintptr_t) - ((uintptr_t)t->buffer->data + - binder_alloc_get_user_buffer_offset(&proc->alloc)); - tr.data.ptr.offsets = tr.data.ptr.buffer + + trd->data_size = t->buffer->data_size; + trd->offsets_size = t->buffer->offsets_size; + trd->data.ptr.buffer = (uintptr_t)t->buffer->user_data; + trd->data.ptr.offsets = trd->data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *)); + tr.secctx = t->security_ctx; + if (t->security_ctx) { + cmd = BR_TRANSACTION_SEC_CTX; + trsize = sizeof(tr); + } if (put_user(cmd, (uint32_t __user *)ptr)) { if (t_from) binder_thread_dec_tmpref(t_from); @@ -4303,7 +4497,7 @@ retry: return -EFAULT; } ptr += sizeof(uint32_t); - if (copy_to_user(ptr, &tr, sizeof(tr))) { + if (copy_to_user(ptr, &tr, trsize)) { if (t_from) binder_thread_dec_tmpref(t_from); @@ -4312,7 +4506,7 @@ retry: return -EFAULT; } - ptr += sizeof(tr); + ptr += trsize; trace_binder_transaction_received(t); binder_stat_br(proc, thread, cmd); @@ -4320,16 +4514,18 @@ retry: "%d:%d %s %d %d:%d, cmd %d size %zd-%zd ptr %016llx-%016llx\n", proc->pid, thread->pid, (cmd == BR_TRANSACTION) ? "BR_TRANSACTION" : - "BR_REPLY", + (cmd == BR_TRANSACTION_SEC_CTX) ? + "BR_TRANSACTION_SEC_CTX" : "BR_REPLY", t->debug_id, t_from ? t_from->proc->pid : 0, t_from ? 
t_from->pid : 0, cmd, t->buffer->data_size, t->buffer->offsets_size, - (u64)tr.data.ptr.buffer, (u64)tr.data.ptr.offsets); + (u64)trd->data.ptr.buffer, + (u64)trd->data.ptr.offsets); if (t_from) binder_thread_dec_tmpref(t_from); t->buffer->allow_user_free = 1; - if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) { + if (cmd != BR_REPLY && !(t->flags & TF_ONE_WAY)) { binder_inner_proc_lock(thread->proc); t->to_parent = thread->transaction_stack; t->to_thread = thread; @@ -4674,7 +4870,8 @@ out: return ret; } -static int binder_ioctl_set_ctx_mgr(struct file *filp) +static int binder_ioctl_set_ctx_mgr(struct file *filp, + struct flat_binder_object *fbo) { int ret = 0; struct binder_proc *proc = filp->private_data; @@ -4703,7 +4900,7 @@ static int binder_ioctl_set_ctx_mgr(struct file *filp) } else { context->binder_context_mgr_uid = curr_euid; } - new_node = binder_new_node(proc, NULL); + new_node = binder_new_node(proc, fbo); if (!new_node) { ret = -ENOMEM; goto out; @@ -4721,6 +4918,42 @@ out: return ret; } +static int binder_ioctl_get_node_info_for_ref(struct binder_proc *proc, + struct binder_node_info_for_ref *info) +{ + struct binder_node *node; + struct binder_context *context = proc->context; + __u32 handle = info->handle; + + if (info->strong_count || info->weak_count || info->reserved1 || + info->reserved2 || info->reserved3) { + binder_user_error("%d BINDER_GET_NODE_INFO_FOR_REF: only handle may be non-zero.", + proc->pid); + return -EINVAL; + } + + /* This ioctl may only be used by the context manager */ + mutex_lock(&context->context_mgr_node_lock); + if (!context->binder_context_mgr_node || + context->binder_context_mgr_node->proc != proc) { + mutex_unlock(&context->context_mgr_node_lock); + return -EPERM; + } + mutex_unlock(&context->context_mgr_node_lock); + + node = binder_get_node_from_ref(proc, handle, true, NULL); + if (!node) + return -EINVAL; + + info->strong_count = node->local_strong_refs + + node->internal_strong_refs; + info->weak_count = node->local_weak_refs; + + binder_put_node(node); + + return 0; +} + static int binder_ioctl_get_node_debug_info(struct binder_proc *proc, struct binder_node_debug_info *info) { struct rb_node *n; @@ -4789,8 +5022,20 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) binder_inner_proc_unlock(proc); break; } + case BINDER_SET_CONTEXT_MGR_EXT: { + struct flat_binder_object fbo; + + if (copy_from_user(&fbo, ubuf, sizeof(fbo))) { + ret = -EINVAL; + goto err; + } + ret = binder_ioctl_set_ctx_mgr(filp, &fbo); + if (ret) + goto err; + break; + } case BINDER_SET_CONTEXT_MGR: - ret = binder_ioctl_set_ctx_mgr(filp); + ret = binder_ioctl_set_ctx_mgr(filp, NULL); if (ret) goto err; break; @@ -4814,6 +5059,25 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) } break; } + case BINDER_GET_NODE_INFO_FOR_REF: { + struct binder_node_info_for_ref info; + + if (copy_from_user(&info, ubuf, sizeof(info))) { + ret = -EFAULT; + goto err; + } + + ret = binder_ioctl_get_node_info_for_ref(proc, &info); + if (ret < 0) + goto err; + + if (copy_to_user(ubuf, &info, sizeof(info))) { + ret = -EFAULT; + goto err; + } + + break; + } case BINDER_GET_NODE_DEBUG_INFO: { struct binder_node_debug_info info; @@ -5271,7 +5535,7 @@ static void print_binder_transaction_ilocked(struct seq_file *m, seq_printf(m, " node %d", buffer->target_node->debug_id); seq_printf(m, " size %zd:%zd data %pK\n", buffer->data_size, buffer->offsets_size, - buffer->data); + buffer->user_data); } static void 
print_binder_work_ilocked(struct seq_file *m, @@ -5421,6 +5685,9 @@ static void print_binder_proc(struct seq_file *m, for (n = rb_first(&proc->nodes); n != NULL; n = rb_next(n)) { struct binder_node *node = rb_entry(n, struct binder_node, rb_node); + if (!print_all && !node->has_async_transaction) + continue; + /* * take a temporary reference on the node so it * survives and isn't removed from the tree diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index bec6c0a..111e640 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -17,7 +17,6 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt -#include #include #include #include @@ -28,6 +27,10 @@ #include #include #include +#include +#include +#include +#include #include "binder_alloc.h" #include "binder_trace.h" @@ -36,11 +39,12 @@ struct list_lru binder_alloc_lru; static DEFINE_MUTEX(binder_alloc_mmap_lock); enum { + BINDER_DEBUG_USER_ERROR = 1U << 0, BINDER_DEBUG_OPEN_CLOSE = 1U << 1, BINDER_DEBUG_BUFFER_ALLOC = 1U << 2, BINDER_DEBUG_BUFFER_ALLOC_ASYNC = 1U << 3, }; -static uint32_t binder_alloc_debug_mask; +static uint32_t binder_alloc_debug_mask = BINDER_DEBUG_USER_ERROR; module_param_named(debug_mask, binder_alloc_debug_mask, uint, 0644); @@ -48,7 +52,7 @@ module_param_named(debug_mask, binder_alloc_debug_mask, #define binder_alloc_debug(mask, x...) \ do { \ if (binder_alloc_debug_mask & mask) \ - pr_info(x); \ + pr_info_ratelimited(x); \ } while (0) static struct binder_buffer *binder_buffer_next(struct binder_buffer *buffer) @@ -65,9 +69,8 @@ static size_t binder_alloc_buffer_size(struct binder_alloc *alloc, struct binder_buffer *buffer) { if (list_is_last(&buffer->entry, &alloc->buffers)) - return (u8 *)alloc->buffer + - alloc->buffer_size - (u8 *)buffer->data; - return (u8 *)binder_buffer_next(buffer)->data - (u8 *)buffer->data; + return alloc->buffer + alloc->buffer_size - buffer->user_data; + return binder_buffer_next(buffer)->user_data - buffer->user_data; } static void binder_insert_free_buffer(struct binder_alloc *alloc, @@ -117,9 +120,9 @@ static void binder_insert_allocated_buffer_locked( buffer = rb_entry(parent, struct binder_buffer, rb_node); BUG_ON(buffer->free); - if (new_buffer->data < buffer->data) + if (new_buffer->user_data < buffer->user_data) p = &parent->rb_left; - else if (new_buffer->data > buffer->data) + else if (new_buffer->user_data > buffer->user_data) p = &parent->rb_right; else BUG(); @@ -134,29 +137,27 @@ static struct binder_buffer *binder_alloc_prepare_to_free_locked( { struct rb_node *n = alloc->allocated_buffers.rb_node; struct binder_buffer *buffer; - void *kern_ptr; + void __user *uptr; - kern_ptr = (void *)(user_ptr - alloc->user_buffer_offset); + uptr = (void __user *)user_ptr; while (n) { buffer = rb_entry(n, struct binder_buffer, rb_node); BUG_ON(buffer->free); - if (kern_ptr < buffer->data) + if (uptr < buffer->user_data) n = n->rb_left; - else if (kern_ptr > buffer->data) + else if (uptr > buffer->user_data) n = n->rb_right; else { /* * Guard against user threads attempting to - * free the buffer twice + * free the buffer when in use by kernel or + * after it's already been freed. 
*/ - if (buffer->free_in_progress) { - pr_err("%d:%d FREE_BUFFER u%016llx user freed buffer twice\n", - alloc->pid, current->pid, (u64)user_ptr); - return NULL; - } - buffer->free_in_progress = 1; + if (!buffer->allow_user_free) + return ERR_PTR(-EPERM); + buffer->allow_user_free = 0; return buffer; } } @@ -186,9 +187,9 @@ struct binder_buffer *binder_alloc_prepare_to_free(struct binder_alloc *alloc, } static int binder_update_page_range(struct binder_alloc *alloc, int allocate, - void *start, void *end) + void __user *start, void __user *end) { - void *page_addr; + void __user *page_addr; unsigned long user_page_addr; struct binder_lru_page *page; struct vm_area_struct *vma = NULL; @@ -224,8 +225,9 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate, } if (!vma && need_mm) { - pr_err("%d: binder_alloc_buf failed to map pages in userspace, no vma\n", - alloc->pid); + binder_alloc_debug(BINDER_DEBUG_USER_ERROR, + "%d: binder_alloc_buf failed to map pages in userspace, no vma\n", + alloc->pid); goto err_no_vma; } @@ -262,18 +264,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate, page->alloc = alloc; INIT_LIST_HEAD(&page->lru); - ret = map_kernel_range_noflush((unsigned long)page_addr, - PAGE_SIZE, PAGE_KERNEL, - &page->page_ptr); - flush_cache_vmap((unsigned long)page_addr, - (unsigned long)page_addr + PAGE_SIZE); - if (ret != 1) { - pr_err("%d: binder_alloc_buf failed to map page at %pK in kernel\n", - alloc->pid, page_addr); - goto err_map_kernel_failed; - } - user_page_addr = - (uintptr_t)page_addr + alloc->user_buffer_offset; + user_page_addr = (uintptr_t)page_addr; ret = vm_insert_page(vma, user_page_addr, page[0].page_ptr); if (ret) { pr_err("%d: binder_alloc_buf failed to map page at %lx in userspace\n", @@ -311,8 +302,6 @@ free_range: continue; err_vm_insert_page_failed: - unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE); -err_map_kernel_failed: __free_page(page->page_ptr); page->page_ptr = NULL; err_alloc_page_failed: @@ -327,6 +316,35 @@ err_no_vma: return vma ? -ENOMEM : -ESRCH; } + +static inline void binder_alloc_set_vma(struct binder_alloc *alloc, + struct vm_area_struct *vma) +{ + if (vma) + alloc->vma_vm_mm = vma->vm_mm; + /* + * If we see alloc->vma is not NULL, buffer data structures set up + * completely. Look at smp_rmb side binder_alloc_get_vma. + * We also want to guarantee new alloc->vma_vm_mm is always visible + * if alloc->vma is set. 
@@ -338,14 +356,15 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
 	struct binder_buffer *buffer;
 	size_t buffer_size;
 	struct rb_node *best_fit = NULL;
-	void *has_page_addr;
-	void *end_page_addr;
+	void __user *has_page_addr;
+	void __user *end_page_addr;
 	size_t size, data_offsets_size;
 	int ret;

-	if (alloc->vma == NULL) {
-		pr_err("%d: binder_alloc_buf, no vma\n",
-		       alloc->pid);
+	if (!binder_alloc_get_vma(alloc)) {
+		binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
+				   "%d: binder_alloc_buf, no vma\n",
+				   alloc->pid);
 		return ERR_PTR(-ESRCH);
 	}
@@ -417,11 +436,14 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
 			if (buffer_size > largest_free_size)
 				largest_free_size = buffer_size;
 		}
-		pr_err("%d: binder_alloc_buf size %zd failed, no address space\n",
-		       alloc->pid, size);
-		pr_err("allocated: %zd (num: %zd largest: %zd), free: %zd (num: %zd largest: %zd)\n",
-		       total_alloc_size, allocated_buffers, largest_alloc_size,
-		       total_free_size, free_buffers, largest_free_size);
+		binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
+				   "%d: binder_alloc_buf size %zd failed, no address space\n",
+				   alloc->pid, size);
+		binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
+				   "allocated: %zd (num: %zd largest: %zd), free: %zd (num: %zd largest: %zd)\n",
+				   total_alloc_size, allocated_buffers,
+				   largest_alloc_size, total_free_size,
+				   free_buffers, largest_free_size);
 		return ERR_PTR(-ENOSPC);
 	}
 	if (n == NULL) {
@@ -433,15 +455,15 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
 		      "%d: binder_alloc_buf size %zd got buffer %pK size %zd\n",
 		      alloc->pid, size, buffer, buffer_size);

-	has_page_addr =
-		(void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK);
+	has_page_addr = (void __user *)
+		(((uintptr_t)buffer->user_data + buffer_size) & PAGE_MASK);
 	WARN_ON(n && buffer_size != size);
 	end_page_addr =
-		(void *)PAGE_ALIGN((uintptr_t)buffer->data + size);
+		(void __user *)PAGE_ALIGN((uintptr_t)buffer->user_data + size);
 	if (end_page_addr > has_page_addr)
 		end_page_addr = has_page_addr;
-	ret = binder_update_page_range(alloc, 1,
-		(void *)PAGE_ALIGN((uintptr_t)buffer->data), end_page_addr);
+	ret = binder_update_page_range(alloc, 1, (void __user *)
+		PAGE_ALIGN((uintptr_t)buffer->user_data), end_page_addr);
 	if (ret)
 		return ERR_PTR(ret);
@@ -454,7 +476,7 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
 			   __func__, alloc->pid);
 		goto err_alloc_buf_struct_failed;
 	}
-	new_buffer->data = (u8 *)buffer->data + size;
+	new_buffer->user_data = (u8 __user *)buffer->user_data + size;
 	list_add(&new_buffer->entry, &buffer->entry);
 	new_buffer->free = 1;
 	binder_insert_free_buffer(alloc, new_buffer);
@@ -462,7 +484,7 @@ static struct binder_buffer *binder_alloc_new_buf_locked(

 	rb_erase(best_fit, &alloc->free_buffers);
 	buffer->free = 0;
-	buffer->free_in_progress = 0;
+	buffer->allow_user_free = 0;
 	binder_insert_allocated_buffer_locked(alloc, buffer);
 	binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 		     "%d: binder_alloc_buf size %zd got %pK\n",
@@ -480,8 +502,8 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
 	return buffer;

 err_alloc_buf_struct_failed:
-	binder_update_page_range(alloc, 0,
-				 (void *)PAGE_ALIGN((uintptr_t)buffer->data),
+	binder_update_page_range(alloc, 0, (void __user *)
+				 PAGE_ALIGN((uintptr_t)buffer->user_data),
 				 end_page_addr);
 	return ERR_PTR(-ENOMEM);
 }
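Note the switch from returning NULL to ERR_PTR(-EPERM) / ERR_PTR(-ESRCH) / ERR_PTR(-ENOSPC) in these paths: callers can now tell why the operation failed. The kernel's error-pointer helpers encode a small negative errno inside an invalid pointer value. A simplified standalone re-implementation of the pattern (uses -1 rather than a real errno constant to stay self-contained):

#include <stdio.h>

#define MAX_ERRNO 4095

/* Simplified equivalents of the kernel's ERR_PTR()/PTR_ERR()/IS_ERR(). */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
        return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static int obj;                          /* stand-in for a real object */

static void *lookup(int ok)
{
        if (!ok)
                return ERR_PTR(-1);      /* e.g. -EPERM in the driver */
        return &obj;
}

int main(void)
{
        void *p = lookup(0);

        if (IS_ERR(p))
                printf("failed: errno %ld\n", -PTR_ERR(p));
        return 0;
}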
*)PAGE_ALIGN((uintptr_t)buffer->data), + binder_update_page_range(alloc, 0, (void __user *) + PAGE_ALIGN((uintptr_t)buffer->user_data), end_page_addr); return ERR_PTR(-ENOMEM); } @@ -516,14 +538,15 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc, return buffer; } -static void *buffer_start_page(struct binder_buffer *buffer) +static void __user *buffer_start_page(struct binder_buffer *buffer) { - return (void *)((uintptr_t)buffer->data & PAGE_MASK); + return (void __user *)((uintptr_t)buffer->user_data & PAGE_MASK); } -static void *prev_buffer_end_page(struct binder_buffer *buffer) +static void __user *prev_buffer_end_page(struct binder_buffer *buffer) { - return (void *)(((uintptr_t)(buffer->data) - 1) & PAGE_MASK); + return (void __user *) + (((uintptr_t)(buffer->user_data) - 1) & PAGE_MASK); } static void binder_delete_free_buffer(struct binder_alloc *alloc, @@ -538,7 +561,8 @@ static void binder_delete_free_buffer(struct binder_alloc *alloc, to_free = false; binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, "%d: merge free, buffer %pK share page with %pK\n", - alloc->pid, buffer->data, prev->data); + alloc->pid, buffer->user_data, + prev->user_data); } if (!list_is_last(&buffer->entry, &alloc->buffers)) { @@ -548,23 +572,24 @@ static void binder_delete_free_buffer(struct binder_alloc *alloc, binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, "%d: merge free, buffer %pK share page with %pK\n", alloc->pid, - buffer->data, - next->data); + buffer->user_data, + next->user_data); } } - if (PAGE_ALIGNED(buffer->data)) { + if (PAGE_ALIGNED(buffer->user_data)) { binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, "%d: merge free, buffer start %pK is page aligned\n", - alloc->pid, buffer->data); + alloc->pid, buffer->user_data); to_free = false; } if (to_free) { binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, "%d: merge free, buffer %pK do not share page with %pK or %pK\n", - alloc->pid, buffer->data, - prev->data, next ? next->data : NULL); + alloc->pid, buffer->user_data, + prev->user_data, + next ? 
next->user_data : NULL); binder_update_page_range(alloc, 0, buffer_start_page(buffer), buffer_start_page(buffer) + PAGE_SIZE); } @@ -590,8 +615,8 @@ static void binder_free_buf_locked(struct binder_alloc *alloc, BUG_ON(buffer->free); BUG_ON(size > buffer_size); BUG_ON(buffer->transaction != NULL); - BUG_ON(buffer->data < alloc->buffer); - BUG_ON(buffer->data > alloc->buffer + alloc->buffer_size); + BUG_ON(buffer->user_data < alloc->buffer); + BUG_ON(buffer->user_data > alloc->buffer + alloc->buffer_size); if (buffer->async_transaction) { alloc->free_async_space += size + sizeof(struct binder_buffer); @@ -602,8 +627,9 @@ static void binder_free_buf_locked(struct binder_alloc *alloc, } binder_update_page_range(alloc, 0, - (void *)PAGE_ALIGN((uintptr_t)buffer->data), - (void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK)); + (void __user *)PAGE_ALIGN((uintptr_t)buffer->user_data), + (void __user *)(((uintptr_t) + buffer->user_data + buffer_size) & PAGE_MASK)); rb_erase(&buffer->rb_node, &alloc->allocated_buffers); buffer->free = 1; @@ -659,7 +685,6 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc, struct vm_area_struct *vma) { int ret; - struct vm_struct *area; const char *failure_string; struct binder_buffer *buffer; @@ -670,30 +695,11 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc, goto err_already_mapped; } - area = get_vm_area(vma->vm_end - vma->vm_start, VM_ALLOC); - if (area == NULL) { - ret = -ENOMEM; - failure_string = "get_vm_area"; - goto err_get_vm_area_failed; - } - alloc->buffer = area->addr; - alloc->user_buffer_offset = - vma->vm_start - (uintptr_t)alloc->buffer; + alloc->buffer = (void __user *)vma->vm_start; mutex_unlock(&binder_alloc_mmap_lock); -#ifdef CONFIG_CPU_CACHE_VIPT - if (cache_is_vipt_aliasing()) { - while (CACHE_COLOUR( - (vma->vm_start ^ (uint32_t)alloc->buffer))) { - pr_info("%s: %d %lx-%lx maps %pK bad alignment\n", - __func__, alloc->pid, vma->vm_start, - vma->vm_end, alloc->buffer); - vma->vm_start += PAGE_SIZE; - } - } -#endif - alloc->pages = kzalloc(sizeof(alloc->pages[0]) * - ((vma->vm_end - vma->vm_start) / PAGE_SIZE), + alloc->pages = kcalloc((vma->vm_end - vma->vm_start) / PAGE_SIZE, + sizeof(alloc->pages[0]), GFP_KERNEL); if (alloc->pages == NULL) { ret = -ENOMEM; @@ -709,14 +715,12 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc, goto err_alloc_buf_struct_failed; } - buffer->data = alloc->buffer; + buffer->user_data = alloc->buffer; list_add(&buffer->entry, &alloc->buffers); buffer->free = 1; binder_insert_free_buffer(alloc, buffer); alloc->free_async_space = alloc->buffer_size / 2; - barrier(); - alloc->vma = vma; - alloc->vma_vm_mm = vma->vm_mm; + binder_alloc_set_vma(alloc, vma); /* Same as mmgrab() in later kernel versions */ atomic_inc(&alloc->vma_vm_mm->mm_count); @@ -727,13 +731,13 @@ err_alloc_buf_struct_failed: alloc->pages = NULL; err_alloc_pages_failed: mutex_lock(&binder_alloc_mmap_lock); - vfree(alloc->buffer); alloc->buffer = NULL; -err_get_vm_area_failed: err_already_mapped: mutex_unlock(&binder_alloc_mmap_lock); - pr_err("%s: %d %lx-%lx %s failed %d\n", __func__, - alloc->pid, vma->vm_start, vma->vm_end, failure_string, ret); + binder_alloc_debug(BINDER_DEBUG_USER_ERROR, + "%s: %d %lx-%lx %s failed %d\n", __func__, + alloc->pid, vma->vm_start, vma->vm_end, + failure_string, ret); return ret; } @@ -744,10 +748,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc) int buffers, page_count; struct binder_buffer *buffer; - BUG_ON(alloc->vma); - buffers = 0; 
mutex_lock(&alloc->mutex); + BUG_ON(alloc->vma); + while ((n = rb_first(&alloc->allocated_buffers))) { buffer = rb_entry(n, struct binder_buffer, rb_node); @@ -773,7 +777,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc) int i; for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) { - void *page_addr; + void __user *page_addr; bool on_lru; if (!alloc->pages[i].page_ptr) @@ -786,12 +790,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc) "%s: %d: page %d at %pK %s\n", __func__, alloc->pid, i, page_addr, on_lru ? "on lru" : "active"); - unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE); __free_page(alloc->pages[i].page_ptr); page_count++; } kfree(alloc->pages); - vfree(alloc->buffer); } mutex_unlock(&alloc->mutex); if (alloc->vma_vm_mm) @@ -806,7 +808,7 @@ static void print_binder_buffer(struct seq_file *m, const char *prefix, struct binder_buffer *buffer) { seq_printf(m, "%s %d: %pK size %zd:%zd:%zd %s\n", - prefix, buffer->debug_id, buffer->data, + prefix, buffer->debug_id, buffer->user_data, buffer->data_size, buffer->offsets_size, buffer->extra_buffers_size, buffer->transaction ? "active" : "delivered"); @@ -890,7 +892,7 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc) */ void binder_alloc_vma_close(struct binder_alloc *alloc) { - WRITE_ONCE(alloc->vma, NULL); + binder_alloc_set_vma(alloc, NULL); } /** @@ -925,7 +927,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item, index = page - alloc->pages; page_addr = (uintptr_t)alloc->buffer + index * PAGE_SIZE; - vma = alloc->vma; + vma = binder_alloc_get_vma(alloc); if (vma) { if (!mmget_not_zero(alloc->vma_vm_mm)) goto err_mmget; @@ -940,10 +942,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item, if (vma) { trace_binder_unmap_user_start(alloc, index); - zap_page_range(vma, - page_addr + - alloc->user_buffer_offset, - PAGE_SIZE, NULL); + zap_page_range(vma, page_addr, PAGE_SIZE, NULL); trace_binder_unmap_user_end(alloc, index); @@ -953,7 +952,6 @@ enum lru_status binder_alloc_free_page(struct list_head *item, trace_binder_unmap_kernel_start(alloc, index); - unmap_kernel_range(page_addr, PAGE_SIZE); __free_page(page->page_ptr); page->page_ptr = NULL; @@ -1020,3 +1018,173 @@ int binder_alloc_shrinker_init(void) } return ret; } + +/** + * check_buffer() - verify that buffer/offset is safe to access + * @alloc: binder_alloc for this proc + * @buffer: binder buffer to be accessed + * @offset: offset into @buffer data + * @bytes: bytes to access from offset + * + * Check that the @offset/@bytes are within the size of the given + * @buffer and that the buffer is currently active and not freeable. + * Offsets must also be multiples of sizeof(u32). The kernel is + * allowed to touch the buffer in two cases: + * + * 1) when the buffer is being created: + * (buffer->free == 0 && buffer->allow_user_free == 0) + * 2) when the buffer is being torn down: + * (buffer->free == 0 && buffer->transaction == NULL). 
+/**
+ * check_buffer() - verify that buffer/offset is safe to access
+ * @alloc: binder_alloc for this proc
+ * @buffer: binder buffer to be accessed
+ * @offset: offset into @buffer data
+ * @bytes: bytes to access from offset
+ *
+ * Check that the @offset/@bytes are within the size of the given
+ * @buffer and that the buffer is currently active and not freeable.
+ * Offsets must also be multiples of sizeof(u32). The kernel is
+ * allowed to touch the buffer in two cases:
+ *
+ * 1) when the buffer is being created:
+ *     (buffer->free == 0 && buffer->allow_user_free == 0)
+ * 2) when the buffer is being torn down:
+ *     (buffer->free == 0 && buffer->transaction == NULL).
+ *
+ * Return: true if the buffer is safe to access
+ */
+static inline bool check_buffer(struct binder_alloc *alloc,
+				struct binder_buffer *buffer,
+				binder_size_t offset, size_t bytes)
+{
+	size_t buffer_size = binder_alloc_buffer_size(alloc, buffer);
+
+	return buffer_size >= bytes &&
+		offset <= buffer_size - bytes &&
+		IS_ALIGNED(offset, sizeof(u32)) &&
+		!buffer->free &&
+		(!buffer->allow_user_free || !buffer->transaction);
+}
+
+/**
+ * binder_alloc_get_page() - get kernel pointer for given buffer offset
+ * @alloc: binder_alloc for this proc
+ * @buffer: binder buffer to be accessed
+ * @buffer_offset: offset into @buffer data
+ * @pgoffp: address to copy final page offset to
+ *
+ * Lookup the struct page corresponding to the address
+ * at @buffer_offset into @buffer->user_data. If @pgoffp is not
+ * NULL, the byte-offset into the page is written there.
+ *
+ * The caller is responsible to ensure that the offset points
+ * to a valid address within the @buffer and that @buffer is
+ * not freeable by the user. Since it can't be freed, we are
+ * guaranteed that the corresponding elements of @alloc->pages[]
+ * cannot change.
+ *
+ * Return: struct page
+ */
+static struct page *binder_alloc_get_page(struct binder_alloc *alloc,
+					  struct binder_buffer *buffer,
+					  binder_size_t buffer_offset,
+					  pgoff_t *pgoffp)
+{
+	binder_size_t buffer_space_offset = buffer_offset +
+		(buffer->user_data - alloc->buffer);
+	pgoff_t pgoff = buffer_space_offset & ~PAGE_MASK;
+	size_t index = buffer_space_offset >> PAGE_SHIFT;
+	struct binder_lru_page *lru_page;
+
+	lru_page = &alloc->pages[index];
+	*pgoffp = pgoff;
+	return lru_page->page_ptr;
+}
+
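binder_alloc_get_page() reduces a buffer offset to an (array index, in-page offset) pair with shift-and-mask arithmetic. A worked standalone example (4K pages assumed; addresses are made up):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE  4096UL
#define PAGE_SHIFT 12
#define PAGE_MASK  (~(PAGE_SIZE - 1))

int main(void)
{
        /* buffer starts 0x1800 into the mapping; access at offset 0x900 */
        uint64_t buffer_in_pool = 0x1800;   /* user_data - alloc->buffer */
        uint64_t buffer_offset  = 0x900;
        uint64_t space_offset   = buffer_offset + buffer_in_pool; /* 0x2100 */

        printf("pages[] index: %llu\n",
               (unsigned long long)(space_offset >> PAGE_SHIFT));  /* 2     */
        printf("offset in page: %#llx\n",
               (unsigned long long)(space_offset & ~PAGE_MASK));   /* 0x100 */
        return 0;
}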
+/**
+ * binder_alloc_copy_user_to_buffer() - copy src user to tgt user
+ * @alloc: binder_alloc for this proc
+ * @buffer: binder buffer to be accessed
+ * @buffer_offset: offset into @buffer data
+ * @from: userspace pointer to source buffer
+ * @bytes: bytes to copy
+ *
+ * Copy bytes from source userspace to target buffer.
+ *
+ * Return: bytes remaining to be copied
+ */
+unsigned long
+binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
+				 struct binder_buffer *buffer,
+				 binder_size_t buffer_offset,
+				 const void __user *from,
+				 size_t bytes)
+{
+	if (!check_buffer(alloc, buffer, buffer_offset, bytes))
+		return bytes;
+
+	while (bytes) {
+		unsigned long size;
+		unsigned long ret;
+		struct page *page;
+		pgoff_t pgoff;
+		void *kptr;
+
+		page = binder_alloc_get_page(alloc, buffer,
+					     buffer_offset, &pgoff);
+		size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
+		kptr = kmap(page) + pgoff;
+		ret = copy_from_user(kptr, from, size);
+		kunmap(page);
+		if (ret)
+			return bytes - size + ret;
+		bytes -= size;
+		from += size;
+		buffer_offset += size;
+	}
+	return 0;
+}
+
+static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc,
+					bool to_buffer,
+					struct binder_buffer *buffer,
+					binder_size_t buffer_offset,
+					void *ptr,
+					size_t bytes)
+{
+	/* All copies must be 32-bit aligned and 32-bit size */
+	BUG_ON(!check_buffer(alloc, buffer, buffer_offset, bytes));
+
+	while (bytes) {
+		unsigned long size;
+		struct page *page;
+		pgoff_t pgoff;
+		void *tmpptr;
+		void *base_ptr;
+
+		page = binder_alloc_get_page(alloc, buffer,
+					     buffer_offset, &pgoff);
+		size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
+		base_ptr = kmap_atomic(page);
+		tmpptr = base_ptr + pgoff;
+		if (to_buffer)
+			memcpy(tmpptr, ptr, size);
+		else
+			memcpy(ptr, tmpptr, size);
+		/*
+		 * kunmap_atomic() takes care of flushing the cache
+		 * if this device has VIVT cache arch
+		 */
+		kunmap_atomic(base_ptr);
+		bytes -= size;
+		pgoff = 0;
+		ptr = ptr + size;
+		buffer_offset += size;
+	}
+}
+
+void binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
+				 struct binder_buffer *buffer,
+				 binder_size_t buffer_offset,
+				 void *src,
+				 size_t bytes)
+{
+	binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset,
+				    src, bytes);
+}
+
+void binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
+				   void *dest,
+				   struct binder_buffer *buffer,
+				   binder_size_t buffer_offset,
+				   size_t bytes)
+{
+	binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset,
+				    dest, bytes);
+}
+
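With these helpers, callers can read or patch transaction contents purely in terms of buffer offsets, never dereferencing user addresses. A hedged sketch of the calling convention (not a real call site from this series; 'example_fixup' and its arguments are hypothetical):

/* Copy an object out of the buffer, examine/translate it locally,
 * then write the updated copy back -- all addressed by offset. */
static void example_fixup(struct binder_alloc *alloc,
                          struct binder_buffer *buffer,
                          binder_size_t offset)
{
        struct flat_binder_object fbo;

        binder_alloc_copy_from_buffer(alloc, &fbo, buffer, offset,
                                      sizeof(fbo));
        /* ... rewrite fields as needed (e.g. handle/pointer translation) ... */
        binder_alloc_copy_to_buffer(alloc, buffer, offset, &fbo,
                                    sizeof(fbo));
}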
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index 9ef64e5..b60d161 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -22,6 +22,7 @@
 #include <linux/vmalloc.h>
 #include <linux/slab.h>
 #include <linux/list_lru.h>
+#include <linux/uaccess.h>

 extern struct list_lru binder_alloc_lru;
 struct binder_transaction;
@@ -30,16 +31,16 @@ struct binder_transaction;
 * struct binder_buffer - buffer used for binder transactions
 * @entry:              entry alloc->buffers
 * @rb_node:            node for allocated_buffers/free_buffers rb trees
- * @free:               true if buffer is free
- * @allow_user_free:    describe the second member of struct blah,
- * @async_transaction:  describe the second member of struct blah,
- * @debug_id:           describe the second member of struct blah,
- * @transaction:        describe the second member of struct blah,
- * @target_node:        describe the second member of struct blah,
- * @data_size:          describe the second member of struct blah,
- * @offsets_size:       describe the second member of struct blah,
- * @extra_buffers_size: describe the second member of struct blah,
- * @data:i              describe the second member of struct blah,
+ * @free:               %true if buffer is free
+ * @allow_user_free:    %true if user is allowed to free buffer
+ * @async_transaction:  %true if buffer is in use for an async txn
+ * @debug_id:           unique ID for debugging
+ * @transaction:        pointer to associated struct binder_transaction
+ * @target_node:        struct binder_node associated with this buffer
+ * @data_size:          size of @transaction data
+ * @offsets_size:       size of array of offsets
+ * @extra_buffers_size: size of space for other objects (like sg lists)
+ * @user_data:          user pointer to base of buffer space
 *
 * Bookkeeping structure for binder transaction buffers
 */
@@ -50,8 +51,7 @@ struct binder_buffer {
 	unsigned free:1;
 	unsigned allow_user_free:1;
 	unsigned async_transaction:1;
-	unsigned free_in_progress:1;
-	unsigned debug_id:28;
+	unsigned debug_id:29;

 	struct binder_transaction *transaction;
@@ -59,7 +59,7 @@ struct binder_buffer {
 	size_t data_size;
 	size_t offsets_size;
 	size_t extra_buffers_size;
-	void *data;
+	void __user *user_data;
 };

 /**
@@ -82,7 +82,6 @@ struct binder_lru_page {
 *                      (invariant after init)
 * @vma_vm_mm:          copy of vma->vm_mm (invarient after mmap)
 * @buffer:             base of per-proc address space mapped via mmap
- * @user_buffer_offset: offset between user and kernel VAs for buffer
 * @buffers:            list of all buffers for this proc
 * @free_buffers:       rb tree of buffers available for allocation
 *                      sorted by size
@@ -103,8 +102,7 @@ struct binder_alloc {
 	struct mutex mutex;
 	struct vm_area_struct *vma;
 	struct mm_struct *vma_vm_mm;
-	void *buffer;
-	ptrdiff_t user_buffer_offset;
+	void __user *buffer;
 	struct list_head buffers;
 	struct rb_root free_buffers;
 	struct rb_root allocated_buffers;
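With user_buffer_offset gone from struct binder_alloc, there is no kernel alias of the buffer left to convert into. Schematic comparison only, compilable against neither tree:

/* Before: the same pages were mapped twice, and any user address could
 * be turned into a kernel address with one constant delta:
 *
 *      void *kptr = (void *)(user_ptr - alloc->user_buffer_offset);
 *      memcpy(kptr, src, bytes);       // touch the kernel alias directly
 *
 * After: only the user mapping exists; the kernel resolves an offset to
 * the one backing page and copies through a temporary kmap of it:
 *
 *      binder_alloc_copy_to_buffer(alloc, buffer, offset, src, bytes);
 */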
- */ - return alloc->user_buffer_offset; -} +unsigned long +binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc, + struct binder_buffer *buffer, + binder_size_t buffer_offset, + const void __user *from, + size_t bytes); + +void binder_alloc_copy_to_buffer(struct binder_alloc *alloc, + struct binder_buffer *buffer, + binder_size_t buffer_offset, + void *src, + size_t bytes); + +void binder_alloc_copy_from_buffer(struct binder_alloc *alloc, + void *dest, + struct binder_buffer *buffer, + binder_size_t buffer_offset, + size_t bytes); #endif /* _LINUX_BINDER_ALLOC_H */ diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c index 8bd7bce..b727089 100644 --- a/drivers/android/binder_alloc_selftest.c +++ b/drivers/android/binder_alloc_selftest.c @@ -102,11 +102,12 @@ static bool check_buffer_pages_allocated(struct binder_alloc *alloc, struct binder_buffer *buffer, size_t size) { - void *page_addr, *end; + void __user *page_addr; + void __user *end; int page_index; - end = (void *)PAGE_ALIGN((uintptr_t)buffer->data + size); - page_addr = buffer->data; + end = (void __user *)PAGE_ALIGN((uintptr_t)buffer->user_data + size); + page_addr = buffer->user_data; for (; page_addr < end; page_addr += PAGE_SIZE) { page_index = (page_addr - alloc->buffer) / PAGE_SIZE; if (!alloc->pages[page_index].page_ptr || diff --git a/drivers/android/binder_trace.h b/drivers/android/binder_trace.h index b11dffc5..7df1e94 100644 --- a/drivers/android/binder_trace.h +++ b/drivers/android/binder_trace.h @@ -272,14 +272,17 @@ DECLARE_EVENT_CLASS(binder_buffer_class, __field(int, debug_id) __field(size_t, data_size) __field(size_t, offsets_size) + __field(size_t, extra_buffers_size) ), TP_fast_assign( __entry->debug_id = buf->debug_id; __entry->data_size = buf->data_size; __entry->offsets_size = buf->offsets_size; + __entry->extra_buffers_size = buf->extra_buffers_size; ), - TP_printk("transaction=%d data_size=%zd offsets_size=%zd", - __entry->debug_id, __entry->data_size, __entry->offsets_size) + TP_printk("transaction=%d data_size=%zd offsets_size=%zd extra_buffers_size=%zd", + __entry->debug_id, __entry->data_size, __entry->offsets_size, + __entry->extra_buffers_size) ); DEFINE_EVENT(binder_buffer_class, binder_transaction_alloc_buf, @@ -296,7 +299,7 @@ DEFINE_EVENT(binder_buffer_class, binder_transaction_failed_buffer_release, TRACE_EVENT(binder_update_page_range, TP_PROTO(struct binder_alloc *alloc, bool allocate, - void *start, void *end), + void __user *start, void __user *end), TP_ARGS(alloc, allocate, start, end), TP_STRUCT__entry( __field(int, proc) diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h index 5539933..0631c50 100644 --- a/include/uapi/linux/android/binder.h +++ b/include/uapi/linux/android/binder.h @@ -87,6 +87,14 @@ enum flat_binder_object_flags { * scheduling policy from the caller (for synchronous transactions). 
diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
index 5539933..0631c50 100644
--- a/include/uapi/linux/android/binder.h
+++ b/include/uapi/linux/android/binder.h
@@ -87,6 +87,14 @@ enum flat_binder_object_flags {
 	 * scheduling policy from the caller (for synchronous transactions).
 	 */
 	FLAT_BINDER_FLAG_INHERIT_RT = 0x800,
+
+	/**
+	 * @FLAT_BINDER_FLAG_TXN_SECURITY_CTX: request security contexts
+	 *
+	 * Only when set, causes senders to include their security
+	 * context
+	 */
+	FLAT_BINDER_FLAG_TXN_SECURITY_CTX = 0x1000,
 };

 #ifdef BINDER_IPC_32BIT
@@ -246,6 +254,15 @@ struct binder_node_debug_info {
 	__u32            has_weak_ref;
 };

+struct binder_node_info_for_ref {
+	__u32            handle;
+	__u32            strong_count;
+	__u32            weak_count;
+	__u32            reserved1;
+	__u32            reserved2;
+	__u32            reserved3;
+};
+
 #define BINDER_WRITE_READ		_IOWR('b', 1, struct binder_write_read)
 #define BINDER_SET_IDLE_TIMEOUT		_IOW('b', 3, __s64)
 #define BINDER_SET_MAX_THREADS		_IOW('b', 5, __u32)
@@ -254,6 +271,8 @@ struct binder_node_debug_info {
 #define BINDER_THREAD_EXIT		_IOW('b', 8, __s32)
 #define BINDER_VERSION			_IOWR('b', 9, struct binder_version)
 #define BINDER_GET_NODE_DEBUG_INFO	_IOWR('b', 11, struct binder_node_debug_info)
+#define BINDER_GET_NODE_INFO_FOR_REF	_IOWR('b', 12, struct binder_node_info_for_ref)
+#define BINDER_SET_CONTEXT_MGR_EXT	_IOW('b', 13, struct flat_binder_object)

 /*
 * NOTE: Two special error codes you should check for when calling
@@ -312,6 +331,11 @@ struct binder_transaction_data {
 	} data;
 };

+struct binder_transaction_data_secctx {
+	struct binder_transaction_data transaction_data;
+	binder_uintptr_t secctx;
+};
+
 struct binder_transaction_data_sg {
 	struct binder_transaction_data transaction_data;
 	binder_size_t buffers_size;
@@ -347,6 +371,11 @@ enum binder_driver_return_protocol {
 	BR_OK = _IO('r', 1),
 	/* No parameters! */

+	BR_TRANSACTION_SEC_CTX = _IOR('r', 2,
+				      struct binder_transaction_data_secctx),
+	/*
+	 * binder_transaction_data_secctx: the received command.
+	 */
 	BR_TRANSACTION = _IOR('r', 2, struct binder_transaction_data),
 	BR_REPLY = _IOR('r', 3, struct binder_transaction_data),
 	/*