From: Kirill A. Shutemov
Date: Tue, 6 Jun 2017 11:31:21 +0000 (+0300)
Subject: x86/asm: Fix comment in return_from_SYSCALL_64()
X-Git-Tag: v4.13-rc1~196^2~19
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=cbe0317bf10acf1f41811108ed0f9a316103c0f3;p=platform%2Fkernel%2Flinux-exynos.git

x86/asm: Fix comment in return_from_SYSCALL_64()

On x86-64, __VIRTUAL_MASK_SHIFT now depends on the paging mode, so the
comment must not hard-code the 47th bit.

Signed-off-by: Kirill A. Shutemov
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Dave Hansen
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20170606113133.22974-3-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar
---

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 4a4c083..a9a8027 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -265,7 +265,8 @@ return_from_SYSCALL_64:
 	 * If width of "canonical tail" ever becomes variable, this will need
 	 * to be updated to remain correct on both old and new CPUs.
 	 *
-	 * Change top 16 bits to be the sign-extension of 47th bit
+	 * Change top bits to match most significant bit (47th or 56th bit
+	 * depending on paging mode) in the address.
 	 */
 	shl	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
 	sar	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
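
For reference, the shl/sar pair above sign-extends from the most
significant implemented virtual-address bit (bit __VIRTUAL_MASK_SHIFT),
yielding a canonical address. Below is a minimal user-space C sketch of
the same trick; the canonicalize() helper and the VIRT_BITS constant are
hypothetical stand-ins (47 here, matching __VIRTUAL_MASK_SHIFT under
4-level paging; 56 under 5-level paging), and the code assumes
two's-complement conversion and arithmetic right shift of signed values,
as gcc and clang provide on x86-64.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for __VIRTUAL_MASK_SHIFT: 47 with 4-level
 * paging, 56 with 5-level paging. */
#define VIRT_BITS 47

/* Mirror the shl/sar pair: shift bit VIRT_BITS up into bit 63, then
 * arithmetic-shift back down so bits 63..(VIRT_BITS+1) become copies
 * of bit VIRT_BITS. */
static uint64_t canonicalize(uint64_t addr)
{
	return (uint64_t)((int64_t)(addr << (64 - (VIRT_BITS + 1)))
			  >> (64 - (VIRT_BITS + 1)));
}

int main(void)
{
	/* Bit 47 clear: a valid user address passes through unchanged. */
	printf("%#llx\n",
	       (unsigned long long)canonicalize(0x00007fffffffffffULL));
	/* Bit 47 set: the top 16 bits are filled in, producing the
	 * canonical kernel-half form 0xffff800000000000. */
	printf("%#llx\n",
	       (unsigned long long)canonicalize(0x0000800000000000ULL));
	return 0;
}

With 5-level paging the same two shifts work unchanged because the
shift count is derived from __VIRTUAL_MASK_SHIFT rather than a
hard-coded 16, which is exactly why the old "top 16 bits ... 47th bit"
wording had become stale.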