The reason for copying %r8 to %rcx is quite non-obvious.
Add a comment which explains why it is done.
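
For reference, the two argument orders look like this in C (a sketch;
parameter names are illustrative, and the 32-bit order is the
CONFIG_CLONE_BACKWARDS one used by x86-32):

    /* 32-bit (compat) ABI handled by this stub: */
    long clone(unsigned long flags, unsigned long newsp,
               int *parent_tidptr, unsigned long tls_val,
               int *child_tidptr);

    /* Native 64-bit ABI implemented by sys_clone(): */
    long clone(unsigned long flags, unsigned long newsp,
               int *parent_tidptr, int *child_tidptr,
               unsigned long tls_val);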
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Drewry <wad@chromium.org>
Link: http://lkml.kernel.org/r/1433339930-20880-1-git-send-email-dvlasenk@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
ALIGN
GLOBAL(stub32_clone)
leaq sys_clone(%rip), %rax
+ /*
+ * The 32-bit clone API is clone(..., int tls_val, int *child_tidptr).
+ * The 64-bit clone API is clone(..., int *child_tidptr, int tls_val).
+ * The native 64-bit kernel's sys_clone() implements the latter, so the
+ * last two args must be swapped here. In the C calling convention used
+ * at this point, arg4 is in %rcx and arg5 is in %r8. Since tls_val is
+ * in fact ignored by sys_clone(), we can get away with an assignment
+ * (arg4 = arg5) instead of a full swap:
+ */
mov %r8, %rcx
jmp ia32_ptregs_common
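
In C terms, the stub behaves roughly like the sketch below (a hypothetical
illustration; stub32_clone_in_c is not a real kernel function, and the
types are simplified):

    extern long sys_clone(unsigned long flags, unsigned long newsp,
                          int *parent_tidptr, int *child_tidptr,
                          unsigned long tls_val);

    long stub32_clone_in_c(unsigned long flags, unsigned long newsp,
                           int *parent_tidptr, unsigned long tls_val,
                           int *child_tidptr)
    {
            /*
             * "mov %r8, %rcx" copies arg5 into arg4 and leaves arg5
             * untouched: sys_clone() then sees child_tidptr in the
             * right slot, plus a garbage tls value (the child_tidptr
             * pointer), which it ignores.
             */
            return sys_clone(flags, newsp, parent_tidptr,
                             child_tidptr, (unsigned long)child_tidptr);
    }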