Aurelien Jarno [Sun, 24 May 2015 23:28:56 +0000 (01:28 +0200)]
target-sh4: use bit number for SR constants
Use the bit number for the SR constants instead of a bit mask. This
makes it possible to also use the constants for shifts.
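For illustration, a minimal sketch of the idea (the flag name and bit position here are illustrative, not the actual target-sh4 definitions):

    #include <stdint.h>

    /* Constants now hold bit positions rather than masks. */
    #define SR_T  0   /* illustrative position of the T flag */

    static inline uint32_t sr_get_t(uint32_t sr)
    {
        return (sr >> SR_T) & 1;      /* bit number used directly in a shift */
    }

    static inline uint32_t sr_set_t(uint32_t sr)
    {
        return sr | (1u << SR_T);     /* mask recovered as 1 << bit number */
    }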
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Aurelien Jarno [Wed, 3 Jun 2015 21:16:43 +0000 (23:16 +0200)]
sh4/r2d: convert to new MMIO accessor style
The documentation is clear that 16-bit accesses should be used for all registers.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Richard Henderson [Sat, 23 May 2015 22:06:54 +0000 (15:06 -0700)]
linux-user: Add HWCAP for SH4
Only expose FPU and LLSC, as these are the only features
supported by the translator.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Richard Henderson [Sat, 23 May 2015 22:06:53 +0000 (15:06 -0700)]
linux-user: Default sh4 to sh7785
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Peter Maydell [Tue, 9 Jun 2015 14:29:34 +0000 (15:29 +0100)]
Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20150609' into staging
Collected TCG patches
# gpg: Signature made Tue Jun 9 15:06:18 2015 BST using RSA key ID 4DD0279B
# gpg: Good signature from "Richard Henderson <rth7680@gmail.com>"
# gpg: aka "Richard Henderson <rth@redhat.com>"
# gpg: aka "Richard Henderson <rth@twiddle.net>"
* remotes/rth/tags/pull-tcg-20150609:
tcg/optimize: rename tcg_constant_folding
tcg/optimize: fold constant test in tcg_opt_gen_mov
tcg/optimize: fold temp copies test in tcg_opt_gen_mov
tcg/optimize: remove opc argument from tcg_opt_gen_mov
tcg/optimize: remove opc argument from tcg_opt_gen_movi
tcg: fix dead computation for repeated input arguments
tcg: fix register allocation with two aliased dead inputs
tcg: Handle MO_AMASK in tcg_dump_ops
tcg: Mask TCGMemOp appropriately for indexing
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Aurelien Jarno [Thu, 4 Jun 2015 19:53:27 +0000 (21:53 +0200)]
tcg/optimize: rename tcg_constant_folding
The tcg_constant_folding function ends up doing all the optimizations
(which is a good thing, as it avoids looping over all ops multiple times),
so make that clear and just rename it tcg_optimize.
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433447607-31184-6-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Aurelien Jarno [Fri, 5 Jun 2015 09:19:18 +0000 (11:19 +0200)]
tcg/optimize: fold constant test in tcg_opt_gen_mov
Most of the calls to tcg_opt_gen_mov are preceded by a test to check if
the source temp is a constant. Fold that into the tcg_opt_gen_mov
function.
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433495958-9508-1-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Aurelien Jarno [Thu, 4 Jun 2015 19:53:25 +0000 (21:53 +0200)]
tcg/optimize: fold temp copies test in tcg_opt_gen_mov
Each call to tcg_opt_gen_mov is preceded by a test to check if the
source and destination temps are copies. Fold that into the
tcg_opt_gen_mov function.
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433447607-31184-4-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Aurelien Jarno [Thu, 4 Jun 2015 19:53:24 +0000 (21:53 +0200)]
tcg/optimize: remove opc argument from tcg_opt_gen_mov
We can get the opcode from the TCGOp pointer. It needs to be
dereferenced, but that is done anyway a few lines below to write
the new value.
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433447607-31184-3-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Aurelien Jarno [Thu, 4 Jun 2015 19:53:23 +0000 (21:53 +0200)]
tcg/optimize: remove opc argument from tcg_opt_gen_movi
We can get the opcode from the TCGOp pointer. It needs to be
dereferenced, but that is done anyway a few lines below to write
the new value.
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433447607-31184-2-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Aurelien Jarno [Thu, 4 Jun 2015 19:47:08 +0000 (21:47 +0200)]
tcg: fix dead computation for repeated input arguments
When the same temp is used twice or more as an input argument to a TCG
instruction, the dead computation code doesn't recognize the second use
as a dead temp. This is because the temp is marked as live in the same
loop where dead inputs are checked.
The fix is to split the loop into two parts. This avoids emitting a move
and using a register for the movcond instruction when used as "move if
true" on x86-64. This might bring more improvements on RISC TCG targets
which don't have outputs aliased to inputs.
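A rough sketch of the resulting two-pass structure (names and data layout are simplified, not the actual TCG liveness code):

    /* Pass 1: record which inputs are dead before touching any state,
     * so a temp used twice as an input is seen as dead for both uses. */
    for (i = nb_oargs; i < nb_oargs + nb_iargs; i++) {
        if (temp_dead[args[i]]) {
            dead_args |= 1u << i;
        }
    }
    /* Pass 2: only now mark the input temps as live for earlier ops. */
    for (i = nb_oargs; i < nb_oargs + nb_iargs; i++) {
        temp_dead[args[i]] = 0;
    }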
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433447228-29425-3-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Aurelien Jarno [Thu, 4 Jun 2015 19:47:07 +0000 (21:47 +0200)]
tcg: fix register allocation with two aliased dead inputs
For TCG ops with two output registers (add2, sub2, div2, div2u), when
the same input temp is used for the two inputs aliased to the two
outputs, and when these inputs are both dead, the register allocation
code wrongly assigns the same register to both outputs.
This happens for example with sub2 t1, t2, t3, t3, t4, t5, when t3 is
not used anymore after the TCG op. In that case the same register is
used for t1, t2 and t3.
The fix is to look for an already allocated aliased input when allocating
a dead aliased input, and to check that the register is not already
used.
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433447228-29425-2-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Richard Henderson [Mon, 1 Jun 2015 21:38:56 +0000 (14:38 -0700)]
tcg: Handle MO_AMASK in tcg_dump_ops
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Richard Henderson [Fri, 29 May 2015 16:16:51 +0000 (09:16 -0700)]
tcg: Mask TCGMemOp appropriately for indexing
The addition of MO_AMASK means that places that used inverted masks
need to be changed to use positive masks, and places that failed to
mask the intended bits need updating.
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Peter Maydell [Tue, 9 Jun 2015 10:07:41 +0000 (11:07 +0100)]
Merge remote-tracking branch 'remotes/borntraeger/tags/s390x-20150609' into staging
s390x/virtio-ccw: migration and virtio for 2.4
1. Migration fixups
2. virtio 9pfs
# gpg: Signature made Tue Jun 9 09:00:05 2015 BST using RSA key ID B5A61C7C
# gpg: Good signature from "Christian Borntraeger (IBM) <borntraeger@de.ibm.com>"
* remotes/borntraeger/tags/s390x-20150609:
s390x/migration: add comment about floating point migration
s390x/kvm: always ignore empty vcpu interrupt state
virtio-ccw/migration: Migrate config vector for virtio devices
virtio-ccw: add support for 9pfs
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Peter Maydell [Tue, 9 Jun 2015 09:05:29 +0000 (10:05 +0100)]
Merge remote-tracking branch 'remotes/armbru/tags/pull-error-2015-06-09' into staging
Error reporting patches
# gpg: Signature made Tue Jun 9 06:42:15 2015 BST using RSA key ID EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>"
* remotes/armbru/tags/pull-error-2015-06-09:
vhost-user: Improve -netdev/netdev_add/-net/... error reporting
QemuOpts: Convert qemu_opt_foreach() to Error
QemuOpts: Drop qemu_opt_foreach() parameter abort_on_failure
blkdebug: Simplify passing of Error through qemu_opts_foreach()
QemuOpts: Convert qemu_opts_foreach() to Error
QemuOpts: Drop qemu_opts_foreach() parameter abort_on_failure
vl: Fail right after first bad -object
vl: Print -device help at most once
vl: Report failure to sandbox at most once
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Christian Borntraeger [Mon, 8 Jun 2015 10:21:24 +0000 (12:21 +0200)]
s390x/migration: add comment about floating point migration
Commit 46c804def4bd ("s390x: move fpu regs into a subsection
of the vmstate") moved the fprs into a subsection and bumped
the version number. This will make it possible not to transfer fprs in
the future if necessary. Add a comment to mark the "return true"
as intentional.
CC: Juan Quintela <quintela@redhat.com>
CC: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <1433758884-2997-1-git-send-email-borntraeger@de.ibm.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Markus Armbruster [Fri, 13 Mar 2015 13:17:16 +0000 (14:17 +0100)]
vhost-user: Improve -netdev/netdev_add/-net/... error reporting
When -netdev vhost-user fails, it first reports a specific error, then
one or more generic ones, like this:
$ qemu-system-x86_64 -netdev vhost-user,id=foo,chardev=xxx
qemu-system-x86_64: -netdev vhost-user,id=foo,chardev=xxx: chardev "xxx" not found
qemu-system-x86_64: -netdev vhost-user,id=foo,chardev=xxx: No suitable chardev found
qemu-system-x86_64: -netdev vhost-user,id=foo,chardev=xxx: Device 'vhost-user' could not be initialized
With the command line, the messages go to stderr. In HMP, they go to
the monitor. In QMP, the last one becomes the error reply, and the
others go to stderr.
Convert net_init_vhost_user() and its helpers to Error. This
suppresses the unwanted unspecific error messages, and makes the
specific error the QMP error reply.
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Markus Armbruster [Thu, 12 Mar 2015 07:40:25 +0000 (08:40 +0100)]
QemuOpts: Convert qemu_opt_foreach() to Error
Retain the function value for now, to permit selective conversion of
its callers.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Markus Armbruster [Thu, 12 Mar 2015 06:45:10 +0000 (07:45 +0100)]
QemuOpts: Drop qemu_opt_foreach() parameter abort_on_failure
When the argument is non-zero, qemu_opt_foreach() stops on callback
returning non-zero, and returns that value.
When the argument is zero, it doesn't stop, and returns the callback's
value from the last iteration.
The two callers that pass zero could just as well pass one:
* qemu_spice_init()'s callback add_channel() either returns zero or
exit()s.
* config_write_opts()'s callback config_write_opt() always returns
zero.
Drop the parameter, and always stop.
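The resulting iteration pattern looks roughly like this (a simplified sketch, not the actual QemuOpts code):

    typedef int (*opt_cb)(void *opaque, const char *name, const char *value);

    /* Stop at the first callback that returns non-zero and propagate that
     * value; with abort_on_failure gone, this is the only behaviour. */
    static int opt_foreach(const char *names[], const char *values[], int n,
                           opt_cb cb, void *opaque)
    {
        for (int i = 0; i < n; i++) {
            int rc = cb(opaque, names[i], values[i]);
            if (rc) {
                return rc;
            }
        }
        return 0;
    }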
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Markus Armbruster [Fri, 13 Mar 2015 12:38:42 +0000 (13:38 +0100)]
blkdebug: Simplify passing of Error through qemu_opts_foreach()
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-block@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Markus Armbruster [Fri, 13 Mar 2015 12:35:14 +0000 (13:35 +0100)]
QemuOpts: Convert qemu_opts_foreach() to Error
Retain the function value for now, to permit selective conversion of
its callers.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Markus Armbruster [Fri, 13 Mar 2015 10:07:24 +0000 (11:07 +0100)]
QemuOpts: Drop qemu_opts_foreach() parameter abort_on_failure
When the argument is non-zero, qemu_opts_foreach() stops on callback
returning non-zero, and returns that value.
When the argument is zero, it doesn't stop, and returns the bit-wise
inclusive or of all the return values. Funky :)
The callers that pass zero could just as well pass one, because their
callbacks can't return anything but zero:
* qemu_add_globals()'s callback qdev_add_one_global()
* qemu_config_write()'s callback config_write_opts()
* main()'s callbacks default_driver_check(), drive_enable_snapshot(),
vnc_init_func()
Drop the parameter, and always stop.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Markus Armbruster [Fri, 13 Mar 2015 12:08:36 +0000 (13:08 +0100)]
vl: Fail right after first bad -object
Failure to create an object with -object is a fatal error. However,
we delay the actual exit until all -object are processed. On the one
hand, this permits detection of genuine additional errors. On the
other hand, it can muddy the waters with uninteresting additional
errors, e.g. when a later -object tries to reference a prior one that
failed.
We generally stop right on the first bad option, so do that for
-object as well.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Markus Armbruster [Fri, 13 Mar 2015 12:02:03 +0000 (13:02 +0100)]
vl: Print -device help at most once
We print it once for each -device help. Not helpful. Stop after the
first one.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Markus Armbruster [Fri, 13 Mar 2015 11:59:43 +0000 (12:59 +0100)]
vl: Report failure to sandbox at most once
It's reported once per -sandbox on. Stop on the first failure, like
we do for other options.
Not fixed: "-sandbox on -sandbox off" should leave the sandbox off.
It doesn't.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Peter Maydell [Mon, 8 Jun 2015 14:57:41 +0000 (15:57 +0100)]
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* KVM error improvement from Laurent
* CONFIG_PARALLEL fix from Mirek
* Atomic/optimized dirty bitmap access from myself and Stefan
* BUILD_DIR convenience/bugfix from Peter C
* Memory leak fix from Shannon
* SMM improvements (though still TCG only) from myself and Gerd, acked by mst
# gpg: Signature made Fri Jun 5 18:45:20 2015 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (62 commits)
update Linux headers from kvm/next
atomics: add explicit compiler fence in __atomic memory barriers
ich9: implement SMI_LOCK
q35: implement TSEG
q35: add test for SMRAM.D_LCK
q35: implement SMRAM.D_LCK
q35: add config space wmask for SMRAM and ESMRAMC
q35: fix ESMRAMC default
q35: implement high SMRAM
hw/i386: remove smram_update
target-i386: use memory API to implement SMRAM
hw/i386: add a separate region that tracks the SMRAME bit
target-i386: create a separate AddressSpace for each CPU
vl: run "late" notifiers immediately
qom: add object_property_add_const_link
vl: allow full-blown QemuOpts syntax for -global
pflash_cfi01: add secure property
pflash_cfi01: change to new-style MMIO accessors
pflash_cfi01: change big-endian property to BIT type
target-i386: wake up processors that receive an SMI
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Peter Maydell [Mon, 8 Jun 2015 13:07:32 +0000 (14:07 +0100)]
Merge remote-tracking branch 'remotes/jnsnow/tags/ide-pull-request' into staging
# gpg: Signature made Fri Jun 5 20:59:07 2015 BST using RSA key ID AAFC390E
# gpg: Good signature from "John Snow (John Huston) <jsnow@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: FAEB 9711 A12C F475 812F 18F2 88A9 064D 1835 61EB
# Subkey fingerprint: F9B7 ABDB BCAC DF95 BE76 CBD0 7DEF 8106 AAFC 390E
* remotes/jnsnow/tags/ide-pull-request:
macio: remove remainder_len DBDMA_io property
macio: update comment/constants to reflect the new code
macio: switch pmac_dma_write() over to new offset/len implementation
macio: switch pmac_dma_read() over to new offset/len implementation
fdc-test: Test state for existing cases more thoroughly
fdc: Fix MSR.RQM flag
fdc: Disentangle phases in fdctrl_read_data()
fdc: Code cleanup in fdctrl_write_data()
fdc: Use phase in fdctrl_write_data()
fdc: Introduce fdctrl->phase
fdc: Rename fdctrl_set_fifo() to fdctrl_to_result_phase()
fdc: Rename fdctrl_reset_fifo() to fdctrl_to_command_phase()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Alexander Graf [Fri, 5 Jun 2015 09:05:03 +0000 (11:05 +0200)]
machine: Drop use of DEFAULT_RAM_SIZE in help text
As of commit 076b35b5a (machine: add default_ram_size to machine
class) we no longer have a global default ram size, but instead
machine-specific defaults. When invoking qemu --help we don't know
which machine you selected, so we can't tell the user the default RAM
size in the help text anymore.
Thus I don't see an easy way to expose the default ram size to the
user in the help text. The easiest option IMHO is to just drop this
piece of information.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Acked-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Acked-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Message-id: 1433495103-62084-1-git-send-email-agraf@suse.de
[PMM: rewrapped long commit message lines]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Markus Armbruster [Mon, 8 Jun 2015 08:44:30 +0000 (10:44 +0200)]
monitor: Fix QMP ABI breakage around "id"
Commit 65207c5 accidentally dropped a line of code we need along with
a comment that became wrong then. This made QMP reject "id":
{"execute": "system_reset", "id": "1"}
{"error": {"class": "GenericError", "desc": "QMP input object member 'id' is unexpected"}}
Put the lost line right back, so QMP again accepts and returns "id",
as promised by the ABI:
{"execute": "system_reset", "id": "1"}
{"return": {}, "id": "1"}
Reported-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Don Slutz <dslutz@verizon.com>
Tested-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Tested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1433753070-12632-2-git-send-email-armbru@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Paolo Bonzini [Thu, 4 Jun 2015 14:38:29 +0000 (16:38 +0200)]
update Linux headers from kvm/next
This is kvm.git commit 05ff30bb56c6b3d3000519d6e02ed35678ddae3b.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 3 Jun 2015 12:21:20 +0000 (14:21 +0200)]
atomics: add explicit compiler fence in __atomic memory barriers
__atomic_thread_fence does not include a compiler barrier; in the
C++11 memory model, fences take effect in combination with other
atomic operations. GCC implements this by making __atomic_load and
__atomic_store access memory as if the pointer was volatile, and
leaves no trace whatsoever of acquire and release fences in the
compiler's intermediate representation.
In QEMU, we want memory barriers to act on all memory, but at the same
time we would like to use __atomic_thread_fence for portability reasons.
Add compiler barriers manually around the __atomic_thread_fence.
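The resulting pattern is roughly this (a sketch, not the exact QEMU macros):

    /* Compiler-only barrier: emits no instruction, but stops the compiler
     * from moving memory accesses across this point. */
    #define barrier()  __asm__ __volatile__("" ::: "memory")

    /* Wrap the hardware fence with compiler barriers so it also orders
     * ordinary (non-__atomic) loads and stores. */
    #define smp_mb() \
        ({ barrier(); __atomic_thread_fence(__ATOMIC_SEQ_CST); barrier(); })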
Message-Id: <1433334080-14912-1-git-send-email-pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Gerd Hoffmann [Wed, 6 May 2015 08:58:30 +0000 (10:58 +0200)]
ich9: implement SMI_LOCK
Add a write mask for the smi enable register, so we can disable write
access to certain bits. Open all bits on reset. Disable write access
to GBL_SMI_EN when SMI_LOCK (in ich9 lpc pci config space) is set.
Write access to SMI_LOCK itself is disabled too.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Gerd Hoffmann [Mon, 20 Apr 2015 08:55:09 +0000 (10:55 +0200)]
q35: implement TSEG
TSEG provides larger amounts of SMRAM than the 128 KB available with
legacy SMRAM and high SMRAM.
Route access to tseg into nowhere when enabled, for both cpus and
busmaster dma, and add tseg window to smram region, so cpus can access
it in smm mode.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Gerd Hoffmann [Tue, 14 Apr 2015 13:11:36 +0000 (15:11 +0200)]
q35: add test for SMRAM.D_LCK
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
[Fix compilation of the newly introduced test. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Gerd Hoffmann [Tue, 14 Apr 2015 12:03:22 +0000 (14:03 +0200)]
q35: implement SMRAM.D_LCK
Once the SMRAM.D_LCK bit has been set by the guest, several bits in SMRAM
and ESMRAMC become readonly until the next machine reset. Implement
this by updating the wmask accordingly when the guest sets the lock bit.
As the lock bit itself is locked down too, we don't need to worry about
the guest clearing the lock bit.
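The mechanism boils down to something like this (register offsets, bit values and the scope of the masked bits are illustrative, not the exact MCH definitions):

    #include <stdint.h>

    #define MCH_SMRAM     0x9d   /* illustrative config-space offsets */
    #define MCH_ESMRAMC   0x9e
    #define SMRAM_D_LCK   0x10   /* illustrative D_LCK bit value */

    /* Once the guest sets D_LCK, clear the writable-mask bits so later
     * guest writes to the locked fields are ignored until machine reset.
     * (In reality only some bits of each register become read-only.) */
    static void smram_lock(uint8_t *wmask, uint8_t smram_val)
    {
        if (smram_val & SMRAM_D_LCK) {
            wmask[MCH_SMRAM]   = 0;
            wmask[MCH_ESMRAMC] = 0;
        }
    }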
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Gerd Hoffmann [Wed, 15 Apr 2015 14:48:12 +0000 (16:48 +0200)]
q35: add config space wmask for SMRAM and ESMRAMC
Not all bits in SMRAM and ESMRAMC can be changed by the guest.
Add wmask defines accordingly and set them in mch_reset().
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Gerd Hoffmann [Wed, 15 Apr 2015 14:43:24 +0000 (16:43 +0200)]
q35: fix ESMRAMC default
The cache bits in ESMRAMC are hardcoded to 1 (=disabled) according to
the q35 mch specs. Add and use a define with this default.
While at it, also update the SMRAM default to use the name (no code
change, just makes things a bit more readable).
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 31 Mar 2015 15:13:01 +0000 (17:13 +0200)]
q35: implement high SMRAM
When H_SMRAME is 1, low memory at 0xa0000 is left alone by
SMM, and instead the chipset maps the 0xa0000-0xbffff window at
0xfeda0000-0xfedbffff. This affects both the "non-SMM" view controlled
by D_OPEN and the SMM view controlled by G_SMRAME, so add two new
MemoryRegions and toggle the enabled/disabled state of all four
in mch_update_smram.
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 31 Mar 2015 12:14:28 +0000 (14:14 +0200)]
hw/i386: remove smram_update
It's easier to inline it now that most of its work is done by the CPU
(rather than the chipset) through /machine/smram and the memory API.
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 31 Mar 2015 12:12:25 +0000 (14:12 +0200)]
target-i386: use memory API to implement SMRAM
Remove cpu_smm_register and cpu_smm_update. Instead, each CPU
address space gets an extra region which is an alias of
/machine/smram. This extra region is enabled or disabled
as the CPU enters/exits SMM.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 31 Mar 2015 12:10:22 +0000 (14:10 +0200)]
hw/i386: add a separate region that tracks the SMRAME bit
This region is exported at /machine/smram. It is "empty" if
SMRAME=0 and points to SMRAM if SMRAME=1. The CPU will
enable/disable it as it enters or exits SMRAM.
While touching nearby code, the existing memory region setup was
slightly inconsistent. The smram_region is *disabled* in order to open
SMRAM (because the smram_region shows the low VRAM instead of the RAM
at 0xa0000). Because SMRAM is closed at startup, the smram_region must
be enabled when creating the i440fx or q35 devices.
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 31 Mar 2015 12:11:09 +0000 (14:11 +0200)]
target-i386: create a separate AddressSpace for each CPU
Different CPUs can be in SMM or not at the same time, thus they
will see different things where the chipset places SMRAM.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 31 Mar 2015 12:01:06 +0000 (14:01 +0200)]
vl: run "late" notifiers immediately
If a machine_init_done notifier is added late, as part of a hot-plugged
device, run it immediately.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 5 May 2015 16:29:00 +0000 (18:29 +0200)]
qom: add object_property_add_const_link
Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Thu, 9 Apr 2015 12:16:19 +0000 (14:16 +0200)]
vl: allow full-blown QemuOpts syntax for -global
-global does not work for drivers that have a dot in their name, such as
cfi.pflash01. This is just a parsing limitation, because such globals
can be declared easily inside a -readconfig file.
To allow this usage, support the full QemuOpts key/value syntax for -global
too, for example "-global driver=cfi.pflash01,property=secure,value=on".
The two formats do not conflict, because the key/value syntax does not have
a period before the first equal sign.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 8 Apr 2015 12:09:43 +0000 (14:09 +0200)]
pflash_cfi01: add secure property
When this property is set, MMIO accesses are only allowed with the
MEMTXATTRS_SECURE attribute. This is used for secure access to UEFI
variables stored in flash.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 8 Apr 2015 12:00:53 +0000 (14:00 +0200)]
pflash_cfi01: change to new-style MMIO accessors
This is a required step to implement read_with_attrs and write_with_attrs.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 8 Apr 2015 11:53:29 +0000 (13:53 +0200)]
pflash_cfi01: change big-endian property to BIT type
Make this consistent with the secure property, added in the next patch.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 19 May 2015 11:46:47 +0000 (13:46 +0200)]
target-i386: wake up processors that receive an SMI
An SMI should definitely wake up a processor in halted state!
This lets OVMF boot with SMM on multiprocessor systems, although
it halts very soon after that with a "CpuIndex != BspIndex"
assertion failure.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Thu, 30 Apr 2015 10:02:46 +0000 (12:02 +0200)]
target-i386: set G=1 in SMM big real mode selectors
Because the limit field's bits 31:20 are 1, G should be 1.
VMX actually enforces this; let's do it for completeness
in QEMU as well.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 22 Apr 2015 09:40:41 +0000 (11:40 +0200)]
target-i386: mask NMIs on entry to SMM
QEMU is not blocking NMIs on entry to SMM. Implementing this has to
cover a few corner cases, because:
- NMIs can then be enabled by an IRET instruction and there
is no mechanism to _set_ the "NMIs masked" flag on exit from SMM:
"A special case can occur if an SMI handler nests inside an NMI handler
and then another NMI occurs. [...] When the processor enters SMM while
executing an NMI handler, the processor saves the SMRAM state save map
but does not save the attribute to keep NMI interrupts disabled."
- However, there is some hidden state, because "If NMIs were blocked
before the SMI occurred [and no IRET is executed while in SMM], they
are blocked after execution of RSM." This is represented by the new
HF2_SMM_INSIDE_NMI_MASK bit. If it is zero, NMIs are _unblocked_
on exit from RSM.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 8 Apr 2015 12:45:53 +0000 (14:45 +0200)]
target-i386: Use correct memory attributes for ioport accesses
In order to do this, stop using the cpu_in*/out* helpers, and instead
access address_space_io directly.
cpu_in* and cpu_out* remain for usage in the monitor, in qtest, and
in Xen.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 8 Apr 2015 11:39:37 +0000 (13:39 +0200)]
target-i386: Use correct memory attributes for memory accesses
These include page table walks, SVM accesses and SMM state save accesses.
The bulk of the patch is obtained with
sed -i 's/\(\<[a-z_]*_phys\(_notdirty\)\?\>(cs\)->as,/x86_\1,/'
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 8 Apr 2015 12:52:04 +0000 (14:52 +0200)]
target-i386: introduce cpu_get_mem_attrs
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Victor CLEMENT [Fri, 29 May 2015 15:14:06 +0000 (17:14 +0200)]
icount: print a warning if there is no more deadline in sleep=no mode
While qemu is running in sleep=no mode, a warning will be printed
when no timer deadline is set.
As this mode is intended for getting deterministic virtual time, if no
timer is set on the virtual clock this determinism is broken.
Signed-off-by: Victor CLEMENT <victor.clement@openwide.fr>
Message-Id: <1432912446-9811-4-git-send-email-victor.clement@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Victor CLEMENT [Fri, 29 May 2015 15:14:05 +0000 (17:14 +0200)]
icount: add sleep parameter to the icount option to set icount_sleep mode
The 'sleep' parameter sets the icount_sleep mode, which is enabled by
default. To disable it, add the 'sleep=no' parameter (or 'nosleep') to the
qemu -icount option.
Signed-off-by: Victor CLEMENT <victor.clement@openwide.fr>
Message-Id: <1432912446-9811-3-git-send-email-victor.clement@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Victor CLEMENT [Fri, 29 May 2015 15:14:04 +0000 (17:14 +0200)]
icount: implement a new icount_sleep mode toggling real-time cpu sleep
When the icount_sleep mode is disabled, the QEMU_VIRTUAL_CLOCK runs at the
maximum possible speed by warping the sleep times of the virtual cpu to the
soonest clock deadline. The virtual clock will be updated only according to
the instruction counter.
Signed-off-by: Victor CLEMENT <victor.clement@openwide.fr>
Message-Id: <1432912446-9811-2-git-send-email-victor.clement@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Sun, 29 Mar 2015 07:31:43 +0000 (09:31 +0200)]
memory: use mr->ram_addr in "is this RAM?" assertions
mr->terminates alone doesn't guarantee that we are looking at a RAM region.
mr->ram_addr also has to be checked, in order to distinguish RAM and I/O
regions.
So, do the following:
1) add a new define RAM_ADDR_INVALID, and test it in the assertions
instead of mr->terminates
2) IOMMU regions were not setting mr->ram_addr to a bogus value; initialize
it in the instance_init function so that the new assertions also fire
for IOMMU regions.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Stefan Hajnoczi [Tue, 2 Dec 2014 11:23:19 +0000 (11:23 +0000)]
memory: make cpu_physical_memory_sync_dirty_bitmap() fully atomic
The fast path of cpu_physical_memory_sync_dirty_bitmap() directly
manipulates the dirty bitmap. Use atomic_xchg() to make the
test-and-clear atomic.
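The fast path then reduces to a per-word test-and-clear, roughly as follows (a sketch using the GCC builtin rather than QEMU's atomic_xchg wrapper):

    /* Atomically fetch and clear one word of the dirty bitmap; every bit
     * that was set is observed as dirty exactly once. Clean words skip
     * the exchange entirely. */
    static unsigned long dirty_word_test_and_clear(unsigned long *word)
    {
        if (*word == 0) {
            return 0;
        }
        return __atomic_exchange_n(word, 0, __ATOMIC_SEQ_CST);
    }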
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-7-git-send-email-stefanha@redhat.com>
[Only do xchg on nonzero words. - Paolo]
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Stefan Hajnoczi [Tue, 2 Dec 2014 11:23:18 +0000 (11:23 +0000)]
memory: replace cpu_physical_memory_reset_dirty() with test-and-clear
The cpu_physical_memory_reset_dirty() function is sometimes used
together with cpu_physical_memory_get_dirty(). This is not atomic since
two separate accesses to the dirty memory bitmap are made.
Turn cpu_physical_memory_reset_dirty() and
cpu_physical_memory_clear_dirty_range_type() into the atomic
cpu_physical_memory_test_and_clear_dirty().
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-6-git-send-email-stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Stefan Hajnoczi [Tue, 2 Dec 2014 11:23:17 +0000 (11:23 +0000)]
migration: move dirty bitmap sync to ram_addr.h
The dirty memory bitmap is managed by ram_addr.h and copied to
migration_bitmap[] periodically during live migration.
Move the code to sync the bitmap to ram_addr.h where related code lives.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-5-git-send-email-stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Stefan Hajnoczi [Tue, 2 Dec 2014 11:23:16 +0000 (11:23 +0000)]
memory: use atomic ops for setting dirty memory bits
Use set_bit_atomic() and bitmap_set_atomic() so that multiple threads
can dirty memory without race conditions.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-4-git-send-email-stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Stefan Hajnoczi [Tue, 2 Dec 2014 11:23:15 +0000 (11:23 +0000)]
bitmap: add atomic test and clear
The new bitmap_test_and_clear_atomic() function clears a range and
returns whether or not the bits were set.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-3-git-send-email-stefanha@redhat.com>
[Test before xchg; then a full barrier is needed at the end just like
in the previous patch. The barrier can be avoided if we did at least
one xchg. - Paolo]
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Stefan Hajnoczi [Tue, 2 Dec 2014 11:23:14 +0000 (11:23 +0000)]
bitmap: add atomic set functions
Use atomic_or() for atomic bitmaps where several threads may set bits at
the same time. This avoids the race condition between threads loading
an element, bitwise ORing, and then storing the element.
When setting all bits in a word we can avoid atomic ops and instead just
use an smp_mb() at the end.
Most bitmap users don't need atomicity so introduce new functions.
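A minimal sketch of the single-bit case (simplified, not the actual QEMU bitmap code):

    #define BITS_PER_LONG  (8 * sizeof(unsigned long))

    /* An atomic OR means concurrent setters cannot lose each other's
     * updates; a run of full words can instead use plain stores followed
     * by one full memory barrier, as described above. */
    static void set_bit_atomic_sketch(long nr, unsigned long *map)
    {
        __atomic_fetch_or(&map[nr / BITS_PER_LONG],
                          1UL << (nr % BITS_PER_LONG), __ATOMIC_SEQ_CST);
    }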
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-2-git-send-email-stefanha@redhat.com>
[Avoid barrier in the single word case, use full barrier instead of write.
- Paolo]
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 23 Mar 2015 10:41:32 +0000 (11:41 +0100)]
memory: do not touch code dirty bitmap unless TCG is enabled
cpu_physical_memory_set_dirty_lebitmap unconditionally syncs the
DIRTY_MEMORY_CODE bitmap. This however is unused unless TCG is
enabled.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 25 Mar 2015 14:21:39 +0000 (15:21 +0100)]
exec: only check relevant bitmaps for cleanliness
Most of the time, not all bitmaps have to be marked as dirty;
do not do anything if the interesting ones are already dirty.
Previously, any clean bitmap would have caused all the bitmaps to be
marked dirty.
In fact, unless running TCG, most of the time bitmap operations need
not be done at all, because memory_region_is_logging returns zero.
In this case, skip the call to cpu_physical_memory_range_includes_clean
altogether as well.
With this patch, cpu_physical_memory_set_dirty_range is called
unconditionally, so there need not be anymore a separate call to
xen_modified_memory.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 22 Apr 2015 11:48:25 +0000 (13:48 +0200)]
exec: invert return value of cpu_physical_memory_get_clean, rename
While it is obvious that cpu_physical_memory_get_dirty returns true even if
a single page is dirty, the same is not true for cpu_physical_memory_get_clean;
one would expect that it returns true only if all the pages are clean, but
it actually looks for even one clean page. (By contrast, the caller of that
function, cpu_physical_memory_range_includes_clean, has a good name).
To clarify, rename the function to cpu_physical_memory_all_dirty and return
true if _all_ the pages are dirty. This is the opposite of the previous
meaning, because "all are 1" is the same as "not (any is 0)", so we have to
modify cpu_physical_memory_range_includes_clean as well.
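In terms of the underlying bitmap, the renamed predicate reads roughly like this (a simplified sketch):

    #include <stdbool.h>

    #define BITS_PER_LONG  (8 * sizeof(unsigned long))

    /* True only if every page in [0, nbits) is dirty; the old helper
     * returned true if any page was clean, i.e. the logical negation. */
    static bool all_dirty(const unsigned long *map, long nbits)
    {
        for (long i = 0; i < nbits; i++) {
            if (!(map[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))) {
                return false;   /* found a clean page */
            }
        }
        return true;
    }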
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 23 Mar 2015 10:56:01 +0000 (11:56 +0100)]
exec: pass client mask to cpu_physical_memory_set_dirty_range
This cuts in half the cost of bitmap operations (which will become more
expensive when made atomic) during migration on non-VRAM regions.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 22 Apr 2015 12:20:35 +0000 (14:20 +0200)]
translate-all: make less of tb_invalidate_phys_page_range depend on is_cpu_write_access
is_cpu_write_access is only set if tb_invalidate_phys_page_range is called
from tb_invalidate_phys_page_fast, and hence from notdirty_mem_write.
However:
- the code bitmap can be built directly in tb_invalidate_phys_page_fast
(unconditionally, since is_cpu_write_access would always be passed as 1);
- the virtual address is not needed to mark the page as "not containing
code" (dirty code bitmap = 1), so we can also remove that use of
is_cpu_write_access. For calls of tb_invalidate_phys_page_range
that do not come from notdirty_mem_write, the next call to
notdirty_mem_write will notice that the page does not contain code
anymore, and will fix up the TLB entry.
The parameter needs to remain in order to guard accesses to cpu->mem_io_pc.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 22 Apr 2015 12:24:54 +0000 (14:24 +0200)]
cputlb: remove useless arguments to tlb_unprotect_code_phys, rename
These days modification of the TLB is done in notdirty_mem_write,
so the virtual address and env pointer are unnecessary.
The new name of the function, tlb_unprotect_code, is consistent with
tlb_protect_code.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 22 Apr 2015 12:20:35 +0000 (14:20 +0200)]
translate-all: remove unnecessary argument to tb_invalidate_phys_range
The is_cpu_write_access argument is always 0, remove it.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 22 Apr 2015 12:15:48 +0000 (14:15 +0200)]
exec: move functions to translate-all.h
Remove them from the sundry exec-all.h header, since they are only used by
the TCG runtime in exec.c and user-exec.c.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 23 Mar 2015 10:45:53 +0000 (11:45 +0100)]
exec: use memory_region_get_dirty_log_mask to optimize dirty tracking
The memory API can now return the exact set of bitmaps that have to
be tracked. Use it instead of the in_migration variable.
In the next patches, we will also use it to set only DIRTY_MEMORY_VGA
or DIRTY_MEMORY_MIGRATION if necessary. This can make a difference
for dataplane, especially after the dirty bitmap is changed to use
more expensive atomic operations.
Of some interest is the change to stl_phys_notdirty. When migration
was introduced, stl_phys_notdirty was changed to effectively behave
as stl_phys during migration. In fact, if one looks at the function as it
was in the beginning (commit 8df1cd0, physical memory access functions,
2005-01-28), at the time the dirty bitmap was the equivalent of
DIRTY_MEMORY_CODE nowadays; hence, the function simply should not touch
the dirty code bits. This patch changes it to do the intended thing.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 23 Mar 2015 10:35:19 +0000 (11:35 +0100)]
ram_addr: tweaks to xen_modified_memory
Invoke xen_modified_memory from cpu_physical_memory_set_dirty_range_nocode;
it is akin to DIRTY_MEMORY_MIGRATION, so set it together with that bitmap.
The remaining call from invalidate_and_set_dirty's "else" branch will go
away soon.
Second, fix the second argument to the function in the
cpu_physical_memory_set_dirty_lebitmap call site. That function is only used
by KVM, but it is better to be clean anyway.
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 23 Mar 2015 09:57:21 +0000 (10:57 +0100)]
kvm: remove special handling of DIRTY_MEMORY_MIGRATION in the dirty log mask
One recent example is commit 4cc856f (kvm-all: Sync dirty-bitmap from
kvm before kvm destroy the corresponding dirty_bitmap, 2015-04-02).
Another performance problem is that KVM keeps tracking dirty pages
after a failed live migration, which causes bad performance due to
disallowing huge page mapping.
Thanks to the previous patch, KVM can now stop hooking into
log_global_start/stop. This simplifies the KVM code noticeably.
Reported-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Reported-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 23 Mar 2015 09:57:21 +0000 (10:57 +0100)]
memory: include DIRTY_MEMORY_MIGRATION in the dirty log mask
The separate handling of DIRTY_MEMORY_MIGRATION, which does not
call log_start/log_stop callbacks when it changes in a region's
dirty logging mask, has caused several bugs.
One recent example is commit 4cc856f (kvm-all: Sync dirty-bitmap from
kvm before kvm destroy the corresponding dirty_bitmap, 2015-04-02).
Another performance problem is that KVM keeps tracking dirty pages
after a failed live migration, which causes bad performance due to
disallowing huge page mapping.
This patch removes the root cause of the problem by reporting
DIRTY_MEMORY_MIGRATION changes via log_start and log_stop.
Note that we now have to rebuild the FlatView when global dirty
logging is enabled or disabled; this ensures that log_start and
log_stop callbacks are invoked.
This will also be used to make the setting of bitmaps conditional.
In general, this patch lets users of the memory API ignore the
global state of dirty logging if they handle dirty logging
generically per region.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 27 Apr 2015 12:51:31 +0000 (14:51 +0200)]
kvm: accept non-mapped memory in kvm_dirty_pages_log_change
It is okay if memory is not mapped into the guest but has dirty logging
enabled. When this happens, KVM will not do anything and only accesses
from the host will be logged.
This can be triggered by iofuzz.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 23 Mar 2015 09:53:21 +0000 (10:53 +0100)]
memory: track DIRTY_MEMORY_CODE in mr->dirty_log_mask
DIRTY_MEMORY_CODE is only needed for TCG. By adding it directly to
mr->dirty_log_mask, we avoid testing for TCG everywhere a region is
checked for the enabled/disabled state of dirty logging.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 22 Apr 2015 11:30:19 +0000 (13:30 +0200)]
ui/console: remove dpy_gfx_update_dirty
dpy_gfx_update_dirty expects DIRTY_MEMORY_VGA logging to be always on,
but that will not be the case soon. Because it computes the memory
region on the fly for every update (with memory_region_find), it cannot
enable/disable logging by itself.
We could always treat updates as invalidations if dirty logging is
not enabled, assuming that the board will enable logging on the
RAM region that includes the framebuffer.
However, the function is unused, so just drop it.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 23 Mar 2015 09:46:52 +0000 (10:46 +0100)]
framebuffer: check memory_region_is_logging
framebuffer.c expects DIRTY_MEMORY_VGA logging to be always on, but that
will not be the case soon. Because framebuffer.c computes the memory
region on the fly for every update (with memory_region_find), it cannot
enable/disable logging by itself.
Instead, always treat updates as invalidations if dirty logging is
not enabled, assuming that the board will enable logging on the
RAM region that includes the framebuffer.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Sat, 25 Apr 2015 12:38:30 +0000 (14:38 +0200)]
memory: prepare for multiple bits in the dirty log mask
When the dirty log mask will also cover other bits than DIRTY_MEMORY_VGA,
some listeners may be interested in the overall zero/non-zero value of
the dirty log mask; others may be interested in the value of single bits.
For this reason, always call log_start/log_stop if bits have respectively
appeared or disappeared, and pass the old and new values of the dirty log
mask so that listeners can distinguish the kinds of change.
For example, KVM checks if dirty logging used to be completely disabled
(in log_start) or is now completely disabled (in log_stop). On the
other hand, Xen has to check manually if DIRTY_MEMORY_VGA changed,
since that is the only bit it cares about.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 23 Mar 2015 09:50:57 +0000 (10:50 +0100)]
memory: differentiate memory_region_is_logging and memory_region_get_dirty_log_mask
For now memory regions only track DIRTY_MEMORY_VGA individually, but
this will change soon. To support this, split memory_region_is_logging
in two functions: one that returns a given bit from dirty_log_mask,
and one that returns the entire mask. memory_region_is_logging gets an
extra parameter so that the compiler flags misuse.
While VGA-specific users (including the Xen listener!) will want to keep
checking that bit, KVM and vhost check for "any bit except migration"
(because migration is handled via the global start/stop listener
callbacks).
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 22 Apr 2015 11:12:40 +0000 (13:12 +0200)]
display: add memory_region_sync_dirty_bitmap calls
These are strictly speaking only needed for KVM and Xen, but it's still
nice to be consistent.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 23 Mar 2015 09:47:45 +0000 (10:47 +0100)]
display: enable DIRTY_MEMORY_VGA tracking explicitly
This will be required soon by the memory core.
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 22 Apr 2015 10:43:24 +0000 (12:43 +0200)]
g364fb: remove pointless call to memory_region_set_coalescing
Coalescing works on MMIO, not RAM, so this call has no effect.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 23 Mar 2015 09:31:53 +0000 (10:31 +0100)]
memory: the only dirty memory flag for users is DIRTY_MEMORY_VGA
DIRTY_MEMORY_MIGRATION is triggered by memory_global_dirty_log_start
and memory_global_dirty_log_stop, so it cannot be used with
memory_region_set_log.
Specify this in the documentation and assert it.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Peter Crosthwaite [Tue, 26 May 2015 05:38:06 +0000 (22:38 -0700)]
Makefile.target: set master BUILD_DIR
make can be invoked in the individual build dirs to build an individual
target or just a single file of a target. e.g.
touch translate-all.c
make -C microblazeel-softmmu translate-all.o
There is however a small bug when using the pixman submodule.
config-host.mak will ref BUILD_DIR for the pixman -I CFLAGS:
grep BUILD_DIR config-host.mak
QEMU_CFLAGS=-I$(SRC_PATH)/pixman/pixman -I$(BUILD_DIR)/pixman/pixman ...
This causes a build failure as -I/pixman/pixman (BUILD_DIR=="") will
not be found.
BUILD_DIR is usually set by the top level Makefile. Just lazy-set it in
Makefile.target to the parent directory.
Granted, this will not work if the pixman submodule is not prebuilt,
but it at least means you can do incremental partial builds once you
have done your initial full build (or attempt) from the top level.
The next step would be refactor make infrastructure to rebuild pixman
on a submake like the one above.
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <1432618686-16077-1-git-send-email-crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Thu, 21 May 2015 13:12:29 +0000 (15:12 +0200)]
exec: optimize phys_page_set_level
phys_page_set_level is writing zeroes to a struct that has just been
filled in by phys_map_node_alloc. Instead, tell phys_map_node_alloc
whether to fill in the page "as a leaf" or "as a non-leaf".
memcpy is faster than struct assignment, which copies each bitfield
individually. This is a compiler bug (https://gcc.gnu.org/PR66391), but
small memcpys like this one are special-cased anyway, and optimized
to a register move, so just use the memcpy.
This cuts the cost of phys_page_set_level from 25% to 5% when
booting qboot.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Fam Zheng [Tue, 19 May 2015 10:50:59 +0000 (10:50 +0000)]
qemu-nbd: Switch to qemu_set_fd_handler
Achieved by:
- Remembering the server fd with a global variable, in order to access
it from nbd_client_closed.
- Checking nbd_can_accept() and updating server_fd handler whenever
client connects or disconnects.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1432032670-15124-3-git-send-email-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Laurent Vivier [Mon, 18 May 2015 19:06:47 +0000 (21:06 +0200)]
ppc: add helpful message when KVM fails to start VCPU
On POWER8 systems, KVM checks that the VCPU is running on a primary thread
and that the secondary threads are offline. If this is not the case,
the ioctl() fails with errno set to EBUSY.
QEMU aborts with a non-explicit error message:
$ ./qemu-system-ppc64 --nographic -machine pseries,accel=kvm
error: kvm run failed Device or resource busy
To help the user diagnose the problem, this patch adds an informative
error message.
There is no easy way to check if SMT is enabled before starting the VCPU,
and as this case is the only one setting errno to EBUSY, we just check
the errno value to display a message.
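The check amounts to something like this (the surrounding function and the message text are illustrative):

    #include <errno.h>
    #include <stdio.h>

    /* Turn the otherwise cryptic EBUSY failure into an actionable hint. */
    static void report_kvm_run_failure(int run_ret)
    {
        if (run_ret == -EBUSY) {
            fprintf(stderr,
                    "error: kvm run failed (EBUSY): on POWER8 this usually "
                    "means secondary SMT threads are online on the host; "
                    "try disabling SMT.\n");
        }
    }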
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <1431976007-20503-1-git-send-email-lvivier@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Miroslav Rezanina [Wed, 13 May 2015 09:39:30 +0000 (11:39 +0200)]
Move parallel_hds_isa_init to hw/isa/isa-bus.c
Disabling CONFIG_PARALLEL removes parallel_hds_isa_init, which is defined in
parallel.c. This function is called during initialization of some boards, so
disabling CONFIG_PARALLEL causes a build failure.
This patch moves parallel_hds_isa_init to hw/isa/isa-bus.c so it is still
included when CONFIG_PARALLEL is disabled. The build then succeeds, but qemu
will abort with an "Unknown device" error when the function is called.
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Message-Id: <1431509970-32154-1-git-send-email-mrezanin@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Peter Maydell [Fri, 5 Jun 2015 11:04:41 +0000 (12:04 +0100)]
Merge remote-tracking branch 'remotes/agraf/tags/signed-s390-for-upstream' into staging
Patch queue for s390 - 2015-06-05
This time there are a lot of s390x TCG emulation bug fixes - almost all
of them from Aurelien, who returned from nirvana :).
# gpg: Signature made Fri Jun 5 00:39:27 2015 BST using RSA key ID 03FEDC60
# gpg: Good signature from "Alexander Graf <agraf@suse.de>"
# gpg: aka "Alexander Graf <alex@csgraf.de>"
* remotes/agraf/tags/signed-s390-for-upstream: (34 commits)
target-s390x: Only access allocated storage keys
target-s390x: fix MVC instruction when areas overlap
target-s390x: use softmmu functions for mvcp/mvcs
target-s390x: support non current ASC in s390_cpu_handle_mmu_fault
target-s390x: add a cpu_mmu_idx_to_asc function
target-s390x: implement high-word facility
target-s390x: implement load-and-trap facility
target-s390x: implement miscellaneous-instruction-extensions facility
target-s390x: implement LPDFR and LNDFR instructions
target-s390x: implement TRANSLATE EXTENDED instruction
target-s390x: implement TRANSLATE AND TEST instruction
target-s390x: implement LOAD FP INTEGER instructions
target-s390x: move SET DFP ROUNDING MODE to the correct facility
target-s390x: move STORE CLOCK FAST to the correct facility
target-s390x: change CHRL and CGHRL format to RIL-b
target-s390x: fix CLGIT instruction
target-s390x: fix exception for invalid operation code
target-s390x: implement LAY and LAEY instructions
target-s390x: move a few instructions to the correct facility
target-s390x: detect tininess before rounding for FP operations
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark Cave-Ayland [Thu, 4 Jun 2015 21:59:37 +0000 (22:59 +0100)]
macio: remove remainder_len DBDMA_io property
Since the block alignment code is now effectively independent of the DMA
implementation, this variable is no longer required and can be removed.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1433455177-21243-5-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: John Snow <jsnow@redhat.com>
Mark Cave-Ayland [Thu, 4 Jun 2015 21:59:36 +0000 (22:59 +0100)]
macio: update comment/constants to reflect the new code
With the offset/len functions taking care of all of the alignment mapping
in isolation from the DMA transaction, many comments are now unnecessary.
Remove these and tidy up a few constants at the same time.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1433455177-21243-4-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: John Snow <jsnow@redhat.com>
Mark Cave-Ayland [Thu, 4 Jun 2015 21:59:35 +0000 (22:59 +0100)]
macio: switch pmac_dma_write() over to new offset/len implementation
In particular, this fixes a bug whereby chains of overlapping head/tail chains
would incorrectly write over each other's remainder cache. This is the access
pattern used by OS X/Darwin and fixes an issue with a corrupt Darwin
installation in my local tests.
While we are here, rename the DBDMA_io struct property remainder to
head_remainder for clarification.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1433455177-21243-3-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: John Snow <jsnow@redhat.com>
Mark Cave-Ayland [Thu, 4 Jun 2015 21:59:34 +0000 (22:59 +0100)]
macio: switch pmac_dma_read() over to new offset/len implementation
For better handling of unaligned block device accesses.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1433455177-21243-2-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: John Snow <jsnow@redhat.com>
Alexander Graf [Wed, 3 Jun 2015 22:52:44 +0000 (00:52 +0200)]
target-s390x: Only access allocated storage keys
We allocate ram_size / PAGE_SIZE storage keys, so we need to make sure that
we only access that many. Unfortunately the code can overrun this array by
one, potentially overwriting unrelated memory.
Fix it by limiting storage keys to their scope.
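A sketch of the kind of bounds check described (names and the calling convention are simplified):

    #include <stdint.h>
    #include <stddef.h>

    /* Only ram_size / page_size keys were allocated; refuse to hand out a
     * key slot for any address at or beyond ram_size. */
    static uint8_t *get_storage_key(uint8_t *keys, uint64_t addr,
                                    uint64_t ram_size, uint64_t page_size)
    {
        if (addr >= ram_size) {
            return NULL;    /* would overrun the allocated key array */
        }
        return &keys[addr / page_size];
    }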
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Aurelien Jarno [Wed, 3 Jun 2015 21:09:56 +0000 (23:09 +0200)]
target-s390x: fix MVC instruction when areas overlap
The MVC instruction and the memmove C function do not have the same
semantics when memory areas overlap:
MVC: When the operands overlap, the result is obtained as if the
operands were processed one byte at a time and each result byte were
stored immediately after fetching the necessary operand byte.
memmove: Copying takes place as though the bytes in src are first copied
into a temporary array that does not overlap src or dest, and the bytes
are then copied from the temporary array to dest.
The behaviour is therefore the same when the destination is at a lower
address than the source, but not in the other case. This is actually a
trick for propagating a value to an area. While the current code detects
that and calls memset in that case, it only does so for a 1-byte value. This
trick can be, and is, used for propagating two or more bytes to an area.
In the softmmu case, the call to mvc_fast_memmove is correct, as the
above tests verify that source and destination each fit within a page
and are in different pages. The part doing the move 8 bytes at a time
is wrong; we need to check that if the source and destination
overlap, they do so with a distance of at least 8 bytes before copying 8
bytes at a time.
In the user code, we should check that the destination is at a
lower address than the source, or that the end of the source is at a lower
address than the destination, before calling memmove. In the opposite
case we fall back to the same code as the softmmu one. Note that l
represents (length - 1).
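For reference, the architectural byte-at-a-time behaviour amounts to the following (a simplified sketch; l is the length minus one, as above):

    #include <stdint.h>

    /* Each result byte is stored immediately after the corresponding
     * source byte is fetched, so dest == src + 1 propagates the first
     * byte (or a longer repeating pattern) across the destination area. */
    static void mvc_byte_by_byte(uint8_t *dest, const uint8_t *src, uint32_t l)
    {
        for (uint32_t i = 0; i <= l; i++) {
            dest[i] = src[i];
        }
    }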
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Aurelien Jarno [Wed, 3 Jun 2015 21:09:55 +0000 (23:09 +0200)]
target-s390x: use softmmu functions for mvcp/mvcs
The mvcp and mvcs helpers get access to physical memory by a call to
mmu_translate for the virtual-to-real conversion and then using ldb_phys
and stb_phys to physically access the data. In practice this is quite
slow because it bypasses the QEMU softmmu TLB and because stb_phys calls
try to invalidate the corresponding memory for each access.
Instead use cpu_ldb_{primary,secondary} for the loads and
cpu_stb_{primary,secondary} for the stores. Ideally this should be
further optimized by a call to memcpy, but that already improves the
boot time of a guest by a factor of 1.8.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>