1 <?xml version="1.0" encoding="UTF-8"?>
2 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
3 <html xmlns="http://www.w3.org/1999/xhtml">
5 This file is autogenerated from drvqemu.html.in
6 Do not edit this file. Changes will be lost.
9 <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
10 <link rel="stylesheet" type="text/css" href="main.css" />
11 <link rel="SHORTCUT ICON" href="32favicon.png" />
12 <title>libvirt: KVM/QEMU hypervisor driver</title>
13 <meta name="description" content="libvirt, virtualization, virtualization API" />
17 <div id="headerLogo"></div>
18 <div id="headerSearch">
19 <form action="search.php" enctype="application/x-www-form-urlencoded" method="get"><div>
20 <input id="query" name="query" type="text" size="12" value="" />
21 <input id="submit" name="submit" type="submit" value="Search" />
29 <a title="Front page of the libvirt website" class="inactive" href="index.html">Home</a>
33 <a title="Details of new features and bugs fixed in each release" class="inactive" href="news.html">News</a>
37 <a title="Applications known to use libvirt" class="inactive" href="apps.html">Applications</a>
41 <a title="Get the latest source releases, binary builds and get access to the source repository" class="inactive" href="downloads.html">Downloads</a>
45 <a title="Information for users, administrators and developers" class="active" href="docs.html">Documentation</a>
48 <a title="How to compile libvirt" class="inactive" href="compiling.html">Compiling</a>
52 <a title="Information about deploying and using libvirt" class="inactive" href="deployment.html">Deployment</a>
56 <a title="Overview of the logical subsystems in the libvirt API" class="inactive" href="intro.html">Architecture</a>
60 <a title="Description of the XML formats used in libvirt" class="inactive" href="format.html">XML format</a>
64 <a title="Hypervisor specific driver information" class="active" href="drivers.html">Drivers</a>
<a title="Driver for the Xen hypervisor" class="inactive" href="drvxen.html">Xen</a>
71 <span class="active">QEMU / KVM</span>
75 <a title="Driver for the Linux native container API" class="inactive" href="drvlxc.html">Linux Container</a>
79 <a title="Pseudo-driver simulating APIs in memory for test suites" class="inactive" href="drvtest.html">Test</a>
<a title="Driver providing secure remote access to the libvirt APIs" class="inactive" href="drvremote.html">Remote</a>
87 <a title="Driver for the OpenVZ container technology" class="inactive" href="drvopenvz.html">OpenVZ</a>
91 <a title="Driver for the User Mode Linux technology" class="inactive" href="drvuml.html">UML</a>
95 <a title="Driver for the storage management APIs" class="inactive" href="storage.html">Storage</a>
99 <a title="Driver for VirtualBox" class="inactive" href="drvvbox.html">VirtualBox</a>
103 <a title="Driver for VMware ESX" class="inactive" href="drvesx.html">VMware ESX</a>
107 <a title="Driver for VMware Workstation / Player" class="inactive" href="drvvmware.html">VMware Workstation / Player</a>
111 <a title="Driver for Microsoft Hyper-V" class="inactive" href="drvhyperv.html">Microsoft Hyper-V</a>
115 <a title="Driver for IBM PowerVM" class="inactive" href="drvphyp.html">IBM PowerVM</a>
119 <a title="Driver for Parallels Cloud Server" class="inactive" href="drvparallels.html">Parallels</a>
123 <a title="Driver for bhyve" class="inactive" href="drvbhyve.html">Bhyve</a>
129 <a title="Reference manual for the C public API" class="inactive" href="html/index.html">API reference</a>
133 <a title="Bindings of the libvirt API for other languages" class="inactive" href="bindings.html">Language bindings</a>
137 <a title="Working on the internals of libvirt API, driver and daemon code" class="inactive" href="internals.html">Internals</a>
141 <a title="A guide and reference for developing with libvirt" class="inactive" href="devguide.html">Development Guide</a>
145 <a title="Command reference for virsh" class="inactive" href="virshcmdref.html">Virsh Commands</a>
149 <a title="Project governance and code of conduct" class="inactive" href="governance.html">Governance</a>
155 <a title="User contributed content" class="inactive" href="http://wiki.libvirt.org">Wiki</a>
159 <a title="Frequently asked questions" class="inactive" href="http://wiki.libvirt.org/page/FAQ">FAQ</a>
163 <a title="How and where to report bugs and request features" class="inactive" href="bugs.html">Bug reports</a>
167 <a title="How to contact the developers via email and IRC" class="inactive" href="contact.html">Contact</a>
171 <a title="Available test suites for libvirt" class="inactive" href="testsuites.html">Test suites</a>
175 <a title="Miscellaneous links of interest related to libvirt" class="inactive" href="relatedlinks.html">Related Links</a>
179 <a title="Overview of all content on the website" class="inactive" href="sitemap.html">Sitemap</a>
184 <h1>KVM/QEMU hypervisor driver</h1>
186 <a href="#project">Project Links</a>
188 <a href="#prereq">Deployment pre-requisites</a>
190 <a href="#uris">Connections to QEMU driver</a>
192 <a href="#security">Driver security architecture</a>
194 <a href="#securitydriver">Driver instances</a>
196 <a href="#securitydac">POSIX users/groups</a>
198 <a href="#securitycap">Linux process capabilities</a>
200 <a href="#securityselinux">SELinux basic confinement</a>
202 <a href="#securitysvirt">SELinux sVirt confinement</a>
204 <a href="#securitysvirtaa">AppArmor sVirt confinement</a>
206 <a href="#securityacl">Cgroups device ACLs</a>
209 <a href="#imex">Import and export of libvirt domain XML configs</a>
211 <a href="#xmlimport">Converting from QEMU args to domain XML</a>
213 <a href="#xmlexport">Converting from domain XML to QEMU args</a>
<a href="#qemucommand">Pass-through of arbitrary qemu commands</a>
219 <a href="#xmlconfig">Example domain XML config</a>
The libvirt KVM/QEMU driver can manage any QEMU emulator from
version 0.8.1 or later. It can also manage Xenner, which
provides the same QEMU command line syntax and monitor interaction.
228 <a name="project" shape="rect" id="project">Project Links</a>
229 <a class="headerlink" href="#project" title="Permalink to this headline">¶</a>
232 The <a href="http://www.linux-kvm.org/" shape="rect">KVM</a> Linux
235 The <a href="http://wiki.qemu.org/Index.html" shape="rect">QEMU</a> emulator
238 <a name="prereq" shape="rect" id="prereq">Deployment pre-requisites</a>
239 <a class="headerlink" href="#prereq" title="Permalink to this headline">¶</a>
242 <strong>QEMU emulators</strong>: The driver will probe <code>/usr/bin</code>
243 for the presence of <code>qemu</code>, <code>qemu-system-x86_64</code>,
244 <code>qemu-system-microblaze</code>,
245 <code>qemu-system-microblazeel</code>,
<code>qemu-system-mips</code>, <code>qemu-system-mipsel</code>,
<code>qemu-system-sparc</code>, <code>qemu-system-ppc</code>. The results
248 of this can be seen from the capabilities XML output.
<strong>KVM hypervisor</strong>: The driver will probe <code>/usr/bin</code>
for the presence of <code>qemu-kvm</code> and the <code>/dev/kvm</code> device
node. If both are found, then KVM fully virtualized, hardware-accelerated
guests will be available.
255 <strong>Xenner hypervisor</strong>: The driver will probe <code>/usr/bin</code>
256 for the presence of <code>xenner</code> and <code>/dev/kvm</code> device
257 node. If both are found, then Xen paravirtualized guests can be run using
258 the KVM hardware acceleration.
261 <a name="uris" shape="rect" id="uris">Connections to QEMU driver</a>
262 <a class="headerlink" href="#uris" title="Permalink to this headline">¶</a>
265 The libvirt QEMU driver is a multi-instance driver, providing a single
266 system wide privileged driver (the "system" instance), and per-user
267 unprivileged drivers (the "session" instance). The URI driver protocol
268 is "qemu". Some example connection URIs for the libvirt driver are:
270 <pre xml:space="preserve">
271 qemu:///session (local access to per-user instance)
272 qemu+unix:///session (local access to per-user instance)
274 qemu:///system (local access to system instance)
275 qemu+unix:///system (local access to system instance)
276 qemu://example.com/system (remote access, TLS/x509)
qemu+tcp://example.com/system (remote access, SASL/Kerberos)
278 qemu+ssh://root@example.com/system (remote access, SSH tunnelled)
281 <a name="security" shape="rect" id="security">Driver security architecture</a>
282 <a class="headerlink" href="#security" title="Permalink to this headline">¶</a>
285 There are multiple layers to security in the QEMU driver, allowing for
286 flexibility in the use of QEMU based virtual machines.
289 <a name="securitydriver" shape="rect" id="securitydriver">Driver instances</a>
290 <a class="headerlink" href="#securitydriver" title="Permalink to this headline">¶</a>
293 As explained above there are two ways to access the QEMU driver
294 in libvirt. The "qemu:///session" family of URIs connect to a
295 libvirtd instance running as the same user/group ID as the client
296 application. Thus the QEMU instances spawned from this driver will
297 share the same privileges as the client application. The intended
298 use case for this driver is desktop virtualization, with virtual
299 machines storing their disk images in the user's home directory and
300 being managed from the local desktop login session.
303 The "qemu:///system" family of URIs connect to a
304 libvirtd instance running as the privileged system account 'root'.
305 Thus the QEMU instances spawned from this driver may have much
306 higher privileges than the client application managing them.
The intended use case for this driver is server virtualization,
where the virtual machines may need to be connected to host
resources (block, PCI, USB, network devices) whose access requires
elevated privileges.
313 <a name="securitydac" shape="rect" id="securitydac">POSIX users/groups</a>
314 <a class="headerlink" href="#securitydac" title="Permalink to this headline">¶</a>
317 In the "session" instance, the POSIX users/groups model restricts QEMU
318 virtual machines (and libvirtd in general) to only have access to resources
319 with the same user/group ID as the client application. There is no
320 finer level of configuration possible for the "session" instances.
323 In the "system" instance, libvirt releases from 0.7.0 onwards allow
324 control over the user/group that the QEMU virtual machines are run
325 as. A build of libvirt with no configuration parameters set will
326 still run QEMU processes as root:root. It is possible to change
327 this default by using the --with-qemu-user=$USERNAME and
328 --with-qemu-group=$GROUPNAME arguments to 'configure' during
329 build. It is strongly recommended that vendors build with both
330 of these arguments set to 'qemu'. Regardless of this build time
331 default, administrators can set a per-host default setting in
332 the <code>/etc/libvirt/qemu.conf</code> configuration file via
333 the <code>user=$USERNAME</code> and <code>group=$GROUPNAME</code>
334 parameters. When a non-root user or group is configured, the
335 libvirt QEMU driver will change uid/gid to match immediately
336 before executing the QEMU binary for a virtual machine.
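<p>A sketch of the per-host override described above; the values are
illustrative and assume a 'qemu' account exists on the host:</p>
<pre xml:space="preserve">
# /etc/libvirt/qemu.conf (illustrative values; assumes a 'qemu'
# user and group exist on the host)
user = "qemu"
group = "qemu"
</pre>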
339 If QEMU virtual machines from the "system" instance are being
340 run as non-root, there will be greater restrictions on what
341 host resources the QEMU process will be able to access. The
342 libvirtd daemon will attempt to manage permissions on resources
343 to minimise the likelihood of unintentional security denials,
344 but the administrator / application developer must be aware of
345 some of the consequences / restrictions.
349 The directories <code>/var/run/libvirt/qemu/</code>,
350 <code>/var/lib/libvirt/qemu/</code> and
351 <code>/var/cache/libvirt/qemu/</code> must all have their
352 ownership set to match the user / group ID that QEMU
353 guests will be run as. If the vendor has set a non-root
354 user/group for the QEMU driver at build time, the
355 permissions should be set automatically at install time.
356 If a host administrator customizes user/group in
357 <code>/etc/libvirt/qemu.conf</code>, they will need to
358 manually set the ownership on these directories.
362 When attaching USB and PCI devices to a QEMU guest,
363 QEMU will need to access files in <code>/dev/bus/usb</code>
364 and <code>/sys/bus/pci/devices</code> respectively. The libvirtd daemon
365 will automatically set the ownership on specific devices
366 that are assigned to a guest at start time. There should
367 not be any need for administrator changes in this respect.
371 Any files/devices used as guest disk images must be
372 accessible to the user/group ID that QEMU guests are
373 configured to run as. The libvirtd daemon will automatically
374 set the ownership of the file/device path to the correct
375 user/group ID. Applications / administrators must be aware
376 though that the parent directory permissions may still
377 deny access. The directories containing disk images
378 must either have their ownership set to match the user/group
379 configured for QEMU, or their UNIX file permissions must
380 have the 'execute/search' bit enabled for 'others'.
383 The simplest option is the latter one, of just enabling
384 the 'execute/search' bit. For any directory to be used
385 for storing disk images, this can be achieved by running
the following command on the directory itself, and any parent directories:
389 <pre xml:space="preserve">
390 chmod o+x /path/to/directory
393 In particular note that if using the "system" instance
394 and attempting to store disk images in a user home
395 directory, the default permissions on $HOME are typically
396 too restrictive to allow access.
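<p>Putting the pieces together, a minimal runnable sketch (a scratch
directory stands in for a real image path so this can run anywhere):</p>

```shell
# Sketch: grant only the 'execute/search' bit to 'others' on a directory
# holding disk images, as described above. A scratch path stands in for a
# real image directory.
imgdir=$(mktemp -d)/images
mkdir -p "$imgdir"
chmod o-rwx "$imgdir"   # start with 'others' fully denied
chmod o+x "$imgdir"     # allow directory traversal only, no listing
stat -c '%A %n' "$imgdir"
```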
400 <a name="securitycap" shape="rect" id="securitycap">Linux process capabilities</a>
401 <a class="headerlink" href="#securitycap" title="Permalink to this headline">¶</a>
404 The libvirt QEMU driver has a build time option allowing it to use
405 the <a href="http://people.redhat.com/sgrubb/libcap-ng/index.html" shape="rect">libcap-ng</a>
406 library to manage process capabilities. If this build option is
407 enabled, then the QEMU driver will use this to ensure that all
408 process capabilities are dropped before executing a QEMU virtual
machine. Process capabilities are what give the 'root' account
its high power; in particular the CAP_DAC_OVERRIDE capability
is what allows a process running as 'root' to access files owned
by any user.
415 If the QEMU driver is configured to run virtual machines as non-root,
416 then they will already lose all their process capabilities at time
417 of startup. The Linux capability feature is thus aimed primarily at
418 the scenario where the QEMU processes are running as root. In this
419 case, before launching a QEMU virtual machine, libvirtd will use
420 libcap-ng APIs to drop all process capabilities. It is important
421 for administrators to note that this implies the QEMU process will
422 <strong>only</strong> be able to access files owned by root, and
423 not files owned by any other user.
426 Thus, if a vendor / distributor has configured their libvirt package
427 to run as 'qemu' by default, a number of changes will be required
428 before an administrator can change a host to run guests as root.
429 In particular it will be necessary to change ownership on the
430 directories <code>/var/run/libvirt/qemu/</code>,
431 <code>/var/lib/libvirt/qemu/</code> and
432 <code>/var/cache/libvirt/qemu/</code> back to root, in addition
433 to changing the <code>/etc/libvirt/qemu.conf</code> settings.
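<p>The directory re-ownership can be sketched as follows. To stay runnable
without privileges, this demo uses a scratch tree and the current account
in place of the real <code>/var/run/libvirt/qemu/</code> paths and root:</p>

```shell
# Sketch of re-owning the libvirt QEMU state directories for a different
# QEMU user. The real paths are /var/run/libvirt/qemu/,
# /var/lib/libvirt/qemu/ and /var/cache/libvirt/qemu/; a scratch copy and
# the current account are used so the demo needs no root access.
base=$(mktemp -d)
mkdir -p "$base/run/libvirt/qemu" "$base/lib/libvirt/qemu" "$base/cache/libvirt/qemu"
# In the real scenario this would be, e.g.: chown -R root:root /var/run/libvirt/qemu
chown -R "$(id -u):$(id -g)" "$base"
stat -c '%U %n' "$base"/*/libvirt/qemu
```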
436 <a name="securityselinux" shape="rect" id="securityselinux">SELinux basic confinement</a>
437 <a class="headerlink" href="#securityselinux" title="Permalink to this headline">¶</a>
440 The basic SELinux protection for QEMU virtual machines is intended to
441 protect the host OS from a compromised virtual machine process. There
442 is no protection between guests.
445 In the basic model, all QEMU virtual machines run under the confined
446 domain <code>root:system_r:qemu_t</code>. It is required that any
447 disk image assigned to a QEMU virtual machine is labelled with
448 <code>system_u:object_r:virt_image_t</code>. In a default deployment,
package vendors/distributors will typically ensure that the directory
450 <code>/var/lib/libvirt/images</code> has this label, such that any
451 disk images created in this directory will automatically inherit the
correct labelling. If attempting to use disk images in another
location, the user/administrator must ensure the directory has been
given this requisite label. Likewise physical block devices must
455 be labelled <code>system_u:object_r:virt_image_t</code>.
458 Not all filesystems allow for labelling of individual files. In
459 particular NFS, VFat and NTFS have no support for labelling. In
460 these cases administrators must use the 'context' option when
461 mounting the filesystem to set the default label to
462 <code>system_u:object_r:virt_image_t</code>. In the case of
NFS, there is an alternative option of enabling the <code>virt_use_nfs</code> SELinux boolean.
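<p>For example, an <code>/etc/fstab</code> entry setting a default label at
mount time might look like this (the server name and export path are
invented for illustration):</p>
<pre xml:space="preserve">
# /etc/fstab (illustrative entry; nfs.example.com and /export/images
# are placeholder names)
nfs.example.com:/export/images  /var/lib/libvirt/images  nfs  context=system_u:object_r:virt_image_t  0 0
</pre>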
467 <a name="securitysvirt" shape="rect" id="securitysvirt">SELinux sVirt confinement</a>
468 <a class="headerlink" href="#securitysvirt" title="Permalink to this headline">¶</a>
The SELinux sVirt protection for QEMU virtual machines builds on the
basic level of protection, to also allow individual guests to be
473 protected from each other.
476 In the sVirt model, each QEMU virtual machine runs under its own
477 confined domain, which is based on <code>system_u:system_r:svirt_t:s0</code>
478 with a unique category appended, eg, <code>system_u:system_r:svirt_t:s0:c34,c44</code>.
479 The rules are setup such that a domain can only access files which are
480 labelled with the matching category level, eg
481 <code>system_u:object_r:svirt_image_t:s0:c34,c44</code>. This prevents one
QEMU process from accessing any file resources that are private to another QEMU process.
486 There are two ways of assigning labels to virtual machines under sVirt.
487 In the default setup, if sVirt is enabled, guests will get an automatically
488 assigned unique label each time they are booted. The libvirtd daemon will
489 also automatically relabel exclusive access disk images to match this
490 label. Disks that are marked as <shared> will get a generic
491 label <code>system_u:system_r:svirt_image_t:s0</code> allowing all guests
read/write access to them, while disks marked as <readonly> will
493 get a generic label <code>system_u:system_r:svirt_content_t:s0</code>
494 which allows all guests read-only access.
497 With statically assigned labels, the application should include the
498 desired guest and file labels in the XML at time of creating the
499 guest with libvirt. In this scenario the application is responsible
500 for ensuring the disk images & similar resources are suitably
501 labelled to match, libvirtd will not attempt any relabelling.
504 If the sVirt security model is active, then the node capabilities
505 XML will include its details. If a virtual machine is currently
506 protected by the security model, then the guest XML will include
507 its assigned labels. If enabled at compile time, the sVirt security
508 model will always be activated if SELinux is available on the host
509 OS. To disable sVirt, and revert to the basic level of SELinux
510 protection (host protection only), the <code>/etc/libvirt/qemu.conf</code>
file can be used to change the setting to <code>security_driver="none"</code>.
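<p>Concretely, disabling sVirt amounts to a single setting in that file
(libvirtd typically needs to be restarted to pick up the change):</p>
<pre xml:space="preserve">
# /etc/libvirt/qemu.conf
security_driver = "none"
</pre>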
514 <a name="securitysvirtaa" shape="rect" id="securitysvirtaa">AppArmor sVirt confinement</a>
515 <a class="headerlink" href="#securitysvirtaa" title="Permalink to this headline">¶</a>
518 When using basic AppArmor protection for the libvirtd daemon and
519 QEMU virtual machines, the intention is to protect the host OS
520 from a compromised virtual machine process. There is no protection
524 The AppArmor sVirt protection for QEMU virtual machines builds on
525 this basic level of protection, to also allow individual guests to
526 be protected from each other.
529 In the sVirt model, if a profile is loaded for the libvirtd daemon,
530 then each <code>qemu:///system</code> QEMU virtual machine will have
531 a profile created for it when the virtual machine is started if one
532 does not already exist. This generated profile uses a profile name
533 based on the UUID of the QEMU virtual machine and contains rules
534 allowing access to only the files it needs to run, such as its disks,
535 pid file and log files. Just before the QEMU virtual machine is
536 started, the libvirtd daemon will change into this unique profile,
537 preventing the QEMU process from accessing any file resources that
538 are present in another QEMU process or the host machine.
541 The AppArmor sVirt implementation is flexible in that it allows an
542 administrator to customize the template file in
543 <code>/etc/apparmor.d/libvirt/TEMPLATE</code> for site-specific
544 access for all newly created QEMU virtual machines. Also, when a new
545 profile is generated, two files are created:
546 <code>/etc/apparmor.d/libvirt/libvirt-<uuid></code> and
547 <code>/etc/apparmor.d/libvirt/libvirt-<uuid>.files</code>. The
548 former can be fine-tuned by the administrator to allow custom access
549 for this particular QEMU virtual machine, and the latter will be
550 updated appropriately when required file access changes, such as when
a disk is added. This flexibility allows for situations such as
having one virtual machine in complain mode with all others in
enforce mode.
556 While users can define their own AppArmor profile scheme, a typical
557 configuration will include a profile for <code>/usr/sbin/libvirtd</code>,
558 <code>/usr/lib/libvirt/virt-aa-helper</code> (a helper program which the
559 libvirtd daemon uses instead of manipulating AppArmor directly), and
560 an abstraction to be included by <code>/etc/apparmor.d/libvirt/TEMPLATE</code>
561 (typically <code>/etc/apparmor.d/abstractions/libvirt-qemu</code>).
562 An example profile scheme can be found in the examples/apparmor
563 directory of the source distribution.
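<p>As a sketch of the site-wide customization mentioned above, an
administrator might extend the template along these lines. The profile
skeleton follows the template commonly shipped with libvirt; the extra
read rule and its path are invented for illustration:</p>
<pre xml:space="preserve">
#include <tunables/global>

profile LIBVIRT_TEMPLATE {
  #include <abstractions/libvirt-qemu>
  # site-specific addition (illustrative): let all guests read ISO
  # images from a shared directory
  /srv/isos/** r,
}
</pre>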
566 If the sVirt security model is active, then the node capabilities
567 XML will include its details. If a virtual machine is currently
568 protected by the security model, then the guest XML will include
569 its assigned profile name. If enabled at compile time, the sVirt
570 security model will be activated if AppArmor is available on the host
571 OS and a profile for the libvirtd daemon is loaded when libvirtd is
572 started. To disable sVirt, and revert to the basic level of AppArmor
573 protection (host protection only), the <code>/etc/libvirt/qemu.conf</code>
574 file can be used to change the setting to <code>security_driver="none"</code>.
577 <a name="securityacl" shape="rect" id="securityacl">Cgroups device ACLs</a>
578 <a class="headerlink" href="#securityacl" title="Permalink to this headline">¶</a>
581 Recent Linux kernels have a capability known as "cgroups" which is used
582 for resource management. It is implemented via a number of "controllers",
583 each controller covering a specific task/functional area. One of the
584 available controllers is the "devices" controller, which is able to
set up whitelists of block/character devices that a cgroup should be
586 allowed to access. If the "devices" controller is mounted on a host,
587 then libvirt will automatically create a dedicated cgroup for each
QEMU virtual machine and set up the device whitelist so that the QEMU
process can only access shared devices, and explicitly assigned disk
images backed by block devices.
593 The list of shared devices a guest is allowed access to is
595 <pre xml:space="preserve">
596 /dev/null, /dev/full, /dev/zero,
597 /dev/random, /dev/urandom,
598 /dev/ptmx, /dev/kvm, /dev/kqemu,
599 /dev/rtc, /dev/hpet, /dev/net/tun
602 In the event of unanticipated needs arising, this can be customized
603 via the <code>/etc/libvirt/qemu.conf</code> file.
604 To mount the cgroups device controller, the following command
605 should be run as root, prior to starting libvirtd
607 <pre xml:space="preserve">
609 mount -t cgroup none /dev/cgroup -o devices
612 libvirt will then place each virtual machine in a cgroup at
613 <code>/dev/cgroup/libvirt/qemu/$VMNAME/</code>
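<p>The customization mentioned above is done with the
<code>cgroup_device_acl</code> parameter in
<code>/etc/libvirt/qemu.conf</code>; a sketch that extends the default
whitelist (the <code>/dev/sdb1</code> entry is an invented example):</p>
<pre xml:space="preserve">
# /etc/libvirt/qemu.conf (sketch; /dev/sdb1 is an illustrative addition)
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/sdb1"
]
</pre>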
616 <a name="imex" shape="rect" id="imex">Import and export of libvirt domain XML configs</a>
617 <a class="headerlink" href="#imex" title="Permalink to this headline">¶</a>
<p>The QEMU driver currently supports a single native
config format known as <code>qemu-argv</code>. The data for this format
is expected to be a single line: first a list of environment variables,
then the QEMU binary name, and finally the QEMU command line arguments.
625 <a name="xmlimport" shape="rect" id="xmlimport">Converting from QEMU args to domain XML</a>
626 <a class="headerlink" href="#xmlimport" title="Permalink to this headline">¶</a>
The <code>virsh domxml-from-native</code> command provides a way to
convert an existing set of QEMU args into a guest description
using libvirt Domain XML that can then be used by libvirt.
Please note that this command is intended to be used to convert
existing qemu guests previously started from the command line to
be managed through libvirt. It should not be used as a method of
creating new guests from scratch. New guests should be created
636 using an application calling the libvirt APIs (see
637 the <a href="apps.html" shape="rect">libvirt applications page</a> for some
638 examples) or by manually crafting XML to pass to virsh.
640 <pre xml:space="preserve">$ cat > demo.args <<EOF
641 LC_ALL=C PATH=/bin HOME=/home/test USER=test \
642 LOGNAME=test /usr/bin/qemu -S -M pc -m 214 -smp 1 \
643 -nographic -monitor pty -no-acpi -boot c -hda \
644 /dev/HostVG/QEMUGuest1 -net none -serial none \
648 $ virsh domxml-from-native qemu-argv demo.args
649 <domain type='qemu'>
650 <uuid>00000000-0000-0000-0000-000000000000</uuid>
651 <memory>219136</memory>
652 <currentMemory>219136</currentMemory>
653 <vcpu>1</vcpu>
655 <type arch='i686' machine='pc'>hvm</type>
656 <boot dev='hd'/>
658 <clock offset='utc'/>
659 <on_poweroff>destroy</on_poweroff>
660 <on_reboot>restart</on_reboot>
661 <on_crash>destroy</on_crash>
663 <emulator>/usr/bin/qemu</emulator>
664 <disk type='block' device='disk'>
665 <source dev='/dev/HostVG/QEMUGuest1'/>
666 <target dev='hda' bus='ide'/>
671 <p>NB, don't include the literal \ in the args, put everything on one line</p>
673 <a name="xmlexport" shape="rect" id="xmlexport">Converting from domain XML to QEMU args</a>
674 <a class="headerlink" href="#xmlexport" title="Permalink to this headline">¶</a>
The <code>virsh domxml-to-native</code> command provides a way to convert a
guest description using libvirt Domain XML into a set of QEMU args
that can be run manually.
681 <pre xml:space="preserve">$ cat > demo.xml <<EOF
682 <domain type='qemu'>
683 <name>QEMUGuest1</name>
684 <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
685 <memory>219200</memory>
686 <currentMemory>219200</currentMemory>
687 <vcpu>1</vcpu>
689 <type arch='i686' machine='pc'>hvm</type>
690 <boot dev='hd'/>
692 <clock offset='utc'/>
693 <on_poweroff>destroy</on_poweroff>
694 <on_reboot>restart</on_reboot>
695 <on_crash>destroy</on_crash>
697 <emulator>/usr/bin/qemu</emulator>
698 <disk type='block' device='disk'>
699 <source dev='/dev/HostVG/QEMUGuest1'/>
700 <target dev='hda' bus='ide'/>
706 $ virsh domxml-to-native qemu-argv demo.xml
707 LC_ALL=C PATH=/usr/bin:/bin HOME=/home/test \
708 USER=test LOGNAME=test /usr/bin/qemu -S -M pc \
709 -no-kqemu -m 214 -smp 1 -name QEMUGuest1 -nographic \
710 -monitor pty -no-acpi -boot c -drive \
711 file=/dev/HostVG/QEMUGuest1,if=ide,index=0 -net none \
712 -serial none -parallel none -usb
<a name="qemucommand" shape="rect" id="qemucommand">Pass-through of arbitrary qemu commands</a>
717 <a class="headerlink" href="#qemucommand" title="Permalink to this headline">¶</a>
719 <p>Libvirt provides an XML namespace and an optional
720 library <code>libvirt-qemu.so</code> for dealing specifically
721 with qemu. When used correctly, these extensions allow testing
722 specific qemu features that have not yet been ported to the
723 generic libvirt XML and API interfaces. However, they
724 are <b>unsupported</b>, in that the library is not guaranteed to
have a stable API, abusing the library or XML may result in an
inconsistent state that crashes libvirtd, and upgrading either
727 qemu-kvm or libvirtd may break behavior of a domain that was
728 relying on a qemu-specific pass-through. If you find yourself
729 needing to use them to access a particular qemu feature, then
730 please post an RFE to the libvirt mailing list to get that
731 feature incorporated into the stable libvirt XML and API
<p>The library provides two
APIs: <code>virDomainQemuMonitorCommand</code>, for sending an
736 arbitrary monitor command (in either HMP or QMP format) to a
737 qemu guest (<span class="since">Since 0.8.3</span>),
738 and <code>virDomainQemuAttach</code>, for registering a qemu
739 domain that was manually started so that it can then be managed
740 by libvirtd (<span class="since">Since 0.9.4</span>).
742 <p>Additionally, the following XML additions allow fine-tuning of
743 the command line given to qemu when starting a domain
744 (<span class="since">Since 0.8.3</span>). In order to use the
745 XML additions, it is necessary to issue an XML namespace request
746 (the special <code>xmlns:<i>name</i></code> attribute) that
747 pulls in <code>http://libvirt.org/schemas/domain/qemu/1.0</code>;
748 typically, the namespace is given the name
749 of <code>qemu</code>. With the namespace in place, it is then
750 possible to add an element <code><qemu:commandline></code>
751 under <code>driver</code>, with the following sub-elements
752 repeated as often as needed:
754 <dl><dt><code>qemu:arg</code></dt><dd>Add an additional command-line argument to the qemu
755 process when starting the domain, given by the value of the
756 attribute <code>value</code>.
757 </dd><dt><code>qemu:env</code></dt><dd>Add an additional environment variable to the qemu
758 process when starting the domain, given with the name-value
759 pair recorded in the attributes <code>name</code>
760 and optional <code>value</code>.</dd></dl>
762 <pre xml:space="preserve">
763 <domain type='qemu' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
764 <name>QEmu-fedora-i686</name>
765 <memory>219200</memory>
767 <type arch='i686' machine='pc'>hvm</type>
770 <emulator>/usr/bin/qemu-system-x86_64</emulator>
772 <qemu:commandline>
773 <qemu:arg value='-newarg'/>
774 <qemu:env name='QEMU_ENV' value='VAL'/>
775 </qemu:commandline>
779 <a name="xmlconfig" shape="rect" id="xmlconfig">Example domain XML config</a>
780 <a class="headerlink" href="#xmlconfig" title="Permalink to this headline">¶</a>
782 <h3>QEMU emulated guest on x86_64</h3>
783 <pre xml:space="preserve"><domain type='qemu'>
784 <name>QEmu-fedora-i686</name>
785 <uuid>c7a5fdbd-cdaf-9455-926a-d65c16db1809</uuid>
786 <memory>219200</memory>
787 <currentMemory>219200</currentMemory>
788 <vcpu>2</vcpu>
790 <type arch='i686' machine='pc'>hvm</type>
791 <boot dev='cdrom'/>
794 <emulator>/usr/bin/qemu-system-x86_64</emulator>
795 <disk type='file' device='cdrom'>
796 <source file='/home/user/boot.iso'/>
797 <target dev='hdc'/>
800 <disk type='file' device='disk'>
801 <source file='/home/user/fedora.img'/>
802 <target dev='hda'/>
804 <interface type='network'>
805 <source network='default'/>
807 <graphics type='vnc' port='-1'/>
809 </domain></pre>
810 <h3>KVM hardware accelerated guest on i686</h3>
811 <pre xml:space="preserve"><domain type='kvm'>
812 <name>demo2</name>
813 <uuid>4dea24b3-1d52-d8f3-2516-782e98a23fa0</uuid>
814 <memory>131072</memory>
815 <vcpu>1</vcpu>
817 <type arch="i686">hvm</type>
819 <clock sync="localtime"/>
821 <emulator>/usr/bin/qemu-kvm</emulator>
822 <disk type='file' device='disk'>
823 <source file='/var/lib/libvirt/images/demo2.img'/>
824 <target dev='hda'/>
826 <interface type='network'>
827 <source network='default'/>
828 <mac address='24:42:53:21:52:45'/>
830 <graphics type='vnc' port='-1' keymap='de'/>
832 </domain></pre>
833 <h3>Xen paravirtualized guests with hardware acceleration</h3>
838 Sponsored by:<br /><a href="http://et.redhat.com/"><img src="et.png" alt="Project sponsored by Red Hat Emerging Technology" /></a></p>