linux/arch/x86/xen/Kconfig

#
# This Kconfig describes xen options
#

config XEN
	bool "Xen guest support"
	depends on PARAVIRT
	select PARAVIRT_CLOCK
	depends on X86_64 || (X86_32 && X86_PAE)
	depends on X86_LOCAL_APIC && X86_TSC
	help
	  This is the Linux Xen port. Enabling this will allow the
	  kernel to boot in a paravirtualized environment under the
	  Xen hypervisor.

config XEN_PV
	bool "Xen PV guest support"
	default y
	depends on XEN
	select XEN_HAVE_PVMMU
	select XEN_HAVE_VPMU
	help
	  Support running as a Xen PV guest.
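
# For orientation, enabling the two entries above yields a .config
# fragment like the one below. This is an illustrative sketch, not a
# complete configuration: it assumes a 64-bit build where the
# X86_LOCAL_APIC and X86_TSC dependencies are already satisfied.
#
#	CONFIG_PARAVIRT=y
#	CONFIG_PARAVIRT_CLOCK=y
#	CONFIG_XEN=y
#	CONFIG_XEN_PV=y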

config XEN_PV_SMP
	def_bool y
	depends on XEN_PV && SMP

config XEN_DOM0
	bool "Xen PV Dom0 support"
	default y
	depends on XEN_PV && PCI_XEN && SWIOTLB_XEN
	depends on X86_IO_APIC && ACPI && PCI
	help
	  Support running as a Xen PV Dom0 guest.

config XEN_PVHVM
	bool "Xen PVHVM guest support"
	default y
	depends on XEN && PCI && X86_LOCAL_APIC
	help
	  Support running as a Xen PVHVM guest.

config XEN_PVHVM_SMP
	def_bool y
	depends on XEN_PVHVM && SMP

config XEN_512GB
	bool "Limit Xen pv-domain memory to 512GB"
	depends on XEN_PV && X86_64
	default y
	help
	  Limit paravirtualized user domains to 512GB of RAM.

	  The Xen tools and crash dump analysis tools might not support
	  pv-domains with more than 512 GB of RAM. This option only sets
	  the kernel's default behaviour; whether the limit is applied
	  can always be changed at boot time via the "xen_512gb_limit"
	  parameter.
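
# The "xen_512gb_limit" parameter referenced above is parsed from the
# kernel command line. An illustrative sketch of its use, assuming the
# common boolean parameter syntax (consult the kernel-parameters
# documentation for your tree before relying on this):
#
#	xen_512gb_limit=0	(allow the pv-domain more than 512 GB)
#	xen_512gb_limit=1	(enforce the 512 GB limit)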

config XEN_SAVE_RESTORE
	bool
	depends on XEN
	select HIBERNATE_CALLBACKS
	default y

config XEN_DEBUG_FS
	bool "Enable Xen debug and tuning parameters in debugfs"
	depends on XEN && DEBUG_FS
	default n
	help
	  Enable statistics output and various tuning options in debugfs.
	  Enabling this option may incur a significant performance overhead.
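
# With XEN_DEBUG_FS enabled, the statistics and tuning entries appear
# under debugfs, normally in /sys/kernel/debug/xen. An illustrative
# shell session, assuming debugfs is not already mounted (the exact
# file names under xen/ vary by kernel version):
#
#	mount -t debugfs none /sys/kernel/debug
#	ls /sys/kernel/debug/xen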
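
# PVH is a cross between PV and PVHVM: paging is auto-translated (the
# p2m is managed by Xen) while the guest still owns its page tables,
# interrupts arrive via an event callback vector instead of an emulated
# vlapic, and the IDT is native. See docs/misc/pvh-readme.txt in the
# Xen tree for the full definition.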
config XEN_PVH
	bool "Support for running as a PVH guest"
	depends on XEN && XEN_PVHVM && ACPI
	def_bool n