# SPDX-License-Identifier: GPL-2.0-only
menu "Xen driver support"
	depends on XEN

config XEN_BALLOON
	bool "Xen memory balloon driver"
	default y
	help
	  The balloon driver allows the Xen domain to request more memory from
	  the system to expand the domain's memory allocation, or alternatively
	  return unneeded memory to the system.
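
	  As a minimal sketch (file names assumed from the standard
	  xen_memory sysfs interface; target_kb is also described in the
	  memory hotplug option below), the balloon can be inspected and
	  resized from within the domain:

	    cat /sys/devices/system/xen_memory/xen_memory0/info/current_kb
	    echo 1048576 > /sys/devices/system/xen_memory/xen_memory0/target_kb
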
config XEN_BALLOON_MEMORY_HOTPLUG
	bool "Memory hotplug support for Xen balloon driver"
	depends on XEN_BALLOON && MEMORY_HOTPLUG
	default y
	help
	  Memory hotplug support for the Xen balloon driver allows expanding
	  the memory available to the system above the limit declared at
	  system startup. It is very useful on critical systems which must
	  run for a long time without rebooting.

	  It is also very useful for non-PV domains to obtain unpopulated
	  physical memory ranges to use in order to map foreign memory or
	  grants.

	  Memory can be hotplugged in the following steps:

	    1) target domain: ensure that the memory auto online policy is in
	       effect by checking the
	       /sys/devices/system/memory/auto_online_blocks file (should be
	       'online'; see the example after this list),

	    2) control domain: xl mem-max <target-domain> <maxmem>
	       where <maxmem> is >= the requested memory size,

	    3) control domain: xl mem-set <target-domain> <memory>
	       where <memory> is the requested memory size; alternatively,
	       memory can be added by writing a proper value to
	       /sys/devices/system/xen_memory/xen_memory0/target or
	       /sys/devices/system/xen_memory/xen_memory0/target_kb on the
	       target domain.

	  Alternatively, if memory auto onlining was not requested in step 1,
	  the newly added memory can be manually onlined in the target domain
	  by doing the following:

	    for i in /sys/devices/system/memory/memory*/state; do \
	      [ "`cat "$i"`" = offline ] && echo online > "$i"; done

	  or by adding the following line to the udev rules:

	    SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"
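
	  As a sketch of step 1 (using the standard memory hotplug sysfs
	  interface named above), the auto online policy can be enabled with:

	    echo online > /sys/devices/system/memory/auto_online_blocks
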
config XEN_MEMORY_HOTPLUG_LIMIT
	int "Hotplugged memory limit (in GiB) for a PV guest"
	default 512
	depends on XEN_HAVE_PVMMU
	depends on MEMORY_HOTPLUG
	help
	  Maximum amount of memory (in GiB) that a PV guest can be
	  expanded to when using memory hotplug.

	  A PV guest can have more memory than this limit if it is
	  started with a larger maximum.

	  This value is used to allocate enough space in internal
	  tables needed for physical memory administration.
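
	  For example, accepting the default here simply results in the
	  following line in the kernel configuration:

	    CONFIG_XEN_MEMORY_HOTPLUG_LIMIT=512
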
config XEN_SCRUB_PAGES_DEFAULT
	bool "Scrub pages before returning them to system by default"
	depends on XEN_BALLOON
	default y
	help
	  Scrub pages before returning them to the system for reuse by
	  other domains. This makes sure that any confidential data
	  is not accidentally visible to other domains. It is more
	  secure, but slightly less efficient. This can be controlled
	  with the xen_scrub_pages=0 parameter and with
	  /sys/devices/system/xen_memory/xen_memory0/scrub_pages.
	  This option only sets the default value.
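
	  For example, using the sysfs file named above, scrubbing can be
	  turned off at runtime with:

	    echo 0 > /sys/devices/system/xen_memory/xen_memory0/scrub_pages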

	  If in doubt, say yes.

config XEN_DEV_EVTCHN
	tristate "Xen /dev/xen/evtchn device"
	default y
	help
	  The evtchn driver allows a userspace process to trigger event
	  channels and to receive notification of an event channel
	  firing.

	  If in doubt, say yes.

config XEN_BACKEND
	bool "Backend driver support"
	default XEN_DOM0
	help
	  Support for backend device drivers that provide I/O services
	  to other virtual machines.

config XENFS
	tristate "Xen filesystem"
	select XEN_PRIVCMD
	default y
	help
	  The Xen filesystem provides a way for domains to share
	  information with each other and with the hypervisor.
	  For example, by reading and writing the "xenbus" file, guests
	  may pass arbitrary information to the initial domain.

	  If in doubt, say yes.

config XEN_COMPAT_XENFS
	bool "Create compatibility mount point /proc/xen"
	depends on XENFS
	default y
	help
	  The old xenstore userspace tools expect to find "xenbus"
	  under /proc/xen, but "xenbus" is now found at the root of the
	  xenfs filesystem. Selecting this causes the kernel to create
	  the compatibility mount point /proc/xen if it is running on
	  a Xen platform.
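
	  For example, without this option the equivalent mount point can
	  typically be set up by hand:

	    mount -t xenfs xenfs /proc/xen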

	  If in doubt, say yes.

config XEN_SYS_HYPERVISOR
	bool "Create xen entries under /sys/hypervisor"
	depends on SYSFS
	select SYS_HYPERVISOR
	default y
	help
	  Create entries under /sys/hypervisor describing the Xen
	  hypervisor environment. When running native or in another
	  virtual environment, /sys/hypervisor will still be present,
	  but will have no Xen contents.
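
	  For example (entry names assumed from the stable Xen sysfs ABI):

	    cat /sys/hypervisor/type           # prints "xen" under Xen
	    cat /sys/hypervisor/version/major  # Xen major version
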
config XEN_XENBUS_FRONTEND
	tristate

config XEN_GNTDEV
	tristate "userspace grant access device driver"
	depends on XEN
	default m
	select MMU_NOTIFIER
	help
	  Allows userspace processes to use grants.

config XEN_GNTDEV_DMABUF
	bool "Add support for dma-buf grant access device driver extension"
	depends on XEN_GNTDEV && XEN_GRANT_DMA_ALLOC
	select DMA_SHARED_BUFFER
	help
	  Allows userspace processes and kernel modules to use the Xen-backed
	  dma-buf implementation. With this extension, grant references to
	  the pages of an imported dma-buf can be exported for use by other
	  domains, and grant references coming from a foreign domain can be
	  converted into a local dma-buf for local export.

config XEN_GRANT_DEV_ALLOC
	tristate "User-space grant reference allocator driver"
	depends on XEN
	default m
	help
	  Allows userspace processes to create pages with access granted
	  to other domains. This can be used to implement frontend drivers
	  or as part of an inter-domain shared memory channel.

config XEN_GRANT_DMA_ALLOC
	bool "Allow allocating DMA capable buffers with grant reference module"
	depends on XEN && HAS_DMA
	help
	  Extends the grant table module API to allow allocating DMA capable
	  buffers and mapping foreign grant references on top of them.
	  The resulting buffer is similar to one allocated by the balloon
	  driver in that proper memory reservation is made
	  ({increase|decrease}_reservation) and VA mappings are updated if
	  needed.
	  This is useful for sharing foreign buffers with HW drivers which
	  cannot work with scattered buffers provided by the balloon driver,
	  but require DMAable memory instead.

config SWIOTLB_XEN
	def_bool y
	depends on XEN_PV || ARM || ARM64
	select DMA_OPS
	select SWIOTLB

config XEN_PCI_STUB
	bool

config XEN_PCIDEV_STUB
	tristate "Xen PCI-device stub driver"
	depends on PCI && !X86 && XEN
	depends on XEN_BACKEND
	select XEN_PCI_STUB
	default m
	help
	  The PCI device stub driver provides a limited version of the PCI
	  device backend driver without para-virtualized support for guests.
	  If you select this to be a module, you will need to make sure no
	  other driver has bound to the device(s) you want to make visible to
	  other guests.

	  The "hide" parameter (only applicable if the backend driver is
	  compiled into the kernel) allows you to bind PCI devices to this
	  module instead of their default device drivers. The argument is
	  the list of PCI BDFs:
	  xen-pciback.hide=(03:00.0)(04:00.0)

	  If in doubt, say m.

config XEN_PCIDEV_BACKEND
	tristate "Xen PCI-device backend driver"
	depends on PCI && X86 && XEN
	depends on XEN_BACKEND
	select XEN_PCI_STUB
	default m
	help
	  The PCI device backend driver allows the kernel to export arbitrary
	  PCI devices to other guests. If you select this to be a module, you
	  will need to make sure no other driver has bound to the device(s)
	  you want to make visible to other guests.

	  The "passthrough" parameter allows you to specify how you want the
	  PCI devices to appear in the guest. You can choose the default (0),
	  where the PCI topology starts at 00.00.0, or (1) for passthrough if
	  you want the PCI device topology to appear the same as in the host.

	  The "hide" parameter (only applicable if the backend driver is
	  compiled into the kernel) allows you to bind PCI devices to this
	  module instead of their default device drivers. The argument is
	  the list of PCI BDFs:
	  xen-pciback.hide=(03:00.0)(04:00.0)
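
	  As a sketch of the usual runtime alternative (sysfs paths assumed
	  from the standard pciback interface), a device can also be handed
	  to this driver after boot:

	    echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
	    echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/new_slot
	    echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/bind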

	  If in doubt, say m.

config XEN_PVCALLS_FRONTEND
	tristate "XEN PV Calls frontend driver"
	depends on INET && XEN
	select XEN_XENBUS_FRONTEND
	help
	  Experimental frontend for the Xen PV Calls protocol
	  (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
	  sends a small set of POSIX calls to the backend, which
	  implements them.

config XEN_PVCALLS_BACKEND
	tristate "XEN PV Calls backend driver"
	depends on INET && XEN && XEN_BACKEND
	help
	  Experimental backend for the Xen PV Calls protocol
	  (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
	  allows PV Calls frontends to send POSIX calls to the backend,
	  which implements them.

	  If in doubt, say n.

config XEN_SCSI_BACKEND
	tristate "XEN SCSI backend driver"
	depends on XEN && XEN_BACKEND && TARGET_CORE
	help
	  The SCSI backend driver allows the kernel to export its SCSI devices
	  to other guests via a high-performance shared-memory interface.
	  Only needed for systems running as Xen driver domains (e.g. Dom0)
	  and only if guests need generic access to SCSI devices.

config XEN_PRIVCMD
	tristate "Xen hypercall passthrough driver"
	depends on XEN
	default m
	help
	  The hypercall passthrough driver allows privileged user programs to
	  perform Xen hypercalls. This driver is normally required for systems
	  running as Dom0 to perform privileged operations, but in some
	  disaggregated Xen setups this driver might be needed for other
	  domains, too.

config XEN_ACPI_PROCESSOR
	tristate "Xen ACPI processor"
	depends on XEN && XEN_PV_DOM0 && X86 && ACPI_PROCESSOR && CPU_FREQ
	default m
	help
	  This ACPI processor uploads Power Management information to the Xen
	  hypervisor.

	  To do that the driver parses the Power Management data and uploads
	  said information to the Xen hypervisor. The Xen hypervisor can then
	  select the proper Cx and Pxx states. The driver also registers
	  itself as the SMM so that other drivers (such as the ACPI cpufreq
	  scaling driver) will not load.

	  To compile this driver as a module, choose M here: the module will
	  be called xen_acpi_processor. If you do not know what to choose,
	  select M here. If the CPUFREQ drivers are built into the kernel,
	  select Y here.

config XEN_MCE_LOG
	bool "Xen platform mcelog"
	depends on XEN_PV_DOM0 && X86_MCE
	help
	  Allow the kernel to fetch MCE errors from the Xen platform and
	  convert them into the Linux mcelog format for mcelog tools.

config XEN_HAVE_PVMMU
	bool

config XEN_EFI
	def_bool y
	depends on (ARM || ARM64 || X86_64) && EFI

config XEN_AUTO_XLATE
	def_bool y
	depends on ARM || ARM64 || XEN_PVHVM
	help
	  Support for auto-translated physmap guests.

config XEN_ACPI
	def_bool y
	depends on X86 && ACPI

config XEN_SYMS
	bool "Xen symbols"
	depends on X86 && XEN_DOM0 && XENFS
	default y if KALLSYMS
	help
	  Exports hypervisor symbols (along with their types and addresses)
	  via the /proc/xen/xensyms file, similar to /proc/kallsyms.
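
	  For example (assuming xenfs is mounted under /proc/xen):

	    grep hypercall /proc/xen/xensyms
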
config XEN_HAVE_VPMU
	bool

config XEN_FRONT_PGDIR_SHBUF
	tristate

config XEN_UNPOPULATED_ALLOC
	bool "Use unpopulated memory ranges for guest mappings"
	depends on ZONE_DEVICE
	default XEN_BACKEND || XEN_GNTDEV || XEN_DOM0
	help
	  Use unpopulated memory ranges in order to create mappings for guest
	  memory regions, including grant maps and foreign pages. This avoids
	  having to balloon out RAM regions in order to obtain physical memory
	  space to create such mappings.

config XEN_GRANT_DMA_IOMMU
	bool
	select IOMMU_API

config XEN_GRANT_DMA_OPS
	bool
	select DMA_OPS

config XEN_VIRTIO
	bool "Xen virtio support"
	depends on VIRTIO
	select XEN_GRANT_DMA_OPS
	select XEN_GRANT_DMA_IOMMU if OF
	help
	  Enable virtio support for running as a Xen guest. Depending on the
	  guest type this will require special support on the backend side
	  (qemu or kernel, depending on the virtio device types used).

	  If in doubt, say n.

config XEN_VIRTIO_FORCE_GRANT
	bool "Require Xen virtio support to use grants"
	depends on XEN_VIRTIO
	help
	  Require virtio for Xen guests to use grant mappings.
	  This avoids the need to give the backend the right to map all
	  of the guest memory, but it needs support on the backend side
	  (e.g. qemu or kernel, depending on the virtio device types used).

endmenu