Create a new vCPU in the VM
Synopsis:
cpu [options]*
Options:
- partition
name
- If adaptive partitioning (APS) is implemented in the hypervisor host domain,
the vCPU will run in the host domain APS partition specified by
name (see the example following these options). If the partition option isn't
specified, the vCPU thread runs in the partition where the
qvm process was started.
- runmask
cpu_number{,cpu_number}
- Allow the vCPU to run only on the specified physical CPU or CPUs (core
pinning). CPU numbering is zero-based. Default is no restrictions
(floating).
- Assigning runmasks to vCPUs involves some important design choices. If a vCPU
is allowed to float (no runmask is set, or the runmask includes more than
one CPU), then the vCPU may migrate between physical CPUs.
- Migration is useful on some systems, because it lets a vCPU move
to a core that is free (if its runmask permits). However, for some
realtime guests, pinning a vCPU to a core that isn't shared with other
vCPUs can improve realtime determinism.
- QNX recommends that you begin your design with floating vCPUs, then
restrict or prohibit migration as required (see Example 3 below).
- sched
priority[r | f | o]
- sched
high_priority,low_priority,max_replacements,replacement_period,initial_budgets
- Set the vCPU's scheduling priority and scheduling algorithm. The scheduling
algorithm can be round-robin (r), FIFO
(f), or sporadic (s). The
o (other) scheduling algorithm is reserved for future
use; currently it is equivalent to r.
- The default vCPU configuration uses round-robin scheduling for vCPUs. Our
testing has indicated that most guests respond most favorably to this
scheduling algorithm. It allows a guest that has its own internal scheduling
policies to operate efficiently.
- See Configuring sporadic scheduling below for more information about
using sporadic scheduling.
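For example, the following configuration line is a minimal sketch that combines the
partition and sched options. The partition name vm_aps and the priority are
illustrative only, and the line assumes that an APS partition named vm_aps has
already been created in the hypervisor host domain:
cpu partition vm_aps sched 10r
The vCPU created by this line runs in the vm_aps partition, with round-robin
scheduling at priority 10.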
Description:
The cpu option creates a new vCPU in the VM. Every vCPU is a thread,
so a thread runmask can be used to restrict the vCPU to a specific physical CPU (or
several physical CPUs). Similarly, standard thread scheduling priorities and
algorithms can be applied to the vCPU. Note that vCPU threads are threads in the
hypervisor host domain.
If no cpu option is specified, the qvm process instance
creates a single vCPU.
For more information about vCPUs and system performance, see vCPUs and hypervisor performance in the Performance Tuning chapter.
Configuring sporadic scheduling
Note: You should consult with QNX engineering support before using scheduling
algorithms such as FIFO or sporadic.
For sporadic scheduling, you need to specify the following five parameters:
- high_priority – the high priority value
- low_priority – the low priority value
- max_replacements – the maximum number of times the
vCPU's budget can be replenished due to blocking
- replacement_period – the number of nanoseconds that
must elapse before the vCPU's budget can be replenished after being blocked, or
after overrunning max_replacements
- initial_budget – the number of nanoseconds to run at
high_priority before being dropped to
low_priority
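For example, the following configuration line is a minimal sketch of a sporadically
scheduled vCPU. The numeric values are illustrative only, and the line assumes that
the s specifier is appended to the parameter list in the same way that the r suffix
is appended to the priority in Example 1 below:
cpu sched 50,10,5,50000000,10000000s
With these values, the vCPU runs at high priority 50 for an initial budget of
10000000 ns (10 ms) before being dropped to low priority 10; the replacement period
is 50000000 ns (50 ms), and the budget can be replenished at most 5 times.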
Maximum vCPUs per guest
The maximum number of vCPUs that may be defined for each guest running in a
hypervisor VM is limited by a number of factors:
- Hardware
- On supported AArch64 (ARMv8) and x86-64 platforms, the hardware currently
allows a maximum of 254 vCPUs on the board. This number may change with
newer hardware.
- Specific hardware components may also limit the number of vCPUs per guest.
For example, on x86-64 boards, the LAPIC virtualization limits a guest to a
maximum of 15 vCPUs. Similarly, on AArch64 boards, the maximum number of
vCPUs per guest is limited by the GIC version the GIC vdev is
running in. With the GIC vdev running in GICv2 mode, the maximum number of vCPUs
that can be assigned to a guest is eight (8).
Note: QNX recommends that you don't give a guest more vCPUs than there are
physical CPUs on the underlying hardware platform.
- Guest OS
- Current QNX OSs support a maximum of 32 CPUs (except on ARM boards with
GICv2, for which the limit is 8 CPUs). These limits also apply to vCPUs,
since a guest OS makes no distinction between a CPU and a vCPU.
- Check the latest documentation for your guest OSs (QNX Neutrino and Linux)
for more information about the maximum number of CPUs they support.
Examples:
The following examples illustrate some typical uses of cpu to
configure vCPUs.
Example 1: pin vCPU, set scheduling priority
The following creates a vCPU that is permitted to run only on physical CPU 3
(numbering is zero-based):
cpu runmask 3 sched 8r
The priority is 8. The scheduling algorithm is round-robin.
Example 2: floating vCPUs, set scheduling priority
The following creates four vCPUs (0, 1, 2, 3), all with priority 10:
cpu sched 10
cpu sched 10
cpu sched 10
cpu sched 10
The runmask option isn't specified, so the default (floating) is
used.
Since no affinity has been specified for any of the vCPU threads, the hypervisor
microkernel scheduler can run each vCPU thread on whatever available physical CPU it
deems most appropriate.
Example 3: two vCPUs pinned to physical CPUs, default scheduling
The following creates four vCPUs (0, 1, 2, 3):
cpu runmask 2,3 # vCPU 0 may run only on pCPU 2 or 3.
cpu runmask 2,3 # vCPU 1 may run only on pCPU 2 or 3.
cpu # vCPU 2 may run on any pCPU.
cpu # vCPU 3 may run on any pCPU.
vCPUs 0 and 1 have their runmask options set to pin them to pCPUs 2
and 3. This allows them to run on pCPUs 2 and 3 only. They won't migrate to
pCPU 0 or 1, even if these pCPUs are idle. No runmask option is
specified for vCPUs 2 and 3, so they will use the default: float. They will be able
to run on any available physical CPU (including pCPUs 2 and 3).
For information about how priorities for hypervisor threads and guest threads are
handled, see Scheduling in the Understanding QNX Virtual Environments chapter.
For more information about affinity and scheduling in QNX systems, see the QNX
Neutrino OS documentation.