Huge Page Support

What Are Huge Pages?

  • Huge pages are memory pages larger than the standard 4 KiB x86 page, typically 2 MiB or 1 GiB. They are also known as "superpages" in FreeBSD (or "large pages" in the Microsoft Windows world)
  • Newer AMD64 processors can use 1GB pages in long mode (processor support can be checked as shown below).
  • Linux has supported huge pages on several architectures since the 2.6 series via the hugetlbfs filesystem.
  • Xen Project supports allocating huge pages for HVM and PVH guests (use in PV guests is not supported). The hypervisor itself uses huge pages wherever it can.
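
Whether a particular processor supports the 1GB pages mentioned above can be checked from within Linux via the pdpe1gb CPUID flag (2MB huge pages are available on every x86-64 processor). This is a generic Linux check rather than anything Xen-specific; if the flag is absent, grep prints nothing:

   # grep -o -m1 pdpe1gb /proc/cpuinfo
   pdpe1gb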

Using Huge Pages

  • In the Hypervisor: In recent versions, huge page support is enabled by default. Older versions (and custom builds with different defaults) may need to specify the hypervisor boot command line flag "allowsuperpage" (formerly called "allowhugepage").
  • In the guest: The ballooning driver does not support hugepages, so keep the memory of the DomU constant. Create the DomU with minimum memory equal to maximum memory so the balloon driver is never called. Never execute the xl mem-set command against the DomU to change its memory size; a matching guest configuration is sketched after the output below. Then, within the VM, execute the following:
   # echo 20 > /proc/sys/vm/nr_hugepages

   # cat /proc/meminfo
   ...
   AnonHugePages:         0 kB
   HugePages_Total:      20
   HugePages_Free:       20
   HugePages_Rsvd:        0
   HugePages_Surp:        0
   Hugepagesize:       2048 kB
   DirectMap4k:     1056768 kB
   DirectMap2M:           0 kB
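
The "minimum memory equal to maximum memory" rule above corresponds to a DomU configuration along the following lines. This is only an illustrative sketch: the guest name and sizes are invented here, and older xl releases select an HVM guest with builder= rather than type=:

   # /etc/xen/hugepage-guest.cfg -- illustrative example only
   name   = "hugepage-guest"
   type   = "hvm"        # or "pvh"; PV guests do not get superpages
   vcpus  = 2
   memory = 2048         # MiB
   maxmem = 2048         # equal to memory, so the balloon driver never resizes the guest

Once pages have been reserved via nr_hugepages, applications inside the guest typically consume them through a hugetlbfs mount (see the kernel.org document in the references below):

   # mkdir -p /mnt/huge
   # mount -t hugetlbfs none /mnt/huge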

Huge Pages: Internals

If you use an HVM or PVH guest in Hardware Assisted Paging (HAP) mode (the default), and minimize memory ballooning, you will be maximizing your use of hugepages from the hypervisor's perspective.
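
Whether the hypervisor actually detected HAP, and which page sizes its HAP implementation can use, is normally visible in the hypervisor boot log from Dom0. The output below is an example; the exact wording varies between Xen versions and hardware:

   # xl dmesg | grep -i hap
   (XEN) HVM: Hardware Assisted Paging (HAP) detected
   (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB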

Superpages have two advantages, both of which translate to reduced overhead due to TLB misses on workloads that involve accessing large amounts of memory.

1. Superpages in the pagetable translate to hugepages in the Translation Lookaside Buffer (TLB). On x86, the architectural limit of the TLB is 16 entries, so having hugepages in the TLB increases its coverage from 64 KiB (16 × 4 KiB) to 32 MiB (16 × 2 MiB). This translates to fewer TLB misses.

2. Superpages skip one level of the pagetable on a TLB miss, making TLB misses less expensive.

Superpages might be used in several places:

  • The guest pagetables.
  • The hypervisor's pagetables
  • For an HVM or PVH guest, the Physical-to-Machine (p2m) table (which is inside Xen)
  • For a guest running in shadow mode, the shadow pagetables

Xen will always use superpages in its own pagetables when possible.

Xen will always use superpages in the p2m table when possible. On a clean machine that has never done any ballooning, this should always happen. Ballooning can fragment the p2m, making it impossible to use superpages.

Xen has no support for superpages in the pagetables of PV guests. Oracle did some work to make this possible some time back, but it was never upstreamed, and they switched to pursuing PVH instead.

HVM and PVH guests can always put superpages in their pagetables.

At the moment, shadow pagetables never have superpage entries.

When HVM and PVH guests are running in HAP mode (the default), the TLB will contain superpage entries, and the cost of a TLB miss will go down compared to having no superpage entries.

When HVM and PVH guests are running in shadow mode, they can use superpages in their own pagetables; however, the shadow tables, used by the actual hardware, will not have superpages. This means that the TLB will not contain superpage entries, nor will the cost of a TLB miss go down compared to having no superpage entries.

However, the cost of a TLB miss when running in HAP is much more expensive than a TLB miss when running in shadow mode, because every step of the guest pagetable walk must itself be translated through the p2m, turning a four-step native walk into a two-dimensional walk of up to roughly 24 memory accesses. So whether HAP or shadow provides better performance depends on the parameters of the particular workload being run. (In most cases, HAP will provide better performance.)
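
For benchmarking a particular workload under both modes, the paging mode of an HVM guest can be selected per guest in its xl configuration file; hap defaults to enabled on hardware that supports it:

   hap = 1    # Hardware Assisted Paging (EPT/NPT) -- the default where available
   hap = 0    # force shadow pagetable mode instead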

External References

  • Wikipedia entry for Huge Pages: http://en.wikipedia.org/wiki/Page_%28computer_memory%29#Huge_pages
  • Huge Pages from the Linux memory management site: http://linux-mm.org/HugePages
  • Huge Page document from kernel.org: https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
  • An old Oracle-centric page with some useful, but also some dated, information: http://zhigang.org/wiki/XenHugePages