License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- the file had no licensing information in it,
- the file was a */uapi/* one with no licensing information in it,
- the file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- Files that already had some variant of a license header in them (even
if <5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If the file was a */uapi/* path one, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review of the spreadsheet was
done by Kate, Philippe, and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and they have been fixed to reflect
the correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 patched files from the initial patch version,
with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_XEN_PAGE_H
#define _ASM_X86_XEN_PAGE_H

#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/pfn.h>
#include <linux/mm.h>
#include <linux/device.h>

#include <linux/uaccess.h>
#include <asm/page.h>
#include <asm/pgtable.h>

#include <xen/interface/xen.h>
#include <xen/interface/grant_table.h>
#include <xen/features.h>

/* Xen machine address */
typedef struct xmaddr {
	phys_addr_t maddr;
} xmaddr_t;

/* Xen pseudo-physical address */
typedef struct xpaddr {
	phys_addr_t paddr;
} xpaddr_t;

#ifdef CONFIG_X86_64
#define XEN_PHYSICAL_MASK	__sme_clr((1UL << 52) - 1)
#else
#define XEN_PHYSICAL_MASK	__PHYSICAL_MASK
#endif

#define XEN_PTE_MFN_MASK	((pteval_t)(((signed long)PAGE_MASK) & \
				 XEN_PHYSICAL_MASK))

#define XMADDR(x)	((xmaddr_t) { .maddr = (x) })
#define XPADDR(x)	((xpaddr_t) { .paddr = (x) })

/**** MACHINE <-> PHYSICAL CONVERSION MACROS ****/
#define INVALID_P2M_ENTRY	(~0UL)
xen/mmu: Add the notion of identity (1-1) mapping.
Our P2M tree structure is three levels deep. On the leaf nodes
we set the Machine Frame Number (MFN) for the PFN. This means
that when one does pfn_to_mfn(pfn), which is used when creating
PTE entries, one gets the real MFN of the hardware. When Xen sets
up a guest it initially populates an array with descending
(or ascending) MFN values, like so:
idx: 0, 1, 2
[0x290F, 0x290E, 0x290D, ..]
so pfn_to_mfn(2)==0x290D. If you start and restart many guests, that list
starts looking quite random.
We graft this structure onto our P2M tree and stick those MFNs
in the leaves. But for every other leaf entry, or for the top
root or middle level, where there is a void entry, we assume it is
"missing". So
pfn_to_mfn(0xc0000)=INVALID_P2M_ENTRY.
We add the possibility of setting 1-1 mappings on certain regions, so
that:
pfn_to_mfn(0xc0000)=0xc0000
The benefit of this is that for non-RAM regions (think
PCI BARs, or ACPI spaces) we can create mappings easily, because
the PFN value matches the MFN.
For this to work efficiently we introduce one new page, p2m_identity, and
allocate (via reserve_brk) any other pages we need to cover the sides
(1GB or 4MB boundary violations). All entries in p2m_identity are set to
INVALID_P2M_ENTRY (the Xen toolstack only recognizes that and MFNs,
no other fancy values).
On lookup we spot that the entry points to p2m_identity and return the identity
value instead of dereferencing and returning INVALID_P2M_ENTRY. If the entry
points to an allocated page, we just proceed as before and return the PFN.
If the PFN has IDENTITY_FRAME_BIT set we unmask that in appropriate functions
(pfn_to_mfn).
The reason for having the IDENTITY_FRAME_BIT instead of just returning the
PFN is that we could find ourselves where pfn_to_mfn(pfn)==pfn for a
non-identity pfn. To protect against this we elect to set (and get) the
IDENTITY_FRAME_BIT on all identity-mapped PFNs.
This simplistic diagram is used to explain the more subtle pieces of code.
There is also a diagram of the P2M at the end that can help.
Imagine your E820 looking as follows:
1GB 2GB
/-------------------+---------\/----\ /----------\ /---+-----\
| System RAM | Sys RAM ||ACPI| | reserved | | Sys RAM |
\-------------------+---------/\----/ \----------/ \---+-----/
^- 1029MB ^- 2001MB
[1029MB = 263424 (0x40500), 2001MB = 512256 (0x7D100), 2048MB = 524288 (0x80000)]
And dom0_mem=max:3GB,1GB is passed in to the guest, meaning memory past 1GB
is actually not present (would have to kick the balloon driver to put it in).
When we are told to set the PFNs for identity mapping (see patch: "xen/setup:
Set identity mapping for non-RAM E820 and E820 gaps.") we pass in the start
PFN and the end PFN (263424 and 512256 respectively). The first step is
to reserve_brk a top leaf page if p2m[1] is missing. A top leaf page
covers 512^2 pages (1GB), and in case the start or end PFN is not
aligned on 512^2*PAGE_SIZE (1GB) we loop over aligned 1GB PFNs from start
pfn to end pfn. We reserve_brk top leaf pages if they are missing (meaning
they point to p2m_mid_missing).
With the E820 example above, 263424 is not 1GB aligned, so we allocate a
reserve_brk page which will cover the PFN estate from 0x40000 to 0x80000.
Each entry in the allocated page is "missing" (points to p2m_missing).
The next stage is to determine if we need to do a more granular boundary
check on the 4MB (or 2MB, depending on architecture) boundaries of the
start and end pfns.
We check if the start pfn and end pfn violate that boundary check, and if
so reserve_brk a middle (p2m[x][y]) leaf page. This way we have a much finer
granularity of setting which PFNs are missing and which ones are identity.
In our example 263424 and 512256 both fail the check so we reserve_brk two
pages. Populate them with INVALID_P2M_ENTRY (so they both have "missing" values)
and assign them to p2m[1][2] and p2m[1][488] respectively.
At this point we would at minimum reserve_brk one page, but could be up to
three. Each call to set_phys_range_identity has at maximum a three page
cost. If we were to query the P2M at this stage, all those entries from
start PFN through end PFN (so 1029MB -> 2001MB) would return INVALID_P2M_ENTRY
("missing").
The next step is to walk from the start pfn to the end pfn setting
the IDENTITY_FRAME_BIT on each PFN. This is done in 'set_phys_range_identity'.
If we find that the middle leaf is pointing to p2m_missing we can swap it over
to p2m_identity - this way covering 4MB (or 2MB) PFN space. At this point we
do not need to worry about boundary alignment (so no need to reserve_brk a middle
page, figure out which PFNs are "missing" and which ones are identity), as that
has been done earlier. If we find that the middle leaf is not occupied by
p2m_identity or p2m_missing, we dereference that page (which covers
512 PFNs) and set the appropriate PFN with IDENTITY_FRAME_BIT. In our example
263424 and 512256 end up there, and we set from p2m[1][2][256->511] and
p2m[1][488][0->256] with IDENTITY_FRAME_BIT set.
All other regions that are void (or not filled) either point to p2m_missing
(considered missing) or have the default value of INVALID_P2M_ENTRY (also
considered missing). In our case, p2m[1][2][0->255] and p2m[1][488][257->511]
contain the INVALID_P2M_ENTRY value and are considered "missing."
This is what the p2m ends up looking like (for the E820 above), with this
fabulous drawing:
p2m /--------------\
/-----\ | &mfn_list[0],| /-----------------\
| 0 |------>| &mfn_list[1],| /---------------\ | ~0, ~0, .. |
|-----| | ..., ~0, ~0 | | ~0, ~0, [x]---+----->| IDENTITY [@256] |
| 1 |---\ \--------------/ | [p2m_identity]+\ | IDENTITY [@257] |
|-----| \ | [p2m_identity]+\\ | .... |
| 2 |--\ \-------------------->| ... | \\ \----------------/
|-----| \ \---------------/ \\
| 3 |\ \ \\ p2m_identity
|-----| \ \-------------------->/---------------\ /-----------------\
| .. +->+ | [p2m_identity]+-->| ~0, ~0, ~0, ... |
\-----/ / | [p2m_identity]+-->| ..., ~0 |
/ /---------------\ | .... | \-----------------/
/ | IDENTITY[@0] | /-+-[x], ~0, ~0.. |
/ | IDENTITY[@256]|<----/ \---------------/
/ | ~0, ~0, .... |
| \---------------/
|
p2m_missing p2m_missing
/------------------\ /------------\
| [p2m_mid_missing]+---->| ~0, ~0, ~0 |
| [p2m_mid_missing]+---->| ..., ~0 |
\------------------/ \------------/
where ~0 is INVALID_P2M_ENTRY. IDENTITY is (PFN | IDENTITY_BIT)
Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
[v5: Changed code to use ranges, added ASCII art]
[v6: Rebased on top of xen->p2m code split]
[v4: Squished patches in just this one]
[v7: Added RESERVE_BRK for potentially allocated pages]
[v8: Fixed alignment problem]
[v9: Changed 1<<3X to 1<<BITS_PER_LONG-X]
[v10: Copied git commit description in the p2m code + Add Review tag]
[v11: Title had '2-1' - should be '1-1' mapping]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
#define FOREIGN_FRAME_BIT	(1UL<<(BITS_PER_LONG-1))
#define IDENTITY_FRAME_BIT	(1UL<<(BITS_PER_LONG-2))
#define FOREIGN_FRAME(m)	((m) | FOREIGN_FRAME_BIT)
#define IDENTITY_FRAME(m)	((m) | IDENTITY_FRAME_BIT)

#define P2M_PER_PAGE		(PAGE_SIZE / sizeof(unsigned long))

extern unsigned long *machine_to_phys_mapping;
xen/x86: replace order-based range checking of M2P table by linear one
The order-based approach is not only less efficient (requiring a shift
and a compare, typical generated code looking like this
mov eax, [machine_to_phys_order]
mov ecx, eax
shr ebx, cl
test ebx, ebx
jnz ...
whereas a direct check requires just a compare, like in
cmp ebx, [machine_to_phys_nr]
jae ...
), but also slightly dangerous in the 32-on-64 case - the element
address calculation can wrap if the next power of two boundary is
sufficiently far away from the actual upper limit of the table, and
hence can result in user space addresses being accessed (with it being
unknown what may actually be mapped there).
Additionally, the elimination of the mistaken use of fls() here (should
have been __fls()) fixes a latent issue on x86-64 that would trigger
if the code was run on a system with memory extending beyond the 44-bit
boundary.
CC: stable@kernel.org
Signed-off-by: Jan Beulich <jbeulich@novell.com>
[v1: Based on Jeremy's feedback]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
extern unsigned long machine_to_phys_nr;

extern unsigned long *xen_p2m_addr;
extern unsigned long xen_p2m_size;
extern unsigned long xen_max_p2m_pfn;

extern int xen_alloc_p2m_entry(unsigned long pfn);

extern unsigned long get_phys_to_machine(unsigned long pfn);
extern bool set_phys_to_machine(unsigned long pfn, unsigned long mfn);
extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
extern unsigned long __init set_phys_range_identity(unsigned long pfn_s,
						    unsigned long pfn_e);

#ifdef CONFIG_XEN_PV
extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
				   struct gnttab_map_grant_ref *kmap_ops,
				   struct page **pages, unsigned int count);
extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
				     struct gnttab_unmap_grant_ref *kunmap_ops,
				     struct page **pages, unsigned int count);
#else
static inline int
set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
			struct gnttab_map_grant_ref *kmap_ops,
			struct page **pages, unsigned int count)
{
	return 0;
}

static inline int
clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
			  struct gnttab_unmap_grant_ref *kunmap_ops,
			  struct page **pages, unsigned int count)
{
	return 0;
}
#endif

/*
 * Helper functions to write or read unsigned long values to/from
 * memory, when the access may fault.
 */
static inline int xen_safe_write_ulong(unsigned long *addr, unsigned long val)
{
	return __put_user(val, (unsigned long __user *)addr);
}

static inline int xen_safe_read_ulong(unsigned long *addr, unsigned long *val)
{
	return __get_user(*val, (unsigned long __user *)addr);
}

#ifdef CONFIG_XEN_PV
/*
 * When to use pfn_to_mfn(), __pfn_to_mfn() or get_phys_to_machine():
 * - pfn_to_mfn() returns either INVALID_P2M_ENTRY or the mfn. No indicator
 *   bits (identity or foreign) are set.
 * - __pfn_to_mfn() returns the found entry of the p2m table. A possibly set
 *   identity or foreign indicator will be still set. __pfn_to_mfn() is
xen: switch to linear virtual mapped sparse p2m list
At start of the day the Xen hypervisor presents a contiguous mfn list
to a pv-domain. In order to support sparse memory this mfn list is
accessed via a three level p2m tree built early in the boot process.
Whenever the system needs the mfn associated with a pfn this tree is
used to find the mfn.
Instead of using a software-walked tree for accessing a specific mfn
list entry, this patch creates a virtual address area for the
entire possible mfn list, including memory holes. The holes are
covered by mapping a pre-defined page consisting only of "invalid
mfn" entries. Access to an mfn entry is possible by just using the
virtual base address of the mfn list and the pfn as index into that
list. This speeds up the (hot) path of determining the mfn of a
pfn.
Kernel build on a Dell Latitude E6440 (2 cores, HT) in 64 bit Dom0
showed following improvements:
Elapsed time: 32:50 -> 32:35
System: 18:07 -> 17:47
User: 104:00 -> 103:30
Tested with following configurations:
- 64 bit dom0, 8GB RAM
- 64 bit dom0, 128 GB RAM, PCI-area above 4 GB
- 32 bit domU, 512 MB, 8 GB, 43 GB (more wouldn't work even without
the patch)
- 32 bit domU, ballooning up and down
- 32 bit domU, save and restore
- 32 bit domU with PCI passthrough
- 64 bit domU, 8 GB, 2049 MB, 5000 MB
- 64 bit domU, ballooning up and down
- 64 bit domU, save and restore
- 64 bit domU with PCI passthrough
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
 *   encapsulating get_phys_to_machine() which is called in special cases only.
 * - get_phys_to_machine() is to be called by __pfn_to_mfn() only in special
 *   cases needing an extended handling.
 */
static inline unsigned long __pfn_to_mfn(unsigned long pfn)
{
	unsigned long mfn;

	if (pfn < xen_p2m_size)
		mfn = xen_p2m_addr[pfn];
	else if (unlikely(pfn < xen_max_p2m_pfn))
		return get_phys_to_machine(pfn);
	else
		return IDENTITY_FRAME(pfn);

	if (unlikely(mfn == INVALID_P2M_ENTRY))
		return get_phys_to_machine(pfn);

	return mfn;
}
#else
static inline unsigned long __pfn_to_mfn(unsigned long pfn)
{
	return pfn;
}
#endif
static inline unsigned long pfn_to_mfn(unsigned long pfn)
{
	unsigned long mfn;

	/*
	 * Some x86 code is still using pfn_to_mfn instead of
	 * pfn_to_mfn. This will have to be removed when we have
	 * figured out which calls.
	 */
	if (xen_feature(XENFEAT_auto_translated_physmap))
		return pfn;

	mfn = __pfn_to_mfn(pfn);

	if (mfn != INVALID_P2M_ENTRY)
xen/mmu: Add the notion of identity (1-1) mapping.
Our P2M tree structure is three-level. On the leaf nodes
we set the Machine Frame Number (MFN) of the PFN. This means
that when one does pfn_to_mfn(pfn), which is used when creating
PTE entries, one gets the real MFN of the hardware. When Xen sets
up a guest it initially populates an array with descending
(or ascending) MFN values, like so:
idx: 0, 1, 2
[0x290F, 0x290E, 0x290D, ..]
so pfn_to_mfn(2)==0x290D. If you start and restart many guests, that
list starts looking quite random.
We graft this structure onto our P2M tree and stick those MFNs
in the leaves. For all other leaf entries, and for any void entry in
the top root or middle level, we assume the mapping is "missing", so
pfn_to_mfn(0xc0000)=INVALID_P2M_ENTRY.
We add the possibility of setting 1-1 mappings on certain regions, so
that:
pfn_to_mfn(0xc0000)=0xc0000
The benefit of this is that for non-RAM regions (think PCI BARs or
ACPI spaces) we can create mappings easily, because the PFN value
matches the MFN.
For this to work efficiently we introduce one new page, p2m_identity, and
allocate (via reserve_brk) any other pages we need to cover the sides
(1GB or 4MB boundary violations). All entries in p2m_identity are set to
INVALID_P2M_ENTRY (the Xen toolstack only recognizes that and MFNs,
no other fancy value).
On lookup we spot that the entry points to p2m_identity and return the identity
value instead of dereferencing and returning INVALID_P2M_ENTRY. If the entry
points to an allocated page, we just proceed as before and return the PFN.
If the PFN has IDENTITY_FRAME_BIT set we unmask that in appropriate functions
(pfn_to_mfn).
The reason for having the IDENTITY_FRAME_BIT instead of just returning the
PFN is that we could find ourselves in a situation where pfn_to_mfn(pfn)==pfn
for a non-identity pfn. To protect ourselves against that, we elect to set
(and get) the IDENTITY_FRAME_BIT on all identity-mapped PFNs.
This simplistic diagram is used to explain the more subtle pieces of code.
There is also a diagram of the P2M at the end that can help.
Imagine your E820 looking as so:
1GB 2GB
/-------------------+---------\/----\ /----------\ /---+-----\
| System RAM | Sys RAM ||ACPI| | reserved | | Sys RAM |
\-------------------+---------/\----/ \----------/ \---+-----/
^- 1029MB ^- 2001MB
[1029MB = 263424 (0x40500), 2001MB = 512256 (0x7D100), 2048MB = 524288 (0x80000)]
And dom0_mem=max:3GB,1GB is passed in to the guest, meaning memory past 1GB
is actually not present (would have to kick the balloon driver to put it in).
When we are told to set the PFNs for identity mapping (see patch: "xen/setup:
Set identity mapping for non-RAM E820 and E820 gaps.") we pass in the start
of the PFN and the end PFN (263424 and 512256 respectively). The first step is
to reserve_brk a top leaf page if the p2m[1] is missing. The top leaf page
covers 512^2 of page estate (1GB) and in case the start or end PFN is not
aligned on 512^2*PAGE_SIZE (1GB) we loop on aligned 1GB PFNs from start pfn to
end pfn. We reserve_brk top leaf pages if they are missing (means they point
to p2m_mid_missing).
With the E820 example above, 263424 is not 1GB aligned so we allocate a
reserve_brk page which will cover the PFNs estate from 0x40000 to 0x80000.
Each entry in the allocate page is "missing" (points to p2m_missing).
The next stage is to determine if we need to do a more granular boundary check
on the 4MB (or 2MB, depending on architecture) boundaries of the start and end PFNs.
We check if the start pfn and end pfn violate that boundary check, and if
so reserve_brk a middle (p2m[x][y]) leaf page. This way we have a much finer
granularity of setting which PFNs are missing and which ones are identity.
In our example 263424 and 512256 both fail the check so we reserve_brk two
pages. Populate them with INVALID_P2M_ENTRY (so they both have "missing" values)
and assign them to p2m[1][2] and p2m[1][488] respectively.
At this point we would at minimum reserve_brk one page, but it could be up to
three. Each call to set_phys_range_identity has at maximum a three-page
cost. If we were to query the P2M at this stage, all those entries from
start PFN through end PFN (so 1029MB -> 2001MB) would return INVALID_P2M_ENTRY
("missing").
The next step is to walk from the start pfn to the end pfn setting
the IDENTITY_FRAME_BIT on each PFN. This is done in 'set_phys_range_identity'.
If we find that the middle leaf is pointing to p2m_missing we can swap it over
to p2m_identity - this way covering 4MB (or 2MB) of PFN space. At this point we
do not need to worry about boundary alignment (so no need to reserve_brk a middle
page, or figure out which PFNs are "missing" and which ones are identity), as that
has been done earlier. If we find that the middle leaf is not occupied by
p2m_identity or p2m_missing, we dereference that page (which covers
512 PFNs) and set the appropriate PFN with IDENTITY_FRAME_BIT. In our example
263424 and 512256 end up there, and we set from p2m[1][2][256->511] and
p2m[1][488][0->256] with IDENTITY_FRAME_BIT set.
All other regions that are void (or not filled) either point to p2m_missing
(considered missing) or have the default value of INVALID_P2M_ENTRY (also
considered missing). In our case, p2m[1][2][0->255] and p2m[1][488][257->511]
contain the INVALID_P2M_ENTRY value and are considered "missing."
This is what the p2m ends up looking like (for the E820 above), with this
fabulous drawing:
p2m /--------------\
/-----\ | &mfn_list[0],| /-----------------\
| 0 |------>| &mfn_list[1],| /---------------\ | ~0, ~0, .. |
|-----| | ..., ~0, ~0 | | ~0, ~0, [x]---+----->| IDENTITY [@256] |
| 1 |---\ \--------------/ | [p2m_identity]+\ | IDENTITY [@257] |
|-----| \ | [p2m_identity]+\\ | .... |
| 2 |--\ \-------------------->| ... | \\ \----------------/
|-----| \ \---------------/ \\
| 3 |\ \ \\ p2m_identity
|-----| \ \-------------------->/---------------\ /-----------------\
| .. +->+ | [p2m_identity]+-->| ~0, ~0, ~0, ... |
\-----/ / | [p2m_identity]+-->| ..., ~0 |
/ /---------------\ | .... | \-----------------/
/ | IDENTITY[@0] | /-+-[x], ~0, ~0.. |
/ | IDENTITY[@256]|<----/ \---------------/
/ | ~0, ~0, .... |
| \---------------/
|
p2m_missing p2m_missing
/------------------\ /------------\
| [p2m_mid_missing]+---->| ~0, ~0, ~0 |
| [p2m_mid_missing]+---->| ..., ~0 |
\------------------/ \------------/
where ~0 is INVALID_P2M_ENTRY. IDENTITY is (PFN | IDENTITY_BIT)
Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
[v5: Changed code to use ranges, added ASCII art]
[v6: Rebased on top of xen->p2m code split]
[v4: Squished patches in just this one]
[v7: Added RESERVE_BRK for potentially allocated pages]
[v8: Fixed alignment problem]
[v9: Changed 1<<3X to 1<<BITS_PER_LONG-X]
[v10: Copied git commit description in the p2m code + Add Review tag]
[v11: Title had '2-1' - should be '1-1' mapping]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2011-01-19 09:15:21 +08:00
		mfn &= ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);

	return mfn;
}

static inline int phys_to_machine_mapping_valid(unsigned long pfn)
{
	if (xen_feature(XENFEAT_auto_translated_physmap))
		return 1;

	return __pfn_to_mfn(pfn) != INVALID_P2M_ENTRY;
}

static inline unsigned long mfn_to_pfn_no_overrides(unsigned long mfn)
{
	unsigned long pfn;
	int ret;

	if (unlikely(mfn >= machine_to_phys_nr))
		return ~0;

	/*
	 * The array access can fail (e.g., device space beyond end of RAM).
	 * In such cases it doesn't matter what we return (we return garbage),
	 * but we must handle the fault without crashing!
	 */
	ret = xen_safe_read_ulong(&machine_to_phys_mapping[mfn], &pfn);
	if (ret < 0)
		return ~0;

	return pfn;
}

static inline unsigned long mfn_to_pfn(unsigned long mfn)
{
	unsigned long pfn;

	/*
	 * Some x86 code is still using mfn_to_pfn instead of
	 * gfn_to_pfn. This will have to be removed when we figure
	 * out which call to use.
	 */
	if (xen_feature(XENFEAT_auto_translated_physmap))
		return mfn;

	pfn = mfn_to_pfn_no_overrides(mfn);
	if (__pfn_to_mfn(pfn) != mfn)
		pfn = ~0;

	/*
	 * pfn is ~0 if there are no entries in the m2p for mfn or the
	 * entry doesn't map back to the mfn.
	 */
	if (pfn == ~0 && __pfn_to_mfn(mfn) == IDENTITY_FRAME(mfn))
		pfn = mfn;

	return pfn;
}
static inline xmaddr_t phys_to_machine(xpaddr_t phys)
{
	unsigned offset = phys.paddr & ~PAGE_MASK;
	return XMADDR(PFN_PHYS(pfn_to_mfn(PFN_DOWN(phys.paddr))) | offset);
}

static inline xpaddr_t machine_to_phys(xmaddr_t machine)
{
	unsigned offset = machine.maddr & ~PAGE_MASK;
	return XPADDR(PFN_PHYS(mfn_to_pfn(PFN_DOWN(machine.maddr))) | offset);
}

/* Pseudo-physical <-> Guest conversion */
static inline unsigned long pfn_to_gfn(unsigned long pfn)
{
	if (xen_feature(XENFEAT_auto_translated_physmap))
		return pfn;
	else
		return pfn_to_mfn(pfn);
}

static inline unsigned long gfn_to_pfn(unsigned long gfn)
{
	if (xen_feature(XENFEAT_auto_translated_physmap))
		return gfn;
	else
		return mfn_to_pfn(gfn);
}

xen: Make clear that swiotlb and biomerge are dealing with DMA address
The swiotlb is required when programming a DMA address on ARM when a
device is not protected by an IOMMU.
In this case, the DMA address should always be equal to the machine address.
For DOM0 memory, Xen ensures this by having an identity mapping between the
guest address and host address. However, when mapping a foreign grant
reference, the 1:1 model doesn't work.
For ARM guests, most callers of pfn_to_mfn expect to get a GFN
(Guest Frame Number), i.e. a PFN (Page Frame Number) from the Linux point
of view, given that all ARM guests are auto-translated.
Even though the name pfn_to_mfn is misleading, we need to ensure that
those callers get a GFN and not, by mistake, an MFN. In practice, I haven't
seen errors related to this, but we should fix it for the sake of
correctness.
In order to fix the implementation of pfn_to_mfn on ARM in a follow-up
patch, we have to introduce new helpers to return the DMA address from a
PFN and the inverse.
On x86, the new helpers will be aliases of pfn_to_mfn and mfn_to_pfn.
The helpers will be used in swiotlb and xen_biovec_phys_mergeable.
This is necessary in the latter because we have to ensure that the
biovec code will not try to merge a biovec using a foreign page and
another using Linux memory.
Lastly, the helper mfn_to_local_pfn has been renamed to bfn_to_local_pfn
given that the only usage was in swiotlb.
Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
2015-08-08 00:34:35 +08:00
/* Pseudo-physical <-> Bus conversion */
#define pfn_to_bfn(pfn)		pfn_to_gfn(pfn)
#define bfn_to_pfn(bfn)		gfn_to_pfn(bfn)

/*
 * We detect special mappings in one of two ways:
 *  1. If the MFN is an I/O page then Xen will set the m2p entry
 *     to be outside our maximum possible pseudophys range.
 *  2. If the MFN belongs to a different domain then we will certainly
 *     not have MFN in our p2m table. Conversely, if the page is ours,
 *     then we'll have p2m(m2p(MFN))==MFN.
 * If we detect a special mapping then it doesn't have a 'struct page'.
 * We force !pfn_valid() by returning an out-of-range pointer.
 *
 * NB. These checks require that, for any MFN that is not in our reservation,
 * there is no PFN such that p2m(PFN) == MFN. Otherwise we can get confused if
 * we are foreign-mapping the MFN, and the other domain has m2p(MFN) == PFN.
 * Yikes! Various places must poke in INVALID_P2M_ENTRY for safety.
 *
 * NB2. When deliberately mapping foreign pages into the p2m table, you *must*
 *      use FOREIGN_FRAME(). This will cause pte_pfn() to choke on it, as we
 *      require. In all the cases we care about, the FOREIGN_FRAME bit is
 *      masked (e.g., pfn_to_mfn()) so behaviour there is correct.
 */
static inline unsigned long bfn_to_local_pfn(unsigned long mfn)
{
	unsigned long pfn;

	if (xen_feature(XENFEAT_auto_translated_physmap))
		return mfn;

	pfn = mfn_to_pfn(mfn);
	if (__pfn_to_mfn(pfn) != mfn)
		return -1; /* force !pfn_valid() */
	return pfn;
}

/* VIRT <-> MACHINE conversion */
#define virt_to_machine(v)	(phys_to_machine(XPADDR(__pa(v))))
#define virt_to_pfn(v)		(PFN_DOWN(__pa(v)))
#define virt_to_mfn(v)		(pfn_to_mfn(virt_to_pfn(v)))
#define mfn_to_virt(m)		(__va(mfn_to_pfn(m) << PAGE_SHIFT))

/* VIRT <-> GUEST conversion */
#define virt_to_gfn(v)		(pfn_to_gfn(virt_to_pfn(v)))
#define gfn_to_virt(g)		(__va(gfn_to_pfn(g) << PAGE_SHIFT))

static inline unsigned long pte_mfn(pte_t pte)
{
	return (pte.pte & XEN_PTE_MFN_MASK) >> PAGE_SHIFT;
}

static inline pte_t mfn_pte(unsigned long page_nr, pgprot_t pgprot)
{
	pte_t pte;

	pte.pte = ((phys_addr_t)page_nr << PAGE_SHIFT) |
			massage_pgprot(pgprot);

	return pte;
}

static inline pteval_t pte_val_ma(pte_t pte)
{
	return pte.pte;
}

static inline pte_t __pte_ma(pteval_t x)
{
	return (pte_t) { .pte = x };
}

#define pmd_val_ma(v)	((v).pmd)
#ifdef __PAGETABLE_PUD_FOLDED
#define pud_val_ma(v)	((v).p4d.pgd.pgd)
#else
#define pud_val_ma(v)	((v).pud)
#endif
#define __pmd_ma(x)	((pmd_t) { (x) } )

#ifdef __PAGETABLE_P4D_FOLDED
#define p4d_val_ma(x)	((x).pgd.pgd)
#else
#define p4d_val_ma(x)	((x).p4d)
#endif

xmaddr_t arbitrary_virt_to_machine(void *address);
unsigned long arbitrary_virt_to_mfn(void *vaddr);
void make_lowmem_page_readonly(void *vaddr);
void make_lowmem_page_readwrite(void *vaddr);

#define xen_remap(cookie, size) ioremap((cookie), (size))
#define xen_unmap(cookie) iounmap((cookie))

static inline bool xen_arch_need_swiotlb(struct device *dev,
					 phys_addr_t phys,
					 dma_addr_t dev_addr)
{
	return false;
}

static inline unsigned long xen_get_swiotlb_free_pages(unsigned int order)
{
	return __get_free_pages(__GFP_NOWARN, order);
}

#endif /* _ASM_X86_XEN_PAGE_H */