linux/arch
Nicolas Saenz Julienne 8424ecdde7 arm64: mm: Set ZONE_DMA size based on devicetree's dma-ranges
We recently introduced a 1 GB sized ZONE_DMA to cater for platforms
incorporating masters that can address less than 32 bits of DMA, in
particular the Raspberry Pi 4, which has 4 or 8 GB of DRAM, but has
peripherals that can only address up to 1 GB (and its PCIe host
bridge can only access the bottom 3 GB).

The DMA layer also needs to be able to allocate memory that is
guaranteed to meet those DMA constraints, for bounce buffering as well
as allocating the backing for consistent mappings. This is why the 1 GB
ZONE_DMA was introduced recently. Unfortunately, it turns out that having
a 1 GB ZONE_DMA as well as a ZONE_DMA32 causes problems with kdump, and
potentially in other places where allocations cannot cross zone
boundaries. Therefore, we should avoid having two separate DMA zones
when possible.

So, with the help of of_dma_get_max_cpu_address(), get the topmost
physical address accessible to all DMA masters in the system and use
that information to fine-tune ZONE_DMA's size. In the absence of
addressing-limited masters, ZONE_DMA will span the whole 32-bit address
space; otherwise, in the case of the Raspberry Pi 4, it will only span
the 30-bit address space, with ZONE_DMA32 covering the rest of the
32-bit address space.
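
The change amounts to clamping zone_dma_bits during arm64 zone setup.
A minimal sketch of the idea, assuming the existing zone_dma_bits
variable and max_zone_phys() helper in arch/arm64/mm/init.c; the exact
code in the patch may differ:

	static void __init zone_sizes_init(unsigned long min, unsigned long max)
	{
		unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0 };
		unsigned int __maybe_unused dt_zone_dma_bits;

	#ifdef CONFIG_ZONE_DMA
		/* Highest CPU address reachable by all DT-described DMA masters. */
		dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
		/* Never let ZONE_DMA grow beyond the 32-bit address space. */
		zone_dma_bits = min(32U, dt_zone_dma_bits);
		arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
		max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
	#endif
	#ifdef CONFIG_ZONE_DMA32
		max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_phys(32));
	#endif
		max_zone_pfns[ZONE_NORMAL] = max;

		free_area_init(max_zone_pfns);
	}

With no dma-ranges restrictions in the devicetree,
of_dma_get_max_cpu_address() returns the highest possible address, so
the min() keeps ZONE_DMA at the full 32-bit span; on the Raspberry Pi 4
the devicetree limits it to 30 bits.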

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Link: https://lore.kernel.org/r/20201119175400.9995-6-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-20 09:34:13 +00:00
alpha arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
arc ARC: [plat-hsdk] Remap CCMs super early in asm boot trampoline 2020-11-02 11:45:09 -08:00
arm ARM, xtensa: highmem: avoid clobbering non-page aligned memory reservations 2020-11-04 10:42:57 +02:00
arm64 arm64: mm: Set ZONE_DMA size based on devicetree's dma-ranges 2020-11-20 09:34:13 +00:00
c6x arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
csky treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
h8300 arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
hexagon arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
ia64 treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
m68k arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
microblaze treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
mips treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
nds32 arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
nios2 arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
openrisc arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
parisc treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
powerpc powerpc/numa: Fix build when CONFIG_NUMA=n 2020-11-06 14:16:19 +11:00
riscv RISC-V: Fix the VDSO symbol generaton for binutils-2.35+ 2020-11-06 00:03:48 -08:00
s390 s390/pci: fix hot-plug of PCI function missing bus 2020-11-03 15:12:16 +01:00
sh treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
sparc treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
um arch/um: partially revert the conversion to __section() macro 2020-10-26 15:39:37 -07:00
x86 A set of x86 fixes: 2020-11-08 10:09:36 -08:00
xtensa ARM, xtensa: highmem: avoid clobbering non-page aligned memory reservations 2020-11-04 10:42:57 +02:00
.gitignore
Kconfig Merge branch 'work.set_fs' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs 2020-10-22 09:59:21 -07:00