
About MMU mapping on ARM64 #46477

@carlocaione

Description


Facts

On ARM64 we can MMU-map a memory region in two different ways:

  • Directly interfacing with the MMU code
  • Going through the Zephyr MMU / device MMIO APIs.

Direct interface with MMU code for direct mapping

This is done by the ARM64 MMU code to set up the basic Zephyr regions (text, data, etc.) in:

```c
static const struct arm_mmu_flat_range mmu_zephyr_ranges[] = {
	/* Mark the zephyr execution regions (data, bss, noinit, etc.)
	 * cacheable, read-write
	 * Note: read-write region is marked execute-never internally
	 */
	{ .name  = "zephyr_data",
	  .start = _image_ram_start,
	  .end   = _image_ram_end,
	  .attrs = MT_NORMAL | MT_P_RW_U_NA | MT_DEFAULT_SECURE_STATE },

	/* Mark text segment cacheable, read only and executable */
	{ .name  = "zephyr_code",
	  .start = __text_region_start,
	  .end   = __text_region_end,
	  .attrs = MT_NORMAL | MT_P_RX_U_RX | MT_DEFAULT_SECURE_STATE },

	/* Mark rodata segment cacheable, read only and execute-never */
	{ .name  = "zephyr_rodata",
	  .start = __rodata_region_start,
	  .end   = __rodata_region_end,
	  .attrs = MT_NORMAL | MT_P_RO_U_RO | MT_DEFAULT_SECURE_STATE },

#ifdef CONFIG_NOCACHE_MEMORY
	/* Mark nocache segment noncachable, read-write and execute-never */
	{ .name  = "nocache_data",
	  .start = _nocache_ram_start,
	  .end   = _nocache_ram_end,
	  .attrs = MT_NORMAL_NC | MT_P_RW_U_RW | MT_DEFAULT_SECURE_STATE },
#endif
};
```

but it is also used by the SoC-specific code to map regions for peripherals that do not go through the device MMIO APIs, for example in:

```c
static const struct arm_mmu_region mmu_regions[] = {
	MMU_REGION_FLAT_ENTRY("GIC",
			      DT_REG_ADDR_BY_IDX(DT_INST(0, arm_gic), 0),
			      DT_REG_SIZE_BY_IDX(DT_INST(0, arm_gic), 0),
			      MT_DEVICE_nGnRnE | MT_P_RW_U_NA | MT_DEFAULT_SECURE_STATE),

	MMU_REGION_FLAT_ENTRY("GIC",
			      DT_REG_ADDR_BY_IDX(DT_INST(0, arm_gic), 1),
			      DT_REG_SIZE_BY_IDX(DT_INST(0, arm_gic), 1),
			      MT_DEVICE_nGnRnE | MT_P_RW_U_NA | MT_DEFAULT_SECURE_STATE),
};
```

This mapping is done directly in the MMU driver code and is usually a direct (1:1) mapping.

Using the device MMIO (or MMU) APIs

There has lately been an effort to make drivers use the device MMIO APIs. These APIs leverage the Zephyr MMU code to automatically map the physical MMIO region of a peripheral to a virtual memory region at init time (see include/zephyr/sys/device_mmio.h).
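For reference, a driver using these APIs looks roughly like the following sketch. The device and struct names (`my_dev_*`) are hypothetical; the `DEVICE_MMIO_*` macros and `K_MEM_CACHE_NONE` flag are from the Zephyr headers mentioned above, and this fragment is not standalone-compilable outside a Zephyr build:

```c
#include <zephyr/device.h>
#include <zephyr/sys/device_mmio.h>

struct my_dev_config {
	DEVICE_MMIO_ROM; /* physical base + size, taken from devicetree */
};

struct my_dev_data {
	DEVICE_MMIO_RAM; /* virtual base, filled in at init time */
};

static int my_dev_init(const struct device *dev)
{
	/* Maps the physical MMIO region into virtual memory; afterwards
	 * DEVICE_MMIO_GET(dev) returns the virtual base to use for
	 * register access. */
	DEVICE_MMIO_MAP(dev, K_MEM_CACHE_NONE);
	return 0;
}
```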

In general the mapping is not a direct mapping; instead, the virtual region is carved out of a pool of virtual addresses configured with CONFIG_KERNEL_VM_BASE and CONFIG_KERNEL_VM_SIZE.

Problems

There are several.

  1. The two methods are orthogonal; their only point of contact is the MMU driver that actually performs the mapping.
  2. The Zephyr MMU code uses a simple mechanism to keep track of the allocated pages, and that mechanism is bypassed by the direct interface with the MMU code, so in theory there could be conflicts.
  3. On ARM64 in particular we (theoretically) have plenty of virtual memory, so we would really like to do direct mapping for driver MMIO regions, but this is currently not possible with the Zephyr MMU code.

Solution?

The easiest one is to give up the direct interface and rely exclusively on the Zephyr MMU code. This would force us either to give up the 1:1 mapping or to add support for it.

Tagging the main actors involved @dcpleung @npitre @povergoing


Labels

Enhancement (Changes/Updates/Additions to existing features), area: ARM64 (ARM (64-bit) Architecture)
