About MMU mapping on ARM64 #46477
Comments
My concern would be that the MPU should support the MMIO region too, but these MMIO APIs cannot be reused by the MPU if it is not a 1:1 mapping design? You can simply consider the MPU as an MMU that only supports 1:1 mapping. Regarding the 1:1 mapping (or direct mapping), I am not sure if it is suitable or how difficult it is. Is it possible that we re-use the kernel partitions?
Well, I was not aware of that and this is definitely concerning (:hankey:)
Yes.
I think @dcpleung could shed some light on this. But the point is that when a physical address needs to be mapped this way (see lines 736 to 751 in d130160), the virtual address is carved out of the kernel's virtual address pool, so it definitely is not a 1:1 mapping (AFAICT).
Uhm, this seems more complicated than adding support for 1:1 in the current API.
Oh well, maybe not. I just checked, and when the MMU is not present (i.e. you have an MPU), the device MMIO APIs do not map anything and you are basically back to accessing the physical address directly.
The device MMIO API was introduced before I took over userspace, so the design decision is a bit fuzzy. But IIRC, it works similarly to the Linux kernel, where the MMIO range is not 1:1 mapped in general (at least on x86). Just wondering, what would be the use case for having a 1:1 mapping? I can see that it would make debugging easier, but in production, does it matter where the hardware registers are mapped?
Well, the big issue with Zephyr is that 95% of the drivers are not using the device MMIO API, which means that they are basically accessing the physical address all the time (usually the physical address is retrieved from the DT with the usual macros). So either you fix the driver by adding support for the MMIO API (so the driver uses the virt address instead of the phys one), or you add a 1:1 mapping and leave the driver unfixed. See for example what happened here: #46443 (comment). This is a huge problem IMHO.
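To make it concrete, here is a minimal sketch of what "fixing the driver" means, using a made-up peripheral (the compatible, register offset, and init priority are illustrative, not taken from any in-tree driver): the physical range comes from the DT at build time, gets mapped at init, and every register access then goes through the virtual address.

```c
/* Minimal sketch of a driver using the device MMIO API.
 * The "vnd,some-periph" compatible and the CTRL register offset are
 * made up for illustration.
 */
#define DT_DRV_COMPAT vnd_some_periph

#include <zephyr/device.h>
#include <zephyr/sys/device_mmio.h>
#include <zephyr/sys/sys_io.h>

struct some_periph_config {
	DEVICE_MMIO_ROM;	/* physical address + size taken from the DT */
};

struct some_periph_data {
	DEVICE_MMIO_RAM;	/* virtual address, filled in at init time */
};

static int some_periph_init(const struct device *dev)
{
	/* Maps the physical range into the kernel virtual address pool
	 * (when there is no MMU this simply hands back the physical
	 * address).
	 */
	DEVICE_MMIO_MAP(dev, K_MEM_CACHE_NONE);

	/* Register accesses use the virtual address from now on. */
	sys_write32(0x1, DEVICE_MMIO_GET(dev) + 0x0 /* hypothetical CTRL reg */);

	return 0;
}

static const struct some_periph_config some_periph_cfg_0 = {
	DEVICE_MMIO_ROM_INIT(DT_DRV_INST(0)),
};
static struct some_periph_data some_periph_data_0;

DEVICE_DT_INST_DEFINE(0, some_periph_init, NULL,
		      &some_periph_data_0, &some_periph_cfg_0,
		      PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT, NULL);
```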
Drivers not using the MMIO API are indeed a huge issue when dealing with the MMU, as those addresses are not accessible by default. Though I was asking what the use cases are when using the MMIO API. I would assume a proper MMU implementation allows I/O addresses to be mapped into virtual space.
Oh right, I probably explained myself badly. So, if you are using the MMIO API and the driver supports it, there is indeed no problem; we are fine in that case even without a 1:1 mapping. We still have to deal with the case where the driver is not using the MMIO API. In that case, on ARM64 we are working around the problem by directly creating the 1:1 mapping in the MMU driver, entirely bypassing the Zephyr MMU code. So my suggestion was for this second case: remove the direct interface with the MMU driver and instead rely on the Zephyr MMU code to create the 1:1 mapping for all the drivers that still do not support the MMIO API.
Maybe we can convert those drivers to use the device MMIO API when they are being included? TBH, anything we do now to make those non-"device MMIO API" enabled drivers work would be a stop-gap effort. So I think the proper way going forward is to convert them to use the device MMIO API. Though... I don't know how many you will need to convert. Could you hazard a guess on what you need for your development at the moment?
Cool. That means, if we want the MPU to support the MMIO API instead of one big device region, we can extend the non-MMU case?
Yes, this is indeed what I'm trying to do while reviewing new driver submissions: convince people to use the MMIO API.
I don't need any for my development, but: (1) this must be considered for new driver submissions, and (2) this is part of a cleanup effort to remove the mmu_regions tables. About point (2): in general, having the two methods (the MMIO API and the direct mapping using mmu_regions) is something I would like to get rid of.
Possibly? But the MPU case is definitely easier (and more limited, since you have a limited number of slots) and I'm not sure if going through the MMIO API is worth it.
As far as I can tell from this discussion, the MMIO interface is intended for mapping devices' register spaces, but what about DMA areas? Take the Xilinx Ethernet driver, for example: the DTs of the two SoC families that support it define an OCM memory area to be used for DMA. I can obtain that physical address via a 'chosen' entry which is configurable at the board level. At the SoC level, an identity mapping is set up via the mmu_regions table using just that information from the DT. The driver declares the DMA area for each activated instance of the device (size may vary between instances; DMA parameters such as buffer count/size are configurable on a per-device basis) as a struct, of which one instance is placed in the OCM memory area using section and __aligned attributes:
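For illustration only, a hypothetical sketch of what such a declaration looks like (names, sizes, section name and alignment are made up here; this is not the actual Xilinx driver code):

```c
#include <stdint.h>
#include <zephyr/toolchain.h>

#define ETH_RX_BUF_COUNT 16
#define ETH_TX_BUF_COUNT 16
#define ETH_BUF_SIZE     1536

/* One DMA area (descriptors + buffers) per activated device instance. */
struct eth_dma_area {
	uint32_t rx_bd[ETH_RX_BUF_COUNT * 2];	/* RX buffer descriptors */
	uint32_t tx_bd[ETH_TX_BUF_COUNT * 2];	/* TX buffer descriptors */
	uint8_t  rx_buf[ETH_RX_BUF_COUNT][ETH_BUF_SIZE];
	uint8_t  tx_buf[ETH_TX_BUF_COUNT][ETH_BUF_SIZE];
};

/* The ".ocm" section name is illustrative; the linker script places that
 * section at the OCM physical address taken from the DT 'chosen' entry.
 */
static struct eth_dma_area eth_dma_area_0
	__attribute__((section(".ocm"))) __aligned(32);
```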
Any access to those structs happens on the basis of the physical address, and the controller requires writing the physical addresses of certain members of that struct (namely the TX queue base address and the RX queue base address) to its registers, which can just be obtained by taking the members' addresses. Will there be a way to map a DMA area aside from a device's register space, and will there be a way to resolve its physical address? What about situations like this one, where the linker inserts references to the physical address based on the section placement of data?
Also, if getting rid of the mmu_regions table entirely is the eventual goal, how will we handle mappings that are not associated with any driver, but are required for the SoC code (and maybe also some driver code) to work properly? For example, the Zynq maps several such regions.
Will all that be moved to the device tree, including permissions?
That's not really part of the discussion. The MMIO API is used only to map the MMIO register space of the drivers; it's basically the Zephyr equivalent of the Linux kernel's MMIO mapping API.
You can keep doing that if you want.
You can create a 1:1 mapping using
I want to get rid of the mmu_regions tables.
No.
@carlocaione Thanks for the info!
I am all for nudging everyone to use the device MMIO API. :) |
Facts
On ARM64 we can MMU-map a memory region in two different ways:
1. Direct interface with the MMU code for direct mapping
This is done by the ARM64 MMU code to set up the basic Zephyr regions (text, data, etc.) in:
zephyr/arch/arm64/core/mmu.c, lines 649 to 679 (at commit bfec3b2)
but it is also used by the SoC-specific code to map regions for peripherals whose drivers do not support the device MMIO APIs, for example in:
zephyr/soc/arm64/qemu_cortex_a53/mmu_regions.c, lines 11 to 22 (at commit bfec3b2)
This mapping is done directly in the MMU driver code and it is usually a direct (1:1) mapping.
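The referenced snippet is not reproduced above, but to give an idea, such an SoC-level table looks roughly like the following (a sketch modeled on the in-tree ARM64 SoCs; node labels and attribute flags are illustrative, not a verbatim copy of the file):

```c
#include <zephyr/devicetree.h>
#include <zephyr/sys/util.h>
#include <zephyr/arch/arm64/arm_mmu.h>

static const struct arm_mmu_region mmu_regions[] = {
	/* Flat (1:1) device mappings, created directly by the MMU driver
	 * at early boot, bypassing the generic Zephyr MMU code.
	 */
	MMU_REGION_FLAT_ENTRY("UART",
			      DT_REG_ADDR(DT_NODELABEL(uart0)),
			      DT_REG_SIZE(DT_NODELABEL(uart0)),
			      MT_DEVICE_nGnRnE | MT_P_RW_U_NA | MT_NS),

	MMU_REGION_FLAT_ENTRY("GIC",
			      DT_REG_ADDR_BY_IDX(DT_NODELABEL(gic), 0),
			      DT_REG_SIZE_BY_IDX(DT_NODELABEL(gic), 0),
			      MT_DEVICE_nGnRnE | MT_P_RW_U_NA | MT_NS),
};

/* Picked up by arch/arm64/core/mmu.c when the initial page tables are built. */
const struct arm_mmu_config mmu_config = {
	.num_regions = ARRAY_SIZE(mmu_regions),
	.mmu_regions = mmu_regions,
};
```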
2. Using the device MMIO (or MMU) APIs
There has lately been a certain effort to make drivers use the device MMIO APIs. These APIs leverage the Zephyr MMU code to map the physical MMIO region of a peripheral to a virtual memory region automatically at init time (see include/zephyr/sys/device_mmio.h). In general the mapping is not a direct mapping: instead, the virtual region is carved out of a pool of virtual addresses configured using CONFIG_KERNEL_VM_BASE and CONFIG_KERNEL_VM_SIZE.
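For illustration, that pool is just a base/size pair in Kconfig (the values below are made up; real values are SoC/board specific):

```
CONFIG_KERNEL_VM_BASE=0x80000000
CONFIG_KERNEL_VM_SIZE=0x40000000
```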
Problems
There are several.
Solution?
The easiest one is to give up the direct interface and rely exclusively on the Zephyr MMU code. This would force us either to give up the 1:1 mapping or to add support for it.
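To make the trade-off concrete, here is a rough sketch of what "adding support for it" could look like from a caller's point of view; z_phys_map() is the existing Zephyr MMU entry point (used indirectly by the device MMIO API), while the K_MEM_DIRECT_MAP flag is hypothetical here and would be the new piece to implement:

```c
#include <zephyr/sys/util.h>
#include <zephyr/sys/mem_manage.h>

/* Hypothetical: a flag asking the Zephyr MMU code for a virt == phys
 * mapping instead of carving the virtual address out of the
 * CONFIG_KERNEL_VM_BASE/CONFIG_KERNEL_VM_SIZE pool. It does not exist
 * today; implementing it (or an equivalent API) is the "adding support"
 * part.
 */
#define K_MEM_DIRECT_MAP BIT(16)

static inline void map_legacy_peripheral(uintptr_t phys, size_t size)
{
	uint8_t *virt;

	/* With the hypothetical flag, virt would come back equal to phys,
	 * so drivers hard-coding the physical address keep working without
	 * going through the SoC mmu_regions tables.
	 */
	z_phys_map(&virt, phys, size,
		   K_MEM_PERM_RW | K_MEM_CACHE_NONE | K_MEM_DIRECT_MAP);

	(void)virt;
}
```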
Tagging the main actors involved: @dcpleung @npitre @povergoing