<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="generator" content="rustdoc">
<title>RISC-V Ox64 BL808 SBC: Sv39 Memory Management Unit</title>
<!-- Begin scripts/articles/*-header.html: Article Header for Custom Markdown files processed by rustdoc, like chip8.md -->
<meta property="og:title"
content="RISC-V Ox64 BL808 SBC: Sv39 Memory Management Unit"
data-rh="true">
<meta property="og:description"
content="Let's boot Apache NuttX RTOS on Pine64 Ox64 64-bit RISC-V SBC... And figure out how the Sv39 Memory Management Unit works"
data-rh="true">
<meta name="description"
content="Let's boot Apache NuttX RTOS on Pine64 Ox64 64-bit RISC-V SBC... And figure out how the Sv39 Memory Management Unit works">
<meta property="og:image"
content="https://lupyuen.github.io/images/mmu-title.jpg">
<meta property="og:type"
content="article" data-rh="true">
<link rel="canonical"
href="https://lupyuen.org/articles/mmu.html" />
<!-- End scripts/articles/*-header.html -->
<!-- Begin scripts/rustdoc-header.html: Header for Custom Markdown files processed by rustdoc, like chip8.md -->
<link rel="alternate" type="application/rss+xml" title="RSS Feed for lupyuen" href="/rss.xml" />
<link rel="stylesheet" type="text/css" href="../normalize.css">
<link rel="stylesheet" type="text/css" href="../rustdoc.css" id="mainThemeStyle">
<link rel="stylesheet" type="text/css" href="../dark.css">
<link rel="stylesheet" type="text/css" href="../light.css" id="themeStyle">
<link rel="stylesheet" type="text/css" href="../prism.css">
<script src="../storage.js"></script><noscript>
<link rel="stylesheet" href="../noscript.css"></noscript>
<link rel="shortcut icon" href="../favicon.ico">
<style type="text/css">
#crate-search {
background-image: url("../down-arrow.svg");
}
</style>
<!-- End scripts/rustdoc-header.html -->
</head>
<body class="rustdoc">
<!--[if lte IE 8]>
<div class="warning">
This old browser is unsupported and will most likely display funky
things.
</div>
<![endif]-->
<!-- Begin scripts/rustdoc-before.html: Pre-HTML for Custom Markdown files processed by rustdoc, like chip8.md -->
<!-- Begin Theme Picker -->
<div class="theme-picker" style="left: 0"><button id="theme-picker" aria-label="Pick another theme!"><img src="../brush.svg"
width="18" alt="Pick another theme!"></button>
<div id="theme-choices"></div>
</div>
<!-- Theme Picker -->
<!-- End scripts/rustdoc-before.html -->
<h1 class="title">RISC-V Ox64 BL808 SBC: Sv39 Memory Management Unit</h1>
<nav id="rustdoc"><ul>
<li><a href="#memory-protection" title="Memory Protection">1 Memory Protection</a><ul></ul></li>
<li><a href="#huge-chunks-level-1" title="Huge Chunks: Level 1">2 Huge Chunks: Level 1</a><ul></ul></li>
<li><a href="#medium-chunks-level-2" title="Medium Chunks: Level 2">3 Medium Chunks: Level 2</a><ul></ul></li>
<li><a href="#connect-level-1-to-level-2" title="Connect Level 1 to Level 2">4 Connect Level 1 to Level 2</a><ul></ul></li>
<li><a href="#smaller-chunks-level-3" title="Smaller Chunks: Level 3">5 Smaller Chunks: Level 3</a><ul></ul></li>
<li><a href="#connect-level-2-to-level-3" title="Connect Level 2 to Level 3">6 Connect Level 2 to Level 3</a><ul></ul></li>
<li><a href="#virtual-memory" title="Virtual Memory">7 Virtual Memory</a><ul></ul></li>
<li><a href="#user-level-3" title="User Level 3">8 User Level 3</a><ul></ul></li>
<li><a href="#user-levels-1-and-2" title="User Levels 1 and 2">9 User Levels 1 and 2</a><ul></ul></li>
<li><a href="#swap-the-satp-register" title="Swap the SATP Register">10 Swap the SATP Register</a><ul></ul></li>
<li><a href="#whats-next" title="What's Next">11 What's Next</a><ul></ul></li>
<li><a href="#appendix-flush-the-mmu-cache-for-t-head-c906" title="Appendix: Flush the MMU Cache for T-Head C906">12 Appendix: Flush the MMU Cache for T-Head C906</a><ul></ul></li>
<li><a href="#appendix-address-translation" title="Appendix: Address Translation">13 Appendix: Address Translation</a><ul></ul></li>
<li><a href="#appendix-build-and-run-nuttx" title="Appendix: Build and Run NuttX">14 Appendix: Build and Run NuttX</a><ul></ul></li>
<li><a href="#appendix-fix-the-interrupt-controller" title="Appendix: Fix the Interrupt Controller">15 Appendix: Fix the Interrupt Controller</a><ul></ul></li></ul></nav><p>📝 <em>19 Nov 2023</em></p>
<p><img src="https://lupyuen.github.io/images/mmu-title.jpg" alt="Sv39 Memory Management Unit" /></p>
<p><em>What's this MMU?</em></p>
<p><a href="https://en.wikipedia.org/wiki/Memory_management_unit"><strong>Memory Management Unit (MMU)</strong></a> is the hardware inside our Single-Board Computer (SBC) that does…</p>
<ul>
<li>
<p><strong>Memory Protection</strong>: Prevent Applications (and Kernel) from meddling with things (in System Memory) that they're not supposed to</p>
</li>
<li>
<p><strong>Virtual Memory</strong>: Allow Applications to access chunks of “Imaginary Memory” at Exotic Addresses (<strong><code>0x8000_0000</code></strong>!)</p>
<p>But in reality: They're System RAM recycled from boring old addresses (like <strong><code>0x5060_4000</code></strong>)</p>
<p>(Kinda like “The Matrix”)</p>
</li>
</ul>
<p><em>Sv39 sounds familiar… Any relation to SVR4?</em></p>
<p>Actually <a href="https://five-embeddev.com/riscv-isa-manual/latest/supervisor.html#sec:sv39"><strong>Sv39 Memory Management Unit</strong></a> is inside many RISC-V SBCs…</p>
<ul>
<li>
<p>Pine64 Ox64, Sipeed M1s</p>
<p>(Based on <strong>Bouffalo Lab BL808 SoC</strong>)</p>
</li>
<li>
<p>Pine64 Star64, StarFive VisionFive 2, Milk-V Mars</p>
<p>(Based on <strong>StarFive JH7110 SoC</strong>)</p>
</li>
</ul>
<p>In this article, we find out <strong>how Sv39 MMU works</strong> on a simple barebones SBC: <a href="https://pine64.org/documentation/Ox64/"><strong>Pine64 Ox64 64-bit RISC-V SBC</strong></a>. (Pic below)</p>
<p>(Powered by <a href="https://github.com/bouffalolab/bl_docs/blob/main/BL808_RM/en/BL808_RM_en_1.3.pdf"><strong>Bouffalo Lab BL808 SoC</strong></a>)</p>
<p>We start with <strong>Memory Protection</strong>, then <strong>Virtual Memory</strong>. We'll do this with <a href="https://lupyuen.github.io/articles/ox2"><strong>Apache NuttX RTOS</strong></a>. (Real-Time Operating System)</p>
<p><em>And “Sv39” means…</em></p>
<ul>
<li>
<p><strong>“Sv”</strong> signifies it's a RISC-V Extension for Supervisor-Mode Virtual Memory</p>
</li>
<li>
<p><strong>“39”</strong> because it supports 39 Bits for Virtual Addresses</p>
<p>(<strong><code>0x0</code></strong> to <strong><code>0x7F_FFFF_FFFF</code></strong>!)</p>
</li>
<li>
<p>Coincidentally it's also <strong>3 by 9</strong></p>
<p><strong>3 Levels</strong> of Page Tables, each level adding <strong>9 Address Bits</strong>!</p>
</li>
</ul>
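<p>The arithmetic behind <strong>3 by 9</strong>: here's a tiny sketch (not NuttX code) that slices a Virtual Address into 3 Page Table Indexes of 9 Bits each, plus a 12-Bit Page Offset…</p>
<div class="example-wrap"><pre class="language-c"><code>#include &lt;assert.h&gt;
#include &lt;stdint.h&gt;

// Sv39: 39 Address Bits = 3 Levels x 9 Index Bits + 12 Offset Bits
int main(void) {
  uint64_t vaddr = 0x7FFFFFFFFFUL;       // Highest Sv39 Address
  uint64_t off = vaddr &amp; 0xFFF;          // Page Offset (Bits 0-11)
  uint64_t l3  = (vaddr &gt;&gt; 12) &amp; 0x1FF;  // Level 3 Index (Bits 12-20)
  uint64_t l2  = (vaddr &gt;&gt; 21) &amp; 0x1FF;  // Level 2 Index (Bits 21-29)
  uint64_t l1  = (vaddr &gt;&gt; 30) &amp; 0x1FF;  // Level 1 Index (Bits 30-38)
  assert(off == 0xFFF &amp;&amp; l3 == 0x1FF &amp;&amp; l2 == 0x1FF &amp;&amp; l1 == 0x1FF);
  assert(12 + 9 + 9 + 9 == 39);          // All 39 Bits accounted for
  return 0;
}</code></pre></div>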
<p><em>Why NuttX?</em></p>
<p><a href="https://lupyuen.github.io/articles/ox2"><strong>Apache NuttX RTOS</strong></a> is tiny and easier to teach, as we walk through the MMU Internals.</p>
<p>And we're documenting <strong>everything that happens</strong> when NuttX configures the Sv39 MMU for Ox64 SBC.</p>
<p><em>This stuff is covered in Computer Science Textbooks. No?</em></p>
<p>Let's learn things a little differently! This article will read (and look) like a (yummy) tray of <strong>Chunky Chocolate Brownies</strong>… Because we love Food Analogies.</p>
<p>(Apologies to my fellow CS Teachers)</p>
<p><img src="https://lupyuen.github.io/images/ox64-solder.jpg" alt="Pine64 Ox64 64-bit RISC-V SBC (Sorry for my substandard soldering)" /></p>
<p><a href="https://pine64.org/documentation/Ox64/"><em>Pine64 Ox64 64-bit RISC-V SBC (Sorry for my substandard soldering)</em></a></p>
<h1 id="memory-protection"><a class="doc-anchor" href="#memory-protection">§</a>1 Memory Protection</h1>
<p><em>What memory shall we protect on Ox64?</em></p>
<p>Ox64 SBC needs the <strong>Memory Regions</strong> below to boot our Kernel.</p>
<p>Today we configure the Sv39 MMU so that our <strong>Kernel can access these regions</strong> (and nothing else)…</p>
<div><table><thead><tr><th style="text-align: left">Region</th><th style="text-align: center">Start Address</th><th style="text-align: left">Size</th></tr></thead><tbody>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L51-L54"><strong>Memory-Mapped I/O</strong></a></td><td style="text-align: center"><strong><code>0x0000_0000</code></strong></td><td style="text-align: left"><strong><code>0x4000_0000</code></strong> <em>(1 GB)</em></td></tr>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/scripts/ld.script#L23"><strong>Kernel Code</strong></a> <em>(RAM)</em></td><td style="text-align: center"><strong><code>0x5020_0000</code></strong></td><td style="text-align: left"><strong><code>0x0020_0000</code></strong> <em>(2 MB)</em></td></tr>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/scripts/ld.script#L24"><strong>Kernel Data</strong></a> <em>(RAM)</em></td><td style="text-align: center"><strong><code>0x5040_0000</code></strong></td><td style="text-align: left"><strong><code>0x0020_0000</code></strong> <em>(2 MB)</em></td></tr>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/scripts/ld.script#L25-L26"><strong>Page Pool</strong></a> <em>(RAM)</em></td><td style="text-align: center"><strong><code>0x5060_0000</code></strong></td><td style="text-align: left"><strong><code>0x0140_0000</code></strong> <em>(20 MB)</em></td></tr>
<tr><td style="text-align: left"><a href="https://lupyuen.github.io/articles/ox2#platform-level-interrupt-controller"><strong>Interrupt Controller</strong></a></td><td style="text-align: center"><strong><code>0xE000_0000</code></strong></td><td style="text-align: left"><strong><code>0x1000_0000</code></strong> <em>(256 MB)</em></td></tr>
</tbody></table>
</div>
<p>Our (foodie) hygiene requirements…</p>
<ol>
<li>
<p><strong>Applications</strong> shall NOT be allowed to touch these Memory Regions</p>
</li>
<li>
<p><strong>Kernel Code Region</strong> will allow Read and Execute Access</p>
</li>
<li>
<p><strong>Other Memory Regions</strong> will allow Read and Write Access</p>
</li>
<li>
<p><strong>Memory-Mapped I/O</strong> will be used by Kernel for controlling the System Peripherals: UART, I2C, SPI, …</p>
<p>(Same for <strong>Interrupt Controller</strong>)</p>
</li>
<li>
<p><strong>Page Pool</strong> will be allocated (on-the-fly) by our Kernel to Applications</p>
<p>(As <strong>Virtual Memory</strong>)</p>
</li>
<li>
<p><strong>Our Kernel</strong> runs in RISC-V Supervisor Mode</p>
</li>
<li>
<p><strong>Applications</strong> run in RISC-V User Mode</p>
</li>
<li>
<p>Any meddling with <strong>Forbidden Regions</strong> by Kernel or Applications shall immediately trigger a <a href="https://lupyuen.github.io/articles/mmu#appendix-address-translation"><strong>Page Fault</strong></a> (RISC-V Exception)</p>
</li>
</ol>
<p>We begin with the biggest chunk: I/O Memory…</p>
<h1 id="huge-chunks-level-1"><a class="doc-anchor" href="#huge-chunks-level-1">§</a>2 Huge Chunks: Level 1</h1>
<p><strong>[ 1 GB per Huge Chunk ]</strong></p>
<p><em>How will we protect the I/O Memory?</em></p>
<div><table><thead><tr><th style="text-align: left">Region</th><th style="text-align: center">Start Address</th><th style="text-align: left">Size</th></tr></thead><tbody>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L51-L54"><strong>Memory-Mapped I/O</strong></a></td><td style="text-align: center"><strong><code>0x0000_0000</code></strong></td><td style="text-align: left"><strong><code>0x4000_0000</code></strong> <em>(1 GB)</em></td></tr>
</tbody></table>
</div>
<p>Here's the simplest setup for Sv39 MMU that will protect the <strong>I/O Memory</strong> from <strong><code>0x0</code></strong> to <strong><code>0x3FFF_FFFF</code></strong>…</p>
<p><img src="https://lupyuen.github.io/images/mmu-l1kernel2a.jpg" alt="Level 1 Page Table for Kernel" /></p>
<p>All we need is a <strong>Level 1 Page Table</strong>. (4,096 Bytes)</p>
<p>Our Page Table contains only one <strong>Page Table Entry</strong> (8 Bytes) that says…</p>
<ul>
<li>
<p><strong>V:</strong> It's a <strong>Valid</strong> Page Table Entry</p>
</li>
<li>
<p><strong>G:</strong> It's a <a href="https://lupyuen.github.io/articles/mmu#swap-the-satp-register"><strong>Global Mapping</strong></a> that's valid for all Address Spaces</p>
</li>
<li>
<p><strong>R:</strong> Allow Reads for <strong><code>0x0</code></strong> to <strong><code>0x3FFF_FFFF</code></strong></p>
</li>
<li>
<p><strong>W:</strong> Allow Writes for <strong><code>0x0</code></strong> to <strong><code>0x3FFF_FFFF</code></strong></p>
<p>(We don't allow Execute for I/O Memory)</p>
</li>
<li>
<p><strong>SO:</strong> Enforce <a href="https://lupyuen.github.io/articles/plic3#t-head-errata"><strong>Strong-Ordering</strong></a> for Memory Access</p>
</li>
<li>
<p><strong>S:</strong> Enforce <a href="https://lupyuen.github.io/articles/plic3#t-head-errata"><strong>Shareable</strong></a> Memory Access</p>
</li>
<li>
<p><strong>PPN:</strong> Physical Page Number (44 Bits) is <strong><code>0x0</code></strong></p>
<p>(PPN = Memory Address / 4,096)</p>
</li>
</ul>
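<p>Based on the Standard Sv39 Bit Layout (V is Bit 0, R is Bit 1, W is Bit 2, X is Bit 3, G is Bit 5, and PPN occupies Bits 10 to 53), here's a sketch (not the actual NuttX code) of how the entry above could be composed by hand. The T-Head <strong>SO</strong> and <strong>S</strong> Bits are vendor-specific high bits, so we omit them…</p>
<div class="example-wrap"><pre class="language-c"><code>#include &lt;assert.h&gt;
#include &lt;stdint.h&gt;

// Standard Sv39 Page Table Entry Bits
#define PTE_V (1UL &lt;&lt; 0)  // Valid
#define PTE_R (1UL &lt;&lt; 1)  // Readable
#define PTE_W (1UL &lt;&lt; 2)  // Writable
#define PTE_G (1UL &lt;&lt; 5)  // Global Mapping

int main(void) {
  // Compose the Level 1 PTE above: PPN 0x0 with V + G + R + W
  uint64_t paddr = 0x0;          // I/O Memory begins here
  uint64_t ppn   = paddr &gt;&gt; 12;  // Physical Page Number
  uint64_t pte   = (ppn &lt;&lt; 10)   // PPN lives at Bits 10-53
                 | PTE_V | PTE_G | PTE_R | PTE_W;
  assert(pte == 0x27);           // 0b10_0111: G + W + R + V
  return 0;
}</code></pre></div>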
<p>But we have so many questions…</p>
<ol>
<li>
<p><em>Why 0x3FFF_FFFF?</em></p>
<p>We have a <strong>Level 1</strong> Page Table. Every Entry in the Page Table configures a (huge) <strong>1 GB Chunk of Memory</strong>.</p>
<p>(Or <strong><code>0x4000_0000</code></strong> bytes)</p>
<p>Our Page Table Entry is at <strong>Index 0</strong>. Hence it configures the Memory Range for <strong><code>0x0</code></strong> to <strong><code>0x3FFF_FFFF</code></strong>. (Pic below)</p>
<p><a href="https://lupyuen.github.io/articles/mmu#appendix-address-translation">(More about <strong>Address Translation</strong>)</a></p>
</li>
<li>
<p><em>How to allocate the Page Table?</em></p>
<p>In NuttX, we write this to allocate the <strong>Level 1 Page Table</strong>: <a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L68-L103">bl808_mm_init.c</a></p>
<div class="example-wrap"><pre class="language-c"><code>// Number of Page Table Entries (8 bytes per entry)
#define PGT_L1_SIZE (512)  // Page Table Size is 4 KB

// Allocate Level 1 Page Table from `.pgtables` section
static size_t m_l1_pgtable[PGT_L1_SIZE]
  locate_data(&quot;.pgtables&quot;);</code></pre></div>
<p><strong><code>.pgtables</code></strong> comes from the NuttX Linker Script: <a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/scripts/ld.script#L121-L127">ld.script</a></p>
<div class="example-wrap"><pre class="language-yaml"><code>/* Page Tables (aligned to 4 KB boundary) */
.pgtables (NOLOAD) : ALIGN(0x1000) {
  *(.pgtables)
  . = ALIGN(4);
} &gt; ksram</code></pre></div>
<p>Then GCC Linker helpfully allocates our Level 1 Page Table at RAM Address <strong><code>0x5040_7000</code></strong>.</p>
</li>
<li>
<p><em>What is SATP?</em></p>
<p>SATP is the RISC-V System Register for <a href="https://five-embeddev.com/riscv-isa-manual/latest/supervisor.html#sec:satp"><strong>Supervisor Address Translation and Protection</strong></a>.</p>
<p>To enable the MMU, we set SATP Register to the <strong>Physical Page Number (PPN)</strong> of our Level 1 Page Table…</p>
<div class="example-wrap"><pre class="language-c"><code>PPN = Address / 4096
    = 0x50407000 / 4096
    = 0x50407</code></pre></div>
<p>This is how we set the <strong>SATP Register</strong> in NuttX: <a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L278-L298">bl808_mm_init.c</a></p>
<div class="example-wrap"><pre class="language-c"><code>// Set the SATP Register to the
// Physical Page Number of Level 1 Page Table.
// Set SATP Mode to Sv39.
mmu_enable(
  g_kernel_pgt_pbase,  // 0x5040 7000 (Page Table Address)
  0                    // Set Address Space ID to 0
);</code></pre></div>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.h#L268-L292">(<strong>mmu_enable</strong> is defined here)</a></p>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.h#L152-L176">(Which calls <strong>mmu_satp_reg</strong>)</a></p>
<p><a href="https://lupyuen.github.io/articles/mmu#appendix-flush-the-mmu-cache-for-t-head-c906">(Remember to <strong>sfence</strong> and flush the <strong>MMU Cache</strong>)</a></p>
</li>
<li>
<p><em>How to set the Page Table Entry?</em></p>
<p>To set the Level 1 <strong>Page Table Entry</strong> for <strong><code>0x0</code></strong> to <strong><code>0x3FFF_FFFF</code></strong>: <a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L235-L247">bl808_mm_init.c</a></p>
<div class="example-wrap"><pre class="language-c"><code>// Map the I/O Region in Level 1 Page Table
mmu_ln_map_region(
  1,             // Level 1
  PGT_L1_VBASE,  // 0x5040 7000 (Page Table Address)
  MMU_IO_BASE,   // 0x0 (Physical Address)
  MMU_IO_BASE,   // 0x0 (Virtual Address)
  MMU_IO_SIZE,   // 0x4000 0000 (Size is 1 GB)
  PTE_R | PTE_W | PTE_G |  // Read + Write + Global
  MMU_THEAD_STRONG_ORDER | MMU_THEAD_SHAREABLE  // Strong Order + Shareable
);</code></pre></div>
<p><a href="https://lupyuen.github.io/articles/plic3#t-head-errata">(Why we need <strong>Strong-Ordering</strong>)</a></p>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L42-L49">(<strong>STRONG_ORDER</strong> and <strong>SHAREABLE</strong> are here)</a></p>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.c#L137-L153">(<strong>mmu_ln_map_region</strong> is defined here)</a></p>
<p><a href="https://github.com/lupyuen/nuttx-ox64#map-the-io-region-level-1">(See the <strong>NuttX L1 Log</strong>)</a></p>
</li>
<li>
<p><em>Why is Virtual Address set to 0?</em></p>
<p>Right now we're doing <strong>Memory Protection</strong> for the Kernel, hence we set…</p>
<p>Virtual Address = Physical Address = Actual Address of System Memory</p>
<p>Later when we configure <strong>Virtual Memory</strong> for the Applications, we'll see interesting values.</p>
</li>
</ol>
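<p>Putting the pieces together: the SATP Value packs the <strong>Sv39 Mode</strong> (8) into Bits 60 to 63, the <strong>Address Space ID</strong> into Bits 44 to 59, and the <strong>PPN</strong> into Bits 0 to 43. A standalone sketch (not the actual <strong>mmu_satp_reg</strong> code)…</p>
<div class="example-wrap"><pre class="language-c"><code>#include &lt;assert.h&gt;
#include &lt;stdint.h&gt;

// Compose the SATP Register Value for Sv39
int main(void) {
  uint64_t pgtable = 0x50407000UL;   // Level 1 Page Table Address
  uint64_t ppn     = pgtable &gt;&gt; 12;  // 0x50407 (PPN: Bits 0-43)
  uint64_t asid    = 0;              // Address Space ID (Bits 44-59)
  uint64_t mode    = 8;              // Sv39 Mode (Bits 60-63)
  uint64_t satp    = (mode &lt;&lt; 60) | (asid &lt;&lt; 44) | ppn;
  assert(satp == 0x8000000000050407UL);
  return 0;
}</code></pre></div>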
<p><img src="https://lupyuen.github.io/images/mmu-l1kernel2b.jpg" alt="Level 1 Page Table for Kernel" /></p>
<p>Next we protect the Interrupt Controller…</p>
<h1 id="medium-chunks-level-2"><a class="doc-anchor" href="#medium-chunks-level-2">§</a>3 Medium Chunks: Level 2</h1>
<p><strong>[ 2 MB per Medium Chunk ]</strong></p>
<p><em>Our Interrupt Controller needs 256 MB of protection…</em></p>
<p><em>Surely a Level 1 Chunk (1 GB) is too wasteful?</em></p>
<div><table><thead><tr><th style="text-align: left">Region</th><th style="text-align: center">Start Address</th><th style="text-align: left">Size</th></tr></thead><tbody>
<tr><td style="text-align: left"><a href="https://lupyuen.github.io/articles/ox2#platform-level-interrupt-controller"><strong>Interrupt Controller</strong></a></td><td style="text-align: center"><strong><code>0xE000_0000</code></strong></td><td style="text-align: left"><strong><code>0x1000_0000</code></strong> <em>(256 MB)</em></td></tr>
</tbody></table>
</div>
<p>Yep that's why Sv39 MMU gives us (medium-size) <strong>Level 2 Chunks of 2 MB</strong>!</p>
<p>For the Interrupt Controller, we need <strong>128 Chunks</strong> of 2 MB.</p>
<p>Hence we create a <strong>Level 2 Page Table</strong> (also 4,096 bytes). And we populate <strong>128 Entries</strong> (Index <code>0x100</code> to <code>0x17F</code>)…</p>
<p><img src="https://lupyuen.github.io/images/mmu-l2int.jpg" alt="Level 2 Page Table for Interrupt Controller" /></p>
<p><em>How did we get the Index of the Page Table Entry?</em></p>
<p>Our Interrupt Controller is at <strong><code>0xE000_0000</code></strong>.</p>
<p>To compute the Index of the Level 2 <strong>Page Table Entry (PTE)</strong></p>
<span style="font-size:90%">
<ul>
<li>
<p><strong>Virtual Address: vaddr</strong> = <code>0xE000_0000</code></p>
<p>(For Now: Virtual Address = Actual Address)</p>
</li>
<li>
<p><strong>Virtual Page Number: vpn</strong> <br> = <strong>vaddr</strong> &gt;&gt; 12 <br> = <code>0xE0000</code></p>
<p>(4,096 bytes per Memory Page)</p>
</li>
<li>
<p><strong>Level 2 PTE Index</strong> <br> = (<strong>vpn</strong> &gt;&gt; 9) &amp; <code>0b1_1111_1111</code> <br> = <code>0x100</code></p>
<p>(Extract Bits 9 to 17 to get Level 2 Index)</p>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.c#L61C1-L107">(Implemented as <strong>mmu_ln_setentry</strong>)</a></p>
<p><a href="https://lupyuen.github.io/articles/mmu#appendix-address-translation">(More about <strong>Address Translation</strong>)</a></p>
</li>
</ul>
</span>
<p>Do the same for <strong><code>0xEFFF_FFFF</code></strong>, and we'll get Index <strong><code>0x17F</code></strong>.</p>
<p>Thus our Page Table Index runs from <strong><code>0x100</code></strong> to <strong><code>0x17F</code></strong>.</p>
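<p>The same Index Computation, as a runnable sketch (not the actual <strong>mmu_ln_setentry</strong> code)…</p>
<div class="example-wrap"><pre class="language-c"><code>#include &lt;assert.h&gt;
#include &lt;stdint.h&gt;

// Level 2 PTE Index: Bits 9-17 of the Virtual Page Number
static uint64_t l2_index(uint64_t vaddr) {
  uint64_t vpn = vaddr &gt;&gt; 12;  // 4,096 bytes per Memory Page
  return (vpn &gt;&gt; 9) &amp; 0x1FF;   // Extract Bits 9 to 17
}

int main(void) {
  assert(l2_index(0xE0000000) == 0x100);  // First Entry
  assert(l2_index(0xE0200000) == 0x101);  // Next 2 MB Chunk
  assert(l2_index(0xEFFFFFFF) == 0x17F);  // Last Entry
  return 0;
}</code></pre></div>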
<p><em>How to allocate the Level 2 Page Table?</em></p>
<p>In NuttX we do this: <a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L68-L103">bl808_mm_init.c</a></p>
<div class="example-wrap"><pre class="language-c"><code>// Number of Page Table Entries (8 bytes per entry)
#define PGT_INT_L2_SIZE (512)  // Page Table Size is 4 KB

// Allocate Level 2 Page Table from `.pgtables` section
static size_t m_int_l2_pgtable[PGT_INT_L2_SIZE]
  locate_data(&quot;.pgtables&quot;);</code></pre></div>
<p><a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/scripts/ld.script#L121-L127">(<strong><code>.pgtables</code></strong> comes from the Linker Script)</a></p>
<p>Then GCC Linker respectfully allocates our Level 2 Page Table at RAM Address <strong><code>0x5040</code> <code>3000</code></strong>.</p>
<p><em>How to populate the 128 Page Table Entries?</em></p>
<p>Just do this in NuttX: <a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L247-L253">bl808_mm_init.c</a></p>
<div class="example-wrap"><pre class="language-c"><code>// Map the Interrupt Controller in Level 2 Page Table
mmu_ln_map_region(
  2,                 // Level 2
  PGT_INT_L2_PBASE,  // 0x5040 3000 (Page Table Address)
  0xE0000000,        // Physical Address of Interrupt Controller
  0xE0000000,        // Virtual Address of Interrupt Controller
  0x10000000,        // 256 MB (Size)
  PTE_R | PTE_W | PTE_G |  // Read + Write + Global
  MMU_THEAD_STRONG_ORDER | MMU_THEAD_SHAREABLE  // Strong Order + Shareable
);</code></pre></div>
<p><a href="https://lupyuen.github.io/articles/plic3#t-head-errata">(Why we need <strong>Strong-Ordering</strong>)</a></p>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L42-L49">(<strong>STRONG_ORDER</strong> and <strong>SHAREABLE</strong> are here)</a></p>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.c#L137-L153">(<strong>mmu_ln_map_region</strong> is defined here)</a></p>
<p><a href="https://github.com/lupyuen/nuttx-ox64#map-the-plic-level-2">(See the <strong>NuttX L2 Log</strong>)</a></p>
<p>We're not done yet! Next we connect the Levels…</p>
<h1 id="connect-level-1-to-level-2"><a class="doc-anchor" href="#connect-level-1-to-level-2">§</a>4 Connect Level 1 to Level 2</h1>
<p><em>We're done with the Level 2 Page Table for our Interrupt Controller…</em></p>
<p><em>But Level 2 should talk back to Level 1 right?</em></p>
<div><table><thead><tr><th style="text-align: left">Region</th><th style="text-align: center">Start Address</th><th style="text-align: left">Size</th></tr></thead><tbody>
<tr><td style="text-align: left"><a href="https://lupyuen.github.io/articles/ox2#platform-level-interrupt-controller"><strong>Interrupt Controller</strong></a></td><td style="text-align: center"><strong><code>0xE000_0000</code></strong></td><td style="text-align: left"><strong><code>0x1000_0000</code></strong> <em>(256 MB)</em></td></tr>
</tbody></table>
</div>
<p>Exactly! Watch how we <strong>connect our Level 2 Page Table</strong> back to Level 1…</p>
<p><img src="https://lupyuen.github.io/images/mmu-l1kernel3.jpg" alt="Level 1 Page Table for Kernel" /></p>
<p>3 is the <strong>Level 1 Index</strong> for Interrupt Controller <strong><code>0xE000_0000</code></strong> because…</p>
<span style="font-size:90%">
<ul>
<li>
<p><strong>Virtual Address: vaddr</strong> = <code>0xE000_0000</code></p>
<p>(For Now: Virtual Address = Actual Address)</p>
</li>
<li>
<p><strong>Virtual Page Number: vpn</strong> <br> = <strong>vaddr</strong> &gt;&gt; 12 <br> = <code>0xE0000</code></p>
<p>(4,096 bytes per Memory Page)</p>
</li>
<li>
<p><strong>Level 1 PTE Index</strong> <br> = (<strong>vpn</strong> &gt;&gt; 18) &amp; <code>0b1_1111_1111</code> <br> = 3</p>
<p>(Extract Bits 18 to 26 to get Level 1 Index)</p>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.c#L61C1-L107">(Implemented as <strong>mmu_ln_setentry</strong>)</a></p>
<p><a href="https://lupyuen.github.io/articles/mmu#appendix-address-translation">(More about <strong>Address Translation</strong>)</a></p>
</li>
</ul>
</span>
<p><em>Why “NO RWX”?</em></p>
<p>When we set the <strong>Read, Write and Execute Bits</strong> to 0…</p>
<p>Sv39 MMU interprets the PPN (Physical Page Number) as a <strong>Pointer to Level 2 Page Table</strong>. That's how we connect Level 1 to Level 2!</p>
<p>(Remember: Actual Address = PPN * 4,096)</p>
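<p>Here's a sketch (not the actual NuttX code) of such a Non-Leaf Entry: with R, W and X all zero, the PPN simply records where the Level 2 Page Table lives…</p>
<div class="example-wrap"><pre class="language-c"><code>#include &lt;assert.h&gt;
#include &lt;stdint.h&gt;

#define PTE_V (1UL &lt;&lt; 0)  // Valid
#define PTE_G (1UL &lt;&lt; 5)  // Global Mapping

// A Non-Leaf PTE has R = W = X = 0, so the MMU treats its PPN
// as a Pointer to the Next-Level Page Table
int main(void) {
  uint64_t l2_table = 0x50403000UL;        // L2 Page Table Address
  uint64_t pte = ((l2_table &gt;&gt; 12) &lt;&lt; 10)  // PPN at Bits 10-53
               | PTE_V | PTE_G;            // No R, W, X Bits
  // MMU follows the pointer: PPN * 4,096 gives back the L2 Table
  uint64_t ppn = (pte &gt;&gt; 10) &amp; 0xFFFFFFFFFFFUL;  // 44-Bit PPN
  assert(ppn * 4096 == 0x50403000UL);
  return 0;
}</code></pre></div>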
<p>In NuttX, we write this to <strong>connect Level 1 with Level 2</strong>: <a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L253-L258">bl808_mm_init.c</a></p>
<div class="example-wrap"><pre class="language-c"><code>// Connect the L1 and L2 Page Tables for Interrupt Controller
mmu_ln_setentry(
  1,                 // Level 1
  PGT_L1_VBASE,      // 0x5040 7000 (L1 Page Table Address)
  PGT_INT_L2_PBASE,  // 0x5040 3000 (L2 Page Table Address)
  0xE0000000,        // Virtual Address of Interrupt Controller
  PTE_G              // Global Only
);</code></pre></div>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.c#L61C1-L107">(<strong>mmu_ln_setentry</strong> is defined here)</a></p>
<p><a href="https://github.com/lupyuen/nuttx-ox64#connect-the-level-1-and-level-2-page-tables-for-plic">(See the <strong>NuttX L1 and L2 Log</strong>)</a></p>
<p>We're done protecting the Interrupt Controller with Level 1 AND Level 2 Page Tables!</p>
<p><em>Wait wasn't there something already in the Level 1 Page Table?</em></p>
<div><table><thead><tr><th style="text-align: left">Region</th><th style="text-align: center">Start Address</th><th style="text-align: left">Size</th></tr></thead><tbody>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L51-L54"><strong>Memory-Mapped I/O</strong></a></td><td style="text-align: center"><strong><code>0x0000_0000</code></strong></td><td style="text-align: left"><strong><code>0x4000_0000</code></strong> <em>(1 GB)</em></td></tr>
<tr><td style="text-align: left"><a href="https://lupyuen.github.io/articles/ox2#platform-level-interrupt-controller"><strong>Interrupt Controller</strong></a></td><td style="text-align: center"><strong><code>0xE000_0000</code></strong></td><td style="text-align: left"><strong><code>0x1000_0000</code></strong> <em>(256 MB)</em></td></tr>
</tbody></table>
</div>
<p>Oh yeah: <strong>I/O Memory</strong>. When we bake everything together, things will look more complicated (and there's more!)…</p>
<p><img src="https://lupyuen.github.io/images/mmu-l1kernel.jpg" alt="Level 1 Page Table for Kernel" /></p>
<h1 id="smaller-chunks-level-3"><a class="doc-anchor" href="#smaller-chunks-level-3">§</a>5 Smaller Chunks: Level 3</h1>
<p><strong>[ 4 KB per Smaller Chunk ]</strong></p>
<p><em>Level 2 Chunks (2 MB) are still mighty big… Is there anything smaller?</em></p>
<p>Yep we have smaller (bite-size) <strong>Level 3 Chunks</strong> of <strong>4 KB</strong> each.</p>
<p>We create a <strong>Level 3 Page Table</strong> for the Kernel Code. And fill it (to the brim) with <strong>4 KB Chunks</strong></p>
<div><table><thead><tr><th style="text-align: left">Region</th><th style="text-align: center">Start Address</th><th style="text-align: left">Size</th></tr></thead><tbody>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/scripts/ld.script#L23"><strong>Kernel Code</strong></a> <em>(RAM)</em></td><td style="text-align: center"><strong><code>0x5020_0000</code></strong></td><td style="text-align: left"><strong><code>0x20_0000</code></strong> <em>(2 MB)</em></td></tr>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/scripts/ld.script#L24"><strong>Kernel Data</strong></a> <em>(RAM)</em></td><td style="text-align: center"><strong><code>0x5040_0000</code></strong></td><td style="text-align: left"><strong><code>0x20_0000</code></strong> <em>(2 MB)</em></td></tr>
</tbody></table>
</div>
<p><img src="https://lupyuen.github.io/images/mmu-l3kernel.jpg" alt="Level 3 Page Table for Kernel" /></p>
<p>(<strong>Kernel Data</strong> has a similar Level 3 Page Table)</p>
<p><em>How do we cook up a Level 3 Index?</em></p>
<p>Suppose we're configuring address <strong><code>0x5020_1000</code></strong>. To compute the Index of the Level 3 <strong>Page Table Entry (PTE)</strong>…</p>
<span style="font-size:90%">
<ul>
<li>
<p><strong>Virtual Address: vaddr</strong> = <code>0x5020_1000</code></p>
<p>(For Now: Virtual Address = Actual Address)</p>
</li>
<li>
<p><strong>Virtual Page Number: vpn</strong> <br> = <strong>vaddr</strong> &gt;&gt; 12 <br> = <code>0x50201</code></p>
<p>(4,096 bytes per Memory Page)</p>
</li>
<li>
<p><strong>Level 3 PTE Index</strong> <br> = <strong>vpn</strong> &amp; <code>0b1_1111_1111</code> <br> = 1</p>
<p>(Extract Bits 0 to 8 to get Level 3 Index)</p>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.c#L61C1-L107">(Implemented as <strong>mmu_ln_setentry</strong>)</a></p>
<p><a href="https://lupyuen.github.io/articles/mmu#appendix-address-translation">(More about <strong>Address Translation</strong>)</a></p>
</li>
</ul>
</span>
<p>Thus address <strong><code>0x5020_1000</code></strong> is configured by <strong>Index 1</strong> of the Level 3 Page Table.</p>
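<p>As a runnable sketch (not the actual NuttX code), the Level 3 Index is just the lowest 9 Bits of the Virtual Page Number…</p>
<div class="example-wrap"><pre class="language-c"><code>#include &lt;assert.h&gt;
#include &lt;stdint.h&gt;

// Level 3 PTE Index: Bits 0-8 of the Virtual Page Number
static uint64_t l3_index(uint64_t vaddr) {
  return (vaddr &gt;&gt; 12) &amp; 0x1FF;  // Extract Bits 0 to 8
}

int main(void) {
  assert(l3_index(0x50200000) == 0);      // First Page of Kernel Code
  assert(l3_index(0x50201000) == 1);      // Index 1, as computed above
  assert(l3_index(0x503FF000) == 0x1FF);  // Last Page fills the Table
  return 0;
}</code></pre></div>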
<p>To populate the <strong>Level 3 Page Table</strong>, our code looks a little different: <a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L258-L266">bl808_mm_init.c</a></p>
<div class="example-wrap"><pre class="language-c"><code>// Number of Page Table Entries (8 bytes per entry)
// for Kernel Code and Data
#define PGT_L3_SIZE (1024)  // 2 Page Tables (4 KB each)

// Allocate Level 3 Page Table from `.pgtables` section
// for Kernel Code and Data
static size_t m_l3_pgtable[PGT_L3_SIZE]
  locate_data(&quot;.pgtables&quot;);

// Map the Kernel Code in L2 and L3 Page Tables
map_region(
  KFLASH_START,  // 0x5020 0000 (Physical Address)
  KFLASH_START,  // 0x5020 0000 (Virtual Address)
  KFLASH_SIZE,   // 0x20 0000 (Size is 2 MB)
  PTE_R | PTE_X | PTE_G  // Read + Execute + Global
);

// Map the Kernel Data in L2 and L3 Page Tables
map_region(
  KSRAM_START,  // 0x5040 0000 (Physical Address)
  KSRAM_START,  // 0x5040 0000 (Virtual Address)
  KSRAM_SIZE,   // 0x20 0000 (Size is 2 MB)
  PTE_R | PTE_W | PTE_G  // Read + Write + Global
);</code></pre></div>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L158-L216">(<strong>map_region</strong> is defined here)</a></p>
<p><a href="https://github.com/lupyuen/nuttx-ox64#map-the-kernel-text-levels-2--3">(See the <strong>NuttX L2 and L3 Log</strong>)</a></p>
<p>Thats because <a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L158-L216"><strong>map_region</strong></a> calls a <a href="https://en.wikipedia.org/wiki/Slab_allocation"><strong>Slab Allocator</strong></a> to manage the Level 3 Page Table Entries.</p>
<p>But internally it calls the same old functions: <a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.c#L137-L153"><strong>mmu_ln_map_region</strong></a> and <a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.c#L61-L107"><strong>mmu_ln_setentry</strong></a></p>
<h1 id="connect-level-2-to-level-3"><a class="doc-anchor" href="#connect-level-2-to-level-3">§</a>6 Connect Level 2 to Level 3</h1>
<p><em>Level 3 will talk back to Level 2 right?</em></p>
<p>Correct! Finally we create a <strong>Level 2 Page Table</strong> for Kernel Code and Data…</p>
<div><table><thead><tr><th style="text-align: left">Region</th><th style="text-align: center">Start Address</th><th style="text-align: left">Size</th></tr></thead><tbody>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/scripts/ld.script#L23"><strong>Kernel Code</strong></a> <em>(RAM)</em></td><td style="text-align: center"><strong><code>0x5020_0000</code></strong></td><td style="text-align: left"><strong><code>0x0020_0000</code></strong> <em>(2 MB)</em></td></tr>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/scripts/ld.script#L24"><strong>Kernel Data</strong></a> <em>(RAM)</em></td><td style="text-align: center"><strong><code>0x5040_0000</code></strong></td><td style="text-align: left"><strong><code>0x0020_0000</code></strong> <em>(2 MB)</em></td></tr>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/scripts/ld.script#L25-L26"><strong>Page Pool</strong></a> <em>(RAM)</em></td><td style="text-align: center"><strong><code>0x5060_0000</code></strong></td><td style="text-align: left"><strong><code>0x0140_0000</code></strong> <em>(20 MB)</em></td></tr>
</tbody></table>
</div>
<p>(Not to be confused with the earlier Level 2 Page Table for Interrupt Controller)</p>
<p>And we <strong>connect the Level 2 and 3</strong> Page Tables…</p>
<p><img src="https://lupyuen.github.io/images/mmu-l2kernel.jpg" alt="Level 2 Page Table for Kernel" /></p>
<p><em>Page Pool goes into the same Level 2 Page Table?</em></p>
<p>Yep, thats because the <strong>Page Pool</strong> contains Medium-Size Chunks (2 MB) of goodies anyway.</p>
<p>(Page Pool will be <strong>allocated to Applications</strong> in a while)</p>
<p>This is how we populate the Level 2 Entries for the <strong>Page Pool</strong>: <a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L271-L278">bl808_mm_init.c</a></p>
<div class="example-wrap"><pre class="language-c"><code>// Map the Page Pool in Level 2 Page Table
mmu_ln_map_region(
2, // Level 2
PGT_L2_VBASE, // 0x5040 6000 (Level 2 Page Table)
PGPOOL_START, // 0x5060 0000 (Physical Address of Page Pool)
PGPOOL_START, // 0x5060 0000 (Virtual Address of Page Pool)
PGPOOL_SIZE, // 0x0140_0000 (Size is 20 MB)
PTE_R | PTE_W | PTE_G // Read + Write + Global
);</code></pre></div>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.c#L137-L153">(<strong>mmu_ln_map_region</strong> is defined here)</a></p>
<p><a href="https://github.com/lupyuen/nuttx-ox64#map-the-page-pool-level-2">(See the <strong>NuttX Page Pool Log</strong>)</a></p>
<p><em>Did we forget something?</em></p>
<p>Oh yeah, remember to <strong>connect the Level 1 and 2</strong> Page Tables: <a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L266-L271">bl808_mm_init.c</a></p>
<div class="example-wrap"><pre class="language-c"><code>// Connect the L1 and L2 Page Tables
// for Kernel Code, Data and Page Pool
mmu_ln_setentry(
1, // Level 1
PGT_L1_VBASE, // 0x5040 7000 (Level 1 Page Table)
PGT_L2_PBASE, // 0x5040 6000 (Level 2 Page Table)
KFLASH_START, // 0x5020 0000 (Kernel Code Address)
PTE_G // Global Only
);</code></pre></div>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.c#L61-L107">(<strong>mmu_ln_setentry</strong> is defined here)</a></p>
<p><a href="https://github.com/lupyuen/nuttx-ox64#connect-the-level-1-and-level-2-page-tables">(See the <strong>NuttX L1 and L2 Log</strong>)</a></p>
<p>Our <strong>Level 1 Page Table</strong> becomes chock full of toppings…</p>
<div><table><thead><tr><th style="text-align: center">Index</th><th style="text-align: center">Permissions</th><th style="text-align: left">Physical Page Number</th></tr></thead><tbody>
<tr><td style="text-align: center">0</td><td style="text-align: center">VGRWSO+S</td><td style="text-align: left"><strong><code>0x00000</code></strong> <em>(I/O Memory)</em></td></tr>
<tr><td style="text-align: center">1</td><td style="text-align: center">VG <em>(Pointer)</em></td><td style="text-align: left"><strong><code>0x50406</code></strong> <em>(L2 Kernel Code &amp; Data)</em></td></tr>
<tr><td style="text-align: center">3</td><td style="text-align: center">VG <em>(Pointer)</em></td><td style="text-align: left"><strong><code>0x50403</code></strong> <em>(L2 Interrupt Controller)</em></td></tr>
</tbody></table>
</div>
<p>But it tastes very similar to our <strong>Kernel Memory Map</strong>!</p>
<div><table><thead><tr><th style="text-align: left">Region</th><th style="text-align: center">Start Address</th><th style="text-align: left">Size</th></tr></thead><tbody>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L51-L54"><strong>I/O Memory</strong></a></td><td style="text-align: center"><strong><code>0x0000_0000</code></strong></td><td style="text-align: left"><strong><code>0x4000_0000</code></strong> <em>(1 GB)</em></td></tr>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/scripts/ld.script#L23-L26"><strong>RAM</strong></a></td><td style="text-align: center"><strong><code>0x5020_0000</code></strong></td><td style="text-align: left"><strong><code>0x0180_0000</code></strong> <em>(24 MB)</em></td></tr>
<tr><td style="text-align: left"><a href="https://lupyuen.github.io/articles/ox2#platform-level-interrupt-controller"><strong>Interrupt Controller</strong></a></td><td style="text-align: center"><strong><code>0xE000_0000</code></strong></td><td style="text-align: left"><strong><code>0x1000_0000</code></strong> <em>(256 MB)</em></td></tr>
</tbody></table>
</div>
<p>Now we switch course to Applications and Virtual Memory…</p>
<p><a href="https://lupyuen.github.io/articles/ox2#appendix-nuttx-boot-flow">(Who calls <strong>mmu_ln_map_region</strong> and <strong>mmu_ln_setentry</strong> at startup)</a></p>
<p><img src="https://lupyuen.github.io/images/mmu-boot1.png" alt="Ox64 boots to NuttX Shell" /></p>
<p><a href="https://gist.github.com/lupyuen/aa9b3e575ba4e0c233ab02c328221525"><em>Ox64 boots to NuttX Shell</em></a></p>
<h1 id="virtual-memory"><a class="doc-anchor" href="#virtual-memory">§</a>7 Virtual Memory</h1>
<p>Earlier we talked about Sv39 MMU and <strong>Virtual Memory</strong></p>
<blockquote>
<p>Allow Applications to access chunks of “Imaginary Memory” at Exotic Addresses (<strong><code>0x8000_0000</code></strong>!)</p>
</blockquote>
<blockquote>
<p>But in reality: Theyre System RAM recycled from boring old addresses (like <strong><code>0x5060_4000</code></strong>)</p>
</blockquote>
<p>Lets make some magic!</p>
<p><em>What are the “Exotic Addresses” for our Application?</em></p>
<p>NuttX will map the <strong>Application Code (Text), Data and Heap</strong> at these <strong>Virtual Addresses</strong>: <a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/configs/nsh/defconfig#L17-L30">nsh/defconfig</a></p>
<div class="example-wrap"><pre class="language-text"><code>CONFIG_ARCH_TEXT_VBASE=0x80000000
CONFIG_ARCH_TEXT_NPAGES=128
CONFIG_ARCH_DATA_VBASE=0x80100000
CONFIG_ARCH_DATA_NPAGES=128
CONFIG_ARCH_HEAP_VBASE=0x80200000
CONFIG_ARCH_HEAP_NPAGES=128</code></pre></div>
<p>Which says…</p>
<div><table><thead><tr><th style="text-align: left">Region</th><th style="text-align: center">Start Address</th><th style="text-align: left">Size</th></tr></thead><tbody>
<tr><td style="text-align: left">User Code</td><td style="text-align: center"><strong><code>0x8000_0000</code></strong></td><td style="text-align: left"><em>(Max 128 Pages)</em></td></tr>
<tr><td style="text-align: left">User Data</td><td style="text-align: center"><strong><code>0x8010_0000</code></strong></td><td style="text-align: left"><em>(Max 128 Pages)</em></td></tr>
<tr><td style="text-align: left">User Heap</td><td style="text-align: center"><strong><code>0x8020_0000</code></strong></td><td style="text-align: left"><em>(Max 128 Pages)</em> <br> <em>(Each Page is 4 KB)</em></td></tr>
</tbody></table>
</div>
<p>“User” refers to <strong>RISC-V User Mode</strong>, which is less privileged than our Kernel running in Supervisor Mode.</p>
<p><em>And what are the boring old Physical Addresses?</em></p>
<p>NuttX will map the Virtual Addresses above to the <strong>Physical Addresses</strong> from…</p>
<p>The <a href="https://lupyuen.github.io/articles/mmu#connect-level-2-to-level-3"><strong>Kernel Page Pool</strong></a> that we saw earlier! The Pooled Pages will be dished out dynamically to Applications as they run.</p>
<p><em>Will Applications see the I/O Memory, Kernel RAM, Interrupt Controller?</em></p>
<p>Nope! Thats the beauty of an MMU: We control <em>everything</em> that the Application can meddle with!</p>
<p>Our Application will see only the assigned <strong>Virtual Addresses</strong>, not the actual Physical Addresses used by the Kernel.</p>
<p>We watch NuttX do its magic…</p>
<h1 id="user-level-3"><a class="doc-anchor" href="#user-level-3">§</a>8 User Level 3</h1>
<p>Our Application (NuttX Shell) requires <strong>22 Pages of Virtual Memory</strong> for its User Code.</p>
<p>NuttX populates the <strong>Level 3 Page Table</strong> for the User Code like so…</p>
<p><img src="https://lupyuen.github.io/images/mmu-l3user.jpg" alt="Level 3 Page Table for User" /></p>
<p><em>Something smells special… Whats this “U” Permission?</em></p>
<p>The <strong>“U” User Permission</strong> says that this Page Table Entry is accessible by our Application. (Which runs in <strong>RISC-V User Mode</strong>)</p>
<p>Note that the <strong>Virtual Address</strong> <code>0x8000_0000</code> now maps to a different <strong>Physical Address</strong> <code>0x5060_4000</code>.</p>
<p>(Which comes from the <a href="https://lupyuen.github.io/articles/mmu#connect-level-2-to-level-3"><strong>Kernel Page Pool</strong></a>)</p>
<p>Thats the tasty goodness of Virtual Memory!</p>
<p><em>But where is Virtual Address 0x8000_0000 defined?</em></p>
<p>Virtual Addresses are propagated from the Level 1 Page Table, as well soon see.</p>
<p><em>Anything else in the Level 3 Page Table?</em></p>
<p>Page Table Entries for the <strong>User Data</strong> will appear in the same Level 3 Page Table.</p>
<p>We move up to Level 2…</p>
<p><a href="https://github.com/lupyuen/nuttx-ox64#map-the-user-code-data-and-heap-levels-1-2-3">(See the <strong>NuttX Virtual Memory Log</strong>)</a></p>
<p><img src="https://lupyuen.github.io/images/mmu-l2user.jpg" alt="Level 2 Page Table for User" /></p>
<h1 id="user-levels-1-and-2"><a class="doc-anchor" href="#user-levels-1-and-2">§</a>9 User Levels 1 and 2</h1>
<p>NuttX populates the <strong>User Level 2</strong> Page Table (pic above) with the <strong>Physical Page Numbers</strong> (PPN) of the…</p>
<ul>
<li>
<p>Level 3 Page Table for <strong>User Code and Data</strong></p>
<p>(From previous section)</p>
</li>
<li>
<p>Level 3 Page Table for <strong>User Heap</strong></p>
<p>(To make <strong>malloc</strong> work)</p>
</li>
</ul>
<p>Ultimately we track back to <strong>User Level 1</strong> Page Table…</p>
<p><img src="https://lupyuen.github.io/images/mmu-l1user.jpg" alt="Level 1 Page Table for User" /></p>
<p>Which points PPN to the <strong>User Level 2</strong> Page Table.</p>
<p>And thats how User Levels 1, 2 and 3 are connected!</p>
<p>Each Application will have its <strong>own set of User Page Tables</strong> for…</p>
<div><table><thead><tr><th style="text-align: left">Region</th><th style="text-align: center">Start Address</th><th style="text-align: left">Size</th></tr></thead><tbody>
<tr><td style="text-align: left">User Code</td><td style="text-align: center"><strong><code>0x8000_0000</code></strong></td><td style="text-align: left"><em>(Max 128 Pages)</em></td></tr>
<tr><td style="text-align: left">User Data</td><td style="text-align: center"><strong><code>0x8010_0000</code></strong></td><td style="text-align: left"><em>(Max 128 Pages)</em></td></tr>
<tr><td style="text-align: left">User Heap</td><td style="text-align: center"><strong><code>0x8020_0000</code></strong></td><td style="text-align: left"><em>(Max 128 Pages)</em> <br> <em>(Each Page is 4 KB)</em></td></tr>
</tbody></table>
</div>
<p><em>Once again: Where is Virtual Address 0x8000_0000 defined?</em></p>
<p>From the pic above, we see that the Page Table Entry has <strong>Index 2</strong>.</p>
<p>Recall that each Entry in the Level 1 Page Table configures <strong>1 GB of Virtual Memory</strong>. (<strong><code>0x4000_0000</code></strong> Bytes)</p>
<p>Since the Entry Index is 2, then the Virtual Address must be <strong><code>0x8000_0000</code></strong>. Mystery solved!</p>
<p><a href="https://lupyuen.github.io/articles/mmu#appendix-address-translation">(More about <strong>Address Translation</strong>)</a></p>
<p><em>Theres something odd about the SATP Register…</em></p>
<p>Yeah the SATP Register has changed! We investigate…</p>
<p><a href="https://github.com/lupyuen/nuttx-ox64#start-nuttx-apps-on-ox64-bl808">(Who populates the <strong>User Page Tables</strong>)</a></p>
<p><a href="https://github.com/lupyuen/nuttx-ox64#map-the-user-code-data-and-heap-levels-1-2-3">(See the <strong>NuttX Virtual Memory Log</strong>)</a></p>
<p><img src="https://lupyuen.github.io/images/mmu-satp.jpg" alt="Kernel SATP vs User SATP" /></p>
<h1 id="swap-the-satp-register"><a class="doc-anchor" href="#swap-the-satp-register">§</a>10 Swap the SATP Register</h1>
<p><em>SATP Register looks different from the earlier one in the Kernel…</em></p>
<p><em>Are there Multiple SATP Registers?</em></p>
<p>We saw two different <strong>SATP Registers</strong>, each pointing to a different Level 1 Page Table…</p>
<p>But actually theres only <strong>one SATP Register</strong>!</p>
<p><a href="https://five-embeddev.com/riscv-isa-manual/latest/supervisor.html#sec:satp">(SATP is for <strong>Supervisor Address Translation and Protection</strong>)</a></p>
<p>Heres the secret: NuttX uses this nifty recipe to cook up the illusion of Multiple SATP Registers…</p>
<p>Earlier we wrote this to set the <strong>SATP Register</strong>: <a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L278-L298">bl808_mm_init.c</a></p>
<div class="example-wrap"><pre class="language-c"><code>// Set the SATP Register to the
// Physical Page Number of Level 1 Page Table.
// Set SATP Mode to Sv39.
mmu_enable(
g_kernel_pgt_pbase, // Page Table Address
0 // Set Address Space ID to 0
);</code></pre></div>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.h#L268-L292">(<strong>mmu_enable</strong> is defined here)</a></p>
<p>When we <strong>switch the context</strong> from Kernel to Application: We <strong>swap the value</strong> of the SATP Register… Which points to a <strong>Different Level 1</strong> Page Table!</p>
<p>The <strong>Address Space ID</strong> (stored in SATP Register) can also change. Its a handy shortcut that tells us which Level 1 Page Table (Address Space) is in effect.</p>
<p>(NuttX doesn’t seem to use Address Space IDs, it always sets them to 0)</p>

<p>We see NuttX <strong>swapping the SATP Register</strong> as it starts an Application (NuttX Shell)…</p>
<div class="example-wrap"><pre class="language-text"><code>// At Startup: NuttX points the SATP Register to
// Kernel Level 1 Page Table (0x5040 7000)
mmu_satp_reg:
pgbase=0x50407000, asid=0x0, reg=0x8000000000050407
mmu_write_satp:
reg=0x8000000000050407
nx_start: Entry
...
// Later: NuttX points the SATP Register to
// User Level 1 Page Table (0x5060 0000)
Starting init task: /system/bin/init
mmu_satp_reg:
pgbase=0x50600000, asid=0x0, reg=0x8000000000050600
up_addrenv_select:
addrenv=0x5040d560, satp=0x8000000000050600
mmu_write_satp:
reg=0x8000000000050600</code></pre></div>
<p><a href="https://five-embeddev.com/riscv-isa-manual/latest/supervisor.html#sec:satp">(<strong>SATP Register</strong> begins with <strong><code>0x8</code></strong> to enable Sv39)</a></p>
<p><a href="https://gist.github.com/lupyuen/aa9b3e575ba4e0c233ab02c328221525#file-ox64-nuttx20-log-L271-L304">(See the <strong>NuttX SATP Log</strong>)</a></p>
<p>Always remember to <strong>Flush the MMU Cache</strong> when swapping the SATP Register (and switching Page Tables)…</p>
<ul>
<li><a href="https://lupyuen.github.io/articles/mmu#appendix-flush-the-mmu-cache-for-t-head-c906"><strong>“Flush the MMU Cache for T-Head C906”</strong></a></li>
</ul>
<p><em>So indeed we can have “Multiple” SATP Registers sweet!</em></p>
<p>Ah theres a catch… Remember the <strong>“G” Global Mapping Permission</strong> from earlier?</p>
<p><img src="https://lupyuen.github.io/images/mmu-satp2.jpg" alt="“G” Global Mapping Permission" /></p>
<p>This means that the Page Table Entry will be effective across <a href="https://five-embeddev.com/riscv-isa-manual/latest/supervisor.html#sec:sv32"><strong>ALL Address Spaces</strong></a>! Even in our Applications!</p>
<p><em>Huh? Our Applications can meddle with the I/O Memory?</em></p>
<p>Nope they cant, because the <strong>“U” User Permission</strong> is denied. Therefore were all safe and well protected!</p>
<p><em>Can NuttX Kernel access the Virtual Memory of NuttX Apps?</em></p>
<p>Yep! Heres how…</p>
<ul>
<li><a href="https://lupyuen.github.io/articles/app#kernel-accesses-app-memory"><strong>“Kernel Accesses App Memory”</strong></a></li>
</ul>
<p><img src="https://lupyuen.github.io/images/mmu-boot2.jpg" alt="NuttX swaps the SATP Register" /></p>
<p><a href="https://gist.github.com/lupyuen/aa9b3e575ba4e0c233ab02c328221525#file-ox64-nuttx20-log-L271-L304"><em>NuttX swaps the SATP Register</em></a></p>
<h1 id="whats-next"><a class="doc-anchor" href="#whats-next">§</a>11 Whats Next</h1>
<p>I hope this article has been a <strong>tasty treat</strong> for understanding the inner workings of…</p>
<ul>
<li>
<p><strong>Memory Protection</strong></p>
<p>(For our Kernel)</p>
</li>
<li>
<p><strong>Virtual Memory</strong></p>
<p>(For the Applications)</p>
</li>
<li>
<p>And the <strong>Sv39 Memory Management Unit</strong></p>
</li>
</ul>
<p>…As we documented everything that happens when <strong>Apache NuttX RTOS</strong> boots on Ox64 SBC!</p>
<p>(Actually we wrote this article to fix a <a href="https://lupyuen.github.io/articles/mmu#appendix-fix-the-interrupt-controller"><strong>Troubling Roadblock</strong></a> for Ox64 NuttX)</p>
<p>Well do much more for <strong>NuttX on Ox64 BL808</strong>, stay tuned for updates!</p>
<p>Many Thanks to my <a href="https://lupyuen.github.io/articles/sponsor"><strong>GitHub Sponsors</strong></a> (and the awesome NuttX Community) for supporting my work! This article wouldnt have been possible without your support.</p>
<ul>
<li>
<p><a href="https://lupyuen.github.io/articles/sponsor"><strong>Sponsor me a coffee</strong></a></p>
</li>
<li>
<p><a href="https://news.ycombinator.com/item?id=38326040"><strong>Discuss this article on Hacker News</strong></a></p>
</li>
<li>
<p><a href="https://forum.pine64.org/showthread.php?tid=18884"><strong>Discuss this article on Pine64 Forum</strong></a></p>
</li>
<li>
<p><a href="https://bbs.bouffalolab.com/d/259-article-risc-v-ox64-bl808-sbc-sv39-memory-management-unit"><strong>Discuss this article on Bouffalo Lab Forum</strong></a></p>
</li>
<li>
<p><a href="https://github.com/lupyuen/nuttx-ox64"><strong>My Current Project: “Apache NuttX RTOS for Ox64 BL808”</strong></a></p>
</li>
<li>
<p><a href="https://github.com/lupyuen/nuttx-star64"><strong>My Other Project: “NuttX for Star64 JH7110”</strong></a></p>
</li>
<li>
<p><a href="https://github.com/lupyuen/pinephone-nuttx"><strong>Older Project: “NuttX for PinePhone”</strong></a></p>
</li>
<li>
<p><a href="https://lupyuen.github.io"><strong>Check out my articles</strong></a></p>
</li>
<li>
<p><a href="https://lupyuen.github.io/rss.xml"><strong>RSS Feed</strong></a></p>
</li>
</ul>
<p><em>Got a question, comment or suggestion? Create an Issue or submit a Pull Request here…</em></p>
<p><a href="https://github.com/lupyuen/lupyuen.github.io/blob/master/src/mmu.md"><strong>lupyuen.github.io/src/mmu.md</strong></a></p>
<h1 id="appendix-flush-the-mmu-cache-for-t-head-c906"><a class="doc-anchor" href="#appendix-flush-the-mmu-cache-for-t-head-c906">§</a>12 Appendix: Flush the MMU Cache for T-Head C906</h1>
<p>Ox64 BL808 crashes with a <strong>Page Fault</strong> when we run <code>getprime</code> then <code>hello</code></p>
<div class="example-wrap"><pre class="language-text"><code>nsh&gt; getprime
getprime took 0 msec
nsh&gt; hello
riscv_exception: EXCEPTION: Store/AMO page fault.
MCAUSE: 0f,
EPC: 50208fcc,
MTVAL: 80200000</code></pre></div>
<p><a href="https://gist.github.com/lupyuen/84ceeb5bd4eed98d9d1a3cd83d712dab">(See the <strong>Complete Log</strong>)</a></p>
<p>The Invalid Address <code>0x8020_0000</code> is the Virtual Address of the <strong>Dynamic Heap</strong> (<code>malloc</code>) for the <code>hello</code> app: <a href="https://github.com/lupyuen2/wip-nuttx/blob/nuttx-12.4.0-RC0/boards/risc-v/bl808/ox64/configs/nsh/defconfig#L20">boards/risc-v/bl808/ox64/configs/nsh/defconfig</a></p>
<div class="example-wrap"><pre class="language-bash"><code>## NuttX Config for Ox64
CONFIG_ARCH_DATA_NPAGES=128
CONFIG_ARCH_DATA_VBASE=0x80100000
CONFIG_ARCH_HEAP_NPAGES=128
CONFIG_ARCH_HEAP_VBASE=0x80200000
CONFIG_ARCH_TEXT_NPAGES=128
CONFIG_ARCH_TEXT_VBASE=0x80000000</code></pre></div>
<p>We discover that T-Head C906 MMU is incorrectly accessing the <strong>MMU Page Tables of the Previous Process</strong> (<code>getprime</code>) while starting the New Process (<code>hello</code>)…</p>
<div class="example-wrap"><pre class="language-text"><code>## Virtual Address 0x8020_0000 is OK for Previous Process
nsh&gt; getprime
Virtual Address 0x80200000 maps to Physical Address 0x506a6000
*0x506a6000 is 0
*0x80200000 is 0
## Virtual Address 0x8020_0000 crashes for New Process
nsh&gt; hello
Virtual Address 0x80200000 maps to Physical Address 0x506a4000
*0x506a4000 is 0
riscv_exception: EXCEPTION: Load page fault. MCAUSE: 000000000000000d, EPC: 000000005020c020, MTVAL: 0000000080200000</code></pre></div>
<p><a href="https://gist.github.com/lupyuen/8d5fb67ef5e174cacbb55121d04e4b10">(See the <strong>Complete Log</strong>)</a></p>
<p>Which means that the <strong>T-Head C906 MMU Cache</strong> needs to be flushed. This should be done right after we swap the <strong>MMU SATP Register</strong>.</p>
<p>To fix the problem, we refer to the <a href="https://github.com/torvalds/linux/blob/master/arch/riscv/errata/thead/errata.c#L38-L78"><strong>T-Head Errata for Linux Kernel</strong></a></p>
<div class="example-wrap"><pre class="language-c"><code>// th.dcache.ipa rs1 (invalidate, physical address)
// | 31 - 25 | 24 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 |
// 0000001 01010 rs1 000 00000 0001011
// th.dcache.iva rs1 (invalidate, virtual address)
// 0000001 00110 rs1 000 00000 0001011
// th.dcache.cpa rs1 (clean, physical address)
// | 31 - 25 | 24 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 |
// 0000001 01001 rs1 000 00000 0001011
// th.dcache.cva rs1 (clean, virtual address)
// 0000001 00101 rs1 000 00000 0001011
// th.dcache.cipa rs1 (clean then invalidate, physical address)
// | 31 - 25 | 24 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 |
// 0000001 01011 rs1 000 00000 0001011
// th.dcache.civa rs1 (clean then invalidate, virtual address)
// 0000001 00111 rs1 000 00000 0001011
// th.sync.s (make sure all cache operations finished)
// | 31 - 25 | 24 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 |
// 0000000 11001 00000 000 00000 0001011
#define THEAD_INVAL_A0 &quot;.long 0x02a5000b&quot;
#define THEAD_CLEAN_A0 &quot;.long 0x0295000b&quot;
#define THEAD_FLUSH_A0 &quot;.long 0x02b5000b&quot;
#define THEAD_SYNC_S &quot;.long 0x0190000b&quot;
#define THEAD_CMO_OP(_op, _start, _size, _cachesize) \
asm volatile( \
&quot;mv a0, %1\n\t&quot; \
&quot;j 2f\n\t&quot; \
&quot;3:\n\t&quot; \
THEAD_##_op##_A0 &quot;\n\t&quot; \
&quot;add a0, a0, %0\n\t&quot; \
&quot;2:\n\t&quot; \
&quot;bltu a0, %2, 3b\n\t&quot; \
THEAD_SYNC_S \
: : &quot;r&quot;(_cachesize), \
&quot;r&quot;((unsigned long)(_start) &amp; ~((_cachesize) - 1UL)), \
&quot;r&quot;((unsigned long)(_start) + (_size)) \
: &quot;a0&quot;)</code></pre></div>
<p><strong>For NuttX:</strong> We flush the MMU Cache whenever we <strong>swap the MMU SATP Register</strong> to the New Page Tables.</p>
<p>We execute 2 RISC-V Instructions that are <strong>specific to T-Head C906</strong></p>
<ul>
<li>
<p><strong>DCACHE.IALL</strong>: Invalidate the entire D-Cache (including any cached Page Table Entries)</p>
<p>(From <a href="https://occ-intl-prod.oss-ap-southeast-1.aliyuncs.com/resource/XuanTie-OpenC906-UserManual.pdf"><strong>C906 User Manual</strong></a>, Page 551)</p>
</li>
<li>
<p><strong>SYNC.S</strong>: Ensure that all Cache Operations are completed</p>
<p>(Undocumented, comes from the T-Head Errata above)</p>
<p>(Looks similar to <strong>SYNC</strong> and <strong>SYNC.I</strong> from <a href="https://occ-intl-prod.oss-ap-southeast-1.aliyuncs.com/resource/XuanTie-OpenC906-UserManual.pdf"><strong>C906 User Manual</strong></a>, Page 559)</p>
</li>
</ul>
<p>This is how we <strong>Flush the MMU Cache</strong> for T-Head C906: <a href="https://github.com/lupyuen2/wip-nuttx/blob/satp/arch/risc-v/src/bl808/bl808_mm_init.c#L299-L323">arch/risc-v/src/bl808/bl808_mm_init.c</a></p>
<div class="example-wrap"><pre class="language-c"><code>// Flush the MMU Cache for T-Head C906. Called by mmu_write_satp() after
// updating the MMU SATP Register, when swapping MMU Page Tables.
// This operation executes RISC-V Instructions that are specific to
// T-Head C906.
void weak_function mmu_flush_cache(void) {
__asm__ __volatile__ (
// DCACHE.IALL: Invalidate all Page Table Entries in the D-Cache
&quot;.long 0x0020000b\n&quot;
// SYNC.S: Ensure that all Cache Operations are completed
&quot;.long 0x0190000b\n&quot;
);
}</code></pre></div>
<p><code>mmu_flush_cache</code> is called by <code>mmu_write_satp</code>, whenever the <strong>MMU SATP Register is updated</strong> (and the MMU Page Tables are swapped): <a href="https://github.com/lupyuen2/wip-nuttx/blob/satp/arch/risc-v/src/common/riscv_mmu.h#L190-L221">arch/risc-v/src/common/riscv_mmu.h</a></p>
<div class="example-wrap"><pre class="language-c"><code>// Update the MMU SATP Register for swapping MMU Page Tables
static inline void mmu_write_satp(uintptr_t reg) {
__asm__ __volatile__ (
&quot;csrw satp, %0\n&quot;
&quot;sfence.vma x0, x0\n&quot;
&quot;fence rw, rw\n&quot;
&quot;fence.i\n&quot;
:
: &quot;rK&quot; (reg)
: &quot;memory&quot;
);
// Flush the MMU Cache if needed (T-Head C906)
if (mmu_flush_cache != NULL) {
mmu_flush_cache();
}
}</code></pre></div>
<p>With this fix, the <code>hello</code> app now accesses the Heap Memory correctly…</p>
<div class="example-wrap"><pre class="language-text"><code>## Virtual Address 0x8020_0000 is OK for Previous Process
nsh&gt; getprime
Virtual Address 0x80200000 maps to Physical Address 0x506a6000
*0x506a6000 is 0
*0x80200000 is 0
## Virtual Address 0x8020_0000 is OK for New Process
nsh&gt; hello
Virtual Address 0x80200000 maps to Physical Address 0x506a4000
*0x506a4000 is 0
*0x80200000 is 0</code></pre></div>
<p><a href="https://gist.github.com/lupyuen/ebf6c35f73132e99cedc2808e57d39e5">(See the <strong>Complete Log</strong>)</a></p>
<p><a href="https://gist.github.com/lupyuen/def8fb96245643c046e5f3ad6c4e3ed0">(See the <strong>Detailed Analysis</strong>)</a></p>
<p><a href="https://github.com/apache/nuttx/pull/11585">(See the <strong>Pull Request</strong>)</a></p>
<p><img src="https://lupyuen.github.io/images/mmu-address.jpg" alt="Virtual Address Translation Process (Page 82)" /></p>
<p><a href="https://github.com/riscv/riscv-isa-manual/releases/download/Priv-v1.12/riscv-privileged-20211203.pdf"><em>Virtual Address Translation Process (Page 82)</em></a></p>
<h1 id="appendix-address-translation"><a class="doc-anchor" href="#appendix-address-translation">§</a>13 Appendix: Address Translation</h1>
<p><em>How does Sv39 MMU translate a Virtual Address to Physical Address?</em></p>
<p>Sv39 MMU translates a <strong>Virtual Address to Physical Address</strong> by traversing the Page Tables as described in…</p>
<ul>
<li>
<p><a href="https://github.com/riscv/riscv-isa-manual/releases/download/Priv-v1.12/riscv-privileged-20211203.pdf"><strong>“RISC-V ISA: Privileged Architectures”</strong></a> (Page 82)</p>
<p>Section 4.3.2: “Virtual Address Translation Process”</p>
<p>(Pic above)</p>
</li>
</ul>
<p><strong>For Sv39 MMU:</strong> The parameters are…</p>
<ul>
<li>
<p><strong>PAGESIZE</strong> = 4,096</p>
</li>
<li>
<p><strong>LEVELS</strong> = 3</p>
</li>
<li>
<p><strong>PTESIZE</strong> = 8</p>
</li>
</ul>
<p><strong><code>vpn[i]</code></strong> and <strong><code>ppn[i]</code></strong> refer to these <strong>Virtual and Physical Address Fields</strong> (Page 85)…</p>
<p><img src="https://lupyuen.github.io/images/mmu-address2.png" alt="Virtual / Physical Address Fields (Page 85)" /></p>
<p><em>What if the Address Translation fails?</em></p>
<p>The algo above says that Sv39 MMU will trigger a <a href="https://five-embeddev.com/riscv-isa-manual/latest/supervisor.html#sec:scause"><strong>Page Fault</strong></a>. (RISC-V Exception)</p>
<p>Which is super handy for implementing <a href="https://en.wikipedia.org/wiki/Memory_paging"><strong>Memory Paging</strong></a>.</p>
<p><em>What about mapping a Physical Address back to Virtual Address?</em></p>
<p>Well that would require an Exhaustive Search of all Page Tables!</p>
<p><em>OK how about Virtual / Physical Address to Page Table Entry (PTE)?</em></p>
<p>Given a <strong>Virtual Address vaddr</strong></p>
<p>(Or a <strong>Physical Address</strong>, assuming Virtual = Physical like our Kernel)</p>
<ul>
<li>
<p><strong>Virtual Page Number: vpn</strong> <br> = <strong>vaddr</strong> &gt;&gt; 12</p>
<p>(4,096 bytes per Memory Page)</p>
</li>
<li>
<p><strong>Level 1 PTE Index</strong> <br> = (<strong>vpn</strong> &gt;&gt; 18) &amp; <code>0b1_1111_1111</code></p>
<p>(Extract Bits 18 to 26 to get Level 1 Index)</p>
</li>
<li>
<p><strong>Level 2 PTE Index</strong> <br> = (<strong>vpn</strong> &gt;&gt; 9) &amp; <code>0b1_1111_1111</code></p>
<p>(Extract Bits 9 to 17 to get Level 2 Index)</p>
</li>
<li>
<p><strong>Level 3 PTE Index</strong> <br> = <strong>vpn</strong> &amp; <code>0b1_1111_1111</code></p>
<p>(Extract Bits 0 to 8 to get Level 3 Index)</p>
<p><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/common/riscv_mmu.c#L61C1-L107">(Implemented as <strong>mmu_ln_setentry</strong>)</a></p>
</li>
</ul>
<p><img src="https://lupyuen.github.io/images/privilege-title.jpg" alt="RISC-V Machine Mode vs Supervisor Mode" /></p>
<p><em>Isn't there another kind of Memory Protection in RISC-V?</em></p>
<p>Yes RISC-V also supports <a href="https://five-embeddev.com/riscv-isa-manual/latest/machine.html#sec:pmp"><strong>Physical Memory Protection</strong></a>.</p>
<p>But it only works in <strong>RISC-V Machine Mode</strong> (pic above), the most powerful mode. <a href="https://lupyuen.github.io/articles/sbi">(Like for <strong>OpenSBI</strong>)</a></p>
<p>NuttX and Linux run in <strong>RISC-V Supervisor Mode</strong>, which is less powerful. And won't have access to this Physical Memory Protection.</p>
<p>That's why NuttX and Linux use Sv39 MMU instead for Memory Protection.</p>
<p><a href="https://github.com/openbouffalo/OBLFR/blob/master/apps/d0_lowload/src/main.c#L44-L98">(See the Ox64 settings for <strong>Physical Memory Protection</strong>)</a></p>
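<p>(How does a <strong>pmpaddr</strong> Register encode a Memory Region? Here's a hypothetical sketch of the NAPOT Address Encoding, assuming the encoding rules in the RISC-V Privileged Spec. These are not the actual Ox64 settings…)</p>

```bash
## Hypothetical sketch (not the actual Ox64 PMP settings):
## NAPOT-encode a PMP Region, assuming the encoding rules in the
## RISC-V Privileged Spec: pmpaddr = (base >> 2) | ((size >> 3) - 1)
base=$(( 0xE0000000 ))  ## Naturally-Aligned Base of Region
size=$(( 0x10000000 ))  ## Size of Region: 256 MB (Power of Two)
pmpaddr=$(( (base >> 2) | ((size >> 3) - 1) ))
printf "pmpaddr=0x%X\n" "$pmpaddr"  ## Prints pmpaddr=0x39FFFFFF
```

<p>(The 25 Trailing One Bits in <strong><code>0x39FF_FFFF</code></strong> encode the Region Size: 2 ^ 28 = 256 MB)</p>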
<p><img src="https://lupyuen.github.io/images/mmu-boot1.png" alt="Ox64 boots to NuttX Shell" /></p>
<p><a href="https://gist.github.com/lupyuen/aa9b3e575ba4e0c233ab02c328221525"><em>Ox64 boots to NuttX Shell</em></a></p>
<h1 id="appendix-build-and-run-nuttx"><a class="doc-anchor" href="#appendix-build-and-run-nuttx">§</a>14 Appendix: Build and Run NuttX</h1>
<p>In this article, we ran a Work-In-Progress Version of <strong>Apache NuttX RTOS for Ox64</strong>, with added <strong>MMU Logging</strong>.</p>
<p>(Console Input is not yet supported)</p>
<p>This is how we download and build NuttX for Ox64 BL808 SBC…</p>
<div class="example-wrap"><pre class="language-bash"><code>## Download the WIP NuttX Source Code
git clone \
--branch ox64a \
https://github.com/lupyuen2/wip-nuttx \
nuttx
git clone \
--branch ox64a \
https://github.com/lupyuen2/wip-nuttx-apps \
apps
## Build NuttX
cd nuttx
tools/configure.sh star64:nsh
make
## Export the NuttX Kernel
## to `nuttx.bin`
riscv64-unknown-elf-objcopy \
-O binary \
nuttx \
nuttx.bin
## Dump the disassembly to nuttx.S
riscv64-unknown-elf-objdump \
--syms --source --reloc --demangle --line-numbers --wide \
--debugging \
nuttx \
&gt;nuttx.S \
2&gt;&amp;1</code></pre></div>
<p><a href="https://lupyuen.github.io/articles/release#build-nuttx-for-star64">(Remember to install the <strong>Build Prerequisites and Toolchain</strong>)</a></p>
<p><a href="https://lupyuen.github.io/articles/riscv#appendix-build-apache-nuttx-rtos-for-64-bit-risc-v-qemu">(And enable <strong>Scheduler Info Output</strong>)</a></p>
<p>Then we build the <strong>Initial RAM Disk</strong> that contains NuttX Shell and NuttX Apps…</p>
<div class="example-wrap"><pre class="language-bash"><code>## Build the Apps Filesystem
make -j 8 export
pushd ../apps
./tools/mkimport.sh -z -x ../nuttx/nuttx-export-*.tar.gz
make -j 8 import
popd
## Generate the Initial RAM Disk `initrd`
## in ROMFS Filesystem Format
## from the Apps Filesystem `../apps/bin`
## and label it `NuttXBootVol`
genromfs \
-f initrd \
-d ../apps/bin \
-V &quot;NuttXBootVol&quot;
## Prepare a Padding with 64 KB of zeroes
head -c 65536 /dev/zero &gt;/tmp/nuttx.pad
## Append Padding and Initial RAM Disk to NuttX Kernel
cat nuttx.bin /tmp/nuttx.pad initrd \
&gt;Image</code></pre></div>
<p><a href="https://github.com/lupyuen2/wip-nuttx/releases/tag/ox64a-1">(See the <strong>Build Script</strong>)</a></p>
<p><a href="https://github.com/lupyuen2/wip-nuttx/releases/tag/ox64a-1">(See the <strong>Build Outputs</strong>)</a></p>
<p><a href="https://lupyuen.github.io/articles/app#pad-the-initial-ram-disk">(Why the <strong>64 KB Padding</strong>)</a></p>
<p>Next we prepare a <strong>Linux microSD</strong> for Ox64 as described <a href="https://lupyuen.github.io/articles/ox64"><strong>in the previous article</strong></a>.</p>
<p><a href="https://lupyuen.github.io/articles/ox64#flash-opensbi-and-u-boot">(Remember to flash <strong>OpenSBI and U-Boot Bootloader</strong>)</a></p>
<p>Then we do the <a href="https://lupyuen.github.io/articles/ox64#apache-nuttx-rtos-for-ox64"><strong>Linux-To-NuttX Switcheroo</strong></a>: Overwrite the microSD Linux Image with the <strong>NuttX Kernel</strong>…</p>
<div class="example-wrap"><pre class="language-bash"><code>## Overwrite the Linux Image
## on Ox64 microSD
cp Image \
&quot;/Volumes/NO NAME/Image&quot;
diskutil unmountDisk /dev/disk2</code></pre></div>
<p>Insert the <a href="https://lupyuen.github.io/images/ox64-sd.jpg"><strong>microSD into Ox64</strong></a> and power up Ox64.</p>
<p>Ox64 boots <a href="https://lupyuen.github.io/articles/sbi"><strong>OpenSBI</strong></a>, which starts <a href="https://lupyuen.github.io/articles/linux#u-boot-bootloader-for-star64"><strong>U-Boot Bootloader</strong></a>, which starts <strong>NuttX Kernel</strong> and the NuttX Shell (NSH). (Pic above)</p>
<p><a href="https://gist.github.com/lupyuen/aa9b3e575ba4e0c233ab02c328221525">(See the <strong>NuttX Log</strong>)</a></p>
<p><a href="https://github.com/lupyuen2/wip-nuttx/releases/tag/ox64a-1">(See the <strong>Build Outputs</strong>)</a></p>
<p><img src="https://lupyuen.github.io/images/mmu-l2int.jpg" alt="Level 2 Page Table for Interrupt Controller" /></p>
<h1 id="appendix-fix-the-interrupt-controller"><a class="doc-anchor" href="#appendix-fix-the-interrupt-controller">§</a>15 Appendix: Fix the Interrupt Controller</h1>
<p><em>What's wrong with the Interrupt Controller?</em></p>
<p>Earlier we had difficulty configuring the Sv39 MMU for the Interrupt Controller at <strong><code>0xE000_0000</code></strong></p>
<div><table><thead><tr><th style="text-align: left">Region</th><th style="text-align: center">Start Address</th><th style="text-align: left">Size</th></tr></thead><tbody>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/arch/risc-v/src/bl808/bl808_mm_init.c#L51-L54"><strong>I/O Memory</strong></a></td><td style="text-align: center"><strong><code>0x0000_0000</code></strong></td><td style="text-align: left"><strong><code>0x4000_0000</code></strong> <em>(1 GB)</em></td></tr>
<tr><td style="text-align: left"><a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/scripts/ld.script#L23-L26"><strong>RAM</strong></a></td><td style="text-align: center"><strong><code>0x5020_0000</code></strong></td><td style="text-align: left"><strong><code>0x0180_0000</code></strong> <em>(24 MB)</em></td></tr>
<tr><td style="text-align: left"><a href="https://lupyuen.github.io/articles/ox2#platform-level-interrupt-controller"><strong>Interrupt Controller</strong></a></td><td style="text-align: center"><strong><code>0xE000_0000</code></strong></td><td style="text-align: left"><strong><code>0x1000_0000</code></strong> <em>(256 MB)</em></td></tr>
</tbody></table>
</div>
<p><em>Why not park the Interrupt Controller as a Level 1 Page Table Entry?</em></p>
<div><table><thead><tr><th style="text-align: center">Index</th><th style="text-align: center">Permissions</th><th style="text-align: left">Physical Page Number</th></tr></thead><tbody>
<tr><td style="text-align: center">0</td><td style="text-align: center">VGRWSO+S</td><td style="text-align: left"><strong><code>0x00000</code></strong> <em>(I/O Memory)</em></td></tr>
<tr><td style="text-align: center">1</td><td style="text-align: center">VG <em>(Pointer)</em></td><td style="text-align: left"><strong><code>0x50406</code></strong> <em>(L2 Kernel Code &amp; Data)</em></td></tr>
<tr><td style="text-align: center">3</td><td style="text-align: center">VGRWSO+S</td><td style="text-align: left"><strong><code>0xC0000</code></strong> <em>(Interrupt Controller)</em></td></tr>
</tbody></table>
</div>
<p>Uh it's super wasteful to reserve <strong>1 GB of Address Space</strong> (Level 1 at <strong><code>0xC000_0000</code></strong>) for our Interrupt Controller that requires only 256 MB.</p>
<p>But theres another problem: Our <strong>User Memory</strong> was originally assigned to <strong><code>0xC000_0000</code></strong></p>
<div><table><thead><tr><th style="text-align: left">Region</th><th style="text-align: center">Start Address</th><th style="text-align: left">Size</th></tr></thead><tbody>
<tr><td style="text-align: left">User Code</td><td style="text-align: center"><strong><code>0xC000_0000</code></strong></td><td style="text-align: left"><em>(Max 128 Pages)</em></td></tr>
<tr><td style="text-align: left">User Data</td><td style="text-align: center"><strong><code>0xC010_0000</code></strong></td><td style="text-align: left"><em>(Max 128 Pages)</em></td></tr>
<tr><td style="text-align: left">User Heap</td><td style="text-align: center"><strong><code>0xC020_0000</code></strong></td><td style="text-align: left"><em>(Max 128 Pages)</em> <br> <em>(Each Page is 4 KB)</em></td></tr>
</tbody></table>
</div>
<p><a href="https://github.com/lupyuen2/wip-nuttx/blob/ox64/boards/risc-v/jh7110/star64/configs/nsh/defconfig#L25-L38">(Source)</a></p>
<p>Which would <strong>collide with our Interrupt Controller</strong>!</p>
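<p>We can check the collision with a little Bash Arithmetic (a hypothetical check, not from the NuttX code): each Level 1 PTE covers 1 GB, so two addresses collide at Level 1 when they share the same Level 1 Index (Bits 30 to 38 of the Virtual Address)…</p>

```bash
## Hypothetical check (not from the NuttX code): User Memory and
## Interrupt Controller land in the same Level 1 PTE, because
## each Level 1 PTE covers 1 GB of Virtual Address Space
user=$(( 0xC0000000 >> 30 ))  ## Level 1 Index of User Memory
plic=$(( 0xE0000000 >> 30 ))  ## Level 1 Index of Interrupt Controller
echo "$user $plic"            ## Prints 3 3: Same Level 1 PTE!
```
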
<p><em>OK so we move our User Memory elsewhere?</em></p>
<p>Yep that's why we moved the User Memory from <strong><code>0xC000_0000</code></strong> to <strong><code>0x8000_0000</code></strong></p>
<div><table><thead><tr><th style="text-align: left">Region</th><th style="text-align: center">Start Address</th><th style="text-align: left">Size</th></tr></thead><tbody>
<tr><td style="text-align: left">User Code</td><td style="text-align: center"><strong><code>0x8000_0000</code></strong></td><td style="text-align: left"><em>(Max 128 Pages)</em></td></tr>
<tr><td style="text-align: left">User Data</td><td style="text-align: center"><strong><code>0x8010_0000</code></strong></td><td style="text-align: left"><em>(Max 128 Pages)</em></td></tr>
<tr><td style="text-align: left">User Heap</td><td style="text-align: center"><strong><code>0x8020_0000</code></strong></td><td style="text-align: left"><em>(Max 128 Pages)</em> <br> <em>(Each Page is 4 KB)</em></td></tr>
</tbody></table>
</div>
<p><a href="https://github.com/apache/nuttx/blob/master/boards/risc-v/bl808/ox64/configs/nsh/defconfig#L17-L30">(Source)</a></p>
<p>Which won't conflict with our Interrupt Controller.</p>
<p>(Or maybe we should have moved the User Memory to another Exotic Address: <strong><code>0x1_0000_0000</code></strong>)</p>
<p><em>But we said that Level 1 is too wasteful for Interrupt Controller?</em></p>
<p>Once again: It's super wasteful to reserve <strong>1 GB of Address Space</strong> (Level 1 at <strong><code>0xC000_0000</code></strong>) for our Interrupt Controller that requires only 256 MB.</p>
<p>Also we hope the MMU will stop the Kernel from meddling with the memory at <strong><code>0xC000_0000</code></strong>. Because it's not supposed to!</p>
<p><em>Move the Interrupt Controller to Level 2 then!</em></p>
<p>That's why we wrote this article: To figure out how to move the Interrupt Controller to a <strong>Level 2 Page Table</strong>. (And connect Level 1 with Level 2)</p>
<p>And that's how we arrived at this final <strong>MMU Mapping</strong>…</p>
<div><table><thead><tr><th style="text-align: center">Index</th><th style="text-align: center">Permissions</th><th style="text-align: left">Physical Page Number</th></tr></thead><tbody>
<tr><td style="text-align: center">0</td><td style="text-align: center">VGRWSO+S</td><td style="text-align: left"><strong><code>0x00000</code></strong> <em>(I/O Memory)</em></td></tr>
<tr><td style="text-align: center">1</td><td style="text-align: center">VG <em>(Pointer)</em></td><td style="text-align: left"><strong><code>0x50406</code></strong> <em>(L2 Kernel Code &amp; Data)</em></td></tr>
<tr><td style="text-align: center">3</td><td style="text-align: center">VG <em>(Pointer)</em></td><td style="text-align: left"><strong><code>0x50403</code></strong> <em>(L2 Interrupt Controller)</em></td></tr>
</tbody></table>
</div>
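<p>How big is that L2 Interrupt Controller Table? Here's a quick hypothetical computation (not from the NuttX code): each Level 2 PTE maps a 2 MB Megapage inside the 1 GB region at <strong><code>0xC000_0000</code></strong>…</p>

```bash
## Hypothetical computation (not from the NuttX code): locate the
## Interrupt Controller inside the L2 Page Table, where each
## Level 2 PTE maps a 2 MB Megapage of the 1 GB region 0xC000_0000
first=$(( (0xE0000000 - 0xC0000000) / 0x200000 ))  ## First Level 2 Index
count=$((  0x10000000 / 0x200000 ))                ## Number of Level 2 PTEs
echo "first=$first count=$count"                   ## Prints first=256 count=128
```

<p>So the 256 MB Interrupt Controller occupies 128 Megapages, from Index 256 through Index 383 of the Level 2 Page Table.</p>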
<p>That works hunky-dory for our Interrupt Controller and User Memory!</p>
<p><img src="https://lupyuen.github.io/images/mmu-table.jpg" alt="Table full of… RISC-V Page Tables!" /></p>
<p><em>Table full of… RISC-V Page Tables!</em></p>
<!-- Begin scripts/rustdoc-after.html: Post-HTML for Custom Markdown files processed by rustdoc, like chip8.md -->
<!-- Begin Theme Picker and Prism Theme -->
<script src="../theme.js"></script>
<script src="../prism.js"></script>
<!-- Theme Picker and Prism Theme -->
<!-- End scripts/rustdoc-after.html -->
</body>
</html>