
The Shift to 16KB Pages on Android: A Deep Dive for Developers on Why It Matters and How to Adapt

Aug 24, 2025

Android is transitioning from a 4KB to a 16KB page size on 64-bit devices, and support for 16KB pages is becoming mandatory for apps targeting Android 15 and higher. For many developers, especially those working primarily in Kotlin or Java, this might seem like an obscure, low-level detail. However, for those with native C/C++ code, this change is not just significant; it requires action.


This article will serve as your comprehensive guide. We will move beyond the simple announcement to explore the fundamental OS concepts that drive this decision. You will learn not just what you need to do, but why this change leads to tangible performance gains, and how to ensure your application is ready for the future of Android.

Back to Basics — What is a Memory “Page”?
Before we can appreciate the impact of changing the page size, we must understand what a page is. In the early days of computing, applications accessed physical memory (RAM) directly. This led to a host of problems: programs could overwrite each other, security was nonexistent, and memory quickly became fragmented and inefficient.

The solution was virtual memory, a powerful abstraction that is the bedrock of all modern operating systems, including Android.

  • Virtual vs. Physical Address Space: Instead of seeing the real, physical RAM, your application is given its own private, pristine, and contiguous block of addresses called a virtual address space. From your app’s perspective, it has the entire memory range to itself. The operating system and a special hardware component called the Memory Management Unit (MMU) handle the complex task of mapping these virtual addresses to actual physical addresses in RAM.
  • Pages and Frames: To manage this mapping efficiently, the system breaks both virtual and physical memory into fixed-size chunks. A chunk of virtual memory is called a page. A chunk of physical memory is called a frame. A page and a frame are always the same size. For decades, that size on Android has been 4 kilobytes (4096 bytes).
  • The Page Table (the Master Map): For every process, the OS maintains a page table. Think of this as the master directory or index that maps each virtual page of your app to a physical frame in RAM. When your code tries to access virtual_address_1234, the CPU’s MMU looks up the corresponding page in the page table to find its physical frame location and then calculates the final physical address.

This system is brilliant because it solves the earlier problems: it provides memory protection (apps can’t see each other’s page tables), eliminates external fragmentation (any free frame can be used for any page), and allows for clever tricks like loading parts of an app into memory only when they are needed (demand paging).
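To make the split between page number and offset concrete, here is a minimal sketch of the arithmetic under 4KB pages. The address 0x12345 is an arbitrary example, not one taken from a real process:

```shell
# Split a virtual address into (virtual page number, offset) under 4KB pages.
# 0x12345 is an arbitrary example address.
PAGE_SIZE=4096
addr=$((0x12345))

vpn=$((addr / PAGE_SIZE))     # which virtual page the address lives in
offset=$((addr % PAGE_SIZE))  # position within that page

echo "virtual page number: $vpn"    # prints 18
echo "offset in page:      $offset" # prints 837
```

Only the page number goes through translation; the offset is copied unchanged into the physical frame, which is why the page table can stay coarse-grained.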

The Performance Engine — The Role of the Translation Lookaside Buffer
The page table system has one major, inherent bottleneck. To access a single piece of data in memory, the CPU would need to:

  • First, access memory to read the page table to find the physical address.
  • Second, access memory again at that physical address to get the actual data.

This “double memory access” would effectively halve performance, a completely unacceptable overhead. This is where the Translation Lookaside Buffer (TLB) comes in.

The TLB is a small, incredibly fast cache built directly into the MMU. Its sole job is to store recently used virtual-to-physical page mappings. Think of it as a high-speed “cheat sheet.”

Here’s the workflow for every memory access:

  • TLB Lookup: The MMU first checks the TLB for the mapping. This is a near-instantaneous hardware lookup.
  • TLB Hit (The Fast Path): If the mapping is in the TLB (a “TLB hit”), the physical address is retrieved instantly, and the data is accessed. The slow page table lookup is completely bypassed. This is the ideal and most common scenario, thanks to the principle of locality of reference (programs tend to access the same memory regions repeatedly).
  • TLB Miss (The Slow Path): If the mapping is not in the TLB (a “TLB miss”), the MMU must perform a full, slow page walk by reading the page table from main memory. Once the mapping is found, it is loaded into the TLB (often replacing an older entry), and the memory access can then proceed.

The entire performance of the virtual memory system hinges on maximizing the TLB hit rate. If you can reduce TLB misses, you can directly increase performance.
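The cost of misses can be quantified with the textbook effective-access-time formula: a hit pays one TLB lookup plus one RAM access, while a miss pays an extra RAM access to walk a one-level page table. The figures below (1 ns TLB lookup, 100 ns RAM access) are illustrative assumptions, not measurements of real hardware:

```shell
# Effective access time (EAT) per memory reference.
# t_tlb = TLB lookup cost, t_mem = one RAM access, h = TLB hit rate.
# A miss pays one extra RAM access for a one-level page-table walk.
# All numbers are illustrative, not real hardware figures.
awk 'BEGIN {
  t_tlb = 1; t_mem = 100
  n = split("0.99 0.90", rates, " ")
  for (i = 1; i <= n; i++) {
    h = rates[i]
    eat = h * (t_tlb + t_mem) + (1 - h) * (t_tlb + 2 * t_mem)
    printf "hit rate %s -> EAT %.1f ns\n", rates[i], eat
  }
}'
```

Even in this simple model, dropping from a 99% to a 90% hit rate makes every memory access roughly 9% slower on average, which is why TLB hit rate is the lever that larger pages pull.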

Connecting the Dots — Why 16KB Pages Unlock More Performance
Now we can finally understand why increasing the page size from 4KB to 16KB is so impactful. It’s all about maximizing the effectiveness of the TLB.

The Concept of “TLB Reach”

A single entry in the TLB caches the mapping for a single page.

  • With a 4KB page size, one TLB entry covers 4KB of memory.
  • With a 16KB page size, one TLB entry covers 16KB of memory.

This means that by quadrupling the page size, we quadruple the amount of memory that can be mapped by the TLB at any given time. This is called increasing the TLB Reach.

Imagine your app needs to access code and data spread across 64KB of memory.

  • With 4KB pages: This requires 16 different pages (64 / 4 = 16). You would need 16 entries in the TLB to map this entire working set without misses.
  • With 16KB pages: This requires only 4 different pages (64 / 16 = 4). You only need 4 entries in the TLB.

A larger TLB Reach means your application is far more likely to find the address translation it needs in the fast cache, dramatically reducing the number of slow TLB misses.
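The arithmetic generalizes to reach = number of TLB entries × page size. A quick sketch with a hypothetical 48-entry TLB (entry counts vary widely between CPU designs):

```shell
# TLB reach = number of entries * page size.
# 48 entries is a hypothetical figure; real counts vary by CPU design.
ENTRIES=48
for page_kb in 4 16; do
  echo "page size ${page_kb}KB -> TLB reach $((ENTRIES * page_kb))KB"
done
# prints 192KB for 4KB pages, 768KB for 16KB pages
```

Same hardware, same number of entries, four times the memory covered without a miss.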

The Cascade of Benefits — Reducing TLB misses leads to a domino effect of performance improvements:

  • Faster App and Code Execution: Fewer stalls waiting for page walks mean the CPU spends more time executing your code. This results in faster app launches, smoother animations, and quicker processing.
  • Reduced Power Consumption: Page walks are energy-intensive operations. By keeping the CPU busy with useful work instead of waiting on memory, the device can complete tasks faster and return to low-power states sooner.
  • More Efficient I/O: When a page fault does occur (the app needs data from storage), the OS reads an entire page. Transferring a single 16KB block from flash storage is more efficient than orchestrating four separate 4KB transfers, due to the inherent latency of I/O operations.

While there is a minor trade-off — larger pages can lead to slightly more wasted RAM due to internal fragmentation — on modern devices with abundant RAM, this is a small price to pay for the significant system-wide performance and efficiency gains.

The Developer’s Playbook — Migrating Your App
Now for the practical part. How do you ensure your app is compliant?

Who is Affected?
This change primarily impacts applications that include native C/C++ libraries (.so files). If your app is 100% Kotlin or Java, the build system and Android Runtime (ART) already handle this alignment, and you are likely already compatible. If you use native code, you must recompile it to support 16KB alignment.

The Core Requirement: 16KB Alignment
The core issue is memory alignment. The dynamic loader maps each loadable (LOAD) segment of your native library starting at a page boundary, so those segments must be aligned to 16KB boundaries for a 16KB-page device to load the library at all. To guarantee this, you need to instruct the linker to align your segments appropriately when the library is built.
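Simple modular arithmetic shows why a library built for 4KB pages can fail outright: an offset that is 4KB-aligned is not necessarily 16KB-aligned. The offset below is an arbitrary example:

```shell
# 0x7000 (28672) is 4KB-aligned but NOT 16KB-aligned.
offset=$((0x7000))
echo "offset % 4096  = $((offset % 4096))"   # prints 0     -> valid start on 4KB pages
echo "offset % 16384 = $((offset % 16384))"  # prints 12288 -> not a 16KB page boundary
```

A segment placed at such an offset simply cannot start on a page boundary on a 16KB-page device, which is why relinking with the correct alignment is mandatory rather than optional.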

Step-by-Step Migration Guide

  • Update Your Toolchain: Ensure you are using modern tools that are aware of this requirement.

Android Gradle Plugin (AGP): Upgrade to version 8.5.1 or higher.

Android NDK: Upgrade to version r28 or higher.

  • Rebuild Your Native Libraries (.so files): The crucial step is to rebuild all your native libraries and dependencies. When you use a modern NDK with CMake, this is often handled automatically. For custom build systems, you need to ensure the following linker flag is used:
-Wl,-z,max-page-size=16384

This flag tells the linker (ld) to assume a maximum page size of 16KB (16384 bytes), which forces the loadable segments of your shared object to be aligned on 16KB boundaries.

  • Verify Your App Bundle: After rebuilding, you can check for compliance before uploading.

Use the App Bundle Explorer in the Google Play Console. It will show a warning if any .so files in your bundle are not 16KB-aligned.

Alternatively, you can use the readelf tool locally on your .so files to inspect their segment alignment.

  • Thoroughly Test Your Application:

Create a 16KB AVD: You can create an Android Virtual Device configured to run with a 16KB page size to test your app’s behavior in a compatible environment.

Use a Physical Device: Newer Google Pixel devices running Android 15 Beta or later can also be used for real-world testing.

Focus testing on areas of your app that heavily utilize your native libraries. Look for any new crashes, unexpected behavior, or performance regressions that could indicate an alignment issue.

Embracing a More Performant Future
The transition to a 16KB page size is a sophisticated, under-the-hood change that represents a logical evolution of the Android platform. By aligning with the realities of modern hardware — abundant RAM and the critical need for performance — Android is paving the way for a faster, more efficient ecosystem.

For developers, this is more than just a compliance task. It’s an opportunity to re-evaluate and modernize your native build process, ensuring your application can deliver the best possible performance on the next generation of Android devices. By understanding the “why” behind this shift, you are better equipped to build robust, high-performance applications for the future. Start rebuilding and testing today.


Written by Pankaj Rai 🇮🇳

Software Engineer | GDE Android, Firebase, AI
