Randomizing the Linux kernel heap freelists

Thomas Garnier
4 min read · Sep 8, 2016


This article discusses the freelist randomization option I recently added to the Linux kernel (v4.8). It is available for both the SLUB (default) and SLAB kernel heap allocators. The feature is enabled with CONFIG_SLAB_FREELIST_RANDOM and is disabled by default.

The commits are upstream on Linus’ tree.

Freelist usage

A freelist is a FIFO queue in which each entry references a chunk of the heap. One freelist covers multiple pages. When a new set of pages is initialized, the freelist order results in sequential allocations. The SLUB and SLAB allocators differ in design, but their freelists are fairly similar.
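To make this concrete, here is a minimal userspace sketch of the idea, not kernel code; the names and the per-page object count are made up:

```c
#include <stdio.h>

#define OBJS_PER_PAGE 8  /* hypothetical number of objects in a new page */

/* Hypothetical freelist: an ordered queue of object indices for one page. */
static unsigned int freelist[OBJS_PER_PAGE];
static unsigned int head;

/* A new page starts with its freelist in sequential order. */
static void init_page_freelist(void)
{
    for (unsigned int i = 0; i < OBJS_PER_PAGE; i++)
        freelist[i] = i;
    head = 0;
}

/* Each allocation pops the next entry, so offsets come out 0, 1, 2, ... */
static unsigned int alloc_obj(void)
{
    return freelist[head++];
}

int main(void)
{
    init_page_freelist();
    for (int i = 0; i < 4; i++)
        printf("allocated object index %u\n", alloc_obj());
    return 0;
}
```

With this layout, consecutive allocations of the same size land next to each other in memory, which is exactly the predictability discussed below.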

Freelist state for new pages

Exploiting heap predictability

During a kernel heap overflow, an attacker doesn’t know the state of the heap. The heap might be fragmented, with chunks spread across different places. An attacker must control the heap state to reliably exploit a heap overflow. This is true for both user-mode and kernel-mode exploitation.

This is done by repeatedly allocating objects of the same size to fill the existing partial pages and reset the heap state. Thanks to the predictability of the freelist, the attacker knows that future allocations will follow each other.

A good example can be found in this blog post by Jon Oberheide. Jon keeps allocating shmid structures, triggers an overflow on a buffer of the same size, and then searches for the shmid structure that was corrupted.
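A heavily simplified sketch of that grooming step, assuming a spray via shmget() (each call makes the kernel allocate a shmid structure); the spray count is arbitrary and real exploitation involves many more steps:

```c
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define SPRAY_COUNT 256  /* hypothetical spray size */

int main(void)
{
    int ids[SPRAY_COUNT];
    int n = 0;

    /* Each shmget() makes the kernel allocate a shmid structure,
     * filling partially used slab pages so that later allocations
     * land sequentially on fresh, predictable pages. */
    for (; n < SPRAY_COUNT; n++) {
        ids[n] = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        if (ids[n] < 0) {
            perror("shmget");
            break;
        }
    }

    /* ... trigger the overflow into a same-sized object here, then
     * inspect the shmid structures to find the corrupted one ... */

    /* Clean up the sprayed segments. */
    for (int i = 0; i < n; i++)
        shmctl(ids[i], IPC_RMID, NULL);
    return 0;
}
```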

Difference between a heap in its default state and one groomed for an overflow

Freelist randomization is about reducing the attacker’s control over the state of the kernel heap. It raises the chance that an exploit will fail, though how much depends on the bug being used and the activity on the system.

Freelist randomization

Randomizing a freelist is equivalent to shuffling a list of unique numbers. I picked the Fisher-Yates shuffle as a base:

The Fisher–Yates shuffle is an algorithm for generating a random permutation of a finite set — in plain terms, the algorithm shuffles the set. The algorithm effectively puts all the elements into a hat; it continually determines the next element by randomly drawing an element from the hat until no elements remain. The algorithm produces an unbiased permutation: every permutation is equally likely.
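A minimal C sketch of the shuffle applied to a list of freelist indices; rand() is only a stand-in for the kernel’s random number source:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Fisher-Yates: walk the list backwards, swapping each entry with a
 * randomly chosen entry at or below it. With a uniform random source,
 * every permutation is equally likely. */
static void shuffle_freelist(unsigned int *list, unsigned int count)
{
    for (unsigned int i = 0; i < count; i++)
        list[i] = i;  /* start from the sequential order */

    for (unsigned int i = count - 1; i > 0; i--) {
        unsigned int j = (unsigned int)rand() % (i + 1);
        unsigned int tmp = list[i];
        list[i] = list[j];
        list[j] = tmp;
    }
}

int main(void)
{
    unsigned int freelist[8];

    srand((unsigned int)time(NULL));
    shuffle_freelist(freelist, 8);
    for (int i = 0; i < 8; i++)
        printf("%u ", freelist[i]);
    printf("\n");
    return 0;
}
```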

The performance impact was a concern during implementation. The Fisher-Yates shuffle requires one random number per entry in the list. Generating random numbers takes time, and the available entropy can run out, resulting in delays.

Getting good entropy early at boot was another problem. At that stage of the boot process, the software entropy pool has not gathered enough noise to generate random numbers. You can debug this by enabling the DEBUG_RANDOM_BOOT flag in drivers/char/random.c. Recent Intel processors added the RDRAND instruction to help with that. The same problem applies to KASLR, and numerous approaches are being debated by the community.
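For illustration, a userspace sketch of pulling hardware randomness through the RDRAND compiler intrinsic (compile with -mrdrnd on a supporting CPU); the kernel uses its own arch wrappers rather than this intrinsic:

```c
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    unsigned int value;

    /* _rdrand32_step() returns 1 on success and 0 if the hardware
     * random number generator was temporarily unable to deliver. */
    if (_rdrand32_step(&value))
        printf("hardware random value: 0x%08x\n", value);
    else
        printf("RDRAND failed; retry or fall back to another source\n");
    return 0;
}
```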

The final implementation generates one random list per possible allocation size; the list is held by the cache dedicated to that size. For each new page, only a single random number is generated, used as a starting position into the list. This adds a useful amount of randomness without impacting performance too much.
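A sketch of that idea, with hypothetical names: the shuffled sequence is computed once per cache, and each new page spends only one random number on a starting offset into it:

```c
#include <stdlib.h>

#define OBJS_PER_PAGE 8  /* hypothetical object count per page */

/* Shuffled once when the cache is created (e.g. with the Fisher-Yates
 * sketch above) and then reused for every new page of that cache. */
static unsigned int random_seq[OBJS_PER_PAGE];

/* A new page costs only one random number: a starting offset into the
 * precomputed sequence, which is then walked circularly. */
static void init_page_freelist(unsigned int *freelist)
{
    unsigned int start = (unsigned int)rand() % OBJS_PER_PAGE;

    for (unsigned int i = 0; i < OBJS_PER_PAGE; i++)
        freelist[i] = random_seq[(start + i) % OBJS_PER_PAGE];
}
```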

Freelist randomization using a template

Performance testing was done using slab_test on dedicated hardware, with reboots between measurements. The impact on the SLUB allocator is about 3%; no major impact was seen on SLAB.

Side note: SLAB freelist placement since 4.6-rc1

As a side note, a recent commit changed the placement of the freelist for the SLAB allocator. The freelist now sits at the end of the reserved pages, so a heap overflow might result in an odd use-after-free condition as the freelist itself gets corrupted. Randomization should make corrupting the freelist somewhat harder, but not by much.

Heap hardening vs exploits vs performance

Heap hardening features mainly protect the heap against itself. They often increase the chances that an exploit fails by adding randomness, but they don’t mitigate heap overflows as such. I think the main reason is the performance impact.

There is a conflict of interest between a fast heap and a secure one. A truly secure heap would place one allocation per page, surrounded by guard pages, and would never reuse the same pages. That should mitigate most heap overflows, but it wouldn’t be usable in practice.
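As an illustration of that hypothetical secure heap, a userspace sketch using mmap() with an inaccessible guard page after each allocation; a sequential overflow faults instead of corrupting a neighbor:

```c
#include <sys/mman.h>
#include <unistd.h>

/* One allocation per page pair: a usable page followed by a PROT_NONE
 * guard page. An overflow past the object hits the guard page and
 * crashes immediately instead of silently corrupting a neighbor. */
static void *guarded_alloc(size_t size)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    if (size > page)
        return NULL;  /* keep the sketch simple: one page max */

    char *p = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return NULL;

    /* Make the second page inaccessible. */
    if (mprotect(p + page, page, PROT_NONE) != 0) {
        munmap(p, 2 * page);
        return NULL;
    }

    /* Place the object at the end of the first page so it abuts the
     * guard page. */
    return p + page - size;
}

int main(void)
{
    char *buf = guarded_alloc(64);
    if (buf)
        buf[0] = 'x';        /* fine */
    /* buf[64] = 'x'; would hit the guard page and crash. */
    return 0;
}
```

The cost is obvious: two pages and two syscalls per allocation, which is why general-purpose kernel heaps cannot work this way.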

Freelist randomization faces the same dilemma. It introduces more randomness into the Linux kernel heaps with a limited performance impact, but it cannot mitigate heap overflows outright.
