Commit a2be266
zephyr: Protect Zephyr heap metadata from corruption

Zephyr stores heap metadata just before each allocated chunk. This change
ensures the metadata for each chunk is stored in its own separate cache
line, so that if invalidate/writeback is mistakenly called for non-cached
memory, the metadata of a neighboring chunk does not get corrupted. We
already have such size alignment constraints implemented for cached
allocations; this change adds the same size alignment for non-cached
allocations.

This is a workaround for potential problems caused by invalidate/writeback
calls on non-cached memory, which are wrong and should never happen in
well-written code. However, such problems are easy to introduce and quite
hard to debug.

The trade-off: we waste an additional cache line for each chunk's metadata
for non-cached allocations (as we already do for cached allocations).

Signed-off-by: Serhiy Katsyuba <serhiy.katsyuba@intel.com>
1 parent 8d44f26 commit a2be266

File tree

1 file changed (+14, -19)

zephyr/lib/alloc.c

Lines changed: 14 additions & 19 deletions
@@ -376,6 +376,20 @@ static void *heap_alloc_aligned(struct k_heap *h, size_t min_align, size_t bytes
 	struct sys_memory_stats stats;
 #endif
 
+	/*
+	 * Zephyr sys_heap stores metadata at start of each
+	 * heap allocation. To ensure no allocated cached buffer
+	 * overlaps the same cacheline with the metadata chunk,
+	 * align both allocation start and size of allocation
+	 * to cacheline. As cached and non-cached allocations are
+	 * mixed, same rules need to be followed for both type of
+	 * allocations.
+	 */
+#ifdef CONFIG_SOF_ZEPHYR_HEAP_CACHED
+	min_align = MAX(PLATFORM_DCACHE_ALIGN, min_align);
+	bytes = ALIGN_UP(bytes, min_align);
+#endif
+
 	key = k_spin_lock(&h->lock);
 	ret = sys_heap_aligned_alloc(&h->heap, min_align, bytes);
 	k_spin_unlock(&h->lock, key);
@@ -394,20 +408,6 @@ static void __sparse_cache *heap_alloc_aligned_cached(struct k_heap *h,
 {
 	void __sparse_cache *ptr;
 
-	/*
-	 * Zephyr sys_heap stores metadata at start of each
-	 * heap allocation. To ensure no allocated cached buffer
-	 * overlaps the same cacheline with the metadata chunk,
-	 * align both allocation start and size of allocation
-	 * to cacheline. As cached and non-cached allocations are
-	 * mixed, same rules need to be followed for both type of
-	 * allocations.
-	 */
-#ifdef CONFIG_SOF_ZEPHYR_HEAP_CACHED
-	min_align = MAX(PLATFORM_DCACHE_ALIGN, min_align);
-	bytes = ALIGN_UP(bytes, min_align);
-#endif
-
 	ptr = (__sparse_force void __sparse_cache *)heap_alloc_aligned(h, min_align, bytes);
 
 #ifdef CONFIG_SOF_ZEPHYR_HEAP_CACHED
@@ -470,11 +470,6 @@ void *rmalloc_align(uint32_t flags, size_t bytes, uint32_t alignment)
 	if (!(flags & SOF_MEM_FLAG_COHERENT)) {
 		ptr = (__sparse_force void *)heap_alloc_aligned_cached(heap, alignment, bytes);
 	} else {
-		/*
-		 * XTOS alloc implementation has used dcache alignment,
-		 * so SOF application code is expecting this behaviour.
-		 */
-		alignment = MAX(PLATFORM_DCACHE_ALIGN, alignment);
 		ptr = heap_alloc_aligned(heap, alignment, bytes);
 	}
