Memory Allocation in Go

Stack Space

The Go compiler allocates stack space dynamically as needed. It inserts a check at each function call to see whether the current stack is large enough, which removes the need for guard pages to catch stack overflow. If usage exceeds the allocated space, the runtime grows the stack by copying it to a larger block.

Escape Analysis

A variable stays on the stack when no reference to it escapes the current function. Otherwise it moves to the heap.

Escape conditions:

  • Sending pointers, or values containing pointers, to channels
  • Storing pointers, or values containing pointers, in slices
  • Growing a slice so that its backing array is reallocated: the data escapes to the heap
  • Calling methods through interfaces, since dynamic dispatch keeps the compiler from proving the value stays local

Memory Alignment

Structure padding ensures alignment to CPU word multiples.

type                               size in bytes
------                             ------
byte, uint8, int8                  1
uint16, int16                      2
uint32, int32, float32             4
uint64, int64, float64, complex64  8
complex128                         16
uint, int                          4 or 8, depending on architecture
uintptr                            large enough to store the uninterpreted
                                   bits of a pointer value

On 64-bit platforms the word size is 8 bytes, so fields are aligned to 8-byte boundaries; on 32-bit platforms, to 4-byte boundaries.

Initialization Order

import -> const -> var -> init()

Memory Allocation Details

  • Go asks the OS for a large chunk of memory at startup, then manages it internally.
  • The basic unit is the mspan, composed of one or more pages. Each mspan serves objects of one specific size class.
  • mcache, mcentral, and mheap manage memory hierarchically. mcache holds locally cached mspans per P (lock-free), mcentral manages the global mspans shared by all Ps, and mheap manages all of Go's dynamic memory.
  • Tiny objects are packed together into a single block by the tiny allocator to save space. Small objects are served from mspans. Large objects go straight to mheap.

TCMalloc

Go's allocator is based on this algorithm for memory allocation and deallocation, which reduces heap fragmentation at runtime.

Core idea: Divide memory into multiple levels to reduce lock granularity.

Thread Memory Management

Each thread gets a local, lock-free memory pool for small objects (<= 32k). The pool is split into multiple free lists by size; each list holds nodes of one fixed size, which reduces fragmentation.

Heap Page Management

TCMalloc holds a set of contiguous pages; each run of pages forms a span. Objects larger than 32k take a global lock to get memory from here. If that's not enough, TCMalloc asks the OS for more.

Go Memory Allocator

Objects are divided into 67 different size classes. Go's page granularity is 8K, and one or more pages form an mspan.

mcache

mspans are organized as a doubly linked list; each node records the span's page start address, size class, and page count. Each P holds a local mspan pointer for every size class (<=32K), enabling lock-free allocation. In total it holds 134 mspan pointers: one set for objects containing pointers (scan) and one for objects without (noscan). One benefit: garbage collection doesn't need to traverse objects without pointers.

Allocation first tries the local P's mcache; if it has no free span for the size class, it refills from mcentral.

mcentral

Holds two lists per size class: one for spans with no free space (or currently lent to an mcache), another for spans with free space.

mheap

The mcentrals live inside mheap, indexed by size class.

Allocation Strategy

Objects > 32k go straight to mheap. Objects < 16 bytes use mcache's tiny allocator. Objects from 16 bytes to 32k are served from the mcache span matching their size class; if that span is full, the mcache refills from mcentral. If mcentral has nothing either, it finds the best-fit span in mheap. If no suitable space exists, mheap asks the OS for new pages (at least 1MB at a time).

Pages

All heap memory comes from arenas, large regions allocated from the OS. In Go 1.11.5, the initial heap reserves one arena: 64M on 64-bit or 4M on 32-bit. Each 64M arena consists of 8,192 pages of 8KB each, which are grouped into mspans.

Memory Leaks

Nine out of ten leaks come from goroutine leaks. Common examples: forgetting to call resp.Body.Close() after an HTTP request, and timers or tickers that are never stopped after use.