Go Performance Guide
Memory Management

Map Performance Patterns in Go

Master Go map internals, optimization techniques, and when to choose alternatives like sync.Map or Swiss tables

Go maps are among the most commonly used data structures, but their performance characteristics are often misunderstood. This guide covers map internals, optimization patterns, and when to reach for alternatives.

Map Internals: How Go Maps Work

Through Go 1.23, Go maps are implemented as hash tables that resolve collisions with chained overflow buckets (Go 1.24 moved to a Swiss-table design, but the guidance below still applies). Understanding this structure is essential for optimization.

Hash Table Structure

The runtime uses an array of buckets. Each bucket is a small fixed-size structure containing:

  • 8 hash values (high 8 bits of the full hash)
  • 8 keys
  • 8 values
  • A pointer to the next overflow bucket (if needed)

When you insert a key-value pair, Go:

  1. Hashes the key
  2. Uses the low bits of the hash to find the bucket index
  3. Stores the high 8 bits for quick filtering
  4. Places the key and value in the bucket or overflow bucket

This design minimizes cache misses and allocations compared to naive implementations.
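Steps 2 and 3 above can be sketched numerically. This is an illustrative snippet, not the runtime's actual code: with a power-of-two bucket count, selecting a bucket from the low bits is a single AND, and the tophash is just the high byte.

```go
package main

import "fmt"

// bucketFor mimics step 2: with a power-of-two bucket count,
// masking with (numBuckets - 1) keeps only the low bits of the hash.
func bucketFor(hash uint64, numBuckets uint64) uint64 {
	return hash & (numBuckets - 1) // numBuckets must be a power of two
}

// topHash mimics step 3: the high 8 bits are stored per slot
// so most non-matching keys can be rejected without a full compare.
func topHash(hash uint64) uint8 {
	return uint8(hash >> 56)
}

func main() {
	hash := uint64(0xDEADBEEF12345678)
	fmt.Println(bucketFor(hash, 8)) // low 3 bits of 0x78 → 0
	fmt.Println(topHash(hash))      // high byte 0xDE → 222
}
```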

// Internal bucket structure (simplified)
type bmap struct {
    tophash [8]uint8        // high 8 bits of hash
    keys    [8]keytype      // keys
    values  [8]valuetype    // values
    overflow *bmap          // overflow bucket
}

Bucket Counts and Load Factor

Go maps use a load factor of 6.5 entries per bucket on average. When a map exceeds this threshold, it grows by doubling the bucket count.

// Demonstrate map growth with benchmarks
package main

import (
    "fmt"
    "testing"
)

func BenchmarkMapGrowth(b *testing.B) {
    for _, size := range []int{100, 1000, 10000} {
        b.Run(fmt.Sprintf("size_%d", size), func(b *testing.B) {
            b.ReportAllocs()
            b.ResetTimer()

            for i := 0; i < b.N; i++ {
                m := make(map[string]int) // no size hint: grows repeatedly
                for j := 0; j < size; j++ {
                    m[fmt.Sprintf("key%d", j)] = j
                }
            }
        })
    }
}

func BenchmarkMapGrowthWithPrealloc(b *testing.B) {
    for _, size := range []int{100, 1000, 10000} {
        b.Run(fmt.Sprintf("size_%d", size), func(b *testing.B) {
            b.ReportAllocs()
            b.ResetTimer()

            for i := 0; i < b.N; i++ {
                m := make(map[string]int, size) // size hint avoids growth
                for j := 0; j < size; j++ {
                    m[fmt.Sprintf("key%d", j)] = j
                }
            }
        })
    }
}

Preallocation: The Capacity Hint

The make(map[K]V, hint) form lets you hint the expected number of elements; the runtime sizes the initial bucket array to hold them, avoiding repeated allocation and rehashing as the map grows.

When Preallocation Helps

// BAD: Forces multiple rehashing operations
m := make(map[string]int)
for i := 0; i < 100000; i++ {
    m[fmt.Sprintf("user_%d", i)] = i
}

// GOOD: Single allocation if estimate is accurate
m := make(map[string]int, 100000)
for i := 0; i < 100000; i++ {
    m[fmt.Sprintf("user_%d", i)] = i
}

Preallocation Benchmark

func BenchmarkPreallocationVsGrowth(b *testing.B) {
    tests := []int{1000, 10000, 100000, 1000000}

    for _, size := range tests {
        b.Run(fmt.Sprintf("no_prealloc_%d", size), func(b *testing.B) {
            b.ReportAllocs()
            b.ResetTimer()
            for i := 0; i < b.N; i++ {
                m := make(map[int]int)
                for j := 0; j < size; j++ {
                    m[j] = j * 2
                }
            }
        })

        b.Run(fmt.Sprintf("prealloc_%d", size), func(b *testing.B) {
            b.ReportAllocs()
            b.ResetTimer()
            for i := 0; i < b.N; i++ {
                m := make(map[int]int, size)
                for j := 0; j < size; j++ {
                    m[j] = j * 2
                }
            }
        })
    }
}

Tip: If you know the approximate final size, preallocate with make(map[K]V, size). This avoids the intermediate bucket allocations and rehashing that growth would otherwise incur; measure the savings for your workload with benchmarks like the one above.

Map Growth: Rehashing Costs

When a map reaches its load-factor threshold, Go allocates a new bucket array twice as large and rehashes entries incrementally: subsequent map operations evacuate a few old buckets at a time, so the total O(n) rehashing cost is spread across many operations instead of being paid in a single pause.

// Map growth visualization
func ExploreMapGrowth() {
    // Simplified: actual growth happens automatically
    m := make(map[int]int, 4) // hint: room for ~4 elements (not 4 buckets)

    fmt.Println("Inserting elements triggers internal growth:")
    for i := 0; i < 50; i++ {
        m[i] = i * 2
        // Growth occurs when load factor exceeds ~6.5
    }
}

Growth is amortized O(1) per insertion, but inserts that allocate the new bucket array or participate in evacuation take longer, so you may observe occasional latency spikes when rehashing occurs.
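These spikes can be observed directly by timing each insert individually. A rough sketch (the exact numbers are machine-dependent, so treat the output as qualitative):

```go
package main

import (
	"fmt"
	"time"
)

// worstInsert fills a map with n entries and returns the duration of
// the single slowest insert and the element index where it occurred.
// The slowest insert typically coincides with a growth step.
func worstInsert(n int) (time.Duration, int) {
	m := make(map[int]int)
	var worst time.Duration
	var at int
	for i := 0; i < n; i++ {
		start := time.Now()
		m[i] = i
		if d := time.Since(start); d > worst {
			worst, at = d, i
		}
	}
	return worst, at
}

func main() {
	worst, at := worstInsert(1_000_000)
	fmt.Printf("worst insert: %v at element %d\n", worst, at)
}
```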

Choosing Key Types: Performance Impact

The key type significantly affects hash computation performance.

Key Type Comparison

Key Type      Hash Speed   Notes
int, int64    Fastest      Fixed-size memory hash, no pointer chasing
string        Fast         Hardware-accelerated (AES-based) hash on most CPUs
struct        Medium       Every field is hashed
[32]byte      Medium       Every byte is hashed

func BenchmarkKeyTypes(b *testing.B) {
    // Test: int key
    b.Run("int_key", func(b *testing.B) {
        m := make(map[int]int)
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            m[i%100000] = i
        }
    })

    // Test: string key
    b.Run("string_key", func(b *testing.B) {
        m := make(map[string]int)
        keys := make([]string, 100000)
        for i := range keys {
            keys[i] = fmt.Sprintf("key_%d", i)
        }
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            m[keys[i%100000]] = i
        }
    })

    // Test: struct key
    b.Run("struct_key", func(b *testing.B) {
        type Key struct {
            A, B int64
        }
        m := make(map[Key]int)
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            m[Key{int64(i), int64(i % 100)}] = i
        }
    })
}

String hashing: Go does not cache string hash values; each lookup rehashes the key. Lookups remain fast because hashing is hardware-accelerated on most platforms and key comparison checks lengths before comparing bytes.

Struct keys: All fields contribute to the hash. More fields = slower hashing.
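Note also that struct keys must be comparable: a struct containing a slice, map, or function field is rejected as a map key at compile time. A small illustration:

```go
package main

import "fmt"

// Comparable: all fields are comparable, so Point works as a map key.
type Point struct{ X, Y int }

// Not comparable: contains a slice, so it cannot be a map key.
// type Path struct{ Points []Point }
// var bad map[Path]int // compile error: invalid map key type Path

func main() {
	seen := make(map[Point]bool)
	seen[Point{1, 2}] = true
	// Equal field values hash and compare equal.
	fmt.Println(seen[Point{1, 2}]) // true
}
```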

Concurrent Map Access: Synchronization Strategies

Go maps are not safe for concurrent writes. You must choose a synchronization strategy.

Strategy 1: sync.RWMutex

Protects the entire map with a read-write lock. Good for maps with many readers and few writers.

type SafeMapRWMutex struct {
    mu sync.RWMutex
    m  map[string]int
}

func (s *SafeMapRWMutex) Get(key string) (int, bool) {
    s.mu.RLock()
    defer s.mu.RUnlock()
    val, ok := s.m[key]
    return val, ok
}

func (s *SafeMapRWMutex) Set(key string, val int) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.m[key] = val
}

Strategy 2: sync.Map

Optimized for workloads where reads vastly outnumber writes, or where goroutines access disjoint key sets. Internally it maintains a lock-free read-only map plus a mutex-protected dirty map, promoting the dirty map to read-only after enough read misses.

func ExampleSyncMap() {
    var m sync.Map

    // Store and Load
    m.Store("user_1", 100)
    val, ok := m.Load("user_1")
    if ok {
        fmt.Println(val.(int))
    }

    // Delete
    m.Delete("user_1")
}

Benchmark: RWMutex vs sync.Map

func BenchmarkConcurrentMapAccess(b *testing.B) {
    // Setup: 90% reads, 10% writes (typical cache workload)
    readRatio := 90

    b.Run("RWMutex", func(b *testing.B) {
        var mu sync.RWMutex
        m := make(map[string]int)
        m["initial"] = 1

        b.RunParallel(func(pb *testing.PB) {
            i := 0
            for pb.Next() {
                key := fmt.Sprintf("key_%d", i%1000)

                if i%100 < readRatio {
                    mu.RLock()
                    _ = m[key]
                    mu.RUnlock()
                } else {
                    mu.Lock()
                    m[key] = i
                    mu.Unlock()
                }
                i++
            }
        })
    })

    b.Run("sync.Map", func(b *testing.B) {
        var m sync.Map
        m.Store("initial", 1)

        b.RunParallel(func(pb *testing.PB) {
            i := 0
            for pb.Next() {
                key := fmt.Sprintf("key_%d", i%1000)

                if i%100 < readRatio {
                    m.Load(key)
                } else {
                    m.Store(key, i)
                }
                i++
            }
        })
    })
}

When to use each:

  • RWMutex: Balanced read/write patterns, or when you need to lock multiple keys atomically
  • sync.Map: Read-heavy workloads (95%+ reads), cache-like patterns

Map vs Switch: Small Lookup Optimization

For small fixed sets of keys, a switch statement may outperform a map lookup.

// Using a map (hash compute + lookup); declared at package level
// so it is built once, not allocated on every call
var statusMap = map[int]string{
    200: "OK",
    301: "Moved Permanently",
    400: "Bad Request",
    404: "Not Found",
    500: "Internal Server Error",
}

func statusByCodeMap(code int) string {
    return statusMap[code]
}

// Using a switch (compiler optimization)
func statusByCodeSwitch(code int) string {
    switch code {
    case 200:
        return "OK"
    case 301:
        return "Moved Permanently"
    case 400:
        return "Bad Request"
    case 404:
        return "Not Found"
    case 500:
        return "Internal Server Error"
    default:
        return "Unknown"
    }
}

// Benchmark
func BenchmarkMapVsSwitch(b *testing.B) {
    codes := []int{200, 404, 500, 301, 400}

    b.Run("map", func(b *testing.B) {
        statusMap := map[int]string{
            200: "OK", 404: "Not Found", 500: "Error",
            301: "Moved", 400: "Bad Request",
        }
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            _ = statusMap[codes[i%len(codes)]]
        }
    })

    b.Run("switch", func(b *testing.B) {
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            code := codes[i%len(codes)]
            switch code {
            case 200:
                _ = "OK"
            case 404:
                _ = "Not Found"
            case 500:
                _ = "Error"
            case 301:
                _ = "Moved"
            case 400:
                _ = "Bad Request"
            }
        }
    })
}

Result: for small fixed sets (roughly 5-10 cases) the switch is typically 2-3x faster, since the compiler can emit a comparison tree or jump table with no hashing, but it is less maintainable. Prefer maps when the key set can change at runtime.

Map Iteration Order

Go deliberately randomizes map iteration order to prevent dependency on ordering.

func DemonstrateMapIterationRandomness() {
    m := map[string]int{
        "a": 1, "b": 2, "c": 3, "d": 4, "e": 5,
    }

    // Each iteration produces different order
    fmt.Print("First pass: ")
    for k := range m {
        fmt.Print(k, " ")
    }

    fmt.Print("\nSecond pass: ")
    for k := range m {
        fmt.Print(k, " ")
    }
}

// Performance: iteration is O(n); the randomized order adds no meaningful cost
func BenchmarkMapIteration(b *testing.B) {
    m := make(map[int]int, 10000)
    for i := 0; i < 10000; i++ {
        m[i] = i
    }

    b.ReportAllocs()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        for k := range m {
            _ = k
        }
    }
}

Reducing Allocations: Map Reuse and clear()

Go 1.21 introduced clear(m), which removes all entries from a map while keeping its allocated buckets, enabling cheap reuse.

// Pre-1.21: delete all keys in a loop
// (keeps allocated buckets, but cannot remove keys that don't
// compare equal to themselves, such as NaN float keys)
func ClearMapOld(m map[string]int) {
    for k := range m {
        delete(m, k)
    }
}

// Go 1.21+: removes every entry, including irreflexive keys,
// and keeps the allocated buckets for reuse
func ClearMapNew(m map[string]int) {
    clear(m)
}

// Benchmark: clear() vs delete loop. The map is refilled each
// iteration outside the timer so every pass clears a full map
// (otherwise all iterations after the first clear an empty map).
func BenchmarkMapClear(b *testing.B) {
    fill := func(m map[string]int) {
        for i := 0; i < 10000; i++ {
            m[fmt.Sprintf("key_%d", i)] = i
        }
    }

    b.Run("delete_loop", func(b *testing.B) {
        m := make(map[string]int, 10000)
        for i := 0; i < b.N; i++ {
            b.StopTimer()
            fill(m)
            b.StartTimer()
            for k := range m {
                delete(m, k)
            }
        }
    })

    b.Run("clear", func(b *testing.B) {
        m := make(map[string]int, 10000)
        for i := 0; i < b.N; i++ {
            b.StopTimer()
            fill(m)
            b.StartTimer()
            clear(m)
        }
    })
}

Performance tip: Reuse maps with clear() to avoid repeated allocations in tight loops.
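The tip above usually takes the shape of a scratch map reused across batches. A minimal sketch (requires Go 1.21+ for clear; countWords is a hypothetical helper, not a standard function):

```go
package main

import "fmt"

// countWords reuses the caller-supplied scratch map instead of
// allocating a fresh one per batch; clear() empties the entries
// while keeping the already-grown buckets.
func countWords(scratch map[string]int, words []string) {
	clear(scratch)
	for _, w := range words {
		scratch[w]++
	}
}

func main() {
	scratch := make(map[string]int, 64)

	batches := [][]string{
		{"go", "map", "go"},
		{"clear", "clear", "clear"},
	}
	for _, batch := range batches {
		countWords(scratch, batch)
		fmt.Println(len(scratch), scratch["go"], scratch["clear"])
	}
	// First batch prints 2 2 0, second prints 1 0 3:
	// the map is reused, not reallocated, between batches.
}
```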

Map of Pointers vs Map of Values: GC Impact

Storing pointers in maps increases garbage collection overhead.

// Lookup speed is similar either way; the pointer variant's real
// cost appears in GC mark time, since each *User is a separate
// heap object the collector must trace
type User struct {
    ID   int
    Name string
}

func BenchmarkMapValueType(b *testing.B) {
    b.Run("map_values", func(b *testing.B) {
        m := make(map[int]User, 1000)
        for i := 0; i < 1000; i++ {
            m[i] = User{ID: i, Name: fmt.Sprintf("user_%d", i)}
        }

        b.ReportAllocs()
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            _ = m[i%1000]
        }
    })

    b.Run("map_pointers", func(b *testing.B) {
        m := make(map[int]*User, 1000)
        for i := 0; i < 1000; i++ {
            m[i] = &User{ID: i, Name: fmt.Sprintf("user_%d", i)}
        }

        b.ReportAllocs()
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            _ = m[i%1000]
        }
    })
}

Guidelines:

  • Small structs (under 128 bytes): Use values directly
  • Large structs: Use pointers to avoid copying
  • Memory-sensitive workloads: Consider GC scanning overhead of pointers
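One related caveat when choosing: map elements are not addressable, so a struct stored by value cannot be modified in place, while a pointer allows direct mutation:

```go
package main

import "fmt"

type User struct {
	ID   int
	Name string
}

func main() {
	byValue := map[int]User{1: {ID: 1, Name: "old"}}
	// byValue[1].Name = "new" // compile error: map element is not addressable
	u := byValue[1] // copy out...
	u.Name = "new"
	byValue[1] = u // ...modify, and reassign

	byPtr := map[int]*User{1: {ID: 1, Name: "old"}}
	byPtr[1].Name = "new" // fine: mutate through the pointer

	fmt.Println(byValue[1].Name, byPtr[1].Name) // new new
}
```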

Alternatives: Beyond Standard Maps

Swiss Table Implementation

Swiss tables store per-slot control bytes in small groups that can be probed with SIMD instructions, which speeds up lookups, particularly for large string-keyed maps. (As of Go 1.24 the built-in map itself uses a Swiss-table design; on earlier versions, third-party implementations are available.)

// github.com/dolthub/swiss - a third-party Swiss-table map
import "github.com/dolthub/swiss"

func CompareSwissVsStandard() {
    // Standard map
    m := make(map[string]int)
    m["key"] = 42

    // Swiss table (similar operations, via Put/Get methods)
    sm := swiss.NewMap[string, int](0)
    sm.Put("key", 42)
}

Swiss tables excel at:

  • Large maps with string keys
  • Maps with poor hash distributions
  • Memory efficiency

Using Arrays as Maps for Small Integers

For keys in a known small range, arrays are faster than maps.

// Map-like access with array performance
type IntKeyMap struct {
    data [256]interface{}
}

func (m *IntKeyMap) Get(key int) (interface{}, bool) {
    if key < 0 || key >= 256 {
        return nil, false
    }
    val := m.data[key]
    // Caveat: a stored nil value is indistinguishable from an absent key
    return val, val != nil
}

func (m *IntKeyMap) Set(key int, val interface{}) {
    if key >= 0 && key < 256 {
        m.data[key] = val
    }
}

Comprehensive Benchmarks: Map Operations at Scale

func BenchmarkMapOperationsAtScale(b *testing.B) {
    sizes := []int{100, 1000, 10000, 1000000}

    for _, size := range sizes {
        b.Run(fmt.Sprintf("lookup_%d", size), func(b *testing.B) {
            m := make(map[int]int, size)
            for i := 0; i < size; i++ {
                m[i] = i * 2
            }

            b.ResetTimer()
            for i := 0; i < b.N; i++ {
                _ = m[i%size]
            }
        })

        b.Run(fmt.Sprintf("insert_%d", size), func(b *testing.B) {
            m := make(map[int]int, size)
            b.ResetTimer()
            for i := 0; i < b.N; i++ {
                m[i%size] = i
            }
        })

        b.Run(fmt.Sprintf("delete_%d", size), func(b *testing.B) {
            m := make(map[int]int, size)
            for i := 0; i < size; i++ {
                m[i] = i
            }

            b.ResetTimer()
            for i := 0; i < b.N; i++ {
                delete(m, i%size)
                m[i%size] = i // reinsert so the next delete finds the key
            }
        })
    }
}

Summary: Map Performance Checklist

  • Preallocate maps when you know approximate size
  • Use appropriate key types: int is fastest, then string, then struct
  • Avoid maps in tight loops if possible
  • Prefer sync.Map for read-heavy concurrent access
  • Use switch for small fixed key sets (5-10 entries)
  • Clear maps with clear() instead of deleting keys
  • Prefer values over pointers for small structs
  • Consider alternatives (Swiss tables, arrays) for specialized use cases
  • Monitor GC pressure when storing many pointers in maps
