Reflection Performance
Deep dive into Go's reflection package, performance costs, method dispatch via reflection, and strategies for avoiding reflection overhead in performance-critical code.
Introduction
Reflection allows inspecting and manipulating types and values at runtime. It's powerful but expensive. This article explores what reflection does, where its costs come from, and practical techniques to minimize reflection overhead in your Go applications.
What Reflection Does
The reflect package provides runtime type inspection:
import (
"fmt"
"reflect"
)
type Person struct {
Name string
Age int
}
func inspectType(v interface{}) {
t := reflect.TypeOf(v)
fmt.Printf("Type: %v\n", t)
fmt.Printf("Kind: %v\n", t.Kind())
if t.Kind() == reflect.Struct {
for i := 0; i < t.NumField(); i++ {
field := t.Field(i)
fmt.Printf(" Field %d: %s (%v)\n", i, field.Name, field.Type)
}
}
}
func inspectValue(v interface{}) {
val := reflect.ValueOf(v)
fmt.Printf("Value: %v\n", val)
fmt.Printf("Can Set: %v\n", val.CanSet())
}

Reflection enables:
- Type introspection
- Value inspection and modification
- Function and method invocation
- Struct field access by name
- Dynamic type assertion
But each of these operations has overhead.
Cost of reflect.TypeOf and reflect.ValueOf
TypeOf: Extracting Type Information
func TypeOf(i interface{}) Type

TypeOf returns cached type information. It's relatively cheap:
// Simplified
func TypeOf(i interface{}) Type {
if i == nil {
return nil
}
eface := (*eface)(unsafe.Pointer(&i))
return toType(eface._type)
}

No allocation. It just extracts the type pointer from the interface header and returns it.
ValueOf: Creating a reflect.Value
func ValueOf(i interface{}) Value

ValueOf is more expensive:
// Simplified
func ValueOf(i interface{}) Value {
if i == nil {
return Value{}
}
// The argument is conservatively forced to escape, because the
// returned Value (which points into it) may outlive this call.
escapes(i)
return unpackEface(i)
}

In many cases, ValueOf causes the value to escape to the heap, which costs an allocation and a copy. This is the expensive part.
Benchmark: TypeOf vs ValueOf
package main
import (
"reflect"
"testing"
)
type Data struct {
A, B, C, D, E int64
}
func BenchmarkTypeOf(b *testing.B) {
d := Data{1, 2, 3, 4, 5}
b.ReportAllocs()
for i := 0; i < b.N; i++ {
_ = reflect.TypeOf(d)
}
}
func BenchmarkValueOf(b *testing.B) {
d := Data{1, 2, 3, 4, 5}
b.ReportAllocs()
for i := 0; i < b.N; i++ {
_ = reflect.ValueOf(d)
}
}
func BenchmarkValueOfEscape(b *testing.B) {
d := Data{1, 2, 3, 4, 5}
b.ReportAllocs()
for i := 0; i < b.N; i++ {
// Large struct escapes
v := reflect.ValueOf(d)
_ = v
}
}
// Put the benchmarks in a _test.go file and run:
//   go test -bench=. -benchmem

Typical results:
BenchmarkTypeOf-8 500000000 2.1 ns/op 0 B/op
BenchmarkValueOf-8 100000000 15 ns/op 48 B/op (escapes to heap)
BenchmarkValueOfEscape-8    50000000    30 ns/op   64 B/op

ValueOf is 10-15x slower than TypeOf and causes allocations.
reflect.Value Methods: Field and Method Access
Accessing fields and calling methods via reflection has significant overhead.
Field Access: .Field(i) and .FieldByName(name)
func BenchmarkFieldAccess(b *testing.B) {
p := Person{Name: "Alice", Age: 30}
v := reflect.ValueOf(p)
b.Run("DirectAccess", func(b *testing.B) {
b.ReportAllocs()
var sum int
for i := 0; i < b.N; i++ {
sum += p.Age
}
_ = sum
})
b.Run("FieldByIndex", func(b *testing.B) {
b.ReportAllocs()
fieldAge := 1 // Index of Age field
var sum int
for i := 0; i < b.N; i++ {
sum += int(v.Field(fieldAge).Int())
}
_ = sum
})
b.Run("FieldByName", func(b *testing.B) {
b.ReportAllocs()
var sum int
for i := 0; i < b.N; i++ {
sum += int(v.FieldByName("Age").Int())
}
_ = sum
})
}

Typical results:
BenchmarkFieldAccess/DirectAccess-8 2000000000 0.5 ns/op
BenchmarkFieldAccess/FieldByIndex-8 200000000 5.5 ns/op (10x slower)
BenchmarkFieldAccess/FieldByName-8     10000000    150 ns/op (300x slower!)

- Direct access: ~0.5 ns (a single load, often straight from a register)
- Field by index: ~5 ns (bounds check + offset calculation)
- Field by name: ~150 ns (linear scan over fields with string comparison)
Always cache field indices!
Method Call via Reflection: .Call()
This is where reflection becomes very expensive:
type Calculator struct{}
func (c Calculator) Add(a, b int) int {
return a + b
}
func BenchmarkMethodCall(b *testing.B) {
calc := Calculator{}
b.Run("DirectCall", func(b *testing.B) {
b.ReportAllocs()
var sum int
for i := 0; i < b.N; i++ {
sum += calc.Add(5, 3)
}
_ = sum
})
b.Run("ReflectCall", func(b *testing.B) {
v := reflect.ValueOf(calc)
methodAdd := v.MethodByName("Add")
b.ReportAllocs()
var sum int
for i := 0; i < b.N; i++ {
args := []reflect.Value{
reflect.ValueOf(5),
reflect.ValueOf(3),
}
ret := methodAdd.Call(args)
sum += int(ret[0].Int())
}
_ = sum
})
}

Typical results:
BenchmarkMethodCall/DirectCall-8 1000000000 1.0 ns/op
BenchmarkMethodCall/ReflectCall-8    1000000    1500 ns/op (1500x slower!)

Reflection method calls are 1000-2000x slower than direct calls.
The overhead comes from:
- Allocating the []reflect.Value argument slice
- Boxing each argument into a reflect.Value
- Looking up the method's function pointer
- Calling through the function pointer (no inlining possible)
- Unboxing the return values
Why Reflection Method Calls Are So Slow
// Direct call
sum := calc.Add(5, 3) // ~1 ns
// Reflection call
methodAdd.Call([]reflect.Value{
reflect.ValueOf(5), // allocation + boxing
reflect.ValueOf(3), // allocation + boxing
}) // slice allocation + indirect call
// Returns []reflect.Value
// Extract result // unboxing
// Total: ~1500 ns

Type Assertion vs Reflection
When you know a possible concrete type, use type assertion instead of reflection:
// Slow: reflection
func typeReflection(i interface{}) int {
v := reflect.ValueOf(i)
if v.Kind() == reflect.Int {
return int(v.Int())
}
return 0
}
// Fast: type assertion
func typeAssertion(i interface{}) int {
if v, ok := i.(int); ok {
return v
}
return 0
}
func BenchmarkTypeChecking(b *testing.B) {
i := interface{}(42)
b.Run("Reflection", func(b *testing.B) {
b.ReportAllocs()
var sum int
for n := 0; n < b.N; n++ {
sum += typeReflection(i)
}
_ = sum
})
b.Run("TypeAssertion", func(b *testing.B) {
b.ReportAllocs()
var sum int
for n := 0; n < b.N; n++ {
sum += typeAssertion(i)
}
_ = sum
})
}

Typical results:
BenchmarkTypeChecking/Reflection-8 50000000 35 ns/op
BenchmarkTypeChecking/TypeAssertion-8    1000000000    1.5 ns/op (23x faster!)

Struct Tag Lookup: Runtime String Parsing
Many libraries (encoding/json, database/sql) use struct tags. Tag lookup happens at runtime:
type User struct {
Name string `json:"name"`
Age int `json:"age"`
}
func getJSONTags(t reflect.Type) {
for i := 0; i < t.NumField(); i++ {
field := t.Field(i)
tag := field.Tag.Get("json") // String parsing!
fmt.Printf("Field %s: JSON tag = %s\n", field.Name, tag)
}
}

This is expensive because:

- Each Tag.Get() call parses the raw tag string
- Tags are re-parsed on every lookup; nothing is cached for you
- String comparison is involved at each step
Solution: Cache tag information at initialization.
Example: Caching Struct Field Information
package main
import (
"reflect"
"strings"
)
type FieldInfo struct {
Index int
JSONTag string
}
type StructMetadata struct {
Fields map[string]FieldInfo
}
// Note: this cache is not goroutine-safe. Guard it with a sync.RWMutex
// (or use sync.Map) if GetMetadata can be called concurrently.
var (
metadataCache = make(map[reflect.Type]*StructMetadata)
)
func GetMetadata(t reflect.Type) *StructMetadata {
if meta, ok := metadataCache[t]; ok {
return meta
}
meta := &StructMetadata{
Fields: make(map[string]FieldInfo),
}
for i := 0; i < t.NumField(); i++ {
field := t.Field(i)
jsonTag := field.Tag.Get("json")
if jsonTag == "" {
jsonTag = field.Name
} else {
// Handle json:"fieldName,omitempty"
parts := strings.Split(jsonTag, ",")
jsonTag = parts[0]
}
meta.Fields[field.Name] = FieldInfo{
Index: i,
JSONTag: jsonTag,
}
}
metadataCache[t] = meta
return meta
}
func main() {
type Person struct {
Name string `json:"name"`
Age int `json:"age,omitempty"`
}
meta := GetMetadata(reflect.TypeOf(Person{}))
for fieldName, info := range meta.Fields {
println("Field:", fieldName, "JSONTag:", info.JSONTag)
}
}

encoding/json and the Reflection Problem
The standard encoding/json package relies heavily on reflection. This is a major reason JSON marshaling and unmarshaling are comparatively slow:
func BenchmarkJSONMarshal(b *testing.B) {
type Person struct {
Name string
Age int
}
p := Person{Name: "Alice", Age: 30}
b.ReportAllocs()
for i := 0; i < b.N; i++ {
_, _ = json.Marshal(p)
}
}

Typical results:

BenchmarkJSONMarshal-8    1000000    1200 ns/op   200 B/op

1200 nanoseconds for a simple struct! Much of that time goes to reflection-driven field walking and the allocations it causes.
Alternatives to encoding/json
For performance-critical applications, consider:
- easyjson — Code generation for JSON marshaling
- sonic — Fast JSON parser using SIMD
- jsoniter — API-compatible, faster than encoding/json
- protobuf — Binary format, very fast
Example with easyjson:
//go:generate easyjson -all struct.go
type Person struct {
Name string `json:"name"`
Age int `json:"age"`
}
// After code generation, marshal is much faster
p := Person{Name: "Alice", Age: 30}
data, _ := p.MarshalJSON()

Generated marshalers avoid reflection entirely and are typically several times faster than encoding/json.
go generate and Code Generation
Instead of reflection at runtime, generate type-specific code:
Example: Code-Generated Marshaler
// Original struct
type Point struct {
X, Y float64
}
// Code-generated marshaler (generated once)
func (p Point) MarshalJSON() ([]byte, error) {
var buf strings.Builder
buf.WriteString(`{"X":`)
buf.WriteString(strconv.FormatFloat(p.X, 'f', -1, 64))
buf.WriteString(`,"Y":`)
buf.WriteString(strconv.FormatFloat(p.Y, 'f', -1, 64))
buf.WriteString(`}`)
return []byte(buf.String()), nil
}

This is ordinary compiled code, not reflection, so it's fast.
Tools for code generation:
- stringer — Generate String() methods
- jsonenums — Generate JSON marshalers for enums
- protobuf — Generate serialization code
- easyjson — Generate JSON marshalers
- sqlc — Generate database code
reflect.Type Caching Pattern
Always cache reflect.Type to avoid repeated lookups:
// Slow: repeated TypeOf calls
func Process(items []interface{}) {
for _, item := range items {
t := reflect.TypeOf(item) // Repeated work!
if t.Kind() == reflect.Int {
// ...
}
}
}
// Fast: cache the type
func Process(items []interface{}, itemType reflect.Type) {
for _, item := range items {
if itemType.Kind() == reflect.Int {
// ...
}
}
}

Precomputed Field Offsets and unsafe Pointer Arithmetic
For ultra-high-performance code that needs reflection, use unsafe.Pointer arithmetic:
type Point struct {
X float64 // offset 0
Y float64 // offset 8
Z float64 // offset 16
}
// Slow
func GetX(p interface{}) float64 {
v := reflect.ValueOf(p)
return v.FieldByName("X").Float()
}
// Fast: use unsafe pointer arithmetic
func GetXUnsafe(p *Point) float64 {
return *(*float64)(unsafe.Pointer(p))
}
func GetYUnsafe(p *Point) float64 {
// 8 is the offset of Y; unsafe.Offsetof(p.Y) is a safer way to get it
return *(*float64)(unsafe.Pointer(uintptr(unsafe.Pointer(p)) + 8))
}
func BenchmarkFieldAccess(b *testing.B) {
p := &Point{X: 1.0, Y: 2.0, Z: 3.0}
pInterface := interface{}(p)
b.Run("Reflection", func(b *testing.B) {
b.ReportAllocs()
var sum float64
for i := 0; i < b.N; i++ {
sum += GetX(pInterface)
}
_ = sum
})
b.Run("UnsafePointer", func(b *testing.B) {
b.ReportAllocs()
var sum float64
for i := 0; i < b.N; i++ {
sum += GetXUnsafe(p)
}
_ = sum
})
}

Warning: unsafe.Pointer arithmetic is error-prone and bypasses Go's safety guarantees. Prefer unsafe.Offsetof over hard-coded offsets, and only reach for unsafe when profiling shows it is absolutely necessary.
Generics as Reflection Replacement (Go 1.18+)
Go 1.18 introduced generics, which can replace many reflection use cases:
Before: Reflection-Based Container
type Container struct {
items []interface{}
lock sync.RWMutex
}
func (c *Container) Add(item interface{}) {
c.lock.Lock()
c.items = append(c.items, item)
c.lock.Unlock()
}
func (c *Container) Get(i int) interface{} {
c.lock.RLock()
defer c.lock.RUnlock()
return c.items[i]
}

After: Generic Container
type Container[T any] struct {
items []T
lock sync.RWMutex
}
func (c *Container[T]) Add(item T) {
c.lock.Lock()
c.items = append(c.items, item)
c.lock.Unlock()
}
func (c *Container[T]) Get(i int) T {
c.lock.RLock()
defer c.lock.RUnlock()
return c.items[i]
}
// Type-safe, no reflection, compile-time type checking
c := &Container[int]{}
c.Add(42)
v := c.Get(0) // v is int, no type assertion needed

The generic version is type-safe and fast (no reflection).
Comprehensive Benchmark: All Reflection Methods
package main
import (
"reflect"
"testing"
)
type Data struct {
Value int
}
func (d Data) GetValue() int {
return d.Value
}
func BenchmarkReflectionMethods(b *testing.B) {
d := Data{Value: 42}
v := reflect.ValueOf(d)
t := reflect.TypeOf(d)
b.Run("DirectCall", func(b *testing.B) {
b.ReportAllocs()
var sum int
for i := 0; i < b.N; i++ {
sum += d.GetValue()
}
_ = sum
})
b.Run("ReflectCall", func(b *testing.B) {
method := v.MethodByName("GetValue")
b.ReportAllocs()
var sum int
for i := 0; i < b.N; i++ {
ret := method.Call(nil)
sum += int(ret[0].Int())
}
_ = sum
})
b.Run("DirectField", func(b *testing.B) {
b.ReportAllocs()
var sum int
for i := 0; i < b.N; i++ {
sum += d.Value
}
_ = sum
})
b.Run("ReflectFieldByIndex", func(b *testing.B) {
b.ReportAllocs()
var sum int
for i := 0; i < b.N; i++ {
sum += int(v.Field(0).Int())
}
_ = sum
})
b.Run("TypeOf", func(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
_ = reflect.TypeOf(d)
}
})
b.Run("ValueOf", func(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
_ = reflect.ValueOf(d)
}
})
b.Run("TypeAssertion", func(b *testing.B) {
i := interface{}(d)
b.ReportAllocs()
var sum int
for n := 0; n < b.N; n++ {
if v, ok := i.(Data); ok {
sum += v.Value
}
}
_ = sum
})
}
// Put the benchmarks in a _test.go file and run:
//   go test -bench=. -benchmem

Typical results:
BenchmarkReflectionMethods/DirectCall-8 1000000000 1.0 ns/op
BenchmarkReflectionMethods/ReflectCall-8 1000000 1500 ns/op
BenchmarkReflectionMethods/DirectField-8 1000000000 0.5 ns/op
BenchmarkReflectionMethods/ReflectFieldByIndex-8 200000000 5.5 ns/op
BenchmarkReflectionMethods/TypeOf-8 500000000 2.1 ns/op
BenchmarkReflectionMethods/ValueOf-8 100000000 15 ns/op
BenchmarkReflectionMethods/TypeAssertion-8    1000000000    1.5 ns/op

Summary and Best Practices
Reflection costs:
- TypeOf(): cheap (~2 ns)
- ValueOf(): moderate (~15 ns, may allocate)
- .Field(i): moderate (~5 ns)
- .FieldByName(): expensive (~150 ns)
- .Call(): very expensive (~1500 ns)
Best practices:
- Avoid reflection in hot paths — Move it to initialization
- Cache type and metadata information — Don't recompute on every call
- Use type assertions instead of reflection for type checking (~23x faster)
- Use code generation instead of reflection for serialization (often several times faster)
- Consider generics (Go 1.18+) for type-safe alternatives to interface{}
- Profile before optimizing — measure to confirm reflection is the bottleneck
Reflection is powerful but comes at a cost. In performance-critical code, prefer compile-time solutions: generics, code generation, and type assertions.