Slice
Slices are one of the most commonly used data structures in Go. They are dynamic arrays that can automatically expand as needed. The underlying structure of a slice is an array. When the capacity of the slice is insufficient to accommodate new elements, Go automatically creates a new, larger array and copies the existing data to the new array.
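To see this growth in action, one can append elements in a loop and print len and cap whenever the backing array is reallocated. A minimal sketch (the exact capacities printed depend on the Go version and the element type):

```go
package main

import "fmt"

func main() {
	// Append ints one at a time and report every reallocation of the
	// backing array by watching cap change.
	var s []int
	prevCap := cap(s)
	for i := 0; i < 2000; i++ {
		s = append(s, i)
		if cap(s) != prevCap {
			fmt.Printf("len=%4d cap=%4d\n", len(s), cap(s))
			prevCap = cap(s)
		}
	}
}
```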
Initial Capacity: Take strings.Builder as an example. Initially, its buf slice is empty, with a capacity of 0.
Expansion Conditions
When WriteString is called, expansion is triggered if the remaining capacity cannot hold the new data (len(s) > cap(buf)-len(buf)).
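As an illustration, this can be observed from the outside through Builder.Cap, which reports the capacity of the underlying buf slice. A small sketch (the printed capacities vary by Go version):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	var b strings.Builder
	prevCap := b.Cap()
	for i := 0; i < 200; i++ {
		// Each WriteString appends to buf; growth happens only when the
		// remaining capacity cannot hold the new data.
		b.WriteString("hello")
		if b.Cap() != prevCap {
			fmt.Printf("len=%4d cap=%4d\n", b.Len(), b.Cap())
			prevCap = b.Cap()
		}
	}
}
```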
Expansion Strategies
The expansion itself is carried out by the runtime's slice growth code (the append inside WriteString ends up in growslice), and its strategy differs slightly across Go versions:
Go 1.17, in src/runtime/slice.go (excerpt from growslice; cap is the requested capacity, old.cap the current one):

```go
newcap := old.cap
doublecap := newcap + newcap
if cap > doublecap {
	newcap = cap
} else {
	if old.cap < 1024 {
		newcap = doublecap
	} else {
		// Check 0 < newcap to detect overflow
		// and prevent an infinite loop.
		for 0 < newcap && newcap < cap {
			newcap += newcap / 4
		}
		// Set newcap to the requested cap when
		// the newcap calculation overflowed.
		if newcap <= 0 {
			newcap = cap
		}
	}
}
```
- If the required capacity cap is greater than double the current capacity (doublecap), expand directly to cap.
- If the required capacity cap is not greater than doublecap, a threshold of 1024 applies:
  - If the old capacity is less than 1024, double it.
  - If the old capacity is 1024 or more, grow it by 1/4 per iteration until it is at least cap (see the sketch after this list).
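A minimal re-statement of these Go 1.17 rules as a standalone helper (go117NewCap is a made-up name for illustration; the overflow checks and the roundupsize step described later are omitted):

```go
package main

import "fmt"

// go117NewCap mirrors the Go 1.17 rules: jump straight to the requested
// capacity if it exceeds 2x, double below 1024, otherwise grow by 1/4 steps.
func go117NewCap(oldCap, needed int) int {
	doublecap := oldCap + oldCap
	if needed > doublecap {
		return needed
	}
	if oldCap < 1024 {
		return doublecap
	}
	newcap := oldCap
	for newcap < needed {
		newcap += newcap / 4
	}
	return newcap
}

func main() {
	// Grow by a single element past the current capacity.
	for _, c := range []int{4, 512, 1024, 2048} {
		fmt.Printf("oldCap=%d -> newCap=%d\n", c, go117NewCap(c, c+1))
	}
}
```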
Go 1.18, in src/runtime/slice.go (excerpt from growslice):

```go
newcap := old.cap
doublecap := newcap + newcap
if cap > doublecap {
	newcap = cap
} else {
	const threshold = 256
	if old.cap < threshold {
		newcap = doublecap
	} else {
		// Check 0 < newcap to detect overflow
		// and prevent an infinite loop.
		for 0 < newcap && newcap < cap {
			// Transition from growing 2x for small slices
			// to growing 1.25x for large slices. This formula
			// gives a smooth-ish transition between the two.
			newcap += (newcap + 3*threshold) / 4
		}
		// Set newcap to the requested cap when
		// the newcap calculation overflowed.
		if newcap <= 0 {
			newcap = cap
		}
	}
}
```
- If the required capacity cap is greater than double the current capacity (doublecap), expand directly to cap.
- If the required capacity cap is not greater than doublecap, a threshold of 256 applies:
  - If the old capacity is less than 256, double it.
  - If the old capacity is 256 or more, grow it by (newcap + 3*256)/4 per iteration, i.e., by a quarter of the running capacity plus 192, until it is at least cap (see the sketch after this list).
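To make the "smooth-ish transition" concrete, a single growth step of this formula can be computed for a few capacities (go118GrowStep is a made-up name; the real loop repeats the step until the requested capacity is reached):

```go
package main

import "fmt"

// go118GrowStep applies one iteration of the Go 1.18 growth formula.
func go118GrowStep(oldCap int) int {
	const threshold = 256
	if oldCap < threshold {
		return oldCap * 2
	}
	return oldCap + (oldCap+3*threshold)/4
}

func main() {
	// At the threshold the formula still doubles (256 -> 512); as the
	// capacity grows, the factor eases toward 1.25x.
	for _, c := range []int{128, 256, 512, 1024, 4096} {
		n := go118GrowStep(c)
		fmt.Printf("oldCap=%4d -> %4d (x%.2f)\n", c, n, float64(n)/float64(c))
	}
}
```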
Go 1.22, in src/runtime/slice.go: the strategy is essentially unchanged, but the capacity calculation has been extracted out of growslice into its own function, nextslicecap:

```go
// nextslicecap computes the next appropriate slice length.
func nextslicecap(newLen, oldCap int) int {
	newcap := oldCap
	doublecap := newcap + newcap
	if newLen > doublecap {
		return newLen
	}

	const threshold = 256
	if oldCap < threshold {
		return doublecap
	}
	for {
		// Transition from growing 2x for small slices
		// to growing 1.25x for large slices. This formula
		// gives a smooth-ish transition between the two.
		newcap += (newcap + 3*threshold) >> 2

		// We need to check `newcap >= newLen` and whether `newcap` overflowed.
		// newLen is guaranteed to be larger than zero, hence
		// when newcap overflows then `uint(newcap) > uint(newLen)`.
		// This allows to check for both with the same comparison.
		if uint(newcap) >= uint(newLen) {
			break
		}
	}

	// Set newcap to the requested cap when
	// the newcap calculation overflowed.
	if newcap <= 0 {
		return newLen
	}
	return newcap
}
```
Note: the capacity calculated above is only the target capacity, not the capacity that is actually allocated. During allocation the runtime performs memory alignment and size-class optimization: the actual capacity is determined by roundupsize, which rounds the required number of bytes up to one of the memory allocator's size classes. See the source code for details.
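For example, the effect of roundupsize can be seen by growing a small []byte: the rules above ask for a target of 10 bytes, but the allocator rounds the request up to a size class. A sketch (the printed value may differ across Go versions and platforms):

```go
package main

import "fmt"

func main() {
	// len == cap == 5, so appending one byte forces growth.
	s := make([]byte, 5, 5)
	s = append(s, 1)
	// The doubling rule asks for capacity 10, but the allocation is
	// rounded up to a size class, so cap is typically 16 here.
	fmt.Println(len(s), cap(s))
}
```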
Map
Maps in Go are implemented as hash tables, used to store key-value pairs. When the number of elements in the map increases to a certain extent, Go automatically expands the map to maintain the performance of the hash table.
```go
// src/runtime/hashmap.go, in mapassign

// Trigger for expansion
if !h.growing() && (overLoadFactor(int64(h.count), h.B) || tooManyOverflowBuckets(h.noverflow, h.B)) {
	hashGrow(t, h)
}

// Load factor exceeds 6.5
func overLoadFactor(count int64, B uint8) bool {
	return count >= bucketCnt && float32(count) >= loadFactor*float32((uint64(1)<<B))
}

// Too many overflow buckets
func tooManyOverflowBuckets(noverflow uint16, B uint8) bool {
	if B < 16 {
		return noverflow >= uint16(1)<<B
	}
	return noverflow >= 1<<15
}
```
Expansion Conditions
- Load Factor: Go's map uses a load factor to decide when to grow. The threshold is 6.5: once the number of elements exceeds 6.5 times the number of buckets (and the map holds at least one full bucket of 8 elements), the map grows.
- Too Many Overflow Buckets:
  - When B < 15, i.e., the total number of buckets is less than 2^15, growth is triggered once the number of overflow buckets reaches 2^B.
  - When B >= 15, i.e., the total number of buckets 2^B is at least 2^15, growth is triggered once the number of overflow buckets reaches 2^15.
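To put numbers on the load-factor condition, a small sketch that prints the approximate element count at which a map with 2^B buckets grows:

```go
package main

import "fmt"

func main() {
	// With a load factor of 6.5, a map with 2^B buckets grows roughly
	// once it holds more than 6.5 * 2^B elements.
	const loadFactor = 6.5
	for B := 3; B <= 6; B++ {
		buckets := 1 << B
		fmt.Printf("B=%d buckets=%2d grow at ~%d elements\n", B, buckets, int(loadFactor*float64(buckets)))
	}
}
```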
Expansion Strategies
Both conditions trigger expansion, but the strategies differ.
- Too many elements for the bucket count (condition 1): increase B by 1, i.e., double the number of buckets. The elements are not moved right away; they stay in the old buckets and are migrated in portions as the old buckets are accessed, until migration completes and the old buckets can be garbage collected.
- Not that many elements, but too many sparsely used overflow buckets (condition 2): allocate a new bucket array of the same size and move elements into it, i.e., equal-capacity expansion. This rearranges the keys more compactly, pulling entries out of overflow buckets and improving bucket utilization. In extreme cases, however, repeated hash collisions can still produce many overflow buckets, degrading operations toward O(n).
```go
func hashGrow(t *maptype, h *hmap) {
	// B+1 is equivalent to doubling the space
	bigger := uint8(1)

	// Corresponds to condition 2
	if !overLoadFactor(int64(h.count), h.B) {
		// Perform equal-capacity expansion, so B remains unchanged
		bigger = 0
		h.flags |= sameSizeGrow
	}
	// The current buckets become the old buckets
	oldbuckets := h.buckets
	// Allocate new buckets space
	newbuckets, nextOverflow := makeBucketArray(t, h.B+bigger)

	flags := h.flags &^ (iterator | oldIterator)
	if h.flags&iterator != 0 {
		flags |= oldIterator
	}
	// Commit the grow action
	h.B += bigger
	h.flags = flags
	h.oldbuckets = oldbuckets
	h.buckets = newbuckets
	// Migration progress is 0
	h.nevacuate = 0
	// Number of overflow buckets is 0
	h.noverflow = 0

	// ...
}
```
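Because growth allocates a new bucket array and then migrates entries incrementally, it has a cost that can be avoided when the final size is known up front: passing a size hint to make lets the runtime pick a large enough B immediately. A small sketch:

```go
package main

import "fmt"

func main() {
	const n = 100000

	// Without a hint, the map starts small and has to grow (and migrate
	// entries) several times while we insert.
	grown := map[int]int{}
	for i := 0; i < n; i++ {
		grown[i] = i
	}

	// With a hint, make sizes the bucket array for n elements up front,
	// so these inserts avoid incremental growth.
	presized := make(map[int]int, n)
	for i := 0; i < n; i++ {
		presized[i] = i
	}

	fmt.Println(len(grown), len(presized))
}
```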