
Exploring Generic Types in Go 2.0

Mike Christensen
January 9th, 2021 · 6 min read

If you’ve ever written any JavaScript, then you’ve likely used a utility library such as Lodash or Underscore. These handy libraries are very flexible thanks to JavaScript’s dynamic typing. For example, we can easily call Lodash’s find method with either an array of numbers or an array of strings:

_.find([1, 2, 3], i => i % 2 === 0) // => 2
_.find(['one', 'two', 'three'], i => i.length === 5) // => 'three'

This has always been a bit trickier in Go, due to the static type system and lack of generic types.

Nonetheless, similar libraries do exist in the Go ecosystem: one excellent example is go-funk from @thoas. However, restrictions in the Go language mean that trade-offs must be made in order to implement type-agnostic functions. In this post I’d like to explore these trade-offs and describe a coming change to the Go language that will improve the language’s support for this style of programming.

Go 1: Type-safe Implementation

Traditionally, Go programmers have had to write a type-safe implementation of a given function for each type that needs to be supported.

func FindInt(in []int, f func(int) bool) int {
	for i := range in {
		if f(in[i]) {
			return in[i]
		}
	}
	return 0 // nil is not a valid int, so return the zero value
}

func FindString(in []string, f func(string) bool) string {
	for i := range in {
		if f(in[i]) {
			return in[i]
		}
	}
	return "" // likewise, the zero value for string
}

func main() {
	FindInt([]int{1, 2, 3}, func(i int) bool { return i%2 == 0 })                           // => 2
	FindString([]string{"one", "two", "three"}, func(s string) bool { return len(s) == 5 }) // => "three"
}

You can see that the function bodies for FindString and FindInt are identical apart from the types involved. Reimplementing the function for each type leads to code duplication. However, we maintain the type safety we expect from our Go programs.

Go 1: Reflection-based Implementation

How can we get around duplicating code? Go 1.x provides us with a way to write functions that can handle arguments of unknown type, thanks to the empty interface. Since every type implements at least zero methods, the empty interface type can hold values of any type.
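As a tiny illustration (my own sketch, not from the original post), an interface{} variable can hold a value of any type, and a type assertion recovers the concrete value at run-time:

```go
package main

import "fmt"

func main() {
	var v interface{}

	v = 42 // an int fits in the empty interface...
	if n, ok := v.(int); ok {
		fmt.Println("int:", n)
	}

	v = "hello" // ...and so does a string
	if s, ok := v.(string); ok {
		fmt.Println("string:", s)
	}
}
```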

As the Go proverb says:

interface{} says nothing

The implication of this is that the behaviour (given by the underlying type) of an interface{} function argument is unknown at compile-time. Therefore we must inspect the argument at run-time to understand what we can do with it. We can achieve this by using reflection.

func Find(in interface{}, f interface{}) interface{} {
	funcValue := reflect.ValueOf(f)
	funcType := funcValue.Type()
	if funcType.Out(0).Kind() != reflect.Bool {
		panic("f should return a boolean")
	}
	inValue := reflect.ValueOf(in)
	for i := 0; i < inValue.Len(); i++ {
		elem := inValue.Index(i)
		result := funcValue.Call([]reflect.Value{elem})[0].Interface().(bool)
		if result {
			return elem.Interface()
		}
	}
	return nil
}

func main() {
	Find([]int{1, 2, 3}, func(i int) bool { return i%2 == 0 })                        // => 2
	Find([]string{"one", "two", "three"}, func(s string) bool { return len(s) == 5 }) // => "three"
}

We’ve succeeded in defining the Find function only once. Additionally, we can call Find with whatever type arguments we desire.

However, in reducing code duplication we have lost the benefits of compile-time type-checking and instead we must make these checks at run-time. The onus is no longer on the compiler to ensure the type-safety of your program; instead it is on you, the developer.

To illustrate this point, imagine we make a call to Find with an argument of type map[string]int:

func main() {
	Find(map[string]int{
		"one":   1,
		"two":   2,
		"three": 3,
	}, func(i int) bool { return i%2 == 0 })
}

To the unwitting developer, this looks like a perfectly reasonable program: “take this map and return the first even value you happen to come across”. This program will even compile just fine. However, at run-time you’ll be hit with a panic:

panic: reflect: call of reflect.Value.Index on map Value

goroutine 1 [running]:
reflect.Value.Index(0x10d66e0, 0xc000098180, 0x15, 0x0, 0x10d2b20, 0x30, 0x11c14a0)

It turns out that our Find function was implicitly expecting the underlying type of the in interface{} argument to be a slice. When it happened to be a map instead, our call to Index on the reflect value caused a panic.
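One way to soften this failure mode is to validate the argument kinds up front, so that the panic at least names the real mistake. A sketch of that defensive variant (the error messages here are my own, not from any library):

```go
package main

import (
	"fmt"
	"reflect"
)

// Find returns the first element of the slice in for which f returns true.
// It panics with a descriptive message if the arguments have the wrong kinds.
func Find(in interface{}, f interface{}) interface{} {
	inValue := reflect.ValueOf(in)
	if inValue.Kind() != reflect.Slice {
		panic(fmt.Sprintf("Find: expected a slice, got %T", in))
	}
	funcValue := reflect.ValueOf(f)
	if funcValue.Kind() != reflect.Func {
		panic(fmt.Sprintf("Find: expected a function, got %T", f))
	}
	for i := 0; i < inValue.Len(); i++ {
		elem := inValue.Index(i)
		if funcValue.Call([]reflect.Value{elem})[0].Interface().(bool) {
			return elem.Interface()
		}
	}
	return nil
}

func main() {
	fmt.Println(Find([]int{1, 2, 3}, func(i int) bool { return i%2 == 0 }))
}
```

This doesn’t remove the run-time check; it only moves it to the function boundary where the message can be clearer.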

For this reason, code that uses reflection must be very, very well tested. Additionally, your documentation should be absolutely clear about the argument types your function expects to receive. Even an astute developer who reads your implementation is very likely to miss something. As that other old Go proverb goes:

Reflection is never clear.

Additionally, we incur a performance cost when we rely on reflection. Our reflection-based implementation of Find does more work per element and makes more allocations than its type-safe equivalents. We can get an indication of the size of this cost by writing some benchmarks for the Find and FindInt functions:

package main

import (
	"testing"
)

func benchmarkFindInt(i int, b *testing.B) {
	is := make([]int, i)
	for x := range is {
		is[x] = x
	}
	for n := 0; n < b.N; n++ {
		FindInt(is, func(i int) bool { return i == len(is)-1 })
	}
}

func benchmarkFind(i int, b *testing.B) {
	is := make([]int, i)
	for x := range is {
		is[x] = x
	}
	for n := 0; n < b.N; n++ {
		Find(is, func(i int) bool { return i == len(is)-1 })
	}
}

func BenchmarkFindInt10(b *testing.B)    { benchmarkFindInt(10, b) }
func BenchmarkFindInt100(b *testing.B)   { benchmarkFindInt(100, b) }
func BenchmarkFindInt1000(b *testing.B)  { benchmarkFindInt(1000, b) }
func BenchmarkFindInt10000(b *testing.B) { benchmarkFindInt(10000, b) }

func BenchmarkFind10(b *testing.B)    { benchmarkFind(10, b) }
func BenchmarkFind100(b *testing.B)   { benchmarkFind(100, b) }
func BenchmarkFind1000(b *testing.B)  { benchmarkFind(1000, b) }
func BenchmarkFind10000(b *testing.B) { benchmarkFind(10000, b) }

The results on my machine:

❯ go test -bench=. ./...
goos: darwin
goarch: amd64
BenchmarkFindInt10-4      49077010       24.3 ns/op
BenchmarkFindInt100-4      5218797        235 ns/op
BenchmarkFindInt1000-4      558915       2080 ns/op
BenchmarkFindInt10000-4      56361      22674 ns/op
BenchmarkFind10-4           456028       2519 ns/op
BenchmarkFind100-4           49142      23174 ns/op
BenchmarkFind1000-4           5001     240574 ns/op
BenchmarkFind10000-4           498    2374598 ns/op

ok  13.679s

From these results we can see that the reflection-based implementation is two orders of magnitude slower than its type-safe counterpart!

Go 2.0: Introducing Generics

A recent design draft proposes adding type parameters to the Go language, which would provide support for generic programming without depending on the empty interface or reflection.

Instead of describing the proposal in its entirety, I will instead show how the new syntax can be used to write generic implementations of common utility functions such as those provided in gofunk. I encourage you to read the design draft to gain a deeper understanding of how generics might be implemented in Go.

Type Parameters

In the proposed syntax, type parameters appear before regular parameters and are enclosed in square brackets. In the same way that regular parameters have types, type parameters have constraints. A special constraint, any, can be used to match any type.

Using the proposed syntax, we may write a generic implementation of the Find function as:

func Find[T any](s []T, f func(T) bool) T {
	for i := range s {
		if f(s[i]) {
			return s[i]
		}
	}
	var zero T // nil is not a valid value for every T, so return the zero value
	return zero
}

func main() {
	Find([]int{1, 2, 3}, func(i int) bool { return i%2 == 0 })                        // => 2
	Find([]string{"one", "two", "three"}, func(s string) bool { return len(s) == 5 }) // => "three"
}

The function has a single type parameter T with constraint any. The types of the regular function parameters refer to this type parameter. The first argument is a slice of any type that meets the constraints given by T, i.e. anything. Similarly, the second argument is a function that accepts an argument of that same type and returns a boolean.

This new, powerful syntax has several benefits. We have reduced the code duplication incurred when writing multiple type-safe implementations of our function for each type we wish to support. Secondly, we have done so without bloating the function body with complex and unreadable reflection logic. Finally, we have maintained the type-safety of our program. Unlike the empty interface, the usage of type parameters allows us to assert that the type of s must be a slice (not a map!) of T.
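To make that last point concrete, here is a sketch showing that the map misuse from earlier is now rejected by the compiler instead of panicking at run-time (the failing call is left commented out so the snippet compiles):

```go
package main

import "fmt"

// Find returns the first element of s for which f returns true,
// or the zero value of T if none matches.
func Find[T any](s []T, f func(T) bool) T {
	for i := range s {
		if f(s[i]) {
			return s[i]
		}
	}
	var zero T
	return zero
}

func main() {
	fmt.Println(Find([]int{1, 2, 3}, func(i int) bool { return i%2 == 0 }))

	// This is now a compile-time error rather than a run-time panic:
	// a map[string]int is not a []T.
	// Find(map[string]int{"one": 1}, func(i int) bool { return i%2 == 0 })
}
```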

Type Constraints

In the draft proposal, type constraints are simply interface types. Therefore the maximally permissive type constraint any is equivalent to interface{}.

We can apply further restrictions to our type parameters by using more specific interfaces. For example:

type Stringer interface {
	String() string
}

// Stringify calls the String method on each element of s,
// and returns the results.
func Stringify[T Stringer](s []T) (ret []string) {
	for _, v := range s {
		ret = append(ret, v.String())
	}
	return ret
}

In this example, a slice of any type that implements a String() string method can be passed to Stringify.

In some cases it is useful to constrain types by the operators that can be used with them. The built-in comparable constraint, for example, ensures our arguments can be compared using the == and != operators.

// Index returns the index of x in s, or -1 if not found.
func Index[T comparable](s []T, x T) int {
	for i, v := range s {
		// v and x are type T, which has the comparable
		// constraint, so we can use == here.
		if v == x {
			return i
		}
	}
	return -1
}

The proposal defines a mechanism to construct type constraints by providing a type list. This allows us to create a type constraint that accepts only types supporting a given set of operators. Note that an interface containing a type list can only be used as a type constraint, not as an ordinary interface type.

// Ordered is a type constraint that matches any ordered type.
// An ordered type is one that supports the <, <=, >, and >= operators.
type Ordered interface {
	type int, int8, int16, int32, int64,
		uint, uint8, uint16, uint32, uint64, uintptr,
		float32, float64,
		string
}
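With such a constraint in hand, ordered operations become one-liners. Here is a sketch of a generic Max; note that in the generics design as finally released in Go 1.18, the draft's type list became union elements such as ~int | ~string, which is the form used below so that the snippet compiles today:

```go
package main

import "fmt"

// Ordered matches types that support <, <=, > and >=.
// This uses the released union syntax rather than the draft's type list.
type Ordered interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 |
		~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr |
		~float32 | ~float64 | ~string
}

// Max returns the larger of a and b.
func Max[T Ordered](a, b T) T {
	if a > b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Max(3, 7))         // works for ints
	fmt.Println(Max("ant", "bee")) // strings compare lexicographically
}
```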

Generic Types

In addition to generic functions, the proposal also supports generic types. This is achieved by passing type parameters to a type definition.

type Vector[T any] []T

Type arguments are then supplied when the type is instantiated.

var v Vector[int]

These types can have methods that use the constraints:

func (v *Vector[T]) Push(x T) { *v = append(*v, x) }
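Putting the pieces together, the element type is preserved through every operation. A small self-contained usage sketch (repeating the Vector definition from above):

```go
package main

import "fmt"

type Vector[T any] []T

// Push appends x to the vector in place.
func (v *Vector[T]) Push(x T) { *v = append(*v, x) }

func main() {
	var v Vector[int]
	v.Push(1)
	v.Push(2)
	v.Push(3)
	fmt.Println(v, len(v))

	// The element type is enforced at compile time:
	// v.Push("four") would not compile.
	var w Vector[string]
	w.Push("hello")
	fmt.Println(w)
}
```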


In order to get my hands dirty with generics, I set out to implement the go-funk API in Go 2. The result can be found in go-funk-generics. I won’t detail the implementation for each function here, but if you’re interested feel free to check it out (and give it a ⭐!). Hopefully it gives an indication of what a modern utility library might look like if the Go team releases generics support.

One interesting thing I wanted to achieve was the ability to chain methods together like you might in JavaScript. As a toy example:

[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
	.filter(x => x % 2 === 0)
	.map(x => `${x}`)
	.reduce((prev, cur) => `${prev}${cur};`, '')
// => 2;4;6;8;10;

The filter, map and reduce functions can be written as follows in Go 2:

func Filter[T any](s []T, f func(T) bool) []T {
	var r []T
	for _, v := range s {
		if f(v) {
			r = append(r, v)
		}
	}
	return r
}

func Map[T1, T2 any](s []T1, f func(T1) T2) []T2 {
	r := make([]T2, len(s))
	for i, v := range s {
		r[i] = f(v)
	}
	return r
}

func Reduce[T1, T2 any](s []T1, initial T2, f func(T2, T1) T2) T2 {
	r := initial
	for _, v := range s {
		r = f(r, v)
	}
	return r
}

// example usage
x1 := Filter[int]([]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, func(i int) bool {
	return i%2 == 0
})
x2 := Map[int, string](x1, strconv.Itoa)
x3 := Reduce[string, string](x2, "", func(prev, cur string) string {
	return prev + cur + ";"
})
fmt.Println(x3) // => 2;4;6;8;10;

How can we make these functions chainable? One approach is to use a generic type. Then, we can implement methods on each which wrap calls to the relevant functions.

type List[T any] []T

func (l List[T]) Append(e ...T) List[T] {
	return append(l, e...)
}

func (l List[T]) Filter(f func(T) bool) List[T] {
	return Filter(l, f)
}

type Mappable[T1, T2 any] List[T1]

func (m Mappable[T1, T2]) Map(f func(T1) T2) List[T2] {
	// Map can't return a Mappable here: we don't know the types the
	// caller will want to map to next - only the caller does.
	return Map[T1, T2](m, f)
}

func (m Mappable[T1, T2]) Reduce(initializer T2, f func(T2, T1) T2) T2 {
	return Reduce[T1, T2](m, initializer, f)
}

Here, the generic Mappable type implements both Map and Reduce methods. It would therefore be great if Map could return a Mappable too, so that Reduce could be called as part of a chain. Unfortunately this is not possible, as we don't know the target types the caller wants to map to. To get around this, the caller can convert the result of each call to the relevant generic type, which isn't very pretty. If anyone has a brighter idea of how to do this, please reach out on Twitter!

var l List[int]
x := Mappable[string, string](
	Mappable[int, string](
		l.Append(1, 2, 3, 4, 5, 6, 7, 8, 9, 10).
			Filter(func(i int) bool {
				return i%2 == 0
			}),
	).Map(strconv.Itoa),
).Reduce("", func(prev, cur string) string {
	return prev + cur + ";"
})
fmt.Println(x) // => 2;4;6;8;10;


This is new and exciting territory for Go. On the one hand, I love the expressivity of the proposed syntax. It feels great to implement a function only once, with a simple function body unbloated by reflection logic, and call it on any type I wish.

On the other hand, this code is undoubtedly harder to understand and, in my opinion, detracts from the magic of Go’s simplicity. There is a risk that the average quality and readability of the world’s Go code may decrease as this powerful feature is abused.

I wonder how prudent these changes are given that, for the cases where generics are desirable, there are viable intermediate solutions, such as code generation, that solve the problem of hand-writing multiple type-safe implementations.
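To make the code-generation alternative concrete, here is a toy sketch using only the standard library's text/template to emit type-safe Find variants. Real projects typically wire this up with a //go:generate directive and a dedicated tool; everything below, including the generate helper, is my own illustration rather than any particular tool's output:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// findTmpl is a template for one type-safe Find implementation.
const findTmpl = `func Find{{.Name}}(in []{{.Type}}, f func({{.Type}}) bool) ({{.Type}}, bool) {
	for i := range in {
		if f(in[i]) {
			return in[i], true
		}
	}
	var zero {{.Type}}
	return zero, false
}
`

type target struct {
	Name, Type string
}

// generate renders one Find implementation per target type.
func generate(targets []target) string {
	tmpl := template.Must(template.New("find").Parse(findTmpl))
	var buf bytes.Buffer
	for _, t := range targets {
		if err := tmpl.Execute(&buf, t); err != nil {
			panic(err)
		}
	}
	return buf.String()
}

func main() {
	fmt.Print(generate([]target{{"Int", "int"}, {"String", "string"}}))
}
```

The duplication still exists in the generated output, but it is no longer written or maintained by hand.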

Another concern I have is that using methods to constrain types (via interfaces) often adds little, since you could just use the interface itself as a regular parameter type. For example:

type Stringer interface {
	String() string
}

type foo struct {
	x string
}

func (f *foo) String() string {
	return f.x
}

func StringifyGeneric[T Stringer](s []T) (ret []string) {
	for _, v := range s {
		ret = append(ret, v.String())
	}
	return ret
}

func StringifyNormal(s []Stringer) (ret []string) {
	for _, v := range s {
		ret = append(ret, v.String())
	}
	return ret
}

func main() {
	a := &foo{"one"}
	b := &foo{"two"}
	fmt.Println(StringifyGeneric([]*foo{a, b}))
	fmt.Println(StringifyNormal([]Stringer{a, b}))
}

While there are certainly examples of useful method-based type constraints, the fact that this is even possible means people will write code that uses both implementations. As Hyrum’s law says:

With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.

I’d love to hear other opinions on this - as always, you can find me on Twitter. 😇


As @joncalhoun pointed out to me on Twitter, in this example it may be more common to have a slice of foo, which the generic implementation happily accepts while the traditional one requires a little more wrangling. He points out the very interesting possibility that perhaps the compiler could solve this by compiling the traditional implementation to the generic one. Personally, I would be thrilled to see an approach that leverages the power of generics without altering the syntax of the language.

package main

import (
	"fmt"
	"strings"
)

type Stringer interface {
	String() string
}

type Upper interface {
	Up() string
}

type foo struct {
	x string
}

func (f *foo) String() string {
	return f.x
}

func (f *foo) Up() string {
	return strings.ToUpper(f.x)
}

func StringifyGeneric[T Stringer](s []T) (ret []string) {
	for _, v := range s {
		ret = append(ret, v.String())
	}
	return ret
}

func UpifyGeneric[T Upper](s []T) (ret []string) {
	for _, v := range s {
		ret = append(ret, v.Up())
	}
	return ret
}

func StringifyNormal(s []Stringer) (ret []string) {
	for _, v := range s {
		ret = append(ret, v.String())
	}
	return ret
}

func UpifyNormal(s []Upper) (ret []string) {
	for _, v := range s {
		ret = append(ret, v.Up())
	}
	return ret
}

func main() {
	// More realistically, you probably already have a slice of foo
	foos := []*foo{
		&foo{"one"},
		&foo{"two"},
		&foo{"three"},
		&foo{"four"},
	}

	// Using generics, it is super easy to call Stringify
	fmt.Println(StringifyGeneric(foos))
	fmt.Println(UpifyGeneric(foos))

	// Using interfaces as they are now is a little more annoying in this case.
	var stringers []Stringer
	for _, foo := range foos {
		stringers = append(stringers, foo)
	}
	fmt.Println(StringifyNormal(stringers))

	// The problem is further exacerbated if you need to call multiple
	// functions with different interface slices.
	var uppers []Upper
	for _, foo := range foos {
		uppers = append(uppers, foo)
	}
	fmt.Println(UpifyNormal(uppers))
}

Thanks Jon for your input!

If you’d like to get your hands dirty with generics, try writing some simple programs in the Go 2 playground. Feel free to clone the go-funk-generics repo and have a play around!

Thanks for reading.

© 2020–2022 Christensen Codes