
What's new in the Go Cloud Development Kit


Introduction

Last July, we introduced the Go Cloud Development Kit (previously referred to as simply "Go Cloud"), an open source project building libraries and tools to improve the experience of developing for the cloud with Go. We've made a lot of progress since then -- thank you to early contributors! We look forward to growing the Go CDK community of users and contributors, and are excited to work closely with early adopters.

Portable APIs

Our first initiative is a set of portable APIs for common cloud services. You write your application using these APIs, and then deploy it on any combination of providers, including AWS, GCP, Azure, on-premise, or on a single developer machine for testing. Additional providers can be added by implementing an interface.

These portable APIs are a great fit if any of the following are true:

  • You develop cloud applications locally.
  • You have on-premise applications that you want to run in the cloud (permanently, or as part of a migration).
  • You want portability across multiple clouds.
  • You are creating a new Go application that will use cloud services.

Unlike traditional approaches where you would need to write new application code for each cloud provider, with the Go CDK you write your application code once using our portable APIs to access the set of services listed below. Then, you can run your application on any supported cloud with minimal config changes.
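
For a concrete flavor, here is a minimal sketch using the blob API (the bucket URL and object key are hypothetical; the provider is selected by the URL scheme, so the surrounding code stays the same on any supported cloud):

package main

import (
    "context"
    "log"

    "gocloud.dev/blob"
    // Blank imports register the providers we want available at runtime.
    _ "gocloud.dev/blob/fileblob"
    _ "gocloud.dev/blob/s3blob"
)

func main() {
    ctx := context.Background()
    // Swapping "s3://my-bucket" for "file:///tmp/my-bucket" retargets
    // this program from AWS S3 to the local filesystem.
    bucket, err := blob.OpenBucket(ctx, "s3://my-bucket")
    if err != nil {
        log.Fatal(err)
    }
    defer bucket.Close()

    if err := bucket.WriteAll(ctx, "greeting.txt", []byte("Hello, Go CDK!"), nil); err != nil {
        log.Fatal(err)
    }
}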

Our current set of APIs includes:

  • blob, for persistence of blob data. Supported providers include: AWS S3, Google Cloud Storage (GCS), Azure Storage, the filesystem, and in-memory.
  • pubsub for publishing/subscribing of messages to a topic. Supported providers include: Amazon SNS/SQS, Google Pub/Sub, Azure Service Bus, RabbitMQ, and in-memory.
  • runtimevar, for watching external configuration variables. Supported providers include AWS Parameter Store, Google Runtime Configurator, etcd, and the filesystem.
  • secrets, for encryption/decryption. Supported providers include AWS KMS, GCP KMS, Hashicorp Vault, and local symmetric keys.
  • Helpers for connecting to cloud SQL providers. Supported providers include AWS RDS and Google Cloud SQL.
  • We are also working on a document storage API (e.g. MongoDB, DynamoDB, Firestore).

Feedback

We hope you're as excited about the Go CDK as we are -- check out our godoc, walk through our tutorial, and use the Go CDK in your application(s). We'd love to hear your ideas for other APIs and API providers you'd like to see.

If you're digging into the Go CDK, please share your experiences with us:

  • What went well?
  • Were there any pain points using the APIs?
  • Are there any features missing in the API you used?
  • Do you have suggestions for improving the documentation?

To send feedback, file an issue or start a discussion on the project's GitHub repository (github.com/google/go-cloud).

Thanks!


The New Go Developer Network


A sense of community flourishes when we come together in person. As handles become names and avatars become faces, the smiles are real and true friendship can grow. There is joy in the sharing of knowledge and celebrating the accomplishments of our friends, colleagues, and neighbors. In our rapidly growing Go community this critical role is played by the Go user groups.

To better support our Go user groups worldwide, the Go community leaders at GoBridge and Google have joined forces to create a new program called the Go Developer Network (GDN). The GDN is a collection of Go user groups working together with a shared mission to empower developer communities with the knowledge, experience, and wisdom to build the next generation of software in Go.

We have partnered with Meetup to create our own Pro Network of Go Developers, giving Go developers a single place to search for local user groups, find events, and see what other Gophers are doing around the world.

User groups that join the GDN will be recognized by GoBridge as the official user group for that city and be provided with the latest news, information, conduct policies, and procedures. GDN groups will have Meetup fees paid by the GDN and will have access to special swag and other fun items. Each organizer of a GDN local group will continue to own the group and maintain full admin rights. If you currently run a user group, please fill out this application to request to join the GDN.

We hope you are as excited about the GDN as we are.

Using Go Modules


Introduction

Go 1.11 and 1.12 include preliminary support for modules, Go’s new dependency management system that makes dependency version information explicit and easier to manage. This blog post is an introduction to the basic operations needed to get started using modules. A followup post will cover releasing modules for others to use.

A module is a collection of Go packages stored in a file tree with a go.mod file at its root. The go.mod file defines the module’s module path, which is also the import path used for the root directory, and its dependency requirements, which are the other modules needed for a successful build. Each dependency requirement is written as a module path and a specific semantic version.

As of Go 1.11, the go command enables the use of modules when the current directory or any parent directory has a go.mod, provided the directory is outside $GOPATH/src. (Inside $GOPATH/src, for compatibility, the go command still runs in the old GOPATH mode, even if a go.mod is found. See the go command documentation for details.) Starting in Go 1.13, module mode will be the default for all development.

This post walks through a sequence of common operations that arise when developing Go code with modules:

  • Creating a new module.
  • Adding a dependency.
  • Upgrading dependencies.
  • Adding a dependency on a new major version.
  • Upgrading a dependency to a new major version.
  • Removing unused dependencies.

Creating a new module

Let's create a new module.

Create a new, empty directory somewhere outside $GOPATH/src, cd into that directory, and then create a new source file, hello.go:

package hello

func Hello() string {
    return "Hello, world."
}

Let's write a test, too, in hello_test.go:

package hello

import "testing"

func TestHello(t *testing.T) {
    want := "Hello, world."
    if got := Hello(); got != want {
        t.Errorf("Hello() = %q, want %q", got, want)
    }
}

At this point, the directory contains a package, but not a module, because there is no go.mod file. If we were working in /home/gopher/hello and ran go test now, we'd see:

$ go test
PASS
ok      _/home/gopher/hello    0.020s
$

The last line summarizes the overall package test. Because we are working outside $GOPATH and also outside any module, the go command knows no import path for the current directory and makes up a fake one based on the directory name: _/home/gopher/hello.

Let's make the current directory the root of a module by using go mod init and then try go test again:

$ go mod init example.com/hello
go: creating new go.mod: module example.com/hello
$ go test
PASS
ok      example.com/hello    0.020s
$

Congratulations! You’ve written and tested your first module.

The go mod init command wrote a go.mod file:

$ cat go.mod
module example.com/hello

go 1.12
$

The go.mod file only appears in the root of the module. Packages in subdirectories have import paths consisting of the module path plus the path to the subdirectory. For example, if we created a subdirectory world, we would not need to (nor want to) run go mod init there. The package would automatically be recognized as part of the example.com/hello module, with import path example.com/hello/world.
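
As a sketch of that layout (the world package here is hypothetical):

hello/
    go.mod      // module example.com/hello
    hello.go    // package hello
    world/
        world.go    // package world; imported as example.com/hello/world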

Adding a dependency

The primary motivation for Go modules was to improve the experience of using (that is, adding a dependency on) code written by other developers.

Let's update our hello.go to import rsc.io/quote and use it to implement Hello:

package hello

import "rsc.io/quote"

func Hello() string {
    return quote.Hello()
}

Now let’s run the test again:

$ go test
go: finding rsc.io/quote v1.5.2
go: downloading rsc.io/quote v1.5.2
go: extracting rsc.io/quote v1.5.2
go: finding rsc.io/sampler v1.3.0
go: finding golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c
go: downloading rsc.io/sampler v1.3.0
go: extracting rsc.io/sampler v1.3.0
go: downloading golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c
go: extracting golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c
PASS
ok      example.com/hello    0.023s
$

The go command resolves imports by using the specific dependency module versions listed in go.mod. When it encounters an import of a package not provided by any module in go.mod, the go command automatically looks up the module containing that package and adds it to go.mod, using the latest version. (“Latest” is defined as the latest tagged stable (non-prerelease) version, or else the latest tagged prerelease version, or else the latest untagged version.) In our example, go test resolved the new import rsc.io/quote to the module rsc.io/quote v1.5.2. It also downloaded two dependencies used by rsc.io/quote, namely rsc.io/sampler and golang.org/x/text. Only direct dependencies are recorded in the go.mod file:

$ cat go.mod
module example.com/hello

go 1.12

require rsc.io/quote v1.5.2
$

A second go test command will not repeat this work, since the go.mod is now up-to-date and the downloaded modules are cached locally (in $GOPATH/pkg/mod):

$ go test
PASS
ok      example.com/hello    0.020s
$

Note that while the go command makes adding a new dependency quick and easy, it is not without cost. Your module now literally depends on the new dependency in critical areas such as correctness, security, and proper licensing, just to name a few. For more considerations, see Russ Cox's blog post, “Our Software Dependency Problem.”

As we saw above, adding one direct dependency often brings in other indirect dependencies too. The command go list -m all lists the current module and all its dependencies:

$ go list -m all
example.com/hello
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c
rsc.io/quote v1.5.2
rsc.io/sampler v1.3.0
$

In the go list output, the current module, also known as the main module, is always the first line, followed by dependencies sorted by module path.

The golang.org/x/text version v0.0.0-20170915032832-14c0d48ead0c is an example of a pseudo-version, which is the go command's version syntax for a specific untagged commit.

In addition to go.mod, the go command maintains a file named go.sum containing the expected cryptographic hashes of the content of specific module versions:

$ cat go.sum
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c h1:qgOY6WgZO...
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:Nq...
rsc.io/quote v1.5.2 h1:w5fcysjrx7yqtD/aO+QwRjYZOKnaM9Uh2b40tElTs3...
rsc.io/quote v1.5.2/go.mod h1:LzX7hefJvL54yjefDEDHNONDjII0t9xZLPX...
rsc.io/sampler v1.3.0 h1:7uVkIFmeBqHfdjD+gZwtXXI+RODJ2Wc4O7MPEh/Q...
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9...
$

The go command uses the go.sum file to ensure that future downloads of these modules retrieve the same bits as the first download, to ensure the modules your project depends on do not change unexpectedly, whether for malicious, accidental, or other reasons. Both go.mod and go.sum should be checked into version control.
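
If you ever want to re-check the downloaded modules in the local cache against those hashes, go mod verify does exactly that (output shown for the success case):

$ go mod verify
all modules verified
$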

Upgrading dependencies

With Go modules, versions are referenced with semantic version tags. A semantic version has three parts: major, minor, and patch. For example, for v0.1.2, the major version is 0, the minor version is 1, and the patch version is 2. Let's walk through a couple of minor version upgrades. In the next section, we’ll consider a major version upgrade.

From the output of go list -m all, we can see we're using an untagged version of golang.org/x/text. Let's upgrade to the latest tagged version and test that everything still works:

$ go get golang.org/x/text
go: finding golang.org/x/text v0.3.0
go: downloading golang.org/x/text v0.3.0
go: extracting golang.org/x/text v0.3.0
$ go test
PASS
ok      example.com/hello    0.013s
$

Woohoo! Everything passes. Let's take another look at go list -m all and the go.mod file:

$ go list -m all
example.com/hello
golang.org/x/text v0.3.0
rsc.io/quote v1.5.2
rsc.io/sampler v1.3.0
$ cat go.mod
module example.com/hello

go 1.12

require (
    golang.org/x/text v0.3.0 // indirect
    rsc.io/quote v1.5.2
)
$

The golang.org/x/text package has been upgraded to the latest tagged version (v0.3.0). The go.mod file has been updated to specify v0.3.0 too. The indirect comment indicates a dependency is not used directly by this module, only indirectly by other module dependencies. See go help modules for details.

Now let's try upgrading the rsc.io/sampler minor version. Start the same way, by running go get and running tests:

$ go get rsc.io/sampler
go: finding rsc.io/sampler v1.99.99
go: downloading rsc.io/sampler v1.99.99
go: extracting rsc.io/sampler v1.99.99
$ go test
--- FAIL: TestHello (0.00s)
    hello_test.go:8: Hello() = "99 bottles of beer on the wall, 99 bottles of beer, ...", want "Hello, world."
FAIL
exit status 1
FAIL    example.com/hello    0.014s
$

Uh, oh! The test failure shows that the latest version of rsc.io/sampler is incompatible with our usage. Let's list the available tagged versions of that module:

$ go list -m -versions rsc.io/sampler
rsc.io/sampler v1.0.0 v1.2.0 v1.2.1 v1.3.0 v1.3.1 v1.99.99
$

We had been using v1.3.0; v1.99.99 is clearly no good. Maybe we can try using v1.3.1 instead:

$ go get rsc.io/sampler@v1.3.1
go: finding rsc.io/sampler v1.3.1
go: downloading rsc.io/sampler v1.3.1
go: extracting rsc.io/sampler v1.3.1
$ go test
PASS
ok      example.com/hello    0.022s
$

Note the explicit @v1.3.1 in the go get argument. In general, each argument passed to go get can take an explicit version; the default is @latest, which resolves to the latest version as defined earlier.
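
For reference, a few forms the version suffix can take (reusing the module from the example above):

$ go get rsc.io/sampler@latest    # the default; same as omitting the suffix
$ go get rsc.io/sampler@v1.3.1    # a specific tagged release
$ go get rsc.io/sampler@master    # the latest commit on a branch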

Adding a dependency on a new major version

Let's add a new function to our package: func Proverb returns a Go concurrency proverb, by calling quote.Concurrency, which is provided by the module rsc.io/quote/v3. First we update hello.go to add the new function:

package hello

import (
    "rsc.io/quote"
    quoteV3 "rsc.io/quote/v3"
)

func Hello() string {
    return quote.Hello()
}

func Proverb() string {
    return quoteV3.Concurrency()
}

Then we add a test to hello_test.go:

func TestProverb(t *testing.T) {
    want := "Concurrency is not parallelism."
    if got := Proverb(); got != want {
        t.Errorf("Proverb() = %q, want %q", got, want)
    }
}

Then we can test our code:

$ go test
go: finding rsc.io/quote/v3 v3.1.0
go: downloading rsc.io/quote/v3 v3.1.0
go: extracting rsc.io/quote/v3 v3.1.0
PASS
ok      example.com/hello    0.024s
$

Note that our module now depends on both rsc.io/quote and rsc.io/quote/v3:

$ go list -m rsc.io/q...
rsc.io/quote v1.5.2
rsc.io/quote/v3 v3.1.0
$

Each different major version (v1, v2, and so on) of a Go module uses a different module path: starting at v2, the path must end in the major version. In the example, v3 of rsc.io/quote is no longer rsc.io/quote: instead, it is identified by the module path rsc.io/quote/v3. This convention is called semantic import versioning, and it gives incompatible packages (those with different major versions) different names. In contrast, v1.6.0 of rsc.io/quote should be backwards-compatible with v1.5.2, so it reuses the name rsc.io/quote. (In the previous section, rsc.io/sampler v1.99.99 should have been backwards-compatible with rsc.io/sampler v1.3.0, but bugs or incorrect client assumptions about module behavior can both happen.)

The go command allows a build to include at most one version of any particular module path, meaning at most one of each major version: one rsc.io/quote, one rsc.io/quote/v2, one rsc.io/quote/v3, and so on. This gives module authors a clear rule about possible duplication of a single module path: it is impossible for a program to build with both rsc.io/quote v1.5.2 and rsc.io/quote v1.6.0. At the same time, allowing different major versions of a module (because they have different paths) gives module consumers the ability to upgrade to a new major version incrementally. In this example, we wanted to use quote.Concurrency from rsc.io/quote/v3 v3.1.0 but are not yet ready to migrate our uses of rsc.io/quote v1.5.2. The ability to migrate incrementally is especially important in a large program or codebase.

Upgrading a dependency to a new major version

Let's complete our conversion from using rsc.io/quote to using only rsc.io/quote/v3. Because of the major version change, we should expect that some APIs may have been removed, renamed, or otherwise changed in incompatible ways. Reading the docs, we can see that Hello has become HelloV3:

$ go doc rsc.io/quote/v3
package quote // import "rsc.io/quote"

Package quote collects pithy sayings.

func Concurrency() string
func GlassV3() string
func GoV3() string
func HelloV3() string
func OptV3() string
$

(There is also a known bug in the output; the displayed import path has incorrectly dropped the /v3.)

We can update our use of quote.Hello() in hello.go to use quoteV3.HelloV3():

package hello

import quoteV3 "rsc.io/quote/v3"

func Hello() string {
    return quoteV3.HelloV3()
}

func Proverb() string {
    return quoteV3.Concurrency()
}

At this point, there's no need for the renamed import anymore, so we can undo that:

package hello

import "rsc.io/quote/v3"

func Hello() string {
    return quote.HelloV3()
}

func Proverb() string {
    return quote.Concurrency()
}

Let's re-run the tests to make sure everything is working:

$ go test
PASS
ok      example.com/hello       0.014s

Removing unused dependencies

We've removed all our uses of rsc.io/quote, but it still shows up in go list -m all and in our go.mod file:

$ go list -m all
example.com/hello
golang.org/x/text v0.3.0
rsc.io/quote v1.5.2
rsc.io/quote/v3 v3.1.0
rsc.io/sampler v1.3.1
$ cat go.mod
module example.com/hello

go 1.12

require (
    golang.org/x/text v0.3.0 // indirect
    rsc.io/quote v1.5.2
    rsc.io/quote/v3 v3.0.0
    rsc.io/sampler v1.3.1 // indirect
)
$

Why? Because building a single package, like with go build or go test, can easily tell when something is missing and needs to be added, but not when something can safely be removed. Removing a dependency can only be done after checking all packages in a module, and all possible build tag combinations for those packages. An ordinary build command does not load this information, and so it cannot safely remove dependencies.

The go mod tidy command cleans up these unused dependencies:

$ go mod tidy
$ go list -m all
example.com/hello
golang.org/x/text v0.3.0
rsc.io/quote/v3 v3.1.0
rsc.io/sampler v1.3.1
$ cat go.mod
module example.com/hello

go 1.12

require (
    golang.org/x/text v0.3.0 // indirect
    rsc.io/quote/v3 v3.1.0
    rsc.io/sampler v1.3.1 // indirect
)
$ go test
PASS
ok      example.com/hello    0.020s
$

Conclusion

Go modules are the future of dependency management in Go. Module functionality is now available in all supported Go versions (that is, in Go 1.11 and Go 1.12).

This post introduced these workflows using Go modules:

  • go mod init creates a new module, initializing the go.mod file that describes it.
  • go build, go test, and other package-building commands add new dependencies to go.mod as needed.
  • go list -m all prints the current module’s dependencies.
  • go get changes the required version of a dependency (or adds a new dependency).
  • go mod tidy removes unused dependencies.

We encourage you to start using modules in your local development and to add go.mod and go.sum files to your projects. To provide feedback and help shape the future of dependency management in Go, please send us bug reports or experience reports.

Thanks for all your feedback and help improving modules.

Debugging what you deploy in Go 1.12


Introduction

Go 1.11 and Go 1.12 make significant progress toward allowing developers to debug the same optimized binaries that they deploy to production.

As the Go compiler has become increasingly aggressive in producing faster binaries, we've lost ground in debuggability. In Go 1.10, users needed to disable optimizations entirely in order to have a good debugging experience from interactive tools like Delve. But users shouldn’t have to trade performance for debuggability, especially when running production services. If your problem is occurring in production, you need to debug it in production, and that shouldn’t require deploying unoptimized binaries.

For Go 1.11 and 1.12, we focused on improving the debugging experience on optimized binaries (the default setting of the Go compiler). Improvements include:

  • More accurate value inspection, in particular for arguments at function entry;
  • More precisely identifying statement boundaries so that stepping is less jumpy and breakpoints more often land where the programmer expects;
  • And preliminary support for Delve to call Go functions (goroutines and garbage collection make this trickier than it is in C and C++).

Debugging optimized code with Delve

Delve is a debugger for Go on x86 supporting both Linux and macOS. Delve is aware of goroutines and other Go features and provides one of the best Go debugging experiences. Delve is also the debugging engine behind GoLand and VS Code.

Delve normally rebuilds the code it is debugging with -gcflags "all=-N -l", which disables inlining and most optimizations. To debug optimized code with Delve, first build the optimized binary, then use dlv exec your_program to debug it. Or, if you have a core file from a crash, you can examine it with dlv core your_program your_core. With Go 1.12 and the latest Delve releases, you should be able to examine many variables, even in optimized binaries.
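
Put together, a typical session might look like this (the binary name is hypothetical):

$ go build -o hello .      # a normal, optimized build; no -gcflags needed
$ dlv exec ./hello
(dlv) break main.main
(dlv) continue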

Improved value inspection

When debugging optimized binaries produced by Go 1.10, variable values were usually completely unavailable. In contrast, starting with Go 1.11, variables can usually be examined even in optimized binaries, unless they’ve been optimized away completely. In Go 1.11 the compiler began emitting DWARF location lists so debuggers can track variables as they move in and out of registers and reconstruct complex objects that are split across different registers and stack slots.

Improved stepping

The original post includes an animation of stepping through a simple function in a debugger in Go 1.10, with flaws (skipped and repeated lines) highlighted by red arrows.

Flaws like this make it easy to lose track of where you are when stepping through a program and interfere with hitting breakpoints.

Go 1.11 and 1.12 record statement boundary information and do a better job of tracking source line numbers through optimizations and inlining. As a result, in Go 1.12, stepping through this code stops on every line and does so in the order you would expect.

Function calls

Function call support in Delve is still under development, but simple cases work. For example:

(dlv) call fib(6)
> main.main() ./hello.go:15 (PC: 0x49d648)
Values returned:
    ~r1: 8

The path forward

Go 1.12 is a step toward a better debugging experience for optimized binaries and we have plans to improve it even further.

There are fundamental tradeoffs between debuggability and performance, so we’re focusing on the highest-priority debugging defects, and working to collect automated metrics to monitor our progress and catch regressions.

We’re focusing on generating correct information for debuggers about variable locations, so if a variable can be printed, it is printed correctly. We’re also looking at making variable values available more of the time, particularly at key points like call sites, though in many cases improving this would require slowing down program execution. Finally, we’re working on improving stepping: we’re focusing on the order of stepping with panics, the order of stepping around loops, and generally trying to follow source order where possible.

A note on macOS support

Go 1.11 started compressing debug information to reduce binary sizes. This is natively supported by Delve, but neither LLDB nor GDB supports compressed debug info on macOS. If you are using LLDB or GDB, there are two workarounds: build binaries with -ldflags=-compressdwarf=false, or use splitdwarf (go get golang.org/x/tools/cmd/splitdwarf) to decompress the debug information in an existing binary.
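
For example, to produce a binary whose debug info LLDB or GDB on macOS can read (binary name hypothetical):

$ go build -ldflags=-compressdwarf=false -o hello .
$ lldb ./hello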

Go 2018 Survey Results


Thank you

This post summarizes the results of our 2018 user survey and draws comparisons between the results of our prior surveys from 2016 and 2017.

This year we had 5,883 survey respondents from 103 different countries. We are grateful to everyone who provided their feedback through this survey to help shape the future of Go. Thank you!

Summary of findings

  • For the first time, half of survey respondents are now using Go as part of their daily routine. This year also saw significant increases in the number of respondents who develop in Go as part of their jobs and use Go outside of work responsibilities.
  • The most common uses for Go remain API/RPC services and CLI tools. Automation tasks, while not as common as CLI tools and API services, are a fast-growing area for Go.
  • Web development remains the most common domain that survey respondents work in, but DevOps showed the highest year-over-year growth and is now the second most common domain.
  • A large majority of survey respondents said Go is their most-preferred programming language, despite generally feeling less proficient with it than at least one other language.
  • VS Code and GoLand are surging in popularity and are now the most popular code editors among survey respondents.
  • Highlighting the portable nature of Go, many Go developers use more than one primary OS for development. Linux and macOS are particularly popular, with a large majority of survey respondents using one or both of these operating systems to write Go code.
  • Survey respondents appear to be shifting away from on-prem Go deployments and moving towards containers and serverless cloud deployments.
  • The majority of respondents said they feel welcome in the Go community, and most ideas for improving the Go community specifically focus on improving the experience of newcomers.

Read on for all of the details.

Programming background

This year's results show a significant increase in the number of survey respondents who are paid to write Go as part of their jobs (68% → 72%), continuing a year-over-year trend that has been growing since our first survey in 2016. We also see an increase in the number of respondents who program in Go outside of work (64% → 70%). For the first time, the number of survey respondents who write in Go as part of their daily routine reached 50% (up from 44% in 2016). These findings suggest that companies are continuing to embrace Go for professional software development at a consistent pace, and that Go's general popularity with developers remains strong.

To better understand where developers use Go, we broke responses down into three groups: 1) people who are using Go both in and outside of work, 2) people who use Go professionally but not outside of work, and 3) people who only write Go outside of their job responsibilities. Nearly half (46%) of respondents write Go code both professionally and on their own time (a 10-point increase since 2017), while the remaining respondents are closely split between either only writing Go at work, or only writing Go outside of work. The large percentage of respondents who both use Go at work and choose to use it outside of work suggests that the language appeals to developers who do not view software engineering as only a day job: they also choose to hack on code outside of work responsibilities, and (as evidenced by 85% of respondents saying they'd prefer Go for their next project; see the "Attitudes towards Go" section below) Go is the top language they'd prefer to use for these non-work-related projects.

When asked how long they've been using Go, participants' answers are strongly trending upward over time, with a higher percentage of responses in the 2-4 and 4+ year buckets each year. This is expected for a newer programming language, and we're glad to see that the percentage of respondents who are new to Go is dropping more slowly than the percentage of respondents who have been using Go for 2+ years is increasing, as this suggests that developers are not dropping out of the ecosystem after initially learning the language.

As in prior years, Go ranks at the top of respondents' preferred languages and languages in which they have expertise. A majority of respondents (69%) claimed expertise in 5 different languages, highlighting that their attitudes towards Go are influenced by experiences with other programming stacks. The charts below are sorted by the number of respondents who ranked each language as their most preferred/understood (the darkest blue bars), which highlights three interesting bits:

  • While about ⅓ of respondents consider Go to be the language in which they have the most expertise, twice that many respondents consider it their most preferred programming language. So even though many respondents feel they haven't become as proficient with Go as with some other language, they still frequently prefer to develop with Go.
  • Few survey respondents rank Rust as a language in which they have expertise (6.8%), yet 19% rank it as a top preferred language, indicating a high level of interest in Rust among this audience.
  • Only three languages have more respondents who say they prefer the language than say they have expertise with it: Rust (2.41:1 ratio of preference:expertise), Kotlin (1.95:1), and Go (1.02:1). Higher preference than expertise implies interest—but little direct experience—in a language, while lower preference than expertise numbers suggests barriers to proficient use. Ratios near 1.0 suggest that most developers are able to work effectively and enjoyably with a given language. This data is corroborated by Stack Overflow's 2018 developer survey, which also found Rust, Kotlin, and Go to be among the most-preferred programming languages.

Reading the data: Participants could rank their top 5 languages. The color coding starts with dark blue for the top rank and lightens for each successive rank. These charts are sorted by the percentage of participants who ranked each language as their top choice.

Development domains

Survey respondents reported working on a median of three different domains, with a large majority (72%) working in 2-5 different areas. Web development is the most prevalent at 65%, and it increased its dominance as the primary area survey respondents work in (up from 61% last year): web development has been the most common domain for Go development since 2016. This year DevOps noticeably increased, from 36% to 41% of respondents, taking over the number two spot from Systems Programming. We did not find any domains with lower usage in 2018 than in 2017, suggesting that respondents are adopting Go for a wider variety of projects, rather than shifting usage from one domain to another.

Since 2016, the top two uses of Go have been writing API/RPC services and developing CLI applications. While CLI usage has remained stable at 63% for three years, API/RPC usage has increased from 60% in 2016 to 65% in 2017 to 73% today. These domains play to core strengths of Go and are both central to cloud-native software development, so we expect them to remain two of the primary scenarios for Go developers into the future. The percentage of respondents who write web services that directly return HTML has steadily dropped while API/RPC usage has increased, suggesting some migration to the API/RPC model for web services. Another year-over-year trend suggests that automation is also a growing area for Go, with 38% of respondents now using Go for scripts and automation tasks (up from 31% in 2016).

To better understand the contexts in which developers are using Go, we added a question about Go adoption across different industries. Perhaps unsurprisingly for a relatively new language, over half of survey respondents work in companies in the Internet/web services and Software categories (i.e., tech companies). The only other industries with >3% responses were Finance, banking, or insurance and Media, advertising, publishing, or entertainment. (In the chart below, we've condensed all of the categories with response rates below 3% into the "Other" category.) We'll continue tracking Go's adoption across industries to better understand developer needs outside of technology companies.

Attitudes towards Go

This year we added a question asking "How likely are you to recommend Go to a friend or colleague?" to calculate our Net Promoter Score. This score attempts to measure how many more "promoters" a product has than "detractors" and ranges from -100 to 100; a positive value suggests most people are likely to recommend using a product, while negative values suggest most people are likely to recommend against using it. Our 2018 score is 61 (68% promoters - 7% detractors) and will serve as a baseline to help us gauge community sentiment towards the Go ecosystem over time.

In addition to NPS, we asked several questions about developer satisfaction with Go. Overall, survey respondents indicated a high level of satisfaction, consistent with prior years. Large majorities say they are happy with Go (89%), would prefer to use Go for their next project (85%), and feel that it is working well for their team (66%), while a plurality feel that Go is at least somewhat critical to their company's success (44%). While all of these metrics showed an increase in 2017, they remained mostly stable this year. (The wording of the first question changed in 2018 from "I would recommend using Go to others" to "Overall, I'm happy with Go", so those results are not directly comparable.)

Given the strong sentiment towards preferring Go for future development, we want to understand what prevents developers from doing so. The reasons remained largely unchanged since last year: about ½ of survey respondents work on existing projects written in other languages, and ⅓ work on a team or project that prefers to use a different language. Missing language features and libraries round out the most common reasons respondents did not use Go more. We also asked about the biggest challenges developers face while using Go; unlike most of our survey questions, respondents could type in anything they wished to answer this question. We analyzed the results via machine learning to identify common themes and counted the number of responses that supported each theme. The top three major challenges we identified are:

  • Package management (e.g., "Keeping up with vendoring", "dependency / packet [sic] management / vendoring not unified")
  • Differences from more familiar programming languages (e.g., "syntax close to C-languages with slightly different semantics makes me look up references somewhat more than I'd like", "coworkers who come from non-Go backgrounds trying to use Go as a version of their previous language but with channels and Goroutines")
  • Lack of generics (e.g., "Lack of generics makes it difficult to persuade people who have not tried Go that they would find it efficient.", "Hard to build richer abstractions (want generics)")

This year we added several questions about developer satisfaction with different aspects of Go. Survey respondents were very satisfied with Go applications' CPU performance (46:1, meaning 46 respondents said they were satisfied for every 1 respondent who said they were not satisfied), build speed (37:1), and application memory utilization (32:1). Responses for application debuggability (3.2:1) and binary size (6.4:1), however, suggest room for improvement.

The dissatisfaction with binary size largely comes from developers building CLIs, only 30% of whom are satisfied with the size of Go's generated binaries. For all other types of applications, however, developer satisfaction was > 50%, and binary size was consistently ranked at the bottom of the list of important factors.

Debuggability, conversely, stands out when we look at how respondents ranked the importance of each aspect; 44% of respondents ranked debuggability as their most or second-most important aspect, but only 36% were satisfied with the current state of Go debugging. Debuggability was consistently rated about as important as memory usage and build speed but with significantly lower satisfaction levels, and this pattern held true regardless of the type of software respondents were building. The two most recent Go releases, Go 1.11 and 1.12, both contained significant improvements to debuggability. We plan to investigate how developers debug Go applications in more depth this year, with a goal of improving the overall debugging experience for Go developers.

Development environments

We asked respondents which operating systems they primarily use when writing Go code. A majority (65%) of respondents said they use Linux, 50% use macOS, and 18% use Windows, consistent with last year. This year we also looked at how many respondents develop on multiple OSes vs. a single OS. Linux and macOS remain the clear leaders, with 81% of respondents developing on some mix of these two systems. Only 3% of respondents evenly split their time between all three OSes. Overall, 41% of respondents use multiple operating systems for Go development, highlighting the cross-platform nature of Go.

Last year, VS Code edged out Vim as the most popular Go editor among survey respondents. This year it significantly expanded its lead to become the preferred editor for over ⅓ of our survey respondents (up from 27% last year). GoLand also experienced strong growth and is now the second most-preferred editor at 22%, swapping places with Vim (down to 17%). The surging popularity of VS Code and GoLand appear to be coming at the expense of Sublime Text and Atom. Vim also saw the number of respondents ranking it their top choice drop, but it remains the most popular second-choice editor at 14%. Interestingly, we found no differences in the level of satisfaction respondents reported for their editor(s) of choice.

We also asked respondents what would most improve Go support in their preferred editor. Like the "biggest challenge" question above, participants could write in their own response rather than select from a multiple-choice list. A thematic analysis on the responses revealed that improved debugging support (e.g., "Live debugging", "Integrated debugging", "Even better debugging") was the most-common request, followed by improved code completion (e.g., "autocomplete performance and quality", "smarter autocomplete"). Other requests include better integration with Go's CLI toolchain, better support for modules/packages, and general performance improvements.

This year we also added a question asking which deployment architectures are most important to Go developers. Unsurprisingly, survey respondents overwhelmingly view x86/x86-64 as their top deployment platform (76% of respondents listed it as their most important deployment architecture, and 84% had it in their top 3). The ranking of the second- and third-choice architectures, however, is informative: there is significant interest in ARM64 (45%), WebAssembly (30%), and ARM (22%), but very little interest in other platforms.

Deployments and services

For 2018 we see a continuation of the trend from on-prem to cloud hosting for both Go and non-Go deployments. The percentage of survey respondents who deploy Go applications to on-prem servers dropped from 43% → 32%, mirroring the 46% → 36% drop reported for non-Go deployments. The cloud services which saw the highest year-over-year growth include AWS Lambda (4% → 11% for Go, 10% → 15% non-Go) and Google Kubernetes Engine (8% → 12% for Go, 5% → 10% non-Go), suggesting that serverless and containers are becoming increasingly popular deployment platforms. This service growth appears to be driven by respondents who had already adopted cloud services, however, as we found no meaningful growth in the percentage of respondents who deploy to at least one cloud service this year (55% → 56%). We also see steady growth in Go deployments to GCP since 2016, increasing from 12% → 19% of respondents.

Perhaps correlated with the decrease in on-prem deployments, this year we saw cloud storage become the second-most used service by survey respondents, increasing from 32% → 44%. Authentication & federation services also saw a significant increase (26% → 33%). The primary service survey respondents access from Go remains open-source relational databases, which ticked up from 61% → 65% of respondents. As the below chart shows, service usage increased across the board.

Go community

The top community sources for finding answers to Go questions continue to be Stack Overflow (23% of respondents marked it as their top source), Go web sites (18% for godoc.org, 14% for golang.org), and reading source code (8% for source code generally, 4% for GitHub specifically). The order remains largely consistent with prior years. The primary sources for Go news remain the Go blog, Reddit's r/golang, Twitter, and Hacker News. These were also the primary distribution methods for this survey, however, so there is likely some bias in this result. In the two charts below, we've grouped sources used by less than 5% of respondents into the "Other" category.

This year, 55% of survey respondents said they have or are interested in contributing to the Go community, slightly down from 59% last year. Because the two most common areas for contribution (the standard library and official Go tools) require interacting with the core Go team, we suspect this decrease may be related to a dip in the percentage of participants who agreed with the statements "I feel comfortable approaching the Go project leadership with questions and feedback" (30% → 25%) and "I am confident in the leadership of Go" (54% → 46%).

An important aspect of community is helping everyone feel welcome, especially people from traditionally under-represented demographics. To better understand this, we asked an optional question about identification across several under-represented groups. In 2017 we saw year-over-year increases across the board. For 2018, we saw a similar percentage of respondents (12%) identify as part of an under-represented group, and this was paired with a significant decrease in the percentage of respondents who do not identify as part of an under-represented group. In 2017, for every person who identified as part of an under-represented group, 3.5 people identified as not part of an under-represented group (3.5:1 ratio). In 2018 that ratio improved to 3.08:1. This suggests that the Go community is at least retaining the same proportions of under-represented members, and may even be increasing.

Maintaining a healthy community is extremely important to the Go project, so for the past three years we've been measuring the extent to which developers feel welcome in the Go community. This year we saw a drop in the percentage of survey respondents who agree with the statement "I feel welcome in the Go community", from 66% → 59%.

To better understand this decrease, we looked more closely at who reported feeling less welcome. Among traditionally under-represented groups, fewer people reported feeling unwelcome in 2018, suggesting that outreach in that area has been helpful. Instead, we found a linear relationship between the length of time someone has used Go and how welcome they feel: newer Go developers felt significantly less welcome (at 50%) than developers with 1-2 years of experience (62%), who in turn felt less welcome than developers with a few years of experience (73%). This interpretation of the data is supported by responses to the question "What changes would make the Go community more welcoming?". Respondents' comments can be broadly grouped into four categories:

  • Reduce a perception of elitism, especially for newcomers to Go (e.g., "less dismissiveness", "Less defensiveness and hubris")
  • Increase transparency at the leadership level (e.g., "Future direction and planning discussions", "Less top down leadership", "More democratic")
  • Increase introductory resources (e.g., "A more clear introduction for contributors", "Fun challenges to learn best practices")
  • More events and meetups, with a focus on covering a larger geographic area (e.g., "More meetups & social events", "Events in more cities")

This feedback is very helpful and gives us concrete areas we can focus on to improve the experience of being a Go developer. While it doesn't represent a large percentage of our user base, we take this feedback very seriously and are working on improving each area.

Conclusion

We hope you've enjoyed seeing the results of our 2018 developer survey. These results are impacting our 2019 planning, and in the coming months we'll share some ideas with you to address specific issues and needs the community has highlighted for us. Once again, thank you to everyone who contributed to this survey!

Next steps toward Go 2


Status

We’re well on the way towards the release of Go 1.13, hopefully in early August of this year. This is the first release that will include concrete changes to the language (rather than just minor adjustments to the spec), after a longer moratorium on any such changes.

To arrive at these language changes, we started out with a small set of viable proposals, selected from the much larger list of Go 2 proposals, per the new proposal evaluation process outlined in the “Go 2, here we come!” blog post. We wanted our initial selection of proposals to be relatively minor and mostly uncontroversial, to have a reasonably high chance of having them make it through the process. The proposed changes had to be backward-compatible to be minimally disruptive since modules, which eventually will allow module-specific language version selection, are not the default build mode quite yet. In short, this initial round of changes was more about getting the ball rolling again and gaining experience with the new process, rather than tackling big issues.

Our original list of proposals – general Unicode identifiers, binary integer literals, separators for number literals, signed integer shift counts – got both trimmed and expanded. The general Unicode identifiers didn’t make the cut as we didn’t have a concrete design document in place in time. The proposal for binary integer literals was expanded significantly and led to a comprehensive overhaul and modernization of Go’s number literal syntax. And we added the Go 2 draft design proposal on error inspection, which has been partially accepted.

With these initial changes in place for Go 1.13, it’s now time to look forward to Go 1.14 and determine what we want to tackle next.

Proposals for Go 1.14

The goals we have for Go today are the same as in 2007: to make software development scale. The three biggest hurdles on this path to improved scalability for Go are package and version management, better error handling support, and generics.

With Go module support getting increasingly stronger, support for package and version management is being addressed. This leaves better error handling support and generics. We have been working on both of these and presented draft designs at last year’s GopherCon in Denver. Since then we have been iterating on those designs. For error handling, we have published a concrete, significantly revised and simplified proposal (see below). For generics, we are making progress, with a talk (“Generics in Go” by Ian Lance Taylor) coming up at this year’s GopherCon in San Diego, but we have not reached the concrete proposal stage yet.

We also want to continue with smaller improvements to the language. For Go 1.14, we have selected the following proposals:

#32437. A built-in Go error check function, “try” (design doc).

This is our concrete proposal for improved error handling. While the proposed, fully backwards-compatible language extension is minimal, we expect an outsize impact on error handling code. This proposal has already attracted an enormous number of comments, and it's not easy to follow them all. We recommend starting with the initial comment for a quick outline and then reading the detailed design doc. The initial comment contains a couple of links leading to summaries of the feedback so far. Please follow the feedback recommendations (see the “Next steps” section below) before posting.
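
To give a flavor of the proposal, here is a sketch adapted from the design doc (not valid Go today, since try is only proposed):

// Without try, every call site spells out the check:
f, err := os.Open(filename)
if err != nil {
    return err
}

// With try, the check is implicit; on a non-nil error, try returns
// that error from the enclosing function, which must have a final
// result of type error:
f := try(os.Open(filename))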

#6977. Allow embedding overlapping interfaces (design doc).

This is an old, backwards-compatible proposal for making interface embedding more tolerant.

#32479. Diagnose string(int) conversion in go vet.

The string(int) conversion was introduced early in Go for convenience, but it is confusing to newcomers (string(10) is "\n" not "10") and not justified anymore now that the conversion is available in the unicode/utf8 package. Since removing this conversion is not a backwards-compatible change, we propose to start with a vet error instead.
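
A small, runnable illustration of the confusion, and the conversion newcomers usually want instead:

package main

import (
    "fmt"
    "strconv"
)

func main() {
    fmt.Printf("%q\n", string(10))       // "\n": converts the code point 10, not the number
    fmt.Printf("%q\n", strconv.Itoa(10)) // "10": the digits of the number
}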

#32466. Adopt crypto principles (design doc).

This is a request for feedback on a set of design principles for cryptographic libraries that we would like to adopt. See also the related proposal to remove SSLv3 support from crypto/tls.

Next steps

We are actively soliciting feedback on all these proposals. We are especially interested in fact-based evidence illustrating why a proposal might not work well in practice, or problematic aspects we might have missed in the design. Convincing examples in support of a proposal are also very helpful. On the other hand, comments containing only personal opinions are less actionable: we can acknowledge them but we can’t address them in any constructive way. Before posting, please take the time to read the detailed design docs and prior feedback or feedback summaries. Especially in long discussions, your concern may have already been raised and discussed in earlier comments.

Unless there are strong reasons to not even proceed into the experimental phase with a given proposal, we are planning to have all these implemented at the start of the Go 1.14 cycle (beginning of August, 2019) so that they can be evaluated in practice. Per the proposal evaluation process, the final decision will be made at the end of the development cycle (beginning of November, 2019).

Thank you for helping to make Go a better language!

Announcing The New Go Store


We are excited to launch the new official Go swag and merchandise store, shipping worldwide. We are even more excited to announce that 100% of the proceeds from the Go store go directly to GoBridge. GoBridge is a non-profit organization focused on building bridges to educate underrepresented groups by teaching technical skills and fostering diversity in the Go community.

At the Go store you’ll find our beloved gopher plushies and vinyls as well as new merchandise. Visit the store for 20% off with code Gopher20 through Sunday, July 21st at 11:59 PM PST.

We plan on adding stock to current items and bringing new ones to the store. If we are out of stock when you go to place an order, check back again soon. Follow the Twitter account for updates; we plan on adding new goodies for all our Go fans out there, so keep an eye out!

If you find any issues, please submit an issue prefixed with “GoStore” and we will aim to remedy it as soon as we can.

Happy shopping!

Why Generics?


Introduction

[This is a version of a talk presented at GopherCon 2019. Video link to follow when available.]

This article is about what it would mean to add generics to Go, and why I think we should do it. I'll also touch on an update to a possible design for adding generics to Go.

Go was released on November 10, 2009. Less than 24 hours later we saw the first comment about generics. (That comment also mentions exceptions, which we added to the language, in the form of panic and recover, in early 2010.)

In three years of Go surveys, lack of generics has always been listed as one of the top three problems to fix in the language.

Why generics?

But what does it mean to add generics, and why would we want it?

To paraphrase Jazayeri, et al.: generic programming enables the representation of functions and data structures in a generic form, with types factored out.

What does that mean?

For a simple example, let's assume we want to reverse the elements in a slice. It's not something that many programs need to do, but it's not all that unusual.

Let's say it's a slice of int.

func ReverseInts(s []int) {
    first := 0
    last := len(s)
    for first < last {
        s[first], s[last] = s[last], s[first]
        first++
        last--
    }
}

Pretty simple, but even for a simple function like that you'd want to write a few test cases. In fact, when I did, I found a bug. I'm sure many readers have spotted it already.

func ReverseInts(s []int) {
    first := 0
    last := len(s) - 1
    for first < last {
        s[first], s[last] = s[last], s[first]
        first++
        last--
    }
}

We need to subtract 1 when we set the variable last.

Now let's reverse a slice of string.

func ReverseStrings(s []string) {
    first := 0
    last := len(s) - 1
    for first < last {
        s[first], s[last] = s[last], s[first]
        first++
        last--
    }
}

If you compare ReverseInts and ReverseStrings, you'll see that the two functions are exactly the same, except for the type of the parameter. I don't think any reader is surprised by that.

What some people new to Go find surprising is that there is no way to write a simple Reverse function that works for a slice of any type.

Most other languages do let you write that kind of function.

In a dynamically typed language like Python or JavaScript you can simply write the function, without bothering to specify the element type. This doesn't work in Go because Go is statically typed, and requires you to write down the exact type of the slice and the type of the slice elements.

Most other statically typed languages, like C++ or Java or Rust or Swift, support generics to address exactly this kind of issue.

Go generic programming today

So how do people write this kind of code in Go?

In Go you can write a single function that works for different slice types by using an interface type, and defining a method on the slice types you want to pass in. That is how the standard library's sort.Sort function works.

In other words, interface types in Go are a form of generic programming. They let us capture the common aspects of different types and express them as methods. We can then write functions that use those interface types, and those functions will work for any type that implements those methods.
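
As a minimal sketch of this interface-based style (the interface and type names here are made up; sort.Sort follows the same pattern with sort.Interface):

// Swapper captures the only operations Reverse needs from a slice.
type Swapper interface {
    Len() int
    Swap(i, j int)
}

func Reverse(s Swapper) {
    first, last := 0, s.Len()-1
    for first < last {
        s.Swap(first, last)
        first++
        last--
    }
}

// Each slice type must supply the methods itself.
type IntSlice []int

func (s IntSlice) Len() int      { return len(s) }
func (s IntSlice) Swap(i, j int) { s[i], s[j] = s[j], s[i] }

A caller then writes Reverse(IntSlice{1, 2, 3}) rather than passing the slice directly.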

But this approach falls short of what we want. With interfaces you have to write the methods yourself. It's awkward to have to define a named type with a couple of methods just to reverse a slice. And the methods you write are exactly the same for each slice type, so in a sense we've just moved and condensed the duplicate code; we haven't eliminated it. Although interfaces are a form of generics, they don’t give us everything we want from generics.

A different way of using interfaces for generics, which could get around the need to write the methods yourself, would be to have the language define methods for some kinds of types. That isn't something the language supports today, but, for example, the language could define that every slice type has an Index method that returns an element. But in order to use that method in practice it would have to return an empty interface type, and then we lose all the benefits of static typing. More subtly, there would be no way to define a generic function that takes two different slices with the same element type, or that takes a map of one element type and returns a slice of the same element type. Go is a statically typed language because that makes it easier to write large programs; we don’t want to lose the benefits of static typing in order to gain the benefits of generics.

Another approach would be to write a generic Reverse function using the reflect package, but that is so awkward to write and slow to run that few people do that. That approach also requires explicit type assertions and has no static type checking.
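
For completeness, here is one way a reflect-based Reverse can look, using reflect.Swapper from Go 1.8 (a sketch; it trades static checking for run-time panics on non-slice arguments, and it is slower than a direct implementation):

import "reflect"

// Reverse reverses the elements of any slice.
func Reverse(s interface{}) {
    n := reflect.ValueOf(s).Len()
    swap := reflect.Swapper(s) // panics if s is not a slice
    for i, j := 0, n-1; i < j; i, j = i+1, j-1 {
        swap(i, j)
    }
}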

Or, you could write a code generator that takes a type and generates a Reverse function for slices of that type. There are several code generators out there that do just that. But this adds another step to every package that needs Reverse, it complicates the build because all the different copies have to be compiled, and fixing a bug in the master source requires re-generating all the instances, some of which may be in different projects entirely.

All these approaches are awkward enough that I think most people who have to reverse a slice in Go just write the function for the specific slice type that they need. Then they'll need to write test cases for the function, to make sure they didn't make a simple mistake like the one I made initially. And they'll need to run those tests routinely.

However we do it, it means a lot of extra work just for a function that looks exactly the same except for the element type. It's not that it can't be done. It clearly can be done, and Go programmers are doing it. It's just that there ought to be a better way.

For a statically typed language like Go, that better way is generics. What I wrote earlier is that generic programming enables the representation of functions and data structures in a generic form, with types factored out. That's exactly what we want here.

What generics can bring to Go

The first and most important thing we want from generics in Go is to be able to write functions like Reverse without caring about the element type of the slice. We want to factor out that element type. Then we can write the function once, write the tests once, put them in a go-gettable package, and call them whenever we want.

Even better, since this is an open source world, someone else can write Reverse once, and we can use their implementation.

At this point I should say that “generics” can mean a lot of different things. In this article, what I mean by “generics” is what I just described. In particular, I don’t mean templates as found in the C++ language, which support quite a bit more than what I’ve written here.

I went through Reverse in detail, but there are many other functions that we could write generically, such as:

  • Find smallest/largest element in slice
  • Find average/standard deviation of slice
  • Compute union/intersection of maps
  • Find shortest path in node/edge graph
  • Apply transformation function to slice/map, returning new slice/map

These examples are available in most other languages. In fact, I wrote this list by glancing at the C++ standard template library.

There are also examples that are specific to Go with its strong support for concurrency.

  • Read from a channel with a timeout
  • Combine two channels into a single channel
  • Call a list of functions in parallel, returning a slice of results
  • Call a list of functions, using a Context, return the result of the first function to finish, canceling and cleaning up extra goroutines

I've seen all of these functions written out many times with different types. It's not hard to write them in Go. But it would be nice to be able to reuse an efficient and debugged implementation that works for any value type.

To be clear, these are just examples. There are many more general purpose functions that could be written more easily and safely using generics.

Also, as I wrote earlier, it's not just functions. It's also data structures.

Go has two general purpose generic data structures built into the language: slices and maps. Slices and maps can hold values of any data type, with static type checking for values stored and retrieved. The values are stored as themselves, not as interface types. That is, when I have a []int, the slice holds ints directly, not ints converted to an interface type.

Slices and maps are the most useful generic data structures, but they aren’t the only ones. Here are some other examples.

  • Sets
  • Self-balancing trees, with efficient insertion and traversal in sorted order
  • Multimaps, with multiple instances of a key
  • Concurrent hash maps, supporting parallel insertions and lookups with no single lock

If we can write generic types, we can define new data structures, like these, that have the same type-checking advantages as slices and maps: the compiler can statically type-check the types of the values that they hold, and the values can be stored as themselves, not as interface types.

It should also be possible to take algorithms like the ones mentioned earlier and apply them to generic data structures.

These examples should all be just like Reverse: generic functions and data structures written once, in a package, and reused whenever they are needed. They should work like slices and maps, in that they shouldn't store values of empty interface type, but should store specific types, and those types should be checked at compile time.

So that's what Go can gain from generics. Generics can give us powerful building blocks that let us share code and build programs more easily.

I hope I’ve explained why this is worth looking into.

Benefits and costs

But generics don't come from the Big Rock Candy Mountain, the land where the sun shines every day over the lemonade springs. Every language change has a cost. There's no doubt that adding generics to Go will make the language more complicated. As with any change to the language, we need to talk about maximizing the benefit and minimizing the cost.

In Go, we’ve aimed to reduce complexity through independent, orthogonal language features that can be combined freely. We reduce complexity by making the individual features simple, and we maximize the benefit of the features by permitting their free combination. We want to do the same with generics.

To make this more concrete I’m going to list a few guidelines we should follow.

* Minimize new concepts

We should add as few new concepts to the language as possible. That means a minimum of new syntax and a minimum of new keywords and other names.

* Complexity falls on the writer of generic code, not the user

As much as possible the complexity should fall on the programmer writing the generic package. We don't want the user of the package to have to worry about generics. This means that it should be possible to call generic functions in a natural way, and it means that any errors in using a generic package should be reported in a way that is easy to understand and to fix. It should also be easy to debug calls into generic code.

* Writer and user can work independently

Similarly, we should make it easy to separate the concerns of the writer of the generic code and its user, so that they can develop their code independently. They shouldn't have to worry about what the other is doing, any more than the writer and caller of a normal function in different packages have to worry. This sounds obvious, but it's not true of generics in every other programming language.

* Short build times, fast execution times

Naturally, as much as possible, we want to keep the short build times and fast execution time that Go gives us today. Generics tend to introduce a tradeoff between fast builds and fast execution. As much as possible, we want both.

* Preserve clarity and simplicity of Go

Most importantly, Go today is a simple language. Go programs are usually clear and easy to understand. A major part of our long process of exploring this space has been trying to understand how to add generics while preserving that clarity and simplicity. We need to find mechanisms that fit well into the existing language, without turning it into something quite different.

These guidelines should apply to any generics implementation in Go. That’s the most important message I want to leave you with today: generics can bring a significant benefit to the language, but they are only worth doing if Go still feels like Go.

Draft design

Fortunately, I think it can be done. To finish up this article I’m going to shift from discussing why we want generics, and what the requirements on them are, to briefly discuss a design for how we think we can add them to the language.

At this year's Gophercon Robert Griesemer and I published a design draft for adding generics to Go. See the draft for full details. I'll go over some of the main points here.

Here is the generic Reverse function in this design.

func Reverse (type Element) (s []Element) {
    first := 0
    last := len(s) - 1
    for first < last {
        s[first], s[last] = s[last], s[first]
        first++
        last--
    }
}

You'll notice that the body of the function is exactly the same. Only the signature has changed.

The element type of the slice has been factored out. It's now named Element and has become what we call a type parameter. Instead of being part of the type of the slice parameter, it's now a separate, additional, type parameter.

To call a function with a type parameter, in the general case you pass a type argument, which is like any other argument except that it's a type.

func ReverseAndPrint(s []int) {
    Reverse(int)(s)
    fmt.Println(s)
}

That is the (int) seen after Reverse in this example.

Fortunately, in most cases, including this one, the compiler can deduce the type argument from the types of the regular arguments, and you don't need to mention the type argument at all.

Calling a generic function just looks like calling any other function.

func ReverseAndPrint(s []int) {
    Reverse(s)
    fmt.Println(s)
}

In other words, although the generic Reverse function is slightly more complex than ReverseInts and ReverseStrings, that complexity falls on the writer of the function, not the caller.

Contracts

Since Go is a statically typed language, we have to talk about the type of a type parameter. This meta-type tells the compiler what sorts of type arguments are permitted when calling a generic function, and what sorts of operations the generic function can do with values of the type parameter.

The Reverse function can work with slices of any type. The only thing it does with values of type Element is assignment, which works with any type in Go. For this kind of generic function, which is a very common case, we don't need to say anything special about the type parameter.

Let's take a quick look at a different function.

func IndexByte (type T Sequence) (s T, b byte) int {
    for i := 0; i < len(s); i++ {
        if s[i] == b {
            return i
        }
    }
    return -1
}

Currently both the bytes package and the strings package in the standard library have an IndexByte function. This function returns the index of b in the sequence s, where s is either a string or a []byte. We could use this single generic function to replace the two functions in the bytes and strings packages. In practice we may not bother doing that, but this is a useful simple example.
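For a sense of how this would be used, here are hypothetical calls; the draft syntax is not in any released compiler, and as we'll see, the type argument is inferred from the regular argument:

i := IndexByte([]byte("hello"), 'l') // T inferred as []byte; i == 2
j := IndexByte("hello", 'l')         // T inferred as string; j == 2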

Here we need to know that the type parameter T acts like a string or a []byte. We can call len on it, and we can index to it, and we can compare the result of the index operation to a byte value.

To let this compile, the type parameter T itself needs a type. It's a meta-type, but because we sometimes need to describe multiple related types, and because it describes a relationship between the implementation of the generic function and its callers, we actually call the type of T a contract. Here the contract is named Sequence. It appears after the list of type parameters.

This is how the Sequence contract is defined for this example.

contract Sequence(T) {
    T string, []byte
}

It's pretty simple, since this is a simple example: the type parameter T can be either string or []byte. Here contract may be a new keyword, or a special identifier recognized in package scope; see the design draft for details.

Anybody who remembers the design we presented at Gophercon 2018 will see that this way of writing a contract is a lot simpler. We got a lot of feedback on that earlier design that contracts were too complicated, and we've tried to take that into account. The new contracts are much simpler to write, and to read, and to understand.

They let you specify the underlying type of a type parameter, and/or list the methods of a type parameter. They also let you describe the relationship between different type parameters.

Contracts with methods

Here is another simple example, of a function that uses the String method to return a []string of the string representation of all the elements in s.

func ToStrings (type E Stringer) (s []E) []string {
    r := make([]string, len(s))
    for i, v := range s {
        r[i] = v.String()
    }
    return r
}

It's pretty straightforward: walk through the slice, call the String method on each element, and return a slice of the resulting strings.

This function requires that the element type implement the String method. The Stringer contract ensures that.

contract Stringer(T) {
    T String() string
}

The contract simply says that T has to implement the String method.
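A hypothetical use under the draft design, with an element type of my own invention:

type Celsius float64

func (c Celsius) String() string { return fmt.Sprintf("%g°C", float64(c)) }

var temps = []Celsius{20, 22.5}
var labels = ToStrings(temps) // E inferred as Celsius; ["20°C" "22.5°C"]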

You may notice that this contract looks like the fmt.Stringer interface, so it's worth pointing out that the argument of the ToStrings function is not a slice of fmt.Stringer. It's a slice of some element type, where the element type implements fmt.Stringer. The memory representation of a slice of the element type and a slice of fmt.Stringer are normally different, and Go does not support direct conversions between them. So this is worth writing, even though fmt.Stringer exists.

Contracts with multiple types

Here is an example of a contract with multiple type parameters.

type Graph (type Node, Edge G) struct { ... }

contract G(Node, Edge) {
    Node Edges() []Edge
    Edge Nodes() (from Node, to Node)
}

func New (type Node, Edge G) (nodes []Node) *Graph(Node, Edge) {
    ...
}

func (g *Graph(Node, Edge)) ShortestPath(from, to Node) []Edge {
    ...
}

Here we're describing a graph, built from nodes and edges. We're not requiring a particular data structure for the graph. Instead, we're saying that the Node type has to have an Edges method that returns the list of edges that connect to the Node. And the Edge type has to have a Nodes method that returns the two Nodes that the Edge connects.

I've skipped the implementation, but this shows the signature of a New function that returns a Graph, and the signature of a ShortestPath method on Graph.

The important takeaway here is that a contract isn't just about a single type. It can describe the relationships between two or more types.

Ordered types

One surprisingly common complaint about Go is that it doesn't have a Min function. Or, for that matter, a Max function. That's because a useful Min function should work for any ordered type, which means that it has to be generic.

While Min is pretty trivial to write yourself, any useful generics implementation should let us add it to the standard library. This is what it looks like with our design.

func Min (type T Ordered) (a, b T) T {
    if a < b {
        return a
    }
    return b
}

The Ordered contract says that the type T has to be an ordered type, which means that it supports operators like less than, greater than, and so forth.

contract Ordered(T) {
    T int, int8, int16, int32, int64,
        uint, uint8, uint16, uint32, uint64, uintptr,
        float32, float64,
        string
}

The Ordered contract is just a list of all the ordered types that are defined by the language. This contract accepts any of the listed types, or any named type whose underlying type is one of those types. Basically, any type you can use with the less than operator.
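So, for example, a named type like the one below would be accepted. This is hypothetical, using the draft syntax:

type Liters float64 // underlying type float64 is in the Ordered list

low := Min(Liters(20), Liters(25)) // T inferred as Liters; low == Liters(20)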

It turns out that it's much easier to simply enumerate the types that support the less than operator than it is to invent a new notation that works for all operators. After all, in Go, only built-in types support operators.

This same approach can be used for any operator, or more generally to write a contract for any generic function intended to work with builtin types. It lets the writer of the generic function specify clearly the set of types the function is expected to be used with. It lets the caller of the generic function clearly see whether the function is applicable for the types being used.

In practice this contract would probably go into the standard library, and so really the Min function (which will probably also be in the standard library somewhere) will look like this. Here we're just referring to the contract Ordered defined in the package contracts.

func Min (type T contracts.Ordered) (a, b T) T {
    if a < b {
        return a
    }
    return b
}

Generic data structures

Finally, let's look at a simple generic data structure, a binary tree. In this example the tree has a comparison function, so there are no requirements on the element type.

type Tree (type E) struct {
    root    *node(E)
    compare func(E, E) int
}

type node (type E) struct {
    val         E
    left, right *node(E)
}

Here is how to create a new binary tree. The comparison function is passed to the New function.

func New (type E) (cmp func(E, E) int) *Tree(E) {
    return &Tree(E){compare: cmp}
}

An unexported method returns a pointer either to the slot holding v, or to the location in the tree where it should go.

func (t *Tree(E)) find(v E) **node(E) {
    pn := &t.root
    for *pn != nil {
        switch cmp := t.compare(v, (*pn).val); {
        case cmp < 0:
            pn = &(*pn).left
        case cmp > 0:
            pn = &(*pn).right
        default:
            return pn
        }
    }
    return pn
}

The details here don't really matter, especially since I haven't tested this code. I'm just trying to show what it looks like to write a simple generic data structure.

This is the code for testing whether the tree contains a value.

func (t *Tree(E)) Contains(v E) bool {
    return *t.find(v) != nil
}

This is the code for inserting a new value.

func (t *Tree(E)) Insert(v E) bool {
    pn := t.find(v)
    if *pn != nil {
        return false
    }
    *pn = &node(E){val: v}
    return true
}

Notice the type argument E to the type node. This is what it looks like to write a generic data structure. As you can see, it looks like writing ordinary Go code, except that some type arguments are sprinkled in here and there.

Using the tree is pretty simple.

var intTree = tree.New(func(a, b int) int { return a - b })

func InsertAndCheck(v int) {
    intTree.Insert(v)
    if !intTree.Contains(v) {
        log.Fatalf("%d not found after insertion", v)
    }
}

That's as it should be. It's a bit harder to write a generic data structure, because you often have to explicitly write out type arguments for supporting types, but as much as possible using one is no different from using an ordinary non-generic data structure.

Next steps

We are working on actual implementations to allow us to experiment with this design. It's important to be able to try out the design in practice, to make sure that we can write the kinds of programs we want to write. It hasn't gone as fast as we'd hoped, but we'll send out more detail on these implementations as they become available.

Robert Griesemer has written a preliminary CL that modifies the go/types package. This permits testing whether code using generics and contracts can type check. It’s incomplete right now, but it mostly works for a single package, and we’ll keep working on it.

What we'd like people to do with this and future implementations is to try writing and using generic code and see what happens. We want to make sure that people can write the code they need, and that they can use it as expected. Of course not everything is going to work at first, and as we explore this space we may have to change things. And, to be clear, we're much more interested in feedback on the semantics than on details of the syntax.

I’d like to thank everyone who commented on the earlier design, and everyone who has discussed what generics can look like in Go. We’ve read all of the comments, and we greatly appreciate the work that people have put into this. We would not be where we are today without that work.

Our goal is to arrive at a design that makes it possible to write the kinds of generic code I’ve discussed today, without making the language too complex to use or making it not feel like Go anymore. We hope that this design is a step toward that goal, and we expect to continue to adjust it as we learn, from our experiences and yours, what works and what doesn’t. If we do reach that goal, then we’ll have something that we can propose for future versions of Go.


Experiment, Simplify, Ship


Introduction

[This is the blog post version of my talk last week at Gophercon 2019. We will add a video link to the talk once it is available.]

We are all on the path to Go 2, together, but none of us know exactly where that path leads or sometimes even which direction the path goes. This post discusses how we actually find and follow the path to Go 2. Here’s what the process looks like.

We experiment with Go as it exists now, to understand it better, learning what works well and what doesn’t. Then we experiment with possible changes, to understand them better, again learning what works well and what doesn’t. Based on what we learn from those experiments, we simplify. And then we experiment again. And then we simplify again. And so on. And so on.

The Four R’s of Simplifying

During this process, there are four main ways that we can simplify the overall experience of writing Go programs: reshaping, redefining, removing, and restricting.

Simplify by Reshaping

The first way we simplify is by reshaping what exists into a new form, one that ends up being simpler overall.

Every Go program we write serves as an experiment to test Go itself. In the early days of Go, we quickly learned that it was common to write code like this addToList function:

func addToList(list []int, x int) []int {
    n := len(list)
    if n+1 > cap(list) {
        big := make([]int, n, (n+5)*2)
        copy(big, list)
        list = big
    }
    list = list[:n+1]
    list[n] = x
    return list
}

We’d write the same code for slices of bytes, and slices of strings, and so on. Our programs were too complex, because Go was too simple.

So we took the many functions like addToList in our programs and reshaped them into one function provided by Go itself. Adding append made the Go language a little more complex, but on balance it made the overall experience of writing Go programs simpler, even after accounting for the cost of learning about append.
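With append in the language, each call site of addToList collapses to a single line:

list = append(list, x)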

Here’s another example. For Go 1, we looked at the very many development tools in the Go distribution, and we reshaped them into one new command.

5a      8g
5g      8l
5l      cgo
6a      gobuild
6cov    gofix         →     go
6g      goinstall
6l      gomake
6nm     gopack
8a      govet

The go command is so central now that it is easy to forget that we went so long without it and how much extra work that involved.

We added code and complexity to the Go distribution, but on balance we simplified the experience of writing Go programs. The new structure also created space for other interesting experiments, which we’ll see later.

Simplify by Redefining

A second way we simplify is by redefining functionality we already have, allowing it to do more. Like simplifying by reshaping, simplifying by redefining makes programs simpler to write, but now with nothing new to learn.

For example, append was originally defined to read only from slices. When appending to a byte slice, you could append the bytes from another byte slice, but not the bytes from a string. We redefined append to allow appending from a string, without adding anything new to the language.

var b []byte
var more []byte
b = append(b, more...) // ok

var b []byte
var more string
b = append(b, more...) // ok later

Simplify by Removing

A third way we simplify is by removing functionality when it has turned out to be less useful or less important than we expected. Removing functionality means one less thing to learn, one less thing to fix bugs in, one less thing to be distracted by or use incorrectly. Of course, removing also forces users to update existing programs, perhaps making them more complex, to make up for the removal. But the overall result can still be that the process of writing Go programs becomes simpler.

An example of this is when we removed the boolean forms of non-blocking channel operations from the language:

ok := c <- x  // before Go 1, was non-blocking send
x, ok := <-c  // before Go 1, was non-blocking receive

These operations were also possible to do using select, making it confusing to need to decide which form to use. Removing them simplified the language without reducing its power.
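For reference, the select equivalents, which remain valid Go today:

// Non-blocking send.
select {
case c <- x:
    ok = true
default:
    ok = false
}

// Non-blocking receive.
select {
case x = <-c:
    ok = true
default:
    ok = false
}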

Simplify by Restricting

We can also simplify by restricting what is allowed. From day one, Go has restricted the encoding of Go source files: they must be UTF-8. This restriction makes every program that tries to read Go source files simpler. Those programs don’t have to worry about Go source files encoded in Latin-1 or UTF-16 or UTF-7 or anything else.

Another important restriction is gofmt for program formatting. Nothing rejects Go code that isn’t formatted using gofmt, but we have established a convention that tools that rewrite Go programs leave them in gofmt form. If you keep your programs in gofmt form too, then these rewriters don’t make any formatting changes. When you compare before and after, the only diffs you see are real changes. This restriction has simplified program rewriters and led to successful experiments like goimports, gorename, and many others.

Go Development Process

This cycle of experiment and simplify is a good model for what we’ve been doing the past ten years. But it has a problem: it’s too simple. We can’t only experiment and simplify.

We have to ship the result. We have to make it available to use. Of course, using it enables more experiments, and possibly more simplifying, and the process cycles on and on.

We shipped Go to all of you for the first time on November 10, 2009. Then, with your help, we shipped Go 1 together in March 2012. And we’ve shipped twelve Go releases since then. All of these were important milestones, to enable more experimentation, to help us learn more about Go, and of course to make Go available for production use.

When we shipped Go 1, we explicitly shifted our focus to using Go, to understand this version of the language much better before trying any more simplifications involving language changes. We needed to take time to experiment, to really understand what works and what doesn’t.

Of course, we’ve had twelve releases since Go 1, so we have still been experimenting and simplifying and shipping. But we’ve focused on ways to simplify Go development without significant language changes and without breaking existing Go programs. For example, Go 1.5 shipped the first concurrent garbage collector and then the following releases improved it, simplifying Go development by removing pause times as an ongoing concern.

At Gophercon in 2017, we announced that after five years of experimentation, it was again time to think about significant changes that would simplify Go development. Our path to Go 2 is really the same as the path to Go 1: experiment and simplify and ship, towards an overall goal of simplifying Go development.

For Go 2, the concrete topics that we believed were most important to address are error handling, generics, and dependencies. Since then we have realized that another important topic is developer tooling.

The rest of this post discusses how our work in each of these areas follows that path. Along the way, we’ll take one detour, stopping to inspect the technical detail of what will be shipping soon in Go 1.13 for error handling.

Errors

It is hard enough to write a program that works the right way in all cases when all the inputs are valid and correct and nothing the program depends on is failing. When you add errors into the mix, writing a program that works the right way no matter what goes wrong is even harder.

As part of thinking about Go 2, we want to understand better whether Go can help make that job any simpler.

There are two different aspects that could potentially be simplified: error values and error syntax. We’ll look at each in turn, with the technical detour I promised focusing on the Go 1.13 error value changes.

Error Values

Error values had to start somewhere. Here is the Read function from the first version of the os package:

export func Read(fd int64, b *[]byte) (ret int64, errno int64) {
    r, e := syscall.read(fd, &b[0], int64(len(b)));
    return r, e
}

There was no File type yet, and also no error type. Read and the other functions in the package returned an errno int64 directly from the underlying Unix system call.

This code was checked in on September 10, 2008 at 12:14pm. Like everything back then, it was an experiment, and code changed quickly. Two hours and five minutes later, the API changed:

export type Error struct { s string }

func (e *Error) Print() { … } // to standard error!
func (e *Error) String() string { … }

export func Read(fd int64, b *[]byte) (ret int64, err *Error) {
    r, e := syscall.read(fd, &b[0], int64(len(b)));
    return r, ErrnoToError(e)
}

This new API introduced the first Error type. An error held a string and could return that string and also print it to standard error.

The intent here was to generalize beyond integer codes. We knew from past experience that operating system error numbers were too limited a representation, that it would simplify programs not to have to shoehorn all detail about an error into 64 bits. Using error strings had worked reasonably well for us in the past, so we did the same here. This new API lasted seven months.

The next April, after more experience using interfaces, we decided to generalize further and allow user-defined error implementations, by making the os.Error type itself an interface. We simplified by removing the Print method.

For Go 1 two years later, based on a suggestion by Roger Peppe, os.Error became the built-in error type, and the String method was renamed to Error. Nothing has changed since then. But we have written many Go programs, and as a result we have experimented a lot with how best to implement and use errors.

Errors Are Values

Making error a simple interface and allowing many different implementations means we have the entire Go language available to define and inspect errors. We like to say that errors are values, the same as any other Go value.

Here’s an example. On Unix, an attempt to dial a network connection ends up using the connect system call. That system call returns a syscall.Errno, which is a named integer type that represents a system call error number and implements the error interface:

package syscall

type Errno int64

func (e Errno) Error() string { ... }

const ECONNREFUSED = Errno(61)

    ... err == ECONNREFUSED ...

The syscall package also defines named constants for the host operating system’s defined error numbers. In this case, on this system, ECONNREFUSED is number 61. Code that gets an error from a function can test whether the error is ECONNREFUSED using ordinary value equality.

Moving up a level, in package os, any system call failure is reported using a larger error structure that records what operation was attempted in addition to the error. There are a handful of these structures. This one, SyscallError, describes an error invoking a specific system call with no additional information recorded:

package os

type SyscallError struct {
    Syscall string
    Err     error
}

func (e *SyscallError) Error() string {
    return e.Syscall + ": " + e.Err.Error()
}

Moving up another level, in package net, any network failure is reported using an even larger error structure that records the details of the surrounding network operation, such as dial or listen, and the network and addresses involved:

package net

type OpError struct {
    Op     string
    Net    string
    Source Addr
    Addr   Addr
    Err    error
}

func (e *OpError) Error() string { ... }

Putting these together, the errors returned by operations like net.Dial can format as strings, but they are also structured Go data values. In this case, the error is a net.OpError, which adds context to an os.SyscallError, which adds context to a syscall.Errno:

c, err := net.Dial("tcp", "localhost:50001")
// "dial tcp [::1]:50001: connect: connection refused"

err is &net.OpError{
    Op:   "dial",
    Net:  "tcp",
    Addr: &net.TCPAddr{IP: ParseIP("::1"), Port: 50001},
    Err: &os.SyscallError{
        Syscall: "connect",
        Err:     syscall.Errno(61), // == ECONNREFUSED
    },
}

When we say errors are values, we mean both that the entire Go language is available to define them and also that the entire Go language is available to inspect them.

Here is an example from package net. It turns out that when you attempt a socket connection, most of the time you will get connected or get connection refused, but sometimes you can get a spurious EADDRNOTAVAIL, for no good reason. Go shields user programs from this failure mode by retrying. To do this, it has to inspect the error structure to find out whether the syscall.Errno deep inside is EADDRNOTAVAIL.

Here is the code:

func spuriousENOTAVAIL(err error) bool {
    if op, ok := err.(*OpError); ok {
        err = op.Err
    }
    if sys, ok := err.(*os.SyscallError); ok {
        err = sys.Err
    }
    return err == syscall.EADDRNOTAVAIL
}

A type assertion peels away any net.OpError wrapping. And then a second type assertion peels away any os.SyscallError wrapping. And then the function checks the unwrapped error for equality with EADDRNOTAVAIL.

What we’ve learned from years of experience, from this experimenting with Go errors, is that it is very powerful to be able to define arbitrary implementations of the error interface, to have the full Go language available both to construct and to deconstruct errors, and not to require the use of any single implementation.

These properties—that errors are values, and that there is not one required error implementation—are important to preserve.

Not mandating one error implementation enabled everyone to experiment with additional functionality that an error might provide, leading to many packages, such as github.com/pkg/errors, gopkg.in/errgo.v2, github.com/hashicorp/errwrap, upspin.io/errors, github.com/spacemonkeygo/errors, and more.

One problem with unconstrained experimentation, though, is that as a client you have to program to the union of all the possible implementations you might encounter. A simplification that seemed worth exploring for Go 2 was to define a standard version of commonly-added functionality, in the form of agreed-upon optional interfaces, so that different implementations could interoperate.

Unwrap

The most commonly-added functionality in these packages is some method that can be called to remove context from an error, returning the error inside. Packages use different names and meanings for this operation, and sometimes it removes one level of context, while sometimes it removes as many levels as possible.

For Go 1.13, we have introduced a convention that an error implementation adding removable context to an inner error should implement an Unwrap method that returns the inner error, unwrapping the context. If there is no inner error appropriate to expose to callers, either the error shouldn’t have an Unwrap method, or the Unwrap method should return nil.

// Go 1.13 optional method for error implementations.
interface {
    // Unwrap removes one layer of context,
    // returning the inner error if any, or else nil.
    Unwrap() error
}

The way to call this optional method is to invoke the helper function errors.Unwrap, which handles cases like the error itself being nil or not having an Unwrap method at all.

package errors

// Unwrap returns the result of calling
// the Unwrap method on err,
// if err’s type defines an Unwrap method.
// Otherwise, Unwrap returns nil.
func Unwrap(err error) error
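The helper itself is small. Here is a sketch that is equivalent in effect to the Go 1.13 function:

func Unwrap(err error) error {
    u, ok := err.(interface{ Unwrap() error })
    if !ok {
        return nil // err is nil or has no Unwrap method
    }
    return u.Unwrap()
}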

We can use the Unwrap method to write a simpler, more general version of spuriousENOTAVAIL. Instead of looking for specific error wrapper implementations like net.OpError or os.SyscallError, the general version can loop, calling Unwrap to remove context, until either it reaches EADDRNOTAVAIL or there’s no error left:

func spuriousENOTAVAIL(err error) bool {
    for err != nil {
        if err == syscall.EADDRNOTAVAIL {
            return true
        }
        err = errors.Unwrap(err)
    }
    return false
}

This loop is so common, though, that Go 1.13 defines a second function, errors.Is, that repeatedly unwraps an error looking for a specific target. So we can replace the entire loop with a single call to errors.Is:

func spuriousENOTAVAIL(err error) bool {
    return errors.Is(err, syscall.EADDRNOTAVAIL)
}

At this point we probably wouldn’t even define the function; it would be equally clear, and simpler, to call errors.Is directly at the call sites.

Go 1.13 also introduces a function errors.As that unwraps until it finds a specific implementation type.

If you want to write code that works with arbitrarily-wrapped errors, errors.Is is the wrapper-aware version of an error equality check:

err == target
    →
errors.Is(err, target)

And errors.As is the wrapper-aware version of an error type assertion:

target, ok := err.(*Type)
if ok {
    ...
}
    →
var target *Type
if errors.As(err, &target) {
    ...
}
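For instance, here is a sketch of fishing a *os.SyscallError out of an arbitrarily-wrapped error using the Go 1.13 API (the log message is mine):

var sysErr *os.SyscallError
if errors.As(err, &sysErr) {
    log.Printf("system call %s failed: %v", sysErr.Syscall, sysErr.Err)
}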

To Unwrap Or Not To Unwrap?

Whether to make it possible to unwrap an error is an API decision, the same way that whether to export a struct field is an API decision. Sometimes it is appropriate to expose that detail to calling code, and sometimes it isn’t. When it is, implement Unwrap. When it isn’t, don’t implement Unwrap.

Until now, fmt.Errorf has not exposed an underlying error formatted with %v to caller inspection. That is, the result of fmt.Errorf has not been possible to unwrap. Consider this example:

// errors.Unwrap(err2) == nil
// err1 is not available (same as earlier Go versions)
err2 := fmt.Errorf("connect: %v", err1)

If err2 is returned to a caller, that caller has never had any way to open up err2 and access err1. We preserved that property in Go 1.13.

For the times when you do want to allow unwrapping the result of fmt.Errorf, we also added a new printing verb %w, which formats like %v, requires an error value argument, and makes the resulting error’s Unwrap method return that argument. In our example, suppose we replace %v with %w:

// errors.Unwrap(err4) == err3
// (%w is new in Go 1.13)
err4 := fmt.Errorf("connect: %w", err3)

Now, if err4 is returned to a caller, the caller can use Unwrap to retrieve err3.
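And because errors.Is unwraps repeatedly, it sees through chains built with %w. A small sketch:

base := errors.New("permission denied")
wrapped := fmt.Errorf("open config: %w", base)

fmt.Println(errors.Unwrap(wrapped) == base) // true
fmt.Println(errors.Is(wrapped, base))       // true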

It is important to note that absolute rules like “always use %v (or never implement Unwrap)” or “always use %w (or always implement Unwrap)” are as wrong as absolute rules like “never export struct fields” or “always export struct fields.” Instead, the right decision depends on whether callers should be able to inspect and depend on the additional information that using %w or implementing Unwrap exposes.

As an illustration of this point, every error-wrapping type in the standard library that already had an exported Err field now also has an Unwrap method returning that field, but implementations with unexported error fields do not, and existing uses of fmt.Errorf with %v still use %v, not %w.

Error Value Printing (Abandoned)

Along with the design draft for Unwrap, we also published a design draft for an optional method for richer error printing, including stack frame information and support for localized, translated errors.

// Optional method for error implementations
type Formatter interface {
    Format(p Printer) (next error)
}

// Interface passed to Format
type Printer interface {
    Print(args ...interface{})
    Printf(format string, args ...interface{})
    Detail() bool
}

This one is not as simple as Unwrap, and I won’t go into the details here. As we discussed the design with the Go community over the winter, we learned that the design wasn’t simple enough. It was too hard for individual error types to implement, and it did not help existing programs enough. On balance, it did not simplify Go development.

As a result of this community discussion, we abandoned this printing design.

Error Syntax

That was error values. Let’s look briefly at error syntax, another abandoned experiment.

Here is some code from compress/lzw/writer.go in the standard library:

// Write the savedCode if valid.
if e.savedCode != invalidCode {
    if err := e.write(e, e.savedCode); err != nil {
        return err
    }
    if err := e.incHi(); err != nil && err != errOutOfCodes {
        return err
    }
}

// Write the eof code.
eof := uint32(1)<<e.litWidth + 1
if err := e.write(e, eof); err != nil {
    return err
}

At a glance, this code is about half error checks. My eyes glaze over when I read it. And we know that code that is tedious to write and tedious to read is easy to misread, making it a good home for hard-to-find bugs. For example, one of these three error checks is not like the others, a fact that is easy to miss on a quick skim. If you were debugging this code, how long would it take to notice that?

At Gophercon last year we presented a draft design for a new control flow construct marked by the keyword check. Check consumes the error result from a function call or expression. If the error is non-nil, the check returns that error. Otherwise the check evaluates to the other results from the call. We can use check to simplify the lzw code:

// Write the savedCode if valid.
if e.savedCode != invalidCode {
    check e.write(e, e.savedCode)
    if err := e.incHi(); err != errOutOfCodes {
        check err
    }
}

// Write the eof code.
eof := uint32(1)<<e.litWidth + 1
check e.write(e, eof)

This version of the same code uses check, which removes four lines of code and more importantly highlights that the call to e.incHi is allowed to return errOutOfCodes.

Maybe most importantly, the design also allowed defining error handler blocks to be run when later checks failed. That would let you write shared context-adding code just once, like in this snippet:

handle err {
    err = fmt.Errorf("closing writer: %w", err)
}

// Write the savedCode if valid.
if e.savedCode != invalidCode {
    check e.write(e, e.savedCode)
    if err := e.incHi(); err != errOutOfCodes {
        check err
    }
}

// Write the eof code.
eof := uint32(1)<<e.litWidth + 1
check e.write(e, eof)

In essence, check was a short way to write the if statement, and handle was like defer but only for error return paths. In contrast to exceptions in other languages, this design retained Go’s important property that every potential failing call was marked explicitly in the code, now using the check keyword instead of if err != nil.

The big problem with this design was that handle overlapped too much, and in confusing ways, with defer.

In May we posted a new design with three simplifications: to avoid the confusion with defer, the design dropped handle in favor of just using defer; to match a similar idea in Rust and Swift, the design renamed check to try; and to allow experimentation in a way that existing parsers like gofmt would recognize, it changed check (now try) from a keyword to a built-in function.

Now the same code would look like this:

defer errd.Wrapf(&err, "closing writer")

// Write the savedCode if valid.
if e.savedCode != invalidCode {
    try(e.write(e, e.savedCode))
    if err := e.incHi(); err != errOutOfCodes {
        try(err)
    }
}

// Write the eof code.
eof := uint32(1)<<e.litWidth + 1
try(e.write(e, eof))

We spent most of June discussing this proposal publicly on GitHub.

The fundamental idea of check or try was to shorten the amount of syntax repeated at each error check, and in particular to remove the return statement from view, keeping the error check explicit and better highlighting interesting variations. One interesting point raised during the public feedback discussion, however, was that without an explicit if statement and return, there’s nowhere to put a debugging print, there’s nowhere to put a breakpoint, and there’s no code to show as unexecuted in code coverage results. The benefits we were after came at the cost of making these situations more complex. On balance, from this as well as other considerations, it was not at all clear that the overall result would be simpler Go development, so we abandoned this experiment.

That’s everything about error handling, which was one of the main focuses for this year.

Generics

Now for something a little less controversial: generics.

The second big topic we identified for Go 2 was some kind of way to write code with type parameters. This would enable writing generic data structures and also writing generic functions that work with any kind of slice, or any kind of channel, or any kind of map. For example, here is a generic channel filter:

// Filter copies values from c to the returned channel,
// passing along only those values satisfying f.
func Filter(type value)(f func(value) bool, c <-chan value) <-chan value {
    out := make(chan value)
    go func() {
        for v := range c {
            if f(v) {
                out <- v
            }
        }
        close(out)
    }()
    return out
}

We’ve been thinking about generics since work on Go began, and we wrote and rejected our first concrete design in 2010. We wrote and rejected three more designs by the end of 2013. Four abandoned experiments, but not failed experiments: we learned from them, like we learned from check and try. Each time, we learned that the path to Go 2 is not in that exact direction, and we noticed other directions that might be interesting to explore. But by 2013 we had decided that we needed to focus on other concerns, so we put the entire topic aside for a few years.

Last year we started exploring and experimenting again, and we presented a new design, based on the idea of a contract, at Gophercon last summer. We’ve continued to experiment and simplify, and we’ve been working with programming language theory experts to understand the design better.

Overall, I am hopeful that we’re headed in a good direction, toward a design that will simplify Go development. Even so, we might find that this design doesn’t work either. We might have to abandon this experiment and adjust our path based on what we learned. We’ll find out.

At Gophercon 2019, Ian Lance Taylor talked about why we might want to add generics to Go and briefly previewed the latest design draft. For details, see his blog post “Why Generics?”.

Dependencies

The third big topic we identified for Go 2 was dependency management.

In 2010 we published a tool called goinstall, which we called “an experiment in package installation.” It downloaded dependencies and stored them in your Go distribution tree, in GOROOT.

As we experimented with goinstall, we learned that the Go distribution and the installed packages should be kept separate, so that it was possible to change to a new Go distribution without losing all your Go packages. So in 2011 we introduced GOPATH, an environment variable that specified where to look for packages not found in the main Go distribution.

Adding GOPATH created more places for Go packages but simplified Go development overall, by separating your Go distribution from your Go libraries.

Compatibility

The goinstall experiment intentionally left out an explicit concept of package versioning. Instead, goinstall always downloaded the latest copy. We did this so we could focus on the other design problems for package installation.

Goinstall became go get as part of Go 1. When people asked about versions, we encouraged them to experiment by creating additional tools, and they did. And we encouraged package authors to provide their users with the same backwards compatibility we did for the Go 1 libraries. Quoting the Go FAQ:

“Packages intended for public use should try to maintain backwards compatibility as they evolve.

If different functionality is required, add a new name instead of changing an old one.

If a complete break is required, create a new package with a new import path.”

This convention simplifies the overall experience of using a package by restricting what authors can do: avoid breaking changes to APIs; give new functionality a new name; and give a whole new package design a new import path.

Of course, people kept experimenting. One of the most interesting experiments was started by Gustavo Niemeyer. He created a Git redirector called gopkg.in, which provided different import paths for different API versions, to help package authors follow the convention of giving a new package design a new import path.

For example, the Go source code in the GitHub repository go-yaml/yaml has different APIs in the v1 and v2 semantic version tags. The gopkg.in server provides these with different import paths gopkg.in/yaml.v1 and gopkg.in/yaml.v2.

The convention of providing backwards compatibility, so that a newer version of a package can be used in place of an older version, is what makes go get’s very simple rule—“always download the latest copy”—work well even today.

Versioning And Vendoring

But in production contexts you need to be more precise about dependency versions, to make builds reproducible.

Many people experimented with what that should look like, building tools that served their needs, including Keith Rarick’s goven (2012) and godep (2013), Matt Butcher’s glide (2014), and Dave Cheney’s gb (2015). All of these tools use the model that you copy dependency packages into your own source control repository. The exact mechanisms used to make those packages available for import varied, but they were all more complex than it seemed they should be.

After a community-wide discussion, we adopted a proposal by Keith Rarick to add explicit support for referring to copied dependencies without GOPATH tricks. This was simplifying by reshaping: like with addToList and append, these tools were already implementing the concept, but it was more awkward than it needed to be. Adding explicit support for vendor directories made these uses simpler overall.

Shipping vendor directories in the go command led to more experimentation with vendoring itself, and we realized that we had introduced a few problems. The most serious was that we lost package uniqueness. Before, during any given build, an import path might appear in lots of different packages, and all the imports referred to the same target. Now with vendoring, the same import path in different packages might refer to different vendored copies of the package, all of which would appear in the final resulting binary.

At the time, we didn’t have a name for this property: package uniqueness. It was just how the GOPATH model worked. We didn’t completely appreciate it until it went away.

There is a parallel here with the check and try error syntax proposals. In that case, we were relying on how the visible return statement worked in ways we didn’t appreciate until we considered removing it.

When we added vendor directory support, there were many different tools for managing dependencies. We thought that a clear agreement about the format of vendor directories and vendoring metadata would allow the various tools to interoperate, the same way that agreement about how Go programs are stored in text files enables interoperation between the Go compiler, text editors, and tools like goimports and gorename.

This turned out to be naively optimistic. The vendoring tools all differed in subtle semantic ways. Interoperation would require changing them all to agree about the semantics, likely breaking their respective users. Convergence did not happen.

Dep

At Gophercon in 2016, we started an effort to define a single tool to manage dependencies. As part of that effort, we conducted surveys with many different kinds of users to understand what they needed as far as dependency management, and a team started work on a new tool, which became dep.

Dep aimed to be able to replace all the existing dependency management tools. The goal was to simplify by reshaping the existing different tools into a single one. It partly accomplished that. Dep also restored package uniqueness for its users, by having only one vendor directory at the top of the project tree.

But dep also introduced a serious problem that took us a while to fully appreciate. The problem was that dep embraced a design choice from glide, to support and encourage incompatible changes to a given package without changing the import path.

Here is an example. Suppose you are building your own program, and you need to have a configuration file, so you use version 2 of a popular Go YAML package:
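import "gopkg.in/yaml.v2" // presumably the v2 import path served by gopkg.in, as described above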

Now suppose your program imports the Kubernetes client. It turns out that Kubernetes uses YAML extensively, and it uses version 1 of the same popular package:
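import "k8s.io/client-go/kubernetes" // one plausible path for the Kubernetes client, which pulls in gopkg.in/yaml.v1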

Version 1 and version 2 have incompatible APIs, but they also have different import paths, so there is no ambiguity about which is meant by a given import. Kubernetes gets version 1, your config parser gets version 2, and everything works.

Dep abandoned this model. Version 1 and version 2 of the yaml package would now have the same import path, producing a conflict. Using the same import path for two incompatible versions, combined with package uniqueness, makes it impossible to build this program that you could build before:
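import (
    "gopkg.in/yaml.v2"            // your config parser wants yaml v2
    "k8s.io/client-go/kubernetes" // Kubernetes wants yaml v1, indirectly
)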

It took us a while to understand this problem, because we had been applying the “new API means new import path” convention for so long that we took it for granted. The dep experiment helped us appreciate that convention better, and we gave it a name: the import compatibility rule:

“If an old package and a new package have the same import path, the new package must be backwards compatible with the old package.”

Go Modules

We took what worked well in the dep experiment and what we learned about what didn’t work well, and we experimented with a new design, called vgo. In vgo, packages followed the import compatibility rule, so that we can provide package uniqueness but still not break builds like the one we just looked at. This let us simplify other parts of the design as well.

Besides restoring the import compatibility rule, another important part of the vgo design was to give the concept of a group of packages a name and to allow that grouping to be separated from source code repository boundaries. The name of a group of Go packages is a module, so we refer to the system now as Go modules.

Go modules are now integrated with the go command, which avoids needing to copy around vendor directories at all.

Replacing GOPATH

With Go modules comes the end of GOPATH as a global name space. Nearly all the hard work of converting existing Go usage and tools to modules is caused by this change: moving away from GOPATH.

The fundamental idea of GOPATH is that the GOPATH directory tree is the global source of truth for what versions are being used, and the versions being used don’t change as you move around between directories. But the global GOPATH mode is in direct conflict with the production requirement of per-project reproducible builds, which itself simplifies the Go development and deployment experience in many important ways.

Per-project reproducible builds means that when you are working in a checkout of project A, you get the same set of dependency versions that the other developers of project A get at that commit, as defined by the go.mod file. When you switch to working in a checkout of project B, now you get that project’s chosen dependency versions, the same set that the other developers of project B get. But those are likely different from project A. The set of dependency versions changing when you move from project A to project B is necessary to keep your development in sync with that of the other developers on A and on B. There can’t be a single global GOPATH anymore.
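Each project records its chosen dependency versions in its go.mod file. Here is a hypothetical example; the module path and version number are invented for illustration:

module example.com/projecta

go 1.13

require gopkg.in/yaml.v2 v2.2.2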

Most of the complexity of adopting modules arises directly from the loss of the one global GOPATH. Where is the source code for a package? Before, the answer depended only on your GOPATH environment variable, which most people rarely changed. Now, the answer depends on what project you are working on, which may change often. Everything needs updating for this new convention.

Most development tools use the go/build package to find and load Go source code. We’ve kept that package working, but the API did not anticipate modules, and the workarounds we added to avoid API changes are slower than we’d like. We’ve published a replacement, golang.org/x/tools/go/packages. Developer tools should now use that instead. It supports both GOPATH and Go modules, and it is faster and easier to use. In a release or two we may move it into the standard library, but for now golang.org/x/tools/go/packages is stable and ready for use.

Go Module Proxies

One of the ways modules simplify Go development is by separating the concept of a group of packages from the underlying source control repository where they are stored.

When we talked to Go users about dependencies, almost everyone using Go at their companies asked how to route go get package fetches through their own servers, to better control what code can be used. And even open-source developers were concerned about dependencies disappearing or changing unexpectedly, breaking their builds. Before modules, users had attempted complex solutions to these problems, including intercepting the version control commands that the go command runs.

The Go modules design makes it easy to introduce the idea of a module proxy that can be asked for a specific module version.

Companies can now easily run their own module proxy, with custom rules about what is allowed and where cached copies are stored. The open-source Athens project has built just such a proxy, and Aaron Schlesinger gave a talk about it at Gophercon 2019. (We’ll add a link here when the video becomes available.)

And for individual developers and open source teams, the Go team at Google has launched a proxy that serves as a public mirror of all open-source Go packages, and Go 1.13 will use that proxy by default when in module mode. Katie Hockman gave a talk about this system at Gophercon 2019. (We’ll add a link here when the video becomes available.)
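If you want to opt in before Go 1.13, or point the go command at a different proxy, the GOPROXY environment variable is the knob; a sketch:

# Go 1.12 and earlier: set the environment variable directly.
$ export GOPROXY=https://proxy.golang.org

# Go 1.13: the go command can also record the setting for you.
$ go env -w GOPROXY=https://proxy.golang.org,direct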

Go Modules Status

Go 1.11 introduced modules as an experimental, opt-in preview. We keep experimenting and simplifying. Go 1.12 shipped improvements, and Go 1.13 will ship more improvements.

Modules are now at the point where we believe that they will serve most users, but we aren’t ready to shut down GOPATH just yet. We will keep experimenting, simplifying, and revising.

We fully recognize that the Go user community built up almost a decade of experience and tooling and workflows around GOPATH, and it will take a while to convert all of that to Go modules.

But again, we think that modules will now work very well for most users, and I encourage you to take a look when Go 1.13 is released.

As one data point, the Kubernetes project has a lot of dependencies, and they have migrated to using Go modules to manage them. You probably can too. And if you can’t, please let us know what’s not working for you or what’s too complex, by filing a bug report, and we will experiment and simplify.

Tools

Error handling, generics, and dependency management are going to take a few more years at least, and we’re going to focus on them for now. Error handling is close to done, modules will be next after that, and maybe generics after that.

But suppose we look a couple years out, to when we are done experimenting and simplifying and have shipped error handling, modules, and generics. Then what? It’s very difficult to predict the future, but I think that once these three have shipped, that may mark the start of a new quiet period for major changes. Our focus at that point will likely shift to simplifying Go development with improved tools.

Some of the tool work is already underway, so this post finishes by looking at that.

While we helped update all the Go community’s existing tools to understand Go modules, we noticed that having a ton of development helper tools that each do one small job is not serving users well. The individual tools are too hard to combine, too slow to invoke, and too different to use.

We began an effort to unify the most commonly-required development helpers into a single tool, now called gopls (pronounced “go, please”). Gopls speaks the Language Server Protocol, LSP, and works with any integrated development environment or text editor with LSP support, which is essentially everything at this point.

Gopls marks an expansion in focus for the Go project, from delivering standalone compiler-like, command-line tools like go vet or gorename to also delivering a complete IDE service. Rebecca Stambler gave a talk with more details about gopls and IDEs at Gophercon 2019. (We’ll add a link here when the video becomes available.)

After gopls, we also have ideas for reviving go fix in an extensible way and for making go vet even more helpful.

Coda

So there’s the path to Go 2. We will experiment and simplify. And experiment and simplify. And ship. And experiment and simplify. And do it all again. It may look or even feel like the path goes around in circles. But each time we experiment and simplify we learn a little more about what Go 2 should look like and move another step closer to it. Even abandoned experiments like try or our first four generics designs or dep are not wasted time. They help us learn what needs to be simplified before we can ship, and in some cases they help us better understand something we took for granted.

At some point we will realize we have experimented enough, and simplified enough, and shipped enough, and we will have Go 2.

Thanks to all of you in the Go community for helping us experiment and simplify and ship and find our way on this path.

Contributors Summit 2019


Introduction

For the third year in a row, the Go team and contributors convened the day before GopherCon to discuss and plan for the future of the Go project. The event included self-organizing into breakout groups, a town-hall style discussion about the proposal process in the morning, and afternoon break-out roundtable discussions based on topics our contributors chose. We asked five contributors to write about their experience in various discussions at this year’s summit.

(Photo by Steve Francia.)

Compiler and Runtime (report by Lynn Boger)

The Go contributors summit was a great opportunity to meet and discuss topics and ideas with others who also contribute to Go.

The day started out with a time to meet everyone in the room. There was a good mix of the core Go team and others who actively contribute to Go. From there we decided what topics were of interest and how to split the big group into smaller groups. My area of interest is the compiler, so I joined that group and stayed with them for most of the time.

At our first meeting, a long list of topics was brought up, and as a result the compiler group decided to keep meeting throughout the day. I had a few topics of interest that I shared, and many that others suggested were also of interest to me. Not all items on the list were discussed in detail; here is my list of those topics which had the most interest and discussion, followed by some brief comments that were made on other topics.

Binary size. There was a concern expressed about binary size, especially that it continues to grow with each release. Some possible reasons were identified, such as increased inlining and other optimizations. Most likely there is a set of users who want small binaries, another group who wants the best performance possible, and maybe some who don’t care. This led to the topic of TinyGo, and it was noted that TinyGo was not a full implementation of Go and that it is important to keep TinyGo from diverging from Go and splitting the user base. More investigation is required to understand the needs among users and the exact reasons contributing to the current size. If there are opportunities to reduce size without affecting performance, those changes could be made; but if performance were affected, some users would still prefer better performance.

Vector assembly. How to leverage vector assembly in Go was discussed for a while and has been a topic of interest in the past. I have split this into three separate possibilities, since they all relate to the use of vector instructions, but the ways they are used are different, starting with the topic of vector assembly. This is another case of a compiler trade-off.

For most targets, there are critical functions in standard packages such as crypto, hash, math and others, where the use of assembly is necessary to get the best possible performance; however having large functions written in assembly makes them difficult to support and maintain and could require different implementations for each target platform. One solution is to make use of macro assembly or other high-level generation techniques to make the vector assembly easier to read and understand.

Another side to this question is whether the Go compiler can directly generate SIMD vector instructions when compiling a Go source file, by enhancing the Go compiler to transform code sequences to “simdize” the code to make use of vector instructions. Implementing SIMD in the Go compiler would add complexity and compile time, and might not always result in code that performs better. The way the code is transformed could in some cases depend on the target platform so that would not be ideal.

Another way to leverage vector instructions in Go is to make it easier to use them from within Go source code. Topics discussed included intrinsics, and implementations that exist in other compilers like Rust. In gcc some platforms provide inline asm, and Go could possibly provide this capability too, but I know from experience that intermixing inline asm with Go code adds complexity to the compiler in terms of tracking register use and debugging. It allows the user to do things the compiler might not expect or want, it adds an extra level of complexity, and it could be inserted in places that are not ideal.

In summary, it is important to provide a way to leverage the available vector instructions, and to make that way easier and safer to use. Where possible, functions should use as much Go code as possible, potentially combined with some form of high-level assembly. There was some discussion of designing an experimental vector package to try to implement some of these ideas.

New calling convention. Several people were interested in the topic of the ABI changes to provide a register-based calling convention. The current status was reported with details. There was discussion on what remained to be done before it could be used. The ABI specification needs to be written first, and it was not clear when that would be done. I know this will benefit some target platforms more than others, and a register calling convention is used in most compilers for other platforms.

General optimizations. Certain optimizations that are more beneficial on platforms other than x86 were discussed. In particular, loop optimizations such as hoisting of invariants and strength reduction could be done and would provide more benefit on some platforms. Potential solutions were discussed, and implementation would probably be up to the targets that find those improvements important.

Feedback-directed optimizations. This was discussed and debated as a possible future enhancement. In my experience, it is hard to find meaningful programs to use for collecting performance data that can later be used to optimize code. It increases compile time and takes a lot of space to save the data which might only be meaningful for a small set of programs.

Pending submissions. A few members in the group mentioned changes they had been working on and plan to submit soon, including improvements to makeslice, and a rewrite of rulegen.

Compile time concerns. Compile time was discussed briefly. It was noted that phase timing was added to the GOSSAFUNC output.

Compiler contributor communication. Someone asked if there was a need for a Go compiler mailing list. It was suggested that we use golang-dev for that purpose, adding compiler to the subject line to identify it. If there is too much traffic on golang-dev, then a compiler-specific mailing list can be considered at some later point in time.

Community. I found the day very beneficial in terms of connecting with people who have been active in the community and have similar areas of interest. I was able to meet many people who I’ve only known by the user name appearing in issues or mailing lists or CLs. I was able to discuss some topics and existing issues and get direct interactive feedback instead of waiting for online responses. I was encouraged to write issues on problems I have seen. These connections happened not just during this day but while running into others throughout the conference, having been introduced on this first day, which led to many interesting discussions. Hopefully these connections will lead to more effective communication and improved handling of issues and code changes in the future.

Tools (report by Paul Jolly)

The tools breakout session during the contributor summit took an extended form, with two further sessions on the main conference days organized by the golang-tools group. This summary is broken down into two parts: the tools session at the contributor workshop, and a combined report from the golang-tools sessions on the main conference days.

Contributor summit. The tools session started with introductions from ~25 folks gathered, followed by a brainstorming of topics, including: gopls, ARM 32-bit, eval, signal, analysis, go/packages api, refactoring, pprof, module experience, mono repo analysis, go mobile, dependencies, editor integrations, compiler opt decisions, debugging, visualization, documentation. A lot of people with lots of interest in lots of tools!

The session focused on two areas (all that time allowed): gopls and visualizations. Gopls (pronounced: “go please”) is an implementation of the Language Server Protocol (LSP) server for Go. Rebecca Stambler, the gopls lead author, and the rest of the Go tools team were interested in hearing people’s experiences with gopls: stability, missing features, integrations in editors working, etc.? The general feeling was that gopls was in really good shape and working extremely well for the majority of use cases. Integration test coverage needs to be improved, but this is a hard problem to get “right” across all editors. We discussed a better means of users reporting gopls errors they encounter via their editor, telemetry/diagnostics, gopls performance metrics, all subjects that got more detailed coverage in golang-tools sessions that followed on the main conference days (see below). A key area of discussion was how to extend gopls, e.g., in the form of additional go/analysis vet-like checks, lint checks, refactoring, etc. Currently there is no good solution, but it’s actively under investigation. Conversation shifted to the very broad topic of visualizations, with a demo-based introduction from Anthony Starks (who, incidentally, gave an excellent talk about Go for information displays at GopherCon 2018).

Conference days. The golang-tools sessions on the main conference days were a continuation of the monthly calls that have been happening since the group’s inception at GopherCon 2018. Full notes are available for the day 1 and day 2 sessions. These sessions were again well attended with 25-30 people at each session. The Go tools team was there in strength (a good sign of the support being put behind this area), as was the Uber platform team. In contrast to the contributor summit, the goal from these sessions was to come away with specific action items.

Gopls. Gopls “readiness” was a major focus for both sessions. The answer effectively boiled down to determining when it makes sense to tell editor integrators “we have a good first cut of gopls” and then compiling a list of “blessed” editor integrations/plugins known to work with gopls. Central to this “certification” of editor integrations/plugins is a well-defined process by which users can report problems they experience with gopls. Performance and memory are not blockers for this initial “release”. The conversation about how to extend gopls, started in the contributor summit the day before, continued in earnest. Despite the many obvious benefits and attractions to extending gopls (custom go/analysis checks, linter support, refactoring, code generation…), there isn’t a clear answer on how to implement this in a scalable way. Those gathered agreed that this should not be seen as a blocker for the initial “release”, but should continue to be worked on. In the spirit of gopls and editor integrations, Heschi Kreinick from the Go tools team brought up the topic of debugging support. Delve has become the de facto debugger for Go and is in good shape; now the state of debugger-editor integration needs to be established, following a process similar to that of gopls and the “blessed” integrations.

Go Discovery Site. The second golang-tools session started with an excellent introduction to the Go Discovery Site by Julie Qiu from the Go tools team, along with a quick demo. Julie talked about the plans for the Discovery Site: open sourcing the project, what signals are used in search ranking, how godoc.org will ultimately be replaced, how submodules should work, how users can discover new major versions.

Build Tags. Conversation then moved to build tag support within gopls. This is an area that clearly needs to be better understood (use cases are currently being gathered in issue 33389). In light of this conversation, the session wrapped up with Alexander Zolotov from the JetBrains GoLand team suggesting that the gopls and GoLand teams should share experience in this and more areas, given GoLand has already gained lots of experience.

Join Us! We could easily have talked about tools-related topics for days! The good news is that the golang-tools calls will continue for the foreseeable future. Anyone interested in Go tooling is very much encouraged to join: the wiki has more details.

Enterprise Use (report by Daniel Theophanes)

Actively asking after the needs of less vocal developers will be the largest challenge, and greatest win, for the Go language. There is a large segment of programmers who don’t actively participate in the Go community. Some are business associates, marketers, or quality assurance staff who also do development. Some will wear management hats and make hiring or technology decisions. Others just do their job and return to their families. And lastly, many times these developers work in businesses with strict IP protection contracts. Even though most of these developers won’t end up directly participating in open source or the Go community proposals, their ability to use Go depends on both.

The Go community and Go proposals need to understand the needs of these less vocal developers. Go proposals can have a large impact on what is adopted and used. For instance, the vendor folder and later the Go modules proxy are incredibly important for businesses that strictly control source code and typically have fewer direct conversations with the Go community. Having these mechanisms allows these organizations to use Go at all. It follows that we must not only pay attention to current Go users, but also to developers and organizations who have considered Go, but have chosen against it. We need to understand these reasons.

Similarly, if the Go community paid attention to “enterprise” environments, it would unlock many additional organizations that could use Go. By ensuring active directory authentication works, users who would be forced to use a different ecosystem can keep Go on the table. By ensuring WSDL just works, a section of users can pick Go up as a tool. No one suggested blindly making changes to appease non-Go users. But rather we should be aware of untapped potential and unrecognized hindrances in the Go language and ecosystem.

While several different possibilities for actively soliciting this information from the outside were discussed, this is a problem for which we fundamentally need your help. If you are in an organization that doesn’t use Go even though it was considered, let us know why Go wasn’t chosen. If you are in an organization where Go is only used for a subsection of programming tasks, but not others, why isn’t it used for more? Are there specific blockers to adoption?

Education (report by Andy Walker)

One of the roundtables I was involved in at the Contributors Summit this year was on the topic of Go education, specifically what kind of resources we make available to the new Go programmer, and how we can improve them. Present were a number of very passionate organizers, engineers and educators, each of whom had a unique perspective on the subject, either through tools they’d designed, documents they’d written or workshops they’d given to developers of all stripes.

Early on, talk turned to whether or not Go makes a good first programming language. I wasn’t sure, and advocated against it. Go isn’t a good first language, I argued, because it isn’t intended to be. As Rob Pike wrote back in 2012, “the language was designed by and for people who write—and read and debug and maintain—large software systems”. To me, this guiding ethos is clear: Go is a deliberate response to perceived flaws in the processes used by experienced engineers, not an attempt to create an ideal programming language, and as such a certain basic familiarity with programming concepts is assumed.

This is evident in the official documentation at golang.org/doc. It jumps right into how to install the language before passing the user on to the tour, which is geared towards programmers who are already familiar with a C-like language. From there, they are taken to How to Write Go Code, which provides a very basic introduction to the classic non-module Go workspace, before moving immediately on to writing libraries and testing. Finally, we have Effective Go, and a series of references including the spec, rounded out by some examples. These are all decent resources if you’re already familiar with a C-like language, but they still leave a lot to be desired, and there’s nothing to be found for the raw beginner or even someone coming directly from a language like Python.

As an accessible, interactive starting point, the tour is a natural first target towards making the language more beginner friendly, and I think a lot of headway can be made targeting that alone. First, it should be the first link in the documentation, if not the first link in the bar at the top of golang.org, front and center. We should encourage the curious user to jump right in and start playing with the language. We should also consider including optional introductory sections on coming from other common languages, and the differences they are likely to encounter in Go, with interactive exercises. This would go a long way to helping new Go programmers in mapping the concepts they are already familiar with onto Go.

For experienced programmers, an optional, deeper treatment should be given to most sections in the tour, allowing them to drill down into more detailed documentation or interactive exercises enumerating the design decisions and principles of good architecture in Go. They should find answers to questions like:

  • Why are there so many integer types when I am encouraged to use int most of the time?
  • Is there ever a good reason to pick a value receiver?
  • Why is there a plain int, but no plain float?
  • What are send- and receive-only channels, and when would I use them?
  • How do I effectively compose concurrency primitives, and when would I not want to use channels?
  • What is uint good for? Should I use it to restrict my user to positive values? Why not?

The tour should be someplace they can revisit upon finishing the first run-through to dive more deeply into some of the more interesting choices in language design.

But we can do more. Many people seek out programming as a way to design applications or scratch a particular itch, and they are most likely to want to target the interface they are most familiar with: the browser. Go does not have a good front-end story yet. Javascript is still the only language that really provides both a frontend and a backend environment, but WASM is fast becoming a first-order platform, and there are so many places we could go with that. We could provide something like vecty in The Go Play Space, or perhaps Gio, targeting WASM, for people to get started programming in the browser right away, inspiring their imagination, and provide them a migration path out of our playground into a terminal and onto GitHub.

So, is Go a good first language? I honestly don’t know, but it’s certainly true there are a significant number of people entering the programming profession with Go as their starting point, and I am very interested in talking to them, learning about their journey and their process, and shaping the future of Go education with their input.

Learning Platforms (report by Ronna Steinberg)

We discussed what a learning platform for Go should look like and how we can combine global resources to effectively teach the language. We generally agreed that teaching and learning is easier with visualization and that a REPL is very gratifying. We also surveyed some existing solutions for visualization with Go: templates, Go WASM, GopherJS, as well as SVG and GIF generation.

Compiler errors not making sense to the new developer were also brought up, and we considered ideas for how to handle them, perhaps a bank of errors and how it would be useful. One idea was a wrapper for the compiler that explains your errors to you, with examples and solutions.

A new group convened for a second round later, and we focused more on what UX the Go learning platform should have, and if and how we can take existing materials (talks, blog posts, podcasts, etc.) from the community and organize them into a program people can learn from. Should such a platform link to those external resources? Embed them? Cite them? We agreed that a portal-like solution (of external links to resources) makes navigation difficult and takes away from the learning experience, which led us to the conclusion that such contribution cannot be passive, and contributors will likely have to opt in to have their material on the platform. There was then much excitement around the idea of adding a voting mechanism to the platform, effectively turning the learners into contributors, too, and incentivizing the contributors to put their materials on the platform.

(If you are interested in helping in educational efforts for Go, please email Carmen Andoh candoh@google.com.)

Thank You!

Thanks to all the attendees for the excellent discussions on contributor day, and thanks especially to Lynn, Paul, Daniel, Andy, and Ronna for taking the time to write these reports.

Migrating to Go Modules


Introduction

This post is part 2 in a series. See part 1 — Using Go Modules.

Go projects use a wide variety of dependency management strategies. Vendoring tools such as dep and glide are popular, but they have wide differences in behavior and don't always work well together. Some projects store their entire GOPATH directory in a single Git repository. Others simply rely on go get and expect fairly recent versions of dependencies to be installed in GOPATH.

Go's module system, introduced in Go 1.11, provides an official dependency management solution built into the go command. This article describes tools and techniques for converting a project to modules.

Please note: if your project is already tagged at v2.0.0 or higher, you will need to update your module path when you add a go.mod file. We'll explain how to do that without breaking your users in a future article focused on v2 and beyond.

Migrating to Go modules in your project

A project might be in one of three states when beginning the transition to Go modules:

  • A brand new Go project.
  • An established Go project with a non-modules dependency manager.
  • An established Go project without any dependency manager.

The first case is covered in Using Go Modules; we'll address the latter two in this post.

With a dependency manager

To convert a project that already uses a dependency management tool, run the following commands:

$ git clone https://github.com/my/project
[...]
$ cd project
$ cat Godeps/Godeps.json
{
    "ImportPath": "github.com/my/project",
    "GoVersion": "go1.12",
    "GodepVersion": "v80",
    "Deps": [
        {
            "ImportPath": "rsc.io/binaryregexp",
            "Comment": "v0.2.0-1-g545cabd",
            "Rev": "545cabda89ca36b48b8e681a30d9d769a30b3074"
        },
        {
            "ImportPath": "rsc.io/binaryregexp/syntax",
            "Comment": "v0.2.0-1-g545cabd",
            "Rev": "545cabda89ca36b48b8e681a30d9d769a30b3074"
        }
    ]
}
$ go mod init github.com/my/project
go: creating new go.mod: module github.com/my/project
go: copying requirements from Godeps/Godeps.json
$ cat go.mod
module github.com/my/project

go 1.12

require rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca
$

go mod init creates a new go.mod file and automatically imports dependencies from Godeps.json, Gopkg.lock, or a number of other supported formats. The argument to go mod init is the module path, the location where the module may be found.

This is a good time to pause and run go build ./... and go test ./... before continuing. Later steps may modify your go.mod file, so if you prefer to take an iterative approach, this is the closest your go.mod file will be to your pre-modules dependency specification.

$ go mod tidy
go: downloading rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca
go: extracting rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca
$ cat go.sum
rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca h1:FKXXXJ6G2bFoVe7hX3kEX6Izxw5ZKRH57DFBJmHCbkU=
rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
$

go mod tidy finds all the packages transitively imported by packages in your module. It adds new module requirements for packages not provided by any known module, and it removes requirements on modules that don't provide any imported packages. If a module provides packages that are only imported by projects that haven't migrated to modules yet, the module requirement will be marked with an // indirect comment. It is always good practice to run go mod tidy before committing a go.mod file to version control.
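For example, a go.mod after go mod tidy might look like this sketch (the example.com module path is hypothetical), with // indirect marking a module that is required but not imported directly:

module github.com/my/project

go 1.12

require (
    example.com/other/dependency v1.4.0 // indirect
    rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca
)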

Let's finish by making sure the code builds and tests pass:

$ go build ./...
$ go test ./...
[...]
$

Note that other dependency managers may specify dependencies at the level of individual packages or entire repositories (not modules), and generally do not recognize the requirements specified in the go.mod files of dependencies. Consequently, you may not get exactly the same version of every package as before, and there's some risk of upgrading past breaking changes. Therefore, it's important to follow the above commands with an audit of the resulting dependencies. To do so, run

$ go list -m all
go: finding rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca
github.com/my/project
rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca
$

and compare the resulting versions with your old dependency management file to ensure that the selected versions are appropriate. If you find a version that wasn't what you wanted, you can find out why using go mod why -m and/or go mod graph, and upgrade or downgrade to the correct version using go get. (If the version you request is older than the version that was previously selected, go get will downgrade other dependencies as needed to maintain compatibility.) For example,

$ go mod why -m rsc.io/binaryregexp
[...]
$ go mod graph | grep rsc.io/binaryregexp
[...]
$ go get rsc.io/binaryregexp@v0.2.0
$

Without a dependency manager

For a Go project without a dependency management system, start by creating a go.mod file:

$ git clone https://go.googlesource.com/blog
[...]
$ cd blog
$ go mod init golang.org/x/blog
go: creating new go.mod: module golang.org/x/blog
$ cat go.mod
module golang.org/x/blog

go 1.12
$

Without a configuration file from a previous dependency manager, go mod init will create a go.mod file with only the module and go directives. In this example, we set the module path to golang.org/x/blog because that is its custom import path. Users may import packages with this path, and we must be careful not to change it.

The module directive declares the module path, and the go directive declares the expected version of the Go language used to compile the code within the module.

Next, run go mod tidy to add the module's dependencies:

$ go mod tidy
go: finding golang.org/x/website latest
go: finding gopkg.in/tomb.v2 latest
go: finding golang.org/x/net latest
go: finding golang.org/x/tools latest
go: downloading github.com/gorilla/context v1.1.1
go: downloading golang.org/x/tools v0.0.0-20190813214729-9dba7caff850
go: downloading golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7
go: extracting github.com/gorilla/context v1.1.1
go: extracting golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7
go: downloading gopkg.in/tomb.v2 v2.0.0-20161208151619-d5d1b5820637
go: extracting gopkg.in/tomb.v2 v2.0.0-20161208151619-d5d1b5820637
go: extracting golang.org/x/tools v0.0.0-20190813214729-9dba7caff850
go: downloading golang.org/x/website v0.0.0-20190809153340-86a7442ada7c
go: extracting golang.org/x/website v0.0.0-20190809153340-86a7442ada7c
$ cat go.mod
module golang.org/x/blog

go 1.12

require (
    github.com/gorilla/context v1.1.1
    golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7
    golang.org/x/text v0.3.2
    golang.org/x/tools v0.0.0-20190813214729-9dba7caff850
    golang.org/x/website v0.0.0-20190809153340-86a7442ada7c
    gopkg.in/tomb.v2 v2.0.0-20161208151619-d5d1b5820637
)
$ cat go.sum
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
git.apache.org/thrift.git v0.0.0-20181218151757-9b75e4fe745a/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
[...]
$

go mod tidy added module requirements for all the packages transitively imported by packages in your module and built a go.sum with checksums for each library at a specific version. Let's finish by making sure the code still builds and tests still pass:

$ go build ./...
$ go test ./...
ok      golang.org/x/blog    0.335s
?       golang.org/x/blog/content/appengine    [no test files]
ok      golang.org/x/blog/content/cover    0.040s
?       golang.org/x/blog/content/h2push/server    [no test files]
?       golang.org/x/blog/content/survey2016    [no test files]
?       golang.org/x/blog/content/survey2017    [no test files]
?       golang.org/x/blog/support/racy    [no test files]
$

Note that when go mod tidy adds a requirement, it adds the latest version of the module. If your GOPATH included an older version of a dependency that subsequently published a breaking change, you may see errors in go mod tidy, go build, or go test. If this happens, try downgrading to an older version with go get (for example, go get github.com/broken/module@v1.1.0), or take the time to make your module compatible with the latest version of each dependency.

Tests in module mode

Some tests may need tweaks after migrating to Go modules.

If a test needs to write files in the package directory, it may fail when the package directory is in the module cache, which is read-only. In particular, this may cause go test all to fail. The test should copy files it needs to write to a temporary directory instead.
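A sketch of that pattern (the test and file names here are hypothetical), using ioutil.TempDir since it is available in every Go version this applies to:

package hello

import (
    "io/ioutil"
    "os"
    "path/filepath"
    "testing"
)

func TestWriteOutput(t *testing.T) {
    // Write to a temporary directory instead of the package
    // directory, which may live in the read-only module cache.
    dir, err := ioutil.TempDir("", "test")
    if err != nil {
        t.Fatal(err)
    }
    defer os.RemoveAll(dir)

    path := filepath.Join(dir, "out.txt")
    if err := ioutil.WriteFile(path, []byte("hello"), 0644); err != nil {
        t.Fatal(err)
    }
}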

If a test relies on relative paths (../package-in-another-module) to locate and read files in another package, it will fail if the package is in another module, which will be located in a versioned subdirectory of the module cache or a path specified in a replace directive. If this is the case, you may need to copy the test inputs into your module, or convert the test inputs from raw files to data embedded in .go source files.

If a test expects go commands within the test to run in GOPATH mode, it may fail. If this is the case, you may need to add a go.mod file to the source tree to be tested, or set GO111MODULE=off explicitly.
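For example, to force GOPATH mode for a single test invocation:

$ GO111MODULE=off go test ./...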

Publishing a release

Finally, you should tag and publish a release version for your new module. This is optional if you haven't released any versions yet, but without an official release, downstream users will depend on specific commits using pseudo-versions, which may be more difficult to support.

$ git tag v1.2.0
$ git push origin v1.2.0

Your new go.mod file defines a canonical import path for your module and adds new minimum version requirements. If your users are already using the correct import path, and your dependencies haven't made breaking changes, then adding the go.mod file is backwards-compatible — but it's a significant change, and may expose existing problems. If you have existing version tags, you should increment the minor version.

Imports and canonical module paths

Each module declares its module path in its go.mod file. Each import statement that refers to a package within the module must have the module path as a prefix of the package path. However, the go command may encounter a repository containing the module through many different remote import paths. For example, both golang.org/x/lint and github.com/golang/lint resolve to repositories containing the code hosted at go.googlesource.com/lint. The go.mod file contained in that repository declares its path to be golang.org/x/lint, so only that path corresponds to a valid module.

Go 1.4 provided a mechanism for declaring canonical import paths using // import comments, but package authors did not always provide them. As a result, code written prior to modules may have used a non-canonical import path for a module without surfacing an error for the mismatch. When using modules, the import path must match the canonical module path, so you may need to update import statements: for example, you may need to change import "github.com/golang/lint" to import "golang.org/x/lint".

Another scenario in which a module's canonical path may differ from its repository path occurs for Go modules at major version 2 or higher. A Go module with a major version above 1 must include a major-version suffix in its module path: for example, version v2.0.0 must have the suffix /v2. However, import statements may have referred to the packages within the module without that suffix. For example, non-module users of github.com/russross/blackfriday/v2 at v2.0.1 may have imported it as github.com/russross/blackfriday instead, and will need to update the import path to include the /v2 suffix.
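The fix is a mechanical change to the import statement; for example:

// Before: a non-canonical import path that worked outside module mode.
import "github.com/russross/blackfriday"

// After: the canonical module path, including the major-version suffix.
import "github.com/russross/blackfriday/v2"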

Conclusion

Converting to Go modules should be a straightforward process for most users. Occasional issues may arise due to non-canonical import paths or breaking changes within a dependency. Future posts will explore publishing new versions, v2 and beyond, and ways to debug strange situations.

To provide feedback and help shape the future of dependency management in Go, please send us bug reports or experience reports.

Thanks for all your feedback and help improving modules.

Module Mirror and Checksum Database Launched


We are excited to share that our module mirror, index, and checksum database are now production ready! The go command will use the module mirror and checksum database by default for Go 1.13 module users. See proxy.golang.org/privacy for privacy information about these services and the go command documentation for configuration details, including how to disable the use of these servers or use different ones. If you depend on non-public modules, see the documentation for configuring your environment.

This post describes these services and the benefits of using them, and summarizes some of the points from the Go Module Proxy: Life of a Query talk at Gophercon 2019. See the recording if you are interested in the full talk.

Module Mirror

Modules are sets of Go packages that are versioned together, and the contents of each version are immutable. That immutability provides new opportunities for caching and authentication. When go get runs in module mode, it must fetch the module containing the requested packages, as well as any new dependencies introduced by that module, updating your go.mod and go.sum files as needed. Fetching modules from version control can be expensive in terms of latency and storage in your system: the go command may be forced to pull down the full commit history of a repository containing a transitive dependency, even one that isn’t being built, just to resolve its version.

The solution is to use a module proxy, which speaks an API that is better suited to the go command’s needs (see go help goproxy). When go get runs in module mode with a proxy, it will work faster by only asking for the specific module metadata or source code it needs, and not worrying about the rest. Below is an example of how the go command may use a proxy with go get by requesting the list of versions, then the info, mod, and zip file for the latest tagged version.
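Sketched as HTTP requests against the proxy protocol (the module and version are chosen for illustration):

GET $GOPROXY/golang.org/x/text/@v/list         # the list of known versions
GET $GOPROXY/golang.org/x/text/@v/v0.3.2.info  # metadata about that version
GET $GOPROXY/golang.org/x/text/@v/v0.3.2.mod   # the go.mod file for that version
GET $GOPROXY/golang.org/x/text/@v/v0.3.2.zip   # the source code for that version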

A module mirror is a special kind of module proxy that caches metadata and source code in its own storage system, allowing the mirror to continue to serve source code that is no longer available from the original locations. This can speed up downloads and protect you from disappearing dependencies. See Go Modules in 2019 for more information.

The Go team maintains a module mirror, served at proxy.golang.org, which the go command will use by default for module users as of Go 1.13. If you are running an earlier version of the go command, then you can use this service by setting GOPROXY=https://proxy.golang.org in your local environment.

Checksum Database

Modules introduced the go.sum file, which is a list of SHA-256 hashes of the source code and go.mod files of each dependency when it was first downloaded. The go command can use the hashes to detect misbehavior by an origin server or proxy that gives you different code for the same version.

The limitation of this go.sum file is that it works entirely by trust on your first use. When you add a version of a dependency that you’ve never seen before to your module (possibly by upgrading an existing dependency), the go command fetches the code and adds lines to the go.sum file on the fly. The problem is that those go.sum lines aren’t being checked against anyone else’s: they might be different from the go.sum lines that the go command just generated for someone else, perhaps because a proxy intentionally served malicious code targeted to you.

Go's solution is a global source of go.sum lines, called a checksum database, which ensures that the go command always adds the same lines to everyone's go.sum file. Whenever the go command receives new source code, it can verify the hash of that code against this global database to make sure the hashes match, ensuring that everyone is using the same code for a given version.

The checksum database is served by sum.golang.org, and is built on a Transparent Log (or “Merkle tree”) of hashes backed by Trillian. The main advantage of a Merkle tree is that it is tamper proof and has properties that don’t allow for misbehavior to go undetected, which makes it more trustworthy than a simple database. The go command uses this tree to check “inclusion” proofs (that a specific record exists in the log) and “consistency” proofs (that the tree hasn’t been tampered with) before adding new go.sum lines to your module’s go.sum file.

The checksum database supports a set of endpoints used by the go command to request and verify go.sum lines. The /lookup endpoint provides a “signed tree head” (STH) and the requested go.sum lines. The /tile endpoint provides chunks of the tree called tiles which the go command can use for proofs. Below is an example of how the go command may interact with the checksum database by doing a /lookup of a module version, then requesting the tiles required for the proofs.
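Sketched as HTTP requests (the module, version, and tile coordinates are illustrative):

GET https://sum.golang.org/lookup/golang.org/x/text@v0.3.2  # signed tree head plus go.sum lines
GET https://sum.golang.org/tile/8/0/001                     # one tile of hashes used in the proofs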

This checksum database allows the go command to safely use an otherwise untrusted proxy. Because there is an auditable security layer sitting on top of it, a proxy or origin server can’t intentionally, arbitrarily, or accidentally start giving you the wrong code without getting caught. Even the author of a module can’t move their tags around or otherwise change the bits associated with a specific version from one day to the next without the change being detected.

If you are using Go 1.12 or earlier, you can manually check a go.sum file against the checksum database with gosumcheck:

$ go get golang.org/x/mod/gosumcheck
$ gosumcheck /path/to/go.sum

In addition to verification done by the go command, third-party auditors can hold the checksum database accountable by iterating over the log looking for bad entries. They can work together and gossip about the state of the tree as it grows to ensure that it remains uncompromised, and we hope that the Go community will run them.

Module Index

The module index is served by index.golang.org, and is a public feed of new module versions that become available through proxy.golang.org. This is particularly useful for tool developers that want to keep their own cache of what’s available in proxy.golang.org, or keep up-to-date on some of the newest modules that people are using.

Feedback or bugs

We hope these services improve your experience with modules, and encourage you to file issues if you run into problems or have feedback!

Go 1.13 is released


Today the Go team is very happy to announce the release of Go 1.13. You can get it from the download page.

Some of the highlights include:

  • The go command now downloads and authenticates modules using the Go module mirror and Go checksum database by default.
  • Improvements to number literals, including binary and octal integer literals, hexadecimal floats, and digit separators.
  • Support for error wrapping in the errors and fmt packages.
  • TLS 1.3 enabled by default in crypto/tls.

For the complete list of changes and more information about the improvements above, see the Go 1.13 release notes.

We want to thank everyone who contributed to this release by writing code, filing bugs, providing feedback, and/or testing the beta and release candidates. Your contributions and diligence helped to ensure that Go 1.13 is as stable as possible. That said, if you notice any problems, please file an issue.

We hope you enjoy the new release!

Publishing Go Modules


Introduction

This post is part 3 in a series. See part 1 (Using Go Modules) and part 2 (Migrating to Go Modules).

This post discusses how to write and publish modules so other modules can depend on them.

Please note: this post covers development up to and including v1. A future article will cover developing a module at v2 and beyond, which requires changing the module's path.

This post uses Git in examples. Mercurial, Bazaar, and others are supported as well.

Project setup

For this post, you'll need an existing project to use as an example. So, start with the files from the end of the Using Go Modules article:

$ cat go.mod
module example.com/hello

go 1.12

require rsc.io/quote/v3 v3.1.0
$ cat go.sum
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c h1:qgOY6WgZOaTkIIMiVjBQcw93ERBE4m30iBm00nkL0i8=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
rsc.io/quote/v3 v3.1.0 h1:9JKUTTIUgS6kzR9mK1YuGKv6Nl+DijDNIc0ghT58FaY=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0 h1:7uVkIFmeBqHfdjD+gZwtXXI+RODJ2Wc4O7MPEh/QiW4=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
$ cat hello.go
package hello

import "rsc.io/quote/v3"

func Hello() string {
    return quote.HelloV3()
}

func Proverb() string {
    return quote.Concurrency()
}
$ cat hello_test.go
package hello

import (
    "testing"
)

func TestHello(t *testing.T) {
    want := "Hello, world."
    if got := Hello(); got != want {
        t.Errorf("Hello() = %q, want %q", got, want)
    }
}

func TestProverb(t *testing.T) {
    want := "Concurrency is not parallelism."
    if got := Proverb(); got != want {
        t.Errorf("Proverb() = %q, want %q", got, want)
    }
}
$

Next, create a new git repository and add an initial commit. If you're publishing your own project, be sure to include a LICENSE file. Change to the directory containing the go.mod then create the repo:

$ git init
$ git add LICENSE go.mod go.sum hello.go hello_test.go
$ git commit -m "hello: initial commit"
$

Semantic versions and modules

Every required module in a go.mod has a semantic version, the minimum version of that dependency to use to build the module.

A semantic version has the form vMAJOR.MINOR.PATCH.

  • Increment the MAJOR version when you make a backwards incompatible change to the public API of your module. This should only be done when absolutely necessary.
  • Increment the MINOR version when you make a backwards compatible change to the API, like changing dependencies or adding a new function, method, struct field, or type.
  • Increment the PATCH version after making minor changes that don't affect your module's public API or dependencies, like fixing a bug.

You can specify pre-release versions by appending a hyphen and dot separated identifiers (for example, v1.0.1-alpha or v2.2.2-beta.2). Normal releases are preferred by the go command over pre-release versions, so users must ask for pre-release versions explicitly (for example, go get example.com/hello@v1.0.1-alpha) if your module has any normal releases.

v0 major versions and pre-release versions do not guarantee backwards compatibility. They let you refine your API before making stability commitments to your users. However, v1 major versions and beyond require backwards compatibility within that major version.

The version referenced in a go.mod may be an explicit release tagged in the repository (for example, v1.5.2), or it may be a pseudo-version based on a specific commit (for example, v0.0.0-20170915032832-14c0d48ead0c). Pseudo-versions are a special type of pre-release version. Pseudo-versions are useful when a user needs to depend on a project that has not published any semantic version tags, or develop against a commit that hasn't been tagged yet, but users should not assume that pseudo-versions provide a stable or well-tested API. Tagging your modules with explicit versions signals to your users that specific versions are fully tested and ready to use.

Once you start tagging your repo with versions, it's important to keep tagging new releases as you develop your module. When users request a new version of your module (with go get -u or go get example.com/hello), the go command will choose the greatest semantic release version available, even if that version is several years old and many changes behind the primary branch. Continuing to tag new releases will make your ongoing improvements available to your users.

Do not delete version tags from your repo. If you find a bug or a security issue with a version, release a new version. If people depend on a version that you have deleted, their builds may fail. Similarly, once you release a version, do not change or overwrite it. The module mirror and checksum database store modules, their versions, and signed cryptographic hashes to ensure that the build of a given version remains reproducible over time.

v0: the initial, unstable version

Let's tag the module with a v0 semantic version. A v0 version does not make any stability guarantees, so nearly all projects should start with v0 as they refine their public API.

Tagging a new version has a few steps:

1. Run go mod tidy, which removes any dependencies the module might have accumulated that are no longer necessary.

2. Run go test ./... a final time to make sure everything is working.

3. Tag the project with a new version using git tag.

4. Push the new tag to the origin repository.

$ go mod tidy
$ go test ./...
ok      example.com/hello       0.015s
$ git add go.mod go.sum hello.go hello_test.go
$ git commit -m "hello: changes for v0.1.0"
$ git tag v0.1.0
$ git push origin v0.1.0
$

Now other projects can depend on v0.1.0 of example.com/hello. For your own module, you can run go list -m example.com/hello@v0.1.0 to confirm the latest version is available (this example module does not exist, so no versions are available). If you don't see the latest version immediately and you're using the Go module proxy (the default since Go 1.13), try again in a few minutes to give the proxy time to load the new version.

If you add to the public API, make a breaking change to a v0 module, or upgrade the minor version of one of your dependencies, increment the MINOR version for your next release. For example, the next release after v0.1.0 would be v0.2.0.

If you fix a bug in an existing version, increment the PATCH version. For example, the next release after v0.1.0 would be v0.1.1.

v1: the first stable version

Once you are absolutely sure your module's API is stable, you can release v1.0.0. A v1 major version communicates to users that no incompatible changes will be made to the module's API. They can upgrade to new v1 minor and patch releases, and their code should not break. Function and method signatures will not change, exported types will not be removed, and so on. If there are changes to the API, they will be backwards compatible (for example, adding a new field to a struct) and will be included in a new minor release. If there are bug fixes (for example, a security fix), they will be included in a patch release (or as part of a minor release).

Sometimes, maintaining backwards compatibility can lead to awkward APIs. That's OK. An imperfect API is better than breaking users' existing code.

The standard library's strings package is a prime example of maintaining backwards compatibility at the cost of API consistency.

  • Split slices a string into all substrings separated by a separator and returns a slice of the substrings between those separators.
  • SplitN can be used to control the number of substrings to return.

However, Replace took a count of how many instances of the string to replace from the beginning (unlike Split).

Given Split and SplitN, you would expect functions like Replace and ReplaceN. But, we couldn't change the existing Replace without breaking callers, which we promised not to do. So, in Go 1.12, we added a new function, ReplaceAll. The resulting API is a little odd, since Split and Replace behave differently, but that inconsistency is better than a breaking change.
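A small program makes the inconsistency concrete (expected output shown in comments):

package main

import (
    "fmt"
    "strings"
)

func main() {
    // Split has a SplitN variant that limits the number of results...
    fmt.Println(strings.SplitN("a,b,c", ",", 2)) // [a b,c]

    // ...but Replace takes the count directly, and ReplaceAll was
    // added in Go 1.12 for the replace-everything case.
    fmt.Println(strings.Replace("a,b,c", ",", "-", 1)) // a-b,c
    fmt.Println(strings.ReplaceAll("a,b,c", ",", "-")) // a-b-c
}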

Let's say you're happy with the API of example.com/hello and you want to release v1 as the first stable version.

Tagging v1 uses the same process as tagging a v0 version: run go mod tidy and go test ./..., tag the version, and push the tag to the origin repository:

$ go mod tidy
$ go test ./...
ok      example.com/hello       0.015s
$ git add go.mod go.sum hello.go hello_test.go
$ git commit -m "hello: changes for v1.0.0"
$ git tag v1.0.0
$ git push origin v1.0.0
$

At this point, the v1 API of example.com/hello is solidified. This communicates to everyone that our API is stable and they should feel comfortable using it.

Conclusion

This post walked through the process of tagging a module with semantic versions and when to release v1. A future post will cover how to maintain and publish modules at v2 and beyond.

To provide feedback and help shape the future of dependency management in Go, please send us bug reports or experience reports.

Thanks for all your feedback and help improving Go modules.

Working with Errors in Go 1.13


Introduction

Go’s treatment of errors as values has served us well over the last decade. Although the standard library’s support for errors has been minimal—just the errors.New and fmt.Errorf functions, which produce errors that contain only a message—the built-in error interface allows Go programmers to add whatever information they desire. All it requires is a type that implements an Error method:

type QueryError struct {
    Query string
    Err   error
}

func (e *QueryError) Error() string { return e.Query + ": " + e.Err.Error() }

Error types like this one are ubiquitous, and the information they store varies widely, from timestamps to filenames to server addresses. Often, that information includes another, lower-level error to provide additional context.

The pattern of one error containing another is so pervasive in Go code that, after extensive discussion, Go 1.13 added explicit support for it. This post describes the additions to the standard library that provide that support: three new functions in the errors package, and a new formatting verb for fmt.Errorf.

Before describing the changes in detail, let's review how errors are examined and constructed in previous versions of the language.

Errors before Go 1.13

Examining errors

Go errors are values. Programs make decisions based on those values in a few ways. The most common is to compare an error to nil to see if an operation failed.

if err != nil {
    // something went wrong
}

Sometimes we compare an error to a known sentinel value, to see if a specific error has occurred.

var ErrNotFound = errors.New("not found")

if err == ErrNotFound {
    // something wasn't found
}

An error value may be of any type which satisfies the language-defined error interface. A program can use a type assertion or type switch to view an error value as a more specific type.

type NotFoundError struct {
    Name string
}

func (e *NotFoundError) Error() string { return e.Name + ": not found" }

if e, ok := err.(*NotFoundError); ok {
    // e.Name wasn't found
}

Adding information

Frequently a function passes an error up the call stack while adding information to it, like a brief description of what was happening when the error occurred. A simple way to do this is to construct a new error that includes the text of the previous one:

if err != nil {
    return fmt.Errorf("decompress %v: %v", name, err)
}

Creating a new error with fmt.Errorf discards everything from the original error except the text. As we saw above with QueryError, we may sometimes want to define a new error type that contains the underlying error, preserving it for inspection by code. Here is QueryError again:

type QueryError struct {
    Query string
    Err   error
}

Programs can look inside a *QueryError value to make decisions based on the underlying error. You'll sometimes see this referred to as "unwrapping" the error.

if e, ok := err.(*QueryError); ok && e.Err == ErrPermission {
    // query failed because of a permission problem
}

The os.PathError type in the standard library is another example of one error which contains another.

Errors in Go 1.13

The Unwrap method

Go 1.13 introduces new features to the errors and fmt standard library packages to simplify working with errors that contain other errors. The most significant of these is a convention rather than a change: an error which contains another may implement an Unwrap method returning the underlying error. If e1.Unwrap() returns e2, then we say that e1 wraps e2, and that you can unwrap e1 to get e2.

Following this convention, we can give the QueryError type above an Unwrap method that returns its contained error:

func (e *QueryError) Unwrap() error { return e.Err }

The result of unwrapping an error may itself have an Unwrap method; we call the sequence of errors produced by repeated unwrapping the error chain.

Examining errors with Is and As

The Go 1.13 errors package includes two new functions for examining errors: Is and As.

The errors.Is function compares an error to a value.

// Similar to:
//   if err == ErrNotFound { … }
if errors.Is(err, ErrNotFound) {
    // something wasn't found
}

The As function tests whether an error is a specific type.

// Similar to:
//   if e, ok := err.(*QueryError); ok { … }
var e *QueryError
if errors.As(err, &e) {
    // err is a *QueryError, and e is set to the error's value
}

In the simplest case, the errors.Is function behaves like a comparison to a sentinel error, and the errors.As function behaves like a type assertion. When operating on wrapped errors, however, these functions consider all the errors in a chain. Let's look again at the example from above of unwrapping a QueryError to examine the underlying error:

if e, ok := err.(*QueryError); ok && e.Err == ErrPermission {
    // query failed because of a permission problem
}

Using the errors.Is function, we can write this as:

if errors.Is(err, ErrPermission) {
    // err, or some error that it wraps, is a permission problem
}

The errors package also includes a new Unwrap function which returns the result of calling an error's Unwrap method, or nil when the error has no Unwrap method. It is usually better to use errors.Is or errors.As, however, since these functions will examine the entire chain in a single call.
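As an illustration, here is a minimal sketch (not from the standard library) of a helper that walks an entire chain by calling errors.Unwrap repeatedly:

import (
    "errors"
    "fmt"
)

// printChain prints err and then each error it wraps, outermost first,
// stopping when errors.Unwrap returns nil.
func printChain(err error) {
    for err != nil {
        fmt.Println(err)
        err = errors.Unwrap(err)
    }
}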

Wrapping errors with %w

As mentioned earlier, it is common to use the fmt.Errorf function to add additional information to an error.

if err != nil {
    return fmt.Errorf("decompress %v: %v", name, err)
}

In Go 1.13, the fmt.Errorf function supports a new %w verb. When this verb is present, the error returned by fmt.Errorf will have an Unwrap method returning the argument of %w, which must be an error. In all other ways, %w is identical to %v.

if err != nil {
    // Return an error which unwraps to err.
    return fmt.Errorf("decompress %v: %w", name, err)
}

Wrapping an error with %w makes it available to errors.Is and errors.As:

err := fmt.Errorf("access denied: %w”, ErrPermission)...if errors.Is(err, ErrPermission) ...

Whether to Wrap

When adding additional context to an error, either with fmt.Errorf or by implementing a custom type, you need to decide whether the new error should wrap the original. There is no single answer to this question; it depends on the context in which the new error is created. Wrap an error to expose it to callers. Do not wrap an error when doing so would expose implementation details.

As one example, imagine a Parse function which reads a complex data structure from an io.Reader. If an error occurs, we wish to report the line and column number at which it occurred. If the error occurs while reading from the io.Reader, we will want to wrap that error to allow inspection of the underlying problem. Since the caller provided the io.Reader to the function, it makes sense to expose the error produced by it.

In contrast, a function which makes several calls to a database probably should not return an error which unwraps to the result of one of those calls. If the database used by the function is an implementation detail, then exposing these errors is a violation of abstraction. For example, if the LookupUser function of your package pkg uses Go's database/sql package, then it may encounter a sql.ErrNoRows error. If you return that error with fmt.Errorf("accessing DB: %v", err) then a caller cannot look inside to find the sql.ErrNoRows. But if the function instead returns fmt.Errorf("accessing DB: %w", err), then a caller could reasonably write

err := pkg.LookupUser(...)
if errors.Is(err, sql.ErrNoRows) …

At that point, the function must always return sql.ErrNoRows if you don't want to break your clients, even if you switch to a different database package. In other words, wrapping an error makes that error part of your API. If you don't want to commit to supporting that error as part of your API in the future, you shouldn't wrap the error.

It’s important to remember that whether you wrap or not, the error text will be the same. A person trying to understand the error will have the same information either way; the choice to wrap is about whether to give programs additional information so they can make more informed decisions, or to withhold that information to preserve an abstraction layer.
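To see this concretely, here is a small sketch (the errTimeout sentinel is hypothetical): both errors print the same text, but only the wrapped one can be examined by errors.Is.

var errTimeout = errors.New("timeout")

e1 := fmt.Errorf("op failed: %v", errTimeout) // does not wrap
e2 := fmt.Errorf("op failed: %w", errTimeout) // wraps

fmt.Println(e1.Error() == e2.Error())  // true: identical text
fmt.Println(errors.Is(e1, errTimeout)) // false: the sentinel is hidden
fmt.Println(errors.Is(e2, errTimeout)) // true: the sentinel is exposed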

Customizing error tests with Is and As methods

The errors.Is function examines each error in a chain for a match with a target value. By default, an error matches the target if the two are equal. In addition, an error in the chain may declare that it matches a target by implementing an Is method.

As an example, consider this error inspired by the Upspin error package which compares an error against a template, considering only fields which are non-zero in the template:

type Error struct {
    Path string
    User string
}

func (e *Error) Is(target error) bool {
    t, ok := target.(*Error)
    if !ok {
        return false
    }
    return (e.Path == t.Path || t.Path == "") &&
           (e.User == t.User || t.User == "")
}

if errors.Is(err, &Error{User: "someuser"}) {
    // err's User field is "someuser".
}

The errors.As function similarly consults an As method when present.
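The post does not show an As method, so here is a minimal sketch, reusing the QueryError type from earlier; the wrappedError type is hypothetical. A type's As method reports whether it can be viewed as the target type and, if so, sets the target.

// wrappedError is a hypothetical error that carries a *QueryError.
type wrappedError struct {
    msg string
    q   *QueryError
}

func (e *wrappedError) Error() string { return e.msg }

// As reports whether target is a **QueryError and, if so, sets it.
// errors.As consults this method when it is present.
func (e *wrappedError) As(target interface{}) bool {
    p, ok := target.(**QueryError)
    if !ok || e.q == nil {
        return false
    }
    *p = e.q
    return true
}

With this in place, errors.As(err, &q) for a variable q of type *QueryError succeeds on a wrappedError even though the dynamic types differ.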

Errors and package APIs

A package which returns errors (and most do) should describe what properties of those errors programmers may rely on. A well-designed package will also avoid returning errors with properties that should not be relied upon.

The simplest specification is to say that operations either succeed or fail, returning a nil or non-nil error value respectively. In many cases, no further information is needed.

If we wish a function to return an identifiable error condition, such as "item not found," we might return an error wrapping a sentinel.

var ErrNotFound = errors.New("not found")

// FetchItem returns the named item.
//
// If no item with the name exists, FetchItem returns an error
// wrapping ErrNotFound.
func FetchItem(name string) (*Item, error) {
    if itemNotFound(name) {
        return nil, fmt.Errorf("%q: %w", name, ErrNotFound)
    }
    // ...
}

There are other existing patterns for providing errors which can be semantically examined by the caller, such as directly returning a sentinel value, a specific type, or a value which can be examined with a predicate function.
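For concreteness, here is a sketch of those three pre-1.13 patterns using well-known standard-library cases:

import (
    "io"
    "os"
)

func classify(err error) {
    if err == io.EOF { // sentinel value returned directly
        // end of input
    }
    if pe, ok := err.(*os.PathError); ok { // specific type
        _ = pe.Path
    }
    if os.IsNotExist(err) { // predicate function
        // the file does not exist
    }
}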

In all cases, care should be taken not to expose internal details to the user. As we touched on in "Whether to Wrap" above, when you return an error from another package you should convert the error to a form that does not expose the underlying error, unless you are willing to commit to returning that specific error in the future.

f, err := os.Open(filename)
if err != nil {
    // The *os.PathError returned by os.Open is an internal detail.
    // To avoid exposing it to the caller, repackage it as a new
    // error with the same text. We use the %v formatting verb, since
    // %w would permit the caller to unwrap the original *os.PathError.
    return fmt.Errorf("%v", err)
}

If a function is defined as returning an error wrapping some sentinel or type, do not return the underlying error directly.

var ErrPermission = errors.New("permission denied")

// DoSomething returns an error wrapping ErrPermission if the user
// does not have permission to do something.
func DoSomething() error {
    if !userHasPermission() {
        // If we return ErrPermission directly, callers might come
        // to depend on the exact error value, writing code like this:
        //
        //     if err := pkg.DoSomething(); err == pkg.ErrPermission { … }
        //
        // This will cause problems if we want to add additional
        // context to the error in the future. To avoid this, we
        // return an error wrapping the sentinel so that users must
        // always unwrap it:
        //
        //     if err := pkg.DoSomething(); errors.Is(err, pkg.ErrPermission) { ... }
        return fmt.Errorf("%w", ErrPermission)
    }
    // ...
}

Conclusion

Although the changes we’ve discussed amount to just three functions and a formatting verb, we hope they will go a long way toward improving how errors are handled in Go programs. We expect that wrapping to provide additional context will become commonplace, helping programs to make better decisions and helping programmers to find bugs more quickly.

As Russ Cox said in his GopherCon 2019 keynote, on the path to Go 2 we experiment, simplify and ship. Now that we’ve shipped these changes, we look forward to the experiments that will follow.


Go Modules: v2 and Beyond


Introduction

This post is part 4 in a series.

As a successful project matures and new requirements are added, past features and design decisions might stop making sense. Developers may want to integrate lessons they've learned by removing deprecated functions, renaming types, or splitting complicated packages into manageable pieces. These kinds of changes require effort by downstream users to migrate their code to the new API, so they should not be made without careful consideration that the benefits outweigh the costs.

For projects that are still experimental — at major version v0 — occasional breaking changes are expected by users. For projects which are declared stable — at major version v1 or higher — breaking changes must be done in a new major version. This post explores major version semantics, how to create and publish a new major version, and how to maintain multiple major versions of a module.

Major versions and module paths

Modules formalized an important principle in Go, the import compatibility rule:

If an old package and a new package have the same import path, the new package must be backwards compatible with the old package.

By definition, a new major version of a package is not backwards compatible with the previous version. This means a new major version of a module must have a different module path than the previous version. Starting with v2, the major version must appear at the end of the module path (declared in the module statement in the go.mod file). For example, when the authors of the module github.com/googleapis/gax-go developed v2, they used the new module path github.com/googleapis/gax-go/v2. Users who wanted to use v2 had to change their package imports and module requirements to github.com/googleapis/gax-go/v2.
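In practice, the change for a consumer is one line per import. A sketch (assuming, as in gax-go, that the package name itself does not change, so only the path does):

// Before, building against v1:
import "github.com/googleapis/gax-go"

// After, building against v2:
import "github.com/googleapis/gax-go/v2"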

The need for major version suffixes is one of the ways Go modules differs from most other dependency management systems. Suffixes are needed to solve the diamond dependency problem. Before Go modules, gopkg.in allowed package maintainers to follow what we now refer to as the import compatibility rule. With gopkg.in, if you depend on a package that imports gopkg.in/yaml.v1 and another package that imports gopkg.in/yaml.v2, there is no conflict because the two yaml packages have different import paths — they use a version suffix, as with Go modules. Since gopkg.in shares the same version suffix methodology as Go modules, the Go command accepts the .v2 in gopkg.in/yaml.v2 as a valid major version suffix. This is a special case for compatibility with gopkg.in: modules hosted at other domains need a slash suffix like /v2.
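For example, a single program can use both yaml versions side by side with no conflict, because the import paths differ (a sketch; the aliases are illustrative):

import (
    yamlv1 "gopkg.in/yaml.v1"
    yamlv2 "gopkg.in/yaml.v2"
)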

Major version strategies

The recommended strategy is to develop v2+ modules in a directory named after the major version suffix.

github.com/googleapis/gax-go @ master branch
/go.mod    → module github.com/googleapis/gax-go
/v2/go.mod → module github.com/googleapis/gax-go/v2

This approach is compatible with tools that aren't aware of modules: file paths within the repository match the paths expected by go get in GOPATH mode. This strategy also allows all major versions to be developed together in different directories.

Other strategies may keep major versions on separate branches. However, if v2+ source code is on the repository's default branch (usually master), tools that are not version-aware — including the go command in GOPATH mode — may not distinguish between major versions.

The examples in this post will follow the major version subdirectory strategy, since it provides the most compatibility. We recommend that module authors follow this strategy as long as they have users developing in GOPATH mode.

Publishing v2 and beyond

This post uses github.com/googleapis/gax-go as an example:

$ pwd
/tmp/gax-go
$ ls
CODE_OF_CONDUCT.md  call_option.go  internal
CONTRIBUTING.md     gax.go          invoke.go
LICENSE             go.mod          tools.go
README.md           go.sum          RELEASING.md
header.go
$ cat go.mod
module github.com/googleapis/gax-go

go 1.9

require (
    github.com/golang/protobuf v1.3.1
    golang.org/x/exp v0.0.0-20190221220918-438050ddec5e
    golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3
    golang.org/x/tools v0.0.0-20190114222345-bf090417da8b
    google.golang.org/grpc v1.19.0
    honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099
)
$

To start development on v2 of github.com/googleapis/gax-go, we'll create a new v2/ directory and copy our package into it.

$ mkdir v2
$ cp *.go v2/
building file list ... done
call_option.go
gax.go
header.go
invoke.go
tools.go

sent 10588 bytes  received 130 bytes  21436.00 bytes/sec
total size is 10208  speedup is 0.95
$

Now, let's create a v2 go.mod file by copying the current go.mod file and adding a v2/ suffix to the module path:

$ cp go.mod v2/go.mod
$ go mod edit -module github.com/googleapis/gax-go/v2 v2/go.mod
$

Note that the v2 version is treated as a separate module from the v0 / v1 versions: both may coexist in the same build. So, if your v2+ module has multiple packages, you should update them to use the new /v2 import path: otherwise, your v2+ module will depend on your v0 / v1 module. For example, to update all github.com/my/project references to github.com/my/project/v2, you can use find and sed:

$ find . -type f \
    -name '*.go' \
    -exec sed -i -e 's,github.com/my/project,github.com/my/project/v2,g' {} \;
$
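Because the major versions are distinct modules, a downstream go.mod may even require both at once. A sketch with hypothetical version numbers:

require (
    github.com/my/project v1.5.0
    github.com/my/project/v2 v2.0.0
)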

Now we have a v2 module, but we want to experiment and make changes before publishing a release. Until we release v2.0.0 (or any version without a pre-release suffix), we can develop and make breaking changes as we decide on the new API. If we want users to be able to experiment with the new API before we officially make it stable, we can publish a v2 pre-release version:

$ git tag v2.0.0-alpha1
$ git push origin v2.0.0-alpha1
$

Once we are happy with our v2 API and are sure we don't need any other breaking changes, we can tag v2.0.0:

$ git tag v2.0.0
$ git push origin v2.0.0
$

At that point, there are now two major versions to maintain. Backwards compatible changes and bug fixes will lead to new minor and patch releases (for example, v1.1.0, v2.0.1, etc.).

Conclusion

Major version changes result in development and maintenance overhead and require investment from downstream users to migrate. The larger the project, the larger these overheads tend to be. A major version change should only come after identifying a compelling reason. Once a compelling reason has been identified for a breaking change, we recommend developing multiple major versions in the master branch because it is compatible with a wider variety of existing tools.

Breaking changes to a v1+ module should always happen in a new, vN+1 module. When a new module is released, it means additional work for the maintainers and for the users who need to migrate to the new package. Maintainers should therefore validate their APIs before making a stable release, and consider carefully whether breaking changes are really necessary beyond v1.

Go Turns 10


Happy birthday, Go!

This weekend we celebrate the 10th anniversary of the Go release, marking the 10th birthday of Go as an open-source programming language and ecosystem for building modern networked software.

To mark the occasion, Renee French, the creator of the Go gopher, painted this delightful scene:

Celebrating 10 years of Go makes me think back to early November 2009, when we were getting ready to share Go with the world. We didn’t know what kind of reaction to expect, whether anyone would care about this little language. I hoped that even if no one ended up using Go, we would at least have drawn attention to some good ideas, especially Go’s approach to concurrency and interfaces, that could influence follow-on languages.

Once it became clear that people were excited about Go, I looked at the history of popular languages like C, C++, Perl, Python, and Ruby, examining how long each took to gain widespread adoption. For example, Perl seemed to me to have appeared fully-formed in the mid-to-late 1990s, with CGI scripts and the web, but it was first released in 1987. This pattern repeated for almost every language I looked at: it seems to take roughly a decade of quiet, steady improvement and dissemination before a new language really takes off.

I wondered: where would Go be after a decade?

Today, we can answer that question: Go is everywhere, used by at least a million developers worldwide.

Go’s original target was networked system infrastructure, what we now call cloud software. Every major cloud provider today uses core cloud infrastructure written in Go, such as Docker, Etcd, Istio, Kubernetes, Prometheus, and Terraform; the majority of the Cloud Native Computing Foundation’s projects are written in Go. Countless companies are using Go to move their own work to the cloud as well, from startups building from scratch to enterprises modernizing their software stack. Go has also found adoption well beyond its original cloud target, with uses ranging from controlling tiny embedded systems with GoBot and TinyGo to detecting cancer with massive big data analysis and machine learning at GRAIL, and everything in between.

All this is to say that Go has succeeded beyond our wildest dreams. And Go’s success isn’t just about the language. It’s about the language, the ecosystem, and especially the community working together.

In 2009, the language was a good idea with a working sketch of an implementation. The go command did not exist: we ran commands like 6g to compile and 6l to link binaries, automated with makefiles. We typed semicolons at the ends of statements. The entire program stopped during garbage collection, which then struggled to make good use of two cores. Go ran only on Linux and Mac, on 32- and 64-bit x86 and 32-bit ARM.

Over the last decade, with the help of Go developers all over the world, we have evolved this idea and sketch into a productive language with fantastic tooling, a production-quality implementation, a state-of-the-art garbage collector, and ports to 12 operating systems and 10 architectures.

Any programming language needs the support of a thriving ecosystem. The open source release was the seed for that ecosystem, but since then, many people have contributed their time and talent to fill the Go ecosystem with great tutorials, books, courses, blog posts, podcasts, tools, integrations, and of course reusable Go packages importable with go get. Go could never have succeeded without the support of this ecosystem.

Of course, the ecosystem needs the support of a thriving community. In 2019 there are dozens of Go conferences all over the world, along with over 150 Go meetup groups with over 90,000 members. GoBridge and Women Who Go help bring new voices into the Go community, through mentoring, training, and conference scholarships. This year alone, they have taught hundreds of people from traditionally underrepresented groups at workshops where community members teach and mentor those new to Go.

There are over a million Go developers worldwide, and companies all over the globe are looking to hire more. In fact, people often tell us that learning Go helped them get their first jobs in the tech industry. In the end, what we’re most proud of about Go is not a well-designed feature or a clever bit of code but the positive impact Go has had in so many people’s lives. We aimed to create a language that would help us be better developers, and we are thrilled that Go has helped so many others.

As #GoTurns10, I hope everyone will take a moment to celebrate the Go community and all we have achieved. On behalf of the entire Go team at Google, thank you to everyone who has joined us over the past decade. Let’s make the next one even more incredible!

Go.dev: a new hub for Go developers


Over the last two years, as we’ve spoken with users at companies of all sizes, we’ve heard three questions repeatedly: who else is using Go, what do they use it for, and how can I find useful Go packages?

Today we are launching go.dev, a new hub for Go developers, to help answer those questions. There you will find a wealth of learning resources to get started with the language, featured use cases, and case studies of companies using Go.

(Note that golang.org is still the home for the open source Go project and the Go distribution. Go.dev is a companion site to provide these supporting resources.)

Clicking on Explore brings you to pkg.go.dev, a central source of information about Go packages and modules. Like godoc.org, pkg.go.dev serves Go documentation. However, it also understands modules and has information about all versions of a package, including all releases of the standard library! And it detects and displays licenses and has a better search algorithm. You can follow Go issue 33654 for future developments.

Today’s launch is our minimum viable product for go.dev, so we can share what we’ve built to help the community and get feedback. We intend to expand the site over time. If you have any ideas, suggestions or issues, please let us know via the “Share Feedback” and “Report an Issue” links at the bottom of every page. Or you can send your bugs, ideas, feature requests, and questions to go-discovery-feedback@google.com.

Announcing the 2019 Go Developer Survey


Help shape the future of Go

Since 2016, thousands of Gophers around the world have helped the Go project by sharing your thoughts via our annual Go Developer Survey. Your feedback has played an enormous role in driving changes to our language, ecosystem, and community, including the gopls language server, new error-handling mechanics, the module mirror, and so much more from the latest Go 1.13 release. And of course, we publicly share each year's results, so we can all benefit from the community's insights.

Today we are launching the 2019 Go Developer Survey. We'd love to hear from everyone who uses Go, used to use Go, or is interested in using Go, to help ensure the language, community, and ecosystem fit the needs of the people closest to it. Please help us shape Go's future by participating in this 15-minute survey by December 15th: Take the 2019 Go Developer Survey.

Spread the word!

We need as many Gophers as possible to participate in this survey to help us better understand our global user base. We'd be grateful if you would spread the word by sharing this post on your social network feeds, around the office, at meet-ups, and in other communities. Thank you!

Proposals for Go 1.15


Status

We are close to the Go 1.14 release, planned for February assuming all goes well, with an RC1 candidate almost ready. Per the process outlined in the Go 2, here we come! blog post, it is again the time in our development and release cycle to consider if and what language or library changes we might want to include for our next release, Go 1.15, scheduled for August of this year.

The primary goals for Go remain package and version management, better error handling support, and generics. Module support is in good shape and getting better with each day, and we are also making progress on the generics front (more on that later this year). Our attempt seven months ago at providing a better error handling mechanism, the try proposal, met good support but also strong opposition and we decided to abandon it. In its aftermath there were many follow-up proposals, but none of them seemed convincing enough, clearly superior to the try proposal, or less likely to cause similar controversy. Thus, we have not further pursued changes to error handling for now. Perhaps some future insight will help us to improve upon the status quo.

Proposals

Given that modules and generics are actively being worked on, and with error handling changes out of the way for the time being, what other changes should we pursue, if any? There are some perennial favorites such as requests for enums and immutable types, but none of those ideas are sufficiently developed yet, nor are they urgent enough to warrant a lot of attention by the Go team, especially when also considering the cost of making a language change.

After reviewing all potentially viable proposals, and more importantly, because we don’t want to incrementally add new features without a long-term plan, we concluded that it is better to hold off with major changes this time. Instead we concentrate on a couple of new vet checks and a minor adjustment to the language. We have selected the following three proposals:

#32479. Diagnose string(int) conversion in go vet.

We were planning to get this done for the upcoming Go 1.14 release but we didn’t get around to it, so here it is again. The string(int) conversion was introduced early in Go for convenience, but it is confusing to newcomers (string(10) is "\n" not "10") and not justified anymore now that the conversion is available in the unicode/utf8 package. Since removing this conversion is not a backwards-compatible change, we propose to start with a vet error instead.
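A sketch of the confusion and the usual fixes (a small standalone program; the values are illustrative):

package main

import (
    "fmt"
    "strconv"
)

func main() {
    fmt.Println(string(65))       // "A": converts the code point 65, not the number
    fmt.Println(strconv.Itoa(65)) // "65": formats the number, usually what was meant
    fmt.Println(string(rune(65))) // "A": spells out the code-point conversion explicitly
}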

#4483. Diagnose impossible interface-interface type assertions in go vet.

Currently, Go permits any type assertion x.(T) (and corresponding type switch case) where the type of x and T are interfaces. Yet, if both x and T have a method with the same name but different signatures it is impossible for any value assigned to x to also implement T; such type assertions will always fail at runtime (panic or evaluate to false). Since we know this at compile time, the compiler might as well report an error. Reporting a compiler error in this case is not a backwards-compatible change, thus we also propose to start with a vet error instead.
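A sketch of such an impossible assertion (the types are illustrative):

type A interface{ M() int }
type B interface{ M() string } // same method name, different signature

func f(a A) {
    // No type can provide both M() int and M() string, so a value
    // stored in a can never implement B: this assertion always fails
    // at runtime, and vet could report it at build time instead.
    if b, ok := a.(B); ok {
        _ = b
    }
}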

#28591. Constant-evaluate index and slice expressions with constant strings and indices.

Currently, indexing or slicing a constant string with a constant index, or indices, produces a non-constant byte or string value, respectively. But if all operands are constant, the compiler can constant-evaluate such expressions and produce a constant (possibly untyped) result. This is a fully backward-compatible change and we propose to make the necessary adjustments to the spec and compilers.
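A sketch of the effect (the const declarations in the comment are invalid before the change):

const s = "hello"

var b = s[0]   // legal today, but b is not a constant
var t = s[1:3] // likewise non-constant today

// Under this proposal, declarations like these would become valid:
//     const c = s[0]   // the untyped constant 104 ('h')
//     const u = s[1:3] // the constant string "el"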

Timeline

We believe that none of these three proposals are controversial but there’s always a chance that we missed something important. For that reason we plan to have the proposals implemented at the beginning of the Go 1.15 release cycle (at or shortly after the Go 1.14 release) so that there is plenty of time to gather experience and provide feedback. Per the proposal evaluation process, the final decision will be made at the end of the development cycle, at the beginning of May, 2020.

And one more thing...

We receive many more language change proposals (issues labeled LanguageChange) than we can review thoroughly. For instance, just for error handling alone, there are 57 issues, of which five are currently still open. Since the cost of making a language change, no matter how small, is high and the benefits are often unclear, we must err on the side of caution. Consequently, most language change proposals get rejected sooner or later, sometimes with minimal feedback. This is unsatisfactory for all parties involved. If you have spent a lot of time and effort outlining your idea in detail, it would be nice to not have it immediately rejected. On the flip side, because the general proposal process is deliberately simple, it is very easy to create language change proposals that are only marginally explored, causing the review committee significant amounts of work. To improve this experience for everybody we are adding a new questionnaire for language changes: filling out that template will help reviewers evaluate proposals more efficiently because they don’t need to try to answer those questions themselves. And hopefully it will also provide better guidance for proposers by setting expectations right from the start. This is an experiment that we will refine over time as needed.

Thank you for helping us improve the Go experience!
