protoc-gen-go-grpc: API for service registration #3669

Closed
belak opened this issue Jun 9, 2020 · 86 comments

@belak

belak commented Jun 9, 2020

There were some changes in #3657 that make it harder to develop gRPC services and harder to find new unimplemented methods - I wanted to start a discussion around the new default and figure out why the change was made. I do understand this is in an unreleased version, so I figured a discussion would be better than a bug report or feature request.

From my perspective, this is a number of steps backwards for reasons I will outline below.

When implementing a gRPC service in Go, I often start with a blank slate - the service has been defined in proto files, the go and gRPC protobuf definitions have been generated, all that's left to do is write the code. I often use something like the following so the compiler will help me along, telling me about missing methods, incorrect signatures, things like that.

package chat

func init() {
	// Ensure that *Server implements the pb.ChatIngestServer interface.
	var server *Server = nil
	var _ pb.ChatIngestServer = server
}

This can alternatively be done with var _ pb.ChatIngestServer = &Server{} but that theoretically leaves a little bit more memory around at runtime.
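For illustration, the two variants of this check look like the following (same placeholder pb and Server names as above); the typed-nil form is the common zero-allocation idiom:

// Compile-time assertions that *Server satisfies pb.ChatIngestServer.
// The typed-nil form allocates nothing; the &Server{} form keeps a zero-value
// Server reachable for the lifetime of the program.
var _ pb.ChatIngestServer = (*Server)(nil)
var _ pb.ChatIngestServer = &Server{}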

After this, I add all the missing methods myself (returning the unimplemented status) and start adding implementations to them until I have a concrete implementation for all the methods.

Problems with the new approach

  • As soon as you embed the Unimplemented implementation, the Go compiler gets a lot less useful - it will no longer tell you about missing methods, and because Go interfaces are implicit, if you make a mistake in implementing a method (like misspelling a method name), you will not find out until runtime. Additionally, because these are public methods, if they're attached to a public struct (like the commonly used Server), they may not be detected as unused methods.
  • If protos and generated files are updated, you will not know about any missing methods until runtime. Personally, I would prefer to know if I have not fully implemented a service at compile time rather than waiting until clients are running against it.
  • IDE hinting is worse with the new changes - IDEs will generally not recommend a method stub for an unimplemented method if it's provided by an embedded struct because even though it's technically implemented, it does not have a working implementation.

I generally prefer compile time guarantees that all methods are implemented over runtime checks.
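To make the first problem above concrete, here is a minimal sketch (the service, method, and package names are invented for the example): with the forward-compatibility struct embedded, a misspelled method compiles cleanly and only surfaces as an Unimplemented error at runtime.

type Server struct {
	// Embedding satisfies every current and future ChatIngestServer method.
	pb.UnimplementedChatIngestServer
}

// Typo: the generated interface method is SendMessage, not SendMesage.
// This still compiles because the embedded struct already provides SendMessage;
// clients calling SendMessage get codes.Unimplemented at runtime instead of
// the compiler flagging the misspelling.
func (s *Server) SendMesage(ctx context.Context, req *pb.SendMessageRequest) (*pb.SendMessageResponse, error) {
	return &pb.SendMessageResponse{}, nil
}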

Benefits of the new approach

  • Protos and generated files can be updated without requiring updates to server implementations.

Proposal

By default, the option requireUnimplementedServers should default to false. Requiring the unimplemented embed is more valuable when dealing with external protobufs which are not versioned (maybe there should be a recommendation to embed the unimplemented struct in that case), and it makes it harder to catch mistakes if you are developing the canonical implementation of a service, which should implement all the available methods.

At least for me, the problems with the new approach vastly outweigh the benefits I've seen so far.

@dfawley
Member

dfawley commented Jun 9, 2020

By default, the option requireUnimplementedServers should default to false.

If this defaults to false, nobody will use it. Note that we wouldn't even have created a flag if we were starting from scratch. The only reason for the flag is to enable backward compatibility without requiring code changes.

It was always supposed to be a requirement (undocumented, most likely, so I have no pointers for you) that the addition of new methods to a proto would be a backward-compatible change. The initial implementation of the grpc-go proto codegen that didn't allow for this broke that requirement.

This issue is covered in #2318 and golang/protobuf#784.

If you have suggestions for other ways to implement this requirement, then we're willing to discuss -- we still haven't released 1.0 yet, so things can be changed. This is the best approach we found.

Note that C++ and Java both have similar implementations. In C++, there is a base class that you use and then override methods for the handlers, and in Java, the service is generated as an abstract class that your implementation must extend.

@dfawley dfawley closed this as completed Jun 9, 2020
@belak
Author

belak commented Jun 9, 2020

By default, the option requireUnimplementedServers should default to false.

If this defaults to false, nobody will use it. Note that we wouldn't even have created a flag if we were starting from scratch. The only reason for the flag is to enable backward compatibility without requiring code changes.

It seems useful, especially in a language where you don't have to explicitly implement an interface or override methods, to make this optional because, as mentioned before, it removes some of the compile-time checks that you're doing things correctly. Even before this change, I'm fairly sure you could still embed the unimplemented version if you wanted to.

It was always supposed to be a requirement (undocumented, most likely, so I have no pointers for you) that the addition of new methods to a proto would be a backward-compatible change. The initial implementation of the grpc-go proto codegen that didn't allow for this broke that requirement.

It makes perfect sense for clients to not break when things are added, but if you're dealing with the canonical implementation of a service, isn't it more logical to realize you're not implementing an interface at compile time rather than runtime? You're not meeting the contract specified by the service you're implementing. I do understand that for services which are implemented in multiple places this doesn't make as much sense and the changes are an improvement.

If you have suggestions for other ways to implement this requirement, then we're willing to discuss -- we still haven't released 1.0 yet, so things can be changed. This is the best approach we found.

Would you consider making the generated method provided by the UnimplementedSomethingServer public? Something like MustProvideForwardCompatibility. This way you could still opt out if you absolutely wanted to, and the generated code wouldn't need to be different between versions.

Alternatively, a second generated struct could be provided that opts you out of forward compatibility when it's embedded - both UnimplementedSomethingServer and ManualSomethingServer could be offered as embed options, with a strong recommendation to use the unimplemented one over the manual one unless yours is the canonical implementation of the RPC service.
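A rough sketch of how that second idea might look in generated code (all names here are invented, and this is not what the generator actually emits): the interface carries an unexported guard method, both structs satisfy the guard, but only one pre-implements the RPC methods.

// Generated service interface; the guard method forces embedding one of the structs below.
type SomethingServer interface {
	DoThing(context.Context, *DoThingRequest) (*DoThingResponse, error)
	mustEmbedSomethingServerBase()
}

// Forward-compatible embed: every RPC already has an Unimplemented stub,
// so newly added methods never break existing implementations.
type UnimplementedSomethingServer struct{}

func (UnimplementedSomethingServer) DoThing(context.Context, *DoThingRequest) (*DoThingResponse, error) {
	return nil, status.Errorf(codes.Unimplemented, "method DoThing not implemented")
}
func (UnimplementedSomethingServer) mustEmbedSomethingServerBase() {}

// Manual embed: satisfies only the guard, so the compiler still requires
// the embedding type to implement every RPC itself.
type ManualSomethingServer struct{}

func (ManualSomethingServer) mustEmbedSomethingServerBase() {}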

Note that C++ and Java both have similar implementations. In C++, there is a base class that you use and then override methods for the handlers, and in Java, the service is generated as an abstract class that your implementation must extend.

The nice thing about Java though is that you at least get a compile time guarantee that you didn't misspell a method with @Override. It's been a while since I've written C++, but I assume there would be compile time checks against the header file. With Go and implicit interfaces it's not as safe to do that.

Thanks for explaining your reasoning - I wasn't aware other languages already did this.

@dfawley
Member

dfawley commented Jun 10, 2020

The nice thing about Java though is that you at least get a compile time guarantee that you didn't misspell a method with @Override. It's been a while since I've written C++, but I assume there would be compile time checks against the header file. With Go and implicit interfaces it's not as safe to do that.

That's a good point. OOP languages have a way of distinguishing adding methods vs. overriding methods, which prevents typos like this. Go isn't OO (it lacks inheritance), and so the type system isn't capable of this kind of feat.

Brainstorming some other options:

  1. Make RegisterFooServer accept interface{}. Applications could then use var _ FooServer = (*myFooServer)(nil) for the check you wanted, above.

    • Con: no type checking at all on the input, but, with an embedded UnimplementedFooServer, half of the value of the type check is already lost so maybe this is not that bad.
  2. Alter UnimplementedFooServer so it is essentially a dispatching implementation instead:

    type UnimplementedFooServer struct { ... }
    func (s *UnimplementedFooServer) MyMethod(...) ... { return s.myMethod(...) }
    func (s *UnimplementedFooServer) SetMyMethod(m func(...)...) { s.myMethod = m }
    func UnsafeWrapFooServer(s UnsafeFooServer) FooServer {
    	// Return simple wrapper that adds a magic method to s.  Breaks if a new method is added (due to UnsafeFooServer parameter type).
    }
    
    // Usage:
    func main() {
    	s := &myFooServer{}
    	u := &pb.UnimplementedFooServer{}
    	u.SetMyMethod(s.MyMethod) // etc...
    	pb.RegisterFooServer(grpcServer, u)
    	// -- or --
    	pb.RegisterFooServer(grpcServer, pb.UnsafeWrapFooServer(&myFooServer{}))
    }

@bufdev
Contributor

bufdev commented Jun 23, 2020

I always appreciate the work of the grpc-go maintainers, but I do think this issue deserves some speaking up, so my two cents:

It was always supposed to be a requirement (undocumented, most likely, so I have no pointers for you) that the addition of new methods to a proto would be a backward-compatible change. The initial implementation of the grpc-go proto codegen that didn't allow for this broke that requirement.

It wasn't a requirement though - there's a lot of things I do in my code that I wish I hadn't done, but unless I'm willing to v2 the code, that's just how it is. Changing that on a minor will cause more harm than the problem it solves.

If you have suggestions for other ways to implement this requirement, then we're willing to discuss -- we still haven't released 1.0 yet, so things can be changed. This is the best approach we found.

The issue is that grpc-go as a whole IS 1.0, and has been for a while. More to the point, in practice, this generator is a port from another 1.0'ed repository, which people are being asked to move to.

By treating the generator as unreleased and adding this requirement under granular and/or documented compatibility guarantees, perhaps this doesn't violate the letter of SemVer, but it certainly violates the spirit. It also doesn't follow the principle of least surprise - I happen to follow these issues closely, but the vast majority of gRPC users do not, and adding a (especially runtime) requirement to embed this could quite literally break the internet. I sympathize with the intent here, but in practice, this generator should be considered 1.0 and not have breaking changes. Making this feature opt-in, while not optimal, is better than lots of users being broken, which will result in users leaving the Protobuf/gRPC ecosystem out of a lack of trust.

@dfawley
Member

dfawley commented Jun 23, 2020

It wasn't a requirement though - there's a lot of things I do in my code that I wish I hadn't done, but unless I'm willing to v2 the code, that's just how it is. Changing that on a minor will cause more harm than the problem it solves.

Adding methods to services was always intended to be backward compatible. It is a backward compatible change in every other language as far as I'm aware. It is also backward compatible at a wire protocol level. Asking users to do a major version bump of their protocol for every new method added, just because of Go, is unacceptable. In a monorepo, this problem can be mitigated by updating every service implementation atomically with the addition of the method, but that doesn't work in OSS.

The issue is that grpc-go as a whole IS 1.0, and has been for a while. More to the point, in practice, this generator is a port from another 1.0'ed repository, where people are being asked to move to this.

Flags will be provided in protoc-gen-go-grpc to enable backward-compatible behavior. If you can't tolerate the new version of the codegen, set the flag and you get the old codegen's code. By default we want to encourage the new behavior, which will allow the addition of methods to be backward-compatible. If it helps avoid confusion, we could name the first release of protoc-gen-go-grpc v2.0 to follow the "spirit" of semver more closely, though I fail to see how this would help, practically speaking.

adding a (especially runtime) requirement to embed this could quite literally break the internet.

The current implementation of this functionality makes it a compile-time requirement, not anything checked at runtime. We are aware of pros and cons of various alternatives and will do everything we can to minimize any disruption.

@bufdev
Contributor

bufdev commented Jun 23, 2020

It is also backward compatible at a wire protocol level. Asking users to do a major version bump of their protocol for every new method added, just because of Go, is unacceptable.

This isn't what's happening though - users' Go code is being broken with this change. It's up to users to decide what compatibility requirements they want for their generated code - for example, we wouldn't expose generated gRPC code as part of SemVer.

In a monorepo, this problem can be mitigated by updating every service implementation atomically with the addition of the method, but that doesn't work in OSS.

This is the key thing though: 99% of gRPC users are not in a monorepo, and SemVer matters as a part of the Golang toolchain at this point.

Flags will be provided in protoc-gen-go-grpc to enable backward-compatible behavior. If you can't tolerate the new version of the codegen, set the flag and you get the old codegen's code.

This in itself is not backwards-compatible. Making breaking changes that are opt-out is not a backwards-compatible change, it has to be opt-in.

This change will further erode trust in the Protobuf/gRPC community and have the opposite effect of its intent. I strongly recommend reconsidering this, and perhaps just making this flag opt-out for Google internal use? More to the point, make the flag opt-in for everyone else, and then opt into it Google-wide?

@neild
Contributor

neild commented Jun 23, 2020

Making breaking changes that are opt-out is not a backwards-compatible change, it has to be opt-in.

Note that there is no change to existing tools. The gRPC service generator in github.com/golang/protobuf/protoc-gen-go behaves as it has in the past.

Someone changing from github.com/golang/protobuf/protoc-gen-go to google.golang.org/grpc/cmd/protoc-gen-go-grpc will need to choose between updating their code or setting --go-grpc_opt=requireUnimplementedServers=false, but they're already making an explicit choice to change generators.

@belak
Author

belak commented Jun 23, 2020

What was the problem with the original approach? Before you could choose to embed the UnimplementedXXXServer and you would get forward compatibility, but it wasn't required. Requiring that it be embedded makes it so when projects update protobufs, they may miss implementing new methods, which seems worse.

What if there were two structs provided that could be embedded, one which provides forward compatibility and one which doesn't? It wouldn't require a wrapper, the default method name could point to the Unimplemented version rather than the one which doesn't provide forward compatibility, and you would have to explicitly choose to not have forward compatibility for service definitions.

@bufdev
Contributor

bufdev commented Jun 23, 2020

Someone changing from github.com/golang/protobuf/protoc-gen-go to google.golang.org/grpc/cmd/protoc-gen-go-grpc will need to choose between updating their code or setting --go-grpc_opt=requireUnimplementedServers=false, but they're already making an explicit choice to change generators.

Yeah, but it's a "choice" in the same way people have the choice to not upgrade to golang/protobuf 1.4+ - people are being strongly encouraged to come over here, and plugins=grpc is being deprecated. As I said, you can make the argument that this is OK per compatibility guarantees/granular versioning, but it definitely violates the spirit of backwards compatibility.

@dfawley
Member

dfawley commented Jun 24, 2020

What was the problem with the original approach?

The problem is that the grpc-go codegen missed the requirement that the addition of new methods should be backward compatible.

For many services, this isn't actually a problem, because there is only one service implementer, and they also own the definition, and can update the two in lockstep. There are many other services that are more generic in nature, however, and have many implementers. E.g. the gRPC health service. Here, we added a watch-based health check, and consequently broke everyone attempting to implement their own health service and importing our canonical service definition. Note that in OSS, protos cannot be registered multiple times, so anyone referencing our health protos in code should be using our .pb.go, or they risk running into init-time panics.
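Roughly what that breakage looked like for an out-of-tree health server (a sketch against the grpc_health_v1 package, assuming its current generated code with an UnimplementedHealthServer struct): when Watch was added to the generated HealthServer interface, any implementation without the embed stopped compiling.

import (
	"context"

	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

type myHealthServer struct {
	// Without this embed, adding Watch to the generated HealthServer
	// interface broke out-of-tree implementations at compile time.
	healthpb.UnimplementedHealthServer
}

func (s *myHealthServer) Check(ctx context.Context, req *healthpb.HealthCheckRequest) (*healthpb.HealthCheckResponse, error) {
	return &healthpb.HealthCheckResponse{Status: healthpb.HealthCheckResponse_SERVING}, nil
}

// Watch is inherited from the embedded struct and returns codes.Unimplemented
// until this server catches up with the newer service definition.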

when projects update protobufs, they may miss implementing new methods, which seems worse.

This is the case in literally every other language, and is a desired feature, not a deficiency. If you forget to implement a method, your clients will get UNIMPLEMENTED, which is perfectly appropriate.
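On the client side, tolerating an older server is a one-line check against that code (sketch; the method and message names are illustrative, while status and codes are the standard google.golang.org/grpc status and codes packages):

if _, err := client.NewlyAddedMethod(ctx, &pb.NewlyAddedMethodRequest{}); status.Code(err) == codes.Unimplemented {
	// The server was built against an older service definition; fall back or degrade gracefully.
}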

Making breaking changes that are opt-out is not a backwards-compatible change, it has to be opt-in.

It is not a requirement of this tool to be backward-compatible with the old tool. Our goal is to minimize any such differences while also bringing in the feature of future-proofing service registration. We will provide an escape valve for people looking to get on the new stack without the need to change any code, but it seems pretty reasonable to expect that some things may be different when migrating to a brand new tool.

@dcow

dcow commented Jun 26, 2020

@bufdev come on... this has nothing to do with the spirit of semver. Use the old tool if you prefer the old behavior and update all your code when ready. That much doesn't seem to be the pressing issue. (edit: although I now admit that your point about how confusing and unclean/dishonest this transition technically is remains valid, especially considering how people are being pushed through the transition by all the relevant documentation and example code without clear guidance on why they're encountering a breaking change, what to do about it, or how to opt out of it.)

For many services, this isn't actually a problem, because there is only one service implementer, and they also own the definition, and can update the two in lockstep. There are many other services that are more generic in nature, however, and have many implementers. E.g. the gRPC health service. Here, we added a watch-based health check, and consequently broke everyone attempting to implement their own health service and importing our canonical service definition. Note that in OSS, protos cannot be registered multiple times, so anyone referencing our health protos in code should be using our .pb.go, or they risk running into init-time panics.

@dfawley people didn't have to update to your new service definition until they were ready to add a 3-line "unimplemented" method that takes all of 15 seconds to write, though. I understand how this can be confusing, because other grpc implementations wouldn't have required code changes to silently add a new server method, and I agree it makes sense to try and close that gap. However, the whole idea of base classes in the first place is native to the referenced languages. In the case of Go, switching everyone to a default-embedded "Unimplemented" server is at least a worse out-of-the-box developer experience for us golanders, but more importantly it makes the generated code less flexible and more cumbersome to use.

For example, my grpc server stubs in my tests embed the preexisting unimplemented "base" type. It's not a secret, it's part of the public API for the generated code. However, I choose not to embed the unimplemented type in my actual code for all of the reasons @belak has outlined. I well understand that an addition to the service definition will result in a compile-time failure; it's a feature. Forcing me to choose one way over the other regresses the flexibility of the generated code. The Go package ecosystem has always been more about providing primitives that enable other people to stitch things together as desired for their applications without having to entirely reinvent things, not necessarily providing bulletproof never-think-about-it-again-until-it-is-deployed-in-your-cluster-and-you-start-seeing-500s frameworks aimed at satisfying all the uber-ly unproductive dev-monsters that lurk around the intertubes.

Anyway. I haven't looked at the new codegen yet (about to do so), but here's one vote for hoping the API remains mostly as before and the frivolous desire to force default adoption of the new behavior throughout the community by introducing a gen-time option won't break projects that rely on the ability to choose whether or not to embed the unimplemented server on a site-specific basis.

Honestly though I don't see why the community needs to be pushed around at all. Is this really actually a big deal? More plugin flags are just more confusing. Can this not be solved with some rework of the server on-boarding examples/docs to better educate package users on the implications of working with grpc-go code, without the addition of a default that mutilates the server development experience by removing all of the support from the compiler for type interface conformance? In fact, one might argue that this is a "failure" of the Go language and it should not be the task of the grpc-go project to pave over things with idiosyncratic code. The only reason adding a new field to a protobuf type doesn't cause a compile-time issue is because Go lets you omit fields from struct instantiation and assigns default values to any omitted fields. There is simply no analog for type interface conformance. Idiomatically, you would introduce two separate interfaces for each basic unit of functionality and expect the type to implement both. Throwing up our arms, saying well then I guess we can't have nice things, and converting the issue to a runtime problem is silly. In reality there probably should have been a new type of health check service added, and then existing health checkers could choose to conform to both the old and new health check interfaces if zero compile-time breakage was paramount. That wouldn't have broken anything, and you avoid this silly base class hierarchical OOP-style inheritance-themed bull**** in favor of granular composition. But I'm not sold on the whole zero compile-time breakage in the first place.

I'm skeptical about the "adding service methods to an rpc service should mirror adding fields to the end of a protobuf type and not result in a compile-time issue in any language impl" requirement in the first place. The service definitions already are backwards compatible. If you codegen off an old service definition on a client but use a newer definition to gen, build, and run a server, the client will still be happily served; it just won't know about the new rpc (and vice versa). The service IDL is already backwards compatible - just like the basic protobuf types. The idea that "a project using grpc codegen should not benefit from its language's compile-time type checking and isolate developers from dependency updates by only exhibiting lazy runtime failure when interfaces are not satisfied" is totally different, simply orthogonal, and not the concern of the IDL whatsoever. The former is important to the protobuf specification. The latter is simply an observation that different languages have different idiomatic ways to describe types and interfaces, and what works in one case might not work in another.

I came to protobuf/grpc so I could have a nice IDL and codegen without sacrificing my language's type system. If it wanted something else I'd use http+json. Like, what's the point of any of the codegen if not to ensure type and interface conformance at compile time?

@dfawley
Member

dfawley commented Jun 26, 2020

For some transparency, since this seems like a popular and controversial issue: at this point we are leaning toward something like option 2 from #3669 (comment), which would provide both static type check protection and backward compatibility w.r.t. new methods added to services. This may mean some changes are needed to perform service registration by default, or maybe not depending on how the implementation ends up. We will share more details when they're available. In the meantime, if anyone has any concrete alternatives that also provide those features, we are happy to hear them.

@dcow

dcow commented Jun 26, 2020

Maybe I wrote too much text. My point is, "why"? I think the existing defaults are just fine and are in line with the language, idiomatically. Why require the majority of use cases to now add an "unsafe" wrapper if they want the safety of compile time interface conformance checking? Respectfully, I think this project is disproportionately responding to a very minor nuisance by proposing to make the server registration API worse in the majority of use cases. I think striving to maintain source compatibility of generated code for interface changes literally makes no sense, because by definition an interface has changed. It should really be the responsibility of whoever is publishing a package to make sure that any changes they make are able to be consumed with as little friction as possible for any given scenario. In the case of the health check server, the people publishing the updates could simply learn from the lesson and devise a slightly more tactical approach next time they want to change an interface consumed by many dependents. Could have been as simple as providing a snippet people could paste in when updating, or something. Anyway I've said my piece. Thanks for the color.

@dfawley
Member

dfawley commented Jun 26, 2020

Why require the majority of use cases to now add an "unsafe" wrapper if they want the safety of compile time interface conformance checking?

Maybe I wasn't clear enough in my response? 😄 It is a goal to remove the "unsafe" wrapper so that type safety can be maintained.

Interfaces can't add methods and remain backward compatible per SemVer. This means you can't publish generated code for your service unless you do a major release whenever new methods are added. This is often infeasible (as in our case), because we don't want to do a major release of gRPC itself when adding a new method to a service exposed by the package (or ever, really). We could split each proto into its own module, but that's more difficult to manage, and ideally, versioning of the generated code should be able to match versioning of the proto definition itself.

Let's wait until we have some concrete proposals to discuss further. Keeping things the way they were before is still an option, but it's not the number one option we're considering at the moment.

@dcow

dcow commented Jun 26, 2020

So let's assume we want to apply sem-ver to published IDL interfaces. What a library traditionally does when it wants to change an interface is add a new "v2" interface (not requiring a major version bump) and ask people to migrate to the new one by deprecating the old interface and phasing it out over time, so it can be removed when the project is ready for a version bump. You can't really have it both ways. You can't create interfaces that don't break contracts when they're changed unless you create the notion of optional methods and build a way for consumers to understand that a method may not be available on all supposedly conforming implementations - in other words, without changing the definition of a contract. If that's how protobuf services are supposed to work or have always been intended to work, it's news to me.

An interface is a pure contract between two parties that outlines an agreement about how the two parties can operate together. Protobuf does not specify service methods as optional. For proto3 messages, yes, all fields are optional and have default values (an interesting retreat from proto2 to proto3 because people didn't want to version their server implementations). Should gRPC take the stance that all proto service methods are actually optional, it would suggest to me something is fundamentally changing about the semantics of the word "interface".

To try an add something more concrete:

It really feels like the addition of an optional prefix for service rpcs to the protobuf spec is in order, followed by, perhaps, a feature request in Go for optional interface methods. If not, at least explicit documentation explaining that when using gRPC, the implementation of protobuf interfaces by servers is, by default, purely optional. Since there's no way for a client to know what version of a server they're talking to when sending an rpc (they only know about the interface as it exists in the application's proto package they depend upon), clients will need to design for the possibility of receiving "not implemented" from their peer. Maybe this is only a grpc-go problem and the rest of the grpc world has been operating this way, but this is enough of a fundamental shift in my book that it warrants discussion of additional syntax.

Anyway, all of this discussion about the semantics of interfaces is probably only happening because the default behavior is changing. I feel like this would be much less contentious if grpc-go maintained the default behavior of requiring services to implement all the specified rpc methods and optionally allowed people to opt into a forward-compatible base server if that's important to them, much like the original proposal in the initial PR. Then perhaps the philosophical change could be eased in more gradually, with more guidance and, ideally, supporting proto syntax/specification that helps solve the forward interface compatibility issue, if it's important for the protobuf community to solve.

Looking forward to the upcoming reveal (;

@dfawley
Member

dfawley commented Aug 5, 2020

This is the idea I'm considering right now. It is similar to the preferred approach mentioned above (option #2 in this comment), but is more fully fleshed out and would be simpler/easier to use.

To accommodate per-method registration capabilities and provide forward compatibility for new methods when they are added, we would create a new <ServiceName>Server struct. Example (based on Echo):

type EchoServer struct {
	// These functions are exactly what the application would implement
	// in the service interface previously.
	UnaryEcho           func(context.Context, *EchoRequest) (*EchoResponse, error)
	ServerStreamingEcho func(*EchoRequest, Echo_ServerStreamingEchoServer) error
	// ...
}

Users would initialize this struct, which would then work with gRPC as follows:

// This implements the function signature required by gRPC service handlers
// and dispatches to the user's handler (if present)
func (s *EchoServer) unaryEcho(_ interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
	if s.UnaryEcho == nil {
		return nil, status.Errorf(codes.Unimplemented, "method UnaryEcho not implemented")
	}
	// ... the old contents of _Echo_UnaryEcho_Handler
	return s.UnaryEcho(ctx, in)
}

// This registers the service on the gRPC server.  Note an interface is used
// to allow alternative server implementations, similar to Clients wrapping
// grpc.ClientConns.
func RegisterEchoService(r grpc.ServiceRegistrar, s *EchoServer) {
	sd := &grpc.ServiceDesc{
		ServiceName: "grpc.examples.echo.Echo",
		Methods: []grpc.MethodDesc{
			{
				MethodName: "UnaryEcho",
				Handler:    s.unaryEcho,
			},
			// ...
		},
		Metadata: "examples/features/proto/echo/echo.proto",
	}
	r.RegisterService(sd, nil)
	// Note: we don't actually need to pass s to RegisterService since the Handler fields
	// will reference the proper method on the service implementation directly.
}

Usage would be as follows:

func main() {
	// Create the gRPC server as before
	grpcServer := grpc.NewServer()
	// Initialize your implementation as before
	srv := myEchoServiceImpl{ /* ... */ }
	// New: add implementation methods to the EchoServer struct
	es := &pb.EchoServer{UnaryEcho: srv.UnaryEcho, /* etc. for each implemented method */}
	// Changed: register your service with the gRPC server
	pb.RegisterEchoService(grpcServer, es)
}

If you would like to guarantee all functions are implemented by your service, we would provide the following. Note that this is unsafe, as newly added methods will break existing implementations, so "Unsafe" is added to all symbols:

// UnsafeEchoService is the server API for Echo service.
type UnsafeEchoService interface {
	// UnaryEcho is unary echo.
	UnaryEcho(context.Context, *EchoRequest) (*EchoResponse, error)
	// ServerStreamingEcho is server side streaming.
	ServerStreamingEcho(*EchoRequest, Echo_ServerStreamingEchoServer) error
...
}

func UnsafeRegisterEchoService(r grpc.ServiceRegistrar, srv UnsafeEchoService) {
	RegisterEchoService(r, &EchoServer{
		UnaryEcho:                  srv.UnaryEcho,
		ServerStreamingEcho:        srv.ServerStreamingEcho,
...
	})
}

These symbols would be generated only if an option is set, since additions to the interface would violate semantic versioning.

Please feel free to comment on this proposal.

@belak
Author

belak commented Aug 5, 2020

This is interesting... switching to a struct rather than an interface for the common case allows compile-time safety for any methods you do implement, while still automatically dispatching missing methods as unimplemented.

Additionally, still exposing the UnsafeSomethingService interface allows people who want the old behavior to get the compile-time interface checks.

This works with my use case and seems fairly well thought out. Thanks for taking the time to look at this in a bit more depth!

@neild
Contributor

neild commented Aug 5, 2020

I think it might be worth questioning whether compile-time type safety in service registration is worth the limitations.

In the Go Stubby API (a Google-internal RPC system), we initially had a service registration mechanism similar to what gRPC uses today: each service has an interface definition, and you register a type which implements the interface.

At a later point, we added an alternate, optional mechanism for registering service handlers, which looks something like this:

rpc.RegisterMethod(method, func(ctx context.Context, req *somepb.Request) (*somepb.Response, error) {
  // method handler
})

One advantage of this is that it avoids the unimplemented-method problem. (Unregistered methods return a not-implemented error.) But there are other benefits. One big one is that since the handler parameter to RegisterMethod is just an interface{}, we can support variable method signatures:

rpc.RegisterMethod(method, func(ctx context.Context, req proto.Message) (proto.Message, error) {
  // Generic handler receiving and returning a proto.Message.
})

rpc.RegisterMethod(method, func(ctx context.Context, req []byte) ([]byte, error) {
  // Handler receiving a raw []byte slice.
})

The downside is that RegisterMethod isn't compile-time type safe. However, in practice service registration generally only happens at startup time, so errors surface as a binary that panics immediately at startup. On balance, I think the benefits gained from not being tied to a single handler function signature have outweighed the loss of type safety.

@dfawley
Member

dfawley commented Aug 5, 2020

@neild I think the gRPC analog to what you're describing is the grpc.ServiceDesc. Each of the Methods in it contains a Handler, which is defined pretty generically as:

func(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor UnaryServerInterceptor) (interface{}, error)

(Streams are similar but have differences due to being a streaming API.) This is how you would use gRPC on a server without the proto codegen.
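For reference, a hand-written registration without any codegen might look roughly like this (the Echo names and the empty-interface HandlerType are illustrative, and interceptor handling is omitted for brevity):

var echoServiceDesc = grpc.ServiceDesc{
	ServiceName: "grpc.examples.echo.Echo",
	HandlerType: (*interface{})(nil), // accept any registered implementation
	Methods: []grpc.MethodDesc{{
		MethodName: "UnaryEcho",
		Handler: func(srv interface{}, ctx context.Context, dec func(interface{}) error, _ grpc.UnaryServerInterceptor) (interface{}, error) {
			in := new(pb.EchoRequest)
			if err := dec(in); err != nil {
				return nil, err
			}
			// Check per method whether the registered value implements it.
			impl, ok := srv.(interface {
				UnaryEcho(context.Context, *pb.EchoRequest) (*pb.EchoResponse, error)
			})
			if !ok {
				return nil, status.Errorf(codes.Unimplemented, "method UnaryEcho not implemented")
			}
			return impl.UnaryEcho(ctx, in)
		},
	}},
	Metadata: "examples/features/proto/echo/echo.proto",
}

func registerEcho(s grpc.ServiceRegistrar, impl interface{}) {
	s.RegisterService(&echoServiceDesc, impl)
}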

gRPC could support a RegisterMethod function that had a similar definition.

Then the way users would use it through the generated code might look something like:

pb.RegisterEchoUnaryEcho(myEchoHandler)
pb.RegisterEchoServerStreamingEcho(myEchoServerStreamingHandler)

Compared to the proposal above, which would look like:

es := pb.EchoServer{UnaryEcho: myEchoHandler, ServerStreamingEcho: myEchoServerStreamingHandler}
pb.RegisterEchoService(grpcServer, es)

In either case, the unimplemented methods would return UNIMPLEMENTED (as all unknown endpoints do).

@dcow

dcow commented Aug 5, 2020

Neat rework! I agree with @belak.

I dissent only semantically on the choice of Unsafe to signal the concept of "source compatibility". In Go, Unsafe means "does not fall under the domain of the language's memory safety guarantees". In gRPC, Unsafe is further used to indicate a transport method that does not implement security. IMO the Unsafe behavior should be the unlabeled behavior, and the forward-compatible call should be labeled like RegisterDefaultEchoServer or RegisterCompatEchoService (per your example). If this is not negotiable, then alternatives for Unsafe might be RegisterEchoServiceChecked, or RegisterEchoServiceStatic, or something. Meh.. nothing really succinct is coming to mind right now, though.

@Limdi

Limdi commented Sep 17, 2020

With the new way of registering, is there a way to get notified when new grpc calls get added, like, ever? Currently this is done with the interface, but I fail to see a way now. This makes implementing the new calls very unlikely, unless something specifically breaks for a user. Even a client and a server on the same major version may not be able to use all of the other side's functionality. Only maintainers who decide to keep the legacy behavior do not have that issue.

@dfawley What about making the interface (with an optional UnimplementedFooServer for users deciding they don't want to implement everything)[1] the default, but letting library authors decide (by gen flag) that it is OK to not have everything implemented, trading the "notification by build failure" for the ability to add grpc calls in a minor version, if they so choose? You get easy upgradeability by default and the ability to add grpc calls in minor versions when needed.

By making the "stable" behavior the default and calling the old way "legacy", you hide the only way that gives users an easy upgrade path, for something that only a small subset of libraries needs. Many will choose the "legacy" way for its easy upgrade path and so have to add the "unstable" flag. But wasn't Go's philosophy to not require configuration for the usual case? I know this is not Go, but I think it applies here too.

[1]: Adding the not-implemented grpc calls later can be done safely by temporarily removing the embedded UnimplementedFooServer.
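In code, the footnote's trick looks something like this (names are illustrative): while the embed is commented out, the build fails until the missing FooServer methods are added, after which the embed is restored.

type fooService struct {
	pb.UnimplementedFooServer // temporarily comment this out after regenerating
}

// With the embed commented out, this assertion (or the RegisterFooServer call
// itself) stops compiling, pointing at the FooServer methods that are still missing.
var _ pb.FooServer = (*fooService)(nil)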

@daved

daved commented Sep 17, 2020

Interfaces must maintain backward compatibility such that existing structs implementing them will continue to do so forever. I.e. methods may not be added to exported interfaces.

Methods ARE being added to the interface. Again, the "requirement" is what I'm suggesting is spurious. The approach of requiring an unimplemented type via opt-out means that existing types can be expected to be silently compatible with changes to a contract, while not actually being compatible with the contract. While composing that form of gracefulness can be a specifically useful mechanism in some cases (by all means, providing a generated unimplemented type does sound reasonable), suggesting that it is a fundamental expectation to opt out of just doesn't seem sound. Typically, if a contract changes, one would expect to explicitly not support it. Otherwise, one should not regenerate their source based on code containing changes to the contract.

Reiterating: I may misunderstand the perspective establishing the requirement. However, going by "I would expect the structs to be forward compatible." from golang/protobuf#784, the concern reads as a fundamental misunderstanding of how composition + duck typing work together in Go to express reusability/abstractions (as well as the resulting expectations). Go's adherence to OOP best practices does make it stick out from more popular manifestations of OOP (particularly class-based languages), but it is probably not a good idea to force it into those foreign expectations.

@dfawley
Member

dfawley commented Sep 18, 2020

I'm going to close this issue. This line of debate is not productive. Adding methods to a service cannot result in a breakage for existing implementers. This is how it works in all our other languages. We'll have a way to get that breakage if that's what you want, because that's how it was done historically in Go, in violation of this requirement. But that absolutely cannot be the default.

To focus on the two main options proposed, I've opened #3885 to discuss.

@dfawley dfawley closed this as completed Sep 18, 2020
@dcow

dcow commented Sep 18, 2020

@dfawley it's come full circle because you are misinterpreting semver and trying to apply it in a way that simply doesn't make sense across the board. At this point, people are seeing the new code and asking for the old code. In reality, the code as it existed before already fulfilled all your requirements. You can solve the problem of operator error without a big complex registration system, and without telling everybody who doesn't see the value of "forward compatible" services that they're wrong, by simply adding documentation to the usage suggesting that, if forward compatibility is a requirement because you publish service implementations, then you need to embed the "unimplemented" struct in your service struct. It really is that simple. No breaking changes needed. No idiosyncratic service registration system required.

fkorotkov added a commit to cirruslabs/cirrus-cli that referenced this issue Sep 24, 2020
Plus use the new gRPC service registration pattern. See more discussion in grpc/grpc-go#3669
L1ghtman2k added a commit to ScoreTrak/ScoreTrak that referenced this issue Nov 2, 2020
2) Add context to all services, and repo interfaces
sebnyberg pushed a commit to secuis/rp-apitools-go that referenced this issue Nov 30, 2020
@g00nix

g00nix commented May 14, 2021

I know I am really late to the party. I would really need to know when the version with false as a default will be stable, because now I have to explain to a lot of people new to coding why compiler errors are better than runtime errors, as they are all giving the argument "but gRPC now prefers runtime errors so they must be better".


@mvrhov

mvrhov commented May 17, 2021

I've stopped using gRPC in favor of vanilla HTTP. Server push in the form required for bidirectional streaming isn't even supported by HTTP3, which was the main reason I used gRPC in the first place.

@dcow What do you mean by that? http3 draft 34

@jhump
Member

jhump commented May 17, 2021

@dcow, server push is not the mechanism used for bidirectional streaming. A bidirectional stream is a normal client-initiated stream, which http/2 then allows to be full duplex (unlike http 1.1).

Server push is for servers that try to anticipate client requests (like related files for a page that is being served, such as CSS, JS, and images) and initiate the streams from the server, as a latency optimization (i.e., the server doesn't have to wait for the client to parse the HTML and then issue those requests).

@dfawley
Member

dfawley commented May 17, 2021

I would really need to know when the version with false as a default will be stable

@g00nix

I'm not sure what you mean about "false as a default", but the current protoc-gen-go-grpc is the stable and recommended version, and there are no experimental features in flight. If you mean require_unimplemented_servers, that is not and will not be false by default.

alexashley added a commit to rode/rode that referenced this issue May 26, 2021
See: grpc/grpc-go#3669

Opting into the default is sensible for servers that are often embedded in other servers -- for instance the gRPC health server.

Given that the Rode server is currently intended to run as a standalone service, and not planned to be embedded or available as a library, this default
doesn't apply. It also prevents us from embedding structs with partial implementations of the larger server interface -- the unimplemented server
already contains all of the exported functions, leading to ambiguity. We could solve this by having the rode server delegate to the other implementation, but
this adds some boilerplate that isn't desirable.

Lagovas added a commit to cossacklabs/acra that referenced this issue Nov 25, 2021
update golang services according to updates grpc/grpc-go#3669
update Makefile with up-to-date building commands
Lagovas added a commit to cossacklabs/acra that referenced this issue Nov 25, 2021
* use AcraTranslatorData instead of fields of services,
  share all data using only AcraTranslatorData struct
* pass build flags in integrations tests
* fix overriding test in integrations tests
* make getClientID as separate function instead of method of TLSServiceWrapper
* add registries for registering callbacks on HTTP/gRPC initialization
* initialize and assign tokenizer to TranslatorData explicitly and outside of
newServer function
* extend makefile with building command of grpc services for tests
update golang services according to updates grpc/grpc-go#3669
@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 9, 2022