
Upgrading to latest causes Lambdas to fail #21

Closed
mattcollum opened this issue Jun 27, 2019 · 5 comments · Fixed by #24
Comments

@mattcollum

We've been using 0.0.1-beta.2 for a while successfully. There was a message in the Sentry console today that we should update to 0.1.0, so we did. Local testing seemed fine, but when running in an actual (Lambda-based) environment it short-circuited all processing of the function for some reason. Rolling back to the older version resolved it. Any idea what may be causing this? Are there code changes we should be making to get onto the current version?

@kamilogorek
Contributor

@mattcollum all breaking changes are noted in the changelog: https://github.com/getsentry/sentry-go/blob/master/CHANGELOG.md

Do you have any more insights for this issue? I'll take a look at it first thing on Monday.

@kamilogorek
Contributor

I just tested everything on AWS and it seems to work just fine. When working with serverless solutions, you may want to take a look at https://docs.sentry.io/platforms/go/transports and use HTTPSyncTransport now, as it doesn't require Flush calls.
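For reference, a minimal sketch of that wiring (the timeout value and placeholder DSN are illustrative, not from this project):

package main

import (
	"time"

	"github.com/getsentry/sentry-go"
)

func main() {
	// HTTPSyncTransport blocks until each event is delivered,
	// so no Flush call is needed before the process exits.
	transport := sentry.NewHTTPSyncTransport()
	transport.Timeout = 3 * time.Second // illustrative timeout

	sentry.Init(sentry.ClientOptions{
		Dsn:       "https://<publicKey>@sentry.io/<projectID>", // placeholder DSN
		Transport: transport,
	})
}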

@mattcollum
Author

@kamilogorek sorry for the delayed response, holiday long weekend here :)

So the Lambdas just seem to exit after we init Sentry and call defer sentry.Recover().

The init is pretty straightforward. The first time through, the calls to Sentry include:

sentry.Init with Dsn, AttachStacktrace and BeforeSend

And on subsequent invocations, if the Lambda was already initialized, we're calling:

sentry.ConfigureScope(func(scope *sentry.Scope) { scope.Clear() })
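Roughly, with placeholder values (the BeforeSend body here is illustrative, not our real one):

sentry.Init(sentry.ClientOptions{
	Dsn:              "https://<publicKey>@sentry.io/<projectID>", // placeholder
	AttachStacktrace: true,
	BeforeSend: func(event *sentry.Event, hint *sentry.EventHint) *sentry.Event {
		// e.g. scrub fields or drop events here
		return event
	},
})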

Haven't had a chance to review the changelog or look into HTTPSyncTransport as we're pretty busy right now, but I can look down the road for sure. For now I've locked the version back to beta.2 and it's working again. If you happen to see anything in this init or the Recover calls that could be the cause, let me know.

@kamilogorek
Contributor

kamilogorek commented Jul 1, 2019

@mattcollum if you don't await event delivery, your Lambda will instantly exit the process. You have to either flush the queue manually (as sketched below) or use SyncTransport. I do wonder how it worked in beta.2 without either, though.
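The manual-flush variant would look roughly like this (the two-second timeout is an arbitrary choice):

func HandleRequest() (string, error) {
	// Deferred calls run LIFO: Recover captures the panic first,
	// then Flush blocks until queued events are delivered or the timeout hits.
	defer sentry.Flush(2 * time.Second)
	defer sentry.Recover()
	// ... handler logic ...
	return "ok", nil
}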

Just deployed code below to Lambda and can confirm that it correctly reports to Sentry:

https://sentry.io/share/issue/77c21f3def21495785e0635009d68dde/

package main

import (
	"errors"

	"github.com/aws/aws-lambda-go/lambda"
	"github.com/getsentry/sentry-go"
)

func foo() error {
	return bar()
}

func bar() error {
	return baz()
}

func baz() error {
	panic(errors.New("boom"))
}

func HandleRequest() (string, error) {
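	// Recover runs on panic and reports it to Sentry before the handler returns.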
	defer sentry.Recover()
	foo()
	return "Mkey", nil
}

func main() {
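	// With HTTPSyncTransport every event is sent synchronously,
	// so no Flush is needed before the Lambda runtime freezes the process.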
	sentry.Init(sentry.ClientOptions{
		Dsn: "https://[email protected]/1419836",
		Transport: sentry.NewHTTPSyncTransport(),
	})

	lambda.Start(HandleRequest)
}

@mattcollum
Author

Added the SyncTransport and took out the flushes, and that seems to be working fine. The issue seems to be caused by something I'm doing to clear the scope between requests.

  1. Currently calling this on the second request and onwards, before processing starts:

sentry.ConfigureScope(func(scope *sentry.Scope) {
	scope.Clear()
})

  2. Then we add some tags:

sentry.ConfigureScope(func(scope *sentry.Scope) {
	scope.SetTags(tags)
})

The first time the Lambda runs, scope.SetTags is fine. The second time, I'm getting:

runtime.plainError
assignment to entry in nil map

Let me know if there is a better way to reset scopes between requests
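One alternative I'm considering is pushing and popping a scope per invocation instead of clearing the shared one, along these lines (untested on my end):

func handleOnce(tags map[string]string) {
	// Fresh scope layer per invocation; popped when the handler returns,
	// so tags don't leak into the next request.
	sentry.PushScope()
	defer sentry.PopScope()

	sentry.ConfigureScope(func(scope *sentry.Scope) {
		scope.SetTags(tags)
	})

	// ... per-request processing ...
}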
