
Fix panic when s3 URL is invalid
Gracefully handle S3 URLs that have an unexpected number of path segments.

Currently we expect `s3.amazonaws.com/bucket/path`, but a URL like `s3.amazonaws.com/bucket` causes a panic, e.g.

```
panic: runtime error: index out of range [2] with length 2

github.com/hashicorp/go-getter.(*S3Getter).parseUrl(,)
    /go/pkg/mod/github.com/hashicorp/[email protected]/get_s3.go:272
github.com/hashicorp/go-getter.(*S3Getter).Get(, {,},)
    /go/pkg/mod/git...
```
  • Loading branch information
liamg committed Jul 24, 2024
1 parent 5a63fd9 commit 83fd927
Showing 1 changed file with 12 additions and 0 deletions.
```diff
--- a/get_s3.go
+++ b/get_s3.go
@@ -268,6 +268,10 @@ func (g *S3Getter) parseUrl(u *url.URL) (region, bucket, path, version string, c
 		region = "us-east-1"
 	}
 	pathParts := strings.SplitN(u.Path, "/", 3)
+	if len(pathParts) < 3 {
+		err = fmt.Errorf("URL is not a valid S3 URL")
+		return
+	}
 	bucket = pathParts[1]
 	path = pathParts[2]
 // vhost-style, dash region indication
@@ -279,12 +283,20 @@ func (g *S3Getter) parseUrl(u *url.URL) (region, bucket, path, version string, c
 		return
 	}
 	pathParts := strings.SplitN(u.Path, "/", 2)
+	if len(pathParts) < 2 {
+		err = fmt.Errorf("URL is not a valid S3 URL")
+		return
+	}
 	bucket = hostParts[0]
 	path = pathParts[1]
 // vhost-style, dot region indication
 case 5:
 	region = hostParts[2]
 	pathParts := strings.SplitN(u.Path, "/", 2)
+	if len(pathParts) < 2 {
+		err = fmt.Errorf("URL is not a valid S3 URL")
+		return
+	}
 	bucket = hostParts[0]
 	path = pathParts[1]
 
```
