Option to allow public_url to be HTTPS #551
No. Most buckets are DNS-named, and thus HTTPS generally fails certificate validation.
(In reply to Dan Craig's comment of Mon, May 18, 2015, 3:41 PM.)
Thanks, I figured it wasn't workable across the board but wasn't sure why. But a separate option, say something like …
*.s3.amazonaws.com (where * is a single word, no dots) is only meaningful …
(In reply to Dan Craig's comment of Mon, May 18, 2015, 4:57 PM.)
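The single-label restriction mentioned above can be sketched as follows. This is an illustrative check of the wildcard-matching rule, not code from s3cmd:

```python
# A minimal sketch (not s3cmd code) of the rule the comment refers to:
# the "*" in a wildcard certificate such as *.s3.amazonaws.com matches
# exactly one DNS label, so bucket names containing dots do not match.
def wildcard_matches(pattern, hostname):
    """Return True if the certificate pattern covers hostname.
    The '*' may stand in for a single label only (no dots)."""
    p_labels = pattern.split(".")
    h_labels = hostname.split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(wildcard_matches("*.s3.amazonaws.com", "mybucket.s3.amazonaws.com"))   # True
print(wildcard_matches("*.s3.amazonaws.com", "my.bucket.s3.amazonaws.com"))  # False
```

This is why a bucket named "mybucket" works over HTTPS in virtual-hosted style, while "my.bucket" triggers a certificate-validation failure.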
Looks like the preferred way to do HTTPS with Amazon S3 is through CloudFront. I suppose that could be outside the scope of s3cmd. So perhaps it is a limited use case for Amazon S3, which I'm sure is the bulk of your user base. Just remember that there are at least a few of us who use s3cmd to talk to S3-compatible APIs for tools like Riak CS. s3cmd is a great tool for that, and we've gotten a ton of use from it.
Hello,

The protocol for the URL returned from

s3cmd info <object>

appears to be hardcoded to HTTP in https://github.com/s3tools/s3cmd/blob/master/S3/S3Uri.py#L82. Could it be made configurable to use HTTPS instead? I'm not sure whether it would be safe to extend the meaning of use_https to cover this or not, so perhaps it should be its own standalone config option. Our use case is an internal corporate Riak CS system.
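For illustration only, a sketch of what such a standalone option might look like. The class shape, the public_url_use_https flag name, and the URL format are assumptions modeled loosely on the public_url behavior in S3/S3Uri.py, not the project's actual code:

```python
# Hypothetical sketch of a scheme-configurable public URL. The option
# name "public_url_use_https" is an assumption, not an existing s3cmd
# config setting.
class S3UriS3:
    def __init__(self, bucket, object_key, public_url_use_https=False):
        self.bucket = bucket
        self.object_key = object_key
        self.public_url_use_https = public_url_use_https

    def public_url(self):
        scheme = "https" if self.public_url_use_https else "http"
        # Virtual-hosted-style URL; note that with HTTPS, a bucket name
        # containing dots would still hit the wildcard-certificate
        # problem discussed in the comments above.
        return "%s://%s.s3.amazonaws.com/%s" % (
            scheme, self.bucket, self.object_key)

print(S3UriS3("mybucket", "file.txt").public_url())
print(S3UriS3("mybucket", "file.txt", public_url_use_https=True).public_url())
```

Keeping this separate from use_https (which governs the transport s3cmd itself uses) avoids silently changing the meaning of an existing setting, which is the concern raised in the issue.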