S3 bucket names rename project (boto#4291)
* S3 bucket name change project
Our doc standard now requires the use of specific, reserved bucket names
throughout documentation and code examples. This commit replaces every bucket
name I could find with one of those reserved names.

---------

Co-authored-by: Nate Prewitt <[email protected]>
2 people authored and hswong3i committed Oct 3, 2024
1 parent aaa09a4 commit a96cfb4
Showing 18 changed files with 100 additions and 98 deletions.
4 changes: 2 additions & 2 deletions docs/source/guide/clients.rst
@@ -105,8 +105,8 @@ from its list of possible waiters::
 Then to actually start waiting, you must call the waiter's ``wait()`` method
 with the method's appropriate parameters passed in::
 
-    # Begin waiting for the S3 bucket, mybucket, to exist
-    s3_bucket_exists_waiter.wait(Bucket='mybucket')
+    # Begin waiting for the S3 bucket, amzn-s3-demo-bucket, to exist
+    s3_bucket_exists_waiter.wait(Bucket='amzn-s3-demo-bucket')
 
 Multithreading or multiprocessing with clients
 ----------------------------------------------
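The waiter shown in this hunk boils down to polling an API until a condition holds. Here is a minimal pure-Python sketch of that loop, not boto3's actual implementation; the `wait_for` helper, its defaults, and the `bucket_exists` probe are all hypothetical stand-ins:

```python
import time

def wait_for(predicate, delay=5, max_attempts=20):
    """Poll ``predicate`` until it returns True, mimicking a boto3 waiter.

    Raises RuntimeError (boto3 raises a WaiterError) when the condition is
    still false after ``max_attempts`` polls.
    """
    for _ in range(max_attempts):
        if predicate():
            return
        time.sleep(delay)
    raise RuntimeError("condition not met after %d attempts" % max_attempts)

# Example: a condition that becomes true on the third poll.
attempts = {"count": 0}

def bucket_exists():
    attempts["count"] += 1
    return attempts["count"] >= 3

wait_for(bucket_exists, delay=0)  # delay=0 so the example runs instantly
print(attempts["count"])  # → 3
```

A real ``bucket_exists`` waiter issues a HeadBucket request on each poll; the sketch only models the retry scheduling.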
10 changes: 5 additions & 5 deletions docs/source/guide/collections.rst
@@ -34,7 +34,7 @@ the following conditions:
 
 * **Batch actions (see below)**::
 
-    s3.Bucket('my-bucket').objects.delete()
+    s3.Bucket('amzn-s3-demo-bucket').objects.delete()
 
 Filtering
 ---------
@@ -124,11 +124,11 @@ Some collections support batch actions, which are actions that operate
 on an entire page of results at a time. They will automatically handle
 pagination::
 
-    # S3 delete everything in `my-bucket`
+    # S3 delete everything in `amzn-s3-demo-bucket`
     s3 = boto3.resource('s3')
-    s3.Bucket('my-bucket').objects.delete()
+    s3.Bucket('amzn-s3-demo-bucket').objects.delete()
 
 .. danger::
 
-    The above example will **completely erase all data** in the ``my-bucket``
-    bucket! Please be careful with batch actions.
+    The above example will **completely erase all data** in the
+    ``amzn-s3-demo-bucket`` bucket! Please be careful with batch actions.
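The "automatically handle pagination" behavior of batch actions like ``objects.delete()`` comes down to splitting the key list into per-request batches (S3's DeleteObjects API accepts at most 1,000 keys per call). A toy sketch of that chunking, with a hypothetical ``chunked`` helper rather than boto3's real internals:

```python
def chunked(keys, size=1000):
    """Yield lists of at most ``size`` keys, matching the S3 DeleteObjects
    per-request limit that ``objects.delete()`` handles for you."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

# 2500 hypothetical object keys -> 3 requests of 1000, 1000, and 500 keys.
keys = ["key-%d" % i for i in range(2500)]
batches = list(chunked(keys))
print([len(b) for b in batches])  # → [1000, 1000, 500]
```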
2 changes: 1 addition & 1 deletion docs/source/guide/error-handling.rst
@@ -231,7 +231,7 @@ Using Amazon S3 as an example resource service, you can use the client’s excep
 
     client = boto3.resource('s3')
     try:
-        client.create_bucket(BucketName='myTestBucket')
+        client.create_bucket(BucketName='amzn-s3-demo-bucket')
     except client.meta.client.exceptions.BucketAlreadyExists as err:
         print("Bucket {} already exists!".format(err.response['Error']['BucketName']))
93 changes: 47 additions & 46 deletions docs/source/guide/events.rst
@@ -14,17 +14,16 @@ Boto3's event system.
 An introduction to the event system
 -----------------------------------
 
-Boto3's event system allows users to register a function to
-a specific event. Then once the running program reaches a line that
-emits that specific event, Boto3 will call every function
-registered to the event in the order in which they were registered.
-When Boto3 calls each of these registered functions,
-it will call each of them with a specific set of
-keyword arguments that are associated with that event.
-Then once the registered function
-is called, the function may modify the keyword arguments passed to that
-function or return a value.
-Here is an example of how the event system works::
+Boto3's event system allows users to register a function to a specific event.
+Then once the running program reaches a line that emits that specific event,
+Boto3 will call every function registered to the event in the order in which
+they were registered.
+
+When Boto3 calls each of these registered functions, it will call each of them
+with a specific set of keyword arguments that are associated with that event.
+Then once the registered function is called, the function may modify the
+keyword arguments passed to that function or return a value. Here is an
+example of how the event system works::
 
     import boto3
 
@@ -37,19 +36,19 @@ Here is an example of how the event system works::
     def add_my_bucket(params, **kwargs):
         # Add the name of the bucket you want to default to.
         if 'Bucket' not in params:
-            params['Bucket'] = 'mybucket'
+            params['Bucket'] = 'amzn-s3-demo-bucket'
 
     # Register the function to an event
     event_system.register('provide-client-params.s3.ListObjectsV2', add_my_bucket)
 
     response = s3.list_objects_v2()
 
-In this example, the handler ``add_my_bucket``
-is registered such that the handler will inject the
-value ``'mybucket'`` for the ``Bucket`` parameter whenever the
-``list_objects_v2`` client call is made without the ``Bucket`` parameter. Note
-that if the same ``list_objects_v2`` call is made without the ``Bucket``
-parameter and the registered handler, it will result in a validation error.
+In this example, the handler ``add_my_bucket`` is registered such that the
+handler will inject the value ``'amzn-s3-demo-bucket'`` for the ``Bucket``
+parameter whenever the ``list_objects_v2`` client call is made without the
+``Bucket`` parameter. Note that if the same ``list_objects_v2`` call is made
+without the ``Bucket`` parameter and the registered handler, it will result in
+a validation error.
 
 Here are the takeaways from this example:
 
@@ -103,11 +102,11 @@ its hierarchical structure::
 
     def add_my_general_bucket(params, **kwargs):
         if 'Bucket' not in params:
-            params['Bucket'] = 'mybucket'
+            params['Bucket'] = 'amzn-s3-demo-bucket1'
 
     def add_my_specific_bucket(params, **kwargs):
         if 'Bucket' not in params:
-            params['Bucket'] = 'myspecificbucket'
+            params['Bucket'] = 'amzn-s3-demo-bucket2'
 
     event_system.register('provide-client-params.s3', add_my_general_bucket)
     event_system.register('provide-client-params.s3.ListObjectsV2', add_my_specific_bucket)
@@ -116,17 +115,18 @@ its hierarchical structure::
     put_obj_response = s3.put_object(Key='mykey', Body=b'my body')
 
 In this example, the ``list_objects_v2`` method call will use the
-``'myspecificbucket'`` for the bucket instead of ``'mybucket'`` because
-the ``add_my_specific_bucket`` method was registered to the
-``'provide-client-params.s3.ListObjectsV2'`` event which is more specific than
-the ``'provide-client-params.s3'`` event. Thus, the
+``'amzn-s3-demo-bucket2'`` for the bucket instead of
+``'amzn-s3-demo-bucket1'`` because the ``add_my_specific_bucket`` method was
+registered to the ``'provide-client-params.s3.ListObjectsV2'`` event which is
+more specific than the ``'provide-client-params.s3'`` event. Thus, the
 ``add_my_specific_bucket`` function is called before the
 ``add_my_general_bucket`` function is called when the event is emitted.
 
-However for the ``put_object`` call, the bucket used is ``'mybucket'``. This
-is because the event emitted for the ``put_object`` client call is
-``'provide-client-params.s3.PutObject'`` and the ``add_my_general_bucket``
-method is called via its registration to ``'provide-client-params.s3'``. The
+However for the ``put_object`` call, the bucket used is
+``'amzn-s3-demo-bucket1'``. This is because the event emitted for the
+``put_object`` client call is ``'provide-client-params.s3.PutObject'`` and the
+``add_my_general_bucket`` method is called via its registration to
+``'provide-client-params.s3'``. The
 ``'provide-client-params.s3.ListObjectsV2'`` event is never emitted so the
 registered ``add_my_specific_bucket`` function is never called.
 
@@ -147,7 +147,7 @@ of using wildcards in the event system::
 
     def add_my_wildcard_bucket(params, **kwargs):
         if 'Bucket' not in params:
-            params['Bucket'] = 'mybucket'
+            params['Bucket'] = 'amzn-s3-demo-bucket'
 
     event_system.register('provide-client-params.s3.*', add_my_wildcard_bucket)
     response = s3.list_objects_v2()
@@ -184,11 +184,11 @@ to another client's event system::
 
     def add_my_bucket(params, **kwargs):
         if 'Bucket' not in params:
-            params['Bucket'] = 'mybucket'
+            params['Bucket'] = 'amzn-s3-demo-bucket1'
 
     def add_my_other_bucket(params, **kwargs):
         if 'Bucket' not in params:
-            params['Bucket'] = 'myotherbucket'
+            params['Bucket'] = 'amzn-s3-demo-bucket2'
 
     client1.meta.events.register(
         'provide-client-params.s3.ListObjectsV2', add_my_bucket)
@@ -200,10 +200,10 @@ to another client's event system::
 
 
 Thanks to the isolation of clients' event systems, ``client1`` will inject
-``'mybucket'`` for its ``list_objects_v2`` method call while ``client2`` will
-inject ``'myotherbucket'`` for its ``list_objects_v2`` method call because
-``add_my_bucket`` was registered to ``client1`` while ``add_my_other_bucket``
-was registered to ``client2``.
+``'amzn-s3-demo-bucket1'`` for its ``list_objects_v2`` method call while
+``client2`` will inject ``'amzn-s3-demo-bucket2'`` for its ``list_objects_v2``
+method call because ``add_my_bucket`` was registered to ``client1`` while
+``add_my_other_bucket`` was registered to ``client2``.
 
 
 Boto3 specific events
@@ -212,13 +212,14 @@ Boto3 specific events
 Boto3 emits a set of events that users can register to
 customize clients or resources and modify the behavior of method calls.
 
-Here is a table of events that users of Boto3 can register handlers to. More information
-about each event can be found in the corresponding sections below:
+Here is a table of events that users of Boto3 can register handlers to. More
+information about each event can be found in the corresponding sections below:
 
 .. note::
 
-    Events with a ``*`` in their order number are conditionally emitted while all others are always emitted.
-    An explanation of all 3 conditional events is provided below.
+    Events with a ``*`` in their order number are conditionally emitted while
+    all others are always emitted. An explanation of all 3 conditional events is
+    provided below.
 
 ``2 *`` - ``creating-resource-class`` is emitted ONLY when using a service resource.
 
@@ -440,7 +441,7 @@ about each event can be found in the corresponding sections below:
         def add_my_bucket(params, **kwargs):
             # Add the name of the bucket you want to default to.
             if 'Bucket' not in params:
-                params['Bucket'] = 'mybucket'
+                params['Bucket'] = 'amzn-s3-demo-bucket'
 
         # Register the function to an event
         event_system.register('provide-client-params.s3.ListObjectsV2', add_my_bucket)
@@ -551,13 +552,13 @@ about each event can be found in the corresponding sections below:
         # Register the function to an event
        event_system.register('request-created.s3.ListObjectsV2', inspect_request_created)
 
-        response = s3.list_objects_v2(Bucket='my-bucket')
+        response = s3.list_objects_v2(Bucket='amzn-s3-demo-bucket')
 
     This should output::
 
         Request Info:
         method: GET
-        url: https://my-bucket.s3 ...
+        url: https://amzn-s3-demo-bucket.s3 ...
         data: ...
         params: { ... }
         auth_path: ...
@@ -682,9 +683,9 @@ about each event can be found in the corresponding sections below:
     ``'after-call.service-name.operation-name'``
 
     :Description:
-        This event is emitted just after the service client makes an API call.
-        This event allows developers to postprocess or inspect the API response according to the
-        specific requirements of their application if needed.
+        This event is emitted just after the service client makes an API call. This
+        event allows developers to postprocess or inspect the API response according
+        to the specific requirements of their application if needed.
 
     :Keyword Arguments Emitted:
 
@@ -720,7 +721,7 @@ about each event can be found in the corresponding sections below:
        # Register the function to an event
        event_system.register('after-call.s3.ListObjectsV2', print_after_call_args)
 
-        s3.list_objects_v2(Bucket='my-bucket')
+        s3.list_objects_v2(Bucket='amzn-s3-demo-bucket')
 
     This should output::
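The hierarchy described above — a handler on ``'provide-client-params.s3.ListObjectsV2'`` firing before one on ``'provide-client-params.s3'`` — can be modeled in a few lines of plain Python. This ``MiniEvents`` class is a toy sketch, not boto3's real emitter:

```python
class MiniEvents:
    """A toy model of hierarchical event dispatch: handlers registered for a
    more specific dotted event name run before handlers registered for a
    more general prefix of that name."""

    def __init__(self):
        self._handlers = {}  # event name -> list of handlers

    def register(self, name, handler):
        self._handlers.setdefault(name, []).append(handler)

    def emit(self, name, **kwargs):
        # Most specific first: 'a.b.c', then 'a.b', then 'a'.
        parts = name.split('.')
        for end in range(len(parts), 0, -1):
            for handler in self._handlers.get('.'.join(parts[:end]), []):
                handler(**kwargs)

events = MiniEvents()
calls = []
events.register('provide-client-params.s3', lambda **kw: calls.append('general'))
events.register('provide-client-params.s3.ListObjectsV2',
                lambda **kw: calls.append('specific'))

events.emit('provide-client-params.s3.ListObjectsV2')
print(calls)  # → ['specific', 'general']
```

Emitting ``'provide-client-params.s3.PutObject'`` against the same registry would call only the general handler, matching the ``put_object`` behavior described in the guide.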
4 changes: 2 additions & 2 deletions docs/source/guide/migration.rst
@@ -34,10 +34,10 @@ Second, while every service now uses the runtime-generated low-level client, som
     # High-level connections & resource objects
     from boto.s3.bucket import Bucket
     s3_conn = boto.connect_s3()
-    boto2_bucket = Bucket('mybucket')
+    boto2_bucket = Bucket('amzn-s3-demo-bucket')
 
     s3 = boto3.resource('s3')
-    boto3_bucket = s3.Bucket('mybucket')
+    boto3_bucket = s3.Bucket('amzn-s3-demo-bucket')
 
 Installation and configuration
 ------------------------------
18 changes: 9 additions & 9 deletions docs/source/guide/migrations3.rst
@@ -21,12 +21,12 @@ Creating a bucket
 Creating a bucket in Boto 2 and Boto3 is very similar, except that in Boto3 all action parameters must be passed via keyword arguments and a bucket configuration must be specified manually::
 
     # Boto 2.x
-    s3_connection.create_bucket('mybucket')
-    s3_connection.create_bucket('mybucket', location=Location.USWest)
+    s3_connection.create_bucket('amzn-s3-demo-bucket')
+    s3_connection.create_bucket('amzn-s3-demo-bucket', location=Location.USWest)
 
     # Boto3
-    s3.create_bucket(Bucket='mybucket')
-    s3.create_bucket(Bucket='mybucket', CreateBucketConfiguration={
+    s3.create_bucket(Bucket='amzn-s3-demo-bucket')
+    s3.create_bucket(Bucket='amzn-s3-demo-bucket', CreateBucketConfiguration={
         'LocationConstraint': 'us-west-1'})
 
 Storing data
@@ -39,23 +39,23 @@ Storing data from a file, stream, or string is easy::
     key.set_contents_from_file('/tmp/hello.txt')
 
     # Boto3
-    s3.Object('mybucket', 'hello.txt').put(Body=open('/tmp/hello.txt', 'rb'))
+    s3.Object('amzn-s3-demo-bucket', 'hello.txt').put(Body=open('/tmp/hello.txt', 'rb'))
 
 
 Accessing a bucket
 ------------------
 Getting a bucket is easy with Boto3's resources, however these do not automatically validate whether a bucket exists::
 
     # Boto 2.x
-    bucket = s3_connection.get_bucket('mybucket', validate=False)
-    exists = s3_connection.lookup('mybucket')
+    bucket = s3_connection.get_bucket('amzn-s3-demo-bucket', validate=False)
+    exists = s3_connection.lookup('amzn-s3-demo-bucket')
 
     # Boto3
     import botocore
-    bucket = s3.Bucket('mybucket')
+    bucket = s3.Bucket('amzn-s3-demo-bucket')
     exists = True
     try:
-        s3.meta.client.head_bucket(Bucket='mybucket')
+        s3.meta.client.head_bucket(Bucket='amzn-s3-demo-bucket')
     except botocore.exceptions.ClientError as e:
         # If a client error is thrown, then check that it was a 404 error.
         # If it was a 404 error, then the bucket does not exist.
8 changes: 4 additions & 4 deletions docs/source/guide/paginators.rst
@@ -32,7 +32,7 @@ underlying API operation. The ``paginate`` method then returns an iterable
     paginator = client.get_paginator('list_objects_v2')
 
     # Create a PageIterator from the Paginator
-    page_iterator = paginator.paginate(Bucket='my-bucket')
+    page_iterator = paginator.paginate(Bucket='amzn-s3-demo-bucket')
 
     for page in page_iterator:
         print(page['Contents'])
@@ -47,7 +47,7 @@ the pages of API operation results. The ``paginate`` method accepts a
 pagination::
 
     paginator = client.get_paginator('list_objects_v2')
-    page_iterator = paginator.paginate(Bucket='my-bucket',
+    page_iterator = paginator.paginate(Bucket='amzn-s3-demo-bucket',
                                        PaginationConfig={'MaxItems': 10})
 
 ``MaxItems``
@@ -82,7 +82,7 @@ to the client::
 
     client = boto3.client('s3', region_name='us-west-2')
     paginator = client.get_paginator('list_objects_v2')
-    operation_parameters = {'Bucket': 'my-bucket',
+    operation_parameters = {'Bucket': 'amzn-s3-demo-bucket',
                             'Prefix': 'foo/baz'}
     page_iterator = paginator.paginate(**operation_parameters)
     for page in page_iterator:
@@ -103,7 +103,7 @@ JMESPath expressions that are applied to each page of results through the
 
     client = boto3.client('s3', region_name='us-west-2')
     paginator = client.get_paginator('list_objects_v2')
-    page_iterator = paginator.paginate(Bucket='my-bucket')
+    page_iterator = paginator.paginate(Bucket='amzn-s3-demo-bucket')
     filtered_iterator = page_iterator.search("Contents[?Size > `100`][]")
     for key_data in filtered_iterator:
         print(key_data)
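The ``MaxItems`` behavior shown in these hunks — stop yielding results once a total cap is reached, regardless of page boundaries — can be sketched with a plain generator. This is a simplified model, not botocore's paginator (the real one also surfaces a resume token for continuing later); the ``paginate`` helper below is hypothetical:

```python
def paginate(items, page_size, max_items=None):
    """A toy model of a boto3 paginator: yield pages of results, truncating
    the final page once ``max_items`` results have been produced."""
    produced = 0
    for i in range(0, len(items), page_size):
        page = items[i:i + page_size]
        if max_items is not None and produced + len(page) > max_items:
            page = page[:max_items - produced]
        produced += len(page)
        if page:
            yield page
        if max_items is not None and produced >= max_items:
            return

keys = ['key-%d' % i for i in range(7)]
pages = list(paginate(keys, page_size=3, max_items=5))
print([len(p) for p in pages])  # → [3, 2]
```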
7 changes: 4 additions & 3 deletions docs/source/guide/quickstart.rst
@@ -140,12 +140,13 @@ Now that you have an ``s3`` resource, you can make send requests to the service.
     for bucket in s3.buckets.all():
         print(bucket.name)
 
-You can also upload and download binary data. For example, the following uploads a new file to S3,
-assuming that the bucket ``my-bucket`` already exists::
+You can also upload and download binary data. For example, the following
+uploads a new file to S3, assuming that the bucket ``amzn-s3-demo-bucket``
+already exists::
 
     # Upload a new file
     with open('test.jpg', 'rb') as data:
-        s3.Bucket('my-bucket').put_object(Key='test.jpg', Body=data)
+        s3.Bucket('amzn-s3-demo-bucket').put_object(Key='test.jpg', Body=data)
 
 :ref:`guide_resources` and :ref:`guide_collections` are covered in more detail in the following
 sections.
12 changes: 6 additions & 6 deletions docs/source/guide/resources.rst
@@ -48,12 +48,12 @@ instantiation will result in an exception. Examples of identifiers::
     print(queue.url)
 
     # S3 Object (bucket_name and key are identifiers)
-    obj = s3.Object(bucket_name='boto3', key='test.py')
+    obj = s3.Object(bucket_name='amzn-s3-demo-bucket', key='test.py')
     print(obj.bucket_name)
     print(obj.key)
 
     # Raises exception, missing identifier: key!
-    obj = s3.Object(bucket_name='boto3')
+    obj = s3.Object(bucket_name='amzn-s3-demo-bucket')
 
 Identifiers may also be passed as positional arguments::
 
@@ -70,9 +70,9 @@ Identifiers also play a role in resource instance equality. For two
 instances of a resource to be considered equal, their identifiers must
 be equal::
 
-    >>> bucket1 = s3.Bucket('boto3')
-    >>> bucket2 = s3.Bucket('boto3')
-    >>> bucket3 = s3.Bucket('some-other-bucket')
+    >>> bucket1 = s3.Bucket('amzn-s3-demo-bucket1')
+    >>> bucket2 = s3.Bucket('amzn-s3-demo-bucket1')
+    >>> bucket3 = s3.Bucket('amzn-s3-demo-bucket3')
 
     >>> bucket1 == bucket2
     True
@@ -128,7 +128,7 @@ of actions::
     message.delete()
 
     # S3 Object
-    obj = s3.Object(bucket_name='boto3', key='test.py')
+    obj = s3.Object(bucket_name='amzn-s3-demo-bucket', key='test.py')
     response = obj.get()
     data = response['Body'].read()
 
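The identifier-based equality that this hunk's doctest demonstrates can be modeled with a tiny class whose ``__eq__`` compares only identifiers. ``FakeBucket`` is a hypothetical stand-in, not the real ``s3.Bucket`` resource class:

```python
class FakeBucket:
    """A toy model of resource equality: two instances are equal exactly
    when their identifiers (here, the bucket name) are equal."""

    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return isinstance(other, FakeBucket) and self.name == other.name

bucket1 = FakeBucket('amzn-s3-demo-bucket1')
bucket2 = FakeBucket('amzn-s3-demo-bucket1')
bucket3 = FakeBucket('amzn-s3-demo-bucket3')
print(bucket1 == bucket2)  # → True
print(bucket1 == bucket3)  # → False
```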