Google Cloud Client Library for Python
- an idiomatic, intuitive, and natural way for Python developers to
integrate with Google Cloud Platform services, like Cloud Datastore
@@ -71,7 +77,7 @@
google-cloud is a client library for accessing Google
Cloud Platform services that significantly reduces the boilerplate
code you have to write. The library provides high-level API
abstractions that are easier to understand. It embraces
@@ -127,7 +133,7 @@
What is it?
All this means you spend more time creating code that matters
to you.
- gcloud is configured to access Google Cloud Platform
+ google-cloud is configured to access Google Cloud Platform
services and authorize (OAuth 2.0) automatically on your behalf.
With a one-line install and a private key, you are up and ready
to go. Better yet, if you are running on a Google Compute Engine
@@ -138,7 +144,7 @@
- gcloud-python-expenses-demo - Use gcloud-python with the Datastore and Cloud Storage to manage expenses
+ gcloud-python-expenses-demo - Use google-cloud-python with the Datastore and Cloud Storage to manage expenses
@@ -166,22 +172,22 @@
Examples
FAQ
- What is the relationship between the gcloud package
+ What is the relationship between the google-cloud package
and the gcloud command-line tool?
Both the gcloud command-line tool and
- gcloud package are a part of the Google Cloud SDK: a collection
+ google-cloud package are a part of the Google Cloud SDK: a collection
of tools and libraries that enable you to easily create and manage
resources on the Google Cloud Platform. The gcloud command-line
tool can be used to manage both your development workflow and your
- Google Cloud Platform resources while the gcloud package is the
+ Google Cloud Platform resources while the google-cloud package is the
Google Cloud Client Library for Python.
- What is the relationship between gcloud
+ What is the relationship between google-cloud
and the Google APIs Python Client?
The
Google APIs Python Client is a client library for
using the broad set of Google APIs.
- gcloud is built specifically for the Google Cloud Platform
+ google-cloud is built specifically for the Google Cloud Platform
and is the recommended way to integrate Google Cloud APIs into your
Python applications. If your application requires both Google Cloud Platform and
other Google APIs, the two libraries can be used together.
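As a concrete illustration of the boilerplate reduction described above, here is a minimal sketch using the Cloud Datastore client. It assumes application default credentials and a project are already configured in the environment; the 'Task' kind and 'description' field are hypothetical:

    # Sketch: store and fetch a Datastore entity with google-cloud-python.
    from google.cloud import datastore

    client = datastore.Client()       # project and credentials inferred
    entity = datastore.Entity(key=client.key('Task'))   # hypothetical kind
    entity['description'] = 'Learn google-cloud'
    client.put(entity)                # one call, no hand-built HTTP request

    fetched = client.get(entity.key)  # key was completed by put()
    print(fetched['description'])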
diff --git a/latest/.buildinfo b/latest/.buildinfo
index a1cb70e9df85..e7c0d4c4bb62 100644
--- a/latest/.buildinfo
+++ b/latest/.buildinfo
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: c47f8a026362260bbd4b30e616d24b42
+config: b1abedd30578a89bbd7f5f0c1adf0729
tags: 645f666f9bcd5a90fca523b33c5a78b7
diff --git a/latest/_modules/gcloud/bigquery/connection.html b/latest/_modules/gcloud/bigquery/connection.html
deleted file mode 100644
index 9e15efd7c8f0..000000000000
--- a/latest/_modules/gcloud/bigquery/connection.html
+++ /dev/null
@@ -1,255 +0,0 @@
- gcloud.bigquery.connection — gcloud 47bfd0a documentation
-# Copyright 2015 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Create / interact with gcloud bigquery connections."""
-
-from gcloud import connection as base_connection
-
-
-
-class Connection(base_connection.JSONConnection):
- """A connection to Google Cloud BigQuery via the JSON REST API."""
-
-    API_BASE_URL = 'https://www.googleapis.com'
-    """The base of the API call URL."""
-
-    API_VERSION = 'v2'
-    """The version of the API, used in building the API call's URL."""
-
-    API_URL_TEMPLATE = '{api_base_url}/bigquery/{api_version}{path}'
-    """A template for the URL of a particular API call."""
-
-    SCOPE = ('https://www.googleapis.com/auth/bigquery',
-             'https://www.googleapis.com/auth/cloud-platform')
- """The scopes required for authenticating as a Cloud BigQuery consumer."""
Source code for gcloud.bigtable.client
-# Copyright 2015 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Parent client for calling the Google Cloud Bigtable API.
-
-This is the base from which all interactions with the API occur.
-
-In the hierarchy of API concepts
-
-* a :class:`Client` owns an :class:`.Instance`
-* an :class:`.Instance` owns a :class:`Table <gcloud.bigtable.table.Table>`
-* a :class:`Table <gcloud.bigtable.table.Table>` owns a
- :class:`ColumnFamily <.column_family.ColumnFamily>`
-* a :class:`Table <gcloud.bigtable.table.Table>` owns a :class:`Row <.row.Row>`
- (and all the cells in the row)
-"""
-
-
-from pkg_resources import get_distribution
-
-from grpc.beta import implementations
-
-from gcloud.bigtable._generated_v2 import (
-    bigtable_instance_admin_pb2 as instance_admin_v2_pb2)
-# V1 table admin service
-from gcloud.bigtable._generated_v2 import (
-    bigtable_table_admin_pb2 as table_admin_v2_pb2)
-# V1 data service
-from gcloud.bigtable._generated_v2 import (
-    bigtable_pb2 as data_v2_pb2)
-
-from gcloud.bigtable._generated_v2 import (
-    operations_grpc_pb2 as operations_grpc_v2_pb2)
-
-from gcloud.bigtable.cluster import DEFAULT_SERVE_NODES
-from gcloud.bigtable.instance import Instance
-from gcloud.bigtable.instance import _EXISTING_INSTANCE_LOCATION_ID
-from gcloud.client import _ClientFactoryMixin
-from gcloud.client import _ClientProjectMixin
-from gcloud.credentials import get_credentials
-
-
-TABLE_STUB_FACTORY_V2 = (
-    table_admin_v2_pb2.beta_create_BigtableTableAdmin_stub)
-TABLE_ADMIN_HOST_V2 = 'bigtableadmin.googleapis.com'
-"""Table Admin API request host."""
-TABLE_ADMIN_PORT_V2 = 443
-"""Table Admin API request port."""
-
-INSTANCE_STUB_FACTORY_V2 = (
-    instance_admin_v2_pb2.beta_create_BigtableInstanceAdmin_stub)
-INSTANCE_ADMIN_HOST_V2 = 'bigtableadmin.googleapis.com'
-"""Cluster Admin API request host."""
-INSTANCE_ADMIN_PORT_V2 = 443
-"""Cluster Admin API request port."""
-
-DATA_STUB_FACTORY_V2 = data_v2_pb2.beta_create_Bigtable_stub
-DATA_API_HOST_V2 = 'bigtable.googleapis.com'
-"""Data API request host."""
-DATA_API_PORT_V2 = 443
-"""Data API request port."""
-
-OPERATIONS_STUB_FACTORY_V2 = operations_grpc_v2_pb2.beta_create_Operations_stub
-OPERATIONS_API_HOST_V2 = INSTANCE_ADMIN_HOST_V2
-OPERATIONS_API_PORT_V2 = INSTANCE_ADMIN_PORT_V2
-
-ADMIN_SCOPE = 'https://www.googleapis.com/auth/bigtable.admin'
-"""Scope for interacting with the Cluster Admin and Table Admin APIs."""
-DATA_SCOPE = 'https://www.googleapis.com/auth/bigtable.data'
-"""Scope for reading and writing table data."""
-READ_ONLY_SCOPE = 'https://www.googleapis.com/auth/bigtable.data.readonly'
-"""Scope for reading table data."""
-
-DEFAULT_TIMEOUT_SECONDS = 10
-"""The default timeout to use for API requests."""
-
-DEFAULT_USER_AGENT = 'gcloud-python/{0}'.format(
-    get_distribution('gcloud').version)
-"""The default user agent for API requests."""
-
-
-
-class Client(_ClientFactoryMixin, _ClientProjectMixin):
- """Client for interacting with Google Cloud Bigtable API.
-
- .. note::
-
- Since the Cloud Bigtable API requires the gRPC transport, no
- ``http`` argument is accepted by this class.
-
- :type project: :class:`str` or :func:`unicode <unicode>`
- :param project: (Optional) The ID of the project which owns the
- instances, tables and data. If not provided, will
- attempt to determine from the environment.
-
- :type credentials:
- :class:`OAuth2Credentials <oauth2client.client.OAuth2Credentials>` or
- :data:`NoneType <types.NoneType>`
- :param credentials: (Optional) The OAuth2 Credentials to use for this
- client. If not provided, defaults to the Google
- Application Default Credentials.
-
- :type read_only: bool
- :param read_only: (Optional) Boolean indicating if the data scope should be
- for reading only (or for writing as well). Defaults to
- :data:`False`.
-
- :type admin: bool
- :param admin: (Optional) Boolean indicating if the client will be used to
- interact with the Instance Admin or Table Admin APIs. This
- requires the :const:`ADMIN_SCOPE`. Defaults to :data:`False`.
-
- :type user_agent: str
- :param user_agent: (Optional) The user agent to be used with API request.
- Defaults to :const:`DEFAULT_USER_AGENT`.
-
- :type timeout_seconds: int
- :param timeout_seconds: Number of seconds for request time-out. If not
- passed, defaults to
- :const:`DEFAULT_TIMEOUT_SECONDS`.
-
- :raises: :class:`ValueError <exceptions.ValueError>` if both ``read_only``
- and ``admin`` are :data:`True`
- """
-
-    def __init__(self, project=None, credentials=None,
-                 read_only=False, admin=False, user_agent=DEFAULT_USER_AGENT,
-                 timeout_seconds=DEFAULT_TIMEOUT_SECONDS):
-        _ClientProjectMixin.__init__(self, project=project)
-        if credentials is None:
-            credentials = get_credentials()
-
-        if read_only and admin:
-            raise ValueError('A read-only client cannot also perform '
-                             'administrative actions.')
-
-        scopes = []
-        if read_only:
-            scopes.append(READ_ONLY_SCOPE)
-        else:
-            scopes.append(DATA_SCOPE)
-
-        if admin:
-            scopes.append(ADMIN_SCOPE)
-
-        self._admin = bool(admin)
-        try:
-            credentials = credentials.create_scoped(scopes)
-        except AttributeError:
-            pass
-        self._credentials = credentials
-        self.user_agent = user_agent
-        self.timeout_seconds = timeout_seconds
-
-        # These will be set in start().
-        self._data_stub_internal = None
-        self._instance_stub_internal = None
-        self._operations_stub_internal = None
-        self._table_stub_internal = None
-
-
-    def copy(self):
-        """Make a copy of this client.
-
-        Copies the local data stored as simple types but does not copy the
-        current state of any open connections with the Cloud Bigtable API.
-
-        :rtype: :class:`.Client`
-        :returns: A copy of the current client.
-        """
-        credentials = self._credentials
-        copied_creds = credentials.create_scoped(credentials.scopes)
-        return self.__class__(
-            self.project,
-            copied_creds,
-            READ_ONLY_SCOPE in copied_creds.scopes,
-            self._admin,
-            self.user_agent,
-            self.timeout_seconds,
-        )
-
-    @property
-    def credentials(self):
-        """Getter for client's credentials.
-
-        :rtype:
-            :class:`OAuth2Credentials <oauth2client.client.OAuth2Credentials>`
-        :returns: The credentials stored on the client.
-        """
-        return self._credentials
-
-    @property
-    def project_name(self):
-        """Project name to be used with Instance Admin API.
-
-        .. note::
-
-            This property will not change if ``project`` does not, but the
-            return value is not cached.
-
-        The project name is of the form
-
-            ``"projects/{project}"``
-
-        :rtype: str
-        :returns: The project name to be used with the Cloud Bigtable Admin
-                  API RPC service.
-        """
-        return 'projects/' + self.project
-
-    @property
-    def _data_stub(self):
-        """Getter for the gRPC stub used for the Data API.
-
-        :rtype: :class:`grpc.beta._stub._AutoIntermediary`
-        :returns: A gRPC stub object.
-        :raises: :class:`ValueError <exceptions.ValueError>` if the current
-                 client has not been :meth:`start`-ed.
-        """
-        if self._data_stub_internal is None:
-            raise ValueError('Client has not been started.')
-        return self._data_stub_internal
-
-    @property
-    def _instance_stub(self):
-        """Getter for the gRPC stub used for the Instance Admin API.
-
-        :rtype: :class:`grpc.beta._stub._AutoIntermediary`
-        :returns: A gRPC stub object.
-        :raises: :class:`ValueError <exceptions.ValueError>` if the current
-                 client is not an admin client or if it has not been
-                 :meth:`start`-ed.
-        """
-        if not self._admin:
-            raise ValueError('Client is not an admin client.')
-        if self._instance_stub_internal is None:
-            raise ValueError('Client has not been started.')
-        return self._instance_stub_internal
-
-    @property
-    def _operations_stub(self):
-        """Getter for the gRPC stub used for the Operations API.
-
-        :rtype: :class:`grpc.beta._stub._AutoIntermediary`
-        :returns: A gRPC stub object.
-        :raises: :class:`ValueError <exceptions.ValueError>` if the current
-                 client is not an admin client or if it has not been
-                 :meth:`start`-ed.
-        """
-        if not self._admin:
-            raise ValueError('Client is not an admin client.')
-        if self._operations_stub_internal is None:
-            raise ValueError('Client has not been started.')
-        return self._operations_stub_internal
-
-    @property
-    def _table_stub(self):
-        """Getter for the gRPC stub used for the Table Admin API.
-
-        :rtype: :class:`grpc.beta._stub._AutoIntermediary`
-        :returns: A gRPC stub object.
-        :raises: :class:`ValueError <exceptions.ValueError>` if the current
-                 client is not an admin client or if it has not been
-                 :meth:`start`-ed.
-        """
-        if not self._admin:
-            raise ValueError('Client is not an admin client.')
-        if self._table_stub_internal is None:
-            raise ValueError('Client has not been started.')
-        return self._table_stub_internal
-
-    def _make_data_stub(self):
-        """Creates gRPC stub to make requests to the Data API.
-
-        :rtype: :class:`grpc.beta._stub._AutoIntermediary`
-        :returns: A gRPC stub object.
-        """
-        return _make_stub(self, DATA_STUB_FACTORY_V2,
-                          DATA_API_HOST_V2, DATA_API_PORT_V2)
-
-    def _make_instance_stub(self):
-        """Creates gRPC stub to make requests to the Instance Admin API.
-
-        :rtype: :class:`grpc.beta._stub._AutoIntermediary`
-        :returns: A gRPC stub object.
-        """
-        return _make_stub(self, INSTANCE_STUB_FACTORY_V2,
-                          INSTANCE_ADMIN_HOST_V2, INSTANCE_ADMIN_PORT_V2)
-
-    def _make_operations_stub(self):
-        """Creates gRPC stub to make requests to the Operations API.
-
-        These are for long-running operations of the Instance Admin API,
-        hence the host and port matching.
-
-        :rtype: :class:`grpc.beta._stub._AutoIntermediary`
-        :returns: A gRPC stub object.
-        """
-        return _make_stub(self, OPERATIONS_STUB_FACTORY_V2,
-                          OPERATIONS_API_HOST_V2, OPERATIONS_API_PORT_V2)
-
-    def _make_table_stub(self):
-        """Creates gRPC stub to make requests to the Table Admin API.
-
-        :rtype: :class:`grpc.beta._stub._AutoIntermediary`
-        :returns: A gRPC stub object.
-        """
-        return _make_stub(self, TABLE_STUB_FACTORY_V2,
-                          TABLE_ADMIN_HOST_V2, TABLE_ADMIN_PORT_V2)
-
-
-    def is_started(self):
-        """Check if the client has been started.
-
-        :rtype: bool
-        :returns: Boolean indicating if the client has been started.
-        """
-        return self._data_stub_internal is not None
-
-
-    def start(self):
-        """Prepare the client to make requests.
-
-        Activates gRPC contexts for making requests to the Bigtable
-        Service(s).
-        """
-        if self.is_started():
-            return
-
-        # NOTE: We __enter__ the stubs more-or-less permanently. This is
-        #       because only after entering the context managers is the
-        #       connection created. We don't want to immediately close
-        #       those connections since the client will make many
-        #       requests with it over HTTP/2.
-        self._data_stub_internal = self._make_data_stub()
-        self._data_stub_internal.__enter__()
-        if self._admin:
-            self._instance_stub_internal = self._make_instance_stub()
-            self._operations_stub_internal = self._make_operations_stub()
-            self._table_stub_internal = self._make_table_stub()
-
-            self._instance_stub_internal.__enter__()
-            self._operations_stub_internal.__enter__()
-            self._table_stub_internal.__enter__()
-
-    def __enter__(self):
-        """Starts the client as a context manager."""
-        self.start()
-        return self
-
-
-    def stop(self):
-        """Closes all the open gRPC clients."""
-        if not self.is_started():
-            return
-
-        # When exit-ing, we pass None as the exception type, value and
-        # traceback to __exit__.
-        self._data_stub_internal.__exit__(None, None, None)
-        if self._admin:
-            self._instance_stub_internal.__exit__(None, None, None)
-            self._operations_stub_internal.__exit__(None, None, None)
-            self._table_stub_internal.__exit__(None, None, None)
-
-        self._data_stub_internal = None
-        self._instance_stub_internal = None
-        self._operations_stub_internal = None
-        self._table_stub_internal = None
-
-    def __exit__(self, exc_type, exc_val, exc_t):
-        """Stops the client as a context manager."""
-        self.stop()
-
-
-    def instance(self, instance_id, location=_EXISTING_INSTANCE_LOCATION_ID,
-                 display_name=None, serve_nodes=DEFAULT_SERVE_NODES):
-        """Factory to create an instance associated with this client.
-
- :type instance_id: str
- :param instance_id: The ID of the instance.
-
- :type location: string
- :param location: location name, in form
- ``projects/<project>/locations/<location>``; used to
- set up the instance's cluster.
-
- :type display_name: str
- :param display_name: (Optional) The display name for the instance in
- the Cloud Console UI. (Must be between 4 and 30
- characters.) If this value is not set in the
- constructor, will fall back to the instance ID.
-
- :type serve_nodes: int
- :param serve_nodes: (Optional) The number of nodes in the instance's
- cluster; used to set up the instance's cluster.
-
- :rtype: :class:`.Instance`
- :returns: an instance owned by this client.
- """
-        return Instance(instance_id, self, location,
-                        display_name=display_name, serve_nodes=serve_nodes)
-
-
-    def list_instances(self):
- """List instances owned by the project.
-
- :rtype: tuple
- :returns: A pair of results, the first is a list of
- :class:`.Instance` objects returned and the second is a
- list of strings (the failed locations in the request).
- """
-        request_pb = instance_admin_v2_pb2.ListInstancesRequest(
-            parent=self.project_name)
-
-        response = self._instance_stub.ListInstances(
-            request_pb, self.timeout_seconds)
-
-        instances = [Instance.from_pb(instance_pb, self)
-                     for instance_pb in response.instances]
-        return instances, response.failed_locations
-
-
-class _MetadataPlugin(object):
- """Callable class to transform metadata for gRPC requests.
-
- :type client: :class:`.client.Client`
- :param client: The client that owns the instance.
- Provides authorization and user agent.
- """
-
-    def __init__(self, client):
-        self._credentials = client.credentials
-        self._user_agent = client.user_agent
-
-    def __call__(self, unused_context, callback):
-        """Adds authorization header to request metadata."""
-        access_token = self._credentials.get_access_token().access_token
-        headers = [
-            ('Authorization', 'Bearer ' + access_token),
-            ('User-agent', self._user_agent),
-        ]
-        callback(headers, None)
-
-
-def _make_stub(client, stub_factory, host, port):
- """Makes a stub for an RPC service.
-
- Uses / depends on the beta implementation of gRPC.
-
- :type client: :class:`.client.Client`
- :param client: The client that owns the instance.
- Provides authorization and user agent.
-
- :type stub_factory: callable
- :param stub_factory: A factory which will create a gRPC stub for
- a given service.
-
- :type host: str
- :param host: The host for the service.
-
- :type port: int
- :param port: The port for the service.
-
- :rtype: :class:`grpc.beta._stub._AutoIntermediary`
- :returns: The stub object used to make gRPC requests to a given API.
- """
-    # Leaving the first argument to ssl_channel_credentials() as None
-    # loads root certificates from `grpc/_adapter/credentials/roots.pem`.
-    transport_creds = implementations.ssl_channel_credentials(None, None, None)
-    custom_metadata_plugin = _MetadataPlugin(client)
-    auth_creds = implementations.metadata_call_credentials(
-        custom_metadata_plugin, name='google_creds')
-    channel_creds = implementations.composite_channel_credentials(
-        transport_creds, auth_creds)
-    channel = implementations.secure_channel(host, port, channel_creds)
-    return stub_factory(channel)
-
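A short usage sketch for the client above, assuming credentials and a project can be inferred from the environment (the instance ID is hypothetical):

    # Sketch: Client lifecycle via the context manager defined above.
    from gcloud.bigtable.client import Client

    client = Client(admin=True)            # admin=True adds ADMIN_SCOPE
    with client:                           # __enter__() calls start()
        instances, failed = client.list_instances()
        instance = client.instance('my-instance')   # hypothetical ID
    # Leaving the block calls stop(), closing the gRPC channels.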
Source code for gcloud.bigtable.happybase.batch
-# Copyright 2016 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Google Cloud Bigtable HappyBase batch module."""
-
-
-import datetime
-import warnings
-
-import six
-
-from gcloud._helpers import _datetime_from_microseconds
-from gcloud.bigtable.row_filters import TimestampRange
-
-
-_WAL_SENTINEL = object()
-# Assumed granularity of timestamps in Cloud Bigtable.
-_ONE_MILLISECOND = datetime.timedelta(microseconds=1000)
-_WARN = warnings.warn
-_WAL_WARNING = ('The wal argument (Write-Ahead-Log) is not '
-                'supported by Cloud Bigtable.')
-
-
-
-class Batch(object):
- """Batch class for accumulating mutations.
-
- .. note::
-
- When using a batch with ``transaction=False`` as a context manager
- (i.e. in a ``with`` statement), mutations will still be sent as
- row mutations even if the context manager exits with an error.
- This behavior is in place to match the behavior in the HappyBase
- HBase / Thrift implementation.
-
- :type table: :class:`Table <gcloud.bigtable.happybase.table.Table>`
- :param table: The table where mutations will be applied.
-
- :type timestamp: int
- :param timestamp: (Optional) Timestamp (in milliseconds since the epoch)
- that all mutations will be applied at.
-
- :type batch_size: int
- :param batch_size: (Optional) The maximum number of mutations to allow
- to accumulate before committing them.
-
- :type transaction: bool
- :param transaction: Flag indicating if the mutations should be sent
- transactionally or not. If ``transaction=True`` and
- an error occurs while a :class:`Batch` is active,
- then none of the accumulated mutations will be
- committed. If ``batch_size`` is set, the mutation
- can't be transactional.
-
- :type wal: object
- :param wal: Unused parameter (Boolean for using the HBase Write Ahead Log).
- Provided for compatibility with HappyBase, but irrelevant for
- Cloud Bigtable since it does not have a Write Ahead Log.
-
- :raises: :class:`TypeError <exceptions.TypeError>` if ``batch_size``
- is set and ``transaction=True``.
- :class:`ValueError <exceptions.ValueError>` if ``batch_size``
- is not positive.
- """
-
-    def __init__(self, table, timestamp=None, batch_size=None,
-                 transaction=False, wal=_WAL_SENTINEL):
-        if wal is not _WAL_SENTINEL:
-            _WARN(_WAL_WARNING)
-
-        if batch_size is not None:
-            if transaction:
-                raise TypeError('When batch_size is set, a Batch cannot be '
-                                'transactional')
-            if batch_size <= 0:
-                raise ValueError('batch_size must be positive')
-
-        self._table = table
-        self._batch_size = batch_size
-        self._timestamp = self._delete_range = None
-
-        # Timestamp is in milliseconds, convert to microseconds.
-        if timestamp is not None:
-            self._timestamp = _datetime_from_microseconds(1000 * timestamp)
-            # For deletes, we get the very next timestamp (assuming timestamp
-            # granularity is milliseconds). This is because HappyBase users
-            # expect HBase deletes to go **up to** and **including** the
-            # timestamp while Cloud Bigtable Time Ranges **exclude** the
-            # final timestamp.
-            next_timestamp = self._timestamp + _ONE_MILLISECOND
-            self._delete_range = TimestampRange(end=next_timestamp)
-
-        self._transaction = transaction
-
-        # Internal state for tracking mutations.
-        self._row_map = {}
-        self._mutation_count = 0
-
-
-    def send(self):
-        """Send / commit the batch of mutations to the server."""
-        for row in self._row_map.values():
-            # commit() does nothing if row hasn't accumulated any mutations.
-            row.commit()
-
-        self._row_map.clear()
-        self._mutation_count = 0
-
-    def _try_send(self):
-        """Send / commit the batch if mutations have exceeded batch size."""
-        if self._batch_size and self._mutation_count >= self._batch_size:
-            self.send()
-
-    def _get_row(self, row_key):
- """Gets a row that will hold mutations.
-
- If the row is not already cached on the current batch, a new row will
- be created.
-
- :type row_key: str
- :param row_key: The row key for a row stored in the map.
-
- :rtype: :class:`Row <gcloud.bigtable.row.Row>`
- :returns: The newly created or stored row that will hold mutations.
- """
-        if row_key not in self._row_map:
-            table = self._table._low_level_table
-            self._row_map[row_key] = table.row(row_key)
-
-        return self._row_map[row_key]
-
-
-    def put(self, row, data, wal=_WAL_SENTINEL):
- """Insert data into a row in the table owned by this batch.
-
- :type row: str
- :param row: The row key where the mutation will be "put".
-
- :type data: dict
- :param data: Dictionary containing the data to be inserted. The keys
- are columns names (of the form ``fam:col``) and the values
- are strings (bytes) to be stored in those columns.
-
- :type wal: object
- :param wal: Unused parameter (to over-ride the default on the
- instance). Provided for compatibility with HappyBase, but
- irrelevant for Cloud Bigtable since it does not have a
- Write Ahead Log.
- """
-        if wal is not _WAL_SENTINEL:
-            _WARN(_WAL_WARNING)
-
-        row_object = self._get_row(row)
-        # Make sure all the keys are valid before beginning
-        # to add mutations.
-        column_pairs = _get_column_pairs(six.iterkeys(data),
-                                         require_qualifier=True)
-        for column_family_id, column_qualifier in column_pairs:
-            value = data[column_family_id + ':' + column_qualifier]
-            row_object.set_cell(column_family_id, column_qualifier,
-                                value, timestamp=self._timestamp)
-
-        self._mutation_count += len(data)
-        self._try_send()
-
-    def _delete_columns(self, columns, row_object):
- """Adds delete mutations for a list of columns and column families.
-
- :type columns: list
- :param columns: Iterable containing column names (as
- strings). Each column name can be either
-
- * an entire column family: ``fam`` or ``fam:``
- * a single column: ``fam:col``
-
- :type row_object: :class:`Row <gcloud_bigtable.row.Row>`
- :param row_object: The row which will hold the delete mutations.
-
- :raises: :class:`ValueError <exceptions.ValueError>` if the delete
- timestamp range is set on the current batch, but a
- column family delete is attempted.
- """
-        column_pairs = _get_column_pairs(columns)
-        for column_family_id, column_qualifier in column_pairs:
-            if column_qualifier is None:
-                if self._delete_range is not None:
-                    raise ValueError('The Cloud Bigtable API does not support '
-                                     'adding a timestamp to '
-                                     '"DeleteFromFamily" ')
-                row_object.delete_cells(column_family_id,
-                                        columns=row_object.ALL_COLUMNS)
-            else:
-                row_object.delete_cell(column_family_id,
-                                       column_qualifier,
-                                       time_range=self._delete_range)
-
-
-    def delete(self, row, columns=None, wal=_WAL_SENTINEL):
- """Delete data from a row in the table owned by this batch.
-
- :type row: str
- :param row: The row key where the delete will occur.
-
- :type columns: list
- :param columns: (Optional) Iterable containing column names (as
- strings). Each column name can be either
-
- * an entire column family: ``fam`` or ``fam:``
- * a single column: ``fam:col``
-
- If not used, will delete the entire row.
-
- :type wal: object
- :param wal: Unused parameter (to over-ride the default on the
- instance). Provided for compatibility with HappyBase, but
- irrelevant for Cloud Bigtable since it does not have a
- Write Ahead Log.
-
- :raises: If the delete timestamp range is set on the
- current batch, but a full row delete is attempted.
- """
-        if wal is not _WAL_SENTINEL:
-            _WARN(_WAL_WARNING)
-
-        row_object = self._get_row(row)
-
-        if columns is None:
-            # Delete entire row.
-            if self._delete_range is not None:
-                raise ValueError('The Cloud Bigtable API does not support '
-                                 'adding a timestamp to "DeleteFromRow" '
-                                 'mutations')
-            row_object.delete()
-            self._mutation_count += 1
-        else:
-            self._delete_columns(columns, row_object)
-            self._mutation_count += len(columns)
-
-        self._try_send()
-
-    def __enter__(self):
-        """Enter context manager, no set-up required."""
-        return self
-
-    def __exit__(self, exc_type, exc_value, traceback):
- """Exit context manager, no set-up required.
-
- :type exc_type: type
- :param exc_type: The type of the exception if one occurred while the
- context manager was active. Otherwise, :data:`None`.
-
- :type exc_value: :class:`Exception <exceptions.Exception>`
- :param exc_value: An instance of ``exc_type`` if an exception occurred
- while the context was active.
- Otherwise, :data:`None`.
-
- :type traceback: ``traceback`` type
- :param traceback: The traceback where the exception occurred (if one
- did occur). Otherwise, :data:`None`.
- """
- # If the context manager encountered an exception and the batch is
- # transactional, we don't commit the mutations.
-        if self._transaction and exc_type is not None:
- return
-
- # NOTE: For non-transactional batches, this will even commit mutations
- # if an error occurred during the context manager.
- self.send()
-
-
-def _get_column_pairs(columns, require_qualifier=False):
- """Turns a list of column or column families into parsed pairs.
-
- Turns a column family (``fam`` or ``fam:``) into a pair such
- as ``['fam', None]`` and turns a column (``fam:col``) into
- ``['fam', 'col']``.
-
- :type columns: list
- :param columns: Iterable containing column names (as
- strings). Each column name can be either
-
- * an entire column family: ``fam`` or ``fam:``
- * a single column: ``fam:col``
-
- :type require_qualifier: bool
- :param require_qualifier: Boolean indicating if the columns should
- all have a qualifier or not.
-
- :rtype: list
- :returns: List of pairs, where the first element in each pair is the
- column family and the second is the column qualifier
- (or :data:`None`).
- :raises: :class:`ValueError <exceptions.ValueError>` if any of the columns
- are not of the expected format.
- :class:`ValueError <exceptions.ValueError>` if
- ``require_qualifier`` is :data:`True` and one of the values is
- for an entire column family
- """
-    column_pairs = []
-    for column in columns:
-        if isinstance(column, six.binary_type):
-            column = column.decode('utf-8')
-        # Remove trailing colons (i.e. for standalone column family).
-        if column.endswith(u':'):
-            column = column[:-1]
-        num_colons = column.count(u':')
-        if num_colons == 0:
-            # column is a column family.
-            if require_qualifier:
-                raise ValueError('column does not contain a qualifier',
-                                 column)
-            else:
-                column_pairs.append([column, None])
-        elif num_colons == 1:
-            column_pairs.append(column.split(u':'))
-        else:
-            raise ValueError('Column contains the : separator more than once')
-
-    return column_pairs
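A worked example of the parsing above:

    # Sketch: how _get_column_pairs splits families and qualifiers.
    _get_column_pairs(['fam1', 'fam2:', 'fam3:col'])
    # -> [['fam1', None], ['fam2', None], ['fam3', 'col']]
    _get_column_pairs(['fam1'], require_qualifier=True)
    # -> raises ValueError: column does not contain a qualifier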
-
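A usage sketch for the batch module above, assuming ``table`` is a HappyBase-style table obtained from a Connection (row keys and column names are hypothetical):

    # Sketch: accumulating mutations in a transactional batch.
    from gcloud.bigtable.happybase.batch import Batch

    with Batch(table, transaction=True) as batch:
        batch.put('row-1', {'cf1:col1': 'value1', 'cf1:col2': 'value2'})
        batch.delete('row-2', columns=['cf1', 'cf2:col'])
    # A clean exit commits all mutations via send(); with transaction=True,
    # an exception inside the block discards them instead.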
Source code for gcloud.bigtable.happybase.connection
-# Copyright 2016 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Google Cloud Bigtable HappyBase connection module."""
-
-
-import datetime
-import warnings
-
-import six
-
-from grpc.beta import interfaces
-from grpc.framework.interfaces.face import face
-
-try:
-    from happybase.hbase.ttypes import AlreadyExists
-except ImportError:
-    from gcloud.exceptions import Conflict as AlreadyExists
-
-from gcloud.bigtable.client import Client
-from gcloud.bigtable.column_family import GCRuleIntersection
-from gcloud.bigtable.column_family import MaxAgeGCRule
-from gcloud.bigtable.column_family import MaxVersionsGCRule
-from gcloud.bigtable.happybase.table import Table
-from gcloud.bigtable.table import Table as _LowLevelTable
-
-
-# Constants reproduced here for HappyBase compatibility, though values
-# are all null.
-COMPAT_MODES = None
-THRIFT_TRANSPORTS = None
-THRIFT_PROTOCOLS = None
-DEFAULT_HOST = None
-DEFAULT_PORT = None
-DEFAULT_TRANSPORT = None
-DEFAULT_COMPAT = None
-DEFAULT_PROTOCOL = None
-
-_LEGACY_ARGS = frozenset(('host', 'port', 'compat', 'transport', 'protocol'))
-_WARN = warnings.warn
-_BASE_DISABLE = 'Cloud Bigtable has no concept of enabled / disabled tables.'
-_DISABLE_DELETE_MSG = ('The disable argument should not be used in '
-                       'delete_table(). ') + _BASE_DISABLE
-_ENABLE_TMPL = 'Connection.enable_table(%r) was called, but ' + _BASE_DISABLE
-_DISABLE_TMPL = 'Connection.disable_table(%r) was called, but ' + _BASE_DISABLE
-_IS_ENABLED_TMPL = ('Connection.is_table_enabled(%r) was called, but ' +
-                    _BASE_DISABLE)
-_COMPACT_TMPL = ('Connection.compact_table(%r, major=%r) was called, but the '
-                 'Cloud Bigtable API handles table compactions automatically '
-                 'and does not expose an API for it.')
-
-
-def _get_instance(timeout=None):
- """Gets instance for the default project.
-
- Creates a client with the inferred credentials and project ID from
- the local environment. Then uses
- :meth:`.bigtable.client.Client.list_instances` to
- get the unique instance owned by the project.
-
- If the request fails for any reason, or if there isn't exactly one instance
- owned by the project, then this function will fail.
-
- :type timeout: int
- :param timeout: (Optional) The socket timeout in milliseconds.
-
- :rtype: :class:`gcloud.bigtable.instance.Instance`
- :returns: The unique instance owned by the project inferred from
- the environment.
- :raises ValueError: if there is a failed location or any number of
- instances other than one.
- """
-    client_kwargs = {'admin': True}
-    if timeout is not None:
-        client_kwargs['timeout_seconds'] = timeout / 1000.0
-    client = Client(**client_kwargs)
-    try:
-        client.start()
-        instances, failed_locations = client.list_instances()
-    finally:
-        client.stop()
-
-    if len(failed_locations) != 0:
-        raise ValueError('Determining instance via ListInstances encountered '
-                         'failed locations.')
-    if len(instances) == 0:
-        raise ValueError('This client doesn\'t have access to any instances.')
-    if len(instances) > 1:
-        raise ValueError('This client has access to more than one instance. '
-                         'Please directly pass the instance you\'d '
-                         'like to use.')
-    return instances[0]
-
-
-
-class Connection(object):
- """Connection to Cloud Bigtable backend.
-
- .. note::
-
- If you pass an ``instance``, it will be :meth:`.Instance.copy`-ed before
- being stored on the new connection. This also copies the
- :class:`Client <gcloud.bigtable.client.Client>` that created the
- :class:`Instance <gcloud.bigtable.instance.Instance>` instance and the
- :class:`Credentials <oauth2client.client.Credentials>` stored on the
- client.
-
- The arguments ``host``, ``port``, ``compat``, ``transport`` and
- ``protocol`` are allowed (as keyword arguments) for compatibility with
- HappyBase. However, they will not be used in any way, and will cause a
- warning if passed.
-
- :type timeout: int
- :param timeout: (Optional) The socket timeout in milliseconds.
-
- :type autoconnect: bool
- :param autoconnect: (Optional) Whether the connection should be
- :meth:`open`-ed during construction.
-
- :type table_prefix: str
- :param table_prefix: (Optional) Prefix used to construct table names.
-
- :type table_prefix_separator: str
- :param table_prefix_separator: (Optional) Separator used with
- ``table_prefix``. Defaults to ``_``.
-
- :type instance: :class:`Instance <gcloud.bigtable.instance.Instance>`
- :param instance: (Optional) A Cloud Bigtable instance. The instance also
- owns a client for making gRPC requests to the Cloud
- Bigtable API. If not passed in, defaults to creating client
- with ``admin=True`` and using the ``timeout`` here for the
- ``timeout_seconds`` argument to the
- :class:`Client <gcloud.bigtable.client.Client>`
- constructor. The credentials for the client
- will be the implicit ones loaded from the environment.
- Then that client is used to retrieve all the instances
- owned by the client's project.
-
- :type kwargs: dict
- :param kwargs: Remaining keyword arguments. Provided for HappyBase
- compatibility.
- """
-
-    _instance = None
-
-    def __init__(self, timeout=None, autoconnect=True, table_prefix=None,
-                 table_prefix_separator='_', instance=None, **kwargs):
-        self._handle_legacy_args(kwargs)
-        if table_prefix is not None:
-            if not isinstance(table_prefix, six.string_types):
-                raise TypeError('table_prefix must be a string', 'received',
-                                table_prefix, type(table_prefix))
-
-        if not isinstance(table_prefix_separator, six.string_types):
-            raise TypeError('table_prefix_separator must be a string',
-                            'received', table_prefix_separator,
-                            type(table_prefix_separator))
-
-        self.table_prefix = table_prefix
-        self.table_prefix_separator = table_prefix_separator
-
-        if instance is None:
-            self._instance = _get_instance(timeout=timeout)
-        else:
-            if timeout is not None:
-                raise ValueError('Timeout cannot be used when an existing '
-                                 'instance is passed')
-            self._instance = instance.copy()
-
-        if autoconnect:
-            self.open()
-
-        self._initialized = True
-
-    @staticmethod
-    def _handle_legacy_args(arguments_dict):
- """Check legacy HappyBase arguments and warn if set.
-
- :type arguments_dict: dict
- :param arguments_dict: Unused keyword arguments.
-
- :raises TypeError: if a keyword other than ``host``, ``port``,
- ``compat``, ``transport`` or ``protocol`` is used.
- """
-        common_args = _LEGACY_ARGS.intersection(six.iterkeys(arguments_dict))
-        if common_args:
-            all_args = ', '.join(common_args)
-            message = ('The HappyBase legacy arguments %s were used. These '
-                       'arguments are unused by gcloud.' % (all_args,))
-            _WARN(message)
-        for arg_name in common_args:
-            arguments_dict.pop(arg_name)
-        if arguments_dict:
-            unexpected_names = arguments_dict.keys()
-            raise TypeError('Received unexpected arguments', unexpected_names)
-
-
-    def open(self):
- """Open the underlying transport to Cloud Bigtable.
-
- This method opens the underlying HTTP/2 gRPC connection using a
- :class:`Client <gcloud.bigtable.client.Client>` bound to the
- :class:`Instance <gcloud.bigtable.instance.Instance>` owned by
- this connection.
- """
-        self._instance._client.start()
-
-
-    def close(self):
- """Close the underlying transport to Cloud Bigtable.
-
- This method closes the underlying HTTP/2 gRPC connection using a
- :class:`Client <gcloud.bigtable.client.Client>` bound to the
- :class:`Instance <gcloud.bigtable.instance.Instance>` owned by
- this connection.
- """
-        self._instance._client.stop()
-
-    def __del__(self):
-        if self._instance is not None:
-            self.close()
-
-    def _table_name(self, name):
- """Construct a table name by optionally adding a table name prefix.
-
- :type name: str
- :param name: The name to have a prefix added to it.
-
- :rtype: str
- :returns: The prefixed name, if the current connection has a table
- prefix set.
- """
-        if self.table_prefix is None:
-            return name
-
-        return self.table_prefix + self.table_prefix_separator + name
-
-
-    def table(self, name, use_prefix=True):
- """Table factory.
-
- :type name: str
- :param name: The name of the table to be created.
-
- :type use_prefix: bool
- :param use_prefix: Whether to use the table prefix (if any).
-
- :rtype: :class:`Table <gcloud.bigtable.happybase.table.Table>`
- :returns: Table instance owned by this connection.
- """
-        if use_prefix:
-            name = self._table_name(name)
-        return Table(name, self)
-
-
-    def tables(self):
- """Return a list of table names available to this connection.
-
- .. note::
-
- This lists every table in the instance owned by this connection,
- **not** every table that a given user may have access to.
-
- .. note::
-
- If ``table_prefix`` is set on this connection, only returns the
- table names which match that prefix.
-
- :rtype: list
- :returns: List of string table names.
- """
-        low_level_table_instances = self._instance.list_tables()
-        table_names = [table_instance.table_id
-                       for table_instance in low_level_table_instances]
-
-        # Filter using prefix, and strip prefix from names
-        if self.table_prefix is not None:
-            prefix = self._table_name('')
-            offset = len(prefix)
-            table_names = [name[offset:] for name in table_names
-                           if name.startswith(prefix)]
-
-        return table_names
-
-
-    def create_table(self, name, families):
- """Create a table.
-
- .. warning::
-
- The only column family options from HappyBase that are able to be
- used with Cloud Bigtable are ``max_versions`` and ``time_to_live``.
-
- Values in ``families`` represent column family options. In HappyBase,
- these are dictionaries, corresponding to the ``ColumnDescriptor``
- structure in the Thrift API. The accepted keys are:
-
- * ``max_versions`` (``int``)
- * ``compression`` (``str``)
- * ``in_memory`` (``bool``)
- * ``bloom_filter_type`` (``str``)
- * ``bloom_filter_vector_size`` (``int``)
- * ``bloom_filter_nb_hashes`` (``int``)
- * ``block_cache_enabled`` (``bool``)
- * ``time_to_live`` (``int``)
-
- :type name: str
- :param name: The name of the table to be created.
-
- :type families: dict
- :param families: Dictionary with column family names as keys and column
- family options as the values. The options can be among
-
- * :class:`dict`
- * :class:`.GarbageCollectionRule`
-
- :raises TypeError: If ``families`` is not a dictionary.
- :raises ValueError: If ``families`` has no entries.
- :raises AlreadyExists: If creation fails due to an already
- existing table.
- :raises NetworkError: If creation fails for a reason other than
- table exists.
- """
-        if not isinstance(families, dict):
-            raise TypeError('families arg must be a dictionary')
-
-        if not families:
-            raise ValueError('Cannot create table %r (no column '
-                             'families specified)' % (name,))
-
-        # Parse all keys before making any API requests.
-        gc_rule_dict = {}
-        for column_family_name, option in families.items():
-            if isinstance(column_family_name, six.binary_type):
-                column_family_name = column_family_name.decode('utf-8')
-            if column_family_name.endswith(':'):
-                column_family_name = column_family_name[:-1]
-            gc_rule_dict[column_family_name] = _parse_family_option(option)
-
-        # Create table instance and then make API calls.
-        name = self._table_name(name)
-        low_level_table = _LowLevelTable(name, self._instance)
-        column_families = (
-            low_level_table.column_family(column_family_name, gc_rule=gc_rule)
-            for column_family_name, gc_rule in six.iteritems(gc_rule_dict)
-        )
-        try:
-            low_level_table.create(column_families=column_families)
-        except face.NetworkError as network_err:
-            if network_err.code == interfaces.StatusCode.ALREADY_EXISTS:
-                raise AlreadyExists(name)
-            else:
-                raise
-
-
-    def delete_table(self, name, disable=False):
- """Delete the specified table.
-
- :type name: str
- :param name: The name of the table to be deleted. If ``table_prefix``
- is set, a prefix will be added to the ``name``.
-
- :type disable: bool
- :param disable: Whether to first disable the table if needed. This
- is provided for compatibility with HappyBase, but is
- not relevant for Cloud Bigtable since it has no concept
- of enabled / disabled tables.
- """
-        if disable:
-            _WARN(_DISABLE_DELETE_MSG)
-
-        name = self._table_name(name)
-        _LowLevelTable(name, self._instance).delete()
-
-    @staticmethod
-    def enable_table(name):
- """Enable the specified table.
-
- .. warning::
-
- Cloud Bigtable has no concept of enabled / disabled tables so this
- method does nothing. It is provided simply for compatibility.
-
- :type name: str
- :param name: The name of the table to be enabled.
- """
-        _WARN(_ENABLE_TMPL % (name,))
-
-    @staticmethod
-    def disable_table(name):
- """Disable the specified table.
-
- .. warning::
-
- Cloud Bigtable has no concept of enabled / disabled tables so this
- method does nothing. It is provided simply for compatibility.
-
- :type name: str
- :param name: The name of the table to be disabled.
- """
-        _WARN(_DISABLE_TMPL % (name,))
-
-    @staticmethod
-    def is_table_enabled(name):
- """Return whether the specified table is enabled.
-
- .. warning::
-
- Cloud Bigtable has no concept of enabled / disabled tables so this
- method always returns :data:`True`. It is provided simply for
- compatibility.
-
- :type name: str
- :param name: The name of the table to check enabled / disabled status.
-
- :rtype: bool
- :returns: The value :data:`True` always.
- """
-        _WARN(_IS_ENABLED_TMPL % (name,))
-        return True
-
-    @staticmethod
-    def compact_table(name, major=False):
- """Compact the specified table.
-
- .. warning::
-
- Cloud Bigtable supports table compactions, it just doesn't expose
- an API for that feature, so this method does nothing. It is
- provided simply for compatibility.
-
- :type name: str
- :param name: The name of the table to compact.
-
- :type major: bool
- :param major: Whether to perform a major compaction.
- """
-        _WARN(_COMPACT_TMPL % (name, major))
-
-
-def _parse_family_option(option):
- """Parses a column family option into a garbage collection rule.
-
- .. note::
-
- If ``option`` is not a dictionary, the type is not checked.
- If ``option`` is :data:`None`, there is nothing to do, since this
- is the correct output.
-
- :type option: :class:`dict`,
- :data:`NoneType <types.NoneType>`,
- :class:`.GarbageCollectionRule`
- :param option: A column family option passed as a dictionary value in
- :meth:`Connection.create_table`.
-
- :rtype: :class:`.GarbageCollectionRule`
- :returns: A garbage collection rule parsed from the input.
- """
-    result = option
-    if isinstance(result, dict):
-        if not set(result.keys()) <= set(['max_versions', 'time_to_live']):
-            all_keys = ', '.join(repr(key) for key in result.keys())
-            warning_msg = ('Cloud Bigtable only supports max_versions and '
-                           'time_to_live column family settings. '
-                           'Received: %s' % (all_keys,))
-            _WARN(warning_msg)
-
-        max_num_versions = result.get('max_versions')
-        max_age = None
-        if 'time_to_live' in result:
-            max_age = datetime.timedelta(seconds=result['time_to_live'])
-
-        versions_rule = age_rule = None
-        if max_num_versions is not None:
-            versions_rule = MaxVersionsGCRule(max_num_versions)
-        if max_age is not None:
-            age_rule = MaxAgeGCRule(max_age)
-
-        if versions_rule is None:
-            result = age_rule
-        else:
-            if age_rule is None:
-                result = versions_rule
-            else:
-                result = GCRuleIntersection(rules=[age_rule, versions_rule])
-
-    return result
-
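A usage sketch tying the connection and the option parsing above together, assuming exactly one Bigtable instance is reachable from the environment (table and family names are hypothetical):

    # Sketch: table creation where family options become GC rules.
    from gcloud.bigtable.happybase.connection import Connection

    connection = Connection(table_prefix='app')   # tables are named 'app_...'
    connection.create_table('events', {
        'cf1': {'max_versions': 3},              # -> MaxVersionsGCRule(3)
        'cf2': {'time_to_live': 86400},          # -> MaxAgeGCRule(1 day)
        'cf3': {'max_versions': 1,
                'time_to_live': 3600},           # -> GCRuleIntersection
    })
    table = connection.table('events')           # resolves to 'app_events'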
Source code for gcloud.bigtable.happybase.pool
-# Copyright 2016 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Google Cloud Bigtable HappyBase pool module."""
-
-
-import contextlib
-import threading
-
-import six
-
-from gcloud.bigtable.happybase.connection import Connection
-from gcloud.bigtable.happybase.connection import _get_instance
-
-
-_MIN_POOL_SIZE = 1
-"""Minimum allowable size of a connection pool."""
-
-
-
-class NoConnectionsAvailable(RuntimeError):
- """Exception raised when no connections are available.
-
- This happens if a timeout was specified when obtaining a connection,
- and no connection became available within the specified timeout.
- """
-
-
-
-class ConnectionPool(object):
- """Thread-safe connection pool.
-
- .. note::
-
- All keyword arguments are passed unmodified to the
- :class:`Connection <.happybase.connection.Connection>` constructor
- **except** for ``autoconnect``. This is because the ``open`` /
- ``closed`` status of a connection is managed by the pool. In addition,
- if ``instance`` is not passed, the default / inferred instance is
- determined by the pool and then passed to each
- :class:`Connection <.happybase.connection.Connection>` that is created.
-
- :type size: int
- :param size: The maximum number of concurrently open connections.
-
- :type kwargs: dict
- :param kwargs: Keyword arguments passed to
- :class:`Connection <.happybase.Connection>`
- constructor.
-
- :raises: :class:`TypeError <exceptions.TypeError>` if ``size``
- is not an integer.
- :class:`ValueError <exceptions.ValueError>` if ``size``
- is not positive.
- """
-    def __init__(self, size, **kwargs):
-        if not isinstance(size, six.integer_types):
-            raise TypeError('Pool size arg must be an integer')
-
-        if size < _MIN_POOL_SIZE:
-            raise ValueError('Pool size must be positive')
-
-        self._lock = threading.Lock()
-        self._queue = six.moves.queue.LifoQueue(maxsize=size)
-        self._thread_connections = threading.local()
-
-        connection_kwargs = kwargs
-        connection_kwargs['autoconnect'] = False
-        if 'instance' not in connection_kwargs:
-            connection_kwargs['instance'] = _get_instance(
-                timeout=kwargs.get('timeout'))
-
-        for _ in six.moves.range(size):
-            connection = Connection(**connection_kwargs)
-            self._queue.put(connection)
-
-    def _acquire_connection(self, timeout=None):
- """Acquire a connection from the pool.
-
- :type timeout: int
- :param timeout: (Optional) Time (in seconds) to wait for a connection
- to open.
-
- :rtype: :class:`Connection <.happybase.Connection>`
- :returns: An active connection from the queue stored on the pool.
- :raises: :class:`NoConnectionsAvailable` if ``Queue.get`` fails
- before the ``timeout`` (only if a timeout is specified).
- """
-        try:
-            return self._queue.get(block=True, timeout=timeout)
-        except six.moves.queue.Empty:
-            raise NoConnectionsAvailable('No connection available from pool '
-                                         'within specified timeout')
-
-    @contextlib.contextmanager
-    def connection(self, timeout=None):
- """Obtain a connection from the pool.
-
- Must be used as a context manager, for example::
-
- with pool.connection() as connection:
- pass # do something with the connection
-
- If ``timeout`` is omitted, this method waits forever for a connection
- to become available from the local queue.
-
- Yields an active :class:`Connection <.happybase.connection.Connection>`
- from the pool.
-
- :type timeout: int
- :param timeout: (Optional) Time (in seconds) to wait for a connection
- to open.
-
- :raises: :class:`NoConnectionsAvailable` if no connection can be
- retrieved from the pool before the ``timeout`` (only if
- a timeout is specified).
- """
-        connection = getattr(self._thread_connections, 'current', None)
-
-        retrieved_new_cnxn = False
-        if connection is None:
-            # In this case we need to actually grab a connection from the
-            # pool. After retrieval, the connection is stored on a thread
-            # local so that nested connection requests from the same
-            # thread can re-use the same connection instance.
-            #
-            # NOTE: This code acquires a lock before assigning to the
-            #       thread local; see
-            #       ('https://emptysqua.re/blog/'
-            #        'another-thing-about-pythons-threadlocals/')
-            retrieved_new_cnxn = True
-            connection = self._acquire_connection(timeout)
-            with self._lock:
-                self._thread_connections.current = connection
-
-        # This is a no-op for connections that have already been opened
-        # since they just call Client.start().
-        connection.open()
-        yield connection
-
-        # Remove thread local reference after the outermost 'with' block
-        # ends. Afterwards the thread no longer owns the connection.
-        if retrieved_new_cnxn:
-            del self._thread_connections.current
-            self._queue.put(connection)
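A usage sketch for the pool above, assuming the default instance can be inferred from the environment (the table name is hypothetical):

    # Sketch: thread-safe connection reuse with ConnectionPool.
    from gcloud.bigtable.happybase.pool import ConnectionPool

    pool = ConnectionPool(size=4)            # connections share one instance
    with pool.connection(timeout=5) as connection:
        table = connection.table('events')   # hypothetical table name
        with pool.connection() as nested:
            # Nested requests on the same thread reuse the same connection.
            assert nested is connection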
Source code for gcloud.bigtable.happybase.table
-# Copyright 2016 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Google Cloud Bigtable HappyBase table module."""
-
-
-import struct
-import warnings
-
-import six
-
-from gcloud._helpers import _datetime_from_microseconds
-from gcloud._helpers import _microseconds_from_datetime
-from gcloud._helpers import _to_bytes
-from gcloud._helpers import _total_seconds
-from gcloud.bigtable.column_family import GCRuleIntersection
-from gcloud.bigtable.column_family import MaxAgeGCRule
-from gcloud.bigtable.column_family import MaxVersionsGCRule
-from gcloud.bigtable.happybase.batch import _get_column_pairs
-from gcloud.bigtable.happybase.batch import _WAL_SENTINEL
-from gcloud.bigtable.happybase.batch import Batch
-from gcloud.bigtable.row_filters import CellsColumnLimitFilter
-from gcloud.bigtable.row_filters import ColumnQualifierRegexFilter
-from gcloud.bigtable.row_filters import FamilyNameRegexFilter
-from gcloud.bigtable.row_filters import RowFilterChain
-from gcloud.bigtable.row_filters import RowFilterUnion
-from gcloud.bigtable.row_filters import RowKeyRegexFilter
-from gcloud.bigtable.row_filters import TimestampRange
-from gcloud.bigtable.row_filters import TimestampRangeFilter
-from gcloud.bigtable.table import Table as _LowLevelTable
-
-
-_WARN = warnings.warn
-_PACK_I64 = struct.Struct('>q').pack
-_UNPACK_I64 = struct.Struct('>q').unpack
-_SIMPLE_GC_RULES = (MaxAgeGCRule, MaxVersionsGCRule)
-
-
-
-def make_row(cell_map, include_timestamp):
- """Make a row dict for a Thrift cell mapping.
-
- .. warning::
-
- This method is only provided for HappyBase compatibility, but does not
- actually work.
-
- :type cell_map: dict
- :param cell_map: Dictionary with ``fam:col`` strings as keys and ``TCell``
- instances as values.
-
- :type include_timestamp: bool
- :param include_timestamp: Flag to indicate if cell timestamps should be
- included with the output.
-
- :raises: :class:`NotImplementedError <exceptions.NotImplementedError>`
- always
- """
-    raise NotImplementedError('The Cloud Bigtable API output is not the same '
-                              'as the output from the Thrift server, so this '
-                              'helper can not be implemented.', 'Called with',
-                              cell_map, include_timestamp)
-
-
-
-def make_ordered_row(sorted_columns, include_timestamp):
- """Make a row dict for sorted Thrift column results from scans.
-
- .. warning::
-
- This method is only provided for HappyBase compatibility, but does not
- actually work.
-
- :type sorted_columns: list
- :param sorted_columns: List of ``TColumn`` instances from Thrift.
-
- :type include_timestamp: bool
- :param include_timestamp: Flag to indicate if cell timestamps should be
- included with the output.
-
- :raises: :class:`NotImplementedError <exceptions.NotImplementedError>`
- always
- """
-    raise NotImplementedError('The Cloud Bigtable API output is not the same '
-                              'as the output from the Thrift server, so this '
-                              'helper can not be implemented.', 'Called with',
-                              sorted_columns, include_timestamp)
-
-
-
-class Table(object):
- """Representation of Cloud Bigtable table.
-
- Used for adding data and retrieving data.
-
- :type name: str
- :param name: The name of the table.
-
- :type connection: :class:`Connection <.happybase.connection.Connection>`
- :param connection: The connection which has access to the table.
- """
-
-    def __init__(self, name, connection):
-        self.name = name
-        # This remains as legacy for HappyBase, but only the instance
-        # from the connection is needed.
-        self.connection = connection
-        self._low_level_table = None
-        if self.connection is not None:
-            self._low_level_table = _LowLevelTable(self.name,
-                                                   self.connection._instance)
-
-    def __repr__(self):
-        return '<table.Table name=%r>' % (self.name,)
-
-
-    def families(self):
- """Retrieve the column families for this table.
-
- :rtype: dict
- :returns: Mapping from column family name to garbage collection rule
- for a column family.
- """
-        column_family_map = self._low_level_table.list_column_families()
-        result = {}
-        for col_fam, col_fam_obj in six.iteritems(column_family_map):
-            result[col_fam] = _gc_rule_to_dict(col_fam_obj.gc_rule)
-        return result
-
-
-    def regions(self):
- """Retrieve the regions for this table.
-
- .. warning::
-
- Cloud Bigtable does not give information about how a table is laid
- out in memory, so this method does not work. It is
- provided simply for compatibility.
-
- :raises: :class:`NotImplementedError <exceptions.NotImplementedError>`
- always
- """
-    raise NotImplementedError('The Cloud Bigtable API does not have a '
-                              'concept of splitting a table into regions.')
-
-
-    def row(self, row, columns=None, timestamp=None, include_timestamp=False):
- """Retrieve a single row of data.
-
- Returns the latest cells in each column (or all columns if ``columns``
- is not specified). If a ``timestamp`` is set, then **latest** becomes
- **latest** up until ``timestamp``.
-
- :type row: str
- :param row: Row key for the row we are reading from.
-
- :type columns: list
- :param columns: (Optional) Iterable containing column names (as
- strings). Each column name can be either
-
- * an entire column family: ``fam`` or ``fam:``
- * a single column: ``fam:col``
-
- :type timestamp: int
- :param timestamp: (Optional) Timestamp (in milliseconds since the
- epoch). If specified, only cells returned before the
- timestamp will be returned.
-
- :type include_timestamp: bool
- :param include_timestamp: Flag to indicate if cell timestamps should be
- included with the output.
-
- :rtype: dict
- :returns: Dictionary containing all the latest column values in
- the row.
- """
-        filters = []
-        if columns is not None:
-            filters.append(_columns_filter_helper(columns))
-        # versions == 1 since we only want the latest.
-        filter_ = _filter_chain_helper(versions=1, timestamp=timestamp,
-                                       filters=filters)
-
-        partial_row_data = self._low_level_table.read_row(
-            row, filter_=filter_)
-        if partial_row_data is None:
-            return {}
-
-        return _partial_row_to_dict(partial_row_data,
-                                    include_timestamp=include_timestamp)
-
-
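-    # A minimal usage sketch for :meth:`row` (row key, column and values
-    # below are hypothetical):
-    #
-    #     >>> table.row(b'row-key', columns=['fam1:col1'])
-    #     {b'fam1:col1': b'val1'}
-    #     >>> table.row(b'row-key', include_timestamp=True)
-    #     {b'fam1:col1': (b'val1', 1456361721135)}
-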
-    def rows(self, rows, columns=None, timestamp=None,
-             include_timestamp=False):
- """Retrieve multiple rows of data.
-
- All optional arguments behave the same in this method as they do in
- :meth:`row`.
-
- :type rows: list
- :param rows: Iterable of the row keys for the rows we are reading from.
-
- :type columns: list
- :param columns: (Optional) Iterable containing column names (as
- strings). Each column name can be either
-
- * an entire column family: ``fam`` or ``fam:``
- * a single column: ``fam:col``
-
- :type timestamp: int
- :param timestamp: (Optional) Timestamp (in milliseconds since the
- epoch). If specified, only cells returned before (or
- at) the timestamp will be returned.
-
- :type include_timestamp: bool
- :param include_timestamp: Flag to indicate if cell timestamps should be
- included with the output.
-
- :rtype: list
- :returns: A list of pairs, where the first is the row key and the
- second is a dictionary with the filtered values returned.
- """
-        if not rows:
-            # Avoid round-trip if the result is empty anyway
-            return []
-
-        filters = []
-        if columns is not None:
-            filters.append(_columns_filter_helper(columns))
-        filters.append(_row_keys_filter_helper(rows))
-        # versions == 1 since we only want the latest.
-        filter_ = _filter_chain_helper(versions=1, timestamp=timestamp,
-                                       filters=filters)
-
-        partial_rows_data = self._low_level_table.read_rows(filter_=filter_)
-        # NOTE: We could use max_loops = 1000 or some similar value to ensure
-        # that the stream isn't open too long.
-        partial_rows_data.consume_all()
-
-        result = []
-        for row_key in rows:
-            if row_key not in partial_rows_data.rows:
-                continue
-            curr_row_data = partial_rows_data.rows[row_key]
-            curr_row_dict = _partial_row_to_dict(
-                curr_row_data, include_timestamp=include_timestamp)
-            result.append((row_key, curr_row_dict))
-
-        return result
-
-
-    def cells(self, row, column, versions=None, timestamp=None,
-              include_timestamp=False):
- """Retrieve multiple versions of a single cell from the table.
-
- :type row: str
- :param row: Row key for the row we are reading from.
-
- :type column: str
- :param column: Column we are reading from; of the form ``fam:col``.
-
- :type versions: int
- :param versions: (Optional) The maximum number of cells to return. If
- not set, returns all cells found.
-
- :type timestamp: int
- :param timestamp: (Optional) Timestamp (in milliseconds since the
- epoch). If specified, only cells returned before (or
- at) the timestamp will be returned.
-
- :type include_timestamp: bool
- :param include_timestamp: Flag to indicate if cell timestamps should be
- included with the output.
-
- :rtype: list
- :returns: List of values in the cell (with timestamps if
- ``include_timestamp`` is :data:`True`).
- """
-        filter_ = _filter_chain_helper(column=column, versions=versions,
-                                       timestamp=timestamp)
-        partial_row_data = self._low_level_table.read_row(row, filter_=filter_)
-        if partial_row_data is None:
-            return []
-        else:
-            cells = partial_row_data._cells
-            # We know that `_filter_chain_helper` has already verified that
-            # column will split as such.
-            column_family_id, column_qualifier = column.split(':')
-            # NOTE: We expect the only key in `cells` is `column_family_id`
-            # and the only key in `cells[column_family_id]` is
-            # `column_qualifier`. But we don't check that this is true.
-            curr_cells = cells[column_family_id][column_qualifier]
-            return _cells_to_pairs(
-                curr_cells, include_timestamp=include_timestamp)
-
-
-    def scan(self, row_start=None, row_stop=None, row_prefix=None,
-             columns=None, timestamp=None,
-             include_timestamp=False, limit=None, **kwargs):
- """Create a scanner for data in this table.
-
- This method returns a generator that can be used for looping over the
- matching rows.
-
- If ``row_prefix`` is specified, only rows with row keys matching the
- prefix will be returned. If given, ``row_start`` and ``row_stop``
- cannot be used.
-
- .. note::
-
- Both ``row_start`` and ``row_stop`` can be :data:`None` to specify
- the start and the end of the table respectively. If both are
- omitted, a full table scan is done. Note that this usually results
- in severe performance problems.
-
- The keyword argument ``filter`` is also supported (beyond column and
- row range filters supported here). HappyBase / HBase users will have
- used this as an HBase filter string. (See the `Thrift docs`_ for more
- details on those filters.) However, Google Cloud Bigtable doesn't
- support those filter strings so a
- :class:`~gcloud.bigtable.row.RowFilter` should be used instead.
-
- .. _Thrift docs: http://hbase.apache.org/0.94/book/thrift.html
-
- The arguments ``batch_size``, ``scan_batching`` and ``sorted_columns``
- are allowed (as keyword arguments) for compatibility with
- HappyBase. However, they will not be used in any way, and will cause a
- warning if passed. (The ``batch_size`` determines the number of
- results to retrieve per request. The HBase scanner defaults to reading
- one record at a time, so this argument allows HappyBase to increase
- that number. However, the Cloud Bigtable API uses HTTP/2 streaming so
- there is no concept of a batched scan. The ``sorted_columns`` flag
- tells HBase to return columns in order, but Cloud Bigtable doesn't
- have this feature.)
-
- :type row_start: str
- :param row_start: (Optional) Row key where the scanner should start
- (includes ``row_start``). If not specified, reads
- from the first key. If the table does not contain
- ``row_start``, it will start from the next key after
- it that **is** contained in the table.
-
- :type row_stop: str
- :param row_stop: (Optional) Row key where the scanner should stop
- (excludes ``row_stop``). If not specified, reads
- until the last key. The table does not have to contain
- ``row_stop``.
-
- :type row_prefix: str
- :param row_prefix: (Optional) Prefix to match row keys.
-
- :type columns: list
- :param columns: (Optional) Iterable containing column names (as
- strings). Each column name can be either
-
- * an entire column family: ``fam`` or ``fam:``
- * a single column: ``fam:col``
-
- :type timestamp: int
- :param timestamp: (Optional) Timestamp (in milliseconds since the
- epoch). If specified, only cells returned before (or
- at) the timestamp will be returned.
-
- :type include_timestamp: bool
- :param include_timestamp: Flag to indicate if cell timestamps should be
- included with the output.
-
- :type limit: int
- :param limit: (Optional) Maximum number of rows to return.
-
- :type kwargs: dict
- :param kwargs: Remaining keyword arguments. Provided for HappyBase
- compatibility.
-
- :raises: :class:`ValueError <exceptions.ValueError>` if ``limit`` is set
- but non-positive, or if ``row_prefix`` is used with
- ``row_start`` / ``row_stop``;
- :class:`TypeError <exceptions.TypeError>` if a string
- ``filter`` is used.
- """
-        row_start, row_stop, filter_chain = _scan_filter_helper(
-            row_start, row_stop, row_prefix, columns, timestamp, limit, kwargs)
-
-        partial_rows_data = self._low_level_table.read_rows(
-            start_key=row_start, end_key=row_stop,
-            limit=limit, filter_=filter_chain)
-
-        # Mutable copy of data.
-        rows_dict = partial_rows_data.rows
-        while True:
-            try:
-                partial_rows_data.consume_next()
-                for row_key in sorted(rows_dict):
-                    curr_row_data = rows_dict.pop(row_key)
-                    # NOTE: We expect len(rows_dict) == 0, but don't check it.
-                    curr_row_dict = _partial_row_to_dict(
-                        curr_row_data, include_timestamp=include_timestamp)
-                    yield (row_key, curr_row_dict)
-            except StopIteration:
-                break
-
-
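-    # A minimal usage sketch for :meth:`scan`; the generator yields
-    # ``(row_key, row_dict)`` pairs (the prefix and limit are hypothetical):
-    #
-    #     >>> for row_key, row_dict in table.scan(row_prefix=b'key-',
-    #     ...                                     limit=10):
-    #     ...     print(row_key, row_dict)
-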
-    def put(self, row, data, timestamp=None, wal=_WAL_SENTINEL):
- """Insert data into a row in this table.
-
- .. note::
-
- This method will send a request with a single "put" mutation.
- In many situations, :meth:`batch` is a more appropriate
- method to manipulate data since it helps combine many mutations
- into a single request.
-
- :type row: str
- :param row: The row key where the mutation will be "put".
-
- :type data: dict
- :param data: Dictionary containing the data to be inserted. The keys
- are column names (of the form ``fam:col``) and the values
- are strings (bytes) to be stored in those columns.
-
- :type timestamp: int
- :param timestamp: (Optional) Timestamp (in milliseconds since the
- epoch) that the mutation will be applied at.
-
- :type wal: object
- :param wal: Unused parameter (to be passed to a created batch).
- Provided for compatibility with HappyBase, but irrelevant
- for Cloud Bigtable since it does not have a Write Ahead
- Log.
- """
-        with self.batch(timestamp=timestamp, wal=wal) as batch:
-            batch.put(row, data)
-
-
-    def delete(self, row, columns=None, timestamp=None, wal=_WAL_SENTINEL):
- """Delete data from a row in this table.
-
- This method deletes the entire ``row`` if ``columns`` is not
- specified.
-
- .. note::
-
- This method will send a request with a single delete mutation.
- In many situations, :meth:`batch` is a more appropriate
- method to manipulate data since it helps combine many mutations
- into a single request.
-
- :type row: str
- :param row: The row key where the delete will occur.
-
- :type columns: list
- :param columns: (Optional) Iterable containing column names (as
- strings). Each column name can be either
-
- * an entire column family: ``fam`` or ``fam:``
- * a single column: ``fam:col``
-
- :type timestamp: int
- :param timestamp: (Optional) Timestamp (in milliseconds since the
- epoch) that the mutation will be applied at.
-
- :type wal: object
- :param wal: Unused parameter (to be passed to a created batch).
- Provided for compatibility with HappyBase, but irrelevant
- for Cloud Bigtable since it does not have a Write Ahead
- Log.
- """
-        with self.batch(timestamp=timestamp, wal=wal) as batch:
-            batch.delete(row, columns)
-
-
-    def batch(self, timestamp=None, batch_size=None, transaction=False,
-              wal=_WAL_SENTINEL):
- """Create a new batch operation for this table.
-
- This method returns a new
- :class:`Batch <.happybase.batch.Batch>` instance that can be
- used for mass data manipulation.
-
- :type timestamp: int
- :param timestamp: (Optional) Timestamp (in milliseconds since the
- epoch) that all mutations will be applied at.
-
- :type batch_size: int
- :param batch_size: (Optional) The maximum number of mutations to allow
- to accumulate before committing them.
-
- :type transaction: bool
- :param transaction: Flag indicating if the mutations should be sent
- transactionally or not. If ``transaction=True`` and
- an error occurs while a
- :class:`Batch <.happybase.batch.Batch>` is
- active, then none of the accumulated mutations will
- be committed. If ``batch_size`` is set, the
- mutation can't be transactional.
-
- :type wal: object
- :param wal: Unused parameter (to be passed to the created batch).
- Provided for compatibility with HappyBase, but irrelevant
- for Cloud Bigtable since it does not have a Write Ahead
- Log.
-
- :rtype: :class:`Batch <gcloud.bigtable.happybase.batch.Batch>`
- :returns: A batch bound to this table.
- """
-        return Batch(self, timestamp=timestamp, batch_size=batch_size,
-                     transaction=transaction, wal=wal)
-
-
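-    # A minimal sketch of :meth:`batch` used as a context manager, mirroring
-    # how :meth:`put` and :meth:`delete` use it above (row keys and values
-    # are hypothetical):
-    #
-    #     >>> with table.batch(transaction=True) as batch:
-    #     ...     batch.put(b'row1', {'fam1:col1': b'val1'})
-    #     ...     batch.delete(b'row2', columns=['fam1:col2'])
-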
-    def counter_get(self, row, column):
- """Retrieve the current value of a counter column.
-
- This method retrieves the current value of a counter column. If the
- counter column does not exist, this function initializes it to ``0``.
-
- .. note::
-
- Application code should **never** store a counter value directly;
- use the atomic :meth:`counter_inc` and :meth:`counter_dec` methods
- for that.
-
- :type row: str
- :param row: Row key for the row we are getting a counter from.
-
- :type column: str
- :param column: Column we are ``get``-ing from; of the form ``fam:col``.
-
- :rtype: int
- :returns: Counter value (after initializing / incrementing by 0).
- """
-        # Don't query directly, but increment with value=0 so that the
-        # counter is correctly initialized if it didn't exist yet.
-        return self.counter_inc(row, column, value=0)
-
-
-    def counter_set(self, row, column, value=0):
- """Set a counter column to a specific value.
-
- .. note::
-
- Be careful using this method. It can be useful for setting the
- initial value of a counter, but it defeats the purpose of using
- atomic increment and decrement.
-
- :type row: str
- :param row: Row key for the row we are setting a counter in.
-
- :type column: str
- :param column: Column we are setting a value in; of
- the form ``fam:col``.
-
- :type value: int
- :param value: Value to set the counter to.
- """
-        self.put(row, {column: _PACK_I64(value)})
-
-
-    def counter_inc(self, row, column, value=1):
- """Atomically increment a counter column.
-
- This method atomically increments a counter column in ``row``.
- If the counter column does not exist, it is automatically initialized
- to ``0`` before being incremented.
-
- :type row: str
- :param row: Row key for the row we are incrementing a counter in.
-
- :type column: str
- :param column: Column we are incrementing a value in; of the
- form ``fam:col``.
-
- :type value: int
- :param value: Amount to increment the counter by. (If negative,
- this is equivalent to decrement.)
-
- :rtype: int
- :returns: Counter value after incrementing.
- """
-        row = self._low_level_table.row(row, append=True)
-        if isinstance(column, six.binary_type):
-            column = column.decode('utf-8')
-        column_family_id, column_qualifier = column.split(':')
-        row.increment_cell_value(column_family_id, column_qualifier, value)
-        # AppendRow.commit() returns a dictionary:
-        # {
-        #     u'col-fam-id': {
-        #         b'col-name1': [
-        #             (b'cell-val', datetime.datetime(...)),
-        #             ...
-        #         ],
-        #         ...
-        #     },
-        # }
-        modified_cells = row.commit()
-        # Get the cells in the modified column.
-        column_cells = modified_cells[column_family_id][column_qualifier]
-        # Make sure there is exactly one cell in the column.
-        if len(column_cells) != 1:
-            raise ValueError('Expected server to return one modified cell.')
-        column_cell = column_cells[0]
-        # Get the bytes value from the column and convert it to an integer.
-        bytes_value = column_cell[0]
-        int_value, = _UNPACK_I64(bytes_value)
-        return int_value
-
-
-    def counter_dec(self, row, column, value=1):
- """Atomically decrement a counter column.
-
- This method atomically decrements a counter column in ``row``.
- If the counter column does not exist, it is automatically initialized
- to ``0`` before being decremented.
-
- :type row: str
- :param row: Row key for the row we are decrementing a counter in.
-
- :type column: str
- :param column: Column we are decrementing a value in; of the
- form ``fam:col``.
-
- :type value: int
- :param value: Amount to decrement the counter by. (If negative,
- this is equivalent to increment.)
-
- :rtype: int
- :returns: Counter value after decrementing.
- """
-        return self.counter_inc(row, column, -value)
-
-
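-    # A minimal sketch of the counter methods above (the row key and column
-    # are hypothetical); a missing counter is initialized to 0:
-    #
-    #     >>> table.counter_inc(b'row-key', 'fam1:visits')
-    #     1
-    #     >>> table.counter_dec(b'row-key', 'fam1:visits')
-    #     0
-    #     >>> table.counter_get(b'row-key', 'fam1:visits')
-    #     0
-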
-def _gc_rule_to_dict(gc_rule):
- """Converts garbage collection rule to dictionary if possible.
-
- This is in place to support dictionary values as was done
- in HappyBase, which has somewhat different garbage collection rule
- settings for column families.
-
- Only does this if the garbage collection rule is:
-
- * :class:`gcloud.bigtable.column_family.MaxAgeGCRule`
- * :class:`gcloud.bigtable.column_family.MaxVersionsGCRule`
- * Composite :class:`gcloud.bigtable.column_family.GCRuleIntersection`
- with two rules, one each of type
- :class:`gcloud.bigtable.column_family.MaxAgeGCRule` and
- :class:`gcloud.bigtable.column_family.MaxVersionsGCRule`
-
- Otherwise, just returns the input without change.
-
- :type gc_rule: :data:`NoneType <types.NoneType>`,
- :class:`.GarbageCollectionRule`
- :param gc_rule: A garbage collection rule to convert to a dictionary
- (if possible).
-
- :rtype: dict or
- :class:`gcloud.bigtable.column_family.GarbageCollectionRule`
- :returns: The converted garbage collection rule.
- """
-    result = gc_rule
-    if gc_rule is None:
-        result = {}
-    elif isinstance(gc_rule, MaxAgeGCRule):
-        result = {'time_to_live': _total_seconds(gc_rule.max_age)}
-    elif isinstance(gc_rule, MaxVersionsGCRule):
-        result = {'max_versions': gc_rule.max_num_versions}
-    elif isinstance(gc_rule, GCRuleIntersection):
-        if len(gc_rule.rules) == 2:
-            rule1, rule2 = gc_rule.rules
-            if (isinstance(rule1, _SIMPLE_GC_RULES) and
-                    isinstance(rule2, _SIMPLE_GC_RULES)):
-                rule1 = _gc_rule_to_dict(rule1)
-                rule2 = _gc_rule_to_dict(rule2)
-                key1, = rule1.keys()
-                key2, = rule2.keys()
-                if key1 != key2:
-                    result = {key1: rule1[key1], key2: rule2[key2]}
-    return result
-
-
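-# A minimal sketch of the conversion performed by ``_gc_rule_to_dict``; a
-# ``MaxAgeGCRule`` maps to ``{'time_to_live': <seconds>}`` analogously (the
-# rule below is hypothetical):
-#
-#     >>> _gc_rule_to_dict(MaxVersionsGCRule(2))
-#     {'max_versions': 2}
-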
-def _next_char(str_val, index):
- """Gets the next character based on a position in a string.
-
- :type str_val: str
- :param str_val: A string containing the character to update.
-
- :type index: int
- :param index: An integer index in ``str_val``.
-
- :rtype: str
- :returns: The next character after the character at ``index``
- in ``str_val``.
- """
-    ord_val = six.indexbytes(str_val, index)
-    return _to_bytes(chr(ord_val + 1), encoding='latin-1')
-
-
-def _string_successor(str_val):
- """Increment and truncate a byte string.
-
- Determines shortest string that sorts after the given string when
- compared using regular string comparison semantics.
-
- Modeled after implementation in ``gcloud-golang``.
-
- Increments the last byte that is smaller than ``0xFF``, and
- drops everything after it. If the string only contains ``0xFF`` bytes,
- ``''`` is returned.
-
- :type str_val: str
- :param str_val: String to increment.
-
- :rtype: str
- :returns: The next string in lexical order after ``str_val``.
- """
-    str_val = _to_bytes(str_val, encoding='latin-1')
-    if str_val == b'':
-        return str_val
-
-    index = len(str_val) - 1
-    while index >= 0:
-        if six.indexbytes(str_val, index) != 0xff:
-            break
-        index -= 1
-
-    if index == -1:
-        return b''
-
-    return str_val[:index] + _next_char(str_val, index)
-
-
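-# A minimal sketch of ``_string_successor``: trailing ``0xFF`` bytes are
-# dropped and the last remaining byte is incremented:
-#
-#     >>> _string_successor(b'row-a')
-#     b'row-b'
-#     >>> _string_successor(b'row\xff')
-#     b'rox'
-#     >>> _string_successor(b'\xff\xff')
-#     b''
-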
-def _convert_to_time_range(timestamp=None):
- """Create a timestamp range from an HBase / HappyBase timestamp.
-
- HBase uses timestamp as an argument to specify an exclusive end
- deadline. Cloud Bigtable also uses exclusive end times, so
- the behavior matches.
-
- :type timestamp: int
- :param timestamp: (Optional) Timestamp (in milliseconds since the
- epoch). Intended to be used as the end of an HBase
- time range, which is exclusive.
-
- :rtype: :class:`gcloud.bigtable.row.TimestampRange`,
- :data:`NoneType <types.NoneType>`
- :returns: The timestamp range corresponding to the passed in
- ``timestamp``.
- """
-    if timestamp is None:
-        return None
-
-    next_timestamp = _datetime_from_microseconds(1000 * timestamp)
-    return TimestampRange(end=next_timestamp)
-
-
-def _cells_to_pairs(cells, include_timestamp=False):
- """Converts list of cells to HappyBase format.
-
- For example::
-
- >>> import datetime
- >>> from gcloud.bigtable.row_data import Cell
- >>> cell1 = Cell(b'val1', datetime.datetime.utcnow())
- >>> cell2 = Cell(b'val2', datetime.datetime.utcnow())
- >>> _cells_to_pairs([cell1, cell2])
- [b'val1', b'val2']
- >>> _cells_to_pairs([cell1, cell2], include_timestamp=True)
- [(b'val1', 1456361486255), (b'val2', 1456361491927)]
-
- :type cells: list
- :param cells: List of :class:`gcloud.bigtable.row_data.Cell` returned
- from a read request.
-
- :type include_timestamp: bool
- :param include_timestamp: Flag to indicate if cell timestamps should be
- included with the output.
-
- :rtype: list
- :returns: List of values in the cell. If ``include_timestamp=True``, each
- value will be a pair, with the first part the bytes value in
- the cell and the second part the number of milliseconds in the
- timestamp on the cell.
- """
-    result = []
-    for cell in cells:
-        if include_timestamp:
-            ts_millis = _microseconds_from_datetime(cell.timestamp) // 1000
-            result.append((cell.value, ts_millis))
-        else:
-            result.append(cell.value)
-    return result
-
-
-def _partial_row_to_dict(partial_row_data, include_timestamp=False):
- """Convert a low-level row data object to a dictionary.
-
- Assumes only the latest value in each row is needed. This assumption
- is due to the fact that this method is used by callers which use
- a ``CellsColumnLimitFilter(1)`` filter.
-
- For example::
-
- >>> import datetime
- >>> from gcloud.bigtable.row_data import Cell, PartialRowData
- >>> cell1 = Cell(b'val1', datetime.datetime.utcnow())
- >>> cell2 = Cell(b'val2', datetime.datetime.utcnow())
- >>> row_data = PartialRowData(b'row-key')
- >>> _partial_row_to_dict(row_data)
- {}
- >>> row_data._cells[u'fam1'] = {b'col1': [cell1], b'col2': [cell2]}
- >>> _partial_row_to_dict(row_data)
- {b'fam1:col2': b'val2', b'fam1:col1': b'val1'}
- >>> _partial_row_to_dict(row_data, include_timestamp=True)
- {b'fam1:col2': (b'val2', 1456361724480),
- b'fam1:col1': (b'val1', 1456361721135)}
-
- :type partial_row_data: :class:`.row_data.PartialRowData`
- :param partial_row_data: Row data consumed from a stream.
-
- :type include_timestamp: bool
- :param include_timestamp: Flag to indicate if cell timestamps should be
- included with the output.
-
- :rtype: dict
- :returns: The row data converted to a dictionary.
- """
-    result = {}
-    for column, cells in six.iteritems(partial_row_data.to_dict()):
-        cell_vals = _cells_to_pairs(cells,
-                                    include_timestamp=include_timestamp)
-        # NOTE: We assume there is exactly 1 version since we used that in
-        # our filter, but we don't check this.
-        result[column] = cell_vals[0]
-    return result
-
-
-def _filter_chain_helper(column=None, versions=None, timestamp=None,
-                         filters=None):
- """Create filter chain to limit a results set.
-
- :type column: str
- :param column: (Optional) The column (``fam:col``) to be selected
- with the filter.
-
- :type versions: int
- :param versions: (Optional) The maximum number of cells to return.
-
- :type timestamp: int
- :param timestamp: (Optional) Timestamp (in milliseconds since the
- epoch). If specified, only cells returned before (or
- at) the timestamp will be matched.
-
- :type filters: list
- :param filters: (Optional) List of existing filters to be extended.
-
- :rtype: :class:`RowFilter <gcloud.bigtable.row.RowFilter>`
- :returns: The chained filter created, or just a single filter if only
- one was needed.
- :raises: :class:`ValueError <exceptions.ValueError>` if there are no
- filters to chain.
- """
-    if filters is None:
-        filters = []
-
-    if column is not None:
-        if isinstance(column, six.binary_type):
-            column = column.decode('utf-8')
-        column_family_id, column_qualifier = column.split(':')
-        fam_filter = FamilyNameRegexFilter(column_family_id)
-        qual_filter = ColumnQualifierRegexFilter(column_qualifier)
-        filters.extend([fam_filter, qual_filter])
-    if versions is not None:
-        filters.append(CellsColumnLimitFilter(versions))
-    time_range = _convert_to_time_range(timestamp=timestamp)
-    if time_range is not None:
-        filters.append(TimestampRangeFilter(time_range))
-
-    num_filters = len(filters)
-    if num_filters == 0:
-        raise ValueError('Must have at least one filter.')
-    elif num_filters == 1:
-        return filters[0]
-    else:
-        return RowFilterChain(filters=filters)
-
-
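-# A minimal sketch of ``_filter_chain_helper``: selecting the single latest
-# cell in a (hypothetical) column chains a family regex filter, a qualifier
-# regex filter and a cells-per-column limit:
-#
-#     >>> filter_ = _filter_chain_helper(column='fam1:col1', versions=1)
-#     >>> isinstance(filter_, RowFilterChain)
-#     True
-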
-def _scan_filter_helper(row_start, row_stop, row_prefix, columns,
-                        timestamp, limit, kwargs):
- """Helper for :meth:`scan`: build up a filter chain."""
-    filter_ = kwargs.pop('filter', None)
-    legacy_args = []
-    for kw_name in ('batch_size', 'scan_batching', 'sorted_columns'):
-        if kw_name in kwargs:
-            legacy_args.append(kw_name)
-            kwargs.pop(kw_name)
-    if legacy_args:
-        legacy_args = ', '.join(legacy_args)
-        message = ('The HappyBase legacy arguments %s were used. These '
-                   'arguments are unused by gcloud.' % (legacy_args,))
-        _WARN(message)
-    if kwargs:
-        raise TypeError('Received unexpected arguments', kwargs.keys())
-
-    if limit is not None and limit < 1:
-        raise ValueError('limit must be positive')
-    if row_prefix is not None:
-        if row_start is not None or row_stop is not None:
-            raise ValueError('row_prefix cannot be combined with '
-                             'row_start or row_stop')
-        row_start = row_prefix
-        row_stop = _string_successor(row_prefix)
-
-    filters = []
-    if isinstance(filter_, six.string_types):
-        raise TypeError('Specifying filters as a string is not supported '
-                        'by Cloud Bigtable. Use a '
-                        'gcloud.bigtable.row.RowFilter instead.')
-    elif filter_ is not None:
-        filters.append(filter_)
-
-    if columns is not None:
-        filters.append(_columns_filter_helper(columns))
-
-    # versions == 1 since we only want the latest.
-    filter_ = _filter_chain_helper(versions=1, timestamp=timestamp,
-                                   filters=filters)
-    return row_start, row_stop, filter_
-
-
-def _columns_filter_helper(columns):
- """Creates a union filter for a list of columns.
-
- :type columns: list
- :param columns: Iterable containing column names (as strings). Each column
- name can be either
-
- * an entire column family: ``fam`` or ``fam:``
- * a single column: ``fam:col``
-
- :rtype: :class:`RowFilter <gcloud.bigtable.row.RowFilter>`
- :returns: The union filter created containing all of the matched columns.
- :raises: :class:`ValueError <exceptions.ValueError>` if there are no
- filters to union.
- """
-    filters = []
-    for column_family_id, column_qualifier in _get_column_pairs(columns):
-        fam_filter = FamilyNameRegexFilter(column_family_id)
-        if column_qualifier is not None:
-            qual_filter = ColumnQualifierRegexFilter(column_qualifier)
-            combined_filter = RowFilterChain(
-                filters=[fam_filter, qual_filter])
-            filters.append(combined_filter)
-        else:
-            filters.append(fam_filter)
-
-    num_filters = len(filters)
-    if num_filters == 0:
-        raise ValueError('Must have at least one filter.')
-    elif num_filters == 1:
-        return filters[0]
-    else:
-        return RowFilterUnion(filters=filters)
-
-
-def _row_keys_filter_helper(row_keys):
- """Creates a union filter for a list of rows.
-
- :type row_keys: list
- :param row_keys: Iterable containing row keys (as strings).
-
- :rtype: :class:`RowFilter <gcloud.bigtable.row.RowFilter>`
- :returns: The union filter created containing all of the row keys.
- :raises: :class:`ValueError <exceptions.ValueError>` if there are no
- filters to union.
- """
-    filters = []
-    for row_key in row_keys:
-        filters.append(RowKeyRegexFilter(row_key))
-
-    num_filters = len(filters)
-    if num_filters == 0:
-        raise ValueError('Must have at least one filter.')
-    elif num_filters == 1:
-        return filters[0]
-    else:
-        return RowFilterUnion(filters=filters)
-
-# Copyright 2014 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Connections to gcloud datastore API servers."""
-
-import os
-
-from gcloud import connection
-from gcloud.environment_vars import GCD_HOST
-from gcloud.exceptions import make_exception
-from gcloud.datastore._generated import datastore_pb2 as _datastore_pb2
-from google.rpc import status_pb2
-
-
-
-class Connection(connection.Connection):
- """A connection to the Google Cloud Datastore via the Protobuf API.
-
- This class should understand only the basic types (and protobufs)
- in method arguments, but it should be capable of returning advanced types.
-
- :type credentials: :class:`oauth2client.client.OAuth2Credentials`
- :param credentials: The OAuth2 Credentials to use for this connection.
-
- :type http: :class:`httplib2.Http` or class that defines ``request()``.
- :param http: An optional HTTP object to make requests.
-
- :type api_base_url: string
- :param api_base_url: The base of the API call URL. Defaults to
- :attr:`API_BASE_URL`.
- """
-
-    API_BASE_URL = 'https://datastore.googleapis.com'
-    """The base of the API call URL."""
-
-    API_VERSION = 'v1beta3'
-    """The version of the API, used in building the API call's URL."""
-
-    API_URL_TEMPLATE = ('{api_base}/{api_version}/projects'
-                        '/{project}:{method}')
-    """A template for the URL of a particular API call."""
-
-    SCOPE = ('https://www.googleapis.com/auth/datastore',)
-    """The scopes required for authenticating as a Cloud Datastore consumer."""
-
-    def __init__(self, credentials=None, http=None, api_base_url=None):
-        super(Connection, self).__init__(credentials=credentials, http=http)
-        if api_base_url is None:
-            try:
-                # gcd.sh has /datastore/ in the path still since it supports
-                # v1beta2 and v1beta3 simultaneously.
-                api_base_url = '%s/datastore' % (os.environ[GCD_HOST],)
-            except KeyError:
-                api_base_url = self.__class__.API_BASE_URL
-        self.api_base_url = api_base_url
-
-    def _request(self, project, method, data):
- """Make a request over the Http transport to the Cloud Datastore API.
-
- :type project: string
- :param project: The project to make the request for.
-
- :type method: string
- :param method: The API call method name (e.g., ``runQuery``,
- ``lookup``, etc.)
-
- :type data: string
- :param data: The data to send with the API call.
- Typically this is a serialized Protobuf string.
-
- :rtype: string
- :returns: The string response content from the API call.
- :raises: :class:`gcloud.exceptions.GCloudError` if the response
- code is not 200 OK.
- """
-        headers = {
-            'Content-Type': 'application/x-protobuf',
-            'Content-Length': str(len(data)),
-            'User-Agent': self.USER_AGENT,
-        }
-        headers, content = self.http.request(
-            uri=self.build_api_url(project=project, method=method),
-            method='POST', headers=headers, body=data)
-
-        status = headers['status']
-        if status != '200':
-            error_status = status_pb2.Status.FromString(content)
-            raise make_exception(headers, error_status.message,
-                                 use_json=False)
-
-        return content
-
-    def _rpc(self, project, method, request_pb, response_pb_cls):
- """Make a protobuf RPC request.
-
- :type project: string
- :param project: The project to connect to. This is
- usually your project name in the cloud console.
-
- :type method: string
- :param method: The name of the method to invoke.
-
- :type request_pb: :class:`google.protobuf.message.Message` instance
- :param request_pb: the protobuf instance representing the request.
-
- :type response_pb_cls: A :class:`google.protobuf.message.Message`
- subclass.
- :param response_pb_cls: The class used to unmarshall the response
- protobuf.
-
- :rtype: :class:`google.protobuf.message.Message`
- :returns: The RPC message parsed from the response.
- """
-        response = self._request(project=project, method=method,
-                                 data=request_pb.SerializeToString())
-        return response_pb_cls.FromString(response)
-
-
-    def build_api_url(self, project, method, base_url=None,
-                      api_version=None):
- """Construct the URL for a particular API call.
-
- This method is used internally to come up with the URL to use when
- making RPCs to the Cloud Datastore API.
-
- :type project: string
- :param project: The project to connect to. This is
- usually your project name in the cloud console.
-
- :type method: string
- :param method: The API method to call (e.g. 'runQuery', 'lookup').
-
- :type base_url: string
- :param base_url: The base URL where the API lives.
- You shouldn't have to provide this.
-
- :type api_version: string
- :param api_version: The version of the API to connect to.
- You shouldn't have to provide this.
-
- :rtype: str
- :returns: The API URL created.
- """
-        return self.API_URL_TEMPLATE.format(
-            api_base=(base_url or self.api_base_url),
-            api_version=(api_version or self.API_VERSION),
-            project=project, method=method)
-
-
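-    # A minimal sketch of the URL construction above, assuming the default
-    # ``API_BASE_URL`` (the project name is hypothetical):
-    #
-    #     >>> connection.build_api_url('my-project', 'runQuery')
-    #     'https://datastore.googleapis.com/v1beta3/projects/my-project:runQuery'
-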
-    def lookup(self, project, key_pbs,
-               eventual=False, transaction_id=None):
- """Lookup keys from a project in the Cloud Datastore.
-
- Maps the ``DatastoreService.Lookup`` protobuf RPC.
-
- This uses mostly protobufs
- (:class:`gcloud.datastore._generated.entity_pb2.Key` as input and
- :class:`gcloud.datastore._generated.entity_pb2.Entity` as output). It
- is used under the hood in
- :meth:`Client.get() <.datastore.client.Client.get>`:
-
- >>> from gcloud import datastore
- >>> client = datastore.Client(project='project')
- >>> key = client.key('MyKind', 1234)
- >>> client.get(key)
- [<Entity object>]
-
- Using a :class:`Connection` directly:
-
- >>> connection.lookup('project', [key.to_protobuf()])
- [<Entity protobuf>]
-
- :type project: string
- :param project: The project to look up the keys in.
-
- :type key_pbs: list of
- :class:`gcloud.datastore._generated.entity_pb2.Key`
- :param key_pbs: The keys to retrieve from the datastore.
-
- :type eventual: bool
- :param eventual: If False (the default), request ``STRONG`` read
- consistency. If True, request ``EVENTUAL`` read
- consistency.
-
- :type transaction_id: string
- :param transaction_id: If passed, make the request in the scope of
- the given transaction. Incompatible with
- ``eventual==True``.
-
- :rtype: tuple
- :returns: A triple of (``results``, ``missing``, ``deferred``) where
- both ``results`` and ``missing`` are lists of
- :class:`gcloud.datastore._generated.entity_pb2.Entity` and
- ``deferred`` is a list of
- :class:`gcloud.datastore._generated.entity_pb2.Key`.
- """
-        lookup_request = _datastore_pb2.LookupRequest()
-        _set_read_options(lookup_request, eventual, transaction_id)
-        _add_keys_to_request(lookup_request.keys, key_pbs)
-
-        lookup_response = self._rpc(project, 'lookup', lookup_request,
-                                    _datastore_pb2.LookupResponse)
-
-        results = [result.entity for result in lookup_response.found]
-        missing = [result.entity for result in lookup_response.missing]
-
-        return results, missing, list(lookup_response.deferred)
-
-
-    def run_query(self, project, query_pb, namespace=None,
-                  eventual=False, transaction_id=None):
- """Run a query on the Cloud Datastore.
-
- Maps the ``DatastoreService.RunQuery`` protobuf RPC.
-
- Given a Query protobuf, sends a ``runQuery`` request to the
- Cloud Datastore API and returns a list of entity protobufs
- matching the query.
-
- You typically wouldn't use this method directly; instead, use the
- :meth:`gcloud.datastore.query.Query.fetch` method.
-
- Under the hood, the :class:`gcloud.datastore.query.Query` class
- uses this method to fetch data:
-
- >>> from gcloud import datastore
- >>> client = datastore.Client()
- >>> query = client.query(kind='MyKind')
- >>> query.add_filter('property', '=', 'val')
-
- Using the query iterator's
- :meth:`next_page() <.datastore.query.Iterator.next_page>` method:
-
- >>> query_iter = query.fetch()
- >>> entities, more_results, cursor = query_iter.next_page()
- >>> entities
- [<list of Entity unmarshalled from protobuf>]
- >>> more_results
- <boolean of more results>
- >>> cursor
- <string containing cursor where fetch stopped>
-
- Under the hood this is doing:
-
- >>> connection.run_query('project', query.to_protobuf())
- [<list of Entity Protobufs>], cursor, more_results, skipped_results
-
- :type project: string
- :param project: The project over which to run the query.
-
- :type query_pb: :class:`gcloud.datastore._generated.query_pb2.Query`
- :param query_pb: The Protobuf representing the query to run.
-
- :type namespace: string
- :param namespace: The namespace over which to run the query.
-
- :type eventual: bool
- :param eventual: If False (the default), request ``STRONG`` read
- consistency. If True, request ``EVENTUAL`` read
- consistency.
-
- :type transaction_id: string
- :param transaction_id: If passed, make the request in the scope of
- the given transaction. Incompatible with
- ``eventual==True``.
-
- :rtype: tuple
- :returns: Four-tuple containing the entities returned,
- the end cursor of the query, a ``more_results``
- enum and a count of the number of skipped results.
- """
-        request = _datastore_pb2.RunQueryRequest()
-        _set_read_options(request, eventual, transaction_id)
-
-        if namespace:
-            request.partition_id.namespace_id = namespace
-
-        request.query.CopyFrom(query_pb)
-        response = self._rpc(project, 'runQuery', request,
-                             _datastore_pb2.RunQueryResponse)
-        return (
-            [e.entity for e in response.batch.entity_results],
-            response.batch.end_cursor,  # Assume response always has cursor.
-            response.batch.more_results,
-            response.batch.skipped_results,
-        )
-
-
-    def begin_transaction(self, project):
- """Begin a transaction.
-
- Maps the ``DatastoreService.BeginTransaction`` protobuf RPC.
-
- :type project: string
- :param project: The project to which the transaction applies.
-
- :rtype: bytes
- :returns: The serialized transaction that was begun.
- """
-        request = _datastore_pb2.BeginTransactionRequest()
-        response = self._rpc(project, 'beginTransaction', request,
-                             _datastore_pb2.BeginTransactionResponse)
-        return response.transaction
-
-
-    def commit(self, project, request, transaction_id):
- """Commit mutations in the context of the current transaction (if any).
-
- Maps the ``DatastoreService.Commit`` protobuf RPC.
-
- :type project: string
- :param project: The project to which the transaction applies.
-
- :type request: :class:`._generated.datastore_pb2.CommitRequest`
- :param request: The protobuf with the mutations being committed.
-
- :type transaction_id: string or None
- :param transaction_id: The transaction ID returned from
- :meth:`begin_transaction`. Non-transactional
- batches must pass ``None``.
-
- .. note::
-
- This method will mutate ``request`` before using it.
-
- :rtype: tuple
- :returns: The pair of the number of index updates and a list of
- :class:`._generated.entity_pb2.Key` for each incomplete key
- that was completed in the commit.
- """
-        if transaction_id:
-            request.mode = _datastore_pb2.CommitRequest.TRANSACTIONAL
-            request.transaction = transaction_id
-        else:
-            request.mode = _datastore_pb2.CommitRequest.NON_TRANSACTIONAL
-
-        response = self._rpc(project, 'commit', request,
-                             _datastore_pb2.CommitResponse)
-        return _parse_commit_response(response)
-
-
-    def rollback(self, project, transaction_id):
- """Roll back the connection's existing transaction.
-
- Maps the ``DatastoreService.Rollback`` protobuf RPC.
-
- :type project: string
- :param project: The project to which the transaction belongs.
-
- :type transaction_id: string
- :param transaction_id: The transaction ID returned from
- :meth:`begin_transaction`.
- """
-        request = _datastore_pb2.RollbackRequest()
-        request.transaction = transaction_id
-        # Nothing to do with this response, so just execute the method.
-        self._rpc(project, 'rollback', request,
-                  _datastore_pb2.RollbackResponse)
-
-
-    def allocate_ids(self, project, key_pbs):
- """Obtain backend-generated IDs for a set of keys.
-
- Maps the ``DatastoreService.AllocateIds`` protobuf RPC.
-
- :type project: string
- :param project: The project to which the transaction belongs.
-
- :type key_pbs: list of
- :class:`gcloud.datastore._generated.entity_pb2.Key`
- :param key_pbs: The keys for which the backend should allocate IDs.
-
- :rtype: list of :class:`gcloud.datastore._generated.entity_pb2.Key`
- :returns: An equal number of keys, with IDs filled in by the backend.
- """
-        request = _datastore_pb2.AllocateIdsRequest()
-        _add_keys_to_request(request.keys, key_pbs)
-        response = self._rpc(project, 'allocateIds', request,
-                             _datastore_pb2.AllocateIdsResponse)
-        return list(response.keys)
-
-
-def _set_read_options(request, eventual, transaction_id):
- """Validate rules for read options, and assign to the request.
-
- Helper method for ``lookup()`` and ``run_query()``.
-
- :raises: :class:`ValueError` if ``eventual`` is ``True`` and the
- ``transaction_id`` is not ``None``.
- """
-    if eventual and (transaction_id is not None):
-        raise ValueError('eventual must be False when in a transaction')
-
-    opts = request.read_options
-    if eventual:
-        opts.read_consistency = _datastore_pb2.ReadOptions.EVENTUAL
-    elif transaction_id:
-        opts.transaction = transaction_id
-
-
-def _add_keys_to_request(request_field_pb, key_pbs):
- """Add protobuf keys to a request object.
-
- :type request_field_pb: `RepeatedCompositeFieldContainer`
- :param request_field_pb: A repeated proto field that contains keys.
-
- :type key_pbs: list of :class:`gcloud.datastore._generated.entity_pb2.Key`
- :param key_pbs: The keys to add to a request.
- """
-    for key_pb in key_pbs:
-        request_field_pb.add().CopyFrom(key_pb)
-
-
-def _parse_commit_response(commit_response_pb):
- """Extract response data from a commit response.
-
- :type commit_response_pb: :class:`._generated.datastore_pb2.CommitResponse`
- :param commit_response_pb: The protobuf response from a commit request.
-
- :rtype: tuple
- :returns: The pair of the number of index updates and a list of
- :class:`._generated.entity_pb2.Key` for each incomplete key
- that was completed in the commit.
- """
-    mut_results = commit_response_pb.mutation_results
-    index_updates = commit_response_pb.index_updates
-    completed_keys = [mut_result.key for mut_result in mut_results
-                      if mut_result.HasField('key')]  # Message field (Key)
-    return index_updates, completed_keys
-
-# Copyright 2015 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Create / interact with gcloud dns connections."""
-
-from gcloud import connection as base_connection
-
-
-
-class Connection(base_connection.JSONConnection):
- """A connection to Google Cloud DNS via the JSON REST API."""
-
-    API_BASE_URL = 'https://www.googleapis.com'
-    """The base of the API call URL."""
-
-    API_VERSION = 'v1'
-    """The version of the API, used in building the API call's URL."""
-
-    API_URL_TEMPLATE = '{api_base_url}/dns/{api_version}{path}'
-    """A template for the URL of a particular API call."""
-
-    SCOPE = ('https://www.googleapis.com/auth/ndev.clouddns.readwrite',)
-    """The scopes required for authenticating as a Cloud DNS consumer."""
-# Copyright 2015 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Define API ResourceRecordSets."""
-
-
-
-class ResourceRecordSet(object):
- """ResourceRecordSets are DNS resource records.
-
- RRS are owned by a :class:`gcloud.dns.zone.ManagedZone` instance.
-
- See:
- https://cloud.google.com/dns/api/v1/resourceRecordSets
-
- :type name: string
- :param name: the name of the record set
-
- :type record_type: string
- :param record_type: the RR type of the zone
-
- :type ttl: integer
- :param ttl: TTL (in seconds) for caching the record sets
-
- :type rrdatas: list of string
- :param rrdatas: one or more lines containing the resource data
-
- :type zone: :class:`gcloud.dns.zone.ManagedZone`
- :param zone: A zone which holds one or more record sets.
- """
-
-    def __init__(self, name, record_type, ttl, rrdatas, zone):
-        self.name = name
-        self.record_type = record_type
-        self.ttl = ttl
-        self.rrdatas = rrdatas
-        self.zone = zone
-
-    @classmethod
-    def from_api_repr(cls, resource, zone):
- """Factory: construct a record set given its API representation.
-
- :type resource: dict
- :param resource: record sets representation returned from the API
-
- :type zone: :class:`gcloud.dns.zone.ManagedZone`
- :param zone: A zone which holds one or more record sets.
-
- :rtype: :class:`gcloud.dns.zone.ResourceRecordSet`
- :returns: RRS parsed from ``resource``.
- """
-        name = resource['name']
-        record_type = resource['type']
-        ttl = int(resource['ttl'])
-        rrdatas = resource['rrdatas']
-        return cls(name, record_type, ttl, rrdatas, zone=zone)
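-
-    # A minimal sketch of the factory above (the resource values and
-    # ``zone`` are hypothetical):
-    #
-    #     >>> resource = {'name': 'www.example.com.', 'type': 'A',
-    #     ...             'ttl': '3600', 'rrdatas': ['12.34.56.78']}
-    #     >>> rrs = ResourceRecordSet.from_api_repr(resource, zone)
-    #     >>> rrs.ttl
-    #     3600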
-# Copyright 2016 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Client for interacting with the `Google Stackdriver Monitoring API (V3)`_.
-
-Example::
-
- >>> from gcloud import monitoring
- >>> client = monitoring.Client()
- >>> query = client.query(minutes=5)
- >>> print(query.as_dataframe()) # Requires pandas.
-
-At present, the client supports querying of time series, metric descriptors,
-and monitored resource descriptors.
-
-.. _Google Stackdriver Monitoring API (V3):
- https://cloud.google.com/monitoring/api/v3/
-"""
-
-from gcloud.client import JSONClient
-from gcloud.monitoring.connection import Connection
-from gcloud.monitoring.metric import MetricDescriptor
-from gcloud.monitoring.metric import MetricKind
-from gcloud.monitoring.metric import ValueType
-from gcloud.monitoring.query import Query
-from gcloud.monitoring.resource import ResourceDescriptor
-
-
-
-class Client(JSONClient):
- """Client to bundle configuration needed for API requests.
-
- :type project: string
- :param project: The target project. If not passed, falls back to the
- default inferred from the environment.
-
- :type credentials: :class:`oauth2client.client.OAuth2Credentials` or
- :class:`NoneType`
- :param credentials: The OAuth2 Credentials to use for the connection
- owned by this client. If not passed (and if no ``http``
- object is passed), falls back to the default inferred
- from the environment.
-
- :type http: :class:`httplib2.Http` or class that defines ``request()``
- :param http: An optional HTTP object to make requests. If not passed, an
- ``http`` object is created that is bound to the
- ``credentials`` for the current object.
- """
-
-    _connection_class = Connection
-
-
-    def query(self,
-              metric_type=Query.DEFAULT_METRIC_TYPE,
-              end_time=None,
-              days=0, hours=0, minutes=0):
- """Construct a query object for retrieving metric data.
-
- Example::
-
- >>> query = client.query(minutes=5)
- >>> print(query.as_dataframe()) # Requires pandas.
-
- :type metric_type: string
- :param metric_type: The metric type name. The default value is
- :data:`Query.DEFAULT_METRIC_TYPE
- <gcloud.monitoring.query.Query.DEFAULT_METRIC_TYPE>`,
- but please note that this default value is provided only for
- demonstration purposes and is subject to change. See the
- `supported metrics`_.
-
- :type end_time: :class:`datetime.datetime` or None
- :param end_time: The end time (inclusive) of the time interval
- for which results should be returned, as a datetime object.
- The default is the start of the current minute.
-
- The start time (exclusive) is determined by combining the
- values of ``days``, ``hours``, and ``minutes``, and
- subtracting the resulting duration from the end time.
-
- It is also allowed to omit the end time and duration here,
- in which case
- :meth:`~gcloud.monitoring.query.Query.select_interval`
- must be called before the query is executed.
-
- :type days: integer
- :param days: The number of days in the time interval.
-
- :type hours: integer
- :param hours: The number of hours in the time interval.
-
- :type minutes: integer
- :param minutes: The number of minutes in the time interval.
-
- :rtype: :class:`~gcloud.monitoring.query.Query`
- :returns: The query object.
-
- :raises: :exc:`ValueError` if ``end_time`` is specified but
- ``days``, ``hours``, and ``minutes`` are all zero.
- If you really want to specify a point in time, use
- :meth:`~gcloud.monitoring.query.Query.select_interval`.
-
- .. _supported metrics: https://cloud.google.com/monitoring/api/metrics
- """
-        return Query(self, metric_type,
-                     end_time=end_time,
-                     days=days, hours=hours, minutes=minutes)
-
-
-    def metric_descriptor(self, type_,
-                          metric_kind=MetricKind.METRIC_KIND_UNSPECIFIED,
-                          value_type=ValueType.VALUE_TYPE_UNSPECIFIED,
-                          labels=(), unit='', description='',
-                          display_name=''):
- """Construct a metric descriptor object.
-
- Metric descriptors specify the schema for a particular metric type.
-
- This factory method is used most often in conjunction with the metric
- descriptor :meth:`~gcloud.monitoring.metric.MetricDescriptor.create`
- method to define custom metrics::
-
- >>> descriptor = client.metric_descriptor(
- ... 'custom.googleapis.com/my_metric',
- ... metric_kind=MetricKind.GAUGE,
- ... value_type=ValueType.DOUBLE,
- ... description='This is a simple example of a custom metric.')
- >>> descriptor.create()
-
- Here is an example where the custom metric is parameterized by a
- metric label::
-
- >>> label = LabelDescriptor('response_code', LabelValueType.INT64,
- ... description='HTTP status code')
- >>> descriptor = client.metric_descriptor(
- ... 'custom.googleapis.com/my_app/response_count',
- ... metric_kind=MetricKind.CUMULATIVE,
- ... value_type=ValueType.INT64,
- ... labels=[label],
- ... description='Cumulative count of HTTP responses.')
- >>> descriptor.create()
-
- :type type_: string
- :param type_:
- The metric type including a DNS name prefix. For example:
- ``"custom.googleapis.com/my_metric"``
-
- :type metric_kind: string
- :param metric_kind:
- The kind of measurement. It must be one of
- :data:`MetricKind.GAUGE`, :data:`MetricKind.DELTA`,
- or :data:`MetricKind.CUMULATIVE`.
- See :class:`~gcloud.monitoring.metric.MetricKind`.
-
- :type value_type: string
- :param value_type:
- The value type of the metric. It must be one of
- :data:`ValueType.BOOL`, :data:`ValueType.INT64`,
- :data:`ValueType.DOUBLE`, :data:`ValueType.STRING`,
- or :data:`ValueType.DISTRIBUTION`.
- See :class:`ValueType`.
-
- :type labels: list of :class:`~gcloud.monitoring.label.LabelDescriptor`
- :param labels:
- A sequence of zero or more label descriptors specifying the labels
- used to identify a specific instance of this metric.
-
- :type unit: string
- :param unit: An optional unit in which the metric value is reported.
-
- :type description: string
- :param description: An optional detailed description of the metric.
-
- :type display_name: string
- :param display_name: An optional concise name for the metric.
-
- :rtype: :class:`MetricDescriptor`
- :returns: The metric descriptor created with the passed-in arguments.
- """
-        return MetricDescriptor(
-            self, type_,
-            metric_kind=metric_kind,
-            value_type=value_type,
-            labels=labels,
-            unit=unit,
-            description=description,
-            display_name=display_name,
-        )
-
-
-    def fetch_metric_descriptor(self, metric_type):
- """Look up a metric descriptor by type.
-
- Example::
-
- >>> METRIC = 'compute.googleapis.com/instance/cpu/utilization'
- >>> print(client.fetch_metric_descriptor(METRIC))
-
- :type metric_type: string
- :param metric_type: The metric type name.
-
- :rtype: :class:`~gcloud.monitoring.metric.MetricDescriptor`
- :returns: The metric descriptor instance.
-
- :raises: :class:`gcloud.exceptions.NotFound` if the metric descriptor
- is not found.
- """
-        return MetricDescriptor._fetch(self, metric_type)
-
-
-    def list_metric_descriptors(self, filter_string=None, type_prefix=None):
- """List all metric descriptors for the project.
-
- Examples::
-
- >>> for descriptor in client.list_metric_descriptors():
- ... print(descriptor.type)
-
- >>> for descriptor in client.list_metric_descriptors(
- ... type_prefix='custom.'):
- ... print(descriptor.type)
-
- :type filter_string: string or None
- :param filter_string:
- An optional filter expression describing the metric descriptors
- to be returned. See the `filter documentation`_.
-
- :type type_prefix: string or None
- :param type_prefix: An optional prefix constraining the selected
- metric types. This adds ``metric.type = starts_with("<prefix>")``
- to the filter.
-
- :rtype: list of :class:`~gcloud.monitoring.metric.MetricDescriptor`
- :returns: A list of metric descriptor instances.
-
- .. _filter documentation:
- https://cloud.google.com/monitoring/api/v3/filters
- """
-        return MetricDescriptor._list(self, filter_string,
-                                      type_prefix=type_prefix)
-
-
-    def fetch_resource_descriptor(self, resource_type):
- """Look up a monitored resource descriptor by type.
-
- Example::
-
- >>> print(client.fetch_resource_descriptor('gce_instance'))
-
- :type resource_type: string
- :param resource_type: The resource type name.
-
- :rtype: :class:`~gcloud.monitoring.resource.ResourceDescriptor`
- :returns: The resource descriptor instance.
-
- :raises: :class:`gcloud.exceptions.NotFound` if the resource descriptor
- is not found.
- """
-        return ResourceDescriptor._fetch(self, resource_type)
-
-
-    def list_resource_descriptors(self, filter_string=None):
- """List all monitored resource descriptors for the project.
-
- Example::
-
- >>> for descriptor in client.list_resource_descriptors():
- ... print(descriptor.type)
-
- :type filter_string: string or None
- :param filter_string:
- An optional filter expression describing the resource descriptors
- to be returned. See the `filter documentation`_.
-
- :rtype: list of :class:`~gcloud.monitoring.resource.ResourceDescriptor`
- :returns: A list of resource descriptor instances.
-
- .. _filter documentation:
- https://cloud.google.com/monitoring/api/v3/filters
- """
-        return ResourceDescriptor._list(self, filter_string)
-# Copyright 2016 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Create / interact with Stackdriver Monitoring connections."""
-
-from gcloud import connection as base_connection
-
-
-
-class Connection(base_connection.JSONConnection):
- """A connection to Google Stackdriver Monitoring via the JSON REST API.
-
- :type credentials: :class:`oauth2client.client.OAuth2Credentials`
- :param credentials: (Optional) The OAuth2 Credentials to use for this
- connection.
-
- :type http: :class:`httplib2.Http` or class that defines ``request()``
- :param http: (Optional) HTTP object to make requests.
-
- :type api_base_url: string
- :param api_base_url: The base of the API call URL. Defaults to the value
- :attr:`Connection.API_BASE_URL`.
- """
-
-    API_BASE_URL = 'https://monitoring.googleapis.com'
-    """The base of the API call URL."""
-
-    API_VERSION = 'v3'
-    """The version of the API, used in building the API call's URL."""
-
-    API_URL_TEMPLATE = '{api_base_url}/{api_version}{path}'
-    """A template for the URL of a particular API call."""
-
-    SCOPE = ('https://www.googleapis.com/auth/monitoring.read',
-             'https://www.googleapis.com/auth/monitoring',
-             'https://www.googleapis.com/auth/cloud-platform')
-    """The scopes required for authenticating as a Monitoring consumer."""
-# Copyright 2015 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Create / interact with gcloud.resource_manager connections."""
-
-
-from gcloud import connection as base_connection
-
-
-
-class Connection(base_connection.JSONConnection):
- """A connection to Google Cloud Resource Manager via the JSON REST API.
-
- :type credentials: :class:`oauth2client.client.OAuth2Credentials`
- :param credentials: (Optional) The OAuth2 Credentials to use for this
- connection.
-
- :type http: :class:`httplib2.Http` or class that defines ``request()``.
- :param http: (Optional) HTTP object to make requests.
- """
-
- API_BASE_URL='https://cloudresourcemanager.googleapis.com'
- """The base of the API call URL."""
-
- API_VERSION='v1beta1'
- """The version of the API, used in building the API call's URL."""
-
- API_URL_TEMPLATE='{api_base_url}/{api_version}{path}'
- """A template for the URL of a particular API call."""
-
- SCOPE=('https://www.googleapis.com/auth/cloud-platform',)
- """The scopes required for authenticating as a Resouce Manager consumer."""
-# Copyright 2014 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Create / interact with gcloud storage connections."""
-
-from gcloud import connection as base_connection
-
-
-
-class Connection(base_connection.JSONConnection):
- """A connection to Google Cloud Storage via the JSON REST API.
-
- :type credentials: :class:`oauth2client.client.OAuth2Credentials`
- :param credentials: (Optional) The OAuth2 Credentials to use for this
- connection.
-
- :type http: :class:`httplib2.Http` or class that defines ``request()``.
- :param http: (Optional) HTTP object to make requests.
- """
-
- API_BASE_URL=base_connection.API_BASE_URL
- """The base of the API call URL."""
-
- API_VERSION='v1'
- """The version of the API, used in building the API call's URL."""
-
- API_URL_TEMPLATE='{api_base_url}/storage/{api_version}{path}'
- """A template for the URL of a particular API call."""
-
- SCOPE=('https://www.googleapis.com/auth/devstorage.full_control',
- 'https://www.googleapis.com/auth/devstorage.read_only',
- 'https://www.googleapis.com/auth/devstorage.read_write')
- """The scopes required for authenticating as a Cloud Storage consumer."""
-# Copyright 2016 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Create / interact with Google Cloud Translate connections."""
-
-from gcloud import connection as base_connection
-
-
-
-class Connection(base_connection.JSONConnection):
- """A connection to Google Cloud Translate via the JSON REST API."""
-
- API_BASE_URL='https://www.googleapis.com'
- """The base of the API call URL."""
-
- API_VERSION='v2'
- """The version of the API, used in building the API call's URL."""
-
- API_URL_TEMPLATE='{api_base_url}/language/translate/{api_version}{path}'
- """A template for the URL of a particular API call."""
# Copyright 2015 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -111,9 +266,9 @@
Source code for gcloud.bigquery._helpers
# See the License for the specific language governing permissions and
# limitations under the License.
-"""Shared elper functions for BigQuery API classes."""
+"""Shared helper functions for BigQuery API classes."""
-from gcloud._helpers import _datetime_from_microseconds
+from google.cloud._helpers import _datetime_from_microseconds


def _not_null(value, field):
@@ -248,13 +403,13 @@
class _EnumProperty(_ConfigurationProperty):
-    """Psedo-enumeration class.
+    """Pseudo-enumeration class.

    Subclasses must define ``ALLOWED`` as a class-level constant: it must
    be a sequence of strings.

    :type name: string
-    :param name: name of the property
+    :param name: name of the property.
    """

    def _validate(self, value):
        """Check that ``value`` is one of the allowed values.
@@ -265,123 +420,62 @@
            raise ValueError('Pass one of: %s' % ', '.join(self.ALLOWED))
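A sketch of the pattern (not part of the library source): concrete subclasses, such as the disposition and priority enums defined later in gcloud.bigquery.job, only need to supply the allowed constants and ``ALLOWED``; ``_validate`` then rejects any other value.

    >>> class Priority(_EnumProperty):  # hypothetical subclass
    ...     INTERACTIVE = 'INTERACTIVE'
    ...     BATCH = 'BATCH'
    ...     ALLOWED = (INTERACTIVE, BATCH)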
# Copyright 2015 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -114,17 +269,41 @@
Source code for gcloud.bigquery.client
"""Client for interacting with the Google BigQuery API."""
-from gcloud.client import JSONClient
-from gcloud.bigquery.connection import Connection
-from gcloud.bigquery.dataset import Dataset
-from gcloud.bigquery.job import CopyJob
-from gcloud.bigquery.job import ExtractTableToStorageJob
-from gcloud.bigquery.job import LoadTableFromStorageJob
-from gcloud.bigquery.job import QueryJob
-from gcloud.bigquery.query import QueryResults
+from google.cloud.client import JSONClient
+from google.cloud.bigquery.connection import Connection
+from google.cloud.bigquery.dataset import Dataset
+from google.cloud.bigquery.job import CopyJob
+from google.cloud.bigquery.job import ExtractTableToStorageJob
+from google.cloud.bigquery.job import LoadTableFromStorageJob
+from google.cloud.bigquery.job import QueryJob
+from google.cloud.bigquery.query import QueryResults
+
+
+
+class Project(object):
+ """Wrapper for resource describing a BigQuery project.
+
+ :type project_id: str
+ :param project_id: Opaque ID of the project
+ :type numeric_id: int
+ :param numeric_id: Numeric ID of the project
-
-class Client(JSONClient):
+ :type friendly_name: str
+ :param friendly_name: Display name of the project
+ """
+    def __init__(self, project_id, numeric_id, friendly_name):
+        self.project_id = project_id
+        self.numeric_id = numeric_id
+        self.friendly_name = friendly_name
+
+    @classmethod
+    def from_api_repr(cls, resource):
+        """Factory: construct an instance from a resource dict."""
+        return cls(
+            resource['id'], resource['numericId'], resource['friendlyName'])
+
+
+
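A usage sketch for the factory above (not part of the library source; field values are placeholders matching the shape of a ``projects.list`` response entry):

    >>> resource = {'id': 'my-project',
    ...             'numericId': 1234567890,
    ...             'friendlyName': 'My Project'}
    >>> project = Project.from_api_repr(resource)
    >>> project.project_id, project.numeric_id, project.friendly_name
    ('my-project', 1234567890, 'My Project')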
[docs]classClient(JSONClient):"""Client to bundle configuration needed for API requests. :type project: str
@@ -147,7 +326,43 @@
+    def list_projects(self, max_results=None, page_token=None):
+ """List projects for the project associated with this client.
+
+ See:
+ https://cloud.google.com/bigquery/docs/reference/v2/projects/list
+
+ :type max_results: int
+    :param max_results: maximum number of projects to return. If not
+ passed, defaults to a value set by the API.
+
+ :type page_token: str
+ :param page_token: opaque marker for the next "page" of projects. If
+ not passed, the API will return the first page of
+ projects.
+
+ :rtype: tuple, (list, str)
+ :returns: list of :class:`gcloud.bigquery.client.Project`, plus a
+ "next page token" string: if the token is not None,
+ indicates that more projects can be retrieved with another
+ call (pass that value as ``page_token``).
+ """
+        params = {}
+
+        if max_results is not None:
+            params['maxResults'] = max_results
+
+        if page_token is not None:
+            params['pageToken'] = page_token
+
+        path = '/projects'
+        resp = self.connection.api_request(method='GET', path=path,
+                                           query_params=params)
+        projects = [Project.from_api_repr(resource)
+                    for resource in resp.get('projects', ())]
+        return projects, resp.get('nextPageToken')
+
+
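A pagination sketch (not part of the library source; assumes a configured ``client``): chase the "next page token" until the API stops returning one.

    >>> token = None
    >>> while True:
    ...     projects, token = client.list_projects(page_token=token)
    ...     for project in projects:
    ...         print(project.project_id)
    ...     if token is None:
    ...         break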
    def list_datasets(self, include_all=False, max_results=None,
                      page_token=None):
        """List datasets for the project associated with this client.
@@ -167,8 +382,8 @@
                       datasets.

    :rtype: tuple, (list, str)
-    :returns: list of :class:`gcloud.bigquery.dataset.Dataset`, plus a
-              "next page token" string: if the token is not None,
+    :returns: list of :class:`~google.cloud.bigquery.dataset.Dataset`,
+              plus a "next page token" string: if the token is not None,
              indicates that more datasets can be retrieved with another
              call (pass that value as ``page_token``).
    """
@@ -190,29 +405,29 @@
    def dataset(self, dataset_name):
        """Construct a dataset bound to this client.

        :type dataset_name: str
        :param dataset_name: Name of the dataset.

-        :rtype: :class:`gcloud.bigquery.dataset.Dataset`
+        :rtype: :class:`google.cloud.bigquery.dataset.Dataset`
        :returns: a new ``Dataset`` instance
        """
        return Dataset(dataset_name, client=self)
    def list_jobs(self, max_results=None, page_token=None, all_users=None,
                  state_filter=None):
        """List jobs for the project associated with this client.
@@ -281,7 +496,7 @@
    def load_table_from_storage(self, job_name, destination, *source_uris):
        """Construct a job for loading data into a table from Cloud Storage.

        See:
@@ -290,20 +505,20 @@
        :type job_name: str
        :param job_name: Name of the job.

-        :type destination: :class:`gcloud.bigquery.table.Table`
+        :type destination: :class:`google.cloud.bigquery.table.Table`
        :param destination: Table into which data is to be loaded.

        :type source_uris: sequence of string
        :param source_uris: URIs of data files to be loaded; in format
                            ``gs://<bucket_name>/<object_name_or_glob>``.

-        :rtype: :class:`gcloud.bigquery.job.LoadTableFromStorageJob`
+        :rtype: :class:`google.cloud.bigquery.job.LoadTableFromStorageJob`
        :returns: a new ``LoadTableFromStorageJob`` instance
        """
        return LoadTableFromStorageJob(job_name, destination, source_uris,
                                       client=self)
    def copy_table(self, job_name, destination, *sources):
        """Construct a job for copying one or more tables into another table.

        See:
@@ -312,18 +527,18 @@
        :type job_name: str
        :param job_name: Name of the job.

-        :type destination: :class:`gcloud.bigquery.table.Table`
+        :type destination: :class:`google.cloud.bigquery.table.Table`
        :param destination: Table into which data is to be copied.

-        :type sources: sequence of :class:`gcloud.bigquery.table.Table`
+        :type sources: sequence of :class:`google.cloud.bigquery.table.Table`
        :param sources: tables to be copied.

-        :rtype: :class:`gcloud.bigquery.job.CopyJob`
+        :rtype: :class:`google.cloud.bigquery.job.CopyJob`
        :returns: a new ``CopyJob`` instance
        """
        return CopyJob(job_name, destination, sources, client=self)
    def extract_table_to_storage(self, job_name, source, *destination_uris):
        """Construct a job for extracting a table into Cloud Storage files.

        See:
@@ -332,7 +547,7 @@
        :type job_name: str
        :param job_name: Name of the job.

-        :type source: :class:`gcloud.bigquery.table.Table`
+        :type source: :class:`google.cloud.bigquery.table.Table`
        :param source: table to be extracted.

        :type destination_uris: sequence of string
@@ -340,13 +555,13 @@
                                 table data is to be extracted; in format
                                 ``gs://<bucket_name>/<object_name_or_glob>``.

-        :rtype: :class:`gcloud.bigquery.job.ExtractTableToStorageJob`
+        :rtype: :class:`google.cloud.bigquery.job.ExtractTableToStorageJob`
        :returns: a new ``ExtractTableToStorageJob`` instance
        """
        return ExtractTableToStorageJob(job_name, source, destination_uris,
                                        client=self)
+# Copyright 2015 Google Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Create / interact with Google Cloud BigQuery connections."""
+
+from google.cloud import connection as base_connection
+
+
+
+class Connection(base_connection.JSONConnection):
+ """A connection to Google Cloud BigQuery via the JSON REST API."""
+
+ API_BASE_URL='https://www.googleapis.com'
+ """The base of the API call URL."""
+
+ API_VERSION='v2'
+ """The version of the API, used in building the API call's URL."""
+
+ API_URL_TEMPLATE='{api_base_url}/bigquery/{api_version}{path}'
+ """A template for the URL of a particular API call."""
+
+ SCOPE=('https://www.googleapis.com/auth/bigquery',
+ 'https://www.googleapis.com/auth/cloud-platform')
+ """The scopes required for authenticating as a Cloud BigQuery consumer."""
class AccessGrant(object):
    """Represent grant of an access role to an entity.

    Every entry in the access list will have exactly one of
@@ -184,7 +339,7 @@
class Dataset(object):
    """Datasets are containers for tables.

    See:
@@ -193,7 +348,7 @@
Source code for gcloud.bigquery.dataset
    :type name: string
    :param name: the name of the dataset

-    :type client: :class:`gcloud.bigquery.client.Client`
+    :type client: :class:`google.cloud.bigquery.client.Client`
    :param client: A client which holds credentials and project
                   configuration for the dataset (which requires a project).
@@ -391,17 +546,17 @@
    def from_api_repr(cls, resource, client):
        """Factory: construct a dataset given its API representation

        :type resource: dict
        :param resource: dataset resource representation returned from the API

-        :type client: :class:`gcloud.bigquery.client.Client`
+        :type client: :class:`google.cloud.bigquery.client.Client`
        :param client: Client which holds credentials and project
                       configuration for the dataset.

-        :rtype: :class:`gcloud.bigquery.dataset.Dataset`
+        :rtype: :class:`google.cloud.bigquery.dataset.Dataset`
        :returns: Dataset parsed from ``resource``.
        """
        if ('datasetReference' not in resource or
@@ -416,11 +571,12 @@
    def _require_client(self, client):
        """Check client or verify over-ride.

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.

-        :rtype: :class:`gcloud.bigquery.client.Client`
+        :rtype: :class:`google.cloud.bigquery.client.Client`
        :returns: The client passed in or the currently bound client.
        """
        if client is None:
@@ -435,10 +591,10 @@
                     type is ``view``.

    :type access: list of mappings
-    :param access: each mapping represents a single access grant
+    :param access: each mapping represents a single access grant.

    :rtype: list of :class:`AccessGrant`
-    :returns: a list of parsed grants
+    :returns: a list of parsed grants.
    :raises: :class:`ValueError` if a grant in ``access`` has more keys
             than ``role`` and one additional key.
    """
@@ -457,7 +613,7 @@
        """Update properties from resource in body of ``api_response``

        :type api_response: httplib2.Response
-        :param api_response: response returned from an API call
+        :param api_response: response returned from an API call.
        """
        self._properties.clear()
        cleaned = api_response.copy()
@@ -506,13 +662,14 @@
        return resource

-    def create(self, client=None):
-        """API call: create the dataset via a PUT request
+    def create(self, client=None):
+        """API call: create the dataset via a PUT request.

        See:
        https://cloud.google.com/bigquery/docs/reference/v2/tables/insert

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
        """
@@ -522,13 +679,14 @@
    def exists(self, client=None):
        """API call: test for the existence of the dataset via a GET request

        See
        https://cloud.google.com/bigquery/docs/reference/v2/datasets/get

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
@@ -545,13 +703,14 @@
        else:
            return True

-    def reload(self, client=None):
-        """API call: refresh dataset properties via a GET request
+    def reload(self, client=None):
+        """API call: refresh dataset properties via a GET request.

        See
        https://cloud.google.com/bigquery/docs/reference/v2/datasets/get

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
        """
@@ -561,13 +720,14 @@
-    def patch(self, client=None, **kw):
-        """API call: update individual dataset properties via a PATCH request
+    def patch(self, client=None, **kw):
+        """API call: update individual dataset properties via a PATCH request.

        See
        https://cloud.google.com/bigquery/docs/reference/v2/datasets/patch

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
@@ -599,13 +759,14 @@
-    def update(self, client=None):
-        """API call: update dataset properties via a PUT request
+    def update(self, client=None):
+        """API call: update dataset properties via a PUT request.

        See
        https://cloud.google.com/bigquery/docs/reference/v2/datasets/update

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
        """
@@ -614,20 +775,21 @@
-    def delete(self, client=None):
-        """API call: delete the dataset via a DELETE request
+    def delete(self, client=None):
+        """API call: delete the dataset via a DELETE request.

        See:
        https://cloud.google.com/bigquery/docs/reference/v2/tables/delete

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
        """
        client = self._require_client(client)
        client.connection.api_request(method='DELETE', path=self.path)

    def list_tables(self, max_results=None, page_token=None):
        """List tables for the project associated with this client.

        See:
@@ -643,7 +805,7 @@
                       datasets.

    :rtype: tuple, (list, str)
-    :returns: list of :class:`gcloud.bigquery.table.Table`, plus a
+    :returns: list of :class:`google.cloud.bigquery.table.Table`, plus a
              "next page token" string: if not ``None``, indicates that more
              tables can be retrieved with another call (pass that value as
              ``page_token``).
@@ -664,138 +826,77 @@
    def table(self, name, schema=()):
        """Construct a table bound to this dataset.

        :type name: string
        :param name: Name of the table.

-        :type schema: list of :class:`gcloud.bigquery.table.SchemaField`
+        :type schema: list of :class:`google.cloud.bigquery.table.SchemaField`
        :param schema: The table's schema

-        :rtype: :class:`gcloud.bigquery.table.Table`
+        :rtype: :class:`google.cloud.bigquery.table.Table`
        :returns: a new ``Table`` instance
        """
        return Table(name, dataset=self, schema=schema)
class CreateDisposition(_EnumProperty):
    """Pseudo-enum for ``create_disposition`` properties."""
    CREATE_IF_NEEDED = 'CREATE_IF_NEEDED'
    CREATE_NEVER = 'CREATE_NEVER'
    ALLOWED = (CREATE_IF_NEEDED, CREATE_NEVER)

class QueryPriority(_EnumProperty):
    """Pseudo-enum for ``QueryJob.priority`` property."""
    INTERACTIVE = 'INTERACTIVE'
    BATCH = 'BATCH'
    ALLOWED = (INTERACTIVE, BATCH)

class WriteDisposition(_EnumProperty):
    """Pseudo-enum for ``write_disposition`` properties."""
    WRITE_APPEND = 'WRITE_APPEND'
    WRITE_TRUNCATE = 'WRITE_TRUNCATE'
@@ -181,7 +390,7 @@
Source code for gcloud.bigquery.job
class _BaseJob(object):
    """Base class for jobs.

-    :type client: :class:`gcloud.bigquery.client.Client`
+    :type client: :class:`google.cloud.bigquery.client.Client`
    :param client: A client which holds credentials and project configuration
                   for the dataset (which requires a project).
    """
@@ -201,11 +410,12 @@
    def _require_client(self, client):
        """Check client or verify over-ride.

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.

-        :rtype: :class:`gcloud.bigquery.client.Client`
+        :rtype: :class:`google.cloud.bigquery.client.Client`
        :returns: The client passed in or the currently bound client.
        """
        if client is None:
@@ -219,7 +429,7 @@
    :type name: string
    :param name: the name of the job

-    :type client: :class:`gcloud.bigquery.client.Client`
+    :type client: :class:`google.cloud.bigquery.client.Client`
    :param client: A client which holds credentials and project configuration
                   for the dataset (which requires a project).
    """
@@ -399,7 +609,8 @@
        See:
        https://cloud.google.com/bigquery/docs/reference/v2/jobs/insert

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
        """
@@ -415,7 +626,8 @@
        See
        https://cloud.google.com/bigquery/docs/reference/v2/jobs/get

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
@@ -438,7 +650,8 @@
        See
        https://cloud.google.com/bigquery/docs/reference/v2/jobs/get

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
        """
@@ -454,7 +667,8 @@
        See
        https://cloud.google.com/bigquery/docs/reference/v2/jobs/cancel

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
        """
@@ -483,24 +697,24 @@
class LoadTableFromStorageJob(_AsyncJob):
    """Asynchronous job for loading data into a table from Cloud Storage.

    :type name: string
    :param name: the name of the job

-    :type destination: :class:`gcloud.bigquery.table.Table`
+    :type destination: :class:`google.cloud.bigquery.table.Table`
    :param destination: Table into which data is to be loaded.

    :type source_uris: sequence of string
    :param source_uris: URIs of one or more data files to be loaded, in
                        format ``gs://<bucket_name>/<object_name_or_glob>``.

-    :type client: :class:`gcloud.bigquery.client.Client`
+    :type client: :class:`google.cloud.bigquery.client.Client`
    :param client: A client which holds credentials and project configuration
                   for the dataset (which requires a project).

-    :type schema: list of :class:`gcloud.bigquery.table.SchemaField`
+    :type schema: list of :class:`google.cloud.bigquery.table.SchemaField`
    :param schema: The job's schema
    """
@@ -695,7 +909,7 @@
class CopyJob(_AsyncJob):
    """Asynchronous job: copy data into a table from other tables.

    :type name: string
    :param name: the name of the job

-    :type destination: :class:`gcloud.bigquery.table.Table`
+    :type destination: :class:`google.cloud.bigquery.table.Table`
    :param destination: Table into which data is to be copied.

-    :type sources: list of :class:`gcloud.bigquery.table.Table`
+    :type sources: list of :class:`google.cloud.bigquery.table.Table`
    :param sources: tables from which data is to be copied.

-    :type client: :class:`gcloud.bigquery.client.Client`
+    :type client: :class:`google.cloud.bigquery.client.Client`
    :param client: A client which holds credentials and project configuration
                   for the dataset (which requires a project).
    """
@@ -805,7 +1019,7 @@
class ExtractTableToStorageJob(_AsyncJob):
    """Asynchronous job: extract data from a table into Cloud Storage.

    :type name: string
    :param name: the name of the job

-    :type source: :class:`gcloud.bigquery.table.Table`
+    :type source: :class:`google.cloud.bigquery.table.Table`
    :param source: Table from which data is to be extracted.

    :type destination_uris: list of string
@@ -861,7 +1075,7 @@
                             extracted data will be written, in format
                             ``gs://<bucket_name>/<object_name_or_glob>``.

-    :type client: :class:`gcloud.bigquery.client.Client`
+    :type client: :class:`google.cloud.bigquery.client.Client`
    :param client: A client which holds credentials and project configuration
                   for the dataset (which requires a project).
    """
@@ -931,7 +1145,7 @@
    def from_api_repr(cls, resource, client):
        """Factory: construct a job given its API representation

        :type resource: dict
        :param resource: dataset job representation returned from the API

-        :type client: :class:`gcloud.bigquery.client.Client`
+        :type client: :class:`google.cloud.bigquery.client.Client`
        :param client: Client which holds credentials and project
                       configuration for the dataset.

-        :rtype: :class:`gcloud.bigquery.job.RunAsyncQueryJob`
+        :rtype: :class:`google.cloud.bigquery.job.RunAsyncQueryJob`
        :returns: Job parsed from ``resource``.
        """
        name, config = cls._get_resource_config(resource)
@@ -1139,123 +1365,62 @@
class QueryResults(object):
    """Synchronous job: query tables.

    :type query: string
    :param query: SQL query string

-    :type client: :class:`gcloud.bigquery.client.Client`
+    :type client: :class:`google.cloud.bigquery.client.Client`
    :param client: A client which holds credentials and project configuration
                   for the dataset (which requires a project).
+
+    :type udf_resources: tuple
+    :param udf_resources: An iterable of
+                          :class:`google.cloud.bigquery.job.UDFResource`
+                          (empty by default)
    """
-    def __init__(self, query, client):
+
+    _UDF_KEY = 'userDefinedFunctionResources'
+
+    def __init__(self, query, client, udf_resources=()):
        self._client = client
        self._properties = {}
        self.query = query
        self._configuration = _SyncQueryConfiguration()
+        self.udf_resources = udf_resources
        self._job = None

    @property
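A construction sketch (not part of the library source; assumes a configured ``client``, the UDF URI is a placeholder, and the two-argument ``UDFResource('resourceUri', uri)`` form is an assumption based on the docstring above):

    >>> from google.cloud.bigquery.job import UDFResource
    >>> udf = UDFResource('resourceUri', 'gs://my-bucket/my-udf.js')
    >>> query = QueryResults('SELECT 1', client, udf_resources=[udf])
    >>> query.run()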
@@ -165,11 +331,12 @@
Source code for gcloud.bigquery.query
    def _require_client(self, client):
        """Check client or verify over-ride.

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.

-        :rtype: :class:`gcloud.bigquery.client.Client`
+        :rtype: :class:`google.cloud.bigquery.client.Client`
        :returns: The client passed in or the currently bound client.
        """
        if client is None:
@@ -232,7 +399,7 @@
    def job(self):
        """Job instance used to run the query.

-        :rtype: :class:`gcloud.bigquery.job.QueryJob`, or ``NoneType``
+        :rtype: :class:`google.cloud.bigquery.job.QueryJob`, or ``NoneType``
        :returns: Job instance used to run the query (None until
                  ``jobReference`` property is set by the server).
        """
@@ -257,7 +424,7 @@
    @property
    def total_rows(self):
-        """Total number of rows returned by the query
+        """Total number of rows returned by the query.

        See:
        https://cloud.google.com/bigquery/docs/reference/v2/jobs/query#totalRows
@@ -269,7 +436,7 @@
    @property
    def total_bytes_processed(self):
-        """Total number of bytes processed by the query
+        """Total number of bytes processed by the query.

        See:
        https://cloud.google.com/bigquery/docs/reference/v2/jobs/query#totalBytesProcessed
@@ -328,6 +495,8 @@
    def run(self, client=None):
        """API call: run the query via a POST request

        See:
        https://cloud.google.com/bigquery/docs/reference/v2/jobs/query

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
        """
@@ -394,7 +567,7 @@
    def fetch_data(self, max_results=None, page_token=None, start_index=None,
                   timeout_ms=None, client=None):
        """API call: fetch a page of query result data via a GET request
@@ -414,7 +587,8 @@
        :param timeout_ms: timeout, in milliseconds, to wait for query to
                           complete

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
@@ -460,123 +634,62 @@
+# Copyright 2015 Google Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Scheamas for BigQuery tables / queries."""
+
+
+
+class SchemaField(object):
+ """Describe a single field within a table schema.
+
+ :type name: str
+ :param name: the name of the field.
+
+ :type field_type: str
+ :param field_type: the type of the field (one of 'STRING', 'INTEGER',
+ 'FLOAT', 'BOOLEAN', 'TIMESTAMP' or 'RECORD').
+
+ :type mode: str
+ :param mode: the type of the field (one of 'NULLABLE', 'REQUIRED',
+ or 'REPEATED').
+
+ :type description: str
+ :param description: optional description for the field.
+
+ :type fields: list of :class:`SchemaField`, or None
+ :param fields: subfields (requires ``field_type`` of 'RECORD').
+ """
+    def __init__(self, name, field_type, mode='NULLABLE', description=None,
+                 fields=None):
+        self.name = name
+        self.field_type = field_type
+        self.mode = mode
+        self.description = description
+        self.fields = fields
+
+    def __eq__(self, other):
+        return (
+            self.name == other.name and
+            self.field_type.lower() == other.field_type.lower() and
+            self.mode == other.mode and
+            self.description == other.description and
+            self.fields == other.fields)
-class SchemaField(object):
- """Describe a single field within a table schema.
-
- :type name: str
- :param name: the name of the field
-
- :type field_type: str
- :param field_type: the type of the field (one of 'STRING', 'INTEGER',
- 'FLOAT', 'BOOLEAN', 'TIMESTAMP' or 'RECORD')
-
- :type mode: str
- :param mode: the type of the field (one of 'NULLABLE', 'REQUIRED',
- or 'REPEATED')
-
- :type description: str
- :param description: optional description for the field
-
- :type fields: list of :class:`SchemaField`, or None
- :param fields: subfields (requires ``field_type`` of 'RECORD').
- """
-    def __init__(self, name, field_type, mode='NULLABLE', description=None,
-                 fields=None):
-        self.name = name
-        self.field_type = field_type
-        self.mode = mode
-        self.description = description
-        self.fields = fields
-
-    def __eq__(self, other):
-        return (
-            self.name == other.name and
-            self.field_type.lower() == other.field_type.lower() and
-            self.mode == other.mode and
-            self.description == other.description and
-            self.fields == other.fields)
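A schema-definition sketch using the class above (not part of the library source; field names are placeholders). A RECORD field carries its subfields via ``fields``:

    >>> schema = [
    ...     SchemaField('full_name', 'STRING', mode='REQUIRED'),
    ...     SchemaField('age', 'INTEGER'),
    ...     SchemaField('phones', 'RECORD', mode='REPEATED', fields=[
    ...         SchemaField('type', 'STRING'),
    ...         SchemaField('number', 'STRING'),
    ...     ]),
    ... ]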
class Table(object):
    """Tables represent a set of rows whose values correspond to a schema.

    See:
@@ -179,7 +298,7 @@
Source code for gcloud.bigquery.table
    :type name: str
    :param name: the name of the table

-    :type dataset: :class:`gcloud.bigquery.dataset.Dataset`
+    :type dataset: :class:`google.cloud.bigquery.dataset.Dataset`
    :param dataset: The dataset which contains the table.

    :type schema: list of :class:`SchemaField`
@@ -329,6 +448,59 @@
        """
        return self._properties.get('type')
+    @property
+    def partitioning_type(self):
+        """Time partitioning of the table.
+
+        :rtype: str, or ``NoneType``
+        :returns: Returns type if the table is partitioned, None otherwise.
+        """
+        return self._properties.get('timePartitioning', {}).get('type')
+
+    @partitioning_type.setter
+    def partitioning_type(self, value):
+        """Update the partitioning type of the table.
+
+        :type value: str
+        :param value: partitioning type; only "DAY" is currently supported.
+        """
+        if value not in ('DAY', None):
+            raise ValueError("value must be one of ['DAY', None]")
+
+        if value is None:
+            self._properties.pop('timePartitioning', None)
+        else:
+            time_part = self._properties.setdefault('timePartitioning', {})
+            time_part['type'] = value.upper()
+
+    @property
+    def partition_expiration(self):
+        """Expiration time in ms for a partition.
+
+        :rtype: int, or ``NoneType``
+        :returns: Returns the time in ms for partition expiration.
+        """
+        return self._properties.get('timePartitioning', {}).get('expirationMs')
+
+    @partition_expiration.setter
+    def partition_expiration(self, value):
+        """Update the expiration time in ms for a partition.
+
+        :type value: int
+        :param value: partition expiration time in ms
+        """
+        if not isinstance(value, (int, type(None))):
+            raise ValueError(
+                "must be an integer representing milliseconds or None")
+
+        if value is None:
+            if 'timePartitioning' in self._properties:
+                self._properties['timePartitioning'].pop('expirationMs')
+        else:
+            try:
+                self._properties['timePartitioning']['expirationMs'] = value
+            except KeyError:
+                self._properties['timePartitioning'] = {'type': 'DAY'}
+                self._properties['timePartitioning']['expirationMs'] = value
+
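A usage sketch for the two partitioning properties above (not part of the library source; ``table`` is assumed to be a Table instance):

    >>> table.partitioning_type = 'DAY'        # day-based partitioning
    >>> table.partition_expiration = 86400000  # drop partitions after 1 day (ms)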
    @property
    def description(self):
        """Description of the table.
@@ -447,17 +619,34 @@
        """Delete SQL query defining the table as a view."""
        self._properties.pop('view', None)
+
+    def list_partitions(self, client=None):
+ """List the partitions in a table.
+
+ :type client: :class:`~google.cloud.bigquery.client.Client` or
+ ``NoneType``
+ :param client: the client to use. If not passed, falls back to the
+ ``client`` stored on the current dataset.
+
+ :rtype: list
+ :returns: a list of time partitions
+ """
+        query = self._require_client(client).run_sync_query(
+            'SELECT partition_id from [%s.%s$__PARTITIONS_SUMMARY__]' %
+            (self.dataset_name, self.name))
+        query.run()
+        return [row[0] for row in query.rows]
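A usage sketch (not part of the library source; ``table`` is assumed to be a Table instance): the helper runs a legacy-SQL query against the table's ``$__PARTITIONS_SUMMARY__`` meta-table and returns the partition IDs.

    >>> for partition_id in table.list_partitions():
    ...     print(partition_id)  # e.g. '20160720' for a day-partitioned table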
    def from_api_repr(cls, resource, dataset):
        """Factory: construct a table given its API representation

        :type resource: dict
        :param resource: table resource representation returned from the API
- :type dataset: :class:`gcloud.bigquery.dataset.Dataset`
+        :type dataset: :class:`google.cloud.bigquery.dataset.Dataset`
        :param dataset: The dataset containing the table.

-        :rtype: :class:`gcloud.bigquery.table.Table`
+        :rtype: :class:`google.cloud.bigquery.table.Table`
        :returns: Table parsed from ``resource``.
        """
        if ('tableReference' not in resource or
@@ -472,11 +661,12 @@
    def _require_client(self, client):
        """Check client or verify over-ride.

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.

-        :rtype: :class:`gcloud.bigquery.client.Client`
+        :rtype: :class:`google.cloud.bigquery.client.Client`
        :returns: The client passed in or the currently bound client.
        """
        if client is None:
@@ -522,6 +712,9 @@
    def create(self, client=None):
        """API call: create the table via a PUT request

        See:
        https://cloud.google.com/bigquery/docs/reference/v2/tables/insert

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
        """
@@ -551,13 +745,14 @@
    def exists(self, client=None):
        """API call: test for the existence of the table via a GET request

        See
        https://cloud.google.com/bigquery/docs/reference/v2/tables/get

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
@@ -574,13 +769,14 @@
    def reload(self, client=None):
        """API call: refresh table properties via a GET request

        See
        https://cloud.google.com/bigquery/docs/reference/v2/tables/get

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
        """
@@ -590,7 +786,7 @@
        See
        https://cloud.google.com/bigquery/docs/reference/v2/tables/patch

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
@@ -663,13 +860,14 @@
    def update(self, client=None):
        """API call: update table properties via a PUT request

        See
        https://cloud.google.com/bigquery/docs/reference/v2/tables/update

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
        """
@@ -678,20 +876,21 @@
    def delete(self, client=None):
        """API call: delete the table via a DELETE request

        See:
        https://cloud.google.com/bigquery/docs/reference/v2/tables/delete

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
        """
        client = self._require_client(client)
        client.connection.api_request(method='DELETE', path=self.path)

    def fetch_data(self, max_results=None, page_token=None, client=None):
        """API call: fetch the table data via a GET request

        See:
@@ -711,7 +910,8 @@
        :type page_token: str or ``NoneType``
        :param page_token: token representing a cursor into the table's rows.

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
@@ -743,7 +943,7 @@
                                 schema of the template table.

        See:
        https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables

-        :type client: :class:`gcloud.bigquery.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.bigquery.client.Client` or
+                      ``NoneType``
        :param client: the client to use. If not passed, falls back to the
                       ``client`` stored on the current dataset.
@@ -830,7 +1031,7 @@
        :type source_format: str
        :param source_format: one of 'CSV' or 'NEWLINE_DELIMITED_JSON'.
                              job configuration option; see
-                              :meth:`gcloud.bigquery.job.LoadJob`
+                              :meth:`google.cloud.bigquery.job.LoadJob`

        :type rewind: boolean
        :param rewind: If True, seek to the beginning of the file handle before
@@ -876,51 +1077,52 @@
        :type allow_jagged_rows: boolean
        :param allow_jagged_rows: job configuration option; see
-                                 :meth:`gcloud.bigquery.job.LoadJob`
+                                 :meth:`google.cloud.bigquery.job.LoadJob`.

        :type allow_quoted_newlines: boolean
        :param allow_quoted_newlines: job configuration option; see
-                                     :meth:`gcloud.bigquery.job.LoadJob`
+                                     :meth:`google.cloud.bigquery.job.LoadJob`.

        :type create_disposition: str
        :param create_disposition: job configuration option; see
-                                  :meth:`gcloud.bigquery.job.LoadJob`
+                                  :meth:`google.cloud.bigquery.job.LoadJob`.

        :type encoding: str
        :param encoding: job configuration option; see
-                        :meth:`gcloud.bigquery.job.LoadJob`
+                        :meth:`google.cloud.bigquery.job.LoadJob`.

        :type field_delimiter: str
        :param field_delimiter: job configuration option; see
-                               :meth:`gcloud.bigquery.job.LoadJob`
+                               :meth:`google.cloud.bigquery.job.LoadJob`.

        :type ignore_unknown_values: boolean
        :param ignore_unknown_values: job configuration option; see
-                                     :meth:`gcloud.bigquery.job.LoadJob`
+                                     :meth:`google.cloud.bigquery.job.LoadJob`.

        :type max_bad_records: integer
        :param max_bad_records: job configuration option; see
-                               :meth:`gcloud.bigquery.job.LoadJob`
+                               :meth:`google.cloud.bigquery.job.LoadJob`.

        :type quote_character: str
        :param quote_character: job configuration option; see
-                               :meth:`gcloud.bigquery.job.LoadJob`
+                               :meth:`google.cloud.bigquery.job.LoadJob`.

        :type skip_leading_rows: integer
        :param skip_leading_rows: job configuration option; see
-                                 :meth:`gcloud.bigquery.job.LoadJob`
+                                 :meth:`google.cloud.bigquery.job.LoadJob`.

        :type write_disposition: str
        :param write_disposition: job configuration option; see
-                                 :meth:`gcloud.bigquery.job.LoadJob`
+                                 :meth:`google.cloud.bigquery.job.LoadJob`.

-        :type client: :class:`gcloud.storage.client.Client` or ``NoneType``
+        :type client: :class:`~google.cloud.storage.client.Client` or
+                      ``NoneType``
        :param client: Optional. The client to use. If not passed, falls
                       back to the ``client`` stored on the current dataset.

-        :rtype: :class:`gcloud.bigquery.jobs.LoadTableFromStorageJob`
+        :rtype: :class:`google.cloud.bigquery.jobs.LoadTableFromStorageJob`
        :returns: the job instance used to load the data (e.g., for
-                  querying status)
+                  querying status).
        :raises: :class:`ValueError` if ``size`` is not passed in and can not
                 be determined, or if the ``file_obj`` can be detected to be
                 a file opened in text mode.
@@ -1122,123 +1324,62 @@