Commit 6428b13

Merge branch 'main' into gemini-magic
2 parents 7cf6278 + b12aef7

File tree

12 files changed: +396 −90 lines


CONTRIBUTING.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -24,7 +24,7 @@ Follow either of the two links above to access the appropriate CLA and
 instructions for how to sign and return it. Once we receive it, we'll be able to
 accept your pull requests.
 
-## Contributing A Patch
+## Contributing a Patch
 
 1. Submit an issue describing your proposed change to the repo in question.
 1. The repo owner will respond to your issue promptly.
@@ -45,7 +45,7 @@ accept your pull requests.
 1. [Set up authentication with a service account][auth] so you can access the
    API from your local workstation.
 
-   You can use an API-key, but remember never to same it in your source files.
+   You can use an API-key, but remember never to save it in your source files.
 
 ## Development
@@ -103,4 +103,4 @@ python docs/build_docs.py
 [projects]: https://console.cloud.google.com/project
 [billing]: https://support.google.com/cloud/answer/6293499#enable-billing
 [enable_api]: https://console.cloud.google.com/flows/enableapi?apiid=generativelanguage.googleapis.com
-[auth]: https://cloud.google.com/docs/authentication/getting-started
+[auth]: https://cloud.google.com/docs/authentication/getting-started
````
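
The hunk above fixes the advice about API keys ("never save it in your source files"). A minimal sketch of what that advice means in practice is to read the key from the environment; the variable name `GOOGLE_API_KEY` here is an assumption for illustration, not something the diff specifies:

```python
import os

# Read the key from the environment rather than hard-coding it in source.
# "GOOGLE_API_KEY" is an assumed variable name for illustration.
api_key = os.environ.get("GOOGLE_API_KEY", "")

# Downstream code can check for configuration without the secret ever
# appearing in the repository.
configured = bool(api_key)
print("key configured:", configured)
```

The same pattern applies to service-account credential paths: keep the secret out of version control and inject it at runtime.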

README.md

Lines changed: 121 additions & 18 deletions

````diff
@@ -1,47 +1,146 @@
-Google Generative AI Python Client
-==================================
+# Google AI Python SDK
 
 [![PyPI version](https://badge.fury.io/py/google-generativeai.svg)](https://badge.fury.io/py/google-generativeai)
 ![Python support](https://img.shields.io/pypi/pyversions/google-generativeai)
 ![PyPI - Downloads](https://img.shields.io/pypi/dd/google-generativeai)
 
-Get started using the PaLM API in Python. Check out the [developer site](https://developers.generativeai.google/)
-for comprehensive documentation.
+The Google AI Python SDK enables developers to use Google's state-of-the-art generative AI
+models (like Gemini and PaLM) to build AI-powered features and applications. This SDK
+supports use cases like:
+
+- Generate text from text-only input
+- Generate text from text-and-images input (multimodal) (for Gemini only)
+- Build multi-turn conversations (chat)
+- Embedding
+
+For example, with just a few lines of code, you can access Gemini's multimodal
+capabilities to generate text from text-and-image input:
+
+```
+model = genai.GenerativeModel('gemini-pro-vision')
+
+cookie_picture = {
+    'mime_type': 'image/png',
+    'data': Path('cookie.png').read_bytes()
+}
+prompt = "Give me a recipe for this:"
+
+response = model.generate_content(
+    content=[prompt, cookie_picture]
+)
+print(response.text)
+```
 
-## Installation and usage
+
+## Try out the API
 
 Install from PyPI.
-```bash
-pip install google-generativeai
+
+`pip install google-generativeai`
+
+[Obtain an API key from AI Studio](https://makersuite.google.com/app/apikey),
+then configure it here.
+
+Import the SDK and load a model.
+
 ```
+import google.generativeai as genai
 
-Get an [API key from MakerSuite](https://makersuite.google.com/app/apikey), then configure it here.
-```python
+genai.configure(api_key=os.environ["API_KEY"])
+
+model = genai.GenerativeModel('gemini-pro')
+```
+
+Use `GenerativeModel.generate_content` to have the model complete some initial text.
+
+```
+response = model.generate_content("The opposite of hot is")
+print(response.text) # cold.
+```
+
+Use `GenerativeModel.start_chat` to have a discussion with a model.
+
+```
+chat = model.start_chat()
+response = chat.send_message('Hello, what should I have for dinner?')
+print(response.text) # 'Here are some suggestions...'
+response = chat.send_message("How do I cook the first one?")
+```
+
+
+
+## Installation and usage
+
+Run [`pip install google-generativeai`](https://pypi.org/project/google-generativeai).
+
+For detailed instructions, you can find a
+[quickstart](https://ai.google.dev/tutorials/python_quickstart) for the Google AI
+Python SDK in the Google documentation.
+
+This quickstart describes how to add your API key and install the SDK in your app,
+initialize the model, and then call the API to access the model. It also describes some
+additional use cases and features, like streaming, embedding, counting tokens, and
+controlling responses.
+
+
+## Documentation
+
+Find complete documentation for the Google AI SDKs and the Gemini model in the Google
+documentation: https://ai.google.dev/docs
+
+
+## Contributing
+
+See [Contributing](https://github.com/google/generative-ai-python/blob/main/CONTRIBUTING.md) for more information on contributing to the Google AI Python SDK.
+
+## Developers who use the PaLM API
+
+### Migrate to use the Gemini API
+
+Check our [migration guide](https://ai.google.dev/docs/migration_guide) in the Google
+documentation.
+
+### Installation and usage for the PaLM API
+
+Install from PyPI.
+
+`pip install google-generativeai`
+
+[Obtain an API key from AI Studio](https://makersuite.google.com/app/apikey), then
+configure it here.
+
+```
 import google.generativeai as palm
 
 palm.configure(api_key=os.environ["PALM_API_KEY"])
 ```
 
-Use [`palm.generate_text`](https://developers.generativeai.google/api/python/google/generativeai/generate_text)
-to have the model complete some initial text.
-```python
+Use `palm.generate_text` to have the model complete some initial text.
+
+```
 response = palm.generate_text(prompt="The opposite of hot is")
 print(response.result) # cold.
 ```
 
-Use [`palm.chat`](https://developers.generativeai.google/api/python/google/generativeai/chat)
-to have a discussion with a model.
-```python
+Use `palm.chat` to have a discussion with a model.
+
+```
 response = palm.chat(messages=["Hello."])
 print(response.last) # 'Hello! What can I help you with?'
 response.reply("Can you tell me a joke?")
 ```
 
-## Documentation
+### Documentation for the PaLM API
+
+- [General PaLM documentation](https://ai.google.dev/docs/palm_api_overview)
+
+- [Text quickstart](https://github.com/google/generative-ai-docs/blob/main/site/en/palm_docs/text_quickstart.ipynb)
 
-Checkout the full [API docs](https://developers.generativeai.google/api), the [guide](https://developers.generativeai.google/guide) and [quick starts](https://developers.generativeai.google/tutorials).
+- [Chat quickstart](https://github.com/google/generative-ai-docs/blob/main/site/en/palm_docs/chat_quickstart.ipynb)
 
-## Colab magics
+- [Tuning quickstart](https://github.com/google/generative-ai-docs/blob/main/site/en/palm_docs/tuning_quickstart_python.ipynb)
+
+### Colab magics
 
 ```
 %pip install -q google-generativeai
@@ -54,3 +153,7 @@ Once installed, use the Python client via the `%%llm` Colab magic. Read the full
 %%llm
 The best thing since sliced bread is
 ```
+
+## License
+
+The contents of this repository are licensed under the [Apache License, version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
````
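
The new README's multimodal example builds the image part as a dict of a MIME type plus raw bytes. A self-contained sketch of that shape (using a throwaway placeholder file with only a PNG signature, not a real image, and no API call):

```python
from pathlib import Path

# Create a placeholder file so the sketch is self-contained; the bytes
# are just the 8-byte PNG signature, not a valid image.
Path("sample.png").write_bytes(b"\x89PNG\r\n\x1a\n")

# The blob-dict shape used by the README's cookie_picture example:
# a mime_type string paired with the raw file bytes.
picture = {
    "mime_type": "image/png",
    "data": Path("sample.png").read_bytes(),
}

print(picture["mime_type"], len(picture["data"]))
```

In the real SDK this dict is passed alongside a text prompt in the contents list given to `generate_content`.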

docs/build_docs.py

Lines changed: 4 additions & 0 deletions

````diff
@@ -24,6 +24,10 @@
 import pathlib
 import re
 import textwrap
+import typing
+
+# For showing the conditional imports and types in `content_types.py`
+typing.TYPE_CHECKING = True
 
 from absl import app
 from absl import flags
````
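
The two added lines force `typing.TYPE_CHECKING` to `True` at runtime, so `if TYPE_CHECKING:` import guards (normally seen only by static type checkers) actually execute and the doc generator can introspect those names. A self-contained sketch of the effect, using `fractions` as a stand-in for the guarded imports in `content_types.py`:

```python
import typing

# Flip the flag before any guarded code runs, as build_docs.py does.
typing.TYPE_CHECKING = True

# A guard like the ones in content_types.py; with the flag forced on,
# this import happens for real instead of being skipped at runtime.
if typing.TYPE_CHECKING:
    import fractions

# The guarded name is now available for documentation tooling.
print(fractions.Fraction(1, 2))
```

Note the flag must be set before the guarded module is imported; setting it afterwards has no effect on imports that were already skipped.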

google/generativeai/__init__.py

Lines changed: 11 additions & 39 deletions

````diff
@@ -12,60 +12,31 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-"""A high level client library for generative AI.
+"""Google AI Python SDK
 
 ## Setup
 
 ```posix-terminal
 pip install google-generativeai
 ```
 
-```
-import google.generativeai as palm
-import os
-
-palm.configure(api_key=os.environ['API_KEY'])
-```
-
-## Text
-
-Use the `palm.generate_text` function to have the model complete some initial
-text.
-
-```
-response = palm.generate_text(prompt="The opposite of hot is")
-print(response.result) # 'cold.'
-```
+## GenerativeModel
 
-## Chat
+Use `genai.GenerativeModel` to access the API:
 
-Use the `palm.chat` function to have a discussion with a model:
-
-```
-chat = palm.chat(messages=["Hello."])
-print(chat.last) # 'Hello! What can I help you with?'
-chat = chat.reply("Can you tell me a joke?")
-print(chat.last) # 'Why did the chicken cross the road?'
 ```
+import google.generativeai as genai
+import os
 
-## Models
-
-Use the model service discover models and find out more about them:
-
-Use `palm.get_model` to get details if you know a model's name:
-
-```
-model = palm.get_model('models/chat-bison-001') # 🦬
-```
+genai.configure(api_key=os.environ['API_KEY'])
 
-Use `palm.list_models` to discover models:
+model = genai.GenerativeModel(name='gemini-pro')
+response = model.generate_content('Please summarise this document: ...')
 
-```
-import pprint
-for model in palm.list_models():
-    pprint.pprint(model) # 🦎🦦🦬🦄
+print(response.text)
 ```
 
+See the [python quickstart](https://ai.google.dev/tutorials/python_quickstart) for more details.
 """
 from __future__ import annotations
 
@@ -82,6 +53,7 @@
 from google.generativeai.embedding import embed_content
 
 from google.generativeai.generative_models import GenerativeModel
+from google.generativeai.generative_models import ChatSession
 
 from google.generativeai.text import generate_text
 from google.generativeai.text import generate_embeddings
````

google/generativeai/generative_models.py

Lines changed: 24 additions & 6 deletions

````diff
@@ -2,26 +2,26 @@
 
 from __future__ import annotations
 
+from collections.abc import Iterable
 import dataclasses
 import textwrap
+from typing import Union
 
 # pylint: disable=bad-continuation, line-too-long
 
 
-from collections.abc import Iterable
-
 from google.ai import generativelanguage as glm
 from google.generativeai import client
 from google.generativeai import string_utils
 from google.generativeai.types import content_types
 from google.generativeai.types import generation_types
 from google.generativeai.types import safety_types
 
-_GENERATE_CONTENT_ASYNC_DOC = """The async version of `Model.generate_content`."""
+_GENERATE_CONTENT_ASYNC_DOC = """The async version of `GenerativeModel.generate_content`."""
 
 _GENERATE_CONTENT_DOC = """A multipurpose function to generate responses from the model.
 
-This `GenerativeModel.generate_content` method can handle multimodal input, and multiturn
+This `GenerativeModel.generate_content` method can handle multimodal input, and multi-turn
 conversations.
 
 >>> model = genai.GenerativeModel('models/gemini-pro')
@@ -70,6 +70,7 @@
     generation_config: Overrides for the model's generation config.
     safety_settings: Overrides for the model's safety settings.
     stream: If True, yield response chunks as they are generated.
+    tools: `glm.Tools` more info coming soon.
 """
 
 _SEND_MESSAGE_ASYNC_DOC = """The async version of `ChatSession.send_message`."""
@@ -158,6 +159,7 @@ def __init__(
         model_name: str = "gemini-m",
         safety_settings: safety_types.SafetySettingOptions | None = None,
         generation_config: generation_types.GenerationConfigType | None = None,
+        tools: content_types.ToolsType = None,
     ):
         if "/" not in model_name:
             model_name = "models/" + model_name
@@ -166,6 +168,8 @@ def __init__(
             safety_settings, harm_category_set="new"
         )
         self._generation_config = generation_types.to_generation_config_dict(generation_config)
+        self._tools = content_types.to_tools(tools)
+
         self._client = None
         self._async_client = None
 
@@ -213,6 +217,7 @@ def _prepare_request(
             contents=contents,
             generation_config=merged_gc,
             safety_settings=merged_ss,
+            tools=self._tools,
             **kwargs,
         )
 
@@ -274,21 +279,34 @@ async def generate_content_async(
     def count_tokens(
         self, contents: content_types.ContentsType
     ) -> glm.CountTokensResponse:
+        if self._client is None:
+            self._client = client.get_default_generative_client()
         contents = content_types.to_contents(contents)
-        return self._client.count_tokens(self.model_name, contents)
+        return self._client.count_tokens(glm.CountTokensRequest(model=self.model_name, contents=contents))
 
     async def count_tokens_async(
         self, contents: content_types.ContentsType
     ) -> glm.CountTokensResponse:
+        if self._async_client is None:
+            self._async_client = client.get_default_generative_async_client()
         contents = content_types.to_contents(contents)
-        return await self._client.count_tokens(self.model_name, contents)
+        return await self._async_client.count_tokens(glm.CountTokensRequest(model=self.model_name, contents=contents))
     # fmt: on
 
     def start_chat(
         self,
         *,
         history: Iterable[content_types.StrictContentType] | None = None,
     ) -> ChatSession:
+        """Returns a `genai.ChatSession` attached to this model.
+
+        >>> model = genai.GenerativeModel()
+        >>> chat = model.start_chat(history=[...])
+        >>> response = chat.send_message("Hello?")
+
+        Arguments:
+            history: An iterable of `glm.Content` objects, or equivalents to initialize the session.
+        """
         if self._generation_config.get("candidate_count", 1) > 1:
             raise ValueError("Can't chat with `candidate_count > 1`")
         return ChatSession(
````
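
Alongside switching `count_tokens` to a `glm.CountTokensRequest`, the hunk above adds lazy client creation: the client is built on first use and then reused. A generic, dependency-free sketch of that lazy-initialization pattern (with a plain object standing in for the real `client.get_default_generative_client()` factory):

```python
class LazyClientModel:
    """Sketch of the lazy-client pattern added to count_tokens:
    the client is constructed only on first use, then reused."""

    def __init__(self):
        self._client = None  # mirrors GenerativeModel.__init__

    def _ensure_client(self):
        # Stand-in for client.get_default_generative_client(); the real
        # factory builds an authenticated API client, which is why it is
        # deferred until a method actually needs it.
        if self._client is None:
            self._client = object()
        return self._client


model = LazyClientModel()
first = model._ensure_client()
second = model._ensure_client()
print(first is second)  # the client is created once and reused
```

Deferring construction this way keeps `GenerativeModel()` cheap to instantiate and avoids requiring credentials until the first API call.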
