Binary file added .DS_Store
Binary file not shown.
12 changes: 8 additions & 4 deletions .cursor/rules/general-rule.mdc
@@ -1,11 +1,15 @@
---
description:
globs:
alwaysApply: true
---

## Rules to Follow

- You must always commit your changes whenever you update code.
- You always prefer to use branch development.
- Before writing any code, you create a feature branch to hold those changes.
- After you are done, provide instructions in a "Merge.md" file that explains how to merge the changes back to main with both a Github PR route and a Github CLI route.
- You must always try to write code that is well documented (self-documenting or commented is fine).
- You must only work on a single feature at a time.
- You must explain your decisions thoroughly to the user.
123 changes: 123 additions & 0 deletions Merge.md
@@ -0,0 +1,123 @@
# Deployment Instructions

This project consists of two separate deployments:

1. Frontend (Next.js)
2. API (FastAPI)

## API Deployment

### Prerequisites

1. Install Vercel CLI:

```bash
npm install -g vercel
```

2. Login to Vercel:

```bash
vercel login
```

### Deploy API

1. Navigate to the API directory:

```bash
cd api
```

2. Deploy to Vercel:

```bash
vercel
```

3. After deployment, copy the API URL (you'll need it for the frontend)

### Environment Variables for API

Set these in your Vercel project dashboard:

- `OPENAI_API_KEY`: Your OpenAI API key
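
The API code reads this variable at startup and fails fast when it is missing (see `aimakerspace/openai_utils/embedding.py`). A minimal sketch of that check, with the helper name `require_api_key` being a hypothetical illustration rather than a function in the codebase:

```python
import os


def require_api_key() -> str:
    """Return the OpenAI API key, or raise a clear error if it is not configured."""
    key = os.getenv("OPENAI_API_KEY")
    if key is None:
        raise ValueError(
            "OPENAI_API_KEY environment variable is not set. "
            "Please configure it with your OpenAI API key."
        )
    return key
```

Setting the variable in the Vercel dashboard before the first deploy avoids this error surfacing as a 500 from the serverless function.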

## Frontend Deployment

### Prerequisites

Same as the API deployment (Vercel CLI installed and logged in).

### Deploy Frontend

1. Navigate to the frontend directory:

```bash
cd frontend
```

2. Deploy to Vercel:

```bash
vercel
```

### Environment Variables for Frontend

Set these in your Vercel project dashboard:

- `NEXT_PUBLIC_API_URL`: The URL of your deployed API (e.g., https://your-api.vercel.app)

## Monitoring and Management

### API Project

1. Monitor API logs and performance in Vercel dashboard
2. Check Function execution logs
3. Monitor API rate limits and usage

### Frontend Project

1. Monitor build logs and deployment status
2. Check static asset delivery
3. Monitor page performance

## Troubleshooting

### API Issues

1. Check API logs in Vercel dashboard
2. Verify environment variables are set
3. Test API endpoints directly

### Frontend Issues

1. Check build logs
2. Verify API URL is correctly set
3. Check browser console for errors
4. Verify API is accessible from frontend domain

## Alternative: GitHub PR Route

1. For API changes:

```bash
cd api
gh pr create --title "Deploy API changes" --body "Deploy latest API changes to Vercel"
```

2. For Frontend changes:

```bash
cd frontend
gh pr create --title "Deploy Frontend changes" --body "Deploy latest Frontend changes to Vercel"
```

3. After each PR is approved, merge it (run from the PR's branch, or pass the PR number explicitly):

```bash
gh pr merge --merge
```

Vercel will automatically deploy changes when merged to main branch for each project.
Empty file added aimakerspace/__init__.py
Empty file.
1 change: 1 addition & 0 deletions aimakerspace/aimakerspace
Empty file.
66 changes: 66 additions & 0 deletions aimakerspace/openai_utils/chatmodel.py
@@ -0,0 +1,66 @@
import os
from typing import Any, AsyncIterator, Iterable, List, MutableMapping

from dotenv import load_dotenv
from openai import AsyncOpenAI, OpenAI

load_dotenv()

ChatMessage = MutableMapping[str, Any]


class ChatOpenAI:
"""Thin wrapper around the OpenAI chat completion APIs."""

def __init__(self, model_name: str = "gpt-4o-mini"):
self.model_name = model_name
self.openai_api_key = os.getenv("OPENAI_API_KEY")
if self.openai_api_key is None:
raise ValueError("OPENAI_API_KEY is not set")

self._client = OpenAI()
self._async_client = AsyncOpenAI()

def run(
self,
messages: Iterable[ChatMessage],
text_only: bool = True,
**kwargs: Any,
) -> Any:
"""Execute a chat completion request.

``messages`` must be an iterable of ``{"role": ..., "content": ...}``
dictionaries. When ``text_only`` is ``True`` (the default) only the
completion text is returned; otherwise the full response object is
provided.
"""

message_list = self._coerce_messages(messages)
response = self._client.chat.completions.create(
model=self.model_name, messages=message_list, **kwargs
)

if text_only:
return response.choices[0].message.content

return response

async def astream(
self, messages: Iterable[ChatMessage], **kwargs: Any
) -> AsyncIterator[str]:
"""Yield streaming completion chunks as they arrive from the API."""

message_list = self._coerce_messages(messages)
stream = await self._async_client.chat.completions.create(
model=self.model_name, messages=message_list, stream=True, **kwargs
)

async for chunk in stream:
content = chunk.choices[0].delta.content
if content is not None:
yield content

def _coerce_messages(self, messages: Iterable[ChatMessage]) -> List[ChatMessage]:
if isinstance(messages, list):
return messages
return list(messages)
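
The message shape `run()` and `astream()` expect can be sketched without touching the API. The standalone `coerce_messages` helper below is a hypothetical mirror of the private `_coerce_messages` method, shown so the materialization behavior is easy to see; actually calling the API requires `OPENAI_API_KEY` to be set:

```python
# OpenAI-style chat messages: a list of {"role": ..., "content": ...} dicts.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Say hello."},
]


def coerce_messages(messages):
    """Mirror of ChatOpenAI._coerce_messages: materialize any iterable into a list."""
    return messages if isinstance(messages, list) else list(messages)


# Generators and other one-shot iterables are accepted and materialized once,
# so the same list can be passed to the OpenAI client safely.
coerced = coerce_messages(m for m in messages)
```

This is why callers can pass a generator of messages to `run()` without the stream being consumed before the request is built.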
69 changes: 69 additions & 0 deletions aimakerspace/openai_utils/embedding.py
@@ -0,0 +1,69 @@
import asyncio
import os
from typing import Iterable, List

from dotenv import load_dotenv
from openai import AsyncOpenAI, OpenAI


class EmbeddingModel:
"""Helper for generating embeddings via the OpenAI API."""

def __init__(self, embeddings_model_name: str = "text-embedding-3-small"):
load_dotenv()
self.openai_api_key = os.getenv("OPENAI_API_KEY")
if self.openai_api_key is None:
raise ValueError(
"OPENAI_API_KEY environment variable is not set. "
"Please configure it with your OpenAI API key."
)

self.embeddings_model_name = embeddings_model_name
self.async_client = AsyncOpenAI()
self.client = OpenAI()

async def async_get_embeddings(self, list_of_text: Iterable[str]) -> List[List[float]]:
"""Return embeddings for ``list_of_text`` using the async client."""

embedding_response = await self.async_client.embeddings.create(
input=list(list_of_text), model=self.embeddings_model_name
)

return [item.embedding for item in embedding_response.data]

async def async_get_embedding(self, text: str) -> List[float]:
"""Return an embedding for a single text using the async client."""

embedding = await self.async_client.embeddings.create(
input=text, model=self.embeddings_model_name
)

return embedding.data[0].embedding

def get_embeddings(self, list_of_text: Iterable[str]) -> List[List[float]]:
"""Return embeddings for ``list_of_text`` using the sync client."""

embedding_response = self.client.embeddings.create(
input=list(list_of_text), model=self.embeddings_model_name
)

return [item.embedding for item in embedding_response.data]

def get_embedding(self, text: str) -> List[float]:
"""Return an embedding for a single text using the sync client."""

embedding = self.client.embeddings.create(
input=text, model=self.embeddings_model_name
)

return embedding.data[0].embedding


if __name__ == "__main__":
embedding_model = EmbeddingModel()
print(asyncio.run(embedding_model.async_get_embedding("Hello, world!")))
print(
asyncio.run(
embedding_model.async_get_embeddings(["Hello, world!", "Goodbye, world!"])
)
)
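
Embeddings returned by `EmbeddingModel` are typically compared with cosine similarity. The module does not ship such a helper, so the following is a minimal standalone sketch of how the returned `List[float]` vectors would be compared:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Given two embeddings from `get_embeddings`, a higher cosine similarity indicates more semantically similar input texts.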
60 changes: 60 additions & 0 deletions aimakerspace/openai_utils/prompts.py
@@ -0,0 +1,60 @@
import re
from typing import Any, Dict, List


class BasePrompt:
"""Simple string template helper used to format prompt text."""

def __init__(self, prompt: str):
self.prompt = prompt
self._pattern = re.compile(r"\{([^}]+)\}")

def format_prompt(self, **kwargs: Any) -> str:
"""Return the prompt with ``kwargs`` substituted for placeholders."""

matches = self._pattern.findall(self.prompt)
replacements = {match: kwargs.get(match, "") for match in matches}
return self.prompt.format(**replacements)

def get_input_variables(self) -> List[str]:
"""Return the placeholder names used by this prompt."""

return self._pattern.findall(self.prompt)


class RolePrompt(BasePrompt):
"""Prompt template that also captures an accompanying chat role."""

def __init__(self, prompt: str, role: str):
super().__init__(prompt)
self.role = role

def create_message(self, apply_format: bool = True, **kwargs: Any) -> Dict[str, str]:
"""Build an OpenAI chat message dictionary for this prompt."""

content = self.format_prompt(**kwargs) if apply_format else self.prompt
return {"role": self.role, "content": content}


class SystemRolePrompt(RolePrompt):
def __init__(self, prompt: str):
super().__init__(prompt, "system")


class UserRolePrompt(RolePrompt):
def __init__(self, prompt: str):
super().__init__(prompt, "user")


class AssistantRolePrompt(RolePrompt):
def __init__(self, prompt: str):
super().__init__(prompt, "assistant")


if __name__ == "__main__":
prompt = BasePrompt("Hello {name}, you are {age} years old")
print(prompt.format_prompt(name="John", age=30))

prompt = SystemRolePrompt("Hello {name}, you are {age} years old")
print(prompt.create_message(name="John", age=30))
print(prompt.get_input_variables())
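
One subtlety of `BasePrompt.format_prompt` worth noting: placeholders with no matching keyword argument are silently replaced with an empty string rather than raising `KeyError`. The standalone `format_prompt` function below is a sketch that mirrors the class's regex-based substitution to make that behavior concrete:

```python
import re

_PATTERN = re.compile(r"\{([^}]+)\}")


def format_prompt(template, **kwargs):
    """Mirror of BasePrompt.format_prompt: missing keys become empty strings."""
    matches = _PATTERN.findall(template)
    replacements = {match: kwargs.get(match, "") for match in matches}
    return template.format(**replacements)
```

This makes prompts forgiving of missing variables, at the cost of silently producing gaps in the rendered text, which is worth keeping in mind when debugging prompt output.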