27 commits
50170cd
Build initial chatbot
terellcodes Jun 25, 2025
50d4758
fly.io setup
terellcodes Jun 25, 2025
8a82c40
Studio Ghibli chatbot
terellcodes Jun 25, 2025
8e9a967
Point to fly.io api in production
terellcodes Jun 25, 2025
6d31fc3
Remove use of any in page.tsx to fix vercel deployment error
terellcodes Jun 25, 2025
95828b4
Increase chat container height for better use of vertical space and l…
terellcodes Jun 25, 2025
fafb44c
Chat window is wider
terellcodes Jun 27, 2025
f5cfe80
Make conversation bubbles wider
terellcodes Jun 27, 2025
b2f0f39
Give the conversation window a green tint
terellcodes Jun 27, 2025
1449ba2
Refactor: Move chat input form to its own component and file. Only sh…
terellcodes Jun 27, 2025
b6661f6
Fix: Remove className from ReactMarkdown and wrap in styled div for m…
terellcodes Jun 27, 2025
98c87ac
Support rendering LaTeX
terellcodes Jun 27, 2025
591967e
Fix bug where the main page would scroll to a second
terellcodes Jun 27, 2025
ef82462
feat: Add copy-to-clipboard button to AI conversation bubbles and pos…
terellcodes Jun 27, 2025
c030b96
Stream regenerated AI response into bubble, clearing content and show…
terellcodes Jun 27, 2025
963d585
Remove chatEndRef prop from ChatWindow to fix Vercel type error
terellcodes Jun 27, 2025
1cf69d1
host api on vercel
terellcodes Jul 2, 2025
29e974e
for deployment
terellcodes Jul 2, 2025
32774fe
RAG works locally
terellcodes Jul 3, 2025
3f69411
Fix unused variable error
terellcodes Jul 3, 2025
608f713
unused var err
terellcodes Jul 3, 2025
c85d59d
Stream chat pdf response
terellcodes Jul 3, 2025
84afe48
fix aimakerspace import
terellcodes Jul 3, 2025
7ecb151
Add numpy dependency
terellcodes Jul 3, 2025
7aba925
How to push github changes
terellcodes Jul 3, 2025
db22f39
Apply batching for generating embeddings
terellcodes Jul 3, 2025
4f40bbb
increase batch size
terellcodes Jul 3, 2025
8 changes: 8 additions & 0 deletions .cursor/rules/branch-development.mdc
@@ -0,0 +1,8 @@
---
description:
globs:
alwaysApply: true
---
You always prefer to use branch development. Before writing any code - you create a feature branch to hold those changes.

After you are done - provide instructions in a "MERGE.md" file that explains how to merge the changes back to main with both a GitHub PR route and a GitHub CLI route.
27 changes: 26 additions & 1 deletion .cursor/rules/frontend-rule.mdc
@@ -10,4 +10,29 @@ alwaysApply: false
- When asking the user for sensitive information - you must use password style text-entry boxes in the UI.
- You should use Next.js as it works best with Vercel.
- This frontend will ultimately be deployed on Vercel, but it should be possible to test locally.
- Always provide users with a way to run the created UI once you have created it.
- The UI should mimic a Studio Ghibli aesthetic, following these rules:
- Design Style
- Use soft earth-tone colors and watercolor-like backgrounds.
- Use rounded corners (`border-radius: 12px+`) and soft drop shadows.
- Apply natural textures, paper-like backgrounds, and grain overlays.

- Typography
- Use fonts like 'Quicksand', 'Noto Serif JP', or 'Garamond' for a whimsical, storybook feel.
- Avoid modern, harsh sans-serif fonts.

- Components
- All buttons should look hand-drawn, with hover animations that glow or float.
- Inputs should be wide and softly bordered with illustrated icons.

- Layout
- Layouts should be centered with parchment or wooden-framed containers.
- Avoid sterile grids — allow white space and natural asymmetry.

- Animations
- Use Framer Motion for soft transitions, magical fades, and bouncy effects.
- Animations should mimic nature: floating, glowing, or breezy motion.

- Vibes
- Every page should feel like it came from a storybook or nature journal.
- Prioritize playfulness, calm, and natural wonder.
44 changes: 44 additions & 0 deletions MERGE.md
@@ -0,0 +1,44 @@
# How to Merge Your Feature Branch Back to Main

This project uses branch-based development. Follow these steps to merge your feature branch (e.g., `feature-rag-e2e`) back to `main`.

---

## 1. GitHub Pull Request (Recommended)

1. **Push your branch to GitHub** (if you haven't already):
```bash
git push origin feature-rag-e2e
```
2. **Go to your repository on GitHub.**
3. **Click "Compare & pull request"** next to your branch.
4. **Review the changes** and add a descriptive title and summary.
5. **Click "Create pull request".**
6. After review, **click "Merge pull request"** to merge into `main`.
7. (Optional) **Delete the feature branch** on GitHub after merging.

---

## 2. GitHub CLI (Command Line)

1. **Push your branch to GitHub** (if you haven't already):
```bash
git push origin feature-rag-e2e
```
2. **Create a pull request from the CLI:**
```bash
gh pr create --base main --head feature-rag-e2e --fill
```
3. **Merge the pull request from the CLI:**
```bash
gh pr merge --merge
```
4. (Optional) **Delete the local and remote feature branch:**
```bash
git branch -d feature-rag-e2e
git push origin --delete feature-rag-e2e
```

---

**Note:** Always ensure all tests pass and your branch is up to date with `main` before merging.
2 changes: 1 addition & 1 deletion README.md
Expand Up @@ -178,7 +178,7 @@ Check it out 👇

A big shoutout to the @AI Makerspace for making all of this possible. Couldn't have done it without the incredible community there. 🤗🙏

Looking forward to building with the community! 🙌✨ Here's to many more creations ahead! 🥂🎉

Who else is diving into the world of AI? Let's connect! 🌐💡

Empty file added aimakerspace/__init__.py
Empty file.
Empty file.
45 changes: 45 additions & 0 deletions aimakerspace/openai_utils/chatmodel.py
@@ -0,0 +1,45 @@
from openai import OpenAI, AsyncOpenAI
from dotenv import load_dotenv
import os

load_dotenv()


class ChatOpenAI:
    def __init__(self, model_name: str = "gpt-4o-mini", api_key: str = None):
        self.model_name = model_name
        self.openai_api_key = api_key or os.getenv("OPENAI_API_KEY")
        if self.openai_api_key is None:
            raise ValueError("OPENAI_API_KEY is not set")

    def run(self, messages, text_only: bool = True, **kwargs):
        if not isinstance(messages, list):
            raise ValueError("messages must be a list")

        client = OpenAI(api_key=self.openai_api_key)
        response = client.chat.completions.create(
            model=self.model_name, messages=messages, **kwargs
        )

        if text_only:
            return response.choices[0].message.content

        return response

    async def astream(self, messages, **kwargs):
        if not isinstance(messages, list):
            raise ValueError("messages must be a list")

        client = AsyncOpenAI(api_key=self.openai_api_key)

        stream = await client.chat.completions.create(
            model=self.model_name,
            messages=messages,
            stream=True,
            **kwargs
        )

        async for chunk in stream:
            content = chunk.choices[0].delta.content
            if content is not None:
                yield content
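For orientation, here is a minimal usage sketch (not part of this PR), assuming the package is importable as `aimakerspace.openai_utils.chatmodel` and that `OPENAI_API_KEY` is set in the environment:

```python
import asyncio

from aimakerspace.openai_utils.chatmodel import ChatOpenAI

# Assumes OPENAI_API_KEY is available in the environment or a .env file.
chat = ChatOpenAI(model_name="gpt-4o-mini")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Say hello in one short sentence."},
]

# Blocking call; with text_only=True (the default) it returns just the message text.
print(chat.run(messages))


async def main():
    # astream is an async generator that yields content deltas as they arrive,
    # presumably what the streaming-related commits above build on.
    async for token in chat.astream(messages):
        print(token, end="", flush=True)
    print()


asyncio.run(main())
```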
57 changes: 57 additions & 0 deletions aimakerspace/openai_utils/embedding.py
@@ -0,0 +1,57 @@
from dotenv import load_dotenv
from openai import AsyncOpenAI, OpenAI
import openai
from typing import List
import os
import asyncio


class EmbeddingModel:
    def __init__(self, embeddings_model_name: str = "text-embedding-3-small", api_key: str = None):
        load_dotenv()
        self.openai_api_key = api_key or os.getenv("OPENAI_API_KEY")
        if self.openai_api_key is None:
            raise ValueError(
                "OPENAI_API_KEY environment variable is not set. Please set it to your OpenAI API key."
            )
        self.async_client = AsyncOpenAI(api_key=self.openai_api_key)
        self.client = OpenAI(api_key=self.openai_api_key)
        self.embeddings_model_name = embeddings_model_name

    async def async_get_embeddings(self, list_of_text: List[str]) -> List[List[float]]:
        embedding_response = await self.async_client.embeddings.create(
            input=list_of_text, model=self.embeddings_model_name
        )

        return [embeddings.embedding for embeddings in embedding_response.data]

    async def async_get_embedding(self, text: str) -> List[float]:
        embedding = await self.async_client.embeddings.create(
            input=text, model=self.embeddings_model_name
        )

        return embedding.data[0].embedding

    def get_embeddings(self, list_of_text: List[str]) -> List[List[float]]:
        embedding_response = self.client.embeddings.create(
            input=list_of_text, model=self.embeddings_model_name
        )

        return [embeddings.embedding for embeddings in embedding_response.data]

    def get_embedding(self, text: str) -> List[float]:
        embedding = self.client.embeddings.create(
            input=text, model=self.embeddings_model_name
        )

        return embedding.data[0].embedding


if __name__ == "__main__":
    embedding_model = EmbeddingModel()
    print(asyncio.run(embedding_model.async_get_embedding("Hello, world!")))
    print(
        asyncio.run(
            embedding_model.async_get_embeddings(["Hello, world!", "Goodbye, world!"])
        )
    )
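The commit log mentions batching for embedding generation (`db22f39`, `4f40bbb`), which is not visible in this file. Below is a hedged sketch of how batching could be layered on top of `EmbeddingModel`; `embed_in_batches` and `batch_size` are illustrative names, not code from the PR:

```python
import asyncio
from typing import List

from aimakerspace.openai_utils.embedding import EmbeddingModel


async def embed_in_batches(texts: List[str], batch_size: int = 100) -> List[List[float]]:
    """Embed a long list of texts in fixed-size batches to keep request sizes bounded.

    The batch size here is illustrative; the PR may use a different value or strategy.
    """
    model = EmbeddingModel()  # assumes OPENAI_API_KEY is set
    embeddings: List[List[float]] = []
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        embeddings.extend(await model.async_get_embeddings(batch))
    return embeddings


if __name__ == "__main__":
    chunks = [f"chunk {i}" for i in range(250)]
    vectors = asyncio.run(embed_in_batches(chunks))
    print(len(vectors), len(vectors[0]))
```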
78 changes: 78 additions & 0 deletions aimakerspace/openai_utils/prompts.py
@@ -0,0 +1,78 @@
import re


class BasePrompt:
    def __init__(self, prompt):
        """
        Initializes the BasePrompt object with a prompt template.

        :param prompt: A string that can contain placeholders within curly braces
        """
        self.prompt = prompt
        self._pattern = re.compile(r"\{([^}]+)\}")

    def format_prompt(self, **kwargs):
        """
        Formats the prompt string using the keyword arguments provided.

        :param kwargs: The values to substitute into the prompt string
        :return: The formatted prompt string
        """
        matches = self._pattern.findall(self.prompt)
        return self.prompt.format(**{match: kwargs.get(match, "") for match in matches})

    def get_input_variables(self):
        """
        Gets the list of input variable names from the prompt string.

        :return: List of input variable names
        """
        return self._pattern.findall(self.prompt)


class RolePrompt(BasePrompt):
    def __init__(self, prompt, role: str):
        """
        Initializes the RolePrompt object with a prompt template and a role.

        :param prompt: A string that can contain placeholders within curly braces
        :param role: The role for the message ('system', 'user', or 'assistant')
        """
        super().__init__(prompt)
        self.role = role

    def create_message(self, format=True, **kwargs):
        """
        Creates a message dictionary with a role and a formatted message.

        :param kwargs: The values to substitute into the prompt string
        :return: Dictionary containing the role and the formatted message
        """
        if format:
            return {"role": self.role, "content": self.format_prompt(**kwargs)}

        return {"role": self.role, "content": self.prompt}


class SystemRolePrompt(RolePrompt):
    def __init__(self, prompt: str):
        super().__init__(prompt, "system")


class UserRolePrompt(RolePrompt):
    def __init__(self, prompt: str):
        super().__init__(prompt, "user")


class AssistantRolePrompt(RolePrompt):
    def __init__(self, prompt: str):
        super().__init__(prompt, "assistant")


if __name__ == "__main__":
    prompt = BasePrompt("Hello {name}, you are {age} years old")
    print(prompt.format_prompt(name="John", age=30))

    prompt = SystemRolePrompt("Hello {name}, you are {age} years old")
    print(prompt.create_message(name="John", age=30))
    print(prompt.get_input_variables())
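Putting the prompt classes and `ChatOpenAI` together, a RAG-style call might look roughly like the following. The retrieval step and the actual prompt texts used by the PR are not shown in this diff, so treat this as an illustrative sketch only:

```python
from aimakerspace.openai_utils.chatmodel import ChatOpenAI
from aimakerspace.openai_utils.prompts import SystemRolePrompt, UserRolePrompt

# Hypothetical prompt templates; the PR's real RAG prompts are not part of this diff.
system_prompt = SystemRolePrompt(
    "You answer questions using only the provided context. "
    "If the context is insufficient, say so."
)
user_prompt = UserRolePrompt("Context:\n{context}\n\nQuestion: {question}")


def answer(question: str, context: str) -> str:
    """Format the prompts, then ask the chat model for a grounded answer."""
    chat = ChatOpenAI()
    messages = [
        system_prompt.create_message(),
        user_prompt.create_message(context=context, question=question),
    ]
    return chat.run(messages)


print(answer("What does the chatbot render?", "The frontend renders Markdown and LaTeX."))
```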