Merged

30 commits
5a5407a feat: integrate korean book metadata and UI citations (SanghunYun95, Mar 2, 2026)
8a01e1d fix: apply coderabbit review suggestions (SanghunYun95, Mar 2, 2026)
133442a fix(backend): apply coderabbit review feedback for db and mapping scr… (SanghunYun95, Mar 2, 2026)
43d1722 fix(backend): address additional coderabbit PR inline comments (SanghunYun95, Mar 2, 2026)
0dd84a4 refactor(backend): use shared env parser and HTTPS for API (SanghunYun95, Mar 3, 2026)
3057ad7 fix(backend): allow key rotation for all errors in book mapping (SanghunYun95, Mar 3, 2026)
fc24774 feat: implement dynamic chat title and dynamic philosopher highlighting (SanghunYun95, Mar 3, 2026)
cdbc817 fix: apply CodeRabbit PR review feedback (SanghunYun95, Mar 3, 2026)
6c7566d fix(pr): address CodeRabbit review feedback on backend tools and DB s… (SanghunYun95, Mar 3, 2026)
78fc51a chore: resolve merge conflicts (SanghunYun95, Mar 3, 2026)
9de894d fix(pr): address additional CodeRabbit comments (SanghunYun95, Mar 3, 2026)
3d773d7 style: update welcome messages and input placeholder to be more gener… (SanghunYun95, Mar 3, 2026)
4335bee fix(pr): address additional CodeRabbit feedback for title truncation … (SanghunYun95, Mar 3, 2026)
7298aac UI: Remove redundant buttons (useful, copy, regenerate) from MessageList (SanghunYun95, Mar 3, 2026)
30dd215 Merge branch 'main' into feat/book-metadata (SanghunYun95, Mar 3, 2026)
ce91d6a Refactor: apply CodeRabbit review suggestions (SanghunYun95, Mar 3, 2026)
0bd1fcd docs: rewrite README for interviewers (SanghunYun95, Mar 3, 2026)
1196e30 docs, refactor: refine README and MessageList observer logic per PR c… (SanghunYun95, Mar 3, 2026)
1b31b83 refactor: resolve observer unmount leak, Biome formatting, exhaustive… (SanghunYun95, Mar 3, 2026)
e1ec3fc fix: clear visibleMessages on unmount & use targeted eslint disable (SanghunYun95, Mar 3, 2026)
36bd572 docs, refactor: disable philosopher filtering & update README examples (SanghunYun95, Mar 3, 2026)
f13f327 refactor: apply PR refinements for mapping script and observers (SanghunYun95, Mar 3, 2026)
1a9358b Merge origin/main into feat/book-metadata (Resolve conflicts) (SanghunYun95, Mar 3, 2026)
5d2841d Fix: apply CodeRabbit feedback for React hooks and Tailwind (SanghunYun95, Mar 3, 2026)
2584e3b Feat: support multiple GEMINI_API_KEYS via comma-separated env var fo… (SanghunYun95, Mar 4, 2026)
2395400 Fix: apply PR CodeRabbit round 8 feedback and add favicon (SanghunYun95, Mar 4, 2026)
a0f719c Fix: resolve conflicts and apply PR CodeRabbit round 9 feedback (SanghunYun95, Mar 4, 2026)
789bdf4 Fix: apply PR CodeRabbit round 10 feedback (SanghunYun95, Mar 4, 2026)
4c33094 Fix: apply PR CodeRabbit round 11 feedback (SanghunYun95, Mar 4, 2026)
c9b0b91 Fix: apply PR CodeRabbit round 12 feedback (SanghunYun95, Mar 4, 2026)
11 changes: 10 additions & 1 deletion README.md
@@ -60,7 +60,7 @@ The main Q&A pipeline of Philo-RAG operates as follows:
 1. **Question about Happiness**: "What do you think true happiness is?"
    - Result: The AI searches the various philosophy books in the database and composes a real-time answer based on multiple philosophers' insights on 'happiness'.
 2. **Ethical Dilemma Question**: "Can lying in human relationships never be justified under any circumstances?"
-   - Result: Source cards related to morality or ethics (Aladin book covers and metadata pre-bundled) are marked in the right pane, and a structured answer blending multiple perspectives is streamed.
+   - Result: Morality/ethics source cards (Aladin book covers and metadata pre-loaded) are displayed in the right pane, and a structured answer blending multiple perspectives is streamed.
 3. **Social Question**: "What should an ideal, egalitarian state look like?"
    - Result: Finds book metadata related to political/social philosophy via the RAG pipeline and presents a direct, multifaceted answer.

@@ -163,6 +163,15 @@ The core Q&A pipeline operates as follows:
 - **Scroll-Responsive Context Sidebar**: If the conversation grows long and the user scrolls up to an older AI response, the left sidebar automatically reads the metadata for that exact message and updates the sources visually (`IntersectionObserver`).
 - **High-Speed Streaming UX**: Addressing the common latency issue of RAG pipelines by actively streaming tokens the moment the LLM begins generation.
 
+## 💡 Usage Examples
+
+1. **Question about Happiness**: "What do you think true happiness is?"
+   - Result: AI searches various philosophical books in the database and streams a real-time answer based on insights from multiple philosophers regarding 'happiness'.
+2. **Ethical Dilemma Question**: "Can lying in human relationships ever be justified?"
+   - Result: Source cards related to morality or ethics (with book covers and metadata pre-loaded) appear on the right pane, while a structured answer combining different perspectives is streamed.
+3. **Social Question**: "What should an ideal and egalitarian state look like?"
+   - Result: Uses the RAG pipeline to locate metadata on political/social philosophy books and provides a direct, multifaceted answer.
+
 ---
 
 ## 💻 How to Run (Local Setup)
31 changes: 22 additions & 9 deletions backend/app/core/env_utils.py
@@ -5,12 +5,15 @@
 def parse_gemini_api_keys(env_path: Path) -> list[str]:
     """
     Reads active GEMINI_API_KEY assignments from the given .env file.
-    Only extracts active assignments and strips inline comments and quotes.
-    Also falls back to os.environ if no keys are found in the file.
+    Extracts active assignments and strips inline comments and quotes.
+    Also merges GEMINI_API_KEYS (comma-separated) and GEMINI_API_KEY
+    from os.environ with de-duplication, preserving first-seen order.
     """
+    def _normalize_key(value: str) -> str:
+        return value.strip().strip('"').strip("'") if value else ""
     api_keys = []
 
-    if env_path.exists():
+    if env_path.is_file():
         with open(env_path, 'r', encoding='utf-8') as f:
             content = f.read()
         # Find all variations of GEMINI_API_KEY assignments
@@ -22,14 +25,24 @@ def parse_gemini_api_keys(env_path: Path) -> list[str]:
         for m in matches:
             # Remove inline comments and strip quotes
             m = re.split(r'\s+#', m, 1)[0]
-            key = m.strip().strip('"').strip("'")
+            key = _normalize_key(m)
             if key and key not in api_keys:
                 api_keys.append(key)
 
-    # Fallback to os.environ when parsing produced no key or file doesn't exist
-    if not api_keys:
-        k = os.getenv("GEMINI_API_KEY")
-        if k and k not in api_keys:
-            api_keys.append(k)
+    # Also check GEMINI_API_KEYS (comma-separated list) from environment variables
+    # This is highly useful for deployment environments like Render
+    env_keys_str = os.getenv("GEMINI_API_KEYS")
+    if env_keys_str:
+        for k in env_keys_str.split(','):
+            key = _normalize_key(k)
+            if key and key not in api_keys:
+                api_keys.append(key)
+
+    # Also merge single GEMINI_API_KEY from environment (if present)
+    k = os.getenv("GEMINI_API_KEY")
+    if k:
+        key = _normalize_key(k)
+        if key and key not in api_keys:
+            api_keys.append(key)
 
     return api_keys
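The merge-and-de-duplicate behavior described in the docstring can be sketched as a standalone function. This is a minimal illustration, not the PR's code: the helper name `merge_env_keys` is hypothetical, and `_normalize_key` is re-declared here only so the sketch is self-contained.

```python
import os

def _normalize_key(value: str) -> str:
    # Strip surrounding whitespace, then matching double/single quotes.
    return value.strip().strip('"').strip("'") if value else ""

def merge_env_keys(file_keys: list[str]) -> list[str]:
    # Hypothetical sketch: keys parsed from the .env file come first,
    # then GEMINI_API_KEYS (comma-separated), then the single
    # GEMINI_API_KEY, preserving first-seen order and dropping duplicates.
    keys = list(file_keys)
    for k in (os.getenv("GEMINI_API_KEYS") or "").split(","):
        key = _normalize_key(k)
        if key and key not in keys:
            keys.append(key)
    single = _normalize_key(os.getenv("GEMINI_API_KEY") or "")
    if single and single not in keys:
        keys.append(single)
    return keys
```

With `GEMINI_API_KEYS=' "key-b" , key-c '` and `GEMINI_API_KEY=key-a`, a file key `key-a` yields `["key-a", "key-b", "key-c"]` — the environment duplicate is dropped, quotes and spaces are normalized away.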
3 changes: 2 additions & 1 deletion backend/scripts/generate_book_mapping.py
@@ -115,7 +115,8 @@ async def translate_book_info(file_name: str) -> dict:
 
     # If all keys exhausted or other error, fallback
     print(f"LLM Failed for {file_name}, falling back to Kyobo Search...")
-    name_without_ext = os.path.splitext(file_name)[0]
+    # file_name is already stem in current call path; only strip explicit .txt when present
+    name_without_ext = file_name[:-4] if file_name.lower().endswith(".txt") else file_name
     parts = name_without_ext.rsplit(" by ", 1)
     fallback_title = parts[0].strip()
     fallback_author = parts[1].strip() if len(parts) == 2 else ""
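The fallback split above relies on a `<title> by <author>` file-naming convention. A minimal self-contained sketch of that logic (the function name `parse_fallback` is hypothetical, chosen here for illustration):

```python
def parse_fallback(file_name: str) -> tuple[str, str]:
    # Strip a trailing ".txt" only when present; the name may already
    # be a bare stem in the current call path.
    name = file_name[:-4] if file_name.lower().endswith(".txt") else file_name
    # Split "<title> by <author>" from the right, so titles that
    # themselves contain " by " keep everything before the last one.
    parts = name.rsplit(" by ", 1)
    title = parts[0].strip()
    author = parts[1].strip() if len(parts) == 2 else ""
    return title, author
```

For example, `parse_fallback("Meditations by Marcus Aurelius.txt")` yields `("Meditations", "Marcus Aurelius")`, while a name with no ` by ` separator returns an empty author.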
Binary file added frontend/app/icon.png
14 changes: 12 additions & 2 deletions frontend/components/chat/MessageList.tsx
@@ -119,14 +119,24 @@ export function MessageList({ messages, onOpenCitation, onVisibleMessageChange }
     } else {
       elementById.current.delete(id);
       visibleMessages.current.delete(id);
+      refCallbackById.current.delete(id);
     }
   }, []);
 
   const getMessageRef = useCallback((id: string) => {
     let cb = refCallbackById.current.get(id);
     if (!cb) {
-      cb = (el) => observeElement(id, el);
+      const nextCb = (el: HTMLDivElement | null) => {
+        observeElement(id, el);
+        if (el === null) {
+          // Delay cleanup to survive React StrictMode's setup -> cleanup(null) -> setup cycle
+          Promise.resolve().then(() => {
+            if (refCallbackById.current.get(id) === nextCb && !elementById.current.has(id)) {
+              refCallbackById.current.delete(id);
+            }
+          });
+        }
+      };
+      cb = nextCb;
       refCallbackById.current.set(id, cb);
     }
     return cb;
4 changes: 2 additions & 2 deletions frontend/components/sidebar/ActivePhilosophers.tsx
@@ -43,7 +43,7 @@ export function ActivePhilosophers({ metadata, activeMetadata = [] }: Props) {
             : "border-white/10 bg-white/5"
         }`}
       >
-        <div className={`absolute inset-0 bg-gradient-to-r ${isActive ? "from-primary/10" : "from-primary/5"} to-transparent opacity-0 group-hover:opacity-100 transition-opacity`}></div>
+        <div className={`absolute inset-0 bg-gradient-to-r ${isActive ? "from-primary/10" : "from-primary/5"} to-transparent opacity-0 transition-opacity`}></div>
         <div className="relative flex items-center gap-4">
           <div
             className={`h-12 w-12 shrink-0 rounded-full border ${isActive ? "border-primary/30" : "border-white/20"} bg-gradient-to-br from-white/10 to-transparent flex items-center justify-center shadow-inner`}
@@ -55,7 +55,7 @@
             <h4 className={`font-display text-lg ${isActive ? "text-white" : "text-white/80"}`}>{meta.scholar}</h4>
             <p className={`text-xs ${isActive ? "text-white/60" : "text-white/40"}`}>{meta.school}</p>
           </div>
-          <CheckCircle className={`ml-auto w-5 h-5 shrink-0 transition-colors ${isActive ? "text-primary" : "text-primary/20 group-hover:text-primary/50"}`} />
+          <CheckCircle className={`ml-auto w-5 h-5 shrink-0 transition-colors ${isActive ? "text-primary" : "text-primary/20"}`} />
         </div>
       </div>
     );