Conversation
📝 Walkthrough
This PR moves the backend service's module imports into local scope, adds streaming chunk tracking and timeout handling, includes prompt-injection prevention instructions, replaces a frontend UI button, and updates the test mocking targets.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/components/chat/ChatMain.tsx (1)
62-65: ⚠️ Potential issue | 🟠 Major — The button's accessible name disappears on mobile.
The line 64 text is hidden on small screens, leaving an icon-only button.
Add an aria-label to the button so screen readers can identify its function.
Suggested fix

```diff
-<button onClick={onClearChat} className="p-2 sm:px-4 sm:py-2 rounded-full bg-white/5 border border-white/10 text-white/60 text-sm hover:bg-white/10 hover:text-white transition-colors flex items-center gap-2">
+<button
+  onClick={onClearChat}
+  aria-label="새 대화 시작"
+  title="새 대화"
+  className="p-2 sm:px-4 sm:py-2 rounded-full bg-white/5 border border-white/10 text-white/60 text-sm hover:bg-white/10 hover:text-white transition-colors flex items-center gap-2"
+>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/components/chat/ChatMain.tsx` around lines 62 - 65, The button that calls onClearChat currently hides its text on small screens leaving only the Plus icon, so add an accessible name by adding an aria-label to that button (e.g., aria-label="새 대화" or an appropriate localized string) on the element that uses the Plus icon and onClearChat handler; ensure the aria-label matches the visible label ("새 대화") so screen readers can identify the button when the span is hidden.
🧹 Nitpick comments (1)
backend/app/main.py (1)
94-95: Logging the traceback on readiness failures makes operational analysis easier.
Currently only the exception message is logged, which limits root-cause tracing. Switching to logger.exception makes incident analysis much smoother.
Suggested fix

```diff
-        except Exception as e:
-            logger.warning("Preload task failed during readiness check: %s", e)
+        except Exception:
+            logger.exception("Preload task failed during readiness check")
         return JSONResponse({"status": "failed"}, status_code=503)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/main.py` around lines 94 - 95, The except block catching Exception in main.py currently calls logger.warning("Preload task failed during readiness check: %s", e) which logs only the message; replace that call with logger.exception("Preload task failed during readiness check") inside the same except block so the full traceback is recorded for the Preload readiness check failure (locate the except Exception as e handler around the readiness check in main.py).
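The behavioral difference is easy to verify in isolation. The sketch below is standalone demo code, not the project's main.py; the logger name and error message are illustrative. It shows that `logger.exception` attaches the active traceback to the log record, while a `%s`-formatted warning records only the message.

```python
# Standalone demo: logger.exception vs. logger.warning("%s", e).
# The logger name and the RuntimeError message are made up for illustration.
import io
import logging

logger = logging.getLogger("readiness_demo")
stream = io.StringIO()
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.DEBUG)

try:
    raise RuntimeError("model preload failed")
except Exception:
    # Inside an except block, logger.exception logs at ERROR level
    # and appends the active traceback automatically.
    logger.exception("Preload task failed during readiness check")

output = stream.getvalue()
print("Traceback" in output)  # True: the stack trace is part of the record
```

With `logger.warning("...: %s", e)` the same handler would record only the single message line, which is why the review suggests the switch.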
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/api/routes/chat.py`:
- Around line 131-143: The fallback branch can run after a break when the client
disconnected; modify the loop around get_response_stream_async to track
disconnection by introducing a boolean (e.g., was_disconnected) that is set true
when await request.is_disconnected() is true and you break, and then only run
the chunk_count == 0 fallback when was_disconnected is false; update references
to chunk_count and chunk_clean unchanged but gate the final yield behind a check
like if chunk_count == 0 and not was_disconnected to avoid sending fallback
content to disconnected clients.
In `@backend/app/main.py`:
- Around line 52-53: The except block catching Exception in the
shutdown/waiting-for-preload-task code should not declare an unused variable;
change the clause from "except Exception as e:" to "except Exception:" and leave
the call to logger.exception("Exception occurred while waiting for preload task
during shutdown.") intact so there is no unused variable noise. Ensure you
update the except header where logger.exception is invoked.
In `@backend/app/services/llm.py`:
- Around line 126-135: The async generator from chain.astream (assigned to
generator) isn't closed on StopAsyncIteration or timeout; wrap the iteration
loop in try/finally and in the finally call await generator.aclose() (guarding
if generator is not None) to ensure cleanup, and replace the print("LLM stream
chunk timed out...") with a logger.error or logger.warning call (use the
module's logger instance, e.g., logger) so timeouts are logged properly; keep
the existing asyncio.wait_for and generator.__anext__ usage but ensure the
finally block always runs to call aclose().
---
Outside diff comments:
In `@frontend/components/chat/ChatMain.tsx`:
- Around line 62-65: The button that calls onClearChat currently hides its text
on small screens leaving only the Plus icon, so add an accessible name by adding
an aria-label to that button (e.g., aria-label="새 대화" or an appropriate
localized string) on the element that uses the Plus icon and onClearChat
handler; ensure the aria-label matches the visible label ("새 대화") so screen
readers can identify the button when the span is hidden.
---
Nitpick comments:
In `@backend/app/main.py`:
- Around line 94-95: The except block catching Exception in main.py currently
calls logger.warning("Preload task failed during readiness check: %s", e) which
logs only the message; replace that call with logger.exception("Preload task
failed during readiness check") inside the same except block so the full
traceback is recorded for the Preload readiness check failure (locate the except
Exception as e handler around the readiness check in main.py).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 2320605a-18a9-459b-9a48-df0f84aeac57
📒 Files selected for processing (7)
- README.md
- backend/app/api/routes/chat.py
- backend/app/main.py
- backend/app/services/llm.py
- backend/tests/e2e/test_chat_endpoint.py
- backend/tests/integration/test_supabase_match.py
- frontend/components/chat/ChatMain.tsx
```python
        async for chunk in get_response_stream_async(context=combined_context, query=english_query, history=formatted_history):
            # If client disconnects, stop generating
            if await request.is_disconnected():
                break

            chunk_count += 1
            # Clean up chunk to avoid SSE formatting issues with newlines
            chunk_clean = chunk.replace("\n", "\\n")
            yield {"event": "content", "data": chunk_clean}

        if chunk_count == 0:
            logger.warning("LLM returned 0 chunks. Sending a fallback message.")
            yield {"event": "content", "data": "철학자는 난색을 표하며 서적을 뒤적거립니다. 대신 철학자가 답변을 해줄 만한 다른 질문은 없을까요?"}
```
Skipping the zero-chunk fallback branch when the client has disconnected is safer.
Currently, even when the loop breaks because the connection dropped, the fallback content can still be generated if chunk_count == 0. Track the disconnect state in a separate flag and use it to suppress the fallback.
Suggested fix
```diff
-    chunk_count = 0
+    chunk_count = 0
+    disconnected = False
     async for chunk in get_response_stream_async(context=combined_context, query=english_query, history=formatted_history):
         # If client disconnects, stop generating
         if await request.is_disconnected():
+            disconnected = True
             break
@@
-    if chunk_count == 0:
+    if not disconnected and chunk_count == 0:
         logger.warning("LLM returned 0 chunks. Sending a fallback message.")
         yield {"event": "content", "data": "철학자는 난색을 표하며 서적을 뒤적거립니다. 대신 철학자가 답변을 해줄 만한 다른 질문은 없을까요?"}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/routes/chat.py` around lines 131 - 143, The fallback branch
can run after a break when the client disconnected; modify the loop around
get_response_stream_async to track disconnection by introducing a boolean (e.g.,
was_disconnected) that is set true when await request.is_disconnected() is true
and you break, and then only run the chunk_count == 0 fallback when
was_disconnected is false; update references to chunk_count and chunk_clean
unchanged but gate the final yield behind a check like if chunk_count == 0 and
not was_disconnected to avoid sending fallback content to disconnected clients.
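As a standalone illustration of this gated-fallback pattern — FakeRequest and the stub streams below are hypothetical stand-ins for FastAPI's Request and the real LLM stream, not project code:

```python
import asyncio

class FakeRequest:
    """Hypothetical stand-in for fastapi.Request; models only the disconnect check."""
    def __init__(self, disconnected: bool):
        self._disconnected = disconnected
    async def is_disconnected(self) -> bool:
        return self._disconnected

async def one_chunk_stream():
    yield "hello"

async def empty_stream():
    return
    yield  # unreachable; makes this an async generator

async def stream_events(request, source):
    chunk_count = 0
    disconnected = False
    async for chunk in source:
        if await request.is_disconnected():
            disconnected = True
            break
        chunk_count += 1
        yield {"event": "content", "data": chunk}
    # Gate the fallback: a disconnected client should receive nothing.
    if chunk_count == 0 and not disconnected:
        yield {"event": "content", "data": "fallback"}

async def main():
    # Client gone: loop breaks before any chunk is counted, fallback suppressed.
    gone = [e async for e in stream_events(FakeRequest(True), one_chunk_stream())]
    # Client connected but stream empty: fallback is still delivered.
    here = [e async for e in stream_events(FakeRequest(False), empty_stream())]
    return gone, here

gone, here = asyncio.run(main())
print(len(gone), here[0]["data"])  # 0 fallback
```

Without the flag, the disconnected case would fall through to the `chunk_count == 0` branch and attempt to yield fallback content to a client that is no longer listening.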
```python
        except Exception as e:
            logger.exception("Exception occurred while waiting for preload task during shutdown.")
```
Remove the unused exception variable.
The `as e` on line 52 is never used; drop it to reduce noise.
Suggested fix
```diff
-        except Exception as e:
+        except Exception:
             logger.exception("Exception occurred while waiting for preload task during shutdown.")
```
🧰 Tools
🪛 Ruff (0.15.2)
[error] 52-52: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/main.py` around lines 52 - 53, The except block catching
Exception in the shutdown/waiting-for-preload-task code should not declare an
unused variable; change the clause from "except Exception as e:" to "except
Exception:" and leave the call to logger.exception("Exception occurred while
waiting for preload task during shutdown.") intact so there is no unused
variable noise. Ensure you update the except header where logger.exception is
invoked.
```python
    generator = chain.astream({"context": context, "chat_history": history, "query": query})
    while True:
        try:
            chunk = await asyncio.wait_for(generator.__anext__(), timeout=30.0)
            yield chunk
        except StopAsyncIteration:
            break
        except asyncio.TimeoutError:
            print("LLM stream chunk timed out after 30 seconds.")
            raise
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Check the current implementation of get_response_stream_async
rg -n "def get_response_stream_async|astream\(|__anext__\(|aclose\(" backend/app/services/llm.py
```
Repository: SanghunYun95/philo-rag
Length of output: 332

🏁 Script executed:
```shell
# Read the full context around lines 126-135 in llm.py
cat -n backend/app/services/llm.py | sed -n '110,145p'
```
Repository: SanghunYun95/philo-rag
Length of output: 1507

🏁 Script executed:
```shell
# Check imports and logger setup in llm.py
head -30 backend/app/services/llm.py
```
Repository: SanghunYun95/philo-rag
Length of output: 1067
Missing cleanup of the async generator can leave resources behind.
The current loop does not explicitly close the underlying astream generator on completion or timeout. Call aclose() in a finally block to guarantee the cleanup path. Using a logger instead of print() is also more appropriate in production.
Suggested fix
```diff
+import logging
+
+logger = logging.getLogger(__name__)
+
 async def get_response_stream_async(context: str, query: str, history: str = ""):
     """
     Returns an async stream of strings from the LLM.
     """
     prompt = get_rag_prompt()
     chain = prompt | get_llm() | StrOutputParser()
     generator = chain.astream({"context": context, "chat_history": history, "query": query})
-    while True:
+    try:
+        while True:
+            try:
+                chunk = await asyncio.wait_for(generator.__anext__(), timeout=30.0)
+                yield chunk
+            except StopAsyncIteration:
+                break
+            except asyncio.TimeoutError:
+                logger.warning("LLM stream chunk timed out after 30 seconds.")
+                raise
+    finally:
+        await generator.aclose()
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/services/llm.py` around lines 126 - 135, The async generator from
chain.astream (assigned to generator) isn't closed on StopAsyncIteration or
timeout; wrap the iteration loop in try/finally and in the finally call await
generator.aclose() (guarding if generator is not None) to ensure cleanup, and
replace the print("LLM stream chunk timed out...") with a logger.error or
logger.warning call (use the module's logger instance, e.g., logger) so timeouts
are logged properly; keep the existing asyncio.wait_for and generator.__anext__
usage but ensure the finally block always runs to call aclose().
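For reference, here is a self-contained sketch of the suggested shape, with slow_stream as a hypothetical stand-in for chain.astream and a deliberately short timeout. The inner generator's finally clause is observed to run on the timeout path, confirming the cleanup happens.

```python
import asyncio
import logging

logger = logging.getLogger(__name__)
cleanup_log = []

async def slow_stream():
    """Hypothetical stand-in for chain.astream: one fast chunk, then a stall."""
    try:
        yield "first"
        await asyncio.sleep(60)  # simulates a hung LLM chunk
        yield "never reached"
    finally:
        cleanup_log.append("closed")  # records that the generator was cleaned up

async def relay(timeout: float = 0.05):
    generator = slow_stream()
    try:
        while True:
            try:
                chunk = await asyncio.wait_for(generator.__anext__(), timeout)
                yield chunk
            except StopAsyncIteration:
                break
            except asyncio.TimeoutError:
                logger.warning("stream chunk timed out after %.2fs", timeout)
                raise
    finally:
        # Always runs, so the inner generator is closed on every exit path.
        await generator.aclose()

async def main():
    received = []
    try:
        async for chunk in relay():
            received.append(chunk)
    except asyncio.TimeoutError:
        pass
    return received

received = asyncio.run(main())
print(received, cleanup_log)  # ['first'] ['closed']
```

When the timeout fires, asyncio.wait_for cancels the pending `__anext__` call, which already unwinds the inner generator; the finally/aclose() pair guarantees the same cleanup on every other exit path as well.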
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Documentation
Tests