feat: Improve the developer journey for example notebooks (part 2) #913
Conversation
Signed-off-by: Will Killian <[email protected]>
Walkthrough
Expands the observability/evaluation/profiling notebook to add Phoenix-based telemetry, NAT tool definitions, workflow configs, evaluation/profiling configurations, datasets, and instructions to run workflows, evals, and profiling end-to-end, including artifact generation and output directory organization.
Changes
Sequence Diagram(s)
sequenceDiagram
autonumber
participant U as User
participant NB as Notebook
participant NAT as NAT CLI/Runtime
participant WF as Workflow (Agents + Tools)
participant PH as Phoenix
participant DS as Data
U->>NB: Run setup cells
NB->>NAT: nat run -c config.yml
NAT->>WF: Initialize agents/tools
WF->>DS: Load CSV / product_catalog
WF->>PH: Emit telemetry (traces/logs)
WF->>WF: Analyze data / RAG / visualize
WF-->>NAT: Results + artifacts
NAT-->>NB: Output_dir with results
NB-->>U: Display results/paths
rect rgb(235,245,255)
note over PH: Phoenix observability (new)
end
sequenceDiagram
autonumber
participant U as User
participant NB as Notebook
participant NAT as NAT CLI
participant EV as Evaluators
participant PH as Phoenix
U->>NB: Trigger eval
NB->>NAT: nat eval -c config_eval.yml --data eval_data.json
NAT->>EV: Run rag_* and trajectory evaluators
EV->>PH: Send telemetry (optional)
EV-->>NAT: Metrics/summaries
NAT-->>NB: Eval reports
sequenceDiagram
autonumber
participant U as User
participant NB as Notebook
participant NAT as NAT CLI
participant PR as Profiler
participant PH as Phoenix
U->>NB: Trigger profiling
NB->>NAT: nat profile -c config_profile.yml
NAT->>PR: Collect runtime/LLM/concurrency data
PR->>PH: Emit traces/metrics
PR-->>NB: profile_output + gantt_chart.png
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~60 minutes
Suggested labels
feature request, non-breaking
Pre-merge checks and finishing touches
✅ Passed checks (3 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
examples/notebooks/4_observability_evaluation_and_profiling.ipynb (1)
1219-1240: Avoid binding Phoenix to 0.0.0.0 by default
Setting PHOENIX_HOST=0.0.0.0 exposes the Phoenix UI on every network interface. On shared or cloud notebook environments this opens an unauthenticated observability surface to anyone who can reach the machine, which is risky. Default to 127.0.0.1 (loopback) and add an explicit warning or opt-in instructions if external access is truly required.
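A minimal sketch of the suggested default, assuming the notebook launches Phoenix in-process via the arize-phoenix package (the exact notebook cells are not reproduced in this review, so treat the environment-variable handling as an assumption):

```python
import os

# Bind the Phoenix UI to loopback only; exposing it on all interfaces
# should be an explicit, documented opt-in rather than the default.
os.environ.setdefault("PHOENIX_HOST", "127.0.0.1")
os.environ.setdefault("PHOENIX_PORT", "6006")

import phoenix as px  # arize-phoenix

# launch_app() picks up PHOENIX_HOST / PHOENIX_PORT from the environment.
session = px.launch_app()
print(f"Phoenix UI available at {session.url}")
```

Users who genuinely need remote access (for example, from a cloud VM) can still reach the UI by port-forwarding or SSH-tunneling to the loopback port rather than binding to 0.0.0.0.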
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
examples/notebooks/4_observability_evaluation_and_profiling.ipynb (7 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*
⚙️ CodeRabbit configuration file
**/*: # Code Review Instructions
- Ensure the code follows best practices and coding standards.
- For Python code, follow PEP 20 and PEP 8 for style guidelines.
- Check for security vulnerabilities and potential issues.
- Python methods should use type hints for all parameters and return values. Example: def my_function(param1: int, param2: str) -> bool: pass
- For Python exception handling, ensure proper stack trace preservation:
  - When re-raising exceptions: use bare raise statements to maintain the original stack trace, and use logger.error() (not logger.exception()) to avoid duplicate stack trace output.
  - When catching and logging exceptions without re-raising: always use logger.exception() to capture the full stack trace information.
Documentation Review Instructions
- Verify that documentation and comments are clear and comprehensive.
- Verify that the documentation doesn't contain any TODOs, FIXMEs or placeholder text like "lorem ipsum".
- Verify that the documentation doesn't contain any offensive or outdated terms.
- Verify that documentation and comments are free of spelling mistakes, and ensure the documentation doesn't contain any words listed in the ci/vale/styles/config/vocabularies/nat/reject.txt file; words that might appear to be spelling mistakes but are listed in the ci/vale/styles/config/vocabularies/nat/accept.txt file are OK.
Misc.
- All code (except .mdc files that contain Cursor rules) should be licensed under the Apache License 2.0, and should contain an Apache License 2.0 header comment at the top of each file.
- Confirm that copyright years are up to date whenever a file is changed.
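That exception-handling convention can be shown with a short, self-contained sketch (the function and variable names below are illustrative, not taken from the PR):

```python
import logging

logger = logging.getLogger(__name__)


def parse_port(value: str) -> int:
    """Re-raising path: log with logger.error() and use a bare raise."""
    try:
        return int(value)
    except ValueError:
        # The bare raise preserves the original stack trace; logger.error()
        # avoids logging the traceback twice.
        logger.error("Invalid port value: %s", value)
        raise


def parse_port_or_default(value: str, default: int = 6006) -> int:
    """Catch-and-log path: logger.exception() records the full traceback."""
    try:
        return int(value)
    except ValueError:
        logger.exception("Falling back to default port %d", default)
        return default
```

The first function leaves error handling to the caller while still recording context; the second swallows the error, so the full traceback goes to the log instead.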
Files:
examples/notebooks/4_observability_evaluation_and_profiling.ipynb
examples/**/*
⚙️ CodeRabbit configuration file
examples/**/*:
- This directory contains example code and usage scenarios for the toolkit; at a minimum an example should contain a README.md or README.ipynb file.
- If an example contains Python code, it should be placed in a subdirectory named src/ and should contain a pyproject.toml file. Optionally, it might also contain scripts in a scripts/ directory.
- If an example contains YAML files, they should be placed in a subdirectory named configs/.
- If an example contains sample data files, they should be placed in a subdirectory named data/, and should be checked into git-lfs.
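Read together, those conventions imply a layout along these lines for a hypothetical example (directory and file names below are illustrative, not taken from this PR):

```text
examples/my_example/
├── README.md          # or README.ipynb
├── pyproject.toml     # packaging for the example's Python code
├── configs/           # YAML workflow/eval/profile configs
│   └── config.yml
├── data/              # sample data files, checked into git-lfs
│   └── eval_data.json
├── scripts/           # optional helper scripts
└── src/               # Python source for the example
```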
Files:
examples/notebooks/4_observability_evaluation_and_profiling.ipynb
🪛 Ruff (0.13.3)
examples/notebooks/4_observability_evaluation_and_profiling.ipynb
66-66: Redefinition of unused FunctionInfo from line 22
Remove definition: FunctionInfo
(F811)
77-77: Unused function argument: builder
(ARG001)
113-113: Redefinition of unused FunctionInfo from line 66
Remove definition: FunctionInfo
(F811)
166-166: Redefinition of unused FunctionInfo from line 113
Remove definition: FunctionInfo
(F811)
185-185: Loop control variable root overrides iterable it iterates
(B020)
185-185: Loop control variable dirs not used within loop body
Rename unused dirs to _dirs
(B007)
229-229: Do not catch blind exception: Exception
(BLE001)
230-230: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
231-231: Use explicit conversion flag
Replace with conversion flag
(RUF010)
242-242: Redefinition of unused FunctionInfo from line 166
Remove definition: FunctionInfo
(F811)
299-299: Unused function argument: arg
(ARG001)
332-332: Unused function argument: arg
(ARG001)
360-360: Redefinition of unused llama_index_rag_tool from line 191
(F811)
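Most of the findings above repeat a few patterns. A hedged sketch of the kind of fix each pattern calls for (the notebook's real code is not shown in this review, so the names and file filters below are assumptions):

```python
import logging
import os

logger = logging.getLogger(__name__)

# F811: import FunctionInfo once, near the top of the notebook, and drop the
# re-imports inside later tool-definition cells.

# B020 / B007: don't reuse the iterable's name as a loop variable, and prefix
# unused loop variables with an underscore.
def find_csv_files(base_dir: str) -> list[str]:
    matches: list[str] = []
    for root, _dirs, files in os.walk(base_dir):
        matches.extend(os.path.join(root, f) for f in files if f.endswith(".csv"))
    return matches

# BLE001 / TRY400: catch a narrower exception where possible, and use
# logger.exception() when logging an error that is not re-raised.
def safe_read(path: str) -> str | None:
    try:
        with open(path, encoding="utf-8") as handle:
            return handle.read()
    except OSError:
        logger.exception("Failed to read %s", path)
        return None
```

ARG001 is typically silenced by prefixing the unused parameter with an underscore (for example, _builder), assuming the toolkit does not require the exact parameter name.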
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: CI Pipeline / Check
Signed-off-by: Will Killian <[email protected]>
/merge
1 similar comment
/merge
Description
Updates the Observability, Evaluation, and Profiling example notebook
Closes
By Submitting this PR I confirm:
Summary by CodeRabbit
New Features
Documentation
Chores