Conversation
⚠️ Caution: Review failed. The pull request is closed.

Walkthrough

This update introduces a new data-fetching workflow using Google Cloud Run and Polars, updates infrastructure scripts and Dockerfiles, modifies environment variable management, and adjusts IAM roles and permissions. Several dependency and configuration files are updated, and a previous Flyte workflow for backfilling data is removed.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Workflow as Flyte Workflow
    participant GCloud as gcloud CLI
    participant CloudRun as Google Cloud Run
    participant Datamanager as Datamanager Service
    participant Polars as Polars/pyarrow
    Workflow->>CloudRun: List services in project/region
    CloudRun-->>Workflow: Return service URLs
    Workflow->>GCloud: Get identity token for Datamanager
    GCloud-->>Workflow: Return identity token
    Workflow->>Datamanager: HTTP GET /equity-bars (with token, date range)
    Datamanager-->>Workflow: Return Arrow IPC stream
    Workflow->>Polars: Read Arrow data
    Polars-->>Workflow: Return DataFrame/Series
```
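The request step in the diagram can be sketched as a small pure helper. This is a hedged illustration only: the `/equity-bars` path comes from the diagram, while the query parameter names (`start_date`, `end_date`) and the Cloud Run URL are assumptions based on the `fetch_dates` task signature, not confirmed service API details.

```python
# Hedged sketch of the authenticated GET described in the sequence diagram.
# The parameter names and the base URL below are illustrative assumptions.
from urllib.parse import urlencode


def build_equity_bars_request(base_url: str, token: str,
                              start_date: str, end_date: str) -> tuple[str, dict]:
    """Return (url, headers) for the authenticated GET to the datamanager."""
    query = urlencode({"start_date": start_date, "end_date": end_date})
    url = f"{base_url.rstrip('/')}/equity-bars?{query}"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers


url, headers = build_equity_bars_request(
    "https://datamanager-abc123.a.run.app",  # hypothetical Cloud Run URL
    "identity-token",
    "2025-01-01",
    "2025-01-07",
)
```

Any HTTP client (the PR uses `httpx`) can then issue the GET with these values.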
📜 Recent review details
Configuration used: CodeRabbit UI
📒 Files selected for processing (3)
Pull Request Overview
This pull request introduces a variety of improvements across infrastructure, Docker configurations, and data management workflows. Key changes include new IAM role additions with HMAC key generation for DUCKDB access, Dockerfile and dependency updates for consistency and enhanced functionality, and code refactoring to remove deprecated workflows and improve error handling.
- Updated cloud infrastructure management and Pulumi outputs
- Enhanced data fetching and processing via a new workflow using httpx, polars, and Cloud Run APIs
- Improved Dockerfile configurations and dependency updates across multiple services
Reviewed Changes
Copilot reviewed 17 out of 17 changed files in this pull request and generated 1 comment.
Show a summary per file
| File | Description |
|---|---|
| workflows/pyproject.toml | Added new dependencies for improved workflow functionality |
| workflows/fetch_data.py | Introduced a new data fetching workflow using httpx and polars |
| workflows/backfill_datamanager.py | Removed the deprecated backfill workflow |
| pyproject.toml | Commented out specific linting rules temporarily |
| infrastructure/pyproject.toml | Updated project version |
| infrastructure/project.py | Added a new IAM member for platform service account owner |
| infrastructure/ping.nu | Updated health check script and assertions |
| infrastructure/environment_variables.py | Switched DUCKDB keys to use generated HMAC keys and export outputs |
| infrastructure/buckets.py | Adjusted bucket IAM member role and identifier |
| infrastructure/__main__.py | Exported DATAMANAGER_BASE_URL for easier access |
| application/positionmanager/clients.py | Refactored status code handling with dictionary-style access |
| application/positionmanager/Dockerfile | Updated Python base image version |
| application/datamanager/Dockerfile | Set PYTHONPATH and simplified the uvicorn module path for ENTRYPOINT |
| .mise.toml | Updated run command to invoke the new ping test |
| .flox/env/manifest.toml | Added google-cloud-sdk package to the manifest |
Comments suppressed due to low confidence (1)
infrastructure/ping.nu:41
- [nitpick] The variable 'datamanager_fetch' is used to hold the HTTP status code from the GET request. Renaming it (e.g., to 'fetch_status') would improve clarity regarding its purpose.
assert equal $datamanager_fetch 200
Graphite Automations: "Assign author to pull request" took an action on this PR (06/06/25). 1 assignee was added to this PR based on John Forstmeier's automation.
forstmeier left a comment:
Copilot caught a typo but otherwise that's good.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Actionable comments posted: 9
🧹 Nitpick comments (4)
.flox/env/manifest.toml (1)
16-19: Empty `on-activate` hook is a no-op.
If activation commands aren't needed yet, consider removing this block or documenting its future purpose to avoid confusion.

infrastructure/ping.nu (1)
24-24: Updated test date.
The date change from "2025-01-04" to "2025-01-07" appears to be for testing purposes. Ensure this date is valid for your test data.
workflows/fetch_data.py (2)
4-4: Consider the import style suggestion from static analysis.
The static analysis tool suggests using `from pyarrow import ipc` instead of `import pyarrow.ipc as ipc` for consistency.

```diff
-import pyarrow.ipc as ipc
+from pyarrow import ipc
```

🧰 Tools
🪛 Pylint (3.3.7)
[refactor] 4-4: Use 'from pyarrow import ipc' instead
(R0402)
22-52: Consider adding comprehensive error handling and logging.
The function would benefit from more robust error handling to provide better debugging information when issues occur.
Consider wrapping the entire function logic in try-except blocks and adding logging for better observability:
```python
import logging

@task
def fetch_dates(start_date: str, end_date: str) -> pl.DataFrame:
    try:
        logging.info(f"Fetching data for date range: {start_date} to {end_date}")
        # ... existing logic with error handling improvements
        logging.info(f"Successfully fetched {len(data)} records")
        return data
    except Exception as e:
        logging.error(f"Failed to fetch data: {str(e)}")
        raise
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `.flox/env/manifest.lock` is excluded by `!**/*.lock`
- `uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (15)
- `.flox/env/manifest.toml` (1 hunks)
- `.mise.toml` (1 hunks)
- `application/datamanager/Dockerfile` (2 hunks)
- `application/positionmanager/Dockerfile` (2 hunks)
- `application/positionmanager/src/positionmanager/clients.py` (1 hunks)
- `infrastructure/__main__.py` (2 hunks)
- `infrastructure/buckets.py` (1 hunks)
- `infrastructure/environment_variables.py` (2 hunks)
- `infrastructure/ping.nu` (1 hunks)
- `infrastructure/project.py` (2 hunks)
- `infrastructure/pyproject.toml` (1 hunks)
- `pyproject.toml` (1 hunks)
- `workflows/backfill_datamanager.py` (0 hunks)
- `workflows/fetch_data.py` (1 hunks)
- `workflows/pyproject.toml` (1 hunks)
💤 Files with no reviewable changes (1)
- workflows/backfill_datamanager.py
🧰 Additional context used
🪛 Pylint (3.3.7)
workflows/fetch_data.py
[refactor] 4-4: Use 'from pyarrow import ipc' instead
(R0402)
[error] 7-7: No name 'run_v2' in module 'google.cloud'
(E0611)
🔇 Additional comments (20)
application/positionmanager/Dockerfile (2)
1-1: Pin to a specific Python patch version.
Upgrading from the generic `python:3.12` to `python:3.12.10` ensures consistent builds across environments.

17-17: The `ENTRYPOINT` remains functionally unchanged; no review comments needed.

application/positionmanager/src/positionmanager/clients.py (2)
92-94: Conditional 404 handling is clear.
Switching to `requests.codes["not_found"]` maintains behavior and improves consistency with other HTTP status checks.
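For context, `requests.codes` is a lookup table that supports both attribute and dictionary-style access, so the switch is purely stylistic. A minimal check (assuming the `requests` package is installed):

```python
import requests

# requests.codes maps status-code names to their numeric values and
# supports both attribute access and dictionary-style lookup.
assert requests.codes["not_found"] == 404
assert requests.codes["ok"] == 200

# Both access styles resolve to the same values.
assert requests.codes["not_found"] == requests.codes.not_found
```

The dictionary form also sidesteps linters that flag dynamically created attributes.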
95-98: Non-OK response error handling is appropriate.
Raising a custom `HTTPError` with the response text and status code is consistent and informative.

.flox/env/manifest.toml (1)
13-13: Add Google Cloud SDK to environment.
Including `google-cloud-sdk` aligns with the new GCP workflows and CLI requirements.
3-3: Bump infrastructure version to `20250606.4`.
This aligns the project version with the latest Pulumi exports and IAM changes.

infrastructure/__main__.py (2)
4-4: Import `export` from Pulumi.
Bringing in `export` is necessary to expose stack outputs; this import correctly enables the subsequent call.
74-74: Verify stack output duplication.
Confirm whether `create_environment_variable("DATAMANAGER_BASE_URL", …)` already exports this value. A second `export("DATAMANAGER_BASE_URL", …)` may produce duplicate outputs or conflicts.

workflows/pyproject.toml (1)
8-14: ❓ Verification inconclusive
Verify dependency versions are current and secure.
The new dependencies support the Google Cloud Run integration and data processing workflow. The additions look appropriate for the stated functionality.
Please verify these dependency versions are current and free from known vulnerabilities:
🌐 Web query:
Are the following Python package versions current and secure: google-auth 2.40.2, google-cloud-run 0.10.18, polars 1.30.0, pyarrow 20.0.0, unionai 0.1.183?

💡 Result:
Based on the latest available information, the following Python package versions are not the most current:
google-auth 2.40.2: The latest version is 2.38.0, released on January 23, 2025. (pypi.org)
google-cloud-run 0.10.18: The latest version is 0.10.10. (pypistats.org)
pyarrow 20.0.0: The latest version is 16.1.0. (cloud.google.com)
Regarding polars 1.30.0 and unionai 0.1.183, I couldn't find specific information on their latest versions.
Using outdated versions may expose your project to known vulnerabilities and compatibility issues. It's recommended to update these packages to their latest versions to ensure security and functionality.
Citations:
- 1: https://pypi.org/project/google-auth/?utm_source=openai
- 2: https://pypistats.org/packages/google-cloud-run?utm_source=openai
- 3: https://cloud.google.com/composer/docs/composer-versions?utm_source=openai
Verify and update pinned dependency versions.
The added dependencies align with the new workflow, but please confirm each minimum version matches the latest stable release on PyPI to avoid security or compatibility issues:
- google-auth (>=2.40.2)
- google-cloud-run (>=0.10.18)
- polars (>=1.30.0)
- pyarrow (>=20.0.0)
- unionai (>=0.1.183)
If any of these are outdated, bump to the newest compatible version.
infrastructure/buckets.py (1)
18-18: ✅ Verification successful
Verify the permission elevation is intentional and necessary.
The role was elevated from `roles/storage.objectCreator` (create-only) to `roles/storage.objectAdmin` (full object management including delete, update, and administration). This significantly expands permissions.

Please confirm this permission elevation is necessary for the new functionality and follows the principle of least privilege.
🏁 Script executed:
```bash
#!/bin/bash
# Description: Check if the elevated permissions are used in the codebase
# Expected: Find references to delete/update operations that require objectAdmin role

echo "Searching for Google Cloud Storage operations that require objectAdmin permissions..."
rg -i "delete|remove|update.*bucket|bucket.*delete" --type py
rg "storage\..*\.(delete|remove|update)" --type py
```

Length of output: 2176
Confirmed: `roles/storage.objectAdmin` required for object deletion

- In `application/datamanager/src/datamanager/main.py`, the code calls `bucket.delete_blobs(blobs)`, which requires delete permissions not granted by `roles/storage.objectCreator`.
- No other bucket delete/update operations were found that would contradict this need.

Elevation to `roles/storage.objectAdmin` is intentional and necessary for the new delete functionality.

application/datamanager/Dockerfile (2)
5-5: PYTHONPATH configuration looks correct.
Setting `PYTHONPATH=/app/src` properly configures the Python module search path for the container environment.
17-17: ✅ Verification successful
Verify the simplified module path is correct.
The ENTRYPOINT changed from `datamanager.src.datamanager.main:application` to `datamanager.main:application`. Ensure this simplified path correctly references the application module after the recent refactoring.
🏁 Script executed:
```bash
#!/bin/bash
# Description: Verify the datamanager module structure supports the new import path
# Expected: Find main.py with application object in the correct location

echo "Checking datamanager module structure..."
fd -t f "main.py" application/datamanager/
echo ""
echo "Looking for application object in main.py files..."
rg -A 5 "application.*=" application/datamanager/ || echo "No application object found"
```

Length of output: 868
ENTRYPOINT module path is valid
The `main.py` file under `application/datamanager/src/datamanager/` defines the `application` instance, so using `datamanager.main:application` with `--app-dir src` correctly locates and loads your FastAPI app. No changes needed here.

infrastructure/environment_variables.py (2)
1-5: Import additions look correct.
The new imports for HMAC key management and project dependencies are appropriate for the updated functionality.
36-48: HMAC key generation implementation is sound.
The approach of generating HMAC keys instead of using static secrets is a good security practice. The implementation correctly creates the key using the platform service account and exports the values as stack outputs for external access.
Note that this depends on the platform service account having sufficient permissions, which is addressed by the owner role assignment in `infrastructure/project.py`.

infrastructure/ping.nu (5)
1-1: Good addition of assertion module.
Importing the `std/assert` module enables proper validation of the script's operations.
9-9: Improved service name extraction.
The change from the `service` key to the `name` key provides better clarity in the service object structure.
14-18: Enhanced health check loop with better feedback.
The updated loop provides clearer output by explicitly accessing service properties and confirming successful health checks.

29-29: Good addition of data validation.
The assertion ensures the datamanager returns adequate data (≥100 elements), which is important for validating service functionality.
31-41: ✅ Verification successful
Verify manual URL construction logic.
The manual URL construction for the GET request with explicit scheme, host, path, and params is more verbose but explicit. Ensure the URL joining and parameter handling work correctly with nushell's `url join` command.
🏁 Script executed:
```bash
#!/bin/bash
# Description: Check if nushell url join command works as expected with the structure used
# Expected: Confirm the URL construction pattern is valid

echo "Checking nushell documentation for url join usage..."
```

Length of output: 114
🌐 Web query:
How does nushell's url join command work with scheme, host, path, and params?

💡 Result:
How Nushell's `url join` Command Works

Nushell's `url join` command constructs a URL string from a structured record by combining fields such as `scheme`, `host`, `path`, and query parameters (`params` or `query`). This allows for programmatic and composable URL building directly from Nushell's pipeline.

Core Fields Used:
- `scheme`: The URL scheme (e.g., `"http"`, `"https"`).
- `username` and `password`: For user authentication (optional).
- `host`: The domain or IP address.
- `port`: The port number (optional).
- `path`: The path after the host (can be a string; leading slash is optional).
- `query` or `params`: The query string. You can provide this either as a URL-encoded string via `query`, or a record/list/table via `params`.
- `fragment`: The "hash" part after `#` (optional).

Example Usages
Below are common patterns and behaviors for `url join`:

Minimal Example
```nu
{ scheme: "http", host: "www.example.com" } | url join
# Output: "http://www.example.com"
```

Adding Path and Params
```nu
{ scheme: "http", host: "www.example.com", path: "/search", params: { q: "nushell", sort: "asc" } } | url join
# Output: "http://www.example.com/search?q=nushell&sort=asc"
```
- The `params` field, if set as a record, creates the query string with proper percent-encoding [1][5].

Combining Username, Password, and Port
```nu
{ scheme: "http", username: "user", password: "pwd", host: "example.com", port: "8080" } | url join
# Output: "http://user:pwd@example.com:8080"
```
- Username and password are included before the host, and `port` is appended to the host [1].

Path and Fragment
```nu
{ scheme: "http", host: "www.example.com", path: "page", fragment: "section1" } | url join
# Output: "http://www.example.com/page#section1"
```
- The `fragment` appears at the end, separated by `#` [1].

Handling Query vs Params
Both `params` and `query` are accepted for the query string. Their usage:

- `query`: A pre-encoded string (e.g., `"a=1&b=2"`).
- `params`: A record, list, or even a table that will be encoded by the command. For lists or tables, multiple parameters with the same key are supported:

```nu
{ scheme: "http", host: "example.com", params: { a: ["one", "two"], b: "three" } } | url join
# Output: "http://example.com?a=one&a=two&b=three"
```

or with a table of key-value pairs:
```nu
{ scheme: "http", host: "localhost", params: [
    ["key", "value"],
    ["par_1", "aaa"],
    ["par_2", "bbb"],
    ["par_1", "ccc"],
] } | url join
# Output: "http://localhost?par_1=aaa&par_2=bbb&par_1=ccc"
```

[1][5]
Summary Table
| Field | Function |
|---|---|
| scheme | URL scheme (e.g., http, https) |
| username | Username (optional, before '@') |
| password | Password (optional, before '@') |
| host | Domain or IP |
| port | Port number (optional) |
| path | Path after host (optional) |
| query | Prebuilt query string (optional) |
| params | Record/list/table for query params |
| fragment | Fragment after '#' (optional) |

Key Points
- `url join` expects a record where each field corresponds to a logical URL component.
- Both `params` (structured record, list, table) and `query` (raw string) can be used for the query string; `params` is generally preferred for programmatic construction [1][5].
- If both `params` and `query` are present, `params` is encoded and used.
- Supports all parts of standard URLs including authentication, port, and fragments.
For more details and examples, see the official documentation[1][5][2].
Citations:
- 1: https://www.nushell.sh/commands/docs/url_join.html
- 2: https://www.nushell.sh/commands/docs/url.html
- 3: https://www.nushell.sh/commands/
- 4: https://www.nushell.sh/commands/docs/join.html
- 5: https://www.nushell.sh/blog/2024-11-12-nushell_0_100_0.html
- 6: https://mvolkmann.github.io/blog/nushell/?v=1.1.1
- 7: https://www.nushell.sh/book/configuration.html
- 8: https://www.nushell.sh/commands/categories/network.html
- 9: https://www.nushell.sh/book/special_variables.html
- 10: https://jpospisil.com/2023/05/25/writing-shell-scripts-in-nushell
URL construction with `url join` is correct

Nushell's `url join` command will correctly assemble the URL from the structured record, combining `scheme`, `host`, `path`, and the `params` record into a properly encoded query string. The code in infrastructure/ping.nu (lines 31-41) is valid as written; no further changes are needed.

workflows/fetch_data.py (1)
11-19: LGTM! Good error handling in the subprocess call.
The function properly uses `check=True` to raise exceptions on command failure and strips the output to remove whitespace.
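For reference, a self-contained sketch of the pattern under review, assuming `gcloud` is on PATH and the caller is already authenticated (this version uses `capture_output=True`, which is equivalent to piping both streams):

```python
import subprocess


def get_identity_token() -> str:
    """Fetch an identity token via the gcloud CLI.

    check=True raises CalledProcessError on a non-zero exit, and the
    trailing newline from gcloud's output is stripped before returning.
    Assumes the gcloud CLI is installed and authenticated.
    """
    result = subprocess.run(
        ["gcloud", "auth", "print-identity-token"],
        capture_output=True,
        check=True,
        text=True,
    )
    return result.stdout.strip()
```

The returned token is then passed as a `Bearer` credential when calling the Cloud Run service.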
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 0
♻️ Duplicate comments (1)
workflows/fetch_data.py (1)
1-9: ⚠️ Potential issue

Add missing `os` import to fix runtime errors.
The code uses `os.getenv()` on lines 25-26 but the `os` module is not imported, which will cause a `NameError` at runtime.

Apply this diff to fix the import:

```diff
 import subprocess
+import os
 import httpx
 import polars as pl
```

Additionally, consider addressing the import sorting and style issues flagged by static analysis tools for better code consistency.
🧰 Tools
🪛 GitHub Check: Run Python quality checks
[failure] 4-4: Ruff (PLR0402)
workflows/fetch_data.py:4:8: PLR0402 Use `from pyarrow import ipc` in lieu of alias
[failure] 1-8: Ruff (I001)
workflows/fetch_data.py:1:1: I001 Import block is un-sorted or un-formatted

🪛 Pylint (3.3.7)
[refactor] 4-4: Use 'from pyarrow import ipc' instead
(R0402)
[error] 7-7: No name 'run_v2' in module 'google.cloud'
(E0611)
🧹 Nitpick comments (1)
workflows/fetch_data.py (1)
11-19: Security flags are acceptable for this use case, but consider improvements.
Static analysis tools flag the subprocess usage as a security concern, but calling `gcloud auth print-identity-token` is the intended and secure way to obtain identity tokens for Google Cloud services in this context.
Consider this optional improvement for better subprocess handling:

```diff
 def get_identity_token() -> str:
     result = subprocess.run(
         ["gcloud", "auth", "print-identity-token"],
-        stdout=subprocess.PIPE,
-        stderr=subprocess.PIPE,
+        capture_output=True,
         check=True,
         text=True,
     )
     return result.stdout.strip()
```

🧰 Tools
🪛 GitHub Check: Run Python quality checks
[failure] 13-13: Ruff (S607)
workflows/fetch_data.py:13:9: S607 Starting a process with a partial executable path
[failure] 12-12: Ruff (S603)
workflows/fetch_data.py:12:14: S603 `subprocess` call: check for execution of untrusted input
[failure] 12-18: Ruff (UP022)
workflows/fetch_data.py:12:14: UP022 Prefer `capture_output` over sending `stdout` and `stderr` to `PIPE`
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `infrastructure/buckets.py` (1 hunks)
- `workflows/fetch_data.py` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- infrastructure/buckets.py
🧰 Additional context used
🪛 Ruff (0.11.9)
workflows/fetch_data.py
25-25: Undefined name os
(F821)
26-26: Undefined name os
(F821)
🪛 GitHub Check: Run Python quality checks
workflows/fetch_data.py
[failure] 39-41: Ruff (TRY003)
workflows/fetch_data.py:39:15: TRY003 Avoid specifying long messages outside the exception class
[failure] 26-26: Ruff (F821)
workflows/fetch_data.py:26:14: F821 Undefined name os
[failure] 25-25: Ruff (F821)
workflows/fetch_data.py:25:15: F821 Undefined name os
[failure] 13-13: Ruff (S607)
workflows/fetch_data.py:13:9: S607 Starting a process with a partial executable path
[failure] 12-12: Ruff (S603)
workflows/fetch_data.py:12:14: S603 subprocess call: check for execution of untrusted input
[failure] 12-18: Ruff (UP022)
workflows/fetch_data.py:12:14: UP022 Prefer capture_output over sending stdout and stderr to PIPE
[failure] 4-4: Ruff (PLR0402)
workflows/fetch_data.py:4:8: PLR0402 Use from pyarrow import ipc in lieu of alias
[failure] 1-8: Ruff (I001)
workflows/fetch_data.py:1:1: I001 Import block is un-sorted or un-formatted
🪛 Pylint (3.3.7)
workflows/fetch_data.py
[refactor] 4-4: Use 'from pyarrow import ipc' instead
(R0402)
[error] 7-7: No name 'run_v2' in module 'google.cloud'
(E0611)
[error] 25-25: Undefined variable 'os'
(E0602)
[error] 26-26: Undefined variable 'os'
(E0602)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Run Python tests
🔇 Additional comments (1)
workflows/fetch_data.py (1)
22-59: Good implementation with proper error handling.
The function correctly implements the data fetching workflow with appropriate error handling for:
- Missing datamanager service (lines 38-39)
- HTTP request failures (line 49)
- Data conversion failures (lines 56-57)
The configurable project/region approach using environment variables is well implemented once the `os` import is added.

🧰 Tools
🪛 Ruff (0.11.9)
25-25: Undefined name `os` (F821)
26-26: Undefined name `os` (F821)
🪛 GitHub Check: Run Python quality checks
[failure] 39-41: Ruff (TRY003)
workflows/fetch_data.py:39:15: TRY003 Avoid specifying long messages outside the exception class
[failure] 26-26: Ruff (F821)
workflows/fetch_data.py:26:14: F821 Undefined name `os`
[failure] 25-25: Ruff (F821)
workflows/fetch_data.py:25:15: F821 Undefined name `os`

🪛 Pylint (3.3.7)
[error] 25-25: Undefined variable 'os'
(E0602)
[error] 26-26: Undefined variable 'os'
(E0602)
This pull request introduces a variety of changes across multiple areas, including infrastructure updates, Dockerfile adjustments, code refactoring, and workflow enhancements. The most significant updates focus on improving cloud infrastructure management, updating dependencies, and enhancing functionality for data fetching and processing.
Infrastructure Updates:
- Added a new IAM member `platform-service-account-owner` to grant owner permissions to the platform service account in `infrastructure/project.py`.
- Generated HMAC keys for `DUCKDB_ACCESS_KEY` and `DUCKDB_SECRET` in `infrastructure/environment_variables.py`, replacing the previous use of secrets from configuration.
- Exported `DATAMANAGER_BASE_URL` and HMAC key details (`duckdb_access_key`, `duckdb_secret`) to Pulumi outputs for easier access.

Dockerfile Adjustments:
- Pinned the Python base image to `3.12.10` in `application/datamanager/Dockerfile` and `application/positionmanager/Dockerfile` for consistency and compatibility.
- Updated the `ENTRYPOINT` in `application/datamanager/Dockerfile` to simplify the module path for `uvicorn`.

Code Refactoring:
- Replaced `requests.codes.not_found` and `requests.codes.ok` with dictionary-style access (`requests.codes["not_found"]`, `requests.codes["ok"]`) in `application/positionmanager/src/positionmanager/clients.py` for improved readability.
- Removed the deprecated `backfill_datamanager.py` workflow, replacing it with a new `fetch_data.py` workflow that uses `httpx`, `polars`, and Google Cloud Run APIs for fetching and processing data.

Workflow Enhancements:
- Added a new `fetch_dates` task in `workflows/fetch_data.py` for fetching data from the `datamanager` service using Google Cloud Run and Polars for data processing.
- Updated `workflows/pyproject.toml` to include `google-auth`, `google-cloud-run`, `polars`, and `pyarrow` for the new workflow.

Miscellaneous:
- Bumped the version in `infrastructure/pyproject.toml` to `20250606.4` and in `workflows/pyproject.toml` to `0.1.0` for version tracking.
- Commented out specific rules in the `tool.ruff.lint` section of `pyproject.toml` to disable them temporarily.