
Conversation

@nikhil-zlai
Contributor

@nikhil-zlai nikhil-zlai commented Jul 6, 2024

Summary

as title

Checklist

  • Added Unit Tests
  • Covered by existing CI
  • Integration tested: `docker build --progress=plain -t chronon-base .`
  • Documentation update

Summary by CodeRabbit

  • New Features

    • Added a new Docker image setup based on Alpine Linux, providing tools like Python, Amazon Corretto JDK, SBT for Scala, Thrift, and common Python packages.
  • Updates

    • Scala version updated to 2.12.18.
    • Thrift package updated to version 0.20.0.
    • Updated various development tools to their latest versions, including black, isort, pytest, tox, and more.
  • Documentation

    • Simplified the pull request template by removing unnecessary sections and comments.
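For context, an Alpine-based image along the lines described above might look like the following sketch. The package names, versions, and URLs here are illustrative assumptions, not the exact Dockerfile introduced in this PR:

```dockerfile
# Illustrative sketch only -- package names and versions are assumptions,
# not the exact .github/image/Dockerfile from this PR.
FROM alpine:3.19

# Python toolchain and common utilities
RUN apk add --no-cache python3 py3-pip bash curl

# JDK (the PR uses Amazon Corretto; openjdk17 is shown as a stand-in here,
# since Corretto on Alpine is typically installed from its musl tarball)
RUN apk add --no-cache openjdk17

# sbt for Scala 2.12.18 builds, installed from a release tarball
RUN curl -fsSL https://github.com/sbt/sbt/releases/download/v1.9.9/sbt-1.9.9.tgz \
    | tar -xz -C /usr/local \
 && ln -s /usr/local/sbt/bin/sbt /usr/local/bin/sbt

# Thrift compiler plus matching Python bindings and common dev packages
RUN apk add --no-cache thrift \
 && pip3 install --no-cache-dir thrift==0.20.0 black isort pytest tox
```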

@coderabbitai
Contributor

coderabbitai bot commented Jul 6, 2024

Warning

Review failed

The pull request is closed.

Walkthrough

This update includes major version changes to Scala and various Python dependencies, a new Dockerfile setup for an Alpine Linux Docker image, and modifications to the pull request template. These changes enhance compatibility with updated tools, streamline the development environment, and align dependencies with the latest versions.

Changes

Files Change Summary
.circleci/Dockerfile Scala version updated from 2.11.7 to 2.12.18.
.github/image/Dockerfile Introduced a new Dockerfile based on Alpine Linux setting up Python, Corretto JDK, SBT, Thrift, and common packages.
.github/pull_request_template.md Removed comments and sections related to overview, goals, test plans, and reviewers.
api/py/requirements/base.in Updated version constraint for thrift package to ==0.20.0.
api/py/requirements/base.txt Updated thrift from 0.13.0 to 0.20.0.
api/py/requirements/dev.txt Updated various development dependencies to their latest versions.

Poem

🐇✨ In code we trust, our tools align,
Updating Scala, Thrift, and more, we'll shine.
Docker sings with Alpine might,
A smoother path, day and night.
Dependencies up-to-date, all is bright,
Coding bunnies hop with delight!
🚀🌟



@nikhil-zlai nikhil-zlai merged commit 837a7e4 into main Jul 6, 2024
@nikhil-zlai nikhil-zlai deleted the docker_image branch July 6, 2024 03:01
@sean-zlai sean-zlai mentioned this pull request Feb 6, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 25, 2025
Docker image for github actions
piyush-zlai added a commit that referenced this pull request May 2, 2025
…r Fetcher threadpool (#726)

## Summary
This PR swaps our metrics reporter from StatsD to OpenTelemetry metrics.
We need otel to capture metrics in Etsy without a Prometheus statsd
exporter sidecar, which they've occasionally seen issues with. Otel is a
popular metrics ingestion interface with a number of supported backends
(e.g. Prometheus / Datadog / GCloud / AWS CloudWatch). Wiring up otel
also enables us to set up traces and spans in the repo in the future.
Broad changes:
- Decouple the bulk of the metrics reporting logic from Metrics.Context.
The metrics reporter is pluggable; currently it is just OpenTelemetry,
but in principle we can support others in the future.
- The Online module creates the appropriate [otel
SDK](https://opentelemetry.io/docs/languages/java/sdk/) - either the
[HTTP provider or the Prometheus HTTP
server](https://opentelemetry.io/docs/languages/java/configuration/#properties-exporters).
We need the HTTP provider to plug into Vert.x, as its Micrometer
integration works with that. The Prometheus HTTP server is what Etsy is
keen for us to use.

## Checklist
- [ ] Added Unit Tests
- [X] Covered by existing CI
- [X] Integration tested
- [ ] Documentation update

Tested via the Docker container and a local instance of the OpenTelemetry
collector. Start up the fetcher Docker service:
```
docker run -v ~/.config/gcloud/application_default_credentials.json:/gcp/credentials.json  -p 9000:9000  -e "GCP_PROJECT_ID=canary-443022"  -e "GOOGLE_CLOUD_PROJECT=canary-443022"  -e "GCP_BIGTABLE_INSTANCE_ID=zipline-canary-instance"  -e "EXPORTER_OTLP_ENDPOINT=http://host.docker.internal:4318"  -e GOOGLE_APPLICATION_CREDENTIALS=/gcp/credentials.json  zipline-fetcher:latest
```

Then start the otel collector:
```
./otelcol --config otel-collector-config.yaml
...
```
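A minimal `otel-collector-config.yaml` along these lines would accept OTLP over HTTP on port 4318 (matching the `EXPORTER_OTLP_ENDPOINT` above) and print whatever it receives. This is a sketch under assumptions, not the exact config used in testing:

```yaml
# Illustrative collector config: receive OTLP/HTTP, dump metrics to stdout.
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [debug]
```

With the `debug` exporter at `detailed` verbosity, the collector emits a ResourceMetrics dump of the kind shown in the output that follows.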

We see:
```
2025-04-18T17:35:37.351-0400	info	ResourceMetrics #0
Resource SchemaURL: 
Resource attributes:
     -> service.name: Str(ai.chronon)
     -> telemetry.sdk.language: Str(java)
     -> telemetry.sdk.name: Str(opentelemetry)
     -> telemetry.sdk.version: Str(1.49.0)
ScopeMetrics #0
ScopeMetrics SchemaURL: 
InstrumentationScope ai.chronon 3.7.0-M11
Metric #0
Descriptor:
     -> Name: kv_store.bigtable.cache.insert
     -> Description: 
     -> Unit: 
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> dataset: Str(TableId{tableId=CHRONON_METADATA})
     -> environment: Str(kv_store)
     -> production: Str(false)
StartTimestamp: 2025-04-18 21:31:52.180857637 +0000 UTC
Timestamp: 2025-04-18 21:35:37.18442138 +0000 UTC
Value: 1
Metric #1
Descriptor:
     -> Name: kv_store.bigtable.multiGet.latency
     -> Description: 
     -> Unit: 
     -> DataType: Histogram
     -> AggregationTemporality: Cumulative
HistogramDataPoints #0
Data point attributes:
     -> dataset: Str(TableId{tableId=CHRONON_METADATA})
     -> environment: Str(kv_store)
     -> production: Str(false)
StartTimestamp: 2025-04-18 21:31:52.180857637 +0000 UTC
Timestamp: 2025-04-18 21:35:37.18442138 +0000 UTC
Count: 1
Sum: 229.000000
Min: 229.000000
Max: 229.000000
ExplicitBounds #0: 0.000000
...
Buckets #0, Count: 0
...
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced OpenTelemetry-based metrics reporting throughout the
platform, replacing the previous StatsD approach.
- Added a Dockerfile and startup script for a new Fetcher service,
supporting both AWS and GCP integrations with configurable metrics
export.
- Enhanced thread pool monitoring with a new executor that provides
detailed metrics on task execution and queue status.

- **Improvements**
- Metrics tags are now structured as key-value maps, improving clarity
and flexibility.
- Metrics reporting is now context-aware, supporting per-dataset and
per-table metrics.
- Increased thread pool queue capacity for better throughput under load.
- Replaced StatsD metrics configuration with OpenTelemetry OTLP in
service launcher and build configurations.

- **Bug Fixes**
- Improved error handling and logging in metrics reporting and thread
pool management.

- **Chores**
- Updated dependencies to include OpenTelemetry, Micrometer OTLP
registry, Prometheus, OkHttp, and Kotlin libraries.
- Refactored build and test configurations to support new telemetry
libraries and remove deprecated dependencies.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
…r Fetcher threadpool (#726)
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
…r Fetcher threadpool (#726)
chewy-zlai pushed a commit that referenced this pull request May 16, 2025
…r Fetcher threadpool (#726)
