
feat: add AI orchestration frameworks (Langflow, CrewAI, AutoGen)#53

Merged
marcusquinn merged 2 commits into main from feature/ai-orchestration-frameworks
Jan 11, 2026
Conversation


@marcusquinn marcusquinn commented Jan 11, 2026

Summary

Add comprehensive integration for three major AI orchestration frameworks, with common design patterns aligned with the existing Agno integration.

Frameworks Added

| Framework | License | GUI Port | Key Strength |
|-----------|---------|----------|--------------|
| Langflow | MIT | 7860 | Visual drag-and-drop workflow builder |
| CrewAI | MIT | 8501 | Multi-agent teams with role-based orchestration |
| AutoGen | MIT | 8081 | Microsoft's agentic AI framework with MCP support |

New Files

Documentation (.agent/tools/ai-orchestration/):

  • overview.md - Decision matrix and framework comparison
  • langflow.md - Langflow setup and usage guide
  • crewai.md - CrewAI setup and usage guide
  • autogen.md - AutoGen setup and usage guide
  • packaging.md - Deployment and packaging guide (Docker, FastAPI, PyInstaller)

Helper Scripts (.agent/scripts/):

  • langflow-helper.sh - Setup/start/stop/status commands
  • crewai-helper.sh - Setup/start/stop/status commands
  • autogen-helper.sh - Setup/start/stop/status commands

Config Templates (configs/):

  • langflow-config.json.txt
  • crewai-config.json.txt
  • autogen-config.json.txt

Updated Files

  • .agent/AGENTS.md - Added tools/ai-orchestration/ category
  • README.md - Added AI Orchestration Frameworks section
  • setup.sh - Added setup_ai_orchestration() function

Common Design Patterns

All frameworks follow the same pattern as the existing Agno integration (a sketch of a generated start script appears after this list):

  • Runtime directory: ~/.aidevops/{tool}/
  • Virtual environment: venv/
  • Environment config: .env
  • Management scripts: start-{tool}.sh, stop-{tool}.sh, {tool}-status.sh
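
As a concrete illustration of this layout, here is a minimal sketch of what a generated start script could look like. This is not the PR's exact script; the paths, flags, and PID-file handling are assumptions based on the description above.

```bash
#!/bin/bash
# Illustrative sketch of a generated start-langflow.sh following the common layout.
set -euo pipefail

TOOL_DIR="$HOME/.aidevops/langflow"

# Load provider keys / overrides if the per-tool .env exists
if [[ -f "$TOOL_DIR/.env" ]]; then
    # shellcheck source=/dev/null
    source "$TOOL_DIR/.env"
fi
PORT="${LANGFLOW_PORT:-7860}"

# Activate the per-tool virtual environment
source "$TOOL_DIR/venv/bin/activate"

# Launch in the background and record the PID for the matching stop script
langflow run --host 127.0.0.1 --port "$PORT" &
echo $! > "$TOOL_DIR/langflow.pid"
echo "Langflow starting on http://localhost:$PORT"
```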

Key Features

  • Zero lock-in: All MIT licensed, fully open-source
  • Local LLM support: All frameworks support Ollama
  • Git integration: JSON/YAML exports for version control (see the export sketch after this list)
  • Packaging guide: Templates for Docker, FastAPI, PyInstaller, serverless
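
For the Git integration point above, a hedged sketch of the intended export-and-commit flow. The helper's export action and the target paths are assumptions based on this PR's description, not a confirmed interface.

```bash
# Export current Langflow flows as JSON and commit them for version control.
# Paths and the export action are illustrative; adjust to the helper's actual interface.
~/Git/aidevops/.agent/scripts/langflow-helper.sh export flows/
git add flows/*.json
git commit -m "chore: export Langflow flows for version control"
```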

Testing

  • ShellCheck passes on all new scripts
  • Scripts are executable
  • Documentation follows aidevops patterns

Related

  • Addresses project brief for integrating Langflow, CrewAI, and AutoGen
  • Parallel worktree for local setup guidance will reference these integrations

Summary by CodeRabbit

Release Notes

  • New Features

    • Added AI orchestration framework support with setup and management tools for Langflow, CrewAI, and AutoGen.
  • Documentation

    • Added comprehensive guides covering AI orchestration frameworks, deployment patterns, packaging workflows, and integration examples.



coderabbitai bot commented Jan 11, 2026

Warning

Rate limit exceeded

@marcusquinn has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 6 minutes and 46 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 768ddc9 and 7099c1a.

📒 Files selected for processing (14)
  • .agent/AGENTS.md
  • .agent/scripts/autogen-helper.sh
  • .agent/scripts/crewai-helper.sh
  • .agent/scripts/langflow-helper.sh
  • .agent/tools/ai-orchestration/autogen.md
  • .agent/tools/ai-orchestration/crewai.md
  • .agent/tools/ai-orchestration/langflow.md
  • .agent/tools/ai-orchestration/overview.md
  • .agent/tools/ai-orchestration/packaging.md
  • README.md
  • configs/autogen-config.json.txt
  • configs/crewai-config.json.txt
  • configs/langflow-config.json.txt
  • setup.sh

Walkthrough

Introduces comprehensive AI orchestration framework support (Langflow, CrewAI, AutoGen) through helper scripts, documentation, configuration templates, and main setup integration, enabling multi-agent framework orchestration within the aidevops ecosystem.

Changes

  • Helper Scripts (.agent/scripts/autogen-helper.sh, .agent/scripts/crewai-helper.sh, .agent/scripts/langflow-helper.sh): Three Bash orchestration scripts providing framework setup, virtual environment configuration, prerequisite validation (Python 3.10+, pip/uv), and runtime management (start/stop/status). Each includes colored messaging, dependency checks, scaffold generation (example scripts, configs, .env templates), and CLI action dispatching.
  • Framework Documentation (.agent/tools/ai-orchestration/autogen.md, crewai.md, langflow.md): Comprehensive per-framework guides covering quick reference, installation, configuration (env vars, YAML), usage examples, local LLM support (Ollama, LM Studio), deployment (Docker, FastAPI), integration patterns, troubleshooting, and resources.
  • Orchestration Overview & Packaging (.agent/tools/ai-orchestration/overview.md, packaging.md): Master orchestration guide with framework decision matrix, common design patterns, integration guidance, and Git export workflows. Packaging guide covers deployment targets (Web, Desktop, Mobile, Serverless) with FastAPI/Docker/Kubernetes examples, CI/CD patterns, and observability hooks.
  • Configuration Templates (configs/autogen-config.json.txt, configs/crewai-config.json.txt, configs/langflow-config.json.txt): JSON configuration files defining studio integration, model providers (OpenAI, Anthropic, Ollama), tooling options, Git integration hooks, setup requirements, and AI DevOps benefits for each framework.
  • Main Setup Integration & Registry (setup.sh, .agent/AGENTS.md, README.md): Adds setup_ai_orchestration() function to the main setup flow with framework directory creation and guidance. Updates the AGENTS.md Subagent Folders table with a tools/ai-orchestration/ entry. Updates the README with an AI Orchestration Frameworks section and the MainWP addition.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant SetupScript as setup.sh
    participant HelperScript as Helper Script<br/>(autogen/crewai/langflow)
    participant VirtualEnv as Virtual Env
    participant FrameworkCLI as Framework CLI
    participant FileSystem as File System

    User->>SetupScript: Run setup.sh
    SetupScript->>SetupScript: Check setup_ai_orchestration
    SetupScript->>HelperScript: Call helper setup action
    HelperScript->>HelperScript: Check prerequisites<br/>(Python 3.10+, pip, uv)
    HelperScript->>FileSystem: Create directories<br/>($HOME/.aidevops/*)
    HelperScript->>VirtualEnv: Create Python venv
    HelperScript->>VirtualEnv: Install framework packages
    HelperScript->>FileSystem: Generate .env template
    HelperScript->>FileSystem: Create config files<br/>(example scripts, studio_app.py)
    HelperScript->>FileSystem: Create management scripts<br/>(start/stop/status)
    HelperScript->>User: Return success & guidance
sequenceDiagram
    participant User
    participant ManagementScript as Management Script<br/>(start/stop/status)
    participant VirtualEnv as Virtual Env
    participant FrameworkService as Framework Service<br/>(Studio/CLI)
    participant StatusCheck as Health Check

    User->>ManagementScript: Execute action<br/>(start/stop/status)
    ManagementScript->>VirtualEnv: Activate venv
    ManagementScript->>ManagementScript: Load .env
    alt Action: Start
        ManagementScript->>FrameworkService: Launch service<br/>(port, config)
        FrameworkService->>User: Accessible on localhost
    else Action: Stop
        ManagementScript->>FrameworkService: Terminate process
        ManagementScript->>ManagementScript: Fallback: pkill
    else Action: Status
        ManagementScript->>StatusCheck: Check process/port
        StatusCheck->>FrameworkService: Query health endpoint
        StatusCheck->>User: Report status & info
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

🤖 Three frameworks dance in harmony now,
Langflow, CrewAI, AutoGen bow,
Helper scripts orchestrate the way,
Setup guides light the DevOps day,
Multi-agent dreams in code, I say! ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
|------------|--------|-------------|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately and concisely summarizes the main change: adding three AI orchestration frameworks to the project with clear framework names. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 80.43%, which is sufficient. The required threshold is 80.00%. |




Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist

Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the project's AI capabilities by incorporating three prominent AI orchestration frameworks: Langflow, CrewAI, and AutoGen. This integration provides a versatile toolkit for developing and managing AI agents and complex workflows, ranging from visual programming to multi-agent collaboration. The changes adhere to a consistent, open-source design philosophy, ensuring ease of setup, configuration, and future extensibility.

Highlights

  • New AI Orchestration Frameworks: Integrated three leading AI orchestration frameworks: Langflow (visual workflow builder), CrewAI (multi-agent teams), and AutoGen (Microsoft's agentic AI framework).
  • Standardized Helper Scripts: Introduced dedicated helper scripts (langflow-helper.sh, crewai-helper.sh, autogen-helper.sh) for streamlined setup, start, stop, and status management of each framework.
  • Comprehensive Documentation: Added extensive documentation for each framework, including detailed setup and usage guides, an overview comparing the frameworks, and a guide for packaging AI automations for deployment.
  • Consistent Integration Patterns: Ensured all new integrations follow a common design pattern, utilizing virtual environments, .env configuration, and standardized management scripts for ease of use and consistency.
  • Core System Updates: Updated .agent/AGENTS.md, README.md, and setup.sh to reflect the new AI orchestration capabilities and provide a central entry point for their installation and management.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
|---------|---------|-------------|
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@github-actions

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 256 code smells

[INFO] Recent monitoring activity:
Sun Jan 11 16:49:37 UTC 2026: Code review monitoring started
Sun Jan 11 16:49:37 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 256
Sun Jan 11 16:49:38 UTC 2026: Qlty - 0 issues found, auto-formatting applied
Sun Jan 11 16:49:40 UTC 2026: Codacy analysis completed with auto-fixes

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 256
  • VULNERABILITIES: 0

Generated on: Sun Jan 11 16:50:17 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces comprehensive integration for three popular AI orchestration frameworks: Langflow, CrewAI, and AutoGen. The changes are well-structured, providing helper scripts, documentation, and configuration templates for each framework, following common design patterns. The documentation is particularly thorough, with overviews, comparisons, and deployment guides.

My review focuses on improving the robustness and user experience of the new helper scripts. I've identified a few areas for improvement:

  • In the CrewAI helper, the generated Streamlit UI has some unused elements that could be confusing.
  • The Langflow helper script is missing a Python version check and suppresses potentially useful error messages.

Overall, this is an excellent addition that significantly expands the capabilities of the AI DevOps framework. The changes are well-implemented and will be very valuable for users looking to work with these orchestration tools.

Comment on lines +219 to +269
    # Model selection
    model = st.sidebar.selectbox(
        "Select Model",
        ["gpt-4o-mini", "gpt-4o", "gpt-4-turbo", "ollama/llama3.2"]
    )


medium

The model selected in this selectbox is not used when creating the crew. The crew will use the default model from its configuration, which could be confusing. To avoid this, I suggest removing this UI element.

Comment on lines +233 to +282
    col1, col2 = st.columns(2)
    with col1:
        num_agents = st.slider("Number of Agents", 1, 5, 2)
    with col2:
        process_type = st.selectbox("Process Type", ["sequential", "hierarchical"])


medium

The num_agents slider value is not used. The crew is created with a fixed number of two agents. This UI element is misleading. I suggest removing the slider and the column layout.

Suggested change
-    col1, col2 = st.columns(2)
-    with col1:
-        num_agents = st.slider("Number of Agents", 1, 5, 2)
-    with col2:
-        process_type = st.selectbox("Process Type", ["sequential", "hierarchical"])
+    process_type = st.selectbox("Process Type", ["sequential", "hierarchical"])

Comment on lines +59 to +112
    # Check Python
    if command -v python3 &> /dev/null; then
        local python_version
        python_version=$(python3 --version 2>&1 | cut -d' ' -f2)
        print_success "Python 3 found: $python_version"
    else
        print_error "Python 3 not found"
        missing=1
    fi


medium

This script checks for the presence of python3 but doesn't enforce a minimum version. Langflow requires Python 3.10+, and not checking for this could lead to installation or runtime errors. The other helper scripts in this PR correctly check for Python 3.10+. I recommend adding a version check for consistency and robustness.

Suggested change
-    # Check Python
-    if command -v python3 &> /dev/null; then
-        local python_version
-        python_version=$(python3 --version 2>&1 | cut -d' ' -f2)
-        print_success "Python 3 found: $python_version"
-    else
-        print_error "Python 3 not found"
-        missing=1
-    fi
+    # Check Python
+    if command -v python3 &> /dev/null; then
+        local python_version
+        python_version=$(python3 --version 2>&1 | cut -d' ' -f2)
+        local major minor
+        major=$(echo "$python_version" | cut -d. -f1)
+        minor=$(echo "$python_version" | cut -d. -f2)
+        if [[ $major -ge 3 ]] && [[ $minor -ge 10 ]]; then
+            print_success "Python $python_version found (3.10+ required)"
+        else
+            print_error "Python 3.10+ required, found $python_version"
+            missing=1
+        fi
+    else
+        print_error "Python 3 not found"
+        missing=1
+    fi

mkdir -p "$output_dir"

# Export all flows
if langflow export --all --output "$output_dir" 2>/dev/null; then


medium

Suppressing stderr with 2>/dev/null can hide important error messages from the langflow export command, making it difficult to debug failures. It's better to let the user see the original error.

Suggested change
-    if langflow export --all --output "$output_dir" 2>/dev/null; then
+    if langflow export --all --output "$output_dir"; then

local count=0
for flow_file in "$input_dir"/*.json; do
if [[ -f "$flow_file" ]]; then
if langflow import --file "$flow_file" 2>/dev/null; then


medium

Suppressing stderr with 2>/dev/null can hide important error messages from the langflow import command, making it difficult to debug failures. It's better to let the user see the original error.

Suggested change
-            if langflow import --file "$flow_file" 2>/dev/null; then
+            if langflow import --file "$flow_file"; then


augmentcode bot commented Jan 11, 2026

🤖 Augment PR Summary

Summary: Adds first-class integration docs and helper tooling for three AI orchestration frameworks (Langflow, CrewAI, AutoGen), following the same local-runtime patterns used for Agno.

Changes:

  • Introduced a new documentation set under .agent/tools/ai-orchestration/ (overview + per-framework setup/usage + packaging/deployment guidance).
  • Added helper scripts for each framework (langflow-helper.sh, crewai-helper.sh, autogen-helper.sh) to standardize setup/start/stop/status workflows.
  • Added config template JSON files in configs/ for Langflow/CrewAI/AutoGen to document defaults and integration options.
  • Updated .agent/AGENTS.md to include the new tools/ai-orchestration/ category.
  • Updated README.md to advertise the new orchestration frameworks and their default localhost ports.
  • Extended setup.sh with setup_ai_orchestration() to surface the new frameworks and point to setup commands.

Technical Notes: All three frameworks are designed to run locally in ~/.aidevops/{tool}/ with a dedicated Python venv, a .env file for provider keys, and management scripts in ~/.aidevops/scripts/.



@augmentcode augmentcode bot left a comment


Review completed. 4 suggestions posted.


Comment augment review to trigger a new review at any time.

if [[ -f "$flow_file" ]]; then
if langflow import --file "$flow_file" 2>/dev/null; then
print_success "Imported: $(basename "$flow_file")"
((count++))


Because this script runs with set -e, ((count++)) will exit with status 1 on the first successful import (when count was 0), which can prematurely abort the import_flows loop.
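
To make the failure mode concrete, a small standalone sketch (not the PR's code) showing why the arithmetic command trips `set -e` and two safe alternatives:

```bash
#!/bin/bash
# Under set -e, ((count++)) fails when count is 0: the expression evaluates to 0,
# so the arithmetic command returns exit status 1 and the script aborts.
set -euo pipefail
count=0

count=$((count + 1))   # safe: an assignment with arithmetic expansion always returns 0
: $((count++))         # also safe: the no-op builtin discards the expression's exit status

echo "count=$count"    # prints 2 in this sketch
```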


# Wait for startup
sleep 5

if curl -s http://localhost:7860/health >/dev/null 2>&1; then


This health check (and the generated status output below) hard-codes port 7860, so if LANGFLOW_PORT is set via .env the script can report a false negative / wrong URLs.
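
A sketch of a port-aware health check, assuming LANGFLOW_PORT may be set in the runtime .env; this is illustrative, not the PR's exact script:

```bash
# Read the configured port before probing the health endpoint instead of hard-coding 7860.
LANGFLOW_DIR="$HOME/.aidevops/langflow"
if [[ -f "$LANGFLOW_DIR/.env" ]]; then
    # shellcheck source=/dev/null
    source "$LANGFLOW_DIR/.env"
fi
PORT="${LANGFLOW_PORT:-7860}"

if curl -s "http://localhost:${PORT}/health" >/dev/null 2>&1; then
    echo "Langflow is healthy at http://localhost:${PORT}"
else
    echo "Langflow did not respond on port ${PORT}"
fi
```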


if command -v python3 &> /dev/null; then
local python_version
python_version=$(python3 --version 2>&1 | cut -d' ' -f2)
print_success "Python 3 found: $python_version"


check_prerequisites only checks that python3 exists, but the rest of the repo/docs/configs call out Python 3.10+ for Langflow; this can lead to attempting setup on unsupported versions.


print_info "Installing AutoGen..."
if command -v uv &> /dev/null; then
uv pip install autogen-agentchat -U
uv pip install 'autogen-ext[openai]' -U


The setup installs autogen-ext[openai], but this PR’s config/docs also reference MCP support (e.g., autogen-ext[mcp] and autogen_ext.tools.mcp), so there’s a mismatch between documented capability and installed dependencies.
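
A hedged sketch of how the install step could be aligned with the documented MCP support. The extras named here (openai, mcp) come from this comment and the PR docs, so verify them against the current AutoGen packages before adopting:

```bash
# Install AutoGen with both the OpenAI and MCP extras so the documented
# autogen_ext.tools.mcp usage has its dependencies available.
if command -v uv &> /dev/null; then
    uv pip install autogen-agentchat -U
    uv pip install 'autogen-ext[openai,mcp]' -U
else
    pip install autogen-agentchat -U
    pip install 'autogen-ext[openai,mcp]' -U
fi
```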



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🤖 Fix all issues with AI agents
In @.agent/scripts/autogen-helper.sh:
- Line 107: The cd command in the function uses "cd \"$AUTOGEN_DIR\" || exit 1",
which will terminate the whole script; change the failure handler to return from
the function instead of exiting the process by replacing "exit 1" with "return
1" (i.e., use "cd \"$AUTOGEN_DIR\" || return 1") so the function returns an
error status without killing the entire script; apply the same change wherever
the pattern "|| exit 1" appears inside helper functions.

In @.agent/scripts/crewai-helper.sh:
- Around line 64-71: The version check in crewai-helper.sh incorrectly uses '[[
$major -ge 3 ]] && [[ $minor -ge 10 ]]' which rejects Python 4.x; change the
condition to allow any major > 3 or major == 3 with minor >= 10 (i.e., use a
compound check like major -gt 3 || (major -eq 3 && minor -ge 10)) so the block
around print_success/print_error (using variables python_version, major, minor
and setting missing=1) correctly accepts future major versions while still
enforcing 3.10+ for Python 3.
- Around line 490-491: The command `cd "$project_dir" || exit 1` inside the
function terminates the whole script; replace `exit 1` with `return 1` so the
function returns an error code instead of exiting the process—update the `cd
"$project_dir" || exit 1` line to `cd "$project_dir" || return 1` (preserving
the surrounding function and any existing error handling pattern).

In @.agent/tools/ai-orchestration/autogen.md:
- Line 27: autogen.md and autogen-helper.sh currently reference and create
~/.aidevops/autogen/.env but AGENTS.md is the authoritative source and requires
credentials in ~/.config/aidevops/mcp-env.sh with 600 perms; update
autogen-helper.sh to stop creating/sourcing .env and instead source
~/.config/aidevops/mcp-env.sh (ensure permission checks and fail-fast if
missing), update autogen.md to reference the AGENTS.md standard path and
permissions and replace inline credential examples with placeholders pointing to
that secure file, and amend the Quick Reference text to note that
start/stop/status scripts are generated by the setup step rather than being
pre-existing.

In @.agent/tools/ai-orchestration/overview.md:
- Around line 177-179: The doc snippet shows a mismatched script name pattern;
update the overview to reflect the actual generated scripts by replacing the
generic `start_{tool}.sh` with the real naming convention
`start-{tool}-studio.sh` (e.g., `start-crewai-studio.sh`,
`start-autogen-studio.sh`) and ensure the directory list and any
examples/comments mention the hyphenated `-studio` suffix so the documentation
matches the implemented filenames.
- Around line 73-75: Replace the hard-coded GitHub star counts for projects (the
bulleted items referencing Langflow, CrewAI, AutoGen) with relative,
non-time-sensitive phrases (e.g., "highly popular", "widely adopted",
"well-known") or add a clear "as of <date>" timestamp and a maintenance note;
update the three occurrences mentioned (the bullets at the diff and the similar
lists at the other two locations) to use these relative descriptors or
timestamped counts to avoid stale data.
🧹 Nitpick comments (14)
.agent/tools/ai-orchestration/autogen.md (2)

119-119: Minor typographical refinement.

Line 119: "Hello World" should be followed by a comma in this documentation context for grammatical consistency:

- Say 'Hello World!', (with comma)

This is a low-priority fix but noted for completeness per static analysis.


1-406: Comprehensive AutoGen documentation with strong structure and examples.

The new subagent documentation provides excellent coverage:

  • ✅ Quick reference with immediate actionable paths
  • ✅ Clear architecture overview
  • ✅ Both automated and manual installation paths
  • ✅ Multiple usage patterns (Hello World, MCP, multi-agent orchestration)
  • ✅ GUI support (AutoGen Studio)
  • ✅ Local LLM integration (Ollama, Azure)
  • ✅ Cross-platform support (.NET)
  • ✅ Git integration best practices
  • ✅ Deployment patterns (Docker, FastAPI)
  • ✅ Troubleshooting section with migration guidance
  • ✅ Proper use of placeholders in all code examples

The documentation follows aidevops patterns and integrates well with the progressive disclosure strategy outlined in AGENTS.md.

Consider expanding the "Integration Examples" section (lines 328-365) with explicit cross-references to sibling frameworks (Langflow, CrewAI) to clarify when to use each framework and how they complement each other. This would help users make informed tool selection decisions.

README.md (1)

442-451: Consider adding documentation links for improved user discoverability.

The AI Orchestration Frameworks section is well-structured and integrates cleanly into the existing pattern with appropriate MIT licensing and localhost ports. Supporting documentation exists for each framework (overview, langflow, crewai, autogen, packaging), but direct README links could enhance navigation and discovery for users exploring the orchestration tools without adding unnecessary clutter.

Minor observation: MainWP placement under WordPress Development (line 451) is logically appropriate and intentionally separated from the AI Orchestration section—correct positioning.

.agent/tools/ai-orchestration/packaging.md (3)

161-237: ✓ Solid Kubernetes manifest with security-conscious secret management.

Deployment properly uses Secrets for API keys (secretKeyRef for OPENAI_API_KEY), resource limits are reasonable, and LoadBalancer Service is appropriate for external access. The three-replica setup provides basic redundancy. Consider adding liveness/readiness probes in production for enhanced resilience, though the current setup is deployable.


238-266: ✓ Production-ready SaaS patterns with Stripe integration.

API key authentication using FastAPI's Security dependency is correct. Stripe metered billing implementation with usage records is sound. Note: Production code should add webhook signature verification for Stripe events, though this is beyond the scope of a boilerplate example.


268-356: ✓ Desktop applications with proper GUI threading and Electron integration.

Tkinter threading pattern (using root.after()) is correct for GUI responsiveness. PyInstaller syntax is accurate with --onefile and --windowed flags. Electron's spawn() pattern for Python backend is standard. One note: Consider documenting --onedir vs --onefile trade-offs (smaller individual files vs. larger initial download), though both are valid approaches.

configs/langflow-config.json.txt (1)

60-96: ✓ Comprehensive model provider and security configuration.

Model provider defaults (OpenAI, Anthropic, Ollama, optional Google) are well-chosen. All credentials use environment variable pattern for security. CORS is appropriately restrictive (localhost only). Git integration with auto_export disabled by default is sensible. The "environment": "local" designation clarifies this is development-focused, though production deployment might benefit from a note about enabling authentication and rate_limiting in deployment.docker/production sections.

setup.sh (1)

1650-1654: Consider combining local declarations with assignments.

The coding guidelines specify using local var="$value" pattern. Currently, variables are declared and assigned separately:

local python_version
python_version=$(python3 --version 2>&1 | cut -d' ' -f2)
local major minor
major=$(echo "$python_version" | cut -d. -f1)
minor=$(echo "$python_version" | cut -d. -f2)
♻️ Proposed refactor to match coding guidelines
-        local python_version
-        python_version=$(python3 --version 2>&1 | cut -d' ' -f2)
-        local major minor
-        major=$(echo "$python_version" | cut -d. -f1)
-        minor=$(echo "$python_version" | cut -d. -f2)
+        local python_version
+        python_version=$(python3 --version 2>&1 | cut -d' ' -f2)
+        local major
+        major=$(echo "$python_version" | cut -d. -f1)
+        local minor
+        minor=$(echo "$python_version" | cut -d. -f2)

Based on coding guidelines.

.agent/scripts/langflow-helper.sh (1)

60-67: Consider following the local variable assignment pattern.

The coding guidelines specify using local var="$value" pattern. Currently python_version is declared and assigned separately.

♻️ Proposed refactor
     # Check Python
     if command -v python3 &> /dev/null; then
-        local python_version
-        python_version=$(python3 --version 2>&1 | cut -d' ' -f2)
+        local python_version
+        python_version=$(python3 --version 2>&1 | cut -d' ' -f2)
         print_success "Python 3 found: $python_version"
     else

Note: The current pattern is already split across two lines, which is acceptable. The guideline primarily discourages inline assignment like local python_version=$(...) which can mask errors due to local's return code.

Based on coding guidelines.
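
For context, a tiny sketch of why the split pattern matters under `set -e` (illustrative, not from the PR):

```bash
#!/bin/bash
set -euo pipefail

demo() {
    # Inline form: `local` itself returns 0, so the failure of the command
    # substitution is masked and execution continues.
    local masked=$(false)
    echo "still running despite the failed substitution ('$masked')"

    # Split form: the assignment's exit status (1) is visible, so set -e aborts here.
    local visible
    visible=$(false)
    echo "never reached"
}
demo
```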

.agent/scripts/crewai-helper.sh (3)

117-131: Consider adding install verification.

The set -e should catch install failures, but adding explicit verification after package installation would provide clearer feedback to users when something goes wrong.

Optional enhancement
     if command -v uv &> /dev/null; then
         uv pip install crewai -U
         uv pip install 'crewai[tools]' -U
         uv pip install streamlit -U
     else
         pip install crewai -U
         pip install 'crewai[tools]' -U
         pip install streamlit -U
     fi
+    
+    # Verify installation
+    if ! python -c "import crewai" 2>/dev/null; then
+        print_error "CrewAI installation verification failed"
+        return 1
+    fi

434-436: Status script output can be confusing.

The pgrep -f outputs PIDs directly, then ps aux | grep outputs full process info, and grep -v grep filters. This can result in orphaned PIDs printed without context if the ps command timing differs.

Cleaner process info output
 echo ""
 echo "Process Information:"
-pgrep -f "streamlit.*studio_app" && ps aux | grep -E "streamlit.*studio_app" | grep -v grep || echo "No CrewAI Studio processes found"
+if pgrep -f "streamlit.*studio_app" >/dev/null 2>&1; then
+    ps aux | grep -E "streamlit.*studio_app" | grep -v grep
+else
+    echo "No CrewAI Studio processes found"
+fi

600-604: Unknown actions silently show help instead of error.

The help|* pattern means typos like strat instead of start silently show help. Consider warning on unknown actions while still showing help.

Optional: Warn on unknown actions
         "help")
             show_usage
             ;;
+        *)
+            print_warning "Unknown action: $action"
+            show_usage
+            return 1
+            ;;
-        "help"|*)
-            show_usage
-            ;;
.agent/scripts/autogen-helper.sh (1)

33-49: Duplicate helper functions across scripts.

The print_info, print_success, print_warning, and print_error functions are identical across crewai-helper.sh, autogen-helper.sh, and langflow-helper.sh. Consider extracting to a shared library.

Extract to shared library

Create .agent/scripts/lib/common.sh:

#!/bin/bash
# Common helper functions for AI orchestration scripts

RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

print_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
print_success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
print_warning() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
print_error() { echo -e "${RED}[ERROR]${NC} $1"; }

Then source it in each helper:

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/lib/common.sh"
.agent/tools/ai-orchestration/overview.md (1)

282-284: Prefer SIGTERM before SIGKILL.

Using kill -9 (SIGKILL) immediately doesn't allow graceful shutdown. Recommend trying SIGTERM first.

Gentler process termination
 # Kill process on port
-kill -9 $(lsof -t -i:7860)
+# Try graceful shutdown first, then force if needed
+kill $(lsof -t -i:7860) 2>/dev/null || kill -9 $(lsof -t -i:7860)
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a01adb7 and 768ddc9.

📒 Files selected for processing (14)
  • .agent/AGENTS.md
  • .agent/scripts/autogen-helper.sh
  • .agent/scripts/crewai-helper.sh
  • .agent/scripts/langflow-helper.sh
  • .agent/tools/ai-orchestration/autogen.md
  • .agent/tools/ai-orchestration/crewai.md
  • .agent/tools/ai-orchestration/langflow.md
  • .agent/tools/ai-orchestration/overview.md
  • .agent/tools/ai-orchestration/packaging.md
  • README.md
  • configs/autogen-config.json.txt
  • configs/crewai-config.json.txt
  • configs/langflow-config.json.txt
  • setup.sh
🧰 Additional context used
📓 Path-based instructions (4)
**/*.md

📄 CodeRabbit inference engine (AGENTS.md)

Use placeholders in examples and note secure storage location in documentation

Files:

  • README.md
.agent/scripts/*.sh

⚙️ CodeRabbit configuration file

.agent/scripts/*.sh: Automation scripts - focus on:

  • Reliability and robustness
  • Clear logging and feedback
  • Proper exit codes
  • Error recovery mechanisms

Files:

  • .agent/scripts/crewai-helper.sh
  • .agent/scripts/autogen-helper.sh
  • .agent/scripts/langflow-helper.sh
**/setup.sh

📄 CodeRabbit inference engine (AGENTS.md)

Deploy agents locally using ./setup.sh script

Files:

  • setup.sh
**/*.sh

📄 CodeRabbit inference engine (AGENTS.md)

**/*.sh: Use local var="$1" pattern for shell variable assignments
Use explicit returns in shell scripts
Run ShellCheck on all scripts with zero violations required

Files:

  • setup.sh
🧠 Learnings (13)
📓 Common learnings
Learnt from: CR
Repo: marcusquinn/aidevops PR: 0
File: templates/home/AGENTS.md:0-0
Timestamp: 2025-12-22T01:24:53.937Z
Learning: Reference the authoritative repository at ~/Git/aidevops/ for all detailed AI assistant instructions and configurations
📚 Learning: 2025-11-29T04:34:42.033Z
Learnt from: CR
Repo: marcusquinn/aidevops PR: 0
File: AGENT.md:0-0
Timestamp: 2025-11-29T04:34:42.033Z
Learning: Maintain all AI assistant instructions, documentation, and operational guidance in AGENTS.md as the single source of truth

Applied to files:

  • .agent/tools/ai-orchestration/overview.md
  • .agent/AGENTS.md
  • .agent/tools/ai-orchestration/autogen.md
  • .agent/tools/ai-orchestration/crewai.md
📚 Learning: 2025-12-22T01:24:53.937Z
Learnt from: CR
Repo: marcusquinn/aidevops PR: 0
File: templates/home/AGENTS.md:0-0
Timestamp: 2025-12-22T01:24:53.937Z
Learning: Reference the authoritative repository at ~/Git/aidevops/ for all detailed AI assistant instructions and configurations

Applied to files:

  • .agent/tools/ai-orchestration/overview.md
  • README.md
  • .agent/tools/ai-orchestration/langflow.md
  • configs/langflow-config.json.txt
  • configs/autogen-config.json.txt
  • configs/crewai-config.json.txt
  • .agent/tools/ai-orchestration/autogen.md
  • .agent/tools/ai-orchestration/crewai.md
📚 Learning: 2025-11-29T04:34:42.033Z
Learnt from: CR
Repo: marcusquinn/aidevops PR: 0
File: AGENT.md:0-0
Timestamp: 2025-11-29T04:34:42.033Z
Learning: Reference AGENTS.md (authoritative) instead of AGENT.md for AI assistant guidance

Applied to files:

  • .agent/tools/ai-orchestration/overview.md
  • .agent/AGENTS.md
  • .agent/tools/ai-orchestration/autogen.md
  • .agent/tools/ai-orchestration/crewai.md
📚 Learning: 2025-11-29T04:34:27.158Z
Learnt from: CR
Repo: marcusquinn/aidevops PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-29T04:34:27.158Z
Learning: All instructions, documentation, and operational guidance should be maintained in AGENTS.md as the single source of truth

Applied to files:

  • .agent/tools/ai-orchestration/overview.md
  • .agent/AGENTS.md
  • .agent/tools/ai-orchestration/autogen.md
  • .agent/tools/ai-orchestration/crewai.md
📚 Learning: 2025-11-29T04:34:30.742Z
Learnt from: CR
Repo: marcusquinn/aidevops PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-11-29T04:34:30.742Z
Learning: Reference AGENTS.md for authoritative AI assistant guidance instead of GEMINI.md

Applied to files:

  • .agent/tools/ai-orchestration/overview.md
  • .agent/AGENTS.md
  • .agent/tools/ai-orchestration/autogen.md
  • .agent/tools/ai-orchestration/crewai.md
📚 Learning: 2026-01-06T15:57:56.007Z
Learnt from: CR
Repo: marcusquinn/aidevops PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-06T15:57:56.007Z
Learning: Applies to **/setup.sh : Deploy agents locally using ./setup.sh script

Applied to files:

  • .agent/scripts/autogen-helper.sh
📚 Learning: 2025-12-22T01:24:53.937Z
Learnt from: CR
Repo: marcusquinn/aidevops PR: 0
File: templates/home/AGENTS.md:0-0
Timestamp: 2025-12-22T01:24:53.937Z
Learning: Follow all security protocols and working directory specifications defined in ~/Git/aidevops/AGENTS.md

Applied to files:

  • .agent/scripts/autogen-helper.sh
📚 Learning: 2025-12-22T01:24:53.937Z
Learnt from: CR
Repo: marcusquinn/aidevops PR: 0
File: templates/home/AGENTS.md:0-0
Timestamp: 2025-12-22T01:24:53.937Z
Learning: Use the helper script at ~/Git/aidevops/.agent/scripts/setup-local-api-keys.sh to manage secure storage of API keys rather than manual configuration

Applied to files:

  • .agent/scripts/autogen-helper.sh
📚 Learning: 2026-01-06T15:57:56.007Z
Learnt from: CR
Repo: marcusquinn/aidevops PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-06T15:57:56.007Z
Learning: Applies to **/.agent/scripts/linters-local.sh : Run quality checks before committing using .agent/scripts/linters-local.sh

Applied to files:

  • .agent/scripts/autogen-helper.sh
📚 Learning: 2026-01-06T15:57:56.008Z
Learnt from: CR
Repo: marcusquinn/aidevops PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-06T15:57:56.008Z
Learning: Applies to **/AGENTS.md : Use progressive disclosure in AGENTS.md with pointers to subagents rather than inline content

Applied to files:

  • .agent/AGENTS.md
📚 Learning: 2025-11-29T04:34:30.742Z
Learnt from: CR
Repo: marcusquinn/aidevops PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-11-29T04:34:30.742Z
Learning: Maintain all instructions, documentation, and operational guidance in AGENTS.md as the single source of truth

Applied to files:

  • .agent/AGENTS.md
  • .agent/tools/ai-orchestration/autogen.md
📚 Learning: 2026-01-06T15:57:56.008Z
Learnt from: CR
Repo: marcusquinn/aidevops PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-06T15:57:56.008Z
Learning: Applies to **/AGENTS.md : Limit root AGENTS.md to ~50-100 max instructions with universal applicability to >80% of tasks

Applied to files:

  • .agent/AGENTS.md
🧬 Code graph analysis (3)
.agent/scripts/autogen-helper.sh (3)
.agent/scripts/crewai-helper.sh (4)
  • print_info (36-38)
  • print_success (40-42)
  • print_error (48-50)
  • print_warning (44-46)
.agent/scripts/langflow-helper.sh (4)
  • print_info (37-39)
  • print_success (41-43)
  • print_error (49-51)
  • print_warning (45-47)
setup.sh (4)
  • print_info (25-25)
  • print_success (26-26)
  • print_error (28-28)
  • print_warning (27-27)
.agent/scripts/langflow-helper.sh (2)
.agent/scripts/autogen-helper.sh (3)
  • print_info (34-36)
  • print_success (38-40)
  • create_management_scripts (228-340)
.agent/scripts/crewai-helper.sh (3)
  • print_info (36-38)
  • print_success (40-42)
  • create_management_scripts (344-450)
setup.sh (1)
aidevops.sh (3)
  • print_info (25-25)
  • print_success (26-26)
  • print_warning (27-27)
🪛 LanguageTool
.agent/tools/ai-orchestration/autogen.md

[typographical] ~119-~119: This greeting should probably end with a comma.
Context: ...GEN_STUDIO_PORT=8081 ## Usage ### Hello World python import asyncio from autogen_...

(EN_GREETING_WITHOUT_COMMA)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Codacy Static Code Analysis
🔇 Additional comments (27)
.agent/AGENTS.md (1)

359-359: Subagent folder entry properly formatted and well-positioned.

The new tools/ai-orchestration/ entry follows the established table format, includes a clear purpose description, and lists the appropriate subagent files. Placement in the subagent folders table is logical and maintains consistency with surrounding entries.

.agent/tools/ai-orchestration/autogen.md (1)

22-26: No action required—all referenced scripts are correctly implemented.

The referenced scripts have been verified:

  • .agent/scripts/autogen-helper.sh exists and is properly implemented
  • The three management scripts (start-autogen-studio.sh, stop-autogen-studio.sh, autogen-status.sh) are dynamically created in ~/.aidevops/scripts/ by the helper's setup action
  • All paths and naming in the Quick Reference section match the actual implementation exactly
.agent/tools/ai-orchestration/crewai.md (6)

1-38: ✓ Well-structured frontmatter and quick reference.

The metadata and quick reference are clear, actionable, and align with the project's established patterns. Port 8501 and runtime directory structure (~/.aidevops/crewai/) are properly referenced.


40-114: ✓ Clear framework overview and practical installation guidance.

Concepts are well-explained with appropriate API examples. Installation paths (automated vs. manual) provide good user choice, and the "Create a New Project" section demonstrates practical workflows.


116-184: ✓ Comprehensive configuration guidance with proper YAML structure.

Environment variables and YAML examples follow CrewAI conventions. The inclusion of optional providers (Anthropic, Serper) and local LLM support demonstrates good coverage. YAML syntax is correct, and variable templating is well-demonstrated.


185-284: ✓ Practical Python examples with correct API usage.

The Basic Crew and Flows examples demonstrate proper CrewAI patterns. Typing with Pydantic, decorator usage, and sequential process orchestration are correctly shown. Examples are concise and actionable for getting started.


285-350: ✓ Solid local LLM and deployment patterns.

Ollama (localhost:11434) and LM Studio (localhost:1234/v1) configurations use standard endpoints. Git integration guidance properly separates versioned configs from secrets. Docker and FastAPI examples are correct, though the FastAPI example references create_my_crew() which isn't in scope—acceptable for templated guidance.


389-444: ✓ Practical integration examples and comprehensive troubleshooting.

Integration guidance ties CrewAI to Langflow and aidevops workflows naturally. Troubleshooting covers common issues (imports, API keys, memory) with actionable solutions. Resource links provide clear paths for deeper learning.

.agent/tools/ai-orchestration/packaging.md (5)

1-46: ✓ Clear quick reference with appropriate technology choices.

Deployment option matrix is well-organized and technology selections (FastAPI, Docker, PyInstaller, serverless) are well-suited to their use cases. Quick commands are concise and actionable.


52-159: ✓ Production-quality FastAPI and Docker patterns.

The FastAPI backend demonstrates proper async patterns, exception handling, and structured responses. Dockerfile uses layer caching best practices (requirements.txt before application code) and --no-cache-dir optimization. Docker Compose health checks and optional Redis integration are well-configured. One minor verification needed: AutoGen's agent.run() API signature—confirm the exact parameter names match the latest AutoGen release.


397-477: ✓ Sound mobile backend with proper async patterns.

BackgroundTasks for non-blocking task execution is the correct FastAPI approach. UUID generation and task status polling via React Native Fetch API follow standard mobile patterns. The honest caveat about in-memory storage vs. Redis production readiness is appreciated. Consider mentioning WebSocket as a future upgrade for real-time updates, though polling is appropriate for this guide.


479-565: ✓ Serverless patterns and export strategies properly documented.

Vercel and Lambda handlers demonstrate correct patterns for their runtimes. The emphasis on lightweight agents for serverless is important guidance. Export patterns (Langflow, CrewAI, AutoGen) show how to move from visual development to portable code—aligning well with the Zero Lock-in principle stated in the guide.


567-647: ✓ Production-ready CI/CD and comprehensive best practices.

GitHub Actions workflow demonstrates proper Docker build/push and Kubernetes rolling deployment. Best practices section (Zero Lock-in, Security, Performance, Monitoring) covers essential aspects. Prometheus and OpenTelemetry integration examples are correct. The emphasis on secrets management and monitoring aligns perfectly with A-grade DevOps standards.

configs/langflow-config.json.txt (2)

1-59: ✓ Well-structured configuration with secure defaults.

Directory structure follows established ~/.aidevops/ pattern. Port 7860 is correct for Langflow UI. Security-conscious defaults (include_credentials: false for exports, SQLite for local development) demonstrate proper DevOps practices. Features appropriately defaulted (MCP disabled, custom components enabled).


97-115: ✓ Usage examples and benefits well-articulated.

All helper script references align with the PR's tooling structure (.agent/scripts/ and ~/.aidevops/scripts/). Usage commands cover complete lifecycle (setup, start, stop, status, export, import). Resource recommendations (4GB/8GB) are reasonable for LLM-based tools. Benefits section effectively communicates value proposition aligned with AI DevOps goals.

.agent/scripts/langflow-helper.sh (7)

21-51: LGTM - Excellent error handling and clean helper functions.

The use of set -euo pipefail ensures robust error handling, and the color-coded logging functions provide clear user feedback. Configuration variables properly use ${VAR:-default} pattern for defaults.


93-191: LGTM - Robust setup with proper error handling.

The function correctly handles:

  • Directory creation with safe mkdir -p
  • Error checking with || exit 1 on critical operations
  • Quoted heredocs ('EOF') to prevent unintended variable expansion in templates
  • Proper ShellCheck directives for dynamic source paths
  • Conditional file creation to avoid overwriting existing configs

193-283: LGTM - Consistent management script generation.

The management script generation follows the same patterns as autogen-helper.sh and crewai-helper.sh, ensuring consistency across the AI orchestration framework helpers. Good use of:

  • Quoted heredocs to prevent variable expansion
  • PID file tracking for clean process management
  • Health endpoint checks for service verification
  • Fallback pkill for robustness

285-319: LGTM - Clean delegation pattern with good fallbacks.

These control functions properly delegate to generated management scripts while providing sensible fallbacks when scripts aren't found. The error handling is appropriate for each operation type.


321-346: LGTM - Proper parameter handling and error checking.

The function correctly uses local var="${1:-default}" pattern per coding guidelines, includes proper venv validation, and gracefully handles cases where no flows exist.


348-388: LGTM - Robust import logic with good error handling.

The function properly:

  • Validates prerequisites (venv, directory existence)
  • Uses the recommended local var="${1:-default}" pattern
  • Safely iterates through JSON files with existence checks
  • Provides detailed feedback on import results

420-461: LGTM - Well-structured main function with proper argument handling.

The main function correctly implements the command dispatch pattern with:

  • Proper default argument handling using ${1:-help}
  • Safe shift with || true to handle no-argument case
  • Comprehensive case statement covering all documented actions
  • Explicit return statement

This maintains excellent consistency with the other AI orchestration helper scripts (autogen-helper.sh, crewai-helper.sh).

.agent/tools/ai-orchestration/langflow.md (1)

1-405: LGTM - Comprehensive and well-structured documentation.

The Langflow documentation is thorough and aligns perfectly with the implementation in:

  • langflow-helper.sh (setup paths, ports, scripts)
  • configs/langflow-config.json.txt (configuration values)

Key strengths:

  • Clear quick reference section with essential commands
  • Multiple installation methods (automated, manual, Docker, desktop)
  • Practical examples for common workflows (RAG, multi-agent)
  • Git integration best practices with export/import
  • Troubleshooting section with concrete solutions
  • Integration examples with CrewAI and OpenCode
.agent/scripts/crewai-helper.sh (1)

1-21: LGTM on script header and safety settings.

Good use of set -euo pipefail for robust error handling. The header documentation clearly describes available actions.

.agent/scripts/autogen-helper.sh (1)

181-225: LGTM on example script.

Good async patterns with proper cleanup via await model_client.close(). The example demonstrates AutoGen usage clearly.

.agent/tools/ai-orchestration/overview.md (2)

41-48: Port allocation table matches helper scripts - LGTM.

Verified: CrewAI Studio uses 8501 (line 32 in crewai-helper.sh), AutoGen Studio uses 8081 (line 30 in autogen-helper.sh). Documentation is consistent with implementation.


1-50: Well-structured overview documentation.

The AI context markers, quick reference, and decision matrix provide excellent guidance for selecting the right framework. The port allocation table and quick setup commands are immediately actionable.

mkdir -p "$AUTOGEN_DIR"
mkdir -p "$SCRIPTS_DIR"

cd "$AUTOGEN_DIR" || exit 1

⚠️ Potential issue | 🟡 Minor

Use return instead of exit in function.

Same issue as crewai-helper.sh - exit 1 terminates the entire script rather than just returning from the function.

Suggested fix
-    cd "$AUTOGEN_DIR" || exit 1
+    cd "$AUTOGEN_DIR" || { print_error "Cannot access directory: $AUTOGEN_DIR"; return 1; }
🤖 Prompt for AI Agents
In @.agent/scripts/autogen-helper.sh at line 107, The cd command in the function
uses "cd \"$AUTOGEN_DIR\" || exit 1", which will terminate the whole script;
change the failure handler to return from the function instead of exiting the
process by replacing "exit 1" with "return 1" (i.e., use "cd \"$AUTOGEN_DIR\" ||
return 1") so the function returns an error status without killing the entire
script; apply the same change wherever the pattern "|| exit 1" appears inside
helper functions.

Comment on lines +64 to +116
        minor=$(echo "$python_version" | cut -d. -f2)

        if [[ $major -ge 3 ]] && [[ $minor -ge 10 ]]; then
            print_success "Python $python_version found (3.10+ required)"
        else
            print_error "Python 3.10+ required, found $python_version"
            missing=1
        fi

⚠️ Potential issue | 🟡 Minor

Python version comparison logic is flawed.

The current logic [[ $major -ge 3 ]] && [[ $minor -ge 10 ]] would incorrectly reject Python 4.0+ (since minor would be less than 10). While Python 4.x isn't imminent, the logic should be future-proof.

Suggested fix
-        if [[ $major -ge 3 ]] && [[ $minor -ge 10 ]]; then
+        if [[ $major -gt 3 ]] || { [[ $major -eq 3 ]] && [[ $minor -ge 10 ]]; }; then
🤖 Prompt for AI Agents
In @.agent/scripts/crewai-helper.sh around lines 64 - 71, The version check in
crewai-helper.sh incorrectly uses '[[ $major -ge 3 ]] && [[ $minor -ge 10 ]]'
which rejects Python 4.x; change the condition to allow any major > 3 or major
== 3 with minor >= 10 (i.e., use a compound check like major -gt 3 || (major -eq
3 && minor -ge 10)) so the block around print_success/print_error (using
variables python_version, major, minor and setting missing=1) correctly accepts
future major versions while still enforcing 3.10+ for Python 3.

Comment on lines +490 to +565
cd "$project_dir" || exit 1


⚠️ Potential issue | 🟡 Minor

Use return instead of exit in function.

Using exit 1 inside a function terminates the entire script. For consistency with the error handling pattern used elsewhere (e.g., line 495), use return 1.

Suggested fix
-    cd "$project_dir" || exit 1
+    cd "$project_dir" || { print_error "Cannot access directory: $project_dir"; return 1; }
🤖 Prompt for AI Agents
In @.agent/scripts/crewai-helper.sh around lines 490 - 491, The command `cd
"$project_dir" || exit 1` inside the function terminates the whole script;
replace `exit 1` with `return 1` so the function returns an error code instead
of exiting the process—update the `cd "$project_dir" || exit 1` line to `cd
"$project_dir" || return 1` (preserving the surrounding function and any
existing error handling pattern).

- **Stop**: `~/.aidevops/scripts/stop-autogen-studio.sh`
- **Status**: `~/.aidevops/scripts/autogen-status.sh`
- **URL**: http://localhost:8081 (AutoGen Studio)
- **Config**: `~/.aidevops/autogen/.env`

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find autogen.md in the repository
find . -name "autogen.md" -type f

Repository: marcusquinn/aidevops

Length of output: 106


🏁 Script executed:

# Find AGENTS.md to check authoritative credential guidance
find . -name "AGENTS.md" -type f

Repository: marcusquinn/aidevops

Length of output: 152


🏁 Script executed:

# Check if the referenced script files exist
find . -path "*/.agent/scripts/autogen-helper.sh" -o -path "*/.aidevops/scripts/start-autogen-studio.sh" -o -path "*/.aidevops/scripts/stop-autogen-studio.sh" -o -path "*/.aidevops/scripts/autogen-status.sh"

Repository: marcusquinn/aidevops

Length of output: 98


🏁 Script executed:

# Read autogen.md to verify the credential storage references
wc -l ./.agent/tools/ai-orchestration/autogen.md

Repository: marcusquinn/aidevops

Length of output: 110


🏁 Script executed:

# Read autogen.md to see lines around 27, 61, and 97-115
cat -n ./.agent/tools/ai-orchestration/autogen.md | head -120

Repository: marcusquinn/aidevops

Length of output: 3606


🏁 Script executed:

# Read main AGENTS.md to check authoritative credential guidance
grep -n "credentials\|mcp-env.sh\|\.env\|600 permissions" ./AGENTS.md | head -20

Repository: marcusquinn/aidevops

Length of output: 165


🏁 Script executed:

# Search for the missing script files referenced in autogen.md
find . -name "start-autogen-studio.sh" -o -name "stop-autogen-studio.sh" -o -name "autogen-status.sh"

Repository: marcusquinn/aidevops

Length of output: 46


🏁 Script executed:

# Check if these scripts are referenced in any .agent AGENTS.md
grep -r "start-autogen-studio\|stop-autogen-studio\|autogen-status" ./.agent/

Repository: marcusquinn/aidevops

Length of output: 1698


🏁 Script executed:

# Check the main AGENTS.md more broadly for credential and AutoGen guidance
cat -n ./AGENTS.md | sed -n '80,110p'

Repository: marcusquinn/aidevops

Length of output: 1276


🏁 Script executed:

# Check the autogen-helper.sh setup function to understand credential handling
sed -n '1,100p' ./.agent/scripts/autogen-helper.sh | grep -A 30 "setup()"

Repository: marcusquinn/aidevops

Length of output: 46


🏁 Script executed:

# Check if autogen-helper.sh sources from mcp-env.sh or creates its own .env
grep -n "mcp-env\|\.env\|credentials\|OPENAI_API_KEY" ./.agent/scripts/autogen-helper.sh | head -20

Repository: marcusquinn/aidevops

Length of output: 665


Align credential storage with AGENTS.md authoritative security standards.

autogen.md references ~/.aidevops/autogen/.env for credentials (lines 27, 61, 97–115), but AGENTS.md mandates ~/.config/aidevops/mcp-env.sh with 600 permissions. The implementation in autogen-helper.sh mirrors autogen.md's pattern, creating and sourcing a separate .env file rather than the authoritative credential location.

To fix:

  1. Update autogen-helper.sh to source credentials from ~/.config/aidevops/mcp-env.sh instead of creating .env
  2. Update autogen.md to reference the AGENTS.md credential standard
  3. Use placeholders in all credential examples and explicitly note the secure storage location per AGENTS.md line 93

Secondary issue: Quick Reference (lines 23–25) lists start/stop/status scripts as available before setup creates them. Clarify that these scripts are generated during setup.

🤖 Prompt for AI Agents
In @.agent/tools/ai-orchestration/autogen.md at line 27, autogen.md and
autogen-helper.sh currently reference and create ~/.aidevops/autogen/.env but
AGENTS.md is the authoritative source and requires credentials in
~/.config/aidevops/mcp-env.sh with 600 perms; update autogen-helper.sh to stop
creating/sourcing .env and instead source ~/.config/aidevops/mcp-env.sh (ensure
permission checks and fail-fast if missing), update autogen.md to reference the
AGENTS.md standard path and permissions and replace inline credential examples
with placeholders pointing to that secure file, and amend the Quick Reference
text to note that start/stop/status scripts are generated by the setup step
rather than being pre-existing.
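
As a loose sketch, sourcing the AGENTS.md credential file with a fail-fast permission check could look like the following; the path and 600-permission rule come from AGENTS.md, while `load_credentials` and the messages are illustrative assumptions:

```bash
# Illustrative only — assumes AGENTS.md's credential location and 600-permission requirement.
load_credentials() {
    local env_file="$HOME/.config/aidevops/mcp-env.sh"

    if [[ ! -f "$env_file" ]]; then
        print_error "Missing credentials file: $env_file (see AGENTS.md)"
        return 1
    fi

    # Fail fast if the file is readable by group/other (GNU stat first, BSD/macOS fallback)
    local perms
    perms=$(stat -c '%a' "$env_file" 2>/dev/null || stat -f '%Lp' "$env_file")
    if [[ "$perms" != "600" ]]; then
        print_error "Expected 600 permissions on $env_file, found $perms"
        return 1
    fi

    # shellcheck source=/dev/null
    source "$env_file"
}
```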

Comment on lines +73 to +77
- **License**: MIT
- **Stars**: 143k+
- **GUI**: Native web UI (localhost:7860)
Copy link
Contributor


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Langflow GitHub stars 2025

💡 Result:

Short timeline in 2025 for Langflow's GitHub stars:

  • Jan 13, 2025 — reported ~43.1k stars. [1]
  • Nov/Dec 2025 — Langflow announced it hit 100k stars. [2]

Sources: [1] GitHub simple-icons issue mentioning 43.1k (Jan 13, 2025). [2] Langflow blog post announcing 100k stars (Nov/Dec 2025).


🌐 Web query:

CrewAI GitHub stars 2025

💡 Result:

  • July 21, 2025 — ~34,000 stars. [1][2]
  • October 21, 2025 — ~39,381 stars (RepositoryStats snapshot). [3]

Sources: news report noting 34k stars (Jul 21, 2025) and RepositoryStats listing ~39,381 stargazers (Oct 2025). [1][3]


🌐 Web query:

AutoGen GitHub stars 2025

💡 Result:

  • microsoft/autogen had about 45.7K stars in June 2025. [1]
  • On GitHub right now (Jan 11, 2026) the repo shows ~53.4K stars. [2]

Sources: [1] AgentWise (Jun 2025 metrics). [2] microsoft/autogen GitHub repository (current star count).


Update GitHub star counts to reflect current data (January 2026).

The cited star counts are inaccurate: Langflow is listed at 143k+ but only reached ~100k in late 2025, CrewAI stands at ~39.4k as of October 2025, and AutoGen's figure (~53.4k) matches the current count. Rather than maintaining specific counts that will drift, use relative comparisons ("highly popular", "widely adopted") or commit to regular updates as part of documentation maintenance.

Also applies to: lines 98-100, 122-125

🤖 Prompt for AI Agents
In @.agent/tools/ai-orchestration/overview.md around lines 73 - 75, Replace the
hard-coded GitHub star counts for projects (the bulleted items referencing
Langflow, CrewAI, AutoGen) with relative, non-time-sensitive phrases (e.g.,
"highly popular", "widely adopted", "well-known") or add a clear "as of <date>"
timestamp and a maintenance note; update the three occurrences mentioned (the
bullets at the diff and the similar lists at the other two locations) to use
these relative descriptors or timestamped counts to avoid stale data.

Comment on lines +177 to +181
├── .env.example # Template for .env
├── start_{tool}.sh # Startup script
└── {tool-specific}/ # Tool-specific files
Copy link
Contributor


⚠️ Potential issue | 🟡 Minor

Script naming inconsistency with implementation.

Documentation shows start_{tool}.sh but the actual generated scripts are named start-crewai-studio.sh, start-autogen-studio.sh, etc. (with hyphens and -studio suffix).

Suggested fix
 ~/.aidevops/{tool}/
 ├── venv/                 # Python virtual environment
 ├── .env                  # API keys and configuration
 ├── .env.example          # Template for .env
-├── start_{tool}.sh       # Startup script
+├── studio_app.py         # Studio application (if applicable)
 └── {tool-specific}/      # Tool-specific files

And update lines 208-212:

 After setup, management scripts are created at `~/.aidevops/scripts/`:
 
-- `start-{tool}-stack.sh` - Start all services
-- `stop-{tool}-stack.sh` - Stop all services
+- `start-{tool}-studio.sh` - Start studio service
+- `stop-{tool}-studio.sh` - Stop studio service
 - `{tool}-status.sh` - Check service health
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-├── .env.example          # Template for .env
-├── start_{tool}.sh       # Startup script
-└── {tool-specific}/      # Tool-specific files
+├── .env.example          # Template for .env
+├── studio_app.py         # Studio application (if applicable)
+└── {tool-specific}/      # Tool-specific files
🤖 Prompt for AI Agents
In @.agent/tools/ai-orchestration/overview.md around lines 177 - 179, The doc
snippet shows a mismatched script name pattern; update the overview to reflect
the actual generated scripts by replacing the generic `start_{tool}.sh` with the
real naming convention `start-{tool}-studio.sh` (e.g., `start-crewai-studio.sh`,
`start-autogen-studio.sh`) and ensure the directory list and any
examples/comments mention the hyphenated `-studio` suffix so the documentation
matches the implemented filenames.

@github-actions
Copy link

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 256 code smells

[INFO] Recent monitoring activity:
Sun Jan 11 16:56:10 UTC 2026: Code review monitoring started
Sun Jan 11 16:56:11 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 256
Sun Jan 11 16:56:11 UTC 2026: Qlty - 0 issues found, auto-formatting applied
Sun Jan 11 16:56:13 UTC 2026: Codacy analysis completed with auto-fixes

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 256
  • VULNERABILITIES: 0

Generated on: Sun Jan 11 16:56:49 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

Add comprehensive integration for three major AI orchestration frameworks:
- Langflow: Visual drag-and-drop workflow builder (MIT, localhost:7860)
- CrewAI: Multi-agent teams with role-based orchestration (MIT, localhost:8501)
- AutoGen: Microsoft's agentic AI framework with MCP support (MIT, localhost:8081)

New files:
- .agent/tools/ai-orchestration/overview.md - Decision matrix and comparison
- .agent/tools/ai-orchestration/{langflow,crewai,autogen}.md - Setup guides
- .agent/tools/ai-orchestration/packaging.md - Deployment templates
- .agent/scripts/{langflow,crewai,autogen}-helper.sh - Setup scripts
- configs/{langflow,crewai,autogen}-config.json.txt - Config templates

Updated:
- .agent/AGENTS.md - Added ai-orchestration category
- README.md - Added AI Orchestration Frameworks section
- setup.sh - Added setup_ai_orchestration() function

All frameworks follow common design patterns aligned with existing Agno integration:
- Runtime directory: ~/.aidevops/{tool}/
- Virtual environment management
- Standard start/stop/status scripts
- Local LLM support via Ollama
…tion helpers

- Add get_available_port() function to all three helpers
- Update start scripts to check port availability before starting
- Auto-select alternative port if default is in use
- Save actual port to /tmp/{tool}_port for status scripts
- Update status scripts to read actual port from saved file
- Document port management integration in overview.md

Integrates with PR #52 localhost port management for conflict avoidance.
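
For context, a `get_available_port()` helper along these lines would satisfy the description above; the probing tool (`nc`), the linear scan, and the tool name in the usage line are assumptions, not necessarily what the PR implements:

```bash
# Sketch of a port-availability helper — probing with nc; the real helpers may differ.
get_available_port() {
    local port="$1"            # preferred/default port, e.g. 7860
    local max_tries="${2:-20}" # how many consecutive ports to try

    for ((i = 0; i < max_tries; i++)); do
        if ! nc -z 127.0.0.1 "$port" 2>/dev/null; then
            echo "$port"       # nothing is listening here, so report it as free
            return 0
        fi
        port=$((port + 1))     # default in use; try the next port
    done
    return 1
}

# Usage in a start script (tool name is illustrative)
port=$(get_available_port 7860) || { echo "No free port found"; exit 1; }
echo "$port" > /tmp/langflow_port
```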
@marcusquinn marcusquinn force-pushed the feature/ai-orchestration-frameworks branch from 91dc49c to 7099c1a Compare January 11, 2026 17:00
@sonarqubecloud
Copy link

@github-actions
Copy link

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 257 code smells

[INFO] Recent monitoring activity:
Sun Jan 11 17:00:36 UTC 2026: Code review monitoring started
Sun Jan 11 17:00:37 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 257
Sun Jan 11 17:00:37 UTC 2026: Qlty - 0 issues found, auto-formatting applied
Sun Jan 11 17:00:39 UTC 2026: Codacy analysis completed with auto-fixes

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 257
  • VULNERABILITIES: 0

Generated on: Sun Jan 11 17:01:17 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

@marcusquinn marcusquinn merged commit 7295914 into main Jan 11, 2026
9 checks passed