feat: add HeyGen AI avatar video creation skill #170

Merged

marcusquinn merged 4 commits into main from feature/add-heygen-skill on Jan 24, 2026

Conversation

@marcusquinn (Owner) commented Jan 24, 2026

Summary

  • Import HeyGen skill from heygen-com/skills with full rule set (18 files)
  • Place under .agent/tools/video/heygen-skill/ with proper aidevops frontmatter
  • Register in skill-sources.json for upstream update tracking

Details

The HeyGen skill provides comprehensive API guidance for AI avatar video creation, covering:

Foundation: Authentication, quota management, video status polling, asset uploads
Core Video Creation: Avatars, voices, scripts, video generation (v2 API), Video Agent API, dimensions
Customization: Backgrounds, text overlays, captions
Advanced: Templates, video translation/dubbing, streaming avatars, photo avatars, webhooks
Integration: Remotion composition patterns

Changes

| File | Description |
|------|-------------|
| `.agent/tools/video/heygen-skill.md` | Skill entry point with aidevops frontmatter |
| `.agent/tools/video/heygen-skill/rules/*.md` | 18 rule files from upstream |
| `.agent/configs/skill-sources.json` | Upstream tracking entry |
| `README.md` | Added to imported skills table |
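For orientation, an upstream-tracking entry in `skill-sources.json` might take roughly this shape. The field names below are illustrative assumptions, not the framework's actual schema, and the commit value is a placeholder:

```json
{
  "heygen-skill": {
    "upstream_repo": "heygen-com/skills",
    "local_path": ".agent/tools/video/heygen-skill/",
    "pinned_commit": "<upstream-commit-sha>",
    "rule_files": 18
  }
}
```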

Summary by CodeRabbit

Release Notes

  • New Features

    • Added HeyGen skill integration for AI avatar video creation with support for avatars, voices, video generation, and streaming avatars
    • Enabled video translation, webhook integration, and template-based video generation capabilities
  • Documentation

    • Comprehensive guides for authentication, video generation workflows, and avatar/voice selection
    • Detailed documentation covering advanced features including streaming avatars, templates, video translation, and API integration patterns


Import heygen-com/skills with 18 rule files covering authentication,
video generation, avatars, voices, backgrounds, captions, templates,
streaming, photo avatars, webhooks, and Remotion integration.
@coderabbitai (Contributor) bot commented Jan 24, 2026

Warning

Rate limit exceeded

@marcusquinn has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 21 minutes and 9 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

This PR adds a comprehensive HeyGen skill to the agent framework, including configuration registration, primary documentation, and 18 detailed rule files covering API authentication, avatar selection, video generation, streaming, translation, webhooks, and integration patterns.

Changes

- **Configuration & Registry** (`.agent/configs/skill-sources.json`, `README.md`): Registers the HeyGen skill with upstream metadata (18 rule files, commit tracking) and adds an entry to the Imported Skills table.
- **Primary Documentation** (`.agent/tools/video/heygen-skill.md`): Introduces the HeyGen Skill reference with categorized rule links across foundational, core video creation, customization, advanced features, and integration sections.
- **Foundational Rules** (`.agent/tools/video/heygen-skill/rules/authentication.md`, `quota.md`, `video-status.md`): Documents API key setup, authenticated requests across curl/TypeScript/Python, the credit-based quota system, and video status polling patterns with retry logic.
- **Avatar & Voice Management** (`avatars.md`, `voices.md`, `photo-avatars.md`): Covers avatar listing, selection, and preview workflows, voice integration, language matching, and photo-to-avatar generation workflows with type definitions and helper utilities.
- **Video Generation & Configuration** (`video-generation.md`, `video-agent.md`, `dimensions.md`, `scripts.md`): Details the v2 video generation workflow, Video Agent API prompts, dimension presets (720p/1080p), and script best practices (150 words/min benchmark, SSML break tags).
- **Styling & Customization** (`backgrounds.md`, `captions.md`, `text-overlays.md`): Documents background types (color/image/video), caption configuration with language/styling, and text overlay positioning with template presets.
- **Advanced Features** (`templates.md`, `video-translation.md`, `streaming-avatars.md`, `webhooks.md`): Covers the templating system with variable placeholders, multi-language translation with lip-sync, WebRTC streaming sessions, and webhook event handling with signature verification.
- **Integration & Assets** (`assets.md`, `remotion-integration.md`): Describes the presigned-URL asset upload workflow (images/videos/audio) and Remotion composition integration with OffthreadVideo for frame-accurate rendering.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~28 minutes

Poem

🎬 HeyGen avatars now speak through our frames,
From config to webhooks, we've captured the gains,
Eighteen rules strong, from scripts to the streams,
Building video magic with DevOps precision and dreams! ✨

🚥 Pre-merge checks | ✅ 3

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately describes the primary change: adding a new HeyGen skill for AI avatar video creation with comprehensive rule documentation. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |


@gemini-code-assist

Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates a new HeyGen skill, providing extensive documentation and best practices for AI avatar video generation. It covers everything from authentication and asset management to advanced features like streaming avatars, video translation, and integration with tools like Remotion, enabling developers to leverage HeyGen's capabilities effectively.

Highlights

  • HeyGen Skill Integration: The pull request introduces a new skill for HeyGen AI avatar video creation, enabling comprehensive API guidance for various video generation features.
  • Extensive API Documentation: A total of 18 detailed rule files have been added, covering every aspect of the HeyGen API, from authentication and asset management to advanced features like streaming avatars and video translation.
  • Structured Skill Organization: The new skill is organized under .agent/tools/video/heygen-skill/ with a main entry point and individual rule files, ensuring clear and accessible documentation.
  • Upstream Tracking Enabled: The HeyGen skill is registered in skill-sources.json, allowing for proper tracking of its upstream repository and future updates.

@github-actions

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 406 code smells

[INFO] Recent monitoring activity:
Sat Jan 24 05:31:22 UTC 2026: Code review monitoring started
Sat Jan 24 05:31:23 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 406
Sat Jan 24 05:31:23 UTC 2026: Qlty - 0 issues found, auto-formatting applied
Sat Jan 24 05:31:25 UTC 2026: Codacy analysis completed with auto-fixes

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 406
  • VULNERABILITIES: 0

Generated on: Sat Jan 24 05:32:50 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

@gemini-code-assist bot left a comment

Code Review

This pull request adds a comprehensive skill for the HeyGen AI avatar video creation API. The documentation is extensive, covering everything from authentication and asset management to advanced features like streaming avatars and video translation. The code examples are generally clear and helpful. I've identified a few areas for improvement, mainly related to code correctness in some TypeScript examples, efficiency in file handling, and clarity in a few API usage patterns. Addressing these points will enhance the quality and reliability of the provided skill documentation.

Comment on lines 117 to 120

```typescript
if (!response.ok) {
  const error = await response.json();
  throw new Error(error.message || `HTTP ${response.status}`);
}
```

**high**

The error handling in the `HeyGenClient` `request` method incorrectly tries to access `error.message`. Based on the API response format documented later in this file (`{"error": "Invalid API key", ...}`), the error message is in the `error` property of the JSON response. This should be `error.error` to correctly propagate the error message from the API.

Suggested change:

```typescript
if (!response.ok) {
  const error = await response.json();
  throw new Error(error.error || `HTTP ${response.status}`);
}
```

```typescript
  linkedin: { width: 1920, height: 1080 },
};

const dimension = platformDimensions[options.platform];
```

**high**

The `createVideoConfig` function has a bug where it directly mutates the `dimension` object retrieved from `platformDimensions`. Because objects are passed by reference in JavaScript, this modification will persist and affect subsequent calls to the function. For example, after creating a 720p video, the dimensions for that platform in `platformDimensions` will be permanently scaled down. To fix this, create a shallow copy of the dimension object before modifying it.

Suggested change:

```typescript
const dimension = { ...platformDimensions[options.platform] };
```
Comment on lines 701 to 728

```tsx
import { Video, AbsoluteFill } from "remotion";

export const LoomStyleVideo: React.FC<{
  screenRecordingUrl: string;
  avatarWebmUrl: string;
}> = ({ screenRecordingUrl, avatarWebmUrl }) => {
  return (
    <AbsoluteFill>
      {/* Screen recording as base layer */}
      <Video src={screenRecordingUrl} style={{ width: "100%", height: "100%" }} />

      {/* Avatar with circular mask applied in CSS */}
      <Video
        src={avatarWebmUrl}
        style={{
          position: "absolute",
          bottom: 20,
          left: 20,
          width: 150,
          height: 150,
          borderRadius: "50%", // Circular mask
          overflow: "hidden",
          objectFit: "cover",
        }}
      />
    </AbsoluteFill>
  );
};
```

**high**

The `LoomStyleVideo` example uses Remotion's `<Video>` component. As noted in remotion-integration.md, using `<Video>` can lead to jitter and frame inaccuracies during rendering, especially with videos from external sources like HeyGen. For smooth, frame-accurate playback, use the `<OffthreadVideo>` component instead:

```tsx
import { OffthreadVideo, AbsoluteFill } from "remotion";

export const LoomStyleVideo: React.FC<{
  screenRecordingUrl: string;
  avatarWebmUrl: string;
}> = ({ screenRecordingUrl, avatarWebmUrl }) => {
  return (
    <AbsoluteFill>
      {/* Screen recording as base layer */}
      <OffthreadVideo src={screenRecordingUrl} style={{ width: "100%", height: "100%" }} />

      {/* Avatar with circular mask applied in CSS */}
      <OffthreadVideo
        src={avatarWebmUrl}
        style={{
          position: "absolute",
          bottom: 20,
          left: 20,
          width: 150,
          height: 150,
          borderRadius: "50%", // Circular mask
          overflow: "hidden",
          objectFit: "cover",
        }}
      />
    </AbsoluteFill>
  );
};
```

Comment on lines 204 to 220

```typescript
async function uploadFromUrl(sourceUrl: string, contentType: string): Promise<string> {
  // 1. Download the file
  const sourceResponse = await fetch(sourceUrl);
  const buffer = await sourceResponse.arrayBuffer();

  // 2. Get HeyGen upload URL
  const { url, asset_id } = await getUploadUrl(contentType);

  // 3. Upload to HeyGen
  await fetch(url, {
    method: "PUT",
    headers: { "Content-Type": contentType },
    body: buffer,
  });

  return asset_id;
}
```

**medium**

The `uploadFromUrl` function currently downloads the entire file into an in-memory buffer before uploading it to HeyGen. This can be very memory-intensive and may fail for large files. A more efficient approach is to stream the file directly from the source URL to the HeyGen upload URL, which avoids loading the entire file into memory:

```typescript
async function uploadFromUrl(sourceUrl: string, contentType: string): Promise<string> {
  // 1. Get HeyGen upload URL
  const { url, asset_id } = await getUploadUrl(contentType);

  // 2. Fetch the source as a stream
  const sourceResponse = await fetch(sourceUrl);
  if (!sourceResponse.ok || !sourceResponse.body) {
    throw new Error(`Failed to download from source: ${sourceResponse.status}`);
  }

  // 3. Upload to HeyGen as a stream
  const uploadResponse = await fetch(url, {
    method: "PUT",
    headers: {
      "Content-Type": contentType,
      // Content-Length might be required by the presigned URL; get it from the source response if available
      "Content-Length": sourceResponse.headers.get("content-length") || undefined,
    },
    body: sourceResponse.body,
    // @ts-ignore - duplex is needed for streaming with node-fetch
    duplex: "half",
  });

  if (!uploadResponse.ok) {
    throw new Error(`Upload to HeyGen failed: ${uploadResponse.status}`);
  }

  return asset_id;
}
```

```typescript
): Promise<string> {
  // 1. Upload background
  console.log("Uploading background...");
  const backgroundId = await uploadFile(backgroundPath, "image/jpeg");
```

**medium**

The `createVideoWithCustomBackground` function uses `uploadFile` to upload the background image. `uploadFile` uses `fs.readFileSync`, which is synchronous and can block the Node.js event loop, especially with large files. For better performance and to avoid blocking, use the `uploadLargeFile` function, which uses streams.

Suggested change:

```typescript
const backgroundId = await uploadLargeFile(backgroundPath, "image/jpeg");
```

```typescript
  error: null | string;
  data: {
    avatars: Avatar[];
    talking_photos: TalkingPhoto[];
```

**medium**

The `AvatarsResponse` interface references a `TalkingPhoto` type that is not defined anywhere in this file. This will cause a TypeScript compilation error. Either define the `TalkingPhoto` type or, if its structure is unknown or variable, use `any[]` or `Record<string, any>[]` as a placeholder.

Suggested change:

```typescript
talking_photos: any[];
```

```typescript
const translationConfig = {
  input_video_id: "original_video_id",
  output_languages: ["es-ES", "fr-FR"],
  srt_key: "path/to/custom.srt", // Custom SRT file
```

**medium**

The example for using a custom SRT file includes an `srt_key` property. However, the documentation doesn't explain how to obtain this key. It's unclear whether this refers to an asset that needs to be uploaded first. Please clarify the process for uploading an SRT file and obtaining the corresponding `srt_key`, including the required Content-Type for the upload.

```typescript
const { data: quota } = await quotaResponse.json();

// Estimate required credits (rough estimate: 1 credit per minute)
const estimatedMinutes = videoConfig.estimatedDuration / 60;
```

**medium**

The `generateVideoWithQuotaCheck` example estimates credit usage based on `videoConfig.estimatedDuration`. However, `VideoConfig` is not defined, and `estimatedDuration` is not a standard property of the video generation request. This makes the example confusing and not directly usable. A more robust approach is to estimate the duration from the script's word count, as demonstrated in scripts.md.

Comment on lines +380 to +382

```typescript
private async ping(): Promise<void> {
  // Implementation depends on API
}
```

**medium**

The `ping` method within the `startKeepAlive` example is empty, with a comment `// Implementation depends on API`. To make this documentation more useful, it should specify what kind of message the HeyGen streaming API expects as a keep-alive ping: an empty message, a specific JSON payload, or something else. Without this information, developers cannot correctly implement the keep-alive mechanism.

@augmentcode bot commented Jan 24, 2026

🤖 Augment PR Summary

Summary: Adds a new “HeyGen” skill to aidevops to provide domain guidance for generating AI avatar videos via HeyGen’s API.
Key additions:

  • New skill entry point at .agent/tools/video/heygen-skill.md with aidevops frontmatter
  • Imported 18 rule documents under .agent/tools/video/heygen-skill/rules/ covering auth, quota, assets, video generation/status, templates, translation, streaming, webhooks, and Remotion integration
  • Registered the upstream source and pinned commit in .agent/configs/skill-sources.json for future sync
  • Updated the README imported-skills table to list the HeyGen skill
Why: Enables agents to reference standardized HeyGen API patterns (endpoints, payloads, polling/webhooks) when implementing video workflows.
Notes: Content is documentation/examples; no runtime code paths were introduced.
Upstream: Tracks heygen-com/skills at commit f11988d….
Scope: Adds new docs only; existing skills/tools behavior remains unchanged.


@augmentcode bot left a comment

Review completed. 6 suggestions posted.

```javascript
const { execSync } = require("child_process");
for (const avatar of data.avatars.slice(0, 3)) {
  // 'open' on macOS opens the URL in default browser - doesn't download
  execSync(`open "${avatar.preview_image_url}"`);
}
```

Using `execSync` with an interpolated URL can become a command-injection vector if `preview_image_url` is ever user-controlled (or contains unexpected quotes). Consider using a non-shell invocation pattern in the example to keep it safe to copy/paste.

```javascript
  .digest("hex");

return crypto.timingSafeEqual(
  Buffer.from(signature),
```

Since `expectedSignature` is hex (`digest("hex")`), `Buffer.from(...)` without an explicit encoding treats it as UTF-8, which will make `timingSafeEqual` comparisons incorrect and likely fail validations. Consider clarifying the expected signature encoding/format in this example.


```markdown
## Best Practices

1. **Preview avatars before generating** - Download `preview_image_url` so user can see what the avatar looks like before committing to a video (see [avatars.md](avatars.md))
```

This says to "Download `preview_image_url`", but avatars.md recommends opening preview URLs directly in the browser (no download). Consider aligning the guidance so users follow one consistent workflow.


```markdown
1. **Preview avatars before generating** - Download `preview_image_url` so user can see what the avatar looks like before committing to a video (see [avatars.md](avatars.md))
2. **Use avatar's default voice** - Most avatars have a `default_voice_id` that's pre-matched for natural results (see [avatars.md](avatars.md))
2. **Fallback: match gender manually** - If no default voice, ensure avatar and voice genders match (see [voices.md](voices.md))
```

The list numbering repeats `2.` here, which can be confusing when referencing these best-practice items. Consider renumbering so the list is sequential.


```typescript
// Helper function for polling
async function waitForVideo(videoId: string): Promise<string> {
  const maxAttempts = 60;
```

With `maxAttempts = 60` and `pollInterval = 10s`, this helper times out after ~10 minutes, but this doc later recommends 15–20 minutes and notes 10–15+ minute generations. The example defaults may cause unnecessary timeouts in normal usage.

Apply circular masking in Remotion:

```tsx
import { Video, AbsoluteFill } from "remotion";
```

This Remotion snippet uses `Video` even though remotion-integration.md recommends `OffthreadVideo` for frame-accurate rendering (to avoid jitter in renders). Consider keeping these examples consistent so users don't copy a less reliable pattern.

@github-actions

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 406 code smells

[INFO] Recent monitoring activity:
Sat Jan 24 05:36:39 UTC 2026: Code review monitoring started
Sat Jan 24 05:36:40 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 406
Sat Jan 24 05:36:40 UTC 2026: Qlty - 0 issues found, auto-formatting applied
Sat Jan 24 05:36:42 UTC 2026: Codacy analysis completed with auto-fixes

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 406
  • VULNERABILITIES: 0

Generated on: Sat Jan 24 05:37:59 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

@coderabbitai (Contributor) bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @.agent/tools/video/heygen-skill.md:
- Around line 19-45: The markdown headings (Foundation, Core Video Creation,
Video Customization, Advanced Features, Integration) lack blank lines before and
after them causing MD022 failures; update the .agent/tools/video/heygen-skill.md
file to ensure there is an empty line above and below each of those section
headings (e.g., add a blank line before "### Foundation" and one after it, and
do the same for "### Core Video Creation", "### Video Customization", "###
Advanced Features", and "### Integration") so each heading is separated by a
single blank line from surrounding content.
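As a generic illustration of the MD022 fix described above (the surrounding text is placeholder content, not the actual file), each heading simply needs a blank line on both sides:

```markdown
...preceding content ends here.

### Foundation

First line of the section's content starts here.
```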
♻️ Duplicate comments (11)

**.agent/tools/video/heygen-skill/rules/authentication.md (1)**

117-120: Fix error property access to match API response format.

The error handling incorrectly accesses `error.message`, but based on the documented API response format (lines 164-171), error messages are in the `error` property of the JSON response.

🔧 Proposed fix

```diff
     if (!response.ok) {
       const error = await response.json();
-      throw new Error(error.message || `HTTP ${response.status}`);
+      throw new Error(error.error || `HTTP ${response.status}`);
     }
```
**.agent/tools/video/heygen-skill/rules/captions.md (1)**

155-162: Clarify the SRT file upload and key retrieval process.

The documentation references `srt_key` but doesn't explain how to obtain it. Users need to know whether this requires uploading the SRT file as an asset first and retrieving the asset key.

Consider adding a section before this example:

````markdown
### Uploading Custom SRT Files

To use a custom SRT file, upload it as an asset first:

```typescript
// 1. Upload the SRT file
const assetId = await uploadFile("./custom-captions.srt", "application/x-subrip");

// 2. Use the asset path as srt_key
const translationConfig = {
  input_video_id: "original_video_id",
  output_languages: ["es-ES", "fr-FR"],
  srt_key: `path/to/asset/${assetId}`, // or however HeyGen expects it
  srt_role: "input",
};
```
````

Alternatively, if SRT is uploaded separately, clarify the endpoint and format.

Run the following to check if asset upload documentation covers SRT files:

```shell
#!/bin/bash
# Search for SRT upload documentation in asset management files
rg -n -C3 "srt|subtitle" .agent/tools/video/heygen-skill/rules/assets.md
```
**.agent/tools/video/heygen-skill/rules/quota.md (1)**

84-105: Fix undocumented `estimatedDuration` property usage.

The example references `videoConfig.estimatedDuration`, but the `VideoConfig` type is undefined and this property isn't standard. This makes the example confusing and not directly usable.

Estimate duration from script word count instead (as demonstrated in scripts.md):

♻️ Improved quota check with duration estimation

```diff
 async function generateVideoWithQuotaCheck(videoConfig: VideoConfig) {
   // Check quota first
   const quotaResponse = await fetch(
     "https://api.heygen.com/v1/video_generate.quota",
     { headers: { "X-Api-Key": process.env.HEYGEN_API_KEY! } }
   );

   const { data: quota } = await quotaResponse.json();

-  // Estimate required credits (rough estimate: 1 credit per minute)
-  const estimatedMinutes = videoConfig.estimatedDuration / 60;
+  // Estimate duration from script (assuming ~150 words per minute)
+  const scriptText = videoConfig.video_inputs[0]?.voice?.input_text || "";
+  const wordCount = scriptText.split(/\s+/).length;
+  const estimatedMinutes = Math.ceil(wordCount / 150);
   const requiredCredits = Math.ceil(estimatedMinutes);

   if (quota.remaining_quota < requiredCredits) {
     throw new Error(
       `Insufficient credits. Need ${requiredCredits}, have ${quota.remaining_quota}`
     );
   }

   // Proceed with video generation
   return generateVideo(videoConfig);
 }
```
**.agent/tools/video/heygen-skill/rules/webhooks.md (1)**

257-274: Critical: Buffer encoding mismatch will cause signature verification to fail.

The signature verification logic has an encoding mismatch. Line 268 generates `expectedSignature` as a hex string via `digest("hex")`, but line 272 creates a `Buffer` without specifying the encoding, defaulting to UTF-8. This will cause `timingSafeEqual` to compare incompatible buffers and fail validation.

🔒 Proposed fix for signature verification

```diff
  return crypto.timingSafeEqual(
-   Buffer.from(signature),
-   Buffer.from(expectedSignature)
+   Buffer.from(signature, "hex"),
+   Buffer.from(expectedSignature, "hex")
  );
```
**.agent/tools/video/heygen-skill/rules/avatars.md (2)**

110-116: TypeScript compilation error: undefined type `TalkingPhoto`.

The `AvatarsResponse` interface references `TalkingPhoto[]` at line 114, but this type is not defined in the file. This will cause a TypeScript compilation error.

🔧 Proposed fix

```diff
 interface AvatarsResponse {
   error: null | string;
   data: {
     avatars: Avatar[];
-    talking_photos: TalkingPhoto[];
+    talking_photos: any[];  // Or define TalkingPhoto interface if structure is known
   };
 }
```

51-58: Security: potential command injection in `execSync` example.

The example uses `execSync` with string interpolation of `preview_image_url` from the API response. If the URL contains shell metacharacters (quotes, semicolons, etc.), this could lead to command injection. While this is example code, users may copy-paste it into production.

🔒 Safer alternative using Node.js spawn

```diff
- const { execSync } = require("child_process");
+ const { spawn } = require("child_process");
  for (const avatar of data.avatars.slice(0, 3)) {
-   // 'open' on macOS opens the URL in default browser - doesn't download
-   execSync(`open "${avatar.preview_image_url}"`);
+   // Safer: Use spawn with array args to avoid shell injection
+   spawn("open", [avatar.preview_image_url], { stdio: "ignore" });
  }
```
.agent/tools/video/heygen-skill/rules/video-generation.md (3)

700-729: Use OffthreadVideo instead of Video for frame-accurate rendering.

This Remotion example imports Video from remotion, but remotion-integration.md recommends OffthreadVideo to avoid jitter during renders, especially with external video sources like HeyGen WebM outputs.

♻️ Proposed fix
-import { Video, AbsoluteFill } from "remotion";
+import { OffthreadVideo, AbsoluteFill } from "remotion";

 export const LoomStyleVideo: React.FC<{
   screenRecordingUrl: string;
   avatarWebmUrl: string;
 }> = ({ screenRecordingUrl, avatarWebmUrl }) => {
   return (
     <AbsoluteFill>
       {/* Screen recording as base layer */}
-      <Video src={screenRecordingUrl} style={{ width: "100%", height: "100%" }} />
+      <OffthreadVideo src={screenRecordingUrl} style={{ width: "100%", height: "100%" }} />

       {/* Avatar with circular mask applied in CSS */}
-      <Video
+      <OffthreadVideo
         src={avatarWebmUrl}

389-412: Polling timeout defaults are inconsistent with documented guidance.

The waitForVideo helper defaults to 60 attempts × 10 seconds = 10 minutes, but lines 548-558 recommend a 20-minute timeout and note that generation "often takes 10-15 min, sometimes longer." Users copying this example may experience unnecessary timeouts.

🔧 Proposed fix to align timeout with recommendations
 async function waitForVideo(videoId: string): Promise<string> {
-  const maxAttempts = 60;
-  const pollInterval = 10000; // 10 seconds
+  const maxAttempts = 120;    // 20 minutes total
+  const pollInterval = 10000; // 10 seconds

741-753: Fix sequential list numbering.

The best practices list has duplicate numbering at item 2 (lines 744-745), which creates confusion when referencing these items.

📝 Proposed fix
 ## Best Practices

 1. **Preview avatars before generating** - Download `preview_image_url` so user can see what the avatar looks like before committing to a video (see [avatars.md](avatars.md))
 2. **Use avatar's default voice** - Most avatars have a `default_voice_id` that's pre-matched for natural results (see [avatars.md](avatars.md))
-2. **Fallback: match gender manually** - If no default voice, ensure avatar and voice genders match (see [voices.md](voices.md))
-3. **Validate inputs** - Check avatar and voice IDs before generating
-4. **Use test mode** - Test configurations without consuming credits
+3. **Fallback: match gender manually** - If no default voice, ensure avatar and voice genders match (see [voices.md](voices.md))
+4. **Validate inputs** - Check avatar and voice IDs before generating
+5. **Use test mode** - Test configurations without consuming credits
.agent/tools/video/heygen-skill/rules/streaming-avatars.md (1)

380-382: Empty ping() method lacks implementation guidance.

The keep-alive example shows an empty ping() method with only a comment. Without specifying what message the HeyGen streaming API expects (empty message, JSON payload, specific endpoint), developers cannot implement this correctly.
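Until upstream documents the expected message, the rule could at least show the shape of a keep-alive loop. A sketch under stated assumptions: the session surface and the JSON ping payload below are hypothetical, and the real payload must come from HeyGen's streaming docs.

```typescript
// Hypothetical session surface; the real HeyGen SDK interface may differ.
interface StreamingSession {
  send(message: string): void;
}

// Assumption: a lightweight JSON ping keeps the session alive.
// Replace the payload with whatever HeyGen's streaming docs specify.
function sendPing(session: StreamingSession): void {
  session.send(JSON.stringify({ type: "ping", ts: Date.now() }));
}

// Run sendPing on an interval; returns a stop function for cleanup.
function startKeepAlive(
  session: StreamingSession,
  intervalMs = 30000
): () => void {
  const timer = setInterval(() => sendPing(session), intervalMs);
  return () => clearInterval(timer);
}
```

Returning the stop function keeps cleanup symmetrical with the React `useEffect` teardown shown elsewhere in this rule file.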

.agent/tools/video/heygen-skill/rules/dimensions.md (1)

272-306: Bug: Direct mutation of shared platformDimensions object.

The createVideoConfig function retrieves a dimension object by reference from platformDimensions and then mutates it when quality === "720p". This permanently modifies the lookup table, causing incorrect dimensions on subsequent calls.

🐛 Proposed fix: Create a shallow copy before mutation
 function createVideoConfig(options: VideoConfigOptions) {
   const platformDimensions: Record<string, Dimensions> = {
     youtube: { width: 1920, height: 1080 },
     tiktok: { width: 1080, height: 1920 },
     instagram_feed: { width: 1080, height: 1080 },
     instagram_story: { width: 1080, height: 1920 },
     linkedin: { width: 1920, height: 1080 },
   };

-  const dimension = platformDimensions[options.platform];
+  const dimension = { ...platformDimensions[options.platform] };

   // Scale down for 720p if requested
   if (options.quality === "720p") {
🧹 Nitpick comments (17)
.agent/tools/video/heygen-skill/rules/authentication.md (2)

46-46: Avoid non-null assertion in production code examples.

The non-null assertion operator (!) bypasses TypeScript's safety checks. For documentation examples, demonstrate proper validation instead.

♻️ Safer pattern
-    "X-Api-Key": process.env.HEYGEN_API_KEY!,
+    "X-Api-Key": process.env.HEYGEN_API_KEY || (() => { throw new Error("HEYGEN_API_KEY not set") })(),

Or validate at the start of the function:

const apiKey = process.env.HEYGEN_API_KEY;
if (!apiKey) throw new Error("HEYGEN_API_KEY environment variable is required");

const response = await fetch("https://api.heygen.com/v2/avatars", {
  headers: { "X-Api-Key": apiKey },
});

208-227: Consider exponential backoff jitter for production resilience.

The exponential backoff implementation is solid, but production systems benefit from adding jitter to prevent thundering herd problems when multiple clients retry simultaneously.

Enhanced retry with jitter
 async function requestWithRetry(
   fn: () => Promise<Response>,
   maxRetries = 3
 ): Promise<Response> {
   for (let i = 0; i < maxRetries; i++) {
     const response = await fn();
 
     if (response.status === 429) {
       const waitTime = Math.pow(2, i) * 1000;
+      const jitter = Math.random() * 500; // Add up to 500ms jitter
-      await new Promise((resolve) => setTimeout(resolve, waitTime));
+      await new Promise((resolve) => setTimeout(resolve, waitTime + jitter));
       continue;
     }
 
     return response;
   }
 
   throw new Error("Max retries exceeded");
 }
.agent/tools/video/heygen-skill/rules/captions.md (1)

32-34: Verify caption availability constraints by plan tier.

The comment "(availability varies by plan)" suggests captions may not be available on all subscription tiers, but this constraint isn't detailed in the Limitations section (lines 278-283).

Expand the Limitations section to clarify:

  • Which subscription tiers support captions
  • Whether basic vs styled captions have different requirements
  • API-specific caption limitations vs web interface

This helps users understand quota/access issues before attempting caption generation.

.agent/tools/video/heygen-skill/rules/photo-avatars.md (1)

410-434: Consider adding timeout to polling logic.

The waitForPhotoGeneration function has maxAttempts (60) and pollIntervalMs (5000ms), giving a 5-minute timeout. Photo generation may take longer during peak times.

Consider documenting expected generation times and increasing the timeout:

 async function waitForPhotoGeneration(generationId: string): Promise<string> {
-  const maxAttempts = 60;
+  const maxAttempts = 120; // 10 minutes at 5-second intervals
   const pollIntervalMs = 5000; // 5 seconds

Or add a comment explaining the timeout duration:

async function waitForPhotoGeneration(generationId: string): Promise<string> {
  const maxAttempts = 60; // 5 minutes timeout
  const pollIntervalMs = 5000; // 5 seconds
  // ...
}
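Since the same polling pattern recurs across several rule files (video status, photo generation, translation), a generic helper would make the timeout explicit and reusable. A sketch — `pollUntil` is not part of the skill; the name and defaults are illustrative:

```typescript
// Generic polling loop: calls check() until it yields a non-null value
// or the attempt budget is exhausted, then throws with the elapsed time.
async function pollUntil<T>(
  check: () => Promise<T | null>,
  maxAttempts = 120, // 10 minutes at 5-second intervals
  pollIntervalMs = 5000
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await check();
    if (result !== null) return result;
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
  }
  throw new Error(
    `Polling timed out after ${(maxAttempts * pollIntervalMs) / 60000} minutes`
  );
}
```

Each `waitFor*` helper then reduces to a one-line call with an endpoint-appropriate budget, so the documented generation times and the code defaults stay in one place.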
.agent/tools/video/heygen-skill/rules/quota.md (1)

158-165: Document test mode availability and limitations.

The comment "When available, use test mode" (line 156) suggests test mode may not be universally available. Clarify which subscription tiers support test mode and whether it applies to all API endpoints.

Add a note explaining:

  • Which subscription tiers include test mode
  • Whether test mode works with all HeyGen APIs (video generation, streaming, photo avatars, etc.)
  • Any limitations of test mode videos (watermarks, expiration, quality)

This prevents users from relying on test mode in environments where it isn't available.

.agent/tools/video/heygen-skill/rules/video-status.md (1)

135-136: Consider increasing default timeout for production use.

The default maxWaitMs = 600000 (10 minutes) may be insufficient based on the documented generation times (lines 93-96 recommend 15-20 minutes for safety).

Align default with documented recommendations
 async function waitForVideo(
   videoId: string,
-  maxWaitMs = 600000, // 10 minutes
+  maxWaitMs = 1200000, // 20 minutes (aligns with documented recommendations)
   pollIntervalMs = 5000 // 5 seconds
 ): Promise<string> {
.agent/tools/video/heygen-skill/rules/backgrounds.md (2)

120-125: Image requirements need dimension specifics.

The recommendation to "Match video dimensions (e.g., 1920x1080 for 1080p)" is helpful, but users may not know how to determine their video dimensions before generating.

Cross-reference the dimensions documentation:

### Image Requirements

- **Formats**: JPEG, PNG
- **Recommended size**: Match video dimensions (see [dimensions.md](dimensions.md) for common sizes)
- **Aspect ratio**: Should match video aspect ratio
- **File size**: Under 10MB recommended

Or include a table of common dimension presets:

| Video Type | Dimensions | Background Size |
|------------|------------|-----------------|
| 1080p landscape | 1920×1080 | 1920×1080 |
| 720p landscape | 1280×720 | 1280×720 |
| Portrait/TikTok | 1080×1920 | 1080×1920 |
| Square/Instagram | 1080×1080 | 1080×1080 |

154-159: Clarify video background audio handling.

Line 158 states "Background video audio is typically muted" but doesn't specify if this is automatic or requires configuration. Line 296 mentions "typically muted" again, creating ambiguity.

Clarify the behavior:

### Video Requirements

- **Format**: MP4 (H.264 codec recommended)
- **Looping**: Video will loop if shorter than avatar content
- **Audio**: Background video audio is automatically muted (avatar voice takes priority)
- **File size**: Under 100MB recommended
.agent/tools/video/heygen-skill/rules/avatars.md (1)

676-708: Optional: Consider caching avatar list to reduce API calls.

The helper functions getAvatarById, isValidAvatarId, and getRandomAvatar each call listAvatars() independently. In production code, consider caching the avatar list to reduce API calls, especially when calling multiple helpers in sequence.

Example caching pattern
// Simple in-memory cache with TTL
let avatarCache: { data: Avatar[]; expires: number } | null = null;

async function getCachedAvatars(ttlMs = 300000): Promise<Avatar[]> {
  if (avatarCache && Date.now() < avatarCache.expires) {
    return avatarCache.data;
  }
  const avatars = await listAvatars();
  avatarCache = { data: avatars, expires: Date.now() + ttlMs };
  return avatars;
}

// Use in helpers
async function getAvatarById(avatarId: string): Promise<Avatar | null> {
  const avatars = await getCachedAvatars();
  return avatars.find((a) => a.avatar_id === avatarId) || null;
}
.agent/tools/video/heygen-skill/rules/video-translation.md (1)

44-54: Optional: Improve type safety for mutually exclusive fields.

The documentation states "Either video_url OR video_id must be provided," but the TypeScript interface allows both to be undefined or both to be set. Consider using a discriminated union for compile-time validation.

Type-safe alternative
-interface VideoTranslateRequest {
-  video_url?: string;                          // Required (or video_id)
-  video_id?: string;                           // Required (or video_url)
+type VideoTranslateRequest = {
   output_language: string;                     // Required
   title?: string;
   translate_audio_only?: boolean;
   speaker_num?: number;
   callback_id?: string;
   callback_url?: string;
-}
+} & (
+  | { video_url: string; video_id?: never }
+  | { video_id: string; video_url?: never }
+);
.agent/tools/video/heygen-skill/rules/templates.md (2)

340-380: Optional: Enhance validation error messages with detailed context.

The validation function provides good coverage but could include more detail in error messages, such as actual vs. maximum length, to help developers debug issues faster.

Enhanced error messages
     // Check text length limits
     if (templateVar.type === "text" && templateVar.properties?.max_length) {
       if (value.length > templateVar.properties.max_length) {
         errors.push(
-          `Variable "${templateVar.name}" exceeds max length of ${templateVar.properties.max_length}`
+          `Variable "${templateVar.name}" exceeds max length of ${templateVar.properties.max_length} (current: ${value.length})`
         );
       }
     }

286-334: Recommended: Make batch generation more resilient to individual failures.

The current batch generation implementation fails entirely if any single video generation fails. For production use, consider collecting results with success/failure status per recipient so partial batch success is possible.

More resilient batch generation
 async function batchGenerateVideos(
   templateId: string,
   recipients: PersonalizationData[]
-): Promise<string[]> {
-  const videoIds: string[] = [];
+): Promise<Array<{ recipient: PersonalizationData; videoId?: string; error?: string }>> {
+  const results: Array<{ recipient: PersonalizationData; videoId?: string; error?: string }> = [];

   for (const recipient of recipients) {
     const variables = {
       recipient_name: recipient.name,
       company_name: recipient.company,
       personalized_message: recipient.customMessage,
     };

-    const videoId = await generateFromTemplate(templateId, variables);
-    videoIds.push(videoId);
+    try {
+      const videoId = await generateFromTemplate(templateId, variables);
+      results.push({ recipient, videoId });
+    } catch (error) {
+      console.error(`Failed for ${recipient.name}:`, error);
+      results.push({ recipient, error: error.message });
+    }

     // Rate limiting: add delay between requests
     await new Promise((r) => setTimeout(r, 1000));
   }

-  return videoIds;
+  return results;
 }
.agent/tools/video/heygen-skill/rules/video-agent.md (2)

259-281: Markdown formatting: Add blank lines around fenced code blocks.

The example prompts section has code blocks directly adjacent to text without blank lines, which triggers markdownlint MD031. This affects readability and some Markdown parsers.

📝 Proposed fix for markdown formatting
 **Product Demo:**
+
 ```text
 Create a 90-second product demo for our project management tool.
 Target audience: startup founders and small team leads.
 Highlight: Kanban boards, time tracking, and Slack integration.
 Tone: Professional but approachable.
 ```

 **Educational:**
+
 ```text
 Explain how blockchain technology works in simple terms.
 Duration: 2 minutes.
 Audience: Complete beginners with no technical background.
 Use analogies and avoid jargon.
 ```

 **Marketing:**
+
 ```text
 Create an energetic 30-second ad for our fitness app launch.
 Target: Health-conscious millennials.
 Key message: AI-powered personalized workouts.
 End with a strong call-to-action to download.
 ```

---

294-329: Markdown formatting: Add blank lines around headings and code blocks.

The comparison section headings lack the required blank line before the code blocks, triggering MD022 and MD031.

📝 Proposed fix
 ## Comparison: Video Agent vs Standard API

 ### Video Agent Request
+
 ```typescript
 // Simple: describe what you want
 const videoId = await generateWithVideoAgent(
   "Create a 60-second tutorial on setting up two-factor authentication. Professional tone, step-by-step."
 );
 ```

 ### Equivalent Standard API Request
+
 ```typescript
 // Complex: specify every detail
.agent/tools/video/heygen-skill/rules/streaming-avatars.md (1)

298-316: Consider adding client to useEffect dependencies or documenting behavior.

The useEffect uses client from state but doesn't include it in the dependency array. While the useState initializer ensures client is stable, if avatarId or voiceId change, the effect reconnects but reuses the same client instance. This works but the reconnection behavior should be explicit.

💡 Alternative: Add client to deps for clarity
     return () => {
       client.disconnect();
     };
-  }, [avatarId, voiceId]);
+  }, [avatarId, voiceId, client]);

Since client is created via useState with an initializer, it's stable and won't cause extra re-renders. Adding it makes the dependency relationship explicit.

.agent/tools/video/heygen-skill/rules/text-overlays.md (1)

18-27: Use placeholders for IDs in examples.

These snippets hardcode real-looking avatar_id/voice_id values. Please switch to placeholders (e.g., <AVATAR_ID>, <VOICE_ID>) to keep the docs generic and safer.

Also applies to: 143-170, 181-188

.agent/tools/video/heygen-skill/rules/assets.md (1)

28-56: Use placeholders and add a credential storage note.

Examples hardcode real-looking IDs and show HEYGEN_API_KEY without a secure storage note. Please swap IDs for placeholders (e.g., <AVATAR_ID>, <VOICE_ID>, <ASSET_ID>) and add a short note that keys must be stored in env vars or a secret manager (never in source).

Also applies to: 227-314

Comment on lines +19 to +45
### Foundation
- [heygen-skill/rules/authentication.md](heygen-skill/rules/authentication.md) - API key setup, X-Api-Key header, and authentication patterns
- [heygen-skill/rules/quota.md](heygen-skill/rules/quota.md) - Credit system, usage limits, and checking remaining quota
- [heygen-skill/rules/video-status.md](heygen-skill/rules/video-status.md) - Polling patterns, status types, and retrieving download URLs
- [heygen-skill/rules/assets.md](heygen-skill/rules/assets.md) - Uploading images, videos, and audio for use in video generation

### Core Video Creation
- [heygen-skill/rules/avatars.md](heygen-skill/rules/avatars.md) - Listing avatars, avatar styles, and avatar_id selection
- [heygen-skill/rules/voices.md](heygen-skill/rules/voices.md) - Listing voices, locales, speed/pitch configuration
- [heygen-skill/rules/scripts.md](heygen-skill/rules/scripts.md) - Writing scripts, pauses/breaks, pacing, and structure templates
- [heygen-skill/rules/video-generation.md](heygen-skill/rules/video-generation.md) - POST /v2/video/generate workflow and multi-scene videos
- [heygen-skill/rules/video-agent.md](heygen-skill/rules/video-agent.md) - One-shot prompt video generation with Video Agent API
- [heygen-skill/rules/dimensions.md](heygen-skill/rules/dimensions.md) - Resolution options (720p/1080p) and aspect ratios

### Video Customization
- [heygen-skill/rules/backgrounds.md](heygen-skill/rules/backgrounds.md) - Solid colors, images, and video backgrounds
- [heygen-skill/rules/text-overlays.md](heygen-skill/rules/text-overlays.md) - Adding text with fonts and positioning
- [heygen-skill/rules/captions.md](heygen-skill/rules/captions.md) - Auto-generated captions and subtitle options

### Advanced Features
- [heygen-skill/rules/templates.md](heygen-skill/rules/templates.md) - Template listing and variable replacement
- [heygen-skill/rules/video-translation.md](heygen-skill/rules/video-translation.md) - Translating videos, quality/fast modes, and dubbing
- [heygen-skill/rules/streaming-avatars.md](heygen-skill/rules/streaming-avatars.md) - Real-time interactive avatar sessions
- [heygen-skill/rules/photo-avatars.md](heygen-skill/rules/photo-avatars.md) - Creating avatars from photos (talking photos)
- [heygen-skill/rules/webhooks.md](heygen-skill/rules/webhooks.md) - Registering webhook endpoints and event types

### Integration

⚠️ Potential issue | 🟡 Minor

Add blank lines around section headings (MD022).

Static analysis flags missing blank lines before/after the section headings (“Foundation”, “Core Video Creation”, “Video Customization”, “Advanced Features”, “Integration”). Please insert a blank line above and below each heading to satisfy markdownlint.

🧰 Tools
🪛 GitHub Check: Codacy Static Code Analysis

[notice] 19-19: .agent/tools/video/heygen-skill.md#L19
Expected: 1; Actual: 0; Below


[notice] 25-25: .agent/tools/video/heygen-skill.md#L25
Expected: 1; Actual: 0; Below


[notice] 33-33: .agent/tools/video/heygen-skill.md#L33
Expected: 1; Actual: 0; Below


[notice] 38-38: .agent/tools/video/heygen-skill.md#L38
Expected: 1; Actual: 0; Below


[notice] 45-45: .agent/tools/video/heygen-skill.md#L45
Expected: 1; Actual: 0; Below

🪛 markdownlint-cli2 (0.18.1)

19-19: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


25-25: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


33-33: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


38-38: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


45-45: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)

🤖 Prompt for AI Agents
In @.agent/tools/video/heygen-skill.md around lines 19 - 45, The markdown
headings (Foundation, Core Video Creation, Video Customization, Advanced
Features, Integration) lack blank lines before and after them causing MD022
failures; update the .agent/tools/video/heygen-skill.md file to ensure there is
an empty line above and below each of those section headings (e.g., add a blank
line before "### Foundation" and one after it, and do the same for "### Core
Video Creation", "### Video Customization", "### Advanced Features", and "###
Integration") so each heading is separated by a single blank line from
surrounding content.

- Fix error property access in HeyGenClient (error.error not error.message)
- Fix dimension object mutation with spread operator
- Use OffthreadVideo instead of Video for Remotion examples
- Use streaming upload instead of buffering entire file in memory
- Use async uploadLargeFile instead of sync uploadFile
- Add missing TalkingPhoto interface definition
@sonarqubecloud

@github-actions

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 406 code smells

[INFO] Recent monitoring activity:
Sat Jan 24 05:41:28 UTC 2026: Code review monitoring started
Sat Jan 24 05:41:28 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 406
Sat Jan 24 05:41:28 UTC 2026: Qlty - 0 issues found, auto-formatting applied
Sat Jan 24 05:41:30 UTC 2026: Codacy analysis completed with auto-fixes

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 406
  • VULNERABILITIES: 0

Generated on: Sat Jan 24 05:42:49 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

@marcusquinn marcusquinn merged commit 034c92b into main Jan 24, 2026
8 of 9 checks passed