
feat: add Higgsfield AI API support with Context7 documentation#149

Merged
marcusquinn merged 2 commits into main from feature/higgsfield-api-context7
Jan 22, 2026

Conversation


@marcusquinn marcusquinn commented Jan 22, 2026

Summary

  • Add comprehensive Higgsfield AI API subagent documentation
  • Integrate with Context7 for real-time API docs access
  • Update AGENTS.md with new subagent references

Changes

New File: .agent/tools/video/higgsfield.md

Complete API documentation covering:

  • Text-to-Image (Soul model) - prompts, dimensions, quality, character consistency
  • Image-to-Video (DOP, Kling v2.1 Pro, Seedance v1 Pro) - animation from images
  • Character Consistency - create and reuse characters across generations
  • Webhooks - async job completion notifications
  • Job Polling - status checking and result retrieval
  • Python SDK - sync and async examples

Updated: .agent/tools/context/context7.md

Added Higgsfield to "Generative Media" section:

  • /websites/higgsfield_ai - Higgsfield AI (100+ image/video/audio models)

Updated: .agent/AGENTS.md

  • Updated tools/video/ description to include AI generation
  • Added Higgsfield to subagent list
  • Added "AI image/video generation" to "When to read subagents" table

Context7 Integration

The subagent includes context7_id: /websites/higgsfield_ai for real-time documentation access:

resolve-library-id("higgsfield")
query-docs("/websites/higgsfield_ai", "text-to-image parameters")

Testing

  • Verified Context7 library ID resolves correctly
  • Queried Context7 docs for API endpoints and parameters
  • All documentation sourced from official Higgsfield docs

Summary by CodeRabbit

  • New Features

    • Added AI image/video generation capabilities to the video tooling catalog
    • Integrated Higgsfield AI platform supporting 100+ generative media models (images, video, audio)
  • Documentation

    • Updated video tooling docs to reflect expanded AI-driven generation options
    • Added a comprehensive Higgsfield AI API reference with usage examples, workflows, SDK guidance, webhook/job status details, and error handling notes


- Add higgsfield.md subagent with comprehensive API documentation
- Cover text-to-image (Soul), image-to-video (DOP, Kling, Seedance)
- Include character consistency, webhooks, job polling
- Add Python SDK examples (sync/async)
- Update context7.md with Higgsfield library ID
- Update AGENTS.md subagent tables
@gemini-code-assist

Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the system's generative media capabilities by incorporating the Higgsfield AI API. It provides detailed documentation for various AI models and functionalities, ensuring that users can easily leverage text-to-image, image-to-video, and character consistency features. The integration with Context7 streamlines access to this new information, making it readily available within the existing documentation framework.

Highlights

  • New Higgsfield AI API Subagent: Introduced a new subagent for Higgsfield AI, providing unified access to over 100 generative media models for image, video, voice, and audio generation.
  • Context7 Integration: Integrated Higgsfield AI documentation with Context7, allowing for real-time access to API details, parameters, and models via query-docs.
  • Updated Agent References: Modified AGENTS.md to include Higgsfield AI under the tools/video/ category and added 'AI image/video generation' to the subagent usage guide.


coderabbitai bot commented Jan 22, 2026

**Caution**: Review failed. The pull request is closed.

**Note**: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Higgsfield AI integration added: video tooling catalog updated, library context registered, and a new comprehensive Higgsfield API documentation describing 100+ generative media models and workflows (T2I, I2V, character consistency, webhooks, job polling, Python SDK, error handling).

Changes

| Cohort / File(s) | Summary |
|------------------|---------|
| Documentation & Tool Integration<br>`.agent/AGENTS.md`, `.agent/tools/context/context7.md` | Updated video tooling catalog to include AI image/video generation; added library reference `/websites/higgsfield_ai` labeled "Higgsfield AI (100+ image/video/audio models)" |
| Higgsfield API Documentation<br>`.agent/tools/video/higgsfield.md` | Added new detailed API reference for Higgsfield: endpoints, auth, Text-to-Image (Soul), Image-to-Video (DOP) variants, Character Consistency, webhook payloads, job status polling, Python SDK examples (sync/async), and error handling notes |

Sequence Diagram(s)

sequenceDiagram
  rect rgba(200,200,255,0.5)
    participant Client
  end
  rect rgba(200,255,200,0.5)
    participant HiggsfieldAPI
  end
  rect rgba(255,200,200,0.5)
    participant WebhookReceiver
  end

  Client->>HiggsfieldAPI: Submit generation job (sync/async)
  HiggsfieldAPI-->>Client: 202 Accepted + job_id (or immediate result)
  alt Async flow
    HiggsfieldAPI->>WebhookReceiver: POST job result (webhook)
    WebhookReceiver-->>Client: Notify job completion
  else Polling flow
    Client->>HiggsfieldAPI: GET /jobs/{job_id} (poll)
    HiggsfieldAPI-->>Client: Job status / result
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

Frames awaken, pixels play 🎞️
One hundred models light the way
Requests dispatched, webhooks sing 📬
Jobs report back on swiftest wing
Docs and catalog, bright as day ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
|------------|--------|-------------|
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately and specifically describes the main change: adding Higgsfield AI API support with Context7 integration documentation across three files. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |





@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds comprehensive documentation for the Higgsfield AI API as a new subagent and integrates it with Context7. The changes include updates to AGENTS.md to reference the new tool and the creation of higgsfield.md with detailed API usage instructions. My review focuses on the new documentation file. I've found a couple of inconsistencies in the API documentation regarding credential placeholders and the Python SDK examples, which could lead to confusion. The changes are otherwise well-structured and clear.

Comment on lines +299 to +306
result = higgsfield_client.subscribe(
    'bytedance/seedream/v4/text-to-image',
    arguments={
        'prompt': 'A serene lake at sunset with mountains',
        'resolution': '2K',
        'aspect_ratio': '16:9'
    }
)


high

This Python SDK example has significant inconsistencies with the REST API documentation provided earlier in this file, which could confuse developers:

  1. Model Mismatch: The example uses the model 'bytedance/seedream/v4/text-to-image', which is not documented elsewhere. The main text-to-image section describes the soul model.
  2. Parameter Mismatch: The parameters resolution: '2K' and aspect_ratio: '16:9' are used here. However, the REST API documentation for text-to-image specifies a width_and_height parameter with explicit pixel values (e.g., "1696x960").

Please align the SDK examples with the documented API endpoints and parameters. If the SDK uses different models or abstracts parameters, this should be explicitly explained in the documentation.

Comment on lines 49 to 50
hf-api-key: {your_api_key}
hf-secret: {your_api_secret}


medium

The placeholders for credentials in this section ({your_api_key} and {your_api_secret}) are inconsistent with those used in the curl examples later in the document (e.g., {api-key} and {secret} on lines 74-75). To improve clarity and prevent confusion, it's best to use consistent placeholders for the hf- headers.

Suggested change:

- `hf-api-key: {your_api_key}` → `hf-api-key: {api-key}`
- `hf-secret: {your_api_secret}` → `hf-secret: {secret}`


augmentcode bot commented Jan 22, 2026

🤖 Augment PR Summary

Summary: This PR adds a new Higgsfield AI subagent document and wires it into the existing agent documentation and Context7 reference list.

Changes:

  • Added .agent/tools/video/higgsfield.md with a consolidated Higgsfield API guide (text-to-image, image-to-video, character consistency, webhooks, job polling, and Python SDK examples).
  • Updated .agent/tools/context/context7.md to include the Context7 library ID /websites/higgsfield_ai under “Generative Media”.
  • Updated .agent/AGENTS.md to list Higgsfield under tools/video/ and to add an “AI image/video generation” reference entry.

Technical Notes: The Higgsfield subagent includes context7_id: /websites/higgsfield_ai so consumers can resolve and query live docs via Context7 for up-to-date endpoints/parameters.



@augmentcode augmentcode bot left a comment


Review completed. 2 suggestions posted.



## Authentication

All requests require two headers:


The line “All requests require two headers” seems inconsistent with the following “Alternative format (some endpoints)” that uses Authorization: Key ...; consider clarifying that endpoints accept either header pair depending on the route.


| `seed` | integer | No | 1-1000000 for reproducibility |
| `style_id` | uuid | No | Preset style ID |
| `style_strength` | number | No | 0-1 (default: 1) |
| `custom_reference_id` | uuid | No | Character ID for consistency |


custom_reference_id is documented as type uuid, but the examples use values like character_123456; consider aligning the declared type with the format returned by /api/characters to avoid confusion.

Other Locations
  • .agent/tools/video/higgsfield.md:236


@github-actions

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 353 code smells

[INFO] Recent monitoring activity:
Thu Jan 22 00:27:53 UTC 2026: Code review monitoring started
Thu Jan 22 00:27:53 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 353
Thu Jan 22 00:27:53 UTC 2026: Qlty - 0 issues found, auto-formatting applied
Thu Jan 22 00:27:55 UTC 2026: Codacy analysis completed with auto-fixes

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 353
  • VULNERABILITIES: 0

Generated on: Thu Jan 22 00:29:09 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

- Clarify authentication formats (header-based vs Authorization)
- Use consistent credential placeholders ({api-key}, {secret})
- Fix custom_reference_id type to string (UUID format)
- Use proper UUID format in character examples
- Add note explaining SDK parameter differences from REST API

@github-actions

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 353 code smells

[INFO] Recent monitoring activity:
Thu Jan 22 00:31:03 UTC 2026: Code review monitoring started
Thu Jan 22 00:31:03 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 353
Thu Jan 22 00:31:03 UTC 2026: Qlty - 0 issues found, auto-formatting applied
Thu Jan 22 00:31:05 UTC 2026: Codacy analysis completed with auto-fixes

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 353
  • VULNERABILITIES: 0

Generated on: Thu Jan 22 00:32:18 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

@marcusquinn marcusquinn merged commit ba67079 into main Jan 22, 2026
7 of 8 checks passed

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In @.agent/tools/video/higgsfield.md:
- Around line 359-366: The fenced code block containing the Context7 examples
(calls to resolve-library-id and query-docs) is missing a language identifier;
update the opening fence to include a language (e.g., "bash" or "text") so
markdown linting passes by changing the code fence before
resolve-library-id("higgsfield") to ```bash (or ```text) while leaving the
commands resolve-library-id("higgsfield") and the subsequent query-docs(...)
lines unchanged.
- Around line 106-110: The fenced code block that lists image dimensions in
.agent/tools/video/higgsfield.md needs a language identifier to satisfy markdown
linting; update the block that currently starts with triple backticks and the
dimensions (the dimensions reference block) to use a language tag (e.g.,
```text) so it becomes ```text followed by the dimensions and ending ``` to fix
the linter error.
- Around line 16-373: The docs currently use the wrong base URL and auth and
list non-official endpoints — change the base URL from
"https://platform.higgsfield.ai" to "https://higgsfieldapi.com", replace the
headers examples (`hf-api-key`/`hf-secret` and Authorization: Key ...) with a
single Bearer token header `Authorization: Bearer {API_TOKEN}`, and
consolidate/replace the endpoint examples that reference `POST
/v1/text2image/soul`, `POST /v1/image2video/dop`, `/higgsfield-ai/dop/standard`,
`/kling-video/v2.1/pro/image-to-video`,
`/bytedance/seedance/v1/pro/image-to-video` to the official `/api/v1/generate`
structure (map model names via a model parameter), updating request/response
examples accordingly; also verify and correct the Python SDK examples
(references to higgsfield_client.subscribe and subscribe_async) to use the SDK’s
actual endpoint mappings or note the SDK is a wrapper if it differs.

Comment on lines 16 to 373
# Higgsfield AI API

Higgsfield provides unified access to 100+ generative media models through a single API. Generate images, videos, voice, and audio with automatic infrastructure scaling.

## When to Use

Read this skill when working with:

- AI image generation (text-to-image)
- AI video generation (image-to-video)
- Character consistency across generations
- Multi-model comparison (FLUX, Kling, Seedance, etc.)
- Webhook-based async generation pipelines

## Quick Reference

| Endpoint | Purpose | Model |
|----------|---------|-------|
| `POST /v1/text2image/soul` | Text to image | Soul |
| `POST /v1/image2video/dop` | Image to video | DOP |
| `POST /higgsfield-ai/dop/standard` | Image to video | DOP Standard |
| `POST /kling-video/v2.1/pro/image-to-video` | Image to video | Kling v2.1 Pro |
| `POST /bytedance/seedance/v1/pro/image-to-video` | Image to video | Seedance v1 Pro |
| `POST /api/characters` | Create character | - |
| `GET /api/generation-results` | Poll job status | - |

**Base URL**: `https://platform.higgsfield.ai`

## Authentication

All requests require two headers:

```bash
hf-api-key: {your_api_key}
hf-secret: {your_api_secret}
```

Alternative format (some endpoints):

```bash
Authorization: Key {api_key}:{api_secret}
```

Store credentials in `~/.config/aidevops/mcp-env.sh`:

```bash
export HIGGSFIELD_API_KEY="your-api-key"
export HIGGSFIELD_SECRET="your-api-secret"
```
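Not part of the official docs, but as a small sketch, the exported variables can be read back to build the `hf-` header pair used by the curl examples in this document:

```python
import os

# Credentials exported in ~/.config/aidevops/mcp-env.sh
api_key = os.environ.get("HIGGSFIELD_API_KEY", "")
api_secret = os.environ.get("HIGGSFIELD_SECRET", "")

# Header pair matching the curl examples below
headers = {
    "hf-api-key": api_key,
    "hf-secret": api_secret,
    "Content-Type": "application/json",
}
```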

## Text-to-Image (Soul Model)

Generate images from text prompts with optional character consistency.

### Basic Request

```bash
curl -X POST 'https://platform.higgsfield.ai/v1/text2image/soul' \
--header 'hf-api-key: {api-key}' \
--header 'hf-secret: {secret}' \
--header 'Content-Type: application/json' \
--data '{
  "params": {
    "prompt": "A serene mountain landscape at sunset",
    "width_and_height": "1696x960",
    "enhance_prompt": true,
    "quality": "1080p",
    "batch_size": 1
  }
}'
```

### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `prompt` | string | Yes | Text description of image |
| `width_and_height` | string | Yes | Dimensions (see supported sizes) |
| `enhance_prompt` | boolean | No | Auto-enhance prompt (default: false) |
| `quality` | string | No | `720p` or `1080p` (default: 1080p) |
| `batch_size` | integer | No | 1 or 4 (default: 1) |
| `seed` | integer | No | 1-1000000 for reproducibility |
| `style_id` | uuid | No | Preset style ID |
| `style_strength` | number | No | 0-1 (default: 1) |
| `custom_reference_id` | uuid | No | Character ID for consistency |
| `custom_reference_strength` | number | No | 0-1 (default: 1) |
| `image_reference` | object | No | Reference image for guidance |

### Supported Dimensions

```
1152x2048, 2048x1152, 2048x1536, 1536x2048,
1344x2016, 2016x1344, 960x1696, 1536x1536,
1536x1152, 1696x960, 1152x1536, 1088x1632, 1632x1088
```

### Response

```json
{
  "id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
  "type": "text2image_soul",
  "created_at": "2023-11-07T05:31:56Z",
  "jobs": [
    {
      "id": "job-123",
      "status": "queued",
      "results": {
        "min": { "type": "image/png", "url": "https://..." },
        "raw": { "type": "image/png", "url": "https://..." }
      }
    }
  ]
}
```
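As an illustrative helper (the function name and the preference for the `raw` variant are assumptions, not part of the API), the response shape above can be unpacked like this:

```python
from typing import Optional

def first_image_url(response: dict) -> Optional[str]:
    # Walk the jobs array from the response above and return the first
    # available URL, preferring the full-quality "raw" variant over "min".
    for job in response.get("jobs", []):
        results = job.get("results") or {}
        for variant in ("raw", "min"):
            entry = results.get(variant)
            if entry and entry.get("url"):
                return entry["url"]
    return None
```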

## Image-to-Video (DOP Model)

Transform static images into animated videos.

### Basic Request

```bash
curl -X POST 'https://platform.higgsfield.ai/v1/image2video/dop' \
--header 'hf-api-key: {api-key}' \
--header 'hf-secret: {secret}' \
--header 'Content-Type: application/json' \
--data '{
  "params": {
    "model": "dop-turbo",
    "prompt": "A cat walking gracefully through a garden",
    "input_images": [{
      "type": "image_url",
      "image_url": "https://example.com/cat.jpg"
    }],
    "enhance_prompt": true
  }
}'
```

### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | `dop-turbo` or `dop-standard` |
| `prompt` | string | Yes | Animation description |
| `input_images` | array | Yes | Source image(s) |
| `input_images_end` | array | No | End frame image(s) |
| `motions` | array | No | Motion presets with strength |
| `seed` | integer | No | 1-1000000 for reproducibility |
| `enhance_prompt` | boolean | No | Auto-enhance prompt |

### Alternative Models

**DOP Standard** (simpler API):

```bash
curl -X POST 'https://platform.higgsfield.ai/higgsfield-ai/dop/standard' \
--header 'Authorization: Key {api_key}:{api_secret}' \
--header 'Content-Type: application/json' \
--data '{
  "image_url": "https://example.com/image.jpg",
  "prompt": "Woman walks down Tokyo street with neon lights",
  "duration": 5
}'
```

**Kling v2.1 Pro** (cinematic):

```bash
curl -X POST 'https://platform.higgsfield.ai/kling-video/v2.1/pro/image-to-video' \
--header 'Authorization: Key {api_key}:{api_secret}' \
--header 'Content-Type: application/json' \
--data '{
  "image_url": "https://example.com/landscape.jpg",
  "prompt": "Camera slowly pans across landscape as clouds drift"
}'
```

**Seedance v1 Pro** (professional):

```bash
curl -X POST 'https://platform.higgsfield.ai/bytedance/seedance/v1/pro/image-to-video' \
--header 'Authorization: Key {api_key}:{api_secret}' \
--header 'Content-Type: application/json' \
--data '{
  "image_url": "https://example.com/portrait.jpg",
  "prompt": "Subject turns head slightly and smiles"
}'
```

## Character Consistency

Create reusable characters for consistent image generation.

### Create Character

```bash
curl -X POST 'https://platform.higgsfield.ai/api/characters' \
--header 'hf-api-key: {api-key}' \
--header 'hf-secret: {secret}' \
--form 'photo=@/path/to/photo.jpg'
```

Response:

```json
{
  "id": "character_123456",
  "photo_url": "https://cdn.higgsfield.ai/characters/photo_123.jpg",
  "created_at": "2023-12-07T10:30:00Z"
}
```

### Use Character in Generation

```json
{
  "params": {
    "prompt": "Character sitting in a coffee shop",
    "custom_reference_id": "character_123456",
    "custom_reference_strength": 0.9
  }
}
```
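As a sketch (the helper name is hypothetical; the payload shape follows the two examples above), a character returned by `POST /api/characters` can be wired into a generation request like so:

```python
def build_character_payload(character: dict, prompt: str, strength: float = 0.9) -> dict:
    # Hypothetical helper: reuse a character created via POST /api/characters
    # (response shape shown above) in a text-to-image request body.
    if not 0 <= strength <= 1:
        raise ValueError("custom_reference_strength must be in 0-1")
    return {
        "params": {
            "prompt": prompt,
            "custom_reference_id": character["id"],
            "custom_reference_strength": strength,
        }
    }
```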

## Webhook Integration

Receive notifications when jobs complete.

```json
{
  "webhook": {
    "url": "https://your-server.com/webhook",
    "secret": "your-webhook-secret"
  },
  "params": {
    "prompt": "..."
  }
}
```
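The document does not specify how the webhook secret is verified, so the following receiver sketch assumes an HMAC-SHA256 signature over the raw body; treat the scheme, and the payload shape, as placeholders until confirmed against the official docs:

```python
import hashlib
import hmac
import json

WEBHOOK_SECRET = "your-webhook-secret"  # must match the "secret" field sent above

def verify_signature(raw_body: bytes, signature: str) -> bool:
    # Assumed scheme: HMAC-SHA256 of the raw body keyed by the shared secret.
    # The real Higgsfield signing scheme is not specified in this document.
    expected = hmac.new(WEBHOOK_SECRET.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def handle_webhook(raw_body: bytes, signature: str) -> dict:
    if not verify_signature(raw_body, signature):
        raise PermissionError("webhook signature mismatch")
    payload = json.loads(raw_body)
    # Hypothetical payload shape mirroring the polling response in this doc
    return {"id": payload.get("id"), "status": payload.get("status")}
```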

## Job Status Polling

Check generation status and retrieve results.

```bash
curl -X GET 'https://platform.higgsfield.ai/api/generation-results?id=job_789012' \
--header 'hf-api-key: {api-key}' \
--header 'hf-secret: {secret}'
```

Response:

```json
{
  "id": "job_789012",
  "status": "completed",
  "results": [{
    "type": "image",
    "url": "https://cdn.higgsfield.ai/generations/img_123.jpg"
  }],
  "retention_expires_at": "2023-12-14T10:30:00Z"
}
```

**Status values**: `pending`, `processing`, `completed`, `failed`

**Note**: Results are retained for 7 days.
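The polling endpoint above fits naturally into a small loop; `fetch_status` below is a stand-in for the GET request shown, and the interval/timeout values are illustrative:

```python
import time

def poll_job(fetch_status, job_id, interval=2.0, timeout=120.0):
    # fetch_status stands in for the GET /api/generation-results request
    # shown above; it should return the parsed JSON response for job_id.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status(job_id)
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```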

## Python SDK

Install:

```bash
pip install higgsfield-client
```

### Synchronous

```python
import higgsfield_client

result = higgsfield_client.subscribe(
    'bytedance/seedream/v4/text-to-image',
    arguments={
        'prompt': 'A serene lake at sunset with mountains',
        'resolution': '2K',
        'aspect_ratio': '16:9'
    }
)

print(result['images'][0]['url'])
```

### Asynchronous

```python
import asyncio
import higgsfield_client

async def main():
    result = await higgsfield_client.subscribe_async(
        'bytedance/seedream/v4/text-to-image',
        arguments={
            'prompt': 'A serene lake at sunset with mountains',
            'resolution': '2K',
            'aspect_ratio': '16:9'
        }
    )
    print(result['images'][0]['url'])

asyncio.run(main())
```

## Error Handling

### Validation Error (422)

```json
{
  "detail": [
    {
      "loc": ["body", "params", "prompt"],
      "msg": "Prompt cannot be empty",
      "type": "value_error"
    }
  ]
}
```

### Authentication Error (401)

Invalid or missing API credentials.

### Rate Limiting

The platform auto-scales, but implement exponential backoff for resilience.
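A minimal backoff wrapper might look like this (the helper is generic, not Higgsfield-specific; the delays and attempt counts are illustrative):

```python
import random
import time

def with_backoff(request_fn, max_attempts=5, base_delay=1.0):
    # Retry request_fn with exponential backoff plus jitter:
    # waits of roughly base_delay, 2x, 4x, ... between attempts.
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```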

## Context7 Integration

For up-to-date API documentation:

```
resolve-library-id("higgsfield")
# Returns: /websites/higgsfield_ai

query-docs("/websites/higgsfield_ai", "text-to-image parameters")
query-docs("/websites/higgsfield_ai", "image-to-video models")
query-docs("/websites/higgsfield_ai", "character consistency")
```

## Related

- [Higgsfield Docs](https://docs.higgsfield.ai/)
- [Higgsfield Dashboard](https://cloud.higgsfield.ai)
- `tools/video/remotion.md` - Programmatic video editing
- `tools/browser/stagehand.md` - Browser automation for assets


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Higgsfield AI official API documentation endpoint reference

💡 Result:

Official Higgsfield AI API docs (endpoint reference): https://docs.higgsfield.ai. [1]
Official Python SDK / repo (includes examples of endpoints and usage): https://github.com/higgsfield-ai/higgsfield-client. [2]

References:
[1] Higgsfield API documentation.
[2] higgsfield-ai / higgsfield-client — GitHub.


🏁 Script executed:

# Check if higgsfield-client is a real Python package
pip search higgsfield-client 2>/dev/null || echo "pip search unavailable, checking PyPI via curl"
curl -s "https://pypi.org/pypi/higgsfield-client/json" 2>/dev/null | head -20

Repository: marcusquinn/aidevops

Length of output: 150


🌐 Web query:

Higgsfield AI platform API v1 text2image soul model documentation

💡 Result:

I found the Higgsfield “Soul” text→image API docs and examples. Key points (sources below):

  • Base endpoints:

  • Auth:

    • Bearer API key in Authorization header for higgsfieldapi.com; some gateways use Ocp-Apim-Subscription-Key. (higgsfieldapi.com)
  • Important request parameters (common):

    • prompt (string) — required.
    • reference images / image_reference_image_url or reference_image_urls — 1–5 images to enable Soul/reference mode.
    • style_id / soul_style_id (UUID) and width_and_height / width+height enums for output size.
    • Optional: enhance_prompt, seed, webhook_url/webhook_secret for callbacks. (higgsfieldapi.com)
  • Example cURL / JSON payloads:

    • Higgsfield Soul (via Pixazo-style gateway) example payload shown with prompt, soul_style_id, width_and_height, image_reference_image_url. (pixazo.ai)
    • Higgsfield v1 generate Soul example (reference_image_urls array) shown in Higgsfield API docs with sample cURL. (higgsfieldapi.com)
  • SDK / client:

    • Official Python client repo (higgsfield-client) with quickstart and model names (e.g., bytedance/seedream) for text/image generation. (github.com)

If you want, I can:

  • Fetch the full OpenAPI page or a specific endpoint’s complete request/response schema, or
  • Prepare ready-to-run cURL, Python, or Node examples for the Soul text2image call using your preferred gateway and auth header.

Sources:
[1] Higgsfield API docs / Higgsfield API site. (higgsfieldapi.com)
[2] Higgsfield docs site (docs.higgsfield.ai). (docs.higgsfield.ai)
[3] Pixazo-hosted Higgsfield Soul API example (generateSoul). (pixazo.ai)
[4] Higgsfield Python client (GitHub). (github.com)


🌐 Web query:

Higgsfield AI image-to-video DOP Kling Seedance models API

💡 Result:

Summary — what’s available and where to call it

  • Higgsfield (official): offers image-to-video (and image→multi-shot) via its app/site with cinematic controls including “Kling” motion control and multiple internal models; site: Higgsfield.ai. [1]

  • Higgsfield API (docs): REST API for text→video, image→video and “Soul” (reference-image) modes; base URL /api/v1/ with a /generate (or v1/generate) endpoint, Bearer API key auth, duration/resolution/seed/camera parameters and SDKs. (See their API docs for exact parameter names & examples.) [2]

  • Higgsfield DOP / DoP models (third‑party integration): public-facing API/docs that expose DoP image→video models and model options like dop-lite / dop-preview / dop-turbo, plus parameters such as prompt, seed, motion_id and motions_strength, webhook callbacks, input_images / input_images_end. Useful if you need DoP-specific controls. [3]

  • Seedance (ByteDance): Seedance is ByteDance’s I2V/T2V family (Seedance 1.x → 2.x) that produces multi‑shot, cinematic 1080p image→video outputs; available via ByteDance/Seed pages and several model marketplaces/playgrounds (Seedance v1/v1.5/v2 references). Use Seedance for high-quality multi-shot storytelling and stable motion. [4][5]

Quick pointers for integration

  • For Higgsfield API use their documentation and /v1/generate (Bearer token required). Test with image_url + prompt to trigger image→video mode. [2]
  • For DoP-specific features (motion presets, motion strength, start/end frame) use the DoP API model fields (model=dop-*, motions_id, motions_strength, input_images_end). [3]
  • To call Seedance on marketplaces/IDPs, select by model name (e.g., bytedance/seedance-v1.5-pro or seedance-1-lite) and pass the prompt + seed + first-frame image per that provider’s API. [4][5]

Sources

  • Higgsfield product/site. [1]
  • Higgsfield API documentation. [2]
  • Higgsfield DoP API (model names and params: dop-lite / dop-preview / dop-turbo, motion_id, motions_strength, input_images_end). [3]
  • ByteDance Seed/Seedance pages and tech report (Seedance 1.0 / 1.5 / 2 mentions). [4][5]

Do you want:

  • sample HTTP request body + curl for Higgsfield image→video (I’ll produce one for v1/generate or the DoP endpoint), or
  • a direct link to the specific API docs or SDK code sample?

Update API endpoints, authentication headers, and base URL to match official Higgsfield documentation.

The documented endpoints, authentication method, and base URL do not align with the official Higgsfield API documentation. According to official sources:

  • Correct base URL: https://higgsfieldapi.com (not platform.higgsfield.ai)
  • Correct authentication: Bearer token in Authorization header (not hf-api-key/hf-secret headers)
  • Correct endpoints: /api/v1/generate structure (not /v1/text2image/soul, /v1/image2video/dop, etc.)

Additionally, the documented endpoint paths for DOP, Kling, and Seedance models (/higgsfield-ai/dop/standard, /kling-video/v2.1/pro/image-to-video, etc.) do not appear in the official Higgsfield API docs. Verify these against the official Higgsfield API documentation and higgsfieldapi.com before finalizing. The Python SDK examples may reference a separate wrapper/marketplace service rather than the direct Higgsfield API—confirm the SDK's actual endpoint mappings.

🧰 Tools
🪛 GitHub Check: Codacy Static Code Analysis

[notice] 106-106: .agent/tools/video/higgsfield.md#L106
Fenced code blocks should have a language specified

🪛 markdownlint-cli2 (0.18.1)

106-106: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


359-359: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In @.agent/tools/video/higgsfield.md around lines 16 - 373, The docs currently
use the wrong base URL and auth and list non-official endpoints — change the
base URL from "https://platform.higgsfield.ai" to "https://higgsfieldapi.com",
replace the headers examples (`hf-api-key`/`hf-secret` and Authorization: Key
...) with a single Bearer token header `Authorization: Bearer {API_TOKEN}`, and
consolidate/replace the endpoint examples that reference `POST
/v1/text2image/soul`, `POST /v1/image2video/dop`, `/higgsfield-ai/dop/standard`,
`/kling-video/v2.1/pro/image-to-video`,
`/bytedance/seedance/v1/pro/image-to-video` to the official `/api/v1/generate`
structure (map model names via a model parameter), updating request/response
examples accordingly; also verify and correct the Python SDK examples
(references to higgsfield_client.subscribe and subscribe_async) to use the SDK’s
actual endpoint mappings or note the SDK is a wrapper if it differs.

Comment on lines +106 to +110
```
1152x2048, 2048x1152, 2048x1536, 1536x2048,
1344x2016, 2016x1344, 960x1696, 1536x1536,
1536x1152, 1696x960, 1152x1536, 1088x1632, 1632x1088
```
⚠️ Potential issue | 🟡 Minor

Add language identifier to fenced code block.

The dimensions reference block should specify a language identifier to satisfy markdown linting rules.

📝 Proposed fix

````diff
-```
+```text
 1152x2048, 2048x1152, 2048x1536, 1536x2048,
 1344x2016, 2016x1344, 960x1696, 1536x1536,
 1536x1152, 1696x960, 1152x1536, 1088x1632, 1632x1088
````

🤖 Prompt for AI Agents
In @.agent/tools/video/higgsfield.md around lines 106 - 110, The fenced code
block that lists image dimensions in .agent/tools/video/higgsfield.md needs a
language identifier to satisfy markdown linting; update the block that currently
starts with triple backticks and the dimensions (the dimensions reference block)
to use a language tag (e.g., ```text) so it becomes ```text followed by the
dimensions and ending ``` to fix the linter error.

Comment on lines +359 to +366
```
resolve-library-id("higgsfield")
# Returns: /websites/higgsfield_ai

query-docs("/websites/higgsfield_ai", "text-to-image parameters")
query-docs("/websites/higgsfield_ai", "image-to-video models")
query-docs("/websites/higgsfield_ai", "character consistency")
```
⚠️ Potential issue | 🟡 Minor

Add language identifier to fenced code block.

The Context7 command examples should specify a language identifier (bash or text) to satisfy markdown linting rules.

📝 Proposed fix

````diff
-```
+```bash
 resolve-library-id("higgsfield")
 # Returns: /websites/higgsfield_ai

 query-docs("/websites/higgsfield_ai", "text-to-image parameters")
 query-docs("/websites/higgsfield_ai", "image-to-video models")
 query-docs("/websites/higgsfield_ai", "character consistency")
````

🤖 Prompt for AI Agents
In @.agent/tools/video/higgsfield.md around lines 359 - 366, The fenced code
block containing the Context7 examples (calls to resolve-library-id and
query-docs) is missing a language identifier; update the opening fence to
include a language (e.g., "bash" or "text") so markdown linting passes by
changing the code fence before resolve-library-id("higgsfield") to ```bash (or
```text) while leaving the commands resolve-library-id("higgsfield") and the
subsequent query-docs(...) lines unchanged.
