feat: add Higgsfield AI API support with Context7 documentation (#149)

marcusquinn merged 2 commits into main.
Conversation
- Add higgsfield.md subagent with comprehensive API documentation
- Cover text-to-image (Soul), image-to-video (DOP, Kling, Seedance)
- Include character consistency, webhooks, job polling
- Add Python SDK examples (sync/async)
- Update context7.md with Higgsfield library ID
- Update AGENTS.md subagent tables
Summary of Changes (Gemini Code Assist): This pull request expands the system's generative media capabilities by incorporating the Higgsfield AI API. It provides detailed documentation for the available models and functionality so that users can leverage text-to-image, image-to-video, and character-consistency features, and the Context7 integration makes this information readily available within the existing documentation framework.
CodeRabbit Walkthrough: Higgsfield AI integration added: video tooling catalog updated, library context registered, and a new comprehensive Higgsfield API document describing 100+ generative media models and workflows (T2I, I2V, character consistency, webhooks, job polling, Python SDK, error handling).
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant HiggsfieldAPI
    participant WebhookReceiver
    Client->>HiggsfieldAPI: Submit generation job (sync/async)
    HiggsfieldAPI-->>Client: 202 Accepted + job_id (or immediate result)
    alt Async flow
        HiggsfieldAPI->>WebhookReceiver: POST job result (webhook)
        WebhookReceiver-->>Client: Notify job completion
    else Polling flow
        Client->>HiggsfieldAPI: GET /jobs/{job_id} (poll)
        HiggsfieldAPI-->>Client: Job status / result
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 passed.
Code Review
This pull request adds comprehensive documentation for the Higgsfield AI API as a new subagent and integrates it with Context7. The changes include updates to AGENTS.md to reference the new tool and the creation of higgsfield.md with detailed API usage instructions. My review focuses on the new documentation file. I've found a couple of inconsistencies in the API documentation regarding credential placeholders and the Python SDK examples, which could lead to confusion. The changes are otherwise well-structured and clear.
```python
result = higgsfield_client.subscribe(
    'bytedance/seedream/v4/text-to-image',
    arguments={
        'prompt': 'A serene lake at sunset with mountains',
        'resolution': '2K',
        'aspect_ratio': '16:9'
    }
)
```
This Python SDK example has significant inconsistencies with the REST API documentation earlier in this file, which could confuse developers:

- Model mismatch: the example uses the model `bytedance/seedream/v4/text-to-image`, which is not documented elsewhere; the main text-to-image section describes the `soul` model.
- Parameter mismatch: the parameters `resolution: '2K'` and `aspect_ratio: '16:9'` are used here, but the REST API documentation for text-to-image specifies a `width_and_height` parameter with explicit pixel values (e.g., `"1696x960"`).

Please align the SDK examples with the documented API endpoints and parameters. If the SDK uses different models or abstracts parameters, this should be explicitly explained in the documentation.
.agent/tools/video/higgsfield.md (Outdated)

```bash
hf-api-key: {your_api_key}
hf-secret: {your_api_secret}
```
The placeholders for credentials in this section (`{your_api_key}` and `{your_api_secret}`) are inconsistent with those used in the curl examples later in the document (e.g., `{api-key}` and `{secret}` on lines 74-75). To improve clarity and prevent confusion, it's best to use consistent placeholders for the `hf-` headers.
```diff
-hf-api-key: {your_api_key}
-hf-secret: {your_api_secret}
+hf-api-key: {api-key}
+hf-secret: {secret}
```
🤖 Augment PR Summary: This PR adds a new Higgsfield AI subagent document and wires it into the existing agent documentation and Context7 reference list.

Technical note: the Higgsfield subagent includes `context7_id: /websites/higgsfield_ai` for real-time documentation access.
.agent/tools/video/higgsfield.md (Outdated)

```markdown
## Authentication

All requests require two headers:
```
.agent/tools/video/higgsfield.md (Outdated)

```markdown
| `seed` | integer | No | 1-1000000 for reproducibility |
| `style_id` | uuid | No | Preset style ID |
| `style_strength` | number | No | 0-1 (default: 1) |
| `custom_reference_id` | uuid | No | Character ID for consistency |
```
🔍 Code Quality Report (AI DevOps Framework Code Review Monitoring). Generated on: Thu Jan 22 00:29:09 UTC 2026.
- Clarify authentication formats (header-based vs Authorization)
- Use consistent credential placeholders ({api-key}, {secret})
- Fix custom_reference_id type to string (UUID format)
- Use proper UUID format in character examples
- Add note explaining SDK parameter differences from REST API
🔍 Code Quality Report (AI DevOps Framework Code Review Monitoring). Generated on: Thu Jan 22 00:32:18 UTC 2026.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @.agent/tools/video/higgsfield.md:
- Around line 359-366: The fenced code block containing the Context7 examples
(calls to resolve-library-id and query-docs) is missing a language identifier;
update the opening fence to include a language (e.g., "bash" or "text") so
markdown linting passes by changing the code fence before
resolve-library-id("higgsfield") to ```bash (or ```text) while leaving the
commands resolve-library-id("higgsfield") and the subsequent query-docs(...)
lines unchanged.
- Around line 106-110: The fenced code block that lists image dimensions in
.agent/tools/video/higgsfield.md needs a language identifier to satisfy markdown
linting; update the block that currently starts with triple backticks and the
dimensions (the dimensions reference block) to use a language tag (e.g.,
```text) so it becomes ```text followed by the dimensions and ending ``` to fix
the linter error.
- Around line 16-373: The docs currently use the wrong base URL and auth and
list non-official endpoints — change the base URL from
"https://platform.higgsfield.ai" to "https://higgsfieldapi.com", replace the
headers examples (`hf-api-key`/`hf-secret` and Authorization: Key ...) with a
single Bearer token header `Authorization: Bearer {API_TOKEN}`, and
consolidate/replace the endpoint examples that reference `POST
/v1/text2image/soul`, `POST /v1/image2video/dop`, `/higgsfield-ai/dop/standard`,
`/kling-video/v2.1/pro/image-to-video`,
`/bytedance/seedance/v1/pro/image-to-video` to the official `/api/v1/generate`
structure (map model names via a model parameter), updating request/response
examples accordingly; also verify and correct the Python SDK examples
(references to higgsfield_client.subscribe and subscribe_async) to use the SDK’s
actual endpoint mappings or note the SDK is a wrapper if it differs.
.agent/tools/video/higgsfield.md

# Higgsfield AI API

Higgsfield provides unified access to 100+ generative media models through a single API. Generate images, videos, voice, and audio with automatic infrastructure scaling.

## When to Use

Read this skill when working with:

- AI image generation (text-to-image)
- AI video generation (image-to-video)
- Character consistency across generations
- Multi-model comparison (FLUX, Kling, Seedance, etc.)
- Webhook-based async generation pipelines

## Quick Reference

| Endpoint | Purpose | Model |
|----------|---------|-------|
| `POST /v1/text2image/soul` | Text to image | Soul |
| `POST /v1/image2video/dop` | Image to video | DOP |
| `POST /higgsfield-ai/dop/standard` | Image to video | DOP Standard |
| `POST /kling-video/v2.1/pro/image-to-video` | Image to video | Kling v2.1 Pro |
| `POST /bytedance/seedance/v1/pro/image-to-video` | Image to video | Seedance v1 Pro |
| `POST /api/characters` | Create character | - |
| `GET /api/generation-results` | Poll job status | - |

**Base URL**: `https://platform.higgsfield.ai`
## Authentication

All requests require two headers:

```bash
hf-api-key: {your_api_key}
hf-secret: {your_api_secret}
```

Alternative format (some endpoints):

```bash
Authorization: Key {api_key}:{api_secret}
```

Store credentials in `~/.config/aidevops/mcp-env.sh`:

```bash
export HIGGSFIELD_API_KEY="your-api-key"
export HIGGSFIELD_SECRET="your-api-secret"
```
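As a sketch of how the two documented headers can be assembled from those environment variables (the helper name is ours, not part of any Higgsfield SDK):

```python
import os

def higgsfield_headers() -> dict:
    """Build the documented hf-* auth headers from environment variables.

    Assumes HIGGSFIELD_API_KEY and HIGGSFIELD_SECRET are exported as shown
    above; raises KeyError if either is missing.
    """
    return {
        "hf-api-key": os.environ["HIGGSFIELD_API_KEY"],
        "hf-secret": os.environ["HIGGSFIELD_SECRET"],
        "Content-Type": "application/json",
    }
```

Passing this dict as `headers=` to an HTTP client keeps credentials out of request bodies and shell history.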
## Text-to-Image (Soul Model)

Generate images from text prompts with optional character consistency.

### Basic Request

```bash
curl -X POST 'https://platform.higgsfield.ai/v1/text2image/soul' \
  --header 'hf-api-key: {api-key}' \
  --header 'hf-secret: {secret}' \
  --header 'Content-Type: application/json' \
  --data '{
    "params": {
      "prompt": "A serene mountain landscape at sunset",
      "width_and_height": "1696x960",
      "enhance_prompt": true,
      "quality": "1080p",
      "batch_size": 1
    }
  }'
```

### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `prompt` | string | Yes | Text description of image |
| `width_and_height` | string | Yes | Dimensions (see supported sizes) |
| `enhance_prompt` | boolean | No | Auto-enhance prompt (default: false) |
| `quality` | string | No | `720p` or `1080p` (default: 1080p) |
| `batch_size` | integer | No | 1 or 4 (default: 1) |
| `seed` | integer | No | 1-1000000 for reproducibility |
| `style_id` | uuid | No | Preset style ID |
| `style_strength` | number | No | 0-1 (default: 1) |
| `custom_reference_id` | uuid | No | Character ID for consistency |
| `custom_reference_strength` | number | No | 0-1 (default: 1) |
| `image_reference` | object | No | Reference image for guidance |

### Supported Dimensions

```
1152x2048, 2048x1152, 2048x1536, 1536x2048,
1344x2016, 2016x1344, 960x1696, 1536x1536,
1536x1152, 1696x960, 1152x1536, 1088x1632, 1632x1088
```

### Response

```json
{
  "id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
  "type": "text2image_soul",
  "created_at": "2023-11-07T05:31:56Z",
  "jobs": [
    {
      "id": "job-123",
      "status": "queued",
      "results": {
        "min": { "type": "image/png", "url": "https://..." },
        "raw": { "type": "image/png", "url": "https://..." }
      }
    }
  ]
}
```
## Image-to-Video (DOP Model)

Transform static images into animated videos.

### Basic Request

```bash
curl -X POST 'https://platform.higgsfield.ai/v1/image2video/dop' \
  --header 'hf-api-key: {api-key}' \
  --header 'hf-secret: {secret}' \
  --header 'Content-Type: application/json' \
  --data '{
    "params": {
      "model": "dop-turbo",
      "prompt": "A cat walking gracefully through a garden",
      "input_images": [{
        "type": "image_url",
        "image_url": "https://example.com/cat.jpg"
      }],
      "enhance_prompt": true
    }
  }'
```

### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | `dop-turbo` or `dop-standard` |
| `prompt` | string | Yes | Animation description |
| `input_images` | array | Yes | Source image(s) |
| `input_images_end` | array | No | End frame image(s) |
| `motions` | array | No | Motion presets with strength |
| `seed` | integer | No | 1-1000000 for reproducibility |
| `enhance_prompt` | boolean | No | Auto-enhance prompt |

### Alternative Models

**DOP Standard** (simpler API):

```bash
curl -X POST 'https://platform.higgsfield.ai/higgsfield-ai/dop/standard' \
  --header 'Authorization: Key {api_key}:{api_secret}' \
  --header 'Content-Type: application/json' \
  --data '{
    "image_url": "https://example.com/image.jpg",
    "prompt": "Woman walks down Tokyo street with neon lights",
    "duration": 5
  }'
```

**Kling v2.1 Pro** (cinematic):

```bash
curl -X POST 'https://platform.higgsfield.ai/kling-video/v2.1/pro/image-to-video' \
  --header 'Authorization: Key {api_key}:{api_secret}' \
  --header 'Content-Type: application/json' \
  --data '{
    "image_url": "https://example.com/landscape.jpg",
    "prompt": "Camera slowly pans across landscape as clouds drift"
  }'
```

**Seedance v1 Pro** (professional):

```bash
curl -X POST 'https://platform.higgsfield.ai/bytedance/seedance/v1/pro/image-to-video' \
  --header 'Authorization: Key {api_key}:{api_secret}' \
  --header 'Content-Type: application/json' \
  --data '{
    "image_url": "https://example.com/portrait.jpg",
    "prompt": "Subject turns head slightly and smiles"
  }'
```
## Character Consistency

Create reusable characters for consistent image generation.

### Create Character

```bash
curl -X POST 'https://platform.higgsfield.ai/api/characters' \
  --header 'hf-api-key: {api-key}' \
  --header 'hf-secret: {secret}' \
  --form 'photo=@/path/to/photo.jpg'
```

Response:

```json
{
  "id": "character_123456",
  "photo_url": "https://cdn.higgsfield.ai/characters/photo_123.jpg",
  "created_at": "2023-12-07T10:30:00Z"
}
```

### Use Character in Generation

```json
{
  "params": {
    "prompt": "Character sitting in a coffee shop",
    "custom_reference_id": "character_123456",
    "custom_reference_strength": 0.9
  }
}
```
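The two-step workflow above (create a character, then reference its ID) can be sketched in Python; `build_character_params` is a hypothetical helper of ours, and the field names simply mirror the JSON examples in this document:

```python
def build_character_params(prompt: str, character_id: str,
                           strength: float = 0.9) -> dict:
    """Assemble the generation params block that references a stored
    character, validating the documented 0-1 range for
    custom_reference_strength."""
    if not 0 <= strength <= 1:
        raise ValueError("custom_reference_strength must be in [0, 1]")
    return {
        "params": {
            "prompt": prompt,
            "custom_reference_id": character_id,
            "custom_reference_strength": strength,
        }
    }
```

The resulting dict can be serialized as the request body of a text-to-image call.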
## Webhook Integration

Receive notifications when jobs complete.

```json
{
  "webhook": {
    "url": "https://your-server.com/webhook",
    "secret": "your-webhook-secret"
  },
  "params": {
    "prompt": "..."
  }
}
```
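The document does not specify how the registered `secret` comes back to the receiver; assuming it is echoed in the callback (an assumption here, not a documented contract), verification can be sketched with a constant-time comparison:

```python
import hmac

def verify_webhook(received_secret: str, configured_secret: str) -> bool:
    """Compare the secret delivered with the webhook call against the one
    registered in the job request, in constant time.

    NOTE: how the secret is delivered (plain header vs. an HMAC signature
    of the body) is an assumption; check the official docs before relying
    on this.
    """
    return hmac.compare_digest(received_secret, configured_secret)
```

Using `hmac.compare_digest` instead of `==` avoids leaking the secret length through timing differences.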
## Job Status Polling

Check generation status and retrieve results.

```bash
curl -X GET 'https://platform.higgsfield.ai/api/generation-results?id=job_789012' \
  --header 'hf-api-key: {api-key}' \
  --header 'hf-secret: {secret}'
```

Response:

```json
{
  "id": "job_789012",
  "status": "completed",
  "results": [{
    "type": "image",
    "url": "https://cdn.higgsfield.ai/generations/img_123.jpg"
  }],
  "retention_expires_at": "2023-12-14T10:30:00Z"
}
```

**Status values**: `pending`, `processing`, `completed`, `failed`

**Note**: Results are retained for 7 days.
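The GET endpoint above lends itself to a poll-until-terminal loop. A minimal sketch, with the `fetch` callable injected so the loop stays transport-agnostic (status names follow the list above):

```python
import time

def poll_job(job_id: str, fetch, timeout: float = 300.0,
             base_delay: float = 2.0, max_delay: float = 30.0) -> dict:
    """Poll until the job reaches a terminal status.

    `fetch(job_id)` must return the parsed JSON of
    GET /api/generation-results?id={job_id}. The delay doubles each
    round, capped at max_delay, until `timeout` seconds have elapsed.
    """
    deadline = time.monotonic() + timeout
    delay = base_delay
    while time.monotonic() < deadline:
        job = fetch(job_id)
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(delay)
        delay = min(delay * 2, max_delay)
    raise TimeoutError(f"job {job_id} still pending after {timeout}s")
```

In production the webhook flow above is cheaper; polling is the fallback when no public callback URL is available.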
## Python SDK

Install:

```bash
pip install higgsfield-client
```

### Synchronous

```python
import higgsfield_client

result = higgsfield_client.subscribe(
    'bytedance/seedream/v4/text-to-image',
    arguments={
        'prompt': 'A serene lake at sunset with mountains',
        'resolution': '2K',
        'aspect_ratio': '16:9'
    }
)

print(result['images'][0]['url'])
```

### Asynchronous

```python
import asyncio
import higgsfield_client

async def main():
    result = await higgsfield_client.subscribe_async(
        'bytedance/seedream/v4/text-to-image',
        arguments={
            'prompt': 'A serene lake at sunset with mountains',
            'resolution': '2K',
            'aspect_ratio': '16:9'
        }
    )
    print(result['images'][0]['url'])

asyncio.run(main())
```
## Error Handling

### Validation Error (422)

```json
{
  "detail": [
    {
      "loc": ["body", "params", "prompt"],
      "msg": "Prompt cannot be empty",
      "type": "value_error"
    }
  ]
}
```

### Authentication Error (401)

Invalid or missing API credentials.

### Rate Limiting

The platform auto-scales, but implement exponential backoff for resilience.
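The exponential-backoff advice above can be sketched as a generic retry wrapper. The set of retryable status codes is our choice (the Higgsfield docs do not enumerate one), and the error's `.status` attribute is an assumption of this sketch:

```python
import random
import time

RETRYABLE = {429, 500, 502, 503, 504}

def with_backoff(call, attempts: int = 5, base: float = 1.0,
                 cap: float = 60.0):
    """Retry `call()` when it raises an error carrying a retryable HTTP
    status code on a `.status` attribute (an assumption for this sketch).

    Sleeps base * 2^n seconds with jitter between attempts; re-raises
    non-retryable errors and the final failure.
    """
    for attempt in range(attempts):
        try:
            return call()
        except Exception as exc:
            status = getattr(exc, "status", None)
            if status not in RETRYABLE or attempt == attempts - 1:
                raise
            delay = min(cap, base * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))
```

The jitter spreads out retries from concurrent clients so they do not hammer the API in lockstep after an outage.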
## Context7 Integration

For up-to-date API documentation:

```
resolve-library-id("higgsfield")
# Returns: /websites/higgsfield_ai

query-docs("/websites/higgsfield_ai", "text-to-image parameters")
query-docs("/websites/higgsfield_ai", "image-to-video models")
query-docs("/websites/higgsfield_ai", "character consistency")
```

## Related

- [Higgsfield Docs](https://docs.higgsfield.ai/)
- [Higgsfield Dashboard](https://cloud.higgsfield.ai)
- `tools/video/remotion.md` - Programmatic video editing
- `tools/browser/stagehand.md` - Browser automation for assets
🧩 Analysis chain
🌐 Web query:
Higgsfield AI official API documentation endpoint reference
💡 Result:
Official Higgsfield AI API docs (endpoint reference): https://docs.higgsfield.ai. [1]
Official Python SDK / repo (includes examples of endpoints and usage): https://github.com/higgsfield-ai/higgsfield-client. [2]
References:
[1] Higgsfield API documentation.
[2] higgsfield-ai / higgsfield-client — GitHub.
🏁 Script executed:

```bash
# Check if higgsfield-client is a real Python package
pip search higgsfield-client 2>/dev/null || echo "pip search unavailable, checking PyPI via curl"
curl -s "https://pypi.org/pypi/higgsfield-client/json" 2>/dev/null | head -20
```

Repository: marcusquinn/aidevops

Length of output: 150
🌐 Web query:
Higgsfield AI platform API v1 text2image soul model documentation
💡 Result:
I found the Higgsfield “Soul” text→image API docs and examples. Key points (sources below):

- Base endpoints:
  - Higgsfield API v1 generate (video/image generation): POST https://higgsfieldapi.com/api/v1/generate — supports automatic mode detection including “Soul” (reference-based) mode. (higgsfieldapi.com)
  - Alternate gateway used by some integrators for text2image Soul: POST https://gateway.pixazo.ai/ai-model-api/v1/generateSoul (headers use Ocp-Apim-Subscription-Key). (pixazo.ai)
- Auth:
  - Bearer API key in Authorization header for higgsfieldapi.com; some gateways use Ocp-Apim-Subscription-Key. (higgsfieldapi.com)
- Important request parameters (common):
  - prompt (string) — required.
  - reference images / image_reference_image_url or reference_image_urls — 1–5 images to enable Soul/reference mode.
  - style_id / soul_style_id (UUID) and width_and_height / width+height enums for output size.
  - Optional: enhance_prompt, seed, webhook_url/webhook_secret for callbacks. (higgsfieldapi.com)
- Example cURL / JSON payloads:
  - Higgsfield Soul (via Pixazo-style gateway) example payload shown with prompt, soul_style_id, width_and_height, image_reference_image_url. (pixazo.ai)
  - Higgsfield v1 generate Soul example (reference_image_urls array) shown in Higgsfield API docs with sample cURL. (higgsfieldapi.com)
- SDK / client:
  - Official Python client repo (higgsfield-client) with quickstart and model names (e.g., bytedance/seedream) for text/image generation. (github.com)
If you want, I can:
- Fetch the full OpenAPI page or a specific endpoint’s complete request/response schema, or
- Prepare ready-to-run cURL, Python, or Node examples for the Soul text2image call using your preferred gateway and auth header.
Sources:
[1] Higgsfield API docs / Higgsfield API site. (higgsfieldapi.com)
[2] Higgsfield docs site (docs.higgsfield.ai). (docs.higgsfield.ai)
[3] Pixazo-hosted Higgsfield Soul API example (generateSoul). (pixazo.ai)
[4] Higgsfield Python client (GitHub). (github.com)
Citations:
- 1: https://higgsfieldapi.com/documentation.php?utm_source=openai
- 2: https://www.pixazo.ai/models/text-to-image/soul-api?utm_source=openai
- 3: https://higgsfieldapi.com/documentation.php?utm_source=openai
- 4: https://higgsfieldapi.com/documentation.php?utm_source=openai
- 5: https://www.pixazo.ai/models/text-to-image/soul-api?utm_source=openai
- 6: https://higgsfieldapi.com/documentation.php?utm_source=openai
- 7: https://github.com/higgsfield-ai/higgsfield-client?utm_source=openai
- 8: https://higgsfieldapi.com/documentation.php?utm_source=openai
- 9: https://docs.higgsfield.ai/v1/custom-references?utm_source=openai
- 10: https://www.pixazo.ai/models/text-to-image/soul-api?utm_source=openai
- 11: https://github.com/higgsfield-ai/higgsfield-client?utm_source=openai
🌐 Web query:
Higgsfield AI image-to-video DOP Kling Seedance models API
💡 Result:
Summary — what’s available and where to call it:

- Higgsfield (official): offers image-to-video (and image→multi-shot) via its app/site with cinematic controls including “Kling” motion control and multiple internal models; site: Higgsfield.ai. [1]
- Higgsfield API (docs): REST API for text→video, image→video and “Soul” (reference-image) modes; base URL /api/v1/ with a /generate (or v1/generate) endpoint, Bearer API key auth, duration/resolution/seed/camera parameters and SDKs. (See their API docs for exact parameter names & examples.) [2]
- Higgsfield DOP / DoP models (third-party integration): public-facing API/docs that expose DoP image→video models and model options like dop-lite / dop-preview / dop-turbo, plus parameters such as prompt, seed, motion_id and motions_strength, webhook callbacks, input_images / input_images_end. Useful if you need DoP-specific controls. [3]
- Seedance (ByteDance): Seedance is ByteDance’s I2V/T2V family (Seedance 1.x → 2.x) that produces multi-shot, cinematic 1080p image→video outputs; available via ByteDance/Seed pages and several model marketplaces/playgrounds (Seedance v1/v1.5/v2 references). Use Seedance for high-quality multi-shot storytelling and stable motion. [4][5]

Quick pointers for integration:

- For Higgsfield API use their documentation and /v1/generate (Bearer token required). Test with image_url + prompt to trigger image→video mode. [2]
- For DoP-specific features (motion presets, motion strength, start/end frame) use the DoP API model fields (model=dop-*, motions_id, motions_strength, input_images_end). [3]
- To call Seedance on marketplaces/IDPs, select by model name (e.g., bytedance/seedance-v1.5-pro or seedance-1-lite) and pass the prompt + seed + first-frame image per that provider’s API. [4][5]
Sources
- Higgsfield product/site. [1]
- Higgsfield API documentation. [2]
- Higgsfield DoP API (model names and params: dop-lite / dop-preview / dop-turbo, motion_id, motions_strength, input_images_end). [3]
- ByteDance Seed/Seedance pages and tech report (Seedance 1.0 / 1.5 / 2 mentions). [4][5]
Do you want:
- sample HTTP request body + curl for Higgsfield image→video (I’ll produce one for v1/generate or the DoP endpoint), or
- a direct link to the specific API docs or SDK code sample?
Update API endpoints, authentication headers, and base URL to match official Higgsfield documentation.

The documented endpoints, authentication method, and base URL do not align with the official Higgsfield API documentation. According to official sources:

- Correct base URL: `https://higgsfieldapi.com` (not `platform.higgsfield.ai`)
- Correct authentication: Bearer token in the `Authorization` header (not `hf-api-key`/`hf-secret` headers)
- Correct endpoints: `/api/v1/generate` structure (not `/v1/text2image/soul`, `/v1/image2video/dop`, etc.)

Additionally, the documented endpoint paths for the DOP, Kling, and Seedance models (`/higgsfield-ai/dop/standard`, `/kling-video/v2.1/pro/image-to-video`, etc.) do not appear in the official Higgsfield API docs. Verify these against the official Higgsfield API documentation and higgsfieldapi.com before finalizing. The Python SDK examples may reference a separate wrapper/marketplace service rather than the direct Higgsfield API; confirm the SDK's actual endpoint mappings.
🧰 Tools
🪛 GitHub Check: Codacy Static Code Analysis
[notice] 106-106: .agent/tools/video/higgsfield.md#L106
Fenced code blocks should have a language specified
🪛 markdownlint-cli2 (0.18.1)
106-106: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
359-359: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
```
1152x2048, 2048x1152, 2048x1536, 1536x2048,
1344x2016, 2016x1344, 960x1696, 1536x1536,
1536x1152, 1696x960, 1152x1536, 1088x1632, 1632x1088
```
Add language identifier to fenced code block.
The dimensions reference block should specify a language identifier to satisfy markdown linting rules.
📝 Proposed fix:

````diff
-```
+```text
 1152x2048, 2048x1152, 2048x1536, 1536x2048,
 1344x2016, 2016x1344, 960x1696, 1536x1536,
 1536x1152, 1696x960, 1152x1536, 1088x1632, 1632x1088
````
🧰 Tools
🪛 GitHub Check: Codacy Static Code Analysis
[notice] 106-106: .agent/tools/video/higgsfield.md#L106
Fenced code blocks should have a language specified
🪛 markdownlint-cli2 (0.18.1)
106-106: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
```
resolve-library-id("higgsfield")
# Returns: /websites/higgsfield_ai

query-docs("/websites/higgsfield_ai", "text-to-image parameters")
query-docs("/websites/higgsfield_ai", "image-to-video models")
query-docs("/websites/higgsfield_ai", "character consistency")
```
Add language identifier to fenced code block.
The Context7 command examples should specify a language identifier (bash or text) to satisfy markdown linting rules.
📝 Proposed fix:

````diff
-```
+```bash
 resolve-library-id("higgsfield")
 # Returns: /websites/higgsfield_ai

 query-docs("/websites/higgsfield_ai", "text-to-image parameters")
 query-docs("/websites/higgsfield_ai", "image-to-video models")
 query-docs("/websites/higgsfield_ai", "character consistency")
````
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
359-359: Fenced code blocks should have a language specified
(MD040, fenced-code-language)



Summary

Changes:

- New file `.agent/tools/video/higgsfield.md`: complete API documentation covering text-to-image, image-to-video, character consistency, webhooks, and job polling.
- Updated `.agent/tools/context/context7.md`: added Higgsfield to the "Generative Media" section (`/websites/higgsfield_ai`, Higgsfield AI, 100+ image/video/audio models).
- Updated `.agent/AGENTS.md`: `tools/video/` description now includes AI generation.

Context7 integration: the subagent includes `context7_id: /websites/higgsfield_ai` for real-time documentation access.