
docs: Add API Reference Page to docs #6351

@ramonpzg

Description

This is the parent issue that covers the addition of the API Reference for llama-server and the Jan API Server on top of it.

Requirements

  • Use Scalar API docs
  • Add an API reference section
  • Create a CI workflow that triggers the generation of the API spec

Part 1

The first part of the task is to add the API reference page to the docs using Scalar. This can be done after generating the openapi.json file, which is already available in the ./website/public/openapi/ directory of the new docs.
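As an illustrative sketch, Scalar can render that spec with its standalone CDN embed, pointing data-url at the path above. This is a hypothetical minimal page; the actual docs integration uses a React component, so treat this only as a demonstration of the idea:

```html
<!-- Hypothetical minimal page: Scalar's CDN embed pointed at the
     generated spec. The real docs site uses a React component instead. -->
<!doctype html>
<html>
  <body>
    <script id="api-reference" data-url="/openapi/openapi.json"></script>
    <script src="https://cdn.jsdelivr.net/npm/@scalar/api-reference"></script>
  </body>
</html>
```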

Part 2 - CI Workflow

Automated Processes

  1. OpenAPI Specification Generation
  • Trigger: Push to main/dev, PRs, manual workflow dispatch
  • Process:
    • Installs llama-cpp-python dependencies
    • Generates OpenAPI spec from FastAPI server
    • Validates JSON structure
    • Auto-commits changes back to repository
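The "validates JSON structure" step above could be sketched as follows. This is a minimal illustration, not the actual script from this repo; the required-keys set is an assumption based on what an OpenAPI 3.x document needs at the top level:

```python
# Hedged sketch of the spec-validation step; names are illustrative.
import json

# Top-level keys an OpenAPI 3.x document must carry for Scalar to render it.
REQUIRED_TOP_LEVEL_KEYS = {"openapi", "info", "paths"}

def validate_openapi_spec(raw: str) -> dict:
    """Parse the generated spec and check the keys the docs page needs."""
    spec = json.loads(raw)  # raises json.JSONDecodeError on malformed JSON
    missing = REQUIRED_TOP_LEVEL_KEYS - spec.keys()
    if missing:
        raise ValueError(f"spec is missing top-level keys: {sorted(missing)}")
    return spec

if __name__ == "__main__":
    sample = '{"openapi": "3.1.0", "info": {"title": "llama-server"}, "paths": {}}'
    print(validate_openapi_spec(sample)["openapi"])  # → 3.1.0
```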
  2. Build Integration
  • Prebuild Step: Automatically copies openapi.json to public/openapi/openapi.json
  • Component Loading: React component automatically loads from /openapi/openapi.json
  • No Manual Steps: Zero manual file management required
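The prebuild copy step could look something like this. A minimal sketch, assuming the source and public paths described above; the real step lives in the site's build tooling:

```python
# Hedged sketch of the prebuild copy: mirror the generated spec into the
# directory the docs site serves statically (paths are illustrative).
import shutil
from pathlib import Path

def copy_spec(source: Path, public_dir: Path) -> Path:
    """Copy openapi.json into public/openapi/ so the React component
    can fetch it from /openapi/openapi.json at runtime."""
    dest = public_dir / "openapi" / "openapi.json"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(source, dest)
    return dest
```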
  3. CI/CD Pipeline (.github/workflows/)
  • PR Validation: Tests spec generation and build success
  • Auto-commits: Updates spec when API changes detected
  • Auto-deployment: Deploys docs site on main branch merges
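A workflow along these lines could cover the triggers and auto-commit behavior described above. This is a hypothetical sketch: the file name, script paths, and requirements file are assumptions, not the actual workflow in this repository:

```yaml
# Hypothetical sketch of .github/workflows/openapi-spec.yml.
name: Generate OpenAPI spec

on:
  push:
    branches: [main, dev]
  pull_request:
  workflow_dispatch:

jobs:
  generate-spec:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Generate and validate spec
        run: |
          pip install -r requirements.txt   # llama-cpp-python deps (assumed path)
          python scripts/generate-openapi-spec.py
          python -m json.tool website/public/openapi/openapi.json > /dev/null
      - name: Auto-commit spec changes
        run: |
          git config user.name "github-actions"
          git config user.email "github-actions@users.noreply.github.com"
          git add website/public/openapi/openapi.json
          if ! git diff --cached --quiet; then
            git commit -m "chore: update OpenAPI spec"
            git push
          fi
```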

Automation Scripts

| Script | Purpose | Automation Level |
|---|---|---|
| update-api-docs.sh | Main orchestration script | Fully Automated |
| generate-openapi-spec.py | Core spec generation from Python API | Fully Automated |
| fetch-openapi-spec.sh | Fallback server extraction | Fallback/Manual |
| test-openapi-integration.sh | Validates generated documentation | Automated Testing |

Developer Workflow Automation

```mermaid
graph TD
    A[Code Change] --> B[Push/PR]
    B --> C[CI Triggers]
    C --> D[Generate Spec]
    D --> E[Validate JSON]
    E --> F[Build Docs]
    F --> G[Auto Deploy]

    H[Local Development] --> I[bun run update-api-docs]
    I --> J[Test Locally]
    J --> K[Commit Changes]
```

A couple of things to check with Akarshan and Louis.

  • How often do the llama-server specs change? Perhaps the workflow does not need to run on every PR to dev, only when llama-server changes in llama.cpp.
  • Should we remove specific options or leave everything in? Cortex emphasized the loading/offloading of models since there was no other way around it, but in Jan the user does that via the UI.

Metadata

  • Status: In Progress
  • Assignees: none
  • Labels: none
  • Milestone: none