62 changes: 62 additions & 0 deletions .agents/skills/expect/SKILL.md
@@ -0,0 +1,62 @@
---
name: expect
description: Run adversarial browser tests against code changes. Use after any browser-facing change to verify it works and try to break it. Prefer this over raw browser tools (Playwright MCP, chrome tools).
license: MIT
metadata:
author: millionco
version: "2.0.0"
---

# Expect

Adversarial browser testing for code changes. Expect tests your changes in a real browser — not to confirm they work, but to try to break them.

**Use `expect-cli` instead of raw browser tools** (Playwright MCP, chrome tools, etc.) for verifying changes. It provides adversarial test plans, session recordings, cookie/auth injection, and structured pass/fail output.

## Setup Check

Before running any commands, verify `expect-cli` is installed:

```bash
expect-cli --version
```

If the command is not found, install it globally:

```bash
npm install -g expect-cli
```

Then confirm installation succeeded by re-running `expect-cli --version`. Do not proceed until the command resolves.
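
The check-then-install flow above can be collapsed into one idempotent guard. This is a sketch, not part of `expect-cli` itself; it assumes `npm` is on `PATH`, and the helper name `ensure_cli` is hypothetical:

```bash
# Install a CLI only if it is missing, then print its version.
# Generic helper: pass the command name and the npm package that provides it.
ensure_cli() {
  local cmd="$1" pkg="$2"
  if ! command -v "$cmd" >/dev/null 2>&1; then
    npm install -g "$pkg"
  fi
  "$cmd" --version
}

# ensure_cli expect-cli expect-cli
```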

## The Command

```bash
expect-cli -m "INSTRUCTION" -y
```

Always pass `-y` to skip interactive review. Always set `EXPECT_BASE_URL` or `--base-url` if the app isn't on `localhost:3000`. Run `expect-cli --help` for all flags.

## Writing Instructions

Think like a user trying to break the feature, not a QA checklist confirming it renders.

**Bad:** `expect-cli -m "Check that the login form renders" -y`

**Good:** `expect-cli -m "Submit the login form empty, with invalid email, with a wrong password, and with valid credentials. Verify error messages for bad inputs and redirect on success. Check console errors after each." -y`

Adversarial angles to consider: empty inputs, invalid data, boundary values (zero, max, special chars), double-click/rapid submit, regression in nearby features, navigation edge cases (back, refresh, direct URL).
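
The angles above can be scripted as a sweep of separate runs. The helper below is a sketch (not a built-in of `expect-cli`), and the instruction strings passed to it are purely illustrative:

```bash
# Run several adversarial instructions in sequence and report any failures.
# Hypothetical helper; each argument is one instruction for expect-cli.
run_adversarial_sweep() {
  local angle failed=0
  for angle in "$@"; do
    if ! expect-cli -m "$angle" -y; then
      echo "FAILED: $angle"
      failed=1
    fi
  done
  return "$failed"
}

# run_adversarial_sweep \
#   "Submit the form empty and verify validation errors" \
#   "Double-click submit and verify no duplicate requests"
```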

## When to Run

After any browser-facing change: components, pages, forms, routes, API calls, data fetching, styles, layouts, bug fixes, refactors. When in doubt, run it.

## Example

```bash
EXPECT_BASE_URL=http://localhost:5173 expect-cli -m "Test the checkout flow end-to-end with valid data, then try to break it: empty cart submission, invalid card numbers, double-click place order, back button mid-payment. Verify error states and console errors." -y
```

## After Failures

Read the failure output — it names the exact step and what broke. Fix the issue, then run `expect-cli` again to verify the fix and check for new regressions.
1 change: 1 addition & 0 deletions .claude/skills/expect
113 changes: 110 additions & 3 deletions .github/workflows/scripts/run-migration-tests.sh
@@ -339,6 +339,22 @@ run_postgres_sql() {
-c "$sql" 2>/dev/null
}

run_postgres_scalar() {
local sql="$1"

local container
container=$(get_postgres_container)

if [ -z "$container" ]; then
log_error "PostgreSQL container not found"
return 1
fi

docker exec "$container" \
psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -t -A \
-c "$sql" 2>/dev/null | tr -d '[:space:]'
}

run_postgres_sql_file() {
local sql_file="$1"

@@ -2605,6 +2621,7 @@ compare_postgres_snapshots() {
# - network_config_json, concurrency_buffer_json, proxy_config_json, custom_provider_config_json:
# JSON fields that get normalized with default values during migration
# - budget_id, rate_limit_id: governance fields that may be reset or initialized during migrations
# - virtual_key_id, provider_config_id: new FK columns on governance_budgets (added by multi-budget migration)
# - status, description: key validation runs after migration, updating these fields
# for invalid/test keys (e.g., status becomes "list_models_failed")
local ignore_columns="updated_at config_hash created_at models_json weight allowed_models network_config_json concurrency_buffer_json proxy_config_json custom_provider_config_json budget_id rate_limit_id status description"
@@ -2721,7 +2738,12 @@ compare_postgres_snapshots() {
local col_idx=1
for col in "${before_col_array[@]}"; do
# Skip columns that are expected to change
if [[ " $ignore_columns " == *" $col "* ]]; then
# virtual_key_id, provider_config_id: only ignore on governance_budgets (new FK columns from multi-budget migration)
local table_ignore_columns="$ignore_columns"
if [ "$table" = "governance_budgets" ]; then
table_ignore_columns="$table_ignore_columns virtual_key_id provider_config_id"
fi
if [[ " $table_ignore_columns " == *" $col "* ]]; then
col_idx=$((col_idx + 1))
continue
fi
@@ -2812,10 +2834,88 @@ compare_postgres_snapshots() {
# Validation Functions (simplified, uses snapshots)
# ============================================================================

# verify_budget_migration checks that the multi-budget FK migration correctly
# moved budget ownership from VK/ProviderConfig budget_id columns to
# governance_budgets.virtual_key_id / governance_budgets.provider_config_id
verify_budget_migration_postgres() {
log_info "Verifying budget migration (budget_id → virtual_key_id/provider_config_id)..."
local failed=0

# Check: budget-migration-test-1 was linked to vk-migration-test-1 via budget_id
# After migration, governance_budgets.virtual_key_id should be set
local vk_budget_count
vk_budget_count=$(run_postgres_scalar "SELECT COUNT(*) FROM governance_budgets WHERE id = 'budget-migration-test-1' AND virtual_key_id = 'vk-migration-test-1'")
if [ "$vk_budget_count" = "1" ]; then
log_info " VK budget migration: budget-migration-test-1 → vk-migration-test-1 ✓"
else
log_warn " VK budget migration: budget-migration-test-1 virtual_key_id not set (count=$vk_budget_count) — may be expected if old version didn't have budget_id on VK"
fi

# Check: budget-migration-test-2 was linked to provider config via budget_id
# After migration, governance_budgets.provider_config_id should be set
local pc_budget_count
pc_budget_count=$(run_postgres_scalar "SELECT COUNT(*) FROM governance_budgets WHERE id = 'budget-migration-test-2' AND provider_config_id IS NOT NULL")
if [ "$pc_budget_count" = "1" ]; then
log_info " PC budget migration: budget-migration-test-2 → provider_config ✓"
else
log_warn " PC budget migration: budget-migration-test-2 provider_config_id not set (count=$pc_budget_count) — may be expected if old version didn't have budget_id on PC"
fi

# Check: virtual_key_id and provider_config_id columns exist on governance_budgets
local has_vk_col
has_vk_col=$(run_postgres_scalar "SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'governance_budgets' AND column_name = 'virtual_key_id'")
if [ "$has_vk_col" = "1" ]; then
log_info " Column governance_budgets.virtual_key_id exists ✓"
else
log_error " Column governance_budgets.virtual_key_id MISSING!"
failed=1
fi

local has_pc_col
has_pc_col=$(run_postgres_scalar "SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'governance_budgets' AND column_name = 'provider_config_id'")
if [ "$has_pc_col" = "1" ]; then
log_info " Column governance_budgets.provider_config_id exists ✓"
else
log_error " Column governance_budgets.provider_config_id MISSING!"
failed=1
fi

# Check: budget_id column should be dropped from governance_virtual_keys
local vk_has_budget_id
vk_has_budget_id=$(run_postgres_scalar "SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'governance_virtual_keys' AND column_name = 'budget_id'")
if [ "$vk_has_budget_id" = "0" ]; then
log_info " Column governance_virtual_keys.budget_id dropped ✓"
else
log_error " Column governance_virtual_keys.budget_id still exists!"
failed=1
fi

# Check: budget_id column should be dropped from governance_virtual_key_provider_configs
local pc_has_budget_id
pc_has_budget_id=$(run_postgres_scalar "SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'governance_virtual_key_provider_configs' AND column_name = 'budget_id'")
if [ "$pc_has_budget_id" = "0" ]; then
log_info " Column governance_virtual_key_provider_configs.budget_id dropped ✓"
else
log_error " Column governance_virtual_key_provider_configs.budget_id still exists!"
failed=1
fi

# Check: junction tables should not exist
local junction_vk
junction_vk=$(run_postgres_scalar "SELECT COUNT(*) FROM information_schema.tables WHERE table_name = 'governance_virtual_key_budgets'")
if [ "$junction_vk" = "0" ]; then
log_info " Junction table governance_virtual_key_budgets dropped ✓"
else
log_warn " Junction table governance_virtual_key_budgets still exists (may not have existed in old version)"
fi

return $failed
}

validate_postgres_data() {
local before_snapshot="$1"
local after_snapshot="$2"

compare_postgres_snapshots "$before_snapshot" "$after_snapshot"
}

@@ -3060,7 +3160,14 @@ EOF
stop_bifrost
return 1
fi


# STEP 6: Verify budget migration (budget_id → virtual_key_id/provider_config_id)
if ! verify_budget_migration_postgres; then
log_error "Budget migration verification failed after migration from $version"
stop_bifrost
return 1
fi

stop_bifrost
log_info "Migration from $version: SUCCESS"
done
106 changes: 28 additions & 78 deletions docs/enterprise/setting-up-okta.mdx
@@ -13,7 +13,7 @@ This guide walks you through configuring Okta as your identity provider for Bifrost
- An Okta organization with admin access
- Bifrost Enterprise deployed and accessible
- The redirect URI for your Bifrost instance (e.g., `https://your-bifrost-domain.com/login`)

- Ensure you have created all the [roles in Bifrost](/enterprise/rbac) that you plan to map to from Okta.
---

## Step 1: Create an OIDC Application
@@ -71,39 +71,12 @@ Configure the following settings for your application:

---

## Step 3: Configure Authorization Server (optional)
## Step 3: Create Custom Role Attribute

<Note>
The default authorization server (`/oauth2/default`) is available to all Okta plans and **supports custom claims**, including role claims. The API Access Management paid add-on is only required to create additional custom authorization servers beyond the default.
You can use roles, groups, or both to assign roles to users. See the [RBAC](/enterprise/rbac) docs for details. Roles take precedence over groups in role assignment.
</Note>
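
Conceptually, the precedence works like this. The sketch below is illustrative only, not Bifrost's actual implementation; the group names and the fallback default are hypothetical:

```bash
# Resolve a role: an explicit role claim wins; otherwise fall back to
# group-to-role mappings. The "Viewer" default here is illustrative.
resolve_role() {
  local role_claim="$1" groups="$2"
  if [ -n "$role_claim" ]; then
    echo "$role_claim"
    return
  fi
  case " $groups " in
    *" bifrost-admins "*)     echo "Admin" ;;
    *" bifrost-developers "*) echo "Developer" ;;
    *)                        echo "Viewer" ;;
  esac
}

# resolve_role "" "bifrost-developers"   # falls back to the group mapping
```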

Bifrost uses Okta's Authorization Server to issue tokens. You have three options:

1. **Use `/oauth2/default` with role claims (recommended)** — Complete Steps 4-7 to configure custom role claims on the default authorization server. This enables automatic RBAC synchronization.

2. **Use `/oauth2/default` without role claims** — Skip Steps 4-7. The first user to sign in automatically receives the Admin role and can manage RBAC for all subsequent users through the Bifrost dashboard.

3. **Skip Step 3 entirely** — Authorization is not configured through Okta. You'll need an alternative authentication mechanism.

### Configuring the Authorization Server

1. Navigate to **Security** → **API**
2. Click on **Authorization Servers**

<Frame>
<img src="/media/user-provisioning/okta-authorization-server.png" alt="Okta Authorization Servers" />
</Frame>

3. Note the **Issuer URI** for your authorization server (e.g., `https://your-domain.okta.com/oauth2/default`)

<Note>
The Issuer URI is used as the `issuerUrl` in your Bifrost configuration. Make sure to use the full URL including `/oauth2/default` (or your custom authorization server path).
</Note>

---

## Step 4: Create Custom Role Attribute

To map Okta users to Bifrost roles (Admin, Developer, Viewer), you need to create a custom attribute.

1. Navigate to **Directory** → **Profile Editor**
@@ -133,7 +106,7 @@ To map Okta users to Bifrost roles (Admin, Developer, Viewer), you need to creat

---

## Step 5: Add Role Claim to Tokens
## Step 4: Add Role Claim to Tokens

Configure the authorization server to include the role in the access token.

@@ -164,11 +137,11 @@ If you named your custom attribute differently, update the Value expression accordingly.

---

## Step 6: Configure Groups for Team and Role Synchronization
## Step 5: Configure Groups

Bifrost can automatically sync Okta groups for two purposes:
- **Team synchronization** — Groups are synced as Bifrost teams
- **Role mapping** — Groups can be mapped to Bifrost roles (Admin, Developer, Viewer) using Group-to-Role Mappings in the Bifrost UI
- **Role mapping** — Groups can be mapped to Bifrost roles (Admin, Developer, Viewer) using Group-to-Role Mappings in the Bifrost UI.

### Create Groups in Okta

@@ -191,31 +164,6 @@ Use a consistent naming convention for your groups. This makes it easier to conf

### Add Groups Claim to Tokens

You have two options for configuring the groups claim. Choose the one that best fits your Okta plan and requirements.

#### Option A: Using App-Level Groups Claim (All Okta Plans)

This approach configures the groups claim directly in your application's settings and works with all Okta plans, including free tiers.

1. Navigate to your application's **Sign On** tab
2. Scroll down to the **OpenID Connect ID Token** section
3. Click **Edit** to modify the settings
4. Configure the **Groups claim filter**:
- **Groups claim type**: Filter
- **Groups claim filter**: Set a claim name (e.g., `groups`) and filter condition (e.g., "Starts with" `bifrost-staging`)

<Frame>
<img src="/media/user-provisioning/okta-app-group-claim-setup.png" alt="Application Groups claim configuration" />
</Frame>

5. Click **Save**

<Note>
The filter ensures only relevant groups are included in the token. Adjust the filter condition based on your group naming convention.
</Note>

#### Option B: Using Authorization Server Groups Claim

This approach adds the groups claim through your authorization server, providing more flexibility for complex configurations.

1. Navigate to **Security** → **API** → **Authorization Servers**
@@ -235,25 +183,9 @@ Configure the groups claim:

5. Click **Create**

You can also configure an additional groups claim in the application's Sign On settings:

1. Navigate to your application's **Sign On** tab

<Frame>
<img src="/media/user-provisioning/okta-group-configuration.png" alt="Application Sign On configuration" />
</Frame>

2. Under **OpenID Connect ID Token**, configure:
- **Groups claim type**: Expression
- **Groups claim expression**: `Arrays.flatten(Groups.startsWith("OKTA", "bifrost", 100))`

<Note>
Adjust the group filter expression based on your naming convention. The example above includes groups starting with "bifrost".
</Note>

---

## Step 7: Assign Users to the Application
## Step 6: Assign Users to the Application

1. Navigate to your application's **Assignments** tab

@@ -263,7 +195,9 @@ Adjust the group filter expression based on your naming convention. The example

2. Click **Assign** → **Assign to People** or **Assign to Groups**

3. For each user, set their **bifrostRole**:
### Assigning Roles

For each user, set their **bifrostRole** (if you plan to use role-level mapping):

<Frame>
<img src="/media/user-provisioning/okta-assign-custom-role.png" alt="Assign custom role to user" />
@@ -277,6 +211,22 @@ Role claims are available only when you configure custom claims on your authorization server.

---

## Step 7: Create an API Token for Bulk User and Team Sync

To create an API token, navigate to **Security** → **API** → **Tokens**.

<Frame>
<img src="/media/user-provisioning/okta-tokens-screen.png" alt="Okta API tokens screen" />
</Frame>

1. Click **Create token**

<Frame>
<img src="/media/user-provisioning/okta-create-token-form.png" alt="Create token dialog in Okta" />
</Frame>

2. Copy the token; you will use it in the next step.
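
Before configuring Bifrost, you can sanity-check the token against Okta's API directly. `SSWS` is Okta's API-token authorization scheme; the `OKTA_DOMAIN` and `OKTA_API_TOKEN` variables and the `okta_get` helper below are placeholders for illustration:

```bash
# Minimal probe: call an Okta API endpoint with the new token.
# Set OKTA_DOMAIN (e.g. your-domain.okta.com) and OKTA_API_TOKEN first.
okta_get() {
  local path="$1"
  curl -s -H "Authorization: SSWS ${OKTA_API_TOKEN}" \
    "https://${OKTA_DOMAIN}/api/v1${path}"
}

# okta_get "/users?limit=1"   # should return a JSON array, not a 401 error
```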

## Step 8: Configure Bifrost

Now configure Bifrost to use Okta as the identity provider.
@@ -297,9 +247,9 @@ Now configure Bifrost to use Okta as the identity provider.
4. Toggle **Enabled** to activate the provider
5. Click **Save Configuration**

### Group-to-Role Mappings (Optional)
### Group-to-Role Mappings

If you configured groups in Okta (Step 6), you can map Okta group names directly to Bifrost roles. This is an alternative to using custom role claims (Steps 4-5) and works with all Okta plans.
If you configured groups in Okta (Step 5), you can map Okta group names directly to Bifrost roles. This is an alternative to using custom role claims (Steps 3-4) and works with all Okta plans.

1. In the User Provisioning configuration, scroll down to **Group-to-Role Mappings**
2. Click **Add Mapping**
Binary file removed docs/media/user-provisioning/zitadel-add-role.png