diff --git a/.claude/commands/docs-audit.md b/.claude/commands/docs-audit.md new file mode 100644 index 000000000..82a7bb003 --- /dev/null +++ b/.claude/commands/docs-audit.md @@ -0,0 +1,33 @@ +Audit the project documentation for correctness and quality issues. + +## What this does + +Auto-detects the project language, source directories, documentation layout, and security guidelines. Then scans documentation for deprecated code references, wrong API signatures, broken links, security anti-patterns, and quality issues. Works with any language (C#, Java, TypeScript, Python, Go, Rust). No configuration required. + +## Instructions + +Read `.claude/skills/DocsAudit/SKILL.md` and follow the workflow routing table to determine which workflow to execute. + +**Default behavior (no arguments):** Run the full Audit workflow — auto-detect project structure, then scan for deprecated code references (T1-T6 findings). + +**With arguments:** +- `/docs-audit review` — Run the Review workflow (Q1-Q8 quality findings) +- `/docs-audit report` — Run both Audit + Review, then generate a combined report +- `/docs-audit review piv` — Review only a specific subdirectory +- `/docs-audit review security` — Focus only on security anti-pattern checks (Q8) + +## Workflow files + +- Audit (correctness): `.claude/skills/DocsAudit/Workflows/Audit.md` +- Review (quality): `.claude/skills/DocsAudit/Workflows/Review.md` +- Report (combined): `.claude/skills/DocsAudit/Workflows/Report.md` + +## Reference files (load on demand) + +- Error taxonomy (T1-T6, Q1-Q8): `.claude/skills/DocsAudit/ErrorTaxonomy.md` +- Security patterns (SP1-SP6, multi-language): `.claude/skills/DocsAudit/SecurityPatterns.md` +- Agent design, language profiles, model selection: `.claude/skills/DocsAudit/AgentDesign.md` + +## Optional configuration + +If a `.docsaudit.yaml` exists in the repo root, it will be used instead of auto-detection. The skill will offer to create this file after the first run. 
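If the skill does offer to save a config, it might look something like this. This is a hypothetical sketch, not the skill's actual schema; the field names simply mirror the auto-detected configuration described in `FindingsSchema.md`:

```yaml
# Hypothetical .docsaudit.yaml; field names mirror the discovery output
language: csharp
source_dirs:
  - Yubico.YubiKey/src/
  - Yubico.Core/src/
docs_dir: docs/
exclude_docs:
  - whats-new.md
exclude_source:
  - "*Tests*"
  - "*examples*"
doc_link_format: docfx-xref
security_guidelines: null
```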
diff --git a/.claude/skills/DocsAudit/AgentDesign.md b/.claude/skills/DocsAudit/AgentDesign.md new file mode 100644 index 000000000..b73a162dd --- /dev/null +++ b/.claude/skills/DocsAudit/AgentDesign.md @@ -0,0 +1,323 @@ +--- +name: AgentDesign +description: Agent types, mindsets, model selection, language profiles, and orchestration patterns for DocsAudit skill workflows. +type: reference +--- + +# Agent Design + +DocsAudit uses specialized agents for different phases. Each agent has a defined mindset, scope, and recommended Claude model tier. The system auto-detects project characteristics — no configuration required. + +--- + +## Model Selection Strategy + +| Tier | Model | Use For | Cost/Speed | +|------|-------|---------|------------| +| **Haiku** | `haiku` | Discovery, bulk scanning, pattern matching, grep-heavy work | Fastest, cheapest | +| **Sonnet** | `sonnet` | Cross-referencing, signature comparison, prose review | Balanced | +| **Opus** | `opus` | Judgment calls, security review, final synthesis | Slowest, highest quality | + +**Principle:** Use the cheapest model that can reliably perform the task. Escalate only when judgment or nuance is required. + +--- + +## Language Profiles + +Built-in profiles for auto-detection. The DiscoveryAgent selects the correct profile based on source file counts. 
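As an illustration of how a profile's `deprecation_pattern` and `message_format` fields might be consumed, here is a minimal sketch for the C# profile below. The regex and the "Use X instead" extraction heuristic are assumptions for illustration, not the skill's actual implementation:

```python
import re

# Hypothetical extractor for the C# profile: find [Obsolete("message")]
# attributes and pull out the message plus any "Use X instead" hint.
OBSOLETE_RE = re.compile(r'\[Obsolete\(\s*"([^"]*)"')
HINT_RE = re.compile(r"Use (\w+) instead")

def extract_deprecations(source: str) -> list[dict]:
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        m = OBSOLETE_RE.search(line)
        if m:
            message = m.group(1)
            hint = HINT_RE.search(message)
            findings.append({
                "line": lineno,
                "message": message,
                "replacement_hint": hint.group(1) if hint else None,
            })
    return findings
```

A Java or Go profile would swap in its own `deprecation_pattern` and message-extraction rule; the surrounding loop stays the same.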
+ +### C# (.cs) +``` +extensions: .cs +deprecation_pattern: \[Obsolete\( +message_format: [Obsolete("message")] — extract quoted string +replacement_hint: Look for "Use X instead" in message +categories: class, interface, method-overload, property, constructor, command +code_fence: csharp, cs +doc_link_formats: xref:Namespace.Type.Member (DocFX) +``` + +### Java (.java) +``` +extensions: .java +deprecation_pattern: @Deprecated +message_format: @deprecated tag in Javadoc comment above declaration +supplemental: @Deprecated(since = "version", forRemoval = true) +replacement_hint: Look for @see or "Use X instead" in Javadoc +categories: class, interface, method, field, constructor +code_fence: java +doc_link_formats: {@link ClassName#method} (Javadoc) +``` + +### TypeScript / JavaScript (.ts, .tsx, .js, .jsx) +``` +extensions: .ts, .tsx, .js, .jsx +deprecation_pattern: @deprecated (JSDoc/TSDoc tag) +message_format: /** @deprecated Use X instead */ +replacement_hint: Text following @deprecated tag +categories: class, function, method, property, type, interface +code_fence: typescript, ts, javascript, js +doc_link_formats: {@link ClassName} (TSDoc), [text](url) (markdown) +``` + +### Python (.py) +``` +extensions: .py +deprecation_pattern: warnings.warn(*, DeprecationWarning) OR @deprecated decorator +message_format: First argument to warnings.warn() or decorator message +replacement_hint: Look for "Use X instead" in warning message +categories: class, function, method, property, module +code_fence: python, py +doc_link_formats: :class:`Name`, :func:`Name`, :meth:`Name` (Sphinx), [text](url) +``` + +### Go (.go) +``` +extensions: .go +deprecation_pattern: // Deprecated: (godoc convention) +message_format: Text following "// Deprecated:" comment +replacement_hint: Look for "Use X instead" in comment +categories: function, type, method, variable, constant +code_fence: go, golang +doc_link_formats: [Name] (godoc linking) +``` + +### Rust (.rs) +``` +extensions: .rs 
+deprecation_pattern: #[deprecated(
+message_format: #[deprecated(since = "version", note = "message")]
+replacement_hint: Text in note field
+categories: struct, enum, trait, function, method, type, module
+code_fence: rust, rs
+doc_link_formats: [`Name`](path) (rustdoc intra-doc links)
+```
+
+### Multi-Language Projects
+When multiple languages are detected, the skill:
+1. Uses the dominant language (most source files) as primary
+2. Runs deprecation scans for all detected languages
+3. Matches code blocks to the correct language profile by fence tag
+4. Reports findings grouped by language
+
+---
+
+## Agent Types
+
+### 0. DiscoveryAgent (NEW — runs first)
+**Purpose:** Auto-detect project structure, language, and documentation layout.
+**Model:** Haiku
+**Mindset:** Investigator. Fast, thorough, no assumptions.
+**Input:** Repository root
+**Output:** Project configuration:
+```
+{
+  language: "csharp",
+  source_dirs: ["Yubico.YubiKey/src/", "Yubico.Core/src/"],
+  docs_dir: "docs/",
+  exclude_docs: ["whats-new.md"],
+  exclude_source: ["*Tests*", "*examples*"],
+  deprecation_profile: <selected language profile>,
+  doc_link_format: "docfx-xref",
+  security_guidelines: "docs/.../sensitive-data.md" | null,
+  code_fence_languages: ["csharp"]
+}
+```
+**Method:**
+1. **Language detection:**
+   - Glob for source files by extension: `**/*.cs`, `**/*.java`, `**/*.ts`, `**/*.py`, `**/*.go`, `**/*.rs`
+   - Count files per extension (exclude `node_modules/`, `vendor/`, `bin/`, `obj/`, `.git/`)
+   - Select dominant language; note secondaries if >10% of total
+2. **Directory detection:**
+   - Docs: Try `docs/`, `doc/`, `documentation/`, `manual/`, `guide/` — first match wins
+   - If none: find directories with >5 `.md` files clustered together
+   - Source: Try `src/`, `lib/`, `source/`, or language-specific patterns (`**/*.csproj` parent dirs)
+   - Respect `.gitignore`
+3. **Changelog detection:**
+   - Find files matching: `*changelog*`, `*whats-new*`, `*release-notes*`, `*history*` (case-insensitive)
+   - Add to exclude list (historical records, not instructional)
+4. **Doc link format detection:**
+   - Grep docs for `xref:` → DocFX
+   - Grep for `{@link` → Javadoc/TSDoc
+   - Grep for `:class:` or `:func:` → Sphinx
+   - Grep for intra-doc ``[`Name`]`` → Rustdoc
+   - Multiple formats possible in one project
+5. **Security guidelines discovery:**
+   - Search docs for files matching: `*secur*`, `*sensitive*`, `*credential*`, `*secret*`, `*handling*data*`
+   - Read candidate files, check if they contain security practices/guidelines
+   - If found → use as Q8 baseline
+   - If not found → skip Q8, note in report
+6. **Config file check:**
+   - Look for `.docsaudit.yaml` in repo root
+   - If found → load and use (skip auto-detection)
+   - If not found → proceed with auto-detection (suggest saving after run)
+
+### 1. DeprecationScanner (formerly ObsoleteScanner)
+**Purpose:** Build the deprecation map — all deprecated items in source code.
+**Model:** Haiku
+**Mindset:** Mechanical collector. No judgment, just extraction.
+**Input:** Source directories + language profile from DiscoveryAgent
+**Output:** Structured list of deprecated items:
+```
+{type, name, file, line, deprecationMessage, replacementHint, language}
+```
+**Method:**
+1. Grep for the language profile's `deprecation_pattern` across source files
+2. For each match, extract: identifier name, deprecation message, replacement hint
+3. Categorize using the language profile's category list
+4. Deduplicate and sort by namespace/module
+
+### 2. DocReferenceScanner
+**Purpose:** Find all references to code entities in documentation.
+**Model:** Haiku
+**Mindset:** Pattern matcher. Extracts code references from markdown.
+**Input:** Docs directory + code fence languages from DiscoveryAgent +**Output:** Structured list of doc references: +``` +{docFile, line, referenceType (codeBlock|prose|xref), entityName, language, context} +``` +**Method:** +1. Parse markdown files for fenced code blocks matching detected languages +2. Extract class/function names, method calls, property accesses from code blocks +3. Extract type names from prose (backtick-wrapped identifiers) +4. Extract doc links using the detected doc link format +5. Tag each reference with its surrounding context + +### 3. CrossReferencer +**Purpose:** Match doc references against the deprecation map to produce T1-T6 findings. +**Model:** Sonnet +**Mindset:** Analytical comparator. Matches two datasets and classifies discrepancies. +**Input:** DeprecationScanner output + DocReferenceScanner output +**Output:** T1-T6 findings in standard format (see ErrorTaxonomy.md) +**Method:** +1. For each doc reference, check if the entity appears in the deprecation map +2. Classify the finding type (T1-T6) based on reference type and deprecated item category +3. Look up the replacement from the deprecation message +4. Generate suggested fix using the replacement type/method +5. Verify the replacement exists in source (grep for it) + +### 4. SignatureVerifier +**Purpose:** Check that code examples use correct API signatures (beyond deprecation checks). +**Model:** Sonnet +**Mindset:** Compiler proxy. Validates that code examples would compile/run. +**Input:** Code blocks from docs + source API signatures +**Output:** Q1 findings (non-compiling examples) +**Method:** +1. For each code block, extract method/function calls with their argument types +2. Look up the actual signature in source +3. Check parameter count, types, and return type alignment +4. Flag mismatches as Q1 + +### 5. ProseReviewer +**Purpose:** Review documentation quality from three audience perspectives. 
+**Model:** Opus
+**Mindset:** Three-lens reviewer (see Audiences below). Contextual judgment required.
+**Input:** Documentation files + related source code
+**Output:** Q2-Q7 findings
+**Method:**
+1. Read each doc through Library Developer lens → Q2, Q5 findings (Q1 belongs to the SignatureVerifier)
+2. Read each doc through Library User lens → Q3, Q4, Q6 findings
+3. Read each doc through Technical Writer lens → Q6, Q7 findings
+4. Deduplicate across lenses
+
+### 6. SecurityReviewer
+**Purpose:** Check code examples against project security guidelines.
+**Model:** Opus
+**Mindset:** Security auditor. Applies discovered or universal security rules to code examples.
+**Input:** Code blocks from docs + SecurityPatterns.md checklist + discovered security guidelines (if any)
+**Output:** Q8 findings (with SP sub-classification)
+**Method:**
+1. If DiscoveryAgent found a security guidelines doc → read it and derive project-specific anti-patterns
+2. Always apply universal SP1-SP3 checks (string storage of secrets, missing cleanup, missing try/finally)
+3. Apply language-specific patterns (e.g., Python `getpass` usage, Java `char[]` for passwords)
+4. Apply judgment notes (see SecurityPatterns.md) to filter noise
+5. Generate findings with guideline references
+
+---
+
+## Audiences
+
+Each quality review considers three perspectives:
+
+### Library Developer
+**Who:** Engineer building features on top of the library/SDK.
+**Cares about:** API correctness, compile-time validity, version compatibility.
+**Finds:** T1-T6, Q1, Q2, Q5 — "Does this code actually work?"
+
+### Library User
+**Who:** Developer following documentation to integrate the library into their app.
+**Cares about:** Completeness, prerequisites, clarity.
+**Finds:** Q3, Q4, Q6 — "Can I follow this without prior knowledge?"
+
+### Technical Writer
+**Who:** Documentation maintainer ensuring consistency and navigability.
+**Cares about:** Terminology consistency, link integrity, structural coherence.
+**Finds:** Q6, Q7 — "Is this consistent with the rest of the docs?" + +--- + +## Orchestration Patterns + +### Audit Workflow (Correctness) +``` +DiscoveryAgent (Haiku) ──→ DeprecationScanner (Haiku) ──┐ + ──→ DocReferenceScanner (Haiku) ──┤ + ├──→ CrossReferencer (Sonnet) ──→ Findings +``` +Discovery first, then parallel scan, then join for cross-referencing. + +### Review Workflow (Quality) +``` +DiscoveryAgent (Haiku) ──→ SignatureVerifier (Sonnet) ──┐ + ──→ ProseReviewer (Opus) ──────┼──→ Deduplicate ──→ Findings + ──→ SecurityReviewer (Opus) ────┘ +``` +Discovery first, then all three reviewers in parallel. + +### Full Audit +``` +DiscoveryAgent (Haiku) ──→ Audit Workflow ──┐ + ──→ Review Workflow ──┤ + ├──→ Report Workflow (merge + format) +``` +Single discovery shared across both workflows. + +--- + +## Agent Launch Guidelines + +1. **Discovery runs once per invocation.** Share its output across all subsequent agents. +2. **Always scope agents narrowly.** Pass specific file lists from discovery, not "scan everything." +3. **Haiku agents get explicit instructions.** They follow patterns well but don't improvise. +4. **Opus agents get context + judgment latitude.** They decide what matters. +5. **Cross-referencing requires Sonnet minimum.** Matching two datasets needs reasoning. +6. **Parallelize independent agents.** DeprecationScanner and DocReferenceScanner have no dependency. +7. **Sequential where dependent.** CrossReferencer must wait for both scanners. + +--- + +## Criteria-Driven Execution + +Every workflow in DocsAudit follows a **criteria-first** pattern: + +### Before Work: Define Success Criteria +Each workflow defines binary-testable criteria (true/false, no ambiguity). These describe the **end state**, not the steps to get there. 
Examples: +- "Every finding includes source-code citation proving deprecated status" (not "scan for deprecated items") +- "Zero false positives remain after verification" (not "verify findings") + +### During Work: Execute Against Criteria +Agents execute their phases knowing what success looks like. This prevents scope creep and ensures completeness. + +### After Work: Verify Every Criterion +Walk through each criterion mechanically: +1. Read the criterion statement +2. Check the output against it (grep, count, spot-check) +3. Mark verified or failed +4. If failed → loop back to the relevant phase, don't ship partial results + +### Why This Matters +- **Reproducibility:** Different people running the same workflow get consistent results +- **No silent failures:** A criterion that can't be verified exposes a gap in the workflow +- **Self-improving:** If a criterion repeatedly fails, the workflow phase needs refinement + +Each workflow file (Audit.md, Review.md, Report.md) contains its own criteria table and verification protocol. diff --git a/.claude/skills/DocsAudit/ErrorTaxonomy.md b/.claude/skills/DocsAudit/ErrorTaxonomy.md new file mode 100644 index 000000000..c1d2b6fbd --- /dev/null +++ b/.claude/skills/DocsAudit/ErrorTaxonomy.md @@ -0,0 +1,106 @@ +--- +name: ErrorTaxonomy +description: Classification system for documentation correctness (T1-T6) and quality (Q1-Q8) issues found during DocsAudit scans. Language-agnostic. +type: reference +--- + +# Error Taxonomy + +Two categories: **Correctness** (T-series, mechanical, verifiable) and **Quality** (Q-series, contextual, judgment-based). + +## Correctness Errors (T1-T6) + +These are **factual errors** — the documentation contradicts the current codebase. Every T-finding must include a source-code citation proving the inconsistency. 
+ +| ID | Name | Description | Detection Method | +|----|------|-------------|-----------------| +| **T1** | Deprecated type in code example | Code example uses a class/type marked as deprecated as if it's current API | Grep deprecation markers in source → cross-reference against doc code blocks | +| **T2** | Deprecated method/function overload in code example | Code example calls an overload/signature marked deprecated when a replacement exists | Check method signatures in source for deprecation on specific overloads | +| **T3** | Deprecated type in prose | Prose references a deprecated type/class name as if it's the current API surface | Grep type names from deprecation map against prose text (outside code blocks) | +| **T4** | Deprecated property/field access in code example | Code accesses properties/fields specific to a deprecated type; replacement has different member names | Compare member names between deprecated and replacement types | +| **T5** | Typo in identifier | Misspelled class/function/method/enum name in code example or prose | Fuzzy-match identifiers in docs against actual names in source | +| **T6** | Deprecated command/class in code example | Code uses a class/command replaced by an updated equivalent | Grep classes for deprecation markers and cross-reference docs | + +### Severity + +- **T1, T2, T6**: High — code examples won't compile/run or produce warnings +- **T3**: Medium — misleading prose but won't break compilation +- **T4**: High — code examples will fail (wrong member names) +- **T5**: Medium-High — may or may not compile depending on typo location + +### Language-Specific Deprecation Markers + +| Language | Marker | Example | +|----------|--------|---------| +| C# | `[Obsolete("message")]` | `[Obsolete("Use RSAPublicKey instead")]` | +| Java | `@Deprecated` + `@deprecated` Javadoc | `@Deprecated(since = "2.0")` | +| TypeScript/JS | `@deprecated` JSDoc/TSDoc | `/** @deprecated Use newMethod instead */` | +| Python | 
`warnings.warn(..., DeprecationWarning)` | `warnings.warn("Use X", DeprecationWarning)` | +| Go | `// Deprecated:` comment | `// Deprecated: Use NewFunc instead.` | +| Rust | `#[deprecated(note = "...")]` | `#[deprecated(since = "1.2", note = "Use X")]` | + +--- + +## Quality Issues (Q1-Q8) + +These are **judgment calls** — the documentation is technically not wrong but could mislead, confuse, or harm users. Quality findings include a rationale for why the issue matters. + +| ID | Name | Description | Detection Method | +|----|------|-------------|-----------------| +| **Q1** | Non-compiling/non-running code example | Code example has syntax errors, missing imports, or type mismatches (beyond deprecation issues) | Static analysis of code blocks against known API signatures | +| **Q2** | Prose contradicts code example | Explanatory text says one thing, adjacent code does another | Read prose + code pairs and check alignment | +| **Q3** | Missing context | Code example assumes setup/state not shown and not linked | Check if variables/objects used are declared or referenced elsewhere | +| **Q4** | Unclear prerequisites | Document assumes knowledge or setup steps not mentioned | Review from Library User perspective — can a newcomer follow this? 
| +| **Q5** | Missing version gate | Feature or behavior is version-specific but doc doesn't mention which versions | Check if APIs used are version-gated in source | +| **Q6** | Inconsistent terminology | Same concept called different names across related docs | Compare terminology across docs in the same section | +| **Q7** | Broken or invalid link | Doc link, anchor, or URL that doesn't resolve | Validate link targets exist; check anchor slugs match headings | +| **Q8** | Security anti-pattern | Code example violates project or universal security guidelines | Check against SecurityPatterns.md checklist | + +### Severity + +- **Q1**: High — broken examples erode trust +- **Q2**: High — actively misleading +- **Q3, Q4**: Medium — frustrating but recoverable +- **Q5**: Medium — version-specific bugs are hard to diagnose +- **Q6**: Low — cosmetic but accumulates +- **Q7**: Medium — broken navigation +- **Q8**: High — security issues in official examples are dangerous + +--- + +## Finding Format + +Each finding should be reported as: + +``` +[ID] File:Line — Summary + Evidence: + Source: (with file:line citation) + Suggested fix: +``` + +Example (C#): +``` +[T1] cert-request.md:95 — Uses deprecated PivRsaPublicKey in code example + Evidence: `PivRsaPublicKey rsaPublic = pivSession.GenerateKeyPair(...)` + Source: PivRsaPublicKey marked [Obsolete] at Cryptography/PivRsaPublicKey.cs:12 + Replacement: RSAPublicKey (Cryptography/RSAPublicKey.cs) + Suggested fix: `var rsaPublic = (RSAPublicKey)pivSession.GenerateKeyPair(...)` +``` + +Example (Python): +``` +[T1] auth.md:42 — Uses deprecated authenticate() function in code example + Evidence: `client.authenticate(username, password)` + Source: authenticate() has DeprecationWarning at auth/client.py:88 + Replacement: login() (auth/client.py:95) + Suggested fix: `client.login(username, password)` +``` + +Example (Java): +``` +[T2] encryption.md:67 — Uses deprecated Cipher.getInstance("DES") overload + Evidence: `Cipher cipher 
= Cipher.getInstance("DES");` + Source: DES deprecated in favor of AES + Suggested fix: `Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");` +``` diff --git a/.claude/skills/DocsAudit/FindingsSchema.md b/.claude/skills/DocsAudit/FindingsSchema.md new file mode 100644 index 000000000..64a5029d0 --- /dev/null +++ b/.claude/skills/DocsAudit/FindingsSchema.md @@ -0,0 +1,174 @@ +--- +name: FindingsSchema +description: Structured JSON schema for agent findings output. Ensures deterministic, machine-parseable results that feed into the fixed report template. +type: reference +--- + +# Findings Schema + +All agents MUST emit findings in this structured format. The Report workflow renders findings into the fixed template (ReportTemplate.md). This separation ensures deterministic output regardless of which model or agent produces the findings. + +--- + +## Discovery Output Schema + +The DiscoveryAgent emits this on completion. All subsequent agents receive it as input. + +```json +{ + "discovery": { + "language": "csharp", + "language_display": "C#", + "source_dirs": ["Yubico.YubiKey/src/", "Yubico.Core/src/"], + "docs_dir": "docs/", + "exclude_docs": ["whats-new.md"], + "exclude_source": ["*Tests*", "*examples*"], + "deprecation_pattern": "\\[Obsolete\\(", + "doc_link_format": "docfx-xref", + "security_guidelines": "docs/users-manual/sdk-programming-guide/sensitive-data.md", + "code_fence_languages": ["csharp", "cs"], + "config_source": "auto-detected", + "timestamp": "2026-03-30T14:22:00Z" + } +} +``` + +Fields: +- `config_source`: `"auto-detected"` or `"docsaudit.yaml"` — tracks whether config was discovered or loaded +- `security_guidelines`: path or `null` if none found + +--- + +## Finding Object Schema + +Every individual finding — from any agent — uses this shape: + +```json +{ + "id": "T1", + "file": "docs/users-manual/application-piv/cert-request.md", + "line": 95, + "summary": "Uses deprecated PivRsaPublicKey in code example", + "severity": "critical", + 
"evidence": "PivRsaPublicKey rsaPublic = pivSession.GenerateKeyPair(...)", + "source": { + "file": "Yubico.YubiKey/src/Cryptography/PivRsaPublicKey.cs", + "line": 12, + "detail": "PivRsaPublicKey marked [Obsolete]" + }, + "replacement": { + "type": "RSAPublicKey", + "file": "Yubico.YubiKey/src/Cryptography/RSAPublicKey.cs", + "verified": true + }, + "suggested_fix": "var rsaPublic = (RSAPublicKey)pivSession.GenerateKeyPair(...)", + "sub_id": null, + "guideline": null, + "audience": "developer" +} +``` + +### Required Fields + +| Field | Type | Description | +|-------|------|-------------| +| `id` | string | Category code: T1-T6 or Q1-Q8 | +| `file` | string | Doc file path (relative to repo root) | +| `line` | number | Line number in doc file | +| `summary` | string | One-line description of the issue | +| `severity` | enum | `"critical"` \| `"high"` \| `"medium"` \| `"low"` | +| `evidence` | string | What the doc shows (quoted text or code) | +| `suggested_fix` | string | Specific replacement text | + +### Optional Fields + +| Field | Type | Description | When Used | +|-------|------|-------------|-----------| +| `source` | object | Source code citation proving the issue | T1-T6 (required), Q1 (recommended) | +| `source.file` | string | Source file path | | +| `source.line` | number | Line in source | | +| `source.detail` | string | What the source shows | | +| `replacement` | object | The correct type/method to use | T1-T6 | +| `replacement.type` | string | Replacement identifier | | +| `replacement.file` | string | Where replacement lives in source | | +| `replacement.verified` | boolean | Was the replacement confirmed to exist? 
| | +| `sub_id` | string | Sub-classification (e.g., "SP2" for Q8) | Q8 only | +| `guideline` | string | Reference to violated guideline | Q8 only | +| `audience` | enum | `"developer"` \| `"user"` \| `"writer"` | Q2-Q7 | + +### Severity Rules + +Severity is NOT a judgment call — it's determined by the finding category: + +| Category | Default Severity | Override Condition | +|----------|-----------------|-------------------| +| T1, T4 | critical | — | +| T2, T6 | high | — | +| T3 | medium | — | +| T5 | high | medium if typo doesn't affect compilation | +| Q1 | critical | high if example is clearly a snippet | +| Q2 | critical | — | +| Q3, Q4 | medium | — | +| Q5 | medium | — | +| Q6 | low | — | +| Q7 | medium | — | +| Q8/SP1, Q8/SP3 | high | medium if surrounding prose mentions cleanup | +| Q8/SP2 | medium | low if example is demonstrating non-security API | +| Q8/SP4-SP6 | low | — | + +--- + +## Agent Output Schema + +Each agent wraps its findings in this envelope: + +```json +{ + "agent": "DeprecationScanner", + "model": "haiku", + "timestamp": "2026-03-30T14:25:00Z", + "scope": { + "files_scanned": 847, + "directories": ["Yubico.YubiKey/src/", "Yubico.Core/src/"] + }, + "findings": [ + { /* Finding objects */ } + ], + "metadata": { + "deprecation_items_found": 162, + "doc_references_checked": 1243, + "false_positives_discarded": 3 + } +} +``` + +--- + +## Merged Output Schema + +The Report workflow merges all agent outputs into: + +```json +{ + "report": { + "date": "2026-03-30", + "discovery": { /* Discovery output */ }, + "agents": [ + { /* Agent output envelopes */ } + ], + "findings": [ + { /* Deduplicated, sorted findings */ } + ], + "summary": { + "total": 12, + "by_severity": {"critical": 2, "high": 4, "medium": 5, "low": 1}, + "by_category": {"T1": 0, "T2": 0, "T3": 0, "T4": 0, "T5": 2, "T6": 0, "Q1": 1, "Q2": 1, "Q3": 1, "Q6": 1, "Q8": 6}, + "files_with_findings": 8, + "systemic_issues": ["OTP key cleanup pattern (4 files)"] + }, + "config_saved": 
false + } +} +``` + +This merged output feeds directly into ReportTemplate.md for rendering. diff --git a/.claude/skills/DocsAudit/ReportTemplate.md b/.claude/skills/DocsAudit/ReportTemplate.md new file mode 100644 index 000000000..f3f9b1c25 --- /dev/null +++ b/.claude/skills/DocsAudit/ReportTemplate.md @@ -0,0 +1,123 @@ +--- +name: ReportTemplate +description: Fixed markdown template for DocsAudit reports. Agents fill data into this structure — no freestyle formatting. +type: reference +--- + +# Report Template + +This is the **exact output format** for all DocsAudit reports. The Report workflow renders the merged findings JSON into this template. No sections may be added, removed, or reordered. + +--- + +## Template + +````markdown +# Documentation Audit Report — {{date}} + +## Project + +| Property | Value | +|----------|-------| +| Language | {{discovery.language_display}} | +| Source | {{discovery.source_dirs | join(", ")}} | +| Docs | {{discovery.docs_dir}} | +| Security Guidelines | {{discovery.security_guidelines ?? 
"None found"}} | +| Config Source | {{discovery.config_source}} | + +## Executive Summary + +| Category | Count | Critical | High | Medium | Low | +|----------|-------|----------|------|--------|-----| +| Correctness (T1-T6) | {{summary.t_total}} | {{summary.t_critical}} | {{summary.t_high}} | {{summary.t_medium}} | {{summary.t_low}} | +| Quality (Q1-Q8) | {{summary.q_total}} | {{summary.q_critical}} | {{summary.q_high}} | {{summary.q_medium}} | {{summary.q_low}} | +| **Total** | **{{summary.total}}** | **{{summary.critical}}** | **{{summary.high}}** | **{{summary.medium}}** | **{{summary.low}}** | + +Estimated remediation: ~{{summary.estimated_hours}} hours + +{{#if summary.systemic_issues}} +## Systemic Issues + +{{#each summary.systemic_issues}} +### {{this.name}} + +**Affected files ({{this.file_count}}):** {{this.files | join(", ")}} + +**Pattern:** {{this.description}} + +**Single fix strategy:** {{this.fix_strategy}} + +{{/each}} +{{/if}} + +## Findings by File + +{{#each findings_by_file}} +### {{this.file}} + +| # | ID | Line | Summary | Severity | +|---|-----|------|---------|----------| +{{#each this.findings}} +| {{@index + 1}} | {{this.id}}{{#if this.sub_id}}/{{this.sub_id}}{{/if}} | {{this.line}} | {{this.summary}} | {{this.severity}} | +{{/each}} + +{{#each this.findings}} +**[{{this.id}}{{#if this.sub_id}}/{{this.sub_id}}{{/if}}] {{this.file}}:{{this.line}}** — {{this.summary}} +- **Evidence:** `{{this.evidence}}` +{{#if this.source}}- **Source:** {{this.source.detail}} ({{this.source.file}}:{{this.source.line}}){{/if}} +{{#if this.guideline}}- **Guideline:** {{this.guideline}}{{/if}} +- **Suggested fix:** `{{this.suggested_fix}}` + +{{/each}} +{{/each}} + +## Remediation Plan + +Priority order (lowest risk first): + +| Priority | Category | Count | Effort | Description | +|----------|----------|-------|--------|-------------| +{{#each remediation_plan}} +| {{this.priority}} | {{this.category}} | {{this.count}} | ~{{this.effort_minutes}} min | 
{{this.description}} | +{{/each}} + +## Scan Metadata + +| Agent | Model | Files Scanned | Duration | +|-------|-------|---------------|----------| +{{#each agents}} +| {{this.agent}} | {{this.model}} | {{this.scope.files_scanned}} | {{this.duration}} | +{{/each}} + +--- + +*Generated by DocsAudit skill — {{date}}* +*Config: {{discovery.config_source}}* +```` + +--- + +## Rendering Rules + +1. **No freestyle sections.** The template above is the complete report structure. Do not add commentary, observations, or "What worked" sections. +2. **Findings are sorted** by file path, then by line number within each file. +3. **Systemic Issues section** only appears if any entity appears in 3+ files. Otherwise omit the section entirely. +4. **Remediation Plan** is always in this order: + - T5 (typos) — ~2 min each + - T1, T3 (type replacements) — ~2 min each + - T2, T4 (signature/property updates) — ~5 min each + - T6 (command class rewrites) — ~15 min each + - Q1, Q2 (code/prose fixes) — ~10 min each + - Q8 (security fixes) — ~5 min each + - Q3-Q7 (quality improvements) — ~10 min each +5. **Empty categories** are included in the summary table (showing 0) but omitted from the remediation plan. +6. **Estimated hours** = sum of (count × effort_minutes) for all categories, divided by 60, rounded to nearest 0.5. + +--- + +## Why Fixed Template + +- **Deterministic:** Same findings → identical report, every time +- **Diffable:** Reports from different dates can be diff'd to track progress +- **Parseable:** Consistent structure enables automated processing +- **Trustworthy:** Readers know exactly where to find each piece of information diff --git a/.claude/skills/DocsAudit/SKILL.md b/.claude/skills/DocsAudit/SKILL.md new file mode 100644 index 000000000..c3b1b41e7 --- /dev/null +++ b/.claude/skills/DocsAudit/SKILL.md @@ -0,0 +1,77 @@ +--- +name: DocsAudit +description: Documentation consistency and correctness auditing for any codebase. 
USE WHEN docs audit, docs consistency, obsolete check, documentation scan, check docs for obsolete code, verify documentation accuracy, find stale code references in docs. +--- + +# DocsAudit + +Scan documentation for correctness issues (deprecated code references, wrong API signatures, broken links) and quality issues (security anti-patterns, missing context, inconsistent terminology). Works with any language and documentation toolchain. Two modes: **Audit** (mechanical, deterministic) and **Review** (contextual, suggests improvements). + +## Zero-Config Design + +DocsAudit auto-detects everything it needs from the repository: + +- **Language** — determined by counting source file extensions +- **Source/docs directories** — found by common naming patterns +- **Deprecation markers** — selected from built-in language profiles +- **Doc link format** — inferred from link syntax found in docs +- **Security guidelines** — discovered by searching for security/sensitive-data docs + +No configuration file required. After the first run, the skill can suggest saving a `.docsaudit.yaml` to speed up future runs — but it's optional. 
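+The language-detection bullet above boils down to counting source files per extension. A minimal sketch, assuming hypothetical names (`EXTENSIONS`, `EXCLUDED_DIRS`, `detect_language` are illustrative, not part of the skill's actual implementation):
+
+```python
+from collections import Counter
+from pathlib import Path
+
+# Extensions mapped to language names (mirrors the built-in language profiles).
+EXTENSIONS = {".cs": "C#", ".java": "Java", ".ts": "TypeScript",
+              ".py": "Python", ".go": "Go", ".rs": "Rust"}
+# Dependency, build, and VCS directories excluded from the count.
+EXCLUDED_DIRS = {"node_modules", "vendor", "bin", "obj", ".git"}
+
+def detect_language(repo_root):
+    """Return the dominant source language by file count, or None if no match."""
+    counts = Counter()
+    for path in Path(repo_root).rglob("*"):
+        if any(part in EXCLUDED_DIRS for part in path.parts):
+            continue  # skip excluded directories anywhere in the path
+        lang = EXTENSIONS.get(path.suffix)
+        if lang:
+            counts[lang] += 1
+    return counts.most_common(1)[0][0] if counts else None
+```
+
+The same count-and-pick-dominant shape applies to the other detection bullets (docs directory, changelog patterns, doc link format).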
+ +## Workflow Routing + +| Workflow | Trigger | File | +|----------|---------|------| +| **Audit** | "audit docs", "check for obsolete", "scan docs" | `Workflows/Audit.md` | +| **Review** | "review docs quality", "improve docs", "docs quality" | `Workflows/Review.md` | +| **Report** | "generate docs report", "show findings" | `Workflows/Report.md` | + +## Examples + +**Example 1: Scan for deprecated code in documentation** +``` +User: "Audit the docs for obsolete code references" +-> Auto-detects: C# project, docs/ directory, [Obsolete] attributes +-> Scans source for deprecation markers, builds obsolete map +-> Cross-references against docs code examples and prose +-> Reports findings categorized by T1-T6 taxonomy +``` + +**Example 2: Review documentation quality** +``` +User: "Review the PIV docs for quality issues" +-> Invokes Review workflow +-> Reads docs through SDK Developer, SDK User, and Technical Writer lenses +-> Discovers and checks code examples against security guidelines +-> Reports suggestions categorized by Q1-Q8 taxonomy +``` + +**Example 3: Full audit with report** +``` +User: "Do a full docs audit and generate a report" +-> Invokes Audit workflow, then Report workflow +-> Produces categorized findings with file paths, line numbers, and suggested fixes +-> Suggests saving .docsaudit.yaml for future runs +``` + +## Quick Reference + +- **Error taxonomy:** See `ErrorTaxonomy.md` — T1-T6 correctness + Q1-Q8 quality categories +- **Security patterns:** See `SecurityPatterns.md` — SP1-SP6 anti-patterns (adapts to project's own guidelines) +- **Agent design:** See `AgentDesign.md` — agent types, model tiers, language profiles, orchestration +- **Findings schema:** See `FindingsSchema.md` — structured JSON output format for deterministic results +- **Report template:** See `ReportTemplate.md` — fixed markdown template, no freestyle formatting +- **Audiences:** SDK Developer (correctness), SDK User (usability), Technical Writer (consistency) +- 
**Modes:** Audit (facts/findings) vs Review (suggestions/opinions) + +## How It Works + +Each workflow follows a **criteria-driven** approach: + +1. **Discover** the project structure, language, and documentation layout automatically. +2. **Define success criteria** before scanning — what does "done" look like? Each criterion is a binary-testable statement (true/false). +3. **Execute** the scan using specialized agents at appropriate model tiers (Haiku for bulk, Sonnet for analysis, Opus for judgment). +4. **Verify** every criterion mechanically after execution — no finding is reported without source-code evidence, no criterion is marked complete without verification. + +This ensures reproducible, auditable results regardless of who runs the skill or what language the project uses. diff --git a/.claude/skills/DocsAudit/SecurityPatterns.md b/.claude/skills/DocsAudit/SecurityPatterns.md new file mode 100644 index 000000000..867e28124 --- /dev/null +++ b/.claude/skills/DocsAudit/SecurityPatterns.md @@ -0,0 +1,125 @@ +--- +name: SecurityPatterns +description: Anti-patterns to detect in code examples across languages. Universal patterns (SP1-SP3) plus language-specific variants. Used by Q8 checks in DocsAudit. +type: reference +--- + +# Security Anti-Patterns for Code Examples + +Code examples in documentation must model correct security practices. These patterns flag violations. + +**Source:** If the project has its own security guidelines doc (auto-discovered), those supplement these universal patterns. + +--- + +## Universal Anti-Patterns (All Languages) + +### SP1: String storage of sensitive data +**Detect:** PINs, passwords, keys, or tokens stored in immutable string types. +**Why:** Strings cannot be securely wiped in most languages (immutable in C#, Java, Python, JS; interned by runtime). + +| Language | Bad | Good | +|----------|-----|------| +| C# | `string pin = "123456";` | `byte[] pin = new byte[] { ... 
};` |
+| Java | `String password = "secret";` | `char[] password = ...;` then `Arrays.fill(password, '\0');` |
+| Python | `pin = "123456"` | `pin = bytearray(b"123456")` then `pin[:] = b'\x00' * len(pin)` |
+| Go | `pin := "123456"` | `pin := make([]byte, 6)` then zero with loop |
+| Rust | `let pin = String::from("123456");` | `Zeroizing<Vec<u8>>` from the `zeroize` crate (zeroed on drop) |
+
+### SP2: Missing buffer/memory zeroing
+**Detect:** Sensitive buffers used without explicit cleanup after use.
+**Why:** Data persists in memory after the reference goes out of scope.
+
+| Language | Cleanup Method |
+|----------|---------------|
+| C# | `CryptographicOperations.ZeroMemory(buffer)` |
+| Java | `Arrays.fill(charArray, '\0')` or `Arrays.fill(byteArray, (byte)0)` |
+| Python | `buf[:] = b'\x00' * len(buf)` |
+| Go | `for i := range buf { buf[i] = 0 }` |
+| Rust | `zeroize::Zeroize` trait |
+| TypeScript/JS | Manual loop (no built-in); `crypto.timingSafeEqual` for comparison |
+
+### SP3: Missing exception-safe cleanup
+**Detect:** Sensitive buffer cleanup not guaranteed on error paths.
+**Why:** If an exception/panic occurs between collection and zeroing, the data remains.
+
+| Language | Pattern |
+|----------|---------|
+| C# | `try/finally` with `ZeroMemory()` in finally |
+| Java | `try/finally` with `Arrays.fill()` in finally |
+| Python | `try/finally` with zeroing in finally |
+| Go | `defer` with zeroing function |
+| Rust | `Drop` trait / `Zeroize` on drop |
+
+---
+
+## Language-Specific Anti-Patterns
+
+### SP4: Deprecated security APIs
+| Language | Anti-Pattern | Why |
+|----------|-------------|-----|
+| C# | `SecureString` | No longer recommended by Microsoft |
+| Java | `java.security.Certificate` (old) | Use `java.security.cert.Certificate` |
+| Python | `md5` / `sha1` for security purposes | Use `hashlib.sha256` minimum |
+| JS/TS | `crypto.createCipher()` | Use `crypto.createCipheriv()` |
+
+### SP5: Unbounded sensitive buffers
+**Detect:** Sensitive data in dynamically-sized collections rather than pre-allocated fixed-size buffers.
+**Why:** Resizing creates copies in memory.
+
+| Language | Bad | Good |
+|----------|-----|------|
+| C# | `List<byte>` for key material | `new byte[KeySize]` |
+| Java | `ArrayList<Byte>` | `new byte[KEY_SIZE]` |
+| Python | Appending to `list` | Pre-allocated `bytearray(size)` |
+
+### SP6: Long-lived sensitive data
+**Detect:** Sensitive data stored in class fields, static variables, singletons, or cached beyond immediate use.
+**Why:** Increases exposure window. Collect just before use, clear immediately after.
+**Applies to all languages equally.** + +--- + +## Scope of Q8 Detection + +### What to scan +- All fenced code blocks in documentation files (language auto-detected from fence tag) +- Variable names suggesting sensitive data: `pin`, `puk`, `password`, `key`, `managementKey`, `secret`, `credential`, `privateKey`, `token`, `apiKey`, `passphrase` +- Method parameters receiving sensitive data + +### What to skip +- Code blocks that are clearly protocol-level illustrations (hex dumps, wire format) +- Historical changelog entries (auto-detected by DiscoveryAgent) +- Prose-only mentions of security concepts (not code examples) +- Languages without fenced code blocks (inline backtick references) + +### Project-specific guidelines +If the DiscoveryAgent found a security guidelines document in the project: +1. Read it and extract any additional anti-patterns beyond SP1-SP6 +2. Apply those patterns to code examples in docs +3. Reference the project's guideline in findings (e.g., "Guideline: sensitive-data.md §2") + +If no guidelines doc found: +- Apply only universal SP1-SP3 (always valid) +- Note in report: "No project-specific security guidelines found. Only universal patterns checked." + +### Reporting +Q8 findings reference the specific SP pattern violated: + +``` +[Q8/SP2] fips-mode.md:103 — PIN byte array not zeroed after use + Evidence: `byte[] newPin = new byte[] { ... }` used in TrySetPin, never cleared + Guideline: sensitive-data.md §2 (or "Universal SP2" if no project guidelines) + Suggested fix: Add try/finally with CryptographicOperations.ZeroMemory(newPin) +``` + +--- + +## Judgment Notes + +Not every code example needs full security ceremony. Apply these guidelines: + +1. **Instructional focus** — If the example's purpose is demonstrating a specific API (e.g., how to call `GenerateKeyPair`), a brief comment like `// Clear sensitive data after use` is acceptable instead of full try/finally boilerplate. +2. 
**PIN/password examples** — Short inline values are acceptable for illustration. Flag only if no mention of cleanup exists anywhere in the surrounding prose. +3. **Private key / cryptographic material** — These should always model correct security. Private key material in code examples without cleanup is always a Q8 finding regardless of instructional context. +4. **Severity scaling** — SP1 (strings for secrets) and SP3 (missing exception-safe cleanup for keys) are High. SP5/SP6 are Low for short examples. diff --git a/.claude/skills/DocsAudit/Workflows/Audit.md b/.claude/skills/DocsAudit/Workflows/Audit.md new file mode 100644 index 000000000..cb0e612a8 --- /dev/null +++ b/.claude/skills/DocsAudit/Workflows/Audit.md @@ -0,0 +1,174 @@ +--- +name: Audit +description: Correctness scan workflow — finds obsolete code references, wrong signatures, and typos in documentation. +--- + +# Audit Workflow + +Mechanical, deterministic scan for correctness errors (T1-T6). Produces verifiable findings with source citations. + +## Success Criteria + +Before starting, establish these testable criteria. Every criterion must be binary (true/false) and verified after execution. + +| # | Criterion | Verified | +|---|-----------|----------| +| SC-0 | Project language, source dirs, docs dir, and deprecation profile are identified | ☐ | +| SC-1 | All deprecated items in source are extracted into deprecation map | ☐ | +| SC-2 | All `.md` files in docs (excluding changelogs) are scanned for code references | ☐ | +| SC-3 | Every finding includes source-code citation proving the deprecated status | ☐ | +| SC-4 | Every suggested fix references a replacement type that exists in current source | ☐ | +| SC-5 | Zero false positives remain after verification phase (each finding re-read from file) | ☐ | +| SC-6 | Findings are classified using T1-T6 taxonomy with correct category assignment | ☐ | + +**Rule:** Do not produce the final report until all criteria are verified. 
If a criterion fails, loop back to the relevant phase. + +## Prerequisites + +Load on demand: +- `ErrorTaxonomy.md` — classification definitions +- `AgentDesign.md` — agent specs, language profiles, and model selection +- `FindingsSchema.md` — structured output format (all agents MUST emit findings in this schema) + +## Algorithm (6 Phases) + +### Phase 0: Discover Project +**Agent:** DiscoveryAgent | **Model:** Haiku + +Auto-detect project characteristics. No configuration required. + +1. **Check for existing config:** Look for `.docsaudit.yaml` in repo root. If found, load and skip to Phase 1. +2. **Detect language:** Glob for source files by extension (`*.cs`, `*.java`, `*.ts`, `*.py`, `*.go`, `*.rs`). Count per extension, excluding `node_modules/`, `vendor/`, `bin/`, `obj/`, `.git/`. Select dominant language. +3. **Find directories:** + - Docs: Try `docs/`, `doc/`, `documentation/`, `manual/`, `guide/`. First match with `.md` files wins. Fallback: find directories with >5 clustered `.md` files. + - Source: Try `src/`, `lib/`, `source/`, or find by project file patterns (`.csproj`, `pom.xml`, `package.json`, `Cargo.toml`, `go.mod`). +4. **Detect changelogs:** Find files matching `*changelog*`, `*whats-new*`, `*release-notes*`, `*history*` (case-insensitive). Add to exclusion list. +5. **Detect doc link format:** Grep docs for `xref:` (DocFX), `{@link` (Javadoc/TSDoc), `:class:` (Sphinx), intra-doc links (Rustdoc). +6. **Find security guidelines:** Search for files matching `*secur*`, `*sensitive*`, `*credential*` in docs. +7. **Output:** Project config object (see AgentDesign.md → DiscoveryAgent for schema). +8. **Present to user:** Show detected config and ask for confirmation before proceeding. + +### Phase 1: Build Deprecation Map +**Agent:** DeprecationScanner | **Model:** Haiku + +1. Using the language profile from Phase 0, grep source files for the deprecation pattern +2. 
For each match, extract: + - Fully qualified name (namespace/module + identifier) + - Simple name (just the identifier) + - Category from the language profile's category list + - Deprecation message text (contains replacement hints) + - File path and line number +3. Parse replacement hints from deprecation messages (e.g., "Use X instead") +4. Output: `deprecationMap[]` — structured list, deduplicated by fully qualified name + +**Exclusions:** Test files, example/sample projects (auto-detected or from config) + +### Phase 2: Scan Documentation References +**Agent:** DocReferenceScanner | **Model:** Haiku + +1. Find all `.md` files in the detected docs directory +2. For each file, extract: + - **Code block references:** Parse fenced code blocks matching detected languages for type/function names, method calls, property accesses + - **Prose references:** Find backtick-wrapped identifiers (`` `ClassName` ``) + - **Doc links:** Extract links using the detected doc link format(s) +3. Tag each reference: + - `referenceType`: `codeBlock` | `prose` | `docLink` + - `entityName`: the referenced identifier + - `language`: detected from code fence + - `docFile`: file path + - `line`: line number + - `context`: surrounding 2 lines for reporting + +**Exclusions:** Changelog files identified in Phase 0 + +### Phase 3: Cross-Reference +**Agent:** CrossReferencer | **Model:** Sonnet + +1. For each doc reference, check if `entityName` appears in `deprecationMap` +2. Match by simple name first, then verify by context (namespace hints in surrounding code) +3. Classify the finding: + - Code block + obsolete class → **T1** + - Code block + obsolete method overload → **T2** + - Prose + obsolete type → **T3** + - Code block + obsolete property → **T4** + - Code block + near-match to real class (edit distance ≤ 2) → **T5** + - Code block + obsolete command class → **T6** +4. For each finding, look up replacement from obsolete message +5. 
Verify replacement type exists in source (grep for `class ReplacementName` or `interface ReplacementName`) +6. Generate suggested fix text + +### Phase 4: Verify Findings +**Model:** Sonnet (same agent or inline) + +For each finding: +1. Read the actual doc file at the cited line — confirm the reference is real +2. Read the source file at the cited line — confirm the deprecation marker is real +3. Check that the suggested replacement compiles conceptually (correct constructor/factory method, correct property names) +4. Discard false positives (e.g., type name appears in a comment explaining migration history) + +### Phase 5: Format Output +Produce findings in the structured schema (see FindingsSchema.md → Finding Object Schema). Each finding MUST be a valid finding object with all required fields. Wrap all findings in an Agent Output envelope: + +``` +## Audit Results — [DATE] + +### Summary +- Files scanned: X docs, Y source +- Deprecated items found: N +- Documentation references checked: M +- Findings: F (by category breakdown) + +### Findings + +[T1] file.md:line — Summary + Evidence: ... + Source: ... + Suggested fix: ... + +[T2] ... +``` + +Group by file, then by category within each file. + +--- + +## Invocation + +``` +User: "Audit the docs for obsolete code references" +``` + +**Required inputs:** +- Source directories (auto-detect from project structure if not specified) +- Docs directory (auto-detect from project structure if not specified) + +**Optional inputs:** +- Scope limiter: specific docs subdirectory (e.g., `application-piv/`) +- Exclude patterns: files to skip + +--- + +## Parallelization + +Phase 0 runs first (discovery). Phases 1 and 2 run in **parallel** (no dependency). +Phases 3-5 run **sequentially** (each depends on prior output). 
+ +``` +Phase 0 (Haiku) ──→ Phase 1 (Haiku) ──┐ + ──→ Phase 2 (Haiku) ──┤ + ├──→ Phase 3 (Sonnet) → Phase 4 (Sonnet) → Phase 5 +``` + +## Verification Protocol + +After Phase 5, walk through each success criterion: + +0. **SC-0:** Confirm discovery output includes: language name, at least one source dir, a docs dir, and the deprecation pattern. If any is missing, discovery failed. +1. **SC-1:** Count deprecated items found. If zero, warn — most codebases with docs have some. Re-check grep pattern against the language profile. +2. **SC-2:** Compare scanned file count against actual `.md` count in docs dir (minus exclusions). Discrepancies mean missed files. +3. **SC-3:** For each finding, confirm `Source:` field includes a real file path and line number. Spot-check 3 findings by reading the cited source line. +4. **SC-4:** For each suggested fix, grep for the replacement class/method in source. If not found, the fix is wrong — investigate. +5. **SC-5:** For each finding, re-read the doc file at the cited line. If the text doesn't match the evidence, discard the finding. +6. **SC-6:** Cross-check 3 random findings against ErrorTaxonomy.md T1-T6 definitions. Category must match. + +**If any criterion fails:** Return to the relevant phase, fix, and re-verify. Do not output partial or unverified results. diff --git a/.claude/skills/DocsAudit/Workflows/Report.md b/.claude/skills/DocsAudit/Workflows/Report.md new file mode 100644 index 000000000..c4ba1833d --- /dev/null +++ b/.claude/skills/DocsAudit/Workflows/Report.md @@ -0,0 +1,154 @@ +--- +name: Report +description: Findings report generation workflow — merges Audit and Review results into a structured, deterministic report. +--- + +# Report Workflow + +Combines Audit (T1-T6) and Review (Q1-Q8) findings into a single report using a fixed template. All agent outputs follow the structured schema in `FindingsSchema.md`. The report is rendered from `ReportTemplate.md` — no freestyle formatting. 
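+The deterministic sort order and the estimated-hours rule from the rendering rules above can be sketched as follows. This is an illustrative sketch, not the skill's implementation; the function names and dict shapes are hypothetical, with `file`/`line` keys mirroring the findings schema:
+
+```python
+def sort_findings(findings):
+    """Deterministic order: by file path, then by line number within each file."""
+    return sorted(findings, key=lambda f: (f["file"], f["line"]))
+
+def estimate_hours(counts, effort_minutes):
+    """Sum count x effort_minutes per category, convert to hours,
+    and round to the nearest 0.5 hour."""
+    total = sum(n * effort_minutes[cat] for cat, n in counts.items())
+    return round(total / 60 * 2) / 2
+```
+
+Because both steps are pure functions of the findings array, the same findings always yield an identical report.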
+ +## Success Criteria + +| # | Criterion | Verified | +|---|-----------|----------| +| SC-1 | All findings from Audit and Review workflows are included (none dropped) | ☐ | +| SC-2 | Every finding has severity assigned per FindingsSchema.md severity rules (not judgment) | ☐ | +| SC-3 | Systemic issues (3+ files with same entity) are called out separately | ☐ | +| SC-4 | Remediation plan follows fixed priority order from ReportTemplate.md | ☐ | +| SC-5 | Report output exactly matches ReportTemplate.md structure (no added/removed sections) | ☐ | +| SC-6 | Config suggestion was presented to user (if no .docsaudit.yaml exists) | ☐ | + +## Prerequisites + +Load on demand: +- `FindingsSchema.md` — structured output schema for all agents +- `ReportTemplate.md` — fixed markdown template for rendering +- `ErrorTaxonomy.md` — for category descriptions and severity reference + +## Algorithm (4 Phases) + +### Phase 1: Collect and Validate Findings + +**Model:** Haiku (mechanical aggregation) + +1. Collect agent output envelopes from Audit and Review workflows +2. Validate each finding object against FindingsSchema.md: + - Required fields present (`id`, `file`, `line`, `summary`, `severity`, `evidence`, `suggested_fix`) + - Severity matches the rules table in FindingsSchema.md for that category + - If severity doesn't match → override to the correct value (log the override) +3. Merge all findings into a single array +4. Deduplicate: same `file` + `line` with overlapping `id` → keep the most specific + +### Phase 2: Analyze and Structure + +**Model:** Sonnet + +1. **Sort findings** by file path, then by line number within each file +2. **Group by file** for the "Findings by File" section +3. 
**Identify systemic issues:** Any entity (`evidence` text or `replacement.type`) appearing in 3+ findings across different files → extract into systemic issues array with: + - `name`: the repeated entity + - `file_count`: number of affected files + - `files`: list of file paths + - `description`: what the pattern is + - `fix_strategy`: single approach to fix all instances +4. **Build remediation plan** using the fixed priority order from ReportTemplate.md: + - Count findings per category + - Calculate effort per ReportTemplate.md effort estimates + - Calculate total estimated hours +5. **Build merged output** per FindingsSchema.md → Merged Output Schema + +### Phase 3: Render Report + +**Model:** Haiku (mechanical template fill) + +1. Load `ReportTemplate.md` +2. Fill template placeholders with data from the merged output +3. **Do not add any content not in the template.** No "Observations", no "What worked", no "Summary" prose. The template is the complete report. +4. Write to `docs-audit-report-[DATE].md` in project root + +### Phase 4: Config Suggestion (MANDATORY) + +**This phase is not optional. It must execute after every report generation.** + +1. Check if `.docsaudit.yaml` exists in repo root +2. If it exists → skip, add `"config_saved": true` to report metadata +3. If it does NOT exist → **present the detected configuration to the user**: + +``` +┌─────────────────────────────────────────────┐ +│ DocsAudit — Save Configuration? │ +├─────────────────────────────────────────────┤ +│ Language: C# (847 .cs files) │ +│ Source: Yubico.YubiKey/src/, │ +│ Yubico.Core/src/ │ +│ Docs: docs/ │ +│ Excluded: whats-new.md (changelog) │ +│ Security: docs/.../sensitive-data.md │ +│ Doc links: DocFX xref │ +│ │ +│ Save as .docsaudit.yaml? │ +│ This speeds up future runs by skipping │ +│ auto-detection. Delete the file to │ +│ re-detect. │ +└─────────────────────────────────────────────┘ +``` + +4. 
If user confirms → write `.docsaudit.yaml`: + +```yaml +# DocsAudit configuration +# Auto-generated on [DATE] +# Delete this file to re-run auto-detection + +language: {{language}} +source_dirs: +{{#each source_dirs}} + - {{this}} +{{/each}} +docs_dir: {{docs_dir}} +exclude_docs: +{{#each exclude_docs}} + - {{this}} +{{/each}} +exclude_source: + - "*Tests*" + - "*Test*" + - "*examples*" + - "*sample*" +security_guidelines: {{security_guidelines}} +``` + +5. If user declines → note in output: "Config not saved. Will auto-detect on next run." + +--- + +## Invocation + +``` +User: "Generate docs report" +User: "Show findings" +User: "Do a full docs audit and generate a report" +``` + +For "full audit + report": chain Audit → Review → Report workflows. + +--- + +## Output Options + +- **File:** Always save to `docs-audit-report-[DATE].md` in project root +- **Terminal:** Also print a summary table to stdout (the Executive Summary section only) +- **Structured:** If user requests, also output the merged JSON to `docs-audit-report-[DATE].json` + +--- + +## Verification Protocol + +1. **SC-1:** Count findings in merged output. Compare against sum of all agent `findings.length`. If mismatch, identify which findings were dropped and why. +2. **SC-2:** For every finding, check `severity` against the FindingsSchema.md severity rules table. The category determines severity — no judgment involved. If any finding has wrong severity, it's a bug. +3. **SC-3:** Scan findings for any `evidence` text or `replacement.type` appearing in 3+ different files. If found and NOT in Systemic Issues section → fail. +4. **SC-4:** Verify remediation plan rows are in the exact order specified in ReportTemplate.md. T5 first, Q3-Q7 last. +5. **SC-5:** Compare output file sections against ReportTemplate.md. Every section in template must be in output. No extra sections allowed. +6. **SC-6:** Check Phase 4 executed. 
If no `.docsaudit.yaml` existed before the run, the config suggestion MUST have been presented. Check output for the suggestion box or the "Config not saved" note. + +**If any criterion fails:** Fix and re-render. Do not deliver a non-conforming report. diff --git a/.claude/skills/DocsAudit/Workflows/Review.md b/.claude/skills/DocsAudit/Workflows/Review.md new file mode 100644 index 000000000..a49f7b176 --- /dev/null +++ b/.claude/skills/DocsAudit/Workflows/Review.md @@ -0,0 +1,155 @@ +--- +name: Review +description: Quality review workflow — evaluates documentation from SDK Developer, SDK User, and Technical Writer perspectives. +--- + +# Review Workflow + +Contextual, judgment-based review for quality issues (Q1-Q8). Produces suggestions with rationale. + +## Success Criteria + +Before starting, establish these testable criteria. Every criterion must be binary (true/false) and verified after execution. + +| # | Criterion | Verified | +|---|-----------|----------| +| SC-1 | All scoped docs are reviewed through at least one audience lens | ☐ | +| SC-2 | Every Q1 finding includes the actual vs expected method signature | ☐ | +| SC-3 | Every Q8 finding references a specific SP pattern and the violated guideline | ☐ | +| SC-4 | No duplicate findings exist (same file:line, same category) | ☐ | +| SC-5 | Security review covers all code blocks that handle sensitive data variables | ☐ | + +**Rule:** Do not produce the final report until all criteria are verified. + +## Prerequisites + +Load on demand: +- `ErrorTaxonomy.md` — Q1-Q8 definitions +- `SecurityPatterns.md` — SP1-SP6 anti-patterns for Q8 checks +- `AgentDesign.md` — agent specs and model selection +- `FindingsSchema.md` — structured output format (all agents MUST emit findings in this schema) + +## Algorithm (5 Phases) + +### Phase 0: Discover Project +**Agent:** DiscoveryAgent | **Model:** Haiku + +Same as Audit workflow Phase 0. If running as part of a full audit, reuse the discovery output. 
+
+Auto-detect project structure, language, docs directory, security guidelines. Check for `.docsaudit.yaml` first.
+
+### Phase 1: Scope Selection
+
+Determine which docs to review:
+- If user specifies files/directories → use those
+- If "all" → enumerate all `.md` files in discovered docs dir, excluding detected changelogs
+- If application-specific (e.g., "review PIV docs") → scope to that subdirectory
+
+### Phase 2: Parallel Review (3 agents)
+
+Launch three agents in parallel on the scoped doc set:
+
+#### Agent A: SignatureVerifier (Sonnet)
+Focus: Q1 (non-compiling code) + Q2 (prose contradicts code)
+
+1. For each code block in a detected source language in scoped docs:
+   - Extract method calls, constructors, property accesses
+   - Grep source for actual signatures
+   - Compare parameter counts, types, return types
+   - Flag mismatches as Q1
+2. For each code block, read surrounding prose:
+   - Does the prose describe what the code does?
+   - Do they agree? Flag contradictions as Q2
+
+#### Agent B: ProseReviewer (Opus)
+Focus: Q3-Q7
+
+Read each doc through three lenses:
+
+**SDK Developer lens:**
+- Q5: Are version-specific features gated? (e.g., "requires firmware 5.x")
+- Are there assertions about behavior that depend on a specific device or library version?
+
+**SDK User lens:**
+- Q3: Can a newcomer follow the code example without undeclared context?
+- Q4: Are prerequisites (imports, setup, key state) mentioned or linked?
+
+**Technical Writer lens:**
+- Q6: Is terminology consistent within the doc and across related docs?
+- Q7: Do all links (xref, anchors, URLs) resolve?
+
+#### Agent C: SecurityReviewer (Opus)
+Focus: Q8
+
+1. Load SecurityPatterns.md
+2. If DiscoveryAgent found a security guidelines doc → read it and derive project-specific anti-patterns
+3.
Find all code blocks that handle sensitive data: + - Variable names: `pin`, `puk`, `password`, `key`, `managementKey`, `secret`, `privateKey`, `token`, `credential` + - Method calls: common auth/key patterns (language-aware from discovery) +4. Check each against universal SP1-SP3 + language-specific patterns +5. Apply judgment notes (instructional focus vs. full security ceremony) +6. If no security guidelines found → skip project-specific checks, apply only universal SP1-SP3, note in output +7. Report Q8 findings with SP sub-classification + +### Phase 3: Deduplicate and Merge + +1. Collect findings from all three agents +2. Deduplicate: if same file:line flagged by multiple agents, keep the most specific finding +3. Sort by file, then by line number within file +4. Assign severity based on ErrorTaxonomy.md guidelines + +### Phase 4: Format Output + +``` +## Quality Review Results — [DATE] + +### Summary +- Files reviewed: X +- Findings: Y (Q1: a, Q2: b, ..., Q8: h) +- High severity: N +- Medium severity: M + +### Findings by File + +#### file.md + +[Q3] file.md:45 — Missing context for connection object + Issue: Code uses `connection` variable without showing where it comes from + Audience: SDK User — newcomer can't follow this + Suggestion: Add `using var connection = device.Connect(...)` before use + +[Q8/SP2] file.md:103 — PIN buffer not zeroed after use + Issue: `byte[] pin = ...` used in TrySetPin, never cleared + Guideline: sensitive-data.md §2 + Suggestion: Wrap in try/finally with CryptographicOperations.ZeroMemory(pin) +``` + +--- + +## Invocation + +``` +User: "Review the PIV docs for quality issues" +User: "Review docs quality" +User: "Improve docs" +``` + +**Required inputs:** +- Docs directory (auto-detect) + +**Optional inputs:** +- Scope (specific application or section) +- Focus (e.g., "just security" → only run SecurityReviewer) +- Audience filter (e.g., "from SDK User perspective" → only User lens findings) + +## Verification Protocol + +After 
Phase 4, walk through each success criterion: + +1. **SC-1:** List scoped files and confirm each appears in at least one agent's output. Missing files = incomplete review. +2. **SC-2:** For each Q1 finding, confirm both `Expected:` and `Actual:` signatures are present. Spot-check 2 by grepping source. +3. **SC-3:** For each Q8 finding, confirm it names an SP pattern (SP1-SP6) and cites a section of sensitive-data.md. +4. **SC-4:** Sort findings by file:line — any consecutive duplicates? Remove them. +5. **SC-5:** Grep scoped docs for sensitive variable names (`pin`, `puk`, `password`, `key`, `managementKey`, `privateKey`). Every code block containing these must have been reviewed by SecurityReviewer. + +**If any criterion fails:** Return to the relevant phase, fix, and re-verify. diff --git a/.github/workflows/build-nativeshims.yml b/.github/workflows/build-nativeshims.yml index d472e27f4..18e20ce76 100644 --- a/.github/workflows/build-nativeshims.yml +++ b/.github/workflows/build-nativeshims.yml @@ -38,7 +38,7 @@ jobs: runs-on: windows-2022 steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit @@ -78,25 +78,25 @@ jobs: if %FAILED%==1 exit /b 1 echo All Windows builds verified: no VC++ Redistributable required exit /b 0 - - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: win-x64 path: Yubico.NativeShims/win-x64/** - - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: win-x86 path: Yubico.NativeShims/win-x86/** - - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + - uses: 
actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: win-arm64 path: Yubico.NativeShims/win-arm64/** - - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: nuspec path: | Yubico.NativeShims/*.nuspec Yubico.NativeShims/readme.md - - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: msbuild path: Yubico.NativeShims/msbuild/* @@ -106,7 +106,7 @@ jobs: runs-on: ubuntu-24.04 steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit @@ -253,7 +253,7 @@ jobs: readelf -V *.so | grep GLIBC_2 | sort -u echo "✅ Binary compatible with Debian 10 (glibc 2.28)" ' - - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: linux-x64 path: Yubico.NativeShims/linux-x64/*.so @@ -263,7 +263,7 @@ jobs: runs-on: ubuntu-24.04 steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit @@ -340,7 +340,7 @@ jobs: bash ./build-linux-arm64.sh fi - name: Set up QEMU for ARM64 testing - uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3.7.0 + uses: docker/setup-qemu-action@ce360397dd3f832beb865e1373c09c0e9f86d70a # v4.0.0 with: platforms: arm64 - name: Test on Ubuntu 18.04 (glibc 2.27) @@ -414,7 +414,7 @@ jobs: readelf -V *.so | grep GLIBC_2 | sort -u echo "✅ ARM64 binary 
compatible with Debian 10 (glibc 2.28)" ' - - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: linux-arm64 path: Yubico.NativeShims/linux-arm64/*.so @@ -424,7 +424,7 @@ jobs: runs-on: macos-14 steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit @@ -440,11 +440,11 @@ jobs: else sh ./build-macOS.sh fi - - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: osx-x64 path: Yubico.NativeShims/osx-x64/** - - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: osx-arm64 path: Yubico.NativeShims/osx-arm64/** @@ -463,12 +463,12 @@ jobs: GITHUB_REPO_URL: https://github.com/${{ github.repository }} steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit - name: Download contents, set metadata and package - uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0 + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 - run: | mv nuspec/*.nuspec . mv nuspec/readme.md . 
@@ -483,13 +483,13 @@ jobs: - run: nuget pack Yubico.NativeShims.nuspec - name: Upload Nuget Package - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: NuGet Package NativeShims path: Yubico.NativeShims.*.nupkg - name: Generate artifact attestation - uses: actions/attest-build-provenance@96278af6caaf10aea03fd8d33a09a777ca52d62f # v3.2.0 + uses: actions/attest-build-provenance@a2bbfa25375fe432b6a289bc6b6cd05ecd0c4c32 # v4.1.0 with: subject-path: | Yubico.NativeShims/**/*.dll @@ -507,11 +507,11 @@ jobs: if: ${{ github.event.inputs.push-to-dev == 'true' }} steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit - - uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0 + - uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: name: NuGet Package NativeShims - run: | diff --git a/.github/workflows/build-pull-requests.yml b/.github/workflows/build-pull-requests.yml index 6ece89849..bded0ef4d 100644 --- a/.github/workflows/build-pull-requests.yml +++ b/.github/workflows/build-pull-requests.yml @@ -46,19 +46,22 @@ jobs: build-artifacts: name: Build artifacts + permissions: + contents: read + packages: read runs-on: windows-latest needs: run-tests steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 with: persist-credentials: false - - uses: actions/setup-dotnet@baa11fbfe1d6520db94683bd5c7a3818018e4309 # v5.1.0 
+ - uses: actions/setup-dotnet@c2fa09f4bde5ebb9d1777cf28262a3eb3db3ced7 # v5.2.0 with: global-json-file: global.json source-url: https://nuget.pkg.github.com/Yubico/index.json @@ -71,7 +74,7 @@ jobs: NUGET_AUTH_TOKEN: ${{ secrets.GITHUB_TOKEN }} - name: Save build artifacts - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: Nuget Packages Release path: | @@ -79,7 +82,7 @@ jobs: Yubico.YubiKey/src/bin/Release/*.nupkg - name: Save build artifacts - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: Assemblies Release path: | diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 7d42b0713..57b2702db 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -82,14 +82,14 @@ jobs: assemblies-id: ${{ steps.assemblies-upload.outputs.artifact-id }} steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 with: persist-credentials: false - - uses: actions/setup-dotnet@baa11fbfe1d6520db94683bd5c7a3818018e4309 # v5.1.0 + - uses: actions/setup-dotnet@c2fa09f4bde5ebb9d1777cf28262a3eb3db3ced7 # v5.2.0 with: global-json-file: "./global.json" source-url: https://nuget.pkg.github.com/Yubico/index.json @@ -119,7 +119,7 @@ jobs: # Upload documentation log - name: "Save build artifacts: Docs log" id: docs-log-upload - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: Documentation log path: docfx.log @@ 
-128,7 +128,7 @@ jobs: # Upload documentation - name: "Save build artifacts: Docs" id: docs-upload - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: Documentation path: docs/_site/ @@ -137,7 +137,7 @@ jobs: # Upload NuGet packages - name: "Save build artifacts: Nuget Packages" id: nuget-upload - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: Nuget Packages path: | @@ -148,7 +148,7 @@ jobs: # Upload symbols - name: "Save build artifacts: Symbols Packages" id: symbols-upload - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: Symbols Packages path: | @@ -159,7 +159,7 @@ jobs: # Upload assemblies - name: "Save build artifacts: Assemblies" id: assemblies-upload - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: Assemblies path: | @@ -169,7 +169,7 @@ jobs: # Generate artifact attestation - name: Generate artifact attestation - uses: actions/attest-build-provenance@96278af6caaf10aea03fd8d33a09a777ca52d62f # v3.2.0 + uses: actions/attest-build-provenance@a2bbfa25375fe432b6a289bc6b6cd05ecd0c4c32 # v4.1.0 with: subject-path: | Yubico.Core/src/bin/Release/*.nupkg @@ -200,14 +200,14 @@ jobs: contents: read steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit - - uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0 + - uses: 
actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: name: Nuget Packages - - uses: actions/setup-dotnet@baa11fbfe1d6520db94683bd5c7a3818018e4309 # v5.1.0 + - uses: actions/setup-dotnet@c2fa09f4bde5ebb9d1777cf28262a3eb3db3ced7 # v5.2.0 with: source-url: https://nuget.pkg.github.com/Yubico/index.json env: @@ -227,7 +227,7 @@ jobs: if: always() steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit diff --git a/.github/workflows/claude.yml b/.github/workflows/claude.yml index c58879eae..67b2f90db 100644 --- a/.github/workflows/claude.yml +++ b/.github/workflows/claude.yml @@ -30,7 +30,7 @@ jobs: actions: read # Required for Claude to read CI results on PRs steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit @@ -42,7 +42,7 @@ jobs: - name: Run Claude Code id: claude - uses: anthropics/claude-code-action@ade221fd1c400376a4799977d683a4eda09f9d7c # v1.0.60 + uses: anthropics/claude-code-action@0ee1beea589a67d33340072691a5d42abec7ae6b # v1.0.78 with: claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }} diff --git a/.github/workflows/codeql-analysis.yml b/.github/workflows/codeql-analysis.yml index c66b7af65..baa08c224 100644 --- a/.github/workflows/codeql-analysis.yml +++ b/.github/workflows/codeql-analysis.yml @@ -55,7 +55,7 @@ jobs: steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit @@ -66,7 +66,7 @@ jobs: # Setup 
.NET with authenticated NuGet source - name: Setup .NET - uses: actions/setup-dotnet@baa11fbfe1d6520db94683bd5c7a3818018e4309 # v5.1.0 + uses: actions/setup-dotnet@c2fa09f4bde5ebb9d1777cf28262a3eb3db3ced7 # v5.2.0 with: source-url: https://nuget.pkg.github.com/Yubico/index.json env: @@ -74,7 +74,7 @@ jobs: # Initializes the CodeQL tools for scanning. - name: Initialize CodeQL - uses: github/codeql-action/init@89a39a4e59826350b863aa6b6252a07ad50cf83e # v4.32.4 + uses: github/codeql-action/init@38697555549f1db7851b81482ff19f1fa5c4fedc # v4.34.1 with: # Override automatic language detection to only analyze C# # C/C++ code in Yubico.NativeShims is built separately (requires CMake/vcpkg) @@ -87,4 +87,4 @@ jobs: NUGET_AUTH_TOKEN: ${{ secrets.GITHUB_TOKEN }} - name: Perform CodeQL Analysis - uses: github/codeql-action/analyze@89a39a4e59826350b863aa6b6252a07ad50cf83e # v4.32.4 + uses: github/codeql-action/analyze@38697555549f1db7851b81482ff19f1fa5c4fedc # v4.34.1 diff --git a/.github/workflows/dependency-review.yml b/.github/workflows/dependency-review.yml index 630ea0f4c..5cdc66f9b 100644 --- a/.github/workflows/dependency-review.yml +++ b/.github/workflows/dependency-review.yml @@ -17,11 +17,11 @@ jobs: runs-on: ubuntu-latest steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit - name: 'Checkout Repository' uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 - name: 'Dependency Review' - uses: actions/dependency-review-action@05fe4576374b728f0c523d6a13d64c25081e0803 # v4.8.3 + uses: actions/dependency-review-action@2031cfc080254a8a887f58cffee85186f0e49e48 # v4.9.0 diff --git a/.github/workflows/deploy-docs.yml b/.github/workflows/deploy-docs.yml index 655e975c6..91de9ab27 100644 --- a/.github/workflows/deploy-docs.yml +++ 
b/.github/workflows/deploy-docs.yml @@ -27,7 +27,7 @@ jobs: steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit @@ -46,7 +46,7 @@ jobs: - name: Generate GitHub App token id: generate_token - uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1 + uses: actions/create-github-app-token@f8d387b68d61c58ab83c6c016672934102569859 # v3.0.0 with: app-id: 800408 # Yubico Docs owner: Yubico @@ -88,7 +88,7 @@ jobs: steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit @@ -105,7 +105,7 @@ jobs: - name: Generate GitHub App token id: generate_token - uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1 + uses: actions/create-github-app-token@f8d387b68d61c58ab83c6c016672934102569859 # v3.0.0 with: app-id: 260767 # Yubico Commit Status Reader owner: Yubico diff --git a/.github/workflows/scorecard.yml b/.github/workflows/scorecard.yml index fbe1c9209..3ced6c2dc 100644 --- a/.github/workflows/scorecard.yml +++ b/.github/workflows/scorecard.yml @@ -35,7 +35,7 @@ jobs: steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit @@ -70,7 +70,7 @@ jobs: # Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF # format to the repository Actions tab. 
- name: "Upload artifact" - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: SARIF file path: results.sarif @@ -79,6 +79,6 @@ jobs: # Upload the results to GitHub's code scanning dashboard (optional). # Commenting out will disable upload of results to your repo's Code Scanning dashboard - name: "Upload to code-scanning" - uses: github/codeql-action/upload-sarif@89a39a4e59826350b863aa6b6252a07ad50cf83e # v4.32.4 + uses: github/codeql-action/upload-sarif@38697555549f1db7851b81482ff19f1fa5c4fedc # v4.34.1 with: sarif_file: results.sarif diff --git a/.github/workflows/test-macos.yml b/.github/workflows/test-macos.yml index c530e7c05..af59408ae 100644 --- a/.github/workflows/test-macos.yml +++ b/.github/workflows/test-macos.yml @@ -31,14 +31,14 @@ jobs: steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 with: persist-credentials: false - - uses: actions/setup-dotnet@baa11fbfe1d6520db94683bd5c7a3818018e4309 # v5.1.0 + - uses: actions/setup-dotnet@c2fa09f4bde5ebb9d1777cf28262a3eb3db3ced7 # v5.2.0 with: global-json-file: "./global.json" @@ -71,7 +71,7 @@ jobs: run: dotnet test Yubico.Core/tests/Yubico.Core.UnitTests.csproj --filter "FullyQualifiedName!~DisposalTests" --logger trx --settings coverlet.runsettings.xml --collect:"XPlat Code Coverage" - name: Upload Test Result Files - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: TestResults-macOS path: '**/TestResults/*' diff --git a/.github/workflows/test-ubuntu.yml 
b/.github/workflows/test-ubuntu.yml index badab5604..3d1c40bf8 100644 --- a/.github/workflows/test-ubuntu.yml +++ b/.github/workflows/test-ubuntu.yml @@ -31,14 +31,14 @@ jobs: steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 with: persist-credentials: false - - uses: actions/setup-dotnet@baa11fbfe1d6520db94683bd5c7a3818018e4309 # v5.1.0 + - uses: actions/setup-dotnet@c2fa09f4bde5ebb9d1777cf28262a3eb3db3ced7 # v5.2.0 with: global-json-file: "./global.json" @@ -57,7 +57,7 @@ jobs: run: dotnet test Yubico.Core/tests/Yubico.Core.UnitTests.csproj --logger trx --settings coverlet.runsettings.xml --collect:"XPlat Code Coverage" - name: Upload Test Result Files - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: TestResults-Ubuntu path: '**/TestResults/*' diff --git a/.github/workflows/test-windows.yml b/.github/workflows/test-windows.yml index b1cf86be3..0c93908d7 100644 --- a/.github/workflows/test-windows.yml +++ b/.github/workflows/test-windows.yml @@ -31,14 +31,14 @@ jobs: steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 with: persist-credentials: false - - uses: actions/setup-dotnet@baa11fbfe1d6520db94683bd5c7a3818018e4309 # v5.1.0 + - uses: actions/setup-dotnet@c2fa09f4bde5ebb9d1777cf28262a3eb3db3ced7 # v5.2.0 with: global-json-file: "./global.json" @@ -52,7 +52,7 @@ jobs: run: 
dotnet test Yubico.Core/tests/Yubico.Core.UnitTests.csproj --logger trx --settings coverlet.runsettings.xml --collect:"XPlat Code Coverage" - name: Upload Test Result Files - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: TestResults-Windows path: '**/TestResults/*' diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml index 9e313596f..24808f321 100644 --- a/.github/workflows/test.yml +++ b/.github/workflows/test.yml @@ -81,13 +81,13 @@ jobs: if: inputs.build-coverage-report == true steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit - - uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0 + - uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 - name: Combine Coverage Reports # This is because one report is produced per project, and we want one result for all of them. - uses: danielpalme/ReportGenerator-GitHub-Action@ee0ae774f6d3afedcbd1683c1ab21b83670bdf8e # 5.5.1 + uses: danielpalme/ReportGenerator-GitHub-Action@cf6fe1b38ed5becc89ffe056c1f240825993be5b # 5.5.4 with: reports: "**/*.cobertura.xml" # REQUIRED # The coverage reports that should be parsed (separated by semicolon). Globbing is supported. targetdir: "${{ github.workspace }}" # REQUIRED # The directory where the generated report should be saved. 
@@ -112,7 +112,7 @@ jobs: thresholds: "40 60" - name: Upload Code Coverage Report - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0 + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 with: name: CoverageResults path: code-coverage-results.md @@ -129,17 +129,17 @@ jobs: if: github.event_name == 'pull_request' steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit - name: Download coverage results - uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0 + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: name: CoverageResults - name: Add PR Comment - uses: marocchino/sticky-pull-request-comment@773744901bac0e8cbb5a0dc842800d45e9b2b405 # v2.9.4 + uses: marocchino/sticky-pull-request-comment@70d2764d1a7d5d9560b100cbea0077fc8f633987 # v3.0.2 with: recreate: true path: code-coverage-results.md @@ -157,11 +157,11 @@ jobs: if: github.event_name == 'pull_request' steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit - - uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0 + - uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 - name: "Add PR Comment: Test Results (Windows)" uses: EnricoMi/publish-unit-test-result-action@c950f6fb443cb5af20a377fd0dfaa78838901040 # v2.23.0 diff --git a/.github/workflows/upload-docs.yml b/.github/workflows/upload-docs.yml index 6df21f6d1..6bf2e1b94 100644 --- a/.github/workflows/upload-docs.yml +++ b/.github/workflows/upload-docs.yml @@ -45,7 +45,7 @@ 
jobs: steps: # Checkout the local repository as we need the Dockerfile and other things even for this step. - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit @@ -54,7 +54,7 @@ jobs: persist-credentials: false # Grab the just-built documentation artifact and inflate the archive at the expected location. - - uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0 + - uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: name: Documentation path: docs/_site/ diff --git a/.github/workflows/verify-code-style.yml b/.github/workflows/verify-code-style.yml index eff9b080f..e411bb84c 100644 --- a/.github/workflows/verify-code-style.yml +++ b/.github/workflows/verify-code-style.yml @@ -37,14 +37,14 @@ jobs: steps: - name: Harden the runner (Audit all outbound calls) - uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0 + uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0 with: egress-policy: audit - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 with: persist-credentials: false - - uses: actions/setup-dotnet@baa11fbfe1d6520db94683bd5c7a3818018e4309 # v5.1.0 + - uses: actions/setup-dotnet@c2fa09f4bde5ebb9d1777cf28262a3eb3db3ced7 # v5.2.0 with: global-json-file: "./global.json" source-url: https://nuget.pkg.github.com/Yubico/index.json diff --git a/Dockerfile b/Dockerfile index 80004adb6..3521ecc92 100644 --- a/Dockerfile +++ b/Dockerfile @@ -12,7 +12,7 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-FROM nginx:alpine@sha256:1d13701a5f9f3fb01aaa88cef2344d65b6b5bf6b7d9fa4cf0dca557a8d7702ba +FROM nginx:alpine@sha256:e7257f1ef28ba17cf7c248cb8ccf6f0c6e0228ab9c315c152f9c203cd34cf6d1 ARG UID=1000 ARG GID=1000 diff --git a/Yubico.Core/src/Yubico.Core.csproj b/Yubico.Core/src/Yubico.Core.csproj index 9e312cf72..bcdae7594 100644 --- a/Yubico.Core/src/Yubico.Core.csproj +++ b/Yubico.Core/src/Yubico.Core.csproj @@ -111,17 +111,17 @@ limitations under the License. --> - + - + all runtime; build; native; contentfiles; analyzers; buildtransitive - - - - - + + + + + all @@ -129,14 +129,13 @@ limitations under the License. --> - - + - + <_Parameter1>Yubico.Core.UnitTests,PublicKey=00240000048000001401000006020000002400005253413100080000010001003312c63e1417ad4652242148c599b55c50d3213c7610b4cc1f467b193bfb8d131de6686268a9db307fcef9efcd5e467483fe9015307e5d0cf9d2fd4df12f29a1c7a72e531d8811ca70f6c80c4aeb598c10bb7fc48742ab86aa7986b0ae9a2f4876c61e0b81eb38e5b549f1fc861c633206f5466bfde021cb08d094742922a8258b582c3bc029eab88c98d476dac6e6f60bc0016746293f5337c68b22e528931b6494acddf1c02b9ea3986754716a9f2a32c59ff3d97f1e35ee07ca2972b0269a4cde86f7b64f80e7c13152c0f84083b5cc4f06acc0efb4316ff3f08c79bc0170229007fb27c97fb494b22f9f7b07f45547e263a44d5a7fe7da6a945a5e47afc9 diff --git a/Yubico.Core/tests/Yubico.Core.UnitTests.csproj b/Yubico.Core/tests/Yubico.Core.UnitTests.csproj index 212018618..df3c0badc 100644 --- a/Yubico.Core/tests/Yubico.Core.UnitTests.csproj +++ b/Yubico.Core/tests/Yubico.Core.UnitTests.csproj @@ -43,7 +43,7 @@ limitations under the License. 
--> Linux - + diff --git a/Yubico.NativeShims/CMakeLists.txt b/Yubico.NativeShims/CMakeLists.txt index 05ecdf4ed..42825ac29 100644 --- a/Yubico.NativeShims/CMakeLists.txt +++ b/Yubico.NativeShims/CMakeLists.txt @@ -4,7 +4,7 @@ cmake_minimum_required(VERSION 3.15) # Project version # if(NOT DEFINED PROJECT_VERSION) - set(PROJECT_VERSION "1.14.0") + set(PROJECT_VERSION "1.0.0") endif() set(VCPKG_MANIFEST_VERSION ${PROJECT_VERSION}) diff --git a/Yubico.NativeShims/build-windows.ps1 b/Yubico.NativeShims/build-windows.ps1 index 9b38ea630..396646769 100644 --- a/Yubico.NativeShims/build-windows.ps1 +++ b/Yubico.NativeShims/build-windows.ps1 @@ -1,5 +1,5 @@ param( - [string]$Version + [string]$Version = "1.0.0" ) # Update to latest vcpkg baseline diff --git a/Yubico.YubiKey/src/Resources/ExceptionMessages.Designer.cs b/Yubico.YubiKey/src/Resources/ExceptionMessages.Designer.cs index c7871514c..ca7122b19 100644 --- a/Yubico.YubiKey/src/Resources/ExceptionMessages.Designer.cs +++ b/Yubico.YubiKey/src/Resources/ExceptionMessages.Designer.cs @@ -2453,5 +2453,140 @@ internal static string YubiKeyOperationFailed { return ResourceManager.GetString("YubiKeyOperationFailed", resourceCulture); } } + + /// + /// Looks up a localized string similar to Error generating key pair: {0}. + /// + internal static string GenerateKeyPairFailed { + get { + return ResourceManager.GetString("GenerateKeyPairFailed", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to Certificate data too short to determine compression format.. + /// + internal static string CertificateDataTooShortToDetectFormat { + get { + return ResourceManager.GetString("CertificateDataTooShortToDetectFormat", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to Could not detect compression format.. 
+ /// + internal static string CouldNotDetectCompressionFormat { + get { + return ResourceManager.GetString("CouldNotDetectCompressionFormat", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to Decompressed data length {0} does not match expected length {1} from GIDS header.. + /// + internal static string DecompressedLengthMismatch { + get { + return ResourceManager.GetString("DecompressedLengthMismatch", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to The stream does not support writing.. + /// + internal static string StreamDoesNotSupportWriting { + get { + return ResourceManager.GetString("StreamDoesNotSupportWriting", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to The stream does not support reading.. + /// + internal static string StreamDoesNotSupportReading { + get { + return ResourceManager.GetString("StreamDoesNotSupportReading", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to Invalid CompressionMode value.. + /// + internal static string InvalidCompressionModeValue { + get { + return ResourceManager.GetString("InvalidCompressionModeValue", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to Reading is not supported on compression streams.. + /// + internal static string ReadingNotSupportedOnCompressionStreams { + get { + return ResourceManager.GetString("ReadingNotSupportedOnCompressionStreams", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to Writing is not supported on decompression streams.. + /// + internal static string WritingNotSupportedOnDecompressionStreams { + get { + return ResourceManager.GetString("WritingNotSupportedOnDecompressionStreams", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to CopyTo is not supported on compression streams.. 
+ /// + internal static string CopyToNotSupportedOnCompressionStreams { + get { + return ResourceManager.GetString("CopyToNotSupportedOnCompressionStreams", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to CopyToAsync is not supported on compression streams.. + /// + internal static string CopyToAsyncNotSupportedOnCompressionStreams { + get { + return ResourceManager.GetString("CopyToAsyncNotSupportedOnCompressionStreams", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to Unexpected end of stream while reading zlib header.. + /// + internal static string UnexpectedEndOfZlibHeader { + get { + return ResourceManager.GetString("UnexpectedEndOfZlibHeader", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to Invalid zlib header checksum.. + /// + internal static string InvalidZlibHeaderChecksum { + get { + return ResourceManager.GetString("InvalidZlibHeaderChecksum", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to Unsupported zlib compression method: {0}. Only deflate (8) is supported.. + /// + internal static string UnsupportedZlibCompressionMethod { + get { + return ResourceManager.GetString("UnsupportedZlibCompressionMethod", resourceCulture); + } + } + + /// + /// Looks up a localized string similar to Zlib streams with a preset dictionary are not supported.. + /// + internal static string ZlibPresetDictionaryNotSupported { + get { + return ResourceManager.GetString("ZlibPresetDictionaryNotSupported", resourceCulture); + } + } } } diff --git a/Yubico.YubiKey/src/Resources/ExceptionMessages.resx b/Yubico.YubiKey/src/Resources/ExceptionMessages.resx index fcee6f050..3b6911e76 100644 --- a/Yubico.YubiKey/src/Resources/ExceptionMessages.resx +++ b/Yubico.YubiKey/src/Resources/ExceptionMessages.resx @@ -916,4 +916,49 @@ Key agreement receipts do not match + + Error generating key pair: {0} + + + Certificate data too short to determine compression format. 
+ + + Could not detect compression format. + + + Decompressed data length {0} does not match expected length {1} from GIDS header. + + + The stream does not support writing. + + + The stream does not support reading. + + + Invalid CompressionMode value. + + + Reading is not supported on compression streams. + + + Writing is not supported on decompression streams. + + + CopyTo is not supported on compression streams. + + + CopyToAsync is not supported on compression streams. + + + Unexpected end of stream while reading zlib header. + + + Invalid zlib header checksum. + + + Unsupported zlib compression method: {0}. Only deflate (8) is supported. + + + Zlib streams with a preset dictionary are not supported. + \ No newline at end of file diff --git a/Yubico.YubiKey/src/Yubico.YubiKey.csproj b/Yubico.YubiKey/src/Yubico.YubiKey.csproj index b57d47f11..c1a2b0dd7 100644 --- a/Yubico.YubiKey/src/Yubico.YubiKey.csproj +++ b/Yubico.YubiKey/src/Yubico.YubiKey.csproj @@ -104,14 +104,14 @@ limitations under the License. --> - - - + + + all runtime; build; native; contentfiles; analyzers; buildtransitive - - + + all @@ -123,10 +123,10 @@ limitations under the License. --> all runtime; build; native; contentfiles; analyzers; buildtransitive - + - + diff --git a/Yubico.YubiKey/src/Yubico/YubiKey/Cryptography/ZLibStream.cs b/Yubico.YubiKey/src/Yubico/YubiKey/Cryptography/ZLibStream.cs new file mode 100644 index 000000000..bda4904be --- /dev/null +++ b/Yubico.YubiKey/src/Yubico/YubiKey/Cryptography/ZLibStream.cs @@ -0,0 +1,598 @@ +// Copyright 2025 Yubico AB +// +// Licensed under the Apache License, Version 2.0 (the "License"). +// You may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// This implementation is based on RFC 1950 (ZLIB Compressed Data Format Specification). +// It handles the zlib framing (header + Adler-32 trailer) around raw deflate data, +// delegating the actual inflate/deflate work to the standard System.IO.Compression.DeflateStream. + +using System; +using System.IO; +using System.IO.Compression; +using System.Threading; +using System.Threading.Tasks; + +namespace Yubico.YubiKey.Cryptography +{ + /// + /// Provides methods and properties used to compress and decompress streams by + /// using the zlib data format specification (RFC 1950). + /// + /// + /// + /// The zlib format wraps raw DEFLATE compressed data with a 2-byte header + /// (CMF and FLG) and a 4-byte Adler-32 checksum trailer. This class handles + /// the framing and delegates the actual compression/decompression to + /// . + /// + /// + /// During compression, an Adler-32 checksum of all written bytes is + /// computed and appended as a 4-byte big-endian trailer when the stream is + /// disposed, producing a fully RFC 1950-compliant zlib stream. + /// + /// + /// During decompression, the 2-byte zlib header is validated (checksum, + /// compression method, and FDICT flag), but the 4-byte Adler-32 trailer is + /// not verified. Corruption that is not caught by the underlying DEFLATE + /// decoder will go undetected. + /// + /// + /// This implementation targets .NET Standard 2.0 / 2.1 / .NET Framework 4.7.2 + /// where System.IO.Compression.ZLibStream is not available. 
+ /// + /// + internal sealed class ZLibStream : Stream + { + /// + /// The default zlib CMF byte: deflate method (CM=8), window size 2^15 (CINFO=7). + /// + private const byte DefaultCmf = 0x78; + + /// + /// The FLG byte for default compression level. + /// Chosen so that (DefaultCmf * 256 + DefaultFlg) % 31 == 0. + /// + private const byte DefaultFlg = 0x9C; + + private readonly CompressionMode _mode; + private readonly bool _leaveOpen; + private DeflateStream? _deflateStream; + private bool _headerProcessed; + private bool _disposed; + + // For compression: tracks written data for Adler-32 computation + private uint _adlerA = 1; + private uint _adlerB; + + /// + /// Initializes a new instance of the class by using the + /// specified stream and compression mode. + /// + /// The stream to which compressed data is written or from + /// which data to decompress is read. + /// One of the enumeration values that indicates whether to + /// compress data to the stream or decompress data from the stream. + public ZLibStream(Stream stream, CompressionMode mode) + : this(stream, mode, leaveOpen: false) + { + } + + /// + /// Initializes a new instance of the class by using the + /// specified stream, compression mode, and whether to leave the stream open. + /// + /// The stream to which compressed data is written or from + /// which data to decompress is read. + /// One of the enumeration values that indicates whether to + /// compress data to the stream or decompress data from the stream. + /// to leave the stream object open + /// after disposing the object; otherwise, + /// . + /// is + /// . + /// is + /// and the stream does not support + /// reading, or is + /// and the stream does not support writing. + public ZLibStream(Stream stream, CompressionMode mode, bool leaveOpen) + { + BaseStream = stream ?? 
throw new ArgumentNullException(nameof(stream)); + _mode = mode; + _leaveOpen = leaveOpen; + + if (mode == CompressionMode.Compress) + { + if (!stream.CanWrite) + { + throw new ArgumentException(ExceptionMessages.StreamDoesNotSupportWriting, nameof(stream)); + } + } + else if (mode == CompressionMode.Decompress) + { + if (!stream.CanRead) + { + throw new ArgumentException(ExceptionMessages.StreamDoesNotSupportReading, nameof(stream)); + } + } + else + { + throw new ArgumentException(ExceptionMessages.InvalidCompressionModeValue, nameof(mode)); + } + } + + /// + /// Initializes a new instance of the class by using the + /// specified stream and compression level. + /// + /// The stream to which compressed data is written. + /// One of the enumeration values that indicates + /// whether to emphasize speed or compression efficiency when compressing data. + public ZLibStream(Stream stream, CompressionLevel compressionLevel) + : this(stream, compressionLevel, leaveOpen: false) + { + } + + /// + /// Initializes a new instance of the class by using the + /// specified stream, compression level, and whether to leave the stream open. + /// + /// The stream to which compressed data is written. + /// One of the enumeration values that indicates + /// whether to emphasize speed or compression efficiency when compressing data. + /// to leave the stream object open + /// after disposing the object; otherwise, + /// . + public ZLibStream(Stream stream, CompressionLevel compressionLevel, bool leaveOpen) + { + BaseStream = stream ?? 
throw new ArgumentNullException(nameof(stream));
+
+            if (!stream.CanWrite)
+            {
+                throw new ArgumentException(ExceptionMessages.StreamDoesNotSupportWriting, nameof(stream));
+            }
+
+            _mode = CompressionMode.Compress;
+            _leaveOpen = leaveOpen;
+
+            // Write the zlib header immediately
+            WriteZLibHeader(compressionLevel);
+
+            _deflateStream = new DeflateStream(stream, compressionLevel, leaveOpen: true);
+            _headerProcessed = true;
+        }
+
+        ///
+        public override bool CanRead => !_disposed && _mode == CompressionMode.Decompress;
+
+        ///
+        public override bool CanWrite => !_disposed && _mode == CompressionMode.Compress;
+
+        ///
+        public override bool CanSeek => false;
+
+        ///
+        public override long Length => throw new NotSupportedException();
+
+        ///
+        public override long Position
+        {
+            get => throw new NotSupportedException();
+            set => throw new NotSupportedException();
+        }
+
+        ///
+        /// Gets a reference to the underlying stream.
+        ///
+        public Stream BaseStream { get; }
+
+        ///
+        public override int Read(byte[] buffer, int offset, int count)
+        {
+            ThrowIfDisposed();
+
+            if (_mode != CompressionMode.Decompress)
+            {
+                throw new InvalidOperationException(ExceptionMessages.ReadingNotSupportedOnCompressionStreams);
+            }
+
+            EnsureDecompressionInitialized();
+
+            return _deflateStream!.Read(buffer, offset, count);
+        }
+
+        ///
+        public override Task<int> ReadAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken)
+        {
+            ThrowIfDisposed();
+
+            if (_mode != CompressionMode.Decompress)
+            {
+                throw new InvalidOperationException(ExceptionMessages.ReadingNotSupportedOnCompressionStreams);
+            }
+
+            EnsureDecompressionInitialized();
+
+            return _deflateStream!.ReadAsync(buffer, offset, count, cancellationToken);
+        }
+
+        ///
+        public override int ReadByte()
+        {
+            ThrowIfDisposed();
+
+            if (_mode != CompressionMode.Decompress)
+            {
+                throw new InvalidOperationException(ExceptionMessages.ReadingNotSupportedOnCompressionStreams);
+            }
+
+            EnsureDecompressionInitialized();
+
+            return _deflateStream!.ReadByte();
+        }
+
+        ///
+        public override void Write(byte[] buffer, int offset, int count)
+        {
+            ThrowIfDisposed();
+
+            if (_mode != CompressionMode.Compress)
+            {
+                throw new InvalidOperationException(ExceptionMessages.WritingNotSupportedOnDecompressionStreams);
+            }
+
+            EnsureCompressionInitialized();
+
+            // Track uncompressed data for Adler-32
+            UpdateAdler32(buffer, offset, count);
+
+            _deflateStream!.Write(buffer, offset, count);
+        }
+
+        ///
+        public override Task WriteAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken)
+        {
+            ThrowIfDisposed();
+
+            if (_mode != CompressionMode.Compress)
+            {
+                throw new InvalidOperationException(ExceptionMessages.WritingNotSupportedOnDecompressionStreams);
+            }
+
+            EnsureCompressionInitialized();
+
+            // Track uncompressed data for Adler-32
+            UpdateAdler32(buffer, offset, count);
+
+            return _deflateStream!.WriteAsync(buffer, offset, count, cancellationToken);
+        }
+
+#if NETSTANDARD2_1_OR_GREATER
+        ///
+        public override ValueTask<int> ReadAsync(Memory<byte> buffer, CancellationToken cancellationToken = default)
+        {
+            ThrowIfDisposed();
+
+            if (_mode != CompressionMode.Decompress)
+            {
+                throw new InvalidOperationException(ExceptionMessages.ReadingNotSupportedOnCompressionStreams);
+            }
+
+            EnsureDecompressionInitialized();
+
+            return _deflateStream!.ReadAsync(buffer, cancellationToken);
+        }
+
+        ///
+        public override ValueTask WriteAsync(ReadOnlyMemory<byte> buffer, CancellationToken cancellationToken = default)
+        {
+            ThrowIfDisposed();
+
+            if (_mode != CompressionMode.Compress)
+            {
+                throw new InvalidOperationException(ExceptionMessages.WritingNotSupportedOnDecompressionStreams);
+            }
+
+            EnsureCompressionInitialized();
+
+            // Track uncompressed data for Adler-32
+            if (!buffer.IsEmpty)
+            {
+                byte[] temp = buffer.ToArray();
+                UpdateAdler32(temp, 0, temp.Length);
+            }
+
+            return _deflateStream!.WriteAsync(buffer, cancellationToken);
+        }
+#endif
+
+        ///
+        public override void Flush()
+        {
ThrowIfDisposed(); + _deflateStream?.Flush(); + } + + /// + public override Task FlushAsync(CancellationToken cancellationToken) + { + ThrowIfDisposed(); + + if (_deflateStream != null) + { + return _deflateStream.FlushAsync(cancellationToken); + } + + return Task.CompletedTask; + } + + /// + public override long Seek(long offset, SeekOrigin origin) => + throw new NotSupportedException(); + + /// + public override void SetLength(long value) => + throw new NotSupportedException(); + +#if NETSTANDARD2_1_OR_GREATER + /// + public override void CopyTo(Stream destination, int bufferSize) + { + ThrowIfDisposed(); + + if (_mode != CompressionMode.Decompress) + { + throw new InvalidOperationException(ExceptionMessages.CopyToNotSupportedOnCompressionStreams); + } + + EnsureDecompressionInitialized(); + + _deflateStream!.CopyTo(destination, bufferSize); + } +#endif + + /// + public override Task CopyToAsync(Stream destination, int bufferSize, CancellationToken cancellationToken) + { + ThrowIfDisposed(); + + if (_mode != CompressionMode.Decompress) + { + throw new InvalidOperationException(ExceptionMessages.CopyToAsyncNotSupportedOnCompressionStreams); + } + + EnsureDecompressionInitialized(); + + return _deflateStream!.CopyToAsync(destination, bufferSize, cancellationToken); + } + + /// + protected override void Dispose(bool disposing) + { + if (!_disposed) + { + if (disposing) + { + if (_mode == CompressionMode.Compress && _deflateStream != null) + { + // Flush and close the deflate stream to finalize compressed data + _deflateStream.Dispose(); + _deflateStream = null; + + // Write the Adler-32 checksum trailer (big-endian) + WriteAdler32Trailer(); + } + else + { + _deflateStream?.Dispose(); + _deflateStream = null; + } + + if (!_leaveOpen) + { + BaseStream.Dispose(); + } + } + + _disposed = true; + } + + base.Dispose(disposing); + } + + /// + /// Reads and validates the 2-byte zlib header (RFC 1950 section 2.2). 
+ /// After validation, creates the internal + /// positioned at the start of the raw deflate data. + /// + /// The zlib header is invalid. + private void ReadAndValidateZLibHeader() + { + int cmf = BaseStream.ReadByte(); + int flg = BaseStream.ReadByte(); + + if (cmf == -1 || flg == -1) + { + throw new InvalidDataException(ExceptionMessages.UnexpectedEndOfZlibHeader); + } + + // Validate the header checksum: (CMF * 256 + FLG) must be divisible by 31 + if (((cmf * 256) + flg) % 31 != 0) + { + throw new InvalidDataException(ExceptionMessages.InvalidZlibHeaderChecksum); + } + + // Extract compression method (lower 4 bits of CMF) + int compressionMethod = cmf & 0x0F; + if (compressionMethod != 8) + { + throw new InvalidDataException( + string.Format( + System.Globalization.CultureInfo.CurrentCulture, + ExceptionMessages.UnsupportedZlibCompressionMethod, + compressionMethod)); + } + + // Check FDICT flag (bit 5 of FLG) - preset dictionary not supported + bool hasPresetDictionary = (flg & 0x20) != 0; + if (hasPresetDictionary) + { + throw new InvalidDataException(ExceptionMessages.ZlibPresetDictionaryNotSupported); + } + } + + /// + /// Writes the 2-byte zlib header to the base stream. + /// + private void WriteZLibHeader(CompressionLevel compressionLevel) + { + byte cmf = DefaultCmf; + byte flg; + + // Choose FLEVEL based on compression level and ensure header checksum is valid + switch (compressionLevel) + { + case CompressionLevel.NoCompression: + // FLEVEL = 0 (compressor used fastest algorithm) + flg = ComputeFlg(cmf, 0); + break; + case CompressionLevel.Fastest: + // FLEVEL = 1 (compressor used fast algorithm) + flg = ComputeFlg(cmf, 1); + break; + default: + // FLEVEL = 2 (default) - covers Optimal and SmallestSize + flg = DefaultFlg; + break; + } + + BaseStream.WriteByte(cmf); + BaseStream.WriteByte(flg); + } + + /// + /// Computes the FLG byte given a CMF byte and desired FLEVEL (0-3). + /// Ensures that (CMF * 256 + FLG) % 31 == 0 per RFC 1950. 
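The FCHECK arithmetic used by `ComputeFlg` can be sketched outside .NET in a few lines of Python — an illustrative translation of RFC 1950's rule that `(CMF * 256 + FLG)` must be divisible by 31 (`compute_flg` is our name for this sketch, not SDK API):

```python
# RFC 1950 FLG construction: FLEVEL occupies bits 6-7, FDICT (bit 5) is 0,
# and FCHECK (bits 0-4) is chosen so (CMF * 256 + FLG) % 31 == 0.
def compute_flg(cmf: int, flevel: int) -> int:
    flg_base = (flevel & 0x03) << 6
    remainder = (cmf * 256 + flg_base) % 31
    fcheck = (31 - remainder) % 31
    return flg_base | fcheck

# With CMF = 0x78 (deflate, 32 KiB window), the four FLEVELs produce the
# familiar zlib header second bytes: 0x01, 0x5E, 0x9C, 0xDA.
assert [compute_flg(0x78, lvl) for lvl in range(4)] == [0x01, 0x5E, 0x9C, 0xDA]
assert all((0x78 * 256 + compute_flg(0x78, lvl)) % 31 == 0 for lvl in range(4))
```

This is why the class's `DefaultCmf`/`DefaultFlg` pair is `0x78 0x9C`: FLEVEL 2 (default compression) with the checksum bits filled in.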
+ /// + private static byte ComputeFlg(byte cmf, int flevel) + { + // FLG layout: FLEVEL (2 bits) | FDICT (1 bit, 0) | FCHECK (5 bits) + int flgBase = (flevel & 0x03) << 6; + int remainder = ((cmf * 256) + flgBase) % 31; + int fcheck = (31 - remainder) % 31; + + return (byte)(flgBase | fcheck); + } + + /// + /// Writes the 4-byte Adler-32 checksum trailer in big-endian byte order. + /// + private void WriteAdler32Trailer() + { + uint checksum = (_adlerB << 16) | _adlerA; + + BaseStream.WriteByte((byte)(checksum >> 24)); + BaseStream.WriteByte((byte)(checksum >> 16)); + BaseStream.WriteByte((byte)(checksum >> 8)); + BaseStream.WriteByte((byte)checksum); + } + + /// + /// Updates the running Adler-32 checksum with the given data. + /// + /// + /// Adler-32 is defined in RFC 1950 section 9. It consists of two 16-bit + /// checksums A and B: A = 1 + sum of all bytes, B = sum of all A values, + /// both modulo 65521. + /// + private void UpdateAdler32(byte[] buffer, int offset, int count) + { + const uint modAdler = 65521; + + for (int i = offset; i < offset + count; i++) + { + _adlerA = (_adlerA + buffer[i]) % modAdler; + _adlerB = (_adlerB + _adlerA) % modAdler; + } + } + + /// + /// Ensures the zlib header has been read and the internal DeflateStream + /// is initialized for decompression. + /// + private void EnsureDecompressionInitialized() + { + if (!_headerProcessed) + { + ReadAndValidateZLibHeader(); + _deflateStream = new DeflateStream(BaseStream, CompressionMode.Decompress, leaveOpen: true); + _headerProcessed = true; + } + } + + /// + /// Ensures the zlib header has been written and the internal DeflateStream + /// is initialized for compression. 
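The running checksum maintained by `UpdateAdler32` above follows the textbook RFC 1950 definition. A quick Python cross-check against the reference implementation in the standard library (a sketch, not SDK code):

```python
import zlib

# Adler-32 per RFC 1950 section 9: A = 1 + sum of all bytes, B = sum of the
# successive A values, both modulo 65521; the result packs B into the high
# 16 bits and A into the low 16 bits.
def adler32(data: bytes) -> int:
    MOD = 65521
    a, b = 1, 0
    for byte in data:
        a = (a + byte) % MOD
        b = (b + a) % MOD
    return (b << 16) | a

assert adler32(b"Wikipedia") == 0x11E60398  # well-known test vector
assert adler32(b"hello world") == zlib.adler32(b"hello world")
assert adler32(b"") == 1  # empty input: A = 1, B = 0
```

Note the same modulo-per-byte structure appears in both `UpdateAdler32` (streaming) and the static `ComputeAdler32` helper; taking the modulo every byte is slower than the batched approach real zlib uses, but it can never overflow a 32-bit accumulator.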
+ /// + private void EnsureCompressionInitialized() + { + if (!_headerProcessed) + { + // Default compression level header + BaseStream.WriteByte(DefaultCmf); + BaseStream.WriteByte(DefaultFlg); + _deflateStream = new DeflateStream(BaseStream, CompressionLevel.Optimal, leaveOpen: true); + _headerProcessed = true; + } + } + + private void ThrowIfDisposed() + { + if (_disposed) + { + throw new ObjectDisposedException(GetType().FullName); + } + } + + /// + /// Computes the Adler-32 checksum over an entire byte array. + /// + /// The data to compute the checksum for. + /// The 32-bit Adler-32 checksum value. + internal static uint ComputeAdler32(byte[] data) + { + return ComputeAdler32(data, 0, data.Length); + } + + /// + /// Computes the Adler-32 checksum over a segment of a byte array. + /// + /// The data to compute the checksum for. + /// The offset into the data to start from. + /// The number of bytes to include. + /// The 32-bit Adler-32 checksum value. + internal static uint ComputeAdler32(byte[] data, int offset, int count) + { + const uint modAdler = 65521; + uint a = 1; + uint b = 0; + + for (int i = offset; i < offset + count; i++) + { + a = (a + data[i]) % modAdler; + b = (b + a) % modAdler; + } + + return (b << 16) | a; + } + } +} diff --git a/Yubico.YubiKey/src/Yubico/YubiKey/Fido2/Fido2Session.cs b/Yubico.YubiKey/src/Yubico/YubiKey/Fido2/Fido2Session.cs index 8926c3c95..5055fe1a0 100644 --- a/Yubico.YubiKey/src/Yubico/YubiKey/Fido2/Fido2Session.cs +++ b/Yubico.YubiKey/src/Yubico/YubiKey/Fido2/Fido2Session.cs @@ -18,6 +18,7 @@ using Microsoft.Extensions.Logging; using Yubico.Core.Logging; using Yubico.YubiKey.Fido2.Commands; +using Yubico.YubiKey.Scp; namespace Yubico.YubiKey.Fido2 { @@ -218,6 +219,7 @@ public ReadOnlyMemory? AuthenticatorCredStoreState /// YubiKey. /// /// + /// /// Because this class implements IDisposable, use the using keyword. 
For example,
 ///
 /// IYubiKeyDevice yubiKeyToUse = SelectYubiKey();
@@ -226,17 +228,42 @@ public ReadOnlyMemory<byte>? AuthenticatorCredStoreState
 ///     /* Perform FIDO2 operations. */
 /// }
 ///
+///
+/// To establish an SCP-protected FIDO2 session:
+///
+/// using (var fido2 = new Fido2Session(yubiKeyToUse, keyParameters: Scp03KeyParameters.DefaultKey))
+/// {
+///     /* All FIDO2 commands are encrypted via SCP. */
+/// }
+///
+///
+/// Transport notes for FIDO2 over SCP: On YubiKey firmware 5.8 and later, FIDO2 is
+/// available over both HID and USB CCID (SmartCard), so SCP works over USB as well as NFC.
+/// On earlier firmware, FIDO2 communicates only over HID on USB, which does not support SCP
+/// (a SmartCard-layer protocol). Over NFC, all firmware versions expose FIDO2 via SmartCard.
+///
 ///
 ///
 /// The object that represents the actual YubiKey on which the FIDO2 operations should be performed.
 ///
-/// If supplied, will be used for credential management read-only operations
+/// If supplied, will be used for credential management read-only operations.
+///
+///
+/// Optional parameters for establishing a Secure Channel Protocol (SCP) connection.
+/// When provided, all communication with the YubiKey will be encrypted and authenticated
+/// using the specified SCP protocol (e.g., SCP03 or SCP11). On firmware prior to 5.8, this
+/// requires an NFC connection. On firmware 5.8+, SCP is also supported over USB.
 ///
 ///
 /// The argument is null.
 ///
-public Fido2Session(IYubiKeyDevice yubiKey, ReadOnlyMemory<byte>? persistentPinUvAuthToken = null)
-    : base(Log.GetLogger(), yubiKey, YubiKeyApplication.Fido2, keyParameters: null)
+public Fido2Session(
+    IYubiKeyDevice yubiKey,
+    ReadOnlyMemory<byte>? persistentPinUvAuthToken = null,
+    ScpKeyParameters?
keyParameters = null) + : base(Log.GetLogger(), yubiKey, YubiKeyApplication.Fido2, keyParameters) { Guard.IsNotNull(yubiKey, nameof(yubiKey)); diff --git a/Yubico.YubiKey/src/Yubico/YubiKey/Piv/PivSession.KeyPairs.cs b/Yubico.YubiKey/src/Yubico/YubiKey/Piv/PivSession.KeyPairs.cs index 1a7decc40..b06cddcda 100644 --- a/Yubico.YubiKey/src/Yubico/YubiKey/Piv/PivSession.KeyPairs.cs +++ b/Yubico.YubiKey/src/Yubico/YubiKey/Piv/PivSession.KeyPairs.cs @@ -176,7 +176,11 @@ public IPublicKey GenerateKeyPair( if (response.Status != ResponseStatus.Success) { - throw new InvalidOperationException("Error generating key pair: " + response); + throw new InvalidOperationException( + string.Format( + CultureInfo.CurrentCulture, + ExceptionMessages.GenerateKeyPairFailed, + response)); } return PivKeyDecoder.CreatePublicKey(response.Data, keyType); @@ -605,14 +609,16 @@ public X509Certificate2 GetCertificate(byte slotNumber) try { - return new X509Certificate2(Decompress(certBytesCopy)); + byte[] decompressedData = DecompressWithFormatDetection(certBytesCopy); + return new X509Certificate2(decompressedData); } - catch (Exception) + catch (Exception ex) { throw new InvalidOperationException( string.Format( CultureInfo.CurrentCulture, - ExceptionMessages.FailedDecompressingCertificate)); + ExceptionMessages.FailedDecompressingCertificate), + ex); } } @@ -661,14 +667,113 @@ static private byte[] Compress(byte[] data) return compressedStream.ToArray(); } - static private byte[] Decompress(byte[] data) + static private byte[] Decompress(byte[] data, int offset = 0) { - using var dataStream = new MemoryStream(data); - using var decompressor = new GZipStream(dataStream, CompressionMode.Decompress); - using var decompressedStream = new MemoryStream(); - decompressor.CopyTo(decompressedStream); - - return decompressedStream.ToArray(); + using (var dataStream = new MemoryStream(data, offset, data.Length - offset)) + { + using (var decompressor = new GZipStream(dataStream, 
CompressionMode.Decompress)) + { + using (var decompressedStream = new MemoryStream()) + { + decompressor.CopyTo(decompressedStream); + return decompressedStream.ToArray(); + } + } + } + } + + /// + /// Decompresses a certificate by detecting the compression format. + /// + /// + /// + /// Attempts to decompress using the following formats in order of detection: + /// + /// + /// GZip (magic bytes 0x1F, 0x8B) — as specified by the PIV standard for + /// compressed certificates. + /// GIDS (magic bytes 0x01, 0x00 + 2-byte LE uncompressed length) + /// followed by zlib (RFC 1950) compressed data, as used by the GIDS smartcard + /// standard. + /// + /// + /// If none of the above formats are detected, throws an exception. + /// + /// + static private byte[] DecompressWithFormatDetection(byte[] data) + { + if (data.Length < 2) + { + throw new InvalidOperationException(ExceptionMessages.CertificateDataTooShortToDetectFormat); + } + + // Check for GZip magic bytes (0x1F, 0x8B) + if (data[0] == 0x1F && data[1] == 0x8B) + { + return Decompress(data); + } + + // Check for GIDS header (0x01, 0x00) followed by 2-byte LE length and zlib payload + if (data[0] == 0x01 && data[1] == 0x00 && data.Length >= 6) + { + return DecompressGids(data); + } + + throw new InvalidOperationException(ExceptionMessages.CouldNotDetectCompressionFormat); + } + + /// + /// Decompresses GIDS-formatted data. + /// + /// + /// + /// The GIDS format uses a 4-byte header: + /// + /// + /// Bytes 0–1: Magic prefix (0x01, 0x00). + /// Bytes 2–3: Expected uncompressed data length in little-endian byte order. + /// + /// + /// After the 4-byte header, the payload is zlib (RFC 1950) compressed data. + /// The decompressed length is validated against the expected length from the header. 
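The GIDS container layout described above (2-byte magic, 2-byte little-endian length, zlib payload) can be exercised end to end with Python's zlib module. This is a round-trip sketch of the format, not SDK code; `decompress_gids` is an illustrative name:

```python
import zlib

# Parse a GIDS compressed-certificate container: bytes 0-1 are the magic
# prefix 0x01 0x00, bytes 2-3 the expected uncompressed length (little-
# endian), and the remainder is zlib (RFC 1950) compressed data.
def decompress_gids(data: bytes) -> bytes:
    if len(data) < 6 or data[0] != 0x01 or data[1] != 0x00:
        raise ValueError("not a GIDS container")
    expected_length = data[2] | (data[3] << 8)
    decompressed = zlib.decompress(data[4:])
    if len(decompressed) != expected_length:
        raise ValueError("decompressed length mismatch")
    return decompressed

# Build a container and round-trip it.
payload = b"certificate bytes"
header = bytes([0x01, 0x00, len(payload) & 0xFF, len(payload) >> 8])
blob = header + zlib.compress(payload)
assert decompress_gids(blob) == payload
```

The length field doubles as an integrity check, which is why `DecompressGids` above compares the inflated size against the header value before returning.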
+ /// + /// + static private byte[] DecompressGids(byte[] data) + { + const int gidsHeaderLength = 4; + + int expectedLength = data[2] | (data[3] << 8); + byte[] decompressed = DecompressZlib(data, offset: gidsHeaderLength); + + if (decompressed.Length != expectedLength) + { + throw new InvalidOperationException( + string.Format( + CultureInfo.CurrentCulture, + ExceptionMessages.DecompressedLengthMismatch, + decompressed.Length, + expectedLength)); + } + + return decompressed; + } + + /// + /// Decompresses zlib (RFC 1950) data starting at the specified offset. + /// + static private byte[] DecompressZlib(byte[] data, int offset = 0) + { + using (var dataStream = new MemoryStream(data, offset, data.Length - offset)) + { + using (var decompressor = new ZLibStream(dataStream, CompressionMode.Decompress)) + { + using (var decompressedStream = new MemoryStream()) + { + decompressor.CopyTo(decompressedStream); + return decompressedStream.ToArray(); + } + } + } } } } diff --git a/Yubico.YubiKey/src/Yubico/YubiKey/YubiKeyFeatureExtensions.cs b/Yubico.YubiKey/src/Yubico/YubiKey/YubiKeyFeatureExtensions.cs index 021161bdd..b6593062d 100644 --- a/Yubico.YubiKey/src/Yubico/YubiKey/YubiKeyFeatureExtensions.cs +++ b/Yubico.YubiKey/src/Yubico/YubiKey/YubiKeyFeatureExtensions.cs @@ -94,7 +94,8 @@ public static bool HasFeature(this IYubiKeyDevice yubiKeyDevice, YubiKeyFeature || HasApplication(yubiKeyDevice, YubiKeyCapabilities.Oath) || HasApplication(yubiKeyDevice, YubiKeyCapabilities.OpenPgp) || HasApplication(yubiKeyDevice, YubiKeyCapabilities.Otp) - || HasApplication(yubiKeyDevice, YubiKeyCapabilities.YubiHsmAuth)), + || HasApplication(yubiKeyDevice, YubiKeyCapabilities.YubiHsmAuth) + || HasApplication(yubiKeyDevice, YubiKeyCapabilities.Fido2)), YubiKeyFeature.Scp03Oath => yubiKeyDevice.FirmwareVersion >= FirmwareVersion.V5_6_3 diff --git a/Yubico.YubiKey/tests/integration/Yubico.YubiKey.IntegrationTests.csproj 
b/Yubico.YubiKey/tests/integration/Yubico.YubiKey.IntegrationTests.csproj index 545079e4e..09767e9e2 100644 --- a/Yubico.YubiKey/tests/integration/Yubico.YubiKey.IntegrationTests.csproj +++ b/Yubico.YubiKey/tests/integration/Yubico.YubiKey.IntegrationTests.csproj @@ -31,18 +31,15 @@ limitations under the License. --> - - + + - + - - - - + diff --git a/Yubico.YubiKey/tests/integration/Yubico/YubiKey/ReclaimTimeoutTests.cs b/Yubico.YubiKey/tests/integration/Yubico/YubiKey/ReclaimTimeoutTests.cs index 6a5ec8d53..2126530b3 100644 --- a/Yubico.YubiKey/tests/integration/Yubico/YubiKey/ReclaimTimeoutTests.cs +++ b/Yubico.YubiKey/tests/integration/Yubico/YubiKey/ReclaimTimeoutTests.cs @@ -16,28 +16,15 @@ using System.Diagnostics; using System.Threading; using Microsoft.Extensions.Logging; -using Serilog; -using Serilog.Core; -using Serilog.Events; using Xunit; using Yubico.YubiKey.Fido2; using Yubico.YubiKey.Otp; using Yubico.YubiKey.Piv; using Yubico.YubiKey.TestUtilities; using Log = Yubico.Core.Logging.Log; -using Logger = Serilog.Core.Logger; namespace Yubico.YubiKey { - class ThreadIdEnricher : ILogEventEnricher - { - public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory) - { - logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty( - "ThreadId", Environment.CurrentManagedThreadId)); - } - } - public class ReclaimTimeoutTests { [Trait(TraitTypes.Category, TestCategories.Elevated)] @@ -47,16 +34,10 @@ public void SwitchingBetweenTransports_ForcesThreeSecondWait() // Force the old behavior even for newer YubiKeys. AppContext.SetSwitch(YubiKeyCompatSwitches.UseOldReclaimTimeoutBehavior, true); - using Logger? 
log = new LoggerConfiguration()
-                .Enrich.With(new ThreadIdEnricher())
-                .WriteTo.Console(
-                    outputTemplate: "{Timestamp:HH:mm:ss.fffffff} [{Level}] ({ThreadId}) {Message}{NewLine}{Exception}")
-                .CreateLogger();
-
             Log.ConfigureLoggerFactory(builder => builder
                 .ClearProviders()
-                .AddSerilog(log)
+                .AddSimpleConsole(opts => opts.TimestampFormat = "HH:mm:ss.fffffff ")
                 .AddFilter(level => level >= LogLevel.Information));

             // TEST ASSUMPTION: This test requires FIDO. On Windows, that means this test case must run elevated (admin).
diff --git a/Yubico.YubiKey/tests/integration/Yubico/YubiKey/Scp/Scp03Tests.cs b/Yubico.YubiKey/tests/integration/Yubico/YubiKey/Scp/Scp03Tests.cs
index 2161b0362..f29464a21 100644
--- a/Yubico.YubiKey/tests/integration/Yubico/YubiKey/Scp/Scp03Tests.cs
+++ b/Yubico.YubiKey/tests/integration/Yubico/YubiKey/Scp/Scp03Tests.cs
@@ -19,6 +19,7 @@
 using Xunit;
 using Yubico.Core.Tlv;
 using Yubico.YubiKey.Cryptography;
+using Yubico.YubiKey.Fido2;
 using Yubico.YubiKey.Piv;
 using Yubico.YubiKey.Piv.Commands;
 using Yubico.YubiKey.Scp03;
@@ -31,6 +32,28 @@ namespace Yubico.YubiKey.Scp
     public class Scp03Tests
     {
         private readonly ReadOnlyMemory<byte> _defaultPin = new byte[] { 0x31, 0x32, 0x33, 0x34, 0x35, 0x36 };
+        private readonly ReadOnlyMemory<byte> _fido2Pin = "11234567"u8.ToArray();
+
+        private bool Fido2KeyCollector(KeyEntryData data)
+        {
+            if (data.Request == KeyEntryRequest.Release)
+            {
+                return true;
+            }
+
+            if (data.Request == KeyEntryRequest.TouchRequest)
+            {
+                return true;
+            }
+
+            if (data.Request is KeyEntryRequest.VerifyFido2Pin or KeyEntryRequest.SetFido2Pin)
+            {
+                data.SubmitValue(_fido2Pin.Span);
+                return true;
+            }
+
+            return false;
+        }

         public Scp03Tests()
         {
@@ -404,6 +427,133 @@ public void Scp03_PivSession_TryVerifyPinAndGetMetaData_Succeeds(
         }

+        [SkippableTheory(typeof(DeviceNotFoundException))]
+        [InlineData(StandardTestDevice.Fw5, Transport.NfcSmartCard)]
+        [InlineData(StandardTestDevice.Fw5, Transport.UsbSmartCard)]
+        public void
Scp03_Fido2Session_GetAuthenticatorInfo_Succeeds( + StandardTestDevice desiredDeviceType, + Transport transport) + { + var testDevice = GetDevice(desiredDeviceType, transport); + Assert.True(testDevice.FirmwareVersion >= FirmwareVersion.V5_3_0); + Assert.True(testDevice.HasFeature(YubiKeyFeature.Scp03)); + + // FIDO2 over CCID requires firmware 5.8+. Over NFC, all applets are + // selectable via SmartCard. Over USB, FIDO2 is available on CCID + // starting with firmware 5.8; older keys only expose FIDO2 over HID. + if (transport == Transport.UsbSmartCard) + { + Skip.IfNot( + testDevice.FirmwareVersion >= FirmwareVersion.V5_8_0, + "FIDO2 over USB CCID requires firmware 5.8+"); + } + else + { + Skip.IfNot( + testDevice.AvailableNfcCapabilities.HasFlag(YubiKeyCapabilities.Fido2), + "FIDO2 is not available over NFC on this device"); + } + + using var fido2Session = new Fido2Session(testDevice, keyParameters: Scp03KeyParameters.DefaultKey); + + var info = fido2Session.AuthenticatorInfo; + Assert.NotNull(info); + Assert.NotEmpty(info.Versions); + } + + [SkippableTheory(typeof(DeviceNotFoundException))] + [InlineData(StandardTestDevice.Fw5, Transport.UsbSmartCard)] + public void Scp03_Fido2Session_MakeCredential_Over_UsbCcid_Succeeds( + StandardTestDevice desiredDeviceType, + Transport transport) + { + var testDevice = GetDevice(desiredDeviceType, transport); + Assert.True(testDevice.HasFeature(YubiKeyFeature.Scp03)); + + Skip.IfNot( + testDevice.FirmwareVersion >= FirmwareVersion.V5_8_0, + "FIDO2 over USB CCID requires firmware 5.8+"); + + using var fido2Session = new Fido2Session(testDevice, keyParameters: Scp03KeyParameters.DefaultKey); + Assert.Equal("ScpConnection", fido2Session.Connection.GetType().Name); + + fido2Session.KeyCollector = Fido2KeyCollector; + + // Ensure PIN is set and verify it + var pinOption = fido2Session.AuthenticatorInfo.GetOptionValue(AuthenticatorOptions.clientPin); + if (pinOption == OptionValue.False) + { + fido2Session.TrySetPin(_fido2Pin); 
+ } + else if (fido2Session.AuthenticatorInfo.ForcePinChange == true) + { + Skip.If(true, "Key requires PIN change — cannot test MakeCredential in this state"); + } + + bool verified; + try + { + verified = fido2Session.TryVerifyPin( + _fido2Pin, + permissions: null, + relyingPartyId: null, + retriesRemaining: out _, + rebootRequired: out _); + } + catch (Fido2.Fido2Exception) + { + verified = false; + } + + Skip.IfNot(verified, "PIN verification failed — key may have a different PIN set. Reset FIDO2 app to use default test PIN."); + + // MakeCredential — requires touch + var rp = new RelyingParty("scp03-ccid-test.yubico.com"); + var userId = new UserEntity(new byte[] { 0x01, 0x02, 0x03 }) + { + Name = "scp03-ccid-test", + DisplayName = "SCP03 CCID Test" + }; + + var mcParams = new MakeCredentialParameters(rp, userId) + { + ClientDataHash = new byte[] + { + 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, + 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, + 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, + 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38 + } + }; + + var mcData = fido2Session.MakeCredential(mcParams); + Assert.True(mcData.VerifyAttestation(mcParams.ClientDataHash)); + } + + [SkippableTheory(typeof(DeviceNotFoundException))] + [InlineData(StandardTestDevice.Fw5, Transport.UsbSmartCard)] + public void Scp03_Fido2Session_Pre58_UsbCcid_Skips_Gracefully( + StandardTestDevice desiredDeviceType, + Transport transport) + { + var testDevice = GetDevice(desiredDeviceType, transport); + + if (testDevice.FirmwareVersion >= FirmwareVersion.V5_8_0) + { + // On 5.8+, FIDO2 over CCID should work — verify it does + using var session = new Fido2Session(testDevice, keyParameters: Scp03KeyParameters.DefaultKey); + Assert.NotNull(session.AuthenticatorInfo); + } + else + { + // On pre-5.8, FIDO2 AID SELECT over CCID should fail with ApduException (0x6A82) + Assert.ThrowsAny(() => + { + using var session = new Fido2Session(testDevice, keyParameters: 
Scp03KeyParameters.DefaultKey); + }); + } + } + [SkippableTheory(typeof(DeviceNotFoundException))] [InlineData(StandardTestDevice.Fw5, Transport.UsbSmartCard)] [InlineData(StandardTestDevice.Fw5Fips, Transport.UsbSmartCard)] diff --git a/Yubico.YubiKey/tests/integration/Yubico/YubiKey/Scp/Scp11Tests.cs b/Yubico.YubiKey/tests/integration/Yubico/YubiKey/Scp/Scp11Tests.cs index 3d9af2c89..72830e0c4 100644 --- a/Yubico.YubiKey/tests/integration/Yubico/YubiKey/Scp/Scp11Tests.cs +++ b/Yubico.YubiKey/tests/integration/Yubico/YubiKey/Scp/Scp11Tests.cs @@ -25,6 +25,7 @@ using Yubico.Core.Devices.Hid; using Yubico.Core.Tlv; using Yubico.YubiKey.Cryptography; +using Yubico.YubiKey.Fido2; using Yubico.YubiKey.Oath; using Yubico.YubiKey.Otp; using Yubico.YubiKey.Piv; @@ -131,6 +132,43 @@ public void Scp11b_App_OtpSession_Operations_Succeeds( configObj.Execute(); } + [SkippableTheory(typeof(DeviceNotFoundException))] + [InlineData(StandardTestDevice.Fw5, Transport.NfcSmartCard)] + [InlineData(StandardTestDevice.Fw5, Transport.UsbSmartCard)] + [InlineData(StandardTestDevice.Fw5Fips, Transport.NfcSmartCard)] + [InlineData(StandardTestDevice.Fw5Fips, Transport.UsbSmartCard)] + public void Scp11b_App_Fido2Session_GetAuthenticatorInfo_Succeeds( + StandardTestDevice desiredDeviceType, + Transport transport) + { + var testDevice = GetDevice(desiredDeviceType, transport); + + // FIDO2 over CCID requires firmware 5.8+. Over NFC, all applets are + // selectable via SmartCard. Over USB, FIDO2 is available on CCID + // starting with firmware 5.8; older keys only expose FIDO2 over HID. 
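The transport/firmware gating that the tests above apply before opening a `Fido2Session` over a smart-card connection can be sketched in Python. This is a hypothetical illustration, not SDK code: the function name, the tuple firmware encoding, and the string transport names are assumptions for the sketch; the rule itself (USB CCID needs firmware 5.8+, NFC depends on the key's advertised capabilities) comes from the comments in the tests.

```python
# Hypothetical sketch of the skip logic used by the integration tests above.
# Firmware versions are modeled as (major, minor, patch) tuples for comparison.
def fido2_smartcard_available(transport: str, firmware: tuple, nfc_has_fido2: bool) -> bool:
    """Should the FIDO2 applet be selectable over this SmartCard transport?"""
    if transport == "UsbSmartCard":
        # FIDO2 over USB CCID requires firmware 5.8+
        return firmware >= (5, 8, 0)
    # Over NFC, all applets are selectable if the key advertises FIDO2 over NFC
    return nfc_has_fido2

assert fido2_smartcard_available("UsbSmartCard", (5, 8, 0), True)
assert not fido2_smartcard_available("UsbSmartCard", (5, 7, 2), True)
assert fido2_smartcard_available("NfcSmartCard", (5, 4, 3), True)
```

On pre-5.8 keys over USB CCID the tests expect the FIDO2 AID SELECT to fail, which is exactly what `Scp03_Fido2Session_Pre58_UsbCcid_Skips_Gracefully` verifies.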
+ if (transport == Transport.UsbSmartCard) + { + Skip.IfNot( + testDevice.FirmwareVersion >= FirmwareVersion.V5_8_0, + "FIDO2 over USB CCID requires firmware 5.8+"); + } + else + { + Skip.IfNot( + testDevice.AvailableNfcCapabilities.HasFlag(YubiKeyCapabilities.Fido2), + "FIDO2 is not available over NFC on this device"); + } + + var keyReference = new KeyReference(ScpKeyIds.Scp11B, 0x1); + var keyParams = Get_Scp11b_SecureConnection_Parameters(testDevice, keyReference); + + using var session = new Fido2Session(testDevice, keyParameters: keyParams); + + var info = session.AuthenticatorInfo; + Assert.NotNull(info); + Assert.NotEmpty(info.Versions); + } + [SkippableTheory(typeof(DeviceNotFoundException))] [InlineData(StandardTestDevice.Fw5)] [InlineData(StandardTestDevice.Fw5Fips)] diff --git a/Yubico.YubiKey/tests/sandbox/Plugins/EventManagerPlugin.cs b/Yubico.YubiKey/tests/sandbox/Plugins/EventManagerPlugin.cs index 96c580b76..2d607edd3 100644 --- a/Yubico.YubiKey/tests/sandbox/Plugins/EventManagerPlugin.cs +++ b/Yubico.YubiKey/tests/sandbox/Plugins/EventManagerPlugin.cs @@ -14,23 +14,10 @@ using System; using Microsoft.Extensions.Logging; -using Serilog; -using Serilog.Core; -using Serilog.Events; using Log = Yubico.Core.Logging.Log; -using Logger = Serilog.Core.Logger; namespace Yubico.YubiKey.TestApp.Plugins { - class ThreadIdEnricher : ILogEventEnricher - { - public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory) - { - logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty( - "ThreadId", Environment.CurrentManagedThreadId)); - } - } - internal class EventManagerPlugin : PluginBase { public override string Name => "EventManager"; @@ -40,16 +27,11 @@ public EventManagerPlugin(IOutput output) : base(output) { } public override bool Execute() { - using Logger? 
log = new LoggerConfiguration() - .Enrich.With(new ThreadIdEnricher()) - .WriteTo.Console( - outputTemplate: "[{Level}] ({ThreadId}) {Message}{NewLine}{Exception}") - .CreateLogger(); - Log.ConfigureLoggerFactory(builder => builder - .AddSerilog(log) + .AddSimpleConsole() .AddFilter(level => level >= LogLevel.Information)); + YubiKeyDeviceListener.Instance.Arrived += (s, e) => { Console.WriteLine("YubiKey arrived:"); diff --git a/Yubico.YubiKey/tests/sandbox/Plugins/Fido2CcidProbePlugin.cs b/Yubico.YubiKey/tests/sandbox/Plugins/Fido2CcidProbePlugin.cs new file mode 100644 index 000000000..7e2e81caa --- /dev/null +++ b/Yubico.YubiKey/tests/sandbox/Plugins/Fido2CcidProbePlugin.cs @@ -0,0 +1,151 @@ +// Quick probe: test FIDO2 AID selection over USB CCID (SmartCard) on 5.8+ keys +// Tests: plain CCID, SCP03, and SCP11b — all over USB SmartCard +using System; +using System.Collections.Generic; +using System.Linq; +using System.Security.Cryptography; +using System.Security.Cryptography.X509Certificates; +using Yubico.Core.Devices.SmartCard; +using Yubico.YubiKey.Cryptography; +using Yubico.YubiKey.DeviceExtensions; +using Yubico.YubiKey.Fido2; +using Yubico.YubiKey.Fido2.Commands; +using Yubico.YubiKey.Scp; + +namespace Yubico.YubiKey.TestApp.Plugins +{ + internal class Fido2CcidProbePlugin : PluginBase + { + public override string Name => "Fido2CcidProbe"; + public override string Description => "Probes FIDO2 over USB CCID (SmartCard) on 5.8+ keys — SCP03 and SCP11b"; + + public Fido2CcidProbePlugin(IOutput output) : base(output) + { + Parameters["command"].Description = "[serial] Serial number of the YubiKey to test (e.g. 125)"; + } + + public override bool Execute() + { + int? targetSerial = string.IsNullOrEmpty(Command) ? 
null : int.Parse(Command); + + Output.WriteLine("=== FIDO2 over USB CCID Probe (SCP03 + SCP11b) ==="); + Output.WriteLine(); + + var allKeys = YubiKeyDevice.FindAll(); + Output.WriteLine($"Found {allKeys.Count()} YubiKey(s) total"); + + foreach (var key in allKeys) + { + Output.WriteLine($" Serial: {key.SerialNumber}, FW: {key.FirmwareVersion}, Transports: {key.AvailableTransports}"); + Output.WriteLine($" USB Capabilities: {key.AvailableUsbCapabilities}"); + Output.WriteLine($" HasSmartCard: {((YubiKeyDevice)key).HasSmartCard}, IsNfc: {((YubiKeyDevice)key).IsNfcDevice}"); + } + + var targetKey = allKeys.FirstOrDefault(k => + targetSerial == null || k.SerialNumber == targetSerial); + + if (targetKey == null) + { + Output.WriteLine($"No YubiKey found{(targetSerial.HasValue ? $" with serial {targetSerial}" : "")}"); + return false; + } + + Output.WriteLine(); + Output.WriteLine($"--- Target: Serial={targetKey.SerialNumber}, FW={targetKey.FirmwareVersion} ---"); + + var device = (YubiKeyDevice)targetKey; + Output.WriteLine($"FIDO2 in AvailableUsbCapabilities: {targetKey.AvailableUsbCapabilities.HasFlag(YubiKeyCapabilities.Fido2)}"); + + // ---- Test 1: Standard HID path ---- + RunTest("Test 1: Standard Connect(Fido2) — HID path", () => + { + using var conn = targetKey.Connect(YubiKeyApplication.Fido2); + Output.WriteLine($" Connection type: {conn.GetType().Name}"); + var info = conn.SendCommand(new GetInfoCommand()).GetData(); + Output.WriteLine($" AAGUID: {BitConverter.ToString(info.Aaguid.ToArray())}"); + Output.WriteLine($" Versions: {string.Join(", ", info.Versions ?? 
Array.Empty<string>())}");
+            });
+
+            // ---- Test 2: Direct SmartCard FIDO2 ----
+            RunTest("Test 2: Direct SmartCardConnection for FIDO2 over USB CCID", () =>
+            {
+                if (!device.HasSmartCard) { Output.WriteLine("  SKIPPED — no SmartCard interface"); return; }
+                var scDevice = device.GetSmartCardDevice();
+                Output.WriteLine($"  SmartCard path: {scDevice.Path}, IsNfc: {scDevice.IsNfcTransport()}");
+
+                using var scConn = new SmartCardConnection(scDevice, YubiKeyApplication.Fido2);
+                Output.WriteLine($"  SmartCardConnection created for FIDO2!");
+                var info = scConn.SendCommand(new GetInfoCommand()).GetData();
+                Output.WriteLine($"  AAGUID: {BitConverter.ToString(info.Aaguid.ToArray())}");
+                Output.WriteLine($"  Transports: {string.Join(", ", info.Transports ?? Array.Empty<string>())}");
+            });
+
+            // ---- Test 3: FIDO2 + SCP03 (default keys) over USB CCID ----
+            RunTest("Test 3: Fido2Session + SCP03 (DefaultKey) over USB CCID", () =>
+            {
+                using var session = new Fido2Session(targetKey, keyParameters: Scp03KeyParameters.DefaultKey);
+                Output.WriteLine($"  Connection type: {session.Connection.GetType().Name}");
+                Output.WriteLine($"  AAGUID: {BitConverter.ToString(session.AuthenticatorInfo.Aaguid.ToArray())}");
+                Output.WriteLine($"  Versions: {string.Join(", ", session.AuthenticatorInfo.Versions ??
Array.Empty<string>())}");
+            });
+
+            // ---- Test 4: FIDO2 + SCP11b over USB CCID ----
+            RunTest("Test 4: Fido2Session + SCP11b over USB CCID", () =>
+            {
+                // Step 1: Reset Security Domain to clean state
+                Output.WriteLine("  Resetting Security Domain...");
+                using (var sdSession = new SecurityDomainSession(targetKey))
+                {
+                    sdSession.Reset();
+                }
+                Output.WriteLine("  Security Domain reset OK");
+
+                // Step 2: Get SCP11b key parameters (generates key ref on device)
+                var keyReference = new KeyReference(ScpKeyIds.Scp11B, 0x1);
+                Output.WriteLine($"  Getting SCP11b certificates for {keyReference}...");
+
+                IReadOnlyCollection<X509Certificate2> certs;
+                using (var sdSession = new SecurityDomainSession(targetKey))
+                {
+                    certs = sdSession.GetCertificates(keyReference);
+                }
+
+                var leaf = certs.Last();
+                var ecDsaPublicKey = leaf.PublicKey.GetECDsaPublicKey()!;
+                var keyParams = new Scp11KeyParameters(
+                    keyReference,
+                    ECPublicKey.CreateFromParameters(ecDsaPublicKey.ExportParameters(false)));
+                Output.WriteLine($"  SCP11b key params created (leaf cert subject: {leaf.Subject})");
+
+                // Step 3: Open FIDO2 session with SCP11b
+                using var session = new Fido2Session(targetKey, keyParameters: keyParams);
+                Output.WriteLine($"  Connection type: {session.Connection.GetType().Name}");
+                Output.WriteLine($"  AAGUID: {BitConverter.ToString(session.AuthenticatorInfo.Aaguid.ToArray())}");
+                Output.WriteLine($"  Versions: {string.Join(", ", session.AuthenticatorInfo.Versions ??
Array.Empty<string>())}");
+            });
+
+            Output.WriteLine();
+            Output.WriteLine("=== Probe Complete ===");
+            return true;
+        }
+
+        private void RunTest(string name, Action action)
+        {
+            Output.WriteLine();
+            Output.WriteLine(name);
+            try
+            {
+                action();
+                Output.WriteLine($"  >>> PASS");
+            }
+            catch (Exception ex)
+            {
+                Output.WriteLine($"  >>> FAIL: {ex.GetType().Name}: {ex.Message}");
+                if (ex.InnerException != null)
+                {
+                    Output.WriteLine($"    Inner: {ex.InnerException.GetType().Name}: {ex.InnerException.Message}");
+                }
+            }
+        }
+    }
+}
diff --git a/Yubico.YubiKey/tests/sandbox/Program.cs b/Yubico.YubiKey/tests/sandbox/Program.cs
index bbc7856f1..ab8a33104 100644
--- a/Yubico.YubiKey/tests/sandbox/Program.cs
+++ b/Yubico.YubiKey/tests/sandbox/Program.cs
@@ -54,6 +54,7 @@ class Program : IOutput, IDisposable
             ["feature"] = (output) => new YubiKeyFeaturePlugin(output),
             ["david"] = (output) => new DavidPlugin(output),
             ["oath"] = (output) => new OathPlugin(output),
+            ["fido2ccid"] = (output) => new Fido2CcidProbePlugin(output),
         };
 #region IDisposable Implementation
diff --git a/Yubico.YubiKey/tests/sandbox/Yubico.YubiKey.TestApp.csproj b/Yubico.YubiKey/tests/sandbox/Yubico.YubiKey.TestApp.csproj
index 75edc8f11..58a603601 100644
--- a/Yubico.YubiKey/tests/sandbox/Yubico.YubiKey.TestApp.csproj
+++ b/Yubico.YubiKey/tests/sandbox/Yubico.YubiKey.TestApp.csproj
@@ -32,11 +32,8 @@ limitations under the License.
-->
- 
- 
- 
- 
- 
+ 
+ 
diff --git a/Yubico.YubiKey/tests/unit/Yubico.YubiKey.UnitTests.csproj b/Yubico.YubiKey/tests/unit/Yubico.YubiKey.UnitTests.csproj
index 2a00a00a8..22eef3e95 100644
--- a/Yubico.YubiKey/tests/unit/Yubico.YubiKey.UnitTests.csproj
+++ b/Yubico.YubiKey/tests/unit/Yubico.YubiKey.UnitTests.csproj
@@ -33,7 +33,7 @@ limitations under the License.
-->
- 
+ 
@@ -42,7 +42,7 @@ limitations under the License.
--> - + PreserveNewest diff --git a/Yubico.YubiKey/tests/unit/Yubico/YubiKey/Cryptography/ZLibStreamTests.cs b/Yubico.YubiKey/tests/unit/Yubico/YubiKey/Cryptography/ZLibStreamTests.cs new file mode 100644 index 000000000..e7e73d01e --- /dev/null +++ b/Yubico.YubiKey/tests/unit/Yubico/YubiKey/Cryptography/ZLibStreamTests.cs @@ -0,0 +1,574 @@ +// Copyright 2025 Yubico AB +// +// Licensed under the Apache License, Version 2.0 (the "License"). +// You may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +using System; +using System.IO; +using System.IO.Compression; +using System.Text; +using Xunit; + +namespace Yubico.YubiKey.Cryptography +{ + public class ZLibStreamTests + { + // "Hello, World!" compressed with zlib (RFC 1950). + private static readonly byte[] ZLibCompressedHelloWorld = + { + 0x78, 0x9C, 0xF3, 0x48, 0xCD, 0xC9, 0xC9, 0xD7, + 0x51, 0x08, 0xCF, 0x2F, 0xCA, 0x49, 0x51, 0x04, + 0x00, 0x20, 0x5E, 0x04, 0x8A + }; + private const string HelloWorldText = "Hello, World!"; + + [Fact] + public void Decompress_ValidZLibData_ReturnsOriginalData() + { + // Pure zlib (RFC 1950) data without any prefix. 
+ string hex = "789c8b2c4dcaf4ce2c5148cb2f5270cc4b29cacf4c5128492d2e5148492c4904009f2e0aa4"; + + byte[] data = Convert.FromHexString(hex); + + using var compressedStream = new MemoryStream(data); + using var zlibStream = new ZLibStream(compressedStream, CompressionMode.Decompress); + using var resultStream = new MemoryStream(); + + zlibStream.CopyTo(resultStream); + string result = Encoding.UTF8.GetString(resultStream.ToArray()); + + Assert.Equal("YubiKit for Android test data", result); + } + + [Fact] + public void Decompress_GidsFormat_StripsHeaderAndDecompresses() + { + // GIDS format: 4-byte header (01 00 = magic, 1D 00 = LE uncompressed length 29) + // followed by standard zlib (RFC 1950) data. + string hex = "01001d00789c8b2c4dcaf4ce2c5148cb2f5270cc4b29cacf4c5128492d2e5148492c4904009f2e0aa4"; + + byte[] data = Convert.FromHexString(hex); + + // Strip 4-byte GIDS header, then decompress the zlib payload + const int gidsHeaderLength = 4; + using var compressedStream = new MemoryStream(data, gidsHeaderLength, data.Length - gidsHeaderLength); + using var zlibStream = new ZLibStream(compressedStream, CompressionMode.Decompress); + using var resultStream = new MemoryStream(); + + zlibStream.CopyTo(resultStream); + string result = Encoding.UTF8.GetString(resultStream.ToArray()); + + Assert.Equal("YubiKit for Android test data", result); + } + + [Fact] + public void Decompress_ReadByteArray_ReturnsOriginalData() + { + using var compressedStream = new MemoryStream(ZLibCompressedHelloWorld); + using var zlibStream = new ZLibStream(compressedStream, CompressionMode.Decompress); + + byte[] buffer = new byte[256]; + int totalRead = 0; + int bytesRead; + + while ((bytesRead = zlibStream.Read(buffer, totalRead, buffer.Length - totalRead)) > 0) + { + totalRead += bytesRead; + } + + string result = Encoding.UTF8.GetString(buffer, 0, totalRead); + Assert.Equal(HelloWorldText, result); + } + + [Fact] + public void Decompress_ReadByte_ReturnsCorrectFirstByte() + { + using var 
compressedStream = new MemoryStream(ZLibCompressedHelloWorld);
+            using var zlibStream = new ZLibStream(compressedStream, CompressionMode.Decompress);
+
+            int firstByte = zlibStream.ReadByte();
+
+            Assert.Equal((int)'H', firstByte);
+        }
+
+        [Fact]
+        public void Compress_ThenDecompress_RoundTrips()
+        {
+            byte[] original = Encoding.UTF8.GetBytes("The quick brown fox jumps over the lazy dog.");
+
+            // Compress
+            byte[] compressed;
+            using (var compressedStream = new MemoryStream())
+            {
+                using (var zlibStream = new ZLibStream(compressedStream, CompressionLevel.Optimal, leaveOpen: true))
+                {
+                    zlibStream.Write(original, 0, original.Length);
+                }
+
+                compressed = compressedStream.ToArray();
+            }
+
+            // Verify zlib header is present
+            Assert.Equal(0x78, compressed[0]);
+
+            // Decompress
+            byte[] decompressed;
+            using (var compressedStream = new MemoryStream(compressed))
+            {
+                using (var zlibStream = new ZLibStream(compressedStream, CompressionMode.Decompress))
+                {
+                    using (var resultStream = new MemoryStream())
+                    {
+                        zlibStream.CopyTo(resultStream);
+                        decompressed = resultStream.ToArray();
+                    }
+                }
+            }
+
+            Assert.Equal(original, decompressed);
+        }
+
+        [Fact]
+        public void Compress_EmptyData_RoundTrips()
+        {
+            byte[] original = Array.Empty<byte>();
+
+            // Compress
+            byte[] compressed;
+            using (var compressedStream = new MemoryStream())
+            {
+                using (var zlibStream = new ZLibStream(compressedStream, CompressionLevel.Optimal, leaveOpen: true))
+                {
+                    zlibStream.Write(original, 0, original.Length);
+                }
+
+                compressed = compressedStream.ToArray();
+            }
+
+            // Decompress
+            byte[] decompressed;
+            using (var compressedStream = new MemoryStream(compressed))
+            {
+                using (var zlibStream = new ZLibStream(compressedStream, CompressionMode.Decompress))
+                {
+                    using (var resultStream = new MemoryStream())
+                    {
+                        zlibStream.CopyTo(resultStream);
+                        decompressed = resultStream.ToArray();
+                    }
+                }
+            }
+
+            Assert.Equal(original, decompressed);
+        }
+
+        [Fact]
+        public void Compress_LargeData_RoundTrips()
{
+            // Create a large repetitive payload (~10KB)
+            var sb = new StringBuilder();
+            for (int i = 0; i < 500; i++)
+            {
+                sb.AppendLine($"Line {i}: The quick brown fox jumps over the lazy dog.");
+            }
+
+            byte[] original = Encoding.UTF8.GetBytes(sb.ToString());
+
+            // Compress
+            byte[] compressed;
+            using (var compressedStream = new MemoryStream())
+            {
+                using (var zlibStream = new ZLibStream(compressedStream, CompressionLevel.Optimal, leaveOpen: true))
+                {
+                    zlibStream.Write(original, 0, original.Length);
+                }
+
+                compressed = compressedStream.ToArray();
+            }
+
+            // Should actually be smaller due to repetition
+            Assert.True(compressed.Length < original.Length);
+
+            // Decompress
+            byte[] decompressed;
+            using (var compressedStream = new MemoryStream(compressed))
+            {
+                using (var zlibStream = new ZLibStream(compressedStream, CompressionMode.Decompress))
+                {
+                    using (var resultStream = new MemoryStream())
+                    {
+                        zlibStream.CopyTo(resultStream);
+                        decompressed = resultStream.ToArray();
+                    }
+                }
+            }
+
+            Assert.Equal(original, decompressed);
+        }
+
+        [Fact]
+        public void Decompress_InvalidHeader_ThrowsInvalidDataException()
+        {
+            // Invalid zlib header — checksum fails
+            byte[] invalidData = { 0x78, 0x00, 0x00, 0x00 };
+
+            using var stream = new MemoryStream(invalidData);
+            using var zlibStream = new ZLibStream(stream, CompressionMode.Decompress);
+
+            Assert.Throws<InvalidDataException>(() => zlibStream.ReadByte());
+        }
+
+        [Fact]
+        public void Decompress_NonDeflateCompressionMethod_ThrowsInvalidDataException()
+        {
+            // CMF = 0x09 means compression method 9 (not deflate)
+            // FLG must satisfy (CMF * 256 + FLG) % 31 == 0
+            // 0x09 * 256 = 2304, 2304 % 31 = 10, so FLG = 31 - 10 = 21 = 0x15
+            byte[] invalidData = { 0x09, 0x15, 0x00, 0x00 };
+
+            using var stream = new MemoryStream(invalidData);
+            using var zlibStream = new ZLibStream(stream, CompressionMode.Decompress);
+
+            Assert.Throws<InvalidDataException>(() => zlibStream.ReadByte());
+        }
+
+        [Fact]
+        public void Decompress_TruncatedHeader_ThrowsInvalidDataException()
{
+            byte[] truncatedData = { 0x78 };
+
+            using var stream = new MemoryStream(truncatedData);
+            using var zlibStream = new ZLibStream(stream, CompressionMode.Decompress);
+
+            Assert.Throws<InvalidDataException>(() => zlibStream.ReadByte());
+        }
+
+        [Fact]
+        public void Constructor_NullStream_ThrowsArgumentNullException()
+        {
+#pragma warning disable CS8625 // Cannot convert null literal to non-nullable reference type.
+            Assert.Throws<ArgumentNullException>(() => new ZLibStream(null, CompressionMode.Decompress));
+#pragma warning restore CS8625
+        }
+
+        [Fact]
+        public void CanRead_DecompressMode_ReturnsTrue()
+        {
+            using var stream = new MemoryStream(ZLibCompressedHelloWorld);
+            using var zlibStream = new ZLibStream(stream, CompressionMode.Decompress);
+
+            Assert.True(zlibStream.CanRead);
+            Assert.False(zlibStream.CanWrite);
+            Assert.False(zlibStream.CanSeek);
+        }
+
+        [Fact]
+        public void CanWrite_CompressMode_ReturnsTrue()
+        {
+            using var stream = new MemoryStream();
+            using var zlibStream = new ZLibStream(stream, CompressionMode.Compress);
+
+            Assert.True(zlibStream.CanWrite);
+            Assert.False(zlibStream.CanRead);
+            Assert.False(zlibStream.CanSeek);
+        }
+
+        [Fact]
+        public void Write_InDecompressMode_ThrowsInvalidOperationException()
+        {
+            using var stream = new MemoryStream(ZLibCompressedHelloWorld);
+            using var zlibStream = new ZLibStream(stream, CompressionMode.Decompress);
+
+            Assert.Throws<InvalidOperationException>(() => zlibStream.Write(new byte[] { 1 }, 0, 1));
+        }
+
+        [Fact]
+        public void Read_InCompressMode_ThrowsInvalidOperationException()
+        {
+            using var stream = new MemoryStream();
+            using var zlibStream = new ZLibStream(stream, CompressionMode.Compress);
+
+            Assert.Throws<InvalidOperationException>(() => zlibStream.Read(new byte[1], 0, 1));
+        }
+
+        [Fact]
+        public void Seek_ThrowsNotSupportedException()
+        {
+            using var stream = new MemoryStream(ZLibCompressedHelloWorld);
+            using var zlibStream = new ZLibStream(stream, CompressionMode.Decompress);
+
+            Assert.Throws<NotSupportedException>(() => zlibStream.Seek(0, SeekOrigin.Begin));
+        }
+
+        [Fact]
+        public void
Length_ThrowsNotSupportedException()
+        {
+            using var stream = new MemoryStream(ZLibCompressedHelloWorld);
+            using var zlibStream = new ZLibStream(stream, CompressionMode.Decompress);
+
+            Assert.Throws<NotSupportedException>(() => _ = zlibStream.Length);
+        }
+
+        [Fact]
+        public void Dispose_ThenRead_ThrowsObjectDisposedException()
+        {
+            var stream = new MemoryStream(ZLibCompressedHelloWorld);
+            var zlibStream = new ZLibStream(stream, CompressionMode.Decompress);
+
+            zlibStream.Dispose();
+
+            Assert.Throws<ObjectDisposedException>(() => zlibStream.Read(new byte[1], 0, 1));
+        }
+
+        [Fact]
+        public void LeaveOpen_True_DoesNotDisposeBaseStream()
+        {
+            var stream = new MemoryStream(ZLibCompressedHelloWorld);
+            var zlibStream = new ZLibStream(stream, CompressionMode.Decompress, leaveOpen: true);
+
+            zlibStream.Dispose();
+
+            // Stream should still be accessible
+            Assert.True(stream.CanRead);
+        }
+
+        [Fact]
+        public void LeaveOpen_False_DisposesBaseStream()
+        {
+            var stream = new MemoryStream(ZLibCompressedHelloWorld);
+            var zlibStream = new ZLibStream(stream, CompressionMode.Decompress, leaveOpen: false);
+
+            zlibStream.Dispose();
+
+            // Stream should be disposed
+            Assert.False(stream.CanRead);
+        }
+
+        [Fact]
+        public void BaseStream_ReturnsUnderlyingStream()
+        {
+            var stream = new MemoryStream(ZLibCompressedHelloWorld);
+            using var zlibStream = new ZLibStream(stream, CompressionMode.Decompress);
+
+            Assert.Same(stream, zlibStream.BaseStream);
+        }
+
+        [Fact]
+        public void CompressedOutput_HasValidZLibHeader()
+        {
+            byte[] compressed;
+            using (var output = new MemoryStream())
+            {
+                using (var zlibStream = new ZLibStream(output, CompressionLevel.Optimal, leaveOpen: true))
+                {
+                    byte[] data = Encoding.UTF8.GetBytes("test");
+                    zlibStream.Write(data, 0, data.Length);
+                }
+
+                compressed = output.ToArray();
+            }
+
+            // Verify CMF byte: deflate (method 8), window size 15 (CINFO 7)
+            Assert.Equal(0x78, compressed[0]);
+
+            // Verify header checksum: (CMF * 256 + FLG) % 31 == 0
+            int headerCheck = (compressed[0] * 256) +
compressed[1];
+            Assert.Equal(0, headerCheck % 31);
+        }
+
+        [Fact]
+        public void ComputeAdler32_EmptyInput_ReturnsOne()
+        {
+            uint result = ZLibStream.ComputeAdler32(Array.Empty<byte>());
+
+            // For empty input, A=1, B=0, so Adler32 = (0 << 16) | 1 = 1
+            Assert.Equal(1u, result);
+        }
+
+        [Fact]
+        public void ComputeAdler32_KnownInput_ReturnsExpectedChecksum()
+        {
+            // "Wikipedia" Adler-32 is well-known: 0x11E60398
+            byte[] data = Encoding.ASCII.GetBytes("Wikipedia");
+            uint result = ZLibStream.ComputeAdler32(data);
+
+            Assert.Equal(0x11E60398u, result);
+        }
+
+        [Fact]
+        public void ComputeAdler32_WithOffset_ComputesCorrectly()
+        {
+            byte[] data = Encoding.ASCII.GetBytes("XXWikipediaYY");
+            // Offset 2, count 9 = "Wikipedia"
+            uint result = ZLibStream.ComputeAdler32(data, 2, 9);
+
+            Assert.Equal(0x11E60398u, result);
+        }
+
+        [Fact]
+        public void Compress_Fastest_ProducesValidOutput()
+        {
+            byte[] original = Encoding.UTF8.GetBytes("test data for fastest compression level");
+
+            byte[] compressed;
+            using (var compressedStream = new MemoryStream())
+            {
+                using (var zlibStream = new ZLibStream(compressedStream, CompressionLevel.Fastest, leaveOpen: true))
+                {
+                    zlibStream.Write(original, 0, original.Length);
+                }
+
+                compressed = compressedStream.ToArray();
+            }
+
+            // Verify valid header
+            Assert.Equal(0x78, compressed[0]);
+            int headerCheck = (compressed[0] * 256) + compressed[1];
+            Assert.Equal(0, headerCheck % 31);
+
+            // Verify decompression round-trip
+            byte[] decompressed;
+            using (var compressedStream = new MemoryStream(compressed))
+            {
+                using (var zlibStream = new ZLibStream(compressedStream, CompressionMode.Decompress))
+                {
+                    using (var resultStream = new MemoryStream())
+                    {
+                        zlibStream.CopyTo(resultStream);
+                        decompressed = resultStream.ToArray();
+                    }
+                }
+            }
+
+            Assert.Equal(original, decompressed);
+        }
+
+        [Fact]
+        public void Compress_NoCompression_ProducesValidOutput()
+        {
+            byte[] original = Encoding.UTF8.GetBytes("test data for no compression level");
+
byte[] compressed; + using (var compressedStream = new MemoryStream()) + { + using (var zlibStream = new ZLibStream(compressedStream, CompressionLevel.NoCompression, leaveOpen: true)) + { + zlibStream.Write(original, 0, original.Length); + } + + compressed = compressedStream.ToArray(); + } + + // Verify valid header + Assert.Equal(0x78, compressed[0]); + int headerCheck = (compressed[0] * 256) + compressed[1]; + Assert.Equal(0, headerCheck % 31); + + // Verify decompression round-trip + byte[] decompressed; + using (var compressedStream = new MemoryStream(compressed)) + { + using (var zlibStream = new ZLibStream(compressedStream, CompressionMode.Decompress)) + { + using (var resultStream = new MemoryStream()) + { + zlibStream.CopyTo(resultStream); + decompressed = resultStream.ToArray(); + } + } + } + + Assert.Equal(original, decompressed); + } + + [Fact] + public void Compress_WritesCorrectAdler32Trailer() + { + // "Wikipedia" has the well-known Adler-32 value 0x11E60398. + // Verify that the last 4 bytes of the compressed output match the + // Adler-32 of the original data in big-endian order. 
byte[] original = Encoding.ASCII.GetBytes("Wikipedia");
+
+            byte[] compressed;
+            using (var output = new MemoryStream())
+            {
+                using (var zlibStream = new ZLibStream(output, CompressionLevel.Optimal, leaveOpen: true))
+                {
+                    zlibStream.Write(original, 0, original.Length);
+                }
+
+                compressed = output.ToArray();
+            }
+
+            Assert.True(compressed.Length >= 6, "Compressed output too short to contain header + trailer.");
+
+            // Parse the 4-byte big-endian Adler-32 trailer
+            uint trailer = ((uint)compressed[^4] << 24)
+                | ((uint)compressed[^3] << 16)
+                | ((uint)compressed[^2] << 8)
+                | compressed[^1];
+
+            uint expected = ZLibStream.ComputeAdler32(original);
+            Assert.Equal(0x11E60398u, expected); // sanity-check known test vector
+            Assert.Equal(expected, trailer);
+        }
+
+        /// <summary>
+        /// Simulates the GIDS format: 4-byte GIDS header (0x01, 0x00 magic +
+        /// 2-byte LE uncompressed length) followed by standard zlib-compressed data.
+        /// Verifies the decompression approach used in PivSession.KeyPairs.DecompressGids.
+        /// </summary>
+        [Fact]
+        public void Decompress_GidsFormat_WithHeaderStripping_Works()
+        {
+            byte[] original = Encoding.UTF8.GetBytes("Certificate data for GIDS test");
+
+            // Compress with zlib
+            byte[] zlibCompressed;
+            using (var compressedStream = new MemoryStream())
+            {
+                using (var zlibStream = new ZLibStream(compressedStream, CompressionLevel.Optimal, leaveOpen: true))
+                {
+                    zlibStream.Write(original, 0, original.Length);
+                }
+
+                zlibCompressed = compressedStream.ToArray();
+            }
+
+            // Prepend the 4-byte GIDS header: magic (0x01, 0x00) + LE uncompressed length
+            int uncompressedLength = original.Length;
+            byte[] gidsData = new byte[4 + zlibCompressed.Length];
+            gidsData[0] = 0x01;
+            gidsData[1] = 0x00;
+            gidsData[2] = (byte)(uncompressedLength & 0xFF);
+            gidsData[3] = (byte)((uncompressedLength >> 8) & 0xFF);
+            Buffer.BlockCopy(zlibCompressed, 0, gidsData, 4, zlibCompressed.Length);
+
+            // Decompress like PivSession.KeyPairs.DecompressGids does:
+            // strip 4-byte header, then pass to ZLibStream
+            const int gidsHeaderLength = 4;
+            using (var dataStream = new MemoryStream(gidsData, gidsHeaderLength, gidsData.Length - gidsHeaderLength))
+            {
+                using (var decompressor = new ZLibStream(dataStream, CompressionMode.Decompress))
+                {
+                    using (var resultStream = new MemoryStream())
+                    {
+                        decompressor.CopyTo(resultStream);
+                        byte[] decompressed = resultStream.ToArray();
+
+                        Assert.Equal(original, decompressed);
+                    }
+                }
+            }
+        }
+    }
+}
diff --git a/Yubico.YubiKey/tests/utilities/Yubico.YubiKey.TestUtilities.csproj b/Yubico.YubiKey/tests/utilities/Yubico.YubiKey.TestUtilities.csproj
index e80a741c0..10ab64420 100644
--- a/Yubico.YubiKey/tests/utilities/Yubico.YubiKey.TestUtilities.csproj
+++ b/Yubico.YubiKey/tests/utilities/Yubico.YubiKey.TestUtilities.csproj
@@ -30,8 +30,8 @@ limitations under the License.
--> - - + + diff --git a/build/Versions.props b/build/Versions.props index 60d327383..e425cdf26 100644 --- a/build/Versions.props +++ b/build/Versions.props @@ -30,10 +30,17 @@ for external milestones. - - 1.15.2 + 0.0.0-dev Here you can find all of the updates and release notes for published versions of the SDK. +## 1.16.x Releases + +### 1.16.0 + +Release date: March 31st, 2026 + +Features: + +- The FIDO2 application now supports SCP03 and SCP11 secure channels over USB CCID on YubiKeys with firmware version 5.8 and above. This enables encrypted communication with the FIDO2 application, matching the SCP support already available for PIV, OATH, OTP, and YubiHSM Auth. ([#428](https://github.com/Yubico/Yubico.NET.SDK/pull/428)) + +- ZLib compression and decompression support has been added via a new `ZlibStream` class. The `PivSession.KeyPairs` property now correctly handles compressed certificate formats. ([#417](https://github.com/Yubico/Yubico.NET.SDK/pull/417)) + +Bug Fixes: + +- The MSVC C runtime is now statically linked in Yubico.NativeShims, removing the dependency on the Visual C++ Redistributable. **Additionally, `cmake_minimum_required` has been bumped to 3.15** for proper CMP0091 policy support, and an explicit `exit /b 0` has been added to prevent `findstr` exit codes from leaking into the build process. ([#427](https://github.com/Yubico/Yubico.NET.SDK/pull/427)) + +Documentation: + +- NFC requirements and SCP usage examples have been added to the `Fido2Session` documentation. ([#428](https://github.com/Yubico/Yubico.NET.SDK/pull/428)) + +- Comments and logical grouping have been added to the NativeShims CMakeLists and readme. ([#427](https://github.com/Yubico/Yubico.NET.SDK/pull/427)) + +Miscellaneous: + +- The Serilog dependency has been removed from integration tests and the sandbox app, simplifying the test project dependencies. 
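The zlib (RFC 1950) framing that the new `ZLibStream` class implements — and that the unit tests in this diff verify — can be demonstrated with Python's stdlib `zlib` module. This is an illustrative sketch, not SDK code; it shows the same three invariants the tests check: the CMF byte is 0x78 for deflate with a 32 KiB window, the two header bytes form a multiple of 31, and the stream ends with a big-endian Adler-32 of the uncompressed data ("Wikipedia" has the well-known checksum 0x11E60398).

```python
import zlib

data = b"Wikipedia"
compressed = zlib.compress(data)  # zlib (RFC 1950): 2-byte header, deflate body, 4-byte trailer

# CMF byte 0x78 = compression method 8 (deflate) with CINFO 7 (32 KiB window)
assert compressed[0] == 0x78
# Header check: (CMF * 256 + FLG) must be divisible by 31
assert (compressed[0] * 256 + compressed[1]) % 31 == 0
# Trailer: big-endian Adler-32 of the uncompressed data
assert int.from_bytes(compressed[-4:], "big") == zlib.adler32(data) == 0x11E60398
# Round trip
assert zlib.decompress(compressed) == data
```

The same invariants hold regardless of compression level, which is why the `Compress_Fastest` and `Compress_NoCompression` tests can run identical header and round-trip checks.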
+ +Dependencies: + +- Several dependencies across the Yubico.Core, Yubico.YubiKey, and GitHub Actions workflows have been updated to newer versions. ([#424](https://github.com/Yubico/Yubico.NET.SDK/pull/424), [#429](https://github.com/Yubico/Yubico.NET.SDK/pull/429), [#430](https://github.com/Yubico/Yubico.NET.SDK/pull/430), [#432](https://github.com/Yubico/Yubico.NET.SDK/pull/432), [#433](https://github.com/Yubico/Yubico.NET.SDK/pull/433), [#435](https://github.com/Yubico/Yubico.NET.SDK/pull/435), [#436](https://github.com/Yubico/Yubico.NET.SDK/pull/436), [#437](https://github.com/Yubico/Yubico.NET.SDK/pull/437), [#438](https://github.com/Yubico/Yubico.NET.SDK/pull/438)) + +_________ + ## 1.15.x Releases ### 1.15.2
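The GIDS framing handled by the compressed-certificate support in this release (and exercised by the `Decompress_GidsFormat_*` unit tests above) can be sketched in Python using the exact test vector from the diff: a 4-byte header (0x01 0x00 magic, then the little-endian uncompressed length) followed by standard zlib data. This is an illustration of the format, not SDK code.

```python
import zlib

# GIDS blob from the unit tests above: 01 00 magic, LE length 0x001D = 29,
# then a zlib (RFC 1950) stream.
gids_hex = ("01001d00789c8b2c4dcaf4ce2c5148cb2f5270cc4b29cacf4c51"
            "28492d2e5148492c4904009f2e0aa4")
blob = bytes.fromhex(gids_hex)

magic, expected_len = blob[:2], int.from_bytes(blob[2:4], "little")
assert magic == b"\x01\x00" and expected_len == 29

# Strip the 4-byte GIDS header, then decompress the zlib payload
plaintext = zlib.decompress(blob[4:])
assert plaintext == b"YubiKit for Android test data"
assert len(plaintext) == expected_len
```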