Merged
10 changes: 10 additions & 0 deletions .agents/accounts.md
Original file line number Diff line number Diff line change
@@ -29,6 +29,16 @@ subagents:

<!-- AI-CONTEXT-END -->

## Pre-flight Questions

Before generating financial or accounting output, work through:

1. How would this look to a tax inspector, investor, or lender reviewing the books?
2. What is the tax treatment — and in which jurisdiction(s)?
3. Are we recording substance or just form — does this reflect economic reality?
4. What audit trail exists to support every figure?
5. What would change if the business were investigated, sold, or seeking funding tomorrow?

## Accounting Workflows

### QuickFile Integration
9 changes: 9 additions & 0 deletions .agents/content/production/writing.md
@@ -26,6 +26,15 @@ model: sonnet

<!-- AI-CONTEXT-END -->

## Pre-flight Questions

Before generating copy, scripts, or long-form content, work through:

1. What is the one thing the reader should do after reading this?
2. Is the value front-loaded — would someone get something useful from the first paragraph alone?
3. Is every section earning its place, or am I padding to fill a word count?
4. Does the tone match the context — and am I matching it deliberately or defaulting?

Comment on lines +29 to +37

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

rg -n "Pre-flight Questions|front-loaded|tone match|padding" .agents/

Repository: marcusquinn/aidevops

Length of output: 5028


🏁 Script executed:

# Also search more broadly for similar instruction patterns in writing/content agents
rg -n "Questions|reader should|value|word count" .agents/ | head -30

Repository: marcusquinn/aidevops

Length of output: 3096


🏁 Script executed:

# List the structure of .agents/ to understand the scope
find .agents/ -name "*.md" -type f | head -20

Repository: marcusquinn/aidevops

Length of output: 702


Search before adding instructions was skipped; "Pre-flight Questions" already exists across 9 agent files.

The guideline requiring an rg search before adding instructions was not followed. This pattern already exists in .agents/content/research.md, .agents/content/story.md, .agents/youtube/script-writer.md, .agents/seo/eeat-score.md, .agents/sales.md, .agents/legal.md, .agents/accounts.md, .agents/marketing.md, and .agents/health.md. While the specific questions added here are contextually appropriate for the writing agent (reader action, front-loading value, avoiding padding), the developer should have confirmed awareness of this established pattern before adding to it. Future additions should follow the search-first guideline.

🤖 Prompt for AI Agents
In @.agents/content/production/writing.md around lines 29 - 37, The new
"Pre-flight Questions" section duplicates an existing cross-agent pattern; run a
repo search for the "Pre-flight Questions" header and the specific questions (or
search for the files listed: .agents/content/research.md,
.agents/content/story.md, .agents/youtube/script-writer.md,
.agents/seo/eeat-score.md, .agents/sales.md, .agents/legal.md,
.agents/accounts.md, .agents/marketing.md, .agents/health.md) and either remove
the duplicate from .agents/content/production/writing.md or refactor it to
reference the canonical shared guidance (keep the four questions but replace
with a link or note pointing to the established central file), ensuring the
unique header "Pre-flight Questions" is not redefined in multiple agent docs.
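The search-first step the reviewer asks for can be scripted as a small pre-commit-style helper. This is an illustrative sketch, not part of the PR: the function name is invented, and portable grep stands in for rg.

```shell
#!/bin/sh
# Report whether a heading that is about to be added already exists
# somewhere under a docs tree, so the author can reference the existing
# copy instead of duplicating it.
heading_exists() {
  # $1 = directory to search, $2 = exact heading text
  # -r recurse, -n line numbers, -F fixed string (no regex metacharacters)
  grep -rnF -- "$2" "$1"
}

# Demo against a throwaway tree mimicking .agents/
tmp=$(mktemp -d)
printf '## Pre-flight Questions\n' > "$tmp/legal.md"
printf '## Workflow\n' > "$tmp/other.md"

if heading_exists "$tmp" '## Pre-flight Questions' >/dev/null; then
  echo 'duplicate heading found - reference the existing copy instead'
else
  echo 'heading is new - safe to add'
fi
rm -rf "$tmp"
```

Running an equivalent check such as `rg -n "^## Pre-flight Questions" .agents/` before this PR would have surfaced the nine existing copies the reviewer lists.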

## Output Formats

### Long-Form Script (YouTube, Podcast)
11 changes: 11 additions & 0 deletions .agents/content/research.md
@@ -28,6 +28,17 @@ Pre-writing research to validate niches, understand audiences, and analyse competitors.

<!-- AI-CONTEXT-END -->

## Pre-flight Questions

Before generating audience or market research, work through:

1. What are the first principles here — what is actually true vs commonly assumed?
2. What is the root cause, not the symptom?
3. What biases could be distorting this — confirmation, anchoring, availability, survivorship?
4. What is the evidence — and how reliable is the source?
5. Are there physics, psychology, or reliability constraints that limit what's possible?
6. What would disprove this conclusion?

## Workflow

### 1. Audience Research
9 changes: 9 additions & 0 deletions .agents/content/story.md
@@ -26,6 +26,15 @@ model: sonnet

<!-- AI-CONTEXT-END -->

## Pre-flight Questions

Before committing to a narrative angle or hook, work through:

1. What is the theme — the universal truth this content explores?
2. What is the single takeaway — what should the audience think, feel, or do differently?
3. Does this tell a story — is there tension, transformation, and resolution?
4. Who is the protagonist — the audience, a character, or the brand — and is that the right choice?

Comment on lines +29 to +37

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for existing narrative/story guidance in agents that might duplicate the new pre-flight questions

echo "=== Checking for theme/takeaway/story structure guidance ==="
rg -i "theme|takeaway|transformation|protagonist" .agents/ -C 2 --type md

echo -e "\n=== Checking for pre-flight or preparatory narrative questions ==="
rg -i "before (committing|generating|creating)|pre-flight|preliminary" .agents/ -C 2 --type md

echo -e "\n=== Checking for narrative arc/story structure guidance ==="
rg -i "tension.*transformation|narrative.*arc|story.*structure" .agents/ -C 2 --type md

echo -e "\n=== Listing all .md files in .agents/ ==="
find .agents/ -name "*.md" -type f

Repository: marcusquinn/aidevops

Length of output: 50379


🏁 Script executed:

#!/bin/bash
# Focused verification on the specific pre-flight questions

echo "=== Story.md new pre-flight questions ==="
sed -n '29,37p' .agents/content/story.md

echo -e "\n=== YouTube script-writer pre-flight section ==="
grep -A 5 "Before writing a video script" .agents/youtube/script-writer.md

echo -e "\n=== Compare: 'single takeaway' across agents ==="
rg "single takeaway" .agents/ -C 1 --type md

Repository: marcusquinn/aidevops

Length of output: 1588


Coding guideline violation: "Single takeaway" question duplicates existing guidance in .agents/youtube/script-writer.md.

Before adding instructions to .agents/**/*.md files, the repository's coding guideline requires searching for existing similar guidance using rg. This search was not performed.

Duplication found:

  • Story.md Question 2: "What is the single takeaway — what should the audience think, feel, or do differently?"
  • YouTube script-writer.md Question 1: "What is the single takeaway — and does every section of the script serve it?"

Both questions address the same concept (identifying the core message) and should either be consolidated or explicitly differentiated by scope (story-level takeaway vs. script-section alignment). Additionally, the story.md "tension/transformation/resolution" question semantically overlaps with the YouTube agent's pre-flight checks.

Differentiate these questions to avoid conflicting narrative guidance across agents, or consolidate the shared framework into a shared reference document if the intent is identical.

🤖 Prompt for AI Agents
In @.agents/content/story.md around lines 29 - 37, The "single takeaway"
question in .agents/content/story.md duplicates guidance in
.agents/youtube/script-writer.md (Story.md Question 2 vs YouTube
script-writer.md Question 1); run the repository-wide search (rg) to locate the
existing wording, then either consolidate the duplicate into a single shared
guideline or explicitly differentiate the two (e.g., "story-level single
takeaway" vs "script-section alignment") so they don't conflict; update
.agents/content/story.md (the pre-flight Questions block) to reference the
consolidated/shared guideline or to use distinct scope language, and adjust any
overlapping "tension/transformation/resolution" phrasing similarly to avoid
semantic overlap with the YouTube agent checks.
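The duplication the reviewer found can also be detected mechanically by counting how many files contain a distinctive phrase. A rough sketch, with grep standing in for rg; the sample files and phrase are stand-ins, not the repository's real content:

```shell
#!/bin/sh
# Flag a phrase appearing in two or more files as a consolidation candidate.
tmp=$(mktemp -d)
printf '2. What is the single takeaway for the audience?\n' > "$tmp/story.md"
printf '1. What is the single takeaway of the script?\n' > "$tmp/script-writer.md"
printf '1. What is the hook?\n' > "$tmp/sales.md"

phrase='single takeaway'
# -l prints only matching filenames; wc -l counts them (tr strips BSD wc padding)
count=$(grep -rlF -- "$phrase" "$tmp" | wc -l | tr -d ' ')
if [ "$count" -ge 2 ]; then
  echo "'$phrase' appears in $count files - consolidate or differentiate by scope"
fi
rm -rf "$tmp"
# prints: 'single takeaway' appears in 2 files - consolidate or differentiate by scope
```

The same loop could iterate over each numbered question line extracted from the agent docs, surfacing overlaps like the story-level vs script-level "single takeaway" pair before they land in a PR.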

## 7 Hook Formulas

Every piece of content starts with a hook. Use these formulas to generate variants:
10 changes: 10 additions & 0 deletions .agents/health.md
@@ -31,6 +31,16 @@ Always consult healthcare professionals for medical advice.

<!-- AI-CONTEXT-END -->

## Pre-flight Questions

Before generating health-related output, work through:

1. What does the peer-reviewed evidence say? Cite studies, not opinions.
2. What is the mechanism of action — can it be explained physiologically?
3. What biases could be influencing this conclusion — confirmation, survivorship, selection, funding?
4. What would a controlled experiment look like to test this claim?
5. What are the risks of acting on this vs doing nothing — and for whom?

Comment on lines +34 to +43

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

rg -n "Pre-flight Questions|peer-reviewed|mechanism of action|controlled experiment|risks" .agents/

Repository: marcusquinn/aidevops

Length of output: 1722


🏁 Script executed:

# Also check the root AGENTS.md for similar guidance patterns
cat .agents/AGENTS.md | head -100

Repository: marcusquinn/aidevops

Length of output: 5281


🏁 Script executed:

# Check if there are other agent files with similar critical thinking patterns
fd -t f "\.md$" .agents/ | head -20

Repository: marcusquinn/aidevops

Length of output: 712


🏁 Script executed:

grep -n "Pre-flight" .agents/AGENTS.md

Repository: marcusquinn/aidevops

Length of output: 46


🏁 Script executed:

# Check the size of root AGENTS.md to see if it's close to the ~50-100 instruction budget
wc -l .agents/AGENTS.md

Repository: marcusquinn/aidevops

Length of output: 85


Consolidate "Pre-flight Questions" pattern to root AGENTS.md—do not replicate across domain agents.

Guideline violation confirmed: No prior rg search documented before adding this instruction. Additionally, "Pre-flight Questions" is already duplicated across 9 domain agents (legal, story, production/writing, research, youtube, seo, health, marketing, accounts, sales), violating the single-source-of-truth principle. This universal pattern should be centralized in root AGENTS.md with domain-specific tailoring applied locally, not duplicated verbatim across domains. Root AGENTS.md is currently 396 lines (4× the recommended ~50-100 instruction budget); consolidate this pattern and apply similar deduplication across other repeated instruction blocks to restore adherence to the architectural guidelines.

🧰 Tools
🪛 LanguageTool

[style] ~41-~41: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...n, survivorship, selection, funding? 4. What would a controlled experiment look like...

(ENGLISH_WORD_REPEAT_BEGINNING_RULE)


[style] ~42-~42: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...riment look like to test this claim? 5. What are the risks of acting on this vs doin...

(ENGLISH_WORD_REPEAT_BEGINNING_RULE)

🤖 Prompt for AI Agents
In @.agents/health.md around lines 34 - 43, The "Pre-flight Questions" block in
the health agent is duplicated across domains and must be centralized: remove
the duplicated "Pre-flight Questions" section from ".agents/health.md" and add a
single canonical copy into root AGENTS.md under a new "Pre-flight Questions"
pattern; update domain agent docs (legal, story, production, research, youtube,
seo, health, marketing, accounts, sales) to reference that canonical header and
implement only domain-specific tailoring locally (e.g., a short "Domain notes"
subsection), and record an `rg` search in the commit message or PR description
showing where duplicates were found; ensure the header text remains identical in
AGENTS.md and that all removed copies are replaced with a one-line reference
pointing to the canonical pattern.
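The refactor the prompt describes (keep the header, drop the duplicated body, leave a one-line pointer to the canonical copy) can be sketched with awk. The pointer wording and file layout here are assumptions for illustration, not the repository's actual text:

```shell
#!/bin/sh
# Replace a duplicated "## Pre-flight Questions" block with a one-line
# reference to the canonical copy. The block is assumed to span from its
# header to the next "## " header.
tmp=$(mktemp -d)
cat > "$tmp/health.md" <<'EOF'
## Pre-flight Questions

1. What does the peer-reviewed evidence say?

## Health Workflows

Domain content stays untouched.
EOF

awk '
  /^## Pre-flight Questions/ {
    skip = 1
    print; print ""
    print "See the canonical checklist in AGENTS.md; add only domain notes here."
    print ""
    next
  }
  /^## / && skip { skip = 0 }   # next section header ends the skipped block
  !skip { print }
' "$tmp/health.md" > "$tmp/health.new.md"

cat "$tmp/health.new.md"
rm -rf "$tmp"
```

Applied across the nine files the reviewer lists, this would leave one canonical block plus short domain-specific notes per agent, which also helps with the oversized root AGENTS.md the review raises.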

## Health Workflows

### Developer Wellness
10 changes: 10 additions & 0 deletions .agents/legal.md
@@ -34,6 +34,16 @@ Always consult qualified legal professionals for binding advice.

<!-- AI-CONTEXT-END -->

## Pre-flight Questions

Before generating legal-adjacent output, work through:

1. What does the actual law say — statute, regulation, case law? Cite it.
2. What jurisdiction(s) apply, and where do they conflict or overlap?
3. What are the consequences of getting this wrong — financial, criminal, reputational?
4. What would a competent opposing counsel argue against this position?
5. Is the proposed approach proportionate to the risk, or over/under-engineered?

## Legal Workflows

### Document Review
Expand Down
12 changes: 12 additions & 0 deletions .agents/marketing.md
@@ -78,6 +78,18 @@ subagents:

<!-- AI-CONTEXT-END -->

## Pre-flight Questions

Before generating marketing strategy or campaign output, work through:

1. Is the offer valuable? What specific problem does it solve, and is that problem real and painful?
2. What is unique about our solution — what do we offer that alternatives don't?
3. What are the benefits (outcomes the buyer gets) before the features (how it works)?


medium

This question's phrasing is a bit ambiguous. 'Benefits... before the features' could be clearer. To ensure the AI correctly prioritizes benefits in its output, consider rephrasing this to be more direct.

Suggested change
3. What are the benefits (outcomes the buyer gets) before the features (how it works)?
3. Are we leading with benefits (the outcomes for the buyer) over features (how it works)?

4. How does our pricing and value compare to alternatives — including doing nothing?
5. How can we guarantee results or satisfaction — and are our claims realistic and provable?
6. Who specifically are we addressing — named personas with real constraints, not demographics?
7. What would make someone say "this isn't for me" — and is that the right person to lose?

## Email Marketing

### FluentCRM Setup
6 changes: 6 additions & 0 deletions .agents/prompts/build.txt
@@ -26,6 +26,12 @@ IMPORTANT: You must NEVER generate or guess URLs for the user unless you are con
# Professional objectivity
Prioritize technical accuracy and truthfulness over validating the user's beliefs. Focus on facts and problem-solving, providing direct, objective technical info without any unnecessary superlatives, praise, or emotional validation. Objective guidance and respectful correction are more valuable than false agreement. Whenever there is uncertainty, investigate to find the truth first rather than instinctively confirming the user's beliefs.

# Critical thinking
For all non-trivial output: Is this a good idea? Compared to what? At what cost? Based on
what evidence? Evaluate whether action is necessary — doing nothing is a valid option.
Ensure objective understanding, distinguish nuance, and consider unintended consequences
or third-order effects. Weigh value against cost and effort before proceeding.
Comment on lines +30 to +33


medium

The current paragraph formatting, with its awkward line breaks, makes the core critical thinking questions less prominent and harder to read. To improve clarity and ensure the AI gives these questions proper weight, consider pulling them out into a bulleted list.

For all non-trivial output, ask:
- Is this a good idea?
- Compared to what?
- At what cost?
- Based on what evidence?

Evaluate whether action is necessary — doing nothing is a valid option. Ensure objective understanding, distinguish nuance, and consider unintended consequences or third-order effects. Weigh value against cost and effort before proceeding.


# Task Management
You have access to the TodoWrite tools to help you manage and plan tasks. Use these tools VERY frequently to ensure that you are tracking your tasks and giving the user visibility into your progress.
These tools are also EXTREMELY helpful for planning tasks, and for breaking down larger complex tasks into smaller steps. If you do not use this tool when planning, you may forget to do important tasks - and that is unacceptable.
13 changes: 13 additions & 0 deletions .agents/sales.md
@@ -77,6 +77,19 @@ subagents:

<!-- AI-CONTEXT-END -->

## Pre-flight Questions

Before generating sales strategy or prospect-facing output, work through:

1. What is the hook — what gets attention in the first 5 seconds?
2. What is the need — what problem does the prospect have, in their words?
3. What is the desire — what outcome do they want, and how emotionally invested are they?
4. What is the price positioning — how does cost relate to the value of the problem solved?
5. Can they pay — budget, authority, and timing?
6. What is the reason to buy now — what changes if they wait?
7. How do we close — what is the specific next action?
8. How do we consolidate — what happens after the sale to prevent buyer's remorse and generate referrals?

## CRM Integration

### FluentCRM Setup
Expand Down
11 changes: 11 additions & 0 deletions .agents/seo/eeat-score.md
@@ -54,6 +54,17 @@ eeat-score-helper.sh report ~/Downloads/example.com/_latest/eeat-scores.json

<!-- AI-CONTEXT-END -->

## Pre-flight Questions

Before assessing or generating E-E-A-T content, work through:

1. Are brand name, expert name, and credentials cited with verifiable sources?
2. Are there quality backlinks from authoritative domains supporting the claims?
3. Is NAP (name, address, phone) consistent across all mentions and structured data?
4. What is the entity density — are key entities mentioned with appropriate frequency and semantic weight?
5. Does this demonstrate first-hand experience, or just restate what already ranks?
6. Would a domain expert cite this — or dismiss it as surface-level?

## Overview

The E-E-A-T Score agent evaluates content quality using Google's E-E-A-T framework
9 changes: 9 additions & 0 deletions .agents/youtube/script-writer.md
@@ -17,6 +17,15 @@ tools:

Generate YouTube video scripts optimized for audience retention. Supports structured outlines, full scripts with pattern interrupts, hook generation, and remix mode (transform competitor videos into unique content).

## Pre-flight Questions

Before writing a video script, work through:

1. What is the single takeaway — and does every section of the script serve it?
2. Why would someone watch past 30 seconds — what tension or promise holds them?
3. What does the viewer already know — where does this video meet them?
4. What would make this indistinguishable from the last 10 videos on this topic — and how do we avoid that?

## When to Use

Read this subagent when the user wants to: