Draft once, self-proofread, run dual review, then save.
Research separates writing quality from factual accuracy so each check does one job well before the document is finalized.
Open any workflow card below to jump straight to the relevant pipeline.
This overview card follows the architecture-driven path rather than the smaller direct-delivery path.
Documentation runs one feature at a time so the drafts, checks, and backlog status stay aligned.
Security stays read-only throughout and ends with a prioritized report rather than a code change cycle.
The implementation flow can start with architecture or move directly to delivery, but it always closes with structured review, fixes, and a final summary of what changed.
Choose Architect for larger work or Implementer for smaller direct changes.
Implement the change from either the architecture document or the direct scope.
Run two review cycles, each with GPT-backed scrutiny.
Deliver the changed scope, review status, and any remaining issues.
The real flow is not simply Architect then Implementer: implementer.agent.md defines two operating modes, Mode A when an architecture document exists and Mode B for small direct tasks. Large direct work should stop and route back through Architect.
Use this when .architect-output/architecture.md exists.
Use this only for small, well-scoped work.
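The routing between the two modes can be sketched as a simple decision: prefer the architecture document when it exists, fall back to direct mode only for small work, and otherwise send the request back to Architect. This is an illustrative sketch, not the binding logic; the real rules live in implementer.agent.md, and the `task_is_small` judgment is an assumption standing in for the file's scoping criteria.

```python
import os

ARCH_DOC = ".architect-output/architecture.md"

def choose_mode(task_is_small: bool, root: str = ".") -> str:
    # Illustrative routing only; implementer.agent.md defines the actual rules.
    if os.path.exists(os.path.join(root, ARCH_DOC)):
        return "mode-a"          # follow the existing architecture document
    if task_is_small:
        return "mode-b"          # small, well-scoped direct change
    return "route-to-architect"  # large direct work goes back to Architect
```

The point of the sketch is the ordering: the presence of the architecture document wins before any size judgment is made.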
Architect loads project knowledge, asks clarifying questions if the request is ambiguous, offloads codebase research to Explore, estimates weighted budget units, and writes a structured architecture document.
Explore is the read-heavy subagent for codebase research and pattern discovery in this pipeline.
Architect must run a GPT-based architecture review against requirements coverage, reference pattern quality, database design, API to DB consistency, permissions, migration sanity, behavioral gaps, and task split validity.
Implementer either follows the architecture document or operates in direct mode, but in both cases it loads relevant skills before touching code, implements tasks one by one, and appends every touched file to .architect-output/changed-files.md.
Each task is expected to move through the to-do list from in-progress to completed; the file instructs the agent not to batch the plan silently.
Implementer can use Explore when understanding a pattern would require reading three or more files.
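The changed-files log behaves like an append-only, deduplicated list. A minimal sketch of that bookkeeping, assuming first-seen ordering (the workflow only specifies the log's purpose, not its exact format):

```python
def record_changed_files(log: list[str], touched: list[str]) -> list[str]:
    """Append every touched file to the running log, keeping first-seen
    order and skipping duplicates. Hypothetical helper; the workflow only
    says touched files end up in .architect-output/changed-files.md."""
    for path in touched:
        if path not in log:
            log.append(path)
    return log
```

Deduplicating here keeps the later review cycle from scanning the same file twice when multiple tasks touch it.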
The first Code Reviewer run checks the changed file set in full, works through the code-review skill categories, self-verifies its own findings, and then performs two GPT scrutiny passes before findings go back to Implementer.
The second GPT pass is a final verification of the merged review, not a duplicate of the first pass.
If findings still remain after this second cycle, Implementer does not invent more loops. It summarizes the remaining items and escalates them to the user.
The final output includes requirement, app, scope, files created, files modified, migrations, permissions, review cycle count, auto-fixed findings, and anything still unresolved after the second review cycle.
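The required fields of that final output can be expressed as a completeness check. The snake_case field names below are illustrative assumptions mapped from the prose, not a schema the workflow defines:

```python
REQUIRED_FIELDS = {
    "requirement", "app", "scope", "files_created", "files_modified",
    "migrations", "permissions", "review_cycles",
    "auto_fixed_findings", "unresolved",
}

def final_summary(**fields) -> dict:
    """Validate that a delivery summary carries every required field.
    Field names are assumptions derived from the workflow description."""
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"summary missing: {sorted(missing)}")
    return fields
```

A check like this makes an incomplete summary fail loudly instead of silently dropping, say, the unresolved-findings list.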
The documentation flow works one feature at a time so drafting, verification, and backlog status stay aligned before the next feature begins.
Pick the next feature from the backlog and work it end to end.
Create the business and technical documents as a matched pair.
Use GPT-5.4 to check the drafts against the code and each other.
Apply fixes and mark the backlog status for that feature.
The workflow begins by reading documents/documentation-backlog.md and translating the requested feature set into a todo list with one feature per item.
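That translation step can be sketched as filtering the backlog's bullet lines down to the requested features, one todo per feature. The bullet-list format is an assumption; the actual layout of documents/documentation-backlog.md may differ:

```python
def backlog_to_todos(backlog_md: str, requested: set[str]) -> list[str]:
    """Turn backlog bullet lines into one todo per requested feature,
    preserving backlog order. A sketch under an assumed '- feature' format."""
    todos = []
    for line in backlog_md.splitlines():
        line = line.strip()
        if line.startswith("- "):
            feature = line[2:].strip()
            if feature in requested:
                todos.append(feature)
    return todos
```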
Draft Documenter reads every relevant source file in the module path, uses the AccessIQ Launch Planning business document as the reference pattern, and writes a business-focused document for PM and analyst audiences.
The technical draft is a separate subagent run. It reads the same source code, reads the sibling business document for consistency, then writes the exhaustive technical reference document.
The verifier reads the business document, technical document, and source modules together. It checks completeness, accuracy, pattern adherence, cross-document consistency, and clarity.
Documenter ends by summarizing how many features were documented, what was skipped, what GPT findings were rejected, and how many files were created.
The research flow produces one written deliverable and then improves confidence in it through separate checks for writing quality and factual accuracy.
Create the research document and save the first version.
Clean up the document before external review begins.
Run separate GPT-5.4 passes for writing quality and accuracy.
Reconcile findings and publish the final saved document.
The first step is broad research using available tools, followed by a full markdown draft saved to the specified path or, by default, under documents/research/.
The pipeline expects the coordinator to fix its own grammar, clarity, and structure issues first, before paying for any GPT verification passes.
The proofreader is responsible for grammar, clarity, structure, consistency, and completeness. Its findings are about editorial quality.
This pass checks factual accuracy, technical correctness, logical consistency, source validity, and outdated information. It is not performing the same job as the proofreader.
The final response is expected to say what was researched, where the file was saved, how many findings came from each GPT reviewer, and how many were accepted versus rejected.
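Those per-reviewer and accepted-versus-rejected counts can be sketched as a small tally. The finding schema here (a `reviewer` label and an `accepted` flag per finding) is an illustrative assumption, not a format the workflow specifies:

```python
from collections import Counter

def tally_findings(findings: list[dict]) -> dict:
    """Count findings per GPT reviewer and split accepted vs rejected.
    Assumed schema: each finding has 'reviewer' and 'accepted' keys."""
    per_reviewer = Counter(f["reviewer"] for f in findings)
    accepted = sum(1 for f in findings if f["accepted"])
    return {
        "per_reviewer": dict(per_reviewer),
        "accepted": accepted,
        "rejected": len(findings) - accepted,
    }
```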
The security workflow is a structured read-only assessment that moves from threat research to validation and ends with a prioritized report for decision-makers.
Define scope and gather current threat intelligence first.
Review dependencies, config, and OWASP exposure without changing code.
Use two GPT-5.4 scrutiny rounds to test the findings.
Deliver the prioritized risk report and recommendations.
The file explicitly requires online research for relevant CVEs and advisories before the actual code assessment begins.
This is where the workflow points the assessor to requirements.txt, the security and config files, rate limiting, and Docker configuration.
The workflow steps through access control, injection, crypto, misconfiguration, logging, SSRF, and the other OWASP categories, then removes false positives and calibrates severity before sending anything to GPT.
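The pre-GPT cleanup step can be sketched as a filter-then-sort over the raw findings: drop anything flagged as a false positive, then order what remains by calibrated severity. The field names and severity scale below are illustrative assumptions:

```python
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def prepare_findings(findings: list[dict]) -> list[dict]:
    """Remove false positives and sort survivors by severity before the
    GPT scrutiny rounds. Schema and scale are assumed for illustration."""
    kept = [f for f in findings if not f.get("false_positive")]
    return sorted(kept, key=lambda f: SEVERITY_ORDER[f["severity"]])
```

Doing this locally first means the GPT rounds spend their scrutiny on real, correctly ranked findings rather than noise.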
The second pass is narrower: remaining blind spots, severity consistency, and remediation quality in the reconciled report.
The Pen Tester rules explicitly say not to modify files. The deliverable is the structured report with executive summary, threat intelligence, findings, documented exceptions, and recommendations.