AI-Driven Development Lifecycle Framework

An open framework for AI-driven development workflows.

Apache-2.0 · 22 Skills · 7 Stacks · 14 Templates


22 Skills
7 Stack Presets
14 Quality Templates
4 CI/CD Workflows

The Problem

AI coding agents can be very productive, but they still need structure.

66% of developers report AI solutions that are "almost right, but not quite" (Stack Overflow 2025)

45% of AI-generated code in one benchmark study contained security flaws (Veracode 2025)

Context degradation is a common failure mode.

Forgotten Context

Instructions lost between sessions.

Hallucinated APIs

Code written against outdated methods.

The Solution

4 Layers of Protection

1. Rules

.clinerules + .clinerules-critical.md

2. Enforcement

Husky hooks · secretlint · commitlint · coverage gates

3. Verification

Static analysis · Skills · Subagents · CI/CD

4. Evolution

Reflect · Rule improvement · ADRs

Daily Workflow

A structured daily cycle from resume to reflect.

Resume · Context
Plan · Design
Act · Build
Verify · Check
Reflect · Improve

Maturity Model

Complexity activates when you need it — not before

L1 · Foundation

Solo · MVP · Pre-launch

Rules, hooks, memory bank

L2 · Growth

Team ≥2 · First users

Logging, Sentry, Renovate

L3 · Scale

Team ≥5 · SLAs · 100+ RPS

Tracing, SLOs, canary

L4 · Enterprise

Compliance · Multi-region

SBOM, IaC, audit logs

22 Skills — One for Every Phase

Specialized modules that guide the AI through each stage of delivery.

PLAN

spec-writing · api-design · context-hub-integration

BUILD

testing · database-queries · database-migrations · state-management · docker · llm-integration

VERIFY

code-review · security · frontend-performance · ux-heuristics

SHIP

safety-checks · git-workflow · ci-cd

OPERATE

observability · feature-flags · cost-monitoring

UTILITY

debugging · documentation · maturity-assessment


Why This Exists

AI coding agents in 2026 can be highly productive, but they still need structure. Without a framework, speed can come at the cost of quality and security.

66%

of developers report AI solutions that are "almost right, but not quite"

Source: Stack Overflow Developer Survey 2025

45%

of AI-generated code in one benchmark study contained security flaws

Source: Veracode GenAI Code Security Report 2025

Hallucinated APIs

AI training data is frozen. External APIs change. Code written from memory against outdated endpoints breaks at runtime — silently.

Addressed by: context-hub-integration skill

Context Degradation

A recurring failure mode: agents forget instructions mid-session.

This framework doesn't replace developer judgment. It adds quality gates, persistent memory, and a feedback loop for improving the workflow over time.
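Mechanically, the "persistent memory" is nothing exotic: the memory bank is plain Markdown on disk, so a Resume amounts to re-reading the always-loaded files. A minimal sketch (the file names come from the framework's cline_docs/ layout):

```shell
# Sketch of what "Resume" does mechanically: re-read the always-loaded
# memory bank files so the session starts with full context.
resume() {
  for f in cline_docs/activeContext.md cline_docs/stackConfig.md; do
    if [ -f "$f" ]; then
      cat "$f"              # feed the file back into the session
    else
      echo "missing: $f"    # run "Initialize project" first
    fi
  done
}

resume
```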

Ad-hoc Prompting vs This Framework

The difference is not whether you use AI — it is whether you use it with discipline.

| Dimension | Ad-hoc AI Prompting | With This Framework |
| --- | --- | --- |
| Planning | Jump straight to code | Spec first: user stories, data model, acceptance criteria |
| Context | Forgotten between sessions | Memory bank reloaded at the start of every session |
| Security | Checked if you remember to ask | Mandatory OWASP checklist before every feature ships |
| Testing | Optional, often skipped under pressure | RED → GREEN → REFACTOR enforced as part of DoD |
| External APIs | From training data — may be outdated or hallucinated | Fresh docs fetched via context-hub before every integration |
| Code review | None, or manual if you remember | Independent subagent review on every STANDARD+ task |
| Quality gates | Manual and inconsistent | Husky hooks: secrets scan, lint, type-check, tests, dep audit |
| Stack guidance | Generic, one-size-fits-all | 7 stack presets with conventions, gotchas, and templates |
| Improvement | Same mistakes repeat across sessions | Reflect phase captures lessons and evolves the rules |

Who Is This For?

This framework is aimed at builders who want speed without chaos.

Solo Developers

Add more structure and review discipline to solo AI-assisted development. Useful for indie hackers and small personal projects.

Startups

Move faster on MVP delivery while keeping planning, verification, and documentation visible from the start.

Teams

Standardize code quality with a shared ruleset and stack presets. Onboard new members faster with a structured reading order and project-specific context in cline_docs/onboarding.md.

Maturity Model

This framework scales with your project. Tools and processes activate when real-world triggers justify them, not before.

Level 1 · Foundation

Solo · MVP · Pre-launch

.clinerules, Husky hook templates, memory bank, conventional commits, and pre-commit quality gates

Level 2 · Growth

Team ≥2 · First users · Post-launch

Structured logging, Sentry, Renovate Bot, staging environment, API versioning, preview environments

Level 3 · Scale

Team ≥5 · SLAs · 100+ RPS

Load testing, distributed tracing, SLOs, feature flags, canary deployments, on-call rotation

Level 4 · Enterprise

Compliance · Multi-region · Team ≥20

Audit logging, SBOM generation, IaC, data retention policy, annual penetration testing

Advancing requires trigger criteria — real users, team growth, or operational incidents. Not calendar dates. Run Maturity check to assess readiness and get an ordered upgrade checklist.
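Level 1's conventional-commit gate, for instance, boils down to a header check along these lines. This is an illustration only: the regex and the list of types are assumptions, and the real ruleset lives in commitlint.config.js:

```shell
# Illustrative approximation of the header check commitlint enforces.
# The types listed here are assumed; the real rules live in commitlint.config.js.
is_conventional() {
  echo "$1" | grep -Eq '^(feat|fix|docs|chore|refactor|test|perf|ci)(\([a-z0-9-]+\))?!?: .+'
}

is_conventional "feat(auth): add magic-link login" && echo "accepted"
is_conventional "fixed stuff" || echo "rejected: use type(scope): subject"
```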

Supported Stacks

Seven preset configurations. Choose one or define your own in cline_docs/stackConfig.md. All skills and rules adapt automatically.

| Preset | Stack | Best for |
| --- | --- | --- |
| nestjs-nextjs | NestJS + Next.js (TypeScript) | SaaS, dashboards, public or mobile API |
| trpc-nextjs | tRPC + Next.js + Prisma (TypeScript) | SaaS, internal tools, TypeScript-only consumers (T3 Stack) |
| django-react | Django + React (Python + TypeScript) | Data-heavy apps, admin tools |
| laravel-vue | Laravel + Vue (PHP) | Content platforms, e-commerce |
| go-htmx | Go + htmx (server-rendered) | Internal tools, low-JS apps |
| fastapi-nextjs-rag | FastAPI + Next.js + pgvector (Python AI) | RAG, document Q&A, AI search |
| nextjs-ai | Next.js + Vercel AI SDK (TypeScript AI) | Chatbots, copilots, AI-enhanced SaaS |

TypeScript

TypeScript-only consumers → trpc-nextjs
Public API / mobile → nestjs-nextjs
AI is core product → nextjs-ai

Python

Data / admin → django-react
RAG / AI search → fastapi-nextjs-rag

Other

Content / e-commerce → laravel-vue
Low-JS / internal → go-htmx
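Once you have picked a preset, pointing stackConfig.md at it can be scripted. The `<preset>.md` filename layout below is an assumption; check the actual names under .cline/stacks/ in your clone:

```shell
# Hypothetical helper: the "<preset>.md" filenames under .cline/stacks/
# are assumed; verify the actual layout in your clone.
use_preset() {
  if cp ".cline/stacks/$1.md" cline_docs/stackConfig.md 2>/dev/null; then
    echo "stackConfig.md set from preset: $1"
  else
    echo "preset not found: $1 (see .cline/stacks/)" >&2
    return 1
  fi
}

use_preset trpc-nextjs || true   # e.g. the T3-style TypeScript preset
```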

Core Commands & Workflow

The framework operates on a structured daily workflow.

Resume
Plan
Act
Verify
Reflect

Declare the task size before every task — it determines which steps are required. At L1, MICRO is the default. STANDARD and MAJOR are opt-in.

NANO

Single file · <30 min · No logic change

Edit → Verify → Commit

MICRO

1-2 files · <1 day · Contained change · L1 default

Plan → Act → Verify → Commit

STANDARD

Feature / bug with scope · L2 default · Reflect required

Resume → Plan → Act → Verify → Reflect

MAJOR

Architecture / cross-cutting · L3+ default · Reflect required

Resume → Plan+ADR → Act → Verify → Load test → Reflect

| Command / Skill | When / What |
| --- | --- |
| Initialize project | First time — stack + memory bank setup |
| Resume | Start of every session — reload memory & context |
| This is a NANO / MICRO / STANDARD / MAJOR task | Declare lifecycle size before every task |
| spec-writing skill | Complex feature → spec + acceptance criteria (STANDARD+) |
| Quality check | Code quality scan before marking done |
| Security audit | Full security verification (OWASP, STRIDE) |
| UX review | Error messages, email, exports, onboarding, UI/a11y (ux-heuristics skill) |
| Review this | AI-on-AI code review (code-reviewer subagent) |
| Debug this | Structured 8-step root cause investigation (debugger subagent) |
| Maturity check | Assess current level + generate ordered upgrade checklist |
| Reflect | Update memory files, propose rule improvements (STANDARD+) |
| End session | Save context, update activeContext.md & progress.md |
| Hotfix | Production emergency — minimal fix, rollback plan, post-mortem |
| Fresh start | Clear context, begin new unrelated task |

8-Step Verification Process

No code ships without passing these gates.

1. chub fetch: fresh API docs (external APIs only)
2. Tests: coverage ≥70%
3. Quality: code quality scan
4. Security: OWASP, STRIDE
5. UX/a11y: errors always; UI, email, exports conditionally
6. AI Review: subagent
7. Commit: lint · format · type check
8. Push: full tests · dep audit

Steps 7 and 8 can run automatically once hooks are configured in your project. The sample .husky/ hooks are designed to enforce quality with minimal manual discipline: pre-commit can run secretlint + lint + format + type-check, commit-msg can enforce conventional commit format, and pre-push can run the full test suite with coverage checks and a dependency security audit. The hook scripts ship as ready-to-customize templates; update their commands to match your stackConfig.md before relying on them as enforcement.
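As a concrete sketch, a minimal .husky/pre-commit covering the commit-time gates might look like this. The npm script names (lint, format:check, type-check) are assumptions; align them with whatever your stackConfig.md defines before treating this as enforcement:

```shell
#!/bin/sh
# Sketch of .husky/pre-commit. The npm script names below are assumptions;
# wire them to the commands your stackConfig.md actually defines.
npx secretlint "**/*" || exit 1   # refuse commits that contain credentials
npm run lint          || exit 1   # ESLint / ruff, per stack
npm run format:check  || exit 1   # formatter in check-only mode
npm run type-check    || exit 1   # tsc --noEmit / mypy, per stack
```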

File Structure

your-project/
├── cline_docs/                  # Memory Bank — read every session
│   ├── activeContext.md         # What we're doing now (always loaded)
│   ├── stackConfig.md           # Stack, commands, maturity level (always loaded)
│   ├── projectBrief.md          # Vision, scope, non-negotiables
│   ├── progress.md              # Status and completed milestones
│   ├── systemPatterns.md        # Architecture decisions + ADR index
│   ├── techContext.md           # Infrastructure and integrations
│   ├── onboarding.md            # New team member reading order + project context
│   ├── common-mistakes.md       # Known pitfalls in this project
│   ├── tech-debt.md             # Deferred work (no TODOs in code)
│   ├── specs/                   # Spec documents + test skeletons (STANDARD/MAJOR)
│   └── adr/                     # Architecture Decision Records + template
├── .cline/
│   ├── skills/ (22)             # Capabilities: spec, security, UX, deploy…
│   ├── stacks/ (7)              # NestJS · tRPC · Django · Laravel · Go · FastAPI · Next.js AI
│   └── subagents/ (2)           # code-reviewer + debugger
├── .github/
│   ├── workflows/               # ci-node · ci-python · deploy · release
│   ├── ISSUE_TEMPLATE/          # bug + feature request templates
│   └── pull_request_template.md
├── .husky/                      # Safety Gates: secrets · commits · coverage
│   ├── pre-commit               # secretlint + lint + format + type-check
│   ├── commit-msg               # commitlint (conventional commits)
│   └── pre-push                 # branch naming + tests + dep audit
├── docs/
│   ├── skill-decision-tree.md   # which skill + which stack to use
│   └── templates/ (14)          # ESLint · ruff · Playwright · Vitest · Jest · pytest ·
│                                #   commitlint · secretlint · renovate · release-please
├── .clinerules                  # The "Constitution" — 827 lines
├── .clinerules-critical.md      # 10 hard-stop rules (always in context)
├── .clinerules-quickref.md      # Quick reference card
└── .env.example                 # Environment variables template

Getting Started

# 1. Clone
git clone https://github.com/YOUR_ORG/ai-driven-dev-framework my-project
cd my-project

# 2. Install Husky and activate the hooks
npm install --save-dev husky
npm pkg set scripts.prepare="husky" && npm run prepare   # Husky v9+: points Git's hooksPath at .husky/
chmod +x .husky/*

# 3. Copy quality templates to project root (TypeScript example)
cp docs/templates/eslint.config.mjs .
cp docs/templates/prettier.config.js .       # Prettier (format command)
cp docs/templates/vitest.config.ts .         # or jest.config.ts
cp docs/templates/commitlint.config.js .
cp docs/templates/.secretlintrc.json .
cp docs/templates/playwright.config.ts .     # E2E tests
cp -r docs/templates/e2e/ e2e/

# 4. Set up release automation (optional)
cp docs/templates/release-please-config.json .
cp docs/templates/.release-please-manifest.json .
touch CHANGELOG.md
# GitHub workflows are already in .github/workflows/ — customize them:
#   deploy.yml   → fill in your deploy command and environment names
#   ci-node.yml  → adjust test/build commands for your stack

# 5. Configure (minimum 4 files)
# cline_docs/stackConfig.md   ← pick a preset from .cline/stacks/
# cline_docs/projectBrief.md  ← vision + scope
# cline_docs/activeContext.md ← current task
# cline_docs/progress.md      ← status + milestones

# 6. Start — type this in your AI assistant:
# "Initialize project"
#
# Works with: Cline (VS Code), Claude Code (CLI), Cursor, Windsurf, or
# any AI assistant that can read files from the project directory.