1. Keep session alive with ping-pong. 2. Refreshed tests.
@@ -97,7 +97,15 @@ Execution steps:

 4. Sequential questioning loop (interactive):
    - Present EXACTLY ONE question at a time.
-   - For multiple‑choice questions render options as a Markdown table:
+   - For multiple‑choice questions:
+     * **Analyze all options** and determine the **most suitable option** based on:
+       - Best practices for the project type
+       - Common patterns in similar implementations
+       - Risk reduction (security, performance, maintainability)
+       - Alignment with any explicit project goals or constraints visible in the spec
+     * Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
+     * Format as: `**Recommended:** Option [X] - <reasoning>`
+     * Then render all options as a Markdown table:

      | Option | Description |
      |--------|-------------|
@@ -106,9 +114,14 @@ Execution steps:
      | C | <Option C description> | (add D/E as needed up to 5)
      | Short | Provide a different short answer (<=5 words) | (Include only if free-form alternative is appropriate)

-   - For short‑answer style (no meaningful discrete options), output a single line after the question: `Format: Short answer (<=5 words)`.
+     * After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
+   - For short‑answer style (no meaningful discrete options):
+     * Provide your **suggested answer** based on best practices and context.
+     * Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
+     * Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
    - After the user answers:
-     * Validate the answer maps to one option or fits the <=5 word constraint.
+     * If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
+     * Otherwise, validate the answer maps to one option or fits the <=5 word constraint.
      * If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
      * Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
    - Stop asking further questions when:
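For illustration, a multiple-choice question rendered under the new format might look like this (the question and options are hypothetical):

```text
**Recommended:** Option B - Token bucket handles bursty traffic gracefully and is the most common default for public REST APIs.

Which rate-limiting strategy should the API gateway use?

| Option | Description |
|--------|-------------|
| A | Fixed window counter |
| B | Token bucket |
| C | Sliding window log |
| Short | Provide a different short answer (<=5 words) |

You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.
```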
@@ -54,27 +54,60 @@ You **MUST** consider the user input before proceeding (if not empty).
    - **IF EXISTS**: Read research.md for technical decisions and constraints
    - **IF EXISTS**: Read quickstart.md for integration scenarios

-4. Parse tasks.md structure and extract:
+4. **Project Setup Verification**:
+   - **REQUIRED**: Create/verify ignore files based on actual project setup:
+
+   **Detection & Creation Logic**:
+   - Check if the following command succeeds to determine whether the repository is a git repo (create/verify .gitignore if so):
+
+     ```sh
+     git rev-parse --git-dir 2>/dev/null
+     ```
+   - Check if Dockerfile* exists or Docker is in plan.md → create/verify .dockerignore
+   - Check if .eslintrc* or eslint.config.* exists → create/verify .eslintignore
+   - Check if .prettierrc* exists → create/verify .prettierignore
+   - Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
+   - Check if terraform files (*.tf) exist → create/verify .terraformignore
+   - Check if .helmignore is needed (helm charts present) → create/verify .helmignore
+
+   **If ignore file already exists**: Verify it contains essential patterns; append missing critical patterns only
+   **If ignore file missing**: Create with full pattern set for detected technology
+
+   **Common Patterns by Technology** (from plan.md tech stack):
+   - **Node.js/JavaScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
+   - **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
+   - **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
+   - **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
+   - **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
+   - **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`
+
+   **Tool-Specific Patterns**:
+   - **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
+   - **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
+   - **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
+   - **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
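A minimal shell sketch of this detection-and-creation step, assuming a helper of our own naming and abbreviated pattern lists (the real pattern sets come from plan.md as listed above):

```sh
#!/usr/bin/env sh
# Append a pattern to an ignore file unless an identical line already exists.
ensure_pattern() {
  file="$1"; pattern="$2"
  touch "$file"
  grep -qxF "$pattern" "$file" || printf '%s\n' "$pattern" >> "$file"
}

# Git repository -> create/verify .gitignore with universal patterns.
if git rev-parse --git-dir >/dev/null 2>&1; then
  for p in '.DS_Store' 'Thumbs.db' '*.tmp' '*.swp' '.vscode/' '.idea/'; do
    ensure_pattern .gitignore "$p"
  done
fi

# Dockerfile present -> create/verify .dockerignore.
if ls Dockerfile* >/dev/null 2>&1; then
  for p in 'node_modules/' '.git/' '*.log*' '.env*' 'coverage/'; do
    ensure_pattern .dockerignore "$p"
  done
fi

# ESLint config present -> create/verify .eslintignore.
if ls .eslintrc* eslint.config.* >/dev/null 2>&1; then
  for p in 'node_modules/' 'dist/' 'build/' 'coverage/' '*.min.js'; do
    ensure_pattern .eslintignore "$p"
  done
fi
```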

+5. Parse tasks.md structure and extract:
    - **Task phases**: Setup, Tests, Core, Integration, Polish
    - **Task dependencies**: Sequential vs parallel execution rules
    - **Task details**: ID, description, file paths, parallel markers [P]
    - **Execution flow**: Order and dependency requirements
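Assuming tasks.md follows the checklist format defined under the Task Generation Rules further down, this extraction can be spot-checked from the shell; the commands below are a sketch, not part of the command prompt itself:

```sh
# List task IDs in execution order (T001, T002, ...).
grep -oE 'T[0-9]{3}' tasks.md | sort -u

# Show only tasks marked parallelizable with [P].
grep -E '^- \[[ xX]\] T[0-9]{3} \[P\]' tasks.md

# Count tasks that are still unchecked.
grep -c '^- \[ \]' tasks.md
```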

-5. Execute implementation following the task plan:
+6. Execute implementation following the task plan:
    - **Phase-by-phase execution**: Complete each phase before moving to the next
    - **Respect dependencies**: Run sequential tasks in order; parallel tasks [P] can run together
    - **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
    - **File-based coordination**: Tasks affecting the same files must run sequentially
    - **Validation checkpoints**: Verify each phase completion before proceeding

-6. Implementation execution rules:
+7. Implementation execution rules:
    - **Setup first**: Initialize project structure, dependencies, configuration
    - **Tests before code**: If tests are required, write them for contracts, entities, and integration scenarios first
    - **Core development**: Implement models, services, CLI commands, endpoints
    - **Integration work**: Database connections, middleware, logging, external services
    - **Polish and validation**: Unit tests, performance optimization, documentation

-7. Progress tracking and error handling:
+8. Progress tracking and error handling:
    - Report progress after each completed task
    - Halt execution if any non-parallel task fails
    - For parallel tasks [P], continue with successful tasks and report failed ones
@@ -82,7 +115,7 @@ You **MUST** consider the user input before proceeding (if not empty).
    - Suggest next steps if implementation cannot proceed
    - **IMPORTANT**: For completed tasks, make sure to mark the task off as [X] in the tasks file.

-8. Completion validation:
+9. Completion validation:
    - Verify all required tasks are completed
    - Check that implemented features match the original specification
    - Validate that tests pass and coverage meets requirements
@@ -17,7 +17,7 @@ You **MUST** consider the user input before proceeding (if not empty).

 1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
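A quick illustration of that quoting rule (the argument value and its placement are hypothetical):

```sh
# 'I'\''m Groot' concatenates 'I', an escaped literal quote \', and 'm Groot'.
.specify/scripts/bash/setup-plan.sh --json 'I'\''m Groot'

# Equivalent with double quotes when the value contains no $, `, or \:
.specify/scripts/bash/setup-plan.sh --json "I'm Groot"
```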

-2. **Load context**: Read FEATURE_SPEC and `.specify.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).
+2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).

 3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
    - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")

@@ -22,27 +22,13 @@ You **MUST** consider the user input before proceeding (if not empty).
    - **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
    - Note: Not all projects have all documents. Generate tasks based on what's available.

-3. **Execute task generation workflow** (follow the template structure):
+3. **Execute task generation workflow**:
    - Load plan.md and extract tech stack, libraries, project structure
-   - **Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)**
-   - If data-model.md exists: Extract entities → map to user stories
-   - If contracts/ exists: Each file → map endpoints to user stories
-   - If research.md exists: Extract decisions → generate setup tasks
-   - **Generate tasks ORGANIZED BY USER STORY**:
-     - Setup tasks (shared infrastructure needed by all stories)
-     - **Foundational tasks (prerequisites that must complete before ANY user story can start)**
-     - For each user story (in priority order P1, P2, P3...):
-       - Group all tasks needed to complete JUST that story
-       - Include models, services, endpoints, UI components specific to that story
-       - Mark which tasks are [P] parallelizable
-       - If tests requested: Include tests specific to that story
-     - Polish/Integration tasks (cross-cutting concerns)
-   - **Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature spec or user asks for TDD approach
-   - Apply task rules:
-     - Different files = mark [P] for parallel
-     - Same file = sequential (no [P])
-     - If tests requested: Tests before implementation (TDD order)
-   - Number tasks sequentially (T001, T002...)
+   - Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
+   - If data-model.md exists: Extract entities and map to user stories
+   - If contracts/ exists: Map endpoints to user stories
+   - If research.md exists: Extract decisions for setup tasks
+   - Generate tasks organized by user story (see Task Generation Rules below)
    - Generate dependency graph showing user story completion order
    - Create parallel execution examples per user story
    - Validate task completeness (each user story has all needed tasks, independently testable)
@@ -52,12 +38,9 @@ You **MUST** consider the user input before proceeding (if not empty).
    - Phase 1: Setup tasks (project initialization)
    - Phase 2: Foundational tasks (blocking prerequisites for all user stories)
    - Phase 3+: One phase per user story (in priority order from spec.md)
-     - Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
-     - Clear [Story] labels (US1, US2, US3...) for each task
-     - [P] markers for parallelizable tasks within each story
-     - Checkpoint markers after each story phase
+     - Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
    - Final Phase: Polish & cross-cutting concerns
    - Numbered tasks (T001, T002...) in execution order
+   - All tasks must follow the strict checklist format (see Task Generation Rules below)
    - Clear file paths for each task
    - Dependencies section showing story completion order
    - Parallel execution examples per story
@@ -69,6 +52,7 @@ You **MUST** consider the user input before proceeding (if not empty).
    - Parallel opportunities identified
    - Independent test criteria for each story
    - Suggested MVP scope (typically just User Story 1)
+   - Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)

 Context for task generation: {{args}}
@@ -76,10 +60,44 @@ The tasks.md should be immediately executable - each task must be specific enoug

 ## Task Generation Rules

-**IMPORTANT**: Tests are optional. Only generate test tasks if the user explicitly requested testing or TDD approach in the feature specification.
+**CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing.
+
+**Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if the user requests a TDD approach.
+
+### Checklist Format (REQUIRED)
+
+Every task MUST strictly follow this format:
+
+```text
+- [ ] [TaskID] [P?] [Story?] Description with file path
+```
+
+**Format Components**:
+
+1. **Checkbox**: ALWAYS start with `- [ ]` (markdown checkbox)
+2. **Task ID**: Sequential number (T001, T002, T003...) in execution order
+3. **[P] marker**: Include ONLY if the task is parallelizable (different files, no dependencies on incomplete tasks)
+4. **[Story] label**: REQUIRED for user story phase tasks only
+   - Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
+   - Setup phase: NO story label
+   - Foundational phase: NO story label
+   - User Story phases: MUST have story label
+   - Polish phase: NO story label
+5. **Description**: Clear action with exact file path
+
+**Examples**:
+
+- ✅ CORRECT: `- [ ] T001 Create project structure per implementation plan`
+- ✅ CORRECT: `- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py`
+- ✅ CORRECT: `- [ ] T012 [P] [US1] Create User model in src/models/user.py`
+- ✅ CORRECT: `- [ ] T014 [US1] Implement UserService in src/services/user_service.py`
+- ❌ WRONG: `- [ ] Create User model` (missing ID and Story label)
+- ❌ WRONG: `T001 [US1] Create model` (missing checkbox)
+- ❌ WRONG: `- [ ] [US1] Create User model` (missing Task ID)
+- ❌ WRONG: `- [ ] T001 [US1] Create model` (missing file path)
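One way to mechanically spot-check this format (a sketch; the regex is ours, not part of the template):

```sh
# Print checklist lines that do NOT match the required format
# (checkbox, T-number ID, optional [P], optional [USn] label, description).
grep -E '^- \[[ xX]\]' tasks.md \
  | grep -vE '^- \[[ xX]\] T[0-9]{3}( \[P\])?( \[US[0-9]+\])? .+'
```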

 ### Task Organization

 1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION:
    - Each user story (P1, P2, P3...) gets its own phase
    - Map all related components to their story:
@@ -94,22 +112,21 @@ The tasks.md should be immediately executable - each task must be specific enoug
    - If tests requested: Each contract → contract test task [P] before implementation in that story's phase

 3. **From Data Model**:
-   - Map each entity → to the user story(ies) that need it
+   - Map each entity to the user story(ies) that need it
    - If entity serves multiple stories: Put in earliest story or Setup phase
    - Relationships → service layer tasks in appropriate story phase

 4. **From Setup/Infrastructure**:
    - Shared infrastructure → Setup phase (Phase 1)
    - Foundational/blocking tasks → Foundational phase (Phase 2)
    - Examples: Database schema setup, authentication framework, core libraries, base configurations
    - These MUST complete before any user story can be implemented
    - Story-specific setup → within that story's phase

-5. **Ordering**:
-   - Phase 1: Setup (project initialization)
-   - Phase 2: Foundational (blocking prerequisites - must complete before user stories)
-   - Phase 3+: User Stories in priority order (P1, P2, P3...)
-   - Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
-   - Final Phase: Polish & Cross-Cutting Concerns
-   - Each user story phase should be a complete, independently testable increment
+### Phase Structure
+
+- **Phase 1**: Setup (project initialization)
+- **Phase 2**: Foundational (blocking prerequisites - MUST complete before user stories)
+- **Phase 3+**: User Stories in priority order (P1, P2, P3...)
+  - Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
+  - Each phase should be a complete, independently testable increment
+- **Final Phase**: Polish & Cross-Cutting Concerns
"""