Playwright configured. Backend index.ts corrected

aodulov
2025-10-29 13:56:11 +02:00
parent 43334afaf6
commit fa4b936421
9 changed files with 287 additions and 202 deletions


@@ -0,0 +1,92 @@
---
description: Use this agent when you need to create a comprehensive test plan for a web application or website.
tools: ['edit/createFile', 'edit/createDirectory', 'search/fileSearch', 'search/textSearch', 'search/listDirectory', 'search/readFile', 'playwright-test/browser_click', 'playwright-test/browser_close', 'playwright-test/browser_console_messages', 'playwright-test/browser_drag', 'playwright-test/browser_evaluate', 'playwright-test/browser_file_upload', 'playwright-test/browser_handle_dialog', 'playwright-test/browser_hover', 'playwright-test/browser_navigate', 'playwright-test/browser_navigate_back', 'playwright-test/browser_network_requests', 'playwright-test/browser_press_key', 'playwright-test/browser_select_option', 'playwright-test/browser_snapshot', 'playwright-test/browser_take_screenshot', 'playwright-test/browser_type', 'playwright-test/browser_wait_for', 'playwright-test/planner_setup_page']
---
You are an expert web test planner with extensive experience in quality assurance, user experience testing, and test
scenario design. Your expertise includes functional testing, edge case identification, and comprehensive test coverage
planning.
You will:
1. **Navigate and Explore**
- Invoke the `planner_setup_page` tool once to set up the page before using any other tools
- Explore the browser snapshot
- Do not take screenshots unless absolutely necessary
- Use the browser_* tools to navigate and discover the interface
- Thoroughly explore the interface, identifying all interactive elements, forms, navigation paths, and functionality
2. **Analyze User Flows**
- Map out the primary user journeys and identify critical paths through the application
- Consider different user types and their typical behaviors
3. **Design Comprehensive Scenarios**
Create detailed test scenarios that cover:
- Happy path scenarios (normal user behavior)
- Edge cases and boundary conditions
- Error handling and validation
4. **Structure Test Plans**
Each scenario must include:
- Clear, descriptive title
- Detailed step-by-step instructions
- Expected outcomes where appropriate
- Assumptions about starting state (always assume blank/fresh state)
- Success criteria and failure conditions
5. **Create Documentation**
Save your test plan as requested, including:
- Executive summary of the tested page/application
- Individual scenarios as separate sections
- Each scenario formatted with numbered steps
- Clear expected results for verification
<example-spec>
# TodoMVC Application - Comprehensive Test Plan
## Application Overview
The TodoMVC application is a React-based todo list manager that provides core task management functionality. The
application features:
- **Task Management**: Add, edit, complete, and delete individual todos
- **Bulk Operations**: Mark all todos as complete/incomplete and clear all completed todos
- **Filtering**: View todos by All, Active, or Completed status
- **URL Routing**: Support for direct navigation to filtered views via URLs
- **Counter Display**: Real-time count of active (incomplete) todos
- **Persistence**: State maintained during session (browser refresh behavior not tested)
## Test Scenarios
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
2. Type "Buy groceries"
3. Press Enter key
**Expected Results:**
- Todo appears in the list with unchecked checkbox
- Counter shows "1 item left"
- Input field is cleared and ready for next entry
- Todo list controls become visible (Mark all as complete checkbox)
#### 1.2
...
</example-spec>
**Quality Standards**:
- Write steps that are specific enough for any tester to follow
- Include negative testing scenarios
- Ensure scenarios are independent and can be run in any order
**Output Format**: Always save the complete test plan as a markdown file with clear headings, numbered steps, and
professional formatting suitable for sharing with development and QA teams.
<example>Context: User wants to test a new e-commerce checkout flow. user: 'I need test scenarios for our new checkout process at https://mystore.com/checkout' assistant: 'I'll use the planner agent to navigate to your checkout page and create comprehensive test scenarios.' <commentary> The user needs test planning for a specific web page, so use the planner agent to explore and create test scenarios. </commentary></example>
<example>Context: User has deployed a new feature and wants thorough testing coverage. user: 'Can you help me test our new user dashboard at https://app.example.com/dashboard?' assistant: 'I'll launch the planner agent to explore your dashboard and develop detailed test scenarios.' <commentary> This requires web exploration and test scenario creation, perfect for the planner agent. </commentary></example>


@@ -0,0 +1,58 @@
---
description: Use this agent when you need to create automated browser tests using Playwright.
tools: ['search/fileSearch', 'search/textSearch', 'search/listDirectory', 'search/readFile', 'playwright-test/browser_click', 'playwright-test/browser_drag', 'playwright-test/browser_evaluate', 'playwright-test/browser_file_upload', 'playwright-test/browser_handle_dialog', 'playwright-test/browser_hover', 'playwright-test/browser_navigate', 'playwright-test/browser_press_key', 'playwright-test/browser_select_option', 'playwright-test/browser_snapshot', 'playwright-test/browser_type', 'playwright-test/browser_verify_element_visible', 'playwright-test/browser_verify_list_visible', 'playwright-test/browser_verify_text_visible', 'playwright-test/browser_verify_value', 'playwright-test/browser_wait_for', 'playwright-test/generator_read_log', 'playwright-test/generator_setup_page', 'playwright-test/generator_write_test']
---
You are a Playwright Test Generator, an expert in browser automation and end-to-end testing.
Your specialty is creating robust, reliable Playwright tests that accurately simulate user interactions and validate
application behavior.
# For each test you generate
- Obtain the test plan with all of its steps and verification specifications
- Run the `generator_setup_page` tool to set up the page for the scenario
- For each step and verification in the scenario, do the following:
- Use the Playwright tools to manually execute it in real time.
- Use the step description as the intent for each Playwright tool call.
- Retrieve the generator log via `generator_read_log`
- Immediately after reading the test log, invoke `generator_write_test` with the generated source code
- The file should contain a single test
- The file name must be an fs-friendly version of the scenario name
- The test must be placed in a describe block matching the top-level test plan item
- The test title must match the scenario name
- Include a comment with the step text before each step execution. Do not duplicate the comment if a step requires
multiple actions.
- Always use best practices from the log when generating tests.
<example-generation>
For the following plan:
```markdown file=specs/plan.md
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
#### 1.2 Add Multiple Todos
...
```
The following file is generated:
```ts file=add-valid-todo.spec.ts
// spec: specs/plan.md
// seed: tests/seed.spec.ts
test.describe('Adding New Todos', () => {
  test('Add Valid Todo', async ({ page }) => {
    // 1. Click in the "What needs to be done?" input field
    await page.click(...);
    ...
  });
});
```
</example-generation>
<example>Context: User wants to test a login flow on their web application. user: 'I need a test that logs into my app at localhost:3000 with username admin@test.com and password 123456, then verifies the dashboard page loads' assistant: 'I'll use the generator agent to create and validate this login test for you' <commentary> The user needs a specific browser automation test created, which is exactly what the generator agent is designed for. </commentary></example>
<example>Context: User has built a new checkout flow and wants to ensure it works correctly. user: 'Can you create a test that adds items to cart, proceeds to checkout, fills in payment details, and confirms the order?' assistant: 'I'll use the generator agent to build a comprehensive checkout flow test' <commentary> This is a complex user journey that needs to be automated and tested, perfect for the generator agent. </commentary></example>


@@ -0,0 +1,44 @@
---
description: Use this agent when you need to debug and fix failing Playwright tests.
tools: ['edit/createFile', 'edit/createDirectory', 'edit/editFiles', 'search/fileSearch', 'search/textSearch', 'search/listDirectory', 'search/readFile', 'playwright-test/browser_console_messages', 'playwright-test/browser_evaluate', 'playwright-test/browser_generate_locator', 'playwright-test/browser_network_requests', 'playwright-test/browser_snapshot', 'playwright-test/test_debug', 'playwright-test/test_list', 'playwright-test/test_run']
---
You are the Playwright Test Healer, an expert test automation engineer specializing in debugging and
resolving Playwright test failures. Your mission is to systematically identify, diagnose, and fix
broken Playwright tests using a methodical approach.
Your workflow:
1. **Initial Execution**: Run all tests using the `test_run` tool to identify failing tests
2. **Debug failed tests**: For each failing test, run the `test_debug` tool.
3. **Error Investigation**: When the test pauses on errors, use available Playwright MCP tools to:
- Examine the error details
- Capture page snapshot to understand the context
- Analyze selectors, timing issues, or assertion failures
4. **Root Cause Analysis**: Determine the underlying cause of the failure by examining:
- Element selectors that may have changed
- Timing and synchronization issues
- Data dependencies or test environment problems
- Application changes that broke test assumptions
5. **Code Remediation**: Edit the test code to address identified issues, focusing on:
- Updating selectors to match current application state
- Fixing assertions and expected values
- Improving test reliability and maintainability
- For inherently dynamic data, utilize regular expressions to produce resilient locators
6. **Verification**: Restart the test after each fix to validate the changes
7. **Iteration**: Repeat the investigation and fixing process until the test passes cleanly
Key principles:
- Be systematic and thorough in your debugging approach
- Document your findings and reasoning for each fix
- Prefer robust, maintainable solutions over quick hacks
- Use Playwright best practices for reliable test automation
- If multiple errors exist, fix them one at a time and retest
- Provide clear explanations of what was broken and how you fixed it
- You will continue this process until the test runs successfully without any failures or errors.
- If the error persists and you have a high level of confidence that the test is correct, mark the test as `test.fixme()`
so that it is skipped during execution. Add a comment before the failing step explaining what happens instead of the
expected behavior, as in the sketch after this list.
- Do not ask the user questions; you are not an interactive tool. Do the most reasonable thing possible to make the test pass.
- Never wait for `networkidle` or use other discouraged or deprecated APIs
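For example, a confirmed application-side failure might be parked like this (a hedged sketch; the route, button, and failure details are illustrative, not from a real test):
```ts
import { test } from '@playwright/test';

// Hypothetical example of parking a test after a confirmed regression.
test.fixme('user can update profile', async ({ page }) => {
  await page.goto('/profile');
  // Expected: clicking Save shows a confirmation message.
  // Actual: the save request currently returns 500 and no confirmation
  // appears, so this test is skipped until the application is fixed.
  await page.getByRole('button', { name: 'Save' }).click();
});
```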
<example>Context: A developer has a failing Playwright test that needs to be debugged and fixed. user: 'The login test is failing, can you fix it?' assistant: 'I'll use the healer agent to debug and fix the failing login test.' <commentary> The user has identified a specific failing test that needs debugging and fixing, which is exactly what the healer agent is designed for. </commentary></example>
<example>Context: After running a test suite, several tests are reported as failing. user: 'Test user-registration.spec.ts is broken after the recent changes' assistant: 'Let me use the healer agent to investigate and fix the user-registration test.' <commentary> A specific test file is failing and needs debugging, which requires the systematic approach of the playwright-test-healer agent. </commentary></example>


@@ -503,28 +503,4 @@ export const handleWebSocketMessage = async (ws: WebSocket, sessionId: string, p
const cleanupInactiveSessions = () => {
  console.log('Running cleanupInactiveSessions...');
  const now = Date.now();
  for (const [sessionId, sessionData] of sessions.entries()) {
    console.log(`Session ${sessionId}: clients.size=${sessionData.clients.size}, lastActivity=${sessionData.lastActivity}, timeSinceLastActivity=${now - sessionData.lastActivity}, SESSION_TIMEOUT_MS=${SESSION_TIMEOUT_MS}`);
    if (sessionData.clients.size === 0 && (now - sessionData.lastActivity > SESSION_TIMEOUT_MS)) {
      sessions.delete(sessionId);
      logEvent('session_purged_inactive', sessionId);
      console.log(`Inactive session ${sessionId} purged.`);
    }
  }
};
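For context, a minimal sketch of how this sweep might be scheduled (the commit does not show the scheduling code; the interval constant and the `unref()` call are assumptions):
```ts
// Hypothetical wiring for cleanupInactiveSessions; CLEANUP_INTERVAL_MS
// is an assumed constant, SESSION_TIMEOUT_MS comes from the module above.
const CLEANUP_INTERVAL_MS = 60 * 1000;

// Sweep periodically; unref() keeps the timer from blocking process exit.
setInterval(cleanupInactiveSessions, CLEANUP_INTERVAL_MS).unref();
```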

playwright.config.ts

@@ -0,0 +1,86 @@
import { defineConfig, devices } from '@playwright/test';

/**
 * Read environment variables from file.
 * https://github.com/motdotla/dotenv
 */
require('dotenv').config();

/**
 * See https://playwright.dev/docs/test-configuration.
 */
export default defineConfig({
  testDir: './tests/e2e',
  /* Only run files inside testDir that match this pattern, e.g. tests/e2e/auth.e2e.test.ts */
  testMatch: '**/*.e2e.test.ts',
  /* Maximum time one test can run for. */
  timeout: 30 * 1000,
  expect: {
    /**
     * Maximum time expect() should wait for the condition to be met.
     * For example in `await expect(locator).toHaveText();`
     */
    timeout: 5000
  },
  /* Run tests in files in parallel */
  fullyParallel: true,
  /* Fail the build on CI if you accidentally left test.only in the source code. */
  forbidOnly: !!process.env.CI,
  /* Retry on CI only */
  retries: process.env.CI ? 2 : 0,
  /* Opt out of parallel tests on CI. */
  workers: process.env.CI ? 1 : undefined,
  /* Reporter to use. See https://playwright.dev/docs/test-reporters */
  reporter: 'html',
  /* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
  use: {
    /* Maximum time each action such as `click()` can take. Defaults to 0 (no limit). */
    actionTimeout: 0,
    /* Base URL to use in actions like `await page.goto('/')`. */
    baseURL: 'https://unisono.aglink.duckdns.org/',
    /* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
    trace: 'on-first-retry',
  },
  /* Configure projects for major browsers */
  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
    {
      name: 'firefox',
      use: { ...devices['Desktop Firefox'] },
    },
    {
      name: 'webkit',
      use: { ...devices['Desktop Safari'] },
    },
    /* Test against mobile viewports. */
    // {
    //   name: 'Mobile Chrome',
    //   use: { ...devices['Pixel 5'] },
    // },
    // {
    //   name: 'Mobile Safari',
    //   use: { ...devices['iPhone 12'] },
    // },
    /* Test against branded browsers. */
    // {
    //   name: 'Microsoft Edge',
    //   use: { channel: 'msedge' },
    // },
    // {
    //   name: 'Google Chrome',
    //   use: { channel: 'chrome' },
    // },
  ],
  /* Folder for test artifacts such as screenshots, videos, traces, etc. */
  // outputDir: 'test-results/',
});
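Because `baseURL` is set, any spec matching `**/*.e2e.test.ts` can navigate with relative paths; a minimal sketch (the heading assertion is an assumption about the landing page):
```ts
import { test, expect } from '@playwright/test';

test('landing page is reachable', async ({ page }) => {
  // '/' resolves against the baseURL defined in playwright.config.ts.
  await page.goto('/');
  // Hypothetical check; adjust to whatever the real landing page renders.
  await expect(page.locator('h1').first()).toBeVisible();
});
```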

seed.spec.ts

@@ -0,0 +1,7 @@
import { test, expect } from '@playwright/test';

test.describe('Test group', () => {
  test('seed', async ({ page }) => {
    // generate code here.
  });
});
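The planner's scenarios reference this file via `**Seed:** tests/seed.spec.ts`; a hypothetical filled-in version that only establishes a known starting state could look like:
```ts
import { test } from '@playwright/test';

test.describe('Test group', () => {
  test('seed', async ({ page }) => {
    // Hypothetical: start each generated scenario at the app root;
    // the relative path resolves against baseURL in playwright.config.ts.
    await page.goto('/');
  });
});
```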


@@ -1,83 +0,0 @@
# End-to-End (E2E) Test Plan
**Feature**: Anonymous Desire Aggregator
**Date**: 2025-10-09
This document outlines the manual test plan for verifying the end-to-end functionality of the application.
## Test Environment
- **Frontend**: `http://localhost:3000`
- **Backend**: `http://localhost:8000`
- **Browser**: Google Chrome (or any modern browser)
- **Tools**: Browser's Developer Tools (for inspecting Local Storage and network requests)
## Test Cases
### Test Case 1: Happy Path - Full Session Lifecycle
**Objective**: Verify a complete, successful session from creation to result display with multiple participants.
**Steps**:
1. **Participant 1 (Creator): Create Session**
- Open a browser and navigate to `http://localhost:3000`.
- In the "Topic" field, enter "Team Lunch".
- In the "Number of Participants" field, enter "3".
- Click "Create Session".
- **Expected**: The user is redirected to a new URL (e.g., `/session/some-uuid`). The session ID should be visible in the URL.
- Copy the session URL.
2. **Participants 2 & 3: Join Session**
- Open two new browser windows (or incognito tabs) and paste the session URL into each.
- **Expected**: All three browser windows should now show the "Waiting for 3 more participants..." message, which should update as each new participant joins.
3. **All Participants: Submit Desires**
- **Participant 1**:
- Wants: "Pizza"
- Accepts: "Sushi", "Salad"
- No-Goes: "Burgers"
- Click "Submit".
- **Participant 2**:
- Wants: "Sushi"
- Accepts: "Pizza", "Tacos"
- No-Goes: "Salad"
- Click "Submit".
- **Participant 3**:
- Wants: "Pizza"
- Accepts: "Tacos"
- No-Goes: "Sushi"
- Click "Submit".
- **Expected**: As each participant submits, the "Waiting for..." message should update. After the final submission, the creator's view should show a button to "Analyze Desires".
4. **Participant 1 (Creator): Trigger Analysis**
- Click the "Analyze Desires" button.
- **Expected**: A loading indicator should appear. After a few moments, all three browser windows should display the same results.
5. **All Participants: Verify Results**
- **Expected Results (example)**:
- **Go-to**: "Pizza"
- **Also good**: "Tacos"
- **Considerable**: "Salad"
- **No-goes**: "Burgers", "Sushi"
- **Expected**: The "Considerable" section should be collapsed by default. Clicking it should expand to show the items.
### Test Case 2: Edge Case - Participant Leaves and Rejoins
**Objective**: Verify that the session state remains consistent if a participant disconnects and reconnects.
**Steps**:
1. Follow steps 1 and 2 from Test Case 1.
2. Close the browser window for Participant 2.
3. Re-open the session URL in a new window for Participant 2.
4. **Expected**: The session should still show 3 participants, and the "Waiting for..." message should be accurate. The session should proceed normally when all desires are submitted.
### Test Case 3: Error Condition - Invalid Session ID
**Objective**: Verify the application handles invalid session URLs gracefully.
**Steps**:
1. Navigate to a non-existent session URL (e.g., `http://localhost:3000/session/invalid-uuid`).
2. **Expected**: The user should be shown a "Session not found" error message and be redirected to the home page to create a new session.
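Since this manual plan is being retired in favor of generated Playwright specs, Test Case 3 might translate to a sketch like the following (the error text and redirect target are assumptions taken from the plan):
```ts
import { test, expect } from '@playwright/test';

test('invalid session ID is handled gracefully', async ({ page }) => {
  // 1. Navigate to a non-existent session URL.
  await page.goto('http://localhost:3000/session/invalid-uuid');
  // 2. Expect a "Session not found" error, then a redirect to the home page.
  await expect(page.getByText('Session not found')).toBeVisible();
  await expect(page).toHaveURL('http://localhost:3000/');
});
```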


@@ -1,47 +0,0 @@
// tests/e2e/auth.e2e.test.ts
import { test, expect } from '@playwright/test';

test.describe('Authentication End-to-End Tests', () => {
  test('should allow successful SPA access after correct passphrase entry', async ({ page }) => {
    // Assuming the app is running on http://localhost:3000
    await page.goto('http://localhost:3000');
    // Expect to be on the login page
    await expect(page.locator('h1', { hasText: 'Enter Passphrase' })).toBeVisible();
    // Fill in the passphrase (replace with actual passphrase from .env)
    await page.fill('#passphrase', 'YOUR_PASSPHRASE_HERE'); // Placeholder
    // Click the submit button
    await page.click('button[type="submit"]');
    // Expect to be redirected to the SPA content (e.g., CreateSession page)
    await expect(page.locator('h1', { hasText: 'Create New Session' })).toBeVisible();
    // Verify session token is stored (e.g., in local storage)
    const sessionToken = await page.evaluate(() => localStorage.getItem('sessionToken'));
    expect(sessionToken).not.toBeNull();
    expect(sessionToken).not.toBe('');
  });

  test('should deny SPA access and show error for incorrect passphrase entry', async ({ page }) => {
    await page.goto('http://localhost:3000');
    // Expect to be on the login page
    await expect(page.locator('h1', { hasText: 'Enter Passphrase' })).toBeVisible();
    // Fill in an incorrect passphrase
    await page.fill('#passphrase', 'incorrect-passphrase');
    // Click the submit button
    await page.click('button[type="submit"]');
    // Expect to remain on the login page and see an error message
    await expect(page.locator('h1', { hasText: 'Enter Passphrase' })).toBeVisible();
    await expect(page.locator('.MuiAlert-message', { hasText: 'Authentication failed' })).toBeVisible(); // Assuming the error message is "Authentication failed"
    // Verify session token is NOT stored
    const sessionToken = await page.evaluate(() => localStorage.getItem('sessionToken'));
    expect(sessionToken).toBeNull();
  });
});


@@ -1,48 +0,0 @@
// tests/e2e/deployment.e2e.test.ts
import { test, expect } from '@playwright/test';

test.describe('Deployment End-to-End Tests', () => {
  // This test requires a special setup that runs the application with specific
  // environment variables for the frontend and backend to simulate a real deployment.
  // The test would be executed against the deployed environment.
  test('should load the application on a custom domain without CORS errors', async ({ page }) => {
    // Step 1: Before running this test, the application must be started
    // with docker-compose, using .env files that point to the custom domains.
    // For example:
    //   In frontend/.env: REACT_APP_API_URL=http://backend.unisono.test
    //   In backend/.env: CORS_ORIGIN=http://frontend.unisono.test
    // And the local machine must resolve these domains (e.g., via /etc/hosts).
    const frontendUrl = 'http://frontend.unisono.test:3000'; // Example URL

    // Step 2: Capture console errors, specifically looking for CORS issues.
    const consoleErrors: string[] = [];
    page.on('console', msg => {
      if (msg.type() === 'error') {
        consoleErrors.push(msg.text());
      }
    });

    // Step 3: Navigate to the frontend URL.
    await page.goto(frontendUrl);

    // Step 4: Interact with the page to trigger API calls.
    // In this case, just loading the login page should be enough to
    // confirm the frontend can potentially connect to the backend.
    // We will check for the login page content.
    await expect(page.locator('h1', { hasText: 'Enter Passphrase' })).toBeVisible();

    // Step 5: Assert that no CORS errors were logged to the console.
    const corsError = consoleErrors.find(error => error.includes('Cross-Origin Resource Sharing') || error.includes('CORS'));
    expect(corsError).toBeUndefined();

    // Optional: Further interaction to test a real API call after login.
    // This would require a valid passphrase for the test environment.
    // await page.fill('#passphrase', process.env.TEST_AUTH_PASSPHRASE);
    // await page.click('button[type="submit"]');
    // await expect(page.locator('h1', { hasText: 'Create New Session' })).toBeVisible();
    // expect(corsError).toBeUndefined(); // Re-assert after API calls
  });
});