# playwright-archive
A lightweight CLI tool to archive and serve your Playwright test run history with a web interface. Useful for CI environments and local development.
## Features
- Archives Playwright reports into timestamped folders
- Serves a simple web dashboard with a React frontend
- View all test runs and open full reports from a browser
- Generate a run results summary file for CI integration (with customizable format and content)
- Real-time test execution through web interface
- Built-in web terminal for running commands
- Disk space monitoring for test artifacts
## Installation
You can install the package locally or globally:
```bash
npm install playwright-archive --save-dev
```
Or globally:
```bash
npm install -g playwright-archive
```
## CLI Commands
### Archive current Playwright run
```bash
npx playwright-archive --archive
```
This copies the latest Playwright report into the `run-history/` directory under a timestamped folder.
### Serve history dashboard
```bash
npx playwright-archive --serve
```
This launches a local Express server with a React-based UI at `http://localhost:3000`.
### Clear history
```bash
npx playwright-archive --clear
```
Removes the entire `run-history/` folder.
## Example (combined usage)
## IMPORTANT
Currently, the package works only if the test results and test report directories keep their default names and locations (default results dir: `test-results`, default report dir: `playwright-report`). These settings must not be overridden in `playwright.config`.
### STEP 1 - configure playwright reporter
In your `playwright.config.ts`, set the reporter to html:
```ts
reporter: [['html', { open: 'never' }]],
```
### STEP 2 - add npm scripts
Add `test` and `posttest` scripts to your `package.json`.
Windows:
```json
{
"scripts": {
"test": "npx playwright test || exit 0",
"posttest": "npx playwright-archive --archive && npx playwright-archive --serve"
}
}
```
UNIX/Linux:
```json
{
"scripts": {
"test": "npx playwright test || true",
"posttest": "npx playwright-archive --archive && npx playwright-archive --serve"
}
}
```
⚠️ Note: We use `exit 0` or `true` to prevent the `posttest` script from being skipped when tests fail.
All test run screenshots/videos will be available from the archive.
After running `npm test`, your reports will be archived and available at `http://localhost:3000`.
## Example 2 (run metadata)
You can save additional information about test runs. This data is displayed as colored badges in the "Metadata" table column.
In your `playwright.config.ts`, set up the reporters:
```ts
reporter: [
  ['html', { open: 'never' }],
  ['json', { outputFile: 'test-results/pw-archive.json' }],
],
```
Windows:
```json
{
"scripts": {
"test": "npx playwright test || exit 0",
"posttest": "npx playwright-archive --archive && npx playwright-archive --serve"
}
}
```
UNIX/Linux:
```json
{
"scripts": {
"test": "npx playwright test || true",
"posttest": "npx playwright-archive --archive && npx playwright-archive --serve"
}
}
```
Add a `playwright-archive.config.js` file in the root folder of your project. Below is an example of the config file:
```javascript
/**
* @type {import('playwright-archive').PlaywrightArchiveConfig}
*/
const config = {
// Server settings
server: {
port: 3000, // Server port (default: 3000)
host: "localhost" // Server host (default: localhost, use 0.0.0.0 for Docker/external access)
},
// Display settings for test metrics and metadata
display: {
// Hide specific metrics from the report
hideMetrics: [
// 'workers', // number of worker processes
// 'projects', // number and names of Playwright projects
// 'duration', // test execution time
// 'totalTests', // total number of tests
// 'passed', // number of passed tests
// 'failed', // number of failed tests
// 'skipped', // number of skipped tests
// 'flaky' // number of flaky tests
],
// Hide passed/failed test suite names in the report
hideTestNames: false
},
// Run results file configuration (for CI integration)
runResults: {
enabled: true, // Enable results file generation
path: './test-results.txt', // Where to save the results
format: 'text', // 'text' or 'json'
include: [ // What information to include
'status', // overall run status
'summary', // test counts
'duration', // total run time
'failedTests', // list of failed tests
'errorMessages' // error messages from fails
],
// Optional custom template for text format
template: `
Test Run Results
---------------
Status: {status}
Duration: {duration}
Summary:
* Total: {totalTests}
* Passed: {passedTests}
* Failed: {failedTests}
* Skipped: {skippedTests}
{hasFailedTests?
Failed Tests:
{failedTestsList}
Error Messages:
{errorMessages}
:}
`
}
};
module.exports = config;
```
Delete any fields you do not need from the config, then run the tests. The server will be launched on `localhost:3000`.
## CI Integration
For CI environments, you can configure the package to generate a summary file after each test run. This file can be used to send notifications (e.g., to Slack) or for further processing in your CI pipeline.
To enable this feature, add the `runResults` section to your configuration:
```javascript
runResults: {
enabled: true, // Enable results file generation
path: './test-results.txt', // Where to save the results
format: 'text', // 'text' or 'json'
include: [ // What information to include
'status', // overall run status
'summary', // test counts
'duration', // total run time
'failedTests', // list of failed tests
'errorMessages' // error messages from fails
]
}
```
Available options for `include`:
- `status` - Overall run status (passed/failed)
- `summary` - Test counts (total/passed/failed/skipped)
- `duration` - Total run duration
- `failedTests` - List of failed tests
- `flakyTests` - List of flaky tests
- `projectStats` - Statistics per project
- `errorMessages` - Error messages from failed tests
### Custom Text Format
You can customize the text output format using a template with variables:
```javascript
template: `
Test Run Results
---------------
Status: {status}
Duration: {duration}
Summary:
* Total: {totalTests}
* Passed: {passedTests}
* Failed: {failedTests}
* Skipped: {skippedTests}
{hasFailedTests?
Failed Tests:
{failedTestsList}
Error Messages:
{errorMessages}
:}
`
```
Available template variables:
- `{status}` - Overall status
- `{duration}` - Formatted duration
- `{totalTests}` - Total test count
- `{passedTests}` - Passed tests count
- `{failedTests}` - Failed tests count
- `{skippedTests}` - Skipped tests count
- `{flakyTests}` - Flaky tests count
- `{failedTestsList}` - List of failed tests
- `{errorMessages}` - Error messages from fails
You can use conditional blocks with the syntax `{condition?content:}` where the content will only be included if the condition is true.
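As an illustration of how such a template could be expanded, here is a hypothetical sketch (the package's real renderer is not shown here; variable and condition names follow the list above):

```javascript
// Hypothetical sketch of template expansion: {name} variables plus
// {condition? ... :} blocks that render only when the condition is truthy.
function renderTemplate(template, vars) {
  // Expand conditional blocks first: keep the body only when vars[cond] is truthy
  const out = template.replace(/\{(\w+)\?([\s\S]*?):\}/g, (_, cond, body) =>
    vars[cond] ? body : ''
  );
  // Then substitute plain {name} variables; leave unknown names untouched
  return out.replace(/\{(\w+)\}/g, (_, name) =>
    name in vars ? String(vars[name]) : `{${name}}`
  );
}
```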
### JSON Format
If you set `format: 'json'`, the results are saved as structured JSON, which is useful for programmatic processing in CI pipelines.
Example JSON output:
```json
{
"status": "Failed",
"summary": {
"total": 100,
"passed": 95,
"failed": 3,
"skipped": 2,
"flaky": 0
},
"duration": "1m 30s",
"failedTests": [
"test1",
"test2",
"test3"
],
"errorMessages": [
"Error in test1: expected true to be false",
"Error in test2: timeout exceeded"
]
}
```
Archived runs are stored in the following structure:
```
your-project/
├── run-history/
│   └── 2025-06-14T11-12-34Z/
│       ├── report/
│       └── metadata.json (optional, added if you configure runs as described in example 2)
├── node_modules/
├── tests/
├── playwright.config.js
└── ...
```
The dashboard displays a table of archived runs with links to full Playwright reports.
## Feedback
Feedback and contributions are welcome!
Visit the package page on npm and use the "Rate", "Report", or "Star" features.
Create an issue or share ideas via the GitLab repository.
Let me know what works well and what can be improved!
This package is actively maintained and under development.
If something doesn't work as expected, it's most likely a bug, not a feature.
Please don't hesitate to open an issue or share a report. Feedback helps me improve quickly.
Thank you for testing, using, and helping this tool grow!
Feel free to create forks, open issues, or contribute.
Your help makes the project better for everyone. Thank you for participating!
## Changelog
- Updated README: improved usage example; it now shows how to archive test reports properly even when tests fail
- Updated package metadata: added GitLab repository link (contributors welcome!)
### [1.1.0] - 2025-06-15
- Implemented displaying general metadata (duration, test quantity, etc.)
- Implemented optional config file
### [1.1.1] - 2025-06-15
- Added homepage (https://playwright-archive-c9228a.gitlab.io)
### [1.2.0] - 2025-06-16
- Now it is possible to change port/host for webview from user config
- Fixed styles in webview
### [2.0.0] - 2025-06-30
BREAKING CHANGES: new configuration structure
- Added run results file generation for CI integration
- Added support for custom text templates and JSON format for results
- Added built-in web terminal for running commands
- Added disk space monitoring for test artifacts
- Added real-time test execution through web interface
- Improved documentation with CI integration examples
## License
MIT License
```
MIT License
Copyright (c) 2025
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```