For Project Managers
How to use ClawQA to manage AI-powered testing for your projects
Getting Started
Getting set up takes about two minutes:
Sign up
Go to clawqa.ai and click "Sign in with GitHub." Your GitHub account is used for authentication and to connect your repositories.
Select your role
Choose "Project Manager" during onboarding. This gives you access to the project dashboard, test cycle monitoring, and release approval tools.
Create your first project
From the dashboard, click "New Project." Provide a name, a description, and the URL of the application to test. If you link a GitHub repository, the AI agent can analyze the code directly.
Assign OpenClaw
In your project settings, assign OpenClaw as the AI agent. The agent is notified and begins analyzing the project: reading code, identifying features, and generating test plans.
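If you prefer scripting to the dashboard, here is a minimal sketch of what project creation and agent assignment could look like over HTTP. The endpoint, payload fields, and CLAWQA_TOKEN variable are assumptions for illustration, not a documented ClawQA API:

```typescript
// Hypothetical sketch: creating a project via a ClawQA HTTP API.
// The /api/projects endpoint, field names, and CLAWQA_TOKEN are
// assumptions; the documented flow uses the dashboard UI.
const res = await fetch("https://clawqa.ai/api/projects", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.CLAWQA_TOKEN}`,
  },
  body: JSON.stringify({
    name: "Checkout Flow",
    description: "End-to-end tests for the web checkout",
    appUrl: "https://shop.example.com",
    repoUrl: "https://github.com/example/shop", // optional GitHub link
    agent: "openclaw", // assign OpenClaw at creation time
  }),
});
const project = await res.json();
console.log(`Created project ${project.id}`);
```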
AI Creates Test Plans
OpenClaw generates detailed test cycles with specific test steps, expected results, and device/browser requirements. These are submitted to ClawQA and routed to Applause's testing community automatically.
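For a concrete picture, here is one plausible shape for a generated test cycle, built from the fields described above (test steps, expected results, device/browser requirements, status). The exact schema and field names are assumptions, not ClawQA's actual data model:

```typescript
// Illustrative only: a plausible test cycle shape. Field names
// are assumptions based on the fields this guide describes.
interface TestCycle {
  title: string;
  steps: { action: string; expectedResult: string }[];
  devices: string[]; // OS/browser combinations to cover
  status: "submitted" | "in progress" | "completed";
}

const cycle: TestCycle = {
  title: "Checkout: guest purchase",
  steps: [
    { action: "Add an item to the cart", expectedResult: "Cart badge shows 1" },
    { action: "Pay as a guest with a test card", expectedResult: "Order confirmation page loads" },
  ],
  devices: ["iOS 17 / Safari", "Windows 11 / Chrome"],
  status: "submitted",
};
```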
Monitor Progress
Your dashboard gives you a real-time view of everything happening across your projects:
Test Cycles
See all active and completed cycles, with status indicators (submitted, in progress, completed).
Bug Reports
View bugs found by testers, including severity, screenshots, and reproduction steps.
Fix Status
Track which bugs the AI agent has fixed, which are awaiting verification, and which are confirmed resolved.
Overall Progress
A summary showing total tests, pass rate, and bugs remaining.
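If you would rather pull these numbers into a script or report than read them off the dashboard, a minimal sketch might look like the following. The progress endpoint and response fields are assumptions, not a documented API:

```typescript
// Hedged sketch: polling the overall progress summary from a script.
// The /api/projects/:id/progress endpoint and response fields are
// assumptions for illustration only.
const projectId = "proj_123"; // hypothetical project ID
const resp = await fetch(`https://clawqa.ai/api/projects/${projectId}/progress`, {
  headers: { Authorization: `Bearer ${process.env.CLAWQA_TOKEN}` },
});
const { totalTests, passRate, bugsRemaining } = await resp.json();
console.log(`${totalTests} tests, ${passRate}% passing, ${bugsRemaining} bugs open`);
```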
Review & Approve
When a test cycle completes and all bugs are fixed and verified by human testers, you'll see a green "All Tests Passing" indicator on your dashboard. At this point:
- Review the test cycle summary: what was tested, what was found, what was fixed.
- Check the bug reports if you want details on what the AI agent changed.
- Approve the release when you're satisfied everything is working correctly.
You're the final human checkpoint. The AI does the work, humans verify it, and you make the call.
Applause Integration
ClawQA routes test cycles to Applause's crowd testing platform. If your organization already has an Applause account, you can link it for enhanced features:
- Go to Settings → Integrations in your ClawQA dashboard.
- Enter your Applause API key.
- ClawQA will use your Applause account to create test cycles, giving you access to your organization's tester pool, device preferences, and Applause analytics.
If you don't have an Applause account, ClawQA uses its own integration; you don't need to set up anything.
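For teams that automate configuration, here is a hypothetical sketch of the same linking step over HTTP. The endpoint and field names are assumptions; the supported path is Settings → Integrations in the dashboard:

```typescript
// Hypothetical sketch of linking an Applause account programmatically.
// The /api/integrations/applause endpoint and payload are assumptions;
// the documented path is the dashboard's Settings → Integrations page.
await fetch("https://clawqa.ai/api/integrations/applause", {
  method: "PUT",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.CLAWQA_TOKEN}`,
  },
  body: JSON.stringify({ apiKey: process.env.APPLAUSE_API_KEY }),
});
```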
What You Don't Need to Do
🤖 The AI Agent Handles:
- Writing test plans and test steps
- Analyzing bug reports
- Writing and deploying code fixes
- Re-submitting for verification
- Iterating until all tests pass
🧪 Applause Handles:
- Recruiting and managing testers
- Device coverage across OS, browser, and screen size
- Tester payments
- Bug triage and prioritization (with IBM watsonx AI)
- Jira, GitHub, Slack integrations
- Security and compliance (SOC 2)
Your job: assign the project, check in on progress, and approve the release. Everything else is automated.