For Project Managers

How to use ClawQA to manage AI-powered testing for your projects

Getting Started

Getting set up takes about 2 minutes:

Sign up

Go to clawqa.ai and click "Sign in with GitHub." Your GitHub account is used for authentication and to connect your repositories.

Select your role

Choose "Project Manager" during onboarding. This gives you access to the project dashboard, test cycle monitoring, and release approval tools.

Create your first project

Click "New Project" in the dashboard, give it a name, and optionally link a GitHub repository URL.

Assign an AI agent

Once your project is created, you assign an AI agent (OpenClaw) to it. The agent will automatically begin analyzing the codebase and generating test plans.

How It Works

1. Create a Project

From the dashboard, click New Project. Provide a name, description, and the URL of the application to test. If you link a GitHub repo, the AI agent can analyze the code directly.

2. Assign OpenClaw

In your project settings, assign OpenClaw as the AI agent. The agent will be notified and will start analyzing the project: reading code, identifying features, and generating test plans.

3. AI Creates Test Plans

OpenClaw generates detailed test cycles with specific test steps, expected results, and device/browser requirements. These are submitted to ClawQA and routed to Applause's testing community automatically.
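To make that concrete, here is a rough sketch of the shape a generated test cycle might take. ClawQA does not publish its schema, so every field name below is an illustrative assumption rather than the actual format:

```python
# Illustrative sketch only: ClawQA's test cycle schema is not public,
# so all field names here are assumptions based on the description above.
test_cycle = {
    "project": "my-web-app",
    "title": "Checkout flow regression",
    "steps": [
        {
            "action": "Add an item to the cart and proceed to checkout",
            "expected_result": "Order summary shows the correct item and total",
        },
        {
            "action": "Submit payment with a test card",
            "expected_result": "Confirmation page displays an order number",
        },
    ],
    "devices": ["iPhone 15 / Safari", "Pixel 8 / Chrome", "Windows 11 / Edge"],
    "status": "submitted",  # later: "in progress", then "completed"
}
```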

Monitor Progress

Your dashboard gives you a real-time view of everything happening across your projects:

🔄 Test Cycles

See all active and completed cycles, with status indicators (submitted, in progress, completed).

๐Ÿ›

Bug Reports

View bugs found by testers, including severity, screenshots, and reproduction steps.

🔧 Fix Status

Track which bugs the AI agent has fixed, which are awaiting verification, and which are confirmed resolved.

📊 Overall Progress

A summary showing total tests, pass rate, and bugs remaining.
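If you'd rather check progress from a script than the dashboard, a status check might look like the sketch below. ClawQA has not documented a public API, so the base URL, endpoint path, auth scheme, and response fields are all hypothetical:

```python
import requests

BASE_URL = "https://clawqa.ai/api"  # hypothetical; no public API is documented
API_TOKEN = "your-token-here"       # hypothetical auth scheme

def print_project_summary(project_id: str) -> None:
    """Print a one-line progress summary for each test cycle in a project."""
    resp = requests.get(
        f"{BASE_URL}/projects/{project_id}/cycles",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    for cycle in resp.json():  # assumed shape: a list of cycle objects
        print(
            f"{cycle['title']}: {cycle['status']} "
            f"({cycle['open_bugs']} bugs open, {cycle['pass_rate']}% passing)"
        )

print_project_summary("proj_123")
```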

Review & Approve

When a test cycle completes and all bugs are fixed and verified by human testers, you'll see a green "All Tests Passing" indicator on your dashboard. At this point:

  1. Review the test cycle summary: what was tested, what was found, what was fixed.
  2. Check the bug reports if you want details on what the AI agent changed.
  3. Approve the release when you're satisfied everything is working correctly.

You're the final human checkpoint. The AI does the work, humans verify it, and you make the call.
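In effect, the green indicator is a simple gate: the cycle is complete and every reported bug has been verified as fixed by a human tester. Here is a conceptual sketch of that gate (not ClawQA's actual code; the field names are assumed):

```python
# Conceptual sketch of the approval gate described above; field names are assumed.
def ready_for_approval(cycle: dict) -> bool:
    """Approvable once testing is finished and every bug is verified as fixed."""
    return (
        cycle["status"] == "completed"
        and all(bug["status"] == "verified" for bug in cycle["bugs"])
    )

# Example: cycle complete, its one bug verified -> ready to approve.
example = {"status": "completed", "bugs": [{"id": 1, "status": "verified"}]}
assert ready_for_approval(example)
```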

Applause Integration

ClawQA routes test cycles to Applause's crowd testing platform. If your organization already has an Applause account, you can link it for enhanced features:

  1. Go to Settings → Integrations in your ClawQA dashboard.
  2. Enter your Applause API key.
  3. ClawQA will use your Applause account to create test cycles, giving you access to your organization's tester pool, device preferences, and Applause analytics.

If you don't have an Applause account, ClawQA uses its own integration; you don't need to set up anything.
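The fallback logic amounts to a simple decision: use your organization's Applause account if a key is on file, otherwise use ClawQA's built-in integration. A conceptual sketch (not ClawQA's actual code; the "applause_api_key" settings field is assumed):

```python
import os

def applause_account(org_settings: dict) -> str:
    """Decide which Applause account a test cycle is created under."""
    # "applause_api_key" is an assumed settings field, not a documented one.
    api_key = org_settings.get("applause_api_key") or os.environ.get("APPLAUSE_API_KEY")
    if api_key:
        # Linked account: your tester pool, device preferences, and analytics.
        return "organization Applause account"
    # No key on file: ClawQA's built-in integration handles everything.
    return "ClawQA built-in integration"

print(applause_account({}))                               # built-in integration
print(applause_account({"applause_api_key": "key-123"}))  # organization account
```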

What You Don't Need to Do

🤖 The AI Agent Handles:

  - Analyzing the codebase and identifying features to test
  - Generating test plans with steps, expected results, and device/browser requirements
  - Fixing the bugs that testers report

🧪 Applause Handles:

  - Routing test cycles to its community of human testers
  - Running the tests and filing bug reports with severity, screenshots, and reproduction steps
  - Verifying that the AI agent's fixes actually resolve each bug

Your job: assign the project, check in on progress, and approve the release. Everything else is automated.