Three Feedback Loops That Turn Broken Software Into Healthy Software

Every QA tool can find a bug. The hard part is what happens next. Does the bug get triaged? Does someone verify the fix? Does the same bug come back two sprints later? Most testing workflows stop at detection. Aiqaramba closes the full loop through three feedback mechanisms that operate at different speeds and levels of automation.

Loop 1: Fix what's broken

The health board shows every journey in your project as a card. Green means passing. Red means failing. When a card turns red, you click it.

The agent report tells you what happened in plain language: which page it visited, what it tried to do, and why it failed. You get the failure reason, the steps taken, and (if recorded) a video of the browser session.

Example

Your "Signup Flow" card turns red. The report says: "Email validation rejected user+test@example.com. The form's client-side validation rejects the + character." You fix the regex, click Re-run on the health card, and the card turns green.

This loop takes minutes. The health board surfaces the problem, the report explains it, and a single click verifies the fix. No separate test suite to maintain. No "works on my machine" guessing. The same agent that found the bug confirms whether the fix worked.

The manual loop is where most teams start. It is also the fallback when the automated loops surface something unexpected.

Loop 2: Catch regressions before users do

The regression endpoint (POST /projects/{id}/regression) connects Aiqaramba to your CI/CD pipeline. After every deploy, your pipeline sends commit messages, changed files, and the PR description. An LLM analyzes these changes against your project's journey catalog and app map, then runs only the journeys that could be affected.

This is not keyword matching. The LLM understands that a change to auth.ts could affect the login flow, the signup flow, and the checkout flow (because they all share an auth module). It matches journeys based on semantic relevance, not file paths.

0 manual steps · ~5 minutes to results · GitHub issues filed automatically

When a regression agent fails and the request includes a github_repo parameter, Aiqaramba automatically creates a GitHub issue. The issue contains the failure details, reproduction steps extracted from the agent trace, and expected vs. actual behavior. The developer who pushed the breaking change sees the issue in their repo within minutes of the deploy.

No one needs to decide which tests to run. No one needs to review results unless something actually breaks. The loop is fully automatic from push to issue.

Loop 3: Discover what to test

The first two loops assume you already have journeys. But when you first connect an app, you start from zero. The discovery system fills that gap.

You provide a URL and (optionally) login credentials. Aiqaramba sends agents to explore your application in phases. The first agent catalogs pages, forms, and navigation links from the entry URL. Follow-up agents explore leads from earlier phases: settings pages, nested forms, auth-gated areas. After 2-3 phases, an LLM synthesizes everything into a structured app map and generates 3-5 journey templates for the key user flows it found, ranked by importance.

The entire process takes about 10 minutes. At the end, your health board shows journey cards ready to run. Click "Run All" and the first loop (fix what's broken) begins immediately.

Discovery is typically a one-time step per project. The journeys it creates persist and become part of your ongoing test coverage. You can edit them, add checkpoints for granular progress tracking, and set success criteria. As your app evolves, you can run discovery again to pick up new pages and flows.

The full cycle

These three loops feed into each other. Discovery creates journeys. Running journeys produces health data. Health data drives manual investigation. CI/CD triggers regression runs. Regressions create GitHub issues. Fixes get verified by re-runs.

The product moves through a natural lifecycle:

  1. Unknown. No project, no journeys, no data. Discovery moves you past this in minutes.
  2. Discovered. App map exists, journeys ready to run. First run produces a health snapshot.
  3. Broken. Agents found failures. Reports explain exactly what went wrong.
  4. Recovering. Some journeys pass, some still fail. Checkpoint progress shows how close each flow is to working.
  5. Healthy. All critical journeys pass. The regression loop keeps it that way.

When a developer pushes code that breaks a healthy flow, the regression loop catches it. The health board turns red. The manual loop kicks in. The product recovers. This is the steady state: healthy, with automatic guardrails against regression.

Coverage tiers

Not all journeys are equal. Aiqaramba organizes them into five tiers that form a dependency pyramid.

A healthy project means "the product works for typical users." It does not mean "the product is perfect." If login works, navigation works, and core features work, the project is healthy, even if an edge case in the settings page is broken.

For implementation details on each loop, including the regression API, discovery phases, and health computation, see the full documentation.

See it in action

Book a 30-minute demo and we'll run all three loops on your product.

Book a demo →

Want to try this on your app?

Describe a test scenario in plain language. Our AI agents run it in a real browser and report back with screenshots.
