Manually reproducing issues slows everyone down—so we re-imagined the process, making it effortless and delightfully fast.
How It Works
Simply describe the bug, like "Incorrect phone number in sign up doesn't give a validation message" or "No email when trying to reset password", and let our AI take care of the rest.
Say goodbye to repetitive bug reproductions. With our AI-powered approach, finding and fixing issues is faster, simpler, and more efficient.
Our customers told us testing applications across various device screens can be cumbersome—so we simplified it. Introducing Device Presets, a streamlined way to manage device-specific testing effortlessly.
Why it matters:
Preset Highlights:
Testing with different device viewports is now as simple as selecting your preset—ensuring your applications deliver a flawless experience on every screen, every time.
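To make the idea concrete, here's a rough sketch of what a device preset conceptually bundles. This is not QA.tech's actual data model; the interface and field names are illustrative assumptions:

```typescript
// Illustrative only: conceptually, a device preset bundles the viewport
// properties a test run should emulate. Names are hypothetical.
interface DevicePreset {
  name: string;      // e.g. "iPhone 14" (hypothetical preset name)
  width: number;     // viewport width in CSS pixels
  height: number;    // viewport height in CSS pixels
  isMobile: boolean; // whether mobile/touch emulation applies
}

const presets: DevicePreset[] = [
  { name: "Desktop HD", width: 1920, height: 1080, isMobile: false },
  { name: "iPhone 14", width: 390, height: 844, isMobile: true },
];
```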
Understanding complex test setups can be challenging, especially when tests share browser state and pass information between each other. With this update, we've made it simple to visualize test dependencies and interactions.
How It Works
Simply navigate to a test result and click on the graph menu to explore your test dependencies visually.
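For intuition, the graph encodes relationships like the hypothetical sketch below, where one test produces state (a logged-in session, a created record) that later tests consume. The shape is purely illustrative, not QA.tech's actual format:

```typescript
// Hypothetical sketch of test dependencies as a directed graph.
interface TestNode {
  id: string;
  dependsOn: string[]; // tests that must run first (shared session, created data)
}

const graph: TestNode[] = [
  { id: "login", dependsOn: [] },                           // establishes browser state
  { id: "create-invoice", dependsOn: ["login"] },           // reuses the session
  { id: "delete-invoice", dependsOn: ["create-invoice"] },  // consumes the created record
];
```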
We've introduced a new feature that lets you preview how your changes would impact the AI agent directly within the edit view. This allows you to quickly retry specific steps without needing to rerun the entire test, saving significant time during debugging and refinement.
How It Works:
Important Note:
Find this feature in the tracer panel within the edit view.
We've introduced a new feature that lets you easily rerun a test along with all its dependencies directly from the test edit view. In situations where running a single test isn't sufficient, such as needing to recreate an item before deleting it, you can now use the "Run w. Dependencies" button. By default, the latest execution of each dependency is used as the starting point.
How It Works:
This enhancement streamlines testing workflows, reducing manual steps and ensuring your tests run in the correct context every time.
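Conceptually, running with dependencies amounts to scheduling a test's prerequisites before the test itself. Here is a minimal sketch of that resolution, assuming a simple dependency map; the names and logic are hypothetical, not QA.tech internals:

```typescript
// Illustrative sketch (not QA.tech internals): resolving which tests to run,
// and in what order, when a test is executed "with dependencies".
type Deps = Record<string, string[]>;

function runOrder(target: string, deps: Deps, seen = new Set<string>()): string[] {
  if (seen.has(target)) return []; // already scheduled
  seen.add(target);
  // Schedule all dependencies before the target itself.
  return (deps[target] ?? []).flatMap((d) => runOrder(d, deps, seen)).concat(target);
}

// "delete-item" cannot run alone: the item must exist first.
const deps: Deps = { "create-item": [], "delete-item": ["create-item"] };
console.log(runOrder("delete-item", deps)); // ["create-item", "delete-item"]
```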
Debugging automated tests often involves understanding exactly what data the AI agent is processing. To simplify this, we've enhanced the tracer to show all the data the agent receives, including the current page and agent context, providing clearer visibility into what's happening during test execution.
How It Works
Getting Started
Access these enhanced details directly from your tracer view during any test run. Simply expand the tracer panel to view the complete data provided to your agent.
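As an illustration, the per-step data surfaced by the tracer might be shaped roughly like this. The field names below are assumptions made for the sketch, not the actual tracer schema:

```typescript
// Hypothetical shape of the data shown for each agent step in the tracer.
interface AgentStepTrace {
  currentPage: {
    url: string;   // the page the agent is currently on
    title: string; // its document title
  };
  agentContext: {
    goal: string;                       // what the step is trying to accomplish
    priorSteps: string[];               // summaries of earlier steps the agent can see
    variables: Record<string, string>;  // data passed in from dependencies
  };
}
```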
Now you can add a QA.tech status badge directly to your GitHub README to quickly display the status of your automated tests. The badge clearly indicates whether tests are passing or failing and provides insight into when the last test run occurred. This helps your team easily monitor test health at a glance, streamlining development workflows and improving visibility.
How It Works:
You'll find all the instructions you need under Settings -> Integrations.
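For reference, a status badge is embedded in a README with ordinary Markdown image syntax, along these lines. The URL below is only a placeholder; copy the real snippet generated for your project from Settings -> Integrations:

```markdown
<!-- Placeholder URL: use the real badge snippet from Settings -> Integrations -->
[![QA.tech tests](https://app.qa.tech/badge/YOUR_PROJECT_ID.svg)](https://app.qa.tech)
```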
When debugging test automation, tracking events across a full test session can be overwhelming, especially when identifying exactly which actions correspond to each step. With our improved network and console logging, you now have crystal-clear visibility into the flow of events, making it significantly simpler and quicker to pinpoint issues.
How It Works
Benefits
This update makes debugging test automation smoother and more intuitive, allowing you to efficiently zero in on the exact points of interest without getting lost in the noise.
It's frustrating when your automated tests fail because the AI agent misses elements it should interact with. To solve this, we've introduced a simple way to visually confirm exactly what elements your agent sees and can interact with on your site.
How It Works
When to Use This
Try it out now and see exactly how your AI agent views your site. Debugging interactive tests just got simpler!
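Conceptually, the overlay answers the same question as the browser-console sketch below: which elements on the page count as interactive. This is only an illustration, not the agent's actual detection logic:

```typescript
// Illustrative: roughly the kind of elements an agent would consider interactive.
// Paste into a browser console to highlight them; not QA.tech's detection logic.
const interactive = document.querySelectorAll<HTMLElement>(
  'a[href], button, input, select, textarea, [role="button"], [onclick]'
);

interactive.forEach((el) => {
  // Outline each candidate so you can see what an agent could act on.
  el.style.outline = "2px solid red";
});
```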
We've introduced a new feature that tracks all changes made to test cases over time, including test steps and configurations. Teams can now review who made changes, when they were made, and what was changed. This addition ensures a complete audit trail for each test and offers a simple way to restore previous versions if needed.
Key Highlights:
• Comprehensive Change Tracking: Every update to a test—be it step revisions, data tweaks, or configuration swaps—is recorded in a historical log.
• Auditable History: View contributor names and timestamps on test modifications. Great for collaboration and accountability.
• Enhanced Collaboration: Teams get deeper insights into when, how, and why a test evolved, creating transparency and reducing test maintenance overhead.
This feature is accessible from the test's detail page, where you'll find a "History" clock icon that provides a chronological list of versions and their modifications.