Learn how to use AI in Cypress with cy.prompt, Studio AI, and self-healing tests. A step-by-step 2026 guide to enable, write, and scale your AI Cypress tests.

Saniya Gazala
April 27, 2026
According to the Capgemini World Quality Report 2024-25, 68% of organizations are now using Generative AI in quality engineering, and 72% report faster automation as a direct result. As this shift gained momentum, Cypress introduced its answer in late 2025 with a built-in AI layer inside Cypress and Cypress Cloud that generates specs from English, recommends assertions, and self-heals selectors.
That industry shift sets the stage perfectly for Cypress AI. In practice, it means teams can generate test specs from plain English, get smart assertion suggestions, and rely on self-healing selectors, making automation faster and easier to maintain.
Overview
What exactly is Cypress AI?
Cypress AI is the collection of AI-driven capabilities baked into Cypress from version 15.4.0 onward, available in both the open source runner and Cypress Cloud. It is not a separate product or paid add-on, but a layer of features such as cy.prompt(), Studio AI, and test insights that shift tests toward user intent instead of fragile CSS selectors.
Which testing problems does Cypress AI address?
Cypress AI addresses common pain points in Cypress E2E testing by reducing maintenance effort, speeding up test creation, and improving test reliability.
How do you generate tests with cy.prompt()?
You can generate tests in Cypress E2E testing using cy.prompt() by describing user actions in plain English and letting Cypress translate them into executable commands.
Can you scale Cypress AI tests on TestMu AI Cloud?
TestMu AI (formerly LambdaTest) takes specs authored with cy.prompt() and runs them in parallel across real browsers and devices, making it easier to scale Cypress E2E testing online beyond local environments. It removes the Chromium-only limitation and reduces flakiness caused by inconsistent developer setups.
Once your tests pass locally, you can save the generated code from the Command Log, configure a lambdatest-config.json file with browser and parallelization settings, and trigger runs through the TestMu AI CLI. This approach helps make Cypress E2E testing faster, more reliable, and better suited for CI at scale.
Cypress AI is a set of AI-powered capabilities built into Cypress 15.4.0+ and Cypress Cloud, enabling natural-language test authoring, self-healing selectors, and assertion suggestions.
Cypress AI refers to the set of AI-powered capabilities introduced in Cypress starting with version 15.4.0, spanning both the open source test runner and Cypress Cloud. It is not a separate product or a paid add-on for core functionality.
Instead, these features are built directly into the Cypress App and surfaced through commands like cy.prompt() as well as through Cypress Cloud experiences such as Studio AI and test insights.
The core idea is to keep tests focused on user intent rather than brittle CSS selectors. When the intent remains the same but the DOM changes, AI helps bridge the gap so tests remain stable and require less maintenance.
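To make the contrast concrete, here is a hypothetical login flow expressed as the kind of plain-English step array that cy.prompt() accepts. The URL, field labels, and step wording are illustrative, not from a real application.

```javascript
// Hypothetical login flow described as user intent rather than selectors.
// These plain-English steps are the kind of array cy.prompt() accepts;
// the field labels and wording here are illustrative only.
const loginSteps = [
  'visit the login page',
  'type "demo@example.com" into the email field',
  'type "secret" into the password field',
  'click the Sign in button',
  'verify the text "Welcome" is visible',
];

// A selector like '#app > form > div:nth-child(2) > input' breaks when the
// markup changes; "the email field" still describes the same user intent.
console.log(loginSteps.length); // prints 5
```

When the DOM shifts under a step like "click the Sign in button", the intent still holds, which is what lets the AI layer re-resolve the element instead of failing the run.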
Cypress AI solves three recurring pain points in test suites: the selector maintenance tax, slow authoring of new specs, and missing assertions during recorded flows.
Before writing any code, it helps to name the real pain. Most Cypress test suites do not fail because the logic is wrong. They fail because locators drift, assertions fall out of sync with the DOM, and writing a new spec often takes longer than building the feature itself.
The Cypress engineering team reported that selector churn is the single biggest source of broken runs their customers file tickets about. That matches what I saw in my own suite: roughly one in three red builds was a locator fix, not a real regression.
We eventually stopped measuring CI health by pass rate and started measuring it by how many hours we spent each week fixing selectors. Cypress AI was one of the first tools that actually reduced that number.
Cypress AI addresses three recurring problems in real-world test suites: the selector maintenance tax that eats CI hours, the slow authoring of new specs, and assertions that go missing during recorded flows.
Cypress lists six AI capabilities in its Cloud features documentation. Three sit in the App, three live in the Cloud dashboard.
| Feature | What It Does | Plan |
|---|---|---|
| cy.prompt() | Converts natural language steps into runnable Cypress commands and self-heals selectors between runs. | Free (100/hr); Paid (600/hr) |
| Studio AI | Watches DOM changes while you record interactions and recommends assertions automatically. | Free (60/hr); Paid (300/hr) |
| Test Intent Summaries | AI-generated description of what each test is meant to verify, shown in the Cloud test details sidebar. | All plans |
| Error Summaries | Plain-language explanations of failures placed alongside the stack trace. | All plans |
| Cloud MCP (Beta) | Model Context Protocol server that exposes Cypress Cloud test results to AI coding assistants like Claude or Cursor. | Paid plan |
| UI Coverage Test Generation | Generates targeted Cypress tests from coverage gaps spotted during code reviews. | UI Coverage add-on |
Most teams start with cy.prompt() and Studio AI because they directly affect daily test authoring. The Cloud features become valuable once you have suites running in CI.
Before writing any AI-driven tests, make sure your environment meets the required baseline after your Cypress install.
The official cy.prompt() documentation outlines strict versioning and authentication requirements that can fail silently if not configured correctly.
Cloud-connected runs must be recorded with `--record` and a valid `--key`. If you are on Cypress 14 or earlier, upgrade first. The Cypress tutorial walks through a clean install and project setup if you are starting from scratch.
Enable cy.prompt() in your Cypress 15.4+ config, sign in to Cypress Cloud, and pass an array of plain English steps describing user intent to generate runnable Cypress commands.
cy.prompt() takes an array of natural language steps and translates each into real Cypress commands at runtime. It is officially available as an experimental feature. First revealed at CypressConf 2025 by Brian Mann, it represents an initial step toward integrating AI-assisted workflows into the Cypress testing ecosystem.
To set up, initialize a project with `npm init -y` and install Cypress using `npm install cypress@latest`. Open Cypress using `npx cypress open` and verify the installation with `npx cypress verify` to ensure your Cypress E2E testing setup is ready.

Then enable the experimental prompt command in `cypress.config.js`:

```javascript
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  e2e: {
    experimentalPromptCommand: true,
  },
});
```

On Cypress 15.13.0 and above, the flag is no longer needed: cy.prompt() is available by default.
Sign in to Cypress Cloud from the runner launched with `npx cypress open` so the test runner can access Cypress Cloud and enable AI-powered execution for cy.prompt(). Then set the `projectId` in `cypress.config.js` to connect your local project with Cypress Cloud and enable Cypress AI features:

```javascript
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  projectId: "okox7g",
  e2e: {},
  allowCypressEnv: false,
});
```

Create a spec file at `cypress/e2e/prompt-test.cy.js` to keep Cypress E2E testing structured and maintainable:

```javascript
describe('Cypress AI Test', () => {
  Cypress.on('uncaught:exception', (err, runnable) => {
    return false;
  });

  it('submits a message and verifies output', () => {
    cy.prompt([
      'visit https://www.testmuai.com/selenium-playground/simple-form-demo',
      'type "Cypress AI is working" in the input field with placeholder "Please enter your Message"',
      'click the Get Checked Value button',
      'verify "Cypress AI is working" appears in the Your Message section',
    ]);
  });
});
```

Each step appears in the Command Log along with generated Cypress commands such as cy.visit, cy.get, cy.type, and cy.contains. You can use the Code option in the log to copy the generated commands into your spec file for deterministic Cypress E2E testing in CI.
Run `npx cypress open`, select the E2E Testing option in the Cypress UI, and choose Chrome as the browser. The Cypress runner opens and shows the file you created, prompt-test.cy.js. Click the file name to run it and view the results.
Note: Skip the manual selector hunt. Pair it with cy.prompt and run AI-generated Cypress tests across 3,000+ browser and OS combinations. Try TestMu AI today!
cy.prompt() feels smooth in demos, but running it locally introduces friction. AI execution is limited to Chromium-based browsers, so teams still need separate setups for Firefox or WebKit. Differences in browser versions, network conditions, and system resources can also lead to flaky or inconsistent results between developers and CI.
At scale, the challenges increase. AI usage is rate-limited, which can slow parallel runs, and CI pipelines may depend on live AI calls, adding latency and instability. Running tests across multiple browsers or devices locally is resource-heavy, and parallelization is limited without extra infrastructure.
This is where a cloud-based platform like TestMu AI (formerly LambdaTest) becomes useful. Instead of managing these constraints locally, execution is offloaded to a scalable environment that runs AI-generated Cypress tests across Chrome, Edge, and real mobile browsers in parallel.
It eliminates local inconsistencies, speeds up execution, and removes the need to manage infrastructure or worry about scaling limits.
In practice, teams use cy.prompt() locally to generate tests, then rely on TestMu AI to run them reliably across browsers and devices in CI.
TestMu AI (formerly LambdaTest) is a cloud platform that helps teams run Cypress testing at scale without managing local infrastructure. It fits naturally with Cypress AI by taking tests created with cy.prompt() and running them across real browsers and devices in parallel.
Instead of relying on local machines, teams can execute Cypress AI tests in consistent environments, reduce flakiness caused by setup differences, and get faster feedback in CI. It’s especially useful when moving from generating tests locally to running them reliably across multiple browsers and devices.
Once your AI-authored tests pass locally, freeze them by clicking Code in the Command Log and saving the generated Cypress code into your spec file. You can then run those tests on the cloud using a standard configuration:
```json
{
  "lambdatest_auth": {
    "username": "<your_username>",
    "access_key": "<your_access_key>"
  },
  "browsers": [
    {
      "browser": "Chrome",
      "platform": "Windows 11",
      "versions": ["latest"]
    },
    {
      "browser": "Edge",
      "platform": "macOS Sonoma",
      "versions": ["latest"]
    }
  ],
  "run_settings": {
    "build_name": "Cypress AI Demo",
    "parallels": 5,
    "specs": "./cypress/e2e/prompt-test.cy.js"
  }
}
```

Trigger the run using the CLI:

```shell
lambdatest-cypress run --config-file=lambdatest-config.json --cy "--record --key <your_cypress_key>"
```

This integrates smoothly with the Cypress CLI and Test Runner, allowing you to move from local AI-generated tests to scalable cloud execution without changing your core workflow.
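To keep this repeatable in CI, you can wrap the command in an npm script. The script name and the `CYPRESS_RECORD_KEY` environment variable below are assumptions for illustration, not TestMu AI requirements; the point is to keep the record key out of version control.

```json
{
  "scripts": {
    "test:cloud": "lambdatest-cypress run --config-file=lambdatest-config.json --cy \"--record --key $CYPRESS_RECORD_KEY\""
  }
}
```

Your CI pipeline can then run `npm run test:cloud` with the key injected as a secret.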
Execute your first AI-authored Cypress spec across 10,000+ real browsers and devices in under ten minutes. For integration details, check the TestMu AI Cypress CLI documentation.
Alongside running tests on the TestMu AI cloud grid, you can also use Cypress Agent Skills as another way to create and execute tests. Instead of writing tests manually or relying on cy.prompt(), Agent Skills let your AI coding assistant generate complete Cypress testing setups from natural language.
The Cypress Skill is part of TestMu AI agent-skills, which are structured packages that guide AI tools to produce production-ready test automation. While Cypress AI focuses on generating tests inside the runner, Agent Skills work at a broader level, helping you create full test suites, configurations, and execution flows.
How It Works
Once the Cypress skill is installed in your AI tool (such as Claude Code, Cursor, or GitHub Copilot), you describe what you want in plain English, and the AI assistant handles the rest: generating the test suite, configuration, and execution flow.
When to Use Agent Skills vs Cypress AI
If you are getting started with Cypress and want to generate production-ready test automation using the cypress-skill, refer to the guide on running your Cypress Testing using Agent Skills.
Upgrade to Cypress 15.11.0+, link your project to Cypress Cloud, then click the wand icon beside a test in the Cypress App and accept assertion suggestions appearing after each action.
Cypress Studio lets you record clicks, typing, and form submissions inside a real browser session and then save the generated commands. Studio AI sits on top: it watches DOM changes after each action and proposes the assertion you almost certainly want, like should be visible or should contain text "Welcome". Per the Cypress Studio docs, the recommendations cover visibility, element existence, text content, length, input values, attributes, URL, and page title.
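As a rough illustration of what those suggestion categories look like once accepted, here is a sketch mapping each one to the kind of Cypress assertion it produces. The selectors and values are hypothetical, and the exact code Studio AI emits may differ; the chainers shown are standard Cypress/Chai forms.

```javascript
// Illustrative mapping from Studio AI suggestion categories (per the
// Cypress Studio docs list above) to the Cypress assertion each one
// typically becomes. Selectors and expected values are made up.
const suggestionExamples = {
  visibility:  "cy.get('.welcome-banner').should('be.visible')",
  existence:   "cy.get('.error-toast').should('exist')",
  textContent: "cy.get('h1').should('contain.text', 'Welcome')",
  length:      "cy.get('.cart-item').should('have.length', 3)",
  inputValue:  "cy.get('#email').should('have.value', 'demo@example.com')",
  attribute:   "cy.get('button').should('have.attr', 'disabled')",
  url:         "cy.url().should('include', '/dashboard')",
  pageTitle:   "cy.title().should('eq', 'Dashboard')",
};

console.log(Object.keys(suggestionExamples).length); // prints 8
```

Accepting a suggestion simply inserts the corresponding `should()` chain into your recorded spec, so you can still edit or delete it afterward.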
Enable it in three steps: upgrade to Cypress 15.11.0 or later, link your project to Cypress Cloud, and click the wand icon beside a test in the Cypress App to start accepting suggestions.
Studio AI never reads your application source, business rules, or backend. It works purely from the rendered DOM and CSS, so the assertions it suggests are observational, not behavioral.
Use it for the boilerplate "is the button now disabled" assertions and write the deeper business-logic checks yourself.
Cypress AI self-heals through two paths: cached selectors from prior runs (no AI cost), or fresh AI-generated selectors produced by comparing the failed step against the current DOM and updating the cache.
Self-healing is the most-requested feature in cy.prompt() and the reason teams keep prompts in CI rather than freezing the generated code. Cypress shipped a detailed walkthrough in its December 2025 self-healing announcement. Two healing paths exist: replaying cached selectors from prior runs at no AI cost, and generating fresh selectors with AI when the cached ones no longer match the current DOM.
Generated code is cached and shared across machines and environments, including CI runners, per the Cypress documentation. Re-running the same test does not call the AI again unless the prompt or the DOM changes in a way that invalidates the cache. This keeps prompt usage well under the 100/hour free limit even for sizeable suites.
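A quick back-of-envelope sketch of why caching keeps usage low. The suite size and change rate below are made-up illustrative numbers; only the 100-per-hour free limit comes from the feature table earlier in this article.

```javascript
// Illustrative quota math: only prompts whose text or relevant DOM changed
// trigger a fresh AI call; cached prompts replay without touching the quota.
const freeCallsPerHour = 100;  // free-plan cy.prompt() limit (see feature table)
const promptsInSuite = 40;     // hypothetical suite size
const promptsInvalidated = 3;  // hypothetical prompts hit by a DOM change

// Without caching, every run would cost one AI call per prompt.
const callsWithoutCache = promptsInSuite;
// With caching, a run costs only the invalidated prompts.
const callsWithCache = promptsInvalidated;

console.log(callsWithoutCache, callsWithCache); // prints 40 3
console.log(callsWithCache <= freeCallsPerHour); // prints true
```

Under these assumptions, even several runs per hour stay comfortably inside the free tier, which matches the article's claim that caching keeps sizeable suites under the limit.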
Self-healing solves UI churn, but it does not solve flaky waits, network race conditions, or test data drift. For those failure modes, route results through Test Intelligence on TestMu AI to get automatic flakiness scoring and root-cause classification across CI builds.
Note: Run AI-authored Cypress specs across 10,000+ real browsers and devices on TestMu AI cloud, with parallel execution and built-in test analytics. Start free on TestMu AI
Use cy.prompt() to author specs from English, Studio AI for recorded assertions, and KaneAI on TestMu AI for cross-browser, mobile, API, and accessibility testing in one agentic platform.
The three tools overlap in marketing copy but solve different jobs in practice. I ran all three on the same checkout flow and kept notes on where each one earned its keep and where it got in the way.
| Scenario | cy.prompt() | Studio AI | KaneAI (TestMu AI) |
|---|---|---|---|
| Authoring new E2E spec from English | Best fit. Writes full spec file. | Partial. Only assertions. | Best fit if you want no Cypress code at all. |
| Recording a flow and saving it as code | Not designed for this. | Best fit. Watches DOM, suggests assertions. | Supported via KaneAI recording. |
| Self-healing selectors across CI runs | Best fit. Cache + AI fallback. | No self-healing. | Best fit. Resilient locators across frameworks. |
| Cross-browser (Firefox, WebKit, real mobile) | Chromium only. | Chromium only. | Best fit. 10,000+ browsers and devices. |
| API, accessibility, visual checks | Not supported. | Not supported. | Best fit. UI + API + a11y in one agent. |
| Component testing | Not supported. | Partial. Record-only. | Out of scope (E2E focus). |
| Team size and Cypress fluency | Cypress-fluent teams. | Mixed-skill teams. | QA + PM teams with no Cypress background. |
Cypress is unusually transparent about what cy.prompt() cannot do. The cy.prompt reference lists every unsupported case, and reviewing them up front saves hours of debugging.
The healthy mental model: cy.prompt() handles UI walkthroughs that a human tester would do. For data setup, API contracts, iframes, or canvas-based apps, write the Cypress code yourself or move that scenario to a different testing layer.
Top Cypress AI mistakes include oversized prompts, hardcoding dynamic values, skipping code review, missing CI retries, ignoring quiet AI-heals, and running cross-browser tests only locally.
After migrating two suites to cy.prompt() and reviewing four more for other teams, the same mistakes surface in the first month. Catching them early is the difference between a suite that stabilizes in a sprint and one that quietly rots for a quarter.
Two hours of code review in week one saves two weeks of flake triage in month three. Make the review ritual non-optional.
Write intent-first prompts, split journeys into 5-10 step blocks, use placeholders for dynamic values, keep hand-coded smoke tests, watch for AI-heals, and push cross-browser runs to the cloud.
AI features are powerful, but they are not a replacement for good test design. To get consistent value from cy.prompt() and Studio AI, you need to use them deliberately alongside the traditional Cypress best practices.
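For example, the "use placeholders for dynamic values" advice can be sketched as a small helper that injects test data into otherwise stable step text. The helper name, fields, and step wording below are hypothetical.

```javascript
// Hypothetical helper: keep dynamic values (emails, order IDs) out of the
// literal prompt text so the step wording stays stable between runs.
function buildCheckoutSteps(user) {
  return [
    `type "${user.email}" into the email field`,
    `type "${user.orderId}" into the order ID field`,
    'click the Place Order button',
    'verify the order confirmation message is visible',
  ];
}

// The same step templates work for any generated test user.
const steps = buildCheckoutSteps({ email: 'qa+101@example.com', orderId: 'ORD-42' });
console.log(steps[0]); // prints: type "qa+101@example.com" into the email field
```

Because only the interpolated values change between runs, the prompt text itself stays consistent, which also helps the caching behavior described earlier.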
Start small. Upgrade to Cypress 15.13.0 or later, connect to Cypress Cloud, and try rewriting a single spec using cy.prompt(). Review the generated code, then decide whether to keep the prompt for self-healing or commit the code for stable CI. Avoid scaling too quickly. Begin with a few flaky tests, validate the results, and expand gradually. Once stable, use Studio AI to strengthen assertions and move cross-browser runs to the cloud for better coverage and consistency. Cypress AI reduces the effort of writing and maintaining tests, but the overall testing strategy still depends on you.