
How to Use Cypress AI: Complete 2026 Guide

Learn how to use AI in Cypress with cy.prompt, Studio AI, and self-healing tests. A step-by-step 2026 guide to enable, write, and scale your AI Cypress tests.

Author

Saniya Gazala

April 27, 2026

According to the Capgemini World Quality Report 2024-25, 68% of organizations are now using Generative AI in quality engineering, and 72% report faster automation as a direct result. As this shift gained momentum, Cypress introduced its answer in late 2025: an AI layer built into Cypress and Cypress Cloud that generates specs from plain English, recommends assertions, and self-heals selectors, making automation faster to write and easier to maintain.

Overview

What exactly is Cypress AI?

Cypress AI is the collection of AI-driven capabilities baked into Cypress from version 15.4.0 onward, available in both the open source runner and Cypress Cloud. It is not a separate product or paid add-on, but a layer of features such as cy.prompt(), Studio AI, and test insights that shift tests toward user intent instead of fragile CSS selectors.

Which testing problems does Cypress AI address?

Cypress AI addresses common pain points in Cypress E2E testing by reducing maintenance effort, speeding up test creation, and improving test reliability.

  • Selector maintenance tax: UI refactors, class renames, and component library upgrades break tests that are otherwise correct, and cy.prompt() resolves elements by intent rather than brittle locators.
  • Slow test authoring: Hand-coding inputs, waits, and assertions consumes hours, while cy.prompt() turns a few plain English lines into a complete spec.
  • Assertion blind spots: Recorded flows often capture clicks but skip meaningful checks, and Studio AI inspects DOM state after each action to suggest visibility, text, attribute, and URL assertions.

How do you generate tests with cy.prompt()?

You can generate tests in Cypress E2E testing using cy.prompt() by describing user actions in plain English and letting Cypress translate them into executable commands.

  • Enable the command: Toggle the experimentalPromptCommand flag in cypress.config.js for versions 15.4 to 15.12, or use it directly on Cypress 15.13.0 and later.
  • Authenticate with Cloud: Sign into Cypress Cloud through the App so the runner can reach the underlying AI model.
  • Describe user intent: Pass an array of natural language steps that capture what the user does, not how each element is located, and Cypress translates them into cy.visit, cy.get, cy.type, and cy.contains at runtime.
  • Protect dynamic data: Route credentials, generated emails, and other variable values through the placeholders option to keep secrets out of AI calls and preserve cache hits across runs.

Can you scale Cypress AI tests on TestMu AI Cloud?

TestMu AI (formerly LambdaTest) takes specs authored with cy.prompt() and runs them in parallel across real browsers and devices, making it easier to scale Cypress E2E testing online beyond local environments. It removes the Chromium-only limitation and reduces flakiness caused by inconsistent developer setups.

Once your tests pass locally, you can save the generated code from the Command Log, configure a lambdatest-config.json file with browser and parallelization settings, and trigger runs through the TestMu AI CLI. This approach helps make Cypress E2E testing faster, more reliable, and better suited for CI at scale.

What Is Cypress AI?

Cypress AI is a set of AI-powered capabilities built into Cypress 15.4.0+ and Cypress Cloud, enabling natural-language test authoring, self-healing selectors, and assertion suggestions.

These capabilities span both the open source test runner and Cypress Cloud. Cypress AI is not a separate product or a paid add-on for core functionality.

Instead, these features are built directly into the Cypress App and surfaced through commands like cy.prompt() as well as through Cypress Cloud experiences such as Studio AI and test insights.

The core idea is to keep tests focused on user intent rather than brittle CSS selectors. When the intent remains the same but the DOM changes, AI helps bridge the gap so tests remain stable and require less maintenance.

The Problem Cypress AI Actually Solves

Cypress AI solves three recurring pain points in test suites: selector maintenance tax, slow authoring of new specs, and missing assertions during recorded flows.

Before writing any code, it helps to name the real pain. Most Cypress test suites do not fail because the logic is wrong. They fail because locators drift, assertions fall out of sync with the DOM, and writing a new spec often takes longer than building the feature itself.

The Cypress engineering team reported that selector churn is the single biggest source of broken runs their customers file tickets about. That matches what I saw in my own suite: roughly one in three red builds was a locator fix, not a real regression.

We eventually stopped measuring CI health by pass rate and started measuring it by how many hours we spent each week fixing selectors. Cypress AI was one of the first tools that actually reduced that number.

Cypress AI addresses three recurring problems in real-world test suites:

  • Selector maintenance tax: Refactors, class name changes, or UI library updates often break tests that are otherwise correct. With self-healing capabilities in cy.prompt(), elements are resolved based on intent rather than fragile CSS selectors.
  • Authoring speed: Writing even a simple login test can take significant time when handling inputs, waits, and assertions manually. With cy.prompt(), the same flow can be generated from a few lines of plain English.
  • Assertion blind spots: Recording flows with tools like Studio often captures actions but misses meaningful assertions. Studio AI observes the DOM after each step and suggests checks for visibility, text, attributes, and URLs that might otherwise be overlooked.

Cypress AI Features at a Glance

Cypress lists six AI capabilities in its Cloud features documentation. Three sit in the App, three live in the Cloud dashboard.

| Feature | What It Does | Plan |
| --- | --- | --- |
| cy.prompt() | Converts natural language steps into runnable Cypress commands and self-heals selectors between runs. | Free (100/hr); Paid (600/hr) |
| Studio AI | Watches DOM changes while you record interactions and recommends assertions automatically. | Free (60/hr); Paid (300/hr) |
| Test Intent Summaries | AI-generated description of what each test is meant to verify, shown in the Cloud test details sidebar. | All plans |
| Error Summaries | Plain-language explanations of failures placed alongside the stack trace. | All plans |
| Cloud MCP (Beta) | Model Context Protocol server that exposes Cypress Cloud test results to AI coding assistants like Claude or Cursor. | Paid plan |
| UI Coverage Test Generation | Generates targeted Cypress tests from coverage gaps spotted during code reviews. | UI Coverage add-on |

Most teams start with cy.prompt() and Studio AI because they directly affect daily test authoring. The Cloud features become valuable once you have suites running in CI.

Prerequisites Before Using Cypress AI

Before writing any AI-driven tests, make sure your environment meets the required baseline after installing Cypress.

The official cy.prompt() documentation outlines strict versioning and authentication requirements that can fail silently if not configured correctly.

  • Cypress version: Use 15.4.0 or higher for cy.prompt(). Version 15.13.0 removes the need for the experimentalPromptCommand flag. Studio AI assertion recommendations require 15.11.0 or later.
  • Cypress Cloud account: Required to access AI capabilities. A free account works, while paid plans provide higher hourly usage limits.
  • Linked project: Your project must be connected to Cypress Cloud, either by logging in through the Cypress App or by running tests with --record and a valid --key.
  • Browser: AI features run only on Chromium-based browsers. Use Chrome, Edge, or Electron. Firefox and WebKit are not supported.
  • Network: Ensure outbound HTTPS access to Cypress Cloud endpoints. Studio AI also requires your development environment to serve sourcemaps.

If you are on Cypress 14 or earlier, upgrade first. The Cypress tutorial walks through clean install and project setup if you are starting from scratch.

How to Use cy.prompt() to Generate Tests?

Enable cy.prompt() in your Cypress 15.4+ config, sign in to Cypress Cloud, and pass an array of plain English steps describing user intent to generate runnable Cypress commands.

cy.prompt() takes an array of natural language steps and translates each into real Cypress commands at runtime. Revealed at CypressConf 2025 by Brian Mann, it is officially available as an experimental feature and represents an initial step toward integrating AI-assisted workflows into the Cypress testing ecosystem.

  • Create project and install Cypress: Create a new project directory, initialize Node.js with npm init -y, and install Cypress using npm install cypress@latest. Open Cypress using npx cypress open and verify installation with npx cypress verify to ensure your Cypress E2E testing setup is ready.
  • Enable the feature (if needed): If you are on Cypress 15.4.x to 15.12.x, enable the experimental flag in cypress.config.js:
  • const { defineConfig } = require('cypress');
    module.exports = defineConfig({
      e2e: {
        experimentalPromptCommand: true,
      },
    });

    On Cypress 15.13.0 and above, the flag is no longer needed. cy.prompt() is available by default.

  • Sign in to Cypress Cloud: Sign in through the Cypress App or UI mode using npx cypress open so the test runner can access Cypress Cloud and enable AI-powered execution for cy.prompt().
  • Configure Cypress project: Add a valid projectId in cypress.config.js to connect your local project with Cypress Cloud and enable Cypress AI features.
  • const { defineConfig } = require('cypress');
    
    module.exports = defineConfig({
      projectId: "okox7g",
      e2e: {},
      allowCypressEnv: false,
    });
  • Create the test spec file: Create your test file under cypress/e2e/prompt-test.cy.js to keep Cypress E2E testing structured and maintainable.
  • Write intent-driven steps: Describe what the user does, not how elements are located, so Cypress AI can generate stable selectors automatically during execution.
  • describe('Cypress AI Test', () => {
    
      Cypress.on('uncaught:exception', (err, runnable) => {
        return false;
      });
    
      it('submits a message and verifies output', () => {
        cy.prompt([
          'visit https://www.testmuai.com/selenium-playground/simple-form-demo',
          'type "Cypress AI is working" in the input field with placeholder "Please enter your Message"',
          'click the Get Checked Value button',
          'verify "Cypress AI is working" appears in the Your Message section',
        ]);
      });
    
    });

    Each step appears in the Command Log along with generated Cypress commands such as cy.visit, cy.get, cy.type, and cy.contains. You can use the Code option in the log to copy generated commands and move them into your spec file for deterministic Cypress E2E testing in CI.

  • Run the spec: Run npx cypress open, choose E2E Testing, and select Chrome. When the Cypress browser opens, click prompt-test.cy.js in the spec list to run the test.

Note: Skip the manual selector hunt. Pair cy.prompt() with TestMu AI and run AI-generated Cypress tests across 3000+ browser and OS combinations. Try TestMu AI today!

cy.prompt() feels smooth in demos, but running it locally introduces friction. AI execution is limited to Chromium-based browsers, so teams still need separate setups for Firefox or WebKit. Differences in browser versions, network conditions, and system resources can also lead to flaky or inconsistent results between developers and CI.

At scale, the challenges increase. AI usage is rate-limited, which can slow parallel runs, and CI pipelines may depend on live AI calls, adding latency and instability. Running tests across multiple browsers or devices locally is resource-heavy, and parallelization is limited without extra infrastructure.

This is where a cloud-based platform like TestMu AI (formerly LambdaTest) becomes useful. Instead of managing these constraints locally, execution is offloaded to a scalable environment that runs AI-generated Cypress tests across Chrome, Edge, and real mobile browsers in parallel.

It eliminates local inconsistencies, speeds up execution, and removes the need to manage infrastructure or worry about scaling limits.

In practice, teams use cy.prompt() locally to generate tests, then rely on TestMu AI to run them reliably across browsers and devices in CI.

Running Cypress AI Tests at Scale on TestMu AI Cloud

TestMu AI (formerly LambdaTest) is a cloud platform that helps teams run Cypress testing at scale without managing local infrastructure. It fits naturally with Cypress AI by taking tests created with cy.prompt() and running them across real browsers and devices in parallel.

Instead of relying on local machines, teams can execute Cypress AI tests in consistent environments, reduce flakiness caused by setup differences, and get faster feedback in CI. It’s especially useful when moving from generating tests locally to running them reliably across multiple browsers and devices.

Once your AI-authored tests pass locally, freeze them by clicking Code in the Command Log and saving the generated Cypress code into your spec file. You can then run those tests on the cloud using a standard configuration:

{
  "lambdatest_auth": {
    "username": "<your_username>",
    "access_key": "<your_access_key>"
  },
  "browsers": [
    {
      "browser": "Chrome",
      "platform": "Windows 11",
      "versions": ["latest"]
    },
    {
      "browser": "Edge",
      "platform": "macOS Sonoma",
      "versions": ["latest"]
    }
  ],
  "run_settings": {
    "build_name": "Cypress AI Demo",
    "parallels": 5,
    "specs": "./cypress/e2e/prompt-test.cy.js"
  }
}

Trigger the run using the CLI:

lambdatest-cypress run --config-file=lambdatest-config.json --cy "--record --key <your_cypress_key>"

This integrates smoothly with the Cypress CLI and Test Runner, allowing you to move from local AI-generated tests to scalable cloud execution without changing your core workflow.

Execute your first AI-authored Cypress spec across 10,000+ real browsers and devices in under ten minutes. For integration details, check the TestMu AI Cypress CLI documentation.

Run Cypress Tests Using Agent Skills

Alongside running tests on the TestMu AI cloud grid, you can also use Cypress Agent Skills as another way to create and execute tests. Instead of writing tests manually or relying on cy.prompt(), Agent Skills let your AI coding assistant generate complete Cypress testing setups from natural language.

The Cypress Skill is part of TestMu AI agent-skills, which are structured packages that guide AI tools to produce production-ready test automation. While Cypress AI focuses on generating tests inside the runner, Agent Skills work at a broader level, helping you create full test suites, configurations, and execution flows.

How It Works

Once the Cypress skill is installed in your AI tool (such as Claude Code, Cursor, or GitHub Copilot), you can describe what you want:

  • Write Cypress E2E tests for login and run them on TestMu AI cloud.
  • Set up Cypress tests with parallel execution across browsers.

The AI assistant then handles:

  • Project setup and folder structure.
  • Writing test cases with best practices.
  • Configuring cloud execution using lambdatest-config.json.
  • Setting up CI workflows if needed.

When to Use Agent Skills vs Cypress AI

  • Use Cypress AI (cy.prompt()) when you want to quickly generate or refine tests directly inside your test flow.
  • Use Cypress Agent Skills when you want to create or scale a complete Cypress testing setup using an AI assistant.

If you are getting started with Cypress and want to generate production-ready test automation using the cypress-skill, refer to the guide on running your Cypress Testing using Agent Skills.

How to Enable Studio AI for Assertion Recommendations?

Upgrade to Cypress 15.11.0+, link your project to Cypress Cloud, then click the wand icon beside a test in the Cypress App and accept the assertion suggestions that appear after each action.

Cypress Studio lets you record clicks, typing, and form submissions inside a real browser session and then save the generated commands. Studio AI sits on top: it watches DOM changes after each action and proposes the assertion you almost certainly want, like should be visible or should contain text "Welcome". Per the Cypress Studio docs, the recommendations cover visibility, element existence, text content, length, input values, attributes, URL, and page title.

Enable it in three steps:

  • Upgrade to Cypress 15.11.0 or later. Studio AI is gated by version.
  • Sign in to Cypress Cloud and link the project. Free accounts get a two-week trial of paid limits.
  • In your test, click the wand icon next to a test name in the Cypress App. Interact with the app. After each action, a thinking indicator appears, then a suggested assertion shows up below it. Click the suggestion to keep it; click Save when finished.

Studio AI never reads your application source, business rules, or backend. It works purely from the rendered DOM and CSS, so the assertions it suggests are observational, not behavioral.

Use it for the boilerplate "is the button now disabled" assertions and write the deeper business-logic checks yourself.
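Accepted suggestions are saved into your spec as ordinary Cypress assertions. A rough sketch of what the saved output can look like (the selectors and strings below are illustrative, not Studio AI's actual output):

```javascript
// Illustrative shape of a Studio-recorded flow after accepting
// AI-suggested assertions; selectors and text are made up.
it('logs in and lands on the dashboard', () => {
  cy.get('[data-testid="email"]').type('user@example.com');
  cy.get('[data-testid="password"]').type('secret');
  cy.get('[data-testid="submit"]').click();

  // The kinds of checks Studio AI proposes after each action:
  cy.get('[data-testid="submit"]').should('be.disabled'); // element state
  cy.contains('Welcome').should('be.visible');            // text + visibility
  cy.url().should('include', '/dashboard');               // URL
  cy.title().should('eq', 'Dashboard');                   // page title
});
```

Because the suggestions land as plain Cypress code, you can edit or extend them exactly like hand-written assertions.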

How Self-Healing Works in Cypress AI

Cypress AI self-heals through two paths: cached selectors from prior runs (no AI cost), or fresh AI-generated selectors that compare the failed step against the current DOM and update the cache.

Self-healing is the most-requested feature in cy.prompt() and the reason teams keep prompts in CI rather than freezing the generated code. Cypress shipped a detailed walkthrough in its December 2025 self-healing announcement. Two healing paths exist:

  • Self-healed via cache: The original selector fails, but a cached selector from a previous run successfully resolves the element. No AI call is made, so there is no added latency. This appears as “Self-Healed via Cache” in the Command Log.
  • Self-healed via AI: When both the original and cached selectors fail, Cypress sends the original natural language step along with the current DOM to the AI. It generates a new selector, executes it, and updates the cache for future runs. This is logged as “Self-Healed via AI” in the Command Log.

Generated code is cached and shared across machines and environments, including CI runners, per the Cypress documentation. Re-running the same test does not call the AI again unless the prompt or the DOM changes in a way that invalidates the cache. This keeps prompt usage well under the 100/hour free limit even for sizeable suites.

Self-healing solves UI churn but it does not solve flaky waits, network race conditions, or test data drift. For those failure modes, route results through Test Intelligence on TestMu AI to get automatic flakiness scoring and root-cause classification across CI builds.

Note: Run AI-authored Cypress specs across 10,000+ real browsers and devices on TestMu AI cloud, with parallel execution and built-in test analytics. Start free on TestMu AI.

When to Use cy.prompt() vs Studio AI vs KaneAI: A Decision Matrix

Use cy.prompt() to author specs from English, Studio AI for recorded assertions, and KaneAI on TestMu AI for cross-browser, mobile, API, and accessibility testing in one agentic platform.

The three tools overlap in marketing copy but solve different jobs in practice. I ran all three on the same checkout flow and kept notes on where each one earned its keep and where it got in the way.

| Scenario | cy.prompt() | Studio AI | KaneAI (TestMu AI) |
| --- | --- | --- | --- |
| Authoring new E2E spec from English | Best fit. Writes full spec file. | Partial. Only assertions. | Best fit if you want no Cypress code at all. |
| Recording a flow and saving it as code | Not designed for this. | Best fit. Watches DOM, suggests assertions. | Supported via KaneAI recording. |
| Self-healing selectors across CI runs | Best fit. Cache + AI fallback. | No self-healing. | Best fit. Resilient locators across frameworks. |
| Cross-browser (Firefox, WebKit, real mobile) | Chromium only. | Chromium only. | Best fit. 10,000+ browsers and devices. |
| API, accessibility, visual checks | Not supported. | Not supported. | Best fit. UI + API + a11y in one agent. |
| Component testing | Not supported. | Partial. Record-only. | Out of scope (E2E focus). |
| Team size and Cypress fluency | Cypress-fluent teams. | Mixed-skill teams. | QA + PM teams with no Cypress background. |

Limitations of Cypress AI You Should Know

Cypress is unusually transparent about what cy.prompt() cannot do. The cy.prompt reference lists every unsupported case. Reviewing them up front saves hours of debugging:

  • No component testing: cy.prompt() works only for end-to-end specs. Component tests must be written by hand.
  • No cy.request() or API calls: Network-only tests cannot be authored through prompts.
  • No multi-element assertions or not.exist: Assertions on collections, or "this should be gone" without an exact CSS selector, are not supported.
  • No canvas, no iframe, no shadow DOM helpers: Elements rendered inside a canvas, an iframe, or shadow roots cannot be resolved by the AI.
  • No cookie or session clearing through prompts: Use Cypress's session and cookie commands directly.
  • English only: Prompt steps in other languages are not officially supported.
  • Hard ceilings: 50 steps maximum per cy.prompt() call, 100 prompts per user per hour on the free plan, 600 on paid.

The healthy mental model: cy.prompt() handles UI walkthroughs that a human tester would do. For data setup, API contracts, iframes, or canvas-based apps, write the Cypress code yourself or move that scenario to a different testing layer.
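For the unsupported cases above, plain Cypress still works fine alongside prompts. A minimal sketch of keeping API-driven data setup out of the prompt (the seeding endpoint and payload here are hypothetical, for illustration only):

```javascript
// cy.prompt() cannot author cy.request() steps, so do data setup
// in plain Cypress and reserve the prompt for the UI walkthrough.
describe('order history', () => {
  beforeEach(() => {
    // Hypothetical test-seeding endpoint; replace with your own.
    cy.request('POST', '/api/test/seed-orders', { count: 3 });
  });

  it('shows the order history page', () => {
    cy.prompt([
      'visit the order history page',
      'verify the Order History heading is visible',
    ]);
  });
});
```

Splitting setup from walkthrough this way also keeps each prompt short, which matters given the 50-step ceiling.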

Common Mistakes Teams Make With Cypress AI (And How to Avoid Them)

Top Cypress AI mistakes include oversized prompts, hardcoding dynamic values, skipping code review, missing CI retries, ignoring quiet AI-heals, and running cross-browser tests only locally.

After migrating two suites to cy.prompt() and reviewing four more for other teams, the same mistakes surface in the first month. Catching them early is the difference between a suite that stabilizes in a sprint and one that quietly rots for a quarter.

  • Treating cy.prompt() as a replacement for every selector: Teams pile fifty prompts into a single test and blow through the 50-step ceiling. Split one journey into three or four it() blocks, each with a focused prompt of five to ten steps.
  • Leaving dynamic values inline instead of using placeholders: Every email, tenant slug, or generated ID you hardcode into a prompt invalidates the cache on the next run and forces a fresh AI call. That quietly eats your 100-per-hour free budget. Route all dynamic data through the placeholders option.
  • Skipping the Code button: The generated Cypress code is visible in the Command Log for a reason. Teams that never click Code to inspect what the AI wrote ship CI runs they cannot debug when a prompt regresses. Review generated code weekly for the first month.
  • Using cy.prompt() in CI without a retry strategy: AI calls can time out, rate limit, or return a selector that misses on the first try. Set retries in cypress.config.js to at least 2 for run mode and lean on the self-healing cache to stabilize subsequent runs.
  • Ignoring the Command Log for quiet AI-heals: A test that self-heals every single run is not healthy; it is hiding a design problem in the app. Triage persistent Self-Healed via AI entries back to the dev team so they can add stable data-testid attributes.
  • Running the full matrix locally: cy.prompt() is Chromium-only on a laptop, so the test passes on your machine and breaks on Edge or real Android in staging. Push cross-browser runs to the cloud grid from day one rather than discovering the gap at release.
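The retry advice above is a small config change. A sketch for cypress.config.js, using Cypress's standard test-retries option:

```javascript
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  e2e: {
    // Retry in CI (run mode) so a first-try AI miss or rate-limit
    // blip can recover via the self-healing cache; keep interactive
    // runs (open mode) fast with no retries.
    retries: {
      runMode: 2,
      openMode: 0,
    },
  },
});
```

Retries are a safety net, not a fix: if a prompt needs its retries every run, inspect the generated code rather than raising the count.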

Two hours of code review in week one saves two weeks of flake triage in month three. Make the review ritual non-optional.

Best Practices for Cypress AI Adoption

Write intent-first prompts, split journeys into 5-10 step blocks, use placeholders for dynamic values, keep hand-coded smoke tests, watch for AI-heals, and push cross-browser runs to the cloud.

AI features are powerful, but they are not a replacement for good test design. To get consistent value from cy.prompt() and Studio AI, you need to use them deliberately alongside the traditional Cypress best practices.

  • Write intent-first prompts: "click the Submit button" is fine. "click the third button in the second row of the form panel" is fragile. Describe what the user does, not where the element sits.
  • Decide between authoring tool and runtime tool: If your CI must be deterministic, use cy.prompt() once locally, click Code, save the generated commands, and remove the prompt. If you want resilience to UI churn, leave the prompt in.
  • Cap prompt size: One cy.prompt() call supports up to 50 steps, but readability collapses well before that. Split user journeys into 5-10 step prompts per it() block.
  • Use placeholders for every secret and dynamic value: Even though Cypress masks password and credit-card fields automatically, anything else (email, order ID, tenant slug) should still go through placeholders to keep cache hits high.
  • Pair AI tests with human-coded smoke tests: Keep a small set of hand-written Cypress specs covering critical paths so a Cypress Cloud outage or AI rate limit does not block releases.
  • Watch the Command Log for AI-healed steps: A test that quietly heals every run is a test that should be rewritten. Persistent AI healing usually points to a brittle locator pattern in your app, not a Cypress problem.
  • Run cross-browser suites on cloud, not laptops: Local cy.prompt() runs in Chromium only. Push the broader matrix (Edge, Chrome on macOS, real Android Chrome) onto cloud automation testing infrastructure.
...

Conclusion

Start small. Upgrade to Cypress 15.13.0 or later, connect to Cypress Cloud, and try rewriting a single spec using cy.prompt(). Review the generated code, then decide whether to keep the prompt for self-healing or commit the code for stable CI. Avoid scaling too quickly. Begin with a few flaky tests, validate the results, and expand gradually. Once stable, use Studio AI to strengthen assertions and move cross-browser runs to the cloud for better coverage and consistency. Cypress AI reduces the effort of writing and maintaining tests, but the overall testing strategy still depends on you.

Author

Saniya Gazala is a Product Marketing Manager and Community Evangelist at TestMu AI with 2+ years of experience in software QA, manual testing, and automation adoption. She holds a B.Tech in Computer Science Engineering. At TestMu AI, she leads content strategy, community growth, and test automation initiatives, having managed a 5-member team and contributed to certification programs using Selenium, Cypress, Playwright, Appium, and KaneAI. Saniya has authored 15+ articles on QA and holds certifications in Automation Testing, Six Sigma Yellow Belt, Microsoft Power BI, and multiple automation tools. She also crafted hands-on problem statements for Appium and Espresso. Her work blends detailed execution with a strategic focus on impact, learning, and long-term community value.
