
18 Common Test Automation Mistakes (And How to Avoid Them)

Learn from my experience! Discover 18 common test automation mistakes to avoid as a newbie tester. Don't let these pitfalls hinder your progress in test automation.

Author

Harshit Paul

March 31, 2026

Test automation mistakes are part of the process, but the most common ones are also the most preventable.

Whether you're an automation tester just starting out or an experienced QA engineer scaling a CI/CD pipeline, the same pitfalls keep showing up. This includes automating the wrong things, picking the wrong tools, skipping test design, and writing scripts that break the moment the UI changes.

In this guide, we cover 18 common test automation mistakes QA engineers make, and exactly how to avoid each one. I've made most of these myself and learned the rest from watching teams around me. Make a note of them before you start your next automation project.

Overview

Test automation mistakes reduce the reliability, speed, and ROI of a QA team's automated test suite. Even a technically correct framework becomes a liability when the strategy, scripting patterns, or maintenance habits are weak.

Three Common Categories

These mistakes usually fall into three groups: strategy mistakes, scripting mistakes, and modern mistakes tied to AI-generated tests or AI-native product features. Together, they explain why many test suites become slow, flaky, or hard to trust.

How to Avoid Them?

Define scope before scripting, keep tests modular and independent, assert outcomes explicitly, and maintain the suite continuously as the application evolves. Teams that do this avoid the most expensive failures before they spread across the pipeline.

1. Automate Only When Necessary

When I was first assigned to automate Selenium test scripts, I was excited but nervous to make a good impression. I completed my assigned module, then picked another on my own, and got stuck. The mistake wasn't trying to do more; it was not consulting my seniors. That module wasn't meant for automation because it produced too many false positives and negatives. I spent days on it, neglected my actual work, and made exactly the opposite of the impression I wanted.


I have seen these automation failures happen many times with new automation testers. Curiosity can take you places, and when you first learn automation testing you may try to bring automation into every project. That is not necessary. You may be able to automate a certain thing, but is it actually feasible? Automation is known to save time and effort, yet it is absolutely important to answer one question first: why does this project need automation? Only give automation the green light when you have a pragmatic answer. Avoiding this mistake is vital for an automation tester.

Key takeaway: Not everything that can be automated should be. Always validate with your team whether a module is worth automating before you invest time in it.

2. Define The Scope


Defining the scope of the tests you are going to execute is extremely important. When I was a new automation tester, I tried to test everything and make every test automated. The problem is that even if you can successfully automate all the tests, it is neither practical nor viable. First, many parts of the code do not need frequent testing, yet you may end up investing a lot of time developing a framework or script just for them.

For example, while testing a website using Selenium, it is pointless to automate every element of the website and run your scripts over it; it is not worth the time and money. Second, automating everything inflates your test automation percentage, which makes it feel like you have done an exceptional job when you have not. It might look good on paper, but it is not required. Define the scope of your tests and automate only the code paths that provide real value for the time spent.

Key takeaway: Automating everything inflates your coverage numbers without adding real value. Define which tests justify the time and stick to that list.

3. Select Your Automation Testing Tools Wisely


Another of the most common mistakes an automation tester can make is not picking the right automation testing tool. A project contains many components that focus on different testing objectives, and those objectives are best divided across tools that can each achieve them efficiently. For example, if you want to test an API for your website, Postman is the better choice, but if you want to ensure flawless rendering of your web application across different browsers, an online Selenium Grid is your best call for automated cross browser testing.

The straightforward rule for such situations: do not pick a tool first and then try to fit the problem to it. First identify the problem, then find the tool that suits it.

...

Key takeaway: No single tool solves every testing problem. Match each tool to the specific objective it is built for, not the other way around.

4. Happy Path Only

My suite tested one thing really well for a long time: what happens when everything goes right. Valid email, correct password, expected inputs. Green every day. Then a user submitted a form with an email missing the "@" symbol and got a blank page and a broken session. My suite had never once tested bad input.

Only testing the happy path means only testing the scenario where your users behave perfectly, which they never do. Users forget required fields. They paste text into number inputs. They click submit twice. They hit the back button mid-checkout.

For every form, add at least two negative cases: an empty submission and an invalid format. For every action that changes state, test what happens if a user tries it twice. The gap between "works when everything is perfect" and "works when things go wrong" is exactly where your users live.
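As a minimal sketch of this advice, the block below pairs a happy-path check with the two negative cases recommended above. The `validate_email` function is a hypothetical stand-in for whatever validation your form actually performs:

```python
import re

def validate_email(value: str) -> bool:
    """Hypothetical validator standing in for the form's real one."""
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value))

# Happy path: the one case most suites already cover.
assert validate_email("user@example.com")

# Negative cases: empty submission and invalid format.
assert not validate_email("")                  # empty submission
assert not validate_email("user.example.com")  # missing "@"
assert not validate_email("user@@example.com") # malformed address
```

The same pattern applies beyond forms: for every state-changing action, add a test that repeats the action and asserts the second attempt is handled gracefully.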

Key takeaway: Automating only the scenario where everything goes right leaves every real-user failure mode completely untested. Add negative cases and boundary inputs to every critical flow.

5. Check the Return on Investment

It is a rookie mistake to treat the tester's salary as the only cost factor. I did the same in my initial days, but that is clearly not the whole picture. For example, suppose you want to perform cross browser testing of your website. A tester's salary is obviously part of the cost, but if your team does not know this type of testing or the tools associated with it, you also need to pay to upgrade their skills through training. On top of that, you need the right automation tool or framework to perform the testing itself.

You might consider using Selenium to save costs, but it has limitations—especially in scalability. Selenium Grid allows parallel testing, but you’re still limited to the browsers and environments on your own setup. Testing across multiple browsers, OS versions, and devices quickly becomes time-consuming and expensive, and maintaining a device lab only adds to the burden. So, what’s the alternative?

...

You can look for an online Selenium Grid such as TestMu AI. Our Selenium Grid helps you to test your website across 2000+ real browsers and real operating systems. The best part? It is all on the cloud, which means you can ditch the hassle of maintaining your in-house infrastructure and choose to go for distributed machines hosted on the cloud. You would not only end up saving time but also money.


This was just one instance. You will encounter other investments on your way to automating the testing of a web application, and they will definitely appear. Weigh testing costs carefully against the return you expect on each of these investments.
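A rough back-of-envelope calculation makes this concrete. All the figures below are illustrative assumptions, not benchmarks; plug in your own numbers:

```python
# Back-of-envelope automation ROI, with illustrative (assumed) figures.
manual_run_hours = 6            # hours to run the regression suite by hand
runs_per_month = 20
hourly_cost = 40                # fully loaded tester cost, USD/hour

build_hours = 120               # one-time scripting effort
training_hours = 40             # one-time upskilling
maintenance_hours_per_month = 10
tooling_per_month = 200         # grid/cloud subscription, USD

monthly_manual_cost = manual_run_hours * runs_per_month * hourly_cost
monthly_auto_cost = maintenance_hours_per_month * hourly_cost + tooling_per_month
one_time_cost = (build_hours + training_hours) * hourly_cost

monthly_saving = monthly_manual_cost - monthly_auto_cost
payback_months = one_time_cost / monthly_saving

print(f"monthly saving: ${monthly_saving}")            # $4200
print(f"payback period: {payback_months:.1f} months")  # 1.5 months
```

The point is not the exact numbers but the line items: training, tooling, and ongoing maintenance all belong in the denominator before you claim an ROI.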

Key takeaway: The tester's salary is just one line item. Calculate the full cost of infrastructure, training, and maintenance before committing to an automation approach.

6. Not Adapting Your Automation Strategy for AI-Native Apps

Most automation frameworks were built for deterministic applications: same input, same output, every time.

That assumption breaks completely when you are testing AI-native features like chatbots, recommendation engines, or generated summaries. The output is non-deterministic by design, and a traditional assertion will either fail everything or miss every real failure.

I have seen teams spend weeks trying to make standard Selenium assertions work on AI-driven outputs before realising the strategy itself was wrong. Testing AI features means asserting on tone, structure, or intent rather than exact text and using tools built specifically for that job.
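One way to assert on structure and intent rather than exact wording is to run a set of property checks over the reply. The sketch below is a simplified illustration of the idea, with hypothetical checks for a refund-support bot; real keyword lists and thresholds would come from your product requirements:

```python
def check_support_reply(reply: str) -> list[str]:
    """Check structure and intent of a non-deterministic reply,
    never its exact wording. Returns a list of failed checks."""
    failures = []
    text = reply.lower()
    if not (20 <= len(reply) <= 600):
        failures.append("length out of expected range")
    if not any(word in text for word in ("refund", "return")):
        failures.append("reply does not address the refund topic")
    if any(word in text for word in ("error", "exception", "traceback")):
        failures.append("reply leaks an internal error")
    return failures

# Two differently worded replies both pass the same structural checks.
assert check_support_reply("You can request a refund within 30 days of purchase.") == []
assert check_support_reply("Returns are accepted for 30 days; the refund follows shortly.") == []
# A leaked stack trace fails, regardless of wording.
assert check_support_reply("Traceback (most recent call last): ...") != []
```

The checks stay stable across rephrasings of the same correct answer, which is exactly what an exact-text assertion cannot do.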

If your application has a chatbot or voice agent, you need a testing approach where one AI agent interacts with and evaluates another, which is exactly what TestMu AI's Agent to Agent testing is built for. It lets you test AI chatbots and voicebots the way real users interact with them, without retrofitting a Selenium script to do a job it was never designed for.

Key takeaway: A test suite built for deterministic apps produces meaningless results on AI-native features. Adapt your assertion strategy to validate intent and structure, not exact output, and use tools built for testing AI agents and language model behaviour.

7. Consider Codeless Automation As A Stepping Stone

While codeless automation testing tools have a gentle learning curve and are easy to get started with, they won't help you build the skill set an automation tester profile actually requires. As a beginner they are a good way to get started, but as your test automation career grows, you will realize they are not as helpful as you expected. And if you walk into an interview for an automation tester role armed only with the wisdom of codeless tools, or assume you can automate a complex web application with codeless automation alone, you are going to have a rough time.

...

Reliability is another big question with these tools. At the end of the day, you need to read code to debug where your automation test suite is going wrong, and if you are handed a complex website to work with, you will not find codeless tools as flexible as you would like. Do not run away from code; learn it properly. On top of that, it will look good on your resume. So make sure to avoid this common automation tester mistake.

Key takeaway: Codeless tools are a good starting point but a poor destination. You will eventually need code to debug failures and handle complex scenarios.

8. Maintain Test Design

Test design is the process of converting general test objectives into tangible test cases and conditions. As developers, we tend to think that since testing requires coding, developers can do the job themselves. If that were the case, testers would not exist.

As a beginner, I did not understand the importance of test design and that was probably my biggest mistake as an automation tester. Testing anything anytime is an absurd idea. To test efficiently, testers are required to design the tests and then code them. Designing the tests helps create meaningful tests and makes the overall testing process very efficient.

...

Key takeaway: Automating without designing first leads to meaningless tests. Always define expected inputs, outputs, and edge cases before writing a single script.

9. Avoid False-Positive and False-Negative Results

A false positive occurs when the test results wrongly indicate that a test passed when it actually did not; a false negative is the reverse. Blindly believing test reports is a very common mistake among testers, and it often comes down to not validating the elements you are testing. For example, say you are testing a login page with scripts written for different test cases, and the report indicates the login passed. You still need to validate that the login genuinely succeeded. Stay alert for false positives and false negatives rather than falling for this mistake.
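The sketch below shows how a false positive sneaks in. The `login` function is a hypothetical stand-in for the application under test; the key point is that a test which only checks "the call didn't crash" passes even when login fails:

```python
def login(username: str, password: str) -> dict:
    """Hypothetical app under test: rejects bad credentials
    without raising an exception."""
    if password == "s3cret":
        return {"status": "ok", "user": username}
    return {"status": "error", "user": None}  # no exception raised!

# False positive: the call "succeeds" (no crash), so a lazy test passes
# even though the login was rejected.
response = login("alice", "wrong-password")

# Proper validation: assert on the outcome the test claims to verify.
assert response["status"] == "error"  # bad credentials must be rejected
good = login("alice", "s3cret")
assert good["status"] == "ok" and good["user"] == "alice"
```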

Key takeaway: A passing test that does not actually validate your application is worse than no test at all. Never trust results blindly without confirming what the test is really checking.

10. Focus On Code Reusability

A test case is not unique to the code it has been applied to. In a project, many similar components occur and they require similar test design and test suites. For example, while cross browser testing with Selenium, we found out that four elements of a web page are all input fields and require similar test cases.

Here you could write the tests for the first element only and copy-paste the code for the rest. While this will give the expected results, the problem is that developers might change those elements in the future. To update the test cases, you would then have to change the code in every test suite you wrote, wasting all your time finding and modifying each copy. I made this mistake, and I can tell you it turns ugly fast.

To avoid this, always focus on code reusability. Instead of pasting the code again and again, construct a function with appropriate arguments and call it on each element. This way, if anything changes in the future, you only need to modify the function and you are good to go.
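A minimal sketch of that pattern, using a hypothetical pair of validation rules for the four input fields described above (the field names and limits are illustrative):

```python
def assert_required_field(value: str, field_name: str) -> None:
    """One reusable check applied to every required input field."""
    assert value.strip() != "", f"{field_name} must not be blank"
    assert len(value) <= 100, f"{field_name} exceeds the 100-char limit"

# One function, four fields: a future rule change is a one-line fix
# instead of four scattered edits.
form = {"first_name": "Ada", "last_name": "Lovelace",
        "city": "London", "country": "UK"}
for name, value in form.items():
    assert_required_field(value, name)
```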

Key takeaway: Writing the same logic in multiple scripts creates a maintenance nightmare. Build reusable components from day one so one change does not require twenty fixes.

11. 100% Automation Is A Myth

...

Don't fall for this myth; doing so would be a grave mistake. As a newbie in test automation, I was excited to bring automation to every project, which led me to believe automation testing could completely replace manual testing. Over time I learned that it cannot. Replacing manual testing entirely (100%) is a myth and can never be achieved. As a beginner, do not chase such a goal. Automate only when it is necessary, and only the things that genuinely benefit from automation.

Key takeaway: Chasing 100% automation coverage is a distraction. Focus on automating the tests that run frequently, stay stable, and catch real bugs.

12. Follow a Ground-Up Approach

While testing, you will encounter different types of problems, and you will need to set objectives and categorize them. A ground-up approach means starting automation testing with smaller modules rather than the big ones.

...

One of the biggest mistakes an automation tester can make is to start with the biggest, most complex modules. Don't do that! You lack awareness of the inbound and outbound processes involved in each user interaction, you may lack the proficiency to handle tricky test cases, and you may end up wasting a lot of time with nothing to show for it. Start small and grow your automation testing coverage from the ground up.

Key takeaway: Starting automation with your most complex module is the fastest way to burn time and confidence. Begin with simple, stable flows and build upward from there.

13. Involve Exploratory Testing

Not incorporating exploratory testing into your weekly routine is another common automation tester mistake. Exploratory testing is a great adventure that helps you find new test cases, and it is crucial even when you are deep into automation. Relying only on test scripts can cause you to miss unexpected and important scenarios. As beginners, we want to lean on scripts and pre-written tests alone, and that should be avoided.

Take some time out to perform exploratory testing. You never know what bug you might catch while testing in the wild!


Key takeaway: Relying only on scripted tests means you will always miss the unexpected. Regular exploratory sessions uncover bugs that no pre-written test case would ever find.

14. Design Tests Impervious to UI Changes

The user interface changes constantly, especially in early builds. When your scripts rely on where an element sits or how the page is structured, every UI update becomes a broken test.

I ignored this early on and paid for it every sprint.

The issue is not that the UI changed. It is that the test was written in a way that made it vulnerable to that change. A test that depends on the position of an image or the nesting of a div is not testing your application; it is testing your layout, and layouts change all the time for reasons that have nothing to do with bugs.

Write tests that target stable attributes like IDs or data-testid values rather than positions or structural paths. The fewer assumptions your script makes about the UI, the longer it survives without needing a rewrite.
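The preference order above can be encoded once and reused across the suite. This is a sketch of that idea as a plain Python helper (the attribute names and the fallback XPath are illustrative, not from any specific framework):

```python
def best_locator(attrs: dict) -> tuple[str, str]:
    """Pick the most stable locator available, in order of preference.
    Falls back to a brittle structural XPath only as a last resort."""
    if "id" in attrs:
        return ("id", attrs["id"])
    if "data-testid" in attrs:
        return ("css", f'[data-testid="{attrs["data-testid"]}"]')
    if "name" in attrs:
        return ("name", attrs["name"])
    # Last resort: a positional path that breaks on any layout change.
    return ("xpath", attrs.get("xpath", "//div[2]/div[1]/button"))

# A button carrying a data-testid survives layout changes;
# a positional XPath does not.
assert best_locator({"data-testid": "submit-btn"}) == ("css", '[data-testid="submit-btn"]')
assert best_locator({"id": "login"}) == ("id", "login")
assert best_locator({})[0] == "xpath"
```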

Less time fixing broken scripts means more time actually catching bugs.

Key takeaway: Tests that break every time a button moves are not testing your application. They are testing your layout, which is not the job of automation.

15. Hard Sleeps vs Smart Waits

I used to add Thread.sleep() everywhere in my Selenium scripts. Every page transition, every form submission, just throw in a 5-second sleep and move on. It worked until it didn't. On a slow CI server, 5 seconds wasn't enough and my tests started failing randomly. On a fast machine, those 5 seconds were pure wasted time across hundreds of tests.

Hard sleeps freeze your script for a fixed duration regardless of what the application is doing. Smart waits like WebDriverWait poll the DOM at short intervals and move on the moment your condition is met. If the element appears in 300ms, your test continues in 300ms, not after 5 seconds.

Replace hard sleeps with smart waits and you will immediately see your suite run faster and fail less. Not because your app improved, but because your tests stopped lying about it.
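The polling idea behind a smart wait is simple enough to sketch in a few lines of plain Python. This is the same mechanism Selenium's WebDriverWait uses internally, shown here without any browser dependency; the simulated "element" is a stand-in for a real DOM condition:

```python
import time

def wait_until(condition, timeout: float = 5.0, poll: float = 0.1):
    """Poll `condition` until it returns a truthy value or `timeout`
    elapses -- the idea behind Selenium's WebDriverWait."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result  # exit the moment the condition holds
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Simulated "element" that appears after roughly 0.3 seconds.
appeared_at = time.monotonic() + 0.3
element = wait_until(lambda: time.monotonic() >= appeared_at)
# The wait returns as soon as the condition holds, not after a fixed sleep.
```

Compare that with a hard `time.sleep(5)`: the smart wait finishes in about 0.3 seconds here and would still pass on a slow CI machine where 5 seconds is not enough.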

Key takeaway: A hard sleep freezes your test for a fixed duration whether the app needs it or not. Smart waits exit the moment your condition is met, making your suite faster and far less flaky.

16. Tests Without Assertions

I once reviewed an automation suite where tests ran 50 steps through an entire checkout flow without a single assertion. Green pipeline. Broken application. Nobody knew.

A test without assertions is not a test. It is a script that clicks through your app and assumes silence means success. After logging in, assert the dashboard is visible. After submitting a form, assert the confirmation appears. That one line is the only thing separating a useful test from a false sense of coverage.

The most common reason testers skip this is that record-and-playback tools do not add assertions by default. They capture your interactions and leave the verification to you. If you started with recorded scripts and never went back, your suite may be accepting broken behavior every single day.
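In sketch form, the difference is a single line. The `submit_order` function below is a hypothetical stand-in for the recorded checkout steps:

```python
def submit_order(cart: list[str]) -> dict:
    """Hypothetical checkout call standing in for recorded UI steps."""
    if cart:
        return {"confirmation_id": "A1B2C3", "items": list(cart)}
    return {}

# A recorded script stops here: it clicked through checkout and
# assumed silence meant success.
result = submit_order(["book"])

# The one line that turns a click-through into a test:
assert result.get("confirmation_id"), "no confirmation shown after checkout"
assert result["items"] == ["book"]
```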

Key takeaway: A test that interacts with your application without checking the outcome cannot catch a single bug. Every test needs at least one explicit assertion to be worth running.

17. Monolithic vs Modular Test Design

Early on, I wrote a single test that covered registration, login, search, add to cart, checkout, and order confirmation, all 90 steps in one script. Then a locator on the checkout page changed. My entire test failed and debugging it took longer than the actual fix.

One giant end-to-end test that covers everything sounds thorough. It is not. When a 90-step test fails at step 84, you have no idea whether the problem is in checkout logic, a session timeout, or a random locator change. The failure is real but the signal is useless.

Break your suite into small, focused tests, one for login, one for cart, one for checkout. When the checkout test fails, you know exactly where to look. You also get the ability to run them in parallel, which cuts your pipeline time dramatically.
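The shape of that refactor looks like this. Each function below is a skeleton standing in for a real flow; the point is the structure, not the bodies:

```python
# Monolithic: one failure at step 84 points nowhere.
# Modular: each flow is its own test, so a failure names the broken area.

def test_login():
    session = {"user": "alice"}          # stand-in for the real login flow
    assert session["user"] == "alice"

def test_add_to_cart():
    cart = []
    cart.append("book")                  # stand-in for the cart flow
    assert cart == ["book"]

def test_checkout():
    order = {"status": "confirmed"}      # stand-in for the checkout flow
    assert order["status"] == "confirmed"

# Small tests are independent of each other -- which is also what lets
# a runner execute them in parallel.
for test in (test_login, test_add_to_cart, test_checkout):
    test()
```

If `test_checkout` fails, the other two still run and still pass, so the failure report names the broken area directly.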

Key takeaway: A 90-step test that fails at step 84 tells you nothing useful. Small, focused tests fail specifically and tell you exactly what broke.

18. Coordinate Well With Fellow Testers

A testing team involves many people, each with a different skill set. For example, one person may be good at business testing while another is good at functional testing. That is no reason not to discuss the progress of your tasks with them. Coordination is the key to accelerating product delivery: know who is working on what, which tools they are using, and which programming language for test automation they are comfortable with. Why?

Because it will help you troubleshoot your test automation scripts. If things go south, you will know whose door to knock on!


Knowing your team also helps you manage them when the situation demands it. As discussed earlier, a project might require different tools to achieve a combined objective, so it is best to let each tester use the tool they are most comfortable with.

It is very important not to assign tasks and tools to people at random. To get this right, you can perform a dry run before starting the testing process, and if no one is a good fit for a task, train them accordingly.

Key takeaway: Knowing who on your team handles what prevents duplicated work and gives you the right person to call when your scripts hit a wall.

Key Takeaways

  • Automation is only as good as the planning behind it. Define scope, validate necessity, and align with your team before writing a single script.
  • Tool selection, ROI calculation, and team coordination are not one-time decisions. Revisit them as your product and team evolve.
  • A test without an assertion is not a test. Every script must explicitly verify the outcome it claims to be checking.
  • Hard sleeps, brittle locators, and monolithic test flows are the three fastest ways to turn a healthy suite into an unmaintainable mess.
  • 100% automation coverage is a vanity metric. Focus on tests that run frequently, stay stable, and catch bugs that matter.
  • Exploratory testing is not optional. Scripted tests cover what you expect to break. Exploratory testing finds what you never thought to check.
  • Testing only the happy path leaves every real user failure mode completely uncovered. Add negative cases and boundary inputs to every critical flow.
  • AI tools accelerate test creation but cannot judge whether a script actually tests the right thing. Review every AI-generated test before it enters your pipeline.
  • Traditional assertion strategies break on AI-native features. If your application uses chatbots, recommendation engines, or any non-deterministic output, your testing approach needs to match.
  • Automation is not a one-time setup. It is an ongoing engineering discipline that requires the same maintenance, review, and iteration as the code it tests.

Author

Harshit Paul is the Director of Product Marketing at TestMu AI with over 7 years of experience in product and growth marketing. He also worked for 2 years as a certified Salesforce developer at Wipro Technologies. He has authored 80+ technical articles for TestMu AI, covering testing and automation. Harshit has led go-to-market campaigns, managed Google and Bing Ads with profitable ROAS, and hosted webinars on Software Testing, Automation Testing, Selenium, Browser Compatibility, DevOps, and Continuous Testing. He has also contributed across SEO, social media, performance marketing, partner marketing, and content strategy, while working on blogs, newsletters, web pages, certifications, case studies, and support documentation. Harshit graduated in Computer Programming from Vivekananda Institute of Professional Studies.
