Dive into the realm of 'Man Vs Machine' in bug discovery after software release. Explore the delicate balance between human intuition and automated precision. Uncover strategies to identify and replicate bugs for a seamless software experience.

Yash Bansal
January 13, 2026
In the fast-paced world of software development, the pursuit of flawless applications is an eternal quest. The journey is fraught with challenges, however, especially when it comes to identifying and resolving bugs. Imagine a dynamic software development setting where teams are pressured to meet tighter release schedules and work within limited budgets. Software often ships with undiscovered bugs because there is insufficient time for thorough identification and resolution during development.
In the traditional bug identification landscape, teams face a cumbersome process. Bugs are tracked through various methods, creating lists that may be prioritized inadequately, so critical issues get lost in a chaotic backlog. This backlog, a mishmash of bugs, ideas, and user requirements, becomes a bottleneck in the development pipeline. Communication breakdowns between teams compound the problem as the support and development teams engage in a back-and-forth cycle with users, attempting to replicate and resolve issues. As software evolves post-release, feedback scattered across diverse platforms and the struggle to prioritize bugs can hinder a seamless resolution process.
But now a paradigm shift has occurred: non-technical users are empowered to contribute to bug identification. Organizations can bridge the gap between technical and non-technical members by streamlining the bug detection process and fostering closer team collaboration. With clear communication channels, the software development life cycle speeds up, reshaping the landscape of post-release bug identification and resolution.
So, without any further ado, let's plunge into the depths of finding replicable bugs post-release and unravel the complexities of bug identification in the current landscape.
TestMu AI Experience (XP) Series includes recorded webinars/podcasts and fireside chats featuring renowned industry experts and business leaders in the testing & QA ecosystem. In this episode of our XP series webinars, our esteemed speaker is Jonathan Tobin, Founder & CEO of Userback.
Jon is not a typical CEO; he's a forward-thinking individual who believes in the perfect blend of technology and human interaction. Beyond business, his interests span a diverse spectrum, from the founder's journey to the art of slow BBQ. Yes, you heard it right, slow BBQ. Whether you seek insights into customer-centricity, building software, or the finer details of BBQ, he is a valued voice.

Jonathan highlighted that tighter release schedules, cost constraints, and internal quality compromises lead to greater reliance on automation testing tools. Sometimes, though not always, those same pressures mean hiring less experienced engineers. With less experience, more bugs are introduced into production, and developers have less time to test.
We then end up with users running into issues while using the software and catching the bugs for us. Many of those bugs are deemed non-critical and placed into the backlog. The issue is that while critical bugs are being prioritized, the rest sit in the backlog. That can change how product development happens: the team moves on to the next project while bugs that still need resolving are left sitting there.
So we look to tools to assist teams, relying more on third-party technology than on people genuinely trying to find and resolve issues thoroughly.
Jonathan mentioned that traditional bug identification methods are generally quite time-consuming, so people tend to take shortcuts. Most organizations want to do the right thing: identify issues and follow the right processes. Teams create lists of bugs and prioritize them once the list exists, but bugs can get lost in that severity-based prioritization, which means they don't get fixed in time.
When non-critical bugs are placed on the backlog, it fills up with bugs mixed in with ideas and user requirements, and teams end up with a very messy backlog that contains everything. Pre-release, what helps is a more focused effort on streamlining the bug detection and testing process for higher-priority bugs, along with a higher level of cooperation with the engineering teams.
For the post-release process, having an internal SLA for bug resolution, clear communication with users, and avoiding putting bugs into the backlog is interesting because it allows you to continue supporting customers and managing their expectations while the non-critical issues are resolved.
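To make the SLA idea concrete, here is a minimal sketch of what such an internal policy might look like as code. The severity levels and hour targets below are illustrative assumptions, not values Jonathan prescribed:

```typescript
// Illustrative severity levels and resolution targets (in hours).
// These values are assumptions for the sketch, not recommended numbers.
type Severity = "critical" | "major" | "minor";

const slaHours: Record<Severity, number> = {
  critical: 24,  // resolve within a day
  major: 72,     // resolve within three days
  minor: 336,    // resolve within two weeks instead of parking it in the backlog
};

interface Bug {
  id: string;
  severity: Severity;
  reportedAt: Date;
}

// Returns the bugs that have breached their SLA window as of `now`,
// so the team can communicate honestly with affected users.
function slaBreaches(bugs: Bug[], now: Date = new Date()): Bug[] {
  return bugs.filter((bug) => {
    const ageHours = (now.getTime() - bug.reportedAt.getTime()) / 3_600_000;
    return ageHours > slaHours[bug.severity];
  });
}
```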
According to Jon, most organizations of any size have this problem: a lack of visibility across the different teams. They often fail to realize that they are collecting feedback from their users in many different ways. Internally, a development team collects feedback from QA testers as they test the product; feedback also comes in through different surveys, such as NPS or customer experience surveys. The product managers run their own surveys, and the marketing team runs theirs.
Everyone is collecting feedback through different mediums, and there is generally no centralized place for a team to go to find out what users are pointing out. The other challenge arises when a customer raises an issue with a non-technical team through a survey, mentioning in passing that something broke while they were using the product the other day. That report can get lost among thousands of responses even though the issue may be affecting other customers, and it is known or surfaced by only one team in the organization, often not the team ultimately responsible for helping those users or fixing the issue.
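One way to picture the centralized place Jon describes is a single normalized record that every feedback channel maps into. A small sketch, with channel names and fields assumed for illustration:

```typescript
// Feedback channels from Jon's example; the names are illustrative.
type FeedbackChannel =
  | "qa-testing"
  | "nps-survey"
  | "customer-experience-survey"
  | "product-survey"
  | "marketing-survey"
  | "support-ticket";

// One normalized shape that every channel maps into, so any team can
// query a single store instead of its own silo.
interface FeedbackRecord {
  id: string;
  channel: FeedbackChannel;
  userId: string;     // who raised it, so a single report is not lost among thousands
  summary: string;
  raisedAt: Date;
  ownerTeam?: string; // the team ultimately responsible for acting on it
}

// An NPS comment and a support ticket end up side by side in the same store:
const feedbackStore: FeedbackRecord[] = [
  {
    id: "f-1",
    channel: "nps-survey",
    userId: "u-42",
    summary: "Export button did nothing when I used the product the other day",
    raisedAt: new Date(),
  },
  {
    id: "f-2",
    channel: "support-ticket",
    userId: "u-7",
    summary: "Dashboard is blank after login",
    raisedAt: new Date(),
    ownerTeam: "web-app",
  },
];
```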
Jonathan highlighted that according to research, 38% of developers spend over 25% of their time fixing bugs, and 26% spend over half their time on it. So the problem isn't fixing the bug; it's gathering the information required to replicate and resolve the issue. And we know that developers don't genuinely enjoy replicating issues and resolving bugs.
Typically, a QA tester or an internal support team member logs an issue as a bug or a task in a project management tool such as JIRA, and a developer receives it. The developer starts working on the issue and can't replicate it, so it goes back and forth between the developer and the customer, and the fix takes longer because all that information has to be gathered first. With the right tool, it doesn't matter who logs the bug with the development team: the bug report is going to be consistent.
So you're just trying to speed up the replication process as much as possible. The biggest resource cost goes uncaptured; a stat from Rollbar's State of Software Code report says 22% of developers feel overwhelmed by the manual processes surrounding bugs, and, more worrying, 31% say manually responding to bugs frustrates them. So it is a simple fix with a huge potential impact.
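The consistent bug report Jonathan describes is, in effect, a fixed schema in which the replication context travels with the report instead of being gathered through back-and-forth. A sketch of what such a schema might contain; every field here is an assumption for illustration:

```typescript
// A consistent bug report: the same fields regardless of who logs it,
// with the replication context attached at submission time instead of
// being chased down afterwards.
interface BugReport {
  title: string;
  stepsToReproduce: string[];
  environment: {
    userAgent: string;                           // browser and OS details
    url: string;                                 // where the issue occurred
    viewport: { width: number; height: number };
  };
  consoleErrors: string[];  // recent console output, captured automatically
  screenshotUrl?: string;   // annotated screenshot or replay link, if available
  reporterId: string;       // retained for triage; devs need not contact the user
}
```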
According to Jon, it's an easy mindset change because the technology already supports this process. You can effectively transform non-technical users into an army of QA testers; the key is consistency and a triage step in between. When anything is identified or logged by a non-technical user, someone in the middle can still review each issue, report, or request, make adjustments, and collect additional information before passing it through to the development team. This gives teams a gatekeeper on issues that might otherwise get logged as bugs.
In software development, users will always log something as a bug and say, 'This is broken; it doesn't work as it should.' They're not all bugs; some are feature requests. Having that gatekeeper in place lets you triage appropriately. It also reassures the development team that they won't have to communicate directly with users: when an engineer receives an issue reported by a user, they may assume they must correspond with that person, but that isn't the case.
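A rough sketch of that gatekeeper step, with the classification labels and enrichment field assumed for illustration; the judgment itself stays human, and the code only enforces a consistent hand-off:

```typescript
// Labels the gatekeeper can assign; the set is assumed for illustration.
type Classification = "bug" | "feature-request" | "needs-more-info";

interface IncomingReport {
  summary: string;
  reporterId: string;
}

interface TriagedItem extends IncomingReport {
  classification: Classification;
  addedContext: string; // detail the triager supplies before escalating
}

// The judgment call is made by the human gatekeeper; this function only
// enforces a consistent hand-off shape toward the development queue.
function triage(
  report: IncomingReport,
  classification: Classification,
  addedContext: string,
): TriagedItem {
  return { ...report, classification, addedContext };
}

// "This is broken" often turns out to be a feature request:
const item = triage(
  { summary: "Can't export to CSV -- broken!", reporterId: "u-7" },
  "feature-request",
  "CSV export was never built; the user expected it next to PDF export.",
);
// Only items classified as "bug" would be escalated to developers.
```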
Q: How can development teams blend non-technical user insights effectively with internal technical expertise to resolve identified bugs or issues?
Jonathan: In my opinion, if all things are equal in terms of the data provided with the bug reports, whether internal reporters or non-technical users are logging the issues, and the context provided is the same, the internal team will always supply a little more information. The blend comes from allowing internal teams to add more context to the user insights before they get escalated to development, via that triage step. As the support person doing the triage, you have a better understanding of the user, so you can add more context around what the user is saying.
Lastly, the user's perspective is important because it is not always the same as the perspective of the developer, the product manager, or other internal team members. Sometimes we think about our products in very specific ways: we believe we're developing the product this way because this is how we want our users to use it. But in reality, users use the product however they want or think they need to. Having that user insight available helps us make better decisions when resolving issues, because issues relate to other areas of the product.
Q: Could you share any insights on how this approach not only aids in bug identification but also contributes to an agile development process or a DevOps culture?
Jonathan: By automating the collection and delivery of issue information to the internal teams with each issue or feedback submission, non-technical users can give the more technical team members everything they need to identify, recreate, and resolve issues. With reporting and feedback tools, that traditional investigation time drops by up to 70%, from what we've seen at Userback at least. It means our teams can maintain that iteration and release philosophy, and that's core to the DevOps philosophy.
It slows the DevOps process when a business tries to turn a non-technical user into a technical issue or feedback submitter, and any business should probably avoid it. Instead, let the technology make it simpler and more frictionless for those non-technical users to provide feedback, and let the technical devs do what they do best: they're there to code, not to cross-examine users. Diving deep into your product's flaws isn't the user's job, and if it's too hard, they won't provide any feedback at all. That's detrimental to product insights and future builds.
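As a rough illustration of the automated context capture Jonathan describes, here is a browser-side sketch in which the user types only a description and everything else is collected silently. The endpoint and payload fields are assumptions, not Userback's actual API:

```typescript
// Runs in the browser when a user submits feedback. Everything except
// `description` is captured automatically, so the user is never cross-examined.
async function submitFeedback(description: string): Promise<void> {
  const payload = {
    description, // the only field the user actually types
    url: window.location.href,
    userAgent: navigator.userAgent,
    viewport: { width: window.innerWidth, height: window.innerHeight },
    submittedAt: new Date().toISOString(),
  };
  // Hypothetical collection endpoint; a real tool would route this to the
  // team's issue tracker with screenshots and console logs attached.
  await fetch("/api/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```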
Q: How do you foresee the future of using user feedback for bug identification and resolution, considering advancing technology and evolving user engagement behaviors?
Jonathan: So much data and so many tools are available, such as cross-browser testing tools like TestMu AI. For bug identification, we are building deeper relationships with our users so they don't hesitate to report an issue, perhaps by prompting them to provide feedback along their journey through the product. Our users probably run into issues more frequently than our internal teams do; after all, they're the ones using the software to do the thing we built it for. And if we identify something in the data that may be causing an issue for our users, we can prompt them along the way and ask them.
And because we know who that customer is, we can find customers in our user database who look similar and prompt them for feedback along the same journey. That gives us better insight into how our customers use the product and what they like and don't like. Often, users will do one thing and say another. Overall, it just provides a better customer experience.
As we conclude this riveting exploration of 'Man Vs Machine: Finding Replicable Bugs Post-Release,' a heartfelt thank you goes out to Jonathan Tobin for sharing his invaluable insights. Jonathan's expertise has illuminated the intricate path of post-release bug detection, offering a blueprint for testers to elevate their approaches.
Brace yourselves for more exciting episodes in the TestMu AI Experience (XP) Series webinars. Until we converge again for more revelations in the expansive universe of software testing, remember to stay curious and resilient and, above all, keep testing! Until next time, the journey persists, and we look forward to sharing more insights with you.