We are often asked if and when fully autonomous testing can be achieved. It’s a topic I love to discuss. But before we dive into it, let’s take a closer look at the two words that make up the term.
“Autonomous” is easy enough to define: no human intervention. “Testing” is harder, because its exploratory, investigative nature does not lend itself to automation. What follows is therefore better categorized as “autonomous checking”. With that in mind, let’s continue.
Advanced tools such as vision-based test automation and other intelligent automation engines have moved the automated-checking problem up a level, from “how do I reliably automate this interface?” to higher-order concerns. Yet humans are still overwhelmingly responsible for creating automated checks: describing which values to enter, which buttons to click, and so on. This is phase one.
The transition to autonomy is best described as moving “from describing to deciding”. This is already happening with approaches such as smart impact analysis: you no longer describe the tests you want to run; you decide whether the tool’s recommendations meet your needs. This works especially well for closed systems such as SAP, Salesforce, and ServiceNow (and the products in this space are excellent). With the help of AI, the trend is extending well beyond these, into the realm of bespoke / custom applications.
Wonderful! So in the future the machine will hand us a list of possible activities and we just give it a green light? Uh … not so fast. You see, these closed systems don’t just have well-defined processes; they also have well-defined expected results (an oracle). That is not the case for custom applications. While it is usually possible to determine the actions to capture (by observing the people who perform them), it is not always possible to extract the “why” component. When a user executes a transaction, their eyes flick to the top of the screen to double-check that the Amount value is correct. That validation is never captured, so the automated process misses a checkpoint (the point was to determine that the transaction was processed correctly, not just that it was processed).
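The gap can be sketched in a few lines. This is a minimal, hypothetical illustration (the function and field names are invented, not from any particular tool): a recorder captures the user’s actions and a shallow outcome check, but not the silent eye-flick to the Amount field, so a human reviewer must insert that validation afterwards.

```python
def process_transaction(account, amount):
    """Toy system under test: applies a debit and returns the new state."""
    account = dict(account)  # copy so the caller's state is untouched
    account["amount"] = account["amount"] - amount
    account["status"] = "processed"
    return account

def recorded_check(account):
    """What a capture tool typically produces: the actions plus a shallow check."""
    result = process_transaction(account, 25)
    assert result["status"] == "processed"  # "it was processed"
    return result

def reviewed_check(account):
    """The same check after a human inserts the missing validation."""
    result = process_transaction(account, 25)
    assert result["status"] == "processed"
    # The validation the recorder never saw: the glance at the Amount field.
    assert result["amount"] == account["amount"] - 25  # "processed correctly"
    return result

if __name__ == "__main__":
    recorded_check({"amount": 100})
    reviewed_check({"amount": 100})
```

Note that `recorded_check` would pass even if the arithmetic were wrong; only `reviewed_check` catches that class of bug, which is exactly the work that still falls to the engineer.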
However, the outlook is not dark. “Fully autonomous” checking may still be quite far away, but the “from describing to deciding” trend will eliminate much of the busywork that plagues quality engineers today. Analyzing generated scenarios, inserting validations, and deciding which to keep is a lot more fun than puzzling over why the login button doesn’t have a stable ID.
That said, there are a few things to keep in mind.
- Beware of test case spam
Be wary if you embark on an autonomous testing effort and your team comes back with a tool or process that “generates thousands of tests”. Someone still has to analyze those tests, insert validations, and debug them when they “fail”. The motto “fewer, targeted tests” has been a good guide for the last two decades and remains one.
- Investigate the “how”
If you’re told that tests can be generated automatically, dig a little deeper into how that happens. AI is not magic; if something looks magical, it’s probably a fabrication. The team should be able to explain what sources of information the process draws on, such as examining usage patterns or analyzing existing (formal) definitions, and how tests are derived from them. “Point it at the app and tests come out” is still firmly rooted in the world of magical thinking.
- Ask about maintenance
Having a thousand tests is like having a thousand smoke detectors. If you own an entire high-rise apartment building, that’s probably justified. If you own a house, you will spend two hours silencing them all every time you burn toast. Failing tests must be investigated, then updated or discarded. Probe how the approach handles this to see whether autonomy can actually save time in the long run.
Nevertheless, the future of autonomous checking looks very bright. The industry goal is to devise a way to generate the smallest, best set of tests needed to achieve the desired level of assurance.
Road to fully autonomous testing