
Test Automation Implementation Pitfalls

Testers increasingly rely on test automation to speed up testing, increase coverage, and generally make their work easier.

How to Plan a Strategy?

I think everyone has received messages from users about bugs that automated tests failed to catch. Users can be inattentive too, and we ourselves do not always fully understand the nuances of the business the product is built for, nor can we cover every scenario. Here is an example from one of our test environments: a test followed a link and verified that a new page opened. However, a page saying “You do not have permission to view this page” also counted as a successful outcome, even though it clearly was not. The result: a real problem, yet a passing test. In any case, the process can be improved, and test automation is a powerful tool for doing so.
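The pitfall above can be sketched in a few lines. This is an illustrative stub, not a real browser test: `open_page`, the page contents, and the permission text are assumed names standing in for a real driver call.

```python
# Sketch of the pitfall above, using a stubbed page object instead of a real
# browser driver; `open_page` and the page contents are illustrative.

PERMISSION_DENIED_TEXT = "You do not have permission to view this page"

def open_page(url, logged_in_as):
    """Stub: pretend to navigate and return the page body."""
    if logged_in_as == "admin":
        return "<h1>Reports</h1><table>...</table>"
    return f"<h1>{PERMISSION_DENIED_TEXT}</h1>"

def naive_check(body):
    # The flawed assertion: "some page opened" is treated as success.
    return body is not None and len(body) > 0

def strict_check(body):
    # The fixed assertion: the page must show the expected content
    # and must NOT be the permission-denied page.
    return "Reports" in body and PERMISSION_DENIED_TEXT not in body

body = open_page("/reports", logged_in_as="viewer")
assert naive_check(body)        # passes, so the bug slips through
assert not strict_check(body)   # the stricter check catches it
```

The fix is not framework-specific: assert on what the page should contain, not merely that a page was returned.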

It is best to start test automation from the developers’ unit and integration tests.

What to automate next depends on how often a type of testing runs, how necessary it is, and the risks involved. So the next candidates are smoke tests, followed by functional or regression tests. After that, we can bring test automation into the Continuous Delivery pipeline. But it is important to choose the right moment.

Step 1. Choosing Functionality For Automation

If you already have test cases, great: we can build the analysis on them. If not, it is time to create them.

Let’s pay attention to the following points.

Is it possible to automate a given scenario, and is it advisable? For example, if a database record only appears half an hour or an hour after it is added, is there any point in having an automated test wait for it? You can wait, but will it speed up testing as a whole? Speeding up testing is usually the main purpose of automation. It turns out that replacing manual testing in such a process makes sense only if we want to free our manual QA engineers from testing it at all.

If the scenario is simple and the check is one-off, is it worth spending time automating it? In theory, yes, especially if the client demands that you “automate absolutely everything!” But remember that this carries extra cost. Can you afford it? Perhaps you should look more closely at something more important.

Finally, if the test data must be carefully selected by hand every time, is there any point in automation? For complex financial systems it is not always possible to write a universal query to obtain the data. Is it reasonable, in your case, to generate data manually every time automated tests run?

There are also tests where a human is faster and more likely to notice an error. Do you really need scripted checks for expectations like these?

Will the resources spent on automation pay off? At first glance, automating a specific test looks simple, but once you start working on it, you may find that the current implementation, or certain conditions, will require significant resources. Consider whether you have the time and whether the client is willing to pay for it.

Do I need to automate simple tests? Why not! Maybe you should shovel the data into a 500-line JSON. Once, the day before a release, I had to compare JSON against data scattered throughout an Excel file. I won’t say the test was too difficult, but even with maximum concentration it took me seven hours. Of course, after that incident we automated the process!
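A comparison like that is easy to script. A minimal sketch, assuming the Excel sheet has been exported to CSV and that both sources share “id” and “amount” columns (the column names and data shape are illustrative):

```python
import csv
import io
import json

# Toy stand-ins for the real JSON payload and the exported spreadsheet.
json_payload = '[{"id": "1", "amount": "10.5"}, {"id": "2", "amount": "7.0"}]'
csv_export = "id,amount\n1,10.5\n2,7.0\n"

# Index both sources by id so the comparison is order-independent.
expected = {row["id"]: row["amount"] for row in json.loads(json_payload)}
actual = {row["id"]: row["amount"]
          for row in csv.DictReader(io.StringIO(csv_export))}

missing = expected.keys() - actual.keys()
mismatched = {k for k in expected.keys() & actual.keys()
              if expected[k] != actual[k]}

assert not missing and not mismatched, f"missing={missing}, mismatched={mismatched}"
```

Seven hours of eyeballing become a sub-second check, and the diff message tells you exactly which rows disagree.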

What about complex tests that require mathematical calculations? Definitely automate them! This provides the necessary accuracy and eliminates the human factor.
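For calculation-heavy checks, a small example of what the accuracy gain looks like; the interest formula and rounding rule here are assumptions for illustration, not from the article:

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical calculation under test: one month of simple interest,
# rounded to cents. Decimal avoids binary floating-point drift that a
# manual spreadsheet check might silently absorb.
def monthly_interest(balance: Decimal, annual_rate: Decimal) -> Decimal:
    return (balance * annual_rate / Decimal(12)).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP
    )

# 1000.00 at 5% annual: 50 / 12 = 4.1666..., rounds to 4.17
assert monthly_interest(Decimal("1000.00"), Decimal("0.05")) == Decimal("4.17")
```

The test pins the rounding behavior explicitly, which is exactly the kind of detail a human verifier tends to miss.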

Step 2. Let’s Make Sure That the Existing Test Cases Are Ready to Be Automated

What do we mean by that? Let’s start with the design.

In many test management systems, you can add an attribute to a test case that indicates whether it needs to be automated (with a reason) or is already automated. This comes in handy because it makes it easier to filter tests and measure test coverage.
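The idea can be illustrated in a few lines: each case carries an automation attribute, so filtering and coverage become trivial. The field names and values here are hypothetical, not tied to any particular test management system.

```python
# Each test case record carries an "automation" attribute (names are
# illustrative, not from a specific tool).
test_cases = [
    {"id": "TC-1", "title": "Login",          "automation": "automated"},
    {"id": "TC-2", "title": "Export report",  "automation": "to_automate"},
    {"id": "TC-3", "title": "One-off check",  "automation": "manual_only"},
]

# Filtering by the attribute gives both the backlog and the coverage figure.
automated = [tc for tc in test_cases if tc["automation"] == "automated"]
backlog = [tc for tc in test_cases if tc["automation"] == "to_automate"]

coverage = len(automated) / len(test_cases)
print(f"automated: {coverage:.0%}, backlog: {len(backlog)} case(s)")
```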

In general, writing clear, detailed test cases, as well as managing that documentation, is a real art. A good practice is test case review, performed by a colleague on the testing team, the team lead, or a business analyst. An outside perspective is always useful: it helps you make sure you have not missed anything and lets you look at the project from a BA’s point of view. This approach confirms that you have covered all user scenarios and met all requirements.

When checking test cases further, I recommend paying attention to the following points:

  • If test cases are written by manual testers. They often write test cases, and it’s great if they also have a general understanding of automation: this lets them assess its feasibility for a specific scenario and mark the case as ready for automation. I have repeatedly encountered situations where manual testers simply forgot to set this attribute, so test cases disappeared from the filters. Or, out of habit, they set it for every test case in a row. When in doubt, you can always consult an experienced automation tester.
  • If test cases are written by automation engineers. In this case, the cases may be written in purely technical language, and the user scenario will not be understandable. For example, I once came across a test that consisted of several queries: “run query 1”, “run query 2”. If a query returned data, is that good or bad? When is the test considered passed? Perhaps this was discussed once, but six months later nobody remembers. It may be convenient for the author, but without the details of the query itself it is hard to figure out what exactly is being checked. Check whether the scenarios are clear to you or still require clarification.
  • Detailed scenarios. Scenarios must be detailed enough that it is obvious what needs to be done, where to click, and what to check. For example, in a scenario for working with a document, the final step is “save the document”. This step is ambiguous because the action can be performed in several different ways. Choosing one of them can radically change the behavior of the automated test, which, as a result, will not test what was intended. We don’t want that, do we? Don’t make other people guess what you meant.
  • Test relevance. Tests must be kept up to date. Yes, it is difficult, but you can’t do without it. It is good practice to notify the automation team of changes before those changes reach their automated tests. For example, we know that the UI will change or an additional window will pop up somewhere. In manual testing you may not even pay attention to this: you have heard about the upcoming changes, you understand that everything is fine, you click OK and move on. But an automated test expects nothing of the kind and will start sounding the alarm.

Step 3. Decide on the Data That Automated Tests Will Use

Often, automated tests generate data for verification and delete it after execution.

This has its pros and cons: on the one hand, we try not to interfere with anything, since we ourselves created the data we needed, checked the functionality against it, and removed it afterward. This is also how we guarantee the required coverage. On the other hand, it is important to test with the data users actually work with. If for any reason we cannot create such data, we use what is already there, but then we do not delete it after the tests.
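The create-check-delete pattern maps naturally onto test setup and teardown. A minimal sketch using `unittest`, with an in-memory dict standing in for the system under test; `create_user` and `delete_user` are illustrative stand-ins for real API calls:

```python
import unittest

# In-memory stand-in for the system under test.
FAKE_DB = {}

def create_user(user_id, name):
    FAKE_DB[user_id] = {"name": name}

def delete_user(user_id):
    FAKE_DB.pop(user_id, None)

class UserTest(unittest.TestCase):
    def setUp(self):
        # The test creates exactly the data it needs...
        create_user("u-test-1", "Test User")

    def tearDown(self):
        # ...and removes it afterwards, leaving the environment clean.
        delete_user("u-test-1")

    def test_user_exists(self):
        self.assertEqual(FAKE_DB["u-test-1"]["name"], "Test User")
```

The teardown runs even when the assertion fails, so the environment is not polluted by leftover test data.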

Why is that? Real data has peculiarities: it may have been imported into the system, formed differently, or follow more complex logic. That is something difficult to reproduce for the purity of the scenario.

Another caveat is that the data changes frequently. If this is your case, let’s look at a different approach.

At some point we add criteria to the test cases describing the data to be used. They can be simple: for example, take a user who registered in the system more than a year ago. One solution is a database query with the same criteria, which is even more convenient for automated tests. This approach suits tests that need to run in different environments where the data differs. But there is a drawback: if the data is complex, that is, there are many criteria (a user who registered a year ago and has not purchased goods of a certain category in the last three months), it can be difficult or even impossible to obtain it this way. It will probably be easier to select the data manually and hardcode it.
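The query-by-criteria approach can be sketched with an in-memory SQLite table standing in for the real database; the schema and column names are illustrative assumptions:

```python
import sqlite3
from datetime import date, timedelta

# In-memory stand-in for the environment's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, registered_on TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(1, "2019-03-10"), (2, str(date.today()))],
)

# Criterion from the test case: "registered more than a year ago".
cutoff = (date.today() - timedelta(days=365)).isoformat()
row = conn.execute(
    "SELECT id FROM users WHERE registered_on < ? LIMIT 1", (cutoff,)
).fetchone()

assert row is not None, "no user matching the criteria in this environment"
user_id = row[0]  # the test continues with whatever user the query found
```

Because the test picks its data at run time, the same scenario works in any environment that contains at least one matching row; the failure message also makes a missing-data situation explicit instead of producing a confusing downstream error.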

In order not to interfere with each other during testing, use different environments or separate data for automated testing and manual testing. Then, when checking a specific scenario, you will not face the problem of accidentally changing data.

Step 4. Optimize Tests

Test quality should not be neglected in the process of optimizing automated tests. They are made faster because speed is their obvious advantage over manual tests. However, be sure to preserve coverage as well.

An attempt to speed up a test can hurt quality. For example, sending requests not through the user interface but directly through the API will, of course, be faster. But with this approach the UI is not checked, and that is fraught with consequences: users will not use the API, and they will notice interface problems right away, which then become blockers for their tasks. If you are considering this, it might be the right moment to split and recombine tests, then calmly think through the coverage and optimize the process to your convenience.
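One way to split tests as described above: bulk setup goes through the fast API path, while a small number of tests still exercise the UI path end to end. The functions here are runnable stubs standing in for real API and browser calls:

```python
# Shared state standing in for the system under test.
ORDERS = []

def api_create_order(item):
    # Fast path: would talk to the backend directly, no browser involved.
    ORDERS.append({"item": item, "via": "api"})
    return len(ORDERS) - 1

def ui_create_order(item):
    # Slow path: in a real suite this would drive the browser; kept for a
    # small number of tests so the interface itself stays covered.
    ORDERS.append({"item": item, "via": "ui"})
    return len(ORDERS) - 1

# Bulk scenario data is created through the API for speed...
for item in ("book", "pen", "mug"):
    api_create_order(item)

# ...but at least one test still goes through the UI end to end.
ui_id = ui_create_order("lamp")
assert ORDERS[ui_id]["via"] == "ui"
assert len(ORDERS) == 4
```

The split keeps the suite fast without silently dropping UI coverage: the interface is still exercised, just not on every data record.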


Test Automation Recommendations

Let’s summarize what to keep in mind when running test automation on a project:

  • be sure to identify and label all scenarios that are appropriate to automate
  • combine tests and try to provide sufficient coverage of data and functionality
  • do not forget custom scripts

And the main thing to remember: the purpose of any tests is to find problems and release a quality product. Let automated tests be ‘green’ and users happy.

By Alex Kara on Aug 14, 2021
Automation QA