Test automation - ISTQB in practice
- Bence Boda

Certifications play an important role in IT - not necessarily classical academic qualifications, but accredited certifications that demonstrate the knowledge and use of a given technology or methodology. The International Software Testing Qualifications Board (ISTQB) has summarised the theoretical side of testing by compiling a widely accepted glossary of terms and a set of guidelines from the experience accumulated over decades. It provides internationally recognised certifications in software testing and has developed a common theoretical base for test automation. The question is: how do we put such highly theoretical training into practice?

At first glance, the syllabus is very dry material, but the testing principles and methods add real value to a tester's work - textbook situations in a project can be identified quickly, and planning becomes more efficient. In this article, I would like to give examples of these testing principles, illustrate them briefly, and present how they can be interpreted for test automation!
Exhaustive testing is not possible
One of the first principles is that exhaustive testing is not possible. Given the complexity of modern software, there is no meaningful way to actually test every line of code, every logical path, and every condition that can occur. Adequate test coverage must instead be achieved in light of the risks identified by the business and the agreed priorities. Test automation seems to promise an excellent solution here: if tests can be run much faster and automatically, surely full coverage could be produced.
However, experience shows that not everything can or should be automated. The primary goal is rather to relieve testers of the "exhaustive" part of the work, i.e. to take over their monotonous, recurring tasks and support them. With a well-designed automated regression test set and the proactive, experience-based methods of manual testers, we can cover as many critical points as possible.
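To make this concrete, here is a minimal sketch of a risk-based regression check, assuming Python with pytest as the stack; the calculate_discount function and its business rule are purely hypothetical. The point is that a handful of boundary values, chosen by risk, stands in for the impossible "all inputs" run.

```python
# Minimal sketch: instead of exhaustively testing every possible input,
# an automated regression check covers representative, risk-based cases.
# The function under test (calculate_discount) is hypothetical.
import pytest


def calculate_discount(order_total: float) -> float:
    """Hypothetical business rule: 10% discount on orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total


@pytest.mark.parametrize(
    "order_total, expected",
    [
        (0, 0),                            # lower boundary
        (99.99, 99.99),                    # just below the discount threshold
        (100, 90),                         # exactly at the threshold - highest business risk
        (100.01, pytest.approx(90.009)),   # just above the threshold
        (10_000, 9_000),                   # large but realistic order
    ],
)
def test_discount_boundaries(order_total, expected):
    # A few boundary values replace the impossible "test everything" run.
    assert calculate_discount(order_total) == expected
```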

Testing is context dependent
For each application it is important to consider, in the light of the intended use, which areas to prioritise. The requirements and resources for an air traffic control application and an information website for a small business are completely different. My personal bias aside, the most important question for test automation is: do we need this?
For an application of low complexity and low risk, QA often works best with agile manual testers, but as more and more testing processes take shape, it is worth revisiting the case for automation later. I have seen automated tests commissioned on external instruction even when the project itself did not need them or the processes simply could not be automated. Such situations easily leave both parties with a bitter taste in their mouths.
It is important to plan ahead and to stay objective. On the other hand, if testing is already a consideration at the design stage and there is genuine scope for automation, then quality assurance can be designed very effectively.

Testing only shows the presence of defects
With automated tests, we can only find the defects that we have designed our tests to detect. Most frameworks can "catch" unexpected errors that occur during a test run, but we still have to define the checkpoints and conditions ourselves. We reduce the likelihood of undetected errors, but apart from trivial exceptions (e.g. a page that cannot load, a machine that freezes), we must define the potential errors explicitly, in addition to the positive conditions.
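As an illustration, here is a minimal sketch (again assuming Python with pytest) showing that only the conditions we assert on are actually checked; the login function, its credentials, and its messages are hypothetical.

```python
# Minimal sketch: an automated check only detects the defects it was written
# to look for. Anything not asserted below (layout glitches, misleading copy,
# slow responses) passes silently. login() is a hypothetical example function.
def login(username: str, password: str) -> dict:
    """Hypothetical authentication call used for illustration."""
    if username == "demo" and password == "correct-horse":
        return {"status": "ok", "token": "abc123"}
    return {"status": "error", "message": "invalid credentials"}


def test_login_succeeds_with_valid_credentials():
    response = login("demo", "correct-horse")
    # Positive conditions we explicitly decided to check.
    assert response["status"] == "ok"
    assert response["token"]


def test_login_rejects_invalid_password():
    response = login("demo", "wrong-password")
    # Negative conditions also have to be spelled out - the framework will not
    # flag a missing or vague error message on its own.
    assert response["status"] == "error"
    assert "invalid" in response["message"]
```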

The absence of errors fallacy
The absence-of-errors fallacy says that simply finding and fixing all errors does not necessarily result in usable software. A crude example: an accounting program freezes whenever numbers are entered, so the "fix" is that numbers can only be stored as text - the program now runs without errors, but it is also useless. The reverse is also true for test automation: misinterpreting errors can just as easily misguide a project's work.
Typically, we may notice issues related to a system's performance that count as errors in an automated test run but are not a problem in real customer use (e.g. not being able to click an element on a webpage within 1 millisecond). It is therefore important to define, for example, the maximum acceptable load time for each web page, so that these tests flag only the relevant problems.
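A minimal sketch of this idea, assuming Selenium WebDriver in Python (the URL, element ID, and the 5-second budget are hypothetical): the test waits up to an agreed, business-relevant threshold instead of demanding an instant response.

```python
# Minimal sketch with Selenium in Python (assumed framework): rather than
# expecting an element to be clickable "instantly", we define an explicit,
# realistic budget so the test only fails on delays a real user would notice.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

MAX_PAGE_LOAD_SECONDS = 5  # agreed, business-relevant threshold (hypothetical)

driver = webdriver.Chrome()
try:
    driver.set_page_load_timeout(MAX_PAGE_LOAD_SECONDS)
    driver.get("https://example.com/checkout")  # hypothetical page under test

    # Fail only if the button is not clickable within the agreed budget,
    # not because it took more than a millisecond to render.
    WebDriverWait(driver, MAX_PAGE_LOAD_SECONDS).until(
        EC.element_to_be_clickable((By.ID, "place-order"))
    )
finally:
    driver.quit()
```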

A final suggestion
A good way to put the ISTQB principles into practice is to turn them into questions in each testing context:
- Am I trying to do something that is known to be an uphill battle?
- Have I chosen the most useful approach?
- Do I see the boundaries that I have set?
- Am I still aware of the real requirements?
Reference: Brian Hambling (ed.), Software Testing: An ISTQB-ISEB Foundation Guide; www.istqb.org