Testing is an important part of software development. It ensures that the software works according to the customer’s requirements, helps to minimize risks and increases the overall quality of the product. Identifying deviations at an early stage through testing reduces the effort needed to correct them.
Testing takes many forms and accompanies a product through the entire development process as an important phase of its life cycle. The test process is integrated into the respective software development method, such as the waterfall model, the V-model and agile processes (e.g. Scrum).
Generally, a distinction is made between component test, integration test, system test and acceptance test. Each of these test levels has different objectives and addresses two classes of requirements. Functional requirements define the behavior of the software (characteristics according to ISO/IEC 25010: accuracy, appropriateness, regularity, conformity, interoperability, safety). Non-functional requirements describe how the system meets the functional requirements (ISO/IEC 25010 characteristics: reliability, usability, efficiency, changeability, portability). Different test methods are used depending on the test objective. Examples for testing functional requirements are use-case-based or business-process-based testing; examples for non-functional requirements are load tests, performance tests and stress tests.
The tasks of the test manager include the planning, monitoring and control of the test activities as well as their documentation in test reports. In the test concept and test plan, he specifies the planned scope of the test, describes the test strategy and test procedures, defines criteria for acceptance, discontinuation and continuation of the test, plans resources and timing, and specifies the tools to be used for test automation and for the management of test cases and deviations (cf. ISO/IEC/IEEE 29119).
During the test, the test protocol records what was tested, when, and with what result. By linking requirements to test cases and test results, it serves as proof that the planned test strategy was implemented and should also be comprehensible to persons not involved in the project. Key figures such as the number of scheduled, executed and blocked tests illustrate the test progress. Severity, priority and status of detected defects (new, open, corrected) allow an assessment of product stability or release maturity.
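The key figures mentioned above can be derived directly from the protocol counts. The following sketch uses hypothetical numbers and variable names invented for illustration; they are not taken from any specific tool:

```python
# Illustrative key figures from a test protocol (all values are made up).
scheduled = 120   # test cases planned for this test cycle
executed = 90     # test cases already run
blocked = 10      # test cases that could not be run (e.g. blocked by defects)

progress = executed / scheduled       # share of planned tests already run
blocked_rate = blocked / scheduled    # share of planned tests currently blocked

print(f"progress: {progress:.0%}, blocked: {blocked_rate:.1%}")
```

Tracked over time, such ratios make the test progress visible at a glance, e.g. in a dashboard or in the test report.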
The testers at Novobit are ISTQB® certified and work according to this standard.
A human tester is far more flexible than automated test scripts can be, which is why manual testing always plays a major role in quality assurance, especially when new functionality is developed. The great strength of test automation, however, is the exactly identical execution of defined test cases, which makes it ideally suited for repeating them. These include component tests that allow developers to see immediately whether modified or new components work correctly, regression tests that ensure that unchanged functionality continues to work properly, and non-functional tests that measure the performance of the application.
The component tests are usually created directly by the developers. Essentially, these are repeated calls of functions, methods or classes with different parameters, checking the return values. The regression tests usually run at system level and reproduce processes in the application, e.g. the creation of a data set. Different frameworks for development environments (relatively high development effort) and specialized tools (often less development effort) can be used here. Ideally, the automated tests are integrated into continuous integration so that they run directly after a build. Based on the result, a decision can then be made on a deployment.
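A component test of this kind can be sketched as follows. The function `net_price` is a hypothetical example invented for illustration; the pattern is the one described above: calling it repeatedly with different parameters and checking the return values, here in the pytest style of plain `assert` statements:

```python
# Hypothetical component under test: computes the net price from a
# gross price and a VAT rate. Invented purely for illustration.
def net_price(gross, vat_rate=0.19):
    if gross < 0:
        raise ValueError("gross price must be non-negative")
    return round(gross / (1 + vat_rate), 2)

# Component tests: repeated calls with different parameters,
# checking the return values and the error behavior.
def test_default_vat():
    assert net_price(119.0) == 100.0

def test_reduced_vat():
    assert net_price(107.0, vat_rate=0.07) == 100.0

def test_negative_gross_is_rejected():
    try:
        net_price(-1.0)
        assert False, "expected a ValueError"
    except ValueError:
        pass
```

Run under a test runner such as pytest, these tests give the developer immediate feedback after every change, and the same suite can be executed automatically after each build in continuous integration.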
The proportion of test automation usually decreases as the test level rises. While at the component level the goal should be to provide each component with a test, during system testing the emphasis is placed on the essential functionalities. This is because with each higher test level the test scripts become more complex, which increases the effort (creation and maintenance) and their error-proneness, thus reducing the added value. This balance needs to be optimised.
Test automation is often used very early in agile processes, and in certain approaches such as test-driven development even before the actual development. In this case, test cases are created first, and development then continues until they pass successfully. This requires very close collaboration between tester and developer. During a sprint, the test scripts must be able to run practically continuously so that deviations can be detected immediately. The decisive factor here is that the test automation is extremely robust, for which different frameworks are available, e.g. for behavior-driven, keyword-driven and model-based software development.
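The test-first cycle described above can be sketched in two steps. The function `is_release_ready` and its rules are hypothetical, invented only to show the order of work: the test exists (and fails) before the implementation does, and the implementation is kept minimal until the test passes:

```python
# Step 1: the test is written first, before any implementation exists.
# At this point, running it fails with a NameError, which is the
# expected starting point of the test-driven cycle.
def test_release_ready_requires_clean_protocol():
    assert is_release_ready(open_severe_defects=0, blocked_tests=0)
    assert not is_release_ready(open_severe_defects=1, blocked_tests=0)
    assert not is_release_ready(open_severe_defects=0, blocked_tests=3)

# Step 2: the minimal implementation that makes the test pass.
# (Hypothetical release rule: no open severe defects, no blocked tests.)
def is_release_ready(open_severe_defects, blocked_tests):
    return open_severe_defects == 0 and blocked_tests == 0
```

Only when the test passes is the cycle complete; further requirements are added the same way, one failing test at a time.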