Software testing is an integral part of software engineering. This process aims to determine whether the software is valid, correct, and complete, and whether it meets the standards set by the customer at each stage of its life cycle.
Unfortunately, developers often underestimate the importance of testing and turn to it only after initial development is finished. At the early development stages, companies try to cut corners on testing and do not allot sufficient resources for it.
Due to costly downtime and tough competition in the modern market, more and more companies understand that they cannot do without testing, and they adopt it as a regular part of the development process. At the same time, very few organizations know how to make testing efficient, apply test management standards, and follow a deliberate strategy. In many cases, testing is done chaotically, yielding only modest results.
According to the latest research, more than 50% of software cost (even more for safety-critical software) is inextricably linked to quality assurance, making it crucial to optimize the testing process. This may be done by utilizing progressive testing technologies.
A testing procedure involves running a test case with certain inputs and then analyzing the results. The data obtained helps developers understand how the system works, whether it conforms to the initial requirements, and where its strengths, weaknesses, and deviations from the planned output lie. Based on this feedback, developers can correct mistakes, eliminate flaws, and reduce deviations before the product is used in real-life conditions. In other words, testing makes the development process more productive and accurate.
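The run-and-compare cycle described above can be sketched in a few lines of Java. The discount rule and its expected values below are illustrative assumptions, not part of any real project:

```java
// Minimal sketch of a test case: known input, expected output, actual
// result compared. The discount rule is an assumed example system.
public class TestCaseSketch {
    // System under test (assumed): 10% discount on orders of 100 or more.
    public static double priceWithDiscount(double price) {
        return price >= 100 ? price * 0.9 : price;
    }

    // One test case: run with a given input, compare against the expectation.
    public static boolean runCase(double input, double expected) {
        double actual = priceWithDiscount(input);
        boolean passed = Math.abs(actual - expected) < 1e-9;
        System.out.printf("input=%.2f expected=%.2f actual=%.2f -> %s%n",
                input, expected, actual, passed ? "PASS" : "FAIL");
        return passed;
    }

    public static void main(String[] args) {
        runCase(50.0, 50.0);    // below the threshold: no discount
        runCase(100.0, 90.0);   // at the threshold: discount applies
        runCase(200.0, 180.0);  // above the threshold
    }
}
```

Each failing comparison points developers at a concrete deviation from the planned output, which is exactly the feedback loop described above.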
1.1. Testing techniques.
There are multiple ways to check whether software operates in the expected way. Here are the three most popular techniques:
- Black box. Testers do not need to know the system’s internal code, and internal logic is rarely involved in the process. This technique lets testers detect bugs and determine whether the software meets its functional requirements.
- White box. Internal logic is more closely examined in the process of white box testing. Particular conditions and loops are used in the framework of test cases. Testers can compare a project with an expected state at different development stages.
- Gray box. This testing type involves manipulating back-end components to assess the system’s work when running a test task.
The testing process does not boil down to just the above-mentioned techniques. Various elements, states, and facets of the system can be viewed from multiple perspectives.
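The black-box/white-box distinction is easiest to see in code. In this hypothetical Java sketch, a small clamp() utility stands in for the system under test; both the function and its spec are assumptions made for illustration:

```java
// Contrast between black-box and white-box test design, using an
// assumed clamp() utility as the system under test.
public class BoxTesting {
    // System under test (assumed): force a value into the range [lo, hi].
    public static int clamp(int value, int lo, int hi) {
        if (value < lo) return lo;   // branch 1
        if (value > hi) return hi;   // branch 2
        return value;                // branch 3
    }

    static void check(boolean condition, String name) {
        System.out.println((condition ? "PASS " : "FAIL ") + name);
    }

    public static void main(String[] args) {
        // Black-box case: derived from the spec alone
        // ("the result always lies within [lo, hi]"), code ignored.
        int r = clamp(42, 0, 10);
        check(r >= 0 && r <= 10, "black-box: result stays in range");

        // White-box cases: derived from the code itself, one per branch.
        check(clamp(-5, 0, 10) == 0, "white-box: branch 1 (below range)");
        check(clamp(99, 0, 10) == 10, "white-box: branch 2 (above range)");
        check(clamp(7, 0, 10) == 7, "white-box: branch 3 (inside range)");
    }
}
```

A gray-box tester would mix the two: design cases from the spec while using knowledge of the branches to pick the most revealing inputs.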
All the procedures should be documented in test plans and reports to draw a comprehensive picture of the system’s operation.
1.2. Test automation.
Whatever testing type you choose, performing tests manually is tedious: you would have to spend extensive time and effort and attend to even the most minor operations to determine whether each development stage was completed correctly.
It is much more reasonable to utilize automated testing tools. With them, you can assess the app’s operation easily and quickly. Once you develop an efficient technique, it will serve you repeatedly on future projects. Developing a consistent workstyle will save you plenty of time, eliminate the need to input each figure and symbol in a testing system, and exempt you from having to analyze each action and reflect on its conclusions.
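As a rough illustration, a data-driven script can replay an entire table of inputs that a manual tester would otherwise type in one by one. The username validator below is an assumed example, not a real product component:

```java
// Data-driven automation sketch: a table of inputs and expected verdicts
// is replayed automatically instead of being typed in by hand.
import java.util.LinkedHashMap;
import java.util.Map;

public class DataDrivenSketch {
    // System under test (assumed): a simple username validator.
    public static boolean isValidUsername(String s) {
        return s != null && s.matches("[a-z0-9_]{3,16}");
    }

    public static void main(String[] args) {
        // The test data table: input -> expected verdict.
        Map<String, Boolean> cases = new LinkedHashMap<>();
        cases.put("alice_01", true);
        cases.put("ab", false);        // too short
        cases.put("Bad Name", false);  // space and uppercase letter

        int failures = 0;
        for (Map.Entry<String, Boolean> c : cases.entrySet()) {
            boolean actual = isValidUsername(c.getKey());
            if (actual != c.getValue()) {
                failures++;
                System.out.println("FAIL: " + c.getKey());
            }
        }
        System.out.println(failures == 0 ? "all cases passed"
                                         : failures + " case(s) failed");
    }
}
```

Growing the suite then means adding rows to the table, not repeating manual work.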
But it would not be right to say that manual testing should be done away with altogether. Even with automated tools, some operations should still be performed by live operators.
In addition, shifting to automated testing can be rather challenging. Such a change requires significant investment and training testers in advanced techniques and skills.
Opporty is a service-focused digital marketplace that provides boundless opportunities both to service providers looking for new customers and to people seeking quality services at reasonable prices. Countless users post their offers and requests on Opporty, making the platform a central marketplace where supply meets demand.
With more than 1,300 manual test cases, quality assurance for the client’s main web application quickly became a challenging task. The application was constantly growing as its functionality expanded. We needed a way to run more tests and improve the overall quality of the system without extending the testing time.
Drawing on extensive experience, our testing team proposed automating key project functionality. Given the complexity of the application, we had to decide which test architecture would be the most efficient and which parts of the site’s functionality should be automated.
We decided to follow the most popular approach and selected several open source tools in combination with custom frameworks, allowing us to:
- Reduce script creation time.
- Reduce script maintenance time.
- Combine several automation tools to leverage their advantages and work around their disadvantages.
- Develop special features that can be used in all scenarios:
  - Creating screenshots.
  - Creating automatic reports.
Some of the tools and technologies selected include:
- Java as the main programming language;
- Maven (build automation and dependency management tool);
- TestNG (testing framework);
- Allure (an open-source framework designed to create test execution reports);
- TeamCity (build server for continuous integration).
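TestNG and Allure require external dependencies, so here is a pure-JDK sketch of the core idea behind such frameworks: discover test methods by convention, run each one, and collect pass/fail counts that a reporting tool would then render into a readable report. The test class and the naming convention are simplified assumptions, not the project's actual code:

```java
// Miniature test runner: finds test methods by naming convention and
// reports results. Real frameworks (TestNG, JUnit) use annotations and
// richer reporting, which tools like Allure then visualize.
import java.lang.reflect.Method;

public class MiniRunner {
    // Assumed test class: every static method whose name starts with
    // "test" counts as a test case.
    public static class StringTests {
        public static void testTrim() {
            if (!"  qa ".trim().equals("qa")) throw new AssertionError("trim");
        }
        public static void testUpperCase() {
            if (!"qa".toUpperCase().equals("QA")) throw new AssertionError("upper");
        }
    }

    // Run every discovered test; return {passed, failed} counts.
    public static int[] run(Class<?> testClass) {
        int passed = 0, failed = 0;
        for (Method m : testClass.getDeclaredMethods()) {
            if (!m.getName().startsWith("test")) continue;
            try {
                m.invoke(null);   // static test method: no instance needed
                passed++;
                System.out.println("PASS " + m.getName());
            } catch (Throwable t) {
                failed++;
                System.out.println("FAIL " + m.getName());
            }
        }
        return new int[] { passed, failed };
    }

    public static void main(String[] args) {
        int[] r = run(StringTests.class);
        System.out.println(r[0] + " passed, " + r[1] + " failed");
    }
}
```

In the real pipeline, Maven pulls the framework in as a dependency and TeamCity triggers this kind of run on every commit.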
Clever-Solution helped the client reduce test processing time by 40% and add hundreds of additional test cases without affecting the development schedule. A number of functional and nonfunctional problems were also identified while writing the automated tests, making it possible to improve product quality. As a result, the platform’s new functionality works much faster.
While testing, we used a broad range of quality assurance strategies, such as:
- Functional Testing;
- Graphical User Interface (GUI) testing;
- Usability and Accessibility;
- Performance (Stress and Load);
- Automation (Java + Selenium + Maven + TestNG);
- Smoke testing;
- Sanity testing.
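The last two strategies are easy to blur together, so here is a hypothetical Java sketch of the distinction: smoke checks are broad and shallow, while a sanity check narrowly re-verifies a recently changed area. All the probe names below are stubs invented for illustration:

```java
// Smoke vs. sanity testing, with stubbed probes standing in for real
// checks against a running system (all names are assumptions).
import java.util.List;

public class SmokeAndSanity {
    // Smoke: broad, shallow checks that the build is stable enough
    // to be worth deeper testing at all.
    public static boolean smokePasses() {
        return appStarts() && loginPageLoads() && databaseReachable();
    }

    // Sanity: a narrow check that one recently changed area still
    // behaves, run before committing to a full regression cycle.
    public static boolean sanityPasses(String changedArea) {
        return List.of("search", "checkout").contains(changedArea)
                ? recheckArea(changedArea)
                : true;  // untouched areas wait for full regression
    }

    // Stubs for illustration; a real suite would probe the live system.
    static boolean appStarts() { return true; }
    static boolean loginPageLoads() { return true; }
    static boolean databaseReachable() { return true; }
    static boolean recheckArea(String area) { return !area.isEmpty(); }

    public static void main(String[] args) {
        System.out.println("smoke: " + (smokePasses() ? "PASS" : "FAIL"));
        System.out.println("sanity(search): "
                + (sanityPasses("search") ? "PASS" : "FAIL"));
    }
}
```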
Documents and reports provided:
- Testing checklist;
- Developed test cases;
- Functional testing scenarios;
- Usability recommendations;
- Test plan;
- Test report.
Tools and Platforms:
- Tools: Redmine, TestRail, Google Docs, JMeter;
- Platforms: Mac, Windows, iOS, Android;
- Desktop browsers: Chrome, Firefox, Safari 12, IE11, Edge;
- Mobile browsers: Safari 12, Chrome, Firefox.
Team: 2 QA engineers.
The following results were achieved by the project’s QA engineers:
- Numerous errors were identified in the app’s logic and interface at the design and development stages. A number of changes were proposed to improve the app’s usability.
- Because development followed Scrum, pre-testing was included in the overall test plan as an important step. This allowed us to identify a number of non-trivial errors.
- Cross-browser testing revealed critical and non-trivial errors. Users’ negative experiences were minimized.