Quality control of our products and services is an ongoing process, and it is not limited to the QA department. Every team member participating in a project is involved in quality control to some extent.
Code review within the team
Each developer has their own coding style, which may be hard to understand for others trying to figure out their code.
Moreover, if the code is never reviewed by others, its author may never learn about its problems. This is something we want to avoid, so our policy is to assign at least two developers to each project to review the code and make sure that only quality code is committed.
We do a code review each time we complete a logical component, such as a feature or function, fix a bug, implement an improvement, and so on. For code reviews, we use GitLab merge requests.
With code review, we can catch bugs and errors at early stages and fix them before they get into the committed code.
Through code review, our developers get to know the code created by their colleagues and also can share some hints, tips and useful knowledge that can improve the code quality.
With everyone on the team familiar with the project code, there are no interruptions if one of the team members takes a vacation or sick leave.
We base our workflow on the principles of Continuous Integration and establish source code standards to be followed in every project.
— Continuous integration
With continuous integration, code is frequently merged into a shared repository, where each integration is verified automatically. This way, we prevent most of the integration issues that can affect project development.
Each build delivered to the repository can be reviewed and tested before other components are completed.
We can test our code several times within each iteration, ensuring quick and efficient error fixing. Bugs and errors are caught before they can affect other parts of the project.
Although continuous integration does not eliminate all bugs automatically, it makes them easy to detect and remove.
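As an illustrative sketch, a minimal `.gitlab-ci.yml` for such a pipeline might look like this (the stages, image and commands are assumptions, not our actual configuration):

```yaml
# Hypothetical minimal .gitlab-ci.yml – the image, stages and commands are
# illustrative; a real pipeline depends on the project's stack.
stages:
  - build
  - test

build:
  stage: build
  image: python:3.12
  script:
    - pip install -r requirements.txt

test:
  stage: test
  image: python:3.12
  script:
    - pip install -r requirements.txt
    - python -m pytest
```

Because a pipeline like this runs on every push, a broken integration is flagged before the change is merged.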
— Code standard and Gitflow
We want our source code to be as transparent and consistent as possible. Therefore, the code for each project is written according to a single standard.
The coding standard is selected, discussed and finalized at the project planning meeting of our development team.
All our projects are developed using the GitFlow model, which is specifically designed for team collaboration.
It ensures compliance with the selected coding standard and allows high-level repository operations where all developers get a clear idea of branching and merging processes in the development flow.
If any change is made, each team member receives a notification. With GitFlow, a project is developed according to a clear model that is easy to follow.
The resulting source code requires less debugging, is committed within the set timeframe and is clear and consistent.
— Unit testing
Unit testing is a powerful testing methodology in which the source code is tested in individual units representing complete functions or procedures.
Its effectiveness is in its ability to detect bugs early and allow their prompt fixing. Each bug or error can be traced back to its origin to eliminate the factor that caused it.
This process does not affect the other parts of the code. As unit testing is done together with the development, the team does not have to wait until all product components are complete to start looking for bugs.
With unit testing, we can demo finished components even before the whole product is completed. Unit tests also make code refactoring much safer.
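A minimal sketch of what such a unit test looks like, assuming a Python project and a hypothetical `parse_price` helper (both are illustrative, not taken from a real codebase):

```python
import unittest

def parse_price(text):
    """Hypothetical unit under test: convert a price string like "1,234.50" to cents."""
    normalized = text.replace(",", "").strip()
    return round(float(normalized) * 100)

class ParsePriceTest(unittest.TestCase):
    """Each test exercises the unit in isolation, so a failure points straight at parse_price."""

    def test_plain_number(self):
        self.assertEqual(parse_price("12.30"), 1230)

    def test_thousands_separator(self):
        self.assertEqual(parse_price("1,234.50"), 123450)

    def test_surrounding_whitespace(self):
        self.assertEqual(parse_price(" 7 "), 700)
```

Run with `python -m unittest`; because the unit has no dependencies on the rest of the product, tests like these can run as soon as the function exists.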
Automatic library updates
Automated dependency update services manage library versions for us while keeping control over the project configuration and structure. With these tools, the libraries require less maintenance, and the update flow is easier.
We apply thorough and detailed testing that allows us to check new builds and the finished product under all conditions and scenarios.
— Test cases and test plan
We test our products using various testing techniques and a large number of different devices to verify that they work as expected on every platform. We lay down the testing schedule in a test plan prepared before the project starts.
The test plan shows when each feature or component is to be tested and states the deadline for each one.
Also, the test plan determines the testing environments (devices and operating systems) to be used and the testing types to be applied.
At the beginning of each sprint, we also prepare the test cases. They contain the description of all testing steps to be taken according to a special checklist. The checklist includes the testing procedures needed to verify each functional component.
For different types of components, we have worked out dedicated testing techniques – for example, we have a specific way of testing the credential validation performed during user registration and login.
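As a hedged illustration, a checklist-driven credential validation test might look like the following Python sketch (the field rules and function names are assumptions, not our actual validation policy):

```python
import re

def validate_credentials(email, password):
    """Hypothetical unit: return a list of validation errors (empty list = valid)."""
    errors = []
    # Very loose email shape check – illustrative only.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("invalid email")
    if len(password) < 8:
        errors.append("password too short")
    if password == password.lower():
        errors.append("password needs an uppercase letter")
    return errors

# Checklist cases: (email, password, expected errors) – one row per checklist item.
CHECKLIST = [
    ("user@example.com", "Secret123", []),
    ("not-an-email", "Secret123", ["invalid email"]),
    ("user@example.com", "Short1", ["password too short"]),
    ("user@example.com", "lowercase1", ["password needs an uppercase letter"]),
]

def run_checklist():
    """Return True only if every checklist case produces the expected errors."""
    return all(validate_credentials(e, p) == expected for e, p, expected in CHECKLIST)
```

Each row of `CHECKLIST` corresponds to one checklist item from the test case, so adding a new rule to the checklist means adding one row here.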
— Continuous testing
Our products are tested not only when they are fully developed. We also perform ongoing testing to verify that the estimations and assumptions made during project planning and actual development remain valid.
At the end of each iteration, we test the completed modules as well as the individual features and functions developed during that iteration. This way, we can resolve issues immediately and ensure that no major or critical bugs get into the delivered build.
— Integration testing
Integration testing checks the interactions between functional components. During integration testing, we can detect issues caused by implementing a new feature in an already completed module.
Through integration testing, connections and relations between different components are established and checked for any errors.
This way, we can fix any connection issues before the system becomes too complex.
After integration tests, we proceed with targeted regression testing, and any bugs detected during the testing stage are fixed afterward.
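A minimal Python sketch of an integration check wiring two hypothetical components together (the registration handler and the in-memory user store are illustrative assumptions, not real modules):

```python
class InMemoryUserStore:
    """Hypothetical storage component; a real integration test might use a test database."""

    def __init__(self):
        self._users = set()

    def exists(self, email):
        return email in self._users

    def add(self, email):
        if email in self._users:
            raise ValueError("duplicate user")
        self._users.add(email)

def register(store, email):
    """Hypothetical registration component that depends on the store component."""
    if store.exists(email):
        return "already registered"
    store.add(email)
    return "registered"
```

An integration test would register the same user twice and check that both components agree on the duplicate – a failure here points at the connection between them, not at either unit alone.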
— Regression testing
Whenever a new build is delivered or a bug is fixed, there is always a chance that the change affects something that was successfully implemented before. This is where regression testing helps – it reveals bugs in older components caused by the new ones.
After regression testing, we verify that no bugs remain and finish the flow with smoke tests to confirm that our latest changes did not affect other product components.
Despite the thorough multi-stage testing and bug fixing, we can still miss some hard-to-find bugs which get into the finished product.
Or, an operating system update or a new device can cause our product to perform improperly.
To monitor such cases, we use Crashlytics, a crash reporting service that tracks issues occurring after the application is launched.
Once an issue is found, Crashlytics sends a so-called stack trace, which is a report showing the crash sequence step-by-step.
With this report, we can trace the problem right to its origin. Crashlytics reports also contain other helpful information – device, application version, language, country of use, etc.
— Compatibility testing
With compatibility testing, we verify that the current version of the application is compatible with various browsers, operating system versions, devices, etc.
It prevents the application from crashing when installed on older OS versions or used in different browsers. This also increases the retention rate and improves the user experience.
We pride ourselves on delivering top-quality products. That is why we pay so much attention to quality assurance, testing and bug fixing.
Quality control is always one of the top priorities for our team.