Recently, a customer asked me about the required level of testing for a new build of ePOS software. My response was, “It depends on what changes were made.” The response seemed to baffle some in the meeting because of the commonly held belief that every new software build requires a full round of testing. I explained to the customer that most companies avoid wasting dollars and resources by wisely determining the level of testing needed based on various inputs.
To clarify my position, I explained how testing everything each time a new software build is released would effectively make the new build obsolete before testing is complete. To prove the point, I laid out the scope of testing required to accomplish a full test:
- All EMV card brand certification scripts
- All network host test scripts
- All client test scripts
- All custom workflow test scripts
Once we built out the timeframes for conducting these tests, it became clear why the answer to the testing question is “it depends on what changes were made.”
But the bigger question is how in-depth testing should be with each iteration of software. To make that decision, it is necessary to understand the level at which the software is being delivered. Is this code delivery meant to add new functionality (like EMV), to transition to a new message specification (like ISO 8583), or to address minor fixes? Understanding the scope of changes and their potential impact is necessary to develop an effective testing strategy. To make matters worse, it is often difficult to decipher exactly what has changed: sometimes changes are documented in formal release notes; other times the documentation needs to be coaxed out of the dev team.
This is where the knowledge of the Quality Analyst comes into play. The Quality Analyst can lend valuable insight into decisions around testing strategy and design. Knowledge of the past performance of the ePOS vendor’s software, and of the areas where the vendor has historically been “weak,” helps identify the scope of testing. A deep understanding of the ePOS functionality in relation to the changes and enhancements is critical, and it can be used to determine the level of regression testing to perform. For example, if a vendor has historically had issues with sending a client-specific mail message, then that function should be tested on every build the vendor provides. This is where the role of the Quality Analyst becomes part science and part art form.
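As a rough illustration of this “part science, part art form” scoping decision, the logic can be sketched in code. The change categories, suite names, and the `KNOWN_WEAK_AREAS` set below are hypothetical placeholders invented for this sketch, not part of any real tool or vendor process:

```python
# Sketch of risk-based test scoping: map the scope of a build's changes,
# plus the vendor's historical weak areas, to a set of test suites to run.
# All names here (suites, change categories) are illustrative placeholders.

# The full battery, per the list of what a "test everything" cycle covers.
FULL_SUITE = {
    "emv_brand_certification",
    "network_host",
    "client",
    "custom_workflow",
}

# Areas where this vendor has historically shipped defects; these get
# regression-tested on every build regardless of the stated change scope.
KNOWN_WEAK_AREAS = {"client"}  # e.g., client-specific mail messages

def select_test_suites(change_scope: str) -> set:
    """Return the suites to run for a build with the given change scope."""
    if change_scope == "new_payment_functionality":   # e.g., adding EMV
        selected = set(FULL_SUITE)
    elif change_scope == "message_spec_transition":   # e.g., new ISO 8583 spec
        selected = {"network_host", "client"}
    elif change_scope == "minor_fixes":
        selected = {"client"}
    else:
        # Undocumented or unclear changes: assume the worst, test everything.
        selected = set(FULL_SUITE)
    # Always re-test the vendor's historically weak areas.
    return selected | KNOWN_WEAK_AREAS

print(sorted(select_test_suites("minor_fixes")))  # prints ['client']
```

The point of the sketch is the last line of the function: the vendor’s known weak areas are unioned in no matter how small the declared change, which is exactly the judgment call the Quality Analyst brings.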
In any case, criteria must be established to identify whether further testing of the build is warranted. A basic functionality test – sometimes called a smoke test – is used to determine whether the software can perform its core functions. This can include (but is not limited to) tests that validate:
- Acceptance of all supported payment types and entry modes
- Acceptance at all relevant channels (front counters, kiosks, fuel dispensers, self-checkout lanes, etc.)
- Ability to download/modify device configurations
- Accuracy of network and sales reports
If the build fails these basic tests, the software should be rejected.
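The go/no-go gate described above can be reduced to a simple check: every basic validation must pass before the build earns any deeper testing. The result keys below are invented for illustration; a real harness would drive actual ePOS devices and transactions rather than inspect a prepared dictionary:

```python
# Minimal sketch of a smoke-test gate for an ePOS build.
# The check names are hypothetical labels for the categories listed above;
# a real harness would execute transactions, not read a dict of results.

SMOKE_CHECKS = (
    "payment_types_and_entry_modes",  # all supported tenders and entry modes
    "all_channels",                   # counters, kiosks, dispensers, self-checkout
    "device_config_download",         # can download/modify device configurations
    "report_accuracy",                # network and sales reports reconcile
)

def smoke_test_passes(results: dict) -> bool:
    """Accept the build only if every basic check passed; missing = failed."""
    return all(results.get(check, False) for check in SMOKE_CHECKS)

build_results = {
    "payment_types_and_entry_modes": True,
    "all_channels": True,
    "device_config_download": True,
    "report_accuracy": False,  # reports do not reconcile -> reject the build
}
print("proceed to full testing" if smoke_test_passes(build_results)
      else "reject build")  # prints "reject build"
```

Treating a missing result as a failure (the `False` default) reflects the same conservative stance: if a basic check was not run, the build has not earned further testing.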
In the end, it is important to realize that ePOS testing – and for that matter, testing of any software – cannot be a canned, one-size-fits-all product. The functionality of the software, the scope of changes in the build, and the track record of the vendor all play a role in designing the testing cycle. Scoping testing appropriately prevents the company from wasting valuable testing resources and provides a more disciplined approach to ePOS software release management.