Synthetic Test Data Yields Cost Savings and Code Coverage | STAND 8
Developers at a major financial institution were spending more than half of each sprint identifying and extracting the data needed for testing. To make matters worse, over 50% of the test data compiled from their Systems of Record (SOR) was bad. Worse still, developers lacked sufficient code coverage at the unit level, which led to sub-par regression test results when new features were introduced. The best data for these tests would have been production data, but using it presented significant regulatory risk.
The client needed large amounts of test data to cut down on testing time and exercise more of their code during each sprint.
STAND 8's approach had two key advantages. First, it removed the regulatory risk: because STAND 8 generated synthetic data from metadata describing production data, no actual customer data was ever exposed. Second, it greatly improved both the speed and the quality of obtaining test data.
Developers no longer had to manually generate test data, which was incredibly time-consuming and error-prone. Using synthetic data allowed for more complete test data sets. The developers were also happy to rid themselves of this grunt work.
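The case study does not describe STAND 8's actual generator, but the core idea, synthesizing test records from metadata that describes production fields rather than from the production data itself, can be sketched in a few lines. Everything below is illustrative: the schema, field names, and value ranges are hypothetical, not the client's real data model.

```python
import random
import string

# Hypothetical metadata describing production fields. Note that no real
# customer values appear here, only types, formats, and ranges.
SCHEMA = {
    "account_id": {"type": "pattern", "pattern": "ACC-########"},
    "balance":    {"type": "float", "min": 0.0, "max": 250_000.0},
    "status":     {"type": "choice", "values": ["OPEN", "FROZEN", "CLOSED"]},
}

def synthesize_row(schema, rng):
    """Generate one synthetic record from field metadata."""
    row = {}
    for field, meta in schema.items():
        if meta["type"] == "pattern":
            # Each '#' placeholder becomes a random digit.
            row[field] = "".join(
                rng.choice(string.digits) if ch == "#" else ch
                for ch in meta["pattern"]
            )
        elif meta["type"] == "float":
            row[field] = round(rng.uniform(meta["min"], meta["max"]), 2)
        elif meta["type"] == "choice":
            row[field] = rng.choice(meta["values"])
    return row

def synthesize(schema, n, seed=0):
    # Seeded RNG so a test run can be reproduced exactly.
    rng = random.Random(seed)
    return [synthesize_row(schema, rng) for _ in range(n)]

rows = synthesize(SCHEMA, 1000)
```

Because the generator reads only descriptive metadata, the same sketch covers both advantages above: regulatory exposure is avoided, and a full, well-shaped data set is produced in seconds rather than compiled by hand.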
The result of all of this work was that code coverage improved and regression defects dropped.
One of the most resounding productivity gains was that developers were no longer spending half a sprint creating data. Once STAND 8's data model was in place, the needed test data could be generated in 7 minutes. That's enough time for a cup of coffee!
Executive stakeholders were ecstatic with the results. The development and testing teams were viewed as a model to improve processes across the organization.