We learned early that an automation tool like Cypress, covering both UI and API, could get us to end-to-end testing once the testing framework and test creation were put into place. The ease of configuring Cypress and creating test cases led us to one challenge: transparent reporting of the test runs and results. Cypress does offer a dashboard solution for test runs, but we needed a test management tool that offered higher-level reports across all the different runs.
TestRail by Gurock does a fantastic job of test case management and reporting. Best of all, Cypress already has a plugin for custom reporting, based on the Mocha-TestRail-Reporter, that works with TestRail's functionality out of the box. With that, we could start creating test cases and reporting test runs using Cypress and the TestRail custom reporter. The only issue was that we had multiple specs spread across many applications at VideoAmp. The first step was to find a Cypress-TestRail-Reporter that would create one test run and keep updating that same run, instead of creating a separate test run for each spec. That led us to the Cypress-TestRail-Reporter-Accumulative, which allowed multiple specs to be reported as a single test run.
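Wiring up a reporter like this is mostly configuration. A minimal sketch of what the setup might look like in `cypress.json` is below; the option names follow the cypress-testrail-reporter family of plugins, and every value here is a placeholder you would replace with your own TestRail instance details:

```json
{
  "reporter": "cypress-testrail-reporter",
  "reporterOptions": {
    "domain": "yourcompany.testrail.io",
    "username": "automation@yourcompany.com",
    "password": "your-testrail-api-key",
    "projectId": 1,
    "suiteId": 1
  }
}
```

With this in place, each Cypress run authenticates against TestRail and pushes results as specs finish.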
As our TestRail is configured to be a single repository for all cases, we started off with multiple test suites to manage cases, but ultimately determined that maintaining that kind of setup would be a headache in the long run. We then noticed that test runs were reporting the executed test cases against every test case in our single repository. The issue: if we only needed to perform a smoke test, for example, how could we report on just those test cases? Nobody in management, no product owner, project manager, or stakeholder cares about results for tests that weren't performed, and including them made the run report messy and incomplete, with a results percentage based on all test cases rather than the ones that ran. This is not good for a company that works with big data.
Now onto the fun stuff: finding a way to report against only the test cases in the run. Our Cypress setup, spanning multiple applications, isn't the typical way to use Cypress. We looked into ways to run only certain test cases and found that Cypress has a plugin for selecting tests by tags. After a few test runs, we were still confused about how the reporting worked and why the select-test plugin wasn't reporting against only the selected test cases. On further investigation of the Cypress-TestRail-Reporter, we found TestRail did allow us to report only selected test cases, but that meant manually entering the test case IDs in the reporter options. Experience says no one wants to maintain that list by hand: tests become outdated without warning, and hand-editing test case IDs always leaves room for human error. With that in mind, we asked ourselves: why not generate that list dynamically so it stays accurate? Cypress-TestRail-Reporter was already capturing the test case IDs in order to update their status after each spec run.
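The reporter knows which cases ran because the case IDs are embedded in the test titles (the `C<number>` convention used by cypress-testrail-reporter). A minimal sketch of that extraction step, with our own helper name `titleToCaseIds`, looks like this:

```typescript
// Extract TestRail case IDs (e.g. "C123") from Mocha/Cypress test titles.
// The "C<digits>" title convention follows cypress-testrail-reporter;
// the helper name and sample titles here are illustrative.
function titleToCaseIds(title: string): number[] {
  const matches = title.match(/\bC(\d+)\b/g) ?? [];
  // Strip the leading "C" and convert to numeric TestRail case IDs.
  return matches.map((m) => Number(m.substring(1)));
}

// Accumulate IDs across specs so the run covers only executed cases.
const caseIds: number[] = [];
const sampleTitles = [
  "C101 C102 logs the user in",
  "C205 filters the campaign table",
];
for (const title of sampleTitles) {
  caseIds.push(...titleToCaseIds(title));
}
console.log(caseIds); // [101, 102, 205]
```

Because the IDs come straight from the titles of the tests that actually executed, the list can never drift out of sync the way a hand-maintained one would.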
We decided to take a stab at it because VideoAmp's engineering culture is very pro-testing and all about taking risks. At first glance, we worried we might be in over our heads, since the SDETs on the team had little to no previous experience with TypeScript. But we knew that if we could find where in the code the test case IDs were being captured and stored, we could collect them in an array and update the test run in real time as specs ran. The solution isn't perfect, but the approach we took was to map the test case IDs from each spec into an array and update the test run in real time with only the test cases that actually ran. We have since expanded this approach to dynamically report the Jenkins build URL and GitHub commit URL for whichever repository the testing was performed on. This workaround works for us at VideoAmp, and I hope it can work for your team as well. Check it out here.
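The final update step can be sketched as building the body for TestRail's `update_run` endpoint (`POST index.php?/api/v2/update_run/:run_id`). Setting `include_all` to `false` and passing only the accumulated `case_ids` is what keeps cases that never ran out of the report; the helper name `buildUpdateRunBody` below is ours, not part of any plugin:

```typescript
// Sketch of the payload sent after each spec to narrow the TestRail run
// to only the executed cases. Field names follow TestRail's update_run
// API; the helper itself is an illustrative assumption.
interface UpdateRunBody {
  include_all: boolean;
  case_ids: number[];
}

function buildUpdateRunBody(caseIds: number[]): UpdateRunBody {
  // Deduplicate so re-run specs don't inflate the list, and set
  // include_all to false so the run holds only the executed cases.
  return { include_all: false, case_ids: [...new Set(caseIds)] };
}

console.log(buildUpdateRunBody([101, 102, 101, 205]));
// { include_all: false, case_ids: [ 101, 102, 205 ] }
```

Posting this body after every spec is what makes the run grow in real time: each spec appends its case IDs, and the report percentage is always computed against only the cases that actually ran.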
Be a part of our agile-minded engineering team. Check out our Careers page to learn more.
Ricky Didaravong has been working in software testing for over nine years. His software testing experience began with mobile testing, which led him into test automation for frontend and backend. Ricky led software testing efforts for MLB.com, NFL.com, and the websites of all 32 NFL clubs before coming to VideoAmp. He is currently a Senior Software Development Engineer in Test (SDET) at VideoAmp.