Bad Tests Running Wild – Concurrency, Test Data, and Minimal Human Interaction in Test Automation DevOps
9:15 - 10:15 Student Alumni Room
In 1984, Scorpions released their highly successful album Love At First Sting. The opening track, Bad Boys Running Wild, speaks to me about test automation running amok in a DevOps environment. The lyrics include “if you don't play along with their games,” a line that certainly had no relation to the concept of DevOps at the time, but perhaps it does now.

When testing in a CI/CD pipeline, we often think “just create a task that fails the deployment if any tests fail.” Mechanically, that thought is not far off base. Practically, however, there are many facets of this flavor of testing that we must consider when running automated tests in a pipeline. Typically, the biggest considerations are execution duration and consistency: we don't want to wait “too long” for our deployment, and we want the tests to behave the same way on each execution. Running our automated test scripts in parallel can absolutely reduce the duration of an automation suite’s execution. Having success and consistency with concurrent execution, however, requires upfront work to obtain detailed knowledge of the application being tested and of the dependencies within the automation suite. Omitting this work will result in our automation being unable to get out of its own way; automation will inevitably run wild.

Additionally, deploying on every commit, possibly even straight to production, is the end goal for many teams. Accomplishing this goal brings additional considerations, because it's likely that no human will touch the modified software until it's actually in production. How do we handle test failures in the pipeline? How do we handle bugs that escape into production?

Join Paul Grizzaffi as he walks through important aspects of test automation parallelization, commit-to-production, gating tests, and logging: aspects that must be addressed to be successful when implementing automation in a CI/CD pipeline for a DevOps-focused organization.
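As a rough illustration of the gating and parallelization ideas above (a minimal sketch, not material from the talk; the suite paths and commands are hypothetical), a pipeline step might run independent test suites concurrently and exit non-zero when any suite fails, which is what “fails the deployment if any tests fail” amounts to mechanically:

    # Sketch of a gating pipeline step: run independent suites in parallel,
    # fail the stage (non-zero exit) if any suite fails.
    import subprocess
    import sys
    from concurrent.futures import ThreadPoolExecutor

    # Each suite must be independent (no shared test data or ordering
    # dependencies) for parallel execution to be safe -- the "upfront
    # work" described in the abstract. Paths here are placeholders.
    SUITES = [
        ["pytest", "tests/api"],
        ["pytest", "tests/ui"],
        ["pytest", "tests/integration"],
    ]

    def run_suite(cmd):
        # Capture output per suite so interleaved logs stay readable.
        result = subprocess.run(cmd, capture_output=True, text=True)
        return cmd, result.returncode, result.stdout

    with ThreadPoolExecutor(max_workers=len(SUITES)) as pool:
        results = list(pool.map(run_suite, SUITES))

    failed = [cmd for cmd, code, _ in results if code != 0]
    # Gate the deployment: CI/CD tools treat a non-zero exit as failure.
    sys.exit(1 if failed else 0)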
Paul Grizzaffi
As a QE Solutions Architect at Nerdery, Paul Grizzaffi is following his passion for providing technology solutions to testing, QE, and QA organizations, including automation assessments, implementations, and activities that benefit the broader testing community. An accomplished keynote speaker and writer, Paul has spoken at both local and national conferences and meetings. He is a member of the Industry Advisory Board of the Advanced Research Center for Software Testing and Quality Assurance (STQA) at UT Dallas, where he is a frequent guest lecturer. When not spouting 80s metal lyrics, Paul enjoys sharing his experiences with and learning from other testing professionals; his mostly cogent thoughts can be read on his blog, https://responsibleautomation.wordpress.com/.