Chris Harbert
CEO of Testery, International Speaker, & Podcast Host
About
Chris Harbert is an industry executive, international conference speaker, host of the Developers Who Test podcast, and Founder & CEO of Testery: the easiest way to scale open-source tests. He holds an MBA. His love of software test automation started nearly twenty years ago when he was a developer doing test-driven development at a company that was early to adopt Agile practices. Today he shares this knowledge and experience with others so we can ship better-quality software faster and have more fun doing it.
Deploying and Testing Ephemeral Environments
Description
Merging code that hasn't been fully tested is one of the biggest reasons teams experience missed release dates and flaky test suites. Why? Once code is merged, other developers start new work on top of it, the changes are queued up for the next release, and the code quickly becomes coupled to other changes. The result is often code freezes, failing test runs, late-night release "parties", and painful go/no-go meetings where you get pinned between postponing the release and shipping with bugs.
Deploying to and testing ephemeral environments gives your team the ability to know for certain that the new features are implemented correctly before committing those features to the release and before other developers start depending on the new code. This approach is more important than ever when working with AI-generated code.
In this session, we will cover:
- What ephemeral environments are and why they are so important
- Branching strategies for proper test code management
- Configuring your CI/CD pipeline to automatically deploy feature branches
- How to run end-to-end tests on ephemeral environments
- Strategies for managing databases and test data in ephemeral environments
- Leveraging ephemeral environments to protect your company from AI risks
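To make the per-branch deploy step above concrete: a minimal sketch of one common building block, deriving a DNS-safe environment name and test URL from a git branch name so each feature branch gets its own isolated deployment. All names here (`ephemeral_env_name`, `preview.example.com`) are hypothetical illustrations, not part of any specific tool's API.

```python
import re

def ephemeral_env_name(branch: str, max_len: int = 40) -> str:
    """Derive a DNS-safe slug from a git branch name.

    Hypothetical helper: per-branch deployments generally need a name
    like this, e.g. to use as a subdomain or Kubernetes namespace.
    """
    # Lowercase, collapse anything that isn't [a-z0-9-] into a hyphen,
    # and trim stray hyphens so the result is a valid DNS label.
    slug = re.sub(r"[^a-z0-9-]+", "-", branch.lower()).strip("-")
    return slug[:max_len].rstrip("-")

def ephemeral_env_url(branch: str, base_domain: str = "preview.example.com") -> str:
    """Build the URL an end-to-end test suite would target for this branch."""
    return f"https://{ephemeral_env_name(branch)}.{base_domain}"

print(ephemeral_env_url("feature/JIRA-123_new-login"))
# https://feature-jira-123-new-login.preview.example.com
```

In a CI/CD pipeline, a step like this would run on every pull request: deploy the branch under its slug, point the end-to-end suite at the resulting URL, and tear the environment down when the branch merges or closes.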
Key Takeaways
- Understand the architecture and tooling options for ephemeral environment workflows
- Configure your CI/CD pipeline to deploy and test feature branches in isolation
- Establish quality gates to protect your team from starting with broken code
- Apply these practices to validate AI-generated code before it impacts your team