Conference Schedule

Full day of sessions, workshops, and networking opportunities.

Session times coming soon!

The full schedule with time slots will be announced shortly. Check back for updates!

AI Testing Isn't One Thing (And Treating It Like It Is Will Bite You)

Your team shipped an AI feature. Congrats. Now someone asks: how do we test this? You write a test. The output changes. You run it again. Different output. You consider a career in farming. Here's the thing nobody tells you upfront: testing AI-powered software isn't one discipline, it's two. And the moment you try to apply one strategy to both, you're in trouble. This talk breaks down the Two-Track Testing Model that every QA engineer building on AI needs to understand. There's the deterministic side, your traditional test pyramid covering infrastructure, routing, logic, and guardrails, and there's the AI evaluation side, where outputs are non-deterministic, pass/fail doesn't exist, and you need a completely different mental model to even know what "quality" means. We'll walk through how these two tracks diverge, when they converge, and what it takes to get quality signals from both. You'll leave with a practical framework: the Three Pillars of AI Evaluation (human eval, deterministic checks, and LLM-as-judge), a benchmark-first approach to designing your eval strategy, and a clear picture of how maturity stage changes what you should be testing and how. The fundamentals of our craft haven't changed. The pesticide paradox still applies. Risk-based thinking still applies. You still can't test everything. But the tools, vocabulary, and decision-making are genuinely new, and it's worth getting oriented before you're neck-deep in a chatbot that nobody can evaluate with any confidence. This is the talk I wish existed when I started.
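The two-track split the abstract describes can be sketched in a few lines. This is a minimal illustration, not the speaker's framework: the guardrail check is ordinary deterministic pass/fail code, while the judge is a stub standing in for a real LLM-as-judge call. The banned terms, the toy scoring heuristic, and the 0.7 threshold are all invented for the example.

```python
# Deterministic track vs. AI-evaluation track, side by side.
def guardrail_check(output: str) -> bool:
    """Deterministic track: hard rules that never depend on a model."""
    banned = ("ssn", "password")
    return len(output) < 2000 and not any(term in output.lower() for term in banned)

def stub_judge(question: str, answer: str) -> float:
    """AI-evaluation track: a graded score in [0, 1], not pass/fail.

    A real pipeline would send a rubric prompt to an LLM and parse the score;
    this toy heuristic just checks whether the question's key term appears.
    """
    key = question.split()[-1].rstrip("?").lower()
    return 1.0 if key in answer.lower() else 0.2

def evaluate(question: str, answer: str, threshold: float = 0.7) -> dict:
    judged = stub_judge(question, answer)
    return {
        "guardrail_pass": guardrail_check(answer),  # binary signal
        "judge_score": judged,                      # graded signal
        "acceptable": guardrail_check(answer) and judged >= threshold,
    }

result = evaluate("Which city hosts the conference? Columbus?",
                  "It is held in Columbus, Ohio.")
```

Note that the two signals never collapse into one: a release gate can require the deterministic check to pass outright while treating the judge score as a thresholded trend.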

Joel Wilson

Awesome Web Testing with Playwright and AI

Everybody gets frustrated when web apps are broken, but testing them thoroughly doesn't need to be a chore. [Playwright](https://playwright.dev/) together with AI makes testing web apps fun! Playwright offers a slew of nifty features like automatic waiting, mobile emulation, and network interception. Plus, with isolated browser contexts, Playwright tests can set up *much* faster than traditional Web UI tests. In this tutorial, we will automate concise yet robust web app tests for "BuggyBoard", a bug tracking web app, using Playwright in TypeScript. And we will expedite our test development with the superpowers of AI. Specifically, we will cover:

1. How to add Playwright to a project
2. How to explore an app with AI assistance and Playwright's MCP
3. How to generate Playwright test code
4. How to context engineer scalable automation patterns
5. How Playwright compares to other browser automation tools like Selenium and Cypress

By the end of this session, you'll be empowered to test modern web apps with modern web test tools. You'll also have an example project to serve as the foundation for your future tests.

---

This year, I am revamping my Playwright content to include AI-assisted coding techniques, like using context engineering, agentic analysis, and Playwright's MCP servers.

Andrew Knight

Being Nimble - The next step in Agile Testing Optimization

We’ll examine how playbooks drive collaboration by ensuring the right stakeholders engage at the right moments throughout the SDLC, turning “who’s in the room” from a variable into a strategic advantage. You’ll learn practical approaches to building playbooks that support nimble pivots without sacrificing quality. We’ll also address the reality that every Agile framework has tradeoffs. Rather than debating methodologies, this session focuses on quick diagnostic techniques to identify friction points and practical adjustments that help teams operate more efficiently within their chosen approach. Attendees will leave with actionable strategies to strengthen team collaboration, unlock efficiency gains, and build the critical capabilities needed when adaptability matters most.

Melissa Tondi

Collaborate on Your LEGO(R) Vision

LEGO(R) sets are fun to build, but who has ever attempted to build a complete set without looking at the instructions? In this meeting, attendees will form teams and attempt to build a LEGO(R) set without instructions. Only one person from each team will be able to view the finished product before the team starts building. That person must share their vision with the team, who will attempt to build the LEGO(R) set as close to the instructions as possible without peeking. Each group will learn different approaches to collaborate on product development during the meeting to build a set according to a customer's needs. The activity highlights the two Quality Gaps of product development: (1) the gap between what we set out to build and the finished product; and (2) the gap between what customers expect and the finished product. Our goal is to close the two Quality Gaps so we deliver a product on-time & on-budget that customers will love.

Thomas Haver

Decision Records: Understanding Why Those Decisions Were Made!

Ever stared at a complex system and thought, “Wait, why was that decision made?!” We’ve all been there – lost in a maze of logic, struggling to track down the root cause of a problem. Decision Records offer a brilliant way to finally unlock that understanding. They’re like having a detailed, searchable log of every decision in your applications – from architectural style to authentication to service discovery and containerization. By capturing these decisions, you build more maintainable, auditable, and, frankly, less frustrating systems. In this session, you will learn how decision records solve problems, and you’ll be given ideas for templates that you can implement in your projects today! Let’s ditch the guesswork and start documenting the why – because truly understanding your systems is the key to unlocking their full potential!
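For a concrete sense of what such a record looks like, here is the widely used lightweight format popularized by Michael Nygard, shown as one possible starting point (the JWT decision in the heading is an invented example, not from the session):

```
# ADR 0001: Use JWT for service-to-service authentication

## Status
Accepted (supersedes ADR 0000)

## Context
What forces are at play? Describe the technical and organizational
constraints that made a decision necessary.

## Decision
State the choice in full sentences, in active voice: "We will..."

## Consequences
What becomes easier or harder? List both the positive outcomes and
the trade-offs accepted.
```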

Sarah Dutkiewicz

Deploying and Testing Ephemeral Environments

Merging code that hasn't been fully tested is one of the biggest reasons teams experience missed release dates and flaky test suites. Why? Merging code often means other developers start their new work based on the newly merged code, changes are queued up for the next release, and the code quickly becomes coupled to other changes. The result is often code freezes, failing test runs, late night release "parties", and painful go / no-go meetings where you get pinned between postponing the release or shipping with bugs. Deploying to and testing ephemeral environments gives your team the ability to know for certain that the new features are implemented correctly *before committing those features to the release* and *before other developers start depending on the new code*. This approach is more important than ever when working with AI-generated code. In this session, we will cover:

- What ephemeral environments are and why they are so important
- Branching strategies for proper test code management
- Configuring your CI/CD pipeline to automatically deploy feature branches
- How to run end-to-end tests on ephemeral environments
- Strategies for managing databases and test data in ephemeral environments
- Leveraging ephemeral environments to protect your company from AI risks

Key takeaways:

- Understand the architecture and tooling options for ephemeral environment workflows
- Configure your CI/CD pipeline to deploy and test feature branches in isolation
- Establish quality gates to protect your team from starting with broken code
- Apply these practices to validate AI-generated code before it impacts your team

Chris Harbert

Ensuring Software Quality in the world of AI Developers

Like it or not, AI agents can now turn a loosely written paragraph of requirements into a pull request that looks production-ready in minutes. That’s impressive — and horrifying. When code is being generated faster than humans can fully internalize it, QA becomes the last line of defense between “seems fine” and a 2 a.m. incident caused by a misunderstood requirement or a bad database migration. In this session, we’ll explore how quality practices must evolve in a world where teams treat AI agents like new junior developers. We’ll talk about strengthening test plans so they validate intent instead of just implementation, expanding automated coverage to catch AI-specific failure modes, and partnering closely with developers whose familiarity with the generated code may be thinner than in years past. We’ll look at redefining code and feature review processes, improving requirement clarity to reduce ambiguity before it becomes defects, documenting our new vibe coded enterprise systems, and adding guardrails so AI-authored changes can’t slip past quality gates unchecked. By the end, you’ll have a clear understanding of the new risks AI introduces — and practical strategies to help your team move fast without letting AI-generated pull requests quietly YOLO their way into prod.

Matthew-Hope Eland

From API Contracts to UI Confidence: AI-Driven Quality in CI/CD

In modern distributed architectures, the most disruptive defects are often the ones that live in the gaps between services. Contract violations—schema drift, breaking API changes, and consumer-provider mismatches—frequently bypass traditional test suites, only to cause catastrophic failures in the UI or downstream services after deployment. This session provides a technical blueprint for bridging the gap between API reliability and UI confidence. We will walk through a practical implementation of containerized CI/CD pipelines that utilize oasdiff and Docker to detect breaking changes before they hit production. Key technical takeaways include:

- Automating the "Contract-to-UI" link: how to ensure UI automation remains stable by catching underlying API shifts early.
- AI-driven testing with Schemathesis: using AI to derive edge cases and boundary tests directly from OpenAPI specs to increase coverage without manual script bloat.
- Intelligent triage: implementing AI-assisted failure analysis to interpret pipeline logs and provide plain-language explanations for complex integration failures.
- Securing the pipeline: a critical look at security-conscious AI adoption, focusing on data residency and sandboxed execution using enterprise-bounded platforms like Azure OpenAI or AWS Bedrock.
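To make the breaking-change idea concrete: the core of what a tool like oasdiff automates is comparing two versions of a spec and flagging removals a consumer depends on. The toy detector below does only the simplest case, dropped properties between two schema fragments, and the schemas themselves are invented examples, so treat it as a sketch of the concept rather than a substitute for the real tool.

```python
# Toy breaking-change detector over OpenAPI-style schema fragments:
# a property that existed in the old schema but is gone in the new one
# breaks any consumer that reads it.
def removed_properties(old_schema: dict, new_schema: dict) -> list[str]:
    old_props = set(old_schema.get("properties", {}))
    new_props = set(new_schema.get("properties", {}))
    return sorted(old_props - new_props)  # present before, missing now

old = {"properties": {"id": {"type": "string"}, "email": {"type": "string"}}}
new = {"properties": {"id": {"type": "string"}}}

breaking = removed_properties(old, new)  # a UI reading "email" now fails
```

In a pipeline, a non-empty result would fail the contract-check stage before any UI test ever runs, which is what keeps the UI suite stable.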

Mohini Agarwal

Rachana Menon

From Quality Metrics to Quality Mindset: Building Teams That Own Outcomes

Software teams often rely on metrics, dashboards, and defined processes to guide quality efforts. While these tools provide important visibility, they don’t automatically create ownership, accountability, or better outcomes. In many cases, teams learn to optimize for the numbers rather than the intent behind them. This session explores how quality outcomes are shaped less by tools and processes and more by leadership behaviors and team mindset. Drawing from experience in software testing and people leadership, the talk examines how well-meaning management practices can unintentionally reinforce “check-the-box” behavior, and what leaders can do differently to build teams that truly own quality. Participants will explore:

- Why quality challenges are often rooted in leadership and communication gaps
- How metrics can shift from helpful signals to counterproductive targets
- Common ways leaders unintentionally discourage ownership
- Practical leadership behaviors that promote clarity, accountability, and proactive thinking

Rather than introducing new frameworks or methodologies, this session focuses on small, intentional leadership shifts that can have an outsized impact on team behavior. Attendees will leave with actionable ideas they can apply immediately to move teams from compliance-driven execution to shared ownership of outcomes.

Barbara Deaton

How to create a QA or the Highway talk

This session is aimed at the person who is interested in presenting at a conference like QA or the Highway but needs practical help building their first talk. Using examples, suggestions and group feedback, the participant will leave with a step-by-step playbook for what they need to do in order to submit their proposed talk next year.

David Leslie

If AI is Writing the Code, Who’s Guarding the Quality?

AI is helping teams rapidly transform the software development lifecycle—from requirements and design to coding and testing. While AI tools offer faster delivery and greater productivity, they also introduce new and often overlooked quality risks. Without the right processes and culture in place, AI can amplify existing challenges—leading to increased rework, wasted resources, team burnout, and a loss of trust in delivery outcomes. In this interactive session, Jeff Van Fleet and Scott Boyd break down how AI is reshaping the SDLC, where it can introduce hidden risks, and how QA teams can mitigate those risks. You’ll assess where your team sits on an AI maturity model, see what the data show about how teams like yours are performing at each stage, and leave with concrete next steps — not theory, but something you can act on tomorrow:

- How AI is impacting each phase of the SDLC—and where it creates the most risk
- The most common quality issues introduced by AI-generated code
- How to build reciprocal feedback loops where your team’s domain expertise ensures outputs are repeatable, high-quality, and continuously improved upon by both humans and AI

Jeff Van Fleet

Scott Boyd

Layoff to Launch

Losing a job can feel like a dead end, but what if it's the start of your next big chapter? This session covers strategies for turning layoffs and reorgs into career comebacks: leveraging your network, upskilling, and pivoting into new opportunities. If you are navigating a layoff or looking to future-proof your career, this session will equip you with practical tools to turn roadblocks into launchpads.

Ram Gadde

Making Migrations Safer and Cheaper with AI-powered Testing

Migrating legacy systems, such as COBOL-based applications, to modern technologies is more feasible today thanks to development copilots that accelerate work and reduce costs (GitHub Copilot, Cursor, etc.). However, testing remains a significant hurdle, especially when documentation is missing or outdated (the most common scenario), making it hard to verify that the new system behaves the same as the old one. This talk presents a two-track approach designed to help QA teams tackle this challenge in migrations in a more sustainable way: first, a static understanding of the system through code-derived diagrams, and second, a dynamic understanding via observability (being able to ask the system in natural language what’s happening in the backend), acting as a testing copilot. A core contribution of the session is the integration of open-source tools we developed for AI agents that assist testers (https://github.com/abstracta/tero), along with a set of examples used in practical demos. To elaborate, the static track uses code-derived diagrams (state machines, flow diagrams, and sequence diagrams) generated directly from the codebase to illuminate system behavior without relying on outdated documentation. The dynamic track introduces observability as a copilot for testers, enabling real-time visibility into backend behavior in production-like conditions and helping testers validate that changes preserve intended behavior even in the absence of perfect docs. The talk emphasizes AI as a friendly ally for testers across roles, not just developers, and avoids overpromising metrics. While formal metrics are still in progress, early signals suggest that this approach can make migration projects more feasible by optimizing the testing effort. As part of the process, we include a validation stage that combines expert reviews of generated artifacts (diagrams and observability outputs) with cross-checks by peers to ensure coherence and usefulness.

This talk offers a practical, replicable view of how to address legacy migrations from a QA lens powered by AI, without relying on complete or up-to-date documentation. It promotes an “AI as ally” narrative that fosters collaboration among testers, developers, and stakeholders, and it establishes a clear pathway to reduce costs and delivery times while safeguarding quality and behavioral verification. Target audience: QA managers, test leads, engineering managers, VPs, and DevOps leaders involved in migrations.

Federico Toledo

Playwright+MCP Server+Claude: a powerful trio

We’ve all heard the hype about Playwright. And the hype about MCP servers. And even more hype about Claude Code. Now you get to see what all the hype is about. During this session we are going to wire up Claude Code to a Playwright MCP server in a Playwright test automation project. Then we’ll use Claude Code to travel around the web, mapping out pages as it goes. We might even be able to get Claude to create some tests. Will Claude hallucinate? Probably. Will Claude create a bunch of duplicate tests? Probably. Will Claude create a production-ready test automation framework in the short amount of time we have for this session? We’ll find out.

Matthew Eakin

Rolling the Dice

One of the hardest challenges in Quality Assurance is deciding when a release is ready. All software has bugs, but was that strange behavior you observed in testing just a fluke, or a sign of something catastrophic? This talk introduces a practical risk assessment framework built around “rolling the dice” on quality. Each situation is modeled as a die, and each potential outcome as one of its faces. This mental model helps attendees visualize uncertainty, understand the range of possibilities, and evaluate the stability of their releases. Attendees will learn to identify sources of chaos in their products and focus their testing efforts on high-impact risks, removing negative outcomes from their dice and increasing the odds of a successful release.
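The die-per-risk-area mental model lends itself to a quick Monte Carlo sketch. Everything below is an invented illustration (the dice, the probabilities, the trial count), not material from the talk: each risk area is a die, each face an outcome, and mitigating a risk replaces a bad face with a good one, raising the odds that every die comes up clean on release day.

```python
# Simulate a release as rolling one die per risk area.
import random

def release_succeeds(dice: list[list[bool]], rng: random.Random) -> bool:
    """One release attempt: every die must land on a 'good' (True) face."""
    return all(rng.choice(die) for die in dice)

def success_rate(dice: list[list[bool]], trials: int = 20_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    return sum(release_succeeds(dice, rng) for _ in range(trials)) / trials

# Three risk areas, each with one bad face out of six...
risky = [[True] * 5 + [False]] * 3
# ...then two areas fully mitigated, leaving a single 1-in-6 risk.
mitigated = [[True] * 6] * 2 + [[True] * 5 + [False]]

before = success_rate(risky)      # ~ (5/6)^3, roughly 0.58
after = success_rate(mitigated)   # ~ 5/6, roughly 0.83
```

The point of the model is visible in the numbers: removing bad faces does not guarantee success, it shifts the distribution, which is exactly the framing the talk uses for focusing test effort on high-impact risks.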

Paul Turchinetz

Taming the Beast: Testing Non-Deterministic AI Systems with Confidence

For decades, software testing has relied on a comforting assumption: given the same input, systems should produce the same output. AI-enabled systems break that assumption entirely. Large language models and other AI components generate responses that can vary in structure, tone, and content while still appearing “correct”. In this session, we’ll explore why traditional testing strategies struggle with non-deterministic AI behavior and where they quietly fail. Using real-world examples such as AI chatbots and resume-screening systems, we’ll walk through practical techniques for validating AI outputs without relying on brittle, deterministic assertions. Topics include input variation strategies, semantic similarity analysis, bias detection, and using LLMs responsibly as automated evaluators (aka “LLM-as-a-Judge”). Attendees will leave with a clear mental model for testing AI-based systems, concrete patterns they can apply immediately, and guidance on balancing automation, human judgment, and risk. If you’re responsible for the quality of AI-driven features, this talk will help you move from uncertainty to confidence!
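The shift away from brittle assertions can be shown in miniature. The sketch below uses `difflib` from the Python standard library, which measures only surface (character) similarity; a real pipeline would use embedding-based semantic similarity, and the 0.6 threshold is an assumed value for illustration. The assertion pattern, a threshold rather than exact equality, is the transferable part.

```python
# Assert on similarity to a reference answer instead of exact equality.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Surface similarity in [0, 1]; a stand-in for semantic similarity."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def assert_close_enough(candidate: str, reference: str, threshold: float = 0.6) -> None:
    score = similarity(candidate, reference)
    assert score >= threshold, f"output drifted from reference ({score:.2f} < {threshold})"

reference = "Your order has shipped and will arrive Friday."
# Passes even though the wording differs run to run.
assert_close_enough("Your order shipped and should arrive on Friday.", reference)
```

The failure message reports a score rather than a diff, which matches how non-deterministic regressions are triaged: you investigate drift, not mismatch.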

Lee Barnes

Testing the Untestable: A Practical Guide to LLM Quality Assurance

Your entire QA career has been a lie. Okay, not entirely, but everything you know about testing breaks down when the system under test is an LLM. Same input, different output. No spec to test against. "Correct" is subjective. Welcome to AI testing, where assert_equals goes to die. But here's the thing: AI still needs QA. It needs it MORE than deterministic systems because the failure modes are weirder, harder to detect, and way more embarrassing when they hit production. In this talk, I'll share the AI QA Playbook, a practical framework for testing systems that don't behave the same way twice. The five testing pillars you need:

- Accuracy testing: building golden datasets when "correct" is fuzzy
- Bias testing: counterfactual test design that catches discrimination
- Hallucination testing: detecting confident nonsense before users do
- Security testing: prompt injection, jailbreaks, and data leakage
- Regression testing: what does "regression" even mean for AI?

What makes this different:

- Real test data examples, not theory
- Metrics that actually work for non-deterministic systems
- CI/CD integration patterns
- Tools you can use today (including my open-source contributions)

I've spent the last two years figuring out how to do QA for systems that refuse to be predictable. This talk is the playbook I wish existed when I started.
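A golden dataset, the backbone of the accuracy pillar above, can be tiny and still useful. This is a minimal harness of my own construction, not the speaker's: each case pairs a prompt with facts the answer must contain, and accuracy is the fraction of cases whose (possibly non-deterministic) answer includes every required fact. The cases and the model stub are invented.

```python
# Minimal golden-dataset accuracy harness for fuzzy-correct outputs.
def contains_facts(answer: str, facts: list[str]) -> bool:
    """An answer counts as correct if it mentions every required fact."""
    return all(fact.lower() in answer.lower() for fact in facts)

def accuracy(golden: list[dict], model) -> float:
    hits = sum(contains_facts(model(case["prompt"]), case["facts"]) for case in golden)
    return hits / len(golden)

golden = [
    {"prompt": "capital of France?", "facts": ["Paris"]},
    {"prompt": "boiling point of water at sea level?", "facts": ["100", "C"]},
]

def stub_model(prompt: str) -> str:  # stand-in for a real LLM call
    return {"capital of France?": "The capital is Paris.",
            "boiling point of water at sea level?": "Roughly 100 degrees Celsius."}[prompt]

score = accuracy(golden, stub_model)
```

Because the checker tolerates rephrasing, the same golden set keeps working as the model's wording drifts, which is what makes "regression" meaningful again for AI.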

Tanvi Mittal

Testing without AI - How I Learned to Stop Worrying and Love QA

AI has gone from a curiosity to a "must have" technology in less than 4 years and (I grudgingly admit) has gotten significantly better over that time. We better understand its capabilities and limitations. But there is still a lot of hype, anxiety, and misconception about AI. I will talk about how I went from a technical leadership role developing test automation, to a 10x engineer with AI, to becoming obsolete, redundant, unemployed. I'll talk about how the hardships and stress affected me, the cold hard realities of AI, and why I believe that whatever the end state of AI adoption, we're going to need more testers and better QA, not less. This is a difficult transition period, but I see cause for optimism. I'll talk about what AI can and cannot do well, what AI should and should not do, how to help non-technical leadership know the difference, and how you can improve the quality of life and the quality of software with or without AI. P.S. I'm currently working in a position where AI is forbidden and QA is very important.

Aaron Evans

The Day Testing Died — And Quality Evolved

Twenty years ago, my job was to break software. I was trained to think in edge cases, failure paths, and regression suites. If something slipped into production, it meant I missed something. Back then, quality meant testing. Then the world changed. Agile arrived. DevOps arrived. Continuous delivery arrived. And I realized something uncomfortable: quality was never about the test cases. It was about the system behind them. And now AI has changed the ground again. Today, code is written by machines. Tests are generated by copilots. Reviews are assisted by algorithms. So if machines can write the tests… Who is responsible for trust? That question is why we must evolve — from testers, to quality champions, to AI validation designers.

Tatyana Arbouzova

The Gumshoe Protocol

It was a dark and stormy night, the kind of night you expect a P0 defect, when the Teams call interrupted my dinner of Cup of Noodles. It was the VP of Customer Success: “We have a customer-facing problem. We need you and the Gumshoe Team on the case.” Root cause analysis (RCA) is a critical skill for everyone; however, most professionals have never had the opportunity to identify the root cause of a defect before needing to do so in a critical situation. Effective RCA requires all stakeholders to think critically and use their best judgement on the often limited information available. In this workshop, participants will role-play through a real-life scenario and interact with logs, users, and other stakeholders to figure out the root cause before coming together to brainstorm how we could have prevented the incident. Join the Gumshoe Team of developers, QAs, product professionals, customer support, and project managers to crack the case.

Jenna Charlton

Jenny Bramble

The Human Advantage: Making Better QA Decisions with AI in the Loop

As AI becomes a key player in our tools and workflows, testers need to evolve their strategies, because quality still depends on human judgment. This session focuses on the tester’s mindset and on strategies, in both their personal and professional lives, that help QA engineers make better decisions, communicate risk clearly, and build trust in what they deliver. This is a practical, human-centered talk on how to become a stronger Quality Assurance professional in an AI-accelerated environment without becoming overly dependent on tools. Focusing on the human aspect of QA, we’ll explore the mental models and habits that separate “someone who runs tests” from “someone who owns quality.” Key takeaways include:

- Using AI as a test helper: expanding ideas, helping debug, writing reports, and prompt patterns to generate better scenarios
- QA mindset shifts for stronger testing: risk-based thinking, systems thinking, and skepticism without becoming adversarial
- Communication strategies that make QAs more human
- Personal growth strategies that anyone can follow to build confidence and resilience
- A set of repeatable daily/weekly habits (checklists, journaling prompts, review routines) to build stronger intuition and consistency

This session will be beneficial for Quality Assurance professionals at all levels. If you are wondering how QA roles will evolve with AI in the mix, this session is for you.

Krishna Bandarupalli

The Judgment Gap: Why AI Adoption Without Verification Is Worse Than No AI At All

Your organization adopted AI. Congratulations — so did everyone else. But here's what the data actually shows: in a pre-registered experiment with 758 consultants, Harvard and BCG found that AI made good work 40% better and bad work 19 percentage points *worse* than working without AI at all. The tool that amplifies expertise also amplifies poor judgment — and the boundary between the two is invisible without training. This isn't a theoretical risk. In our own survey of 571 professionals, 93.9% use AI frequently — but 69.4% spend zero time on advanced capabilities. Microsoft's 300,000-person Copilot rollout found the same pattern: broad adoption clustered at the simplest features, with a measurable productivity dip from weeks 3 through 10 as initial excitement collided with real-world complexity. Adoption isn't the hard part anymore. Judgment is. In this session, you'll learn:

- How to identify whether a task falls inside or outside AI's reliability boundary — and why getting this wrong is catastrophic
- A practical verification framework that catches AI failures *before* they reach your clients or production systems
- Why the "adoption valley" kills most AI initiatives, and the specific practices that get teams through it
- How to build team-level habits that make AI output trustworthy by default, not by accident

You'll leave with concrete processes you can implement Monday morning to close the gap between "we use AI" and "we use AI well."

Tim Rayburn

The Quality Horizon: Modern Best Practices and the Art of Constant Adaptation

When I started my career twenty years ago, software quality basically meant "no bugs," and testing meant executing a finite set of cases. Since then, I’ve watched the concept evolve alongside significant shifts in technology and industry practices. Because these changes have been largely additive, we face an ever-expanding horizon of what "good" software looks like. In this talk, I offer a definition of modern software quality that incorporates the many expectations accumulated from technological and process trends such as Agile, automated testing, cloud hosting, DevOps, and AI. Drawing on my own painful experiences with neglecting or misunderstanding the evolving dimensions of quality, I’ll share examples of what effective practices can look like. To help you assess how you and your team are doing with respect to this laundry list of things to be responsible for, we will break them down into four key pillars:

- User Value: Building the Right Thing (strategic alignment, risk assessment, measurable impact)
- Product Health: Testing the Things We Can Predict (enabling and executing solid testing strategies)
- Operational Health: Dealing with the Unexpected (observability, recovery, non-deterministic behaviour)
- Sustainability: Holistic Stewardship (security, cost, performance, accessibility, maintainability)

We’ll cover a series of questions to help you explore and audit each area, followed by a set of prompts to help you determine where the next quality evolution is likely to come from in your context. While the field may have been simpler when I began, the constant transformation is what has kept it exciting. Fortunately, today it’s easier than ever to learn a new skill that can bring more value to your users and your business. As a final thought, I hope to leave you with the realization that the most important software quality strategy is the willingness to adapt, evolve, and stay curious in an ever-changing landscape.

Tina Fletcher

The Race to the Top… When Do You Stop?

In many QA organizations, career growth is often equated with becoming a people manager. But what happens when your strengths — and passions — lie elsewhere? In this session, we'll share a tale of two testers, metaphorically climbing QA Everest, using our experiences and lessons learned to help you navigate the decision: "do I want to manage, or do I want to keep testing?" We'll each share some of our personal journey from QA individual contributor to manager and back again. While you can value the opportunity to lead, you may ultimately realize that the parts of the role that energize you most are mentoring testers, improving quality practices, staying hands-on with testing, and supporting projects through strong organization and communication — not the people-management aspects like performance reviews, HR responsibilities, or navigating formal management structures. This session challenges the idea that you have to keep climbing the mountain toward higher base camps, including management. Instead, it reframes the individual contributor role as a powerful, intentional career choice that brings deep value to teams. We'll discuss the decision points and key characteristics to help you determine where you want to stop on your journey up QA Everest. Then, we'll outline ways QA individuals can advocate for themselves, define success beyond titles, and continue to grow their influence without managing people. Whether you’re a tester questioning your next career move, a people manager considering a change, or a leader looking to better support IC career paths, this talk will offer validation, clarity, and practical takeaways.

Alexa Beach

Emma Clouse

The RRQI Method: Turning "Go/No-Go" Decisions into Data-Driven Science

Release decisions are rarely as objective as we’d like them to be. We sit in “Go/No-Go” meetings asking if we’re ready, but the answers tend to lean on gut feelings, pass rate snapshots, or wishful thinking rather than actual readiness. That was the problem we kept running into: too much activity, not enough clarity. So we built the Release Readiness Quality Index (RRQI): a practical framework that turns progress into a score, not a guess. By combining three weighted factors—Coverage Confidence (how well we tested the right things), Execution Confidence (how stable those tests were), and Defect Risk (how serious the remaining bugs are)—we created a 0-100 scale to track readiness like a trend, not a binary call. In this session, I’ll break down how the RRQI works, share the formulas we used, and show how it helped us spot risk earlier, align teams faster, and make our release decisions easier to defend. You’ll leave with everything you need to build your own version—so your next Go/No-Go can actually feel like a real decision, not a leap of faith. Key takeaways:

1. Eliminate subjectivity: replace "gut feelings" with a composite score (0-100) based on coverage, execution, and risk.
2. Trend analysis: learn to track readiness as a progressive trend relative to your deadline rather than a static snapshot.
3. Risk quantification: apply weighted formulas to prioritize residual risk over simple bug counts.
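The shape of such a composite can be sketched in a few lines. The three inputs below match the factors the abstract names, but the 40/30/30 weights and the linear combination are assumptions for illustration; the session presents its own formulas.

```python
# Sketch of an RRQI-style readiness score (weights are illustrative only).
def rrqi(coverage_confidence: float, execution_confidence: float,
         defect_risk: float) -> float:
    """All inputs in [0, 1]; defect_risk counts against readiness.

    Returns a 0-100 score: coverage weighted heaviest, risk inverted
    so that lower residual risk raises the score.
    """
    score = (0.4 * coverage_confidence
             + 0.3 * execution_confidence
             + 0.3 * (1 - defect_risk))
    return round(100 * score, 1)

readiness = rrqi(0.9, 0.8, 0.2)  # strong coverage, stable runs, modest risk
```

Tracked across builds leading up to a deadline, the score becomes the trend line the talk describes, rather than a single go/no-go snapshot.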

Joel Montvelisky

Joel Montvelisky

Turbocharge Your Playwright: Capabilities You're Probably Not Using

Most teams use Playwright to open a browser and click through flows. But Playwright can do so much more — mock APIs, persist authentication, inject custom headers for distributed tracing, and more. In this talk, Kevin goes beyond the basics and explores the capabilities that separate a good Playwright suite from a great one. You'll leave with techniques you can add to your suite the same week. This is not a "getting started with Playwright" talk. It's for teams that are already running Playwright tests and want to unlock capabilities they didn't know they had. I'll cover things like persisting authentication state across tests, mocking API responses to isolate your UI layer, injecting traceparent headers for distributed tracing, and intercepting network traffic — all features built into the library that most teams never touch. The goal is to get the audience thinking like testers when they read Playwright's API, spotting features that solve real testing problems rather than just following tutorials.

Kevin Roe

Kevin Roe

Verify, Then Trust: Human Judgement in the Age of Generative AI

Generative AI is rapidly becoming part of how modern organizations write, evaluate, and make decisions — from drafting content and summarizing data to supporting technical workflows. As these systems become more fluent and convincing, the real challenge is no longer whether AI can produce outputs quickly, but whether humans maintain the skills of verification, discernment, and accountability alongside it. Verify, Then Trust offers a grounded, human-centered framework for navigating AI without panic, blind faith, or outsourcing our thinking. This session explores why trust must be earned through validation, especially in environments where accuracy, clarity, and responsibility matter. Rather than positioning AI as a threat or a replacement, this talk reframes it as a powerful tool that still requires human oversight. Attendees will learn practical habits for staying mentally present, asking better questions, and building norms that keep human judgment meaningfully in the loop. This session is designed for technology professionals, leaders, and teams adapting to AI-enabled work who want to move forward with confidence — without losing critical thinking or responsibility in the process. Key themes include: * Speed vs. correctness in AI-generated outputs * Why verification is becoming a core professional skill * The risk of automation complacency * Practical ways to keep humans accountable and engaged

Ashley René Casey

Ashley René Casey

What is Your Working Genius?

The Working Genius model is a productivity model developed by Patrick Lencioni with a simple goal: bringing more joy and fulfillment to work! When you and your team understand where your geniuses lie and how (and when not) to use them, you can improve meetings, reduce burnout, and dramatically reduce turbulence in getting projects done. In this session we will review the six types of Working Genius and how they carry projects from ideation to implementation. We will discover a hidden cause of burnout and learn how to keep meetings, including our agile ceremonies, more focused and more productive, all with the goal of improving your life and team culture, both in and outside of work. (That's right… ALL projects!)

Kyle Jenkins

Kyle Jenkins

When Regression Testing Holds You Hostage

Is your release cadence for long-lived software bogging down? It’s rarely the result of bad decisions—it’s the cost of success. As systems evolve, they accumulate features, behaviors, and user expectations that make change increasingly complex. In these environments, regressions are not anomalies but a structural reality. Regression testing emerges as the system’s immune response, preserving trust as complexity grows. Over time, however, ever-expanding regression suites slow feedback cycles and unintentionally anchor release velocity. This talk explores test impact analysis (TIA) as a governance model for scaled QA automation. By correlating tests to actual code coverage and using change data to determine impact, TIA introduces precision and policy into regression execution. Instead of relying on blanket “run everything” strategies, teams can make informed, repeatable decisions about what to test, when, and why. Attendees will see how this approach transforms regression testing from a blunt instrument into a sustainable, scalable quality strategy for complex systems. You'll learn how to: * Recognize why complexity, regressions, and expanding regression suites are inevitable outcomes of successful, long-lived software—and why they demand intentional governance. * Understand regression gravity as a systemic force that constrains release frequency, feedback loops, and automation scalability. * Apply test impact analysis principles as a governance mechanism to control regression growth, optimize execution, and maintain confidence at scale.
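The core selection step of test impact analysis can be sketched in a few lines. This is a simplified illustration, not the speakers' implementation: it assumes your coverage tooling can emit a test-to-files mapping and your VCS can list changed files, and all names below are hypothetical.

```python
def select_impacted_tests(coverage_map, changed_files):
    """Return tests whose covered files intersect the changed files.

    coverage_map: {test_name: iterable of source files that test exercises}
    changed_files: files touched by the change under review
    """
    changed = set(changed_files)
    return sorted(
        test for test, covered in coverage_map.items()
        if changed & set(covered)  # any overlap means the test is impacted
    )

# Illustrative mapping, as a coverage tool might report it:
coverage_map = {
    "test_checkout": ["cart.py", "payment.py"],
    "test_login":    ["auth.py"],
    "test_search":   ["search.py", "index.py"],
}

print(select_impacted_tests(coverage_map, ["payment.py"]))
```

Only the checkout test is selected for a payment-only change; the rest of the suite is deferred by policy rather than run by default, which is the shift from "run everything" to governed regression execution that the talk describes.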

David Vano

David Vano

Wilhelm Haaker

Wilhelm Haaker