Joel Montvelisky
PractiTest Co-Founder and CPO
About
Joel Montvelisky is PractiTest’s Co-Founder and Chief Product Officer (CPO). Joel has been part of the testing and QA world since 1997, working as a tester, QA Manager and Director, and Consultant for companies across the globe. During this time he has guided organizations through the shift from legacy testing to modern-day approaches. Joel is a Forbes Council member (https://www.forbes.com/sites/forbestechcouncil/2020/12/16/the-new-role-of-the-software-quality-architect/?sh=3ae54ed94500), a blogger, and a lecturer. Joel is the founder and Chair of the OnlineTestConf (http://www.onlinetestconf.com/) and the co-founder of the State of Testing (http://qablog.practitest.com/state-of-testing/) survey and report. These “for the community” initiatives reflect his belief in sharing knowledge and making it available to as many people as possible. Joel is a seasoned speaker at conferences worldwide, among them the STAR Conferences, STPCon, JaSST, TestLeadership Conf, CAST, QA&Test, and more.
The RRQI Method: Turning "Go/No-Go" Decisions into Data-Driven Science
Description
Release decisions are rarely as objective as we’d like them to be. We sit in “Go/No-Go” meetings asking if we’re ready, but the answers tend to lean on gut feelings, pass rate snapshots, or wishful thinking rather than actual readiness. That was the problem we kept running into: too much activity, not enough clarity. So we built the Release Readiness Quality Index (RRQI): a practical framework that turns progress into a score, not a guess. By combining three weighted factors—Coverage Confidence (how well we tested the right things), Execution Confidence (how stable those tests were), and Defect Risk (how serious the remaining bugs are)—we created a 0-100 scale to track readiness like a trend, not a binary call.
In this session, I’ll break down how the RRQI works, share the formulas we used, and show how it helped us spot risk earlier, align teams faster, and make our release decisions easier to defend. You’ll leave with everything you need to build your own version—so your next Go/No-Go can actually feel like a real decision, not a leap of faith.
Key Takeaways:
- Eliminate Subjectivity: Replace "gut feelings" with a composite score (0-100) based on coverage, execution, and risk.
- Trend Analysis: Learn to track readiness as a progressive trend relative to your deadline rather than a static snapshot.
- Risk Quantification: Apply weighted formulas to prioritize residual risk over simple bug counts.
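The weighted composite described above can be sketched in a few lines. The talk does not publish its exact formulas, so the weights and the inversion of defect risk here are illustrative assumptions, not the speaker's actual method:

```python
def rrqi(coverage_confidence: float,
         execution_confidence: float,
         defect_risk: float,
         weights: tuple = (0.4, 0.35, 0.25)) -> float:
    """Combine three 0-100 factors into a 0-100 readiness index.

    coverage_confidence  -- how well the right things were tested (0-100)
    execution_confidence -- how stable those test runs were (0-100)
    defect_risk          -- severity of remaining bugs (0-100, higher = riskier)
    weights              -- hypothetical weights; must sum to 1.0
    """
    w_cov, w_exec, w_risk = weights
    # Defect risk lowers readiness, so invert it before weighting.
    score = (w_cov * coverage_confidence
             + w_exec * execution_confidence
             + w_risk * (100 - defect_risk))
    return round(score, 1)

# Recompute at each milestone and plot the values over time, so
# readiness is read as a trend toward the deadline, not a snapshot.
print(rrqi(coverage_confidence=85, execution_confidence=90, defect_risk=30))
# → 83.0
```

Tracking this single number across builds is what turns the Go/No-Go call into a trend-based decision rather than a point-in-time judgment.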