Federico Toledo

Co-founder and Chief Quality Officer at Abstracta

About

Driven by the challenge of making a positive impact through quality software, Federico Toledo has spent 18 years in the IT field. He is the co-founder and Chief Quality Officer of Abstracta (https://abstracta.us/), a global tech company dedicated to creating impactful software solutions with a focus on testing, innovation, and development. Federico holds a degree in Computer Engineering from the Universidad de la República in Uruguay and a Ph.D. in Computer Science from the Universidad de Castilla-La Mancha in Spain, and is a proud 2021 graduate of the Stanford + LBAN SLEI program. A renowned speaker and author, he also hosts the "Quality Sense" podcast and its namesake conference, reflecting his unwavering commitment to software excellence.

Making Migrations Safer and Cheaper with AI-powered Testing

Time
TBA
Room
TBA

Description

Migrating legacy systems, such as COBOL-based applications, to modern technologies is more feasible today thanks to development copilots that accelerate work and reduce costs (GitHub Copilot, Cursor, etc.). However, testing remains a significant hurdle, especially when documentation is missing or outdated (the most common scenario), making it hard to verify that the new system behaves the same as the old one. This talk presents a two-track approach designed to help QA teams tackle this challenge in migrations more sustainably: first, a static understanding of the system through code-derived diagrams, and second, a dynamic understanding via observability (the ability to ask the system in natural language what is happening in the backend), acting as a testing copilot.

A core contribution of the session is the integration of open-source tools we developed for AI agents that assist testers (https://github.com/abstracta/tero), illustrated with a set of practical demos. The static track uses code-derived diagrams (state machines, flow diagrams, and sequence diagrams) generated directly from the codebase to illuminate system behavior without relying on outdated documentation. The dynamic track introduces observability as a copilot for testers, enabling real-time visibility into backend behavior under production-like conditions and helping testers validate that changes preserve intended behavior even in the absence of perfect docs.
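To give a flavor of the static track, here is a minimal, hypothetical sketch of deriving a flow diagram directly from source code. It is not the tero implementation; the sample function, helper names, and Mermaid output format are illustrative assumptions only.

```python
import ast

# Illustrative legacy-style code we want to understand without documentation.
SOURCE = '''
def checkout(cart):
    validate(cart)
    total = price(cart)
    charge(total)
'''

def call_sequence(source, func_name):
    """Extract the ordered function calls made inside one function body."""
    tree = ast.parse(source)
    calls = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    calls.append(inner.func.id)
    return calls

def to_mermaid(func_name, calls):
    """Render the call sequence as a Mermaid flowchart definition."""
    lines = ["flowchart TD"]
    prev = func_name
    for i, callee in enumerate(calls):
        lines.append(f"    {prev} --> n{i}[{callee}]")
        prev = f"n{i}"
    return "\n".join(lines)

print(to_mermaid("checkout", call_sequence(SOURCE, "checkout")))
# → flowchart TD
#       checkout --> n0[validate]
#       n0 --> n1[price]
#       n1 --> n2[charge]
```

The point of sketches like this is that the diagram is regenerated from the code itself, so it cannot drift out of date the way handwritten documentation does.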

The talk emphasizes AI as a friendly ally for testers across roles, not just developers, and avoids overpromising on metrics. While formal metrics are still in progress, early signals suggest that this approach can make migration projects more feasible by optimizing the testing effort. The process includes a validation stage that combines expert reviews of the generated artifacts (diagrams and observability outputs) with cross-checks by peers to ensure coherence and usefulness.

Contribution to the audience
This talk offers a practical, replicable view of how to address legacy migrations through a QA lens powered by AI, without relying on complete or up-to-date documentation. It promotes an "AI as ally" narrative that fosters collaboration among testers, developers, and stakeholders, and it lays out a clear pathway to reducing costs and delivery times while safeguarding quality and behavioral verification.

Notes for reviewers
Target audience: QA managers, test leads, engineering managers, VPs and DevOps leaders involved in migrations.