<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Quality Reimagined: QE Leader Toolkit]]></title><description><![CDATA[QE Leader Toolkit is a curated set of practical, ready-to-use resources for Quality Engineering leaders modernizing in an AI-shifting world. Built for busy VPs, Directors, and senior managers who need clarity, not theory.]]></description><link>https://www.qualityreimagined.com/s/qe-leader-toolkit</link><image><url>https://www.qualityreimagined.com/img/substack.png</url><title>Quality Reimagined: QE Leader Toolkit</title><link>https://www.qualityreimagined.com/s/qe-leader-toolkit</link></image><generator>Substack</generator><lastBuildDate>Sat, 11 Apr 2026 11:34:28 GMT</lastBuildDate><atom:link href="https://www.qualityreimagined.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Richie Yu]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[qualityreimagined@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[qualityreimagined@substack.com]]></itunes:email><itunes:name><![CDATA[Richie Yu]]></itunes:name></itunes:owner><itunes:author><![CDATA[Richie Yu]]></itunes:author><googleplay:owner><![CDATA[qualityreimagined@substack.com]]></googleplay:owner><googleplay:email><![CDATA[qualityreimagined@substack.com]]></googleplay:email><googleplay:author><![CDATA[Richie Yu]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[You're About to Invest in AI for Testing. Do This First.]]></title><description><![CDATA[Most QE teams are automating the wrong 20%. Here's how to find the other 80%]]></description><link>https://www.qualityreimagined.com/p/youre-about-to-invest-in-ai-for-testing</link><guid isPermaLink="false">https://www.qualityreimagined.com/p/youre-about-to-invest-in-ai-for-testing</guid><dc:creator><![CDATA[Richie Yu]]></dc:creator><pubDate>Sun, 15 Feb 2026 14:50:11 GMT</pubDate><content:encoded><![CDATA[<p>Every QE leader I talk to right now is under pressure to &#8220;adopt AI.&#8221; The mandate comes from the CTO, from the board, from the analyst reports piling up in their inbox. And most of them are about to make the same mistake.</p><p>They&#8217;re going to pick a tool. They&#8217;re going to run a pilot on their UI regression suite. They&#8217;re going to report early wins. And eighteen months from now, they&#8217;re going to wonder why their testing operation still feels the same.</p><p>I&#8217;ve seen this pattern play out enough times to know why it happens.</p><p><strong>The problem isn&#8217;t the tool. It&#8217;s the target.</strong></p><p>When most organizations say &#8220;we&#8217;re adopting AI in QE,&#8221; what they mean is: we&#8217;re going to use AI to speed up test execution. Maybe auto-generate some Selenium scripts. Maybe add a copilot for writing test cases.</p><p>That&#8217;s not wrong. But it&#8217;s optimizing a stage that typically represents 15-20% of total QE effort.</p><p>The other 80% &#8212; coverage design, data preparation, environment setup, results analysis, defect triage, reporting, regression management &#8212; stays untouched. Manual. Expensive. Invisible.</p><p>A team can report 70% automation and still spend the majority of its budget on manual work. 
Do the math: if execution is 20% of the effort, automating 70% of it touches just 14% of the work. The metric everyone watches measures the wrong thing.</p><p><strong>The real question isn&#8217;t &#8220;which AI tool should we buy?&#8221;</strong></p><p>It&#8217;s: where in our testing operation would AI actually move the needle?</p><p>And you can&#8217;t answer that if you don&#8217;t know where the effort actually goes.</p><p>I&#8217;ve worked with QE organizations that were convinced their bottleneck was test execution speed. When we actually mapped the operation &#8212; every lifecycle stage, every test type, every release type &#8212; we found the real constraint was somewhere else entirely. Data preparation eating two days per release. Environment contention blocking three teams simultaneously. Results analysis where a senior engineer spent half their week manually triaging false failures.</p><p>These aren&#8217;t glamorous problems. They don&#8217;t make for exciting vendor demos. But they&#8217;re where the money is.</p><p><strong>Why discovery has to come first</strong></p><p>Here&#8217;s what I mean by discovery: before you evaluate a single AI tool, map your current testing operation from end to end. Not the process diagram on the wiki &#8212; what actually happens.</p><p>For every test type your organization performs, trace it through all ten stages of the testing lifecycle:</p><ol><li><p>Coverage design &#8212; how do you decide what to test?</p></li><li><p>Test case creation &#8212; who writes them, how long does it take?</p></li><li><p>Script development &#8212; what&#8217;s automated, what&#8217;s maintained by hand?</p></li><li><p>Data preparation &#8212; where does test data come from?</p></li><li><p>Environment setup &#8212; how long do you wait?</p></li><li><p>Execution &#8212; this is the stage everyone focuses on</p></li><li><p>Results analysis &#8212; how long does triage take?</p></li><li><p>Defect management &#8212; what&#8217;s the false positive rate?</p></li><li><p>Reporting &#8212; can you answer &#8220;are we safe to ship?&#8221;</p></li><li><p>Regression management &#8212; is the suite just growing, or being groomed?</p></li></ol><p>For each stage, capture who does the work, how they do it, how long it takes, and what it costs. Then compare that to what&#8217;s actually possible today with modern tooling and agentic AI. (A rough sketch of this fact base appears below.)</p><p>When you do this honestly, patterns emerge. You find lifecycle stages where the gap between current state and the art of the possible is enormous. You find stages where a single intervention would cascade through the entire operation. You find that the thing you were about to automate wasn&#8217;t actually the bottleneck.</p><p>That&#8217;s the fact base. Without it, every AI investment is a guess.</p><p><strong>The two-phase approach</strong></p><p>I frame this as a two-phase journey:</p><p>Phase 1: Discover. Map the operation. Build the fact base. Identify where the gaps are largest and where AI would deliver the most impact. Sequence priorities by dependency and ROI.</p><p>Phase 2: Transform. Match findings to solutions. Run proofs of concept against your actual environment. Train the team. Deploy. Optimize.</p><p>Most organizations skip Phase 1 and jump straight to Phase 2. They pick a tool because a vendor gave a compelling demo, pilot it on the most visible test type, and declare success based on a narrow metric. Meanwhile, the operating model stays the same.</p><p>Phase 1 takes 2-3 hours per application. Phase 2, done right, takes months. But Phase 1 is what makes Phase 2 successful.</p>
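<p>To make Phase 1 concrete, here&#8217;s a minimal sketch of the fact base in Python. The ten stages come straight from the list above; the hours are hypothetical placeholders, not benchmarks. Substitute your own measurements.</p><pre><code># Minimal Phase 1 fact base for one application.
# Hours per release are hypothetical placeholders; replace with real data.
effort_hours = {
    "coverage design": 6,
    "test case creation": 10,
    "script development": 12,
    "data preparation": 16,
    "environment setup": 9,
    "execution": 8,
    "results analysis": 14,
    "defect management": 7,
    "reporting": 5,
    "regression management": 4,
}

total = sum(effort_hours.values())

# Rank stages by effort. If execution is not near the top, speeding up
# execution will not move the needle.
for stage, hours in sorted(effort_hours.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{stage:22} {hours:3}h  {hours / total:6.1%}")
</code></pre><p>Even with made-up numbers, the shape is the point: in this sketch, execution sits in the bottom half of the list, and the stages above it are where the investment case lives.</p>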
<p><strong>I built a framework for Phase 1</strong></p><p>I&#8217;ve put together a structured discovery document that walks you through this process. It covers all five dimensions of a testing operating model &#8212; crown jewels, release types, test phases, test types, and the full ten-stage lifecycle &#8212; with templates for both deep-dive and lightweight assessment.</p><p>It includes:</p><ul><li><p>A routing section so you only complete what&#8217;s relevant</p></li><li><p>Full and lightweight lifecycle assessment templates for each test type</p></li><li><p>An &#8220;art of possible&#8221; comparison for every lifecycle stage showing what AI-enabled testing looks like today</p></li><li><p>A priority matrix to sequence where to invest first</p></li><li><p>A results summary template you can take to your CTO</p></li></ul><p>This isn&#8217;t a maturity model. There&#8217;s no score at the end. It&#8217;s a fact base &#8212; the kind of clarity you need before you spend a dollar on AI tooling.</p><p><strong>Get the discovery framework</strong></p><p>I&#8217;m releasing this as a free PDF &#8212; it&#8217;s v0.9, a beta. I want feedback from practitioners who actually run QE organizations.</p><p><strong>Subscribe to this newsletter and I&#8217;ll send you the framework directly.</strong> It&#8217;s free &#8212; just drop your email.</p><p>If you complete it and want a second opinion on your findings, or if you need help turning them into a funded roadmap, I&#8217;m happy to talk. You can book a discovery call at qualityreimagined.com.</p><p>The AI tools are getting better every month. The organizations that win won&#8217;t be the ones that adopted first &#8212; they&#8217;ll be the ones that knew where to point them.</p><div><hr></div><p><em>Richie Yu works with QE leaders navigating the shift to agentic AI. His focus is on the operating model &#8212; not just the tools, but how testing work actually flows and where modernization delivers measurable returns.</em></p>]]></content:encoded></item><item><title><![CDATA[QE Modernization Diagnostic (Scorecard)]]></title><description><![CDATA[A one-page scorecard for mid-market and enterprise QE leaders]]></description><link>https://www.qualityreimagined.com/p/qe-modernization-diagnostic-scorecard</link><guid isPermaLink="false">https://www.qualityreimagined.com/p/qe-modernization-diagnostic-scorecard</guid><dc:creator><![CDATA[Richie Yu]]></dc:creator><pubDate>Sat, 13 Dec 2025 13:56:38 GMT</pubDate><content:encoded><![CDATA[<p>If you are modernizing Quality Engineering and the landscape feels like it is shifting under your feet, this scorecard will help you get grounded.</p><p><strong>How to use this:</strong> Score each statement <strong>0&#8211;2</strong>. Add the totals by section. Your lowest sections are your priority constraints.<br><strong>0 = Not in place | 1 = Partially | 2 = Consistent at scale</strong></p><p>If you want a lightweight sanity check, reply to this post with your section totals and your top constraint. I will respond with a few priority moves to consider.</p><div><hr></div><h2>A) Operating model and ownership (0&#8211;10)</h2><ol><li><p>Decision rights are clear (standards, quality gates, exceptions). 0 1 2</p></li><li><p>The engagement model is explicit (embedded, shared services, hybrid). 0 1 2</p></li><li><p>QE responsibilities across Dev, QA, Product, and Ops are defined and practiced. 
0 1 2</p></li><li><p>There is a modernization backlog with an owner, funding, and cadence. 0 1 2</p></li><li><p>Standards are adopted through enablement, not policing. 0 1 2</p></li></ol><p><strong>Section total (0&#8211;10): ____</strong></p><div><hr></div><h2>B) Delivery integration and feedback loops (0&#8211;10)</h2><ol start="6"><li><p>Quality signals arrive early enough to change outcomes. 0 1 2</p></li><li><p>Environments, test data, and dependencies are managed as first-class constraints. 0 1 2</p></li><li><p>Triage is repeatable and assigns ownership quickly to reduce MTTR. 0 1 2</p></li><li><p>Teams can distinguish product defects vs test defects vs environment defects. 0 1 2</p></li><li><p>Release readiness is evidence-based and repeatable, not meeting-based. 0 1 2</p></li></ol><p><strong>Section total (0&#8211;10): ____</strong></p><div><hr></div><h2>C) Automation effectiveness and economics (0&#8211;10)</h2><ol start="11"><li><p>Automation is prioritized by risk and critical user journeys, not test counts. 0 1 2</p></li><li><p>Flaky tests are tracked, owned, and resolved with urgency. 0 1 2</p></li><li><p>Maintenance cost is visible (time spent fixing tests and false failures). 0 1 2</p></li><li><p>Low-value automation has a normal retirement path (kill, rebuild, replace). 0 1 2</p></li><li><p>Execution scales without linear effort (stability and orchestration are addressed). 0 1 2</p></li></ol><p><strong>Section total (0&#8211;10): ____</strong></p><div><hr></div><h2>D) Governance, controls, and auditability (0&#8211;10)</h2><ol start="16"><li><p>Quality gates are explicit and tuned by risk level, not one-size-fits-all. 0 1 2</p></li><li><p>Evidence for release decisions is captured consistently and easy to retrieve. 0 1 2</p></li><li><p>Exceptions are managed with a defined process, owner, and expiry. 0 1 2</p></li><li><p>Practices align to application criticality (crown jewels vs non-critical apps). 0 1 2</p></li><li><p>Reporting and controls match delivery reality, not an idealized process. 0 1 2</p></li></ol><p><strong>Section total (0&#8211;10): ____</strong></p><div><hr></div><h2>E) Quality signals leaders trust (0&#8211;10)</h2><ol start="21"><li><p>Leadership reporting focuses on outcomes (risk, stability, confidence), not activity. 0 1 2</p></li><li><p>Trends are visible (leakage, reliability, change failure, incident risk). 0 1 2</p></li><li><p>Teams can explain what changed and why confidence changed since the last release. 0 1 2</p></li><li><p>Signals connect to customer impact (critical journeys, severity, incident history). 0 1 2</p></li><li><p>Decisions move faster because leaders trust the signals and the process behind them. 0 1 2</p></li></ol><p><strong>Section total (0&#8211;10): ____</strong></p><div><hr></div><h1>Scoring and what to do next</h1><h2>Total score (0&#8211;50): ____</h2><p><strong>0&#8211;15: Foundations missing</strong><br>Start with Operating Model + Feedback Loops. Without these, everything else becomes chaos.</p><p><strong>16&#8211;30: Capability exists but does not scale</strong><br>Focus on Governance + Automation Economics. This is where most orgs get stuck.</p><p><strong>31&#8211;40: Mature pockets, inconsistent execution</strong><br>Standardize the operating model, build reusable patterns, and fix signal credibility.</p><p><strong>41&#8211;50: Strong baseline</strong><br>Shift from building to optimizing. Tighten metrics, reduce drag, and adopt AI with control.</p>
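<p>If you want to track the score over time, the arithmetic is simple enough to script. Here&#8217;s a minimal sketch in Python; the individual scores are hypothetical placeholders, while the section names and bands match the scorecard above.</p><pre><code># Minimal sketch of the scorecard arithmetic.
# Scores below are hypothetical placeholders; enter your own 0/1/2 answers.
sections = {
    "A) Operating model and ownership": [2, 1, 1, 0, 1],
    "B) Delivery integration and feedback loops": [1, 0, 1, 2, 1],
    "C) Automation effectiveness and economics": [0, 1, 0, 1, 1],
    "D) Governance, controls, and auditability": [1, 1, 0, 1, 2],
    "E) Quality signals leaders trust": [1, 0, 1, 1, 0],
}

totals = {name: sum(scores) for name, scores in sections.items()}
grand_total = sum(totals.values())

# Lowest sections first: these are your priority constraints.
for name in sorted(totals, key=totals.get):
    print(f"{totals[name]:2}/10  {name}")

bands = [(15, "Foundations missing"),
         (30, "Capability exists but does not scale"),
         (40, "Mature pockets, inconsistent execution"),
         (50, "Strong baseline")]
print(f"Total {grand_total}/50:", next(label for cap, label in bands if grand_total &lt;= cap))
</code></pre><p>The ordering matters more than the absolute number: your lowest two sections are the ones to carry into the action plan below.</p>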
<div><hr></div><h2>Turn this into an action plan (5 minutes)</h2><ol><li><p>Circle your <strong>lowest two sections</strong>.</p></li><li><p>Write the top <strong>two constraints</strong> you see behind those scores.</p></li><li><p>Pick one <strong>30-day move</strong> that reduces friction immediately.</p></li><li><p>Pick one <strong>90-day move</strong> that changes the operating model, not just symptoms.</p></li></ol><p>If you want help translating the score into a modernization plan, reply with your totals and context.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.qualityreimagined.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.qualityreimagined.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>