I've worked with engineering teams that run two-week sprints and still have QA review happening in week three. The sprint ceremony looks Agile. The calendar says Agile. But the actual flow of work is sequential: dev finishes, hands off to QA, QA finds things, dev fixes, QA re-tests. That's a waterfall loop inside an Agile container.

It's one of the most common QA problems I see at startups and mid-size product companies. And the reason it persists isn't laziness or bad intentions. It's that nobody has drawn out exactly where QA fits inside a sprint, day by day. So teams default to "QA tests at the end" because that's the only model they've seen work.

This article is the playbook I use when we embed goGreenlit into a client's sprint. It covers where testing actually slots in, what QA is doing during development (not after), and how to run regression without burning the last two days of every sprint doing it manually.

73% of Agile teams report QA as the most common sprint bottleneck
45% reduction in escaped defects when QA is embedded from sprint day one
2x faster defect resolution when bugs are caught within the same sprint

Why "QA at the end of the sprint" keeps breaking

When QA sits at the tail end of a sprint, a few things predictably go wrong. First, there's no buffer. If dev runs a day long (which happens in every sprint), QA gets compressed. A three-day testing window becomes one and a half days. Something gets cut. That something is usually the edge cases and regression coverage that actually catch production bugs.

Second, context is lost. Developers have moved on mentally. They're in sprint planning for the next cycle. When a bug comes back, they have to context-switch, re-read the ticket, and remember what they built two weeks ago. That adds friction and slows fixes.

Third, and most importantly, QA has no influence over quality during the sprint. By the time a tester touches the feature, all the architectural decisions, edge case assumptions, and data flow choices are already baked in. Catching a fundamental logic problem in testing is expensive. Catching it in the ticket review on day one is free.

The real shift in Agile QA: Move QA from a verification activity at the end of the sprint to a prevention activity throughout the sprint. The tools are different, the timing is different, and the conversations are different. But it's not more work. It's earlier work, which is cheaper work.

What Agile QA actually looks like, day by day

Here's the sprint structure I run with clients. It's based on a two-week sprint, but the pattern scales to one-week and three-week cycles with minor adjustments.

| Sprint phase | Dev activity | QA activity | Output |
| --- | --- | --- | --- |
| Sprint planning (Day 1) | Story breakdown, task estimation | Review acceptance criteria, flag missing test conditions, size testing effort | QA-annotated stories, risk flags added to tickets |
| Early dev (Days 2-4) | Feature development begins, unit tests written | Write test plans for in-progress stories, set up test data, prepare automation scripts for happy-path flows | Test plans linked to tickets, test data ready in staging |
| Mid-sprint (Days 5-7) | Features reaching completion, PR review | Begin functional testing on completed stories, log defects same-day, validate against acceptance criteria | Bugs filed with reproduction steps, same-sprint fix cycle begins |
| Late sprint (Days 8-9) | Bug fixes, code freeze | Regression run on impacted areas, automation suite execution, retest of fixed defects | Regression results, go/no-go signal for release |
| Sprint close (Day 10) | Sprint review, retrospective | Document test coverage, flag automation gaps, carry test debt into next sprint planning | Test coverage report, automation backlog updated |

The thing to notice is that testing in this model starts on day one, not day eight. QA is reading tickets during sprint planning not to sit in on meetings but to do work: reviewing acceptance criteria, identifying missing edge cases, flagging stories that don't have enough definition to test reliably.

The three conversations QA needs to have earlier

Most of the value in Agile QA comes from shifting three specific conversations earlier in the sprint cycle. These aren't formal ceremonies. They're short, informal exchanges that happen at the right moment instead of the wrong one.

  1. Acceptance criteria review before the sprint starts

    QA should be in backlog refinement, not as an observer but as a participant asking testability questions. "How do we know this is done?" and "What happens when the user does X instead of Y?" are testing questions. Asking them before the sprint starts costs nothing. Discovering the answer in testing costs a defect, a re-test, and a context switch.

  2. Test condition alignment during dev

    When a developer is midway through building a feature, a 10-minute conversation with QA about how the feature will be tested often surfaces assumptions the developer didn't know they were making. Not architectural changes, just clarifications. "The test is going to call this endpoint with an empty array, how should that behave?" That question, answered during development, prevents a defect from ever being written.

  3. Defect triage the same day bugs are logged

    Same-day defect triage is one of the highest-leverage habits in Agile QA. When a bug is logged at 10am and triaged at 10:30am, the developer still has the context to fix it quickly. When it sits in a backlog until the next standup, it gets prioritized against other work, sometimes deprioritized entirely, and the fix happens under sprint-close pressure. Same-sprint bugs are cheap. Next-sprint bugs are expensive. Escaped bugs are very expensive.
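The same-day habit is easy to make mechanical. Here's a minimal sketch in Python that flags untriaged bugs older than a team's SLA; the ticket fields and the four-hour window are hypothetical, not a real tracker's API:

```python
from datetime import datetime, timedelta

# Assumed team policy: every logged bug gets a triage decision within 4 hours.
TRIAGE_SLA = timedelta(hours=4)

def needs_triage(bug: dict, now: datetime) -> bool:
    """True if the bug is still untriaged and was logged longer ago than the SLA."""
    untriaged = bug.get("triaged_at") is None
    overdue = now - bug["logged_at"] > TRIAGE_SLA
    return untriaged and overdue

def triage_report(bugs: list[dict], now: datetime) -> list[str]:
    """IDs of bugs that should be raised in the team channel right now."""
    return [b["id"] for b in bugs if needs_triage(b, now)]
```

Run on a schedule (or wired to a Slack reminder), a check like this keeps the "logged at 10am, triaged at 10:30am" loop from quietly decaying into "triaged at next standup."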

Handling regression without burning the sprint

Regression is where Agile QA teams most often break down. The logic seems simple: before releasing, run regression. But in a two-week sprint with a one-person QA function, a full manual regression run can take two or three days. That's 20 to 30 percent of your sprint dedicated to re-testing things that probably haven't changed.

The solution isn't to skip regression. It's to be honest about what "regression" actually means in each sprint.

In practice, we use a three-tier regression model that keeps coverage high without burning the sprint clock:

Tier 1, automated smoke: a small suite covering the highest-traffic user journeys, run in CI on every merge. It catches gross breakage in minutes.

Tier 2, impact-based regression: manual or targeted automated testing of the areas this sprint's changes actually touched, run before every release.

Tier 3, full regression: the complete pass across the product, reserved for major releases or a scheduled cadence rather than every sprint.

Teams that try to run full regression every sprint eventually skip it entirely because it's not sustainable. Tiered regression is sustainable, and it keeps a 95% release coverage goal within reach on a realistic timeline.
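One way to make tiered regression concrete: keep a simple mapping from code areas to the regression suites that cover them, and each sprint run only the suites touched by that sprint's changes, plus an always-on smoke tier. A minimal sketch, with an invented module-to-suite map for illustration:

```python
# Hypothetical mapping from code areas to the regression suites that cover them.
IMPACT_MAP = {
    "checkout": {"payments_regression", "cart_regression"},
    "auth": {"login_regression", "session_regression"},
    "search": {"search_regression"},
}

# The smoke tier runs regardless of what changed.
ALWAYS_RUN = {"smoke"}

def suites_for_changes(changed_areas: set[str]) -> set[str]:
    """Select regression suites based on which areas this sprint touched."""
    selected = set(ALWAYS_RUN)
    for area in changed_areas:
        # Unknown areas add nothing extra here; a cautious team might
        # instead fall back to full regression for unmapped changes.
        selected |= IMPACT_MAP.get(area, set())
    return selected
```

The map itself is the valuable artifact: maintaining it forces the team to state, explicitly, which tests protect which parts of the product.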

A note on test automation in Agile: Automation doesn't replace sprint testing, it protects it. The value of a Playwright smoke suite isn't catching new bugs. It's giving you confidence that the existing functionality hasn't broken so your manual testing time can focus on the new stuff. That shift in how you use automation changes what you build and when you build it.

What "QA embedded in the sprint" looks like from the outside

When a team gets this right, the sprint dynamic shifts in a way that's immediately visible. Bugs get logged earlier. Defect density in the last three days of the sprint drops noticeably. Sprint reviews have fewer "still in QA" stories because testing finished before the sprint ended, not after.

The other thing that changes is the conversation in sprint retrospectives. Instead of "QA needs more time" as a recurring theme, the team starts talking about test coverage, automation debt, and which acceptance criteria patterns tend to create ambiguity. Those are better conversations. They improve quality at the source instead of compensating for it at the end.

We've run this model across teams of five and teams of forty. The ceremonies look different, the tooling looks different, and the sprint length varies. But the underlying principle stays the same: quality is embedded into the build process, not appended to it.

How outsourced QA fits into an Agile sprint

One question I get often is whether an external QA partner can actually integrate into a sprint cycle or whether they'll always be one step removed from the team. The honest answer is that it depends entirely on how the engagement is structured.

At goGreenlit, we build our engagements around sprint participation, not deliverables. We're in your sprint planning. We're reviewing tickets during refinement. We're in your Slack channels logging same-day bugs. We're not sending you a test report at the end of the month and calling it QA coverage.

The difference between outsourced QA that works and outsourced QA that feels like a drag on velocity is almost entirely a function of whether the QA team is embedded in the sprint or orbiting it. When you're embedded, the timing problem that creates most Agile QA debt disappears. You're not doing QA at the end of the sprint because you were never waiting to start.

Want QA that runs inside your sprint, not after it?

We embed directly into client sprint cycles. No handoff delays, no end-of-sprint crunch. Let's talk about your current setup.

Book a 30-minute call

Where to start if your team is still doing QA at the end

If your team currently runs QA after development completes, shifting to embedded QA doesn't require a full process overhaul. It requires three changes done in order:

First: Get QA into sprint planning and backlog refinement. Not to review every ticket in depth, just to flag the ones with unclear acceptance criteria or testing ambiguity. This alone shifts a meaningful number of defects earlier.

Second: Start logging bugs during the sprint, not after it. Even if dev review still happens at the end, logging bugs as they're found (rather than batching them into a report) creates a different urgency and shortens the fix cycle.

Third: Build a small automated smoke suite. Ten to fifteen tests covering your highest-traffic user journeys. Run it in CI on every merge. This gives you the safety net to do impact-based regression instead of full regression, which is where most of your sprint testing time gets reclaimed.
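A smoke suite doesn't need heavy tooling to start. Here's a sketch of the pattern in Python; the journey checks are stubbed for illustration, where in practice each would drive a browser through a tool like Playwright or hit an API directly:

```python
from typing import Callable

# Each smoke check exercises one high-traffic journey and raises on failure.
# These stubs stand in for real browser or API assertions.
def check_login() -> None:
    pass  # e.g. sign in with a test account, assert the dashboard loads

def check_search() -> None:
    pass  # e.g. run a known query, assert results render

def check_checkout() -> None:
    pass  # e.g. add an item to the cart, assert the payment page loads

SMOKE_SUITE: list[Callable[[], None]] = [check_login, check_search, check_checkout]

def run_smoke(suite: list[Callable[[], None]]) -> dict[str, str]:
    """Run every check and report pass/fail so CI can gate the merge."""
    results = {}
    for check in suite:
        try:
            check()
            results[check.__name__] = "pass"
        except Exception as exc:  # any failure marks the journey red
            results[check.__name__] = f"fail: {exc}"
    return results
```

The design point is the shape, not the code: a short, named list of journeys, run on every merge, with a binary result per journey that CI can act on.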

Those three changes alone will improve your sprint QA posture significantly. Everything else (tiered regression, test plan templates, defect triage cadences) layers on top of that foundation.

Muhammad Ali
QA Manager, goGreenlit

Muhammad is co-founder and QA Manager at goGreenlit, a Chicago-based QA consultancy. With 9+ years in software quality, he has embedded QA into sprint cycles across startups, SaaS platforms, and mid-size engineering teams. He and Mohammad Khan previously worked together at KeHE Distributors before founding goGreenlit.