I've set up QA processes at more Series A startups than I can count at this point. The engineering context is almost always the same: a 6 to 15 person team that's been moving fast, testing informally, and getting away with it until very recently. The product works. Users are paying. The codebase is accumulating complexity. And something just happened (a bug in production, a missed demo, a regression that took three days to track down) that made the CTO or VP of Engineering finally say: we need to get our QA act together.
What I want to save you from is the instinct to solve this by buying TestRail, writing 200 test cases in a spreadsheet, and calling it a QA process. That's what a lot of teams do. It doesn't work, not because documentation and tooling are bad, but because the foundation has to come first.
This is the 90-day QA setup I use when I'm brought in to build a QA function from scratch at a Series A startup. It's sequenced, it's practical, and it's designed to deliver a real reduction in escaped defects before the quarter is out.
The 90-day QA setup: phase by phase
Phase 1: Audit, understand, and identify the highest-risk gaps
Before writing a single test case, spend the first two weeks understanding the product and the current state of testing. This is where most external QA consultants waste time producing lengthy reports that nobody reads. Keep this phase narrow and output-focused.
What to do:
- Sit in on sprint planning, standup, and a retrospective. Observe the team's actual development and release rhythm before proposing changes to it.
- Review the last 3 months of production incidents and customer-reported bugs. This tells you where the real gaps are, not the theoretical ones.
- Map the critical path: what are the flows that, if broken, would cause immediate user or revenue damage? Usually 5 to 8 flows for a typical SaaS product.
- Review existing test cases, automation scripts, and any CI/CD configuration. Understand what's already there before duplicating effort.
Output: A one-page coverage gap analysis identifying the top 5 highest-risk untested or under-tested areas, and a proposed testing approach for the current sprint. Not a 40-page QA strategy document; that comes later, once the foundation is working.
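The audit output is ultimately just a ranked list, and it helps to make the ranking logic explicit. Here is a minimal TypeScript sketch of one way to score flows from the incident review and critical-path mapping; the flow names, numbers, and the ten-point revenue weighting are hypothetical illustrations, not a prescribed formula:

```typescript
// Rank product flows by production-incident history and existing coverage.
// Flow names and all numbers below are hypothetical examples.
interface Flow {
  name: string;
  incidentsLast90Days: number; // from the incident/bug review
  revenueCritical: boolean;    // from the critical-path mapping
  hasCoverage: boolean;        // any existing test cases or automation
}

function rankGaps(flows: Flow[]): Flow[] {
  return flows
    .filter((f) => !f.hasCoverage) // only untested or under-tested flows
    .sort(
      (a, b) =>
        b.incidentsLast90Days + (b.revenueCritical ? 10 : 0) -
        (a.incidentsLast90Days + (a.revenueCritical ? 10 : 0)),
    )
    .slice(0, 5); // the top-5 gap list for the one-pager
}

const gaps = rankGaps([
  { name: "checkout", incidentsLast90Days: 4, revenueCritical: true, hasCoverage: false },
  { name: "signup", incidentsLast90Days: 1, revenueCritical: true, hasCoverage: true },
  { name: "export-csv", incidentsLast90Days: 6, revenueCritical: false, hasCoverage: false },
]);
// checkout scores 14, export-csv scores 6; signup is filtered out (covered)
console.log(gaps.map((f) => f.name)); // ["checkout", "export-csv"]
```

The exact weights matter far less than writing the scoring down: it forces the incident review and the critical-path mapping to actually feed the gap list.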
Phase 2: Get embedded in the sprint and start testing the critical path
The fastest way to prove the value of a QA process is to start finding bugs. By the end of week two, you should be embedded in the active sprint: attending standups, getting assigned test coverage for features in progress, and running manual regression checks on the critical path identified in phase one.
What to build in this phase:
- A release readiness checklist: a short, specific list of what must pass before any release goes to production. Keep it to 10 to 15 items. This is the single most important QA process artifact for a startup: it establishes the concept of a quality bar and makes it concrete.
- A defect template in JIRA: title, steps to reproduce, expected vs actual behaviour, severity, environment. Takes 30 minutes to set up and dramatically improves the quality of bug reports the team works from.
- A test coverage log: a simple spreadsheet or Notion doc mapping the critical path flows to test cases. This is the seed of your test suite, not the finished product.
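The defect template is just a fixed shape for bug reports, and you can capture that shape in code if you ever build a filing helper. A minimal TypeScript sketch; the field names mirror the template above, while the severity labels and the formatter are hypothetical, not a JIRA API:

```typescript
// Shape of a well-formed bug report, mirroring the JIRA template fields.
// Severity labels are illustrative; use whatever scale your team defines.
type Severity = "S1-critical" | "S2-major" | "S3-minor" | "S4-trivial";

interface Defect {
  title: string;
  stepsToReproduce: string[];
  expected: string;
  actual: string;
  severity: Severity;
  environment: string; // e.g. "staging, Chrome 126, build 481"
}

// Render the template as a plain-text description body.
function formatDefect(d: Defect): string {
  return [
    `Severity: ${d.severity} | Env: ${d.environment}`,
    "Steps to reproduce:",
    ...d.stepsToReproduce.map((s, i) => `  ${i + 1}. ${s}`),
    `Expected: ${d.expected}`,
    `Actual: ${d.actual}`,
  ].join("\n");
}

const report = formatDefect({
  title: "Checkout total ignores discount code",
  stepsToReproduce: ["Add item to cart", "Apply code SAVE10", "Open checkout"],
  expected: "Total reflects 10% discount",
  actual: "Full price is charged",
  severity: "S2-major",
  environment: "staging, Chrome 126",
});
console.log(report);
```

Even if the template only ever lives as a JIRA issue layout, thinking of it as a typed record keeps the fields consistent across reporters.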
Output: The first sprint where a QA engineer signs off before release, and likely the first sprint where a would-be production bug is caught in staging instead.
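The release readiness checklist can also be treated as data rather than a document, which makes the sign-off decision mechanical. A TypeScript sketch of that idea; the item wording is illustrative, and the blocking/advisory split is one possible design, not the only one:

```typescript
// A release readiness checklist as data: each item either blocks the
// release or is advisory. Item wording below is illustrative only.
interface ChecklistItem {
  description: string;
  blocking: boolean; // must pass before release, or merely reported?
  passed: boolean;
}

function readyToRelease(checklist: ChecklistItem[]): boolean {
  // Every blocking item must pass; advisory items never gate the release.
  return checklist.every((item) => item.passed || !item.blocking);
}

const checklist: ChecklistItem[] = [
  { description: "Critical-path smoke run green on staging", blocking: true, passed: true },
  { description: "No open S1/S2 defects against this release", blocking: true, passed: true },
  { description: "Release notes drafted", blocking: false, passed: false },
];
console.log(readyToRelease(checklist)); // true: the only failure is non-blocking
```

The blocking flag is the useful part: it lets the checklist grow past 10 to 15 items later without every new item becoming a release gate.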
Phase 3: Build the Playwright automation foundation
Once manual sprint testing is running smoothly, start building the automation layer. The goal in this phase is not complete automation coverage; it's a stable Playwright framework targeting the critical path flows identified in week one.
Framework decisions to make early:
- Playwright with TypeScript is the default choice for most SaaS web products in 2026. It's faster, more stable, and has better debugging than Selenium for new projects.
- Set up the repo structure with clear conventions from day one. Test files organised by feature, shared fixtures for authentication and common setup, a config file per environment.
- Integrate with GitHub Actions immediately, even before you have many tests. Running an empty test suite in CI establishes the infrastructure before the suite grows.
- Use Playwright's trace viewer and screenshot-on-failure from the start. These make debugging CI failures 10x faster.
Output: A Playwright test suite covering the 5 to 8 critical path flows, running in CI on every pull request. Not complete coverage, but a stable, trusted foundation that runs green reliably.
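The trace and screenshot settings above map directly onto Playwright configuration. A sketch of a `playwright.config.ts` under these assumptions; the `testDir` path and the `baseURL` fallback are placeholders for your own values:

```typescript
import { defineConfig } from "@playwright/test";

export default defineConfig({
  testDir: "./tests",              // test files organised by feature under here
  retries: process.env.CI ? 1 : 0, // one retry in CI, none locally
  use: {
    baseURL: process.env.BASE_URL ?? "https://staging.example.com", // placeholder
    trace: "retain-on-failure",    // open failed runs in the trace viewer
    screenshot: "only-on-failure", // attach a screenshot to every CI failure
  },
});
```

Keeping trace and screenshot capture failure-only keeps CI artifacts small while still making every red run debuggable without a local reproduction.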
Phase 4: Expand coverage, establish metrics, and hand off the process
By day 60, you should have a repeatable sprint testing process, a release readiness sign-off, and a Playwright suite covering critical paths. Days 61 to 90 are about expanding coverage, establishing quality metrics, and making sure the process continues to work when the team grows or the QA lead changes.
What to build:
- Expand the Playwright suite to high-impact user flows beyond the critical path. Prioritise scenarios that are stable and would otherwise be retested manually every sprint.
- Set up a test results dashboard. At minimum: pass/fail rates per sprint, number of bugs caught in testing vs production, automation suite coverage percentage. This gives engineering leadership visibility into whether QA is improving over time.
- Write a QA onboarding document: how does testing work here? What are the release criteria? Where are the test cases? What tools do we use? This ensures the process survives team changes and doesn't live only in one person's head.
- Conduct a retrospective on the QA setup itself. What's working? What's creating friction? What did you miss in the audit phase that needs to be addressed? Build this into a quarterly QA process review.
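The dashboard's headline number, bugs caught in testing versus production, is usually tracked as an escaped defect rate. A small TypeScript sketch of the calculation; the sprint figures are hypothetical:

```typescript
// Escaped defect rate: the fraction of all defects found in a sprint that
// reached production instead of being caught in testing.
// Sprint numbers below are hypothetical.
interface SprintQuality {
  caughtInTesting: number;     // bugs found before release
  escapedToProduction: number; // bugs reported after release
}

function escapedDefectRate(s: SprintQuality): number {
  const total = s.caughtInTesting + s.escapedToProduction;
  return total === 0 ? 0 : s.escapedToProduction / total;
}

const sprint12: SprintQuality = { caughtInTesting: 18, escapedToProduction: 2 };
console.log(escapedDefectRate(sprint12)); // 0.1, i.e. 10% of defects escaped
```

Trend matters more than any single sprint's value: a rate that falls quarter over quarter is the clearest evidence the QA setup is working.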
Output: A measurable, documented QA process that runs independently of any single person. By this point, most teams have seen a significant reduction in production bugs and a noticeably calmer release cadence.
The most common mistake at this stage: Trying to automate everything before the manual process is working. Automation amplifies whatever process you have. If the process is broken, automation makes the brokenness faster and harder to fix. Get the manual sprint testing working first. Then automate the scenarios that earn it.
The tooling decisions that actually matter
Most Series A teams get caught in tooling analysis paralysis. Here's the short version of what actually matters:
Test management: start simple
TestRail is fine, Zephyr is fine, a well-structured Notion doc is fine for the first 90 days. Don't invest significant time in test management tooling until you have enough test cases to actually manage. A team with 50 test cases doesn't need TestRail. A team with 500 test cases does.
Automation framework: Playwright by default
For new automation projects in 2026, Playwright is the default choice unless you have a specific reason for Selenium (legacy Java stack, existing Selenium infrastructure). Playwright's speed, stability, and built-in debugging make it faster to build and maintain for most web application testing. If you're migrating from Selenium, do it gradually: swap in Playwright for new test development rather than converting the whole suite at once.
CI/CD integration: GitHub Actions for most teams
GitHub Actions is the lowest-friction option for most startups already on GitHub. Set up a workflow that runs your Playwright suite on every pull request to main, with a separate workflow for nightly full regression runs. Don't block merges on test failures until the suite is stable enough to trust; a flaky suite that blocks PRs creates more friction than it prevents bugs.
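A sketch of what that workflow can look like, shown here as a single file with a schedule trigger for brevity (the nightly run can live in its own workflow file as described above); the cron schedule and Node version are placeholders:

```yaml
# .github/workflows/playwright.yml (sketch; schedule and versions are placeholders)
name: playwright
on:
  pull_request:
    branches: [main]
  schedule:
    - cron: "0 3 * * *" # nightly full regression run
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx playwright test
```

While the suite is still stabilising, leave the check non-required in branch protection so a flaky run reports red without blocking the merge.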
Bug tracking: use what the development team already uses
If the dev team is in JIRA, use JIRA. If they're in Linear, use Linear. The best bug tracker is the one developers actually look at. Don't introduce a new tool for QA if it creates a separate workflow the dev team has to context-switch into.
The mistakes that set Series A QA setups back six months
A few patterns I've seen derail otherwise good QA setups:
Writing test cases before understanding the product risk. I've seen teams produce 300 test cases in the first two weeks and still miss the flows that matter most. Test case volume is not the same as test coverage quality. Start with the highest-risk flows and work outward from there.
Building automation before the manual process works. This is the most common mistake. See the callout above. Process first, then automation.
Treating QA as a phase rather than an embedded role. If QA is something that happens after development finishes, it will always be a bottleneck. QA needs to be involved from sprint planning onwards: running smoke tests as features complete development and signing off before code reaches staging. The earlier in the sprint, the cheaper the bugs.
Not establishing clear release criteria before the first release. The first time your team asks "are we ready to ship?" is not the time to define what "ready" means. Write the release readiness checklist before the first sprint ends. Even five items is better than zero.
goGreenlit sets up QA processes for Series A startups and scaling engineering teams across the US. If you want a structured 90-day QA setup without the lead time of a full-time hire, we're the team to call.
Book a free call →
What a well-run Series A QA process looks like at day 90
Here's what you should have at the end of 90 days if this is done right:
- A release readiness checklist that the team follows every sprint, without being reminded.
- A Playwright automation suite covering your critical path, running green in CI on every pull request.
- A documented test coverage map that tells you which flows are tested and which aren't.
- A defect lifecycle in JIRA with clear severity definitions and consistent reporting.
- A quality metrics dashboard showing escaped defect rate, automation coverage, and bug trends over time.
- A QA onboarding document so the process survives team changes.
That's not a huge QA operation. It's a lean, working foundation. A team of 10 engineers with this in place will ship more reliably, catch more bugs in staging rather than production, and have significantly less sprint-end panic than they did 90 days earlier. That's the goal.
If you want to talk through what this could look like for your specific team, book a 15-minute call. We've done this enough times that we can give you a clear sense of the scope in a single conversation.