We Put Our Own QMS Under the Microscope. Here’s What We Found.
We build Ketryx using Ketryx, and we hold ourselves to IEC 62304, ISO 13485, and ISO 14971. We do this not only to model compliance, but to prove that modern medical-grade software can move fast and stay safe.
Still, even high-performing teams accumulate process debt over time: extra clicks, duplicated fields, manual cross-checks, and bloated documentation. That extra weight, if left unchecked, slows product velocity.
So while the team was together in Vienna this week, we looked at our QMS and asked:
“Using a risk-based approach, how much effort could we remove from our processes without losing an ounce of compliance?”
Traditional QMS thinking says, “Document and validate everything, everywhere, always.” But an agile, risk-based approach says, “Document and validate the right things, automatically where possible.”
We decided to run our own evaluation to challenge the “just follow the SOP” mindset and tighten up our change workflows.
Why Re-Evaluate a Mature QMS?
A QMS is meant to evolve with the product and the team. When systems and teams scale, bloat can creep in. This comes in the form of:
- Fields that repeat the same information
- Change requests that stretch across weeks and lose focus
- Acceptance criteria buried in long text blocks
- Test cases drifting out of sync with current product behavior
- Release approvals bottlenecked across time zones
Our guiding principle for this evaluation: remove excess from our processes, and lean on automation and AI to strengthen process checks.
We applied a simple rule: if an artifact doesn't reduce risk, clarify intent, or improve auditability, it should be removed or automated. That is a risk-based approach in practice.
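To make that rule concrete, here's a minimal sketch of the triage logic, written as illustrative Python (the attribute names are ours, not part of any real Ketryx data model):

```python
def triage(artifact) -> str:
    """Risk-based triage: an artifact must earn its place in the QMS.

    `artifact` is assumed to expose a few booleans; these names are
    illustrative, not a real schema.
    """
    earns_its_keep = (
        artifact.reduces_risk
        or artifact.clarifies_intent
        or artifact.improves_auditability
    )
    if not earns_its_keep:
        return "remove"
    return "automate" if artifact.can_be_automated else "keep"
```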
How We Ran Our QMS Evaluation
We mapped our actual lifecycle across:
Change Requests → Requirements → Specs → Tests → Anomalies → Release
Then we worked through four lenses:
- Pain points: Where friction lives
- Quick wins: What we can fix immediately
- Open questions: What needs SOP/auditor alignment
- AI opportunities: What can be automated or checked
Whiteboards, sticky notes, and laptops ready, we dove in.
What We Changed — and How It Helps Us Move Faster
Change Requests
CRs are where change starts, but ours were heavy on text and screenshots and still sometimes ambiguous. To keep CRs clear and actionable, we added a dedicated Acceptance Criteria field and removed the “Impact of Change” field, which duplicated information already captured in the description and linked items. We also introduced AI checks that compare CR intent against PR/CI diffs, suggest links to affected items and tests, and flag duplicate requests.
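As a rough sketch of what such a check can look like (not our production implementation; the `ask_llm` helper is a stand-in for whichever model endpoint you use, and the prompt wording is illustrative):

```python
import subprocess

def ask_llm(prompt: str) -> str:
    """Stand-in for your LLM client of choice; not a real API."""
    raise NotImplementedError

def check_cr_against_diff(cr_title: str, acceptance_criteria: str,
                          base: str = "main") -> str:
    """Compare a change request's stated intent with the actual code diff."""
    # Three-dot diff: changes on this branch since it diverged from `base`.
    diff = subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    prompt = (
        "You are reviewing a medical-device change request.\n"
        f"CR title: {cr_title}\n"
        f"Acceptance criteria:\n{acceptance_criteria}\n\n"
        f"Code diff:\n{diff}\n\n"
        "Does the diff match the stated intent? List any changed files "
        "that no acceptance criterion appears to cover."
    )
    return ask_llm(prompt)
```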
Anomalies
We believe that speed is quality, especially for anomalies: adding extra process to anomaly resolution actually creates risk, because it leaves the anomaly open longer. We streamlined anomalies by:
- Renaming “Impact on System” to “Resolution/Fix”
- Requiring Steps to Reproduce with Observed vs. Expected behavior
- Clarifying “Scope” so that CAPA triggers and notification rules are explicit
- Adding AI checks that verify the required fields are complete (see the sketch below)
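A field-completeness check like this is simple enough to run on every save; here's a minimal sketch (the field keys are illustrative, not Ketryx's actual schema):

```python
REQUIRED_ANOMALY_FIELDS = [
    "steps_to_reproduce",
    "observed_behavior",
    "expected_behavior",
    "resolution_fix",   # renamed from "Impact on System"
    "scope",            # makes CAPA triggers and notification rules explicit
]

def missing_fields(anomaly: dict) -> list[str]:
    """Return the required anomaly fields that are absent or blank."""
    return [
        field for field in REQUIRED_ANOMALY_FIELDS
        if not str(anomaly.get(field, "")).strip()
    ]
```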
Requirements & Specifications
Some specifications were overly detailed, which made them time-consuming to maintain. We began removing screenshots from software items and keeping technical detail closer to the code, while still validating where Git-based specs make sense.
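One hypothetical shape for a Git-based spec: keep the normative statement in a comment or docstring right beside the code that implements it, with traceability IDs (all invented here) that tooling can index:

```python
# audit_log.py
# SPEC-112 (traces to REQ-47; IDs are illustrative): audit-log entries
# must be retained for at least 7 years and never silently dropped.
RETENTION_DAYS = 7 * 365

def is_expired(age_days: int) -> bool:
    """Enforce SPEC-112: an entry is purgeable only after the retention window."""
    return age_days > RETENTION_DAYS
```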
Tests & Executions
To make tests lighter and more current, we reaffirmed our bias toward automated tests wherever possible, adopted a practice of obsoleting and recreating manual tests when behavior changes, and began exploring how we can better link change requests to tests in Xray.
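For the Xray linking, one option we're exploring is scripting it against Jira's generic issue-link endpoint. A sketch follows (the site URL and credentials are placeholders, and whether a plain issue link counts toward Xray test coverage in your configuration is something to verify first):

```python
import requests

JIRA_SITE = "https://your-company.atlassian.net"  # placeholder
AUTH = ("bot@example.com", "<api-token>")         # placeholder credentials

def link_cr_to_test(cr_key: str, test_key: str,
                    link_type: str = "Relates") -> None:
    """Create a Jira issue link between a change request and a test issue.

    `link_type` is an assumption: check which link type (if any) your
    Xray setup treats as coverage before relying on this.
    """
    resp = requests.post(
        f"{JIRA_SITE}/rest/api/2/issueLink",
        json={
            "type": {"name": link_type},
            "inwardIssue": {"key": cr_key},
            "outwardIssue": {"key": test_key},
        },
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
```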
Releases & Approvals
Our approval workflows were solid, but timezone latency was slowing us down. To keep rigor and speed, we added QM capacity in CET so approvals happen during dev hours (not overnight) and introduced AI-driven pre-checks to flag missing links, missing acceptance-criteria coverage, and gaps in risk evidence before human review begins.
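The pre-checks don't need to be fancy to be useful; a first pass can be plain rule checks over a release's items, roughly like this (field names are illustrative):

```python
def release_precheck(items: list[dict]) -> list[str]:
    """Flag gaps for human reviewers before approval starts."""
    findings = []
    for item in items:
        key = item.get("key", "<unknown>")
        if not item.get("linked_tests"):
            findings.append(f"{key}: no linked tests")
        if not str(item.get("acceptance_criteria", "")).strip():
            findings.append(f"{key}: acceptance criteria missing or empty")
        if item.get("risk_class") in {"medium", "high"} and not item.get("risk_evidence"):
            findings.append(f"{key}: risk evidence missing for a risky change")
    return findings
```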
What This Means in Practice
These changes mean:
- Fewer clicks and fields
- Cleaner, clearer documentation
- Faster decision cycles
- Less time searching for truth
- More time building and testing actual functionality
- And a QMS that gets faster, not slower, as it gets smarter
What We’re Still Validating
There were a few changes we considered that we weren’t ready to implement yet:
- How far to push Git-based specs without losing easy traceability
- The final “home” for acceptance criteria (authored in CRs vs. encoded in tests)
- Semantics for feature-flagged work and traceability views
- Where AI should enforce templates vs. infer patterns automatically
A QMS is never “finished.” It should evolve as the team evolves.
The Playbook You Can Borrow
- Map the real workflow.
- Ask: Does this reduce risk or is it just habit?
- Consolidate fields or delete them.
- Link to relevant configuration items instead of duplicating them in descriptions.
- Use AI to check consistency and PR/CI drift.
- Staff approvals across time zones so reviews land during development hours.
- Review & tune often, as process debt compounds.
We left the session with a QMS that is leaner, clearer, and faster, while actually strengthening rigor and evidence.
This is what agile, risk-based, AI-assisted compliance looks like.
Want the template we used to conduct this QMS workshop internally? Download it here.

Lee Chickering is a Client Operations Manager at Ketryx and an expert in quality assurance and regulatory compliance, specializing in bridging quality management and customer success to drive operational excellence in the life sciences industry. With a diverse background spanning manufacturing, project management, and compliance at companies like Amgen, he has led the implementation of Quality Management Systems (QMS) aligned with ISO 13485, ISO 14971, and IEC 62304. Passionate about advancing quality in life sciences, he thrives on collaborating with organizations to enhance efficiency, compliance, and innovation.
