
“Did We Catch Everything?” How Ketryx Turns Risk Management into Proactive Product Intelligence

Megan Mannino
March 12, 2026


I’ve been there: hundreds of pages into an 800-page risk management document, trying to trace whether Failure Mode #431 has adequate controls and whether those controls are actually verified.

I know this work matters. These documents protect patients. But at a certain point, the question stops being “is this right?” and starts being “did I catch everything?” Those aren’t the same question. That shift is where confidence in a risk assessment starts to erode.

Here’s what I’ve come to understand after years in this work: risk management at scale isn’t a cognition problem, and it isn’t busywork. It’s a process problem. Engineers aren’t missing things because they lack expertise. They’re missing things because the process asks humans to maintain thousands of relationships simultaneously: failure modes to controls, controls to requirements, requirements to tests, tests to evidence. Those relationships span hundreds of failure modes, are distributed across multiple tools, and shift as the product continuously evolves and scales. No team maintains that manually without gaps.

The goal isn’t just getting through the document faster. It’s proactive product intelligence: transforming risk management from a reactive documentation exercise into a continuous, connected system that keeps pace with how devices are actually built.

“The question stops being ‘is this right?’ and starts being ‘did I catch everything?’ Those aren’t the same question.”

The problem with manual risk management at scale

As medical devices grow more complex, maintaining confidence that documentation is complete becomes exponentially harder, not from carelessness, but because the information is too complex for any one person to track manually.

Managing a complex risk assessment across a modern toolchain means tracing failure modes through requirements in JAMA, design specs in Excel, test cases in TestRail, and verification evidence in Word. For a single failure mode, that trace is manageable. For hundreds, the mental overhead compounds quickly.

[Figure: tracing a single failure mode (e.g., Failure Mode #431, hazardous situation affecting the patient) manually across four tools: requirements in JAMA, the risk assessment in Excel, test cases in TestRail, and verification evidence in Google Drive, repeated across 400+ failure modes.]


Engineers are constantly balancing four simultaneous questions:

1. Were all relevant failure modes identified for each hazardous situation?
2. Does every control trace to a corresponding requirement?
3. Is there test coverage for every control?
4. Did those tests pass with formally approved evidence, not just a passing run?

Each question in the list above is answerable in isolation. Answering all of them simultaneously, across hundreds of failure modes, is where the process breaks down.

The information exists somewhere in the toolchain. The problem is synthesizing it into a complete, coherent view: one that provides actual confidence rather than a reasonable assumption that nothing was missed. And that assumes the documentation hasn’t drifted since the last review.
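To make the synthesis problem concrete, here is a minimal sketch of what that completeness check reduces to, assuming a simplified traceability model. The class and field names are illustrative only; they are not Ketryx’s data model or API.

```python
from dataclasses import dataclass, field


@dataclass
class FailureMode:
    """One row of the risk assessment, e.g. Failure Mode #431."""
    fm_id: str
    hazardous_situation: str
    control_ids: list[str] = field(default_factory=list)      # controls in Excel


@dataclass
class Control:
    control_id: str
    requirement_ids: list[str] = field(default_factory=list)  # requirements in JAMA


@dataclass
class Requirement:
    req_id: str
    test_ids: list[str] = field(default_factory=list)         # test cases in TestRail


@dataclass
class Test:
    test_id: str
    passed: bool = False
    evidence_approved: bool = False                            # evidence in Google Drive


def find_gaps(fms, controls, reqs, tests):
    """Answer all four questions at once: every failure mode needs controls,
    every control needs a requirement, every requirement needs test coverage,
    and every test needs a passing run with formally approved evidence."""
    gaps = []
    for fm in fms:
        if not fm.control_ids:
            gaps.append(f"{fm.fm_id}: no risk controls identified")
        for c in (controls[cid] for cid in fm.control_ids):
            if not c.requirement_ids:
                gaps.append(f"{c.control_id}: control traces to no requirement")
            for r in (reqs[rid] for rid in c.requirement_ids):
                if not r.test_ids:
                    gaps.append(f"{r.req_id}: requirement has no test coverage")
                for t in (tests[tid] for tid in r.test_ids):
                    if not (t.passed and t.evidence_approved):
                        gaps.append(f"{t.test_id}: no approved passing evidence")
    return gaps
```

The logic itself is trivial. The difficulty is that in practice each lookup crosses a tool boundary into a document that may have changed since it was last read, and a human runs this loop by hand, hundreds of times.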

When it does break down, the response is almost always reactive. A gap surfaces during an audit, a design review, or late-stage verification, and the team scrambles to reconstruct traceability after the fact. The fix is localized, not systemic. The same gap can reappear in a different failure mode, a different release, or a different product line.

Without risk visibility in Jira

1. Developer modifies a requirement. A reasonable clarification, written 18 months ago.
2. PR passes review and merges. No one flags a compliance concern; the context lives in a separate tool.
3. Three weeks later, at pre-release audit, QA flags it: that requirement was implementing Risk Control #89.
4. Compliance hold. Release blocked. The team scrambles to reconstruct traceability, and the same gap can reappear in the next release.

With Ketryx: risk visibility in Jira

1. Developer modifies a requirement. Same change, but Jira shows a risk control relationship.
2. Risk Control #89 is flagged in Jira. The developer sees the connection before committing and asks the right question while there's still time to act.
3. Informed decision at the point of change. The risk assessment is updated proactively, and documentation stays current.
4. Clean release. No compliance hold, no retroactive reconstruction. Risk management stays in sync with actual development.

The visibility failure runs deeper than comprehensiveness. Developers working in regulated environments don’t intend to create documentation gaps. They’re working in the tools they’ve always used: Jira, GitHub, and their IDE. Their risk assessment exists somewhere else, maintained by someone else, in a format most developers rarely open. That structural separation is what causes drift.

Consider what happens when a developer modifies code that's directly implementing a risk control. A reasonable performance improvement, nothing that changes the intended behavior. The PR passes review and merges. Three weeks later, QA flags it: that code was the verification evidence for Risk Control #89. The control is now unverified, and the gap surfaces right before release. Nobody made a mistake. The developer had no way of knowing that the code change touched risk management — that context lived in a different document in a different tool, invisible from where the work was happening.

The fix isn't better documentation habits or more process overhead. It's making risk control relationships visible in the environments where development actually happens, and keeping them current automatically rather than reconstructing them manually.
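For illustration, here is a hedged sketch of what surfacing that context at the point of change could look like: a pre-merge check that matches requirement IDs mentioned in a pull request against a lookup of risk control links. The mapping and function here are hypothetical, not Ketryx's actual Jira or GitHub integration.

```python
import re

# Hypothetical lookup from requirement IDs to linked risk controls.
# In a real integration this would be a live query against the risk
# assessment, not a hard-coded dict.
RISK_CONTROL_LINKS = {
    "REQ-1042": ["Risk Control #89"],
}

REQ_ID = re.compile(r"\bREQ-\d+\b")


def risk_warnings(pr_title: str, pr_body: str) -> list[str]:
    """Flag any requirement touched by a PR that implements a risk control,
    so the developer sees the connection before merging."""
    warnings = []
    for req_id in sorted(set(REQ_ID.findall(f"{pr_title} {pr_body}"))):
        for control in RISK_CONTROL_LINKS.get(req_id, []):
            warnings.append(
                f"{req_id} implements {control}: review the risk assessment "
                f"before merging."
            )
    return warnings


print(risk_warnings("Clarify REQ-1042 wording", "Small cleanup, no behavior change."))
# -> ['REQ-1042 implements Risk Control #89: review the risk assessment before merging.']
```

The point of the sketch is where the warning appears: in the pull request, at the moment of change, rather than in a risk document reviewed weeks later.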

“Risk management isn’t a cognition problem. It is a process problem.”

Where AI changes the equation

The shift AI-native platforms enable isn’t just speed. It’s confidence at speed. Those two things have always been in tension in risk management: moving faster meant accepting more risk that something was missed. Ketryx resolves that tension. Teams see up to a 90% reduction in documentation time, not by cutting corners, but by maintaining traceability automatically so engineers can focus on the judgment calls that actually require human expertise.

1. Failure Mode Identification

Before: The most anxiety-inducing question in risk management is whether all relevant failure modes were identified. The process starts from a blank page, relying on SME knowledge alone.

With Ketryx AI: The Ketryx AI Assistant acts as a thought partner during FMEA development, surfacing potential failure modes based on the product type, applicable standards like ISO 14971 and IEC 62304, and failure modes from sources like the FDA MAUDE database. Engineering judgment still decides relevance, but the process no longer starts from a blank page.

2. Real-Time Traceability

Before: Traceability is a separate document someone maintains in parallel, reconstructed before each review cycle and always at risk of drift.

With Ketryx AI: The connections between risk controls and the rest of the DHF (requirements, test cases, and verification evidence) are maintained automatically as teams work, not reconstructed manually before each review cycle. When a developer modifies a Jira task linked to a risk control, that relationship is visible in the tool they're already using. Native integration with Jira, GitHub, and JAMA means traceability is a live representation of what's actually been built and tested.

3. Automated Completeness Verification

Before: A manual trace takes two weeks. Gaps surface at audit or right before release, and the fix is localized, not systemic.

With Ketryx AI: Ketryx AI Agents continuously analyze the DHF to verify that every risk control has associated requirements, every requirement has test coverage, and every test has audit-ready, approved verification evidence. When gaps exist (a control without test coverage, a passed test without formal approval), they're flagged proactively, not at the end of a manual trace that took two weeks.
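Continuing the illustrative sketch from earlier, this is roughly what a proactively flagged gap reduces to, using the hypothetical find_gaps helper defined above:

```python
# One failure mode whose control traces cleanly to a requirement and a
# test, except the test's evidence was never formally approved.
fms = [FailureMode("FM-431", "Patient harm", control_ids=["RC-89"])]
controls = {"RC-89": Control("RC-89", requirement_ids=["REQ-1042"])}
reqs = {"REQ-1042": Requirement("REQ-1042", test_ids=["TC-77"])}
tests = {"TC-77": Test("TC-77", passed=True, evidence_approved=False)}

for gap in find_gaps(fms, controls, reqs, tests):
    print(gap)   # -> TC-77: no approved passing evidence
```

Note the distinction from question four earlier: the test passed, but without formal approval it isn't audit-ready evidence, and that is exactly the kind of gap a manual trace tends to miss.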

To be precise about what AI is and isn’t doing here: it isn’t making safety determinations. The human-in-the-loop remains central. Decisions about whether a control is adequate, whether a risk level is acceptable, and whether field evidence requires a mitigation update stay with the engineers and quality professionals who have the context and accountability to make them. What gets offloaded is the verification that the documentation supporting those decisions is complete. That work is exhausting, error-prone at scale, and not where engineering expertise adds the most value.

Post-market: the same problem, higher stakes

Everything above happens during development, when gaps are still correctable. Post-market is where the same underlying failure carries patient consequences: incomplete traceability, documentation that’s drifted from reality, a risk assessment that doesn’t reflect what the device actually does.

Post-market surveillance reveals how a device actually performs. Failure modes that weren’t anticipated during development, probability estimates that don’t hold up against field data, and evidence that should reshape both the current product and the next design revision. The comprehensiveness problem doesn’t end at launch. It compounds.

Reactive: manual risk management

Gaps compound over time. The same problem reappears in different forms.

1. Complaint or field data arrives. A failure mode is more probable than initially assessed, or a new failure mode surfaces entirely.
2. The team searches for the risk assessment. The document that should be updated was written during initial development and may have drifted from what was actually built and tested.
3. A localized fix is applied. Address the specific complaint, update the specific record, assume nothing else is affected.
4. The same gap reappears elsewhere. Different failure mode, different release, different product line. The cycle repeats.

When a regulatory inspector asks how post-market surveillance was incorporated into risk management, the answer can't be "we addressed this complaint."

Proactive: AI-assisted risk management

Field evidence is continuously integrated. Risk management functions as an improvement system.

1. Field data is analyzed against the full risk assessment. A complaint arrives, and AI-assisted change impact analysis identifies which risks, controls, and verification documents are affected.
2. Related failure modes are surfaced automatically. Not just the reported failure mode, but adjacent risks worth reconsidering, preventing the same gap elsewhere.
3. The risk assessment is updated systematically. Documentation reflects what was actually analyzed, evaluated, and updated, not just what was addressed reactively.
4. The next design revision is informed by field evidence. Post-market surveillance feeds forward into product development, as ISO 14971 intended.

When an inspector asks, the answer is: here's how this field evidence was analyzed against the full risk assessment, here are the controls that were evaluated, and here's the documentation that was updated.
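Under the same illustrative data model as the earlier sketches, the traversal behind that change impact analysis might look like the following; the function is an assumption for illustration, not Ketryx's implementation.

```python
def impact_of_field_signal(fm_id, fms, controls, reqs):
    """Given a failure mode implicated by field data, collect the downstream
    items to revisit and the adjacent failure modes worth reconsidering.

    fms is a dict keyed by fm_id here, unlike the list used earlier."""
    fm = fms[fm_id]
    affected = {"controls": set(), "requirements": set(), "tests": set()}
    for cid in fm.control_ids:
        affected["controls"].add(cid)
        for rid in controls[cid].requirement_ids:
            affected["requirements"].add(rid)
            affected["tests"].update(reqs[rid].test_ids)
    # Adjacent risks: other failure modes tied to the same hazardous
    # situation, so the same gap doesn't reappear elsewhere.
    adjacent = [
        other.fm_id
        for other in fms.values()
        if other.fm_id != fm_id
        and other.hazardous_situation == fm.hazardous_situation
    ]
    return affected, adjacent
```

The point is not the traversal itself but what it runs against: a live graph, so the answer reflects what was actually built and tested rather than a document frozen at initial development.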

From hoping to knowing

I’ve spent time on page 673 of a risk management document. I know what it feels like to be done but not certain. To close the file and still wonder whether you caught everything. That question doesn’t stay in the document. It follows the device into production, into the field, and into every audit.

“All auditors were amazed by the level of detail, linkage, and control… Ketryx is an efficient and compliant solution for medical device software development.” — Mihir Naik, Sr. Director of Quality, Vektor Medical (3 successful audits, including a leading notified body in 2024)

That’s not a documentation story. That’s a confidence story.

The difference AI makes isn’t getting to page 673 faster. It’s getting there and knowing, not hoping, that the answer to “did we catch everything?” is yes.

See how Ketryx transforms risk management →


Megan Mannino
Technical Product Marketing Manager
Ketryx

Megan is a Technical Product Marketing Manager at Ketryx with 6 years of experience in medical device development at Abbott and iRhythm Technologies. She previously led cross-functional teams at Abbott and iRhythm, working across the total product lifecycle (TPLC) with engineering, clinical, quality, regulatory, and commercial stakeholders. She has supported FDA Class I-III device development from early product definition through design transfer, including multiple successful 510(k)s, PMA-S, and EU MDR submissions.

Megan is passionate about using product storytelling to make complex regulatory technology more understandable, meaningful, and impactful.