
The Compliance Confidence Trap: What 100 Requirements Reviews Still Miss

Megan Mannino • March 20, 2026


The email came in on a Tuesday morning.

"What are your transit humidity requirements?"

I stared at it for a long time. We had temperature specs. We had pressure specs. We had operating humidity. But somehow, across years of development, hundreds of document reviews, and thousands of sign-offs from seasoned engineers, regulatory specialists, and quality professionals, nobody had documented the humidity conditions between the factory and the hospital.

I spent the next few weeks backfilling a requirement we should have written in year one.

Here's what I took away: I was a systems engineer. A good one. Working alongside great people. And we still had a gap.

Not because anyone was careless, but because this work is genuinely complex. The fact that one question could surface something so foundational? That's not a rarity. That's what engineering often looks like. You find the gap, you close it, and the product gets better.

The Compliance Confidence Trap

Here's the paradox that lived underneath that email: we weren't negligent. We had an excellent process. We had smart, experienced people. We had traceability. We had reviews.

And we still missed it.

I've come to think of this as the Compliance Confidence Trap: the moment a rigorous review process creates false certainty, not because reviewers were careless, but because no review process can surface what it doesn't know to look for. More reviews don't fix this. They confirm what you already have. The gap, by definition, isn't there to be found.

This isn't a cautionary tale about one humidity requirement. It's about the three categories of failure that live beneath every well-run medical device program, and why the tools we used weren't built for the problem we actually had.

The Compliance Confidence Trap

Rigorous reviews create false certainty — not because reviewers are careless, but because no review process can surface what it doesn't know to look for.

Category 1
Missing Requirements

Every reviewer reviews what exists. Nobody generates a checklist of what should exist. More reviews confirm what you already have — the gap, by definition, isn't there to be found.

Real example

"IEC 60601-1 clause 11.6.3 requires humidity specs across operating, storage, and transit conditions. Our spec covered two of three. Years of reviews never caught it."

Category 2
MAUDE Analysis Gaps

Risk decisions made without knowing what's already gone wrong for other teams building similar products. Exclusion decisions — concluding certain failure modes don't apply — made without the full picture.

The real loss

"We missed failure modes that had happened to other teams. We missed emerging patterns that should have influenced our analysis. Some exclusions were probably right — for the wrong reasons."

Category 3
Configuration Chaos

Multiple hardware variants, firmware versions, shared and device-specific software components. Which revision is valid for which test? The answers live in a spreadsheet only two people understand.

The hidden tax

"3 mechanical housing revisions. 2 circuit board revisions. 4 software versions per component. Nothing was technically wrong. Nothing was reliably right, either."

The common thread: none of these problems required AI to solve them. They required better infrastructure — tools that eliminate the parts that don't require engineering judgment so the parts that do can get full attention.

Why More Reviews Won't Catch Missing Requirements

We got lucky with the humidity spec. We had existing environmental testing data we could reinterpret. But getting lucky meant three weeks of digging through old test reports, arguing about whether test conditions adequately covered the gap, and documenting our rationale for why this didn't constitute a test failure.

It was solvable. It was also completely preventable.

The failure pattern wasn't in our review process. It was in our starting point. Our requirements spec referenced environmental conditions for temperature and pressure comprehensively. Humidity appeared in operating conditions but not transit. Every reviewer was reviewing what existed. Nobody was generating a checklist of what should exist.

What I needed wasn't more reviewers. I needed something that could look at "Environmental Conditions — Section 4.2" and say: IEC 60601-1 clause 11.6.3 typically requires humidity specifications across operating, storage, and transit conditions. Your spec covers two of three.

That's not AI making an engineering decision. That's AI catching a gap before it becomes a certification finding.
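To make that concrete, here's a minimal sketch of what that kind of coverage check could look like. The clause-to-condition mapping and the "extracted" spec contents below are illustrative assumptions, not a complete reading of IEC 60601-1:

# Sketch: compare a requirements spec against a minimal coverage model.
# The condition names and spec contents are illustrative assumptions,
# not a full reading of IEC 60601-1.

# Coverage model: for each environmental parameter, the lifecycle
# conditions a spec is expected to address.
COVERAGE_MODEL = {
    "temperature": {"operating", "storage", "transit"},
    "pressure": {"operating", "storage", "transit"},
    "humidity": {"operating", "storage", "transit"},
}

# What our spec actually contained (hypothetical extraction of Section 4.2).
spec_coverage = {
    "temperature": {"operating", "storage", "transit"},
    "pressure": {"operating", "storage", "transit"},
    "humidity": {"operating", "storage"},  # transit missing: the gap
}

def find_gaps(model, spec):
    """Return {parameter: missing_conditions} for every under-covered parameter."""
    gaps = {}
    for parameter, required in model.items():
        missing = required - spec.get(parameter, set())
        if missing:
            gaps[parameter] = missing
    return gaps

for parameter, missing in find_gaps(COVERAGE_MODEL, spec_coverage).items():
    print(f"{parameter}: missing {', '.join(sorted(missing))} conditions")
# Output: humidity: missing transit conditions

The point isn't the twenty lines of code. It's that the expected shape of the spec lives somewhere other than a reviewer's memory.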

The MAUDE Analysis Every Risk Team Skips (And Why)

The humidity requirement wasn't our only blind spot.

We ran a thorough risk management process: conference rooms, whiteboards, FMEAs built on the previous generation's foundation, institutional knowledge from engineers who'd been building similar devices for years. It felt rigorous because it was rigorous, by the standards we'd always applied.

What we didn't do: systematically analyze FDA MAUDE reports for similar devices.

The data was public. It was searchable. The task wasn't deliberately skipped and then forgotten; it simply fell through the cracks. Nobody owned it. And so we made decisions about our hazard analysis without knowing what had already gone wrong for other teams building similar products.

We missed failure modes that had happened to other teams. We missed emerging patterns that should have influenced our analysis. We missed the chance to learn from the industry's collective post-market experience before it became relevant to us.

The real loss wasn't the specific hazards. It was the decision quality. We were making exclusion decisions, concluding that certain failure modes didn't apply to our design, without the full picture. Some of those exclusions were probably right. Some were probably right for the wrong reasons.

What I needed wasn't AI telling me which hazards to include. I needed AI to surface 47 MAUDE reports for Class II cardiac devices with comparable failure modes, and ask: "Did you consider these? Here's why they might be relevant."

That's informed judgment. The version we had was just judgment.
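The raw material for this is genuinely accessible. The openFDA device event endpoint serves MAUDE reports over a public API; the sketch below pulls recent reports by generic device name. The search term is a placeholder, and a real pipeline would filter by product code and failure mode rather than a single keyword:

# Sketch: pull MAUDE device event reports from FDA's openFDA API
# (https://open.fda.gov/apis/device/event/). The search term below
# is a placeholder, not our actual device.
import requests

OPENFDA_DEVICE_EVENTS = "https://api.fda.gov/device/event.json"

def fetch_maude_reports(generic_name, limit=20):
    """Return MAUDE reports whose device generic name matches."""
    params = {
        "search": f'device.generic_name:"{generic_name}"',
        "limit": limit,
    }
    response = requests.get(OPENFDA_DEVICE_EVENTS, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("results", [])

for report in fetch_maude_reports("cardiac monitor"):
    narrative = (report.get("mdr_text") or [{}])[0].get("text", "")[:120]
    print(report.get("event_type"), "-", narrative)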

Hardware-Software Configuration Management: The Hidden Tax of Combination Devices

If I'm honest, the humidity requirement and the MAUDE gap were recoverable. What cost me the most, in time, in cognitive load, in quiet anxiety, was configuration management.

We were building a product family: multiple hardware variants, each running different firmware versions, with shared and device-specific software components. On paper, we had a system. Part numbers, revision tracking, and change control procedures.

In practice: chaos.

Three mechanical housing revisions. Two circuit board revisions. Four software versions per component. The real problem wasn't keeping track of what existed. It was knowing what worked where. Which revision was valid for which test? Which firmware version was compatible with which board? The answers lived in a spreadsheet that only two people understood. For everyone else, it was largely a guess.

Not every test was valid for every configuration. Some verification tests were applied across the board; others were only meaningful for specific hardware-software combinations. Our requirements management tool had no way to express that distinction. So the team improvised: part Excel, part linked documents, part institutional knowledge held in the heads of people who might leave.

Nothing was technically wrong. Nothing was reliably right, either.

I spent more hours managing configuration data than doing engineering work. The engineers around me did the same. This is the hidden tax of building hardware-software combination devices, and it compounds every time you add a variant, a revision, or a new software branch.

Live Webinar · May 7

Still think another review cycle would have caught these?

On May 7, we're showing what actually works. Join us for From AI Curious to AI Native: Top Use Cases for AI Across Regulated SDLCs.

Register

What I Actually Needed (Not What I Thought I Needed)

Looking back, none of these problems required AI to solve them. They required better infrastructure.

On the humidity requirement: I needed something that could compare my requirements spec against a coverage model, not a perfect one, just one that knew IEC 60601-1's environmental requirements well enough to ask "why isn't transit humidity here?" before certification, not during.

On the MAUDE analysis: I needed something that could automatically pull relevant reports by device type and failure mode when I was building my hazard analysis, not to make risk decisions for me, but to make sure I was making them with complete information.

On configuration management: I needed a system where hardware revision, firmware version, and software component relationships were defined once and visible to the whole team. Where an engineer could ask "which configurations are valid for this test?" and get a clear answer instead of opening a spreadsheet only two people understand.
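A minimal sketch of that kind of model, with every part number, version, and test ID made up for illustration: configurations declared once, test applicability expressed as rules over those configurations, and the "which configurations?" question answered by a query instead of a spreadsheet.

# Sketch of the data model I wanted: valid configurations and test
# applicability defined once, queryable by anyone. All revisions,
# versions, and test IDs below are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Configuration:
    housing_rev: str
    board_rev: str
    firmware: str

# Declared once, under change control, not in a private spreadsheet.
VALID_CONFIGS = {
    Configuration("A", "1", "2.1"),
    Configuration("B", "2", "2.1"),
    Configuration("C", "2", "3.0"),
}

# Which configurations each verification test applies to.
TEST_APPLICABILITY = {
    "VER-014 drop test": lambda c: c.housing_rev in {"B", "C"},
    "VER-022 battery life": lambda c: c.firmware >= "3.0",
    "VER-030 EMC": lambda c: True,  # applies across the board
}

def configs_for_test(test_id):
    """Answer 'which configurations are valid for this test?'"""
    applies = TEST_APPLICABILITY[test_id]
    return sorted((c for c in VALID_CONFIGS if applies(c)),
                  key=lambda c: (c.housing_rev, c.board_rev, c.firmware))

for config in configs_for_test("VER-014 drop test"):
    print(config)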

In every case, the ask was the same: eliminate the parts that don't require engineering judgment so the parts that do can get full attention.

I didn't need AI to write perfect requirements. I needed it to catch gaps before they became findings.

I didn't need AI to make risk decisions. I needed it to make sure those decisions were informed by everything relevant.

I didn't need AI to replace change impact assessment. I needed it to automate dependency tracing so I could focus on evaluating implications instead of mapping them.
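Dependency tracing is the mechanical half of change impact assessment, and it's automatable. A toy version, assuming a made-up miniature trace graph of requirement, design output, and tests:

# Sketch: automated dependency tracing for change impact assessment.
# The trace links below are a hypothetical miniature, not real data.
from collections import deque

# Directed trace links: item -> items that depend on it.
TRACE_GRAPH = {
    "REQ-042 transit humidity": ["DES-107 enclosure sealing"],
    "DES-107 enclosure sealing": ["VER-014 drop test", "VER-031 ingress test"],
    "VER-014 drop test": [],
    "VER-031 ingress test": [],
}

def impacted_items(changed_item, graph):
    """Breadth-first walk of everything downstream of a changed item."""
    seen, queue = set(), deque([changed_item])
    while queue:
        item = queue.popleft()
        for downstream in graph.get(item, []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return sorted(seen)

# Changing the requirement flags the design output and both tests for review.
print(impacted_items("REQ-042 transit humidity", TRACE_GRAPH))

The walk is trivial; what matters is that the engineer starts from a complete impact list and spends their time judging implications, not assembling the list.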

What I Actually Needed

1. Requirements Coverage
What I had: reviewers reviewing what existed. More reviewers confirming the same document, nobody generating a checklist of what should exist, gaps invisible until certification.
What I needed: a coverage model that asks the right questions. Something that compares my spec against IEC 60601-1's environmental requirements and says "transit humidity isn't here" before certification, not during.

2. Risk Analysis
What I had: judgment without complete information. FMEAs built on institutional knowledge and the previous generation's foundation; nobody systematically analyzing MAUDE reports for similar devices.
What I needed: relevant MAUDE reports surfaced automatically. 47 reports for Class II cardiac devices with comparable failure modes, with the question: "Did you consider these?" That's informed judgment, not just judgment.

3. Configuration Management
What I had: a spreadsheet only two people understood. Which revision is valid for which test? Which firmware version is compatible with which board? More hours managing configuration data than doing engineering work.
What I needed: HW-SW relationships defined once, visible to all. An engineer asks "which configurations are valid for this test?" and gets a clear answer instead of opening a spreadsheet only two people understand.

These Tools Should Have Existed Years Ago

The underlying problem was never human error. It was a tool mismatch.

Software ALM tools understood code but not hardware. Requirements tools understood documents but not CI/CD. PLM tools understood BOMs but not software dependencies. Every tool was built for a world where hardware programs and software programs were separate things with separate teams and separate review cycles.

The devices we were building didn't work that way. The tools didn't know that.

The reality of modern medical device development, with embedded software, firmware, connectivity, AI models, and continuous updates, outpaced the tooling by a decade. We built good products despite this. The question I keep coming back to is what we left on the table: faster development cycles, fewer late-stage findings, better risk analysis, configuration visibility that didn't depend on two people not leaving the company.

If you're a systems engineer nodding along, you're not alone. These problems are common, they're real, and they're not actually that hard to solve with the right infrastructure.

The tools are finally catching up.


Building a combination device? See how Ketryx connects hardware and software traceability in one platform.

Explore HW/SW


Megan Mannino
Technical Product Marketing Manager
Ketryx

Megan is a Technical Product Marketing Manager at Ketryx with six years of experience in medical device development at Abbott and iRhythm Technologies, where she led cross-functional teams across the total product lifecycle (TPLC) with engineering, clinical, quality, regulatory, and commercial stakeholders. She has supported FDA Class I-III device development from early product definition through design transfer, including multiple successful 510(k), PMA-S, and EU MDR submissions.

Megan is passionate about using product storytelling to make complex regulatory technology more understandable, meaningful, and impactful.
