
AI-Assisted Change Impact Assessment: How 9 Engineers Doubled Their Completeness in Half the Time
Modern medical device development has outgrown the workflows we still rely on. As systems become more interconnected and software-driven, the volume of requirements, cross-functional dependencies, and regulatory constraints has multiplied. What once could be reasoned through manually now spans thousands of linked artifacts across the product lifecycle.
Nowhere is this more apparent than in conducting a change impact assessment. When a single change can touch User Needs, System Requirements, Software Requirements, Design Specifications, Risk Analyses, and Test Cases, the real challenge isn’t just effort; it’s scale.
So, we wanted to understand what happens when you give teams AI assistance for one of regulated development's most demanding tasks. What we found wasn't just faster work. It was a fundamental shift in what's achievable under real-world time constraints.
Here’s the TL;DR:
The task: Assessing how adding a device status field ripples through requirements, specifications, test cases, and risk analyses across multiple subsystems.
Their completeness: 95% with 100% accuracy.
Their manual baseline from 30 minutes earlier: 15% completeness, 25% accuracy.
This represents what is now possible. Across all nine participants, AI assistance doubled completeness and accuracy in half the time.
The Experiment
To understand where the real friction lies, we designed a structured observational study, addressing the real-world constraints systems engineers face every day.
We observed 18 complete workflows. Nine professionals, with experience at organizations such as Raytheon and SpaceX and backgrounds spanning software engineering, systems engineering, defense, and medical device development, each completed two realistic change impact assessment scenarios. They ranged from novices to a senior Systems Engineer who had conducted more than 20 impact assessments.
Scenario A involved adding a device status field to a patient monitoring app.
Scenario B involved changing a network timeout from 10 to 15 seconds.
Each participant completed one scenario manually and one with AI assistance, with assignments randomized. They had 30 minutes per scenario to reflect the time pressure typical of sprint planning and rapid change impact assessments.
We measured completeness of impact identification, accuracy of traceability and reasoning, time to meaningful coverage, and whether or not participants identified critical regulatory determinations such as revalidation triggers and potential impacts to intended use.
For AI-assisted workflows, participants used Ketryx’s connected platform, where requirements, specifications, risk analyses, and test cases exist within a single traceable system. The AI could traverse end-to-end traceability instantly instead of requiring manual navigation across tools like Jira, Confluence, Excel, and TestRail.
This was designed as a realistic test of how modern change impact analysis actually happens and where it starts to break down.
The Results
What We Saw
The results were structural and substantial.
Across all participants, AI assistance fundamentally changed performance:
- Completeness: 164% improvement
- Accuracy: 118% improvement
- Time: 38% reduction
- Critical regulatory determinations: 100% accuracy with AI, compared to 50% manually
Every participant using AI correctly determined whether the change triggered revalidation and whether it impacted intended use. These are not academic or theoretical distinctions; they are the determinations that decide whether you need a new 510(k) submission to the FDA or a Notified Body review in Europe.
This was not a generic LLM output. The Ketryx AI operates inside a connected system, is trained on FDA guidance and ISO standards, and is structured around the logic of IEC 62304. The AI did not guess; it was reasoning within the regulatory and traceability framework that engineers actually work in.
The Ceiling Case
One participant, P5, showed what the upper ceiling looks like when expertise and tooling align.
For the same type of task, they exhibited:
- 95% completeness, compared to 15% manually
- 100% accuracy, compared to 25% manually
- 15 minutes, compared to 30 minutes manually
With AI support, P5 systematically addressed all impacted User Needs and cited FDA guidance. They identified all four impacted requirements with correct classifications. They surfaced 12 of 13 relevant test cases with proper designations.
Same engineer. Same domain knowledge. Same scenario.
Different tools.
The difference was not intelligence or experience. It was leverage.
Instead of manually clicking through Jira links to reconstruct the impact graph, the AI mapped it instantly. Instead of searching through Confluence pages to find context, it surfaced relevant excerpts. Instead of mentally juggling regulatory frameworks, it applied them automatically and consistently.
As the one senior engineer we tested, with more than 20 impact assessments under their belt, put it:
“100 billion times better, finding the needle in the haystack.”
The ceiling was not about replacing expertise. It was about amplifying it.
How AI Changes the Game
Using AI changes three fundamental challenges:
Cognitive overload: When done manually, engineers had to simultaneously parse documentation, navigate traceability, apply regulatory knowledge (FDA, ISO 14971, IEC 62304), and document justifications while maintaining their mental map of connections.
Time-completeness paradox: 30 minutes of manual work achieved 14% completeness. Reaching 70% would require hours or days, time teams simply don’t have when rapid assessments are expected.
Unknown unknowns: Engineers didn't realize what they missed. They'd complete sections, believe they caught key impacts, and move on, unaware of affected requirements deeper in the traceability matrix.
Here’s how AI improved performance:
Systematic discovery
AI traverses traceability relationships that humans skip under time pressure. In one case, a participant manually identified 2 impacted requirements, then ran out of time. With AI, they found all 15. The difference was coverage.
Rapid evidence synthesis
The AI agent instantly analyzes relationships and surfaces relevant excerpts. One participant said, “I could have completed 20 AI ones compared to the manual.” The time savings didn’t come from typing faster, it came from eliminating repetitive discovery work.
Regulatory knowledge application
Participants achieved 100% accuracy on critical determinations with purpose-built compliance AI. Unlike generic tools, it consistently applies frameworks like U.S. Food and Drug Administration 510(k) guidance, ISO 14971, and IEC 62304.
Reduced friction
“Way less frustrating.”
“Without AI I would hate this job.”
This shift matters for retention and competitive advantage. When compliance becomes a capability instead of a burden, organizations are stronger.
What High Performers Did Differently
AI amplified judgment; it didn’t replace it. High performers:
- Critically evaluated AI outputs (no blind acceptance)
- Cross-referenced results with source documentation
- Prompted for specific, structured analysis
- Integrated outputs into disciplined workflows
The result: better decisions, faster.
Why Not Just Use ChatGPT?
No traceability understanding
Generic LLMs don’t inherently understand that SRS-42 traces to SYS-15, which traces to UN-07, verified by TC-88. You still manually assemble context, the very work you’re trying to eliminate.
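The traceability chain above is, in essence, a directed graph that the assessment has to walk exhaustively. A minimal sketch, using the hypothetical artifact IDs from the text and an assumed link structure (a real platform would query maintained relationships rather than a hand-built dictionary):

```python
from collections import deque

# Hypothetical traceability links, modeled on the chain named above:
# SRS-42 traces up to SYS-15, which traces up to UN-07; TC-88 verifies SRS-42.
# With a generic LLM, assembling this context is exactly the manual work
# you are trying to eliminate.
trace_links = {
    "SRS-42": ["SYS-15", "TC-88"],  # parent requirement + verifying test case
    "SYS-15": ["UN-07"],            # system requirement -> user need
    "UN-07": [],
    "TC-88": [],
}

def impact_set(changed_artifact, links):
    """Breadth-first walk over traceability links, collecting every
    artifact reachable from the changed one."""
    seen = {changed_artifact}
    queue = deque([changed_artifact])
    while queue:
        for neighbor in links.get(queue.popleft(), []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {changed_artifact}

print(sorted(impact_set("SRS-42", trace_links)))  # ['SYS-15', 'TC-88', 'UN-07']
```

The traversal itself is trivial; the hard part, and the part a connected platform supplies, is having complete, current links to traverse in the first place.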
Copy-paste overhead
Moving data between Jira, Confluence, TestRail, and spreadsheets adds overhead and friction at every step.
No embedded regulatory framework application
Generic tools won’t systematically apply 510(k) logic or lifecycle standards to your change. You remain responsible for interpretation.
No audit trail
When auditors ask, “How did you determine this didn’t require revalidation?” you can’t answer, “The chatbot said so.”
Teams achieving 95% completeness weren’t using AI in isolation. They were using AI inside systems designed for regulated development, enabling comprehensive assessments in 15 minutes.
What This Means
The capability gap is widening.
Teams pairing human judgment with AI-driven traceability traversal move faster and spend expertise on decisions, not manual effort.
Comprehensive change impact assessment (CIA) is becoming achievable in practice. Even average gains, doubling completeness and accuracy while cutting time ~40%, materially improve risk management.
The domains where AI helped most: traceability traversal, technical synthesis, and regulatory interpretation.
And talent matters. When engineers describe AI as turning CIA from “I hate this job” into a “superpower,” organizations that listen gain an edge.
Teams building AI partnership practices now will compound gains with every model improvement. Teams operating manually will fall further behind.
How Ketryx Powers This
What enabled 95% completeness and 100% regulatory accuracy?
Connected traceability
Requirements, specifications, test cases, and risk analyses live in one system with maintained relationships. AI traverses the entire graph instantly.
Purpose-built regulatory AI
Frameworks like 510(k), ISO 14971, and IEC 62304 are applied automatically and consistently.
Integrated workflows
Assessments occur where design control documentation lives—no copy-pasting, no reformatting.
Audit-ready documentation
Every assessment produces traceable records of what was analyzed, what was determined, and why.
The difference between generic AI and purpose-built platforms isn’t incremental. It’s the difference between 15% and 95% completeness.
See It in Action
Change impact assessment doesn’t have to bottleneck development.
If you’re facing manual CIA overload, audit findings on change control, or pressure to move faster without sacrificing compliance, we’ll show you what’s possible.
9 participants, 18 assessments (9 manual, 9 AI-assisted), two realistic medical device scenarios. Full methodology available upon request.
Ketryx: AI-native compliance platform for medical device development. Connected eQMS, requirements management, risk management, and design control capabilities.



