Perspectives

Minimizing Risk Through Model-Based Engineering: A Regulatory Perspective

Paul Jones  •  March 9, 2026


Here’s what two decades at the FDA taught me: risk isn’t reduced by hoping your product works. It’s minimized by proving it works before you build it.

The idea isn’t new. In the 1990s, the US government invested billions in academic research to understand why product failures kept rising. The NSA, NASA, and Department of Transportation funded comprehensive studies to answer a basic question: how do we reduce failures in systems where failure is not an option?

The answer was clear. You make the process more rigorous. You leverage computational power to model systems, validate requirements, and prove, mathematically, what a system can and cannot do. Quite simply, if a product cannot meet its requirements, it is not doing what it is supposed to do. Errors will always exist, but we can keep them to a minimum through mathematical rigor.

That work gave rise to what we now call model-based engineering and, later, static analysis tools. These approaches use formal models and computational tools to translate requirements into something testable. They rely on in-line checking, verifying logic as code is written, before it is ever compiled. Math and modeling are used to assess completeness, consistency, and coverage, producing documentation that auditors and regulators can trust.
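To make "prove what a system can and cannot do" concrete, here is a toy sketch of the idea behind model checking: a hypothetical infusion-pump controller is modeled as a finite state machine, and a safety requirement is checked against every reachable state by exhaustive search. The states, transitions, and requirement are illustrative assumptions, not drawn from any real device.

```python
from collections import deque

# Hypothetical mini-model of an infusion-pump controller as a finite
# state machine. A state is (mode, door_open). The safety requirement
# "never deliver while the door is open" is checked over every state
# reachable from the initial state: a toy version of model checking.
TRANSITIONS = {
    ("idle", False):       [("priming", False), ("idle", True)],
    ("idle", True):        [("idle", False)],
    ("priming", False):    [("delivering", False), ("idle", False)],
    ("delivering", False): [("idle", False), ("idle", True)],  # stop on door open
}

def safe(state):
    mode, door_open = state
    return not (mode == "delivering" and door_open)

def check(initial=("idle", False)):
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not safe(state):
            return state                  # counterexample found
        for nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None                           # requirement holds in all reachable states

print(check())
```

Because the search visits every reachable state, a `None` result is a proof over this model, not a sampled test: exactly the distinction between demonstrating behavior and hoping for it.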

In the mid-2000s, static analysis tools became mainstream in safety-critical domains, identifying issues as mundane as misspellings and as dangerous as flawed logic paths, problems that could otherwise lead to catastrophic failures. Ironically, as regulators today, including the FDA, focus more heavily on cybersecurity than on traditional safety, these same principles are more relevant than ever. Security failures are, at their core, failures of design and verification.
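As a flavor of what those tools do, the sketch below uses Python's standard `ast` module to flag one classic flawed logic path, dead code after a `return`, without ever running the program. The `set_alarm` function and its threshold are hypothetical; real analyzers go far deeper, but the principle is the same: inspect the code itself, not its runtime behavior.

```python
import ast

# Hypothetical snippet to analyze: the log call can never execute,
# yet a compiler or interpreter would accept it without complaint.
SOURCE = """
def set_alarm(pressure):
    if pressure > 300:
        return "ALARM"
        log(pressure)
    return "OK"
"""

def find_dead_code(source: str):
    """Flag statements that directly follow a return or raise."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        for stmt, nxt in zip(body, body[1:]):
            if isinstance(stmt, (ast.Return, ast.Raise)):
                findings.append(f"line {nxt.lineno}: unreachable statement")
    return findings

print(find_dead_code(SOURCE))
```

A one-rule checker like this is trivially narrow, but it shows why such findings need no test cases: the defect is visible in the structure of the code alone.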

So why hasn’t modeling been more widely adopted in medical devices and other regulated industries?

First, most engineers were never taught these techniques. Second, there’s inertia. Modeling is perceived as difficult, academic, or disruptive to existing processes. Large organizations are especially reluctant to change. Startups, by contrast, have little to lose and a competitive advantage to gain. Many have successfully used modeling to shorten timelines while producing demonstrably better products.

If companies could start over in how they manage risk, most would say they would do things differently. Yet many still operate as if the traditional waterfall model, where requirements are fully defined up front before engineering begins, were the gold standard, even as that approach steadily fades from modern software and systems development.

Today, savvy teams and many startups explore technical feasibility first, using rapid prototyping, simulation, and computational experimentation to see what works. Modeling supports this shift. Instead of locking down requirements based on assumptions, teams can test system behavior early, simulate failure modes, and determine what is feasible before committing major time and resources. With this strategy, requirements do not drive engineering; rather, they evolve from tests and evidence about how the system actually behaves.
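As a minimal sketch of testing feasibility before locking down a requirement, the Monte Carlo run below estimates whether a hypothetical dosing design could meet a draft accuracy spec given assumed component tolerances. Every number here is illustrative, not taken from any real device.

```python
import random
import statistics

random.seed(42)  # reproducible run

def simulate_dose(target_ml=1.0, trials=100_000):
    """Sample delivered-dose error under assumed component tolerances."""
    errors = []
    for _ in range(trials):
        motor_gain = random.gauss(1.0, 0.01)    # assumed +/-1% motor tolerance
        sensor_bias = random.gauss(0.0, 0.005)  # assumed 0.005 ml sensor bias
        delivered = target_ml * motor_gain + sensor_bias
        errors.append(abs(delivered - target_ml))
    return errors

errors = simulate_dose()
p99 = statistics.quantiles(errors, n=100)[98]   # ~99th-percentile error
print(f"99th-percentile dose error: {p99:.4f} ml")
```

If the simulated 99th-percentile error already exceeds a draft tolerance, the requirement or the design changes now, on a laptop, rather than after hardware is built: the requirement emerges from evidence instead of driving it.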

We’ve seen this work in practice. A global medical device manufacturer with a broad portfolio of implantable, software-driven products uses computational models to run systems-level simulations, validating interactions and failure modes long before anything reaches patients. Similarly, a market-leading cardiovascular device company redesigned its cardiac defibrillators from the ground up using these techniques, achieving measurable gains in reliability and performance.

Aerospace has relied on this approach for decades, and automotive is increasingly following, recognizing that failures at scale are both predictable and preventable. Medical devices are now just as software-driven and just as safety-critical.

Risk isn’t reduced by moving faster or writing longer submissions. It’s reduced by understanding your system deeply, early, and computationally. For device manufacturers, that means embedding modeling, simulation, and continuous verification directly into development and change processes, not bolting them on at the end. The companies that do this won’t just manage recalls more effectively; they’ll prevent more of them in the first place. While this approach certainly builds trust with regulators, the most important outcome is the assurance of safer, more reliable products reaching patients.


Paul Jones
VP of Regulatory Strategy
Ketryx

Paul is a world-renowned software safety expert who joined Ketryx following 25 years at the Food and Drug Administration (FDA). He helped create the FDA’s approach to safety-critical software and medical devices and founded the FDA’s software engineering lab. While holding committee positions with groups that maintained medical device quality and safety standards such as ISO 13485, IEC 62304, and ISO 14971, he reviewed over 300 devices, carried out numerous inspections, and trained FDA staff on software quality, risk management, and software engineering. Before the FDA, he worked for 20 years as a systems/software engineer for companies including Ford Motor, Electronic Data Systems, Honeywell, and SAIC. He holds a Master of Science degree in Computer Engineering from Loyola University Maryland.