The Role of Simulation Technology in Modern Embedded System Design


Simulation techniques for modern embedded system development

To address the growing complexity of embedded designs, engineers must treat simulation not merely as an auxiliary instrument but as an integral part of the design and decision-making process. Simulation is a fully fledged decision-making environment in which the engineer defines the system architecture, verifies its behavior, and evaluates its compliance with design requirements, all before committing any physical resources. In systems with real-time constraints under 10 ms, such as automotive ECUs or industrial controllers, the synchronization of components must often meet sub-millisecond timing precision. This becomes particularly critical in distributed systems, real-time applications, and high-reliability domains, where observing temporal dynamics and the synchronization between system components is key to ensuring correct overall operation.

The article presents a systematic overview of the types of simulation used in embedded systems, their classification, supporting tools, and their relationship to engineering process models. Particular attention is given to technological limitations and practical criteria for selecting an appropriate simulation strategy depending on the stage of the product life cycle.

Your first step: an introduction to simulating embedded systems

Simulation in embedded systems engineering is the process of creating mathematical or computational models of a system to predict how it will behave. In this domain, simulation involves modeling both hardware components and software layers: microcontrollers and digital interfaces, such as those found in STM32 devices, as well as control algorithms and real-time operating systems. The goal is to analyze, verify, and validate these elements before physical implementation. The core purpose of simulation is not merely to replicate functionality, but to examine interactions between components and to analyze response times, resource utilization, and system fault tolerance. This enables comprehensive testing even before the prototyping phase, significantly reducing development costs and accelerating time-to-market.

In the design process of embedded systems, simulation integrates into various engineering methodologies. In the classical waterfall approach and its extension—the V-model—simulation is primarily applied during the verification and integration phases. It is used to ensure that the system design meets both functional and non-functional requirements. This involves unit testing, interface validation, and the simulation of edge cases.

In Model-Based Design (MBD) environments, simulation becomes a core tool. The development process begins with the creation of a functional model (e.g., in Simulink), which then serves as the basis for automatic code generation. This code can be immediately tested using Software-in-the-Loop (SiL), Processor-in-the-Loop (PiL), or Hardware-in-the-Loop (HiL) simulations, which enable continuous validation without the need to build physical prototypes. According to MathWorks case studies, using SiL and PiL simulations can reduce overall development time by up to 35% in safety-critical applications such as automotive or medical devices. Similarly, in Agile methodologies—where development is iterative and organized into short sprints—simulation supports rapid prototyping and testing of new features without interrupting ongoing hardware development.

The key benefits of simulation in embedded systems engineering

Despite the wide range of available techniques and tools, the true value of simulation often remains underestimated—particularly in the early stages of project planning. Why is simulation still overlooked at the point where it can deliver the greatest impact? Gaining a proper understanding of simulation can serve as a starting point for more informed architectural decisions, better resource planning, and more effective collaboration across interdisciplinary teams—especially when guided by best practices in simulation-driven design:

  • Reduced development costs:

    • a typical physical prototype for a custom embedded platform can cost between $5,000 and $50,000, depending on complexity. Simulation eliminates the need for multiple such prototypes,

    • allows testing a wide range of scenarios without engaging hardware resources.

  • Shorter design cycles:

    • enables parallel testing and system development,

    • faster design iterations through immediate verification of changes.

  • Early error detection:

    • supports unit and integration testing of embedded software before hardware availability,

    • lower cost of fixing bugs identified at the simulation stage compared to post-deployment fixes.

  • Analysis under edge and fault conditions:

    • enables testing system responses to unexpected events (e.g., sensor failures, bus latency, voltage spikes),

    • allows emulation of conditions that are costly, dangerous, or impractical to reproduce physically.

  • Improved quality and system reliability:

    • thorough validation across a wide range of operational scenarios,

    • supports automated regression testing in CI/CD pipelines.

Hardware, software, and co-simulation — the three pillars of embedded system testing

In embedded systems engineering, various types of simulation are applied depending on the project stage, the purpose of the analysis, and the level of detail required to assess system correctness. The fundamental classification consists of three main categories:

  • Hardware simulation focuses on replicating the physical components of a system, primarily digital elements such as microprocessors, peripheral devices, and programmable logic. During the design phase of integrated circuits or FPGA-based systems, Register Transfer Level (RTL) simulation is used to analyze the behavior of digital logic with clock-cycle precision. This approach allows designers to detect logical errors, timing hazards, or bus conflicts before synthesis. Another widely used method is FPGA prototyping, where the modeled circuit is implemented directly on a physical FPGA device to verify functional behavior using real clock signals and peripherals, thereby closely approximating real-world operating conditions.
  • Software simulation involves modeling the execution of application code in a simulated environment. One common technique is CPU instruction emulation, where binary code compiled for a specific architecture (e.g., ARM Cortex-M, RISC-V) is executed in an emulator such as QEMU or Renode. As João Bittencourt of STMicroelectronics aptly put it: “Every minute spent simulating in QEMU saves an hour in the lab.” This enables debugging, resource usage analysis, and functional validation without the need for physical hardware. In parallel, simulation of device drivers and real-time operating systems (RTOS) is performed to evaluate task behavior, interrupt handling, synchronization mechanisms, and memory management.
  • Hardware-software co-simulation encompasses hybrid approaches such as SiL, where application code runs within a simulated environment; PiL, which involves execution on a physical CPU while interacting with simulated components; and HiL, which allows testing of the actual embedded hardware interfacing with a simulated external environment. As Felix Möller, lead test engineer at Continental AG, noted: “We use HiL to crash a virtual car into a wall 1000 times a day, so we don’t have to do it even once in the real world.” In more complex scenarios, co-simulation is used, integrating multiple simulators (e.g., for hardware and physical environment models) into a single, synchronized simulation framework. Simulation can help validate integration points early and safely, before real-world systems are available or stable enough for live interaction.

If you’d like to explore the intricacies of HIL, we encourage you to check out our dedicated article:

What is Hardware-in-the-Loop (HIL) Testing And Simulation? A Complete Guide for Engineers

Optimizing simulation strategies: using functional simulation and behavioral simulation

Let us start by noting that physical models describe system phenomena in a way that closely mirrors reality—they account for parameters such as voltage-current characteristics of electronic components, thermal effects, mechanical properties, or electromagnetic interference. Where are they most commonly used? Due to their complexity and computational intensity, they are typically used in low-level simulations, such as analog circuit analysis (e.g., SPICE) or modeling of hardware IP blocks.

Functional models focus on representing the logical behavior of the system without tracking its internal physical states. For instance, a processor model might implement an instruction set and track variable values, without modeling actual execution time or signal delays. These models are widely used for algorithm verification, protocol simulation, and testing of high-level software.

Behavioral models occupy a middle ground between functional and physical models—their aim is to reflect the system’s behavior while considering key timing, logical, or state-based parameters. An example would be a driver model that reacts to input events based on predefined logic, with response times either statically defined or dynamically computed.

An essential aspect of model selection is the level of abstraction. Abstract models omit timing and physical details in favor of simplified logic, while cycle-accurate or time-accurate models simulate instruction execution time, signal propagation delays, and bus latency. Choosing between these approaches involves a trade-off between simulation realism and computational performance. Time-accurate models offer higher fidelity but are often impractical in large systems due to long simulation runtimes. On the other hand, abstract models allow faster iterations and early-stage design validation, at the cost of lower prediction accuracy.

In real-world projects, a multi-level modeling approach is often employed, where different subsystems are modeled at different levels of detail—for example, implementing control logic as a behavioral model while modeling hardware interfaces with cycle-accurate fidelity. This strategy enables effective complexity management and allows engineers to tailor simulation scope and resolution to the specific goals of the design phase.

How to choose a simulation tool? Commercial and open-source solutions in practice

In the commercial tools segment, MATLAB/Simulink deserves particular attention, as it is widely regarded as the standard for modeling control systems and signal processing. It enables the integration of physical models (e.g., Simscape), control logic, and automatic code generation via Embedded Coder, which can be directly used in SiL, PiL, and HiL simulations. However, the environment has high computational requirements and substantial licensing costs, which may limit its adoption in smaller teams or low-budget projects.

ModelSim provides precise, time-accurate simulation at the RTL level, allowing detailed verification of digital circuits designed in VHDL or Verilog. It is commonly used in FPGA projects, particularly where propagation delays and timing accuracy are critical. PSpice, on the other hand, is applied in the analysis of analog circuits, as it supports SPICE models that account for non-linearities, transient phenomena, and thermal noise effects. For developers working with ARM microcontrollers, Keil MDK is worth mentioning: it includes an integrated instruction set simulator, a timing profiler, and an RTOS-aware debugger, although it does not offer accurate modeling of hardware peripherals. Also noteworthy is QEMU, a tool originally developed as open source but often used commercially for CPU architecture emulation, including ARM, RISC-V, and x86. It allows binary code execution in test environments and supports integration with external simulation models.

Among open-source tools, Renode stands out by enabling full SoC simulation, including buses, DMA, GPIO, and peripheral devices. It supports deterministic testing and integrates with CI tools, making it an attractive choice for teams implementing automated test pipelines. Verilator offers ultra-fast, cycle-accurate simulation of Verilog code, while maintaining the ability to integrate with unit tests written in C++. For simpler analog or educational projects, TINA-TI may be useful. Despite its limited functionality, it provides a user-friendly interface and basic capabilities for analyzing mixed-signal circuits.

| Tool | Use case | Key features | Limitations |
| --- | --- | --- | --- |
| MATLAB/Simulink | Control, signal modeling | Simscape, code generation, SiL/PiL/HiL | Heavy, expensive |
| ModelSim | Digital (RTL) simulation | Precise timing, FPGA use | Digital only |
| PSpice | Analog circuits | SPICE, non-linearity, noise | Analog only |
| Keil MDK | ARM development | Simulator, profiler, RTOS debug | No hardware peripheral model |
| QEMU | CPU emulation | ARM/RISC-V/x86, test environments | Limited hardware accuracy |
| Renode | SoC simulation (open source) | Full system, CI ready | Niche use (embedded) |
| Verilator | Verilog simulation (open source) | Fast, C++ test support | Verilog only |
| TINA-TI | Basic analog/mixed-signal | Easy UI, simple analysis | Very limited |

Tab. 1 Comparison of embedded system simulation tools

The selection of a simulation tool should be based on factors such as:

  • the target system architecture (e.g., microcontroller, FPGA, SoC),
  • the required level of timing accuracy,
  • the type of analysis (functional, energy-related, timing),
  • integration capabilities with CI/CD pipelines and team workflows,
  • availability of documentation and technical support,
  • compatibility with the production toolchain (compilers, debuggers, RTOS environments).

Boosting development efficiency through system-level simulation and continuous integration

A key role in fully integrating simulation environments into the engineering workflow is played by the concept of Model-Based Design (MBD). In this approach, the entire development cycle is based on a mathematical or graphical model of the system, covering everything from functional specification and algorithm validation to code generation. The model becomes the central “single source of truth” for the design: it can be reused for multiple purposes, such as functional verification, hardware-level testing, control parameter optimization, and compliance validation against system requirements.

However, the integration of simulation does not end with the model itself. Engineering teams are increasingly adopting practices from the software development world, namely Continuous Integration (CI) and Continuous Delivery (CD). In these workflows, every change to the source code or system model triggers automated test processes, with simulation tools (such as Renode, QEMU, or Simulink Test) serving as test backends. They enable remote execution of simulations across multiple test cases, hardware architectures, or system configurations, allowing teams to detect regressions early, test under fault conditions, and ensure consistency across multiple hardware variants — all without deploying a single physical prototype.

Regression testing within simulation environments is an essential quality assurance component in projects that evolve over many development iterations. Automated test campaigns can be triggered with every commit to the repository and may include unit tests, full system-level scenarios, stress testing, and fault-condition testing — scenarios that would be expensive or risky to reproduce on physical hardware. Furthermore, simulation automation ensures consistent coverage of test cases and reproducibility of results, which is a formal requirement in safety-critical certification processes (e.g., ISO 26262, DO-178C). For ISO 26262 ASIL D certification, over 90% code coverage and traceability of test cases to requirements are mandated; simulation automation ensures this level of auditability.

InTechHouse: Complete project support through the full system simulation approach

The selection of appropriate tools and the proper balance between model fidelity and performance requirements are key to the effective use of simulation in practice. The integration of simulation with verification tools, code repositories, and certification systems is becoming increasingly important. It enables improved quality control and supports compliance with industry standards.

InTechHouse, a part of the SoftBlue Group, has been specializing for over 22 years in the comprehensive design of solutions in the fields of embedded systems, custom software, and hardware development. Our team combines solid engineering expertise with a modern approach to product development – from hardware design, through firmware and software, to system integration and final testing.

If you’re looking for a technology partner who truly understands market requirements, InTechHouse is a trusted choice. Choose quality, reliability, and experience and schedule a free consultation today.

FAQ

What is the difference between Software-in-the-Loop (SiL) and Hardware-in-the-Loop (HiL) simulation?
SiL runs application code in a simulated environment without physical hardware. HiL, on the other hand, integrates real hardware with a virtual environment, enabling full system testing in real time.

When should RTL simulation be used, and when is a functional model sufficient?
Use RTL simulation when precise timing is critical, such as in digital circuit design. Functional models are more abstract and suitable for verifying algorithmic logic without modeling hardware-level delays.

Are open-source tools like Renode or Verilator sufficient for commercial projects?
Yes, many open-source tools meet high quality standards. Renode supports CI/CD testing pipelines, and Verilator offers fast RTL simulation. In practice, they are often combined with commercial tools.

How can simulation be integrated into a CI/CD process?
Tools like Jenkins, GitHub Actions, or GitLab CI can automatically trigger simulation tests (e.g., in Renode or QEMU) with every code change or pull request.