The 2022 DVCon (Design and Verification Conference) Europe was back in physical form at its usual venue, the Holiday Inn München. It was a great conference, and just like at the 2022 DAC, people were very happy to be back in person.
The Conference in General
As has been the pattern since 2016, there were two days of DVCon, followed by the SystemC Evolution Day (SCED). Attendance was good, with 378 registered participants in total from 115 different organizations and 36 countries! DVCon is an industrial conference, sponsored by Accellera, and the participation was skewed towards people from industry. That said, there were also academics present. For example, Fraunhofer had a tutorial as well as a booth in the exhibition. The exhibition was a bit smaller than in previous years, but the quality was high and the booth traffic felt livelier than at this year’s DAC.
The gingerbread cookies were back!
Lots of action on the show floor:
At the exhibition, we had Tamas from Jade. I met him at DAC, where his booth was missing its nice background because it got lost on a flight. At DVCon, the background was back! Apparently, it had taken eight weeks for it to be found, somewhere in the mountain of lost luggage at Frankfurt Airport. This time, Tamas hand-carried it to the show by train. No more taking chances.
Trends and Topics
It is always hard to summarize an event like this. Content-wise, there were two keynote talks, two panels, 30+ paper presentations, 15 tutorials, five posters, the exhibitors, and discussions with attendees. I also include observations from the SystemC Evolution day.
RTL verification continues to be the biggest topic at DVCon, which is to be expected given the name and focus of the conference.
This year there was a pick-up in papers and especially tutorials related to (fast) virtual platforms. Parallel simulation and increasing performance were common themes.
More papers seem to be using emulators than I have seen before. There is a sense that simulators are too slow for time-critical work, and that mixing emulators and simulators is the right way forward, whereas many users appear to have relied mostly on simulators in the past. Papers claim everything from 10x to 50x speedups from running in emulators – which sounds like less than you could hope for, I think.
Software is getting attention. Not as “software that runs on the deployed hardware”, but rather as part of test cases. Several papers dealt with software tracing and debug – when running software on RTL in emulators. Considering software as part of the test bench.
Static analysis and formal methods continue to get progressively more common. More people and companies are doing it, but they still seem a bit avant-garde compared to the mainstream “run RTL in simulation with a UVM testbench”.
Visual Studio Code is starting to make inroads into EDA, showing up as a supported GUI/editor alongside Eclipse for tools like those from AMIQ and Sigasi.
Simulation of complex systems of systems was discussed in both keynotes and a panel. There is a clear need to run simulations comprising both the classic hardware and software of a computer and models of the physical world. The compute systems used in real products are often complex and consist of multiple separate units (think cars and ECUs) or boards (think telecom racks). They are connected to complex environments, and you do want to simulate the environment as part of the stimuli to the virtual platform. I dub this kind of heterogeneous simulator integration federated simulation.
Over the past few years, several companies that used to be pretty closed have apparently started to put various interface libraries into the open – for example, the Vector SIL Kit and Synopsys OCX. This makes it easier for users of the commercial tools to get things integrated – and the vendors still most likely get to sell the tools, since it is unlikely that someone else will show up with a complete replacement just because the interface is open.
RISC-V continues to be the processor architecture of choice for people dabbling in virtual platforms. Even we at Intel mentioned it, in the context of the PSG FPGA Nios-V. For RISC-V, virtual platform providers continue to emphasize the ability to add custom instructions. This is present in the MachineWare SIM-V simulator, as well as in the RISC-V simulator from MinRes and the Imperas tools.
There was a somewhat surprisingly low presence of machine learning and AI.
Virtual Platform Notes
Since I do virtual platforms for a living, I put some particular emphasis on this.
MachineWare bought the GreenSocs domain from Mark Burton! www.greensocs.com now redirects to the Qemu section of the MachineWare website. They also had a paper, which I report more on below.
It is very crowded in RISC-V VP land. Imperas and MachineWare are both selling simulators that use JIT technology to run at speeds similar to or faster than Qemu. MinRes has their own instruction-set simulator too. In all cases, SystemC is seen as the main way to get system models built.
Magillem was acquired by Arteris, and the Arteris tooling now includes the old Magillem tools. They are still selling stand-alone register-handling tools, even though it is currently kind of hard to find them on the Arteris website.
Qualcomm, AMD/Xilinx, and companies in automotive are all using Qemu for ARM as the basis for ARM simulations (instead of using some other virtual platform framework, commercial, open-source, or in-house). Some attempt to run Qemu on ARM hosts using ARM virtualization via KVM. Note that Qemu currently has terrible timing accuracy when used in VPs. You cannot control the advance of time. You cannot make the simulator reflect that different cores in a simulation are supposed to run at different speeds. You cannot synchronize time between cores in a controlled way. There might be work underway to change this, but it is not really mainstream in Qemu.
It is common to combine Qemu with SystemC in order to build system simulations reusing the instruction-set simulation technology from Qemu. Qemu is not considered a great framework per se, but it is what exists for many people. Qualcomm kind of hired Mark Burton to work on that.
Our Presentations
I was part of two presentations at this year’s DVCon Europe: the paper “Challenges and Solutions for Creating Virtual Platforms of FPGA and SASIC Designs”, by Kalen Brunham and Jakob Engblom, and the tutorial “Verification of Virtual Platform Models”, by Jakob Engblom and Ola Dahl (Ericsson). See the separate blog post for more about the discussions and insights from the tutorial.
Keynote: Magnus Östberg – Developing the Chip-to-Cloud Architecture for the Most Desirable Cars
Magnus Östberg is Chief Software Officer at Mercedes-Benz. Despite the title, the job is also about hardware. It was a very slick presentation. He only talked for 25 minutes of the scheduled time, and then moved to Q&A! It worked out as a conversation with the audience.
It was very interesting to see how Mercedes is looking at the digital side of cars. Their goal is to make MB as beautiful in the digital world as they are in the physical world, constantly renewing the digital experience like we do in the “telecom” world. A key concept that Magnus came back to was “digital luxury” – more screens, bigger screens, a better total experience. For example, make navigation 3D – beautiful and immersive, with inspiration from computer games. It is not all just crazy flashy stuff – you do need to think about the cognitive load on the driver.
Another aspect is building the car to “give back time” – assist the user to lower the load of the boring parts of driving, i.e., some autonomy. There are also regional variations in what people expect. For example, in China, customers demand a virtual assistant and see it as a natural part of the car, while a European might scoff at it as a silly gimmick.
MB is driving hard to become a leader in car software. Part of that is defining their own operating system, MB.OS. The goal is to define stable interfaces that make it easier to evolve software by separating it from the hardware. Separating hardware and software is a new thing in automotive, and it changes how they interact with suppliers and partners.
MB itself is to work on the user experience and the core API of the software stack. Other things can be outsourced and bought from suppliers. There might be a slight difference here from other automotive companies that talk more about bringing things in-house to a greater extent. MB wants to focus on the core architecture and differentiating features.
Compute hardware is moving to a few centralized and very powerful nodes instead of a large network of individually weak ECUs. Many smaller units that used to have local processors are becoming just sensors and actuators connected to central compute. This also changes the relationship with suppliers, since they no longer just deliver a physical separate unit – but rather functionality that will run somewhere in the vehicle.
Getting to validation, a change is needed. In the past, automotive did far too much validation at the complete-vehicle level – in large part due to contracts and the development setup. Contractors were required to provide subsystems that would only be integrated at the OEM level. In the new world: test software and hardware at the unit and subsystem levels. The goal is to perform step-wise integration, where each level defines a device-under-test (DUT).
Virtual prototypes and platforms are definitely on the radar (as has been noted in other DVCon keynotes from automotive in recent years). It is necessary to start developing and understanding hardware before the final hardware is delivered. Virtual platforms and simulation can be used to understand the implications of changes being made and to verify backwards compatibility. Models of existing vehicles can be used to test new software, for example.
Getting models in place is a supply chain challenge. MB has started to require some form of VP model from their hardware suppliers. The exact level of abstraction and the interfaces are a work in progress; the industry has yet to converge on a common de-facto or formal standard. Right now, the impression is that suppliers are surprised to get the question!
Magnus also talked about the “Electric Software Hub” that MB has set up in Sindelfingen – it is a central location where all development comes together. Developers around the world, with remote access to the centralized compute. Lots of simulation facilities. HW-in-the-loop, SW-in-the-loop. Some full-system testing, for example they have 250 different electric chargers in place for testing!
Keynote: Axel Jahnke – Challenges in SoC Verification for 5G and Beyond
Axel is at Nokia, working with teams in Finland from his base in München. A few years ago, he worked at Intel iCDG on modem development. Axel presented a keynote that was more application-focused than Magnus Östberg’s. He works with system-on-chip design and verification at Nokia, and the challenges inherent in that.
The challenges are indeed many and varied.
Systems have long lives – about fifteen years or so. This poses a challenge just from turnover. Axel indicated that you might see 80% of the team that designed an SoC turn over in that time frame, posing a question for how to retain information and institutional memory.
5G traffic requires relatively long simulation times for verification, on the order of tens of milliseconds, which is an obvious latency and resource challenge – especially if not using emulators.
Telecom systems work in a complex and varying real world. How can this be captured in verification? For example, how can the effect of rain on signals be introduced into system verification and validation? A simple fixed-input test bench cannot do that, other methods are needed. Axel saw three types of tests:
- Randomized testing to capture issues, as is standard practice
- Real-world scenarios to get at what happens in the real world
- Corner-case/worst-case constructed scenarios to get to the edges of the system behavior
Software is a challenge. Hardware features for next-gen systems depend on the software architecture, but it can be hard to get the software early enough. Often, software teams are fully busy with previous-generation hardware. The organization has to change to really shift the software effort left. Axel brought up an example from ten years ago (at Intel): they had a virtual platform ready 24 months ahead of TI, but the software team did not have time to work on it until 18 months later! In the end, this means that the SoC team might have to develop their own software just to have something to run.
The systems used in 5G infrastructure are very large. An SoC is just the beginning – there are multiple types of boards, and many boards in a typical real setup. It is not possible to simulate everything at once. Instead, you have to rely on stubs and subsystems. For example, running SoC RTL with models of the real world written in Python or C. Encapsulation of subsystems is a key technique to make it feasible!
In the end, it all has to come together as a set of test boards. This is late, but there is no real alternative to ensuring that the system works.
An interesting topic that Axel mentioned is the status and teaching of verification. You get the sense that many people consider verification and testing to be “less creative” than development. Developing new things is seen as more prestigious than making sure things really work. In reality, 50% of the job in SoC design is verification and validation! Despite this, very few universities teach hardware verification even though they do teach hardware design.
This goes to a pet peeve of mine, which is that testing really has to be considered creative. Finding the scenarios where a system goes down requires a lot of creativity – just of a slightly different type. I wrote a blog post about this at Intel a few years ago that I hope explains the kind of crazy fun that testing can be.
Another aspect of this is that the languages and tools used for design vs verification have diverged and specialized. This is necessary for productivity, but it also means that people will specialize, be grouped into different teams, and in the end have a harder time communicating with each other.
Panel: 5G Design Challenges and Verification
Featuring: Gabriele Pulini, Anil Deshpande, Ashish Darbari, Axel Jahnke, Oren Katzir, Herbert Tauscher
This panel was in some ways a follow-on from Axel’s keynote. Some notes made:
All agree there is a lot of complexity in 5G. It is less about repeating the same block over and over; there are many different things to do. Simplicity is not exactly a thing.
There are differences in the chips used in terminals (phones) and the infrastructure. It sounded like infrastructure chips tend to be more complex and varied inside.
One missing piece is early performance validation – making sure that the system as a whole, together with external chips and buses, has sufficient performance. It is nasty to realize late that you need a bit more processor power or bandwidth.
There was a question on whether open-source software stacks could somehow make software appear earlier. In practice, most companies go for closed-source. The low-level software most related to the chip design and important for shift-left is not all that amenable to open-source anyway as it is very custom. Someone guessed that open-source stacks in 5G could reduce the amount of unique software needed to maybe 5%?
Another aspect of software: what would help with software availability would likely be better modularity. Adding more layers of software abstraction to the 5G stacks. There is resistance from software people, as abstractions lose performance and add friction. Probably eventually it will have to happen anyway.
Encapsulation of subsystems is a key technique: run full RTL simulation for one chip, then build a model of the subsystem, and use the model for system-level verification.
There were some interesting disagreements between EDA vendors and users. The vendors’ view: chips have to be designed in a way that enables verification scaling. They claim that “processor” designers have broken down the design problem in a way that allows the tools to scale – while apparently this is not so much the case in 5G. That is, the chip architecture might have to adapt to what can be done, rather than just asking EDA for support for bigger designs in the same old way.
Formal vs simulation: Formal can often be faster than using plain old simulation to explore a large state space. Industry does not necessarily use it everywhere it could be used. But there are also cases where it is hard to see how it would apply. One panelist claims “if you can code it in HDL, you can verify it in formal”. To succeed in formal, you need to have a dedicated team at the company using it. It is expertise, not just tools. Same for verification and simulation with UVM.
Tutorials and Papers
I will only cover a few papers and presentations. I did not have time to attend more than a few.
Tutorial: What’s new in IP-XACT 1685-2022
Presented by Erwin de Kock (NXP Semiconductors), Jean-Michel Fernandez (Arteris IP), Devender Khari (Agnisys)
A new version of the IP-XACT standard was released in 2022, IEEE 1685-2022. The tutorial covered what is new in the standard, which is quite a lot. IP-XACT is evolving to standardize how more and more hardware interface aspects are described.
IP-XACT history:
- December 2004 – IP-XACT 1.0, SPIRIT consortium
- March 2008 – last SPIRIT version, 1.4
- December 2009 – First IEEE version, 1685-2009
- June 2014 – 1685-2014
- September 2022 – 1685-2022
The new version of the standard removed the conditional handling that was introduced in the 2014 version. It brought a lot of complexity, and apparently no tool fully implemented it. The committee thus made the decision to remove it. Kudos for observing what happened and adjusting!
The standard interface for calling into IP-XACT tools, TGI, has been updated to feature a modern REST-style API in addition to the older SOAP-based API.
IP-XACT has also added a level of indirection where a register or set of registers can be described as a “type” and then applied to multiple memory spaces. This makes it easier to reuse the same definition in multiple places.
More information can also be encoded for registers. IP-XACT now supports modes, to describe mode-dependent access rules (security, user, supervisor, backdoor vs bus, …). SystemVerilog expressions are used to describe the conditions under which each mode applies. There is also support for power domains and for generating UPF power annotations.
IP-XACT can describe that two fields are aliases of each other – say that one field in one register is an alias of a field in another register, but they can have different access rules (for example). A related concept is broadcast, which specifies that writes to a field in one register propagate to a separate field in another register.
Vendor extensions are still very common in IP-XACT usage in practice. This caused some discussion in the room. In practice, the standard only guarantees that if you stay within the standard, all tools will work.
Good question from the audience: is there an open-source library for handling IP-XACT? And if not, why not? The answer: companies need to make money. The presenters agreed it would be nice to have an open TGI reference implementation, for example.
Paper: Programmable Analysis of RISC-V Processor Simulations using WAL
The paper is by Lucas Klemmer and Daniel Grosse from the Johannes Kepler University Linz, together with Eyck Jentzsch from MinRes Technologies.
WAL is a language for analyzing waveforms. The language is embedded in Python but basically looks like LISP. Still, it can use existing Python packages for special functionality. The language is open-source, hosted at https://github.com/ics-jku/wal. WAL has been presented before; this paper was about a particular application to a RISC-V setup built using MinRes simulators and RISC-V cores.
The point of the language is being able to write analyses over captured waveforms in the form of programs. Using a domain-specific language makes sense, since the domain has some unique properties that are easy to support in a custom language but do not map well onto a general-purpose language. In particular, it includes time as a first-class citizen. It is also very easy to use names of signals in expressions, including names that are not valid identifiers in Python.
Examples presented in this paper included using WAL to determine the average execution time of instructions from waveforms, computing average instructions-per-cycle (IPC), and the proportion of pipeline stalls suffered by a program. Such analysis requires managing some state and non-trivial logic as it has to understand how to interpret the signals from the waveform file.
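To make the kind of analysis concrete, here is a minimal plain-Python sketch of how IPC and the stall ratio could be computed from per-cycle signal samples. This is not WAL code: the cycles iterator and the signal names are assumptions purely for illustration.

```python
# Plain-Python sketch (not WAL) of the kind of waveform analysis described
# above. Assumes a hypothetical iterable "cycles" of per-clock-edge samples,
# e.g. parsed from a VCD file; the signal names are invented for illustration.

def analyze(cycles):
    total = retired = stalled = 0
    for signals in cycles:
        total += 1
        retired += signals["instr_retired"]  # instructions retired this cycle
        stalled += signals["stall"]          # 1 if the pipeline stalled
    ipc = retired / total if total else 0.0
    stall_ratio = stalled / total if total else 0.0
    return ipc, stall_ratio

# Tiny synthetic trace as a usage example:
trace = [{"instr_retired": 1, "stall": 0},
         {"instr_retired": 0, "stall": 1},
         {"instr_retired": 1, "stall": 0}]
print(analyze(trace))  # -> (0.666..., 0.333...)
```

The appeal of WAL is that this kind of bookkeeping over time is expressed directly in the language instead of being hand-rolled like above.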
Questions from the audience: the WAL tooling supports a few open-source waveform formats. It is currently designed for offline use, but could conceivably be turned into something that does live analysis as well.
Paper: SIM-V Parallel Simulation of Virtual Platform
Lukas Jünger from MachineWare presented a paper on their SIM-V simulator for the RISC-V instruction set. Basically, it is about improving the performance of their simulator by using parallelization. There were also some small details on their virtual platform environment.
They run their instruction-set simulators (ISS) inside their SystemC-based VCML environment. VCML allows for ISSes to run in parallel to each other, similar in principle to what Synopsys does in Virtualizer. I would claim this special-cases ISS models, which makes perfect sense.
The performance was compared to Qemu. According to Lukas, Qemu has a “timed” mode (ICOUNT) that can only run serially. To run in parallel, you need to use an “untimed” mode where it uses host time instead of simulation time (MTTCG). Compared to this, SIM-V (and any other simulator built to be a virtual platform from the ground up, unlike Qemu) always provides virtual time and can run in parallel.
The MachineWare JIT generator is called “FTL”. Interestingly, like Imperas, they allow a user to write their own instructions using the FTL API that can then be dynamically linked into the existing ISS.
FTL is designed to allow models to be used in multiple environments. It can run in parallel in MachineWare’s own VCML SystemC-based setup, as well as in a Synopsys environment via the OpenCpuX (OCX) interface (not sure if any other simulators support this interface). They also claim that they can run in general “SystemC TLM 2.0 parallel” setups, as well as in standard serial SystemC.
MachineWare also has a few other features. They call their simulator execution GUI “ViPER”, Virtual Platform Explorer. With the usual basic features of inspecting the SystemC system hierarchy, executing target code, etc. They have a Python system to control a VP, called PyVP (what else). PyVP is a bit unusual in that it runs Python in a separate process and connects to the simulator over a socket, so it is really just a remote control. Other integrations of Python with SystemC have been done within the same program, to allow Python to be used for setup tasks and more tightly integrated with the underlying simulator. The VCML modeling kit contains basic generic models for common hardware, as well as some SystemC implementations for interfaces like I2C and CAN.
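As a sketch of what such a socket-based remote control can look like in general: the command strings and the port number below are invented and do not reflect the actual PyVP protocol.

```python
# Hypothetical remote-control client: Python runs in its own process and
# sends newline-terminated text commands to a simulator listening on a TCP
# socket. Command names and the port are assumptions for illustration only.
import socket

def send_command(cmd, host="localhost", port=4444):
    with socket.create_connection((host, port)) as sock:
        sock.sendall((cmd + "\n").encode())
        return sock.recv(4096).decode().strip()

if __name__ == "__main__":
    print(send_command("run 1ms"))                   # advance virtual time
    print(send_command("read system.uart0.status"))  # inspect a model register
```

The design choice is simple decoupling: the Python process can crash, restart, or run a different Python version without touching the simulator, at the cost of not being able to reach into the SystemC model directly.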
Performance claims for Dhrystone: single-threaded about 1500 MIPS, 4-way parallel up to 5500 MIPS. This is done with a SystemC machine model in the background, which ought to slow it down a bit compared to a Qemu machine model. It is unclear from the paper whether this was run on top of an operating system or bare-metal.
Note that MIPS measurements are notoriously bad at providing a proper picture of the performance of a simulator. For example, a non-optimized binary with many simple instructions usually provides a higher MIPS rating than an optimized binary, even if the optimized binary gets the work done faster. That is why I prefer to use slowdown as the measurement of a simulator – it relates to the work being done. But slowdown is very hard to use for comparing between simulators, which is the main goal here.
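A toy calculation illustrates the point (all numbers are invented):

```python
# Why MIPS can mislead: the unoptimized binary executes more (simpler)
# instructions and therefore scores a higher MIPS number, even though the
# optimized binary finishes the same workload in half the host time.
runs = {
    #              (target instructions, host seconds for the same workload)
    "unoptimized": (2_000_000_000, 10.0),
    "optimized":   (  800_000_000,  5.0),
}
for name, (instructions, host_seconds) in runs.items():
    mips = instructions / host_seconds / 1e6
    print(f"{name}: {mips:.0f} MIPS, {host_seconds:.0f} s of host time")
# unoptimized: 200 MIPS, 10 s of host time
# optimized:   160 MIPS,  5 s of host time
```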
Question from the audience: why is it faster than Qemu? “Cannot disclose everything.” First of all, different JIT engines: Qemu uses TCG, MachineWare uses FTL. They might have found some more clever optimizations than Qemu, as they put it. They have also modeled other architectures, but nothing he can talk about. He argues it is not less generic than Qemu.
Paper: Unified Firmware Debug throughout SoC Development Lifecycle
Presented by Jurica Kundrata from the University of Zagreb. Written with Dimitri Ciaglia and Thomas Winkler from ams-OSRAM.
I found this paper a good representative for software-with-RTL. They want to debug firmware running on a processor in a microcontroller, and the way they do it is not to use an instruction-set simulator but instead to use the processor core as RTL. They run the processor core and the software as part of an RTL-level SystemVerilog testbench.
They connect a debugger to the RTL just like it would connect to a real-world chip. This means actually having the real processor debug support functionality in place in the RTL, and doing debug using this functionality. No back-doors or simulation tricks, just making the processor work exactly like it does in hardware.
In simulation, this is achieved by building a custom DPI-based bridge in SystemVerilog that connects the RTL circuitry over TCP/IP to an external OpenOCD debug agent. The OpenOCD agent drives the design over the ARM SWD interface and in turn serves ARM’s standard software debugger – you can debug with either ARM DS or gdb. Basically, it looks like a hardware JTAG probe to the debugger. Somewhat interestingly, they combine this with a Python test bench that fakes I2C traffic into the RTL to provide stimuli for the microcontroller.
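As a rough sketch of the Python-stimulus idea: the bridge class and its interface below are entirely made up and are not what the paper uses.

```python
# Hypothetical Python stimulus for the simulated microcontroller: the test
# bench issues I2C register accesses through some bridge into the RTL.
# I2CBridge is a stand-in for whatever DPI/socket mechanism carries the
# traffic; here it just logs the transactions.
class I2CBridge:
    def write(self, dev, reg, value):
        print(f"I2C write: dev=0x{dev:02x} reg=0x{reg:02x} val=0x{value:02x}")

    def read(self, dev, reg):
        print(f"I2C read:  dev=0x{dev:02x} reg=0x{reg:02x}")
        return 0x00  # placeholder response

def stimulus(i2c):
    i2c.write(0x50, 0x01, 0xA5)    # configure the device under test
    status = i2c.read(0x50, 0x02)  # poll a status register
    assert status == 0x00

stimulus(I2CBridge())
```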
Once RTL runs on an FPGA prototype, they switch to using a hardware probe for debug. The advantage is that this looks exactly the same as the simulated variant, only faster.
The OpenOCD integration uses the “JTAG DPI” code from the Google OpenTitan project. The code drives the RTL truly at the pin level, just like you would with hardware, with OpenOCD bit-banging the communication to the target. There were lots of details on how to make this happen.
Overall, the key point seems to be to get hardware debug in place before going to FPGA.
The RTL was run in Cadence Xcelium.
Not a Paper but Interesting: MinRes – RISC-V Processor Vendor!
I talked to Eyck Jentzsch from MinRes. In addition to their existing business as a consulting firm in the design, verification, and virtual platform space, they have branched out into RISC-V IP. They have their own RISC-V core, called “The Good Core” (TGC), part of “The Good Folk Series” of IP. The core can be configured from 6000 to 30000 gates, and is a simple in-order, three-to-five-stage pipeline.
The design has been taped out and manufactured on GlobalFoundries 22nm in Dresden. They make a point of everything being European – no Asian or US company or owner involved. Right now, that is a good argument.
MinRes lets users of the core add new instructions using a C-style language; the instructions then get added to the core and to the corresponding virtual platform. Bosch has done a research project with them where they added six custom instructions in just a few hours, which also produced very significant execution-time improvements. The VP was used to estimate the performance, and its estimates were good enough to let users experiment with the impact of instruction-set changes.
Verifyter is now Cadence PinDown
In 2018, I saw a presentation at DVCon Europe about the very cool and interesting Verifyter tool. This year, I found the founder Daniel Hansson in the Cadence booth – their tool is now part of the Cadence verification continuum, as the PinDown tool.
Poster Session
I was Poster Chair for DVCon Europe this year. Since the number of posters was fairly small, we tried something new and had all the poster authors present their ideas in a series of short talks in one of the paper sessions, in addition to the traditional display of posters in the exhibition hall.
SystemC Evolution Day
The SystemC Evolution Day was held the day after DVCon, as is now usual.
Jerome Cornet from ST talked about the progress on releasing the next version of the SystemC standard. It is a bit late, but should come out in 2023 or maybe 2024. Not all details are finalized just yet, as the standard update will collect ten years of updates and contributions. Still, most of the contents seem to be clear. Jerome shared a few highlights:
- The baseline C++ version will be moved up to C++17. This allows for better and more compact code in many places in the standard. It also makes it easier for programmers to use modern-ish C++ in their models.
- Backwards compatibility is important. For example, an attempt to move all strings from const char * to C++ string types turned out to cause a lot of issues. Having existing code easily updated is an important aspect.
- Adding additional simulation-stage callbacks to let code observe (not modify) the simulation state at more stages in the simulation setup and execution. The callbacks would trigger at points like “SC_PRE_TIMESTEP”, “SC_POST_UPDATE”, and the very logical but also somewhat amusing “SC_POST_BEFORE_END_OF_ELABORATION”.
- Better handling of SystemC kernel event starvation when waiting for asynchronous inputs from other programs running outside of the SystemC kernel (i.e., for simulator integration scenarios).
François-Frédéric Ozog from Shokubai talked about how Qemu for ARM is being used in the automotive world to quickly run target software stacks, in particular those involving hypervisors. It was mostly about how KVM on ARM works, and how it can (with some difficulty) be made to work on Apple M2-based Macs using the MacOS hypervisor interface. Qemu itself is not yet at the point where it makes sense as a general virtual platform framework; the code is not very modular. Typically, Qemu is used for some processor models, and then VP models are run in a bolted-on SystemC simulator (several solutions exist for that).
The driving force comes from the work in the SOAFEE working group on “hypervisor portability” in the automotive world. But SOAFEE itself has nothing to do with Qemu; it deals with target software.
I led a panel on system simulation and SystemC, featuring François-Frédéric Ozog, Manfred Thanner (NXP), Mark Burton (Qualcomm), and Bart Vanthournout (Synopsys). We talked about how to fit SystemC into a world where we have many different simulators combining to form a true system simulation. The panel made some key points:
- SystemC is not the answer to simulator integration – this is something different. Not clear who should standardize it, but something seems to be needed. Today, each domain has their own favorite solution. From FMI/FMU to Vector’s recently open-sourced SIL kit, to the work done at ESA, to the Fraunhofer FERAL toolkit. And many in-house solutions.
- The lifecycle of models gets longer than is typical for virtual platforms. A VP is often mostly used as a pre-silicon vehicle, for a few years until the hardware ships. However, if that VP gets integrated into a long-lived digital twin of a real product, you can expect lifetimes of ten to twenty years. Someone has to maintain the models for that long. And as Axel noted in his keynote, expect turnover to be very high during that time, leading to organizational challenges.
- The supply chain gets more complex. Models will come from multiple vendors, and each model might be built up through a supply chain mimicking the real hardware supply chain. Magnus’ keynote definitely brought this up, where MB is asking for models for all components. That means long indirect relationships between models and users.
- Someone (a single party) has to support the integration. It does not work if all parties in a federated integration setup come together to hash out what is wrong. It is necessary to have a single person or group that takes first-line responsibility for delivering and supporting the combination. They will then bring in the suppliers as needed, but someone has to be the federal government of the federation – someone who gets paid to explicitly take care of this.
Following the panel, there was some discussion about dropping “SystemC” from the name of the event and making it more about system simulation in general (especially with an eye on federated simulation). But the audience made it clear that a forum for SystemC is still needed, and there will be a SystemC Evolution Day in 2023 too, right after DVCon.