Embedded Open Source Summit 2023

Posted by Marcus Folkesson on Tuesday, June 27, 2023

This year the Embedded Linux Conference was co-located with the Automotive Linux Summit, Embedded IoT Summit, Safety-Critical Software Summit, LF Energy and the Zephyr Summit. The event was held in Prague, Czech Republic this time.

It's the second time I'm at a Linux conference in the Czech Republic, and it's clearly my favorite place for such an event. Not only for the cheap beer, but also for the architecture and the culture.

I've collected notes from some of the talks. Mostly for my own benefit, but here they are:

9 Years in the making, the story of Zephyr [1]

Much has happened since the project started with an announcement at an internal event at Intel in 2014. Two years later it went public and was quickly picked up by the Linux Foundation, and now it's listed by Google as one of the most critical open source projects.

Now, in June 2023, it has made 40 releases and has over a million lines of code. What a trip, huh?

The project has made huge progress, but the road hasn't been straightforward. Many design decisions have been made and changed over time - not only technical decisions but in all areas. For example, Zephyr was originally BSD licensed. The current license, Apache 2.0, was not the first choice; the license was changed upon requests from other vendors. I think it's good that no single company has full dominance over the project.

Even the name was up for discussion before it landed on Zephyr. One fun thing is that Zephyr has completely taken over all search results; it's hard to find anything that is not related to the Zephyr project as it masks out all other hits... oopsie.

Some major transitions and transformations made by the project:

  • The build system, which was initially a bunch of custom-made Makefiles, then became Kbuild and finally CMake.
  • The kernel itself moved from a nano/micro kernel model to a unified kernel.
  • Even the review system has changed, from Gerrit to GitHub.

The change from the dual kernel model to a unified kernel was made in 2016. The motivation was that the older model suffered from a few drawbacks:

  • Non-intuitive nature of the nano/micro kernel split
  • Double context switch affecting performance
  • Duplication of object types for nano and micro
  • System initialization in the idle task

Instead, we ended up with something that:

  • Made the nanokernel 'pre-emptible thread' aware
  • Unified fibers and tasks into one type of thread by dropping the microkernel server
  • Allowed cooperative threads to operate on all types of objects
  • Clarified duplicated object types
  • Created a new, more streamlined API, without any loss of functionality

Many things point to Zephyr having a healthy ecosystem. If we look at the contributions, we can see that member and community contributions are increasing every year while the commits by Intel are decreasing.

It shows us that the project is evolving and becoming more and more of a self-sustaining open ecosystem.

System device trees [2]

As the current usage of device tree does not scale well, especially when working with multi-core AMP SoCs, we have to come up with some alternatives.

One such alternative is the System Device Tree. It's an extension of the DT specification that is developed in the open. To me it sounded uncomfortable at first glance, but the speaker made it clear that the work is done in close cooperation with the DT specification and the Linux device tree maintainers.

The main problem is that there is one instance of everything, available to all CPUs, which is not suitable for AMP architectures where each core could be of a completely different type. The CPU cores are normally instantiated by one CPU node. One thing the System Device Tree contributes is to change that into independent CPU clusters instead.

Also, in a normal setup, many peripherals are attached to the global simple bus and are shared across cores. The new indirect-bus, on the other hand, which is introduced in the System Device Tree, addresses this problem by mapping the bus to a particular CPU cluster, which makes the peripherals visible only to a specific set of cores.

System Device Tree also introduces independent execution domains, of course also mapped to a specific CPU cluster. With this we can specify which peripherals should be accessible from which application.
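
To make this a bit more concrete, here is a rough sketch of what such a description could look like. This is only an illustration based on my understanding of the proposal - the node names, addresses and exact bindings (cpus,cluster, indirect-bus, openamp,domain-v1) are examples and may differ from the actual specification:

```dts
/* Illustrative sketch only - names, addresses and bindings are examples. */
/dts-v1/;

/ {
        /* Each CPU type gets its own cluster instead of one global cpus node. */
        cpus_r5: cpus-r5 {
                compatible = "cpus,cluster";
                #address-cells = <1>;
                #size-cells = <0>;

                cpu@0 {
                        compatible = "arm,cortex-r5";
                        reg = <0>;
                };
        };

        /* An indirect-bus is not mapped into every CPU's address space;
         * only the domains that reference it will see these peripherals. */
        rpu_bus: indirect-bus {
                compatible = "indirect-bus";
                #address-cells = <1>;
                #size-cells = <1>;

                uart1: serial@ff010000 {
                        compatible = "ns16550a";
                        reg = <0xff010000 0x1000>;
                };
        };

        /* Execution domains map a set of CPUs and peripherals to one application. */
        domains {
                rpu_domain {
                        compatible = "openamp,domain-v1";
                        cpus = <&cpus_r5 0x1 0x0>;   /* core 0 of the R5 cluster */
                        access = <&uart1>;           /* this domain may use uart1 */
                };
        };
};
```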

But how does it work? The suggestion is to let a tool, sysbuild, post-process the system device tree into several standard device trees, one for each execution domain.

Manifests: Project sanity in the ever-changing Zephyr world [3]

Mike Szczys talked about manifest files and why you should use them in your project.

But first, what is a manifest file?

It's a file that manages the project hierarchy by specifying all repositories by URL, which branch/tag/hash to use and the local path for checkout. The manifest file also supports some more advanced features, such as:

  • Inheritance
  • Allow/block lists
  • Grouping
  • West support for validation

The Zephyr tree already uses manifest files to manage versions of modules and libraries, and there is no reason not to use the same method in your application. It lets you keep control of which versions of all modules your application requires in a clear way. Besides, as the manifest file is part of your application repository, it also has a commit history, and all changes to the manifest are trackable and hopefully explained in the commit messages.

The inheritance feature in the manifest file is a powerful tool. It lets you import other manifest files and explicitly allow or exclude parts of them. This lets you reduce the size of your project significantly.

West will handle everything for you. It will parse the manifest file, recursively clone all repositories and check them out at a certain commit/tag/branch. It's preferable not to use branches (or even tags) in the manifest files as those may change; use the hash if possible. Generally speaking, this is the preferred way in any such system (Yocto, Buildroot, ...).
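
As a minimal sketch (the revision hash, the modules in the allow-list and the paths are placeholders, not taken from the talk), a west.yml in an application repository could look something like this:

```yaml
# west.yml - illustrative example; revision and allow-list are placeholders
manifest:
  remotes:
    - name: zephyrproject
      url-base: https://github.com/zephyrproject-rtos

  projects:
    - name: zephyr
      remote: zephyrproject
      # Pin to an exact commit hash rather than a branch or tag
      revision: 0123456789abcdef0123456789abcdef01234567
      import:
        # Only pull in the modules this application actually needs
        name-allowlist:
          - cmsis
          - hal_nordic

  self:
    path: application
```

Running west update then fetches and checks out exactly those revisions.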

The biggest benefit that I see is that you keep all dependencies separate from your application and that those dependencies are locked to known versions. Zephyr itself will be treated as a dependency of your application, not the other way around.

It's easy to draw parallels to the Yocto project. My first impression of Yocto was that it's REALLY hard to maintain, pretty much for the same reason we are talking about here - how do I keep track of every layer in a controllable way? The solution for me was to use KAS, which does pretty much exactly the same thing - it lets you describe all layers (read: dependencies) in a manifest file that you can version control.
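
For comparison, a kas project file does roughly the same job for Yocto layers. This is just a sketch from memory - the layers and refs are placeholders and the exact fields may vary between kas versions:

```yaml
# kas.yml - illustrative example; layers and refs are placeholders
header:
  version: 8

machine: qemux86-64
distro: poky
target: core-image-minimal

repos:
  poky:
    url: https://git.yoctoproject.org/poky
    refspec: kirkstone            # prefer a commit hash here as well
    layers:
      meta:
      meta-poky:

  meta-openembedded:
    url: https://git.openembedded.org/meta-openembedded
    refspec: kirkstone
    layers:
      meta-oe:
```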

Zbus [4]

Rodrigo Peixoto, the maintainer and author of the zbus subsystem, had a talk where he gave us an introduction to what it is.

(Rodrigo is a nice guy. If you see him, throw a snowball at him and say hi from me - he will understand).

Zephyr has support for many IPC mechanisms such as LIFOs, FIFOs, stacks, message queues, mailboxes and pipes. All of those work great for one-to-one communication, but that is not always what we need. Even one-to-many can be tricky with the existing mechanisms that Zephyr provides.

zbus is an internal bus used in Zephyr for many-to-many communication; as a bonus, such an infrastructure covers all cases (1:1, 1:N, N:M).

I like this kind of infrastructure. It reminds me of D-Bus (and kbus..) but in a simpler manner (and that is a good thing). It allows you to have an event-driven architecture in your application and a unified way to make threads talk and share data. Testability is also a bullet point for zbus: you can easily swap a real sensor for stubbed code and the rest of the system would not notice.
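
To give a feeling for how this looks in code, here is a small sketch based on the zbus API in recent Zephyr releases. The channel, message type and functions below are made up for the example:

```c
/* Illustrative zbus sketch - channel, message type and values are examples. */
#include <zephyr/kernel.h>
#include <zephyr/zbus/zbus.h>

struct sensor_msg {
	int temperature;
};

/* A subscriber with a notification queue of depth 4. */
ZBUS_SUBSCRIBER_DEFINE(display_sub, 4);

/* A channel carrying sensor_msg, with display_sub as initial observer. */
ZBUS_CHAN_DEFINE(sensor_chan,                 /* channel name */
		 struct sensor_msg,           /* message type */
		 NULL,                        /* optional validator */
		 NULL,                        /* user data */
		 ZBUS_OBSERVERS(display_sub), /* initial observers */
		 ZBUS_MSG_INIT(.temperature = 0));

/* Producer side: publish a new message on the channel. */
void publish_sample(int temperature)
{
	struct sensor_msg msg = { .temperature = temperature };

	zbus_chan_pub(&sensor_chan, &msg, K_MSEC(100));
}

/* Consumer side: block until the channel is published, then read it. */
void display_thread(void)
{
	const struct zbus_channel *chan;
	struct sensor_msg msg;

	while (zbus_sub_wait(&display_sub, &chan, K_FOREVER) == 0) {
		zbus_chan_read(chan, &msg, K_MSEC(100));
		printk("temperature: %d\n", msg.temperature);
	}
}
```

A producer publishes on the channel and every observer gets notified; the subscriber thread wakes up from zbus_sub_wait() and reads the message with zbus_chan_read(), without the two sides knowing about each other.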

The conference

/media/myself-embedded-open-source-summit.jpg

(I was caught in a picture. I don't know during which talk, but it seems like I enjoyed it.)