
3 Free Simulation Tools to Work Around the Global Chip Shortage

There is something that’s been seriously bothering me ever since I started to work with embedded devices: trying out new things when you don’t have an actual device at hand is hard!

Over the years, I’ve had the following happen more often than I care to admit:

  1. 💡 Hear about an interesting embedded tool or library;
  2. 🧑‍💻 Check out the provided code samples;
  3. 😔 Realize none of them work with any combination of the—dozens!—embedded boards and sensors I own;
  4. 🚶‍♂️ Give up…

With the ongoing global chip shortage, the problem is just so much worse these days. Even if I were to go through the hassle of ordering the required hardware, I would have to face incredibly long lead times—sometimes well over a year!

Do I really want to wait a full year until I get started with my IoT/Embedded project idea?

Luckily, there has been a lot of innovation in embedded hardware simulation over the past few years, and I have found myself using the following tools on a regular basis.

For each tool, I will (briefly) explain what it does and what I like about it, and I will include a link to a quick demo so you can see first-hand how much time it can save you!


Renode

Renode is an open-source framework that allows you to run, debug and test unmodified embedded software right from your PC.

Out of the box, Renode supports a wide variety of embedded boards and peripherals, with more being added regularly. If the board or peripheral you need is not on the list, there is a good chance it can be added with minimal work: Renode has a really elegant and extensible mechanism for describing platforms, as well as for modeling peripherals.
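To give you an idea of what this looks like in practice, here is a minimal sketch of a Renode monitor script that creates a machine, loads one of the platform descriptions shipped with Renode, and boots your own firmware. The platform file, the ELF name (my_app.elf), and the UART name are placeholders to adapt to your setup—check the platforms/boards directory of your Renode installation for the exact file names.

mach create "stm32f7-demo"
machine LoadPlatformDescription @platforms/boards/stm32f7_discovery-bb.repl
sysbus LoadELF @my_app.elf
showAnalyzer sysbus.usart1
start

Typing these few lines (or saving them as a .resc script) gives you a running board, with its UART output shown in a dedicated analyzer window.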

I am using Renode to drastically simplify and speed up my “inner dev loop” when I build Azure RTOS applications. It can emulate pretty complex peripherals such as LCD touchscreens, which means I can even test my GUIX applications!

Azure RTOS GUIX Home Automation Demo running on a simulated STM32F746G-DISCO board in Renode.

I have to give Renode bonus points for letting you debug the simulated code, as well as for providing a complete framework for automated testing.

In the GitHub repository below, I have assembled some examples of Azure RTOS applications running on Renode, including one that shows how to use the testing framework mentioned above.

There is a lot more that can be done with Renode (multi-node simulation, networking, …) so I will probably write some more about it in the future.


Wokwi

It is becoming increasingly common to develop right from a web browser. While it is a significant paradigm shift for long-time embedded developers, it also brings tons of benefits and makes it much easier to set up reproducible (and versioned!) development environments and toolchains. In fact, I already spoke about this a while ago in this video 😊.

Wokwi is a web-based hardware simulation environment that can be used to simulate a wide variety of micro-controllers (Arduino Uno, ESP32, Raspberry Pi Pico, …), electronic parts, sensors, and actuators.

At first sight, Wokwi feels like a tool aimed only at “hobbyists”, but it is in fact really powerful and so much more than a toy (for example, it too comes with debugging support)! It comes with an online web IDE that resembles the Arduino IDE, but you can also bring your own pre-compiled ELF binary if you want to keep using your usual development environment.

Beyond the fact that Wokwi “just works” and the fact that sharing a project with someone is as simple as giving them the URL to access it(!), I really love the following Wokwi features:

Example of Wokwi’s Logic Analyzer capturing the I2C traffic between the MCU and an RTC (real-time clock) module.
An external tool such as PulseView can then read and decode the captured traffic.
  • Logic analyzer for capturing/debugging the various signals in your system (ex. I2C or SPI traffic)
  • Network traffic capture for analyzing Wi-Fi traffic in Wireshark. You can e.g. troubleshoot the connection to your IoT cloud, something that’s pretty cumbersome or even impossible to do when running on “real” hardware.

When you use Wokwi to simulate an ESP32 application, you can actually connect to a Wi-Fi network, right from within the simulation. Your application simply connects to the simulated “Wokwi-GUEST” WLAN to access the Internet. Pretty convenient for simulating your next IoT project, eh?
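For example, with the ESP32 Arduino core, connecting from inside the simulation takes just a few lines. This is a minimal sketch: the third argument of WiFi.begin() is the Wi-Fi channel, and Wokwi’s documentation recommends passing 6 explicitly to speed up the simulated connection.

#include <WiFi.h>

void setup() {
  Serial.begin(115200);

  // "Wokwi-GUEST" is the simulator's open access point (no password).
  // Specifying channel 6 makes the simulated connection (almost) instant.
  WiFi.begin("Wokwi-GUEST", "", 6);

  while (WiFi.status() != WL_CONNECTED) {
    delay(100);
    Serial.print(".");
  }
  Serial.println("\nConnected to Wokwi-GUEST!");
}

void loop() {
  // your application code goes here
}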

To show you how much easier things are when you don’t have to worry about the hardware, you can check out this Wokwi project.

It shows the official Azure IoT Central sample application for ESP32, running in a fully simulated environment. Getting the sample to run is really straightforward:

  1. Create an Azure IoT Central application, if needed, and provision a new device, as per the sample’s instructions.
  2. Enter your Azure IoT Central and device information into the sample’s iot_configs.h.
  3. Run the simulation! The application will automatically be compiled by Wokwi and then run within your browser.

As the application starts, the simulated device will attach to the—simulated—wireless network, and will show up shortly thereafter in Azure IoT Central.

You will be able to interact with the LEDs and the LCD display by sending commands from Azure IoT Central, as well as see the telemetry corresponding to the various sensors (you can control the “fake” temperature, acceleration data, etc. that the sensors report by clicking on them in the web interface).


TinkerCAD

TinkerCAD’s circuit simulator is the perfect companion to ensure you don’t fry the actual components of your next project.

I discovered TinkerCAD‘s circuit simulation capabilities only recently. Compared to Renode and Wokwi, TinkerCAD will get you the closest to the “metal”. It helps you validate your circuit, down to making sure the wiring is 100% correct and that you don’t risk frying a component due to a missing resistor!

TinkerCAD lets you simulate Arduino UNO and micro:bit-based circuits, but I must admit that I haven’t spent a lot of time using it for actual projects just yet. The fact that it doesn’t let you simulate network communications makes it a bit impractical for IoT scenarios anyway. However, there are some great projects in the online gallery that should give you a sense of what the tool is capable of.


I highly recommend you give these amazing (and did I mention free?) tools a try.

You should be able to get all the examples I shared up and running in no time, which I hope makes my point about how much of a time-saver they can be.

I would also love to hear about other simulation/emulation tools out there that you may be using to speed up your embedded & IoT development workflow—just let me know in the comments!

If you enjoyed this article, don’t forget to subscribe to this blog to be notified of upcoming publications! And of course, you can also always find me on Twitter and Mastodon.


5 Reasons Why I Dread Writing Embedded GUIs

A consequence of the massive adoption of Internet of Things technologies across all industries is an increasing need for embedded development skills. Yet, embedded development has historically been a pretty complex domain, and not something that one can add to their skillset overnight.

Luckily, over the last decade, silicon vendors have put a lot of effort into simplifying embedded development, especially for people with little to no experience in the domain. Communities such as Arduino and PlatformIO have also immensely contributed to providing easy-to-use tools and high-level libraries that can hide most of the scary details—yes, assembly, I’m looking at you!—of embedded programming while still allowing for professional applications to be written.

In my experience though, there is at least one area where things remain overly cumbersome: graphical user interface (GUI) development. Many applications require at least some kind of graphical user interface: the display might be small and monochrome, with hardly any buttons for the user to press, but it’s still a UI, eh?

I am sure many of you will relate: GUI development can be a lot of fun… until it isn’t!

In this article, I have compiled 5 reasons why I tend to not enjoy writing embedded GUI code so much anymore. And since you might not be interested in simply reading a rant, I am also sharing some tips and some of the tools I use to help keep GUI development enjoyable 🙂.

Hardware Integration & Portability

Most display devices out there come with sample code and drivers that will give you a head start in being able to at least display something.

But there is more to a GUI than just a screen, as an Interface is also made up of inputs, right? How about those push buttons, touch screen inputs, and other sensors in your system that may all participate in your interactions?

Image credit: Seeed Studio.

It might not seem like much, but properly handling simple hardware inputs such as button presses can be a lot of work when running on a constrained system, and you can quickly end up having to deal with complex timing or interrupt-management issues (see the Event Handling & Performance section below for more). And as these often involve low-level programming, they tend to be pretty hardware-dependent and not easily portable.
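To make this concrete, here is a minimal sketch of what merely reading a push button reliably can look like once you add debouncing. It assumes the function is polled from your main loop, and the two hal_* functions are hypothetical placeholders for whatever your HAL actually provides.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical HAL functions: replace with whatever your platform provides. */
extern bool     hal_read_button_pin(void);   /* raw (bouncy) pin state  */
extern uint32_t hal_get_tick_ms(void);       /* milliseconds since boot */

#define DEBOUNCE_MS 20u

/* Returns true exactly once per debounced button press. */
bool button_pressed(void)
{
    static bool     stable_state = false;
    static bool     last_raw     = false;
    static uint32_t last_change  = 0;

    bool raw = hal_read_button_pin();

    if (raw != last_raw) {               /* raw input changed: restart the timer */
        last_raw    = raw;
        last_change = hal_get_tick_ms();
    } else if ((hal_get_tick_ms() - last_change) >= DEBOUNCE_MS &&
               raw != stable_state) {    /* input has been stable long enough */
        stable_state = raw;
        if (stable_state) {
            return true;                 /* report the rising edge only */
        }
    }
    return false;
}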

A lot of embedded development is done using C, so, except for low-level bootstrapping code, embedded code can in theory be fairly portable. However, writing portable GUI code is a whole different story, and unless you’re building on top of an existing framework such as LVGL or Azure RTOS GUIX, it takes a lot of effort to abstract away all the hardware dependencies, even more so when trying to keep performance optimal.
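One common approach, sketched below, is to hide the display behind a small driver interface of function pointers so that the GUI code never talks to the hardware directly. The structure and function names here are purely illustrative, not taken from any particular framework.

#include <stdint.h>

/* A minimal, hypothetical display-driver abstraction. The GUI code only ever
 * talks to this interface; each board provides its own implementation. */
typedef struct {
    uint16_t width;
    uint16_t height;
    void (*init)(void);
    void (*draw_pixel)(uint16_t x, uint16_t y, uint32_t color);
    void (*flush)(void);                 /* push the frame to the panel */
} display_driver_t;

/* Portable GUI code: knows nothing about SPI, I2C, or a specific panel. */
static void draw_horizontal_line(const display_driver_t *disp,
                                 uint16_t y, uint32_t color)
{
    for (uint16_t x = 0; x < disp->width; x++) {
        disp->draw_pixel(x, y, color);
    }
    disp->flush();
}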

Of course, it is not always necessary (or possible) to have GUI code that’s 100% portable. However, in these times of global chip shortage, it can prove very handy to not have a hard dependency on a specific kind of micro-controller or LCD display.

Memory Management

Like I mentioned in the introduction, manipulating pixels can be a lot of fun—it really is! However, constrained systems have limited amounts of memory, and those pixels that you’re manipulating in your code can quickly add up to thousands of bytes of precious RAM and Flash storage.

Let’s take the example of a tiny 128×64 pixels monochrome display. As the screen only supports black & white, each pixel can be represented in memory using just a single bit, meaning a byte can hold up to 8 pixels—yay! But if you do the maths:

A full 128×64px frame buffer uses 128 × 64 / 8 = 1,024 bytes of memory!

That’s already 1KB of RAM, which is quite significant if your MCU only has, say, 16KB total. Interested in displaying a handful of 32×32px icons? That will be an additional 128 bytes for each of these tiny icons!
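In C terms, that budget looks something like this (a minimal sketch assuming a 1-bit-per-pixel layout; the names are illustrative):

#include <stdint.h>

#define DISPLAY_WIDTH   128u
#define DISPLAY_HEIGHT  64u

/* Full frame buffer for a 128x64 monochrome panel: 128 * 64 / 8 = 1,024 bytes of RAM. */
static uint8_t framebuffer[DISPLAY_WIDTH * DISPLAY_HEIGHT / 8u];

/* Each 32x32, 1-bpp icon adds another 32 * 32 / 8 = 128 bytes (typically stored in Flash). */
static const uint8_t icon_settings[32u * 32u / 8u] = { 0 };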

In short: your graphical user interface will likely eat up a lot of your memory, and you need to be extra clever to leave enough room for your actual application. As an example, a quick way to save on graphics memory is to double-check whether some of your raster graphics (ex. icons) can be replaced by a vector equivalent: surely it takes a lot less code and RAM to directly draw a simple 32x32px red square on the screen, instead of having it stored as a bitmap in memory.

Resource Management

It can be tricky to properly manage the various resources that make up a GUI project.

More specifically, and whether you are lucky enough to work with a graphics designer or not, your GUI mockups will likely consist of a variety of image files, icons, fonts, etc. However, in an embedded context, you typically can’t expect to directly manipulate that nice transparent PNG file or TrueType font in your code! It first needs to be converted into a format that your embedded code can work with.
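The result of that conversion is typically a generated C file that gets compiled into your firmware. The sketch below shows the kind of output such a conversion script might produce; the struct and names are illustrative, not the output of any particular tool.

#include <stdint.h>

/* Hypothetical output of an asset-conversion script (icon_arrow.png -> icon_arrow.c).
 * 8x8 pixels, 1 bit per pixel, one byte per row, MSB = leftmost pixel. */
typedef struct {
    uint16_t       width;
    uint16_t       height;
    const uint8_t *data;
} image_asset_t;

static const uint8_t icon_arrow_data[8] = {
    0x18, 0x3C, 0x7E, 0xFF, 0x18, 0x18, 0x18, 0x18   /* a small "up arrow" glyph */
};

const image_asset_t icon_arrow = { 8u, 8u, icon_arrow_data };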

If you are a seasoned embedded developer, I am sure that you (or someone at your company) have developed your very own macros and tools to streamline the conversion and optimization of your binary assets, but in my experience it has always involved a lot of tinkering and one-off, quick-and-dirty conversion scripts, which hurts long-term maintainability. Add version control to the mix, and it becomes pretty hairy to keep your assets tidy at all times!

Event Handling & Performance

Graphical programming is by nature very much event-driven. It is therefore quite natural to expect embedded GUI code to look as follows (pseudo-code):

Button btnOK;

btnOK_click = function() {
  // handle click on button btnOK
  // ...
}

btnOK.onClick = btnOK_click;

As you can imagine, things are not always that simple…

Firstly, C, which is still the most widely used embedded programming language, is not exactly object-oriented. As a consequence, even if it is perfectly possible to aim for a high-level API that looks like the above, there is a good chance you will find yourself juggling error-prone function pointers whenever you add or update an event handler for one of your UI elements.
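In plain C, the “nice” API from the pseudo-code above typically ends up looking more like the following sketch. The names are illustrative and not taken from any specific GUI library.

#include <stddef.h>

/* Event-handler type: a plain function pointer taking a user-supplied context. */
typedef void (*event_handler_t)(void *context);

typedef struct {
    const char     *label;
    event_handler_t on_click;
    void           *on_click_ctx;
} button_t;

static void button_set_on_click(button_t *btn, event_handler_t handler, void *ctx)
{
    btn->on_click     = handler;
    btn->on_click_ctx = ctx;
}

/* Called by the event loop when the button is pressed. */
static void button_dispatch_click(button_t *btn)
{
    if (btn->on_click != NULL) {            /* forget this check and you crash */
        btn->on_click(btn->on_click_ctx);
    }
}

static void on_ok_clicked(void *context)
{
    (void)context;
    /* handle click on the OK button */
}

void ui_init(button_t *btn_ok)
{
    btn_ok->label = "OK";
    button_set_on_click(btn_ok, on_ok_clicked, NULL);
}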

Assuming that you have indeed found an elegant way to associate event handlers with the various pieces of your UI, you still need to implement some kind of event loop. Indeed, you must regularly process the events happening in your system (“button A pressed”, etc.) and dispatch them to the proper event handlers. A common pattern in embedded programming is to do so through a so-called super loop: the program runs an infinite loop that invokes each task the system needs to perform, ad infinitum.

int main() {
    setup();
    
    while (1) {
        read_sensors();
        refresh_ui();
        // etc.
    }
    
    /* program's execution will never reach here */
    return 0;
}

A benefit of this approach is that the execution flow remains pretty readable and straightforward, and it also avoids some potential headaches that may be induced by complex multi-threading or interrupt handling. However, any event handler running for too long (or crashing!) can compromise the performance and stability of your main application.

As embedded real-time operating systems such as FreeRTOS or Azure RTOS ThreadX become more popular, a more modern approach is to have the UI event loop run in a dedicated background task. The operating system can then ensure that this task, given its lower priority, will not compromise the performance of your main application.
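With FreeRTOS, for instance, this could look like the sketch below. The stack size, priority, and the gui_process_events() function are illustrative assumptions rather than part of any specific GUI framework.

#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical GUI housekeeping function: dispatches pending UI events,
 * redraws dirty widgets, etc. */
extern void gui_process_events(void);

static void gui_task(void *params)
{
    (void)params;
    for (;;) {
        gui_process_events();
        vTaskDelay(pdMS_TO_TICKS(20));   /* ~50 Hz UI refresh is plenty here */
    }
}

void gui_task_start(void)
{
    /* Low priority: the GUI yields to more time-critical application tasks. */
    xTaskCreate(gui_task, "gui", 1024, NULL, tskIDLE_PRIORITY + 1, NULL);
}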

An embedded GUI does not always need to be particularly fast and responsive. However, it is considered good practice to use embedded resources as efficiently as possible, and making sure that your GUI and application code are as efficient as reasonably possible can save you a lot of money: it means you can stick to the smallest possible MCU for the task.

Tooling

Last but not least: tooling. To be honest, I have never been a big fan of designing graphical user interfaces using a WYSIWYG (What You See Is What You Get) approach. That being said, coding a graphical user interface that has more than just a couple of screens requires at least some tooling support, since most of the “boring” glue code can often be automatically generated.

What’s more, testing an embedded GUI can quickly become painful, as re-compiling your app and downloading it to your target can take quite some time. This can be very frustrating when you need to wait several minutes just to test how that fancy new animation you coded looks. 😒

Over the past few months, I have started to use Renode more often. It is a pretty complete open-source tool suite for emulating embedded hardware—including their display!—straight from your computer. In a future post, I plan on sharing more on how I started to use Renode for drastically shortening the “inner embedded development loop”, i.e. the time between making a code change and being able to test it live on your—emulated!—device.


I would be curious to hear about your experience, and your pain points when working with embedded GUIs. Let me know in the comments below!

Like I already mentioned, stay tuned for upcoming articles where I will be covering some of the tools and frameworks that I have started to use (and love) and that make my life SO much easier when it comes to GUI development!

If you enjoyed this article, don’t forget to subscribe to this blog to be notified of upcoming publications! And of course, you can also always find me on Twitter and Mastodon.


Using GitHub Codespaces for Embedded Development

Managing an embedded development environment can be pretty painful and error-prone, from properly checking out the codebase and all its dependencies, to making sure the correct (and often pretty big!) toolchains are set up and used, to having the developers’ IDE use the right set of extensions and plugins.

When you start thinking of containers as a technology that can be used not only at runtime (ex. for packaging microservices) but also at development time, it becomes possible to easily describe the entirety of the development environment required for a particular project. Make this description part of your source code repository and you end up with a versioned, fully reproducible dev environment! And with a cloud-based IDE, surely you should even be able to code straight from your web browser, right?
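With GitHub Codespaces (and dev containers in general), that description lives in a .devcontainer/devcontainer.json file at the root of your repository. Below is a minimal sketch; the container image and the extension list are just examples of what an embedded project might declare, not taken from a specific repository.

// .devcontainer/devcontainer.json
{
  "name": "embedded-dev",

  // A container image that ships your cross-compilation toolchain (illustrative).
  "image": "ghcr.io/example/arm-gcc-devcontainer:latest",

  "customizations": {
    "vscode": {
      // IDE extensions every contributor gets automatically.
      "extensions": [
        "ms-vscode.cpptools",
        "marus25.cortex-debug"
      ]
    }
  }
}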

I recently gave GitHub Codespaces a try to get a sense of the benefits of the approach. Spoiler alert: there is already a lot that can be done (debugging embedded code from your web browser anyone?), so I am really excited to see what’s ahead of us in terms of making embedded development even more seamless.

I highly encourage you to give Codespaces a try and see for yourself what you think might be missing in the picture. I would love to hear about it!

A good way for you to get started, if you have an STM32L4 developer kit handy, would be to go with the Azure RTOS getting started example, like I did in the video. Don’t forget to check the debugging instructions—they complement what you see in the video nicely.