Managing an embedded development environment can be pretty painful and error-prone: from properly checking out the codebase and all its dependencies, to making sure the correct (and often pretty big!) toolchains are set up and used, to having the developers' IDE use the right set of extensions and plugins.
When you start thinking of containers as a technology that can be used not only at runtime (e.g. for packaging microservices) but also at development time, it becomes possible to easily describe the entirety of the required development environment for a particular project. Make this description part of your source code repository and you end up with a versioned, fully reproducible dev environment! And with a cloud-based IDE, you should even be able to code straight from your web browser, right?
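To make this concrete, here is a minimal sketch of what such a description can look like using the dev container format that Codespaces understands (a .devcontainer/devcontainer.json file at the root of the repository). The container image below is purely hypothetical:

```jsonc
{
    // Hypothetical image bundling the project's (big!) cross-compilation toolchain
    "image": "ghcr.io/example/embedded-toolchain:latest",
    // The IDE extensions every developer on the project gets out of the box
    "extensions": ["platformio.platformio-ide"]
}
```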
I recently gave GitHub Codespaces a try to get a sense of the benefits of the approach. Spoiler alert: there is already a lot that can be done (debugging embedded code from your web browser anyone?), so I am really excited to see what’s ahead of us in terms of making embedded development even more seamless.
I highly encourage you to give Codespaces a try and see for yourself what you think might be missing in the picture. I would love to hear about it!
It’s been a few months now since I started playing with the Wio Terminal from Seeed Studio. It is a pretty complete device that can be used to power a wide range of IoT solutions—just look at its specifications!
Cortex-M4F running at 120 MHz (can be overclocked to 200 MHz) from Microchip (ATSAMD51P19);
192 KB of RAM, 4 MB of Flash;
Wireless connectivity: 2.4 GHz & 5 GHz Wi-Fi (802.11 a/b/g/n) and BLE 5.0, powered by a Realtek RTL8720DN module;
Expansion ports: 2x Grove ports, 1x Raspberry Pi-compatible 40-pin header.
Wireless connectivity, extensibility, processing power… on paper, the Wio Terminal must be the ideal platform for IoT development, right? Well, ironically, one thing it doesn't do out-of-the-box is actually connect to an IoT cloud platform!
You will have guessed it by now… In this blog post, you’ll learn how to connect your Wio Terminal to Azure IoT. More importantly, you will learn about the steps I followed, giving you all the information you need in order to port the Azure IoT Embedded C libraries to your own IoT device.
You will need a Wio Terminal, of course, an Azure IoT Hub instance, and a working Wi-Fi connection. The Wio Terminal will need to be connected to your computer over USB—kudos to Seeed Studio for providing a USB-C port, by the way!—so it can be programmed.
Here are the steps you should follow to get your Wio Terminal connected to Azure IoT Hub:
Update the application settings (include/config.h) file with your Wi-Fi, IoT Hub URL, and device credentials (a sketch of this file appears a bit further below).
Flash your Wio Terminal. Use the command palette (Windows/Linux: Ctrl+Shift+P / macOS: ⇧⌘P) to execute the PlatformIO: Upload command. The operation will probably take a while to complete as the Wio Terminal toolchain and the dependencies of the sample application are downloaded, and the code is compiled and uploaded to the device.
Once the code has been uploaded successfully, your Wio Terminal LCD should turn on and start logging connection traces. You can also open the PlatformIO serial monitor to check the logs of the application (PlatformIO: Serial Monitor command).
> Executing task: C:\Users\kartben\.platformio\penv\Scripts\platformio.exe device monitor <
--- Available filters and text transformations: colorize, debug, default, direct, hexlify, log2file, nocontrol, printable, send_on_enter, time
--- More details at http://bit.ly/pio-monitor-filters
--- Miniterm on COM4 9600,8,N,1 ---
--- Quit: Ctrl+C | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
Connecting to SSID: WiFi-Benjamin5G
Connecting to Azure IoT Hub...
Your device should now be sending its accelerometer sensor values to Azure IoT Hub every 2 seconds, and it is ready to receive remote commands to ring its buzzer.
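For reference, here is roughly the kind of information the include/config.h file from step 1 contains. The macro names below are illustrative, so refer to the actual file in the sample for the exact ones:

```cpp
// include/config.h -- illustrative macro names; check the sample for the real ones
#define WIFI_SSID     "my-network"                // your Wi-Fi network name
#define WIFI_PASSWORD "my-password"               // your Wi-Fi password

#define IOT_HUB_FQDN  "my-hub.azure-devices.net"  // your IoT Hub hostname
#define DEVICE_ID     "wioterminal"               // device ID registered in IoT Hub
#define DEVICE_KEY    "base64-symmetric-key"      // the device's primary symmetric key
```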
It is important to mention that this sample application is compatible with IoT Plug and Play. This means there is a clear, documented contract describing the kinds of messages the Wio Terminal may send (telemetry) or receive (commands).
You can see the model of this contract below—it is rather straightforward. It’s been authored using the dedicated VS Code extension for DTDL, the Digital Twin Description Language.
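Here is an abbreviated sketch of what the interface looks like; the exact schema details (field names and types) are illustrative, so refer to the published dtmi:seeed:wioterminal;1 model for the authoritative definition:

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:seeed:wioterminal;1",
  "@type": "Interface",
  "displayName": "Wio Terminal",
  "contents": [
    {
      "@type": "Telemetry",
      "name": "accelerometer",
      "schema": {
        "@type": "Object",
        "fields": [
          { "name": "x", "schema": "double" },
          { "name": "y", "schema": "double" },
          { "name": "z", "schema": "double" }
        ]
      }
    },
    {
      "@type": "Command",
      "name": "ringBuzzer",
      "request": {
        "name": "duration",
        "schema": "integer"
      }
    }
  ]
}
```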
When connecting to IoT Hub, the Wio Terminal sample application “introduces itself” as conforming to the dtmi:seeed:wioterminal;1 model.
This allows you (or anyone who will be creating IoT applications integrating with your device, really) to be sure there won't be any impedance mismatch between the way your device talks and expects to be talked to, and what your IoT application does.
A great example of why being able to automagically match a device to a corresponding DTDL model is useful: Azure IoT Explorer. Since the device "introduces itself" when connecting to IoT Hub, and since Azure IoT Explorer has a local copy of the model, it automatically shows a dedicated UI for sending the ringBuzzer command!
Azure SDK for Embedded C
In the past, adding support for Azure IoT to an IoT device using the C programming language required either using the rather monolithic Azure IoT C SDK (it is not trivial to bring your own TCP/IP or TLS stack, for example), or implementing everything from scratch using the public documentation of Azure IoT's MQTT front-end for devices.
The Azure SDK team has recently started to put together a C SDK that specifically targets embedded and constrained devices. It provides a generic, platform-independent infrastructure for manipulating buffers, logging, JSON serialization/deserialization, and more. On top of this lightweight infrastructure, client libraries for e.g. Azure Storage or Azure IoT have been developed.
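To give you a feel for this infrastructure, here is a minimal sketch of building a JSON telemetry payload with the SDK's az_json_writer (return-value checks elided for brevity; the payload shape is just an example):

```cpp
#include <azure/core/az_json.h>  // az_span and az_json_writer, from the Azure SDK for Embedded C

static char payload[128];

// Build a {"x":..., "y":..., "z":...} JSON document into `payload`,
// and return the number of bytes written.
static size_t build_telemetry_payload(double x, double y, double z)
{
  az_json_writer jw;
  az_json_writer_init(&jw, AZ_SPAN_FROM_BUFFER(payload), NULL);
  az_json_writer_append_begin_object(&jw);
  az_json_writer_append_property_name(&jw, AZ_SPAN_FROM_STR("x"));
  az_json_writer_append_double(&jw, x, 2);  // 2 fractional digits
  az_json_writer_append_property_name(&jw, AZ_SPAN_FROM_STR("y"));
  az_json_writer_append_double(&jw, y, 2);
  az_json_writer_append_property_name(&jw, AZ_SPAN_FROM_STR("z"));
  az_json_writer_append_double(&jw, z, 2);
  az_json_writer_append_end_object(&jw);
  return az_span_size(az_json_writer_get_bytes_used_in_destination(&jw));
}
```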
You can read more on the Azure IoT client library here, but in a nutshell, here's what I had to implement in order to use it to connect the Wio Terminal:
As the sample uses symmetric keys to authenticate, we need to be able to generate a security token.
The token needs to have an expiration date (typically set to a few hours in the future), so we need to know the current date and time. We use an NTP library to get the current time from a time server (see the first sketch after this list).
The token includes an HMAC-SHA256 signature string that needs to be base64-encoded. Luckily, the recommended WiFi+TLS stack for the Wio Terminal already includes Mbed TLS, making it relatively simple to compute HMAC signatures (e.g. mbedtls_md_hmac_starts) and perform base64 encoding (e.g. mbedtls_base64_encode); the second sketch below shows this signing step.
The Azure IoT client library helps with crafting MQTT topics that follow the Azure IoT conventions, but you still need to provide your own MQTT implementation. This is in fact a major difference from the historical Azure IoT C SDK, which had the MQTT implementation baked in. Since it is widely supported and just works out-of-the-box, the sample application uses the PubSubClient MQTT library from Nick O'Leary (third sketch below).
And of course, you must implement your own application logic. In the context of the sample application, this meant using the Wio Terminal's IMU driver to get acceleration data every 2 seconds, and hooking up the ringBuzzer command to actual embedded code that… rings the buzzer.
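First sketch: getting a Unix timestamp for the token expiry over NTP. I'm using the popular NTPClient Arduino library here as an example; the actual sample may rely on a different one:

```cpp
#include <NTPClient.h>  // example: the popular NTPClient Arduino library
#include <WiFiUdp.h>

WiFiUDP ntpUDP;
NTPClient timeClient(ntpUDP);  // defaults to pool.ntp.org

// The SAS token expiry is a Unix timestamp set a few hours in the future.
// (Call timeClient.begin() once in setup() before using this.)
static unsigned long token_expiry()
{
  timeClient.update();                             // refresh the time from the NTP server
  return timeClient.getEpochTime() + 4 * 60 * 60;  // now + 4 hours
}
```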
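Second sketch: computing the Base64-encoded HMAC-SHA256 signature with Mbed TLS. This is a simplified version; the actual sample also takes care of URL-encoding and of assembling the full SAS token string:

```cpp
#include <Arduino.h>
#include <mbedtls/md.h>
#include <mbedtls/base64.h>

// Compute Base64(HMAC-SHA256(key, data)), the core building block of a SAS token.
static String hmac_sha256_base64(const uint8_t* key, size_t key_len,
                                 const uint8_t* data, size_t data_len)
{
  uint8_t hash[32];  // SHA-256 digest size

  mbedtls_md_context_t ctx;
  mbedtls_md_init(&ctx);
  mbedtls_md_setup(&ctx, mbedtls_md_info_from_type(MBEDTLS_MD_SHA256), 1 /* HMAC */);
  mbedtls_md_hmac_starts(&ctx, key, key_len);
  mbedtls_md_hmac_update(&ctx, data, data_len);
  mbedtls_md_hmac_finish(&ctx, hash);
  mbedtls_md_free(&ctx);

  // Base64 of 32 bytes is 44 characters; leave room for the null terminator.
  unsigned char encoded[64];
  size_t encoded_len = 0;
  mbedtls_base64_encode(encoded, sizeof(encoded), &encoded_len, hash, sizeof(hash));
  return String((char*)encoded);
}
```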
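Third sketch: letting the Azure IoT client library craft the telemetry topic, and handing the actual MQTT work over to PubSubClient. The variable names and buffer sizes are illustrative, and error handling is elided:

```cpp
#include <PubSubClient.h>
#include <azure/iot/az_iot_hub_client.h>

extern PubSubClient mqttClient;     // backed by a TLS-capable Wi-Fi client
extern az_iot_hub_client hubClient; // initialized with the hub hostname and device ID

// Publish a telemetry payload on the MQTT topic mandated by the Azure IoT conventions.
void send_telemetry(const char* payload)
{
  char topic[128];
  // The library crafts the devices/{deviceId}/messages/events/... topic for us.
  az_iot_hub_client_telemetry_get_publish_topic(
      &hubClient, NULL /* no message properties */, topic, sizeof(topic), NULL);

  mqttClient.publish(topic, payload);
}
```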
I hope you found this post useful! I will soon publish additional articles that go beyond the simple “Hey, my Wio Terminal can send accelerometer data to the cloud!” to more advanced use cases such as remote firmware upgrade. Stay tuned! 🙂
Let me know in the comments what you’ve done (or will be doing!) with your Wio Terminal, and also don’t hesitate to ask any burning question you may have.
If you liked this article, don’t forget to subscribe to be notified of upcoming publications. And of course, you can also always find me on Twitter.
I have been, directly or indirectly, responsible for growing and nurturing several developer communities for over a decade now. Along the way, I’ve come to realize that there are lots of misconceptions in terms of what characterizes successful developer engagement programs, and how to effectively measure their impact.
A lot has already been said on the reasons why vanity metrics are dangerous, so why should you bother reading further? Well, what I had originally planned as a short brain dump ended up covering the pitfalls of vanity metrics in the specific context of developer engagement pretty extensively.
This article will help you identify some areas where you can improve, as well as new indicators you will want to start tracking. I also hope it will help change your mindset so that you can actually start becoming proud of your not-so-bright metrics and what you have learned from them.
Who doesn’t like a performance dashboard filled with green indicators? Well… I don’t!
Whether it’s intentional or not, if your metrics and KPIs are designed to make you “look good”, you’re probably not looking at the right thing, or at least not with the right level of granularity.
A “green” dashboard is not inherently bad—who would I be to question the fact that your community is growing anyway? What I am quite confident is bad, though, is a dashboard that does not capture and highlight the things that are not working… and there are always a few behind even the most stellar aggregated metrics.
The rest of this article will cover several ways you can refine your metrics to better capture the things that can be improved.
As a rule of thumb, always make sure that all the activities accruing to a given (green) indicator contribute to it equally. Just think about it: if, out of four things you're doing successfully overall—maybe you even exceeded your initial goal!—one is in fact really lagging behind, you might as well focus your time and effort on the ones that work, right? Or at the very least, you'll want to analyze what is making that one activity unsuccessful, in order to do better next time…
Learn from the outliers
I can't emphasize this enough: you will learn a lot by making sure your metrics have the right granularity, and by digging into your "outliers", i.e. those articles/social posts/videos that are performing particularly well–or not, for that matter.
Whenever I’m faced with a piece of content that is in appearance successful, I always start by trying to answer these two related questions:
Is this an actual success, or are my metrics somehow biased or, worse, simply inaccurate?
What made this piece perform so well?
More specifically, when it comes to deciding whether I should celebrate an actual success, I usually ask myself:
Has the content been promoted as part of a paid campaign? If so, it is worth looking at its organic traffic stats, and how they compare to your average article. A sub-par article can easily be flagged as impactful when, in reality, you've only paid to get more eyeballs on it, without generating any particular attention or engagement. (More on the topic of engagement below.)
What are the high-level demographics of the people who viewed or relayed my content? Would you call an article impactful if it got shared or liked by 100 people, 95 of whom you either personally know or who happen to be direct or indirect colleagues? Personally, I'd rather have ten times less engagement if the people involved happen to spread the word in more distant and uncharted social circles.
Who, specifically, promoted and shared my content? Chances are your content has been picked up and amplified by some media outlets or key influencers in your community. Find out who they are, and always try to personally reach out and engage.
On the opposite side of the spectrum, there are those “meh” articles or videos that didn’t seem to find an audience and that can also teach you a lot:
The success, or lack thereof, of your content is often going to be correlated to where in the hype cycle the technology you’re covering stands. If you’re covering bleeding edge technology, an underperforming article should not necessarily be a cause for disappointment. However, you will want to look for signals showing that it piqued the curiosity of at least some folks!
Don't underestimate the impact of SEO and social media optimization. Sometimes, the only explanation for why some content is lagging behind is that you didn't take the time to create a nice visual/card to catch people's attention when your post pops up in their timeline.
Eyeballs are nice, engagement is better
A metric that often contributes to the "green dashboard symptom" is the mythical pageview, and all its variations (e.g. Twitter impressions).
You may argue that tracking pageviews allows you to measure your thought leadership and your reach. However, and at the very least, that’s assuming you have a good understanding of the size of your overall potential audience, otherwise you’re just making a wild guess about what a “good” number should be…
In most cases, you will be better off looking at the actual engagement of your audience. Rather than pageviews, I tend to look at the following instead:
Impressions click-through rate (CTR). Out of 100 people presented with the thumbnail of my YouTube video, or the link to my post in their Twitter feed, how many did I convince to click to learn more?
Number of comments. If I'm getting tens of thousands of views and not a single person bothers to comment—even to simply say "Thanks!" or "Cool stuff!"—or to ask a question, I usually start questioning the relevance of my article, or at least whether I did all I could to foster engagement from my audience.
Trends over absolute numbers
People you will share your metrics with likely have no idea if getting 50,000 views per month on your YouTube channel, or 70 retweets on your Twitter campaign is any good. In fact, you probably don’t either.
However, if you are able to show a trend over the past 7, 30, and 365 days, of how a particular metric has evolved, this will make it much easier to evaluate the impact of your various activities.
What’s more, this will also force you to not rest on your laurels, by giving you a way to spot absolute numbers that seemed huge a couple years ago, and that have, in fact, been stagnating since then.
There’s always room for improvement
Like everyone else, I like celebrating a successful article or video, and so should you. However, even your most successful content has downsides if you analyze it carefully.
Remember that contest you ran with a bunch of partners and that was super successful? Well, try and do the exercise of looking for that particular metric that might not shine as much as the others. By looking at your referral traffic, for example, you may notice that the impact of the promotion activities of one of the partners is lagging behind. Why is that? Maybe this partner’s community isn’t the right target for you? Maybe the tone you usually use just needs to be tweaked for this particular crowd?
It might sound like nitpicking to look for things that didn’t work, but trust me, you will learn a lot by paying attention to these.
Don’t set (arbitrary) goals too early
It is very tempting to look at some of the metrics your existing tools are giving you access to (e.g. pageviews), increase them by an arbitrary ratio, and then use this number as your goal for the upcoming period. This is just wrong.
First, unless you've already given them careful thought, I doubt the goals that you initially set will reflect tangible and actionable insights. Congratulations, you have 20% more unique visitors on your web property! Now what? Are these visitors directly driving 20% more usage of your products? Are you even aiming for increased adoption in the first place? What if I told you that your competitor saw 100% growth during the same period? Is that good or bad?
Once you’ve narrowed down some of the trends you are going to monitor, it becomes much easier to adapt your programs and tactics to make sure you’re aiming for continual improvement and growth.
Your community ≠ your official channels
A common mistake when looking after a developer community is to limit the breadth of monitored channels to your official/corporate ones. It usually stems from a pure tooling limitation: we naturally tend to only pay attention to the channels that can easily and automatically be tracked (see previous paragraph), since we directly own them.
However, your community lives in many places, and I would be surprised if your goal is to only grow traffic and engagement on your own properties. Whether you have tools that allow you to do this automatically or not, you should make sure you track metrics related to your performance on third party channels and platforms.
At a minimum, in particular if you’re finding it cumbersome to collect information for the properties you don’t directly own, you should always make referral traffic one of your key indicators. This way, you can directly evaluate how much your content has been shared or linked to from third party channels.
Empower your authors
For many organizations, the people creating the content are not necessarily the ones responsible for actually publishing and promoting it. This is of course how organizations scale and how people stay focused, but it comes with a major flaw: in order to truly meet their audience, your authors need to be able to see first-hand how their content performed.
While not everyone is an expert at Google Analytics or social media tactics, you should aim to give your authors direct access to the tools that will allow them to quickly assess whether their message landed with their intended audience.
Don't underestimate the impact empowered authors can have on your content creation activities and your overall organization. That feature owner who did their best to write a series of blog posts about a new release, actively promoting their piece in key communities and seeking developer engagement? They are the ones who will get tons of valuable first-hand feedback from their actual users, since they will have met them where they are. And they are the thought leaders you need in order to establish trust with your developer community.
Automation should never replace your own judgment
From Google Analytics to Adobe Analytics to your favorite content marketing tool, you probably have at your disposal a ton of metrics that are automatically collected, and consolidated into nice reports. This is great and can save you a lot of effort every time you need to share an activity report with your stakeholders.
That being said, not only should you not trust these metrics blindly (remember to pay special attention to outliers), but you should also make sure to complement them with your own manual findings.
As an example, here are some of the things I do to give my reports more context:
For social media amplification, I always dig into the demographics of the people who ended up sharing or re-sharing something. As I mentioned before, I will always tend to prefer an article that has been shared less if the people who shared it are neither direct members of my community nor colleagues.
For video content, e.g. on YouTube, I try to compare the number of comments or likes (and dislikes!) that key videos are getting to the numbers that videos from similar communities, or close competitors, are getting. You will likely have to collect these numbers manually, but it should only take a few minutes.
I often try to manually capture and quote a couple key comments/posts/tweets from the community (both positive and negative ones!). If you have access to sentiment analysis tools, do not hesitate to use them to help you look in the right direction.
Not all indicators come in the form of tangible numbers, and you won’t always be able to directly include them in your report tables or to track their evolution over time. However, they are instrumental in reminding you that you should not overlook the human aspects, and the importance of personal interactions, in your developer community.
Once again, if you found this article useful, or if you’ve had other experiences, I would really love to hear from you in the comments. In the meantime, I’ll leave you with a few links to some really good resources.