Infrastructure

16 posts

Build your own IoT cloud platform

If you want to do large-scale IoT and build your own IoT cloud platform, then you will need a messaging layer which can actually handle this: not only the sheer load of messages and the number of connections. Even more important may be the ability to integrate your custom bits and pieces, and to make changes to every layer of that installation in a controlled, yet simple way.

An overview

Eclipse Hono is an open source project under the umbrella of the Eclipse IoT top-level project. It provides a set of components and services used for building up your own IoT cloud platform:

Overview of Eclipse Hono IoT cloud platform

In a nutshell, Hono provides a framework for creating protocol adapters, and also delivers two “standard” protocol adapters, one for HTTP and one for MQTT. Both options are equally important to the project, because there will always be a special case for which you might want a custom solution.

Aside from the standard components, Hono also defines a set of APIs based on AMQP 1.0 in order to mesh in other services. Using the same ideas as for adding custom protocol adapters, Hono allows you to hook up your custom device registry and your existing authentication/authorization system (read more about Eclipse Hono APIs).

The final direct or store-and-forward message delivery is offloaded to an existing messaging layer. The scope of Hono is to create an IoT messaging infrastructure by re-using an existing, use case agnostic messaging layer, not to create another one. In this post we will assume that EnMasse is being used for that purpose, simply because EnMasse is the best choice for AMQP 1.0 when it comes to Kubernetes/OpenShift. It is a combination of Apache Qpid, Apache ActiveMQ Artemis, Keycloak and some EnMasse native components.

In addition to that, you will of course need to plug in your actual custom business logic, which leaves you with a zoo of containers. Don’t get me wrong, containers are awesome; simply imagine you had to deploy all of this on a single machine.

Container freshness

But this also means that you need to take care of container freshness at some point, most likely making changes to your custom logic and maybe even to Hono itself. What is “container freshness”?! Containers are great to use and easy to build in the beginning. Simply create a Dockerfile, run docker build and you are good to go. You can also do this during your Maven release and have one (or more) final output container(s) for your release, like Hono does it for example. The big flaw here is that a container is a stack of layers, making up your final (application) image: starting with a basic operating system layer, adding additional tools, adding Java and maybe more, and finally your local bits and pieces (like the Hono services).
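
To recap, such a build can start from something as simple as this sketch (image and file names are made up), where each instruction stacks another layer on top of the base image:

# Dockerfile: each instruction adds one more layer
FROM openjdk:8-jre
COPY target/my-service-exec.jar /opt/my-service/
ENTRYPOINT ["java", "-jar", "/opt/my-service/my-service-exec.jar"]
# build and tag it with: docker build -t my-service:0.1.0 .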

All those layers link to exactly one parent layer, and this link cannot be updated. So Hono 0.5 points to a specific version of the “openjdk” layer, which again points to a specific version of “debian”. But you want your IoT cloud platform to stay fresh and up to date. Now assume that there is some issue in any of the Java or Debian base layers, like a security issue in “glibc”. Unless Hono releases a new set of images, you are unable to get rid of this issue. In most cases you want to upgrade your base layers more frequently than your actual application layer.

Or consider the idea of using a different base layer than the Hono project had in mind. What if you don’t want to use Debian as a base layer? Or want to use Eclipse OpenJ9 instead of the OpenJDK JVM?

Building with OpenShift

When you are using OpenShift as a container platform (a similar approach is possible on plain Kubernetes), you can make use of image streams and builds. An image stream simply is a way to store images and maintain versions. When an image stream is created, it is normally empty. You can start to populate it with images, either by importing them from existing repositories, like DockerHub or your internal ones, or by creating images yourself with a build running inside of OpenShift. Of course you are in charge of all operations, including tagging versions. This means that you can easily replace an image version, but in a controlled way, so no automatic update of a container will break your complete system.
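
For example, importing an image from DockerHub into an image stream and tagging it according to your own scheme takes just two commands (stream and tag names are examples):

$ oc import-image openjdk:8-jre --from=docker.io/library/openjdk:8-jre --confirm
$ oc tag openjdk:8-jre openjdk:v1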

There are different types of builds. A rather simple one is the well-known “Dockerfile” approach: you define a base image and add a few commands which will make up the new container layer. Then there is the “source-to-image” (S2I) build, which we will have a look at in a second.
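
A minimal sketch of such a “Dockerfile” build, written inline against an imported base image (names are examples, the added tooling is arbitrary):

source:
  type: Dockerfile
  # the Dockerfile can be provided inline in the build configuration
  dockerfile: |-
    FROM openjdk:v1
    RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
strategy:
  type: Docker
  dockerStrategy:
    from:
      kind: ImageStreamTag
      name: "openjdk:v1"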

Building & Image Streams

Now with that functionality you can define a setup like this:

Diagram of example image streams

The base image gets pulled in from an external registry, and during that process you map the external versions to your internal versioning scheme. What a move from “v1” to “v2” means in your setup is completely up to you.
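
If you want OpenShift to check the external registry for updates periodically, you can use a scheduled tag and promote it explicitly, which keeps the move to “v2” a controlled step (names again are examples):

$ oc tag --scheduled docker.io/library/openjdk:8-jre openjdk:upstream
$ oc tag openjdk:upstream openjdk:v2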

The pulled-in image gets fed into a build step, which produces a new image based on that parent: your custom base image. Maybe this means simply adding a few command line utilities to the existing base image. Or some policy file, … The custom base image can then be used by the next build process to create an application specific container, hosting your custom application. Again, which versioning scheme you use is completely up to you.

If you like, you can also define triggers between these steps, so that when OpenShift pulls in a new image from the external source or the source code of the Git repository changes, all required builds get executed and finally the new application version gets deployed automatically. Old image versions may be kept so that you can easily switch back to an older version.
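
The trigger section of such a build configuration could look like the following sketch (the image stream name is an example):

triggers:
  # re-run this build whenever the custom base image changes
  - type: ImageChange
    imageChange:
      from:
        kind: ImageStreamTag
        name: "custom-base:v1"
  # re-run this build when the build configuration itself changes
  - type: ConfigChange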

Source-to-Image (S2I)

Hono uses a plain Maven build and is based on Vert.x and Spring Boot. The default way of building new container images is to check out the sources from Git and run a local Maven build. During the build, Maven talks to a Docker daemon in order to assemble the new images and store them in its registry.
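
Locally this boils down to something like the following (the profile name follows Hono’s build instructions at that time; the Docker daemon location is an assumption of this sketch):

$ git clone https://github.com/eclipse/hono
$ cd hono
$ mvn clean install -Pbuild-docker-image -Ddocker.host=tcp://localhost:2375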

Now that approach may be fine for developers. But first of all, this is quite a complex, manual job. And second, in the context described above, it doesn’t really fit.

As already described, OpenShift supports different build types to create new images. One of those build types is “S2I”. The basic idea behind S2I is that you define a builder container image which adheres to a set of entry and exit points, processing the provided source and creating a new container image which can be used to actually run this source. For Java, Spring Boot and Maven there is an S2I image from “fabric8”, which can be tweaked with a few arguments. It will run a Maven build, find the Spring Boot entry point, take care of container heap management for Java, inject a JMX agent, …

That way, for Hono you can simply reuse this existing S2I image in a build template like:

source:
  type: Git
  git:
    uri: "https://github.com/eclipse/hono"
    ref: "0.5.x"
strategy:
  type: Source
  sourceStrategy:
    from:
      kind: ImageStreamTag
      name: "fabric8-s2i-java:2.1"
    env:
      # only build the messaging service (and the modules it requires)
      - name: MAVEN_ARGS_APPEND
        value: "-B -pl org.eclipse.hono:hono-service-messaging --also-make"
      # where the build output is located
      - name: ARTIFACT_DIR
        value: "services/messaging/target"
      # only copy the executable Spring Boot jar into the image
      - name: ARTIFACT_COPY_ARGS
        value: "*-exec.jar"

This simple template allows you to reuse the complete existing Hono source code repository and build system. And yet you can start making modifications using custom base images or changes in Git right away.

Of course you can reuse this for your custom protocol adapters as well, and for your custom application parts. In your development process you can still use plain Maven, Spring Boot or whatever you prefer. When it comes to deploying your stack in the cloud, you hand over the same build scripts to OpenShift and S2I and let your application be built in the same way.

Choose your stack

The beauty of S2I is that it is not tied to any specific language or toolset. In this case, for Hono, we used the “fabric8” S2I image for Java. But if you would prefer to write your custom protocol adapter in something else, like Python, Go, .NET, … you could still use S2I and the same patterns with that language and toolset.

Also, Hono supports creating protocol adapters and services in different (non-JVM based) languages. Hono components get meshed up using Hono’s AMQP 1.0 APIs, which allow you to use the same flow control mechanism for services as is used for IoT data, building your IoT cloud platform using the stack you prefer most.

… and beyond the infinite

OpenShift has a lot more to offer when it comes to building your platform. It is possible to use build pipelines, which allow workflows that publish to a staging setup before going to production, re-using the same generated images. Or things like:

  • Automatic key and certificate generation for the inter-service communication of Hono.
  • Easy management of Hono configuration files and logging configuration using “ConfigMaps”.
  • Application specific metrics generation to get some insight into application performance and throughput.

That would have been a bit too much for a single blog post. But I do encourage you to have a look at the OpenShift Hono setup in my forked Hono repository on GitHub, which makes use of some of this. This setup tries to provide a more production-ready deployment for Hono. However, it can only be seen as a reference, as any production-grade setup would definitely require replacements for the example device registry, a better tuned logging configuration and definitely a few other tweaks of your personal preference 😉

Hono also offers a lot more than this blog post can cover when building your own IoT cloud platform. One important aspect definitely is data privacy while supporting multiple tenants on the same instance. Hono already supports full multi-tenancy, down to the messaging infrastructure. This makes it a perfect solution for honoring data privacy in the public and private cloud. Read more about the new multi-tenant features of the next Hono version in Kai Hudalla’s blog post.

Take a look – EclipseCon France 2018

Dejan and I will have a talk about Hono at EclipseCon France in Toulouse (June 13-14). We will present Hono in combination with EnMasse as an IoT cloud platform. We will also bring the setup described above with us and would be happy to show you everything in action. See you in Toulouse.

Manually reclaiming a persistent volume in OpenShift

When you have a persistent volume in OpenShift configured with “Retain”, the volume will switch to “Released” after the claim has been deleted. But what now? How to manually recycle it? This post will give a brief overview of how to manually reclaim the volume.

Deleting the claim

Deleting the persistent volume claim in OpenShift is simple, either using the Web UI or by executing:

$ oc delete pvc/my-claim

If you check, you will see that the persistent volume is “Released” but not “Available”:

$ oc get pv
NAME              CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM                         REASON    AGE
my-pv             40Gi       RWO           Retain          Released   my-project/my-claim                     2d

What the documentation tells us

The OpenShift documentation states:

By default, persistent volumes are set to Retain. NFS volumes which are set to Recycle are scrubbed (i.e., rm -rf is run on the volume) after being released from their claim (i.e, after the user’s PersistentVolumeClaim bound to the volume is deleted). Once recycled, the NFS volume can be bound to a new claim.

At a different location it simply says:

Retained reclaim policy allows manual reclamation of the resource for those volume plug-ins that support it.

But how to actually do that? How to manually reclaim the volume?

Reclaiming the volume

First of all, ensure that the data is actually deleted. Using NFS, you will need to manually delete the content of the share using e.g. rm -Rf /exports/my-volume/*, but be sure to keep the actual export directory in place.

Now it is time to actually make the PV available again for being claimed. For this, the reference to the previous claim (spec/claimRef) has to be removed from the persistent volume. You can do this manually from the Web UI or with a short command from the shell (assuming you are using bash):

$ oc patch pv/my-pv --type json -p $'- op: remove\n  path: /spec/claimRef'
"my-pv" patched

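If you prefer a JSON merge patch, setting the claimRef to null removes the reference just as well:

$ oc patch pv/my-pv --type merge -p '{"spec":{"claimRef":null}}'
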
This should return the volume into state “Available”:

$ oc get pv
NAME              CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM                         REASON    AGE
my-pv             40Gi       RWO           Retain          Available                                          2d

Remote managing Eclipse Kura on Apache Karaf with ECF

To be honest, I had my troubles in the past with the Eclipse Communication Framework (ECF). Not that it is a bad framework, but whatever I started with it was complicated and never really worked for me. This story is different!

A few months back the Eclipse Kura project ran into the issue that the plugin which was being used for remotely managing a Kura instance from an IDE (mToolkit) just kind of went away (issue #496). There is some workaround for that now, but the problems around mToolkit still exist. Beside the fact that it is no longer maintained, it is also rather buggy. Deploying a single bundle takes about a minute for me. Of course, using the Apache File Install package for Kura would also help here 😉

But having a decent IDE integration would also be awesome. So when Scott Lewis from the ECF project contacted me about that, I was ready to give it a try. Unfortunately the whole setup required more than Kura could handle at that time. But now we do have support for Java 8 in Kura and there also is some basic support for running Kura on Karaf, including a docker image with the Kura emulator running on Karaf.

So I asked Scott for some help in getting this up and running, and the set of instructions was rather short. In the following examples I am assuming you are running RHEL 7, forgive me if you are not 😉

First we need to spin up a new Kura emulator instance:

sudo docker run -ti --net=host ctron/kura:karaf-stable

We are mapping all networking to the host instance, since we are using another port which is not configured in the upstream Dockerfile. There is probably another way, but this is just a quick example.

Then, inside the Karaf instance, install ECF. We configure it first to use “ecftcp” instead of MQTT. But of course you can also go with MQTT or some other adapter ECF provides:

property -p service.exported.configs ecf.generic.server
property -p ecf.generic.server.id ecftcp://localhost:3289/server

feature:repo-add http://download.eclipse.org/rt/ecf/kura.20161206/karaf4-features.xml
feature:install -v ecf-kura-karaf-bundlemgr

Now Kura is ready to go. Following up in the Eclipse IDE, you will need Neon running on Java 8:

Add the ECF 3.13.3 P2 repository using http://download.eclipse.org/rt/ecf/3.13.3/site.p2 and install the following components:

  • ECF Remote Services SDK
  • ECF SDK for Eclipse

Next install the preview components for managing Karaf with ECF. Please note, those components are previews and may or may not be released at some point in the future. Add the following P2 repository: http://download.eclipse.org/rt/ecf/kura.20161206/eclipseui and install the following components (disable “Group Items by Category”):

  • Remote Management API
  • Remote Management Eclipse Consumer

Now comes the fiddly part. This UI is a work in progress, you have been warned, but it works:

  • Switch to the Remote Services perspective
  • Open a new view: Window -> Show View -> Other… – Select Remote OSGi Bundles
  • Click one of the green + symbols (choose either MQTT or ECFTCP) and enter the address of your Karaf instance (localhost and 3289 for me)

You should already see some information about that target device now. But when you open a new view (as before) named Karaf Features you will also have the ability to tinker around with the Karaf installation.

If you just want to have a quick look, here it is:

ECF connecting to Kura on Karaf

Of course you don’t need to use an IDE for managing Karaf. But having such an integration as an option is a nice addition. And it shows how powerful a great OSGi setup can be 😉

Mattermost at Eclipse

About half a year back, Cédric and I started a test to see if Mattermost is a valuable tool for the Eclipse community. Failure was an option, since a new tool should bring some benefit to the community.

The server this instance was running on was sponsored by my current employer, IBH SYSTEMS GmbH. The test was scheduled to be terminated at the end of June. And since I will move to Red Hat, starting in July, we were forced to make a decision:

🙂 Mattermost now is a permanent, community-supported service of the Eclipse Foundation. Hosted on the Eclipse Foundation’s infrastructure, but supported by its community. We also dropped the “-test” in the domain name. So please update your links, if you have any. All content was migrated to the new server.

We also set up a new IRC bridge, which bridges the IRC channel #eclipse-dev on freenode to the Mattermost channel developers-qa.

Cédric and I proposed a talk for EclipseCon Europe 2016 to show the community what Mattermost is and how it can be used for engaging users and committers.

Happy chatting 😉

Eclipse Mattermost – What’s the state?!

A few weeks ago we started to test Mattermost as a communication channel for Eclipse Foundation projects. So, how is it going?

First of all Cédric Brun wrote a bunch of Java tasks which create Mattermost events based on various other Eclipse systems (like Gerrit, Bugzilla, Twitter, Forum, …) which integrate nicely with the Mattermost channels and allow each project to aggregate all these events in a single location. So you actually can get a notification about a new Forum entry for your Eclipse project in Mattermost now. Cool stuff! Many thanks!

From the usage side, we are currently approaching the 100 user mark. At the moment of writing there are 94 registered users, 24 public and 8 private channels, about 150 posts per day and between 10 and 20 active users. You can have a look at the statistics below.

So is it a success? Well, it goes in the right direction. Please don’t forget that this is just a test, but on the other hand there are people who are using it on a day-by-day basis for their Eclipse process. The usage of one big team with many channels (even multiple per project) seems to work fine. Single sign-on will be a topic; right now Mattermost has its own user management. Also, the setup for projects is still a manual process, which could be automated somehow. So there are a few topics left to solve.

But for me it is really amazing to see how quickly a new communication platform can be established when all people work together. And it seems that people accept the new system quite well. I really hope we can establish this as a permanent solution.

Mattermost Statistics 2016-01-29

Test driving “Mattermost” at the Eclipse Foundation

Thanks to @bruncedric and the Eclipse Webmasters we were able to quickly start a test of Mattermost at https://mattermost-test.eclipse.org.

“Mattermost” is a Slack/HipChat/… like web messaging system (aka webchat). I don’t want to go into too much detail of the system itself, but the main idea is to have a “faster-than-email” communication form for a team of people. Comparable to IRC, but more HTML5-ish. It also features a REST API, which can be used to automate inbound and outbound messages to the different channels.

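As a quick illustration, posting a message through an incoming webhook is a single HTTP request (the webhook URL token is a made-up placeholder; such hooks are created in the channel’s integration settings):

curl -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text": "Gerrit change 1234 was merged"}' \
  https://mattermost-test.eclipse.org/hooks/your-webhook-token
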
Why not Slack or HipChat? Simply because the Eclipse Foundation requires its IT components to be based on open source solutions and not rely on any service which can go away at any moment, without the possibility to rescue your data in a portable format. Which is quite a good approach, if you ask me. Just imagine you have years of data and lose it due to the fact that your service provider simply shuts down.

So right now there is a Mattermost instance at https://mattermost-test.eclipse.org which is intended to be a setup for testing “Mattermost” and figuring out how it can be used to give Eclipse projects a benefit. Simply adding more technical gimmicks might not always be a good idea.

Package Drone also has a channel at Mattermost.

So go ahead and give it a test run …

… if you have troubles or ideas … just look at eclipse/mattermost.

Package Drone – what’s next?!

Every now and then there is some time for Package Drone. So let’s peek ahead at what will happen in the next few weeks.

First of all, there is the Eclipse DemoCamp in Munich, at which Package Drone will be presented. So if you want to talk in person, come over and pay us a visit.

Also, I have been working on version 0.8.0. The more you think about it, the more ideas you get of what could be improved. If only I had the time. But finally it is time for validation! Channels and artifacts can be validated and the outcome will be presented in red and yellow, and a lot more detail ;-). This is a first step towards more things we hope to achieve with validation, like rejecting content and providing resolution mechanisms. Quick fix your artifacts 😉

Also, there are a few enhancements to make it easier for new users to start with Package Drone. “Channel recipes”, for example, set up and configure a channel for a specific purpose, just to name one.

Of course this is important since, with a little bit of luck, there will be an article in the upcoming German “Eclipse Magazin“, which might bring some new users. Helping them to have an easy start is always a good idea 😉

The next version also brings a new way to upload artifacts: a plain and simple HTTP request will do to upload a new artifact. While I would not call it “API”, it definitely is the starting point of exactly that. Planned is a command line client, and already available is the Jenkins plugin for Package Drone. It allows archiving artifacts directly to a Package Drone channel, including adding some metadata of the build.
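
A rough sketch of what such a plain HTTP upload could look like with curl; note that the exact URL layout and the metadata query parameter are hypothetical placeholders here, not a final API:

curl -X PUT --user deploy:<deploy-key> \
  --data-binary @target/my-artifact-1.0.0.jar \
  "http://packagedrone.example.com/upload/channel/my-channel/my-artifact-1.0.0.jar?note=built-by-jenkins"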

So, if you have more ideas, please raise an issue in the GitHub issue tracker.

Controlling the screen resolution of a Windows Guest in VirtualBox

Now I wanted to create another screencast for Package Drone and stumbled over the same issue again. Time to document it 😉

VirtualBox with the Windows Guest drivers installed allows for any screen resolution which you could ever think of. Just resize the guest window and the screen resolution of the guest system will adapt.

But what if you want to set a specific screen resolution, pixel perfect?! Windows allows you to change the screen resolution but does not allow you to enter width and height. You are stuck with a slider of presets.

Googling around you will find the idea of adding a custom screen resolution to that selection. However, it seems that for some users this works, for others it doesn’t. I am one of the latter users.

But there is a simple command which tells your guest session to change to a specific resolution, directly from the command line:

VBoxManage controlvm "My virtual machine" setvideomodehint 1280 720 32

This will tell the currently running virtual machine to change its resolution to 1280×720 at 32 bit color depth.
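
In case you are not sure about the exact name of the virtual machine, you can list all running machines first:

VBoxManage list runningvms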

Maven basic authentication fails

While working on Package Drone, I stumbled over an interesting issue.

Deploying to Package Drone using Maven requires a deploy key: a random token, generated by the server, which has to be used as either username or password in the HTTP basic authentication process.

This worked fine as long as I started “maven deploy” from inside the Eclipse IDE. Starting “maven deploy” using the external maven installation or from the command line caused:

Caused by: org.apache.maven.wagon.TransferFailedException: Failed to transfer file: http://localhost:8080/maven/m2test/de/dentrassi/test/felixtest1/0.0.1-SNAPSHOT/felixtest1-0.0.1-20150217.162541-1.jar. Return code is: 401, ReasonPhrase: Unauthorized.

Although I did configure Maven to use the correct credentials in the “settings.xml” file:

…
<server>
  <!-- note the empty user name, the deploy key is the password -->
  <username></username>
  <password>abc123</password>
  <id>pdrone.test</id>
</server>
…

After several hours of googling, source code reading and debugging Maven, the answer was actually pretty simple.

First of all, the embedded Maven instance in Eclipse uses “AetherRepositoryConnector” instead of “BasicRepositoryConnector” for accessing repositories. “Aether” simply takes the username and password values, as provided, and uses them.

The “BasicRepositoryConnector” however decided that an empty username (or an empty password) is not good at all and simply dropped the whole configuration without warning.

So in the end, introducing a dummy user name did the trick.
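
The working “settings.xml” entry then simply uses an arbitrary, non-empty user name together with the deploy key; the name “deploy” below carries no meaning:

…
<server>
  <!-- any non-empty user name will do -->
  <username>deploy</username>
  <password>abc123</password>
  <id>pdrone.test</id>
</server>
…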