IoT

26 posts

Build your own IoT cloud platform

If you want to do large-scale IoT and build your own IoT cloud platform, you will need a messaging layer which can actually handle it: not only the sheer load of messages and the number of connections. Even more important may be the ability to integrate your custom bits and pieces, and to make changes to every layer of that installation in a controlled, yet simple way.

An overview

Eclipse Hono is an open source project under the umbrella of the Eclipse IoT top-level project. It provides a set of components and services for building up your own IoT cloud platform:

Overview of Eclipse Hono IoT cloud platform

In a nutshell, Hono provides a framework for creating protocol adapters, and also delivers two “standard” protocol adapters, one for HTTP and one for MQTT. Both options are equally important to the project, because there will always be a special case for which you might want a custom solution.

Aside from the standard components, Hono also defines a set of APIs based on AMQP 1.0 in order to mesh in other services. Following the same idea as with custom protocol adapters, Hono allows you to hook up your custom device registry and your existing authentication/authorization system (read more about Eclipse Hono APIs).

The final direct or store-and-forward message delivery is offloaded to an existing messaging layer. The scope of Hono is to create an IoT messaging infrastructure by re-using an existing, use-case-agnostic messaging layer, not to create another one. In this post we will assume that EnMasse is used for that purpose, simply because EnMasse is the best choice for AMQP 1.0 when it comes to Kubernetes/OpenShift. It is a combination of Apache Qpid, Apache Artemis, Keycloak and some EnMasse-native components.

In addition to that, you will of course need to plug in your actual custom business logic, which leaves you with a zoo of containers. Don’t get me wrong, containers are awesome; just imagine you had to deploy all of this on a single machine.

Container freshness

But this also means that you need to take care of container freshness at some point, most likely making changes to your custom logic and maybe even to Hono itself. What is “container freshness”? Containers are great to use, and easy to build in the beginning. Simply create a Dockerfile, run docker build and you are good to go. You can also do this during your Maven release and have one (or more) final output container(s) for your release, like Hono does it for example. The big flaw here is that a container image is a stack of layers making up your final (application) image: starting with a basic operating system layer, adding additional tools, adding Java and maybe more, and finally your local bits and pieces (like the Hono services).

All those layers link to exactly one parent layer, and this link cannot be updated. So Hono 0.5 points to a specific version of the “openjdk” layer, which again points to a specific version of “debian”. But you want your IoT cloud platform to stay fresh and up to date. Now assume that there is some issue in any of the Java or Debian base layers, like a security issue in “glibc”. Unless Hono releases a new set of images, you are unable to get rid of this issue. In most cases you want to upgrade your base layers more frequently than your actual application layer.

Or consider the idea of using a different base layer than the Hono project had in mind. What if you don’t want to use Debian as a base layer? Or want to use Eclipse OpenJ9 instead of the OpenJDK JVM?

Building with OpenShift

When you are using OpenShift as a container platform (and Kubernetes supports the same approach) you can make use of image streams and builds. An image stream simply is a way of storing images and maintaining versions. When an image stream is created, it normally is empty. You can start to populate it with images, either by importing them from existing registries, like DockerHub or your internal ones, or by creating images yourself with a build running inside of OpenShift. Of course you are in charge of all operations, including tagging versions. This means that you can easily replace an image version, but in a controlled way, so no automatic update of a container will break your complete system.
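
For example, importing a base image into an image stream and mapping it to your own version tag could look something like this with the oc CLI (the image and tag names here are just examples):

oc import-image openjdk:8-jre --from=docker.io/library/openjdk:8-jre --confirm
# map the imported image to your internal versioning schema
oc tag openjdk:8-jre my-base:v1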

There are different types of builds. A rather simple one is the well known “Dockerfile” approach. You define a base image and add a few commands which will make up the new container layer. Then there is the “source-to-image” (S2I) build, which we will have a look at in a second.

Building & Image Streams

Now with that functionality you can define a setup like this:

Diagram of example image streams

The base image gets pulled in from an external registry, and during that process you map its versions to your internal versioning schema. What a move from “v1” to “v2” means in your setup is completely up to you.

The pulled-in image gets fed into a build step, which will produce a new image based on the defined parent, e.g. your custom base image. Maybe this means simply adding a few command line utilities or a policy file to the existing base image. The custom base image can then be used by the next build process to create an application-specific container, hosting your custom application. Again, which versioning schema you use is completely up to you.

If you like, you can also define triggers between these steps, so that when OpenShift pulls in a new image from the external source, or the source code in the git repository changes, all required builds get executed and finally the new application version gets deployed automatically. Old image versions may be kept so that you can easily switch back to an older version.

Source-to-Image (S2I)

Hono uses a plain Maven build and is based on Vert.x and Spring Boot. The default way of building new container images is to check out the sources from git and run a local Maven build. During the build, Maven wants to talk to some Docker daemon in order to assemble the new images and store them in its registry.

Now that approach may be fine for developers. But first of all it is quite a complex, manual job. And second, in the context described above, it doesn’t really fit.

As already described, OpenShift supports different build types to create new images. One of those build types is “S2I”. The basic idea behind S2I is that you define a builder container image which adheres to a set of entry and exit points: it processes the provided source and creates a new container image which can be used to actually run this source. For Java, Spring Boot and Maven there is an S2I image from “fabric8”, which can be tweaked with a few arguments. It will run a Maven build, find the Spring Boot entry point, take care of container heap management for Java, inject a JMX agent, …

That way, for Hono you can simply reuse this existing S2I image in a build template like:

source:
  type: Git
  git:
    uri: "https://github.com/eclipse/hono"
    ref: "0.5.x"
strategy:
  type: source
  sourceStrategy:
    from:
      kind: ImageStreamTag
      name: "fabric8-s2i-java:2.1"
    env:
      - name: MAVEN_ARGS_APPEND
        value: "-B -pl org.eclipse.hono:hono-service-messaging --also-make"
      - name: ARTIFACT_DIR
        value: "services/messaging/target"
      - name: ARTIFACT_COPY_ARGS
        value: "*-exec.jar"

This simple template allows you to reuse the complete existing Hono source code repository and build system. And yet you can start making modifications using custom base images or changes in Git right away.
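
Assuming the build configuration above was created under a name like hono-service-messaging (the name is just an example), a rebuild can then be triggered with a single command:

oc start-build hono-service-messaging --follow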

Of course you can reuse this for your custom protocol adapters as well, and for your custom application parts. In your development process you can still use plain Maven, Spring Boot or whatever you prefer. When it comes to deploying your stack in the cloud, you hand over the same build scripts to OpenShift and S2I and let your application be built in the same way.

Choose your stack

The beauty of S2I is that it is not tied to any specific language or toolset. In this case, for Hono, we used the “fabric8” S2I image for Java. But if you prefer to write your custom protocol adapter in something else, like Python, Go or .NET, you could still use S2I and the same patterns with that language and toolset.

Also, Hono supports creating protocol adapters and services in different (non-JVM based) languages. Hono components get meshed up using Hono’s AMQP 1.0 APIs, which apply the same flow control mechanisms to services as to the IoT data itself, so you can build your IoT cloud platform using the stack you prefer most.

… and beyond the infinite

OpenShift has a lot more to offer when it comes to building your platform. It is possible to use build pipelines, which allow workflows that publish to some staging setup before going to production, re-using the same generated images. Or things like:

  • Automatic key and certificate generation for the inter-service communication of Hono.
  • Easy management of Hono configuration files and logging configuration using “ConfigMaps”.
  • Application specific metrics generation to get some insight into application performance and throughput.

That would have been a bit too much for a single blog post. But I do encourage you to have a look at the OpenShift Hono setup in my forked Hono repository on GitHub, which makes use of some of this. This setup tries to provide a more production-ready deployment for Hono. However, it can only be seen as a reference, as any production-grade setup would definitely require replacements for the example device registry, a better tuned logging configuration and a few other tweaks of your personal preference ;-)

Hono also offers a lot more for building your own IoT cloud platform than this blog post can cover. One important aspect definitely is data privacy while supporting multiple tenants on the same instance. Hono already supports full multi-tenancy, down to the messaging infrastructure. This makes it a perfect solution for honoring data privacy in the public and private cloud. Read more about the new multi-tenant features of the next Hono version in Kai Hudalla’s blog post.

Take a look – EclipseCon France 2018

Dejan and I will have a talk about Hono at EclipseCon France in Toulouse (June 13-14). We will present Hono in combination with EnMasse as an IoT cloud platform. We will also bring the setup described above with us and would be happy to show you everything in action. See you in Toulouse.

CEP & Machine learning for IoT – Drools on Kura

Machine learning and predictive maintenance are the poster children of IoT use cases. Having an IoT gateway allows you to pre-process data before you send it upstream to your cloud. It also allows you to make local decisions, without the actual need for a cloud upload. But as you know from sitting at your favorite restaurant, staring at the menu, making decisions can be quite hard ;-) Complex event processing and machine learning models can help you with IoT use cases though.

Eclipse Kura is an open source IoT gateway with a focus on industrial use cases. Drools is an open source rule engine and, with Drools Fusion, provides complex event processing. It also supports making decisions based on Predictive Model Markup Language (PMML) models. So why not bring both components together?! PMML models can be used for all kinds of scenarios, but the classic IoT use case would probably be predictive maintenance, based on a PMML model generated by your machine learning solution in the cloud, sending the “learned” knowledge back to the edge gateway for local processing.

Installation

The Drools addon for Kura provides two DPs (deployment packages, the package type used by Kura) which can be deployed in order to extend Kura with Drools.

Note: If you don’t have a Raspberry Pi at hand, or don’t want to install Kura on some “real” device, you can always use the Kura emulator docker image (see also Developing for Eclipse Kura on Windows).

Setup

Once the components are installed, it is possible to create a new Drools instance by clicking on the blue “+” on the left side of the Kura services area. Create a new component of the type de.dentrassi.kura.addons.drools.component.DroolsInstance. Choose any unused ID and click “Apply”. It might be necessary to reload the Kura Web UI at this point, as the refresh doesn’t work properly. When the service is listed in the left hand “services” list, select it in order to configure it:

The actual rules document comes from the file SimpleScorecard.pmml of the PMML drools example mpbravo/brms-pmml-example. Also be sure to set the file type to “Predictive Model Markup Language”. Save the changes by clicking on “Apply”.

Next we will use Kura Wires in order to mesh up the model with some “data”. We need to use a timer as the input source, as Kura currently doesn’t offer any kind of value generator like a function or sine curve. So create a new “timer”; you can leave the default of 10 seconds. Add a new logger, which we simply use for testing; you should set the “verbosity” to “VERBOSE”. And then create a new “DroolsProcess” component with the following configuration:

  • ID: pmml1 – the ID of the Drools session
  • Fire all rules: true – after the fact has been injected, the rules have to be fired
  • Delete after fire: true – after the rules have been fired, we can remove the fact from the session
  • Fact Package: org.drools.scorecards.example – the package name, from the rules model
  • Fact Type: SampleScore – the type name, from the rules model
  • Inputs: age=TIMER – comma separated list mapping Wire record names to fact object properties
  • Outputs: result=scorecard_calculatedScore – comma separated list mapping Wire record names to fact object properties

Finally wire that all up:

Results

Looking at the Kura log file /var/log/kura.log should show you something like (where result is coming from the PMML model):

2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  d.d.k.a.d.c.w.DroolsProcess - Result - type: class java.lang.Double, value: 24.0
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger - Received WireEnvelope from de.dentrassi.kura.addons.drools.component.wires.DroolsProcess-1521129031885-7
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger - Record List content: 
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger -   Record content: 
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger -     result : 24.0
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger - 

Conclusion

Of course this example is rather trivial, and the actual model and the way we wired it up are just an example. But you will bring your own model, based on your machine learning solution, and have your own data to wire up. So go ahead and explore.

OPC UA solutions with Eclipse Milo

This article walks you through the first steps of creating an OPC UA solution based on Eclipse Milo. OPC UA, also known as IEC 62541, is an IoT communication protocol for connecting industrial automation systems. Eclipse Milo™ is an open-source, Java based implementation.

Please note: In the recent 0.3.x release of Milo the APIs have changed. This blog post still uses the old APIs of Milo 0.2.x. However, I wrote a new blog post which covers the changes in Milo 0.3.x compared to this one. Since the new post only covers the changes, I encourage you to read on, as everything else in this blog post is still valid.

What is OPC UA?

OPC UA is a point-to-point, client/server based communication protocol used in industrial automation scenarios. It offers APIs for telemetry data, command and control, historical data, alarming and event logs. And a bit more.

OPC UA is also the successor of OPC DA (AE, HD, …) and puts a lot more emphasis on interoperability than the older, COM/DCOM based variants. It not only offers a platform-neutral communication layer with security built right into it, but also a rich set of interfaces for handling telemetry data, alarms and events, historical data and more. OPC clearly has an industrial background, coming from the area of process control, PLCs and SCADA-like systems. It is also known as IEC 62541.

Looking at OPC UA from an MQTT perspective, one might ask: why do we need OPC UA? Where MQTT offers a completely undefined topic structure and data types, OPC UA provides a framework for standard and custom data types, a defined (hierarchical) namespace and a definition for request/response style communication patterns. Especially the type system, even with simple types, is a real improvement over MQTT’s BLOB approach. With MQTT you never know what is inside your message; it may be a numeric value encoded as a string, a JSON encoded object or even a picture of a cat. OPC UA, on the other hand, offers you a type system which carries the information about the types along with the actual values.

OPC UA’s subscription model also provides a really efficient way of transmitting data in a timely manner, while only transmitting data when necessary, as negotiated by client and server. In combination with the binary protocol this can be a real resource saver.

The architecture of Eclipse Milo

Traditionally, OPC UA frameworks are split up into “stack” and “SDK”. The “stack” is the core communication protocol implementation, while the “SDK” builds on top of that, offering a simpler application development model.

Eclipse Milo Components

Eclipse Milo offers both “stack” and “SDK” for both “client” and “server”. “core” is the common code shared between client and server. This should explain the module structure of Milo when doing a search for “org.eclipse.milo” on Maven Central:

org.eclipse.milo : stack-core
org.eclipse.milo : stack-client
org.eclipse.milo : stack-server
org.eclipse.milo : sdk-core
org.eclipse.milo : sdk-client
org.eclipse.milo : sdk-server

Connecting to an existing OPC UA server only requires you to use “sdk-client”, as all the other needed modules are transitive dependencies of this module. Likewise, creating your own OPC UA server only requires the “sdk-server” module.

Making contact

Focusing on the most common use case of OPC, data acquisition and command & control, we will now create a simple client which will read out telemetry data from an existing OPC UA server.

The first step is to look up the “endpoint descriptors” from the remote server:

EndpointDescription[] endpoints =
  UaTcpStackClient.getEndpoints("opc.tcp://localhost:4840")
    .get();

The trailing .get() might have tipped you off that Milo makes use of Java 8 futures and thus easily allows asynchronous programming. However, this example will use the synchronous .get() call in order to wait for the result, which makes the tutorial more readable as we look at the code step by step.
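
As a small sketch of the asynchronous style (using the same Milo 0.2.x API), you would simply chain a callback instead of blocking:

UaTcpStackClient.getEndpoints("opc.tcp://localhost:4840")
  .thenAccept(endpoints ->
    System.out.println("Found " + endpoints.length + " endpoints"));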

Normally the next step would be to pick the “best” endpoint descriptor. However “best” is relative and highly depends on your security and connectivity requirements. So we simply pick the first one:

OpcUaClientConfigBuilder cfg = new OpcUaClientConfigBuilder();
cfg.setEndpoint(endpoints[0]);

Next we will create and connect the OPC client instance based on this configuration. Of course the configuration offers a lot more options. Feel free to explore them all.

OpcUaClient client = new OpcUaClient(cfg.build());
client.connect().get();

Node IDs & the namespace

OPC UA identifies its elements (objects, folders, items) using “Node IDs”. Each server has multiple namespaces and each namespace has a tree of folders, objects and items. There is a browse API which allows you to walk through this tree, but this is mostly for human interaction. If you know the Node ID of the element you would like to access, then you can simply provide the node ID directly. Node IDs can be string encoded and might look something like ns=1;i=123. This example references a node in namespace #1 identified by the numeric ID “123”. Aside from numeric IDs, there are also string IDs (s=), UUID/GUID IDs (g=) and even a BLOB type (b=).

The following examples will assume that a node ID has been parsed into variables like nodeId, which can be done by the following code with Milo:

NodeId nodeIdNumeric = NodeId.parse("ns=1;i=42");
NodeId nodeIdString  = NodeId.parse("ns=1;s=foo-bar");

Of course instances of “NodeId” can also be created using the different constructors. This approach is more performant than using the parse method.

NodeId nodeIdNumeric = new NodeId(1, 42);
NodeId nodeIdString  = new NodeId(1, "foo-bar");

The main reason behind using Node IDs is that they can be encoded efficiently in the OPC UA binary protocol when they are transmitted. For example, it is possible to look up an item by a larger, string based browse path and then only use the compact numeric Node ID for further interaction.

Additionally, there is a set of “well known” node IDs. For example, the root folder always has the node ID ns=0;i=84. So there is no need to look those up; they can be considered constants and used directly. Milo defines all well known IDs in the Identifiers class.
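
For example:

NodeId root = Identifiers.RootFolder;       // ns=0;i=84
NodeId objects = Identifiers.ObjectsFolder; // ns=0;i=85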

Reading data

After the connection has been established we will request a single read of a value. Normally OPC UA is used in an event driven manner, but we will start simple by using a single requested read:

DataValue value =
  client.readValue(0, TimestampsToReturn.Both, nodeId)
    .get();

The first parameter, the max age, lets the server know that we are OK with reading a value which is a bit older. This could reduce traffic to the underlying device/system. However, by using zero as the parameter we request a fresh update from the value source.

The above call is actually a simplified version of a more complex read call. In OPC UA items have attributes, and there is a “main” attribute, the value. This call defaults to reading the “value” attribute. However, it is possible to read all kinds of other attributes from the same item. The next snippet shows the more complex read call, which allows you to not only read different attributes of the item, but also multiple items at the same time:

ReadValueId readValueId =
  new ReadValueId(nodeId, AttributeId.Value.uid(), null, null);

ReadResponse response = 
  client
    .read(0, TimestampsToReturn.Both, Arrays.asList(readValueId))
      .get();

Also, for this read call we request both the server and the source timestamp. OPC UA timestamps values, so you know when the value switched to the reported value. But it is also possible that the device itself does the timestamping. Depending on your device and application, this can be a real benefit for your use case.
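
Both timestamps, as well as the status code, can be read from the returned DataValue. A quick sketch, using the value from the simple read above:

System.out.format("source: %s, server: %s, status: %s%n",
  value.getSourceTime(), value.getServerTime(), value.getStatusCode());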

Subscriptions

As already explained, OPC UA can do way better than explicit single reads. Using subscriptions it is possible to have fine grained control over what you request and even how you request data.

When coming from MQTT, you know that you get updates once they are published. However, you have no control over the frequency of those updates. Imagine your data source is some temperature sensor. It probably is capable of supplying data at sub-microsecond resolution and frequency. But pushing a value update every microsecond, even if no one listens, is a waste of resources.

In OPC UA

OPC UA allows you to take control of the subscription process from both sides. When a client creates a new subscription, it will provide information like the number of in-flight events, the rate of updates, … The server has the ability to modify the request, but will try to adhere to it, and will then start serving the request. Also, data will only be sent if there are actual changes. Imagine a pressure sensor: the value may stay the same for quite a while, but then suddenly change rather quickly. Re-transmitting the same value over and over again, just to achieve high frequency updates when an actual change occurs, is again a waste of resources.

OPC UA Subscriptions Example

In order to achieve this in OPC UA, the client will request a subscription from the server; the server will fulfill that subscription and notify the client of both value changes and subscription state changes. So the client knows whether the connection to the device is broken or whether there are simply no updates of the value. If no value changes occur, nothing will be transmitted. Of course there is a heartbeat on the OPC UA connection level (one for all subscriptions), which ensures detection of communication loss as well.

In Eclipse Milo

The following code snippets will create a new subscription in Milo. The first step is to use the subscription manager and create a new subscription context:

// what to read
ReadValueId readValueId =
    new ReadValueId(nodeId, AttributeId.Value.uid(), null, null);

// monitoring parameters
int clientHandle = 123456789;
MonitoringParameters parameters =
    new MonitoringParameters(uint(clientHandle), 1000.0, null, uint(10), true);

// creation request
MonitoredItemCreateRequest request =
    new MonitoredItemCreateRequest(readValueId, MonitoringMode.Reporting, parameters);

The “client handle” is a client-assigned identifier which allows the client to reference the subscribed item later on. If you don’t need to reference items by client handle, simply set it to some random or incrementing number.
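
A simple incrementing counter will do, for example (just a sketch, not part of the Milo API; uint is the same helper used in the snippet above, AtomicInteger comes from java.util.concurrent.atomic):

// shared counter, e.g. a field
AtomicInteger clientHandles = new AtomicInteger();

// per monitored item, when building the MonitoringParameters
UInteger clientHandle = uint(clientHandles.incrementAndGet());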

The next step will define an initial setup callback and add the items to the subscription. The setup callback will ensure that the newly created subscription will be able to hook up listeners before the first values are received from the server side, without any race condition:

// The actual consumer

BiConsumer<UaMonitoredItem, DataValue> consumer =
  (item, value) ->
    System.out.format("%s -> %s%n", item, value);

// setting the consumer after the subscription creation

BiConsumer<UaMonitoredItem, Integer> onItemCreated =
  (monitoredItem, id) ->
    monitoredItem.setValueConsumer(consumer);

// creating the subscription

UaSubscription subscription =
    client.getSubscriptionManager().createSubscription(1000.0).get();

List<UaMonitoredItem> items = subscription.createMonitoredItems(
    TimestampsToReturn.Both,
    Arrays.asList(request),
    onItemCreated)
  .get();

Taking Control

Of course consuming telemetry data is fun, but sometimes it is necessary to issue control commands as well. Issuing a command or setting a value on the target device is as easy as:

client
  .writeValue(nodeId, DataValue.valueOnly(new Variant(true)))
    .get();

This snippet will send the value TRUE to the object identified by nodeId. Of course, like for the read, there are more options when writing values/attributes of an object.
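
The write call also returns a status code, so checking whether the server accepted the value could look like this (a small sketch based on the same API):

StatusCode status = client
  .writeValue(nodeId, DataValue.valueOnly(new Variant(true)))
    .get();

if (status.isGood()) {
  System.out.println("Write was accepted");
} else {
  System.out.println("Write failed: " + status);
}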

But wait … there is more!

This article only scratched the surface of the complex topic of OPC UA. But it should have given you a brief introduction to OPC UA and Eclipse Milo, and it should get you started with OPC UA from a client perspective.

Watch out for the repository ctron/milo-ece2017, which will receive some more content on OPC UA and Eclipse Milo and which will be the accompanying repository for my talk Developing OPC UA with Eclipse Milo™ at EclipseCon Europe 2017.

I would like to thank Kevin Herron not only for helping me with this article, but also for helping me so many times to understand Milo and OPC UA.

AsyncAPI Java Tools 0.0.4 released

It started as a proof-of-concept, checking out AsyncAPI and learning something about it. Actually, that topic came to me from two different angles. We were looking into solutions for RPC over messaging (namely AMQP 1.0) instead of HTTP/REST. The second angle was a discussion with a very good friend of mine about “how do you define, today, an API for messages sent from the server to the client”. Everyone is talking about Swagger and OpenAPI, but this all seems to be request/response for HTTP based APIs.

And then someone mentioned AsyncAPI, so I had to look into it. This is what I came up with so far.

AsyncAPI tools

ctron/asyncapi is a set of tools, written in Java, around AsyncAPI in general: a parser reading the specification file into a Java model, a code generator for client and server side code, and a few base classes which the generated code makes use of.

The definition file parser is still not optimal. The idea is that the definition is based on JSON Schema, but I couldn’t find any good support for JSON Schema in Java at the moment. So right now it is all home-brew, and although it does get the job done, I don’t think it is pretty.

Code generation

The code generation is a bit more sophisticated. The idea of taking it out of the Maven plugin came pretty early. I guess it would be cool to have a Gradle plugin at some point in the future, and since the Maven plugin really is just a small wrapper around the code generator, that should easily be possible.

The code generation is backed by Eclipse JDT, which allows you to create and parse Java code in a DOM style way. I am constantly torn between liking and hating this at the same time. I did work with generic model-to-text tools in the past, and tools like Acceleo or Xpand are way easier to read than Java code generating Java code in that way. On the other hand, you would need a full-blown Ecore model first and then wrap this again in a Maven plugin; I am not sure that is fun either. Also, using the DOM approach, it is quite simple to write extension modules for the code generation, which allow you to actually process the generated Java DOM and extend it, without the requirement to write an awfully complex code generation template. So let’s see where this is going in the future.

What works?

Don’t expect production-ready code … yet ;-) Currently the code generator will create the defined types/schemas, the messages and their payloads. Topics will be parsed into services, versions and actions, generating Client and Server interfaces. There is also a client and server implementation using AMQP and Qpid JMS, which already allows you to communicate between client and server. Right now Gson is used for the JSON payload serialization, but I think it should be pretty simple to swap this with e.g. Jackson.

What does not work?

The AsyncAPI specification actually defines a bit more than the tooling can currently handle. The server section is missing, and various metadata like license and descriptions are not supported. But the most important thing which is currently missing, IMHO, is an implementation backed by MQTT. Not that I am a big fan of MQTT, I would prefer AMQP in this case. But I would like to see two different AsyncAPI partners communicate via MQTT/JSON, one of them generated by this toolset and the other one based on something completely different.

JSON Schema is also more powerful than what the parser can currently handle. I am not sure if JSON Schema and Java are a very good fit, but if you want to go AsyncAPI, then you need to work with JSON Schema. And this toolset needs to do a better job here.

AsyncAPI Maven Plugin

The AsyncAPI Maven plugin takes the code generator from the main tools project and wraps it into a Maven plugin. The idea is to simply drop in your AsyncAPI YAML file and let the Maven plugin generate the code for it. Of course this is Eclipse M2E aware, so that you can simply save your YAML file and Eclipse will generate new code for you on the fly.

Examples

Interested in what this looks like in the form of source code? Well, here are some examples. Also be sure to check out my examples repository.

First we need to create a builder for either the server or the client implementation:

Builder<JmsClient> builder = JmsClient.newBuilder()
  .host("localhost")
  .profile(AmqpProfile.DEFAULT_PROFILE)
  .payloadFormat();

Next create an instance from it, the following is a client instance which will listen to some server-side event:

try (JmsClient client = builder.build();
     ListenerHandle listener =
        client.accounts()
          .eventUserSignup().subscribe(System.out::println)) {

  System.out.println("Waiting for messages…");
  Thread.sleep(Long.MAX_VALUE);

}

Then of course we need a server to publish messages:

try (JmsServer client = builder.build()) {

  client.accounts().eventUserSignup()
    .publish(newUserMessage())
      .toCompletableFuture()
      .get();

}

Of course there is no need to actively wait for the message to be sent with get() if you don’t have to. But you can use the AsyncAPI in a synchronous way if you like ;-)
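
A minimal sketch of the non-blocking variant, re-using only the generated calls shown above plus plain CompletableFuture chaining:

client.accounts().eventUserSignup()
  .publish(newUserMessage())
    .toCompletableFuture()
    .whenComplete((ok, error) -> {
      if (error != null) {
        error.printStackTrace();
      }
    });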

What is next?

There is a lot to do. Really, a lot! I would like to make an interop test with MQTT. There are several fields from the specification which are currently not supported. The server side JMS API isn’t really suitable for JEE style programming, especially when it comes to container managed JMS.

So I hope I will find some time to work on the MQTT backed implementation, because that would validate that two different AsyncAPI tools could work together. And I think this is what it is all about.

Before I forget…

The initial reason why I looked into AsyncAPI was to get message based request/response. Well, that is something which AsyncAPI doesn’t really provide. But, to be fair, it would only require a few changes to add this to the specification.

Kapua micro client SDK, running on a microcontroller

A few weeks back, while at EclipseCon France, I stumbled over a nice little gadget. There was a talk from MicroEJ about Java on microcontrollers, and they were showing an IoT related demo based on their development environment. It seemed they had Eclipse Paho (including TLS) and Google Protobuf running on their JVM without too much trouble.

ST Board with Thermocloud

My first idea was to simply drop the Kapua Gateway Client SDK on top of it, implementing the cloud facing API of MicroEJ, and let their IoT demo publish data towards Kapua.

After a few days I was able to order such an STM32F746G-DISCO board myself and play around with it a little bit. It quickly turned out that, while it was pretty easy to drop some Java code on the device, using the gateway client SDK was not an option. The MicroEJ JVM is based on Java CLDC 8. Sounds like Java 8, right? Well, it is more like Java 7. Aside from a few classes which are missing, the core features missing were Java 8’s lambdas and the enhancements to interfaces (like static methods and default methods).

Rewriting the gateway client SDK in Java 7, dropping the shiny API which we currently have, didn’t sound very appealing. But then again, implementing the Kapua communication stack actually isn’t that complicated and such an embedded device wouldn’t really need the extensibility and modularity of the Java 8 based gateway client SDK. So in a few hours there was the Kapua micro client SDK, which doesn’t consume any dependencies other than Paho and Protobuf and also only uses a minimal set of Java 7 functionality.

The second step was to implement the MicroEJ specific APIs and map the calls to the Kapua micro SDK, which wasn’t too difficult either. So now it is possible to simply install the “Kapua Data Channel Provider” from the MicroEJ Community Store. Alternatively you can compile the sources yourself as the code for this adapter is also on GitHub. Once the data channel provider is installed you can fire up any application consuming the DataChannel API, like the “Thermocloud” application, and publish data to Kapua. Please be sure to follow the installation instructions on the Kapua data channel provider for configuring the connection to your Kapua instance.

Kapua Data Channel Provider
Data from Thermocloud in Kapua

As the micro client is capable of running on Java 7, it might also be a choice for people wanting to connect from Android to Kapua without the need to go for Java 8. Since Java 8 on Android still seems to be rather painful, this could be an option.

I would like to thank Laurent and Frédéric from MicroEJ, who helped me fix all the noob issues I had.

Just a bit of Apache Camel

Sometimes you write something and then nearly forget that you did it … even though it is quite handy. So here are a few lines of Apache Camel XML, running in Eclipse Kura:

I just wanted to publish some random data from Kura to Kapua, without the need to code, deploy or build anything. Camel came to the rescue:

<routes xmlns="http://camel.apache.org/schema/spring">
  <route id="route1">

    <from uri="timer:1"/>
    <setBody><simple>${bean:payloadFactory.create("value", ${random(100)})}</simple></setBody>
    <to uri="kura-cloud:myapp/topic"/>

  </route>
</routes>

Dropping this snippet into the default XML Camel router:

  • Registers a Kura application named myapp
  • Creates a random number between 0 and 100 every second
  • Converts this to the Kura Payload structure
  • And publishes it on the topic topic

Talking to the cloud

While working on Eclipse Kapua, I wanted to run different tests, pushing telemetry data into the system. So I started to work on the Kura simulator, which can be used to simulate an Eclipse Kura IoT gateway in a plain Java project, with no special setup required. That helped a lot for unit testing and scale testing. Even generating a few simple telemetry data streams for simulating data works out of the box.

But then again, I wanted to have something more lightweight and controllable. With the simulator you actually derive from a simple class and are fully controlled by the simulator framework. That may work well in some cases, but in others you may want to turn over control to the actual application. Assume you already have a component which is “in charge” of your data, and now you want to push this into the cloud. Of course you can do this somehow, working around that. But creating a nice API for it, which is simple and easy to understand, is way more fun ;-)

So here is my take on a Gateway Client API, sending IoT data to the cloud, consuming command & control from it.

Intentions

I wanted to have a simple API: easy to understand, readable, preventing you from making mistakes in the first place. And if something goes wrong, it should go wrong right away. Currently we go with MQTT, but there would be an option to go with HTTP as well, or AMQP in the future. Also, for MQTT we have Eclipse Paho and FUSE MQTT. Both should be available; both may have special properties, but they share some common ground. So implementing new providers should be possible, while sharing code should be easy as well.

Example

Now here is what I came up with:

try (Client client = KuraMqttProfile.newProfile(FuseClient.Builder::new)
  .accountName("kapua-sys")
  .clientId("foo-bar-1")
  .brokerUrl("tcp://localhost:1883")
  .credentials(userAndPassword("kapua-broker", "kapua-password"))
  .build()) {

  try (Application application = client.buildApplication("app1").build()) {

    // subscribe to a topic

    application.data(Topic.of("my", "receiver")).subscribe(message -> {
      System.out.format("Received: %s%n", message);
    });

    // cache sender instance

    Sender sender = application
      .data(Topic.of("my", "sender"))
      .errors(ignore());

    int i = 0;
    while (true) {
      // send
      sender.send(Payload.of("counter", i++));
      Thread.sleep(1000);
    }
  }
}

Looks pretty simple, right? In the background the MQTT connection is managed, payloads get encoded, birth certificates get exchanged and subscriptions get managed. But still, the main application is in control of the data flow.

How to do this at home

If you want to have a look at the code, it is available on GitHub (ctron/kapua-gateway-client) and ready to consume on Maven Central (de.dentrassi.kapua). But please be aware of the fact that this is a proof-of-concept, and may never become more than that.

Simply adding the following dependency to your project should be enough:

<dependency>
  <groupId>de.dentrassi.kapua</groupId>
  <artifactId>kapua-gateway-client-provider-mqtt-fuse</artifactId>
  <version>0.2.0</version> <!-- check for a more recent version -->
</dependency>

With this dependency you can use the example above. If you want to go for Paho instead of FUSE, use kapua-gateway-client-provider-mqtt-paho instead.
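
The coordinates stay the same apart from the artifact name:

<dependency>
  <groupId>de.dentrassi.kapua</groupId>
  <artifactId>kapua-gateway-client-provider-mqtt-paho</artifactId>
  <version>0.2.0</version> <!-- check for a more recent version -->
</dependency>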

Taking it for a test drive

Now taking this for a test drive is even more fun. Eclipse SmartHome has the concept of a persistence system, where telemetry data gets stored in a time-series-like database. There is a default implementation based on rrd4j. So re-implementing this interface for Kapua was quite easy and resulted in an example module which can be installed into the Karaf based OpenHAB 2 distribution with just a few commands:

openhab> repo-add mvn:de.dentrassi.kapua/karaf/0.2.0/xml/features
openhab> feature:install eclipse-smarthome-kapua-persistence

Then you need to re-configure the component via the “Paper UI” and point it towards your Kapua setup. Maybe you will need to tweak the “kapua.persist” file in order to define what gets persisted and when. And if everything goes well, your temperature readings will get pushed from SmartHome to Kapua.

OPC UA with Apache Camel

Apache Camel 2.19.0 is close to its release, and the OPC UA component called “camel-milo” will be part of it. This is my Eclipse Milo backed component which was previously hosted in my personal GitHub repository ctron/de.dentrassi.camel.milo. It has now been accepted into Apache Camel and will be part of the 2.19.0 release. As there is already a release candidate available, I think it is a great time to give a short introduction.

In a nutshell OPC UA is an industrial IoT communication protocol for acquiring telemetry data and command and control of industrial grade automation systems. It is also known as IEC 62541.

The Camel Milo component offers both an OPC UA client (milo-client) and server (milo-server) endpoint.

Running an OPC UA server

The following Camel example is based on Camel Blueprint and provides some random data over OPC UA, acting as a server:

Example project layout

The blueprint configuration would be:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="
	http://www.osgi.org/xmlns/blueprint/v1.0.0 https://osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
	http://camel.apache.org/schema/blueprint https://camel.apache.org/schema/blueprint/camel-blueprint.xsd
	">

	<bean id="milo-server"
		class="org.apache.camel.component.milo.server.MiloServerComponent">
		<property name="enableAnonymousAuthentication" value="true" />
	</bean>

	<camelContext xmlns="http://camel.apache.org/schema/blueprint">
		<route>
			<from uri="timer:test" />
			<setBody>
				<simple>${random(0,100)}</simple>
			</setBody>
			<to uri="milo-server:test-item" />
		</route>
	</camelContext>

</blueprint>

And adding the following Maven build configuration:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

	<modelVersion>4.0.0</modelVersion>

	<groupId>de.dentrassi.camel.milo</groupId>
	<artifactId>example1</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<packaging>bundle</packaging>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<camel.version>2.19.0</camel.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>ch.qos.logback</groupId>
			<artifactId>logback-classic</artifactId>
			<version>1.2.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.camel</groupId>
			<artifactId>camel-core-osgi</artifactId>
			<version>${camel.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.camel</groupId>
			<artifactId>camel-milo</artifactId>
			<version>${camel.version}</version>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.felix</groupId>
				<artifactId>maven-bundle-plugin</artifactId>
				<version>3.3.0</version>
				<extensions>true</extensions>
			</plugin>
			<plugin>
				<groupId>org.apache.camel</groupId>
				<artifactId>camel-maven-plugin</artifactId>
				<version>${camel.version}</version>
			</plugin>
		</plugins>
	</build>

</project>

This allows you to simply run the OPC UA server with:

mvn package camel:run

Afterwards you can connect with the OPC UA client of your choice and subscribe to the item test-item, receiving that random number.

Release candidate

As this is currently the release candidate of Camel 2.19.0, it is necessary to add the release candidate Maven repository to the pom.xml. I did omit this in the example above, as it will no longer be necessary once Camel 2.19.0 is released:

<repositories>
	<repository>
		<id>camel</id>
		<url>https://repository.apache.org/content/repositories/orgapachecamel-1073/</url>
	</repository>
</repositories>

<pluginRepositories>
	<pluginRepository>
		<id>camel</id>
		<url>https://repository.apache.org/content/repositories/orgapachecamel-1073/</url>
	</pluginRepository>
</pluginRepositories>

It may also be that the URLs above will change as a new release candidate gets built. In this case you need to update them to the appropriate repository URL.

What’s next?

Once Camel 2.19.0 is released, I will also mark my old, personal GitHub repository as deprecated and point people towards this new component.

And of course I am happy to get some feedback and suggestions.

Simulating telemetry streams with Kapua and OpenShift

Sometimes it is necessary to have some simulated data instead of fancy sensors attached to your IoT setup. As Eclipse Kapua starts to adopt Elasticsearch, it seemed necessary to actually test the inbound telemetry stream of Kapua: data coming from the gateway, being processed by Kapua, then stored into Elasticsearch and finally retrieved back from Elasticsearch over the Kapua REST API. A lot can go wrong here ;-)

The Kura simulator, which is now hosted in the Kapua repository, seemed to be the right place to do this. That way we can not only test this inside Kapua, but also allow different use cases for simulating data streams outside of unit tests, and we can leverage the existing OpenShift integration of the Kura simulator.

The Kura simulator now also has the ability to send telemetry data. In addition to that, there is a rather simple simulation model which can use existing value generators and map those to a more complex metric setup.

From a programmatic perspective, creating a simple telemetry stream would look like this:

GatewayConfiguration configuration = new GatewayConfiguration("tcp://kapua-broker:kapua-password@localhost:1883", "kapua-sys", "sim-1");
try (GeneratorScheduler scheduler = new GeneratorScheduler(Duration.ofSeconds(1))) {
  Set<Application> apps = new HashSet<>();
  apps.add(simpleDataApplication("data-1", scheduler, "sine", sine(ofSeconds(120), 100, 0, null)));
  try (MqttAsyncTransport transport = new MqttAsyncTransport(configuration);
       Simulator simulator = new Simulator(configuration, transport, apps);) {
      Thread.sleep(Long.MAX_VALUE);
  }
}

The Generators.simpleDataApplication creates a new Application from the provided map of functions (Map<String,Function<Instant,?>>). This is a very simple application, which reports a single metric on a single topic. The Generators.sine function returns a function which creates a sine curve using the provided parameters.

Now one might ask: why is this a Function<Instant,?>, wouldn’t a simple Supplier be enough? There is a good reason for that. The expectation of the data simulator is actually that the telemetry data is derived from the provided timestamp. This is done in order to generate predictable timestamps and values along the communication path. In this example we only have a single metric in a single instance, but it is possible to scale up the simulation to run 100 instances on 100 pods in OpenShift. In this case each simulation step in each JVM would receive the same timestamp, and thus each of those 100 instances should generate the same values, sending the same timestamps upwards to Kapua. Validating this information later on becomes quite easy, as you can not only measure the time delay of the transmission, but also check if there are inconsistencies in the data, gaps or other issues.
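
As an illustration (my own sketch, not part of the simulator), a custom generator function which derives its value purely from the provided timestamp could look like this, so that every instance produces the same value for the same instant:

// java.time.Instant and java.util.function.Function: the value is reproducible per timestamp
Function<Instant, Double> sawtooth =
  timestamp -> (timestamp.toEpochMilli() % 60_000) / 60_000.0 * 100.0;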

When using the SimulationRunner, it is possible to configure data generators instead of coding:

{
 "applications": {
  "example1": {
   "scheduler": { "period": 1000 },
   "topics": {
    "t1/data": {
     "positionGenerator": "spos",
     "metrics": {
      "temp1": { "generator": "sine1", "name": "value" },
      "temp2": { "generator": "sine2", "name": "value" }
     }
    },
    "t2/data": {
     "metrics": {
      "temp1": { "generator": "sine1", "name": "value" },
      "temp2": { "generator": "sine2", "name": "value" }
     }
    }
   },
   "generators": {
    "sine1": {
     "type": "sine", "period": 60000, "offset": 50, "amplitude": 100
    },
    "sine2": {
     "type": "sine", "period": 120000, "shift": 45.5, "offset": 30, "amplitude": 100
    },
    "spos": {
     "type": "spos"
    }
   }
  }
 }
}

For more details about this model see: Simple simulation model in the Kapua User Manual.

And of course this can also be managed with the OpenShift setup. Loading a JSON file works like this:

oc create configmap data-simulator-config --from-file=KSIM_SIMULATION_CONFIGURATION=../src/test/resources/example1.json
oc set env --from=configmap/data-simulator-config dc/simulator

Finally it is now possible to visually inspect this data with Grafana, directly accessing the Elasticsearch storage:

Developing for Eclipse Kura on Windows

Every now and then it is fun to leave the environment you are used to and do something completely different. So this journey took me to IntelliJ and Windows 10. And yes, I am glad to be back in Linux/Eclipse land. But still, I think something rather interesting came out of this.

It all started when I helped my colleague Aurélien Pupier to get his environment ready for his talk at the Eclipse IoT Day Grenoble. If you missed this talk, you can watch a recording of it. He wanted to present the Camel developer tools. The problem was that he was working on a Windows laptop, and he wanted to demonstrate Eclipse Kura in combination with the JBoss Tools IDE. However, Kura can only run on Linux, and he wanted to run JBoss Tools natively on his Windows machine.

Of course you could come up with some sort of virtual machine setup, but we wanted something which was easier to reproduce in case there was some issue with the laptop for the presentation.

Creating a docker image of Kura

The first step was to create a docker image of Kura. Currently Kura doesn’t offer any support for Docker, so that had to be created from scratch. As there is no x86_64 distribution of Kura and no emulator distribution, it was necessary to do some rather unusual hacks. The background is that Kura has a rather crude build system which assembles a few distributions at the end of the build. Kura also requires some hardware interfaces in order to work properly. For those hardware interfaces there exist emulator replacements for use in a local developer setup. However, there is neither a distribution assembly for x86_64 nor one using the emulator replacements, and the whole build system in the end is focused on creating Linux-only images. The solution was to simply rip out all functionality which was in the way and create a patch file.

This patch file and the docker build instructions are now located in a different repository, where I can easily modify them and hook them up to the DockerHub build system: ctron/kura-emulator. Currently there are three tags for docker images in the Kura emulator DockerHub repository: latest (the most recent, but stable release), 3.0.0-RC1 and develop (most recent, less stable). As there is currently no released version of Kura 3.0.0, the latest tag also uses the develop branch of Kura. The 3.0.0-RC1 tag is a stable version of the emulator which is known to work and won’t be updated in the future.

There is a more detailed README file in the GitHub repository which explains how to use and build the emulator yourself. In a nutshell you can start it with:

docker run -ti -p 8080:8080 ctron/kura-emulator

And afterwards you can navigate with your browser to http://localhost:8080 and use the Kura Web UI.

As Docker is also available for Windows, this will work the same way on either Linux or Windows, and although I didn’t test it, it should also work on Mac OS X.

JMX & Debugging

As the Camel tooling makes use of JMX, it was necessary to also enable JMX support for Kura, which normally is not available. By setting the JAVA_OPTS environment variable it is not only possible to enable JMX, but also plain Java debugging for the docker image. Of course you will need to publish the selected ports with -p when running the docker image. And on Windows you cannot simply use localhost; you will need to use the IP addresses created by Docker for Windows (also see the README.md).
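
A rough sketch of what such an invocation could look like; the JMX and debug flags are generic JVM options, not something the image defines, and the port numbers are just examples:

docker run -ti -p 8080:8080 -p 9010:9010 -p 5005:5005 \
  -e JAVA_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005" \
  ctron/kura-emulator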

Drop in & activate

After the conference was over, I started to think about what we had actually achieved by doing all this. We had a ready-to-run Kura image, dockerized, capable of running on Windows (with Docker), debuggable. The only part which was still missing was the ability to add a new, custom bundle to the emulator.

Apache Felix File Install to the rescue! In the past I created an Apache Felix File Install DP for Kura (DP = deployment package for Kura). File Install monitors a directory and automatically loads, unloads and updates OSGi JAR files which you drop into this directory. The DP can simply be dropped into Kura, extending Kura with this File Install functionality.

So I pre-seeded the Kura docker image with the File Install DP and declared a volume mount, so that you can simply mount a path of the docker image onto your host system. Dropping a file into the directory on the host system will make it available to the docker container, and File Install will automatically pick it up and start it inside the docker container.

docker run -ti -p 8080:8080 -v c:/path/to/bundles:/opt/eclipse/kura/load ctron/kura-emulator

And this even works with Docker for Windows, if you share your drive first:

Share drive with Docker

Choose your tools

Currently Kura requires a rather complicated setup for developing Kura applications. You will need to learn about Eclipse PDE, target platforms, Tycho for Maven and a bunch of other things to get your Kura application developed, built and packaged.

I already created a GitHub repository showing a different way to develop Kura applications: ctron/kura-examples. Those projects use plain Maven, the maven-bundle-plugin and my osgi-dp plugin to create the final DP. The examples also make use of the newer OSGi annotations instead of requiring you to craft all OSGi metadata by hand.

So if you wanted, you could already use your favorite IDE and start developing Kura applications with style. But in order to run them, you still needed a Kura device. With this docker image you can now simply let the emulator run and let File Install pick up the compiled results:

Summary

So yes, it is possible to use IntelliJ on Windows to develop and debug your Kura application, in a stylish fashion. Or you can simply do the same, just using an excellent IDE like Eclipse and an awesome operating system like Linux, with the same stylish approach ;-)