IoT

Integrating Eclipse IoT

The Eclipse IoT project is a top-level project at the Eclipse Foundation. It currently consists of around 40 projects, which focus on different aspects of IoT. These may be complete solutions, like the Eclipse SmartHome project or Eclipse 4DIAC, the PLC runtime and IDE. Or they may be building-block projects, like the MQTT libraries of Eclipse Paho or the cloud-scale IoT messaging infrastructure of Eclipse Hono. I can only encourage you to have a look at the list of projects and do a bit of exploring.

And while it is great to have a diverse set of projects, covering the three tiers of IoT (Device, Gateway and Cloud), it can be a challenge to explain to people how all of those projects can create something which is bigger than the individual parts. Because having 40 different IoT projects is great, but imagine the possibilities of having a whole IoT ecosystem of projects: mixing and matching, building your IoT solution as you see fit.

Apache Camel Java DSL in combination with Eclipse Kura Wires

In part #1 and part #2, we saw how easy it is to interface Apache Camel with Kura Wires, simply by re-using some existing functionality. A few lines of XML and Groovy, and you can already build an IoT solution based on the Camel ecosystem and the Eclipse Kura runtime. This part will focus on the Java DSL of Apache Camel.

It will also take into account that, when you develop and deploy an application, you need some kind of development, test and integration environment. When you build something, no matter how big, based on Camel or Kura Wires, you do want to test it. You want to have unit tests, and the capability to automatically test whether your solution works, or still works after you make changes.

Using Kura Wires alone, this can be a problem. But Camel offers you a way to easily run your solution in a local IDE, debugging the whole process. You can have extra support for debugging Camel specific constructs like routes and endpoints. Camel has support for JUnit, and e.g. by using the “seda” endpoints, you can create an abstraction layer between Camel and Wires.
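
For example, a minimal JUnit 4 test along these lines could feed emulated Wires events into such a route (the route and payload here are made up for illustration, they are not taken from the actual project):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class WiresRouteTest extends CamelTestSupport {

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            @Override
            public void configure() {
                // stand-in for the real route: consume what Wires would deliver
                from("seda:input1").to("mock:result");
            }
        };
    }

    @Test
    public void testWiresEvent() throws Exception {
        final MockEndpoint mock = getMockEndpoint("mock:result");
        mock.expectedMessageCount(1);

        // emulate a Kura Wires event by feeding the seda endpoint directly
        template.sendBody("seda:input1", "some payload");

        mock.assertIsSatisfied();
    }
}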

The goal

I’ll make this one up (and yes, let’s try to keep it realistic). We have a device, and this device allows setting two parameters for its operation (P1 and P2, both floating point numbers). Now we already have the device connection set up in Kura. Maybe using Modbus, or something else. Kura can talk to it using Kura Wires and that is all that counts.

Now we do get two additional requirements. There is some kind of operating panel next to the device, which should allow viewing and setting those parameters locally. Also, those parameters should be accessible, using IEC 60870-5-104, for an additional device, right next to the Kura gateway.

All of those operations have to be local only, and still work when no connection to the cloud is possible. But of course, we don’t want to lose the ability to monitor the parameters from our cloud system.

The operator panel will, of course, be programmed in the newest and hippest Web UI technology possible. It is super easy for it to fire a few HTTP API calls and encode everything in JSON, while, at the same time, the IEC 60870 layer has no idea about complex data structures. The panel application will send a full update of both parameters, while the 60870 client, due to the nature of this protocol, will only send one parameter at a time.
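
So the panel might PUT a complete JSON document like the following (the field names are only an assumption for illustration, matching the example later on), while the 60870 side delivers each value as a single floating point data item:

{"setpoint1": 1.5, "setpoint2": 2.5}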

Doesn’t sound too unrealistic, does it?

The project structure

The full project source code is available at ctron/kura-examples, on GitHub. So this blog post will only focus on some important spots of the whole project.

The project is a standard Maven project, and thus has the typical project structure:

Maven Camel project structure

There are only three important differences to a standard Java Maven project:

The packaging type is bundle, which requires the maven-bundle-plugin. It will create an OSGi bundle JAR, instead of a plain JAR file. This is required as the Kura IoT gateway is based on OSGi.

We will create a “DP” package at the end of the build, using the OSGi DP Maven Plugin. This package can directly be uploaded to the Kura instance. As this plugin does include direct dependencies, but doesn’t include transitive dependencies (on purpose), the project declares a few dependencies as “provided” to prevent them from being re-packaged in the final DP package.

The project uses the maven-antrun-plugin to download and unpack the static Swagger UI resources. Swagger UI is just a convenience for playing around with the REST API later on. Camel will take care of creating the OpenAPI (Swagger) JSON description, even if the Swagger UI part is removed. So in a production setup, you most likely would not add Swagger UI to the deployment.
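
In the pom.xml this boils down to something like the following trimmed sketch (the plugin coordinates and configuration details are assumptions here, see the project on GitHub for the real setup):

<packaging>bundle</packaging>

<build>
  <plugins>
    <!-- creates an OSGi bundle JAR instead of a plain JAR -->
    <plugin>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-bundle-plugin</artifactId>
      <extensions>true</extensions>
    </plugin>
    <!-- assembles the final "DP" deployment package for Kura -->
    <plugin>
      <groupId>de.dentrassi.maven</groupId>
      <artifactId>osgi-dp</artifactId>
    </plugin>
  </plugins>
</build>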

Starting it up

The project has three entry points:

  • CamelApplicationComponent is the OSGi service, which will be managed by the OSGi service component runtime (SCR) when the component is uploaded to Kura.
  • TestApplication is a local test application, which is intended to be started from the local IDE for manual testing.
  • CamelApplicationComponentTest is the JUnit 4 based test for testing the Camel routes.

All three entry points will have a slightly different creation process for the Camel context. This is simply due to the fact that different environments (like plain Java, OSGi and JUnit) have different requirements.

The routes configuration, which is the same for all entry points, is located in Routes.

Let’s have a quick look at the OSGi startup:

@Activate
public void start(final BundleContext context) throws Exception {
  this.context = new OsgiDefaultCamelContext(context, SwaggerUi.createRegistry());
  this.context.addRoutes(new Routes(this.state));
  this.context.start();

  final Dictionary<String, Object> properties = new Hashtable<>();
  properties.put("camel.context.id", "camel.example.4");
  this.registration = context.registerService(CamelContext.class, this.context, properties);
}

Once the component is placed inside an OSGi container, the start method will be called and set up the Camel context. This is all pretty straightforward Camel code. As the last step, the Camel context will be registered with the OSGi service layer, setting the service property camel.context.id in the process. This property is important, as we will use it later on to locate the Camel context from the Kura Wires graph.

The Java DSL routes

The routes configuration is pretty simple Camel stuff. First, the REST DSL will be used to configure the REST API. For example, the “GET” operation to receive the currently active parameters:

…
  .get()
  .description("Get the current parameters")
  .outType(Parameters.class)
  .to("direct:getParameters")
…

This creates a GET operation, which is being redirected to the internal “direct:getParameters” endpoint. This is a way of forwarding the call to another Camel route, so that Camel routes can be re-used from different callers.
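
The handler route behind it could then look like this minimal sketch (the bean method name is an assumption, not taken from the actual project):

from("direct:getParameters")
  .routeId("getParameters")
  .bean(this.state, "getCurrentParameters");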

Like, for example, the `direct:updateParameters` route, which will be called by all routes which want to update the parameters, no matter whether that call originated in the IEC 60870, the REST or the Kura Wires component:

from("direct:updateParameters")
  .routeId("updateParameters")
  .bean(this.state, "updateCurrentParameters")
  .multicast()
  .to("direct:update.wires", "direct:update.iec.p1", "direct:update.iec.p2").end();

The route will forward the new parameters to the method updateCurrentParameters of the State class. This class is a plain Java class, holding the state and filling in null parameters with the current state. The result of this method will be forwarded to the other routes, for updating Kura Wires and the two parameters in the IEC 60870 data layer.
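A minimal sketch of such a state holder might look like this (assuming a simple Parameters bean with p1 and p2 properties; the real class is part of the project on GitHub):

public class State {

    private Parameters current = new Parameters();

    public synchronized Parameters updateCurrentParameters(final Parameters update) {
        // fill in missing (null) values from the current state
        if (update.getP1() == null) {
            update.setP1(this.current.getP1());
        }
        if (update.getP2() == null) {
            update.setP2(this.current.getP2());
        }
        this.current = update;
        return this.current;
    }
}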

Trying it out

If you have Java and Maven installed, then you can simply compile the package by running:

cd camel/camel-example4
mvn clean package

This will compile, run the unit tests and create the .dp package in the folder target.

You can upload the package directly to your Kura instance. Please note that you do need the dependencies installed in part #1 of the tutorial. In addition, you will need to install the following dependencies:

  • https://repo1.maven.org/maven2/de/dentrassi/kura/addons/de.dentrassi.kura.addons.camel.iec60870/0.6.1/de.dentrassi.kura.addons.camel.iec60870-0.6.1.dp
  • https://repo1.maven.org/maven2/de/dentrassi/kura/addons/de.dentrassi.kura.addons.camel.jetty/0.6.1/de.dentrassi.kura.addons.camel.jetty-0.6.1.dp
  • https://repo1.maven.org/maven2/de/dentrassi/kura/addons/de.dentrassi.kura.addons.camel.swagger/0.6.1/de.dentrassi.kura.addons.camel.swagger-0.6.1.dp

This will install the support for REST APIs, backed by Jetty. As Kura already contains Jetty, it only makes sense to re-use those existing components.

Once the component is deployed and started, you can navigate your web browser to http://<gateway-address>:8090/api. This should bring up the Swagger UI, showing the API of the routes:

SwaggerUI of Camel example for Kura

Next, you can create the following components in the Kura wires graph:

  • Create a new “Camel consumer”, named consumer2
    • Set the ID to camel.example.4
    • Set the endpoint URI to seda:wiresOutput1
  • Create a new “Logger”, named logger2
    • Set it to “verbose”
  • Connect consumer2 with logger2
  • Click on “Apply” to activate the changes

Open the console of Kura and then open the Swagger UI page with the Web browser. Click on “Try Out” of the “PUT” operation, enter some new values for setpoint1 and/or setpoint2 and click on the blue “Execute” button.
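
If you prefer the command line over the Swagger UI, the equivalent request would look roughly like the following (host and resource path are assumptions here, check the Swagger UI for the actual path):

# hypothetical example request; adjust host and path to your setup
curl -X PUT http://<gateway-address>:8090/api/parameters \
  -H "Content-Type: application/json" \
  -d '{"setpoint1": 3.0, "setpoint2": 2.0}'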

In the console of Kura you should see the following output:

2018-09-17T13:35:49,589 [Camel (camel-10) thread #27 - seda://wiresOutput1] INFO  o.e.k.i.w.l.Logger - Received WireEnvelope from org.eclipse.kura.wire.camel.CamelConsume-1537188764126-1
2018-09-17T13:35:49,589 […] INFO  o.e.k.i.w.l.Logger - Record List content:
2018-09-17T13:35:49,589 […] INFO  o.e.k.i.w.l.Logger -   Record content:
2018-09-17T13:35:49,589 […] INFO  o.e.k.i.w.l.Logger -     P1 : 3.0
2018-09-17T13:35:49,589 […] INFO  o.e.k.i.w.l.Logger -     P2 : 2.0
2018-09-17T13:35:49,589 […] INFO  o.e.k.i.w.l.Logger -

This is the result of the “Logger” component from Kura Wires, which received the new parameter updates from the Camel context, as they got triggered through the Web UI. At the same time, the IEC 60870 server would update all clients subscribed to those data items.

Wrapping it up

The last part of this tutorial showed that, if the pre-prepared XML router component of Eclipse Kura is not enough, then you can drop in your own, more powerful replacement. Developing with all the bells and whistles of Apache Camel, and still integrating with Kura Wires where necessary.

Sunny weather with Apache Camel and Kura Wires

Part #1 of the Apache Camel to Kura Wires integration tutorial focused on pushing data from Kura Wires to Camel and processing it there. But part #1 already mentioned that it is also possible to pull data from Camel into Kura Wires.

Apache Camel consumer node in Kura Wires

Preparations

For the next step, you again need to install a Camel package, for interfacing with Open Weather Map: https://repo1.maven.org/maven2/de/dentrassi/kura/addons/de.dentrassi.kura.addons.camel.weather/0.6.0/de.dentrassi.kura.addons.camel.weather-0.6.0.dp The installation works the same way as already described in part #1.

In addition to the installation of the package, you will also need to create an account at https://openweathermap.org/ and create an API key. You can select the free tier plan; it is more than enough for our example.

Back to Wires

Next, create a new Camel context, like before, and give it the ID “camel2”. Add the required component weather, the required language groovy, and set the following XML router content (be sure to replace <YOUR API TOKEN> with your API token):

<routes xmlns="http://camel.apache.org/schema/spring">

  <route>

    <from uri="weather:dummy?appid=<YOUR API TOKEN>&amp;lat=48.1351&amp;lon=11.5820"/>
    <to uri="stream:out"/>

    <unmarshal><json library="Gson"></json></unmarshal>
    <transform><simple>${body["main"]["temp"]}</simple></transform>
    <convertBodyTo type="java.lang.Double"/>
    <to uri="stream:out"/>

    <transform><groovy>["TEMP": request.body-273.15]</groovy></transform>
    <to uri="stream:out"/>
    <to uri="seda:output1"/>

  </route>

</routes>

After applying the changes, you can create two new components in the Wire graph:

  • Create a new “Camel Consumer”, name it consumer1
    • Set the Camel context ID camel2
    • Set the endpoint URI seda:output1
  • Create a new “Logger”, name it logger1
    • Set it to “verbose”
  • Connect consumer1 with logger1
  • Click on “Apply” to activate the changes

What does it do?

What this Camel context does is first start polling information from the Open Weather Map API. It makes requests with a manually provided GPS location: Munich.

It then parses the JSON, so that we can work with the data. Then it extracts the current temperature from the rather complex Open Weather Map structure. Of course, we could also use a different approach and extract additional or other information.

The extracted value could still be a number represented internally as a string. So we ask Camel to ensure that the body of the message gets converted to a Double. If the body already is a double, then nothing will be done. But, if necessary, Camel will pull in its type converter system and convert e.g. a string to a double by parsing it.

Now the body contains the raw value, as a Java double. But we still have two issues with that. The first one is that the value is in Kelvin. Living in Germany, I would expect degrees Celsius ;-) The second issue is that Kura Wires requires some kind of key for that value, like a Map structure.

Fortunately, we can easily solve both issues with a short snippet of Groovy: ["TEMP": request.body-273.15]. This takes the message (request) body, converts it to degrees Celsius, and uses this as the value for the key TEMP in the newly created map.

Checking the result

As soon as you apply the changes, you should see some output on the console, which shows the incoming weather data:

{"coord":{"lon":11.58,"lat":48.14},"weather":[{"id":801,"main":"Clouds","description":"few clouds","icon":"02d"}],"base":"stations","main":{"temp":297.72,"pressure":1021,"humidity":53,"temp_min":295.15,"temp_max":299.15},"visibility":10000,"wind":{"speed":1.5},"clouds":{"all":20},"dt":1537190400,"sys":{"type":1,"id":4914,"message":0.0022,"country":"DE","sunrise":1537160035,"sunset":1537204873},"id":2867714,"name":"Muenchen","cod":200}
297.72
{TEMP=24.57000000000005}

Every change, which should happen every second, shows three lines: first the raw JSON data, directly from the Open Weather Map API. Then the raw temperature in Kelvin, parsed by Camel and already converted into a Java type. Followed by the custom Map structure, created by the Groovy script. The beauty here is, again, that you don’t need to fiddle around with custom data structures of the Kura Wires system, but can rely on standard data structures like plain Java maps.

Looking at the Kura log file, which is by default /var/log/kura.log, you should see some output like this:

2018-09-17T13:57:10,117 [Camel (camel-15) thread #31 - seda://output1] INFO  o.e.k.i.w.l.Logger - Received WireEnvelope from org.eclipse.kura.wire.camel.CamelConsume-1537188764126-1
2018-09-17T13:57:10,117 [Camel (camel-15) thread #31 - seda://output1] INFO  o.e.k.i.w.l.Logger - Record List content:
2018-09-17T13:57:10,118 [Camel (camel-15) thread #31 - seda://output1] INFO  o.e.k.i.w.l.Logger -   Record content:
2018-09-17T13:57:10,118 [Camel (camel-15) thread #31 - seda://output1] INFO  o.e.k.i.w.l.Logger -     TEMP : 24.57000000000005
2018-09-17T13:57:10,118 [Camel (camel-15) thread #31 - seda://output1] INFO  o.e.k.i.w.l.Logger -

This shows the same value, as processed by the Camel context but received by Kura Wires.

Wrapping it up

Now, of course, a simple logger component isn’t really useful. But as you might know, Kura has the ability to connect to a GPS receiver. So you could also take the current position as an input to the Open Weather Map request. And instead of using my static GPS coordinates of Munich, you could query for the nearby weather information. This might allow you to create some amazing IoT applications.

Stay tuned for Part #3, where we will look at a Camel based solution which can run inside of Kura, as well as outside, including actual unit tests, ready for continuous delivery.

Leveraging the power of Apache Camel in Eclipse Kura

With the upcoming version of Eclipse Kura 4, we will see some nice new features for the embedded Apache Camel runtime. This tutorial walks you through the Camel integration of Kura wires, which allows you to bridge both technologies, and leverage the power of Apache Camel for your solutions on the IoT gateway.

Kura Wires is a graph-oriented programming model of Eclipse Kura. It allows wiring up different components, like a Modbus client to the internal Kura Cloud Service. It is similar to Node-RED.

Apache Camel is a message-oriented integration platform with a rule-based routing approach. It has a huge ecosystem of components, allowing you to integrate numerous messaging endpoints, data formats, and scripting languages.

A graphical approach, like Kura Wires, may be interesting for a single instance, which is manually administered. But assume that you want to re-deploy the same solution multiple times. In this case you would want to develop and test it locally, have proper tooling like validation and debugging, then automatically package it, run a set of unit and integration tests, and only after that deploy it. This model is supported when you are using Apache Camel. There is a lot of tooling available, tutorials, training, books on how to work with Apache Camel. And you can make use of the over 100 components which Camel itself provides. In addition to that, you have a whole ecosystem around Apache Camel, which can extend this even more. So it is definitely worth a look.

Prerequisites

As a prerequisite, you will need an instance of Kura 4. As this is currently not yet released, you can also use a snapshot build of Kura 3.3, which will later become Kura 4.

If you don’t want to set up a dedicated device just for playing around, you can always use the Kura container image and run it e.g. with Docker. There is a short introduction on how to get started with this at the DockerHub repository: https://hub.docker.com/r/ctron/kura/

Starting a new Kura instance is as easy as:

docker run -ti -p 8080:8080 ctron/kura:develop

The following tutorial assumes that you have already set up Kura, and started with a fresh instance.

Baby Steps

The first step we take is to create a very simple, empty Camel context, and directly hook up a Camel endpoint without further configuration.

New Camel Context

As a first step, we create a new XML Router Camel context:

  • Open the Kura Web UI
  • Click on the “+” button next to the services search box
  • Select the org.eclipse.kura.camel.xml.XmlRouterComponent factory
  • Enter the name camel1
  • Press “Submit”

New Camel Context Component

A new service should appear in the left side navigation area. Sometimes it happens that the service does not show up, but reloading the Web UI will reveal the newly created service.

Now select the service and edit the newly created context. Clear out the “Router XML” and only leave the root element:

<routes xmlns="http://camel.apache.org/schema/spring">
</routes>

In the field “Required Camel Components”, add the stream component. Click on “Apply” to activate the changes. This will configure the Camel context to have no routes, but to wait for the stream component to be present in the OSGi runtime. The stream component is a default component, provided by the Eclipse Kura Camel runtime. The Camel context should be ready immediately and will be registered as an OSGi service for others to consume.

The Wires Graph

The next step is to configure the Kura Wires graph:

  • Switch to “Wire Graph” in the navigation pane
  • Add a new “Timer” component named timer1
    • Configure the component to fire every second
  • Add a new “Camel Producer” named producer1
    • Set the Context ID field of the component to camel1
    • Set the endpoint URI to stream:out
  • Connect the nodes timer1 and producer1
  • Click on Apply to activate the changes

If you look at the console of the Kura instance, then you should see something like this:

org.eclipse.kura.wire.WireEnvelope@bdc823c
org.eclipse.kura.wire.WireEnvelope@5b1f50f4
org.eclipse.kura.wire.WireEnvelope@50851555
org.eclipse.kura.wire.WireEnvelope@34cce95d

Note: If you are running Kura on an actual device, then the output might be in the file /var/log/kura-console.log.

What happens is that the Kura Wires timer component triggers a Wires event every second. That event is passed along to the Camel endpoint stream:out in the Camel context camel1. This isn’t using any Camel routes yet. But it is a basic integration, which allows you to access all available Camel endpoints directly from Kura Wires.

Producer, Consumer, Processor

In addition to the “Producer” component, it is also possible to use the “Consumer” or the “Processor”. The Consumer takes events from the Camel context and forwards them to the Kura Wires graph. The “Processor” takes an event from the Wire Graph, processes it using Camel, and passes the result along to Wires again:


For Producer and Consumer, this is a unidirectional message exchange from a Camel point of view. The Processor component uses an “in”/“out” message exchange, which is more like “request/response”. Of course that only makes sense when you have an endpoint which actually hands back a response, like the HTTP client endpoint.

In the following sections, we will see that in most cases there will be a more complex route set up, which the Camel Wire component interacts with, proxied by a seda Camel component. Still, the “in”/“out” flow of the Camel message exchange would be end-to-end between whatever endpoint you have and the Wires graph.
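
As a minimal sketch, a route backing such a “Processor” component could look like this, assuming the Processor is configured with the endpoint URI seda:process1 (the reply simply is the transformed body):

<routes xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="seda:process1"/>
        <transform><simple>processed: ${body}</simple></transform>
    </route>
</routes>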

Getting professional

Apache Camel mostly uses the concept of routes. And while accessing an endpoint directly from the Kura Camel component technically works, I wouldn’t recommend it. Mainly because you would be missing an abstraction layer: there is no way to inject anything between the Kura Wires component and the final destination at the Camel endpoint. You directly hook up Kura Wires with the endpoint and thus lose all the ways Camel allows you to work with your data.

So as a first step, let’s decouple the Camel endpoint from Kura Wires and provide an API for our Camel Context.

In the camel1 configuration screen, change the “Router XML” to:

<routes xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="seda:input1"/>
        <to uri="stream:out"/>
    </route>
</routes>

Then configure the producer1 component in the Wire Graph to use the “Endpoint URI” seda:input1 instead of directly using stream:out.

If everything is right, then you should still see the same output on the Kura console, but now Wires and Camel are decoupled and properly interfaced using an internal event queue, which allows us to use Camel routes for the following steps.

One benefit of this approach also is that you can now take the XML route definitions outside of Kura and test them in your local IDE. There are various IDE extensions for Eclipse, IntelliJ and Visual Studio, which can help to work with Camel XML route definitions. And of course, there are the JBoss Tools as well ;-). So you can easily test the routes outside of a running Kura instance and feed in emulated Kura Wires events using the seda endpoints.

To JSON

This first example already shows a common problem when working with data, and even more so for IoT use cases. Output like org.eclipse.kura.wire.WireEnvelope@3e0cef10 is definitely not of much use. But Camel is great at converting data, so let’s make use of that.

As a first step we need to enable the JSON support for Camel:

  • Navigate to “Packages”
  • Click on “Install/Upgrade”
  • Enter the URL: https://repo1.maven.org/maven2/de/dentrassi/kura/addons/de.dentrassi.kura.addons.camel.gson/0.6.0/de.dentrassi.kura.addons.camel.gson-0.6.0.dp
  • Click on “Submit”

After a while, the package de.dentrassi.kura.addons.gson should appear in the list of installed packages. It may happen that the list doesn’t properly refresh. Clicking on “refresh” or reloading the Web page will help.

Instead of downloading the package directly to the Kura installation you can also download the file to your local machine and then upload it by providing the file in the “Install/Upgrade” dialog box.

As a next step, you need to change the “Router XML” of the Camel context camel1 to the following configuration:

<routes xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="seda:input1"/>
        <marshal><json library="Gson"/></marshal>
        <transform><simple>${body}\n</simple></transform>
        <to uri="stream:out"/>
    </route>
</routes>

In the Kura console you will now see that we successfully transformed the internal Kura Wires data format to simple JSON:

{"value":[{"properties":{"TIMER":{}}}],"identification":"org.eclipse.kura.wire.Timer-1536913933101-5","scope":"WIRES"}

This change intercepts the internal Kura Wires objects and serializes them into proper JSON structures. The following step simply appends a “newline” character to the content in order to have a more readable output on the command line.

Transforming data

Depending on your IoT use case, transforming data can become rather complex. Camel is good at handling this: transforming, filtering, splitting, aggregating, … For this tutorial I want to stick to a rather simple example, in order to focus on the integration between Kura and Camel, and less on the powers of Camel itself.

As the next step will use the “Groovy” script language to transform data, we will need to install an additional package using the same way as before: https://repo1.maven.org/maven2/de/dentrassi/kura/addons/de.dentrassi.kura.addons.camel.groovy/0.6.0/de.dentrassi.kura.addons.camel.groovy-0.6.0.dp

Then go ahead and modify the “Router XML” to include a transformation step, add the following content before the JSON conversion:

<transform><groovy>
return  ["value": new Random().nextInt(10), "timer": request.body.identification ];
</groovy></transform>

The full router XML should now be:

<routes xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="seda:input1"/>
        <transform><groovy>
        return  ["value": new Random().nextInt(10), "timer": request.body.identification ];
        </groovy></transform>
        <marshal><json library="Gson"/></marshal>
        <transform><simple>${body}\n</simple></transform>
        <to uri="stream:out"/>
    </route>
</routes>

After applying the changes, the output on the console should change to something like:

{"value":2,"timer":"org.eclipse.kura.wire.Timer-1536913933101-5"}

As you can see, we now created a new data structure, based on generated content and based on the original Kura Wires event information.

Off to the Eclipse Hono HTTP Adapter

Printing out JSON to the console is nice, but let’s get a bit more professional. Yes, Kura allows you to use its Kura specific MQTT data format. But what if we want to send this piece of JSON to some HTTP endpoint, like the Eclipse Hono HTTP protocol adapter?

Camel has a huge variety of endpoints for connecting to various APIs, transport mechanisms and protocols. I doubt you would like your IoT gateway to directly contact Salesforce or Twitter, but OPC UA, MQTT, HTTP or IEC 60870 might be reasonable use cases for IoT.

As a first step, we need to install Camel HTTP endpoint support: https://repo1.maven.org/maven2/de/dentrassi/kura/addons/de.dentrassi.kura.addons.camel.http/0.6.0/de.dentrassi.kura.addons.camel.http-0.6.0.dp

The next step requires an instance of Eclipse Hono; thankfully there is a Hono sandbox server running at hono.eclipse.org.

In the XML Router we need two additional steps for this. You can add them after the existing to element, so that we still see the JSON on the command line:

<setHeader headerName="Content-Type"><constant>application/json</constant></setHeader>
<to uri="https4://hono.eclipse.org:28080/telemetry?authenticationPreemptive=true&amp;authUsername=sensor1@DEFAULT_TENANT&amp;authPassword=hono-secret"/>

The first step sets the content type to application/json, which is passed along by Hono to the AMQP network.

Yes, it really is https4://, this is not a typo but the scheme of the Camel HTTP endpoint using Apache HttpClient 4.

You may need to register the device with Hono before actually publishing data to the instance. Also, it is necessary that a consumer is attached which receives the data; Hono rejects devices publishing data if no consumer is attached. Also see: https://www.eclipse.org/hono/getting-started/#publishing-data
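
Registering a device can be done against the HTTP API of the example device registry, along the following lines (a hypothetical sketch based on the getting-started guide linked above; registry host and port are assumptions):

# hypothetical example; replace <registry-host> with your device registry address
curl -X POST -i -H 'Content-Type: application/json' \
  -d '{"device-id": "4711"}' \
  http://<registry-host>:28080/registration/DEFAULT_TENANT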

If you are using a custom deployment of Hono using the OpenShift S2I approach, then the to URL would look more like:

<to uri="https4://hono-adapter-http-vertx-sec-hono.my.openshift.cluster/telemetry?authenticationPreemptive=true&amp;authUsername=sensor1@DEFAULT_TENANT&amp;authPassword=hono-secret"/>

Wrapping it up

What we have seen so far is that, with only a few lines of XML, it is possible to interface with Kura Wires, start processing data in a way that was originally not supported by Kura, and send it to a target that also isn’t supported by Kura out of the box.

In addition to that, you can test and develop everything in a local, confined space. Without having to worry too much about actually running a Kura instance.

In Part #2, we will have a look at ways to get data from Camel back into Kura Wires. And in Part #3 of this tutorial, we will continue with this approach and develop a Camel based solution, which can run inside of Kura, as well as outside, including actual unit tests.

We scaled IoT – Eclipse Hono in the lab

Working for Red Hat is awesome. Not only can you work on amazing things, you also get the tools you need in order to do just that. We wanted to test Eclipse Hono (yes, again) and see how far we can scale it. And, of course, which limits and issues we would encounter on the way. So we took the current development version of Hono (0.7) from Eclipse IoT, backed by EnMasse 0.21, and ran it on an OpenShift 3.9 cluster.

Eclipse Kura on the Intel UP² with CentOS

Intel UP²

In the past I was testing modifications to Kura with a Raspberry Pi 3 and Fedora for ARM. But I recently got a nice little Intel UP², and so I decided to perform my next Kura tests, with the modifications to the Apache Camel runtime in Kura, on this nice board. Creating a new device profile for Kura using CentOS 7 and the Intel UP² looked like a good idea anyway.

At the time of writing, the PR for merging the device profile into Kura is still pending (PR #2093). But my hope is that this will be merged before Kura 4 comes out.

Build your own Kura image

But it is possible to try this out right now by using the preview branch (preview/intel_up2_1) on my forked repository: ctron/kura.

The following commands use the kura-build container. For more information about building Kura with this container see: https://github.com/ctron/kura-build and https://hub.docker.com/r/ctron/kura-build/.

So for the moment you will need to build this image yourself. But if you have Docker installed, then it only takes a few minutes to create your own build of Kura:

docker run -v /path/to/output:/output -ti ctron/kura-build -r ctron/kura -b preview/intel_up2_1 -- -Pintel-up2-centos-7

Where /path/to/output must be replaced with a local directory where the resulting output should be placed. If you are running Docker with SELinux enabled, then you might need to append :z to the volume:

docker run -v /path/to/output:/output:z -ti ctron/kura-build -r ctron/kura -b preview/intel_up2_1 -- -Pintel-up2-centos-7

As you might guess, it is also possible to build other branches and repositories of Kura in the same way. That docker image only ensures that all the necessary build dependencies are present when executing the build.

If you are running on Linux and do have all the dependencies installed locally, then of course there is no need to run through Docker; you can simply call the build-kura script directly:

./build-kura preview/intel_up2_1 -r ctron/kura -b preview/intel_up2_1 -- -Pintel-up2-centos-7

Setting up CentOS 7

This is a rather simple step: you simply need to download CentOS from https://www.centos.org/download/ (the Minimal ISO is just fine) and copy the ISO to a USB stick (https://wiki.centos.org/HowTos/InstallFromUSBkey). On a Linux-ish system this should work like the following (where /dev/sdX is the USB stick; all data on this stick will be lost!):

sudo dd if=CentOS-7-x86_64-Minimal-1804.iso of=/dev/sdX bs=8M status=progress oflag=direct

Reboot your UP with the USB stick attached; it should boot into the CentOS installer, from where you can perform a standard installation.

After the installation is finished and you booted into CentOS, you will need to enable EPEL, as Kura requires some extra components (like wireless-tools and hostapd). You can do this by executing:

sudo yum install epel-release

You might also want to install a more recent kernel into CentOS. All the core functionality works with the default CentOS kernel. However, some things, like GPIO support, are still missing from the default CentOS kernel. But the mainline kernel from ELRepo can easily be installed:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml

For more information check e.g.: https://www.howtoforge.com/tutorial/how-to-upgrade-kernel-in-centos-7-server/

Installing Kura on the Intel UP²

Copy the RPM you just created from the build process over to the UP, e.g. by:

scp kura-build-output/2018XXXX-YYYY/kura-intel-up2-centos-7*.rpm user@my-up:

And then on the device run:

yum install kura-*.rpm

This will install the Kura package as well as any required dependencies. After the installation has completed, reboot the machine and navigate your web browser to “http://my-up”, using the credentials “admin” / “admin”.

Build your own IoT cloud platform

If you want to do large scale IoT and build your own IoT cloud platform, then you will need a messaging layer which can actually handle this: not only the sheer load of messages and the number of connections. Even more important may be the ability to integrate your custom bits and pieces, and to be able to make changes to every layer of that installation, in a controlled, yet simple way.

An overview

Eclipse Hono is an open source project under the umbrella of the Eclipse IoT top-level project. It provides a set of components and services used for building up your own IoT cloud platform:

Overview of Eclipse Hono IoT cloud platform

In a nutshell, Hono provides a framework to create protocol adapters, and also delivers two “standard” protocol adapters, one for HTTP and one for MQTT. Both options are equally important to the project, because there will always be a special case for which you might want a custom solution.

Aside from the standard components, Hono also defines a set of APIs based on AMQP 1.0 in order to mesh in other services. Using the same ideas as for adding custom protocol adapters, Hono allows you to hook up your custom device registry and your existing authentication/authorization system (read more about Eclipse Hono APIs).

The final direct or store-and-forward message delivery is offloaded to an existing messaging layer. The scope of Hono is to create an IoT messaging infrastructure by re-using an existing, use case agnostic messaging layer and not to create another one. In this post we will assume that EnMasse is being used for that purpose. Simply because EnMasse is the best choice for AMQP 1.0 when it comes to Kubernetes/OpenShift. It is a combination of Apache Qpid, Apache Artemis, Keycloak and some EnMasse native components.

In addition to that, you will of course need to plug in your actual custom business logic. Which leaves you with a zoo of containers. Don’t get me wrong, containers are awesome, simply imagine you would need to deploy all of this on a single machine.

Container freshness

But this also means that you need to take care of container freshness at some point. Most likely you will be making changes to your custom logic, and maybe even to Hono itself. What is “container freshness”?! – Containers are great to use, and easy to build in the beginning. Simply create a Dockerfile, run docker build and you are good to go. You can also do this during your Maven release and have one (or more) final output container(s) for your release, like Hono does it for example. The big flaw here is that a container is a stack of layers, making up your final (application) image. Starting with a basic operating system layer, adding additional tools, adding Java and maybe more. And finally your local bits and pieces (like the Hono services).

All those layers link to exactly one parent layer. And this link cannot be updated. So Hono 0.5 points to a specific version of the “openjdk” layer, which again points to a specific version of “debian”. But you want your IoT cloud platform to stay fresh and up-to-date. Now assume that there is some issue in any of the Java or Debian base layers, like a security issue in the “glibc”. Unless Hono releases a new set of images, you are unable to get rid of this issue. In most cases you want to upgrade your base layers more frequently than your actual application layer.

Or consider the idea of using a different base layer than the Hono project had in mind. What if you don’t want to use Debian as a base layer? Or want to use Eclipse J9 instead of the OpenJDK JVM?

Building with OpenShift

When you are using OpenShift as a container platform (and Kubernetes supports the same approach), you can make use of image streams and builds. An image stream simply is a way to store images and maintain versions. When an image stream is created, it normally is empty. You can start to populate it with images, either by importing them from existing repositories, like DockerHub or your internal ones, or by creating images yourself with a build running inside of OpenShift. Of course you are in charge of all operations, including tagging versions. This means that you can easily replace an image version, but in a controlled way. So no automatic update of a container will break your complete system.
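
For example, importing an external image into an image stream could look like the following hedged sketch (names and tags here are made up for illustration):

apiVersion: v1
kind: ImageStream
metadata:
  name: my-base-image
spec:
  tags:
    # map the external tag to your internal versioning schema
    - name: "v1"
      from:
        kind: DockerImage
        name: "docker.io/library/openjdk:8-jre"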

There are different types of builds. A rather simple one is the well known “Dockerfile” approach. You define a base image and add a few commands which will make up the new container layer. Then there is the “source-to-image” (S2I) build, which we will have a look at in a second.

Building & Image Streams

Now with that functionality you can define a setup like this:

Diagram of example image streams

The base image gets pulled in from an external registry. And during that process you map versions to your internal versioning schema. What a move from “v1” to “v2” means in your setup is completely up to you.

The pulled-in image gets fed into a build step, which will produce a new image based on the defined parent, e.g. your custom base image. Maybe this means simply adding a few command line utilities to the existing base image. Or some policy file, … The custom base image can then be used by the next build process to create an application specific container, hosting your custom application. Again, which versioning schema you use is completely up to you.

If you like, you can also define triggers between these steps. So that when OpenShift pulls in a new image from the external source, or the source code in the git repository changes, all required builds get executed and finally the new application version gets deployed automatically. Old image versions may be kept so that you can easily switch back to an older version.

Source-to-Image (S2I)

Hono uses a plain Maven build and is based on Vert.x and Spring Boot. The default way of building new container images is to check out the sources from git and run a local Maven build. During the build, Maven wants to talk to some Docker daemon in order to assemble new images and store them in its registry.

Now that approach may be fine for developers. But first of all this is a quite complex, manual job. And second, in the context described above, it doesn’t really fit.

As already described, OpenShift supports different build types to create new images. One of those build types is “S2I”. The basic idea behind S2I is that you define a builder container image, which adheres to a set of entry and exit points, processing the provided source and creating a new container image which can be used to actually run this source. For Java, Spring Boot and Maven there is an S2I image from “fabric8”, which can be tweaked with a few arguments. It will run a Maven build, find the Spring Boot entry point, take care of container heap management for Java, inject a JMX agent, …

That way, for Hono you can simply reuse this existing S2I image in a build template like:

source:
  type: Git
  git:
    uri: "https://github.com/eclipse/hono"
    ref: "0.5.x"
strategy:
  type: source
  sourceStrategy:
    from:
      kind: ImageStreamTag
      name: "fabric8-s2i-java:2.1"
    env:
      - name: MAVEN_ARGS_APPEND
        value: "-B -pl org.eclipse.hono:hono-service-messaging --also-make"
      - name: ARTIFACT_DIR
        value: "services/messaging/target"
      - name: ARTIFACT_COPY_ARGS
        value: "*-exec.jar"

This simple template allows you to reuse the complete existing Hono source code repository and build system. And yet you can start making modifications using custom base images or changes in Git right away.

Of course you can reuse this for your custom protocol adapters as well. And for your custom application parts. In your development process you can still use plain Maven, Spring Boot or whatever you prefer. When it comes to deploying your stack in the cloud, you hand over the same build scripts to OpenShift and S2I and let your application be built in the same way.

Choose your stack

The beauty of S2I is that it is not tied to any specific language or toolset. In this case, for Hono, we used the “fabric8” S2I image for Java. But if you would prefer to write your custom protocol adapter in something else, like Python, Go, .NET, … you could still use S2I and the same patterns with that language and toolset.

Also, Hono supports creating protocol adapters and services in different (non-JVM based) languages. Hono components get meshed up using Hono’s AMQP 1.0 APIs, which allow you to use the same flow control mechanism for services as is used for IoT data, building your IoT cloud platform using the stack you prefer most.

… and beyond the infinite

OpenShift has a lot more to offer when it comes to building your platform. It is possible to use build pipelines, which allow workflows publishing to some staging setup before going to production, re-using the same generated images. Or things like:

  • Automatic key and certificate generation for the inter-service communication of Hono.
  • Easy management of Hono configuration files, logging configuration using “ConfigMaps”.
  • Application specific metrics generation to get some insights of application performance and throughput.

That would have been a bit too much for a single blog post. But I do encourage you to have a look at the OpenShift Hono setup in my forked Hono repository on GitHub, which makes use of some of this. This setup tries to provide a more production-ready deployment for Hono. However, it can only be seen as a reference, as any production grade setup would definitely require replacements for the example device registry, a better tuned logging configuration and definitely a few other tweaks of your personal preference ;-)

Hono also offers a lot more than this blog post can cover when building your own IoT cloud platform. One important aspect definitely is data privacy while supporting multiple tenants on the same instance. Hono already supports full multi-tenancy, down to the messaging infrastructure. This makes it a perfect solution for honoring data privacy in the public and private cloud. Read more about the new multi-tenant features of the next Hono version in Kai Hudalla’s blog post.

Take a look – EclipseCon France 2018

Dejan and I will have a talk about Hono at EclipseCon France in Toulouse (June 13-14). We will present Hono in combination with EnMasse as an IoT cloud platform. We will also bring the setup described above with us and would be happy to show you everything in action. See you in Toulouse.

CEP & Machine learning for IoT – Drools on Kura

Machine learning and predictive maintenance are the poster children of IoT use cases. Having an IoT gateway allows you to pre-process data before you send it upstream to your cloud. It also allows you to make local decisions, without the actual need for a cloud upload. But as you know from sitting at your favorite restaurant, staring at the menu, making decisions can be quite hard ;-) Complex event processing and machine learning models can help you with IoT use cases though.

Eclipse Kura is an open source IoT gateway with a focus on industrial use cases. Drools is an open source rule engine and, with Drools Fusion, provides complex event processing. It also supports making decisions based on Predictive Model Markup Language (PMML) models. So why not bring both components together?! PMML models can be used for all kinds of scenarios. But the classic IoT use case would probably be predictive maintenance, based on a PMML model generated by your machine learning solution in the cloud, sending the “learned” knowledge back to the edge gateway for local processing.

Installation

The Drools addon for Kura provides two DPs (the package type used by Kura) which can be deployed in order to extend Kura with Drools.

Note: If you don’t have a Raspberry Pi at hand, or don’t want to install Kura on some “real” device, you can always use the Kura Emulator docker image (see also Developing for Eclipse Kura in Windows).

Setup

Once the components are installed, it is possible to create a new Drools instance by clicking on the blue “+” on the left side of the Kura services area. Create a new component of the type de.dentrassi.kura.addons.drools.component.DroolsInstance. Choose any unused ID and click “Apply”. It might be necessary to reload the Web UI of Kura at this point, as the refresh doesn’t work properly. When the service is listed in the left hand side “services” list, select it in order to configure it:

The actual rules document comes from the file SimpleScorecard.pmml of the PMML drools example mpbravo/brms-pmml-example. Also be sure to set the file type to “Predictive Model Markup Language”. Save the changes by clicking on “Apply”.

Next we will use Kura Wires in order to mesh up the model with some “data”. We need to use a timer as the input source, as Kura currently doesn’t offer any kind of value generator, like a function or sine curve. So create a new “timer”; you can leave the default of 10 seconds. Add a new logger, which we simply use for testing; you should set the “verbosity” to “VERBOSE”. And then create a new “DroolsProcess” component with the following configuration:

  • ID: pmml1 – the ID of the Drools session
  • Fire all rules: true – after the fact has been injected, the rules have to be fired
  • Delete after fire: true – after the rules have been fired, we can remove the fact from the session
  • Fact Package: org.drools.scorecards.example – the package name, from the rules model
  • Fact Type: SampleScore – the type name, from the rules model
  • Inputs: age=TIMER – comma separated list mapping Wire record names to fact object properties
  • Outputs: result=scorecard_calculatedScore – comma separated list mapping Wire record names to fact object properties

Finally wire that all up:

Results

Looking at the Kura log file /var/log/kura.log should show you something like the following (where result is coming from the PMML model):

2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  d.d.k.a.d.c.w.DroolsProcess - Result - type: class java.lang.Double, value: 24.0
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger - Received WireEnvelope from de.dentrassi.kura.addons.drools.component.wires.DroolsProcess-1521129031885-7
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger - Record List content: 
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger -   Record content: 
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger -     result : 24.0
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger - 

Conclusion

Of course the example is rather trivial. The actual model, and the way we wired it up, is just an example. But you will bring your own model, based on your machine learning solutions, and have your own data to wire up. So go ahead and explore.

🔗 Varlink for Java – What wonderful world it could be

Varlink for Java is a Java based implementation of the Varlink interface. This blog post shows how varlink can be used in the Java world to solve the problem of accessing operating system functionality.

Consuming operating system functionality from Java, when running on Linux, has always been a problem. There are numerous examples where people fork processes and parse the result in ways which tend to break the next time you upgrade your CLI tools. Not even thinking about switching to a different version of your favorite Linux distribution, or switching to another distribution at all. Of course there have been all kinds of approaches to solve this, like JNI, DBus, … Then again, the operating system is way more than the kernel and the desktop: configuring a network time server, installing additional packages, reading the system log, … And of course, in a polyglot world, all this is not necessarily exposed using a C based API.

Over time, and thanks to Harald, I have been following the Varlink project. You can also read more about this in his recent blog post about varlink. Varlink defines itself as:

… is an interface description format and protocol that aims to make services accessible to both humans and machines in the simplest feasible way.

So let’s put that claim to a test. :-)

Quick overview

Varlink uses a socket based, client/server approach to communicate. It supports TCP but also Unix domain sockets (UDS). Although the latter is still not officially supported in Java, Netty offers a neat solution and also allows you to use the same networking API with TCP and UDS. So let’s go the extra mile and use UDS for this.

The protocol for communicating between client and server is rather simple. The client issues a call and waits for the result. All communication consists of zero-byte terminated strings, which happen to be JSON. I won’t dive into the protocol any further; it really is that simple and you can read about it in the Varlink protocol documentation anyway.
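
On the wire, a call and its reply might look like this (a made-up example; the field names are assumptions, and each message is terminated by a zero byte):

{"method": "io.systemd.network.List"}
{"parameters": {"netdevs": [{"ifindex": 1, "ifname": "lo"}, {"ifindex": 2, "ifname": "eth0"}]}}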

As Netty does most of the networking, GSON takes care of the JSON processing, so we can focus on the actual API we want to have. For this, let’s have a closer look at how Varlink works.

Varlink offers services aka “interfaces” to expose their functionality. Every interface also exports information about itself. Varlink interfaces actually run in different processes (or even in the Linux kernel) and expose their functionality over different addresses (e.g. unix domain sockets or TCP addresses). Therefore a default (well known) service of Varlink is the “resolver”, which allows you to register your service, so that others will be able to find it. As a first step I decided to focus on the client side, consuming APIs rather than publishing them. So the steps required are simply:

  • Contact the resolver
  • Ask for the address of the required service
  • Contact the resolved address
  • Perform the actual operation

Of course, talking to the resolver uses the same functionality as talking to other interfaces, as the resolver is a varlink interface itself.

A simple test

After around two to three hours I came up with the following API, contacting the varlink interface io.systemd.network, querying all the existing network interfaces of the system:

try (Varlink v = varlink()) {

  // shorter & sync way

  List<Netdev> devices1 = v
    .resolveSync(Network.class)
    .sync()
    .list();

  dump(devices1);

  // more explicit & async way

  List<Netdev> devices2 = v
    .resolver()
    .async()
    .resolve(Network.class)
    .thenCompose(network -> network.async().list())
    .get();

  dump(devices2);
}

To be honest, for this specific task, I could have also used the Java NetworkInterface API. But the same way I am querying the network interfaces with varlink, I could also access the io.systemd.journal interface or org.kernel.kmod and interface with the system log or the kernel module system.

Just for comparison you can have a look at the Eclipse Kura USB modem functionality, which needs to call a bunch of command line utilities, access lock files, call into JNI code, …

The IDL – Xtext awesomeness

If you don’t know Xtext: it is a toolchain for creating your own DSL. Living in the Eclipse modeling ecosystem, it allows you to define your DSL grammar, and it will take care of creating a parser, a complete editor with code completion, syntax highlighting, support for the language server protocol and much more. It supports the Eclipse IDE, IntelliJ and plain web. And of course you can create an Xtext grammar for the Varlink IDL quite easily. After around one hour of fighting with grammars, I came up with the following editor:

Varlink IDL editor

As you can see, the Varlink IDL has been parsed. I am pretty sure there are still some issues with the grammar, but it is quite a good start. Now everything is available in a parsed ECore model and can be visualized or transformed with any of the Eclipse Modeling tools/libraries. Creating a quick diagram editor with Eclipse Sirius is only a few more minutes away.

What is next, what is missing

Altogether this was quite easy. Varlink indeed offers a solution for accessing system services in the simplest feasible way. So, what is next?

varlink-java is already available on GitHub. I would like to clean it up a bit, add a decent build setup and publish it on Maven Central. Also adding the Xtext bits in a simple way, if possible; Tycho and plain Maven builds always tend to get in each other’s way.

Varlink offers something called “monitoring”. Instead of getting a single reply to a call, the call can be followed up with additional updates: changes in the device list, following new log entries, … This is currently not supported by the varlink-java API, but it is an important feature and I really would like to add it as well.

In the current example the bindings to io.systemd.network were created manually. This is fine for a first example, but combining this with the Xtext based IDL parser, it would be a simple task to create a Maven plugin which creates the binding code, based on the provided varlink files, on the fly.

Of course, there is so much more: creating a graphical System API browser, the whole server/interface side and dozens of bindings to create.

Conclusion

Varlink is an amazing piece of technology, mostly because it is that simple. It offers the right functionality to solve many issues we currently face when accessing operating system APIs. And it definitely is a candidate to get rid of all the ugly wrapper code around command line calls and other things which are currently necessary to talk to operating system functionality. And it does so using plain Java functionality (at least if you go with TCP ;-) ).

Links & stuff

How to install varlink (on Fedora 27, for CentOS use “yum”):

sudo dnf copr enable "@varlink/varlink"
sudo dnf install fedora-varlink
sudo systemctl enable --now org.varlink.resolver.socket
varlink help

OPC UA solutions with Eclipse Milo

This article walks you through the first steps of creating an OPC UA solution based on Eclipse Milo. OPC UA, also known as IEC 62541, is an IoT solution for connecting industrial automation systems. Eclipse Milo™ is an open-source, Java based implementation.

What is OPC UA?

OPC UA is a point-to-point, client/server based communication protocol used in industrial automation scenarios. It offers APIs for telemetry data, command and control, historical data, alarms and events. And a bit more.

OPC UA is also the successor of OPC DA (AE, HD, …) and puts a lot more emphasis on interoperability than the older, COM/DCOM based variants. It not only offers a platform neutral communication layer with security built right in, but also a rich set of interfaces for handling telemetry data, alarms and events, historical data and more. OPC clearly has an industrial background, coming from the area of process control, PLC and SCADA-like systems.

Looking at OPC UA from an MQTT perspective, one might ask: why do we need OPC UA? Where MQTT offers a completely undefined topic structure and data types, OPC UA provides a framework for standard and custom data types, a defined (hierarchical) namespace and a definition of request/response style communication patterns. Especially the type system, even with simple types, is a real improvement over MQTT’s BLOB approach. With MQTT you never know what is inside your message: it may be a numeric value encoded as a string, a JSON encoded object or even a picture of a cat. OPC UA, on the other hand, offers a type system which carries the type information along with the actual values.

OPC UA’s subscription model also provides a really efficient way of transmitting live data, but only transmitting when necessary, as negotiated by client and server. In combination with the binary protocol this can be a real resource saver.

The architecture of Eclipse Milo

Traditionally, OPC UA frameworks are split up into a “stack” and an “SDK”. The “stack” is the core communication protocol implementation, while the “SDK” builds on top of that, offering a simpler application development model.

Eclipse Milo Components

Eclipse Milo offers both “stack” and “SDK” for both “client” and “server”. “core” is the common code shared between client and server. This should explain the module structure of Milo when doing a search for “org.eclipse.milo” on Maven Central:

org.eclipse.milo : stack-core
org.eclipse.milo : stack-client
org.eclipse.milo : stack-server
org.eclipse.milo : sdk-core
org.eclipse.milo : sdk-client
org.eclipse.milo : sdk-server

Connecting to an existing OPC UA server only requires you to use “sdk-client”, as all the other client modules are transitive dependencies of it. Likewise, creating your own OPC UA server only requires the “sdk-server” module.

Making contact

Focusing on the most common use case of OPC, data acquisition and command & control, we will now create a simple client which will read out telemetry data from an existing OPC UA server.

The first step is to look up the “endpoint descriptors” from the remote server:

EndpointDescription[] endpoints =
  UaTcpStackClient.getEndpoints("opc.tcp://localhost:4840")
    .get();

The trailing .get() might have tipped you off that Milo makes use of Java 8 futures and thus easily allows asynchronous programming. However, this example uses the synchronous .get() call in order to wait for the result. This makes the tutorial more readable as we look at the code step by step.
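
If you prefer the non-blocking style, the same lookup can simply be chained instead of awaited. A minimal sketch (error handling kept to a bare minimum):

// Non-blocking variant of the endpoint lookup
UaTcpStackClient.getEndpoints("opc.tcp://localhost:4840")
  .thenAccept(endpoints ->
    System.out.println("found " + endpoints.length + " endpoints"))
  .exceptionally(error -> {
    error.printStackTrace();
    return null;
  });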

Normally, the next step would be to pick the “best” endpoint descriptor. However, “best” is relative and highly depends on your security and connectivity requirements. So we simply pick the first one:

OpcUaClientConfigBuilder cfg = new OpcUaClientConfigBuilder();
cfg.setEndpoint(endpoints[0]);
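
In a real application you would rather filter the endpoints for your requirements. A minimal sketch, assuming you explicitly want an unencrypted endpoint for local testing:

// Pick the first endpoint without security, instead of blindly using endpoints[0]
EndpointDescription endpoint = Arrays.stream(endpoints)
  .filter(e -> SecurityPolicy.None.getSecurityPolicyUri()
    .equals(e.getSecurityPolicyUri()))
  .findFirst()
  .orElseThrow(() -> new IllegalStateException("No matching endpoint"));

cfg.setEndpoint(endpoint);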

Next we will create and connect the OPC client instance based on this configuration. Of course the configuration offers a lot more options. Feel free to explore them all.

OpcUaClient client = new OpcUaClient(cfg.build());
client.connect().get();

Node IDs & the namespace

OPC UA identifies its elements (objects, folders, items) using “Node IDs”. Each server has multiple namespaces, and each namespace has a tree of folders, objects and items. There is a browse API which allows you to walk through this tree, but this is mostly for human interaction. If you know the Node ID of the element you would like to access, then you can simply provide it directly. Node IDs can be string encoded and might look something like ns=1;i=123. This example references a node in namespace #1, identified by the numeric ID “123”. Aside from numeric IDs (i=), there are also string IDs (s=), UUID/GUID IDs (g=) and even a BLOB type (b=).

The following examples will assume that a node ID has been parsed into variables like nodeId, which can be done by the following code with Milo:

NodeId nodeIdNumeric = NodeId.parse("ns=1;i=42");
NodeId nodeIdString  = NodeId.parse("ns=1;s=foo-bar");

Of course instances of “NodeId” can also be created using the different constructors. This approach is more performant than using the parse method.

NodeId nodeIdNumeric = new NodeId(1, 42);
NodeId nodeIdString  = new NodeId(1, "foo-bar");

The main reason behind using Node IDs is that they can be efficiently encoded when transmitted. For example, it is possible to look up an item by a longer, string based browse path and then only use the numeric Node ID for further interaction. Node IDs can also be efficiently encoded in the OPC UA binary protocol.

Additionally, there is a set of “well known” Node IDs. For example, the root folder always has the Node ID ns=0;i=84. So there is no need to look those up; they can be considered constants and be used directly. Milo defines all well known IDs in the Identifiers class.
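
Putting both together, browsing one level below the root folder could look like the following sketch (the browse flags shown are just one sensible choice):

// Browse all objects and variables directly below the root folder
BrowseDescription browse = new BrowseDescription(
    Identifiers.RootFolder,
    BrowseDirection.Forward,
    Identifiers.References,
    true,
    uint(NodeClass.Object.getValue() | NodeClass.Variable.getValue()),
    uint(BrowseResultMask.All.getValue()));

BrowseResult result = client.browse(browse).get();
for (ReferenceDescription reference : result.getReferences()) {
    System.out.println(reference.getBrowseName());
}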

Reading data

After the connection has been established we will request a single read of a value. Normally OPC UA is used in an event driven manner, but we will start simple by using a single requested read:

DataValue value =
  client.readValue(0, TimestampsToReturn.Both, nodeId)
    .get();

The first parameter, “max age”, lets the server know that we are fine with reading a value which is a bit older. This could reduce traffic to the underlying device or system. By passing zero, however, we request a fresh update from the value source.

The above call is actually a simplified version of a more complex read call. In OPC UA, items have attributes, and there is a “main” attribute: the value. The simplified call defaults to reading this “value” attribute. However, it is possible to read all kinds of other attributes of an item as well. The next snippet shows the more complex read call, which allows you to not only read different attributes of an item, but also multiple items at the same time:

ReadValueId readValueId =
  new ReadValueId(nodeId, AttributeId.Value.uid(), null, null);

ReadResponse response = 
  client
    .read(0, TimestampsToReturn.Both, Arrays.asList(readValueId))
      .get();

For this read call we also request both the server and the source timestamp. OPC UA will timestamp values, so you know when the value changed to the reported value. But it is also possible that the device itself does the timestamping. Depending on your device and application, this can be a real benefit for your use case.
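
The response then carries one DataValue per requested item, in request order. A small sketch of unpacking it, including both timestamps:

// One result per ReadValueId, in the order they were requested
for (DataValue value : response.getResults()) {
  System.out.format("value: %s, source: %s, server: %s%n",
      value.getValue(), value.getSourceTime(), value.getServerTime());
}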

Subscriptions

As already explained, OPC UA can do way better than explicit single reads. Using subscriptions it is possible to have fine grained control over what you request and even how you request data.

When coming from MQTT, you know that you get updates once they are published. However, you have no control over the frequency of those updates. Imagine your data source is some temperature sensor. It is probably capable of supplying data in a sub-microsecond resolution and frequency. But pushing a value update every microsecond, even if no one listens, is a waste of resources.

In OPC UA

OPC UA allows you to take control over the subscription process from both sides. When a client creates a new subscription, it provides information like the number of in-flight events, the rate of updates, and so on. The server has the ability to modify the request, but will try to adhere to it, and will then start serving it. Also, data will only be sent if there are actual changes. Imagine a pressure sensor: the value may stay the same for quite a while, but then suddenly change rather quickly. Re-transmitting the same value over and over again, just to achieve high frequency updates when an actual change occurs, is again a waste of resources.

OPC UA Subscriptions Example

In order to achieve this in OPC UA, the client requests a subscription from the server; the server fulfills that subscription and notifies the client of both value changes and subscription state changes. So the client knows if the connection to the device is broken or if there are simply no updates to the value. If no value changes occurred, nothing will be transmitted. Of course, there is a heartbeat on the OPC UA connection level (one for all subscriptions), which ensures the detection of communication loss as well.

In Eclipse Milo

The following code snippets will create a new subscription in Milo. The first step is to define what to read and how it should be monitored:

// what to read
ReadValueId readValueId =
    new ReadValueId(nodeId, AttributeId.Value.uid(), null, null);

// monitoring parameters
int clientHandle = 123456789;
MonitoringParameters parameters =
    new MonitoringParameters(uint(clientHandle), 1000.0, null, uint(10), true);

// creation request
MonitoredItemCreateRequest request =
    new MonitoredItemCreateRequest(readValueId, MonitoringMode.Reporting, parameters);

The “client handle” is a client assigned identifier which allows the client to reference the subscribed item later on. If you don’t need to reference items by their client handle, simply set it to some random or incrementing number.
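
A common pattern, and just one way of doing it, is a simple incrementing counter shared by all monitored items of the client:

// One unique, incrementing client handle per monitored item
static final AtomicLong CLIENT_HANDLES = new AtomicLong(1L);

UInteger clientHandle = uint(CLIENT_HANDLES.getAndIncrement());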

The next step will define an initial setup callback and add the items to the subscription. The setup callback ensures that the newly created subscription can hook up its listeners before the first values are received from the server side, without any race condition:

// The actual consumer

BiConsumer<UaMonitoredItem, DataValue> consumer =
  (item, value) ->
    System.out.format("%s -> %s%n", item, value);

// setting the consumer after the subscription creation

BiConsumer<UaMonitoredItem, Integer> onItemCreated =
  (monitoredItem, id) ->
    monitoredItem.setValueConsumer(consumer);

// creating the subscription

UaSubscription subscription =
    client.getSubscriptionManager().createSubscription(1000.0).get();

List<UaMonitoredItem> items = subscription.createMonitoredItems(
    TimestampsToReturn.Both,
    Arrays.asList(request),
    onItemCreated)
  .get();

Taking Control

Of course consuming telemetry data is fun, but sometimes it is necessary to issue control commands as well. Issuing a command or setting a value on the target device is as easy as:

client
  .writeValue(nodeId, DataValue.valueOnly(new Variant(true)))
    .get();

This snippet will send the value TRUE to the object identified by nodeId. Of course, as with the read, there are more options when writing values/attributes of an object.
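
The write completes with a status code, which you should check before assuming the value was accepted. A minimal sketch:

// writeValue() completes with the status code of the operation
StatusCode status = client
  .writeValue(nodeId, DataValue.valueOnly(new Variant(true)))
    .get();

if (!status.isGood()) {
  System.err.println("Write failed: " + status);
}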

But wait … there is more!

This article only scratched the surface of the complex topic that is OPC UA. But it should have given you a brief introduction to OPC UA and Eclipse Milo, and it should get you started with OPC UA from a client perspective.

Watch out for the repository ctron/milo-ece2017, which will receive some more content on OPC UA and Eclipse Milo, and which will be the accompanying repository for my talk Developing OPC UA with Eclipse Milo™ at EclipseCon Europe 2017.

I would like to thank Kevin Herron for not only helping me with this article, but for helping me so many times understanding Milo and OPC UA.

Please note: as the next version of Milo (0.3.0) will break the APIs, the examples might no longer work out of the box beyond Milo 0.2.x. I will try to update them as soon as Milo 0.3.0 is released.