Technical Stuff


Assorted technical stuff of all kinds.

Varlink for Java – What wonderful world it could be

Varlink for Java is a Java-based implementation of Varlink. This blog post shows how Varlink can be used in the Java world to solve the problem of accessing operating system functionality.

Consuming operating system functionality from Java, when running on Linux, has always been a problem. There are numerous examples where people fork processes and parse the output in ways that tend to break the next time you upgrade your CLI tools. Not to mention switching to a different version of your favorite Linux distribution, or switching to another distribution altogether. Of course there have been all kinds of approaches to solve this, like JNI, DBus, … Then again, the operating system is way more than the kernel and the desktop: configuring a network time server, installing additional packages, reading the system log, … And of course, in a polyglot world, all this is not necessarily exposed through a C based API.

Over time, and thanks to Harald, I have been following the Varlink project. You can also read more about it in his recent blog post about varlink. Varlink defines itself as:

… is an interface description format and protocol that aims to make services accessible to both humans and machines in the simplest feasible way.

So let’s put that claim to the test. 🙂

Quick overview

Varlink uses a socket based, client/server approach to communicate. It supports TCP as well as Unix domain sockets (UDS). Although the latter is still not officially supported in Java, Netty offers a neat solution and also allows you to use the same networking API with TCP and UDS. Still, let’s go the extra mile and use UDS for this.

The protocol for communicating between client and server is rather simple. The client issues a call and waits for the result. All communication consists of zero-byte terminated strings, which happen to be JSON. I won’t dive into the protocol any further; it really is that simple, and you can read about it in the Varlink protocol documentation anyway.
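
Just to illustrate what goes over the wire, here is a minimal sketch (not the actual varlink-java internals) which encodes a call to the well-known org.varlink.service.GetInfo method with GSON and appends the terminating zero byte:

JsonObject call = new JsonObject();
call.addProperty("method", "org.varlink.service.GetInfo");

// the JSON message plus a single zero byte is what gets written to the socket
byte[] json = call.toString().getBytes(StandardCharsets.UTF_8);
byte[] frame = Arrays.copyOf(json, json.length + 1); // trailing 0x00 terminates the message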

As Netty does most of the networking and GSON takes care of the JSON processing, we can focus on the actual API we want to have. For this, let’s have a closer look at how Varlink works.

Varlink offers services, aka “interfaces”, to expose functionality. Every interface also exports information about itself. Varlink interfaces actually run in different processes (or even in the Linux kernel) and expose their functionality over different addresses (e.g. Unix domain sockets or TCP addresses). Therefore a default (well known) service of Varlink is the “resolver”, which you register your service with so that others will be able to find it. As a first step I decided to focus on the client side, consuming APIs rather than publishing them. So the steps required are simply:

  • Contact the resolver
  • Ask for the address of the required service
  • Contact the resolved address
  • Perform the actual operation

Of course talking to the resolver uses the same functionality as talking to other interfaces, as the resolver is a varlink interface itself.

A simple test

After around two to three hours I came up with the following API, contacting the varlink interface io.systemd.network, querying all the existing network interfaces of the system:

try (Varlink v = varlink()) {

  // shorter & sync way

  List<Netdev> devices1 = v
    .resolveSync(Network.class)
    .sync()
    .list();

  dump(devices1);

  // more explicit & async way

  List<Netdev> devices2 = v
    .resolver()
    .async()
    .resolve(Network.class)
    .thenCompose(network -> network.async().list())
    .get();

  dump(devices2);
}

To be honest, for this specific task I could have also used the Java NetworkInterface API. But the same way I am querying the network interfaces with varlink, I could also access the io.systemd.journal or org.kernel.kmod interfaces and talk to the system log or the kernel module system.

Just for comparison you can have a look at the Eclipse Kura USB modem functionality, which needs to call a bunch of command line utilities, access lock files, call into JNI code, …

The IDL – Xtext awesomeness

If you don’t know Xtext: it is a toolchain for creating your own DSL. Living in the Eclipse modeling ecosystem, it allows you to define your DSL grammar and takes care of creating a parser, a complete editor with code completion, syntax highlighting, support for the language server protocol and much more. It supports the Eclipse IDE, IntelliJ and plain web. And of course you can create an Xtext grammar for the Varlink IDL quite easily. After around one hour of fighting with grammars, I came up with the following editor:

Varlink IDL editor

As you can see, the Varlink IDL has been parsed. I am pretty sure there are still some issues with the grammar, but it is quite a good start. Now everything is available as a parsed Ecore model and can be visualized or transformed with any of the Eclipse Modeling tools/libraries. Creating a quick diagram editor with Eclipse Sirius is only a few more minutes away.

What is next, what is missing

Altogether this was quite easy. Varlink indeed offers a solution for accessing system services in the simplest feasible way. So, what is next?

varlink-java is already available on GitHub. I would like to clean it up a bit, add a decent build setup and publish it on Maven Central. Also adding the Xtext bits in a simple way, if possible; Tycho and plain Maven builds always tend to get in each other’s way.

Varlink offers something called “monitoring”. Instead of getting a single reply to a call, the server can follow up with additional updates, like changes in the device list, following new log entries, … This is currently not supported by the varlink-java API, but it is an important feature and I really would like to add it as well.
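
If I read the protocol documentation correctly, such a monitoring call simply sets the more flag on the request, and the server then keeps sending replies flagged with continues: true until the final one. A rough sketch of such a request, assuming a List method on io.systemd.network:

JsonObject call = new JsonObject();
call.addProperty("method", "io.systemd.network.List"); // assumed method name, for illustration only
call.addProperty("more", true); // ask the server to keep streaming updates instead of a single reply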

In the current example the bindings to io.systemd.network were created manually. This is fine for a first example, but combined with the Xtext based IDL parser it would be a simple task to create a Maven plugin which creates the binding code on the fly, based on the provided varlink files.

Of course, there is so much more: creating a graphical System API browser, the whole server/interface side and dozens of bindings to create.

Conclusion

Varlink is an amazing piece of technology, mostly because it is that simple. It offers the right functionality to solve so many of the issues we currently face when accessing operating system APIs. And it definitely is a candidate for getting rid of all the ugly wrapper code around command line calls and other things which are currently necessary to talk to operating system functionality. All of that with plain Java functionality (at least if you go with TCP 😉 ).

Links & stuff

How to install varlink (on Fedora 27, for CentOS use “yum”):

sudo dnf copr enable "@varlink/varlink"
sudo dnf install fedora-varlink
sudo systemctl enable --now org.varlink.resolver.socket
varlink help

Manually reclaiming a persistent volume in OpenShift

When you have a persistent volume in OpenShift configured with “Retain”, the volume will switch to “Released” after the claim has been deleted. But what now? How do you manually recycle it? This post gives a brief overview of how to manually reclaim the volume.

Deleting the claim

Deleting the persistent volume claim in OpenShift is simple, either using the Web UI or by executing:

$ oc delete pvc/my-claim

If you check, you will see that the persistent volume is “Released” but not “Available”:

$ oc get pv
NAME              CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM                         REASON    AGE
my-pv             40Gi       RWO           Retain          Released   my-project/my-claim                     2d

What the documentation tells us

The OpenShift documentation states:

By default, persistent volumes are set to Retain. NFS volumes which are set to Recycle are scrubbed (i.e., rm -rf is run on the volume) after being released from their claim (i.e, after the user’s PersistentVolumeClaim bound to the volume is deleted). Once recycled, the NFS volume can be bound to a new claim.

At a different location it simply says:

Retained reclaim policy allows manual reclamation of the resource for those volume plug-ins that support it.

But how to actually do that? How to manually reclaim the volume?

Reclaiming the volume

First of all, ensure that the data is actually deleted. Using NFS, you will need to manually delete the content of the share, e.g. using rm -Rf /exports/my-volume/*, but be sure to keep the actual export directory in place.

Now it is time to actually make the PV available again for being claimed. For this, the reference to the previous claim (spec/claimRef) has to be removed from the persistent volume. You can do this manually from the Web UI or with a short command from the shell (assuming you are using bash):

$ oc patch pv/my-pv --type json -p $'- op: remove\n  path: /spec/claimRef'
"my-pv" patched

This should return the volume into state “Available”:

$ oc get pv
NAME              CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM                         REASON    AGE
my-pv             40Gi       RWO           Retain          Available  my-project/my-claim                     2d

OPC UA solutions with Eclipse Milo

This article walks you through the first steps of creating an OPC UA solution based on Eclipse Milo. OPC UA, also known as IEC 62541, is an IoT communication protocol for connecting industrial automation systems. Eclipse Milo™ is an open-source, Java-based implementation.

What is OPC UA?

OPC UA is a point-to-point, client/server based communication protocol used in industrial automation scenarios. It offers APIs for telemetry data, command and control, historical data, alarming and event logs. And a bit more.

OPC UA is also the successor of OPC DA (AE, HD, …) and puts a lot more emphasis on interoperability than the older, COM/DCOM based, variants. It not only offers a platform neutral communication layer with security built right into it, but also a rich set of interfaces for handling telemetry data, alarms and events, historical data and more. OPC clearly has an industrial background, coming from the area of process control, PLCs and SCADA-like systems. It is also known as IEC 62541.

Looking at OPC UA from an MQTT perspective, one might ask: why do we need OPC UA? Where MQTT offers a completely undefined topic structure and data types, OPC UA provides a framework for standard and custom data types, a defined (hierarchical) namespace and a definition of request/response style communication patterns. Especially the type system, even with simple types, is a real improvement over MQTT’s BLOB approach. With MQTT you never know what is inside your message: it may be a numeric value encoded as a string, a JSON encoded object or even a picture of a cat. OPC UA on the other hand offers a type system which carries the type information along with the actual values.

OPC UA’s subscription model also provides a really efficient way of transmitting data live, but only when necessary, as defined by client and server. In combination with the binary protocol this can be a real resource saver.

The architecture of Eclipse Milo

Traditionally, OPC UA frameworks are split into a “stack” and an “SDK”. The “stack” is the core communication protocol implementation, while the “SDK” builds on top of that, offering a simpler application development model.

Eclipse Milo Components

Eclipse Milo offers both “stack” and “SDK” for both “client” and “server”. “core” is the common code shared between client and server. This should explain the module structure of Milo when doing a search for “org.eclipse.milo” on Maven Central:

org.eclipse.milo : stack-core
org.eclipse.milo : stack-client
org.eclipse.milo : stack-server
org.eclipse.milo : sdk-core
org.eclipse.milo : sdk-client
org.eclipse.milo : sdk-server

Connecting to an existing OPC UA server only requires “sdk-client”, as all the other modules are transitive dependencies of it. Likewise, creating your own OPC UA server only requires the “sdk-server” module.

Making contact

Focusing on the most common use case of OPC, data acquisition and command & control, we will now create a simple client which will read out telemetry data from an existing OPC UA server.

The first step is to look up the “endpoint descriptors” from the remote server:

EndpointDescription[] endpoints =
  UaTcpStackClient.getEndpoints("opc.tcp://localhost:4840")
    .get();

The trailing .get() might have tipped you off that Milo makes use of Java 8 futures and thus easily allows asynchronous programming. However this example will use the synchronous .get() call in order to wait for a result. This will make the tutorial more readable as we will look at the code step by step.
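
Just to show the asynchronous flavor once, the same endpoint lookup could also be chained without blocking (a small sketch; the rest of the article sticks with the synchronous calls):

UaTcpStackClient.getEndpoints("opc.tcp://localhost:4840")
  .thenAccept(endpoints -> {
    // runs once the endpoints have been received, without blocking the caller
    System.out.println("Found " + endpoints.length + " endpoints");
  });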

Normally the next step would be to pick the “best” endpoint descriptor. However “best” is relative and highly depends on your security and connectivity requirements. So we simply pick the first one:

OpcUaClientConfigBuilder cfg = new OpcUaClientConfigBuilder();
cfg.setEndpoint(endpoints[0]);

Next we will create and connect the OPC client instance based on this configuration. Of course the configuration offers a lot more options. Feel free to explore them all.

OpcUaClient client = new OpcUaClient(cfg.build());
client.connect().get();

Node IDs & the namespace

OPC UA identifies its elements (objects, folders, items) using “Node IDs”. Each server has multiple namespaces and each namespace has a tree of folders, objects and items. There is a browse API which allows you to walk through this tree, but this is mostly for human interaction. If you know the Node ID of the element you would like to access, you can simply provide it directly. Node IDs can be string encoded and might look something like ns=1;i=123. This example references a node in namespace #1, identified by the numeric ID “123”. Aside from numeric IDs, there are also string IDs (s=), UUID/GUID IDs (g=) and even a BLOB type (b=).

The following examples will assume that a node ID has been parsed into variables like nodeId, which can be done by the following code with Milo:

NodeId nodeIdNumeric = NodeId.parse("ns=1;i=42");
NodeId nodeIdString  = NodeId.parse("ns=1;s=foo-bar");

Of course instances of “NodeId” can also be created using the different constructors. This approach is more performant than using the parse method.

NodeId nodeIdNumeric = new NodeId(1, 42);
NodeId nodeIdString  = new NodeId(1, "foo-bar");

The main reason for using Node IDs is that they can be encoded efficiently when transmitted. For example, it is possible to look up an item once by a longer, string based browse path and then only use the numeric Node ID for further interaction. Node IDs can also be encoded efficiently in the OPC UA binary protocol.

Additionally, there is a set of “well known” node IDs. For example, the root folder always has the node ID ns=0;i=84. So there is no need to look those up; they can be considered constants and used directly. Milo defines all well known IDs in the Identifiers class.
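
For example, the root folder can be referenced directly, without any parsing (a small sketch using Milo’s Identifiers class):

// the well known root folder, ns=0;i=84
NodeId rootFolder = Identifiers.RootFolder;

// equivalent to parsing the string form
NodeId alsoRootFolder = NodeId.parse("ns=0;i=84");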

Reading data

After the connection has been established we will request a single read of a value. Normally OPC UA is used in an event driven manner, but we will start simple by using a single requested read:

DataValue value =
  client.readValue(0, TimestampsToReturn.Both, nodeId)
    .get();

The first parameter, max age, lets the server know that we are OK with reading a value which is a bit older. This can reduce traffic to the underlying device/system. By passing zero, however, we request a fresh update from the value source.
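
If, for example, a value of up to half a second of age were acceptable, the same call could simply pass that along (just a sketch, re-using the nodeId from above):

DataValue cachedValue =
  client.readValue(500.0, TimestampsToReturn.Both, nodeId) // accept values up to 500 ms old
    .get();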

This readValue call is actually a simplified version of a more complex read call. In OPC UA, items have attributes and there is a “main” attribute: the value. This call defaults to reading the “value” attribute. However, it is possible to read all kinds of other attributes of the same item. The next snippet shows the more complex read call, which allows you to not only read different attributes of an item, but also multiple items at the same time:

ReadValueId readValueId =
  new ReadValueId(nodeId, AttributeId.Value.uid(), null, null);

client
  .read(0, TimestampsToReturn.Both, Arrays.asList(readValueId))
    .get();

Also, for this read call we request both the server and the source timestamp. OPC UA timestamps values, so you know when a value switched to the reported state. But it is also possible that the device itself does the timestamping. Depending on your device and application, this can be a real benefit for your use case.
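
Both timestamps can then be taken from the returned DataValue; a small sketch, using the value from the simple read above and the getter names as found in Milo’s DataValue class:

DateTime sourceTime = value.getSourceTime(); // when the source produced the value
DateTime serverTime = value.getServerTime(); // when the server processed the value
System.out.format("source: %s, server: %s%n", sourceTime, serverTime);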

Subscriptions

As already explained, OPC UA can do way better than explicit single reads. Using subscriptions it is possible to have fine grained control over what you request and even how you request data.

When coming from MQTT, you know that you get updates once they get published. However, you have no control over the frequency of those updates. Imagine your data source is some temperature sensor. It is probably capable of supplying data at sub-microsecond resolution and frequency. But pushing a value update every microsecond, even if no one listens, is a waste of resources.

In OPC UA

OPC UA allows you to take control over the subscription process from both sides. When a client creates a new subscription, it provides information like the number of in-flight events, the rate of updates, … The server has the ability to modify the request, but will try to adhere to it, and will then start serving it. Also, data will only be sent if there are actual changes. Imagine a pressure sensor: the value may stay the same for quite a while, but then suddenly change rather quickly. Re-transmitting the same value over and over again, just to achieve high frequency updates when an actual change occurs, is again a waste of resources.

OPC UA Subscriptions Example

In order to achieve this in OPC UA, the client requests a subscription from the server; the server fulfills that subscription and notifies the client of both value changes and subscription state changes. So the client knows whether the connection to the device is broken or there are simply no updates to the value. If no value changes occur, nothing is transmitted. Of course there is a heartbeat on the OPC UA connection level (one for all subscriptions), which ensures detection of communication loss as well.

In Eclipse Milo

The following code snippets will create a new subscription in Milo. The first step is to define what should be monitored, and how:

// what to read
ReadValueId readValueId =
    new ReadValueId(nodeId, AttributeId.Value.uid(), null, null);

// monitoring parameters
int clientHandle = 123456789;
MonitoringParameters parameters =
    new MonitoringParameters(uint(clientHandle), 1000.0, null, uint(10), true);

// creation request
MonitoredItemCreateRequest request =
    new MonitoredItemCreateRequest(readValueId, MonitoringMode.Reporting, parameters);

The “client handle” is a client assigned identifier which allows the client to reference the subscribed item later on. If you don’t need to reference items by client handle, simply set it to some random or incrementing number.

The next step will define an initial setup callback and add the items to the subscription. The setup callback will ensure that the newly created subscription will be able to hook up listeners before the first values are received from the server side, without any race condition:

// The actual consumer

BiConsumer<UaMonitoredItem, DataValue> consumer =
  (item, value) ->
    System.out.format("%s -> %s%n", item, value);

// setting the consumer after the subscription creation

BiConsumer<UaMonitoredItem, Integer> onItemCreated =
  (monitoredItem, id) ->
    monitoredItem.setValueConsumer(consumer);

// creating the subscription

UaSubscription subscription =
    client.getSubscriptionManager().createSubscription(1000.0).get();

List<UaMonitoredItem> items = subscription.createMonitoredItems(
    TimestampsToReturn.Both,
    Arrays.asList(request),
    onItemCreated)
  .get();

Taking Control

Of course consuming telemetry data is fun, but sometimes it is necessary to issue control commands as well. Issuing a command or setting a value on the target device is as easy as:

client
  .writeValue(nodeId, DataValue.valueOnly(new Variant(true)))
    .get();

This snippet will send the value TRUE to the object identified by nodeId. Of course, like for the read, there are more options when writing values/attributes of an object.
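
For example, the write operation returns a status code which can be checked; a minimal sketch (assuming the returned future resolves to Milo’s StatusCode type):

StatusCode status = client
  .writeValue(nodeId, DataValue.valueOnly(new Variant(true)))
    .get();

if (status.isGood()) {
  System.out.println("Write succeeded");
}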

But wait … there is more!

This article only scratched the surface of the complex topic of OPC UA. But it should have given you a brief introduction to OPC UA and Eclipse Milo, and it should get you started with OPC UA from a client perspective.

Watch out for the repository ctron/milo-ece2017, which will receive some more content on OPC UA and Eclipse Milo, and which will be the accompanying repository for my talk Developing OPC UA with Eclipse Milo™ at EclipseCon Europe 2017.

I would like to thank Kevin Herron for not only helping me with this article, but for helping me so many times understanding Milo and OPC UA.


AsyncAPI Java Tools 0.0.4 released

It started as a proof-of-concept, checking out AsyncAPI and learning something about it. The topic actually came to me from two different angles. We were looking into solutions for RPC over messaging (namely AMQP 1.0) instead of HTTP/REST. The second angle was a discussion with a very good friend of mine about how you define, these days, an API for messages going from the server to the client. Everyone is talking about Swagger and OpenAPI, but this seems to be all request/response for HTTP based APIs.

And then someone mentioned AsyncAPI, so I had to look into it. This is what I came up with so far.

AsyncAPI tools

ctron/asyncapi is a set of tools, written in Java, around AsyncAPI in general: reading the specification file into a Java model, a code generator for client and server side code, and a few base classes which the generated code makes use of.

The definition file parser is still not optimal. The idea is that the definition is based on JSON Schema, but I couldn’t find any good support for JSON Schema in Java at the moment. So right now it is all home-brew, and although it gets the job done, I don’t think it is pretty.

Code generation

The code generation is a bit more sophisticated. The idea of taking it out of the Maven plugin came pretty early: the Maven plugin really is just a small wrapper around the code generator, so it would be cool to have a Gradle plugin at some point in the future, and that should easily be possible.

The code generation is backed by Eclipse JDT, which allows you to create and parse Java code in a DOM style way. I am constantly torn between liking and hating this at the same time. I did work with generic model-to-text tools in the past, and tools like Acceleo or Xpand are way easier to read than Java code generating Java code in that way. On the other hand, you would need a full blown Ecore model first and then wrap this again in a Maven plugin, and I am not sure that is fun either. Also, using the DOM approach it is quite simple to write extension modules for the code generation, which can process the generated Java DOM and extend it, without the requirement to write an awfully complex code generation template. So let’s see where this is going in the future.
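
To give an idea of what “Java code generating Java code” with the JDT DOM looks like, here is a rough sketch (the package and type names are made up for illustration; this is not the actual generator code):

AST ast = AST.newAST(AST.JLS8);
CompilationUnit cu = ast.newCompilationUnit();

// package declaration of the generated file
PackageDeclaration pkg = ast.newPackageDeclaration();
pkg.setName(ast.newName("com.example.generated"));
cu.setPackage(pkg);

// an empty type; the real generator would now add fields, methods, …
TypeDeclaration type = ast.newTypeDeclaration();
type.setName(ast.newSimpleName("UserSignup"));
cu.types().add(type);

System.out.println(cu); // prints the generated source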

What works?

Don’t expect production ready code … yet 😉 Currently the code generator creates the defined types/schemas, the messages and their payloads. Topics are parsed into services, versions and actions, and Client and Server interfaces are generated. There is also a client and server implementation using AMQP and Qpid JMS, which already allows you to communicate between client and server. Right now GSON is used for the JSON payload serialization, but I think it should be pretty simple to swap this for e.g. Jackson.

What does not work?

The AsyncAPI specification actually defines a bit more than the tooling can currently handle. The server section is missing, and various metadata like license and descriptions are not supported. But the most important thing which is currently missing, IMHO, is an implementation backed by MQTT. Not that I am a big fan of MQTT, I would prefer AMQP in this case. But I would like to see two different AsyncAPI partners communicate via MQTT/JSON, one of them being generated by this tool-set and the other one based on something completely different.

JSON Schema is also more powerful than what the parser can currently handle. I am not sure if JSON Schema and Java are a very good fit, but if you want to go AsyncAPI, then you need to work with JSON Schema. And this toolset needs to do a better job here.

AsyncAPI Maven Plugin

The AsyncAPI Maven plugin takes the code generator from the main tools project and wraps it into a Maven plugin. The idea is to simply drop in your AsyncAPI YAML file and let the Maven plugin generate the code for it. Of course this is Eclipse M2E aware, so you can simply save your YAML file and Eclipse will generate new code for you on the fly.

Examples

Interested in what this looks like in actual source code? Well, here are some examples. Also be sure to check out my examples repository.

First we need to create a builder for either the server or the client implementation:

Builder<JmsClient> builder = JmsClient.newBuilder()
  .host("localhost")
  .profile(AmqpProfile.DEFAULT_PROFILE)
  .payloadFormat();

Next create an instance from it, the following is a client instance which will listen to some server-side event:

try (JmsClient client = builder.build();
     ListenerHandle listener =
        client.accounts()
          .eventUserSignup().subscribe(System.out::println)) {

  System.out.println("Waiting for messages…");
  Thread.sleep(Long.MAX_VALUE);

}

Then of course we need a server to publish messages:

try (JmsServer client = builder.build()) {

  client.accounts().eventUserSignup()
    .publish(newUserMessage())
      .toCompletableFuture()
      .get();

}

Of course there is no need to actively wait for the message to be sent with get() if you don’t have to. But you can use the AsyncAPI in a synchronous way if you like to 😉
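
A fire-and-forget variant, re-using the generated API from the snippet above, could look like this (just a sketch):

client.accounts().eventUserSignup()
  .publish(newUserMessage())
    .toCompletableFuture()
    .whenComplete((result, error) -> {
      // don't block the caller, only report a failure
      if (error != null) {
        error.printStackTrace();
      }
    });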

What is next?

There is a lot to do. Really, a lot! I would like to make an interop test with MQTT. There are several fields from the specification which are currently not supported. The server side JMS API isn’t really suitable for JEE style programming, especially when it comes to container managed JMS.

So I hope I will find some time to work on the MQTT backed implementation, because that would validate that two different AsyncAPI tools could work together. And I think this is what it is all about.

Before I forget…

The initial idea of why I looked into AsyncAPI was to get message based request/response. Well, that is something which AsyncAPI doesn’t really provide. But, to be fair, it would only require a few changes to add this to the specification.


Kapua micro client SDK, running on a microcontroller

A few weeks back, while being at EclipseCon France, I stumbled over a nice little gadget. There was a talk from MicroEJ about Java on microcontrollers, and they were showing an IoT related demo based on their development environment. And it seemed they had Eclipse Paho (including TLS) and Google Protobuf running on their JVM without too much trouble.

ST Board with Thermocloud

My first idea was to simply drop the Kapua Gateway Client SDK on top of it, implementing the cloud facing API of MicroEJ, and let their IoT demo publish data towards Kapua. After a few days I was able to order such a STM32F746G-DISCO board myself and play around with it a little bit. It quickly turned out that while it was pretty easy to drop some Java code on the device, using the gateway client SDK was not an option. The MicroEJ JVM is based on Java CLDC 8. Sounds like Java 8, right? Well, it is more like Java 7. Aside from a few classes which are missing, the core features missing were Java 8’s lambdas and enhancements to interfaces (like static methods and default methods).

Rewriting the gateway client SDK in Java 7, dropping the shiny API which we currently have, didn’t sound very appealing. But then again, implementing the Kapua communication stack actually isn’t that complicated, and such an embedded device wouldn’t really need the extensibility and modularity of the Java 8 based gateway client SDK. So after a few hours there was the Kapua micro client SDK, which doesn’t consume any dependencies other than Paho and Protobuf and only uses a minimal set of Java 7 functionality.

The second step was to implement the MicroEJ specific APIs and map the calls to the Kapua micro SDK, which wasn’t too difficult either. So now it is possible to simply install the “Kapua Data Channel Provider” from the MicroEJ Community Store. Alternatively you can compile the sources yourself as the code for this adapter is also on GitHub. Once the data channel provider is installed you can fire up any application consuming the DataChannel API, like the “Thermocloud” application, and publish data to Kapua. Please be sure to follow the installation instructions on the Kapua data channel provider for configuring the connection to your Kapua instance.

Kapua Data Channel Provider
Data from Thermocloud in Kapua

As the micro client is capable of running on Java 7, it might also be a choice for people wanting to connect from Android to Kapua without the need to go for Java 8. As Java 8 on Android still seems to be rather painful, this could be an option.


I would like to thank Laurent and Frédéric from MicroEJ, who did help me fix all the noob-issues I had.

GSoC with Kapua – Just peeking

The deadline for the phase evaluations of Google Summer of Code is pretty close. So it was time to take another look at what Arthur did with the logistics simulation for Eclipse Kapua.

The first phase was focused on creating a simple logistic network simulation and pushing telemetry data to Eclipse Kapua. The data isn’t realistic, and that is OK, since this never was a requirement for the simulation. The second phase was focused on creating a simple dashboard for visualizing the data generated by the simulator.

So let’s have a quick look:

Now, as you can see, trains driving through oceans isn’t that realistic 😉 But we agreed on not taking things like this into consideration. Companies and locations are randomly generated, and figuring out which vehicles could drive on which roads between locations would simply be too much for this project.

The dashboard component is implemented using Dart, simply because it was considered cool 😉

There are still a few rough edges and the next phase will be focused on cleaning things up and making it consumable by people to play around with it.

Just a bit of Apache Camel

Sometimes you write something and then nearly forget that you did it … although it is quite handy at times. Here are a few lines of Apache Camel XML running in Eclipse Kura:

I just wanted to publish some random data from Kura to Kapua, without the need to code, deploy or build anything. Camel came to the rescue:

<routes xmlns="http://camel.apache.org/schema/spring">
  <route id="route1">

    <from uri="timer:1"/>
    <setBody><simple>${bean:payloadFactory.create("value", ${random(100)})}</simple></setBody>
    <to uri="kura-cloud:myapp/topic"/>

  </route>
</routes>

Dropping this snippet into the default XML Camel router:

  • Registers a Kura application named myapp
  • Creates a random number between 0 and 100 every second
  • Converts this to the Kura Payload structure
  • And publishes it on the topic topic

Talking to the cloud

While working on Eclipse Kapua, I wanted to run different tests, pushing telemetry data into the system. So I started to work on the Kura simulator, which can be used to simulate an Eclipse Kura IoT gateway in a plain Java project, no special setup required. That helped a lot for unit testing and scale testing. Even generating a few simple telemetry data streams for simulating data works out of the box.

But then again, I wanted to have something more lightweight and controllable. With the simulator you derive from a simple class and are fully controlled by the simulator framework. That may work well in some cases, but in others you may want to turn over the control to the actual application. Assume you already have a component which is “in charge” of your data, and now you want to push this into the cloud. Of course you can somehow work around that. But creating a nice API for this, which is simple and easy to understand, is way more fun 😉

So here is my take on a Gateway Client API, sending IoT data to the cloud, consuming command & control from it.

Intentions

I wanted to have a simple API, easy to understand and readable, preventing you from making mistakes in the first place. And if something goes wrong, it should go wrong right away. Currently we go with MQTT, but there would be an option to go with HTTP as well, or AMQP in the future. And also for MQTT we have Eclipse Paho and FUSE MQTT. Both should be available, both may have special properties, but they share some common ground. So implementing new providers should be possible, while sharing code should be easy as well.

Example

Now, here is what I came up with:

try (Client client = KuraMqttProfile.newProfile(FuseClient.Builder::new)
  .accountName("kapua-sys")
  .clientId("foo-bar-1")
  .brokerUrl("tcp://localhost:1883")
  .credentials(userAndPassword("kapua-broker", "kapua-password"))
  .build()) {

  try (Application application = client.buildApplication("app1").build()) {

    // subscribe to a topic

    application.data(Topic.of("my", "receiver")).subscribe(message -> {
      System.out.format("Received: %s%n", message);
    });

    // cache sender instance

    Sender sender = application
      .data(Topic.of("my", "sender"))
      .errors(ignore());

    int i = 0;
    while (true) {
      // send
      sender.send(Payload.of("counter", i++));
      Thread.sleep(1000);
    }
  }
}

Looks pretty simple, right? In the background the MQTT connection is managed, the payload gets encoded, birth certificates get exchanged and subscriptions get managed. But still, the main application is in control of the data flow.

How to do this at home

If you want to have a look at the code, it is available on GitHub (ctron/kapua-gateway-client) and ready to consume on Maven Central (de.dentrassi.kapua). But please be aware of the fact that this is a proof-of-concept, and may never become more than that.

Simply adding the following dependency to your project should be enough:

<dependency>
  <groupId>de.dentrassi.kapua</groupId>
  <artifactId>kapua-gateway-client-provider-mqtt-fuse</artifactId>
  <version>0.2.0</version> <!-- check for a more recent version -->
</dependency>

With this dependency you can use the example above. If you want to go for Paho instead of FUSE, use kapua-gateway-client-provider-mqtt-paho instead.

Taking for a test drive

Now, taking this for a test drive is even more fun. Eclipse SmartHome has the concept of a persistence system, where telemetry data gets stored in a time series like database. There is a default implementation based on rrd4j. So re-implementing this interface for Kapua was quite easy and resulted in an example module which can be installed into the Karaf based OpenHAB 2 distribution with just a few commands:

openhab> repo-add mvn:de.dentrassi.kapua/karaf/0.2.0/xml/features
openhab> feature:install eclipse-smarthome-kapua-persistence

Then you need to re-configure the component via the “Paper UI” and point it towards your Kapua setup. Maybe you will need to tweak the “kapua.persist” file in order to define what gets persisted and when. And if everything goes well, your temperature readings will get pushed from SmartHome to Kapua.


Google Summer of Code 2017 with Eclipse Kapua

I am happy to announce that Eclipse Kapua got two slots in this year’s Google Summer of Code. Yes, two projects got accepted, and both are for the Eclipse Kapua project.

Anastasiya Lazarenko will provide a simulation of a fish tank and Arthur Deschamps will go for a supply chain simulation. Both simulations are planned to feed their data into Eclipse Kapua using the Kura simulator framework. Although both projects seem to be quite similar from a high level perspective, I think they are quite different when it comes to the details.

The basic idea is not to provide a statistically/physically/… valid simulation, but something to play around and interact with: spinning up a few virtual instances of both models, hooking them up to our cloud based IoT solution and interacting a bit with them, getting some reasonable feedback values.

For Kapua this will definitely mean evolving the simulator framework based on the feedback from both students, making it (hopefully) easier to use for other tasks. And maybe, just maybe, we can also go the extra mile and make the same simulations available for Eclipse Hono.

If you want to read more about Anastasiya and Arthur just read through their introductions on kapua-dev@eclipse.org and give them a warm welcome:

read Anastasiya’s introduction
read Arthur’s introduction

Best of luck to you!

OPC UA with Apache Camel

Apache Camel 2.19.0 is close to its release, and the OPC UA component called “camel-milo” will be part of it. This is my Eclipse Milo backed component which was previously hosted in my personal GitHub repository ctron/de.dentrassi.camel.milo. It has now been accepted into Apache Camel and will be part of the 2.19.0 release. As there is already a release candidate available, I think it is a great time to give a short introduction.

In a nutshell OPC UA is an industrial IoT communication protocol for acquiring telemetry data and command and control of industrial grade automation systems. It is also known as IEC 62541.

The Camel Milo component offers both an OPC UA client (milo-client) and server (milo-server) endpoint.

Running an OPC UA server

The following Camel example is based on Camel Blueprint and provides some random data over OPC UA, acting as a server:

Example project layout

The blueprint configuration would be:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="
	http://www.osgi.org/xmlns/blueprint/v1.0.0 https://osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
	http://camel.apache.org/schema/blueprint https://camel.apache.org/schema/blueprint/camel-blueprint.xsd
	">

	<bean id="milo-server"
		class="org.apache.camel.component.milo.server.MiloServerComponent">
		<property name="enableAnonymousAuthentication" value="true" />
	</bean>

	<camelContext xmlns="http://camel.apache.org/schema/blueprint">
		<route>
			<from uri="timer:test" />
			<setBody>
				<simple>${random(0,100)}</simple>
			</setBody>
			<to uri="milo-server:test-item" />
		</route>
	</camelContext>

</blueprint>

And adding the following Maven build configuration:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

	<modelVersion>4.0.0</modelVersion>

	<groupId>de.dentrassi.camel.milo</groupId>
	<artifactId>example1</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<packaging>bundle</packaging>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<camel.version>2.19.0</camel.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>ch.qos.logback</groupId>
			<artifactId>logback-classic</artifactId>
			<version>1.2.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.camel</groupId>
			<artifactId>camel-core-osgi</artifactId>
			<version>${camel.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.camel</groupId>
			<artifactId>camel-milo</artifactId>
			<version>${camel.version}</version>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.felix</groupId>
				<artifactId>maven-bundle-plugin</artifactId>
				<version>3.3.0</version>
				<extensions>true</extensions>
			</plugin>
			<plugin>
				<groupId>org.apache.camel</groupId>
				<artifactId>camel-maven-plugin</artifactId>
				<version>${camel.version}</version>
			</plugin>
		</plugins>
	</build>

</project>

This allows you to simply run the OPC UA server with:

mvn package camel:run

Afterwards you can connect with the OPC UA client of your choice and subscribe to the item test-item, receiving that random number.

Release candidate

As this is currently the release candidate of Camel 2.19.0, it is necessary to add the release candidate Maven repository to the pom.xml. I did omit this in the example above, as it will no longer be necessary once Camel 2.19.0 is released:

<repositories>
	<repository>
		<id>camel</id>
		<url>https://repository.apache.org/content/repositories/orgapachecamel-1073/</url>
	</repository>
</repositories>

<pluginRepositories>
	<pluginRepository>
		<id>camel</id>
		<url>https://repository.apache.org/content/repositories/orgapachecamel-1073/</url>
	</pluginRepository>
</pluginRepositories>

It may also be that the URLs (marked above) will change as a new release candidate gets built. In this case it is necessary that you update the URLs to the appropriate repository URL.

What’s next?

Once Camel 2.19.0 is released, I will also mark my old, personal GitHub repository as deprecated and point people towards this new component.

And of course I am happy to get some feedback and suggestions.