IoT

Eclipse Kura on Apache Karaf

It took quite a while and triggered a bunch of pull requests upstream, but finally I do have Eclipse Kura™ running on Apache Karaf™.

Now of course I immediately got the question: Why is that good? After all, the setup still uses Equinox underneath Karaf. Now what? … aren’t Equinox and Karaf two different things? Read on …

What is Karaf?

Ok, back to the basics. Eclipse Equinox™ is an implementation of OSGi. So is Apache Felix™. OSGi is a Java framework for creating highly modular applications. And it eats its own dogfood: there is a thin layer of OSGi (the framework) and a bunch of “add-on” functionality (services) which is part of OSGi, but at the same time runs on top of that thin OSGi layer. Equinox and Felix both provide this thin OSGi layer as well as a set of “standard” OSGi services. And since both implement OSGi, it is possible to mix service implementations from Equinox and Felix. For example, in Package Drone I started out with Equinox as the OSGi container, but in a later release switched to the Apache Felix ConfigurationAdmin since it didn’t have the bugs of the Equinox implementation. So the final result is a plain Equinox installation with some parts from Apache Felix.

Apache Karaf can be seen as a distribution of an OSGi framework and services. Like a Linux distribution, which starts with a kernel and adds a set of packages that work well together, Karaf is a combination of an OSGi framework (Felix, Equinox or one of the others) and a set of add-on functionality which is tested against each other.

And those add-on services are the salt in the OSGi soup. You get a command line shell with completion support, JMX support, OSGi Blueprint, web server and websocket support, JAAS, SSH, JPA and way more. You can install bundles, features or complete archives directly from local sources, HTTP or Maven Central. Karaf wraps some non-OSGi dependencies automatically into OSGi bundles. And many 3rd party components already take advantage of this. For example, installing Apache Camel™ into a Karaf container is done with two lines on the Karaf shell.

What Kura currently does

Eclipse Kura currently assembles its own distribution based on pure and simple Equinox components, using a big Ant file. While this has an advantage when it comes to distribution size, and you can control each and every aspect of that distribution, it also comes at a price: you need to do exactly that. The Ant file needs to create each startup configuration file, knowing about each and every bundle version.

Karaf, on the other hand, already provides ready-to-run distributions in various flavors, and has tooling to create new ones.

What is different now

In order to bring Kura to Karaf, a few changes in Kura were necessary. In addition, a new set of Maven projects assembles a target distribution. Right now it is not possible to simply drop Kura into Karaf as a Karaf feature. A special Karaf assembly has to be prepared, which can then be uploaded and installed at a specific location. This is mostly necessary because Kura expects files at a specific location in the file system. Karaf, on the other hand, can simply be downloaded, unzipped and started at any location.

In addition to that, a few bugs in Kura had to be fixed. Kura made a few assumptions about how Equinox and its service implementations handle things, which may no longer hold in other OSGi environments. These fixes will probably make it into Kura 2.1.

Can I test it today?

If you just want to start up a stable build (not a release) you can easily do this with Docker:

[code language=”bash”]
sudo docker run -ti ctron/kura:karaf-stable # stable milestone
sudo docker run -ti ctron/kura:karaf-develop # latest develop build
[/code]

This will drop you into the Karaf shell of Kura running inside Docker. You can play around with the console, install some additional functionality or try out an example like OPC UA with Apache Camel.

You can also check out the branch feature/karaf in my fork of Kura which contains all the upstream patches and the Karaf distribution projects in one branch. This branch is also used to generate the docker images.

What is next?

Clean up the target assembly: Right now assembling the target platform for Karaf is not as simple as I had hoped. This is mostly due to the complexity of such a setup (mixing different variants of Linux, processors and devices) and the way Maven works. With a bit of work this can be made even simpler, by adding a specific Maven plugin and making one or two fixes in the Karaf Maven plugin.

Enable network management: Right now I have disabled network management for the two bare-metal Linux targets, simply because I had a few issues getting it running on RHEL 7 and Fedora: things like Ethernet devices no longer being named “eth0”, and the like.

Bring Kura to Maven Central/More Kura bundles: These two things go hand in hand. Of course I could simply create more Karaf features, packaging more Kura bundles. But it would be much better to have Kura artifacts available on Maven Central, being able to consume them directly with Karaf. That way it would be possible to either just drop in new functionality or to create a custom Kura+Karaf distribution based on existing modules.

Will it become the default for Kura?

Hard to say. Hopefully in the future. My plan is to bring it into Kura in the 2.2 release cycle. But this also means that quite a set of dependencies has to go through the Eclipse CQ process if we want to provide Karaf-ready distributions for download. A first step could be to just provide the recipes for creating your own Karaf distribution of Kura.

So the next step is to bring support for Karaf into Kura upstream.

Collecting data to OpenTSDB with Apache Camel

OpenTSDB is an open source, scalable, <buzzword>big data</buzzword> solution for storing time series data. Well, that’s what the name of the project actually says ;-) In combination with Grafana you can pretty easily build a few nice dashboards for visualizing that data. The only question is, of course: how do you get your data into that system?

My intention was to provide a simple way to stream metrics into OpenTSDB using Apache Camel. A quick search did not bring up any existing solution for pushing data from Apache Camel to OpenTSDB, so I decided to write a small Camel component which picks up data and sends it to OpenTSDB using the HTTP API. Of course, having a plain OpenTSDB HTTP collector API for that would be fine as well. So I split the component into three modules: a generic OpenTSDB collector API, an HTTP implementation of it, and finally the Apache Camel component.
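For background, OpenTSDB’s HTTP API accepts data points as JSON documents POSTed to its /api/put endpoint. The following sketch shows, using only the JDK and made-up metric and tag names, roughly what such a payload looks like; it is not code from the modules above:

```java
// Build an OpenTSDB /api/put JSON payload by hand, using only the JDK.
// Metric name, timestamp, value and tag are made-up example data.
public class OpenTsdbPayload {

    static String payload(final String metric, final long timestamp,
            final double value, final String tagKey, final String tagValue) {
        return String.format(
                "{\"metric\":\"%s\",\"timestamp\":%d,\"value\":%s,\"tags\":{\"%s\":\"%s\"}}",
                metric, timestamp, value, tagKey, tagValue);
    }

    public static void main(final String[] args) {
        // This is the kind of document the HTTP collector POSTs
        // to e.g. http://localhost:4242/api/put
        System.out.println(payload("test2", 1468000000L, 23.5, "value", "temp"));
    }
}
```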

All components are already available in Maven Central, and although they have the version number 0.0.3, they are working quite well ;-)

[code language=”xml”]
<dependency>
    <groupId>de.dentrassi.iot</groupId>
    <artifactId>de.dentrassi.iot.opentsdb.collector</artifactId>
    <version>0.0.3</version>
</dependency>
<dependency>
    <groupId>de.dentrassi.iot</groupId>
    <artifactId>de.dentrassi.iot.opentsdb.collector.http</artifactId>
    <version>0.0.3</version>
</dependency>
<dependency>
    <groupId>de.dentrassi.iot</groupId>
    <artifactId>de.dentrassi.iot.opentsdb.collector.camel</artifactId>
    <version>0.0.3</version>
</dependency>
[/code]

With those dependencies on your classpath, or in your OSGi container ;-), you can use the following code to easily push data coming from MQTT directly into OpenTSDB:

[code language=”java”]
CamelContext context = new DefaultCamelContext();

// add routes

context.addRoutes(new RouteBuilder() {

    @Override
    public void configure() throws Exception {
        from("paho:sensors/test2/temperature?brokerUrl=tcp://iot.eclipse.org")
                .log("${body}")
                .convertBodyTo(String.class).convertBodyTo(Float.class)
                .to("open-tsdb:http://localhost:4242#test2/value=temp");

        from("paho:tele/devices/TEMP?brokerUrl=tcp://iot.eclipse.org")
                .log("${body}")
                .convertBodyTo(String.class).convertBodyTo(Float.class)
                .to("open-tsdb:http://localhost:4242#test3/value=temp");
    }
});

// start the context
context.start();
[/code]

You can directly push Floats or Integers into OpenTSDB. The example above shows a “to” endpoint which directly addresses a metric, using #test2/value=temp. If you need more tags, they can be added using e.g. #test3/value=temp/foo=bar.
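For illustration only, the fragment syntax can be sketched like this: the first segment names the metric, every following key=value segment becomes a tag. This is a hypothetical helper, not the component’s actual parser:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the endpoint fragment format, NOT the component's
// real code: first segment = metric name, remaining "key=value" segments = tags.
public class FragmentSyntax {

    static Map<String, String> parse(final String fragment) {
        final Map<String, String> result = new LinkedHashMap<>();
        final String[] segments = fragment.split("/");
        result.put("metric", segments[0]); // first segment is the metric
        for (int i = 1; i < segments.length; i++) {
            final String[] tag = segments[i].split("=", 2); // "key=value"
            result.put(tag[0], tag[1]);
        }
        return result;
    }

    public static void main(final String[] args) {
        System.out.println(parse("test3/value=temp/foo=bar"));
        // prints: {metric=test3, value=temp, foo=bar}
    }
}
```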

But it is also possible to have a generic endpoint and provide the metric information in the actual payload. In this case you have to use the type de.dentrassi.iot.opentsdb.collector.Data and fill in the necessary information. It is also possible to publish a Data[] array.
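Conceptually, such a payload carries the metric name, the value and the tags itself, instead of encoding them in the endpoint URI. The stand-in class below only illustrates that idea; the real de.dentrassi.iot.opentsdb.collector.Data type has its own API, and the field names here are assumptions:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only: a stand-in for the kind of information a generic
// payload carries. This is NOT the actual Data class from the collector API.
public class DataSketch {

    final String metric;
    final Number value;
    final Map<String, String> tags = new LinkedHashMap<>();

    DataSketch(final String metric, final Number value) {
        this.metric = metric;
        this.value = value;
    }

    public static void main(final String[] args) {
        // The message body itself names metric, value and tags,
        // so one endpoint can serve many metrics.
        final DataSketch data = new DataSketch("test3", 23.5f);
        data.tags.put("value", "temp");
        System.out.println(data.metric + " = " + data.value + " " + data.tags);
    }
}
```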

Bringing OPC UA to Apache Camel

My first two weeks at Red Hat have been quite awesome! There is a lot to learn, and one of the first things I checked out was Apache Camel together with Eclipse Milo. While Camel is the better-known open source project, Eclipse Milo is a newcomer at the Eclipse Foundation. And while it is officially still in the incubation phase, it is a tool you can really work with! Eclipse Milo is an OPC UA client and server implementation.

Overview

Although Apache Camel already has an OPC DA connector, based on OpenSCADA’s Utgard library (sorry, but I couldn’t resist ;-) ), OPC UA is a completely different thing. OPC DA remote connectivity is based on DCOM and has always been really painful. That’s also the reason for the name: Utgard. But OPC UA has cleared that up. It features different communication layers, the most prominent one, up until now, being the custom, TCP-based binary protocol.

I started by investigating the way Camel works and dug a little bit into the API of Eclipse Milo. From an API perspective, the two tools couldn’t be more different. Where Camel tries to focus on simplicity, reducing the full complexity down to a very slim and simple interface, Milo unleashes the full complexity of OPC UA into a very nice, but complex, Java 8 style API. You can argue night and day about which is the better approach; with a little bit of glue code, both sides work perfectly together.

To make it short: the final result is an open source project on GitHub, ctron/de.dentrassi.camel.milo, which features two Camel components providing OPC UA client and OPC UA server support. This means it is now possible to consume or publish OPC UA value updates by simply configuring a Camel URI.

Examples

For example:

milo-client:tcp://foo:bar@localhost:12685?nodeId=items-MyItem&namespaceUri=urn:org:apache:camel

can be used to configure an OPC UA client connection to “localhost:12685”. Since both a Camel producer and a consumer are implemented, it is possible to subscribe/monitor this item or to write value updates.
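Stripping the milo-client: prefix, the remainder of the example address is a regular URI. A quick JDK sketch shows which part carries which information (foo:bar, the host and the port are simply the example values from above):

```java
import java.net.URI;

// Decompose the example endpoint address (without the "milo-client:" scheme
// prefix) using the JDK's URI class, to show which part carries what.
public class EndpointUri {

    public static void main(final String[] args) {
        final URI uri = URI.create(
                "tcp://foo:bar@localhost:12685?nodeId=items-MyItem&namespaceUri=urn:org:apache:camel");
        System.out.println(uri.getUserInfo()); // credentials: foo:bar
        System.out.println(uri.getHost());     // localhost
        System.out.println(uri.getPort());     // 12685
        System.out.println(uri.getQuery());    // nodeId=...&namespaceUri=...
    }
}
```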

The following URI does create a server item named “MyItem”:

milo-server:MyItem

which can then be accessed using an OPC UA client. For the server component, configuration options like bind address and port are located on the Camel component, not the endpoint. However, Camel allows registering the same component type multiple times with different configurations.

Also see the testing classes, which show a few examples.

What is next?

With help from @hekonsek, who knows a lot more about Camel than I do, we hope to contribute this extension to the Apache Camel project. So once Eclipse Milo has its first release, this could become an “out-of-the-box” experience when using Apache Camel, thanks to another wonderful project of Eclipse IoT of course ;-)

Also, with a little bit of luck, there will be a talk at EclipseCon Europe 2016 about this adventure. It will even go a bit further, because I want to bring this module into Eclipse Kura, so that Eclipse Kura will feature OPC UA support using Apache Camel.