Eclipse Kura on Apache Karaf

It took quite a while and triggered a bunch of pull requests upstream, but finally I have Eclipse Kura™ running on Apache Karaf™.

Now of course I immediately got the question: why is that good? After all, the resulting setup still uses Equinox underneath Karaf. So what? … aren't Equinox and Karaf two different things? Read on …

What is Karaf?

Ok, back to the basics. Eclipse Equinox™ is an implementation of OSGi. As is Apache Felix™. OSGi is a thin Java framework for creating modular applications. And it eats its own dog food: there is a thin layer of OSGi (the framework) and a bunch of “add-on” functionality (services) which is part of OSGi, but at the same time runs on top of that thin OSGi layer. Equinox and Felix both provide this thin OSGi layer as well as a set of “standard” OSGi services. And since both implement OSGi, it is possible to interchange service implementations between Equinox and Felix. For example, in Package Drone I started out with Equinox as the OSGi container, but in some release switched to the Apache Felix ConfigurationAdmin, since it didn’t have the bugs of the Equinox implementation. So the final target is a plain Equinox installation with some parts from Apache Felix.

Apache Karaf can be seen as a distribution of an OSGi framework and services. Like a Linux distribution starts with a kernel and adds a set of packages which work well together, Karaf is a combination of an OSGi framework (either Felix, Equinox or one of the others) and a set of add-on functionality which is tested against each other.

And those add-on services are the salt in the OSGi soup. You can have a command line shell with completion support, JMX support, OSGi Blueprint, web server and WebSocket support, JAAS, SSH, JPA and way more. You can install bundles, features or complete archives directly from local sources, HTTP or Maven Central. It wraps some non-OSGi dependencies automatically into OSGi bundles. And many third-party components already take advantage of this. For example, installing Apache Camel™ into a Karaf container is done with two lines on the Karaf shell.
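Something along these lines (the version number is just for illustration):

feature:repo-add camel 2.17.0
feature:install camel-core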

What Kura currently does

Eclipse Kura currently assembles its own distribution based on pure and simple Equinox components using a big Ant file. While this has an advantage when it comes to distribution size, and you can control each and every aspect of that distribution, it also comes at a price: you need to do exactly that. That Ant file needs to create each startup configuration file, knowing about each and every bundle version.

Karaf on the other hand already has ready-to-run distributions in various flavors, and some tooling to create new ones.

What is different now

In order to bring Kura to Karaf, a few changes were necessary in Kura. In addition to that, a new set of Maven projects assembles a target distribution. Right now it is not possible to simply drop Kura into Karaf as a Karaf feature. A special Karaf assembly has to be prepared, which can then be uploaded and installed at a specific location. This is mostly necessary because Kura expects files at specific locations in the file system. Karaf on the other hand can simply be downloaded, unzipped and started at any location.

In addition to that, a few bugs in Kura had to be fixed. Kura made a few assumptions about how Equinox and its service implementations handle things, which may no longer hold in other OSGi environments. These fixes will probably make it into Kura 2.1.

Can I test it today?

If you just want to start up a stable build (not a release) you can easily do this with Docker:

sudo docker run -ti ctron/kura:karaf-stable    # stable milestone
sudo docker run -ti ctron/kura:karaf-develop   # latest develop build

This will drop you into a Karaf shell of Kura running inside Docker. You can play around with the console, install some additional functionality or try out an example like OPC UA with Apache Camel.

You can also check out the branch feature/karaf in my fork of Kura, which contains all the upstream patches and the Karaf distribution projects in one branch. This branch is also used to generate the Docker images.

What is next?

Clean up the target assembly: Right now assembling the target platform for Karaf is not as simple as I had hoped. This is mostly due to the complexity of such a setup (mixing different variants of Linux, processors and devices) and the way Maven works. With a bit of work this can be made even simpler, by adding a specific Maven plugin and making one or two fixes in the Karaf Maven plugin.

Enable network management: Right now I disabled network management for the two bare-metal Linux targets, simply because I had a few issues getting it running on RHEL 7 and Fedora: things like Ethernet devices no longer being named “eth0” and the like.

Bring Kura to Maven Central/More Kura bundles: These two things go hand in hand. Of course I could simply create more Karaf features, packaging more Kura bundles. But it would be much better to have Kura artifacts available on Maven Central, being able to consume them directly with Karaf. That way it would be possible to either just drop in new functionality or to create a custom Kura+Karaf distribution based on existing modules.
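Once the artifacts are on Maven Central, pulling a module into Karaf could be as simple as this (the coordinates here are made up for illustration):

bundle:install -s mvn:org.eclipse.kura/org.eclipse.kura.api/2.1.0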

Will it become the default for Kura?

Hard to say. Hopefully in the future. My plan is to bring it into Kura in the 2.2 release cycle. But this also means that quite a set of dependencies has to go through the Eclipse CQ process if we want to provide Karaf-ready distributions for download. A first step could be to just provide the recipes for creating your own Karaf distribution of Kura.

So the next step is to bring support for Karaf into Kura upstream.

Collecting data to OpenTSDB with Apache Camel

OpenTSDB is an open source, scalable, <buzzword>big data</buzzword> solution for storing time series data. Well, that’s what the name of the project actually says 😉 In combination with Grafana you can pretty easily build a few nice dashboards for visualizing that data. The only question is, of course: how do you get your data into that system?

My intention was to provide a simple way to stream metrics into OpenTSDB using Apache Camel. A quick search did not bring up any existing solutions for pushing data from Apache Camel to OpenTSDB, so I decided to write a small Camel component which can pick up data and send it to OpenTSDB using the HTTP API. Of course, having a plain OpenTSDB collector API, usable without Camel, would be fine as well. So I split the component into three different modules: a generic OpenTSDB collector API, an HTTP implementation of it, and finally the Apache Camel component.

All components are already available in Maven Central, and although they have the version number 0.0.3, they are working quite well 😉
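The Maven coordinates should look roughly like this (the exact artifact ID is an assumption on my side, better search Maven Central for the current ones):

<dependency>
    <groupId>de.dentrassi.iot</groupId>
    <artifactId>de.dentrassi.camel.opentsdb</artifactId>
    <version>0.0.3</version>
</dependency>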


Dropping those dependencies into your classpath, or into your OSGi container ;-), you can use the following code to easily push data coming from MQTT directly into OpenTSDB:

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

CamelContext context = new DefaultCamelContext();

// add routes
context.addRoutes(new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        // the MQTT endpoint and the exact "open-tsdb" URI syntax are assumptions
        from("paho:sensors/temperature?brokerUrl=tcp://localhost:1883")
            .convertBodyTo(Float.class)
            .to("open-tsdb:http://localhost:4242#test3/value=temp");
    }
});

// start the context
context.start();

You can directly push Floats or Integers into OpenTSDB. The example above shows a “to” endpoint which directly addresses a metric, using #test3/value=temp. If you need more tags, those can be added like #test3/value=temp/foo=bar.

But it is also possible to use a generic endpoint and provide the metric information in the actual payload. In this case you have to use the type de.dentrassi.iot.opentsdb.collector.Data and fill in the necessary information. It is also possible to publish a Data[] array.
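A minimal sketch of that variant; the setters on Data are assumptions about the collector API, only the Camel calls are standard:

// hypothetical method names on the Data type
Data data = new Data();
data.setMetric("test3");
data.setValue(42.0f);
data.addTag("foo", "bar");

// send it to a generic endpoint which names no metric in the URI
ProducerTemplate template = context.createProducerTemplate();
template.sendBody("open-tsdb:http://localhost:4242", data);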

Camel, Kura and OSGi, struggling with ‘sun.misc.Unsafe’

So here comes a puzzle for you … You have Apache Camel (2.17), which internally uses com.googlecode.concurrentlinkedhashmap, which uses sun.misc.Unsafe. Now you can argue a lot about whether this is necessary or not. It just is that way. So starting up Apache Camel in an OSGi container which does strict processing of classes will run into a java.lang.NoClassDefFoundError due to sun/misc/Unsafe.

The cause

The cause is rather simple. Apache Camel makes use of sun.misc, and so it should declare that in the OSGi manifest. OSGi R6 (and versions before that as well) defines in section 3.9.4 of the core specification that java.* is forwarded to the parent class loader, but the rest is not. So sun.misc will not go to the parent class loader (which finally is the JVM) by default.


The solutions

As always, there are a few. There may be a few more possible than I describe here, but I don’t want to list any which require changing Apache Camel itself.


Two Fragments
OSGi fragments are a way to enhance an already existing OSGi bundle; they kind of merge into their host bundle. So it is possible to create a fragment for Apache Camel which does Import-Package: sun.misc. This quickly resolves the issue, as long as the fragment is installed into your OSGi container at the same time Apache Camel is, so that it is available when Apache Camel starts. The host bundle has to be org.apache.camel.camel-core, since this is the bundle requiring sun.misc.Unsafe.
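A minimal sketch of such a fragment’s manifest (the symbolic name is made up):

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: my.sun.misc.importer
Bundle-Version: 1.0.0
Fragment-Host: org.apache.camel.camel-core
Import-Package: sun.misc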

Of course this brings up the next issue: there is nobody who exports sun.misc. But again, there is a way to fix this.

The actual provider of sun.misc is the JVM. However, the JVM does not know about OSGi. But the OSGi container itself, the framework, can act as a proxy. If the framework bundle (aka bundle zero) exported sun.misc, it would be able to actually resolve the class by using the JVM boot class path. The solution therefore is another fragment, which performs an Export-Package: sun.misc. That will bring both bundles, with their fragments, together, correctly wiring up sun.misc.

But as we have seen before, the fragment requires a “host bundle”, and this would be different when e.g. using Apache Felix instead of Eclipse Equinox.

Again, there is a solution: the system bundle is also known as system.bundle. So the fragment can specify system.bundle with the attribute extension:=framework as its fragment host:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: my.sun.misc.provider
Bundle-Version: 1.0.0
Fragment-Host: system.bundle; extension:=framework
Export-Package: sun.misc

Of course you can also export other JVM internal packages that way.

There are only two things to keep in mind. First of all, and this is true for all other solutions as well: if the JVM does not provide sun.misc, then this won’t work, since the class simply cannot be found. Second, and this is specific to this solution: if you start camel-core before those two fragments are installed, then you need to “refresh” the Apache Camel core bundle in order for the OSGi framework to re-wire the imports/exports.

There are also some pre-made extension bundles for this setup. Just search Maven Central.

Equinox and Felix

Some setups of Felix and Equinox provide an “out of the box” workaround. Equinox, for example, automatically forwards all failed class lookups to the boot class loader as a last resort, in case the framework is started using org.eclipse.equinox.launcher_*.jar instead of the org.eclipse.osgi_*.jar launcher.

Bootclasspath delegation for Equinox

Eclipse Equinox also allows setting a few system properties in order to fall back to the boot class path, delegating the lookup of “sun.misc” to the JVM:
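osgi.compatibility.bootdelegation=true

(I am quoting this property from memory, so double check it against the Equinox documentation.)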

This will fall back to the boot class loader, just like when using the org.eclipse.equinox.launcher launcher.

Bootclasspath delegation for all

The OSGi core specification also allows configuring direct delegation of lookups to the boot class loader (section 3.9.3 of the core specification):
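org.osgi.framework.bootdelegation=sun.misc.*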

This will forward requests for “sun.misc.*” directly to the boot class loader.


Now people may complain “oh, how complicated this OSGi thingy is”. Well, sun.misc.Unsafe was never intended to be used outside the JVM. Java 9 will correct this with its module system. OSGi can already do that, but it also provides ways to solve the problem.

Whether you prefer to use system properties, a different launcher or the “two fragments” approach is up to you and your situation. For me the problem simply was to make it happen without changing either Apache Camel or the launcher configuration of Eclipse Kura. So I went with the “two fragments” approach.


I am just writing this down in order to help others, and because I got help from others to solve this myself. So thanks to the people who posted this “on the net” a long time ago; I stumbled over your posts while googling for a solution some time ago. Sorry, I forgot where I initially found it.

Also thanks to Neil Bartlett for pointing out the OSGi conform solution with “org.osgi.framework.bootdelegation”.

Bringing OPC UA to Apache Camel

My first two weeks at Red Hat have been quite awesome! There is a lot to learn, and among the first things I checked out were Apache Camel and Eclipse Milo. While Camel is the better known open source project, Eclipse Milo is a newcomer project at the Eclipse Foundation. And while it is officially still in the incubation phase, it is a tool you can really work with! Eclipse Milo is an OPC UA client and server implementation.


Although Apache Camel already has an OPC DA connector, based on OpenSCADA’s Utgard library (sorry, but I couldn’t resist 😉 ), OPC UA is a completely different thing. OPC DA remote connectivity is based on DCOM and has always been really painful. That’s also the reason for the name: Utgard. But with OPC UA that has been cleared up. It features different communication layers, the most prominent of which, up until now, is the custom TCP based binary protocol.

I started by investigating the way Camel works and dug a little bit into the API of Eclipse Milo. From an API perspective, both tools couldn’t be more different. Where Camel tries to focus on simplicity, reducing the full complexity down to a very slim and simple interface, Milo simply unleashes the full complexity of OPC UA into a very nice, but complex, Java 8 style API. Now you can argue night and day about which is the better approach; with a little bit of glue code, both sides work perfectly together.

To make it short, the final result is an open source project on GitHub: ctron/de.dentrassi.camel.milo, which features two Camel components providing OPC UA client and server support. This means it is now possible to consume or publish OPC UA value updates by simply configuring a Camel URI.


For example, a client URI roughly like the following (I am reproducing the syntax from memory, so treat the parameter names as an assumption):
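milo-client:tcp://localhost:12685?node=MyItem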


This can be used to configure an OPC UA client connection to “localhost:12685”. Since the component implements both a Camel producer and consumer, it is possible to subscribe to/monitor this item or to write value updates.

The following URI creates a server item named “MyItem” (again, quoting the syntax from memory):
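milo-server:MyItem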


This item can then be accessed using an OPC UA client. For the server component, configuration options like bind address and port are located on the Camel component, not the endpoint. However, with Camel it is possible to register the same component type multiple times with different configurations.
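In code this looks roughly like the following sketch; context.addComponent() is plain Camel API, while the component class name and the setter are assumptions about the Milo component:

// hypothetical class and setter names for the Milo server component
MiloServerComponent server1 = new MiloServerComponent();
server1.setBindPort(12685);

MiloServerComponent server2 = new MiloServerComponent();
server2.setBindPort(12686);

// register the same component type twice, under different scheme names
context.addComponent("milo-server1", server1);
context.addComponent("milo-server2", server2);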

Also see the test classes, which show a few examples.

What is next?

With help from @hekonsek, who knows a lot more about Camel than I do, we hope to contribute this extension to the Apache Camel project. So once Eclipse Milo has its first release, this could become an “out-of-the-box” experience when using Apache Camel, thanks to another wonderful project of Eclipse IoT of course 😉

Also, with a little bit of luck, there will be a talk at EclipseCon Europe 2016 about this adventure. It will even go a bit further, because I want to bring this module into Eclipse Kura, so that Eclipse Kura will feature OPC UA support using Apache Camel.

10 years

Many of my blog posts are actually a “note to self”. Things I write down in order not to forget them. This one even goes a bit further. So save yourself some time and skip it 😉

10 years ago I started an adventure [1]. Creating an open source SCADA system and a business with that. Having an open source SCADA system was not the business case back then, it was just a side effect.

So in those 10 years a lot of things have happened. Just think back a moment: there was no “smart phone” back then. Although there was Java 5, most of us hesitated and stuck with 1.4, not having generics. 32 bit was the standard. Raspberry Pis had not been invented yet (though I did have a Lego Mindstorms NXT). There was no Git, and you were lucky if you had Subversion. Not to mention services like GitHub, Travis CI or other hipster cloud things.

Also in the area that I worked in, let’s just label it “SCADA”, a lot has happened. During that time we developed openSCADA, which later became an Eclipse project named “Eclipse SCADA” and is now “Eclipse NeoSCADA”. And similar things happened in that industry overall. Instead of closed source, C based, isolated systems, we now have a more open field, where data is transmitted by virtual machine based systems, wrapped up in XML or JSON structures [2], re-using code from open source projects. And quite many of those things happen in the Eclipse IoT working group and top-level project, which I am glad to be a part of.

Looking back 10 years again: Eclipse was an IDE, like Apache was a web server [3]. Today both foundations govern a huge set of projects, far away from their original starting points, and quite a number of them have, in some way, to do with IoT.

So I am really looking forward to next Monday, when I will start a new position in the IoT team of Red Hat. To me this is a great opportunity and a great adventure into the next 10 years. I am sure that a lot will happen in the area of IoT, communication between devices and services. And I appreciate that I can be a part of this.

But I will also miss what I am leaving behind. Not only the software that I made and the things I have accomplished, but, more importantly, the colleagues who became friends.

[1] To be honest, it was 10 years and 3 months ago. Then again the first three months were more preparing than actually doing.
[2] Not sure if this is an improvement, but it is the way it goes.
[3] I know it was more than that back then already, but not as much as it is now.

New version of Maven RPM builder

I just released a new version of the Maven RPM builder. Version 0.6.0 allows one to influence the way the RPM release information is generated during a SNAPSHOT build (also see issue #2).

While the default behavior is still the same, it is now possible to specify the snapshotBuildId, which will then be added as release suffix instead of the current timestamp. Setting forceRelease can be used to disable the SNAPSHOT specific logic altogether and just use the provided release information.
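A sketch of how this could look in the plugin configuration (the parameter names are the ones described above, the XML nesting is my assumption):

<plugin>
    <groupId>de.dentrassi.maven</groupId>
    <artifactId>rpm</artifactId>
    <version>0.6.0</version>
    <configuration>
        <!-- use a stable id instead of a timestamp for SNAPSHOT builds -->
        <snapshotBuildId>build1</snapshotBuildId>
        <!-- or skip the SNAPSHOT specific logic altogether: -->
        <!-- <forceRelease>true</forceRelease> -->
    </configuration>
</plugin>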

Mattermost at Eclipse

About half a year back, Cédric and I started a test to see if Mattermost is a valuable tool for the Eclipse community. Failure was an option, since a new tool should bring some benefit to the community.

The server this instance was running on was sponsored by my current employer, IBH SYSTEMS GmbH. The test was scheduled to be terminated at the end of June. And since I will move to Red Hat in July, we were forced to make a decision:

Mattermost is now a permanent, community supported service of the Eclipse Foundation 🙂 It is hosted on the Eclipse Foundation’s infrastructure, but supported by its community. We also dropped the “-test” from the domain name, so please update your links if you had some. All content was migrated to the new server.

We also set up a new IRC bridge, which bridges the IRC channel #eclipse-dev on Freenode to the Mattermost channel developers-qa.

Cédric and I proposed a talk for EclipseCon Europe 2016 to show the community what Mattermost is and how it can be used for engaging users and committers.

Happy chatting 😉

Writing RPM files … in plain Java … on Maven Central

A few weeks back I wrote a blog post about writing RPM files in plain Java.

What was left over was the fact that the library was not available outside of Package Drone itself. Although it was created as stand-alone functionality, you would need to fetch the JAR and somehow integrate it into your build.

With the recent release of Package Drone 0.13.0 I was finally able to officially push the module to Maven Central.
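The coordinates should be along these lines (the artifact ID is an assumption on my side, better search Maven Central for “packagedrone”):

<dependency>
    <groupId>org.eclipse.packagedrone</groupId>
    <artifactId>org.eclipse.packagedrone.utils.rpm</artifactId>
    <version>0.13.0</version>
</dependency>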


In the meantime I worked on a Maven RPM builder plugin, which allows creating RPM files on any platform. The newest version (0.5.0), which already uses the new library, has been released today as well.

So working with RPM files just got a bit easier 😉

Building RPMs on any platform with Maven

On several occasions I had to build RPM packages for installing software. In the past I mostly did this with a Maven build using the RPM Maven Plugin.

The process is simple: at the end of your build you gather up all resources, try to understand the mapping configuration, bang your head a few times in order to figure out a way to work with -SNAPSHOT versions, and that’s it. In the end you have a few RPM files.

The only problem is that the plugin actually creates a spec file and runs the rpmbuild command line tool, which is, of course, only available on RPM-like systems. Fortunately Debian/Ubuntu based distributions, although they use a different package format, at least provide the rpmbuild tool.

On Windows or Mac OS the situation looks different: adding rpmbuild to Windows can be quite a task. Still, the question remains why this should be necessary at all, since Java can run on all platforms.

So it was time to write a Maven plugin which does not use the rpmbuild tool, but creates RPM packages natively in Java:

de.dentrassi.maven:rpm is a Maven plugin which creates RPM packages using plain Java. The process is simple and fast and does not require any additional command line tools. The plugin is open source and the source code is available on GitHub: ctron/rpm-builder.
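A minimal sketch of pulling it into a build (the goal name is an assumption, see the plugin documentation for the real configuration):

<plugin>
    <groupId>de.dentrassi.maven</groupId>
    <artifactId>rpm</artifactId>
    <version>0.5.0</version>
    <executions>
        <execution>
            <goals>
                <goal>rpm</goal>
            </goals>
        </execution>
    </executions>
</plugin>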